From Online Dictionary of Crystallography
Groupoïde (Fr). Gruppoid (Ge). Grupoide (Sp). Gruppoide (It). 亜群 (Ja).
A groupoid (G,*) is a set G with a law of composition * mapping a subset of G x G into G. The properties of a groupoid are:
- if x, y, z ∈ G and if one of the compositions (x*y)*z or x*(y*z) is defined, so is the other and they are equal (associativity);
- if x, x' and y ∈ G are such that x*y and x'*y are defined and equal, then x = x' (cancellation property);
- for all x ∈ G there exist elements e_x (left unit of x), e'_x (right unit of x) and x⁻¹ ("inverse" of x) such that:
- e_x * x = x
- x * e'_x = x
- x⁻¹ * x = e'_x.
From these properties it follows that:
- x * x⁻¹ = e_x, i.e. e_x is a right unit for x⁻¹,
- e'_x is a left unit for x⁻¹,
- e_x and e'_x are idempotents, i.e. e_x * e_x = e_x and e'_x * e'_x = e'_x.
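The defining feature here — a composition defined only on a subset of G x G — can be made concrete with the "pair groupoid", a standard example of a Brandt groupoid that is not part of the dictionary entry. A minimal Python sketch:

```python
# Pair groupoid on a set S: elements are ordered pairs (i, j), and the
# composition (i, j) * (j, k) = (i, k) is defined ONLY when the inner
# indices match -- a partial operation, as in Brandt's definition.
# (This example is illustrative; it does not appear in the entry.)

S = {0, 1, 2}
G = {(i, j) for i in S for j in S}

def compose(x, y):
    """Partial composition: defined only when x's right index equals y's left index."""
    if x[1] != y[0]:
        return None  # composition undefined
    return (x[0], y[1])

def inverse(x):
    return (x[1], x[0])

# For x = (i, j), the left and right units are e_x = (i, i) and e'_x = (j, j):
x = (0, 2)
ex, exp = (x[0], x[0]), (x[1], x[1])
assert compose(ex, x) == x            # e_x * x = x
assert compose(x, exp) == x           # x * e'_x = x
assert compose(inverse(x), x) == exp  # x⁻¹ * x = e'_x
assert compose(x, inverse(x)) == ex   # x * x⁻¹ = e_x   (derived property)
assert compose(ex, ex) == ex          # units are idempotent
```

Note that each element has its own left and right units — unlike a group, where a single identity serves every element.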
The concept of groupoid as defined here was introduced by Brandt (1927). An alternative meaning of groupoid was introduced by Hausmann & Ore (1937) as a set on which binary operations act but neither the identity nor the inversion are included. For this second meaning nowadays the term magma is used instead (Bourbaki, 1998).
- Bourbaki, N. (1998). Elements of Mathematics: Algebra 1. Springer.
- Brandt, H. (1927). Mathematische Annalen, 96, 360-366.
- Hausmann, B. A. and Ore, O. (1937). American Journal of Mathematics, 59, 983-1004.
AS REPORTED on The Gazette website, parents are being urged to look out for signs of scarlet fever following a sharp increase in the number of cases of the highly contagious disease.
Below is information from the NHS about the symptoms of scarlet fever and what to do if you suspect your child has caught the disease.
The bacterial illness causes a distinctive pink-red rash that feels like sandpaper to touch.
It may start in one area but soon spreads to many parts of the body, such as the ears, neck and chest. It may be itchy.
Other symptoms include a high temperature, a flushed face and a red, swollen tongue.
Symptoms of scarlet fever usually develop two to five days after infection, although you will be contagious before showing signs of the illness.
When to seek medical advice
See your GP as soon as possible if you suspect you or your child has scarlet fever.
Your GP can usually diagnose scarlet fever by looking at the characteristic rash and other symptoms, although a sample of saliva will need to be taken from the back of the throat (throat swab) and tested in a laboratory to confirm the diagnosis.
There's no evidence that catching scarlet fever when pregnant will put your baby at risk. However, heavily pregnant women should tell healthcare staff if they have been in contact with someone who has scarlet fever.
How it spreads
Scarlet fever is extremely contagious and can be caught by:
• breathing in bacteria in airborne droplets from an infected person's coughs and sneezes
• touching the skin of a person with a streptococcal skin infection
• sharing contaminated towels, baths, clothes or bed linen
It can also be caught from carriers - people who have the bacteria in their throat or on their skin but do not show any symptoms.
Who is affected
Anybody can catch scarlet fever, but it usually affects children aged two to eight years old. As it is so contagious, scarlet fever is likely to affect someone in close contact with a person with a sore throat or skin infection caused by streptococcus bacteria. It may also affect someone who lives in a crowded environment, such as day care.
Between 2,000 and 4,000 cases of scarlet fever are diagnosed each year in England.
How it is treated
Scarlet fever used to be a very serious disease, but most cases today are mild. This is because scarlet fever can easily be treated with antibiotic tablets. These must be taken for 10 days, even though most people recover after four to five days.
With proper treatment, further problems are very unlikely. However, there is a small risk of the infection spreading to other parts of the body, such as the ear, sinuses and lungs.
Read more about treating scarlet fever and the complications of scarlet fever.
Preventing it from spreading
If your child has scarlet fever, do not let them go to school and keep them away from other people until they have been on a course of antibiotics for at least 24 hours.
All tissues and cloths that someone with scarlet fever has coughed or sneezed into should be washed or disposed of immediately. Wash your hands thoroughly with soap and water if you have touched any of these.
Avoid sharing contaminated eating utensils, cups and glasses, clothes, baths, bed linen or towels. | <urn:uuid:278d2282-fe5d-404c-8bc2-d3523334d005> | CC-MAIN-2016-26 | http://www.basingstokegazette.co.uk/news/11167173.How_to_spot_scarlet_fever/?ref=rss | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393518.22/warc/CC-MAIN-20160624154953-00191-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.961119 | 693 | 3.015625 | 3 |
Guerra Junqueiro, Abílio (äbēˈlyō gĕrˈə zhŏnkāˈrō), 1850–1923, Portuguese poet. A revolutionary, he wrote violent satiric poems attacking conservatism, romanticism, and the Church. Typical are A Morte de D. João (1874) and A velhice do Padre Eterno (1885). He later turned to writing simple, touching lyrics of rural life, as in Os simples (1892).
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved. | <urn:uuid:f5d6f8cd-67f4-44e2-845d-023d41a067ee> | CC-MAIN-2016-26 | http://www.factmonster.com/encyclopedia/people/guerra-junqueiro-abilio.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396887.54/warc/CC-MAIN-20160624154956-00106-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.82267 | 139 | 2.578125 | 3 |
TheMathPage: An Approach to Calculus
Library Home || Full Table of Contents || Suggest a Link || Library Help
|Lawrence Spector; Borough of Manhattan Community College, CUNY|
|An introduction to real numbers, including such topics as: continuous versus discrete; limits; continuous functions; the "limit" infinity (∞); the derivative; the chain rule; velocity and rates of change; maximum and minimum values; derivatives of trigonometric functions, inverse trigonometric functions, exponential and logarithmic functions; evaluating e; and the appendix, What is a number?, on the existence of the real numbers. Includes questions for checking understanding, with solutions.|
|Levels:||High School (9-12), College|
|Resource Types:||Problems/Puzzles, Articles|
|Math Topics:||Calculus (Single Variable)|
© 1994- The Math Forum at NCTM. All rights reserved. | <urn:uuid:c6f4f2dd-8a2a-4d6d-bdf9-c2e8478700fc> | CC-MAIN-2016-26 | http://mathforum.org/library/view/69837.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393463.1/warc/CC-MAIN-20160624154953-00159-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.757157 | 197 | 2.890625 | 3 |
Like a jeweler hunting for a flawless diamond, for years, researchers have pursued purity in the niobium material used to make superconducting radiofrequency (SRF) cavities. But a team at Fermilab recently discovered that adding a bit of impurity may actually improve cavity performance.
This year, a team led by Fermilab scientist Anna Grassellino found that baking niobium SRF cavities at high temperatures with gasses such as nitrogen or argon caused the cavities’ quality factors (Q) to double or triple and to increase with gradients, a phenomenon opposite to what had been seen in the past. Grassellino thinks this phenomenon may be due to nitrogen or argon atoms — impurities — that diffuse into the niobium. Introducing the impurities through such a bake may provide a way for ILC-type cavities to reach high Q at high gradient—enabling more cost-effective accelerators.
“It’s kind of interesting because, for many years, SRF has chased the idea of purity,” Grassellino said. “While it is true that purity of the niobium bulk is crucial for reaching high gradients, we are now discovering that you need a little bit of dirt at the very surface of the niobium cavity — the first tens of nanometres — to actually improve the cavity quality factor.”
Q refers to how well a cavity stores energy. If an SRF cavity has a higher Q, it requires less radiofrequency power to achieve a certain gradient. Gradient refers to the energy transferred to a particle over a certain distance. High Qs help accelerators save on costs needed for the machine’s cryogenics, or refrigeration, system, which is used to dissipate heat created by radiofrequency power. High gradients reduce costs by keeping accelerators compact.
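The scaling behind these cost arguments can be sketched numerically. In the standard circuit model, the power dissipated in the cavity walls is P = V² / ((R/Q)·Q), so at a fixed accelerating voltage the cryogenic load falls in inverse proportion to Q. The cavity length and R/Q value below are illustrative assumptions typical of ILC-style elliptical cavities, not figures quoted in the article:

```python
# Back-of-envelope sketch of why a higher quality factor Q cuts the
# radiofrequency power a cavity dissipates into the cryogenic system.
# ASSUMPTIONS (ours, not the article's): active length ~1.04 m and
# R/Q ~= 1036 ohms, roughly typical of TESLA/ILC-style 9-cell cavities.

def dissipated_power(gradient_mv_per_m, q0, r_over_q=1036.0, length_m=1.038):
    """Power (watts) dissipated in the cavity walls: P = V^2 / ((R/Q) * Q)."""
    v_acc = gradient_mv_per_m * 1e6 * length_m  # accelerating voltage, volts
    return v_acc**2 / (r_over_q * q0)

# Same 20 MV/m gradient, lower vs. higher Q:
p_low = dissipated_power(20, q0=1.5e10)
p_high = dissipated_power(20, q0=3.5e10)
print(f"Q = 1.5e10: {p_low:.1f} W   Q = 3.5e10: {p_high:.1f} W")
# Raising Q lowers wall losses -- and hence cryoplant load -- in
# inverse proportion: P * Q is constant at fixed gradient.
```

Under these assumptions, more than doubling Q cuts the 2-kelvin wall losses by the same factor, which is the refrigeration saving the article describes.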
In the past, cavity Q has been known to decrease as gradient is increased — a limitation known as medium-field Q slope. But Grassellino found that baking SRF cavities at 800 to 1000 °C with nitrogen or argon causes Q to increase as gradient is increased – basically a reversed medium-field Q slope.
Grassellino came across this finding unexpectedly: She was hoping to use the high-temperature bake with nitrogen gas to convert niobium cavities to niobium nitride cavities. Instead of bonding with the niobium to form a new compound, the nitrogen atoms appeared to enter the cavity wall and take up residence in empty spaces in the niobium’s crystal structure — basically just foreign atoms in an otherwise niobium cavity. Grassellino saw the same effect when she baked cavities with argon gas.
Surprisingly, the impurities have a beneficial effect: The addition of foreign atoms appears to lower the surface resistance of the cavities, which in turn increases Q.
“As often occurs in the physics world, we accidentally found a cure for medium-field Q slope,” said Grassellino, who recently published a paper on the finding.
Of several single-cell, ILC-type cavities that Grassellino put through the bake, each reached a Q ranging between 3.5 × 10¹⁰ and 7 × 10¹⁰ at a temperature of 2 kelvins and a gradient of around 20 megavolts per metre. These Qs are two to three times higher than those of cavities treated with the standard ILC procedure. The same results were also achieved on a nine-cell cavity.
Increasing Q by adding impurities will be a big help for machines in which refrigeration costs are the main concern, according to Hasan Padamsee, former group leader of SRF at Cornell University.
“This treatment would have a big impact on a continuous-wave machine like LCLS-II,” Padamsee said, referring to the upgrade of SLAC’s Linac Coherent Light Source, a powerful X-ray laser. This type of machine requires high Q at medium accelerating gradients to keep refrigerator size, and costs, low.
“The gas-bake process is currently the best candidate treatment for achieving a Q high enough that LCLS would need only one cryoplant to cool it,” Grassellino said.
While achieving high gradient provides the most cost savings for a machine like the ILC, achieving both high gradient and high Q would be even better. In addition to refrigeration savings, increasing Q could lower ILC production costs by increasing cavity yield – or the fraction of cavities that meet the minimum Q – according to Anthony Crawford, a physicist in Fermilab’s Superconducting Materials Department.
“If the cavity Q is too low, then the cavity must be reprocessed or eliminated from the production stream,” Crawford said. Also, higher Q – less power dissipation – means lessening heating effects, which could significantly improve the cavity yield, thereby lowering production costs for the ILC linacs. Adding impurities at the very surface might turn out to be the key to do it.
Grassellino, together with Alexandr Romanenko from Fermilab’s SRF Materials Department and the lab’s Cavity Processing Group, led by Allan Rowe, are already working toward optimised recipes by playing with bake temperatures, times and partial-pressure regimes. A collaboration working toward the LCLS-II optimised recipe is ongoing. Matthias Liepe at Cornell University and Charlie Reece at Jefferson Lab lead those institutions’ participation in this effort.
“These findings already provide a better processing recipe for the many continuous-wave accelerators planned worldwide,” Grassellino said. “Once we optimise the bake parameters, we might score and basically end up with a recipe for high Q at even higher gradients, which might become the new recipe of preparation for the ILC cavities and other projects at the forefront of SRF technology.” | <urn:uuid:2b24234b-a8fa-49b2-8b21-3928381162b2> | CC-MAIN-2016-26 | http://newsline.linearcollider.org/2013/11/21/a-little-dirt-never-hurt/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399117.38/warc/CC-MAIN-20160624154959-00122-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.941006 | 1,259 | 3.03125 | 3 |
The Old Testament book of Job can be mysterious, exhausting, and frustrating. Yet, for millennia, readers have also drawn comfort and hope from the story of Job’s extreme suffering.
For those who are unfamiliar with it, briefly tell the story of Job.
Rev. Ash: The book of Job is one of the most astonishing books in the world. We don’t know when or by whom it was written. It tells a true and deep story, the story of Job, an upright and righteous man (Job 1:1,8; 2:3) who trusted God. He was a very great man (1:3). And yet quite suddenly he suffered the loss of all his wealth and possessions, all his children, and his health (1:6-2:10). After this catastrophe, Job has long debates with his three so-called “comforters” (chapters 4-26) about what is going on and why. Job then sums up his case (chapters 27-31). After that he hears two answers; the first is from a man called Elihu (chapters 32-37), the second (in two parts) from God himself (chapters 38-41). The book ends with Job’s final response to God (42:1-6), God’s verdict on the debates (42:7-9), and God’s final vindication and restoration of Job (42:10-17).
Why are you convinced that the book of Job “makes no sense apart from the cross of Christ”? And what is the “wisdom of the Cross” that your book’s subtitle speaks of?
Rev. Ash: If you believe in any kind of justice, this story would seem to contradict your beliefs. For in it a man who does not deserve to suffer finds himself suffering intensely and deeply. Read on its own it would seem to be, as someone has put it, “the record of an unanswered agony.” Job’s “comforters” can only make sense of it by supposing that Job is a secret and wicked sinner (e.g. 22:5). But we, the readers, know this is not true (1:1,8; 2:3; 42:7). In their world, good things only happen to good people and bad things only to bad people (e.g. 8:3,4). The Cross shows that at the heart of history there is undeserved suffering that makes possible undeserved blessing; that because a righteous man suffered, unrighteous people like us can experience mercy and grace. This is the wisdom of the Cross (1 Cor.1:18-2:5). Job foreshadows this great truth.
Briefly explain the three big questions you say Job raises.
Rev. Ash: A. What kind of world do we live in, and how is it governed? The most common answers are either that God runs it (full stop: what he says goes) or that it is chaotic, perhaps with a multitude of powers, gods/goddesses/spirits—call them what you will. The Bible’s answer is that God runs it entirely, but does so through the intermediate agency of a variety of supernatural powers, some of which are evil. This is a deep truth and one that Job explores, how God can govern the world making use in some strange way of evil to do it, without himself being tainted by evil.
B. What kind of Church should we want? The biggest dangers to church life worldwide are the “prosperity gospel” (if I follow Jesus, God will make me rich and healthy) and its close cousin the “therapeutic gospel” (if I follow Jesus and already have wealth and health, then Jesus will also make me feel good about myself). Job pulls the rug out from under both these distortions.
C. What kind of Savior do we need? Only the perfect obedience and suffering of Jesus Christ can bring grace to a needy world. Job opens up this truth perhaps above all.
Why is the book of Job so long? And why is most of it poetry?
Rev. Ash: Deeply to grapple with God in a messed-up world takes time. We cannot tidily sum up the message of Job on a postcard or in an SMS or Tweet. We need to let these truths soak into our souls and engage with us in our real human experience; there is no shortcut for that. Poetry touches us in our emotions, our feelings, our affections, our delights and aversions. We need to read it and hear it aloud to let God get to work on us through it. Beware the desires to summarize it, rush it, get through it quickly so we can get on to the next thing, boil it down to tidy propositions! You have not engaged with Job until, for example, you have been moved to tears by his lament in chapter 3.
What can we learn and model from Job’s perseverance?
Rev. Ash: Writing to suffering Christians, James encourages them and us to wait patiently for the return of the Lord Jesus; he says, “You have heard of the steadfastness of Job” (James 5:7,11). As we walk with Job through his trials, we watch as Jesus perseveres through his; and, by the Spirit of Jesus in our hearts, we are enabled the better to walk through our own troubles with patient faith.
Why do you say the book of Job is not fundamentally about suffering? Then what is it about? And how does it foreshadow Jesus?
Rev. Ash: Like every Bible book, Job is most deeply a book about God and specifically about Jesus Christ, the righteous man who suffers unjustly and is finally vindicated by his Father. It is a mistake to think the book speaks simply to human suffering as a universal experience; for the central character who suffers is very far from a typical or universal human being; he is conspicuously great, exceptionally upright, and definitively righteous. Job in his extremeness foreshadows Jesus in his uniqueness. It is therefore only about us if we are indwelt by the Spirit of Jesus and enter into some share of the sufferings of Christ (e.g. Col.1:24). And yet it is about us as believers in Christ; for Satan still demands to sift disciples like wheat to prove our genuineness (compare Job 1:8-11; 2:4,5 with Lk.22:31) and in the end our genuine faith will redound to the glory of God (1 Peter 1:7).
What do you recommend as a good way for people to experience Job?
Rev. Ash: I have four suggestions. First, that preachers have a go at longer sermon series on Job, perhaps 10 sermons rather than the skimpy two or three that some offer! Second, that individual Christians read Job aloud to themselves. Aloud is important, so you cannot read too fast and you cannot skim. If you find the language too inaccessible, try a vivid paraphrase like The Message. Third, you could try reading gradually through Job aloud with a small group. Fourth, you could try reading slowly through Job using my book as a friendly guide!
Is there anything else you’d like to say?
Rev. Ash: It may seem strange to say that I love the book of Job, given that it is so dark and intense. And yet I do. I find that immersing myself in it helps me appreciate more deeply the love of my Savior, the misery of being a sinner in a world under God’s curse, and the wonder of the Christian hope. I hope you will find the same.
Bio: Christopher Ash works for the Proclamation Trust in London as director of the Cornhill Training Course. In addition to serving on the council of Tyndale House in Cambridge, he’s the author of several books, including Out of the Storm: Grappling with God in the Book of Job and Teaching Romans. He’s married to Carolyn; they have three sons and one daughter.
HAMPTON: A CENTURY OF TOWN AND BEACH, 1888-1988
Chapter 8 -- Part 1
Protecting and selling a town treasure
A correspondent, writing in a June 1874 Portsmouth newspaper, commented, "The beach seemed to be held by the ancients for the benefit of the public ..... Recently the Town had ceased to exercise its authority and control and has allowed numerous 'squatters' to erect and open buildings of all descriptions and for all kinds of purposes which in the end, if not checked soon, will tell much against the town and peace and be ruinous to the five or six hotels located at that beautiful retreat where now numerous persons go in the hot summer months for health, pleasure and comfort."
This situation continued until October 1878, when the town meeting voted to authorize the selectmen to remove some 53 "well-built" houses constructed on sites that it was believed belonged to the Town. A February 1879 editorial in the Haverhill Bulletin questioned the Hampton vote:
"..... We fail to see why she [Hampton] should object to people making her seashore a place of retreat during the summer months. A town usually courts immigration, and most communities invite others to come and settle in their midst. But Hampton would drive people out of town. They seem to prefer that the citizens of Haverhill go elsewhere to spend the summer months. Their wishes doubtless will be gratified and the course they are taking will induce people to go to other resorts, and Hampton Beach will become what seems to be so much desired -- "a howling wilderness."
As agent for the Town in the suits against beach squatters, Uri Lamprey responded to the "attack" from the Haverhill Bulletin,
There is no truth to the assertion that Hampton is opposed to people coming to the beach; neither by vote has the town shown any such disposition. The town claims to own all of the territory known as the beach, and all records in relation to the same go on to prove or substantiate the claim of the town to the beach; the town has full police control over it, subject only to the laws of the State and those of custom. All the town is trying to do is to have the question of ownership settled by the court. There is no disposition to drive people away, as long as they behave well.
The regular 1879 town meeting refused to annul the previous vote and the cases were pursued through the courts for years until the Town was successful in its claim to the Beach, primarily that area on the shore from the North Hampton town line to Winnacunnet Road and from the corner of Ocean Boulevard and Ashworth Avenue to the river. General Gilman Marston, an Exeter attorney, first represented the Town. After his death, Edward G. Eastman pursued the matter in court for another decade. In 1888, the [Exeter] News-Letter said that "The handsome French roof cottage of Mr. Scott ... has been sold and is being moved from our beautiful South Beach to Hampton Falls" as a consequence of a state Supreme Court decision against the squatters. By June 1894, the matter must have been finally resolved, because the News-Letter reported that "Squatters at the lower end of the beach are quite generally moving to land purchased of Mr. Oliver Nudd [the vicinity of today's Nudd Avenue]. Eight buildings have been moved since fall. To a few of the squatting cottagers the town has leased their holdings." In that year, Laura Bradley, William P. Tucker, and George M. Cook paid a total of $35 for "rent of land at beach," according to the town report. This is the first mention of lots being rented at Hampton Beach, and apparently the idea of leasing large sections of land must have appealed to the selectmen and to residents, especially after 1898, when the Hampton Beach Improvement Company began successfully to sublet its holdings on what is now the main beach. About the turn of the century, when squatters became especially numerous in the newly formed White Rocks Island section, the Town was again in court, losing to Frank Beckman in 1907, but winning its cases against the other squatters in the area. In 1905, Curtis DeLancey was paid $50 for "demolishing buildings at the beach," apparently structures the Town ordered owners to remove but that the latter failed to do.
The 1901 town meeting indefinitely postponed a warrant article that would have granted a 99-year lease for the land between the Causeway (Winnacunnet Road) and the North Hampton line to Wallace Lovell's Granite State Land Company, the same firm that developed Seabrook Beach. The success of the Hampton Beach Improvement Company had apparently convinced voters that if the Town was going to lease land, it had better do so as a Town project rather than allow a private company to do it. Therefore, in 1909, the town meeting voted to survey the North Beach section and to separate it into lots for lease, with the first option going to abutters, who owned the lots west of King's Highway on North Beach, lots that were originally set off in 1747 when this area was known as Huckleberry Flats. The Town paid $566 for the surveying and three abutters elected to lease adjacent land. E. G. Cole leased six blocks for 50 years at $600 per year, businessman Irving Powers and Life-saving Station Chief B. F. Smart each leased one block for $100 per year.
In June 1911, the Union reported, "A dozen cottages will be erected in the Pines section of town-owned land, first year of its development." The 1911 town report listed some $1,400 in Beach rental income, most of that from White Rocks Island. In 1912, the Town received $1,938 from the rental of one North Shore lot, four at Plaice Cove, 18 at the Pines, and 46 at White Rocks Island, while another 21 people owed back rent on a total of 29 lots. The 1914 town report carried an inventory of Town-owned Beach land: 75 lots at White Rocks Island, 156 at the Pines, 174 Plantation lots, 44 at Plaice Cove, and 24 on the North Shore. The leased land was valued at $131,700, of which $10,000 was the Hampton Beach Improvement Company leasehold. The 1914 income, including the HBIC rent of $500, was $3,279.20. Renters paid $8 per lot at White Rocks Island and up to $25 for oceanfront lots at the Pines. By 1916, the town report showed 305 Pines lots (which must have included White Rocks Island), 176 Plantation lots, 50 at Plaice Cove, and 30 on the North Shore. In 1917, the Town received $3,700 in rents; some $679 was due the Town. For some reason, the names of the three men who first had agreed to rent large blocks at the Plantation are not included among the names of those paying rent.
In July 1914, the previously mentioned Newcomb case was reopened and heard by the State Supreme Court. Charles R. Newcomb claimed ownership of about 18 acres of land, including most of White Island, based on claims and grants going back to 1714. The land had been purchased five years earlier by Newcomb from the Granite State Land Company. This was the White Rocks Island section on which the Town claimed ownership, and leases had been paid to the Town since 1907. A referee ruled in favor of the Town.
The December 1916 souvenir edition of the Hampton Union included maps of many leased-land parcels, including White Island Point (lots $8 per year), Plaice Cove ($20 for oceanfront, $10 for back lots), North Shore and Plantation (lot rentals $25, $20, $10). The latter section was originally laid out in 1747, "adjacent to the site of the first house erected at beach about 1800. At that time and for long since considered worthless. It [the Plantation section] is now the most valuable asset the town owns."
The 1919 report carried the lengthy "Report of Court in Mitchell Case," one of the more interesting Hampton public documents. It contains much information about the ownership of private and public lands along North Beach. George E. Mitchell, who owned some 18 acres of land off King's Highway, claimed that his parcel extended across the highway to the ocean. The land in question had been purchased by Mitchell in 1889 from S. H. Dumas, who had purchased it from the heirs of David Nudd. For some reason, although the Town began to rent lots in the area in 1909, Mitchell did not contest the Town's ownership of the land until 1913. As mentioned earlier, in 1896 the selectmen had erected a line of stone posts along the west wide of King's Highway, this line being also approximately the location of an ancient fence that had been built sometime in the eighteenth century. This line, the Town said, was the western boundary of land the Town owned between the line and the ocean, extending from Winnacunnet Road to the North Hampton border.
The case was heard by a referee, who ruled that although many of the deeds Nudd gathered to create his property in the area included some variation of the words easterly to the sea, Nudd and the other previous owners of the plots of land apparently understood the term to mean that their property was easterly bounded by the sandhill land that bordered the sea. E. G. Cole, who was the north abutter of Mitchell, testified that his land ended at the King's Highway. The court document recounts a long list of public actions taken at various town meetings, beginning in the early 1700s, at which the Town reinforced its claims to the land or the beachfront. Also mentioned are many successful efforts by the Town to remove squatters from the land. Called to testify, longtime selectman Joseph B. Brown told of several occasions when he, on the advice of a lawyer, had sent wagons to the Beach to remove the lumber and other construction materials from sites where people were attempting to build on Town land. Except for those illegal building efforts, Brown said he had never observed anyone attempting to occupy any of the land west of King's Highway.
Taking into consideration all the documents and testimony presented in evidence, the referee ruled that, since the Town had proceeded to survey the land in question and to lease the lots, and Mitchell had not attempted any occupation of the land since its purchase from Dumas, the Town did have title to the land under the original grant to the proprietors.
A few years later, the Town acted on another case of squatters at the Beach. During the 1850s, people began to erect summer bathhouses at the shore, both along the main beach and at North Beach, just south of what became the Coast Guard station site. At first, these were just simple structures, perhaps similar to gazebos, intended for use during the day as places for people to sit, enjoy lunch, and change their clothes. The largest group of summer bathhouses was at North Beach, and by 1907 the structures were considered so unsightly that the town meeting instructed the selectmen "to see that the bathhouses at North Beach be put in good condition, painted, put on proper foundations, &c; these improvements to be made at the owners' expense; also that they be instructed not to issue permits for the building of any houses in that locality except for strictly bathing purposes."
There is no indication in the newspapers that any improvements were made at that time. In 1925, the town meeting voted to instruct the selectmen to remove the bathhouses south of the Coast Guard station by the end of the year; and also to authorize the selectmen to build a public bathhouse there or to make arrangements with a private party to do so. Apparently the selectmen failed to follow the wishes of the meeting because at the 1926 meetings, following a lively debate, residents voted to rescind the 1925 action, allowing the bathhouses to remain. After the State acquired the beachfront in 1933, the bathhouses finally were removed.
By 1930, the value of leased-land rents was up to $8,300; in 1940, it was $9,600; in 1950, it totaled $11,900; and the 1960 rental income for 413 lots (at rents of $15 to $150 per lot) was $18,900. Following a study in 1961, Town officials determined that Hampton was getting only a 2.9 percent return on the land value, which was estimated to be worth $1.6 million if the land were sold. Since private investors would consider a 10 percent return adequate, the Town decided to increase its rental fees, choosing to set a return of 4.5 percent of value. Changes could not be made for existing leases, but new leases were subject to higher fees ranging from $150 per year on North Beach to $105 per year for rear lots in the Plantation and in the Pines section. Over 10 years, as new leases were written, this increase was calculated to generate another $26,000 in rental income.
This move by the selectmen prompted a successful citizens' petition at the 1962 town meeting that authorized the selectmen to appoint a committee to study the possible sale of the leased land, except for the Hampton Beach Improvement Company leasehold. The Committee issued a questionnaire to nearly 400 lessees. According to study committee chairman John Long, out of 191 responses, 133 renters wanted to buy their lots. However, the renters expressed cries of disbelief at a public hearing in January 1963 when Town Manager Kenneth Boehner suggested that the selling prices could be $5,000 for oceanfront and $3,000 to $4,000 for back lots. Many questioned whether or not they would be forced to buy their lots arguing that they had a deal with the Town and it should not be changed. Howard Page, Jr., who had initiated the petition article and wanted to buy his lot, said that the lease payment was like another tax and that the amount could be doubled over the next 10 years. He said that many banks were reluctant to give mortgages on buildings for which the owners only leased their lots.
The committee report to the 1963 town meeting recommended against the sale of the leased land, listing five areas of study: (1) the potential for increased income from the new rental fees; (2) positive controls over the land through the leases; (3) the fact that the leased land could not be sold without a vote of the Town; (4) the fact that the terms of the leases did not protect the leaseholders' rights to buy their lots; and (5) the possibility of placing income from the land sale in a trust fund. The committee concluded that Town controls over the leases had a beneficial effect on the Beach, that there would be a rapid change in ownership if the land was sold and the result would be harmful to the Town, and that it would be "highly improbable" that the Town could set a fair price for the lots. The committee believed the Town should not force lessees to buy their lots, and they explained that the Town did not have the statutory power to set up a trust fund. The committee did recommend that the terms of leases be extended from 10 years to 15 years and that rents be tied to a fair assessment of the land value at the current tax rate. The 1964 town meeting did approve the 15-year leases (in cases of leases expiring after April 1965), and in 1974, without a town meeting vote, the selectmen began offering 25-year leases. The 1964 Precinct meeting adopted a set of recommendations compiled by its own leased-land study committee, which was appointed as a result of the 1963 Precinct meeting. The Precinct's suggestions were incorporated into an article that was defeated at the 1965 town meeting. 
The article would have eliminated the lease provision that allowed the rents to be increased every three and seven years; would have given leaseholders the option of buying their lot based on a price set by a three-arbitrator panel; would have protected leaseholders from being compelled to buy their lots; and would have prohibited an enhancement tax from being added to the building that would increase the property tax.
The 1970 town meeting approved two ballot questions, probably the first time voters had acted upon leased-land questions outside of the deliberative meeting. The first article permitted the appointment of a committee by the moderator "to consider whether the Town and the Hampton Beach Improvement Company could agree upon a plan by which persons holding lots under the Improvement Company might acquire rights extending beyond the term of the Town lease to the Improvement Company; and to report to the next Annual Meeting, with or without recommendations." The second article gave the selectmen the power to see any leased lots to lessees at a value set by the 1968 property valuation. In April 1971, Mr. and Mrs. Linwood Taylor signed a purchase-and-sales agreement on a North Beach lot, the first to do so under the terms of that latter article, although they did not take title to the land until 1975. The first article proved to be more difficult to resolve. At the 1971 meeting, the committee delayed the completion of its study until the 1972 town meeting.
Also, in 1970 (when land rent income was $55,787), the selectmen announced another increase in rents; this time the annual payment would be equal to the previous year's taxes on the land, since the rent was due May, before the tax rate was set. On one of the first leases to be renewed, the lot rent increased from $125 to $950.
The 1972 town meeting reaffirmed the right of the selectmen to continue to sell the leased lots, but at current market value, and also authorized the selectmen to convey to quitclaim deed 32 lots in the HBIC area that in some way had been quitclaim deeded by HBIC officials. These property owners claimed to have deeds to their lots, paid no rent to the HBIC, and also paid no taxes on the land to the Town. Without further payment to the Town, the owners of the structures on these lots received quitclaim deeds, and tax bills on the land.
The 1974 town meeting acted upon several articles relating to leased land. First residents voted to rescind previous votes giving the selectmen the authority to sell leased land, except for five lease for which sales agreements had previously been signed. Actually, agreements had been reached on only four sales; a fifth agreement with Earl Adams was still being negotiated. A motion to amend the article to just four sales failed on a hand vote. Voters rejected another article that would have allowed the sales to continue, placing the income in a bank account with the interest to be used to offset taxes (the latter provision would have required an act of the Legislature); and approved a third article that prohibited forever (unless changed by vote of a town meeting) the sale of the former Coast Guard station site, a lot at the junction of King's Highway, High Street, and Ocean Boulevard (which was to remain undeveloped for traffic safety reasons, but which has been developed as the free parking lot adjacent to the traffic lights), and all of the easternmost lots at the Pines that had not already been leased.
Prior to the 1974 vote prohibiting the further sale of leased land, the selectmen early in 1973 decided their authority to sell was discretionary and they agreed not to sell any more lots. In January 1975, however, three Pines lots were sold to Earl Adams, who had built the seasonal U. S. Apartments building on the lots a few years earlier. This sale was one of five grandfathered as a result of action at the 1974 town meeting, and it proved to be a controversial decision, especially since Adams's sales agreement had not been finalized before the 1974 meeting. Some people had objected to Adams being given a lease in the first place, since his lots were sand dunes.
By 1976, 234 leaseholds (out of more than 400 leased directly by the Town) had been converted from the old rental rates to the new method of basing the rent on the previous year's tax rate at the current year's valuation. In that year, the Town received nearly $134,000 in rent.
The 1981 town meeting voters were ready to deal with the leased-land problem in a comprehensive manner. One of th problems that faced leaseholders in the HBIC area was the matter of the company's lease, which would expire in 1997. The leaseholders wanted to know what their rights would be after that time. At the meeting, voters began the action that has resulted in the sale of most of the Town-owned lots at the Beach. Article 21 asked the Town to give the selectmen authority again to sell leased lots at the current market value to the lessees, except for the HBIC, and for the sales proceeds to be placed in a trust fund and the interest to be used, "from time to time .... for the purchase, construction, or improvement of real estate for the town." Voters instead approved an amendment, or substitute article, which authorized the moderator to appoint a seven-member leased-land study committee "to study the implications of the Town leased land program, both as pertains to the Hampton Beach Improvement Co. lease and to other town owned leased land. Such committee to report to the town in the 1981 town report for action at the 1982 town meeting." This substitute article continued to prohibit the selectmen from selling any more lots, but a subsequent article authorized selectmen to grant leases based on the Town; reversionary rights in the HBIC land as long as the new leases, which were to be granted only to HBIC tenants, did not extend beyond March 2002.
In its report to the 1982 town meeting, the leased-land study committee (by a vote of 4-3) recommended the sale of the land, called for the creation of a real estate commission, outlined a system for appraisals, called for the sale of the lots at 30 percent of their fair market value (which recognized the lessees' leasehold interest, some of which extended back for generations and others that only began the night before a subsequent special meeting), and suggested that the Town finance the lots at 12 percent interest for terms of no more than 20 years. Leaseholders not wishing to buy their lots could continue to lease. If the Town decided not to sell the lots, then the rent would be equal to the taxes.
The March annual meeting also passed three related articles. The first article adopted a 1975 enabling statue (written by 1970 leased-land study committee member James Fallon, Jr.) that allowed the Town to place lot-sales proceeds in a real estate trust fund. The other two articles asked the local legislative delegation to file two bills, one to amend that 1975 law so the lot-sales principal would have to be held in trust and not used by the Town for real estate construction purposes, and the second to establish a Hampton Leased Land Real Estate Commission. A special town meeting was called for May 1982 to act on the land-sale article itself. (Another article at the regular annual meeting, which was tabled, asked the Town to appropriate $20,000 and again to contest the HBIC lease, this time because the Company had begun charging its tenants a percentage of their liquor sales income as part of the rental payment. Later, some of the Beach businesses in question sued the HBIC and lost.)
Proponents of the sale had strong economic arguments: The 649 lots (including the HBIC lots) had an assessed value of $20.1 million, meaning that if all lots sold at 30 percent of value, the Town would net $6 million for its trust fund. If invested at 12 percent, that fund would give the Town $720,000 in annual income, compared to the $275,000 received in rents at the time. Once the land was sold, the Town would also receive taxes on it, so the trust fund money would be in addition to the current income.
Opponents included many old-timers who believed the Town lands were a trust received from earlier residents and the land should not be sold. The Union also opposed the sale, feeling that perhaps the Town would receive more in rent money as the flat-fee lots were converted to tax-based rents.
At the May special meeting, attended by some 600 voters and leaseholders, the proposed sale of the Town-owned leased land passed, 223-187. The Legislature passed the laws necessary to set up the trust fund and real estate commission the next year. Although generations of residents had acted to protect the Town's rights to the Town-owned Beach land since 1638, a majority of the voters at that meeting decided it was in the best interest of the Town to allow the direct lessees and sublessees of the HBIC to own their lots.
On the night before the special meeting, the Board of Selectmen voted to give leases on eight Pines lots to attorney, leased-land study committee member, and Selectman Brian Doherty and his fiancee. The unimproved lots, merely sand dunes, were on an early town plan, and houses had been located on them at one time. A storm in the early 1920s destroyed the houses, including one owned by Fred Gagne, so the tenants let their leases lapse, making an informal agreement with the selectmen to keep the land open to the public, according to Gagne's daughter, Diana Lamontagne, a former selectman. Those who lost their homes were given lots elsewhere. When residents learned of the leases to Doherty, there was a town uproar, and Doherty, claiming no wrongdoing, returned the leases to the Town. (It will be recalled that in 1974 the Town had voted not to lease any lots in that area of the Beach. As a result of this situation, the eight lots, plus any other formed by accretion, were designated as a perpetual park by town meeting.)
In October 1982, selectmen began signing sales agreements that gave the lessees up to three years to purchase their land, a time period allowed because the Legislature had not yet acted upon the necessary measures to create the real estate trust fund. As a result of the 1986 town meeting, lessees were given 90 days to purchase their lots after signing a sales agreement. The sales were handled by the real estate commission, whose five members were appointed in May 1983 by the Superior Court from a list of 10 nominees drawn up by the selectmen. Also at the 1986 meeting, residents voted to suspend all sales in the HBIC area until the HBIC lease expired in 1997. The sale of HBIC lots was approved again at the 1988 town meeting.
As of December 13, 1988, the real estate trust fund totaled $11.9 million, and earned nearly $1 million in interest for the general fund. Outstanding mortgages, on lot sales financed by the Town, equaled $1.5 million. Of 650 available lots, 555 lots (including the Casino lots) had been purchased by their lessees. Although some 63 percent of the HBIC lots were also purchased, the 1986 town meeting voted to stop sales in the HBIC area until after the lease expires in 1997.
The decision to sell the leased land is one of the most important acts made by a Hampton town meeting -- comparable to the decision to grant the original HBIC lease. On the passage of time will determine the ultimate impact of the land sale. | <urn:uuid:d4035296-cb36-4836-a24c-04c3a34ebaca> | CC-MAIN-2016-26 | http://www.hampton.lib.nh.us/hampton/history/randall/chap8/randall8_1.htm | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395039.24/warc/CC-MAIN-20160624154955-00147-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.981591 | 5,688 | 2.625 | 3 |
Chimpanzees used in scientific and medical research
are subjected to a great deal: fed drugs for testing, infected
with disease, and sometimes sedated for the numerous blood
draws they endure.
Ryan yawns while watching onlookers from a metal barrier
behind the living quarters at the Save the Chimps sanctuary
outside Fort Pierce, Florida.
Scientists are questioning the benefits of chimp research,
just as the United States is planning to restart such work
after a 10-year moratorium. And while the usefulness of
performing experiments on chimpanzees is debated, some things
are clear: The animals often suffer mental distress over the
treatment and often have shortened life spans. McClatchy
Newspapers correspondent Chris Adams reports.
Donovan the chimp transformed from a friendly ape who "adapts
well to peers" to one who beat his female cage-mate so
aggressively they had to be separated.
Lira became a "chronic hair-plucker", with large barren
patches on her body.
Bobby bit and mutilated his own arm, leaving permanent scars.
He was so depressed that he slept sitting up, facing the wall
of his cage.
The debate about medical testing on chimpanzees often
revolves around the physical impact on the chimps - week
after week of liver biopsies or year after year of being
infected with HIV or hepatitis.
But an examination by McClatchy Newspapers of the
chimp-research world found that, in addition to a physical
toll, the testing life can have a significant impact on a
chimp's mental state.
For the 180 chimpanzees that live at the Alamogordo
Primate Facility, on an Air Force base in New Mexico, the world
of research looms large: For the past 10 years, they've been
kept out of research; now the National Institutes of Health is
trying to move them to a research facility in Texas, where
they'd be used in studies on hepatitis and possibly other
The science of chimp research is dicey. The United States is
virtually alone in the world in pursuing it, and many
scientists say the chimps' value as a medical model is
declining. Chimps are among humans' closest genetic cousins,
and given their range of emotions and their level of
understanding, researchers themselves afford chimps special
protections that other research animals don't get, even
According to the National Research Council, the public
"expects a high level of respect for the animals", given the
"special connection of chimpanzees to humans".
For the chimps, research can be lonely and debilitating; some
end up with mental ailments including post-traumatic stress
disorder or depression. Sometimes the symptoms will ease once
the testing ends, but sometimes they stick with a chimp for
"Chimpanzees depend on close physical contact. They love
their comforts, and like to stretch out on a nice soft bed of
grass. They make their own choices all the time," chimp
researcher Jane Goodall said.
Bobby holds on to a metal barrier. Photos by MCT.
"None of these things can in any possible way be
experienced by a laboratory chimp. I've been in quite a lot of
medical research labs, and the truth is I wish I hadn't,
because they haunt me."
The researchers who handle the chimps disagree. They say the
chimps are treated well and humanely, oversight panels ensure
that only necessary research is performed on them, and
they're given space to move and play.
John VandeBerg, who oversees the primate facility at the
Texas Biomedical Research Institute, said the chimps were
treated compassionately and that life in the lab was good.
The chimps, he said, even have televisions. They like to
watch animal movies.
The effort to understand the chimps' minds has grown in the
One chimp who helped illustrate the impact of research was
Billy; his story was chronicled in the medical journal
Developmental Psychology in 2009.
Raised as an entertainer - working the birthday party circuit
- Billy lived compatibly with humans and had a strong bond
with his owners before he was given over to researchers at | <urn:uuid:10b2c724-2f8c-4f4a-bd30-d78b0f2b93f5> | CC-MAIN-2016-26 | http://www.odt.co.nz/lifestyle/magazine/165351/consider-chimps | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396147.66/warc/CC-MAIN-20160624154956-00003-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.946239 | 908 | 2.828125 | 3 |
I = \sum_i -p_i\log p_i
is a function of probabilities $p_i$ and although it is often called entropy, it is not the thermodynamic entropy of Clausius (that $S$ from thermodynamics defined through $\int dQ/T$). This is not only because of absence of $k_B$, but also because in order to give $I$ value, one must put in probabilities $p_i$.
No probabilities occur in classical thermodynamics, hence it is not possible to derive the above formula from thermodynamic laws.
However, there is a connection between $I$ and thermodynamic entropy $S$. This connection is: if a system is in equilibrium with reservoir so that it has volume $V$ and average of energy is $U$, a statistical estimation of its thermodynamic entropy $S^*$ (a function of $U,V$) can be calculated as the maximum possible value of $k_BI$ for all possible values of $p_i$ under the imposed constraints (volume is fixed to $V$, average of energy is $U$).
This rule was not, as far as I know, falsified for macroscopic bodies for which it is meant to be used. Why it is valid is not immediately clear.
The information theory comes in when we ask: what is the meaning of $I$ for arbitrary values of $p_i$? The answer it gives is: it is a measure of amount of data that is needed to exactly specify the microstate of the system given those probabilities.
With this interpretation of $I$ the connection can be rephrased in this way:
if a system is in equilibrium with reservoir so that it has volume $V$ and average of energy is $U$, the measure of uncertainty $I$ about the exact microstate given the macroscopic constraints $U$,$V$ is the same function of $U,V$ as thermodynamic entropy divided by k_B.
This relation has been verified for rarified gas and other simple cases and it is simply assumed it holds universally for any macroscopic system in thermodynamic equilibrium. | <urn:uuid:ae8a947d-52d1-4255-abf9-4421fcd1da8d> | CC-MAIN-2016-26 | http://physics.stackexchange.com/questions/44647/physics-based-derivation-of-the-formula-for-entropy | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397111.67/warc/CC-MAIN-20160624154957-00047-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.938151 | 452 | 2.59375 | 3 |
Tue, 11/17/2009 8:17 PM | Lifestyle The Jakarta Post
With only 19 days to go before the critical UN climate summit in Copenhagen, environmental activists are right to be highlighting Indonesia's unenviable position as one of the world's largest emitters of greenhouse gases.
There is no doubt that long term abuse and mismanagement of the nation's forests is the main cause, which is why Greenpeace set up a "Climate Defenders Camp" on the Kampar Peninsula in Central Sumatra three weeks ago.
Burgeoning population pressures, rampant illegal logging and widespread forest fires release millions of tons of carbon dioxide into the atmosphere, which is significantly contributing to the global warming crisis.
When I first flew over the Kampar Peninsula in Riau province eight years ago, massive swathes of natural forest had clearly been destroyed. A complex network of canals had been chiseled out to provide access for illegal loggers. Most of the forest closest to the surrounding water had disappeared, and there was much evidence of devastating slash-and-burn farming.
It looked like a free-for-all haven for encroachment, where anyone could take what they wanted, turning the area into a desolate and barren wasteland.
In addition to being home to endangered animal species such as the Sumatran tiger, the clouded leopard and Wallace's hawk eagle, as well as to many local communities who depend on the forest for their livelihoods, this peninsula is one of the largest remaining peat swamps on the planet.
Peat lands play a major role in regulating global climate, acting as "carbon sinks", storing more carbon dioxide per unit area than any other ecosystem. Their protection is absolutely vital if irreversible climate change is to be averted.
Without groups like Greenpeace, countless threats to our environment would go unchecked. Many such domestic and international nongovernmental organizations relentlessly campaign to improve Indonesia's dire ecological health.
It is therefore unfortunate that this drive to throw a spotlight on the importance of protecting the Kampar Peninsula peat lands has been overshadowed by controversy and confrontation.
One element of the campaign has been to target PT Riau Andalan Pulp and Paper (RAPP), which has government licenses to develop some of the area. RAPP is a leading fiber plantations manager with one of the world's largest pulp and paper mills located in Pangkalan Kerinci, close to Riau's state capital in Pekanbaru.
The group wants to stop the company from operating on the peninsula, claiming that it will further damage the environmentally fragile ecosystem.
RAPP, on the other hand, argues that its activities are helping protect the area and prevent the release of carbon dioxide. Leaving the Kampar peat land forests unmanaged will only accelerate the deforestation and degradation, the company warns. It has commissioned scientific studies and hired experts to safeguard the environmental values of this ecologically sensitive area.
The company's strategy is to create a buffer zone by developing a plantation ring of acacia on the degraded perimeter of the peninsula, based on High Conservation Value assessments by independent third parties.
The company is adamant this is a case where commercial intervention can be positive for the environment. It says its operational plan has been specifically designed to avert greenhouse gas emissions and protect the core area of the remaining ecologically rich natural forest.
Based on the sorry state of this area I witnessed some years ago, it is clear that strong action is long overdue to save the Kampar Cape.
In spite of good intentions, the authorities have often proven powerless to halt the destruction of forests in Sumatra due to their pitifully limited resources.
The nearby Teso Nilo national park is a testament to this impotence, where an area supposedly permanently set aside for conservation, is the scene of scandalous pillaging by illegal logging gangs.
In a country where the public purse is hugely stretched, it makes sense for commercial entities to share responsibility for environmental protection, and to deploy their own resources in combating forest destruction.
Large businesses like RAPP have reputations to safeguard and international markets to protect. Although far from perfect, this company has been implementing a sustainable forest management strategy for many years. And it is Indonesia's only member of the World Business Council for Sustainable Development.
The Worldwide Fund for Nature (WWF) is one of many environmental groups that has been working with RAPP for many years. It has not been shy to criticize some practices when appropriate, but it is prepared to collaborate with companies where conservation needs can be effectively addressed.
Greenpeace is an exemplary fighter in the battle to stop climate change. We cannot afford for conservationists to lose, and it is important that attention is being focused on the connection between peat lands and global warming.
Surely now would be a perfect time for this laudable alarm bell ringer to join other stakeholders in working with RAPP to develop viable, practical solutions that enable environmental and economic interests to be properly balanced.
The Copenhagen climate summit cannot afford to fail. Sound forest management is an essential ingredient for its success. It makes sense for proposed solutions from the business sector to be seriously considered, and embraced where proven to be effective.
Jonathan Wootliff leads the Corporate Accountability practice at the consulting firm, Reputation Partners. He specializes in sustainable development and in building of productive relationships between companies and NGOs. He can be contacted at firstname.lastname@example.org | <urn:uuid:f562e844-1b25-4857-8b85-b19e436b1d66> | CC-MAIN-2016-26 | http://naturealert.blogspot.com/2009/11/protecting-sumatras-peat-lands-vital.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395346.6/warc/CC-MAIN-20160624154955-00095-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.946309 | 1,109 | 2.546875 | 3 |
Gender-specific and gender-neutral pronouns
A gender-specific pronoun is a pronoun associated with a particular grammatical gender, such as masculine, feminine, or neuter, or with a biological gender (or sex), i.e. female or male. Examples are the English third-person personal pronouns he and she.
A gender-neutral pronoun, by contrast, is a pronoun that is not associated with a particular grammatical or biological gender and that does not imply, for instance, male or female. Many English pronouns are gender-neutral, including they (which in certain contexts can also refer to a singular antecedent such as "everyone", "a person", or "the patient").
Many of the world's languages do not have gender-specific pronouns. Others, however – particularly those that have a system of grammatical gender (or have historically had such a system, as with English) – have gender specificity in certain of their pronouns, particularly third-person personal pronouns.
Problems of usage arise in languages such as English, in contexts where a person of unspecified or unknown gender is being referred to but commonly available pronouns (he or she) are gender-specific. In such cases a gender-specific, usually masculine, pronoun is sometimes used with intended gender-neutral meaning; such use of "he" was also common in English until the latter half of the 20th century but is now controversial. Use of singular they is another common alternative, but is not accepted by everybody. Some attempts have been made, by proponents of gender-neutral language, to introduce invented gender-neutral pronouns.
Some languages of the world (including Austronesian languages, many East Asian languages, the Quechuan languages, and the Uralic languages) do not have gender distinctions in personal pronouns, just as most of them lack any system of grammatical gender. In others, such as many of the Niger–Congo languages, there is a system of grammatical gender (or noun classes), but the divisions are not based on sex.
In other languages – including most Indo-European and Afro-Asiatic languages – third-person personal pronouns (at least those used to refer to people) intrinsically distinguish male from female. This feature commonly co-exists with a full system of grammatical gender, where all nouns are assigned to classes such as masculine, feminine and neuter. However, in some languages, such as English, this general system of noun gender has been lost, but gender distinctions are preserved in the third-person pronouns (the singular pronouns only, in the case of English).
In languages with grammatical gender, even pronouns which are semantically gender-neutral may be required to take a gender for such purposes as grammatical agreement. Thus in French, for example, the first- and second-person personal pronouns may behave as either masculine or feminine depending on the sex of the referent; and indefinite pronouns such as quelqu'un ("someone") and personne ("no one") are conventionally treated as masculine. (See Grammatical gender: Gender of pronouns.)
Issues concerning gender and pronoun usage commonly arise in situations where it appears necessary to choose between gender-specific pronouns, even though the sex of the person or persons being referred to is not known, not specified, or (for plurals) mixed. In English and many other languages, the masculine form has traditionally served as the default or unmarked form; that is, masculine pronouns have been used in cases where the referent or referents are not known to be (all) female. This leads to sentences such as:
- In English: If anybody comes, tell him. Here the masculine pronoun him refers to a person of unknown sex.
- In French: Vos amis sont arrivés — ils étaient en avance ("Your friends have arrived – they were early"). Here the masculine plural pronoun ils is used rather than the feminine elles, unless it is known that all the friends in question are female (in which case the noun would also change to amies and the past participle would change to arrivées).
As early as 1795, dissatisfaction with this convention led to calls for gender-neutral pronouns, and attempts to invent pronouns for this purpose date back to at least 1850, although the use of singular they as a natural gender-neutral pronoun in English is much older.
The English language has gender-specific personal pronouns in the third-person singular. The masculine pronoun is he (with derived forms him, his and himself); the feminine is she (with derived forms her, hers and herself); the neuter is it (with derived forms its and itself). These are described in full in the article on English personal pronouns.
Generally speaking, the masculine pronoun is used to refer to male persons (and some higher animals, such as pets); the feminine to refer to female persons and female animals, and sometimes figuratively in referring to such items as ships and countries; and the neuter to refer to inanimate objects and concepts, animals of unspecified or unimportant sex, and sometimes children of unspecified sex. For full details, see Gender in English. For the use of he for referring to a person of unspecified sex, as well as the various alternatives to this convention, see the discussion in the sections below.
The other English pronouns (the first- and second-person personal pronouns I, we, you, etc.; the third-person plural personal pronoun they; the indefinite pronouns one, someone, anyone, etc.; and others) do not make male–female gender distinctions, that is, they are gender-neutral. The only distinction made is between personal and non-personal reference (someone vs. something, who vs. what, etc.)
Historical and dialectal gender-neutral pronouns
In 1789, William H. Marshall records the existence of a dialectal English epicene pronoun, singular "ou": "'Ou will' expresses either he will, she will, or it will." Marshall traces "ou" to Middle English epicene "a", used by the 14th century English writer John of Trevisa, and both the OED and Wright's English Dialect Dictionary confirm the use of "a" for he, she, it, they, and even I. This "a" is a reduced form of the Anglo-Saxon he = "he" and heo = "she".
Baron goes on to describe how relics of these sex-neutral terms survive in some British dialects of Modern English (for example hoo for "she", in Yorkshire), and sometimes a pronoun of one gender might be applied to a person or animal of the opposite gender.
It and one as gender-neutral pronouns
Whereas "he" and "she" are used for entities treated as persons (including supernatural beings and, sometimes, sympathetic animals, especially pets), the pronoun "it" is normally used for entities not regarded as persons, though the use of "he" or "she" is optional for animals of known sex. Quirk et al. give the following example, illustrating use of both "it" and "her" to refer to a bird:
- The robin builds its nest in a well-chosen position ... and, after the eggs have hatched, the mother bird feeds her young there for several weeks.
The pronoun "it" can also be used of children in some circumstances, for instance when the sex is indefinite or when the writer has no emotional connection to the child, as in a scientific context. Quirk et al. give the following example:
- A child learns to speak the language of its environment.
According to The Handbook of Non-Sexist Writing, it is sometimes the "obvious" choice for children. Examples given include
- To society, a baby's sex is second in importance to its health.
but also the more colloquial
- When the new baby comes, it's going to sleep in Lil's room.
"It" may even be used when the child's sex is known. In the following story, the characters refer to the boy-child at the center of the narrative as a "he", but then the narrator refers to it as an "it":
- "He looks like nobody but himself," said Mrs. Owens, firmly. ... It was then that ... the child opened its eyes wide in wakefulness. It stared around it ...
In this case, the child has yet to be developed into a character that can communicate with the reader.
However, when not referring specifically to children, "it" is not generally applied to people, even in cases where their gender is unknown.
Another gender-neutral pronoun that can be used to refer to people is the impersonal pronoun "one". This can sometimes be used to avoid gender-specification issues; however, it cannot normally substitute for a personal pronoun directly, and a sentence containing "he" or "she" would need to be rephrased, probably with change of meaning, to enable "one" to be used instead. Compare:
- Each student should save his questions until the end.
- One should save one's questions until the end.
In everyday language, generic you is often used instead of one:
- You should save your questions until the end.
It may be that forms of the pronoun he had been used for both sexes during the Middle English and Modern English periods. "There was rather an extended period of time in the history of the English language when the choice of a supposedly masculine personal pronoun (him) said nothing about the gender or sex of the referent." The use of he to refer to a person of unknown gender was prescribed by manuals of style and school textbooks from the early 18th century until around the 1960s. An early example is Anne Fisher's 1745 grammar book "A New Grammar". Older editions of Fowler also took this view.
- The customer brought his purchases to the cashier for checkout.
- In a supermarket, anyone can buy anything he needs.
- When a customer argues, always agree with him.
This may be compared to the use of the word man for humans in general ("human being" was in fact the original sense of the word man in the Germanic languages, much as the Latin word for a human in general, homo, came to mean "male human" in most of the Romance languages, displacing vir, the Latin word for an adult male).
- "All men are created equal."
- "That's one small step for [a] man, one giant leap for mankind."
- "Man cannot live by bread alone."
While the use, in formal English, of he, him or his as a gender-neutral pronoun has traditionally been considered grammatically correct, such use may also be considered a violation of gender agreement. It has also been seen as prejudicial by some, as in the following cases:
- The Massachusetts Medical Society effectively blocked membership of female physicians on the grounds that the society's by-laws used the pronoun he when referring to members.
- The Persons Case, the legal battle over whether Canadian women counted as legal persons eligible to sit in the Senate, partially turned on use of "he" to refer to a (generic) person qualified to be a senator.
Its use in some contexts has also been ridiculed, or criticized as absurd or "silly":
"... everyone will be able to decide for himself whether or not to have an abortion."— Albert Bleumenthal, N.Y. State Assembly (cited in Longman 1984), as quoted in Merriam-Webster's Concise Dictionary of English Usage
"... the ideal that every boy and girl should be so equipped that he shall not be handicapped in his struggle for social progress …"— C.C. Fries, American English Grammar (1940) quoted in Readers Digest 1983; as cited in Merriam-Webster's Concise Dictionary of English Usage
"... She and Louis had a game—who could find the ugliest photograph of himself"— Joseph P. Lash, Eleanor and Franklin (1971) (quoted in Readers Digest 1983; as cited in Merriam-Webster's Concise Dictionary of English Usage)
- William Safire in his On Language column in The New York Times approved of the use of generic he, mentioning the mnemonic phrase "the male embraces the female". A reader replied with an example of use of the purportedly gender-neutral he:
"The average American needs the small routines of getting ready for work. As he shaves or blow-dries his hair or pulls on his panty-hose, he is easing himself by small stages into the demands of the day."
To redress the perceived imbalance resulting from use of generic he, some authors now adopt a generic she instead, or alternate between she and he. This and some other ways of dealing with the problem are described below.
She has traditionally been used as a generic pronoun when making generalizations about people belonging to a group when most members of that group are assumed to be female:
- A secretary should keep her temper in check.
- A nurse must always be kind to her patients.
This avoidance of the "generic" he is seen by proponents of non-sexist writing as indicating that the purportedly gender-neutral he is in fact not gender-neutral since it "brings a male image to mind".
Since at least the 14th century, they (used with verbs conjugated in the plural, as with you), them, their, theirs, and themselves or themself have been used, with varying degrees of general acceptance, as singular pronouns. This usage is often called the singular they. It is widely used and accepted in Britain, Australia, and North America in conversation.
- I say to each person in this room: may they enjoy themselves tonight!
- Anyone who arrives at the door can let themself in using this key.
- "If a person is born of a . . . gloomy temper . . . they cannot help it."— Chesterfield, Letter to his son (1759)
They may be used even when the gender of the subject is obvious; they implies a generic (representative of a class) rather than an individuated interpretation:
- 'Tis meet that some more audience than a mother, since nature makes them partial, should o'erhear the speech — Shakespeare, Hamlet
- There's not a man I meet but doth salute me / As if I were their well-acquainted friend — Shakespeare, Comedy of Errors
- If some guy beat me up, then I'd leave them.
- Every bride hopes that their wedding day will go as planned.
Alternatives to generic he
The generic, or universal, use of he as described above has been a source of controversy, as it appears to reflect a bias towards men and a "male-centric" society, and against women. The 19th and 20th centuries saw an upsurge in consciousness and advocacy of gender equity, and this has led in particular to preferences for gender-neutral language. Alternatives to generic he have consequently gained in popularity. The chief of these are described in the sections below.
He or she, (s)he, etc.
The periphrastic forms "he or she", "him or her", "his or her", "his or hers", "himself or herself" are seen by some as resolving the problem, though they are cumbersome. These periphrases can be abbreviated in writing as "he/she", "(s)he", "s/he", "hse", "him/her", "his/her", "himself/herself", but are not easily abbreviated in verbal communication. With the exception of "(s)he" and "s/he", a writer still has the choice of which pronoun to place first.
Alternation of she and he
Authors sometimes employ rubrics for selecting she or he such as:
- Use the gender of the primary author.
- Alternate between "she" and "he".
- Alternate by paragraph or chapter.
- Use he and she to make distinctions between two groups of people.
Some groups and individuals have invented, borrowed and used non-standard pronouns, hoping they will become standard. Various proposals for such changes have been around since at least the 19th century. For example, abbreviated pronouns have been proposed: 'e for he or she, h' for him or her in object case, and 's for his or her(s) or its; "zhe" (also "ze"), "zher(s)" (also "zer" or "zir"), "shi"/"hir", and "zhim" (also "mer") for "he or she", "his or her(s)", and "him or her", respectively; 'self (for himself/herself); and hu, hus, hum, humself (for s/he, his/hers, him/her, himself/herself).
According to Dennis Baron, the neologism that received the greatest partial mainstream acceptance was Charles Crozat Converse's 1884 proposal of thon, a contraction of "that one" (other sources date its coinage to 1858 or 1859):
Thon was picked up by Funk & Wagnalls Standard Dictionary in 1898, and was listed there as recently as 1964. It was also included in Webster's Second New International Dictionary, though it is absent from the first and third, and it still has its supporters today.
"Co" was coined by feminist writer Mary Orovan in 1970. "Co" is in common usage in intentional communities of the Federation of Egalitarian Communities, and "co" appears in the bylaws of several of these communities. In addition to use when the gender of the antecedent is unknown or indeterminate, some use it as gender-blind language and always replace gender-specific pronouns.
Pronouns and LGBT people
For people who are transgender, style guides and associations of journalists and health professionals advise use of the pronoun preferred or considered appropriate by the person in question. When dealing with clients or patients, health practitioners are advised to take note of the pronouns used by the individuals themselves, which may involve using different pronouns at different times. This is also extended to the name preferred by the person concerned. LGBT advocacy groups also advise using the pronouns and names preferred or considered appropriate by the person concerned. They further recommend avoiding gender confusion when referring to the background of transgender people (for instance by using Private Manning to avoid a male pronoun or name).
In terms of gender-neutral titles, as alternatives to Mr. or Miss/Mrs./Ms., a number of different titles may be used, including "Mixter/Mixer/Mx." or "Ind./Individual".
The following table summarizes the foregoing approaches.
|Pronoun||Nominative (subject)||Oblique (object)||Possessive determiner||Possessive pronoun||Reflexive|
|He||He is laughing||I called him||His eyes gleam||That is his||He likes himself|
|She||She is laughing||I called her||Her eyes gleam||That is hers||She likes herself|
|One||One is laughing||I called one||One's eyes gleam||That is "that one's"||One likes oneself|
|Conventions based on traditional pronouns|
|Singular they||They are laughing||I called them||Their eyes gleam||That is theirs||They like themself|
|She/he||She/he is laughing||I called him/her||His/her eyes gleam||That is his/hers||She/he likes him/herself|
|S/he (compact)||S/he is laughing||I called him/r||His/r eyes gleam||That is his/rs||S/he likes him/herself|
|Apostrophe||'E is laughing||I called h'||'S eyes gleam||That is 'rs||'E likes h'/h'self|
|Elverson (1975)||Ey is laughing||I called em||Eir eyes gleam||That is eirs||Ey likes eirself|
|Humanist||Hu is laughing||I called hum||Hus eyes gleam||That is hus||Hu likes humself|
|Jee||Jee is laughing||I called jem||Jeir eyes gleam||That is jeirs||Jee likes jemself|
|Ney||Ney is laughing||I called nem||Neir eyes gleam||That is neirs||Ney likes nemself|
|Peh||Peh is laughing||I called pehm||Peh's eyes gleam||That is peh's||Peh likes pehself|
|Per||Per is laughing||I called per||Per eyes gleam||That is pers||Per likes perself|
|Spivak (1983)||E is laughing||I called Em||Eir eyes gleam||That is Eirs||E likes Emself|
|Thon||Thon is laughing||I called thon||Thons eyes gleam||That is thons||Thon likes thonself|
|Ve||Ve is laughing||I called ver||Vis eyes gleam||That is vis||Ve likes verself|
|Xe||Xe is laughing||I called xem||Xyr eyes gleam||That is xyrs||Xe likes xemself|
|Yo (regional)||Yo is laughing||I called yo||—||—||?|
|Ze (or zie or sie) and zir (Germanic Origin)||Ze is laughing||I called zir/zem||Zir/Zes eyes gleam||That is zirs/zes||Ze likes zirself/zemself|
|Ze (or zie or sie) and hir||Ze is laughing||I called hir||Hir eyes gleam||That is hirs||Ze likes hirself|
|Ze and mer||Ze is laughing||I called mer||Zer eyes gleam||That is zers||Ze likes zemself|
|Zhe||Zhe is laughing||I called zhim||Zher eyes gleam||That is zhers||Zhe likes zhimself|
In most Indo-European languages (though not in the modern Indo-Iranian languages) third-person personal pronouns are gender-specific, while first- and second-person pronouns are not. The distinction is found even in languages which do not retain a masculine–feminine grammatical gender system for nouns generally, such as English and Danish. Sometimes the distinction is neutralized in the plural, as in most modern Germanic languages (gender-neutral third-person plural pronouns include English they and German sie), and also in modern Russian (where the equivalent pronoun is они oni). However, some languages make the distinction in the plural as well, as with French ils and elles, and Czech oni and ony, respectively masculine and feminine equivalents of "they". It is traditional in most languages, in cases of mixed or indeterminate gender, to use the masculine as a default.
For example, in French,
- First person singular je ('I'), me, moi ('me')
- Second person singular (familiar) tu, te, toi ('you')
- First person plural nous ('we', 'us')
- Second person plural vous ('you')
- Third person possessives leur ('their') and son/sa/ses ('his', 'her', 'its', 'their'; the form depends on the gender and number of the item possessed, not on the possessor)
are all gender-inclusive; but
- Third person pronouns il ('he'), le ('him'), ils ('they', referring to an all-male or mixed-gender group) are all masculine.
- Third person pronouns elle ('she'), la ('her') and elles ('they', referring to an all-female group) are all feminine.
The choice of possessive pronoun in many Romance languages is determined by the grammatical gender of the possessed object; the gender of the possessor is not explicit. For instance, in French the possessive pronouns are usually sa for a feminine object, and son for a masculine object: son livre can mean either "his book" or "her book"; the masculine son is used because livre is masculine. Similarly, sa maison means either "his house" or "her house" because "maison" is feminine. Non-possessive pronouns, on the other hand, are usually gender-specific.
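The agreement rule just described can be sketched as a toy function. This is only an illustrative sketch: the function name and the two-way gender flag are assumptions of the example, and elision (son amie before vowel-initial feminine nouns) and the plural form ses are ignored.

```python
def french_possessive(noun, noun_gender):
    """Return the third-person singular possessive phrase for a French noun.

    French chooses 'son' or 'sa' by the grammatical gender of the *possessed*
    noun, not by the possessor's sex, so every result is ambiguous between
    "his ..." and "her ...". (Toy rule: elision and plurals are ignored.)
    """
    determiner = "son" if noun_gender == "m" else "sa"
    return f"{determiner} {noun}"

print(french_possessive("livre", "m"))   # son livre  -> "his book" or "her book"
print(french_possessive("maison", "f"))  # sa maison  -> "his house" or "her house"
```

Because the possessor never enters the computation, the ambiguity between "his" and "her" is built into the output rather than being an edge case.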
As in French, Catalan possessive pronouns mark the gender of the possessed object but not of the possessor: seu stands for a masculine object (el seu llibre), while seva, seua or sa stands for a feminine object (la seva mansió).
Portuguese works with two sets of pronouns. One of them (seu for masculine and sua for feminine) follows the same rules as French and Catalan, with the gender determined by the object (o seu livro and a sua casa); in the other set (dele for masculine and dela for feminine), the gender is determined by the possessor as in English, so o livro dele is possessed by a masculine being and o livro dela is possessed by a feminine being.
In contrast, Spanish possessive pronouns agree with neither the gender of the possessor nor that of the possession. In the third person, the possessive pronoun su (or sus for a plural possession; the number agrees with the possession) is used. For example, su libro could mean either "his book" or "her book", with the gender of the possessor made clear by the context of the statement. Pronouns referring to people in Spanish are gendered: él for "him" and ella for "her"; there is also the gender-neutral lo for "it". Subject pronouns are usually omitted in Spanish, since the verb conjugation indicates the person, and they are used separately only to draw a distinction. The verb vivir ("to live") is usually conjugated in the third person as vive ("he/she lives"); to make a distinction, one might say ella vive en Madrid pero él vive en Barcelona ("She lives in Madrid but he lives in Barcelona").
Italian also behaves like French, with phrases such as il mio/tuo/suo libro not implying anything about the owner's sex or the owner's name's grammatical gender. In the third person, if the "owner's" sex or category (person vs thing) is an issue, it is solved by expressing di lui, di lei for persons or superior animals or di esso for things or inferior animals. Lui scese e portò su le valigie di lei (He went downstairs and brought her luggage upstairs). This rarely happens, though, because it is considered inelegant and the owner's gender can often be inferred from the context, which is anyhow much more important in an Italian environment than in an English-speaking one.
Where a language has grammatical gender, gendered pronouns are sometimes used according to the grammatical gender of their antecedent, as French il ('he') for le livre ('the book' - masculine), whereas in Spanish, el libro is also masculine, but it would not be considered correct to refer to it by using the masculine pronoun él. Instead, something such as "Where is the book?" "It is on the table", would be rendered as "¿Dónde está el libro?" "Está sobre la mesa" where the pronoun is omitted. However, when the pronoun is used as a direct object, gender-specific forms reappear in Spanish. The sentence I can't find it. (always referring to the masculine noun libro (book)) would be No lo encuentro, whereas if I can't find it refers to a magazine (revista in Spanish, which is feminine) then the sentence would be No la encuentro.
If it is absolutely necessary to provide a subject when referring to an object, a demonstrative can be used instead of a pronoun: ¿Qué es eso? translates literally What is that?. And a suitable answer would be Eso es un libro or Eso es una revista, (That's a book, That's a magazine) with the genderless eso as subject in both cases.
Icelandic uses a similar system to other Germanic languages in distinguishing three 3rd-person genders in the singular - hann (masculine gender), hún (feminine gender), það (neuter gender). However it also uses this three-way distinction in the plural: þeir (m. only), þær (f. only), þau (n., which includes mixed gender). It is therefore possible to be gender-specific in all circumstances should one wish - although of course þau can be used for gender-inclusiveness. Otherwise the form used is determined grammatically (i.e., by the gender of the noun replaced). In general statements the use of menn could be preferable as it is less specific than þau.
In Norwegian, a new word, hin ('sie' or 'hir'), was proposed to fill the gap between the third person pronouns hun ('her') and han ('him'). Hin is very rarely used, and only in limited special interest groups; it has not been embraced by society as a whole. One reason for the marginal interest in a gender-neutral word is the constructed nature of hin, and the fact that it is homonymous with several older words in both official language and dialectal speech, such as hin ('the other') and hinsides ('beyond'). The impersonal pronouns man, en and den (en means 'one') can also be used. Amongst LGBT interest groups, the word hen has also come into use, following its adoption in Swedish in 2010.
In Swedish, the word hen was introduced in the 2010s as a replacement of the gender-specific hon ("she") and han ("he"). It can be used when the gender of a person is not known or when it is not desirable to specify them as either a "she" or "he". The word was first proposed in 1966, and again in 1994, with reference to the Finnish hän, a personal pronoun that is gender-neutral, since Finnish completely lacks grammatical gender. However, it did not receive widespread recognition until around 2010, when it began to be used in some texts, and provoked media debates and controversy.
"Hen" is currently treated as neologism by Swedish manuals of style. Major newspapers like Dagens Nyheter have recommended against its usage, though many journalists still use it. The Swedish Language Council has not issued any general recommendations against the use of hen, but advises against the use of the object form henom ("her/him"); it instead recommends using hen as both the subject and object form. Hen has two basic usages: as a way to avoid a stated preference to either gender; or as a way of referring to individuals who are transgender, who prefer to identify themselves as belonging to a third gender or who reject the division of male/female gender roles on ideological grounds. In late July 2014, the Swedish Academy announced that in April 2015, hen will be included in Svenska Akademiens ordlista, the most authoritative glossary on the Swedish language. Its entry will cover two definitions: as a reference to individuals belonging to an unspecified sex or third gender, or where the sex is not known.
Traditionally, there are other variants of avoiding using gender-specific pronouns; e.g., "vederbörande" ("the referred person") and "man" ("one", as in "Man borde..."/"One should..."). Lately some have started to use "en" ("one", as in "En borde..."/"One should...") instead of "man" since "man"/"one" sounds the same as "man"/"male adult." "Denne" ("this one") may be used to refer to a non-gender-specific referent already or soon-to-be mentioned ("Vederbörande kan, om denne så vill,..."/"The referent may, if they wish,..."). One method is rewriting into plural, as Swedish like English has only gender-neutral pronouns in plural. Another method is writing the pronoun in the referent's grammatical gender ("Barnet får om det vill."/"The child is allowed to, if it wants to." "Barn" is neuter, thus the use of the third-person neuter pronoun "det"); some nouns retain their traditional pronouns, e.g., "man"/"man" uses "han"/"he," "kvinna"/"woman" uses "hon"/"she," and "människa"/"human being" uses "hon"/"she." While grammatically correct, using "den/det" to refer to human beings may sound as if the speaker considers referred human beings to be objects.
The Persian language makes no gender distinction in its personal pronouns: both 'he' and 'she' are expressed by the same pronoun u (او). An inanimate 'it' is referred to by ān (آن).
In Welsh, singular third-person personal pronouns are gender-specific (hi, "she"; e, ef, fe, fo or o, "he"). The singular possessive pronoun ei is the same word for both genders, but in some instances it mutates the following word differently depending on whether it means "his" or "her".
- ei modryb, her aunt; ei fodryb, his aunt; but
- ei nith, his or her niece.
Written Chinese has gone in the opposite direction, from non-gendered to gendered pronouns, though this hasn't affected the spoken language.
In spoken standard Mandarin, there is no gender distinction in personal pronouns: the pronoun tā (他) can mean "he", "she", or "it". However, when the antecedent of the spoken pronoun tā is unclear, native speakers will assume it is a male person. In 1917, the Old Chinese graph tā (她, from nǚ 女, "woman") was borrowed into the written language by Liu Bannong to specifically represent "she". As a result, the old character tā (他), which previously also meant "she" in written texts, is sometimes restricted to meaning "he" only. In contrast to most Chinese characters coined to represent specifically male concepts, the character tā is formed with the ungendered character for person rén (人), rather than the character for male nán (男).
The creation of gendered pronouns in Chinese was part of the May Fourth Movement to modernize Chinese culture, and specifically an attempt to assert sameness between Chinese and the European languages, which generally have gendered pronouns. Of all the contemporary neologisms from the period, the only ones to remain in common use are tā (它) for objects, tā (牠, from niú 牛, "cow") for animals, and tā (祂 from shì 示, "revelation") for gods. Although Liu and other writers tried to popularize a different pronunciation for the feminine tā, including yi from the Wu dialect and tuo from a literary reading, these efforts failed, and all forms of the pronoun tā retain identical pronunciation. This identical pronunciation of the split characters holds true for not only Mandarin but also many of the varieties of Chinese. There is a recent trend on the Internet for people to write "TA" in Latin script, derived from the pinyin romanization of Chinese, as a gender-neutral pronoun.
The Cantonese third-person-singular pronoun is keui5 (佢), and may refer to people of any gender. For a specifically female pronoun, some writers replace the person radical rén (亻) with the female radical nǚ (女), forming the character keui5 (姖). However, this analogous variation to tā is neither widely accepted in standard written Cantonese nor grammatically or semantically required. Moreover, while the character keui5 (佢) has no meaning in classical Chinese, the character keui5 (姖) has a separate meaning unrelated to its dialectic use in standard or classical Chinese.
Pure personal pronouns used as anaphors did not exist in traditional Japanese. In modern Japanese, 'kare' and 'kano-jo' are thought of as the masculine and feminine third-person pronouns, respectively. Historically, however, 'kare' was a word in the demonstrative paradigm (i.e., the system of demonstrative prefixes ko-, so-, ka- and a-), used to point to an object that is physically far but psychologically near. The feminine counterpart 'kano-jo', on the other hand, is a combination of 'kano' (the adjectival form of ka-) and 'jo' (woman), coined for the translation of its Western equivalents. It was not until the Meiji era that 'kare' and 'kano-jo' were commonly used as masculine and feminine pronouns in the same way as their Western equivalents. Although their usage as Western-equivalent pronouns tends to be infrequent, 'kare' and 'kano-jo' are commonly used today as ways of saying 'boyfriend' and 'girlfriend'.
First-person pronouns such as 'ore', 'boku', and 'watashi', while not explicitly carrying gender, can strongly imply gender based on their inherent levels of politeness and formality as well as their hierarchical connotations. While 'boku' and 'ore' are traditionally known as masculine pronouns and 'atashi' is characterized as feminine, 'boku' is considered less masculine than 'ore' and often denotes a softer form of masculinity; it is often used by girls who find the pronoun 'atashi' too feminine. To convey a sense of authority, males tend to resort to 'ore' to display confidence to their peers.
The Korean pronoun geu (그) is somewhat gender-neutral: while the gender-specific pronoun geunyeo (그녀) is often preferred when referring to feminine nouns, geu can refer to masculine, feminine, and neuter nouns.
Turkish does not have a system of grammatical gender and thus does not have any gender-specific pronouns. The Turkish singular third-person pronoun o (he/she/it) is completely gender-neutral and can be used to refer to masculine, feminine, and neuter nouns. The plural third-person pronoun onlar (they) is used the same way.
Turkish is also a null-subject language which means pronouns can usually be dropped while retaining the meaning of the sentence. For example, the sentences "O okuldan geldi." and "Okuldan geldi." both translate to "He/she/it came from school."
In most Afro-Asiatic languages only the first-person pronouns (singular and plural) are gender-inclusive: second and third person pronouns are gender-specific.
Thai pronouns are numerous. Here is only a short list.
|First person||Second person||Third person|
|Masculine||ผม (phom)||นาย (nai) (informal)||หมอนั่น (mhor nun) (derogative)|
|Feminine||ดิฉัน (di chan) ชั้น (chan)|| ||นางนั่น (nang nun) (derogative)|
|Neuter||ฉัน (chan) เรา (rao)||คุณ (khun) เธอ (ther)||เขา (khao)|
The pronoun เธอ (ther, lit: you) is semi-feminine. It can be used when the speaker or the listener (or both) are female. It is seldom used when both parties are male.
Esperanto has no universally accepted gender-neutral pronouns, but there are several proposals. Zamenhof proposed using the pronoun ĝi (literally "it"). Some writers also use other established pronouns like tiu ("this" or "that") or oni ("one"). Still other writers use neologisms such as ŝli for this purpose.
Urbino (ōrbēˈnō), town (1991 pop. 15,114), in the Marche, central Italy. It is an agricultural and tourist center, located on the site of a former Roman community. The town flourished under the Montefeltro family (12th–16th cent.) and then under the Della Rovere family (1508–1631), before coming under the papacy. The court of Federigo da Montefeltro, 2d duke of Urbino (1444–82), was a great artistic center during the Renaissance. Urbino was particularly noted for its school of painting (15th–17th cent.) and for the manufacture of majolica ware. The splendid Palazzo Ducale (1444–82) today houses a major museum, with paintings by Raphael (born in the town), Titian, Piero della Francesca, and others. Also of interest are St. John's Oratory and Raphael's house (now a museum).
Carbon Footprint of Beef Cattle
Abstract: The carbon footprint of beef cattle is presented for Canada, the United States, the European Union, Australia and Brazil. The values ranged between 8 and 22 kg CO2e per kg of live weight (LW), depending on the type of farming system, the location, the year, the type of management practices, the allocation, as well as the boundaries of the study. Substantial reductions have been observed for most of these countries in the last thirty years. For instance, in Canada the mean carbon footprint of beef cattle at the exit gate of the farm decreased from 18.2 kg CO2e per kg LW in 1981 to 9.5 kg CO2e per kg LW in 2006, mainly because of improved genetics, better diets, and more sustainable land management practices. Cattle production results in products other than meat, such as hides, offal and products for rendering plants; hence the environmental burden must be distributed between these useful products. In order to do this, the cattle carbon footprint needs to be reported in kg of CO2e per kg of product. For example, in Canada in 2006, on a mass basis, the carbon footprint of cattle by-products at the exit gate of the slaughterhouse was 12.9 kg CO2e per kg of product. Based on an economic allocation, the carbon footprints of meat (primal cuts), hide, offal and fat, and bones and other products for rendering were 19.6, 12.3, 7 and 2 kg CO2e per kg of product, respectively.
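The two allocation rules the abstract contrasts (mass-based versus economic) can be sketched as follows. All batch masses, prices, and the total-emissions figure in this example are hypothetical round numbers for illustration, not data from the study:

```python
# Sketch of co-product emission allocation for a slaughterhouse batch.
# Mass allocation: every kg of product carries the same footprint.
# Economic allocation: each product's emission share follows its revenue share.
# All numbers below are hypothetical illustrations.

def mass_allocation(total_co2e, masses):
    """Footprint (kg CO2e per kg) is identical for every co-product."""
    per_kg = total_co2e / sum(masses.values())
    return {product: per_kg for product in masses}

def economic_allocation(total_co2e, masses, prices):
    """Footprint per kg is weighted by each product's share of total revenue."""
    revenue = {p: masses[p] * prices[p] for p in masses}
    total_revenue = sum(revenue.values())
    return {p: (revenue[p] / total_revenue) * total_co2e / masses[p]
            for p in masses}

emissions = 10_000.0  # kg CO2e for the batch (hypothetical)
masses = {"meat": 500.0, "hide": 50.0, "offal_fat": 150.0, "rendering": 300.0}
prices = {"meat": 12.0, "hide": 8.0, "offal_fat": 3.0, "rendering": 0.5}  # $/kg

print(mass_allocation(emissions, masses))       # uniform footprint per kg
print(economic_allocation(emissions, masses, prices))  # value-weighted footprints
```

Under either rule the total emissions are conserved; economic allocation simply shifts more of the burden onto the high-value primal cuts, which is why the abstract's economic figure for meat exceeds its mass-based figure.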
Desjardins, R.L.; Worth, D.E.; Vergé, X.P.C.; Maxime, D.; Dyer, J.; Cerkowniak, D. Carbon Footprint of Beef Cattle. Sustainability 2012, 4, 3279–3301.
Origin: AS. lilie, L. lilium, Gr. leirion. Cf. Flower-de-luce.
1. (Science: botany) a plant and flower of the genus Lilium, endogenous bulbous plants, having a regular perianth of six coloured pieces, six stamens, and a superior three-celled ovary.
There are nearly fifty species, all found in the North Temperate zone. Lilium candidum and L. longiflorum are the common white lilies of gardens; L. philadelphicum is the wild red lily of the Atlantic states. L. chalcedonicum is supposed to be the lily of the field in our Lord's parable; L. auratum is the great gold-banded lily of Japan.
2. (Science: botany) a name given to handsome flowering plants of several genera, having some resemblance in colour or form to a true lily, as Pancratium, Crinum, Amaryllis, Nerine, etc.
3. That end of a compass needle which should point to the north; so called as often ornamented with the figure of a lily or fleur-de-lis. But sailing further, it veers its lily to the west. (Sir T. Browne)
(Science: botany) African lily, a plant of the genus Hyacinthus. Lily iron, a kind of harpoon with a detachable head of peculiar shape, used in capturing swordfish.
(Science: botany) lily of the valley, a low perennial herb (Convallaria majalis), having a raceme of nodding, fragrant, white flowers. Lily pad, the large floating leaf of the water lily.
(Science: botany) water lily, the Nymphaea, a plant with floating roundish leaves, and large flowers having many petals, usually white, but sometimes pink, red, blue, or yellow.
The Quebec election and the perceived likelihood of a majority Parti Québécois government has provoked much talk of the prospect of another referendum on independence. So far, the only firm promise that Pauline Marois has made is to produce a white paper on the topic of an independent Quebec. This initiative will almost certainly follow the paths laid out by previous exercises, but there’s one topic where the project needs a fundamental rethink: what currency would an independent Quebec use?
In retrospect, the discussion of monetary policy during the 1995 referendum campaign was alarmingly superficial. Debate revolved almost exclusively around whether or not the Rest of Canada could or would prevent an independent Quebec from carrying out the Yes campaign’s promise to continue to use the Canadian dollar.
These questions are beside the point. Firstly, there is almost nothing that Canada can do to prevent another country from using its currency. Secondly, there’s little reason to try. If anything, the rest of Canada would stand to gain: it could extract a (small) seigniorage tax from Quebec.
The real question is why Quebec would want to use the Canadian dollar in the first place. The government of an independent Quebec could be expected to assume a share of the federal debt roughly equal to its share of the population. When the existing provincial debt is added, it would produce a total of roughly 90% of Quebec’s GDP—all denominated in a currency it did not control. By not adopting its own currency, an independent Quebec would be depriving itself of a lender of last resort. If the federal government were to find itself unable to borrow, the Bank of Canada can and would step in to buy its debt. The Bank of Canada would be under no such obligation to do the same for the government of another country. Not only would Quebec have high debt levels, it would face higher borrowing costs as investors demand higher returns to compensate for the risk of default.
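The back-of-the-envelope arithmetic behind a combined debt near 90% of GDP can be made explicit. Every input figure below is a hypothetical round number chosen only to illustrate the calculation, not actual fiscal data:

```python
# Illustrative arithmetic for the debt burden of a hypothetical seceding
# region: a population-based share of the federal debt is added to the
# region's own existing provincial debt. All figures are made-up round numbers.

federal_debt = 600.0       # $B, hypothetical national federal debt
population_share = 0.23    # region's share of national population, hypothetical
provincial_debt = 190.0    # $B, hypothetical existing provincial debt
regional_gdp = 360.0       # $B, hypothetical regional GDP

assumed_federal_share = population_share * federal_debt
total_debt = assumed_federal_share + provincial_debt
debt_to_gdp = total_debt / regional_gdp

print(f"Assumed federal share: ${assumed_federal_share:.0f}B")
print(f"Total debt: ${total_debt:.0f}B -> {debt_to_gdp:.0%} of GDP")
```

With these made-up inputs the ratio lands just above 90%, which is the shape of the argument in the paragraph above: a large inherited stock of debt, denominated in a currency the new government would not control.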
From Quebec’s point of view, the best scenario has traditionally been a monetary union. In 1995, it was easy enough to point to Europe: the Maastricht Treaty and its commitment to create the euro had demonstrated that it was possible for different countries to agree to a monetary union. But thanks to the euro experiment, we know a lot more about how monetary unions work—and we also know what happens when countries barge ahead with poorly-thought-out monetary unions.
One of the problems with the euro project was that unlike the Bank of Canada and other national central banks, the European Central Bank was not given the authority to act as a lender of last resort. This was a deliberate omission: Germany did not want to use a currency whose value could be inflated away by a bailout of a more profligate country. Instead of setting up a lender of last resort, the Maastricht Treaty’s conditions on government debt and deficits were designed to make one unnecessary.
As we all know, this structure did not survive its first test. Even before the recession, the Maastricht limits were discarded when they became inconvenient to France and Germany. More recently, the hardest-hit countries of Southern Europe lurched from debt crisis to debt crisis until the ECB finally decided to act as a de facto lender of last resort.
With a debt-GDP ratio comparable to that of pre-crisis Italy, an independent Quebec could scarcely afford the luxury of doing without a lender of last resort. But it is extremely unlikely that the rest of Canada would agree to enter a monetary union in which the Bank of Canada played that role for both countries. The relatively large size of the Quebec economy compared to the rest of Canada’s would mean that a Quebec debt crisis could have significant inflationary consequences for the RoC.
More generally, a RoC-Quebec monetary union is unlikely to satisfy the conditions for an optimal currency area. (It's arguable that the current Canadian monetary union doesn't satisfy them, either.) For example, one criterion is the existence of system that transfers income from areas that are doing relatively well to other regions. In Canada, this involves more than government-to-government payments: it includes unemployment benefits and salaries paid to federal workers. In 2012, these amounted to a net transfer of $16 billion from the RoC to Quebec – 4.3 per cent of its GDP (pdf – Table 4.2 on page 46). Transfers of this size are still unthinkable in Europe, just as they would be for a RoC-Quebec monetary union.
Of course, just because a monetary union is poorly-designed doesn’t mean that it will collapse. The euro continues to survive, even though the recent recession would have very likely been far less harsh if it had never been adopted in the first place. The members of the eurozone see the common currency as a step towards a deeper political union, and they are evidently willing to accept a high economic cost in order to advance that goal.
There would be no such political support to sustain an eventual RoC-Quebec monetary union. There is a world of difference between viewing a monetary union as a step towards a stronger political union and seeing it as a transition to political disintegration. | <urn:uuid:5a10657a-5e4e-4aba-bbce-51385a055eee> | CC-MAIN-2016-26 | http://www.macleans.ca/economy/economicanalysis/could-an-independent-quebec-use-the-canadian-dollar-would-it-want-to-2/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393533.44/warc/CC-MAIN-20160624154953-00102-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.967982 | 1,069 | 2.515625 | 3 |
A tree that exudes gum, especially a eucalyptus.
Example sentences
- Only in Australia are two genera, Eucalyptus and Acacia - more familiarly, the gum tree and the wattle - able to dominate a continent.
- This brings us to the gum tree, or more specifically, to Eucalyptus globulus, grown widely as a source of wood-pulp.
- Neighbour Ian Kinny, whose memorial garden along with a gum tree, bottlebrush and shrubs were destroyed, described the impact of the attack as ‘extremely cruel’.
up a gum tree
- (British informal) In or into a predicament: 'offers of devolution will lead ministers straight up a gum tree'.
- I'm all for that approach because we're all up a gum tree with romance.
- At Heathrow last week thousands of Brits found themselves up a gum tree, as British Airways, plagued by staff shortages and technical hitches, scrapped more than 100 flights to and from the airport.
Able to be described in words.
- To be sure, depicted as jovial, effable, and magnanimous throughout, the beaten ‘Bull’ continues to embody ‘manly’ virtues such as fortitude, determination, persistence, strength, vigor, and so forth.
- That it is a special but effable sort of strongly-self-modelling computational process.
- So it is there that is situated the first thing that is effable and can be called by proper names.
Early 17th century: from Latin effabilis, from effari 'utter'.
As the body-temperature controls of very small children are not fully developed, they are prone to frequent fevers. Fits occurring as a result of high fever are a common phenomenon in children. These fits are known as febrile convulsions.

These fits or seizures occur in a child with a high fever of over 102.2°F (39°C). Febrile convulsions generally occur at an early stage of other illnesses, such as viral or respiratory infections, where the fever rises rapidly.

Though febrile convulsions affect about 1 out of 20 children in the age group of one to four years, they rarely take a serious turn.

Many children suffer from febrile convulsions at one point of their childhood or another. They are caused by the brain's electrical system, which is still immature at that age and fails to cope with the rapidly rising temperature.
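The Fahrenheit threshold quoted above can be checked against its Celsius equivalent with the standard conversion formula, C = (F − 32) × 5/9; this snippet is only an illustration of that arithmetic:

```python
# Check the article's fever threshold: 102.2 degrees F should equal 39 C.
def fahrenheit_to_celsius(f):
    return (f - 32.0) * 5.0 / 9.0

print(fahrenheit_to_celsius(102.2))  # 39.0, within floating-point rounding
```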
There are certain conditions that can trigger febrile convulsions or increase their likelihood. They are as follows:
High temperature and a rapid rise in temperature
An inherited tendency from parents (genetic predisposition)
If elder siblings have suffered from it
Frequent infections and illness
If the first attack of febrile convulsion was accompanied by a relatively low body temperature, that is, below 102.2°F
After receiving the MMR (measles, mumps, rubella) vaccine. But this is very rare.
The common symptoms of febrile convulsions seen in children are
Brief seizures or fits (one minute to a maximum of five minutes)
Loss of consciousness
Stops breathing for about 30 seconds
Loses control of their bladder or bowel
Twitching or spasms of limbs and face muscles too
Rolling of the eyes upwards
The head is thrown backwards and the arms and legs begin to jerk
The skin becomes pale and sometimes turns blue for few seconds
The child falls asleep after the fit stops and remains irritable and confused after waking up.
Febrile convulsions do not generally lead to complications, apart from frightening the people around the child, but an attack does increase the risk of another during the same illness or later. If a febrile convulsion occurs before the age of one year, and if there is a family history, the probability of recurrence remains higher.
The most serious potential complication of febrile convulsions is the development of epilepsy, though this is very rare and occurs mainly in those who suffer from febrile convulsions more frequently and over a longer period of time.
There is no specific treatment for febrile convulsions, since they are a natural occurrence in many children. However, the following measures can be adopted:
Try to stop the child from hurting himself, but do not hold him tightly.
Do not put anything in their mouth, not even water.
Place them in a recovery position
Call the doctor
Cool the room to bring the temperature down
Loosen their clothes
Turn the child’s head sideways to prevent choking
If the child has suffered from convulsions before, diazepam is given through the rectum using a rectal tube.
After the attack, some doctors advise mild painkillers or paracetamol to bring down the temperature
Prompt medical treatment and hospitalization might be required for some children
No matter how frightening febrile convulsions might seem, they rarely cause any long-term injury. If they occur frequently, however, mild brain dysfunction can occur.
Certain measures can be taken to lower the risk of febrile convulsion attacks, such as:
Keeping a close check on the child suffering from high temperature
Temperature lowering medicines can be given
Keeping medicines like diazepam in hand
Consult the doctor immediately, and hospitalize the child if necessary. | <urn:uuid:1a8e9ae4-64e8-4d93-af7d-8b2f6db5bdd2> | CC-MAIN-2016-26 | http://www.ayushveda.com/healthcare/febrile-convulsion.htm | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397748.48/warc/CC-MAIN-20160624154957-00037-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.93383 | 813 | 3.875 | 4 |
Materials science involves understanding what gives materials their properties, and then using this knowledge to engineer new and better materials that can meet wide-ranging societal and environmental needs.
Materials scientists study how to fabricate new materials, predict their behavior, and control their structure and properties over length-scales spanning from meters down to the atomic scale.
The scope of materials science is immense: It encompasses diverse materials classes (metals, polymers, composites, glasses, and ceramics) and covers applications ranging from structural materials such as those used in bridges and aircraft to electronic, magnetic and optical materials used in computing, communications, and new electronic devices.
The MIME materials science faculty at OSU are engaged in research in many areas.
Together their efforts address applications in areas ranging from sustainable energy to medicine and public health. | <urn:uuid:fe72fedd-63a4-4ef1-aa89-4cc27853afcd> | CC-MAIN-2016-26 | http://mime.oregonstate.edu/academics/grad/me/materials | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395621.98/warc/CC-MAIN-20160624154955-00078-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.903931 | 193 | 2.546875 | 3 |
The Prisoner: Science: Rover
Rovers, the Village Guardians, roam the Village and its outskirts as a constant reminder that there is no escape. There is also a Rover guarding the helicopter platform at all times, one designed for high-pressure areas which roams the ocean floor, and at least one held in reserve for special cases, activated only at the request of Number 2 or when a Supervisor declares Orange Alert.
At its most basic, Rover reacts to motion, chasing the fastest-moving life-form in its immediate area, then chemically paralyzing and suffocating it. The Rover assigned to the ocean floor has eliminated all major life forms, including seaweed, off the coast of the Village. Except for occasional schools of fish and shrimp which wash past it to the shallow spots, the area is devoid of life.
Rovers are colloidal mechanisms which work on a fluidic principle. First synthesized as linear colloidal structures for use in microfluidics, their evolution into a spherical shape (thanks to some prodding by a team of biotechnologists) increased their stability and their maximum possible size. While not technically living beings, it is thought they contain cellular material and genetic traits from Tetraodontidae and Diodontidae, known in the vernacular as blowfish. Like the Rovers, these fish are able to expand to a spherical shape and release a substance known as tetrodotoxin. Tetrodotoxin prevents transfer of impulses on a cellular level by blocking the transfer of sodium, thus causing paralysis.
The form of Tetrodotoxin secreted by the Rovers is much more powerful than that of the common blowfish, and victims who are not taken immediately to the Revival Pod will succumb to paralysis of the pulmonary and respiratory systems within only minutes, or seconds if the Rover is set to kill.
They are the size of soccer balls when they arrive in the Village, where they are trained for their task until they grow to their full expansion of 8 feet in diameter. While dormant, waiting for the next Orange Alert, they shrink to manageable size for storage.
According to a report by Number 2, Rovers are aware, but not self-aware. They sense changes in their immediate environment through the visual and infrared spectrum and make basic decisions based on their training and programming. They can be programmed to use Tetrodotoxin and suffocation to subdue or kill, and to act together in a retrieval capacity for attempts to escape by sea. They are also responsive to passwords and are programmed with a failsafe which prevents them from attacking Villagers who carry an electropass.
Rovers can be modified to respond to remote control and serve as eyes and ears for Number 2. Number 2 can broadcast his voice through a Rover so modified. Airborne Rovers have also been developed, but have not yet been put to practical use. They are the Village’s most effective deterrent to date. Although they can die, as it were, they are unrelenting in their pursuit, unwavering in their adherence to training and programming and unresponsive to pain. There are some Villagers who have even developed a fascination with the Rovers, but no one is quite sure what the intended purpose of the Rover worship is.
Speculative scientific aspects of The Prisoner have come from a combination of the television episodes, related comics and novels, and unrelated scientific resources. This article is copyright Pet Serrano 2003. Carlton International owns all rights to The Prisoner.
Prisoner reviews are © 2003 Logan and Pet Serrano.
Not for reproduction without the authors' express permission
Prisoner names, characters and everything else associated with
the series are the property of Carlton International. | <urn:uuid:6a27d624-7222-4008-ba99-4d19ffe7a22d> | CC-MAIN-2016-26 | http://sadgeezer.com/The-Prisoner-Science-Rover.htm | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399106.96/warc/CC-MAIN-20160624154959-00044-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.9602 | 756 | 2.765625 | 3 |
Setting the Stage for a New Paradigm in Treatment of Heart Failure
Embargo expired: 30-Jan-2013 5:00 PM EST
Source Newsroom: University of North Carolina Health Care System
Newswise — CHAPEL HILL, N.C. – Despite a substantial increase in the number of people suffering the debilitating and often deadly effects of heart failure, treatments for the condition have not advanced significantly for at least 10 years. An analysis by researchers at the University of North Carolina School of Medicine shows new breakthroughs could be closer than we thought.
The analysis points to striking similarities between heart cells in patients with heart failure and brain cells in patients with Alzheimer’s disease, raising the possibility that some treatment approaches being developed for Alzheimer’s may also help reverse the damage from heart failure.
“We know that Alzheimer’s is a process of wear and tear on the brain, and the same sort of wear and tear affects the heart,” said Cam Patterson, MD, MBA, UNC’s chief of cardiology. “The good news is now that we recognize that — and can understand how the wear and tear actually affects proteins in the heart — it offers us a new chance to identify strategies to reverse that wear and tear. It’s like providing a key to preventing aging of the heart.”
The analysis, co-authored by Patterson and Monte Willis, MD, PhD, associate professor of pathology and laboratory medicine at UNC, appears in the Jan. 31, 2013 issue of the New England Journal of Medicine.
The researchers say a variety of recent studies point to one conclusion: misfolded proteins in heart cells are a key factor in the process of heart failure. “There’s a convergence of data pointing to this being a real problem,” said Patterson.
The analysis brings together three main lines of evidence. First, studies of heart tissue from patients with heart failure reveal large accumulations of misfolded proteins within damaged heart cells, similar to the accumulations found in the brain cells of patients with Alzheimer’s. Second, recent studies using mice show heart problems can result from defects in the body’s quality-control system for monitoring and maintaining proteins. Finally, studies of a rare genetic disorder link severe heart problems to misfolding of two proteins, known as desmin and CryAB.
The new conclusion opens enticing avenues for possible treatments. Scientists studying Alzheimer’s and other neurological disorders have long focused on ways to correct or prevent protein misfolding, and have even developed drugs that accomplish this feat. “This raises the possibility that that same type of strategy, and maybe even some of those compounds, will be beneficial in heart failure,” said Patterson. “It’s an entirely new treatment paradigm.”
Heart failure, in which the heart fails to pump as effectively as it should, is a chronic, debilitating and often deadly condition affecting millions of adults in the United States. It can result from heart attacks, coronary heart disease and many other causes. Increases in heart attack survival rates mean more people are living with the debilitating effects of heart failure, including fatigue, shortness of breath and increased mortality. | <urn:uuid:486c351f-8ea6-4af6-b7f0-a120c0906857> | CC-MAIN-2016-26 | http://www.newswise.com/articles/setting-the-stage-for-a-new-paradigm-in-treatment-of-heart-failure | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395546.12/warc/CC-MAIN-20160624154955-00126-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.929104 | 659 | 2.859375 | 3 |
Lorne Campbell, Jill Dunkerton, Jo Kirby and Lisa Monnas
Technical Bulletin Volume 22, 2001
The weave structure found in 15th-century Italian silk velvets is compared with the purple velvet on the reverse of a diptych attributed to Ercole de' Roberti.
Quantitative dye analysis by high-performance liquid chromatography (HPLC) was carried out to identify Polish cochineal (as distinct from Mexican cochineal, whose presence would have given an earliest date of production of the diptych). Madder in orange fibres and indigo on deep purple threads were also identified.
Dye mixtures used to make purple, contemporary literature on dyeing, contemporary marking of the dye used in the selvedge, and 15th-century Italian terms for purple are discussed. Wear on the velvet makes it clear where fastenings were placed: the diptych opened like a book.
This technical study shows it is likely that the diptych belonged to a particular owner, since it was shown to match a description in a contemporary inventory very closely. Details of the weave analysis and the HPLC conditions are given in an appendix.
dye, Ercole de' Roberti, panel paintings, primary source documents, purple, velvet
To cite this article we suggest using
Campbell, L., Dunkerton, J., Kirby, J., Monnas, L. 'Two Panels by Ercole de' Roberti and the Identification of "Veluto Morello"'. National Gallery Technical Bulletin Vol 22, pp 29–41.
Children lack basic knowledge of where their food comes from
Cheese comes from plants, tomatoes grow underground, and chicken? Well, that's what fishfingers are made of.
These were the results of a survey by the British Nutrition Foundation, which found that despite a number of healthy eating initiatives, many children still do not know where food comes from or what it contains.
Researchers asked 27,500 children across the UK about healthy eating, cooking and the origin of food in one of the largest studies of its kind.
It found that nearly a third — 29 per cent — of primary school pupils thought that cheese came from plants. One in ten secondary | <urn:uuid:4956d44d-c3f1-4f6b-bff8-6dfa328a1e14> | CC-MAIN-2016-26 | http://www.thetimes.co.uk/tto/health/child-health/article3781023.ece | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398209.20/warc/CC-MAIN-20160624154958-00036-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.97595 | 132 | 2.984375 | 3 |
Posted: Aug 27, 2008
Nanotechnology tunes - listening to the music of molecules
(Nanowerk Spotlight) Detecting the presence of a given substance at the molecular level, down to a single molecule, remains a considerable challenge for many nanotechnology sensor applications that range from nanobiotechnology research to environmental monitoring and antiterror or military applications.
Currently, chemical functionalization techniques are used to specify what a nanoscale detector will sense. For biological molecules, this might mean developing an antibody/antigen (i.e., lock-key) pair, or an alternative synthetically generated ligand. For chemical gases, it is much more challenging to develop the right 'glue' that sticks a given gas (and only that gas) to a substrate. Thus, for many gas-sensing applications, appropriate functionalization may not even be possible.
The advantage of spectroscopic techniques – measuring and interpreting electromagnetic spectra arising from either emission or absorption of radiant energy by various substances – such as Raman, infrared, and nuclear magnetic resonance spectroscopy is that they are label-free, i.e. they require no preconditioning in order to identify a given analyte. They are also highly selective, capable of distinguishing species that are chemically or functionally very similar. On the downside, spectroscopic methods face enormous challenges in measuring dilute concentrations of an analyte and generally involve the use of large, expensive equipment.
"We have been working on ways to overcome the functionalization bottleneck in sensing and, instead of trying to see a molecule by using photons or electrons – as in optical spectroscopy or electron microscopy – we have been using vibrational energy exchange to in effect 'listen' to the vibrations of the molecule," Jeffrey Grossman tells Nanowerk. "The concept is much like bringing a set of nano tuning forks up to a molecule and seeing which ones become excited. Those would form a chord of 'notes' that are unique to that particular molecule. Thus, the molecule can be identified."
In his recent work, funded by the National Science Foundation, Grossman, who leads the Computational Nanoscience Group at UC Berkeley and is Executive Director of Berkeley's Center of Integrated Nanomechanical Systems, has been taking advantage of the unique manner in which vibrational energy transfers between nanoscale objects, with applications spanning chemical, biological, radiation, and even acoustic sensing.
"The scientific core of our work is aimed at utilizing the unique way in which mechanical energy – in other words, heat – is exchanged at the nanoscale" Grossman explains. "Specifically, we have shown that if one nanoscale object vibrates at the same frequency as another – 'in resonance' – then it is possible for these two objects to exchange heat extremely efficiently. At the same time, if they are not vibrating at the same frequency then the flow of heat is blocked and little or no energy is exchanged."
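The resonance effect Grossman describes can be illustrated with a toy model: two weakly coupled classical oscillators exchange energy almost completely when their natural frequencies match, and hardly at all when they are detuned. The sketch below illustrates that principle only; the parameters and units are arbitrary and not drawn from the paper:

```python
def energy_transferred(w1, w2, k=0.05, t_end=400.0, steps=40000):
    """Peak fraction of total energy that ever reaches oscillator 2.

    Two unit-mass oscillators (natural frequencies w1, w2) share a weak
    coupling spring of stiffness k. Oscillator 1 starts with all the
    energy. Integration uses symplectic (semi-implicit) Euler.
    """
    dt = t_end / steps
    x1, v1 = 1.0, 0.0   # oscillator 1 displaced: holds the initial energy
    x2, v2 = 0.0, 0.0   # oscillator 2 at rest
    peak = 0.0
    for _ in range(steps):
        v1 += (-w1 * w1 * x1 + k * (x2 - x1)) * dt
        v2 += (-w2 * w2 * x2 + k * (x1 - x2)) * dt
        x1 += v1 * dt
        x2 += v2 * dt
        e1 = 0.5 * (v1 * v1 + w1 * w1 * x1 * x1)
        e2 = 0.5 * (v2 * v2 + w2 * w2 * x2 * x2)
        peak = max(peak, e2 / (e1 + e2))
    return peak
```

At resonance (w1 == w2) nearly all the energy beats across to the second oscillator; detuning the second frequency by 50% blocks the exchange almost entirely.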
In essence, what Grossman and his group have done is to demonstrate that one can take advantage of this nanomechanical exchange of energy for detection or characterization of an unknown molecule type. They have termed their novel chemical detection technique nanomechanical resonance spectroscopy (NRS). NRS basically employs an array of nanomechanical resonators that are used to directly interrogate a heated ('exited thermally') analyte's vibrational frequencies.
The new proposed NRS sensor can 'listen' to the 'music' a molecule makes without needing to adhere the molecule to a surface, thereby allowing for continuous measurements with little or no cleaning, resetting, or degradation of the sensor. The result would be a nanodetection system that can detect many different species without functionalization chemistry steps. The NRS is only possible because of nanotechnology's ability to synthesize nanoscale objects that resonate at the same frequencies of the natural vibrations of molecules.
Left: Illustration of nanotube or nanowire when its fundamental mode of oscillation is unexcited (bottom) or excited (top). Right: analogy with a piano – the frequency of oscillation of the nanotube or nanowire can be thought of as a note on a piano. (Image: Dr. Grossman, UC Berkeley)
This work is part of an emerging field of study of frequency dependent thermal phenomena in nanoscale systems – that is, recognizing and exploiting the wave nature of heat. In these small systems the macroscopic concept of temperature (the time averaged thermal energy) is insufficient to describe how a system will behave.
Grossman points out that one must also be concerned with how the thermal energy is distributed in frequency, and this distribution can influence the way in which heat is transported (or blocked) as well as how heat affects other important properties such as electron transport.
"There are other researchers who are studying the importance of the frequency dependence of heat in nanostructures, for example in thermoelectric materials, or in thermally rectifying materials," he says. "However, our work is unique in proposing an application that depends solely on frequency dependent phenomena and also by utilizing these phenomena for label-free detection."
Illustration of how each molecule has its own unique set of vibrational excitations which, taken together, form a unique “chord” in the NRS approach. (Image: Dr. Grossman, UC Berkeley)
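The tuning-fork "chord" analogy lends itself to a short sketch: a bank of resonators at fixed frequencies, a library of molecular fingerprints, and identification by the best-matching excitation pattern. The molecules, frequencies, tolerance, and function names below are illustrative assumptions, not values from the study:

```python
# Hypothetical vibrational fingerprints (cm^-1); illustrative values only.
FINGERPRINTS = {
    "CO2": [667, 1333, 2349],
    "H2O": [1595, 3657, 3756],
    "CH4": [1306, 1534, 2917, 3019],
}

def excited_resonators(analyte_freqs, resonator_freqs, tol=25):
    """Resonators in the array lying within `tol` of one of the
    analyte's vibrational modes: the 'notes' that get excited."""
    return {r for r in resonator_freqs
            if any(abs(r - f) <= tol for f in analyte_freqs)}

def identify(observed, resonator_freqs, tol=25):
    """Best-matching library molecule for an observed excitation chord,
    scored by Jaccard similarity of expected vs. observed resonator sets."""
    def score(freqs):
        expected = excited_resonators(freqs, resonator_freqs, tol)
        union = expected | observed
        return len(expected & observed) / len(union) if union else 0.0
    return max(FINGERPRINTS, key=lambda m: score(FINGERPRINTS[m]))
```

With an array of resonators spaced every 50 cm^-1 from 600 to 3800, each library molecule excites a distinct chord, and the chord alone identifies it.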
Grossman notes that in their recent paper, he and Greaney present the NRS concept and use atomistic computational modeling to demonstrate a proof-of-principle of the idea. An actual NRS device has yet to be built, although there are many ways in which the scientists believe this could be done.
"One could envision several NRS device setups" says Grossman. "For example, a series of carbon nanotubes of differing length (or radius) suspended over a trench, similar to the strings in a harp. Alternatively one could pass analytes through holes in graphene membranes or over substrates coated in fullerenes of different radius. The design of a practical NRS is more limited by capabilities for detecting the excitation of a vibrational mode than the ability to fabricate nanoscale devices."
The number of applications for a sensitive, label-free detection system is quite extensive. It would be extremely useful in areas such as medicine, homeland security, environmental monitoring, and clean energy and water.
Research in this area of thermal phenomena and heat flow at the nanoscale is just beginning. "With the development of theory for describing heat transport in nanoscale systems and advances in nanoscale fabrication and characterization techniques we are now well equipped to study these phenomena" Grossman says. "As a result we should look forward to the development of more applications that exploit the wave nature of heat." | <urn:uuid:4f071474-e9c8-4852-9bef-d491d49a4190> | CC-MAIN-2016-26 | http://www.nanowerk.com/spotlight/spotid=6941.php | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393518.22/warc/CC-MAIN-20160624154953-00056-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.923713 | 1,416 | 2.796875 | 3 |
A report, published by the Cancer Institute NSW in Australia has found that alcohol might be more strongly linked to cancer than previous thought.
The authors of the report reviewed the findings of 634 previous studies to determine the link between alcohol consumption and the risk of various cancers. In total, cancer risk was found to be 22% higher in people who consumed four alcoholic drinks a day compared to non-drinkers and 90% higher in those who consumed eight alcoholic drinks a day. On the other hand, consumption of two alcoholic drinks a day appeared to have little or no effect on cancer risk.
The researchers found conclusive evidence that moderate to heavy alcohol consumption raised the risk of cancer in the liver, breast, colon, stomach and upper-digestive tract while no link was found between alcohol and bladder, endometrial, or pancreatic cancer.
Cancers of the upper-digestive tract, which includes the mouth, esophagus, larynx, and pharynx, were four times more likely in those who consumed four alcoholic drinks a day and six times more likely in those who consumed eight drinks a day, compared to non-drinkers.
Breast cancer risk increased by around 10% with each additional alcoholic drink consumed, this equates to twice the risk of breast cancer for someone consuming an average of eight alcoholic drinks per day. It is believed that as many as 10% of all breast cancers might be attributable to alcohol.
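Treating the roughly 10%-per-drink increase as compounding multiplicatively, which is a simplification of the reported dose-response relationship, reproduces the "twice the risk at eight drinks" figure:

```python
def relative_risk(drinks_per_day, increase_per_drink=0.10):
    """Compound a fixed per-drink relative-risk increase.

    Assumes each additional daily drink multiplies risk by the same
    factor (a simplification of the studies' dose-response data).
    """
    return (1 + increase_per_drink) ** drinks_per_day

# relative_risk(8) = 1.1 ** 8, roughly 2.14, i.e. about double the
# risk at eight drinks a day, matching the figure quoted above.
```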
Stomach cancer was only slightly associated with heavy alcohol consumption with eight drinks a day raising stomach cancer risk by around 30%.
Colo-rectal cancers did not appear to be associated with alcohol in women however heavy male drinkers were at a 60 to 80 percent greater risk of developing colo-rectal cancers than non-drinkers.
The results were the opposite for liver cancer with women drinkers appearing to be at a greater risk of liver cancer than men. Heavy male drinkers were 60% more likely to develop liver cancer while females were a massive nine times more likely to develop the disease compared to non-drinkers. Around 35% of all liver cancers are thought to be alcohol related.
Some evidence existed for a link between alcohol and prostate, kidney, lung, cervical, and ovarian cancers, however the evidence was not sufficient to draw definitive conclusions.
Many mechanisms have been proposed to explain the link between alcohol and cancer. Acetaldehyde, which is produced in the body when alcohol is metabolised, is a known carcinogen. Alcohol also impairs the body's ability to absorb folate, resulting in a folate deficiency in many heavy drinkers. Low folate levels have been implicated in the development of several forms of cancer, including colo-rectal, breast, and pancreatic cancers. Alcohol is also known to increase estrogen levels, which may explain its link with breast cancer.
Cancer is currently the second most common cause of death in the western world behind heart disease. Around 35% of women and almost half of all men will be diagnosed with cancer during their lifetimes and this figure is likely to rise as life expectancies continue to increase. Prostate cancer is the most common form of cancer in men while breast cancer is the most common in women.
It is thought that as many as a third of all cancers are preventable through healthy lifestyle choices such as not smoking, consuming a variety of fresh fruits and vegetables, and being physically active.
High School Reform
Reforming high school was a priority of State Superintendent Jack O'Connell. The California Legislative Analyst's Office issued a report and the High School Alliance established core principles for successful high schools, while the Bill and Melinda Gates Foundation issued a report on The Silent Epidemic: Perspectives on High School Dropouts. Lincoln School District has taken a different tack, making it more difficult to enter high school without attaining a minimum score in multiple assessment areas. Los Angeles Unified is seeking solutions to overcrowded high schools by creating small learning communities, but details of the conversion are sketchy for the $45 million price tag for 53 high schools.
Peter Schrag reviews the Economic Policy Institute's 99-page report, "Rethinking High School Graduation Rates & Trends." Interestingly, their research finds high school graduation rates much higher than traditionally reported. In 2007, Peter Schrag reviewed the published California high school graduation rate. A 2007 publication from the Alliance for Excellent Education illustrates the discrepancies in graduation rates (national and California) reported by government and independent sources, examines why this is important, and explains how certain federal policies have contributed to the graduation rate confusion. Here are two graduation rate reports issued in 2008: graduation rate report 1 and graduation rate report 2. By 2009 there was a call for federal standardization of calculating the graduation rate. In 2010, the implementation of the statewide student attendance tracking system CALPADS further complicated the ability to determine graduation rates. (See Alameda and Contra Costa County data.) Currently, high school graduates must pass the California High School Exit Exam to get a diploma, but a proposed Assembly bill would also require seniors to register to vote. In 2012, the federal government published the first state-by-state graduation rate for high schoolers using one metric.
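The single federal metric referred to above is the four-year adjusted cohort graduation rate; part of its appeal is that the arithmetic is simple and uniform across states. A sketch, with function and field names of my own and illustrative numbers rather than official data:

```python
def adjusted_cohort_graduation_rate(first_time_9th, transfers_in,
                                    transfers_out, on_time_graduates):
    """Four-year adjusted cohort graduation rate.

    The denominator is the entering ninth-grade class, adjusted upward
    for students who transfer in and downward for students who are
    verified as transferring out, emigrating, or dying during the
    four years.
    """
    cohort = first_time_9th + transfers_in - transfers_out
    return on_time_graduates / cohort

# e.g. 1,000 entering ninth graders, 120 transfers in, 150 verified
# transfers out, and 680 on-time diplomas:
# 680 / (1000 + 120 - 150) = 680 / 970, about 70.1%
```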
MDRC offers a synthesis of the five challenges of high school reform after evaluating three reform models: Career Academies, First Things First, and Talent Development. A 2009 report on Delaware high schools identified six best practices: transition programs to orient incoming ninth-grade students, academies to organize ninth-grade students into smaller groups, extra instructional time to support students not making satisfactory academic progress, after-school instructional help for all students needing extra help, mentoring, and twilight programs for students missing a small number of credits to graduate.
The Aspen Institute published Transforming High School Teaching and Learning: A District-wide Design, which provides new insights into how to substantially improve high school teaching and learning across an urban school district. Drawing on the expertise of teachers, principals, superintendents, policy makers and researchers, the new Aspen Institute Program on Education and Society report offers both an analytic framework and concrete suggestions for a new approach to high school improvement.
A 2009 Alliance for Education report looks at the research on a number of high school performance indicators that have emerged as being predictive of high school graduation and college and career readiness. In another 2009 report, prepared for the National Governors Association, a framework for thinking about program and policy changes is examined through three strategies: expanding access to AP courses, building teacher and student capacity, and creating incentives for schools and students.
In 2001, San Diego became one of seven cities to participate in the Schools for a New Society Initiative. Sponsored by Carnegie Corporation of New York, the initiative aims to redesign high schools using a working theory of action that involves forming community partnerships -- particularly with local education funds (LEFs) -- and enlisting the community's help to expand learning opportunities for youth and demand educational excellence. In this publication, Alan Bersin shares with Collaborative Communications Group his reflections on the purpose of high school reform, how he would change San Diego's theory of action for high schools, and the political and governance contexts in which the reforms occurred.
Carnegie Corporation of New York recognizes that 19th century high schools do not adequately prepare students for the 21st century. In 2001, the Corporation launched the Schools for a New Society Initiative -- a $60 million initiative designed to reinvent American high school education. "A Framework for Success for All Students" outlines the conceptual framework for how a district can transform its high schools into a system that ensures that all young people have access to high-quality education that will prepare them for the present -- and the future.
High school reform revolves around a common theme: student engagement. Charter schools like BASE/Bay Area School of Enterprise use community-based projects to engage students. One research study deconstructs how youth perceive "activism" and explores the motivating factors and barriers in their decisions to become involved in social causes. The study includes more than 1,200 young people, with expert interviews, ethnographies and a national poll of a representative sample of participants ages 12 to 24. Findings deconstruct youth activism and find an "activation gap," showing a strong disparity between interest in and involvement in social causes. Anecdotal responses from respondents also offer clues to successful strategies for closing the "activation gap." The Bill and Melinda Gates Foundation is focusing on the new three Rs: Rigor, Relevance and Relationships. In 2011, the Gates Foundation issued a report detailing the reasons students drop out of high school.
Major universities like Stanford conduct educational studies; here is one on high school reform from Stanford.
School boards, state laws, and teenagers becoming adults intersect in high school, on issues ranging from dress codes to First Amendment rights.
A decade ago proponents seeking to put an end to affirmative action argued that a vote for Proposition 209 was a vote for fairness. They claimed their initiative was a way to correct social inequalities and foster equal opportunity. However, the "Removing the Roadblocks" report provides research that explains how and why ending affirmative action has produced neither the "results" nor the opportunities that were promised.
Improving High School: A Strategic Approach
High school represents a critical phase in the educational development of K-12 students. High school is also a kind of "launching pad" into adult life. Students mature a great deal during these later teenage years, developing important work habits and attitudes as they become more independent.
Our report examines high schools through the lens of three groups of high school students:
Dropouts (Students Who Fail to Graduate). This group constitutes about 30 percent of the entering ninth grade class.
The "General" Track (Students Who Graduate Without Qualifying for a Four-Year University). This group includes about 45 percent of all entering ninth grade students.
The "University" Track (Students Who Graduate and Qualify for Admission to the State's Public Four-Year Universities). These students account for about one-quarter of entering ninth grade students.
Students in the three groups have very different experiences in high school. Their success in high school and their post-high school options reflect those experiences.
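The report's three shares partition an entering ninth-grade cohort; applied to a hypothetical class of 1,000 students (illustrative only), they work out as follows:

```python
# Approximate shares from the report: 30% dropouts, 45% general track,
# and about one-quarter university track.
SHARES = {"dropouts": 0.30, "general_track": 0.45, "university_track": 0.25}

def partition_cohort(class_size):
    """Split an entering ninth-grade class by the report's approximate shares."""
    return {group: round(class_size * share)
            for group, share in SHARES.items()}

# partition_cohort(1000) -> {'dropouts': 300, 'general_track': 450,
#                            'university_track': 250}
```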
Dropouts: A Need to Engage Students
Findings: Research and data suggest that the factors leading to student dropouts are in place by the time students enter ninth grade. Despite decades of trying, research has not identified programs or services that consistently reduce dropout rates.
Promising Directions: We suggest two main strategies for reducing dropouts. First, the state should increase accountability for dropouts as a way of encouraging high schools to become more responsive to the needs and goals of students who are struggling to succeed in high school. Second, the state should help schools obtain better information about effective remedial services for low-performing elementary and middle school students as a means of preventing dropouts.
The General Track: Creating Better Options
Findings: Data suggest that about one-half of this group attends college after graduation and the other one-half enters the labor force. Research and data indicate that many in this group do not have clear postgraduation goals, which prevents these students from using high school most effectively to make a smooth transition to adult life. In addition, high schools appear to promote four-year college degrees and de-emphasize attractive community college vocational options.
Promising Directions: High schools should be encouraged to become more flexible in helping students achieve their personal goals after graduation. This requires several changes. First, the state should increase high school accountability for helping students make a successful transition to work or college after high school. Second, additional funds for middle school planning and counseling would help students and parents obtain better information about the options available to students in high school. Third, the state should make changes to existing vocational programs that help districts create high-quality vocational sequences that have greater benefits to students.
The University Track: Increasing Incentives for Achievement
Findings: Entering freshmen frequently lack the English or mathematics skills required for study at the university level. Higher education admissions and placement policies contribute to the problem, as they fail to clearly communicate the skill levels needed for success in college.
Promising Directions: Using better measures of high school achievement in the admissions process would help ensure that students are adequately prepared. We think the state should use the existing Standardized Testing and Reporting (STAR) tests for admission and placement decisions in our postsecondary system.
Despite considerable differences in the problems facing these groups, several themes emerge in our recommendations that are consistent across the groups. Our recommendations address the problems experienced by high school students by strengthening state and local accountability, improving available information for decision making, and increasing flexibility to improve the options available to decision makers.
Los Angeles Unified School District considered requiring all students to take the series of high school classes needed for admission into the University of California or California State University systems.
The Legislature should "fine tune" existing accountability programs in order to create stronger incentives for increased student achievement. We recommend strengthening state accountability by resetting the state's standard for proficiency under the federal No Child Left Behind Act and increasing the importance of dropout and graduation data in the state and federal accountability formulas (as the quality of the state's data improves). We also would make high schools accountable to the state for improving student transitions to college and work.
Two of our recommendations aim at strengthening local accountability. By creating a career planning process, we try to increase the leverage of students and parents to participate in a course plan that meets the long-term aspirations of students. By using STAR scores as a primary measure of student performance for University of California and California State University admissions, we try to enlist parents in the cause of promoting high achievement rather than high grades.
Improving information available to state and local decision makers is also an important state role. The lack of good data on high school dropouts complicates the state's desire to hold schools and districts accountable for addressing this problem. Our recommendations on using dropout data from the California Longitudinal Pupil Achievement Data System are designed to highlight the importance of this new system and suggest a way to provide early information to state decision makers and local educators on the nature of the dropout problem. Our proposal to evaluate state supplemental instruction and social promotion programs is intended to provide insight into ways educators can increase the achievement of low-performing students.
Parents and students also need better information about their choices and the likelihood of success in those choices. To provide this information, we recommend intensive career counseling and planning in eighth and tenth grades. With this information, parents and students will be able to make informed choices about how best to use high school to reach the students' post-high school goals for work and school.
Existing state and federal categorical programs provide a considerable amount of resources to support the changes recommended in this report. Our recommendations generally suggest ways the Legislature can help districts organize these resources more effectively. In addition, giving districts greater flexibility over the use of categorical resources can facilitate this reorganization of resources.
Students need and want better and more choices in high school, and schools need to be more flexible in satisfying these choices. Students need other viable alternatives besides getting a four-year college diploma; only about 15 percent of high school graduates earn a college diploma in the decade after high school. Students also want to feel more involved in their education, and creating choices over their high school program empowers students and their parents to use high school to reach their postsecondary goals. Helping schools become more flexible and provide a greater range of options will benefit many students.
The Bottom Line
Our recommendations offer the Legislature several ways to improve high schools. Alone, these changes will not address all of the problems in high schools. Many critical factors are outside of the state�s control. We think, however, our recommendations provide a strategic approach for how the state can contribute to improving high schools.
Here is the entire report: LAO Study on High Schools.
To create deep and lasting change, all six core principles must be addressed. The principles are interdependent and must function as part of a comprehensive plan focused on ensuring that all students are ready for college, careers, and active civic participation.
Lincoln eighth-graders get wake-up call
Jennifer Torres, Stockton Record, Dec 1, 2005
STOCKTON -- Eighth-graders in the Lincoln Unified School District are talking about their grades, considering their test scores and understanding that going straight to high school next year is something they no longer can take for granted.
Over the past month, teachers and administrators have been explaining to students and parents the district's new requirements for ninth-grade placement and participation in extracurricular activities.
Current eighth-graders will be the first held to standards that district trustees hope will discourage the academic ambivalence that sometimes developed when promotion to high school was a sure thing.
"It's just kind of a wake-up call to everyone," said Gina Debono, 13, an eighth-grader at Claudia Landeen School.
The district's recently refined point system for eighth-grade promotion will consider state test scores in math and English (up to five points possible for each), grade point average (up to four points) and attendance (up to four points).
Students who earn 10 or more points can go straight to high school. Students with eight or nine points must attend an intensive summer program before moving on. Students with seven or fewer points must attend the summer program, participate with their parents in a teacher conference and might be required to attend a separate ninth-grade program instead of going to Lincoln High.
District administrators also outlined rules for taking part in sports and other activities. Current eighth-graders must earn a GPA of at least 2.0, have at least a 95 percent attendance rate and no more than six days' suspension to participate in the promotion ceremony, dances and end-of-the-year field trips. As ninth graders, they will have to meet the same criteria to participate in sports, dances and other activities during the first quarter.
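As a rough illustration, the placement rules described in the article can be sketched as a small function. The function and argument names here are hypothetical; the point caps and thresholds are those reported above.

```python
def promotion_points(math_test, english_test, gpa_points, attendance_points):
    """Sum Lincoln Unified's promotion points: up to 5 each for the state
    math and English test scores, up to 4 for GPA, up to 4 for attendance."""
    return math_test + english_test + gpa_points + attendance_points

def ninth_grade_placement(points):
    """Map a point total to the placement described in the article."""
    if points >= 10:
        return "straight to high school"
    if points >= 8:
        return "intensive summer program first"
    return ("summer program, parent-teacher conference, "
            "and possibly a separate ninth-grade program")
```

For example, a student earning 5 and 4 on the tests, 3 GPA points and 4 attendance points totals 16 points and goes straight to high school; a student with 7 points faces the full set of interventions.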
"I think it's a good thing, because it really pushes students to work," said Emily Meerdink, 13. "It's very frustrating to have people sitting next to you not doing anything."
Eighth-graders at Claudia Landeen School in Stockton, as well as other Lincoln Unified students, have been told they might not be promoted to high school without maintaining passing grades, adequate test scores, good attendance and good behavior.
Until the plan was developed, Lincoln Unified promoted students to high school even if they failed classes in the eighth grade.
Trustee John Pratt said he analyzed records from the 2004-05 class of Lincoln eighth-graders and found that about 100 would go to high school despite dismal grades.
Pinning eligibility to go to Lincoln High and participate in extracurricular activities to better academic effort could help motivate those students, he said.
Ellen Wehrs, the principal at Mable Barron School, said the plan also will help more students succeed in high school.
"The stakes are so much higher now," Wehrs said. "If students haven't been prepared and are just thrown into high school, it's a disadvantage for them."
Landeen eighth-grader Jamal Campbell, 13, said he isn't sure the new rules are fair. "I don't really like it, because the eighth-graders last year just got lucky."
But, he said, they are having an effect: "Everybody's wanting to get their grades up."
Few details in Romer's plan for high schools
Price estimate: $45 million
By Naush Boghossian, Los Angeles Daily News, Dec 7, 2005
Superintendent Roy Romer rolled out his plan Tuesday to convert the Los Angeles district's 53 high schools into so-called small learning communities, but school board members complained it lacked the specifics they had demanded months ago.
Romer estimated it would cost $45 million a year to create a more personalized learning environment for the 150,000 high school students in the district. But he failed to include the spending plan, deadlines and staffing projections specified by the Los Angeles Unified School District board.
He did not identify any source for the $45 million, saying it would be addressed in future discussions.
"I don't think there was a plan there. I think it was unfocused, panning for gold at best, and I don't know if we found any nuggets," board member David Tokofsky said after the meeting of the committee on small learning communities.
He had suggested focusing first on two or three academically low-performing high schools and working to improve them.
"It seems like the clock is ticking, and there ought to be some concrete deliverables defined at the next committee meeting."
But Romer said the point of the meeting was never to deliver specifics. In fact, he said he deliberately withheld specifics because a broader discussion is needed on the topic.
"We want to include other people - the teachers union, parents and some teachers - before we talk about specifics," Romer said. "Today was not intended to be a new plan but further discussion of what we already have. It was an update on where we're going."
Board member Jon Lauritzen said he was disappointed with the lack of detail.
"I'm disappointed that it's not moving ahead with more concrete components. The problem is we don't want to move too fast because we lose some of the thoughtfulness of it," he said.
There is no deadline for all schools to convert to the smaller learning environments. Grossly overcrowded Jefferson and Jordan high schools - with more than 3,000 students per campus - will be the first to close down and reopen by the fall of 2006 with several smaller learning communities at each site.
Teachers union president A.J. Duffy agreed a slowdown is needed in the process.
"Unless we slow down and do this one step at a time, we may be doing the whole district a disservice," said Duffy, who supports smaller learning communities.
Our huge dropout rate: Another great school myth?
By Peter Schrag, Sacramento Bee Columnist, May 3, 2006
Many of the things we knew for sure about education in the last half-century have turned out to be questionable, and sometimes flat wrong. It's a long list: that big high schools are better than small ones (1956); that because of our lousy schools, the Russians were winning the Cold War (1957); that because of our lousy schools, the Germans and Japanese were beating our economic brains out (1983); that schools favor boys and shortchange girls (1992).
Late last month a new certainty became a possible candidate for the list of great school myths. Maybe the nation's high school dropout rate, while still unacceptably high, isn't nearly as high as we all thought it to be. Just as important, the familiar gaps in the school completion rate between whites and blacks, or Asians and Latinos, while still large, have been shrinking for 40 years, and shrinking a lot.
These latter heresies, plus a few others, were propounded by economists Lawrence Mishel and Joydeep Roy of EPI, the liberal Economic Policy Institute in Washington, in a 99-page report, "Rethinking High School Graduation Rates & Trends" (www.epi.org). That they were announced just as Time magazine ran a cover story headlined "Dropout Nation" and as Jay Greene of the conservative Manhattan Institute was issuing a study that said high school graduation rates were even lower than commonly thought was pure coincidence.
It's not surprising that the EPI report didn't get a lot of attention. Bad news about schools always makes a better story. Both the left, which wants more money, and elements of the right, which want to make a case for vouchers, have an interest in drawing attention to educational inadequacies.
In any case the EPI report, based on data from sources that the authors regard as at least as reliable as those underlying the conventional wisdom, deserves attention.
The differences aren't small. Mishel and Roy cite a recent National Governors Association task force declaring that "about a third of our students are not graduating from high school," and that "about three-fourths of white students graduate ... but only half of African American and Hispanic students do."
That's the conventional wisdom, which they say is based on "computations that are seriously inaccurate." The overall high school graduation rate with a regular diploma is between 80 percent and 83 percent. That calculation was based on census numbers and data from NELS, the respected National Education Longitudinal Study - large samples of eighth-graders who are individually tracked.
Estimates of the black graduation rate with a regular diploma, they say, range between 69 percent and 75 percent, far higher than "the frequently alleged 50 percent rate for blacks." The NELS data indicate that the real dropout rate - about 25 percent - is only half of "the frequently alleged rate for blacks." And roughly half of that 25 percent get a GED, which, while generally not regarded as the equivalent of a regular diploma, opens the doors to the military, community college and other opportunities.
Similarly, Mishel and Roy estimate Hispanic graduation rates with a regular diploma at somewhere between 61 percent and 74 percent - with NELS showing a 74 percent rate, far higher than the 50 percent often cited. In addition, 9 percent to 12 percent get a GED. Equally important, completion rates have been climbing steadily since 1960 and the ethnic graduation gaps have been shrinking.
The differences between the EPI findings and the conventional numbers depend largely on methodology. Greene says that if Mishel and Roy are right, then, in effect, 500,000 diplomas have gone missing. That's how many more there'd have to be if 83 percent of all the 14-year-olds counted by the census in any given year were to get diplomas four years later.
The biggest problem in this muddle is how you count. Using ninth-grade enrollment as the base and comparing to the number of diplomas awarded four (or five or six) years later, as is often the case, is seriously misleading because the ninth grade in most places bulges with students who are not regarded as ready for high school work.
Nationally, ninth-grade enrollment is 14 percent higher than eighth-grade enrollment; for blacks and Latinos, it's 26 percent. Some of that, Mishel and Roy say, can be accounted for by transfers from private schools, but not much.
In addition, as one census statistician pointed out recently, "there's lots of noise" in the statistical picture - kids who start school at 6 leave, and come back, sometimes more than once; students who are still enrolled at 19 or 20. Even after the states get universal student identifiers to track students, it's likely the muddle will continue.
One footnote to the EPI announcement is that, contrary to widespread warnings from academic liberals that tougher standards and exit exams are driving up dropout rates, the authors, responding to questions about their report, say they've seen no evidence of it. Once again, maybe what we know for sure isn't.
California's not-so-bad high school graduation rates
By Peter Schrag, Sacramento Bee Columnist, May 30, 2006
The latest numbers from the State Department of Education purporting to show that only 67 percent of ninth-graders graduate from high school three years later will be fodder for both the left and right. For the left because they seem to prove that the high school exit exam is driving kids out of school; for the right because it proves that public schools stink.
But the 67 percent is a misleadingly swollen number. For a closer estimate try 76 percent. That's still a rough guess, and it's still not great, but it's considerably better.
Where does the error come from? The state calculates the graduation rate in any given year by dividing the total number of graduates by ninth-grade enrollment three years before. But because so many students are held back when they start high school, ninth-grade enrollment is badly inflated.
The class of 2006 that graduated last spring, for example, had 520,000 students in ninth grade in 2002-'03, but only 461,000 in eighth grade the year before that. Where did the extra 59,000 come from? A few are transfers from private schools, but most are students not qualified for high school work who spend an extra year (or maybe more) as ninth-graders to catch up. Some, of course, never do and drop out.
As the state numbers indicate, there was a decline in the graduation rate from 2005 to 2006 of about 4 percentage points, some of which is almost certainly attributable to the exit exam. And as Department of Education officials said, the point of the exam, keyed to the tenth grade in reading and the eighth grade in math, is to make certain that the diploma isn't a complete hoax.
In fact, what's most puzzling about some exit exam results is that the pass rate is as high as it is.
Even if you recognize that this is hardly a test of high academic skills, how is it possible that more than 70 percent of students classified as English-learners can pass the English part of the exam? Even explanations about the slow bureaucratic way students are moved out of the English learner category and the simplicity of the test leave questions.
Nonetheless, if you divide the 2006 graduates by the number of eighth-graders four years before you get a more accurate graduation rate. If you go back a decade, long before the exit exam, you get a graduation rate of 73 percent, 3 points lower than the most recent one.
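The two calculations can be checked against the figures the column cites. The graduate count below is not given directly in the column; it is inferred from the state's reported 67 percent rate, purely for illustration.

```python
# Cohort figures cited in the column (class of 2006)
ninth_grade_2002 = 520_000   # ninth-grade enrollment, 2002-'03 (inflated by held-back students)
eighth_grade_2001 = 461_000  # the same cohort's eighth-grade enrollment, 2001-'02

# Working backward from the state's reported 67 percent rate gives an
# implied graduate count (an inference, not a figure from the column).
graduates = round(0.67 * ninth_grade_2002)

state_rate = graduates / ninth_grade_2002       # the state's method, 67 percent
corrected_rate = graduates / eighth_grade_2001  # Schrag's method, roughly 76 percent
```

Dividing by the smaller, uninflated eighth-grade base raises the rate from 67 percent to about 76 percent, matching the column's estimate.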
Maybe more telling still, if you compare enrollment in upper-level math and science courses last year with the comparable numbers of a decade before, or even five years before -- and if course enrollment data mean anything, which isn't certain either -- last year's graduates had more solid preparation than any prior generation.
In 2000-'01, roughly 11 percent of high school students were enrolled in advanced math; last year that number had risen to 12.6 percent, a 14 percent increase; there were similar enrollment increases in intermediate algebra, chemistry and physics.
Even more dramatic was the increase in math and science courses taken by African American and Latino students. In 2000-'01, about 11.7 percent of black students were taking intermediate algebra; last year, that percentage was more than 14 percent.
There were similar increases in other math and science courses among both black and Latino students. And as is the pattern elsewhere, there were more girls than boys in most of those courses. And not surprisingly, as a proportion of enrollment, Asians exceed all other groups. Last year, more than 30 percent of Asian students were enrolled in advanced math classes, compared with a little more than 15 percent for whites and just above 7 percent for Latinos.
Those numbers hardly prove that California schools are doing well, or that poor and minority students are getting their share of good teachers and adequate resources. But like the recent EdSource report comparing successful elementary schools that have a lot of English learners to similar schools that are not so successful, the results can be spun in all sorts of ways.
EdSource, which analyzes and publishes data on California schools (for full disclosure, I was a board member) found that schools with certain characteristics scored higher on the state's achievement performance index than those without those characteristics. Among them: effective use of student assessment data; good teachers and resources; curriculum aligned with state standards and ambitious, measurable goals.
In the ongoing argument about money, it demonstrates again that it isn't a choice between more money and effective use of it, but a case for the importance of both.
The EdSource report also makes clear that even within schools with roughly similar socio-economic profiles, economic and social background remain major determinants of student achievement. On average, black students scored no higher than English-learners. Statewide, Asians outscore everybody. The report also proves again that in education reform simple answers are almost always wrong.
Five Challenges for High School Reform
Recent MDRC (formerly the Manpower Demonstration Research Corporation) evaluations of three high school reform models (Career Academies, First Things First, and Talent Development) offer hope that comprehensive programs can improve low-performing high schools. This research synthesis for policymakers and practitioners offers practical lessons for creating personalized learning environments, helping struggling freshmen, improving instruction, preparing students for the world beyond high school, and stimulating change in overstressed high schools.
Their most recent evaluation offers research-based lessons about five major challenges associated with low-performing high schools: (1) creating a personalized and orderly learning environment, (2) assisting students who enter high school with poor academic skills, (3) improving instructional content and practice, (4) preparing students for the world beyond high school, and (5) stimulating change in overstressed high schools.
The overall message of this report is that structural changes to improve personalization and instructional improvement are the twin pillars of high school reform. Small learning communities and faculty advisory systems can increase students' feelings of connectedness to their teachers.
Rigor, Relevance and Relationships
The new 3RS
by Betsy Brand, Director, American Youth Policy Forum
While education reform has remained a "hot" issue for policymakers since the release of A Nation at Risk in 1983, much of the attention has been focused on elementary grades and improving basic reading and math skills for younger students. The No Child Left Behind Act is heavily slanted towards reforms in the early grades, although it does hold high schools and school districts accountable for high school graduation rates as well as student performance on high school assessments. Federal education funding is also slanted toward elementary and middle schools, with only the Carl D. Perkins Vocational and Technical Education Act playing any significant role in providing resources to high schools.
But after years of largely being ignored, high school reform is headed into the policy spotlight. A groundswell of recent reports has drawn attention to the problems of many American high schools, particularly those in large urban and high poverty areas. Others have focused on the lack of student engagement in learning, as many of us are all too familiar with one of high schoolers' recurring complaints: "Class is boring -- why do I have to learn this stuff?"
One recent report, Locating the Dropout Crisis: Which High Schools Produce the Nation's Dropouts? Where are They Located? Who Attends Them? (Balfanz and Legters, 2004), states there are currently between 900 and 1,000 high schools in the country in which graduating is at best a 50/50 proposition. In 2,000 high schools, a typical freshman class shrinks by 40 percent or more by the time the students reach their senior year; these schools represent nearly one in five regular or vocational high schools in the U.S. that enroll 300 or more students.
Another report, Who Graduates? Who Doesn�t? A Statistical Portrait of Public High School Graduation, Class of 2001 (Swanson, 2004) provides a similar assessment:
The national graduation rate is 68 percent, with nearly one-third of all public high school students failing to graduate. Other statistics show that students from historically disadvantaged minority groups (American Indian, Hispanic, and Black) have little more than a 50-50 chance of finishing high school with a diploma. By comparison, graduation rates for Whites and Asians are 75 and 77 percent nationally. Males graduate from high school at a rate 8 percent lower than female students. Graduation rates for students who attend school in high poverty, racially segregated, and urban school districts lag from 15 to 18 percent behind their peers.
Public High School Graduation and College Readiness Rates in the United States (Greene and Forster, 2004) continues the same lament:
Only 70 percent of all students in public high schools graduate, and only 32 percent of all students leave high school qualified to attend four-year colleges. Only 51 percent of all black students and 52 percent of all Hispanic students graduate, and only 20 percent of all black students and 16 percent of all Hispanic students leave high school college-ready. The portion of all college freshmen that is black (11 percent) or Hispanic (7 percent) is very similar to their shares of the college-ready population (9 percent for both). This suggests that the main reason these groups are underrepresented in college admissions is that these students are not acquiring college-ready skills in the K-12 system, rather than inadequate financial aid or affirmative action policies.
Other studies have looked at the problems that high school students have when they enter postsecondary education. In their 2003 report, Betraying the College Dream: How Disconnected K-12 and Postsecondary Education Systems Undermine Student Aspirations, Venezia et al. find that "current K-12 and postsecondary education systems are fractured, create unnecessary barriers between high school and college, and send mixed messages about academic preparation. This particularly impacts low-income students and students of color, but it also contributes to poor student preparation for college generally, higher rates of remediation, and low college completion rates."
The National Research Council of the National Academies entered the discussion with a different take on the problem in Engaging Schools: Fostering High School Students' Motivation to Learn (2003). It explores how adolescents learn and what motivates them to learn. "The fundamental challenge is to create a set of circumstances in which students take pleasure in learning and come to believe that the information and skills they are being asked to learn are important or meaningful for them and worth their efforts, and that they can reasonably expect to be able to learn the material." Too often, the student perspective is ignored, but this report makes an eloquent plea for making learning meaningful for young people.
Dozens of other reports have dealt with various aspects of high school reform over the years, from block scheduling, to team teaching, to applied and contextual learning, to small learning communities. The combined bulk of all of these reports seems to have finally pushed policymakers into paying serious attention to high schools. Both presidential candidates had platform components focused on high schools. The U.S. Department of Education will hold its Second Annual National High School Leadership Summit in December 2004; the 2004-2005 Chairman's Initiative for the National Governors' Association is Redesigning the American High School; several states have recently passed comprehensive high school reform programs or will release high school commission reports (RI, IN, OH); and California just held the first ever State Superintendent's High School Summit.
At the local level, high school reform is much farther advanced. Cities like Boston, Chicago, San Diego, and New York City are deeply involved in creating more rigorous and engaging learning opportunities for young people. At the school level, there are many fine examples of redesigned, reformed, reengineered, refashioned high schools. Not only have local leaders (primarily superintendents and principals) been supporting changes to the design and look of high schools, a number of organizations have also created new models or strategies to improve student learning. It is especially helpful as we enter into a broader and higher level debate on high school reform to have this experience and knowledge to guide us.
Groups like the High Schools That Work Initiative of the Southern Regional Education Board, Tech Prep Network, Talent Development Career Academy model, First Things First, National Academy Foundation, National Career Academy Coalition, and the Career Academy Support Network are just a few of the organizations that have developed school models or promoted change strategies. Today, our focus is on the career academy movement.
One of the earliest books on high school reform, Career Academies: Partnerships for Reconstructing American High Schools (Stern, Raby, and Dayton, 1992) laid the groundwork for significant change to the traditional high school. The book espoused three central elements for reforming high schools into career academies: (1) creating small learning communities; (2) providing a college preparatory curriculum with a career theme; and (3) building partnerships with employers, community and higher education. Despite the fact that many career academies have been created over the past decade, the movement has remained to some extent on the edge of school reform efforts. National education leaders and policymakers have generally been focused on reform in the earlier grades or viewed anything to do with "career education" (including career academies, career clusters, school-to-work, tech-prep, and career and technical education) as tangential to the improvement process. It seems, however, the discussion is changing.
COMMON VISION FOR REFORM
A recent report by the National High School Alliance, Crisis or Possibility? Conversations About the American High School (Harvey and Housman, 2004), makes the case that "powerful voices are backing the proposition that the time has come to re-think and reinvent the American high school. Expert agreement emerged around several key variables related to effecting institutional change." The report goes on to list several "levers" for high school change, such as building a K-16 educational pipeline, addressing the dropout problem, focusing on literacy and teacher competence, and making schools small.
Another framework for reform, espoused by the Bill and Melinda Gates Foundation, promotes schools founded on Three R's: Rigor, Relevance, and Relationships. These ideas are aligned with many of the levers for change described in Crisis or Possibility. While one could perhaps characterize the Three R's as a sound bite, the three words carry with them descriptive images that translate readily to policymakers. For that purpose, I will use the Three R's as the framework for the common vision of reformed high schools in this paper.
First, let�s be clear on what rigor, relevance and relationships really mean for high schools and their students. Rigor is shorthand for ensuring that students have access to and take what is commonly known as a collegepreparatory curriculum. All students, regardless of their abilities or performance level, take four years of English, and at least three of mathematics, science, and social studies, and a foreign language. If all students are to take these courses and pass them, it means that schools and the adults in them must commit to finding ways to help all students master these new basics � which usually means spending more time helping lowerperforming students. Rigor means that all students will be prepared for postsecondary education, without the need for remediation, and that there is an alignment between high school exit exams and postsecondary entrance requirements. It also means that expectations for all students are heightened and no students are relegated to low-level general track classes or shipped off to outdated vocational shop. It also means teachers must be fully qualified and competent in their discipline.
Relevance shifts the focus to students and what motivates them to learn. Students in schools in which learning is relevant don't ask the question, "Why do I have to learn this?" Curricula are set in context so students can see how knowledge builds on what they already know, and applied so they can see how it is used in the real world. Studies are connected to students' goals, and teachers and counselors help students plan their course taking to meet their interests and career and college goals. Most importantly, students become engaged in their learning because they understand that what they are learning has meaning for them and will impact their futures.
But relevance also needs to be closely tied to rigor. Efforts must be made to develop student skills (analytical, communication, organizational and social), broader content knowledge (in the arts, economics, and current and historical events), and values that are particularly relevant to success in academic, workplace and civic settings. While relevance in curriculum is important to help students make connections, a broader definition of relevance is important: one that helps students find where they fit in the world and develop the broader knowledge and skills necessary to be a competent and engaged adult.
Now to relationships. Young people who have relationships with caring and competent adults have better life chances than those who do not.2 Strong, respectful relationships between students and adults in the building are the bedrock of any successful school. There must be a culture of respect and a desire on the part of adults to help young people not only academically, but socially and developmentally as well. Schools should encourage and support opportunities for adults to serve as mentors, coaches, advocates, and advisors, both formally and informally. Smaller learning environments provide a structure that allows more personalized relationships between teachers and students to develop and grow. Adults in the wider community are also an important resource, as many young people form bonds with adults through service projects, extra-curricular activities, or during work-based internships or job-shadowing experiences.
Creating schools that have rigorous and relevant curriculum and that support positive relationships takes time and hard work. And there is a lot that needs to be done "behind the scenes" (scheduling, common planning time, breaking up large high schools, aligning resources and professional development, using assessments wisely, to name a few) to support schools based on the Three R's. Fortunately, groups like the National Career Academy Coalition, the National Academy Foundation, and the Career Academy Support Network have been engaged in this process, and through their efforts and hard work, we now have vibrant examples of how career academies contribute to student achievement and positive student outcomes.
Register to vote, get diploma?
Bill would require teenagers to join voter rolls as a condition of high school graduation
by Jim Sanders, Director, Sacramento Bee, February 4, 2007
A San Jose assemblyman is pushing a novel way of getting teenagers to register to vote: withhold their high school diploma if they don't.
Democrat Joe Coto has proposed requiring voter registration as a condition of graduation.
"I think we need to establish a pattern of voting," Coto said. "It has to be a habit, almost as natural as when you reach the age of 16 and get a driver's license."
Coto's bill, Assembly Bill 183, would apply only to high school seniors who are 18, U.S. citizens and meet other election requirements.
The measure comes at a time when more than four of every 10 eligible adults younger than 25 don't bother to register to vote.
Turnout at statewide elections also has been dismal: Three of the seven worst turnouts in history have occurred since 2000.
"We've got to keep working at our democracy, reinvesting in it, educating people about it, impressing on them the importance of it and that voting is a big part," said Coto, a former school superintendent in Oakland and San Jose.
But critics counter that AB 183 would create a clerical burden for schools, intrude upon personal choices and be pointless if unwilling students register but never cast ballots.
"I don't think it's a good idea to coerce democracy," said Patrick Dorinson, spokesman for the California Republican Party.
Dorinson said 18-year-olds, legally adults, can think for themselves.
"I want young voters to register to vote, and to be knowledgeable and engaged in our democracy," added Assemblyman Roger Niello, R-Fair Oaks. "But they're only going to do that if they want to."
AB 183 also raises the specter of partisan gain because more youths tend to register as Democrats than Republicans, though officials of both parties downplay that as a factor.
"If you look at history, the youth vote has shifted back and forth," said Assemblyman Chuck DeVore, R-Irvine.
Coto's measure, if it passes and is signed into law, would take effect beginning in the 2009-10 school year, allowing a one-year adjustment period.
The measure would permit 18-year-olds who object to the registration requirement to obtain a waiver by filing a document with school officials.
The bill, as currently written, does not require students to state a reason for avoiding registering to vote. But Coto said such exceptions should apply only to students with religious or other specific objections.
Under AB 183, school districts would be responsible for notifying students about the registration requirement and verifying their compliance.
Though the measure threatens a penalty to push students to register, Coto said he cannot imagine anyone's diploma getting withheld. He said it would be easy for students to comply with the voting registration rules.
Coto is leader of a caucus of Democratic Latino legislators.
Latinos stand to gain significantly from AB 183, because they constitute 32 percent of California's adult population but only 14 percent of likely voters, according to the Public Policy Institute of California.
Tim Storey, elections analyst for the National Conference of State Legislatures, said he knows of no state that mandates voter registration as a requirement for graduation.
Assemblyman Mark Leno, D-San Francisco, said he is intrigued by AB 183.
"We need to be as creative as possible to engage the next generation of voters," he said.
Secretary of State Debra Bowen said the bill is worth considering but that it won't single-handedly change teens' attitudes about politics.
"I think we have to start in the fourth grade, with government curriculum," she said. "We have to make it more relevant, and that will be a challenge because the curriculum and school days are packed as it is."
Jack O'Connell, state superintendent of public schools, supports making voter registration forms available on campus but not tying them to graduation, said Hilary McLean, his spokeswoman.
The California School Boards Association has taken no position on the bill.
Students at Sacramento's McClatchy High School had mixed feelings.
"I would vote in a heartbeat, if I could right now," said Sancino Gonzales, 16.
Alex Stutzman, 16, said many kids are lazy about voting, but she opposes threatening students' diplomas over voter registration.
"I don't think it should be a requirement, I just think it should be advertised more in school," she said.
Noah Muldavin, 16, said AB 183 might increase voter participation by making teenagers more aware of politics.
"I think a lot of kids view the government as run by old people," he said.
- Harvard Health Blog - http://www.health.harvard.edu/blog -
Leg clots (aka deep-vein thrombosis): an immediate and long-term health hazard
Posted by Howard LeWine, M.D., on December 14, 2011, in Heart Health
When it comes to under-the-radar health conditions, deep-vein thrombosis is at the top of the list. Most of my patients have never heard of this common problem. Yet deep-vein thrombosis puts more than one-quarter million Americans in the hospital each year, and complications from it are responsible for upwards of 100,000 deaths.
Deep-vein thrombosis (DVT) is the medical term for a blood clot that forms in a leg vein. Some DVTs cause no symptoms; others hurt, or make the leg swell. There are two big worries with a DVT:
Pulmonary embolism. A piece of a clot can break away, travel through the bloodstream, and become lodged in the lungs. This is called a pulmonary embolism. Almost all DVT-related deaths are due to a pulmonary embolism.
Post-phlebitis syndrome. A clot can permanently damage the vein it is lodged in. This problem, called post-phlebitis syndrome, causes persistent leg pain, swelling, darkened skin, and sometimes hard-to-heal skin ulcers. Up to 40% of people with a DVT develop post-phlebitis syndrome.
Deep-vein thrombosis is initially treated with an anticoagulant such as heparin or fondaparinux. Anticoagulants can stop a DVT from getting larger and can prevent new clots from forming. Use of these drugs substantially decreases the risk of developing a pulmonary embolism.
But anticoagulants can’t dissolve a clot that has already formed. That’s the job of drugs called thrombolytics (commonly known as clot busters), such as streptokinase and alteplase. Although you might think that using these drugs against DVT would be a good idea, studies have had mixed results. Thrombolytics don’t appear to improve survival or shorten hospital stays, and they increase the risk of bleeding. They do, however, appear to decrease the risk of post-phlebitic syndrome.
A new study published online in The Lancet shows that delivering a clot-dissolving drug directly into the clot—instead of having it circulate through the bloodstream via standard intravenous delivery—is very effective in preventing post-phlebitic syndrome among people with large clots that are high up in the leg and/or in the pelvis. Direct delivery of the clot-buster allows the use of a lower dose, which decreases the risk of bleeding elsewhere in the body.
Not everyone with a DVT needs direct clot-dissolving therapy. Anticoagulants, along with support stockings to reduce swelling and improve blood flow, are enough for most people. For those with a large clot, especially one high up in the leg or in the pelvis, direct injection of a thrombolytic agent will help protect the affected vein from post-phlebitic syndrome.
Anyone can develop a DVT, although some people are more likely to have one than others. You are at increased risk if you or a close family member have had a DVT before, have an inherited condition that causes your blood to clot more readily than normal, have cancer, are immobile for a long time (confined to bed, long-duration plane or car trip, etc.), or use birth control pills.
Here are some good ways that everyone can use to help prevent a DVT from forming:
The Language of Banking - ABA Education

What's new with banking may just seem new because of the banking language. The following is information presented by the American Bankers Association.
Arranging for a loan - buying a house or car, starting a business, or opening an account - inevitably exposes you to the language of banking. When dealing with money it's important to know some of the terminology to avoid misinterpretation. Here is a list of some of the most common terms, taken from the American Bankers Association Dictionary, Banking and Finance Terminology.
Cash Advance: A cash loan obtained by a credit cardholder through presentation of the card at a financial institution, ATM, or through a mail receipt.
Certificate of Deposit (CD): A formal receipt for funds left with a bank as a special deposit. Such deposits may bear interest, in which case they are payable at a definite date in the future and/or after a specified minimum notice of withdrawal. These deposits are payable only upon surrender of the formal receipt, properly endorsed.
Credit Rating: A formal evaluation of an individual's loan-repayment history or potential.
Credit Report: A factual data report from an independent agency created to verify the applicant's current employment and income. It also provides information on current and previous debts and liabilities.
Debit Card: A plastic card enabling the cardholder to purchase goods and services or make cash withdrawals from an ATM. The cost or amount withdrawn is immediately charged to the cardholder's checking account.
Lien: A legal claim or attachment, filed on record, against property as security for payment of an obligation. A lien is the guaranteed right of a lender or investor to specific property in case of default.
Overdraft: The amount by which a debit or charge against an account exceeds the balance of the account.

PITI: Principal, interest, taxes and insurance. These components are commonly included in a monthly mortgage payment.
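PITI itself is just the sum of those four components. The principal-and-interest portion is usually computed with the standard loan amortization formula. A small sketch follows; the formula is standard, but the loan amount, rate, and monthly tax/insurance figures are made-up illustrations, not taken from the ABA dictionary:

```python
def monthly_principal_and_interest(principal, annual_rate, years):
    """Standard amortization formula: M = P*r / (1 - (1 + r)**-n)."""
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # total number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

# Hypothetical $200,000 loan at 6% annual interest for 30 years:
pi = monthly_principal_and_interest(200_000, 0.06, 30)   # about $1,199.10
# Add illustrative monthly taxes and insurance to get PITI:
piti = pi + 250.00 + 75.00
print(round(pi, 2), round(piti, 2))
```

Lenders often use the monthly PITI figure, rather than principal and interest alone, when judging whether a borrower can afford a loan.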
Refinancing: The retirement of existing securities or the repayment of a debt from the proceeds of new borrowings.
Secured loan: A loan against which a tangible asset has been pledged in case of default on the loan.
Unsecured loan: Funds loaned with no pledge of collateral.
Yield: The annual percentage rate of return on capital, calculated by dividing annual return by the amount of an investment.
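The arithmetic in that definition is simple division; a quick sketch (the dollar figures are made up for illustration):

```python
def yield_rate(annual_return, investment):
    """Yield = annual return divided by the amount invested."""
    return annual_return / investment

# A $1,000 investment that returns $50 per year:
print(yield_rate(50, 1_000))   # 0.05, i.e. a 5% yield
```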
Fall Photography Tips – What Creates the Color?
The pigment is there all the time but is hidden by the chlorophyll that makes the leaves green. As the weather changes, chlorophyll production stops and the other colors are unmasked. The best conditions for color are bright, cool days and chilly (but not freezing) nights. Adequate rainfall keeps the leaves on the trees.
Most striking reds are found on sugar and red maples, northern red oak, sumac, mountain ash, and tupelo. Yellow is found on birches, white ash, linden, maples, beech, aspens, and hickories. The season can last through November, depending on weather conditions and location.
This information was taken from an article written by Alice Zyetz, who has authored or co-authored several ebooks including “The Roadmap to Affordable Retirement.” It first appeared on roadtripamerica.com, and is published by KOA with full permission. | <urn:uuid:e853afd7-18ea-4d6e-8ac3-af666ab5df3d> | CC-MAIN-2016-26 | http://blog.koa.com/fall-photography-tips-what-creates-the-color/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395679.92/warc/CC-MAIN-20160624154955-00170-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.961807 | 215 | 2.53125 | 3 |
Other Names for this Disease
- Porphyria variegate
- Porphyria, South African type
- Protoporphyrinogen oxidase deficiency
- PPOX deficiency
Variegate porphyria is a form of hepatic porphyria most common in the white South African population. This autosomal dominant disorder may produce acute attacks (as in acute intermittent porphyria) as well as skin photosensitivity. The condition is caused by mutations in the PPOX gene, which lead to deficiency of the enzyme protoporphyrinogen oxidase. Acute attacks are managed and may be prevented as in acute intermittent porphyria.
Last updated: 7/19/2010
- Variegate Porphyria (VP). American Porphyria Foundation. 2010; http://www.porphyriafoundation.com/about-porphyria/types-of-porphyria/VP. Accessed 7/19/2010.
- Porphyria, Variegate. National Organization for Rare Disorders (NORD). 2001; http://www.rarediseases.org/search/rdbdetail_abstract.html?disname=Porphyria%2C%20Variegate. Accessed 7/19/2010.
- Porphyria. Genetics Home Reference (GHR). 2009; http://ghr.nlm.nih.gov/condition=porphyria. Accessed 7/19/2010.
- Genetics Home Reference (GHR) contains information on Variegate porphyria. This website is maintained by the National Library of Medicine.
- MedlinePlus was designed by the National Library of Medicine to help you research your health questions, and it provides more information about this topic.
- The National Digestive Diseases Information Clearinghouse (NDDIC), part of the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK), offers information on this condition. Click on the link to view information on this topic.
- The National Organization for Rare Disorders (NORD) is a federation of more than 130 nonprofit voluntary health organizations serving people with rare disorders. Click on the link to view information on this topic.
- Medscape Reference provides information on this topic. You may need to register to view the medical textbook, but registration is free.
- The Monarch Initiative brings together data about this condition from humans and other species to help physicians and biomedical researchers. Monarch’s tools are designed to make it easier to compare the signs and symptoms (phenotypes) of different diseases and discover common features. This initiative is a collaboration between several academic institutions across the world and is funded by the National Institutes of Health. Visit the website to explore the biology of this condition.
- Online Mendelian Inheritance in Man (OMIM) is a catalog of human genes and genetic disorders. Each entry has a summary of related medical articles. It is meant for health care professionals and researchers. OMIM is maintained by Johns Hopkins University School of Medicine.
- Orphanet is a European reference portal for information on rare diseases and orphan drugs. Access to this database is free of charge.
- PubMed is a searchable database of medical literature and lists journal articles that discuss Variegate porphyria. Click on the link to view a sample search on this topic. | <urn:uuid:e667ad1f-502f-4bf7-a765-2fca115a27e6> | CC-MAIN-2016-26 | https://rarediseases.info.nih.gov/gard/7848/variegate-porphyria/resources/1 | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783408828.55/warc/CC-MAIN-20160624155008-00083-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.866503 | 716 | 3.140625 | 3 |
Using some piezo sensors, this DIYer created DIY MIDI drums for the Arduino. All it takes is a handful of the sensors and a few resistors to put together. He's already assembled the source code, parts kit and detailed instructions so you can easily make one yourself. The piezo sensors can be used as buzzers but at the same time can also detect "knocks" or vibrations. When you strike each drum pad, a voltage is generated based upon your impact force.
Here are the basic materials you need to make your own Midi drum:
- Wire to connect piezos (speaker style wire is OK)
- Auto body thin metal sheet
- Spray glue & epoxy (or hot glue)
- Tape or heat shrink tubing | <urn:uuid:c5bae0d0-9cca-4ce8-b332-7ef415f4d14e> | CC-MAIN-2016-26 | http://hacknmod.com/hack/diy-midi-drums-using-an-arduino/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396949.33/warc/CC-MAIN-20160624154956-00131-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.910122 | 178 | 2.53125 | 3 |
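The sensing side of such a project boils down to threshold detection: poll each pad's analog value and, when a strike pushes it over a noise floor, map the reading to a MIDI velocity. The sketch below mimics that logic in Python for illustration only; the actual project runs as an Arduino sketch, and the 10-bit ADC range, threshold value, and function name here are assumptions, not taken from its source code.

```python
ADC_MAX = 1023      # a 10-bit Arduino analogRead() spans 0-1023
THRESHOLD = 100     # readings below this are treated as noise, not hits

def reading_to_velocity(reading):
    """Map a raw piezo reading to a MIDI velocity (1-127).

    Returns None when the reading stays under the noise threshold,
    i.e. no drum hit is registered.
    """
    if reading < THRESHOLD:
        return None
    # Scale [THRESHOLD, ADC_MAX] onto [1, 127]: a light tap yields a
    # quiet note, a hard strike yields full velocity.
    span = ADC_MAX - THRESHOLD
    return 1 + round((reading - THRESHOLD) * 126 / span)

# A soft tap, a medium hit, and a full-force strike:
print([reading_to_velocity(r) for r in (50, 300, 1023)])
```

On the microcontroller, the same mapping is typically paired with a short debounce delay so one strike doesn't fire several MIDI notes.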
noun[mass noun] Chemistry
A sugar present in milk. It is a disaccharide containing glucose and galactose units.
- The common sugars that form part of the diet are sucrose, lactose (milk sugar), and maltose.
- The enzyme lactase breaks down the lactose to galactose and glucose.
- Soymilk does not contain lactose (milk sugar) and can be drunk by those who are allergic to normal milk.
For editors and proofreaders
Line breaks: lac|tose
Most Widely Spoken Languages in the World
Chinese tops the list of most popular world languages, with over one billion speakers. English trails in third place, with 335 million speakers. This data represents first-language speakers.
Source: Ethnologue, 2014.
1. May encompass multiple dialects.
2. The macrolanguage Chinese [zho] includes 13 individual languages with at least 1 million speakers.
3. The macrolanguage Arabic [ara] includes 18 individual languages with at least 1 million speakers. | <urn:uuid:feb34283-fc92-4113-bc13-93ff28e5a85d> | CC-MAIN-2016-26 | http://www.infoplease.com/ipa/a0775272.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392069.78/warc/CC-MAIN-20160624154952-00019-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.885236 | 108 | 2.8125 | 3 |
Quake jolts Japan, no tsunami warning issued
Tokyo, July 3, 2012
An earthquake with a preliminary magnitude of 5.4 hit Eastern Japan on Tuesday, the Japan Meteorological Agency said.
The agency said no tsunami warning had been issued, and there were no immediate reports of casualties or damage.
On March 11 last year Japan's northeast coast was struck by a magnitude 9 earthquake, the strongest ever recorded in Japan, followed by a tsunami that caused massive destruction and wrecked the Fukushima Daiichi power plant, triggering the world's worst nuclear crisis in 25 years.
The disaster left up to 20,000 dead or missing, and the radiation leaks caused mass evacuations and widespread contamination, the effect of which has yet to be fully ascertained.-Reuters | <urn:uuid:5596da2e-1ee1-4f05-97b8-e80a4af1bad2> | CC-MAIN-2016-26 | http://www.tradearabia.com/news/INTNEWS_219924.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398209.20/warc/CC-MAIN-20160624154958-00123-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.951654 | 156 | 2.9375 | 3 |
Helping Children Develop Healthy Eating Habits
Jessica Bennett, dietitian with Vanderbilt Nutrition Clinic, talks with Stacey Bonner of the Vanderbilt Child and Family Center and gives some suggestions on healthy snacks to keep in the home, why it is important to limit portion size, and getting your child involved in the meal preparation.
Stacey Bonner: Welcome to this edition of the Vanderbilt University Health and Wellness Wellcast. I am Stacey Bonner with the Child and Family Center. Developing healthy eating habits in children is important. Jessica Bennett, dietitian with Vanderbilt Nutrition Clinic, has joined me in discussing ways to help children develop healthy eating habits. Jessica, how can a parent help their child develop healthy eating habits?
Jessica Bennett: Setting a great example is key. Kids will follow your lead. So, if you get excited about trying new fruits and vegetables, your kids will be excited about trying them too. If you are sitting there thinking that green beans are the worst thing on the planet, they are going to say it, and you will hear it for the rest of their lives, and so, it is very important to get excited about it and to set a good example. If they see you choosing that apple instead of the chips for a snack, they are going to want to follow your lead.
Stacey Bonner: How can a parent get a picky child to enjoy a wider variety of foods?
Jessica Bennett: You’ve got to make it fun. You can rename foods. You can cut them in fun shapes, even putting them in a neat bowl, get them involved in the kitchen trying new foods. You can even incorporate healthier foods into some unhealthy foods that they love, like pizza or pastas or macaroni and cheese.
Stacey Bonner: Just incorporating some of the stuff they already enjoy eating. So, maybe for pizza, putting pineapples on them and things like that?
Jessica Bennett: Exactly, or especially mushrooms: you can chop them up and then put them into taco meat, and they will never even know they are in there. Or soups, pasta sauces, things like that; they are great in smoothies, too.
Stacey Bonner: My child loves to snack. Do you have some suggestions on just healthy snacks to keep in the home?
Jessica Bennett: What child does not love to snack? It is important to make sure that you have healthy snacks available that are going to actually help keep them full. So, when they grab that bag of chips or that 100-calorie pack, even if it is a good portion size, it is still not going to keep them full like an apple or something high in fiber would. Fruits and vegetables make great snacks, and you can put peanut butter or trail mix on top of them, or different fruits, even fat-free cream cheese or Greek yogurt. You can make fun dips. Hummus is a great thing to get kids involved with to try. You can make a baked potato bar with low-fat cheese and salsa and Greek yogurt, and they can top it, or you could even do different smoothies. Kids love to throw in different fruits and vegetables and see the colors change in the smoothies. If you want to do some prepackaged snacks, it is important to work on portion sizes with kids and make sure they just get like a handful, and then if they are still hungry, go back for a fruit or a vegetable.
Stacey Bonner: You just mentioned something about portion sizes, so why is it important to limit the portion sizes?
Jessica Bennett: So, a lot of kids love to snack, like we talked about before, and it kind of turns into a grazing atmosphere where they have lots of snacks before dinner and they are not very hungry to actually eat their dinner. It also makes it so they are not eating as much variety of foods: if they are just eating chips and crackers when they get home, then they do not want to eat their vegetables at dinner or their protein sources. So, it is important to work on portion sizes, especially of snacks, and you want to make sure they are actually hungry for their meal. You can even use smaller cups or bowls to put foods in. I always tell kids never to eat anything out of the box or the bag it came in; they will automatically eat more. Take it out, put it in a bowl. Don't eat snacks while watching TV or playing with your Kindle or on the computer. Again, you will automatically eat more.
Stacey Bonner: When picking your child up from school, is this a good time to have a healthy snack in the vehicle?
Jessica Bennett: Uh huh. Especially if you are on the way to a practice or something like that, it is important for them to have a snack. What we don't want to see happen, though, is having that snack in the car and then another snack when they get home; that just feeds into that mindless eating while you are driving and things like that. But especially if they are on the way to practice or something like that, it is important for them to have a healthy snack. We say the original fast food was fruit, so grab it and go.
Stacey Bonner: How key is to get your child involved in the meal preparation?
Jessica Bennett: It is so key if you want to get them to try new foods. We know that kids get excited when they are involved in the kitchen. They are more likely to try the foods if they were involved in the cooking process. Even having them pick out a new recipe, or take them to the grocery store and have them pick out a new fruit or a vegetable. You can even take them to a farmer's market, where they can interact with some of the farmers, or to a farm, where they can pick out new produce and see where it comes from, and they will get more excited about trying it. Even eating the rainbow: having them pick out different colors that they want to try or what colors they want to add to their plate that night. If it is age-appropriate, getting them involved in the cutting, peeling, and washing, all of that is great, too.
Stacey Bonner: Those are some great tips, Jessica, thank you.
Jessica Bennett: Thank you.
Stacey Bonner: Thanks for listening. Please feel free to leave us any comments on this Wellcast by clicking the “Add New Comment” link at the bottom of this page. If you have a story suggestion, please email us at email@example.com or you can use the “Contact Us” link on our website at healthandwellness.vanderbilt.edu.
– end of recording – | <urn:uuid:0eb1c40f-c50d-4078-b41c-b7895c33f81d> | CC-MAIN-2016-26 | http://healthandwellness.vanderbilt.edu/news/2014/03/helping-children-develop-healthy-eating-habits/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398209.20/warc/CC-MAIN-20160624154958-00194-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.967857 | 1,401 | 3.25 | 3 |
Notes on Nathaniel Treat and his family
Frankfort Maine 1774-1976
974.1 tf82f Maine State Library
The building of Fort Pownal at Fort Point in 1759 marked the beginning of the settlements along this side of the Penobscot River. Before Fort Pownal was built, there were French trading-houses at Bagaduce (Castine) and also a fort, Fort George, but the first permanent settlement made by a white man in what originally embraced Frankfort, or on the Penobscot waters, was made by Lt. Joshua Treat in May of 1759.
Lt. Treat was at Fort George as early as August 1750, and worked as an armorer. When Gen. Waldo and Gov. Pownal reached Fort George on their way to Fort Point, Lt. Treat joined the expedition. Because he knew the Indian language, he acted as an interpreter as well as an armorer at Fort Pownal. At one time, tradition has it, Treat paddled up the river in a canoe with Gov. Pownal and landed on the banks of the Sowadabscot Stream in Hampden, where he acted as interpreter in a conference held with the Tarratine Indians and introduced the governor to the chief, Modocowanda. This conference ended all troubles with the Tarratines. Joshua Treat lived near Fort Pownal and continued as armorer until the mid-1770s, when the fort was abandoned.
In 1774, Treat built a log house where the Congregational Church now stands. This was the first log house built in Frankfort. Treat also built the first saw mill and the first vessel in what is now Frankfort. When the British came up the river in 1779, they burned the mill, but Treat rebuilt it in 1783.
Treat lived in his log house about ten years before moving to the Point Farm where he spent the remainder of his days. He died in 1826.
William and James Treat, grandsons of Joshua Treat, became Treat and Co. As such, they built and owned many vessels. They also ran an extensive mercantile business. Joshua Treat, 3rd, built the first frame house in Frankfort, and his sons, Upton and Adams, became the firm of U. and A. Treat. This business was in the store later known as McCambridges. The two houses below this store were owned by Amos and Hiram Treat. Across the street is the Col. Robert Treat house, which is now the Waldo Pierce Reading Room. The home of Adams Treat became the Hillside House, a hotel in the early 1900s.
Charles H., son of Capt. Henry and Abigail Treat, was treasurer of the United States under Pres. Theodore Roosevelt. Edwin Parker Treat married Sarah, the daughter of Capt. Andrew Tyler. Minnie Hubbard Treat was the daughter of Robert Treat, Jr. and Ann Eliza Tyler. She married Robert Gordon McKay of Massachusetts.
SAW MILLS AND LUMBERING
If one could visualize the expanse of forest and the convenient, effective waterpower that existed in the early days of Frankfort, it would not be hard to understand the flourishing lumber business that sprang up in this area. Young men saw the fortune in the spruce, pine and oak that grew on the riverbanks, and the business grew by leaps and bounds. Woods were cleared, log cabins and huts were built, and this was the beginning of the community and, later, many fine homes. It is believed that Joshua Treat Jr. built the first log cabin at the site of the present Congregational Church.
Joshua Treat Jr. (and we will find the name of Treat through all of Frankfort's history) built the first of the water-powered saw mills on Marsh Stream. This was burned by the British in 1779 and rebuilt in 1783. Relics of these mills are still evident. Wooden dams were built and log drives were conducted. Shipbuilding flourished and lumber was shipped far and wide through the 19th century. Frankfort grew into a thriving lumber town.
Saw mill operations continued into the 20th century. Steam-powered mills were started, and a book, "An Old River Town", mentions a steam-powered mill owned and operated in old Frankfort by Theophilus Cushing in 1841. A steam-powered mill on Tyler Lane turned out spool bars in the 1920's. The Turner water-powered mill produced shingles in the 1930s, and Mr. Earl Grindell, well known and respected citizen and longtime selectman of Frankfort, operated this mill the last time it was used. Mr. Earl Grindell and Mr. Clifford Anderson, Frankfort residents, operated a steam-powered lumber and shingle mill in the 1940s, and products were shipped to Bangor and Massachusetts. There are no operating sawmills in Frankfort now. Mr. James Tripp and Sons conducts a long-standing and successful lumber business.
A land office, organized in 1828, appointed agents to arrange the sale and settlement of public lands, and this agent was made Forest Commissioner for the protection of the forest lands. It is interesting that at the time of this writing, Frankfort has a tract of woodland consisting of 1500 acres known as the Town Forest. This property has been valued at $200,000, and the elected town forest committee of nine men is trying to determine the best possible use of this very valuable asset.
"Old Orono Oddments
a collection printed in the Old Town Orono Times"
by Dr. A. Douglas Glanville
Compiled and published by the Orono Historical Society 1993
974.1 to74g 1993 Maine State Library
January 6, 1988 - The red brick house at 114 Main Street is listed on the National Register of Historic Places as the Nathaniel Treat House for its original owner. Treat, who built the house in the early 1830s, was an important resident in the middle years of the 1800s who is barely known or completely unknown to many of the town's present residents.
Nathaniel Treat was born in Frankfort, Maine, and like several others from that town moved up the river to Orono. He came here about 1828 and soon erected and successfully operated what became known as the Treat mill on the Bennoch dam near the mouth of the Stillwater River. He also became a large landowner. In addition to his lumbering operations he became involved in town affairs during his residence here. Between 1832 and 1856 he served eight terms as a Selectman. He was often a member of town committees and involved in projects promoting the town's development. He became the President of Orono's Stillwater Canal Bank.
Some Orono residents refer to the Main Street house as the Dunn House because in their younger days it was owned and occupied by Charles J. Dunn who became Chief Justice of the Maine Supreme Court.
A visitor to present day Orono might make an extensive tour of the town without discovering the site of Orono's former brickyard. In fact, there are individuals who have lived in town for a decade or more who do not know that there once was a brickyard here. If one has access to an 1855 map, he will find just east of the railroad tracks and west of Cedar Street an area designated as J. Walker Brick Yard.
In her 1926 History of Orono, Mrs. Hannah Rogers states that bricks were made in the old brickyard as early as 1820 and that the old brick houses in town were made from these bricks. These houses would include the Nathaniel Treat house at 114 Main Street; the Ludo Thayer house, 93 Main Street, painted white and which was the former St. Mary's convent; the brick house near the lower end of Middle Street; the brick apartment house on Mill Street, torn down to make way for Hasbrouck Court; and the so-called "Old Bank Building" at the intersection of Forest Avenue and Bennoch and Main Street. Mrs. Rogers also states that after the old brick houses were built (in the 1820s to approximately 1840), few bricks were made in this yard until 1867 and 1868 when John W. Mayo and his son Dr. E.N. Mayo manufactured bricks for the building now called the Masonic building, at the corner of Mill and Main Streets, and for the tall chimneys and furnaces for burning waste from the mills. She adds that J.W. Mayo was the last to make bricks from the old yard. She does not give the final date, but the cemetery records show that J.W. Mayo died in 1894.
Since most buildings in Orono were made of wood, which was the main product of the sawmill town through much of the 19th century, there was less demand here for bricks than in towns more removed from the supply of lumber. Their chief local use was for chimneys and fireplaces. Consequently brick making never became one of the leading industries of the town.
Political news, especially of the party it favored and presented in colorful language, was frequently a prominent feature of the old Bangor Daily Whig and Courier. On August 26, 1856, in reporting a Republican convention held in Bangor the day before, the editor included the following statement: "The enthusiastic Republicans of Orono, Stillwater, Old Town, and Bradley came down upon us in an immense procession of carriages of all descriptions, from single wagons with three, four and five people in each, up to tremendous four and six horse teams, carrying 15, 20 and 30 persons each, bearing all manner of patriotic and quaint devices upon them. The teams, wagons and carriages from that direction alone must have extended a mile or more in length."
Orono's first bank was the Stillwater Canal Bank, which was incorporated in 1835; the accompanying illustration is a photocopy of a two dollar bank note issued by the bank and signed by the President, Nathaniel Treat, and by the Cashier, E.P. Butler. The bank surrendered its charter and ceased to exist after 1842. Its short existence was due largely to the fact that the late 1830s was a period of "hard times" when many businesses failed and many individuals lost their property because they could not pay their debts. The Stillwater Canal Bank was succeeded in 1853 by the Orono Bank, of which E.P. Butler was also the cashier. Later, this bank was replaced by the Orono National Bank.
One of the earliest records of efforts to involve women in Orono's governmental affairs pertains to the annual town meeting of 1911 when there was an article in the town warrant to add a woman to the Board of Park
The following year, at the annual town meeting, John Hutchings was re-elected and the two offending selectmen were replaced. At the meeting the Buffum committee made a report which was accepted along with the following resolution which was made part of the town records:
"Resolved: That the Selectmen of the town for the year 1837, in refusing to receive a committee chosen by the inhabitants thereof at a legal meeting and to afford them such assistance as might be in their power, in order that they might receive a full understanding of the pecuniary affairs of the town under the official control of its affairs, proved that they were unqualified and unworthy servants of the people and made themselves justly obnoxious to their severest displeasure."
In 1855 a series of Lyceum lectures was given in Orono. Included was one by Rev. Coldwell on the Relations of Geography and Geology to the History of Men. Another was an Anti-slavery Lecture by Rev. Frederick Douglass, a leading northern Black of the time, much sought after for lectures.
Rev. Maltby of Orono married Israel Washburn, Jr. to Miss Mary Ann Webster, daughter of Col. Ebenezer Webster, Sunday evening, October 24, 1841 (Whig & Courier, Oct. 29, 1841).
Sometimes a report of the facts "and nothing but the facts" leaves one wondering what events or situation led to the stated facts. Such is the case with regard to an item in the Orono town records for 1835. The record indicates that at a town meeting held on April 8, 1835, three selectmen were chosen because the ones previously chosen (March 7, 1835) "refused to take the oath prescribed by law." No hint is given in the record as to why the previously chosen selectmen, Ira Wadleigh, Levi Hamblen and Edward Kimball, refused to take the oath. The selectmen chosen to replace them were John Bennoch, Jr., Henry Richardson and Nathaniel Treat.
When we review the early history of the town we are reminded that two years later (as we reported 5/28/87 in these Oddments) two of the three selectmen of 1837 were censured for refusing to provide information and for not allowing a committee (consisting of Lower Stillwater residents) to examine the town's books in connection with financial affairs of the town.
In connection with a possible relationship of these two events it is important to remember that at the time Orono consisted of both Old Town and Lower Stillwater (present day Orono) but that they were moving toward legal separation in 1840. Consequently, one may ask if the differing interests of the two communities led to friction, which was reflected in affairs dealt with at town meetings.
What Israel Washburn, Jr. said in his 1874 address when Orono's first town hall was dedicated probably contains the answer to our question. We quote his actual words:
"After the rival villages of Stillwater and Orono had grown to importance in business and population, the inevitable jealousies and rivalries between communities situated as these two were broke out. They appeared especially at the town meetings in the spring, but were felt at all times."
"The separation was amicably effected, and was in the interest and convenience, as well as of harmony and good neighborhood. The two divisions of the old town have been excellent friends ever since the causes of differences were removed."
Workmen's pride of accomplishment is reflected in the following which appeared Aug. 7, 1857, in the Bangor Whig & Courier: "A correspondent informs us that Wednesday Ebenezer Webster's gang cut 6,000 feet of inch boards from pine logs, in one hour. They were all trimmed and piled away within the hour. The logs averaged four to the thousand. We consider this smart work. Our correspondent says if anybody on the river or elsewhere can beat this report, the crew stand ready to beat them in return." At the time Ebenezer Webster was a leading Orono lumberman.
The issues of the Bangor Whig & Courier in the later years of the 1830s, following the financial panic of 1837, often contained detailed announcements of mortgage foreclosures and sheriff's sales of Orono properties. These announcements not only provide a dramatic picture of the financial disaster faced by many Orono citizens, but they also often provide detailed accounts of who owned specific pieces of property and where they were located. Reference was often made to the book and page where deeds were recorded in the Penobscot County Registry of Deeds. In some instances the early property lines can be traced today. Such is not likely to be the case when a specific tree, a wooden stake, or a pile of stones is given as a boundary marker.
On December 14, 1837, the Bangor Whig & Courier published a copy of a petition by William Smith and others addressed to the State Legislature requesting to be incorporated and empowered to build a boom from Ayers Island in Orono across the Penobscot River to the eastern shore. The purpose of the boom was to catch and hold logs which annually escaped down the river to tide water and thus were lost by their owners and this occasioned considerable financial loss.
On April 14, 1838, John Bennoch and Samuel W. Freese issued a call to the members of the Stillwater Iron Foundry Company to assemble at the office of Washburn and Prentiss in Orono on May 1st at 2 p.m. to accept the company's charter and to elect officers. Washburn and Prentiss was an Orono partnership of lawyers which dissolved later. Each member of the partnership continued the practice of law and became outstanding members of the profession. Israel Washburn continued in Orono, was elected to Congress and then to the governorship of the State. Henry Prentiss moved to Bangor, where he became an outstanding member of his profession in the 1800s.
In the Centennial address given in Orono in 1874, Washburn spoke briefly of the Stillwater Iron Foundry Co. He stated that the foundry was built in Orono on lower Mill Street below the old Sleeper tavern. Because of losses incurred by the failure of parties for whom it had done work, the company was compelled to wind up its affairs after two or three years. The late 1830s proved not to be a good time to launch a new business or industry.
In October, 1838, Nathaniel Treat and 57 others petitioned the State Legislature to be incorporated for the purpose of building a toll bridge across the Penobscot River from Orono to Bradley at Great Works Falls. It was stated that a bridge at that place would be a great convenience to the inhabitants of the villages of Upper and Lower Stillwater and to the traveling public generally who have occasion to travel up and down the Penobscot River. We do not know what action the Legislature took on the petition, but we well know that 150 years later no such bridge has been built, although not many years ago a bridge in that general area was discussed in relation to rerouting highway traffic in this section of the State.
In 1840 a number of Orono's leading lumbermen, merchants and politicians were active in supporting the Whig candidate for President, William Henry Harrison, who opposed Martin Van Buren, President and candidate for re-election. Van Buren's re-election was strongly opposed for a number of reasons. The financial panic of 1837 had occurred during his administration and many held him responsible for it. As we have previously indicated in this column (see, for example, August 10, 1988) Orono's prosperity, based largely upon the lumbering industry, was badly affected by the panic. Consequently the lumbermen and those dependent upon them for their livelihoods were eager to prevent Van Buren from being re-elected.
At its national convention the Whig party had chosen William Henry Harrison to be its candidate for President and John Tyler, who had previously been identified with the Democrats, was chosen as Vice Presidential candidate. Tyler was chosen in the hope that he would win for Harrison the votes of unhappy Democrats who would not wish to vote for Van Buren. Since part of Harrison's home had once been a log cabin and since it was claimed that he served cider rather than wine at his table, his opponents began referring to him as the "log cabin and hard cider candidate." Rather than taking offense, his Whig supporters eagerly accepted this designation and made it a feature of their campaign in support of Harrison.
In April, 1840, the Whigs of Orono formed an association as an auxiliary of the Penobscot County Whig Associates. The officers they chose included, among others, Dr. William H. Allen, William Averill, John Bennoch, Josiah Bennoch, William Colburn, Gideon Mayo, John Read Jr., Nathaniel Treat and Israel Washburn Jr. They published an announcement in the Bangor Whig & Courier in which they invited those who wanted a change and who preferred Harrison to Van Buren to call at their "log cabin" (headquarters) which they would find open and where they would be welcome. They added that they would take "hard cider as often as once a month." They added, "We regard this hard cider a most wholesome beverage. It is like Falstaff's sack, and dries up in the brain all the foolish, dull and cloudy vapors...It also warms the blood which before was cold and settled, and has left the liver white and pale, and brings the heart to the deeds of honor and courage. It dispels the fog of error." This somewhat boastful political rhetoric lacked the vehement negativism of some later political pronouncements. We might also point out that not many years later when temperance held sway and some of these same gentlemen took the pledge, they publicly condemned hard cider as an evil spirit.
Harrison won the election, was inaugurated March 4, 1841, and died of pneumonia one month later, April 4, 1841. When Tyler became President after Harrison's death, the Whigs were not altogether happy with him when he began to oppose some of their policies and principles. Some years later some of the men who were glad to have him at the "log cabin" regretted it when he supported and joined the Confederacy.
3/2/1860 At Orono, March 24th, of consumption, Martha W., daughter of Nathaniel and Mary P. Treat, aged 14 years and 2 months.
If they appeared on a program at which they were asked to identify some of Orono's historic personages, many of the town's older and informed residents could say something significant about such persons as Jeremiah Colburn, John Bennoch, Asa Babcock, John Marsh or Ebenezer Webster. Some, if asked about E.P. Butler, probably either could not identify him or do more than state that he was an early Orono bank cashier. This present day lack of knowledge about him is largely due to the fact that our historians, Israel Washburn, Hannah Rogers, and Clarence Day have given Butler little more attention than to state briefly that in the 1830s he was cashier of the short-lived Stillwater Canal Bank and that in the 1850s he became the cashier of the Orono Bank.
Elvaton (sometimes recorded as Elverton) P. Butler, or simply E.P. Butler, as he usually signed himself, played an active and important role in Orono's business and civic affairs from roughly 1840 to 1880. He was born in 1811 in Frankfort, Maine, and came to Orono in the 1830s. His wife, Abigail, was also born in Frankfort and was the sister of Nathaniel Treat, the prosperous Orono lumberman who owned the red brick house on Main Street which is on the National Register of Historic Places. In his early days in Orono Butler was an apothecary dealing largely in patent medicines. He soon became involved in other matters. As already noted he was cashier of the Stillwater Canal Bank, which was chartered in 1835 but which lasted only a few years. His brother-in-law, Nathaniel Treat, was its president. When the Orono Bank came into existence, Butler was chosen its cashier and he served in that capacity for many years. He was also chosen to be the treasurer or secretary of several Orono corporations. In the middle years of the 1800s the pages of the Bangor Whig & Courier often carried notices signed by E.P. Butler of stockholders' or directors' annual meetings of various corporations.
Butler's activities were by no means confined to the financial concerns of banks and corporations. He played an important part in town affairs over most of his life in Orono. He was elected Selectman several times. He was also a member of the Superintending School Committee for three years. On five occasions between 1840 and 1874 he was chosen Town Clerk. The office in which he served longest and almost continuously except for occasional brief breaks was that of Town Treasurer. His years as Treasurer extended from 1844 to about 1880. His fellow citizens obviously considered him to be a competent and trustworthy manager of the town's money. There were times when they chose him to be Clerk and Treasurer at the same time. If he had kept a daily diary and if it was available to us it would be an invaluable source of information about Orono's financial and civic history in the Butler years. E.P. Butler lived in the house built around 1840 at what is now 17 Oak Street, where it still stands.
Sept. 20, 1990

LAWYERS: Israel Washburn, Jr.; Nathaniel Wilson

Some Leading LUMBERMEN: John Atwell, Charles Buffum, B.P. Gilman, Nathaniel Treat, Eben Webster, Elijah Webster, James Webster, Paul Webster

DOCTORS: Wm. H. Allen, Alonzo Plummer

Value of Real Estate: $7,000; $4,200; $29,750; $4,000; $4,000; $2,000; $3,000; $5,000

Value of Personal Estate: $5,000; $650; $6,000; $300; $4,000; $700; $1,700; $5,000
The lumbermen listed above were at the time Orono's most prosperous lumbermen, except for Daniel White, of whom we shall speak more fully below. In addition to the lumbermen listed above there were many others in Orono in the 1860 census, but in general the values of their real and/or personal estates were not more than $1,000 and usually considerably below that amount. The tabulated data clearly show that in terms of financial status in 1860 Orono's two lawyers rated considerably above nearly all of its lumbermen.
It was the lumbermen themselves who contributed in large measure to the overall financial well-being of Orono's lawyers in the mid 1800s. The records show that many of them would take one another to court "at the drop of a hat" over some matter having to do with lumbering such as water rights, payment of tolls, timber ownership, etc. The files of the Bangor Daily Whig & Courier of that period listed many such suits being tried in the local courts, and the name of Washburn or Wilson was often listed as the lawyer for one or the other of the parties involved. Some of the suits were of long duration and figure prominently in the history of lumbering in Maine.
We thought it would be of interest to compare the wealth of Orono's other small professional group, the medical men, with that of the lawyers. The information derived from the census report and presented above shows that in 1860 Drs. Allen and Plummer fell far behind lawyers Washburn and Wilson in the value of their real and personal estates. This raises some interesting questions about the relative frequency with which each profession was called upon and the amount paid for its services. It probably partly reflected the fact that if you had a grudge you called a lawyer and expected to pay more than if you were ill and called a doctor. (In those days you usually called the doctor rather than calling on him at his office.)
Above we mentioned the lumberman Daniel White as the one Orono lumberman whose real and personal estate values were greater than those of either Washburn or Wilson. The respective values of White's estates were $16,200 and $20,000. Washburn, in his 1874 Centennial address given at the dedication of Orono's first town hall, in commenting upon Daniel White, gave information which indicates why his financial status differed markedly from that of the other lumbermen we have considered. Washburn stated that in the early years the lumber market often fluctuated greatly from year to year. Some years the demand was good and sales and profits were correspondingly good. That tended to lead to over-investment in and over-production of lumber the next year. Demand fell and prices were so low that lumbermen lost money and either went bankrupt or at least lost their credit.
White never took great risks. When he made a good profit he did not invest all of his profit in the next year's market. Having capital on hand he could hold his lumber when the market was poor and not sell at a loss as many other lumbermen had to. Also, it was his practice to hire the best men, provide the best food and enough of it, and pay fair wages promptly. The result was that he got from his crews "more good and profitable work than any man in those days could."
Charles Buffum's wife, Lydia, survived him until 1912. Their daughter, Lydia, married Lucius Merrill, Professor at the University of Maine. According to the 1900 US Census, Mrs. Buffum was then living with her daughter and son-in-law, Lydia and Lucius Merrill. The Orono Directory listed them as living at 14 Bennoch Street. Street numbers have changed since that time, but the available evidence suggests that what was then listed as 14 Bennoch Street was the Josiah Bennoch house, which Charles Buffum had bought on his return to Orono from New Hampshire. The 1910 Census lists Mrs. Buffum as then age 81, living with the family of Lucius Merrill, her son-in-law. Evidently, she ended her days in 1912 at what for years had been the Charles Buffum home on Bennoch Street.
In the first half of this century the sound of the siren connected with the Fire Department central station was a daily feature of life in Orono. The sound was a powerful one, which normally could be heard throughout Orono. Some of the residents whose roots were local rather than "from away" pronounced it as "sigh reen." Whether that was primarily an Orono pronunciation or whether it was common in other Maine localities we cannot say.
One purpose of the siren was to indicate the location of a fire. The town was divided into a number of areas and each area had its individual number. When a fire occurred the area in which it was located was indicated by the number of blasts of the siren or by the sequence of long and short blasts. This helped the volunteer firemen know where to go to fight the fire if they were not at or near the fire station when the alarm was received. It also helped to inform the public about the location of the fire.
The siren sounded each evening at 9 o'clock to indicate the beginning of curfew, the time when children and young teenagers not accompanied by an adult were to be at home and off the streets. Although the ordinance was not always strictly enforced, it tended to reduce rowdiness and vandalism at night. Parents also found it a helpful way to indicate to their young offspring when they wanted them to return home for the night. Some children took the matter very seriously. The writer recalls an occasion when a young boy, barely of school age, was returning home with his parents and the siren sounded when they were two or three blocks from reaching home. He wanted his parents to hurry so that Mr. Jeddrie, the fireman who sounded the siren, would not come and get him for not being at home when the siren sounded.
Another use of the siren was to indicate "no school" days when a heavy snow fall or a freezing rain storm made closing school advisable or necessary. On such occasions, the signal was given at 7 a.m. and was eagerly listened for by those children whose favorite place for spending the day was not the school room.
On weekdays the siren sounded one blast at noon as a means of making a routine test of it. It also provided a means whereby a person could check the accuracy of a personal watch or clock or know the time if no timepiece was immediately available.
After World War II the use of the siren for the various purposes mentioned above was discontinued, and for a time it was missed by some of the old timers who had lived with it and more or less depended upon it for much of their lives. Of course, they, along with the rest of us, can turn up the volume on the television set and let it more continuously blast their ears.
In the centennial address given in 1874 on the occasion of the dedication of Orono's first town hall, former governor Israel Washburn spoke about the great political excitement which prevailed in town in the middle years of the 1800s. He went on to say that from 1840 when Old Town and Orono separated, Orono uniformly cast Whig majorities until the formation of the Republican party which replaced the Whig party in the late 1850s.
The pages of the Bangor Whig & Courier give a vivid picture from the Whig point of view of the political excitement in Orono in the presidential election of 1844. The Whig candidate was Henry Clay of Kentucky and James Polk of Tennessee was the Democratic candidate. The Whig & Courier was a strongly partisan newspaper and from it we get very little information about the participation of Orono Democrats in the election.
One of the leading national issues at the time had to do with slavery, including the possibility of its elimination or, more likely, prohibiting its introduction in new states as they joined the Union. Henry Clay was more in favor than Polk of prohibiting the introduction of slavery in new states. As a consequence, more Maine voters favored Clay over Polk. Another of Clay's positions which won him strong support from Orono industrialists and business men was his advocacy of a high tariff on the import of foreign products which competed with similar products produced in this country.
Orono Whigs' support of Clay was no weak or half-way matter. On Feb. 19, 1844, the Whig & Courier announced the formation of the Clay Club in Orono. The list of officers elected by the members gives a good indication of who its prominent supporters were. The President was one of Orono's leading lumbermen, Asa W. Babcock. Other officers were Isaac Sanborn, Eliphalet Perkins, Levi Weeks, E. P. Butler, Gideon Mayo, Nathaniel Treat, Abner Starrett, S. H. Joy, Samuel White, Ludo Thayer and Alex Rogers. The Whigs made a great event of setting up a very tall ash pole and flying their political banner from it.
In the months that followed, up to election day, the Whig & Courier carried numerous accounts of the activities of the Orono Club. Israel Washburn, who was becoming a leading Whig political figure in the state as well as locally, gave numerous "rousing" speeches in support of Whig policies and of Clay. The Whig & Courier carried accounts of speeches by other speakers to Orono voters in which Democratic supporters were vigorously attacked. One of the latter was Hannibal Hamlin of Hampden, who was then a Democratic politician. He later became a Republican and was Lincoln's first Vice President.
The Orono Whigs participated in a number of parades, both at home and in other towns. They attracted much attention when Colonel John Goddard entered a parade with them in his very large wagon constructed for that purpose. It was drawn by ten horses and carried as many as one hundred men at a time. The Whig & Courier gave considerable notice to Orono's flamboyant lumberman and his over-sized wagon. We gave an account of him in this column on Feb. 11, 1987.
At the national level, Polk defeated Clay. The Whig & Courier had little to say about the outcome other than to express its disappointment. It was pleased that Bangor had supported Clay over Polk. It made no mention of how Orono had voted, but we understand that it cast a majority for Clay.
Nathaniel Treat House Bed & Breakfast, 2214 4th Ave W, Monroe, Wisconsin 53566. (608) 325-5656
Continuing its world-leading program of conservation and animal breeding, Disney's Animal Kingdom recently welcomed its 28th gerenuk calf, weighing in at eight pounds and standing approximately 1 ¾ feet tall, after a seven-month gestation.
The female calf, named Squirt, was born November 25 to her five-year-old mother Sushaunna and nine-year-old father Jingle. This latest birth is part of the Association of Zoos and Aquariums' Gerenuk Population Management Plan.
A gerenuk, which in Somali means “giraffe-necked,” is an African antelope with a uniquely long neck that allows it to eat leaves off trees. Unlike other gazelles and most antelopes that graze on grasses, gerenuks often stand erect on their hind legs and stretch their long necks to browse on taller bushes. They are typically found in areas of dry, bushy scrub. Gerenuk are very well adapted to an arid habitat and do not need to drink much, since they get enough water from the plants they eat. Most adult gerenuk reach approximately three feet at the shoulder and weigh between 80 and 115 pounds.
At Disney’s Animal Kingdom, guests can see gerenuk when they visit the Pangani Forest Exploration Trail, a self-guided walking tour of native African wildlife. Pangani means “place of enchantment,” and the exploration trail is an enchanting adventure into a lush, tropical forest and a verdant valley that are right out of the heart of Africa. In addition to gerenuk, animals guests can find on the trail include gorillas, hippos, okapi, naked mole rats, meerkats and many species of exotic birds.
Perhaps one of the most feared conditions associated with aging is dementia — an impaired ability to think or recall. The most common cause is Alzheimer’s disease. Aging also may be associated with lesser — but still disruptive — thinking and memory problems, which is called mild cognitive impairment. Mild cognitive impairment is a common precursor to Alzheimer’s.
Researchers, doctors, drug and biotech companies, and medical institutions worldwide are urgently seeking to better understand the intricacies of brain function — and particularly to develop therapies to prevent or treat diseases such as Alzheimer’s and other forms of dementia.
But despite this substantial effort, no drug or therapy can beat the powerful effect that regular physical exercise has in preventing Alzheimer’s and improving brain function — even in those with Alzheimer’s.
One Mayo Clinic study showed that those who regularly engaged in moderate exercise five or six times a week in later life reduced their risk of mild cognitive impairment by 32 percent compared with more sedentary people. Those who began exercising at midlife saw a 39 percent reduction in the risk of mild cognitive impairment.
Numerous other studies have come to similar conclusions. In one study, the effect of exercise on those with memory problems was about as good as the documented benefit of donepezil (Aricept), a drug that can modestly and temporarily improve symptoms of mild cognitive decline.
It’s not entirely clear how exercise protects the brain from Alzheimer’s, but research has pointed to several important possibilities.
One reason may be increased brain volume and possibly the creation of new brain cells. Gray matter makes up the bulk of brain tissue and is the location of many important brain functions, including memory. Gray matter decreases in volume with age. In addition, an area of gray matter in the brain called the hippocampus — which is crucial for memory — progressively deteriorates in those with Alzheimer’s.
Exercise appears to preserve gray matter, the benefits of which have been directly documented with brain imaging. In one study of older adults, significant enlargement of the hippocampus was observed in those who did moderately intense exercise over the course of one year when compared with people who did only basic stretching and toning for one year. Moreover, this increased hippocampus volume was associated with improved cognition.
Limited laboratory evidence also suggests the possibility that new brain cells can be created within the critical parts of the hippocampus. Increased blood volume — suggestive of greater biological activity — has been observed in parts of the hippocampus in individuals who are physically fit. This indicates that exercise may have the potential to slow the decline of Alzheimer’s.
Another possible explanation for how exercise can help ward off dementia is improved connectivity between brain cells in those who exercise regularly. Substances such as brain-derived neurotrophic factor (BDNF) are like fertilizer for brain cells. They have been associated with a wide variety of biochemical changes in the brain consistent with making new brain connections. Levels of BDNF circulating in the blood are lower in those with Alzheimer’s than in those who don’t have the disease. BDNF levels appear to be increased by exercise.
Improved blood vessel health also may protect the brain from dementia. Fitness is often a sign of good blood vessel health. That’s because fitness helps prevent — and is an effective therapy for — many conditions that contribute to damage and clogging of blood vessels, such as high blood pressure, diabetes and undesirable cholesterol levels.
Aging brain arteries are highly susceptible to narrowing and closure, especially the smallest of these arteries. The result of narrowing of small vessels can be seen on routine magnetic resonance imaging (MRI) as white patches (leukoaraiosis) in the brain. Blockage of small brain arteries causes potentially unnoticeable “ministrokes” that damage tiny areas of the brain. Leukoaraiosis and blood vessel closure can cause or contribute to the development of cognitive impairment and Alzheimer’s.
In people with mild cognitive impairment or Alzheimer’s, regular exercise can have a major impact on health, as it can with anyone. Research has shown that people with Alzheimer’s who exercise have less risk of potentially life-threatening problems such as falls, osteoporosis, cardiovascular disease and other age-related problems. Depression and anxiety are common problems in both diseases, and exercise can significantly improve these conditions as well. — Ronald Petersen, M.D., Ph.D., Alzheimer’s Disease Research Center, Mayo Clinic, Rochester, Minn. | <urn:uuid:3266b4e4-981a-4833-ba54-ab2a7c12d656> | CC-MAIN-2016-26 | http://m.wfmz.com/regular-exercise-can-improve-brain-health/23741600 | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783408840.13/warc/CC-MAIN-20160624155008-00059-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.953127 | 916 | 3.515625 | 4 |
Savings bonds can be useful in estate planning because, on the death of the original bond owner, the co-owner or beneficiary becomes the owner. A will is not needed.
It's important to register your bonds correctly. Registrations for Series EE and I Bonds, both electronic and paper bonds, can vary, so it's a good idea to find out how to register each type of bond. View information about registrations for EE and I Savings Bonds.
A trust is a right of property, real or personal, held by one person (the trustee), who is appointed or required by law to administer it for the benefit of another.
- Can be created for any purpose that is not illegal or against public policy.
- No limit to the number of trustees or the number of persons who may create a trust.
- Can be established by any organization or competent adult holding a legal title to property.
- A testamentary trust can be created under the will of a decedent.
The person who creates a trust is called the grantor, maker, donor, trustor, or settlor. The person charged with administering a trust is called the trustee. The party for whose benefit a trust is created, or who is to enjoy the income of the trust, is called the beneficiary or donee.
See information about trust registrations for electronic bonds.
A Trust Instrument is the document that sets out in writing the authority, duties, and rights of the parties involved. The instrument may be called an Agreement, Indenture, Declaration, or Deed. In the case of a testamentary trust, the trust instrument is the decedent's will. | <urn:uuid:34d70a75-4225-4911-9e61-605cce0dc21c> | CC-MAIN-2016-26 | http://treasurydirect.gov/indiv/planning/plan_estate.htm | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783402746.23/warc/CC-MAIN-20160624155002-00198-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.932647 | 336 | 2.75 | 3 |
In an effort to place depression front and center on the national and international research agendas, UCLA has launched the Depression Grand Challenge, which aims to reduce the health and economic impacts of depression by half by the year 2050.
Depression costs the U.S. economy more than $200 billion annually in medical costs and lost productivity. It also is the largest contributor to the risk for suicide, and more than 40,000 Americans take their own lives each year.
And although depression is the leading cause of disability worldwide, research spending on the disorder substantially lags that of other illnesses.
The UCLA effort is in line with President Barack Obama’s 2013 call for companies, research universities and other organizations to solve grand challenges — which he referred to as “ambitious but achievable goals.” UCLA’s Depression Grand Challenge is the largest university-led grand challenge, with an anticipated budget of $525 million for the first 10 years of its planned 35-year duration.
The program is UCLA’s second Grand Challenge, following the ongoing Sustainable L.A., a research initiative launched in 2013 to develop a blueprint to transition Los Angeles County to 100 percent renewable energy, 100 percent locally sourced water and enhanced ecosystem health by 2050.
“The two Grand Challenges are the biggest, most collaborative and potentially most transformative efforts UCLA has ever undertaken,” said UCLA Chancellor Gene Block. “They represent a new way of conducting research to solve the biggest problems of our society, and they are challenges we’re uniquely qualified to take on.”
The new grand challenge focused on depression comprises four components:
- A 100,000-person study to discover the causes of depression — the largest-ever genetic study for a single disorder.
- An effort called Discovery Neuroscience that is aimed at understanding how depression arises. Researchers will use findings from the genetic study and examine the molecular mechanisms and brain circuitry through which genetic and environmental factors lead to depression.
- An innovative treatment center that will provide state-of-the-art treatment for tens of thousands of study participants. As Discovery Neuroscience reveals better ways to treat depression, the treatment center will deliver new, effective therapies and preventive measures.
- A focus on understanding and eliminating the stigma associated with depression. Research has shown that the shame or frustration felt by those with depression often is a barrier to seeking diagnosis and to beginning and continuing treatment. To overcome this hurdle, the grand challenge will begin a research, outreach and education program targeting both the local and global communities.
Conceived in 2012 through the brainstorming efforts of more than 30 UCLA faculty members, the Depression Grand Challenge team now comprises more than 100 researchers from more than 25 academic departments.
The grand challenge is being led by Dr. Nelson Freimer, UCLA’s Maggie G. Gilbert Professor of Psychiatry and Biobehavioral Science and director of the Center for Neurobehavioral Genetics in the Jane and Terry Semel Institute for Neuroscience and Human Behavior at UCLA.
“Advances in technology for genetic research have now made it possible for us to discover the causes of depression,” Freimer said. “We know a genetics-based strategy will be successful, just as it has been with heart disease, diabetes and cancer. But finding the causes of depression is just the first step.
“We have to understand not just why, but how, depression arises, and we have to be in a position to take that knowledge directly back to our patients. Depression is such a complex disease that we are going to have to understand it from all angles — its basis in the brain, its environmental roots and its social impact. This is why a university-wide effort is essential.”
To lead the massive genetics study, UCLA has recruited a world-renowned authority on the genetics of depression. Jonathan Flint, formerly of Oxford University, led the first study that obtained clear evidence of the specific genes contributing to the disorder.
The two other members of the challenge’s leadership group are Michelle Craske, a professor of psychology, psychiatry and biobehavioral sciences and the director of the Anxiety and Depression Research Center at UCLA, who will lead the development of the innovative treatment center; and S. Lawrence Zipursky, a UCLA professor of biological chemistry and Howard Hughes Medical Institute investigator, who will lead the development of Discovery Neuroscience. Zipursky recently received the Louisa Gross Horwitz Prize for his work on a molecular identification system that helps neurons to navigate and wire the brain.
Members of the general public who would like more information about the Depression Grand Challenge may email firstname.lastname@example.org. | <urn:uuid:f8db8692-8519-415a-9f9b-744651a06f16> | CC-MAIN-2016-26 | http://newsroom.ucla.edu/releases/combating-depression-is-uclas-second-grand-challenge | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398873.39/warc/CC-MAIN-20160624154958-00133-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.938981 | 970 | 3.015625 | 3 |
- Included species (for ITIS):
- M. arctoides, M. assamensis, M. cyclopis, M. fascicularis, M. fuscata, M. maura, M. mulatta, M. nemestrina, M. nigra, M. ochreata, M. radiata, M. silenus, M. sinica, M. sylvanus, M. thibetana, M. tonkeana
- Included species (for Mammal Species of the World):
- M. arctoides, M. assamensis, M. cyclopis, M. fascicularis, M. fuscata, M. hecki, M. leonina, M. maura, M. mulatta, M. nemestrina, M. nigra, M. nigrescens, M. ochreata, M. pagensis, M. radiata, M. siberu, M. silenus, M. sinica, M. sylvanus, M. thibetana, M. tonkeana
Macaques are found in more habitats than any other primate except humans. Geographically, their range extends as far north and east as Japan (M. fuscata) and as far west as Morocco (M. sylvanus). Several species have adapted to city life.
The Barbary Macaque (M. sylvanus) is tailless and because of this is also known as the Barbary "Ape". Several other species may be referred to as apes due to their rudimentary tails, such as the Celebes Crested Macaque (M. nigra) or Black "Ape". Despite this, macaques are not apes, and they are no more closely related to the apes than is any other Old World monkey.
Macaca is a Portuguese word for monkey and, in general use, carries no racial connotation. As used by the French (particularly those who had resided in North Africa), however, it is often a racial slur directed at people of darker skin.
July 7, 2003
Newsweek International's piece on electronic textiles (e-textiles), "Wearing Wires: A New Fabric May Make Clothing Both High Tech and Fashionable," quotes Diana Marculescu, Assistant Professor of ECE, in its June 30/July 7 issue.
The technology, also called interactive textiles (i-textiles), has a promising future, Marculescu predicts. According to Newsweek:
"Most tech-textile designers believe it is now only a matter of time before we actually live among what Diana Marculescu, an engineer at Carnegie Mellon University, calls 'intelligent ambient systems.'"
These systems, built inside fabric or added on to apparel, will connect the user to other devices and act self-sufficiently via electronic signals that travel through the fabric. E-textiles could have day-to-day applications in the consumer market and in the medical, space, and military sectors. One challenge the technology faces is limiting the weight of the devices while increasing the amount of memory needed for them to function.
Marculescu will be an invited lecturer during the first week (July 14-18) of the European Summer School on Embedded Systems (ESSES) in Vasteras, Sweden.
Li Hung-chang (lē hōng-jäng), 1823–1901, Chinese statesman and general. His first success was as a commander of forces fighting the Taiping Rebellion. As viceroy of the capital province of Zhili (1870–95), he controlled Chinese foreign affairs for the Empress Dowager Tz'u Hsi. Li was the chief negotiator of the Treaty of Shimonoseki (1895), which ended the First Sino-Japanese War. In 1896 he negotiated the treaty that granted Russia the right to build the Trans-Siberian RR across N Manchuria. He protected foreigners when he was viceroy of Guangzhou during the Boxer Uprising (1900), and he was able to reduce the demands of the foreign powers for reparations. His moderately progressive internal policy included modernization of the army and railroad building.
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved. | <urn:uuid:e17149e7-79e0-49c4-a681-dca4fb505fa0> | CC-MAIN-2016-26 | http://www.factmonster.com/encyclopedia/people/li-hung-chang.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397565.80/warc/CC-MAIN-20160624154957-00192-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.943004 | 206 | 3.09375 | 3 |
Title: Grenadine island, brigantine ALBATROSS voyage, 1960
Accession Number: 2002.35.194
Category: PHOTOGRAPHS - 35 MM SLIDES
Type: 35mm color slide
Maker: Langford, Richard E.
Place: West Indies, French West Indies, Grenadine Islands
Description: 35mm color slide from voyage of brigantine ALBATROSS 1960-1961; photographed by Richard E. Langford who was a teacher aboard the vessel when it sank May 2, 1961; he used slides in this collection (2002.35.1-657) for lectures about the sinking. The following transcription is from Richard Langford Collection AV 193:2002-5,6: [TCR 01:40:24]: “This is one of the little Grenadine islands [West Indies]. They all have that little hill, then the little white sand strip. And they’re all around you. There are dozens of them.”
Mystic Seaport Image ID: m417860
The ever-increasing challenges to the biodiversity shared by Cuba and the United States provide the opportunity and the need for the two nations to take an enhanced collaborative, bilateral approach to addressing shared issues. Cuba lies a mere ninety miles south of the U.S. state of Florida, and the two countries’ territorial waters meet in the Gulf of Mexico and the Straits of Florida. Cuba and the United States thus share much biodiversity—ranging from varied populations of organisms to diverse aquatic and terrestrial ecosystems. Native species migrate, exotic species invade, disease-causing species disperse, and rare species go extinct in the face of growing habitat modification. The living components of this shared environment are dynamically impacted, sometimes unpredictably so, by natural or man-made environmental disasters. Nature does not respect political boundaries nor do such potential disasters as oil spills, toxic releases, hurricanes, and tropical storms. Such events provide the sine qua non for greater bilateral cooperation.
Governments around the world routinely collaborate on shared environmental concerns bilaterally or multilaterally, depending on the situation being addressed. Environmental nongovernmental organizations (NGOs) from local to international levels often work in partnership with governments to solve environmental problems that extend beyond national boundaries. Such public/private arrangements work well in most circumstances, and there are many effective mechanisms in place to deal with challenges ranging from endangered species and ecosystems to oil and toxic waste spills.
However, a lack of formal diplomatic relations can limit desirable cooperation on shared environmental issues. The U.S. embargo on trade with Cuba—which was instituted in 1961 by the Kennedy administration in response to Cuba’s nationalization of U.S. businesses’ properties in Cuba during the Cuban Revolution—and subsequent regulations have thwarted the efforts of Cuban and U.S. scientists to collaborate on environmental or other professional and academic matters.1 There is essentially no intergovernmental environmental interaction between the United States and Cuba. The shared biodiversity of these countries, and in some cases that of other nations in the Caribbean and Gulf of Mexico regions, suffers as a result.
Fortunately, some NGOs in the United States have had success over the years in working collaboratively with their Cuban counterparts on shared environmental issues. The experiences of such NGOs can inform a way forward in structuring an enhanced mechanism for bilateral cooperation. Also fortunately, on January 14, 2011, the Obama administration announced new rules that ease some restrictions on U.S. citizens’ travel and remittances to Cuba, which will collaterally encourage more bilateral environmental collaboration as well. While these steps have created some space, given the political realities, a targeted environmental agreement is required to facilitate further mutually beneficial study, monitoring, and protection of shared biodiversity.
Urgent, Shared Environmental Problems
The most urgent environmental problems requiring bilateral action are broadly classified as disasters—both those that occur naturally and those that are man-made. Hurricanes are the clearest examples of shared natural disasters. During the twentieth century, 167 hurricanes struck the U.S. mainland. Of these, 62 were major (categories 3, 4, or 5 on the Saffir-Simpson scale). During the same period, 36 hurricanes, half of which were major, made landfall over Cuba. Because many hurricanes—Katrina and Ike being twenty-first century examples—strike both countries, there exists a shared need after such disasters to respond to the negative effects, including environmental problems created by rain, wind, and storm surges.
Most major hurricanes occurring in the Caribbean during the past century have resulted in documented extensive perturbations of shallow-water marine ecosystems, particularly to coral reefs, seagrass beds, and coastal mangroves.2 Aside from physical damage to such ecosystems from more turbulent water, hurricanes can also negatively impact water quality. On land, hurricane damage to ecosystems can be even more severe than in the ocean. For example, damaged native vegetation will possibly be more prone to colonization by exotic, noxious species such as Australian pine and Brazilian pepper.3 While Cuban and U.S. scientists have shared motivation to assess, monitor, and remediate the marine and terrestrial ecosystems that are damaged by hurricanes, they currently cannot do so.
Man-made environmental disasters, such as oil and natural gas leaks, can likewise be of shared concern to the Cuban and U.S. governments. The Gulf of Mexico is a rich source of oil and gas and will remain so for decades to come. According to the National Oceanic and Atmospheric Administration (NOAA), there exist nearly 4,000 active oil and gas platforms in the Gulf of Mexico off the U.S. coastline. Cuba also has plans for new oil and gas platforms off its northern coast.4 Given the near- and long-term implications of gas, oil, and chemical dispersants on the Gulf of Mexico’s biodiversity, it is imperative for the economic and ecological wellbeing of both Cuba and the United States that exploration is pursued with enhanced safeguards to avoid the mistakes of past disasters, such as the dramatic explosion of the Deepwater Horizon oil rig.
While Cuba and the United States are signatories to several international protocols for cooperation on containment of oil spills, there is scant cooperation between them on this front—although there were at least some low-level meetings between the countries after the Deepwater Horizon blowout.5 Given the potential of currents in the Gulf of Mexico to disperse spills from off the coast of one country to the waters and shores of the other, there were ongoing concerns about the possible reach of the disaster. Fortunately, relative to its potential, the Deepwater Horizon spill remained mostly contained.
However, with increased drilling in the area, including deep wells, more than luck will be needed to avert future disasters. Even if oil and gas leaks or spills are restricted to Cuban or U.S. waters, the negative environmental impacts can be important regionally. The two nations’ shared marine ecosystem is the foundation for the mid Atlantic and Gulf Stream fisheries. Many important commercial and sport fish species breed and feed in Cuban waters. So destruction of Cuban mangroves and coral reefs will impact stocks of species such as snapper, grouper, and tuna, along with myriad other animals, plants, and microbes that spend different parts of their life cycles in the territorial waters of each country.6
Given that urgent environmental problems can arise rapidly and harm the economic and ecological health of the United States and Cuba, it is imperative that there should be a mechanism for rapid, joint response to these shared threats.
Emergent, Shared Environmental Problems
Thankfully, urgent, shared environmental problems involving the United States and Cuba are not everyday occurrences. Nonetheless, every day there are numerous environmental issues of concern to both countries that are of great importance in the medium to long term. Such problems center on the need to study, monitor, and assess the status of organisms and ecosystems that functionally exist in both countries.
A complex mosaic of coral reefs, seagrass beds, and mangroves knits together the marine and coastal ecosystems. Some of the most extensive, best preserved coral reefs in the Wider Caribbean Region occur in Cuban waters, and extensive coral reefs parallel the Florida Keys in U.S. waters. Cuba has the largest extent of mangrove forests in the Caribbean, about 4,000 km², and Florida’s southwestern coast supports mangrove forests comprising about half the extent of those in Cuba. Seagrass meadows occur in shallow waters of both countries, stabilizing sea bottom sediments that could otherwise threaten coral reefs and providing breeding, feeding, and shelter grounds for myriad marine animals, plants, and microbes.7
These ecosystems are threatened increasingly by habitat modification, the impact of tourism, overexploitation of marine fishes and other commercial seafood resources, the ramifications of climate change and rising sea levels, and pollution from land-based sources (e.g., unsustainable agricultural and forestry practices) and ocean-based sources (e.g., cruise ship waste). Increasing tourism especially threatens coral reefs. Despite some positive measures taken by the cruise industry in recent years, more cruise ships in the region still mean greater potential stresses to the marine and coastal environments. In addition to these and other shared ecosystems, many marine and terrestrial species are shared by Cuba and the United States. Examples include migratory, invasive, endangered, and disease vector species.
Migratory Species: Thousands of species of animals migrate between the two nations. Cuba provides key wintering habitats for 284 bird species that breed in the United States, such as black-and-white warblers. Many insects also migrate between the United States and Cuba, including the monarch butterfly. Fishes, such as the Atlantic bluefin tuna, swim through both Cuban and U.S. waters, while turtles, such as the hawksbill, share Cuban and U.S. marine habitats. Mammals, such as the Florida manatee, also swim between U.S. and Cuban waters.
Invasive Species: Cuba and the United States share many of these problem organisms, which are among the most significant threats to native species and to ecological and economic wellbeing. For example, Hydrilla verticillata, an aggressive waterweed native to the Old World, displaces native aquatic plants and seriously disrupts recreational uses of lakes and rivers in Cuba and the United States.8 Another example is the red lionfish, which is native to the Indian and Western Pacific Oceans but was released into the Atlantic Ocean from a home aquarium in Florida when Hurricane Andrew struck in 1992. Today, this venomous fish has spread along the U.S. Atlantic Coast as far north as New York and into the Caribbean, including Cuban waters, voraciously eating native fish and creating major disruptions to coral reef ecosystems.9
Endangered Species: Cuba and the United States share forty-nine animal species and eight plant species that are categorized as Globally Threatened by the International Union for the Conservation of Nature and Natural Resources (IUCN). Because only a small fraction of the world’s plants and animals have been assessed by the IUCN criteria, the actual number of threatened species that are shared by Cuba and the United States is certainly much larger. Even with what is known already, there exists a strong imperative for the two countries to cooperate on monitoring and protecting the threatened species for which they are joint stewards, including the West Indian walnut, the American crocodile, and the West Indian whistling duck.
Disease Vector Species: A good example of a shared disease vector is the Aedes aegypti mosquito. This species is the principal vector for the viruses that cause dengue fever, an incurable, sometimes fatal disease in humans. In the Western Hemisphere, the disease is known to occur throughout much of Latin America and the Caribbean, including Puerto Rico and the Virgin Islands, but so far not in Cuba, and only rarely in the continental United States. But this situation could change. According to the Centers for Disease Control and Prevention, there is evidence that this mosquito is constantly responding and adapting to environmental changes. Cuba has one of the world’s best centers for dengue research, whose scientists understand how the island has remained dengue free.10 Yet, Cuba and the United States are not working together on dengue, a shared and growing threat.
Both urgent natural and man-made problems, such as hurricanes and oil spills, as well as more gradual, less dramatic threats, such as habitat modification and pollution, threaten the native biodiversity shared by Cuba and the United States. These threats are exacerbated by the lack of active, ongoing bilateral scientific cooperation between Cuba and the United States in seeking solutions to such threats.
Current Situation of Environmental Cooperation
Both Cuban and U.S. environmental scientists are aware of the shared urgent and emerging environmental challenges outlined in the previous sections. However, many scientists on both sides of the Florida Straits remain frustrated that more cannot be done to identify, study, and solve these challenges in a collaborative fashion. On the other hand, there is increasingly a palpable sense among environmental scientists in both Cuba and the United States that the opportunities for bilateral collaboration are poised to expand. This was underscored by an April 2009 panel discussion on U.S.-Cuba relations concerning marine and coastal resources conservation hosted by the Brookings Institution and the Environmental Defense Fund (EDF). Among many notable elements of that event was the participation of U.S. government representatives (NOAA and the Department of State), which was a real breakthrough in expanding this discussion in the United States beyond the NGO community.
A few months before the Brookings/EDF gathering, the American Council of Learned Societies/Social Science Research Council Working Group on Cuba and the Christopher Reynolds Foundation sponsored a two-day workshop. “Workshop on the Future of Environmental Collaboration between the United States and Cuba,” held in November 2008 in New York City, helped identify and define the issues that led to the Brookings/EDF event and to a number of others. This workshop was attended by thirty-two representatives of environmental NGOs and private philanthropic foundations. One of the outputs was a letter, dated December 11, 2008, which was signed by twelve CEOs of environmental NGOs, addressed to then President-elect Barack Obama urging him to “take action to increase scientific exchange and collaboration between the United States and Cuba.” The letter specifically suggested
- issuing U.S. visas to Cuban scientists and conservation professionals;
- directing the U.S. Department of the Treasury’s Office of Foreign Assets Control (OFAC) to grant licenses to U.S. scientists and conservation professionals planning to collaborate with their Cuban colleagues;
- giving OFAC licenses to U.S. entities to enable Cuban scientists and conservation professionals to travel to third countries when U.S. funds are used;
- directing federal agencies, such as NOAA, to encourage more collaboration between U.S. and Cuban scientists and academic and conservation professionals; and
- amending OFAC regulations that govern educational exchanges between the United States and Cuba to allow more flexibility.
Another major “barometer” for sensing a momentum in the direction of greater environmental collaboration is a series of meetings—four to date—of a group that has come to be known as the Trinational Initiative for Marine Science and Conservation in the Gulf of Mexico & Western Caribbean. The Trinational Initiative’s objective is to encourage increased collaborations between the “trinationals”—Cuba, Mexico, and the United States—on marine research and conservation issues. Membership of the group currently includes eight Cuban organizations, ten Mexican organizations, and fourteen U.S.-based organizations.
Another recent example of improved environmental collaboration between Cuba and the United States was the “U.S.-Cuba Conference on Hurricane Cooperation,” sponsored by the Center for International Policy and held in December 2010 in Galveston, Texas. Participants from the U.S. private sector and policy makers and technical experts from both the United States and Cuba concluded that communication concerning hurricane forecasting and early warning between the two countries is excellent. However, due to current government policies, there is no ready mechanism for either country to aid the other in hurricane damage remediation. Bilateral cooperation in other arenas, such as counternarcotics,11 could provide some precedent for bilateral collaboration in hurricane recovery.
It is still too early to fully understand how OFAC will interpret the updated Cuba policy announced by the Obama administration on January 14, 2011, which eased travel restrictions in an attempt to increase interactions between Cubans and Americans. These changes should have salutary effects on environmental collaboration between Cuba and the United States. Already, a number of people-to-people OFAC-licensed programs have taken place or are being scheduled, and several of these are being conducted by organizations with an environmental focus. See, for example, advertisements for two programs being run by the American Museum of Natural History.12
Impediments to Enhanced Environmental Cooperation
Despite the success stories of various U.S.-based environmental NGOs, the realities of carrying out effective and timely collaborative projects between Cuban and U.S. researchers are nonetheless daunting for everyone concerned. The problems are especially acute for environmental NGOs that have little or no track record in working with Cuban counterparts and for U.S. government agencies with environmental responsibilities but without the authority to conduct joint projects with the Cuban government. Even U.S.-based NGOs licensed by OFAC to conduct environmental projects in Cuba with years of experience doing so are thwarted by administrative rules and procedures that limit the breadth and depth of collaborative initiatives working to address urgent and emerging shared environmental problems. These impediments emanate from both the Cuban and U.S. governments.
U.S. Impediments to Enhanced Environmental Cooperation
Licenses for People: OFAC is the U.S. government entity that grants licenses for U.S. citizens to travel to Cuba, as described in the fifty-one-page document Comprehensive Guidelines for License Applications to Engage in Travel-Related Transactions Involving Cuba. These guidelines are periodically revised to reflect new policy directives from the White House, as they were most recently on April 19, 2011, with respect to regulations and policies governing purposeful travel, non-family remittances, and U.S. airports supporting licensed charter flights to and from Cuba. Getting the appropriate license from OFAC is the first hurdle for U.S. citizens who wish to engage in environmental collaboration with Cuban colleagues, and it can be a very daunting process. While recent rule changes are very welcome and are improving options for licenses for such activities, there are still ongoing issues with OFAC. First, the regulations and policies are interpreted unevenly. As a result, some NGOs are required to have a specific license, while others doing the same sort of activity are allowed to proceed with a general license. Second, the length of time required to get a decision on a license application is unpredictable and can range from one month to one year or more, which makes it difficult to arrange all the other aspects of conducting collaborative projects (e.g., obtaining Cuban visas, securing permits to conduct projects in Cuba, arranging for funding, and scheduling travel and time in Cuba to do the project).
Licenses for Equipment: Environmental research requires specialized equipment, ranging from handheld GPS units to deepwater submersibles. It is a complicated proposition to bring items into Cuba that are not carried in one’s luggage, and even items in the luggage can be problematic, such as GPS units, which are not allowed. It is only possible for U.S.-based environmental NGOs to send permitted large items, such as natural history specimen cabinets, if they have a license from the U.S. Department of Commerce’s Bureau of Industry and Security (BIS). BIS consults with the Department of Homeland Security and the Department of Defense before approving applications. In addition, BIS has approved the exports of aircraft or vessels on temporary visits to Cuba on a case-by-case basis, sometimes with additional authorizations needed (e.g., boats going into Cuban territorial waters from south Florida must get advance permission from the U.S. Coast Guard). While it is possible for established NGOs with solid partnerships with Cuban counterparts to export selected equipment for environmental research or monitoring purposes, it is by no means a speedy or certain process or one that includes selected critical items, such as GPS units. Impediments with respect to environmental equipment exports to Cuba are matters of both what is permitted and how expediently the shipment is approved. The current situation is inadequate for cases involving exports of equipment needed for responding to urgent environmental problems or for NGOs attempting to begin new projects.
Funding: Cuba has an excellent cadre of environmental professionals with a demonstrated capacity for conducting successful projects.13 Unfortunately, Cuba’s investment to date has been limited, and there is a chronic shortage of funding for the infrastructure, research, training, monitoring, and dissemination of research results for Cuban environmental projects that would be of shared interest for the United States and Cuba. U.S. funds that could support such environmental initiatives are potentially considerable, but they are severely limited currently due to OFAC restrictions on the amount of private funds that can be expended in Cuba and the complete prohibition of U.S. government funds for such environmental projects. U.S.-based private foundations, including the John D. and Catherine T. MacArthur Foundation, the Christopher Reynolds Foundation, the Tinker Foundation, Inc., and the Andrew W. Mellon Foundation, have taken the lead in funding OFAC-licensed environmental projects in Cuba. More funding is needed to provide modern infrastructure and information technology; vehicles, vessels, and gear for field studies; and travel options for Cuban scientists and students to participate in workshops and conferences and to pursue formal and informal studies and internships abroad.
Cuban Impediments to Enhanced Environmental Cooperation
Project Approvals: Environmental projects conducted in collaboration with Cuban organizations must be approved by an array of Cuban agencies—and at various levels within those agencies—depending on the nature of the project. This can be a daunting procedure for U.S.-based NGOs attempting to initiate collaborative activities in Cuba, but even NGOs experienced in the process of project approval can have delays and frustrations. Some of the impediments are related to technical problems (e.g., spotty Internet connections and difficulty transmitting large file attachments via email) or to changes in key administrative personnel at agencies. The most important Cuban agency for most projects is the Ministerio de Ciencia, Tecnología y Medio Ambiente (CITMA), but depending on the situation other entities must give high-level approval for environmental projects. For example, the Jardín Botánico Nacional (JBN) reports administratively to the Ministerio de Educación Superior, so projects with the JBN need to be approved by that ministry, in addition to CITMA.
Projects taking place in Cuba’s numerous protected marine and terrestrial areas must be approved by the Centro Nacional de Áreas Protegidas (CNAP), which is part of CITMA. The major impediment with respect to conducting collaborative environmental projects in Cuba is what can be a complex, non-linear, and slow approval process.
Visas: Once an appropriate OFAC license is obtained, a U.S. citizen must also obtain the appropriate visa from Cuba to enter the country and conduct the approved activity. In some cases, for example to attend a professional conference, this can be accomplished with a tourist visa, which can be issued by airline companies for a modest fee. However, for a U.S. citizen to engage in research activities, a research visa is required from Cuba, and this needs to be arranged through the Cuban counterpart’s organization, which can take up to thirty working days to process. The challenge here is that the collaborative activity for which the visa is sought must already have been approved by the Cuban counterpart organization, but that to get the approval it is usually necessary to meet with and explain the project concept and to work out the specifics in person, thus creating a “catch-22” situation.
Permits: Once projects are approved and research visas secured, the third category of impediment in Cuba for U.S.-based environmental researchers is to obtain the permits needed for implementing the project’s specific activities. Probably the key permits pertain to the conduct of field expeditions in collaboration with Cuban counterparts. Such permits require information about the individuals who will be doing the field work and a detailed schedule of sites they will visit and on what dates. However, the time it takes to get a permit approved often can affect the specific details in the permit application. For example, illness or other external factors may affect an individual’s availability (substitutions are not allowed) and natural events, such as hurricanes, may prohibit the expedition from going somewhere on the approved day. The high degree of specificity of information required, the relative inflexibility to modify what has been approved due to changing circumstances of personnel or weather, and the length of time to get the approvals of the permits impede research expeditions.
Enhancements to Environmental Cooperation
Nature knows no boundaries, and given the number and scale of environmental problems shared by Cuba and the United States, combined with the multitude of impediments to finding joint solutions to these problems, the best way to enhance environmental cooperation between the two countries would be through the establishment of a bilateral agreement on this theme.
The ecological stakes are too high for Cuba and the United States to rely on anything short of a government-to-government accord to formalize, catalyze, and facilitate cooperation on environmental problems of mutual concern. Various models for such an agreement exist: the United States has joint statements on environmental cooperation with Spain and Italy, an agreement on air quality with Canada, and a memorandum of understanding on environmental protection with India, among others.
Such a bilateral agreement could logically take advantage of the collective experiences of the U.S.-based environmental NGO community in conducting collaborative initiatives with Cuban counterparts over many years and, in some cases, decades. The focus of such a bilateral agreement should be on helping to facilitate the activities by NGOs that are currently underway and encouraging new initiatives by NGOs in consultation with and the approval of Cuban authorities. The elements of such an agreement should take into account the difficulties mentioned above and the following considerations:
- Project Approvals: Before cooperative projects can begin, one or more Cuban agencies need to approve. It would be ideal to have this process more clearly defined and streamlined to minimize delays in getting approvals.
- Visas: Research visas for representatives of NGOs conducting approved projects should be expedited and ideally approved for multiple entries into Cuba, perhaps renewable annually for the duration of the project.
- Permits: Permits for all the components of projects (e.g., to collect specimens, to enter and collect or monitor in protected areas, to import research equipment, to export biological specimens, etc.) should be expedited for approved projects.
- Licenses: The processes for obtaining the U.S. Department of the Treasury’s OFAC specific licenses and the U.S. Department of Commerce’s Bureau of Industry and Security licenses should be streamlined and more transparent.
Cuba’s Ministerio de Relaciones Exteriores (MINREX) might logically take the lead on such a bilateral agreement. Any of several U.S. government entities could logically take the lead, such as the Department of State, NOAA, the Fish and Wildlife Service, or the Environmental Protection Agency. At the same time, efforts should continue unabated to promote the revision of U.S. government policies that currently impede greater bilateral environmental cooperation between the two countries.
Cuba and the United States have the potential to work around their differences to respond to the threats to their shared biodiversity. And no matter what the trajectory of future official relations between the two countries, initial mutually beneficial steps concerning the environment can provide an important opportunity to address real shared problems while also building links and trust between the two societies that can provide some bedrock for future relations.
- Sergio Jorge Pastrana (Cuban Academy of Sciences) and Michael T. Clegg (U.S. National Academy of Sciences), “U.S.-Cuban Scientific Relations,” Science 322, no. 5900 (2008): 345. http://www.sciencemag.org/content/322/5900/345.summary.
- James T. Tilmant, Richard W. Curry, Ronald Jones, Alina Szmant, Joseph C. Zieman, Mark Flora, Michael B. Robblee, et al. “Hurricane Andrew’s Effects on Marine Resources,” BioScience 44, no. 4 (April 1994): 230-237. http://www.jstor.org/stable/1312227.
- David W. Orr and John C. Ogden, “The Impact of Hurricane Andrew on the Ecosystems of South Florida,” Conservation Biology 6, no. 4 (December 1992): 488-490. http://www.jstor.org/stable/2386357. Note: the Australian pine is Casuarina spp. and the Brazilian pepper is Schinus terebinthifolius.
- Andrew E. Kramer, “Gazprom of Russia to Drill for Oil in Cuban Waters,” The New York Times, November 15, 2010. http://www.nytimes.com/2010/11/16/business/global/16oil.html.
- Clifford Krauss, “Drilling Plans Off Cuba Stir Fears of Impact on Gulf,” The New York Times, September 29, 2010. http://www.nytimes.com/2010/09/30/world/americas/30cuba.html.
- Environmental Defense Fund. Cuba: What’s at Stake. http://www.edf.org/oceans/cuba.
- UNEP, Caribbean Environment Program (CEP), Wider Caribbean Region. http://www.unep.org/regionalseas/programmes/unpro/caribbean/instruments/r_profile_car.pdf.
- Hydrilla verticillata. Non-indigenous Aquatic Species. U.S. Geological Survey factsheet. http://nas.er.usgs.gov/taxgroup/plants/fact%20sheets/Hydrilla%20verticillata.pdf.
- Mark A. Albins and Mark A. Hixon, “Invasive Indo-Pacific lionfish Pterois volitans reduce recruitment of Atlantic coral-reef fishes,” Marine Ecology Progress Series 367 (2008): 233–238. http://content.imamu.edu.sa/Scholars/it/net/albins_hixon_2008_meps.pdf.
- Maria G. Guzmán and Gustavo Kourí. “Dengue in Cuba: research strategy to support dengue control,” The Lancet 374, no. 9702 (November 2009): 1660-1661.
- Country Reports - Croatia through Haiti, 2012 International Narcotics Control Strategy Report (INCSR), U.S. Department of State, March 7, 2012. http://www.state.gov/j/inl/rls/nrcrpt/2012/vol1/184099.htm.
- Discovering Cuba’s Natural History: A People to People Educational Exchange Led by Museum Experts. American Museum of Natural History. http://www.amnhexpeditions.org/home/show/Programs%20to%20Cuba.
- For example, the Global Environment Facility (GEF) has allocated US$4,350,000 for twenty projects on themes such as biodiversity, climate change, and land degradation. http://www.thegef.org/gef/gef_country_prg/CU.
Funding for the research and writing of this article was provided by the Tinker Foundation, Inc. Robert L. Muse, Esq. principally reviewed and provided input to this article. Additional reviewers for earlier versions of this article include Felicity Arengo and Ana Luz Porzecanski (American Museum of Natural History), Elizabeth Newhouse (Center for International Policy), Daniel Whittle (Environmental Defense Fund), Jennifer Gebelein (Florida International University), Robert Heuter (Mote Marine Laboratory), Marianne Kleiberg (The Nature Conservancy), James Miller and Denis O’Connor (The New York Botanical Garden), and Martin Mendez (Wildlife Conservation Society). Input from the reviewers of this article is gratefully acknowledged; however, neither they nor their respective organizations are responsible for the opinions presented herein, which are those of the author. | <urn:uuid:0584a7b7-fe2a-40f1-9de4-7c718182c973> | CC-MAIN-2016-26 | http://www.sciencediplomacy.org/article/2012/biodiversity-without-borders | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396945.81/warc/CC-MAIN-20160624154956-00032-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.939733 | 6,543 | 3.40625 | 3 |
Click on the image for larger version
NASA's Mars Exploration Rover Opportunity used its panoramic camera in a super-resolution technique to record this eastward view of the horizon on the 2,298th Martian day, or sol, of the rover's work on Mars (July 11, 2010).
Rising highest above the horizon in the right half of the image is a portion of the western rim of Endeavour Crater including a ridge informally named "Cape Tribulation" (see Figure 1).
Super-resolution is an imaging technique combining information from multiple pictures of the same target in order to generate an image with a higher resolution than any of the individual images.
Endeavour Crater is about 22 kilometers (14 miles) in diameter. The rover team chose Endeavour as a long-term destination for Opportunity in mid-2008, after the rover had investigated the much-smaller Victoria Crater for two years. More than a year later, observations by the Compact Reconnaissance Imaging Spectrometer on NASA's Mars Reconnaissance Orbiter revealed clay minerals on Endeavour's western rim, making the destination even more enticing for Opportunity's investigation. Cape Tribulation is one location where the clay minerals are exposed.
Opportunity completed its three-month prime mission in April 2004 and has continued working in mission extensions since then. | <urn:uuid:b3b770fc-1f3b-4ee4-b8c5-57f814664525> | CC-MAIN-2016-26 | http://photojournal.jpl.nasa.gov/catalog/PIA13709 | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395546.12/warc/CC-MAIN-20160624154955-00014-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.906596 | 275 | 3.453125 | 3 |
Merriam-Webster defines gerrymandering as the following: “to divide (a territorial unit) into election districts to give one political party an electoral majority in a large number of districts while concentrating the voting strength of the opposition in as few districts as possible.” With the dawn of each decade, the new census allows each state’s legislature to redraw its district maps to coincide with population shift. Standard practice has always been that districts are redrawn as fairly as possible while giving some leeway to protect the incumbent politicians of the party in power. The word gerrymandering, which has a negative connotation, comes from 1812. Then-Massachusetts Governor Elbridge Gerry redrew districts to gain political advantage leaving some to be shaped like a salamander. Gerry + Salamander = Gerrymander.
In the modern age of technology — where emails always surface and nary a backroom deal goes un-unveiled — we now frequently get a look into how politicians manipulate the system to benefit their party for years to come. In Wisconsin, where the Tea Party hit the ground running in 2010, the notion of fairness was thrown out the GOP window going 75 on I-39.
New documents obtained by the Wisconsin State Journal show that Badger state Republicans did little to hide the political motivation behind their decisions. On May 4th, 2011, State Senator Leah Vukmir (R-Wauwatosa) wrote to Tad Ottman, the aide of then-Sen. Majority leader Scott Fitzgerald, proclaiming her giddiness that the GOP was in charge of the gerrymandering saying, “This is such a big task. So glad we are in control!”
A look into Vukmir’s email reveals a desire to “waste” as many of the opposition’s votes as possible.
The documents were turned over to new Senate Majority Leader Mark Miller (…the recall did have an effect!) by Michael Best & Friedrich, the firm that was paid $431,000 in taxpayer money to give the Wisconsin GOP an unfair advantage until 2020. Former Sen. Majority Leader Fitzgerald has stuck by his story that districts were created fairly without political bias but these new documents indicate that the speakers fingers must be eternally crossed.
One document released Tuesday detailed how many districts would be safe for Republicans, GOP-leaning, swing districts, safe Democratic, or lean Democratic under at least one version of the redistricting plan. It showed a dramatic increase in likely GOP seats, but Ottman said that chart did not represent the final breakdown.
Another chart shows a breakdown of the last 10 years of Wisconsin elections, and which districts went to Democrats or Republicans.
Ottman confirmed he prepared the charts.
Assembly minority leader and recall candidate Peter Barca suggests that these documents prove the Wisconsin GOP lied to its constituency:
“To me, it’s sort-of the smoking gun that shows that all of their efforts were political, partisan and about protecting their power,” Barca said. “It shows their deceitfulness. They consistently maintained this was not about politics.” | <urn:uuid:ace6ba45-fbd3-47ef-b36c-3d1726b98920> | CC-MAIN-2016-26 | http://wepartypatriots.com/wp/2012/08/07/emails-reveal-wisconsin-gops-gerrymandering-efforts-to-be-quite-dubious/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392159.3/warc/CC-MAIN-20160624154952-00158-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.965624 | 645 | 2.65625 | 3 |
Mr. Julio Lopes, Secretary of Transport of the State of Rio de Janeiro, recently visited the World Bank to present what the city is doing to improve the quality of public transport. It is a fascinating example of how cities can improve urban transport, with a clear target of benefiting the poor and reducing a city’s carbon footprint.
Brazil’s recent economic growth, the hosting of the football World Cup in 2014 and the 2016 Olympic Games are serving as a catalyst for the Rio de Janeiro metropolitan area to reform public transport. Rio, like many other Brazilian cities – where close to 85 percent of the country’s population lives – is facing strong pressure to find new ways to adapt its transport system to increasing population and rapid motorization. The city has a population of about 6 million and motorized transport accounts for almost 70 percent of trips and about 45 percent of the city’s total emissions.
Both the State and the Municipality are seeking to decongest the metropolitan area cooperatively through better integration of the existing transport systems. The aim is to offer an acceptable level of service to the user while reducing operating subsidies and informality in the bus system. The result will be a more integrated multimodal network, including bus/BRT (under the authority of the Municipality), suburban railways (State), and metro (State). There are also plans to upgrade waterway terminals and acquire new ferries. All public transport systems in the City of Rio will need to be fully accessible for people with disabilities by 2016, to be compliant with Federal Laws on accessibility.
Map of planned extensions of Rio de Janeiro's urban transport system
In particular, the State’s emphasis on improving the rail-based networks, particularly the SuperVia commuter rail and MetrôRio systems, is a cost-efficient priority which will contribute to a cleaner environment and to the greening initiatives put forward by the State. It will also help facilitate access to employment centers, health, education, and leisure facilities by poorer segments of the population. On a daily basis, 46 percent of the poor residents of Rio’s peripheral areas travel to the capital. It is estimated that 74 percent of SuperVia users are poor and six of the nine municipalities served by the network have poverty rates above 50 percent.
One of Rio‘s major achievements is the recent introduction of the Integrated Transit Fare or Bilhete Único Integrado (BUI). Launched in 2010, the BUI is an electronic card system that allows residents of the 20 municipalities in Rio’s metropolitan region to transfer between various modes of public transport (ferries, commuter rail, metro, buses and informal transport - usually in the form of privately owned and operated vans) within a period of two and a half hours for the price of a single fare ticket. This integration of public transport fares at the state and municipal levels has improved the affordability of public transport for poor households and led to a more optimal use of the road network in the city 99 percent of riders in Rio’s metropolitan region are now able to reach their destinations using the single-fare pass; while bringing reductions in their travel time and when transferring between different modes of transport, making public transit more attractive compared to cars and motorcycles.
The BUI has contributed to a more balanced job distribution throughout the region: under the old system, employers - who are required by law to cover a fixed percentage of their employees' travel expenses, had a tendency to select applicants who lived close to their job and therefore enjoyed comparatively low commuting costs; with the new integrated fare system, workers who live farther away from major employment centers now benefit from lower transport costs and have consequently become more attractive to potential employers. As a result, employers no longer select their employees based on where they live.
The BUI card, which costs up to R$4.40 (about US$2.00), led to a reduction in the number of bus fares from 74 to 12 in the Metropolitan Area. The Department of Transport estimated that between February 2010 and February 2012, the card helped generate more than 585.3 million trips, benefiting an average of 280,000 users per day. An independent study estimated that the card implementation would represent an average saving of R$2.62 per user; a reduction of over 50% in the number of public transit trips taken; and an increase in formal jobs from 71.0% to 72.15%.
A critical factor in ensuring the success of the card was tackling the issue of informal buses. Introducing the BUI required the passing of new legislation regulating the services of bus operators and their consolidation from 47 companies to four. These companies were required to compete for five areas of the city through open tenders. One of the requirements for the new operators was adherence to the new ticketing system. The introduction of the BUI and the lower public transport fares that ensued translated into a 20% decrease in fare revenues; since Rio does not award subsidies to public transport operators, the implementation of the new fare system forced bus companies to become more efficient and to consolidate some of their operations.
There are other reforms underway in Rio related to bicycles and expansion of BRT lines and funicular transport in the city’s favelas that are interesting to review as examples of transport policies that can impact social inclusion and the environment. And if I had nearly half as much enthusiasm as Mr. Lopes about discussing good practice about Rio, this blog entry could easily become the length of a paper. For now, the World Bank is continuing its strong partnership with Rio in the context of a project that aims to upgrade Rio’s urban rail system. The project will further improve transit quality, favor the poor in particular, promote non-motorized transport, and improve the policy framework for sustainable transport. The project will also improve the transit system’s resilience to natural disasters and it will have positive impacts on mitigation and adaptation to climate change.
Photo credit: Governo do Rio de Janeiro, Secretaria de Transportes | <urn:uuid:23e6f34f-aeda-4cf1-921a-9ddcee3f57bf> | CC-MAIN-2016-26 | https://blogs.worldbank.org/transport/a-transport-fare-card-moves-rio-closer-to-social-inclusion-and-carbon-emission-reductions?cid=EXT_WBBlogSocialShare_D_EXT | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397865.91/warc/CC-MAIN-20160624154957-00120-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.949041 | 1,229 | 3.109375 | 3 |
Bookmark or cite this page as <http://www.sym454.org/hebrew/molad.htm>
by Dr. Irv Bromberg, University of Toronto, Canada
I used astronomical algorithms and calendrical calculations to prove the following:
A special "Thank You!" to Yehoshua Honigwachs, author of The Unity of Torah, Feldheim Publishers (May 1991, 350 pages): When I first documented the non-linear relationship between the traditional molad and the actual astronomical lunar conjunctions, I posted the information here in August 2004 merely as a curious finding, without attempting to explain its cause or significance. Soon thereafter, Yehoshua Honigwachs started to correspond with me about it, and, over the course of dozens of email messages over the next 3 months, he prodded me to explain the finding and its significance, and then to work out how to adjust the molad moments and shift the reference meridian, and further to validate and document how the study was done and its underlying arithmetic. He persistently wouldn't let me "off the hook" until he was satisfied that the matter had been appropriately accounted for and explained. He also brought the existence of the Israeli New Moon Society and its web site to my attention, as well as many papers written by F. Richard Stephenson and others about historical eclipses.
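For readers unfamiliar with the arithmetic that the study compares against the astronomical conjunctions: the traditional molad is a purely arithmetic mean conjunction, advancing in fixed steps of 29 days, 12 hours, and 793 parts (chalakim, 1080 per hour) from the epochal molad "BaHaRaD" of year 1 (day 2, 5 hours, 204 parts). The sketch below follows that standard reckoning; the function and variable names are my own, and hour 0 here means the calendrical start of the day (6 PM), per the usual convention.

```python
PARTS_PER_HOUR = 1080                  # chalakim (parts) per hour
PARTS_PER_DAY = 24 * PARTS_PER_HOUR    # 25,920 parts per day

# fixed molad interval: 29 days, 12 hours, 793 parts
MOLAD_INTERVAL = 29 * PARTS_PER_DAY + 12 * PARTS_PER_HOUR + 793

# epochal molad "BaHaRaD" (Tishrei of year 1): day 2 (Monday, with
# Sunday = day 1), 5 hours, 204 parts -- stored here as parts elapsed
# since the calendrical start of the week
MOLAD_TOHU = (2 - 1) * PARTS_PER_DAY + 5 * PARTS_PER_HOUR + 204

def months_before_year(year):
    """Lunar months elapsed from the epoch to Tishrei of the given
    Hebrew year: 235 months per 19-year cycle."""
    return (235 * year - 234) // 19

def molad_tishrei(year):
    """Molad of Tishrei of the given year, reduced to the week:
    (day-of-week 1..7, hour 0..23, parts 0..1079)."""
    parts = MOLAD_TOHU + months_before_year(year) * MOLAD_INTERVAL
    parts %= 7 * PARTS_PER_DAY                  # fold into one week
    day, rem = divmod(parts, PARTS_PER_DAY)
    hour, part = divmod(rem, PARTS_PER_HOUR)
    return day + 1, hour, part
```

The molad interval works out to 765,433 parts, a mean month of 765433/25920 ≈ 29.530594 days, which exceeds the present-era astronomical mean synodic month (≈29.530589 days) by well under a second; the slow accumulation of such small differences is part of what a molad-versus-conjunction study must account for.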
Common names for the lunar conjunction are "New Moon" or "Dark Moon".
Contrary to popular misconceptions, the moment of the lunar conjunction is not when Moon reaches its minimum angular distance from Sun, nor is it the moment when the actual Sun-to-Moon distance is minimal, nor is it the moment of first visibility of the new lunar crescent at the beginning of a lunar month. It also has nothing to do with Moon or Sun rising or setting.
Astronomically, the lunar conjunction is the moment when the celestial longitudes of Moon and of Sun are equal when projected onto the ecliptic (the celestial path of the plane of Earth's orbit around Sun), from the perspective of the center of Earth. Alternatively, one could describe it as the moment when the centers of Sun, Moon, and Earth, in that order, are all in a plane that is perpendicular to the plane of Earth's orbit (the ecliptic plane). Since it doesn't in any way depend on any Earth surface location, the moment of the astronomical lunar conjunction occurs at the same instant everywhere, but people in different time zones may choose to reckon that moment according to their local clocks or according to some agreed-upon clock.
It is only rarely that at a lunar conjunction the centers of Sun, Moon, and Earth, in that order, are aligned in the same line. Such a perfect alignment only occurs during the maximum of a total solar eclipse at the moment that the center of the lunar umbra (darkest part of the lunar shadow) crosses the terrestrial latitude that equals the solar declination.
At the time of a lunar conjunction, Moon rises or sets at nearly the same time as Sun, but Moon is not visible except for a few minutes if there is a solar eclipse.
In the present era, according to the observational records of the Israeli New Moon Society as posted on-line at <http://sites.google.com/site/moonsoc/>, without optical aids the last visibility of the old lunar crescent before sunrise is at least 24 hours before the lunar conjunction, and the first visibility of the new lunar crescent after sunset is at least 24 hours after the lunar conjunction.
The following photographs at the NASA Astronomy Picture of the Day (APOD) web site show some examples of thin lunar crescents, a few of which were exceptionally "young" and very difficult to see (less than 24 hours after the lunar conjunction, seen either with optical aids or time-exposure photography):
The "world record" for youngest new moon photography is at <http://astrosurf.com/legault/new_moon_2010april14.html>, which was taken in full daylight at the moment of the lunar conjunction, made possible only because the lunar latitude was near maximal (almost 5°N) and the near infrared imaging system was equipped with a screen to block out interfering sunlight.
For information, images, and movies depicting the lunar phases over the course of the full lunar cycle, see the USNO web site at:
Regarding the dark moon interval, the Talmud Bavli tractate Rosh HaShanah page 20b says:
R. Zeira said in the name of Rav Nachman: Moon is covered (invisible) for 24 hours. To us (in Babylonia) it is invisible for 6 (hours) of the old moon and 18 (hours) of the new moon. To them (in the land of Israel) it is invisible for 6 (hours) of the new (moon) and 18 (hours) of the old (moon).
The actual difference in local mean time between Israel and Babylonia is less than 30 minutes, so the assertion of R. Zeira that there is such a large difference between Babylonia and Israel is astronomically impossible.
Rabbi Moshe ben Maimon ("Rambam"), also commonly known by his Greek name, (Moses) Maimonides, wrote a book entitled Hilchot Kiddush haChodesh (title translated as "Sanctification of the New Month" or alternatively as "Sanctification of the New Moon") around the Julian year 1178 or Hebrew year 4938, which is the eighth treatise in Zmanim (The Book of Seasons), the third book of his Mishneh Torah collection (code of Jewish Law). Quoting from that source:
Moon becomes hidden each month and is invisible for about 2 days: about one day before its conjunction with Sun at the end of the month, and about one day after its conjunction with Sun, before it can be seen in the west in the evening.
"Sanctification of the New Month", chapter 1, paragraph 3.
הלבנה נסתרת בכל חודש ואינה נראית כמו שני ימים או פחות או יותר מעט
כמו יום אחד קודם שתדבק בשמש בסוף החודש
וכמו יום אחד אחר שתדבק בשמש ותראה במערב בערב
הלכות קידוש החודש פרק א הלכה ג
The minimum period of lunar invisibility (the dark moon interval) is a physical phenomenon, not a matter of opinion, nor of law or tradition.
Using modern astronomical algorithms it is possible to estimate whether Moon will be visible during the twilight before a specified sunrise or during the twilight after a specified sunset at a given locale in the Middle East. The criteria for reckoning the visibility of the lunar crescent as recommended by Dershowitz & Reingold in Calendrical Calculations: 3rd Edition (CC3, see <http://www.calendarists.com/>) are:
Click here to view charts based on the above criteria calculated for Jerusalem, Israel 178 KB, 7 pages
The charts span one saros cycle before and after the beginning of the year 2000 AD (a total of two saros cycles). I also generated similar charts for the era that was 2000 years earlier, but they are not shown because they were not significantly different.
Since we observe (from the ground and without optical aids) the dark moon interval as spanning from the last visible old lunar crescent in the twilight prior to sunrise until the first visible new lunar crescent in the twilight after sunset, it is impossible for that interval to equal an integer number of days. Rather, it always equals at least one day plus the length of the daytime (measured from the dawn twilight until the dusk twilight), which seasonally varies fractionally in length (longer in summer, shorter in winter).
The dark moon interval varies greatly with latitude, with the least variations at the equator. It tends to be shorter when the lunar conjunction is near perigee (Moon closest to Earth) because Moon moves faster near perigee. For the northern hemisphere it tends to be shorter when the lunar latitude near the conjunction is north of the ecliptic, whereas for the southern hemisphere it tends to be shorter when the lunar latitude near the conjunction is south of the ecliptic. There is also a weak tendency for the dark moon interval to be shorter when the conjunction is near the Earth orbital aphelion (Earth furthest from Sun and moving slowest).
At Jerusalem the duration from dawn twilight to dusk twilight (solar depression = 4°30' in both cases) seasonally varies over a nearly 4 hour range from about 10+3/4 hours to almost 15 hours.
For Jerusalem, a dark moon interval as short as one day plus the daytime length is very rare, and occurs only when Moon is near perigee, the lunar latitude is near its maximum north deviation (about 5°N), Earth is near its orbital aphelion, and in such cases both the last old and first new lunar crescents are barely visible. The most common dark moon interval at Jerusalem is two days plus the daytime length, occurring in about 55% of cases, and in such cases Moon is most often north of the ecliptic and near perigee. In about 40% of cases the dark moon interval at Jerusalem is three days plus the daytime length, and in such cases Moon is most often south of the ecliptic and near apogee. Very rarely, the dark moon interval at Jerusalem reaches four days plus the daytime length, and such cases happen when Moon is very close to apogee and the lunar latitude is near its maximum south deviation (about 5°S).
Astronomically, the lunar opposition or "Full" Moon is the moment when the celestial longitude of Moon is 180° away from Sun when projected onto the ecliptic. Alternatively, one could describe it as the moment when the centers of Sun, Earth, and Moon, in that order, are all in the plane that is perpendicular to the plane of Earth's orbit (the ecliptic plane).
At the moment of a lunar opposition, somewhere in the nighttime world Moon sets in the west at nearly the same time as Sun rises in the east, while simultaneously at nearly antipodal locations in the daytime world Moon rises in the east at nearly the same time as Sun sets in the west. Moon will appear full, and when it occasionally passes through Earth's shadow it will appear dull red, an event referred to as a lunar eclipse.
The lunar orbital plane is tilted about 5° 9' (mean value) relative to the ecliptic plane, therefore most of the time the lunar latitude is either north or south of the ecliptic. If the lunar latitude at the moment of a lunar conjunction is greater than the apparent solar angular diameter north or south of the ecliptic (32 arcminutes, nearly equal to the mean apparent lunar angular diameter) then Moon passes north or south of Sun, respectively, without any solar eclipse.
The Hebrew word molad (plural moladot) means "birth", and it is also a generic term for New Moon, in the sense of that Moon is "born" again at the beginning of each lunar cycle. Depending on the context, however, the word molad could have several alternative meanings:
The Hebrew calendar is nominally supposed to stay aligned with both the solar year (the seasons) and the lunar month.
Originally each Hebrew calendar month started after the observation in Israel of the first visible crescent after the new moon. Jewish communities outside Israel depended on messengers to communicate calendar decisions. In Hebrew year 4119 (Julian 358 AD), after Roman Emperor Constantius II outlawed New Moon announcements, Hillel ben Yehudah, the second-last President of the ancient Sanhedrin, promulgated a fixed arithmetic calendar (probably developed a century earlier by Amora Shmuel "the Astronomer" of Nehardea, Babylonia) based on the Metonic cycle of leap years (235 months in 19 years, with 7 leap months per 19 year cycle) and using simple arithmetic to approximate the moment of the mean lunar conjunction (the "molad", or in plural "moladot"). Since then, the molad of the Hebrew calendar has been nothing more than a fixed arithmetic cycle that determines the provisional date of Rosh HaShanah (the Hebrew New Year Day), subject to possible postponement of 0, 1 or 2 days (depending on the weekday and timing of the molad moment).
For centuries Jews have followed a tradition of announcing the moment of the molad for the coming month in their synagogues during the morning services on the last Sabbath before the start of each month except Tishrei. All of these announced moladot are of no relevance to the Hebrew calendar! The only molad that matters in Hebrew calendar arithmetic is the molad of Tishrei, yet that molad is never announced!
Furthermore, if 1/4 day is added to the molad moment, a shortcut originally proposed by the German mathematician Johann Carl Friedrich Gauss (Gauß), or if a 1/4 day is permanently included as an offset at the epoch, then even for the molad of Tishrei its time-of-day becomes irrelevant; all that matters is the day on which the molad lands.
There is nothing "mystical" about the molad, it is just simple arithmetic, 100% predictable and exactly calculated.
Some people seek to know the clock time of the molad moment, perhaps to choose an auspicious moment for starting a significant life event after a new lunar cycle begins. In the case of the traditional molad, such thinking is at best misguided, because there is nothing particularly "auspicious" about such a moment, due to the drift of the traditional molad with respect to the mean lunar cycle, as will be shown below, and due to the superimposed periodic variations of the duration of the lunar cycle. In addition, there is no valid method to convert a traditional molad moment to a clock time in any time zone. For such purposes everyone should refer to reliable sources for the actual astronomical moment of the lunar conjunction, properly adjusted to the local time zone, taking into account Daylight Saving Time, if applicable.
Our freeware calendrical calculator, Kalendis, can export all of the moladot for a full Hebrew calendar year in a variety of formats, click here for an example of its built-in web page format. Note that the right-most column on that report shows the moment of the actual lunar conjunction, expressed in terms of the time zone of the currently selected locale (in the example, Jerusalem). Looking at the numbers, it is hard to see any relationship between the announced traditional molad moments and the actual lunar conjunction moments, but rest assured that there is a relationship, which we will examine in detail and quantify here. Kalendis also shows the molad moment to announce (for the coming month) at the bottom of each monthly zmanim (ritual times) export report, click here for examples of zmanim reports for an entire Hebrew calendar year.
Some hold that the molad time is based on Temporal Hours (sha'ot zmaniot), but that seems inconsistent with the simple arithmetic used to calculate the molad, which adds a constant 29+13753/25920 days per month. In our time, Hebrew year 5766, the Molad minus Mean Lunar Conjunction difference varies from the molad being 2+1/2 hours ahead of the Mean New Moon in Nisan to being 5+2/3 hours behind the Mean New Moon in Tishrei and Cheshvan. This more than 8-hour span dwarfs the few minutes of variance that could be accounted for by the use of Temporal Time. Furthermore, the modest differences in temporal hour length at the latitude of Israel average out during the full solar year cycle.
Nevertheless, the Talmud and many other rabbinic sources regard the molad moment as if it were in terms of Temporal Time, so that a molad moment that is between 00:00h and 05:59h is regarded as between sunset and midnight, between 06:00h and 11:59h is regarded as between midnight and sunrise, between 12:00h and 17:59h is regarded as between sunrise and mid-day, and between 18:00h and 23:59h is regarded as between mid-day and sunset. Thus rabbinic sources freely relate the molad moment to the time-of-day without actually calculating the moments of sunset, midnight, sunrise, or mid-day.
Evidently, for the purposes of Jewish Law (halachah), this approximation has a long tradition of acceptance, and I used it to advantage in simplifying my astronomical Hebrew calendar arithmetic during development of the rectified Hebrew calendar.
The reader can skip this section if the details for calculating the traditional molad moment are not of interest.
In the traditional method of reckoning molad moments, as it often appears in rabbinic literature, one simply adds or subtracts 29 days 12 hours and 793 parts from a known molad moment to obtain the molad moment of the next or prior month, respectively. Although simple, this classical method is often inconvenient, because it requires starting from a known molad moment that is reasonably near to the desired molad. It is inefficient when one needs to jump from a given molad to another that is an arbitrary number of months into the future or past, because that requires repeated steps. Any serially repeated addition or subtraction calculation is also risky because if a mistake is made somewhere along the way then that introduces an offset error that may be carried through indefinitely. A shortcut commonly employed in classical calculations is to ignore the day count, simply tracking the weekday instead. This shortcut can lead to ambiguities when the ignored days add up to more than a week, and although it does yield sufficient information for the purpose of announcing a molad moment, the weekday with time of day is not enough information for calendrical calculation purposes. Instead, the following is a direct method for the unambiguous calculation of any molad moment.
A molad moment has an integer part, which represents the calendar day and weekday of the molad, and a fractional part that represents the time-of-day of the molad. The ideal way to carry out molad calculations is using exact computing engines capable of arbitrary-precision arithmetic, such as Mathematica, or the computer programming language "LISP", which can calculate the molad moment to the exact fraction or whole number of parts. It is also possible to obtain correct results using floating point arithmetic or a digital calculator, provided that the user takes care to round intermediate results at appropriate points, as will be shown below, but the calculation must support at least Double Precision floating point, because Single Precision is often insufficient to resolve moments to better than one second accuracy.
For the purposes of announcing the molad moment or determining the molad of Tishrei for Hebrew calendar arithmetic, the moment must be calculated exactly. For the purpose of comparing moladot to astronomical lunar conjunction moments, however, the small errors introduced by the limitations of floating point arithmetic are negligible.
In a computer program, non-variable values can be declared as constants, for optimal performance.
Calendrical calculations make frequent use of dividing a number and keeping only the remainder, for example, dividing by 7 to determine the weekday, as will be done below. Many programming languages have a MOD operator or function intended for this purpose, but in many languages MOD handles negative or real numbers improperly (the MOD operator of Microsoft Visual Basic is defective on both counts). To avoid the risk of such errors, herein I will use the solution recommended by Dershowitz & Reingold in Calendrical Calculations: 3rd Edition (CC3, see <http://www.calendarists.com/>):
modulus( x, y ) = x – y × floor( x / y )
Not being limited to integer division, the CC3 modulus function also works properly with floating point (real number) parameters provided both the x and the y parameter and the function return value are declared as Double Precision.
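As an illustration only (Python is my choice here, not part of the traditional arithmetic), the CC3 modulus expression can be sketched directly; note that Python's built-in % operator already behaves this way, unlike the defective MOD operators mentioned above:

```python
import math

def modulus(x, y):
    """CC3-style modulus: x - y * floor(x / y).
    Handles negative and real-number arguments correctly,
    unlike MOD in some programming languages."""
    return x - y * math.floor(x / y)

# A negative dividend wraps around to a non-negative result,
# as needed for weekday arithmetic.
print(modulus(-3, 7))   # 4
# Real-number (floating point) arguments also work.
print(modulus(5.5, 2))  # 1.5
```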
The traditional epoch or first day of the Hebrew calendar, on the first day of Tishrei of the first year, was a Monday (Yom Sheini, or second day of the week), starting at the sunset at the end of the daytime on Sunday. Assign day number one to that epoch:
HebrewEPOCH = 1
(A good case could be made to assign HebrewEPOCH = 0 instead, to count elapsed days relative to the epoch, but that would require changes to some of the arithmetic below, such as the weekday expression. Suffice it to say that the elapsed day count is always one day less than any calculated day number.)
To compute a traditional molad moment, start by calculating the Lunation number since the traditional epoch of the Hebrew calendar, which depends on the given Hebrew year number and month number:
Lunation = ElapsedMonths( hYear, hMonth )
Within the ElapsedMonths function, adjust the year number to account for the traditional month numbering that starts from Nisan, storing the adjusted year number in a local variable:
IF hMonth < Tishrei THEN TheYear = hYear + 1 ELSE TheYear = hYear
where the Hebrew months are numbered from Nisan=1 to Tishrei=7 to Adar=12, etc.
In 19 years there are 19 × 12 = 228 regular months, plus 7 leap months, a total of 235 lunar months per cycle. The following expression returns the number of elapsed months:
RETURN hMonth – Tishrei + quotient( 235 × TheYear – 234 , 19 )
For example, for Cheshvan, the 8th month of traditional Hebrew calendar year 5766:
= 8 – 7 + quotient( 235 × 5766 – 234 , 19 ) = 1 + quotient( 1354776 , 19 ) = 1 + 71304 = 71305
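The ElapsedMonths function can be sketched in Python (a sketch for illustration; the function and variable names follow the text, and `quotient` is simply floor division):

```python
TISHREI = 7  # months are numbered from Nisan = 1, so Tishrei = 7

def elapsed_months(h_year, h_month):
    """Lunations elapsed since the traditional Hebrew calendar epoch."""
    # Months before Tishrei belong to the year that began in the
    # previous Tishrei, so count them with the next year number.
    the_year = h_year + 1 if h_month < TISHREI else h_year
    return h_month - TISHREI + (235 * the_year - 234) // 19

print(elapsed_months(5766, 8))  # Cheshvan 5766 -> 71305, per the example above
print(elapsed_months(4119, 7))  # Tishrei 4119 -> 50933, the era of Hillel ben Yehudah
```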
The assumed initial conjunction time (called "BaHaRad", or PartsAtEpoch below) of the first molad (Lunation=0) was 5 hours and 204 parts (each part = 1/18 minute) after the epoch of the Hebrew calendar:
HoursPerDay = 24
MinutesPerHour = 60
MinutesPerDay = MinutesPerHour × HoursPerDay = 1440
PartsPerDay = MinutesPerDay × 18 = 25920
PartsPerHour = PartsPerDay / HoursPerDay = 1080
PartsPerMinute = PartsPerDay / MinutesPerDay = 18
PartsAtEpoch = 5 / HoursPerDay + 204 / PartsPerDay = 5/24 + 204/25920 = 5604/25920 = 467/2160 = 0.2162037...
(the final digits 037 of the decimal fraction repeat forever)
The constant traditional molad interval is 29 days, 12 hours, and 44+1/18 minutes. Separate the 44+1/18 minutes in order to improve the accuracy of floating point calculations:
Parts793 = 44 / MinutesPerDay + 1 / PartsPerDay = 793/25920 of a day
TwentyNineAndHalf = 29 + 12 / HoursPerDay = 29.5 days
Ari Meir Brodsky of Toronto has pointed out that in mental molad arithmetic for the purpose of announcing the molad moment it is easier to disregard whole weeks and calculate using the molad interval in excess of 4 weeks as 1+1/2 day, plus 3/4 of an hour, minus 1 minute, plus 1 part. For more information click here to see Ari's mental molad method web page.
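Ari's decomposition is easy to verify by counting in parts (1 hour = 1080 parts, 1 minute = 18 parts). This sketch confirms that the molad interval in excess of 4 whole weeks equals 1+1/2 days, plus 3/4 of an hour, minus 1 minute, plus 1 part:

```python
PARTS_PER_DAY = 25920
PARTS_PER_HOUR = 1080
PARTS_PER_MINUTE = 18

# The traditional molad interval: 29 days, 12 hours, and 793 parts.
molad_interval = 29 * PARTS_PER_DAY + 12 * PARTS_PER_HOUR + 793

# Excess of the molad interval over 4 whole weeks...
excess = molad_interval - 28 * PARTS_PER_DAY

# ...decomposed as 1+1/2 days, plus 3/4 hour, minus 1 minute, plus 1 part.
decomposed = (3 * PARTS_PER_DAY) // 2 + (3 * PARTS_PER_HOUR) // 4 \
             - PARTS_PER_MINUTE + 1

print(excess, decomposed)  # 39673 39673 (both in parts)
```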
The assumed molad epoch was traditionally derived as follows (Tosefot on Talmud Bavli tractate Rosh HaShanah page 8b):
The following expression yields the molad moment as the number of days since the epoch of the traditional Hebrew calendar, plus the fraction of a day elapsed since a mean sunset time that is 6 hours before civil midnight:
MoladMoment = HebrewEPOCH + PartsAtEpoch + ( Lunation × TwentyNineAndHalf ) + ( Lunation × Parts793 )
If the calculation to this point was carried out using floating-point arithmetic then it is necessary to round the molad moment so that its fractional portion corresponds to a whole number of parts (this must be carried out before doing anything else with the molad moment):
MoladMoment = round( MoladMoment × PartsPerDay ) / PartsPerDay
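As an alternative to rounded floating point, the whole molad moment can be computed exactly with rational arithmetic. A sketch using Python's fractions module, following the constants and expressions above (checked against the Cheshvan 5766 worked example given later in the text):

```python
from fractions import Fraction

HEBREW_EPOCH = 1
PARTS_PER_DAY = 25920
PARTS_AT_EPOCH = Fraction(5604, PARTS_PER_DAY)  # 5h 204p after the epoch ("BaHaRad")
# The molad interval: 29 days + 12 hours + 793 parts.
MOLAD_INTERVAL = 29 + Fraction(12, 24) + Fraction(793, PARTS_PER_DAY)

def molad_moment(lunation):
    """Exact molad moment: whole days since the epoch, plus the
    fraction of a day elapsed since mean sunset."""
    return HEBREW_EPOCH + PARTS_AT_EPOCH + lunation * MOLAD_INTERVAL

m = molad_moment(71305)              # Cheshvan 5766 is lunation 71305
days = m // 1                        # integer day number: 2105680
parts = (m - days) * PARTS_PER_DAY   # exact whole number of parts: 5989
print(days, int(parts))  # 2105680 5989
```

Because every quantity is an exact Fraction, no intermediate rounding step is needed.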
To determine the weekday of any molad moment, divide the number of whole days since the epoch by 7, and add one. This yields a number from 1=Sunday (Yom Rishon, or first day of the week) to 7=Saturday (Shabbat, or seventh day of the week):
MoladWeekday = modulus( floor( MoladMoment ) , 7 ) + 1
For this weekday expression to yield the correct result, modulus( HebrewEPOCH, 7 ) must equal 1, which will be the case if HebrewEPOCH was set equal to 1 as recommended above. If for some reason you set HebrewEPOCH to some other value, then you must modify the expression appropriately.
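The weekday expression can be sketched as follows (for illustration; the day number used below is taken from the Cheshvan 5766 worked example in the text):

```python
import math

def molad_weekday(molad_moment):
    """Weekday of a molad moment, from 1 = Sunday (Yom Rishon)
    to 7 = Saturday (Shabbat). Assumes HebrewEPOCH = 1."""
    return math.floor(molad_moment) % 7 + 1

# Cheshvan 5766: molad moment 2105680.2310571 days.
print(molad_weekday(2105680.2310571))  # 4 = Yom Revii
```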
When announcing the molad weekday, state it in Hebrew, because Hebrew calendar days begin at sunset, not midnight.
Only for the purpose of announcing the molad time, separate it into hours and parts (alternatively, not recommended because it is not the traditional way: hours, minutes, and parts). Extract the fractional portion of the molad moment and convert it to an exact whole number of parts:
MoladFractionAsParts = [ MoladMoment – floor( MoladMoment ) ] × PartsPerDay
Then calculate the separate time components for the announcement:
MoladHour = floor( MoladFractionAsParts / PartsPerHour )
MoladParts = modulus( MoladFractionAsParts , PartsPerHour )
Alternatively, to separate the minutes and residual parts of a minute:
MoladMinute = modulus( floor( MoladFractionAsParts / PartsPerMinute ) , MinutesPerHour )
MoladParts = modulus( MoladFractionAsParts , PartsPerMinute )
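The separation of the announced time into hours, minutes, and residual parts can be sketched as (using the 5989 parts of the Cheshvan 5766 example given later in the text):

```python
PARTS_PER_HOUR = 1080
PARTS_PER_MINUTE = 18

def split_molad_time(fraction_as_parts):
    """Split a molad time-of-day, expressed as a whole number of parts
    after mean sunset, into (hours, minutes, residual parts)."""
    hours, rest = divmod(fraction_as_parts, PARTS_PER_HOUR)
    minutes, parts = divmod(rest, PARTS_PER_MINUTE)
    return hours, minutes, parts

print(split_molad_time(5989))  # (5, 32, 13): 5 hours, 32 minutes, 13 parts
```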
When announcing the molad time, state that the time is counted from mean sunset, not midnight.
Some organizations subtract 6 hours to convert the molad time to "civil" time, but that is a misleading and futile adjustment because the traditional molad has an undefined and undefinable time zone, as will be demonstrated herein.
If a "civil" time adjustment is applied to the molad moment itself then the molad will land on the incorrect weekday 25% of the time.
Never apply Daylight Saving Time to a molad moment, otherwise the molad of Tishrei will sometimes land on or be postponed to the wrong weekday.
Engineer Yaaqov Loewinger of Tel Aviv, Israel published an essay in Hebrew on the subject of the announcing of the molad moment in synagogues. It is a good review and cites an impressive collection of traditional sources on the subject. The default format of the molad moment as reported by Kalendis is as recommended by Loewinger, and is also the same format as just outlined above. Please see Loewinger's essay, which is freely available on-line in PDF format at <http://www.hakirah.org/Vol 6 Loewinger.pdf> 244 KB in the Summer 2008 issue of Hakirah, The Flatbush Journal of Jewish Law and Thought.
As an example, by the above arithmetic the traditional molad moment for the month of Cheshvan in Hebrew year 5766 was at 2105680.2310571 days or exactly 2105680 days and 5989 parts. Dividing the integer portion by 7 yields a remainder of 3 to which we add one to obtain the molad weekday of 4 = Yom Rivii. We don't translate this weekday as Wednesday, because it is a Hebrew weekday and the molad moment is 27 minutes and 5 parts before midnight, so in terms of "civil" time it would be late on "Tuesday evening" (placed in quotes because the molad time zone is undefined, as explained in the previous paragraph). The 5989 parts separate to 5 hours and 589 parts (or 5 hours 32 minutes and 13 parts) after mean sunset.
Some people use the molad moment to determine the earliest and latest time for saying the Kiddush Levanah prayer (sanctification of the Moon) once during the waxing half of each lunar cycle, when Moon is clearly visible, but again because of the ritually undefined time zone problem such calculations are of doubtful validity. Key sources for earliest and latest time rules are Shulchan Aruch: Orach Chaim 426:3-4 and the commentary upon it Mishneh Berurah 426:17-20, but neither source is reconcilable with astronomical reality.
Why not determine the Kiddush Levanah time limits relative to the actual astronomical lunar conjunction and lunar opposition moments, respectively, expressed according to the local clock time? These astronomical moments can be calculated by our freeware computer program, Kalendis, see <http://www.sym454.org/kalendis/>. Alternatively, there is the simple observational method: if the local moonrise is before sunset then the waxing half has not yet ended.
For comparison with astronomical lunar conjunction moments, further adjustment is necessary, because astronomical moments are usually calculated as Universal Time (from midnight at the Prime Meridian), and usually relative to one of the following calendar epochs:
Finally, adjust for the time zone difference. For example, if the astronomical moment is computed as usual for the Prime Meridian, to evaluate if the molad moment refers to Jerusalem Local Mean Time add 2 hours and 21 minutes to the astronomical moment, because Jerusalem is about 35+1/4° east of the Prime Meridian, and each degree corresponds to 4 minutes of local mean time difference, so Jerusalem Local Mean Time is 35+1/4 × 4 = 141 minutes = 2 hours and 21 minutes ahead of time at the Prime Meridian.
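The longitude-to-time conversion in this paragraph is simple proportionality (Earth rotates 360° in 24 hours, so each degree of longitude corresponds to 4 minutes of local mean time); a sketch:

```python
def local_mean_time_offset_minutes(longitude_east_degrees):
    """Minutes by which local mean time runs ahead of the Prime Meridian."""
    # 1440 minutes per day / 360 degrees = 4 minutes per degree.
    return longitude_east_degrees * 4

offset = local_mean_time_offset_minutes(35.25)  # Jerusalem, about 35+1/4 deg East
print(offset)                   # 141.0 minutes
print(divmod(int(offset), 60))  # (2, 21): 2 hours and 21 minutes
```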
If the molad is the moment of the mean lunar conjunction, then it can't be so for every location on Earth. The mean lunar conjunction can occur at that moment only for a single reference meridian of longitude on Earth. At the moment of a mean astronomical lunar conjunction, a clock placed at that longitude and set to display local mean time will indicate the same time as the corresponding molad moment (less 6 hours, because the molad moment is relative to mean sunset whereas clock time is relative to civil midnight).
The molad reference meridian has nothing to do with direct observation of lunar conjunctions, which can never be observed except during a solar eclipse when Sun is above the local horizon. Even during solar eclipses it is technically difficult to determine the maximum eclipse moment. Furthermore, astronomical lunar conjunctions are computed for a geocentric "observer", even though an observer at the center of Earth couldn't possibly make any astronomical observations, let alone get there and survive! Geocentric calculations eliminate variations due to locale-specific parallax.
The explicit specification of a reference meridian is standard for any astronomical calendar. For example, the modern Persian astronomical solar calendar uses 52.5° E, which corresponds to Iran Standard Time (UT+3.5h), and the traditional oriental lunisolar astronomical calendars use Standard Time in Beijing (UT+8h) for the Chinese, Standard Time in Tokyo (UT+9h) for the Japanese, Standard Time in Seoul (also UT+9h) for the Korean, and Standard Time in Hanoi (UT+7h) for the Vietnamese calendar.
I could not find, however, any rabbinical or classical or even a modern source that authoritatively specified which longitude is the reference meridian for the traditional molad of the Hebrew calendar, but there were many internet web sites and books that casually mentioned a commonly-held assumption that the molad refers to the meridian of Jerusalem. For example this assumption is mentioned in many places at the web site of the Orthodox Union at <http://www.ou.org>, and in the translation of the "Blessing of the New Month" ceremony of the very popular prayer book entitled The Complete ArtScroll Siddur, published by Mesorah Publications Ltd., Brooklyn, New York, it says "Announcement of the Molad: It is customary – but not obligatory – to announce the precise time at which the new cycle of the moon will begin in Jerusalem." I also found a minority of sources (web sites and books) that assumed or even presented evidence that the molad referred to somewhere in Iraq/Babylonia/Persia, or Afghanistan, or the "center of the civilized world", or the East China Sea between China and Japan (90° east of Jerusalem), or in the Pacific Ocean an hour to the east of Japan (120° east of Jerusalem).
It seemed to me that it ought to be straightforward, using modern astronomical algorithms, to prove which of the alternatives, if any, was the correct meridian, or at least the historically correct meridian. We know today that the mean interval between lunar conjunctions is steadily getting shorter in terms of the mean solar days that are appropriate for calendar calculations. We also know today that the Earth rotation rate is more-or-less steadily slowing down due to Earth-Moon tidal interactions. Back in the era when the fixed arithmetic traditional Hebrew calendar was established, however, they did not know about tidal or even basic gravitational interactions. Therefore if we want to know which meridian was the original reference meridian for the molad, we must include the era of Hillel ben Yehudah in our calculations, as well as dates in the remote past and distant future. Furthermore, to end up at the correct meridian, our calculations must as accurately as possible take into account the historical changes in the lunar revolution rate and the Earth rotation rate.
Limitations: The astronomical algorithms that I initially used for lunar position are based on a truncation by Jean Meeus of the semi-analytical ELP-2000 lunar theory of Jean Chapront et al. The parabola used to estimate Delta T (to take changes in the Earth rotation rate into account) was based on the approximation, derived mainly from historical records of solar eclipses, that the mean length of the solar day changes by about 1.75 milliseconds longer each century (for more information about Delta T see this page). Relativistic effects are not accounted for. (These limitations were essentially eliminated in my repeat calculation, to be discussed further down this page, but the conclusions were nearly the same.)
I started with the most widely held assumption, that the molad referred to the mean lunar conjunction in terms of Jerusalem Local Mean Time (≈ Israel Standard Time + 21 minutes). For all Hebrew years from the traditional year of Creation to the year 10000, I compared the moments of the actual lunar conjunctions, computed using modern astronomical algorithms and expressed in terms of Jerusalem Local Mean Time, with those of the corresponding molad moments. Using that data, I generated a collection of graphs depicting how their average differences and their month-specific differences change over the years.
This first chart (click here 741 KB) shows the difference between the molad moment and the actual astronomical lunar conjunction (in terms of Jerusalem Local Mean Time) for the first 10000 years of the Hebrew calendar. I expressed the x axis as the number of months elapsed since the epoch of the Hebrew calendar for convenience in relating that directly to the molad moment. (I used Microsoft Excel to produce this chart, but that program can't plot more than 32000 points as a single series, therefore I limited the amount of data by sampling every 4th year.) This chart essentially depicts the average relationship between the molad and the actual lunar conjunction.
The horizontal red line is the zero difference line. If the molad truly referred to the Jerusalem Local Mean Time of the actual mean lunar conjunction then all of the plotted points would be distributed as a straight band horizontally across the chart, vertically centered upon that zero difference line, with slightly more than ±14 hours of vertical spread due to the natural variations in the duration of the lunations.
Instead, we see that the points actually follow a curved band, and please note the blue line through the middle of that band, which is a quadratic least-squares statistical regression line (hard to see on a printout, even in color, but easily seen on a computer display screen). The equation for that line is given below the title of the graph — don't panic about all the numbers, it is simply a quadratic equation whose coefficients could have been rounded to perhaps 6 significant figures.
The difference between the red line and the blue curve represents the error or drift of the molad with respect to the actual lunar conjunction. The blue line almost touches the red line near 51000 elapsed months, which was the era of Hillel ben Yehudah (Tishrei 4119 was lunation number 50933). This implies that the molad interval was essentially equal to the actual mean synodic month in that era, and that the molad reference meridian was close to Jerusalem, which has a longitude of 35° 14' 4" East. The closest that the blue line approaches the red line is +23 minutes, which implies that the original molad reference meridian was 23 mean solar minutes east of Jerusalem. Earth rotates 360° in 24 hours, so the corresponding longitude difference is 360° × FractionOfDay = 360° × (23 minutes) / (1440 minutes per day) ≈ 5 3/4°, which corresponds to eastern Jordan. Why would the molad refer there? What if we doubled the longitude difference to ≈ 11 1/2°? The historical site of origin of the Chaldeans, renowned astronomers of Babylonia, was around the City of Ur, which was at a longitude of about 46° 5' East, a longitude difference of almost 11° east of Jerusalem. In the era of Hillel ben Yehudah the majority of Jews lived in the region of that meridian in Babylonia, mostly north of Ur and extending up to Baghdad. If we subtract 23 minutes from all points on the chart, they all shift down by that amount, and the blue line will then touch the red line in the era of Hillel ben Yehudah.
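The time-offset-to-longitude arithmetic used throughout this discussion can be sketched in a couple of lines (the helper name is mine, for illustration only):

```python
def minutes_to_longitude(minutes: float) -> float:
    """Degrees of longitude corresponding to an offset in mean solar minutes.

    Earth rotates 360 degrees in 1440 minutes, so 1 minute of time = 0.25 degrees.
    """
    return 360.0 * minutes / 1440.0

# The +23 minute minimum found near the era of Hillel ben Yehudah:
offset_deg = minutes_to_longitude(23)   # 5.75 degrees east of Jerusalem
```

The same helper converts any of the time offsets on this page (for example, one hour of time corresponds to 15 degrees of longitude).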
This evidence suggested that the original reference meridian of the molad was midway between Israel and Babylonia. Could this have been chosen as a compromise between the two major Jewish population regions of that era? Does that make sense? Where can one find a Torah or Talmud source to justify or even suggest such a compromise? Are we to learn from this that if it were done today then the molad meridian would be placed halfway between Israel and New York (at the longitude of the western tip of Africa, Canary Islands, Iceland)? There had to be a better explanation. Due to limitations of the astronomical algorithms and Delta T approximation that I initially used, it seemed worthwhile to independently and more accurately confirm the calculations.
At the beginning of January 2007 a new version 9.1 of SOLEX was released. This program performs high accuracy astronomical calculations using numerical integration, the "gold standard" for celestial mechanics, with documented excellent agreement against the world reference DE406 algorithms of NASA's Jet Propulsion Laboratory. The new SOLEX version introduced (following my request for this feature) the ability to automatically search for lunar conjunction moments, logging those moments to better than one-second accuracy. I used SOLEX 9.1 to develop very accurate arithmetic for calculating mean lunar conjunction moments, as given on my "Length of the Lunar Cycle" web page at <http://www.sym454.org/lunar/>. SOLEX does take the most important relativistic effects into account.
Furthermore, near the end of January 2007 the NASA Eclipses web site published a new 5-millennium canon of solar eclipses, and the authors Fred Espenak and Jean Meeus also posted at <http://eclipse.gsfc.nasa.gov/SEcat5/deltatpoly.html> newly updated expressions for approximating Delta T to take into account historical changes in the Earth rotation rate, based on recently revised and extended analyses of historical lunar and solar eclipses records, published by F. Richard Stephenson et al.
For the first 10500 years of the Hebrew calendar (>130000 lunar months), I used SOLEX to compute the mean lunar conjunction moments in terms of TT (Terrestrial Time), converted those moments to UT (Universal Time) by subtracting the new NASA Delta T approximation, shifted them to Jerusalem Local Mean Time by adding 2 hours and 21 minutes, then added 6 hours to count time from mean sunset instead of civil midnight, and finally subtracted those astronomical moments from the corresponding traditional molad moments to generate the following plot (click here or on the chart to open a high-resolution PDF version 91KB):
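The conversion chain just described can be sketched as follows; the function name and calling pattern are placeholders of mine, since the actual computation consumed SOLEX output files:

```python
def conjunction_tt_to_calendar_scale(conj_tt_days: float, delta_t_seconds: float) -> float:
    """Apply the conversion chain described above; all moments are in days.

    conj_tt_days: mean lunar conjunction moment in Terrestrial Time (TT).
    delta_t_seconds: Delta T for that epoch, in seconds.
    """
    ut = conj_tt_days - delta_t_seconds / 86400.0        # TT -> Universal Time
    jerusalem_lmt = ut + (2 * 60 + 21) / 1440.0          # + 2h 21m -> Jerusalem Local Mean Time
    from_mean_sunset = jerusalem_lmt + 6.0 / 24.0        # count time from 6 PM mean sunset
    return from_mean_sunset

# The plotted quantity would then be:
#   molad_moment_days - conjunction_tt_to_calendar_scale(conj_tt_days, delta_t)
```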
This numerical integration independently confirmed that the molad minus Jerusalem mean lunar conjunction difference was around 23 minutes in the era of Hillel ben Yehudah (the close-up inset graph shows it at about 22+1/2 minutes). However, back in the era of the Maccabees and of the ancient Greek astronomer Hipparchus, there was another minimum that was even closer to Jerusalem time, just under 15+1/2 minutes, corresponding to an original molad reference meridian of longitude that was slightly less than 4° east of Jerusalem.
Since its origin in the era of the Maccabees the molad reference meridian has been drifting eastward at an accelerating rate, because the traditional molad interval was fixed at a length of 29 days 12 hours and 44+1/18 minutes whereas the astronomical mean synodic month has been getting about 25 microseconds shorter with each passing month. Around two centuries before the era of Hillel ben Yehudah, the Greek astronomer Ptolemy of Alexandria published his multi-volume treatise of astronomy, known today as Almagest, in which he gave the length of the mean synodic month in sexagesimal format as 29 days 31' 50'' 8''' 20'''', that is 29 days 31 minutes 50 seconds 8 thirds and 20 fourths, numerically exactly equal to the traditional molad interval (see "Why Divide Hours into 1080 Parts?" at <http://www.sym454.org/hebrew/chelek.htm>), but Ptolemy cited Hipparchus as the source for that mean synodic month. We no longer have the original writings of Hipparchus on this topic, but in the 20th century a bronze astronomical calculator mechanism dating back to the 2nd century BC was discovered near Antikythera on a shipwreck laden with goods from the Isle of Rhodes (where Hipparchus used to live and teach). The Antikythera Mechanism is thought to have been constructed by Hipparchus or Posidonius or others associated with his school of astronomy. Its clockwork multi-geared mechanism quite accurately calculated the lunar position and phase, including a clever mechanical accounting for the major periodic variations of the lunar cycle that are due to the eccentricity of the lunar orbit. Perhaps such a calculator was available to the Maccabees?
Another authority who lived around the era of Ptolemy was Rebbe Eliezer ben Hurcanus (אליעזר בן הורקנוס), traditional author of Pirkei D'Rebbi Eliezer, who described the mean lunar cycle in considerable detail (chapters 6 and 7), including specification of the traditional molad interval that is still used today, although his teachings weren't assembled and published until several centuries after Hillel ben Yehudah.
The earlier minimum difference of 15 minutes 27 seconds corresponded to an original molad reference meridian that was 3° 51' 45" east of Jerusalem. By adding Jerusalem's longitude of 35° 14' 4" E (from the Prime Meridian that runs through the Greenwich Observatory, UK) one obtains 39° 5' 49" E as its original longitude (2h 36m 23s ahead of UT), which is between Israel and Babylonia, but not halfway, rather it is closer to Israel than Babylonia. Although this independent recalculation confirmed that the molad reference meridian was never in Jerusalem or even in Israel, and never will be in Jerusalem, a "compromise" between the two historical major Jewish centers can't explain why the original meridian was about 2 degrees closer to Israel than the halfway point.
Perhaps the answer comes from the promise of HaShem to Abram in the Torah, Genesis chapter 15 verse 18: "On that day HaShem made a covenant with Abram, saying To your descendants have I given this land, from the river of Egypt to the great river, the Euphrates River." This territory also corresponds to the full range of the patriarch's travels during his lifetime, as described in the Torah, from Ur to Egypt. The Torah later specified in Genesis chapter 21 verses 12-13 that this promise referred only to those who descended through Isaac (Yitzchak), and in Genesis chapter 28 verses 1-5 through Jacob (Yaacov).
The Nile River passes through Cairo at 30° 3' N, 31° 22' E. The Euphrates River joins the Tigris River near the town of al-Qurnah at 31° 1' N, 47° 25' E, continuing as the Arvadrud River to the Persian Gulf. In the era of the patriarch, however, the Euphrates and Tigris together formed a delta which drained directly into the inland extension of the gulf, and Ur was a gulf coastal city on the west bank of the Euphrates. Four millennia of mesopotamian silt deposition caused the gulf waters to recede about 200 km to the south, despite globally rising sea levels. The meridian of longitude that is halfway between Cairo and al-Qurnah is at 39° 23' 30" E, which is 4° 9' 26" E of Jerusalem, corresponding to (1440 minutes per day) × 4.157° / 360° = 16 minutes and 38 seconds ahead of Jerusalem Local Mean Time (click here to see map). This agrees with the more accurate molad meridian confirmation to within 71 seconds of time, or less than 17 arcminutes of longitude (one minute of time = 15 arcminutes of longitude, or one degree of longitude = 4 minutes of time).
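The midpoint arithmetic can be checked directly from the coordinates given above (the helper and variable names are mine):

```python
def dms_to_deg(d: int, m: int, s: float = 0) -> float:
    """Convert degrees, arcminutes, arcseconds to decimal degrees."""
    return d + m / 60 + s / 3600

cairo = dms_to_deg(31, 22)          # Nile at Cairo, longitude east
al_qurnah = dms_to_deg(47, 25)      # Euphrates-Tigris junction, longitude east
jerusalem = dms_to_deg(35, 14, 4)

midpoint = (cairo + al_qurnah) / 2          # 39 deg 23' 30" E
offset_deg = midpoint - jerusalem           # about 4 deg 9' 26" east of Jerusalem
offset_minutes = offset_deg * 1440 / 360    # about 16m 38s ahead of Jerusalem LMT

# Compare with the 15m 27s minimum found by the numerical integration:
discrepancy_seconds = offset_minutes * 60 - (15 * 60 + 27)   # about 71 seconds
```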
The arcseconds and arcminutes in the preceding paragraph are not really significant, due to the uncertainties of the Delta T approximation and of the original Nile and Euphrates reference meridians of longitude. It suffices to round the minimum time difference to +16 minutes relative to Jerusalem Local Mean Time, corresponding to a longitude that is 4° east of Jerusalem, or a Local Mean Time that is 2 hours and 37 minutes ahead of UT or that is 37 minutes ahead of Israel Standard Time. That meridian passes through the junction of the modern borders of Jordan, Iraq, and Saudi Arabia.
Although I am aware of no specific Jewish record of this event, there was a total solar eclipse that crossed central ancient Israel shortly before the apparent era of origin of the traditional molad calculation, and its maximum occurred at the Nile-Euphrates midpoint! There are two images at NASA that portray this interesting eclipse: a map of eclipses during that era, which is at <http://eclipse.gsfc.nasa.gov/SEatlas/SEatlas-1/SEatlas-0339.GIF> (the eclipse of interest is the blue band labeled as "-0335 Jul 04" [NASA includes a year zero] with an asterisk marking the maximum point in today's eastern Jordan). There is also a schematic diagram at <http://eclipse.gsfc.nasa.gov/5MCSEmap/-0399--0300/-335-07-04.gif>, which details this specific eclipse. The date was Julian July 4, 336 BC (no year zero) = Julian Day 1598883.895 = traditional Hebrew 1 Av 3425, which happened to be less than a week after the north solstice (hence essentially as high in the sky as an eclipse can possibly be for any location in Israel), and the eclipse totality path crossed Jerusalem shortly before noon.
The Talmud Bavli, tractate Sukkah 29a declared: "If it is in eclipse in the east, it is a bad omen for those who dwell in the east; if in the west, it is a bad omen for those who dwell in the west; if in the midst of heaven it is a bad omen for the whole world." Perhaps this eclipse formed the basis for that statement, because in Israel it was "in the midst of heaven" as it happened shortly before noon, and because it occurred in the year of ascent of Alexander the Great to the throne of Macedonia, heralding 12 years of military campaigns during which he conquered essentially all of the territory that had been promised to the descendants of Abraham!
Is it possible that this solar eclipse served as the initial demonstration that a lunar conjunction is a specific, calculable moment, whereas the first visible new lunar crescent is not, initially leading to development of a calculation for the mean lunar conjunction, and ultimately to the switch in criteria for determining the start of calendar months? Perhaps it also provided a very tangible demonstration of the principle that led to the molad zakein rule (that the molad is postponed to the next calendar day if the calculated moment is on or after noon), because that solar eclipse / lunar conjunction moment occurred just prior to noon, yet the new lunar crescent couldn't have been visible at the next sunset, by which time Moon was below the western horizon. At the sunset after that, however, the new lunar crescent should have been easily visible throughout Israel, nearly 1+1/3 days after the eclipse.
For more information, please see the heading "The MoladMoment Function" as well as the subsequent sections about the molad on the rectified Hebrew calendar web page at <http://www.sym454.org/hebrew/rect.htm>.
Some have claimed that the molad seems to refer to a meridian in Afghanistan, whose solar time is about 2 hours ahead of Jerusalem. The most likely reason for this "Afghanistan mistake" is that in the present era if one looks at a clock showing Jerusalem Local Mean Time then the moladot are an average of about 2 hours later than the astronomical mean lunar conjunction moments. Although this 2-hour delay is entirely accounted for by the accumulated progressive shortening of the Mean Synodic Month since Hebrew year 4119 (as documented in the PDFs above, especially the elapsed months version) plus the difference in minutes for referring the calculation to Jerusalem, it could be misinterpreted to imply that the molad refers to a meridian 2 hours to the east, which indeed today would correspond to the meridian of Qandahar, Afghanistan.
Another possible explanation for the "Afghanistan mistake" is neglecting to subtract Delta T when calculating the moments of all lunar conjunctions, especially for ancient times. The necessity for this correction has only been understood since the beginning of the 20th century, especially since the advent of Atomic Time (1955) and Laser Lunar Ranging (1969). The accuracy of the Delta T correction has been progressively refined over the decades since 1970 as the LLR observations have continued with improved instrumentation, and as progressively more historical lunar and solar eclipse records have been analyzed. The Delta T value that the NASA Eclipses web site polynomial approximation generates for Rosh HaShanah of Hebrew year 4119 is 7107 seconds ≡ 1h 58m 27s ≡ almost 2 hours, which agrees nicely with this explanation.
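This explanation is easy to check against the NASA polynomial itself. Here is a sketch using the Espenak & Meeus expression published for the year range -500 to +500 (coefficients quoted from the NASA Eclipses page; treat them as an assumption of this sketch):

```python
def delta_t_seconds(year: float) -> float:
    """Espenak & Meeus polynomial approximation of Delta T, in seconds.

    Valid for decimal years between -500 and +500.
    """
    u = year / 100.0
    return (10583.6 - 1014.41 * u + 33.78311 * u**2 - 5.952053 * u**3
            - 0.1798452 * u**4 + 0.022174192 * u**5 + 0.0090316521 * u**6)

# Rosh HaShanah of Hebrew year 4119 = September 21, 358 AD:
dt = delta_t_seconds(358.7)   # about 7107 seconds, i.e. almost 2 hours
```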
In fact, my Average Molad Adjustment is proportional to Delta T 47 KB, which should not surprise anybody because they have the same physical basis (Tidal Acceleration), and as a corollary both Delta T and the Average Molad Adjustment are proportional to the Mean Synodic Month 52 KB. For more information about Delta T see this page. The accuracy of the algorithm for estimating Delta T is critical to this investigation of the relationship between the molad and the actual lunar conjunctions, especially for remote past or distant future differences.
Others have asserted that the molad refers to a meridian that is 6 or 8 hours ahead of Jerusalem, either in the East China Sea between China and Japan (90° east of Jerusalem), or in the Pacific Ocean an hour to the east of Japan (120° east of Jerusalem). A likely explanation for this mistake is the failure to add 6 hours to the astronomical moment to count time from mean sunset instead of civil midnight. In combination with the 2 hour error explained above, this can account for a +8 hour offset. Each hour of time difference corresponds to 360° / (24 hours per day) = 15° of longitude.
In the era when the traditional molad calculation was established the average lunar cycle (Mean Synodic Month, or MSM) was assumed to be eternally constant.
How was the traditional molad interval length originally determined? The MSM was probably first measured by the Babylonians, using records of eclipses, most likely lunar eclipses, which are observed far more frequently at any given place and are much more easily followed than solar eclipses, since each one can be seen simultaneously from everywhere on Earth's night-time hemisphere. One could estimate the MSM by carefully determining the elapsed time between the maxima of two lunar eclipses and then dividing by the number of elapsed lunar months. For best accuracy, the eclipses should be separated by as many years as records allow, preferably spanning several centuries. Even so, basing this calculation on only one pair of eclipses would be misleadingly erroneous unless each eclipse happened to occur during a near-average lunation, or both fell at equivalent points in the periodic lunar variability cycles. To improve accuracy, it would have been necessary to average many eclipse pairs.
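A minimal sketch of that eclipse-pair method (the Julian Day numbers below are invented for illustration; the 4267-lunation span is the eclipse-cycle relation traditionally associated with Hipparchus):

```python
def msm_from_eclipse_pair(jd_first: float, jd_last: float, lunations: int) -> float:
    """Estimate the Mean Synodic Month in days.

    jd_first, jd_last: Julian Day numbers of two eclipse maxima.
    lunations: the whole number of lunar months separating them.
    """
    return (jd_last - jd_first) / lunations

# Hypothetical pair of lunar eclipse maxima spanning exactly 4267 lunations
# (roughly 345 years); the interval length here is invented to illustrate:
msm = msm_from_eclipse_pair(1_000_000.00, 1_126_007.02, 4267)   # ~29.53059 days
```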
Around 147-148 AD, the Greek astronomer Ptolemy of Alexandria published his multi-volume treatise of astronomy, known today as Almagest, in which he gave the length of the MSM in sexagesimal format as 29 days 31' 50'' 8''' 20'''', that is 29 days 31 minutes 50 seconds 8 thirds and 20 fourths, numerically exactly equal to the traditional molad interval (see "Why Divide Hours into 1080 Parts?" at <http://www.sym454.org/hebrew/chelek.htm>), but Ptolemy cited Hipparchus as the source for that MSM.
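The claimed numerical identity is easy to verify with exact rational arithmetic, reading the Almagest digits as successive base-60 fractions of a day:

```python
from fractions import Fraction

# Almagest: 29;31,50,8,20 days, i.e. successive base-60 fractions of a day:
almagest = (29 + Fraction(31, 60) + Fraction(50, 60**2)
            + Fraction(8, 60**3) + Fraction(20, 60**4))

# Traditional molad interval: 29 days, 12 hours, 793 parts (1080 parts per hour):
molad = 29 + Fraction(12 * 1080 + 793, 24 * 1080)   # = 29 + 13753/25920 days

print(almagest == molad)   # True: the two values are identical rational numbers
```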
Whatever the source of the traditional molad interval, the next question must be: How was the molad epoch, called BaHaRad, determined? This is not a trivial question, because if an error had been made in the choice of epoch, then that error would have persisted in the calculation forever, and it is very difficult to choose an appropriate epoch on astronomical grounds, because the lunar conjunction itself is not observable. Even if the maximum of a solar eclipse were used to guide selection of the epoch, that particular eclipse might have occurred anywhere within ±14 hours of the true mean lunar conjunction. The selection of epoch could not have been based on accurate astronomy back-calculated to the epoch, because the traditional moladot near the epoch average 7+1/2 hours late, as shown in the chart above. Rather, they must have positioned the epoch at the moment that minimized the astronomical error in the era when the molad calculation was developed. I am suggesting that the molad calculation was developed in the era of the Second Temple because that is the era when the differences between moladot and actual astronomical lunar conjunctions were minimal. Alternatively, it could have been developed a few centuries later but based on historical astronomical records that originated near the era of the Second Temple.
In my own numerical integration based development of mean lunar conjunction polynomials, as discussed on my web page "The Length of the Lunar Cycle" at <http://www.sym454.org/lunar/>, I found that the mean lunar conjunction in January 2000 AD, which I set as the epoch for my "J2K" lunation count, was almost 4 hours before the actual lunar conjunction on January 6th. Accurately establishing that epoch involved literally hundreds of millions of calculations, thankfully mostly carried out automatically by computer!
In other words, I used an accurate lunar ephemeris to establish my "J2K" epoch. The ancients could have done the equivalent with a reasonably accurate lunar ephemeris that was available at the time. If the calculation was established in the era of Shmuel about a century after Ptolemy or in the era of Hillel ben Yehudah about two centuries after Ptolemy, they could have relied upon the published Ptolemaic algorithms. Alternatively, if the calculation was established much earlier, in the era of the Second Temple, then they could have relied upon a lunar ephemeris obtained from Hipparchus or from the Babylonians.
The web page "The Length of the Lunar Cycle" at <http://www.sym454.org/lunar/> also includes a section explaining a simple method for "Estimation of Fixed Lunar Cycle Calendar Drift", using the drift of the molad of the Hebrew calendar as an example, which arrives at a result that is closely comparable with the evaluations documented on this page.
In the present era the median length of the lunar cycle is about 29 days 12 hours and 30 minutes, the MSM is slightly more than 29 days 12 hours and 44 minutes, the shortest lunations are about 29 days 6 hours and 30 minutes, and the longest are about 29 days and 20 hours. Thus the length of the synodic month varies over a range spanning about 13 hours and 30 minutes! These variations were greater in the past and will diminish in the future:
Centile trends, per group of 4657 lunar months, based on SOLEX 9.1β numerical integration
(the image below is linked to a higher resolution full-page PDF version)
Although the individual lunation length variations span about 13 hours and 30 minutes, compared to the traditional molad the periodic variations appear to be substantially greater, spanning about 28 hours, because series of several short lunations in a row (before and after Earth's orbital aphelion) alternate with series of several long lunations in a row (before and after Earth's orbital perihelion). Click here or on the heading or any of the next 3 charts to open up a higher-resolution PDF version containing all 3 of the following periodic variation charts 384 KB:
The rather large periodic variations of the length of the lunar cycle make it impossible to accurately calculate the mean length of the lunar cycle based on the separation between any two well-established lunar conjunctions, such as total solar eclipses, even when that separation spans many centuries.
The only molad moment that is of relevance to the traditional Hebrew calendar is the molad of Tishrei, which determines the provisional date of Rosh HaShanah, subject to the traditional Rosh HaShanah postponement rules. The variation of the molad of Tishrei relative to the actual lunar conjunctions is not subject to the seasonal variations that affect the moladot overall (discussed below), so it spans "only" about 20 hours, as shown in this chart 315 KB.
For a published explanation of periodic lunar cycle variations see "Chapter 4: The Duration of the Lunation" on pages 19-31 in "More Mathematical Astronomy Morsels" by Jean Meeus, published in 2002 by Willmann-Bell, Inc., Richmond, Virginia.
Once again referring to the chart showing the difference between the molad moment and the Actual Lunar Conjunction (Jerusalem Local Mean Time) 741 KB for the first 10000 years of the Hebrew calendar, note that the further back in time that one back-calculates the molad prior to the era of Hillel ben Yehudah, the later it was with respect to the actual mean lunar conjunction, to a maximum of about 9 1/3 hours late at the epoch of the Hebrew calendar. Going forward in time from the era of Hillel ben Yehudah, the molad drifts later and later at an accelerating rate, being about 2 hours late in our time (Rosh HaShanah 5766 is lunation number 71305 since the epoch), and more than 20 hours late by the year 10000. This implies that although the molad interval was appropriate for the era of Hillel ben Yehudah, it was too short for earlier ages, and too long for later ages. In other words, the molad interval has a constant value, but the mean interval between lunar conjunctions has been steadily getting shorter. The value of the equation for the blue line for any given lunation number represents the average error of the molad with respect to the actual mean lunar conjunction. This value, when subtracted from the corresponding molad moment, "adjusts" it by an amount that restores the relationship that existed in the era of Hillel ben Yehudah, but it includes a shift of the reference meridian to Jerusalem because the astronomical moments were converted to Jerusalem Local Mean Time.
By reconciling the average deviation of the molad, indicated by the curved blue regression line, I calculated the Apparent Mean Synodic Month, and graphically depicted its trend in relation to the Gregorian year, Hebrew year, and Hebrew calendar elapsed months. The following charts show those trends, and each includes a linear equation that can be employed, as it is in Kalendis, to estimate the Mean Synodic Month for any date:
The Kalendis estimate of the Mean Synodic Month is computed in time units of Mean Solar Days. This is quite different from either of the two formulae widely available on the internet and attributed to the ELP2000 Lunar Theory (Ephemerides Lunaires Parisiennes) of Chapront et al, which calculate the length of the Mean Synodic Month in uniform time units of Ephemeris Days (essentially equivalent to days measured in atomic time). The quadratic form is commonly quoted as:

MSM = 29.5305888531 + 0.00000021621 × T − 0.000000000364 × T² days

(the linear form simply omits the final T² term), where T is the number of Julian Centuries (intervals of 36525 ephemeris days) relative to J2000.0 (January 1, 2000 AD at Noon, Terrestrial Time).
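Taking the widely cited ELP2000 value at J2000 (that is, at T = 0) of 29.5305888531 ephemeris days, the present-era gap between the molad interval and the MSM works out as follows; note that this figure is in uniform ephemeris-day units, so it is slightly smaller than the roughly 3/5 mean solar second quoted elsewhere on this page:

```python
molad_interval_days = 29 + 13753 / 25920   # traditional molad interval, in days
elp2000_msm_j2000 = 29.5305888531          # ELP2000 Mean Synodic Month at J2000 (ephemeris days)

# How much longer the fixed molad interval is than the J2000 MSM, in seconds:
diff_seconds = (molad_interval_days - elp2000_msm_j2000) * 86400   # about 0.456 s
```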
Mean Solar Days are getting longer over time due to tidal deceleration of Earth's rotation, whereas Ephemeris Days are by definition of constant duration over the millennia. In terms of Ephemeris Days the durations of lunations are getting longer, because Moon is moving further away, according to on-going daily Laser Lunar Ranging measurements, at an average rate of about 38 mm per year, due to tidal acceleration. However, in terms of Mean Solar Days (that is, calendar days), durations of lunations are getting shorter, because solar days are getting longer at a greater rate than the lunations.
CT Scrutton and RG Hipkin, in "Long-term changes in the rotation rate of the Earth", Earth-Science Reviews 1973 (9): 259-274, claim that if one back-calculates the history of the lunar orbit, allowing for larger tidal acceleration when Moon was closer, it implies a catastrophic period (incompatible with the survival of any forms of life) in the mid-Precambrian era, during which there would have been tremendous tectonic upheavals, volcanic eruptions, and heat generation caused by the close proximity of Moon to Earth, at a distance of only a few Earth-radii.
After conversion of Ephemeris Days to Mean Solar Days, Chapront's linear formula corresponds to a Mean Synodic Month length that is essentially identical to and an important validation of the Mean Synodic Month formula of Kalendis, as shown in the following charts:
All of this evidence points to the conclusion that the molad interval was essentially a perfect fit to the actual mean synodic month in the time of Hillel ben Yehudah, but today it is slightly too long. The Mean Synodic Month is currently about 3/5 second or 9/50 of a part shorter than the traditional molad interval, and is continuing to get shorter at a steady rate of about 1/3 mean solar second per thousand years. The present-era 3/5 second or 9/50 part discrepancy may seem negligible in comparison with the 29+13753/25920 days of the molad interval, but those fractions inevitably accumulate as each and every month passes, so that over the years it progressively adds up to minutes, then hours, and so on. The total accumulated drift of the molad relative to the average alignment that it had in the era of Hillel ben Yehudah is currently about 1 hour and 37 minutes. If one adds 23 minutes to shift the reference meridian to Jerusalem, then the average molad today is more than 2 hours late with respect to the actual mean lunar conjunctions in terms of Jerusalem Local Mean Time.
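As a back-of-envelope check on that accumulation, assuming (per the text) zero average error at Tishrei 4119 and a linearly growing per-month deficit reaching about 3/5 second today:

```python
# Assumed figures from the text: the molad's average error was zero at Tishrei 4119
# (lunation 50933), and the per-month deficit has grown roughly linearly to about
# 0.6 s in the present era (Rosh HaShanah 5766 was lunation 71305).

months_elapsed = 71305 - 50933          # lunations from 4119 to 5766
current_deficit_s = 0.6                 # molad interval minus MSM today, in seconds

# With a linearly growing deficit, the average over the span is half the current value:
accumulated_s = months_elapsed * current_deficit_s / 2.0
accumulated_min = accumulated_s / 60.0  # about 100 minutes, near the cited ~1h 37m
```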
Many with rabbinic education traditionally hold that the molad interval was given to Moses at Mount Sinai.
Many historians have noted that the Hebrew calendar molad interval matches the mean lunation period that was used by Babylonia, Hipparchus, and Ptolemy.
At the time of the Exodus from Egypt, traditionally given as Nisan (Aviv) of Hebrew year 2448 (counting the first year of the life of Adam haRishon as year zero), the traditional molad interval was about 2/5 second shorter than the Mean Synodic Month. (Anyone who is designing a fixed cycle lunar calendar for use today and for as long as possible into the future would be well advised to choose a mean lunation period that is intentionally slightly too short, to maximize the duration that the calendar will serve with reasonable accuracy.)
The era of the Babylonians (most specifically the Chaldeans, who were renowned for their astronomic observations) was around the 6th century BC. For the month of March in 600 BC on the proleptic Julian calendar the Mean Synodic Month was almost 1/4 second longer than the molad interval.
The era of Hipparchus was around 190 BC to 120 BC. For the month of March in 120 BC on the proleptic Julian calendar the Mean Synodic Month was essentially equal to the traditional molad interval (within 25 milliseconds).
Ptolemy's Almagest was published around 150 AD. For the month of March in 150 AD on the Julian calendar the Mean Synodic Month was very close to the traditional molad interval (within 1/10 second).
Taking Rosh HaShanah 4119 as the date that Hillel ben Yehudah started the fixed arithmetic traditional Hebrew calendar = September 21, 358 AD, the Mean Synodic Month was again nearly equal to the traditional molad interval (within 25 milliseconds).
Out of sheer curiosity I also analyzed the molad minus actual new moon relationship separately for each month of the traditional Hebrew calendar. This revealed a periodic annual variation. In the present era, the molad of the month of Nisan is earliest (about 2 1/2 hours early), then the moladot drift later until the months of Tishrei and Cheshvan, whose moladot are equally the latest (about 5 2/3 hours late), after which the moladot drift earlier until the next month of Nisan. Currently the months of Tammuz and Shevat are close to the annual average, which itself is about 2 hours late (referred to Jerusalem). Click here to view the chart for Nisan alone 309 KB, Tishrei alone 315 KB, or all of the Hebrew calendar months in one large file 3.8 MB. Each of these month charts has its own quadratic least-squares regression line that could be used to implement an adjustment of the molad moment tailored for that month's typical seasonal variation, but to do so would be going considerably beyond restoration of the original state that the molad had in the era of Hillel ben Yehudah, and anyhow the only molad moment that matters to the Hebrew calendar is the molad of Tishrei.
In each era there is one month that has the earliest moladot, another month 1/2 year later that has the latest moladot with respect to the actual mean lunar conjunctions, and there are two months in-between whose moladot are closest to the average for the year:
Variations of the Molad Minus Mean New Moon Across the Months of the Calendar Year (hours)
These variations shift rightward (later in the calendar year) as the centuries pass. From the table above one can see that at the calendar epoch (year 1) the earliest molad was in Shevat, by year 4119 it had shifted later to Adar 2 and Nisan, by 5766 (the present era) Nisan alone was earliest, and by 10000 it will have shifted later to Sivan. Meanwhile the latest molad was in Av in the year 1, Tishrei in 4119, both Tishrei and Cheshvan in the present era, and then rightward to Tevet by 10000. As shown in the row for year 4119 and for year 5766, when the earliest molad reaches the last month of the year, it cycles back around to the first month of the year.
Seasonal variations in the timing of the New Moon were noted by Rambam, without explanation; in fact most of chapter 17 in his book deals with such seasonal adjustments to the lunar longitude. These variations, however, have nothing to do with the climatic seasons, nor are they related to precession of the equinoxes.
This shifting pattern follows a perpetual cycle, which turns out to be due to the long-term Earth orbital perihelion cycle. As explained by Jean Meeus (cited above), when Earth is near its orbital perihelion it moves more rapidly, prolonging the durations of the lunations because it takes longer for Moon to move fully into conjunction with Sun (giving the fixed molad arithmetic a chance to "catch up"). When Earth is near its orbital aphelion it moves more slowly, reducing the durations of the lunations because it takes less time for Moon to move fully into conjunction with Sun (so the fixed molad arithmetic "falls behind"). The Earth orbital perihelion gradually shifts later in the calendar year, variably taking 20000 to 25000 years to make a full cycle through all of the seasons. Perihelion advances more rapidly as the Earth orbital eccentricity declines. Perihelion is currently in early January / Tevet and aphelion is currently in early July / Tammuz.
I created a spreadsheet to depict these perihelion-related variations of the molad in a modestly animated way in this Molad minus Mean New Moon spreadsheet 45KB. This spreadsheet requires Microsoft Excel — if you don't have a compatible program then you can click here to download the free Microsoft Excel Viewer 2003 for Windows. At the top right of the spreadsheet page the user can enter any desired Hebrew year number from 1 to 10000, or can use the arrow buttons beside the year number to scan backward or forward through the years (these buttons only work in the full version of Excel, not the free Excel Viewer), then observe the effect on the displayed table and chart. Notice how the peak and valley of the plotted curve shift rightward, and observe the way that the annual average rises later and later as the years progress into the future (or further into the remote past approaching the epoch). Go to the year 4119 and observe the effect of toggling the checkbox option in the top line "Molad refers to halfway between Ur, Babylonia and Jerusalem, Israel." on and off, noting that the annual average line is superimposed upon the zero difference line only when that option is checkmarked.
If you are a pure mathematician with an irrational revulsion against spreadsheets then you can instead open this limited PDF version that covers a few selected years 333KB, but believe me, you will miss a lot!
I was able to generate a 3-dimensional chart that depicts the Hebrew calendar month on the x-axis (horizontal), the Molad minus Mean New Moon on the y-axis (vertical), and the Hebrew Calendar Elapsed Months on the z-axis (depth). The image looks like a saddle, with a hump (due to the latest months) gradually shifting diagonally across the saddle as the years pass (due to the perihelion cycle). Click here to see the Molad minus Mean New Moon as a 3-dimensional surface 43 KB.
If the moladot were adjusted to eliminate the average drift, then that 3-D surface would be a flat plane vertically centered at the zero difference level, with a perihelion-related hump and valley gradually moving diagonally across.
If the moladot were adjusted to match the seasonal variations in the actual mean lunar conjunctions, then the surface would be a hump-free flat plane at the zero difference level.
Looking carefully at any of the Molad minus actual New Moon charts, one can see that the breadth of the curved band is broader at the calendar epoch (year 1), tapering to a slightly narrower breadth by the year 10000. That effect is not an optical illusion, it really is so, and is due to the steadily declining Earth orbital eccentricity over the first 10000 years of the Hebrew calendar. One can see this effect more obviously in the Molad minus Mean New Moon spreadsheet 45KB (click here to download the free Microsoft Excel Viewer 2003 for Windows) — note how the span or full swing of the annual cycle steadily declines from year 1 through to year 10000, along with the Earth orbital eccentricity, which is shown further down the page.
As explained by Jean Meeus, the variability of the duration of lunations decreases as the Earth orbital eccentricity declines, a trend which will continue for many millennia into the future. If Earth's orbit were perfectly circular then there would not be any seasonal variations in the duration of lunations. These seasonal variations have essentially no relationship to Earth's well-known axial tilt or to precession of the equinoxes, which are involved in most other seasonally-variable effects, although the perihelion cycle does have an effect on the lengths of the seasons. See also "Chapter 33: Long-period variations of the orbit of the Earth" on pages 201-205 in "More Mathematical Astronomy Morsels" by Jean Meeus, published in 2002 by Willmann-Bell, Inc., Richmond, Virginia.
The variations in the Earth orbital eccentricity contribute to the variations in the time that it takes for each perihelion cycle. As the orbital eccentricity declines the perihelion advances more rapidly. That is why it is not valid to state the duration of the perihelion cycle more precisely than "20000 to 25000 years". For further insight, please see my web page entitled "The Lengths of the Seasons" at <http://www.sym454.org/seasons/>.
An "average" molad adjustment would symmetrically align the Molad minus Mean New Moon annual curve above and below the zero difference line, allowing it to shift rightward with time along with the Earth orbital perihelion. Individual moladot would vary by several hours from the zero difference. The distribution band of the Actual New Moons would gently "wave" around an average level close to the adjusted molad that would be characteristic for each month of the Hebrew calendar. Click here to view an average-adjusted Molad minus Actual New Moon chart for Nisan 301KB and for Tishrei 299 KB.
By using a separate adjustment formula for each month of the Hebrew calendar year, each molad could be locked (vertically centered) at the Molad minus Mean New Moon zero difference line for all Hebrew calendar years from 1 to 10000, with the Actual New Moon always within slightly more than ±10 hours of the adjusted molad. Click here to view a month-specific adjusted molad minus Actual New Moon chart for Nisan 302KB and for Tishrei 302 KB. All of the other month-specific adjusted molad charts look the same.
A month-specific molad adjustment might be worth considering for the molad of Tishrei, because it is the only molad that affects the arithmetic of the Hebrew calendar, by determining the provisional date of Rosh HaShanah. In the present era the moladot of Tishrei and of Cheshvan have the latest average molad moments. Specifically adjusting the molad of Tishrei would have the probably undesirable side-effect of making it less likely that the new lunar crescent will be visible on the eve of Rosh HaShanah.
Adjustment of the molad moment should ideally be a nearly linear polynomial function to account for the progressive shortening of the Mean Synodic Month combined with a cyclic (periodic) function to account for the perihelion cycle, preferably adjusting for the date of the molad relative to the date of perihelion rather than by the Hebrew month, and the swing of the cyclic function should be controlled by an expression that is a function of the Earth orbital eccentricity. The period of the cyclic component itself undergoes long-period variation, however, so it can't be reduced to a single trigonometric term that will remain in sync for more than a few centuries.
Therefore the best long-term adjustment that is practical with simple arithmetic is an "average" molad adjustment, which ought to be acceptable or perhaps even preferable anyway: it would restore the molad vs actual lunar conjunction relationship to the state that it had back in the era of the Second Temple, reset the reference meridian to its original longitude at the midpoint between the Nile river and the end of the Euphrates river, and track the progressively shorter mean lunar conjunction interval, discussed next.
On my separate "Length of the Lunar Cycle" web page at <http://www.sym454.org/lunar/>, I show how to calculate the mean lunar conjunction moment for the Prime Meridian. It is easy to use that arithmetic to calculate the mean lunar conjunction moment for the meridian that is midway between the Nile and the Euphrates. The arithmetic there works in terms of the lunation number relative to January 2000 AD (J2KL), but for Hebrew calendar arithmetic purposes we need arithmetic that works in terms of Hebrew calendar elapsed months (HCEM), that is the lunation number starting from zero at the Hebrew calendar epoch. For interconversion of J2KL and HCEM we only need to know the HCEM at January 2000 (Rosh Chodesh Shevat, 5760):
HebrewLunationAtJ2000 = 71233
J2KL = HCEM – HebrewLunationAtJ2000
HCEM = J2KL + HebrewLunationAtJ2000
Note that Shevat 5760 was month number 71234 since the Hebrew calendar epoch, but it didn't start until the 8th of January 2000 AD, whereas the mean lunar conjunction functions need to have the count of months elapsed (completed).
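The interconversion above is a one-line shift in either direction; a minimal Python sketch (the constant is from the text, the function names are mine):

```python
HEBREW_LUNATION_AT_J2000 = 71233  # completed lunations at January 2000 (Shevat 5760)

def j2kl_from_hcem(hcem: int) -> int:
    """J2000-relative lunation number from Hebrew calendar elapsed months."""
    return hcem - HEBREW_LUNATION_AT_J2000

def hcem_from_j2kl(j2kl: int) -> int:
    """Hebrew calendar elapsed months from the J2000-relative lunation number."""
    return j2kl + HEBREW_LUNATION_AT_J2000
```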
So to calculate the desired moment of the mean lunar conjunction: use the ElapsedMonths function defined above (or better still, the Traditional or Rectified mode of the Rectified Hebrew calendar ElapsedMonths function, as desired) to calculate the HCEM, then subtract HebrewLunationAtJ2000 to obtain the J2KL to pass to the MeanNewMoonUT function defined on the "Length of the Lunar Cycle" web page. Then add 2 hours and 37 minutes = 157/1440 of a day to convert that moment to Local Mean Time at the Nile-Euphrates midpoint (in other words, Jerusalem Local Mean Time +16 minutes). That, however, is the moment referred to Civil Midnight, whereas the molad moment is traditionally quoted as the time since mean sunset, so we should add 6 hours = 1/4 of a day for the purpose of announcements or relating the moment to the Hebrew weekday. Finally, to use such a molad moment for the purpose of calculating the provisional date of Rosh HaShanah, add another 6 hours = 1/4 of a day to refer the moment to the prior Local Mean Noon and thereby include the Gauss shortcut, which bypasses the molad zakein postponement rule. Obviously these additions can be combined into one operation, if desired.
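The sequence of offsets can be sketched as follows. This is a hedged sketch: the `mean_new_moon_ut` callable is a stand-in for the MeanNewMoonUT function documented on the "Length of the Lunar Cycle" page, and the function names here are mine:

```python
from fractions import Fraction

HEBREW_LUNATION_AT_J2000 = 71233  # completed lunations at January 2000 (Shevat 5760)

def molad_moment_progressive(hcem, mean_new_moon_ut):
    """Progressive molad for the given Hebrew calendar elapsed months, referred
    to mean sunset at the Nile-Euphrates midpoint meridian.

    mean_new_moon_ut: caller-supplied function mapping a J2000-relative lunation
    number to the mean lunar conjunction moment in days, Universal Time (a
    stand-in for the MeanNewMoonUT function on the lunar cycle web page).
    """
    j2kl = hcem - HEBREW_LUNATION_AT_J2000
    moment = mean_new_moon_ut(j2kl)
    moment += Fraction(157, 1440)  # +2h37m: Nile-Euphrates midpoint LMT (Jerusalem LMT +16 min)
    moment += Fraction(1, 4)       # +6h: refer the moment to the prior mean sunset
    return moment

def provisional_molad_moment(hcem, mean_new_moon_ut):
    """Adds another 6 hours to refer the moment to the prior Local Mean Noon,
    building in the Gauss shortcut that bypasses the molad zakein rule."""
    return molad_moment_progressive(hcem, mean_new_moon_ut) + Fraction(1, 4)
```

Using exact `Fraction` arithmetic keeps the 157/1440 and 1/4 day offsets free of rounding error.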
The interval between such moladot will grow progressively shorter, presently at a rate of about –25 mean solar microseconds per month, which would be incompatible with the fixed cutoff limits traditionally applied to the 3rd and 4th Rosh HaShanah postponement rules. The solution is very simple and fully equivalent to the traditional way that those rules are applied, as explained under the heading "The First Day of Tishrei (Rosh HaShanah) — the HebrewNewYear Function" on the rectified Hebrew calendar web page at <http://www.sym454.org/hebrew/rect.htm>.
At the time of writing the rectified Hebrew calendar employs a progressive molad based on a quadratic molad adjustment that was derived using older astronomical algorithms and Delta T approximation, referred to the meridian of Jerusalem. The Nile-Euphrates mean lunar conjunction calculation documented above would be a better choice. My current development efforts, however, are directed at obtaining equivalent accuracy (same calendar dates) while working strictly with integers, through the use of yerm era arithmetic, discussed below. Stay tuned...
From the information presented here, you may have the impression, as many others have written, that the drift of the molad is minor and can be neglected. Not so.
For example the calculated moment of the molad of Tishrei 5768 is 10 hours and 26 minutes after mean sunset on Yom Rivii on the 29th of Elul 5767. Rosh HaShanah can't start on Yom Rivii so it is postponed to the next day, Yom Chamishi. The corresponding astronomical mean lunar conjunction at the meridian midway between the Nile River and the end of the Euphrates River is on the same day but 8 hours 41 minutes and 15 seconds after mean sunset. On that day the calculated molad moment is therefore 1 hour 44 minutes and 15 seconds too late = 1/24 + 44/1440 + 15/86400 = 139/1920 or about 0.0724 of a day. This fraction represents the probability, as of year 5768, that the molad of Tishrei will land one day later than the sages originally intended. Expressed as a percentage, it means that already in almost 7+1/4% of years the molad of Tishrei lands one day late.
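The lateness fraction can be checked with exact rational arithmetic (a quick sketch):

```python
from fractions import Fraction

# Molad of Tishrei 5768 minus the astronomical mean conjunction: 1h 44m 15s late.
lateness = Fraction(1, 24) + Fraction(44, 1440) + Fraction(15, 86400)
assert lateness == Fraction(139, 1920)
print(float(lateness))  # ≈ 0.0724 of a day, i.e. about 7+1/4% of years affected
```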
However, the resultant proportion of years in which Rosh HaShanah and the High Holy Days of Tishrei land on the "wrong" day is considerably greater, because of interactions with the Rosh HaShanah postponement rules, which in turn are subject to interactions between adjacent years.
As the drift of the molad accelerates, due to the growing discrepancy between the traditional molad interval and the astronomical mean synodic month, the percentage of years which will start on the "wrong" day will increase proportionately.
I placed "wrong" in quotation marks above because although the dating seems astronomically incorrect, according to Jewish law the dates of the calendar are not determined in Heaven, nor by astronomy, but rather the "correct" days for observing Rosh HaShanah and the other High Holy Days etc. are whichever days the Jewish people agree to observe them on. This principle is known in Hebrew as Torah lo ba'shomayim hi.
Even more importantly, one must understand that the molad interval sets the Hebrew calendar mean year length, which traditionally equals 235 lunar months times the duration of the molad interval divided by 19 years = 365 days 5 hours 55 minutes and (25+25/57 seconds or 7+12/19 chalakim) ≈ 365.2468222 days, which is presently 6 minutes and 25+25/57 seconds too long per year. Most of that excess length is due to the excess length of the traditional 19-year Hebrew calendar leap cycle, but a significant portion is due to the increasingly excessive length of the molad interval.
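The mean-year arithmetic above can be verified exactly (a sketch using Python's `Fraction`; the constant 13753/25920 is the fractional day of the molad interval, i.e. 12 hours 793 parts):

```python
from fractions import Fraction

MOLAD_INTERVAL = 29 + Fraction(13753, 25920)  # 29d 12h 44m 3+1/3s

trad_mean_year = MOLAD_INTERVAL * Fraction(235, 19)  # 235 months per 19 years

# Seconds past 365 days 5 hours 55 minutes:
excess_s = (trad_mean_year - 365 - Fraction(5, 24) - Fraction(55, 1440)) * 86400
assert excess_s == 25 + Fraction(25, 57)                   # 25+25/57 seconds
assert excess_s / Fraction(10, 3) == 7 + Fraction(12, 19)  # 7+12/19 chalakim (1 chelek = 10/3 s)
```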
I have developed the rectified Hebrew calendar, see <http://www.sym454.org/hebrew/rect.htm>, based on a leap cycle that has 130 leap years per 353-year cycle (353 years with 12 months = 4236 months plus 130 leap months = 4366 months per cycle). It has fractionally fewer leap months than the traditional leap cycle — although it will take 6707 years for the difference to amount to one full month, that is just the right amount to restore the calendar mean year to the length of the actual present era mean northward equinoctial year. Using the traditional molad, the rectified Hebrew calendar will maintain excellent equinox alignment until perihelion reaches the northward equinox in the 11th Hebrew calendar millennium (for an explanation please see my "Lengths of the Seasons" web page at <http://www.sym454.org/seasons/>). However, it so happens that applying the progressive molad instead of the traditional molad will allow the rectified Hebrew calendar to maintain its long-term alignment with the northward equinox for three extra millennia (until the 14th millennium), because the adjustment of the molad interval will progressively shorten the rectified Hebrew calendar mean year length at the rate of about 3/2 seconds per 353-year leap cycle or about 17/4 seconds per millennium, thus helping the rectified leap cycle to "hold on" to the equinox much longer (until the rate of change of the length of the northward equinoctial year accelerates beyond range, which will happen when perihelion advances beyond the equinox).
With use of the traditional molad interval the mean year of the rectified Hebrew calendar will equal the traditional molad interval × 4366 months per cycle / 353 years per cycle = 365 days 5 hours 49 minutes and 5+25/1059 seconds, which is much more accurate than the traditional 19-year leap cycle, being only about 5 seconds longer than the mean northward equinoctial year. If, however, the molad interval is adjusted to match the actual secular mean synodic month (the word secular, derived from the Latin saeculum, in this context refers to a variation that spans centuries, that is the progressively shorter mean synodic month), then in the present era the mean year of the rectified Hebrew calendar will be about 365 days 5 hours 48 minutes and 58 seconds, which is about 2 seconds shorter than the northward equinoctial year. By the 11th Hebrew millennium, the progressive molad will cause the rectified Hebrew calendar mean year to shorten to 365 days 5 hours 48 minutes and 40 seconds, but in that future era the advance of perihelion will have passed the northward equinox and the mean northward equinoctial year will be even shorter, about 365 days 5 hours 48 minutes and 32 seconds.
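The rectified-cycle mean year with the traditional molad can be verified the same way (constants from the text: 4366 months per 353-year cycle):

```python
from fractions import Fraction

MOLAD_INTERVAL = 29 + Fraction(13753, 25920)  # 29d 12h 44m 3+1/3s

rect_mean_year = MOLAD_INTERVAL * Fraction(4366, 353)  # 4366 months per 353 years

# Seconds past 365 days 5 hours 49 minutes:
excess_s = (rect_mean_year - 365 - Fraction(5, 24) - Fraction(49, 1440)) * 86400
assert excess_s == 5 + Fraction(25, 1059)  # 365d 5h 49m 5+25/1059s
```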
This chart shows the mean year of the rectified Hebrew calendar 130/353 leap cycle using the traditional and adjusted molad, in comparison with the actual mean northward equinoctial year 28KB. With regard to the periodic variability of the northward equinoctial year, from year-to-year the dominant cause of variability is Moon, but I reduced the variations in this chart by plotting the average for every one hundred years and smoothing the lines connecting the points.
Note that all of the mean years were equal in the era of Hillel ben Yehudah, Hebrew year 4119. Over the next few millennia the Rectified mean year with progressive molad will get steadily shorter while the mean equinoctial year gets slightly longer. During that time the progressive molad-based calendar will on average pull slightly ahead (by a small fraction of a day). Around the Hebrew year 9000 the progressive molad-based calendar mean year will nicely match the mean equinoctial year length. During the subsequent millennia the mean equinoctial year length will shorten faster. Beyond the 10th millennium the rectified Hebrew calendar with traditional molad will fall progressively behind the mean equinoctial year at an accelerating rate, but the progressive molad-based calendar will be able to "hold on" to the equinox until near the end of the 13th millennium, because of its having pulled slightly ahead during the 8th to 10th millennia.
I have also carried out analyses of the relationship between the first day of Hebrew months, the actual lunar conjunction moments, and the Lunar Phase. Some examples are shown below for the Hebrew months of Nisan and Tishrei.
|Chart|Nisan|Tishrei|
|First Day of Month vs. Actual New Moon|116 KB|114 KB|
|First Day of Month vs. Lunar Phase|116 KB|115 KB|
The charts of first day against actual New Moon and against Lunar Phase have a curved band pattern that relates to the curved bands seen above in the Molad minus Actual New Moon charts.
Rebbe Eliezer ben Hurcanus, traditional author of Pirkei D'Rebbi Eliezer, described the molad cycle in terms of a "short cycle" of 3 lunar years, after which the molad day would be one weekday earlier than where it started, and a "long cycle" of 21 lunar years, after which the molad day would be on the same weekday that it started (having stepped backwards by one day for each of 7 elapsed short cycles). These cycles are approximations; for example, after two long cycles the molad day will fall one weekday later.
The traditional molad can, nevertheless, be expressed to absolute accuracy in terms of cycles, but to perceive its inherent repetition pattern one must let go of the concept of a 12-month "lunar year" and any relationship to the solar year, as well as the idea that the cycle began at the traditional year of Creation.
Traditionally the molad day is the day upon which the molad moment lands, but if the calculated moment is after noon then the molad day is considered to have advanced to the following day (based on the molad zakein postponement rule). Referring to the calculation expressions defined above, the molad day can be calculated as follows:
MoladDay( Lunation ) = floor( MoladMoment( Lunation ) + 1/4 )
where the 1/4 day offset takes care of the molad zakein postponement rule,
and Lunation is the number of months elapsed from the epoch of the traditional Hebrew calendar.
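The molad day calculation can be sketched as follows. The MoladMoment function itself is defined earlier in the document; the sketch below reconstructs it from the classic epoch molad ("BaHaRaD": weekday 2, 5 hours 204 parts after mean sunset), and the absolute day-numbering convention used here is my own assumption for illustration:

```python
import math
from fractions import Fraction

MOLAD_INTERVAL = 29 + Fraction(13753, 25920)  # 29d 12h 793 parts
# Epoch molad "BaHaRaD": day 2, 5 hours 204 parts; the day-numbering origin
# is an assumption made for this sketch.
MOLAD_EPOCH = 1 + Fraction(5 * 1080 + 204, 25920)

def molad_moment(lunation: int) -> Fraction:
    """Moment of the traditional molad, in days from the epoch reference."""
    return MOLAD_EPOCH + lunation * MOLAD_INTERVAL

def molad_day(lunation: int) -> int:
    """Molad day; the 1/4-day offset advances a post-noon molad to the
    following day, implementing the molad zakein postponement rule."""
    return math.floor(molad_moment(lunation) + Fraction(1, 4))
```

Successive molad days computed this way differ by only 29 or 30 days, which is what generates the deficient/full month sequence.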
The number of days from one molad day to the next can only be either 29 or 30 days, corresponding to a "deficient" (D) or a "full" (F) month, respectively. The following are the molad month lengths near the present era, from the molad of Nisan 5726 until the molad of Elul 5818, inclusive:
FDFDFDFDFDFDFDFDFFDFDFDFDFDFDFDFDFFDFDFDFDFDFDFDFFDFDFDFDFDFDFDFDFFDFDFDFDFDFDFD FDFFDFDFDFDFDFDFDFFDFDFDFDFDFDFDFDFFDFDFDFDFDFDFDFDFFDFDFDFDFDFDFDFFDFDFDFDFDFDF DFDFFDFDFDFDFDFDFDFDFFDFDFDFDFDFDFDFFDFDFDFDFDFDFDFDFFDFDFDFDFDFDFDFDFFDFDFDFDFD FDFDFFDFDFDFDFDFDFDFDFFDFDFDFDFDFDFDFDFFDFDFDFDFDFDFDFFDFDFDFDFDFDFDFDFFDFDFDFDF DFDFDFDFFDFDFDFDFDFDFDFFDFDFDFDFDFDFDFDFFDFDFDFDFDFDFDFDFFDFDFDFDFDFDFDFFDFDFDFD FDFDFDFDFFDFDFDFDFDFDFDFDFFDFDFDFDFDFDFDFFDFDFDFDFDFDFDFDFFDFDFDFDFDFDFDFDFFDFDF DFDFDFDFDFFDFDFDFDFDFDFDFDFFDFDFDFDFDFDFDFDFFDFDFDFDFDFDFDFFDFDFDFDFDFDFDFDFFDFD FDFDFDFDFDFDFFDFDFDFDFDFDFDFFDFDFDFDFDFDFDFDFFDFDFDFDFDFDFDFDFFDFDFDFDFDFDFDFFDF DFDFDFDFDFDFDFFDFDFDFDFDFDFDFDFFDFDFDFDFDFDFDFFDFDFDFDFDFDFDFDFFDFDFDFDFDFDFDFDF FDFDFDFDFDFDFDFFDFDFDFDFDFDFDFDFFDFDFDFDFDFDFDFDFFDFDFDFDFDFDFDFFDFDFDFDFDFDFDFD FFDFDFDFDFDFDFDFDFFDFDFDFDFDFDFDFFDFDFDFDFDFDFDFDFFDFDFDFDFDFDFDFDFFDFDFDFDFDFDF DFFDFDFDFDFDFDFDFDFFDFDFDFDFDFDFDFDFFDFDFDFDFDFDFDFFDFDFDFDFDFDFDFDFFDFDFDFDFDFD FDFDFFDFDFDFDFDFDFDFFDFDFDFDFDFDFDFDFFDFDFDFDFDFDFDFDFFDFDFDFDFDFDFDFFDFDFDFDFDF DFDFDFFDFDFDFDFDFDFDFDFFDFDFDFDFDFDFDFFDFDFDFDFDFDFDFDFFDFDFDFDFDFDFDFDFFDFDFDFD FDFDFDFFDFDFDFDFDFDFDFDF
It is hard for the eye to discern any particular pattern in the above sequence, which contains mostly alternating full and deficient months, but it actually does contain a very simple repetition. Focus on the places where consecutive full months occur. A simple repeating pattern becomes clear if one inserts a space between consecutive full months and a new line after every third group:
FDFDFDFDFDFDFDFDF FDFDFDFDFDFDFDFDF FDFDFDFDFDFDFDF FDFDFDFDFDFDFDFDF FDFDFDFDFDFDFDFDF FDFDFDFDFDFDFDF FDFDFDFDFDFDFDFDF FDFDFDFDFDFDFDFDF FDFDFDFDFDFDFDF FDFDFDFDFDFDFDFDF FDFDFDFDFDFDFDFDF FDFDFDFDFDFDFDF FDFDFDFDFDFDFDFDF FDFDFDFDFDFDFDFDF FDFDFDFDFDFDFDF FDFDFDFDFDFDFDFDF FDFDFDFDFDFDFDFDF FDFDFDFDFDFDFDF FDFDFDFDFDFDFDFDF FDFDFDFDFDFDFDFDF FDFDFDFDFDFDFDF FDFDFDFDFDFDFDFDF FDFDFDFDFDFDFDFDF FDFDFDFDFDFDFDF FDFDFDFDFDFDFDFDF FDFDFDFDFDFDFDFDF FDFDFDFDFDFDFDF FDFDFDFDFDFDFDFDF FDFDFDFDFDFDFDFDF FDFDFDFDFDFDFDF FDFDFDFDFDFDFDFDF FDFDFDFDFDFDFDFDF FDFDFDFDFDFDFDF FDFDFDFDFDFDFDFDF FDFDFDFDFDFDFDFDF FDFDFDFDFDFDFDF FDFDFDFDFDFDFDFDF FDFDFDFDFDFDFDFDF FDFDFDFDFDFDFDF FDFDFDFDFDFDFDFDF FDFDFDFDFDFDFDFDF FDFDFDFDFDFDFDF FDFDFDFDFDFDFDFDF FDFDFDFDFDFDFDFDF FDFDFDFDFDFDFDF FDFDFDFDFDFDFDFDF FDFDFDFDFDFDFDFDF FDFDFDFDFDFDFDF FDFDFDFDFDFDFDFDF FDFDFDFDFDFDFDFDF FDFDFDFDFDFDFDF FDFDFDFDFDFDFDFDF FDFDFDFDFDFDFDFDF FDFDFDFDFDFDFDF FDFDFDFDFDFDFDFDF FDFDFDFDFDFDFDFDF FDFDFDFDFDFDFDF FDFDFDFDFDFDFDFDF FDFDFDFDFDFDFDFDF FDFDFDFDFDFDFDF FDFDFDFDFDFDFDFDF FDFDFDFDFDFDFDFDF FDFDFDFDFDFDFDF FDFDFDFDFDFDFDFDF FDFDFDFDFDFDFDFDF FDFDFDFDFDFDFDF FDFDFDFDFDFDFDFDF FDFDFDFDFDFDFDFDF FDFDFDFDFDFDFDF FDFDFDFDFDFDFDFDF
In summary, the above sequence of months, which is only one "era" of the full molad sequence, contains 23 repeats of a 49-month sequence, each having two 17-month groups and one 15-month group, completed by a final stand-alone 17-month group, for a total of 1144 months in the era.
Each 17- or 15-month group starts with a full month, then has deficient alternating with full months until the end of the group. K.E.V. (Karl) Palmen of the Rutherford Appleton Laboratory in the United Kingdom discovered these groups and patterns in mean lunar month sequences, invented the word "yerm" for them, and used them as the basis for his yerm lunar calendar, see <http://www.hermetic.ch/cal_stud/palmen/yerm1.htm>. Each yerm has exactly one more full month than its count of deficient months, so the basic yerms are:
17-month yerm = FDFDFDFDFDFDFDFDF
15-month yerm = FDFDFDFDFDFDFDF
The average or mean month of any sequence of alternating full / deficient months, where the number of full months equals the number of deficient months, is exactly 29+1/2 days. If the sequence contains any consecutive full month pairs, however, then its mean month increases accordingly. In any sequence of full and deficient months, if NFM is the number of full months and NDM is the number of deficient months, then the mean month length equals the number of days in the cycle divided by the number of months in the cycle = (NFM × 30 + NDM × 29) / (NFM + NDM) days. This expression can be simplified because the mean month is always 29 days plus a fraction equal to the number of full months divided by the total number of months in the cycle = 29 + NFM / (NFM + NDM) days.
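The mean-month formula and the deviations quoted below can be checked exactly (a quick Python sketch):

```python
from fractions import Fraction

MOLAD_INTERVAL = 29 + Fraction(13753, 25920)  # 29d 12h 793 parts

def mean_month(nfm: int, ndm: int) -> Fraction:
    """Mean month of a full/deficient sequence: 29 + NFM/(NFM + NDM) days."""
    return 29 + Fraction(nfm, nfm + ndm)

# Equivalent to (NFM*30 + NDM*29) / (NFM + NDM):
assert mean_month(9, 8) == Fraction(9 * 30 + 8 * 29, 17)   # 17-month yerm
assert mean_month(8, 7) == Fraction(8 * 30 + 7 * 29, 15)   # 15-month yerm

# Deviations from the molad interval, in parts (1 day = 25920 parts):
assert (MOLAD_INTERVAL - mean_month(9, 8)) * 25920 == 30 + Fraction(11, 17)
assert (mean_month(8, 7) - MOLAD_INTERVAL) * 25920 == 71
```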
The mean month of the more frequent 17-month yerm is (9 × 30 + 8 × 29) / (9 + 8) = 29+9/17 days = 29 days 12 hours 42 minutes and 21+3/17 seconds, which is exactly 30+11/17 parts shorter than the molad interval. Any longer yerm has a shorter mean month.
The mean month of the less frequent 15-month yerm is (8 × 30 + 7 × 29) / (8 + 7) = 29+8/15 days = exactly 29 days 12 hours and 48 minutes, which is exactly 71 parts longer than the molad interval. Any shorter yerm has a longer mean month.
In the overall sequence 17-month yerms occur about twice as frequently as 15-month yerms because the deviation of the 17-month yerm mean month from the traditional molad interval is about half that of the 15-month yerm.
The sequence inherently contains larger groups having subcycles of 17+17+15 = 49 months, occasionally followed by an extra 17-month yerm. Each 49-month subcycle contains 3 yerms, having 3 more full months (2×9+8=26) than deficient months (2×8+7=23), and has a mean month of (26 × 30 + 23 × 29) / 49 = 29+26/49 days = 29 days 12 hours 44 minutes and 4+44/49 seconds, which is only 23/49 parts longer than the molad interval. The extra 17-month yerm that is occasionally inserted compensates for the slight mean month excess of each series of 49-month subcycles.
[According to unpublished manuscripts composed around 1700 AD, Sir Isaac Newton recognized the 17- and 15-month subcycles, and called the 49-month subcycle a "Great Lunar Cycle", although he apparently had difficulty deciding where to begin the latter. Inexplicably, he thought that the mean month of the 49-month group was too short, so he proposed adding one extra day, even though he otherwise relied heavily on the lunar algorithms of Jeremiah Horrocks, which had an impressively accurate mean month of 29 days 12 hours 44 minutes 3.16 seconds, almost 1+3/4 seconds shorter than that of the 49-month group. See: Notes & Records of The Royal Society 2005; 59: 223-254, especially the discussion under the heading "Lunar Part", on page 230.]
The present traditional molad era has 70 yerms, but the next molad era from the molad of Tishrei 5819 until molad of Adar Sheini 5907, inclusive, is an example of the alternative type of era that will have one fewer 49-month subcycle = 67 yerms, so it will have only 22 repeats of the 49 month subcycle plus the final 17-month yerm, for an era total of 1095 months. Each 1095-month era contains 3 yerms for each of the 22 subcycles that have 49 months plus 1 yerm for the extra 17 months = 22×3+1 = 67 yerms, having 67 more full months than deficient months, so it has (1095-67) / 2 = 514 deficient months and 514+67 = 581 full months. The mean month of this 1095-month type of era is (581 × 30 + 514 × 29) / (581 + 514) = 29+581/1095 days, which is only 1/73 part shorter than the traditional molad interval.
Each 1144-month era, like the present era, contains 3 yerms for each of the 23 subcycles that have 49 months plus 1 yerm for the extra 17 months = 23×3+1 = 70 yerms, having 70 more full months than deficient months, so it has (1144-70) / 2 = 537 deficient months and 537+70 = 607 full months. The mean month of such an 1144-month era is (607 × 30 + 537 × 29) / (607 + 537) = 29+607/1144 days, which is only 1/143 part longer than the molad interval. The 1144-month era type occurs almost twice as frequently in the complete molad cycle because its mean month distance from the target molad interval length is almost half that of the 1095-month era type.
A complete cycle with mean month exactly equal to the traditional molad interval has 25920 months with 13753 full months and 12167 deficient months, comprising exactly 2160 twelve-month lunar "years" or about 2095+2/3 solar years. It has 15 eras of 70 yerms interleaved with 8 eras of 67 yerms. Thus the full cycle has 15 × (23 × 2 + 1) + 8 × (22 × 2 + 1) = 1065 of the 17-month yerms plus 15 × 23 + 8 × 22 = 521 of the 15-month yerms, for a total of 1586 yerms. The complete interleaved traditional molad yerm era sequence, from the molad of Elul 4724, which was the most recent cycle starting point prior to the present era, is:
67, 70, 70 yerms
67, 70, 70
67, 70, 70
67, 70, 70
67, 70, 70
67, 70, 70
67, 70, 70
67, 70
In summary, the complete traditional molad cycle has 7 repeats of (67, 70, 70) followed by final 67- and 70-yerm eras. That complete pattern repeats forever in the fixed arithmetic cycle of the traditional molad. The simple arithmetic reason for regarding the final 67- and 70-yerm eras as the "end" of the complete cycle is so that the first era of each triplet as well as the final partial triplet has 67 yerms, and all other eras have 70 yerms.
Confirming the previous calculation, the total number of yerms in the complete cycle = 8×67 + 15×70 = 1586 yerms, which is the required excess of full months over deficient months in the complete sequence of 25920 months. Thus the full cycle has (25920-1586) / 2 = 12167 deficient months and 12167+1586 = 13753 full months. The mean month of the full cycle is (13753 × 30 + 12167 × 29) / (13753 + 12167) = 29+13753/25920 days, which is exactly equal to the traditional molad interval.
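These counts are easy to verify with exact arithmetic (a quick sketch, using the yerm totals from the text):

```python
from fractions import Fraction

# Yerm counts for the complete 25920-month traditional molad cycle:
n17, n15 = 1065, 521               # 17-month and 15-month yerms
months = n17 * 17 + n15 * 15       # total months in the cycle
full = n17 * 9 + n15 * 8           # full (30-day) months
deficient = n17 * 8 + n15 * 7      # deficient (29-day) months
assert (months, full, deficient) == (25920, 13753, 12167)

cycle_mean_month = Fraction(full * 30 + deficient * 29, months)
assert cycle_mean_month == 29 + Fraction(13753, 25920)  # exactly the molad interval
```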
Click here to view a complete report of the traditional molad yerm era analysis covering the first 10000 years of the Hebrew calendar (includes 5 complete molad yerm era cycles, in which era zero is defined as the era that started prior to and ended after the Hebrew epoch). The following formula, due to Karl Palmen, calculates the molad day using only the era and lunation numbers:
Determine the lunation number by using the ElapsedMonths function defined above.
Determine the era number by comparing the given lunation number with the "HCEM" column on the traditional molad yerm era list.
molad day = EraZeroStartDay + 30 + floor( [ (Lunation – SeriesStartLunation – 1) × 1447 – EraNumber ] / 49)
where EraZeroStartDay is 5286 days before the Hebrew epoch, 30 is the number of days in a full lunar month, SeriesStartLunation is -179 (as shown in the HCEM column of the era zero row), 1447 is the number of days in a 49-month subcycle, EraNumber is determined by comparison of Lunation with the yerm era list, and 49 is the number of months in a 49-month subcycle.
By inserting constants from the yerm era list and rearranging, the above expression simplifies to:
molad day = HebrewEPOCH – 5256 + floor( [ 1447 × Lunation + 257566 – EraNumber ] / 49)
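The simplified formula translates directly into code. In this sketch the HebrewEPOCH day-number convention is left as a caller-supplied parameter, and the era number must be looked up externally from the yerm era list, as the text describes:

```python
def molad_day_palmen(lunation: int, era_number: int, hebrew_epoch_day: int = 0) -> int:
    """Karl Palmen's simplified closed form for the molad day (from the text).
    era_number comes from the yerm era list; hebrew_epoch_day is whatever
    day number the Hebrew epoch carries in the caller's convention."""
    return hebrew_epoch_day - 5256 + (1447 * lunation + 257566 - era_number) // 49
```

Within an era, consecutive molad days differ by 29 or 30 days, and any 49 consecutive months span exactly 1447 days, as expected for the 49-month subcycle.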
I have programmatically verified that the above expression yields the correct molad day for every molad of the full 689472-year repeat cycle of the traditional Hebrew calendar.
The actual astronomical mean lunar cycle contains similar patterns of yerms and 49-month subcycles, although fewer yerms are required per era, because the appropriate number of yerms per fixed arithmetic lunar month cycle is inversely proportional to the mean month and the present era mean lunar cycle is about 2/3 second shorter than the traditional molad interval. As the mean lunar month becomes slightly shorter, the necessary number of yerms per era rapidly decreases. This is evident in this progressive yerm eras analysis of mean lunar conjunctions, in which era zero is defined as the era in which the astronomical mean synodic month decreased to less than 29+26/49 days, and which shows that although 70- and 67-yerm eras were appropriate around the time of the Second Temple, since the era of Rambam 55 or 52 yerms would have sufficed. As shown by the yellow highlighted row in the report, the present era is the last 52-yerm era and the next era will be the first ever 49-yerm era in history!
Although these simple yerm patterns exist in lunar month sequences based on the mean lunation interval, sequences of actual astronomical lunar month lengths are not so simple, because typically several longer lunations occur consecutively near Earth's orbital perihelion (not always long enough to be considered a 30-day month), and several shorter lunations occur consecutively near aphelion (not always short enough to be considered a 29-day month), as discussed and charted above.
Amos Shapir, of Israel, was the first to develop a progressive yerm era series, but he generated a long string of F and D codes based on calculated actual lunar conjunction moments relative to midnight at the meridian of Jerusalem. Then he "had to clean up the sequence, occasionally replacing FD by DF in order to avoid DD sequences." He then used a text editor to group runs of 17 and 15 months into yerms, and replaced them with "7" or "5" accordingly. Finally, he used the text editor to search and replace strings of "775" and "7775" to generate the yerm eras.
My freeware calendrical calculator, Kalendis [version 9.558(1278) or later], has a built-in yerm eras report generator, developed in consultation with Karl Palmen and Amos Shapir, that automates this entire process for more than a quarter million mean lunations. It computes the mean lunar conjunction (or quadrature or opposition) moments for any selected locale, optionally including an offset value, and automatically finds the yerm era starts and the number of consecutive 49-month subcycles in a matter of a few seconds of computer time. After the report is generated, Kalendis launches the report file so that the user can view it, and also uses that progressive yerm era series as the basis for its yerm lunar calendar dates, as displayed in its "Lunar" window.
Karl Palmen determined that the beginning of each yerm era having a mean month that is shorter than 29+26/49 days is uniquely marked by the sequence "FDFDFDFDFDFDFDFDFFDFDFDFDFDFDFDFDFFDFDFDFDFDFDFDFD", which includes the last 17-month yerm of the prior era, so the era itself starts at the 18th character position, as highlighted in boldface red text.
For the very remote past, well before the traditional epoch of the Hebrew calendar, when yerm eras had a mean month that was longer than 29+26/49 days, such eras started with a 15-month yerm that Karl Palmen calls a "yerm zero". Karl determined that the sequence "FFDFDFDFDFDFDFDFFDFDFDFDFDFDFDFDFFDFDFDFDFDFDFDFFD" is a unique marker for the start of that era type, where the era itself started at the 34th character position, at which the 15-month yerm zero is highlighted in boldface red text. The era in which the first walled settlement in Israel was founded at Jericho, which Encyclopaedia Judaica gives as 6850 BCE ±210 years, corresponding to 6850 – 3760 ≈ 3090 years before the epoch of the traditional Hebrew calendar, was the last yerm era that started with a yerm zero.
The simplest way to generate a 25920-month sequence having a mean month exactly equal to the traditional molad interval is by executing a loop 25920 times, where the full or deficient status of any month in the sequence is given by:
IF ( MonthNumber × 13753 ) MOD 25920 < 13753 THEN Full ELSE Deficient
where MonthNumber starts at zero.
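That rule drops straight into code. The sketch below (Python) runs the loop over one complete cycle and tallies the result. Because gcd(13753, 25920) = 1, the products MonthNumber × 13753 mod 25920 visit every residue exactly once per cycle, so exactly 13753 months test as full and the mean month is exactly 29 + 13753/25920 days, the traditional molad interval of 29 days 12 hours 793 parts.

```python
PARTS_PER_DAY = 25920   # halakhic parts per day; also months per complete cycle
FULL_MONTHS = 13753     # parts by which the molad interval exceeds 29 days;
                        # also the number of full (30-day) months per cycle

def month_is_full(month_number):
    """Zero-based rule: True if the month is full (30 days)."""
    return (month_number * FULL_MONTHS) % PARTS_PER_DAY < FULL_MONTHS

# Tally one complete 25920-month cycle.
full_count = sum(month_is_full(m) for m in range(PARTS_PER_DAY))
# Each month contributes at least 29 days; each full month adds one more.
total_days = 29 * PARTS_PER_DAY + full_count
```

Dividing total_days by 25920 reproduces the mean month exactly; no rounding drift can accumulate over the cycle.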
Although the generated sequence has the correct mean month, its sequence of month lengths is not the same as the traditional molad because the traditional molad epoch didn't start at moment zero on day zero of month zero. If we define the molad day as the day upon which the traditional molad moment lands after including the +1/4 day offset due to Gauss (thereby including the molad zakein rule), and if we take the length of the month as the difference from one molad day to the next, then the month length status of any month in the molad sequence is given by:
IF [ ( ElapsedMonths + 21589 ) × 13753 ] MOD 25920 < 13753 THEN Full ELSE Deficient
where ElapsedMonths starts at zero and is the number of Hebrew calendar months elapsed since the epoch of the calendar, as per the arithmetic given above. This means that the molad day sequence starts 4331 months before or 21589 months after the start of the zero-based sequence. Rearranging the expression to make its coefficients less obscure, we obtain:
IF ( MonthNumber × 13753 + 12084 ) MOD 25920 < 13753 THEN Full ELSE Deficient
where MonthNumber is the Hebrew calendar month number since the epoch, starting at month one = elapsed months + 1. The coefficient 13753 is the number of parts by which the traditional molad interval exceeds 29 days, and it is also the number of full months in the complete cycle. The coefficient 12084 is the sum of two terms: 5 hours 204 parts = 5604 parts, the traditional moment of the first molad after the first sunset, plus the +1/4 day offset due to Gauss that bypasses the molad zakein postponement rule, 25920/4 = 6480 parts; thus 5604 + 6480 = 12084 parts. The coefficient 25920 is the number of parts per day and is also the total number of months in the complete cycle.
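The month-number form is easy to cross-check against the elapsed-months form given just above it. This Python sketch confirms that the two expressions agree month by month, and that a complete cycle of 25920 months again contains exactly 13753 full months (the added offset shifts which months are full but not how many).

```python
def molad_month_is_full(month_number):
    """Month-number form; month_number = elapsed months + 1, starting at 1."""
    return (month_number * 13753 + 12084) % 25920 < 13753

def molad_month_is_full_elapsed(elapsed_months):
    """Elapsed-months form; elapsed_months starts at 0."""
    return ((elapsed_months + 21589) * 13753) % 25920 < 13753

# The two forms agree for every month of the complete cycle ...
forms_agree = all(molad_month_is_full(e + 1) == molad_month_is_full_elapsed(e)
                  for e in range(25920))
# ... and the offset does not change the count of full months per cycle.
full_count = sum(molad_month_is_full(n) for n in range(1, 25921))
```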
An interesting sequence is generated by:
IF [ ( ElapsedMonths + 2256 ) × 13753 ] MOD 25920 < 13753 THEN Full ELSE Deficient
This sequence starts 2257 months after the zero-based sequence or 6588 months after the molad day sequence, and its complete molad cycle as given above started from the first molad at Creation; however, after Creation it generates molad days that are always one day later than the traditional molad day.
I would like to thank K.E.V. (Karl) Palmen, developer of the yerm lunar calendar, for his comprehensive assistance with the above yerm and subcycle arithmetic.
Updated 25 Av 5770 (Traditional) = 25 Av 5770 (Rectified) = Aug 4, 2010 (Symmetry454) = Aug 2, 2010 (Symmetry010) = Aug 5, 2010 (Gregorian) | <urn:uuid:6ecd7b33-f6f3-4184-98e7-85f11baa2497> | CC-MAIN-2016-26 | http://individual.utoronto.ca/kalendis/hebrew/molad.htm | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391766.5/warc/CC-MAIN-20160624154951-00071-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.900491 | 24,418 | 3.03125 | 3 |
University of Alabama professor John Clark honored for his work in Ecuador; tree is named after him
Published: Friday, July 4, 2014 at 11:00 p.m.
Last Modified: Friday, July 4, 2014 at 11:46 p.m.
Colleagues of University of Alabama botanist John Clark have honored him a few times by naming new plant species after him, but the latest carries additional significance.
Prominent botanist Scott Mori named the tree found in western Ecuador the Gustavia johnclarkii after Clark, an assistant biology professor at UA and a curator and director at the University of Alabama Herbarium. Clark collected plant samples during a trip to the rain forest in the mid-1990s and has made frequent trips to the region.
“In the past, my role has been to help document biodiversity. I dispersed the information to lots of institutions,” Clark said.
Mori, in the fall 2013 article describing the tree, said the honor was for Clark’s contributions and noted Clark was one of the first to collect specimens in the area. The honor is special because of the recognition by Mori and the tree’s ties to Clark’s past.
“That was particularly an honor,” Clark said, comparing Mori to famed entomologist Edward O. Wilson.
“Scott Mori is that person for botany. He is a living legend,” Clark said.
Clark began studying the trees as a Peace Corps volunteer after college.
“It is a species that I climbed a lot,” he said.
Clark studied forestry and natural resources in college. It was a skill set the Peace Corps was looking for in recruits. He was originally scheduled to travel to Nepal, but plans were changed to Ecuador because he had lived in the southern region of the country as an exchange student and could speak Spanish. The Peace Corps trip nearly 20 years ago took him to West Ecuador, where he eventually took over the task of outreach and education and promoting biodiversity in the local communities. He also worked with the national herbarium of Ecuador.
“The area where it came from inspired me to study tropical plants and biodiversity,” Clark said.
On a recent trip with UA students, Clark estimated the group found as many as five new species, possibly 10. Clark takes a group of UA students to Ecuador every other summer for a field biology course.
“That’s what’s so exciting working in this area,” he said.
But Clark said the area is also critically endangered.
“It is through the process of documenting biodiversity that we understand the significance,” he said.
Clark’s passion for field work had its roots in a childhood in the outdoors of Maine.
That experience of exploring and foraging developed into an interest in identifying local plants, which in turn grew into an interest in the evolution of plants.
“I like being outside. What really drew me to nature was recognizing the diversity around me,” he said.
Reach Ed Enoch at firstname.lastname@example.org or 205-722-0209. | <urn:uuid:ad5337a7-d92a-49c7-861d-5fe595b83d30> | CC-MAIN-2016-26 | http://www.tuscaloosanews.com/article/20140704/NEWS/140709876/1291/MULTIMEDIA | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395620.9/warc/CC-MAIN-20160624154955-00135-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.973004 | 656 | 2.609375 | 3 |
Inclusionary zoning–a local government mandate that some new developments set aside some housing units at an “affordable” price point–is being discussed in Minnesota. Several Minneapolis council members have been supportive of an inclusionary zoning ordinance, and the council gave notice at the end of last year that they’re studying inclusionary zoning with an eye towards adopting a policy.
Although inclusionary zoning is often discussed as a question of zoning and housing, it’s really one of social policy or welfare. The problem we’re trying to solve is that some people’s incomes are insufficient to pay for a decent minimum standard of life’s basics of shelter, food, and clothing. There are several ways to address that problem of income falling short of expenses on the basics:
- Give people more money (or “near money”, such as food stamps)
- Give people the things they need (food banks, school lunches, government owned housing)
- Regulate the prices people pay for the necessities of life.
In meeting people’s needs for food and clothing our social policies rely on a combination of the first two strategies—we give people things and we give people money. Giving people money and giving people access to things (houses) is also an important part of meeting the need for housing at affordable prices. In housing the money is often given in the form of vouchers that can only be used for housing, or the payment is made in some way directly to a person who owns the property being leased or rented to a person with a lower income. American programs to meet housing needs are financed by a range of federal and state taxes, and then complicated further by county or municipal administration. The continued need for more affordable housing might suggest we need to look beyond existing programs and add new policy tools such as inclusionary zoning. A big problem like housing affordability surely needs many policies directed at it, one might argue. Yet a more effective principle in public policy is to find the programs that work, and expand those.
The existing programs, despite their flaws, complications, and insufficient scale to meet everyone’s needs, have some virtues. Financed through federal and state taxation, they adhere to some important principles of how to pay for and organize social support.
- Government-provided social support should be financed by a broad cross-section of the community.
- The financing and provision of social support should be transparent. It shouldn’t be too hard to tell who is paying and who is benefiting.
- Government’s capacity to act should be independent of the timing and fluctuations of private decisions: if as a society we want to provide more help to the less fortunate we should vote to do so, tax ourselves, and distribute the money.
- We should set taxes to pay for social support in a way that doesn’t discourage the taxed population from doing things that are otherwise good for society.
Inclusionary zoning fails many of these principles. It has a laudable goal: to help the less fortunate in society have decent housing. But if the goal is laudable, we should campaign for the most effective policies to meet it.
First, inclusionary zoning is often not financed by a broad cross section of the community. Indeed, it’s hard to think of a way to fund social welfare that is more narrowly designed. If we randomly selected a few hundred unlucky people in our cities to pay extra taxes to support public housing, that would be more unfair, but only slightly more. Inclusionary zoning policies vary considerably, but the basic design is that a percentage of units in multi-family developments must be provided at affordable rents.
In the worst form of inclusionary zoning policy the developer and the owners or renters of the other units have to make up the difference between the break-even on the inclusionary unit(s) and the affordable rent. In so doing, we are asking a small number of people who happen to be building or buying now or renting in new construction to fund our housing policy. Existing home-owners are entirely exempt from helping to pay.
The politics are understandable: looking forward the cost of the policy will be borne by some combination of developers and people living in hypothetical unbuilt multi-unit buildings. This has an obvious appeal to existing homeowners or people happy with their current situation who don’t think they’re going to move, and thus don’t think they will have to pay. The costs of it are not even clearly perceived by current renters looking to buy or move sometime soon, because right now you don’t know if in the future you’ll move into a building subject to the policy.
In some versions of the policy, cities make up the difference between the developer or owners’ break-even and the affordable rent. This is much more transparent and can be seen in city budgets. But if we think about this in the context of a metropolitan area with many different municipal governments the problems abound. Why should renters or owners in one city be mandated to support the costs of housing provision when their neighbors across the border pay less? If we are going to support people meeting their housing costs we should do so at a state or federal level.
Secondly, the financing of inclusionary zoning is often opaque. An inclusionary zoning policy that is financed out of city taxes still fails to spread the costs of social support across a broader community, but is at least transparent in a public budget. Inclusionary zoning that mandates a proportion of units be affordable without compensation from the city pushes the costs of housing support onto developers and residents of apartment buildings.
The politics here are understandable if undesirable. Developers of new multi-unit buildings are seen by many homeowners as making large profits. The residents of new apartments and condominiums are similarly seen as well-off given listed prices for new construction. Why not, from this perspective, have the city capture some of the developers’ profit and some of the well-off residents’ income to finance a few units for people on lower incomes? The precise division of the burden between developers and market-rate residents will depend on whose choices are more or less responsive to prices. If residents really want to live in a particular new building without much concern for the price, they will take more of the burden of paying for the mandate. On the contrary, if residents are flexible but developers really want to build in a particular city despite the inclusionary zoning ordinance, then developers will bear more of the cost. But none of this is clear in the way that it would be if the city announced it was raising property taxes x% to devote y million dollars to supporting housing affordability.
Third, inclusionary zoning is a passive response to the problem of housing affordability. In most forms of inclusionary zoning policy, cities require that proposed developments include a certain proportion of affordable units. The policies don’t in themselves call any new housing into being, any units that are created are just a fraction of what private developers have decided to build. When you think about it, this is a little strange.
We don’t observe local governments requiring that upmarket supermarkets give 10% of what they sell at reduced prices to poor people. It would be a little strange if the amount of food we provided to the poor fluctuated with the demand for fancy dinners at downtown restaurants. Macy’s is not required by law to give five poor people overalls for every 100 suits they sell. Those policies would be the functional equivalent of inclusionary zoning for food and clothing. If it sounds absurd, perhaps it’s worth pondering why, and whether housing is really so different.
Inclusionary zoning policies in many cities only apply to construction of larger buildings, with a typical threshold of 5 or 10 units before being subject to the policy. On the one hand, this will tend to discourage construction of larger multi-unit buildings, perhaps a politically appealing aspect of the policy. On the other, it holds the amount of housing affordability support hostage to the decisions of private developers about what they are going to build, and adds another layer of complexity to the zoning code.
Finally, inclusionary zoning policies discourage the construction of new market-rate housing. Inclusionary zoning is basically a tax on the construction of new market-rate housing. Proponents understandably don’t want to describe it as a tax, but a tax it is. Unless the city is funding the difference between the break-even and the affordable price from general taxation, inclusionary zoning is a tax on new construction.
Less new housing will be built if new housing is taxed, unless developers and potential residents are totally indifferent to the increased costs of market-rate housing. And indeed that is what research about the effects of inclusionary zoning has found: that it has increased the costs of housing and reduced the rate of construction, surely not what is desired. There are important urban-form implications here as well. If some cities mandate inclusionary zoning and others don’t, new development will be more likely to occur in the cities without inclusionary zoning. These shifts are unlikely to be easily discerned by policy makers who won’t see the developments that aren’t proposed because they are no longer profitable, and won’t see that a project is now being planned across the border.
The politics of inclusionary zoning are understandable. It sounds like policymakers are doing something about ensuring housing affordability. But the evidence is clear that inclusionary housing provides a tiny number of units relative to need, at best has no impact on overall housing prices while sometimes increasing them, discourages new construction, and comes with significant administrative and transaction costs.
Housing affordability is a problem, and one of the main causes is exclusionary zoning that makes it hard for new housing to be built. While the impact of zoning in Minneapolis on housing affordability is smaller than in Manhattan or San Francisco, there are many reforms that would improve affordable housing supply with modest impacts. Minimum parking requirements could be removed from all residences across the city, continuing the policy reform begun in 2015 that removed minimum parking requirements in larger buildings near high frequency transit (but not near the University of Minnesota).
R1 zones could be changed en masse to R2 to allow renewed construction of duplexes and fourplexes that make good neighbors with single family homes throughout the city, yet house eight to twelve people on the same amount of land as a single family house with two to five residents. Inclusionary zoning sounds nice, because who doesn’t love inclusion, but we should pursue more effective policies to deal with such important issues.
Mosaic Twirl Tag
I claim the copyright to this tutorial. You are free to do whatever you please with what you make, but please do not take my tutorial. Do not send the tutorial through email. Only share my link.
This tutorial is intended for a beginner to learn about setting up image ready for animation. If you have any questions, please feel free to email me.
To do this tutorial you will also need Photoshop CS. I used CS 2. You may download a free trial at Adobe.
Remember to save as a .psd file often!
You will need a tube or cutout image that you wish to use.
You will need to download the Supplies. The MosaicTwirlRDB.psd is a blank image palette that contains selections. The BlueberryNCreamRDB.grd file needs to be saved into your Adobe > CS > Presets > Gradient folder.
1. Open up the tube. It will need to be resized. Image > Resize, and set the longest dimension to 300 pixels.
2. Open up the MosaicTwirlRDB Palette. In the top menu, click Image > Duplicate. Close the original.
3. In the top menu, click Layer > New > Layer.
4. Choose the Gradient Tool from your tool menu. If the paint can is showing, right-click on the paint can and then choose the gradient tool.
5. Click on the gradient drop-down menu in the top menu. Then click on the choice arrow (circled in the picture below) and choose Load Gradients.
6. Find the BlueberryNCreamRDB.grd and load it.
7. Then click on the blueberryNCream gradient from the Preset window and it will be your gradient choice in the gradient bar in the top menu.
8. Click on that gradient bar (top circle above) and you will get the Gradient Editor. Now edit the gradient to change the first and last color sliders to two colors from your tube. If you are unfamiliar with how to edit a gradient, please see my tutorial: Editing Gradients. Give it a name and click NEW.
9. Make sure that this created gradient is your gradient of choice in the top menu. Then draw the gradient from the TOP to the bottom.
10. In the top menu, click Select > Load Selection
11. In the Channel drop-down menu, choose MosaicTwirl1. Click OK.
12. In the top menu, click Layer > New > Layer
13. Draw the gradient from BOTTOM to top.
14. In the top menu, click Filters > Distort > Twirl. Set Angle = 600
15. In the top menu, click Filters > Pixelate > Mosaic. Set the cell size = 8
16. In the top menu, click Select > Inverse.
17. In the Layer Palette, click on the bottom layer to highlight it. Then right-click on the bottom layer and choose Blending Options. Click on the WORDS Bevel and Emboss. Apply these settings: Style = Inner Bevel, Technique = Smooth, Depth = 101%, Direction = Down, Size = 5, Soften = 0, Gloss Contour as shown below, Highlight Mode = Color Dodge, Shadow Mode = Hard Light.
18. In the Layer Palette, click on the top layer.
19. In the top menu, click Layer > New > Layer
20. In the top menu, click Select > Load Selection. Choose MosaicTwirl2.
21. In the selection, draw the gradient from TOP to bottom.
22. In the top menu, click Layer > New > Layer
23. In the top menu, click Select > Load Selection. Choose MosaicTwirl4
24. Draw the gradient diagonally from TOP to bottom.
25. In the Layer Palette, Right-click on the top layer and choose Duplicate Layer
26. In the top menu, click Edit > Transform > Flip Horizontal
27. In the top menu, click Edit > Transform > Flip Vertical
28. In the Layer Palette, right-click on the top layer and choose Blending Options. Click on the WORDS Bevel and Emboss and apply these settings: Style = Inner Bevel, Technique = Smooth, Depth = 100, Direction = Up, Size = 5, Soften = 5, Angle = 120, Gloss Contour as shown.
29. Right-click on the blending Options effect of the top layer and choose Copy Layer Style.
30. Right-click on the layer below the top one and choose Paste Layer Style
31. In the top menu, click Layer > New > Layer
32. In the top menu, click Select > Load Selection. Choose MosaicTwirl3.
33. In the top menu, click Select > Inverse.
34. Draw the gradient from the top left corner to the bottom right corner.
35. Click on the tube image. In the top menu, click Select > All. then click Edit > Copy.
36. Click on the twirl image. In the top menu, click Edit > Paste. With the mover tool move the tool into position.
37. In the top menu, click Layer > Merge Visible.
Add your name and watermark and save as a jpeg!
Please do not remove this tutorial from my site. Share my url. | <urn:uuid:3ddea471-53b5-4ffc-b969-6e6509d8caa5> | CC-MAIN-2016-26 | http://byudragon.com/Tutorials/MosaicTwirl/MosaicTwirl.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395548.53/warc/CC-MAIN-20160624154955-00032-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.690406 | 1,147 | 2.515625 | 3 |
From BCH reader Thomas Underwood comes this illustration out of an 1852 issue of Gleason’s Pictorial Drawing-Room Companion, which was published in Boston. Accompanying the wood carving was a brief article on Houston and the geography of Texas.
Now, a couple of things are off with this image. First, and most obvious, this doesn’t really look like the city, what with hills and all that. Mike Vance of Houston Arts and Media speculates that this illustration would have been done around 1839 by Europeans looking to attract interest in coming to Texas.
Second, the caption and accompanying article identify Houston as the capital of Texas.
What? True, Austin’s title as capital was very much an uncertainty in the early 1840s, but by the time this edition was published, voters and lawmakers had already decided on Austin as the capital.
Guess no one told the folks in Boston. | <urn:uuid:d7d9c547-5f35-41de-a37e-1a1380b14d81> | CC-MAIN-2016-26 | http://blog.chron.com/bayoucityhistory/2012/05/does-this-look-like-1850s-houston-to-you/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396538.42/warc/CC-MAIN-20160624154956-00133-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.979363 | 192 | 3.234375 | 3 |
Born: August 8, 1571
Died: August 31, 1638
In addition to being a composer, John Ward was an Attorney in the Exchequer and maintained dual careers throughout his life. He found a patron in Henry Fanshawe and served as household musician for his family, composing sacred and secular music for them. These positions allowed Ward to become wealthy in his own right, leaving a considerable estate at the time of his death. His compositions include many madrigals, music for the viol and some sacred music. His madrigals are unusual for their serious nature, unlike those of other composers of the time. It is possible that his compositions for viol were fairly popular during Ward's time, given their large number. His music is characteristically dissonant with strong melody lines, relies heavily on the use of sequences and is generally rhythmically simple.
History of Medicine
HISTORY as Today's News
In the late 1960s, when science and medicine in many ways were at the height of their power, the very structure of professional authority came under question. The calm of the academy was disturbed by the civil rights, anti-war, and feminist movements, and by demands for environmental protection, occupational safety, and reproductive rights, as the younger generation declared its disenchantment with the status quo.
In this context, medical historians who had been trained in the burgeoning graduate programs of the sixties reached for a broader relevance. Emulating the scholarly standards of their mentors, they also nurtured an ideal of political engagement and sought out new audiences. They began to articulate a public relevance for the history of medicine by addressing medical professionals and policy makers, testifying before public agencies and commissions, appearing in court and in Congress, and interpreting history in popular exhibitions, magazines, films, and public television programs. The history of medicine thus emerged from the academy to become part of today's news.
Allan Brandt, historian of medicine at Harvard University, reproduced these World War II-era posters in his highly regarded social history of venereal disease in the United States.
No Magic Bullet: A Social History of Venereal Disease in the United States Since 1880
New York, 1985
NLM Call Number: WC 11 AA1 B8n 1985, figures 16-19
This article is one result of Brandt's collaborative work with colleagues in law, epidemiology and health policy. Brandt also contributed to studies on the same subject organized by the National Academy of Science's Institute of Medicine, the nation's most esteemed health policy advisory body.
"FDA Regulation of Tobacco Advertising and Youth Smoking: Historical, Social, and Constitutional Perspectives," Journal of the American Medical Association
Chicago, February 5, 1997, Volume 277, Number 5, Page 410-411
Copyrighted 1997, American Medical Association
NLM Call Number: W1 J221
Final Report: Executive Summary and Guide to Final Report
NLM Call Number: W 20.55 H9 U58a 1995, Title Page
Courtesy of Susan Lederer
Since its publication, Rosner and Markowitz's Deadly Dust has been introduced in court cases as evidence that manufacturers either "knew" or "should have known" the dangers workers faced in various "dusty trades," particularly sand blasting and granite cutting. A major issue in the cases has been the inadequacy of warnings given to workers. Prior to the lawsuits, only a general warning about the danger of working with silica was placed on the side of bags; since the lawsuits, detailed warnings have become more common.
Deadly Dust: Silicosis and the Politics of Occupational Disease in Twentieth-Century America
(c) 1991 by Princeton University Press.
All rights reserved.
NLM Call Number: WF 11 AA1 R8d 1991, Cover
This historical monograph helped re-open the silicosis issue for the Occupational Safety and Health Administration (OSHA), the National Institute for Occupational Safety and Health (NIOSH), and the Mine Safety and Health Administration (MSHA) of the United States Department of Labor.
Dying for Work: Worker's Safety and Health in Twentieth-Century America
Bloomington and Indianapolis, 1989
NLM Call Number: WA 11 AA1 D93 1987, Cover
Dying for Work is a pioneering collection of essays on the occupational health and safety risks facing American workers during this century. | <urn:uuid:f4ce8610-d258-45f0-961a-818d277b877c> | CC-MAIN-2016-26 | https://www.nlm.nih.gov/hmd/sowhatsnew/todays.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395160.19/warc/CC-MAIN-20160624154955-00187-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.933354 | 727 | 3.15625 | 3 |
(Helen Bond) A week or so ago we welcomed Dr Ken Dark from the University of Reading to Edinburgh. Ken is an archaeologist with a great deal of experience in all things to do with the Roman Empire, and we were particularly interested to hear of his experiences in mapping the area around Nazareth and the Sea of Galilee. The two extended lectures were extremely lively and we all learnt a great deal from them. I was particularly struck by the difference in archaeology between Romanized Sepphoris and Nazareth; it was also news to me that there was no road between Nazareth and Sepphoris – clearly this puts some question marks over reconstructions that have Jesus continually making his way over to the city. By way of contrast, there was a good road between Nazareth (which Ken portrayed as a small Jewish regional town rather than a hamlet) and the larger and more Romanized towns around the Sea of Galilee.
Summaries of the two lectures are posted below.
Archaeology of Roman-Period Nazareth
Recent work by the Nazareth Archaeological Project (established in 2004) is transforming our understanding of first-century Nazareth and its hinterland, by applying modern archaeological methods and theory to its study for the first time. In Nahal Zippori, the broad valley between Nazareth and Sepphoris, survey has revealed a pattern of many small agricultural settlements, probably established at, or just before, the start of the Roman period. Those closer to Nazareth seem to have used only artefacts produced in what are known to have been Jewish contexts, but those closer to Sepphoris used a much wider range of material, including imported goods. This, along with other evidence, strongly suggests that a Roman-period cultural boundary existed between communities nearer Sepphoris and those nearer Nazareth, casting doubt on many recent interpretations of the relationship between the two centres. Turning to Nazareth itself, a reinvestigation of the archaeological site below the present Sisters of Nazareth convent, just across the street from the Church of the Annunciation, has demonstrated a long sequence of activity from the Roman period onward. This begins with an exceptionally well-preserved domestic building, probably a ‘courtyard house’, dating to the first century. The structure was disused within that century, and burials, including an almost-complete kokhim tomb of mid- to late-first-century form, were dug in its immediate vicinity. Later, the site was used for the largest Byzantine church yet identified in Nazareth, with the earlier house and tombs contained and venerated in its crypt. The church may well be the ‘lost’ Church of the Nutrition, referred to in the seventh-century Insular Latin text De Locis Sanctis.
K.R.Dark ‘The Sisters of Nazareth site and the archaeology of Early Roman period Nazareth’ The Antiquaries Journal 92, 2012, 1-28.
K.R.Dark ‘The Byzantine and Crusader Church of the Nutrition in Nazareth rediscovered’ Palestine Exploration Quarterly 144.3, 2012, 164-184.
K.R.Dark ‘The Roman-Period and Byzantine Landscape between Sepphoris and Nazareth’ Palestine Exploration Quarterly 140.2, 2008, 87-102.
Dalmanutha Discovered? First-Century Fishing, Farming and Urbanization around the Sea of Galilee
While there has been much archaeological work on Roman-period and Byzantine settlement around the Sea of Galilee, little synthesis of this has taken place. Both material and written evidence suggest a distinctive economic system, in which local communities developed a specialised fishing ‘industry’ alongside agricultural production, but this has seldom been studied in its own right. Indeed, the valley of Ginosar, which has the best farmland around the ‘sea’, has been largely neglected by archaeologists, except for the urban site to its extreme south-east identified by Franciscan scholars as Magdala, and the famous first-century ship discovered on its shoreline. A new research project is synthesising existing data and using air- and satellite-photography to re-examine the area, combined with the first extensive archaeological survey of the Ginosar valley. The latter has identified a very large, but previously-unrecognised, Late Hellenistic, Roman-period, and later, settlement between the modern town of Migdal (on the western side of the valley) and the coast, just south of Kibbutz Ginosar. It is hard to imagine that a Roman-period coastal community of this size is nowhere mentioned in textual sources, and the site might be identified with one of the unlocated toponyms known from the Bible, perhaps the Dalmanutha of Mark 8:10.
K.R.Dark (forthcoming in 2013) ‘Archaeological Evidence for a Previously Unrecognised Town near the Sea of Galilee’ Palestine Exploration Quarterly 141.3 | <urn:uuid:ed9ab778-70f4-436c-a6f1-45b8a015c752> | CC-MAIN-2016-26 | http://christianorigins.co.uk/2013/06/07/dr-ken-dark-on-galilean-archaeology/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396959.83/warc/CC-MAIN-20160624154956-00058-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.960293 | 1,036 | 2.703125 | 3 |
What is Marine Debris?
Marine debris is trash that gets into the marine environment as a result of careless handling or disposal. Two characteristics of trash, its buoyancy and ability to be blown around, affect how easily trash becomes marine debris, while the ease with which trash degrades dictates how long it remains in the marine environment. Marine debris includes all the objects found in the marine environment that do not naturally occur there. Although items such as tree branches and the bones of land animals can be considered marine debris, the term generally is reserved for trash. The most common categories of marine debris are plastic, glass, rubber, metal, paper, wood, and cloth.
Where does Marine Debris Come From?
Marine debris comes from many different sources. Any trash that is improperly disposed of, as well as any materials that are improperly transported or stored, can become marine debris. The main sources of marine debris are: beach-goers, improper disposal of trash on land, stormwater sewers and combined sewer overflow, ships and other vessels, industrial facilities, waste disposal activities, and offshore oil and gas platforms.
Every year, thousands of people visit our bay area beaches. Many of them leave behind materials that become marine debris, such as food wrappers, cans, cigarette butts, and toys like shovels, pails, and beach balls. This trash can be blown into the ocean, picked up by waves, or washed into the water when it rains.
Landlubbers also can generate marine debris. Trash can be blown or washed directly into the ocean. Even trash that is generated hundreds of miles inland can become marine debris if it is blown or washed into rivers or streams and carried to sea.
Stormwater runoff (the water that flows along streets or along the ground as a result of a storm) can carry street litter into sewer pipes, which flow to the ocean. Where there are older sewer systems, sewage (the wastes flushed down toilets and drains) also is carried in the same pipe system as stormwater runoff. Pipes that carry a combination of sewage and stormwater are known as combined sewers. Unlike independent storm sewers, combined pipes lead to a sewage treatment plant rather than discharging directly into a nearby body of water. At the sewage treatment plant, sewage is separated into sludge (solid waste materials) and water. The sludge is dried and either disposed of in a landfill or treated and sold as a fertilizer. The treated water is discharged into a river or other nearby waterway, free of solid waste.
Combined sewer pipes cause problems when heavy rainstorms cause too much water to enter the sewer system. When this happens, the amount of water in the sewer pipes exceeds the sewage treatment plant's handling capacity. To prevent major operating problems in the plant, a safety overflow valve diverts the excess water from the plant into a nearby waterway. The problem with this system, however, is that untreated sewage and debris also are diverted into the waterway.
Boats are also sources of marine debris. Sometimes, trash is purposefully thrown overboard. One major reason for the overboard disposal of trash is that there is limited storage space aboard these vessels. Most of the time, however, trash is disposed of in the ocean by people who are unaware of the problems it can cause. Trash can also accidentally fall, blow, or wash off vessels into the water. In addition, fishing nets and lines, and other types of equipment, can be lost at sea and become marine debris.
Industrial facilities contribute to the marine debris problems when waste items generated by industrial processes are improperly disposed of on land. Finished products also can become marine debris if they are lost during loading and unloading at port facilities, or they are lost when they are transported through waterways or over land.
Waste disposal activities can cause a problem when trash is lost during collection or transportation, or when trash blows or is washed away from disposal facilities.
Once debris has found its way into the ocean, it is very difficult to trace the source of the debris. A plastic cup, for instance, could have been left by a beach-goer, littered in a city street and washed into a storm sewer and out to sea, blown off of a recreational boat, used on a shipping vessel and disposed of overboard, and so on. Clearly, marine debris is a complex problem whose answer will require that many sources of marine debris be controlled.
What's One Way to Reduce Marine Debris?
Recycling is one way to reduce trash. Recycling is the collection and reprocessing of materials so they can be used again. Before materials can be processed for reuse, they must be separated into different types (such as plastic, glass, and metal). Although recycling has become widespread, not every type of material can be recycled.
Paper is the most frequently recycled type of trash. Three types of paper are recycled: high-grade paper, such as computer paper; newspaper; and corrugated cardboard. Metals also are commonly recycled, particularly aluminum cans. All types of glass, except light bulbs, ceramic glass, dishes, and plate glass, currently can be recycled. Overall, very little plastic waste is recycled, with the exception of plastic milk jugs and soft drink bottles.
Even better than recycling is adopting pollution prevention strategies that produce less waste in the first place. Ways to produce less waste include reusing materials, using reusable items rather than disposable ones, and reducing the amount of packaging we use.
We can also take steps to keep waste from getting into the ocean. Most importantly, littering should be prevented. Boaters and beach-goers should ensure that trash and other items are not blown or washed away. Before trash is left out for collection, it should be tightly secured in bags or trash cans to ensure that trash stays in its proper place.
How Can Marine Debris Affect Wildlife?
The two primary problems that marine debris poses to wildlife are entanglement and ingestion. Entanglement results when an animal becomes encircled or ensnared by debris. Entanglement can occur accidentally, or when the animal is attracted to the debris as part of its normal behavior or out of curiosity. For example, an animal may try to use a piece of marine debris for shelter or as a plaything, or as a source of food (if other plants and animals are already trapped in the debris or if the debris resembles prey that is a normal part of its diet).
Entanglement is harmful to wildlife for several reasons. Not only can entanglement cause wounds that can lead to infections or loss of limbs, but it may also choke or strangle wildlife. In addition, entanglement can impair an animal's ability to swim, which can cause drowning or difficulty in moving, finding food, and escaping predators.
Ingestion occurs when an animal swallows marine debris. Ingestion sometimes happens accidentally, but generally animals feed on debris because it looks like food. Ingestion can lead to starvation or malnutrition if the ingested items block the intestinal tract and prevent digestion, or accumulate in the digestive tract and make the animal feel "full," lessening its desire to feed. Ingestion of sharp objects can damage the digestive tract or stomach lining and cause infection or pain. Ingested items may also block air passages and prevent breathing, which suffocates animals.
Marine mammals, turtles, birds, fish, and crustaceans all have been entangled in or have eaten marine debris. Many of the species most vulnerable to the problems of marine debris are endangered or threatened. Endangered species are plants or animals that are in immediate danger of becoming extinct because their population levels are so low. Threatened species are plants and animals that may become endangered in the near future.
Approximately 100,000 marine mammals die every year from entanglement in or ingestion of marine debris. Of the different types of marine mammals, seals and sea lions are particularly affected by entanglement because of their natural curiosity and tendency to investigate unusual objects in the environment. Packing straps and net fragments are a major problem for these animals. Some studies have linked the decline of the northern fur seal of Alaska and the endangered Hawaiian monk seal to entanglement in marine debris. Whales, including endangered humpback and gray whales, have been found entangled in fishing nets and line. Manatees, which are also endangered, have become entangled in crab-pot lines, and dolphins and porpoises may get caught in fishing nets. Ingestion of debris by marine mammals appears to occur less frequently, but it has been reported for elephant seals, sea lions, certain types of whales, and manatees. Although few cases of ingestion have been reported, these cases are significant because they have usually contributed to or resulted in the death of animals due to suffocation or starvation.
Sea turtles also have become entangled in marine debris. All five species of sea turtles found in the U.S. are endangered, and all have been found entangled in different types of marine debris, such as fishing line, rope and nets. Ingestion of marine debris is an even greater problem for these species. Sea turtles have swallowed plastic bags because they look like jellyfish, one of their favorite foods. This may cause the turtle's digestive tract to become blocked, leading to starvation. Cases of turtles swallowing balloons, tar balls and debris that has become covered with algae have also been reported.
Nearly a million seabirds are thought to die from entanglement or ingestion each year. Since most seabirds feed on fish, they are often attracted to fish that have been caught or entangled in nets and fishing line. Entanglement in fishing line has been a particular problem for the brown pelican, which is an endangered species. Seabirds are some of the most frequent victims of abandoned nets. As many as 100 birds have been found in a single net. Many birds, including ducks, geese, cormorants and gulls have been found entangled in six-pack rings and other encircling debris. The ingestion of plastic resin pellets (small, round pellets that are used to form plastic products) is a major concern. Many types of birds have been found to feed on these pellets, most likely because they mistake them for fish eggs or other types of food.
Fish and crustaceans such as lobsters and crabs are frequently caught in lost or discarded fishing gear, in a phenomenon known as ghost fishing. For example, a ½-mile section of nylon net was found in Lake Superior that had been abandoned for an estimated 15 years, and contained 100 pounds of fish, much of which was rotten. Lost traps also continue to attract fish and crustaceans, which enter them in search of food or shelter. In New England alone, an estimated half million lobster pots are lost every year.
Wildlife also is affected when marine debris disturbs its environment. For example, lost or discarded fishing gear and nets can drag along the ocean floor or through coral reefs, disrupting the animals and plants that live there. In addition, debris can bioaccumulate in the food chain. Bioaccumulation occurs when organisms low on the food chain consume a substance that builds up in their bodies. When animals higher on the food chain eat those organisms, they also ingest that substance, and it accumulates in their bodies. The higher an animal is on the food chain, the greater the quantity of the substance that is consumed and accumulated. For example, eagles and other predators high on the food chain have been found with large concentrations of plastic pellets in their stomachs after preying on smaller birds, which previously ate the material in fish. | <urn:uuid:5e7dc692-c31b-4e87-94c2-ba64b5338f9d> | CC-MAIN-2016-26 | http://www.marine.usf.edu/beachbuddies/background.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396455.95/warc/CC-MAIN-20160624154956-00170-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.957676 | 2,390 | 3.96875 | 4 |
TERRE HAUTE, Ind. (WTHI) – Poisonous plants paint the Wabash Valley, and some of them can cause harm to you or your children.
“This is one of the most poisonous plants growing in our area right now, this is pokeweed,” explained Peter Coppinger, PhD, Associate Professor of Biology at Rose-Hulman.
“They bloom into these flowers and in the summer they grow the fruit right here, and these are what are the most poisonous,” said Coppinger.
Although they smell and look like innocent flowers, this posy of poisonous plants can be dangerous. But there is one poisonous plant that is notorious across the region, and that is poison ivy.
“So people say the best way to identify poison ivy is ‘leaves of three let it be’, but there are so many plants of leaves of three, so they are very hard to identify,” said Coppinger.
So what’s the best way to pin point the poison?
“The basic characteristics, you pick a stem and they go up in alternate directions,” explained Coppinger.
And although not everyone is allergic to poison ivy, Coppinger says “don’t be too cavalier about trooping through a field of poison ivy. Even if you aren’t allergic to it, you can still develop sensitivity to it over time.” | <urn:uuid:b6a2eda4-7c32-4103-a74c-de21aedacabe> | CC-MAIN-2016-26 | http://wthitv.com/2014/07/02/pinpointing-poisonous-plants/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393332.57/warc/CC-MAIN-20160624154953-00001-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.940874 | 311 | 3.203125 | 3 |
A LOVERS’ well, said to be created by a medieval king for his mistress, needs major restoration work at Blenheim Palace.
The so-called Fair Rosamond's Well is thought to have been built by King Henry II for his mistress Rosamond de Clifford 850 years ago.
She was famed for her beauty, and was called the “most beautiful woman in Christendom” and “rose of the world” in a 17th century poem Fair Rosamond.
But the well, down from the palace’s Grand Bridge across the lake from the house, is now in need of some love itself.
Rosamond de Clifford, depicted in the main picture in a 1917 Oil on canvas painting by John William Waterhouse
Roy Cox, the palace’s rural enterprises manager, said: “Fair Rosamond’s Well is one of the most mysterious, historic and romantic locations in the country.
“It has been an inspiration for writers and artists down through the years and has been a focal point for more than eight centuries, attracting visitors from around the world.
“Over recent decades the well has become somewhat overgrown and at risk of becoming damaged.
“Our plan is to sympathetically restore the surviving well and its surrounding area and to provide people with lots more information about its rich past.”
The poem said that the well was part of a maze, or bower, with 150 doors, built by the king to hide Rosamond from his jealous wife, Queen Eleanor.
It says that the queen poisoned Rosamond; historians say this is not true, though they do agree that she had at least one child during her relationship with the king.
Blenheim Palace carried out an archaeological dig to find the fabled maze, and while some underground structures were found, they are thought to be from more modern work.
However surveyors did find a metallic feature which could be ancient pipework to do with the well.
Today, the well has a spring of water that flows from a stone wall into a shallow pool surrounded by flagstones, and the water was – until recently – bottled and sold to visitors.
Ode to the Rose of the World
A section of Fair Rosamond, first published by Thomas Deloney in 1612
THE king therefore, for her defence
Against the furious queene,
At Woodstocke builded such a bower,
The like was never seene.
Most curiously that bower was built,
Of stone and timber strong;
An hundred and fifty doors
Did to this bower belong:
And they so cunninglye contriv’d,
With turnings round about,
That none but with a clue of thread
Could enter in or out. | <urn:uuid:92ce532e-6f5f-441d-87b4-7af2ba885217> | CC-MAIN-2016-26 | http://www.oxfordmail.co.uk/news/11364952.Lovers____well_for_king___s_fair_maiden_needs_a_bit_of_TLC/?ref=var_0 | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397695.90/warc/CC-MAIN-20160624154957-00020-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.973852 | 589 | 2.828125 | 3 |
Definitions for Terrain (təˈreɪn)
a piece of ground having specific characteristics or military potential
"they decided to attack across the rocky terrain"
A single, distinctive rock formation; an area having a preponderance of a particular rock or group of rocks.
An area of land or the particular features of it.
Origin: From terrain, from terrenum, neuter of terrenus, from terra.
Terrain, or land relief, is the vertical and horizontal dimension of land surface. When relief is described underwater, the term bathymetry is used. Terrain is used as a general term in physical geography, referring to the lay of the land. This is usually expressed in terms of the elevation, slope, and orientation of terrain features. Terrain affects surface water flow and distribution. Over a large area, it can affect weather and climate patterns.
Chambers 20th Century Dictionary
ter′ān, n. (geol.) any series of rocks continuously related: any tract considered in relation to its fitness for some purpose. [Fr.,—L. terrenum.]
Sample Sentences & Example Usage
I think that the terrain has changed.
(Iraqi forces) will eventually take back the terrain.
Mud, rugged terrain and bad weather hampered our rescue efforts.
It was real mountainous terrain with heavy tree cover. Basically, all we were doing was a recovery.
Trail runs force you to focus, if you’re not attentive to the terrain, you’re likely to be face-down on it!
Thursday, December 8, 2005
Marvel's dips into the Antietam
The January 2006 issue of America's Civil War magazine has an article written by William Marvel on the possible usefulness of the creek crossings on the Union left flank during the battle. The author waded through the creek himself and recorded the depth at various places. He seems to make the quite reasonable assumption that the flow today is about the same as it was back in 1862. What makes me think twice about this is an episode of Battlefield Detectives I watched a few nights ago that investigated the Revolutionary War battle at Oriskany. The cartographer/geographer was puzzled why the raging creek of today on a key area of the battlefield was not noted on period maps. After studying the problem, he posited that the considerable deforestation and other vegetation losses in the intervening years led to a high-erosion situation that resulted in a far greater volume of free water flowing into low-lying areas. Thus, rivulets are transformed by the increased runoff into creeks and creeks into small rivers and so forth. Makes me wonder if Marvel took this possibility into consideration.
HOPE works in more than 35 countries worldwide. Please enjoy our blog as we document the successes and challenges of our work to provide Health Opportunities for People Everywhere.
Posted By: Tom Kenyon, M.D., M.P.H. on June 8, 2016
It seems unbelievable to those of us who remember the 1980s and '90s, but HIV/AIDS often seems to be regarded as yesterday's crisis. It no longer grabs headlines. That doesn't mean the fight against the disease is over -- in fact, far from it, which is why the United Nations General Assembly is holding a high-level meeting on HIV/AIDS over the next two days to rededicate the world to controlling the epidemic by 2030.
That we can even think of making HIV/AIDS history is the direct result of one of the most comprehensive and successful international public health operations in human history – one that can be a blueprint for the fight against future epidemics and pandemics. As President Barack Obama said in a statement ahead of the UN meeting: "The past 35 years tell a story that bends from uncertainty, fear, and loss toward resilience, innovation, and hope."
And as we consider how to confront other public health crises, including our recent challenge from Ebola and the current epidemic very much in the public eye, Zika and other threats to come, it's useful to reflect on why the HIV/AIDS fight has largely worked thus far.
Yes, it has taken resources, evidence-based approaches, and hard work. But for me, the most important watchword is: Perseverance. There were times when the battle against HIV/AIDS looked dark indeed. But the many health professionals, governments, and NGOs, including Project HOPE, that were involved in the effort to combat the disease have never given up. And this is what they have achieved. According to new UN figures released ahead of the conference, the number of HIV-positive people taking antiretroviral medicines more than doubled to an estimated 17 million people from 2010 to 2015, including two million people who were started on the lifesaving treatment last year alone.
Global coverage of antiretroviral medicines hit 46 per cent at the end of 2015 and progress was most notable in eastern and southern Africa, where coverage increased from 24 per cent in 2010 to 54 per cent in 2015, reaching a total of 10.3 million people. AIDS-related deaths fell from 1.5 million in 2010 to 1.1 million in 2015. But as deaths decline, we mustn’t forget the staggering mortality during the era when treatment was not available. We must persevere.
One of the most important cogs in the anti-AIDS machine is a U.S. effort: The President's Emergency Plan for AIDS Relief (PEPFAR) -- which debuted under President George W. Bush and has continued under President Obama. To date, the program has had staggering results. It has provided lifesaving antiretroviral drugs to 9.5 million people. It has delivered HIV testing and counseling to more than 14 million pregnant women. And thanks to PEPFAR, more than one million babies have been born HIV free. It is highly significant that the program has not been an old-fashioned top-down aid and emergency effort. It's been driven by health professionals and partnerships with governments in the most affected nations -- and in fact has done vital work in establishing sustainable public health systems that will be the backbone of improving health care, especially in Africa, for generations.
Just as importantly, governments and politicians from all persuasions, perhaps slowly at first but later with the full force that only coordinated international action can bring, recognized the danger from HIV/AIDS and the need to tackle it aggressively. That aspect of HIV/AIDS coordination provides a model for how bipartisan political efforts and diplomatic coordination actually work well to advance global health issues.
It's important that when we look back at the 35 years of fighting HIV/AIDS, we consider our successes -- that's how we can summon up the emotion and the political will to carry on the struggle -- and that's why the UN meeting is so crucial. But much remains to be overcome, including stigma and discrimination, poor access to services, and lack of sustainable financing.
The conference will look at ways to increase the commitment of individual nations to cope with various aspects of their own epidemic. This includes how best to finance new treatment regimes, how to find the right mix of motivated and knowledgeable leaders, and the use of new data techniques to identify marginalized populations.
It’s also a reality that people in countries affected by HIV/AIDS have finite resources to spend on their own health and there is more competition for those resources than ever before. Now that the threat from noncommunicable diseases is so evident, some governments must confront a new reality: "Do I buy diabetes, hypertension, and cancer medicines, or do I buy HIV medicines?"
And there are still parts of the world, including Nigeria and parts of Eastern Europe for instance, where the fight against HIV/AIDS is still not being won -- for all the success elsewhere. That is why meetings to set new goals and to examine current and future policy like the one at the UN this week are so vital. Global leaders need to set the stage for those who do the actual work.
And it comes back to that word: Perseverance. No person, government or country can fight HIV/AIDS or other health crises alone. We all share risk and we all need to share in the response. And it’s not just true of this epidemic but other public health threats as well, both known and unknown.
Students at Wuhan HOPE School of Nursing Offer Excellent Care
Posted By Sharon R. Redding, RN, EdD, CNE, Project HOPE Volunteer, currently serving as a nurse educator at the Wuhan HOPE School of Nursing on June 1, 2016
Community service is one of the major activities of nursing students at Wuhan HOPE School of Nursing, in Wuhan, China, where I have been volunteering for the Spring 2016 semester. For several years, students have volunteered each Sunday at a local nursing home after having been introduced to this facility during their Community Health nursing course. This facility also serves as a clinical practice site for student nurses in the Nursing Assessment course. Students have the opportunity to listen to heart and lung sounds of residents at the nursing home, as well as perform other health assessments. This benefits both residents and students, as there are no professional nurses employed at the home. Blood pressure, hearing and vision screening, blood glucose monitoring, and other assessments of the mouth, teeth, skin, hair and nails, as well as mobility are completed. The students then present their findings to a nurse from a Chinese community health center who visits the home each month.
During their clinical experience, the students recognized that many residents lacked family support and opportunities for recreation and education, so they took it upon themselves to expand their presence through weekly Sunday visits. Students plan recreation activities such as listening to music, group singing, physical exercise and playing various games. These are accompanied by discussions of health issues such as correct use of a cane, walker or wheelchair, oral hygiene, handwashing and other topics. My role as a Project HOPE nurse educator is to supervise students in their clinical experiences at this home and also support the students in their community outreach efforts.
I recently organized a program with the help of a grant from the Gamma Pi-at-Large Chapter of Sigma Theta Tau International Nursing Society – an organization to which I belong. This provided for the purchase of bibs, small towels, toothbrushes and toothpaste to be donated to the nursing home. The Wuhan student nurses’ volunteer group also asked nursing faculty and staff to participate by donating additional toothbrushes and toothpaste to supplement the grant. Students observed that residents have major dental problems, and lack fine muscle control and self-care abilities. Their clothes are often soiled after eating and need protection, which is why students chose to use the grant funds to buy bibs and small towels.
Students planned a special program to present the supplies to the director of the home. This was well-attended by students and faculty, and the director chose to have the event documented by a videographer so that the activities could be shared with nursing home administrators. Residents gathered and music was provided. Residents also organized and presented their own musical selections. One resident – a former opera star with the Beijing Opera – sang for the group.
The entertainment portion of the program was followed by an education presentation for assistive personnel working at the nursing home about dental hygiene and feeding techniques for residents needing assistance. Residents had their blood pressure checked and students also provided blood glucose monitoring for residents needing this assessment.
As a Project HOPE nurse educator in Wuhan, my job is varied. Outreach to the community, fostering educational experiences in settings with limited resources, and promoting volunteerism are examples of how the mission of Project HOPE can become a reality.
Dr. Sharon Redding is from Omaha, NE. She has a doctorate in Education and a Master’s Degree in Nursing. She was a former HOPE Nursing Educator in Brazil for seven years in the 1970s and 1980s. She is now a volunteer teaching in both the undergraduate and graduate programs at Wuhan HOPE School of Nursing. This school was established in 2002 with the cooperation of Project HOPE and offers the only baccalaureate nursing program in English in China. Dr. Redding works with faculty in implementing creative teaching strategies, evaluating the curriculum and the use of appropriate testing. She assists graduate students in developing their research proposals and writing their theses. Helping faculty to design research to develop evidence-based practice is another of her activities. She is also called upon to do presentations and workshops at colleges and medical centers in Wuhan and nearby provinces.
Posted By John Bronson, Sr. Director, Technical Strategy and Innovation, Global Health on May 3, 2016
Late last spring, Project HOPE’s global staff submitted 20 pioneering ideas to be considered for the first Internal Global Health Innovations Challenge. This challenge encouraged staff working at program sites around the world to submit original solutions to improve health for underserved populations. The entire HOPE organization had the opportunity to vote for one of the 20 promising ideas, and four others were chosen by technical staff; each winner received a $20,000 innovation grant, supported by HOPE’s long-time partner, Eli Lilly and Company, Inc.
One of the winners was an idea proposed by the Tajikistan team to incorporate global positioning system (GPS) tracking devices to improve TB sputum transportation and patient monitoring, thereby increasing the success of TB treatment.
Proper medicine adherence is critical to effective treatment of Tuberculosis. Great sums are invested in resources for Directly Observed Treatment (DOT) for TB, which confirms patients regularly take their drugs and complete treatment under supervision by community social workers (CSW), thus achieving cure and preventing the development of drug resistance.
The initial stage of the project began with the project team in Tajikistan conducting a three-month demo trial to verify the concept and determine if it was viable.
The results of our test activity showed increased confidence that patients received treatment. Patients, as well as CSWs and coordinators, embraced the system with enthusiasm as an additional mechanism that increased the chances of better control and, ultimately, of a cure. The information was positively received by the National TB Program (NTP), with interest and readiness to support the system as an additional method to improve DOT upon broader confirmation of results.
The next phase of development is to establish research comparisons to document the efficiencies, quality of services and improved outcomes. The team has begun the process of selecting the company for delivery and installation of the GPS devices, and GPS tracking is planned to begin in early June.
Posted By Krenare Jashari, Project HOPE Volunteer on April 26, 2016
In 1999, the war in Kosovo forced 7-year-old Krenare Jashari's family to leave their home and become refugees in Macedonia. Nearly 17 years later, Krenare returned to Macedonia, this time to help others now caught up in the largest refugee crisis in Europe since World War II. Krenare has since volunteered at a refugee camp in Tabanovce, Kumanovo, where Project HOPE continues to help with donations of supplies, medicines and volunteer medical help. Recalling her own personal history as a refugee, Krenare reflects on the emotions that refugees experience.
I would like to use this opportunity to thank Project HOPE for making it possible for me to be part of their volunteering community which allowed me to spend some time at the refugee camp in Kumanovo.
This experience took me back to the days in 1999, during the war in Kosovo, when I was a refugee in Macedonia. I now am able to use my past experiences to better understand those who are going through a similar situation.
My first memory as a young refugee in Macedonia begins with the sound of a crib rocking, as my mother tried to put her youngest to sleep while attempting to keep her two other small daughters warm. She was also just two months away from giving birth to another baby.
I remember being scared. I could hear people moaning and crying. My sister’s little voice was saying she wanted to go home now, where she could see our nana. Two nights later, I lay down looking at the dark sky, pretending that I was asleep. All I could hear was what sounded like a happy voice: “They are here.” It was my father’s voice. As I lifted my head up, I saw a few men who were handing over blankets. As my father laid a blanket over me, I told him I did not want to sleep, so he took me into his arms.
We spent two months in Stenkovec, the refugee camp in Macedonia. Queuing for a piece of bread took forever. We would practically peel our skin off scrubbing it, a desperate attempt to get the ink stamp off our little hands so we could get back in line for more food.
I remember people coming to visit us; any sign of affection made me feel like I belonged somewhere. Often I wondered: Where did they come from? Why do they seem different -- a lot paler? Everything I had known was taken from me. One minute I had a big family – aunts and uncles that I would visit every weekend – and the next moment I was surrounded by strangers.
And now, as a 24-year-old adult walking around the camp in Tabanovce, Kumanovo (a municipality of Macedonia), I had so many questions: Who are those children playing with? Is it a brother or a sister or a complete stranger?
Going back in time overwhelmed me with memories I had as I went from living in a small tent at the refugee camp in Macedonia with my mom and dad and my two little sisters to then living in a big castle in Millstreet, Co. Cork, Ireland – a place for asylum seekers.
I began to ask myself more questions: Where will these people go? Who will be their salvation? As an adult now I see things differently. How does a parent feel when their child wants their favorite toy or wants to go home? What's the impact on a teenager who's already going through physical and emotional changes? How does the former teacher or engineer feel – now a refugee sleeping in a tent?
From my personal experience, I can truly say that being forced to flee your home and leave behind everything that’s familiar and dear to you is one of the toughest challenges anyone can face. However, seeing how various NGOs such as Project HOPE are working together to offer basic needs such as food, water, shelter and access to health services, really makes me proud of my colleagues who work hard day and night to help the refugees through the darkest moments of their lives.
Let’s all take a moment to reflect on the things that really matter and help a little every day to make the world a safer place.
Project HOPE has been supporting the Syrian Refugee Crisis through our program in Macedonia since September of 2015. Krenare hopes to be able to return to Macedonia to provide more assistance with volunteers.
World TB Day
Posted By Alex Trusov, Project HOPE Sr. Director, TB Portfolio on March 23, 2016
A century-and-a-half ago tuberculosis (TB) was such a fact of daily life that the tragic romance of a young woman dying from the disease was immortalized in Giuseppe Verdi’s famous opera La Traviata.
No one is writing operas, or their modern equivalents, movies or television shows, about TB these days, and most people probably never give it a second thought — even though it is a disease that has afflicted celebrities such as Nelson Mandela, Ringo Starr, Sir Tom Jones and Tina Turner.
But would it surprise you to know that the deadly lung disease remains a potent killer of more than a million people a year worldwide in the second decade of the 21st century?
The World Health Organization (WHO) reports that in 2014 there were 9.6 million new cases of TB and about 1.5 million people died from tuberculosis -- more than the population of the U.S. state of Montana or the population of Swaziland.
When I was a young fellow at the Moscow Medical Academy, my senior colleagues told a story about the time they started looking for other jobs after Dr. Selman Waksman, a microbiologist at Rutgers University, introduced Streptomycin, and new anti-tuberculosis drugs followed in the early 1960s. Thanks to these medicines, thousands of patients, even those with severe cases, were cured in six months.
TB sanatoriums where patients spent years were not needed any more. In the U.S. these places, once dubbed “waiting rooms for death,” were finally closed. Many doctors and medical students thought there were no longer any meaningful careers to be pursued in curing TB. The disease faded from the curriculum of many medical schools in the U.S. and Europe — a trend that soon went worldwide. But TB was not gone, and it found the conditions it needed to strike back.
First, poor populations worldwide did not get meaningful access to the new drugs or consistent, effective treatment – so the disease continued to spread inexorably.
Then, multidrug-resistant forms of tuberculosis (MDR TB) appeared, even in wealthy countries, where some populations were not correctly treated until fully cured. And third, HIV/AIDS acted as a catalyst for tuberculosis to explode among immune-compromised populations from the mid-1980s. Within a decade, the number of TB cases in the U.S. jumped sharply.
The revived threat of TB inspired a new fight against the disease. A strategy of monitoring patients carefully as they took each dose of anti-TB drugs and recording their progress was pioneered by Karel Styblo of the International Union Against Tuberculosis and Lung Disease (IUATLD) and successfully piloted in Tanzania, Malawi, Mozambique and Nicaragua. It showed early promise for wider use.
Richard Bumgarner, leading the World Bank’s health programs in China, invited Styblo and the WHO to build on this approach to design a TB control project for World Bank financing in China, where TB was the number one killer, claiming 400,000 lives a year. By the end of 1991, pilot Chinese counties were achieving double the previous rates of cure in tuberculosis. A large World Bank grant was made to China for TB control. Bumgarner moved to the WHO to help expand its tiny TB program and build the WHO Global TB Programme. WHO declared its first Global Health Emergency in 1993 and named the new TB approach DOTS (Directly Observed Treatment, Short-course). Nearly 80 percent of people were cured, at a cost of less than $10 per life and $3 per new infection avoided (“TB: Join the DOTS.” The Economist. May 20, 1995, p. 89).
Project HOPE also started its first TB activities in the early-1990s when Richard Bumgarner wrote to and convinced Dr. Bill Walsh, Project HOPE’s founder, to build TB programs, first in the Central Asian Republics, and later in parts of Europe and Africa. After he retired from the WHO, Bumgarner joined Project HOPE to strengthen the efforts of our team.
Today Project HOPE successfully guides TB programs in Europe/Eurasia and Africa with more than US$40 million in funding from USAID, GFATM and private donors, and provides additional humanitarian assistance worth hundreds of millions of dollars to save the lives of people all around the world.
Project HOPE invests significant effort in building political commitment in countries where TB programs are implemented, and supports the principles of humanity exemplified when Waksman brought, almost illegally, a few grams of the new Streptomycin to the USSR to save the life of Ira Zukerman, who was dying from tuberculosis meningitis.
In over 20 years of TB program implementation, Project HOPE pioneered universal education for medical doctors, nurses, patients, their relatives and communities. The programs have sought to combat the stigmatization of patients in their communities, improve tolerance and empower patients and health professionals to address the needs of TB patients.
Much improved methods of diagnosis, such as GeneXpert, developed by Rutgers University, where Streptomycin was created, are being successfully introduced by Project HOPE in TB control programs in Africa, Europe and Central Asia.
Following the WHO’s END TB strategy and the US Government’s TB control strategy, Project HOPE provides assistance to patients and their family members from the most vulnerable groups of the population – women, children, migrants, HIV-positive people, injection drug users and former prisoners. This fight has achieved enormous success, helping to save an estimated 46 million lives worldwide since 2000. It must be continued forcefully so that TB does not come back in even more deadly forms.
So, that Internet apocalypse that’s going to befall us when our fiber optic cables max out? Maybe not so much. On Thursday, engineers reported in Science that they’d broken the “capacity limit” for fiber optic transmission, opening the door to future networks that carry more data further at lower costs.
As the world’s collective Internet demand continues to skyrocket, electrical engineers have been keeping pace by upping the signal that passes through our fiber optic cables, allowing us to send and receive more juicy data faster. But optical fiber transmission has certain physical limitations. If you boost the power too much, the beams of laser light that carry data start interfering with one another, until eventually, the signal degrades and information is lost.
Whispers of a so-called capacity crunch—a singular moment when the cables simply can’t spit out our cat videos any faster—have taken on a louder, more alarmist tone recently. Some experts have gone so far as to suggest the Day of Reckoning might be upon us in as little as five years.
Which is why the recent breakthrough is so timely. In experiments performed at UC San Diego’s Qualcomm Institute, electrical engineers were able to increase the power of optical signals nearly 20 fold, deciphering information after it had travelled a record-breaking 7,400 miles, without the use of costly electronic regenerators.
In optical fibers, information is transmitted via carriers that operate across a range of different frequencies. As we beam more laser light through our cables faster, the amount of “crosstalk,” or interference, between these carriers goes up. (Imagine a person shouting down a long corridor, his voice echoing and distorting with increasing distance. Similar idea.) Eventually, we reach a point where the signal becomes so distorted that it can’t be decoded at the other end.
To break that distortion-induced capacity limit, the researchers developed wideband “frequency combs” that essentially condition streams of information before they’re sent out, such that any interference that occurs along the way is predictable. At the receiving end of the fiber, the information can be unscrambled and fully restored.
Said Nikola Alic of the Qualcomm Institute, a lead author on the new Science paper:
Today’s fiber optic systems are a little like quicksand. With quicksand, the more you struggle, the faster you sink. With fiber optics, after a certain point, the more power you add to the signal, the more distortion you get, in effect preventing a longer reach. Our approach removes this power limit, which in turn extends how far signals can travel in optical fiber without needing a repeater.
There’s a lot more work to be done before this development translates into real world Internet improvements, but it’s a promising step toward a future where optical networks carry our data faster and at significantly lower costs than today. Once again, clever engineers are showing the world that most limits are actually in our imaginations. Meanwhile, the Internet lives on.
Significant revisions since the last chapter update are highlighted in the CIG Table of Updates which is available on the PHAC website.
For additional information, refer to previously published National Advisory Committee on Immunization (NACI) Statements and Statement Updates.
Rubella (German measles) is caused by rubella virus, a ribonucleic acid (RNA) virus of the Togaviridae family. For more information about rubella virus refer to the PHAC website.
Rubella virus is highly communicable and is transmitted by droplet spread or direct contact with nasopharyngeal secretions of infected people. Transplacental transmission from an infected mother to her fetus during pregnancy may result in Congenital Rubella Syndrome (CRS) in the infant. Infants with CRS may shed the virus in their urine and nasopharyngeal secretions for 1 year or more. The incubation period for rubella is from 14 to 17 days (range, 14 to 21 days). The period of communicability extends from 1 week before to at least 4 days after the onset of rash. People who recover from rubella have lifetime immunity.
People of any age who have not been successfully vaccinated or have not had rubella disease are at risk of being infected. In Canada, routine infant immunization programs have resulted in sustained high rates of immunity in the general population, but the risk of limited transmission resulting from importation still exists. Individuals with the greatest risk of exposure to the Rubella virus include travelers to destinations outside of the Region of the Americas, where rubella incidence may be higher.
Historically, the incidence of rubella peaked in the spring and winter months in temperate zones; rubella is now limited to sporadic cases and outbreaks.
Spectrum of clinical illness
Rubella results in a transient erythematous rash, post-auricular and suboccipital lymphadenopathy, arthralgia and low-grade fever. As symptoms are non-specific, it may be mistaken for infection due to other viruses including parvovirus, adenovirus, or enterovirus. Adult infection is frequently accompanied by transient polyarthralgia or polyarthritis. Serious complications in children and non-pregnant adults are rare, and up to 50% of infections are subclinical.
Rubella infection during pregnancy often gives rise to CRS, which can result in miscarriage, stillbirth or fetal malformations, including congenital heart disease, cataracts, deafness and mental retardation. Fetal infection can occur at any stage of pregnancy, but the risk of fetal damage following maternal infection is particularly high in the earliest months after conception (85% in the first trimester) with progressive diminution of risk thereafter, and it is very uncommon after the 20th week of pregnancy. Infected infants who appear normal at birth may later show eye, ear or brain damage. Congenital infection may give rise to such problems as diabetes mellitus and panencephalitis later in life.
Rubella occurs worldwide; however, during the last two decades, rubella vaccination programs have greatly reduced rubella incidence rates in many industrialized countries. In Canada, routine infant immunization programs have resulted in sustained high rates of immunity in the general population and a massive decrease in incidence. Like rubella, CRS has been eliminated in Canada, although sporadic cases do occur through prenatal infection acquired in endemic areas. There have been no cases of CRS due to a rubella exposure in Canada since 2000.
For more information about rubella disease in Canada, refer to the Public Health Agency of Canada rubella web page.
In Canada, rubella vaccine is only available in combination with measles and mumps vaccine (MMR) or measles, mumps and varicella vaccine (MMRV). In some other countries, measles vaccine alone is given and rubella vaccination is not offered.
For complete prescribing information, consult the product leaflet or information contained within the product monograph available through Health Canada's Drug Product Database. Refer to Contents of Immunizing Agents Available for Use in Canada in Part 1 for a list of vaccines available for use in Canada and their contents.
The duration of protection following immunization with rubella-containing vaccine is not known, but studies indicate that the duration of both cellular and humoral immunity exceeds 20 years. Asymptomatic rubella re-infection, manifest by a rise in antibody, has been observed in some vaccinees. Asymptomatic re-infection has also been observed in women with naturally acquired immunity associated with very low antibody titres. There are no data regarding the efficacy of MMRV vaccine.
In clinical trials, 95% or more of vaccinees aged 12 months and older developed serologic evidence of rubella immunity after a single dose of rubella-containing vaccine. Antibody titres are generally lower than those observed in natural rubella infection. The long-term persistence of anti-measles, anti-mumps, anti-rubella and anti-varicella antibodies following MMRV vaccination is under evaluation.
One dose of rubella-containing vaccine should be given for routine immunization of children and for immunization of children and adolescents who missed rubella immunization on the routine schedule. MMRV vaccine may be used in children aged 12 months to 12 years. While a single dose is recommended for rubella protection, two doses are required for measles, mumps and varicella protection.
Adults who do not have documented evidence of receiving rubella-containing vaccine on or after their first birthday, or laboratory evidence of immunity, or a history of laboratory confirmed rubella infection should be immunized with one dose of MMR vaccine.
Rubella immunization recommendations differ from measles and mumps recommendations. Because the available preparations all contain measles, mumps and rubella, extra rubella vaccinations may be administered when following the recommendations for measles and mumps vaccination. Refer to Booster doses and re-immunization for additional information.
Individuals who have one or more of the following are considered immune to rubella; individuals who have none of the following are considered susceptible to rubella:
- documented evidence of receiving rubella-containing vaccine on or after their first birthday
- laboratory evidence of immunity to rubella
- a history of laboratory confirmed rubella infection
The following groups are priorities for rubella immunization:
- non-pregnant susceptible women of childbearing age
- susceptible people who work with children (e.g., child care workers, teachers)
- susceptible health care workers
- susceptible travellers to destinations outside the Region of the Americas
- susceptible persons new to Canada from countries where rubella-containing vaccine is not in use
A second dose of MMR or MMRV vaccine (as appropriate for age and risk factors) may be recommended for measles and mumps protection (MMR and MMRV) and varicella protection (MMRV) in certain people. Although a second dose of the rubella component is not considered necessary for elimination of CRS, it is not harmful and may benefit the 1% to 5% of people who do not respond to primary immunization. Refer to Measles Vaccine, Mumps Vaccine and Varicella (Chickenpox) Vaccine in Part 4 for additional information and to Schedule.
Children and adults lacking adequate documentation of immunization should be considered unimmunized and started on an immunization schedule appropriate for their age and risk factors, unless known to be immune based on laboratory testing. MMR or MMRV vaccine as appropriate may be given regardless of possible previous receipt of the vaccine because additional adverse events associated with repeated immunization have not been demonstrated. Refer to Immunization of Persons with Inadequate Immunization Records in Part 3 for additional general information.
Immunity to measles, mumps and rubella should be reviewed in women of reproductive age, and vaccination should be recommended to non-pregnant susceptible women. Ideally, the immunization status of women intending to become pregnant should be reviewed and vaccines updated as necessary prior to conception. Women should delay pregnancy by 4 weeks following vaccination with a live vaccine.
MMR and MMRV vaccines should not be given during pregnancy because of the theoretical risk of disease transmission to the fetus; however, there is no evidence demonstrating a teratogenic or other risk from such vaccines. In one study, there was no evidence of CRS in any of the offspring of 226 women inadvertently vaccinated during pregnancy. Inadvertent immunization with MMR vaccine is not a reason for pregnancy termination. In some situations, potential benefits of MMR vaccination may outweigh risks such as during measles or rubella outbreaks, in which case vaccination may be considered based on recommendations from public health officials.
Women who are breastfeeding can be vaccinated with MMR vaccine.
Susceptible residents of long-term care facilities should receive measles, mumps and rubella-containing vaccine, as well as all routine immunizations appropriate for their age and risk factors. Refer to Immunization of Patients in Health Care Institutions in Part 3 for additional general information.
In general, immunocompromised persons should not receive live vaccines because of the risk of disease caused by the vaccine strains. When considering immunization of an immunocompromised person with a live vaccine, approval from the individual's attending physician should be obtained before vaccination. For complex cases, referral to a physician with expertise in immunization or immunodeficiency is advised. Refer to Immunocompromised persons in Measles Vaccine in Part 4 for additional information.
Susceptible household contacts of immunocompromised people should receive a rubella-containing vaccine as appropriate for age and risk factors.
People with conditions such as autism spectrum disorders or demyelinating disorders (including multiple sclerosis) should receive all routinely recommended immunizations, including MMR or MMRV vaccine. Refer to Immunization of Persons with Chronic Diseases in Part 3 for additional general information.
Protection against rubella is important for people planning travel to rubella-endemic areas. Susceptible travellers should receive one dose of rubella-containing vaccine. Refer to rubella incidence rates in WHO member countries for additional information.
Refer to Immunization of Travellers in Part 3 for additional general information.
Health care providers who see persons newly arrived in Canada should review the immunization status and update immunization for these individuals. In many countries outside of Canada, mumps and rubella vaccines are in limited use and measles vaccine alone is given. A Canadian study showed that more than one-third of new immigrants and refugees, particularly women, were susceptible to measles, mumps, or rubella.
Unless known to be immune to rubella because of prior serology or documentation of a dose of rubella-containing vaccine, rubella-containing vaccine should be given to persons new to Canada; pre-immunization serology is not needed, as it adds cost, and immunizing a woman who is already immune is not harmful. Non-pregnant foreign-born women of childbearing age from countries where rubella-containing vaccine is not in use should be vaccinated as soon as possible on arrival in Canada. Susceptible women who are pregnant should receive MMR vaccine after delivery. Refer to Immunization of Persons New to Canada in Part 3 for additional general information.
It is recommended that all health care workers be immune to rubella. Health care workers who do not have documented evidence of receiving one dose of rubella-containing vaccine on or after their first birthday, or laboratory evidence of immunity, or a history of laboratory confirmed rubella disease should receive one dose of MMR vaccine. Susceptible people who work with children (e.g. child care workers, teachers, health care workers) are priorities for rubella immunization, especially non-pregnant susceptible women of childbearing age. Refer to Immunization of Workers in Part 3 for additional general information.
Post-exposure MMR vaccination does not prevent or alter the clinical severity of rubella after exposure; however, it should be given to susceptible individuals because exposure may not result in infection, and the MMR vaccine will induce protection against subsequent exposures. There is no evidence of increased risk of adverse reactions from immunization with MMR vaccine if an individual is already immune to one or more components of the vaccine or infected by rubella virus.
Passive immunization with human immune globulin (Ig) is not effective in preventing rubella. Ig given soon after exposure to rubella may modify or suppress symptoms but may not prevent infection, including congenital infection. Therefore, the routine use of Ig in susceptible women exposed to rubella early in pregnancy is not indicated.
During rubella outbreaks, susceptible people should be given MMR vaccine promptly without prior serologic testing. In consultation with public health officials, it may be appropriate to vaccinate pregnant women.
Each dose is 0.5 ml.
Route of administration
MMR vaccine should be administered subcutaneously; MMRV can be administered subcutaneously or intramuscularly. Refer to Vaccine Administration Practices in Part 1 for additional information.
Children (12 months to 12 years of age)
For routine immunization of children aged 12 months to 12 years, one dose of rubella-containing vaccine (MMR or MMRV) should be administered at 12 to 15 months of age.
Adolescents (13 to 17 years of age)
Rubella-susceptible adolescents should receive one dose of MMR vaccine.
Adults (18 years of age and older)
Rubella-susceptible adults should receive one dose of MMR vaccine.
Re-immunization with rubella-containing vaccine after documented receipt of one dose of rubella-containing vaccine is not necessary. However, if a booster dose is given, it is not harmful and may benefit individuals who did not respond to primary immunization.
Serologic testing is not routinely recommended before or after receiving rubella-containing vaccine.
Pregnant women without documented evidence of prior immunization with a rubella-containing vaccine should be serologically screened for rubella antibodies. Those found to be susceptible should be vaccinated with one dose of MMR vaccine in the immediate post-partum period, before discharge from hospital (unless they have received Rh immune globulin [RhIg] - refer to Rh immune globulin and MMR vaccine in Immunization in Pregnancy and Breastfeeding in Part 3). Women who have been appropriately immunized post-partum do not need to be serologically screened for rubella antibodies either post-immunization or in subsequent pregnancies. Women who have been found to be serologically positive in one pregnancy do not need to be screened again in subsequent pregnancies.
M-M-R® II: This vaccine should be held at +10°C or colder during shipment. Freezing during shipment will not affect potency of the vaccine. The vaccine should be protected from light. Before reconstitution, the vial of vaccine should be stored at +2°C to +8°C or colder. The diluent may be stored in the refrigerator or at room temperature and must not be frozen.
PRIORIX®: This vaccine should be stored in a refrigerator at +2°C to +8°C and protected from light. The diluent may be stored separately at room temperature.
PRIORIX-TETRA®: This vaccine and diluent should be stored in a refrigerator at +2°C to +8°C and care should be taken not to freeze them. The vaccine should be protected from light.
Refer to Storage and Handling of Immunizing Agents in Part 1 for additional general information.
Live vaccines given by the parenteral route may be administered concomitantly with all other vaccines during the same visit using different injection sites and separate needles and syringes. In general, if two live parenteral vaccines are not administered concomitantly, there should be a period of at least 4 weeks before the second live parenteral vaccine is given. Exceptions are varicella-containing vaccines, such as MMRV vaccine. Refer to Varicella Vaccine chapter in Part 4 for additional information.
Oral and intranasal vaccines can be given at the same time as, or any time before or after any other live vaccine, regardless of the route of administration of the other live vaccine.
Refer to Timing of Vaccine Administration in Part 1 for additional general information.
Refer to Vaccine Safety in Part 2 for additional general information.
Adverse events following MMR immunization occur much less frequently and are significantly less severe than those associated with natural disease. Adverse reactions are less frequent after the second dose of vaccine and tend to occur only in those not protected by the first dose. Six to 23 days after MMR immunization, approximately 5% of immunized children experience malaise and fever (with or without rash) lasting up to 3 days. Parotitis, rash, lymphadenopathy, and joint symptoms also occur occasionally after MMR immunization.
Pain and redness at the injection site, low-grade fever or both occur in 10% or more of vaccinees. Rash, including measles-like, rubella-like and varicella-like rash, as well as swelling at the injection site and moderate fever (greater than 39°C) occur in 1% to less than 10% of vaccinees. As varicella-like rashes that occur within the first two weeks after immunization may be caused by wild-type virus, health care providers should obtain specimens using viral transport media from a lesion of the vaccinee to ensure that varicella disease is not confused with a reaction to vaccination.
Acute transient arthritis or arthralgia may occur 1 to 3 weeks after immunization with rubella-containing vaccine, lasts for about 1 to 3 weeks, and rarely recurs. This is more common in post-pubertal females, among whom arthralgia develops in 25% and arthritis in 10% after immunization with rubella-containing vaccine. There is no evidence of increased risk of new onset, chronic arthropathies or neurologic conditions.
Serious adverse events are rare following immunization and, in most cases, data are insufficient to determine a causal association. As with other vaccines, anaphylaxis following vaccination with MMR or MMRV vaccine may occur but is very rare.
Immune Thrombocytopenic Purpura (ITP)
Rarely, ITP occurs within 6 weeks after immunization with MMR or MMRV vaccine. In most children, post-immunization thrombocytopenia resolves within three months without serious complications. In individuals who experienced ITP with the first dose of MMR or MMRV vaccine, serologic status may be evaluated to determine whether an additional dose of vaccine is needed. The potential risk to benefit ratio should be carefully evaluated before considering administration of the second dose in such cases.
Encephalitis has been reported in association with administration of measles vaccine in approximately 1 per million doses distributed in North America, which is much lower than that observed with natural measles disease (1 per 1,000 cases).
Recent studies have found a higher risk of febrile seizures with the first dose of an MMRV vaccine (ProQuad®, not authorized for use in Canada) when compared to the concomitant administration of MMR and univalent varicella vaccines. Data from the United States (US) estimated that the risk of febrile seizures in the 5 to 12 days following the first dose of this MMRV vaccine is 1 for every 2,600 vaccinated children aged 12 to 23 months. Experience with the MMRV vaccine available in Canada is more limited; however, one study showed an additional risk of febrile seizures with MMRV vaccine compared to MMR and univalent varicella vaccines given as two separate products administered concomitantly. The risk with the vaccine used in Canada was smaller than the risk found with the US product. Close surveillance and further investigation are underway.
In the mid to late 1990s, researchers from the United Kingdom reported an association between MMR vaccine and inflammatory bowel disease, and MMR vaccine and autism. Rigorous scientific studies and reviews of the evidence have been done worldwide, and there is now considerable evidence proving that these claims were false. In 2010, the original study suggesting a link between the MMR vaccine and autism was retracted.
Vaccine providers are asked to report the following AEFI in particular, through local public health officials:
Refer to Reporting Adverse Events Following Immunization (AEFI) in Canada and Vaccine Safety in Part 2 for additional information about AEFI reporting.
MMR and MMRV vaccines are contraindicated in persons with a history of anaphylaxis after previous administration of the vaccine and in persons with proven immediate or anaphylactic hypersensitivity to any component of the vaccine (with the exception of egg allergy [refer below]) or its container. Refer to Contents of Immunizing Agents Available for Use in Canada in Part 1 for a list of vaccines available for use in Canada and their contents. For rubella-containing vaccines, potential allergens include:
In situations of suspected hypersensitivity or non-anaphylactic allergy to vaccine components, investigation is indicated, which may involve immunization in a controlled setting. Consultation with an allergist is advised.
The measles and mumps components of MMR and MMRV vaccines are produced in chick embryo cell culture and may contain traces of residual egg and chicken protein. The trace amount of egg and chicken protein in the vaccine appears to be insufficient to cause an allergic reaction in egg-allergic individuals. Skin testing is not recommended prior to vaccination as it does not predict reaction to the vaccine. MMR or MMRV vaccine can be administered in the routine manner to people who have a history of anaphylactic hypersensitivity to hens' eggs. Prior egg ingestion is not a prerequisite for immunization with egg protein-containing vaccine. For all vaccines, immunization should always be performed by personnel with the capability and facilities to manage adverse events post-vaccination. Refer to Anaphylactic Hypersensitivity to Egg and Egg-Related Antigens in Part 2 for additional information.
Children with a known or suspected family history of congenital or hereditary immunodeficiency that is a contraindication to vaccination with live vaccine should not receive live vaccines unless their immune competence has been established.
MMRV vaccine is contraindicated in persons with impaired immune function, including primary or secondary immunodeficiency disorders. Refer to Immunocompromised persons. MMR and MMRV vaccines are generally contraindicated during pregnancy but not while breastfeeding. Refer to Pregnancy and breastfeeding.
MMR vaccine is contraindicated in individuals with active, untreated tuberculosis. While tuberculosis may be exacerbated by natural measles infection there is no evidence that MMR vaccine has such an effect. Nonetheless, anti-tuberculous therapy for active TB disease is advisable before administering MMR vaccine and it may be prudent to avoid this vaccine in those with active TB disease until treatment is underway. Consultation with an expert in infectious diseases is recommended.
A history of febrile seizures or a family history of convulsions is not a contraindication to the use of MMRV vaccine.
Administration of MMR or MMRV vaccine should be postponed in persons with a severe acute illness. Persons with a minor or moderate acute illness (with or without fever) may be vaccinated.
It is recommended to avoid the use of salicylates (e.g., acetylsalicylic acid [ASA]) for 6 weeks after immunization with MMRV vaccine because of an association between wild-type varicella, salicylate therapy and Reye's syndrome.
Refer to Contraindications, Precautions and Concerns in Part 2 for additional general information.
Systemic antiviral therapy (such as acyclovir, valacyclovir, famciclovir) should be avoided in the peri-immunization period, as it may reduce the efficacy of varicella-containing vaccines such as MMRV. On the basis of expert opinion, it is recommended that people taking long-term antiviral therapy should discontinue these drugs, if possible, at least 24 hours before administration of MMRV vaccine, and should not restart antiviral therapy until 14 days after vaccination.
The measles component in measles-containing vaccines can temporarily suppress tuberculin reactivity, resulting in false-negative results. If tuberculin skin testing or an Interferon Gamma Release Assay (IGRA) test is required, it should be done on the same day as immunization or delayed for at least 4 weeks after measles vaccination. Vaccination with measles-containing vaccine may take place at any time after tuberculin skin testing has been administered.
Passive immunization with human immune globulin (Ig) or receipt of most other blood products can interfere with the immune response to live vaccines. Refer to Blood Products, Human Immune Globulin and Timing of Immunization in Part 1 for additional information for recommended intervals between the administration of immune globulin (Ig) preparations or other blood products and MMR and MMRV vaccines.
On the basis of expert opinion, the MMR vaccines authorized in Canada may be used interchangeably. Refer to Principles of Vaccine Interchangeability in Part 1 for additional general information.
Centers for Disease Control and Prevention. Progress Toward Elimination of Rubella and Congenital Rubella Syndrome-the Americas, 2003-2008. MMWR Morb Mortal Wkly Rep 2008;57(43):1176-9. | <urn:uuid:a5fa29df-4e13-4d56-90f1-f957dd8d9aac> | CC-MAIN-2016-26 | http://www.phac-aspc.gc.ca/publicat/cig-gci/p04-rube-eng.php | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783408840.13/warc/CC-MAIN-20160624155008-00120-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.924508 | 5,193 | 3.578125 | 4 |
Until recently, Argentina did not produce minerals in significant volume, but the future of the mining industry appeared secure. The mountainous northwest, especially Jujuy Province, was rich in many minerals. Privatization laws were introduced in 1989 to give investors greater guarantees, reduce bureaucratic delays, and attract foreign capital. The government continues to offer fiscal incentives and provide assistance to the small and medium mining producers at the center of the country's mineral production.
In 2001, nonfuel minerals accounted for less than 0.5% of GDP, a decrease from 2000. The value of mineral exports in 2001 was $754.2 million, up 6.1% from 2000; in 1991, the figure was $67 million. Ninety percent of that value was from metals. Copper concentrate and associated minerals accounted for $399 million. Gold and silver alloys followed, at $99.3 million, and nonalloyed aluminum accounted for $99 million. The total value of mineral imports decreased by 13.5%, to $483.4 million. Investment in the nonfuel mineral mining sector in 2001 was $267 million, down from $371 million in 2000 and $818 million in 1996.
Preliminary data indicate that mineral production totaled $1.03 billion in 2001, a sevenfold increase from 1997, despite the country's economic crisis. Metal production, including lithium, accounted for 57.5% of mineral production. Copper and gold concentrates contributed 76.3% of the value of metal production. Argentina was the third largest Latin American producer of aluminum in 2001 (255,000 tons of primary aluminum); one of six Latin American producers of mine lead and zinc, ranking second to Mexico in lead; and the fourth-largest producer of silver in Latin America.
Copper production increased by more than 30% in 2001, to 191,566 tons, up from 30,421 tons in 1997. The sole producer was Minera Alumbrera, operating from the Bajo de la Alumbrera open pit mine, in Catamarca Province, since 1998. The increase resulted from higher ore grade and recovery rates. For the El Pachón copper project, in San Juan Province, a feasibility study estimated reserves at 880 million tons and planned production of 250,000 tons per year.
Gold production, mostly from the Bajo de la Alumbrera and the Cerro Vanguardia mines, increased by 18%, to 30,630 kg, down from 38,515 in 1999 and up from 2,289 in 1997. The Cerro Vanguardia mine, in Santa Cruz Province, produced about 9,100 kg of gold and 65,600 kg of silver in 2001. The country's total silver mine output for 2001 was 152,802 kg, up from 78,271 kg in 2000 and 35,768 kg in 1998.
The high-grade underground Martha mine, in Santa Cruz, began producing gold and silver, and yielded an initial 175-ton shipment abroad in 2001 (17,600 grams per ton silver and 27 grams per ton gold), with plans to produce 101,000 kg of silver equivalent for the year. The Veladero gold project, in San Juan, increased production plans to 19,300 kg per year; its probable reserves had been increased to almost 250,000 kg of gold and 4.05 million kg of silver. Other projects, Esquel, in Chubut Province, and Diablillos, in Salta Province, had, respectively, estimated resources of 130,000 kg and 25,300 kg of gold and 240,000 and 2.9 million kg of silver.
In 2001, 39,703 tons of zinc was mined, an increase of 14%, and 12,334 tons of lead was produced, a decrease of 12.6%. The yields, by the country's only producer of zinc and lead, Cía. Minera Aguilar, came entirely from Mina Aguilar, in production for 70 years.
Exploration began in 2001 on the Tecka platinum and palladium property, in Chubut Province, one of the largest unexplored layered ultramafic intrusive complexes in the world, leading to the discovery of two sulfide zones.
The atomic energy commission in 2001 sought authorization to solicit bids for the development of the Cerro Solo uranium deposit, whose estimated recoverable resources were 4,600 tons with an average content of 0.3% to 0.5% uranium.
Plans made in 2001 for the Río Colorado potash mine, in Mendoza and Neuquén provinces, included initial production of 500,000 tons of potassium chloride for five years and 1 million tons thereafter.
In 2001, Argentina produced 500,000 tons of crude boron materials, ranking third in the world, after the US and Turkey; the 1999 and 2000 totals were 245,450 and 580,000 tons, respectively. Output of iron ore was 239,400 tons in 1970, 1.04 million tons in 1988, 259,000 tons in 1991, and 4,000 tons in 1994. In 1997, Argentina imported 1.8 million tons of iron ore and concentrates. Among other industrial minerals, output in 2001 for limestone was 13 million tons; dolomite, 760,000 tons; crushed quartzite, 400,000 tons; crushed quartz, 95,000 tons; talc, 10,000 tons; bentonite, 120,000 tons; diatomite, 35,000 tons; feldspar, 61,000 tons; crude gypsum, 500,000 tons; kaolin, 50,000 tons; and salt, 1 million tons. The country also produced marble, clays, celestite, sodium carbonate, asbestos, barite, and vermiculite. Asphaltite, fluorspar, mica, manganese, and antimony are found mainly in the northwest. There are also deposits of lithium, beryllium, and columbium. | <urn:uuid:60e1fe9d-873f-4d12-9617-9b92f3a056a3> | CC-MAIN-2016-26 | http://www.nationsencyclopedia.com/Americas/Argentina-MINING.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397562.76/warc/CC-MAIN-20160624154957-00131-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.926874 | 1,220 | 3.203125 | 3 |
Ever wonder how Velcro works? Find out more on today's Moment of Science.
Paper is an indispensable part of modern life. In fact, each American uses about 600 pounds of paper every year. Where does all this paper come from?
If your hair drier shrunk your head to the size of a grapefruit, you’d be astounded. If your wool pants shrank in the drier, you’d be angry but not amazed. How can clothing disobey the laws of common sense, and shrink in the laundry? Learn more on this Moment of Science. | <urn:uuid:96cde03f-e6c7-46e3-9858-4e762e55bed2> | CC-MAIN-2016-26 | http://indianapublicmedia.org/amomentofscience/tag/fabric/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397565.80/warc/CC-MAIN-20160624154957-00143-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.931516 | 120 | 3.296875 | 3 |
I got this comment and thought it needed to be a separate post:
Tamar Asks: 32 pages? Not clear
Thank you for your posts. They have been very helpful.
It isn’t clear to me from your posting of a 32 page book being standard, how many pages of text that translates into for the author.
Or does it just mean that an author should aim for a book that she tells the publisher is going to be 32 pages, and they break it up. Or does she break it up herself.
Thank you and looking forward to your response.
Darcy Answers: 32 pages of finished book, 5 (or less) pages of manuscript text
Thanks for the question!
Book. There will be short segments of text on each page (or double-page spread).
Manuscript. If you translate that to standard manuscript pages, that’s 5 pages or less of text. When you send in the mss to a publisher, you send it in standard manuscript format (5 pages or so). When the text is laid out for the picture book, the editor, art director and illustrator will divide it into the segments that go on each page.
You will see me suggest that you divide your text into 14-28 segments, anticipating how it will be laid out in a picture book. That is strictly to help the author revise and polish the text. When you send in the mss, it should be in standard mss format.
Find this Helpful? Read the Complete Series as an Ebook:
How to Write a Children’s Picture Book Available Now!
The 30 Days to a Stronger Picture Book series has been collected into a Fiction Notes Ebook. | <urn:uuid:772373c7-3ca4-4e80-8105-e4e5186ae9c1> | CC-MAIN-2016-26 | http://www.darcypattison.com/picture-books/picture-books-those-confusing-32-pages/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399425.79/warc/CC-MAIN-20160624154959-00091-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.921257 | 353 | 2.84375 | 3 |
About Indian festivals
The festivals we celebrate in India, showcasing the richness of our cultural heritage, have become a part and parcel of our lifestyle. Almost throughout the year, numerous festivals rooted in religion, culture, and tradition are observed, and they have long fascinated Western visitors who come to India to be part of the celebrations. Indian festivals are generally celebrated through unique rituals that make the festivities alluring and colorful. Region-specific customs add to the carnival fervor: Navratri and Pongal in Tamil Nadu, Onam in Kerala, the Dasara festivities in Mysore, Durga Puja in West Bengal, and so on. Some festivities welcome the seasons of the year, the rains, the harvest, the rivers, the full moon or new moon, or the start of the New Year, while others mark the birthdays of deities, such as Krishna Jayanthi, Vinayaka Chathurthi, and Christmas. Most of these festivals are essentially common in nature, though they are sometimes differentiated by names, rituals, and styles of celebration in different regions of the country. Festivities include offering poojas at home or prayers in temples, exchanging goodwill, seeking blessings, decorating homes, wearing new dresses, dancing, singing, feasting, hosting, and so on.
Among the Indian festivals that reverberate with the zest of people across the country, Navaratri is one celebrated with particularly great zeal and devotion. Navaratri, the festival of nine nights, manifests the supremacy of divinity in the form of Shakti (the supreme power) over evil.
Significance/history of Navaratri, the festival of nine nights
Festivals may sometimes seem to be simply a break from routine, but the fact is that, beyond fun and enjoyment, each festival has its own rationale and significance behind its festivity. Navratri, a festival celebrated over a period of nine nights almost all across the country for its religious, cultural and social significance, is dedicated to the adoration of the Divine Goddess in various forms and under various names by different people. Despite these differences, the virtue of humanness is the one common attribute of the celebration, wherever it may be held.
Similar to other religious festivals, Navratri has a spiritual theme for its devotees. The festival of nine nights endorses the triumph of righteousness over evil and inspires humankind to shake off the dormancy of ignorance and negativity by inculcating positive virtues and by sanctifying the inner mind. Salvation, the most eminent goal of human living, attained by overcoming all temporal bindings, can be accomplished with spiritual knowledge alone. This is indeed the centerpiece of the whole of the Vedas, the ancient Hindu scriptures.
Navratri, primarily a Hindu festival, is generally celebrated at the end of September or the onset of October, based upon the Hindu lunar calendar. The Navratri festival represents devotion to the Goddess Durga/Shakti/Devi. In Sanskrit, the term Navratri literally denotes nine nights (nava meaning nine and ratri meaning nights). The Divine Shakti is worshipped throughout these nine nights, and the tenth day, celebrated as Vijayadasami, is added to the festival of nine nights to commemorate the victory of the Divine Shakti over the wicked demons. The Navratri celebrated in this period between September and October is also known as Sharada (Sharad) Navratri or Maha (the Great) Navratri, Sharad denoting autumn.
Other than the above-mentioned Navratri, there are three other notable kinds of Navratri: Vasantha Navratri, also known as Chaitra Navratri or Raama Navratri, in the spring season between March and April; Gupta Navratri, mentioned variably as Magha, Ashadha, Gayatri or Shakambhari Navratri, during the month of Ashadha, between June and July; and Paush Navratri, in the period of December to January. All these observances are similar in nature, with nine days devoted to nine forms of Shakti.
Traditional myths of Navratri festival
Though there are various myths related to this festivity, the traditional theme is the triumph of goodness over evil. According to the puranas, when Indra, the king of the Devas, was defeated and tormented by the powerful Asura called Mahishasura, the divine trinity of Brahma, Vishnu and Eshwar consecrated a superpower of Shakti by combining their energies and requested the Goddess Durga to help them kill the demon. Thus Mahishasura (the evil) was destroyed by the Goddess Shakti (Durga), and the tenth day, on which Mahishasura was killed, has been celebrated as Dasara or Vijayadashami, symbolizing the victory of good virtues over evil. Dasara is one of the important Hindu festivals observed with great enthusiasm across the country.
Added to this, another interesting tale endorsing the celebration narrates the destruction of Ravana by Lord Rama, likewise an ultimate win over malefic forces, which also took place on the Vijayadasami day. Giant effigies of Ravana, Kumbhkarna and Meghnadh (brother and son of Ravana respectively) are made and burnt in a public place, signifying that devotees should destroy their own evil thoughts and follow the virtues of goodness and honesty. At the Ramleela grounds in Delhi on the Vijayadasami day, these effigies, packed with explosive substances, are set alight with an arrow, illustrating the triumph over immorality.
Divine Goddesses for nine days of Navratri
Each day of Navaratri is devoted to different Goddesses and we specially worship them on that day.
1. Day one is devoted to Shailputri, daughter of the Himalayas, another form of Shakti, wife of Lord Shiva.
2. Day two is addressed to Brahmacharini, derived from 'Brahma', a form of Durga/Shakti that indicates 'Tapa' or 'penance'.
3. Day three is dedicated to deity Chandraghanta, symbolizing beauty and courage.
4. Day four is specially meant to dedicate to Kushmandas, the Creator of the Universe.
5. Day five is for Skand Mata, the mother of the chief warrior of the Gods, Skanda.
6. Day six is for Divine Katyayani having three eyes and four hands.
7. Day seven is for poojas to the Goddess 'Kalratri' instilling fearlessness in devotees.
8. Day eight is meant for Mata Rani or Maha Gauri denoting a feeling of calm and wisdom.
9. Day nine is for Siddhidatri who has eight siddhis and is adored by the yogis and rishis.
10. Day ten is Vijayadasami dedicated to Saraswati, Goddess of wisdom or knowledge.
Rituals in Navratri festival
The Navratri celebration is divided into three parts of three days each: the first part for Durga, the Goddess of power or Shakti; the second for Lakshmi, the Goddess of wealth and prosperity; and the final part for Saraswati, the Goddess of knowledge and wisdom, with the tenth day observed as Vijayadasami. Throughout the nine days of Navratri, devotees observe fasting according to their customs and region, some taking a single meal in the morning and nothing else for the rest of the day. They also offer bhajans, stotrams and prayers for prosperous and healthy living. In addition, Navratri, besides being a time for spiritual introspection and purification, is believed to be auspicious for commencing new ventures on the tenth day, the Vijayadasami day. In Maharashtra and in some North Indian states, on the first day of Navratri a pot called ghata sthapana or kalasa sthapana is set up in a holy place at home (normally the pooja room) with a lamp lit inside the pot for nine days. The pot represents the universe or earth, and the continuously lit lamp serves as the medium for worshipping the Goddess Shakti or Shree Durgadevi.
Navaratri celebrations in different parts of India
Throughout India, the Navratri festival is celebrated in a vibrant, festive mood for ten days, in which nine nights are devoted to worshipping the Goddess and the tenth day to celebrations such as beginning something afresh, buying precious things, or starting a small child's initial schooling. However, Navaratri celebrations in India differ based on the tradition and culture of each region.
Navratri in North India
The Chaitra Navratri and the Sharad Navratri are followed in north India with the utmost fervor, with fasting observed through all nine days and the deity Shakti revered in various forms. The Chaitra Navratri ends on the ninth day, Ram Navami, and the Sharad Navratri concludes on the tenth day with the celebrations of Durga Puja or Dussehra. In the Kulu valley of Himachal Pradesh, the Dussehra of Kulu in particular is famous across the northern region for the grand ceremony of the hill tribes. In this region, Goddess Durga is worshipped with poojas offered with sugarcane stalks on the ninth day of Navratri. Kanya puja is a form of worship on the ninth day in some parts of north India such as Punjab, Haryana, Uttar Pradesh and Bihar. In Kanya puja, nine little girls, symbolizing the nine forms of the Goddess Shakti, are given a feast and honored with poojas: their feet are washed, tilak is placed on their foreheads, and they are presented with gifts. People of Punjab celebrate Navratri by fasting for seven days, and Navratri in Punjab is mainly connected with the harvest and nature. Ending the fast on Ashtami, the eighth day, they perform pooja worshipping young girls, who they believe are like the Goddess, and these girls too are offered a feast and gifts.
Navratri in the West
Navratri, the festival of nine nights, is in the western region a grand carnival held in the states of Gujarat and Maharashtra, with the well-known traditional dances of Garba and Dandiya, comparable in fame to celebrations such as the Dasara festivities in Mysore and Durga Puja in West Bengal. In Gujarat, the state government itself has initiated "Navratri Festival Celebrations" running for nine continuous days across the state. Participation of people from all over the state, and sometimes from abroad, is the highlight of the nine-day celebration. It has become a grand occasion of revelry throughout India and among the global Indian diaspora, mostly in the UK and the USA. The traditional Garba and Dandiya dances are held almost everywhere in the state. People in colorful clothes, striking beautifully decorated sticks rhythmically to devotional songs and dancing in a circle, with unique steps, around a light that stands for the eternal light of the Goddess Durga, make a real visual treat.
Worship in the State of Maharashtra varies somewhat. Navratri is devoted to the Goddess Durga, with the last day, Vijayadashami, dedicated to the Goddess Saraswati, the Divine of Knowledge or Wisdom. Maharashtrians believe Vijayadashami to be an auspicious day to begin learning or education, to acquire new homes, properties or precious things such as gold and silver, and to start a new business or investment.
Navratri in Goa starts with the zatra, and the whole Antruz Mahal in Ponda of Goa is lavishly ornamented with flowers during this festival. The Saraswat Brahmin temples are awesomely decorated, and the deities, adorned with flowers, sandalwood paste, turmeric and kumkum paste, are brought out for the devotees to have non-stop special darshan and to receive Kaul Prasad (flowers), which is considered divine, coming from the Gods and Goddesses. When the festivity concludes at night, the Prasad of flowers is distributed to the worshippers. The idols of the Dasha Maitrikas (the ten sisters of Goa) of the Saraswat Brahmins, namely Shantadurga, Aryadurga, Mahalasa, Katyayani, Mahamaya, Kamakshi, Vijayadurga, Bhumika, Mahalakshmi and Navadurga, are brought out for worship during the Navratri celebrations.
Navratri celebrations in the East
Navaratri in India, especially in West Bengal in the east, is a festive time across the entire state and a unifying occasion: a religious event, cultural gathering, season of music, food carnival and literary expo, all in one. Wherever you go you will see little besides puja pandals, food stalls, and multitudes of Bengalis out on the road until the eleventh hour. Sharad Navaratri, known here as Durga Puja, is the most important festival in the state and is celebrated in devotion to the Goddess Durga with puja rituals everywhere. Large clay idols of the Goddess, exquisitely designed and adorned, portraying the destruction of the demon Mahishasura, are worshipped by huge gatherings of devotees in all places. Enormous pandals are erected for worshipping Goddess Durga by people dressed in new clothes. The last four days of Sharad Navratri are the most important in West Bengal, and after these days of celebration the idols of the Goddess are immersed in the Hugli River on Vijayadashami with great devotion.
Navratri in South India
Navaratri, the festival of nine nights, is celebrated in the southern state of Tamil Nadu in devotion to the Goddesses Durga, Lakshmi and Saraswati. Setting up the Navaratri Kolu is very popular in Tamil culture: dolls of deities, and dolls depicting themes or stories from ancient history or from nature, are arranged on decorated steps along with paper flowers, strings of lights or miniature scenes of hills, temples, parks, farmland and so on. In the Tamil month of Purattasi, preparations for Durga Puja begin on Mahalaya Amavasya (the new moon day), an auspicious occasion observed seven days before the Durga Puja that heralds the arrival of the Goddess Durga, the Shakti or supreme power.
The first three days are for Durga, trusted to be the slayer of our miseries and flaws; the next three for Lakshmi, the Goddess of prosperity and wealth; and the final three for Saraswati, who blesses with wisdom and knowledge. These three forms of the Goddess are worshipped with great devotion and splendor, and the Saraswathi Puja/Ayudha Puja holds special significance for devotees. A kalash representing the arrival of the Goddesses is set up at the center of the Kolu steps and honored with bhajans, the Mahishasuramardini stotram and the Abirami Anthathi in the morning and evening on all days of the Navratri festival. Most devotees observe full or partial fasting at this time. The dolls arranged in the Kolu are traditional collections handed down from one generation to the next, and a few new dolls may be added to the setup each year. Decoration with rangoli or kolam in colorful patterns is an integral part of Hindu auspicious occasions in South India, and the Navratri festival is no exception; lately, theme-based Kolus have become the most attractive feature of the season. Sundal, a delicious snack, is prepared for the puja on all the days of the festival in Tamil Nadu and also in Kerala. The dolls arranged in the Kolu are believed to represent the warriors of the Goddess Durga before her destruction of Mahishasura to save the whole universe (including the Devas and the Trinity of Gods: Brahma, Vishnu and Shiva). Ayudha Puja/Saraswathi Puja is performed on the ninth day of Navratri, when people perform puja for their shops, industries, schools, offices and vehicles, and at home for household articles, books and stationery related to education, praying to be healthy, knowledgeable and prosperous.
The books and other articles kept for puja are removed from the puja place only on the next day, which is considered propitious for small children to be initiated into reading and writing, an observance known as Vidyarambham, similar to the celebration in Kerala.
A Bommai Kolu closely resembling the one in Tamil Nadu is set up in Karnataka and Andhra Pradesh; nine young girls, or 'Kanyas', are presented with sweets, thambulam and new dresses, and married women exchange thambulams, sweets and snacks among themselves. A nine-day festival known as Bathukamma, coinciding with Navratri, is popular in the Telangana region of Andhra Pradesh.
Keralites observe a form of puja on the last three days of Sharad Navaratri, Ashtami, Navami and Vijayadasami, collectively as Saraswathi Puja, in which books are venerated. On Ashtami, a puja popularly known as 'Puja Vaipu' is performed for books, as symbols of Saraswathi, placed in the Kolu in houses, traditional schools and temples; they are ceremonially taken out again for reading and writing in an observance called 'Puja Eduppu'. Little children are initiated into reading and writing on the tenth day, Vijayadasami: they are taught to trace their first letters in sand or rice, an observance for the enlightenment of children known as 'Ezhuthinu Iruthu' or 'Vidyarambham', and children traditionally begin schooling only after this occasion. The ninth day is called Mahanavami. Certain groups in Kerala celebrate the festival on all days of the Navaratri period. An exhibition called 'Koluvaipu', with idols of deities, animals and a plethora of dolls, is organized in temples or public places for the grand festivity.
The finale of Navaratri is observed on the ninth day, Mahanavami, the last day of festivity. In West Bengal especially, people end the festival with mixed feelings, at the peak of joy yet mournful and tearful, left with no option but to look forward to the next year's celebration and feeling that the festive mood is not yet over. In Kerala, Mahanavami is regarded as a day of rest: no new venture or business is begun, and children are not permitted even to read on this day. Instead of Navami, people in some regions of Haryana, Kashmir, Maharashtra and UP celebrate the final day on Ashtami, since it is the birthday of Parvathi, another form of Durga.
Festivity of Dasara in Mysore
Dasara in Mysore, observed as the State Festival of Karnataka, is a grandiose Navaratri festivity presided over by the royal family of Mysore and symbolizing the triumph of righteousness over evil. Special pujas are organized on this grand occasion by the royal family. Mythology holds that on the Vijayadasami day, Goddess Chamundeeswari, a form of Durga, ended the life of Mahishasura after nine days of tapas; from his name the city takes its title of Mysore, and Dasara is a ten-day carnival in the region, Vijayadasami being the tenth day. Throughout these ten days the whole city is adorned with lights and seems a city of illumination. Cultural programs are organized in the Mysore Palace along with lavish sports, wrestling, food and film fetes that attract huge gatherings, including foreigners. In 2010 this traditional festival marked its 400th anniversary.
Though the Dasara celebrations date back to the 15th century, when the region was ruled by the Vijayanagar dynasty, the festivity was first officiated by the Wodeyar kings in the 16th century at Srirangapatna. With the Mysore Palace illuminated throughout the ten-day Dasara jamboree, celebrations commence with the royal family conducting a special puja to the Goddess Chamundeshwari at the Chamundi Temple on the Chamundi Hills in Mysore, accompanied by a traditional special Durbar (royal assembly) in the palace in which royal family members, special guests, officials and the public participate; this custom is still continued by the descendants of the royal family, who hold a Durbar personally during Dasara. The custom of revering the royal sword in a procession of elephants, camels and horses, once the mounts of the kings, is still observed on Mahanavami, the ninth day of Dasara in Mysore.
The main event of the festivities is the traditional Dasara parade through the streets of the city, referred to locally as the Jumbo Savari, on the day of Vijayadasami; its highlight is the procession of the idol of the deity Chamundeshwari, carried in a decorated golden mantapa atop the royal elephant. Colorful tableaux, dance and music troupes, and beautifully adorned elephants, camels and horses make up the procession, which starts at the Mysore Palace and ends at the Bannimantap, where the banni tree (botanically Prosopis spicigera, of the family Fabaceae) is worshipped; the kings conventionally worshipped the banni tree before setting out for any battle, to be victorious. With a torch-light parade (Panjina Kavayatthu) held at the Bannimantap, the festivities come to a close on the night of the Vijayadasami day.
An exhibition held on the grounds opposite the Palace is another major entertainment of Dasara in Mysore. The Mysore Maharaja first instituted this exhibition in 1880 to introduce the public to new developments in the state; at present the Karnataka government organizes it under the Karnataka Exhibition Authority. The exhibition begins with Dasara and extends until December, with a variety of booths selling clothes, kitchenware, cosmetics, other accessories and eatables; amusement-park attractions such as a giant Ferris wheel to entertain the children; and stalls from several state agencies showcasing major achievements and projects undertaken by the government. Music bands and dance groups from across India are invited to give concerts in theaters and auditoriums around the city on the ten days of Dasara, and wrestling contests drawing wrestlers from all around the country are an additional attraction. The Dasara exhibition was initially assigned to the Karnataka Exhibition Authority, later streamlined by the state's Information, Tourism and Youth Affairs departments, and recently transferred to the Kannada Culture, Information and Tourism departments.
For men, festivals are a chance to exhibit valiance and virility through contests such as the boat races of Kerala or the wrestling bouts of Mysore; for women, they explore artistic inclinations and managerial skills; and for children they are a welcome break from tedious study, with mouthwatering sweets and savories, new dresses and plenty of time to roam around with friends and cousins. Visitors from abroad are astonished by the multiplicity of Indian festivals, with their gaiety, enthusiasm, rituals and worship; above all, festivals are moments of recollection and commemoration of the great deeds of gods, goddesses, heroes, heroines and saints. These festivities symbolize unity in diversity. Beyond the celebrations themselves, festivals open windows onto the chronicles of great Indian legends, unveiling the philosophy of our country at different periods. The bottom line is that festivals connect people in friendship and love and help rejuvenate lost relationships.
Let us celebrate Navratri at its best, with the spirituality of oneness, throwing off all our negativity and keeping our minds strong enough to overcome unethical thoughts and deeds.
I saw the cover of a Tamil magazine today which depicts a lady with multiple hands holding many items in each of them. This simply shows that the lady of the home holds multiple positions. Accordingly, the festival of Navarathri is celebrated to mark the importance of women, who possess three important types of power, namely knowledge, wealth and rule: Saraswathi, Lakshmi and Durga according to the Hindu religion. On one side we all talk idly about the lesser significance given to women, which is nothing but politics, but in a real sense women are given much importance in this world. Let us celebrate Navarathri well by worshipping the Goddess Shakthi and respecting the Shakthi of the home.
It is a good article on the festival season of Navarathri, and it provides good information about this valuable subject.
@Ajaibmaliackal: It is my pleasure to express my gratitude for your regards about this article. Yes, I must thank ISC for giving me this opportunity to share my perception of this colorful and fantastic festival at this appropriate time.
@Ramachandran: Yes. I would like to note that women themselves are not aware of their power (Shakti), and this unawareness is their only drawback; sometimes this negativity brings a chasm into their lives. They would be at the top if they were aware.
Two approximate calculations give the number of atoms in the observable universe to be close to 10^80 (source: Wikipedia). When teaching, I used the hypothesized finiteness of the amount of matter in the universe as a quick rationale for the statement that all practical optimization models should be bounded. (All practical optimization models should be feasible, too, unless the purpose of the model is to show that your bosses are being unreasonable in their demands.) Unbounded models arise from a combination of two factors:
- the analyst does not bother to determine reasonable finite bounds for each decision variable; and
- constraints that should bound the problem are either omitted or formulated incorrectly.
The fancy approach involves getting the solver to cough up a ray along which the feasible region is unbounded and the objective is strictly improving. The easy way is to slap finite but ridiculously loose bounds (both upper and lower) on any variable that is not already bounded, and then solve the model again. Since the feasible region is now bounded, you will get an "optimal" solution. Since the problem was previously unbounded, at least some variables will attain those ridiculously loose bounds.
Ask yourself, in the context of the decision problem, why that cannot happen in practice: why you cannot make 50 trillion automobiles per day, or sell 50 trillion iGadgets per day, ... Think not in terms of the model, but in terms of reality. You can't do it because you don't have enough (labor, materials, capacity, capital, demand, ...). Now focus on the constraints that should reflect those limits. If they are missing, add them. If they are present, plug the solution you just got into them and figure out why they failed to prevent it. Remediate and iterate until you can remove the goofy bounds (preferably replacing them with more reasonable bounds) and get an optimal solution. | <urn:uuid:5e17a1a4-fa99-4e83-a69e-87a22e543d3d> | CC-MAIN-2016-26 | http://orinanobworld.blogspot.com/2012/03/diagnosing-unbounded-models.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783404405.88/warc/CC-MAIN-20160624155004-00113-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.946603 | 383 | 2.5625 | 3 |
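The second half of this diagnostic (finding which variables hit the goofy bounds after re-solving) is easy to automate. A minimal sketch; the solution vector and bound values below are made up purely for illustration:

```python
def variables_at_bounds(x, lower, upper, tol=1e-6):
    """Return indices of variables sitting at their (loose) bounds.

    In the diagnostic described above, any variable reported here
    points at a constraint that is missing or mis-formulated.
    """
    hits = []
    for i, (xi, lo, hi) in enumerate(zip(x, lower, upper)):
        if abs(xi - lo) <= tol or abs(xi - hi) <= tol:
            hits.append(i)
    return hits

# Hypothetical re-solve with bounds of 0 .. 50 trillion on each variable:
solution = [4.2e3, 50e12, 7.0]   # e.g. cars/day, iGadgets/day, ...
lb = [0.0, 0.0, 0.0]
ub = [50e12, 50e12, 50e12]
print(variables_at_bounds(solution, lb, ub))  # -> [1]
```

Variable 1 is "producing" 50 trillion units per day, so the constraints that should limit it (labor, materials, capacity, demand) are the ones to inspect.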
Traffic fatalities at the end of last year declined to record lows, according to the U.S. Department of Transportation. There were just 1.16 fatalities per 100 million vehicle miles traveled (VMT) - down from 1.25 fatalities per 100 million VMT just a year earlier.
The news dovetails with the formation of a new coalition of highway safety agencies and law enforcement that is calling for increased funding of state highway safety programs.
The State Highway Safety Alliance and the International Association of Chiefs of Police have set a goal of reducing the number of annual traffic deaths to below 20,000 by 2030. The State Highway Safety Alliance consists of the American Association of Motor Vehicle Administrators, the American Association of State Highway and Transportation Officials, the Commercial Vehicle Safety Alliance, the Governors Highway Safety Association, and the National Association of State Emergency Medical Services Officials.
The coalition is urging Congress to include the following in its long-term highway reauthorization bill:
- Streamlined program administration and enhanced flexibility to focus federal resources where they are most needed, such as rural roads, where at least half of all fatalities occur;
- Enhanced data collection and analysis so that problems can be identified and progress tracked; and
- Increased investment in safety research and development so that states can implement evidence-based programs.
The National Highway Traffic Safety Administration, which collects crash statistics to produce annual reports on traffic fatality trends, expects the final counts for 2009 to be possibly the lowest ever recorded, despite a slight increase in vehicle miles traveled. Final statistics will be available within the next few months.
The crested canary is a "type canary," meaning it is bred for its physical characteristics (particularly its crest) rather than its color or song. It is one of several canary varieties with a tuft of feathers around the top of its head. Others include the Gloster canary, the crested Stafford canary, the crested Norwich canary and the crested Lancashire canary. The crested canary was developed in the 1800s and is one of the oldest varieties of canaries. It comes in two varieties, crested and smooth (plain) headed, and its coloring can be yellow, white, brown, grey or green.
The crested canary, originally called the turncoat, is a social, easy-going bird that enjoys singing and being active. This bird has a broad deep body, a short stout neck, and a lot more feathers than many other types of canaries. Crested canaries are relatively hardy and do well with ample room in a cage or aviary.
The crested canary does well in either cages or aviaries. They are on the timid side and should not be housed together with parakeets, lovebirds or other parrots that tend to be more aggressive. Males should be kept separately to ensure quality singing. They like to bathe daily and should be given water to do so. Their environment should not be wet, cool or drafty, and if they are given space to sunbathe, they should also have a shaded area to protect from too much sun. Keep perches clean to avoid any foot problems. | <urn:uuid:e6e0cd23-f541-40ee-bea2-135f751a776c> | CC-MAIN-2016-26 | http://www.birdchannel.com/bird-species/profiles/crested-canary-2.aspx | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403826.29/warc/CC-MAIN-20160624155003-00200-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.969078 | 332 | 3.359375 | 3 |
Linux Exposed: Basic Hardening Linux
Jul 09, 2003, 08:30
[ Thanks to Gido for
this link. ]
"Linux is capable of high-end security; however, the
out-of-the-box configurations must be altered to meet the security
needs of most businesses with an Internet presence. This article
shows you the steps for securing a Linux system called hardening
the server using both manual methods and open source security
solutions. The hardening process focuses on the operating system,
and is important regardless of the services offered by the
server. The steps will vary slightly between services, such as
e-mail and Hypertext Transfer Protocol (HTTP), but are essential
for protecting any server that is connected to a network,
especially the Internet. Hardening the operating system allows the
server to operate efficiently and securely.
"This article includes the essential steps an administrator must
follow to harden a Unix system; specifically, a Red Hat Linux
system. These steps include updating the system, disabling
unnecessary services, locking down ports, logging, and maintenance.
Open source programs allow administrators to automate these
processes using Bastille, sudo, logging enhancers such as SWATCH,
and antivirus software. Before you implement these programs, you
should first understand how to harden a system manually.
"A Linux system may contain many security vulnerabilities and
software bugs when it is first released. Vendors, such as Red Hat,
provide updates to the operating system to fix these
vulnerabilities and bugs. In fact, many consulting firms recommend
that companies do not purchase and implement new operating systems
until the first update is available. In most cases, the first
update will fix many of the problems encountered with the first
release of the Linux system (distribution).
"You should apply the latest updates before the server goes
live, and constantly maintain the server after it is deployed to
make sure the most current required patches are installed. The more
time an operating system is available to the public, the more time
malicious hackers have to exploit discovered vulnerabilities.
Vendors offer patches to fix these vulnerabilities as quickly as
possible; in some cases, the fixes are available at the vendor's
site the same day. Administrators must also regularly test their
systems using security analyzer software. Security analyzer
software scans systems to uncover security vulnerabilities, and
recommends fixes to close the security hole..." | <urn:uuid:a18312e8-ffd5-4d1b-b05b-8ebab497d73a> | CC-MAIN-2016-26 | http://www.linuxtoday.com/security/2003070900526OSHLSW | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396100.16/warc/CC-MAIN-20160624154956-00054-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.889843 | 525 | 2.640625 | 3 |
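The "disabling unnecessary services" step quoted above lends itself to a simple audit. Below is a hedged sketch in Python; the service names and the allowlist are hypothetical, and on a real Red Hat system of this era the running-services list would come from a tool such as `chkconfig --list` rather than a hard-coded list:

```python
def unexpected_services(running, allowlist):
    """Return services that are running but not explicitly approved.

    Anything reported here is a candidate for being switched off
    during hardening (e.g. with `chkconfig <name> off` on Red Hat).
    """
    return sorted(set(running) - set(allowlist))

# Hypothetical audit of a web server that should only run SSH and HTTP:
running = ["sshd", "httpd", "portmap", "telnet", "sendmail"]
allowlist = ["sshd", "httpd"]
print(unexpected_services(running, allowlist))
# -> ['portmap', 'sendmail', 'telnet']
```

Keeping the allowlist in version control makes re-auditing after each update a one-line check.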
Once the uncontested king of the porcelain trade, China is diligently working to rediscover its artistic genius.
Mildred Mottahedeh, one of the foremost international authorities on Chinese export porcelain, has been working in conjunction with the Chinese government to help rebuild the porcelain industry, which declined during the years of war and internal strife since the turn of the century.
A great advocate of Sino-American cultural relations, Mrs. Mottahedeh has already made several trips to China to assist the Chinese in modernizing their porcelain production methods and reestablishing standards for quality of design. At 74, Mrs. Mottahedeh is respected by the Chinese as a woman of ''high years'' as well as for her expertise. She says they have told her: ''We want people with gray hair who have experience.''
Mrs. Mottahedeh is currently working with a factory outside of Peking which, like most other Chinese potteries, recently made the switch from the centuries-old ''dragon kilns,'' which burned wood, to firing the chinaware with methane gas. The transition has not been easy. So far the Chinese have had only limited success, with a 51 percent discard rate.
The Chinese are also working to meet Western standards in mixing glazes consistently in order to produce subsequent sets of china with exactly the same colors.
While the Chinese potters struggle with the new technology at home, the Chinese government is sending business people abroad to be trained as dealers and to become acquainted with Western tastes in chinaware.
Although the Chinese porcelain produced in the last 40 or 50 years has been lacking in design quality, Mrs. Mottahedeh is encouraged by the promise of the young Chinese students graduating from art schools.
She is not only interested in reviving the best of China's past, but would like to foster a new genre of Chinese design using the talents of the young modern artists.
Because of government support of systematic archaeological excavations, vast quantities of artifacts are being discovered each year in China. In the past most of the Chinese treasures were divided among the three major national art museums, in Peking, Shanghai, and Guangzhou (formerly Kwangchow). Today the Chinese are sifting through their newly recovered art objects to assemble collections for local museums throughout the country.
''The Chinese are beginning to discern between good and bad taste. They are learning the difference between quality design and hack work without soul,'' says Mrs. Mottahedeh, who is advising the Chinese in the process of selection. | <urn:uuid:f87f2c28-9b58-48f8-a15e-00660cb5593e> | CC-MAIN-2016-26 | http://www.csmonitor.com/1982/1103/110332.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397864.87/warc/CC-MAIN-20160624154957-00044-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.967116 | 526 | 2.71875 | 3 |
This report has volatile references. Author must fix. - RBL
The heat released by fission in nuclear reactors must be captured and transferred for use in electricity generation. To this end, reactors use coolants that remove heat from the core, where the fuel undergoes fission, and carry it to electrical generators. Coolants also serve to maintain manageable pressures within the core.
In order for the coolant to work effectively, it must fulfill a number of key specifications. Most basically, it must have efficient heat transfer properties. The coolant must also be a fluid that can fill the interstices of the core and be pumped to a steam generator or turbine. Thermal and material compatibility are vital as well; the coolant should be chemically stable at high temperatures, non-corrosive and a poor neutron absorber. This last parameter is achieved by ensuring that the coolant has a low absorption cross section. As a neutron is ejected from the uranium-235 in the fuel rod (or, in rare cases, from fissile material dissolved in the coolant itself), the atoms with which it collides will either scatter or absorb it. The chance of each event is expressed as a nuclear cross section, the effective area presented by the nucleus, measured in barns (1 barn = 1×10^-28 m^2). A high scattering and low absorption cross section is optimal because the coolant should not "eat" the neutrons before they can be taken up by the fissile material. In the cases where the coolant does absorb neutrons, however, the resulting radioactivity should have a short lifetime. Lastly, cost-effectiveness is a relevant consideration for reactors.
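The trade-off between scattering and absorption can be made concrete as the probability that any single neutron-nucleus collision absorbs the neutron, σ_a/(σ_a + σ_s). A small illustration; the thermal absorption cross section used below for H-1 (about 0.333 barns) is an assumed figure for illustration, not taken from the text:

```python
def absorption_probability(sigma_a, sigma_s):
    """Probability that a given neutron-nucleus collision absorbs the
    neutron rather than scattering it (cross sections in barns)."""
    return sigma_a / (sigma_a + sigma_s)

# Hydrogen-1: sigma_s = 82.03 b (from the text); sigma_a ~ 0.333 b
# (assumed thermal value). Only a tiny fraction of collisions absorb:
p = absorption_probability(0.333, 82.03)
print(f"{p:.4%}")  # roughly 0.4% of collisions absorb the neutron
```

This is why light water is such a good moderator: nearly every collision with H-1 scatters (and slows) the neutron instead of consuming it.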
Note also that the coolant affects significant aspects of the reactor itself, such as the operating temperature and pressure, the size of the core, and the methods of fuel handling.
Since no coolant qualifies as perfect for every purpose, various substances are used in industry. Below I will cover two common coolants: water and liquid sodium.
The two major types of water-cooled reactors are pressurized water reactors (PWRs) and boiling water reactors (BWRs). Both use light (ordinary) water, but with slightly differing cooling mechanisms. In a BWR, the water turns into steam in the reactor core and is then pumped directly to the turbines that power electrical generators. In a PWR, the primary loop of coolant flowing through the core is kept at very high pressure (2250 psi) so it will remain a liquid; it then transfers heat to a secondary loop of water that vaporizes and turns the turbines. This latter method ensures that any radioactivity activated in the coolant remains within the reactor. Because the heat of vaporization required for the phase change from liquid to steam limits thermal efficiency, research is currently being done on a Generation IV supercritical water reactor. Light water is a good coolant for thermal reactors but not for fast breeders, since pressurized water also moderates (slows down) the neutrons: hydrogen-1 (H-1), which makes up much of water, has a scattering cross section of σ = 82.03 barns, far larger than that of any other atom. PWRs have an intrinsic failsafe should the reactor overheat to the point where the water in the primary loop boils: neutrons interact less with steam and do not get thermalized, so the abundance of fast neutrons causes the rate of fission to drop. After a few minutes, the reactor achieves passive shut-down.
An even more effective coolant and moderator is heavy water (liquid D2O), in which the hydrogen is deuterium, because deuterium's absorption cross section is three orders of magnitude smaller than that of ordinary hydrogen. However, it is also prohibitively expensive: approximately $2400/L.
When it comes to fast breeder reactors, molten sodium is the coolant of choice because it causes negligible moderation. Not only is it one of the cheapest available metals (DuPont reactor-grade Niapure is approximately $1.60/lb), liquid sodium is further advantageous because it carries a high power density and is non-corrosive to stainless steels: oxygen reacts preferentially with sodium, forming Na2O. Like PWRs, the sodium-cooled fast reactor (SFR) utilizes a primary coolant loop that transfers heat via a steam generator to a separate water cycle. The sodium becomes intensely radioactive from contact with the fuel, but it stays contained within the reactor and has a short half-life of approximately 15 hours. Like light water, liquid sodium is inherently safe in loss-of-flow scenarios; its large heat capacity and good thermal conductivity prevent significant temperature rises.
However, liquid sodium has significant disadvantages as well: it ignites spontaneously upon contact with air, and reacts violently with water. Besides burning, sodium exposed to air produces aerosols that are highly toxic and can damage the equipment surfaces onto which they are deposited. An operational concern is that the opacity of the coolant makes fuel handling and monitoring more complicated.
An alternative to liquid metal is molten salt. This coolant can run at high temperatures for better thermodynamic efficiency, but remains at a low vapor pressure, which reduces the effects of mechanical stress and increases the intrinsic safety of the reactor. Since heat transfer by molten salt is so efficient, reactors can be designed with smaller cores and less complicated piping systems. Structurally, the biggest difference is that the fuel – a mixture of sodium, zirconium and uranium fluorides – can be dissolved into the coolant itself, a set-up that eliminates the need for fuel fabrication and the disadvantages of having variable isotopic ratios within the fuel rods.
Other nuclear reactor coolants include liquid lead, gases such as helium and carbon dioxide, and organic compounds.
Table 1: Absorption and Scattering Cross Sections for Various Coolants and Fissile Materials (figures in parentheses are uncertainties; the table data are not reproduced here).
© Suraya Omar. The author grants permission to copy, distribute and display this work in unaltered form, with attribution to the author, for noncommercial purposes only. All other rights, including commercial rights, are reserved to the author.
R. L. Murray, Nuclear Energy: an Introduction to the Concepts, Systems, and Applications of Nuclear Processes. (Butterworth-Heinemann, 2009).
P. Gunter, "Safety Problems with Pressurized Water Reactors in the United States," Nuclear Information and Resource Service, March 1996. - This reference is volatile. NIRS is also a virulently anti-nuclear organization. - RBL
"A Technology Roadmap for Generation IV Nuclear Energy Systems," Generation IV International Forum, GIF-002-00, December 2002. - GIF is a pro-nuclear lobbying organization. This is not a DOE document. - RBL
Neutron Scattering Lengths and Cross Sections. - This reference is volatile. - RBL
Fisher Scientific. - This reference is volatile. The link is also to a promotional site. - RBL
Sodium as a Fast Reactor Coolant, Argonne National Laboratory, 3May 2007. - Power Point presentations are not allowed as references. - RBL | <urn:uuid:f3e81d2f-1705-433b-b756-f9f4767c2935> | CC-MAIN-2016-26 | http://large.stanford.edu/courses/2011/ph241/omar1/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395548.53/warc/CC-MAIN-20160624154955-00101-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.924873 | 1,469 | 4.25 | 4 |
Introduction to Ethernet
Ethernet was originally developed by Digital, Intel and Xerox (DIX) in the early 1970's and
has been designed as a 'broadcast' system, i.e. stations on the network can send messages
whenever and wherever it wants. All stations may receive the messages, however only the
specific station to which the message is directed will respond.
The original format for Ethernet was developed in Xerox Palo Alto Research Centre (PARC), California in 1972. Using
Carrier Sense Multiple Access with Collision Detection (CSMA/CD) it had a transmission rate
of 2.94Mb/s and could support 256 devices over cable stretching for 1km. The two inventors
were Robert Metcalf and David Boggs.
Ethernet versions 1.0
and 2.0 followed until the IEEE 802.3 committee re-jigged the Ethernet II packet to form the
Ethernet 802.3 packet. (IEEE's Project 802 was named after the time it was set up,
February 1980. It includes 12 committees 802.1 to 802.12, 802.2 is the LLC, 802.4 Token
Bus, 802.11 Wireless, 802.12 100VG-AnyLAN etc.)
Nowadays you will see either Ethernet II (DIX) (invented by Digital, Intel
and Xerox) format or Ethernet 802.3 format being used.
The 'Ether' part of Ethernet denotes that the system is not meant to be restricted for use
on only one medium type, copper cables, fibre cables and even radio waves can be used.
802.3 Ethernet uses Manchester Phase Encoding (MPE)
the data bits on the outgoing signal. The next few sections describe how Ethernet works and
how Ethernet is structured.
As mentioned earlier, Ethernet uses Carrier Sense Multiple Access with Collision
Detection (CSMA/CD). When an Ethernet station is ready to transmit, it checks for
the presence of a signal on the cable i.e. a voltage indicating that another station
is transmitting. If no signal is present then the station
begins transmission, however if a signal is already present then the station delays
transmission until the cable is not in use. If two stations detect an idle cable
and at the same time transmit data, then a collision occurs.
On a star-wired UTP network, if the transceiver of the sending station detects activity
on both its receive and transmit pairs before it has completed transmitting, then it
decides that a collision has occurred. On a coaxial system, a collision is detected
when the DC signal level on the cable is the same or greater than the combined signal
level of the two transmitters, i.e.. significantly greater than +/- 0.85v.
Line voltage drops dramatically if two stations transmit at the same and the first
station to notice this sends a high voltage jamming signal around the network as a
signal. The two stations involved with the collision lay off transmitting again for
a time interval which is randomly selected. This is determined
using Binary Exponential Backoff. If the collision occurs again then the
time interval is doubled, if it happens more than 16 times then an error is reported.
A Collision Domain is that part of the network where each station can 'see'
other stations' traffic both unicast and broadcasts. The Collision Domain is made up of one segment
of Ethernet coax (with or without repeaters) or a number of UTP shared hubs. A network
is segmented with bridges (or microsegmented when using switches) that create two segments,
or two Collision Domains where a station on one segment can not see traffic between stations
on the other segment unless the packets are destined for itself. It can however still see
all broadcasts as a segmented network, no matter the number of segments, is still one
Broadcast Domain. Separate Broadcast Domains are created by VLANs on switches so that
one physical network can behave as a number of entirely separate LANs such that the only
way to allow stations on different VLANs to communicate is at a layer 3 level using a
router, just as if the networks were entirely physically separate.
3. Ethernet Frame
3.1 Frame Formats
The diagrams below describe the structure of the older DIX (Ethernet II) and the now standard 802.3 Ethernet frames.
The numbers above each field represent the number of bytes.
- Preamble field: Establishes bit synchronisation and transceiver conditions
so that the PLS circuitry synchs in with the received frame timing. The DIX frame
has 8 bytes for the preamble rather than 7, as it does not have a Start Frame Delimiter (or Start of Frame).
- Start Frame Delimiter: Sequence 10101011 in a separate field, only in the 802.3 frame.
- Destination address: Hardware address (MAC address) of the destination station
(usually 48 bits i.e. 6 bytes).
- Source address: Hardware address of the source station (must be of the same
length as the destination address, the 802.3 standard allows for 2 or 6 byte addresses, although
2 byte addresses are never used, N.B. Ethernet II can only uses 6 byte addresses).
- Type: Specifies the protocol sending the packet such as IP or IPX (only applies
to DIX frame).
- Length: Specifies the length of the data segment, actually the number of LLC
data bytes, (only applies to 802.3 frame and replaces the Type field).
- Pad: Zeros added to the data field to 'Pad out' a short data field to 46 bytes
(only applies to 802.3 frame).
- Data: Actual data which is allowed anywhere between 46 to 1500 bytes within one frame.
- CRC: Cyclic Redundancy Check to detect errors that occur during transmission
(DIX version of FCS).
- FCS: Frame Check Sequence to detect errors that occur during transmission
(802.3 version of CRC). This 32 bit code has an algorithm applied to it which will
give the same result as the other end of the link, provided that the frame was transmitted
From the above we can deduce that the maximum 802.3 frame size is 1518 bytes and the
minimum size is 64 bytes. Packets that have correct CRC's (or FCS's) but are smaller than
64 bytes, are known as 'Runts'.
The hardware address, or MAC address is transmitted and stored in Ethernet network devices in Canonical format i.e. Least significant Bit (LSB)
first. You may hear the expression Little-Endian to describe the LSB format in which Ethernet is transmitted.
Token Ring and FDDI, on the other hand, transmit the MAC address with the Most Significant Bit (MSB) first, or
Big-Endian, This is known as Non-Canonical format. Note that this applies on a byte by byte
basis i.e. the bytes are transmitted in the same order it is just the bits in each of those bytes that are reversed!
The storage of the MAC addresses in Token Ring and FDDI devices however, may sometimes still be in Canonical format so this can
sometimes cause confusion. The reference to, the distribution of MAC addresses and the OUI desinations are always carried out in
Some discussion is warranted on the LLC field.
The 802.2 committee developed the Logical Link Control (LLC)
to operate with 802.3 Ethernet as seen in the above
diagram. LLC is based on the HDLC
format and more detail can be found by following the link.
Whereas Ethernet II (2.0) combines the MAC and the Data link layers restricting itself to connectionless
service in the process, IEEE 802.3 separates out the MAC and Data Link layers. 802.2 (LLC) is also required by Token Ring
and FDDI but cannot be used with the Novell 'Raw' format. There are three types of LLC, Type 1 which is
connectionless, Type 2 which is connection-oriented and Type 3 for Acknowledged Connections.
The Service Access Point (SAP) is used to distinguish between different data exchanges on the same end
station and basically replaces the Type field for the older Ethernet II frame. The Source Service
Access Point (SSAP) indicates the service from which
the LLC data unit is sent, and the Destination Service Access Point (DSAP)
indicates the service to which the LLC data unit is being sent.
As examples, NetBIOS uses the SAP address of F0 whilst IP uses the SAP address of 06.
The following lists common SAPs:
- 00 - Null LSAP
- 02 - Individual LLC Sublayer Management Function
- 03 - Group LLC Sublayer Management Function
- 04 - IBM SNA Path Control (individual)
- 05 - IBM SNA Path Control (group)
- 06 - ARPANET Internet Protocol (IP)
- 08 - SNA
- 0C - SNA
- 0E - PROWAY (IEC955) Network Management & Initialization
- 14 - ICL OSLAN SSAP (TP4 over 802.3)
- 18 - Texas Instruments
- 42 - IEEE 802.1 Bridge Spanning Tree Protocol
- 4E - EIA RS-511 Manufacturing Message Service
- 54 - ICL OSLAN DSAP (TP4 over 802.3)
- 7E - ISO 8208 (X.25 over IEEE 802.2 Type 2 LLC)
- 80 - Xerox Network Systems (XNS)
- 86 - Nestar
- 8E - PROWAY (IEC 955) Active Station List Maintenance
- 98 - ARPANET Address Resolution Protocol (ARP)
- BC - Banyan VINES
- AA - SubNetwork Access Protocol (SNAP)
- E0 - Novell NetWare
- F0 - IBM NetBIOS
- F4 - IBM LAN Management (individual)
- F5 - IBM LAN Management (group)
- F8 - IBM Remote Program Load (RPL)
- FA - Ungermann-Bass
- FE - ISO Network Layer Protocol
- FF - Global LSAP
The Control Field identifies the type of LLC, of which there are three:
- Type 1 - uses Unsequenced Information (UI) (Indicated by a Control Field value of 03)
frames to set up unacknowledged connectionless sessions.
- Type 2 - uses Information (I) frames and maintains the sequence numbers during an acknowledged connection-oriented transmission.
- Type 3 - uses Acknowledged Connection (AC) frames in an acknowledged connectionless service.
3.2 I/G and U/L within the MAC address
With an Ethernet MAC address, the first octet uses the
lowest significant bit as the I/G bit (Individual/Group address) only and does
not have such a thing as the U/L bit (Universally/Locally administered).
The U/L bit is used in Token Ring
A destination Ethernet MAC address starting with the octet '05' is a group or multicast
address since the first bit (LSB) to be transmitted is on the right hand side of
the octet and is a binary '1'. Conversely, '04' as the first octet indicates that
the destination address is an individual address. Of course, in Ethernet, all source
address will have a binary '0' since they are always individual.
The first 3 octets of the MAC address form the Organisational Unique Identifier (OUI) assigned
to organisations that requires their own group of MAC addresses.
A list of OUIs can be found at OUI Index
3.3 Subnetwork Access Protocol (SNAP)
The SNAP protocol was introduced to allow an easy transition to the new LLC frame format for vendors. SNAP
allows older frames and protocols to be encapsulated in a Type 1 LLC header so making any protocol
'pseudo-IEEE compliant'. SNAP is described in
The following diagram shows how it looks:
As you can see, it is an LLC data unit (sometimes called a Logical Protocol Data Unit (LPDU)
Type 1 (indicated by 03). The DSAP and SSAP are set to AA
to indicate that this is a SNAP header coming up.
The SNAP header then indicates the vender via the Organisational Unique Identifier (OUI)
protocol type via the Ethertype field. In the example above we have the OUI as 00-00-00 which means that there
is an Ethernet frame, and the Ethertype of 08-00 which indicates IP as the protocol.
The official list of types can be found at Ethertypes
More and more vendors are moving to LLC1 on the LAN but SNAP still remains and crops up time and time again.
Have a look at the document IPX
for further discussion
of 802.3 and 802.5 headers (SNAP etc.) in an IPX environment.
Traditionally, Ethernet is used over 'thick' coaxial cable (Normally yellow in colour)
called 10Base5 (the '10' denotes 10Mbps, base means
that the signal is baseband i.e. takes the whole
bandwidth of the cable (so that only one device can transmit at one time on the same cable),
and the '5' denotes 500m maximum length). The minimum length between
stations is 2.5m.
The cable is run in one long length forming a 'Bus Topology'. Stations attach to it by way
of inline N-type connections or a transceiver which is literally screwed into the cable
(by way of a 'Vampire Tap') providing a 15-pin AUI (Attachment Unit Interface) connection
(also known as a DIX connector or a DB-15 connector)
for a drop lead connection (maximum of 50m length) to the station.
The segments are terminated with 50 ohm resistors and the shield should be grounded at one end only.
The segment could be
appended with up to a maximum of 4 repeaters, therefore 5 segments (total length of 2,460m) can
be connected together. Of the 5 segments only 3 can have devices attached (100 per segment).
A total of 300 devices can be attached on a Thicknet broadcast domain.
It was common to see the Thick coax used in Risers to connect Repeaters which in turn
provide 'Thin Ethernet' coaxial connections for runs around the floors to up to 30 workstations.
Thin ethernet (Thinnet) uses RG-58 cable and is called 10Base2 (The '2' now denoting 200m
maximum length, strictly speaking this is 185m). The minimum length
between stations is 0.5m. Following is a table detailing various
types of coaxial cable:
- RG-58 /U - solid copper core (0.66mm or 0.695mm), 53.5 ohms.
- RG-58 A/U - stranded copper core (0.66mm or 0.78mm), 50 ohms.
- RG-58 C/U - military version of RG58 A/U (0.66mm), 50 ohms.
- RG-59 - broadband transmissions e.g. cable TV.
- RG-6 - higher frequency broadband transmissions. A larger diameter than RG-59.
- RG-62 - ArcNet.
- RG-8 - Thicknet, 50 ohms.
Each station connects to the thinnet by
way of a Network Interface Card (NIC) which provides a BNC (British Naval Connector). At each station
the thinnet terminates at a T-piece and at each end of the thinnet run (or 'Segment') a
50-ohm terminator is required to absorb stray signals, thereby
preventing signal bounce. The shield should be grounded at one end only.
A segment can be appended with other segments using up to 4 repeaters,
i.e. 5 segments in total. 2 of these segments however, cannot be tapped, they can only be used
for extending the length of the broadcast domain (to 925m). What this means is that 3 segments with a maximum
of 30 stations on each can give you 90 devices on a Thinnet broadast domain.
(There is also a little used 10Broad36 standard where 10 Mbps Ethernet runs over broadband up to 3.6km.
With broadband, a number of devices can transmit at the same time using multiple basebands e.g. multiple TV stations
each with its own baseband signal frequency on one wire).
Nowadays, it is becoming increasingly important to use Ethernet across Unshielded Twisted
Pair (UTP) or Shielded Twisted Pair (STP), this being called 10BaseT (the 'T' denoting
twisted pair). For instance, Category 5 UTP is installed in a 'Star-wired' format, with
runs recommended at no greater than 100m (including
patch leads, cable run and flyleads) and Ethernet Hubs with UTP ports
(RJ45) centrally located. It has been found though that runs of up to 150m are feasible, the limitations being
signal strength. Also, there should be no more than a 11.5dB signal loss and
the minimum distance between devices is 2.5m. The maximum delay for the signal in a 10Mbps network is 51.2 microseconds.
This comes from the fact that the bit time (time to transmit one bit) is 0.1 microseconds and that the slot time for
a frame is 512 bit times.
The wires used in the RJ45 are 1 and 2 for transmit, 3 and 6 for receive.
In order to connect to ethernet in this 'Star Topology', each station again has a NIC
which, this time, contains an RJ45 socket which is used by a 4-pair RJ45 plug-ended
droplead to connect to a nearby RJ45 floor or wall socket.
Each port on the hub sends a 'Link Beat Signal' which checks the integrity of the cable
and devices attached, a flickering LED on the front of the port of the hub tells you
that the link is running fine.
The maximum number of hubs (or, more strictly speaking, repeater counts) that you can
have in one segment is 4 and the maximum number of stations on one broadcast domain is 1024.
The advantages of the UTP/STP technology are gained from the flexibility of the system,
with respect to moves, changes, fault finding, reliablity and security.
The following table shows the RJ45 pinouts for 10BaseT:
If you wish to connect hub to hub, or a NIC directly to another NIC, then the following 10BaseT cross-over
cable should be used:
The 4 repeater limit manifests itself in 10/100BaseT environments
where the active hub/switch port is in fact a repeater, hence the name multi-port repeater.
Generally, the hub would only have one station per port but you can cascade hubs from one another
up to the 4 repeater limit. The danger here of course, is that you will have all the traffic
from a particular hub being fed into one port so care would need to be taken on noting
the applications being used by the stations involved, and the likely bandwidth that
the applications will use.
There is a semi-standard called Lattisnet (developed by Synoptics) which runs 10MHz Ethernet
over twisted pair but instead of bit synchronisation occurring at the sending (as in 10BaseT)
the synchronisation occurs at the receiving end.
The 10BaseF standard developed by the IEEE 802.3 committee defines the use of fibre for
ethernet. 10BaseFB allows up to 2km per segment (on multi-mode fibre) and is designed for backbone applications
such as cascading repeaters. 10BaseFL describes the standards for the fibre optic links
between stations and repeaters, again allowing up to 2km per segment on
multi-mode fibre. In addition, there is the 10BaseFP (Passive components)
standard and the FOIRL (Fibre Optic Inter-Repeater Link) which provides the specification
for a fibre optic MAU (Media Attachment Unit) and other interconnecting components.
The 10BaseF standard allows for 1024 devices per network.
4.5 Fast Ethernet (802.3u) 100BaseTx
Fast Ethernet uses the same frame formats and CSMA/CD technology as normal 10Mbps Ethernet. The difference
is that the maximum delay for the signal across the segment is now 5.12 microseconds instead of 51.2 microseconds.
This comes from the fact that the bit time (time to transmit one bit) is 0.01 microseconds and that the slot time for
a frame is 512 bit times. The Inter-Packet Gap (IPG) for 802.3u is 0.96 microseconds as opposed to 9.6 microseconds
for 10Mbps Ethernet.
Fast Ethernet is the most popular of the newer standards and is an extension to 10BaseT, using
CSMA/CD. The '100' denotes 100Mbps data speed and it uses the same two pairs
as 10BaseT (1 and 2 for transmit, 3 and 6 for receive) and must only be used on Category 5 UTP cable installations
with provision for it to be used on Type 1 STP. The Copper physical layer being based on
the Twisted Pair-Physical Medium Dependent (TP-PMD) developed by ANSI X3T9.5 committee.
The actual data throughput increases by between 3 to 4 times that of 10BaseT.
Whereas 10BaseT uses Normal Link Pulses (NLP) for testing the integrity of the connection, 100BaseT
uses Fast Link Pulses (FLP) which are backwardly compatible with NLPs but contain more information.
FLPs are used to detect the speed of the network (e.g. in 10/100 switchable cards and ports).
The ten-fold increase in speed is achieved by reducing the time it takes to transmit a bit to a tenth
that of 10BaseT. The slot-time
is the time it takes to transmit 512 bits on 10Mbps Ethernet (i.e.
5.12 microseconds) and listen for a collision (see earlier). This remains the same for 100BaseT, but
the network distance between nodes, or span, is reduced. The encoding used is 4B/5B
. This wave-shaping takes the clock frequency of 125MHz and reduces it to 31.25MHz which is the frequency of
the carrier on the wire.
The round trip signal timing is the critical factor when it comes to the distance that the signal can run on copper UTP.
The cable has to be Category 5 and the distance must not exceed 100m.
The IEEE use the term 100BaseX to refer to both 100BaseTx and 100BaseFx and the Media-Independent
Interface (MII) allows a generic connector for transceivers to connect to 100BaseTx, 100BaseFx and
There is no such thing as the 5-4-3 rule in Fast Ethernet.
All 10Base-T repeaters are considered to be functionally identical.
Fast Ethernet repeaters are divided into
two classes of repeater, Class I and Class II. A Class I repeater has
a repeater propagation delay value of 140 bit times, whilst a Class II repeater is 92 bit times.
The Class I repeater (or Translational Repeater) can support different signalling types
such as 100BaseTx and 100BaseT4.
A Class I repeater transmits or repeats the incoming line signals on one port to the other ports by first
translating them to digital signals and then retranslating them to line signals. The translations are necessary
when connecting different physical media (media conforming to more than one physical layer specification) to the
same collision domain.
Any repeater with an MII port would be a Class I device. Only one Class I repeater can
exist within a single collision domain, so this type of repeater cannot be cascaded.
There is only allowed one Class I repeater hop
in any one segment.
A Class II repeater immediately transmits or repeats the incoming line signals on one port to the other ports:
it does not perform any translations. This repeater type connects identical media to the same collision domain
(for example, TX to TX). At most, two Class II repeaters can exist within a single collision domain. The cable
used to cascade the two devices is called and unpopulated segment or IRL (Inter-Repeater Link).
The Class II repeater (or Transparent Repeater) can only
support one type of physical signalling, however you can have two Class II repeater hops in any one segment
100BaseT4 uses all four pairs and is designed to be used on Category 3 cable installations.
Transmit is on pairs 1 and 2, receive is on pairs 3 and 6, whilst data is bidirectional
on 4 and 5 and on 7 and 8. The signaling is on three pairs at 25MHz each using 8B/6T
The fourth pair is used for collision detection. Half-Duplex is supported on 100BaseT4.
100BaseFx uses two cores of fibre (multi-mode 50/125um, 60/125um or single-mode) and 1300nm wavelength optics. The connectors are SC,
Straight Tip (ST) or Media Independent Connector (MIC). The 100BaseT MAC mates with the
ANSI X3T9.5 FDDI Physical Medium Dependent (PMD) specification. At half-duplex you can have distances
up to 412m, whereas Full-duplex will give 2km.
There is also a proposed
100BaseSx which uses 850nm wavelength optics giving 300m on multi-mode fibre.
The encoding used is 4B/5B
wave shaping with a clock frequency of 125MHz.
This little known version of Fast Ethernet is for use over two pairs of Category 3 cable and uses
PAM-5 for encoding. There is simultaneous transmission and reception of data in both pairs and the electronics
uses DSP technology to handle alien signals in adjacent pairs.
100BaseT2 can run up to 100m on Category 3 UTP.
Based on 802.12 (Hewlett Packard), 100VG-AnyLAN uses an access method called Demand Priority.
The 'VG' stands for 'Voice Grade' as it is designed to be used with Category 3 cable.
This is where the repeaters (hubs) carry out continuous searches round all of the nodes
for those that wish to send data. If two devices cause a 'contention' by wanting
to send at the same time, the highest priority request is dealt with first, unless
the priorities are the same, in which case both requests are dealt with at the same
time (by alternating frames). The hub only knows about connected devices and other
repeaters so communication is only directed at them rather than broadcast to every
device in the broadcast domain (which could mean 100's of devices!). This is a more
efficient use of the bandwidth. This is the reason why a new standard was developed called 802.12 as it is not
strictly Ethernet. In fact 802.12 is designed to better support both Ethernet and Token Ring.
The encoding techniques used are 5B/6B
All four pairs of UTP are used. On Cat3 the longest cable run is 100m but this increases to 200m on Cat5.
The clock rate on each wire is 30MHz, therefore 30Mbits per second are
transmitted on each pair giving a total data rate of 120Mbits/sec. Since each 6-bits of data on the line
represents 5 bits of real data due to the 5B/6B encoding, the rate of real data being transmitted is 25Mbits/sec
on each pair, giving a total rate of real data of 100Mbits/sec. For 2-pair STP and fiber, the data rate is
120Mbits/sec on the transmitting pair, for a real data transmission rate of 100Mbits/sec.
4.10 Gigabit Ethernet
Although the functional principles of Gigabit Ethernet are the same as Ethernet and Fast Ethernet
i.e. CSMA/CD and the Framing format, the physical outworking is very different.
One difference is the slot time. The standard Ethernet slot time required in CSMA/CD half-duplex mode is
not long enough for running over 100m of copper, so Carrier Extension is used to guarantee a 512-bit slot time.
802.3z is the committee responsible for formalising the standard for Gigabit Ethernet
1000 refers to 1Gb/s data speed. The existing Fibre Channel interface standard (ANSI X3T11) is used
and allows up to 4.268Gbps speeds. The Fibre Channel encoding scheme is 8B/10B
Gigabit Ethernet can operate in half or full duplex modes and there is also a standard 802.3x which manages XON/XOFF flow
control in full duplex mode. With 802.3x, a receiving station can send a packet to a sending station to stop it sending data
until a specified time interval has passed.
There are three media types for 1000BaseX. 1000BaseLX, 1000BaseSX and 1000BaseCX.
With 1000BaseSX, 'S' is for Short Haul, and this uses short-wavelength laser (850nm) over multi-mode fibre.
1000BaseSX can run up to 300m on 62.5/125um multimode fibre and up to 550m on 50/125um multimode fibre.
Using 1300nm wavelength, Gigabit Ethernet (1000BaseLX where the 'L' is for Long wavelength laser, or Long Haul) can run up to 550m
on 62.5/125um multi-mode fibre or 50/125um multi-mode fibre. In addition, 1000BaseLX can run up to 5km (originally 3km) on single-mode fibre using 1310nm
1000BaseCX is a standard for STP copper cable and allows Gigabit Ethernet to run up to 25m over STP cable.
There is currently an issue as many
multimode fibre installations using 62.5/125um fibre and so 220m is often the limit for the backbone when it
should be 500m to satisfy ISO 11801 and EIA/TIA 568A.
Many cable manufacturers are
enhancing their cable systems to 'enhanced Category 5' standards in order to allow Gigabit Ethernet to run
at up to 100m on copper. The Category 6 standard has yet to be ratified, and is not likely to be due
for a while.
In order to obtain the 1000Mbps data bit rate across the UTP cable without breaking the FCC rules for emission, all
4 pairs of the cable are used. Hybrid circuits at each end of each pair are used to allow simultaneous transmission
and reception of data (full-duplex) by separating the transmission signal from the
receiving signal. Because some transmission signal still manages to couple itself to the receiving
side there is an additional echo canceller built in, this is called a NEXT canceller. This system minimises the symbol rate.
Encoding is carried out with PAM-5 | <urn:uuid:444101a2-1d1e-48fb-a130-0810de113b08> | CC-MAIN-2016-26 | http://www.rhyshaden.com/eth_intr.htm | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395621.98/warc/CC-MAIN-20160624154955-00132-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.881007 | 6,728 | 3.78125 | 4 |
6 x 9
Descriptive grammarians and typologists often encounter unusual constructions or unfamiliar variants of otherwise familiar construction types. Many of these phenomena are puzzling from the perspective of linguistic theories: they neither predict these “anomalies” nor, arguably, provide the tools to describe them insightfully. This book analyzes an unusual type of relative clause found in many related and unrelated languages of Eurasia. While providing a detailed case study of Tundra Nenets, it broadens this inquiry into a detailed typological exploration of this relative clause type. The authors argue that an understanding of this construction requires exploring the (type of) grammar system in which it occurs in order to identify the (set of) independent constructions that motivate its existence. The resulting insights into grammar organization illustrate the usefulness of a construction-theoretic syntax and morphology informed by a developmental systems perspective for the understanding of complex grammatical phenomena. | <urn:uuid:4239e288-1cbe-4d15-960e-c429ba5073e1> | CC-MAIN-2016-26 | http://www.press.uchicago.edu/ucp/books/book/distributed/D/bo3619197.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397842.93/warc/CC-MAIN-20160624154957-00010-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.873065 | 188 | 2.578125 | 3 |
With its light rail renewal, France has gone full steam ahead to become a leader in technological design and style. Howard Johnston reports.
France is a nation synonymous with developments in high-speed rail technology. The train à grande vitesse (TGV), the high-speed steel-wheel train developed by Alstom and national operator SNCF, is a world-leading product that continues to be refined and exported worldwide.
But just as rapidly as high-speed rail came to grip the nation, France moved from being almost devoid of tramways to becoming a showcase of the international light rail industry.
20TH CENTURY TRAMWAY DECLINE
In common with many European countries, France saw most of its original tramway networks decline and close through much of the 20th century. Increased prosperity, improved routes nationales (trunk roads) and the expanding autoroute (motorway) network encouraged private car ownership.
Improved bus fleets providing quicker and cheaper urban public transport also took their toll on France's tram lines, and for a while – apart from three surviving fragments in Lille, Marseilles and St Etienne – France seemed finished with light rail.
In 1977, the cross-city RER urban rail network and the Paris Metro were extended. Lyons followed by opening a conventional metro in 1978, and Lille introduced the driverless, rubber-tyred VAL (véhicule automatique léger – automatic light vehicle) system in 1983.
But in spite of the decline of light rail, larger French cities still had to address the need to move large masses of people around, without total reliance upon increasingly crowded roads. So work continued, with unexpected effects, on French tram networks.
These urban areas, which were much smaller, together with Marseilles, kept their tram lines running as a much less expensive public transport option for their people, while at the same time advancing the way light rail would run.
FRENCH LIGHT RAIL – BEYOND THE GRAVE
By the 1980s, a number of factors converged in a way that was to lead to light rail becoming the mainstream response for transporting France's urban population. With a tradition of centralisation reaching back to the Napoleonic era, it was a cluster of national legislation following the 1982 Urban Mobility Master Plan guidelines that prompted the proliferation of the French tramway systems.
Although not prescriptive about the solutions adopted, the plan de déplacements urbains (PDU or urban travel plan) required all cities – in practice, a cluster of communes – with a population of more than 100,000 to create their own strategy for public transport.
The PDU takes account of such factors as energy use, town planning, land use, traffic levels and access to public transport. It also gave a directive for planning to be revised as necessary for changes in the urban perimeter at five-year intervals.
The early years of the 1980s also saw the national introduction of 'versement transport', an employer payroll tax which was originally deployed in Paris a decade earlier to part-finance the Metro expansion. Although controversial, this dedicated funding stream became crucial to the rapid development of light rail.
It was estimated in a study on behalf of the British Passenger Transport Executive Group in 2004 that the fund for Lyons, for example, which has the same population as Merseyside, was worth about £10m (€14.6m) a year. Teamed with a fiscal planning and approval process that is relatively speedy in comparison to other democratic countries with similarly developed economies, this could only help light rail grow.
French politics has also had a strong part to play in the nation's tram success. A high level of civic pride and the mayoral election cycle means the acquisition of show-piece public transport can become a clear and enduring symbol of a successful administration.
Starting out as a promise to an electorate, a new tram route could go from plan to operation within a six-year electoral term, leaving it open to grow further into a comprehensive system.
FRENCH TRAM STYLE
The importance of a light rail network as part of a city's modern identity has been exemplified by the distinctive and specific styling of trams, such as those that operate in Marseilles, Lyons and Strasbourg.
In all these cases the form clearly goes far beyond the basic function – their styling becomes visual shorthand for a particular location.
Although most French light-rail projects encounter opposition in the planning stages (mostly due to value for money, construction disruption, business access and conflicts over which system to adopt) the showcase effect of the pioneer modern systems has created a momentum for more development. In effect, light rail has become a club that more French cities want to join.
Recurring elements that have attracted popular support include a restoration of city centres to pedestrians (as opposed to them being funnelled around roads in vehicles), easier access for those with restricted mobility and urban regeneration. And then, of course, there are the rising environmental concerns.
ALTERNATIVES TO LIGHT RAIL
While France clearly embraced light rail as the principal urban transport mode (other than buses) by the start of the 21st century, other approaches have found favour.
In Lille, a growing VAL metro system effectively confined trams to the historic metre-gauge routes to Roubaix and Tourcoing, known as the 'Mongy' after their original engineer. And Rennes, the Breton capital, is one of the smallest cities in the world to have a metro – its single 9.4km VAL line opened in 2002.
For many years, the presence of a metro in Paris, Marseilles and Lyons meant that the case for surface trams was less pressing. However, these three authorities have subsequently added light rail to their portfolio of public transport services.
Proprietary guided light transit / bus systems have been promoted as offering some of the key benefits associated with modern trams in terms of style, predictable operation and comfort – all at a lower capital cost and in a shorter time than light rail.
Several French cities have taken the opposite route and opted for guided buses, including Clermont-Ferrand and Caen, and established tram operators Rouen and Nantes also seem set to add this mode.
Like Caen, Nancy in eastern France uses the Bombardier transport sur voie réservée (TVR) system – a rubber-tyred tramway. But widely reported problems, not least lower-than-anticipated safe operational speeds, have set back such systems relative to light rail. Bombardier decided not to promote the system further for now, but this has not stopped other guided systems from being developed elsewhere in the world.
RUNNING THE FRENCH LIGHT RAIL
Between them, the French systems use permutations of street running, dedicated formations and redeployed heavy rail routes. Many of the new schemes have made tramways a centrepiece to urban restructuring and/or reinvigoration.
With regulation of public transport remaining a feature of France, revision of bus routes to act as feeders to tram services has become the norm. This is in stark contrast to the deregulation seen in the UK, where pricing among competing modes is left to market forces and operators come under attack for transport pollution.
In France, technical innovations have been introduced to overcome objections to the visual pollution created by overhead power supply equipment.
Approval for the Bordeaux tramway was dependent upon fitting of a safe, non-visually intrusive power supply in architecturally sensitive parts of the city. To meet this challenge, consortium leader Alstom developed alimentation par le sol (APS), a sub-surface supply of traction current.
APS – THIRD-RAIL TRAM TECHNOLOGY
In operation since 2003, APS features a central third rail made up of 8m-long conducting segments, separated by 3m-long insulating joints and controlled by supply units installed at 22m intervals. A conducting segment is powered only while a system-equipped tram is directly overhead, triggered by radio signals from the vehicle.
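Purely as an illustration of the switching rule described above (not Alstom's actual control logic – the function names and toy geometry helper are invented for this sketch), the segment energisation can be modelled in a few lines of Python using the article's figures of 8m conducting segments and 3m insulating joints:

```python
# Toy model of an APS-style ground-level power supply (illustration only).
# Figures from the article: 8 m conducting segments separated by 3 m
# insulating joints; a segment is live only while a tram is overhead.

SEGMENT = 8.0            # length of a conducting segment, metres
JOINT = 3.0              # length of an insulating joint, metres
PITCH = SEGMENT + JOINT  # repeating unit along the track

def segment_under(position_m):
    """Index of the conducting segment at this track position,
    or None if the point lies on an insulating joint."""
    offset = position_m % PITCH
    return int(position_m // PITCH) if offset < SEGMENT else None

def live_segments(tram_front_m, tram_length_m):
    """Indices of all segments lying under the tram, i.e. the only
    ones that should be energised; everything else stays dead."""
    first = int(max(tram_front_m - tram_length_m, 0.0) // PITCH)
    last = int(tram_front_m // PITCH)
    return set(range(first, last + 1))

# A 33 m tram whose front is 50 m down the line covers segments 1-4.
print(sorted(live_segments(50.0, 33.0)))  # [1, 2, 3, 4]
```

In the real system the supply units spaced at 22m intervals do this switching in hardware, triggered by radio from the tram; the sketch only shows why no more than roughly a tram-length of rail is ever live at once.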
Despite in-service operating difficulties, with some retro-fitting of overhead equipment in Bordeaux, and the higher expense for installing and operating the infrastructure, APS has found favour elsewhere. Orléans, Reims and Angers have all placed orders for APS on sections of new-build alignments.
Nice had originally envisaged using APS for powering trams across the large historic squares, Place Masséna and Place Garibaldi. A policy change means the city's Alstom Citadis trams will have roof-fitted, Saft-produced, nickel metal hydride batteries, capable of providing up to 1km of travel at 30km/h.
Alstom is also involved with Dutch company CCM (Centre for Concepts in Mechatronics BV) in researching the application of flywheels as a stored energy source for tram propulsion.
A specially equipped Citadis tram has been tested in normal operating conditions in Rotterdam since 2005. Energy from braking is routed to the roof-mounted flywheel and released via a generator for propulsion when the vehicle restarts. The 30-year projected working life for the system approximates to that expected for new light rail vehicles.
FRANCE'S ROLLING STOCK – GROWING WITH RAIL
France has a long-standing reputation in many fields for technical innovation along with a tradition of supporting the products of native companies – both areas apparent with its tram fleets. However, when the first modern systems were emerging, France had little in the way of light-rail products.
The solitary Marseille Line 68 used Belgian-built PCCs and the re-equipping in Lille was initially with former German Düwag stock before new Italian Pininfarina-designed Breda trams were introduced in 1993. Even the new, high-profile Strasbourg system opened with British-built ABB Eurotrams in 1994.
With this in mind, the response of Alstom (earlier Alsthom and GEC Alsthom) in such a short time to the burgeoning domestic market has been as noteworthy as the proliferation of light rail systems.
Tramway français standard (TFS) vehicles emerged from developments for Nantes (initially high-floor only) and Grenoble (featuring a pioneering partial low-floor layout). Other TFS buyers included Rouen and Paris.
The approximate 80% penetration by Alstom of the domestic market by 2007 is largely due to the success of the later Citadis modular low-floor design. From its operational debut in Montpellier in July 2000, in its many guises and configurations, the Citadis has become the mainstay of French networks and a popular choice of transport authorities around the world.
Alstom's main competitor on the world stage, Bombardier, has made some inroads to the French light-rail market. In spite of being an early customer of Alstom (then Alsthom), Nantes subsequently ordered the Bombardier Incentro.
Although Bombardier, via a former constituent company, was the initial supplier to Strasbourg, the city's transport authority conversely went to Alstom for a Citadis with Eurotram-like styling for its later orders. More recently, Bombardier has produced a dramatically styled, nautically themed, five-section variant of its 100% low-floor Flexity Outlook for Marseilles' new system. German rolling stock giant Siemens has yet to penetrate the market.
ROLLING INTO THE FUTURE
In an international context, the proliferation of French systems since Nantes opened the first of the modern networks in 1985 has been remarkable.
With several new projects going live or reaching advanced planning, a majority of operational systems looking to expand, and the Alstom Citadis emerging as a world brand in tramway technology, France has taken centre stage for modern light rail.
Restraints on longer-term developments are likely to remain financial and political rather than technical.
Better still, light rail is able to accommodate emerging technologies such as stored and reused energy – consistent with the ecological agenda.
With all this in mind, France in the first decade of the 21st century could easily, and unwittingly, have become light rail's biggest laboratory and showroom for the tramways of the future.
Details about Statistical Analysis and Data Display:
Audience: Students seeking master's degrees in applied statistics in the late 1960s and 1970s typically took a year-long sequence in statistical methods. Popular choices of the course text book in that period prior to the availability of high speed computing and graphics capability were those authored by Snedecor and Cochran, and Steel and Torrie. By 1980, the topical coverage in these classics failed to include a great many new and important elementary techniques in the data analyst's toolkit. In order to teach the statistical methods sequence with adequate coverage of topics, it became necessary to draw material from each of four or five text sources. Obviously, such a situation makes life difficult for both students and instructors. In addition, statistics students need to become proficient with at least one high-quality statistical software package. This book can serve as a standalone text for a contemporary year-long course in statistical methods at a level appropriate for statistics majors at the master's level or other quantitatively oriented disciplines at the doctoral level. The topics include both concepts and techniques developed many years ago and a variety of newer tools not commonly found in textbooks.
Rent Statistical Analysis and Data Display 1st edition today, or search our site for other textbooks by Richard M. Heiberger. Every textbook comes with a 21-day "Any Reason" guarantee. Published by Springer.
Today’s Teens, Tomorrow’s Techies (T4)
Brooklyn Public Library
Kieron George, T4 Project Coordinator, email@example.com
During summer months, Brooklyn Public Library (BPL) librarians noticed inner city NYC teens spent much of their time playing basketball and sitting idle in the parks. After discovering the limited amount of educational programming for teens in the neighborhood, librarians asked teens, parents, and educators what teen opportunities were missing from their communities. Librarians learned that teens wanted advanced training with computers and the skills necessary to get a job. Moreover, teens, parents and educators often inquired about technology related volunteer and employment opportunities for teens at the library. Feedback from ten focus groups confirmed that there was a need for advanced technology training and volunteer opportunities. To fill the need, the T4 Volunteer Program was created.
Launched as a pilot initiative in 2005, the program puts a minimum of 120 youths each year through an eight-day Summer Technology Institute training to become T4 volunteers. The 32-hour training series is offered at three locations and covers topics including Microsoft Word, Excel, PowerPoint, and an introduction to library electronic resources, as well as how to troubleshoot and repair basic computer and tech issues, how to create a resume, and “soft topics” such as customer service and communication skills. Upon completion of the Summer Institute, teens volunteer 72 hours in their local library, sharing their newly acquired technology expertise. As volunteers, T4s continue to learn advanced technology skills while volunteering under the direction of library staff and participating in school-year workshops. Equally important, volunteers pass their knowledge along to patrons in their community libraries.
Results from pre/post T4 Summer Institute training show a significant majority of teens increase their technology skills. For instance, the 2009 T4 class showed the following educational gains: 79% of participants showed improvement in Microsoft Excel skills and 73% in hardware, networking, and troubleshooting. Each T4 volunteer is evaluated by his/her supervisor. Data indicate that a significant majority (97%) of T4 volunteers perform at or above satisfactory when assisting patrons. The majority (73%) of supervisors said they would recommend their T4 volunteer for employment as a library technology staff member. Perhaps the most qualitatively rich evaluation of T4 is the final essay submitted by T4 volunteers reflecting on their experience in the program. In final essays, T4 participant sentiments consistently indicate that the T4 Volunteer Program has a positive and transformational impact on youth development. For instance, a 2009 T4 graduate recounts in his final essay, “I got to experience what it was like to be a librarian while working in the library and it was actually a lot more interesting than I had expected. This was such a life changing experience that if I could I would definitely do it again.” | <urn:uuid:3f529f92-a863-44e6-9c52-aa4df9d6ed69> | CC-MAIN-2016-26 | http://www.urbanlibraries.org/today---s-teens--tomorrow---s-techies--t4--innovation-249.php?page_id=80 | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393093.59/warc/CC-MAIN-20160624154953-00074-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.955959 | 591 | 2.96875 | 3 |
PURVIS, GEORGE, farmer and agrarian leader; b. 1842 in Scotland; fl. 1867–94.
Nothing is known of George Purvis before he immigrated to British North America in 1867. Shortly after his arrival an English company engaged him to oversee its lumber and farming operations in the Ottawa valley. Active in community affairs at Portage-du-Fort, Que., and later at Arnprior, Ont., he became known locally as a prominent businessman and as a progressive farmer.
In 1881, attracted by the possibilities of the Canadian Prairies, Purvis left for Manitoba with his wife, Hannah, and their two sons, to settle near Brandon. It was not a good time to begin farming in the west. The high cost of agricultural equipment and supplies, coupled with Manitoba’s distance from markets, made it difficult to eke out a living. When a crop failure in 1883 and the collapse of already low farm prices made the situation of Manitoba farmers intolerable, Purvis joined in the founding of an association to protest against the federal government’s railway, land, and tariff policies, popularly believed to be at the root of the difficulties confronting settlers.
In December 1883 he was elected provincial secretary of the Manitoba and North West Farmers’ Union. In office he developed a reputation for moderation and common sense, qualities notably lacking among the union’s other leaders. By the time the association collapsed in April 1884 it had become little more than an extra-parliamentary extension of the provincial Liberal party. Purvis, a Conservative, was one of the few leaders to survive the débâcle. Shortly afterwards he joined the Manitoba and North-West Farmers’ Co-operative and Protective Union, which had been organized at Manitou in December 1883 for the cooperative marketing of grain. In June 1884 he was elected provincial secretary of the organization and within a short time he became its best-known figure.
Under his leadership, the union sold members’ wheat and arranged for the purchase of binder twine in bulk directly from the manufacturer. Purvis also discussed with officials of the Canadian Pacific Railway the question of establishing grain standards that were acceptable to western farmers. On the political front, he urged the federal government to alter its policies of national development so as to benefit the west.
As a result of his strenuous advocacy of the agrarian cause, he came under attack. He was accused of being in favour of Canada’s annexation to the United States, but the charge had little impact at the time and was without foundation. More important was the criticism of his management of the grain marketing program. In June 1884 he worked out an arrangement with the firm of Mitchell and Mitchell to buy farmers’ wheat, apparently unaware that the company’s capital was provided by the CPR. The following year the CPR withdrew its backing and Mitchell and Mitchell terminated the agreement with the union. Lacking capital, the union decided that Purvis should personally undertake to market its members’ wheat, building up capital out of the profits of the business. In December 1885 he was accused of having a secret agreement with the CPR by which he would profit at the expense of the farmers. Although the union supported him and an audit of the books later found everything in order, he resigned from the executive in January 1886.
The provincial press took his resignation as an admission of guilt and vilified him. By July 1886 his humiliation was complete. For two years he had neglected his farm in his devotion to the agrarian cause, and he now encountered personal financial difficulties. Looking for a fresh start, he moved his family to Minto (N.Dak.), where he remained until 1894. In that year he sold his farm; what became of him afterwards is not known.
The Jungle Theme of Power
So, in 1776, the United States declared its independence from Britain. We saw King George III as a tyrant and a bully, so we were like, later, guys. What's interesting is that the American Revolution was all about casting off the oppression of a single, unified symbol of injustice: the King and his government. What is challenging about imagining a socialist revolution à la Upton Sinclair is that there is no single problem. It's the whole economic system of the United States – the banks, the marketers, the factory owners, the foremen, the managers, the bosses, the stores, the judges, the incompetent government inspectors, the rigged elections, everything – that keeps the workingman down. How do you rise up against a decentralized system of power? Perhaps it is the sheer scope of the problems Sinclair points out that explains why The Jungle is pretty much only remembered for one thing: meatpacking. Issues in hygiene and food safety were one thing that progressives could address immediately through federal legislation – their King George III, if you will. Larger problems like poverty, illiteracy, and prostitution can't be solved with the establishment of a new federal agency; these problems endure today.
Questions About Power
- When does Jurgis begin to recognize hidden networks of power within the American economic system? What events open Jurgis's eyes to his lowly position in American power structures?
- Where is power concentrated in The Jungle? Who has power, and how do they use it?
- How do characters without power gain it in this system? Is there any way for disempowered individuals to move up the social ladder?
Chew on This
In The Jungle, there is no legitimate avenue for poor people to increase their power. Jurgis must resort to election rigging and selling out his union in order to achieve his temporary status as boss of the hog trimmers later on in the novel.
The Jungle repeatedly illustrates the saying that power corrupts and absolute power corrupts absolutely. There are no characters in positions of power over their fellow men who do not abuse this power in some fashion.
Transsphenoidal adenomectomy is a type of surgery that may be used to remove a tumor (adenoma) of the pituitary gland, which lies at the bottom of the brain. Using a special microscope, the surgeon reaches the pituitary gland by threading tiny instruments through a person's nostril or into a small opening made in the bridge of the nose.
Once the tumor is removed, the remaining portion of the pituitary gland works normally in most cases.
Transsphenoidal adenomectomy is a very delicate procedure. People who need this surgery are often referred to medical centers that specialize in the technique.
eMedicineHealth Medical Reference from Healthwise
To learn more visit Healthwise.org
© 1995-2014 Healthwise, Incorporated. Healthwise, Healthwise for every health decision, and the Healthwise logo are trademarks of Healthwise, Incorporated. | <urn:uuid:b7b95961-48ab-4677-9cb4-f97ecf333e60> | CC-MAIN-2016-26 | http://www.emedicinehealth.com/script/main/art.asp?articlekey=134327&ref=130043 | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399385.17/warc/CC-MAIN-20160624154959-00164-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.899576 | 188 | 2.671875 | 3 |
A push is on to regulate electronic cigarettes as health officials raise concerns about their safety.
Last week, the Food and Drug Administration announced plans to regulate e-cigarettes, banning sales to those under 18 and requiring nicotine warning labels. Their makers would eventually have to disclose the ingredients.
The move comes at a time when poison control centers are reporting more calls about people overusing or ingesting the liquid nicotine contained in the devices.
"Vaping," as it's called, has become an increasingly popular way to get a nicotine fix without smoking. E-cigarettes have soared in popularity in the past few years.
The devices have been marketed as a healthier alternative to smoking, but that claim has been controversial and debated.
"What we know right now is electronic cigarettes are not a regulated product by the FDA, so no one really knows what the full list of ingredients is in that synthetic nicotine," said Tricia Tilley of the state Department of Health and Human Services.
The liquid nicotine used in e-cigarettes comes in different colors and flavors, and health officials said that's part of the problem. They believe that attracts children, who are tempted to try them.
There have been eight calls to poison control centers in New Hampshire related to e-cigarettes since 2010 and 37 across New England. Most of those calls involved adults, but 15 were for children under the age of 5. Three involved teenagers.
"Some have had mild effects, but some have had more moderate impacts," Tilley said.
Those effects are similar to an overdose: rapid heartbeat, elevated blood pressure, nausea and vomiting.
But e-cigarette advocates dispute the concern the liquid nicotine is a danger.
"Fortunately, the problem itself is not that dire," said Carl V. Phillips of Consumer Advocates for Smoke-Free Alternatives Association. "It's incredibly difficult to poison yourself on e-cigarette liquid to an extent that is health-threatening."
Phillips is the scientific director of CASAA. He said it would basically take a beer mug full of liquid nicotine to kill an adult and proportionally less for a child. He said there's not as much nicotine in an e-cigarette as people might think.
Many bottles of liquid nicotine do indicate the nicotine concentration. But health officials said that without regulations, it's impossible to know if manufacturers are being truthful.
"There is a nicotine level on all the bottles," said Deb Tickel, owner of XSmoke Vape Smart in the Steeplegate Mall. "It goes from zero to 24. The numbers do go further, but I only go up to 24."
Tickel said the fear and negativity surrounding the product come from a lack of education.
"We let them know that every six to 10 puffs is the equivalent to a cigarette," she said.
She said she checks identification and doesn't like to sell to high-schoolers, even if they are 18.
Tickel and Phillips said it's a parent's responsibility to keep liquid nicotine out of the reach of children.
Tickel opened her store about five months ago. She's a former smoker who was turned onto e-cigarettes when her aunt was diagnosed with lung cancer.
"It took me three days," she said. "On the third day, I decided to put the cigarettes away, and I haven't had a cigarette (since then). It was a year ago on April 7."
Tickel said she feels healthier and has helped covert 250 people from smoking to smoke-less.
"These are such low risks that the risks are down in the area of everyday hazards, like eating dessert, in terms of how much it affects your health," Phillips said. "It is nothing at all like cigarettes."
Tilley said regulation will help consumers gauge the risk for themselves.
"Once there is some regulation on these products, which there seems to be coming down the path, we will have a better understanding of the ingredients and what the long-term and short-term health impacts are, either by poisoning or simply by using these products," she said.
There will be a 75-day public comment period before any FDA rules would be finalized.
E-cigarette makers would also have two years to submit applications to allow their products to remain on the market. | <urn:uuid:0ea52179-21f9-477e-86f0-013ae1c03d69> | CC-MAIN-2016-26 | http://www.wmur.com/special-reports/ecigarette-debate-continues-as-fda-announce-plans-to-regulate-industry/25689528?absolute=true | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395679.18/warc/CC-MAIN-20160624154955-00183-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.97753 | 879 | 2.578125 | 3 |
Detailed structure and explanatory notes
ISIC Rev.4 code B
Structure
This Section is divided into the following Divisions:
05 - Mining of coal and lignite
06 - Extraction of crude petroleum and natural gas
07 - Mining of metal ores
08 - Other mining and quarrying
09 - Mining support service activities
This section includes the extraction of minerals occurring naturally as solids (coal and ores), liquids (petroleum) or gases (natural gas). Extraction can be achieved by different methods such as underground or surface mining, well operation, seabed mining etc.
This section also includes supplementary activities aimed at preparing the crude materials for marketing, for example, crushing, grinding, cleaning, drying, sorting, concentrating ores, liquefaction of natural gas and agglomeration of solid fuels. These operations are often carried out by the units that extracted the resource and/or others located nearby.
Mining activities are classified into divisions, groups and classes on the basis of the principal mineral produced. Divisions 05, 06 are concerned with mining and quarrying of fossil fuels (coal, lignite, petroleum, gas); divisions 07, 08 concern metal ores, various minerals and quarry products.
Some of the technical operations of this section, particularly related to the extraction of hydrocarbons, may also be carried out for third parties by specialized units as an industrial service, which is reflected in division 09.
This section excludes the processing of the extracted materials (see section C - Manufacturing), which also covers the bottling of natural spring and mineral waters at springs and wells (see class 1104) or the crushing, grinding or otherwise treating certain earths, rocks and minerals not carried out in conjunction with mining and quarrying (see class 2399). This section also excludes the usage of the extracted materials without a further transformation for construction purposes (see section F - Construction), the collection, purification and distribution of water (see class 3600), separate site preparation activities for mining (see class 4312) and geophysical, geologic and seismic surveying activities (see class 7110).
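For readers who work with these codes programmatically, the two-level relationship described above can be represented as a simple lookup. This is a sketch only: the division titles are those of the published ISIC Rev.4 structure, and the helper function is an invented convenience, not part of the classification itself.

```python
# Sketch: resolving a 4-digit ISIC Rev.4 class code to its Section B
# division. The first two digits of a class code identify its division.

SECTION_B_DIVISIONS = {
    "05": "Mining of coal and lignite",
    "06": "Extraction of crude petroleum and natural gas",
    "07": "Mining of metal ores",
    "08": "Other mining and quarrying",
    "09": "Mining support service activities",
}

def division_of(class_code):
    """Return (division code, division title) for a class in Section B;
    raise ValueError for codes outside this section."""
    division = class_code[:2]
    if division not in SECTION_B_DIVISIONS:
        raise ValueError(f"class {class_code} is not in Section B")
    return division, SECTION_B_DIVISIONS[division]

print(division_of("0610"))
```

Note that a class such as 1104 (bottling of natural spring and mineral waters, excluded above) is correctly rejected, since its leading digits place it outside Section B.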
Vanitha, Varadharajan; Thiyagesan, Krishnamoorthy; Baskaran, Nagarajan (2011) Social Life of Captive Asian Elephants (Elephas maximus) in Southern India: Implications for Elephant Welfare. In: Journal of Applied Animal Welfare Science, 14 (1), pp. 42-58.

Full text not available from this repository.
Asian elephants in the wild live in complex social societies; in captivity, however, management often occurs in solitary conditions, especially at the temples and private places of India. To investigate the effect of social isolation, this study assessed the social group sizes and the presence of stereotypies among 140 captive Asian elephants managed in 3 captive systems (private, temple, and forest department) in Tamil Nadu, India, between 2003 and 2005. The majority of the facilities in the private (82%) and temple (95%) systems held a single elephant without opportunity for social interaction. The forest department managed the elephants in significantly larger groups than the private and temple systems. Among the 3 systems, the proportion of elephants with stereotypies was the highest in temple (49%) followed by private system (26%) and the forest department facility (6%); this correlates with the social isolation trend observed in the 3 systems and suggests a possible link between social isolation and abnormal elephant behavior separate from other environmental factors. The results of this study indicate it would be of greater benefit to elephant well-being to keep the patchily distributed solitary temple and private elephants who are socially compatible and free from contagious diseases in small social groups at "common elephant houses" for socialization.
Item Type: Journal Article
Additional Information: Copyright of this article belongs to Lawrence Erlbaum Associates, Inc.
Department/Centre: Division of Biological Sciences > Centre for Ecological Sciences
Date Deposited: 01 Mar 2011 10:50
Last Modified: 08 Jul 2011 07:14
Bryn Mawr Classical Review 2003.09.46
Christoph Catrein, Vertauschte Sinne. Untersuchungen zur Synästhesie in der römischen Dichtung. BzA 178. München/Leipzig: K.G. Saur, 2003. Pp. 240. ISBN 3-598-77727-2. EUR 82.00.
Reviewed by E.J. Kenney, Peterhouse, Cambridge
Word count: 1589 words
'Synaesthesia or CROSSED PERCEPTION, condition in which perception in one sensory mode arouses imagery from another mode'. So Britannica, 15th edn. The most generally familiar form of it is 'coloured hearing'; though few possessors of this faculty can have approached the astonishing range and nuanced character of the colours and textures evoked in the mind of Vladimir Nabokov by the sounds of letters: e.g. 'polished ebony' by French a; 'brassy with an olive sheen' by h; and so on right through the alphabet (Speak, Memory 34-6). There are well-attested cases of musicians 'hearing' colours, though, curiously, musical notes had no such effect on Nabokov. More commonly, numbers, the days of the week, or the months are perceived as coloured: see e.g. Bryher [Winifred Ellerman], The heart to Artemis. A writer's memoirs 103. In his 2003 Reith Lectures, broadcast by the BBC, Professor V.S. Ramachandran suggested that as many as one in two hundred people may have this faculty in some degree. He explained it as the result of 'crossing' in the brain, and intriguingly went on to argue that the neuroscientific investigation of the phenomenon can contribute to our understanding of literary metaphor, indeed of the evolution of language and abstract thought. In the context of Christoph Catrein's groundbreaking book this is a striking thought.
So far as Latin poetry is concerned, this is hitherto uncharted territory. Ancient critics by and large took little heed of the phenomenon; such discussion as there was centred on Greek. It is amusing to see how the scholiasts tied themselves in knots in their efforts to explain the famous crux at Septem 103 κτύπον δέδορκα· πάταγος οὐχ ἑνὸς δορός (16-17). Modern classical scholars have been slow to show interest. The lead given by C.A. Lobeck in his Ῥηματικόν sive verborum Graecorum et nominum verbalium technologia (1846) was not followed up by Hellenists until W.B. Stanford's Greek metaphor: studies in theory and practice (1936); and even after that little of note appeared apart from C.P. Segal's 'Synaesthesia in Sophocles' (ICS 2 88-96), which in turn appears to have attracted surprisingly little attention from students of Greek tragedy. Michael Silk's discussions of metaphor in his book Interaction in poetic imagery with special reference to early Greek poetry (1974) and his OCD article 'Metaphor and simile' impinge only tangentially on the phenomenon, without actually naming it. Most discussion has been by scholars in modern vernacular literatures. In transferring the debate to Latin poetry Catrein has opened up a rich new vein of critical enquiry.
His book is concerned only with synaesthesia as metaphor. After the preliminary historical chapter summarized above, ch. 2 'Synästhesien als Metaphern' examines two definitions that he considers useful as critical tools. The first is that of synaesthesia as 'bold' (kühn) metaphor. As classified by Harald Weinreich, whose treatment forms the basis for this and the following section, metaphors rank as 'bold' not when (as most of us would probably assume) they are what might be loosely termed far-fetched, but when they are contradictory. Thus, as Catrein applies this definition, at OT 371 τυφλὸς τά τ' ὦτα τόν τε νοῦν τά τ' ὄμματ' εἶ, Tiresias' blindness of hearing is a bolder metaphor than his mental blindness: 'Der νοῦς hat mit Sinneswahrnehmung nichts zu tun, die Ohren aber sind ein Sinnesorgan, nur eben das falsche' (30). This idea Catrein sees as helpful when appraising the significance of the metaphorical invasion of one sense by another. So, a 'bitter experience' does not count as especially bold, whereas a 'bitter colour' does (32). A second definition of synaesthesia, also adopted from Weinreich, is 'cognitive' metaphor. An image belongs in a conceptual context (Bildfeld); metaphor is the transference of an image from one such context to another, 'understanding and expressing one kind of thing in terms of another' (G. Lakoff and M. Johnson, Metaphors we live by , cit. 34). So in 'coloured hearing' an image crosses from the domain of sound to that of vision. The chapter ends with some methodological caveats which underline the difficulties attending any attempt at precise classification (35-42). Catrein differentiates two possible approaches, the 'diachronic' and the 'synchronic'. Is a metaphor deemed to count as such because it appears to depart from what is subjectively perceived as 'normal' usage, or should its credentials, so to say, be evaluated historically on the basis of what can be shown to have been 'normal' at the relevant time? 
The latter can be difficult or sometimes impossible to establish securely, but Catrein argues that more often than not there is enough evidence, in the shape of suitable prose texts, to make the historical approach a practicable tool in helping to determine metaphorical status (37-8). He grants that such categorizations can never be hard and fast, going on to point out that a 'dead' or 'lexicalized' metaphor can always be resurrected and given a fresh lease of life by a poet alive to its potentiality (38-9). I was interested to find enallage, zeugma and syllepsis drawn into the discussion (40-2). Enallage is indeed a kind of transference, formally considered, but as Catrein's somewhat cursory treatment implicitly acknowledges, it seems to have little to do with synaesthesia. Syllepsis, however, can perhaps be seen as a special case or variant of it which exploits both semantic ambiguity and sense-transgressional metaphor.1
The two chapters that make up the bulk of the book, ch. 3 'Synästhetische Bildfelder in der römischen Dichtung' (43-164), and ch. 4 'Zur Funktion taktiler Synästhesien bei Lukrez' (165-99), offer what might be described as a catalogue raisonné of synaesthetic expressions in the Latin poets, chiefly Lucretius, Virgil and Ovid. These are classified according as a sense-domain figures as transmitter or receiver. Thus, to give two examples out of the hundreds displayed, under the rubric 'Tastsinn als Bildspender' we find expressions such as Aen. 12.483 quotiens oculos coniecit in hostem (103), in which eyesight is treated as a concrete thing that can be projected and felt ('Gesehenes wird "gefühlt" oder "ertastet"', 101); and under 'Gesichtssinn als Bildempfänger' the famous description of the theatre awnings at DRN 4.78-83 (181). These chapters illustrate and discuss the many ways in which the poets, according to their individual sensibilities and literary strategies, responded to and developed the synaesthetic possibilities offered by the resources of poetic Latinity. They provide a rich resource for future commentators and, I would guess, much pabulum for seminars. By the same token they resist summarizing. In the rest of this review I concentrate on what Catrein has to say about Lucretius' use of synaesthesia, which seems to me especially thought-provoking.
It was Hugh Sykes Davies who first directed serious critical attention to Lucretius' use of metaphor as 'unique in European literature'.2 It was indeed for the purpose in hand an essential argumentative tool. The section in Catrein's book on 'Gesichtssinn als Bildspender' (46-81) demonstrates how from very early on uideo extended its semantic range to embrace other modes of perception, a familiar phenomenon in other languages: English 'don't you see?' equates exactly with Latin nonne uides? What had already by Lucretius' time become standard usage he exploited to its limits: 'Lukrez führt, so kann man zusammenfassen, das Verb videre an die Grenzen seiner semantischen Belastbarkeit' (58). Even more striking and significant for the argumentative strategy of the DRN is what he did with touch. It was a fundamental tenet that the senses cannot lie and do not contradict each other (4.478-96, cit. 165). That for a committed Epicurean would appear to rule synaesthesia as an argumentative tactic out of court. In fact Lucretius employed it without scruple wherever it might serve to reinforce his message. So in the wonderful description of dawn at 2.44-8 the point that these phenomena are material in origin and operation is paradoxically underlined by the interplay of synaesthetic metaphor (170-1). Catrein's analysis brilliantly demonstrates the originality and ingenuity with which Lucretius invests familiar metaphors with new life so as to bring out their underlying materiality (187). He allows that his use of metaphor cannot in every case be shown to subserve the Epicurean message but adds engagingly 'Dennoch fühlt sich der Interpret bei Lukrez zu Deutungen berechtigt, für die er sich bei anderen Dichtern den Vorwurf der Überinterpretation einhandelte' (197). Well, as Mr Weller junior remarked in another connexion, it's an amiable weakness, to which I am prone myself with Ovid. 
Catrein emphasizes that the DRN is addressed, not to the converted but to the potential convert, personified in Memmius: 'wie jeder gute Lehrer entfaltet Lukrez nicht einfach nur den neuen Stoff vor seinem Schüler, sondern geht auf ihn zu und holt ihn in seiner Welt ab; zu dieser Welt gehört aber auch nichtepikureischer Sprachgebrauch' (198). (Whether Memmius himself was equipped by nature or upbringing to respond is something we can only speculate about.) One lasting legacy of Lucretius' engagement with synaesthesia, Catrein suggests, is the frequency in later poetry, especially Virgil's, of synaesthetic metaphor involving touch ('Tastsinn als Bildspender', 100-62). He leaves his readers with the thought that what for Lucretius had been essentially a didactic ploy became, through his inspired use of it, common poetic currency: 'dies wäre ein schwaches, aber nicht uninteressantes Element epikureischen "Nachleben"' (199).
Catrein is a careful writer, scrupulous to alert the reader to the complexities of the subject (13) and at great pains to clarify or qualify his argument where necessary. There are serviceable bibliographies and indexes of Latin words and passages discussed. His book demands and deserves attentive reading and is one which no serious student of Latin poetry should neglect.
1. Catrein touches on the problem of distinguishing syllepsis from zeugma (41 n.134). The failure of ancient grammarians to distinguish the two is, it seems to me, no reason why their modern successors should tamely follow suit: they are two different things. All the examples of zeugma cited by Catrein at pp. 156-7 I should call syllepsis. See G. Tissol, The face of Nature (1997) 219-20.
2. 'Notes on Lucretius', The Criterion 11 (1931-2) 25-42 at 31-2 = C.J. Classen, ed., Probleme der Lukrezforschung (1986) 273-90 at 279-80.
Six schoolgirls in Peru's capital city pass through 2,000-year-old archaeological ruins as part of their daily routine. It's a poignant image and a tribute to how cultural patrimony can appear to integrate seamlessly with daily life.
Click on the photo and you’ll land on an article in Peru’s leading newspaper, El Comercio, about its efforts to raise awareness to preserve Lima’s few remaining pre-Columbian monumental structures.
The article by Javier Lizarzaburu opens with a statistic: Lima has 350 archaeological sites. At first glance that does not seem like a few. It sounds like an enviable abundance of cultural patrimony.
But not if you take into account that in the last 200 years, 60 to 70 percent of Lima’s huacas – ancient temples, forts, cemeteries, administrative centers and their contents – have been flattened, burned and buried.
Some of the worst devastation occurred during the 20th century.
Major burial mounds were knocked down and plowed under to make way for the city zoo, university campuses, and several hospitals. Construction of private estates and major avenues cut directly through dozens of massive pre-colonial ruins. In recent decades, impoverished Andean migrant squatters, seeking a better life in the coastal capital, doused any mummies they found with gasoline and set them ablaze, rather than risk being forced from their makeshift homesteads in the names of science and Intangible Cultural Heritage.
El Comercio’s “nuisance to opportunity” article was preceded a day earlier by an excellent interactive map of Lima’s huacas, along with photos of 60 of the capital’s under-appreciated pre-Inca archaeological sites.
For the most part, Lima’s 350 archaeological ruins are the scant remains of ancient adobe pyramids and walls, which stand out on Lima’s chaotic streets like islands in a churning sea of poorly planned and barely controlled urban development.
Getting a handle on that legacy in a teeming city of nine million inhabitants — even now after more than a decade of booming economic growth — remains a huge challenge. Competing priorities often boil over into political conflict.
An example is the battle between the Ministry of Culture and the Municipality of Ate Vitarte over a planned highway extension from Avenida Javier Prado to the Central Highway. The flyover, as planned, would gouge a broad path through the middle of the protected Puruchuco/Huaquerones archaeological complex, potentially destroying priceless, as-yet-undiscovered relics. The limited excavations conducted in the protected zone by archaeologists Elena Goycochea and Guillermo Cock have produced the richest treasure trove of mummies and artifacts pinpointing the historical transition from Inca imperial rule to the Spanish Colonial era.
Where major public works projects aren't a burning issue, some of Lima's districts have tried to find economic models to preserve their archaeological sites, satisfying both the needs of science and the local communities.
For the last decade, Lima's district of Miraflores has leased space to the Huaca Pucllana Restaurant, one of the city's finest gourmet eateries, where patrons dine just meters from the magnificently lit adobe pyramid built by the Lima culture (200-700 AD). A portion of the restaurant's profits is allocated to restoration and ongoing scientific research at the site.
Most archaeological ruins don't have the benefit of being located in the posh, tourist-rich environs of Miraflores. For less fortunate communities, the possibility of such a marriage between an archaeological ruin and a five-star restaurant drawing thousands of foreign tourists is a distant dream.
That does not mean, however, that there aren’t riches to be derived from ancient huacas, wherever they are located.
“There are people who live in front of archaeological monuments that are garbage dumps or refuges for lowlifes,” Pedro Pablo Alayza, deputy director of cultural affairs for the Municipality of Lima, told El Comercio.
“Nobody wants to live near a huaca that is a drug den,” he said. “Once we reclaim the monument, we recover it for the community because it automatically means greater security. Also, in many zones there are no parks and these can be converted into shared public spaces. Not all of them have to be geared toward tourism.”
José Félix Huaring is a high school teacher and general coordinator of the Kusillaqta Project, a grass-roots cultural organization that’s been instrumental in reclaiming the Huaca Mangomarca in San Juan de Lurigancho. Just five years ago, the ancient site was a garbage-strewn haven for drug addicts and violent gangs.
The Kusillaqta Project has helped turn the huaca into a valued community asset. “This includes exhibitions,” Félix told El Comercio, “and regular tasks to clean the huaca, with students from various schools and local residents, and with support from the National Institute of Culture and the municipality.”
During the last four years, Mangomarca has been the site of a community festival celebrating the Inca December solstice ceremony with an elaborate re-enactment by actors and dancers, young and old, from the neighborhood.
Ok, the emphasis is on “may”. “…[L]earning to swim early in life may give kids a head start in developing balance, body awareness and maybe even language and math skills.”
Am blessed to be able to swim at work during lunch. The campus has a gym, with swim privileges at the hotel pool on campus. Maybe the swim is keeping some math skills intact!
From the 20 May 2015 Science News article
Loosely based on something our mother told us, the claim is that learning to swim early in life may give kids a head start in developing balance, body awareness and maybe even language and math skills.
Mom may have been right. A multi-year study released in 2012 suggests that kids who take swim lessons early in life appear to hit certain developmental milestones well before their nonswimming peers. In the study, Australian researchers surveyed about 7,000 parents about their children’s development and gave 177 kids aged 3 to 5 years standard motor, language, memory and attention tests. Compared with kids who didn’t spend much time in the water, kids who had taken swim lessons seemed to be more advanced at tasks like running and climbing stairs and standing on their tiptoes or on one leg, along with drawing, handling scissors and building towers out of blocks.
Hitting milestones related to motor skills isn’t so surprising, the authors note, since swimming is a very physical activity. A bit more unexpected, they say, are the swimming kids’ advanced skills in language and math — tasks like counting, naming objects and recognizing words and letters. Kids who swam also seemed to be better at following directions. And, in some areas, kids had proportionally better scores on the development tests relative to how long they had been taking lessons.
The authors admit that they can’t conclusively claim that swimming alone is responsible for the developmental advances because the analysis was based on survey data and limited testing with young children. “Simply, we can say that children who participate in swimming achieve a wide range of milestones … and skill, knowledge and dispositions … earlier than the normal population,” the researchers write.
…Early research suggests that interactive media, such as electronic books and learn-to-read applications, can be useful in teaching vocabulary and reading comprehension, but only in children of preschool age or older. The potential educational benefits for children under two are questioned, as research on interactive media in this age group is scant, and it is well known that infants and toddlers learn best through hands-on and face-to-face experiences.
This commentary notes that while mobile device use by children can provide an educational benefit, the use of these devices to distract children during mundane tasks may be detrimental to the social-emotional development of the child. The researchers ask “If these devices become the predominant method to calm and distract young children, will they be able to develop their own internal mechanisms of self-regulation?”
“It has been well-studied that increased television time decreases a child’s development of language and social skills. Mobile media use similarly replaces the amount of time spent engaging in direct human-human interaction,” explained corresponding author Jenny Radesky, MD, clinical instructor in Developmental-Behavioral Pediatrics at Boston University School of Medicine and a former fellow in pediatrics at Boston Medical Center.
The authors question whether heavy device use during young childhood could interfere with the development of empathy and of social and problem-solving skills, which are typically acquired through exploration, unstructured play and interaction with peers.
Hold the Phone
…there may be some serious drawbacks to never unplugging.
Dr. Radesky, a clinical instructor in pediatrics at Boston University School of Medicine and a fellow in the Division of Developmental and Behavioral Pediatrics at BMC, set out to better understand how all the new devices at our fingertips affect both parents’ and children’s behavior. What she found may make us look at our digital attachment in a new light, especially as we consider the potential impact on our relationship with our kids and their development.
…babies with trouble self-regulating also tended to have higher levels of exposure to media devices.
… Infants and toddlers whom parents characterized as most fussy also had the most media exposure, even after accounting for other factors such as socio-demographics and home environment. The study noted early childhood is a crucial time for forming lifelong media habits and suggested the benefit of managing children's media exposure for both amount and content.
Dr. Radesky… recommends, if possible, having another adult come over to help soothe the baby for a bit instead of turning to a media device “babysitter,” so the parent can have a moment to himself or herself.
And instead of all-out bans, Dr. Radesky suggests establishing a “no-device rule” at certain times of the day or in particular places in the home.
…Although there are no current guidelines, Dr. Radesky has funding from the American Academy of Pediatrics to develop guidelines for how pediatricians can talk to families about mobile media use.
[Podcast] Early Stress Gets Under the Skin: Promising Initiatives to Help Children Facing Chronic Adversity
Disadvantaged children who often experience deep poverty, violence, and neglect simultaneously are particularly vulnerable to the pernicious effects of chronic stress. New research reveals that chronic stress alters children's rapidly developing biological systems in ways that undermine their ability to succeed in school and in life. But there is good evidence that specialized programs can help caretakers learn to be more supportive and responsive. High-quality childcare can offer a safe, warm, and predictable environment amid otherwise chaotic lives, and home visiting programs can help both parents and foster parents learn to provide an environment of greatly reduced stress for their children.
On May 7, Princeton University and the Brookings Institution released the Spring 2014 volume and accompanying policy brief of the Future of Children. The release event featured researchers and policy experts who explained how chronic stress “gets under the skin” to disrupt normal development and how programs can provide the support so urgently needed by children who face chronic stress.
Children who are malnourished during their first 1,000 days (conception to age 2) often experience developmental setbacks that affect them for life.
To that end, philanthropic groups have funded massive global health initiatives for impoverished infants and pregnant women around the world. While money flows justifiably to this cause, programs for children past the 1,000-day mark are seen as having little hope, and garner less support.
But new research from Brigham Young University is finding that global health workers should not give up on impoverished children after that critical time frame.
In a longitudinal study of 8,000 children from four poverty-laden countries, BYU health science assistant professor Ben Crookston and colleagues found that the developmental damage of malnutrition during the first 1,000 days is not irreversible.
“The first 1,000 days are extremely critical, but we found that the programs aimed at helping children after those first two years are still impactful,” Crookston said.
Specifically, the study found that nutritional recovery after early growth faltering might have significant benefits for schooling and cognitive achievement.
The data for the study, which comes from the international “Young Lives” project led by the University of Oxford, tracked the first eight years of life of children from Ethiopia, Peru, India and Vietnam.
Initially, Crookston and his colleagues found what they expected with the data: Children who had stunted growth (in this case, shorter than expected height at 1 year of age) ended up behind in school and scoring lower on cognitive tests at 8 years of age.
However, kids who experienced “catch-up growth,” scored relatively better on tests than those who continued to grow slowly and were in more age-appropriate classes by the age of 8.
- Malnourished children still have hope beyond first 1,000 days (eurekalert.org)
- Hospitals set rules for treating malnutrition (vietnamnews.vn)
- Three Steps to Ending Child Hunger (healthandcultureblog.wordpress.com)
DURHAM, N.C. – Children who grow up in poverty are more likely than wealthier children to smoke cigarettes, but they are less likely to binge drink and are no more prone to use marijuana, according to researchers at Duke Medicine.
The researchers also found that economic strains in early life – including family worries about paying bills or needing to sell possessions for cash – independently erode a child’s self-control, regardless of strong parenting in adolescence. Lack of self-control often leads to substance use.
The findings, appearing July 30, 2013, in the Journal of Pediatric Psychology, debunk common assumptions about who abuses substances, and provide a basis for better approaches to prevent young people from falling into drug and alcohol addiction.
“Poverty during childhood not only appears to affect child development, but can have lasting effects on the types of health choices made during adolescence and early adulthood, especially as it relates to cigarette smoking,” said senior author Bernard Fuemmeler, Ph.D., MPH, MS, associate professor in Community and Family Medicine at Duke University School of Medicine. “Economic strains may shape an individual’s capacity for self-control by diminishing opportunities for self-regulation, or affecting important brain structures.”
Fuemmeler and colleagues at Duke set out to examine the direct effect of childhood economic strains on smoking, binge drinking, and marijuana use in young adults. They also sought to determine how financial difficulties impact self-control, and how positive parenting might mitigate the tendency to use drugs and alcohol.
The group analyzed data from 1,285 children and caregivers included in a representative sample of U.S. families studied from 1986-2009. Economic status was measured by annual family income, plus a survey with questions about economic problems such as difficulty paying bills or postponing medical care. Additional information was gathered to gauge childhood self-control and parental interactions.
Among the study participants who were transitioning to adulthood, young people who lived in poverty as children were far more likely to become regular cigarette smokers than children who grew up in wealthier households. The impoverished children also scored low on self-control measures.
“Poor self-control may be a product of limited learning resources and opportunities for developing appropriate behaviors,” Fuemmeler said.
Binge drinking, however, was much more common among the wealthier young people. And surprisingly, those who had good self-control as children were more likely to engage in heavy episodic drinking as young adults.
Neither wealth nor poverty appeared to influence marijuana use, although positive parenting did reduce the use of this drug. Parents who were nurturing and accepting, in fact, diminished the likelihood of young people using any of the substances.
The researchers also found no correlation between economic hardship and poor parenting – a contradiction to some other studies.
“We suspected we’d find a relationship between parenting and economic problems – the idea that economic strains may cause parents to have less capacity to deal with their children, but that relationship wasn’t there,” Fuemmeler said. “That means it’s not necessarily poverty that affects the parenting strategy, but poverty that affects the children’s self-control.”
Fuemmeler said the findings are important given the increase in U.S. children living in poverty. The U.S. Census Bureau reported 22 percent of children lived in poverty in 2010, compared to 18 percent in 2000.
“Continued work is needed to better understand how economic strains may influence the development of self-control, as well as to identify other potential mediators between economic strains and substance use outcomes,” Fuemmeler said.
In addition to Fuemmeler, study authors include Chien-Ti Lee, Joseph McClernon, Scott H. Kollins and Kevin Prybol.
The National Institutes of Health (RO1 DA030487), the National Cancer Institute (K07CA124905) and the National Institute on Drug Abuse (K24DA023464) funded the study.
- Could personality in childhood predict how teens will respond to drinking? (globalnews.ca)
- Aussie expert: Legalize marijuana to protect teens from binge drinking (rawstory.com)
- The Persistent Geography of Disadvantage (theatlanticcities.com)
- The ‘Boy Crisis’: Is It Fictional? (ideas.time.com)
Come back Mom and Dad
You’re growing apart; you know that I’m growing up sad
I need some attention
I shoot into the light.
– Peter Gabriel, “Family Snapshot”
Purposeless boys are dangerous.
Michael Gurian, in his book The Purpose of Boys (2010), lists some of the effects of the growing population of boys without purpose.
- For every 100 girls in public schools, 335 boys are expelled.
- For every 100 girls ages 15-19 who commit suicide, 549 boys in the same age range kill themselves.
- For every 100 women ages 18-21 in correctional facilities, there are 1,430 men behind bars.
- For every 100 American women who earn a bachelor’s degree, 73 American men earn the same degree.
From an email recently received from USA.gov
October is Children’s Health Month. If you are a parent or caregiver, check out these resources to help promote your child’s good health:
- Vaccines — Vaccination is one of the best ways to protect children from several potentially serious diseases. Get recommended vaccine information based on your child’s age group.
- Nutrition Resources and 10 Kid-Friendly Veggies and Fruits (pdf) — Encourage children to eat vegetables and fruits by making it fun. Get ideas for healthy snacks and meals.
- Child Development — Get the basics about healthy development; learn about specific conditions that affect development; get parenting tips; and more.
- Developmental Milestones — Skills such as crawling, walking, and waving are developmental milestones. Check out milestones for children between the ages of two months and five years.
- Oral Health — Find out what you can do to help prevent tooth decay and other oral diseases.
- Child Safety — Get resources to help keep your child safe during different stages of development.
- Physical Activity — Children need 60 minutes of play with moderate to vigorous activity every day. Get ideas for steps you can take to increase your child’s level of activity.
Many elements contribute to a child’s good health and overall well-being. Find additional topics on children’s health.
- Are Kindergarten Kids Getting Their Vaccines? (children.webmd.com)
- Oral Health Tips for the Whole Family (juiciestdeals.com)
Reliance on supernatural explanations for major life events, such as death and illness, often increases rather than declines with age, according to a new psychology study from The University of Texas at Austin.
The study, published in the June issue of Child Development, offers new insight into developmental learning.
“As children assimilate cultural concepts into their intuitive belief systems — from God to atoms to evolution — they engage in coexistence thinking,” said Cristine Legare, assistant professor of psychology and lead author of the study. “When they merge supernatural and scientific explanations, they integrate them in a variety of predictable and universal ways.”…
According to the findings, participants of all age groups agreed with biological explanations for at least one event. Yet supernatural explanations such as witchcraft were also frequently supported among children (ages 5 and up) and universally among adults.
Among the adult participants, only 26 percent believed the illness could be caused by either biology or witchcraft. And 38 percent merged biological and supernatural explanations into one theory. For example: “Witchcraft, which is mixed with evil spirits, and unprotected sex caused AIDS.” However, 57 percent combined both witchcraft and biological explanations. For example: “A witch can put an HIV-infected person in your path.”
Legare said the findings contradict the common assumption that supernatural beliefs dissipate with age and knowledge.
“The findings show supernatural explanations for topics of core concern to humans are pervasive across cultures,” Legare said. “If anything, in both industrialized and developing countries, supernatural explanations are frequently endorsed more often among adults than younger children.”
The results provide evidence that reasoning about supernatural phenomena is a fundamental and enduring aspect of human thinking, Legare said.
“The standard assumption that scientific and religious explanations compete should be re-evaluated in light of substantial psychological evidence,” Legare said. “The data, which spans diverse cultural contexts across the lifespan, shows supernatural reasoning is not necessarily replaced with scientific explanations following gains in knowledge, education or technology.”
- People Merge Supernatural and Scientific Beliefs When Reasoning With the Unknown, Study Shows (yubanet.com)
- Supernatural Beliefs Increase with Age, Study Finds (sciencedaily.com)
- To make magic ritual work: Add steps. Repeat. (futurity.org)
- People are more likely to believe in magic spells that are repetitious and time-consuming [Psychology] (io9.com)
- Legare and Souza’s “Evaluating Ritual Efficacy” (danharms.wordpress.com)
- Study Shows Repetitious, Time-Intensive Magical Rituals Considered More Effective (medicalnewstoday.com)
School absenteeism is a significant problem, and students who are frequently absent from school more often have symptoms of psychiatric disorders. A new longitudinal study of more than 17,000 youths has found that frequently missing school is associated with a higher prevalence of mental health problems later on in adolescence, and that mental health problems during one year also predict missing additional school days in the following year for students in middle and high school….
- School Absenteeism, Mental Health Problems Linked (psychcentral.com)
- Chronic School Absenteeism Linked to Mental Health Problems (nlm.nih.gov)
Teens who more openly express their own viewpoints in discussions with their moms, even if their viewpoints disagree, are more likely than others to resist peer pressure to use drugs or drink.
That’s one of the findings of a new longitudinal study by researchers at the University of Virginia. The study appears in the journal Child Development.
The researchers looked at more than 150 teens and their parents, a group that was racially, ethnically, and socioeconomically diverse. The teens were studied at ages 13, 15, and 16 to gather information on substance use, interactions with moms, social skills, and close friendships. Researchers used not just the youths’ own reports, but information from parents and peers. They also observed teens’ social interactions with family members and peers.
They found that teens who hold their own in family discussions were better at standing up to peer influences to use drugs or alcohol. Among the best protected were teens who had learned to argue well with their moms about such topics as grades, money, household rules, and friends. Arguing well was defined as trying to persuade their mothers with reasoned arguments, rather than with pressure, whining, or insults.
“The healthy autonomy they’d established at home seemed to carry over into their relationships with peers,” suggests Joseph P. Allen, Hugh P. Kelly Professor of Psychology at the University of Virginia, who led the study. …
- Study: Argumentative Teens More Likely To Resist Peer Pressure (clutchmagonline.com)
- Argumentative Teens More Likely to Resist Peer Pressure (psychcentral.com)
- Teens Who Butt Heads With Mom Better At Resisting Peer Pressure (livescience.com)
- Teen, mom debates may help resist peer pressure (cbc.ca)
- Arguing with Mom Helps Teens Fend Off Peer Pressure (healthland.time.com)
- Teens who argue with mom might resist peer pressure (ctv.ca)
- Does Your Teen Constantly Challenge You? (nlm.nih.gov)
- Teens who stand up against mom are better at resisting peer pressure: study (theglobeandmail.com)
- Teens who argue with mom might resist peer pressure: study (ctv.ca)
- Teens: Why Arguing With Mom Helps Fend Off Teen Pressure (healthland.time.com)
COLUMBUS, Ohio – Both children and the elderly have slower response times when they have to make quick decisions in some settings.
But recent research suggests that much of that slower response is a conscious choice to emphasize accuracy over speed.
In fact, healthy older people can be trained to respond faster in some decision-making tasks without hurting their accuracy – meaning their cognitive skills in this area aren’t so different from younger adults.
“Many people think that it is just natural for older people’s brains to slow down as they age, but we’re finding that isn’t always true,” said Roger Ratcliff, professor of psychology at Ohio State University and co-author of the studies.
“At least in some situations, 70-year-olds may have response times similar to those of 25-year olds.”
Ratcliff and his colleagues have been studying cognitive processes and aging in their lab for about a decade. In a new study published online this month in the journal Child Development, they extended their work to children.
Ratcliff said their results in children are what most scientists would have expected: very young children have slower response times and poorer accuracy compared to adults, and these improve as the children mature.
But the more interesting finding is that older adults don’t necessarily have slower brain processing than younger people, said Gail McKoon, professor of psychology at Ohio State and co-author of the studies.
“Older people don’t want to make any errors at all, and that causes them to slow down. We found that it is difficult to get them out of the habit, but they can with practice,” McKoon said.
Researchers uncovered this surprising finding by using a model developed by Ratcliff that considers both the reaction time and the accuracy shown by participants in speeded tasks. Most models only consider one of these variables.
“If you look at aging research, you find some studies that show older people are not impaired in accuracy, but other studies that show that older people do suffer when it comes to speed. What this model does is look at both together to reconcile the results,” Ratcliff said.
Ratcliff, McKoon and their colleagues have used several of the same experiments in children, young adults and the elderly….
- Elderly can be as fast as whippersnappers in some brain tasks (scienceblog.com)
- Elderly Can Be As Fast As Young In Some Brain Tasks (medicalnewstoday.com)
- Elderly Can Be As Fast As Young in Some Brain Tasks (neurosciencenews.com)
- 70 year olds can be as fast as 25 year olds in some brain tasks (tricitypsychology.com)
- Elderly can be as fast as young in some brain tasks, study shows (eurekalert.org)
- Aging Brains Match Youth in Some Mental Tasks (livescience.com)
- Elderly can be as fast as young in some brain tasks, study shows (sciencedaily.com)
Hamilton, ON (August 16, 2011) — A successful new rehabilitation approach to treating children with cerebral palsy puts its focus on where a child lives and plays, not just improving the child’s balance, posture and movement skills.
In a new study, McMaster University and University of Alberta researchers report that this approach, called a “context-focused intervention”, is just as beneficial as traditional child-focused therapy, offering parents an additional treatment option for their child.
The McMaster study, in conjunction with researchers at the University of Alberta’s Faculty of Rehabilitation Medicine and Alberta Health Services in Calgary, is the first randomized trial to examine the effects of therapy focused on changing a child’s task or environment, not the child. It appeared in the July issue of the medical journal Developmental Medicine and Child Neurology.
Context-focused and child-focused therapies were evaluated in a randomized controlled trial of 128 children with cerebral palsy ranging in age from one year to almost six years old. The children, from 19 different rehabilitation centres in Ontario and Alberta, received one of the two approaches for six months. Therapy was provided by occupational therapists and physical therapists. Between assessments at six and nine months, they returned to their regular therapy schedule.
Researchers found that while both groups improved significantly over the study, there were “no significant differences in daily functioning” between the two treatment groups, reported lead author Mary Law, professor in McMaster’s School of Rehabilitation Science and co-founder of the university’s CanChild Centre for Childhood Disability Research.
Cerebral palsy is caused by damage in the brain before or just after birth that results in problems with muscle tone and movement, and impacts ability to perform everyday activities. More than 50,000 Canadians have cerebral palsy, which occurs in about two of 1,000 babies.
During the study, parents in both groups received general information and education about their child’s disability as well as specific strategies to practice at home.
In the child-focused approach, therapists identified the underlying impairment – tone, posture, range of motion – and provided therapy to improve the child’s skills and abilities…..
- Using Play and Technology for Therapy (mentalflowers.wordpress.com)
- HealthWatch: Cerebral Palsy (newyork.cbslocal.com)
- Smartphones, Tablets Provide Therapy for Cerebral Palsy, Autism (blogs.forbes.com) | <urn:uuid:735fa459-e2ca-4c41-823a-8e5cb53ae28f> | CC-MAIN-2016-26 | https://jflahiff.wordpress.com/tag/child-development/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783402746.23/warc/CC-MAIN-20160624155002-00073-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.950645 | 5,546 | 2.765625 | 3 |
If one sign of a great game is staying power, then The Oregon Trail stands out for over forty years of enduring popularity. The game has also outlasted many different platforms.
If, like me, you played it growing up, you remember that the game challenges players to guide their wagon party across the great American West in 1848. To successfully traverse the continent, you must choose supplies, set your travel speed, cross rivers, trade with Native Americans, hunt for animals, survive disease and storms, and make wagon repairs. Choose poorly, and one or more of your party dies along the trail.
Three Carleton College students invented the game in 1971, when student teacher Don Rawitsch asked fellow seniors Paul Dillenberger and Bill Heinemann, “Can’t we do something with the computer in my history class?” They developed a text-based version of The Oregon Trail, and later, when Rawitsch joined the Minnesota Educational Computer Consortium (MECC), he made the game available to students throughout the state. The game play was primitive. Students dialed in on teletype machines and hunted by typing in “BANG.” Players who spelled the word correctly got the award message, “Good Eatin’ Tonight,” and some food for their travel party.
The Oregon Trail was not the only 1970s game to simulate historical adventures. Programmers turned to the ancient camel caravan trades, the rule of Hammurabi, and Civil War battles for inspiration. But unlike most of these other games, The Oregon Trail successfully migrated from mainframe computer to the newly popular microcomputers.
In 1979 MECC ported the game to the Apple II, and players could then hunt by shooting at graphics of deer, bison, or rabbits dashing across the screen. The game’s popularity expanded enormously in school districts all over the country, and as a result, in 1985 MECC released it to the general public. You can play an emulated version here. Continuously updated, the game is still a steady seller and has even migrated to the iPhone.
The Oregon Trail succeeded because it was simple, yet challenging, but endured because MECC, a stable and committed creator of educational software titles such as Number Muncher and Lemonade Stand, invested the resources to keep it updated and fresh and get it into classrooms so that teachers could easily tie it into the American History curricula. For the majority of children who didn’t have home computers in the 1980s and 1990s, The Oregon Trail was often not only the first computer game they played, but also their first introduction to computers. For a collection of people’s memories playing the game, see Dave Lester’s 2006 Facebook survey.
Today, there are many other historical simulations and many more opportunities for teachers to use games in the classroom. But 39 years after its creation, The Oregon Trail still stands out as one of the most effective simulation games. When children stock their supplies, load up their wagon, and head West, they start to understand the challenges of Western migration, build some valuable decision-making skills, and have fun. And hopefully no one dies of dysentery along the way.
Did you play The Oregon Trail in school? Share your memories! | <urn:uuid:d1bd93d2-f71b-410a-a4a3-5f7e7729d62d> | CC-MAIN-2016-26 | http://www.icheg.org/blog/chegheads/2010/02/four-decades-on-the-oregon-trail/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397636.15/warc/CC-MAIN-20160624154957-00189-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.960185 | 671 | 3.015625 | 3 |
Trading aviation emissions
Posted 1/02/2012 by Anita Talberg
For the first time, Qantas and other airlines are being held accountable for greenhouse gas emissions from their international flights—at least for those flights using an airport within the European Union (EU). The move by the EU to price and restrict greenhouse gas emissions from European flights has been a contentious one. This FlagPost explains the basic mechanics of the EU’s decision and highlights some of the issues being raised.
The latest report from the Intergovernmental Panel on Climate Change anticipates that by 2050 international and domestic aviation emissions will be around five percent of what global annual ‘man-made’ emissions were in 2000. International aviation was excluded from the Kyoto Protocol because of negotiating difficulties. The responsibility of delivering an emissions reduction policy response was referred to the International Civil Aviation Organisation (ICAO). By 2005, with aviation emissions increasing and no likely outcome from the ICAO process, the EU began investigating alternative options and finally resolved to bring aviation into the EU emissions trading scheme (ETS).
Bringing aviation into the ETS
The EU ETS has existed since 2005 and is currently in its second phase. The Aviation Directive amends the EU ETS legislation to include aviation activities from 1 January 2012. For this year, aviation emissions will be capped at 97 per cent of the average annual aviation emissions that occurred in the period 2004-06. After 2012, the percentage will be reduced to 95.
Each year, airlines will receive emissions allowances—known as EU Aviation Allowances (EUAAs)—up to their cap. One EUAA represents the right to emit one tonne of greenhouse gases. If an airline exceeds its cap it will need to purchase supplementary allowances from the market. In theory, airlines will choose to reduce their emissions if they can do so at a cost below the price of allowances, and then sell surplus EUAAs at a profit. However, in practice, at least at first, the price of fuel is likely to have a stronger impact on such decisions than that of EUAAs.
At the beginning of each year, at least 15 per cent of EUAAs will be auctioned by EU Member States. The rest will be provided free of charge with a small percentage being kept aside for new market entrants. The penalty for non-compliance is €100 per tonne of carbon dioxide equivalent plus the cost of allowances not surrendered. The Aviation Directive exempts from liability any non-EU countries that have adopted ‘measures, which have an environmental effect at least equivalent to that of this Directive’.
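As a rough illustration of this arithmetic, the sketch below uses made-up figures: the airline, its verified emissions, its free allocation and the allowance price are all hypothetical, and only the €100 penalty rate comes from the Directive as described above.

```python
PENALTY_EUR_PER_TONNE = 100  # excess-emissions penalty set by the Directive

def annual_cost(verified_t, free_allowances_t, eua_price_eur, purchased_t=None):
    """Cost of one compliance year for a hypothetical airline.

    verified_t        -- verified emissions, tonnes of CO2-equivalent
    free_allowances_t -- allowances received free of charge
    eua_price_eur     -- market price per allowance (EUAA)
    purchased_t       -- allowances bought on the market; defaults to
                        the full shortfall, i.e. full compliance
    """
    # Emissions above the free allocation must be covered by bought allowances.
    shortfall = max(0.0, verified_t - free_allowances_t)
    if purchased_t is None:
        purchased_t = shortfall
    uncovered = max(0.0, shortfall - purchased_t)
    market_cost = purchased_t * eua_price_eur
    # Non-compliance incurs the penalty AND the missing allowances must
    # still be bought, so it is never cheaper than simply complying.
    penalty_cost = uncovered * (PENALTY_EUR_PER_TONNE + eua_price_eur)
    return market_cost + penalty_cost

# Fully compliant airline: 1.2 Mt emitted, 1.0 Mt free, EUR 7 per allowance
print(annual_cost(1_200_000, 1_000_000, 7.0))             # 1400000.0
# Same airline surrendering 50,000 t too few allowances
print(annual_cost(1_200_000, 1_000_000, 7.0, 150_000))    # 6400000.0
```

The last line shows why the €100 penalty was set so high: falling 50,000 tonnes short costs far more than the €350,000 it would have taken to buy the missing allowances.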
Airlines fly off the handle
The US was the first and loudest opponent of the EU’s decision to bring aviation into the ETS. The U.S. House of Representatives passed a Bill ‘to prohibit operators of civil aircraft of the United States from participating in the EU ETS’ and a similar Bill was introduced into the Senate. US Secretaries of State and Transport sent a letter to the European Commissioner urging the EU to end plans to include US airlines in the ETS, or the US would be ‘compelled to take appropriate action’.
American Airlines, United Continental and US airline industry association Airlines for America (A4A) brought a case against the EU before the London High Court of Justice. They claimed the Aviation Directive breached the Chicago Convention, the EU-US Open Skies Agreement and customary international law. The case was referred to the European Court of Justice (ECJ), which ruled on 21 December 2011 in favour of the EU.
Conclusions from the ECJ’s process have not deterred the China Air Transport Association (CATA), acting on behalf of four Chinese airlines, from wanting to sue the EU. Some fear a trade war could develop as Chinese airlines refuse to pay their dues to the EU scheme.
Following the ECJ’s ruling, a number of US airlines decided to pass on some of the anticipated costs to passengers by increasing the price of flights to and from Europe by US$3 per ticket. Qantas will also increase ticket prices by up to AU$5, and Europe-based low-cost airline Ryanair has announced a 25p surcharge for each passenger on all flights. One Asian airline, AirAsia X, has gone so far as to cancel some of its European flights, although whether the EU ETS is solely responsible for this decision is questionable (increased UK air passenger duty taxes are also a factor).
The European Commission estimated (based on higher permit prices than exist today) that the EU ETS’ impact on the price of airline tickets should be less than €12 per passenger on intercontinental flights and less than €9 for internal European flights. Independent news and consulting service, Thomson Reuters Point Carbon, estimated that the shortfall between free allowances and those needed for compliance will cost about €1.4 billion in 2012, and may reach €7 billion by 2020. However, some airlines will be hit harder than others, with 23 per cent of the cost borne by the top 10 airlines. Independent aviation market intelligence group CAPA is concerned that smaller operators will struggle to deal with the added requirements on data collection and reporting and the impacts on route and fleet planning.
Despite the strong resistance, the EU is confident that airlines will comply (as was reflected in the EU Commissioner’s letter of response to the US Secretary of State). The cost for non-compliance has been set high and airlines risk a possible ban from operating in the EU. Qantas has accepted its fate and A4A, after losing the ECJ case, said its airlines would ‘comply under protest’ while the group reviews its legal options.
Initially, trading will be the only way for airlines to comply with EU ETS. In the medium term, airlines will explore ways to reduce fuel consumption by altering routes and building more efficient aircraft. Biofuels are another attractive option. Using pure biofuels may reduce emissions by two per cent but they are more costly than traditional aviation fuels and depend heavily on the availability of sustainable biomass production. The highest biofuel blend approved for the aviation industry is currently 50 per cent. This blend was used by Lufthansa on 12 January 2012 for a trial flight from Frankfurt to Washington. The flight was successful but the fuel cost was more than twice that of conventional aviation fuel. Lufthansa is keen to see a 100 per cent blend approved.
Will the idea take off?
Some airlines are embracing the scheme. While stating that it believes regional markets distort competition, Lufthansa is one of several airlines already buying or planning to buy allowances (especially as permit prices are currently so low). Some airline partnerships, such as Star Alliance and SkyTeam, are looking to buy as groups, or give each other preferential trading rights. Some economists are suggesting that airlines could actually make windfall profits from it all.
Discussions are now ramping up on how to deal with international shipping emissions. Maritime emissions were also excluded from the Kyoto Protocol and have not been properly addressed on a global level. The EU has floated the idea of extending its ETS yet again. The early results of the aviation experience may help inform negotiations on shipping.
It’s not easy to measure wind turbine noise as background noise from rainfall to traffic interferes with the results, says the latest Wind Directions based on a recent EWEA workshop. In fact, that background noise – including the wind itself – is usually louder than the sound of the turbines.
At least 17 peer-reviewed studies have found that there is no adverse effect on human health linked to turbine noise.
However, people’s concerns about wind turbine noise must be taken seriously. “Developers must also show respect by answering questions and listening to fears,” said Jeremy Bass, Senior Technical Manager at RES.
Read the full article in Wind Directions now. | <urn:uuid:9c84e8e7-3e17-43b6-84d6-21621b0a692b> | CC-MAIN-2016-26 | http://www.ewea.org/blog/2013/02/how-noisy-are-wind-turbines-really/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397749.89/warc/CC-MAIN-20160624154957-00185-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.961015 | 140 | 3.046875 | 3 |
Jake Barnes
The narrator and protagonist of the novel. Jake is an American veteran
of World War I working as a journalist in Paris, where he and his
friends engage in an endless round of drinking and parties. Although Jake
is the most stable of his friends, he struggles with anguish over
his love for Lady Brett Ashley, his impotence, and the moral vacuum
that resulted from the war. Jake positions himself as an observer,
generally using his insight and intelligence to describe only those around
him, rarely speaking directly about himself. However, in describing
the events and people he sees, Jake implicitly reveals much about
his own thoughts and feelings.
Read an in-depth analysis of Jake Barnes.
Lady Brett Ashley
A beautiful British socialite who drinks heavily.
As the novel begins, Brett is separated from her husband and awaiting
a divorce. Though she loves Jake, she is unwilling to commit to
a relationship with him because it will mean giving up sex. Indeed,
she is unwilling to commit fully to any of the many men who become
infatuated with her, though she has affairs with a number of them.
However, she does not seem to draw much happiness from her independence.
Her life, like the lives of many in her generation, is aimless and unfulfilling.
Read an in-depth analysis of Lady Brett Ashley.
Robert Cohn
A wealthy American writer living in Paris. Though he is an expatriate
like many of his acquaintances, Cohn stands apart because he had
no direct experience of World War I and because he is Jewish. He
holds on to the romantic prewar ideals of love and fair play, yet, against
the backdrop of the devastating legacy of World War I, these values
seem tragically absurd. As a Jew and a nonveteran, Cohn is a convenient
target for the cruel and petty antagonism of Jake and his friends.
Read an in-depth analysis of Robert Cohn.
Bill Gorton
Like Jake, a heavy-drinking war veteran, though not an expatriate. Bill
uses humor to deal with the emotional and psychological fallout
of World War I. He and Jake, as American veterans, share a strong bond,
and their friendship is one of the few genuine emotional connections
in the novel. However, Bill is not immune to the petty cruelty that
characterizes Jake and Jake’s circle of friends.
Mike Campbell
A constantly drunk, bankrupt Scottish war veteran. Mike has a terrible
temper, which most often manifests itself during his extremely frequent
bouts of drunkenness. He has a great deal of trouble coping with
Brett’s sexual promiscuity, which provokes outbreaks of self-pity
and anger in him, and seems insecure about her infidelity as well
as his lack of money.
Pedro Romero
A beautiful, nineteen-year-old bullfighter. Romero’s talents in the
ring charm both aficionados and newcomers to the sport alike. He
serves as a foil (a character whose attitudes or emotions contrast
with, and thereby accentuate, those of another character) for Jake
and his friends in that he carries himself with dignity and confidence
at all times. Moreover, his passion for bullfighting gives his life
meaning and purpose. In a world of amorality and corrupted masculinity,
Romero remains a figure of honesty, purity, and strength.
Montoya
The owner of a Pamplona inn and a bullfighting expert. Montoya sees
bullfighting as something sacred, and he respects and admires Jake
for his genuine enthusiasm about it. Montoya takes a paternal interest
in the gifted young bullfighter Pedro Romero and seeks to protect him
from the corrupting influences of tourists and fame.
Frances Clyne
Cohn’s girlfriend at the beginning of the novel. A manipulative status-seeker,
Frances was highly domineering early in their relationship and persuaded Cohn
to move to Paris. As her looks begin to fade, she becomes increasingly
possessive and jealous.
Count Mippipopolous
A wealthy Greek count and a veteran of seven wars
and four revolutions. Count Mippipopolous becomes infatuated with
Brett, but, unlike most of Brett’s lovers, he does not subject her
to jealous, controlling behavior. Amid the careless, amoral pleasure-seeking
crowd that constitutes Jake’s social circle, the count stands out
as a stable, sane person. Like Pedro Romero, he serves as a foil
for Jake and his friends.
Wilson-Harris
A British war veteran whom Jake and Bill befriend while fishing in
Spain. The three men share a profound common bond, having all experienced
the horrors of World War I, as well as the intimacy that soldiers develop.
Harris, as Jake and Bill call him, is a kind, friendly person who
greatly values the brief time he spends with Jake and Bill.
Georgette
A beautiful but somewhat thick-witted prostitute whom Jake picks up
and takes to dinner. Jake quickly grows bored of their superficial
conversation and abandons her in a club to be with Brett.
Belmonte
A bullfighter who fights on the same day as Pedro Romero. In his early
days, Belmonte was a great and popular bullfighter. But when he
came out of retirement to fight again, he found he could never live
up to the legends that had grown around him. Hence, he is bitter and
dejected. He seems to symbolize the entire Lost Generation in that
he feels out of place and purposeless in his later adult life.
Harvey Stone
A drunken expatriate gambler who is perpetually out of money. Harvey
is intelligent and well read, yet he cannot escape his demons of
excessive drinking and gambling. Like many of Jake’s friends, he
is prone to petty cruelty toward Cohn. | <urn:uuid:1c14bcdf-44f3-4858-9aee-f81f0627fb9f> | CC-MAIN-2016-26 | http://www.sparknotes.com/lit/sun/characters.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396455.95/warc/CC-MAIN-20160624154956-00108-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.956547 | 1,198 | 2.640625 | 3 |
By Anum Yoon
Everything is becoming ‘smart’ these days, from smartphones to smart watches and more. There’s a reason for it.
For the most part, converting something into what we call “smart” means performance enhancements and better optimization. For example, smartphones allow us to do a whole slew of things on the go that we used to need a computer for. You can shop, browse the Web, answer and send emails, and much more.
It makes sense that the world around us would eventually evolve to incorporate modern technology, just like with our mobile phones. What better place to start than a fast-paced city? There is such a thing as a smart city, and more and more cities across the country are making the conversion to this model.
What is a smart city?
A smart city, or connected city, uses modern digital technologies to improve the overall quality of life and performance. In addition, this allows for reduced operation costs and better communication and engagement with citizens.
Key sectors, or areas that can be improved through technology, include transportation, energy, health care, water and waste.
The idea is to leverage modern and useful tech to create a sustainable and self-aware city. This would allow it to do many things like direct traffic, notify residents about available parking and reduce gas emissions.
Climate change has been a significant factor in the push for smart cities, in the hopes that enhancing our highly populated urban areas will allow reduced environmental hazards like gas emissions and waste production.
What does a smart city look like?
Smart cities are relatively new in terms of the human timeline. Modern cities — devoid of ‘smart’ technology — represent a large majority of the world’s GDP, as well as its waste production and resource consumption. For all intents and purposes, they are a large drain.
That’s why these large population centers need to be reinvented to become more efficient and sustainable over time, lest they burst at the seams. The idea behind a smart city is to reduce this drain on society, mitigating a city’s consumption footprint or even eliminating it entirely.
A proper smart city should be able to do the following:
- Recognize and identify its context and local impact
- Offer an abundance of raw materials, a sustainable agricultural ecosystem, free crop pollination and genetic diversity
- Be resilient to long-term changes and adapt to conceptualized data
- Employ high-tech ‘smart’ devices, sensors and low-tech biomimetic designs to improve operation and life within
- Have zero footprint when it comes to water consumption, by recharging water supplies through modern recycling systems or rainwater collection (such systems should be implemented even as part of basic plumbing standards)
- Minimize citizen consumption by helping residents reduce their greenhouse gas emissions, cutting back their reliance on destructive living
- A general shift that favors efficient building design, decentralized generation, green transportation and modes of travel, sustainable energy systems, and green living
How can smart cities enable sustainability?
The concept behind a smart city is simple, the application not so much. Think about it: The more information we know about something, the better off we are. We can make informed decisions — more accurate decisions — and enact change to improve performance and operation.
When applied to a city, the more self-aware, or ‘smart,’ a city is, the more we can engage with it. It relates directly to big data and modern tech. One of the more recent trends is leveraging big data to solve problems.
For instance, let’s say a city was outfitted with the technology to streamline traffic. By knowing where all of its citizens are at a given time, and where traffic is the heaviest, it can direct others appropriately (think: a GPS system that actually connects directly to an urban network to update accident locations, congestion info and more).
From the outside, this may not seem like it does anything more than improve commute times, but in the grand scheme it improves nearly everything about that city. It cuts down on emission ratings, as vehicles spend less time on the roads. It cuts down on commute times, yes, but that also reduces the risk for accidents. It also reduces fuel consumption and saves money.
Really, the list could go on and on. So, you see, even just a simple system such as that can help improve the quality of life for a city’s residents along with performance.
Yes, but what about sustainability?
The concept of smart cities might be difficult to understand, so I’ll break it down further. By leveraging modern tech, a smart city can do much more than just help you find a closer parking space or hail a taxi.
Technology implemented in a smart city can improve sustainability in many ways — for instance, with public transportation. All Aboard Florida is a great example of modern tech improving sustainability. It is an environmentally friendly railway in the state of Florida that will improve transportation for all areas it covers. While building, it leveraged big data, or ‘smart’ data, to reduce vibrations and noise, monitor cultural effects and resource consumption, reduce fuel costs and consumption, and discern the impact on air and water quality in nearby areas. The state will also be able to accurately monitor how the railway affects transportation and commute times.
A smart city could also do things such as measure water levels, average consumption and weather patterns — like a drought — to help us better manage our supply. It could even track waste patterns to help us optimize recycling and cut back on the amount of waste we produce as a society. Both of these scenarios would require the smart city to constantly monitor activities and collect data. All of that data is what we refer to as ‘big data,’ and it’s what would allow the city to be sustainable and react.
Hitachi is working on a project called Social Innovation that will utilize IT and infrastructure technologies to drive business and make smart cities safer. Through the use of big data, the system will be able to react to an event happening around the city in a more fluid manner. Think: The system will alert law enforcement when subway platforms are too overcrowded — thanks to unique sensors — and the police can react by sending additional transportation to alleviate the crowds. This would not only improve commute times, but also keep a large majority of people safe, avoiding unnecessary accidents.
As for the matter of privacy, when it comes to collecting that big data, that’s another story entirely.
For now, I’ll just close out with this: As our cities become ‘smart,’ our lives will improve significantly, because a more sustainable, more aware city means better living conditions and experiences for all.
So, how do we get there?
To be perfectly honest, as promising as this all sounds, our cities are not going to become smarter overnight. It takes time to adapt the technology needed for smart cities and put it into place. Furthermore, it will take just as long to conceptualize collected data and understand how it can be used in the grand scheme. Once active, this technology will constantly be recording and measuring data, and it’s going to take a lot of manpower to analyze it and filter it properly.
It’s also going to require reform in terms of development. Citizens of these cities will need to be willing to support large-scale projects, particularly when it comes to transforming the area so that it’s smarter and more efficient. Older, run-down buildings will need to be replaced with ones that implement modern technologies. It’s not just about buildings though — other city utilities and services count, too. It could include anything from storm-water drains to dams or public structures.
Bland, open rooftops could be transformed into energy-producing and sustainable areas with solar energy. City streets could become aware of foot and vehicle traffic and alert the proper authorities when issues arise. Plumbing systems could be designed to collect rainwater and filter it for public consumption. The possibilities are truly endless.
The trick of it all is updating existing technologies, structures and strategies to incorporate new, smarter and more efficient ones. That is simply going to take time. If you support such a thing, be sure to vote with your wallet and time when you can.
But just as long-term solutions and big pictures are important, so are the daily habits and life choices of individuals. Rather than people disengaging from sustainability because they cannot provide the whole solution, individuals should realize the value of engaging through small steps. Changing a habit, like how you wash dishes, or making long-term sustainable choices, like investing in a greener kitchen, are individual efforts that add up.
Anum Yoon is a writer who is passionate about personal finance and sustainability. She often looks for ways she can incorporate money management with environmental awareness. You can read her updates on Current on Currency. | <urn:uuid:a01b99ec-bc5a-4d22-a419-69b0850c160b> | CC-MAIN-2016-26 | http://www.triplepundit.com/2015/08/smart-cities-enable-urban-sustainability/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392099.27/warc/CC-MAIN-20160624154952-00155-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.945863 | 1,842 | 3.375 | 3 |
Top | Criticism | Alternative | Miscellaneous | Bottom
Eric Lerner starts his book "The Big Bang Never Happened" (hereafter BBNH) with the "errors" that he thinks invalidate the Big Bang. These are
Are these criticisms correct? No, and they were known to be incorrect in 1991 when Lerner wrote his book.
Let us look at the superclusters first.
Lerner gives the example of filaments or sheets 150 million light years apart in Figure 1.1, and then asserts that material would have to travel 270 million light years to make the structure. Obviously 75 million light years would do the trick. With material traveling at 1000 km/sec, that would take 22.5 billion years, which is about twice as long as the probable age of the Universe. But when the Universe was younger, everything was closer together, so a small motion made early in the history of the Universe counts for much more than a motion made later. Thus it was easier for the material to clump together early in the history of the Universe. Lerner's math here is like ignoring interest when planning for retirement. If you save $1000 per year for 50 years, you don't retire with $50,000. If the interest rate were 7 percent throughout the 50 years, you would have a nest egg of about $406,000.
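The compound-interest analogy is easy to check in a couple of lines; the sketch below assumes one $1000 deposit at the end of each year (an ordinary annuity), compounding annually at 7 percent:

```python
# Future value of saving $1,000 per year for 50 years at 7% interest,
# versus the $50,000 of raw deposits -- the analogy used in the text.
def annuity_future_value(payment, rate, years):
    """Ordinary annuity: one deposit at the end of each year, compounded annually."""
    return payment * ((1 + rate) ** years - 1) / rate

deposits = 1000 * 50
nest_egg = annuity_future_value(1000, 0.07, 50)
print(f"raw deposits: ${deposits:,.0f}")
print(f"with 7% interest: ${nest_egg:,.0f}")  # roughly $406,500
```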
Furthermore, velocities relative to the Hubble flow naturally decrease with time, so the 1000 km/sec velocity was larger in the past. Lerner's discussion of this point uses loaded words and incorrect logic. He quotes unnamed cosmologists as "speculating" that matter moved faster in the past, and calls this an "unknown" process. In fact, it is just Newton's First Law. Consider an object moving at 1000 km/sec relative to the Hubble flow at our location. For Ho = 65 km/sec/Mpc this object will have moved 1.54 Mpc in 1.5 Gyr, the time it takes for the Universe to grow by 10% for this value of Ho. Its velocity will still be 1000 km/sec, but the Hubble flow at a distance of 1.54 Mpc is 1.54*65 = 100 km/sec, so the object's velocity relative to the Hubble flow is now only 900 km/sec. It went down by 10% while the Universe grew by 10%.
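A quick numerical check of this coasting example — the unit conversions are the only thing added here; the velocities and Ho = 65 km/sec/Mpc come from the text:

```python
# A 1000 km/s peculiar velocity decays as the Universe expands -- no new
# physics, just Newton's first law plus the growing Hubble flow.
MPC_KM = 3.086e19          # kilometers per megaparsec
YR_S = 3.156e7             # seconds per year

H0 = 65.0                  # km/s/Mpc, value used in the text
v_pec = 1000.0             # km/s, initial velocity relative to the Hubble flow
t = 1.5e9 * YR_S           # ~1.5 Gyr, time for ~10% growth at this H0

d_mpc = v_pec * t / MPC_KM          # distance coasted, in Mpc
v_flow = H0 * d_mpc                 # Hubble flow velocity at that distance
v_rel = v_pec - v_flow              # velocity relative to the local flow
print(f"coasted {d_mpc:.2f} Mpc; relative velocity now {v_rel:.0f} km/s")
```

The object's velocity relative to the local Hubble flow drops by about 10% while the Universe grows by 10%, matching the paragraph above.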
For example, the neutrinos in the hot dark matter model are just coasting, or "free streaming". If a free streaming neutrino has 1000 km/sec velocity now, then since recombination it has traveled from a point that is now 2.8 billion light years away. If instead of free streaming the material has been accelerated by gravitational forces, then the relation between velocity relative to the Hubble flow and the distance to the starting point (measured now), is
v = H*D*Omega^0.6
Using Lerner's value of 1000 km/sec, and a distance of 75 million light years, and Ho = 50 km/sec/Mpc, we find perfect agreement as long as Omega is close to 1. So Lerner's "structures that take too long to grow" are just more evidence for a large amount of dark matter.
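Inverting this relation for Omega with the numbers quoted above (75 million light years is about 23 Mpc) gives a value of order unity:

```python
# Invert v = H * D * Omega**0.6 for Omega, using the numbers quoted in the text.
v = 1000.0            # km/s, Lerner's streaming velocity
H0 = 50.0             # km/s/Mpc
D = 75.0 / 3.262      # 75 million light years in Mpc (1 Mpc = 3.262 Mly)

omega = (v / (H0 * D)) ** (1 / 0.6)
print(f"Omega = {omega:.2f}")   # of order unity
```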
In fact, Jim Peebles at Princeton had calculated just how much inhomogeneity in the early Universe would have been needed to grow into the large scale structures we see today. The anisotropy can be used to measure the inhomogeneity. This calculation was published in 1982 (ApJ Lett, 263, L1) and showed that an anisotropy of the temperature of the microwave background with an RMS quadrupole amplitude of 6 microKelvin would have been produced by the inhomogeneity necessary to produce the clustering of galaxies, if the Hubble constant was Ho = 100 km/sec/Mpc. For Ho = 50, the RMS quadrupole would be 12 microK. The actual limit at the time was 600 microK, so there wasn't any problem producing the large scale structure. Later results reduced the limit on the RMS quadrupole to 200 microK by the time Lerner published his book. Thus when Lerner wrote the BBNH, models could reproduce the observed large scale structure with initial conditions that were twenty times more uniform than the observed limit on homogeneity.
In 1991 the limit was reduced to 22 microK by the FIRS balloon experiment and then COBE discovered the anisotropy with a level of 17+/- 5 microK and the current best value is 18.4+/-1.6 microK.
So where was the "crisis"? The "crisis" only arises if there is no dark matter. Without dark matter you need 10 times larger initial perturbations and thus a 10 times larger RMS quadrupole, which was finally ruled out in 1991 after Lerner wrote his book.
Lerner quotes George Field saying there was a crisis, but doesn't give a citation in the book. I remember many newspaper articles saying there was a crisis, but those of us building the COBE satellite knew that nobody had made observations with enough sensitivity to test the models calculated by Peebles, and just hoped that COBE would work well enough to do the job.
By 1992, the model Peebles used had been named "Cold Dark Matter" and people were saying it was "dead" (see "The End of Cold Dark Matter?" by Davis et al., 1992, Nature, 356, 489). But this was from trying to get the details just right: you could make the superclusters and then you had too many clusters of galaxies, or you could make the clusters with a smaller RMS quadrupole and then make too few superclusters. The COBE measurement matched the value needed to make the superclusters. Thus the problem with CDM is that it makes too much structure, not too little. There are several ways to modify CDM to make it work:
Ironically, while Lerner uses this false argument against the Big Bang to advocate an infinitely old Universe, young Earth creationists use the same argument to bolster their belief that the Universe is only several thousand years old.
There is certainly lots of evidence for dark matter. When one looks at clusters of galaxies, the gravitational effects of the cluster can be measured three ways. One is by the orbital motions of the galaxies in the cluster. This was first done by Zwicky in 1933 (Helv. Phys. Acta, 6, 110)! A second looks at the hot gas trapped in many big clusters of galaxies. The third way looks at the bending of light from galaxies behind the cluster by the mass in the cluster (gravitational lensing). All three methods give masses that appear to be very much larger than the mass of the stars in the galaxies in the cluster. This is usually given as the mass-to-light ratio, and M/L is several hundred solar units for clusters of galaxies and only about 3 for the stars in the Milky Way near the Sun.
The paper that Lerner cites as evidence for a lack of dark matter, Valtonen and Byrd (1986, ApJ, 303, 523), claims that the Coma cluster of galaxies and the other great clusters of galaxies are not bound objects. However, the observed velocities within the cluster would cause them to disperse in much less than the age of the Universe, so this claim is quite strange. Furthermore, the X-ray and gravitational lensing evidence now available show that Valtonen and Byrd were incorrect.
The only way to satisfy these observations without a lot of dark matter is to hypothesize that the force of gravity is much stronger at large distances than Newton (or Einstein) would predict. This model is called MOND, for Modification Of Newtonian Dynamics, and it has some adherents. But no good relativistic version of MOND exists, and the existence of gravitational lensing in clusters of galaxies requires a relativistic theory that makes the same change for light and for slow moving objects like galaxies. Furthermore, if the MACHO results hold up, then the MOND model will fail for the halo of the Milky Way. If we then need dark matter to explain the Milky Way halo, it is most reasonable to use the same explanation in distant clusters of galaxies.
More about dark matter.
Lerner claims that the CMB spectrum presented by Mather in 1990 was "too perfect", and that it made it impossible for large scale structure to be formed. However, the perfect fit to the blackbody only ruled out explosive structure formation scenarios like the Ostriker and Cowie model (1981, ApJL, 243, L127). The limits on distortion of the CMB spectrum away from a blackbody are now about 100 times better, and these tighter limits are easily met by models which form large scale structure by gravitational perturbations acting on dark matter. Models which act via electromagnetic interactions, like the explosive structure formation scenario or the plasma Universe have a much harder time meeting the constraints imposed by the FIRAS observations of the CMB spectrum.
What alternative does Lerner give for the Big Bang? Since the Big Bang is based on
In the BBNH, Lerner presents the Alfven-Klein model which explains the redshift using a portion of the Universe that starts to collapse, then the collapse is reversed. This model requires new physics to generate the force necessary to reverse the collapse. Figure 6.2 of BBNH shows the collapse, reversal, and later expansion of a region of space. The figure below shows space-time diagrams based on this idea. In a space-time diagram, time is plotted going upward, with the bottom being the distant past. The black lines show the paths of different clumps of matter (galaxies) as a function of time. These are called "world-lines". The red lines show the position of light rays that reach us now at the top center of the diagrams. These are called "light cones". Lerner says that only a small region of space collapsed: only a few hundred million light-years across. This is shown on the left. But if this were the case, then the distant galaxy at G would have a recession velocity smaller than the recession velocity of the nearby galaxy A. This is not what we observe. Thus a much larger region must have collapsed. This is shown on the right. Now G has a larger recession velocity than A, which matches the observations.
What causes the reversal from collapse to re-expansion? Lerner claims that it is the pressure caused by the annihilation of matter and antimatter during the collapse. The green ellipse shows this high pressure region. But only pressure differences cause forces. A pressure gradient is needed to generate an acceleration. In the case of a large region of collapse, which is needed to match the observations, a larger acceleration requires a larger pressure gradient, and this gradient exists over a larger distance, leading to a greatly increased pressure.
But in relativity pressure has "weight" and causes stronger gravitational attraction. This can be seen using work W = PdV, so the pressure is similar to an energy density. Then through E = mc^2, this energy density is similar to a mass density. If the collapsing region is big enough to match the observations, then the pressure must be so large that a black hole forms and the region does not re-expand. Peebles discusses this problem with the plasma cosmology in his book "Principles of Physical Cosmology".
Remarkably, Lerner now disowns the Alfven-Klein model which plays such a big part in the BBNH, and wants me to give the proper attribution! He points out that he listed problems with the Alfven-Klein model in the Appendix of BBNH, but these were rather minor problems compared to the fact that it just won't work! If the Alfven-Klein model doesn't work, Lerner's fallback is tired light, which is another total failure.
Lerner's model for the CMB claims that the intergalactic medium is a strong absorber of radio waves. His evidence for this is presented in Figure 6.19 of BBNH, which allegedly shows a decrease in the radio to infrared luminosity ratio as function of distance. This absorption is supposed to occur in narrow filaments, with tiny holes scattered about randomly so that distant compact radio sources like QSOs can be seen through the holes.
The best evidence against this model in also in BBNH, in Figure 6.17. This is a picture of Cygnus A, which is the brightest extragalactic radio source. It has a redshift z = 0.056 and is 700 million light years away, using H0 = 75 as in Lerner's ApJ article, and looking at Figure 6.19 of BBNH, we see that it should be more than 99% absorbed. So more than 99% of the area should be blacked out by absorbing filaments in Figure 6.17, but none can be seen. Cygnus A could be plotted on Figure 6.19, but it would be off scale in the upper right corner, completely orthogonal to Lerner's claimed trend.
Lerner has denied the existence of extended high redshift radio sources, which is pretty silly since Cygnus A obviously counts as one. A three times more distant extended radio source is in Abell 2218, with a size of 120" and a redshift of z = 0.174. Clearly this is beyond Lerner's metagalaxy but there is no big hole in the CMB there. The field has been studied extensively for the Sunyaev-Zeldovich effect and the deficit is less than a milliKelvin.
The 3CRR Atlas has images of many distant radio sources with large angular size. The largest angular size for those sources with z > 0.4 is 3C457 which has an angular size of 205" and a redshift of z = 0.428. 7 out of the 10 sources with 0.4 < z < 0.5 in this list have sizes greater than 30". A single 30" hole in the absorbing curtain would have appeared as a -2 mK anisotropy in the Saskatoon data and nothing like this was seen.
Thus radio sources with large angular size are seen to great distances and Lerner's local absorbing curtain does not exist.
A second objection to Lerner's local absorbing curtain is that its density falls inversely with distance from the local density peak, which Lerner takes to be the Virgo supercluster. But if the density of the absorbers peaks at the Virgo, then there will be much more absorption in that direction than in the opposite direction. This would make the distribution of radio sources on the sky very anisotropic. But the radio sources are evenly distributed to within a few percent, so Lerner's local absorbing curtain does not exist.
A third objection to Lerner's local absorbing curtain is that by making distant radio sources fainter, it would change the number vs flux law for radio sources in a way that is not observed. Normally the flux of a source falls off like an inverse square law: F = A/D^2, where A is a constant that depends on the luminosity of the source. If you count all the sources brighter than a minimum flux Fmin, then you are looking out to a maximum distance Dmax = sqrt(A/Fmin). The number of sources varies like D^3, or N = N1*(Fmin/F1)^-1.5. Lerner changes the flux-distance relation to F = A/D^2.4 with his added radio absorption, and this would change the number count law to N = N1*(Fmin/F1)^-1.25. If in addition the density of radio sources peaked near the Earth the way that Lerner assumes other densities do, then the number count law becomes N = N1*(Fmin/F1)^-0.83. The actual data show N = N1*(Fmin/F1)^-1.8, which is not compatible with Lerner's model. Thus Lerner's local absorbing curtain does not exist.
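The exponent bookkeeping in this paragraph can be reproduced mechanically. The sketch below assumes a Euclidean population with N proportional to Dmax^q (q = 3 for uniform density, q = 2 for density falling as 1/D), and a flux-distance law F proportional to D^-p:

```python
from fractions import Fraction

# Slope s of the integral source counts N(>Fmin) ∝ Fmin**(-s):
#   Dmax ∝ Fmin**(-1/p)  and  N ∝ Dmax**q   =>   s = q / p
def count_slope(p, q=3):
    """p: exponent in F ∝ D**-p; q: volume exponent (3 for uniform density)."""
    return Fraction(q) / Fraction(p)

assert count_slope(2) == Fraction(3, 2)                # inverse-square law: slope 1.5
assert count_slope(Fraction(12, 5)) == Fraction(5, 4)  # F ∝ D**-2.4: slope 1.25
# a source density falling as 1/D gives N ∝ D**2, i.e. q = 2:
print(float(count_slope(Fraction(12, 5), q=2)))        # ~0.83
```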
Assuming the existence of his absorbing curtain, even though extended distant radio sources show that it does not exist, Lerner (1995, Ap&SS, 227, 61) presents a fit to the FIRAS spectrum of the cosmic microwave background. After discussing how there is a slight variation in "absorbency" (not defined, units unknown) with frequency, Lerner's final fitting function in his Equation (38) assumes an opacity that is independent of frequency. This function has seven apparent parameters in addition to the 2 parameters of temperature and galactic normalization that are needed for any FIRAS fit. Lerner then bins the FIRAS data in Mather et al. (1994) from 34 points down to 10 binned points, and finds that his 9 parameter model gives a good fit to 10 data points. This sounds stupid, but that is mainly due to the paper being poorly written and edited. Lerner's fitting function actually only has two free parameters: a Kompaneets "y" distortion times an emissivity that is slightly different from unity. And the resulting 4 parameter fit to the 34 data points in Mather et al. (1994) is pretty good. The Figure below shows the deviation from a blackbody for Lerner's model, and the open circles are the Mather et al. (1994) data.
Unfortunately for Lerner, the improved calibration and use of the full FIRAS dataset in Fixsen et al. (1996) give the black data points in the Figure. Lerner's model is a bad fit to this data. The curve shown, which is the best fit to the Mather et al. (1994) data, is six standard deviations away from the Fixsen et al. (1996) data. Readjusting the emissivity and "y" parameter to best fit the Fixsen et al. (1996) data gives a change in chi^2 of only 0.7 for two new degrees of freedom, which is worse than the average performance of random models.
Lerner wants to make helium in stars. This presents a problem because the stars that actually release helium back into the interstellar medium make a lot of heavier elements too. Observations of galaxies with different helium abundances show that for every 3.2 grams of helium produced, stars produce 1 gram of heavier elements (French, 1980, ApJ, 240, 41). Thus it is not even possible to make the 28% helium fraction in the Sun without making four times more than the observed 2% heavier elements fraction, and making the 23% helium with only 0.01% of heavier elements seen in old stars in the Milky Way halo is completely out of the question.
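A quick check of the helium-to-metals arithmetic, using the 3.2 : 1 yield ratio quoted from French (1980):

```python
# Heavier elements ("metals") that stellar helium production would drag along,
# given 1 gram of metals for every 3.2 grams of helium released by stars.
def implied_metals(helium_fraction, he_per_metal=3.2):
    return helium_fraction / he_per_metal

z_sun = implied_metals(0.28)   # metals implied by the Sun's 28% helium
print(f"implied metals: {z_sun:.1%}, vs ~2% observed -> {z_sun / 0.02:.1f}x too much")
```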
But a further problem is that stars make no lithium and no deuterium. Lerner proposes that these elements are made by spallation in cosmic rays. But the cosmic rays have 80 deuterium nuclei for every lithium nucleus (Meyer, 1969, ARAA, 7, 1) while the Universe has about 6 million deuterium nuclei for every lithium nucleus. So if the lithium is entirely due to spallation in cosmic rays, the Universe is still missing 99.99% of the observed deuterium. Lerner's arithmetic once again fails by a large margin.
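The deuterium shortfall follows directly from the two ratios quoted here:

```python
# Share of the Universe's deuterium that lithium-producing spallation would
# supply, given the cosmic-ray D/Li ratio quoted from Meyer (1969).
d_per_li_cr = 80.0      # deuterium nuclei per lithium nucleus in cosmic rays
d_per_li_univ = 6.0e6   # deuterium nuclei per lithium nucleus in the Universe

made = d_per_li_cr / d_per_li_univ
print(f"spallation supplies only {made:.4%} of the needed deuterium")
```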
v = (1 - 0.5*(m/E)^2 + ...)*c = 0.999999999999875*c
and after traveling for 160,000 years it lags by less than 1 second. The observed burst was 6-10 seconds long, so even the electron neutrino could have enough mass to be the hot component in a mixed dark matter model.
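A quick check of the lag. The 10 eV mass and 20 MeV neutrino energy below are illustrative values chosen to reproduce the quoted velocity, not figures from the text:

```python
# Arrival lag of a slightly massive neutrino over 160,000 light years
# (the distance to SN 1987A): delta_t ≈ T * 0.5 * (m/E)**2 for m << E.
YR_S = 3.156e7                 # seconds per year

m_ev = 10.0                    # illustrative neutrino mass, eV (assumption)
E_ev = 20.0e6                  # typical supernova neutrino energy, 20 MeV (assumption)
travel_s = 160_000 * YR_S      # light-travel time, 160,000 years in seconds

one_minus_beta = 0.5 * (m_ev / E_ev) ** 2
lag = travel_s * one_minus_beta
print(f"1 - v/c = {one_minus_beta:.3e}, lag = {lag:.2f} s")
```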
© 1997-2000 Edward L. Wright. Last modified 11 Oct 2003 | <urn:uuid:755c5ab8-b8e8-46fb-a8ae-f6ee473c67cc> | CC-MAIN-2016-26 | http://www.astro.ucla.edu/%7Ewright/lerner_errors.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396949.33/warc/CC-MAIN-20160624154956-00111-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.942204 | 4,140 | 3.265625 | 3 |
Politicians are too simple.
Two Ig Nobel Prize winners — both, as it happens, professors at Stanford University — each gives part of the explanation. (As with many simple facts, the explanation is a bit complex.)
Politicians’ simple debates
Needed: More Political Dimensions
One dimension — left to right — doesn’t suffice to deal with today’s political reality…. We seem to need at least three dimensions… The system I suggest is no doubt quite deficient. But it is not, as the current left-to-right system clearly is, ridiculously deficient. [FOR DETAILS, SEE HIS ENTIRE ESSAY]
(Professor Perry was awarded the 2011 Ig Nobel Prize in literature for his Theory of Structured Procrastination, which says: To be a high achiever, always work on something important, using it as a way to avoid doing something that’s even more important.)
Politicians’ simple personalities
Zimbardo shared, with Gian Vittorio Caprara and Claudio Barbaranelli of the University of Rome, the 2003 Ig Nobel Prize in psychology. They were honored for their report “Politicians’ Uniquely Simple Personalities” (published in Nature, vol. 385, February 1997, p. 493). They write:
Politicians’ Uniquely Simple Personalities
The complexity of human personality has been reduced to five dimensions, based on factor analyses of judgements of personality traits. Many researchers agree that a five-factor model of personality captures the essential features of all traits that are used to describe personality: energy/ extroversion; agreeableness/friendliness; conscientiousness; emotional stability against neuroticism; and intellect/openness to experience.
But we show here that this common, standard set of five factors does not hold for judgements of famous political figures. We found that, when people judge the personality traits of politicians, they use only two or three factors….
[NOTE: One of the politicians they studied was Silvio Berlusconi, who recently became an ex-politician.] | <urn:uuid:918e690a-95d3-45b0-a356-18b857278acd> | CC-MAIN-2016-26 | http://www.improbable.com/2012/01/22/politicians-r-2-simple-explain-2-ig-nobel-winners/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398628.62/warc/CC-MAIN-20160624154958-00005-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.940899 | 433 | 2.578125 | 3 |
- Main article: Imagery
Visualization can refer to
- Mental image, as with imagination
- Spatial Visualization Ability, the ability to mentally manipulate 2-dimensional and 3-dimensional figures
in spiritual disciplines:
- Visualization (graphic), as in any technique for creating images, diagrams, or animations to communicate any message.
- Scientific visualization, an application of computer graphics which is concerned with the presentation of potentially huge quantities of laboratory, simulation or abstract data to aid cognition, hypotheses building and reasoning
- Knowledge visualization, a sub discipline of Information Design and Instructional Message Design. | <urn:uuid:f3b3f334-a354-4933-b146-59f8b334b33a> | CC-MAIN-2016-26 | http://psychology.wikia.com/wiki/Visualization | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393332.57/warc/CC-MAIN-20160624154953-00116-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.830816 | 144 | 3.078125 | 3 |
noun [mass noun] Law
1Action that interferes with free competition in a market.
- The original Sherman Anti-Trust Act of 1890 forbade any action that was a ‘conspiracy in restraint of trade or commerce among the several States or with foreign nations.’
- Therefore, to ensure that markets work effectively, regulation is needed to prevent collective and individual monopolies from operating in restraint of trade.
- ‘Antitrust laws apply to any restraint of trade,’ Donovan explains.
1.1 [count noun] A clause in a contract that restricts a person’s right to carry on their trade or profession.
- All interference with individual liberty of action in trading, and all restraints of trade of themselves, if there is nothing more, are contrary to public policy, and therefore void.
- First, a number of individual players have challenged such rules before common law courts as unreasonable restraints of trade.
- The courts have, generally speaking, followed the approach of the High Court in Buckley v Tutty, in striking down labour-market rules leagues have imposed on players, as unreasonable restraints of trade.
For editors and proofreaders
Line breaks: re|straint of trade
Following its first flyby of Titan, Cassini gazed back at the smog-enshrouded moon's receding crescent. This natural color view was seen by the spacecraft about one day after closest approach. The slight bluish glow of Titan's haze is visible along the limb.
The superimposed coordinate system grid in the accompanying image at right illustrates the geographical regions of the moon that are illuminated and visible, as well as the orientation of Titan -- lines of longitude converge on the South Pole near the moon's eastern limb. The yellow curve marks the position of the boundary between day and night on Titan.
Images taken through blue, green and red filters were combined to create this natural color view. The images were obtained with the Cassini spacecraft wide angle camera on July 3, 2004, from a distance of about 790,000 kilometers (491,000 miles) from Titan and at a Sun-Titan-spacecraft, or phase, angle of 115 degrees. The image scale is 47 kilometers (29 miles) per pixel.
The Cassini-Huygens mission is a cooperative project of NASA, the European Space Agency and the Italian Space Agency. The Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the Cassini-Huygens mission for NASA's Office of Space Science, Washington, D.C. The Cassini orbiter and its two onboard cameras, were designed, developed and assembled at JPL. The imaging team is based at the Space Science Institute, Boulder, Colo.
For more information about the Cassini-Huygens mission, visit http://saturn.jpl.nasa.gov. The Cassini imaging team home page is at http://ciclops.org.
Next-Generation Charging Technology for Environmentally Friendly Vehicles
Arkansas Power Electronics International is working to create a power module that can support the demands of plug-in electric vehicles
Plug-in electric vehicles represent a new direction for environmentally friendly transportation. Unfortunately, plug-in electric cars currently rely on grid-tie power electronics and can require large quantities of energy--and time--to charge. As plug-in cars become more and more widely used, large amounts of power will be required to quickly charge these vehicles.
Arkansas Power Electronics International (APEI) is one of the companies working on a solution to this challenge. A small research and development company based in Fayetteville, Arkansas, APEI's goal is to build state-of-the-art technology for the development and application of power electronics.
The Department of Energy's research agency, ARPA-E, has selected APEI as one of the companies it funds to help develop more energy-efficient power electronics. As part of the Agile Delivery of Electric Power Technology project, APEI's research will help create a power module that can support the demands of plug-in electric vehicles.
APEI has spent the last 10 years working on a way to implement silicon carbide semiconductors into its power electronics to replace standard silicon semiconductors. Silicon carbide semiconductors are applied in situations where extreme heat and harsh environment are commonplace, such as the wing of an aircraft or the hood of a hybrid car. Because of the extreme conditions, silicon carbide semiconductors are built to withstand potential temperatures in excess of 600 degrees Celsius.
Modern silicon semiconductors generally can't handle temperatures higher than 150 degrees Celsius. Heat is no longer a limitation when designing silicon carbide power modules, but is instead a design factor. The silicon carbide power module that APEI helped develop along with the University of Arkansas won an R&D 100 award in 2009 for being one of 100 new global technological breakthroughs.
"Silicon carbide allows a lower on-resistance for a given blocking voltage versus traditional silicon," said Ty McNutt, director of business development at APEI. A lower on-resistance has profound advantages for a semiconductor. "Smaller and faster switches can be fabricated with less switching and conduction losses," said McNutt. APEI's silicon carbide semiconductors are more energy efficient than silicon semiconductors.
Performance power modules
As a result of the silicon carbide semiconductor's development, APEI also designed a new power module that can help provide the power conversion necessary to charge plug-in electric vehicles. "The advantages are many, from higher efficiency to reduced size and weight enabled by high frequency operation," said McNutt.
The new power module is called the multichip power module, and is designed to be a very compact, cost-efficient, lightweight solution for the plug-in vehicle's charging dilemma. APEI's patented power module technology integrates both the power and control circuitry into one compact power module.
The development of silicon carbide semiconductors has led to the need for power modules that will reduce cost and increase efficiency for power electronics. "APEI, Inc.'s multichip power module technology is designed around the silicon carbide components," said McNutt. Because of the "ultra-high speed switching for greater efficiency... the power modules are also capable of temperatures in excess of 250 degrees Celsius, offering the end user greater thermal headroom over traditional silicon electronics."
Taking the heat
Since silicon carbide semiconductors operate at such high temperatures, the thermal management system within the power module does not have to play such an integral role in the module's function. With a lighter and smaller thermal management system, the multichip power module can be much smaller.
APEI's new design aims for a power module that charges at greater than 96 percent efficiency, while most modern power modules charge at efficiencies below 92 percent.
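That four-point efficiency gap translates directly into grid energy saved on every charge. The sketch below makes the arithmetic concrete; the 24 kWh pack size is an assumed figure for illustration, not from the article:

```python
# Illustrative comparison: grid energy needed to deliver a full charge
# at the two quoted charger efficiencies. Pack size is an assumption.
pack_kwh = 24.0  # assumed plug-in battery capacity

def grid_energy(delivered_kwh, efficiency):
    """Energy drawn from the grid when the charger wastes (1 - efficiency)."""
    return delivered_kwh / efficiency

silicon_carbide = grid_energy(pack_kwh, 0.96)  # >96% claimed for the new module
conventional = grid_energy(pack_kwh, 0.92)     # <92% typical today

print(f"at 96%: {silicon_carbide:.2f} kWh from the grid")
print(f"at 92%: {conventional:.2f} kWh from the grid")
print(f"saved per full charge: {conventional - silicon_carbide:.2f} kWh")
```

Under these assumptions, each full charge wastes about one kilowatt-hour less as heat, a saving that compounds across millions of daily charges.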
APEI's power module technology is also a very high power density module. The power output per kilogram for APEI's silicon carbide power module is 25 kilowatts, while other "state-of-the-art" power modules only put out 2.5 kilowatts per kilogram.
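The tenfold difference in specific power shows up directly in module mass. The sketch below is illustrative; the 60 kW converter rating is an assumed figure for scale, not from the article:

```python
# Illustrative mass comparison at the quoted specific powers (kW per kg).
converter_kw = 60.0  # assumed converter rating for illustration

mass_apei_kg = converter_kw / 25.0   # APEI silicon carbide module
mass_other_kg = converter_kw / 2.5   # "state-of-the-art" silicon module

print(f"APEI module: {mass_apei_kg:.1f} kg")
print(f"conventional module: {mass_other_kg:.1f} kg")
```

At these figures the silicon carbide module weighs a tenth as much for the same output, consistent with the tenfold size and weight reduction McNutt describes below.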
Future of plug-ins
"The higher temperature capability and higher switching frequency one can achieve by combining these two technologies will allow power electronics systems to obtain a tenfold reduction in size and weight if the system is designed around the technology," said McNutt.
APEI's charging module is one of the new technologies paving the way for green energy vehicles in the future. Weight reduction and increases in efficiency make the future look promising for technologies implementing silicon carbide technology. Electric vehicles offer an innovative direction for personal transportation, especially as rapid-charging is developed to make them more convenient.
As plug-in hybrids continue to become more and more widely available, it's very possible that gasoline-powered vehicles will no longer be the most popular option for personal transportation.
-- Jacob Gersh, ARPA-E, email@example.com
This Behind the Scenes article was provided to LiveScience in partnership with the National Science Foundation. | <urn:uuid:8ab0270e-7fde-4ae0-8d46-aa7375553e39> | CC-MAIN-2016-26 | http://www.nsf.gov/discoveries/disc_summ.jsp?cntn_id=122074&org=SMA | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399117.38/warc/CC-MAIN-20160624154959-00057-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.897175 | 1,073 | 3.234375 | 3 |
Creating an artificial heart that could serve as a successful long-term replacement for the human heart has been a goal of medical research for decades. Thus far, doctors and scientists have developed machines that can take over heart function for patients awaiting a transplant, or for people who cannot receive transplants. These mechanical hearts may be placed in the body to extend many patients' lives, but they don't last forever.
Prior to delving into the history of the artificial heart, it’s important to understand what it is and what it isn’t. This mechanized device is not a heart/lung bypass machine. Such machines are a hugely important development in medicine nevertheless, and are used on a regular basis.
Artificial hearts should be viewed as distinct from ventricular assist devices and left ventricular assist devices (VADs and LVADs). These may be implanted to take over some of the work of a heart that retains a certain amount of function. They are useful too in bridging the gap when patients are on transplant waiting lists, and may help the heart continue to work at a more efficient rate for a period of time. However, it should be understood that a true artificial heart is implanted in the body and takes over the work of the failing heart. The term failing usually means that neither the left nor the right ventricle can function well enough to support life.
In the mid-twentieth century, there were several people working on the creation of an artificial heart, and the first implant was performed on a dog in 1957. It was not tremendously successful and the dog only survived for a few hours after the implantation. Research of total mechanized hearts continued on dogs, and in the mid 1960s, doctors began to develop LVADs too, with the first successful LVAD surgery performed in 1966.
Much trial and error followed, and some of the main concerns included rejection of various components of artificial hearts and low survival rates for both LVADs and total hearts. In the 1980s, two artificial hearts were developed that continue to be used: the Jarvik and the Abiocor. Both have been used in many surgeries to extend life. The Abiocor was considered an improvement on the Jarvik because its power source was not outside of the body. The Jarvik requires external wiring to a power source, but has proven over long clinical studies to be more effective than the Abiocor, with longer survival rates for some patients.
Another artificial heart developed in the 2000s that shows great promise has been created by Dr. Alain Carpentier of France, and this heart is in trials to determine efficacy and safety. Unlike its predecessors, Carpentier’s model uses some animal tissue in its design, which might prove effective in reducing rejection. Other scientists continue to work on additional models, since even though some patients survive for several years after receiving an artificial heart, others still don’t. For some survivors, quality of life can be poor and length of life may be short.
There are some inherent concerns with creating artificial hearts. One that remains is how to power the heart, and improvement in various power cells may ultimately allay these concerns. Yet a human heart must work constantly and even with advanced power techniques, it’s hard to know how long any artificial heart can last, once in the body. Still there remains much need for artificial hearts because some people are not eligible for heart transplants, and others need one and die waiting for a heart.
There is some speculation about whether advances in regenerative medicine would ultimately make the quest for the perfect artificial heart obsolete. Hope exists that someday scientists will be able to use a patient’s own tissue to grow a new heart for those who need them. This would eliminate concern over rejection and address the issue of lack of transplants to meet demand.