Kananaskis (Seebe and Ozada)

The Kananaskis Prisoner of War Camp (No. 130), also known as Seebe for the nearby hamlet, operated from 29 September 1939 to 28 January 1946. Locals referred to the facility as Camp "Kan-A-Nazi". Seebe was small, with a capacity of 200 prisoners of war (POWs). It was categorized as a Class II facility, meaning prisoners were afforded no extra privileges or special rights. POWs earned 20 cents per day for completing non-war-related duties and work. Maintaining discipline and order was the top priority; the maximum punishment was solitary confinement for up to 28 days. As stipulated by Geneva Convention guidelines, escapees could face up to two years in prison. Most of the German prisoners held here belonged to the German Afrika Korps. German-Canadians who openly supported Hitler's campaign in Europe were also transferred to the Kananaskis camp and held under strict surveillance.

During the first few years of the war, POWs continued to arrive in Alberta, and Camp Ozada (No. 132) opened in 1942. It was located between Banff and Calgary, approximately 30 kilometres (about 19 miles) from the POW camp at Seebe. Ozada was established as a temporary facility until the permanent camps at Lethbridge and Medicine Hat were completed. The facility covered roughly 500 hectares (1,235.5 acres) and housed thousands of POWs in fairly crude conditions. Prisoners griped about the conditions, but federal government officials pointed out that the guards, members of the Veterans Guard of Canada (veterans of the First World War), were forced to live under the same conditions. The Prisoner of War camp at Ozada remained open for roughly twelve months before the larger camp in Lethbridge opened.

John Joseph Kelly, "Der Deutsche Kriegsgefangener auf Alberta: Alberta and the Keeping of German Prisoners of War, 1939–1947," in For King and Country: Alberta in the Second World War, ed. K.W. Tingley (Edmonton: Provincial Museum of Alberta and Reidmore Books, 1995), 285–302. David Carter, "POW Camps in Canada during WWI and WWII" (accessed September 2007).
Source: http://wayback.archive-it.org/2217/20101208161448/http:/www.albertasource.ca/homefront/ww2/alberta_at_war/camps/kananaskis.html
Girls attending state-run schools in India's financial capital, Mumbai, are being paid 1 rupee (two cents) for each day of attendance at school. The project seeks to address low attendance rates among female students and to empower Indian girls by providing them a financial foundation in life. The scheme has yet to find a single girl, among the 220,000 who attend school daily, with a perfect attendance record. A school principal, Baig Noorjahan, insisted the payment was not sufficient and urged that it be raised to at least 5 rupees daily, also noting that school attendance for boys was even worse than for girls. Of course, there are other issues, such as the reality that millions of Indian children are not even enrolled in school. India has changed dramatically with its economic boom, but the reality is that millions of poor boys and girls are left behind. There are too few teachers, not enough school buildings, and most schools lack basic educational equipment. In essence, the concept of paying poor children to attend school makes sense, particularly if it provides families with enough money to supply children with the necessary school equipment, books and clothes.
Source: http://theimpudentobserver.com/tag/females-paid/
Can Vitamin C Help My Cold?

When you're sniffling and sneezing from a cold, you may be tempted to reach for a bottle of vitamin C pills or drink some orange juice. But does vitamin C prevent or treat your symptoms? So far, the evidence is mixed.

What Is It?

Vitamin C is a nutrient your body uses to keep you strong and healthy. It helps maintain your bones, muscles, and blood vessels. It also helps you absorb iron. You can get vitamin C when you eat fruits and veggies, especially oranges and other citrus fruits. It's also sold as a dietary supplement in the form of pills or chewable tablets.

Can Vitamin C Prevent or Treat Cold Symptoms?

There have been a lot of studies, but the findings aren't consistent. Overall, experts find little to no benefit if you use vitamin C to prevent or treat a cold. In 2010, researchers reviewed all of the available studies and found that taking vitamin C every day did not reduce the number of colds a person got, though in some cases it improved symptoms. However, vitamin C didn't help if people took it only after they showed signs of getting sick. The results were different for people in extremely good physical condition, such as marathon runners: those who took vitamin C every day cut their risk of catching a cold in half. So what does all this mean? If you take at least 0.2 grams (200 milligrams) of vitamin C every day, you're not likely to have fewer colds, but they may end a day or two quicker.

Is It Safe?

In general, vitamin C won't harm you if you get it by eating foods like fruits and veggies. For most people, it's also OK to take supplements in the recommended amount. Higher doses of vitamin C (greater than 2,000 milligrams per day) may cause kidney stones, nausea, and diarrhea. Talk to your doctor if you're thinking of taking vitamin C pills, and let them know about any other dietary supplements you use.
Source: http://www.webmd.com/cold-and-flu/vitamin-c-colds
By MARY CATHARINE MARTIN

Archaeological records and cultural memory indicate that in addition to being more abundant in Southeast Alaska, herring spawning locations were once more consistent. Though the Alaska Department of Fish and Game says its data don't support either conclusion, a new program at the Sealaska Heritage Institute intends to restore herring to areas where they once proliferated.

David Harris, the Alaska Department of Fish and Game's Juneau-area management biologist, said herring aren't anywhere near as predictable as salmon; their spawning location can shift over time. "For some reason they'll sort of shift their focus of spawning," he said. "(For Lynn Canal) at one time, it was more in the Auke Bay area. Now, it's more in Point Bridget, in Berners Bay." Scientists know approximately when and approximately where herring will spawn, but it changes every year, Harris said.

Contrary to current trends, the archaeological record indicates herring spawned in the same locations more consistently than they do now, according to a 2013 paper authored by the lead archaeologist studying Alaska's herring, Madonna Moss, and other researchers. "Over the period represented well by the archaeological record (2,500 to 200 years before the present day) Pacific herring populations also appear to have exhibited higher abundance and greater consistency in their distribution than is indicated by the dynamics of industrially harvested populations over the past 50-100 (years)," the paper says.

Given current variability, that's a difficult thing to envision for some. "It's hard for me to grasp that there was some sort of consistency to a specific spawning location for a great length of time," said Sitka-area management biologist Dave Gordon. "The ecosystem 2,500 to 10,000 years ago is a bit of an unknown, as well. Things may have been very different back then."

Gordon said Fish and Game monitors large spawning events, but there are many spot spawns and smaller spawning events it is unable to monitor. "It's certainly substantial, and probably the total could add up to quite a bit of herring," he said. "It would be a difficult thing to adequately assess the entire Southeast spawning event. It's too much territory, and it's too much money."

Partly because of that, some scientists don't think recent egg deposition at Auke Bay, around Fishermen's Bend, is exciting news. They say "spot spawning" — isolated pockets of spawn — happens every year. "We get periodic reports in Auke Bay of fish spawning on the pilings in June," Harris said. "There's little spawns all over the place that will occur." Just the same, Harris and Gordon say they see balls of herring in most bays they fly over.

Lynn Canal is still far from an open commercial fishery, Harris said. Last year, the spawning biomass was above the threshold for a fishery, but that doesn't mean the projected population would be enough to merit one, he said. (This year, the spawning biomass was below that threshold.) "We want to see some number of years of strong return before we contemplate a fishery," he said.

Herring revitalization and reintroduction

Chuck Smythe, the director of Sealaska Heritage Institute's culture and history department, is spearheading an effort to reintroduce herring to areas where they historically spawned but no longer do. "We're primarily finding that herring is kind of in a depressed state," Smythe said. "There used to be many more herring than there are today." The project, "Development of a pilot herring restoration plan using local and traditional knowledge," started this January. Smythe said the organization hopes to use it as a springboard for a more substantial effort.

The program provides spawning structure, such as hemlock branches, in places that used to be abundant in herring. It also sometimes relocates fertilized eggs to places that historically had spawn but no longer do, he said. Along the coast, many places named for the fish — Teesoshum (milky waters from herring spawn), Yaaw Teiyi (herring rock), and Shaan Daa (White Island, so named for its spring spawning activity) — no longer have them, as Moss points out.

Citing unpublished data from researchers including Moss, the authors of "High Potential for Using DNA from Ancient Herring Bones to Inform Modern Fisheries Management and Conservation" write, "Coastal First Nations report that herring consistently returned to the same bays to spawn every year and the archaeological record demonstrates consistency in spawning locations through the millennia."

Over the course of the program, SHI hopes to develop monitoring factors related to spawning success, like water conditions, that might be replicable, Smythe said. Right now their efforts focus on Sitka, but like the herring, it's too early to tell where they will show up.
Source: http://peninsulaclarion.com/outdoors/2014-08-14/organization-aims-to-rebuild-southeast-herring-stocks
Six months ago we challenged you to realize the future of open, connected devices. Today we see the five finalists vying for The Hackaday Prize. These five were chosen by our panel of Launch Judges from a pool of fifty semifinalists. All of them are tools which leverage Open Design in order to break down the barriers of entry for a wide range of interests. They will have a few more weeks to polish and refine their devices before [Chris Anderson] joins the judging panel to name the winner. Starting on the top left and moving clockwise:

ChipWhisperer, an embedded hardware security research device, goes deep into the world of hardware penetration testing. The versatile tool occupies an area in which all-in-one, wide-ranging test gear had previously been non-existent or prohibitively expensive for the small-shop hardware development that is so common today.

SatNOGS, a global network of satellite ground stations. The design demonstrates an affordable node which can be built and linked into a public network to extend the benefits of satellites (even amateur ones) to a wider portion of humanity.

PortableSDR is a compact Software Defined Radio module that was originally designed for Ham Radio operators. The very nature of SDR makes this project a universal solution for long-range communications and data transfer, especially where more ubiquitous forms of connectivity (cell or WiFi) are not available.

ramanPi, a Raman spectrometer built around a Raspberry Pi with some 3D-printed and some off-the-shelf parts. The design even manages to account for variances in the type of optics used by anyone building their own version.

Open Source Science Tricorder, a realization of science-fiction technology made possible by today's electronics hardware advances. The handheld is a collection of sensor modules paired with a full-featured user interface, all in one handheld package.
From Many, Five

The nature of a contest like the Hackaday Prize means narrowing down a set of entries to just a few, and finally to one. But this is a function of the contest and not of the initiative itself. The Hackaday Prize stands for Open Design, a virtue that runs far and deep in the Hackaday community. The 50 semifinalists, and over 800 quarterfinalists, shared their work openly and by doing so provide a learning platform, an idea engine, and are indeed the giants on whose shoulders the next evolution of hackers, designers, and engineers will stand. Whether you submitted an entry or not, make your designs open source, interact with the growing community of hardware engineers and enthusiasts, and help spread the idea and benefits of Open Design.

Once upon a time there was a store where you could find the most amazing Hackaday shirts and other swag. If you managed to get one of the rare Jolly Wrencher-adorned shirts back then, it's probably about worn out by now. Prepare to rejoice: Hackaday has a completely new store packed with T-shirts, tools, and stuff to help you fill up those waking hours with hardware hacking goodness. We've had a little fun over the last couple of days with posts that hint (maybe a bit too subtly?) that this was coming. We always try to have a little bit of fun for those of you who are really paying attention. Now we're wondering who will be the first to implement the one-time pad as a dedicated piece of hardware… project ideas need to come from somewhere, right? Take a look around the general store and you'll see this time we have more than just stuff you wear. Hackers need tools and we've selected a small but inspiring group of must-haves. The kits and toys we've selected are surely a rabbit hole of personal challenges and evolving hacks for you. And the best part is that these choices are one more way for us to promote the virtue of Open Design (it is the way).
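For anyone who hasn't met it before, the one-time pad mentioned above is about the simplest cipher there is: XOR the message with a truly random key of the same length, used exactly once. A minimal software sketch (the function name and sample message are ours, purely illustrative):

```python
import secrets

def otp_xor(data: bytes, key: bytes) -> bytes:
    # XOR each message byte with the matching key byte. Applying the
    # same operation twice with the same key recovers the original.
    if len(key) != len(data):
        raise ValueError("key must be exactly as long as the message")
    return bytes(d ^ k for d, k in zip(data, key))

message = b"attack at dawn"
key = secrets.token_bytes(len(message))  # random, and must never be reused

ciphertext = otp_xor(message, key)
recovered = otp_xor(ciphertext, key)
print(recovered)
```

The security argument rests entirely on the key being truly random, as long as the message, kept secret, and never reused; a hardware version would mostly be a key-generation and key-storage problem.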
The only question now is what other open hardware do you want to see added to those ranks?

It seems like I'm constantly having the same discussions with different people about the Open Design aspect of The Hackaday Prize. I get arguments from both sides; some attest that there should be no "openness" requirement, and others think we didn't set the bar nearly high enough. Time to climb onto my soapbox and throw down some sense on this argument.

Open Design is Important

When you talk about hardware there is almost always some software that goes into making a finished product work. Making the information about how a product works and how it is manufactured available to everyone is called Open Design; it encompasses both Open Hardware and Open Source Software. Open Design matters! First of all, sharing how something is designed and built goes much further than just allowing others to build their own. It becomes an educational tool and an innovation accelerator (others don't need to solve the same problems over and over again). When using a new chip, protocol, or mechanical part, you can learn a lot by seeing how someone else already did it. This means faster prototyping, and improvements on the design that weren't apparent to the original creator. And if it breaks, you have a far easier time trying to diagnose and repair the darn thing! We all benefit from this whether we're creating something or just using an end product, because it will work better, last longer, and has the potential to be less buggy or to have the bugs squashed after the fact. There is also peace of mind that comes with using Open Design products. The entries in The Hackaday Prize need to be "connected devices". With open design you can look at the code and see what is being done with your information. Can you say that about Nest? They won't even allow you to use the thermostat in a country that hasn't been pre-approved by decree from on high (we saw it hacked to work in Europe a few years back).
Now it has been rooted so that you can do with it what you please. But I contend that it would have been better to have shipped with options like this in the first place. Don't want to use Nest's online platform? Fine, let the consumer own the hardware they pay for! My wager, since the day Google's acquisition of Nest was announced, is that it will become the "router" for all the connected devices in your home. I don't want the data from my appliances, entertainment devices, exercise equipment, etc., being harvested, aggregated, and broadcast without having the ability to look at how the data is collected, packaged, and where it is being sent. Open Design would allow for this and still leave plenty of room for the big G's business model. I find it ironic that I rant about Google yet it would be pretty hard to deny that I'm a fanboy.

Decentralize the Gatekeeper

I'm going to beat up on Google/Nest a bit more. This is just an easy example since the hardware has the highest profile in the field right now. If Nest controls the interface and retains the power to decide whose devices can participate, the users lose. Imagine if every WiFi device had to be blessed by a single company before it would be allowed to connect to any access point. I'm not talking about licensing technology or registering a MAC address for a chip. I'm talking about the power, whether abused or not, to shut any item out of the ecosystem based on one entity's decisions. If connected devices use a known standard that isn't the property of one corporation, it unlocks so many good things. The barrier for new companies to put hardware in the hands of users is very low. Let's consider one altruistic part of this: Open Design would make small-run and single-unit design a possibility.
Think about connected devices specialized for the physically challenged; the controller project makes specialized controls for your Xbox, so what about the same for your oven, dishwasher, the clock on your wall, or your smart thermostat? The benefits really show themselves when a "gatekeeper" goes out of business or decides to discontinue the product line. This happened when the Boxee servers were shut down. If the source code and schematics are available, you can alter the code to use a different service, build up your own protocol-compliant home server, or even manufacture new devices that work with the system for years to come. There are already pleas for belly-up manufacturers to open-source their designs as a final act. Hacking this stuff back into existence is fun, but isn't it ridiculous that you have to go to those lengths to make sure equipment you purchased isn't turned into a doorstop when they shut the company lights off? To drive the point home, consider this Home Automation System from 1985 [via Reddit]. It's awesome, outdated, and totally impossible to maintain into the future. I'm not saying we should keep 30-year-old hardware in use indefinitely. But your choices with this are to source equally old components when it breaks, or trash everything for a new system. Open Design could allow you to develop new interfaces to replace the most-used parts of the system while still allowing the rest of the hardware to remain.

Why not disqualify entries that aren't Open Hardware and Open Source Software?

Openness isn't a digital value

Judging preferences are much better than disqualifying requirements. This is because "openness" isn't really a digital value. If you publish your schematic but not your board artwork, is that open? What if you're using parts from a manufacturer that requires a Non-Disclosure Agreement to view the datasheet and other pertinent info about the hardware?
In addition to deciding exactly where the threshold of Open or Not-Open lies, we want to encourage hackers and companies to try Open Design if they never have before. I believe that 1% open is better than 0% open, and I believe that there is a "try it, you'll like it" experience with openness. If this is the case, The Hackaday Prize can help pollinate the virtue of Open Hardware far and wide. But only if we act inclusively and let people work their way toward open at their own pace. There are more benefits to Open than there are drawbacks. The biggest worry I hear about open-sourcing a product is that it'll get picked up, manufactured, and sold at a cut-throat rate. If you build something worth using, this is going to happen either way. The goal should be to make a connection with your target users and to act ethically. Open Design allows the user to see how your product works, and to add their own features to it. Most of the time these features will appeal to a very small subset of users, but once in a while the community will develop an awesome addition to your original idea. You can always work out a way to include that in the next revision. That right there is community; the true power of open.

Oh, for the day when we can stop repeatedly looking up our favorite drink recipes on Wikipedia. That day may be just around the corner, and you'll have your choice of single-click delivery or toiling away in the workshop for a scratch build. That's because Barobot is satisfying both the consumer market and our thirst for open hardware goodness. They're running a Kickstarter, but to our delight, the software and mechanical design files are already posted. Before you dig into the design files there's a really good look at the constituent parts in the assembly manual (PDF) — that's a lot of pieces! — and a tiny bit on the tech-stuff page. This reminds us of the Drinkmo we saw earlier in the year. That one came complete with the high-pitched whine of stepper motors.
We didn’t get to hear Barobot’s ambient noise in the promo vid after the break. But one place this desing really shines is a swiveling caddy that allows for a double-row of bottles in a similar footprint. One thing we’d be interesting in finding out is the cleaning procedure. If anyone know what goes into cleaning something like this let us know in the comments. Like it or not, a whole new wave of Hardware Startups is coming our way. Crowd Funding campaigns are making it possible for everyone with an idea to “test the waters”, tech-savvy Angel investors are eager to help successful ones cross over, and Venture Capitalists are sitting on the other side, always on the lookout for potential additions to their “hardware portfolio”. It’s these billion-dollar acquisitions that made everyone jump on the bandwagon, and there’s no going back. At least for now. That’s all great, and we want to believe that good things will come out of this whole frenzy. But instead of staying on the sidelines, we thought Hackady should get involved and start asking some hard questions. After all, these guys didn’t think they’d be able to get away with some nicely produced videos and a couple of high-res photos, right? For our first issue, we picked a relatively innocent target – Spark, the team behind the Spark Core development board. By embracing Open Source and Open Hardware as the core part of their strategy, Spark has so far been a positive example in the sea of otherwise dull (and potentially creepy) IoT “platforms”. So we thought we should give [Zach Supalla], CEO of Spark a call. For weeks we’ve been teasing you that something BIG was coming. This is it. Six months from now one hardware hacker will claim The Hackaday Prize and in doing so, secure the grand prize of a trip into space. You have the skills, the technology, and the tenacity to win this. Even if you don’t take the top spot there’s loot in it for more than one winner. 
To further entice you, there are eyebrow-raising prizes for all five of the top finishers, and hundreds of other rewards for those that build something impressive. You can win this… you just need to take the leap and give it your all.
Source: http://hackaday.com/tag/open-design/
Subject: B7) How can I nominate a new name for the list?

Contributed by Frank Lepore (NHC)

Since 1978, the United Nations' World Meteorological Organization, a group representing some 120 different countries, has used pre-determined lists of names for tropical storms for each ocean basin of the world. The Atlantic basin, which falls under Regional Association IV, has a six-year supply of names, with 21 names for each year. Why 21 names? Well, the letters Q, U, X, Y and Z are not used because names beginning with those letters are in short supply (you would need at least 3 male and 3 female names for each letter, plus a back-up supply for those retired). Think about it: how many men and women do you know whose names begin with these letters?

When a damage- or casualty-producing storm like Mitch, Andrew, or Katrina strikes, the country most affected by the storm may recommend to the World Meteorological Organization's Regional Association that the name be "retired." Retiring a name is an act of respect for its victims, and reduces confusion in the insurance, legal or scientific literature. A retired name is replaced with a like-gender name beginning with the same letter. For example, Honduras recommended (1998) that the name Mitch be retired and proposed the replacement name, Matthew, for consideration (and vote) by the 25 member countries of Regional Association IV. Seventy-seven names have been retired in the Atlantic basin.

The names used on the list must meet some fundamental criteria. They should be short and readily understood when broadcast. Further, the names must be culturally sensitive and not convey some unintended and potentially inflammatory meaning. The potential for misunderstanding increases when you figure that in the Atlantic basin there are twenty-four countries, reflecting an international mix of English, Spanish and French cultures.
Typically, over the historical record, about one storm each year causes so much death and destruction that its name is considered for retirement. This means that in a "normal" year, the odds are about 1 in 8 that any given storm's name will require replacement, given that over the last 57 years (of reliable record) we've averaged slightly over 8 tropical storms and hurricanes per season (actually 8.6). So it's more likely that letters/names toward the front of the alphabet (letters A through H) might be retired. The Region IV Naming Committee has a rather large file folder of nominated names that have already been submitted. The next time the need arises, and it's a storm affecting mainly the United States, the Committee will be casting about for a replacement tropical cyclone name. They will take out this file to make a selection. But as we say, it's pure chance from there.

Last updated: May 20, 2011
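The "1 in 8" figure above is simple enough to check in a few lines; the averages come from the text, and the variable names are ours:

```python
# Back-of-the-envelope check of the retirement odds quoted above.
# Assumptions taken from the FAQ: roughly one name retired per season,
# against an average of 8.6 named storms per season.
storms_per_season = 8.6
retired_per_season = 1.0

# Chance that any given named storm ends up retired.
p_retire = retired_per_season / storms_per_season
print(f"Per-storm retirement odds: about 1 in {1 / p_retire:.1f}")

# Why 21 names per list: 26 letters minus Q, U, X, Y and Z.
unused_letters = {"Q", "U", "X", "Y", "Z"}
names_per_list = 26 - len(unused_letters)
print(f"Names per seasonal list: {names_per_list}")
```

The result, roughly 1 in 8.6, is the "about 1 in 8" figure quoted in the text.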
Source: http://www.aoml.noaa.gov/hrd/tcfaq/B7.html
- Apollo 11, headline in The Glasgow Herald, July 1969.
- Daily Mirror "book of space", first published in 1970. Intended for children, but contains a full technical specification of the stages of the Saturn V rocket.
- Apollo 8, The Times colour supplement (very unusual for that time), January 1969. I took this in to my class in Primary 2, and it was on the wall till Easter. I fished it out of the bin after, bafflingly (to me), the teacher did not give it back to me.
- Apollo 11, Sunday Times magazine, July 1969. This is one of the famous photos of Buzz Aldrin saluting the American flag. There is widely available film from the colour camera in the Lunar Module that shows how the flag was erected, including the infamous wire along the top. It even shows the moment this photo was taken.
- Apollo 11, Sunday Times and Observer magazines, July 1969. Only once the astronauts had returned to Earth could the memorable photos be published.
- Apollo 11, contemporary newspaper advert, 1969. To this day Omega advertise that only their watches were worn on the moon. However, Buzz Aldrin did not wear it on the outside of his spacesuit.
- Apollo 14, The Times, February 1971. This was the mission where the on-board computer on the Lunar Module was re-programmed during the descent to the surface to prevent an abort alarm. Alan Shepard became the oldest person to walk on the moon and celebrated by playing some golf.
- Apollo 14, Radio Times, February 1971. The Radio Times was brimming full of information about every mission event, regardless of whether there was a broadcast associated with it.
- Apollo 16, Radio Times, April 1972. It is not clear how many of the public would have known the difference between "miles" and "nautical miles" or understood what "yaw" meant. Note also that 7.08pm was represented as 7.8pm in those days.
- Apollo 11, Observer single, July 1969. The only non-written way of re-living the moon landing. No cassettes, no video, no DVD, no Internet in 1969.
- Apollo 13, headline in The Times, April 1970.
- Apollo 13, The Times, April 1970. The headline is part of a two-page article explaining how the Apollo 13 rescue would work.
- Apollo 15, The Times, August 1971. Yet another iconic photograph from the moon - this time it's of James Irwin saluting (again) the flag. See the colour here. This mission was probably the most spectacular as the landing site was at Hadley Rille. It had mountains, craters, rocks and all, with a colour TV camera for the first time. See this photo being set up from the perspective of the lunar rover.
- Apollo summary, The Times, December 1972. This is a summary of all the missions to land on the moon. Even before the final Apollo was complete, people were asking if it had all been worth it.
- Apollo 17, Radio Times, December 1972. The final mission - blasting off on prime time TV in the US, but the middle of the night in the UK.
- Apollo 9, The Glasgow Herald, March 1969. The crew were picked up by Navy helicopters and returned to a nearby recovery ship.
- Apollo badges. Every Apollo mission had an associated badge which the crew wore on their space suits. The Apollo 17 badge is missing here as it had yet to fly. Much of the moon memorabilia and books were published before Apollo 17 to capitalise on the interest before the final mission had flown.
- Apollo 13, Radio Times, April 1970. This is the original mission plan, showing how and when Apollo 13 would land on the moon. The explosion that caused the mission to be aborted occurred two days after blast-off, so the plan completely changed after the entry for Monday 13 April.
- Apollo 15, Radio Times, July 1971. James Burke and Patrick Moore were the stalwarts of the BBC moon specials. Burke, a true enthusiast, and Moore, a true astronomer, brought the whole show to life. They are seen here sitting in the lunar rover (or lunar buggy) which Apollo 15 took to the moon for the first time.
<urn:uuid:d722eca3-1d06-4559-bb04-b2807e54e8c5>
CC-MAIN-2016-26
http://www.photo-transport.co.uk/moon/moon.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783404826.94/warc/CC-MAIN-20160624155004-00193-ip-10-164-35-72.ec2.internal.warc.gz
en
0.949312
948
2.78125
3
In Search of the Elusive Jaguar

Image: Staffan Widstrand / Corbis

Wildlife tourism is heavily focussed on large predators. Perhaps, like Ernest Hemingway, wildlife tourists want to wrestle these ferocious creatures bare-handed, although most seem to prefer safer vantage points. If you're rolling your eyes and thinking "Indian tourists!", rest assured that this is not only a subcontinental phenomenon. In the Americas, the place of the tiger is occupied by the jaguar (Panthera onca), the apex predator, surpassed in size only by lions and tigers. The best and most reliable place to see jaguars is Brazil's Pantanal.

The Pantanal (from the Portuguese pântano, swamp or marsh) is one of the largest wetland areas in the world, covering up to 195,000 sq km, mostly in Brazil. Largely submerged during the rains, it consists of distinct ecosystems that support a rich array of plant life and wildlife species. However, the Pantanal is not a pristine wilderness; it includes agricultural properties, primarily raising cattle. Assisted by money from NGOs and donors (expiating the guilt of having destroyed nature in their homelands), many farms now engage in tourism—sometimes their main activity—and support conservation.

Third Time Lucky

On our first visit to South America almost 15 years ago, we found the pristine rainforests of Peru and Ecuador beautiful and frustrating in equal measure. Hallmark mammal and bird species were difficult to spot in the thick and dark forests. We recorded many 'heards', not as many 'seens'. Although the areas we visited were known to be home to jaguars, we never saw any. We got tired of the familiar refrain, "If you had only been here last week…" Rachel, our guide in Ecuador, advised us that many of the species are more easily and better seen in the wetlands bordering the Amazon basin. We travelled subsequently to Venezuela's Llanos, the northern equivalent of the Pantanal, and the Central American rainforests of Costa Rica. But the jaguar eluded us.
The closest we came to encountering the predator was in Venezuela. At Hato Piñero, we went out before dawn. We didn't see a jaguar, though we did glimpse the rotund, sizeable rear of a startled tapir retreating into the forest. On the way back, Otto, our guide, checked the dirt road. There it was—fresh, full-sized jaguar paw prints right on top of our tyre tracks. The cat had walked by after we had driven past. But try as we might, we never found the jaguar.

On our third trip to South America, we are optimistic. The Pantanal is home to one of the largest jaguar populations—and also 300 mammals, 1,000 species of birds, 480 reptiles, 50 amphibians and 325 varieties of fish.

Carrying the Dead

We plan to spend two weeks in the northern Pantanal in August/September, towards the end of the dry season, to maximise our wildlife-sighting opportunities. But our first target is something rarer than a jaguar. Serra das Araras, a few hours' drive from the city of Cuiabá, is billed as an ecological reserve. Though severely degraded by mining and cattle ranching—there are only remnants of the original spectacular plateaus, forests and savannah—the cattle farm is the nesting site of a pair of Harpy Eagles.

Named after the Harpies, the winged spirits of Greek mythology that carried the dead to Hades, an adult Harpy Eagle has a wingspan of more than two metres. The larger females weigh up to 9 kg, while the males are around 6 kg. Only the Philippine eagle and Steller's sea eagle are larger. The Harpy Eagle's large talons and power allow it to catch and kill large prey, traditionally monkeys and sloths picked from the upper canopies of tropical lowland rainforests. Threatened by loss of habitat from logging, agriculture and mining, and also hunted by farmers seeking to protect livestock, they have been wiped out in many parts of South and Central America.
<urn:uuid:b54ae5e5-e2eb-4945-b15a-2f26f61b9714>
CC-MAIN-2016-26
http://forbesindia.com/article/recliner/in-search-of-the-elusive-jaguar/35551/1
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395346.72/warc/CC-MAIN-20160624154955-00150-ip-10-164-35-72.ec2.internal.warc.gz
en
0.950395
912
2.6875
3
I'm having trouble using the POWER() function. Two integer columns hold a value and an exponent respectively. When I write a query that uses a negative exponent, the query returns 0. For example, the following query (which for simplification uses only constants): SELECT 2 * CAST(POWER(10, -2) AS DECIMAL(9,3)) returns .000. I expected the query to return .020. Why didn't the query return the expected result? To answer your question, let's first look at an excerpt from SQL Server Books Online's (BOL) description of the way the POWER() and EXP() exponential functions work: "The POWER function returns the value of the given numeric expression to the specified power. POWER(2,3) returns 2 to the third power, or the value 8. Negative powers can be specified, so POWER(2.000, -3) returns 0.125. Notice that the result of POWER(2, -3) is 0. This is because the result is the same data type as the given numeric expression. Therefore, if the result has three decimal places, the number to raise to a given power must have three decimals, too." Applying that explanation to your problem, you need to convert the POWER function's first argument to a decimal data type. To perform the conversion, you can choose either of the following alternatives: SELECT 2 * POWER(CAST (10 AS DECIMAL(9, 3)), -2) SELECT 2 * POWER(10 * 1.000, -2) That way, you apply the POWER() function to a decimal so that you can return a decimal.
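The typing rule the answer describes can be illustrated outside SQL Server. Below is a minimal Python sketch, assuming a hypothetical helper named sql_power; it is not Microsoft's implementation, only a model of the rule that POWER()'s result takes the data type (and decimal scale) of its first argument:

```python
from decimal import Decimal

def sql_power(base, exponent):
    """Hypothetical emulation of T-SQL POWER() typing: the result
    takes the type and scale of the first argument."""
    result = float(base) ** float(exponent)
    if isinstance(base, int):
        return int(result)                 # integer base -> result truncated to int
    if isinstance(base, Decimal):
        scale = -base.as_tuple().exponent  # preserve the base's decimal scale
        return Decimal(str(result)).quantize(Decimal(1).scaleb(-scale))
    return result

print(sql_power(10, -2))                     # 0     (the surprising truncation)
print(2 * sql_power(Decimal("10.000"), -2))  # 0.020 (cast the base first, as suggested)
```

Casting the exponent would not help; it is the first argument that must be decimal before the function is applied.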
<urn:uuid:21f89576-33d2-4aae-b852-448da8d33ff3>
CC-MAIN-2016-26
http://sqlmag.com/t-sql/using-power-function-negative-exponent
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395560.14/warc/CC-MAIN-20160624154955-00043-ip-10-164-35-72.ec2.internal.warc.gz
en
0.76408
349
2.625
3
A Guide to The Leader (Navasota) Ledger Book, 1892-1897

Founded in the 19th century, The Leader, also known as The Navasota Leader and The Weekly Leader, was a newspaper that served the city of Navasota and Grimes County, Texas. "About the Navasota leader." The Library of Congress. Accessed July 5, 2011. http://chroniclingamerica.loc.gov/lccn/sn88083899/.

The Leader (Navasota) Ledger Book, 1892-1897, consists of a single volume documenting the business affairs of the newspaper during the late 19th century. The ledger contains records of payments for subscriptions and the placement of advertisements.

This collection is open for research use. The Leader (Navasota) Ledger Book, 1892-1897, Dolph Briscoe Center for American History, The University of Texas at Austin. Basic processing and cataloging of this collection was supported with funds from the National Historical Publications and Records Commission (NHPRC) for the Briscoe Center's "History Revealed: Bringing Collections to Light" project, 2009-2011.
<urn:uuid:86f53074-8b2d-449f-a008-3b149ecaa5bb>
CC-MAIN-2016-26
http://www.lib.utexas.edu/taro/utcah/02906/cah-02906.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395548.53/warc/CC-MAIN-20160624154955-00188-ip-10-164-35-72.ec2.internal.warc.gz
en
0.885046
254
2.921875
3
James Cameron’s ‘Titanic’ sailed back onto theater screens last week sporting a fresh 3D makeover. This follows last year’s very successful run of ‘The Lion King 3D’. If similar projects continue to make money, we can no doubt expect to see more beloved classics subjected to the 3D treatment – not to mention that even many brand new movies are converted to 3D in post-production. But how exactly are these conversions actually done? The short answer, of course, is: “Computers!” The long answer is a little more complicated. As you probably know, 3D images are created by viewing a scene from two viewpoints slightly offset from one another, one that goes directly to the viewer’s right eye and one that goes directly to the left. Movies that are natively produced in 3D use camera rigs that capture both viewpoints on set. (CG animated movies render the two views separately.) However, a movie that was originally shot (or animated) in 2D only has one viewpoint on the action. The second camera view must be created artificially using software that interpolates what that view would look like based on cues in the existing imagery. Some of this can be automated based on generalized rules about how photographic depth usually works. For example: bright objects are typically closer than dark objects; large objects are often closer than small ones; objects with a hazy focus probably belong in the background, and so forth. Many 3D TVs, Blu-ray players and outboard video processors (like the recently-reviewed 3D-Bee) can do 2D-to-3D conversions in real time, but the results are erratic at best. While these devices will no doubt get better with time, a proper conversion job requires human interaction and guidance. The technicians, called stereographers, will look at the movie on a frame-by-frame basis and create a “depth map” or “depth script” for each frame that determines which objects get pulled forward, which get pushed backward, and how much in each direction. 
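The core mechanic behind a depth map can be shown with a toy sketch of depth-image-based rendering. This is emphatically not any studio's actual pipeline; the function name and the simple pixel-shift scheme are assumptions for illustration only:

```python
import numpy as np

def synthesize_right_eye(frame, depth, max_shift=8):
    """Shift each pixel horizontally in proportion to its depth value
    (0.0 = far background, 1.0 = near foreground) to fake the second
    eye's viewpoint. Nearer objects shift further."""
    h, w = depth.shape
    right = np.zeros_like(frame)
    shifts = (depth * max_shift).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x - shifts[y, x]      # near objects move left in the right-eye view
            if 0 <= nx < w:
                right[y, nx] = frame[y, x]
    return right                       # vacated pixels are "holes" a real pipeline must inpaint
```

Even this crude version shows why conversion needs human guidance: the shifted pixels leave holes behind foreground objects, and filling them plausibly is where most of the artists' labor goes.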
To convert ‘The Lion King’, 60 artists worked on the movie for four months. ‘Titanic’ was an even bigger project. That one required 450 people and two years of labor, at a reported cost of $18 million. As described in the ‘Titanic’ article:

Fortunately for those converting feature films, they already know how far away each object on the set was — or was supposed to be — when the movie was made. Production stills also provide additional information. While this means they don’t have to guess or estimate depth, humans still need to painstakingly assist the software in adding depth information to hundreds of thousands of frames.

Many of the necessary decisions are more artistic in nature than technical. ‘Lion King’ stereographer Robert Neuman explains:

The way I approach depth on the movie is to create a depth score, which is a similar process to the way that a film composer creates a musical score. A film composer uses the rises and falls of the score to echo the emotional content of the film. I try to do the same thing with depth in the movie… I equate stereoscopic depth to emotional depth. In other words, the shots in the depth script with a value of one get the minimum amount of depth. We’d pull out all the stops on shots with a value of ten by using as much depth as possible. Additionally, if there’s a scene where we’re supposed to feel detached from a character, then I put the character further back into the background. If we’re supposed to feel connected to a character, I bring them further forward. In this way, we’re not using 3D randomly. We’re using 3D as part of the narrative.

The amount of time, effort and skill expended on the project will mark the difference between a tasteful conversion like the above and some of the quick-and-dirty conversions that have plagued recent theatrical releases, such as the notorious ‘Clash of the Titans‘.
Back to that ‘Titanic’ article: One reason [James] Cameron says that previous 3D conversion efforts have failed is that they were shoehorned into an already busy production schedule, meaning the filmmaker didn’t have the dedicated attention needed to ensure the conversion came out the way they wanted. He also credits his learnings from Avatar with helping him know how to work on this conversion — experience which most other filmmakers don’t have. Yet when it comes to older films like these, even the best possible conversion will still remain limited by one unavoidable fact: These movies were not originally made with 3D in mind. When James Cameron directed ‘Titanic’, he chose his camera angles, lighting and staging based on the rules for 2D photography, which are not the same as those for 3D photography. Contrast this to a more recent movie that was converted to 3D in post-production, Tim Burton’s ‘Alice in Wonderland‘. Although Burton shot his film in 2D, he planned for 3D from the beginning, and had stereographers on set to advise him on the best camera angles and staging that would maximize the 3D impact. Neither ‘Titanic’ nor ‘The Lion King’ had that benefit. No matter how tastefully done, the conversions for those films were imposed after-the-fact and are, in a way, just the 21st Century’s version of colorizing black & white movies.
<urn:uuid:9bce3b67-0bc7-4fdb-81c4-7de01e0d6c44>
CC-MAIN-2016-26
http://www.highdefdigest.com/blog/converting-movies-to-3d/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396887.54/warc/CC-MAIN-20160624154956-00054-ip-10-164-35-72.ec2.internal.warc.gz
en
0.961525
1,172
3.046875
3
Henry Hazlitt wrote the classic economics primer Economics in One Lesson, originally published in 1946. The short February 9 news story “Toyota to leave Venezuela” could form the basis of a chapter applying “the lesson” (as Hazlitt does in Part Two of the book): Venezuela’s President Nicolas Maduro says he wants to speak to Toyota’s top official for Latin America after the carmaker said it would stop production in the South American nation. Toyota said on Friday it would soon halt production at its only assembly plant in Venezuela because the world’s largest automaker lacks hard currency to import parts due to government controls. The temporary shutdown of the Japanese car maker’s operations in the western city of Cumana is due to begin February 13 and last at least six weeks. “I have already directed Industry Minister (Wilmer Barrientos) to call in the head of Toyota for Latin America, or have somebody come from Japan,” Maduro said in an address on state television. The plant produced nearly 9,500 vehicles in 2013. Companies such as Toyota must go through a complex bureaucratic process to obtain dollars. Venezuela is only providing dollars at the official rate of 6.3 bolivars to the dollar to importers of designated priority goods such as food and medical supplies. Others who need dollars to pay overseas bills have to buy them at a higher rate at government-run auctions. Many companies have complained Caracas is not providing them with enough hard currency. The currency controls have led to shortages of a wide range of basic necessities, and fuelled an inflation rate that reached 56.2 per cent last year. Last year, 72,000 vehicles were made in Venezuela, down more than 30 per cent from 2012. 
USA Today picked up and elaborated on the story last week in “Venezuela car industry slips into idle.” Reviewing “the lesson” after thirty years, Hazlitt wrote: “The main problem we face today is not economic, but political.” As in the story above, Venezuela provides a perfect case study. The latest news from Venezuela adds this: “Venezuela leader expels US officials amid protests.” The story quotes Secretary of State John Kerry making another contribution to climate change: “Secretary of State John Kerry said Saturday that [opposition leader Leopoldo] Lopez’s arrest would have a ‘chilling effect’ on Venezuelans’ right to free expression.”
<urn:uuid:34e936de-118c-4074-b20b-5b1c19875fa0>
CC-MAIN-2016-26
http://www.powerlineblog.com/archives/2014/02/maduro-madness.php
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783404382.73/warc/CC-MAIN-20160624155004-00100-ip-10-164-35-72.ec2.internal.warc.gz
en
0.951773
532
3.140625
3
Tokyo, Japan -- Researchers have shown that bone marrow stem cells injected into a damaged inner ear can speed hearing recovery after partial hearing loss. The related report by Kamiya et al, Mesenchymal stem cell transplantation accelerates hearing recovery through the repair of injured cochlear fibrocytes, appears in the July issue of The American Journal of Pathology. Hearing loss has many causes, including genetics, aging, and infection, and may be complete or partial. Such loss may involve damage to inner ear cells called cochlear fibrocytes, which are fundamental to inner ear function. Some natural regeneration of these cells can occur after acute damage, leading to partial recovery of temporary hearing loss. But could such restoration be enhanced by using bone marrow stem cells, which can differentiate into various tissue-specific cell types? Dr. Tatsuo Matsunaga of National Tokyo Medical Center pursued this hypothesis by utilizing a well-characterized rat model of drug-induced hearing loss. This model specifically destroys cochlear fibrocytes and leads to acute hearing loss. Although partial recovery occurs over many weeks, high-frequency hearing remains extremely diminished. Using this system, the investigators examined whether direct administration of stem cells into the inner ear could restore the cochlear fibrocyte population and aid hearing recovery. Stem cells injected into the inner ear survived in half of the injured rats, where they migrated away from the site of injection toward the injured region within the inner ear. These stem cells divided in the new environment and expressed several proteins necessary for hearing, suggesting tissue-specific differentiation. Further, transplanted cells that migrated to the damaged area of the inner ear displayed a shape similar to that of cochlear fibrocytes.
Importantly, transplanted rats exhibited faster recovery from hearing loss, particularly in the high-frequency range.
<urn:uuid:38897da3-e831-4216-954c-351481c94a49>
CC-MAIN-2016-26
http://news.bio-medicine.org/biology-news-3/Can-you-hear-me-now-3F-Stem-cells-enhance-hearing-recovery-770-1/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396872.10/warc/CC-MAIN-20160624154956-00073-ip-10-164-35-72.ec2.internal.warc.gz
en
0.934054
386
3.140625
3
Luceo non uro, I shine not burn. Azure, a deer's head cabossed Or. A mount in flames Proper. (on a compartment embellished with stagshorn clubmoss) Two savages wreathed about the head and middle (with laurel Proper) each holding in the exterior hand a baton resting on the shoulder burning at the end, the hair likewise enflamed, all Proper. A stag's head cabossed Or.

The name Mackenzie, or MacCoinnich, as it appears in Gaelic, is generally taken to mean "son of Kenneth", and the original Kenneth, who lived in the thirteenth century, is said to have descended from a younger son of Gilleoin of the Aird. The MacKenzies were, without doubt, of Celtic stock and were not among the clans that originated from Norman ancestors. We know little about the generations immediately following Gilleoin, but in 1267 Kenneth was living at Eilean Donan, a stronghold at the mouth of Loch Duich. He must have been an important vassal, for the Earl of Ross appears to have married Kenneth's aunt and thus strengthened the relationship which already existed between the two families. At the start of the fifteenth century the Earldom of Ross came, through marriage, into the hands of the powerful family of MacDonald, who owned vast property on the west of Scotland and called themselves, at first without the King's authority, Lords of the Isles. In this way the Mackenzies became vassals, not of their kinsmen the Earls of Ross but of the MacDonalds. The Lords of the Isles were so powerful and claimed the allegiance of so many clans that they very soon came into conflict with the King. The earliest of their rebellions took place in 1428 after James I had imprisoned the Lord of the Isles and several chiefs who were attending a Parliament at Inverness. Alexander Mackenzie of Kintail was one of the chiefs who attended the Parliament of 1427; but, as he was very young at the time, James I sent him not to prison but to school at Perth, which was then one of the centers of the Court.
Alexander seems to have taken advantage of his education, for he was later called "the Upright" and his rule of the clan laid the foundations of its future power. Alexander refused to support his superior, the Lord of the Isles, in his later rebellions and the Mackenzies were prominent in defending the King. As a result the Chief obtained royal charters for his land, the earliest being in 1463. Thirteen years later, as the result of another rebellion, the Earldom of Ross was declared forfeit to the Crown, and in the same year Alexander Mackenzie was given charters of land to be held directly from the King. A last revolt by younger members of the MacDonald family, under Alexander of Lochalsh, was finally crushed, the leader captured by the Mackenzies, and the Lordship of the Isles itself was forfeit in 1493. This enabled the Mackenzies to obtain new land without antagonizing powerful neighbors and, what was perhaps more important, to acquire clear legal titles from their own superior, the Crown. Alexander's son, Kenneth, married a daughter of Lord Lovat; his son, John, married a daughter of Grant of Grant, while his son, another Kenneth, married Lady Elizabeth Stewart, daughter of the Earl of Atholl and niece of the Earl of Argyll. The power resulting from these alliances was seen after the Government of the infant Mary Queen of Scots had appointed the Earl of Huntly to be "Lieutenant of the North," while Argyll held a similar position in the west. In 1544 Huntly commanded Mackenzie of Kintail to raise his clan against Clanranald of Moidart and, when he refused, ordered an attack on Mackenzie. But the clans supporting Huntly - Grants, Rosses, and Macintoshes - were not inclined to fall out with Mackenzie and would not attack him. From that time onwards Kintail seems to have been recognized as a separate power in the northwest, independent of the Queen's Lieutenant.
In 1602, John Mackenzie of Kintail was appointed a Privy Councillor and in 1609 he was created Lord Mackenzie of Kintail. His independent power and influence in the north had been fully recognized. The clan country of the Mackenzies includes almost every kind of scenery and conditions to be found in Scotland. The original home of the clan is Kintail, which in Gaelic means "head of the sea." High hills fall steeply down to Loch Duich, giving little space for cultivation. At the mouth of Loch Duich stands Eilean Donan castle, no longer a stronghold of the Mackenzies, but the first home of their chief. Built on a rocky island at the narrow entrance of the loch, where the arrival of enemy boats could be seen and contested, it looks over the water towards Skye and the west. From Kintail the Mackenzies gradually pressed outwards. Eventually they reached the east coast and here they found a fertile country where the soil is excellent and the climate most favorable for agriculture. In contrast to the west, the slope from moorland to sea is very gradual, the hills are softly rounded and stand back from the water allowing the cultivation of a wide plain. The coastline runs back into several firths but the rivers flow gently through fields rather than hurl themselves over rocks as they do on the west. Although the Mackenzie country has a shorter coastline on the east, it is far richer. In later years the chiefs moved eastwards from Eilean Donan. In the fifteenth century they lived at Kinellan, near Strathpeffer, before making their home at Brahan, near Dingwall, where, in the early seventeenth century, they built themselves a castle that was as much a contrast to Eilean Donan as the surrounding country was to Loch Duich. Brahan was regarded as one of the most stately houses in Scotland when it was built among the meadows above the river Conon. In Scottish clans, the younger sons and grandsons of the Chief often founded what were known as cadet or landed families of their own. 
In the Clan Mackenzie there was an unusual number of these, 25 founded before 1600. After 1600 only five main cadets appeared - Roderick of Coigach, Alexander of Kilcoy, and Alexander of Coul being brothers of Lord Mackenzie, Simon of Lochslin, his son, and John of Gruinard, his great-grandson. These five founded another sixteen landed families within the clan. The descendants of Roderick of Coigach rose to become Earls of Cromartie. In 1609, as previously described, the Chief was created Lord Mackenzie. Fourteen years later, in further recognition of his power in the north, he became Earl of Seaforth, taking his title from a sea-loch in Lewis. This Earl had succeeded his father at the age of eleven and his affairs were very ably managed by his uncle, Roderick of Coigach, known as the Tutor of Kintail. Coigach was an excellent landlord but severe, as is recorded in the saying, "There are but two things worse than the Tutor of Kintail - frost in spring and mist in the dog days." Under his administration the Mackenzie estates increased in value while the influence of the first Earl was extended by his marriage to Lady Margaret Seton, daughter of the Lord High Chancellor of Scotland. Before the death of the first Earl of Seaforth in 1633, the fortunes of the Mackenzies, chief and clan, were at their highest peak. Coigach was also able to lay the foundation of the success of his own family. He himself was knighted, his son, Sir John Mackenzie of Tarbat, became a Baronet, while his grandson was created Viscount Tarbat and later Earl of Cromartie. In commemoration of the fact that Coigach, the first property of the family, had been inherited through marriage with the MacLeods of Lewis, Cromartie's son took the title of Lord MacLeod. The history of the Mackenzies in the seventeenth and eighteenth centuries naturally centers on the fate of the Stewart kings. The struggle opened in the 1630's, when the power and influence of the clan were at their highest. 
The cause of the trouble in Scotland at first appeared to be purely religious and there were many, including Seaforth, who were unwilling at first to seem disloyal to the Church and the Covenant they had signed for its defense. The Mackenzies therefore wavered during Montrose's campaigns of 1644 and 1645 and it was not until just before the tragedy of the death of Charles I that Seaforth, fully realizing that he must choose between the Covenant and the King, took up the Stewart cause which was ultimately to ruin his family. In 1649, having placed his family in safety in Lewis, Seaforth joined Charles II in exile in Holland. Seaforth's eldest son ran away from college at Aberdeen to gather recruits from the clan. Several of the cadets followed him to join the King and were present with their men at the battle of Worcester in 1651 when Cromwell defeated Charles. In 1652 this son, now Earl of Seaforth, was among the chiefs who were plotting a rising against Cromwell's government of Scotland. It was Seaforth who opened hostilities in Lewis in May 1653. General Monck eventually defeated the Royalists at Loch Garry. The leaders managed to hold out for several months longer, but in the end they capitulated. The treaty between Cromwell and Seaforth was signed in January 1655 and Seaforth, together with Coul, Applecross and Lochslin, was allowed to go free after providing certain financial security. These seem easy terms but, apart from the loss in man-power sustained by the clan, the lands of Kintail, Lochbroom, Strathgarve, Strathconan, and Strathbran had already been burnt "as a lesson." The return of Charles II in 1660 brought peace to the Highlands and prosperity to Clan Mackenzie; but twenty-eight years later, when James II fled the country, Seaforth, now the son of the college boy of 1649, went with him. 
By the time Seaforth had arrived back in Scotland to gather his clan in support of the rising of 1689, Viscount Dundee, the Jacobite leader, had died at the battle of Killiecrankie and the struggle was over for the time being. But King William allowed no chances for further plotting; Seaforth was imprisoned for seven years, and garrisons were placed at Brahan and at Castle Leod, the home of Lord Tarbat. As soon as he was liberated Seaforth returned to the Stuart Court in France, where he died in 1701. His son William, the 5th Earl, was perhaps the most devoted Jacobite of all his family. Returning to Scotland about 1713, he was soon involved in the plots. In 1715 Seaforth went north to raise his clan, but found his return blocked. It was not until Seaforth had collected about 3000 men, including other clans as well as his own, that he could fight his way south, leaving some of his clan under Coul and Gruinard to hold Inverness. The Mackenzies joined the Jacobite army in time to take part in the battle of Sheriffmuir, where many of them were killed. A few days earlier Coul had been forced to yield Inverness, the garrison was compelled to swear not to bear arms against the Hanoverians and, as a precaution, Brahan was once again occupied, together with Eilean Donan. Seaforth returned north after the defeat at Sheriffmuir and tried to raise another army, but the government forces were too strong and the rising was already fading out elsewhere. Seaforth himself escaped to France, but his estates, with those of all the landowners who had taken part in the rising, were forfeit. In 1718 a large-scale invasion of England by the Duke of Ormonde was being planned there and it was decided that the Highlanders should create a diversion in the north to draw off some of the Hanoverian troops. Jacobite headquarters were established at Eilean Donan castle in March 1719.
But Ormonde's fleet had already been scattered by a gale and the English blew up the castle and set fire to all the supplies. At the beginning of June Seaforth returned from a recruiting drive with at least 500 of his clan only a few days before the Hanoverian army attacked the Jacobites in Glen Shiel. Seaforth and Coul each commanded over 200 Mackenzies, the former on the extreme left and high up the hillside. Seaforth was attacked first and Coul with his men came to help him, but the Jacobite army had already been defeated and the Mackenzies were pursued. Seaforth had been severely wounded at the battle, but he managed to escape once more to France. A week after the battle of Glen Shiel the English commander wrote that he was "taking a tour through all the difficult parts of Seaforth's country to terrify the Rebels by burning the houses of the Guilty." The inevitable result of all this fighting was not only to reduce the man-power of the clan but also to impoverish those remaining. In 1725 General Wade reported that the Seaforth tenants, who had been among the richest of any in the Highlands, were now poor, through neglecting their business and applying themselves to the use of arms. He might as justly have said that they were poor through having their houses and crops burnt by government troops as a lesson. Seaforth died in 1740 and was succeeded by his son Kenneth, known as Lord Fortrose. The extreme poverty of the clan and the chief goes far to explain why Lord Fortrose did not lead the clan to the help of Prince Charles Edward in 1745. He decided that he could not again hazard the fortunes of his clan and family and Duncan Forbes of Culloden even managed to persuade him to recruit a few officers for an Independent Company under the Hanoverian government. Meanwhile the Earl of Cromartie and his son, Lord MacLeod, decided to call out the clan and succeeded in raising about 500 men. 
Cromartie's Regiment, as the force was called, joined the Prince at Stirling in January 1746 and fought at the battle of Falkirk. On 15th April, the day before the battle of Culloden, they were defeated at Dunrobin and the Earl of Cromartie, Lord MacLeod, and many others were taken prisoner. During the later part of the eighteenth century it seemed as though the family of Mackenzie of Seaforth might again rise to lead the North, but the losses they had suffered eventually proved too heavy. Lord Fortrose died in 1761 and was succeeded by his son, who was once more created Earl of Seaforth. It was this Earl who raised and commanded a regiment of Seaforth Highlanders when the Government decided to seek recruits among the clans for service in the war of American Independence. The 78th Regiment, as it was first called, was raised in 1778 from men on the Seaforth and other Mackenzie estates. The Earl of Seaforth, having raised his men, sailed with them to India in 1781, but unfortunately died there a few months later. He had no son and was succeeded by the descendants of his father's great-uncle, a younger son of the 3rd Earl. This line was represented by Thomas Humberstone Mackenzie, who died within two years of succeeding to the estates, to be followed by his brother Francis. In Francis Humberstone Mackenzie the clan had one of its ablest chiefs, but even he failed to resuscitate the clan. The vast Seaforth estates provided very little money-rent, the Highlanders were without means to increase this, and the chief lacked the capital needed to undertake the extensive schemes of improvement and development which alone would benefit his people. Many Mackenzies found their way overseas at this time. On his death the estates passed to his elder daughter, whose husband, Admiral Sir Samuel Hood, had recently died while serving in Indian waters.
Lady Hood afterwards married James Stewart of Glasserton, grandson of the 6th Earl of Galloway and their son assumed the name of Stewart-Mackenzie, but there were no male heirs in several generations. Colonel James Francis Stewart-Mackenzie was created Baron Seaforth of Brahan in 1921, but this title became extinct on his death two years later. His niece, Lady Middleton, had two sons; the elder took the name of Stewart-Mackenzie of Seaforth on succeeding to Brahan, but both were killed in action in 1943. The Earldom of Cromartie, which was forfeited after 1745, was revived in 1861 in favor of Anne, only daughter of John Hay Mackenzie of Cromartie, whose ancestors had inherited the estates, including Castle Leod in Strathpeffer, from Lord MacLeod. The first Countess was Mistress of the Robes to Queen Victoria and later married the Duke of Sutherland. The Cromartie honors, descending through their second son Lord Tarbat, were confirmed to his daughter Sibell in 1895. After holding them for more than sixty years, she was succeeded in 1962 by her son Roderick Grant Francis Mackenzie as 4th Earl of Cromartie. In 1979 Sir Roderick was confirmed as Caberfeidh (chief) of the Mackenzies. Roderick died December 21, 1989, and his son John R. Mackenzie became Earl of Cromartie, and Caberfeidh. In 1991, under the leadership of Caberfeidh, the Clan Mackenzie announced plans to restore the original portion of Castle Leod, built at the end of the 15th century. The restored Castle will include a Clan genealogical center and will be open to the public. The Earl and his family will continue to live in an extension to the Castle built in the Victorian and Edwardian periods.
Name Variations: Charles, Charleson, Cluness, Clunies, Cromarty, Iverach, Iverson, Ivory, Kenneth, Kennethson, Kynoch, MacAweeney, MacBeolain, MacConnach, MacIver, MacIvor, MacKenna, MacKenney, MacKenzie, McKenzie, MacKerlich, MacKinney, MacMurchie, MacMurchy, MacQueenie, MacVanish, MacVinish, MacVinnie, MacWeeny, MacWhinnie, Murchie, Murchison, Smart
This course offers an introduction to queer and feminist film studies, focusing on several key genres, directors, and themes in transnational queer and feminist film cultures and scholarship. We will explore what makes a particular film or media practice "queer" and/or "feminist," and what role media production, distribution, and exhibition have in the process. We will examine constructions of sexuality, gender, race, and nation in a variety of films and investigate how transnational queer and feminist cinemas can both participate in and resist dominant ideas about sexuality, imperialism, race, gender, politics, and community. This course is not an introduction to film studies, but does spend time introducing basic film concepts (editing, cinematography, mise-en-scene, etc.) so that students can apply them to the films we watch. In the course students will learn to incorporate formal film analysis in an analysis of ideology, production, circulation, and consumption, and will develop the skills to construct compelling arguments about the politics of cinema. No prior film studies experience is required.
2014-01-08 12:55:36 by Janet Haas, RN, CDE, as posted on the inside.akronchildrens.org blog. With the flu season upon us, you may worry about your child with diabetes. Although diabetes doesn't predispose children toward getting the flu, it may be slightly more challenging to manage their diabetes if they develop nausea and vomiting. Feel free to call our Center for Diabetes and Endocrinology (330-543-3276) with questions about nausea and vomiting. For lesser concerns, such as fever, sore throat and earache, we encourage you to call your child's primary care doctor. If your child is vomiting or having diarrhea, it's easy for her to become dehydrated. Have her drink at least 4 ounces of fluid every hour. If blood sugar is high, give carb-free fluids, such as plain water, Powerade Zero or sugar-free popsicles. If blood sugar is low, give regular popsicles or 4 ounces of fluid containing sugar. You can alter these recommendations if urine ketones are present. Please consult our center for more information and request a chart with specific instructions on what to do when your sick child has certain levels of blood glucose and ketones.
You’d think Science, knowing our history of continental land bridges and pre-historic migrants overwhelming natives, would have consensus on how many thousands of years it takes something to dominate its surroundings to become the new “native” – but you’d be wrong … The latest science involving Didymo rethinks the “invasive” label, as examination of the fossil record of lakes and streams afflicted by the diatom are finding the Didymo has been resident on five of seven continents for many thousands of years. The Delaware River shows Didymo having been present for tens of thousands of years, rather than recently introduced by fishermen. Dissolved Phosphorus can dip below its normal threshold via numerous temporal phenomena, and with that change in water chemistry, triggers the visual “blooms” that gives the infestation its characteristic unappealing blanket. As quickly as water chemistry is restored, the blooms vanish, explaining one of the great mysteries of Didymo infestation. Moreover, fossil and historical records place D. geminata on all continents except Africa, Antarctica, and Australia; records place D. geminata in Asia (China, India, Japan, Mongolia, Russia), Europe (Denmark, Finland, France, Germany, Ireland, Italy, Norway, Poland, Portugal, United Kingdom, Sweden), and North America (Canada and the United States), and historical records dating back to the 1960s place D. geminata in South America (Chile; Blanco and Ector 2009, Whitton et al. 2009). The recent blooms of D. geminata are found on each of these continents, where fossil or historical records have been documented, which indicates that attributing all blooms to recent introductions or to range expansion is incorrect. … and as the last article mentioned, our collective angst in approaching our respective legislatures was a tad premature … In fact, citing the threat of human-induced translocations of D. 
geminata or other unwanted organisms, seven US states (Alaska, Maryland, Missouri, Nebraska, Rhode Island, South Dakota, and Vermont), Chile, and New Zealand have passed legislation banning the use of felt-soled waders and boots in inland waters (e.g., the 1993 New Zealand Biosecurity Act, Chile's law no. 20.254, Vermont 2013 Act no. 130 [H.488]). Although such restrictions may reduce introductions of other deleterious aquatic microorganisms, the connection to the spread of D. geminata within its native range seems dubious. What's even more interesting is that the final definitive science will employ DNA sequencing of the respective colonies to see which continents have unique strains, and which continents may have sourced strains carried by everything from humans to migrating waterfowl. The assertion that the recent blooms are caused by inadvertent introductions of D. geminata cells by humans comes from frequent reports of blooms in areas that are used for recreation or monitoring by various agencies (Bothwell et al. 2009). Although Kilroy and Unwin (2011) reported a correlation between the ease of river access and D. geminata blooms in New Zealand, this has not been found in North American studies. In fact, systematic observations at both rivers with frequent human activities and remote rivers not heavily used for recreation or monitoring reveal no association between human activities at a river and blooms in Glacier National Park, in Montana (Schweiger et al. 2011). Moreover, pathways for introducing D. geminata cells have existed for decades (e.g., felt-soled shoes; the transport of fish, their eggs, and water from areas where D. geminata is determined to be native on the basis of fossil records), making inadvertent introductions by humans difficult to explain, given the recent worldwide synchrony of blooms. Really good article for the lay person given the science is common sense and easy to follow. I recommend you read it and draw your own conclusions.
As I adore a good conspiracy theory, I find it equally interesting that our fishing media and conservation organizations have published nothing on how scientists are reconsidering earlier theories as more concrete observations accumulate. I'm sure those who insisted we act responsibly, by first purchasing new wading shoes, donated most generously …
March 21, 2014 Federal agencies that manage scientific collections such as space rocks, fossils and animal tissue samples have six months to write draft policies describing how those collections will be made more accessible to the public online, according to a White House memo. The memo from John Holdren, director of the White House’s Office of Science and Technology Policy, envisions a central clearinghouse for digital information about government’s scientific collections developed, in part at least, by the Smithsonian Institution. Those policies should be consistent with earlier guidance requiring agencies to make their data open to the public and machine readable whenever possible, the memo said. When government information is published in open formats, it's easier for nongovernment groups to parse through large volumes of data to gather educational insights or to build Web and mobile tools that deliver targeted information to the public. Local science teachers, for example, could comb through such a database to gather photos and other information about geological samples from their area. “These collections are public assets,” the White House said in a blog post announcing the memo. “They play an important role in promoting public health and safety, homeland security, trade, and economic development, medical research, resource management, education, and environmental monitoring…For the American public, students and teachers, they are also treasure troves of information ripe for exploration and learning.” The draft policies should include “a strategy for providing online information about the contents of the agency’s scientific collections and, where appropriate, for maximizing access to individual objects in digital form for scientific and educational purposes” as well as the agency official responsible for carrying out the policies, the memo said. 
The online collections should also include metadata, the memo said, describing where a sample came from, when it was collected and other information.
During the scores of commemorations to mark the 100th anniversary of the outbreak of the First World War we remember the men, young and old, who served and died in the war. We also remember the wounded, in mind and body, crippled from their service. We remember, too, the voluntary spirit of those women whose war efforts supported these men. We will hear, as we have done for years past, how the nation, which had been federated for only 14 years, was "born" at Gallipoli. But how much of the divisions at home during that war will be acknowledged? It's a story familiar to historians of the period, but one that is largely absent from our public remembrances. Our amnesia risks perpetuating the glorification of war. That glorification underpinned the kneejerk jingoism that sent men to their deaths. In towns across Australia men were pressured into enlisting by patriotism, yes, but also by local newspapers, recruitment drives, white feathers and bluster. The repercussions of these pressures had devastating effects on families and towns. The unanimity of August and September 1914 vanished in the years that followed. The war engendered a hard, intolerant, attitude where volunteers were lauded and "slackers" and "shirkers" denounced, regardless of their circumstance. As recruitment numbers declined, these pressures intensified and for many in country towns it became intolerable. One country paper insisted that "nothing short of conscription will shunt many of our burly young manhood into the firing line. Some of their cowardly skins would be all the better for a little Turkish bronzing." Despite the official age of enlistment being 21, younger men, who could enlist with parental permission, quickly became the target of recruitment agencies. These groups often joined with police in strong arming teachers and businesses to help identify those between 18 and 21 – men who did not yet have the vote but who would be cannon fodder for the war. 
The newspaper in Orange, in central west NSW, made much fun of the young man who, when asked by police why he had not enlisted, answered simply that he was "frightened". In that same town, Sir Neville Howse, the venerated VC recipient from the Boer War, sent a cable from Gallipoli calling on people to "ostracise every healthy young man who does not volunteer immediately". In the days that followed, dozens were sent white feathers, signals of their perceived cowardice. These experiences were not unique to Orange. After Prime Minister Hughes announced a vote on the question of conscription, relationships in many towns across Australia turned toxic. The boycotting of businesses supporting either side, and the rowdy public debates before the conscription plebiscite, amply demonstrate the divisions. So does the result (1,160,033 against, 1,087,557 for). Hughes' setback split the Federal Government, led to the formation of a new political party, and foreshadowed a second plebiscite at the end of 1917. Conscription was again rejected. By 1918, with war-weariness entrenched, some were prepared to step beyond the bounds of the anti-conscription struggle and actively call for peace. One such brave soul was the Congregational Minister, Thomas Roseby. His services on "Peace and War" in his parish church attracted pacifists and loyalists alike: on one occasion he was attacked, mid-sermon, by a group of returned soldiers. Three months shy of war's end, Roseby was silenced by the draconian War Precautions Act, which made it an offence to do or say anything "likely to prejudice recruiting". In late November, Sir Neville Howse, who three years earlier stated that "every able young man who does not enlist should be sterilised", returned to Australia and now called for the "unpleasantness and friction" of the war years to end. By forgetting our shared divisive past, and by not celebrating the dissenting voices in our history, we have fulfilled his wish. 
The Great War was a tragedy of immense proportions. Of the millions of deaths worldwide, over 60,000 families in Australia were left without a son, father or brother. David Noonan's research (The Age, 28 April 2014) tells us that "four out of five" soldiers who survived the war were "damaged or disabled in some way". We, then and now, falsely connect the blood sacrifice of the war with the coagulation of states into a nation. If Australia "became a nation" at Gallipoli it was a very divided one at home. The glory of war sent men to their deaths. Opposition to conscription saved many others from the same fate. Maybe it is now time to recognise that, in the long run, it was the dissenting voices that were right about the tragic folly we call the Great War, and that to repeat the jingoistic bluster of the time is to dishonour the sacrifices of those young men, and of the many citizens who resisted the pressure to enlist. Julie Kimber teaches politics and history at Swinburne University and is the co-editor of the Journal of Australian Studies.
With Joseph Stalin’s death in 1953 and the rise to power of Nikita Khrushchev, the Soviet government opened a period of episodic reforms that became known as “The Thaw.” Between alternating years of openness and years of constriction, artists managed to find independent avenues for self-expression. In twenty-five years of complex shifts in the political, cultural and economic life of the Soviet Union, there was space for the development of a personal voice, even in one of the most closely supervised areas of Soviet culture – photography. These reforms created the possibility of closer contact with non-Communist nations, including the United States, which presented two important and wildly popular U.S.-organized art exhibitions in Moscow in 1959 – Edward Steichen’s Family of Man and the American National Exhibition. Many of the works in this section of the Russian exhibitions are vintage photographic prints on loan from private collectors Natalia Grigorieva and Edward Litvinsky, founders and owners of the Lumiere Brothers Center for Photography in Moscow, founded in conjunction with one of the first private galleries in Russia devoted to fine art photography. Other works come from members of Novator, one of the most important and enduring of the independent Russian photography associations, founded in the early 1960s by individual photographers and photography lovers in Russia. More than photo clubs, the intent of these associations was to open a space where photographers could present and discuss new ideas in photography, and re-visit the unofficial, often banned, works of Russian-Soviet photography of the previous three decades. Members of these associations shared historical and contemporary works not approved by the state.
The History of the Bible (CCC 105-108) A very complex and large subject The history of the Bible is a very large and complex subject, involving many dates, councils, people and political events. It is far more complex than can be dealt with in a single article; for a fuller overview, Saint Michael's Media recommends the book "Where Did The Bible Come From?" by the co-author of this series of articles, available from Saint Michael's Press. This article is concerned with the history of the Bible as it relates to Catholic apologetics – particularly the fact that the Bible is historically accurate and that the Bible is inspired scripture. The fact that the human agency which can be called the "author" of the Bible is the Catholic Church is also part of this article – partly because it is the simple truth, but also because it relates directly to the argument concerning sola scriptura. The authorship of the Bible The Bible does not have a single author – it is a collection of 73 books which were written by many different authors over a long period of time. It is divided into two main sections – the Old Testament and the New. The Old Testament is the Jewish Scriptures which were used by faithful Jews before the time of Christ. The New Testament consists of books and letters written by the early Christians. The compilation of the Old Testament The canon (list of books) of the Old Testament was not formally fixed and varied a great deal between different groups of faithful Jews. The Pharisees, Sadducees, Samaritans and other groups all had different lists of books which they considered to be Sacred Scripture, although there was agreement on the core of which books were part of the canon.
Christians have the current 46 book Old Testament because this was the canon used by the leaders of the early Christian Church; the apostles and their followers. This canon was found in a Greek translation of the Scriptures known as the Septuagint. This was the version used by very many Jews in the first century. The Jews were using a Greek translation because very few Jews actually spoke Hebrew any longer. Owing to their capture by the Babylonians and subsequent freeing by the Persians, most Jews no longer spoke Hebrew, but rather spoke Aramaic – a Semitic language which had become the everyday speech of the Near East. The priestly class still spoke Hebrew, but the average Jew did not. In addition, owing to the massive conquests of Alexander the Great of Macedon, the Greek language had become the common language of business and culture in the Near- and Middle East. Accordingly, the Greek translation of the Hebrew Scriptures was very popular. It can clearly be seen that the Septuagint was used by the early Christians – when the Old Testament is quoted in the New Testament over 90% of the quotations are taken from the Septuagint text. Many Protestants will argue that the Septuagint canon is not the correct one – but it is clear that the correct canon of the Bible is the 46 book Septuagint one. The compilation of the New Testament The assemblage of the New Testament is a very interesting process and a highly complex one. It can, however, be summarized relatively simply as follows. Various Christians wrote books explaining the history of the Christian Church (including Gospels about the life of Christ and more general histories such as the Acts of the Apostles) and letters addressed to specific communities and persons (such as the letters of Saint Paul) and also what are best considered to be "open letters" (such as Hebrews). There were hundreds of different documents circulating around, all of them purporting to be authentic Christian teaching and accurate history and doctrine.
However, many of these documents were not what they claimed to be – they were forgeries not written by the people whose names they bore, or were heretical documents advancing novel notions about Christ. Some of these documents have survived today – examples are the Gospel of Judas and the Gospel of Thomas. Neither of these documents was written by its alleged author – both are late forgeries designed to cash in on the success and popularity of Christianity. Out of all these hundreds of documents – many of them forgeries – the current 27 book New Testament appeared. This process took a long time – roughly 300 years went by from the writing of the last book of the New Testament (Revelation) until the list was finalized. The list was compiled by the bishops of the Catholic Church. Initially, local canons were assembled by individual bishops. These canons were lists of books which could be read aloud in Churches at Mass. Despite the fact that these canons were independently assembled they bore a great deal of similarity to each other – because the Catholic bishops were all using the same criteria to determine which books should be included. They looked to see if the books were written by an apostle or someone who was reporting the words of an apostle. They checked to see how much the book was being used by other bishops and priests in their Masses, and also looked at how often the book was quoted by the Church Fathers in their writings. Only those books which "scored" favorably on all three of these criteria made it into their canons. In the early fourth century Christianity was legalized in the Roman Empire and it became possible for the bishops to meet without being imprisoned or killed by the pagan authorities. Beginning in the late fourth century and continuing until the very early fifth century the Catholic Church met at a number of councils where the canon of the Bible was debated.
These councils produced canons which were identical to the current 73 book Roman Catholic canon. As can clearly be seen the canon of the Bible was produced by the Catholic Church. The Church also existed long before the Bible – it was the early fifth century before the Bible existed as we might recognize it today, and none of the books of the Bible were even written until around 50 AD. But the Catholic Church began 20 years earlier, at Pentecost when the Holy Spirit descended on the apostles. The Christians who wrote the New Testament were Catholic – they were Catholic for two reasons. One, they believed everything which the current Catholic Church (and only the Catholic Church) teaches (as is shown by the writings of the Church Fathers). And they were Catholic because there was no other church at the time. Myths such as the “Trail of Blood” simply do not hold water – the Catholic Church was, quite literally, the only game in town. Accordingly, the Bible can be considered to be two things – it is younger than the Catholic Church and it is the product of the Catholic Church. This means that the Bible is not the sole rule of faith for Christians, but rather “the Church is the pillar and foundation of the truth” as it says in I Timothy 3:15. The copying of the Bible The way the Bible was disseminated to the various churches around the world – and the way it ended up being commonly available in virtually every bookstore in the world – is also a very interesting story, but is long and complex and not very relevant to apologetics. There are, however, a number of points which the apologist needs to be able to answer. Firstly, a commonly-made charge is that the Catholic Church is somehow “anti-Bible”. This is a typical anti-Catholic slander and is totally untrue. If the Catholic Church really wanted to destroy the Bible, why did she not do so when she was the only Church there was and was the sole protector of the Bible? 
For over one thousand years the Bible was the possession of the Catholic Church alone, as there were no other Christians! The Bible was copied by monks in monasteries – if the Church had wanted to get rid of the Bible she could have done so simply by not copying it! A number of anti-Catholics say that the Catholic Church's doctrines are contrary to the Bible – what they mean is that their interpretation of the Bible is at odds with the Catholic Church's. But if it were truly the case that there were verses in the Bible which were against the Catholic Church's doctrines, why did she not change them when she had the chance? The Catholic Church could have changed the Bible to remove such embarrassing verses. The fact she did not shows that these verses are, in fact, not embarrassing at all and that the interpretation of non-Catholics must be considered incorrect! The Bible is generally very historically accurate A common charge leveled against the Bible by atheists and others is that the Bible is not historically accurate and is simply a collection of myths and stories. This is not the case. Modern archaeology and history have shown that the Bible is generally very historically accurate. The events described in the Bible are supported by secular historians such as Tacitus and Josephus. In order to refute the charge that the Bible is not historically accurate it is necessary to understand something very important about the nature of historical documents and history. Historical documents are not generally assumed to be inaccurate and packs of lies; they are generally assumed to be accurate because people are not assumed to be liars. Additionally, historians do not automatically require there to be two, three or four sources in order to actually believe something.
If historians were very skeptical about all documents and required multiple sources we would not know very much about history – for most of the historical events which human beings believe to be true and to have really happened, there is only a single source which is not attested elsewhere. Additionally, historical records which do not precisely agree on all the details are not automatically thought to be inaccurate or flawed. As an example, we only have two accounts of Hannibal crossing the Alps into Italy with elephants. These two accounts cannot be reconciled with each other – yet no historian says that Hannibal did not cross the Alps. Most historical documents concerning events in the ancient world were written down long after the events they describe, and the copies which have survived are much later than the already late originals. This is true for the histories about Alexander the Great, for example. In comparison it is a simple fact that the documents of the Bible were written within living memory of the events (this is shown by the fact that the Church Fathers are quoting from the documents which make up the Bible very early indeed). Paul's letters are written during the life of Paul – he was executed around 68 AD. Such temporal proximity to the events means that, firstly, there is less chance of legendary development and, secondly, that any errors would be corrected by the people who were there! The question of legendary development – that is, the addition of legendary, fantastical elements to a story which turn it into a myth – is one which many atheists raise. They say that the Gospels originally did not contain any supernatural elements and that these were added afterwards – but this is not consistent with legendary development. There is not enough time for legendary development to take place; legendary development takes centuries to occur (it does not occur, for example, with the histories of Alexander the Great until the middle ages).
The people who say that the Bible is historically inaccurate are not historians, or at least are not unbiased historians. They do not understand the nature of historical documents; the fact that there are seeming contradictions, or that the documents are not detailed, identical, or supported elsewhere, does not automatically mean that the documents are flawed and are lies. The Catholic apologist should always ask the non-Catholic precisely what about the Bible is allegedly historically inaccurate. Then he should determine if that alleged inaccuracy can be overcome by reading the Bible correctly. Many of these so-called inconsistencies are based on scientific assumptions which have not been proven (a good example is the Genesis creation story – the scientific theories of the creation of the world have never been proven and so cannot be treated as definite fact; in addition, many Biblical scholars consider that the Genesis account should not be read literally, but rather allegorically). Historical "evidence" is often flawed, and often comes from sources which are opposed to Christianity. The Bible is often the only source we have for information concerning the events it describes – why is the Bible automatically considered unreliable when no other historical document contradicts it and there is no evidence to suggest it is wrong? This is an unfair double standard which shows bias and prejudice against Christianity. In addition, the alleged inaccuracy of the Bible is often simply assumed without any evidence. In many cases the non-Catholic will simply say that "the supernatural does not exist" and so every time a supernatural event is depicted it must be false and a lie. That is simply an assertion without evidence – the non-Catholic must prove that the supernatural does not exist. But, most importantly, it is not necessary to prove that the supernatural events in the Bible actually happened in order to prove the truth of Catholicism.
It is merely necessary to prove that the Bible is generally historically accurate. Provided the person is not an atheist and does not deny the existence of God, that is all that is needed in order to prove that the Bible is divinely inspired and that the Catholic Church has authority. Of course, supernatural events such as the resurrection of Christ can be proven quite easily if the atheist is open-minded and honest.

The Bible is inspired scripture

This particular point of apologetics is relevant to two different groups, and for two different reasons. Firstly, to atheists and others who do not believe the Bible – it is important to show them that the Bible is the inspired word of God and contains accurate information about spiritual matters. Secondly, to non-Catholic Christians who already consider the Bible to be the inspired word of God – by showing them why the Bible is known to be the inspired word of God, the authority of the Catholic Church can be supported. The reason the Bible is considered divinely inspired is that the Catholic Church says so and the Catholic Church has the authority to do so. This is not an argument most people have heard – most people are expecting something along the lines of "The Bible says so" or "I was told so by God". But this is not the reason. As shown in the article concerning sola scriptura, the Bible cannot authenticate itself as inspired Scripture; there has to be an external authority which provides not only the canon of the Bible but also accurate interpretation of the Bible and the assurance that it is divinely inspired. This authority is the Catholic Church.
In order to prove the divine inspiration of the Bible to an atheist the Catholic apologist should first show that the Bible is historically accurate, then show that the Catholic Church has authority (based on the existence of God and the accuracy of the Bible) and then explain that the Catholic Church was the authority which put the Bible together and is the only authority which can correctly interpret it and declare it divinely inspired. Some atheists will call this a circular argument – but this is not the case. It is perhaps best described as a spiral argument. The conclusion is not contained in the premise and an earlier logical step does not depend on a later one; the first point is that the Bible is historically accurate and that means the Catholic Church has authority. The Catholic Church assembled the Bible and then declared it divinely inspired. Because the Church has authority she can declare the Bible to be divinely inspired. Divinely inspired is not the same thing as historically accurate, and hence this is not a circular argument. For a non-Catholic Christian who already accepts the divine inspiration of the Bible the Catholic apologist should ask "Why do you believe the Bible is divinely inspired?" Various answers will be offered – but none of them are logically consistent and satisfying except the fact that the authority of the Catholic Church states that it is divinely inspired. The question which should then be put to the non-Catholic is "Don't you think that, because the Catholic Church was the organization which put the Bible together and the organization which declared it inspired, the Catholic Church has to have authority in order to do this?" This is not actually the correct order of argument – it is arguing the cause from the effect – but it may convince non-Catholics of the essential truth that the Catholic Church has authority.
CROSS-REFERENCE TO RELATED APPLICATIONS

II. FEDERALLY SPONSORED RESEARCH

III. SEQUENCE LISTING OR PROGRAM

FIELD OF THE INVENTION

This application relates to telescopic gun sights. More specifically, this invention relates to telescopic gun sights having variable magnification and a reticle mounted at the objective focal plane.

BACKGROUND OF THE INVENTION

A telescopic gun sight, commonly called a rifle scope, is a device used to provide an accurate point of aim for firearms such as rifles, handguns and shotguns. A telescopic sight significantly improves the functionality of a firearm by providing the shooter with a simple yet highly accurate means for aiming at distant targets. A telescopic sight is essentially a Keplerian telescope with an added reticle to designate the point of aim. Reticles are most commonly represented as intersecting lines called "cross hairs", though many variations exist, including dots, posts, circles, scales, chevrons, etc. A basic telescopic sight is shown schematically in FIG. 1. With reference to this figure, a telescopic sight comprises an objective lens 1 to form a first image of the target at (or near) the objective focal plane 2. This first image is laterally reversed and upside-down. An image relay means, shown in FIG. 1 comprising a pair of convex lenses 3a and 3b, takes this first image and produces a laterally-correct and upright second image at the eyepiece focal plane 4. Finally, an eyepiece 5 converts the second image into a virtual image at infinity for viewing by the shooter. To provide variable magnification (zoom), the positions of the relay lenses 3a and 3b are individually shifted along the optical axis. This is usually done by placing the entire image relay means inside a rotating inner tube which has a set of precisely calculated slots cut in its surface. A cam system connected to these slots moves each relay lens back and forth as said inner tube rotates.
Details of the mechanical construction of the zoom mechanism are not essential to understanding the nature or benefits of the present invention and are not shown in FIG. 1. To designate the point of aim, a reticle is placed either at the objective focal plane or at the eyepiece focal plane. These two planes are also referred to as the First Focal Plane (FFP) and the Second Focal Plane (SFP), respectively. In either case, the reticle's shape will appear superposed on the target image, providing a precise indication of the point of aim. The difference is that if the reticle is placed at the objective focal plane, it appears to enlarge and shrink along with the target image as the sight's magnification is changed. If the reticle is placed at the eyepiece focal plane, its size appears constant at all magnifications. In FIG. 1, the reticle 20 is placed at the objective focal plane 2. Traditionally, European designs have placed the reticle at the first focal plane. In this configuration the reticle and the image of the target are enlarged or reduced simultaneously as the sight's magnification is changed. This keeps the scale factor between the reticle and the target image unchanged, thus allowing the reticle to be used as a range-finding aid. Another benefit of rifle scopes with a first focal plane reticle is that their aiming precision is not affected by the mechanical imperfections of the zoom mechanism. In a rifle scope with a second focal plane reticle, even a very small change in the concentricity of the movable relay lenses can change the point of aim during zoom. Most American shooters prefer that the reticle remain constant as the target image changes size. Therefore, many variable-magnification telescopic sights sold or manufactured in the United States have reticles in the second focal plane. This allows the shooter to aim very well at small targets at long distances because the reticle obscures only a tiny portion of the target image at high magnifications.
Rifle scopes with low magnification, such as those intended for hunting dangerous game at short ranges, are also well-suited to this design. When set at the lowest magnification for the widest field of view, the reticle remains thick enough to allow fast and reliable target acquisition. It is evident from the preceding discussion that a rifle scope with an FFP reticle is far superior to one with an SFP reticle in terms of accuracy and usability as a range-finding device. The only drawback of a rifle scope with an FFP reticle is that the reticle can appear too small (and therefore difficult to see) at low magnification and too large (therefore obscuring the field of view) at high magnification. This phenomenon is illustrated in FIG. 2. In this figure, a popular reticle design commonly known as "German No. 4" is shown as it appears in an FFP rifle scope. On the left-hand side, the reticle is shown as it appears at low magnification (e.g., zoom knob set to 3×). On the right-hand side, the same reticle is shown as it would appear at 4 times higher magnification (e.g., zoom knob set to 12×). It is clear from this illustration that for FFP rifle scopes with a large zoom range, excessive reticle enlargement and shrinkage becomes a major problem. For this very reason, a famous European manufacturer (Swarovski Optik of Tyrol, Austria) has completely abandoned offering first focal plane models in its new line of rifle scopes. The present invention teaches a telescopic sight with a first focal plane reticle that appears invariant (or almost invariant) at various zoom scales. This invention thus overcomes the limitations of the prior art by combining the benefits of a rifle scope having a first focal plane reticle (precise aiming and range-finding capability at all zoom scales) and a rifle scope having a second focal plane reticle (constant reticle size). Furthermore, the present invention achieves these benefits simply and inexpensively without any additional manufacturing cost.
Many different shapes and patterns have been proposed for reticles in the past. We refer the reader to U.S. Pat. No. 7,100,320B2 issued Sep. 5, 2006 to E. A. Verdugo; U.S. Pat. No. 6,729,062B2 issued May 4, 2004 to R. L. Thomas and C. Thomas; U.S. Pat. No. 6,681,512B2 issued Jan. 27, 2004 to D. J. Sammut; U.S. Pat. No. 3,948,587 issued Apr. 6, 1976 to P. E. Rubbert, and references therein for examples of prior art. Examples of commercially available reticle patterns can be found in the catalogs and websites of rifle scope manufacturers including Carl Zeiss (www.zeiss.com), Swarovski Optik (www.swarovskioptik.us), Schmidt and Bender (www.schmidtbender.com), Nightforce Optics (www.nightforceoptics.com), Horus Vision (www.horusvision.com), Leupold (www.leupold.com) and so on. While the reticles invented to date accomplish their individual objectives, they do not describe a reticle that appears substantially invariant to magnification when used in a zoom FFP rifle scope. The concept of magnification invariance introduced in this invention is a fundamentally new design concept and represents a significant departure from all the design concepts previously used in the prior art.

SUMMARY OF THE INVENTION

The present invention teaches a telescopic gun sight whose reticle is placed at the first focal plane yet appears substantially invariant or near-invariant at different magnifications. In accordance with one embodiment, this invention introduces a variable-magnification telescopic gun sight comprising an objective lens, a magnification-invariant reticle, an image relay means, and an eyepiece; wherein said reticle is comprised of a plurality of posts in the form of circular sectors so that the apparent shape of the reticle remains invariant at various magnification settings.
BRIEF DESCRIPTION OF THE DRAWINGS

The foregoing aspects and many of the attendant advantages of this invention will become more readily apparent with reference to the following detailed description of the invention, when taken in conjunction with the appended claims and accompanying drawings, wherein:

FIG. 1 is a side-view schematic of a variable-magnification rifle scope.

FIG. 2 is a diagram illustrating reticle size variation in conventional first focal plane rifle scopes.

FIG. 3 is a diagram illustrating the magnification-invariance property of a circular sector.

FIGS. 4a to 4d depict a plurality of magnification-invariant reticles in accordance with a first embodiment of the invention.

FIG. 5 is a diagram illustrating the general design of an almost-magnification-invariant reticle in accordance with a second embodiment of the invention.

FIGS. 6a to 6f illustrate a sample almost-magnification-invariant reticle, its angular profile, and its appearance at various magnification scales, in accordance with the second embodiment of the invention.

FIGS. 7a to 7f illustrate a conventional "plex" reticle, its angular profile, and its appearance at various magnification scales for comparison purposes.

FIGS. 8a and 8b illustrate an almost-magnification-invariant reticle which uses a variation of the designs described in the first and the second embodiments of the invention.

FIGS. 9a and 9b illustrate another almost-magnification-invariant reticle which uses a variation of the designs described in the first and the second embodiments of the invention.

DETAILED DESCRIPTION OF THE INVENTION

This invention is inspired by a fundamental geometrical property of circular sectors. In geometry, a circular sector or circle sector is defined as the portion of a circle enclosed by two radii and an arc. A circular sector has the property that its central angle is preserved under magnification. This phenomenon is illustrated in FIG. 3.
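The scaling property underlying FIG. 3 can also be verified numerically. The short sketch below is illustrative only; the function and variable names are assumptions of this note, not taken from the patent. It scales two points on the bounding radii of a 30-degree sector by a uniform magnification factor and checks that the central angle at the origin is unchanged.

```python
import math

def central_angle_deg(p1, p2):
    """Angle (in degrees) subtended at the origin by two points lying
    on the bounding radii of a circular sector."""
    a1 = math.atan2(p1[1], p1[0])
    a2 = math.atan2(p2[1], p2[0])
    return abs(math.degrees(a2 - a1))

# A 30-degree sector: one radius along the x-axis, one at 30 degrees.
p1 = (1.0, 0.0)
p2 = (math.cos(math.radians(30.0)), math.sin(math.radians(30.0)))

# Magnify about the center by 4x: every point is scaled by the same factor.
M = 4.0
q1 = (M * p1[0], M * p1[1])
q2 = (M * p2[0], M * p2[1])

# The central angle is unchanged by the magnification.
print(central_angle_deg(p1, p2), central_angle_deg(q1, q2))
```

Because uniform scaling multiplies both coordinates of every point by the same factor, the ratio y/x at each point, and hence the polar angle, is preserved exactly; this is the whole basis of the wedge-shaped posts described next.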
As shown in this figure, the central angle θ of a circular sector is preserved if the central part of the circle is enlarged using an image-magnifying device. In this invention we use the above-mentioned geometric principle to construct reticle patterns that remain invariant under zoom when placed in the first focal plane of a rifle scope. Details of the arrangement of elements characterizing the invention will be more fully understood from the description of preferred embodiments and with reference to the accompanying drawings.

A. First Embodiment of the Invention

In accordance with a first embodiment of the invention, a zoom telescopic gun sight comprises an objective lens, a magnification-invariant (MI) reticle, an image relay means with variable magnification, and an eyepiece. FIG. 1 shows a side-view schematic illustrating the arrangement of the elements in accordance with the first embodiment of the invention. FIGS. 4a to 4d depict a plurality of MI reticle shapes in accordance with the first embodiment of the invention. With reference to FIG. 1, the objective lens 1 forms a first image of a distant target at the objective focal plane 2. This first image is laterally reversed and upside-down. An image relay means, shown symbolically in FIG. 1 comprising a pair of convex lenses 3a and 3b, takes the first image formed by the objective lens and produces a laterally-correct and upright second image at the eyepiece focal plane 4. Finally, an eyepiece 5 converts the second image into a virtual image at infinity for viewing by the shooter. With reference to FIG. 1, the first embodiment of the invention further includes an MI reticle 20 placed at the objective focal plane 2. The MI reticle 20 comprises one or more posts. Each post is in the form of a circular sector or "wedge" originating from the center of the field of view and extending to its edge. Four example reticle patterns that are in accordance with this design are shown in FIGS. 4a to 4d. FIG.
4a shows an MI reticle comprising two horizontal posts and a vertical post where each post is a circular sector with a small central angle. The point of aim is at the center of the field of view where the three sectors meet. This MI reticle is suitable for general hunting or tactical applications. The position of the posts in this MI reticle has been inspired by the popular "German No. 4" reticle used in many European rifle scopes. The MI reticle in FIG. 4b is similar to the previous reticle except that it has an additional post on the top. The position of the posts in this MI reticle resembles the popular "plex" reticle used in many American rifle scopes. The MI reticle in FIG. 4c is comprised of a single vertical wedge extending from the bottom to the center of the field of view. This vertical post is in the form of a circular sector with a somewhat wider central angle compared to those used in FIGS. 4a and 4b. This MI reticle is particularly suited for hunting running game and for tactical applications involving Close-Quarter Combat (CQB). The MI reticle in FIG. 4d shows another useful arrangement where two circular sectors, each with a relatively small central angle, are arranged in an inverted V formation. This MI reticle is also suited for tactical and CQB applications. The advantage of this pattern is that there is some open space immediately below the point of aim. This space can be used for placing additional markings or other indicators for range finding or bullet drop compensation. The patterns shown in FIGS. 4a to 4d are illustrative examples of MI reticles that can be constructed in accordance with the first embodiment of the invention. These reticles are perfectly magnification-invariant in the sense that their shape and size appear constant once implemented at the first focal plane of a telescopic gun sight in accordance with the invention. B.
Second Embodiment of the Invention

The telescopic sight described in the first embodiment of the invention has the property that its reticle size appears completely unchanged at any zoom setting. While very interesting from an engineering point of view, this level of invariance is not always necessary in practice. In many hunting situations it is sufficient that the reticle remains thin enough at high zoom (so as not to obstruct the field of view) and thick enough at low zoom (easily visible against the background). In this embodiment we describe a zoom telescopic sight with a first focal plane reticle such that the reticle size appears sufficiently unchanged over a finite zoom range. In accordance with the second embodiment of the invention, a zoom telescopic gun sight comprises an objective lens, an almost-magnification-invariant (AMI) reticle, an image relay means with variable magnification, and an eyepiece. The arrangement of the elements and their function is similar to the first embodiment of the invention. The only difference is the shape of the AMI reticle, as described below. The general design of an AMI reticle in accordance with the second embodiment of the invention can be understood using the illustration in FIG. 5. In this figure, the telescopic sight's field of view at its lowest magnification is represented by a circle with a normalized radius of one unit. A Cartesian coordinate system is defined within this circle. The horizontal coordinate r and the vertical coordinate y are chosen such that the center of the coordinate system is at the center of the field of view. When the sight's magnification is increased, the field of view is reduced proportionally. Let's assume, for the sake of simplicity, that the sight's minimum magnification is 1× and its maximum magnification is 6×.
As the user increases the zoom for this sight he will observe that the field of view reduces to a circle with radius r=0.5 at magnification 2× and to a circle with radius r=⅙ at the maximum magnification of 6×. These two circles are shown in FIG. 5 using dashed lines. An AMI reticle is comprised of one or more AMI posts. An AMI post has a central axis, which is a radius of the circle, and two perimeters. Each perimeter may be defined by a linear characteristic function y(r) which represents the half-thickness of the post (measured perpendicular to the central axis) as a function of normalized radial distance. An example AMI post is drawn in FIG. 5. Here, the central axis of the AMI post is the same as the horizontal axis and the post's upper and lower perimeters are assumed to be symmetric with respect to its central axis r. Alternatively, the perimeters of an AMI post can be defined using an angular characteristic function φ(r). The angular characteristic function φ(r) measures the angular separation between each point on a perimeter of an AMI post and its central axis. The vertex of the angle is at the center of the field of view. The angular characteristic function φ(r) is illustrated in FIG. 5 as well. Each point on the perimeter of a post can be uniquely specified either by the linear characteristic function y(r) or by the angular characteristic function φ(r). It is easy, using elementary trigonometry, to show that y(r) = r tan(φ(r)). In this embodiment, the angular characteristic function φ(r) is used to design preferred AMI posts. This is because φ(r) shows how various parts of a post expand at various zoom scales. To see this, consider the post shown in FIG. 5 and assume that the sight's maximum zoom range is M. During zoom, the outer parts of the post for which 1/M≦r≦1 will move towards the edge of the field of view along a radial direction with azimuth angle φ(r).
These parts of the post will reach a maximum half-thickness of tan(φ(r)) just before moving out of the field of view. The inner parts of the post for which 0<r≦1/M will also grow during zoom and move towards the edge of the field of view along a radial direction with azimuth angle φ(r). However, these parts will remain visible and will enlarge to a maximum half-thickness given by Mr tan(φ(Mr)). An MI post is produced when φ(r) is constant for 0≦r≦1. In this case the post will be wedge-shaped as described in the first embodiment of the invention. Such a post grows "on itself" during zoom, therefore appearing invariant. An AMI post is achieved when φ(r) is allowed to change by a small amount as r changes. A preferred AMI post in accordance with this embodiment is achieved when φ(r) has higher values near r=0 and lower values as r approaches 1. An AMI post designed this way will have a thicker tip near the center of the field of view and will be more easily visible at low zoom. An example AMI reticle utilizing this angular characteristic is shown in FIG. 6a. The angular characteristic function φ(r) associated with this design is depicted in FIG. 6b. Here φ(r) is designed to start at 6 degrees for r=0 and slowly decrease to 2 degrees at r=1. FIGS. 6c to 6f illustrate how this reticle appears at various zoom scales up to 16×. For comparison, FIGS. 7c to 7f show how the conventional plex reticle shown in FIG. 7a enlarges at the same zoom scales. It is clear from this comparison that the AMI reticle enlarges much less than the conventional plex reticle during zoom. For comparative purposes, the angular characteristic function associated with the conventional plex reticle is also shown (see FIG. 7b). Persons skilled in the art may use the methodology described above to design many different forms of AMI reticles. The design is carried out by choosing a proper form for the angular characteristic function φ(r).
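The characteristic-function algebra above lends itself to a quick numerical check. The sketch below is illustrative only: the function names are this note's own, and the AMI profile (a linear fall-off from 6 degrees at the center to 2 degrees at the edge) merely mimics the description of FIG. 6b rather than reproducing the patent's actual curve. It computes the apparent half-thickness of a post where it meets the field-of-view edge, for an MI post (constant φ), an AMI post, and a conventional constant-width plex-style post.

```python
import math

def apparent_edge_half_thickness(phi, M):
    """Apparent (normalized) half-thickness of a post where it meets the
    edge of the field of view at magnification M.  The point originally
    at r = 1/M lands at the edge; its half-thickness y(1/M) =
    (1/M) * tan(phi(1/M)) is magnified by M, giving tan(phi(1/M)).
    phi: angular characteristic function in degrees, as a function of r."""
    r = 1.0 / M
    return M * r * math.tan(math.radians(phi(r)))

# MI post: constant 2-degree half-angle (first embodiment).
mi = lambda r: 2.0

# AMI post: hypothetical linear fall-off from 6 deg at the center to
# 2 deg at the edge, mirroring the profile described for FIG. 6b.
ami = lambda r: 6.0 - 4.0 * r

# Conventional plex-style post: constant linear half-thickness 0.01,
# i.e. phi(r) = atan(0.01 / r) in angular terms.
plex = lambda r: math.degrees(math.atan(0.01 / r))

for M in (1.0, 4.0, 16.0):
    print(M,
          round(apparent_edge_half_thickness(mi, M), 4),
          round(apparent_edge_half_thickness(ami, M), 4),
          round(apparent_edge_half_thickness(plex, M), 4))
```

Running this shows the MI post's edge thickness is exactly constant at every zoom, the AMI post grows only modestly (bounded by tan of its 6-degree central value), while the plex post's apparent thickness grows in direct proportion to M, which is the excessive enlargement FIG. 2 illustrates.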
The general guideline is to have φ(r) start at a higher value near r=0 and slowly take smaller values as r approaches 1. Based on the above descriptions of some embodiments of the invention, a number of advantages of one or more aspects over existing sights are readily apparent:

1. The reticle is placed in the first focal plane, which guarantees that the point of aim is not affected when magnification is changed and that the reticle occupies substantially the same (or almost the same) area within the field of view when magnification is changed.

2. There is little extra design cost. The MI and AMI reticles can be retrofitted in existing telescopic sights.

3. There is no extra manufacturing cost. In fact, a single MI or AMI design can be used in many rifle scope models irrespective of an individual scope's magnification or zoom range.

These and other advantages of one or more aspects may now be apparent to the reader from a consideration of the foregoing description and accompanying drawings.

IX. CONCLUSION, RAMIFICATIONS, AND SCOPE

The foregoing description of the specific embodiments will so fully reveal the general nature of the invention that others can, by applying knowledge within the skill of the art, readily modify and/or adapt such specific embodiments for various applications, without undue experimentation and without departing from the general concept of the present invention. While the above descriptions of the present invention contain numerous specificities, they should not be construed as limiting the scope of the invention, but as mere illustrations of some of the preferred embodiments thereof. Many other ramifications and variations are possible within the expositions of the various embodiments. For example:

1. It is possible to combine MI and AMI posts in the same reticle to suit specific needs. It is also possible to remove small parts from some posts and/or add some extra markings to make aiming easier. An example of this variation is shown in FIGS.
8a and 8b. The reticle shown here has two horizontal MI posts and a vertical AMI post. A small part from the tip of the MI posts has been removed and a small circle has been added to the tip of the AMI post to improve its visibility and facilitate aiming.

2. Markings for bullet drop indication or range finding may be added to AMI or MI reticles. An example of this variation is shown in FIGS. 9a and 9b. The reticle shown here has two horizontal AMI posts and a vertical MI post. A small part from the tip of all three posts has been removed and small dots have been added. These dots are not visible at low magnification, hence the sight is well-suited for Close Quarter Combat. The dots become visible at high magnification and can help the shooter adjust his point of aim for wind and bullet drop when aiming at distant targets.

Therefore, such adaptations and modifications are intended to be within the meaning and range of equivalents of the disclosed embodiments, based on the teaching and guidance presented herein. It is to be understood that the phraseology or terminology herein is for the purpose of description and not of limitation, such that the terminology or phraseology of the present specification is to be interpreted by the skilled artisan in light of the teachings and guidance. Thus, the scope of the invention should be determined by the appended claims and their legal equivalents, as opposed to the embodiments illustrated.
Absence of Fossil Evidence

Does an absence of fossil evidence show that ancestral species did not exist?

Summary of problems with claim: The fact that a particular species at a given place and time didn't fossilize doesn't mean that the species didn't exist.

Explore Evolution states:

…critics argue that Darwin's theory has failed an important test. Just as students are tested by exams, theories are tested by how well they match the evidence. In the overwhelming majority of cases, Common Descent does not match the evidence of the fossil record. A student who gets a correct answer only once in a while does not deserve a passing grade. In the same way, critics say that a scientific theory that only rarely matches the evidence fails the test of experience. (Explore Evolution, p. 27)

Firstly, taphonomy and earth processes help us understand why, where, and under what conditions fossils form – and explain the low abundance (or absence) of fossils in certain situations. Just because paleontologists do not find fossils in certain rocks (or certain preservational environments) does not mean nothing ever lived there. There are many contingencies that explain fossilization, and any 'absence' of fossils is not – by default – positive evidence against evolutionary theory. There are two different hypotheses/processes at work, taphonomy and evolution, and fossil absence is also very well explained and understood by taphonomic data. Secondly, when fossils are preserved, there is a lot of evidence for common descent. See the section on 'transitional fossils' above.
Chapter 8: Culverts

Section 1: Introduction

Definition and Purpose

A culvert conveys surface water through a roadway embankment or away from the highway right-of-way (ROW) or into a channel along the ROW. In addition to the hydraulic function, the culvert must also support construction and highway traffic and earth loads; therefore, culvert design involves both hydraulic and structural design. The hydraulic and structural designs must minimize the risks to traffic, of property damage, and of failure from floods, consistent with good engineering practice and economics. Culverts are considered minor structures, but they are of great importance to adequate drainage and the integrity of the facility. This chapter describes the hydraulic aspects of culvert design, construction, and operation, and makes reference to structural aspects only as they relate to the hydraulic design. Culverts, as distinguished from bridges, are usually covered with embankment and are composed of structural material around the entire perimeter, although some are supported on spread footings with the streambed or a concrete riprap channel serving as the bottom of the culvert. For economy and hydraulic efficiency, engineers should design culverts to operate with the inlet submerged during flood flows, if conditions permit. Bridges, on the other hand, are not covered with embankment or designed to take advantage of submergence to increase hydraulic capacity, even though some are designed to be inundated under flood conditions. Any culvert with a clear opening of more than 20 feet, measured along the center of the roadway between the insides of the end walls, is considered a bridge by FHWA and is designated a bridge class culvert. (See Chapter 9, Section 1.) This chapter addresses structures designed hydraulically as culverts, regardless of length.
At many locations, either a bridge or a culvert fulfills both the structural and hydraulic requirements for the stream crossing. The appropriate structure should be chosen based on the following criteria:
- construction and maintenance costs
- risk of failure
- risk of property damage
- traffic safety
- environmental and aesthetic considerations
- construction expedience.

Although the cost of individual culverts is usually relatively small, the total cost of culvert construction constitutes a substantial share of the total cost of highway construction. Similarly, culvert maintenance may account for a large share of the total cost of maintaining highway hydraulic features. Improved traffic service and reduced cost can be achieved by judicious choice of design criteria and careful attention to the hydraulic design of each culvert. Before starting culvert design, the site and roadway data, design parameters (including shape, material, and orientation), hydrology (flood magnitude versus frequency relation), and channel analysis (stage versus discharge relation) must be considered.

Culverts are constructed from a variety of materials and are available in many different shapes and configurations. When selecting a culvert, the following should be considered:
- roadway profiles
- channel characteristics
- flood damage evaluations
- construction and maintenance costs
- estimates of service life.

Numerous cross-sectional shapes are available. The most commonly used shapes are circular, pipe-arch and elliptical, box (rectangular), modified box, and arch. Shape selection should be based on the cost of construction, limitation on upstream water surface elevation, roadway embankment height, and hydraulic performance.
Commonly used culvert materials include concrete (reinforced and non-reinforced), steel (smooth and corrugated), aluminum (smooth and corrugated), and plastic (smooth and corrugated). The selection of material for a culvert depends on several factors that can vary considerably according to location. The following groups of variables should be considered:
- structure strength, considering fill height, loading condition, and foundation condition
- hydraulic efficiency, considering Manning’s roughness, cross section area, and shape
- installation, considering local construction practices, availability of pipe embedment material, and joint tightness requirements
- durability, considering water and soil environment (pH and resistivity), corrosion (metallic coating selection), and abrasion
- cost, considering availability of materials.

The most economical culvert is the one that has the lowest total annual cost over the design life of the project. Culvert material selection should not be based solely on the initial cost. Replacement costs and traffic delay are usually the primary factors in selecting a material that has a long service life. If two or more culvert materials are equally acceptable for use at a site, including hydraulic performance and annual costs for a given life expectancy, bidding the materials as alternates should be considered, allowing the contractor to make the most economical material selection.

Several inlet configurations are utilized on culvert barrels. These include both prefabricated and constructed-in-place installations. Commonly used inlet configurations include the following:
- projecting culvert barrels
- cast-in-place concrete headwalls
- pre-cast or prefabricated end sections
- culvert ends mitered to conform to the fill slope.
When selecting various inlet configurations, structural stability, aesthetics, erosion control, and fill retention should be considered. Culvert hydraulic capacity may be improved by selecting appropriate inlets. Because the natural channel is usually wider than the culvert barrel, the culvert inlet edge represents a flow contraction and may be the primary flow control. A more gradual flow transition lessens the energy loss and thus creates a more hydraulically efficient inlet condition. Beveled inlet edges are more efficient than square edges. Side-tapered inlets and slope-tapered inlets, commonly referred to as improved inlets, further reduce head loss due to flow contraction. Depressed inlets, such as slope-tapered inlets, increase the effective head on the flow control section, thereby further increasing the culvert efficiency.
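The hydraulic-efficiency variables noted earlier (Manning's roughness, cross-section area, and shape) interact through Manning's equation. Below is a minimal sketch comparing the full-flow capacity of two barrel shapes; the 4-ft sizes, n = 0.013, and 0.5% slope are assumed for illustration, and real culvert capacity also depends on inlet and outlet control, which this ignores:

```python
import math

def manning_flow_full(area_ft2, wetted_perim_ft, n, slope):
    """Full-flow capacity from Manning's equation (US customary units):
    Q = (1.49 / n) * A * R^(2/3) * sqrt(S), with hydraulic radius R = A / P."""
    r = area_ft2 / wetted_perim_ft
    return (1.49 / n) * area_ft2 * r ** (2.0 / 3.0) * math.sqrt(slope)

# Assumed geometry: a 4-ft circular pipe vs. a 4 ft x 4 ft box,
# both concrete (n = 0.013) on a 0.5% slope, flowing full.
d = 4.0
circ = manning_flow_full(math.pi * d ** 2 / 4.0, math.pi * d, 0.013, 0.005)
box = manning_flow_full(16.0, 16.0, 0.013, 0.005)  # A = 16 sq ft, P = 16 ft
print(f"circular: {circ:.0f} cfs, box: {box:.0f} cfs")  # box carries more, roughly 130 vs 102
```

Both sections here happen to have a hydraulic radius of 1 ft, so the capacity difference comes entirely from the larger box area, which is the kind of trade-off the shape-selection criteria above are weighing.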
By: Linda Chamberlain PhD MPH

While at a recent brainstorming session about adverse childhood experiences (ACEs), a colleague echoed my mantra—“What is predictable is preventable.” This reality came to me many years ago while training as an injury epidemiologist, at a time when injuries were considered accidents that were inevitable or, in other words, not preventable. Ground-breaking research by Professors Susan Baker and Stephen Teret at Johns Hopkins and others would change our thinking on both unintentional and intentional injuries. As a result, patterns of predictability led us to realize that these tragedies could be prevented. The ACE Study and the considerable body of research on early trauma send the same message for suicide [for more information about the ACE Study, go to our first posting]. Early adverse childhood experiences dramatically increase the risk of suicidal behaviors. ACEs have a strong, graded relationship to suicide attempts during childhood/adolescence and adulthood. An ACE score of 7 or more increased the risk of suicide attempts 51-fold among children/adolescents and 30-fold among adults (Dube et al., 2001). In fact, Dube and colleagues commented that their estimates of population attributable fractions for ACEs and suicide are “of an order of magnitude that is rarely observed in epidemiology and public health data.” Nearly two-thirds (64%) of suicide attempts among adults were attributable to ACEs, and 80% of suicide attempts during childhood/adolescence were attributed to ACEs. Further, while system responses to family violence continue to place greater emphasis on physical forms of abuse, the strongest predictor of future suicide attempts in ACE research was emotional abuse. These data beg the question—what does suicide prevention look like in your agency and community? Also, does risk assessment for suicide incorporate questions about ACEs?
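The population attributable fractions quoted above can be computed with Levin's formula, PAF = p(RR − 1) / (1 + p(RR − 1)), where p is the exposure prevalence and RR the relative risk. A minimal sketch follows; the 10% prevalence below is an invented illustration, not a figure from Dube et al.:

```python
def attributable_fraction(prevalence, relative_risk):
    """Levin's population attributable fraction:
    PAF = p*(RR - 1) / (1 + p*(RR - 1))."""
    excess = prevalence * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# Illustrative numbers only: even a modest 10% exposure prevalence (assumed)
# combined with a 30-fold relative risk attributes most cases to the exposure.
paf = attributable_fraction(0.10, 30.0)
print(f"{paf:.0%}")  # about 74%
```

This is why the authors could describe their attributable fractions as rarely seen in public health data: large relative risks make high PAFs possible even at moderate exposure prevalence.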
How can we start early to interrupt the predictable pathway between ACEs and risky behaviors, including suicidality? Is the impact of emotional abuse truly recognized and addressed in our system’s response to children living in abusive environments? I live in a state where our suicide rates are typically double the national rate and there are major disparities in terms of risk. Consequently, there is a lot of discussion around the most effective strategies to reach our culturally diverse population. Increasingly, these discussions include the role of ACEs in understanding and preventing suicide. It starts with education for service providers, survivors, and communities. With increased awareness, we can make the case for routine assessment of ACEs as an early intervention and prevention strategy. Around the country, there are efforts to develop educational resources for clients and communities to promote self-understanding about how early trauma can affect health and risk behaviors even decades later. The Institute for Safe Families will expand its “Amazing Brain Series” to include a booklet for adult survivors called “It’s Never Too Late.” While the first four booklets in the series focus on early brain development and the implications of trauma for children, “It’s Never Too Late” will help adults to understand the long-term effects of trauma and the capacity of the adult brain to heal. One of our greatest and most critical challenges is conveying this information to a sector of the population that is at especially high risk for suicide due to ACEs: adolescents. How can we create a safe and meaningful dialogue with teens about ACEs when the trauma is so recent or current, the need is so urgent, and yet many of the traditional responses to children at risk may not be effective in reaching this population?

Dube SR, Anda RF, Felitti VJ, et al. Childhood abuse, household dysfunction, and the risk of attempted suicide throughout the lifespan: Findings from the Adverse Childhood Experiences Study. JAMA, 2001; 286:3089-3095.

Disclosure: Linda Chamberlain trained in injury prevention at Johns Hopkins with Professors Stephen Teret and Susan Baker. She lives in Alaska. Statistics for suicide, including comparison to national rates and disparities between non-Native and Native Alaskans, can be found at http://www.hss.state.ak.us/suicideprevention/statistics.htm, and there is also an online forum about suicide prevention at www.stopsuicidealaska.org.
Tackling Cancer & Weight Loss
By Hilary Wright, Staff Writer

There are many reasons people with cancer lose weight. Causes include depression, fatigue, pain, altered taste perception, side effects of treatment, or obstruction of the gastrointestinal tract. Often, consumption of a high calorie, high protein beverage may help reverse weight loss. Psychological distress also can play a major role in weight loss. The anxiety that comes with receiving a diagnosis of cancer, the intensified feelings of anxiety and depression, and the possibility of pain can all cause weight loss. Once a particular psychological issue is addressed, some of the weight loss may be reversed. Caregivers should try to serve a variety of foods, since nutrients from many different sources are required to meet physical and nutritional needs. Caregivers can help people with cancer create a flexible, personalized nutritional plan that will be easy to adapt to their ever-changing needs. Following are guidelines to help create the basis for a sound nutritional plan.

There are many ways to add calories, and eating high calorie foods is a good place to start. Caregivers and people with cancer can try the following to increase calorie intake:
- Melt margarine onto hot foods such as toast, soups, vegetables, cooked cereals, and rice.
- Choose mayonnaise instead of salad dressing for use in meat salads and in deviled eggs.
- Serve peanut butter (also high in protein) with an apple, banana, or pear, or spread it on a sandwich with mayonnaise.
- Top puddings, pies, hot chocolate, fruit, gelatin, and other desserts with whipped cream.
- Cook with heavy cream instead of milk.
- Sprinkle nuts or seeds on vegetables, salads, and pasta, or on desserts such as fruit, ice cream, pudding, and custard.

It can be extremely beneficial for a person with cancer to receive extra protein, especially if they are healing after surgery. The following suggestions will help add healing protein and extra calories to the diet:
- Add nonfat dry milk or powdered protein supplements, such as soy protein shakes, to regular milk; they can also be added to sauces and gravies or used for breading meat, fish, or poultry.
- Cook cereals with milk instead of water.
- Use milk, half-and-half, or evaporated milk when making instant cocoa, canned soups, and mashed potatoes.
TOWER (Hebr. ): A building of strength or magnificence (Isa. ii. 15; Cant. iv. 4, vii. 4), and, with a more limited connotation, a watch-tower in a garden or vineyard or in a fortification. It was customary to erect watch-towers in the vineyards for the guards (Isa. v. 2), and such round and tapering structures may still be seen in the vineyards of Judea. Similar towers were built for the protection of the flocks by the shepherd, in the enclosures in which the animals were placed for the night (comp. the term "tower of the flock," Gen. xxxv. 21; Micah iv. 8), and it is expressly stated that Uzziah built such structures in the desert for his enormous herds (II Chron. xxvi. 10). Around these towers dwellings for shepherds and peasants doubtless developed gradually, thus often forming the nuclei of permanent settlements. Towers for defense were erected chiefly on the walls of fortified cities, the walls themselves being strengthened by bastions (Neh. iii. 1), and the angles and gates being likewise protected by strong towers (II Kings ix. 17). Thus the walls of the city of Jerusalem were abundantly provided with towers in antiquity, and the ancient tower of Phasael (the so-called "tower of David") in the modern citadel is an excellent specimen of this mode of defense, its substructure being of massive rubblework, and the ancient portion of the tower erected upon it being built of immense square stones (for illustration see
At the end of the 19th century physics had a significant problem reconciling Newtonian mechanics with Maxwell's theory of electricity and magnetism (E&M). Roughly, very roughly put, this boiled down to the invariance of physics across frames moving uniformly with respect to each other. One needs to calculate the time and position of a body, or an electromagnetic wave, in one frame compared to that in another moving uniformly with respect to the first. In human speak, this is figuring out how the same process appears when one is moving on a train as opposed to when one is standing on the ground watching the train go by. Of course, both the train and the Earth are moving, and the train could be a really fast space ship, but we really do not need to go there. The Newtonian solution is a Galilean transform: time is the same everywhere, and the difference in position changes as the product of time and velocity, e.g. r'(t) - r(t) = -vt, where t is time, v the relative velocity between the frames, and r'(t) and r(t) the positions in the two frames at any time t. Within this picture, Newtonian physics remains the same for both the observer on the ground and the one on the train. For the physics described by Maxwell's equations this does not work. One needs a more complex looking set of equations called Lorentz transforms. These allow calculation of such things as the wavelength and energy of an electromagnetic wave (e.g. light, radio waves) in one frame given the time and position in another. With this key idea, and none of the mathematics, we can appreciate Einstein's contribution. Physics should be independent of the rock we are standing on when we measure it. One way out was intensively explored by Michelson and Morley and others. They showed that the laws of E&M did not change when the frame of observation changed. Therefore, Einstein concluded that the Galilean frame transform was the one that had to be modified.
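The contrast between the two transforms can be made concrete in a few lines. This is a sketch only; the relative velocity of 0.6c is chosen arbitrarily so that the difference is visible:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def galilean(x, t, v):
    """Newtonian frame change: x' = x - v*t, t' = t."""
    return x - v * t, t

def lorentz(x, t, v):
    """Special-relativistic frame change:
    x' = gamma*(x - v*t), t' = gamma*(t - v*x/c^2)."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return gamma * (x - v * t), gamma * (t - v * x / C ** 2)

# An event at the rest-frame origin one second in: x = 0, t = 1 s.
# At v = 0.6c, gamma = 1.25, so the transforms disagree sharply.
v = 0.6 * C
xg, tg = galilean(0.0, 1.0, v)
xl, tl = lorentz(0.0, 1.0, v)
print(tg, tl)  # Galilean keeps t' = 1 s; Lorentz gives t' = gamma = 1.25 s
```

At everyday speeds (v/c of order 10⁻⁷ for a train) gamma is indistinguishable from 1 and the two transforms agree, which is why the conflict only surfaced with electromagnetic waves.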
This leads to a set of Lorentz invariant transforms for particle motion, and prediction of such things as time dilation, the energy equivalence of mass, E = mc², and more, each having been observed and shown to obey the predictions of what is called the Special Theory of Relativity. The Theory of Relativity became established science within twenty years. Some recognized its importance and correctness quickly, for example Max Planck. On the other hand, the theory remains distasteful to many, mostly not scientists. Exploration of that distaste is instructive, and there is no better place to start than Einstein's take:

"This world is a strange madhouse. Currently, every coachman and every waiter is debating whether relativity theory is correct. Belief in this matter depends on political party affiliation."

and

"Anti-relativists were convinced that their opinions were being suppressed. Indeed, many believed that conspiracies were at work that thwarted the promotion of their ideas. The fact that for them relativity was obviously wrong, yet still so very successful..."

There is an historical and philosophical literature about this. The role that Naomi Oreskes and Erik Conway's book Merchants of Doubt plays for climate denial belongs here to Milena Wazeck's Einsteins Gegner: Die öffentliche Kontroverse um die Relativitätstheorie in den 1920er Jahren (Einstein's Opponents: The Public Controversy about the Theory of Relativity in the 1920s). Eli has not yet read that book (on order from Amazon), which is untranslated from the German, but there is a review by Jeroen van Dongen, "On Einstein's Opponents, and Other Crackpots," that sets the stage nicely. One of the first things that jumps out is how the witches of denial cooked relativity and climate change in the same kettle. Wazeck observes that the physicists who opposed Einstein, and it did get personal, were those who were fearful of being sidelined by the shift of the field to a more mathematical approach.
In the same way, the professional opposition to climate change science was rooted in the observational climatologists (the Tim Balls, the William Grays, the Pat Michaels of the world) and the regional climatologists (the Roger Pielke Sr.'s) seeing a threat to their work from the climate modeling community. And, of course, there were the citizen scientists, as Roy Spencer would put it, or in Wazeck's description:

"Although they had previously played no role in German academic life, during the 1920s scores of self-proclaimed researchers alleged to have proved the theory of relativity to be scientifically incorrect. Because the arguments set out in hundreds of ensuing publications frequently rested on fundamental misunderstandings of Einstein's new theory, their accounts have largely been ignored by traditional history of science."

Is there any better description of Chris Monckton than this of Arthur Patschke?

"Non-academic researchers like Patschke announced public lectures, submitted essays, and tried to establish contact with Einstein and other leading scholars in order to warn them—as well-intentioned colleagues—of the falsehood of the theory of relativity and to convince them of the veracity of their own scientific worldviews. Patschke and others like him were often simply ignored; in other instances, it was patiently explained how their criticisms of the theory of relativity had completely missed the mark. But because their observations were anchored in specific worldviews, Patschke and his associates were immune to this type of criticism."

Well, except for the well-intentioned part, but that too soon vanished, and the roots of this are explored by Wazeck:

"The controversy surrounding the theory of relativity was exceptionally heated. In many pamphlets one finds what might be described as a martial rhetoric of damnation; his opponents also staged acts of protest that sought to inflame public opinion against Einstein's work."
"A complex process of marginalization and protest helps to account for the heated responses to Einstein's theory."

Of course, for scientific denialism to gain traction there have to be political and philosophical motivations. Politically, while anti-Semitism played a major role, it must not be forgotten that Einstein's outspoken and well known pacifism did not play well on the right of post-WWI Germany's political spectrum, where defeat and disgrace were blamed on internal enemies (of course including Jews, liberals, social democrats, and communists). Moreover, the theory of relativity, in removing time from its immutable privileged place, stirred fierce resentment, especially among those who regarded it as a world view rather than a scientific theory. Perhaps it would have been better to call Einstein's work the Theory of Consistent Physics. Still, one cannot separate the gathering storm which led to Hitler from the political context in which the Theory of Relativity was demonized. Van Dongen writes in his short book review (go read it and say to yourself at every paragraph: hey, Eli recognizes that, he read it yesterday at WUWT, at Curry's, etc.):

"Conspiracy theories tend to do well in uncertain times: they create order in chaos. Hence, they thrived in post-World War I Germany. Just as there is no real point in debating conspiracy theorists, there was no point in explaining relativity to anti-relativists, Wazeck astutely observes. Their strong opposition was not due to a lack of understanding, but rather the reaction to a perceived threat. Furthermore, anti-relativists were convinced of their own ideas, and were really only interested in pushing through their own theories; any explanation of relativity would not likely have changed their minds. Initially, relativists, and in particular Einstein himself, were willing to engage in correspondence or debate with their critics."
"By the early 1920's, however, they concluded that sufficient common ground was lacking, and likely chose not to further waste any valuable time."

Oh yes, over at Dr. Roy's place this morning.

"Nevertheless, anti-relativists were convinced that their opinions were being suppressed. Indeed, many believed that conspiracies were at work that thwarted the promotion of their ideas. The fact that for them relativity was obviously wrong, yet still so very successful, strengthened the contention that a plot was at play—and some anti-relativists were convinced that the co-conspirators were Jewish. Jews were held to dominate both the newspaper business and the new discipline of theoretical physics; they could thus easily advertize one of their own (Einstein) and his fallacious work (relativity). Gehrcke, for instance, kept emphasizing that the successes of relativity could only be explained by a state of 'mass hypnosis', brought about by excessive and one-sided reporting. Such a qualification resonated with familiar anti-Semitic reasonings (it was a known anti-Semite strategy to claim Jewish hyping), and was well received in right-wing media. Yet, Wazeck denies any overt anti-Semitic motivations on Gehrcke's part; in her perspective, a crucial distinction seems to be that he did not necessarily primarily intend to promote the rightist cause, as Weyland appeared to have attempted."

"I would guess today's research funding lopsidedness is currently running at least 100 to 1, humans versus nature. Is that really how the public would like their tax dollars spent?"

Eli is not the first to have spotted the analogy, or even van Dongen's essay. Brendan DeMelle had a piece in 2010 at deSmog. The entire analogy is filled with irony (Eli only does snark). For example, the equivalent of the NIPCC was the "Academy of Nations", bringing together Einstein's opponents, organized by Ernst Gehrcke in Germany and Arvid Reuterdahl, engineer and dean at the College of St. Thomas in St. Paul, Minnesota.
Alert bunnies may recognize that John Abraham, one of the most successful defenders of climate science, is, at the University, nee College, of St Thomas, and that the administration of the University acted admirably when Chris Monckton bared fangs and attacked John. In conclusion, one must acknowledge that science denial is the same from every point of observation both in time and in space. Eli formulates this as Lewandowsky's Denial Invariance Transform.
Vision: To develop a healthy, well educated society with a high quality of life.

Mission: To carry out advocacy work, promote early childhood and adolescent development, and fight health hazards and illiteracy through performing arts and sports.

Programmes: Digital literacy, which includes activities such as dramatizing educational, social, and economic issues into movies, edutainment stage drama skits, or documentaries, i.e.:

a) Awolwatuuka (oral traditional folk storytelling): a folk school program developed from the genre of oral traditional storytelling, dramatized into 15-minute TV programs, purposely to educate the TV generation that missed the genre of intellectual folk stories that groomed our fore-parents into obedient, hardworking, and good citizens of this country.

b) Novels, e.g. Ebbanja ly'Obutonde: a novel that exposes the immoral practices that go on inside our homes that children rarely tell, e.g. premature sexual relationships of children with respected members of the family, defilement, rape, hazardous abortion, and sexually transmitted diseases including HIV/AIDS.

c) Movies, e.g.:
i. Sweet Enemy: a movie partially financed by UNESCO which exposes the promiscuous behaviors where children born HIV positive date fellow pupils/students, teachers, and other members of the community, something which makes school communities the leading groups in the spread of HIV.
ii. The Sour Challenges: a movie in the making that fights stigma and discrimination and encourages parents to open up with school administrators about children on ARV medication.

d) The Uganda Nursery Music Festival, the Uganda Nursery Sports Gala, and the Kids Uganda community soccer academy, where edutainment programmes on health hazards are communicated; Play Day, where families get together for play celebrations; and many other activities directed to the same educational purpose, i.e. producing education material for use in schools.
A new study has confirmed that not only the brain, but other body parts also play a significant role in problem solving. “Being able to use your body in problem solving alters the way you solve the problems,” said Martha Alibali, psychology professor at the University of Wisconsin. “Body movements are one of the resources we bring to cognitive processes,” she added. To confirm their findings, researchers recruited 86 American undergraduates, half of whom were prevented from moving their hands and the other half of whom were prevented from moving their feet. Read more: Yahoo India
While world governments bedwet over a fantasized climate catastrophe taking place 100 years out, mankind could be facing a potential catastrophic food shortage. A worthwhile read (see link below). The disease, Ug99, a virulent strain of black stem rust fungus (Puccinia graminis) discovered in Uganda in 1999, threatens the world’s wheat supply. Read the scary details here: http://www.wired.com/magazine/2010/02/ff_ug99_fungus/all/1 Wheat provides 20% of all calories consumed by humans. According to Nobel laureate Norman Borlaug, father of the Green Revolution: “This thing has immense potential for social and human destruction.” According to wired.com, the fungus attacks the stem of the wheat plant, causing it to wither and die. Stem rust is the polio of agriculture, a plague that was brought under control nearly half a century ago as part of the celebrated Green Revolution. After years of trial and error, scientists managed to breed wheat that contained genes capable of repelling the assaults of Puccinia graminis, the formal name of the fungus. But now it’s clear: the triumph didn’t last. The new fungus has spread from Africa into the Middle East. It would only take a traveller with a single spore on his shirt to transport it to the USA and Canada. The pathogen makes its presence known to humans through crimson pustules on the plant’s stems and leaves. When those pustules burst, millions of spores flare out in search of fresh hosts. It goes to show that nature has a bag full of nasty tricks, and there’s nothing you can do to stop her. All you can do is adapt, hopefully quickly enough. But if you waste your time trying to appease her, and don’t invest your resources wisely in adapting, you’ll get eliminated.
Installing a GPS instrument in the field

Science is filled with stories of people finding unexpected signals in their noise. Such is the case with the present study, a study I was lucky enough to be involved in, one involving high precision GPS data. Keep in mind, this ain't your handheld GPS—this is the stuff scientists use to track centimeter- to millimeter-scale movements of continental plates over long time periods. When a GPS satellite passes across the sky, the signal it sends out has to travel through the atmosphere, and occasionally it bounces off the land before getting to a GPS instrument. Both the atmosphere and the land create a problem called multi-path noise. This noise is suppressed as much as possible by the instruments, but it is never completely eliminated, and it leads to error in high-precision GPS positioning data. However, when researcher Kristine Larson at the University of Colorado started seeing a consistent pattern in this noise, she talked to a geologist, Eric Tilton. After numerous meetings, lots of modeling, and a little digging in the dirt, a group of us had the story that we just published in the journal Geophysical Research Letters. This "noise" was telling us how much water was in the soil. You see, when the GPS signal bounces off the ground, the distance it penetrates into the ground is related to the amount of water in the soil. This depth of penetration affects the total distance the multi-path signal travels on its way to the GPS receiver, and the difference between this multi-path distance and the direct-path distance leads to interference between the two signals and, you guessed it, "noise." To verify the signal that we were seeing in the "noise," we initially used a computer model to simulate soil water content. When this simulation matched up nicely with the noise, we decided to go the next step and install a series of soil moisture probes at a nearby site. We collected almost a year's worth of soil moisture data and, in the end, the soil moisture we retrieved from the GPS "noise" was correlated to the soil moisture we measured in the ground with an r² of 0.91. Considering that these two measures of soil moisture come from slightly different locations and there are errors in both measurements, this is an extremely good correlation. Much like measuring water held in a plant canopy, measuring soil water content is important because of its influence on evaporation and, as a result, the weather. The potential to use a large network of existing GPS installations to measure soil water content could substantially improve model initialization for weather and flood forecasts.

Geophysical Research Letters, 2008. DOI: 10.1029/2008GL036013
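The r² quoted above is the squared Pearson correlation between the GPS-retrieved and probe-measured soil-moisture series. A minimal sketch of that computation follows; the numbers below are invented placeholders, not the study's data:

```python
def r_squared(x, y):
    """Squared Pearson correlation between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))  # covariance sum
    sxx = sum((a - mx) ** 2 for a in x)                   # variance sums
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

# Placeholder series standing in for GPS-derived vs. probe-measured
# volumetric soil moisture (the study's real data gave r^2 = 0.91).
gps = [0.12, 0.18, 0.25, 0.22, 0.15, 0.30]
probe = [0.10, 0.17, 0.27, 0.21, 0.16, 0.31]
print(f"r^2 = {r_squared(gps, probe):.2f}")
```

An r² of 0.91 means the GPS-derived estimate explains about 91% of the variance in the ground-truth measurements, which is why the authors call it an extremely good correlation given the location mismatch and measurement error in both series.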
Inflammatory bowel disease (IBD) refers to chronic conditions that cause inflammation in the lining and, in some instances, the wall of the intestine. The 2 primary types of IBD are:

- Ulcerative colitis—This type causes inflammation and ulcers in the top lining of the colon and rectum.
- Crohn's disease—This type causes inflammation and ulcers in the lining and the wall of any part of the gastrointestinal tract.

Crohn's disease usually affects the small intestine, particularly the last section (called the ileum). However, any part of the gastrointestinal tract—from the mouth to the anus—can be affected. The inflammation associated with Crohn's disease reaches deeper into the layers of the intestinal wall, as opposed to ulcerative colitis, which affects primarily the lining of the intestine.

The cause of inflammatory bowel disease is not known. It seems to run in some families. Some researchers think that an infection causes the immune system to overreact and damage the intestines. The Crohn's & Colitis Foundation of America estimates that as many as one million Americans may have inflammatory bowel disease—about half of these people have Crohn's and the other half have ulcerative colitis.

Increased Risk of Colon Cancer

About 5% of people with ulcerative colitis eventually develop colon cancer. The risk of cancer increases with the duration and the extent of involvement of the colon. The risk is higher in those with ulcerative colitis with involvement of the entire colon and in those who have had the disease over 8-10 years.

Complications of Crohn's Disease

Possible complications of Crohn's disease include intestinal obstruction and formation of fistulas.
A fistula is an abnormal connection between the intestine and other organs or tissues, such as the bladder, vagina, or skin.

What are the risk factors for inflammatory bowel disease?
What are the symptoms of inflammatory bowel disease?
How is inflammatory bowel disease diagnosed?
What are the treatments for inflammatory bowel disease?
Are there screening tests for inflammatory bowel disease?
How can I reduce my risk of inflammatory bowel disease?
What questions should I ask my doctor?
What is it like to live with inflammatory bowel disease?
Where can I get more information about inflammatory bowel disease?

- Reviewer: Daus Mahnke, MD
- Review Date: 09/2015
- Update Date: 09/17/2014
The greenback cutthroat trout, Colorado’s state fish, can be found only in a 4-mile span of Bear Creek, located southwest of Colorado Springs. A recent study conducted by the University of Colorado delved into the genetics of the greenback cutthroat trout and found that many were mistaking the Colorado River cutthroat, Rio Grande cutthroat and others for the greenback. The U.S. Forest Service is currently exploring options to conserve the greenback and the creek upon which the fish depends. Meanwhile, TRCP partner Trout Unlimited is working to address trail impacts in the Bear Creek area. For any anglers out there thinking they caught a greenback only to learn later that they were mistaken, the TRCP feels your pain. Last summer, we shot an episode of “TRCP’s Native Trout Adventures” in which we mistakenly thought we were fishing for – and catching – greenback cutthroat trout in Pike National Forest near South Park, Colo.
By Kara Duffy
July 15, 2013

Thomas County, GA - It's been one week since the first human case of West Nile Virus was confirmed in Georgia. Health officials say the recent wet weather and hot temperatures could be a sign that there are more cases to come.

Health officials at the Thomas County Health Department say we have to be extra careful when we head outside now. They say the West Nile Virus has arrived early this year and it's our job to be prepared. The heavy rainfall over the past few weeks could have brought along some early, unwanted visitors: mosquitoes that, health officials say, could be infected with the West Nile Virus.

Jay Ridenhower, Thomas County Environmental Health Manager: "The West Nile Virus breeds in artificial containers, such as old tires, coffee pots, coffee cans, paint cans, bird baths, dog dishes; things like that where people typically have them laying in the yard somewhere."

Health officials say 4 out of 10 people who are bitten by a mosquito infected by the virus won't show any symptoms, but for the other six it could be debilitating or even deadly.

Carolyn Simmons, RN Director of Thomas County Health Department: "Usually it takes about 2 to 15 days to develop symptoms. The symptoms are headache, fever, muscle and joint aches, neck discomfort, swollen lymph nodes, and a rash."

Health experts say the elderly, as well as people with compromised immune systems or underlying health issues, have the greatest risk of being infected. However, they urge everyone to take proper precautions.

"Repellants, curtailing your activities so that you're doing them during the hottest parts of the day or during the day when the mosquitoes aren't usually out; they're usually out early morning or late evening."

Officials at the Thomas County Health Department say they're also offering residents free packets of insect growth regulator tablets.
They say the tablets can be placed in areas with standing water and will prevent the mosquito larvae from developing into adults.
This is a simple cut and paste paper craft good for "just for fun", alphabet practice, spelling practice, an insects theme, or an animals theme. There are different types of caterpillars you can make: Lower Case Alphabet, Upper Case Alphabet, Spelling Words, Vowels (aeiou).

The pieces for this craft are fairly large (there are three pieces on each page -- making the entire alphabet takes 5 pages). This looks terrific on something like a school bulletin board, but can be a bit large for small individual projects. If you would like to do a smaller alphabet caterpillar, make your own small circle template from a piece of cardboard or margarine container lid. Allow the children to trace this onto pieces of construction paper and allow them to print the letter on the piece. Add wiggly eyes and pom pom antennas for a cute face!

- something to color with if using the B&W version,
- Print out the templates of choice:
  - If you're doing the entire alphabet, you just print one of every sheet (you can print different colors out if you want to pattern as well).
  - If you're letting the children make up words, print three or four copies of each page. You can print the B&W version on different colors of construction paper to let the children make patterns and unique caterpillars.
  - When doing words, you can spell anything you like (names, weekdays, etc) but I particularly like doing "caterpillar themed" words (antenna, caterpillar, leaves, insect, butterfly, etc) after learning about caterpillars then putting them up on the bulletin board to make your own themed word wall.
- Color (where appropriate) and cut out the template pieces.
- Glue the caterpillar together.
- Close the template window after printing to return to this screen.
- Set page margins to zero if you have trouble fitting the template on one page (FILE, PAGE SETUP or FILE, PRINTER SETUP in most browsers).

Lower Case Alphabet:
Upper Case Alphabet:
Aug 27, 2010
By David Dick-Agnew

If you’re like me — like most people — you probably have a hard time holding the concept of a quadrillion in your mind. Even a million is a little hard to wrap your head around. This means that when someone throws out the fact that Warren Buffett is sitting on $62 billion, or that the US debt is over $13 trillion, it’s more or less meaningless. How can we grasp the importance of these ideas if we can’t even comprehend their scale? To help to better visualize the relative weight of these amounts, I’ve thrown together this handy illustrated guide. The scale is more or less consistent, so walking through the next 6 images will hopefully show how these orders of magnitude stack up.

First off, 1: For the purposes of this guide, 1 = 1 cubic millimeter. That’s less than 1/16″, for all you imperialists out there. Roughly the size of a grain of coarse sand, or a honeybee’s brain. Keep that in mind — that’s our basic unit of measurement.

To stack these little guys ten high, ten wide, and ten deep, it would take 1 thousand (1,000) of them (10 x 10 x 10, or 10³ if that helps). 1000 of these units would fit into a space the size of a sugar cube: Most people measure their income in the scale of these sugar cubes. The average American, working full-time, pulls in about 40 of these a year. Not enough to make a single handful. Or, to look at it another way, this represents roughly the number of words it would take to replace this picture.

If you stack these sugar cubes 10 high, 10 wide, and 10 deep (which would take 1000 of them, or 100 x 100 x 100 of our original tiny single unit), you will have 1 million (1,000,000) of the original unit of 1: That’s roughly one for every person living in San Jose, America’s 10th most populous city. Still fits nicely on an average breakfast tray. But bearing in mind that it’s made up of bits the size of grains of sand — that’s still a lot of them. Incidentally, this is also 1 liter in volume.
Fill it with water and it will weigh 1 kilogram. Ah, the metric system! To stack these 1-liter-sized cubes 10 by 10 by 10 would take 1 billion (1,000,000,000) of our basic 1mm³ units: 1 billion is how much it costs, in dollars, to buy approximately one third of a Virginia-class nuclear submarine. It’s how much money Avatar made (international box office gross) in only 17 days. Warren Buffett gave 37 of these to the Bill & Melinda Gates Foundation in 2006. But it would take the average American, working full-time, 25,000 years to earn just one of them (and it takes 1 year for Djibouti’s entire population of 864,000 people to do the same). It’s expected that next year, the world population will reach 7 billion; imagine 7 of these in a row, and there’ll be one grain of sand for every person alive. And it would take at least 20 of them to represent all the websites on the Internet. If you could manage to find enough of these 1m³ cubes to stack them 10 high, 10 wide, and 10 deep, you’d be looking at 1 trillion (1,000,000,000,000) of our starting units: Enough to cause some pretty bad traffic. This is getting up into the range of pure abstraction, and yet there are a few things we measure in the trillions. It would take 5 of these mammoth blocks to represent Japan’s GDP in a year. Australia’s would take just 1. America’s GDP would require 14, its debt would take 13, and its bank bailout 10. One of these could purchase all homes foreclosed in America in 2007 and 2008 combined. It would take between 60 and 100 of these trillion-cubic-millimeter blocks to represent all the synapses in the human brain. And it would take a whopping 200 of these massive blocks to represent every ant on Earth. And given that each ant is, on average, a little bigger than a large grain of sand, that means if you put all the ants on earth in a single place, these buildings — and these poor people — would be buried. 
But it would take even more than 200 of these blocks — 5 times as many, to be exact — to reach 1 quadrillion (1,000,000,000,000,000): This is a number so huge, it has basically no practical applications (unless you wanted to talk about the number of ants that lived in the past 2 and a half years). It would take between 2,500 and 10,000 galaxies like our own to total a quadrillion stars — maybe as much as a tenth of the surveyed universe. The average American, working full-time, would have to work 250 million years to earn 1 quadrillion pennies. If you’d started when the continents still formed the single land-mass known as Pangaea, you’d just about be there by now. Of course, a quadrillion pennies would weigh 2,500,000 metric tonnes, equal in weight to Russia’s entire grain imports in 2010 — that would require a pretty big piggy bank. Back to the human brain: it’s estimated that our synapses each fire about 300 to 400 times per second, but at peak moments they can fire as many as 1000 times a second. It’s impossible for every synapse to fire at the same time, but we can still calculate the upper limit of possible brain events per second. Given the estimate of 60 to 100 trillion synapses, that means it would take between 18 and 100 of these mammoth, skyscraper-sized blocks to represent the range in the number of events the human brain is capable of sustaining in a single second. That’s a whole city! And in 10 seconds? That’s right: it would take up to 1,000 of them — 1 quintillion (1,000,000,000,000,000,000) mm³ in total. Now if that doesn’t beat all. By all means, keep going. If you have any other things measurable in billions, trillions, or quadrillions, I’d love to hear about them! That’s what the comments are for.
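The cube-stacking arithmetic running through this piece is easy to check in a few lines. This sketch just reproduces the article's own numbers (the $40,000 average full-time income is the figure the piece uses):

```python
# Each step stacks the previous cube 10 x 10 x 10: the unit count grows
# by a factor of 1,000 while the edge length grows by a factor of 10.
names = ["one", "thousand", "million", "billion", "trillion", "quadrillion"]
for step, name in enumerate(names):
    count = 1000 ** step          # how many 1 mm^3 units are in the cube
    edge_m = 10 ** step / 1000    # edge length of the stacked cube, meters
    print(f"{name:>11}: {count:,} units, edge {edge_m:g} m")

# The income arithmetic from the article, at $40,000 per year:
print(1_000_000_000 // 40_000)     # years to earn $1 billion -> 25000
print(10**15 // (40_000 * 100))    # years to earn 1 quadrillion pennies -> 250000000
```

The quadrillion cube comes out at 100 meters on a side, which squares with the article's image of skyscraper-sized blocks, and the 25,000-year and 250-million-year figures match the ones quoted above.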
(Last updated 5/23/05)

Editor's note: a preprint of the journal article announcing the planet will be available on the Web starting Monday evening, May 23, 2005, at the following URL: http://arXiv.org/abs/astro-ph/0505451 . Reporters may obtain a copy of the paper from Pam Frost Gorder.

Previous stories pertaining to Professor Gould's research:
- "Astronomers Measure Mass Of A Single Star -- First Since The Sun," 7/14/04.
- "Study: Search For Life Could Include Planets, Stars Unlike Ours," 8/1/03.
- "First Definitive Mass Measurement Of A Gravitational Microlens," 1/8/02.
- "Planet Search Results Suggest Our Solar System May Be Uncommon," 1/12/00.
- "Stars In Neighboring Galaxy Offer Clues To Mystery Of Dark Matter," 1/4/99.
- "Cometary Impact With Earth Unlikely In The Next 500,000 Years," 7/30/98.
- "New Technique May Help Find Missing Mass In The Galaxy," 8/14/97.
- "Study Findings Deepen Mystery Of Dark Matter In Space," 3/27/96.

ASTRONOMERS, AMATEUR SKYWATCHERS FIND NEW PLANET 15,000 LIGHT YEARS AWAY

COLUMBUS, Ohio -- An international collaboration featuring Ohio State University astronomers has detected a planet in a solar system that, at roughly 15,000 light years from Earth, is one of the most distant ever discovered. In a time when technology is starting to make such finds almost commonplace, this new planet -- which is roughly three times the size of Jupiter -- is special for several reasons, said Andrew Gould, professor of astronomy at Ohio State. The technique that astronomers used to find the planet worked so well that he thinks it could be used to find much smaller planets -- Earth-sized planets, even very distant ones.
And because two amateur astronomers in New Zealand helped detect the planet using only their backyard telescopes, the find suggests that anyone can become a planet hunter. Gould and his colleagues have submitted a paper announcing the planet to Astrophysical Journal Letters, and have posted the paper on a publicly available Internet preprint server, http://arXiv.org . The team has secured use of NASA's Hubble Space Telescope in late May to examine the star that the planet is orbiting. The astronomers used a technique called gravitational microlensing, which occurs when a massive object in space, like a star or even a black hole, crosses in front of a star shining in the background. The object's strong gravitational pull bends the light rays from the more distant star and magnifies them like a lens. Here on Earth, we see the star get brighter as the lens crosses in front of it, and then fade as the lens gets farther away. Because the scientists were able to monitor the light signal with near-perfect precision, Gould thinks the technique could easily have revealed an even smaller planet. "If an Earth-mass planet was in the same position, we would have been able to detect it," he said. On March 17, 2005, Andrzej Udalski, professor of astronomy at Warsaw University and leader of the Optical Gravitational Lensing Experiment, or OGLE, noticed that a star located thousands of light years from Earth was starting to move in front of another star that was even farther away, near the center of our galaxy. A month later, when the more distant star had brightened a hundred-fold, astronomers from OGLE and from Gould's collaboration (the Microlensing Follow Up Network, or MicroFUN) detected a new pattern in the signal -- a rapid distortion of the brightening -- that could only mean one thing. "There's absolutely no doubt that the star in front has a planet, which caused the deviation we saw," Gould said. 
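The smooth hundred-fold brightening described above follows the standard point-lens (Paczyński) magnification curve; a planet around the lens star shows up as a brief distortion on top of it. Here is a minimal sketch of that curve. The parameter values are illustration values chosen to give a roughly hundred-fold peak, not the fitted parameters of this event:

```python
import numpy as np

def magnification(t, t0, tE, u0):
    """Point-lens (Paczynski) microlensing light curve.

    u is the lens-source separation in units of the Einstein radius;
    the closer the alignment (small u), the larger the magnification.
    """
    u = np.sqrt(u0**2 + ((t - t0) / tE)**2)
    return (u**2 + 2.0) / (u * np.sqrt(u**2 + 4.0))

# Illustration values only (not this event's fitted parameters):
days = np.linspace(-60.0, 60.0, 1201)   # time relative to peak, in days
curve = magnification(days, t0=0.0, tE=20.0, u0=0.01)
# u0 = 0.01 gives a peak magnification near 1/u0 = 100, i.e. the
# hundred-fold brightening the astronomers watched develop.
```

Far from the peak the magnification settles back to 1 (no brightening), which is why a lensing event looks like a single smooth rise and fall unless something like a planet perturbs it.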
OGLE finds more than 600 microlensing events per year using a dedicated 1.3-meter telescope at Las Campanas Observatory in Chile (operated by Carnegie Institution of Washington). MicroFUN is a collaboration of astronomers from the US, Korea, New Zealand, and Israel that picks out those events that are most likely to reveal planets and monitors them from telescopes around the world. "That allows us to watch these events 24/7," Gould said. "When the sun rises at one location, we continue to monitor from the next." Two of these telescopes belong to two avid New Zealand amateur astronomers who were recruited by the MicroFUN team. Grant Christie of Auckland used a 14-inch telescope, and Jennie McCormick of Pakuranga used a 10-inch telescope. Both share co-authorship on the paper submitted to Astrophysical Journal Letters. Two other collaborations -- the Probing Lensing Anomalies NETwork (PLANET) and Microlensing Observations in Astrophysics (MOA) -- also followed the event and contributed to the journal paper. Ohio State scientists on the project included Darren DePoy and Richard Pogge, both professors of astronomy, and Subo Dong, a graduate student. Other partners hail from Warsaw University in Poland, Princeton University, Harvard-Smithsonian Center for Astrophysics, Universidad de Concepción in Chile, University of Manchester, California Institute of Technology, American Museum of Natural History, Chungbuk National University in Korea, Korea Astronomy and Space Science Institute, Massey University in New Zealand, Nagoya University in Japan, and the University of Auckland in New Zealand. This is the second planet that astronomers have detected using microlensing.
The first one, found a year ago, is estimated to be at a similar distance. Gould's initial estimate is that the new planet is approximately 15,000 light years away, but he will need more data to refine that distance, he said. A light year is the distance light travels in a year -- approximately six trillion miles. The OGLE collaboration is funded by the Polish Ministry of Scientific Research and Information Technology, the Foundation for Polish Science, the National Science Foundation, and NASA. Some MicroFUN team members received funding from the National Science Foundation, Harvard College Observatory, the Korea Science and Engineering Foundation, and the Korea Astronomy and Space Science Institute. Contact: Andrew Gould, (614) 292-1892; email@example.com Written by Pam Frost Gorder, (614) 292-9475; Gorder.firstname.lastname@example.org
First of three parts Bare suburban streets. Thousands of gallons of rainwater with nowhere to go. Billions of dollars in public money. Higher air-conditioning and heating bills. Lower property values. And millions of dead trees that could pose hazards to people and property. In the next five to seven years, the tiny emerald ash borer will utterly change the landscape of the Chicago region. In some places, it will happen one tree at a time; in others, whole blocks of trees will be felled at once. Illinois has the largest population of public ash trees in the nation, with at least 5.5 million on developed land statewide and nearly 3 million of those in the Chicago area, according to a study on emerald ash borer damage expected between 2009 and 2019. Between those public trees and millions more on private property and in forest preserves, nearly 1 in every 5 trees in the region is an ash -- all of which will be destroyed by the ash borer if not treated. "This is like a natural disaster in slow motion," said Scott Shirmer, emerald ash borer program manager for the Illinois Department of Agriculture. "We didn't see it coming very far in advance, and once we did, we tried to prepare as best we could," he said. "We'll just have to pick up the pieces afterward." Worse, that slow-moving disaster is actually speeding up. Ash borer treatments don't work nearly as well on trees that are not well watered, and trees have been weakened in general by the drought -- making some people wonder if treatments are even worth it. That same study, done by experts in entomology, forestry and economics, says that in the 25 states where emerald ash borer infestation is at its peak, dealing with it will cost upward of $10.7 billion. Illinois' share of that is estimated at $2.1 billion, split up among municipalities, forest preserves, private landowners and other government units. Michigan and Indiana have already gotten the worst of it. 
Fort Wayne, Ind., has lost thousands of ash trees and plans to remove another 4,500 in 2012. Chad Tinkel, manager of forestry operations for Fort Wayne, said knowledge and planning are key to avoiding the "massive catastrophe" his city has seen. "Act now," he warned. "Do not wait and think you can handle it as it comes." Andi Dierich, forest pest outreach coordinator for the Morton Arboretum in Lisle, said the ash borer is devastating everywhere it strikes. "But, there's so much ash that was planted here, so the devastation is just magnified," she said. Ash tree overload The suburbs are heavily populated with ash trees, which make up nearly 35 percent of some towns' tree populations. As suburbs expanded in the past half-century and developers looked for trees to decorate the new subdivisions, ash was an inexpensive, fast-growing, large-canopied option. As well, it had no known predator, an important factor in the post-Dutch Elm disease years. The emerald ash borer, which evolved in Asia, first came to the U.S. around 1990, hidden in packing crates and pallets aboard ships and planes. The bug was first seen in 2002 in Detroit suburbs. It would traditionally take years for borers to move even a few miles, but humans sped up the infestation by moving infected pallets and firewood around the country, Shirmer said. Ironically, while the ash borer evolved in China and Korea, it has not destroyed the Asian ash trees. If it had, the insects would have destroyed their means of survival -- as will eventually happen here. Instead, ash species in Asia have something in their DNA that makes them resistant to the insects until near the end of their lives, said University of Illinois Extension entomologist Phil Nixon. In Asia, ash borers hasten the demise of trees already in decline, unlike the U.S. where they attack ash trees in all stages of their lives. 
Scientists are studying the Asian trees' genetic makeup to find what makes them resistant, but it's a natural defense compared to the chemical treatments being applied here, Nixon said. Arborists are hoping municipalities will learn from this disaster that they need a greater variety of trees. "This shouldn't have had to happen twice," said Mark Spreyer, naturalist at Stillman Nature Center in South Barrington, referring to the lessons not learned from the Dutch elm debacle of the last century. "There's no new lessons to be learned here that shouldn't have been learned last time." Already some suburbs are taking the lesson to heart. In Schaumburg, for instance, no single species will be allowed to make up more than 7 percent of the total forestation. As millions of trees begin to come down, experts are concerned about the environmental costs of so much canopy loss concentrated in the Chicago area. Studies show large shade trees can reduce summer energy use by 20 to 25 percent, Dierich said. Even scarier is the effect on local and regional stormwater systems. A large ash tree, 18 inches in diameter, prevents about 2,200 gallons of water from hitting the ground each year as rainwater is soaked into the leaves, bark and roots. "If you lose that one tree you have an extra 2,200 gallons hitting the ground every year," Shirmer said. "That in itself is not significant, but if you lose 10,000 trees or a million trees, it's that much additional water to flow into the sewer systems." As more water reaches the ground, erosion problems and basement flooding will follow, he said. The loss of trees also will make property less attractive to buyers. A study published in the Southern Journal of Applied Forestry estimates that each large front yard tree adds 1 percent to a house's sale price, while large "specimen" trees can add as much as 10 percent to a property's value.
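The stormwater numbers quoted above scale linearly, which is the official's point: one tree is negligible, a region's worth of trees is not. A quick back-of-the-envelope check using the article's 2,200-gallon figure:

```python
# Per the study cited above: one large ash intercepts ~2,200 gallons/year.
GALLONS_PER_TREE = 2_200

for trees in (1, 10_000, 1_000_000):
    gallons = trees * GALLONS_PER_TREE
    print(f"{trees:>9,} trees lost -> {gallons:>13,} extra gallons/year")
```

Losing a million ash trees would send on the order of 2.2 billion extra gallons of rainwater a year into sewers and basements.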
"You have a lot of residents who purchase homes or select neighborhoods based on the trees," Nixon said. "The idea of moving to an area without trees or where the trees are all about to be cut down may not be that appealing." The USDA Forest Service's Center for Urban Forest Research cites research that says 100 mature trees remove 53 tons of carbon dioxide out of the air per year, and 430 pounds of other air pollutants. Other research says tree-filled neighborhoods are safer and more sociable, and trees even lessen cases of domestic violence. "Trees are a resource -- whether in a wood lot, on private property or parkways, they do serve a purpose," Shirmer said. The effect of the major tree loss on wildlife is unclear, Nixon said, and may not be known until most North American ashes are wiped out. Until then, he said, we may not know how integral ash trees have been to the life cycles of other species of insects or birds. Another looming problem is dead ash trees also have a tendency to fall down relatively quickly -- within a year of dying -- softened by disease and having weaker wood than other species. "As these trees die -- and they are pretty brittle to begin with -- they will become standing hazards that put the public and property at risk," Shirmer said. The emerald ash borer war is being waged in hundreds of communities, all of which are making individual decisions about whether to fight to save trees, or let them die and tear them all down. Either way, the cost is high. Treating a tree can cost $100 or more every year, although less expensive treatments are available. Removing, and then replanting, a tree can cost more than $1,000. Since many suburbs have thousands -- or tens of thousands -- of ash trees on public parkways, that puts the price tag for dealing with the emerald ash borer in the millions for many suburbs. Arlington Heights estimates the cost to remove and replace 13,000 ash trees at more than $11 million. 
Schaumburg will spend $9 million over 10 years in a combination of treatment, removal and reforestation. Naperville has budgeted $467,000 to save as many of its 16,000 parkway ash trees as possible. A tree with an infestation caught early enough can be treated with a variety of chemical compounds now on the market. If a municipality chooses not to treat or treats too late, it will still have to pay to remove the infested tree and possibly replace it. "It's a snowball effect," said Robin Usborne, communications manager for Michigan State University's College of Agriculture and Natural Resources. As one of the first infested states, Michigan and its universities have pioneered the research on the emerald ash borer. "The trees start dying, and you have to spend money to treat them or take them down; you have to spend more money to replace them. Where is all the money going to come from?" Usborne said. Although treating can be a less expensive option, not everyone is convinced. "Should we be spending money to try to save a tree that's going to die sooner or later anyway, especially when we don't know for sure that this will even work?" asked Alderman John D'Astice of Rolling Meadows, where officials ultimately decided not to treat most of the 1,700 public trees. And if the drought doesn't ease, Dierich said, treating trees this year might be a waste. The chemical won't make a difference. "With the drought there's not enough water to move the chemical through the tree," she said. "Unless you are watering your tree consistently, you're wasting your money on treatment because the tree won't be able to do anything with the chemical." It's not a one-size-fits-all situation, Dierich said. Every community has different challenges and has to understand all the factors before deciding the best way to approach it. "There's going to be an ongoing battle against this," Shirmer said.
"I hate to use the term 'natural disaster' because it won't take human lives, but when you see a tornado coming, you go into the basement. That doesn't stop the tornado from coming." • Daily Herald staff writers Deborah Donovan and Eric Peterson contributed to this report.
When Is Force Justified?
by Alan Duppler

Editor's Note: This article is presented for educational purposes only, and is not intended to be legal advice. If anyone has questions regarding a specific set of circumstances, he or she should contact a lawyer.

Whether we study judo (as I do), or a style of karate, jujitsu or what have you, we are engaged in learning a system of using physical force. Learning how to use force is the easy part. Things become complicated when we are either compelled or choose to use it against another. That’s when the law becomes involved and we learn that using force is not simply a matter between two individuals. Rather, it is a matter between the two individuals and the society in which they live. This article will first discuss the philosophy behind relevant aspects of the Anglo-American legal system and will then apply that system when discussing circumstances in which force against another may, legitimately, be used.

I am a lawyer. I work with the law every day, and I’m proud of that fact. This may sound strange in the lawyer-bashing age we live in, but it is true. The law impacts everything we do. If you want to function in society then you have to be concerned about the law. The law not only defines who we are; it also regulates how we interact with others. It is the very glue that holds us together as a society. Without the law we would be huddled masses of frightened humanity—unable to function, much less express opinions, object to the actions of our leaders or assert our rights.

Clearly, something so important as the law cannot be irrational or capricious. It has to be grounded in common sense, and it has to reflect the mores and values of everyday people leading everyday lives. Contrary to what some would have you believe, our legal system does just that. Our laws are based on how people live and what they hold most dear. The law is not a trap for the unwary.
The law is a guidebook to what our society values.

When I was in law school, one of my professors put it this way. "Look at the laws of the State of North Dakota," he said (they took up about four feet of shelf space). Then he said, "Look at the U. S. Code" (which took up about sixty feet of shelf space). "The Bible," he continued, "does it all in Ten Commandments." Now, I'm not a preacher. In fact, I'm not certain I'm even a Christian. But, there is a lot of truth in that statement.

The first truth is historical. Our American legal system arose out of the English Common Law. (Thus the term "Anglo-American".) This, in turn, developed after the year 1066 A.D., when the invading Normans imposed their legal system on the natives of the British Isles. The Normans had adopted, or rather, had been forced to adopt, the legal system of the Romans. Roman law in turn came from the Old Testament during the reign of the Emperor Constantine.

The second truth is the fact that all of our laws, all of our regulations, all of the paperwork that keeps guys like me busy have, at their root, only a handful of central concepts. These concepts (so succinctly stated in the Ten Commandments) are what I will call "core values," and they form the heart of our law. Learn them and you will become a lawyer.

What are the core values that control the use of force in modern, urban, industrialized America? There are three of them: (1) you may defend yourself, (2) you may defend another, and (3) you may defend your property. All other uses of force, our society says, are forbidden.

Stated this way, the core values are deceptively simple. Of course you can use force to accomplish these three things. No one wants to see himself or his loved ones attacked, nor does he want to see his things taken. It is this very simplicity, however, that conceals profound truths about our society. A closer examination of these statements is in order. The one word that is common to all three core values is "defend".
That says a lot about us. While aggression may be acceptable in the boardroom, in the courtroom and sometimes even in the classroom, it is never acceptable when using physical force. Many hours of attorneys' time and many gallons of ink have been spent arguing and deciding whether a particular use of force was defensive or offensive in nature. As one can well imagine, the distinction is not always clear or easily made.

Imagine yourself at an automobile dealership. The garage has just performed work on your car, which you feel was not only unnecessary, but a "rip off" to boot. You refuse to pay for the repairs. A heated argument ensues. At some point the shop foreman draws back his fist as if to strike you. In so doing he shifts his weight to his left, or back, foot. Seizing the opportunity you sweep the foreman's right foot with deashi-harai, drop into juji gatame (arm bar) and break his arm. Were you justified in using the force you did?

In all honesty, I can't answer that question with only the facts I've given you. You have the absolute right to defend yourself. If the shop foreman were actually intending to strike you when he drew back his fist, then you could clearly protect yourself from his blow(s). But, without more information, I can't say for sure that he was really going to strike. The question is whether a "reasonable person" would have felt threatened under the circumstances. For example, if the foreman says, "You dirty son-of-a-bitch" as he draws back his fist, the implication is he intends to strike. Defensive force is justified. If, on the other hand, he says, "These damn flies!" then a real question exists as to his intent. Maybe he was really going to strike you, but maybe the flies are the objects of his strike.

The lesson to be learned from this is not alien to martial artists. That lesson is control.
When we seek to perfect our art, we are seeking control: control of our bodies, control of our minds and (in some cases) control of our surroundings. An act of aggression is not an act of control. Rather it is an act which evidences a loss of control. If, in my example above, the shop foreman were really in control of the situation, he would have no need to strike you, the customer. If he drew back his fist it would be to swat those "damn flies." On the other hand, if he were not in control then striking you would be much more plausible. Similar examples can be developed for each of the remaining two legitimate uses of force stated above. In each instance the central question is one of defense (for the Anglo-American legal system) or of control (for the martial artist). These are not separate questions but, rather, different manifestations of the same question. In the end common sense tells us when force can be justified. We are not a nation of bullies. We do not seek to impose our will on others by force. Rather, we all want to get along without conflict while living our lives to the fullest. Jigoro Kano, the founder of judo, expressed it as "Minimum effort, maximum efficiency." In other words, by using the minimum amount of effort we all strive to attain the maximum benefits society has to offer. Using force to attain these benefits is a waste of effort. Force can only be justified if it is used to protect that which we hold most dear: ourselves, our friends and family, and our possessions. Any other use of force is unjustified and, therefore, inefficient. About The Author: Alan Duppler is a graduate of the North Dakota Law School (1977). He then entered government service. In 1981 he was appointed to complete the unexpired term of the Mercer County (North Dakota) State's Attorney. After 10 years as a prosecutor (1990) he joined the United States Department of Veterans Affairs. 
A year later (1991) he was appointed as a Special Assistant United States Attorney, where his primary responsibility is prosecuting crimes occurring at the Fargo VA Medical Center. Duppler is also a black belt judo practitioner and continues to work out with the Gentle Ways Judo Club in Fargo.
<urn:uuid:92b48a3f-058c-49d9-9e2d-611752229b0e>
CC-MAIN-2016-26
http://fightingarts.com/reading/article.php?id=153
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393463.1/warc/CC-MAIN-20160624154953-00056-ip-10-164-35-72.ec2.internal.warc.gz
en
0.957075
1,894
2.59375
3
This afternoon I got an emergency call from a mom. Her daughter (my student) was in the throes of a full-on research paper on the Rwandan genocide of the 1990s and couldn't figure out how to get started. Who can blame her? One thing I've learned as a tutor is: nobody teaches kids how to write any more! It's a real problem.

The problem starts with the thesis statement. Kids don't know how to write one. This is no easy task. A thesis statement tells the reader what you plan to tell them in the body of your paper, but doesn't give away your details or your sources. It must be short, concise and succinct. Most students feel (and some are told) the thesis is a single sentence. Most often, this is not the case. Like a sculpture, the thesis statement must be carefully and artfully molded, crafted into the introductory paragraph that holds it with delicate precision. To write the thesis, the writer must know what he or she plans to convince the reader of. It must grab attention, but not give anything away.

Our students are taught to infuse their writing with exciting and teasing attention grabbers. Because of this, too often, students begin their writing with something like this: "Do you want to know more about African elephants? Well read on to discover facts about this animal." This is a cute opener for a young writer, but won't work for the Rwandan genocide piece. Yet students have a hard time bridging creative writing to a more sophisticated style. Opening statements should be crafted out of facts that hit salient points, while enticing the reader to know more: Over the course of barely three months, close to a million Rwandans were killed in a massive genocide that pitted neighbor against neighbor, husband against wife. The paragraph should be clinched by the thesis statement.
Now, ten years later, Rwanda struggles to pick up the pieces and move into the 21st century by ending poverty, improving infrastructures and becoming a competitive economy in the global community.

The best way to organize a paper is in a logical, sequential manner. Notecards are very helpful for this, provided they are used efficiently. Frequently, students arrive for sessions with notecards that are not effectively organized. Their goal may have been to accomplish the required "35 cards by Friday". Or they may be color coded by source. But the best way to organize notecards is by topic or paragraph.

Once the thesis statement is established, students can be directed to develop topics for each paragraph. The question is: how can I prove my thesis statement? What questions must be answered? What arguments must be made? When this is known, notecards can be labeled by topic. This can be done either by using title headings (Events Leading up to Genocide), an alphabet or number system (#1 cards are all about global response to the genocide) or color coding (orange cards are all about the rebuilding of the country). Organizing cards in this fashion will come in handy for writing the outline and paper.

Notecards should contain the following: relevant facts for the topic, appropriate quotes for the topic, and page numbers. The back of each card should contain the citation of the work (e.g. author's name, publication title, date, etc.). This will be handy when using the cards during the writing process; simply type the quote or facts and page numbers, and then flip the card over for the citation. Use more cards and write less on each card. It keeps each fact relevant (not everything is important enough to include) and helps limit facts and quotes that will be cited to the most powerful ones, making the paper more focused. Limit notecards containing quotes to one quote per card.

Learning to write is a tough skill. Making words flow is not easy.
Using well-organized notecards, writers can sort cards and organize the flow before putting it into writing. Once this is done, a draft can be started. Good writers read their drafts--out loud--to hear how they sound, always editing and improving. Sometimes, more words help the flow (instead of: The violence in Rwanda was widespread--perhaps--The violence in Rwanda was massive and widespread, touching every community and sparing none.) Other times, a more succinct message is appropriate (The global response was a failure.) All messages should be stated as facts, supporting the writer's argument, even when equal facts may exist that support another opinion.

There is no substitute for a broad vocabulary. This is a time for the thesaurus. Some words are better than others (They ratified the new constitution. He denounced the new government.) Stunning is stronger than surprising; ridiculed is stronger than mocked or laughed at.

In-text quotations are tough too. Students need practice with this skill; finding and inserting (and then citing) quotations that match their thought process is a high-level skill. But once learned, it will be a lifelong asset. Students should be careful to choose the most fitting quote that makes the biggest impact (While studies are currently underway to obtain current and specific statistics, the National Reading Panel suggests, "...at least 20% of readers in third grade are reading well below grade level across the nation" (National Institutes of Health, 2001).)

A strong conclusion brings the whole paper together. The conclusion should rephrase the thesis in a direct and conclusive manner, using some details from the paper. Too often, students begin this paragraph with "In conclusion..." Worse, so many students insert the mighty "I" statements (I think the genocide in Rwanda was a horrific event, but I think the country has learned from it). Students must be taught to state their opinions without using the I-statement.
This is hard for kids to learn. Instead, the conclusion might look something like: Despite a massive genocide which tore the country apart, the government of Rwanda has made great efforts to achieve its goal of creating a middle-income country by the year 2020. It should be concisely supported, not only with facts but with conclusions drawn by the writer (This goal, while lofty, is already being acted upon, with Rwanda focusing resources on improving roads, developing tourism and planning for a university system. Real progress, however, will require a redoubling of efforts and resources.)

A great way to practice writing technique is to do it before the major paper is due. Practice with simple, fact-based topics (e.g. All About Dolphins; The Difference Between Lincoln and Washington; Mountain Gorillas in Danger). Use a topic that students find interesting and simple sources (limited to 2 for practice) that offer straightforward information that is easy to read and understand. (A note here about finding reliable sources: a safe bet for students using the internet is to focus on government sources, .org sources and major publications' sites--Time, Newsweek, BBC, etc.)

Of course, it goes without saying that good writing comes from good reading. Thoroughly review all sources and have students read with you to be sure they are comprehending. When reading on the internet, make use of features such as the computer's ability to define unfamiliar words and supply synonyms. Help children practice paraphrasing and summarizing in their own words. These techniques will be good practice for writing. As students grow as writers they learn what a valuable tool writing can be. After all, the pen is mightier than the sword! (Edward Bulwer-Lytton, 1839)
<urn:uuid:b4019e5e-6479-46f2-84f8-0a50a4a1b8da>
CC-MAIN-2016-26
http://thelearninglaboratory.blogspot.com/2012_05_01_archive.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391634.7/warc/CC-MAIN-20160624154951-00118-ip-10-164-35-72.ec2.internal.warc.gz
en
0.951026
1,534
2.921875
3
DOHA: Primary healthcare centres on average receive one heat exhaustion case every month, according to a senior official. The number of cases increases between May and September each year due to high temperatures. However, within one week (in May-June), three centres in the northern region received only two cases, according to Dr Mohamed Aiad, Assistant Regional Director of Operations, Northern Region, Primary Healthcare Corporation. Centres in Umm Salal and Madinat Khalifa treated one case each, while the Al Ghuwariya centre did not receive any. Asked about the average number of cases treated at centres each year, Dr Aiad said in an email interview: "Nil to one case per month per health centre".

Heat exhaustion is when a person experiences fatigue (extreme tiredness) as a result of a drop in blood pressure and blood volume. It's caused by loss of body fluids and salts after being exposed to heat for a long time. Symptoms of heat exhaustion can develop rapidly and include very hot skin that feels 'flushed,' heavy sweating, dizziness, nausea, vomiting and a rapid heartbeat.

"If a person with heat exhaustion is quickly taken to a cool place and given water to drink and excess clothing is removed, they should start to feel better within half an hour and have no long-term complications. However, without treatment, they could develop heatstroke," said Dr Aiad. "Heatstroke is a more serious condition than heat exhaustion. It occurs when the body's temperature becomes dangerously high due to excessive heat exposure. The body is no longer able to cool itself and starts to overheat," he added.

Signs of heatstroke include dry skin, vertigo, confusion, headache, thirst, nausea, rapid shallow breathing (hyperventilation) and muscle cramps. Inadequate fluid intake, working under the sun and working in areas without ventilation can all lead to heat exhaustion.
To avoid heat exhaustion, Dr Aiad advised people to stay out of the sun during the hottest time of the day, particularly between 11am and 3pm. If they have to go out in the heat, they should walk in the shade, apply sunscreen and wear a hat. He also advised people not to leave anyone in a parked car, to avoid extreme physical exertion, and to have plenty of cold drinks while avoiding drinks that contain caffeine and alcohol. Eating cold food, particularly salads and fruits with a high water content, taking a cool shower, bath or body wash, sprinkling water over the skin or clothing, keeping a damp cloth on the back of the neck and keeping the environment cool also help.
<urn:uuid:90ac390d-84fa-4c99-b314-2be08d8a5961>
CC-MAIN-2016-26
http://thepeninsulaqatar.com/news/qatar/288171/keep-off-sun-during-hottest-time-to-avoid-exhaustion
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783400572.45/warc/CC-MAIN-20160624155000-00011-ip-10-164-35-72.ec2.internal.warc.gz
en
0.949293
542
2.640625
3
Google has teamed up with Amazon for a new project. Well, not Amazon, as in the company that sells the Kindle. No, we're talking about the great, untamed wilderness of the Amazon rainforest. In a partnership with the Sustainable Amazon Foundation, Google aims to use its Street View technology to raise awareness of the world's largest rainforest and its important ecosystems. To do this, Google is mapping the byways of the Amazon River. Local residents will pedal Google's camera-equipped trikes across the few roads in the area as well, offering a peek at the different communities and inhabitants that live in isolated villages along the riverfront, most of which we would otherwise never see. Thousands of indigenous people call this region home. The Sustainable Amazon Foundation promotes social, economic, and environmental awareness of the Brazilian state of Amazonas.

Street View in the Amazon team leader Karin Tuxen-Bettman said in a statement that once all the images are uploaded to the Internet, the local culture and beauty of the Amazon will be available to anyone, anywhere in the world. Soon, anyone with a computer and Internet access will be able to explore the Amazon without having to get all the necessary malaria vaccines in advance.

It looks like Google traded the icy tundra of Antarctica for the warm rainforest of the Amazon. Last year Google ventured to Antarctica to capture the physical beauties, as well as the adorable penguins, that inhabit the continent. What area of the world would you like Google to explore next with its Street View cameras?
<urn:uuid:e60ff068-ebe5-4ab9-9d81-a25cf3592b03>
CC-MAIN-2016-26
http://www.geek.com/geek-cetera/google-street-view-is-using-canoes-and-tricycles-to-map-the-amazon-1414455/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392069.78/warc/CC-MAIN-20160624154952-00096-ip-10-164-35-72.ec2.internal.warc.gz
en
0.92957
326
2.59375
3
Before 1941, two famous female pilots named Jacqueline "Jackie" Cochran and Nancy Harkness Love had individually proposed plans to the U.S. Army Air Forces. The proposals asked that women be allowed to fly planes in non-combat missions, such as ferrying aircraft or towing drones and aerial targets, in order to free male pilots for combat. Though both proposals were initially turned down, minds began to change after the U.S. became more directly involved in the war following the attack on Pearl Harbor and it became clear that more pilots were needed.

In September 1942, while Jackie Cochran was in Britain flying for the Air Transport Auxiliary (ATA) program—which had been using female pilots since 1940—General Henry H. "Hap" Arnold, commander of the Army Air Forces, approved a plan for a Women's Auxiliary Ferrying Squadron (WAFS) under the direction of Nancy Love. Cochran returned to the U.S., insisting that women could do more for the USAAF than just ferrying. So another program was instituted—the Women's Flying Training Detachment (WFTD), headed by Cochran herself. In the summer of 1943, the WAFS and WFTD were combined into one single women pilot group, the Women Airforce Service Pilots (WASPs).

More than 25,000 women applied for the WASP program, but fewer than 1,900 were accepted. After four months of military flight training, 1,074 of them became the first women to fly American military aircraft. Though not trained in combat, the women were given much the same instruction as aviation cadets, learning how to recover from any position. They flew newly manufactured planes to military bases, towed targets, and transported cargo. By December 1944, the WASP had delivered 12,650 aircraft to their destinations. Since the WASP program was considered a civil service, the WASPs were not given military benefits.
The 38 women who perished in accidents during training and on active duty were sent home at their families' expense and weren't allowed to have an American flag over their coffins. In September 1943, the first bill for militarization of the WASP was introduced in the House of Representatives. Cochran and Arnold both wanted a separate corps headed by a female colonel. But the bill was defeated, as were the subsequent attempts to give the WASPs military status. In the end, Cochran essentially asked that the question be resolved by either granting military status or by disbanding the program. So it was announced that the program was to be disbanded by December 20, 1944.

In 1977, the previously classified, sealed documents explaining the WASPs' services to the country were unsealed, following the incorrect statement that the Air Force was then training the first women pilots ever to fly American military aircraft. With the support of Senator Barry Goldwater, the WASPs lobbied again for recognition, which was granted to them in legislation signed by President Jimmy Carter in the form of a World War II Victory Medal for each WASP. An American Theater Ribbon/American Campaign Medal was also granted to those WASPs who had served for more than one year. Then on March 10, 2010, President Barack Obama and the United States Congress granted the WASP program the Congressional Gold Medal for its service to the nation, and the roughly 300 surviving WASPs came to the U.S. Capitol to receive the medal.

For more on how the WASP was started, their wartime efforts, and a list of members, check out this Wikipedia page. You can also check out this page on Fold3 for facts and photos of Elizabeth M Magid, a WASP. And there is more to see on Fold3 regarding the WASPs; you can use this search to find more photos and stories. This page on pbs.org is also a great resource for first-hand accounts of the WASP.
<urn:uuid:84f23664-09db-4124-b1ed-ae252af5daa2>
CC-MAIN-2016-26
http://spotlights.fold3.com/2013/05/08/women-airforce-service-pilots/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395679.92/warc/CC-MAIN-20160624154955-00135-ip-10-164-35-72.ec2.internal.warc.gz
en
0.981203
807
3.96875
4
November 16, 2012 The Dust Bowl, a film by noted filmmaker Ken Burns, will premier this Sunday and Monday, November 18 and 19 on PBS stations nationwide. On Sunday, the first 2-hour film, “The Great Plow-Up” will air from 8:00 to 10:00 pm ET. On Monday evening, the second 2-hour film, “Reaping the Whirlwind” will air during the same times—8 to 10pm ET. According to PBS, the film “chronicles the worst man-made ecological disaster in American history, in which the frenzied wheat boom of the ‘Great Plow-Up,’ followed by a decade-long drought during the 1930s nearly swept away the breadbasket of the nation[…] It is also a morality tale about our relationship to the land that sustains us—a lesson we ignore at our peril.” The origins of USDA’s Natural Resources Conservation Service (NRCS) are deeply rooted in the Dust Bowl and the soil and water erosion issues that were prevalent in the 1930s. President Franklin Roosevelt created the Soil Conservation Service, NRCS’s predecessor, in 1935 to help farmers and ranchers overcome the devastating effects of poor land management decisions and drought, especially in the Midwest and Southern Plains. Since then, USDA and Congress have developed a diverse set of conservation tools, including programs like the Conservation Stewardship Program (CSP), which supports progressive and comprehensive conservation on agricultural lands across the country. CSP is for working farms, built on the belief that we must enhance natural resource and environmental protection at the same time we produce profitable food, fiber and energy. Soil erosion is a major resource concern within CSP, which offers soil conservation practices such as cover crops, residue management and conservation till, and resource-conserving crop rotations. One of the most important conservation programs administered by USDA is the highly-erodible land and wetland conservation compliance mechanism. 
HEL compliance requires that, if a farmer chooses to receive agricultural program benefits such as direct payments on highly erodible land, they must work with NRCS to develop and implement a conservation plan to conserve the soil. Former U.S. Secretary of Agriculture Dan Glickman recently noted, “farmers managing more than 140 million acres of highly erodible land have implemented practices that reduced the amount of soil washed into streams from ‘highly erodible’ lands by 40 percent. This accounts for more than 300 million tons of soil saved per year. Annualized since 1985, that is about 8 billion tons of soil saved on the farm, allowing it to remain a productive asset for growing our nation’s food and fiber, rather than washing into our rivers, lakes and streams, or blowing away with the wind.” There is a very real possibility that, despite the severe drought and flooding that occurred this summer and fall, upcoming federal legislation will undermine both conservation compliance and the CSP, among other critical conservation programs. The 2008 Farm Bill expired on September 30, 2012 and took with it the Conservation Reserve Program, Wetlands Reserve Program, Grasslands Reserve Program, and Chesapeake Bay Watershed Initiative. These programs have no authority to continue in 2013. While CSP retains authority in FY 2013, it was unintentionally stripped of funding for new enrollments by the FY 2013 continuing appropriations resolution. Moreover, unlike the Senate version of the 2012 Farm Bill, the version passed by the House Agriculture Committee does not reattach conservation compliance requirements to federal crop insurance subsidies, which make it easier for producers to plant crops on risky, marginal lands. If Congress is to avoid another dust bowl for farmers and ranchers across the country, it must do more than reauthorize or extend the farm bill before the end of the calendar year. 
In the face of more frequent and severe weather events and increasing pressures on the land, now is the time for Congress to reaffirm its commitment to farmer-led natural resource conservation.
<urn:uuid:34050c50-1e30-4a2d-a25d-39fd25b3a63e>
CC-MAIN-2016-26
http://sustainableagriculture.net/blog/ken-burns-dust-bowl-film/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398075.47/warc/CC-MAIN-20160624154958-00169-ip-10-164-35-72.ec2.internal.warc.gz
en
0.94035
828
3.421875
3
Poof! A cloud of dust splayed around a young star has astonishingly disappeared into thin air in just three years' time. Scientists are puzzled at how this is possible in such a short span of time. Maybe the present ideas of planet formation are not quite right! Could it be that planet formation is quicker than anticipated? Or maybe the stars anchoring the planets are much larger in number?

"The most commonly accepted time scale for the removal of this much dust is in the hundreds of thousands of years, sometimes millions," said the perplexed Inseok Song, assistant professor of physics and astronomy at the University of Georgia. "What we saw was far more rapid, and has never been observed or even predicted. It tells us that we have a lot more to learn about planet formation."

Data that surveyed more than 96 percent of the sky in 1983, retrieved from the Infrared Astronomical Satellite, or IRAS, was thoroughly scrutinized. It shows that the Scorpius-Centaurus stellar nursery hosts the star called TYC 8241 2652 1. The star was initially enveloped by a cloud of dust that was recognized by its characteristic infrared energy radiation. Inspection of the star in 2008, at the Gemini South Observatory with a mid-infrared imager, showed an identical pattern. An astounding observation was noted a year later, when the examination was repeated: the infrared emission had fallen by nearly two thirds. In 2010, further examination by NASA's Wide-field Infrared Survey Explorer showed that the dust had almost vanished.

"It's as if you took a conventional picture of the planet Saturn today and then came back two years later and found that its rings had disappeared," exclaimed Ben Zuckerman of UC Los Angeles.
Song stated his opinion, saying, "If what we observed is related to runaway growth, then our finding suggests that planet formation is very fast and very efficient. The implication is that if the conditions are right around a star, planet formation can be nearly instantaneous from an astronomical perspective."

The star is nearly 450 light years away, which is a steep distance over which to carry out observational research. Another explanation for the disappearance of the dust could be that the star soaked it up on its own, or that the dust was ejected entirely from the star's orbit. The reality of this finding is that such dust clouds are fleeting and transitory. It also indicates that there may be a large number of undiscovered planets. As Song comments, "Many stars without any detectable dust may have mature planetary systems that are simply undetectable."
<urn:uuid:85bcb19c-97b3-4a0f-8b56-4a458ceb7284>
CC-MAIN-2016-26
http://themoneytimes.com/featured/20120705/astonishing-disappearance-star-dust-belt-has-scientists-bamboozled-id-1701711886.h
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403826.29/warc/CC-MAIN-20160624155003-00023-ip-10-164-35-72.ec2.internal.warc.gz
en
0.974137
531
3.78125
4
The long tail of life APR 23 2010 There are a ridiculous number of microbes in the Earth's oceans. During an 11 month study in 2007, scientists sequenced the genes of more than 180,000 specimens from the Western English Channel. Although this level of sampling "far from exhausted the total diversity present," they wrote, one in every 25 readings yielded a new genus of bacteria (7,000 genera in all). That's genus, not species. Kevin Kelly translates: This suggests there is a long tail of life in bacteria, with a few species super-abundant, but many many species with very thin populations. At the far end of the tail there may be a billion species with only a few individuals. [...] And like other kinds of long tails, the sum of all these small bits total up to exceed the sum of individuals in the most popular species. As the microbiologists involved in the Census of Marine Life like to say, this survey reveals life's "hidden majority."
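Kelly's point about the tail outweighing the head can be sketched with a toy model (my own illustration, not from the study): assume species abundances follow a Zipf-like rank-abundance curve, where the k-th most common species has abundance proportional to 1/k. Even with a single super-abundant species at the top, the many thin populations in the tail quickly add up to more individuals than the leader.

```python
# Toy model of a "long tail" species-abundance distribution.
# Assumes a Zipf-like curve (abundance of rank k = top / k**exponent);
# the 7,000 genera figure is borrowed from the survey, the abundances
# are invented for illustration.

def zipf_abundances(n_species, top_abundance, exponent=1.0):
    """Abundance of the k-th ranked species: top_abundance / k**exponent."""
    return [top_abundance / (k ** exponent) for k in range(1, n_species + 1)]

abundances = zipf_abundances(n_species=7000, top_abundance=1_000_000)
head = abundances[0]        # the single most abundant species
tail = sum(abundances[1:])  # all the rarer species combined

print(f"most abundant species: {head:,.0f} individuals")
print(f"all rarer species combined: {tail:,.0f} individuals")
print(f"tail exceeds head: {tail > head}")
```

With these made-up numbers the tail sums to several times the head, mirroring the "hidden majority" the microbiologists describe; because the harmonic series grows without bound, adding more rare species only widens the gap.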
<urn:uuid:a78300f5-c9bc-4be8-ae69-5bedb10875d0>
CC-MAIN-2016-26
http://kottke.org/10/04/the-long-tail-of-life
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395546.12/warc/CC-MAIN-20160624154955-00049-ip-10-164-35-72.ec2.internal.warc.gz
en
0.92147
203
3.28125
3
Sesame Street has done it again. They have created a fun take-off of Carly Rae Jepsen's song, "Call Me Maybe", where Cookie Monster asks people to share their cookies with him. Won't you share your cookie? Watch this fun video then check out some of the great places for preschoolers to learn and play that the library has bookmarked for you on our children's page. There are many fun videos to watch and "mouseless" games to try.

Johnny Appleseed was actually born John Chapman on September 26, 1774. So why is he called Johnny Appleseed? As the pioneers were starting to move west he traveled just ahead of them and planted thousands of apple seeds so they would grow into apple trees that could provide food for the settlers as they arrived. To celebrate his birthday try these rhymes and crafts and create your own fun with apples:

"Eat an Apple"
Eat an apple; (bring hand to mouth)
Save the core. (cup both hands together)
Plant the seeds (bend down and touch hands to ground)
And grow some more! (stand up tall and extend both arms up and out)

Apple Prints: Cut an apple in half and let it dry for an hour or so. Then lightly coat the cut side with paint and stamp on a piece of paper. Try some different colors - red, yellow or green for different kinds of apples.

Thumbprint Apple Trees: Make a tree out of construction paper or draw one with markers. Then use a washable red ink or paint pad to press your thumb in and make thumbprint apples on the tree.

Like dystopian fiction? Try Mark Peter Hughes' newest book, A Crack in the Sky (book 1 in the Greenhouse Chronicles series). If you enjoyed City of Ember, you'll love this sci-fi/mystery/conspiracy novel.

Give your baby or young child the gift of a Bright Beginning. The parent is the child's most important teacher. Find out how you can enhance your child's ability to achieve in school long before they start school.
Jefferson County Public Library partners with Colorado Bright Beginnings to provide early learning tools for ages birth to 36 months. Free bags containing such things as a local community resource guide, learning games to play with your baby or toddler, a board book, plus much more are available simply by attending a 30-minute class at the library. Classes and materials are provided for three levels according to your child's age: birth to 12 months, 12 to 24 months, and 24 to 36 months. Two of the JCPL libraries will be presenting classes this fall. There will be other classes presented throughout the year. Watch for signs announcing times in the Children's room or ask your children's librarian if they will be having a class soon.

Belmar Library:
Mon., Oct. 1, ages 24 to 36 months, 10:30 a.m., just after Toddler Time
Mon., Oct. 8, ages 12 to 24 months, 10:30 a.m., just after Toddler Time
Thur., Oct. 11, ages birth to 12 months, 10:30 a.m., between Baby Times (Expectant mothers who are due within 2 months are welcome to attend this session)

Lakewood Library:
Sat., Oct. 20, ages birth to 12 months, 10:50 a.m., just after Baby Time (Expectant mothers who are due within 2 months are welcome to attend this session)
Sat., Nov. 3, ages 12 to 24 months & ages 24 to 36 months, 10:50 a.m., just after Baby Time
Tues., Nov. 6, ages birth to 12 months, 10:50 a.m., just after Baby Time (Expectant mothers who are due within 2 months are welcome to attend this session)
Tues., Nov. 13, ages 12 to 24 months, 10:50 a.m., just after Baby Time
Tues., Nov. 20, ages 24 to 36 months, 10:50 a.m., just after Baby Time

Have you used Colorado Bright Beginnings materials before?

Arrrrgh, and avast ye landlubbers, what's so special about September 19? Well, shiver me timbers, it's International Talk Like a Pirate Day. In fact, THIS year is the 10th anniversary! Pirates, both real and fictional, have been around a long, long time - did you know that Julius Caesar was captured by pirates?
Supposedly they asked for 30 gold coins in ransom, and he said he was worth 50! Blackbeard was known as the scourge of the high seas in the early 1700s. In Treasure Island, Robert Louis Stevenson brought to life Long John Silver, generally considered the first well-known fictional pirate. This classic adventure tale includes a stowaway, hijacked ships, buried treasure, a mutiny, and more - get a copy at the library and read it for yourself! Remember Peter Pan - and his enemy, the pirate Captain Hook? The original story of Peter Pan was written by J. M. Barrie in 1911.

Bob Raczka gets us in the mood for cooler days with Who Loves the Fall? and Fall Mixed Up, where the delights of autumn are described in mixed-up verse and illustrations, and the reader is challenged to uncover the errors. After reading one of our many stories and nonfiction books about the season, hopefully you'll be ready for some fun in the leaves! Do you have any favorite books or activities your family looks forward to in the fall? Share your ideas on our blog.

As soon as toddlers learn to toddle around, they're captivated with exploring their world and they are constantly on the go! That's why the Arvada Library holds a monthly Toddler Storytime and Play Program, geared towards children 18 months to three years of age. Instead of sitting quietly for a traditional storytime, toddlers and their caregivers participate in interactive cut-and-tell stories, play games together and engage gross motor skills by using scarves, balls, shakers and other learning props. The pace of this interactive program is geared towards a short toddler attention span, and multiple activities keep the wee ones hopping for 30 minutes. If storytimes with your toddler are more frazzling than fun, the Toddler Storytime and Play program may be the program for you! Sign up today by visiting the Arvada Library Children's Information Desk, or by calling the Arvada Library at 303-235-JCPL (5275).
Each program is from 11:00 to 11:30 a.m. Here are the program dates for the remainder of 2012:
Friday, September 14, 2012
Friday, October 12, 2012
Friday, November 9, 2012
Friday, December 7, 2012

Has your toddler ever acted up in public? Got a funny, silly, or perhaps a downright embarrassing story to share? Leave a comment below and make another mom's day!

Hey kids - was a library card on your school supply list?! JCPL is joining libraries across the nation in celebrating National Library Card Sign-up Month by reminding parents to get the ultimate school supply for their children and encouraging everyone to get a library card. A library card gives patrons access to more than 1.3 million items in JCPL's catalog, including books, eBooks, audiobooks, magazines, DVDs, music, online databases, free programs, free admission to cultural institutions, free computer classes and free Wi-Fi at all locations. "The library is an important cornerstone of every community," said JCPL Executive Director Pam Nissler. "It plays a vital role in the development and education of children, creating opportunities for lifelong learning, and provides resources that many would not otherwise have access to. We encourage everyone, no matter their age, to get a library card and explore what we have to offer." To obtain a Jefferson County Public Library card, visit any of its 10 locations or apply online at http://jeffcolibrary.org/about/card.html. All that is needed is a photo ID and proof of current address. For students age 17 and younger, a parent's or legal guardian's permission is required.

Visit our booth at the Summerset Festival at Clement Park on September 15th and 16th! You can come and paint with us 10 AM to 2 PM on Saturday and 11 AM to 3 PM on Sunday. Clement Park is located next to the Columbine Library. Volunteers will be on hand to help you sign up for a library card.

Have you ever wondered what it would be like to be a knight? Or a resident of ancient Greece?
Or to have lived during the golden age of pirates? JCPL has a marvelous series of Interactive History Adventure books that allow you to put yourself into an adventure and choose the path you will take through history! These 13 books are patterned after the classic fictional Choose Your Own Adventure series, but they are historically accurate and filled with great information about the lives and times they describe. In this one, you, the reader, travel on the Titanic in 1912, and experience the ship's sinking from the perspective of a first-class passenger, a third-class passenger, or a crew member, depending on which pages you choose to turn. The fun of these books is that you can read them many times, and have a different adventure each time! They are shelved in our nonfiction section, and are written for children in grades 3-7. If this sounds like your kind of book, you can find them in our catalog by entering "You Choose" as your title, and you will be given three sets of books to choose from: You Choose Books, Historical Eras, and Warriors. Each book also offers you access to internet sites with additional facts and activities about the time period you are interested in. They can be accessed through www.Facthound.com. You can search this website using the ID codes provided at the back of each book or by the general subject. As always, your Children's Information Services staff will be happy to help you locate these and other fascinating nonfiction books! Happy travels through time!

Learning to read begins at home long before children start school. You can help your child by talking, singing, reading, playing and writing.

Talking--anytime, anywhere. Listen, answer questions, add new information and listen some more! In the tub, the car, the store, waiting in line, doing chores and at meal time.

Singing--helps children hear the distinct sounds that make up words. Sing every chance you get. Clap, bang pots, jump and twirl.
Check out music CDs from the library or listen online. Try www.freesongsforkids.com or www.speakaboos.com/songs.

Reading--the single best way to help children develop the essential skills needed to read. Create a comfortable space for you to read together. Make sure the books are reachable. Encourage the child to pretend read to you or a stuffed animal. Remember their questions and get books about their interests at the library.

Playing--children learn how to express themselves and the meaning of words by playing. With simple props, some imagination and encouragement, your child will turn a box into a race car and a sock into a puppet. Provide plenty of opportunities for your child to play.

Writing--reading and writing go together. Writing helps children learn letter names and sounds. Make it easy. Set up a space with pencils, crayons, or markers of different sizes and unlined paper. When writing letters, start with favorite words such as their name or "Mom" and "Dad". Show them your writing. Let them hold the grocery list while you shop. Write your child a note and leave it in the writing area. Display their writing for all to see.

Children's book author Aliki turned 83 yesterday! She was born in Wildwood Crest, New Jersey in 1929. Aliki began drawing when she was in preschool. In kindergarten she displayed her very first portraits, one of herself and her family, and another of Peter Rabbit and his family. She attended the Philadelphia Museum School of Art, where she graduated in 1951. After this, she painted murals, taught classes in art and ceramics, and started her own greeting card company to make money. Soon after, she began traveling in Europe, where she met her husband and moved to Switzerland with him. After learning that William Tell was Swiss, she visited the place where he lived and was inspired to write her very first children's book, The Story of William Tell, which was published in 1960. You can read more about Aliki on the Something About the Author database.
By: Katie Burch, J.D. Candidate '14 | August 8, 2013

Fracking is a process whereby water, sand, and chemicals are pumped into a horizontal well so as to "fracture" the underground shale, allowing natural gas to flow freely. The process has seen a tremendous uptick in recent years, with over one million wells drilled in the last sixty years. Some economists call it the next "big thing." To be sure, the process has contributed to reliably low natural gas prices for consumers. In 2008, natural gas prices hovered around $12. Since the rise in hydraulic fracturing, prices have fallen to just over $4 in 2011. But as the process continues to develop, and prices continue to fall, legal questions regarding hydraulic fracturing are now a hot-button topic in many circles.

As a general rule, the Environmental Protection Agency regulates just about everything that could affect underground drinking water supplies. In 2005, however, the EPA successfully lobbied Congress for an exemption from the Safe Drinking Water Act for fracking. The result? States, which don't have nearly the kind of resources of the EPA, are now charged with the duty to implement any regulations to protect the sanctity of the local water supplies. In 2009, Lee Fuller, vice president of government relations for the Independent Petroleum Association of America, told National Public Radio, "We have no evidence that hydraulic fracturing is causing problems." But is it too soon to tell?

Hollywood chimed in on the debate with the release of Promised Land, a film about a small town forced to weigh the economic benefits of fracking against its potential health risks. In one scene, a man lights a model farm on fire to simulate what happens as a result of chemicals from hydraulic fracturing entering the water supply.
Other unconfirmed real-world stories have been described to media sources of foul-smelling water, oily film on top of water, and even an Ohio couple's home blowing up as a result of gas from their water well filling their basement. But now researchers from the National Institute for Occupational Safety and Health are being forced to weigh in on the debate. In March of 2013, researchers visited 11 fracking sites in five states: Arkansas, Colorado, North Dakota, Pennsylvania and Texas. At each site, researchers found high levels of silica, with 79% of the collected samples exceeding the recommended exposure limit. The prolonged inhalation of silica has been directly linked to certain forms of lung cancer. The unexpected finding has triggered demands for stricter regulation, which has gas companies warning of increases in natural gas prices.

So, in terms of legal remedies for all parties affected, what options are available? In an article published in the UALR Law Review, author Erica J. Fitzhugh lays out the legal background for understanding some of the problems faced by landowners in areas where hydraulic fracturing is already in place. In it, she sets forth a compelling framework for understanding some legal remedies currently available to those affected landowners.
Does the position of the sun in a hollow earth scenario affect the way light scattering would "color" the sky? My understanding is that the reddish-orange color during sunrise/sunset is caused by the sun being at a more oblique angle, in contrast to the standard blue when the sun is fully up. My intuition says that the sky's color wouldn't change much, or if it did, it would become a washed-out version of whatever it normally would be, i.e., blue on Earth. The most dramatic coloring that I could imagine would be a gradient from, say, blue to red as you look from the center of the sky to the horizon, given an Earth-colored sun and atmosphere. I doubt the gradient scenario is possible, but it would be neat if it was. Of course a true hollow earth situation isn't possible. I'm mostly interested in how light scattering works when the light source is placed in the same setup.
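A quick back-of-envelope check of the premise (my own sketch, not part of the question): Rayleigh scattering strength scales roughly as 1/wavelength^4, which is why the overhead sky looks blue, while a long, oblique air path removes the blue and leaves transmitted sunset light red. In a hollow earth, the air path to a central sun changes far less with viewing angle, which is exactly why the questioner expects muted color changes.

```python
# Illustrative only: Rayleigh scattering intensity scales as 1/wavelength^4.
# The wavelengths (in nanometres) are rough values for blue and red light.
def rayleigh_ratio(short_nm: float, long_nm: float) -> float:
    """How much more strongly the shorter wavelength is scattered."""
    return (long_nm / short_nm) ** 4

# Blue light (~450 nm) scatters several times more strongly than red (~650 nm),
# so a thick enough air path visibly reddens the light passing through it.
print(round(rayleigh_ratio(450, 650), 2))
```

The ratio of a few means an atmosphere only tints light noticeably when the path length through it varies a lot, as it does at an Earth sunset but would not for a centrally placed sun.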
Polyommatus icarus butterflies in the British Isles: evidence for a bottleneck

De Keyser, Rien; Shreeve, Tim G.; Breuker, Casper J.; Hails, Rosemary S.; Schmitt, Thomas. 2012. Polyommatus icarus butterflies in the British Isles: evidence for a bottleneck. Biological Journal of the Linnean Society, 107 (1). 123-136. 10.1111/j.1095-8312.2012.01925.x

Phylogeographical research has revealed several paradigm patterns of postglacial range expansion from the Mediterranean peninsulas to more northern parts of Europe. These range expansions have consequences for the genetic constitution of populations. Although many studies have been performed in mainland Europe, the colonization history of the British Isles is relatively poorly studied; the genetic consequences of the last glacial readvances and the climate optimum conditions, as well as the implications of the recent climatic conditions on the population genetic structures, are little understood. Therefore, we selected the common blue butterfly Polyommatus icarus as a model species for understanding more generally the colonization patterns of the British Isles and the genetic dynamics on these islands. Allozyme analyses of this butterfly show a rather high genetic diversity over continental Europe without major genetic differentiation. The situation on the British Isles is completely different. Here, populations show a much lower genetic diversity compared to mainland Europe. The genetic constitution is well differentiated from that observed on the European mainland, and the genetic differentiation among populations in Britain is stronger than at the European scale. These results support the hypothesis that a relatively cold-tolerant species such as the common blue could have colonized the British Isles early during the late glacial period and survived the last glacial readvances in small refugia in the South.
The retraction of this species into small isolated populations could have caused the genetic impoverishment found. The subsequent forest climax during the climate optimum possibly restricted further expansion of this early succession species to small pockets all over the British Isles, resulting in the genetic patchwork that is still observed. Additionally, the relatively cool and rainy conditions on these islands might have caused bottlenecks, possibly reinforcing these genetic patterns.

Item Type: Publication - Article
Digital Object Identifier (DOI): 10.1111/j.1095-8312.2012.01925.x
Programmes: CEH Topics & Objectives 2009 - 2012 > Biodiversity > BD Topic 1 - Observations, Patterns, and Predictions for Biodiversity
Additional Keywords: allozyme electrophoresis, biogeography, common blue, postglacial range expansion, phylogeography
NORA Subject Terms: Ecology and Environment
Date made live: 12 Oct 2012 15:37
We also have an inequalities calculator that can graph inequalities on a number line. Use it to check your answers.

An inequality is a relationship between two quantities that are not equal. The symbols used for inequality are:
> means 'greater than'
< means 'less than'
≥ means 'greater than or equal to'
≤ means 'less than or equal to'

In equations, one side is equal to the other side. In linear inequalities, one side is bigger than, smaller than, or equal to the other side. A linear equation in one variable has only one solution. An inequality in one variable has a set of possible solutions.

Example: Given that x is an integer, state the possible integer values of x in the following inequalities.
a) x > 4
b) x ≤ –3

Solution:
a) x is greater than 4: 5, 6, 7, 8, …
b) x is less than or equal to –3: –3, –4, –5, –6, …

We can represent a linear inequality in one variable on a number line. We use the following symbols in the representation. A small circle is used for < and > to indicate that the number is not included. A filled-in circle is used for ≤ and ≥ to indicate that the number is included. A line with an arrow indicates that the line continues to infinity in the direction of the arrow.

Example: Represent each inequality on a number line.
a) x ≤ 0
b) x > 2
c) x < 1
d) x ≥ 1

Graphing Inequalities on a Number Line.
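The worked answers above are easy to check mechanically. Here is a small sketch (the function names are my own, not from the lesson) that enumerates the integer solutions of a one-variable inequality over a finite window of the number line:

```python
import operator

# Map inequality symbols to their comparison functions.
OPS = {">": operator.gt, "<": operator.lt,
       ">=": operator.ge, "<=": operator.le}

def integer_solutions(symbol: str, bound: int, lo: int = -10, hi: int = 10):
    """Integers x in [lo, hi] satisfying `x symbol bound`."""
    return [x for x in range(lo, hi + 1) if OPS[symbol](x, bound)]

print(integer_solutions(">", 4))     # 5, 6, 7, ... (the set continues past hi)
print(integer_solutions("<=", -3))   # ..., -5, -4, -3
```

The window [lo, hi] stands in for the arrow on the number line: the true solution set continues to infinity in that direction.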
Hilbert's 10th Problem

At the 1900 International Congress of Mathematicians, held that year in Paris, the German mathematician David Hilbert put forth a list of 23 unsolved problems that he saw as being the greatest challenges for twentieth-century mathematics. Hilbert's 10th problem, to find a method (what we now call an algorithm) for deciding whether a Diophantine equation has an integral solution, was solved by Yuri Matiyasevich in 1970. Proving the undecidability of Hilbert's 10th problem is clearly one of the great mathematical results of the century.

This book presents the full, self-contained negative solution of Hilbert's 10th problem. In addition, it contains a number of diverse, often striking applications of the technique developed for that solution (scattered previously in journals), describes the many improvements and modifications of the original proof since the problem was "unsolved" 20 years ago, and adds several new, previously unpublished proofs. Included are numerous exercises that range in difficulty from the elementary to small research problems, open questions, and unsolved problems. Each chapter concludes with a commentary providing a historical view of its contents. An extensive bibliography contains references to all of the main publications directed to the negative solution of Hilbert's 10th problem, as well as the majority of the publications dealing with applications of the solution.

Intended for young mathematicians, Hilbert's 10th Problem requires only a modest mathematical background. A few less well known number-theoretical results are presented in the appendixes. No knowledge of recursion theory is presupposed. All necessary notions are introduced and defined in the book, making it suitable for a first acquaintance with this fascinating subject.

Yuri Matiyasevich is Head of the Laboratory of Mathematical Logic, Steklov Institute of Mathematics, Russian Academy of Sciences, Saint Petersburg.
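To see what the negative solution rules out, note that the easy half of the problem is mechanical: one can always search for integer solutions and halt if one turns up. A hedged sketch of that semi-decision procedure (my own illustration, not from the book):

```python
from itertools import count, product

def search(p, nvars, max_radius=None):
    """Search integer tuples in growing boxes for a zero of p.

    Halts with a solution if one exists within max_radius. Returning
    None says nothing about solvability: by Matiyasevich's theorem, no
    algorithm can correctly decide the 'no solution' case in general.
    """
    for r in count():
        if max_radius is not None and r > max_radius:
            return None
        for xs in product(range(-r, r + 1), repeat=nvars):
            if p(*xs) == 0:
                return xs

# x^2 + y^2 = 25 has integer solutions, e.g. (+/-3, +/-4) and (+/-5, 0):
print(search(lambda x, y: x*x + y*y - 25, 2, max_radius=10))
```

The asymmetry the sketch exhibits, solvable instances are eventually confirmed while unsolvable ones make the unbounded search run forever, is exactly why Hilbert asked for a decision method and why its nonexistence is a deep result.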
About the Author

Yuri Matiyasevich is Head of the Laboratory of Mathematical Logic, Steklov Institute of Mathematics, Russian Academy of Sciences, Saint Petersburg.
An application security technique known as code signing is gaining importance as the Apple and Android mobile distribution centers require developers to provide the apps they write with a stamp of approval. Code signing indicates that "you know where the code came from and it hasn't been corrupted," said August Detlefsen, a security consultant for AppSec Consulting in San Jose, CA. "The purpose is to basically guarantee that when you get some code, you know who it's from," said Frank Kim, principal of security consultancy ThinkSec and curriculum lead for application security, SANS. "It's to give you some level of trust."

Mobile application distributors use code signatures to help prevent malicious code from being distributed among mobile devices. "Both iOS and Android enforce the fact that your code must be signed in order to distribute it through the App Store or Google Play Store, so in order to get your software out there, you must sign it so that at least the identity of the person who created the code is verified," Detlefsen said.

Apple goes a step further and adds its signature to the applications distributed via its App Store. Before any code runs on an iOS device -- assuming it hasn't been jailbroken -- the device verifies the signatures. This helps ensure the code has not been modified. Developers who submit code for distribution via Apple's App Store don't have to be concerned with the details of code signing. "When I register for the iOS development program, it's pretty straightforward," Kim said. "Apple makes the process as seamless as possible."
In other scenarios, the process of code signing is a bit more involved. "If you're developing other types of software, server-side apps, or those distributed to enterprise customers in a different way, then it's more cumbersome because the infrastructure is not there," Kim said. Code is signed using public key cryptography. The process begins with the generation of a cryptographic hash. This is done by running the source code or compiled executable through a one-way function that calculates a checksum based on the bits in the code, Detlefsen explained. The resulting cryptographic hash is unique and non-reversible. The hash is sent through another cryptographic function along with a unique key known only to the user, resulting in a signature. It is a short alphanumeric string that is associated with the code. A public key, associated with the private key but freely sharable, can be used to verify the code is signed with the private key by running the signature and the corresponding public key though a signature verification function. Public and private keys can be generated at no cost using one of the many key generator tools that can be found online, Detlefsen said. However, these keys do not offer verification that you are who you say you are. After all, Detlefsen pointed out, "Just because the code is signed doesn't mean that the [developer of that code] knows what they're doing." An alternative is to purchase keys from a certificate authority, like VeriSign or DigiCert. These companies validate the identities of their customers. Information such as the signer's name and organization is included with the code signature, and can be verified with the certificate authority. There are no risks involved with the code signing process itself. However, the private key must be kept private. 
If an attacker were to obtain the private key, he could modify the code and sign it with the private key, leading people to believe the code came from a trusted source when in fact it did not. While developers benefit from signing their own code, they also benefit from the signatures on the code they use. "A lot of developers use third-party code and open source libraries. If you're building significant apps that require security, you should also check the authenticity of the code you're using," Detlefsen said. An attacker could insert malicious code into an open source library. "Code signing provides one way of knowing that the code you're downloading is verified to be the original and hasn't been tainted in some way," he said. "Before you run it, verify the signature." But not all code is signed in the first place. "Code signing is becoming more well-known and practiced because these distribution centers are requiring it. But as far as code you download over the Internet, or let's say there's an applet in your website or a flash app on a website, you might not know where it came from or whether it was something the original developers put there," Detlefsen said.
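The integrity half of the process described above is easy to demonstrate with the standard library. The sketch below (my own simplification) shows only the one-way checksum step; a real code signature additionally encrypts this digest with a private RSA or ECDSA key, which requires a cryptography library and a certificate chain:

```python
import hashlib

def digest(code: bytes) -> str:
    """One-way cryptographic hash over the bits of the code."""
    return hashlib.sha256(code).hexdigest()

original = b"print('hello from a signed app')"
published = digest(original)  # distributed alongside the code

# A recipient recomputes the digest and compares it before running:
assert digest(original) == published               # untouched code verifies
assert digest(original + b" # evil") != published  # any modification is caught
```

On its own a bare hash only detects corruption; it is the private-key signature over the hash, plus a certificate authority vouching for the key's owner, that adds the identity guarantee the article discusses.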
On June 9 and 10, a conference celebrating the 160th anniversary of the Evelina de Rothschild School’s establishment in Jerusalem took place. The longevity of this school, created for Jewish girls during a period when the Yishuv was struggling with serious issues of poverty, is noteworthy. The reputation of this institution has benefited from the presence of a few outstanding headmistresses. The life of one of them is highlighted in Laura S. Schor’s recently published book The Best School in Jerusalem: Annie Landau’s School for Girls, 1900-1960 (Brandeis University Press, 2013). In the conference’s opening session, Schor described the uniqueness of Annie Landau, the devoted headmistress who shaped the school and its policies as of 1900. Five years after the Rothschild family turned management of the school over to the Anglo-Jewish Association, Landau was hired as an English teacher and sent to Jerusalem. The choice was apt, for despite this teacher’s aristocratic bearing, she had realistic, intelligent and assertive means of dealing with real issues. When the none-too-successful principal who was in charge when Landau arrived was unable to continue in her role, the association passed the baton to the young and talented teacher, which turned out to be a wise decision. The new headmistress had many challenges to overcome: a crowded and non-hygienic building, poor attendance rates, no clear system of grading or promotion from grade to grade, apathetic teachers, disorganized school financial accounts and nonexistent discipline, among other things. One marvels that she did not despair and return to England. Instead, this young woman decided to win the support of her teachers and invented her own methods to carry out her plans. Landau bombarded them with her criticism of the situation while explaining her vision and goals. She emphasized the importance of lesson plans and disciplining the girls while educating them; self-improvement was to be encouraged. 
Once she had a cadre of capable educators on her staff, she was able to professionalize them. At the same time, the faculty had to work with the families of these girls, encouraging them to delay their daughters’ marriage age and stressing the importance of attending school daily. Classes for the girls included arithmetic, literature, history, geography, sewing and housekeeping. For a number of years, there was a kindergarten that promoted learning by means of play, a new notion at the time; later, the school provided a full high-school education with matriculation exams. Activities after school were also offered for the first time, such as the “Guides” (Scouts), which included sports as well as the option of singing in a choir. This principal was anxious to engage the teaching staff and for them to function as a unit. For 30 years, the majority of the single teachers even lived together. Monthly get-togethers for interreligious cultural meetings took place in their apartment. The makeup of her staff included British Jewish women like herself, other Europeans, and Canadians; the faculty eventually included Evelina graduates. Needless to say, because of the impoverished state of the Yishuv, the school was funded mostly by donors, beginning with the aforementioned Rothschilds and Anglo-Jewish Association; aid also came from the Ladies Committee, the Department of Education of the Mandate and the Public Health Department. Fees became a necessity and were even increased in order to cover ever-growing expenses. Her annual reports concerning the significant progress that occurred surely helped her receive steadfast support. Schor noted that Landau traveled to London almost every year during the summer to guarantee the continuation of philanthropy, and said that The Jewish Chronicle always interviewed her. The school she first entered at the age of 26 was nothing like the one she left behind with her death in January 1945. 
She had taken a barely functioning institution with no vision whatsoever and completely overhauled it. Landau was fortunate to have the backing, both financial and moral, of the Anglo-Jewish Association, but mostly she had amazing foresight. She was able to adapt her British sensibilities to a new environment. Her ability to negotiate within such a different cultural milieu, albeit within the more familiar bounds of the Jewish world, was remarkable. Her academic background and previous experience as a teacher proved invaluable, yet this was not enough to guarantee success as an administrator. Annie Landau knew how to organize and how to communicate: to the students, to her teachers, to the girls’ parents, and to the philanthropists who supported her laudable endeavors. She devoted her life to these girls, giving them a religious framework that provided a well-rounded education and helped them modernize and face the 20th century. She was indeed a headmistress to remember. ■ The writer is a professor of Jewish history at the Schechter Institute of Jewish Studies and the academic editor of the journal Nashim. Her most recent publication, An Ode to Salonika, was just awarded a Canadian Jewish Book Award.
China announced this Tuesday that an extremely rare set of panda triplets was born in the southern city of Guangzhou. The mother, Ju Xiao, and the three as-yet-unnamed cubs are healthy, the official China News Service reported. The triplets weigh between 8 and 12 ounces each (roughly half to three-quarters of a pound)! This is only the fourth set of giant panda triplets born with the help of China's artificial breeding program, making it an extremely rare occurrence. Even a set of panda twins is considered a miracle in China due to the low reproduction rates of giant pandas. The giant panda is fertile for only 24 to 36 hours per year, making the window for conception very narrow. Ju Xiao was impregnated in March with sperm from a panda living at a Guangzhou zoo. The three cubs were born July 29, but breeders delayed an announcement until they were sure all three would survive, the official China News Service said. There are about 1,600 giant pandas living in the wild, where the species is critically endangered due to loss of habitat and low birth rates. More than 300 live in captivity, mostly in China's breeding programs. Want to see giant pandas in their natural habitat? Join us on our all-new panda itinerary, Wild Side of China: A Nature Odyssey!
Chromium (Cr) exists in valence states ranging from 2(-) to 6(+). Hexavalent chromium (Cr[+6]) and trivalent chromium (Cr[+3]) are the 2 most prevalent forms. Cr(+6) is used in industry to make chromium alloys including stainless steel, pigments, and electroplated coatings. Cr(+6), a known carcinogen, is immediately converted to Cr(+3) upon exposure to biological tissues. Cr(+3) is the only chromium species found in biological specimens.

Urine chromium concentrations are likely to be increased above the reference range in patients with a metallic joint prosthesis. Prosthetic devices produced by Depuy Company, Dow Corning, Howmedica, LCS, PCA, Osteonics, Richards Company, Tricon, and Whiteside typically are made of chromium, cobalt, and molybdenum. This list of products is incomplete, and these products change occasionally; see the prosthesis product information for each device for composition details.

This test is useful for screening for occupational exposure to chromium and for monitoring metallic prosthetic implant wear.

Chromium is principally excreted in the urine, and urine levels correlate with exposure. Results greater than the reference range indicate either recent exposure to chromium or specimen contamination during collection. Prosthesis wear is known to result in increased circulating concentrations of metal ions. A modest increase (8-16 mcg/24 hours) in urine chromium concentration is likely to be associated with a prosthetic device in good condition. Urine concentrations >20 mcg/24 hours in a patient with a chromium-based implant suggest significant prosthesis wear. Increased urine trace element concentrations in the absence of corroborating clinical information do not independently predict prosthesis wear or failure. The National Institute for Occupational Safety and Health (NIOSH) draft document on occupational exposure reviews the data supporting use of urine to assess chromium exposure.
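The interpretive cut-offs above (a modest increase of 8-16 mcg/24 hours consistent with a prosthesis in good condition, >20 mcg/24 hours suggesting significant wear, and an adult reference range of 0.0-7.9 mcg/specimen) can be collected into a small classifier. The function name and return labels below are illustrative only and are not part of this documentation; no such code substitutes for clinical correlation:

```python
def interpret_urine_chromium(mcg_per_24h, has_chromium_prosthesis=False):
    """Classify a 24-hour urine chromium result using the cut-offs quoted above.

    Illustrative sketch only; results must be correlated clinically.
    """
    if mcg_per_24h <= 7.9:  # adult reference range: 0.0-7.9 mcg/specimen
        return "within reference range"
    if has_chromium_prosthesis:
        if mcg_per_24h > 20:
            return "suggests significant prosthesis wear"
        if mcg_per_24h <= 16:
            return "consistent with a prosthesis in good condition"
        return "elevated; correlate clinically"
    # No implant: elevation points to recent exposure or contamination.
    return "recent exposure or specimen contamination"
```

Note that the 16-20 mcg/24-hour gap between the two quoted ranges is deliberately left as an indeterminate category, mirroring the hedged language of the guide.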
They recommend a Biological Exposure Index of 10 mcg/g creatinine for the increase in urinary chromium concentration during a work shift, and 30 mcg/g creatinine at the end of the shift at the end of the workweek. A test for this specific purpose (CHROMU / Chromium for Occupational Monitoring, Urine) is available.

Normal specimens have extremely low levels of chromium; because of the ubiquitous nature of chromium, elevated results could easily be a result of external contamination. Precautions must be taken to ensure the specimen is not contaminated. Metal-free urine collection procedures must be followed (see Trace Metals Analysis Specimen Collection and Transport in Special Instructions). Refrigeration is preferred over chemical methods of preservation. High concentrations of gadolinium and iodine are known to interfere with most metals tests. If either gadolinium- or iodine-containing contrast media has been administered, a specimen should not be collected for 96 hours.

Reference values:
0-15 years: not established
> or =16 years: 0.0-7.9 mcg/specimen

References:
1. Vincent JB: Elucidating a biological role for chromium at a molecular level. Acc Chem Res 2000 July;33(7):503-510
2. NIOSH Hexavalent Chromium Criteria Document Update, September 2008; available from URL: http://www.cdc.gov/niosh/topics/hexchrom/
3. Keegan GM, Learmonth ID, Case CP: A systematic comparison of the actual, potential, and theoretical health effects of cobalt and chromium exposures from industry and surgical implants. Crit Rev Toxicol 2008;38:645-674
Tropical storm Humberto: Will it be the season's first hurricane? (+video) Tropical storm Humberto is forecast to become the season's first hurricane by Tuesday. If that doesn't happen, this will be the longest hurricane-less start to a season since at least 1967. Tropical storm Humberto has formed off of the west coast of Africa, with the storm slated to become the Atlantic hurricane season's first hurricane by Tuesday evening, Eastern Daylight Time. If the forecast holds up, Humberto would keep this year's hurricane season out of the record books for late bloomers. Since 1967, and perhaps longer, that distinction falls to 2002, when the season's first hurricane, Gustav, formed by 8 a.m. Eastern Daylight Time, Sept. 11, according to the National Hurricane Center in Miami. That record may hold all the way back to 1944, when aircraft first began tracking tropical cyclones and measuring the conditions inside them, suggests Jeff Masters, director of meteorology at the website Weather Underground. Although Humberto appears unlikely to unseat Gustav as the latest first hurricane to form in recent decades, it does edge 2001's Erin for the No. 2 spot. The all-time record for a late arrival falls to a hurricane that formed Oct. 8, 1905, according to the National Hurricane Center. Humberto, the 2013 season's eighth named storm, currently is dumping heavy rain on the southern Cape Verde Islands, where tropical storm warnings have been posted. Humberto sports maximum sustained winds of 40 miles an hour, with tropical storm-force winds extending up to 60 miles from its center. Forecasters expect Humberto's maximum sustained winds to top out at about 90 miles an hour – making it a strong category 1 hurricane – by Thursday morning. By Saturday, its winds are expected to drop back to tropical storm strength. Track forecasts call for the storm's center to hook northward after it clears the Cape Verde Islands overnight Monday, then take a jog to the northwest at the weekend. 
By then, Humberto's center is expected to be roughly 2,700 miles east of the Bahamas, well away from any land. The 2013 Atlantic hurricane season, which runs from June 1 to Nov. 30, is well into its mid-August-through-late-October peak. And while the first hurricane is late to arrive, the season is running above normal for named storms as a whole. Based on the 1966-2009 average, the eighth named storm usually doesn't appear until around Sept. 24. Several factors have held tropical storms' graduation to hurricane status at bay in the Atlantic, notes Dan Kottlowski, a senior meteorologist and hurricane expert at Accuweather, a forecasting firm in State College, Pa. The wide sweep of clockwise winds from a mid-level high-pressure system centered off the coasts of Portugal and northern Africa has swung out over the Atlantic in the region where the storm systems that spawn tropical cyclones form. That has set up wind shear – a change in wind speed or direction with altitude – that can slow or prevent a storm system from organizing into a rotating tropical cyclone. Forecasters have seen such features before, Mr. Kottlowski says, but "what's unusual is that it's been sitting there for almost two weeks." At the same time, a large, persistent area of high pressure over the central US this summer has driven westerly winds deep into the low latitudes, where they turn to cross the Atlantic. This serves up its own wind shear and delivers relatively dry air into the main development region for Atlantic tropical cyclones. Meanwhile, another persistent surface high-pressure system – this one between the Azores and Bermuda – has swept winds across Portugal and driven hot dry air deep into the main storm-forming region, as well, stifling storm formation. Humberto was preceded by three tropical-cyclone precursors – tropical waves – that encountered that dry air. "There's three storms that bit the dust immediately," he says. 
"If that dry air wasn't there, these things probably would have been hurricanes." Still, he cautions that the season isn't over yet. In 2001, when Erin made its late appearance, the season ended with nine hurricanes in the books. In 1998, hurricane Mitch formed in late October in the southern Caribbean. The storm killed 19 people and inflicted an estimated $6.2 billion in damage in Central America and on the Yucatan Peninsula. This year, Kottlowski says, conditions there appear much more fertile for tropical cyclone formation. The atmosphere has been delivering very little shear, a lot of heavy rain, and low surface pressures. "There's a lot of factors in play that could still make this a fairly active season," he says.
Just wed: monarchy and commerce, toward the end of the medieval period, that is. In Prosperity and Violence (1/3), we have explained that the private provision of societal coercion in kinship-based agrarian societies was trapped within an insurmountable trade-off between peace and prosperity. If you achieved the former, you could not have the latter; and of course, without peace there can be no prosperity either. In the present sequel, we trace [...] the change from the private provision of violence, based on kin and community, to the public provision of coercion, based upon the monarchy and the state. The rise of the towns [in the medieval period] produced an increase in incomes and the new wealth incited increased conflict. Specialists in the use of violence needed revenues to fight their wars; and those who prevailed were those who allied their political force with the economic fortunes of the towns. (p.51) In the process, a new political and economic order emerged: [O]ne based on capital and complex economic organizations, one in which prosperity profitably coexisted with peace, and one in which coercion was used not for predation but rather to enhance the productive use of society's resources. (p.51) The rise of urban centers closely intertwined with the rise of rural prosperity [...] As the population of the urban centers rose, so too did the demand for agricultural products [...] To secure food, the urban population [...] had to trade for it, thus strengthening the role of markets in rural society. The growth of cities therefore fostered the commercialization of agriculture. (p. 52/53) The increase in profits from agriculture permitted investments in more specialised and more efficient methods and personnel and other improvements, including greater security and enhanced military prowess thanks to a widened base of indentured retainers that were obligated to fight when called upon. 
The growth of the economy of northwestern Europe was thus accompanied by the militarization of households. The private provision of violence was costly. Only those who stood to lose much possessed an incentive to provide it. (p. 54) This meant that [...] the political and the economic elite became one in the rural areas, with households that were rich also becoming the households that dominated militarily the hinterlands of northwestern Europe. (p. 54) It is worth repeating: Feudalism was based on the private provision of coercion; it involved the militarization of the rural household. (p.56) The economic boom brought about by intensified urbanisation and the attendant commercialisation of agriculture put the old economic and political system under fatal stress: Prosperity spread inland along the river systems, up the Rhine and southward into France, and across the Channel to incorporate London, East Anglia, and the southern counties of England. But along with that prosperity came violence, privately provided by elite kin groups and households, with the support of their liveried retainers. In the course of this violence, some kin groups did better than others. Those that prevailed formed ruling lineages and provided kings. Central to the emergence of these monarchies - and central, therefore, to the emergence of the [centralised, national] state - was the alliance between militarized lineages and the new economic order. Driven by necessity, fighting lineages allied with the cities, using them as a source of finance with which to suppress and seduce elites in the countryside, and so transforming the political structure of Europe. (p.56) Continued in Prosperity and Violence (3/3).
<urn:uuid:5ca87acc-343a-4bb0-ab9e-3ef1d98ae0c4>
CC-MAIN-2016-26
http://redstateeclectic.typepad.com/redstate_commentary/2013/02/prosperity-and-violence-22.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394414.43/warc/CC-MAIN-20160624154954-00114-ip-10-164-35-72.ec2.internal.warc.gz
en
0.962831
736
2.953125
3
Jachter's Halacha Files (and other Halachic compositions)
A Student Publication of the Isaac and Mara Benmergui Torah Academy of Bergen County
Parshat Shemini 11 Adar II 5763 March 15, 2003 Vol.12 No.21

Sefirat Haomer During Bein Hashemashot
by Rabbi Chaim Jachter

This week we shall discuss an important question relating to Sefirat Haomer - when the earliest time to count the Omer is. First, though, we must outline a critical Halachic issue – what and when is Bain Hashemashot. The Gemara refers to the period between sunset (Shkia) and the appearance of three medium-size stars (Tzeit Hakochavim) as Bain Hashemashot. The Gemara (Shabbat 34b) writes that there is a Safek (doubt) about this period, whether it is day or night. Thus, the Gemara concludes, Halacha imposes the stringencies of both days upon us. This is why we begin, for example, Shabbat and Yom Tov at Shkia and end these days only at Tzeit Hakochavim. Rav Yosef Dov Soloveitchik (Shiurim Lizecher Abba Mori z”l 1:97-104) cites the Ritva (Yoma 47b s.v. Amar Rabi Yochanan) who explains that Chazal did not consider Bain Hashemashot to be Safek day or night because of a lack of knowledge. Rather, Chazal consider Bain Hashemashot as having aspects of both day and night. The Rav explains the dual identity of Bain Hashemashot as emerging from the two different standards of night and day that appear in the first chapter of Sefer Breishit. By the standards of the first day of creation, Bain Hashemashot is considered day. On the first day of creation, the appearance of light distinguishes between night and day (Breishit 1:5). On the fourth day of creation, though, the appearance of the sun determines when it is day and when it is night (Breishit 1:14). Thus, by the standard of the first day of creation, Bain Hashemashot is day because there is still light. However, by the standard of the fourth day of creation, Bain Hashemashot is defined as night, because the sun no longer appears above the horizon during this time.
There is a debate, though, about the precise contours of Bain Hashemashot. The primary debate regarding Bain Hashemashot is the unresolved debate that rages between the Vilna Gaon and Rabbeinu Tam. Tosafot (Shabbat 35a s.v. Trei) note an apparent contradiction between Shabbat 34-35 and Pesachim 94a. Shabbat 34-35 seems to teach that night begins thirteen and a half minutes after Shkia or the time it takes for an average individual to walk three quarters of a Mil (a Mil is two thousand cubits or three to four thousand feet). According to most opinions, an average person walks a Mil in eighteen minutes. Thus, according to Shabbat 34-35, night seems to begin thirteen and a half minutes after Shkia. However, Pesachim 94a seems to teach that night begins seventy-two minutes after Shkia, or the time it takes an average person to walk four Mil. Rabbeinu Tam seems to resolve the contradiction by explaining that nightfall or Tzeit HaKochavim occurs seventy-two minutes after the sun sets, in accordance with Pesachim 94a. Bain Hashemashot, in turn, begins thirteen and a half minutes before night or fifty-eight and a half minutes after sunset. Thus, according to Rabbeinu Tam, it is still daytime according to the Halacha until fifty-eight and a half minutes after sunset, and Bain Hashemashot is between fifty-eight and a half minutes after sunset until seventy-two minutes after sunset. Many Rishonim concur with Rabbeinu Tam’s approach including the Ramban (Torat Haadam, Inyan Aveilut Yeshana), the Rashba (commentary to Shabbat 35), the Ritva, (commentary to Shabbat 35), and the Ran (in his commentary to the Rif on Shabbat). The Shulchan Aruch (Orach Chaim 261:2) rules in accordance with Rabbeinu Tam in the context of Hilchot Shabbat. Chassidim, as is well known, follow Rabbeinu Tam for stringencies and thus end Shabbat (what appears to us to be) extraordinarily late. 
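The fixed-minute figures in this debate all reduce to simple arithmetic on the 18-minute Mil. The short script below (hypothetical helper names, not from the article) tabulates them for an arbitrary sunset; note that, as discussed here, these fixed minutes strictly apply only in Jerusalem at the equinox and require seasonal and latitude adjustment in practice:

```python
from datetime import datetime, timedelta

MIL_MINUTES = 18  # time to walk one Mil, according to most opinions

def nightfall_estimates(sunset):
    """Fixed-minute estimates after sunset (Shkia), per the figures quoted above.

    Illustrative only: actual practice adjusts for season and latitude.
    """
    return {
        # Vilna Gaon: night begins 3/4 of a Mil (13.5 minutes) after sunset
        "Vilna Gaon (3/4 Mil after sunset)": sunset + timedelta(minutes=0.75 * MIL_MINUTES),
        # Rabbeinu Tam: Bein Hashemashot starts 3/4 Mil before his nightfall
        "Rabbeinu Tam: Bein Hashemashot begins": sunset + timedelta(minutes=3.25 * MIL_MINUTES),
        # Rabbeinu Tam: night begins 4 Mil (72 minutes) after sunset
        "Rabbeinu Tam: Tzeit (4 Mil after sunset)": sunset + timedelta(minutes=4 * MIL_MINUTES),
    }

sunset = datetime(2023, 4, 1, 19, 0)  # an arbitrary 7:00 p.m. sunset
for name, t in nightfall_estimates(sunset).items():
    print(f"{name:42s} {t:%H:%M:%S}")
```

The 58.5-minute figure for the start of Rabbeinu Tam's Bein Hashemashot is simply 72 minus 13.5, i.e. 3 1/4 Mil.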
They will even rely on Rabbeinu Tam for leniencies for a matter that involves only rabbinic law such as the time for Tefilla. This explains the familiar sight of Chassidim davening Mincha long after the sun has set. Some resistance to this opinion began with the Shach (Yora Dea 266:11) citing the Teshuvot Maharam Alashkar who believes that the Rif, Rambam, and Rosh disagree with Rabbeinu Tam. The opposition reaches its crescendo with the Vilna Gaon (Biur Hagra to Orach Chaim 261:2) who marshals many proofs from the Gemara to dispute Rabbeinu Tam. The Vilna Gaon believes that Shabbat 34-35 is the primary focus of this issue and that night begins thirteen and a half minutes after sunset. Sephardic Jews and non-Chassidic Ashkenazic Jews follow the ruling of the Vilna Gaon, although some accommodate Rabbeinu Tam regarding the end of Shabbat and Yom Kippur (see Biur Halacha 261:2 s.v. Mitechilat and Shehu). The Vilna Gaon writes, though, that the time period of thirteen and a half minutes applies only in Jerusalem on the day of the equinox. The time must be adjusted according to the time of season and distance from the equator. Thus, common practice in this country is to wait forty-five to fifty minutes after sunset for the end of Shabbat (see Rav Moshe Feinstein, Teshuvot Igrot Moshe O.C. 4:62 and Rav Mordechai Willig, Am Mordechai pp.11-16). For a detailed discussion of this issue see the aforementioned section of Rav Willig's Am Mordechai and the many twentieth century Sefarim that he cites that discuss this topic at length.

Sefirat Haomer - A Torah or Rabbinic Level Obligation?

The question of whether we may count the Omer during Bain Hashemashot depends, largely, on the debate among the Rishonim whether Sefirat Haomer constitutes a Torah level obligation in the absence of a functioning Bait Hamikdash. The Ran (in the conclusion of his commentary to the Rif to Masechet Pesachim) notes that most Rishonim agree with Tosafot (Menachot 66a s.v. 
Zecher) that Sefirat Haomer today is only a rabbinic obligation. Rambam (Hilchot Temidim Umusafim 7:22), however, believes that Sefirat Haomer remains a Torah level obligation even in the tragic absence of the Beit Hamikdash. The Biur Halacha (489:1 s.v. Lispor) cites a significant number of Rishonim who concur with the Rambam including the Sefer HaChinuch, Raavya, and Ohr Zarua. Hashem privileged me to hear an explanation of this dispute from Rav Yosef Dov Soloveitchik in the Shiurim he delivered on the sixth chapter of Masechet Menachot at Yeshiva University on November 7, 1983. The Rav noted that all agree that the offering of the Korban Omer generates the Mitzva of Sefirat Haomer as is explicit in the Torah (Vayikra 23:15 and Devarim 16:9). Accordingly, Tosafot believe that in the absence of the Korban Omer, there is no Torah level obligation to count the Omer. The Rambam, however, believes that the very date of the sixteenth of Nissan also generates the obligation to count the Omer. Thus, even in the absence of the Korban Omer, the Torah level obligation to count the Omer remains in effect.

Sefirat Haomer During Bein Hashemashot

Tosafot (ad. loc.) note that since Sefirat Haomer today is only a rabbinic level obligation, we may count Sefira even during Bain Hashemashot. This is because the time of Bain Hashemashot is Safek night and regarding a rabbinic law, one may resolve a Safek leniently (Safek Dirabannan Likula). Tosafot add that it is even preferable to count the Omer during Bain Hashemashot because of the desirability of Temimot, that Sefirat Haomer should be whole and complete. The Gemara (Menachot 66a) says that one should count the Omer at night because the Pasuk (Vayikra 23:15) describes the weeks to be counted as Temimot. Tosafot understands this Gemara also as teaching that the earlier in the evening we count the Omer the better. 
Rav Soloveitchik explained Tosafot as understanding that not only is it a Mitzva to count the days, it is also a Mitzva for the days to be counted. Thus, the earlier in the evening we count the Omer, the more of the day is counted. The Rambam quite obviously would reject Tosafot’s assertion that one may count the Omer during Bain Hashemashot because he believes that Sefira today is a Torah level obligation and one must act stringently in case of doubt regarding a Torah law (Safek Dioraita Lichumra). Moreover, even Tosafot express objection to the assertion that it is preferable to count the Omer during Bain Hashemashot. The Ran (ad. loc.) explains that it is counterintuitive to believe that when Sefirat Haomer’s status as a Mitzva was downgraded to a rabbinic level obligation, a stringency was introduced that we should strive to count the Omer earlier than we used to do when Sefira was a Torah level obligation. Furthermore, the Ran notes that even though Safek Dirabanan Likula, one should not deliberately introduce a doubt in one’s performance of Mitzvot. Thus, he concludes that Bidieved (post facto) if one had already counted the Omer during Bain Hashemashot he need not repeat the counting, but Lichatchila (initially) one should not count the Omer during Bain Hashemashot. The Shulchan Aruch (O.C. 489:2) notes that those who are particular in their performance of Mitzvot wait until Tzeit HaKochavim (nightfall) and he concludes, “it is proper to do so.” The Biur Halacha (ad. loc.) and Aruch Hashulchan (O.C. 489:7) note that it is common practice to follow the stricter approach and wait until Tzeit HaKochavim to count the Omer. Rav Ovadia Yosef (Teshuvot Yechave Daat 1:23), though, records that the custom in Jerusalem is to count the Sefira during Bain Hashemashot. 
Rav Aharon Adler of Ramot reports that Rav Soloveitchik told him that one may rely on Tosafot and count the Omer during Bain Hashemashot, especially in light of the opinion in Tosafot that this is the preferable way to count the Omer. A benefit of counting the Omer after a Minyan that davens Maariv during Bain Hashemashot (a practice recorded and endorsed by the Rama O.C. 233:1) is that it eliminates the concern that one may forget to count the Omer after Tzeit HaKochavim when he is at home. Perhaps this is what motivated the Rav’s ruling (also see Shulchan Aruch O.C. 489:3). Rav Ovadia Yosef (ad. loc.) adopts a similar approach to the Rav. He rules that if the Tzibbur is not willing to remain in Shul until Tzeit HaKochavim, they may count the Sefira during Bain Hashemashot since people might forget to count after Tzeit when they are in their homes. The Aruch Hashulchan (ad. loc.) adopts a compromise view, stating that on Friday evenings common practice in his locale was to count the Omer during Bain Hashemashot. This is because the Aruch Hashulchan subscribes to the Taz’s (O.C. 494 and 668) view that when one accepts Shabbat he has transformed the time into the next day (for a full analysis of the debate regarding the potential transformative powers of Tosefet Shabbat, see Rav Betzalel Zolti’s Mishnat Yaavetz O.C. chapter 29). Thus, Bain Hashemashot on Shabbat is the equivalent of night for those who have already recited Maariv, according to this view. We should note that both Rav Moshe Feinstein (Teshuvot Igrot Moshe O.C. 99:3) and Rav Ovadia (ad. loc.) rule that one may not count Sefira before Bain Hashemashot even though Plag Mincha has passed. In our next issue, we shall, Bli Neder and Im Yirtzeh Hashem, complete this discussion and review the question of when the last opportunity to count the Omer is.
The WIC food packages provide supplemental foods designed to meet the special nutritional needs of low-income pregnant, breastfeeding, non-breastfeeding postpartum women, infants and children up to five years of age who are at nutritional risk. WIC food packages and nutrition education are the chief means by which WIC affects the dietary quality and habits of participants. You can read a brief history of the WIC food packages at Background: Revisions to the WIC Food Package.

New food packages are now being provided to WIC participants in all States. On December 6, 2007, an interim rule revising the WIC food packages was published in the Federal Register. The new food packages align with the 2005 Dietary Guidelines for Americans and infant feeding practice guidelines of the American Academy of Pediatrics. The food packages better promote and support the establishment of successful, long-term breastfeeding, provide WIC participants with a wider variety of foods including fruits and vegetables and whole grains, and provide WIC State agencies greater flexibility in prescribing food packages to accommodate the cultural food preferences of WIC participants.

An interim rule allows the Food and Nutrition Service to obtain feedback on the revisions while allowing implementation to move forward. The interim rule comment period ended on February 1, 2010. FNS reviewed and analyzed over 7,500 comment letters. A final rule is currently in clearance.

Comments: All comments submitted electronically via Regulations.gov can be viewed on Regulations.gov (see instructions below).

To view comments on Regulations.gov:
1. Go to www.regulations.gov
2. Click on "Read Comments" at the top of the screen.
3. Enter keyword or ID: FNS-2006-0037 (may also search by submitter ID).
4. Click on "Search."

Note: Postings with the title "Comment FR Doc # E7-23033" should be disregarded. These are attachments to comment letters, which are also included with the individual comment to which it was attached. 
Due to a system error, these attachments appear both with the comment letter and separately.
A colleague has been running simulations using a library written in Python. She was having serious performance problems. Her application is parallelizable, but Python does not make parallelization easy. She could switch to another language, but that's expensive. Further investigation reveals that her simulation relies heavily on random-number generation: every little step involves a random number. So how good is Python at generating random numbers?

Python has a nice framework to quickly benchmark functions: the timeit module. How fast is the Python random-number generator?

    $ python -m timeit -s 'import random' 'random.random()'
    10000000 loops, best of 3: 0.0363 usec per loop

So over 100 CPU cycles to generate one random floating-point number. However, timeit adds an overhead of about 30 cycles or so to every operation, related to the function-call overhead, so this is not unreasonable.

What if you want to generate an integer in the range [0,1000]? It gets ugly.

    $ python -m timeit -s 'import random' 'random.randint(0,1000)'
    1000000 loops, best of 3: 0.847 usec per loop

Wow! We are now taking over 2000 CPU cycles per random integer. This can easily become a limiting factor when writing simulation code. I tried to read Python's source code for random.randint, but I could not figure out quickly what it is doing. If we accept a very small (negligible) bias, we can do it by multiplication instead:

    $ python -m timeit -s 'import random' 'int(random.random() * 1001)'
    1000000 loops, best of 3: 0.206 usec per loop

We are down to 400 CPU cycles per integer. It is still a lot, but it is four times faster to avoid Python's default API (random.randint).

The nice thing with Python is that it is easy to write a C function and access it from Python. Of course, it comes with some significant overhead. I do not expect to get far below 100 cycles per random value by calling a C function. However, the ranged random-number generators are expensive enough that a C function might help. 
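For reference, the standard way to generate a ranged value without the multiplication bias is rejection sampling. Here is a Python sketch of the idea (illustrative only, not the actual C code discussed in this post):

```python
import random

def unbiased_bounded(bound):
    """Unbiased integer in [0, bound) by rejection sampling on 32-bit draws.

    Sketch of the standard technique, not the C function used in this post.
    """
    assert 0 < bound <= 1 << 32
    # Largest multiple of `bound` representable in 32 bits; draws at or
    # above it would skew the distribution, so we reject them and redraw.
    limit = (1 << 32) - ((1 << 32) % bound)
    while True:
        x = random.getrandbits(32)
        if x < limit:
            return x % bound
```

For small bounds, rejection is rare, so the expected cost stays close to one draw per value.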
So I took a simple function in C that generates a good-quality (unbiased) ranged random number and made it available to Python:

    $ python -m timeit -s 'import fastrand' 'fastrand.pcg32bounded(1001)'
    10000000 loops, best of 3: 0.0693 usec per loop

That is about 10 times faster than Python's native random.randint. The lesson is that random.randint should probably not be used in performance-sensitive code. My source code is available (Python and C).

Update: Marcel Ball reports in the comments that this performance problem does not affect PyPy, only the regular Python. David Andersen points out that using the numpy library via

    $ python -m timeit -s 'import numpy' 'numpy.random.randint(0, 1000)'

is much faster.

Credit: This blog post benefited from an exchange with Nathan Kurz.
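The measurements quoted in this post can be re-run with a short script. Absolute numbers depend heavily on the machine and the Python version, so treat the figures above as indicative rather than exact:

```python
import random
import timeit

def per_call_usec(stmt, number=200_000):
    """Average time per call, in microseconds, for a snippet using `random`."""
    total = timeit.timeit(stmt, setup="import random", number=number)
    return total * 1e6 / number

# The three approaches compared in the post.
timings = {
    "random.random()": per_call_usec("random.random()"),
    "random.randint(0, 1000)": per_call_usec("random.randint(0, 1000)"),
    "int(random.random() * 1001)": per_call_usec("int(random.random() * 1001)"),
}
for label, usec in timings.items():
    print(f"{label:30s} {usec:.3f} usec per call")

# The slightly biased multiplication trick still lands in [0, 1000].
samples = [int(random.random() * 1001) for _ in range(10_000)]
```

On CPython, random.randint remains consistently slower than random.random(), which is the gap the post is about.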
We know that corn can be used to produce automobile fuel ethanol. Now Tata Motors, India's largest automobile producer, is working on using cornstarch to make car body parts to improve the safety of the vehicle. They have found that cornstarch can increase the crash resistance of cars. This could save the occupants of the car in case of an accident. Tata Motors is already using cornstarch-based material in small cars for proto-testing. It is expected to be a reality by the end of 2012. Read more from Tata Group: Now, drive cars built on cornstarch.
In developing the Strategic Plan, we assessed how domestic and foreign policy priorities and political and public support have changed in the post-Cold War era. This assessment provided the basis for key assumptions that have been factored into our program strategies. Our annual review process will include a reassessment of the dynamic business and political environment in which we operate to ensure that our assumptions and strategies remain valid. Domestic policy priorities are being adjusted in light of the Federal deficit, constrained budgets, and the need to maintain America's vitality and competitiveness. The Administration has placed a priority on supporting and promoting high technology for economic growth through effective partnerships, both within Government and with industry and academia. Therefore, NASA will work closely with other Federal agencies to ensure coordinated efforts in the areas of space and aeronautics science and technology. With increased emphasis on pressing domestic needs, we will ensure the relevance of our programs to national science and technology priorities and to other domestic goals in areas such as the environment, health, education, and aviation safety. The National Aeronautics and Space Act of 1958 (Space Act) established NASA and laid the foundation for its mission. It directs NASA to conduct space activities devoted to peaceful purposes for the benefit of all humankind. We are to preserve the leadership of the United States in aeronautics and space science and technology, and we are to expand knowledge of the Earth and space. We are to conduct human activities in space. We are to encourage the fullest commercial use of space. Furthermore, we are to cooperate with other nations and are directed to widely communicate the results of our efforts. Two Presidential policy statements also shape NASA's activities in space and aeronautics. The top-level goals of these policies are displayed below.
The complete documents, which are aligned with this Plan, can be accessed as indicated in Appendix 3. First, the President's National Space Policy defines the following goals: The National Space Policy also provides guidelines designating NASA as the lead agency for research and development in civil space activities. NASA, in coordination with other departments and agencies, is to focus its research and development efforts in: space science to enhance knowledge of the solar system, the universe, and fundamental natural and physical sciences; Earth observation to better understand global change and the effect of natural and human influences on the environment; human space flight to conduct scientific, commercial, and exploration activities; and space technologies and applications to develop new technologies in support of U.S. Government needs and our economic competitiveness. Second, the President's Goals for a National Partnership in Aeronautics Research and Technology includes the following: In the post-Cold War era, the foreign policy aspect of the civil space program will focus on a spirit of expanded cooperation with our traditional international partners and the forging of new partnerships. The Administration has asked NASA to play a major role in international ventures with Russia to expand space exploration opportunities and to promote the peaceful uses of technology. There are also increased opportunities for cooperation with developing countries. These new relationships, along with strengthened ties to our traditional partners in Europe, Japan, and Canada, can help reinforce the economic and technological bonds in the new global society. As NASA moves forward with increased levels of international cooperation, it must balance the benefits that will result from joint endeavors with our own national policies and priorities.

Political and Public Support

A commitment from America's political leadership is vital to our success.
The President has demonstrated his support for NASA and has indicated that we will play a significant role in the Administration's science and technology agenda and its foreign policy initiatives. In Congress, NASA continues to enjoy significant bipartisan support. Sustained political support will depend on our ability to demonstrate a contribution to national needs and to deliver on our promises. Public support for NASA's programs has been positive and generally stable throughout our history. Recent public opinion polls continue to indicate solid support for U.S. endeavors in space. A number of recent discoveries and accomplishments have served to increase the level of public interest and support of NASA's programs. These include the possible evidence of ancient life discovered in a meteorite from Mars, exciting images of the surface of Mars from the Mars Pathfinder, dramatic pictures from the Hubble Space Telescope of the birth and death of stars, discoveries of planets around other stars, and images from the Galileo spacecraft of the fractured and deformed icy surface of Jupiter's moon Europa. In the area of Earth science, the SeaWiFS ocean color sensor, developed through innovative partnerships with industry, is providing significant new data about the ocean. In addition, the highly visible long-term missions of NASA astronauts aboard the Space Shuttle and Russian space station Mir have engaged public interest in the challenges of living and working in space. Successful demonstrations of aeronautics technologies to enhance aviation system capacity and safety have also attracted great public attention. Continued public support will depend on our ability to satisfy the Nation's needs and to keep the public fully informed about the results and relevance of our activities. 
Key External Factors

We identified the following key assumptions, which if significantly changed could impact our ability to implement this Plan:
In October of 2004 the U.S. Department of Homeland Security announced it would create a state-of-the-art program to ensure that electronic and information technology is accessible to employees and consumers with disabilities. As Chief Information Officer Steve Cooper explained, "Making electronic and information technology accessible for people with disabilities is a good business management strategy." He added that "complying with Section 508 ensures our information technology will be more capable of responding to technology changes in future years. A strong accessibility program results in more flexibility, more portability, better designs, and better websites." Section 508 of the Rehabilitation Act Amendments of 1998 (508) requires that when Federal agencies develop, procure, maintain, or use electronic and information technology, Federal employees with disabilities have access to and use of information and data that is comparable to the access and use by Federal employees who are not individuals with disabilities, unless an undue burden would be imposed on the agency. Section 508 also requires that individuals with disabilities, who are members of the public seeking information or services from a Federal agency, have access to and use of information and data that is comparable to that provided to the public who are not individuals with disabilities, unless an undue burden would be imposed on the agency. (36 C.F.R. 1194.1) The Department of Homeland Security's commitment to accessibility for people with disabilities, both employees and consumers, and its articulation of the value of accessible design create a model that can be emulated by others.
4. Position or situation with regard to seeing; that position which enables one to look in a particular direction; position in relation to the points of the compass; as, a house has a southern aspect, that is, a position which faces the south. 6. (Science: astronomy) The situation of planets or stars with respect to one another, or the angle formed by the rays of light proceeding from them and meeting at the eye; the joint look of planets or stars upon each other or upon the earth. The aspects which two planets can assume are five; sextile, when the planets are 60 deg apart; quartile, or quadrate, when their distance is 90 deg or the quarter of a circle; trine, when the distance is 120 deg; opposition, when the distance is 180 deg, or half a circle; and conjunction, when they are in the same degree. Astrology taught that the aspects of the planets exerted an influence on human affairs, in some situations for good and in others for evil.
Kovno (Mar. 11) (Jewish Telegraphic Agency) The population of Kovno, the capital of Lithuania, is suffering greatly from a flood caused by the overflowing of the rivers Niemen and Vilya. The flood, which started a few days ago, now threatens the entire city. The Jewish quarter is entirely submerged. Hundreds of Jewish families are homeless. The damages are estimated to run into millions. The Ezra Relief Committee, representing the Kehillah which was dissolved by the government, despatched appeals to the Joint Distribution Committee, the Jewish Colonization Association and to other Jewish organizations abroad for immediate relief.
A One-Week NEH Landmarks Workshop for Community College Teachers First Session July 8-14, 2007; Second Session, July 15-21, 2007 The workshop is devoted to studying John Adams’ life and thought as revealed in the letters, essays and documents he wrote, the marginal notes he made in the books he read, the homes he lived in and the artifacts he collected. Reading his words and considering his deeds in his very own physical surroundings helps us to understand his frame of mind and recognize the great difficulties and challenges he faced. We read the Massachusetts Constitution in the room in which he drafted it; climb Penns Hill and look out at Boston harbor from the same spot where Abigail Adams watched the battle of Bunker Hill and described it in letters to him; sit in the small kitchen in which John hosted meetings of revolutionary leaders; and inspect the art works he acquired abroad and treat them as clues regarding the impact that European culture had upon his thoughts and feelings. In addition to intensive work at the Adams National Historical Park, participants do hands-on research activities at the Massachusetts Historical Society which houses the Adams Papers, the Massachusetts Archives which houses the Massachusetts Constitution and documents relating to its ratification, and the Boston Public Library which houses his personal library. Four seminar meetings provide a thread of analytic and chronological continuity and integrate the specific lessons learned at the landmarks. Each participant has his or her own room in a modern and well maintained dormitory suite at Boston College. The campus is close to several trolley lines that provide good access to downtown Boston and Cambridge. Each participant receives a $500 stipend to help cover housing and meal expenses and a travel subsidy to help meet transportation expenses. For more information please check our website www.bc.edu/sites/johnadams or email us at email@example.com. 
Monday, 5 September 2011

The effect of climate change in Kenya

This paper measures the economic impact of climate on crops in Kenya. The analysis is based on cross-sectional climate, hydrological, soil, and household level data for a sample of 816 households, and uses a seasonal Ricardian model. Estimated marginal impacts of climate variables suggest that global warming is harmful for agricultural productivity and that changes in temperature are much more important than changes in precipitation. This result is confirmed by the predicted impact of various climate change scenarios on agriculture. The results further confirm that the temperature component of global warming is much more important than precipitation. The authors analyze farmers' perceptions of climate variations and their adaptation to these, and also constraints on adaptation mechanisms. The results suggest that farmers in Kenya are aware of short-term climate change, that most of them have noticed an increase in temperatures, and that some have taken adaptive measures.
Faces of the Harlem Renaissance

1886-1983 / Photographer

A superlative studio photographer, James VanDerZee captured the spirit and energy of life in Harlem for more than 50 years. Like so many pivotal figures of the Harlem Renaissance, VanDerZee originally embarked on a career totally other than the one in which he ultimately excelled. Arriving in Harlem as an aspiring violinist in 1906, he formed—and performed with—the Harlem Orchestra. VanDerZee was equally skilled at piano; he often tickled the ivories with such jazz giants as Fletcher Henderson. On regular return visits from Harlem to his hometown of Lenox, Massachusetts, VanDerZee found himself shooting pictures of the beloved place as a hobby. In 1915 he landed a job as a darkroom technician, and within just two years he had opened his own studio on 135th Street. From that base he began to document all faces and facets of the local community. VanDerZee's work exhibited artistic as well as technical mastery. Thanks to his genius for darkroom experimentation—retouching negatives, for example, and creating double exposures—the demand for his portraiture soon skyrocketed. Many of VanDerZee's photographs celebrate the life of the emergent black middle class. Using the conventions of studio portrait photography, he composed images that reflected his clients' dignity, independence, and material comfort, characterizing the time as one of achievement, idealism, and success. VanDerZee's photographs portray the Harlem of the 1920s and 1930s as a community that managed to be simultaneously talented, spiritual, and prosperous.
American Masters "John James Audubon: Drawn From Nature"

Thu, 09/23/2010 - 6:00pm

The complex life of the artist whose pioneering work helped define a young nation. John James Audubon - whose name became synonymous with American conservation - killed thousands of birds during his quest to create Birds of America, arguably the largest and most beautiful book of the 19th century. Audubon was at once entrepreneur, artist, scientist, husband, father, legend - and walking contradiction. Born in what is now Haiti, the illegitimate son of a French plantation owner and his mistress, Audubon ultimately became the quintessential American pioneer. On the frontier, he played the debonair European. In the drawing rooms of Europe, he acted the part of wild woodsman. Although faithful to his long-suffering wife, he nonetheless wrote her lengthy letters bursting with details of his encounters with other women. Jailed once for bankruptcy, he went on to dine at the White House. The self-taught artist and self-made man was praised by royalty, shunned by his in-laws and blackballed by the Philadelphia Academy of Natural Sciences. AMERICAN MASTERS "John James Audubon: Drawn From Nature," encoring Thursday, September 23 at 7 p.m. on PBS World (cable 524/DT21.2), details Audubon's epic adventures while capturing the full-scale beauty of his definitive book. "John James Audubon was a genuine American character. His life story reads like the stuff of great fiction - from his uncertain beginnings to the passionate pursuit of his dream," said Susan Lacy, the creator and executive producer of AMERICAN MASTERS. "His seminal work, Birds of America, is a magnificent testament to art, to nature and to dogged determination." Said program producer Lawrence Hott, "When most people hear the word Audubon they think of the Audubon Society, but they really know almost nothing about him.
Audubon pulled off the most successful publishing coup of his time, leaving us with a stunning visual legacy that opens a window onto a time and place that would otherwise be lost to us. And while he lived most of his life before the age of photography, making filming a challenge, we knew when we were editing that there would always be something to show because Audubon had shown us so much." Birds of America includes 435 life-sized portraits of every bird then known in the United States. Although Audubon was not the first person to attempt to paint and describe all the birds of America, his book remains the standard by which all other bird artists are measured. In addition to following Audubon's cue and traveling across the country to document birds in their natural habitat, the filmmakers were granted access to Audubon's original watercolors at the New-York Historical Society. From that starting point, they devised numerous ways to fully explore the enduring legacy of his paintings. In "Drawn From Nature," artist Walton Ford, who frequently parodies Audubon, shows how the artist posed birds in lifelike positions in order to paint them. Master printer Michael Aakhus demonstrates the printing process for Birds of America, using a rare, authentic copperplate from Audubon's original collection. An animated sequence about 19th-century passenger pigeon hunts - which effectively wiped out the population - follows the pigeons from a sky blackened by never-ending waves of migrating birds to a drawer at the Academy of Natural Sciences in Philadelphia, where a few examples of the now-extinct bird remain. The film traces Audubon's life from its unlikely beginning to its unfortunate end and includes every triumph and tragedy the artist experienced along the way. Audubon not only illustrated Birds of America, he was the writer, publisher and promoter.
The man who had failed at selling penny nails in the backwoods of Kentucky discovered that he could sell an unfinished folio for a thousand dollars in the finest homes in Edinburgh, Manchester, Leeds, London and Paris. A much-heralded four-volume set of Birds of America was published in London between 1827 and 1838. Early subscribers to the book included the kings of England and France; the final list would boast more than 200 of the richest and most recognizable names on both sides of the Atlantic. "Drawn From Nature" is also a love story that details the strong bond between Audubon and his wife, Lucy, who left a comfortable lifestyle to trek into the American frontier with her husband before settling in Louisiana and, later, New York. Narrated excerpts from Audubon's many letters to Lucy provide insight into a complicated relationship that survived long separations, bankruptcy and the death of two children. After the success of Birds, Audubon continued to work, creating a smaller folio and embarking on a major study of mammals. The Viviparous Quadrupeds of America was only half-finished in 1846 when he turned the work over to his son. His eyesight was failing, as was his mind. He passed the last two years of his life in silence, recognizing no one. In 1863, his wife - by then destitute - sold some 800 original works of art to the New-York Historical Society for a total of $4,000, paid in installments. In December 2005, a set of Audubon's original artwork and manuscripts fetched $10.6 million at auction. Nearly 50 years after Audubon's death, a small group of people banded together to protest the wholesale slaughter of birds for their plumes, which were then used to decorate women's hats. The group eventually dubbed themselves the Audubon Society. Today, the National Audubon Society continues to build on the love of avian wonders to inspire and advance conservation for the benefit of birds, wildlife and people.
IBM and Hitachi are expected to announce a research agreement this week in which the companies will collaborate to improve semiconductor technology, including shrinking the features on silicon chips. Researchers from the companies will try to accelerate the miniaturization of chip circuitry by researching at the atomic level for 32-nanometer and 22-nm semiconductors. Making chip circuits smaller should allow computing devices to deliver power savings and performance gains. It will also make manufacturing more efficient, IBM said. By combining research capabilities and intellectual property, the companies also hope to reduce the costs of developing advanced chip technologies, IBM said. The tie-up with Hitachi is not linked to the Cell processor, which is the result of a separate development partnership between IBM, Sony and Toshiba, IBM said. Though IBM and Hitachi work together on enterprise servers and other products, this is the first time they are collaborating on semiconductor technology. Engineers from the companies will conduct research at IBM's Thomas J. Watson Research Center in Yorktown Heights, New York, and at the College of Nanoscale Science and Engineering's Albany NanoTech Complex, also in New York. Though the research does not apply directly to manufacturing, it could contribute to IBM's manufacturing processes as they relate to future silicon devices, IBM said. Financial details of the two-year agreement were not disclosed. IBM officials declined comment on when products resulting from the research would hit the market. Chip makers such as IBM, Intel and Advanced Micro Devices are constantly upgrading their manufacturing technologies to shrink chips. Intel began switching its manufacturing process to 45-nanometer chips last year, and AMD is scheduled to make a similar move later this year. Intel recently said it hopes to shrink the features on its chips to 22-nm by 2011. A nanometer is equal to about one billionth of a meter. 
In chip manufacturing, the figure refers to the smallest features etched onto the surface of the chips. As chip makers build smaller and smaller transistors, they are dealing with features that are in some cases just a few atoms thick. IBM already has a strong profile in advancing semiconductor technology. It is developing silicon nanophotonics technology, which could replace some of the wires on a chip with pulses of light on tiny optical fibers for quicker and more power-efficient data transfers between cores on a chip. It is also working with US universities to develop carbon nanotubes, smaller transistors that could deliver better performance than current transistors.
Did you know that people decide how they feel about you within the first three seconds of meeting? This reaction, an auto-response generated in the most primitive part of our brains, evolved in our early pre-human ancestors when instant decisions about friend or foe were required to survive. The way we look and act generates subconscious impressions and comparisons with "stereotypes" in the minds of observers, often generating powerful emotions and judgments. First impressions, even when untrue representations, are difficult to dislodge and change. A "good" first impression can be a powerful impetus for your career, just as a negative impression can be an impossible obstacle to overcome.

Etiquette and Your First Impression

As a former senior executive of a multi-billion-dollar service company and a small business owner, I am constantly surprised by the naiveté of job applicants or new employees in their failure to recognize the importance of etiquette and manners in the workplace. There are few jobs that are so demanding or unique that they fit a single individual; in fact, for most jobs and promotions, there are literally hundreds of candidates with similar experience, competence, and skills. Often, the decision of whom to hire, promote, or work with boils down to likability. In other words, the ability to make others comfortable around you is more often than not the reason for personal success.

Standards of Business Etiquette

Business etiquette is the commonly accepted code of conduct in the business world governing relationships between people. The minimum requirements to make a favorable first impression include several standards:

- Be on Time. Being prompt shows respect for others and a recognition that their time is valuable.
- Dress Appropriately. Most offices establish formal or informal dress standards. If you have questions about the appropriate attire, err on the formal side. You can always slip off a necktie; however, switching from jeans and a sweatshirt to a suit is more difficult.
- Smile. A smile makes you more approachable and stimulates a return smile from others.
- Address People by Their Last Name. Don't use a person's first or familiar name unless invited to do so.
- Maintain Eye Contact. Avoiding another's eyes gives the impression that you have something to hide or lack confidence. However, be aware that in other countries, direct eye contact may be viewed as impolite or aggressive.
- Speak Clearly. Enunciate in a voice loud enough to be heard, but soft enough to avoid startling others.
- Deliver a Firm Handshake. Practice your handshake to be sure you're neither a "bone-crusher" nor a "limp fish."

"Do's" and "Don'ts" of Day-to-Day Business Conduct

Whether you're a new employee, a manager in the midst of your career, or a seasoned business executive, good manners should be a daily practice. Showing respect and appreciation for other people in all situations is a sign of maturity and self-confidence.

1. Use the Words "Please" and "Thank You" Generously

Treat people as you would expect to be treated if the roles were reversed. Do you like being ordered about, or having your concerns ignored? Most people don't, but managers caught up in their daily duties often forget and peremptorily issue directions as if their subordinates were machines to be turned on and off. And respond graciously with a "you're welcome" or "my pleasure" when you are the recipient of "please" or "thank you."

2. Remember Names and Use Them Frequently

Each of us is uniquely attached to our name, and pays particular attention when hearing it voiced. Hearing our name reinforces our ego and affirms our identity. Hearing our name makes us feel good. Be careful though: If you use a name too much, it can appear manipulative.

3. Remain Civil Despite Provocation

Mark Twain advised, "Never argue with a fool – onlookers might not be able to tell the difference." Disagreements and conflict are a part of everyday life. However, there is an appropriate time and place to hash out disputes when cooler heads can prevail.

4. Show Respect for Others at All Times

Someone once said that respect was a two-way street – if you want to get it, you have to give it. It is easy to defer to people whom we consider important or superior. The real test of our character is how we treat those who serve us – the waitress in the cafe, or the clerk at the drugstore.

In a world of 24/7 electronic communication and constant multitasking, it is easy to ignore or pay partial attention to the person speaking to you. How many times in meetings do you focus on text messages, rather than the speaker? How often are one-on-one meetings with your peers interrupted by a cell phone ring? Intended or not, texting during group meetings, taking calls during personal meetings, or impatiently glancing at your watch in the midst of a conversation signals to the physical speaker in front of you that he or she is not important enough to deserve your full attention. Therefore, it is important to take the time to listen and keep proper cell phone etiquette in mind.

1. Don't Engage in Gossip, Malicious Comments, or Tasteless Jokes

Such behavior says more about you than the person to whom you are referring, and it's not a complimentary message. Obviously, profanity and cursing are never appropriate in a professional setting.

2. Don't Forget to Guard Your Online Presence

Social sites such as Facebook, Twitter, and LinkedIn are visited millions of times a day by friends and strangers. Most companies review such sites before hiring a new employee or extending a promotion offer as part of their due diligence. Be careful regarding what you post, as it will be public for years to come. Don't post any pictures, write any emails, or make any comments that would make you uncomfortable if your mother visited your site.

3. Don't Discuss Politics or Religion

Most people have strong feelings about both subjects, holding positions which you might consider irrational. There is no upside to engaging in discussions about either politics or religion since you are unlikely to change anyone's mind and such conversations can quickly degenerate into rancor and hurt feelings.

4. Don't Give Inappropriate or Improper Gifts

Many companies have strict prohibitions regarding the receipt of business gifts, including meals and entertainment, to avoid any suggestion of favoritism or impropriety. The purpose of a business gift is to thank the recipient for his or her business, time, or, in the case of employees, their contributions to your success. Don't expect a quid pro quo; if you are giving a present with strings attached, it is likely to be inappropriate.

Business Etiquette in Foreign Cultures

If your business takes you to other countries, you should investigate that culture's business practices to identify what is expected and what might constitute a "blunder." For example, in Brazil, personal space is not as important as it is to Americans, with frequent pats and touches. Unlike America, a Chinese businessman might expect a small gift representing your company upon meeting. Formally exchanging business cards is a ritual practiced in Japan, while the wearing of leather or eating beef would be an insult to many in India, since cows are sacred. Take the time to learn about the business culture before and during your foreign interaction.

Proper etiquette and good manners never go out of style because they demonstrate a respect for others, an attitude often overlooked in the chaotic, frenzied world of business competition. The exercise of good manners slows the pace and focuses on the interactions between people. They can help you win the favor and confidence of others, and improve your chances of success.

Are good manners overlooked in your company? Do you believe that proper etiquette is still important in the modern business community?
The U.S. space agency is relying on findings from an ongoing, unmanned mission to Mars as it shapes plans for a human mission to that planet in the mid-2030s. As NASA's Mars Science Laboratory sped toward the Red Planet in 2011 and 2012, an instrument called the Radiation Assessment Detector measured radiation levels that a space crew could experience during the long journey. New findings published in the journal Science indicate a trip to Mars could subject space travelers to more radiation than NASA allows. Longstanding research shows that exposure to radiation increases a person's risk of developing cancer. Scientists say one way to reduce radiation exposure is to develop new propulsion technologies that would shorten the time it takes to get to Mars, thereby reducing the time a person is subjected to radiation in deep space. NASA's deputy director of advanced exploration systems, Chris Moore, says engineers are working on advanced systems that could cut a one-way trip to Mars down from about 250 days to 180 days. "To get really fast trip times and cut down on the radiation exposure, we would probably need nuclear thermal propulsion, and we're working with the U.S. Department of Energy now to look at various types of fuel elements for these rockets," said Moore. "But it's a long-range technology development activity, and it will probably be many years before that is ready." Moore emphasized that nuclear thermal propulsion is part of the current plan to get humans to Mars. During the journey through deep space, astronauts would be exposed to two types of radiation that pose potential risks. Galactic cosmic rays are caused by supernova explosions and other high-energy events outside the solar system. Solar energetic particles are linked to solar flares and coronal mass ejections from the sun. Eddie Semones, the space flight radiation health officer at NASA's Johnson Space Center in Houston, says this requires a two-pronged mitigation approach. 
"We need to get there [to Mars] faster to reduce the impact of the galactic cosmic rays, but we need to have shielding, local shielding, on board to eliminate the effects of solar particle events, so it's hand-in-hand," he said. Semones says deployable shields are effective at reducing the effects of solar particle events because those events are low energy. But he says in order to reduce the effect of high-energy galactic cosmic rays, shields would have to be meters thick -- too thick for a spacecraft to launch successfully from Earth. Looking to the Future The Curiosity rover's Radiation Assessment Detector continued to take measurements after it landed on the Martian surface in August. NASA says the medical community is using the radiation data to develop exposure limits for deep space explorers.
History. Shakespeare. Theater. Politics. Family. Religion. A play could be about any of these things. In the case of Bill Cain’s "Equivocation" – it’s about all of them. A complicated and intellectual piece of work that challenges its audience, "Equivocation" is worth seeing for the outstanding performances as well as for its intriguing plot and subplots. However, it helps to know a bit before you go. Brushing up on your Shakespeare could be an advantage for theater-goers who want to understand some things they might otherwise miss.

"Equivocation" tells the story of how Shakespeare is commissioned by King James to write the “true history” of England’s infamous Gunpowder Plot. With numerous contemporary parallels, it is a complex play that is both classic and relevant. The setting is London in 1605. William Shakespeare, here referred to as "Shagspeare," is ordered to write a play about current events. The problem is that the king’s version is more fiction than fact – and the actors are afraid they will lose not only their jobs, but their heads!

The story is told in modern dress, with the actors playing dual roles, and there is quite an assortment of them: the Shakespearean troupe at the Globe Theater; the characters they portray in the plays; the king and servants; and Shagspeare and his daughter, who we learn was ignored by her father because her beloved twin brother died instead of her. In the course of the play, the characters take us through the process of the famed bard writing the King’s project, amidst the politically charged climate of the times. We also witness the Royal family dynamic, Shagspeare’s family dynamic, and the trial of a Priest for his role in the Gunpowder Plot to blow up Parliament. No, it’s not a mini-series, but it could be! It’s a bold and challenging treatment of all these subjects, and it is likewise challenging for the audience to absorb it all.
Amidst the intellectualism and historical facts are some dead-on funny acknowledgements about actors, writers, and families as they exist today. Witticisms punctuate the writing and provide knowing laughs for the audience. Overall, it is an intriguing premise; however, the story isn’t always quite clear enough. The first act is sometimes difficult to assimilate, but the second act picks up the pace and is more engaging, with the characters and relationships more interesting. The end of the piece has especially provocative staging, where the daughter wraps her dead father in a shroud.

"Equivocation" was ably directed by David Esbjornson, former artistic director of Seattle Repertory. The exceptional cast includes Joe Spano (best known as FBI Agent Tobias Fornell on NCIS, and as Lt. Henry Goldblume on the series Hill Street Blues), Harry Groener, Patrick J. Adams, Troian Bellisario, Brian Henderson, and Connor Trinneer (best remembered as Trip on Star Trek: Enterprise, or Michael on Stargate: Atlantis).

"Equivocation" is an apt title for the piece, with the audience going through some of the same feelings expressed in the theme. Is it worth seeing? Unequivocally! For fine performances in a literate, intelligent and thoughtful piece! Program notes, including the playwright’s story of how he came to write the play, are quite interesting and useful for appreciating the depth of the subject matter and making it more accessible. Run time of the show is approximately two hours and 45 minutes, including one 15-minute intermission. The Geffen Playhouse is a wonderful venue, presenting a variety of outstanding classic and contemporary plays, provocative new works and musicals each year.

Written by Bill Cain
Directed by David Esbjornson
November 10 – December 20, 2009
10886 Le Conte Ave.
Los Angeles, California 90024

Tickets ($35 to $75) are on sale at the Geffen Playhouse box office (310) 208-5454, online at www.geffenplayhouse.com and at all Ticketmaster outlets.
Student rush tickets are available one hour prior to curtain for $20.
color:make-hsv hue saturation value &optional alpha => color-spec

Returns a color specification in the hue-saturation-value model.

Arguments:
hue: a hue component.
saturation: a saturation component.
value: a value component.
alpha: a number between 0 and 1, or nil.

Values:
A color specification.

Description:
Returns a color-spec in the :HSV model with the given components. Note that short-floats are used for each component; this results in the most efficient color conversion process. However, any floating-point number type can be used. alpha indicates the alpha value of the color: 0 means it is transparent, 1 means it is solid. If alpha is nil or not specified, then the color does not have an alpha component and it is assumed to be solid.

Example:
COLOR 27 > (color:make-hsv 1.2s0 0.5s0 0.9s0)
#(:HSV 1.2S0 0.5S0 0.9S0)

CAPI Reference Manual - 15 Dec 2011
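The example above constructs the color spec but leaves the actual HSV-to-RGB conversion to CAPI. For readers outside LispWorks, the conversion itself can be sketched with Python's standard colorsys module; note that colorsys expects all three components in [0, 1], which evidently differs from CAPI's hue scale (the example above passes a hue of 1.2):

```python
import colorsys

# HSV components, all in [0, 1] for colorsys (the saturation and value
# here mirror the 0.5 and 0.9 used in the CAPI example above).
h, s, v = 0.2, 0.5, 0.9
r, g, b = colorsys.hsv_to_rgb(h, s, v)
print(round(r, 3), round(g, 3), round(b, 3))

# Sanity checks: zero saturation gives a gray; full saturation at h=0 is red.
assert colorsys.hsv_to_rgb(0.0, 0.0, 1.0) == (1.0, 1.0, 1.0)
assert colorsys.hsv_to_rgb(0.0, 1.0, 1.0) == (1.0, 0.0, 0.0)
```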
Jul 9, 1997

How can PWAs combat the prejudice and ill will they face?

Response from Rev. Pieters:

Come out, confront, and educate those who show "prejudice and ill will" towards persons living with HIV/AIDS.

Come out --- When you hear people exhibiting their prejudice, let them know your relationship to HIV. Put a human face on the disease. Even if you're not infected with HIV yourself, you may love someone who is, you may have lost a loved one to AIDS, or you may be involved in providing HIV/AIDS services. Share your story of how HIV has impacted your life, and that will help people see that it's not "them out there somewhere," but us, here and now, right in front of them.

Confront --- Coming out is the beginning of fighting prejudice, but even after you've come out, you may still have to face prejudicial remarks or behaviors. Confront people when these things happen, and let them know how their prejudice affects you and your loved ones.

Educate --- Usually prejudice is based on fear, and fear is often based in lack of knowledge. After you've come out and confronted the prejudice, teach the reality. Share the facts about HIV transmission; talk about what it's like to live with HIV every day; explode the myths and enlighten your listeners with real stories of real people and how prejudice has damaged their lives.

Thank you for caring!
epicycle
Pronunciation: (ep'u-sī"kul), [key]

1. Astron. a small circle the center of which moves around in the circumference of a larger circle: used in Ptolemaic astronomy to account for observed periodic irregularities in planetary motions.
2. Math. a circle that rolls, externally or internally, without slipping, on another circle, generating an epicycloid or hypocycloid.

Random House Unabridged Dictionary, Copyright © 1997, by Random House, Inc., on Infoplease.
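The second sense has a convenient closed form: a circle of radius r rolling without slipping around the outside of a fixed circle of radius R traces the epicycloid x(θ) = (R + r)·cos θ − r·cos(((R + r)/r)·θ), y(θ) = (R + r)·sin θ − r·sin(((R + r)/r)·θ). A minimal Python sketch (the function name is illustrative):

```python
import math

def epicycloid(theta, R, r):
    """Point traced by a marked point on a circle of radius r rolling
    without slipping around the outside of a fixed circle of radius R."""
    k = (R + r) / r
    x = (R + r) * math.cos(theta) - r * math.cos(k * theta)
    y = (R + r) * math.sin(theta) - r * math.sin(k * theta)
    return x, y

# At theta = 0 the tracing point touches the fixed circle at (R, 0).
print(epicycloid(0.0, R=2, r=1))  # -> (2.0, 0.0)
```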
NOTE: This page was developed using G*Power version 3.0.10. You can download the current version of G*Power from http://www.psycho.uni-duesseldorf.de/abteilungen/aap/gpower3/ . You can also find help files, the manual and the user guide on this website.

Power analysis is the name given to the process of determining the sample size for a research study. The technical definition of power is that it is the probability of detecting an effect when it exists. Many students think that there is a simple formula for determining sample size for every research situation. However, the reality is that there are many research situations that are so complex that they almost defy rational power analysis. In most cases, power analysis involves a number of simplifying assumptions, in order to make the problem tractable, and running the analyses numerous times with different variations to cover all of the contingencies.

In this page we will try to illustrate how to do a power analysis for a test of two independent proportions, i.e., the response variable has two levels and the predictor variable also has two levels. Instead of analyzing these data using a test of independent proportions, we could compute a chi-square statistic in a 2x2 contingency table or run a simple logistic regression analysis. These three analyses yield the same results and would require the same sample sizes to test effects.

It is known that a certain type of skin lesion will develop into cancer in 30% of patients if left untreated. There is a drug on the market that will reduce the probability of cancer developing to 20%. A pharmaceutical company is developing a new drug to treat skin lesions, but it will only be worthwhile to do so if the new drug reduces the probability of developing cancer to 15% or better. The pharmaceutical company plans to do a study with patients randomly assigned to two groups, the control (untreated) group and the treatment group.
The company wants to know how many subjects will be needed to test a difference in proportions of .15 with a power of .8 at alpha equal to .05. G*Power is easily capable of determining the sample size needed for tests of two independent proportions as well as for tests of means. To begin, the program should be set to the z family of tests, to a test of proportions, and to perform the 'A Priori' power analysis necessary to identify sample size. From there, simply input the necessary parameters. We are given the power, significance level, and the values of the two proportions, and we can assume that we want equally sized sample groups (an allocation ratio of 1). Pressing 'Calculate' produces the desired results along with the critical z (the number of standard deviations from the null mean where an observation becomes statistically significant) and the test's actual power. In addition, a graphical representation of the test is shown, with the second proportion's distribution a dotted blue line, the first proportion's distribution represented by a solid red line, a red shaded area delineating the probability of a type 1 error, a blue area the type 2 error, and a pair of green lines demarcating the critical points z. Each group will require 121 people. This is all well and good, but a two-sided test doesn't make much sense in this situation. We want to test for a drug that reduces the probability of cancer not for one that increases the probability. In this case we might want to use a one-tail test, adjustable easily enough by changing the input in 'Tail(s)'. G*Power indicates that we need to use 95 subjects in each group to find a change in probability of .15 for a power of .8 when alpha equals .05. Just as a check, let's run the analysis specifying each of the two sample sizes. This is accomplished by changing the type of power analysis from the 'A Priori' investigation of sample size to the 'Post Hoc' power calculation. 
The solved-for sample sizes should be automatically tabulated. Now, because we believe that we know a lot about the incidence of cancer in the untreated group, we would like to make the control group half as large as the treatment group. We can easily do this by adjusting the allocation ratio input. As we desire the control group (group 1) to be half as large as the treatment group (group 2), N2/N1 should equal 2. With this unbalanced design we have an estimated power of 0.800822, which the company deems acceptable. For more information on power analysis, please visit our Introduction to Power Analysis seminar.
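As an independent cross-check on the G*Power results above, the per-group sizes (121 for the two-sided test, 95 for the one-sided test) can be reproduced with the standard normal-approximation formula for comparing two independent proportions: n per group = (z_alpha * sqrt(2 * pbar * (1 - pbar)) + z_power * sqrt(p1*(1-p1) + p2*(1-p2)))^2 / (p1 - p2)^2, where pbar is the pooled proportion. The sketch below uses only the Python standard library; the function name is illustrative:

```python
import math
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80, tails=2):
    """Per-group sample size for a z-test of two independent proportions
    (normal approximation with pooled variance, balanced groups)."""
    z = NormalDist().inv_cdf
    z_a = z(1 - alpha / tails)   # critical value of the test
    z_b = z(power)               # quantile giving the desired power
    p_bar = (p1 + p2) / 2        # pooled proportion under the null
    num = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

print(n_per_group(0.30, 0.15))           # two-sided test -> 121
print(n_per_group(0.30, 0.15, tails=1))  # one-sided test -> 95
```

Both values match the G*Power output reported above.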
GRY

Trick brain teasers appear difficult at first, but they have a trick that makes them really easy.

Think of words ending in GRY. Angry and hungry are two of them. There are three words in the English language. What is the third word?

Hint: If you read what I wrote carefully, I have already told you the answer.

Answer: The answer is 'language'. The first two sentences are just to throw you off. Take off the first two sentences and you get: There are three words in THE ENGLISH LANGUAGE. Think of the THE in that sentence as word #1 and ENGLISH in that sentence as word #2; then LANGUAGE is the third word.
May 6, 2013: Solar Reflective Device Can Cool Buildings in Full Sunlight “People usually see space as a source of heat from the sun, but away from the sun outer space is really a cold, cold place,” said Shanhui Fan, a professor of electrical engineering and the paper’s senior author. “We’ve developed a new type of structure that reflects the vast majority of sunlight, while at the same time it sends heat into that coldness, which cools man-made structures even in the daytime.” From an engineering standpoint, there are two challenges. First, the reflector has to reflect as much of the sunlight as possible. Poor reflectors absorb too much sunlight, heating up in the process and defeating the goal of cooling. The second challenge is that the device must efficiently radiate heat (from a building, for example) back into space. Thus, the structure must emit thermal radiation very efficiently within a specific wavelength range in which the atmosphere is nearly transparent. Outside this range, the thermal radiation interacts with Earth’s atmosphere. Most people are familiar with this phenomenon. It’s known as the greenhouse effect, which is considered the cause of global climate change. The new device accomplishes both goals. It is an effective broadband mirror for solar light — it reflects most of the sunlight. It also emits thermal radiation very efficiently within the precise wavelength range needed to escape Earth’s atmosphere. The Stanford research team has succeeded by turning to nanostructured photonic materials. These materials can be engineered to enhance or suppress light reflection in certain wavelengths. Using engineered nanophotonic materials, the team was able to strongly suppress how much heat-inducing sunlight the panel absorbs, while it radiates heat very efficiently in the key frequency range necessary to escape Earth's atmosphere. The material is made of quartz and silicon carbide, both very weak absorbers of sunlight. 
The new device is capable of achieving a net cooling power in excess of 100 watts per square meter. By comparison, today’s standard 10-percent-efficient solar panels generate about the same amount of power. That means Fan’s radiative cooling panels could theoretically be substituted on rooftops where existing solar panels feed electricity to air conditioning systems needed to cool the building. To put it a different way, a typical one-story, single-family house with just 10 percent of its roof covered by radiative cooling panels could offset 35 percent of its entire air conditioning needs during the hottest hours of the summer. The researchers also note that radiative cooling has another substantial advantage. It is a passive technology. It requires no energy. It has no moving parts. It is easy to maintain. It can be installed on the roof or the sides of buildings and it starts working immediately. Publication date: 5/6/2013
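The 35 percent figure can be sanity-checked with simple arithmetic. The sketch below assumes a 150-square-meter roof and a peak air-conditioning cooling demand of about 4.3 kW (both numbers are illustrative assumptions, not from the article), combined with the 100 watts per square meter of net cooling power reported above:

```python
def ac_offset_fraction(roof_m2, coverage, panel_w_per_m2, ac_demand_w):
    """Fraction of peak cooling demand met by radiative-cooling panels."""
    cooling_w = roof_m2 * coverage * panel_w_per_m2
    return cooling_w / ac_demand_w

# 10% of a 150 m2 roof at 100 W/m2 gives 1.5 kW of passive cooling.
frac = ac_offset_fraction(150, 0.10, 100, 4300)
print(f"{frac:.0%}")  # -> 35%
```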
Can you lose weight just by exercise? Well, the answer appears to be, it depends. In an article in the Sunday New York Times Magazine, "The Appetite Workout," Gretchen Reynolds reports on a study conducted by researchers at the University of Wyoming. The study was based upon a group of women who were asked to run or walk on one day and then rest on the next day. The researchers found that those who ran did not consume as many calories as those who walked. According to the article, the women who ran at a brisk pace consumed fewer calories because of certain hormones in the body that told them when they had eaten enough food. On the other hand, those who walked were hungrier because of an increase in the body of a hormone called ghrelin. Ghrelin was also increased in those who ran, but apparently that increase was offset by the other hormones. So what does this mean for you and me? If you are seeking to lose weight, more strenuous and longer exercise will probably be your best bet. This may seem rather obvious, but many people think that walking alone may bring about weight loss, and this study suggests that this may not be so. Now walking undoubtedly has many other health benefits, but weight loss may not be one of them. For the complete article, see www.nytimes.com.
, the Swiss psychologist best known for his theory of cognitive development , also proposed a theory of moral development in the early 1930s. It was influenced by his cognitive theory and had the same basic format, being based on stages that children are supposed to pass through at certain approximate ages. The first stage is known as premoral judgement and lasts from birth until about five years of age. In this stage, children simply do not understand the concept of rules and have no idea of morality, internal or external. This stage roughly coincides with the sensorimotor and pre-operational stages of Piaget's cognitive theory and is related to them in the sense that since the child has a poor conception of other people's consciousnesses (if at all), and is incapable of carrying out complex mental operations, it is impossible for them to have a sense of morality. The second stage is called moral realism and lasts from the approximate ages of five to nine. Children in this stage now understand the concept of rules, but they are seen as external and immutable. Children obey rules largely because they are there. Since a rule tells you what you're not supposed to do, moral realist children evaluate wrongdoing in terms of its consequences, not the intentions of the wrongdoer. In terms of Piaget's cognitive theory, this stage corresponds to the pre-operational and concrete operational stages. The third and final stage is called moral relativity. This stage begins at about seven years of age, so it overlaps at first with moral realism. Children who have reached this stage recognise that rules are not fixed, but can be changed by mutual consent, and they start to develop their own internal morality which is no longer the same as external rules. A major development is that actions are now evaluated more in terms of their intentions, which most people would see as a more sophisticated view of morality. 
Piaget also thought it was during this stage that children develop a firm concept of the necessity that punishment specifically fits the crime. This stage corresponds to the concrete and formal operational stages in Piaget's cognitive theory, during which children become able to carry out complex mental operations, first on concrete examples, and then additionally on abstract concepts. Piaget based this moral theory on two lines of research. The first of these was to observe children of different ages playing marbles, and ask them questions about the rules of the game. Children younger than five essentially had no rules at all. Between five and ten, there were rules, but the children saw them as fixed. Finally by the age of ten, the children were able to think of their own rules and recognise that these could be adopted by mutual consent. Piaget's other technique was to present to children moral dilemmas, each consisting of a pair of stories. In one, a child deliberately caused a small amount of damage. In the other, the damage was accidental but much greater. Piaget asked children which of the characters deserved to be punished the most, and tried to find out not just their answers but the reasoning they used to arrive at them. As came out in his theory, younger children focused on consequences, while older children also took intent into account. Both of these methods have been criticised. Unsurprisingly, many people have claimed that games of marbles do not represent a child's entire perception of morality. However Piaget's use of dilemmas has also been criticised. It has been claimed that younger children only focused on consequences because, given that the story was narrated, this was much easier to see than the characters' intentions. This view is supported by Chandler et al.1 (1973), who found that if the stories were presented on video, younger children were much better able to consider intentions. 
On the other hand, Armsby2 (1971) had carried out investigations with moral dilemmas and found that, although younger children had some conception of intent, they still preferred to judge in terms of consequences because they found this easier. Piaget's theory has also been criticised on the grounds that it is based on moral "universals" which may in fact be culture-specific. It has been claimed that the moral development of children in non-Western cultures may differ from that of the children Piaget investigated. Thirdly, the theory can be criticised from the increasingly successful viewpoint of evolutionary psychology. Piaget implied that all morality comes from socialization, but evolutionary psychologists maintain that a basic sense of morality is a cognitive adaptation produced by natural selection, and thus ultimately innate. On the other hand, evolutionary psychology largely supports Piaget's assumption of moral universals. Piaget's theory of moral development is not as well-known outside psychology as his theory of cognitive development, but it was a great influence on Kohlberg's theory, which has become one of the most important.

1 - Chandler, M.J., Greenspan, S. and Barenboim, C. (1973) "Judgements of intentionality in response to videotaped and verbally presented moral dilemmas: the medium is the message", Child Development, 44, pp.315-320.
2 - Armsby, R.E. (1971) "A re-examination of the development of moral judgement in children", Child Development, 42, pp.1241-8.
Chapter 1 - The Riddle House | Chapter 2 - The Scar

• Years ago, Tom Riddle, his wife, and son were found dead at the dining room table with their mouths and eyes open in expressions of sheer terror.
• Years later, Frank Bryce, the gardener, seeing a strange light, goes up to the empty house to see what is happening.
• Standing outside the door of the drawing room, he hears a conversation between two strangers, Wormtail and Lord Voldemort.
• Frank does not understand the conversation, but feels the evil. He is just about to run to the police when a huge snake appears.
• In cold blood, Voldemort waves his wand, and with a green light and a rushing sound, Frank drops dead.
• Two hundred miles away, Harry Potter wakes with a terrible pain in his scar.
• Harry, who is now fourteen years old, reflects on his life story. The dark wizard Voldemort...
If you have an oak tree or an apple tree, this column is for you. If you have neither, read on nevertheless for seasonal ideas that generalize to other gardening processes. The first concept of value is that timing is an essential strategy for successful gardening. Our plants operate on a natural cycle that waits for no gardener.

Right now (the end of June) is the year’s last opportunity to maximize the yield of your apple trees. This time-sensitive task involves thinning your apples to allow each of the remaining apples to develop its greatest size and sweetness. You can begin thinning apples a short time after the blossom drop, when small apples appear, up to when the apples are no larger than table tennis balls. Once they grow beyond that size, thinning is not as effective. Apple trees will thin themselves: the “June drop” is Nature’s way to produce larger fruits and avoid broken branches. Commercial growers use chemicals for thinning, but hand thinning is practical for small home orchards. The largest young apples are the “king apples,” from the earliest blossoms. Use pruners or small clippers to remove the smaller fruits so that the remaining fruits are about six inches apart. Let them drop, then rake them for disposal. This might seem brutal, but the harvest will be gratifying.

During the July–November period, California Oakworms (Phryganidia californica) attack our Coast Live Oaks (Quercus agrifolia). The infestations vary in severity, but in a bad year the caterpillar-like larva of the Oakworm can defoliate a tree severely and provide a nasty display for the homeowner as well. Some experts say Oakworm attacks are natural occurrences that rarely cause permanent damage to otherwise healthy oak trees. During last Saturday’s Garden Faire, however, I spoke with James Neve of Tree Solutions, who says homeowners need not suffer the presence of these insects and their droppings (frass), and their trees need not suffer defoliation.
He recommends watching for the presence of oakworms in mid-July by placing a white paper plate under the tree’s branches and checking for frass. If the pests show up, consider whether spraying or injecting a biological control would be indicated. Tree Solutions sprays with “Bt.” (Bacillus thuringiensis) or Pyrethrum (derived from Chrysanthemum flowers) or injects with Abamectin (derived from a soil bacterium). The University of California’s Integrated Pest Management program recommends these sprays and also a commercial product, Spinosad. A regular reader of this column reports good results with Spinosad. Most garden centers have these controls and spray equipment for do-it-yourselfers. Other timely tasks: deadhead your roses, propagate your favorite woody plants from softwood cuttings, and above all, hydrate your plants during these hot and dry days. For good, reliable information about the California Oakworm, visit the University of California’s Agriculture and Natural Resources website by clicking here. For the commercial website of Tree Solutions (serving the Monterey Bay area), click here. (This is a free plug for a good business.) Enjoy your garden.
Google Doodle: Dr. Martin Luther King Jr. Day. Date: January 17, 2011. On this day we celebrate Martin Luther King, Jr. (January 15, 1929 – April 4, 1968). Clergyman, activist, and leader in the Civil Rights movement, King worked to end racial segregation and racial discrimination.
Ducks begin to leave their summer breeding grounds as diminishing daylight triggers their journey to the southern states. While the majority of ducks arrive well into the fall and early winter, preparing a full crop of ideal grains is a necessity for many waterfowl enthusiasts, and with good reason.

Historically, corn is the crop of choice for drilling in the majority of duck impoundments in the southeast, and August is just too late to start a corn crop for the 2011-12 season. Ducks do love those sweet, golden kernels, but many other grain and seed crops with shorter maturation periods are well received by migrating waterfowl. Ducks prefer small-grained foods that are full of energy and easily accessible within flooded waters. Luckily, several of these foods can be planted as late as August in the Carolinas, just in time for the arrival of the migrating waterfowl.

Japanese millet, brown-top millet, white proso millet, buckwheat, sorghum and even wheat have small, nutrient-rich seeds highly favored by migrating waterfowl. These cereal grains have a high raw-protein content, between 10 and 20 percent, and offer prime nutrition for migrating ducks. While wheat will not mature enough for seed-head development, the tender sprouts will offer ducks a tasty treat loaded with antioxidants and a prime energy source. Buckwheat itself is extremely high in fat, providing ducks with a rich energy source ideal for winter foraging conditions as well.

These cereal grains mature 60 to 90 days after planting. Planted in early August, they will mature sometime in October, just in time to introduce the new crops to 12 to 24 inches of water. Japanese millet is tolerant of shallow flooding; therefore, it can be partially flooded prior to complete maturation. While a few early flocks of green-wing and blue-wing teal arrive in September, the first substantial wave of migrating ducks will not arrive until November, depending on adverse weather.
However, maturation dates on these crops are not the concern when planting at the end of the season; killing frosts and cold conditions will halt growth and prevent full seed development and maturation. Early August should be the last interval for planting the majority of these crops. In areas of the Carolinas where significant frosts occur as early as October, brown-top and white proso millet should be planted; they mature in 60 to 70 days.

Provide legal runways for best hunting results

Duck hunters across the country use a variety of methods to entice ducks into shotgun range. Depending on the layout of the hunt, the use of decoys has little effect on the success of an excursion when pass-shooting methods are utilized. However, impoundments and other open-water hunts require decoys and a systematic placement scheme to draw ducks into range. Planted impoundments should utilize similar techniques to lure ducks into a specified landing zone.

Decoy spreads vary among hunters and hunting locations, but all spreads incorporate holes, "Vs" or some sort of ideal, empty landing area. While ducks want to be around many other ducks, few want to land on top of another bird. They look for open areas within a flock or open water adjacent to food sources. Duck impoundments are no different. Some flooded crop fields offer ducks few landing zones. While these birds will usually find a place to land, providing landing zones close to blinds will increase the probability of having these birds land in an ideal location. These zones also provide ideal areas to place a few decoys that will be easily noticed from the air.

Landing zones should be between five and 15 feet wide, with a preferred width of 10 feet. "Landing strips" should not be created through mowing or discing after crops mature. Intentionally knocking down or mowing crops to scatter matured seeds in waterfowl impoundments is considered baiting and is illegal.
However, crops can be harvested using normal harvesting machinery and practices to create these strips after crops mature. This is considered a legal method in impoundments where crops are specifically planted for waterfowl and hunted over during the duck season. Landing zones are beneficial in impoundments, but they can still provide available foods for ducks as well. While these areas are void of planted seed, they should still be prepared with discing and fertilization to encourage natural grasses to develop. Wheat can also be planted within these landing zones; it provides ducks with nutritional sprouts and deteriorates quickly, allowing the open-water areas to develop.

Fertilization will fortify natural impoundments

Waterfowl will eat a wide variety of natural vegetation, including seeds, tubers and tender sprouts. Most impoundments are constructed in low-lying areas with the ability to retain water. Many natural foods preferred by waterfowl will grow in these naturally moist soil areas without the first seed being planted. Planting commercial cereal grains is not always required to draw ducks into a flooded field in the fall. In fact, many naturally occurring foods will produce a higher volume of palatable, nutritional material for ducks to devour as they arrive.

While vegetation will vary across different parts of the Carolinas, beggar's tick, crabgrass, sedges, smartweed, curly dock and other grasses and forbs often make up a high percentage of the pioneer species in low-lying areas. These natural species contain respectable levels of fat, crude protein and vitamins, providing ducks with the right recipe in high volumes. Fertilize these natural impoundments with a balanced combination of nitrogen, potassium and phosphorus in August, while the days are still long, to boost the production of these natural waterfowl foods. Flooding can occur just before the arrival of the first few flocks of migrating waterfowl into the area.
While fields of corn, millet, and sorghum offer ducks a favorable food source, natural vegetation in moist-soil areas can also provide more-than-adequate groceries for migrating waterfowl each winter.
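The planting-window arithmetic above (60-to-90-day maturation against an October frost) can be sketched as a small date calculation. This is an illustrative helper, not from the article: the frost date and function name are assumptions, while the maturity figures come from the article.

```python
from datetime import date, timedelta

def latest_planting(first_frost: date, days_to_maturity: int) -> date:
    """Latest planting date that still matures before the first frost."""
    return first_frost - timedelta(days=days_to_maturity)

# Hypothetical early-frost date for an upland Carolinas site.
first_frost = date(2011, 10, 15)

# Days to maturity, per the article's ranges (worst case used).
crops = {
    "brown-top millet": 70,   # 60-70 days
    "Japanese millet": 90,    # 60-90 days
}

for crop, days in crops.items():
    print(crop, "-> plant by", latest_planting(first_frost, days))
```

With these assumed dates, the 90-day crop must go in by mid-July, which matches the article's point that early August is the cutoff only for the faster-maturing millets.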
Paradise Fire 7.12Mb

This animation was created using MODIS data from October 25 to October 31. The fixed red dot in Day 0 indicates the origin of the fire. The red/yellow dots indicate the hot spots on each day (with temperatures ranging from 305 to 504 Kelvin). The brown dots indicate previously burned areas. This animation is composed of six separate frames. (created by Ming Tsou)

Note: The map is based on MODIS data, which captures a snapshot of the Earth's surface only twice per day. Some burned areas may not be detected by the satellite images.

3D Fire Spread Animations
Created by Harry D. Johnson, Department of Geography, SDSU
Requires Quicktime 6.5 or later version

Fire History Animation
Created by David Palomino, Department of Geography, SDSU
With day-to-day use, your desktop computer suffers through normal wear and tear that may eventually hinder it from functioning properly. Not to worry – most causes of performance problems can be avoided. Take note of the following 5 things that slow your PC down.

An overfilled hard drive

Generally, the fuller the hard drive is, the slower it gets. When there's too much on your hard drive, it leaves less free space for it to perform the required tasks.

What to do: The solution is simple: free up the hard drive. It should have at least 300MB free to function effectively. Discard all unnecessary files, especially large items like graphics and videos. For things you aren't willing to get rid of, move them to an external storage device such as an external hard drive. Also check for accumulated junk (like temp files) and delete it – be sure to empty the recycle bin afterward. Liberate more space by uninstalling all the programs you don't use.

Viruses, spyware and other malware

Malware, or malicious software, are programs that mess with your computer and prevent it from functioning the way it should. Any system can be infected through normal use, particularly from untrustworthy downloads or shared storage devices.

What to do: It's a must for every user to have a good anti-virus and spyware/adware removal program. Most of the trusted brands can be bought online. Run a scan regularly, at least once a week. And since new malware appears every so often, make sure to update continually. As a precaution, avoid installing disreputable applications, and resist the urge to download content from untrusted sources, no matter how cool they seem.

Overheating

Overheating corrupts the components of your PC and leads to permanent hardware damage.

What to do: As a preventive measure, place your PC in a cool spot, far from hot devices. Maintaining cleanliness also helps a lot – since dust insulates heat, it's beneficial to keep the CPU clean and dust-free.
If overheating still occurs, seek a technician to inspect the fans and heat sinks. Acquire new cooling hardware if necessary.

A fragmented hard drive

Hard drives manage data by breaking it into smaller pieces. As you add and delete files, gaps accumulate between these fragments.

What to do: Defrag your computer. Defragmenting compresses those scattered pieces together, arranging them to become more accessible. Schedule a time for regular defragging, at least once every two weeks.

Not enough RAM

Random Access Memory, or RAM, serves as temporary storage for running programs, allowing easier data retrieval and quicker access. The more RAM your PC has, the more efficient it can be.

What to do: Add more RAM. Figure out how much you already have, how much you need, and what type works with your PC. If you're not very tech-savvy, consult the manufacturer's online support, or contact a computer technician.

In today's world, we depend so much on our computers for information, work, interaction and entertainment, and it definitely pays to take good care of our devices. As a rule of thumb, contact a professional if you don't know what to do. I've mentioned 5 things that will slow your PC down. Any other things or reasons you could think of? Please tell us in the comments.

Guest post written by: Peter Lee, who blogs at ComputerHowToGuide.com. Do check out his blog if you would like to know more about how to speed up your computer.
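The free-space advice above can be checked programmatically. A minimal sketch: the helper name is made up, the ~300MB threshold comes from the article, and `shutil.disk_usage` is the standard-library call for the real measurement.

```python
import shutil

# The article's suggested minimum of ~300 MB free, in bytes.
MIN_FREE_BYTES = 300 * 1024 * 1024

def space_status(free_bytes: int, min_bytes: int = MIN_FREE_BYTES) -> str:
    """Hypothetical helper: flag a drive whose free space is below a minimum."""
    return "low" if free_bytes < min_bytes else "ok"

# Real measurement for the drive holding the root path.
usage = shutil.disk_usage("/")
free_mb = usage.free // (1024 * 1024)
print(f"free: {free_mb} MB -> {space_status(usage.free)}")
```

If this prints "low", the cleanup steps above (deleting junk files, uninstalling unused programs, moving large media to external storage) are the place to start.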
Welcome to the world of curses. Before we plunge into the library and look into its various features, let's write a simple program and say hello to the world. To use ncurses library functions, you have to include ncurses.h in your programs. To link the program with ncurses, the flag -lncurses should be added.

    #include <ncurses.h>

    int main()
    {
        initscr();                  /* Start curses mode              */
        printw("Hello World !!!"); /* Print Hello World              */
        refresh();                  /* Print it on to the real screen */
        getch();                    /* Wait for user input            */
        endwin();                   /* End curses mode                */
        return 0;
    }

compile and link: gcc <program file> -lncurses

The above program prints "Hello World !!!" to the screen and exits. This program shows how to initialize curses, do screen manipulation and end curses mode. Let's dissect it line by line.

The function initscr() initializes the terminal in curses mode. In some implementations, it clears the screen and presents a blank screen. To do any screen manipulation using the curses package, this has to be called first. This function initializes the curses system and allocates memory for our present window (called stdscr) and some other data structures. In extreme cases this function might fail due to insufficient memory for the curses library's data structures. After this is done, we can do a variety of initializations to customize our curses settings. These details will be explained later.

The next line, printw, prints the string "Hello World !!!" on to the screen. This function is analogous to normal printf in all respects except that it prints the data on a window called stdscr at the current (y,x) co-ordinates. Since our present co-ordinates are at 0,0 the string is printed at the left-hand corner of the window.

This brings us to that mysterious refresh(). Well, when we called printw the data was actually written to an imaginary window, which is not updated on the screen yet. The job of printw is to update a few flags and data structures and write the data to a buffer corresponding to stdscr. In order to show it on the screen, we need to call refresh() and tell the curses system to dump the contents on the screen.
The philosophy behind all this is to allow the programmer to do multiple updates on the imaginary screen or windows and do a refresh once all the screen updates are done. refresh() checks the window and updates only the portion which has been changed. This improves performance and offers greater flexibility too. But it is sometimes frustrating to beginners. A common mistake committed by beginners is to forget to call refresh() after they did some update through the printw() class of functions. I still forget to add it sometimes :-)

And finally, don't forget to end the curses mode. Otherwise your terminal might behave strangely after the program quits. endwin() frees the memory taken by the curses sub-system and its data structures and puts the terminal in normal mode. This function must be called after you are done with the curses mode.
You will not read these facts in the newspapers or the popular press. But scientists are wringing their hands over the lack of evidence or mechanisms for how life forms on our planet could have originated. Evolutionary theory is unworkable. It is a myth. This is science vs. evolution—a Creation-Evolution Encyclopedia, brought to you by Creation Science Facts. CONTENTS: Scientists Speak about the Primitive Environment This material is excerpted from the book, "It is almost invariably assumed that animals with bodies composed of a single cell represent the primitive animals from which all others derived. They are commonly supposed to have preceded all other animal types in their appearance. There is not the slightest basis for this assumption."—*Austin Clark, The New Evolution (1930), pp. 235-236. "The hypothesis that life has developed from inorganic matter is, at present, still an article of faith."—*J.W.N. Sullivan, The Limitations of Science (1933), p. 95. "Creation and evolution, between them, exhaust the possible explanations for the origin of living things. Organisms either appeared on the earth fully developed or they did not. If they did not, they must have developed from pre-existing species by some process of modification. If they did appear in a fully developed state, they must have been created by some omnipotent intelligence."—*D.J. Futuyma, Science on Trial (1983), p. 197. "With the failure of these many efforts, science was left in the somewhat embarrassing position of having to postulate theories of living origins which it could not demonstrate.
After having chided the theologian for his reliance on myth and miracle, science found itself in the unenviable position of having to create a mythology of its own: namely, the assumption that what, after long effort could not be proved to take place today, had, in truth, taken place in the primeval past."—*Loren Eisley, The Immense Journey (1957), p. 199. "Since Darwin's seminal work was called The Origin of Species one might reasonably suppose that his theory had explained this central aspect of evolution or at least made a shot at it, even if it had not resolved the larger issues we have discussed up to now. Curiously enough, this is not the case. As Professor Ernst Mayr of Harvard, the doyen [senior member] of species studies, once remarked, the `book, called The Origin of Species, is not really on that subject' while his colleague, Professor Simpson, admits: `Darwin failed to solve the problem indicated by the title of his work.' "You may be surprised to hear that the origin of species remains just as much a mystery today, despite the efforts of thousands of biologists. The topic has been the main focus of attention and is beset by endless controversies."—*Gordon R. Taylor, Great Evolution Mystery (1983), p. 140. "Mathematics and dynamics fail us when we contemplate the earth, fitted for life but lifeless, and try to imagine the commencement of life upon it. This certainly did not take place by any action of chemistry, or electricity, or crystalline grouping of molecules under the influence of force, or by any possible kind of fortuitous concourse of atmosphere. We must pause, face to face with the mystery and miracle of the creation of living things."—Lord Kelvin, quoted in Battle for Creation, p. 232. "We are left with very little time between the development of suitable conditions for life on the Earth's surface and the origin of life . . Life apparently arose about as soon as the Earth became cool enough to support it."—*S.J. 
Gould, "An Early Start," in Natural History, February 1978. "Biogenesis is a term in biology that is derived from two Greek words meaning life and birth. According to the theory of biogenesis, living things descend only from living things. They cannot develop spontaneously from nonliving materials. Until comparatively recent times, scientists believed that certain tiny forms of life, such as bacteria, arose spontaneously from nonliving substances."—*"Biogenesis," in World Book Encyclopedia, p. B-242 (1972 edition). "Pasteur's demonstration apparently laid the theory of spontaneous generation to rest permanently. "All this left a germ of embarrassment for scientists. How had life originated after all, if not through divine creation or through spontaneous generation? . . "They [scientists] are [today] back to spontaneous generation."—*Isaac Asimov, Asimov's New Guide to Science (1984), pp. 638-639. "His aphorism `omnis cellula e cellula' [every cell arises from a pre-existing cell] ranks with Pasteur's `omne vivum e vivo' [every living thing arises from a pre-existing living thing] as among the most revolutionary generalizations of biology."—*Encyclopedia Britannica, 1973 Edition, Volume 23, p. 35. " `Every cell from a cell.' "—Rudolf Vircho, German pathologist. `Every living thing from a living thing.' `Spontaneous generation is a chimera [illusion].'—Louis Pasteur, French chemist and microbiologist." Quotations in Isaac Asimov's Book of Science and Nature Quotations (1988), p. 193. Chemical compounds would not have been rich enough. "It is commonly assumed today that life arose in the oceans . . But even if this soup contained a goodly concentration of amino acids, the chances of their forming spontaneously into long chains would seem remote. Other things being equal, a diluted hot soup would seem a most unlikely place for the first polypeptides to appear. 
The chances of forming tripeptides would be about one-hundredth that of forming dipeptides, and the probability of forming a polypeptide of only ten amino acid units would be something like 1/10^20. The spontaneous formation of a polypeptide of the size of the smallest known proteins seems beyond all [mathematical] probability."—H.F. Blum, Time's Arrow and Evolution (1968), p. 158. "If there ever were a primitive soup, then we would expect to find at least somewhere on this planet either massive sediments containing enormous amounts of the various nitrogenous organic compounds, amino acids, purines, pyrimidines, and the like, or alternatively in much metamorphosed sediments we should find vast amounts of nitrogenous cokes . . In fact, no such material has been found anywhere on earth . . There is, in other words, pretty good negative evidence that there never was a primitive organic soup on this planet that could have lasted but a brief moment."—*J. Brooks and *G. Shaw, Origins and Development of Living Systems (1973), p. 360. Enzyme inhibitors would surely have been present and would quickly have destroyed that which had been produced. "It is clear that enzymes were not present in the primordial soup. Even if they were formed, they would not have lasted long since the primeval soup was, by definition, a conglomeration of nearly every conceivable chemical substance. There would have been innumerable enzyme inhibitors present to inhibit an enzyme as soon as it appeared. Thus, such molecules could not have formed; however, even with the assumption that they had formed, they could not have remained."—David and Kenneth Rodabaugh, Creation Research Society Quarterly, December 1990, p. 107. Rapid fluid loss would not have occurred. "One well-known problem in the formation of polymerized proteins in water is that water loss is necessary for this process. Living organisms solve this problem with the presence of enzymes and the molecule ATP.
It is clear the enzymes were not present in the primordial soup."—David and Kenneth Rodabaugh, Creation Research Society Quarterly, December 1990, p. 107. "Beneath the surface of the water there would not be enough energy to activate further chemical reactions; water in any case inhibits the growth of more complex molecules."—*Francis Hitching, The Neck of the Giraffe (1982), p. 65. If oxygen were present, the required chemicals would quickly decompose. "First of all, we saw that the present atmosphere, with its ozone screen and highly oxidizing conditions, is not a suitable guide for gas-phase simulation experiments."—*A. Oparin, Life: Its Nature, Origin and Development, p. 118. "The synthesis of compounds of biological interest takes place only under reducing conditions [that is, with no free oxygen in the atmosphere]."—*Stanley Miller and *Leslie Orgel, The Origins of Life on the Earth (1974), p. 33. "With oxygen in the air, the first amino acid would never have gotten started; without oxygen, it would have been wiped out by cosmic rays."—*Francis Hitching, The Neck of the Giraffe (1982), p. 65. Just producing the needed proteins would be an impossible task. "The conclusion from these arguments presents the most serious obstacle, if indeed it is fatal to the theory of spontaneous generation. First, thermodynamic calculations predict vanishingly small concentrations of even the simplest organic compounds. Secondly, the reactions that are invoked to synthesize such compounds are seen to be much more effective in decomposing them."—*D. Hull, "Thermodynamics and Kinetics of Spontaneous Generation," in Nature, 186 (1960), pp. 693-694. "In other words, the theoretical chances of getting through even this first and relatively easy stage [getting amino acids] in the evolution of life are forbidding."—*Francis Hitching, The Neck of the Giraffe (1982), p. 65. 
"In the vast majority of processes in which we are interested, the point of equilibrium lies far over toward the side of dissolution. That is to say, spontaneous dissolution [atomic self-destruction process] is much more probable, and hence proceeds much more rapidly, than spontaneous synthesis [accidental put-together process] . . The situation we must face is that of patient Penelope waiting for Odysseus, yet much worse: Each night she undid the weaving of the preceding day, but here a night could readily undo the work of the year or a century."—*G. Wald, "The Origin of Life," in The Physics and Chemistry of Life (1955), p. 17. Not even the scientists know how to produce the required fatty acids. Yet sand and seawater are said to have figured out the process. "No satisfactory synthesis of fatty acids is at present available. The action of electric discharges on methane and water gives fairly good yields of acetic and propionic acids, but only small yields of the higher fatty acids. Furthermore, the small quantities of the higher fatty acids that are found are highly branched."—*S. Miller and *L. Orgel, The Origins of Life on the Earth (1974), p. 98. A reducing atmosphere (one without oxygen) would be required, yet it would produce peroxides, which are lethal to living creatures. "The hypothesis of an early methane-ammonia atmosphere is found to be without solid foundation and indeed is contradicted."—*P. Abelson, "Some Aspects of Paleobiochemistry," in Annals of the New York Academy of Science, 69 (1957), p. 275. A continuous supply of energy would, from the very first, be required. "To keep a reaction going according to the law of mass action, there must be a continuous supply of energy and of selected matter (molecules) and a continuous process of elimination of the reaction products."—*P. Mora, "The Folly of Probability," in Origins of Prebiological Systems and their Molecular Matrices, Ed, S.W. Fox (1965), p. 43. There are other amazing aspects to life. 
For example, where did the built-in intelligence come from? "Any living thing possesses an enormous amount of `intelligence' . . Today, this `intelligence' is called `information,' but it is still the same thing . . This `intelligence' is the sine qua non of life. If absent, no living being is imaginable. Where does it come from? This is a problem which concerns both biologists and philosophers, and, at present, science seems incapable of solving it."—*Pierre-Paul Grassé, Evolution of Living Organisms (1977), p. 3. There can be only one solution to the mystery of how living creatures originated. "Every time I write a paper on the origin of life, I determine I will never write another one, because there is too much speculation running after too few facts." —*Francis Crick, Life Itself (1981), p. 153. [Crick received a Nobel Prize for discovering the structure of DNA.] "An honest man, armed with all the knowledge available to us now, could only state that, in some sense, the origin of life appears at the moment to be almost a miracle."—*Francis Crick, Life Itself, Its Origin and Nature (1981), p. 88. "All of us who study the origin of life find that the more we look into it, the more we feel it is too complex to have evolved anywhere. We all believe, as an article of faith, that life evolved from dead matter on this planet. It is just that its complexity is so great, it is hard for us to imagine that it did."—*Harold C. Urey, quoted in Christian Science Monitor, January 4, 1962, p. 4. "All the facile speculations and discussions published during the last ten to fifteen years explaining the mode of origin of life have been shown to be far too simple-minded and to bear very little weight. The problem in fact seems as far from solution as it ever was."—*Francis Hitching, The Neck of the Giraffe (1982), p. 68.
"The probability of life origination from accident is comparable to the probability of the unabridged dictionary resulting from an explosion in a printing shop." —*Edwin Conklin, Reader's Digest, January 1963, p. 92. "From the probability standpoint, the ordering of the present environment into a single amino acid molecule would be utterly improbable in all the time and space available for the origin of terrestrial life."—*American Scientist, January, 1955. "Ultimately the Darwinian theory of evolution is no more nor less than the great cosmogenic myth of the twentieth century . . The origin of life and of new beings on earth is still largely as enigmatic as when Darwin set sail on the Beagle."—*Michael Denton, Evolution: A Theory in Crisis (1985), p. 358.
In a war room of sorts in a neatly appointed government building, US officers dressed in crisp uniforms arranged themselves around a U-shaped table and kept their eyes trained on a giant screen. PowerPoint slides ticked through the latest movements of an enemy that recently emerged in Saudi Arabia — a mysterious virus that has killed more than half of the people known to have been infected. Here at the Centers for Disease Control and Prevention, experts from the US Public Health Service and their civilian counterparts have been meeting twice a week since the beginning of June to keep tabs on the Middle East Respiratory Syndrome Coronavirus. Mers-CoV, as the pathogen is known, causes fevers, severe coughs and rapid renal failure as it attacks the lungs of victims. Since it was first isolated in June 2012 in the city of Jeddah, Mers has infected at least 77 people and killed at least 40 of them. The number of confirmed cases has quadrupled since April, and patients have been sickened as far away as Tunisia and Britain. Most troubling to health experts are reports of illnesses in patients who have not been to the Middle East. The virus has not yet emerged in the US, and perhaps it never will. But in July and August, towards the end of the holy month of Ramadan, around 11,000 American Muslims will travel to the Arabian Peninsula. In the meantime, millions more will fly between continents, citizens of today’s globalised world.
The scientific community was buzzing about the possibility that Einstein's Special Theory of Relativity – which holds that nothing travels faster than the speed of light – could be wrong. The fuss started last year, when scientists from the Oscillation Project with Emulsion-Tracking Apparatus (OPERA) fired a beam of neutrinos – elementary particles which don't hold an electrical charge and can pass through ordinary matter with virtually no interaction – from CERN's particle accelerator in Geneva, Switzerland, to a laboratory in Italy, about 730 kilometers away. The OPERA scientists found the sub-atomic particles traveled to the Italian lab at a speed of 300,006 kilometers per second, or 60 nanoseconds faster than the speed of light, which is 299,792.458 kilometers per second.

Now the folks from OPERA have identified two things that could have influenced the neutrino timing measurement. OPERA says these two recent findings still require further tests with a short-pulsed beam. If the technical problems are confirmed, one of the effects would actually show that the neutrinos were traveling faster than originally measured, while the other would show that the sub-atomic particles were moving slower than measured – in other words, not faster than the speed of light.

OPERA says a problem with an oscillator used to provide the time stamps for GPS synchronizations in the experiment could have led to an overestimate of the neutrinos' time of flight. The other concern has to do with the optical fiber connector that brings the external GPS signal to the OPERA master clock. If the master clock wasn't functioning properly when the measurements were taken, it could have led to an underestimate of the time of flight of the neutrinos. The scientists at OPERA are continuing their study of these two issues and have scheduled new measurements with short-pulsed beams for sometime in May.
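As a back-of-envelope check (not part of the article), the speed implied by a 60-nanosecond early arrival over the 730-kilometer baseline can be computed directly from the figures quoted above; variable names are my own.

```python
# What speed does arriving 60 ns early over 730 km imply, relative to c?
C_KM_PER_S = 299792.458   # speed of light, from the article
DISTANCE_KM = 730.0       # approximate CERN-to-Italy baseline, from the article
EARLY_S = 60e-9           # reported 60 nanosecond lead

light_time = DISTANCE_KM / C_KM_PER_S             # ~2.435 milliseconds
implied_speed = DISTANCE_KM / (light_time - EARLY_S)

excess = implied_speed - C_KM_PER_S               # km/s above light speed
fraction = EARLY_S / light_time                   # fractional excess
print(f"implied speed: {implied_speed:.1f} km/s "
      f"(+{excess:.1f} km/s, fractional excess {fraction:.2e})")
```

The lead amounts to only a few parts in 100,000 over a flight time of roughly 2.4 milliseconds, which is why nanosecond-level timing errors in the GPS chain could plausibly account for the whole effect.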
By Tara N. Lawrence, Neil W. Pelkey, and R. S. Bhalla Coastal fishing in small boats with ragged nets, refurbished motors, and overworked crews is a dangerous occupation. These days, catches seldom contain big fish complete with bragging rights. If a catch fetches enough cash for tomorrow's diesel fuel, it's a good day. If not, fish harder, deeper, and longer tomorrow. For decades, it was clear that some form of regional management was necessary on the east coast of India. In 2004, the Banda Aceh tsunami provided both the motivation and the funding to get this under way. Boats, nets, motors, GPS units, and fish finders were distributed in nearly every community. However, the management tools used at the government level at that time were hampered because the data used in the analysis was limited to information collected at the jetties where hundreds of men and women sold fish. But, of course, fishing itself happens at a place, depth, and time with real boats, real people, real gear, and real fish of certain weights and species. The proper information to govern fisheries needs to have "what," "when," "where," "who," "how deep," and "with what" data. This information is also critical for fisheries governance in India since traditional and mechanized craft have different legally defined fishing zones. A nonprofit research organization called the Foundation for Ecological Research, Advocacy and Learning (FERAL) purchased a commercial license in 2003, and researchers at FERAL have been using ArcGIS since then for various mapping applications within research projects. This fisheries dataset required the ability to explore the complex relationships between fishing and gear, catch, and location—a task that ArcGIS is ideally suited for. The researchers linked the dynamic mapping and graphics capabilities in ArcGIS to explore and demystify this data. 
They then transferred the on-screen dynamic displays to publication-quality graphs using the ggplot2 graphic plotting system designed by Hadley Wickham of Rice University, Houston, Texas. The "what" is quite detailed, as there are roughly 243 species of fish recorded that are caught, sold, and consumed. The "where" and "how deep" questions were covered by a straightforward GIS application using existing coastline maps, GPS, and a Humminbird echo sounding device. The "when," "who," and "with what" data was supplied by observation as researchers traveled standard fishing routes in three regions of the Coromandel Coast in the Bay of Bengal. Catch data would no longer be 300 kilos of shad, but rather, for example, 300 kilos of shad caught at 79.38765 E, 12.345 N at a 20-meter depth in sandy soils by five men using 25 millimeter nets who fished from 7:00 a.m. to 8:30 a.m. on January 3, 2008. The researchers "pinged" the fishing coordinates, depth, and substrate where they found men fishing. They also collected data on type of gear, mesh size, and target species. All data was integrated and fed into ArcGIS. ArcGIS software's dynamic data visualization and exploratory analysis helped immediately identify data entry errors, but more importantly, it illuminated the "where" of artisanal fisheries. Researchers were also able to move the data quickly into the R programming language and ggplot2 to create publication-quality statistical graphics. The combination of graphs linked to the map display showed many Marine Fisheries Regulation Act (MFRA 1983) violations in terms of location and gear type. It was also clear that the fish cluster, and hence so do the fishermen. The often-told story that plenty of illegal fishing occurred in these waters turned out to match the data. Banned nets and mesh sizes were used, and large trawl boats regularly fished well within the 3-nautical-mile limit.
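The kind of zone check described here can be sketched in a few lines. The field names, craft categories, and example values below are illustrative assumptions, not FERAL's actual schema; the 3- and 12-nautical-mile limits are the MFRA zones discussed in the text:

```python
# Hypothetical sketch of an MFRA fishing-zone check: flag catch records
# whose recorded position violates the zone rules. Field names and
# example values are our own illustration, not FERAL's data model.
NM_TO_KM = 1.852  # one nautical mile in kilometers

records = [
    {"craft": "mechanized", "dist_to_shore_km": 2.8},   # trawler inshore
    {"craft": "traditional", "dist_to_shore_km": 1.1},
    {"craft": "motorized", "dist_to_shore_km": 24.0},   # far offshore
]

def zone_violation(rec):
    """Return a description of the violation, or None if the record is legal."""
    d = rec["dist_to_shore_km"]
    if rec["craft"] == "mechanized" and d < 3 * NM_TO_KM:
        return "mechanized craft inside the 3 nm inshore zone"
    if rec["craft"] == "motorized" and d > 12 * NM_TO_KM:
        return "motorized craft beyond 12 nm without an additional licence"
    return None

for rec in records:
    print(rec["craft"], "->", zone_violation(rec))
```

With distance-to-shore already computed in the GIS, a rule table like this turns each mapped catch point directly into a compliance flag.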
The distance tool, combined with the extraction tool in the ArcGIS Spatial Analyst extension, provided accurate measurements of distance to shore. Knowing the distance from shore is critical, because mechanized craft are not allowed to fish less than 3 nautical miles from shore, and motorized craft cannot go beyond 12 nautical miles without additional licenses. This creates tension, since the big trawl catches of shrimp are often within 3 nautical miles of the shore and the big long-line catches of the motorized boats are often beyond the 12-nautical-mile limit. A huge plus of the GIS was the ease and rapidity of visualizing and analyzing the data. It took less than an hour to pull data from multiple sources; create and add the different layers; and define categories, such as boat size by depth or crew size by boat. After that, it was a simple matter of cleaning up errors, adding the distance grid, and redefining categories. The dynamic analysis and publication-quality maps were only a few clicks away. Visualizing the data in ArcGIS and R was a quick and efficient way of exploring theories and perceptions about the fisheries sector. It took only nine days of sampling to map more than 250 boats fishing in the same zone in only a part of the Coromandel Coast. The ease with which this assessment could be done in so little time makes it a useful tool for fisheries management. It was cost-effective in terms of the time spent collecting and analyzing the data and therefore could be used frequently to map fishing effort on both coasts of India. Only then will the real impact of the fishing effort expended on a daily basis hit home. Catches can no longer satisfy this massive demand. Overfishing is a real issue that needs to be addressed on several levels in this complex yet dynamic sector, and a spatial context can settle arguments raised on the ground.
Tara N. Lawrence is a marine biologist with the Foundation for Ecological Research, Advocacy and Learning, Pondicherry, India. Her current position as a junior research fellow involves building a qualitative and quantitative profile of the traditional and motorized fishing sectors along the Coromandel Coast of India. Dr. Neil Pelkey is an associate professor at Juniata College, Huntingdon, Pennsylvania, whose areas of expertise include ecological modeling and environmental economics. R. S. Bhalla is a senior research fellow and trustee of FERAL. He is a landscape ecologist whose areas of expertise also include GIS and remote sensing.
The Role and Structure of the Supreme Court The Supreme Court is New Zealand's final court of appeal. According to the Supreme Court Act 2003, it was established to recognise New Zealand as an independent nation with its own history and traditions, and improve access to justice and enable important legal matters, including those relating to the Treaty of Waitangi, to be resolved with an understanding of New Zealand conditions, history, and traditions. As the court of final appeal, the Supreme Court has the role of maintaining overall coherence in the legal system. Appeals to the Supreme Court can be heard only with the leave of the court. It must give leave to appeal only if it is satisfied that it is necessary in the interests of justice (s12 and s13 Supreme Court Act 2003). The court can sit only as a bench of five to hear substantive appeals. It is able to appoint retired judges of the Supreme Court or Court of Appeal (under the age of 75) where it is not possible to convene a court of five permanent members. The judges of the Supreme Court continue to be judges of the High Court, which maintains the formal integration of the higher courts judicature. The Supreme Court Act does not expressly prevent the Supreme Court’s judges sitting in the High Court. However, it is not appropriate, except in exceptional circumstances, for judges of the Supreme Court to sit in the lower court on a case which could end up before the Supreme Court. You might be interested in:
FIGURES IN THE latest UNFPA report show that Ireland is performing well in the areas of sexual and reproductive health. According to the 2012 State of World Population study, the rate of contraceptive use in Ireland is close to the average for highly-developed countries. The research highlights that 61 per cent of Irish women aged 15 to 49 use modern methods of contraception. That is compared to a worldwide prevalence rate of 57 per cent. The worst-performing country in the category is Somalia, at just one per cent. The adolescent (aged 15 to 19) birth rate in Ireland is 16 per every 1,000 women. The global average is 49, while in developing countries it shoots up to 116 per 1,000. Ireland also follows suit with the world average of 1.1 per cent for population change. However, its fertility rate is slightly above average at 2.1 children per woman. At the Irish launch of the report yesterday, Dr. Niamh Reilly, Co-Director of the Global Women’s Studies Programme at NUI Galway, highlighted how Ireland and other countries have benefited from making family planning programmes more accessible. “Recognition of the right to determine when, if and how many children we have is a relatively new and hard-won achievement in Ireland, and an unfinished agenda,” she said. “In the very recent past, our country was transformed once women – and the population in general – gained access to family planning. Undoubtedly, better access to family planning contributed significantly to the development of Ireland’s society and economy in the 1990s and early 2000s, as women’s participation in social, cultural and economic domains expanded hugely. Throughout the world, it has been demonstrated that increased access to family planning results in wide-ranging economic and social benefits. “The ability for a couple to choose when and how many children to have is one of the most effective means of empowering women. 
Ireland’s recent history supports the evidence globally that women who use contraception are generally more empowered in their households and communities, and enjoy better health and educational attainment,” concluded Dr Reilly. The report also revealed that 222 million women across the world have an “unmet need for family planning”. It argued for increased family planning as a “sound investment”.
Duke of Edinburgh gives spintronics researchers a pile of cash Atomic nuclei can store data, boffins discover The Duke of Edinburgh is funding research into spintronics that has demonstrated the reading and writing of data from the spin of the nuclei of phosphorus atoms. The grand old D of E, through the Royal Commission for the Exhibition of 1851 which he heads, is but one of many funders of the research, which has been carried out by boffins at the University of Utah, University of Sydney, Florida State University and the London Centre for Nanotechnology. The findings were published in the 17 December issue of Science (subscription access only). Previously, information has been stored momentarily in the spin of electrons. Professor Christoph Boehme ran a 2006 project at the University of Utah which demonstrated the reading and writing of binary data from around 10,000 electrons in phosphorus atoms which had been inserted into a silicon substrate. The initial spin of the electrons had been lined up by a high-strength magnetic field. The latest study took the same basic approach but looked at the atomic nuclei spin, using electron spins to do so. A 1mm square chip containing phosphorus atoms in a silicon substrate was used, with the material cooled to a few degrees above absolute zero (3.2 kelvin, or about 454 degrees Fahrenheit below zero), and surrounded by an 8.6 Tesla magnetic field, some 200,000 times stronger than that of planet Earth. This lined up the atoms' electron spins, which could then be changed by writing data to them. FM radio waves were then used to transfer the electron spins to the atomic nuclei. Radio wavelengths in the high hundreds of gigahertz area were then used to transfer the up or down nuclei spins, representing binary ones or zeros, back to the electrons and the electrons' spin value read out as variations in an electrical current, thus integrating spintronics and electronics. 
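The figures quoted above are easy to verify. A quick sketch follows; note that the roughly 45-microtesla value for the Earth's surface field is a typical textbook assumption, not a number from the article:

```python
# Sanity-check of the quoted figures. The Earth's surface field of
# about 45 microtesla is a common textbook value (our assumption).
EARTH_FIELD_T = 45e-6            # tesla
field_ratio = 8.6 / EARTH_FIELD_T
print(f"8.6 T is roughly {field_ratio:,.0f}x the Earth's field")

# Convert the experiment's 3.2 K operating temperature to Fahrenheit.
kelvin = 3.2
fahrenheit = (kelvin - 273.15) * 9 / 5 + 32
print(f"{kelvin} K is about {fahrenheit:.0f} degrees Fahrenheit")
```

Both of the article's round numbers check out: the field ratio lands on the order of 200,000, and 3.2 K sits about 454 Fahrenheit degrees below zero.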
The kickers here were that this could be done up to 112 seconds after the nuclei spin had been set, and the nuclei spins could be read and re-read 2,000 times, meaning that the technique is reliable and durable enough, the researchers say, for use as a computer memory technology. Electron spins are unstable because of interference from surrounding electrons, whereas an atomic nucleus is a pretty isolated entity. A 112 second refresh time is more than enough for computer main memory, where millisecond refresh rates are the rule. The researchers dangle the carrot of having both computer memory and processing engine in the same silicon chip. There are the somewhat tricky problems of getting this to work without an enormous magnetic field and near-absolute-zero-capable refrigeration unit, but researchers hunting funding dollars allow for no such obstacles. Indeed, they say the technique could be used both for binary computers and for the glamorous prospective quantum computer with qubits being one and zero simultaneously. However, this would require the ability to read and write the spin of a single atomic nucleus. That requires more research. ®
Anas Americanus lato rostro: The Blue-wing Shoveler. This is somewhat less than a common Duck. The Eyes are yellow; the upper part of the Wing is covered with pale blue Feathers, below which is a Row of white Feathers, and below them a Row of green; the rest of the lower part of the Wing is brown; all the other part of the Body is of a mixed brown, not unlike in colour to the common Wild Duck. This Bird does not altogether agree with that described by Mr. Willoughby, p. 370. But if, as he observes, they change their colours in Winter, it is possible this may be the Bird. However, as their Bills are of the same form, and by which they may be distinguish'd from all others of the Duck kind, I cannot describe it in better Words than the above excellent Author. It's Bill is three inches long, coal black (tho' this is of a reddish brown, spotted with black) much broader toward the tip than at the base, excavated like a Buckler, of a round circumference. At the end it hath a small crooked Hook or Nail; each Mandible is pectinated or toothed like a Comb, With Rays or thin Plates inserted mutually one into another, when the Bill is shut. The Legs and Feet I am not certain whether this was a Male or Female.
Initially we only had a keyboard for the command line and text entry. Then the mouse appeared for navigating two-dimensional planes of UI. Now the field of computing has a new input toy to play with: our hands. Touch, multi-touch and gestural computing, also known as Natural User Interface (NUI), has become the newest input craze. Excitement around this has even spurred comments predicting the demise of the mouse in the next 3-5 years. Computer designers (and engineers) have become engrossed with the ability to touch the screen with multiple fingers and control software by waving their arms. However, in this excitement, have designers overlooked how to properly engage users and use multi-touch to create useful, innovative, and interesting experiences? Perhaps touch and gesture are simply the new shiny objects in the room, soon to be discarded for the next new thing. In my next few articles for Johnny Holland Magazine I’ll look at some of the details of touch and gesture computing and what I’ve learned as a practitioner in the field. Before I dig in, I want to plug Designing Gestural Interfaces, by Dan Saffer. The book is a great starting guide and reference for anyone looking to get engaged in this field. I’d suggest grabbing a copy if you’re new to the ranks of touch and gesture design.

Touch is but one slice of the pie

Let’s start the journey here. As a designer on Microsoft Surface, we’re uncovering and discovering things as we go. In my work I’ve come to learn quickly that touch, gesture, and NUI are not right for everything. As obvious as this sounds, it’s often overlooked. They should be considered part of an input ecosystem. Each type of input below has unique attributes that make it good for certain types of interactions between users and systems. This is not a comprehensive list, but here are some of the most common input and interaction methods.

• Single-point touch
• Multi-point touch

Each of these methods has pros and cons associated with it.
Text input is a perfect example of a task that touch is rather inadequate for. There is no haptic feedback upon pressing the keys, and there isn’t tactile feedback to touch type. Touch also falls short in applications that require precision, such as Adobe Photoshop or Microsoft Office Excel. A mouse can cover ground across the screen more quickly without making the user reach back and forth, and it is more precise in its actions. However, when people begin their design of touch, they forget all this, and seemingly everything else. A belief I’ve heard is that touch can be so compelling that people will forget its inadequacies; in reality, leaning on that belief only serves to shine a light on the downfalls of touch. When not done properly, touch and gesture can appear as a step backwards. The (design) problem takes a back seat to the “innovation” of touch. My advice for any designer approached by a client in need of a touch system (holding pictures of Tom Cruise in Minority Report) is to make sure to evaluate the problem first. Make sure the interaction fits the needs. Again, the key point is to consider touch as part of an input ecosystem, and not view it always as the sole method of device interaction. Not all input methods are equal. This early thinking has led me to squarely declare that tap is not the new click, which is something I’ve heard thrown around; anyone who believes so lacks an understanding of, and respect for, how to approach different problems and search for the best method of interaction between a user and a system. Systematic approach of gesture integration Most systems utilizing touch are purely touch based with no additional methods of interaction. This leads to touch being sequestered from other interactions, thus making it more of a burden for the user to learn. When a new behavior is introduced into a working knowledge system, it can be easier to absorb. 
In their recent laptops, Apple has taken an approach of incorporating touch into their behavior and input systems by using the track pad. In doing so they have managed to introduce and teach people touch and gesture behaviors in a method users already accept (the track pad). In addition, they are beginning to train people to move between input modes, from track pad mouse, to gesture, to keyboard, depending on the task. These types of associations allow for a better learning and input experience. On the flip side, the gesture actions are secondary to the main system, so they can be ignored fairly easily. It will be interesting to see if this makes gesture and touch easier to adopt, or if people will disregard it. Top image by pinksherbet
The unveiling of the Jaharis Galleries also celebrates the opening of a special exhibition of more than 50 incomparable works of late Roman and early Byzantine art lent by the British Museum. Comprised of luxurious yet portable items such as silver vessels, carved ivories, and gem-encrusted jewelry, these artworks reflect the splendor of wealthy households and important ecclesiastical sites between A.D. 350 and 650. These centuries saw great shifts in the Roman Empire: Constantinople replaced Rome as the imperial capital, Christianity became the official imperial religion, and Greek eclipsed Latin as the official administrative language. Beautifully illustrating these transitions, the objects in the exhibition were employed in a variety of civic, domestic, and sacred contexts. For example, a gilded silver chest for bathing accessories and perfumed oils that belonged to a Roman noblewoman named Projecta stands as an eloquent witness to the intersection of classical iconography and Christian belief; above the inscription indicating that its owner was indeed a Christian appears a seductive image of the goddess Venus. The gradual stylistic shift from a classical naturalism towards a Byzantine aesthetic can be seen in the Reliquary of St. Menas. Carved in ivory during the sixth century and markedly different in style from the earlier objects in the exhibition, the imagery—charged with spiritual import—is more abstract, static, and hieratic. For its part, The Lycurgus Cup vividly exemplifies the refinement and spectacle of lavish tableware proudly used throughout the late Roman Empire. In a display of technical virtuosity, this cup appears green in reflected light but turns a brilliant red when light is transmitted through it, thanks to the addition of gold and silver particles to the molten glass. Most of the treasures in this exhibition have never before traveled to the United States. The Art Institute of Chicago Website