| content (stringlengths 86-994k) | meta (stringlengths 288-619) |
|---|---|
Mplus Discussion >> Non Invariance
Evgenia posted on Friday, January 18, 2013 - 5:01 am
I have a general question about factor mixture models. Having a 2-class, 1-factor (IRT) model
and allowing the factor loadings and thresholds to differ across classes, does this imply that I have a different factor in each class? What does this difference mean? A difference in mean and variance, i.e., that in each class I have
fj ~ N(mu_j, var_j)?
Thanks a lot
Bengt O. Muthen posted on Friday, January 18, 2013 - 2:58 pm
Yes, then you have a different factor in each group and the factors can't be compared. Still, it relaxes the conditional independence assumption and it says that the within-class item correlations
are different in the different classes. Or using other words, the "severity" dimension is defined differently in the different classes.
Evgenia posted on Thursday, January 24, 2013 - 1:39 am
Thanks for your prompt reply.
I want to ask you one more question.
Having a 2-Class, 1 Factor (IRT model)
and assuming non-invariant thresholds in each class and invariant loadings, do you have any guidance on how to check whether I should assume a latent factor with equal variance in the two classes or different
variances in the two latent classes for my data, other than the AIC?
Assuming different variances means that I have one latent factor with the same meaning, "severity", but that there are different amounts of "severity" within each class?
Thanks a lot
Bengt O. Muthen posted on Thursday, January 24, 2013 - 10:09 am
I think class-varying factor variances is a good model - it is more parsimonious than having class-varying loadings.
To answer your question, I think you can test variance equality using a likelihood ratio chi-square test, so working with 2 times the loglikelihood difference.
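For what it's worth, a minimal sketch of that comparison outside Mplus, assuming the loglikelihoods of the constrained (equal-variance) and unconstrained models have already been extracted; the numbers and the one-parameter difference below are placeholders, not values from this discussion (Python):

from scipy.stats import chi2

# Placeholder loglikelihoods from the two fitted models (not real output)
ll_equal_var = -1523.4   # factor variance constrained equal across classes
ll_free_var  = -1519.8   # factor variance free in each class
df_diff = 1              # one extra variance parameter in the free model

lr_stat = 2.0 * (ll_free_var - ll_equal_var)   # 2 times the loglikelihood difference
p_value = chi2.sf(lr_stat, df_diff)            # upper-tail chi-square probability
print(f"LR statistic = {lr_stat:.2f}, p = {p_value:.4f}")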
Evgenia posted on Friday, March 01, 2013 - 10:01 pm
Having a factor mixture model, 2-class, 1-factor (IRT model), with measurement invariance and the factor mean and variance allowed to differ, for identification of the model I fix the mean (0) and
variance (1) in one class and freely estimate them in the other. (The alternative scenario is to fix the mean to 0 and the first factor loading to 1 in one class.) Are these the only
coefficients that I have to fix in order for the model to be identified?
Thanks a lot
Linda K. Muthen posted on Saturday, March 02, 2013 - 2:31 pm
|
{"url":"http://www.statmodel.com/cgi-bin/discus/discus.cgi?pg=prev&topic=13&page=11598","timestamp":"2014-04-18T11:10:22Z","content_type":null,"content_length":"22769","record_id":"<urn:uuid:71bc4e7a-7188-4e53-ad9d-949a35cc08b3>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00447-ip-10-147-4-33.ec2.internal.warc.gz"}
|
If a car is going 75 miles per hour over a distance of 50 miles, how long will it take to get there?
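For reference, the arithmetic the question asks for: time = distance / speed = 50 miles / 75 miles per hour = 2/3 hour, i.e. 40 minutes.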
United States customary units are a system of measurements commonly used in the United States. The U.S. customary system developed from English units which were in use in the British Empire
before American independence. Consequently most U.S. units are virtually identical to the British imperial units. However, the British system was overhauled in 1824, changing the definitions of
some units used there, so several differences exist between the two systems.
The majority of U.S. customary units were redefined in terms of the meter and the kilogram with the Mendenhall Order of 1893, and in practice, for many years before. These definitions were
refined by the international yard and pound agreement of 1959. The U.S. primarily uses customary units in its commercial activities, while science, medicine, government, and many sectors of
industry use metric units. The SI metric system, or International System of Units, is preferred for many uses by NIST.
The system of imperial units or the imperial system (also known as British Imperial) is the system of units first defined in the British Weights and Measures Act of 1824, which was later refined
and reduced. The system came into official use across the British Empire. By the late 20th century, most nations of the former empire had officially adopted the metric system as their main system
of measurement, but some Imperial units are still used in the United Kingdom and Canada.
Miles Straume is a fictional character played by Ken Leung on the ABC television series Lost. Miles is introduced early in the fourth season as a hotheaded and sarcastic medium serving as a crew member
aboard the freighter called the Kahana that is offshore the island where most of Lost takes place. Miles arrives on the island and is eventually taken captive by John Locke (played by Terry
O'Quinn), who suspects that those on the freighter are there to harm his fellow crash survivors of Oceanic Airlines Flight 815 and expose the island to the general public. Miles is on a mission
to obtain Ben Linus (Michael Emerson); instead, he tries to cut a deal with Ben to lie to Miles's employer Charles Widmore (Alan Dale) that Ben is dead.
The writers created the role of Miles specifically for Leung after seeing him guest star on The Sopranos. Leung was the only actor to read for the part. They chose his name because it resembles
"maelstrom", another word for a powerful whirlpool. Reaction to the character has been positive.
Miles per hour is an imperial unit of speed expressing the number of statute miles covered in one hour. It is currently the standard unit used for speed limits, and to express speeds generally, in
many countries throughout the world.
These include roads in the United Kingdom, the United States, American Samoa, the Bahamas, Belize, British Virgin Islands, the Cayman Islands, Dominica, the Falkland Islands, Grenada, Guam, Myanmar,
The N. Mariana Islands, Samoa, St. Lucia, St. Vincent & The Grenadines, St. Helena, St. Kitts & Nevis, Turks & Caicos Islands, the U.S. Virgin Islands, Antigua & Barbuda (although km are used for
distance), and Puerto Rico (likewise).
|
{"url":"http://answerparty.com/question/answer/if-a-car-going-75-miles-per-hour-going-a-distance-of-50-miles-will-take-how-long-to-get-there","timestamp":"2014-04-21T14:42:40Z","content_type":null,"content_length":"32501","record_id":"<urn:uuid:ec640c05-70e5-4bf7-bdb8-bda26e7c2a24>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00006-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Not too much, sadly. Term started (introductory post to follow soon), and I've gotten mixed up in other side projects.
I have made a little progress on characterizing the arm's dynamics for feed-forward control as discussed at the end of the
last update
. To start out I took some measurements to characterize the friction in the belt reductions. Somehow in the past month my data from the 72 tooth reduction disappeared, but I still have everything
from the 60 tooth one, and I remember they were virtually identical:
The process for calculating the friction was pretty simple. I ran the motor off a bench supply at a constant current. I then had the mbed spit back the steady-state angular velocity to my computer
over serial. Since I know the torque constant of my motor, I can convert the current draw read off the bench supply into torque produced by the motor. Plotting all this data gives me a nice linear
curve representing the friction in the belt reduction as a function of angular velocity.
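Roughly, that processing step looks like the sketch below, assuming the current readings and steady-state speeds have already been collected into arrays; the torque constant and the numbers themselves are made-up placeholders, not the actual measurements (Python):

import numpy as np

kt = 0.0276                                     # motor torque constant [N*m/A], assumed value
current = np.array([0.5, 1.0, 1.5, 2.0])        # bench supply current [A]
omega = np.array([12.0, 30.0, 47.0, 66.0])      # steady-state speed [rad/s] reported by the mbed

# At steady state, all of the motor torque is absorbed by friction in the reduction.
friction_torque = kt * current

# Fit friction as a linear function of angular velocity:
# tau_f = c0 + c1 * omega  (Coulomb offset plus viscous slope)
c1, c0 = np.polyfit(omega, friction_torque, 1)
print(f"Coulomb friction ~ {c0:.4f} N*m, viscous coefficient ~ {c1:.6f} N*m*s/rad")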
While doing all this testing, all of a sudden the readings from one of my encoders got stuck. I probed the encoder outputs, and found that one channel was stuck high. I cracked open the encoder to
investigate further, and found this:
Further probing around the inside told me something was wrong with the 339N comparator (the chip on the right). Fortunately MITERS had some of these same chips floating around. I cut the leads off
the original and soldered a new one in its place:
This seems to have fixed things. I can't imagine why the chip would have died in the first place though.
In terms of actual mechanical progress, I machined a new linkage to replace the carbon fiber one. It's basically an aluminum I-beam, with clamping mounts at each end:
I also worked out the inertia terms for the arm. At the end of the last post I stated that the inertia would change as a function of arm configuration. Fortunately, this actually isn't really true.
The inertia
as seen by the motors
is actually independent of the configuration the arm is in. Let me demonstrate.
First let's look at the inertia as seen by motor 1, which drives the arm's first link. When that motor (driving link 1) rotates, the following happens: Link 1 rotates about its pivot (C). Link 4
rotates about its pivot (D). Link 2 translates, following the motion of the end of the first link (B). This is shown by the diagram below:
So, the inertia seen by motor 1 (ignoring the effects of the belt reduction) is I[D] + I[C] + m[2]*L[1]^2.
Now the inertia seen by the second motor, which drives link 3: Link 3 rotates about its pivot (C). Link 2 rotates about its pivot (B). Link 4 translates, following the motion of point (D).
So, the moment of inertia as seen by motor 2 is I[C] + I[B] + m[4]*L[3]^2.
The values of the various masses, lengths, and inertias can be determined from my Solidworks models of all the parts. By assigning appropriate material properties to all the parts, and defining
reference coordinate systems at the pivot points, I can just click on "mass properties" and read off the values for moment of inertia.
Finally, to figure out the inertia
as seen by the motor
, I just take the values computed by the two formulas above, and multiply them by the square of the gear ratio between the motor and the links: (18/72)^2 for the first link and (18/60)^2 for the second link.
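As a sketch of how those numbers combine, with every mass, length, and inertia below a placeholder rather than a value from the Solidworks models (Python):

# Reflected inertia for the two drives (illustrative numbers only)
I_C = 4.0e-4                 # inertia of the link pivoting at C [kg*m^2]
I_D = 3.0e-4                 # inertia of the link pivoting at D [kg*m^2]
I_B = 2.5e-4                 # inertia of the link pivoting at B [kg*m^2]
m2, m4 = 0.12, 0.10          # masses of the translating links [kg]
L1, L3 = 0.15, 0.15          # lengths of links 1 and 3 [m]

inertia_at_link1 = I_D + I_C + m2 * L1**2
inertia_at_link2 = I_C + I_B + m4 * L3**2

# Reflect through the belt reductions back to the motor shafts
inertia_at_motor1 = inertia_at_link1 * (18 / 72)**2
inertia_at_motor2 = inertia_at_link2 * (18 / 60)**2
print(inertia_at_motor1, inertia_at_motor2)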
During the middle of some Friday night MITERSing, Mike, one of this year's crop of MITERS-frosh (also one of the people responsible for the recent revival of LOLriokart), decided to build a tiny
electric kart or other small silly vehicle. That night. I joined in on the project, and within 10 hours we had a rideable go kart.
At around 12:00 am Saturday we had a pile of kart-parts. There's a pair of large scooter wheels, some 1.25" square aluminum tubing, some 1" x 4" rectangular aluminum extrusion, a Kelly KBS24121
controller, an EMP 63-200 motor (essentially identical in construction but a bit smaller than the trike motor), a big Colson Performa wheel, and a very old Brooks saddle.
All the parts were scavenged from around MITERS. The motor, controller and saddle were mine, one scooter wheel came from an abandoned kick scooter, one came from Jaguar, the Colson came from the
carcass of Straight Razer, and all the metal stock was scrap.
For batteries, we finished off a box of A123 18650 cells. Five of them fit side by side in the inside of the rectangular aluminum extrusion. We had enough cells for a 6S5P pack, which also happens to
be the most cells that could physically fit in the length of the extrusion.
Unfortunately, the incredibly short timespan over which this was built means my documentation is pretty poor, so there are not many intermediate construction pictures.
The back wheel and motor were both bolted to some U-channel, which was in turn bolted to the rectangular extrusion. Don't worry, the wingnuts were eventually replaced with locking nuts.
I did most of the machining of the steering assembly. I made the steering as simple as possible for quick construction, so it doesn't have Ackermann steering or anything fancy like that. The steering is
actually completely non-adjustable, since the tie rod is a solid square aluminum bar with holes drilled through it at the pivots. The steering column is supported by a big chunk of round Delrin which
is pressed into the aluminum frame, and the column itself is 1/2" polished steel shaft. I first tried cutting it with a hacksaw, but after trashing one blade I realized it was hardened, so I milled
through it with a little carbide endmill I picked up for a dollar at Swapfest.
To test the handling, we pushed it around with some clamps attached as temporary handlebars:
The steering knuckles were machined from some 1" square aluminum. At one end, they have a 1/2" hole through them, through which pass the bolts the wheels pivot around. A little thrust bearing is
sandwiched between the knuckle and its supporting plate. These bearings were found in a drawer of random bearings, and were labeled "Precision components, handle with care!". So much for that...
For a seat, we used an old Brooks saddle. It was clamped to a segment of 3" square tubing, which was fastened to the frame through the same holes as the motor and wheel bracket.
Mike quickly threw together an electronics system with some parts from his electric longboard, so we could ride it. These were a Hobbyking 120A car controller, a 4S hardcase LiPo (same as the ones in
my scooter), and a really sketchy RC car remote with a broken trigger: All duct taped to the frame. As of around 9 am on Saturday morning, the kart looked like this:
We searched long and hard for something silly to use as a steering wheel, but eventually just bolted on a length of 80-20 extrusion, as a nod to all the 80-20 go karts out there.
At this point, the kart was driveable:
The electrical system was far from ideal, so it was replaced Saturday night and Sunday morning. I worked on assembling the new battery pack, while Mike got the Kelly controller ready and set up one
of Charles's hall sensor boards.
I started off by gluing together sets of 5 cells in parallel. These were soldered together with a strip of copper braid across each end.
These six modules were then connected in series with more braid.
Balance connectors, power leads, and insulation and padding at the terminals was added. The big red power lead shown was actually replaced with some insulated copper braid, because the round wire
took up just barely too much space for the pack to fit into the aluminum tube.
Here is the battery pack next to the frame. More insulation was added before sliding it down the tube. Actually shoving the battery in was a hair-raising procedure. To install the battery, I had to
lightly lubricate the outside of the pack's insulation to prevent it from getting stuck part way down. On the first attempt, the little bulge where I spliced together balance wires got caught on the
inside edge of a bolt hole, which removed a tiny speck of insulation from the balance lead of the second to last cell. There were a couple little sparks, and the frame was floating at 17V relative to
the negative end of the battery. We disassembled the front end of the kart, and pushed the battery back out a few inches. The trouble spot was reinsulated, and then protected from any sharp edges
with some thin plastic sheeting.
Once the kart was reassembled, we rigged up the Kelly and began playing with hall sensors and motor phases to get the motor commutating properly. The first time around, two of the hall sensors we used
turned out to be dead, so I had to replace them. After that, everything worked great, and it only took a few minutes to get the motor spinning the right direction. Adjusting the sensor timing took a
bit longer, since I had to extend the slots in the sensor board to get it positioned at the optimal spot.
The final kart: I haven't weighed it, but it probably tips the scales at 20-25 pounds. Top speed is only 15 mph, so it will be a good indoor and demo vehicle.
Now it's time to recover a weekend's worth of sleep.
Last update I was bad at serial and the robot arm almost exploded itself. I quickly made my serial communication less stupid, and added some features like a checksum, which stopped the robot from
behaving so spastically. However, no matter what I did I was unable to get smooth motion while streaming serial commands. The robot would follow the path smoothly for a few seconds at a time, but
would be interrupted by jerky motions I could not diagnose. One strategy for fixing this might be buffering the serial commands rather than executing them as they arrive. For now though, my Python
script simply generates a text file with all the commands, rather than streaming them over serial. I can then copy the text file to the mbed for it to execute. While more time consuming to change
paths, this method has been much more (read: completely) reliable.
So here's the robot drawing some things:
Fast squares:
Writing "MITERS":
Here are some mediocre squares. I've gotten it to draw better ones (without weird corners) since then, but it's actually roughly the size it's supposed to be:
This is what happens when I try to draw 10 cm squares at a few Hz..... the "circles" were also drawn at the same frequency. There are a couple reasons why it looks so terrible. First, the robot was
shaking the table it was on. Second, the paper was held below the pen by my hands. These two problems were responsible for probably at least 2/3 of the waviness. The remainder was caused (I think) by
flexing in the linkage between the second link of the arm and its motor. It might not be reasonable to expect the arm to be able to draw three perfect squares per second, but I can definitely get
them a lot better than this.
Here's what the "MITERS" text looks like. Once again, I was just holding the paper below the arm with my hands.
I've started simplifying my method of path generation. Right now, I feed my python script some setpoints. It interpolates between these points at whatever step size I choose, converts these X-Y
positions to arm joint angles through the robot's inverse kinematics function, and then turns those angles into encoder counts which are loaded onto the robot. Eventually I want to write a G-code
interpreter to make path generation easy.
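A stripped-down sketch of that pipeline is below; the link lengths, step size, and counts-per-radian are placeholders, and the inverse kinematics shown is the textbook serial two-link solution rather than the exact geometry of this arm (Python):

import numpy as np

L1, L2 = 0.15, 0.15                      # link lengths [m], assumed
COUNTS_PER_RAD = 10000 / (2 * np.pi)     # 2500-line encoder in quadrature, assumed direct drive

def inverse_kinematics(x, y):
    # Planar two-link IK, elbow-down branch
    c2 = (x**2 + y**2 - L1**2 - L2**2) / (2 * L1 * L2)
    theta2 = np.arccos(np.clip(c2, -1.0, 1.0))
    theta1 = np.arctan2(y, x) - np.arctan2(L2 * np.sin(theta2), L1 + L2 * np.cos(theta2))
    return theta1, theta2

def path_to_counts(setpoints, step=0.002):
    # Interpolate between XY setpoints, convert to joint angles, then to encoder counts
    lines = []
    for (x0, y0), (x1, y1) in zip(setpoints[:-1], setpoints[1:]):
        n = max(1, int(np.hypot(x1 - x0, y1 - y0) / step))
        for t in np.linspace(0.0, 1.0, n, endpoint=False):
            t1, t2 = inverse_kinematics(x0 + t * (x1 - x0), y0 + t * (y1 - y0))
            lines.append(f"{int(t1 * COUNTS_PER_RAD)} {int(t2 * COUNTS_PER_RAD)}")
    return "\n".join(lines)

# Example: a 10 cm square written to a text file for the mbed to execute
square = [(0.10, 0.05), (0.20, 0.05), (0.20, 0.15), (0.10, 0.15), (0.10, 0.05)]
with open("path.txt", "w") as f:
    f.write(path_to_counts(square))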
On to some hardware. I made a quick pen holder out of a botched elbow joint I machined a while ago:
I also made a quick demo-frame out of 80-20 for Techfair.
Action shot:
I am working on replacing the carbon fiber linkage assembly with something that can't twist where it connects to the pins at the pulley and arm:
Back to some robot control stuff. In the above videos, you may notice the tones the robot generates while moving. This tone corresponds to the frequency at which I have the robot arm read points.
Right now, the arm only has position control, so when it gets a new goal point, it attempts to travel to and stop on that point as fast as possible. So my servos are kind of trying to be stepper
motors. To smooth out motion, I plan on implementing some velocity control on top of the position control.
At the high level, velocity control is pretty simple to explain. When reading through the list of points to move to, the robot will look ahead a step. By looking at the position change required
between the current step and the next step, it can figure out how fast it needs to be moving when it arrives at its current step. For example, if one of the motors has a current goal position of 100
(arbitrary units), a next goal position of 101, and the time between steps is one second, when the motor reaches position 100 it should be moving at a speed of around 1 per second. If the current
goal is 100, and the next goal is also 100, the motor should be completely stopped when it reaches position 100. The actual velocity control will be done by a PI velocity loop running within the
position control loop.
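In code, the look-ahead itself is almost trivial; a sketch with arbitrary position units, where dt is the time between streamed points (Python):

def feedforward_velocities(points, dt):
    # For each goal position, the speed the motor should be carrying when it arrives there,
    # taken from the position change required over the following step.
    velocities = []
    for i, p in enumerate(points):
        nxt = points[i + 1] if i + 1 < len(points) else p   # last point: come to a stop
        velocities.append((nxt - p) / dt)
    return velocities

# Matching the example above: goals of 100 then 101, one second apart
print(feedforward_velocities([100, 101, 101], dt=1.0))   # -> [1.0, 0.0, 0.0]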
So that sounds great, but it makes a big assumption: that you actually have a good idea how fast your motors are moving. For my position control loop, I determine the motor velocity for the D term of
the PID loop by taking the difference between consecutive position readings. The issue with this is that encoders are digital. When you try to sample the velocity really fast by subtracting position
readings over very short time intervals, you lose resolution in your velocity measurement. So this method is no good for running a very fast velocity control loop. Also, I have been feeding my robot
new points faster than I am able to sample for velocity, meaning that a velocity loop would be too slow to do anything.
Skimming through a bunch of papers on servo control loops, I found that my problem was not at all unique to my system. Turns out the solution is to estimate the velocity at a high rate, and then
update your estimate at a lower rate with encoder feedback. Estimating the motor velocity will work pretty similarly to my python script to simulate the arm back at the beginning of this project. A
guess at the change in velocity over a cycle will be made by calculating the predicted acceleration of the arm given the arm's inertia, the motor's torque-speed curve, the previous command to the
motors, and the loop time. As actual measurements from the encoders are collected, the estimated velocity will be updated. This will involve figuring out more precise values for inertia and
frictional torque in the system than I've used before. Also, the inertia of the arm's first link will be some function of the angle of the second link, which will make things a little trickier.
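A rough sketch of that predict-then-correct loop is below; the motor model, gains, and update rates are stand-ins rather than identified values for this arm (Python):

class VelocityEstimator:
    # Predict velocity from a simple motor model at the fast loop rate,
    # and pull the estimate toward encoder-derived velocity at a slower rate.
    def __init__(self, inertia, kt, friction, blend=0.05):
        self.inertia = inertia      # reflected inertia [kg*m^2], assumed constant here
        self.kt = kt                # torque constant [N*m/A]
        self.friction = friction    # viscous friction coefficient [N*m*s/rad]
        self.blend = blend          # how strongly measurements correct the estimate
        self.velocity = 0.0

    def predict(self, motor_current, dt):
        torque = self.kt * motor_current - self.friction * self.velocity
        self.velocity += (torque / self.inertia) * dt       # run every fast control cycle
        return self.velocity

    def correct(self, measured_velocity):
        # Slower: velocity from encoder position differences over a longer interval
        self.velocity += self.blend * (measured_velocity - self.velocity)
        return self.velocity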
So there's a lot more work to be done. At some point I need to add a third axis too....
The robot arm is now moving. I started out by writing simple PID position controllers, and then making the motors do some fast back-and-forths. Nothing is at all tuned right now, and I imagine my
whole control loop will get fancier as I take 2.004 next term and learn how to actually do controls.
The basic control structure of the robot arm is this: At the highest level, my computer runs some python code. This code takes XY-space commands and does the inverse kinematics for the robot arm to
convert XY to two joint angles, and then sends the joint angles over serial (using pySerial) to one of two mbed microcontrollers. The mbed keeps one command to itself, and sends the other along to a
second mbed over SPI. Each mbed reads the encoder signals from the motors and uses that plus the commanded positions from my computer to do a PID control loop for each motor. Finally, the mbeds send
PWM and direction signals to a pair of Pololu motor drivers, which actually drive the motors.
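The per-motor loop is essentially a textbook PID on encoder counts; a generic rendition is below, with made-up gains and the caveat that the real firmware runs on the mbeds rather than in Python:

class PID:
    def __init__(self, kp, ki, kd, out_limit):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_limit = out_limit          # magnitude of the PWM duty-cycle limit
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(-self.out_limit, min(self.out_limit, out))   # sign gives direction, magnitude gives PWM

# One controller per joint, updated each cycle with the commanded and measured counts
joint_pid = PID(kp=0.8, ki=0.05, kd=0.02, out_limit=1.0)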
For the sake of the robot arm and anything in its plane of motion, early testing was done without the arm attached. Here's the linkage drive doing a 10 Hz shake:
And the arm's first link moving:
Once I got the core of my python code mostly working, I could send position commands from my computer:
Or so I thought. Here, the arm was supposed to do a 1 Hz, 10 mm amplitude Y axis sine wave. And it did, for a bit:
Somehow the arm managed to not crash into its physical limits, so nothing was damaged. As far as I can tell, the problem was with my serial communication. I added identification commands to the
beginning and end of each block joint angles as a safety feature, so now the robot just stops when the commands go wrong, rather than freaking out as above. Also, I'm now testing with the motor power
supply at 5V rather than 20+, so the max speed and torque are much, much less dangerous. After this incident I added an emergency stop button.
Which leads perfectly into the story of robot arm number 2. The e-stop button and panel above came from the silicon wafer handling enclosure for a Stäubli RX60 robot arm. A while ago MIT professor
Seth Teller contacted MITERS looking for a home for this robot. It turns out the robot arm was not just a robot arm. It was actually a 100 lbs arm with a 200 lbs controller inside a 2,800 lbs box. A
very, very fancy box for inspecting silicon wafers.
Nancy, Peter and I went to retrieve the arm from its box, which was all stored at a storage warehouse just down the street.
And inside the box:
So shiny.... Everything was paneled in brushed stainless steel. The e-stop panel I used can be seen in the middle left.
And the exciting part. 6 axes of robot-arm goodness:
Most of the important bits were Mikuvanned back to MITERS. The robot was screwed to a table for temporary testing.
I pulled a couple of the panels off to see what sort of magic was on the inside:
Those are some fancy servos. The two largest joints are driven by 200 V, 1 kW(!) servos. That much power in an arm is kind of terrifying. I couldn't tell what type of gear system was used, but,
excepting the rotation at the wrist, the joints do not appear to be backdriveable.
Time to play with some microcontrollers!
For now I'm starting out using the mbed platform. Think Arduino but faster and fancier. I spent a lot of time using these in the lab over the summer, so that's what I'm using to get things up and
running. Eventually, I'd like to switch over to a BeagleBone Black, but there's a fairly large learning curve there that I'm not going to jump into just yet.
To sample the encoders, I first used the convenient QEI library. Testing with this quickly showed me that one of the stock encoders was borked. Upon opening it up, I could see a chip out of the
encoder's glass optical disc. Disassembling it revealed even more sadness on the surface of the disc.
Fortunately, I had a pile of fancy encoders scavenged from a lab cleanout over the summer, so I grabbed one of those. The stock case and connector were extremely bulky, so I 3D-printed a new shell.
The new encoder is coupled to the motor by a timing belt from the motor's back shaft, and is held in place by some pieces of laser cut Delrin.
Adding this new encoder made another problem apparent. The new encoder has 2,500 lines per revolution, over the stock encoder's 500, giving a resolution of 10,000 steps per revolution in quadrature
mode. The QEI software library just couldn't handle the pulse rate. I could manually wave the robot arm around and get the microcontroller to lose track of the encoder's location.
Fortunately, it turns out that the processor on the mbed has a built in hardware quadrature encoder interface. However, on the mbed the pins corresponding to the interface are used up by indicator
LEDs. Some clever guy figured out how to use them anyway, by soldering wires to tiny pads on the bottom of the PCB, and wrote a driver as well. Each board only has one such interface, so I'll need to
use two of them to drive the motors. To sync everything, they will eventually communicate with each other over SPI. The hardware QEI seems to have solved the speed problem. I can wave the arm around
as fast as I want manually without it skipping a step.
I also assembled some boards to interface all the electronic components together. From top left, clockwise are terminals for power supply and motors, a pair of motor drivers with added heatsinks, two
mbeds, and two encoder breakouts.
Up next: writing some kind of control loop so the robot can actually do things.
In the last month I've more or less finished the hardware side of the robot arm. For the two fast axes, at least. I'll deal with the z axis later.
Finishing the arm required a few big machining operations, especially for the elbow joint. Naturally, I made these parts from big bricks of aluminum billet. Elbow Part One started out as some 2"
square billet, which I faced to size manually on the CNC mill, since the MITERS Bridgeport was temporarily out of commission. On my first attempt at making this part, I discovered that when plunging
into a pocket with a large endmill, the CNC mill's spindle stalls really easily. Even with .5 mm plunges this occurred. To resolve this, I manually drilled big pilot holes for the circular contours.
After the first CNC job, the part looked like this:
And after two more CNC jobs and one manual one, it was almost done. It's a little sad that when you machine something this way 90% of the metal goes into waste chips. But not nearly sad enough that I
won't do it anyway.
I manually added the slot and tapped holes for a clamping mechanism.
When I ordered the pulleys for the belt reductions, I had a different design in mind for the plastic reduction than I ended up using. In the final assembly, the output plastic pulley was torqued by
the linkage it drove, causing it to deflect significantly. To get a bit more precision, I got a nice aluminum pulley to replace it. I drilled some big holes in it for moment of inertia reduction, and
CNC milled plates for the linkage to attach to.
The second part of the elbow joint was machined manually out of some 2" round stock.
The driving link was made from some unidirectional carbon fiber tube I found lying around MITERS. It is clamped at each end by a piece of aluminum that passes through the pairs of bearings on the
pulley and elbow. While turning the aluminum clamps, I found that the MITERS lathe turns a pretty significant taper. To get a tight fit in both the bearings, I had to remove the taper by taking extra
small passes off towards the chuck.
Hey, it looks like a robot arm now!
For the time being I am going to ignore the z axis, and work on assembling the electronics and programming the thing, so that I can have something interesting and moving to display at TechFair.
|
{"url":"http://build-its-inprogress.blogspot.com/","timestamp":"2014-04-16T19:59:50Z","content_type":null,"content_length":"131432","record_id":"<urn:uuid:2e3f671d-3388-44a3-8e5f-2d18c13f894d>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00529-ip-10-147-4-33.ec2.internal.warc.gz"}
|
having trouble with setting up integral by parts
I understand how to solve integrals by parts. My problem lies in setting up the variables so that I can begin solving it.
Scanned is my work showing how I set up the variables, which I believe are good except for the v and dv.
If I can get v and dv correct then I know how to solve this but need a little kick start.
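For reference (the specific integrand isn't visible here), the parts formula is ∫ u dv = u·v − ∫ v du: once u is chosen, du follows by differentiating it, dv is whatever remains of the integrand, and v is found by integrating dv. A common rule of thumb is to pick u as the factor that gets simpler when differentiated (logarithms, inverse trig, then polynomials) and dv as the factor that is easy to integrate.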
|
{"url":"http://mathhelpforum.com/calculus/159351-having-trouble-setting-up-integral-parts.html","timestamp":"2014-04-21T04:11:34Z","content_type":null,"content_length":"43571","record_id":"<urn:uuid:d20c4060-d51b-4811-a863-f266d01d291a>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00328-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A Beginner's Guide to Generalized Additive Mixed Models with R. Zuur AF, Saveliev AA, Ieno EN.
This book begins with an introduction to generalised additive models (GAM) using stable isotope ratios from squid. In Chapter 2 we explain additive mixed effects using polar bear movement data. In
Chapter 3 we apply additive mixed effects models on coral reef data. Ruddy turnstone data are used in Chapter 4 to explain Poisson generalised additive mixed effects models (GAMMs) using the gamm4
package. A simulation study is applied to investigate the effect of unbalanced random effects. In Chapter 5 parasite data sampled on anchovy fishes are used to explain overdispersed Poisson GAMM,
negative binomial GAMM, and NB-P GAMM models.
The title of this book contains the phrase ‘Beginner’s Guide to …’. This does not mean that this book is for the statistical novice and can be read as a stand-alone book. On the contrary, we assume
that the reader is familiar with R, data exploration, multiple linear regression, generalised linear modelling, generalised additive modelling, linear mixed effects modelling, and Markov chain Monte
Carlo (MCMC) techniques. This is quite a substantial number of statistical techniques. This book is written as a sequel to our Beginner’s Guide to GAM with R and Beginner’s Guide to GLM and GLMM with
R books. If you are familiar with the material described in those two books, then the current volume is indeed a ‘Beginner’s Guide’. But if you are not familiar with these techniques then the
learning curve may be steep. However, wherever possible we have included short revisions. And we also provide the reader with access to Chapter 1 of Zuur et al. (2012a), which contains an
introduction to Markov chain Monte Carlo techniques (see below for access details).
In this book we take the reader on an exciting voyage into the world of generalised additive mixed effects models (GAMM). Keywords are GAM, mgcv, gamm4, random effects, Poisson and negative binomial
GAMM, gamma GAMM, binomial GAMM, negative binomial-P models, GAMMs with generalised extreme value distributions, overdispersion, underdispersion, two-dimensional smoothers, zero-inflated GAMMs,
spatial correlation, INLA, Markov chain Monte Carlo techniques, JAGS, and two-way nested GAMMs. The book includes three chapters on the analysis of zero-inflated data.
Copyright statement
This book is copyright material from Highland Statistics Ltd. Scanning this book (or parts of it) and distributing the digital media (including uploading to the Internet) without our explicit
permission is copyright infringement. Infringing copyright is a criminal offence and you will be taken to court and run the risk of paying ALL damages and compensation. Highland Statistics Ltd.
actively polices against copyright infringement.
All data sets used in the book are provided as *.txt or *.csv files. Right-mouse click on a data file and click on Save-As.
R code for each chapter is password protected. The password is given on page vi in the preface of the book. See the paragraph "Data sets and R code used in this book"
Support routines that we source in various chapters: HighstatLibV6.R and MCMCSupportHighstat.R.
Just copy these two files in the working directory (use Save As) and type:
source(file = "HighstatLibV6.R")
source(file = "MCMCSupportHighstat.R")
pdf file with some simple explanations on matrix notation
Chapter Title Data sets R code*
1 Introduction SquidNorway.txt April 2014
2 Additive mixed effects models applied on polar bear movement data PolarBearsV2.txt April 2014
3 Additive mixed effects models applied on coral reef data coralData.txt April 2014
4 Poisson GAMM applied on ruddy turnstone data TurnstoneDataV2.txt April 2014
5 GAMM applied on parasite data Anchoita.csv April 2014
6 Zero-inflated sea bird data sampled at offshore wind farms Common_Guillemot.txt April 2014
7 Zero-inflated GAMM applied on harbour porpoise Not available April 2014
8 Gamma GAMM applied on tree growth data BeechDataV2.txt April 2014
9 Bernoulli GAMM applied on cowbird brood parasitism CowbirdV2Book.txt April 2014
10 GAMM applied on maximum cod length using inla CodMaximumLenghtV3.txt April 2014
11 Zero-inflated and spatial correlated Common Scoter data SeaDucks4.txt April 2014
Rather than reproducing the material on MCMC, we give the reader of this book electronic access to Chapter 1 of Zuur et al. (2012a), which contains an introduction to Bayesian statistics and MCMC.
Chapter 1 of Zuur (2012b) provides an introduction to multiple linear regression, which is also prerequisite knowledge for this book. These two chapters are downloadable from:
• Introduction to Bayesian statistics, Markov Chain Monte Carlo techniques, and WinBUGS. Chapter 1 in: Zero Inflated Models and Generalized Linear Mixed Models with R (2012). Zuur AF, Saveliev AA,
Ieno EN. We are in the process of changing the WinBUGS code in this chapter to JAGS code. The modified chapter will be available on 21 June 2013
• Review of multiple linear regression. Chapter 1 in: A Beginner’s Guide to Generalized Additive Models with R (2012). Zuur AF.
Both chapters are password protected. The password is given on page vi in the Preface. See the paragraph labelled "Chapter 1 of Zuur et al. (2012a) and Zuur (2012b)".
|
{"url":"http://www.highstat.com/BGGAMM.htm","timestamp":"2014-04-18T20:43:56Z","content_type":null,"content_length":"18215","record_id":"<urn:uuid:1dd0ae2e-b3e8-4e0e-8a2b-452c74b2cc34>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00288-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Haskell] Typeful symbolic differentiation of compiled functions
oleg at pobox.com oleg at pobox.com
Wed Nov 24 04:05:12 EST 2004
Jacques Carette wrote on LtU on Wed, 11/24/2004
] One quick (cryptic) example: the same difficulties in being able to
] express partial evaluation in a typed setting occurs in a CAS
] [computer algebra system]. Of course I mean to have a partial
] evaluator written in a language X for language X, and have the partial
] evaluator 'never go wrong'. Cheating by encoding language X as an
] algebraic datastructure in X is counter-productive as it entails huge
] amounts of useless reflection/ reification. One really wants to be
] able to deal with object-level terms simply and directly. But of
] course, that way lies the land of paradoxes (in set theory, type
] theory, logic)
] And while I am at it: consider symbolic differentiation. If I call
] that 'function' diff, and have things like diff(sin(x),x) == cos(x),
] what is the type of diff? More interestingly, what if I have D(\x ->
] sin(x) ) == \x -> cos(x) What is the type of D ? Is it implementable
] in Ocaml or Haskell? [Answer: as far as I know, it is not. But that
] is because as far as I can tell, D can't even exist in System F. You
] can't have something like D operating on opaque lambda terms.]. But
] both Maple and Mathematica can. And I can write that in LISP or Scheme
] too.
In this message, we develop the `symbolic' differentiator for a subset
of Haskell functions (which covers arithmetics and a bit of
trigonometry). We can write
test1f x = x * x + fromInteger 1
test1 = test1f (2.0::Float)
test2f = diff_fn test1f
test2 = test2f (3.0::Float)
We can evaluate our functions _numerically_ -- and differentiate them
_symbolically_. Partial derivatives are supported as well. To answer
Jacques Carette's question: the type of the derivative operator (which
is just a regular function) is
diff_fn :: (Num b, D b) => (forall a. D a => a -> a) -> b -> b
where the class D includes Floats. One can add exact reals and other
similar things. The key insight is that Haskell98 supports a sort of a
reflection -- or, to be precise, type-directed partial evaluation and
hence term reconstructions. The very types that are assumed to be a great
hindrance to computer algebra and reflective systems turn out
indispensable in being able to operate on even *compiled* terms.
We must point out that we specifically do _not_ represent our terms as
algebraic datatypes. Our terms are regular Haskell terms, and can be
compiled! That is in stark contrast with Scheme, for example: although
Scheme may permit term reconstruction under notable restrictions, that
ability is not present in the compiled code. In general, we cannot
take a _compiled_ function Float->Float and compute its derivative
symbolically, yielding another Float->Float function. Incidentally,
R5RS does not guarantee the success of type-directed partial
evaluation even in the interpreted code.
Jacques Carette has mentioned `useless reflection/reification'. The
paper `Tag Elimination and Jones-Optimality' by Walid Taha, Henning
Makholm and John Hughes has introduced a novel tag elimination
analysis as a way to remove all interpretative overhead. In this
message, we do _not_ use that technique. We exploit a different idea,
whose roots can be traced back to Forth. It is remarkable how Haskell
allows that technique.
Other features of our approach are: an extensible differentiation rule
database; emulation of GADT with type classes.
This message is the complete code.
> {-# OPTIONS -fglasgow-exts #-}
> -- We only need existentials. In the rest, it is Haskell98!
> -- Tested with GHC 6.2.1 and 6.3.20041106-snapshot
> module Diff where
> import Prelude hiding ((+), (-), (*), (/), (^), sin, cos, fromInteger)
> import qualified Prelude
First we declare the domain of `differentiable' (by us) functions
> class D a where
> (+):: a -> a -> a
> (*):: a -> a -> a
> (-):: a -> a -> a
> (/):: a -> a -> a
> (^):: a -> Int -> a
> sin:: a -> a
> cos:: a -> a
> fromInteger:: Integer -> a
and inject floats into that domain
> instance D Float where
> (+) = (Prelude.+)
> (-) = (Prelude.-)
> (*) = (Prelude.*)
> (/) = (Prelude./)
> (^) = (Prelude.^)
> sin = Prelude.sin
> cos = Prelude.cos
> fromInteger = Prelude.fromInteger
For symbolic manipulation, we need a representation for
(reconstructed) terms
> -- Here, reflect is the tag eliminator -- or `compiler'
> class Term t a | t -> a where
> reflect :: t -> a -> a
We should point out that the terms are fully typeful.
> newtype Const a = Const a deriving Show
> data Var a = Var deriving Show
> data Add x y = Add x y deriving Show
> data Sub x y = Sub x y deriving Show
> data Mul x y = Mul x y deriving Show
> data Div x y = Div x y deriving Show
> data Pow x = Pow x Int deriving Show
> newtype Sin x = Sin x deriving Show
> newtype Cos x = Cos x deriving Show
We can now describe the grammar of our term representation in the
following straightforward way:
> instance Term (Const a) a where reflect (Const a) = const a
> instance Term (Var a) a where reflect _ = id
> instance (D a, Term x a, Term y a) => Term (Add x y) a
> where
> reflect (Add x y) = \a -> (reflect x a) + (reflect y a)
> instance (D a, Term x a) => Term (Sin x) a
> where
> reflect (Sin x) = sin . reflect x
The other instances are given in the Appendix. This is the straightforward
emulation of GADT. The function `reflect' removes the `tags' after the
symbolic differentiation. Actually, `Sin' is a newtype constructor, so
there is no run-time tag to eliminate in this case.
We must stress that there is no `reify' function. One may say it is
built into Haskell already.
We only need to declare the datatype for the reified code
> data Code a = forall t. (Show t, Term t a, DiffRules t a) => Code t
> instance Show a => Show (Code a) where show (Code t) = show t
> reflect_code (Code c) = reflect c
inject the reified code in the D domain
> instance (Num a, D a) => D (Code a) where
> Code x + Code y = Code $ Add x y
> Code x - Code y = Code $ Sub x y
> Code x * Code y = Code $ Mul x y
> Code x / Code y = Code $ Div x y
> (Code x) ^ n = Code $ Pow x n
> sin (Code x) = Code $ Sin x
> cos (Code x) = Code $ Cos x
> fromInteger n = Code $ Const (fromInteger n)
and we're done with the first part:
We can define a function
> test1f x = x * x + fromInteger 1
> test1 = test1f (2.0::Float)
we can even compile it. At any point, we can reify it
> test1c = test1f (Code Var :: Code Float)
and reflect it back:
> test1f' = reflect_code test1c
> test1' = test1f' (2.0::Float)
*Diff> test1
5.0
*Diff> test1'
5.0
*Diff> test1c
Add (Mul Var Var) (Const 1.0)
The differentiation part is quite straightforward. We declare a class
for differentiation rules
> class (Term t a,D a) => DiffRules t a | t -> a where
> diff :: t -> Code a
The rules are the instances of the class DiffRules
> instance (Num a, D a) => DiffRules (Const a) a where
> diff _ = Code $ Const 0
> instance (Num a, D a) => DiffRules (Var a) a where
> diff _ = Code $ Const 1
> instance (Show x, Show y, DiffRules x a, DiffRules y a)
> => DiffRules (Mul x y) a where
> diff (Mul x y) = case (diff x,diff y) of
> (Code x'::Code a,Code y') ->
> Code $ Add (Mul (x::x) y') (Mul x' (y::y))
> instance (Num a, Show x, DiffRules x a)
> => DiffRules (Sin x) a where
> diff (Sin x) = case diff x of
> (Code x'::Code a) ->
> Code $ Mul x' (Cos x)
The other instances are in the Appendix.
The approach is scalable -- we may add more rules later, in other modules.
And that's about it:
> diff_code (Code c) = diff c
> diff_fn :: (Num b, D b) => (forall a. D a => a -> a) -> b -> b
> diff_fn f =
> let code = f (Code Var)
> in reflect_code $ diff_code code
the differentiation operator could not be any simpler.
We can try
> test2f = diff_fn test1f
> test2 = test2f (3.0::Float)
we can even see the differentiation result, symbolically:
*Diff> diff_code test1c
Add (Add (Mul Var (Const 1.0)) (Mul (Const 1.0) Var)) (Const 0.0)
True, simplifications are direly needed. Well, the full computer
algebra system is a little bit too big to be developed over one
evening. Besides, I wanted to go home three hours ago.
Here's a slightly more complex example:
> test5f x = sin (fromInteger 5*x) + cos(fromInteger 1/x)
> test5c = test5f (Code Var :: Code Float)
> test5 = test5f (pi::Float)
> test5d = diff_code test5c
> test6 = diff_fn test5f (pi::Float)
One can evaluate the function test5f numerically, differentiate it
symbolically, check the result of differentiation -- and evaluate it
numerically right away.
We can even do partial derivatives:
> test3f x y = (x*y + ((fromInteger 5)*(x^2))) / y
> test3c1 = test3f (Code Var :: Code Float) (fromInteger 10)
> test4x y = diff_fn (\x -> test3f x (fromInteger y))
> test4y x = diff_fn (test3f (fromInteger x))
-- *Diff> test4x 1 (2::Float) -- partial derivative with respect to x
-- 21.0
-- *Diff> test4y 5 (5::Float) -- partial derivative with respect to y
-- -5.0
> instance (D a, Term x a, Term y a) => Term (Sub x y) a
> where
> reflect (Sub x y) = \a -> (reflect x a) - (reflect y a)
> instance (D a, Term x a, Term y a) => Term (Mul x y) a
> where
> reflect (Mul x y) = \a -> (reflect x a) * (reflect y a)
> instance (D a, Term x a, Term y a) => Term (Div x y) a
> where
> reflect (Div x y) = \a -> (reflect x a) / (reflect y a)
> instance (D a, Term x a) => Term (Pow x) a
> where
> reflect (Pow x n) = (^ n) . reflect x
> instance (D a, Term x a) => Term (Cos x) a
> where
> reflect (Cos x) = cos . reflect x
> instance (Show x, Show y, DiffRules x a, DiffRules y a)
> => DiffRules (Add x y) a where
> diff (Add x y) = case (diff x,diff y) of
> (Code x'::Code a,Code y') ->
> Code $ Add x' y'
> instance (Show x, Show y, DiffRules x a, DiffRules y a)
> => DiffRules (Sub x y) a where
> diff (Sub x y) = case (diff x,diff y) of
> (Code x'::Code a,Code y') ->
> Code $ Sub x' y'
> instance (Num a, Show x, Show y, DiffRules x a, DiffRules y a)
> => DiffRules (Div x y) a where
> diff (Div x y) = case (diff x,diff y) of
> (Code x'::Code a,Code y') ->
> Code $
> Div (Sub (Mul x' y) (Mul x y'))
> (Pow y 2)
> instance (Num a, Show x, DiffRules x a)
> => DiffRules (Pow x) a where
> diff (Pow x n) = case diff x of
> (Code x'::Code a) ->
> Code $ Mul (Const (fromInteger $ toInteger n))
> (Mul x' (Pow x (n Prelude.- 1)))
> instance (Num a, Show x, DiffRules x a)
> => DiffRules (Cos x) a where
> diff (Cos x) = case diff x of
> (Code x'::Code a) ->
> Code $ Mul x' (Sub (Const 0) (Sin x))
|
{"url":"http://www.haskell.org/pipermail/haskell/2004-November/014939.html","timestamp":"2014-04-20T05:05:53Z","content_type":null,"content_length":"15236","record_id":"<urn:uuid:a5cd78fb-cf31-4d18-b9d7-27cf98344a86>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00471-ip-10-147-4-33.ec2.internal.warc.gz"}
|
ABINIT parallelisation input variables:
List and description.
This document lists and provides the description of the names (keywords) of the parallelisation input variables to be used in the main input file of the abinit code.
The new user is advised to read first the new user's guide, before reading the present file. It will be easier to discover the present file with the help of the tutorial.
When the user is sufficiently familiarized with ABINIT, the reading of the ~abinit/doc/users/tuning file might be useful. For response-function calculations using abinit, please read the response
function help file.
Copyright (C) 1998-2012 ABINIT group (DCA, XG, RC)
This file is distributed under the terms of the GNU General Public License, see ~abinit/COPYING or http://www.gnu.org/copyleft/gpl.txt .
For the initials of contributors, see ~abinit/doc/developers/contributors.txt .
Content of the file : alphabetical list of variables.
npband npfft npimage npkpt nppert npspinor
paral_atom paral_kgb paral_rf
Mnemonics: GW PARAllelization level
Characteristic: GW, PARALLEL
Variable type: integer
Default is 1
TODO: default should be 2
Only relevant if optdriver=3 or 4, that is, screening or sigma calculations.
gwpara is used to choose between the two different parallelization levels available in the GW code. The available options are:
• =1 => parallelisation on k points
• =2 => parallelisation on bands
Additional notes:
In the present status of the code, only the parallelization over bands (gwpara=2) makes it possible to reduce the memory allocated by each processor.
Using gwpara=1, indeed, requires the same amount of memory as a sequential run, irrespective of the number of CPUs used.
A reduction of the required memory can be achieved by opting for an out-of-core solution (mkmem=0, only coded for optdriver=3) at the price of a drastic worsening of the performance.
Go to the top | Complete list of input variables
Mnemonics: LOCAL ReaD WaveFunctions
Variable type: integer
Default is 1.
This input variable is used only when running abinit in parallel. If localrdwf=1, the input wavefunction disk file or the KSS/SCR file in case of GW calculations, is read locally by each processor,
while if localrdwf=0, only one processor reads it, and broadcast the data to the other processors.
The option localrdwf=0 is NOT allowed when parallel I/O is activated (MPI-IO access), i.e. when accesswff==1.
The option localrdwf=0 is NOT allowed when mkmem==0 (or, for RF, when mkqmem==0, or mk1mem==0), that is, when the wavefunctions are stored on disk. This is still to be coded ...
In the case of a parallel computer with a unique file system, both options are as convenient for the user. However, if the I/O is slow compared to communications between processors (e.g. for CRAY
T3E machines), localrdwf=0 should be much more efficient; if you really need temporary disk storage, switch to localrdwf=1.
In the case of a cluster of nodes, with a different file system for each machine, the input wavefunction file must be available on all nodes if localrdwf=1, while it is needed only for the master
node if localrdwf=0.
Go to the top | Complete list of input variables
Mnemonics: Number of Processors at the BAND level
Variable type: integer
Default is 1.
Relevant only for the band/FFT parallelisation (see the paral_kgb input variable).
npband gives the number of processors among which the work load over the band level is shared. npband, npfft, npkpt and npspinor are combined to give the total number of processors (nproc) working on
the band/FFT/k-point parallelisation.
See npfft, npkpt, npspinor and paral_kgb for the additional information on the use of band/FFT/k-point parallelisation.
Note : at present, npband has to be a divisor of, or equal to, nband.
Go to the top | Complete list of input variables
Mnemonics: Number of Processors at the FFT level
Variable type: integer
Default is nproc.
Relevant only for the band/FFT/k-point parallelisation (see the paral_kgb input variable).
npfft gives the number of processors among which the work load over the FFT level is shared. npfft, npkpt, npband and npspinor are combined to give the total number of processors (nproc) working on
the band/FFT/k-point parallelisation.
See npband, npkpt, npspinor, and paral_kgb for the additional information on the use of band/FFT/k-point parallelisation.
Note : ngfft is automatically adjusted to npfft. If the number of processors is changed from one calculation to another, npfft may change, and then ngfft also.
Go to the top | Complete list of input variables
Mnemonics: Number of Processors at the IMAGE level
Variable type: integer
Default is min(nproc, ndynimage) (see below).
Relevant only when sets of images are activated (see imgmov and nimage).
npimage gives the number of processors among which the work load over the image level is shared. It is compatible with all other parallelization levels available for ground-state calculations.
Note on the npimage default value: this default value is crude. It is set to the number of dynamic images (ndynimage) if the number of available processors allows this choice. If ntimimage=1, npimage
is set to min(nproc,nimage).
See paral_kgb, npkpt, npband, npfft and npspinor for the additional information on the use of k-point/band/FFT parallelisation.
Go to the top | Complete list of input variables
Mnemonics: Number of Processors at the K-Point Level
Variable type: integer
Default is 1.
Relevant only for the band/FFT/k-point parallelisation (see the paral_kgb input variable).
npkpt gives the number of processors among which the work load over the k-point/spin-component level is shared. npkpt, npfft, npband and npspinor are combined to give the total number of processors
(nproc) working on the band/FFT/k-point parallelisation.
See npband, npfft, npspinor and paral_kgb for the additional information on the use of band/FFT/k-point parallelisation.
Note : npkpt should be a divisor of, or equal to, the number of k-point/spin-components (nkpt*nsppol) in order to have better load-balancing and efficiency.
Go to the top | Complete list of input variables
Mnemonics: Number of Processors at the PERTurbation level
Characteristic: can even be specified separately for each dataset, parameter paral_rf is necessary
Variable type: integer
Default is 1.
This parameter is used in connection with the parallelization over perturbations (see paral_rf) for a linear response calculation. nppert gives the number of processors among which the work load over
the perturbation level is shared.
Go to the top | Complete list of input variables
Mnemonics: Number of Processors at the SPINOR level
Variable type: integer
Default is 1.
Can be 1 or 2 (if nspinor=2).
Relevant only for the band/FFT/k-point parallelisation (see the paral_kgb input variable).
npspinor gives the number of processors among which the work load over the spinorial components of wave-functions is shared. npspinor, npfft, npband and npkpt are combined to give the total number of
processors (nproc) working on the band/FFT/k-point parallelisation.
See npkpt, npband, npfft, and paral_kgb for the additional information on the use of band/FFT/k-point parallelisation.
Go to the top | Complete list of input variables
Mnemonics: activate PARALelization over (paw) ATOMic sites
Variable type: integer
Default is 0.
Relevant only for PAW calculations.
This keyword controls the parallel distribution of memory over atomic sites. Calculations are also distributed using the "kpt-band" communicator.
Warning: use of paral_atom is highly experimental.
Only compatible (for the moment) with ground-state calculations.
Go to the top | Complete list of input variables
Mnemonics: activate PARALelization over K-point, G-vectors and Bands
Variable type: integer
Default is 0.
If paral_kgb is not explicitly put in the input file, ABINIT automatically detects whether the job has been sent in sequential or in parallel. In the latter case, it detects the number of processors on
which the job has been sent and calculates values of npkpt, npfft, npband, bandpp, npimage and npspinor that are compatible with the number of processors. It then sets paral_kgb to 0 or 1 (see
hereunder) and launches the job.
If paral_kgb=0, the parallelization over k-points only is activated. In this case, the band/FFT distribution variables are ignored. Requires compilation option --enable-mpi="yes".
If paral_kgb=1, the parallelization over bands, FFTs, and k-point/spin-components is activated (see npkpt, npfft and npband). With this parallelization, the work load is split over three levels of
parallelization. The different communications almost occur along one dimension only. Requires compilation option --enable-mpi="yes".
HOWTO fix the number of processors along one level of parallelisation:
At first, try to parallelise over the k point and spin (see npkpt). Otherwise, for an unpolarized calculation at the gamma point, parallelise over the two other levels: the band and FFT ones. For nproc<=50, the best speed-up is achieved for npband=nproc and npfft=1 (which is not yet the default). For nproc>=50, see F. Bottin's presentation referenced below for the recommended repartition.
For additional information, download F. Bottin's presentation at the ABINIT workshop 2007
Suggested acknowledgments :
F. Bottin, S. Leroux, A. Knyazev and G. Zerah, Large scale ab initio calculations based on three levels of parallelization, Comput. Mat. Science 42, 329 (2008), available on arXiv, http://arxiv.org/
abs/0707.3405 .
If the total number of processors used is compatible with the three levels of parallelization, the values for npkpt, npband, npfft and bandpp will be filled automatically, although the repartition
may not be optimal. To optimize the repartition use:
If paral_kgb=-n, ABINIT will automatically test whether all the processor numbers between 2 and n are convenient for a parallel calculation and print the possible values in the log file. A weight is
attributed to each possible processor repartition. It is advised to select a processor repartition whose weight is close to 1. The code will then stop after the printing. This test can be
done with either a sequential or a parallel version of the code. The user can then choose the adequate number of processors on which to run the job. He must then put paral_kgb=1 back in the
input file and set the corresponding values for npkpt, npband, npfft and bandpp in the input file.
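As an illustration only (not taken from an actual test case), a fragment of an input file for a 64-processor run using the three-level scheme could look like the following, keeping nproc = npkpt*npband*npfft*npspinor:

paral_kgb 1
npkpt     4
npband    8
npfft     2
bandpp    1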
Go to the top | Complete list of input variables
Mnemonics: activate PARALlelization over Response Function perturbations
Characteristic: can even be specified separately for each dataset
Variable type: integer
Default is 0.
This parameter activates the parallelization over perturbations, which can be used during a response-function (RF) calculation. It is possible to use this type of parallelization in combination with the other levels of parallelization.
Currently, total energies calculated by groups that do not contain the master process are saved in .status_LOGxxxx files.
Go to the top | Complete list of input variables
use_gpu_cuda
Mnemonics: activate USE of GPU accelerators with CUDA (nvidia)
Variable type: integer
Default is 1 for ground-state calculations when ABINIT has been compiled using cuda, 0 otherwise.
Only available if ABINIT executable has been compiled with cuda nvcc compiler.
This parameter activates the use of NVidia graphic accelerators (GPU) if present.
If use_gpu_cuda=1, some parts of the computation are transmitted to the GPUs.
If use_gpu_cuda=0, no computation is done on GPUs, even if present.
Note that, while running ABINIT on GPUs, it is recommended to use the MAGMA external library (i.e. Lapack on GPUs). The latter is activated during the compilation stage (see the "configure" step of the ABINIT compilation process). If MAGMA is not used, ABINIT performance on GPUs can be poor.
Go to the top | Complete list of input variables
|
{"url":"http://www.abinit.org/documentation/helpfiles/for-v7.0/input_variables/varpar.html","timestamp":"2014-04-20T01:18:05Z","content_type":null,"content_length":"32484","record_id":"<urn:uuid:fed2124a-12d0-46a8-97e6-c192704cabf6>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00272-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Knapek, Christina (2010): Phase Transitions in Two-Dimensional Complex Plasmas. Dissertation, LMU München: Faculty of Physics
This thesis presents the experimental investigation of the phase state of two-dimensional complex plasmas by means of their dynamical and kinetic properties. The two-dimensional complex plasma
consists of negatively charged micron-sized plastic spheres, levitated in the sheath of a radio-frequency noble gas discharge in a single horizontal layer. In two different experiments the
thermodynamical state of a crystalline complex plasma (``plasma crystal''), and the process of recrystallization of a molten complex plasma is studied. The experiments were performed on strictly
two-dimensional particle systems, and all data analysis builds on the examination of particle coordinates and trajectories. One important aspect of the data analysis is the estimation of
uncertainties. A procedure has been developed to obtain reliable estimations of the measurement uncertainties introduced by the recording method and the particle tracking algorithm. The implications
of the uncertainties on the scientific interpretation of the experimental results will be considered throughout the thesis. The first experiment aims to estimate the coupling parameter of a
two-dimensional, crystalline complex plasma. The coupling parameter of an ensemble of particles is the ratio of their mean potential energy to their mean kinetic energy. It describes the
thermodynamical state of the system, and is therefore an important quantity to characterize such a system. To calculate it, not only the particle temperature has to be estimated, but also an
expression for the interparticle potential has to be known. For charged particles, this depends on the particle charge, which can often only be obtained with additional experimental effort, and its
measurement is usually subject to large uncertainties. A simple, new method to calculate the coupling parameter from solely the spatial particle coordinates will be presented in this thesis, and
verified to be consistent with the conventional estimation by charge and temperature measurements. The second experiment involves the creation of a two-dimensional plasma crystal and its shock
melting by the application of a short electric pulse. The following phase of rapid recrystallization gives insight into the nature of a non-equilibrium transition of a two-dimensional system of
interacting particles from a disordered to an ordered state. The measurements have been performed at a high temporal resolution to ensure the possibility to obtain kinetic energies from particle
velocity distributions. The process is investigated thoroughly by means of the time-dependent development of the kinetic particle energy and structural properties of the system, such as translational
and orientational long range order, defects fraction and spatial defect arrangements. Finally the connection of structural order parameters to the kinetic energy -- in comparison with conventional
models and theories -- gives novel insights into the underlying physical processes determining the two-dimensional phase transition.
Item Type: Thesis (Dissertation, LMU Munich)
Keywords: two-dimensional complex plasma, strongly coupled system, non-equilibrium phase transition
Subjects: 600 Natural sciences and mathematics
600 Natural sciences and mathematics > 530 Physics
Faculties: Faculty of Physics
Language: English
Date Accepted: 28. October 2010
1. Referee: Morfill, Gregor
Persistent Identifier (URN): urn:nbn:de:bvb:19-123271
MD5 Checksum of the PDF-file: 5a99ac80d8ce258474dc868492f0aa12
Signature of the printed copy: 0001/UMC 19072
ID Code: 12327
Deposited On: 17. Dec 2010 07:50
Last Modified: 16. Oct 2012 08:44
|
{"url":"http://edoc.ub.uni-muenchen.de/12327/","timestamp":"2014-04-20T13:24:54Z","content_type":null,"content_length":"28454","record_id":"<urn:uuid:d47e117f-2b9e-4486-b907-71f5744ca959>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00243-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Lemon Grove Calculus Tutor
Find a Lemon Grove Calculus Tutor
...I specialize in algebra, geometry, calculus, and SAT prep. In addition I am well versed in the Spanish Language and I am capable of tutoring in Spanish as well. My approach to teaching and
tutoring is through working and solving problems.
9 Subjects: including calculus, Spanish, geometry, algebra 1
...Later, after transferring to North Carolina, I began more formal education training and continued tutoring. Again, I saw students with different backgrounds and varying abilities. I taught as a
student teacher at Riverside High School in Durham, NC.
26 Subjects: including calculus, Spanish, chemistry, physics
...I can tutor a variety of subjects from basic elementary math to calculus, basic natural sciences to upper division chemistry, as well as up to Semester 4 of university Japanese. I started out
majoring Chemistry at Harvey Mudd College where I was taught not only a wide breadth of subjects in math...
13 Subjects: including calculus, chemistry, geometry, statistics
...During my masters in computer science I took many algorithms classes based around C like languages. I am very familiar with programming C also from my PhD and Postdoc, where I rely on it to
create numeric simulations. I am able to walk you through all aspects of C programming, such as the synta...
26 Subjects: including calculus, physics, statistics, algebra 1
...I have also studied theoretical computer science, which relies heavily on combinatorial theory. I have taken undergraduate and graduate courses in linear algebra as well as math and physics
courses that relied heavily on this subject. I have acted as a teaching assistant in for undergraduate linear algebra course.
21 Subjects: including calculus, physics, statistics, geometry
|
{"url":"http://www.purplemath.com/lemon_grove_calculus_tutors.php","timestamp":"2014-04-19T10:14:34Z","content_type":null,"content_length":"23991","record_id":"<urn:uuid:4b671ba9-c380-486d-8001-9850ad45b803>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00604-ip-10-147-4-33.ec2.internal.warc.gz"}
|
register allocation via graph coloring
preston@tera.com (Preston Briggs)
Tue, 21 Feb 1995 21:37:51 GMT
From comp.compilers
| List of all articles for this month |
Newsgroups: comp.compilers
From: preston@tera.com (Preston Briggs)
Keywords: registers, optimize, bibliography
Organization: Compilers Central
References: 95-02-136
Date: Tue, 21 Feb 1995 21:37:51 GMT
alexe@eecs.umich.edu writes
>Here are the answers I got about register allocation using graph
>coloring. Besides the articles mentioned in the following emails, I
>found an article that presents an algorithm for register allocation
>and spilling for a given schedule in an optimal fashion:
W. M. Meleis and E. S. Davidson, "Optimal Register Allocation for a
Multiple-Issue Machine, in the Proceedings of the 1994 International
Conference on Supercomputing (ICS), Manchester, UK, pp 107-116
You always have to be careful with the word "optimal." Technically
(the way papers always use it), it does _not_ mean "none better",
which is too bad. I'm sure the paper will spell out the exact
conditions for optimality, but they aren't always what you might
I haven't read the paper (oops), but you mention one restriction,
namely "for a given schedule". This is the same sort of
simplification that Chaitin makes, I make, Chow&Hennessy make,
Callahan&Koblenz, make, etc. Interesting work that explores the
alternative (i.e., letting the schedule flex) includes
author="Schlomit S. Pinter",
title="Register Allocation with Instruction Scheduling: A New Approach",
A scheduler-sensitive global register allocator
Norris and Pollack
Supercomputing '93
Other possible simplifications include restricting the problem to a
single routine (most people do this, but there are exceptions), to a
single block, or even to a single expression. If you're looking at
more than a single expression (with no reuse, that is, a tree), the
problem of "optimal" allocation, for most definitions of optimal, is
at least NP complete (in the really general cases, it's just plain
About the NP-completeness thing, yet again. If somebody proves that
some particular formulation of a problem is NP complete, it means that
it's unlikely that an algorithm exists that can find an optimal
solution quickly. Note well the use of the special hedge words
"unlikely", "optimal", and "quickly".
So, for the paper referenced above, I expect they're solving some
limited problem well. However, the use of the word "optimal" suggests
that the problem they solve is so limited that their approach is
probably not that interesting (not meaning to be harsh or keep my head
too deeply in the sand, but there are a lot of NP-complete problems
for them to work around).
The moderator writes:
[And are they patented? -John]
Chaitin's work is patented by IBM. The work I published with Cooper,
Kennedy, and Torczon is patented by Rice (our PLDI '89 paper, or chapter 3
of my thesis). The rest of my stuff has not been patented. Callahan &
Koblenz is not patented. I don't know about other approaches.
Does a patent matter? Depends, doesn't it? In terms of research or home
projects, it doesn't matter at all. In terms of a product, it surely
matters, though it isn't necessarily prohibitive. You just have to pursue
licensing issues (or, to keep the patent, the owners must actively pursue
Marc-Michael Brandis wrote
P. Briggs. Register Allocation via Graph Coloring. Ph.D. thesis,
Rice University, Houston, Texas. Available as Technical Report Rice
COMP TR92-183. 1992. (also available through ftp, but I do not
have the address)
Available via anonymous ftp from cs.rice.edu, in the directory public/preston
Cliff Click wrote
Graph-coloring is NP-complete, so an optimal
algorithm will have to be exponential. Don't know if one exists, but
it's much less likely to be interesting than a better spill metric, or
live-range-splitting algorithm. In other words, most interference
graphs are NOT colorable, so a "perfect" coloring algorithm will fail
anyways. The more important issue is how you mangle the live-ranges
to get a colorable graph.
I have some problems with some of this. An optimal algorithm does not
_need_ to be exponential; however, everyone will be very interested in a
polynomial-time algorithm. Also, it's easy to write an optimal algorithm
for graph coloring that requires exponential time.
Otherwise, Cliff makes good points. For most interesting routines, a
k-coloring won't exist, so _something_ must be done, typically spilling,
splitting, or rescheduling. In terms of research, this is where all the
action is.
John D. Mitchell wrote
>Also, is there an optimal graph coloring algorithm available on the net?
There's no such thing as 'optimal' graph coloring. Coloring is one form of
heuristic to tractably deal with an untractable problem (in general).
Eh? The problem of coloring a given graph using a minimal number of
colors certainly has an optimal solution. And it's easy to write an
(expensive) algorithm for finding the optimal solution. It's also easy to
dream up faster, but non-optimal heuristics for coloring. Here's one:
give every node a different color
Not very minimal! How about this:
start with all nodes being uncolored
foreach node n, (in any order)
choose a color for n that differs from its neighbors
Better. And so forth.
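For concreteness, that second heuristic looks something like this (a throwaway
sketch in Haskell just to make it concrete; the adjacency-map representation
and the names are mine, not from any of the papers cited):

    import qualified Data.Map as Map
    import Data.List ((\\))

    -- Visit the nodes in the given order; give each one the smallest
    -- color not already used by a previously colored neighbor.
    greedyColor :: Ord a => Map.Map a [a] -> [a] -> Map.Map a Int
    greedyColor graph order = foldl assign Map.empty order
      where
        assign coloring n =
          let used = [c | m <- Map.findWithDefault [] n graph
                        , Just c <- [Map.lookup m coloring]]
          in Map.insert n (head ([0..] \\ used)) coloring

It never gives two neighbors the same color, but how many colors it ends up
using depends on the visit order, and nothing keeps it under k.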
But none of these have much to do with register allocation, wherein you've
got to find a k-coloring (not just a minimal coloring), or if you can't,
modify the code until you can, all in some reasonable amount of time.
Preston Briggs
Post a followup to this message
Return to the comp.compilers page.
Search the comp.compilers archives again.
|
{"url":"http://compilers.iecc.com/comparch/article/95-02-168","timestamp":"2014-04-19T07:05:40Z","content_type":null,"content_length":"11022","record_id":"<urn:uuid:28982f18-9ed2-4a7d-922a-ab05a3805ebe>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00611-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Union Beach, NJ Math Tutor
Find an Union Beach, NJ Math Tutor
...I breezed through my Bachelor's in Engineering and minor in mathematics, graduating Summa Cum Laude in 3 years and continued to pursue a a graduate degree. My expertise was in tissue
engineering and I was doing front of the line research in nerve regeneration, but I realized I was becoming an ex...
15 Subjects: including algebra 1, algebra 2, prealgebra, precalculus
...I am currently a certified Elementary School Teacher in NJ with a K-8 Certification for all subjects. I also have experience as an Instructional Aide in Elementary School grades for 3 years.
For six years I was an Elementary School teacher teaching grades from Kindergarten to 5th grade.
53 Subjects: including algebra 1, prealgebra, SAT math, trigonometry
...I attended Peter Kump's Chef School and numerous cooking classes. I love teaching kids to cook. I was an early user of the C Programming Language, beginning in 1981.
36 Subjects: including precalculus, algebra 1, algebra 2, GRE
I am a freelance Medical Writer with a varied career in basic biomedical research, scientific publication, and teaching. I have never forgotten what it is like to be a student, hungry for
COMPLETE understanding of a subject, and VERY APPRECIATIVE of certain exceptional teachers who inspired me and ...
26 Subjects: including algebra 1, algebra 2, biology, ACT Math
...A short conversation is the best way to learn about me, about my process, and see if it fits your needs. I love tutoring! Working with a student and creating a customized learning curriculum
makes every new student a challenge and opportunity.
16 Subjects: including precalculus, business, ACT Math, algebra 1
Related Union Beach, NJ Tutors
Union Beach, NJ Accounting Tutors
Union Beach, NJ ACT Tutors
Union Beach, NJ Algebra Tutors
Union Beach, NJ Algebra 2 Tutors
Union Beach, NJ Calculus Tutors
Union Beach, NJ Geometry Tutors
Union Beach, NJ Math Tutors
Union Beach, NJ Prealgebra Tutors
Union Beach, NJ Precalculus Tutors
Union Beach, NJ SAT Tutors
Union Beach, NJ SAT Math Tutors
Union Beach, NJ Science Tutors
Union Beach, NJ Statistics Tutors
Union Beach, NJ Trigonometry Tutors
|
{"url":"http://www.purplemath.com/Union_Beach_NJ_Math_tutors.php","timestamp":"2014-04-19T12:46:55Z","content_type":null,"content_length":"23783","record_id":"<urn:uuid:504f49d0-6e33-47b0-a209-24065f4951e3>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00495-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Pi Day is all about circles, circumference and diameter. Pi (approximately 3.14) is the ratio of a circle's circumference to its diameter. This ratio is the same for all circles.
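For example, a circle with a 7-inch diameter has circumference C = π × d ≈ 3.14 × 7 ≈ 22 inches, and dividing that 22 inches by the 7-inch diameter gets you right back to roughly 3.14, no matter what size circle you start with.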
In the spirit of Pi Day, let’s see what we can do with the fabulously fun circle and what we learn along the way by making some simple folds.
You will need the following supplies:
• Ruler
• Pencil
• Paper
• Compass
• Markers
• Scissors
• Scotch Tape™
• Small piece of candy
1. Use a compass to draw a 7-inch circle. Carefully cut the circle out.
2. Describe the properties of the circle.
3. Can any other shapes be made using this circle? Let’s find out.
4. First, fold the circle in half and open the circle back up. Each half of this circle is called a semi-circle. Notice that both halves of the circle are identical. We say the halves are
symmetrical, or have symmetry.
5. Can you think of anything in the classroom that is a semi-circle? A protractor is a semi-circle. How many degrees are on a protractor? If you don’t know, investigate. There are 180˚ on a
protractor, so, how many degrees in a full circle?
6. Now, fold the circle into fourths, then unfold the circle and locate the center. Mark the center using a marker or pencil.
7. Using a ruler, draw a line from one side of the circle to the other, making sure to pass through the center. This line is diameter of the circle.
8. Using a ruler, draw a line from the center of the circle to one point on the edge of the circle to create a radius.
9. Next, fold one side of the circle down so the edge meets the center point. Unfold and use a marker or pencil to darken the line of the fold. This line is called a chord.
10. Re-fold the circle along the chord line and fold an additional edge to the center of the circle to form an ice cream cone-like shape.
11. Fold the remaining edge of the circle to the center to form an equilateral triangle.
12. Make a new shape by folding one vertex of the triangle down so that its tip touches the center of the side opposite to it. What is the resulting shape? The shape is a quadrilateral and a trapezoid.
13. Let’s make another shape: Fold one acute vertex so that it meets one of the obtuse vertices. What is the shape created as a result of this fold? You should come up with the terms: parallelogram,
quadrilateral, and rhombus.
14. Unfold the shape until you get back to the larger triangle. Then, fold each of the three vertices to the center point. The new shape that is created is a hexagon.
15. Again, unfold the shape to the original triangle. Fold the triangle so that all of the vertices touch at a single point to form a triangular pyramid. Is this shape a polyhedron?
16. Lastly, fold all of the top halves of the triangles down so they cross each other. Use tape to secure the sides. This creation is a truncated triangular pyramid. It can be used as a space to put
a special treat.
Property – an attribute common to all members of a class
Semi-circle – a half circle, formed by cutting a whole circle along a diameter line. Any diameter of a circle cuts it into two equal semicircles
Symmetry – when one shape is identical to another
Diameter – the length of the line through the center and touching two points on its edge; sometimes the word ‘diameter’ is used to refer to the line itself
Radius – the length of the line from the center to any point on its edge. The plural form is radii. The radius is half the length of the diameter.
Chord – a line segment that only covers the part inside the circle. A chord that passes through the center of the circle is also a diameter of the circle.
Equilateral Triangle – a triangle in which all three sides are congruent (same length)
Vertex – typically means a corner or a point where lines meet; every triangle has three vertices.
Polygon – a number of coplanar line segments, each connected end to end to form a closed shape
Quadrilateral – is any 4-sided polygon
Trapezoid – a quadrilateral which has at least one pair of parallel sides
Acute – an angle less than 90°, or to a shape involving angles less than 90°
Obtuse – an angle greater than 90° or a shape involving angles of more than 90°
Parallelogram – a quadrilateral with both pairs of opposite sides parallel
Rhombus – a quadrilateral with all four sides equal in length
Hexagon – a polygon with 6 sides
Triangular Pyramid – a pyramid having a triangular base; the tetrahedron is a triangular pyramid having congruent equilateral triangles for each of its faces
Polyhedron – a solid figure with many plane faces, typically more than six
Truncated Triangular Pyramid – is the result of cutting a pyramid by a plane parallel to the base and separating the part containing the apex
For more Pi Day fun, join us at the Museum from 10 a.m. to 1 p.m. on Thursday, March 14 for an event celebrating all things Pi! Expect crafts, Einstein-themed goodies and pies of every variety from
Pi Pizza Truck and Oh My Pocket Pies. Click here for more info!
|
{"url":"http://blog.hmns.org/tag/rhombus/","timestamp":"2014-04-20T18:27:02Z","content_type":null,"content_length":"53693","record_id":"<urn:uuid:beaf620d-8848-496c-a3dd-6bde4919720a>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00164-ip-10-147-4-33.ec2.internal.warc.gz"}
|
We have a lot to say about CCSSI’s treatment of fractions, which starts tentatively with 1.G.3, but we’ll initially hone in on Grade 3, which is where Common Core begins its big push. We’ll discuss
Common Core’s sequence, and compare or contrast it to our own preferences for how fraction concepts should be introduced, and if we differ, provide a (hopefully justified) rationale for our choices.
3.NF.1 states, ``Understand a fraction 1/b as the quantity formed by 1 part when a whole is partitioned into b equal parts; understand a fraction a/b as the quantity formed by a parts of size 1/b.’’
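Concretely, that means seeing, say, 3/4 this way: partition a whole into 4 equal parts, so each part is 1/4, and then 3/4 is the quantity formed by 3 of those parts of size 1/4.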
We hold these truths to be self-evident, that all numbers are created equal...
(Well, Abraham Lincoln or Thomas Jefferson could have written this.)
On February 8&9, 2013, while much of the northeastern US was getting socked with a blizzard, a symposium was held at Educational Testing Service headquarters in Princeton. The meeting between ETS and
the National Urban League was entitled "Taking Action: Navigating the Common Core State Standards and Assessments," and the purpose was to ``discuss [the] impact of Common Core State Standards on
underserved communities’’ and ``consider strategies to succeed with the new standards and assessments.’’
We stumbled across the live-twitter feed by accident, but immediately recognized the meeting's significance, as David Coleman, Joe Willhoft, and Doug Sovde, three Common Core ``biggies’’ were all
featured speakers. For them, it offered an opportunity to ``sell’’ CCSSI to important community groups: in addition to the NUL, representatives of the NAACP, NCLR and SEARAC were also in attendance.
|
{"url":"http://ccssimath.blogspot.com/2013_02_01_archive.html","timestamp":"2014-04-19T04:47:51Z","content_type":null,"content_length":"87967","record_id":"<urn:uuid:03b22095-ff5f-40d3-8df2-beddd3dadb16>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00269-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Let . Now Let: . Find The Fourier Transform Of ... | Chegg.com
Find the Fourier transform of x(t).
I am now going to post the given solution. I can't understand this solution! Please help me:
By Table 4.2, the Fourier transform of
By Linearity and the time shifting property:
(Final Answer)
Two Questions:
(1) For time shifting, why are they multiplying by
(2) How does the
Thanks for any help!!!
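The specific expressions in the problem did not survive here, but the property behind question (1) is standard: for a shift by t0, the time-shifting property of the continuous-time Fourier transform is

    F{ x(t - t0) } = e^(-j·w·t0) · X(jw)

so shifting a signal in time multiplies its transform by the complex exponential e^(-j·w·t0), which is presumably the factor being asked about in question (1).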
Electrical Engineering
|
{"url":"http://www.chegg.com/homework-help/questions-and-answers/let--let--find-fourier-transform-x-t--going-post-given-solution-t-understandthis-solution--q86842","timestamp":"2014-04-20T04:44:24Z","content_type":null,"content_length":"25926","record_id":"<urn:uuid:b1db2ded-e99e-44cd-935d-64743d0183d1>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00612-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[FOM] Vaught - some new work.
John Baldwin jbaldwin at uic.edu
Mon Mar 11 11:09:55 EDT 2013
Thanks first to the several people who provided me with copies of the
Vaught paper. I asked for this because I had a bad conscience about a
remark he made in that paper taking a very restrictive (from my
standpoint) of the meaning of `syntax' (footnote 35 of 2).
My two papers
1) Formalization, Primitive Concepts and Purity
and 2) Completeness and Categoricity (in power): Formalization without
at http://homepages.math.uic.edu/~jbaldwin/model11.html
exhibit somewhat different views about the role of formalism in
mathematics. The first expresses some limitations of formal methods in
dealing with the concept of
purity and develops some aspects of Juliette Kennedy's notion of formalism
The second builds off questions of Detlefsen about the `virtue' of notions
such as categoricity and completeness. It argues that formal methods and
in particular the notion of classifying first order theories by
`essentially syntactic' properties is an important tool in
mathematics and should be recognized by philosophers as a fundamental use
of formal methods.
VAUGHT. There are two Vaught papers. 1959 - `Denumerable models of
Complete Theories',* Infinitistic Methods, Proc. Symp. Foundations of
Math*. This is hard to get.
VAUGHT'S 59 paper IS POSTED AS A REFERENCE BETWEEN THE 6TH AND 7TH ITEM ON
the same webpage http://homepages.math.uic.edu/~jbaldwin/model11.html
In 1961, Vaught published `Models of Complete Theories' in the Bulletin of the AMS. This is more a summary of the area, with few proofs, and it does not include proofs of such results as the fact that no complete theory has exactly two countable models, or the Vaught 2-cardinal theorem (which are in the first paper).
It is readily available on project Euclid.
(The appendix to Formalism freeness... by Bill Howard and myself provides a fully geometric proof, entirely in the projective case, that a Desarguesian projective plane can be imbedded in 3-space.)
Finally my great thanks to the many people who responded with copies of
Vaught's paper.
John T. Baldwin
Professor Emeritus
Department of Mathematics, Statistics,
and Computer Science M/C 249
jbaldwin at uic.edu
851 S. Morgan
Chicago IL
More information about the FOM mailing list
|
{"url":"http://www.cs.nyu.edu/pipermail/fom/2013-March/017103.html","timestamp":"2014-04-17T18:26:52Z","content_type":null,"content_length":"5238","record_id":"<urn:uuid:8cfd12ca-4844-43da-ace6-201ebaec7dd1>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00292-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Work Energy Theorem
If mechanical energy is conserved, then you have: T1 + V1 = T2 + V2
(T: kinetic energy, V: potential energy)
It can be written as T2 - T1 = V1 - V2, i.e. ΔT = -ΔV.
Thus the potential energy diminishes. The change in the potential energy is the work done, which is finally converted into kinetic energy, thus increasing its value.
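As a concrete case (assuming the usual near-Earth potential V = mgh): drop a mass m from rest at height h. Then V1 - V2 = mgh and T1 = 0, so ΔT = mgh, i.e. (1/2)mv² = mgh just before impact, which gives v = √(2gh).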
|
{"url":"http://www.physicsforums.com/showthread.php?p=4217913","timestamp":"2014-04-21T02:15:58Z","content_type":null,"content_length":"25007","record_id":"<urn:uuid:a998a411-baf6-4ad3-8dc7-c84070de80ba>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00215-ip-10-147-4-33.ec2.internal.warc.gz"}
|
"... The biinterpretability conjecture for the r.e. degrees asks whether, for each sufficiently large k, the # k relations on the r.e. degrees are uniformly definable from parameters. We solve a
weaker version: for each k >= 7, the k relations bounded from below by a nonzero degree are uniformly definabl ..."
Cited by 34 (13 self)
The biinterpretability conjecture for the r.e. degrees asks whether, for each sufficiently large k, the # k relations on the r.e. degrees are uniformly definable from parameters. We solve a weaker
version: for each k >= 7, the k relations bounded from below by a nonzero degree are uniformly definable. As applications, we show that...
- Journal of the London Mathematical Society , 1998
"... Given a computably enumerable set B; there is a Turing degree which is the least jump of any set in which B is computably enumerable, namely 0 : Remarkably, this is not a phenomenon of
computably enumerable sets. We show that for every subset A of N; there is a Turing degree, c (A); which ..."
Cited by 10 (0 self)
Given a computably enumerable set B; there is a Turing degree which is the least jump of any set in which B is computably enumerable, namely 0 : Remarkably, this is not a phenomenon of computably
enumerable sets. We show that for every subset A of N; there is a Turing degree, c (A); which is the least degree of the jumps of all sets X for which A is \Sigma 1 (X): 1
- J. Symb. Logic
"... We show that the intersection of the class of 2-REA degrees with that of the #-r.e. degrees consists precisely of the class of d.r.e. degrees. We also include some applications and show that
there is no natural generalization of this result to higher levels of the REA hierarchy. 1 Introduction The ..."
Cited by 3 (1 self)
We show that the intersection of the class of 2-REA degrees with that of the #-r.e. degrees consists precisely of the class of d.r.e. degrees. We also include some applications and show that there is
no natural generalization of this result to higher levels of the REA hierarchy. 1 Introduction The # 0 2 degrees of unsolvability are basic objects of study in classical recursion theory, since they
are the degrees of those sets whose characteristic functions are limits of recursive functions. A natural tool for understanding the Turing degrees is the introduction of hierarchies to classify
various kinds of complexity. Because of its coarseness, the most common such hierarchy, the arithmetical hierarchy, is itself not of much use in the classification of the # 0 2 degrees. This fact
leads naturally to the consideration of hierarchies based on finer distinctions than quantifier alternation. Two such hierarchies are by now well established. One, the REA hierarchy defined by
Jockusch and S...
, 1995
"... This paper is a contribution to the investigation of the relationship between the ..."
"... .e. degree. Note that an isolated d.r.e. degree must be properly d.r.e., that is, it cannot be of r.e. degree. Recall that the d.r.e. degrees are 2-REA: if B = W \Gamma V with W and V both r.e.
sets and h is any one-to-one onto recursive function from ! to W , then it is straightforward to show B is ..."
.e. degree. Note that an isolated d.r.e. degree must be properly d.r.e., that is, it cannot be of r.e. degree. Recall that the d.r.e. degrees are 2-REA: if B = W \Gamma V with W and V both r.e. sets
and h is any one-to-one onto recursive function from ! to W , then it is straightforward to show B is recursively enumerable in and above h \Gamma1 [V T W ]. While the degree of h \Gamma1 [V T W ]
depends on the particular representation using W and V , it is independent of the enumerating function h, so by a slight abuse of notation we write ~ B for h \Gamma1 [V T W ] whenev
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1372972","timestamp":"2014-04-19T03:13:55Z","content_type":null,"content_length":"24072","record_id":"<urn:uuid:62ec7d45-7e97-4474-9887-6ad909c444eb>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00144-ip-10-147-4-33.ec2.internal.warc.gz"}
|
8th chevron
March 21st, 2006, 10:10 AM #1
Ok i know that the 8th chevron adds an extra distance calculation but why, 6 points and point of origin should be enough, does any one know why it needs 8 as in points in a three dimetional
It adds the galaxy distance calculation. In order for the system to work the 6 points have to be measured by predetermined points in space around the outside of the galaxy so the intersecting
lines can be plotted. the default is that it automatically uses the home system of coordinates for the galaxy you are in, the additional chevron (which is actually the seventh when you are
dialing because PoO still has to be last) tells it to calculate using the coordinates for the other galaxy.
let's shift this to a 2D plane for simplicity.
We have 2 sheets of grid paper. Each one is divided into the normal 4 sections. Now (3,2) could be that point on either sheet, so we add another thing to explain which one
peanutbutter + Jelly = sandwich
there is your answer duh.
See Jaffa are Crazy! (pic of a Tia food place in the US of A
It adds the galaxy distance calculation. In order for the system to work the 6 points have to be measured by predetermined points in space around the outside of the galaxy so the intersecting
lines can be plotted. the default is that it automatically uses the home system of coordinates for the galaxy you are in, the additional chevron (which is actually the seventh when you are
dialing because PoO still has to be last) tells it to calculate using the coordinates for the other galaxy.
Have they ever explained why three intersecting lines are needed, and not two?
That has sort of never made sense to me.
Have they ever explained why three intersecting lines are needed, and not two?
That has sort of never made sense to me.
because space a 3 dimensional reality, not 2. With only 2 line you can calculate the distance on only one plane.
but the gate doesnt travel to other planes well unless the 9th cheveron theory is correct??
You are the fifth race, your role is clear, if there is any hope in preserving the future it lies with you and your people ~ 8years for those words
Stargate : Genesis | Original Starship DesignThread Sanctuary for all | http://virtualfleet.vze.com/
11000! green me
but the gate doesnt travel to other planes well unless the 9th cheveron theory is correct??
plane as in mathematic calculations, not planes of reality. you can only calculate along an x and y axis with only two lines, you need a third line for z
First of all, its TV and second, I don't think they really thought it out, they've basically made it into an area code...
because space a 3 dimensional reality, not 2. With only 2 line you can calculate the distance on only one plane.
Sorry, but that doesn't really make sense. If you have two intersecting lines, why do you need a third one?
EDIT: It would have made more sense to make the address of a planet represent a vector so it doesn't depend on constellations, but its a TV show, so I suppose its not going to make sense.
Sorry, but that doesn't really make sense. If you have two intersecting lines, why do you need a third one?
EDIT: It would have made more sense to make the address of a planet represent a vector so it doesn't depend on constellations, but its a TV show, so I suppose its not going to make sense.
ok take a piece of paper and draw two intersecting lines on it. The lines come together at a single point. Now hold that paper up in front of your eyes. the piece of paper represents one single
plane in space, a measurement based on front to back measurment and side to side. However in space things also have to be measure up and down, thats the z axis. With out that additional
measurement the point in space that the planet is at could be anywhere along that vertical axis. Its hard to explain over a message board.
And the constelations thing was just in the movie and worked when it only went to one place. Its never been carried over or utilized in the show.
However in space things also have to be measure up and down, thats the z axis.
If two lines intersect in 3 dimensional space, that is still going to give you a single point, not an entire line. You can find the intersection of two lines in three dimensional space pretty
easily. (if they have one, not likely however)
Two lines intersecting is just as good as three lines intersecting.
here, I have attached a picture of two intersecting line segments. Explain why thats not as good at three intersecting line segments.
Last edited by cobraR478; March 21st, 2006 at 04:24 PM.
If two lines intersect in 3 dimensional space, that is still going to give you a single point, not an entire line.
Two lines intersecting is just as good as three lines intersecting.
no it does give you an entire line. if you have a 2 foot by 2 foot by 2 foot cube and make two lines cross in the middle a foot from each wall you still havent specified how far from the top or
bottom of the box the point is so it could be any point along the line that exists 1 foot from each side
It adds the galaxy distance calculation. In order for the system to work the 6 points have to be measured by predetermined points in space around the outside of the galaxy so the intersecting
lines can be plotted. the default is that it automatically uses the home system of coordinates for the galaxy you are in, the additional chevron (which is actually the seventh when you are
dialing because PoO still has to be last) tells it to calculate using the coordinates for the other galaxy.
I am going with this one. It adds for distance.
*Post in Peace, Yah or Nah*
*Go to Sokar you Cylon fracker*
*I can't spell vary good, but I can read mis- spelled words vary good*
*And then the Ori said, "if your thread is dead then let their be a new one"*
*It's Science Fiction. Not Science with Fiction.*
*Sproiler Tags should only be used when you are going to be mentioning something that you can't already read on Gateworld*
*When I talk out my butt it smells like sarcasm*
no it does give you an entire line. if you have a 2 foot by 2 foot by 2 foot cube and make two lines cross in the middle a foot from each wall you still havent specified how far from the top or
bottom of the box the point is so it could be any point along the line that exists 1 foot from each side
look at the attachment I added in my previous post, and tell me why those two intersecting lines dont give you a point.
look at the attachment I added in my previous post, and tell me why those two intersecting lines dont give you a point.
Where is the point along the x axis? This isnt something im making up, this is simple mathematic fact. in order to plot a single point in 3 dimensional space you need three lines intersecting.
left to right, front to back, and up and down. Otherwise the point is just a point on a single plane and could exist anywhere along the unspecified axis.
Where is the point along the x axis? This isnt something im making up, this is simple mathematic fact. in order to plot a single point in 3 dimensional space you need three lines intersecting.
left to right, front to back, and up and down. Otherwise the point is just a point on a single plane and could exist anywhere along the unspecified axis.
Each line has an x,y, and z component.
I am starting to think you are just screwing with me.... this isn't a difficult concept.
Last edited by cobraR478; March 21st, 2006 at 04:36 PM.
Each line has an x,y, and z component.
I am starting to think you are just screwing with me.... this isn't a difficult concept.
Im not screwing with you, and you are right this isn't difficult it is a basic geometric concept. The diagram you showed is a perfect example. each line can only be on the x, y, or z axis as a
measurement. the diagram you showed has two lines intersecting. one is the y axis, one is the z axis, so where does the point fall on the y axis? I can not explain it any clearer than that over
the internet, but this isn't a subjective opinion thing. this is a proven mathematical principal. If you still can't get it go find a math teacher to explain it in real life.
Im not screwing with you, and you are right this isn't difficult it is a basic geometric concept. The diagram you showed is a perfect example. each line can only be on the x, y, or z axis as a
measurement. the diagram you showed has two lines intersecting. one is the y axis, one is the z axis, so where does the point fall on the y axis? I can not explain it any clearer than that over
the internet, but this isn't a subjective opinion thing. this is a proven mathematical principal. If you still can't get it go find a math teacher to explain it in real life.
The lines I showed you have x, y, and z components. They exist in three-space. The place where they intersect will give you a point... what is so hard about this?
Last edited by cobraR478; March 21st, 2006 at 04:55 PM.
The lines I showed you have x, y, and z components. They exist in three-space. The place where they intersect will give you a point... what is so hard about this?
They dont have an x, y, and z component, it is only two lines, it is only two axii. Look Ive explained it several times, this isnt some subjective thing open for interpretation, it is a simple
fact. You are wrong. If you don't get it go look it up in a math text book or have a teacher explain it to you.
|
{"url":"http://forum.gateworld.net/threads/26361-8th-chevron","timestamp":"2014-04-18T02:59:37Z","content_type":null,"content_length":"145267","record_id":"<urn:uuid:a7adc0c4-db3e-4b0b-a868-666db257bea3>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00122-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Derivitive of the X final equation?
1. The problem statement, all variables and given/known data
The derivitive of the X[final]=.5at^2+V[initial]t+X[initial]
2. Relevant equations
X[final]= Final distance
X[initial]= Initial distance
a= Acceleration
t= Time
V[initial]= Initial velocity
3. The attempt at a solution
I have attempted the problem but get stuck almost immediately.
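For reference, differentiating term by term with the power rule (a, V[initial] and X[initial] are constants) gives

    dX/dt = d/dt( 0.5·a·t² + V[initial]·t + X[initial] ) = a·t + V[initial]

The constant X[initial] drops out, and the result is just the velocity as a function of time, V(t) = V[initial] + a·t.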
|
{"url":"http://www.physicsforums.com/showthread.php?p=2567493","timestamp":"2014-04-19T19:37:35Z","content_type":null,"content_length":"23081","record_id":"<urn:uuid:3bbaa7d0-617b-41db-b2aa-d0abbe438615>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00346-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Review: Discovering Mathematics (Singapore Math, Secondary Level)
Note: Since beginning Discovering Mathematics, Singapore Math has released a new edition, Discovering Mathematics Common Core. The order of lessons vary a bit, and new topics have been included. At
this writing, only a few levels are available. We tried the new ones, and they are fine, but as they’ve been slow to release, we’re still working through the earlier series. The differences are
slight, with changes in order and a few additions being the bulk of what varies from the old to the new.
Providing a challenging mathematics education was one of the key reasons we started homeschooling. Deeply disappointed by the depth of the math provided by two schools, my older son, then seven,
assumed he was the problem.
“I don’t think I’ve very smart, Mom,” he told me.
“Why not?” I inquired.
“Because they don’t give me anything hard to do,” came his sad reply.
Math (and science) were his loves at age 4 and 5 in Montessori and while at home. He was appropriately challenged in the first at school and free to explore the second at home. First grade ended all
that, where math became repetition of previously mastered lessons. Second grade, at our local gifted and talented public school, it was nonexistent which was because, we were informed, he knew all
the material for that year already.
So once home, math took a starring role. Singapore Math quickly became our preferred curriculum (reviewed here) for the elementary sequence. Even doing the Challenging Word Problem books, we burned
through it quickly. Almost 10, my older insisted on Algebra, so we started the standard sequence, happily making our way through a fine text, Jacobs’ Algebra. (reviewed here).
When my younger finished 6B, I wondered if there was another way. We vamped for much of last year, working through a variety of books while choosing our next course of action. After much
consideration, we decided to stay with Singapore, specifically, their Discovering Mathematics series. This four-year series is designed to cover some prealgebra, algebra (I and II), geometry, and a
smattering of other topics, like probability and counting. Unlike most American programs, these topics are interwoven throughout the years, with chapters on algebra followed by chapters on geometry
with a side trip to data handling. It’s challenging, with plenty of problems, tests with answers, and teacher’s support books if needed.
But I hesitated. Accustomed to the four-year math sequence I’d known as a child and that my older son had followed, I was hesitant to commit to a different path. What if we didn’t like it after a
year? What then? (Answer: Start a traditional Algebra program and compact or test out of what has already been covered. Ditto the next year with Geometry.) I presented my younger son, then 10, with
the options. Singapore, Jacobs, or Art of Problem Solving? He looked at samples of all online and liked the familiarity of the Singapore. Thus, we reached a decision.
We’ve not been disappointed. We started Discovering Mathematics 1A soon after it arrived and found that while it certainly felt like the Singapore Math we’d enjoyed the previous years, it was a step
up in challenge and pace. He’s enjoying it, but we don’t whip through the pages as we did at the elementary level. Concepts aren’t broken down in such small parts, and even the sample problems (Try
This!) are fairly challenging. Fortunately, this increase in challenge has resulted in an increase of effort. As a result, he’s feeling rather accomplished while learning large amounts.
At the minimum, the user will need to purchase two textbooks for the year. These paperbacks are affordable and reusable, in keeping with Singapore Math’s reputation for affordability. Each of the
four levels requires two textbooks, each generally over 200 pages long. The year is broken up into 11 to 17 chapters, roughly evenly divided between the two books. (The fourth level is shorter, with
a significant proportion of 4B dedicated to review tests, similar to the elementary level 6B.)
The chapters are broken up into shorter sections, some amenable to a single lesson or day of work, others requiring multiple days, given the depth of the lessons. Each section ends with problems in
four categories: Basic Practice (the easiest problems), Further Practice (definitely a bit more work), Maths@Work (word problems just as challenging as the aptly named Challenging Word Problems of
the elementary series), and Brainworks (sometimes too hard for Mom but worth trying if no one is crying). The so-called Revision Exercise (test) at the end of each chapter is at the level of the
Further Practice and Maths@Work level. Aside from the Brainworks problems, all the answers for the problems are in the back of the book. If you desire worked solutions (and so far, I’m good without),
there are Teacher’s Guides available, which include other teaching assistance, activities, and a breakdown of lessons and timing.
An additional workbook is available for each level, providing some extra practice as well as more problems at the more challenging level. Unlike the traditional workbook, these don’t provide a place
to do the problems, making them more of a reusable problem bank. I assign some of these at the end of each chapter, before the revision (test). The number I assign depends on how well he’s handling
the material — some sections just require more practice than others. Generally, these workbook problems are more challenging than the textbook ones. They are broken down into sections called Basic
Practice, Further Practice (both a bit more involved than the same-named section in the text, it seems), Challenging Practice (and it generally lives up to its name), and Enrichment (excellent
problems that we don’t get to most of the time). As with the text, answers are in the back, but solutions require the Teacher’s Edition of the workbook. I’d strongly suggest the workbook to
supplement all learners, with the Teacher’s Edition on the shelf if a parent is a bit math wary and wants guidance on the trickier problems.
The strengths of the elementary level of Singapore Math continue at the secondary level. The pace is swift, which is excellent for the mathematically talented child but could be overwhelming for
others. The problems in the text at the secondary level are far more challenging that what is in the workbooks for the elementary level, but on par with the Challenging Word Problems books. (I’ve not
used the Intensive Practice books at the elementary level, which are designed to increase the challenge at their respective levels.) The depth we’ve encountered thus far is also impressive. Math is
not taught via algorithm but by deep understanding, which, in my opinion, is by far the superior method. It is applied, not simply in one-step word problems, but across the sciences and into the work
world. Math lives in these books, with all its complexity and beauty there for the learning.
The downside to the Discovering Mathematics series? If one isn’t math-comfortable, these could be a challenge to teach. That said, for the math-uncomfortable, these are an excellent way to build a
new relationship with math. I know that throughout teaching even the elementary level of Singapore Math to my boys, this math-comfortable mom moved from number capable to number savvy. I’ve said
before that I believe that math is best taught rather than learned solo. Discussion is part of the process, and many times, I’ve had a child teach me and correct me, thus delighting the child and
enlightening me. (For more on thoughts about strong mathematics programs, read my post, Math Matters.)
We’re early in our exploration of this four-level series, and I’ll post again as we move through the program. I’m hoping we continue to enjoy Discovering Mathematics over the next several years,
allowing us continuity with a strong mathematics educational program.
As always, I only review what we’ve used, and I never accept compensation of materials or money for my reviews.
6 thoughts on “Review: Discovering Mathematics (Singapore Math, Secondary Level)”
1. Thank you, as ever. I’m curious, as one just starting out on Discovery Mathematics myself, did you use Sketchpad at all for those geometry problems that required it?
□ I’ve not yet purchased Sketchpad. At this point, we’re drawing things out. That may change down the road.
2. I, too, have used Singapore Primary Mathematics from the get-go (now finishing up 6A/6B). I found your review to be very helpful in confirming what I already knew–that the Discovering Mathematics
would probably hold true to the analytical/critical thinking type math that makes you think and not just plug in numbers. I, too, have noticed that my mental math in particular and understanding
of concepts has really sharpened doing this math with my girls. I, too, have had the more than one occasion of my daughter teaching me a problem. My husband always says, who is home schooling
whom? It does make for a great confidence builder for your child! Do post later if you end up using Sketchpad–I just read about that today as I was reviewing the Discovery Mathematics series.
Thanks for the information!
3. Thanks for this review. We have used Singapore all the way through, and I do think we will stick with Discovering Mathematics after Singapore 6. My husband is a high school math teacher and
speaks highly of what the kids have learned so far.
4. I started out with Singapore NEM, but thinking of switching to DM. So, I may be able to only use the textbook and workbook alone without having to buy the teaching notes and solution book and the
workbook solutions book?
□ I’ve purchased the solution manuals for level 2. I’ve rarely needed them, but they are a comfort for when (cringe) I just don’t want to think as hard.
|
{"url":"http://quarksandquirks.wordpress.com/2012/10/10/review-discovering-mathematics-singapore-math-secondary-level/","timestamp":"2014-04-20T03:09:29Z","content_type":null,"content_length":"73013","record_id":"<urn:uuid:dd539884-09e7-4d1e-b8af-bcba8303f66b>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00037-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Find SURFACE AREA of a pyramid (hexagonal-base)?
December 4th 2007, 10:43 PM #1
Nov 2007
How to find the surface area of a pyramid which has a regular hexagonal base of edge 6 cm and a height of 8 cm?
Surface area
The area of the base should be simple, so the problem is the six triangular sides. Each triangle has area $\tfrac{1}{2}bl$, where b = 6 cm is the length of a side of the base, and l is the height
of the triangle. Consider the right triangle formed by a line from the center of the base to the center of one of the sides of the base, by the center axis of the pyramid, and the height l of the
side in question. Since b=6 cm, the distance from center of base to the side of the base is $6 \cdot \tfrac{\sqrt{3}}{2} = 3\sqrt{3}$. Thus by the pythagorean theorem, $l^2 = (8 cm)^2 + (3\sqrt
{3} cm)^2 = 64 cm^2 + 27 cm^2 = 91 cm^2$, and $l = \sqrt{91} \approx 9.54 cm$. From here, you can find the area of one of the triangles. Then just multiply by 6 to get the entire lateral area,
and add the area of the base for the entire surface area.
--Kevin C.
For the base (img1):
You can divide it up into 6 equilateral triangles by drawing diagonals from opposite points.
The area of each of the equilateral triangles is
$A = \frac{1}{2}\left(6 \cdot \sqrt{6^2 -3^2}\right)=3\sqrt{27}=9\sqrt{3}$
So the Total Base Area is
$6 \cdot 9\sqrt{3} = 54\sqrt{3}$
Now for the slanting plane areas (img2):
From before, we learnt that the length of $b$ is $\sqrt{6^2-3^2}=3\sqrt{3}$
Also, $h = 8$, as given.
Therefore, $a$ can be found by Pythagoras:
$a = \sqrt{(3\sqrt{3})^2+8^2}=\sqrt{91}$
$a$ is the height of the triangular plane, so we can now work out its area:
$\frac{1}{2}\cdot 6 \cdot \sqrt{91}=3\sqrt{91}$
Multiplying by 6 gives us the area of all of them = $18\sqrt{91}$
So the total surface area is $54\sqrt{3}+18\sqrt{91}$
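Numerically, $54\sqrt{3} \approx 93.5$ and $18\sqrt{91} \approx 171.7$, so the total surface area is about 265.2 cm².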
Thank you, divideby0! excellent answer (dont have much of an aptitude for maths) hehe...
December 4th 2007, 11:26 PM #2
Senior Member
Dec 2007
Anchorage, AK
December 4th 2007, 11:31 PM #3
December 5th 2007, 01:17 AM #4
Nov 2007
|
{"url":"http://mathhelpforum.com/geometry/24189-find-surface-area-pyramid-hexagonal-base.html","timestamp":"2014-04-16T14:34:00Z","content_type":null,"content_length":"42045","record_id":"<urn:uuid:1a593a76-f007-4a96-861e-d4df1528d04c>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00584-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Are denotational semantic mappings decidable?
Apologies for my poor expression of this question, I'm not sure I have the vocabulary to ask it appropriately.
I've written (very recently) something akin to
⟦let x = x in x⟧ = ⊥
but really I'm failing to understand something tricky here. I can assert that this statement is truly ⊥ because I know it's a non-productive infinite loop. Furthermore, I can assert something like
⟦let ones = 1:ones in ones⟧ = μ(λx.(1,x)) = (1, (1, (1, ... )))
but what goes into that elipsis? Presumably it's an infinite number of "1-and-tuples", a perfectly well-defined mathematical object if you're alright with the AFA, but how can I convince you that
it's not some finite number of "1-and-tuples" and then a non-productive ⊥?
Obviously, this involves answering the halting problem, so I can't in general.
So in that case, how can we compute semantic mappings as if they're a total function? Are semantics necessarily non-deterministic for Turing-incomplete languages? I imagine it means that semantics
are always only ever an approximate, informal description of a language, but does this "hole" go further?
haskell formal-semantics
slight comment... shouldn't it be $\nu$ rather than $\mu$ in your second example. (You're taking a greatest fixed point, this is what is happening in Haskell.) – Kristopher Micinski Feb 6 '13 at
You might be interested in this program I constructed during a twitter argument a while ago: gist.github.com/luqui/1379703 -- it is a Haskell value such that it is not known whether it denotes ⊥ –
luqui Feb 6 '13 at 22:15
@luqui Very cool! – J. Abrahamson Feb 7 '13 at 2:08
2 Answers
There are no set-theoretic models of Turing-complete languages. If your language is strongly normalizing, there exists a total function to "interpret it" to something. You may or may not have set-theoretic semantics in a non-Turing-complete language. Regardless, both Turing-complete and non-Turing-complete languages can have non-set-theoretic semantics with total semantic mapping functions.
I don't think that is the issue here.
There is a difference between inductive and co-inductive definitions. We can explore this set theoretically:
The inductive definition of a list of integers reads:
the set `[Z]` is the smallest set `S` such that the empty list is in `S`, and such that for any `ls` in `S` and `n` in `Z` the pair `(n,ls)` is in `S`.
This can also be presented in a "step indexed" way as [Z](0) = {[]} and [Z](n) = {(m,ls) | m \in Z, ls \in [Z](n-1)}, which lets you define [Z] = \Union_{n \in N} [Z](n) (if you believe in natural numbers!)
On the other hand, "lists" in Haskell are more closely related to "coinductive streams" which are defined coinductively
the set [Z] (coinductive) is the largest set S such that forall x in S, x = [] or x = (n,ls) with n in Z and ls in S.
That is, coinductive definitions are backwards. While inductive definitions define the smallest set containing some elements, coinductive definitions define the largest set where all elements take a certain form.
It is easy to show that all inductive lists have finite length, while some coinductive lists are infinitely long. Your example requires coinduction.
More generally, inductive definitions can be thought of as the "least fix-point of a functor" while coinductive definitions can be thought of as "the greatest fix-point of a functor". The "least fix point" of a functor is just its "initial algebra" while the "greatest fixpoint" is its "final coalgebra". Using these as your semantic tools makes it easier to define things in categories other than the category of sets.
I find that Haskell provides a good language for describing these functors
data ListGenerator a r = Cons a r | Nil

instance Functor (ListGenerator a) where
  fmap f (Cons a x) = Cons a (f x)
  fmap _ Nil = Nil
although haskell provides a good language for describing these functors, because its function space is CBN and the language is not total, we have no way of defining the kind of least
fix point we would like :(, although we do get the definition of the greatest fixpoint
data GF f = GF (f (GF f))
or the non recursive existentially quantified
data GF f = forall r. GF r (r -> (f r))
if we were working in a strict or total language, the least fixpoint would be the universally quantified
data LF f = LF (forall r. (f r -> r) -> r)
EDIT: since "smallest" is a set theoretic notion though the "least"/"greatest" distinction might not be the right one. The definition of LF is basically isomorphic to GF and is "the
free initial algebra" which is the categorical formalism of "least fix point."
as to
how can I convince you that it's not some finite number of "1-and-tuples" and then a non-productive ⊥?
you can't, unless I believe in the kind of constructions in this post. If I do, then your definition leaves me stuck! If you say "ones is the coinductive stream consisting of the pair (1,ones)" then I have to believe! I know ones is not _|_ by definition, and thus by induction I can show that it can't be the case that for any value n I have n ones and then bottom. I can try to deny your claim only by denying the existence of coinductive streams.
I thought in haskell greatest and least fixed points coincide? – sclv Feb 6 '13 at 4:41
@sclv i'm not sure what that statement means in any particular context. The initial algebra of a functor coincides with the final coalgebra for sure--although the standard
constructions have differing cost models. – Philip JF Feb 6 '13 at 6:01
Ahh, this helps me to see why I was getting confused. You really can't get away with set theoretic semantics at all once you start thinking about _|_---which ought to be plain, but
there we go. Thank you for the non-recursive LF and GF definitions as well! The symmetry there is connecting many things for me. – J. Abrahamson Feb 6 '13 at 16:39
For more on proof techniques over coinductive structures (expanding on Philip JF's very nice answer), you can take a look at Hinze and James' "Proving the Unique Fixed-Point Principle Correct": http://www.cs.ox.ac.uk/people/daniel.james/unique/unique-tech.pdf
|
{"url":"http://stackoverflow.com/questions/14718228/are-denotational-semantic-mappings-decidable","timestamp":"2014-04-21T12:34:46Z","content_type":null,"content_length":"76892","record_id":"<urn:uuid:998d5663-4710-4486-8eed-53b4adf7f69c>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00269-ip-10-147-4-33.ec2.internal.warc.gz"}
|
|
{"url":"http://openstudy.com/updates/5183b252e4b09587cd391081","timestamp":"2014-04-20T18:44:38Z","content_type":null,"content_length":"63472","record_id":"<urn:uuid:9ec33473-e010-4195-9552-aa54e035b72f>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00606-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Kaleidoscopic (escape time) IFS
Description: An interesting class of fractals
« Reply #30 on: May 06, 2010, 01:16:09 AM »
Nice rendering! (as usual
Not a julia (that gives even more possibilities), that was a power 8 bulb and the tetra-sierpinski.
A combination with TGlads box is one more option that gives another endless variety...
(only a quick test, no more time yet)
« Reply #31 on: May 06, 2010, 10:48:55 AM »
Great shapes indeed!
Can't wait to play with these algorithms too...
« Reply #32 on: May 06, 2010, 09:17:27 PM »
Great shapes indeed!
Can't wait to play with these algorithms too...
If you like I'll make my script available. But it is not easy to use (at first).
Good stuff here...
If you remember some of my escape-time Sierpinskis, what you can do in general is define a set of vertices (e.g., those for an octahedron). Then for a given "z", find the nearest vertex. Reflect off
the vertex, etc. using 2*(vertex)-point or something like that.
Yes I remember
What you do is perhaps (I'm not sure) more general, but I was worrying about the continuity of the distance field. In fact I'm using a very simple DE-based raymarcher that doesn't work well with a discontinuous distance field.
Knighty, do you have some background information on how you constructed the generator code?
I mean, how does does
if(x-z<0){x1=z;z=x;x=x1;} etc etc
generate the Sierpinsky ? ..and the code for the Menger sponge is also a bit of magic in my eyes..
I was trying to generate an octahedral Sierpinsky, but so far all my efforts have failed..
It is maybe easier to visualize things in 2D. Let's take a 2D variant of msltoe's algorithm, but for the Sierpinski triangle. The coordinates of the centers of scaling are (1,0), (-0.5,sqrt(3)/2) and (-0.5,-sqrt(3)/2) (up to a rotation and scaling). In msltoe's algorithm, at each iteration you take the scaling center that is closest to the current position, then do the scaling (stretch) with respect to that closest scaling center. But in our case the three centers are symmetric. There are 6 axes of symmetry. The idea is that choosing the nearest center of scaling is equivalent to reflecting the current position about (some of) the symmetry axes in order to make it nearer to one of the centers. In the case of the Sierpinski triangle, two symmetries are enough: for example, the one that goes through (0,0) and (-0.5,-sqrt(3)/2) and the one that goes through (0,0) and (-0.5,sqrt(3)/2). They will "transport" any point of the plane into the area where it is closer to (1,0) than to the other two centers.
In the case of the octahedral Sierpinski, there are 9 (need confirmation)
With rotation, choosing the "minimal" set or the "full" set of planes of symmetry gives different results. The "full" set gives the most symmetric fractals.
[DEL:That said, I realize that rotations before folding are equivalent to rotating the folding planes in the inverse direction. That means rotations are not necessary (but convenient) and can lead to
some optimizations.:DEL](EDIT: this is not true.
I'm just beginning to explore the math behind these fractals. They must have something to do with Coxeter groups, among which are the symmetries of the Platonic solids, and with paper-folding maths (http://
Next step: "origami fractals". Guess why!
Nice rendering! (as usual
Not a julia (that gives even more possibilities), that was a power 8 bulb and the tetra-sierpinski.
A combination with TGlads box is one more option that gives another endless variety...
(only a quick test, no more time yet)
Thanks. Combining fractals is the coolest idea of all. Is it Tglad box then Mandelbulb or the reverse?
« Last Edit: May 12, 2010, 09:04:13 PM by knighty, Reason: I have to shut my mouth sometimes! »
« Reply #33 on: May 06, 2010, 10:01:58 PM »
Thanks. Combining fractals is the coolest idea of all. Is it Tglad box then Mandelbulb or the reverse?
The first one was 1 iteration pow8 bulb and 4 iterations sierpinski tetrahedron, the second was imho sierpinski tetrahedron and mandbox in that row. Both with negative scalings.
Btw, many thanks for the code. Just have to reprogram many things to get 9 parameters changed!
I think that will give enough combinations, for now i have to play around with combis.
Another one like the second image, but with default parameters, means scaling 2 for both:
« Last Edit: May 07, 2010, 03:41:11 PM by Jesse »
« Reply #34 on: May 06, 2010, 11:09:53 PM »
Just have to reprogram many things to get 9 parameters changed!
I think that will give enough combinations, for now i have to play around with combis.
The number of combinations is becoming very big. I'm thinking about using metaprogramming. I'm working on the idea of generating GPU shaders on this basis. For CPU native code, a just in time script
compiler would be useful.
« Reply #35 on: May 07, 2010, 08:21:22 AM »
Thanks for the explanation.
Code for the octahedral Sierp :
if x+y<0, x1=-y,y=-x,x=x1,endif
if x+z<0, x1=-z,z=-x,x=x1,endif
if x-y<0, x1=y,y=x,x=x1,endif
if x-z<0, x1=z,z=x,x=x1,endif
Here it is :
(124.01 KB, 960x960 - viewed 130 times.)
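A direct transcription of that folding step into Python, in case it is easier to read than the pseudocode; this is only what the four lines above say, with each test being a reflection across one of the planes x+y=0, x+z=0, x-y=0 and x-z=0, and not Jos Leys' actual program.

def octahedral_fold(x, y, z):
    if x + y < 0:
        x, y = -y, -x
    if x + z < 0:
        x, z = -z, -x
    if x - y < 0:
        x, y = y, x
    if x - z < 0:
        x, z = z, x
    return x, y, z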
« Reply #36 on: May 08, 2010, 08:34:18 PM »
Nice one
I'm working on the dodeca- and icosahedra-sierpinski. The folding set is a little bit more difficult to find.
« Reply #37 on: May 08, 2010, 08:55:55 PM »
It would be interesting to see if the folding plane solution produces the same results as the Julia vertex reflection for the icosahedron:
(163.23 KB, 600x600 - viewed 130 times.)
« Reply #38 on: May 09, 2010, 01:52:01 AM »
Thanks Jos for the octahedral folding. It gives some very interesting structures.
I've had a busy evening building my own mechanical tree - only managed the stump so far though
www.subblue.com - a blog exploring mathematical and generative graphics
« Reply #39 on: May 09, 2010, 11:39:47 AM »
Thanks Jos for the octahedral folding. It gives some very interesting structures.
I've had a busy evening building my own mechanical tree - only managed the stump so far though
<Quoted Image Removed>
<Quoted Image Removed>
This is absolutely awesome subblue.
It would be interesting to see if the folding plane solution produces the same results as the Julia vertex reflection for the icosahedron:
Well, it should. I'll post the results later in case I succeed.
« Reply #40 on: May 09, 2010, 12:00:13 PM »
whoa, GREAT subblue!
That's not too far away from a recursive Eiffel Tower
I guess, as is often the case, that construction would be surprisingly stable, due to its fractal nature
« Reply #41 on: May 09, 2010, 01:46:08 PM »
Quick test (I have no time :-() Thanx to Mikael Hvidtfeldt (Syntopia) and Knighty!
« Reply #42 on: May 09, 2010, 02:30:21 PM »
Thanks for the animation visual. Cool!
Finally I've obtained a good folding planes set for the dodecahedra-sierpinski. Their normal vectors are:
(phi^2,1,-phi) , (-phi,phi^2,1) , (1,-phi,phi^2) , (-phi*(1+phi),phi^2-1,1+phi) , (1+phi,-phi*(1+phi),phi^2-1)
and the x=0, y=0 and z=0 planes.
The center of scaling should be (1,0,phi) for the dodeca sirpinski.
Phi is the golden ratio (phi=(1+sqrt(5))/2).
Here is the code I used (note that the vectors are normalized):
#define _IVNORM_ (0.5/_PHI_)
#define _PHI1_ (_PHI_*_IVNORM_)
#define _1PHI_ (_IVNORM_)
#define _PHI2_ (_PHI_*_PHI_*_IVNORM_)
#define _IKVNORM_ 1/sqrt((_PHI_*(1+_PHI_))^2+(_PHI_^2-1)^2+(1+_PHI_)^2)
#define _C1_ (_PHI_*(1+_PHI_)*_IKVNORM_)
#define _C2_ ((_PHI_*_PHI_-1)*_IKVNORM_)
#define _1C_ ((1+_PHI_)*_IKVNORM_)
for(i=0;i<MaxIteration && r<Bailout; i++){
#ifdef PRE_ROTATE
#ifdef POST_ROTATE
x=scale*x-(scale-1)*stc[0];//stc is the center of scaling
return (sqrt(r)-2)*scale^(-i);
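For readers who want to experiment, here is a rough, self-contained Python version of the fold-scale-repeat distance estimator described in this post: reflect across the listed plane normals whenever the point is on the negative side, stretch about the center of scaling, and apply the same untangling formula as in the loop above. It is only a sketch assuming the standard plane fold; the plane normals, the center (1,0,phi) and the scale default echo values quoted in the post, while the bailout and iteration count are arbitrary illustrative choices, as are the function names.

import math

PHI = (1 + math.sqrt(5)) / 2

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def fold(p, n):
    # Reflect p across the plane through the origin with unit normal n
    # whenever p lies on the negative side of it.
    d = dot(p, n)
    if d < 0:
        p = tuple(pc - 2 * d * nc for pc, nc in zip(p, n))
    return p

# The folding planes quoted above, plus the coordinate planes x=0, y=0, z=0.
NORMALS = [normalize(v) for v in [
    (PHI ** 2, 1, -PHI), (-PHI, PHI ** 2, 1), (1, -PHI, PHI ** 2),
    (-PHI * (1 + PHI), PHI ** 2 - 1, 1 + PHI),
    (1 + PHI, -PHI * (1 + PHI), PHI ** 2 - 1),
    (1, 0, 0), (0, 1, 0), (0, 0, 1),
]]

def distance_estimate(p, center=(1.0, 0.0, PHI), scale=PHI ** 2,
                      max_iter=30, bailout=10000.0):
    i = 0
    r2 = dot(p, p)
    while i < max_iter and r2 < bailout:
        for n in NORMALS:
            p = fold(p, n)
        # x = scale*x - (scale-1)*center, componentwise, as in the loop above
        p = tuple(scale * pc - (scale - 1) * cc for pc, cc in zip(p, center))
        r2 = dot(p, p)
        i += 1
    return (math.sqrt(r2) - 2) * scale ** (-i)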
Now some results:
dodeca01. center of scaling (1,1,1); scale=phi^2
dodeca02. center of scaling (1,1,1); scale=2*phi
Icosa01. center of scaling (1,0,phi); scale=2
Icosa02. center of scaling (1,0,phi); scale=phi^2
« Last Edit: May 20, 2010, 11:19:11 PM by knighty, Reason: Confusion between dodeca and icosa (thaks Jos Leys) »
« Reply #43 on: May 09, 2010, 02:40:19 PM »
Wanted to say two things:
1- In order to get interesting fractals, it's not necessary for the planes to go through (0,0,0). Any set of planes may give good fractal shapes. That's what I've called "origami fractals"
2- The distance estimate is very good and doesn't need (in general) to be scaled down. In some cases (when bailout is low) you need to scale it down a little (say descale=0.9 or 0.95).
« Reply #44 on: May 09, 2010, 03:33:25 PM »
Do you have the code for icosa also?
|
{"url":"http://www.fractalforums.com/ifs-iterated-function-systems/kaleidoscopic-(escape-time-ifs)/30/","timestamp":"2014-04-19T19:39:26Z","content_type":null,"content_length":"102171","record_id":"<urn:uuid:dee156e7-d45b-4e9c-a23e-079de31f07a8>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00624-ip-10-147-4-33.ec2.internal.warc.gz"}
|
In Pursuit of the Unknown: 17 Equations that Changed the World
In the beginning of In Pursuit of the Unknown: 17 Equations that Changed the World, Ian Stewart claims that “you don’t need to be a rocket scientist to appreciate the poetry and beauty of a good,
significant equation.” He subsequently dangles the provocative description “this is the story of the ascent of humanity, told through 17 equations.” What is a reader — regardless of whether that
reader is a math phobe or a research mathematician — to do, but keep reading? After all, the math phobe has little to lose and the research mathematician has a tremendous amount to gain.
Does Stewart succeed with his bold claim? Absolutely! Stewart effortlessly strikes a conversational tone that makes accessible the content and context of some high level equations from mathematics,
physics, information theory and finance. Maxwell’s equations on electricity and magnetism, for example, gave birth to radio, radar and wireless communication, which has since fueled an entertainment
industry, enhanced military operations, helped doctors detect tumors, and helped archeologists locate ancient underground structures. The equation for the normal distribution has given society, for
better or worse, a means to understand the “average” person. The Fourier transform has yielded insight into DNA and earthquakes. The Navier-Stokes equation gave us jet planes, quiet submarines, and
medical advances concerning blood flow, while the wave equation has enabled us to find oil. Newton’s law of gravity birthed the Hubble telescope, the Mars rover, and GPS. The list of transformative
events that have been fueled by equations goes on and on and on.
Stewart thinks highly of his reader. He neither dumbs down the math nor bombards the reader with highly specialized vocabulary or notation. To be sure, there are equations in this book — as the title
acknowledges. But he includes pictures too, and lots and lots of prose.
The book is organized into seventeen chapters, each of which addresses the history, content and significance of a single equation. The first page of every chapter presents a graphic of the selected
equation complete with arrows identifying the ingredients. Also included are brief answers to three questions, “what does the equation say?” “why is the equation important?” and “what did the
equation lead to?” This reader-friendly introduction piques the reader’s interest in the topic as well as confidence in Stewart’s ability to get right to the point.
Once inside a chapter, the author does not hesitate to add a healthy dose of human-interest, telling us about the mathematical greats behind the equations. For example, rather than setting the scene
for imaginary numbers with, “the variable i equals the square root of minus one…,” the reader is introduced to Cardano, the “gambling scholar,” whose “mother tried to abort him, his son was beheaded
for killing his (the son’s) wife, … (who) gambled away the family fortune, … (and) was accused of heresy for casting the horoscope of Jesus.” In between, the reader is provided with a lay explanation
of how imaginary numbers may explain airflow around airplane wings.
What is there in Stewart’s book for the experienced mathematician? In Pursuit of the Unknown offers a noteworthy example of how to write about mathematics for a wide audience. The book also offers a
deep understanding of how equations have shaped modern civilization. Most mathematicians already know some of the facts and trivia that Stewart cites, but the incredible volume and breadth of
Stewart’s book almost guarantee that even experienced mathematicians have something to learn.
In Pursuit of the Unknown could be a strong contender for a liberal arts math course, a course on the history of mathematics, or simply an enjoyable read on the beach. Stewart’s tone is inviting, his
mathematical content substantial and his argument compelling. In the end, Stewart succeeds in breathing life into those immediately recognizable, but all too often little-understood, mathematical
objects known as “equations.”
Susan D’Agostino is an Assistant Professor of Mathematics at Southern New Hampshire University. Her essays have appeared in The Chronicle of Higher Education, MAA Focus and Math Horizons. She is
currently writing a math book for a general audience.
|
{"url":"http://www.maa.org/publications/maa-reviews/in-pursuit-of-the-unknown-17-equations-that-changed-the-world","timestamp":"2014-04-21T00:59:29Z","content_type":null,"content_length":"98595","record_id":"<urn:uuid:f33e53ee-50d1-45e3-a734-d987161c103e>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00274-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Calc 3D Pro
Calc 3D Pro 2.1.10
Calc 3D Pro Short Description
Calc 3D is a collection of mathematical tools for highschool and university. The calculator can do statistics, best fits, function plotting, integration. It handles vectors, matrices, complex
numbers, coordinates, regular polygons and intersections. For objects ( like point, line, plane and sphere) distances and intersections are calculated. Cartesian, spherical and cylindrical
coordinates can be transformed into each other.
Calc 3D Pro Details
Developer: Greuer Andreas
Version: 2.1.10
Platform: Windows 95/98/ME/NT/2000/XP
File Size: 2.9 Mb
License: Freeware
Date Added: January 10, 2011
- Calculator for statistics, function plotting, vectors, matrices, complex numbers, coordinates, intersections. For objects like point, line, plane and sphere, distances, intersections, volume, area of squares, and area of a triangle can be calculated.
|
{"url":"http://www.newfreedownloads.com/download-Calc-3D-Pro.html","timestamp":"2014-04-16T16:58:35Z","content_type":null,"content_length":"9425","record_id":"<urn:uuid:cd919e6e-d259-4b31-a31e-c3e299341630>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00408-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Shoreline, WA Algebra 2 Tutor
Find a Shoreline, WA Algebra 2 Tutor
...While working with a middle school student on ancient Egyptian history, for example, I make sure to emphasize how to use chapter summaries to pinpoint what to study for an upcoming exam, the
importance of learning bolded key terms, and the usefulness of section comprehension questions for focusin...
35 Subjects: including algebra 2, English, reading, calculus
...My name is Joslynn, and I am currently a student at community college. I plan to transfer to the University of Washington in a year and double major in Bioengineering and mechanical
engineering (I plan to go into bioprinting, so that's why there's the weird combination of majors). Also, I plan t...
12 Subjects: including algebra 2, chemistry, calculus, physics
If you want someone who can teach your child/student with great real world experience in Math and Physics, then that's me. I've worked at NASA Johnson Space Center training Astronauts in Space
Shuttle Systems like Guidance, Propulsion and Flight Controls. I have a Bachelor's in Aerospace Engineeri...
12 Subjects: including algebra 2, calculus, physics, geometry
...I also coach students through the college application process and enjoy helping them write their personal statement or essay. I've taught both beginning and intermediate SAT classes and also
have much experience working with ESL students, both children and adults. I'm a graduate of the University of Washington with a degree in neurobiology and I plan to attend dental school this
28 Subjects: including algebra 2, chemistry, writing, ESL/ESOL
...I have extensive experience in both science and foreign language that I'd love to share with others. I started my science career in 2009 with an Amgen Scholarship to research the pore-forming
unit of an acid-sensing taste channel at the molecular and cellular level. I graduated from Columbia Un...
30 Subjects: including algebra 2, Spanish, English, chemistry
|
{"url":"http://www.purplemath.com/Shoreline_WA_algebra_2_tutors.php","timestamp":"2014-04-18T15:44:17Z","content_type":null,"content_length":"24259","record_id":"<urn:uuid:1d499764-43f6-461b-936a-ac5bb1f3b5fc>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00210-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Katy ISD - CRHS
AP Calculus AB
Elizabeth Kana and Lori Bynum
AP Calculus BC
Bruce Eaton
Pre-Calculus PAP/GT
Sherri Scott, Darlene Sugarek, George Garner
Math Models (MMA)
Beverly Shepherd, Steven Fish, Stephanie Thapar
Algebra 2 Academic
Misty Fincher, Joe Fincher, Melissa Cerny, Missy Birch, Elizabeth Kana, Trisha Hammond
Algebra 1
Amber Schmidt, Jeremy Stahl, Luke McConn, Kayley Johns, Stephanie Thapar, Erica Myers, Keith Hutson, Trisha Hammond
Geometry PAP/GT
Lori Bynum, Vonda Perritt-Turner, and Nancy Lisk
AP Statistics
George Garner and Julie Chipman
Pre-Calculus Academic
Julie Chipman, Mary Birch, Melissa Cerny, Vonda Perritt-Turner
Geometry Academic
Sherri Scott, Vonda Perritt-Turner, Nancy Lisk, Ishan Rison, Steven Fish, Erica Myers, Keith Hutson, Trisha Hammond
Algebra 2 PAP/GT
Darlene Sugarek, Bruce Eaton, and Richard May
Topics in Mathematics
Beverly Shepherd, Luke McConn
Thapar, Stephanie
Algebra I & Math Models
|
{"url":"http://kisdwebs.katyisd.org/campuses/CRHS/teacherweb/Pages/categoryresults.aspx?Column=DivisionMulti&ColumnDisplayName=Grade%20Level%20or%20Groups&Value=Math","timestamp":"2014-04-18T21:33:18Z","content_type":null,"content_length":"22051","record_id":"<urn:uuid:d0d19787-bcd7-4078-afed-c468518bd345>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00645-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Stars and Slopes
1. Students will apply the knowledge of plotting data and obtaining a slope using a log-log coordinate system.
2. Students will determine the line of best fit from a set of data obtained from X-ray astronomy satellites.
3. Students will discover the relationship between slope and the classification of stellar objects.
Grade Level
10th to 12th + grades
Students should have learned the following algebraic concepts:
1. graphing a linear equation using slope and y-intercept
2. determining a line of best fit from a set of data
3. using logarithms
Students should have had an introduction to the concepts of physics and space astronomy.
Time Requirements
For each class of students, you will probably need at least 2 periods.
Many problems in physics, mathematics, engineering, and other fields are fundamentally the study of the relationship between two variables. For example, how the velocity of a falling object varies
with time; the angular distribution of radiant energy transmitted from a small hole; the pressure response frequency characteristic of a crystal telephone receiver. Such everyday applications involve
an independent variable (i.e., one that progressively changes such as time or frequency) and a dependent variable which is mathematically determined from the change in the independent variable in
some way (e.g., velocity or intensity).
By displaying the data in a graphic, the relationship of the dependent variable on the independent variable can be seen. The most powerful form of display is when the result is a straight line ---
which can always be converted quickly into a mathematical equation. However, obtaining a straight line curve may require the selection of very special types of graph paper or axis values.
There are different types of graph paper which can be used for the presentation of data. They each have their own advantages and disadvantages. The three most common types are "rectangular" (or "Cartesian") coordinates, polar coordinates, and logarithmic (or log) coordinates. This lesson will look at two of the three -- rectangular and logarithmic.
Day 1 focuses on log-log plotting and determining the slopes of such plots.
Day 2 delves into how to use log-log plots to gain insight into certain celestial objects of interest to X-ray astronomers.
Day 2 is still under development!
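For instructors who want a quick way to check hand-drawn best-fit lines, the slope on a log-log plot can also be computed directly. The short Python sketch below (using NumPy) fits a straight line to the logarithms of the data and reports the slope, which for a power law y = a*x^m is the exponent m. The sample data here are made up purely for illustration.

import numpy as np

# Illustrative power-law data: y = 3 * x**1.8
x = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])
y = 3.0 * x ** 1.8

# On log-log axes, log(y) = m*log(x) + log(a), so a linear fit gives the slope m.
slope, intercept = np.polyfit(np.log10(x), np.log10(y), 1)

print("slope (exponent):", round(slope, 3))      # about 1.8
print("prefactor:", round(10 ** intercept, 3))   # about 3.0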
Kaufmann, William J. III, Universe, Freeman and Company, 1994, pgs. 336-340
Kerrod, Robin, Encyclopedia of Science: The Heavens Stars, Galaxies, and the Solar System, Macmillan Publishing Company, 1991
Kondo, Herbert, The New Book of Popular Science Vol. 1, Grolier Incorporated, 1982, pgs. 174-190
Overbeck, C.J., Palmer, R.R., Stephenson, R.J., and White, M.W., 1963, Graphs and Equations, Selective Experiments on Physics, Central Scientific Company
Seward, Frederick D. and Charles, Philip A., Exploring the X-ray Universe, Cambridge University Press, 1995
The graphics and other information found within this lesson can also be found on Imagine the Universe! which is located on the World Wide Web. The URL for this site is http://imagine.gsfc.nasa.gov/.
The data were retrieved within The HEASARC Data Archive using W3Browse which is located on the World Wide Web. The URL for this site is http://heasarc.gsfc.nasa.gov/.
|
{"url":"http://imagine.gsfc.nasa.gov/docs/teachers/lessons/slopes/ss_main.html","timestamp":"2014-04-17T03:48:59Z","content_type":null,"content_length":"16733","record_id":"<urn:uuid:9be3fdc4-8bba-4663-b690-e5a1ef66826e>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00640-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Santa Monica Science Tutor
...I have also helped students out with their coursework in Physics and Finance. When I am tutoring, I enjoy getting to know the student and understanding the way the student learns. From my
experience, I have found many creative ways of explaining common problems.
14 Subjects: including physics, astronomy, calculus, geometry
...I've gotten a perfect score on the SAT Math section twice! Once in high school on the old format test, and again as an adult on the revised SAT. The problem a lot of students have with SAT math
is that they look at a problem and they don't know where to begin.
49 Subjects: including ACT Science, statistics, reading, English
...I taught music in secondary school and also became a certified Orff teacher for elementary school students. Today I sing avocationally in church choirs and occasionally with an international
choir that travels to different countries. There are some basics involved in sight singing that can be extremely helpful in learning most music quickly.
20 Subjects: including psychology, English, reading, writing
...Linear Algebra: My background in Mathematics is quite deep and extensive, and Linear Algebra happens to be my favorite of the sub-disciplines. It is where the true beauty of Mathematics begins
to be apparent, and where one can finally see the deep connections that exist among mathematical structures. I matriculated into the Doctoral Program in Mathematics at University of Illinois.
20 Subjects: including physics, reading, biochemistry, algebra 1
...I am very grateful for her time and dedication! Nov 06, 2010 Student: Phoebe was an excellent tutor. For about two months she tutored me for my upcoming GRE test.
51 Subjects: including ACT Science, Chinese, zoology, biostatistics
|
{"url":"http://www.purplemath.com/santa_monica_ca_science_tutors.php","timestamp":"2014-04-20T01:51:42Z","content_type":null,"content_length":"24024","record_id":"<urn:uuid:57695e5d-ffb0-4f13-8d0a-37fde2d204c3>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00497-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Nutley Geometry Tutor
Find a Nutley Geometry Tutor
...Do you have a student whose progress in reading is slower than expected or uneven with unexpected weaknesses, such as reading comprehension? Does your child have difficulty with spelling? Do
you have a student who has difficulties with writing such as generating or getting ideas onto paper, organizing writing and grammatical problems?
30 Subjects: including geometry, English, piano, reading
...College graduate in Physics. 1 year of calculus in high school, 2 years of calculus/analysis in university. Experienced tutoring precalculus and (mainly) calculus, starting from the bottom to
build a rock solid foundation. I was born in Spain, lifetime bilingual.
17 Subjects: including geometry, chemistry, calculus, Spanish
...Though varied approaches customized to your learning style, I will help you reach those breakthrough moments when topics that may have given you trouble suddenly become clearly understood.
Through my degree in Mechanical Engineering, I studied Differential and Integral calculus (AP Calc AB and B...
22 Subjects: including geometry, chemistry, calculus, physics
...In math, everything you learn builds on top of what you learned in previous years, and without that strong foundation, students can fall behind. When teachers explain something in class they
assume that the students have a certain knowledge about math based on what they learned in previous years...
21 Subjects: including geometry, calculus, statistics, accounting
...My clients come mainly from disciplines in academia and medicine, but also include executives, software developers, clothing designers, salespeople, a housekeeping crew, members of the
Consulate of Ecuador to NY, engineers at a major firm, underprivileged women at a non-profit providing job readi...
39 Subjects: including geometry, Spanish, English, reading
|
{"url":"http://www.purplemath.com/Nutley_geometry_tutors.php","timestamp":"2014-04-16T19:05:40Z","content_type":null,"content_length":"23858","record_id":"<urn:uuid:b82345fc-c596-482b-ae4f-4c48e80686a2>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00296-ip-10-147-4-33.ec2.internal.warc.gz"}
|
IMA Newsletter #381
José Bico (École Supérieure de Physique et de Chimie Industrielles de la Ville de Paris) Capillary winding
Abstract: When a liquid droplet is deposited on a flexible sheet, the sheet may deform and spontaneously wrap the droplet. We propose to address a problem in connection with this "capillary origami"
experiment: does a flexible rod put in contact with a liquid droplet spontaneously winds itself around the droplet? In the positive situation, what is the maximum length that can be packed inside the
droplet? We will finally try to connect this problem to damping issues in spider webs.
Bjorn Birnir (University of California) Turbulent solutions of the stochastic Navier-Stokes equation
Abstract: Starting with a swirling flow we prove the existence of unique turbulent solutions of the stochastically driven Navier-Stokes equation in three dimensions. These solutions are not smooth
but Hölder continuous with index 1/3. The turbulent solutions give the existence of an invariant measure that determines the statistical theory of turbulence including Kolmogorov's scaling laws. We
will discuss how the invariant measure can be approximated leading to a implicit formula that can be used to compare with simulations and experiments.
Philip Boyland (University of Florida) Topological kinematics of point vortex motions
Abstract: Topological techniques are used to study the motions of systems of point vortices. After symplectic and finite reduction the systems become one-degree-of-freedom Hamiltonian. The phase
portrait of the reduced system is subdivided into regimes using the separatrix motions, and a braid representing the topology of all vortex motions in each regime is computed. This braid also
describes the isotopy class of the advection homeomorphism induced by the vortex motion in the surrounding fluid. The Nielsen-Thurston theory is then used to analyze these isotopy classes, and in
certain cases, lower bounds for the complexity of the chaotic dynamics (eg. topological entropy) of the advection are obtained. Similar analysis using the Nielsen-Thurston theory applied to the
stirring of two-dimensional fluids will also be briefly described. The results illustrate a mechanism by which the topological kinematics of large-scale, two-dimensional fluid motions generate
chaotic advection.
Stephen Childress (New York University) Some remarks on vorticity growth in Euler flows
Abstract: Motivated by some estimates of vorticity growth in axisymmetric flows without swirl, we re-examine the paired vortex model of singularity formation proposed by Pumir and Siggia for Euler
flows in three dimensions. The problem is reformulated as a generalized system of differential equations. No supporting solutions of the system are known, and it is suggested that core deformation
remains the most likely mechanism preventing the formation of a singularity.
Paul Clavin (UMR CNRS-Universites d'Aix-Marseille I&II) Ablative Rayleigh-Taylor instability
Abstract: Ablative Rayleigh-Taylor (R-T) instability is a special feature of the acceleration phase in inertial confinement fusion (ICF). Ablation stabilizes the disturbances with small wavelength,
introducing a marginal wavelength. Due to a large temperature ratio, the conduction length-scale varies strongly across the wave, and the attention is limited to the intermediate acceleration regimes
for which the length-scale of the marginal wavelength is in-between the smallest and the largest conduction length-scale. The analysis is performed for a strong temperature dependence of thermal
conductivity. At the leading order, the ablation front appears as a vortex sheet separating two potential flows^ 1, 2, and the free boundary problem takes the form of an extension of the pure R-T
instability with unity Atwood number and zero surface tension. It shows also some analogies with the Kelvin-Helmholtz instability described by the Birkhoff-Rott equation. However, the hot flow of
ablated matter introduces a damping at small wavelength which has a form different from the usual damping (as the surface tension for example). The nonlinear patterns are obtained by the same
boundary integral method as used for revisiting the R-T instability ^3. Unfortunately, a curvature singularity develops within a finite time, even though the short wavelengths are stabilised. Scaling
laws are derived from numerical fitting and a self-similarity solution of the problem is exhibited close to the critical time ^4. The occurrence of a curvature singularity indicates that the
modifications to the inner structure of the vortex sheet can no longer be neglected. A non-local curvature effect is obtained by pushing the asymptotic analysis to the next order ^5. The
corresponding small pressure correction is shown to prevent the occurrence of the curvature singularity within a finite time.
Itai Cohen (Cornell University) Investigating dislocation dynamics in degenerate crystals of dimer colloids
Abstract: Colloidal suspensions consist of micron sized solid particles suspended in a solvent. The particles are Brownian so that the suspension as a whole behaves as a thermal system governed by
the laws of statistical mechanics. The thermodynamic nature of these systems allows scientists to use colloidal suspensions as models for investigating numerous processes that typically take place on
the atomic scale but are often very difficult to investigate. In this talk I will describe how we use confocal microscopy techniques to investigate the structure and dynamics of these systems and
gain an understanding of dislocation nucleation and transport in colloidal crystals. Such dislocations are examples of singular point defects in 2D crystals and line defects in 3D crystals.
Peter Constantin (University of Chicago) The zero temperature limit of interacting corpora
Abstract: We consider examples of melts of corpora, that is collections of compacts each having finitely many degrees of freedom, such as articulated particles or n-gons. We associate to the melt the
moduli spaces of the corpora, compact metric or pseudometric spaces equipped with a Borel probability measure representing the phase space measure. We consider probability distributions on the moduli
spaces of such corpora, we associate a free energy to them, and show that under general conditions, the zero temperature limit of free energy minimizers are delta functions concentrated on a single
corpus, the ur-corpus. We give a selection principle for the ur-corpus. This is a generalization of the isotropic to nematic transition but we suggest that this language is appropriate for a larger
class of n-body interactions. This is work in progress with Andrej Zlatos.
Mark Dennis (University of Bristol) Topological singularities in optical waves
Abstract: Understanding of complicated spatial patterns emerging from wave interference, scattering and diffraction is frequently aided by insight from topology: the isolated places where some
fundamental physical quantity -- such as optical phase in a complicated light field -- is undefined (or singular) organize the rest of the field. In scalar wave patterns, the optical phase is
undefined at nodes at points in 2D, and lines in 3D, in general whenever 3 or more waves interfere. Similar singularities occur in optical polarization fields, and these quantized defects bear some
morphological similarity to defects in other systems, such as crystal dislocations, disclinations and quantum vortices in condensed matter physics, etc. I will describe the features of these optical
singularities, concentrating on three cases. The first will be three-dimensional optical speckle, familiar as the mottled pattern in reflected laser light. Natural speckle volume is filled with a
dense tangle of nodal phase singularity lines. We have found in computer simulations that these lines have several fractal scaling properties. Secondly, by controlling the interference using
diffractive holograms in propagating laser light, I will show how these nodal lines can be topologically shaped to give a range of loops, links and knots. Finally, I will describe the natural
polarization pattern that occurs in skylight (due to Rayleigh scattering in the atmosphere), originally discovered in the 1800s by Arago, Babinet and Brewster. This pattern contains polarization
singularities, whose global geometry has several physical interpretations and analogs.
Efi Efrati (Hebrew University) Elastic theory of non-Euclidean plates
Abstract: Thin elastic sheets are very common in both natural and man-made structures. The configurations these structures assume in space are often very complex and may contain many length scales,
even in the case of unconstrained thin sheets. We will show that in some cases, a simple intrinsic geometry leads to complex three-dimensional configurations, and discuss the mechanism shaping thin
elastic sheets through the prescription of an intrinsic metric. Current reduced (two-dimensional) elastic theories devised to describe thin structures treat either plates (flat bodies having no
structure along their thin dimension) or shells (non-flat bodies having a non-trivial structure along their thin dimension). We propose the concept of non-Euclidean plates, which are neither plates
nor shells, to approximate many naturally formed thin elastic structures. We derive a thin plate theory which is a generalization of existing linear plate theories for large displacements but small
strains, and arbitrary intrinsic geometry. We conclude by surveying some experimental results for laboratory-engineered non-Euclidean plates.
Jens Eggers (University of Bristol) A catalogue of singularities
Abstract: We survey rigorous, formal, and numerical results on the formation of point-like singularities (or blow-up) for a wide range of evolution equations. We use a similarity transformation of
the original equation with respect to the blow-up point, such that self-similar behaviour is mapped to the fixed point of an infinite dimensional dynamical system. We point out that analysing the
dynamics close to the fixed point is a useful way of classifying the structure of the singularity. As far as we are aware, examples from the literature either correspond to stable fixed points,
low-dimensional centre-manifold dynamics, limit cycles, or travelling waves. We will point out unsolved problems, present perspectives, and try to look at the role of geometry in singularity
Stephan Gekle (Universiteit Twente) High-speed jet formation after solid object impact
Abstract: A circular disc impacting on a water surface creates a remarkably vigorous jet. Upon impact an axisymmetric air cavity forms and eventually pinches off in a single point halfway down the
cavity. Immediately after closure two fast sharp-pointed jets are observed shooting up- and downwards from the closure location, which by then has turned into a stagnation point surrounded by a
locally hyperbolic flow pattern. Counter-intuitively, however, this flow is not the mechanism feeding the two jets. Using boundary-integral simulations we show that only the inertial focussing of the
liquid colliding along the entire surface of the cavity provides enough energy to eject the high-speed jets. With this in mind we show how the natural description of a collapsing void (using a line
of sinks along the axis of symmetry) can be continued after pinch-off to obtain a quantitative analytical model of jet formation.
Walter Goldburg (University of Pittsburgh) Hydraulic jump in a flowing soap film
Abstract: Joint work with S. Steers, J. Larkin, A. Prescott (University of Pittsburgh), T. Tran, G. Gioia, P. Chakraborty, G. Gioia, and N. Goldenfeld (University of Illinois, Urbana). A soap film
flows vertically downward under gravity and in a steady state. At all lengths of the film, its thickness h(x) decreases as the distance x from the top reservoir increases. But then h(x) abruptly
starts to increase and its downward flow velocity u(x) correspondingly decreases to a very small value. To explain this nonmonotonic behavior in h(x) and u(x), it is necessary to invoke the film's elasticity; one has a type of Marangoni effect. The transition from subcritical flow speed to a supercritical one at the thickening point is akin to the classical hydraulic jump. This transition
will be explained, but other findings, also to be described, are not yet understood.
Evan Hohlfeld (University of California) Point-instabilities, point-coercivity (meta-stability), and point-calculus
Abstract: For general non-linear elliptic PDEs, e.g. non-linear rubber elasticity, linear stability analysis is false. This is because of the possibility of point-instabilities. A point-instability
is a non-linear instability with zero amplitude threshold that occurs while linear stability still holds. Examples include cavitation, fracture, and the formation of a crease, a self-contacting fold
in an otherwise free surface, each of which represents a kind of topological change. For any such PDE, a point-instability occurs whenever a certain auxiliary scale-invariant problem has a
non-trivial solution. E.g. when sufficient strain is applied at infinity in a rubber (half-)space to support a single, isolated crease, crack, cavity, etc. Owing to scale-invariance, when one such
solution exists, an infinite number of geometrically similar solutions also exist, so the appearance of one particular solution is the spontaneous breaking of scale-invariance. We then identify this
(half-)space with a point in a general domain. The condition that no such solutions exist is called point-coercivity, and can be formulated as a non-linear eigenvalue problem that predicts the critical
stress for fracture, etc. And when point-coercivity fails for a system, the system is susceptible to the nucleation and self-similar growth of some kind of topological defect. Viewing fracture, etc.
as symmetry breaking processes explains their macroscopic robustness. Point-coercivity is similar to, but more general than, quasi-convexity, as it can be formulated for any elliptic PDE, not just
Euler-Lagrange systems (i.e. for out-of-equilibrium systems, and so defining meta-stability in a general sense). Indeed, these are just two examples of a host of point-conditions, the study of which
might be called point-calculus. Time allowing, I will show that for almost any elliptic PDE, linear- and point-instabilities exhaust the possible kinds of instabilities. The lessons learned from
elliptic systems will be just as valid for parabolic and hyperbolic systems since the underlying reason linear analysis breaks down (taking certain limits in the wrong order) holds for these systems
as well.
Mihaela D. Iftime (Boston University) On characteristic classes for the gravitational field and black holes
Abstract: Many physical theories have mathematical singularities of some kind. A spacetime singularity is "a place" where quantities that measure the gravitational field (e.g., spacetime curvature) "blow up". The prediction of a singularity, such as the big bang and the final state of black holes, is a signal that the classical gravitational theory has been pushed beyond the domain of its
validity, and that we need a quantum theory to correctly describe what happens near the singularity. While no black hole can be visualized (in the literal meaning of that word) a meaningful picture
of a black hole has been obtained by plotting curvature scalar polynomial invariants or Cartan scalars. These invariants have been primarily used in providing a local characterization of the
spacetime. In this talk I shall discuss the equivalence problem more rigorously, and define a set of characteristic cohomology classes for the gravitational field.
Mee Seong Im (University of Illinois at Urbana-Champaign) Singularities in Calabi-Yau varieties
Abstract: Calabi-Yau manifolds are currently being studied in theoretical physics to unify Einstein's general relativity and quantum mechanics. Vibrating strings in string theory live in
10-dimensional spacetime, with four of these dimensions being 3-dimensional observable space plus time and six additional dimensions being a Calabi-Yau manifold. In this talk, I will discuss orbifold
singularities on a Calabi-Yau variety and the topology of crepant resolutions using the McKay Correspondence.
Daniel D. Joseph (University of Minnesota) Viscous potential flow analysis of radial fingering in a Hele-Shaw cell
Abstract: The problem of radial fingering in two-phase gas/liquid flow in a Hele-Shaw cell under injection or withdrawal is studied here. The problem is analyzed as a viscous potential flow (VPF) in
which the potential flow analysis of Paterson 1981 and others is augmented to account for the effects of viscosity on the normal stress at the gas/liquid interface. The unstable cases in which gas is
injected into liquid or liquid is withdrawn from gas lead to fingers. This stability problem was previously considered by other authors with the viscous normal stress neglected. Here we show that the viscous normal stress should not be neglected; the normal stress changes the speed of propagation of the undisturbed interface, the growth rate, the number of fingers that grow the fastest, and the cut-off number above which fingers cannot grow.
Christophe Josserand (Université de Paris VI (Pierre et Marie Curie)) Singular behaviors in drop impacts
Abstract: I will discuss different singular behaviors that arise when one considers the impact of a drop on thin liquid films or a solid surface. For instance, singularities can be observed for low-velocity impacts on super-hydrophobic surfaces, related to classical surface singularities. I will then discuss in more detail the condition of prompt splash when an impact is made on a thin liquid
film. Self-similar behaviors are then exhibited which allow a simplified understanding of empirical scaling laws.
Randall D. Kamien (University of Pennsylvania) The geometry of topological defects
Abstract: The theory of smectic liquid crystals is notoriously difficult to study. Thermal fluctuations render them disordered through the Landau-Peierls instability, lead to anomalous momentum
dependent elasticity, and make the nematic to smectic-A transition enigmatic, at best. I will discuss recent progress in studying large deformations of smectics which necessitate the use of nonlinear
elasticity in order to preserve the underlying rotational symmetry. By recasting the problem of smectic configurations geometrically it is often possible to exploit topological information or,
equivalently, boundary conditions, to confront these highly nonlinear problems. Specifically, I will discuss edge dislocations, disclination networks in three-dimensionally modulated smectics, and
large angle twist grain boundary phases. Fortuitously, it is possible to make intimate comparison with experimental systems!
David Kinderlehrer (Carnegie Mellon University) What's new for microstructure
Abstract: Cellular structures coarsen according to a local evolution law, a gradient flow or curvature driven growth, for example, limited by space filling constraints, which give rise to random
changes in configuration. Composed of volumes, facets, their boundaries, and so forth, they are ensembles of singular structures. Among the most challenging and ancient of such systems are polycrystalline granular networks, especially those which are anisotropic, ubiquitous among engineered materials. It is the problem of microstructure. These are large-scale metastable systems, active across
many scales. We discuss recent work in this area, especially the discovery and the theory of the GBCD, the grain boundary character distribution, which offers promise as a predictive measure of
texture related material properties. There are many mathematical challenges and the hint of universality.
Arshad Kudrolli (Clark University) Experimental investigations of packing, folding, and crumpling in two and three dimensions
Abstract: We will discuss the packing and folding of a confined beaded chain vibrated in a flat circular container as a function of chain length, and compare with random walk models from polymer
physics. Time permitting, we will briefly discuss crumpling and folding structures obtained with paper and elastic sheets, studied with a laser-aided topography technique. We have shown that the
ridge length distribution is consistent with a hierarchical model for ridge breaking during crumpling.
Robert B. Kusner (University of Massachusetts) Lengths and crossing numbers of tightly knotted ropes and bands
Abstract: About a decade ago, biophysicists observed an approximately linear relationship between the combinatorial complexity of knotted DNA and the distance traveled in gel electrophoresis
experiments [1]. Modeling the DNA as tightly knotted rope of uniform thickness, it was suggested that lengths of such tight knots (rescaled to have unit thickness) would grow linearly with crossing
numbers, a simple measure of knot complexity. It turned out that this relationship is more subtle: some families of knots have lengths growing as the 3/4 power of crossing numbers, others grow linearly, all powers between 3/4 and 1 can be realized as growth rates, and it could be proven that the power cannot exceed 2 [2-5]. It is still unknown whether there are families of tight knots
whose lengths grow faster than linearly with crossing numbers, but the largest power has been reduced to 3/2 [6]. We will survey these and more recent developments in the geometry of tightly packed
or knotted ropes, as well as some other physical models of knots as flattened ropes or bands which exhibit similar length versus complexity power laws, some of which we can now prove are sharp [7].
References: [1] Stasiak A, Katritch V, Bednar J, Michoud D, Dubochet J "Electrophoretic mobility of DNA knots" Nature 384 (1996) 122 [2] Cantarella J, Kusner R, Sullivan J "Tight knot values deviate
from linear relation" Nature 392 (1998) 237 [3] Buck G "Four-thirds power law for knots and links" Nature 392 (1998) 238 [4] Buck G, Jon Simon "Thickness and crossing number of knots" Topol. Appl. 91
(1999) 245 [5] Cantarella, J, Kusner R, Sullivan J "On the minimum ropelength of knots and links" Invent. Math. 150 (2002) 257 [6] Diao Y, Ernst C, Yu X "Hamiltonian knot projections and lengths of
thick knots" Topol. Appl. 136 (2004) 7 [7] Diao Y, Kusner R [work in progress]
Norman Lebovitz (University of Chicago) The prospects for fission of self-gravitating masses
Abstract: The idea that a single, rotating, self-gravitating mass — like a star — can evolve into a pair of masses orbiting one another — like a double-star — was suggested over a century ago. The
elaboration of the mathematical details led to negative results and most astronomers abandoned this idea in the 1920's. The negative results are not decisive, however, and we discuss alternative
mathematical formulations of this problem and their prospects for positive outcomes.
John Lister (University of Cambridge) Capillary pinch-off of a film on a cylinder
Abstract: Much of the work on capillary pinch-off, and on other fluid-mechanical problems with changes in topology, has focused on situations that lead to finite-time singularities in the
neighbourhood of which there is some kind of similarity solution. Capillary instability in the absence of gravity of an axisymmetric layer of fluid coating a circular cylinder is, by contrast, an
example of an infinite-time singularity. Even more unusually, film rupture proceeds through an episodic series of oscillations that form a diverging geometrical progression in time, each of which
reduces the remaining film thickness by a factor of about 10.
Fernando Lund (University of Chile) Ultrasound as a probe of plasticity? The interaction between elastic waves and dislocations
Abstract: Plasticity in metals and alloys is a mature discipline in the mechanics of materials. However, it appears that current theoretical modeling lacks predictive power. If a new form of steel,
say, is fabricated, there appears to be no way of predicting its deformation and fracture behavior as a function of temperature, and/or cyclic loading. The root of this problem appears to be with the
paucity of controlled experimental measurements, as opposed to visualizations, of the properties of dislocations, the defects that are responsible for plastic deformation of crystals. Indeed, the
tool of choice in this area is transmission electron microscopy, which involves an intrusive measurement of specially prepared samples. Is it possible to develop non intrusive tools for the
measurement of dislocation properties? Could ultrasound be used to this end? This talk will highlight recent developments in this line of thought. Specific results include a theory of the interaction
of elastic, both longitudinal and transverse, bulk as well as surface, waves with dislocations, both in isolation and in arrays of large numbers, in two and three dimensions. Results for the isolated
case can be checked with experimental results obtained using stroboscopic X-ray imaging. The theory for the many-dislocations case constitutes a generalization of the standard Granato-Lücke theory of
ultrasound attenuation in metals, and it provides an explanation of otherwise puzzling results obtained with Resonant Ultrasound Spectroscopy (RUS). Application of the theoretical framework to
low-angle grain boundaries, that can be modeled as arrays of dislocations, provides an understanding of recently obtained results concerning the power law behavior of acoustic attenuation in
polycrystals. Current developments of instrumentation that may lead to a practical, non-intrusive probe of plastic behavior will be described.
Andreas Münch (University of Nottingham) Self similar rupture of thin films with slippage
Abstract: We recently developed a thin film model that describes the rupture and dewetting of very thin liquid polymer films where slip at the liquid/solid interface is very large. In this talk, we
investigate the singularity formation at the moment of rupture for this model, where we identify different similarity regimes.
David R. Nelson (Harvard University) Buckled viruses, crumpled shells and folded pollen grains
Abstract: The difficulty of constructing ordered states on spheres was recognized by J. J. Thomson, who discovered the electron and then attempted regular tilings of the sphere in an ill-fated
attempt to explain the periodic table. We first discuss how protein packings in buckled virus shells solve a related “Thomson problem”. We then describe the grain boundary scars that appear on
colloidosomes, drug delivery vehicles that represent another class of solution to this problem. The remarkable modifications in the theory necessary to account for thermal fluctuations in crumpled
amorphous shells of spider silk proteins will be described as well. We then apply related ideas to the folding strategies and shapes of pollen grains during dehydration when they are released from
the anther after maturity. The grain can be modeled as a pressurized high-Young-modulus sphere with a weak sector and a nonzero spontaneous curvature. In the absence of such a weak sector, these
shells crumple irreversibly under pressure via a strong first order phase transition. The weak sectors (both one and three-sector pollen grains are found in nature) eliminate the hysteresis and allow
easy rehydration at the pollination site, somewhat like the collapse and subsequent reassembly of a folding chair.
Jinhae Park (Purdue University) Static problems of the chiral smectic and bent core liquid crystals focusing on the role of the
spontaneous polarization
Abstract: In this talk, I will present mathematical modeling of ferroelectric liquid crystals and discuss existence and partial regularity results of minimum configurations in some special geometry.
I will then speak about the switching problem between ferroelectric states and derive a formula for the critical field. I will end my talk with the proof of a hysteresis loop between the spontaneous polarization and the electric field, which can be applied to other materials including ferroelectric solids and ferromagnets.
Thomas J. Pence (Michigan State University) Singularities associated with swelling of hyperelastic solids
Abstract: This talk will discuss certain singularities that arise in the solution to boundary value problems involving the swelling of otherwise hyperelastic solids. In this setting, both non-uniform
swelling and constrained swelling give rise to nonhomogeneous deformation in the absence of externally applied load. The standard singularities that are encountered in nonlinear elasticity may occur,
such as cavitation. Additional singularities also arise, such as loss of smoothness associated with the concentration of deformation on singular surfaces.
Leonid Pismen (Technion-Israel Institute of Technology) Resolving dynamic singularities: from vortices to contact lines
Abstract: When a physical object, which is perceived as a singularity on a certain level of mathematical description, is set into motion, a paradox may arise rendering dynamic description impossible
unless the singularity is resolved by introducing new physics in the singular core. This situation, appearing in diverse physical contexts, necessitates application of multiscale matching methods,
employing a simpler long-scale model in the far field and a short-scale model with more detailed physical contents in the core of the singularity. The law of motion can be derived within this
approach by applying a modified Fredholm alternative in a region large compared to the inner and small compared with the outer scale, and evaluating the boundary terms which determine both the
driving force and dissipation. I give examples of applying this technique to both topological (vortices) and non-topological (contact lines) singularities.
Michael Renardy (Virginia Polytechnic Institute and State University) An open problem concerning breakup of fluid jets
Abstract: We present a simple one-dimensional equation modeling slender jets of a Newtonian fluid in Stokes flow. It would be desirable to have a proof linking the asymptotics of surface tension
driven breakup to the behavior of the initial condition near the thinnest point of the jet. Despite the apparent simplicity of the equations, the problem is open. I shall discuss some partial results.
Sergio Rica (Centre National de la Recherche Scientifique (CNRS)) Weak turbulence of a vibrating elastic thin plate
Abstract: I will talk about a work in collaboration with G. During and C. Josserand on the long-time evolution of waves of a thin elastic plate in the limit of small deformation so that modes of
oscillations interact weakly. According to the theory of weak turbulence (successfully applied in the past to plasma, optics, and hydrodynamic waves), this nonlinear wave system evolves at long times
with a slow transfer of energy from one mode to another. We derived a kinetic equation for the spectral transfer in terms of the second order moment. We show that such a theory describes the approach
to an equilibrium wave spectrum and represents also an energy cascade, often called the Kolmogorov-Zakharov spectrum. We perform numerical simulations that confirm this scenario. Finally, I will
discuss recent experiments by A. Boudaoud and collaborators and N. Mordant.
John R. Savage (Cornell University) Dynamics of droplet breakup in a complex fluid
Abstract: The dynamics of droplet breakup in Newtonian fluids are described by the Navier-Stokes equation. Previous experiments have shown that in many cases the breakup dynamics follow a
self-similar behavior where successive drop profiles can be scaled onto one another. In visco-elastic systems however, the Navier-Stokes equation is not sufficient to describe breakup. In this talk
we will describe droplet breakup in a visco-elastic surfactant system which forms micellar, lamellar, and reverse-micellar phases at various concentrations. We present results of the dynamics of
breakup in this system and compare these to previously studied Newtonian systems.
David Schaeffer (Duke University) Chaos in a one-dimensional cardiac model
Abstract: Under rapid periodic pacing, cardiac cells typically undergo a period-doubling bifurcation in which action potentials of short and long duration alternate with one another. If these action
potentials propagate in a fiber, the short-long alternation may suffer abrupt reversals of phase at various points along the fiber, a phenomenon called (spatially) discordant alternans. Either
stationary or moving patterns are possible. Echebarria and Karma proposed an approximate equation to describe the spatiotemporal dynamics of small-amplitude alternans in a class of simple cardiac
models, and they showed that an instability in this equation predicts the spontaneous formation of discordant alternans. We show that for certain parameter values a degenerate steady-state/Hopf
bifurcation occurs at a multiple eigenvalue. Generically, such a bifurcation leads one to expect chaotic solutions nearby, and we perform simulations that find such behavior. Chaotic solutions in a
one-dimensional cardiac model are rather surprising--typically chaos in the cardiac system has occurred from the breakup of spiral waves in two dimensions.
Michael Siegel (New Jersey Institute of Technology) Calculation of complex singular solutions to the 3D incompressible Euler equations
Abstract: We describe an approach for the construction of singular solutions to the 3D Euler equations for complex initial data. The approach is based on a numerical simulation of complex traveling
wave solutions with imaginary wave speed, originally developed by Caflisch for axisymmetric flow with swirl. Here, we simplify and generalize this construction to calculate traveling wave solutions
in a fully 3D (nonaxisymmetric) geometry. Our new formulation avoids a numerical instability that required the use of ultra-high precision arithmetic in the axisymmetric flow calculations. This is
joint work with Russ Caflisch.
Jey Sivaloganathan (University of Bath) Singular minimisers in nonlinear elasticity and modelling fracture
Abstract: We present an overview of a variational approach to modelling fracture initiation in the framework of nonlinear elasticity. The underlying principle is that energy minimizing deformations
of an elastic body may develop singularities when the body is subjected to large boundary displacements or loads. These singularities often bear a striking resemblance to fracture mechanisms observed
in polymers. Experiments indicate that voids may form in polymer samples (that appear macroscopically perfect) when the samples are subjected to large tensile stresses. This phenomenon of cavitation
can be viewed as the growth of infinitesimal pre-existing holes in the material or as the spontaneous creation of new holes in an initially perfect body. In this talk we adopt both viewpoints
simultaneously. Mathematically, this is achieved by the use of deformations whose point singularities are constrained to be at certain fixed points (the "flaws" in the material). We show that, under
suitable hypotheses, the energetically optimal location for a single flaw can be computed from a singular solution to a related problem from linear elasticity. One intriguing consequence of the above
approach is that cavitation may occur at a point which is not energetically optimal. We show that such a disparity will produce configurational forces (of a type previously identified in the context
of defects in crystals) and conjecture that this may provide a mathematical explanation for crack initiation. Much of the above work is joint with S.J. Spector (S. Illinois University).
Dejan Slepčev (Carnegie Mellon University) Blowup dynamics of an unstable thin-film equation
Abstract: Long-wave unstable thin-film equations exhibit rich dynamical behavior: Solutions can spread indefinitely, converge to a steady droplet configuration or blow up in finite time. We will
discuss the properties of scaling solutions that govern the blowup dynamics. In particular, we will present how energy-based methods can be used to study the stability of self-similar blowup solutions
as well as other dynamical properties of the blowup solutions. Strong connections to studies of blowup behavior in other equations will be indicated.
Scott J. Spector (Southern Illinois University) Some remarks on the symmetry of singular minimizers in elasticity
Abstract: Experiments on elastomers have shown that triaxial tensions can induce a material to exhibit holes that were not previously evident. Analytic work in nonlinear elasticity has established
that such cavity formation may indeed be an elastic phenomenon: sufficiently large prescribed boundary deformations yield a hole-creating deformation as the energy minimizer whenever the elastic
energy is of slow growth. In this lecture the speaker will discuss the use of isoperimetric arguments to establish that a radial deformation, producing a spherical cavity, is the energy minimizer in
a general class of isochoric deformations that are discontinuous at the center of a ball and produce a (possibly non-symmetric) cavity in the deformed body. The key ingredient is a new
radial-symmetrization procedure that is appropriate for problems where the symmetrized mapping must be one-to-one in order to prevent interpenetration of matter.
Paul H. Steen (Cornell University) Singularity theory and the inviscid pinch-off singularity
Abstract: Whitney's theorem tells us that folds and cusps are generic in smooth mappings of a plane into a plane. Whitney's work builds on Morse's and is extended by Thom's classification of
singularities of mappings (singularity theory). To the extent that the pinch-off of an interface is a geometric singularity, it is natural to ask what singularity theory says about pinch-off. We
explore this question for axisymmetric surfaces. Curvature extrema, which coincide with either curvature crossings or with profile extrema, are features whose evolution can be tracked up to the
instant of singularity. A singularity theory classification is tested against vortex-sheet simulations (theory) and against curvatures extracted from images of evolving soap-films (experiment).
Saleh A. Tanveer (Ohio State University) A new approach to regularity and singularity questions for a class of non-linear evolutionary PDEs such
as 3-D Navier-Stokes equation
Abstract: Joint work with Ovidiu Costin, G. Luo. We consider a new approach to a class of evolutionary PDEs where the question of global existence or lack of it is tied to the asymptotics of the solution to
a non-linear integral equation in a dual variable whose solution has been shown to exist a priori. This integral equation approach is inspired by Borel summation of a formally divergent series for
small time, but has general applicability and is not limited to analytic initial data. In this approach, there is no blow-up in the variable p, which is dual to 1/t or some power 1/t^n; solutions are
known to be smooth in p and exist globally for p in R^+. Exponential growth in p, for different choice of n, signifies finite time singularity. On the other hand, sub-exponential growth implies
global existence. Further, unlike PDE problems where global existence is uncertain, a discretized Galerkin approximation to the associated integral equation has controlled errors. Further, known
integral solution for p in [0, p_0], numerically or otherwise, gives sharper analytic bounds on the exponents in p and hence a better estimate of the existence time for the associated PDE. We will
also discuss particular results for 3-D Navier-Stokes and discuss ways in which this method may be relevant to numerical studies of finite time blow-up problems.
Sigurdur Thoroddsen (National University of Singapore) Singular jets in free-surface flows
Abstract: Free-surface 'singular jetting' occurs in geometries where flow focusing accelerates the free surface symmetrically towards a line or a point. This is known to occur in a number of
configurations, such as during the collapse of free-surface craters and of granular cavities as well as for capillary waves converging at the apex of oscillating drops. Drops impacting onto
super-hydrophobic surfaces also generate such jets. We will show recent work on characterizing such jetting, in well-known and new jetting configurations. High-speed video imaging, with frame-rates
up to 1,000,000 fps, will be presented and used for precise measurement of jet size and velocity. The focus will be on three well-controlled flow configurations: during the crater collapse following the impact of a drop onto a liquid pool, and after the pinch-off of a drop from a vertical nozzle. Finally, we will show a new apex jet which is generated by the impact of a viscous drop onto a
lower-viscosity pool.
Konstantin Turitsyn (University of Chicago) Singularity formation in two-dimensional free surface dynamics
Abstract: Motivated by recent experiments on bubble pinch-off by Nathan Keim and Sid Nagel, we study the nonlinear dynamics of a two-dimensional collapsing air bubble surrounded by an ideal fluid. We show that the dynamics can lead to several distinct types of singularities: interface reconnections, cusps and wedges. We analyze the critical dynamics of singularity formation, and show that it is
described by universal critical exponents. Remarkably, there are strong similarities between our system and the Hele-Shaw type systems. These similarities support the conjecture that the critical
dynamics of the free interface is described by integrable equations.
Emmanuel Villermaux (IRPHE - Institut de Recherche sur les Phénoménes Hors Équilibre) Fragmentation under impact
Abstract: Fragmentation phenomena will be reviewed with a particular emphasis on processes occurring with liquids, those giving rise to drops (the case of solid fragmentation can also be discussed,
depending on the audience requests). Examples including impacts of different kinds, and raindrops will specifically illustrate the construction mechanism of the drop size distributions in the
resulting spray.
Barbara Wagner (Weierstraß-Institut für Angewandte Analysis und Stochastik (WIAS)) Patterns in dewetting liquid films: Intermediate and late phases
Abstract: We investigate the dynamics of a post-rupture thin liquid film dewetting on a hydrophobised substrate driven by van der Waals forces. The stability of the three-phase contact line is
discussed numerically and asymptotically in the framework of lubrication models by taking account of various degrees of slippage. The results are used to explain some experimentally observed
patterns. Finally, we present some recent studies of the impact of slippage on the late stages of the dynamics. Here, we present some novel coarsening behaviour of arrays of interacting droplets.
Guowei Wei (Michigan State University) Geometric flow approach to singularity formation and evolution
Abstract: Geometric singularities are ubiquitous in nature. The fascinating complexity of geometric singularities has attracted the attention of mathematicians, engineers and physicists alike for
centuries. Geometric singularities commonly occur in multiphase systems at the geometric boundaries. Their formation and evolution are often accompanied with topological changes. In this talk, we
argue that the theory of differential geometry of curves and surfaces provides a natural and unified description for the geometric singularities. We show that geometric flows, particularly, the
potential driving geometric flows offer a powerful framework for the theoretical analysis of singularity formation and evolution. Potential driving geometric flows, derived from the Euler-Lagrange
equation, balance the intrinsic geometric forces, i.e., surface tension, with potential forces. Geometric concepts, such as differentiable manifold, tangent bundle, mean curvature and Gauss
curvature, are utilized for the construction of generalized geometric flows. The driving potential can be the gravitation in describing the formation of droplets, or have a double-wall structure in a
phenomenological description of phase separation, or be a collection of atomistic interactions in a multiscale modeling of the solvation of biomolecules. Physical properties, such as free energy
minimization (area decreasing) and incompressibility (volume preserving), are realized in our paradigm of potential driving geometric flows. Finally, we discuss the application of potential driving
geometric flows to the multiscale analysis of protein folding.
(1) P. Bates, G.W. Wei and S. Zhao, Minimal molecular surfaces and their applications, J. Comput. Chem., 29, 380-391 (2008). (2) S. N. Yu, W. H. Geng and G.W. Wei, Treatment of geometric
singularities in implicit solvent models, J. Chem. Phys., 126, 244108 (13 pages) (2007). (3) P. W. Bates, Z. Chen, Y.H. Sun, G.W. Wei and S. Zhao, Potential driving geometric flows, J. Math. Biology,
in review (2008). (4) G.W. Wei, Generalized Perona-Malik equation for image restoration, IEEE Signal Processing Lett., 6, 165-167 (1999).
Jon Wilkening (University of California) Lubrication theory in nearly singular geometries: when should one stop optimizing a reduced model?
Abstract: Shape optimization plays a central role in engineering and biological design. However, numerical optimization of complex systems that involve coupling of fluid mechanics to rigid or
flexible bodies can be prohibitively expensive (to implement and/or run). A great deal of insight can often be gained by optimizing a reduced model such as Reynolds' lubrication approximation, but
optimization within such a model can sometimes lead to geometric singularities that drive the solution out of its realm of validity. We present new rigorous error estimates for Reynolds'
approximation and its higher order corrections that reveal how the validity of these reduced models depends on the geometry. We use this insight to study the problem of shape optimization of a sheet
swimming over a thin layer of viscous fluid.
Thomas Peter Witelski (University of Oxford) Some open questions on similarity solutions for fluid film rupture
Abstract: Finite-time topological rupture occurs in many models in fluid and solid mechanics. We review and discuss some properties of the self-similar solutions for such problems. Unresolved issues
regarding analytical forms of the solutions (stability and symmetry vs. asymmetry) and numerical calculation methods (shooting vs. global relaxation) will be highlighted. Further questions of
interest arise in post-rupture coarsening dynamics of dewetting thin films.
I need a simple formula to convert standard time (1:05pm) into military time (13:05).
I've read through several posts, but all I've found is reformatting or time-subtraction-type information. The reformatting works to an extent: it gives me the hours in military format but the minutes stay
Basically, all I need is:
Cell A1 = 1:05 p
Cell B1 = formula that shows/converts 1:05 p as 13:05
Can someone direct me to a helpful post or help me with this?
Probability is the chance that an event will occur, given as a mathematical ratio that states the number of possible events giving one outcome in relation to the total number of similarly possible events over all outcomes. This ratio is often used in gambling to determine the odds, or the degrees of advantage or disadvantage in a competition based on previous results.
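As a quick worked example of the relation between the two: odds of 99.7-to-1 against an outcome correspond to a probability of 1/(1 + 99.7), roughly 1%, while a probability of 1/3 corresponds to odds of 2-to-1 against.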
According to Leonard McCoy, in 2267, the chances of contracting Sakuro's Disease are literally billions to one. (TOS: "Metamorphosis")
In 2269, Spock put the probability of the Insectoid pod ship being dead at .997. (TAS: "Beyond the Farthest Star")
After succumbing to rapid aging, caused by Taurean headbands, Spock hypothesized that there was a possibility that the transporter could be used to restore the landing party to their original ages.
Unfortunately, at the time, the odds were against the aged crewmembers 99.7-to-1. (TAS: "The Lorelei Signal")
Later that year, Spock determined the probability of Harry Mudd being found on the planet Motherlode to be 81% ± .53, which was, in his words, "Mudd is probably there." (TAS: "Mudd's Passion")
Later yet that year, Spock determined a probability of approximately 82.5% that one of the members of the Vedala search party, assigned to find the Soul of the Skorr, was a saboteur. (TAS: "The Jihad")
In a tactical projection of possible future Romulan deployments along the Romulan Neutral Zone, Data determined that their ships were deployed to support a policy of confrontation designed to test
Federation defenses along the Neutral Zone. With this analysis, he projected a 90% probability that they would continue to pursue this policy. (TNG: "Data's Day")
In 2370, an alien visiting Deep Space 9 named Cos introduced a gambling machine to Martus Mazur that was able to manipulate the laws of probability. Jadzia Dax later discovered this abnormality when
she observed that over 80% of the solar neutrinos in the space station were spinning clockwise, when the given probability was that about 50% of them should be spinning clockwise and the other 50% counterclockwise. (DS9: "Rivals")
During the opening months of the Dominion War, Julian Bashir calculated that his shipmates had a 32.7% chance of surviving the war. (DS9: "A Time to Stand")
[SciPy-Dev] distributions.py expect
nicky van foreest vanforeest@gmail....
Fri Sep 14 15:21:46 CDT 2012
I am trying to implement the expect method in rv_frozen. To understand
the normal working of the expect method I tried the following:
from scipy.stats import geom, norm, gamma
print norm.expect(loc = 3,scale =5)
print gamma.expect(None, 4.5)
print gamma.expect(lambda x: x, 4.5)
print geom.expect(lambda x: x, 1./3)
This is the result:
Traceback (most recent call last):
File "expecttest.py", line 6, in <module>
print geom.expect(lambda x: x, 1./3)
File "/home/nicky/prog/scipy/scipy/stats/distributions.py", line
6375, in expect
self._argcheck(*args) # (re)generate scalar self.a and self.b
TypeError: _argcheck() argument after * must be a sequence, not float
So the first examples work, but the rv_discrete example doesn't. One
thing is that the _argcheck is not called in rv_continuous while it is
in rv_discrete. Removing this line results in another error:
Traceback (most recent call last):
File "expecttest.py", line 6, in <module>
print geom.expect(lambda x: x, 1./3)
File "/home/nicky/prog/scipy/scipy/stats/distributions.py", line
6394, in expect
low, upp = self._ppf(0.001, *args), self._ppf(0.999, *args)
TypeError: _ppf() argument after * must be a sequence, not float
What would be actually the right way to call the expect method for
rv_discrete? Or perhaps the other way around, should this method be
changed so that my example with geom works?
I extended rv_frozen with this method, and this appears to work well
for continuous rvs, but also fails for discrete rvs.
def expect(self, *args, **kwds):
    args += self.args
    return self.dist.expect(*args, **kwds)
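For what it is worth, the immediate TypeError goes away if the shape parameter is passed as a tuple rather than a bare float, since expect unpacks args with *args internally. A sketch against the scipy.stats expect signatures of that era (func first, then an args tuple); the frozen-wrapper variant below is only one possible way to forward the stored shape parameters and ignores any loc/scale kept in self.kwds:

from scipy.stats import geom, gamma

# shape parameters must be wrapped in a tuple so the internal *args unpacking works
print(geom.expect(lambda x: x, args=(1./3,)))   # mean of geom(p=1/3), about 3.0
print(gamma.expect(lambda x: x, args=(4.5,)))   # mean of gamma(a=4.5), about 4.5

# one possible rv_frozen forwarding that avoids the same problem:
def expect(self, func=None, **kwds):
    # self.args holds the frozen shape parameters; loc/scale stored in
    # self.kwds would still need to be merged into kwds
    return self.dist.expect(func, args=self.args, **kwds)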
Create Definitions for Variables and Functions
Mathematica has a very general notion of functions, as rules for arbitrary transformations. Values for variables are also assigned in this manner. When you set a value for a variable, the variable
becomes a symbol for that value.
Here is a simple transformation rule. It says: whenever you see , replace it by 3:
The variable has a value of 3.
Whenever you evaluate an expression, 3 is substituted for :
You can remove the rule by defining a new one:
The new rule says: whenever you see , replace it by . So far there are no rules associated with , so its value is itself.
Now if you evaluate , the rule for says to replace by , and the rule for says to replace by 4, so the result is , or 16:
If you change the value of , then the value of changes:
Now assign a value to , like this:
Since has already been assigned the value 3, the rule you have defined is "replace by 9", not "replace by ". So does not depend on :
This happened because when a rule is defined using = (Set), the right-hand side is evaluated before the rule is defined.
You can also define rules using := (SetDelayed), like this:
When a rule is defined with :=, the right-hand side is not evaluated before the rule is defined. So even if the symbol on the right-hand side already has a value, this new rule says: whenever you see the left-hand side, replace it with the right-hand side as written. So in this case, the new definition
depends on the variable it refers to:
Functions in Mathematica are defined by rules that act on patterns. Here is a simple one:
is a pattern in which stands for any expression (which is represented on the right-hand side by the name ). The rule says: if you have of any expression, replace it by that expression squared:
Here is a function with two arguments:
Always use := (SetDelayed) to define functions; otherwise the variables on the right-hand side may not represent the associated expressions on the left-hand side, since they will be evaluated before the rule is defined.
That happened because is 9 and is 3. This rule says that anything matching the pattern is replaced by 90:
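Since the inline inputs and outputs did not survive extraction above, here is an illustrative reconstruction of the = (Set) versus := (SetDelayed) behaviour, using made-up symbol names (x, y, z, f) that are not necessarily the ones on the original page:

x = 3;        (* Set: the right-hand side is evaluated now *)
y = x^2       (* y is immediately 9, because x is 3 when the rule is made *)
x = 4;
y             (* still 9: y does not depend on x *)

z := x^2      (* SetDelayed: the right-hand side is kept unevaluated *)
z             (* 16, because x is 4 at the moment z is evaluated *)

f[a_] := a^2  (* define functions with := so the pattern a_ is substituted at call time *)
f[5]          (* 25 *)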
FastTree: Computing Large Minimum Evolution Trees with Profiles instead of a Distance Matrix
Mol Biol Evol. Jul 2009; 26(7): 1641–1650.
Gene families are growing rapidly, but standard methods for inferring phylogenies do not scale to alignments with over 10,000 sequences. We present FastTree, a method for constructing large
phylogenies and for estimating their reliability. Instead of storing a distance matrix, FastTree stores sequence profiles of internal nodes in the tree. FastTree uses these profiles to implement
Neighbor-Joining and uses heuristics to quickly identify candidate joins. FastTree then uses nearest neighbor interchanges to reduce the length of the tree. For an alignment with N sequences, L
sites, and a different characters, a distance matrix requires O(N^2) space and O(N^2L) time, but FastTree requires just O(NLa + N√N) memory and O(N√N log(N)La) time. To estimate the tree's reliability, FastTree uses local
bootstrapping, which gives another 100-fold speedup over a distance matrix. For example, FastTree computed a tree and support values for 158,022 distinct 16S ribosomal RNAs in 17 h and 2.4 GB of
memory. Just computing pairwise Jukes–Cantor distances and storing them, without inferring a tree or bootstrapping, would require 17 h and 50 GB of memory. In simulations, FastTree was slightly more
accurate than Neighbor-Joining, BIONJ, or FastME; on genuine alignments, FastTree's topologies had higher likelihoods. FastTree is available at http://microbesonline.org/fasttree.
Keywords: minimum evolution, Neighbor-Joining, large phylogenies
Inferring phylogenies from biological sequences is the fundamental method in molecular evolution and has many applications in taxonomy and for predicting structure and biological function. In
general, sequences are identified as homologous and aligned, and then a phylogeny is inferred. Large alignments can be constructed efficiently, in time linear in the number of sequences, by aligning
the sequences to a profile instead of to each other, as with position-specific Blast or hmmalign (Schaffer et al. 2001; http://hmmer.janelia.org/).
Given an alignment, Neighbor-Joining and related minimum evolution methods are the fastest and most scalable approaches for inferring phylogenies (Saitou and Nei 1987; Studier and Keppler 1988; Desper and Gascuel 2002). All these methods rely on a distance matrix that stores an estimate of the evolutionary distance between each pair of sequences. Computing an entry in the distance matrix
requires comparing the characters at each position in the alignment and hence requires O(L) time, where L is the number of positions. Thus, the distance matrix takes O(N^2L) time to compute, where N
is the number of sequences, and O(N^2) space to store.
Given a distance matrix, Neighbor-Joining performs a greedy search for a tree of minimal length, according to a local estimate of the length of each branch (Gascuel and Steel 2006). More
specifically, Neighbor-Joining begins with the tree as a star topology, and it iteratively refines the tree by joining the best pair of nodes together, until the tree is fully resolved. Each step
considers O(N^2) possible joins, so the standard Neighbor-Joining algorithm requires O(N^3) time to infer a tree from a distance matrix. This can be reduced to O(N^2) or O(N^2logN) time, either by
using heuristics to consider fewer joins (Elias and Lagergren 2005, Evans et al. 2006) or by using additional O(N^2) memory (Simonsen et al. 2008, Zaslavsky and Tatusova 2008). FastME is another
minimum evolution method that takes only O(N^2) time (Desper and Gascuel 2002). With any of these optimized methods, the O(N^2L) time to compute the distance matrix dominates the time.
As DNA sequencing accelerates, the memory and CPU requirements of the distance matrix approach are becoming prohibitive. For example, an alignment of full-length 16S ribosomal RNAs (rRNAs) contains
over 160,000 distinct sequences (DeSantis et al. 2006; http://greengenes.lbl.gov). Similarly, the MicrobesOnline database, which provides phylogenies for all protein families from prokaryotic
genomes, contains protein families with over 100,000 distinct sequences (Alm et al. 2005; http://www.microbesonline.org/). The distance matrix for families with 100,000–200,000 members requires 20–80
GB of memory to store (a 4-byte floating-point value for each of N(N −1)/2 pairs). Although computers with this much memory are available, the typical node in a compute cluster has an order of
magnitude less memory. Furthermore, DNA sequencing technology is improving rapidly, and the distance matrix's size scales as the square of the family's size, so we expect these problems to become
much more severe. Finally, most of the methods that construct a tree from a distance matrix in O(N^2) time, such as FastME and the exact O(N^2) implementations of Neighbor-Joining, require additional
O(N^2) memory.
Whatever the method used, inferred phylogenies often contain errors, and so it is important to estimate the reliability of the result (Nei et al. 1998). The standard method to estimate reliability is
to use the bootstrap: to resample the columns of the alignment, to rerun the method 100–1,000 times, to compare the resulting trees to each other or to the tree inferred from the full alignment, and
to count the number of times that each split occurs in the resulting trees (Felsenstein 1985). (A split is the two sets of leaves on either side of an internal edge.) Unfortunately, bootstrapping is
a minimum of 100 times slower than the underlying phylogenetic inference, and comparing the trees to each other is also a nontrivial computation. In principle, the resampled trees could be compared
with the original tree in O(N^2) time and O(N) space by hashing the splits in the tree. However, the tree comparison tools that we are aware of require O(N^3) time and O(N^2) space.
Although building phylogenetic trees for large gene families is challenging, it is important to do so and not just to build trees for small sets of selected homologs. Analyzing all the sequences is
important for taxonomy, for predicting gene function, for classifying environmental DNA sequences, and for identifying functional residues (Eisen 1998, Lichtarge et al. 2003, Engelhardt et al. 2005,
von Mering et al. 2007). Furthermore, omitting sequences might change the biological interpretation of the result, especially in prokaryotes: because of horizontal gene transfer, it is difficult to
know which homologs are relevant without building a tree. Finally, for Web sites that support interactive use of phylogenetic trees, it is desirable to compute trees for all the genes beforehand (Li
et al. 2006; http://www.treefam.org/; http://www.microbesonline.org/).
Our Approach
We present FastTree, which uses four ideas to reduce the space and time complexity of inferring a phylogeny from an alignment (fig. 1). First, FastTree implements Neighbor-Joining by storing profiles
for the internal nodes in the tree instead of storing a distance matrix. Each profile includes a frequency vector for each position, and the profile of an internal node is the weighted average of its
children's profiles. For example, if we join two leaves i and j, and i has an A at a position and j has a G, then the profile of ij at that position will be 50% A and 50% G (and 0% for other
characters). The intuition behind using profiles is that the average of the distances between the sequences in two subtrees A and B equals the distance between profile(A) and profile(B) because
profile(A) is the average of the sequences in A. FastTree uses these profiles to compute the distances between internal nodes in the tree and also the total distance from a node to all other nodes,
which is also required for Neighbor-Joining. The profiles require a total of O(NLa) space, where a is the size of the alphabet (20 for protein sequences and 4 for nucleotide sequences), instead of O(
N^2) space for the distance matrix. However, the time required for Neighbor-Joining with exhaustive search rises from O(N^3) to O(N^3La) because every distance has to be recomputed on demand in O(La) time.
Second, FastTree uses a combination of previously published heuristics (Elias and Lagergren 2005, Evans et al. 2006) and a new “top-hits” heuristic to reduce the number of joins considered. Whereas
traditional Neighbor-Joining considers O(N^3) possible joins and optimized variants have considered O(N^2) possible joins (the size of the distance matrix), FastTree considers O(N√N) possible joins.
Thus, in theory, FastTree takes O(N√N log(N)La) time. In practice, FastTree is faster than computing the distance matrix. These heuristics require additional O(N√N) memory beyond the O(NLa) needed for the profiles.
Third, FastTree refines the initial topology with nearest neighbor interchanges (NNIs). Given an unrooted tree ((A, B), (C, D)), where A, B, C, and D may be sub-trees rather than individual
sequences, FastTree compares the profiles of A, B, C, and D and determines whether alternate topologies ((A, C), (B, D)) or ((A, D), (B, C)) would reduce the length of the tree. These NNIs are
similar to those of FastME, although FastME uses a distance matrix (Desper and Gascuel 2002). FastTree's NNIs take O(N log(N)La) additional time and O(NLa) additional space. In practice, the NNIs
take much less time than computing the initial topology, and they improve the quality of the tree.
Fourth, FastTree computes a local bootstrap value for each internal split ((A, B), (C, D)) by resampling the columns of the profiles and counting the fraction of resamples that support ((A, B), (C, D
)) over the alternate topologies ((A, C), (B, D)) or ((A, D), (B, C)). The local bootstrap has been used for maximum likelihood trees (Kishino et al. 1990) but cannot be used with distance matrices.
Computing the local bootstrap takes O(bNLa) time, where b is the number of bootstrap samples. Even with 1,000 resamples, this takes less than a minute for an alignment of over 8,000 protein sequences
and 394 columns. Thus, local bootstrap gives FastTree an additional 100-fold speedup over distance matrix methods, in which the entire computation must be repeated for each sample. However, the local
bootstrap should be interpreted more conservatively than the traditional bootstrap. Whereas traditional bootstrap estimates the probability that the split is correct (Efron et al. 1996), local
bootstrap estimates the probability that the split is correct if we assume that A, B, C, and D are subtrees of the true tree.
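Both the NNIs and the local bootstrap reduce to the same quartet comparison. A Python sketch (the function and argument names are ours; it assumes the standard minimum evolution criterion, in which the pairing with the smallest sum of within-pair distances wins, and dist stands for the corrected profile distances described under Materials and Methods):

import random

def best_topology(dist, A, B, C, D):
    # ((A,B),(C,D)) is preferred when d(A,B) + d(C,D) is the smallest of the three sums
    sums = {
        "AB|CD": dist(A, B) + dist(C, D),
        "AC|BD": dist(A, C) + dist(B, D),
        "AD|BC": dist(A, D) + dist(B, C),
    }
    return min(sums, key=sums.get)

def local_bootstrap(profiles, n_resamples, dist_over_columns, rng=random):
    # fraction of column resamples in which the current split (A,B)|(C,D) still wins;
    # profiles maps "A".."D" to per-column profiles, and dist_over_columns(pX, pY, cols)
    # computes the corrected profile distance restricted to the resampled columns
    n_cols = len(profiles["A"])
    support = 0
    for _ in range(n_resamples):
        cols = [rng.randrange(n_cols) for _ in range(n_cols)]  # resample columns with replacement
        d = lambda X, Y: dist_over_columns(profiles[X], profiles[Y], cols)
        if best_topology(d, "A", "B", "C", "D") == "AB|CD":
            support += 1
    return support / float(n_resamples)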
Below, we describe FastTree in more detail. Then, we show that in realistic simulations, FastTree is slightly more accurate than other minimum evolution methods such as Neighbor-Joining, BIONJ, or
FastME. On genuine alignments, FastTree topologies tend to have higher likelihoods than topologies from other minimum evolution methods, which also suggests that FastTree gives higher quality
results. For both simulated and genuine alignments, FastTree's heuristics do not lead to any measurable reduction in quality. For large families, FastTree requires less CPU time and far less memory
than computing and storing a distance matrix. Finally, we show that the local bootstrap is a good indicator of whether each split in the inferred topology is correct, and it is orders of magnitude
faster than the traditional bootstrap. We believe that FastTree is the first practical method for computing accurate phylogenies, including support values, for alignments with tens or hundreds of
thousands of sequences.
Materials and Methods
A rough outline of FastTree is shown at the bottom of figure 1. Before we explain how FastTree implements Neighbor-Joining, we explain how it computes distances between sequences and how it computes
distances between profiles. We then explain how it computes distances between internal nodes and how it calculates the Neighbor-Joining criterion, which is used to select the best join. We also
describe the heuristics that it uses to reduce the number of joins that it considers. Finally, we explain the steps after Neighbor-Joining: NNIs, the local bootstrap, and estimating the branch
lengths for the final topology. For formulas, derivations, and technical details, see supplementary note 1 (Supplementary Material online).
Distances between Sequences
FastTree uses both corrected and uncorrected distances. FastTree corrects the distances for multiple substitutions during NNIs, computing final branch lengths, and local bootstrap, but not during
Neighbor-Joining. For nucleotide sequences, FastTree's uncorrected distance d_u is the fraction of positions that differ, and the corrected distance is the Jukes–Cantor distance d = −(3/4) × log(1 − (4/3) × d_u). For protein sequences, FastTree estimates distances from the BLOSUM45 amino acid similarity matrix, in the spirit of Scoredist (Sonnhammer and Hollich 2005). We scaled the BLOSUM45 similarity matrix into a dissimilarity matrix such that the average dissimilarity between each amino acid and a random amino acid is 1 if we use the nonuniform
amino acid frequencies of biological sequences. The uncorrected distance d_u between two sequences is the average dissimilarity among nongap positions, and the corrected distance is d = −1.3 × log(1 − d_u). The intuitive justification is that the term within the logarithm ranges from 1 for identical sequences to an expected value of 0 for unrelated sequences, as with Jukes–Cantor distances for
nucleotide sequences. For both nucleotide and protein sequences, FastTree truncates the corrected distances to a maximum of 3.0 substitutions per site, and for sequences that do not overlap because
of gaps, FastTree uses this maximum distance.
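To make the nucleotide case concrete, here is a small Python sketch (FastTree itself is a C program; the function names and the handling of completely non-overlapping sequences are ours, while the Jukes–Cantor formula and the 3.0 cap follow the text above):

import math

MAX_DIST = 3.0  # cap on corrected distances, in substitutions per site

def uncorrected_nt_distance(seq1, seq2):
    # fraction of positions that differ, ignoring positions where either sequence has a gap
    diffs = compared = 0
    for a, b in zip(seq1, seq2):
        if a == '-' or b == '-':
            continue
        compared += 1
        if a != b:
            diffs += 1
    return None if compared == 0 else diffs / float(compared)

def jukes_cantor(d_u):
    # d = -(3/4) * log(1 - (4/3) * d_u), truncated at MAX_DIST;
    # non-overlapping sequences (d_u is None) also get MAX_DIST
    if d_u is None or d_u >= 0.75:
        return MAX_DIST
    return min(MAX_DIST, -0.75 * math.log(1.0 - (4.0 / 3.0) * d_u))

# for example, two 4-base sequences that differ at one position:
# jukes_cantor(uncorrected_nt_distance('ACGT', 'ACGA')) is about 0.304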
Distances between Profiles
FastTree uses profiles to estimate the average distance between the children of two nodes. The profile distance at each position is the average dissimilarity of the characters. The uncorrected
distance between two profiles is then the average of these position-wise distances, weighted by the product of the proportion of nongaps in each of the two profiles. FastTree computes the distance
between two profiles in O(La) time by using the eigendecomposition of the dissimilarity matrix.
The profile distance is identical to the average distance if the distances are not corrected for multiple substitutions and if the sequences do not contain gaps. For example, if we join two sequences A and B together, then the profile distance Δ(AB,C) = (d[u](A,C) + d[u](B,C))/2, the average of the distances from C to A and to B.
Of course, we do wish to correct for multiple substitutions, and in practice, large alignments always contain gaps. In these cases, the profile-based average becomes an approximation of the average
distances used in traditional minimum evolution methods.
First, consider the issue of correcting distances for multiple substitutions with a formula of the form d = −c × log(1 − d[u]). The average corrected distance between A and BC is (d(A,B) + d(A,C))/2 or the average
of two logarithms. However, FastTree cannot compute this average of logarithms from the profiles. Instead, FastTree uses the logarithm of averages. This is a close approximation if the distances are
short or if the distances are similar. If the distances are large, then distances between profiles may be more accurate than averages of distances (Müller et al. 2004).
Second, consider what happens if the sequences contain gaps. FastTree records the fraction of gaps at each profile position, and when computing distances, FastTree weights positions by their
proportion of nongaps. Traditional Neighbor-Joining implicitly weights the ungapped columns more highly. For example, there are alignments with gaps in which the profile distance Δ(AB,C) = 2/3, but (d[u](A,B) + d[u](A,C))/2 = 1/2.
Both approaches treat gaps as missing data, and it is not obvious which is preferable.
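A naive version of the profile distance with this gap weighting might look like the following sketch; the function name and array layout are assumptions for illustration, and the O(La) eigendecomposition speedup that FastTree uses is omitted for clarity.

```python
import numpy as np

def profile_distance(P1, w1, P2, w2, D):
    """Average dissimilarity between two profiles, weighting each position by the
    product of its non-gap fractions (a naive O(L*a^2) version; FastTree reduces
    this to O(L*a) using an eigendecomposition of D).

    P1, P2 : (L, a) arrays of character frequencies per position (rows sum to 1).
    w1, w2 : (L,) arrays with the fraction of non-gap characters at each position.
    D      : (a, a) dissimilarity matrix between characters.
    """
    per_pos = np.einsum('li,ij,lj->l', P1, D, P2)  # expected dissimilarity per position
    weights = w1 * w2
    total = weights.sum()
    return float((weights * per_pos).sum() / total) if total > 0 else None
```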
Distances between Internal Nodes
Neighbor-Joining operates on distances between internal nodes rather than on average distances between the members of subtrees. For example, after joining nodes A and B, Neighbor-Joining sets d[u](AB,C) = (d[u](A,C) + d[u](B,C) − d[u](A,B))/2. FastTree instead sets the profile of AB to P(AB) = (P(A) + P(B))/2 and computes the distance between nodes with d[u](i,j) = Δ(i,j) − u(i) − u(j),
where Δ(i,j) is the profile distance and u(i) is the “up-distance,” or the average distance of the node from its children. u(i) = 0 for leaves, and for balanced joins, u(ij) = Δ(i,j)/2. This
profile-based computation gives the exact same value of d[u](i,j) as Neighbor-Joining after any number of joins, as long as distances are not corrected for multiple substitutions and the sequences
contain no gaps.
FastTree actually uses weighted joins, as in BIONJ (Gascuel 1997), rather than the balanced joins. In BIONJ, the weight of each join depends on the variance of the distance between two joined nodes,
which can also be computed from the profiles. Also, with weighted joins, the formula for the up-distances becomes more complicated.
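To make the node bookkeeping concrete, the sketch below shows the simpler balanced-join variant — averaging the child profiles, setting u(ij) = Δ(i,j)/2, and computing d[u](i,j) = Δ(i,j) − u(i) − u(j) — reusing the hypothetical profile_distance helper from the sketch above; FastTree's actual BIONJ-style weighting is not reproduced here.

```python
def join_profiles_balanced(P_i, w_i, P_j, w_j, D):
    """Balanced (unweighted) join of nodes i and j: the new profile is the simple
    average of the child profiles and the new up-distance is Delta(i,j)/2."""
    delta_ij = profile_distance(P_i, w_i, P_j, w_j, D)
    P_new = 0.5 * (P_i + P_j)
    w_new = 0.5 * (w_i + w_j)
    u_new = 0.5 * delta_ij
    return P_new, w_new, u_new

def node_distance(node_i, node_j, D):
    """d_u(i,j) = Delta(i,j) - u(i) - u(j), the Neighbor-Joining distance between
    nodes, where each node is a (profile, non-gap weights, up-distance) triple."""
    (P_i, w_i, u_i), (P_j, w_j, u_j) = node_i, node_j
    return profile_distance(P_i, w_i, P_j, w_j, D) - u_i - u_j
```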
Calculating the Neighbor-Joining Criterion
Given the distances between nodes, Neighbor-Joining selects the join that minimizes the criterion d[u](i,j) −r(i) −r(j), where i, j, and k are indices of active nodes that have not yet been joined, d
[u](i,j) is the distance between nodes i and j, n is the number of active nodes, and r(i) = Σ_{k≠i} d[u](i,k)/(n − 2).
r(i) can be thought of as the average “out-distance” of i to other active nodes (although the denominator is n−2, not n − 1). Traditional Neighbor-Joining computes all N out-distances before doing
any joins, which takes O(N^2) time, and updates each out-distance after each join, which also takes O(N^2) time overall. To avoid this work, FastTree computes each out-distance as needed in O(La) time by using a "total profile" T, which is the average of all active nodes' profiles, as implied by r(i) = (n × Δ(i,T) − Δ(i,i) − (n−1) × u(i) − Σ_{j≠i} u(j))/(n − 2).
(Δ(i,i) is the average distance between children of i, including self-comparisons.) If there are gaps, then this is an approximation. FastTree computes the total profile at the beginning of
Neighbor-Joining in O(NLa) time, updates it incrementally in O(La) time, and recomputes it every 200 joins to avoid round-off error.
Notice that FastTree does not log correct the distances during Neighbor-Joining. We considered doing so, but it reduced FastTree's accuracy. Perhaps the profile-based out-distances become inaccurate:
the out-distance is an average of both far and small values, and so the log correction of the average distance is a poor estimate of the average of the log-corrected distances.
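A sketch of the out-distance computation from the total profile, following the relation given above, might look like the code below; it again uses the hypothetical profile_distance helper, takes the total profile as a precomputed argument, and ignores the complications introduced by gaps.

```python
def out_distance(i, nodes, T_profile, T_weight, D):
    """Approximate out-distance r(i) from the total profile T (the average of all
    active nodes' profiles), following
        r(i) = (n*Delta(i,T) - Delta(i,i) - (n-1)*u(i) - sum_{j!=i} u(j)) / (n-2).
    `nodes` maps a node id to its (profile, non-gap weights, up-distance) triple."""
    n = len(nodes)
    P_i, w_i, u_i = nodes[i]
    delta_iT = profile_distance(P_i, w_i, T_profile, T_weight, D)
    delta_ii = profile_distance(P_i, w_i, P_i, w_i, D)   # includes self-comparisons
    sum_u = sum(u for (_, _, u) in nodes.values()) - u_i  # sum of u(j) over j != i
    return (n * delta_iT - delta_ii - (n - 1) * u_i - sum_u) / (n - 2)
```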
Selecting the Best Join
FastTree uses heuristics to reduce the number of joins considered at each step to less than O(n). We first explain the “top-hits” heuristic. For each node, FastTree records a top-hits list: the nodes
that are the closest m neighbors of that node, according to the Neighbor-Joining criterion. By default, m = √N. FastTree avoids computing the top-hits lists for all N sequences by comparing every pair; instead, it assumes that if A and B have similar sequences, then the top-hits lists of A
and B will largely overlap. More precisely, FastTree computes the 2m top hits of A, where the factor of two is a safety factor. Then, for each node B within the top m hits of A that does not already
have a top-hits list, FastTree estimates the top hits of B by comparing B to the top 2m hits of A. In theory, this takes a total of O(N^2L/m + NmL) = O(N√N L) time to compute and O(Nm) = O(N√N) space to store the top-hits lists.
FastTree restricts the top-hits heuristic to ensure that a sequence's top hits are only inferred from the top hits of a “close enough” neighbor. Because of these restrictions, it is not clear how
many sequences will have O(m) close neighbors and it is not clear if the initial computation of top-hits lists will truly take O(NL) time. However, for large alignments, it takes less time than
computing the distance matrix, so in practice it takes less than O(N^2L) time.
FastTree maintains these top-hits lists during Neighbor-Joining. First, after a join, FastTree computes the top-hits list for the new node in O(mLa) time by comparing the node to all entries in the
top-hits lists of its children. Second, after a join, some of the other nodes’ top hits may point to an inactive (joined) node. When FastTree encounters these entries, it replaces them with the
active ancestor. Finally, as the algorithm progresses, the top-hits lists will gradually become shorter as joined nodes become absent from lists. Thus, FastTree periodically “refreshes” the top-hits
list by comparing the new node to all other nodes and also by comparing each of the new node's top hits to each other. Each refresh takes O(nLa + m^2La) time and ensures that the top-hits lists of O(
m) other nodes are of full length and up-to-date, so FastTree performs O(N/m) refreshes, which take a total of O(N^2La/m + NmLa) = O(N√N La) time.
Besides storing the list of top hits for each node, FastTree also remembers the best-known join for each node, as in FastNJ (Elias and Lagergren 2005). FastTree updates the best-known join whenever
it considers a join that involves that node. For example, while computing the top hits of A, it may discover that A,B is a better join than B,best(B).
Based on the best joins and the top-hits lists, FastTree can quickly select a join. First, FastTree finds the best m joins among the best-known joins of the n active nodes, without recomputing the
Neighbor-Joining criterion to reflect the current out-distances. In principle, this can be implemented in O(mlog N) time per join by using a priority queue. (FastTree simply sorts the entries, which
adds O(Nlog N) time per join or O(N^2 log N) time overall.) For those m candidates, FastTree recomputes the Neighbor-Joining criterion, which takes O(mLa) time, and selects the best. Furthermore,
FastTree does a local hill-climbing search to find a better join, as in relaxed Neighbor-Joining (Evans et al. 2006): given a join AB, it considers all joins AC or BD, where C is in top-hits(A) or D
is in top-hits(B). This can be beneficial because the out-distances change after every join, so the best join for a node can change as well. In theory, this takes O(log n) iterations (Evans et al. 2006), O(m log(n) La) time per join, or O(N√N log(N) La) time overall. Thus, it takes FastTree a total of O(N√N log(N) La) time to maintain the top-hits lists and to select all the joins.
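The following simplified sketch conveys the flavor of the top-hits construction; it omits FastTree's "close enough" restrictions and the best-known-join bookkeeping, and the criterion function (the out-distance-adjusted Neighbor-Joining criterion) is assumed to be supplied by the caller.

```python
import heapq
import math

def build_top_hits(nodes, criterion, m=None):
    """Simplified top-hits heuristic: seed nodes get exact 2m-hit lists (the factor
    of 2 is the safety factor), and the top hits of each close neighbor are estimated
    from the seed's 2m hits. `criterion(a, b)` returns the join criterion to minimize."""
    ids = list(nodes)
    if m is None:
        m = max(2, int(math.sqrt(len(ids))))
    top_hits = {}
    for a in ids:
        if a in top_hits:
            continue
        hits2m = heapq.nsmallest(2 * m, (b for b in ids if b != a),
                                 key=lambda b: criterion(a, b))
        top_hits[a] = hits2m[:m]
        # estimate the lists of a's close neighbors from a's 2m hits
        for b in hits2m[:m]:
            if b in top_hits:
                continue
            cand = [c for c in hits2m if c != b] + [a]
            top_hits[b] = heapq.nsmallest(m, cand, key=lambda c: criterion(b, c))
    return top_hits
```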
Nearest Neighbor Interchanges
After FastTree constructs an initial tree with Neighbor-Joining, it uses NNIs to improve the tree topology. During each round, FastTree tests and possibly rearranges each split in the tree, and it
recomputes the profile of each internal node. The profiles can change even if the topology does not change because FastTree recomputes the weighting of the joins.
By default, FastTree does log[2](N) + 1 rounds of NNIs. We chose a fixed number of rounds, instead of iterating until no more NNIs occur, to ensure fast completion. We chose roughly log[2](N) rounds
so that, on a balanced topology, a misplaced node could migrate all the way across the tree.
The minimum evolution criterion prefers ((A, B), (C, D)) over alternate topologies ((A, C), (B, D)) or ((A, D), (B, C)) if d(A,B) + d(C,D) < d(A,C) + d(B,D) and d(A,B) + d(C,D) < d(A,D) + d(B,C). Here,
FastTree uses log-corrected profile distances, rather than distances between nodes. The profile distances do not account for the distances within the nodes, but this does not affect the minimum
evolution criterion as it increases all distances d(A, ·) by the same amount.
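The choice among the three topologies is then a simple comparison, as in this sketch (dXY denotes the log-corrected profile distance between subtrees X and Y; the function name is an assumption for illustration):

```python
def best_nni(dAB, dCD, dAC, dBD, dAD, dBC):
    """Minimum-evolution choice among the three topologies around an internal edge,
    given log-corrected profile distances between the four subtrees A, B, C, D.
    Returns 'AB|CD', 'AC|BD', or 'AD|BC'."""
    scores = {
        'AB|CD': dAB + dCD,
        'AC|BD': dAC + dBD,
        'AD|BC': dAD + dBC,
    }
    return min(scores, key=scores.get)
```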
For larger topologies, FastTree must compute profiles for additional subtrees before doing this computation. For example, consider the topology ((A, (B, C)), D, E). After Neighbor-Joining, FastTree
has profiles for the internal nodes BC and ABC as well as for the leaves, but to test the split BC versus ADE requires the profile for DE. FastTree computes the profile for DE by doing a weighted
join of D and E, using the weighting of BIONJ for a 4-leaf tree (Gascuel 1997). FastTree stores these additional profiles along the path to the root and reuses them when possible. (FastTree computes
an unrooted tree but stores it as a rooted tree.) To ensure that a round of NNIs takes O(NLa) time and at most O(NLa) additional space, FastTree visits nodes in postorder (it visits children before
their parents).
Local Bootstrap
To estimate the support for each split, FastTree resamples the alignment's columns with Knuth's 2002 random number generator (http://www-cs-faculty.stanford.edu/knuth/programs/rng.c). FastTree counts
the fraction of resamples that support a split over the two potential NNIs around that node, much as it does while using NNIs to improve the topology. If a resample's minimum evolution criterion
gives a tie, then that resample is counted as not supporting the split.
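A simplified sketch of this procedure for a single split is shown below; for clarity it compares uncorrected column sums rather than log-corrected distances, and the per-column profile dissimilarities for the six subtree pairs are assumed to be precomputed NumPy arrays.

```python
import numpy as np

def local_bootstrap_support(col_dists, n_resamples=1000, seed=0):
    """Local bootstrap for one split AB|CD. `col_dists` maps the six subtree pairs
    ('AB','CD','AC','BD','AD','BC') to per-column profile dissimilarity arrays of
    equal length L."""
    rng = np.random.default_rng(seed)
    L = len(col_dists['AB'])
    support = 0
    for _ in range(n_resamples):
        cols = rng.integers(0, L, size=L)            # resample columns with replacement
        d = {pair: col_dists[pair][cols].sum() for pair in col_dists}
        current = d['AB'] + d['CD']
        if current < d['AC'] + d['BD'] and current < d['AD'] + d['BC']:
            support += 1                             # ties count as not supporting
    return support / n_resamples
```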
Branch Lengths
Once the topology is complete, FastTree computes branch lengths, with (d(A,C) + d(A,D) + d(B,C) + d(B,D))/4 − (d(A,B) + d(C,D))/2 for internal branches that separate subtrees A and B from subtrees C and D, and (d(A,B) + d(A,C) − d(B,C))/2 for the branch leading to leaf A, where B and C are the subtrees on the other side of that branch and d are log-corrected profile distances.
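Assuming the standard four-point and three-point formulas given above, the computation is a direct translation:

```python
def internal_branch_length(dAC, dAD, dBC, dBD, dAB, dCD):
    """Length of the internal branch separating subtrees {A,B} from {C,D},
    using log-corrected profile distances between the four neighboring subtrees."""
    return (dAC + dAD + dBC + dBD) / 4.0 - (dAB + dCD) / 2.0

def leaf_branch_length(dAB, dAC, dBC):
    """Length of the branch leading to leaf A, where B and C are the other two
    subtrees around the node that A attaches to."""
    return (dAB + dAC - dBC) / 2.0
```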
Unique Sequences
Large alignments often contain many sequences that are exactly identical to each other (Howe et al. 2002). Before inferring a tree, FastTree uses hashing to quickly identify redundant sequences. It
constructs a tree for the unique subset of sequences and then creates multifurcating nodes, without support values, as parents of the redundant sequences.
Testing FastTree
Sources of Alignments
We obtained sequences of members of Clusters of Orthologous Groups (COG) gene families (Tatusov et al. 2001) and members of Pfam PF00005 (Finn et al. 2006) from the fall 2007 release of the
MicrobesOnline database (http://www.microbesonline.org/). We aligned the sequences to the family's profile, using reverse position-specific Blast for the COG alignment (Schaffer et al. 2001) and
hmmalign for the PF00005 alignment (http://hmmer.janelia.org/). As the profiles only include positions that are present in many members of the family, these alignments do not contain all positions
from the original sequences. The 16S rRNA alignment is from greengenes and is trimmed with the greengenes mask (DeSantis et al. 2006; http://greengenes.lbl.gov).
To simulate alignments with realistic phylogenies and realistic gaps, we used the COG alignments. In each simulation, we selected the desired number of sequences from a COG alignment, we removed
positions that were over 25% gaps, we estimated a topology and branch lengths with PhyML (Guindon and Gascuel 2003), we estimated evolutionary rates across sites with PHYLIP's proml (http://
evolution.genetics.washington.edu/phylip.htm), we simulated sequences with Rose (Stoye et al. 1998), and we reintroduced the gaps from the original alignment. For simulations of 5,000 sequences, we
used FastTree instead of PhyML and we assigned evolutionary rates at random. For N = 10, we simulated 3,100 alignments (10 independent runs per family); for N = 50, we simulated 3,099 alignments; for
N = 250, we simulated 308 alignments; for N = 1,250, we simulated only 92 alignments because some PhyML jobs did not complete, and for N = 5,000, we simulated 7 alignments, as only seven families
contained enough nonredundant sequences. See supplementary note 2 (Supplementary Material online) for technical details.
CPU Timings
All programs used a single thread of execution. We used a computer with two dual-core 2.6-GHz AMD Opteron processors and 32 GB of RAM. However, for the two long-running maximum likelihood jobs in
table 6, we used a computer with a 2.4-GHz Intel Q6600 quad-core processor and 8 GB of RAM. The two machines have similar performance (about 20% different for FastTree).
CPU Time and Memory Usage for Computing Distances, Trees, and Support Values
To estimate performance on large alignments, we extrapolated from the largest feasible alignment for that method and its theoretical complexity. Inferring a tree from a distance matrix requires O(N^
2) space and either O(N^2) time (FastME and RapidNJ; Simonsen et al. 2008), O(N^2logN) time (Clearcut), or O(N^3) time (QuickTree and BIONJ). Computing bootstrap values from resampled trees with
PHYLIP's consense or with QuickTree's built-in bootstrap requires O(N^2) space and O(N^3) time. For QuickTree, which identifies and removes duplicate sequences, we used the number of unique sequences
rather than the total number.
Topological Accuracy in Simulations
We tested FastTree and other methods for inferring phylogenies on simulated protein alignments with realistic topologies, realistic gaps, varying evolutionary rates across sites, and between 10 and
5,000 sequences. The simulated alignments ranged from 64 to 1,009 positions (median 304), with 9% gaps, and on average, pairs of sequences within these alignments were 33% identical. For each
alignment and for each method, we counted the proportion of splits that were correctly inferred.
As shown in table 1, FastTree was significantly more accurate than other minimum evolution methods but was 1–2% less accurate than PhyML, a maximum likelihood method (Guindon and Gascuel 2003). We
will show that FastTree scales to far larger alignments than current maximum likelihood methods can handle. Furthermore, most of the splits that disagree between minimum evolution and maximum
likelihood trees are poorly supported (Nei et al. 1998). This is true in our simulations as well, even for the splits that PhyML inferred correctly but FastTree missed (data not shown). Thus, the
practical effect of these differences may be much less than 1–2%.
Table 1. Topological Accuracy of Tree-Building Methods on Simulated Protein Alignments with Gaps
After FastTree, the next best method was FastME, which like FastTree uses NNIs according to the minimum evolution criterion (Desper and Gascuel 2002). Depending on the number of sequences, FastTree
was slightly but significantly more accurate than FastME, or the two methods were tied. FastTree was up to 4% more accurate than BIONJ, a weighted variant of Neighbor-Joining (Gascuel 1997), when run
with FastTree's log-corrected distances. BIONJ with log-corrected distances was about as accurate as BIONJ with maximum likelihood distances from PHYLIP's protdist, so FastTree's distance measure is
adequate. Maximum likelihood distances that were estimated using a model with gamma-distributed rates gave poor results. FastTree was 1–5% more accurate than QuickTree, an implementation of
traditional Neighbor-Joining (Howe et al. 2002), and 4–6% more accurate than Clearcut, an implementation of relaxed Neighbor-Joining (Evans et al. 2006). Clearcut is more scalable than the other
distance matrix methods but not as scalable as FastTree (see below).
We obtained similar results with a standard set of simulations of ungapped nucleotide alignments (Desper and Gascuel 2002) or with ungapped protein simulations (supplementary tables 1 and 2;
Supplementary Material online). Furthermore, FastTree was more accurate than BIONJ regardless of how strongly the tree deviated from the molecular clock or how divergent the sequences were (
Supplementary fig. 1; Supplementary Material online).
These simulations also confirm that topologies can be inferred even when there are many more sequences than sites (Bininda-Emonds et al. 2001). The alignments with 5,000 sequences contained just
197–384 sites, yet FastTree identified 76.3% of the splits correctly.
Effectiveness of FastTree's Approximations and Heuristics
The simulations also let us test the internals of FastTree. First, FastTree's Neighbor-Joining phase should give roughly the same results as BIONJ with uncorrected distances. In practice, the two
methods had very similar accuracies, as did FastTree's Neighbor-Joining with exhaustive search (table 2). Thus, FastTree's accuracy was not affected by its approximations to handle gaps or by its
heuristics to reduce the number of joins considered. Heuristic search was also over 100 times faster: for an alignment of 1,250 proteins with 338 positions, the Neighbor-Joining phase of FastTree
took 1,551 s with exhaustive search but only 8 s with heuristic search.
Table 2. The Topological Accuracy of Variants of FastTree on Simulated Protein Alignments with Gaps
Second, using uncorrected distances only reduced the accuracy of BIONJ by around 3% (table 2). This is consistent with a previous simulation study of realistic topologies and protein alignments (
Hollich et al. 2005). Because using uncorrected distances leads to relatively few errors, FastTree can correct these errors by doing a few rounds of NNIs. Adding more rounds of NNIs did not increase
accuracy (table 2).
Quality of Trees for Genuine Alignments
To test the quality of FastTree's results on genuine protein families, we inferred topologies for alignments of 500 randomly selected sequences from large COGs. These alignments ranged from 65 to
1,009 positions, and within each alignment, the average pair of sequences was 27% identical. To quantify the quality of each topology, we used PhyML to optimize the branch lengths and compute the log
likelihood. We ran PhyML with the Jones, Taylor, and Thornton (JTT) model of amino acid substitution and four categories of gamma-distributed rates.
In table 3, we report the average difference in log likelihood between that method's trees and FastTree's trees. The methods are sorted by the average difference. All the distance matrix methods gave
significantly worse average likelihoods than FastTree (paired t-test, all P < 10^−20). Furthermore, as in the simulations, FastTree's approximations and heuristics did not reduce the quality of the
trees (supplementary table 3; Supplementary Material online). Overall, we found that for these genuine alignments, FastTree's topologies were of high quality.
Table 3. The Relative Log Likelihoods of Topologies Inferred for 310 Genuine Protein Alignments of 500 Sequences Each
We also tested the quality of FastTree trees for sets of 500 nonredundant sequences from a large 16S rRNA alignment (DeSantis et al. 2006; http://greengenes.lbl.gov). To quantify the quality of each
topology, we used PhyML with the Hasegawa–Kishino–Yano 85 model, which accounts for the higher rate of transitions over transversions, and four categories of gamma-distributed rates. FastTree found
topologies with higher likelihoods than most of the distance matrix methods (table 4). FastME did outperform FastTree slightly if given maximum likelihood distances that account for the higher rate
of transitions than transversions. Distinguishing transitions from transversions might further improve FastTree's topologies.
Table 4. The Relative Log Likelihoods of Topologies Inferred for 100 Genuine 16S rRNA Alignments of 500 Sequences Each
CPU Time and Memory Required to Infer Trees
We tested FastTree and other methods on a protein alignment from the COG database (COG2814), a domain alignment from PFam (PF00005), and a trimmed alignment of full-length 16S rRNAs (Tatusov et al.
2001; Finn et al. 2006; http://greengenes.lbl.gov). These alignments contain roughly 8,000–150,000 distinct sequences (table 5). Running the distance matrix methods on the larger alignments was not
feasible, so we extrapolated from smaller alignments (see Materials and Methods). The actual or estimated CPU time and memory usage are shown in table 6.
Table 5. Genuine Alignments for Performance Testing
The maximum likelihood methods we tested, PhyML 3 (Guindon and Gascuel 2003) and RAxML VI (Stamatakis 2006), did not complete in 50 days on the smallest of these problems, which took FastTree about 3
min. (Despite the high usage of virtual memory by PhyML, both PhyML and RAxML ran at over 99% CPU utilization.) Even for COG alignments of just 1,250 proteins, PhyML 3 typically took over a week.
Thus, current maximum likelihood methods do not scale.
Most of the methods require a distance matrix as input, so in practice, the running time is the time to compute a distance matrix plus the time to infer a tree. As shown in table 6, FastTree is over
1,000 times faster than computing maximum likelihood protein distances. For the 16S rRNA alignment, FastTree is as fast as computing Jukes–Cantor distances and over 100 times faster than computing
maximum likelihood distances with gamma-distributed rates.
For the 16S alignment, the only method other than FastTree that seems practical is Clearcut: all the other methods would require over 1,000 h or over 500 GB of memory. Clearcut itself is very fast—we
estimate that it might take only 12 h to infer a tree from the 16S distance matrix. However, Clearcut requires a distance matrix, and FastTree is faster than Clearcut once the cost of computing the
distance matrix is included. Clearcut would also require over 50 GB of memory—20 times as much as FastTree—which makes it impractical for us to run. Furthermore, Clearcut seems to be less accurate
than FastTree (tables 1, 3, and 4).
Effectiveness and Speed of the Local Bootstrap
To test whether FastTree's local bootstrap can identify which splits are reliable, we used the protein simulations with 250 sequences. We also computed the traditional bootstrap: we used PHYLIP's
seqboot to generate resampled alignments, we ran FastTree on each resample, and we counted how often each split in the original tree was present in the resampled trees. For both methods, we used
1,000 resamples. As shown in figure 2, both methods were effective in identifying correct splits. If we define “strongly supported” as a local bootstrap of ≥95%, then 65% of the correct splits were
strongly supported. Conversely, 97% of the strongly supported splits were correct.
Fig. 2. Distribution of support values for simulated alignments of 250 protein sequences with gaps. We compare the distribution of FastTree's local bootstrap and the traditional (global) bootstrap for correctly and incorrectly inferred splits. The right-most ...
To quantify how effective the measures were in distinguishing correct splits, we used the area under the receiver operating characteristic curve (AOC; DeLong and Clarke-Pearson 1988). The AOC is the probability that a true split will have a higher support value than an incorrect split, so a perfect predictor has an AOC of 1 and a random predictor has an AOC of 0.5.
The local bootstrap was far faster than the traditional bootstrap and required far less memory. The traditional bootstrap takes 100 times longer than tree inference plus the time to compare the trees
to each other. For the 16S rRNA alignment, performing the tree comparisons with PHYLIP's consense would take months and would require over 90 GB of memory (table 6). In contrast, FastTree computed
the local bootstrap in an hour and 2.4 GB.
Large Alignments
We have relied on profile-based multiple sequence alignment as the most practical method for large families. However, profile-based alignment is believed to be less accurate than progressive
alignment. Thus, whenever possible, biological inferences from these large trees should be confirmed with smaller, higher quality alignments. This also allows the use of slower but more accurate
tree-building methods and tests. For example, MicrobesOnline.org includes interactive tools for browsing large trees, for selecting relevant sequences, and for building progressive alignments and
maximum likelihood trees with those sequences.
Scaling to a Million Sequences
FastTree computes trees for the largest existing alignments, with on the order of 100,000 sequences, in under a day. However, given the rapid rate of DNA sequencing, we expect that alignments with
1,000,000 sequences will soon exist. For such large alignments, the major memory requirement will be the top-hits lists, which take O(N√N) space, and the running time should scale roughly as O(N√N log N) rather than O(N^2), so inferring a tree for a million rRNA sequences
should take 2–4 weeks. Tuning the top-hits heuristic might reduce this time.
FastTree makes it practical to infer accurate phylogenies, including support values, for families with tens or hundreds of thousands of sequences. These phylogenies should be useful for
reconstructing the tree of life and for predicting functions for the millions of uncharacterized proteins that are being identified by large-scale DNA sequencing. FastTree executables and source code
are available at http://www.microbesonline.org/fasttree; FastTree trees for every prokaryotic gene family are available in the MicrobesOnline tree-browser (http://www.microbesonline.org/); and a
FastTree tree for all sequenced full-length 16S rRNAs is available from the FastTree Web site and will be included in the next release of greengenes (http://greengenes.lbl.gov).
This work was supported by a grant from the US Department of Energy Genomics: GTL program (DE-AC02-05CH11231).
• Alm EJ, Huang KH, Price MN, Koche RP, Keller K, Dubchak IL, Arkin AP. The MicrobesOnline Web site for comparative genomics. Genome Res. 2005;15:1015–1022. [PMC free article] [PubMed]
• Bininda-Emonds OR, Brady SG, Kim J, Sanderson MJ. Scaling of accuracy in extremely large phylogenetic trees. Pac Symp Biocomput. 2001;2001:547–558. [PubMed]
• DeLong ER, Clarke-Pearson DL. Comparing the areas under two or more correlated receiver operating characteristic curves: a nonparametric approach. Biometrics. 1988;44:837–845. [PubMed]
• DeSantis TZ, Hugenholtz P, Larsen N, Rojas M, Brodie EL, Keller K, Huber T, Dalevi D, Hu P, Andersen GL. Greengenes, a chimera-checked 16S rRNA gene database and workbench compatible with ARB.
Appl Environ Microbiol. 2006;72:5069–5072. [PMC free article] [PubMed]
• Desper R, Gascuel O. Fast and accurate phylogeny reconstruction algorithms based on the minimum-evolution principle. J Comput Biol. 2002;9:687–705. [PubMed]
• Efron B, Halloran E, Holmes S. Bootstrap confidence levels for phylogenetic trees. Proc Natl Acad Sci USA. 1996;93:13429–13434. [PMC free article] [PubMed]
• Eisen JA. Phylogenomics: improving functional predictions for uncharacterized genes by evolutionary analysis. Genome Res. 1998;8:163–167. [PubMed]
• Elias I, Lagergren J. Fast neighbor joining. In: Proceedings of the 32nd International Colloquium on Automata, Languages and Programming (ICALP'05). Lecture Notes in Computer Science. Vol. 3580. Berlin/Heidelberg: Springer-Verlag; 2005. pp. 1263–1274.
• Engelhardt BE, Jordan MI, Muratore KE, Brenner SE. Protein molecular function prediction by Bayesian phylogenomics. PLoS Comput Biol. 2005;1:e45. [PMC free article] [PubMed]
• Evans J, Sheneman L, Foster J. Relaxed neighbor joining: a fast distance-based phylogenetic tree construction method. J Mol Evol. 2006;62:785–792. [PubMed]
• Felsenstein J. Confidence limits on phylogenies: an approach using the bootstrap. Evolution. 1985;39:783–791.
• Finn RD, Mistry J, Schuster-Böckler B, et al. 13 co-authors. Pfam: clans, web tools and services. Nucleic Acids Res. 2006;34:D247–D251. [PMC free article] [PubMed]
• Gascuel O. BIONJ: an improved version of the NJ algorithm based on a simple model of sequence data. Mol Biol Evol. 1997;14:685–695. [PubMed]
• Gascuel O, Steel M. Neighbor-joining revealed. Mol Biol Evol. 2006;23:1997–2000. [PubMed]
• Guindon S, Gascuel O. A simple, fast, and accurate algorithm to estimate large phylogenies by maximum likelihood. Syst Biol. 2003;52:696–704. [PubMed]
• Henikoff S, Henikoff JG. Amino acid substitution matrices from protein blocks. Proc Natl Acad Sci USA. 1992;89:10915– 10919. [PMC free article] [PubMed]
• Hollich V, Milchert L, Arvestad L, Sonnhammer EL. Assessment of protein distance measures and tree-building methods for phylogenetic tree reconstruction. Mol Biol Evol. 2005;22:2257–2264. [PubMed]
• Howe K, Bateman A, Durbin R. QuickTree: building huge neighbour-joining trees of protein sequences. Bioinformatics. 2002;18:1546–1547. [PubMed]
• Kishino H, Miyata T, Hasegawa M. Maximum likelihood inference of protein phylogeny and the origin of chloroplasts. J Mol Evol. 1990;31:151–160.
• Li H, Coghlan A, Ruan J, et al. 15 co-authors. TreeFam: a curated database of phylogenetic trees of animal gene families. Nucleic Acids Res. 2006;34:D572–D580. [PMC free article] [PubMed]
• Lichtarge O, Yao H, Kristensen DM, Madabushi S, Mihalek I. Accurate and scalable identification of functional sites by evolutionary tracing. J Struct Funct Genomics. 2003;4:159–166. [PubMed]
• Müller T, Rahmann S, Dandekar T, Wolf M. Accurate and robust phylogeny estimation based on profile distances: a study of the Chlorophyceae (Chlorophyta) BMC Evol Biol. 2004;4:20. doi:10.1186/
1471-2148-4-20. [PMC free article] [PubMed]
• Nei M, Kumar S, Takahashi K. The optimization principle in phylogenetic analysis tends to give incorrect topologies when the number of nucleotides or amino acids used is small. Proc Natl Acad Sci
USA. 1998;95:12390–12397. [PMC free article] [PubMed]
• Saitou N, Nei M. The neighbor-joining method: a new method for reconstructing phylogenetic trees. Mol Biol Evol. 1987;4:406–425. [PubMed]
• Schaffer AA, Aravind L, Madden TL, Shavirin S, Spouge JL, Wolf YI, Koonin EV, Altschul SF. Improving the accuracy of PSI-BLAST protein database searches with composition-based statistics and
other refinements. Nucleic Acids Res. 2001;29:2994–3005. [PMC free article] [PubMed]
• Simonsen M, Mailund T, Pedersen CNS. Rapid neighbor-joining. Lect Notes Comput Sci. 2008;5251:113–122.
• Sonnhammer ELL, Hollich V. Scoredist: a simple and robust protein sequence distance estimator. BMC Bioinformatics. 2005;6:108. [PMC free article] [PubMed]
• Stamatakis A. RAxML-VI-HPC: maximum likelihood-based phylogenetic analyses with thousands of taxa and mixed models. Bioinformatics. 2006;22:2688–2690. [PubMed]
• Stoye J, Evers D, Meyer F. Rose: generating sequence families. Bioinformatics. 1998;14:157–163. [PubMed]
• Studier JA, Keppler KJ. A note on the neighbor-joining algorithm of Saitou and Nei. Mol Biol Evol. 1988;5:729–731. [PubMed]
• Tatusov RL, Natale DA, Garkavtsev IV, Tatusova TA, Shankavaram UT, Rao BS, Kiryutin B, Galperin MY, Fedorova ND, Koonin EV. The COG database: new developments in phylogenetic classification of
proteins from complete genomes. Nucleic Acids Res. 2001;29:22–28. [PMC free article] [PubMed]
• von Mering C, Hugenholtz P, Raes J, Tringe SG, Doerks T, Jensen LJ, Ward N, Bork P. Quantitative phylogenetic assessment of microbial communities in diverse environments. Science. 2007;315
:1126–1130. [PubMed]
• Zaslavsky L, Tatusova TA. Accelerating the neighbor-joining algorithm using the adaptive bucket data structure. Lect Notes Comput Sci. 2008;4983:122–133.
Sunnyvale, TX Algebra Tutor
Find a Sunnyvale, TX Algebra Tutor
...These will be "graded" and returned back to allow the student maximum potential in the topic. Since I hold myself and my students to high standards I will NOT charge any lesson that the student
is not satisfied in. Under NO circumstance should anyone pay for the service they are not receiving correctly.
16 Subjects: including algebra 1, algebra 2, reading, chemistry
...I have taught at the primary, secondary, and college levels. I work very hard to make learning meaningful and fun. As an educational psychologist, I have completed many hours of advanced
coursework, and I am well-versed in the current research regarding learning, memory, and instructional practices.
39 Subjects: including algebra 1, algebra 2, reading, English
...I believe a big key to success in math is recognizing and understanding the terminology, as well as finding a way for students to comprehend and even embrace the processes they are attempting
to follow. I know that everyone hears how math is the basis for most of what we see and hear, but mathem...
17 Subjects: including algebra 1, algebra 2, English, reading
...I tutor students regular,Pre-Ap and Ap Physics B,C. The courses include topics of Kinematic motions, Forces and newton's laws, Circular Motion, Impulse and Momentum, Work and Energy, Rotational
Dynamics, Simple harmonic motion and Elasticity, Fluids, Thermodynamics, Waves and Sound, Electromagne...
20 Subjects: including algebra 1, algebra 2, calculus, physics
...For the last eight years I've been a homemaker and am eager to help students with math again. I keep my skills sharp by doing math puzzles and logic problems daily. Many of these puzzles
require higher level algebra to succeed.
14 Subjects: including algebra 1, algebra 2, reading, English
Trigonometry Regents Exam Prep
Algebra 2 and Trigonometry Regents Exam Prep
At New York Academics, we are very familiar with the Algebra 2 and Trigonometry Regents, and we know how to help students get a score they can be proud of.
We specialize in individualized, one-on-one tutoring and this allows us to give each student the specific type of help that he or she needs. Whether you're looking for help with one or two specific
topics, test taking strategy, or just about everything on the Algebra 2 and Trigonometry Regents, we can design a program to suit your needs.
About the Algebra 2 and Trigonometry Regents
The Algebra 2 and Trigonometry Regents is the most advanced of the three math regents exams offered in New York State. It is not required for graduation, but it is required for the desirable advanced
Regents diploma. It is also an important course for college admissions and readiness.
The Algebra 2 and Trigonometry Regents is a comprehensive test covering a variety of topics including trigonometric functions, imaginary and complex numbers, and direct and indirect variation.
Although the term "pre-calculus" is not used to describe either this exam or the class that leads up to it, that is exactly what it is.
At New York Academics, we give our students individualized instruction in each of the algebra and geometry topics that they have difficulty with. We also make sure our students are "test savvy" and
know how to make the best use of their knowledge on test day. Perhaps most importantly, we make sure that all of our students get plenty of practice working with real Algebra 2 and Trigonometry
Regents questions from past exams. Taken as a whole, this results in real mathematical achievement and Regents success for our students.
To Learn More - Click Here to Contact Us
Computational and Mathematical Methods in Medicine
Volume 2012 (2012), Article ID 528096, 9 pages
Research Article
A 3D Visualization Method for Bladder Filling Examination Based on EIT
State Key Laboratory of Power Transmission Equipment & System Security and New Technology, Chongqing University, Chongqing 400030, China
Received 20 September 2012; Revised 23 November 2012; Accepted 30 November 2012
Academic Editor: Peng Feng
Copyright © 2012 Wei He et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium,
provided the original work is properly cited.
As research on applications of electric impedance tomography (EIT) in medical examinations deepens, we attempt to produce 3D visualizations of the human bladder. In this paper, a planar electrode array system is introduced as the measuring platform, and a series of feasible methods is proposed to evaluate the simulated bladder volume in order to avoid overfilling. The combined regularization algorithm enhances the spatial resolution and presents a distinguishable sketch of disturbances against the background, which provides reliable data from the inverse problem for the subsequent three-dimensional reconstruction. By detecting the edge elements and tracking down the lost information, we extract quantitative morphological features of the object from the noise and background. Preliminary measurements were conducted, and the results showed that the proposed algorithm overcomes the defects of holes, protrusions, and debris in reconstruction. In addition, the targets' location in space and approximate volume can be calculated according to the finite element grid of the model, a feature that was not achievable with the previous 2D imaging.
1. Introduction
Bladder filling causes the desire to urinate when the bladder contains a certain volume of urine. But for unconscious elderly patients, people with spinal cord injury, or patients with urological disease, this sensation may not occur. Urinary incontinence, or lack of bladder control, is an embarrassing problem, and many such patients need professional nursing. The work of nursing could be greatly reduced if the need to urinate were detected and signaled in time. In clinical practice, the traditional solution is to drain the urine through a catheter inserted into the bladder. But catheterization is invasive and not suitable for most patients, because it may cause secondary infection of the urinary tract. One way to measure in real time is ultrasound imaging: researchers have developed ultrasound bladder volume measurement devices to evaluate bladder volume. However, these devices are inconvenient for continuous monitoring; moreover, the ultrasound images are strongly affected by intraperitoneal gas [1, 2].
Several investigators over the last 20 years have verified that the electrical properties of human tissues and body fluids are significantly different and have demonstrated that measurement of these
properties has obvious clinical potential [3]. Electrical impedance tomography extracts biomedical information noninvasively and generates real-time images, and its measurements do not require a straight-line path, so they are not degraded by intraperitoneal gas. Consequently, this technology is applied here to measure and visualize impedance changes in the bladder.
As is known, the filling bladder lies just beneath the lower abdomen. Because of the different physiological and structural characteristics of patients' abdomens, a traditional closed EIT system has difficulty accommodating foci of different shapes and locations [4]. Therefore, we chose a planar-array EIT system for its convenience in operation [5]. To address the lack of effective preconditions in past research on 3D volume estimation, we propose a system with a rectangular array of 64 electrodes and an adaptable combination mode of injection and measurement [6]. This system features adjustable multifrequency operation and high accuracy, and it is portable and flexible in application, which makes it suitable for long-term clinical monitoring.
Another factor restricting the development of 3D EIT is the lack of proper algorithms. Previous algorithms applied in EIT include filtered back-projection [7], the spectral expansion method [8], Newton's one-step error reconstruction (NOSER) [9], genetic algorithms [10], and the weighted minimum norm method [11], most of which face a severely ill-posed problem and a large amount of computation. By analyzing the respective advantages and disadvantages of the Tikhonov [12] and NOSER [9] regularizations, we developed a combined regularization algorithm with acceptable spatial resolution for 3D EIT, which provides a more uniform impedance estimate and a greater investigation depth.
Since the boundaries of targets or structures are usually contained within the cells of a three-dimensional image, detecting and reconstructing edge surfaces from the reconstructed electrical impedance is one of the important research issues in three-dimensional image analysis [13]. The isosurface is a common approximation of the boundary surface in biomedical images. However, a fixed-value isosurface is not suitable for approximating the boundary surface in EIT inverse problems because the results contain large errors [14]. To adapt to local differences of the complex boundary surface, we improved on the work in [15]. The new method adaptively approximates the boundary surface of the targets with different surface patches in different local regions. Consequently, the approximation accuracy is considerably improved.
2. Materials and Methods
2.1. System Description
Because patients' waists differ in physiological and structural characteristics, the open EIT system in Figure 1(a) makes its measurement by simply placing the measuring probe onto the target region, avoiding the trouble of routine electrode pasting. The measuring probe is an electrode array; a back electrode, serving as the signal ground, is placed on the patient's back so that the current distributes evenly through the body for deeper detection. The measurement and reconstruction field is the region between the electrode array and the back electrode.
During an examination, a sinusoidal current is injected from each of the 64 electrodes in turn and flows out through the back electrode (Figure 1(b)). Measurements are taken from the remaining 63 electrodes, so each examination yields up to 64 × 63 = 4,032 measurements. This greatly increases the amount of available data compared with most currently reported methods, such as the 32 electrodes arranged in a ring for chest examination (maximum of 992 measurements) [16] or the fixed voltage source with measuring points for breast cancer detection [17].
The system features a constant current source, good antijamming capability, low output impedance, and a deep detection area; it is supplied by a medical-grade power source and communicates with a notebook computer via USB, as shown in Figure 1(c). For long-term monitoring, the device can instead be powered by a lithium battery, communicate over Bluetooth, and use a belt-contact electrode array, which enables application to inpatients or even patients at home.
2.2. EIT Inverse Problems
The inverse problem is the process of calculating the internal conductivity distribution from the boundary voltages. EIT image reconstruction is a nonlinear ill-posed problem, and the impedances in the measurement field can only be deduced by approximation. In principle, small enough perturbations in conductivity can be reconstructed accurately enough by considering just the linear problem. In EIT, starting from a known and usually homogeneous distribution σ0, a set of measurements V0 is gathered. Subsequently, a perturbation δσ occurs, causing a new distribution σ1 and consequently a new measurement set V1. Calculating the Jacobian matrix J based on σ0 (a computing method is introduced in [18]), the discrete form of the linear forward problem used in difference imaging becomes
δV = J δσ. (1)
In (1), only V1 is physically collected from the boundary of the volume, as V0 is obtained by forward calculations:
δV = V1 − F(σ0), (2)
in which F(σ0) denotes the vector of simulated measurements derived from forward computations based on a model σ0.
The least squares method (LS) could be used to solve (2):
δσ = arg min ||J δσ − δV||^2. (3)
For the linear least squares problem, the Jacobian matrix is very ill-conditioned and singular. This problem is remedied by regularizing the matrix and solving a new problem that is well conditioned.
A general version of the Tikhonov regularization method is used:
δσ = arg min { ||J δσ − δV||^2 + λ ||L δσ||^2 }, (4)
where λ is the Tikhonov regularization parameter and L is a matrix that defines a norm on the solution through which its "size" is measured. Often, L represents the first or second derivative operator. If L is the identity matrix, then the Tikhonov problem is said to be in standard form. So we can get the solution from (2)–(4):
δσ = (J^T J + λ L^T L)^(−1) J^T δV. (5)
Regularization has the effect of damping any large oscillations. Adding a scaled identity matrix to J^T J makes the solution stable, but the condition number, which indicates the sensitivity to uncertainty, is still large, and the solution suffers from the smoothing side effect caused by that identity matrix [9].
To reduce the condition number and the side effect, we combined Tikhonov regularization with NOSER-type regularization. In NOSER regularization, the regularization matrix is a simple diagonal weighting of J^T J rather than a first- or second-difference operator. The equation is
δσ = (J^T J + μ diag(J^T J))^(−1) J^T δV, (6)
where μ is the NOSER regularization parameter and diag(J^T J) denotes the diagonal matrix formed from J^T J, which also represents an approximation for the missing part of the second derivative of the mapping [19].
NOSER regularization works well in 2D, but it cannot correct the error caused by noise in the 3D model, which is a very ill-posed problem. Tikhonov regularization can correct the error caused by weak noise but has the smoothing side effect of the identity matrix. If these two methods are combined, the condition number is reduced (Table 1) and, consequently, a better image is obtained. The equation for the combined regularization method can be written as
δσ = (J^T J + λ L^T L + μ diag(J^T J))^(−1) J^T δV. (7)
Comparisons and experiments between the reconstructed results of the different algorithms, including parameter choices and discussion, were made in previous work [20], where the combined regularization was shown to be effective in eliminating errors and to give better spatial resolution in terms of target location and size.
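For illustration, a minimal NumPy sketch of a one-step reconstruction with this kind of combined regularizer is given below; the regularized form follows (7), and the default L = I and the parameter values are assumptions for the sketch that must be tuned for the actual system.

```python
import numpy as np

def combined_regularization(J, dV, lam=1e-2, mu=1e-2, L=None):
    """One-step linearized difference reconstruction with a combined
    Tikhonov + NOSER-style regularizer:
        dsigma = (J^T J + lam * L^T L + mu * diag(J^T J))^(-1) J^T dV
    J : (m, n) Jacobian; dV : (m,) boundary voltage differences."""
    JtJ = J.T @ J
    LtL = np.eye(JtJ.shape[0]) if L is None else L.T @ L
    A = JtJ + lam * LtL + mu * np.diag(np.diag(JtJ))
    return np.linalg.solve(A, J.T @ dV)
```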
2.3. Finite Element Mesh and Impedance Calculation
To calculate the discrete impedance within the three-dimensional space, we first conduct tetrahedral finite element meshing of the whole measurement space, as in Figure 2(a). In the following experiment, for example, the cuboid phantom was meshed into 79,307 finite elements with 14,876 nodes. Then, the Jacobian matrix was obtained via the complete electrode model by calculating the voltage of each node with an analytical method [21]. The combined regularization matrix was then deduced from the Jacobian by choosing the proper parameters λ and μ. Finally, the spatial distribution of the electrical impedance in the model was approximated from the boundary conditions, which were the voltage measurements from the electrodes. Although the accuracy of the discrete impedance calculated from the combined regularization was improved, there was still disturbance around the electrodes, as can be seen in Figure 2(b). Another problem was that the electrical impedance changes gradually, so that we were not able to draw the actual boundary of the anomaly buried in the background. Therefore, we had to develop a feasible and reliable way to eliminate the noise, sketch out the boundary of the objects, and reconstruct images with boundary surface detection in the 3D field.
2.4. Boundary Detection
In many cases, a gray-level-based isosurface of the boundary surfaces can separate voxels belonging to an object from voxels belonging to the background well and therefore can be applied in the segmentation of 3D images [22]. However, the impedance approximations obtained from inverse problems usually contain high levels of noise and are not suitable for direct isosurface calculation. Because the boundaries of the target object usually cross intensity values that differ greatly from the background, they are actually steplike edge surfaces, defined as surfaces where a great change of intensity value occurs. A volumetric image can be considered as the discrete sampling of the underlying three-dimensional continuous function at the grid points of the three-dimensional
regular grid.
The boundary surface within the volumetric image can be considered as the implicitly defined continuous surface contained in the continuous sampling region of the volumetric image. Recall that, in a
volumetric image, different structures usually correspond to different image intensities. Thus, the impedance intensities on either side of the boundary surface of the structure within a volumetric
image have sharp changes. Such a boundary surface belongs to a steplike edge surface and therefore it is a continuous zero-crossing surface with a high gradient value. Mathematically, the boundaries
within a 3D image can be presented as follows [23]: the boundary is the set of points (x, y, z) where ∇²f(x, y, z) = 0 and |∇f(x, y, z)| ≥ T, where T is a predetermined gradient threshold, ∇²f represents the Laplacian of the image function f, and |∇f| represents the gradient-magnitude function of f. T can be selected by methods that are used to select the gradient threshold in the edge detection of a 2D image [24].
Once the boundary surface of the volumetric image is determined, subsequent processing can be performed to reduce the noise and improve the reconstruction quality, as elaborated in the following sections.
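A simplified sketch of this boundary criterion on a regular grid (rather than the tetrahedral mesh used in this work) is given below; the threshold T is assumed to be chosen as described above, and the function name is an assumption for illustration.

```python
import numpy as np

def boundary_mask(f, T):
    """Mark grid points near the steplike edge surface of a 3D array f: points where
    the Laplacian changes sign between neighbors along some axis (a zero crossing)
    and the gradient magnitude is at least T."""
    gx, gy, gz = np.gradient(f)
    grad_mag = np.sqrt(gx**2 + gy**2 + gz**2)
    lap = np.gradient(gx, axis=0) + np.gradient(gy, axis=1) + np.gradient(gz, axis=2)
    sign = np.sign(lap)
    zero_cross = np.zeros(f.shape, dtype=bool)
    for axis in range(3):
        flips = np.diff(sign, axis=axis) != 0        # sign change between neighbors
        pad = [(0, 0)] * 3
        pad[axis] = (0, 1)
        zero_cross |= np.pad(flips, pad)
    return zero_cross & (grad_mag >= T)
```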
2.5. Edge Elements Detection
The electrical impedance is sampled on the tetrahedral grid described above, and all such tetrahedrons form the continuous space occupied by the 3D image. Steplike edge surfaces in the volume pass through, or are included in, some tetrahedrons. All tetrahedron elements are therefore divided into two categories: those that are passed through by a steplike edge and the rest, which are not. We first detect the edge elements and then compute the steplike edges in each edge element. For each edge element, since a steplike edge passes through it, at least three of its edges are intersected by the edge surface. Without loss of generality, we assume that an edge of a tetrahedron links vertexes v1 and v2. Following [23], if an edge of the tetrahedron intersects the edge surface, then the two endpoints of that edge both have high gradient values, and their Laplacian values are of opposite signs.
In the edge elements, each intersected edge therefore has the following characterizations: (1) both vertices have high gradient values, |∇f(v1)| ≥ T and |∇f(v2)| ≥ T; (2) the two vertices are a pair of zero-crossing points, ∇²f(v1) × ∇²f(v2) < 0.
Accordingly, by tagging the intersections with the edge surface and determining whether at least three edges of a tetrahedron are intersected by the edge surface, we can find the edge elements and locate the edge surface within the three-dimensional image.
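In code, the per-element test might be sketched as follows; the gradient and Laplacian values at the mesh vertices are assumed to be precomputed, and the function name is an assumption for illustration.

```python
from itertools import combinations

def is_edge_element(tet, grad_mag, laplacian, T):
    """Decide whether a tetrahedron is an edge element. `tet` is a tuple of 4 vertex
    indices; `grad_mag` and `laplacian` map a vertex index to |grad f| and to the
    Laplacian of f at that vertex. An edge (v1, v2) is intersected if both endpoints
    have high gradients and their Laplacians have opposite signs; the element is an
    edge element if at least three of its six edges are intersected."""
    def intersected(v1, v2):
        return (grad_mag[v1] >= T and grad_mag[v2] >= T
                and laplacian[v1] * laplacian[v2] < 0)

    crossed = sum(1 for v1, v2 in combinations(tet, 2) if intersected(v1, v2))
    return crossed >= 3
```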
2.6. Extraction and Reconstruction
Among the detected edge elements, there are true elements that contain the edge surfaces as well as pseudo-edge elements caused by noise and object details. The pseudo-edge elements usually form only small collections of interlinked tetrahedrons, whereas the tetrahedrons that contain the edge surfaces typically form a relatively large collection, given the characteristics of the bladder filling process. Therefore, by judging whether the edge elements are coplanar, we can extract the larger connected set and remove the small ones.
All the tetrahedron elements in the model are divided into horizontal slices; each slice contains an incomplete set of seed elements representing the edge surfaces (Figure 3), from which the edge surface of the object can be tracked. If the edge surface intersects one face of an edge element, the adjacent tetrahedron sharing that face is inevitably an edge element as well. This definition originates from the region growing method; the algorithm uses 3D region growing [25]. By checking the adjacent elements that meet the coplanarity criterion, the seed region grows from the original area until the target region no longer grows. By virtue of this property, starting from the determined seed elements, we can track and recover the edge elements most similar to the object that have not yet been detected.
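A sketch of this 3D region growing over the tetrahedral mesh, assuming a precomputed face-adjacency structure and an edge-element test such as the one above, is:

```python
from collections import deque

def grow_edge_region(seeds, face_neighbors, is_edge):
    """Region growing: starting from seed edge elements, repeatedly add face-adjacent
    tetrahedrons that also satisfy the edge-element test. `face_neighbors[t]` lists
    the tetrahedrons sharing a face with t, and `is_edge(t)` applies the test."""
    region = set(t for t in seeds if is_edge(t))
    queue = deque(region)
    while queue:
        t = queue.popleft()
        for nb in face_neighbors[t]:
            if nb not in region and is_edge(nb):
                region.add(nb)
                queue.append(nb)
    return region
```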
Each edge element contains a piece of the edge surface, which is in fact the zero-value isosurface of the Laplacian function in the three-dimensional image. By computing the zero-crossing surface of the Laplacian in each edge element, a triangulated model of the steplike edges is eventually obtained. From each edge element, the surface patch can be extracted by using the Marching Cubes algorithm and its improved variants as the polygonal approximation. This algorithm guarantees that the surface patches extracted from adjacent edge elements can be spliced together to constitute a polygonal surface model [26].
3. Results and Discussion
3.1. Experiment Platform
The process of image reconstruction was illustrated and verified by applying the algorithm to a dataset obtained from an experimental feasibility trial. The cuboid phantom was made of polycarbonate, 18 cm long, 15 cm wide, and 10 cm high, as in Figure 4(a). The electrode array, with each electrode 4 mm in diameter and an 8 mm gap between electrodes, was placed on the upper surface, and the back electrode serving as the ground was placed at the center of the lower surface. The current density simulation model is shown in Figure 4(b), from which we can see that the current density in the middle area below the electrode array is larger, indicating higher sensitivity.
The experiment was carried out using agar of 0.1 S/m (at 200 kHz) as the background, with a cuboid hole of varying size in the middle of the background as the inclusion (detailed in Section 3.2). The hole was filled with saline solution tinted with India ink, whose conductivity measured 0.892 S/m with a Mettler-Toledo SG7 hand-held portable conductivity meter, and it was then covered with 1.5 cm of the same 0.1 S/m agar as the background. Thus, the saline solution was wrapped in agar to simulate urine in the bladder.
As the conductivity of saline varies with frequency, a relatively sharp change that distinguishes it from the agar occurs at frequencies around 200 kHz [27]. Furthermore, there is a strict regulation limiting the current injected into the body to less than 10 mA at frequencies of 100 kHz or higher [28]. Accordingly, we set the current waveform frequency to 200 kHz with an amplitude of 10 mA. The injection-and-measurement strategy was that described in Section 2.1.
3.2. Preliminary Data Analysis
A completely full human bladder can hold approximately 1 liter of fluid. However, the urge to urinate ordinarily occurs when the bladder contains about 200 mL of urine, and this value may be smaller depending on age and body size [29]. We therefore chose different volumes of saline solution to simulate the urine in the experiment, shown in red in Figures 5(a)–5(d). They were, respectively, 0 mL in Figure 5(a), 4 cm × 4 cm × 4 cm (64 mL) in Figure 5(b), 5 cm × 5 cm × 5 cm (125 mL) in Figure 5(c), and 8 cm × 8 cm × 5 cm (320 mL) in Figure 5(d). As long as these volumes can be estimated, we can predict the right moment to micturate or determine whether the volume has reached the critical value.
Figures 5(e)–5(h) illustrate the 2D image projection from direct acquisition data without any algorithm applied. That means the color of image reflects the voltage which corresponds to the impedance
from the measurement electrodes. We can figure out that the results are with perturbations at the corners due to the edge effect of container in Figures 5(e) and 5(f). As the volume increases in
Figures 5(g) and 5(h), the blue area indicating the lower impedance increases, but obviously, the shapes are irregular without any depth information. As a result, it is ambiguous for diagnose which
inspirit us to improve the results by utilizing more optimized methods and algorithms.
3.3. Three-Dimensional Reconstruction and Discussion
The 3D representations of the reconstructed perturbations from the experiments are shown above. Figure 6(a) is the image reconstructed from the 4 cm*4 cm*4 cm saline inclusion, and Figures 6(b) and 6(c) are the corresponding lateral and top views. They show that the location of the reconstructed target is essentially correct, whereas its shape changes from cubic to round-like. This is because the regularization-based algorithm uses a least squares approach, which, unlike back-projection [7] and genetic algorithms [10], approximates the perturbation as a whole, so sharp changes are smoothed and some object details are lost. Reconstructions of the other two volumes, in Figures 6(d)–6(i), show the same characteristic. In addition, as the volume increases the distortion becomes more serious, because the sensor array becomes comparatively small relative to the volume. As a result, the sensitivity of the algorithm decreases and the image deteriorates owing to incomplete boundary conditions.
To estimate the object volume precisely, the tank model was gridded at 1 mm intervals along each coordinate, dividing the entire space into cubes of 1 mm³ (10⁻³ mL) each; every node of a cube either lies inside the edge surface or does not. A cube with at least 4 noncoplanar vertices inside the edge surface was considered valid. In our experiment, the number of included cubes was 62,783 for Figure 6(a), 115,429 for Figure 6(d), and 307,725 for Figure 6(g), giving estimated volumes of approximately 62.8 mL, 115.4 mL, and 308.7 mL, respectively. Comparing the original volumes with the estimates, Figure 7(a) shows that the estimate for the 4*4*4 model is almost identical to the actual volume even though their shapes differ, whereas the estimates for the 5*5*5 and 8*8*5 models are lower than the real values; the volumetric errors are nevertheless below 10%, which is still acceptable.
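A simplified sketch of this voxel-counting estimate is given below. It assumes the enclosed region is available as a boolean grid of nodes and treats "at least 4 corner nodes enclosed" as the validity test, which is a simplification of the paper's noncoplanar-vertex criterion.

```python
import numpy as np

def estimate_volume_ml(inside, voxel_mm=1.0, min_vertices=4):
    """Estimate an enclosed volume by counting valid grid cubes.

    inside   -- 3D boolean array marking grid nodes enclosed by the edge surface
    voxel_mm -- grid spacing in millimetres (1 mm in the paper's setup)
    A cube is counted as valid when at least `min_vertices` of its 8 corner
    nodes are enclosed (a stand-in for the noncoplanar-vertex criterion).
    """
    inside = np.asarray(inside, dtype=np.int8)
    corners = (inside[:-1, :-1, :-1] + inside[1:, :-1, :-1]
               + inside[:-1, 1:, :-1] + inside[:-1, :-1, 1:]
               + inside[1:, 1:, :-1] + inside[1:, :-1, 1:]
               + inside[:-1, 1:, 1:] + inside[1:, 1:, 1:])
    valid_cubes = int(np.count_nonzero(corners >= min_vertices))
    return valid_cubes * (voxel_mm ** 3) * 1e-3   # 1 mm^3 = 1e-3 mL
```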
The position of an anomaly is defined as the centre of mass of the HA set in the reconstructed image, computed from the position vectors of the points within the domain. Position error (PE) is a figure of merit defined as the proportional difference between the position of the centre of mass of the reconstructed-image HA set and the centre of mass of the generating anomaly. A smaller PE indicates that the reconstructed image lies closer to the centre of the target object.
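A minimal sketch of this figure of merit, assuming the image is available as per-point amplitudes with known coordinates, might look as follows. It computes the centre of mass of the thresholded (HA) image and its distance to the true anomaly centre; the normalization that turns this distance into a proportional error is left out, since the reference length is not specified in this excerpt.

```python
import numpy as np

def centre_of_mass(image, coords):
    """Centre of mass of a reconstructed amplitude image.

    image  -- 1D array of non-negative amplitudes, one per grid point
    coords -- (N, 3) array of the corresponding position vectors
    """
    weights = np.clip(image, 0.0, None)
    return (coords * weights[:, None]).sum(axis=0) / weights.sum()

def position_error(reconstruction, coords, true_centre, half_amplitude=True):
    """Distance between the reconstructed HA-set centre of mass and the true centre."""
    amp = np.asarray(reconstruction, dtype=float).copy()
    if half_amplitude:                      # keep only the HA set of the image
        amp[amp < 0.5 * amp.max()] = 0.0
    recon_centre = centre_of_mass(amp, coords)
    return np.linalg.norm(recon_centre - np.asarray(true_centre, dtype=float))
```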
Since we take the conductivity of the saline solution to be homogeneous, the true centre should lie at the centre of the cuboid. In terms of the position error in Figure 7(b), the configurations show that the relative position errors differ between axes. Although the results are nonlinear across the different models, the error percentages stay within a limited range; the errors along the axis shown in green are comparatively greater than those along the other two axes, exceeding 10%. This is because sensitivity is higher the closer the object is to the electrode array, a consequence of both the current density distribution and the fact that the finite elements at the top are smaller than those at the bottom in our algorithm. Moreover, as the target size increases, especially for the 8*8*5 model, the target gets closer to the edge of the electrode array and the position errors therefore increase.
By using the combined regularization algorithm together with edge element filtering and rearrangement, 3D images could be displayed for qualitative evaluation. The differences between the objects and the background in the reconstructions were significant. By and large, the target locations were easy to distinguish, although the reconstructed object did not correspond exactly to the original shape. The experiment demonstrated that reliable 3D images of conductivity changes can be obtained and localized with 65 channels; the results are superior to those of traditional methods and approximate the target closely.
4. Conclusions
This paper presents a method for EIT 3D reconstruction and target identification whose aim is to predict the urine volume in the bladder. The performance of the proposed algorithms has been investigated and demonstrated by mathematical exposition. The problem of approximating the boundary surface within 3D images is also described; the approximation not only reduces the system noise that leads to holes, protrusions, and debris in the reconstruction, but also makes the visual images easier to identify and quantify. The reconstructed images provide additional information as well, including depth, volume, and contrast with the background.
Overall, EIT image reconstruction is a nonlinear and ill-posed inverse problem of spatially variant estimation. The uncertainties caused by these properties prevent EIT images from achieving high resolution. These preliminary results indicate that sufficient finite element modeling of the impedance distribution in the abdomen, together with a proper choice of inverse problem and tracking algorithms, can make this technology applicable to routine measurement of bladder volume.
This approach is convenient to apply to spatially variant image reconstructions and promises to deliver joint distribution estimates and material identification in a single measurement process. It offers an alternative way of reporting bladder filling: instead of reporting in terms of pressure or ultrasound images, we may be able to present clinicians with a medical visualization and extract certain boundary surface structures.
This work was supported by the Fundamental Research Funds for the Central Universities of China (no. CDJZR 10150021).
1. E. J. W. Merks, N. Born, N. N. De Jong, and A. F. W. Steen, “Quantitative bladder volume assessment on the basis of nonlinear wave propagation,” in Proceedings of the IEEE International Ultrasonics Symposium (IUS '08), pp. 1158–1162, November 2008.
2. R. Tanaka and T. Abe, “Measurement of the bladder volume with a limited number of ultrasonic transducers,” in Proceedings of the IEEE International Ultrasonics Symposium (ISU '10), pp. 1783–1786.
3. R. H. Smallwood, A. Keshtkar, B. A. Wilkinson, J. A. Lee, and F. C. Hamdy, “Electrical impedance spectroscopy (EIS) in the urinary bladder: the effect of inflammation and edema on identification of malignancy,” IEEE Transactions on Medical Imaging, vol. 21, no. 6, pp. 708–710, 2002.
4. D. Holder, “Electrical tomography for industrial applications electrical impedance tomography: methods, history and applications,” Medical Physics and Biomedical Engineering, pp. 295–347, 2004.
5. X. J. Zhang, M. Y. Chen, W. He, and C. H. He, “Modeling and simulation of open electrical impedance tomography,” International Journal of Applied Electromagnetics and Mechanics, vol. 33, no. 1-2, pp. 713–720, 2010.
6. D. R. Stephenson, R. Mann, and T. A. York, “The sensitivity of reconstructed images and process engineering metrics to key choices in practical electrical impedance tomography,” Measurement Science and Technology, vol. 19, no. 9, Article ID 094013, 2008.
7. M. Wang, J. Zhao, S. Zhang, and G. Wang, “Electrical impedance tomography based on filter back projection improved by means method,” in Proceedings of the 3rd International Conference on BioMedical Engineering and Informatics (BMEI '10), pp. 218–221, October 2010.
8. S. Meeson, A. L. T. Killingback, and B. H. Blott, “The dependence of EIT images on the assumed initial conductivity distribution: a study of pelvic imaging,” Physics in Medicine and Biology, vol. 40, no. 4, pp. 643–657, 1995.
9. M. Cheney, D. Isaacson, J. C. Newell, S. Simske, and J. C. Goble, “NOSER: an algorithm for solving the inverse conductivity problem,” Image Systems & Technology, vol. 2, pp. 66–75, 1990.
10. R. Olmi, M. Bini, and S. Priori, “A genetic algorithm approach to image reconstruction in electrical impedance tomography,” IEEE Transactions on Evolutionary Computation, vol. 4, no. 1, pp. 83–88, 2000.
11. M. T. Clay and T. C. Ferree, “Weighted regularization in electrical impedance tomography with applications to acute cerebral stroke,” IEEE Transactions on Medical Imaging, vol. 21, no. 6, pp. 629–638, 2002.
12. P. Jiang, L. Peng, and D. Xiao, “Tikhonov regularization based on second order derivative matrix for electrical capacitance tomography image reconstruction,” Journal of Chemical Industry and Engineering, vol. 59, no. 2, pp. 405–409, 2008.
13. J. S. Suri, K. Liu, S. Singh, et al., “Shape recovery algorithms using level sets in 2D/3D medical imagery: a state of the art review,” IEEE Transactions on Information Technology in Biomedicine, vol. 6, no. 1, pp. 8–28, 2002.
14. C. Pudney, M. Robins, B. Robbins, and P. Kovesi, “Surface detection in 3D confocal microscope images via local energy and ridge tracing,” Journal of Computer-Assisted Microscopy, vol. 8, no. 1, pp. 5–20, 1996.
15. M. Brejl and M. Sonka, “Directional 3D edge detection in anisotropic data: detector design and performance assessment,” Computer Vision and Image Understanding, vol. 77, no. 2, pp. 84–110, 2000.
16. J. Solà, A. Adler, A. Santos, G. Tusman, F. S. Sipmann, and S. H. Bohm, “Non-invasive monitoring of central blood pressure by electrical impedance tomography: first experimental evidence,” Medical and Biological Engineering and Computing, vol. 49, no. 4, pp. 409–415, 2011.
17. A. Michel, L. M. Orah, et al., “The T-SCAN technology: electrical impedance as a diagnostic tool for breast cancer detection,” Physiological Measurement, vol. 22, no. 1, pp. 1–8, 2001.
18. N. Polydorides, Image Reconstruction Algorithms for Soft-Field Tomography, University of Manchester Institute of Science and Technology, 2002.
19. J. L. Mueller, D. Isaacson, and J. C. Newell, “A reconstruction algorithm for electrical impedance tomography data collected on rectangular electrode arrays,” IEEE Transactions on Biomedical Engineering, vol. 46, no. 11, pp. 1379–1386, 1999.
20. W. He, B. Li, Z. Xu, H. Luo, and P. Ran, “A combined regularization algorithm for electrical impedance tomography system using rectangular electrodes array,” Biomedical Engineering, vol. 24, no. 4, pp. 313–322, 2012.
21. E. Somersalo, M. Cheney, and D. Isaacson, “Existence and uniqueness for electrode models for electric current computed tomography,” SIAM Journal on Applied Mathematics, vol. 52, no. 4, pp. 1023–1040, 1992.
22. J. O. Lachaud and A. Montanvert, “Continuous analogs of digital boundaries: a topological approach to iso-surfaces,” Graphical Models, vol. 62, no. 3, pp. 129–164, 2000.
23. A. H. Pheng, L. Wang, T. W. Tien, S. L. Kwong, and J. C. Y. Cheng, “Edge surfaces extraction from 3D images,” in The International Society for Optical Engineering, vol. 4322 of Proceedings of SPIE, no. 1, pp. 407–416, 2001.
24. A. Rosenfeld and A. Kak, Digital Picture Processing, vol. 1, Academic Press, 1982.
25. D. Stroppiana, G. Bordogna, P. Carrara, M. Boschetti, L. Boschetti, and P. A. Brivio, “A method for extracting burned areas from Landsat TM/ETM+ images by soft aggregation of multiple spectral indices and a region growing algorithm,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 69, pp. 88–102, 2012.
26. G. M. Nielson, L. Y. Zhang, K. Lee, and A. Huang, “Spherical parameterization of marching cubes isosurfaces based upon nearest neighbor coordinates,” Journal of Computer Science and Technology, vol. 24, no. 1, pp. 30–38, 2009.
27. S. Gabriel, R. W. Lau, and C. Gabriel, “The dielectric properties of biological tissues: II. Measurements in the frequency range 10 Hz to 20 GHz,” Physics in Medicine and Biology, vol. 41, no. 11, pp. 2251–2269, 1996.
28. “Guidelines for limiting exposure to time varying electric, magnetic and electromagnetic fields (up to 300 GHz),” Health Physics, vol. 74, no. 4, pp. 494–522, 1998.
29. Magill's Medical Guide, vol. 3, Salem, Englewood Cliffs, NJ, USA, 1998.
|
{"url":"http://www.hindawi.com/journals/cmmm/2012/528096/","timestamp":"2014-04-17T06:10:53Z","content_type":null,"content_length":"142491","record_id":"<urn:uuid:a3d7b6d9-0da8-4f61-81ea-9fe8685686ee>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00311-ip-10-147-4-33.ec2.internal.warc.gz"}
|
16.1 psi in bar
You asked:
16.1 psi in bar
1.11005592213524 bars
the pressure level 1.11005592213524 bars
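The figure can be checked from the unit definitions 1 psi ≈ 6894.757 Pa and 1 bar = 100 000 Pa; the short sketch below is just that arithmetic, written out for illustration.

```python
PSI_TO_PA = 6894.757293168   # pascals per psi, from the pound-force and inch definitions
PA_PER_BAR = 100_000.0

def psi_to_bar(psi: float) -> float:
    return psi * PSI_TO_PA / PA_PER_BAR

print(psi_to_bar(16.1))      # ~1.110056 bar
```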
|
{"url":"http://www.evi.com/q/16.1_psi_in_bar","timestamp":"2014-04-21T16:15:02Z","content_type":null,"content_length":"49087","record_id":"<urn:uuid:26cd2ded-5f65-46e2-857e-3386d3d69458>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00471-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Paul Erdos, a Math Wayfarer at Field's Pinnacle, Dies at 83
September 24, 1996
Paul Erdos, a Math Wayfarer at Field's Pinnacle, Dies at 83
By GINA KOLATA
Paul Erdos, a legendary mathematician who was so devoted to his subject that he lived as a mathematical pilgrim with no home and no job, died Friday in Warsaw, Poland. He was 83.
The cause of death was a heart attack, according to an E-mail message sent out this weekend by Dr. Miki Simonovits, a mathematician at the Hungarian Academy of Sciences, who was a close friend.
Erdos (pronounced AIR-dosh) was attending a mathematics meeting in Warsaw when he died, Simonovits reported.
The news, only now reaching the world's mathematicians, has come as a blow. Dr. Ronald L. Graham, the director of the information sciences research center at AT&T Laboratories, said, "I'm getting
E-mail messages from around the world, saying, 'Tell me it isn't so.' "
Never, mathematicians say, has there been an individual like Paul Erdos. He was one of the century's greatest mathematicians, who posed and solved thorny problems in number theory and other areas and
founded the field of discrete mathematics, which is the foundation of computer science. He was also one of the most prolific mathematicians in history, with more than 1,500 papers to his name. And,
his friends say, he was also one of the most unusual.
Erdos, "is on the short list for our century," said Dr. Joel H. Spencer, a mathematician at New York University's Courant Institute of Mathematical Sciences.
Graham said, "He's among the top 10."
Dr. Ernst Straus, who worked with both Albert Einstein and Erdos, wrote a tribute to Erdos shortly before his own death in 1983. He said of Erdos: "In our century, in which mathematics is so strongly
dominated by 'theory doctors,' he has remained the prince of problem solvers and the absolute monarch of problem posers."
Erdos, Straus continued, is "the Euler of our time," referring to the great 18th-century mathematician, Leonhard Euler, whose name is spoken with awe in mathematical circles.
Stooped and slight, often wearing socks and sandals, Erdos stripped himself of all the quotidian burdens of daily life: finding a place to live, driving a car, paying income taxes, buying groceries,
writing checks. "Property is nuisance," he said.
Concentrating fully on mathematics, Erdos traveled from meeting to meeting, carrying a half-empty suitcase and staying with mathematicians wherever he went. His colleagues took care of him, lending
him money, feeding him, buying him clothes and even doing his taxes. In return, he showered them with ideas and challenges -- with problems to be solved and brilliant ways of attacking them.
Dr. Laszlo Babai of the University of Chicago, in a tribute written to celebrate Erdos' 80th birthday, said that Erdos' friends "care for him fondly, repaying in small ways for the light he brings
into their homes and offices."
Mathematicians like to brag about their connections to Erdos by citing their "Erdos number." A person's Erdos number was 1 if he or she had published a paper with Erdos. It was 2 if he or she had
published with someone who had published with Erdos, and so on.
At last count, Erdos had 458 collaborators, Graham said. An additional 4,500 mathematicians had an Erdos number of 2, Graham added. He said so many mathematicians were still at work on problems they
had begun with Erdos that another 50 to 100 papers with Erdos' name on them were expected to be published after his death.
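(For the curious: an Erdos number is simply a shortest-path distance in the collaboration graph, so it can be computed by breadth-first search. The sketch below uses invented author names purely as an illustration.)

```python
from collections import deque

def erdos_numbers(coauthors, source="Erdos"):
    """Breadth-first search over a collaboration graph.

    coauthors -- dict mapping an author to the set of people they published with
    Returns a dict of shortest collaboration distances from `source`.
    """
    distance = {source: 0}
    queue = deque([source])
    while queue:
        person = queue.popleft()
        for collaborator in coauthors.get(person, ()):
            if collaborator not in distance:
                distance[collaborator] = distance[person] + 1
                queue.append(collaborator)
    return distance

# Tiny fictitious example: Bea co-authored with Erdos, Cam only with Bea.
graph = {"Erdos": {"Bea"}, "Bea": {"Erdos", "Cam"}, "Cam": {"Bea"}}
print(erdos_numbers(graph))   # {'Erdos': 0, 'Bea': 1, 'Cam': 2}
```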
Graham, whose Erdos number is 1, handled Erdos' money for him, setting aside an "Erdos room" in his house for the chore. He said Erdos had given away most of the money he earned from lecturing at
mathematics conferences, donating it to help students or as prizes for solving problems he had posed. Erdos left behind only $25,000 when he died, Graham said, and he plans to confer with other
mathematicians about how to give it away to help mathematics.
Graham said Erdos' "driving force was his desire to understand and to know." He added, "You could think of it as Erdos' magnificent obsession. It determined everything in his life."
"He was always searching for mathematical truths," said Spencer, of New York University, who also has an Erdos number of 1. He added: "Erdos had an ability to inspire. He would take people who
already had talent, that already had some success, and just take them to an entirely new level. His world of mathematics became the world we all entered."
Born in Hungary in 1913, Erdos was a cosseted mathematical prodigy. At age 3, Graham said, Erdos discovered negative numbers for himself when he subtracted 250 degrees from 100 degrees and came up
with 150 degrees below zero. A few years later, he amused himself by solving problems he had invented, like how long would it take for a train to travel to the sun.
Erdos had two older sisters who died of scarlet fever a few days before he was born, so his mother became very protective of him. His parents, who were mathematics teachers, took him out of public
school after just a few years, Graham said, and taught him at home with the help of a German governess. And, Graham said, Erdos' mother coddled him. "Erdos had never buttered his own toast until he
was 21 years old," Graham said. He never married and left no immediate survivors.
When Erdos was 20, he made his mark as a mathematician, discovering an elegant proof for a famous theorem in number theory. The theorem, Chebyshev's theorem, says that for each number greater than
one, there is always at least one prime number between it and its double. A prime number is one that has no divisors other than itself and 1.
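(The theorem, often known as Bertrand's postulate, is easy to spot-check by brute force for small numbers; the short sketch below is only such a check, not a proof.)

```python
def is_prime(k: int) -> bool:
    if k < 2:
        return False
    d = 2
    while d * d <= k:
        if k % d == 0:
            return False
        d += 1
    return True

# Check for small n that a prime p with n < p < 2n always exists.
for n in range(2, 1000):
    assert any(is_prime(p) for p in range(n + 1, 2 * n)), f"no prime between {n} and {2*n}"
print("verified for 2 <= n < 1000")
```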
Although his research spanned a variety of areas of mathematics, Erdos kept up his interest in number theory for the rest of his life, posing and solving problems that were often simple to state but
notoriously difficult to solve and that, like Chebyshev's theorem, involved relationships between numbers.
"He liked to say that if you can state a problem in mathematics that's unsolved and over 100 years old, it is probably a problem in number theory," Graham said.
Erdos, like many mathematicians, believed that mathematical truths are discovered, not invented. And he had an evocative way of conveying that notion. He spoke of a Great Book in the sky, maintained
by God, that contained the most elegant proofs of every mathematical problem. He used to joke about what he might find if he could just have a glimpse of that book.
He would also muse about the perfect death. It would occur just after a lecture, when he had just finished presenting a proof and a cantankerous member of the audience would have raised a hand to
ask, "What about the general case?" In response, Erdos used to say, he would reply, "I think I'll leave that to the next generation," and fall over dead.
Erdos did not quite achieve his vision of the perfect death, Graham said, but he came close.
"He died with his boots on, in hand-to-hand combat with one more problem. It was the way he wanted to go," Graham said.
Copyright 1996 The New York Times Company
PAUL ERDOS, the mathematician, who has died aged 83, established many records in his field, including the number of papers written (about 1,500), and the number of co-authors (close to 500).
Many mathematicians are eccentric, and Erdös was more so than most. He never had a "proper job"; he had no cheque-book or credit card; he never learnt to drive, and never had health insurance. For
most of his life, carrying almost no luggage, wearing sandals and an old suit, he travelled from university to university around the world.
He would bring the mathematical news, pose problems, inspire the locals with his brilliant ideas, and depart in a few days, leaving behind his exhausted hosts to work out the details of their joint
work. His open mind, his ability to see the unexpected, and his willingness to wrestle with complications without the help of well-established tools made him a welcome guest wherever he went.
Over the years, Erdös proved important theorems in number theory, geometry, analysis, probability theory, approximation theory, set theory and, above all, combinatorics.
Yet the sophisticated large-scale theories dominating today's mathematics were not to his taste. Rather than build theories, he solved problems - and posed them, often adding spice by offering
prizes, ranging from $10 to $10,000, to those who met the challenge.
When Erdös failed to find the answers to questions which arose in his field through existing techniques, he would improvise. His "theory" consisted of the accumulation of these original, seemingly
unrelated ad hoc methods.
Perhaps his greatest contribution to mathematics was that he realised and demonstrated (decades before it came to be accepted) the importance of chance in finding objects of seemingly contradictory
properties, such as efficient networks with few connections. These methods are of paramount importance in computer science, though Erdös himself never touched a computer.
Paul Erdös was born in Budapest on March 26 1913, into a Hungarian-Jewish family. Both his parents were mathematics teachers, and his early education came partly from his mother.
His outstanding work on number theory, undertaken when he was an undergraduate at Budapest University, brought him to the attention of Issai Schur in Berlin, and Louis Mordell at Manchester. After
receiving his doctorate in mathematics in 1934, he accepted Mordell's offer of a fellowship at Manchester University. He had planned to go to Germany, but, as he put it, "Hitler got there first."
After four fertile years at Manchester, he left for America, where he was to remain for the next decade. A year at the Institute for Advanced Study in Princeton, when he produced a host of monumental
results, was followed by stays at Notre Dame, Purdue, Stanford and other universities.
With the exception of a nine-year period, when he was not allowed to enter America because of anti-communist hysteria, he spent most of his life there.
Many honours were bestowed on Erdös. Although he did not care about them, he was pleased that they enabled him to help people in need. Upon receiving $50,000 for his share of the Wolf Prize, in
Israel, in 1984, he kept $720 and gave away the rest; half the money went to a second cousin he hardly knew, who happened to be in need at that time.
He was a member of many illustrious academies, including the Royal Society and the US National Academy of Science, and he received numerous honorary degrees.
To amuse himself and his friends, Erdos invented a peculiar brand of word-play and imagery. He awarded himself letters after his name: he became PGOM (poor great old man) when his mother died, LD
(living dead) at 60, and AD (archaeological discovery) at 65.
Taking his cue from the great English mathematician G H Hardy, he considered God malicious: He gives us colds, hides our glasses and papers, sends us bad weather and traffic jams and, most
importantly, is delighted if we fail to do something good when we have a chance.
A "book-proof" is a thing of beauty; to spite us, God allows us to see one only in exceptional circumstances
As a mathematician in search of beauty, Erdös imagined a book kept by God in which all mathematical theorems are written down, together with their ideal proofs. A "book-proof" is a thing of beauty;
to spite us, God allows us to see one only in exceptional circumstances. Erdös himself contributed several book-proofs to the mathematical literature.
Paul Erdös will probably be best remembered for showing that elementary methods (relying on ingenuity rather than vast theories) have a place in contemporary mathematics, and for being the driving
force behind the rapid growth of combinatorics.
Paul Erdös lived for mathematics, though he was deeply interested in medicine, history and politics. After his mother's death, he drove himself relentlessly; he slept little and only with the aid of
sleeping tablets, and took caffeine pills to help his concentration.
He was unmarried.
© Copyright Telegraph Group Limited 1996.
Paul Erdös, Sweet Genius
By Charles Krauthammer
Friday, September 27 1996; Page A25
The Washington Post
One of the most extraordinary minds of our time has "left." "Left" is the word Paul Erdös, a prodigiously gifted and productive mathematician, used for "died." "Died" is the word he used to signify
"stopped doing math." Erdös never "died." He continued doing math, notoriously a young person's field, right until the day he died last Friday. He was 83.
It wasn't just his vocabulary that was eccentric. Erdös's whole life was so improbable no novelist could have invented him (though he was chronicled beautifully by Paul Hoffman in the November 1987
Atlantic Monthly).
He had no home, no family, no possessions, no address. He went from math conference to math conference, from university to university, knocking on the doors of mathematicians throughout the world,
declaring, "My brain is open" and moving in. His colleagues, grateful for a few days' collaboration with Erdös -- his mathematical breadth was as impressive as his depth -- took him in.
Erdös traveled with two suitcases, each half-full. One had a few clothes, the other mathematical papers. He owned nothing else. Nothing. His friends took care of the affairs of everyday life for him
-- checkbook, tax returns, food. He did numbers.
He seemed sentenced to a life of solitariness from birth, on the day of which his two sisters, age 3 and 5, died of scarlet fever, leaving him an only child, doted upon and kept at home by a fretful
mother. Hitler disposed of nearly all the rest of his Hungarian Jewish family. And Erdös never married. His Washington Post obituary ends with this abrupt and rather painful line: "He leaves no
immediate survivors."
But in reality he did: hundreds of scientific collaborators and 1,500 mathematical papers produced with them. An astonishing legacy in a field where a lifetime product of 50 papers is considered
quite extraordinary.
Mathematicians tend to bloom early and die early. The great Indian genius, Srinivasa Ramanujan, died at 32. The great French mathematician, Evariste Galois, died at 21. (In a duel. The night before,
it is said, he stayed up all night writing down everything he knew. Premonition?) And those who don't literally die young, die young in Erdös's sense. By 30, they've lost it.
Erdös didn't. He began his work early. At 20 he discovered a proof for a classic theorem of number theory (that between any number and its double must lie a prime -- i.e., indivisible, number). He
remained fecund till his death. Indeed, his friend and benefactor, Dr. (of math, of course) Ron Graham, estimates that perhaps 50 new Erdös papers are still to appear, reflecting work he and
collaborators were doing at the time of his death.
Erdös was unusual in yet one other respect. The notion of the itinerant, eccentric genius, totally absorbed in his own world of thought, is a cliche that almost always attaches to the adjective
"anti-social." From Bobby Fischer to Howard Hughes, obsession and misanthropy seem to go together.
Not so Erdös. He was gentle, open and generous with others. He believed in making mathematics a social activity. Indeed, he was the most prolifically collaborative mathematician in history. Hundreds
of colleagues who have published with him or been advised by him can trace some breakthrough or insight to an evening with Erdös, brain open.
That sociability sets him apart from other mathematical geniuses. Andrew Wiles, for example, recently achieved fame for having solved math's Holy Grail, Fermat's Last Theorem -- after having worked
on it for seven years in his attic! He then sprang the proof on the world as a surprise.
Erdös didn't just share his genius. He shared his money. It seems comical to say so because he had so little. But, in fact, it is rather touching. He had so little because he gave away everything he
earned. He was a soft touch for whatever charitable or hard-luck cause came his way. In India, he once gave away the proceeds from a few lectures he had delivered there to Ramanujan's impoverished widow.
A few years ago, Graham tells me, Erdös heard of a promising young mathematician who wanted to go to Harvard but was short the money needed. Erdös arranged to see him and lent him $1,000. (The sum
total of the money Erdös carried around at any one time was about $30.) He told the young man he could pay it back when he was able to. Recently, the young man called Graham to say that he had gone
through Harvard and now was teaching at Michigan and could finally pay the money back. What should he do?
Graham consulted Erdös. Erdös said, "Tell him to do with the $1,000 what I did."
No survivors, indeed.
© Copyright 1996 The Washington Post Company
Paul Erdös, mathematician, died on September 20 aged 83. He was born on March 26, 1913.
Paul Erdös was regarded by fellow mathematicians as the most brilliant, if eccentric, mind in his field. Because he had no interest in anything but numbers, his name was not well known outside the
mathematical fraternity. He wrote no best-selling books, and showed a stoic disregard for worldly success and personal comfort, living out of a suitcase for much of his adult life. The money he made
from prizes he gave away to fellow mathematicians whom he considered to be needier than himself. "Property is a nuisance," was his succinct evaluation.
Mathematics was his life and his only interest from earliest childhood onwards. He became the most prolific mathematician of his generation, writing or co-authoring 1,000 papers and still publishing
one a week in his seventies. His research spanned many areas, but it was in number theory that he was considered a genius. He set problems that were often easy to state, but extremely tricky to solve
and which involved the relationships between numbers. He liked to say that if one could think of a problem in mathematics that was unsolved and more than 100 years old, it was probably a problem in
number theory.
In spite, or perhaps because of, his eccentricities, mathematicians revered him and found him inspiring to work with. He was regarded as the wit of the mathematical world, the one man capable of
coming up with a short, clever solution to a problem on which others had laboured through pages of equations. He collaborated with so many mathematicians that the phenomenon of the "Erdös number"
evolved. To have an Erdös number 1, a mathematician must have published a paper with Erdös. To have a number of 2, he or she must have published with someone who had published with Erdös, and so on.
Four and a half thousand mathematicians have an Erdös number of 2.
Erdös was born into a Hungarian-Jewish family in Budapest, the only surviving child of two mathematics teachers (his two sisters, who died of scarlet fever, were considered even brighter than he
was). At the age of three he was amusing guests by multiplying three-digit numbers in his head, and he discovered negative numbers for himself the same year. When his father was captured in a Russian
offensive against the Austro-Hungarian armies and sent to Siberia for six years, his mother removed him from school, which she was convinced was full of germs, and decided to teach him herself. Erdös
received his doctorate in mathematics from the University of Budapest, then in 1934 came to Manchester on a post-doctoral fellowship.
By the time he finished there in the late 1930s it was obvious that it would be an act of suicide for a Jew to return to Hungary. Instead Erdös left for the United States. Most members of his family
who remained in Hungary were killed during the war.
Erdös had made his first significant contribution to number theory when he was 20, and discovered an elegant proof for the theorem which states that for each number greater than 1, there is always at
least one prime number between it and its double. The Russian mathematician Chebyshev had proved this in the 19th century, but Erdös's proof was far neater. News of his success was passed around
Hungarian mathematicians, accompanied by a rhyme: "Chebyshev said it, and I say it again/There is always a prime between n and 2n."
In 1949 he and Atle Selberg astounded the mathematics world with an elementary proof of the Prime Number Theorem, which had explained the pattern of distribution of prime numbers since 1896. Selberg
and Erdös agreed to publish their work in back-to-back papers in the same journal, explaining the work each had done and sharing the credit. But at the last minute Selberg (who, it was said, had
overheard himself being slighted by colleagues) raced ahead with his proof and published first. The following year Selberg won the Fields Medal for his work. Erdös was not much concerned with the
competitive aspect of mathematics and was philosophical about the episode.
From 1954 Erdös began to have problems with the American and Soviet authorities. He was invited to a conference in Amsterdam but on the way back into the United States was interrogated by
immigration officials over his Soviet sympathies. Asked what he thought of Marx, he gave a typically guileless response: "I'm not competent to judge, but no doubt he was a great man." Denied his
re-entry visa, Erdös left and spent much of the 1950s in Israel.
He was allowed back into the United States in the 1960s, and from 1964 his mother, now in her mid-eighties, began travelling with him. Apart from his family and old friends, Erdös had no interest in
a relationship which was not founded in shared intellectual curiosity and he was content to remain a bachelor.
Nor did he see the need to restrict himself to one university. He needed no equipment for his work, no library or laboratory. Instead he criss-crossed America and Europe from one university and
research centre to the next, inspired by making new contacts. When he arrived in a new town he would present himself on the doorstep of the local most prominent mathematician and announce: "My brain
is open."
He would work furiously for a few days and then move on, once he had exhausted the ideas or patience of his host (he was quite capable of falling asleep at the dinner table if the conversation was
not mathematics). He would end sessions with: "We'll continue tomorrow if I live." After the death of his mother in 1971, Erdös threw himself into his work with even greater vigour, regularly
putting in a 19-hour day. He fuelled his efforts almost entirely by coffee, caffeine tablets and Benzedrine. He looked more frail, gaunt and unkempt than ever, and often wore his pyjama top as a
shirt. Somehow his body seemed to thrive on this punishing routine.
Because of his simple lifestyle, Erdös had little need of money. He won the Wolf Prize in 1983, the most lucrative award for mathematicians, but kept only $720 of the $50,000 he had received.
Lecturing fees also went to worthy causes. The only time he required funds was when another mathematician solved a problem which Erdös had set but not been able to solve. From 1954 he had spurred his
colleagues on by handing out rewards of up to $1,000 for these problems.
He died from a heart attack at a conference in Warsaw, while he was working on another equation.
Paul Erdos, an Eccentric Titan Of Mathematical Theory, Dies
By Richard Pearson
Washington Post Staff Writer
Tuesday, September 24 1996; Page B07
The Washington Post
Paul Erdos, 83, one of the world's greatest and most eccentric mathematicians, died Sept. 20 at a hospital in Warsaw after a heart attack. He was stricken while attending a conference.
Dr. Erdos, a Jewish native of Budapest, lived a celibate, monkish and nomadic life devoted to mathematics. He had no home, lived out of a single suitcase and since the 1940s had traveled the world
teaching, attending conferences and visiting mathematicians -- he simply stayed with friends the world over.
He was known to arrive, unannounced, at a friend's house with the simple announcement that "my brain is open."
While he was visiting, Dr. Erdos (pronounced AIR-dish) devoted a large part of his time to working with the hosts' mathematics problems, sometimes co-authoring technical articles with them.
It has been said that an above-average mathematician might publish about 20 articles and a really great one 50 in a lifetime. Dr. Erdos, who devoted 19 hours a day, every day, to mathematics, was the
author of more than 1,500 works. In 1986, he published 50 papers -- in a field in which it is thought that most peak early.
Dr. Erdos, who was a member of Britain's Royal Society and the national academies on three continents, was known for his work in numbers theory, the theory of sets and probability theory. Over the
years, he helped develop such fields of mathematics as random graph theory and combinatorics, mathematics dealing with large numbers of objects that must be counted and classified. He was an inventor
of the branch of combinatorics called Ramsey theory.
He was the subject of a prize-winning 1987 profile in the Atlantic magazine and a documentary film of his life recorded in Europe and the United States.
He also had a lifelong fascination for deceptively simple and even ancient branches of mathematics, such as "prime," "perfect" and "friendly" numbers. Simplified, a prime is a number evenly divisible
by no other whole number but itself and 1. A perfect number is an integer that equals the sum of other integers that can evenly be divided into it, while pairs of friendly numbers equal the sum of
the other's divisors.
Dr. Erdos, who crossed boundaries of mathematical study with ease, made his name composing short, pithy and brilliantly simple solutions to problems. Once, while in an instructors lounge, he spotted
a problem concerning functional analysis, not his area of expertise. Told that two mathematicians were pleased with a 30-page solution they had arrived at, Dr. Erdos spent about 10 minutes before
coming up with a two-line solution.
He was a recipient of the immensely prestigious Wolf Prize, at $50,000 the highest-paying award in mathematics. Despite a spotty income, he gave most of it away, explaining that "some French
socialist said that private property was theft" but that he thought "private property is a nuisance."
Dr. Erdos defined the word "mathematician" as "a machine for turning coffee into theorems." Told by many colleagues to slow down, take it easy, he always replied, "There'll be plenty of time to rest
in the grave."
Standing 5 feet 6 inches tall and weighing a strapping 130 pounds, he had white hair, glasses and an unruly beard. He lived on a diet consisting largely of caffeine, antidepressants and amphetamines.
Some of this might be due to his confession that he had never "learned" to boil water and had not managed to butter his first piece of bread until he was 21. Added to this, a skin condition caused
him to wear only silk underwear and led him to wash his hands more than 50 times a day.
Dr. Erdos was born to two mathematics teachers who recognized his gifts early. At the age of 3, he could multiply two three-digit numbers in his head. At 4, he "invented" the concept of negative
numbers. He explained that when he was 10 years old, his father explained Euclid's proof to him, and "I was hooked." He entered the University of Budapest as a teenager and left four years later with
a doctorate in mathematics.
Dr. Erdos did postgraduate study in England and then found himself wandering the globe as something of a stateless person. As a Jew, he could not live in wartime Europe, and later he was not a fan of
communism. To round it all out, he was forbidden to visit the United States during the McCarthy era after explaining to the FBI that he knew nothing of Karl Marx and was interested only in mathematics.
His only known hobby was the Japanese board game go.
He leaves no immediate survivors.
© Copyright 1996 The Washington Post Company
|
{"url":"http://www.fmf.uni-lj.si/~mohar/Erdos.html","timestamp":"2014-04-19T14:52:34Z","content_type":null,"content_length":"31108","record_id":"<urn:uuid:8e0e53a3-62a2-438e-b289-e331a3a9d252>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00511-ip-10-147-4-33.ec2.internal.warc.gz"}
|
This is the old Caveman Chemistry website.
Three New Hotdogs
Stoichiometry is a big long word which denotes the general process of figuring out how much stuff you need to make something. For example, if you are going to build 100 cars, how many wheels do you
need? This is a (nonchemical) stoichiometric problem. We could solve it using Unit Factor Analysis by introducing the wheel/car ratio:
(4 wheels/car)
wheels = 100 cars (4 wheels/car) = 400 wheels
If we want to know how many lug nuts we need, this becomes
nuts = 100 cars (4 wheels/car)(5 nuts/wheel) = 2000 nuts
Chemical stoichiometry problems are not more complicated than this provided that you become familiar with three new hotdogs:
Name Unit
Mole Ratio ( X moles A / Y moles B )
Formula Weight ( Z grams A / mole A )
Normal Gas Volume ( 24.4 L gas / mole gas )
What is a mole? Well, when we first introduced balanced chemical reactions, we talked in terms of parts and I promised that we would get more specific when the need arose. The need just arose.
Consider the balanced chemical reaction for the fermentation of glucose:
C[6]H[12]O[6](s) ---> 2 CO[2](g) + 2 C[2]H[5]OH(l)
The "2's" in front of CO[2] and C[2]H[5]OH are called the stoichiometric coefficients. This may seem like a long and cumbersome word, but it is certainly shorter than saying, "the numbers in front of
each substance in a balanced chemical equation!" The mole ratio is simply the ratio of two stoichiometric coefficients from the same balanced equation. You may have to think about that a bit before
you realize how simple it is.
What is the stoichiometric coefficient of C[6]H[12]O[6] in this equation? If a number is not explicitly given, it is implicitly understood to be "1." Now, the mole ratios for this equation are:
(2 moles CO[2] / 1 mole C[6]H[12]O[6])
(2 moles C[2]H[5]OH / 1 mole C[6]H[12]O[6])
(2 moles C[2]H[5]OH / 2 moles CO[2])
Which ratio you need to use depends on the question you are trying to answer.
Let's work some simple problems.
• How many moles of glucose are needed to produce 25 moles of ethanol?
Simple. Recalling that C[6]H[12]O[6] is glucose and C[2]H[5]OH is ethanol:
moles glucose = 25 moles ethanol (1 mole glucose / 2 moles ethanol) = 12.5 moles glucose
So whenever you want to make 25 moles of ethanol, you will need 12.5 moles of glucose. But how do you measure out 12.5 moles of glucose? That's where the second hotdog comes in.
The conversion factor from moles to grams is called the formula weight. You get that from looking up the atomic weight for each atom in the formula and adding them all up. If you look on the Periodic
Table you will find the following atomic weights:
• Carbon: 12.011 grams/mole
• Hydrogen: 1.008 grams/mole
• Oxygen: 16.000 grams/mole
The number of digits may vary from one table to another depending on the precision used. For our purposes in this course, we can round off to integer values. now the formula weight of glucose is:
6*12 + 12*1 + 6*16 = (180 grams glucose / mole glucose)
For carbon dioxide and ethanol, we have:
1*12 + 2*16 = (44 grams CO[2]/ mole CO[2])
2*12 + 6*1 + 1*16 = (46 grams ethanol/ mole ethanol)
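If you want to let a machine do the adding-up, a small helper like the following sketch (using the rounded atomic weights above) reproduces all three formula weights.

```python
ATOMIC_WEIGHT = {"C": 12, "H": 1, "O": 16}   # rounded values used in this course

def formula_weight(formula):
    """formula is a dict of element -> atom count, e.g. glucose = {"C": 6, "H": 12, "O": 6}."""
    return sum(ATOMIC_WEIGHT[element] * count for element, count in formula.items())

print(formula_weight({"C": 6, "H": 12, "O": 6}))   # 180 g/mol glucose
print(formula_weight({"C": 1, "O": 2}))            # 44 g/mol carbon dioxide
print(formula_weight({"C": 2, "H": 6, "O": 1}))    # 46 g/mol ethanol
```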
We now have everything we need to answer the very important question:
• How many grams of glucose are needed to produce 1000 grams of ethanol?
grams glucose = 1000 grams ethanol (1 mole ethanol/46 grams ethanol)
(1 mole glucose/2 moles ethanol)(180 grams glucose/1 mole glucose)
= 1000*(1*1*180)/(46*2*1)
= 1956 grams glucose
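The same chain of unit factors can be written out step by step in code; each line below mirrors one conversion factor from the calculation above.

```python
grams_ethanol = 1000
moles_ethanol = grams_ethanol * (1 / 46)    # (1 mole ethanol / 46 g ethanol)
moles_glucose = moles_ethanol * (1 / 2)     # (1 mole glucose / 2 moles ethanol)
grams_glucose = moles_glucose * (180 / 1)   # (180 g glucose / 1 mole glucose)
print(grams_glucose)                        # about 1956.5 g glucose, rounded to 1956 above
```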
This is a typical stoichiometry problem. Can you answer the question:
• What volume of carbon dioxide is produce when 100 grams of glucose is fermented to ethanol?
To answer this question we need the third hotdog: (24.4 L of gas/ mole of gas). It turns out that under normal conditions (25 Centigrade and normal atmospheric pressure) one mole of gas occupies
approximately 24.4 L no matter what the identity of the gas is. The volume depends on temperature and pressure. You may run across the value 22.4 L/mole for gases at 0 Centigrade and standard
atmospheric pressure. And there is a relationship called the Ideal Gas Equation which allows you to calculate the volume for a wide range of conditions. But for this course you only need to remember
that under the same conditions all gases occupy approximately the same volume and at room temperature and normal pressure this value is 24.4 L. How big is this volume? About a dozen 2L soft drink bottles.
Now to answer the question:
L carbon dioxide = 100 grams glucose (1 mole glucose/180 grams glucose)
(2 moles carbon dioxide/ 1 mole glucose)
(24.4 L carbon dioxide / 1 mole carbon dioxide)
= 100*(1*2*24.4)/(180*1*1) = 27 L carbon dioxide
No wonder you have to leave the cap loose when you brew mead!
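The carbon dioxide question follows the same pattern, with the normal gas volume as the final factor; the sketch below simply reproduces the arithmetic.

```python
grams_glucose = 100
moles_glucose = grams_glucose * (1 / 180)   # (1 mole glucose / 180 g glucose)
moles_co2 = moles_glucose * (2 / 1)         # (2 moles CO2 / 1 mole glucose)
liters_co2 = moles_co2 * (24.4 / 1)         # (24.4 L CO2 / 1 mole CO2)
print(liters_co2)                           # about 27.1 L of CO2
```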
Here are some important tips for working stoichiometry problems:
• Resist the temptation to use short cuts, UAYF!
• Always use "moles A," "moles B," never "moles" alone
• Only cancel "moles A" with "moles A" not "moles B"
• Only use 24.4 L/mole for gases
By paying attention to these points you will be happy and successful in working stoichiometry problems.
What Is This Good For?
Just as a factory owner is concerned to have the right number of wheels and nuts available for the number of cars he is making, a chemist must be sure to mix reactants in the proper amounts for his
purposes. If he uses more of one reactant than is needed, that reactant will be left over and wasted at the end. Sometimes this simply means that he has wasted some of his chemicals. But other times
it can cause a different reaction from the one intended to take place. This will be particularly important when we talk about gunpowder and acids.
Yes But What Is This "Mole" Thing?
For millennia alchemists, tradesmen, and later chemists struggled with the general stoichiometric problem: "How much A do I need to react with a given amount of B." Up until the dawn of the Nineteenth
Century, there was no satisfactory answer. Relative amounts were determined by trial and lots of error. The struggle to form a satisfactory theoretical foundation is described in detail in From
Caveman to Chemist and is worthy of your time and attention. For our practical use, however, we can simply say that a mole is the unit of chemical amount. When you need to know how much of a chemical
to use, the mole is the unit of choice. Since we don't have a direct way of measuring this elusive quantity, we need some conversion factors. The formula weight converts moles to grams, and the
normal gas volume converts moles of gas to liters of gas.
Criteria for Success
When you are ready, I will give you a single stoichiometry problem to work by Unit Factor Analysis. I will expect that you have memorized the table of unit factors from the Unit Factor Analysis page.
In addition, I will give you a balanced chemical equation. I may ask for the number of grams needed or produced or I may ask for the volume of gas needed or produced. You might need to convert to
pounds or cubic feet or whatever units are discussed in the Unit Factor Analysis page. You will work this problem without notes, but you may use a calculator. If you do not get the correct answer,
you fail. You may, however, keep taking the test (one per day) until you pass. Of course the problems will be different from day to day.
|
{"url":"http://cavemanchemistry.com/oldcave/projects/stoich/","timestamp":"2014-04-18T15:38:42Z","content_type":null,"content_length":"9083","record_id":"<urn:uuid:09706535-8a88-4a02-aeb6-a2665e885d0f>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00353-ip-10-147-4-33.ec2.internal.warc.gz"}
|
difficult identity
March 23rd 2008, 11:37 AM #1
Mar 2008
difficult identity
I've been working on this problem for over an hour, and I still am unable to verify it. I know it is true, because I plugged in a value for x and it works, but i don't know how to prove it.
(tan x / (1 - cot x)) + (cot x / (1-tan x))^2 = 1 + sec x csc x
start with the left hand side. combine the fractions, change everything to sines and cosines and simplify as much as possible.
then go to the left hand side, change everything to sines and cosines and show that you can simplify it to get the same thing
yea, I know thats the method of solving them, and I was actually able to solve 19 out of the 20 I had to do, but this one just doesn't seem to simplify.
just to clarify, is the problem $\frac {\tan x}{1 - \cot x} + \left( \frac {\cot x}{1 - \tan x} \right)^2 = 1 + \sec x \csc x$ ?
No it is actually:
tan x/(1 - cot x) + cot x/(1 - tan x) = 1 + sec x csc x
ps. I typed the formula into microsoft word using the symbols, so that it was easier to visualize, but it didn't copy over the same. How did you write that formula?
hehe, what you write now is even more confusing than the last one. what do the ?'s mean?
i used LaTeX. see the LaTeX tutorial here to see how
whoops, sorry. It doesn't display ?s for me, but I'll try this LaTex thing, which might take me a little while.
$\frac {\tan x}{1 - \cot x} + \frac {\cot x}{1 - \tan x} = 1 + \sec x \csc x$
whew, that was rough.
Hello, qwerty!
$\frac {\tan x}{1 - \cot x} + \frac {\cot x}{1 - \tan x} \:= \:1 + \sec x \csc x$
We have: . $\frac{\dfrac{\sin x}{\cos x}}{1 - \dfrac{\cos x}{\sin x}} + \frac{\dfrac{\cos x}{\sin x}}{1 - \dfrac{\sin x}{\cos x}}$
Multiply each fraction by $\frac{\sin x\cos x}{\sin x\cos x}$
. . $\frac{\sin x\cos x}{\sin x\cos x}\cdot\frac{\dfrac{\sin x}{\cos x}}{1 - \dfrac{\cos x}{\sin x}} \;+ \;\frac{\sin x\cos x}{\sin x\cos x}\cdot\frac{\dfrac{\cos x}{\sin x}}{1 - \dfrac{\sin x}{\cos x}} \;=$ . $\frac{\sin^2\!x}{\sin x\cos x - \cos^2\!x} + \frac{\cos^2\!x}{\sin x\cos x - \sin^2\!x}$
. . $= \;\frac{\sin^2\!x}{\cos x(\sin x-\cos x)} - \frac{\cos^2\!x}{\sin x(\sin x - \cos x)} \;=\;\frac{\sin^3\!x - \cos^3\!x}{\cos x\sin x(\sin x-\cos x)}$
Factor: . $\frac{(\sin x-\cos x)(\sin^2\!x + \sin x\cos x + \cos^2\!x)}{\cos x\sin x(\sin x-\cos x)} \;=\;\frac{\sin x\cos x + \overbrace{\sin^2\!x + \cos^2\!x}^{\text{This is 1}}}{\cos x\sin x}$
. . $= \;\frac{\sin x\cos x + 1}{\cos x\sin x} \;=\;\frac{\sin x\cos x}{\cos x\sin x} + \frac{1}{\cos x\sin x} \;=\;1 + \sec x\csc x$
wow thank you so much! I would've spent another 2 hours on that to no avail, because I didn't even think of using difference of cubes.
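(A quick machine check with sympy, sketched below, is a handy sanity test for identities like this; it is not a substitute for the algebraic proof above.)

```python
import sympy as sp

x = sp.symbols('x')
lhs = sp.tan(x)/(1 - sp.cot(x)) + sp.cot(x)/(1 - sp.tan(x))
rhs = 1 + sp.sec(x)*sp.csc(x)

print(sp.simplify(lhs - rhs))             # expected to reduce to 0
print((lhs - rhs).subs(x, 0.7).evalf())   # numeric spot check, close to zero
```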
|
{"url":"http://mathhelpforum.com/trigonometry/31800-difficult-identity.html","timestamp":"2014-04-19T15:03:26Z","content_type":null,"content_length":"59596","record_id":"<urn:uuid:b7674d64-6854-4ac6-b5df-dec19383c75b>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00062-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Fourteen Limit Cycles in a Seven-Degree Nilpotent System
Abstract and Applied Analysis
Volume 2013 (2013), Article ID 398609, 5 pages
Research Article
Fourteen Limit Cycles in a Seven-Degree Nilpotent System
^1Guangxi Key Laboratory of Trusted Software, School of Computing Science and Mathematics, Guilin University of Electronic Technology, Guilin 541004, China
^2Department of Mathematics, Hezhou University, Hezhou 542800, China
Received 13 August 2013; Accepted 30 October 2013
Academic Editor: Isaac Garcia
Copyright © 2013 Wentao Huang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
Center conditions and the bifurcation of limit cycles for a seven-degree polynomial differential system in which the origin is a nilpotent critical point are studied. Using the computer algebra
system Mathematica, the first 14 quasi-Lyapunov constants of the origin are obtained, and then the conditions for the origin to be a center and the 14th-order fine focus are derived, respectively.
Finally, we prove that the system has 14 limit cycles bifurcated from the origin under a small perturbation. As far as we know, this is the first example of a seven-degree system with 14 limit cycles
bifurcated from a nilpotent critical point.
1. Introduction
In the qualitative theory of planar differential equations, the center-focus problem and the bifurcation of limit cycles for nilpotent systems are known to be difficult. Early advances on this problem date back to [1–3]. In recent years, owing to improved research methods and the development of computer symbolic computation, the problem has attracted increasing attention and produced many results. For instance, in [4, 5] the center conditions of the nilpotent critical points were obtained for several systems. In [6] the center conditions and the bifurcations of limit cycles were investigated for a quintic and a nine-degree nilpotent system. The center and limit cycle problems of a quintic nilpotent system were also solved in [7]. In [8], the authors gave a recursive method to calculate quasi-Lyapunov constants of the nilpotent critical point. The nilpotent center problem and limit cycle bifurcations were also studied in [9]. It is interesting to ask how many limit cycles can be bifurcated from the nilpotent critical point. Let $M(n)$ be the maximum possible number of limit cycles bifurcated from a nilpotent critical point of system (1) when the polynomial perturbations are of degree at most $n$. Lower bounds for $M(n)$ were given for low-degree systems by Andreev et al. [5], improved for several degrees by Y. Liu and J. Li [8, 10–12], and extended by Li et al. [13]; recently, Li et al. [14] obtained a further improvement.
In this paper, we study the bifurcation of limit cycles for a seven-degree nilpotent system with the following form: By the computation of the quasi-Lyapunov constants, we prove that its perturbed
system has 14 small-amplitude limit cycles bifurcated from the origin, namely, which improves the result in [14].
In Section 2, we give some preliminary knowledge concerning the nilpotent critical point. In Section 3, we obtain the first 14 quasi-Lyapunov constants and derive the sufficient and necessary
conditions of the origin to be a center and a 14th-order fine focus. At the end, it is proved that there exist 14 limit cycles in the neighborhood of the origin of the system.
2. Focal Values and Quasi-Lyapunov Constants
In order to discuss limit cycles of the system, we state some preliminary results given by [8].
According to [2], the origin of the system is a third-order monodromic critical point and a center or a focus if and only if , . Without loss of generality, we assume that , , , , otherwise let , .
Under the substitutions system (1) becomes
By the transformation of the generalized polar coordinates, system (4) is transformed into where
For sufficiently small , let be a solution of (6) satisfying the initial value condition , where
Because for all sufficiently small , there is , in a small neighborhood; we obtain the Poincaré return map of (6) in a small neighborhood of the origin as follows:
Lemma 1. For any positive integer , has the form where is a polynomial of , , , with rational coefficients.
Definition 2. For any positive integer , is called the th-order focal value of system (4) at the origin; if , the origin of system (4) is called an 1th-order weak focus; if there is an integer such
that , , then the origin of system (4) is called a th-order weak focus; if for all positive integer , we have , the origin of system (4) is called a center.
Lemma 3. For system (4), one can derive successively the formal series such that
Lemma 4. If there exists a natural number and formal series such that (13) holds, then where In (15), is the symbol of algebraic equivalence, meaning that there exists , polynomial functions of the
coefficients of system (4), such that
Definition 5. In Lemma 4, is called the th-order quasi-Lyapunov constant of the origin of system (4).
Lemma 6. For system (4), one can derive successively the formal series such that where , . For , , , and are determined by the following recursive formulas: where By choosing such that one has
One considers the perturbed system of system (4)
For system (24), from Lemma 4, we know that the first nonvanishing quasi-Lyapunov constant is positive constant times as much as the first nonvanishing focal value, so the former shows the same
effect as the latter in the study of bifurcation of limit cycles. From [10, Theorem 4.7], we have the following.
Theorem 7. For the system (27), assume that the quasi-Lyapunov constants of the origin have independent parameters ; that is, . If , the origin of the system (4) is an th-order weak focus (), and the
Jacobian determinant then, the perturbed system (24) exists small amplitude limit cycles bifurcated from the origin.
3. Criterion of Center Focus and Bifurcation of Limit Cycles
Applying the recursive formulas in Lemma 6, we compute the quasi-Lyapunov constants of the origin of system (2) with the computer algebra system Mathematica and obtain the following result.
Theorem 8. For system (2), the first 14 quasi-Lyapunov constants are as follows: Here, every () was computed under the assumption .
It is easy to obtain the following Theorem.
Theorem 9. For system (2), the first 14 quasi-Lyapunov constants at the origin are all zero if and only if the following condition is satisfied:
If and the condition (27) holds, system (2) becomes which is symmetric with respect to the -axis, one has the following.
Theorem 10. The origin of system (2) is a center if and only if and (27) holds.
By , , one has the following.
Theorem 11. The origin of system (2) is a 14th-order weak focus if and only if
By computing carefully, we obtain that the Jacobian determinant
From (30) and Theorem 7, one has the following.
Theorem 12. For system (2), under the condition (29), by small perturbations of the parameter group , then there are 14 small amplitude limit cycles bifurcated from the origin.
Acknowledgments
This paper is partly supported by the Nature Science Foundation of China Grants 11261013 and 11361017 and the Nature Science Foundation of Guangxi (2012GXNSFAA053003).
References
1. A. F. Andreev, “Investigation of the behaviour of the integral curves of a system of two differential equations in the neighbourhood of a singular point,” American Mathematical Society Translations, vol. 8, pp. 183–207, 1958.
2. V. V. Amel’kin, N. A. Lukashevich, and A. P. Sadovskiĭ, Nonlinear Oscillations in Second Order Systems, Belarusian State University, Minsk, Russia, 1982 (Russian).
3. V. G. Romanovskii, “On the cyclicity of the equilibrium position of the center or focus type of a certain system,” Vestnik St. Petersburg University: Mathematics, vol. 19, pp. 51–56, 1986.
4. M. J. Álvarez and A. Gasull, “Monodromy and stability for nilpotent critical points,” International Journal of Bifurcation and Chaos, vol. 15, no. 4, pp. 1253–1265, 2005.
5. A. F. Andreev, A. P. Sadovskiĭ, and V. A. Tsikalyuk, “The center-focus problem for a system with homogeneous nonlinearities in the case of zero eigenvalues of the linear part,” Differential Equations, vol. 39, no. 2, pp. 155–164, 2003.
6. M. J. Álvarez and A. Gasull, “Generating limit cycles from a nilpotent critical point via normal forms,” Journal of Mathematical Analysis and Applications, vol. 318, no. 1, pp. 271–287, 2006.
7. A. Algaba, C. García, and M. Reyes, “Local bifurcation of limit cycles and integrability of a class of nilpotent systems of differential equations,” Applied Mathematics and Computation, vol. 215, no. 1, pp. 314–323, 2009.
8. Y. Liu and J. Li, “On third-order nilpotent critical points: integral factor method,” International Journal of Bifurcation and Chaos, vol. 21, no. 5, pp. 1293–1309, 2011.
9. M. Han and V. G. Romanovski, “Limit cycle bifurcations from a nilpotent focus or center of planar systems,” Abstract and Applied Analysis, vol. 2012, Article ID 720830, 28 pages, 2012.
10. Y. Liu and J. Li, “New study on the center problem and bifurcations of limit cycles for the Lyapunov system. I,” International Journal of Bifurcation and Chaos, vol. 19, no. 11, pp. 3791–3801, 2009.
11. Y. Liu and J. Li, “New study on the center problem and bifurcations of limit cycles for the Lyapunov system. II,” International Journal of Bifurcation and Chaos, vol. 19, no. 9, pp. 3099–3807, 2009.
12. Y. Liu and J. Li, “Bifurcations of limit cycles and center problem for a class of cubic nilpotent system,” International Journal of Bifurcation and Chaos, vol. 20, no. 8, pp. 2579–2584, 2010.
13. F. Li, Y. Liu, and Y. Wu, “Center conditions and bifurcation of limit cycles at three-order nilpotent critical point in a seventh degree Lyapunov system,” Communications in Nonlinear Science and Numerical Simulation, vol. 16, no. 6, pp. 2598–2608, 2011.
14. F. Li, Y. Liu, and H. Li, “Center conditions and bifurcation of limit cycles at three-order nilpotent critical point in a septic Lyapunov system,” Mathematics and Computers in Simulation, vol. 81, no. 12, pp. 2595–2607, 2011.
Problem 286: Scoring probabilities
Published on Saturday, 3rd April 2010, 05:00 am; Solved by 1180
Barbara is a mathematician and a basketball player. She has found that the probability of scoring a point when shooting from a distance x is exactly (1 - x/q), where q is a real constant greater than 50.
During each practice run, she takes shots from distances x=1, x=2, ..., x=50 and, according to her records, she has precisely a 2% chance to score a total of exactly 20 points.
Find q and give your answer rounded to 10 decimal places.
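One way to attack this numerically (not an official solution, just a sketch): treat the 50 shots as independent Bernoulli trials with success probabilities 1 - x/q, compute the probability of exactly 20 successes by dynamic programming, and then search for the q that makes that probability equal to 0.02. In the Python sketch below the function name prob_exactly, the search interval (50, 60], and the assumption that the probability decreases as q grows on that interval are all my own choices.

def prob_exactly(q, shots=50, target=20):
    # dp[k] = probability of having scored exactly k points so far;
    # paths that exceed the target can never come back, so they are dropped.
    dp = [1.0] + [0.0] * target
    for x in range(1, shots + 1):
        p = 1.0 - x / q                      # probability of scoring from distance x
        new = [0.0] * (target + 1)
        for k in range(target + 1):
            if dp[k]:
                new[k] += dp[k] * (1.0 - p)  # miss: score stays at k
                if k + 1 <= target:
                    new[k + 1] += dp[k] * p  # hit: score becomes k + 1
        dp = new
    return dp[target]

lo, hi = 50.0000001, 60.0
for _ in range(200):                         # bisection is ample for 10 decimal places
    mid = (lo + hi) / 2.0
    if prob_exactly(mid) > 0.02:
        lo = mid
    else:
        hi = mid
print("%.10f" % ((lo + hi) / 2.0))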
Electronic band structure
In solid-state physics, the electronic band structure (or simply band structure) of a solid describes ranges of energy that an electron is "forbidden" or "allowed" to have. It is due to the diffraction of the quantum mechanical electron waves in the periodic crystal lattice. The band structure of a material determines several characteristics, in particular its electronic and optical properties.
Why bands occur in materials
The electrons of a single free-standing atom occupy atomic orbitals, which form a discrete set of energy levels. If several atoms are brought together into a molecule, their atomic orbitals split, as
in a coupled oscillation. This produces a number of molecular orbitals proportional to the number of atoms. When a large number of atoms (of order $10^{20}$ or more) are brought together
to form a solid, the number of orbitals becomes exceedingly large, and the difference in energy between them becomes very small, so the levels may be considered to form continuous bands of energy
rather than the discrete energy levels of the atoms in isolation. However, some intervals of energy contain no orbitals, no matter how many atoms are aggregated, forming band gaps.
Within an energy band, energy levels are so numerous as to be a near continuum. First, the separation between energy levels in a solid is comparable with the energy that electrons constantly exchange
with phonons (atomic vibrations). Second, it is comparable with the energy uncertainty due to the Heisenberg uncertainty principle, for reasonably long intervals of time. As a result, the separation
between energy levels is of no consequence.
Several approaches to finding band structure are discussed below
Basic concepts
Any solid has a large number of bands. In theory, it can be said to have infinitely many bands (just as an atom has infinitely many energy levels). However, all but a few lie at energies so high that
any electron that reaches those energies escapes from the solid. These bands are usually disregarded.
Bands have different widths, based upon the properties of the atomic orbitals from which they arise. Also, allowed bands may overlap, producing (for practical purposes) a single large band.
Figure 1 shows a simplified picture of the bands in a solid that allows the three major types of materials to be identified: metals, semiconductors and insulators.
Metals contain a band that is partly empty and partly filled regardless of temperature. Therefore they have very high conductivity.
The lowermost, almost fully occupied band in an insulator or semiconductor is called the valence band by analogy with the valence electrons of individual atoms. The uppermost, almost unoccupied band
is called the conduction band because only when electrons are excited to the conduction band can current flow in these materials. The difference between insulators and semiconductors is only that the
forbidden band gap between the valence band and conduction band is larger in an insulator, so that fewer electrons are found there and the electrical conductivity is lower. Because one of the main
mechanisms for electrons to be excited to the conduction band is due to thermal energy, the conductivity of semiconductors is strongly dependent on the temperature of the material.
This band gap is one of the most useful aspects of the band structure, as it strongly influences the electrical and optical properties of the material. Electrons can transfer from one band to the
other by means of carrier generation and recombination processes. The band gap and defect states created in the band gap by doping can be used to create semiconductor devices such as solar cells,
diodes, transistors, laser diodes, and others.
A more complete view of the band structure takes into account the periodic nature of a crystal lattice using the symmetry operations that form a space group. The Schrödinger equation is solved for the crystal, which has Bloch waves as solutions:
$\Psi_{n,\mathbf{k}}(\mathbf{r}) = e^{i\mathbf{k}\cdot\mathbf{r}}\, u_n(\mathbf{r}),$
where k is called the wavevector, and is related to the direction of motion of the electron in the crystal, and n is the band index, which simply numbers the energy bands. The wavevector k takes on
values within the Brillouin zone (BZ) corresponding to the crystal lattice, and particular directions/points in the BZ are assigned conventional names like Γ, Δ, Λ, Σ, etc. These directions are shown
for the face-centered cubic lattice geometry in Figure 2.
The available energies for the electron also depend upon k, as shown in Figure 3 for silicon in the more complex energy band diagram at the right. In this diagram the topmost energy of the valence
band is labeled $E_v$ and the bottom energy in the conduction band is labeled $E_c$. The top of the valence band is not directly below the bottom of the conduction band ($E_v$ is for an electron
traveling in direction Γ, $E_c$ in direction X), so silicon is called an indirect gap material. For an electron to be excited from the valence band to the conduction band, it needs something to give
it energy $E_c - E_v$and a change in direction/momentum. In other semiconductors (for example GaAs) both are at Γ, and these materials are called direct gap materials (no momentum change required).
Direct gap materials benefit the operation of semiconductor laser diodes.
Anderson's rule is used to align band diagrams between two different semiconductors in contact.
Band structures in different types of solids
Although electronic band structures are usually associated with crystalline materials, amorphous solids may also exhibit band structures. However, the periodic nature and symmetrical properties of crystalline materials make it much easier to examine the band structures of these materials theoretically. In addition, the well-defined symmetry axes of crystalline materials make it possible to determine the dispersion relationship between the momentum (a 3-dimensional vector quantity) and energy of a material. As a result, virtually all of the existing theoretical work on the electronic band structure of solids has focused on crystalline materials.
Density of states
While the density of energy states in a band could be very large for some materials, it may not be uniform. It approaches zero at the band boundaries, and is generally highest near the middle of a
band. The density of states for the free electron model in three dimensions is given by,
$D(\epsilon) = \frac{V}{2\pi^{2}}\left(\frac{2m}{\hbar^{2}}\right)^{3/2}\epsilon^{1/2}$
Filling of bands
Although the number of states in all of the bands is effectively infinite, in an uncharged material the number of electrons is equal only to the number of protons in the atoms of the material.
Therefore not all of the states are occupied by electrons ("filled") at any time. The likelihood of any particular state being filled at any temperature is given by the Fermi-Dirac statistics. The
probability is given by the following:
$f(E) = \frac{1}{1 + e^{(E-E_F)/(k_B T)}}$
The Fermi level naturally is the level at which the electrons and protons are balanced.
At T=0, the distribution is a simple step function:
$f(E) = \begin{cases} 1 & \text{if } 0 < E \le E_F \\ 0 & \text{if } E_F < E \end{cases}$
At nonzero temperatures, the step "smooths out", so that an appreciable number of states below the Fermi level are empty, and some states above the Fermi level are filled.
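A rough numerical illustration of that smoothing, in Python: the Fermi level, the energy offsets, and the temperatures below are arbitrary assumed values chosen only to show how the occupation step sharpens as T drops.

import math

def fermi_dirac(E, E_F, T, k_B=8.617e-5):        # k_B in eV/K
    # Occupation probability of a single-particle state at energy E (eV).
    if T == 0:
        return 1.0 if E <= E_F else 0.0
    return 1.0 / (1.0 + math.exp((E - E_F) / (k_B * T)))

E_F = 5.0                                        # assumed Fermi level, eV
for T in (0, 300, 3000):                         # temperatures in kelvin
    row = [fermi_dirac(E_F + dE, E_F, T) for dE in (-0.2, -0.05, 0.0, 0.05, 0.2)]
    print(T, ["%.3f" % f for f in row])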
Band structure of crystals
Brillouin zone
Because electron momentum is the reciprocal of space, the dispersion relation between the energy and momentum of electrons can best be described in reciprocal space. It turns out that for crystalline
structures, the dispersion relation of the electrons is periodic, and that the Brillouin zone is the smallest repeating space within this periodic structure. For an infinitely large crystal, if the
dispersion relation for an electron is defined throughout the Brillouin zone, then it is defined throughout the entire reciprocal space.
Theory of band structures in crystals
The ansatz is the special case of electron waves in a periodic crystal lattice using Bloch waves as treated generally in the dynamical theory of diffraction. Every crystal is a periodic structure
which can be characterized by a Bravais lattice, and for each Bravais lattice we can determine the reciprocal lattice, which encapsulates the periodicity in a set of three reciprocal lattice vectors
($\mathbf{b}_1$, $\mathbf{b}_2$, $\mathbf{b}_3$). Now, any periodic potential $V(\mathbf{r})$ which shares the same periodicity as the direct lattice can be expanded out as a Fourier series whose only non-vanishing components are those associated with the reciprocal lattice vectors. So the expansion can be written as:
$V(\mathbf{r}) = \sum_{\mathbf{K}} V_{\mathbf{K}}\, e^{i\mathbf{K}\cdot\mathbf{r}}$
where $\mathbf{K} = m_1 \mathbf{b}_1 + m_2 \mathbf{b}_2 + m_3 \mathbf{b}_3$ for any set of integers $(m_1, m_2, m_3)$.
From this theory, an attempt can be made to predict the band structure of a particular material, however most ab initio methods for electronic structure calculations fail to predict the observed band
Nearly-free electron approximation
In the nearly-free electron approximation in solid state physics interactions between electrons are completely ignored. This approximation allows use of Bloch's Theorem which states that electrons in
a periodic potential have wavefunctions and energies which are periodic in wavevector up to a constant phase shift between neighboring reciprocal lattice vectors. The consequences of periodicity are
described mathematically by the Bloch wavefunction:
$\Psi_{n,\mathbf{k}}(\mathbf{r}) = e^{i\mathbf{k}\cdot\mathbf{r}}\, u_n(\mathbf{r})$
where the function $u_n(\mathbf{r})$ is periodic over the crystal lattice, that is,
$u_n(\mathbf{r}) = u_n(\mathbf{r}-\mathbf{R})$.
Here index n refers to the n-th energy band, wavevector k is related to the direction of motion of the electron, r is the position in the crystal, and R is the location of an atomic site.
(For more detail see nearly-free electron model and pseudopotential method).
Tight-binding model
The opposite extreme to the nearly-free electron approximation assumes the electrons in the crystal behave much like an assembly of constituent atoms. This tight-binding model assumes the solution to the time-independent single electron Schrödinger equation $\Psi$ is well approximated by a linear combination of atomic orbitals $\psi_n(\mathbf{r})$:
$\Psi(\mathbf{r}) = \sum_{n,\mathbf{R}} b_{n,\mathbf{R}}\, \psi_n(\mathbf{r}-\mathbf{R})$,
where the coefficients $b_{n,\mathbf{R}}$ are selected to give the best approximate solution of this form. Index n refers to an atomic energy level and R refers to an atomic site. A more accurate approach using this idea employs Wannier functions, defined by:
$a_n(\mathbf{r}-\mathbf{R}) = \frac{V_{C}}{(2\pi)^{3}} \int_{BZ} d\mathbf{k}\; e^{-i\mathbf{k}\cdot(\mathbf{R}-\mathbf{r})}\, u_{n\mathbf{k}}$
in which $u_{n\mathbf{k}}$ is the periodic part of the Bloch wave and the integral is over the Brillouin zone. Here index n refers to the n-th energy band in the crystal. The Wannier functions are localized near atomic sites, like atomic orbitals, but being defined in terms of Bloch functions they are accurately related to solutions based upon the crystal potential. Wannier functions on different atomic sites R are orthogonal. The Wannier functions can be used to form the Schrödinger solution for the n-th energy band as:
$\Psi_{n,\mathbf{k}}(\mathbf{r}) = \sum_{\mathbf{R}} e^{-i\mathbf{k}\cdot(\mathbf{R}-\mathbf{r})}\, a_n(\mathbf{r}-\mathbf{R})$
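As a concrete, minimal illustration of how a band emerges from this picture, the one-dimensional, single-orbital, nearest-neighbour tight-binding chain has the dispersion E(k) = eps0 - 2t cos(ka). The Python sketch below uses illustrative parameter values of my own choosing; it is not taken from this article.

import numpy as np

eps0, t, a = 0.0, 1.0, 1.0                      # on-site energy, hopping, lattice constant (assumed)
k = np.linspace(-np.pi / a, np.pi / a, 201)     # wavevectors across the first Brillouin zone
E = eps0 - 2.0 * t * np.cos(k * a)              # single tight-binding band

print("band bottom:", E.min(), "band top:", E.max(), "bandwidth:", E.max() - E.min())
# The bandwidth is 4t: stronger orbital overlap (larger hopping t) gives a wider band.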
Density-functional theory
In present-day physics literature, the large majority of electronic structures and band plots are calculated using density-functional theory (DFT), which is not a model but rather a theory,
i.e. a microscopic first-principle theory of condensed matter physics that tries to cope with the electron-electron many-body problem via the introduction of an exchange-correlation term in the
functional of the electronic density. DFT calculated bands are found in many cases in agreement with experimental measured bands, for example by angle-resolved photoemission spectroscopy (ARPES). In
particular, the band shape seems well reproduced by DFT. But also there are systematic errors of DFT bands with respect to the experiment. In particular, DFT seems to underestimate systematically by
a 30-40% the band gap in insulators and semiconductors.
It must be said that DFT is in principle an exact theory to reproduce and predict ground state properties (e.g. the total energy, the atomic structure, etc.). However DFT is not a theory to address
excited state properties, such as the band plot of a solid that represents the excitation energies of electrons injected or removed from the system. What in literature is quoted as a DFT band plot is
a representation of the DFT Kohn-Sham energies, that is the energies of a fictive non-interacting system, the Kohn-Sham system, which has no physical interpretation at all. The Kohn-Sham electronic
structure must not be confused with the real, quasiparticle electronic structure of a system, and there is no Koopmans' theorem holding for Kohn-Sham energies, unlike Hartree-Fock energies, which can truly be considered as an approximation for quasiparticle energies. Hence in principle DFT is not a band theory, i.e., not a theory suitable for calculating bands and band plots.
Green's function methods and the ab initio GW approximation
To calculate the bands including electron-electron interaction many-body effects, one can resort to so called Green's function methods. Indeed, the knowledge of the Green's function of a system
provides both ground (the total energy) and also excited state observables of the system. The poles of the Green's function are the quasiparticle energies, the bands of a solid. The Green's function
can be calculated by solving the Dyson equation once the self-energy of the system is known. For real systems like solids, the self-energy is a very complex quantity and usually approximations are
needed to solve the problem. One such approximation is the GW approximation, so called from the mathematical form the self-energy takes as the product $\Sigma = GW$ of the Green's function $G$ and the dynamically screened interaction $W$. This approach is more pertinent to addressing the calculation of band plots (and also quantities beyond, such as the spectral function) and can also be formulated
in a completely ab initio way. The GW approximation seems to provide band gaps of insulators and semiconductors in agreement with the experiment and hence to correct the systematic DFT underestimation.
Mott insulators
Although the nearly-free electron approximation is able to describe many properties of electron band structures, one consequence of this theory is that it predicts the same number of electrons in
each unit cell. If the number of electrons is odd, we would then expect that there is an unpaired electron in each unit cell, and thus that the valence band is not fully occupied, making the material
a conductor. However, materials such as CoO that have an odd number of electrons per unit cell are insulators, in direct conflict with this result. This kind of material is known as a Mott insulator,
and requires inclusion of detailed electron-electron interactions (treated only as an averaged effect on the crystal potential in band theory) to explain the discrepancy. The Hubbard model is an
approximate theory that can include these interactions.
Calculating band structures is an important topic in theoretical solid state physics. In addition to the models mentioned above, other models include the following:
• The Kronig-Penney Model, a one-dimensional rectangular well model useful for illustration of band formation. While simple, it predicts many important phenomena, but is not quantitative.
• Bands may also be viewed as the large-scale limit of molecular orbital theory. A solid creates a large number of closely spaced molecular orbitals, which appear as a band.
The band structure has been generalised to wavevectors that are complex numbers, resulting in what is called a complex band structure, which is of interest at surfaces and interfaces.
Each model describes some types of solids very well, and others poorly. The nearly-free electron model works well for metals, but poorly for non-metals. The tight binding model is extremely accurate
for ionic insulators, such as metal halide salts (e.g. NaCl).
Further reading
1. Kotai no denshiron (The theory of electrons in solids), by Hiroyuki Shiba, ISBN 4-621-04135-5
2. Microelectronics, by Jacob Millman and Arvin Gabriel, ISBN 0-07-463736-3, Tata McGraw-Hill Edition.
3. Solid State Physics, by Neil Ashcroft and N. David Mermin, ISBN 0-03-083993-9
4. Elementary Solid State Physics: Principles and Applications, by M. Ali Omar, ISBN 0-20-160733-6
5. Introduction to Solid State Physics by Charles Kittel, ISBN 0-471-41526-X
6. Electronic and Optoelectronic Properties of Semiconductor Structures - Chapter 2 and 3 by Jasprit Singh, ISBN 0-521-82379-X
Elevator/Escalator Study Guide
This study guide is intended to aid applicants who are experienced in the field of electricity /electronics and/or mechanical/hydraulic principles in their preparation for entry-level examinations.
By no means is this study guide intended for individuals who have not completed some form of formal technical training through a trade school, college or military training; nor should this guide be
used as a sole preparation guide for examination purposes. This guide is divided into three sections: electrical/electronics studies, mechanical/hydraulic studies, and sources of reference material.
Electrical / Electronic Studies
Ohm's Law
• Demonstrate the ability to perform calculations to determine the unknown electrical quantity when given two of the fundamental values of electricity.
Fundamental Values of Electricity
• Be well versed with electrical prefixes and have a basic understanding of voltage, current, resistance, and power as well as their units of measurement and abbreviations.
• Be able to calculate electrical power in watts and combine Ohm's Law and Watt's law to find unknown currents, voltages, resistance, and power (a short worked sketch follows this list).
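For example, a short Python sketch (the helper name solve_ohm and the sample numbers are my own, written only as a study aid) that combines Ohm's law (V = IR) with Watt's law (P = VI) to fill in the missing quantities when any two of V, I, and R are known:

def solve_ohm(V=None, I=None, R=None):
    # Supply any two of voltage (volts), current (amps), resistance (ohms).
    if V is None:
        V = I * R
    elif I is None:
        I = V / R
    elif R is None:
        R = V / I
    P = V * I            # Watt's law; equivalently P = I**2 * R = V**2 / R
    return V, I, R, P

print(solve_ohm(V=120, R=60))    # a 120 V source across 60 ohms -> (120, 2.0, 60, 240.0)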
Basic Instrumentation and Measurements
• Demonstrate the ability to use common test instruments as well as interpret scale values on digital meters and interpret linear and nonlinear scales on an analog meter.
• Demonstrate the ability to use an oscilloscope and to interpret a waveform pattern, i.e. determine the voltage and frequency using an oscilloscope display. Understand the terminology associated with
test instruments.
Basic Electrical Circuits
• Be able to identify various types of electrical symbols and common circuit devices.
• Be able to identify various types of resistors and their color code.
• Understand the relationship of cross-sectional area and length of a conductor as they relate the current in a circuit.
• Identify the three basic circuit configurations: series, parallel, and series-parallel, and be able to perform circuit calculations to solve for an unknown electrical quantity, i.e. determine voltage drops, current values, and wattage values (see the sketch after this list).
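A worked illustration in Python (the resistor values and the supply voltage are arbitrary assumptions for the example): resistances add in series, while reciprocals add in parallel.

def series(*rs):
    return sum(rs)

def parallel(*rs):
    return 1.0 / sum(1.0 / r for r in rs)

# 100 ohms in series with the parallel combination of 220 and 330 ohms:
R_eq = series(100, parallel(220, 330))   # 100 + 132 = 232 ohms
V = 12.0                                 # assumed supply voltage
I = V / R_eq                             # total current by Ohm's law
print(round(R_eq, 1), round(I, 4))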
Sources of Electricity
• Understand the differences between primary and secondary cells.
• Distinguish between series and parallel connections.
• Calculate the outputs of batteries in series and parallel.
• Identify other sources of electrical energy.
• Understand the operation of various dc motors.
• Understand the operation of three phase motors.
• Understand the operation of a transformer.
• Identify types of transformer losses.
• Be able to calculate the various values of currents and voltages in transformer circuits.
Alternating Circuits
• Be able to calculate various levels of ac voltage, i.e. peak to peak, rms, average.
• Understand the time relationships of an ac waveform, i.e. quarter-wave, half-wave, full-wave.
• Understand the difference between direct current and alternating current.
• Be familiar with reactive components, i.e. capacitors and inductors, and understand how they respond in both a dc circuit and an ac circuit.
• Be familiar with formulas associated with calculating the transient response time of both an RC and an RL circuit.
• Understand resonant frequency and how it affects various RCL circuits. Calculate a resonant frequency.
• Understand how N-type and P-type materials in a semiconductor conduct electricity.
• Be able to apply the principles of both forward and reverse biasing.
• Identify and understand the operation of various types of semiconductor diodes.
• Understand the operation of a half-wave and full-wave rectifier.
• Understand power supply filtering.
• Identify and understand the operation of the bipolar transistor.
• Identify and understand the operation of several common thyristors.
Digital Circuits
• Convert decimal numbers to their binary equivalents and binary numbers to their decimal equivalents (see the sketch after this list).
• Identify various types of logic gates and their associated truth tables.
• Be able to apply knowledge of basic logic gates to determine the output of a simple logic circuit.
• Understand the difference between digital and analog devices and their signals. Identify different types of logic families.
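The conversions and basic gate behaviour can be checked with a few lines of Python; the helper names below are my own and the snippet is only a study aid, not part of any referenced text.

def dec_to_bin(n):
    bits = ""
    while n:                       # repeated division by 2
        bits = str(n % 2) + bits
        n //= 2
    return bits or "0"

def bin_to_dec(bits):
    return sum(int(b) << i for i, b in enumerate(reversed(bits)))

print(dec_to_bin(45), bin_to_dec("101101"))   # -> 101101 45

# Truth table for two-input AND, OR, and XOR gates
for a in (0, 1):
    for b in (0, 1):
        print(a, b, a & b, a | b, a ^ b)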
Mechanical / Hydraulic Studies
Basic Hydraulics (Fluid Power)
• Have an overall understanding of hydraulic systems.
• Be able to apply the principles of Pascal’s Law in analyzing hydraulic systems (see the sketch after this list).
• Be familiar with Bernoulli’s Principle as it applies to hydraulic systems.
• Understanding the characteristics of hydraulic fluid.
• Distinguish the difference between hydraulic fluid and specific gravity.
• Be aware of the relationship between hydraulic fluid and viscosity.
• Understand how hydraulic pressure is measured.
• Realize the purpose of relief valves in a hydraulic system.
• Realize the purpose of filters in a hydraulic system.
• Be aware of how contaminants can affect a hydraulic system.
• Be familiar with the purpose of reservoirs in a hydraulic system.
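A typical Pascal's Law calculation is the force multiplication of a simple hydraulic press: pressure is transmitted undiminished, so F_out / A_out = F_in / A_in. The piston diameters and applied force in this Python sketch are illustrative assumptions only.

import math

def output_force(F_in, d_in, d_out):
    A_in = math.pi * (d_in / 2.0) ** 2     # area of the input piston
    A_out = math.pi * (d_out / 2.0) ** 2   # area of the output piston
    return F_in * (A_out / A_in)           # equal pressure on both pistons

# 100 N applied to a 2 cm piston driving a 10 cm piston:
print(output_force(100.0, 0.02, 0.10))     # 2500 N, a 25x force multiplication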
Basic Mechanics
• Have an adequate understanding of the laws of mechanics (e.g., friction, equilibrium, inertia)
• Comprehend the principles of applied forces.
• Be familiar with the principles of absolute and atmospheric pressure.
• Possess an understanding of the different types of gears, chain drives, belts, and bearings and their applications.
Reference Materials
• Electricity and Electronics by Howard H. Gerrish, and William E. Dugger, Jr., published by Goodheart-Willcox Company Inc. – ISBN 1-59070-207-7
• Industrial Maintenance by Michael E. Brumbach and Jeffrey A. Clade, published by Thomson-Delmar Learning – ISBN 0-7668-2695-3
• Electrical Motor Controls by Gary Rockis and Glen Mazur, published by American Technical Publishers Inc. - ISBN 0-8269-1207-9
• Solid State Fundamentals by Gary Rockis, published by American Technical Publishers Inc. – ISBN 0-8269-1634-1
point estimation
in statistics, the process of finding an approximate value of some parameter-such as the mean (average)-of a population from random samples of the population. The accuracy of any particular
approximation is not known precisely, though probabilistic statements concerning the accuracy of such numbers as found over many experiments can be constructed. For a contrasting estimation method,
see interval estimation.
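For instance, the sample mean is the usual point estimate of a population mean. The short Python sketch below uses an arbitrary assumed population and sample size purely as an illustration.

import random

random.seed(1)
true_mean = 7.3                                     # unknown in practice; fixed here for comparison
sample = [random.gauss(true_mean, 2.0) for _ in range(50)]

point_estimate = sum(sample) / len(sample)          # sample mean as the point estimate
print(round(point_estimate, 3))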
Charges...please help!!
x: T sinθ + qE = 0
y: T cosθ - mg = 0
I'd write that first equation as:
Tsinθ - qE = 0 (since the force components are in opposite directions)
Realize that E is also a function of q, so rewrite that in terms of k, q, and r (which you figured out).
Otherwise, you are on the right track.
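For completeness, if the setup is the usual one of two identical charged balls (mass m, charge q, separation r, each string at angle θ from the vertical; this is an assumption, since the full problem statement is not quoted here), the corrected equations combine as:

$T\sin\theta = qE = \dfrac{kq^{2}}{r^{2}}, \qquad T\cos\theta = mg$

$\Rightarrow\ \tan\theta = \dfrac{kq^{2}}{mg\,r^{2}} \quad\Rightarrow\quad q = r\sqrt{\dfrac{mg\tan\theta}{k}}$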
Chih-Fang Wang, Sartaj Sahni, "Image Processing on the OTIS-Mesh Optoelectronic Computer," IEEE Transactions on Parallel and Distributed Systems, vol. 11, no. 2, pp. 97-109, February, 2000.
BibTeX:
@article{ 10.1109/71.841747,
author = {Chih-Fang Wang and Sartaj Sahni},
title = {Image Processing on the OTIS-Mesh Optoelectronic Computer},
journal ={IEEE Transactions on Parallel and Distributed Systems},
volume = {11},
number = {2},
issn = {1045-9219},
year = {2000},
pages = {97-109},
doi = {http://doi.ieeecomputersociety.org/10.1109/71.841747},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
}
RefWorks/ProCite/RefMan/EndNote (RIS):
TY - JOUR
JO - IEEE Transactions on Parallel and Distributed Systems
TI - Image Processing on the OTIS-Mesh Optoelectronic Computer
IS - 2
SN - 1045-9219
EPD - 97-109
A1 - Chih-Fang Wang,
A1 - Sartaj Sahni,
PY - 2000
KW - Optoelectronic computer
KW - OTIS-Mesh
KW - image processing
KW - histogramming
KW - histogram modification
KW - Hough transform
KW - image shrinking and expanding.
VL - 11
JA - IEEE Transactions on Parallel and Distributed Systems
ER -
Abstract—We develop algorithms for histogramming, histogram modification, Hough transform, and image shrinking and expanding on an OTIS-Mesh optoelectronic computer. Our algorithm for the Hough
transform is based upon a mesh algorithm for the Hough transform which is also developed in this paper. This new mesh algorithm improves upon the mesh Hough transform algorithms of [4] and [14].
[1] T. Bestul and L.S. Davis, “On computing histograms of images in log n time using fat pyramids,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 11, no. 2, pp. 212-213, 1989.
[2] H.Y.H. Chaung and C.C. Li, “A Systolic Processor for Straight Line Detection by Modified Hough Transform,” IEEE Workshop on Computer Architecture, Pattern Analysis, and Database Management, pp.
300–303, 1985.
[3] A.N. Choudhary and R. Ponnusamy, "Implementation and Evaluation of Hough Transform Algorithms on a Shared-Memory Multiprocessor," J. Parallel and Distributed Computing, vol. 12, pp. 178-188,
[4] R.E. Cypher, J.L.C. Sanz, and L. Snyder, “The Hough Transform Has$o(n)$Complexity on SIMD$n \times n$Mesh Array Architecture,” IEEE 1987 Workshop Computer Architecture for Pattern Analysis and
Machine Intelligence, pp. 115–121, 1987.
[5] M. Feldman, S. Esener, C. Guest, and S. Lee, “Comparison between Electrical and Free-Space Optical Interconnects Based on Power and Speed Considerations,” Applied Optics, vol. 27, no. 9, pp.
1,742–1,751, May 1988.
[6] A. Fisher and P. Highnam, “Computing the Hough Transform on a Scan Line Array Processor,” IEEE 1987 Workshop Computer Architecture for Pattern Analysis and Machine Intelligence, pp. 83–87, 1987.
[7] J. Grinberg, G.R. Nudd, and R. D. Etchells, “A Cellular VLSI Architecture,” Computer, vol. 17, no. 1, pp. 69–81, Jan. 1984.
[8] C. Guerra and S. Hambrusch, “Parallel Algorithms for Line Detection on a Mesh,” IEEE 1987 Workshop Computer Architecture for Pattern Analysis and Machine Intelligence, pp. 99–106, 1987.
[9] W. Hendrick, O. Kibar, P. Marchand, C. Fan, D.V. Blerkom, F. McCormick, I. Cokgor, M. Hansen, and S. Esener, “Modeling and Optimization of the Optical Transpose Interconnection System,”
Optoelectronic Technology Center, Program Review, Cornell Univ., Sept. 1995.
[10] H.A. Ibrahim, J.B. Kender, and D.E. Shaw, “On the Application of Massively Parallel SIMD Tree Machine to Certain Intermediate-Level Vision Tasks,” Computer Vision, Graphics, and Image
Processing, vol. 36, pp. 53–75, 1986.
[11] J. Illingworth and J. Kitter, "A survey of Hough transform," CVGIP, vol. 44, pp. 87-116, 1988.
[12] J. Jang,H. Park,, and V.K. Prasanna,“A fast algorithm for computing histogram on reconfigurable mesh,” Proc. Frontiers of Massively Parallel Computation, pp. 244-251, 1992.
[13] J. Jenq and S. Sahni,“Reconfigurable mesh algorithms for image shrinking, expanding, clustering, and template matching,” Proc. Int’l Parallel Processing Symp., pp. 208-215, 1991.
[14] J. Jenq and S. Sahni,“Reconfigurable mesh algorithms for image shrinking, expanding, clustering, and template matching,” Proc. Int’l Parallel Processing Symp., pp. 208-215, 1991.
[15] J. Jenq and S. Sahni,“Histogramming on a reconfigurable mesh computer,” Proc. Int’l Parallel Processing Symp., pp. 425-432, 1992.
[16] J.-F. Jenq and S. Sahni, “Image Shrinking and Expanding on a Pyramid,” IEEE Trans. Parallel and Distributed Systems, vol. 4, no. 11, pp. 1,291–1,296. Nov. 1993.
[17] C.S. Kannan, H.Y.H. Chuang, “Fast Hough Transform on a Mesh Connected Processor Array,” Information Processing Letters, vol. 33, pp. 243–248, Jan. 1990.
[18] F. Kiamilev, P. Marchand, A. Krishnamoorthy, S. Esener, and S. Lee, “Performance Comparison between Optoelectronic and VLSI Multistage Interconnection Networks,” J. Lightwave Technology, vol. 9,
no. 12, pp. 1,674–1,692, Dec. 1991.
[19] A. Krishnamoorthy, P. Marchand, F. Kiamilev, and S. Esener, “Grain-Size Considerations for Optoelectronic Multistage Interconnection Networks” Applied Optics, vol. 31, no. 26, pp. 5,480–5,507,
Sept. 1992.
[20] D. Krizanc, “Integer Sorting on a Mesh-Connected Array of Processors.” manuscript, 1989.
[21] H. Li, M.A. Lavin, and L.R. Le, “Master Fast Hough Transform: A Hierarchical Approach, Computer Vision, Graphics, and Image Processing, vol. 36, pp. 139–161, Dec. 1986.
[22] H. Li and M. Maresca, "Polymorphic-Torus Architecture for Computer Vision," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 11, no. 3, pp. 233-243, Mar. 1989.
[23] H.F. Li, D. Pao, and R. Jayakumar, “Improvements and Systolic Implementation of the Hough Transform for Straight Line Detection,” Pattern Recognition, vol. 22, no. 6, pp. 697–706, 1989.
[24] M. Maresca, H. Li, and X. Sheng, “Parallel Computer Vision on Polymorphic Torus Architecture,” Int'l J. Computer Vision and Applications, vol. 2, no. 4, 1989.
[25] G.C. Marsden and P.J. Marchand, P. Harvey, and S.C. Esener, “Optical Transpose Interconnection System Architectures,” Optics Letters, vol. 18, no. 13, pp. 1,083–1,085, July 1993.
[26] M. Nigam and S. Sahni, “Sorting$n^2$Numbers on$n \times n$Meshes,” Proc. Seventh Int'l Parallel Processing Symp. (IPPS‘93), pp. 73–78, 1993.
[27] S. Olariu, J.L. Schwing, and J. Zhang, “Computing the Hough Transform on Reconfigurable Meshes,” Proc. Conf. Vision Interface‘92, pp. 169–174, 1992.
[28] S. Pavel and S.G. Akl, “Efficient Algorithms for the Hough Transform on Arrays with Reconfigurable Optical Buses,” Proc. 10th Int'l parallel Processing Symp. (IPPS’96), pp. 697–701, 1996.
[29] T. Pavlidis, Algorithms for Graphics and Image Processing, pp. 199-201 Rockville, Md.: Computer Science Press, 1982.
[30] S. Ranka and S. Shani,Hypercube Algorithms for Image Processing and Pattern Recognition. New York: Springer-Verlag, 1990.
[31] A. Rosenfeld, “A Note on Shrinking and Expanding Operations in Pyramids,” Pattern Recognition Letters, vol. 6, no. 4, pp. 241–244, 1987.
[32] A. Rosenfeld, J. Ornelas, and Y. Hung, "Hough Transform Algorithms for Mesh Connected SIMD Parallel Processors," Computer Vision Graphics and Image Processing, vol. 41, no. 3, pp. 293-305, 1988.
[33] S. Sahni and C.-F. Wang, “BPC Permutations on the OTIS-Mesh Optoelectronic Computer,” Proc. Fourth Int'l Conf. Massively Parallel Processing Using Optical Interconnections (MPPOI '97),
pp. 130-135, 1997.
[34] H.J. Siegel, J. Siegel, F.C. Kemmerer, P.T. Muller, H.E. Smalley, and D.D. Smith, “PASM: A partitionable SIMD/MIMD System for Image Processing and Pattern Recognition,” IEEE Trans. Computers,
vol. 30, no. 12, pp. 934–947, Dec. 1981.
[35] T.M. Silberberg, “The Hough Transform in the Geometric Arithmetic Parallel Processor,” IEEE Workshop Computer Architecture and Image Database Management, pp. 387–391, 1985.
[36] S.L. Tanimoto, “Sorting, Histogramming, and Other Statistical Operations on a Pyramid Machine,” Multiresolution Image Processing and Analysis, A. Rosenfeld, ed. New York: Springer-Verlag, pp.
136–145, 1984.
[37] M.J. Thazhuthaveetil, A.V. Shah, “Parallel Hough Transform Algorithm Performance,” Image and Vision Computing, vol. 9, no. 2, pp. 88–92, 1991.
[38] C.-F. Wang and S. Sahni, “Basic Operations on the OTIS-Mesh Optoelectronic Computer,” IEEE Trans. Parallel and Distributed Systems, vol. 9, no. 12, pp. 1226-1236, Dec. 1998.
[39] C.-F. Wang and S. Sahni, “Matrix Multiplication on the OTIS-Mesh Optoelectronic Computer,” technical report, CISE Department, Univ. of Florida, Gainesville, Fla., 1998.
[40] M. Yasrebi and S. Deshpande, J.C. Browne, “A Comparison of Circuit Switching and Packet Switching Data Transfer Using Two Simple Image Processing Algorithms,” Proc. 1983 Int'l Conf. Parallel
Processing, pp. 25–28, 1983.
[41] F. Zane, P. Marchand, R. Paturi, and S. Esener, “Scalable Network Architectures Using the Optical Transpose Interconnection System (OTIS),” Proc. Second Int'l Conf. Massively Parallel Processing
Using Optical Interconnections (MPPOI '96), pp. 114-121, 1996.
Index Terms:
Optoelectronic computer, OTIS-Mesh, image processing, histogramming, histogram modification, Hough transform, image shrinking and expanding.
Chih-Fang Wang, Sartaj Sahni, "Image Processing on the OTIS-Mesh Optoelectronic Computer," IEEE Transactions on Parallel and Distributed Systems, vol. 11, no. 2, pp. 97-109, Feb. 2000, doi:10.1109/71.841747.
Discrete Dynamics in Nature and Society
Volume 2013 (2013), Article ID 308024, 12 pages
Research Article
Generalized Antiperiodic Boundary Value Problems for the Fractional Differential Equation with p-Laplacian Operator
^1Department of Mathematics, Zhengzhou University, Zhengzhou, Henan 450001, China
^2Department of Mathematics and Physics, Anyang Institute of Technology, Anyang, Henan 455000, China
Received 3 February 2013; Accepted 12 March 2013
Academic Editor: Hua Su
Copyright © 2013 Zhi-Wei Lv and Xu-Dong Zheng. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and
reproduction in any medium, provided the original work is properly cited.
We discuss the existence of solutions for generalized antiperiodic boundary value problems for the fractional differential equation with p-Laplacian operator , , , , , , where is the Caputo
fractional derivative, , , , and , , , . Our results are based on fixed point theorem and contraction mapping principle. Furthermore, three examples are also given to illustrate the results.
1. Introduction
Fractional differential equations arise in various areas of science and engineering, such as physics, mechanics, chemistry, and engineering. The fractional order models become more realistic and
practical than the classical integer models. Due to their applications, fractional differential equations have gained considerable attentions; one can see [1–14] and references therein.
Anti-periodic boundary value problems occur in the mathematical modeling of a variety of physical processes. Anti-periodic problems constitute an important class of boundary value problems and have
received considerable attention (see [15–19]).
In [20], Zhang considered the existence and multiplicity results of positive solutions for the following boundary value problem of fractional differential equation: where is a real number, is the
Caputo fractional derivative, and is continuous.
In [15], the authors discussed some existence results for the following anti-periodic boundary value problem for fractional differential equations: where is the Caputo fractional derivative of order
; is a given continuous function.
In [16], the authors investigated the following anti-periodic boundary value problem for higher-order fractional differential equations: where is the Caputo fractional derivative of order ; is a
given continuous function.
In [17], the authors investigated a class of anti-periodic boundary value problem of fractional differential equations where is the Caputo fractional derivative of order ; is a given continuous
In this paper, we discuss the existence of solutions about generalized anti-periodic boundary value problems for the fractional differential equation with p-Laplacian operator where is the Caputo
fractional derivative, , , , , and , , , .
If we take , and , then the problem (5) becomes the problem studied in [17]. In this paper, we let .
This paper is organized as follows. In Section 2, we present some background materials and preliminaries. Section 3 deals with some existence results. In Section 4, three examples are given to
illustrate the results.
2. Background Materials and Preliminaries
Definition 1 (see [21]). The fractional integral of order with the lower limit for a function is defined as where is the gamma function.
Definition 2 (see [21]). Caputo's derivative of order with the lower limit for a function can be written as
Lemma 3 (see [22]). Assume that with a fractional derivative of order that belongs to . Then where , , .
Lemma 4. Let . Then the fractional differential equation has a unique solution which is given by
Proof. From Lemma 3, we have Thus, By , we have Using the boundary condition and (13), we obtain Thus,
3. Main Results
Let denote the Banach space of continuous functions and from endowed with the norm defined by where Define an operator as From (18), we conclude that Then (5) has a solution if and only if the
operator has a fixed point.
Theorem 5. Let be continuous. Assume that meets the following condition: there exist , such that Then the problem (5) has at least one solution on for
Proof. From , we know that is continuous.
Let For , we have This, together with (21) and (22), yields that Hence, is uniformly bounded.
Next we show that is equicontinuous.
For any , , we have Thus, we conclude that is equicontinuous on , and By Schauder fixed point theorem we know that there exists a solution for the boundary value problem (5).
Theorem 6. Let be continuous. Assume that meets the following condition: there exist , such that Then the problem (5) has a unique solution on for any .
Proof. From (18) and (19), we have, for , , Thus, It follows from (29) that is a contraction. Thus, the conclusion of the theorem follows from the contraction mapping principle.
Theorem 7. Let . Assume that meets the following condition: there exist , , such that Then the problem (5) has unique solution on for
Proof. Let where By (18) and (19), we have, for , This, together with (36), yields that Hence, In view of , we have . Thus, by the following property of p-Laplacian operator:
if , , , then ; we have, for , Thus, It follows from (33) that is a contraction. Thus, the conclusion of the theorem follows from the contraction mapping principle.
4. Examples
Example 8. Consider the following boundary value problem: where Let By computation, we deduce that Thus, let ; we have Hence, by Theorem 5, BVP (42) has at least one solution for .
Example 9. Consider the following boundary value problem: where Let By computation, we deduce that Let Thus,
A peculiar effect of Einstein's postulates is the transformation that connects space-time in two inertial frames. Such transformations are called Lorentz transformations.
The standard Lorentz transformation in the x direction is (for reference also the classical Galilei transformation is included):
Lorentz transformation (special relativity):
$x' = \gamma (x - vt), \quad y' = y, \quad z' = z, \quad t' = \gamma \left( t - \frac{vx}{c^{2}} \right)$
Galilei transformation (classical Newtonian mechanics):
$x' = x - vt, \quad y' = y, \quad z' = z, \quad t' = t$
where $\gamma = 1/\sqrt{1 - v^{2}/c^{2}}$ is the Lorentz factor. Note that the spatial coordinates (y and z) perpendicular to the direction of motion (x) are unchanged. In the classical limit ($v \ll c$, so $\gamma \to 1$) the Lorentz transformation reduces to the Galilei transformation; unlike in Newtonian mechanics, time is relative in special relativity.
Directly from Lorentz transformations, one obtains the concepts of length contraction, time dilation, relativistic Doppler effect, and relativistic addition of velocities.
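A quick numerical check of the transformation in Python (the speed 0.8c and the event coordinates are arbitrary illustrative values, not part of the original text):

import math

def lorentz(x, t, v, c=299792458.0):
    # Coordinates (x', t') of the event (x, t) in a frame moving at speed v along +x.
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return gamma * (x - v * t), gamma * (t - v * x / c ** 2)

# A clock at rest at x = 0 ticking at t = 1 s, viewed from a frame moving at 0.8c:
# the transformed time is gamma * 1 s, i.e. about 1.667 s (time dilation).
print(lorentz(0.0, 1.0, 0.8 * 299792458.0))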
Wayland, MA Algebra 2 Tutor
Find a Wayland, MA Algebra 2 Tutor
...Most importantly, I have built understanding of AD/HD and skills in helping students with the condition via working closely with dozens of students with AD/HD -- and their families -- in
public and private classroom and tutoring settings over the past 13 years. I received formal training in dysl...
26 Subjects: including algebra 2, Spanish, reading, English
...I have also been successfully tutoring high school and college students in chemistry, physics, computer programming, and topics in mathematics from pre-algebra to statistics and advanced
calculus, as well as SSAT, SAT, and ACT test preparation for over ten years. I can help you, too. References...
33 Subjects: including algebra 2, chemistry, physics, calculus
...A comprehensive study of the Bible involves investigating many of the themes outlined through the scriptures. It will cover highlights of each of the 66 books of the Bible. It will include a
close examination of why the ransom was necessary as well as the many prophecies beginning with the first prophecy found in Genesis and down through Revelations.
38 Subjects: including algebra 2, reading, calculus, English
...For example, I can illustrate 'word problems' and also 'ratio problems' with charts and graphs that the student can use to understand the reasoning behind these 'word problems'. I think that
it is important to teach not only the techniques needed for prealgebra, but also help form the foundation for the math that the student will encounter next year, as well. I work with Java all
17 Subjects: including algebra 2, physics, algebra 1, C
I am an experience math tutor for students in middle school through college. I have an PhD in Applied Math from UC Berkeley and have been tutoring students part time in the last four years. I
enjoy working with students who are motivated but need a little help to understand the subject at hand.
11 Subjects: including algebra 2, calculus, geometry, algebra 1
Related Wayland, MA Tutors
Wayland, MA Accounting Tutors
Wayland, MA ACT Tutors
Wayland, MA Algebra Tutors
Wayland, MA Algebra 2 Tutors
Wayland, MA Calculus Tutors
Wayland, MA Geometry Tutors
Wayland, MA Math Tutors
Wayland, MA Prealgebra Tutors
Wayland, MA Precalculus Tutors
Wayland, MA SAT Tutors
Wayland, MA SAT Math Tutors
Wayland, MA Science Tutors
Wayland, MA Statistics Tutors
Wayland, MA Trigonometry Tutors
Nearby Cities With algebra 2 Tutor
Ashland, MA algebra 2 Tutors
Auburndale, MA algebra 2 Tutors
Concord, MA algebra 2 Tutors
Holliston algebra 2 Tutors
Lincoln Center, MA algebra 2 Tutors
Lincoln, MA algebra 2 Tutors
Maynard, MA algebra 2 Tutors
Needham Jct, MA algebra 2 Tutors
Newtonville, MA algebra 2 Tutors
Southboro, MA algebra 2 Tutors
Southborough algebra 2 Tutors
Sudbury algebra 2 Tutors
Wellesley algebra 2 Tutors
Wellesley Hills algebra 2 Tutors
Weston, MA algebra 2 Tutors
Pokemon Jeopardy [Archive] - The PokéCommunity Forums
March 30th, 2005, 07:27 PM
I'll allow four people to join this game at a time. If you've watched the Jeopardy TV show, you probably know how this will work. The first question will be worth 100 points. The first person to answer correctly will get the amount of points the question is worth and choose how much the next question will be worth. Keep in mind that the higher the number, the harder the question. If a person or people answer the question wrong before someone answers it correctly, they will lose the amount of points the question is worth. After the board is cleared, there will be a Final Jeopardy where the players will stake a certain amount of points (if you have a negative amount of points, you will be unable to participate in Final Jeopardy). I will ask a question and reveal the correct answer once everyone has answered it. If you answer the question correctly, you get the points you staked. If you answered incorrectly, you lose the points you staked. The person with the highest score wins the game and a new round will begin with new people. You can't participate in more than one game in a row so more people get a chance to play. Once four people have signed up to play, I will post the first question.
|
{"url":"http://www.pokecommunity.com/archive/index.php/t-35015.html","timestamp":"2014-04-25T04:23:12Z","content_type":null,"content_length":"4496","record_id":"<urn:uuid:83231bdf-4516-4729-aab4-5f25dd0d2fd4>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00260-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MATLAB Primer, Seventh Edition
Provides a quick, straightforward, hands-on guide to MATLAB 7.0
Explains how to call Fortran and Java routines from MATLAB
Demonstrates and explains cell publishing, which enables MATLAB results to be viewed in HTML, LaTeX, and Microsoft Word and PowerPoint
Offers tools to help write M-Files that include Help and Content Reports, File Comparison, and Coverage Report
Updates the material on plotting to reflect the many changes in MATLAB 7.0
Includes expanded examples of M-files and MEX-files available at the author's Web site and the CRC Web site
With the spread of the powerhouse MATLAB® software into nearly every area of math, science, and engineering, it is important to have a strong introduction to using the software. Updated for version
7.0, MATLAB® Primer, Seventh Edition offers such an introduction as well as a "pocketbook" reference for everyday users of the software. It offers an intuitive language for expressing problems and
solutions both numerically and graphically.
The latest edition in this best-selling series, MATLAB® Primer, Seventh Edition incorporates a number of enhancements such as changes to the desktop, new features for developing M-files, the JIT
accelerator, and an easier way of importing Java classes. In addition to the features new to version 7.0, this book includes:
A new section on M-Lint, the new debugger for M-files
A new chapter on calling Java from MATLAB and using Java objects inside the MATLAB workspace
A new chapter on calling Fortran from MATLAB
A new chapter on solving equations: symbolic and numeric polynomials, nonlinear equations, and differential equations
A new chapter on cell publishing, which replaces the "notebook" feature and allows the creation of Word, LaTeX, PowerPoint, and HTML documents with executable MATLAB commands and their outputs
Expanded Graphics coverage-including the 3D parametrically defined seashells on the front and back covers
Whether you are new to MATLAB, new to version 7.0, or simply in need of a hands-on, to-the-point reference, MATLAB® Primer provides the tools you need in a conveniently sized, economically priced
Table of Contents
Help Window
Start Button
Command Window
Workspace Window
Command History Window
Array Editor Window
Current Directory Window
Referencing Individual Entries
Matrix Operators
Matrix Division (Slash and Backslash)
Entry-Wise Operators
Relational Operators
Complex Numbers
Other Data Types
Generating Vectors
Accessing Submatrices
Constructing Matrices
Scalar Functions
Vector Functions and Data Analysis
Matrix Functions
The linsolve Function
The find Function
The for Loop
The while Loop
The switch Statement
The try/catch Statement
Matrix Expressions (if and while)
Infinite Loops
M-File Editor/Debugger Window
Script Files
Function Files
Multiple Inputs and Outputs
Variable Arguments
Comments and Documentation
MATLAB's Path
Function Handles and Anonymous Functions
Name Resolution
Error and Warning Messages
User Input
Performance Measures
Efficient Code
A Simple Example
C Versus MATLAB Arrays
A Matrix Computation in C
MATLAB mx and mex Routines
Online Help for MEX Routines
Larger Examples on the Web
Solving a Transposed System
A Fortran mexFunction with %val
If You Cannot Use %val
A Simple Example
MATLAB's Java Class Path
Calling Your Own Java Methods
Loading a URL as a Matrix
Planar Plots
Multiple Figures
Graph of a Function
Parametrically Defined Curves
Titles, Labels, and Text in a Graph
Control of Axes and Scaling
Multiple Plots
Line Types, Marker Types, Colors
Subplots and Specialized Plots
Graphics Hard Copy
Curve Plots
Mesh and Surface Plots
Parametrically Defined Surfaces
Volume and Vector Visualization
Color Shading and Color Profile
Perspective of View
Handle Graphics
Graphical User Interface
Storage Modes
Generating Sparse Matrices
Computation with Sparse Matrices
Ordering Methods
Visualizing Matrices
Symbolic Variables
Variable Precision Arithmetic
Numeric and Symbolic Substitution
Algebraic Simplification
Two-Dimensional Graphs
Three Dimensional Surface Graphs
Three-Dimensional Curves
Symbolic Matrix Operations
Symbolic Linear Algebraic Functions
Solving Algebraic Equations
Solving Differential Equations
Further Maple Access
Representing Polynomials
Evaluating Polynomials
Polynomial Interpolation
Numeric Integration (Quadrature)
Symbolic Equations
Linear Systems of Equations
Polynomial Roots
Nonlinear Equations
Ordinary Differential Equations
Other Differential Equations
M-Lint Code Check Report
TODO/FIXME Report
Help Report
Dependency Report
File Comparison Report
Profile and Coverage Report
General Purpose Commands
Operators and Special Characters
Programming Language Constructs
Elementary Matrices and Matrix Manipulation
Elementary Math Functions
Specialized Math Functions
Matrix Functions-Numerical Linear Algebra
Data Analysis, Fourier Transforms
Interpolation and Polynomials
Function Functions and ODEs
Sparse Matrices
Annotation and Plot Editing
Two-Dimensional Graphs
Three-Dimensional Graphs
Specialized Graphs
Handle Graphics
Graphical User Interface Tools
Character Strings
Image and Scientific Data
File Input/Output
Audio and Video Support
Time and Dates
Data Types and Structures
Version Control
Creating and Debugging Code
Help Commands
Microsoft Windows Functions
Examples and Demonstrations
Symbolic Math Toolbox
Editorial Reviews
"I believe that the beginners and casual readers of MATLAB will find the little book very helpful as a companion when trying to run the software. …The convenient size makes it a nice book to have
around, for quick help on usage of a command or a little intro on some new topic."
-Álvaro Lozano-Robledo, MAA Reviews
|
{"url":"http://www.crcpress.com/product/isbn/9781584885238","timestamp":"2014-04-18T03:07:54Z","content_type":null,"content_length":"111685","record_id":"<urn:uuid:00235e09-ac5b-43ca-973a-ebc25eeee81d>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00584-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Combinations Question
September 30th 2008, 01:32 PM #1
Junior Member
Nov 2006
Combinations Question
Just wondered if someone could help me with this...
There are 52 white keys on a piano. The lowest key is A. The keys are designated A, B, C, D, E, F, and G in succession, and then the sequence of letters repeats, ending with a C for the highest key.
a) If five notes are played simultaneously, in how many ways could all the notes be:
i) As?
ii) Gs?
iii) the same letter
iv) different letters
b) If the five keys are played in order, how would your answers in a) change?
Okay, so, I've figured out i, ii, and iii for part a, I think..
i) ${8 \choose 5}$ (because A is one of the keys that are repeated at the end)
ii) ${7 \choose 5}$
iii) ${3}{8 \choose 5} + {4}{7 \choose 5}$
And for part b, obviously you just do the math as permutations rather than combinations.
But for Part a, iv), I'm confused about how I should answer. If all they care about is the 7 different letter names, then the answer would obviously be ${7 \choose 5}$. But this wouldn't take into consideration that you could be playing an A in one octave and a B in another octave. A lower B does not sound the same as a higher B. Is this what the question is indeed looking for? And if so, how would I solve that?
You are correct for the questions you answered.
For the last question, you do have to take into account keys being played in different octaves. You have to break up the cases this way:
1. All three of A, B, C are played and two others
2. Two of A, B, C, are played and three others
3. One of A, B, C are played and four others
For 1): $8^3 \cdot 7^2 \cdot {3 \choose 3} \cdot {4 \choose 2}$ cases
For 2): $8^2 \cdot 7^3 \cdot {3 \choose 2} \cdot {4 \choose 3}$ cases
For 3): $8 \cdot 7^4 \cdot {3 \choose 1} \cdot {4 \choose 4}$ cases
The answer is the sum of these three.
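If you want to sanity-check the case analysis, a quick brute-force count agrees with the sum of the three cases. This is only a sketch in Python; the key labelling below simply follows the 52-white-key description in the original question, and the enumeration counts 5-key chords whose letter names are all different:

    from itertools import combinations
    from math import comb

    letters = "ABCDEFG"
    keys = [letters[i % 7] for i in range(52)]   # 52 white keys, lowest is A, highest works out to C

    # Brute force: count 5-key chords whose five letter names are all different
    brute = sum(1 for chord in combinations(range(52), 5)
                if len({keys[i] for i in chord}) == 5)

    # The case-by-case formula from the post above
    formula = (8**3 * 7**2 * comb(3, 3) * comb(4, 2)
               + 8**2 * 7**3 * comb(3, 2) * comb(4, 3)
               + 8**1 * 7**4 * comb(3, 1) * comb(4, 4))

    print(brute, formula)   # both come out to 471576

The enumeration takes a few seconds (52 choose 5 is about 2.6 million chords), but it is a handy independent check that the octave counting in each case is right.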
September 30th 2008, 01:50 PM #2
MHF Contributor
Apr 2008
|
{"url":"http://mathhelpforum.com/statistics/51380-combinations-qusetipm.html","timestamp":"2014-04-16T07:24:56Z","content_type":null,"content_length":"34309","record_id":"<urn:uuid:e0773074-fa03-4dbf-84b1-740e06ee861d>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00525-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Physics Forums - View Single Post - proving a set in closed and nowhere dense
Right! To be more specific, are we letting epsilon be a/2? Also, for the nowhere dense part: there are many theorems in my book for proving that something is nowhere dense. For instance, we have "a subset H is nowhere dense iff int(cl(H)) = empty set, or iff T − cl(H) is dense." Which should I use, or do you suggest using another theorem?
|
{"url":"http://www.physicsforums.com/showpost.php?p=3388338&postcount=7","timestamp":"2014-04-19T04:46:45Z","content_type":null,"content_length":"7003","record_id":"<urn:uuid:0df9c60e-84c8-494d-943b-cfaf8572a54d>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00161-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Homework Help
5th grade Science
Wednesday, November 17, 2010 at 5:21pm
5th grade math
8 2/4 devided by half = 4 1/2
Wednesday, November 17, 2010 at 3:32pm
5th grade social studies
what did the europeans and the africans traded
Tuesday, November 16, 2010 at 8:57pm
5th grade math
From the data given, you are right.
Tuesday, November 16, 2010 at 1:00am
5th grade science
blank: a type of material that tries to prevent the flow of thermal energy.
Monday, November 15, 2010 at 8:53pm
5th grade
A community's . . . See my other post for an explanation.
Sunday, November 14, 2010 at 7:10pm
5th grade
Possessive nouns A communities' elected officials were often members of the gentry.
Sunday, November 14, 2010 at 6:55pm
5th grade math
Sunday, November 14, 2010 at 4:04pm
5th grade
What is 5 X 2? Sra
Thursday, November 11, 2010 at 9:15pm
5th grade
what is 1/5 of 10
Thursday, November 11, 2010 at 9:14pm
5th grade
if 5/17 of region is shaded what part is not?
Thursday, November 11, 2010 at 8:23pm
5th grade
If two sides have the same length, the triangle is isosceles.
Thursday, November 11, 2010 at 7:45pm
5th grade math
Thursday, November 11, 2010 at 6:52pm
5th grade
Explain the type of solution formed when steam from boiling water evaporates into the air
Wednesday, November 10, 2010 at 11:20pm
math 5TH GRADE
its a multiple choice question i dont understand it i have to chose from a.36,b26,c22,d3
Wednesday, November 10, 2010 at 8:19pm
math 5TH GRADE
How long is the rectangle?
Wednesday, November 10, 2010 at 8:18pm
5th grade
Tuesday, November 9, 2010 at 6:57pm
5th grade
ohhh wait never mindd..my baadd lol
Sunday, November 7, 2010 at 11:31pm
5th grade
just keep adding 6 until u get to 87
Sunday, November 7, 2010 at 11:30pm
5th grade
Sunday, November 7, 2010 at 10:55pm
5th grade
find the missing sum 3+9+15+....+87
Sunday, November 7, 2010 at 10:36pm
5th grade social studies
describe the geographical features of your community or region. How might they have influenced people to settle there?
Sunday, November 7, 2010 at 6:26pm
5th grade
what are 3 types of engergy collected by telescopes?
Sunday, November 7, 2010 at 3:02pm
5th grade
Saturday, November 6, 2010 at 8:03pm
5th grade English
yes it does
Thursday, November 4, 2010 at 10:45pm
science 5th grade
what steps does the body take to digest milk thanks
Thursday, November 4, 2010 at 9:40pm
5th grade math
Thursday, November 4, 2010 at 6:56pm
5th grade Math
Thursday, November 4, 2010 at 5:45pm
5th grade
sorry that didint work well
Thursday, November 4, 2010 at 12:00am
5th grade
its hard to exsplane without signs google long division
Wednesday, November 3, 2010 at 11:13pm
5th grade
what shape can fold into 50 different ways and still be equal on both sides?
Wednesday, November 3, 2010 at 9:18pm
5th grade
Wednesday, November 3, 2010 at 5:55pm
5th grade
Continental Congress
Wednesday, November 3, 2010 at 4:04pm
5th grade
what is the first legislative body in America giving the settlers the opportunity to control their own, then it has a blank.
Wednesday, November 3, 2010 at 3:51pm
5th grade
kompozizyon yazma
Wednesday, November 3, 2010 at 1:10pm
5th grade
Wednesday, November 3, 2010 at 1:10pm
math 5TH GRADE
fifth graders have the most students playing sports after school.
Wednesday, November 3, 2010 at 12:01am
math 5TH GRADE
At Morris Elementary there are 45 students in each grade, four through six. In the fourth grade, 19 participate in sports after school. Two out of every six fifth graders play sports after school. In the sixth-grade class, seven of every ten students are not playing sports. ...
Tuesday, November 2, 2010 at 11:56pm
5th grade
Pat, I already told you. The first one should be 700,000.
Tuesday, November 2, 2010 at 11:36pm
math 5th grade
Jill, they can all be divided by 5.
Tuesday, November 2, 2010 at 11:35pm
5th grade
how does a double line graph looks
Tuesday, November 2, 2010 at 11:22pm
math 5TH GRADE
how many boxes can you find that will hold two times as many cubes as a 2 by 3 by 4 box. Record each of the dimensions
Monday, November 1, 2010 at 6:08pm
5th grade math
3x=x with x= the quantity the brother has
Monday, November 1, 2010 at 3:46pm
5th grade math
i have to write an algebraic expression for the following: jasmine has three times as many chores as her younger brother does.
Monday, November 1, 2010 at 3:41pm
5th grade
2 22 12 132 122 1342? What is next and what is the pattern
Friday, October 29, 2010 at 2:13pm
5th grade
What do enzymes do in digestion?
Friday, October 29, 2010 at 5:42am
5th grade science
Its Important because your fat ahahhahahah
Thursday, October 28, 2010 at 9:59pm
5th grade math
please repost your question,, something that we can understand. or just describe the figure.
Thursday, October 28, 2010 at 12:40am
5th grade
220 angle duh im 15
Wednesday, October 27, 2010 at 7:50pm
5th grade social studies
wwhat are some responsibilities of citizens
Wednesday, October 27, 2010 at 2:54pm
5th grade Math
help in division plis help help help help
Wednesday, October 27, 2010 at 8:48am
math 5th grade
Monday, October 25, 2010 at 11:35pm
5th grade
Sunday, October 24, 2010 at 8:47pm
are 5th grade students allowed to ask questions??????
Sunday, October 24, 2010 at 3:28pm
5th grade
how I can do a essay for Salvador Dali ?
Friday, October 22, 2010 at 5:06pm
5th grade math
46.93 You have 4 tens (40), 6 ones (6), 9 tenths (.9), and 3 hundredths (.03).
Thursday, October 21, 2010 at 11:53pm
5th grade math
Jen, i think is 34.96 my answer to the last question.
Thursday, October 21, 2010 at 11:49pm
5th grade
Please ask your questions clearly. I do not know exactly what you need. Sra
Thursday, October 21, 2010 at 7:34pm
5th grade
comparing and ordering fractions: books never written
Thursday, October 21, 2010 at 5:15pm
5th grade math
what are 5 names for 15 using the distributive property
Thursday, October 21, 2010 at 2:03pm
5th grade math
C 87, because 522/6 is about 540/6=90
Thursday, October 21, 2010 at 12:31pm
5th grade math
Divide. Do it in your head. Let us know what you think.
Thursday, October 21, 2010 at 7:31am
5th grade
how many inches are in 15 yards?
Wednesday, October 20, 2010 at 8:53pm
5th grade
how many inches are in 15 yards?
Wednesday, October 20, 2010 at 8:53pm
5th grade math
Wednesday, October 20, 2010 at 8:38pm
5th grade
how did you get the other random numbers?
Tuesday, October 19, 2010 at 5:31pm
5th grade
Our life on Earth is a short period, we cannot "do over". Once time has passed, it is gone.
Tuesday, October 19, 2010 at 9:42am
5th grade
what are 4 possible tune-ups to help you keep a postive attitude?
Monday, October 18, 2010 at 7:51pm
5th grade
how to keep a watershed healthy?
Monday, October 18, 2010 at 5:53pm
5th grade English
Right on the first sentence. What is your answer for the second?
Sunday, October 17, 2010 at 11:43am
5th grade
What pronoun do you see in that sentence? How is it used?
Sunday, October 17, 2010 at 11:41am
5th grade
what is 16/5723
Thursday, October 14, 2010 at 10:29pm
5th grade
Thursday, October 14, 2010 at 9:45pm
5th grade
88.8 or if you round 89
Thursday, October 14, 2010 at 5:44pm
5th grade
What do you think 24*3.7 is ???
Thursday, October 14, 2010 at 3:58pm
5th grade
You have to ask specific questions.
Thursday, October 14, 2010 at 2:55pm
5th grade
addition subtraction multiplication and division
Thursday, October 14, 2010 at 2:50pm
5th grade English
Yes, you are correct about both parts of your assignment for this sentence.
Thursday, October 14, 2010 at 9:11am
5th Grade Math
43, 59, 73
Wednesday, October 13, 2010 at 6:26pm
5th Grade Math
Which ones are prime numbers. 43, 87, 59, and/or 73.
Wednesday, October 13, 2010 at 4:17pm
5th grade math
Divide both numbers by 5. 30/100 = x/20 ??
Wednesday, October 13, 2010 at 12:22am
5th grade math
I don't get how to express the fraction 30/100 in 20ths.
Wednesday, October 13, 2010 at 12:16am
5th grade math
make it 3 high, 4 wide, and 12 long.
Tuesday, October 12, 2010 at 9:22pm
5th grade math
how do you draw a centimeter box with the dimensions 3 by 4 by 12 centimers which = 144 cubes?
Tuesday, October 12, 2010 at 8:51pm
5th grade
what is the lcm of 14,20
Tuesday, October 12, 2010 at 8:37pm
5th grade
what is the lcm of 15,25
Tuesday, October 12, 2010 at 8:20pm
5th grade
what is the lcm of 15,25
Tuesday, October 12, 2010 at 8:18pm
5th grade
what is the lcm of 15,25
Tuesday, October 12, 2010 at 8:17pm
5th grade
what is the lcm of 6,8
Tuesday, October 12, 2010 at 8:15pm
5th grade
Thursday, October 7, 2010 at 11:01pm
5th grade
Yes, mix sugar, water, and carbon dioxide. They all can exist in solution (liquid) together, as in soda pop.
Thursday, October 7, 2010 at 8:43pm
5th grade
can a solution be made by mixing a solid, liquid and a gas?
Thursday, October 7, 2010 at 8:39pm
5th grade math
how many boxes can you find that will hold two times as many cubes as a 2 by 3 by 4 box.
Thursday, October 7, 2010 at 5:25pm
5th grade math
MAKE A SET OF 12 NUMBERS WITH THE FOLLOWING LANDMARKS MAX: 8 RANGE: 6 MODE: 6 MEDIAN: 5
Wednesday, October 6, 2010 at 11:50pm
math 5th grade
(17-A)SQUARED=2209 SOLVE FOR A
Wednesday, October 6, 2010 at 10:46pm
5th grade
Wednesday, October 6, 2010 at 7:46pm
5th grade
What is your question?
Tuesday, October 5, 2010 at 5:35pm
5th grade
91 .12
Tuesday, October 5, 2010 at 5:30pm
5th grade math
1, 2, 5, 7, 10, 11, 13, 14, 17, 19, . . . Can you figure out the rest of the numbers?
Tuesday, October 5, 2010 at 4:30pm
|
{"url":"http://www.jiskha.com/5th_grade/?page=10","timestamp":"2014-04-19T21:46:03Z","content_type":null,"content_length":"23298","record_id":"<urn:uuid:0c7a8c1e-9d33-4527-b415-7d1c879eafb0>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00144-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Complex graphing
January 1st 2010, 06:31 PM
Complex graphing
I read about complex graphs on Google and the results actually seem interesting, but I can't seem to understand them...
As far as I understood, the colors are simply there to make the plot easier to read, but I can't see how the "null" points form, or where.
Can someone please point to a good start for these things?
Also can someone recommend a easy-to-use application for drawing these graphs?
January 2nd 2010, 03:52 AM
January 2nd 2010, 07:51 AM
Well, OK, finding a graphing application won't be a problem; I did manage to find a good one that lets me draw the basics, but I still can't understand them.
For example, f(x) = x^2 and f(z) = z^2:
f(x) looks the way it should, but f(z) looks like some inverse modulus function (|x^2|), so I can't seem to figure it out.
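Not a recommendation for a specific application, but here is a rough sketch of how these pictures are usually produced (Python with NumPy and Matplotlib; the grid size and colour mapping are just illustrative choices). Hue tracks the argument of f(z) and brightness tracks the modulus, so the zeros (the "null" points) show up as dark spots:

    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib.colors import hsv_to_rgb

    # Sample f(z) = z**2 on a square grid in the complex plane
    x = np.linspace(-2, 2, 400)
    X, Y = np.meshgrid(x, x)
    W = (X + 1j * Y) ** 2

    hue = (np.angle(W) / (2 * np.pi)) % 1.0     # phase of f(z) -> colour around the wheel
    val = 1.0 - 1.0 / (1.0 + np.abs(W))         # modulus of f(z) -> brightness (zeros come out black)
    rgb = hsv_to_rgb(np.dstack([hue, np.ones_like(hue), val]))

    plt.imshow(rgb, extent=(-2, 2, -2, 2), origin="lower")
    plt.title("domain colouring of f(z) = z^2")
    plt.show()

With f(z) = z^2 the colour wheel winds around the origin twice, which is exactly the behaviour that is hard to read off a plain plot of |f(z)|.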
|
{"url":"http://mathhelpforum.com/math-topics/122146-complex-graphing-print.html","timestamp":"2014-04-21T02:10:51Z","content_type":null,"content_length":"4447","record_id":"<urn:uuid:0bbce8c5-89e3-4ad8-aaa0-d3dc7712c967>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00016-ip-10-147-4-33.ec2.internal.warc.gz"}
|
, 2000
"... The modern era of interior-point methods dates to 1984, when Karmarkar proposed his algorithm for linear programming. In the years since then, algorithms and software for linear programming have
become quite sophisticated, while extensions to more general classes of problems, such as convex quadrati ..."
Cited by 463 (16 self)
The modern era of interior-point methods dates to 1984, when Karmarkar proposed his algorithm for linear programming. In the years since then, algorithms and software for linear programming have
become quite sophisticated, while extensions to more general classes of problems, such as convex quadratic programming, semidefinite programming, and nonconvex and nonlinear problems, have reached
varying levels of maturity. We review some of the key developments in the area, including comments on both the complexity theory and practical algorithms for linear programming, semidefinite
programming, monotone linear complementarity, and convex programming over sets that can be characterized by self-concordant barrier functions.
"... We perform a smoothed analysis of a termination phase for linear programming algorithms. By combining this analysis with the smoothed analysis of Renegar’s condition number by Dunagan, Spielman
and Teng ..."
Cited by 23 (4 self)
We perform a smoothed analysis of a termination phase for linear programming algorithms. By combining this analysis with the smoothed analysis of Renegar’s condition number by Dunagan, Spielman and
, 2003
"... We perform a smoothed analysis of Renegar’s condition number for linear programming. In particular, we show that for every n-by-d matrix Ā, n-vector ¯ b and d-vector ¯c satisfying ∥ Ā, ¯ b, ¯c ∥
∥ F ≤ 1 and every σ ≤ 1 / √ dn, the expectation of the logarithm of C(A,b,c) is O(log(nd/σ)), where A, ..."
Cited by 22 (6 self)
We perform a smoothed analysis of Renegar’s condition number for linear programming. In particular, we show that for every n-by-d matrix Ā, n-vector b̄ and d-vector c̄ satisfying ∥(Ā, b̄, c̄)∥_F ≤ 1 and every σ ≤ 1/√(dn), the expectation of the logarithm of C(A,b,c) is O(log(nd/σ)), where A, b and c are Gaussian perturbations of Ā, b̄ and c̄ of variance σ². From this bound, we obtain a smoothed analysis of Renegar’s interior point algorithm. By combining this with the smoothed analysis of finite termination of Spielman and Teng (Math. Prog. Ser. B, 2003), we show that the smoothed complexity of linear programming is O(n³ log(nd/σ)).
, 1998
"... We consider an infeasible-interior-point algorithm, endowed with a finite termination scheme, applied to random linear programs generated according to a model of Todd. Such problems have
degenerate optimal solutions, and possess no feasible starting point. We use no information regarding an optimal ..."
Cited by 11 (3 self)
We consider an infeasible-interior-point algorithm, endowed with a finite termination scheme, applied to random linear programs generated according to a model of Todd. Such problems have degenerate
optimal solutions, and possess no feasible starting point. We use no information regarding an optimal solution in the initialization of the algorithm. Our main result is that the expected number of
iterations before termination with an exact optimal solution is O(n ln(n)). Keywords: Linear Programming, Average-Case Behavior, Infeasible-Interior-Point Algorithm. Running Title: Probabilistic
Analysis of an LP Algorithm 1 Dept. of Management Sciences, University of Iowa. Supported by an Interdisciplinary Research Grant from the Center for Advanced Studies, University of Iowa. 2 Dept. of
Mathematics, Valdosta State University. Supported by an Interdisciplinary Research Grant from the Center for Advanced Studies, University of Iowa. 3 Dept. of Mathematics, University of Iowa.
Supported by ...
, 2009
"... We perform a smoothed analysis of Renegar’s condition number for linear programming by analyzing the distribution of the distance to ill-posedness of a linear program subject to a slight
Gaussian perturbation. In particular, we show that for every n-by-d matrix Ā, n-vector ¯ b, and d-vector ¯c satis ..."
Cited by 7 (0 self)
We perform a smoothed analysis of Renegar’s condition number for linear programming by analyzing the distribution of the distance to ill-posedness of a linear program subject to a slight Gaussian perturbation. In particular, we show that for every n-by-d matrix Ā, n-vector b̄, and d-vector c̄ satisfying ∥(Ā, b̄, c̄)∥_F ≤ 1 and every σ ≤ 1, E_{A,b,c}[log C(A, b, c)] = O(log(nd/σ)), where A, b and c are Gaussian perturbations of Ā, b̄ and c̄ of variance σ², and C(A, b, c) is the condition number of the linear program defined by (A, b, c). From this bound, we obtain a smoothed analysis of interior point algorithms. By combining this with the smoothed analysis of finite termination of Spielman and Teng (Math. Prog. Ser. B, 2003), we show that the smoothed complexity of interior point algorithms for linear programming is O(n³ log(nd/σ)).
, 2003
"... A linear program is typically specified by a matrix A together with two vectors b and c, where A is an n-by-d matrix, b is an n-vector and c is a d-vector. There are several canonical forms for
defining a linear program using (A,b,c). One commonly used canonical form is: max c T x s.t. Ax ≤ b and it ..."
A linear program is typically specified by a matrix A together with two vectors b and c, where A is an n-by-d matrix, b is an n-vector and c is a d-vector. There are several canonical forms for defining a linear program using (A,b,c). One commonly used canonical form is: max c^T x s.t. Ax ≤ b, and its dual: min b^T y s.t. A^T y = c, y ≥ 0. In [Ren95b, Ren95a, Ren94], Renegar defined the condition number C(A,b,c) of a linear program and proved that an interior point algorithm whose complexity was O(n³ log(C(A,b,c)/ε)) could solve a linear program in this canonical form to relative accuracy ε, or determine that the program was infeasible or unbounded. In this paper, we prove that for any (Ā, b̄, c̄) such that ∥(Ā, b̄, c̄)∥_F ≤ 1, where ∥(Ā, b̄, c̄)∥
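As a concrete reminder of what the canonical form quoted above looks like in practice, here is a toy instance solved with SciPy. The numbers are invented; linprog minimizes, so the objective is negated to express the max form, and SciPy's default solver is whatever it ships with, not Renegar's algorithm:

    import numpy as np
    from scipy.optimize import linprog

    # max c^T x  s.t.  Ax <= b, with x free (the canonical form from the abstract)
    A = np.array([[1.0, 2.0],
                  [3.0, 1.0]])
    b = np.array([4.0, 6.0])
    c = np.array([1.0, 1.0])

    res = linprog(-c, A_ub=A, b_ub=b, bounds=[(None, None), (None, None)])
    print(res.x, -res.fun)    # optimal point and the maximized objective value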
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=2555403","timestamp":"2014-04-21T05:09:34Z","content_type":null,"content_length":"25843","record_id":"<urn:uuid:c50812ef-09ec-4f88-b81e-d5c643510c72>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00581-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Physical interpretation of conductivity with electromagnetic waves?
When an electromagnetic wave travels through a conducting medium, the electric field of the wave exerts a force on the free electrons. This force causes them to accelerate and thereby gain some
velocity. Moving charge is known as current, so the electric field of the em wave has therefore created electric currents in the material. The electrons are actually not perfectly free and isolated,
and therefore cannot be perfectly accelerated by the em wave. The bond of the electron to the solid as well as the bumping into other particles creates a net drag on the electron as it tries to
accelerate. The average amount of drag on electrons when they are being accelerated in a certain material is known as the "electrical resistivity" ρ. The electrical conductivity σ of a material is just the
inverse of its resistivity.
The creation of currents in non-perfect conductors has two effects. First, when an electron being accelerated by the em wave bumps into an atom, it gets knocked out of the oscillation and loses
some of its kinetic energy to the atom. Therefore, some of the energy in the wave gets transferred to the coherent kinetic energy of the electron it accelerates, which then gets transferred to the
random kinetic energy of the atom it bumps into. As a result, the wave dies down and the material heats up. This is known as "Joule heating" or "resistive heating". The second effect is that the
induced oscillating currents radiate new waves which also carry much of the energy away. As a result, a conductor tends to reflect much of the energy of an incident em wave instead of transmitting
it. The wave inside the conductor spatially decays because much of its energy is reflected back at the conductor's surface.
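For a feel for the numbers behind the σ = 1/ρ relationship and the Joule heating described above, here is a tiny back-of-the-envelope sketch in Python. The copper resistivity is a standard room-temperature textbook value, the field strength is an arbitrary made-up number, and J = σE is just Ohm's law, the macroscopic summary of the drag picture above:

    rho_copper = 1.68e-8            # resistivity of copper, ohm*m (room temperature)
    sigma = 1.0 / rho_copper        # conductivity, S/m

    E = 1.0e-3                      # electric field inside the conductor, V/m (arbitrary)
    J = sigma * E                   # current density from Ohm's law, A/m^2
    P = J * E                       # Joule heating per unit volume, W/m^3

    print(f"sigma = {sigma:.3g} S/m, J = {J:.3g} A/m^2, P = {P:.3g} W/m^3")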
|
{"url":"http://www.physicsforums.com/showthread.php?t=563523","timestamp":"2014-04-17T18:29:15Z","content_type":null,"content_length":"27985","record_id":"<urn:uuid:19c099dc-c42b-41d7-a7da-d22ab07044fa>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00648-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Gauss-Seidel Initial Guess
I implemented the Gauss-Seidel method for solving a linear system of equations. My question: Is there a way to “guess” the best initial values that give the fastest convergence?
This really depends on the problem. For instance, if you are solving a linear system for each frame, you can often use the solution from the last frame as an initial guess. Can you give us more
specifics about what you are trying to do?
Any reason why you can't use the conjugate gradient method?
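To make the warm-start idea concrete, here is a minimal sketch (Python/NumPy, with a small made-up diagonally dominant system, not anyone's production code). The second solve reuses the previous solution as its initial guess, which typically needs far fewer sweeps when the system changes only slightly from frame to frame:

    import numpy as np

    def gauss_seidel(A, b, x0=None, tol=1e-8, max_iter=500):
        """Solve Ax = b with Gauss-Seidel sweeps, starting from x0 if given."""
        n = len(b)
        x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
        for sweep in range(1, max_iter + 1):
            x_old = x.copy()
            for i in range(n):
                s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
                x[i] = (b[i] - s) / A[i, i]
            if np.max(np.abs(x - x_old)) < tol:
                return x, sweep
        return x, max_iter

    A = np.array([[4.0, 1.0, 0.0],
                  [1.0, 4.0, 1.0],
                  [0.0, 1.0, 4.0]])
    b = np.array([1.0, 2.0, 3.0])

    x_cold, sweeps_cold = gauss_seidel(A, b)                    # zero initial guess
    x_warm, sweeps_warm = gauss_seidel(A, b + 0.01, x0=x_cold)  # "next frame": b barely changed
    print(sweeps_cold, sweeps_warm)                             # warm start should need fewer sweeps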
|
{"url":"http://devmaster.net/posts/8006/gauss-seidel-initial-guess","timestamp":"2014-04-16T04:27:23Z","content_type":null,"content_length":"13797","record_id":"<urn:uuid:317fb42c-9374-4294-b159-06e907f82fc8>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00090-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Problemi Attuali di Fisica Teorica 06
Posted by Urs Schreiber
This year’s conference in the annual series Problemi Attuali di Fisica Teorica will be held April 7 - April 13 in Vietri, Italy (like last year).
The HEP program is
Monday 10 - Quantum Gravity
Tuesday 11 - Strings and Fields
Wednesday 12 - Duality and Branes
Thursday 13 - Geometric and topological aspects of strings and branes
1) Non Abelian gerbes and brane theory.
2) Manifolds, supermanifolds, special holonomy and superstring compactifications.
3) Generalized complex geometry and supersymmetric sigma models.
4) Poisson geometry and non commutative geometry.
Francesco Bonechi
(I.N.F.N., Firenze)
The Poisson sigma model and the quantization of Poisson manifolds
Abstract: The Poisson sigma model is a bidimensional field theory having as target manifold a Poisson manifold. Kontsevich formula for the deformation quantization of the target manifold is
interpretable as the perturbative expansion of a particular correlator of the model. The non perturbative dynamics of the model is instead still largely unexplored. In this seminar, we clarify
the meaning of the integrality condition of the Poisson tensor which appears both in the integration of the gauge transformations of the model and in the geometric quantization of the target
Francesco D’Andrea (S.I.S.S.A., Trieste)
Local index formulas on quantum spheres
Abstract: A general introduction to the basic ideas of index theory in noncommutative geometry is presented, clarified through the q-sphere example. One of the main motivations of this work is the
classification of deformations of instantons, whose charge can be computed using the local formulae of Connes-Moscovici. After a brief introduction of the main notions, some results concerning
the geometrical properties of the quantum SU(2) group and of Podles spheres, which are deformation of the Lie group SU(2) and of Riemann sphere, respectively, will be discussed. Finally, the
outlook of the subject and in particular the connection with the theory of modular forms will be illustrated.
Jarah Evslin (Free University, Bruxelles)
Twisted K-Theory as a BRST Cohomology
Abstract: We argue that twisted K-theory is a BRST cohomology. The original Hilbert space is the integral cohomology of a spatial slice, corresponding to the lattice of quantized Ramond-Ramond
field strengths. The gauge symmetry consists of large gauge transformations that correspond geometrically to choices of trivializations of gerbes. The BRST operator is identified with the
differential of the Atiyah-Hirzebruch spectral sequence.
Branislav Jurco (Munich University, Munich)
Nonabelian gerbes, differential geometry and stringy applications
Abstract: We will discuss nonabelian gerbes and their twistings as well as the corresponding differential geometry. We describe the classifying space, the corresponding universal gerbe and their
relation to the string group and string bundles. Finally we show the relevance of twisted nonabelian gerbes in the study and resolution of global anomalies of multiple coinciding M5-branes.
Urs Schreiber (Hamburg University, Hamburg)
Surface transport, gerbes, TFT and CFT
Abstract: Segal’s conception of a 2D QFT as a functor on cobordisms may be refined to that of a 2-functor on surface elements. Surface transport in gerbes, as well as 2D TFTs and CFTs provide
Alessandro Tanzini (SISSA Trieste)
Recent developments in topological brane theories
Abstract: We will discuss the formulation of topological theories for branes and its relevance for the recent conjectures about S-duality in topological string and topological M theory.
Alessandro Torrielli (Humboldt University, Berlin)
D-brane decay in electric fields and noncommutative geometry
Abstract: We study tachyon condensation in the presence of overcritical electric fluxes, by means of a toy model based on the noncommutative deformation of the one proposed by Minahan and
Zwiebach. We discuss the relation with Sen's standard picture of D-brane decay, and the connection with the S-brane paradigm.
Maxim Zabzine (Upsala University, Upsala, and University of California, Santa Barbara)
New results in generalized Kahler geometry
Abstract: I will review the different descriptions of generalized Kahler geometry and its relation with the $N=\left(2,2\right)$ supersymmetric sigma model. I will sketch the proof of the existence
of generalized Kahler potential and will explain the relation to off-shell supersymmetry.
Posted at February 9, 2006 12:27 PM UTC
|
{"url":"http://golem.ph.utexas.edu/string/archives/000749.html","timestamp":"2014-04-19T18:22:29Z","content_type":null,"content_length":"17043","record_id":"<urn:uuid:e07d4182-ab27-4d9f-8ad6-ceb0f9776cc9>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00621-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Sometimes when a function has a horizontal asymptote, we can see what it should be.
Sample Problem
Let f(x) = 4^-x. Then as x approaches ∞ the function f approaches 0, so there is a horizontal asymptote at y = 0. The function approaches this asymptote as x approaches ∞.
As x approaches -∞ the function f grows without bound, and therefore does not approach the asymptote y = 0.
Sometimes it's a little challenging to see what the asymptote should be, but we're up for it.
These asymptotes often appear when drawing rational functions. First we'll go through the guaranteed method that will tell us what type of asymptote we have and what it is, and then we'll show a
shortcut for finding horizontal asymptotes.
The Polynomial Long Division Method
The guaranteed method is polynomial long division. This method will probably take some time, but it will get the answer.
When finding horizontal / slant / curvilinear asymptotes of a rational function, we do long division to rewrite the function. We throw away the remainder, and what is left is our asymptote. If we're
left with a number, that's a horizontal asymptote (and remember, 0 is a perfectly good number!). If we're left with a line of the form y = mx + b (in other words, a degree-1 polynomial), that line is
our slant asymptote.
If we're left with anything else, it's a curvilinear asymptote.
The reason we throw away the remainder is that it will be a rational function whose numerator has a smaller degree than the denominator, and we know the limit of such a function as x approaches ∞ is 0.
A rational function will approach its horizontal / slant / curvilinear asymptote when x is approaching ∞ and when x is approaching -∞.
The Shortcut
Congratulations; we've survived the long division. Our reward is a shortcut for finding horizontal asymptotes of rational functions. A horizontal asymptote will occur whenever the numerator and
denominator of a rational function have the same degree.
Find the horizontal asymptote of the function
If we do long division, we find
so the horizontal asymptote is y = 3.
Sample Problem
Find the horizontal asymptote of the function
If we do long division, we find
Therefore, the horizontal asymptote is y = 2.
Notice a pattern? We divide the leading term of the numerator by the leading term of the denominator, and that gives us the horizontal asymptote. That's it.
To summarize:
If a rational function has...
• a smaller degree polynomial in the numerator than in the denominator, that function will have a horizontal asymptote at 0. All done!
• the same degree polynomial in the numerator as in the denominator, that rational function has a horizontal asymptote which we can find by dividing leading terms only.
• a numerator one degree larger than the denominator, that rational function has a slant asymptote, which we can find by long division.
• none of the above, the function has a curvilinear asymptote, which we can find by long division.
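If you would rather let a computer grind through the long division, here is a small sketch in Python with SymPy (the rational function is a made-up example, not one of the exercises below). The quotient of the division is the asymptote, exactly as in the summary above:

    import sympy as sp

    x = sp.symbols('x')
    num = 2*x**3 + x - 5          # example numerator (degree 3)
    den = x**2 + 4                # example denominator (degree 2), so expect a slant asymptote

    quotient, remainder = sp.div(num, den, x)      # polynomial long division
    print(quotient)                                # 2*x, i.e. the slant asymptote y = 2x
    print(sp.limit(num/den - quotient, x, sp.oo))  # 0, so the curve really approaches it

Swapping in a numerator and denominator of the same degree makes the quotient a constant (a horizontal asymptote), and a numerator two or more degrees larger gives a curvilinear one.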
Find any horizontal / slant / curvilinear asymptotes of the function
Find the horizontal asymptote of the function
For the function find any horizontal, slant, or curvilinear asymptotes. Specify the type of each asymptote, and whether the function f approaches the asymptote as x approaches ∞, -∞, or both.
Find the horizontal / slant / curvilinear asymptote for the rational function.
|
{"url":"http://www.shmoop.com/functions-graphs-limits/finding-horizontal-asymptotes-help.html","timestamp":"2014-04-21T12:10:45Z","content_type":null,"content_length":"57520","record_id":"<urn:uuid:46c99047-d596-41c1-adea-6fd647717945>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00640-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Analysis of a Generic Model of Eukaryotic Cell-Cycle Regulation
Biophys J. Jun 15, 2006; 90(12): 4361–4379.
We propose a protein interaction network for the regulation of DNA synthesis and mitosis that emphasizes the universality of the regulatory system among eukaryotic cells. The idiosyncrasies of cell
cycle regulation in particular organisms can be attributed, we claim, to specific settings of rate constants in the dynamic network of chemical reactions. The values of these rate constants are
determined ultimately by the genetic makeup of an organism. To support these claims, we convert the reaction mechanism into a set of governing kinetic equations and provide parameter values (specific
to budding yeast, fission yeast, frog eggs, and mammalian cells) that account for many curious features of cell cycle regulation in these organisms. Using one-parameter bifurcation diagrams, we show
how overall cell growth drives progression through the cell cycle, how cell-size homeostasis can be achieved by two different strategies, and how mutations remodel bifurcation diagrams and create
unusual cell-division phenotypes. The relation between gene dosage and phenotype can be summarized compactly in two-parameter bifurcation diagrams. Our approach provides a theoretical framework in
which to understand both the universality and particularity of cell cycle regulation, and to construct, in modular fashion, increasingly complex models of the networks controlling cell growth and
The cell cycle is the sequence of events by which a cell replicates its genome and distributes the copies evenly to two daughter cells. In most cells, the DNA replication-division cycle is coupled to
the duplication of all other components of the cell (ribosomes, membranes, metabolic machinery, etc.), so that the interdivision time of the cell is identical to its mass doubling time (1,2). Usually
mass doubling is the slower process; hence, temporal gaps (G1 and G2) are inserted in the cell cycle between S phase (DNA synthesis) and M phase (mitosis). During G1 and G2 phases, the cell is
growing and “preparing” for the next major event of the DNA cycle (3). “Surveillance mechanisms” monitor progress through the cell cycle and stop the cell at crucial “checkpoints” so that events of
the DNA and growth cycles do not get out of order or out of balance (4,5). In particular, in protists (for sure) and metazoans (to a lesser extent), cells must grow to a critical size to start S
phase and to a larger size to enter mitosis. These checkpoint requirements assure that the cycle of DNA synthesis and mitosis will keep pace with the overall growth of cells (6). Other checkpoint
signals monitor DNA damage and repair, completion of DNA replication, and congression of replicated chromosomes to the metaphase plate (7).
Eukaryotic cell cycle engine
These interdependent processes are choreographed by a complex network of interacting genes and proteins. The main components of this network are cyclin-dependent protein kinases (Cdk's), which
initiate crucial events of the cell cycle by phosphorylating specific protein targets. Cdk's are active only if bound to a cyclin partner. Yeasts have only one essential Cdk, which can induce both S
and M phase depending on which type of cyclin it binds. Because Cdk molecules are always present in excess, it is the availability of cyclins that determines the number of Cdk/cyclin complexes in a
cell (8). Cdk/cyclin complexes can be downregulated a), by inhibitory phosphoryation of the Cdk subunit and b), by binding to a stoichiometric inhibitor (cyclin-dependent kinase inhibitor (CKI)) (9).
Some years ago Paul Nurse (10) proposed, and since then many experimental studies have confirmed, that the DNA replication-division cycle in all eukaryotic cells is controlled by a common set of
proteins interacting with each other by a common set of rules. Nonetheless, each particular organism seems to use its own peculiar mix of these proteins and interactions, generating its own
idiosyncrasies of cell growth and division. The “generic” features of cell cycle control concern these common genes and proteins and the general dynamical principles by which they orchestrate the
replication and partitioning of the genome from mother cell to daughter. The peculiarities of the cell cycle concern exactly which parts of the common machinery are functioning in any given cell
type, given the genetic background and developmental stage of an organism. We formulate the genericity of cell cycle regulation in terms of an “underlying” set of nonlinear ordinary differential
equations with unspecified kinetic parameters, and we attribute the peculiarities of specific organisms to the precise settings of these parameters. Using bifurcation diagrams, we show how specific
physiological features of the cell cycle are determined ultimately by levels of gene expression.
Mathematical modeling of the cell cycle
The dynamic properties of complex regulatory networks cannot be reliably characterized by intuitive reasoning alone. Computers can help us to understand and predict the behavior of such networks, and
differential equations (DEs) provide a convenient language for expressing the meaning of a molecular wiring diagram in computer-readable form (11). Numerical solutions of the DEs can be compared with
experimental results, in an effort to determine the kinetic rate constants in the model and to confirm the adequacy of the wiring diagram. Eventually the model, with correct equations and rate
constants, should give accurate simulations of known experimental results and should be pressed to make verifiable predictions. This method has been used for many years to create mathematical models
of eukaryotic cell cycle regulation (12–29). The greatest drawback to DE-based modeling is that the modeler must estimate all the rate constants from the available data and still have some
observations “left over” to test the model. In the case of cell cycle regulation, very few of these rate constants have been measured directly (30,31) although the available data provide severe
constraints on rate constant values (15,32). To complement the important but tedious work of parameter estimation by data fitting, we need analytical tools for characterizing the parameter-dependence
of solutions of DEs and for associating a model's robust dynamical properties to the physiological characteristics of living cells.
Bifurcation theory and regulatory networks
Bifurcation theory is a general tool for classifying the attractors of a dynamical system and describing how the qualitative properties of these attractors change as a parameter value changes.
Bifurcation theory has been used successfully to understand transitions in the cell cycle by our group (33–37) and by others (12,26,38). In this article, we use bifurcation theory to examine a
generic model of eukaryotic cell cycle controls, bringing out the similarities and differences in the dynamical regulation of cell cycle events in yeasts, frog eggs, and mammalian cells. To
understand our approach, the reader must be familiar with a few elementary bifurcations of nonlinear DEs and how they are generated by positive and negative feedback in the underlying molecular
network. For more details, the reader may consult the Appendix to this article and some recent review articles (36,37).
In Fig. 1 we propose a general protein interaction network for regulating cyclin-dependent kinase activities in eukaryotic cells. (Fig. 1 uses “generic” names for each protein; in Table 1 we present
the common names of each component in specific cell types: budding yeast, fission yeast, frog eggs, and mammalian cells.) Using basic principles of biochemical kinetics, we translate the generic
mechanism into a set of coupled nonlinear ordinary differential equations (Supplementary Material, Table SI) for the temporal dynamics of each protein species. Although the structure of the DEs is
fixed by the topology of the network, the forms of the reaction rate laws (mass action, Michaelis-Menten, etc.) are somewhat arbitrary and would vary from one modeller to another. We use rate laws
consistent as much as possible with our earlier choices (15,18,25,39–41). In addition, most of the parameter values for each organism (Supplementary Material, Table SII) were inherited from earlier
Wiring diagram of the generic cell-cycle regulatory network. Chemical reactions (solid lines), regulatory effects (dashed lines); a protein sitting on a reaction arrow represents an enzyme catalyst
of the reaction. Regulatory modules of the system are ...
Protein name conversion table and modules used for each organism
For numerical simulations and bifurcation analysis of the DEs, we used the computer program XPP-AUT (42), with the “stiff” integrator. Instructions on how to reproduce our simulations and diagrams
(including all necessary .ode and .set files, and an optional SBML version of the model) can be downloaded from our website (43).
All protein concentrations in the model are expressed in arbitrary units (au) because, for the most part, we do not know the actual concentrations of most regulatory proteins in the cell. Hence, all
rate constants capture only the timescales of processes (rate constant units are min^−1). For each mutant, we use the same equations and parameter values except for those rate constants that are
changed by the mutation (e.g., for gene deletion we set the synthesis rate of the associated protein to zero).
A generic model of cell cycle regulation
Since the advent of gene-cloning technologies in the 1980s, molecular cell biologists have been astoundingly successful in unraveling the complex networks of genes and proteins that underlie major
aspects of cell physiology. These results have been collected recently in comprehensive molecular interaction maps (44–48). In the same spirit, but with an eye toward a computable, dynamic model, we
collected the most important regulatory “modules” of the Cdk network. Our goal is to describe a generic network (Fig. 1) that applies equally well to yeasts, frogs, and humans. We do not claim that
Fig. 1 is a complete model of eukaryotic cell-cycle controls, only that it is a starting point for understanding the basic cell-cycle engine across species.
Regulatory modules
The network, which tracks the three principal cyclin families (cyclins A, B, and E) and the proteins that regulate them at the G1-S, G2-M, and M-G1 transitions, can be subdivided into 13 modules.
(Other, coarser subdivisions are possible, but these 13 modules are convenient for describing the similarities and differences of regulatory signals among various organisms.)
Modules 4, 10, and 13: synthesis and degradation of cyclins B, E, and A. Cyclin E is active primarily at the G1-S transition, cyclin A is active from S phase to early M phase, and cyclin B is
essential for mitosis.
Modules 1 and 2: regulation of the anaphase promoting complex (APC). The APC works in conjunction with Cdc20 and Cdh1 to ubiquitinylate cyclin B, thereby labeling it for degradation by proteasomes.
The APC must be phosphorylated by the mitotic CycB kinase before it will associate readily with Cdc20, but not so with Cdh1. On the other hand, Cdh1 can be inactivated by phosphorylation by
cyclin-dependent kinases. Cdc14 is a phosphatase that opposes Cdk by dephosphorylating and activating Cdh1.
Module 8: synthesis and degradation of CKI (cyclin-dependent kinase inhibitor). Degradation of CKI is promoted by phosphorylation by cyclin-dependent kinases and inhibited by Cdc14 phosphatase.
Modules 6, 9, and 12: reversible binding of CKI to cyclin/Cdk dimers to produce catalytically inactive trimers (stoichiometric inhibition).
Modules 3, 7, and 11: regulation of the transcription factors that drive expression of cyclins and CKI. TFB is activated by cyclin B-dependent kinase. TFE is activated by some cyclin-dependent
kinases and inhibited by others. TFI is inhibited by cyclin B-dependent kinase and activated by Cdc14 phosphatase.
Module 5: regulation of cyclin B-dependent kinase by tyrosine phosphorylation and dephosphorylation (by Wee1 kinase and Cdc25 phosphatase, respectively). The tyrosine-phosphorylated form is less
active than the unphosphorylated form. Cyclin B-dependent kinase phosphorylates both Wee1 (inactivating it) and Cdc25 (activating it), and these phosphorylations are reversed by Cdc14 phosphatase.
The model is replete with positive feedback loops (CycB activates TFB, which drives synthesis of CycB; CycB activates Cdc25, which activates CycB; CKI inhibits CycB, which promotes degradation of
CKI; Cdh1 degrades CycB, which inhibits Cdh1), and negative feedback loops (CycB activates APC, which activates Cdc20, which degrades CycB; CycB activates Cdc20, which activates Cdc14, which opposes
CycB; TFE drives synthesis of CycA, which inhibits TFE). These complex, interwoven feedback loops create the interesting dynamical properties of the control system, which account for the
characteristic features of cell cycle regulation, as we intend to show.
The model (at present) neglects important pathways that regulate, e.g., cell proliferation in metazoans (retinoblastoma protein), mitotic exit in yeasts (the FEAR, MEN, and SIN pathways), and the
ubiquitous DNA-damage and spindle assembly checkpoints. We intend to remedy these deficiencies in later publications, as we systematically grow the model to include more and more features of the
control system.
Role of cell growth
In yeasts and other lower eukaryotes, a great deal of evidence shows the dominant role of cell growth in setting the tempo of cell division (2,49–52). In somatic cells of higher eukaryotes there are
many reports of size control of cell-cycle events (e.g., (53–55)), although other authors have cast doubts on a regulatory role for cell size (e.g., (56,57)). For embryonic cells and cell extracts,
the activation of Cdk1 is clearly dependent on the total amount of cyclin B available (58,59). To create a role for cell size in the regulation of Cdk activities, we assume, in our models, that the
rates of synthesis of cyclins A, B, and E are proportional to cell “mass”. The idea behind this assumption (see also Futcher (60)) is that cyclins are synthesized in the cytoplasm on ribosomes at an
increasing rate as the cell grows. The cyclins then find a Cdk partner and move into the nucleus where they perform their functions. Presumably the effective, intranuclear concentrations of the
cyclin-dependent kinases increase as the cell grows because they become more concentrated at their sites of action. Other regulatory proteins in the network, we assume, are not compartmentalized in
the same way, so their effective concentrations do not increase as the cell grows. This basic idea for size control of the cell cycle was tested experimentally in budding yeast by manipulating the
“nuclear localization signals” on cyclin proteins (8). As predicted by the model, cell size is larger in cells that exclude cyclins from the nucleus and smaller in cells that overaccumulate cyclins
in the nucleus. A recent theoretical study by Yang et al. (61) may shed light on how cell size couples to cell division without assuming a direct dependence of cyclin synthesis rate on mass, but, for
this article, we adopt the assumption as a simple and effective way to incorporate size control into nonlinear DE models for the control of cyclin-dependent kinase activities.
For simplicity, we assume that cell mass increases exponentially (with a mass doubling time (MDT) suitable for the organism under consideration) and that cell mass is exactly halved at division. Our
qualitative results (bifurcation diagrams, etc.) are not dependent on these assumptions. Cell growth may be linear or logistic, and cell division may be asymmetric or inexact—it doesn't really matter
to our models. The important features are that “mass” increases monotonically as the cell grows (driving the control system through bifurcations that govern events of the cell cycle) and that mass
decreases abruptly at cell division (resetting the control system back to a G1-like state—unreplicated chromosomes and low Cdk activity).
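Written out explicitly, the growth-and-division bookkeeping used throughout the article is simply

```latex
\frac{dm}{dt} = \mu\, m
  \quad\Longrightarrow\quad
  m(t) = m_{\text{birth}}\, e^{\mu t},
\qquad
\text{MDT} = \frac{\ln 2}{\mu},
\qquad
m \;\to\; \frac{m}{2}\ \text{at division},
```

where MDT is the mass doubling time referred to in the simulations described below.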
Equations and parameter values
The dynamical properties of the regulatory network in Fig. 1 can be described by a set of ordinary differential equations (Supplementary Material, Table SI), given a table of parameter values
suitable for specific organisms (Table SII). For each organism we analyze the effects of physiological and genetic changes on the transitions between cell cycle phases, in terms of bifurcations of
the vector fields defined by the DEs (for background on dynamical systems, see the Appendix).
Frog embryos: Xenopus laevis
To validate our equations and tools, we first verified our earliest studies of bifurcations in the frog-egg model. The combination of modules 1, 4, and 5 of Fig. 1 was used to recreate the
bifurcation diagram of Borisuk and Tyson (33); see Supplementary Material, Fig. S1. Our bifurcation parameter, “cell mass”, can be interpreted as the rate constant for cyclin B synthesis. For small
rates of cyclin synthesis, the control system is arrested in a stable “interphase” state with low activity of CycB-dependent kinase. For larger rates of cyclin synthesis, the model exhibits
spontaneous limit cycle oscillations, which begin at a SNIPER bifurcation (long period, fixed amplitude). Eventually, as the rate of cyclin synthesis gets large enough, the oscillations are lost at a
Hopf bifurcation (fixed period, vanishing amplitude). Beyond the Hopf bifurcation, the control system is arrested in a stable “mitotic” state with high activity of CycB-dependent kinase. These types
of states of the control system are reminiscent of the three characteristic states of frog eggs: interphase arrest (immature oocyte), metaphase arrest (mature oocyte), and spontaneous oscillations
(fertilized egg). For more details, see Novak and Tyson (18) and Borisuk and Tyson (33).
Fission yeast: Schizosaccharomyces pombe
Wild-type cell cycle
The fission yeast cell cycle network, composed of modules 1, 2, 4, 5, 6, 8, 11, 12, and 13, is described in Fig. 2 in terms of a one-parameter bifurcation diagram (Fig. 2 A) and a simulation (Fig. 2
B). In the simulation, we plot protein levels as a function of cell mass rather than time, but because mass increases exponentially with time, one may think of the lower abscissa as e^μt. We present
the simulation this way so that we can “lift it up” onto the bifurcation diagram: the gray curve in Fig. 2 A is identical to the solid black curve (actCycB) in Fig. 2 B. In Fig. 2 A, a stable,
G1-like steady state exists at a very low level of actCycB (active Cdk/CycB dimers). This steady state is lost at a saddle-node bifurcation (SN1) at cell mass = 0.8 au. Between SN1 and SN2 (at cell
mass = 2.6 au), the control system has a single, stable, steady-state attractor with an intermediate activity (~0.1) of cyclin B (an S/G2-like steady state). The other steady-state branches are
unstable and physiologically unnoticeable. For mass >2.6 au, the only stable attractor is a stable limit cycle oscillation. This branch of stable limit cycles is lost by further bifurcations at very
large mass (of little physiological significance for wild-type cells).
One-parameter bifurcation diagram (A) and cell-cycle trajectory (B) of wild-type fission yeast. Both figures share the same abscissa. Notice that cell mass is just the exponential of age, because we
assume that cells grow exponentially between birth (age ...
The gray trajectory in Fig. 2 A represents the path of a growing-dividing yeast cell projected onto the bifurcation diagram. Let us pick up the trajectory of a growing cell at mass = 2.2 au, where
the cell cycle control system has been captured by the stable S/G2 steady state. As the cell continues to grow, it leaves the S/G2 state at SN2 and prepares to enter mitosis. At cell mass >2.6, the
only stable attractor is a limit cycle. This limit cycle, which bifurcates from SN2, has infinite period at the onset of the bifurcation (hence, the onset point is commonly called a
SNIPER—saddle-node-infinite-period—bifurcation). Because the limit cycle has a very long period at first, and the cell enters the limit cycle at the place where the saddle-node used to be, the cell
is stuck in a semistable transient state (where the gray trajectory “overshoots” SN2). As the cell grows, it eventually escapes the semistable state (at cell mass ≈ 3), and then actCycB increases
dramatically (note the log-scale on the ordinate), driving the cell into mitosis. Because the control system is now captured by the stable limit cycle, actCycB inevitably decreases and the cell is
driven out of mitosis. We presume that the cell divides when actCycB falls below 0.1; hence, cell mass is halved (3.4 → 1.7), and the control system is now attracted to the S/G2 steady state (the
only stable attractor at this cell mass). The newly divided cell makes its way to the S/G2 attractor by a circuitous route that looks like a brief G1 state (very low actCycB) but is not a stable and
long-lasting G1 state. This transient G1 state is characteristic of wild-type fission yeast cells (62).
Overshoot of a SNIPER bifurcation point (as in Fig. 2 A) is a common feature of our cell cycle models, and recent experimental evidence (63) confirms this prediction in frog egg extracts. These
authors located the position of the steady-state SN bifurcation in a nonoscillatory extract and then showed that during oscillations the Cdk-regulatory system overshoots the SN point by twofold or more.
The one-parameter bifurcation diagram in Fig. 2 A is a compact way to display the interplay between the DNA replication-segregation cycle (regulated by Cdk/CycB activity) and the growth-division
cycle (represented on the abscissa by the steady increase of cell mass and its abrupt resetting at division). The very strong “cell size control” in late G2 phase of the fission yeast cell cycle,
which has been known to physiologists for 30 years (52), is here represented by growing past the SNIPER bifurcation, which eliminates the stable S/G2 steady state and allows the cell to pass into and
out of mitosis (the stable limit cycle oscillation).
A satisfactory model of fission yeast must account not only for the phenotype of wild-type cells but also for the unusual properties of the classic cdc and wee mutants that played such important
roles in deducing the cell-cycle control network. Mutations change the values of specific rate constants, which remodel the one-parameter bifurcation diagram and thereby change the way a cell
progresses through the DNA replication-division cycle. For example (Fig. 3 A), for a wee1^− mutant (reduce Wee1 activity to 10% of its wild-type value) SN2 moves to the left of SN1 and the
infinite-period limit cycle now bifurcates from SN1. Hence, the cell cycle in wee1^− cells is now organized by a SNIPER bifurcation at the G1/S transition: wee1^− cells are about half the size of
wild-type cells, they have a long G1 phase and short G2, and slowly growing cells pause in G1 (unreplicated DNA) rather than in G2 (replicated DNA).
One-parameter (A) and two-parameter (B) bifurcation diagrams for mutations at the wee1 locus in fission yeast. Panel A should be interpreted as in Fig. 2. Key to panel B: dashed black line, locus of
SN1 bifurcation points; solid black line, locus of SN2 ...
In the Supplementary Material (Fig. S2) we present bifurcation diagrams for four other fission yeast mutants (cig2Δ, cig2Δ rum1Δ, wee1Δ cdc25Δ, wee1Δ rum1Δ), to confirm that our “generic” version is
indeed consistent with the known physiology of these mutants. Because they have been described in detail elsewhere (37), we turn our attention instead to some novel results.
Endoreplicating mutants
On the wild-type bifurcation diagram (Fig. 2 A) we can notice a very small oscillatory regime at the beginning of the S/G2 branch of steady states (labeled as HB1, at cell mass = 0.79). This stable
periodic solution is a consequence of a negative feedback loop whereby Cig2 inhibits its own transcription factor, Cdc10, by phosphorylation (64). (In the generic nomenclature, Cig2 is “CycA” and
Cdc10 is “TFE”.) The negative feedback loop can generate oscillations if there is positive feedback in the system as well, which is provided by the Cdk inhibitor (CKI). As CycA slowly accumulates, it
is at first sequestered in inactive complexes with CKI, but eventually CycA saturates CKI and active (uninhibited) Cdk/CycA appears. ActCycA phosphorylates CKI, which labels CKI for proteolysis (65).
As CKI is degraded, actCycA rises even faster because it is released from the inactive complexes. At this point the negative feedback turns on and CycA synthesis is blocked. With no synthesis but
continued degradation, CycA level drops, which allows CKI to come back (provided there is no other Cdk activity that can phosphorylate CKI and keep its level low). CKI comeback returns the control
system to G1. In wild-type cells, the CycA-TFE-CKI interactions cannot create stable oscillations because CycB takes over from CycA and keeps CKI low in G2 and M phases. But if CycB is absent (as in
cdc13Δ mutants of fission yeast), then CKI and CycA generate multiple rounds of DNA replication without intervening mitoses (called “endoreplication”), precisely the phenotype of cdc13Δ mutants (66).
In Fig. 4 A we show the bifurcation diagram of cdc13Δ cells. Over a broad range of cell mass, large amplitude stable oscillations of Cdk/CycA (from a SNIPER bifurcation at SN1) drive multiple rounds
of DNA synthesis without intervening mitoses. Because this negative feedback loop also exists in metazoans, it may explain the core mechanism of developmental endoreplication (67).
One-parameter (A) and two-parameter (B) bifurcation diagrams for mutations at the cdc13 locus in fission yeast. Panels A and B should be interpreted as in Fig. 3. cdc13^+ overexpression has little
effect on cell-cycle phenotype, but cdc13 deletion ...
Mutant analysis on the genetics-physiology plane
In our view, genetic mutations are connected to cell phenotypes through bifurcation diagrams. Mutations induce changes in parameter values, which may change the nature of the bifurcations experienced
by the control system, which will have observable consequences in the cell's physiology. Mutation-induced changes in parameter values may be large or small: e.g., the rate constant for CycB synthesis
= 0 in a cdc13Δ cell, but a wee1^ts (“temperature sensitive”) mutant may cause only a minor change in the catalytic activity of Wee1 kinase. Whether these changed parameter values cause a qualitative
change in bifurcation points on the one-parameter diagram (Figs. 2 A and 3 A), or merely a quantitative shift of their locations, depends on whether the parameter change crosses a bifurcation point
or not. In principle, we can imagine a sequence of bifurcation diagrams (and associated phenotypes) connecting the wild-type cell to a mutant cell as the relevant kinetic parameter changes
continuously (up or down) from its wild-type value. This theoretical sequence of morphing phenotypes can be captured on a two-parameter bifurcation diagram, where cell mass continues to stand in for
the physiology of the cell cycle (growth and division) and the second parameter is a rate constant that varies continuously between 0 (the deletion mutant) and some large value (the overexpression
mutant). Plotted this way, the two-parameter bifurcation diagram spans the entire range of molecular biology from genetics to cell physiology! (For more details on two-parameter bifurcation diagrams,
see the Appendix.)
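To make the construction of such a genetics-physiology plane concrete, the sketch below maps out saddle-node loci for a deliberately simple one-variable caricature, dx/dt = p + k·x − x³, where p stands in for cell mass (the physiological axis) and k for the activity of some gene product (the genetic axis). This toy system is not the cell-cycle model; it only shows how counting steady states on a grid of two parameters reveals the fold (SN) lines that dedicated continuation software (e.g., AUTO within XPP-AUT) traces far more efficiently.

```python
# Brute-force two-parameter scan of a toy system dx/dt = p + k*x - x**3.
# Count real steady states at each (k, p); the boundary between the 1-root and
# 3-root regions is the locus of saddle-node (fold) bifurcations.
import numpy as np

def n_steady_states(k, p, tol=1e-9):
    # steady states are real roots of -x^3 + k*x + p = 0
    roots = np.roots([-1.0, 0.0, k, p])
    return int(np.sum(np.abs(roots.imag) < tol))

k_values = np.linspace(0.0, 3.0, 13)
p_values = np.linspace(-3.0, 3.0, 25)
for k in k_values:
    row = "".join("3" if n_steady_states(k, p) == 3 else "." for p in p_values)
    print(f"k = {k:4.2f}  {row}")
# Analytically the folds lie at p = +/- 2*(k/3)**1.5, so the bistable ("3")
# wedge opens up as k increases, in the same way that the SN loci bound the
# bistable region of the genetics-physiology planes discussed in the text.
```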
To illustrate this idea, we first consider wee1 mutations. On the two-parameter bifurcation diagram in Fig. 3 B we follow the loci of bifurcation points (SN1, SN2, and HB1) from their position in
wild-type cells (“Wee1 activity” = 0.5) in the direction of overexpression (>0.5) or deleterious mutation (<0.5). The one-parameter bifurcation diagrams of wild-type (Fig. 2 A) and wee1^− (Fig. 3 A)
cells are cuts of this plane at the marked levels of Wee1 activity. For overexpression mutations, the SNIPER bifurcation moves toward larger cell mass, and the heavy bar shows where the simulation of
2 × wee1^+ cells projects onto the genetics-physiology plane. Clearly, the size of wee1^op cells increases in direct proportion to gene dosage (68). As Wee1 activity decreases below 0.5, e.g., in a
heterozygote diploid cell (activity = 0.25) or in wee1^ts mutants, the SNIPER bifurcation moves toward smaller cell mass. Eventually, the SN1 and SN2 loci cross, and the infinite-period oscillations
switch from SN2 to SN1 by a short but complicated sequence of codimension-two bifurcations (not shown on the diagram). Because SN1 is not dependent on Wee1 activity, the critical cell size at the
SNIPER bifurcation drops no further as Wee1 activity decreases.
The two-parameter bifurcation diagram for cyclin B (Cdc13) expression (Fig. 4 B) shows how mitotic cycles are related to endoreplication cycles. As Cdc13 synthesis rate decreases from its wild-type
value (0.02 min^−1), there is a dramatic increase of the critical cell mass for mitotic oscillations (the SNIPER bifurcation associated with SN2). In addition, endoreplication cycles appear at the
intersection of HB1 and SN1 (by a sequence of codimension-two bifurcations, which we are not focusing on here). At first appearance, the endoreplication cycles have a very long period, but as Cdc13
synthesis rate decreases further, the period of endoreplication cycles decreases and the range of these oscillations increases.
The two-parameter bifurcation diagrams in Figs. 3 and 4 are incomplete: they do not show all loci of codimension-one bifurcations or any of the characteristic codimension-two bifurcations.
Examples of more complete two-parameter bifurcation diagrams can be found in the Supplementary Material (Fig. S3) and on our web site (69).
Budding yeast: Saccharomyces cerevisiae
Our generic model of the budding yeast cell cycle is based on a detailed model published recently by Chen et al. (15). The generic model bypasses details of the mitotic exit network (MEN) in Chen's
model, assuming instead that Cdc20 directly activates Cdc14. We had to change some parameters compared to Chen et al. (15) because of this and other minor changes in the network. We found these new
parameter values by fitting simulations of wild-type and some mutant cells (15).
Wild-type cells
One-dimensional bifurcation diagrams of wild-type cells created by the full model (15) and by our generic model (Fig. 5, A and B) look very similar. Both figures show a stable G1 steady state that
disappears at a SNIPER bifurcation (G1-S transition at cell mass = 1.13 au), giving rise to oscillations that correspond to progression through S/G2/M phases. There is no attractor representing a
stable G2 phase in wild-type budding yeast cells. The green, red, and blue curves superimposed on the bifurcation diagram are “cell cycle trajectories” at mass doubling times of 150, 120, and 90 min,
respectively (MDT = ln2/μ, where μ = specific growth rate). Notice that cells get larger as MDT gets smaller (as μ increases). For simplicity, we are neglecting the asymmetry of division of budding
yeast in these simulations.
One-parameter bifurcation diagrams of budding yeast cells. (A) Wild-type (this article), (B) wild-type (Chen's 2004 model (15)), (C) cdh1Δ (k[ah1p] = k[ah1pp] = 0), (D) ckiΔ (k[sip] = k[sipp] = 0), (
E) cdc20Δ ...
Two ways to achieve size homeostasis
Fig. 5 A shows that the relation of the cell cycle trajectory to the SNIPER bifurcation point depends strongly on MDT. At slow growth rates (MDT ≥ 150 min), newborn cells are smaller than the size at
the SNIPER bifurcation; hence the Cdk-control system is attracted to the stable G1 steady state (seen more clearly in Fig. 5 B than in Fig. 5 A), and the cell is waiting until it grows large enough
to surpass the SNIPER bifurcation. Only then can the cell commit to the S/G2/M sequence. This is a mathematical representation of the classic notion of “size control” to achieve balanced cell growth
and division (49,50,52,70). At faster growth rates, however, newborn cells are already larger than the critical size at the SNIPER bifurcation, and they do not linger in a stable G1 state, waiting to
grow large enough to start the next chromosome replication cycle. How then is cell-size homeostasis achieved, if the classic “sizer” mechanism is inoperative?
Fig. 6 shows the relationship between limit cycle period and distance from the SNIPER bifurcation. For mass <1.13, there is no limit cycle; the stable attractor is the G1 steady state. For mass
slightly >1.13, the limit cycle period is very long, approaching infinity as mass approaches 1.13 from above. Depending on MDT, the cell cycle trajectory finds a location on the cell-mass axis such
that the average cell-cycle-progression time (time spent in G1/S/G2/M) is equal to the mass doubling time. For MDT = 90 min (bottom curve in Fig. 6), the cell is born at mass = 2 and divides at mass
= 4, spending its entire lifespan in the oscillatory region, with an average cell-cycle-progression time of 90 min. As MDT lengthens to 120 min (second curve from bottom), the cell cycle trajectory
shifts to smaller size, so that the average cell-cycle-progression time can lengthen to 120 min. Still slower growth rates (MDT ≥ 150 min) drive the newborn cell into the “sizer” domain, where the
Cdk-control system can wait indefinitely at the stable G1 state until the cell grows large enough to surpass the SNIPER bifurcation. Notice that cell-size homeostasis is possible in the “oscillator”
domain because of the inverse relationship between oscillator period and cell mass close to a SNIPER bifurcation.
Achieving balanced growth at different growth rates. (Upper panel) Bifurcation diagram of the budding yeast network (same as Fig. 5 A). (Lower panel) Period of the oscillatory solutions. Cell cycle
trajectories at different MDT (solid curves) are displayed ...
Cell cycles that visit the “sizer” domain (top two curves in Fig. 6) show “strong” size control, i.e., interdivision time is strongly negatively correlated to birth size, and cell size at the
size-controlled transition point (G1 to S in Fig. 6) shows little or no dependence on birth size (1,2). Cell cycles that live wholly in the “oscillator” domain (bottom two curves in Fig. 6) show
“weak” size control, i.e., interdivision time is weakly negatively correlated to birth size and there is no clear “critical size” for any cell cycle transition. Nonetheless, such cycles still show
balanced growth (interdivision time = mass doubling time) because the cell cycle trajectory settles on a size interval for which the average oscillatory period is identical to the cell's mass
doubling time. Balanced growth and division is a consequence of the steep decline in limit cycle period with increasing cell size past the SNIPER bifurcation.
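The balancing act described in this paragraph can be illustrated with a back-of-the-envelope iteration (a cartoon, not the full model): suppose the cycle time of a cell born at mass m_birth in the oscillator domain scales like a generic SNIPER period, T = T0/sqrt(m_birth − m_c), let the cell grow exponentially for that long, and halve its mass at division. The constant T0 below is illustrative, and m_c is taken from the budding yeast SNIPER mass quoted above.

```python
# Cartoon of "weak" size control: iterate the birth-mass map
# m_next = m * exp(mu * T(m)) / 2 with a SNIPER-like period T(m) = T0/sqrt(m - MC).
# The iteration settles where the cycle time equals the mass doubling time,
# and faster-growing cells end up bigger.
import math

MC = 1.13     # cell mass at the SNIPER bifurcation (budding yeast value from the text)
T0 = 84.0     # period scale factor, min * au**0.5 (illustrative)

def converge(m_birth, mdt, n_cycles=25):
    mu = math.log(2.0) / mdt
    for _ in range(n_cycles):
        period = T0 / math.sqrt(m_birth - MC)     # SNIPER-like period at birth size
        m_division = m_birth * math.exp(mu * period)
        m_birth = m_division / 2.0                # symmetric division
    return m_birth, period

for mdt in (90.0, 120.0):
    m_b, t_cycle = converge(2.5, mdt)
    print(f"MDT = {mdt:5.0f} min -> birth mass {m_b:.2f} au, cycle time {t_cycle:.0f} min")
```

At the fixed point of this map the growth factor over one cycle is exactly 2, so the cycle time automatically equals MDT, and the converged birth mass increases as MDT shrinks, which is the "faster-growing cells are bigger" correlation discussed next.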
As Fig. 6 demonstrates, for cells in the “oscillator” domain, our model predicts a positive correlation between growth rate and average cell size (faster growing cells are bigger). This correlation
is a characteristic and advantageous feature of yeast cells: rich media favor cell growth, poor media favor cell division (50,71). Although it is satisfying to see our model explain this correlation
in an “unforced” way, we note that our interpretation of the dependence of cell size on growth rate is predicated on the assumption that one can vary mass doubling time without changing any rate
constants in the Cdk-control system (i.e., without changing the location of the bifurcation points in Fig. 6). Unfortunately, this assumption is probably incorrect because changes in growth medium
(sugar source, nitrogen source, etc.) likely induce changes in gene expression that move the SNIPER bifurcation points, with poorer growth medium favoring smaller size for completion of the cell
cycle (see, e.g., (49,50)). We have yet to sort out all the complications of size regulation in yeast cells. In the meantime, Fig. 6 provides a useful paradigm for understanding “strong” and “weak”
size control in eukaryotes.
Mutants of G1 phase regulation
In this section we present bifurcation diagrams for a few of the most important and interesting mutants described in great detail by numerical simulations in Chen et al. (15). We start with mutants
missing the components that stabilize the G1 phase of the cell cycle: either Cdh1 (an activator of CycB degradation) (Fig. 5 C) or Sic1 (a cyclin B-dependent kinase inhibitor) (Fig. 5 D). In both
cases the mutant cells are viable and apparently have a short G1 phase (72–74). On the bifurcation diagrams, however, a stable G1 steady state exists only at very small cell size. In both mutants,
the cell cycle trajectory is operating in the “oscillator” domain of the size-homeostasis diagram, and consequently these mutant cells are expected to exhibit “weak” size control. In these cases, the
G1 phase of the cell cycle is a transient state, as described above, and the Start transition (G1-to-S) is governed by an oscillator not a sizer. Furthermore, if these mutant cells are grown from
spores (i.e., very small size initially), they will execute Start at a much smaller size than they do under normal proliferating conditions.
Two-parameter bifurcation diagrams (genetics-physiology planes) for both SIC1 and CDH1 are presented in the Supplementary Material (Fig. S3). The two types of mutations have quite a similar effect on
cell physiology.
Mutants of mitotic exit regulation
Although both cdc20^ts and cdc14^ts mutants block mitotic exit, cdc20^ts arrests at the metaphase-anaphase transition (75), whereas cdc14^ts arrests in telophase (76,77). Hence, exit from mitosis
must be a two-stage process (30), with two different stable-steady states in which the control system can halt. The one-parameter bifurcation diagrams (Fig. 5, E and F) reveal these two stable steady
states. For cdc20^ts the steady state has very large CycB activity (~60 au), whereas the cdc14^ts mutant arrests in a state of much lower CycB activity (~2 au). Also, in the second case a damped
oscillation is seen on the simulation curve. These effects all derive from the fact that if Cdc20 is inoperable, then cyclin degradation is totally inhibited, whereas if Cdc14 is not working, then
Cdc20 can destroy some CycB—not enough for mitotic exit, but enough to create a stable steady state of lower CycB activity (30). The corresponding two-parameter bifurcation diagrams of cdc20^ts and
cdc14^ts mutants (Supplementary Material, Fig. S3, C and D) are also qualitatively similar.
Lethality that depends on growth rate
To bind effectively to Cdc20, proteins of the core APC need to be phosphorylated (78). If these phosphorylation sites are mutated to nonphosphorylatable alanine residues (the mutant is called APC-A),
then Cdc20-mediated degradation of CycB is compromised, although the APC-A cells are still viable. We assume that APC-A has a constant activity that is 10% of the maximum activity of the normally
phosphorylated form of APC in conjunction with Cdc20. Furthermore, we assume that APC-A has full activity in conjunction with Cdh1, in accord with the evidence (78). In simulations (Fig. 7 A), APC-A
cells are viable and large. Because these mutant cells are delayed in exit from mitosis, the period of the limit cycle oscillations beyond the SNIPER bifurcation is considerably longer than in
wild-type cells. Hence, they cycle in the “oscillator” regime even at MDT > 150 min.
One-parameter bifurcation diagrams of budding yeast mutants defective in cyclin degradation. (A) APC-A ([APCP] = 0.1 au, constant value), (B) APC-A cdh1Δ ([APCP] = 0.1 au, k[ah1p] = k[ah1pp] = 0), (C
) clb2Δ ...
Double mutant cells, APC-A cdh1Δ, are lethal at fast growth rates but partially viable at slow growth rates (30). Our bifurcation diagram (Fig. 7 B) shows a truncated oscillatory regime ending at a
cyclic fold bifurcation at cell mass = 3.6. Simulations show that at MDT = 150 min cells stay within the small oscillatory regime, but faster growing cells (MDT = 120 min) grow out of the oscillatory
regime and get stuck in mitosis. Mutations of APC core proteins also show growth rate-dependent viability, e.g., apc10-22 is viable in galactose (slow growth rate) but inviable in glucose (fast
growth rate) (79).
The same dependence of viability on growth conditions was reported for CLB2dbΔ clb5Δ mutant cells (CycB stabilized, CycA absent) (30,80), and is illustrated in our bifurcation diagram (Fig. 7 D). In
addition to these mutants, which are defective in cyclin degradation, Cross (30) found that the double mutant clb2Δ cdh1Δ also shows growth rate-dependent viability. In our model these cells are
viable at MDT = 200 min, but lethal at MDT = 120 min (Fig. 7 C).
All of these mutations interfere with the negative feedback loop of CycB degradation. Weak negative feedback creates long-period oscillations that are stable attractors only at relatively small cell
mass; at large mass the activity of CycB-dependent kinase is so strong that the mutant cells arrest in mitosis. Fast growing cells cannot find a period of oscillation that balances their MDT, so they
overgrow the oscillatory region and get stuck in mitosis. These results suggest that other mutants affecting the negative feedback loop should be reinvestigated to see if viability depends on growth
rate (for example, APC-A sic1Δ and cdc20^ts pds1Δ).
Cells that show this sensitivity to growth rate are also likely to be sensitive to random noise in the control system. Using a model similar to ours, Battogtokh and Tyson (34) showed that, for
control systems operating close to a bifurcation to the stable M-like steady state, cells might get stuck in mitosis after a few cycles if a little noise is added to the system. This effect would
show up as partial viability of a clone at intermediate growth rates.
Incorporation of the morphogenetic checkpoint
In modeling the budding yeast cell cycle so far, we have assumed that the G2 module of Cdk phosphorylation (module 5 in Fig. 1) plays no role during normal cell proliferation (81), but recently this
view was challenged by Kellogg (82). In any event, all agree that the G2 module is necessary for the “morphogenesis checkpoint” in budding yeast, which arrests a cell in G2 if the cell is unable to
produce a bud (81). It is a simple job to “turn on” module 5 in our generic version of the budding yeast cell cycle and to reproduce most of the results in Ciliberto et al. (83); see Supplementary
Material, Fig. S4.
Mammalian cells
Many groups have modeled various aspects of the molecular machinery controlling mammalian cell cycles (22,26,84,85), including us (41). In this article, we insert parameter values from Novak and
Tyson (41) into our generic model to simulate a “generic mammalian cell” (Fig. 8). As expected, the bifurcation diagram of the mammalian cell (Fig. 8 B) is very similar to that of the budding yeast cell
(there is no G2 module in either model). This yeast-like proliferation is observed in mammalian cells in early development and in malignant transformation, when the cell's main goal is rapid proliferation.
Analysis of a mammalian cell cycle model. Numerical simulations: (A) normal cell (without G2 module), (C) cycDΔ (CycD^0 = 0), (D) cycDΔ cycEΔ (CycD^0 = 0, k[sep] = k[sepp] = 0), (E) normal cell (with
G2 ...
It has recently been discovered that mouse embryos lacking all forms of CycD (86), both forms of CycE (87), or both Cdk4 and Cdk6 (88) can develop until late stages of
embryogenesis and die from causes unrelated to the core cell cycle machinery. Mice lacking Cdk2 are viable (89), and mouse embryo fibroblasts from any of these mutants proliferate normally. Our model
is expected to reproduce these results. Indeed, simulations of CycE-deleted cells show almost no defect in proliferation, with a cell division mass 1.2 times that of wild-type cells (Supplementary Material,
Fig. S5 C). The absence of CycD has a greater effect on the system, creating cycles with a division mass 3.6 times wild-type (Fig. 8 C). If we eliminate both CycD and CycE, we find that cells leave
G1 phase at a mass equal to 5 times wild-type division mass (Fig. 8 D), which might be lethal for cells. These results are related to the corresponding experiments in budding yeast, where cln3^−
(CycD) and cln1^− cln2^− (CycE) mutants are viable but larger than wild-type (90), whereas the combined mutation is lethal (91).
From Chow et al. (92) we know that, although phosphorylation of Cdk2 (in complexes with CycE or CycA) plays no major role in unperturbed proliferation of HeLa cells, phosphorylation of Cdk1/CycB by
Wee1 plays a role in normal cell cycling. These reactions (module 5 in Fig. 1) are easily added to the model, as we did in the previous section on budding yeast. For the parameter values chosen, the
bifurcation diagram (Fig. 8 F) exhibits stable G1 and G2 steady states. The cell cycle trajectories in Fig. 8, E and F, are computed for cells proliferating at MDT = 24 h, which operate in the
“oscillator” region of the size homeostasis curve (Fig. 6). More slowly proliferating cells (MDT = 48 h) pause in the stable G1 state until they grow large enough to surpass the SNIPER bifurcation at
cell mass ~1. At all growth rates, there is a transient G2 state on the trajectory (the flattened regions of the red and blue curves at [actCycB] ~ 0.01–0.1).
With the G2-regulatory module in place, our model is now set up for serious consideration of the major checkpoint controls in mammalian cells: 1), restriction point control, by which cyclin D and
retinoblastoma protein regulate the activity of transcription factor E; 2), the DNA-damage checkpoint in G1, which upregulates the production of CKI; 3), the unreplicated-DNA checkpoint in G2, which
activates Wee1 and inhibits Cdc25; and 4), the chromosome misalignment checkpoint in M phase, which silences Cdc20. Building appropriate modules for these checkpoints and wiring them into the generic
cell cycle engine will be topics for future publications and will provide a basis for modeling the hallmarks of cancer (93).
Discussion

We propose a protein interaction network for eukaryotic cell cycle regulation that 1), includes most of the important regulatory proteins found in all eukaryotes, and 2), can be parameterized to
yield accurate models of a variety of specific organisms (budding yeast, fission yeast, frog eggs, and mammalian cells). The model is built in modular fashion: there are four
synthesis-and-degradation modules (“4, 8, 10, 13”), three stoichiometric binding-and-inhibition modules (“6, 9, 12”), three transcription factor modules (“3, 7, 11”), and three modules with multiple
activation-and-inhibition steps (“1, 2, 5”). This modularity helps us craft models for specific organisms (where some modules are more important than others) and to extend models with new
modules embodying the signaling pathways that impinge on the underlying cell cycle engine.
To describe the differences in regulatory networks in yeasts, frog eggs, and mammalian cells, we subdivided the generic wiring diagram (Fig. 1) into 13 small modules. From a different point of view (
36,37) we might lump some of these modules into larger blocks: bistable switches and negative feedback oscillators. One bistable switch creates a stable G1 state and controls the transition from G1
to S phase. It is a redundant switch, created by interactions between B-type cyclins and their G1 antagonists: CKIs (stoichiometric inhibitors) and APC/Cdh1 (proteolytic machinery). Either CKI or
Cdh1 can be knocked out genetically, and the switch may still be functional to some extent. A second bistable switch creates a stable G2 state and controls the transitions from G2 to M phase. It is
also a redundant switch, created by double-negative feedback between Cdk/CycB and Wee1 and positive feedback between Cdk/CycB and Cdc25. A negative feedback loop, set up by the interactions among Cdk
/CycB, APC/Cdc20, and Cdc14 phosphatase, controls exit from mitosis. A second negative feedback loop, between CycA and its transcription factor, plays a crucial role in endoreplication. These
regulatory loops are responsible for the characteristic bifurcations that (as our analysis shows) control cell cycle progression in normal cells and misprogression in mutant cells.
The many different control loops in the “generic” model can be mixed and matched to create explicit models of specific organisms and mutants. In this sense, there is no “ideal” or “simplest” model of
the cell cycle. Each organism has its own idiosyncratic properties of cell growth and division, depending on which modules are in operation, which depends ultimately on the genetic makeup of the
organism. Lethal mutations push the organism into a region of parameter space where the control system is no longer viable.
To deepen our understanding of the similarities and differences in cell cycle regulation in different types of cells, we analyzed our models of specific organisms and mutants with bifurcation
diagrams. To show how cell growth drives transitions between cell cycle phases (G1/S/G2/M), we employ one-parameter bifurcation diagrams, where stable steady states correspond to available arrest
states of the cell cycle (late G1, late G2, metaphase) and saddle-node and SNIPER bifurcation points identify critical cell sizes for leaving an arrest state and proceeding to the next phase of the
cell cycle. In this view, cell cycle “checkpoints” (also called “surveillance” mechanisms) (4,5) respond to potential problems in cell cycle progression (DNA damage, delayed replication, spindle
defects) by stabilizing an arrest state, i.e., by putting off the bifurcation to much larger size than normal (18,37,40,84,94).
The most important type of bifurcation, we believe, is a “SNIPER” bifurcation, by which a stable steady state (G1 or G2) gives rise to a limit cycle solution that drives the cell into mitosis and
then back to G1 phase. At the SNIPER bifurcation, the period of the limit cycle oscillations is initially infinite but drops rapidly as the cell grows larger. SNIPER bifurcations are robust
properties of nonlinear control systems with both positive and negative feedback. Not only are they commonly observed in one-parameter bifurcation diagrams of the Cdk network, but they persist over
large ranges of parameter variations, as is evident from our two-parameter bifurcation diagrams. For example, in Figs. 3 B and 4 B, SNIPER bifurcations are observed over the entire range of gene
expression for wee1 and cdc13 in fission yeast. The same is true for SIC1 gene expression in budding yeast (Supplementary Material, Fig. S3 B), but not so for CDC20 and CDC14 genes (Fig. S3, C and D
). In the latter cases, the SNIPER bifurcation is lost for low levels of expression of these essential (“cdc”) genes, and the mutant cells become arrested in late mitotic stages, as observed.
Although SNIPER bifurcations are often associated with robust cell cycling in our models, they are not necessary for balanced growth and division, as is evident in our simulation of cdh1Δ mutants of
budding yeast (Fig. 5 C and Supplementary Material, Fig. S3 A), where the stable oscillations can be traced back to a subcritical Hopf bifurcation.
The SNIPER bifurcation is very effective in achieving a balance between progression through the cell cycle (interdivision time (IDT)) and overall cell growth (mass doubling time (MDT)). Cell size
homeostasis means that IDT = MDT. In Fig. 6 we show that cell size homeostasis is a natural consequence of the eukaryotic cell cycle regulatory system, and that it can be achieved in two dramatically
different ways: by a “sizer” mechanism (characteristic of slowly growing cells) and an “oscillator” mechanism (employed by rapidly growing cells). In the sizer mechanism, slowly growing cells are
“captured” by a stable steady state, either a G1-like steady state (as in budding yeast) or a G2-like steady state (as in fission yeast). To progress further in the cell cycle, these sizer-controlled
cells must grow large enough to surpass the critical size at the SNIPER bifurcation. In the oscillator mechanism, rapidly growing cells persist in the limit cycle regime (with cell mass always
greater than the critical size at the SNIPER bifurcation), finding a specific combination of average size and average limit-cycle period such that IDT = MDT. In the oscillator regime, cells are
unable to arrest in G1 or G2 phase because they are too large. To arrest, they must undergo one or more divisions, without intervening mass doubling, so that they become small enough to be caught by
a stable steady state, or the SNIPER bifurcation point must be shifted to a larger size (by a surveillance mechanism), to arrest the cells in G1 or G2.
One-parameter bifurcation diagrams succinctly capture the dependence of the cell cycle engine (Cdk/CycB activity) on cell growth and division (cell mass changes). By superimposing cell cycle
trajectories on the one-parameter bifurcation diagram, we have shown how SNIPER bifurcations orchestrate the balance between cell growth and progression through the chromosome replication cycle. In a
two-parameter bifurcation diagram, we suppress the display of Cdk/CycB activity (i.e., the state of the engine) and use the second dimension to display a genetic characteristic of the control system
(i.e., the level of expression of a gene, from zero, to normal, to overexpression). On the two-parameter diagram we see how the orchestrating SNIPER bifurcations change in response to mutations, and
consequently how the phenotype of the organism (viability/inviability and cell size) depends on its genotype. The two-parameter bifurcation diagram can be used not only to obtain an overview of known
phenotypes but also to predict potentially unusual phenotypes of cells with intermediate levels of gene expression.
Our model is freely available to interested users in three forms. From the web site (69) one can download .ode and .set files for use with the free software XPP-AUT. From an .ode file one can easily
generate FORTRAN or C++ subroutines, or port the model to Matlab or Mathematica. Secondly, one can download an SBML version of the model from the same web site for use with any software that reads
this standard format. Thirdly, we have introduced the model and all the mutant scenarios discussed in this article into JigCell, our problem-solving environment for biological network modeling (95–97
). The parameter sets in the JigCell version of budding yeast and fission yeast are slightly different from the parameter sets presented in this article. The revised parameter values give better fits
to the phenotypic details of yeast mutants. JigCell is especially suited to this sort of parameter twiddling to optimize the fit of a model to experimental details.
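For instance, the SBML export can be inspected with any SBML-aware tool; a minimal sketch using the python-libsbml bindings (the file name below is a placeholder, not the actual name of the download) would be:

```python
# Minimal sketch: list the species and parameters of the SBML export of the model.
# Requires the python-libsbml package; "generic_cell_cycle.xml" is a placeholder name.
import libsbml

doc = libsbml.readSBML("generic_cell_cycle.xml")
if doc.getNumErrors() > 0:
    doc.printErrors()
model = doc.getModel()
print(model.getNumSpecies(), "species;", model.getNumParameters(), "parameters")
for i in range(model.getNumSpecies()):
    species = model.getSpecies(i)
    print(species.getId(), species.getInitialConcentration())
```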
An online supplement to this article can be found by visiting BJ Online at http://www.biophysj.org.
Acknowledgments

We thank Jason Zwolak for help with the Supplementary Material and Akos Sveiczer for useful discussions.

This research was supported by grants from the Defense Advanced Research Projects Agency (AFRL F30602-02-0572), the James S. McDonnell Foundation (21002050), and the European Commission (COMBIO, LSHG-CT-503568). A.C-N. is a Bolyai fellow of the Hungarian Academy of Sciences.
Appendix

A molecular regulatory network, such as Fig. 1, is a set of chemical and physical processes taking place within a living cell. The temporal changes driven by these processes can be described, at
least in a first approximation, by a set of ordinary differential equations derived according to the standard principles of biophysical chemistry (36). Each differential equation describes the rate
of change of a single time-varying component of the network (gene, protein, or metabolite—the state variables of the network) in terms of fundamental processes like transcription, translation,
degradation, phosphorylation, dephosphorylation, binding, and dissociation. The rate of each step is determined by the current values of the state variables and by numerical values assigned to rate
constants, binding constants, Michaelis constants, etc. (collectively referred to as parameters).
Given specific values for the parameters and initial conditions (state variables at time = 0), the differential equations determine how the regulatory network will evolve in time. The direction and
speed of this change can be represented by a vector field in a multidimensional state space (Fig. 9 A). A numerical simulation moves through state space always tangent to the vector field. Steady
states are points in state space where the vector field is zero. If the vector field close to a steady state points back toward the steady state in all directions (Fig. 9 B), then the steady state is
(locally) stable; if the vector field points away from the steady state in any direction (near the open circles in Fig. 9, A and C), the steady state is unstable. If the vector field supports a
closed loop (Fig. 9 C), then the system oscillates on this periodic orbit, also called a limit cycle. The stability of a limit cycle is defined analogously to steady states. Stable steady states and
stable limit cycles are called attractors of the dynamical system. To every attractor is associated a domain of attraction, consisting of all points of state space from which the system will go to
that attractor.
As parameters of the system are changed, the number and stability of steady states and periodic orbits may change, e.g., going from Fig. 9, A to B, or from Fig. 9, B to C. Parameter values where such
changes occur are called bifurcation points (98,99). At a bifurcation point, the system can gain or lose a stable attractor, or undergo an exchange of stabilities. In the case of the cell cycle, we
associate different cell cycle phases to different attractors of the Cdk-regulatory system, and transitions between cell cycle phases to bifurcations of the dynamical system (37).
To visualize bifurcations graphically, one plots on the ordinate a representative variable of the dynamical system, as an indicator of the system's state, and on the abscissa, a particular parameter
whose changes can induce the bifurcation (Fig. 9 D). It is fruitful to think of changes to the parameter as a signal imposed on the control system, and the stable attractors (steady states and
oscillations) as the response of the network (100). For the cell cycle control system, the clear choice of dynamic variable is the activity of Cdk1/CycB (the activity of this complex is small in G1,
modest in S/G2, and large in M phase). As bifurcation parameter, we choose cell mass because we consider growth to be the primary driving force for progression through the cell cycle. For each fixed
value of cell mass, we compute all steady-state and oscillatory solutions (stable and unstable) of the Cdk-regulatory network, and we plot these solutions on a one-parameter bifurcation diagram (Fig.
9 D).
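A brute-force version of this procedure, applied to a toy one-variable system rather than the full Cdk network, is sketched below: at each value of the signal parameter we locate all steady states as real roots of the right-hand side and classify their stability by the sign of the derivative. (Dedicated bifurcation software such as AUTO, bundled with XPP-AUT, does this by numerical continuation and also follows limit cycles, which a root count cannot see.)

```python
# Sketch of a one-parameter bifurcation scan for dx/dt = f(x; p) = p + 3*x - x**3.
# For each p, steady states are the real roots of f; a root x* is stable if f'(x*) < 0.
import numpy as np

def steady_states(p, tol=1e-9):
    roots = np.roots([-1.0, 0.0, 3.0, p])               # coefficients of -x^3 + 3x + p
    real = roots[np.abs(roots.imag) < tol].real
    return [(x, 3.0 - 3.0 * x**2 < 0.0) for x in real]  # (value, is_stable)

for p in np.linspace(-3.0, 3.0, 13):
    branches = ", ".join(f"{x:+.2f}{'s' if stable else 'u'}"
                         for x, stable in sorted(steady_states(p)))
    print(f"p = {p:+.2f}: {branches}")
# Between the two saddle-node points (p = -2 and p = +2) the scan reports a
# stable/unstable/stable triple, i.e., the bistable window sketched in Fig. 9 D.
```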
Following standard conventions, we plot steady-state solutions by lines: solid for stable steady states and dashed for unstable. For limit cycles, we plot two loci: one for the maximum and one for
the minimum value of Cdk1/CycB activity on the periodic solution, denoting stable limit cycles with solid circles and unstable with open circles. A locus of steady states can fold back on itself at a
saddle-node (SN) bifurcation point (where a stable steady state—a node—and an unstable steady state—a saddle—come together and annihilate one another). Between the two SN bifurcation points in Fig. 9
D, the control system is bistable (coexistence of two stable steady states, which we might call off and on). To the left and right of SN2 in Fig. 9 D, the state space looks like Fig. 9, A and B,
respectively. A locus of steady-state solutions can also lose stability at a Hopf bifurcation (HB) point, from which there arises a family of small amplitude, stable limit cycle solutions (Fig. 9 D).
A Hopf bifurcation converts state space Fig. 9 B into Fig. 9 C. For experimental verification of these dynamical properties of the cell cycle control system in frog eggs, see recent articles by Sha
et al. (94) and Pomerening et al. (63,101).
Positive feedback is often associated with bistability of a control system. For example, if X activates Y and Y activates X, then the system may persist in a stable “off” state (X low and Y low) or
in a stable “on” state (X high and Y high). Similarly, if X inhibits Y and Y inhibits X (double-negative feedback), the system may also persist in either of two stable steady states (X high and Y
low, or X low and Y high). Typically, bistability is observed over a range of parameter values (k[SN1] < k < k[SN2]). Negative feedback (X activates Y, which activates Z, which inhibits X) may lead
to sustained oscillations of X, Y, and Z, for appropriate choices of reaction kinetics and rate constants. These oscillations typically arise by a Hopf bifurcation, with a stable steady state for k <
k[HB] giving way to stable oscillations for k > k[HB].
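A minimal numerical illustration of the first point (again a toy with assumed constants, not part of the cell-cycle model): two species that activate each other through sigmoidal (Hill-type) rate laws settle into either a mutual "off" state or a mutual "on" state depending only on where they start.

```python
# Toy mutual-activation loop: dx/dt = h(y) - x, dy/dt = h(x) - y with a Hill function h.
# With these assumed constants the system is bistable: small initial conditions relax
# to the "off" state near (0, 0); large ones relax to the "on" state near (0.8, 0.8).
import numpy as np
from scipy.integrate import solve_ivp

J = 0.4
def h(u):
    return u**2 / (J**2 + u**2)

def rhs(t, y):
    x, v = y
    return [h(v) - x, h(x) - v]

for start in ([0.05, 0.05], [1.0, 1.0]):
    sol = solve_ivp(rhs, (0.0, 100.0), start, max_step=0.5)
    print(f"start {start} -> steady state ({sol.y[0, -1]:.3f}, {sol.y[1, -1]:.3f})")
```

Which attractor the system lands in is decided by the unstable intermediate steady state between them, the same logic that underlies the bistable G1 and G2 switches described above.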
In Table 2 we provide a catalog of common codimension-one bifurcations (bifurcations that can be located, in principle, by changing a single parameter of the system). From a one-parameter bifurcation
diagram, properly interpreted, one can reconstruct the vector field (see lines A, B, and C in Fig. 9 D), which is the mathematical equivalent of the molecular wiring diagram. There are only a small
number of common codimension-one bifurcations (see Table 2); hence, there are only a few fundamental signal-response relationships from which a cell must accomplish all the complex signal processing
it requires. Of special interest to this article is the SNIPER bifurcation, which is a special type of SN bifurcation point: after annihilation of the saddle and node, the remaining steady state is
unstable and surrounded by a stable limit cycle of large amplitude. At the SN bifurcation point, the period of the limit cycle is infinite (SNIPER = saddle-node infinite-period). As the bifurcation
parameter pulls away from the SNIPER point, the period of the limit cycle decreases precipitously (see, e.g., Fig. 6).
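The precipitous drop in period has a simple generic explanation. Near a SNIPER point the slow passage through the "bottleneck" left behind by the coalescing saddle and node dominates the period, and for the local normal form of a saddle-node on a limit cycle one finds (a standard result, not specific to the cell-cycle model):

```latex
\dot{x} = \mu + x^{2}, \qquad
T \;\approx\; \int_{-\infty}^{\infty} \frac{dx}{\mu + x^{2}} \;=\; \frac{\pi}{\sqrt{\mu}},
\qquad \mu \propto (p - p_{\mathrm{SNIPER}}),
```

so the period diverges like (p − p_SNIPER)^(−1/2) at the bifurcation and falls off rapidly as the parameter moves away from it, exactly the behavior plotted in Fig. 6.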
TABLE 2
Codimension-one bifurcations

Full name                     Abbreviation   From/to                          To/from
Saddle-node                   SN             3 steady states                  1 steady state
Supercritical Hopf            HBsup          1 stable steady state            Unstable steady state + small amplitude, stable limit cycle
Subcritical Hopf              HBsub          1 unstable steady state          Stable steady state + small amplitude, unstable limit cycle
Cyclic fold                   CF             No oscillatory solutions         1 stable oscillation + 1 unstable oscillation
Saddle-node infinite-period   SNIPER         3 steady states                  Unstable steady state + large amplitude oscillation
Saddle-loop                   SL             Unstable steady state (saddle)   Unstable steady state + large amplitude oscillation
To continue this process of abstraction, we go from a one-parameter bifurcation diagram to a two-parameter bifurcation diagram (Fig. 10). As the two parameters change simultaneously, we follow loci
of codimension-one bifurcation points in the two-parameter plane. For example, the one-parameter diagram in Fig. 9 D corresponds to a value of the second parameter at level 6 in Fig. 10. As the value
of the second parameter increases, we track SN1 and SN2 along fold lines in the two-parameter plane. Between these two fold lines the control system is bistable. We also track the HB point in the
two-parameter diagram for increasing values of the second parameter. We find that, at characteristic points in the two-parameter plane, marked by heavy “dots” in Fig. 10, there is a change in some
qualitative feature of the codimension-one bifurcations. Because two parameters must be adjusted simultaneously to locate these “dots”, they are called codimension-two bifurcation points. In Fig. 10
(and Table 2) we illustrate the three most common codimension-two bifurcations: degenerate Hopf (dHB), saddle-node-loop (SNL), and Takens-Bogdanov (TB). From a two-parameter bifurcation diagram,
properly interpreted, one can reconstruct a sequence of one-parameter bifurcation diagrams (see lines 1–6 in Fig. 10), which are the qualitatively different signal-response characteristics of the
control system. There are only a small number of generic codimension-two bifurcations; hence, there are limited ways by which one signal-response curve can morph into another. These constraints place
subtle restrictions on the genetic basis of cell physiology.
In the one-parameter bifurcation diagram, we choose as the primary bifurcation parameter some physiologically relevant quantity (the “signal”) that is inducing a change in behavior (the “response”)
of the molecular regulatory system. In the two-parameter diagram, we propose to use the second parameter as an indicator of a genetic characteristic of the cell (the level of expression of a
particular gene, above and below the wild-type value) with bearing on the signal-response curve. In this format, the two-parameter bifurcation diagram provides a highly condensed summary of the
dynamical links from a controlling gene to its physiological outcome (its phenotypes). The two-parameter diagram captures the sequence of dynamically distinct changes that must occur in carrying
the phenotype of a wild-type cell to the observed phenotypes of deletion mutants (at one extreme) and overexpression mutants (at the other extreme). In between, there may be novel, physiologically
distinct phenotypes that could not be anticipated by intuition alone. Examples of this analysis are provided in Figs. 3 and 4, in the Supplementary Material, and on our web site.
For alternative explanations of bifurcation diagrams, one may consult the appendix to Borisuk and Tyson (33) or the textbooks by Strogatz (99) or Kaplan and Glass (102).
References

Rupes, I. 2002. Checking cell size in yeast. Trends Genet. 18:479–485. [PubMed]
Sveiczer, A., B. Novak, and J. M. Mitchison. 1996. The size control of fission yeast revisited. J. Cell Sci. 109:2947–2957. [PubMed]
Nurse, P. 1994. Ordering S phase and M phase in the cell cycle. Cell. 79:547–550. [PubMed]
Hartwell, L. H., and T. A. Weinert. 1989. Checkpoints: controls that ensure the order of cell cycle events. Science. 246:629–634. [PubMed]
Nasmyth, K. 1996. Viewpoint: putting the cell cycle in order. Science. 274:1643–1645. [PubMed]
Tyson, J. J. 1985. The coordination of cell growth and division—intentional or incidental. Bioessays. 2:72–77.
Kastan, M. B., and J. Bartek. 2004. Cell-cycle checkpoints and cancer. Nature. 432:316–323. [PubMed]
Cross, F. R., V. Archambault, M. Miller, and M. Klovstad. 2002. Testing a mathematical model for the yeast cell cycle. Mol. Biol. Cell. 13:52–70. [PMC free article] [PubMed]
Nurse, P. 1992. Eukaryotic cell-cycle control. Biochem. Soc. Trans. 20:239–242. [PubMed]
Nurse, P. 1990. Universal control mechanism regulating onset of M-phase. Nature. 344:503–508. [PubMed]
Bray, D. 1995. Protein molecules as computational elements in living cells. Nature. 376:307–312. [PubMed]
Aguda, B. D. 1999. A quantitative analysis of the kinetics of the G2 DNA damage checkpoint system. Proc. Natl. Acad. Sci. USA. 96:11352–11357. [PMC free article] [PubMed]
Aguda, B. D. 1999. Instabilities in phosphorylation-dephosphorylation cascades and cell cycle checkpoints. Oncogene. 18:2846–2851. [PubMed]
Chen, K. C., A. Csikasz-Nagy, B. Gyorffy, J. Val, B. Novak, and J. J. Tyson. 2000. Kinetic analysis of a molecular model of the budding yeast cell cycle. Mol. Biol. Cell. 11:369–391. [PMC free
article] [PubMed]
Chen, K. C., L. Calzone, A. Csikasz-Nagy, F. R. Cross, B. Novak, and J. J. Tyson. 2004. Integrative analysis of cell cycle control in budding yeast. Mol. Biol. Cell. 15:3841–3862. [PMC free article]
Goldbeter, A. 1991. A minimal cascade model for the mitotic oscillator involving cyclin and cdc2 kinase. Proc. Natl. Acad. Sci. USA. 88:9107–9111. [PMC free article] [PubMed]
Gonze, D., and A. Goldbeter. 2001. A model for a network of phosphorylation-dephosphorylation cycles displaying the dynamics of dominoes and clocks. J. Theor. Biol. 210:167–186. [PubMed]
Novak, B., and J. J. Tyson. 1993. Numerical analysis of a comprehensive model of M-phase control in Xenopus oocyte extracts and intact embryos. J. Cell Sci. 106:1153–1168. [PubMed]
Novak, B., and J. J. Tyson. 1995. Quantitative analysis of a molecular model of mitotic control in fission yeast. J. Theor. Biol. 173:283–305.
Obeyesekere, M. N., J. R. Herbert, and S. O. Zimmerman. 1995. A model of the G1 phase of the cell cycle incorporating cyclinE/cdk2 complex and retinoblastoma protein. Oncogene. 11:1199–1205. [PubMed]
Obeyesekere, M. N., E. Tecarro, and G. Lozano. 2004. Model predictions of MDM2 mediated cell regulation. Cell Cycle. 3:655–661. [PubMed]
Qu, Z. L., J. N. Weiss, and W. R. MacLellan. 2003. Regulation of the mammalian cell cycle: a model of the G(1)-to-S transition. Am. J. Physiol. Cell Physiol. 284:C349–C364. [PubMed]
Qu, Z. L., J. N. Weiss, and W. R. MacLellan. 2004. Coordination of cell growth and cell division: a mathematical modeling study. J. Cell Sci. 117:4199–4207. [PubMed]
Steuer, R. 2004. Effects of stochasticity in models of the cell cycle: from quantized cycle times to noise-induced oscillations. J. Theor. Biol. 228:293–301. [PubMed]
Sveiczer, A., A. Csikasz-Nagy, B. Gyorffy, J. J. Tyson, and B. Novak. 2000. Modeling the fission yeast cell cycle: quantized cycle times in wee1^− cdc25Δ mutant cells. Proc. Natl. Acad. Sci. USA. 97
:7865–7870. [PMC free article] [PubMed]
Swat, M., A. Kel, and H. Herzel. 2004. Bifurcation analysis of the regulatory modules of the mammalian G(1)/S transition. Bioinformatics. 20:1506–1511. [PubMed]
Thron, C. D. 1991. Mathematical analysis of a model of the mitotic clock. Science. 254:122–123. [PubMed]
Thron, C. D. 1997. Bistable biochemical switching and the control of the events of the cell cycle. Oncogene. 15:317–325. [PubMed]
Tyson, J. J. 1991. Modeling the cell division cycle: cdc2 and cyclin interactions. Proc. Natl. Acad. Sci. USA. 88:7328–7332. [PMC free article] [PubMed]
Cross, F. R. 2003. Two redundant oscillatory mechanisms in the yeast cell cycle. Dev. Cell. 4:741–752. [PubMed]
Marlovits, G., C. J. Tyson, B. Novak, and J. J. Tyson. 1998. Modeling M-phase control in Xenopus oocyte extracts: the surveillance mechanism for unreplicated DNA. Biophys. Chem. 72:169–184. [PubMed]
Zwolak, J. W., J. J. Tyson, and L. T. Watson. 2005. Globally optimized parameters for a model of mitotic control in frog egg extracts. IEE Proc. Syst. Biol. 152:81–92. [PubMed]
Borisuk, M. T., and J. J. Tyson. 1998. Bifurcation analysis of a model of mitotic control in frog eggs. J. Theor. Biol. 195:69–85. [PubMed]
Battogtokh, D., and J. J. Tyson. 2004. Bifurcation analysis of a model of the budding yeast cell cycle. Chaos. 14:653–661. [PubMed]
Novak, B., Z. Pataki, A. Ciliberto, and J. J. Tyson. 2001. Mathematical model of the cell division cycle of fission yeast. Chaos. 11:277–286. [PubMed]
Tyson, J. J., K. Chen, and B. Novak. 2001. Network dynamics and cell physiology. Nat. Rev. Mol. Cell Biol. 2:908–916. [PubMed]
Tyson, J. J., A. Csikasz-Nagy, and B. Novak. 2002. The dynamics of cell cycle regulation. Bioessays. 24:1095–1109. [PubMed]
Qu, Z. L., W. R. MacLellan, and J. N. Weiss. 2003. Dynamics of the cell cycle: checkpoints, sizers, and timers. Biophys. J. 85:3600–3611. [PMC free article] [PubMed]
Novak, B., and J. J. Tyson. 1997. Modeling the control of DNA replication in fission yeast. Proc. Natl. Acad. Sci. USA. 94:9147–9152. [PMC free article] [PubMed]
Novak, B., A. Csikasz-Nagy, B. Gyorffy, K. Chen, and J. J. Tyson. 1998. Mathematical model of the fission yeast cell cycle with checkpoint controls at the G1/S, G2/M and metaphase/anaphase
transitions. Biophys. Chem. 72:185–200. [PubMed]
Novak, B., and J. J. Tyson. 2004. A model for restriction point control of the mammalian cell cycle. J. Theor. Biol. 230:563–579. [PubMed]
Giot, L., J. S. Bader, C. Brouwer, A. Chaudhuri, B. Kuang, Y. Li, Y. L. Hao, C. E. Ooi, B. Godwin, E. Vitols, G. Vijayadamodar, P. Pochart, et al. 2003. A protein interaction map of Drosophila
melanogaster. Science. 302:1727–1736. [PubMed]
Kohn, K. W. 1999. Molecular interaction map of the mammalian cell cycle control and DNA repair systems. Mol. Biol. Cell. 10:2703–2734. [PMC free article] [PubMed]
Schwikowski, B., P. Uetz, and S. Fields. 2000. A network of protein-protein interactions in yeast. Nat. Biotechnol. 18:1257–1261. [PubMed]
Uetz, P., L. Giot, G. Cagney, T. A. Mansfield, R. S. Judson, J. R. Knight, D. Lockshon, V. Narayan, M. Srinivasan, P. Pochart, A. Qureshi-Emili, Y. Li, B. Godwin, et al. 2000. A comprehensive
analysis of protein-protein interactions in Saccharomyces cerevisiae. Nature. 403:623–627. [PubMed]
Uetz, P., and M. J. Pankratz. 2004. Protein interaction maps on the fly. Nat. Biotechnol. 22:43–44. [PubMed]
Fantes, P., and P. Nurse. 1977. Control of cell size at division in fission yeast by a growth-modulated size control over nuclear division. Exp. Cell Res. 107:377–386. [PubMed]
Johnston, G. C., C. W. Ehrhardt, A. Lorincz, and B. L. A. Carter. 1979. Regulation of cell size in the yeast Saccharomyces cerevisiae. J. Bacteriol. 137:1–5. [PMC free article] [PubMed]
Jorgensen, P., and M. Tyers. 2004. How cells coordinate growth and division. Curr. Biol. 14:R1014–R1027. [PubMed]
Nurse, P. 1975. Genetic control of cell size at cell division in yeast. Nature. 256:547–551. [PubMed]
Dolznig, H., F. Grebien, T. Sauer, H. Beug, and E. W. Mullner. 2004. Evidence for a size-sensing mechanism in animal cells. Nat. Cell Biol. 6:899–905. [PubMed]
Killander, D., and A. Zetterberg. 1965. A quantitative cytochemical investigation of the relationship between cell mass and initiation of DNA synthesis in mouse fibroblast in vitro. Exp. Cell Res. 40
:12–20. [PubMed]
Zetterberg, A., and O. Larsson. 1995. Cell cycle progression and cell growth in mammalian cells: kinetic aspects of transition events. In Cell Cycle Control. C. Hutchison and D. M. Glover, editors. Oxford University Press, Oxford, UK. 206–227.
Baserga, R. 1984. Growth in size and cell DNA replication. Exp. Cell Res. 151:1–4. [PubMed]
Conlon, I., and M. Raff. 2003. Differences in the way a mammalian cell and yeast cells coordinate cell growth and cell-cycle progression. J. Biol. 2:7. [PMC free article] [PubMed]
Murray, A. W., and M. W. Kirschner. 1989. Cyclin synthesis drives the early embryonic cell cycle. Nature. 339:275–280. [PubMed]
Solomon, M. J., M. Glotzer, T. H. Lee, M. Philippe, and M. W. Kirschner. 1990. Cyclin activation of p34^cdc2. Cell. 63:1013–1024. [PubMed]
Futcher, B. 1996. Cyclins and the wiring of the yeast cell cycle. Yeast. 12:1635–1646. [PubMed]
Yang, L., Z. Han, W. Robb Maclellan, J. N. Weiss, and Z. Qu. 2006. Linking cell division to cell growth in a spatiotemporal model of the cell cycle. J. Theor. Biol. In press. [PMC free article]
Moreno, S., and P. Nurse. 1994. Regulation of progression through the G1 phase of the cell cycle by the rum1^+ gene. Nature. 367:236–242. [PubMed]
Pomerening, J. R., S. Y. Kim, and J. E. Ferrell Jr. 2005. Systems-level dissection of the cell-cycle oscillator: bypassing positive feedback produces damped oscillations. Cell. 122:565–578. [PubMed]
Ayte, J., C. Schweitzer, P. Zarzov, P. Nurse, and J. A. DeCaprio. 2001. Feedback regulation of the MBF transcription factor by cyclin Cig2. Nat. Cell Biol. 3:1043–1050. [PubMed]
Benito, J., C. Martin-Castellanos, and S. Moreno. 1998. Regulation of the G1 phase of the cell cycle by periodic stabilization and degradation of the p25rum1 CDK inhibitor. EMBO J. 17:482–497. [PMC
free article] [PubMed]
Hayles, J., D. Fisher, A. Woollard, and P. Nurse. 1994. Temporal order of S phase and mitosis in fission yeast is determined by the state of the p34^cdc2 -mitotic B cyclin complex. Cell. 78:813–822.
Parisi, T., A. R. Beck, N. Rougier, T. McNeil, L. Lucian, Z. Werb, and B. Amati. 2003. Cyclins E1 and E2 are required for endoreplication in placental trophoblast giant cells. EMBO J. 22:4794–4803. [
PMC free article] [PubMed]
Nurse, P., P. Thuriaux, and K. Nasmyth. 1976. Genetic control of the cell division cycle in the fission yeast Schizosaccharomyces pombe. Mol. Gen. Genet. 146:167–178. [PubMed]
Tyson, J. J. 1983. Unstable activator models for size control of the cell cycle. J. Theor. Biol. 104:617–631. [PubMed]
Tyers, M. 2004. Cell cycle goes global. Curr. Opin. Cell Biol. 16:602–613. [PubMed]
Schwob, E., T. Bohm, M. D. Mendenhall, and K. Nasmyth. 1994. The B-type cyclin kinase inhibitor p40^sic1 controls the G1 to S transition in S. cerevisiae. Cell. 79:233–244. [PubMed]
Visintin, R., S. Prinz, and A. Amon. 1997. CDC20 and CDH1: a family of substrate-specific activators of APC-dependent proteolysis. Science. 278:460–463. [PubMed]
Zachariae, W., M. Schwab, K. Nasmyth, and W. Seufert. 1998. Control of cyclin ubiquitination by CDK-regulated binding of Hct1 to the anaphase promoting complex. Science. 282:1721–1724. [PubMed]
Sethi, N., M. C. Monteagudo, D. Koshland, E. Hogan, and D. J. Burke. 1991. The CDC20 gene product of Saccharomyces cerevisiae, a beta-transducin homolog, is required for a subset of
microtubule-dependent cellular processes. Mol. Cell. Biol. 11:5592–5602. [PMC free article] [PubMed]
Fitzpatrick, P. J., J. H. Toyn, J. B. Millar, and L. H. Johnston. 1998. DNA replication is completed in Saccharomyces cerevisiae cells that lack functional Cdc14, a dual-specificity protein
phosphatase. Mol. Gen. Genet. 258:437–441. [PubMed]
Visintin, R., K. Craig, E. S. Hwang, S. Prinz, M. Tyers, and A. Amon. 1998. The phosphatase Cdc14 triggers mitotic exit by reversal of Cdk-dependent phosphorylation. Mol. Cell. 2:709–718. [PubMed]
Rudner, A., and A. Murray. 2000. Phosphorylation by Cdc28 activates the Cdc20-dependent activity of the anaphase promoting complex. J. Cell Biol. 149:1377–1390. [PMC free article] [PubMed]
Irniger, S., M. Baumer, and G. H. Braus. 2000. Glucose and Ras activity influence the ubiquitin ligases APC/C and SCF in Saccharomyces cerevisiae. Genetics. 154:1509–1521. [PMC free article] [PubMed]
Wasch, R., and F. Cross. 2002. APC-dependent proteolysis of the mitotic cyclin Clb2 is essential for mitotic exit. Nature. 418:556–562. [PubMed]
Lew, D. J. 2003. The morphogenesis checkpoint: how yeast cells watch their figures. Curr. Opin. Cell Biol. 15:648–653. [PubMed]
Kellogg, D. R. 2003. Wee1-dependent mechanisms required for coordination of cell growth and cell division. J. Cell Sci. 116:4883–4890. [PubMed]
Ciliberto, A., B. Novak, and J. J. Tyson. 2003. Mathematical model of the morphogenesis checkpoint in budding yeast. J. Cell Biol. 163:1243–1254. [PMC free article] [PubMed]
Aguda, B. D., and Y. Tang. 1999. The kinetic origins of the restriction point in the mammalian cell cycle. Cell Prolif. 32:321–335. [PubMed]
Obeyesekere, M. N., E. S. Knudsen, J. Y. Wang, and S. O. Zimmerman. 1997. A mathematical model of the regulation of the G1 phase of Rb+/+ and Rb−/− mouse embryonic fibroblasts and an osteosarcoma
cell line. Cell Prolif. 30:171–194. [PubMed]
Kozar, K., M. A. Ciemerych, V. I. Rebel, H. Shigematsu, A. Zagozdon, E. Sicinska, Y. Geng, Q. Y. Yu, S. Bhattacharya, R. T. Bronson, K. Akashi, and P. Sicinski. 2004. Mouse development and cell
proliferation in the absence of D-cyclins. Cell. 118:477–491. [PubMed]
Geng, Y., Q. Y. Yu, E. Sicinska, M. Das, J. E. Schneider, S. Bhattacharya, W. M. Rideout, R. T. Bronson, H. Gardner, and P. Sicinski. 2003. Cyclin E ablation in the mouse. Cell. 114:431–443. [PubMed]
Malumbres, M., R. Sotillo, D. Santamaria, J. Galan, A. Cerezo, S. Ortega, P. Dubus, and M. Barbacid. 2004. Mammalian cells cycle without the D-type cyclin-dependent kinases Cdk4 and Cdk6. Cell. 118
:493–504. [PubMed]
Ortega, S., I. Prieto, J. Odajima, A. Martin, P. Dubus, R. Sotillo, J. L. Barbero, M. Malumbres, and M. Barbacid. 2003. Cyclin-dependent kinase 2 is essential for meiosis but not for mitotic cell
division in mice. Nat. Genet. 35:25–31. [PubMed]
Dirick, L., T. Bohm, and K. Nasmyth. 1995. Roles and regulation of Cln/Cdc28 kinases at the start of the cell cycle of Saccharomyces cerevisiae. EMBO J. 14:4803–4813. [PMC free article] [PubMed]
Richardson, H. E., C. Wittenberg, F. Cross, and S. I. Reed. 1989. An essential G1 function for cyclin-like proteins in yeast. Cell. 59:1127–1133. [PubMed]
Chow, J. P. H., W. Y. Siu, H. T. B. Ho, K. H. T. Ma, C. C. Ho, and R. Y. C. Poon. 2003. Differential contribution of inhibitory phosphorylation of CDC2 and CDK2 for unperturbed cell cycle control and
DNA integrity checkpoints. J. Biol. Chem. 278:40815–40828. [PubMed]
Hanahan, D., and R. A. Weinberg. 2000. The hallmarks of cancer. Cell. 100:57–70. [PubMed]
Sha, W., J. Moore, K. Chen, A. D. Lassaletta, C.-S. Yi, J. J. Tyson, and J. C. Sible. 2003. Hysteresis drives cell-cycle transitions in Xenopus laevis egg extracts. Proc. Natl. Acad. Sci. USA. 100
:975–980. [PMC free article] [PubMed]
Allen, N. A., L. Calzone, K. C. Chen, A. Ciliberto, N. Ramakrishnan, C. A. Shaffer, J. C. Sible, J. J. Tyson, M. T. Vass, L. T. Watson, and J. W. Zwolak. 2003. Modeling regulatory networks at
Virginia Tech. OMICS. 7:285–299. [PubMed]
Vass, M., N. Allen, C. A. Shaffer, N. Ramakrishnan, L. T. Watson, and J. J. Tyson. 2004. The JigCell model builder and run manager. Bioinformatics. 20:3680–3681. [PubMed]
Kuznetsov, Y. A. 1995. Elements of Applied Bifurcation Theory. Springer-Verlag, New York.
Strogatz, S. H. 1994. Nonlinear Dynamics and Chaos. Addison-Wesley, Reading, MA.
Tyson, J. J., K. C. Chen, and B. Novak. 2003. Sniffers, buzzers, toggles, and blinkers: dynamics of regulatory and signaling pathways in the cell. Curr. Opin. Cell Biol. 15:221–231. [PubMed]
Pomerening, J. R., E. D. Sontag, and J. E. Ferrell Jr. 2003. Building a cell cycle oscillator: hysteresis and bistability in the activation of Cdc2. Nat. Cell Biol. 5:346–351. [PubMed]
Kaplan, D., and L. Glass. 1995. Understanding Nonlinear Dynamics, Chapter 5. Springer-Verlag, New York.
Articles from Biophysical Journal are provided here courtesy of The Biophysical Society
|
{"url":"http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1471857/?tool=pubmed","timestamp":"2014-04-19T02:19:15Z","content_type":null,"content_length":"216678","record_id":"<urn:uuid:7046bdd3-a5c3-4b53-b654-e07401c6aadd>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00623-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Benicia Geometry Tutor
...I have been a private tutor for elementary, junior high and high school students. I have tutored university students and was elected as the department tutor for Anatomy and Physiology at
California Lutheran University. My experiences have allowed me to become a successful tutor for students with different learning levels and styles.
14 Subjects: including geometry, reading, chemistry, biology
...I find that many students shy away from the core concepts in math and physics, preferring instead to learn only the specific problems they are assigned. This can result in the student becoming
confused when confronted with a new problem. For this reason I first strive to ensure that the student grasps the basic concepts, and I then illustrate the concepts with a variety of examples.
25 Subjects: including geometry, physics, calculus, statistics
...Another approach is to use clever acronyms or phrases as mnemonics tools. Of course, I do not impose one approach or the other on the student. I adapt to each student’s needs.
24 Subjects: including geometry, chemistry, physics, algebra 1
...I am now a 3rd year college student and have also played in the UC Davis concert band. I took private lessons when I was in middle and high school. I have been in honor band and marching band
all throughout middle and high school.
34 Subjects: including geometry, chemistry, writing, calculus
...We care for others by understanding their speech and writing; communicating our ideas clearly; and using grammar that prevents confusion. What a concept! This great insight has fueled my desire
to master and help others master English.
37 Subjects: including geometry, English, reading, writing
|
{"url":"http://www.purplemath.com/Benicia_geometry_tutors.php","timestamp":"2014-04-20T21:04:41Z","content_type":null,"content_length":"23636","record_id":"<urn:uuid:768640ce-83f8-466f-949f-45ce1e47f5d9>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00458-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Confidence Interval Formula
4 pages
Published by jay butt
Many of us are not familiar with the term confidence interval, so this article looks at where the term is used, what it means, and what it is used for.

The confidence interval is a term used in mathematics in the context of statistics.

A confidence interval is an interval estimate of a population parameter, and it is used to indicate the reliability of an estimate. Because it is computed from the observations in a particular sample, it is not the same for each sample.

If the experiment were repeated again and again, the stated confidence level gives the proportion of the resulting intervals that would contain the parameter of interest.
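As a rough illustration of the idea, here is a minimal Python sketch (not from the original article) that computes an approximate 95% confidence interval for a mean using the normal approximation; the sample data are made up for the example:

    import math

    def mean_confidence_interval(data, z=1.96):
        # approximate 95% CI for the mean, normal approximation (z = 1.96)
        n = len(data)
        mean = sum(data) / n
        var = sum((x - mean) ** 2 for x in data) / (n - 1)   # sample variance
        se = math.sqrt(var / n)                              # standard error of the mean
        return mean - z * se, mean + z * se

    sample = [4.1, 3.8, 4.4, 4.0, 3.9, 4.2, 4.3, 3.7]
    low, high = mean_confidence_interval(sample)
    print("95% CI for the mean: (%.2f, %.2f)" % (low, high))

Repeating the experiment would give a different interval each time; the 95% figure describes how often such intervals would cover the true mean.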
|
{"url":"http://www.calameo.com/books/0014421102312f788416b","timestamp":"2014-04-18T01:01:28Z","content_type":null,"content_length":"55303","record_id":"<urn:uuid:bf673109-ed94-4f13-b402-3271a3567e67>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00478-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Upper Bound Estimation for the Noise Current Spectrum of Digital Circuit Blocks
Alessandra Nardi^1 and Luca Daniel
(Professor Alberto L. Sangiovanni-Vincentelli)
Semiconductor Research Corporation
The complexity of the systems-on-a-chip design requires an aggressive re-use of intellectual property (IP) circuit blocks. However, IP blocks can be safely re-used only if they do not affect other
sensitive components. The switching activity of CMOS digital circuit blocks typically produces high frequency current noise both into the Gnd/Vdd system, and into the substrate of integrated
circuits. Such currents can potentially affect circuit reliability and performance of other sensitive components. For instance the Gnd/Vdd currents may produce electromigration, IR voltage drops,
voltage oscillations due to resonances, and, possibly, electromagnetic interference. The substrate currents may couple noise to sensitive analog circuitry through body effect or direct capacitive
coupling. An analysis of current injection due to switching activity is needed to properly account for all such effects during the design phase. Different effects require different types of current
injection models. For instance, power consumption analysis requires time-domain average current estimation over several clock periods. IR drop, electromigration, and timing performance analysis
require a time-domain noise current upper-bound with respect to all possible combinations of the inputs. Signal integrity, Gnd/Vdd grid resonances, electromagnetic interference, and substrate
coupling on mixed-signal ICs, on the other hand, require an upper-bound on the spectrum of the current injected into the Gnd/Vdd system or into the substrate, respectively. Methodologies developed so
far do not address this latter kind of analysis. Therefore, for these problems we are developing a methodology for estimating an upper bound over all possible input combinations for the spectrum of
the noise current injected by the switching activity of digital blocks. The methodology identifies an upper bound for the spectrum of a circuit noise current by combining the noise current injected
by each gate and accounting for the circuit logic functionality.
[1] A. Nardi, L. Daniel, and A. Sangiovanni-Vincentelli, A Methodology for the Computation of an Upper Bound on Noise Current Spectrum of CMOS Switching Activity, UC Berkeley Electronics Research
Laboratory, Memorandum No. UCB/ERL M02/20, June 2002.
^1Postdoctoral Researcher
More information (http://www-cad.eecs.berkeley.edu/~nardi) or
Send mail to the author : (nardi@eecs.berkeley.edu)
Edit this abstract
|
{"url":"http://www.eecs.berkeley.edu/XRG/Summary/Old.summaries/03abstracts/nardi.2.html","timestamp":"2014-04-21T14:44:24Z","content_type":null,"content_length":"3342","record_id":"<urn:uuid:b6a09a58-6a13-4c0c-9e7e-65028c8bb774>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00617-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Quantum Chaos in Biology
For those who might not know of the theory the goal of quantum chaos theory is to explain all of classical physics including classical chaos as emerging from quantum systems. They've had some
experimental success in the past, but this new paper has profound implications if confirmed. However, the exact philosophical foundations of the theory are a mystery to me so I thought it might be a
good subject to explore here.
Correct me if I'm wrong, but the theory appears to be another contextual one. It proposes no metaphysics and makes no claims about Indeterminacy and merely attempts to describe quanta contextually.
However, I'm a bit confused about how the theory proposes classical chaos emerges from quantum systems.
What the article is speculating about is a critical point behaviour that biological systems might harness. The "ordinary" view is that the transition from QM to classical behaviour would be a swift
one due to thermal decoherence. The "chaotic" view is that you could use the right kind of biomolecular scaffolding to hold the decoherence poised at the critical point and so trap it and milk more out of it in some fashion - as in light gathering during photosynthesis.
However chaos seems entirely the wrong term to describe critical points. Criticality is better.
Criticality is the exact point where things change, but have not yet changed. In the classical realm, as with the transitions of water, you physically get a mixed phase, and an easily reversible one.
So as with critical opalescence, you get liquid and vapor over all scales, and the two states fluctuating back and forth over all these scales.
People have described this poised state as the edge of chaos, rather than chaos. And the whole point is its instability.
So perhaps the philosophical issue is how far can the classical analogy be stretched? Are we talking of "quantum chaos" as a physical mix of quantum and classical states? Are we claiming a reversible
equilibrium where decoherence and recoherence are going on over all scales?
It is an interesting possibility that there may be a transition zone and decoherence can be trapped - perhaps along the lines of the quantum zeno effect. But this is something else than the standard
quantum chaos debate in my view.
There, the philosophical question is to do with the uncertainty involved in measuring initial conditions. Is the planck scale cut-off an issue for the determinism that is presumed by
chaos models? You can't seem to get deterministic sensitivity to initial conditions if initial conditions are ontologically indeterminate. (But then a decoherence view of QM may allow you to get at
least a reliable average when it comes to the concept of initial conditions, a basis that is determinate enough to underpin the classical model).
|
{"url":"http://www.physicsforums.com/showthread.php?s=a1f0d34414434b0ba7f6951927286e94&p=3804473","timestamp":"2014-04-18T03:01:33Z","content_type":null,"content_length":"34353","record_id":"<urn:uuid:a19b4677-8db3-4710-b6c9-08283550e34d>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00459-ip-10-147-4-33.ec2.internal.warc.gz"}
|
|
{"url":"http://openstudy.com/users/yanieliz/asked/1","timestamp":"2014-04-17T01:02:40Z","content_type":null,"content_length":"114125","record_id":"<urn:uuid:6a13ab62-1dfc-4760-ab69-a6b1f2299b82>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00435-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Cientifica Plc | Graphene | Emerging Technologies
Like novelists, mathematicians are creative authors. With diagrams, symbolism, metaphor, double entendre and elements of surprise, a good proof reads like a good story.
So starts Corrie Goldman’s May 8, 2012 article about Stanford University (California, US) professor Reviel Netz and his new book, Ludic Proof: Greek Mathematics and the Alexandrian Aesthetic.
(Goldman’s article was republished from the May 7, 2012 article in the Stanford Report.) The article was written in a Question and Answer format and here is an excerpt from the Stanford website,
You have said that a math proof is more focused on the properties of text than any other human endeavor, short of poetry.
Mathematics is structured around texts – proofs – that have very rich protocols in terms of their textual arrangement, whether in the use of extra-verbal elements – diagrams – in the very layout, in
the use of a particular formulaic language, in the structuring of the text. And its success or failure depends entirely on features residing in the text itself. It is really an activity very
powerfully concentrated around the manipulation of written documents, more perhaps than anywhere else in science, and comparable, then, to modern poetry.
How do you define or identify literary-like elements like metaphors in a mathematical proof?
Metaphor is fairly standard in mathematics. Mathematics can only become truly interesting and original when it involves the operation of seeing something as something else – a pair of similarly
looking triangles, say, as a site for an abstract proportion; a diagonal crossing through the set of all real numbers.
You have said that a proof can be seen as having a complex narrative and even elements of surprise much like how a story unfolds. Can you give me an example?
You tell me, “I’m going to find the volume of a sphere.” And then you do nothing of the kind, going instead through an array of unrelated results – a cone here, a funny polygon there, various
proportion results and general problems; then you make a thought experiment that shows how a sphere is like a series of cones produced from a certain funny polygon and, lo and behold, all the results
do allow one a very quick determination of the volume of the sphere. Here is surprise and narrative. That’s Archimedes’ “Sphere and Cylinder” proof; it’s a typical mechanism in his works. Other
authors are often much more sedate and progress in a more stately manner; this is Euclid’s approach.
These questions and answers are derived from an April 13, 2012 workshop (Mathematics as Literature / Mathematics as Text Workshop) held at Stanford University.
The description for Netz’s book, Ludic Proof, provides more insight into his work (excerpted from the description),
This book represents a new departure in science studies: an analysis of a scientific style of writing, situating it within the context of the contemporary style of literature. Its philosophical
significance is that it provides a novel way of making sense of the notion of a scientific style. For the first time, the Hellenistic mathematical corpus – one of the most substantial extant for the
period – is placed centre-stage in the discussion of Hellenistic culture as a whole. Professor Netz argues that Hellenistic mathematical writings adopt a narrative strategy based on surprise, a
compositional form based on a mosaic of apparently unrelated elements, and a carnivalesque profusion of detail. He further investigates how such stylistic preferences derive from, and throw light on,
the style of Hellenistic poetry.
The word ‘carnivalesque’ made me think of literary theorist Mikhail Mikhailovich Bakhtin, from the Wikipedia essay (footnotes and links removed),
Bakhtin had a difficult life and career, and few of his works were published in an authoritative form during his lifetime. As a result, there is substantial disagreement over matters that are normally
taken for granted: in which discipline he worked (was he a philosopher or literary critic?), how to periodize his work, and even which texts he wrote (see below). He is known for a series of concepts
that have been used and adapted in a number of disciplines: dialogism, the carnivalesque, the chronotope, heteroglossia and “outsidedness” (the English translation of a Russian term vnenakhodimost,
sometimes rendered into English—from French rather than from Russian—as “exotopy”). [emphasis mine] Together these concepts outline a distinctive philosophy of language and culture that has at its
center the claims that all discourse is in essence a dialogical exchange and that this endows all language with a particular ethical or ethico-political force.
I didn’t find that description as helpful as I hoped and so clicked to Carnivalesque and here I found a liaison between this term and Netz’s response about mathematical proofs unfolding as complex
narratives with surprises,
Carnivalesque is a term coined by the Russian critic Mikhail Bakhtin, which refers to a literary mode that subverts and liberates the assumptions of the dominant style or atmosphere through humor and chaos.
It’s not the first time I’ve come across a reference to Bakhtin’s theories, specifically ‘carnivalesque’, in the context of scientific and/or technical writing. Somehow one doesn’t usually associate
chaos, humour, and surprise with those writing forms and yet, ‘carnivalesque’ keeps popping up.
|
{"url":"http://www.cientifica.com/category/archimedes/","timestamp":"2014-04-19T01:47:19Z","content_type":null,"content_length":"35260","record_id":"<urn:uuid:f0a8960c-3210-494c-8f4c-ea41498653bb>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00366-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[racket] Plot woes
From: Neil Toronto (neil.toronto at gmail.com)
Date: Fri Jun 1 14:01:16 EDT 2012
On 05/30/2012 11:12 AM, Jens Axel Søgaard wrote:
> Hi All,
> Attached is my version of a 2d-plotter that handles singularities.
> Give it a spin with your favorite ill-behaved functions and
> report back with ones badly handled.
For approaches like this, I think the definition of "well-behaved" is
"locally Lipschitz continuous with Lipschitz constant < K", where K
depends on the constants used to detect discontinuities (e.g.
machine-epsilon, number-of-regions). For everywhere-differentiable
functions, K is the maximum absolute derivative on the domain plotted,
scaled by various plot- and detection-specific factors.
(Yes, "Lipschitz" is like, the most unfortunate name for a mathematical
property EVER. It takes undergraduate math majors a whole semester to
get over it, if they ever do.)
Knowing that, it doesn't take long to find an ill-behaved function. We
just need one that's differentiable but not locally Lipschitz:
(λ (x) (* (expt (abs x) 3/2) (sin (/ x))))
Things like this will always be ill-behaved when plotted on some domain,
no matter how fine the discontinuity detection is.
Further, there are locally and globally Lipschitz functions whose K is
too large. (I didn't spend time finding one, though.) For each one, some
constant assignment in the discontinuity detection will make them plot
nicely. But for any constant assignment, there are infinitely many of them.
That doesn't mean I won't try something like this. The definition of
"functions without discontinuities that are well-behaved when plotted
non-adaptively" is also "locally Lipschitz continuous with Lipschitz
constant < K", where K depends on different constants like the
discretization step size. I would be perfectly happy with an adaptive
sampler or just discontinuity detection if I could show that its set of
ill-behaved functions is no larger than those for the current sampler.
That'll take some time.
Also, I'm still considering letting users specify discontinuities. That
would allow them to tell plot what kind they are: removable, removable
singularity, right-step, left-step, etc. Those kinds of properties are
pretty much undetectable.
Neil ⊥
Posted on the users mailing list.
|
{"url":"http://lists.racket-lang.org/users/archive/2012-June/052463.html","timestamp":"2014-04-18T18:20:35Z","content_type":null,"content_length":"7348","record_id":"<urn:uuid:efb1f10c-0c8d-4420-a89f-51c285af330f>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00140-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Transactions of the American Mathematical Society
ISSN 1088-6850(online) ISSN 0002-9947(print)
Cyclicity of CM elliptic curves modulo p
Author: Alina Carmen Cojocaru
Journal: Trans. Amer. Math. Soc. 355 (2003), 2651-2662
MSC (2000): Primary 11G05; Secondary 11N36, 11G15, 11R45
Published electronically: March 14, 2003
MathSciNet review: 1975393
Full-text PDF Free Access
Abstract | References | Similar Articles | Additional Information
Abstract: Let
• [acC1] A. C. Cojocaru, ``On the cyclicity of the group of
• [acC2] A. C. Cojocaru, ``Cyclicity of elliptic curves modulo
• [BMP] I. Borosh, C. J. Moreno, and H. Porta, Elliptic curves over finite fields. II, Math. Comput. 29 (1975), 951–964. MR 0404264 (53 #8067), http://dx.doi.org/10.1090/S0025-5718-1975-0404264-3
• [Ho] C. Hooley, Applications of sieve methods to the theory of numbers, Cambridge University Press, Cambridge, 1976. Cambridge Tracts in Mathematics, No. 70. MR 0404173 (53 #7976)
• [LT1] Serge Lang and Hale Trotter, Frobenius distributions in 𝐺𝐿₂-extensions, Lecture Notes in Mathematics, Vol. 504, Springer-Verlag, Berlin, 1976. Distribution of Frobenius automorphisms in
𝐺𝐿₂-extensions of the rational numbers. MR 0568299 (58 #27900)
• [LT2] S. Lang and H. Trotter, Primitive points on elliptic curves, Bull. Amer. Math. Soc. 83 (1977), no. 2, 289–292. MR 0427273 (55 #308), http://dx.doi.org/10.1090/S0002-9904-1977-14310-3
• [Mu1] M. Ram Murty, On Artin’s conjecture, J. Number Theory 16 (1983), no. 2, 147–168. MR 698163 (86f:11087), http://dx.doi.org/10.1016/0022-314X(83)90039-2
• [Mu2] M. Ram Murty, An analogue of Artin’s conjecture for abelian extensions, J. Number Theory 18 (1984), no. 3, 241–248. MR 746861 (85j:11161), http://dx.doi.org/10.1016/0022-314X(84)90059-3
• [Mu3] M. Ram Murty, Artin’s conjecture and elliptic analogues, theory (Cardiff, 1995) London Math. Soc. Lecture Note Ser., vol. 237, Cambridge Univ. Press, Cambridge, 1997, pp. 325–344. MR
1635711 (2000a:11098), http://dx.doi.org/10.1017/CBO9780511526091.022
• [Mu4] M. Ram Murty, Problems in analytic number theory, Graduate Texts in Mathematics, vol. 206, Springer-Verlag, New York, 2001. Readings in Mathematics. MR 1803093 (2001k:11002)
• [Sch] Werner Schaal, On the large sieve method in algebraic number fields, J. Number Theory 2 (1970), 249–270. MR 0272745 (42 #7626)
• [Se1] J. -P. Serre, ``Résumé des cours de 1977-1978", Annuaire du Collège de France 1978, pp. 67-70.
• [Se2] Jean-Pierre Serre, Quelques applications du théorème de densité de Chebotarev, Inst. Hautes Études Sci. Publ. Math. 54 (1981), 323–401 (French). MR 644559 (83k:12011)
• [Silv1] Joseph H. Silverman, The arithmetic of elliptic curves, Graduate Texts in Mathematics, vol. 106, Springer-Verlag, New York, 1986. MR 817210 (87g:11070)
• [Silv2] Joseph H. Silverman, Advanced topics in the arithmetic of elliptic curves, Graduate Texts in Mathematics, vol. 151, Springer-Verlag, New York, 1994. MR 1312368 (96b:11074)
Similar Articles
Retrieve articles in Transactions of the American Mathematical Society with MSC (2000): 11G05, 11N36, 11G15, 11R45
Retrieve articles in all journals with MSC (2000): 11G05, 11N36, 11G15, 11R45
Additional Information
Alina Carmen Cojocaru
Affiliation: Department of Mathematics and Statistics, Queen’s University, Kingston, Ontario, Canada, K7L 3N6
Address at time of publication: The Fields Institute for Research in Mathematical Sciences, 222 College Street, Toronto, Ontario, M5T 3J1, Canada
Email: alina@mast.queensu.ca, alina@fields.utoronto.ca
DOI: http://dx.doi.org/10.1090/S0002-9947-03-03283-5
PII: S 0002-9947(03)03283-5
Keywords: Cyclicity of elliptic curves modulo $p$, complex multiplication, applications of sieve methods
Received by editor(s): July 24, 2002
Received by editor(s) in revised form: December 4, 2002
Published electronically: March 14, 2003
Additional Notes: Research partially supported by an Ontario Graduate Scholarship
Article copyright: © Copyright 2003 American Mathematical Society
|
{"url":"http://www.ams.org/journals/tran/2003-355-07/S0002-9947-03-03283-5/home.html","timestamp":"2014-04-24T15:54:57Z","content_type":null,"content_length":"36902","record_id":"<urn:uuid:f39ac47c-04df-4eef-9f3b-2bae943d2d31>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00272-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Walker, Judy - Department of Mathematics, University of Nebraska-Lincoln
• Public key cryptography A government organization (Pentagon, State Department, CIA, etc.) employs persons (military offi-
• LEE WEIGHTS OF Z/4ZCODES FROM ELLIPTIC CURVES E FELIPE VOLOCH AND JUDY L. WALKER
• Codes and Curves Judy L. Walker
• Contemporary Mathematics Instructor: David Jaffe, 936 Oldfather Hall.
• Math 203 Exam 2 Fall 1994 There are 6 problems, each of which is worth 16 points. You get 4 points for
• Your first exam is on Monday, September 22 and covers Chapters 1-3 of your textbook. This review sheet is meant to help you, but is not meant to be a substitute
• Math 203 Exam 2 Fall 1997 1 2 3 4 5 6
• MANAGEMENT SCIENCE REVIEW SHEET CHAPTER 1. Know these concepts: Euler circuit, Euler's Theorem, Euleriz-
• Math 203 Sample Exam 3 1 2 3 4 5 6 7 8
• Plans for Chapter 3 Planning and Scheduling
• Plans for Chapter 8 Statistical Inference
• Writing assignments for Math 203 There are two distinct types of writing assignments in this course: journals and reports.
• Discrete Mathematics 308 (2008) 31153124 www.elsevier.com/locate/disc
• June 19, 2008 13:44 World Scientific Review Volume -9in x 6in agcodesoverrings Algebraic Geometric Codes over Rings
• Average Min-Sum Decoding of LDPC Codes Nathan Axvig, Deanna Dreher, Katherine Morrison, Eric Psota,
• Plans for Chapter 7 Probability: The Mathematics of Chance
• Comments on the Journals for Math 203 Purpose The journals have several purposes, which will have different levels of importance for
• Reading Mathematics Ideas taken from "How to read and study mathematics" by Ruth Hubbard, Queens-
• Math 203 Sample Exam 2 The problems that are given below are all plausible exam problems. Be sure
• Suppose you wish to plan a trip, starting in Lincoln, visiting Blair, Fremont, Hastings and Lexington in some order, and then returning to Lincoln. The following is a table
• Metro University has two limited-enrollment courses that admit only some of the students who apply. There are complaints about discrimination in the admissions process.
• A Critical Look at Self-Dual Codes Judy L. Walker
• Plans for Chapter 1 Street Networks
• Judy Leavitt Walker Current CV as of March 8, 2011
• Connections Between Computation Trees and Graph Deanna Dreher and Judy L. Walker
• ANALYSIS OF CONNECTIONS BETWEEN PSEUDOCODEWORDS 1 Analysis of Connections between
• A Universal Theory of Decoding and Pseudocodewords
• LDPC codes from voltage graphs Christine A. Kelley
• Towards explaining decoding errors for LDPC codes
• Applications of List Decoding to Tracing Traitors Alice Silverberg, Jessica Staddon, Member, IEEE, and Judy L. Walker, Member, IEEE
• A Critical Look at Self-Dual Codes Judy L. Walker
• Math 203 Section 002 9:30 -10:20 MWF
• Math 203, Spring 2001 Journal Assignments
• Plans for Chapter 2 Street Networks
• Suppose you wish to plan a trip, starting in Lincoln, visiting Blair, Fremont, Hastings and Lexington in some order, and then returning to Lincoln. The following is a table
• !"#%$'&(0)21!43 5 Your first exam covers Chapters 1-3 of your textbook. This review sheet is meant to help you study
• Plans for Chapter 5 Producing Data
• Plans for Chapter 6 Describing Data
• Math 203 Review for Exam II Your second exam covers Chapters 5-8 of your textbook. This review sheet is meant to
• Math 203 Statistics Report (40 points due November 14, 1997)
• Part A Choose one of the following four options. (And don't forget to do Part (1) Think of a novel real-world problem for which a graph might be useful for
• Two Excerpts from the Text A. Read each of the following excerpts carefully
• International Symposium on Information Theory and its Applications, ISITA2008 Auckland, New Zealand, 7-10, December, 2008
• A Universal Theory of Pseudocodewords Nathan Axvig, Emily Price, Eric Psota, Deanna Turk, Lance C. Perez, and Judy L. Walker
• HOMOGENEOUS WEIGHTS AND EXPONENTIAL SUMS JOSÉ FELIPE VOLOCH AND JUDY L. WALKER
• Codes and Curves Judy L. Walker
• Applications of List Decoding to Tracing Traitors Alice Silverberg, Jessica Staddon, Member, IEEE, and Judy L. Walker, M*
• LEE WEIGHTS OF Z/4Z-CODES FROM ELLIPTIC CURVES JOSÉ FELIPE VOLOCH AND JUDY L. WALKER
|
{"url":"http://www.osti.gov/eprints/topicpages/documents/starturl/08/939.html","timestamp":"2014-04-16T18:09:51Z","content_type":null,"content_length":"14912","record_id":"<urn:uuid:a193ceee-f9c8-4270-8831-3cb3e20cb5ee>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00342-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Help
Give an example of a RATIONAL function that has: a vertical asymptote at x = 2, and a horizontal asymptote at y = 1.

A vertical asymptote is a dashed vertical line: as x approaches the number that makes the function undefined, the graph approaches this line and f(x) blows up to infinity (or negative infinity). A vertical asymptote is x = a, where a is a value that makes the denominator 0. You may also have to check whether it is a hole rather than an asymptote. A horizontal asymptote is a dashed horizontal line that the graph approaches as x extends to infinity; it has the form y = b. To find the horizontal asymptote, take the limit of the rational function as x goes to infinity.

Examples:

1. $f(x) = \frac{x^2 + 2}{x-2}$. A vertical asymptote exists where the denominator is 0, that is, at x = 2. To see what happens on both sides of the asymptote, take the limit as x goes to 2 from each side: $\lim_{x \to 2^{+}} \frac{x^2 + 2}{x-2} = +\infty$ and $\lim_{x \to 2^{-}} \frac{x^2 + 2}{x-2} = -\infty$.

2. $f(x) = \frac{x+1}{x^2-1} = \frac{x+1}{(x+1)(x-1)}$. The only vertical asymptote is x = 1. Why not x = -1? Because the factor (x+1) cancels, which means there is a hole at x = -1, not an asymptote.

3. $f(x) = \frac{x+2}{x-3}$. Finding the horizontal asymptote is simply taking the limit as x approaches infinity (if the limit is infinite, there is no horizontal asymptote): $\lim_{x \to \pm \infty} \frac{x+2}{x-3} = \lim_{x \to \pm \infty} \frac{x}{x} = 1$, so the horizontal asymptote is y = 1.

For the original question, a function such as $f(x) = \frac{x}{x-2}$ works: the denominator vanishes at x = 2 (vertical asymptote), and the limit as $x \to \pm\infty$ is 1 (horizontal asymptote y = 1).

Take the first derivative to find where it is increasing/decreasing, the second derivative to find where it is concave up/down.
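To check asymptotes like these mechanically, one can compute the same limits with a computer algebra system; here is a minimal Python sketch, assuming SymPy is installed (f below is example 3):

    import sympy as sp

    x = sp.symbols('x')
    f = (x + 2) / (x - 3)            # example 3 above

    print(sp.limit(f, x, sp.oo))     # 1   -> horizontal asymptote y = 1
    print(sp.limit(f, x, 3, '+'))    # oo  -> blows up to the right of x = 3
    print(sp.limit(f, x, 3, '-'))    # -oo -> blows down to the left of x = 3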
|
{"url":"http://mathhelpforum.com/pre-calculus/43460-asymptote.html","timestamp":"2014-04-21T15:54:58Z","content_type":null,"content_length":"37849","record_id":"<urn:uuid:5a097844-fad0-4d5f-8cf0-020bb3248c09>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00247-ip-10-147-4-33.ec2.internal.warc.gz"}
|
what is the lim x->0 of sin(x)/|x| ?
lim x->0 sin(x)/|x| has to be split into one-sided limits, because |x| = x for x > 0 and |x| = -x for x < 0: lim x->0+ sin(x)/x = 1, but lim x->0- sin(x)/(-x) = -1. The one-sided limits disagree, so the two-sided limit does not exist. Is anyone else having problems using equations? I have tried both Firefox and Safari and I am not able to use "Equation" to type in either.
|
{"url":"http://openstudy.com/updates/504d0032e4b0f79a4d924af4","timestamp":"2014-04-17T06:57:10Z","content_type":null,"content_length":"27804","record_id":"<urn:uuid:cce2d8d6-eeab-47a3-80d1-b1860f3d1a19>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00479-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Modular arithmetic
Modular arithmetic is a system of arithmetic for certain equivalence classes of integers, called congruence classes. Sometimes it is suggestively called 'clock arithmetic', where numbers 'wrap around' after they reach a certain value (the modulus). For example, when the modulus is 12, then any two numbers that leave the same remainder when divided by 12 are equivalent (or "congruent") to each other. The numbers
..., −34, −22, −10, 2, 14, 26, ...
are all "congruent modulo 12" to each other, because each leaves the same remainder (2) on division by 12. The collection of all such numbers is a congruence class.
As explained below, one can add such congruence classes to get another such congruence class, subtract two such classes to get another, and multiply such classes to get another. When the modulus is a
prime number, one can always divide by any class not containing 0.
Definition of modulo
Two discrepant conventions prevail:
• the one originally introduced by Gauss two centuries ago, still used by mathematicians, and suitable for theoretical mathematics, and
• a newer one adhered to by computer scientists and perhaps more suitable for computing.
The older convention, used by mathematicians
The original convention is that the expression
means that a and b are both in the same "congruence class" modulo n, i.e., both leave the same remainder on division by n, or, equivalently, a − b is a multiple of n. Thus we have, for example
since 63 and 83 both leave the same remainder (3) on division by 10, or, equivalently, 63 − 83 is a multiple of 10. One says:
"63 is congruent to 83, modulo 10,"
"63 and 83 are congruent to each other, modulo 10."
"Modulo" is usually abbreviated to "mod" in speaking, just as in writing. The parentheses, i.e., the round brackets (), are usually not written, but in this case they emphasize the difference between
the traditional mathematical convention and the modern computing convention. The mathematical usage parses the phrase differently from the computing usage.
In Latin, the language in which Gauss wrote, modulo is the ablative case of modulus. The number n, which in this example is 10, is the modulus.
The newer convention, used in computing
According to the newer convention, in general, a mod n is the remainder on integer division of a by n. Depending on the implementation, the remainder r is typically constrained to 0 < |r| < |n|, with
a negative remainder only resulting when n < 0.
The difference in conventions is not very serious, in fact; it is reasonably thought of as reflecting the preference, for computational purposes, of a normal form over the underlying equivalence
relation. This can be regarded mainly as a notational convention in this case, where there is a strict-sense normal form.
Implementation of modulo in computing
Some calculators have a mod() function button, and many programming languages have a mod() function or similar, expressed as mod(a,n), for example. Some also support expressions that use "%" as a
modulo operator, such as a % n.
a mod n can be calculated by using equations, in terms of other functions. Differences may arise according to the scope of the variables, which in common implementations is broader than in the
definition just given.
An implementation of a modulo function that constrains the remainder set in the manner described above, as is found in the programming languages Perl and Python, can be described in terms of the
floor function floor(z), the greatest integer less than or equal to z:
mod(a,n) = a − n × floor(a ÷ n)
This definition allows for a and n to be typed as integers or rational numbers.
The expression a mod 0 is undefined in the majority of numerical systems, although some do define it to be n.
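As an illustration of the floor-based formula above, here is a small Python sketch (Python's built-in % operator already behaves this way, so the function only mirrors the formula):

    import math

    def mod(a, n):
        # a - n * floor(a / n); the result takes the sign of n
        return a - n * math.floor(a / n)

    print(mod(7, 3), 7 % 3)      # 1  1
    print(mod(-7, 3), -7 % 3)    # 2  2
    print(mod(7, -3), 7 % -3)    # -2 -2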
Applications of modular arithmetic
Modular arithmetic, first systematically studied by Carl Friedrich Gauss at the end of the eighteenth century, is applied in number theory, abstract algebra, cryptography, and visual and musical art.
The fundamental arithmetic operations performed by most computers are actually modular arithmetic, where the modulus is 2^b (b being the number of bits of the values being operated on). This comes to
light in the compilation programming languages such as C; where for example arithmetic operations on "int" integers are all taken modulo 2^32, on most computers.
In art
In music, because of octave and enharmonic equivalency (that is, pitches in a 1/2 or 2/1 ratio are equivalent, and C# is the same as Db), modular arithmetic is used in the consideration of the twelve
tone equally tempered scale, especially in twelve tone music. In visual art modular arithmetic can be used to create artistic patterns based on the multiplication and addition tables modulo n (see
link below).
Some consequences of the mathematical usage
Recall from above that two integers a, b are congruent modulo n, written
a ≡ b (mod n), if their difference a − b is divisible by n, i.e. if a − b = kn for some integer k.
Using this definition, we can generalize to non-integral moduli. For instance, we can define a ≡ b (mod π) if a − b = kπ for some integer k. This idea is developed in full in the context of ring
theory below.
Here is an example of the congruence notation.
14 ≡ 26 (mod 12).
This is an equivalence relation, and the equivalence class of the integer a is denoted by [a][n] (or simply [a] if the modulus n is understood.) Other notations include a + nZ or a mod n. The set of
all equivalence classes is denoted Z/nZ = { [0][n], [1][n], [2][n], ..., [n-1][n] }.
If a and b are integers, the congruence
ax ≡ b (mod n)
has a solution x if and only if the greatest common divisor (a, n) divides b. The details are recorded in the linear congruence theorem. More complicated simultaneous systems of congruences with
different moduli can be solved using the Chinese remainder theorem or the method of successive substitution.
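A sketch of how the linear congruence ax ≡ b (mod n) can be solved in practice with the extended Euclidean algorithm; this is an illustrative Python snippet and the function names are my own, not part of any standard library:

    def egcd(a, b):
        # extended Euclid: returns (g, x, y) with a*x + b*y = g = gcd(a, b)
        if b == 0:
            return a, 1, 0
        g, x, y = egcd(b, a % b)
        return g, y, x - (a // b) * y

    def solve_congruence(a, b, n):
        # one solution of a*x ≡ b (mod n), or None if gcd(a, n) does not divide b
        g, x, _ = egcd(a, n)
        if b % g != 0:
            return None
        return (x * (b // g)) % (n // g)

    print(solve_congruence(6, 4, 10))   # 4, since 6*4 = 24 ≡ 4 (mod 10)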
This equivalence relation has important properties which follow immediately from the definition: if
a[1] ≡ b[1] (mod n) and a[2] ≡ b[2] (mod n)
then
a[1] + a[2] ≡ b[1] + b[2] (mod n)
and
a[1]a[2] ≡ b[1]b[2] (mod n).
This shows that addition and multiplication are well-defined operations on the set of equivalence classes. In other words, addition and multiplication are defined on Z/nZ by the following formulae:
• [a][n] + [b][n] = [a + b][n]
• [a][n][b][n] = [ab][n]
In this way, Z/nZ becomes a commutative ring with n elements. For instance, in the ring Z/12Z, we have
[8][12][3][12] + [6][12] = [30][12] = [6][12].
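Because the operations are well-defined on classes, they are easy to play with in code; a minimal, assumed Python sketch of congruence-class arithmetic (reproducing the Z/12Z example above):

    class Mod:
        # a congruence class [a] in Z/nZ, reduced to its canonical representative
        def __init__(self, a, n):
            self.n = n
            self.a = a % n
        def __add__(self, other):
            return Mod(self.a + other.a, self.n)
        def __mul__(self, other):
            return Mod(self.a * other.a, self.n)
        def __repr__(self):
            return "[%d] mod %d" % (self.a, self.n)

    print(Mod(8, 12) * Mod(3, 12) + Mod(6, 12))   # [6] mod 12, as in the text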
In abstract algebra, it is realized that modular arithmetic is a special case of forming the factor ring of a ring modulo an ideal. If R is a commutative ring, and I is an ideal of R, then the
elements a and b of R are congruent modulo I if a − b is an element of I. As with the ring of integers, this turns out to be an equivalence relation, and addition and multiplication become
well-defined operations on the factor ring R/I.
If we consider the equation ax ≡ 1 (mod n), we see that a has a multiplicative inverse modulo n if and only if a and n are coprime. Therefore, Z/nZ is a field if and only if n is prime. It can be shown that every finite field is an extension of Z/pZ for some prime p.
An important fact about prime number moduli is Fermat's little theorem: if p is a prime number and a is any integer, then
a^p ≡ a (mod p).
This was generalized by Euler: for any positive integer n and any integer a that is relatively prime to n,
a^φ(n) ≡ 1 (mod n),
where φ(n) denotes Euler's φ function, counting the integers between 1 and n that are coprime to n. Euler's theorem is a consequence of the Theorem of Lagrange, applied to the group of units of the ring Z/nZ.
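Both theorems are easy to check numerically; a small Python sketch (phi here is a naive totient by direct count, fine only for small n):

    from math import gcd

    def phi(n):
        # Euler's totient by direct count
        return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

    a, n = 3, 10                      # gcd(3, 10) = 1
    print(pow(a, phi(n), n))          # 1, as Euler's theorem predicts
    print(pow(7, 13, 13), 7 % 13)     # 7 7 -- Fermat: 7^13 ≡ 7 (mod 13)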
Another "computing" usage
An implied meaning of modulo in computing contexts is "valid up to this value." For example, "addition is modulo 1,000" means that the addition operation being described provides valid answers until
the sum goes beyond 1,000.
Digital representations of number spaces are not infinite (see binary numeral systems). Thus, if a computer is representing a set of positive integers as 8-bits, the values that can be represented
range from 0 to 255. When an addition (or multiplication, or whatever) results in a number above this cutoff, the typical behavior is for the values to wrap around. For example, in the 8-bit positive
integer situation, 255 + 1 = 0. This computer is therefore described as "modulo 256". Furthermore, some computers do different operations with different bit representations. So although the storage
of integers may be 8-bit ("modulo 256"), the addition of integers may be 12-bit ("modulo 4096"), or anything else. Thus individual operations can also be described as "modulo x".
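The wrap-around can be imitated in Python by masking to 8 bits; this is a sketch of the arithmetic being described, not of how any particular hardware implements it:

    MASK = 0xFF                    # keep only the low 8 bits, i.e. work modulo 256

    def add8(a, b):
        return (a + b) & MASK

    print(add8(255, 1))            # 0, the wrap-around described above
    print(add8(200, 100))          # 44, since 300 mod 256 = 44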
In the case of signed (positive and negative) integers, or floating point numbers, the wrap around effect is more complicated, and is not always perfectly analogous to the formal mathematical modulo.
However, the slang persists such that "addition is modulo 1000" may not literally mean (in fact cannot literally mean) that the computer does addition in bits, but may simply mean "watch out: if you
go over 1000 this computer will give you weird results".
More general use of the word modulo by mathematicians
To say that any two things are the same "modulo" a third thing means, more-or-less, that the difference between the first two is accounted for or explained by the third. That is, the up to concept is
often talked about this way, using modulo as a term alerting the hearer. In mathematics, this admits various precise definitions. In particular, two members of a ring or an algebra are congruent
modulo an ideal if the difference between them is in the ideal. The use of the term in modular arithmetic is a special case of that usage, and that is how this more general usage evolved. Some loose
terms such as almost all can in this way acquire precise meanings from a Boolean algebra version, in which symmetric difference of sets replaced arithmetical subtraction; for example "modulo finite
See modulo.
Slang use of the word modulo
Mathematicians speaking of things non-mathematical still say "A is the same as B modulo C" when they mean A is the same as B except for differences accounted for by C. But in such non-mathematical
contexts, the phrase may not admit any very precise definition. Consequently mathematicians and computer scientists often use the phrase in an informal or even jocular way.
Some users of the term either lack this theoretical viewpoint or else ignore it, and use the word "modulo" more-or-less synonymously with the preposition except.
"http and https are the same, modulo encryption." - means "the only difference between http and https is the addition of encryption".
"These two characters are equal."
"You mean, equal modulo case." - indicates that the first speaker is wrong: the characters are not the same, one is uppercase and the other lowercase.
"The two students performed equally well on the exam, modulo some minor computational mistakes." - means that the two students demonstrated an equal understanding of the material and its
application, but one of them lost some points for minor computational mistakes.
"This code is finished modulo testing" - means "this code is finished except for testing". Since testing is generally considered quite important, whereas in mathematics the use of modular
arithmetic generally ignores the difference between modulo-equal numbers, use of a phrase like this might be deliberate understatement.
External resources
|
{"url":"http://july.fixedreference.org/en/20040724/wikipedia/Modular_arithmetic","timestamp":"2014-04-21T02:01:17Z","content_type":null,"content_length":"22015","record_id":"<urn:uuid:75d83a64-0152-4170-a4e0-ff6cc1db735f>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00194-ip-10-147-4-33.ec2.internal.warc.gz"}
|
proof of Ptolemy’s inequality
Looking at the quadrilateral $ABCD$ we construct a point $E$, such that the triangles $ACD$ and $AEB$ are similar ($\angle ABE=\angle CDA$ and $\angle BAE=\angle CAD$).
This means that:

$\frac{AB}{AD}=\frac{BE}{DC},$

from which follows that

$BE=\frac{AB\cdot DC}{AD}.$
Also because $\angle EAC=\angle BAD$ and

$\frac{AE}{AB}=\frac{AC}{AD},$

the triangles $EAC$ and $BAD$ are similar. So we get:
$EC=\frac{AC\cdot DB}{AD}.$
Now if $ABCD$ is cyclic we get
$\angle ABE+\angle CBA=\angle ADC+\angle CBA=180^{\circ}.$
This means that the points $C$, $B$ and $E$ are on one line and thus:
Now we can use the formulas we already found to get:
$\frac{AC\cdot DB}{AD}=\frac{AB\cdot DC}{AD}+BC.$
Multiplication with $AD$ gives:
$AC\cdot DB=AB\cdot DC+BC\cdot AD.$
Now we look at the case that $ABCD$ is not cyclic. Then
$\angle ABE+\angle CBA=\angle ADC+\angle CBA\neq 180^{\circ},$
so the points $E$, $B$ and $C$ form a triangle and from the triangle inequality we know:
Again we use our formulas to get:
$\frac{AC\cdot DB}{AD}<\frac{AB\cdot DC}{AD}+BC.$
From this we get:
$AC\cdot DB<AB\cdot DC+BC\cdot AD.$
Putting this together we get Ptolomy’s inequality:
$AC\cdot DB\leq AB\cdot DC+BC\cdot AD,$
with equality iff $ABCD$ is cyclic.
|
{"url":"http://planetmath.org/ProofOfPtolemysInequality","timestamp":"2014-04-17T12:41:40Z","content_type":null,"content_length":"105690","record_id":"<urn:uuid:5775325b-ac65-44ff-a2f5-f18ae35a83c6>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00069-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Edgewater, CO Statistics Tutor
Find an Edgewater, CO Statistics Tutor
...I worked as a tutor and instructional assistant at American River College in Sacramento, CA for 15 years. I also attended UC Berkeley as an Engineering major.I took this class at American
River College in Sacramento, CA. I received an A, one of the highest grades in the class.
11 Subjects: including statistics, calculus, geometry, algebra 1
...As far as tutoring experience goes, I tutored through Educational Supportive Services and the Department of Athletics at Kansas State University for a total of 4 years. After college, I
acquired a position as a Product Developer where I constructed electronic cost analysis tools. Thus, I have some idea of how people actually apply math in the real world.
20 Subjects: including statistics, calculus, geometry, algebra 1
...It is the foundation upon which all other secondary and college math and science success will be built, so it is of strategic importance to every student. I focus on working with students to
understand the "why" not just the "how", so the "how" will stick. My goal is to help students master the...
30 Subjects: including statistics, chemistry, calculus, algebra 1
...All of the students I tutored had differing needs - from simply learning the basics, to improving their test-taking abilities. My methods for tutoring these students typically changed
depending on the student need, but generally followed similar guidelines: 1) Introduction - Learning the basics ...
5 Subjects: including statistics, calculus, geometry, precalculus
...I am very confident in my knowledge and ability to tutor students preparing for standardized tests, particularly the math sections. I also have a solid foundation in the science and English
subjects of these exams, and can help students hone their skills and work on the specific areas they need to improve their scores. Science and math have always been natural strengths of mine.
13 Subjects: including statistics, geometry, GRE, algebra 1
Related Edgewater, CO Tutors
Edgewater, CO Accounting Tutors
Edgewater, CO ACT Tutors
Edgewater, CO Algebra Tutors
Edgewater, CO Algebra 2 Tutors
Edgewater, CO Calculus Tutors
Edgewater, CO Geometry Tutors
Edgewater, CO Math Tutors
Edgewater, CO Prealgebra Tutors
Edgewater, CO Precalculus Tutors
Edgewater, CO SAT Tutors
Edgewater, CO SAT Math Tutors
Edgewater, CO Science Tutors
Edgewater, CO Statistics Tutors
Edgewater, CO Trigonometry Tutors
Nearby Cities With statistics Tutor
Arvada, CO statistics Tutors
Bow Mar, CO statistics Tutors
Columbine Valley, CO statistics Tutors
Denver statistics Tutors
East Lake, CO statistics Tutors
Eastlake, CO statistics Tutors
Evergreen, CO statistics Tutors
Federal Heights, CO statistics Tutors
Glendale, CO statistics Tutors
Lafayette, CO statistics Tutors
Lakeside, CO statistics Tutors
Lakewood, CO statistics Tutors
Lonetree, CO statistics Tutors
Sheridan, CO statistics Tutors
Wheat Ridge statistics Tutors
|
{"url":"http://www.purplemath.com/Edgewater_CO_statistics_tutors.php","timestamp":"2014-04-17T21:40:32Z","content_type":null,"content_length":"24370","record_id":"<urn:uuid:bf24d411-7a63-4335-89c9-cb07cfd38a7a>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00485-ip-10-147-4-33.ec2.internal.warc.gz"}
|
More On The Changing Historical Relationship Between Walks, HBPs and HRs
What I posted last week was something I posted on the SABR list last year. At that time, someone raised a question about this. Below is the question and how I responded, with a little more research.
I think my basic finding is that there are not more HBP these days due to pitchers throwing faster.
"Cyril mentioned that current pitchers seem to be more willing to hit batters than pitchers in the past. How about since a lot more pitchers now pitch the ball around 90 MPH, it's harder for batters
to get out of the way. Historically, have the pitchers leading the leagues in HB been hard throwers (more Ks) or poor control pitchers (more BBs)?"
I did some analysis on this although it is not exactly what John Lewis suggests. I took the top 500 pitchers in batters faced (seasonal data) from 1960-69 and 1997-2006. I ran a regression in each
case in which the HBP rate was the dependent variable and the strikeout rate and the walk rate were the independent variables. Intentional walks were removed.
Here is the regression equation for the 1960s
HBP = .00387 + .0177*BB + .00186*SO
For the 1997-2006 period it was
HBP = .005 + .0031*BB + .00486*SO
The r-squared in the first case was just .013 and in the second it was .025. The r-squared tells us what percent of the variation in the dependent variable is explained by the model. So it is pretty
weak. But the T-values for BBs and SOs in the first case were 2.44 and .44. So the walk rate is statistically significant. For the second period they were 3.32 and 1.13.
In the first period, a one standard deviation increase in BB rate increased HBP rate .000392. For the strikeout rate it was .00007. So if a pitcher increases his walk rate he increases his HBP rate
more than if he increases his SO rate. For the second period these numbers were .00065 and .00022. So again, the walk rate has a bigger impact.
So all of this suggests that it is worse control in general that increases the HBP rate.
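For readers who want to reproduce this kind of analysis, here is a minimal sketch of the same regression setup in Python with statsmodels; the file name and column names are stand-ins for whatever per-pitcher-season data you have, not the dataset used in the post.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical per-pitcher-season data; rates are per batter faced, intentional walks removed.
df = pd.read_csv("pitcher_seasons.csv")  # assumed columns: hbp_rate, bb_rate, so_rate

X = sm.add_constant(df[["bb_rate", "so_rate"]])  # intercept plus walk and strikeout rates
fit = sm.OLS(df["hbp_rate"], X).fit()

print(fit.params)     # compare with the coefficients quoted above
print(fit.rsquared)   # share of the variation in HBP rate explained by the model
print(fit.tvalues)    # t-values for the constant, BB rate, and SO rate
```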
Now another response to that question
The other day I discussed a regression relating HBP, BBs and SOs. I did that again but I added in HRs, with the idea that a pitcher might be more likely to hit a guy who hit a HR last time up (or the
next guy). I again looked at both the 1960s and the last 10 years. Skipping the regression details (except to say the coefficient values and the r-squared values did not change much), the interesting
thing I found was that HRs had a negative relationship with HBP in the 1960s but it was positive in the last 10 years. So in the 1960s, a pitcher who gave up more HRs hit fewer batters but today a
pitcher who gives up more HRs hits more batters.
Having an increase in HR% of .01 over 1000 batters faced reduced HBP in the 1960s by about .23. In the last 10 years, they went up by .33. A 1 standard deviation increase in HR% in the 1960s
decreased HBP by .15. In the last 10 years it increased HBP by .24 (again, over 1000 batters). The standard deviation of HR% in the 1960s was .0066. In the last 10 years it was .0075.
The T-value on HRs was not significant for either time period. But maybe the difference in their coefficients could be. Anyone know if you can look at two different regressions and run some kind of a
test to see if the difference between coefficients from the regressions is significant?
I ran a regression which combined the two periods. There was a dummy variable for time period. It indicates that pitching in the last 10 years instead of the 1960s, holding everything else constant,
means 2.5 more HBP per 1000 batters faced. The T-value was 8.98. In other words, highly significant.
I also ran a regression with the dummy variable and the dummy variable was multiplied by each of the other variables (HRs, BBs, SOs). In this case the dummy for time period was just about zero and
not significant. The value of the HR*dummy coefficient was .055 (although the T-value was just 1.53 and about 2 is usually needed for significance). So I think the .055 value means that any given
increase in HR% in the last 10 years would make the HBP rate go up by .055 more than in the 1960s. So over 1000 batters faced, if your HR% goes up by .01 (say you give up 10 more HRs) you would hit
.55 more batters in the last 10 years than you would have in the 1960s.
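A sketch of that pooled setup (again with made-up file and column names): the interaction terms let each coefficient differ by era, and the t-value on the HR-rate interaction is the test for whether the HR effect really changed.

```python
import pandas as pd
import statsmodels.formula.api as smf

# 'era' is 0 for 1960-69 pitcher-seasons and 1 for 1997-2006 pitcher-seasons.
pooled = pd.read_csv("pitcher_seasons_pooled.csv")

# Main effects plus era interactions for every rate.
fit = smf.ols("hbp_rate ~ (bb_rate + so_rate + hr_rate) * era", data=pooled).fit()
print(fit.summary())  # look at the hr_rate:era row for the change in the HR coefficient
```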
No comments:
|
{"url":"http://cybermetric.blogspot.com/2008/08/more-on-changing-historical.html","timestamp":"2014-04-20T20:55:40Z","content_type":null,"content_length":"51387","record_id":"<urn:uuid:cd52ac29-6420-4c07-9964-28df81152e3e>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00549-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Discrete Mathematics: Elementary and Beyond
Discrete mathematics is a subject to which no shortage of books have been devoted. It has long been a staple in the mathematics studied and used by computer scientists as well as mathematicians. In
light of this, one must certainly ask whether or not another book on the subject belongs on the bookshelf. For many, the answer with respect to this book should be yes.
When this book arrived on my desk, it got buried rather quickly, and after falling onto the back burner, it stayed there for quite some time. Until I started actually reading it. I quite enjoyed
carrying this small volume around, reading a section or two at a time. The writing is generally clear and engaging. The material is basic, as one expects from a book in the Undergraduate Texts in
Mathematics series, but there is some material "beyond" the elementary. I found myself pleased with how the authors make a point of including developments and applications in their text, in coding
theory in particular. I learned of a few results here. For example, there is a discussion of pseudoprimes and of the Miller-Rabin algorithm, which, upon iteration, has an excellent probability of
correctly identifying a prime. This result is not new (the authors date it to the late 1970s) but it was new to me. Primality testing is not new either, but it is not standard fare, and lends a nice
flavor here. I was also pleased that in several places the authors would state a best known result, and then proceed to state and prove an easier result — one that was within the scope of the book.
Surely there are some readers who will find this sort of bait and switch annoying, but I am not one of them. In fact, I felt it added to the introductory nature of the text.
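For the curious, the idea behind the Miller-Rabin test is short enough to sketch here (a standard textbook formulation, not the book's own presentation): each random base either exposes a composite or lets the candidate survive one round, and repeating rounds makes a wrong "probably prime" verdict extremely unlikely.

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin: returns False for composites, True with high probability for primes."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 as 2^s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # this base witnesses that n is composite
    return True

print(is_probable_prime(2**61 - 1))  # True: a Mersenne prime
print(is_probable_prime(2**61 + 1))  # False: divisible by 3
```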
This book does a wonderful job of communicating mathematics as a vibrant field. In many places the authors are willing to remark on the process of doing mathematics, on questions that appear
"natural" or "surprising" and of course "elegant". The first paragraph of the chapter entitled Integers, Divisors, and Primes presents a good example of this philosophy in action:
This area of mathematics is called number theory, and it is a truly venerable field: Its roots go back about 2500 years, to the very beginning of Greek mathematics. One might think that after
2500 years of research, one would know essentially everything about the subject. But we shall see that this is not the case: There are very simple, natural questions that we cannot answer; and
there are other simple, natural questions to which an answer has been found only in the last few years! (p. 87)
The authors carefully remind us throughout the text that what is convincing does not necessarily constitute a proof. But they do not shy away from first convincing the reader of the likelihood of a
result (having usually led the reader to that point skillfully) and then providing a proof.
Lovasz, Pelikan and Vesztergombi's choice of topics does have a more mathematical bent in comparison to some other undergraduate texts at which I looked. The first chapter takes up the topics of sets
and counting, but the discussion of unions of sets, intersection of sets, and other such introductory logic is extremely brief. The second chapter leads to the pigeonhole principle, but also
discusses estimating the size of numbers, a theme that reappears from time to time throughout the book. The binomial theorem is the main tool of the next chapter, leading quite nicely to identities
arising from Pascal's triangle and estimates for sums and quotients of binomial coefficients. Recurrence relations are briefly introduced via the Fibonacci numbers, but attention quickly turns to
combinatorial probability and a new chapter. In Chapter Six, the theme is prime numbers. (This is the longest chapter in the text, at about thirty pages.) Graphs and trees, and matching and
optimization problems are the themes of the next few chapters. Then there is a foray into planar geometry leading to a discussion of the Four Color Theorem. The last two chapters delve into coding
theory and cryptography by introducing projective planes, Steiner systems, etc, on the way to describing the RSA cryptosystem. I'm a sucker for projective planes, as well as cryptography, and was
delighted with this selection as a fitting conclusion to the book.
While the choice of topics was to my taste and what made reading this book fun, it will be seen as a drawback by some readers who desire more connection with computer science. For example, there is
no mention of boolean logic or automata. Likewise, algorithms are discussed strictly from a mathematical viewpoint — as in the Euclidean Algorithm, as are recurrence relations.
To conclude, in Discrete Mathematics Lovasz, Pelikan and Vesztergombi have succeeded in providing us with a book that is sure to please many readers. It is indeed elementary enough to use as a text
in class (although be warned: the solutions to all the exercises are included). But a reader interested in discrete mathematics mostly for the sake of computer science will likely be disappointed,
frustrated, or both.
Michele Intermont (intermon@kzoo.edu) is Associate Professor of Mathematics at Kalamazoo College. Her area of specialty is algebraic topology.
|
{"url":"http://www.maa.org/publications/maa-reviews/discrete-mathematics-elementary-and-beyond","timestamp":"2014-04-21T13:53:58Z","content_type":null,"content_length":"99896","record_id":"<urn:uuid:f6b0d782-d3bf-45ab-bf14-178593c3cfa3>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00547-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Soquel Algebra Tutor
Find a Soquel Algebra Tutor
...As I continued working with them, they kept telling me things such as "You should really consider being a teacher," "You are really good at explaining things," "Your explanation is so clear,"
or "I would have aced my calculus and physics classes in high school if my teachers had taught the way yo...
11 Subjects: including algebra 2, algebra 1, chemistry, calculus
I am an experienced, enthusiastic, and dedicated instructor who will help students understand physics and mathematics using various method of instruction. I have a M.S. degree in Condensed Matter
Physics from Iowa State University with more than 5 years of teaching experience in physics. I was a r...
14 Subjects: including algebra 2, physics, calculus, algebra 1
...I look forward to helping students grow and mature academically. The foundation for all other math courses, I will emphasize all the basics in this course as it helps students with all future
courses If I have a specialty, it would be algebra. I have tutored many students in the past in this subject. Calculus is one of the more interesting subjects that I like to tutor.
9 Subjects: including algebra 1, algebra 2, calculus, geometry
...I love teaching students at this age, where their critical thinking and reasoning skills have matured to the point where problem-solving can be exciting rather than intimidating. I have a large
library of enrichment material for those students who need/want additional challenge, and I also have ...
10 Subjects: including algebra 1, algebra 2, calculus, geometry
...In addition to my personal education, I have a child with ADHD and understand the special time and patience it takes to achieve proper study habits, motivation and focus to learn. I do know
some ASL, and can work with those who utilize this method of communication. Thank you for considering me as your tutor!
13 Subjects: including algebra 1, reading, English, prealgebra
|
{"url":"http://www.purplemath.com/Soquel_Algebra_tutors.php","timestamp":"2014-04-19T09:36:06Z","content_type":null,"content_length":"23821","record_id":"<urn:uuid:2774edef-16a1-4f6a-9636-6b1fc1aa3f0a>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00294-ip-10-147-4-33.ec2.internal.warc.gz"}
|
"... The aim of this paper is double. From one side we survey the knowledge we have acquired these last ten years about the lattice of all λ-theories ( = equational extensions of untyped λ-calculus)
and the models of lambda calculus via universal algebra. This includes positive or negative answers to se ..."
Cited by 2 (2 self)
The aim of this paper is double. From one side we survey the knowledge we have acquired these last ten years about the lattice of all λ-theories ( = equational extensions of untyped λ-calculus) and
the models of lambda calculus via universal algebra. This includes positive or negative answers to several questions raised in these years as well as several independent results, the state of the art
about the long-standing open questions concerning the representability of λ-theories as theories of models, and 26 open problems. On the other side, against the common belief, we show that lambda
calculus and combinatory logic satisfy interesting algebraic properties. In fact the Stone representation theorem for Boolean algebras can be generalized to combinatory algebras and λ-abstraction
algebras. In every combinatory and λ-abstraction algebra there is a Boolean algebra of central elements (playing the role of idempotent elements in rings). Central elements are used to represent any
combinatory and λ-abstraction algebra as a weak Boolean product of directly indecomposable algebras (i.e., algebras which cannot be decomposed as the Cartesian product of two other non-trivial
algebras). Central elements are also used to provide applications of the representation theorem to lambda calculus. We show that the indecomposable semantics (i.e., the semantics of lambda calculus
given in terms of models of lambda calculus, which are directly indecomposable as combinatory algebras) includes the continuous, stable and strongly stable semantics, and the term models of all
semisensible λ-theories. In one of the main results of the paper we show that the indecomposable semantics is equationally incomplete, and this incompleteness is as wide as possible.
"... A longstanding open problem is whether there exists a model of the untyped λ-calculus in the category Cpo of complete partial orderings and Scott continuous functions, whose theory is exactly
the least λ-theory λβ or the least extensional λ-theory λβη. In this paper we analyze the class of reflexive ..."
Cited by 1 (1 self)
A longstanding open problem is whether there exists a model of the untyped λ-calculus in the category Cpo of complete partial orderings and Scott continuous functions, whose theory is exactly the
least λ-theory λβ or the least extensional λ-theory λβη. In this paper we analyze the class of reflexive Scott domains, the models of λ-calculus living in the category of Scott domains (a full
subcategory of Cpo). The following are the main results of the paper: (i) Extensional reflexive Scott domains are not complete for the λβη-calculus, i.e., there are equations not in λβη which hold in
all extensional reflexive Scott domains. (ii) The order theory of an extensional reflexive Scott domain is never recursively enumerable. These results have been obtained by isolating among the
reflexive Scott domains a class of webbed models arising from Scott’s information systems, called iweb-models. The class of iweb-models includes all extensional reflexive Scott domains, all
preordered coherent models and all filter models living in Cpo. Based on a fine-grained study of an “effective” version of Scott’s information systems, we have shown that there are equations not in
λβ (resp. λβη) which hold in all (extensional) iweb-models.
"... A longstanding open problem is whether there exists a non-syntactical model of the untyped λ-calculus whose theory is exactly the least λ-theory λβ. In this paper we investigate the more general
question of whether the equational/order theory of a model of the untyped λ-calculus can be recursively e ..."
Add to MetaCart
A longstanding open problem is whether there exists a non-syntactical model of the untyped λ-calculus whose theory is exactly the least λ-theory λβ. In this paper we investigate the more general
question of whether the equational/order theory of a model of the untyped λ-calculus can be recursively enumerable (r.e. for brevity). We introduce a notion of effective model of λ-calculus, which
covers in particular all the models individually introduced in the literature. We prove that the order theory of an effective model is never r.e.; from this it follows that its equational theory
cannot be λβ, λβη. We then show that no effective model living in the stable or strongly stable semantics has an r.e. equational theory. Concerning Scott’s semantics, we investigate the class of
graph models and prove that no order theory of a graph model can be r.e., and that there exists an effective graph model whose equational/order theory is the minimum among the theories of graph
models. Finally, we show that the class of graph models enjoys a kind of downwards Löwenheim-Skolem theorem.
"... The symmetric interaction combinators are an equally expressive variant of Lafont’s interaction combinators. They are a graph-rewriting model of deterministic computation. We define two notions
of observational equivalence for them, analogous to normal form and head normal form equivalence in the la ..."
Add to MetaCart
The symmetric interaction combinators are an equally expressive variant of Lafont’s interaction combinators. They are a graph-rewriting model of deterministic computation. We define two notions of
observational equivalence for them, analogous to normal form and head normal form equivalence in the lambda-calculus. Then, we prove a full abstraction result for each of the two equivalences. This
is obtained by interpreting nets as certain subsets of the Cantor space, called edifices, which play the same role as Böhm trees in the theory of the lambda-calculus.
"... Abstract. In this paper we briefly summarize the contents of Manzonetto’s PhD thesis [35] which concerns denotational semantics and equational/order theories of the pure untyped λ-calculus. The
main research achievements include: (i) a general construction of λ-models from reflexive objects in (poss ..."
Add to MetaCart
Abstract. In this paper we briefly summarize the contents of Manzonetto’s PhD thesis [35] which concerns denotational semantics and equational/order theories of the pure untyped λ-calculus. The main
research achievements include: (i) a general construction of λ-models from reflexive objects in (possibly non-well-pointed) categories; (ii) a Stonestyle representation theorem for combinatory
algebras; (iii) a proof that no effective λ-model can have λβ or λβη as its equational theory (this can be seen as a partial answer to an open problem introduced by Honsell-Ronchi Della Rocca in
1984). These results, and others, have been published in three conference papers [36,10,15] and a journal paper [37]; a further journal paper has been submitted [9].
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=292718","timestamp":"2014-04-17T06:49:01Z","content_type":null,"content_length":"26087","record_id":"<urn:uuid:85d3a098-8ddd-4d26-a774-70faef200586>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00306-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mathematical Organ
Institute and Museum of the History of Science, Florence, ITALY
Kircher's Mathematical Organ
A surviving example of Kircher's Organum Mathematicum, housed in the Museum of the History of Science, Florence
Kircher designed this device to contain all of the mathematical knowledge required by a young Baroque prince - Archduke Karl Joseph of Austria - in a single box. By manipulating the wooden rods in
the box, simple arithmetical, geometrical and astronomical calculations could be carried out. The organ could also be used to write messages in cipher, design fortifications, calculate the date of
Easter and compose music. Although Kircher promised to make the acquisition of mathematical knowledge an effortless process with the aid of his invention, the book describing the use of the
mathematical organ published by Kircher's disciple Gaspar Schott in 1664 ran to over 850 pages in length, and many of the operations required the apprentice mathematician to memorize long Latin
|
{"url":"http://archimede.imss.fi.it/kircher/emathem.html","timestamp":"2014-04-20T09:10:57Z","content_type":null,"content_length":"3089","record_id":"<urn:uuid:e5585365-1e6e-456c-92d9-c424b30b5e02>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00533-ip-10-147-4-33.ec2.internal.warc.gz"}
|
tutorcircle team: My publications on Calaméo
Algebra Solver Tutorcircle. com Page No. : 1/4 Algebra Solver Friends today we all are going to learn the basic concept behind linear Inequalities and how to solve Inequalities. Firstly we need to
understand is what does an Inequalities means in Maths ? An inequality tells that two values are not equal. For example a ≠ b shows that a is not equal to b. Slope formula plays an important role in
graphing linear inequalities. So we need to know what slope formula means. Slope of a line describes the steepness, incline or grade of the straight line. The slope through the points (x1, y1) and
(x2, y2) is given as slope formula. ( m = y2-y1/x2-x1 ) where x1 is not equal to x2. In general slope intercept form denotes the formula : y = mx + b. Know More About :- How to Subtract Radicals with
Different Radicands Less
Algebra Solver Tutorcircle. com Page No. : 1/4 Algebra Solver Mathematics is one of the complex subjects. The students have to make a lot of efforts to learn mathematics. To improve their skill in
mathematics the students have to join private institutes and pay heavy fees for this. The online tutors are good choice for the students to learn mathematics. The online tutors are available 24 hours
so the students can take help at any time from their home. Mathematics has many different branches so the online tutors are always available for each branch. Algebra is a very important branch of
mathematics which concerns the study of rules of the operations and relations. Algebra equations include alphabetic symbol and number. Let us take an example to solve an algebraic equation. 2x +7y
+4y --y +6x Here x and y are alphabetic symbol and 2, 7, 4, 6 are the numbers. Algebra Solver is an online math tutor who solves the algebraic problems. An Algebra solver is always available to he
How to Calculate Population Percentage Tutorcircle. com Page No. : 1/4 How to Calculate Population Percentage By the term percentage we mean to find the value of the item in the standard scale of
100. We convert the value per 100, if we need to make the scale of comparison equal. We will learn how to calculate a percentage. If we are given any fraction, we need to say that the denominator is
to be converted to 100. This we will do by writing the given fraction to its equivalent form, such that the denominator converts to 100. Here are some of the tips for it. If we have the denominator
as 10, we will multiply the numerator and the denominator by 10. In case if the denominator is 5, we will multiply the numerator and the denominator by 20. In case the denominator is 25, we will
multiply the numerator and the denominator by 4. If we have the denominator as 50, we will multiply the numerator and the denominator by 20. Know More About :- Coordinate of a Point Less
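The same rescaling-to-100 idea in a couple of lines of Python (an added illustration, not from the publication itself):

```python
def to_percentage(numerator, denominator):
    # Rescale the fraction so that the denominator becomes 100.
    return numerator * 100 / denominator

print(to_percentage(3, 5))    # 60.0, i.e. multiply top and bottom by 20
print(to_percentage(17, 25))  # 68.0, i.e. multiply top and bottom by 4
```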
Exams. Edurite. com Page : 1/3 Bank Po Book Know More About :- Bank Po sample Papers Bank Po Book SBI PO PWB English State Bank of India and State Bank Associates Probationary Officer Exam Practice
Work Book Objective Type English Language(Grammar, Vocabulary, Comprehension) General Awareness. Marketing and Computer, Data Analysis and Interpretation, Reasoning (High Level), Descriptive Type:
English Language (Comprehension, Short Prices, Letter Writing & Essay), Including Model Solved Papers of: 07. 03. 2010, 18. 04. 2010 & 07. 08. 2011 IBPS RRBs Objective General Hindi—Hindi IBPS RRBs
Objective General Hindi Useful for Office Assistant (Multipurpose) Officer Scale-I, Scale-II (General Banking Officer) & Specialist Cadre and Scale-III IBPS Bank Clerical CWE PWB—English IBPS BANK
CLERICAL Common Written Examination Practice Work Book With Model Solved Papers of IBPS Clerk CWE 2011 4. 12. 2011 (Eastern Zone), 11. 12. 2011 (Eastern Zone), 11. 12. 2011 (South Zone), 27. 11. 2011
Free Math Help 1. Adding Doing an addition is adding a number to another number. For instance : 1 + 1 = 2 1 + 4 = 5 7 + 2 = 9 When you add 1 to the number 1 you get 2, when you add 4 to the number 1
you get 5, when you add 2 to the number 7 you get 9. 2. Subtracting Doing a subtraction is taking away a number from another number. For instance : Know More About Derivative of e tanx Free Math Help
Tutorcircle. com Page No. : 1/4 Less
Real number worksheets 1. The numbers below all belong to a subset of the real number system. 0, 5, 10, 20, 30 Which subset of the real number system do these numbers belong to? a. rational numbers
b. whole numbers c. integers d. counting numbers 2. Which subset of the real number system does not contain the number 0? a. integers b. natural numbers c. rational numbers d. whole numbers 3. Which
equation correctly represents the product of a nonzero real number and zero? Know More About Quadratic Formulas Real number worksheets Tutorcircle. com Page No. : 1/4 Less
Distance from Origin to Plane Tutorcircle. com Page No. : 1/4 Distance from Origin to Plane A plane is a two-dimensional doubly ruled surface spanned by two linearly independent vectors. The
generalization of the plane to higher dimensions is called a hyperplane. The angle between two intersecting planes is known as the dihedral angle. The equation of a plane in 3D space is defined with
normal vector (perpendicular to the plane) and a known point on the plane. Let the normal vector of a plane, and the known point on the plane, P1. And, let any point on the plane as P. We can define
a vector connecting from P1 to P, which is lying on the plane. Since the vector and the normal vector are perpendicular each other, the dot product of two vector should be 0. This dot product of the
normal vector and a vector on the plane becomes the equation of the plane. By calculating the dot product, we get; Know More About :- How to do Cumulative Frequency Less
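A short sketch of the computation the entry describes, with the plane given by a normal vector and a known point (illustrative code, not taken from the PDF):

```python
import math

def distance_origin_to_plane(normal, point_on_plane):
    # The plane through P1 with normal n consists of the points P with n . (P - P1) = 0,
    # so the distance from the origin is |n . P1| / |n|.
    dot = sum(n * p for n, p in zip(normal, point_on_plane))
    return abs(dot) / math.sqrt(sum(n * n for n in normal))

# A plane with normal (0, 0, 1) through (5, 2, 3) is the plane z = 3.
print(distance_origin_to_plane((0, 0, 1), (5, 2, 3)))  # 3.0
```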
Distance on a Coordinate Plane Tutorcircle. com Page No. : 1/4 Distance on a Coordinate Plane Coordinate Plane The coordinate plane or Cartesian plane is a basic concept for coordinate geometry.
It describes a two-dimensional plane in terms of two perpendicular axes: x and y. The x-axis indicates the horizontal direction while the y-axis indicates the vertical direction of the plane. In the
coordinate plane, points are indicated by their positions along the x and y-axes. Slopes On the coordinate plane, the slant of a line is called the slope. Slope is the ratio of the change in the
y-value over the change in the x-value. You can use what you know about right triangles to find the distance between two points on a coordinate grid. Finding Distance on the Coordinate Plane Know
More About :- Raphing composite functions Less
Bipartite Graph Matching Algorithm Tutorcircle. com Page No. : 1/4 Bipartite Graph Matching Algorithm The Marriage Problem and Matchings Suppose that in a group of n single women and n single men
who desire to get married, each participant indicates who among the opposite sex would be acceptable as a potential spouse. This situation could be represented by a bipartite graph in which the
vertex classes are the set of n women and the set of n men, and a woman x is joined by an edge to a man y if they like each other. For example we could have the women Ann, Beth, Christina, Dorothy
Evelyn, and Fiona, and the men Adam, Bob, Carl, Dan, Erik, and Frank. If Ann liked Adam and Bob, (and vice-versa), Beth liked Adam and Carl, Christina liked Dan, Erik and Frank, Dorothy liked Bob,
Evelyn liked Adam and Dan, and Fiona liked Frank, we would have the following bipartite graph. In this situation could we marry everybody to someone they liked? This is one version of the Marriage
Problem. Si Less
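The marriage problem described above is the classic maximum bipartite matching problem. As an added illustration (not part of the publication), the standard augmenting-path method applied to the very lists in the example finds a partner for everyone:

```python
def max_bipartite_matching(likes):
    """likes maps each woman to the men she finds acceptable (acceptability is mutual)."""
    match = {}  # man -> woman currently paired with him

    def try_assign(woman, seen):
        for man in likes[woman]:
            if man in seen:
                continue
            seen.add(man)
            # Pair them if the man is free, or if his current partner can be re-matched.
            if man not in match or try_assign(match[man], seen):
                match[man] = woman
                return True
        return False

    return sum(try_assign(w, set()) for w in likes), match

likes = {
    "Ann": ["Adam", "Bob"], "Beth": ["Adam", "Carl"],
    "Christina": ["Dan", "Erik", "Frank"], "Dorothy": ["Bob"],
    "Evelyn": ["Adam", "Dan"], "Fiona": ["Frank"],
}
size, pairs = max_bipartite_matching(likes)
print(size)   # 6 -- everyone can be married to someone they like
print(pairs)
```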
Analytic Geometry Tutorcircle. com Page No. : 1/4 Analytic Geometry An analytic geometry is also similar to the algebra which is used model the geometric objects, and the geometric objects are
points, straight line, and circle. Points are represented as order pair in the plane analytic geometric and in the case straight line it is represented as set of points which satisfy the linear
equation and the part of analytic geometric which deal with the linear equation is said to be linear algebra. Coordinate geometry, Cartesian geometry are all the other name of analytic geometry. And
this plane analytic geometry is based on the coordinate system and the principal of algebra and analysis. Now we will see the basic principal of plane analytic geometry which is given below: Every
point in the analytic geometry is having a pair of real number coordinates. Cartesian coordinates system is one of the most important in the coordinate system in the plane analytic geometry, where
all the x co Less
Least Common Denominator Definition Tutorcircle. com Page No. : 1/4 Least Common Denominator Definition In this article we will discuss about the least common denominator. Before going to
understand about how to find the least common denominator we will fist pay some attention to the definition of the least common denominator. The Least common denominator in the context of the math is
being defined as the common multiple which is lowest, of any no. of the denominators in a set of the fractions or we can present our definition of the least common denominator in another way that
Least common denominator is the lowest of the integers which are positive and are representing the multiple of the denominators. The least common denominator is generally represented by LCD. Now we
have discussed the definition of the Least Common Denominator in the last paragraph so we will now try to understand something about the term lowest common multiple. But before going to study about
the lowest Less
Lowest Common Denominator Tutorcircle. com Page No. : 1/4 Lowest Common Denominator The Lowest Common Denominator in mathematics is the least common multiple of 2 or more denominators of a number
of fractions or we can say that Lowest Common Denominator is the smallest of the positive integers which are the multiple of the denominators. lowest common denominator in short form is written as
LCD. Now, we will get to know more about the term ‘least common multiple ‘. Before understanding the meaning of the term least common multiple, it is first necessary to know that what is the meaning
of a ‘multiple ‘. The multiple of any no. is some whole no. times of that no. . Let us take an example. Let us take any number, say 4 then some of the multiples of 4 are 4, 8, 12, 16, 20 and 24. Now,
if we have to find the Lowest Common Denominator of 2 fractions, first we have to find the common multiples of their denominators. The common multiple of any 2 numbers is a number which is a mult
Straight Line Tutorcircle. com Page No. : 1/4 Straight Line Straight Line is defined as the shortest distance between two points. A straight line is just a line which joins two points without any
curve. In geometry a straight line is made by joining simple two points. In other words you can say straight line eliminates the distance between two points. In Graph, straight line can be
represented as: The above diagram shows that PQ is a straight line and it is not a curve. Formula of straight line is stated by a linear equation and its general form is Px + Qy + C = 0, here P,Q are
not both 0. Now, straight line equation is given by y = mx+d where value of m is the given slope and the y-intercept is given by d. For making a straight line you have to follow certain steps and the
steps are: Know More About :- How to Solve Square Roots Less
Subtracting Mixed Numbers Tutorcircle. com Page No. : 1/4 Subtracting Mixed Numbers Numbers are the basic need for the mathematical process. We are going to learn about how to perform mathematical
operations on the mixed numbers. Here we start with Subtracting Mixed Numbers. When we talk about subtracting mixed numbers, we will first convert the mixed numbers in the form of improper fraction
numbers. Once the mixed numbers are converted in the form of improper fraction numbers, we say that the denominators of the two numbers need to be same. It means to do the addition or subtraction on
the improper fraction numbers is only possible when we have two fraction numbers as like fractions. So we check if the subtrahend and the minuend are like fractions or not. If they are unlike
fraction numbers, then we come to the conclusion that the fraction numbers must be converted into their equivalent form such that the denominators of the two fractions become same. Know More About :-
Ang Less
Conic Section Tutorcircle. com Page No. : 1/4 Conic Section Conic section can be defined as a curve which is made by the intersection of a cone that resides on a plane. And in other terms it can
be assumed as the plane algebraic curve with the degree of two. The general equation of any conic section is given by: Sp2 + Tpq + Uq2 + Vp + Wq + X = 0; If the value of T = 0 then we will see the
‘S’ and ‘U’ in the equations. Name of conic section Relationship of A and C. Parabola S = 0 or U = 0 but both the values of ‘S’ and ‘U’ are never equals to 0. Circle - In case of circle the value of
‘S’ and ‘U’ are both equal. Ellipse - In case of ellipse the sign of ‘S’ and ‘U’ are same but ‘S’ and ‘U’ are not equal. Hyperbola - In case of hyperbola the signs of ‘S’ and ‘U’ are opposite. Let s
have small introduction about all conic sections. Hyperbola can be defined as a line in a graph that has curve shape. Generally the equation of hyperbola is given by: x2 / F2 - y2 / G2= 1; this is e
Equation For A Line Tutorcircle. com Page No. : 1/4 Equation For A Line In mathematics we have so many fields like geometry, algebra, calculus, trigonometry etc but in geometry the meaning of line
is a straight line which consists of at least two points and can be extend in both the directions up to infinity but the condition is we have to extend this in straight manner and no curve should be
present otherwise it is not a line. Now we are going to learn the meaning of Equation of a Line, so there are various forms of a straight line like slope intercept form, point slope form etc but we
are going to see standard form of straight line. Equation of line is Ax + By = C Where A and B are not equal to 0 In the above equation x and y are coordinates and A, B and C are constants and also
constants A and B are not equal to 0 and also there is no thickness in line which is property of line. Know More About :- Equation Of A Parabola Less
Geometric Means Calculator Tutorcircle. com Page No. : 1/4 Geometric Means Calculator Geometric Mean of any series which contains N observations is the Nth root of the product of the values. If
there are two values, then the square root of the product of the values is called the geometrical mean. In case there are three values, then the cube root of the values is the geometrical mean. Let
us look at the ungrouped data and find how to find the geometrical Mean of the ungrouped data. Geometric Mean Calculator help us to calculate G. M. In such cases we say that the Geometrical Mean =
nth root ( the product of n values ) To do such calculations we use the logarithms and so it can be written as follows : Log (G. M. ) = (1/n ) * ( log(x1. X2 . x3 . x4 …… xn) ), = ( 1/ n) [ log x1 +
log x2 + log x3 + log x4 + log x5 …… + log xn ] Thus we conclude that the G. M. of a set of observations is the arithmetic mean of their logarithm values . Know More About :- Converting Degrees to R
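The logarithmic form described above translates directly into code; a small sketch (illustrative, not from the PDF):

```python
import math

def geometric_mean(values):
    # Arithmetic mean of the logarithms, then exponentiate back.
    return math.exp(sum(math.log(v) for v in values) / len(values))

print(geometric_mean([2, 8]))         # 4.0, the square root of 2 * 8
print(geometric_mean([1, 3, 9, 27]))  # about 5.196, the 4th root of the product
```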
How to Do Fractions Tutorcircle. com Page No. : 1/4 How to Do Fractions Fractions are the numbers which are expressed in the form of a / b, where a and b are the whole numbers, b <>0. Here we will
learn about how to do fractions. Different mathematical operations can be performed on the fractions. We can even compare the two fractions. In case of comparing two fraction numbers, we can do it by
two methods. First method of comparing two fraction numbers is by making the two fractions as the like fractions. Once the denominator of the two fraction numbers becomes same by writing them with
the same denominator, then we say that the smaller numerator becomes the smaller fraction number. Looking at the another method of comparing the two fraction numbers is by cross multiplication. Here
we will proceed as follows: Know More About :- Spherical Coordinates to Cartesian Less
|
{"url":"http://en.calameo.com/subscriptions/1450581","timestamp":"2014-04-20T00:42:11Z","content_type":null,"content_length":"97007","record_id":"<urn:uuid:40bc43eb-13c5-4f09-9a2f-4774adef267b>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00099-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Summary: Degrees and choice numbers
Noga Alon
The choice number ch(G) of a graph G = (V, E) is the minimum number k such that for every
assignment of a list S(v) of at least k colors to each vertex v ∈ V, there is a proper vertex coloring
of G assigning to each vertex v a color from its list S(v). We prove that if the minimum degree of
G is d, then its choice number is at least (1/2 - o(1)) log_2 d, where the o(1)-term tends to zero as d
tends to infinity. This is tight up to a constant factor of 2 + o(1), improves an estimate established
in [1], and settles a problem raised in [2].
1 Introduction
An undirected, simple graph G = (V, E) is k-choosable if for every assignment of a list S(v) of at least
k colors to each vertex v ∈ V, there is a proper vertex coloring of G assigning to each vertex v a color
from its list S(v). The choice number ch(G) of G, (which is also called the list chromatic number of
G) is the minimum number k such that G is k-choosable.
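As a concrete illustration of this definition (an added aside, not part of the paper), the classical list assignment on K_{3,3} shows how the choice number can exceed the chromatic number; a brute-force check in Python confirms that no proper coloring from these 2-element lists exists.

```python
from itertools import product

# K_{3,3}: every vertex a_i on one side is adjacent to every vertex b_j on the other.
a_lists = [{1, 2}, {1, 3}, {2, 3}]
b_lists = [{1, 2}, {1, 3}, {2, 3}]

def has_list_coloring(a_lists, b_lists):
    for a_colors in product(*a_lists):
        for b_colors in product(*b_lists):
            # A coloring is proper iff no color is used on both sides.
            if not set(a_colors) & set(b_colors):
                return True
    return False

print(has_list_coloring(a_lists, b_lists))  # False: ch(K_{3,3}) > 2 although its chromatic number is 2
```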
The concept of choosability, introduced by Vizing in 1976 [6] and independently by Erdos, Rubin
and Taylor in 1979 [4], received a considerable amount of attention recently. Many of the recent results
can be found in the survey papers [1], [5] and their many references. By definition, the choice number
ch(G) of any graph G is at least as large as its chromatic number χ(G), and it is well known that
strict inequality can hold. In fact, it is shown in [4] that the choice number of the complete bipartite
|
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/660/1705729.html","timestamp":"2014-04-20T08:33:48Z","content_type":null,"content_length":"8614","record_id":"<urn:uuid:f6f75fb7-6f14-42fc-915b-8e70a7e31f16>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00492-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[racket] Declaring type of identifier defined which typed/racket exports as untyped.
[racket] Declaring type of identifier defined which typed/racket exports as untyped.
From: Jens Axel Søgaard (jensaxel at soegaard.net)
Date: Sun Aug 5 14:04:18 EDT 2012
This program:
#lang typed/racket
Gives the error:
Type Checker: untyped identifier integer-sqrt/remainder imported from
module <typed/racket> in: integer-sqrt/remainder
And this program:
#lang typed/racket
(: integer-sqrt/remainder : Natural Natural -> Natural)
gives this error:
Type Checker: Declaration for integer-sqrt/remainder provided, but
integer-sqrt/remainder is defined in another module in:
How do I fix this?
/Jens Axel
Posted on the users mailing list.
|
{"url":"http://lists.racket-lang.org/users/archive/2012-August/053339.html","timestamp":"2014-04-18T05:34:40Z","content_type":null,"content_length":"6102","record_id":"<urn:uuid:31d16686-da90-461e-90ae-b7edc06cd12b>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00294-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Journal of Thermodynamics
Volume 2012 (2012), Article ID 580593, 12 pages
Research Article
CFD Analysis for Heat Transfer Enhancement inside a Circular Tube with Half-Length Upstream and Half-Length Downstream Twisted Tape
^1Department of Mechanical Engineering, MIT College of Engineering, Pune-411028, India
^2Department of Mechanical Engineering, Flora Institute of Technology, Pune-412205, India
Received 30 August 2012; Revised 31 October 2012; Accepted 31 October 2012
Academic Editor: Ahmet Z. Sahin
Copyright © 2012 R. J. Yadav and A. S. Padalkar. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and
reproduction in any medium, provided the original work is properly cited.
CFD investigation was carried out to study the heat transfer enhancement characteristics of air flow inside a circular tube with a partially decaying and partly swirl flow. Four combinations of tube
with twisted-tape inserts, the half-length upstream twisted-tape condition (HLUTT), the half-length downstream twisted-tape condition (HLDTT), the full-length twisted tape (FLTT), and the plain tube
(PT) with three different twist parameters (, 0.27, and 0.38) have been investigated. 3D numerical simulation was performed for an analysis of heat transfer enhancement and fluid flow for turbulent
regime. The results of CFD investigations of heat transfer and friction characteristics are presented for the FLTT, HLUTT, and the HLDTT in comparison with the PT case.
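The "heat transfer and friction characteristics" reported here are the Nusselt number and the friction factor; for orientation, they are reduced from simulation output with the standard definitions sketched below (illustrative numbers, not the authors' data or exact post-processing).

```python
# Standard tube-flow definitions applied to illustrative CFD output values.
k_air = 0.026           # W/(m K), thermal conductivity of air
rho, u_mean = 1.2, 5.0  # kg/m^3 and m/s, density and mean axial velocity
D, L = 0.05, 1.0        # m, tube diameter and test-section length

h = 85.0                # W/(m^2 K), heat transfer coefficient from the simulation
dp = 40.0               # Pa, pressure drop over the length L from the simulation

Nu = h * D / k_air                            # Nusselt number
f = dp / ((L / D) * (rho * u_mean ** 2 / 2))  # Darcy friction factor

print(Nu, f)
```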
1. Introduction
Heat transfer enhancement technology (HTET) has been developed and widely applied over the last decade to heat exchanger applications such as refrigeration, automotive systems, the process industry, nuclear
reactors, and solar water heaters. To date, there have been many attempts to reduce the sizes, costs, and energy consumption of heat exchangers; the most influential factors are the heat transfer
coefficients and pressure drops, which generally lead to lower capital costs.
HTET can offer significant economic benefits in various industrial processes. By “augmentation” we mean an enhancement in heat transfer over that which exists on the reference surface under
similar operating conditions.
Bergles and Webb [1, 2] have reported comprehensive reviews on techniques for heat transfer enhancement. For a single-phase heat transfer, the enhancement has been brought using roughened surfaces
and other augmentation techniques, such as swirl/vortex flow devices and modifications to duct cross sections and surfaces. These are the passive augmentation techniques, which can increase the
convective heat transfer coefficient on the tube side. Many techniques for the enhancement of heat transfer in tubes have been proposed over the years.
Siddique et al. [3] reported the following heat transfer enhancers in his review paper: (a) extended surfaces including fins and microfins, (b) porous media, (c) large particles suspensions, (d)
nanofluids, (e) phase-change devices, (f) flexible seals, (g) flexible complex seals, (h) vortex generators, (i) protrusions, and (j) ultrahigh thermal conductivity composite materials. Many methods
that assist in heat transfer enhancement effects have been extracted from the literature.
Among of these methods discussed in the literature are using joint fins, fin roots, fin networks, biconvections, permeable fins, porous fins, and helical microfins and using complicated designs of
twisted-tapes. The authors concluded that more attention should be made towards single phase heat transfer augmented with microfins in order to alleviate the disagreements between the works of the
different authors.
It was also noted that additional attention should be paid to uncovering the main mechanisms of heat transfer enhancement due to the presence of nanofluids. Further, it was suggested that successful modeling of flow and heat transfer inside porous media, as in the work of Kim and Kuznetsov [14], a well-recognized passive enhancement method, could help clarify the mechanism of heat transfer enhancement due to nanofluids, given the similarities between the two media. Many recent works on passive augmentation of heat transfer using vortex generators, protrusions, and ultrahigh thermal conductivity composite materials were also reported by the authors. Nield and Kuznetsov [15] reported an analysis of laminar forced convection in a helical pipe of circular cross section filled with a fluid-saturated porous medium, for the case when the curvature and torsion of the pipe are both small. Later, Cheng and Kuznetsov [16, 17] investigated laminar flow in a helical pipe filled with a fluid-saturated porous medium. The maximum levels of heat transfer enhancement estimated for each enhancer are presented in Table 1.
Inside the round tubes, a wide range of inserts, such as tapered spiral inserts, wire coil, twisted-tape with different geometries, rings, disks, streamlined shapes, mesh inserts, spiral brush
inserts, conical-nozzles, and V-nozzles, have been used by Promvonge and Eiamsa-ard [18] and Promvonge [19].
Smithberg and Landis [20] estimated the tape-fin effect assuming a uniform heat transfer coefficient on the tape wall equal to that on the tube wall. The authors reported that the fin effect increases the heat transfer, but in practice the tape-fin effect will not attain such a high value because of the poor contact between the tape and the tube. In order to estimate the tape-fin effect, Lopina and Bergles [21] conducted experiments using insulated tapes. Assuming zero contact resistance between tube and tape, with equal and uniform heat transfer coefficients on the tube and tape walls, they predicted that 8% to 17% of the heat is transferred through the tape.
Date [22] reported the prediction of fully developed, laminar and turbulent, and uniform-property flow in a tube containing a twisted-tape. The predictions have shown that significant augmentation in
heat transfer can be obtained at high Reynolds and Prandtl numbers, low twist ratios, and high fin parameters.
Manglik and Bergles [23] presented experimental correlations for pressure drop and heat transfer coefficient for laminar, transition, and turbulent flow in isothermal-wall tubes with twisted-tape
inserts. Unlike previous correlations, they included the tape thickness in order to properly account for the helical twisting of the streamlines.
Al-Fahed et al. [24] reported an experimental work to study the heat transfer and friction characteristics in a microfin tube fitted with twisted-tape inserts for three different twist/width ratios
under laminar flow region.
Saha and Dutta [25] investigated experimental data on swirl flow due to twisted-tapes in the laminar regime, reporting the friction factor and Nusselt number over a large Prandtl number range (from 205 to 518). The authors observed that, on the basis of constant pumping power, a short-length twisted-tape is a good choice because the swirl generated by the tape decays slowly downstream, which increases the heat transfer coefficient with minimum pressure drop compared with a full-length twisted-tape.
The concluding remarks from earlier numerical and experimental studies are as follows.
Rahimi et al. [26] carried out experimental and CFD studies on the heat transfer and friction factor characteristics of a tube equipped with modified twisted-tape inserts. The investigations covered the classic insert and three modified twisted-tape inserts. The authors observed that the Nusselt number and performance of the jagged insert were higher than those of the other inserts.
Eiamsa-ard et al. [27] carried out a numerical analysis of heat and fluid flow through a round tube fitted with a twisted-tape. The authors investigated the effect of the tape clearance ratio on the flow, heat transfer, and friction factor.
Chiu and Jang [28] studied numerically and experimentally three-dimensional gas-fluid flow and heat transfer inside tubes with longitudinal strip inserts (both with/without holes) and twisted-tape
inserts twisted at three different angles (, 24.4°, and 15.3°).
From the above studies it can be concluded that twisted-tape inserts along the full length of the tube provide one of the most attractive heat transfer augmentation techniques for flow inside tubes on account of their simplicity and effectiveness. Short-length twisted-tapes have been considered by many researchers because they reduce the pressure drop while still augmenting the heat transfer.
A CFD prediction of the heat transfer and friction characteristics of partially decaying swirl flows in the turbulent regime has been taken up to study the structure of the velocity and temperature fields. The effects of the twisted-tape location on the pressure drop and heat transfer characteristics, due to the creation of swirl in the turbulent flow within the tubes, were also studied.
2. Numerical Simulations
2.1. Physical Model
The numerical simulations were carried out using the CFD software package FLUENT-6.2.16 that uses the finite-volume method to solve the governing equations.
Geometry was created for air flowing in an electrically heated stainless steel tube of 22 mm diameter () and length () equal to 90 times the diameter, as in the experimental setup shown in Figure 1. A computational model was created in GAMBIT-2.2.30 as shown in Figure 3.
In this study, the effects of the twist parameter (, 0.27, and 0.38) and two heat flux inputs ( and 6200 W/m^2) on the heat transfer rate (Nu), the friction factor, and the thermal performance factor are examined under uniform heat flux conditions, with air as the working fluid and with different inlet frontal velocities corresponding to Reynolds numbers Re between 25000 and 110000.
Twisted-tape inserts were used in the following configurations:
(a) Upstream condition (HLUTT): tube with twisted-tape located in the first 50 diameters of the heated section (partially decaying swirl flow).
(b) Downstream condition (HLDTT): tube with twisted-tape located in the second 50 diameters of the heated section (partly swirl flow).
(c) Full-length condition (FLTT): tube with twisted-tape located along the full length of the heated section (fully swirl flow).
(d) Plain tube condition (PT): tube without twisted-tape along the full length of the heated section (smooth tube flow).
2.2. Numerical Method
The available finite-difference procedures for swirling flows and boundary layers are employed to solve the governing partial differential equations. Some simplifying assumptions are required to apply the conventional flow and energy equations to the heat transfer process in a tube with a twisted-tape: the flow is taken to be turbulent, steady, and incompressible with constant properties, and natural convection and radiation are neglected. The three-dimensional equations of continuity, momentum, and energy are solved in the fluid region.
In the Reynolds-averaged approach for turbulence modeling, the Reynolds stresses appearing in the momentum equation are modeled by employing the Boussinesq hypothesis, which relates the Reynolds stresses to the mean velocity gradients through a turbulent viscosity; an appropriate turbulence model is used to compute the turbulent viscosity term. The second-order upwind scheme was used to discretize the convective terms. The coupling between velocity and pressure was handled using the SIMPLE algorithm. The standard wall treatment was chosen for the near-wall modeling.
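For reference, the standard steady, incompressible, Reynolds-averaged forms of these governing equations (the usual textbook statement, which the description above matches) are

    Continuity:   \partial \bar{u}_i / \partial x_i = 0
    Momentum:     \rho \, \partial (\bar{u}_i \bar{u}_j) / \partial x_j = -\partial \bar{p}/\partial x_i + \partial/\partial x_j \left[ \mu \left( \partial \bar{u}_i/\partial x_j + \partial \bar{u}_j/\partial x_i \right) - \rho \overline{u_i' u_j'} \right]
    Energy:       \partial (\rho c_p \bar{u}_j \bar{T}) / \partial x_j = \partial/\partial x_j \left[ (k + k_t) \, \partial \bar{T}/\partial x_j \right]

with the Boussinesq hypothesis

    -\rho \overline{u_i' u_j'} = \mu_t \left( \partial \bar{u}_i/\partial x_j + \partial \bar{u}_j/\partial x_i \right) - \tfrac{2}{3} \rho \kappa \, \delta_{ij},

where \kappa is the turbulent kinetic energy. The specific turbulence model is not named above; if the standard k-epsilon model is assumed, the turbulent viscosity is \mu_t = \rho C_\mu \kappa^2 / \varepsilon.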
To validate the accuracy of the numerical solutions, a grid independence test was performed for the physical model. The grid is highly concentrated near the wall and in the vicinity of the twisted-tape. Four grid systems with about 130,000, 300,000, 660,000, and 1,200,000 cells were used to check grid independence, and the friction factors for these four mesh configurations were compared as shown in Figure 2. After the grid independence test, the simulation grid in this study was meshed using about 660,000 tetrahedral cells.
Figure 3 shows an example of the partially meshed configuration of the round tube equipped with a twisted-tape. It consists of a tube of 22 mm diameter containing the twisted-tape insert, a test section of 2000 mm, and a calming section of 1200 mm, matching the experimental setup, with a twist angle of 0.14. To capture wall gradient effects, the mesh was made finer toward the walls. There are a total of 660,000 nodes in the simulation domain.
In addition, a convergence criterion of 10^−6 was used for energy and 10^−3 for the mass conservation of the calculated parameters.
The air inlet temperature was specified as 300K, and three assumptions were made in the model: (1) the uniform heat flux was along the length of test section, (2) the wall of the inlet calming
section was adiabatic, and (3) the physical properties of air were constant and were evaluated at the bulk mean temperature. The velocity inlet boundary condition was adopted at the inlet and outflow
at the outlet of the domain shown in Figure 1.
2.3. Data Reduction
In order to express the results more efficiently, the measured data were reduced using the following procedure. Three important parameters were considered: the friction factor, the Nusselt number, and the thermal performance factor, which were used to determine the friction loss, the heat transfer rate, and the effectiveness of heat transfer enhancement in the tube, respectively. The friction factor is computed from the pressure drop across the length of the tube. The Nusselt number characterizes the convective heat transfer, and the average Nusselt number is obtained by averaging the local values over the test section. The flow characteristics, the Nusselt number, and the Reynolds number were based on the average of the tube wall and outlet air temperatures. The local wall temperature, the inlet and outlet air temperatures, the pressure drop across the test section, and the air flow velocity were measured for the heated tube with different tube inserts. The average Nusselt numbers and friction factors were calculated with all fluid properties determined at the overall bulk mean temperature. The thermal performance factor compares the Nusselt number and friction factor of the tube with the twisted-tape swirl generator against those of the plain tube (Nu[o] and f[o]).
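In conventional notation, these quantities are usually defined as

    f = \Delta p / \left[ (L/D) \, \rho u^2 / 2 \right], \qquad Nu = h D / k, \qquad \eta = \frac{Nu/Nu_0}{(f/f_0)^{1/3}},

where Nu_0 and f_0 are the plain tube values. These are the standard definitions rather than the authors' exact expressions; in particular, the constant-pumping-power form of the thermal performance factor shown last is assumed here.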
3. Results and Discussion
3.1. Validation of Setup
The CFD simulation result for the plain tube (PT) without a twisted-tape insert was validated against the experimental data, as shown in Figures 4(a) and 4(b). The Dittus-Boelter equation for heat transfer and the Blasius equation for the friction factor were used as the reference correlations. The experimental results are within deviation for the heat transfer (Nu) and for the friction factor (). Similarly, the CFD results for the plain tube are compared with the analytical correlations; they are within deviation for the heat transfer (Nu) and for the friction factor (), with slightly higher deviation for Re above 75000.
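For reference, the correlations mentioned are usually written as

    Dittus-Boelter:  Nu = 0.023 \, Re^{0.8} Pr^{0.4}   (for heating)
    Blasius:         f = 0.316 \, Re^{-0.25}           (Darcy friction factor)

The Fanning form of the Blasius correlation, f = 0.079 Re^{-0.25}, is also common; which friction factor definition the authors use is not stated here.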
3.2. Heat Transfer
Effect of the FLTT twisted-tapes and HLUTT twisted-tapes on the heat transfer rate is presented in Figure 5. The results for the tube fitted with HLUTT and HLDTT have been compared with those for a
plain tube and the FLTT under similar operating conditions for .
It was seen that the effect of different inserts on the heat transfer rate was significant for all the Reynolds numbers used due to the induction of high reverse flows and disruption of boundary
layers. This technique has resulted in an improvement of the heat transfer rate over the plain tube.
It is clearly seen that the heat transfer coefficient increases as the Reynolds number increases.
It was found that the heat transfer coefficients in the tubes with the FLTT were 29–86% greater than those in the case of the plain tubes without inserts.
When the twisted-tape inserts with the HLUTT condition were used, the heat transfer coefficients were 8–37% higher than those of the plain tubes, and with the HLDTT condition they were 9–47% higher. However, the HLUTT and HLDTT conditions showed a 15–95% reduction in the heat transfer coefficient when compared to the FLTT condition.
3.3. Friction Factor
The variation of the pressure drop is presented in terms of the friction factor as shown in Figure 6. It shows the friction factor versus the Reynolds numbers for different combinations of inserts.
It is seen that the friction factors obtained from three different inserts follow a similar trend and this decreases with an increase in the Reynolds number. The increase in friction factor with
swirl flow is much higher than that with an axial flow.
It was found that the pressure drop for the FLTT inserts was 203–623% higher than that for the plain tubes. For the HLUTT inserts, it was 36–170% higher than that for the plain tubes; however, the pressure drop for the HLUTT was 82–168% less than that for the FLTT inserts. For the HLDTT inserts, it was 31–144% higher than that for the plain tubes. The highest pressure drop occurred when the tape inserts with a twist ratio were used.
3.4. Thermal Performance Factor
From Figure 7, it has been observed that the thermal performance factor tends to decrease with an increasing twist parameter and with an increase in the Reynolds number for HLUTT and HLDTT
twisted-tapes. Whereas for FLTT, it was found that thermal performance factor tends to decrease with an increasing twist ratio and increase with an increase in the Reynolds number. For all the twist
ratios, the HLUTT and HLDTT configurations have been seen to give the thermal performance factors in the range of 1.02–1.16, which is comparable with those provided by the FLTT (1.03–1.24).
3.5. Streamline and Pathline
Plots of pathlines through the tube with twisted-tape inserts have been shown in Figure 8. It is evident that the insertion of the tape induces the swirling flow, and the twisted-tapes generate two
types of flows which are (1) a swirling flow and (2) an axial or straight flow near the tube wall. It is noteworthy that the FLTT gives higher velocity of the fluid flow through the test section
compared to those with partially extending tapes where decaying of swirl flow along the length of tube takes place.
3.6. Velocity Vector Plots
Vector plots of velocity predicted for the tubes with FLTT and PT configuration are depicted in Figure 9 and Figure 10. As seen in the figures, two longitudinal vortices are generated around tapes in
the core flow area. These longitudinal vortices play a critical role in disturbing the boundary layer and making the temperature uniform in the core flow. At the same time, it has been found that the vortex tends to decay along the length in the HLUTT case, where the swirl partially decays because of the absence of the twisted-tape in the latter part of the test section, as shown in Figure 11. For the HLDTT case, by contrast, the vortex tends to grow along the length after the initial half of the test section, as shown in Figure 12. The tangential velocity is almost zero for the plain tube at
all the Reynolds numbers. However, it is seen that this velocity component increases when any of the above mentioned inserts are placed inside the tube.
3.7. Temperature Profiles Analysis
(a) Plain Tube Data
It is observed from the smooth tube temperature profile that the maximum wall temperature in the test section and the maximum temperature difference between the tube wall and the fluid both occur at the same axial location. The maximum wall temperature varies from 48°C to 215°C, the maximum wall-to-fluid temperature difference varies from 10°C to 56°C, and the wall temperature profile shows fully developed characteristics at roughly 17 diameters from the entrance. At any location the wall temperature decreases with Reynolds number and increases with heat flux.
(b) Twisted-Tape Data
Figures 13 and 14 show wall temperature profiles for all the cases investigated. The data presented reveal the following trends.
(i) Effect of Reynolds number. At any axial location the local wall temperature decreases with increasing Reynolds number. This is quite expected, since heat transfer coefficients increase with Reynolds number, bringing down both the wall-to-fluid temperature difference and the absolute wall temperature.
(ii) Effect of heat flux. In all the cases, an increase in heat flux results in an increase in the local wall temperature. For the upstream condition, an increase in heat flux caused, in addition to an increase in the local wall temperature, a shift in the location of the maximum wall temperature.
(iii) Effect of twist parameter. It is observed that the twist parameter has a significant effect on both the magnitude of the maximum wall temperature and its variation along the test section. This effect is discussed separately for the upstream and downstream conditions.
Upstream Condition
A dip in wall temperature is observed at the end of the tape section for all values of the twist parameter. In the swirl decay section following the tape, the local wall temperature increases with an increase in the twist parameter.
Downstream Condition
The maximum wall temperature is located at the same position for all values of the twist parameter. A steep temperature drop is noticed near the entrance to the tape section for and 0.27, except for . For all values of the twist parameter, the tape section records the lowest wall temperature and the smooth section the highest.
(iv) Effect of tape location. It is observed that the local wall temperatures are lowest for the downstream condition. The effect of tape location on the maximum test section temperature is given below; values are expressed as a percentage decrease from the corresponding smooth tube values (see Table 2).
It is seen that downstream location of tape is most effective in bringing down the maximum test section wall temperature.
3.8. Local Nusselt Number Analysis
An examination of the local Nusselt number profiles for the upstream and downstream conditions, shown in Figure 15, reveals that the Nusselt number attains local peaks, a characteristic also noticed by Klepper [29] in his experiments on partially extending tapes. This unusual behavior of the local Nusselt number has not been reported by any other investigator. That the occurrence of these peaks is real appears beyond doubt, since they occur at all Reynolds numbers and twist parameters, and for both the downstream and upstream tape locations.
The characteristic of local peaks observed in this investigation can be used to avoid local hot spots in heat exchanger applications, with possible application in such diverse areas as the cooling of an overheated rocket nozzle throat, the prevention of burnout in space and terrestrial power plants, and the reduction of wall temperature in circulating fuel reactors and in the heat exchange equipment used in process industries. In most of these applications, temperatures critical to material life are likely to be reached, and as such any reduction in wall temperature would imply an improvement in performance.
4. Conclusion
The important contribution of the present work is the understanding of heat transfer and temperature behavior for fully, partially decaying, and partly swirl flow using the FLTT, HLUTT, and HLDTT twisted-tape inserts.
The performance of the HLUTT and HLDTT inserts was compared with that of the FLTT twisted-tape inserts and the PT.
It was found that the heat transfer coefficient and the pressure drop in the tubes with the FLTT were 29–86% and 203–623% greater than those in the case of the plain tubes without inserts.
When the twisted-tape inserts with the HLUTT condition were used, the heat transfer coefficient and the pressure drop were estimated at 8–37% and 36–170% higher than those in the plain tube case.
When the twisted-tape inserts with the HLDTT condition were used, the heat transfer coefficient and the pressure drop were estimated at 9–47% and 31–144% higher than those in the plain tube case.
It was found that the thermal performance and the local peak in heat transfer could be increased, while reducing the pressure drop, by using a combination of inserts with different geometries in the plain tubes. The characteristic of local peaks observed in the investigation can be used to avoid local hot spots in heat exchanger applications.
Since the Nusselt number peaks were observed for both the downstream and upstream tape locations, the choice of tape location would be governed by the actual location of hot spots.
Nomenclature
: Total energy, J
: Friction factor
: Enthalpy, J or convective heat transfer coefficient,
: Thermal conductivity,
: Effective thermal conductivity, ,
: Test section length, m
Nu: Nusselt number
: Static pressure, Pa; pressure drop, Pa
: Reynolds number, =axial Reynolds number
: Mean velocity, fluctuation velocity components,
: Tape width, m
: Twist ratio=, pitch for rotation of the twisted-tape (mm)
: Twist parameter= or
: Mean inside wall temperature
: Mean bulk fluid temperature.
Greek Symbols
: Eddy viscosity,
: Thermal performance factor
: Thickness of the twisted-tape (mm)
: Turbulent dissipation rate, .
Acknowledgments
The authors would like to acknowledge the keen interest taken by the late Dr. M. S. Lonath in starting this research work. The moral support given to this investigation by Professor Dr. M. T. Karad is also
appreciated and deeply recognized.
References
1. A. E. Bergles, “Techniques to augment heat transfer,” in Handbook of Heat Transfer Applications, J. P. Hartnett, W. M. Rohsenow, and E. N. Ganic, Eds., chapter 1, McGraw-Hill, New York, NY, USA,
2nd edition, 1985.
2. R. L. Webb, Principle of Enhanced Heat Transfer, John Wiley, New York, NY, USA, 1994.
3. A. R. A. Khaled, M. Siddique, N. I. Abdulhafiz, and A. Y. Boukhary, “Recent advances in heat transfer enhancements: a review report,” International Journal of Chemical Engineering, vol. 2010, Article ID 106461, 28 pages, 2010.
4. W. E. Hilding and C. H. Coogan, “Heat transfer and pressure loss measurements in internally finned tubes,” in Proceedings of the ASME Symposium on Air Cooled Heat Exchangers, pp. 57–85, 1964.
5. P. Bharadwaj, A. D. Khondge, and A. W. Date, “Heat transfer and pressure drop in a spirally grooved tube with twisted tape insert,” International Journal of Heat and Mass Transfer, vol. 52, no. 7-8, pp. 1938–1944, 2009.
6. M. A. Al-Nimr and M. K. Alkam, “Unsteady non-Darcian forced convection analysis in an annulus partially filled with a porous material,” Journal of Heat Transfer, vol. 119, no. 4, pp. 799–804, 1997.
7. Y. Ding, H. Alias, D. Wen, and R. A. Williams, “Heat transfer of aqueous suspensions of carbon nanotubes (CNT nanofluids),” International Journal of Heat and Mass Transfer, vol. 49, no. 1-2, pp. 240–250, 2006.
8. K. Vafai and A. R. A. Khaled, “Analysis of flexible microchannel heat sink systems,” International Journal of Heat and Mass Transfer, vol. 48, no. 9, pp. 1739–1746, 2005.
9. A. R. A. Khaled and K. Vafai, “Analysis of thermally expandable flexible fluidic thin-film channels,” Journal of Heat Transfer, vol. 129, no. 7, pp. 813–818, 2007.
10. S. Tiwari, P. L. N. Prasad, and G. Biswas, “A numerical study of heat transfer in fin-tube heat exchangers using winglet-type vortex generators in common-flow down configuration,” Progress in Computational Fluid Dynamics, vol. 3, no. 1, pp. 32–41, 2003.
11. E. M. Sparrow, J. E. Niethammer, and A. Chaboki, “Heat transfer and pressure drop characteristics of arrays of rectangular modules encountered in electronic equipment,” International Journal of Heat and Mass Transfer, vol. 25, no. 7, pp. 961–973, 1982.
12. E. M. Sparrow, A. A. Yanezmoreno, and D. R. Otis Jr., “Convective heat transfer response to height differences in an array of block-like electronic components,” International Journal of Heat and Mass Transfer, vol. 27, no. 3, pp. 469–473, 1984.
13. Y. M. Chen and J. M. Ting, “Ultra high thermal conductivity polymer composites,” Carbon, vol. 40, no. 3, pp. 359–362, 2002.
14. S. Y. Kim, J. M. Koo, and A. V. Kuznetsov, “Effect of anisotropy in permeability and effective thermal conductivity on thermal performance of an aluminum foam heat sink,” Numerical Heat Transfer, Part A, vol. 40, no. 1, pp. 21–36, 2001.
15. D. A. Nield and A. V. Kuznetsov, “Forced convection in a helical pipe filled with a saturated porous medium,” International Journal of Heat and Mass Transfer, vol. 47, no. 24, pp. 5175–5180, 2004.
16. L. Cheng and A. V. Kuznetsov, “Heat transfer in a laminar flow in a helical pipe filled with a fluid saturated porous medium,” International Journal of Thermal Sciences, vol. 44, no. 8, pp. 787–798, 2005.
17. L. Cheng and A. V. Kuznetsov, “Investigation of laminar flow in a helical pipe filled with a fluid saturated porous medium,” European Journal of Mechanics, B/Fluids, vol. 24, no. 3, pp. 338–352, 2005.
18. P. Promvonge and S. Eiamsa-ard, “Heat transfer enhancement in a tube with combined conical-nozzle inserts and swirl generator,” Energy Conversion and Management, vol. 47, no. 18-19, pp. 2867–2882, 2006.
19. P. Promvonge, “Heat transfer behaviors in round tube with conical ring inserts,” Energy Conversion and Management, vol. 49, no. 1, pp. 8–15, 2008.
20. E. Smithberg and F. Landis, “Friction and forced convection heat transfer characteristics in tubes with twisted tape swirl generators,” Journal of Heat Transfer, vol. 86, no. 1, pp. 39–49, 1964.
21. R. F. Lopina and A. E. Bergles, “Heat transfer and pressure drop in tape-generated swirl flow of single-phase water,” Journal of Heat Transfer, vol. 91, pp. 434–442, 1968.
22. A. W. Date, “Prediction of fully-developed flow in a tube containing a twisted-tape,” International Journal of Heat and Mass Transfer, vol. 17, no. 8, pp. 845–859, 1974.
23. R. M. Manglik and A. E. Bergles, “Heat transfer and pressure drop correlations for twisted-tape inserts in isothermal tubes: part I—laminar flows,” Journal of Heat Transfer, vol. 115, no. 4, pp. 881–889, 1993.
24. S. Al-Fahed, L. M. Chamra, and W. Chakroun, “Pressure drop and heat transfer comparison for both microfin tube and twisted-tape inserts in laminar flow,” Experimental Thermal and Fluid Science, vol. 18, no. 4, pp. 323–333, 1998.
25. S. K. Saha and A. Dutta, “Thermohydraulic study of laminar swirl flow through a circular tube fitted with twisted tapes,” Journal of Heat Transfer, vol. 123, no. 3, pp. 417–427, 2001.
26. M. Rahimi, S. R. Shabanian, and A. A. Alsairafi, “Experimental and CFD studies on heat transfer and friction factor characteristics of a tube equipped with modified twisted tape inserts,” Chemical Engineering and Processing, vol. 48, no. 3, pp. 762–770, 2009.
27. S. Eiamsa-ard, C. Thianpong, P. Eiamsa-ard, and P. Promvonge, “Convective heat transfer in a circular tube with short-length twisted tape insert,” International Communications in Heat and Mass Transfer, vol. 36, no. 4, pp. 365–371, 2009.
28. Y. W. Chiu and J. Y. Jang, “3D numerical and experimental analysis for thermal-hydraulic characteristics of air flow inside a circular tube with different tube inserts,” Applied Thermal Engineering, vol. 29, no. 2-3, pp. 250–258, 2009.
29. O. H. Klepper, Experimental Study of Heat Transfer and Pressure Drop for Gas Flowing in Tubes Containing a Short Twisted Tape [M.S. thesis], University of Tennessee, 1971.
Distance Between 2 Lines: Vectors
Date: 8/19/96 at 23:29:28
From: Anonymous
Subject: Shortest Distance...
What is the shortest distance between 2 lines?
Date: 8/20/96 at 8:16:26
From: Doctor Anthony
Subject: Re: Shortest Distance...
I am not sure how much vector work you have done, but I will assume a
knowledge of scalar products of vectors, and the vector equation of
straight lines.
In 3D space the shortest distance between two skew lines is in the
direction of the common perpendicular. (There is one and only one such
direction, as can be seen if you move one line parallel to itself
until it intersects the other line. These two lines would now define a
plane, and the perpendicular to this plane is the direction of the
common perpendicular).
You now take any point on one line, and any point on the other line,
and write down the vector joining these two points. Finally you find
the component of this vector in the direction of the common
perpendicular. This is done by finding the scalar product of the
vector with the UNIT vector in the direction of the common
perpendicular. The result of the scalar product is the shortest
distance you require.
I will illustrate the method by means of an example.
Find the shortest distance between the lines:
x/1 = (y-3)/1 = z/(-1)
(x-5)/3 = (y-8)/7 = (z-2)/(-1)
First we require the vector perpendicular to both (1,1,-1) and (3,7,-1).
Let the common perpendicular be (p,q,r). The scalar product of this
with both (1,1,-1) and (3,7,-1) will be zero, so:
p+q-r = 0 and 3p+7q-r = 0
Note that although there are apparently 3 unknowns and only two
equations, these are homogeneous equations (having 0 on the right hand
side), so we could find values of p/r and q/r and hence the ratios
p:q:r which is all that we require. Using the determinant method for
solving, we have:
   p           -q            r
-------   =  -------   =  -------
|1  -1|      |1  -1|      |1   1|
|7  -1|      |3  -1|      |3   7|
p/6 = -q/2 = r/4
p/3 = q/-1 = r/2 and so p:q:r = 3:-1:2
So the common perpendicular is the vector (3,-1,2)
As a UNIT vector this is (1/sqrt(14)){3,-1,2}
Next we have point (0,3,0) on line (1) and (5,8,2) on line (2).
The vector joining these points is (5,5,2) and now scalar product
this with the unit vector of the common perpendicular.
Scalar product = (1/sqrt(14)){5*3 + 5*(-1) + 2*2}
= (1/sqrt(14)){15 - 5 + 4}
= 14/sqrt(14)
= sqrt(14)
and this is the shortest distance required.
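For readers who want to verify this numerically, here is a short Python sketch (NumPy is assumed; the cross product of the two direction vectors gives the common perpendicular directly):

    import numpy as np

    def skew_line_distance(p1, d1, p2, d2):
        # direction of the common perpendicular
        n = np.cross(d1, d2)
        # unit vector (assumes the lines are not parallel, i.e. n != 0)
        n_hat = n / np.linalg.norm(n)
        # component of the joining vector along the common perpendicular
        return abs(np.dot(p2 - p1, n_hat))

    # the two lines from the example: points (0,3,0) and (5,8,2),
    # directions (1,1,-1) and (3,7,-1)
    p1, d1 = np.array([0., 3., 0.]), np.array([1., 1., -1.])
    p2, d2 = np.array([5., 8., 2.]), np.array([3., 7., -1.])
    print(skew_line_distance(p1, d1, p2, d2))   # 3.7416... = sqrt(14)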
-Doctor Anthony, The Math Forum
Check out our web site! http://mathforum.org/dr.math/
Geometric Constructions with the Compasses Alone
Jen-chung Chuan
Department of Mathematics
National Tsing Hua University
Hsinchu, Taiwan 300
e-mail: jcchuan@math.nthu.edu.tw
Mascheroni dedicated one of his books, Geometria del compasso (1797), to Napoleon in verse. In it he proved that all Euclidean constructions can be made with compasses alone, so a straightedge is not needed. This theorem was (unknown to Mascheroni) proved in 1672 by a little-known Danish mathematician, Georg Mohr. In the setting of dynamic geometry, the Mohr-Mascheroni constructions ask for specific procedures in which the figures are constructed using the compasses alone. In what follows we concentrate on the constructions of
1. the conics: hyperbola, parabola and ellipse.
2. the epicycloids (the cardioid and the nephroid), hypocycloids (the deltoid and the astroid) and their osculating circles.
3. the Lemniscate of Bernoulli.
4. the Bowditch curve.
The dynamic geometry environment provided by CabriJava is essential in our exploration.
Hyperbola
Total number of intermediate circles: 8
Location of the CabriJava file: http://poncelet.math.nthu.edu.tw/disk3/cabrijava/hyperbola-with-compass.html
Principle: hyperbolas are the inversions of the lemniscates. [Lockwood; p. 116]
Parabola
Total number of intermediate circles: 8
Location of the CabriJava file: http://poncelet.math.nthu.edu.tw/disk3/cabrijava/parabola-with-compass.html
Principle: parabolas are the inversions of the cardioids. [Lockwood; p. 180]
Ellipse
Total number of intermediate circles: 8
Location of the CabriJava file: http://poncelet.math.nthu.edu.tw/disk3/cabrijava/ellipse-with-8circles.html
1. Center of the reference circle, the inverse and the point itself are collinear.
2. x = a cos t, y = b sin t.
Cardioid
Total number of intermediate circles: 4
Location of the CabriJava file: http://poncelet.math.nthu.edu.tw/disk3/cabrijava/cardioid-from-circle.html
Principle: x = 2 cos t - cos(2t), y = 2 sin t - sin(2t).
Cardioid and its Osculating Circle
Total number of intermediate circles: 10
Location of the CabriJava file: http://poncelet.math.nthu.edu.tw/disk3/cabrijava/osc-cardioid-compass.html
Principle: the points [cos t,sin t], [cos(2t), sin(2t)] separate the point [2 cos t - cos(2t), 2 sin t - sin(2t)] and the center of curvature harmonically.
Nephroid and its Osculating Circle
Total number of intermediate circles: 11
Location of the CabriJava file: http://poncelet.math.nthu.edu.tw/disk3/cabrijava/osc-nephroid-compass.html
Principle: the points [cos t,sin t], [cos(3t), sin(3t)] separate the point [3 cos t - cos(3t), 3 sin t - sin(3t)] and the center of curvature harmonically.
Deltoid and its Osculating Circle
Total number of intermediate circles: 11
Location of the CabriJava file: http://poncelet.math.nthu.edu.tw/disk3/cabrijava/osc-deltoid-compass.html
Principle: the points [cos t,sin t], [cos(-2t), sin(-2t)] separate the point [cos (2t) - 2 cos(t), sin (2t) +2 sin(t)] and the center of curvature harmonically.
Recovering the Center of a Circle
Total number of intermediate circles: 6
Location of the CabriJava file: http://poncelet.math.nthu.edu.tw/disk3/cabrijava/center.html
Principle: Inversion.
Fermat Point
Total number of intermediate circles: 16
Location of the CabriJava file: http://poncelet.math.nthu.edu.tw/disk3/cabrijava/compass-fermat.html
Principle: Similitude.
Peaucellier's Linkage
Total number of intermediate circles: 5
Location of the CabriJava file: http://poncelet.math.nthu.edu.tw/disk3/cabrijava/peau.html
Principle: Inversion.
Intersection of a Line and a Circle
Total number of intermediate circles: 4
Location of the CabriJava file: http://poncelet.math.nthu.edu.tw/disk3/cabrijava/compass-line-circle-intersections.html
Principle: Symmetry.
Envelope Forming Deltoid and 3-Cusped Epicycloid
Total number of intermediate circles: 11
Location of the CabriJava file: http://poncelet.math.nthu.edu.tw/disk3/cabrijava/compass-epi-hypo.html
Principle: Symmetry.
Regular Pentagon
Total number of intermediate circles: 12
Location of the CabriJava file: http://poncelet.math.nthu.edu.tw/disk3/cabrijava/5-gon-compass.html
Principle: Inversion.
Linkage Drawing the Ellipse
Total number of intermediate circles: 11
Location of the CabriJava file: http://poncelet.math.nthu.edu.tw/disk3/cabrijava/ellipse-linkage-with-compass.html
Principle: Inversion.
Square Constructed from One Diagonal
Total number of intermediate circles: 6
Location of the CabriJava file: http://poncelet.math.nthu.edu.tw/disk3/cabrijava/compass-sq-diag.html
Principle: Symmetry.
Square Constructed from One Side
Total number of intermediate circles: 6
Location of the CabriJava file: http://poncelet.math.nthu.edu.tw/disk3/cabrijava/compass-sq-side.html
Principle: Symmetry and translation.
Dividing a Circle into Four Equal Parts
Total number of intermediate circles: 6
Location of the CabriJava file: http://poncelet.math.nthu.edu.tw/disk3/cabrijava/compass-sq.html
Principle: Symmetry and translation.
Arc Bisection
Total number of intermediate circles: 7
Location of the CabriJava file: http://poncelet.math.nthu.edu.tw/disk3/cabrijava/bisect-arc.html
Principle: Symmetry.
Bowditch Curve
Total number of intermediate circles: 13
Location of the CabriJava file: http://poncelet.math.nthu.edu.tw/disk3/cabrijava/bowditch-with-compass.html
Principle: Coordinates.
Abraham Berman and Robert J. Plemmons
Classics in Applied Mathematics 9
Here is a valuable text and research tool for scientists and engineers who use or work with theory and computation associated with practical problems relating to Markov chains and queuing networks,
economic analysis, or mathematical programming. Originally published in 1979, this new edition adds material that updates the subject relative to developments from 1979 to 1993.
Theory and applications of nonnegative matrices are blended here, and extensive references are included in each area. You will be led from the theory of positive operators via the Perron-Frobenius
theory of nonnegative matrices and the theory of inverse positivity, to the widely used topic of M-matrices. On the way, semigroups of nonnegative matrices and symmetric nonnegative matrices are
discussed. Later, applications of nonnegativity and M-matrices are given; for numerical analysis the example is convergence theory of iterative methods, for probability and statistics the examples
are finite Markov chains and queuing network models, for mathematical economics the example is input-output models, and for mathematical programming the example is the linear complementarity problem.
Nonnegativity constraints arise very naturally throughout the physical world. Engineers, applied mathematicians, and scientists who encounter nonnegativity or generalizations of nonnegativity in their
work will benefit from topics covered here, connecting them to relevant theory. Researchers in one area, such as queuing theory, may find useful the techniques involving nonnegative matrices used by
researchers in another area, say, mathematical programming.
Exercises and biographical notes are included with each chapter.
Chapter 1: Matrices Which Leave a Cone Invariant; Chapter 2: Nonnegative Matrices; Chapter 3: Semigroups of Nonnegative Matrices; Chapter 4: Symmetric Nonnegative Matrices; Chapter 5: Generalized
Inverse-Positivity; Chapter 6: M-Matrices; Chapter 7: Iterative Methods for Linear Systems; Chapter 8: Finite Markov Chains; Chapter 9: Input-Output Analysis in Economics; Chapter 10: The Linear
Complementarity Problem; Chapter 11: Supplement 1979 - 1993; References; Index.
1994 / xx + 340 pages/Softcover / ISBN-13: 978-0-898713-21-3 / ISBN-10: 0-89871-321-8 /
List Price $54.00 / SIAM Member Price $37.80 / Order Code CL09
Calculating Tax Efficient Amounts
Hi Bogleheads community!
I've read The Bogleheads' Guide to Investing, and this is my first post.
I have a 401k, Rollover IRA, Roth IRA, and taxable account. In each of them, I've placed both stocks and bonds.
After reading the book and some wiki pages, I'm moving funds to create a more tax-efficient portfolio.
One of the pages in the book shows the differences between placing assets in more appropriate accounts.
                        Stocks in Tax Deferred    Bonds in Taxable
Initial Investment      $50,000.00                $50,000.00
Value After 30 Years    $872,470.11               $232,077.55
Taxes at Distribution   $218,117.53               $0.00
Final Value             $654,352.59               $232,077.55

                        Bonds in Tax Deferred     Stocks in Taxable
Initial Investment      $50,000.00                $50,000.00
Value After 30 Years    $380,612.75               $820,490.16
Taxes at Distribution   $95,153.19                $100,499.00
Final Value             $285,459.56               $719,991.16
This assumes a 10% return on stocks, 7% return on bonds, 1.5% dividend yield, 15% capital gains and dividend tax rate, and 25% income tax rate.
I'm a bit of a math nerd, and like to know how one arrived at the numbers.
Using Excel and the FV function, I was able to calculate the final values for all four scenarios except the Stocks in Taxable. I understand that the interest rate used was 9.775% (.09775) to account
for the annual taxation of dividends.
But how do you arrive at the taxes owed amount of $100,499.00? I assume that the 15% tax rate would be applied to the difference between the final value of $820,490.16 and the beginning value of
$50,000.00. This is $770,490.16, and 15% of that is $115,573.52, not $100,499.00.
I'm guessing that I didn't account for dividends, but still can't figure this out after doing all sorts of calculations.
How do you arrive at the taxes owed amount of $100,499.00?
Re: Calculating Tax Efficient Amounts
If you hold stocks in a taxable account, your basis includes the initial amount and the reinvested dividends. In this example, the stock return is 9.775% annually, and 1.275% reinvested dividends, so
8.5/9.775 of the gains are taxed. That fraction of the gains is $669,991.44, and the tax due on it is $100,498.72; the 28-cent difference is probably the result of cents being dropped in an
intermediate calculation.
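The same arithmetic can be checked with a few lines of Python (a sketch using the rates stated in the first post; the variable names are just for illustration):

    init = 50_000.0
    r_stock, div_yield, tax = 0.10, 0.015, 0.15

    growth = r_stock - tax * div_yield                 # 9.775%: dividends taxed each year
    value = init * (1 + growth) ** 30                  # ~ 820,490.16

    gains = value - init
    taxed_fraction = (r_stock - div_yield) / growth    # 8.5 / 9.775: only price appreciation is taxed at sale
    tax_due = tax * gains * taxed_fraction             # ~ 100,498.72
    print(value, tax_due, value - tax_due)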
Re: Calculating Tax Efficient Amounts
Excellent. Thanks for the clear explanation David!
Stream Cipher (thing)
What is a stream cipher?
A stream cipher is a symmetric encryption method that usually operates at the character or bit level, with the plaintext being combined (normally by an operation such as XOR) with a
generated keystream to produce the ciphertext.
Although seemingly simple, its security stems from the fact that, if the generated keystream is not distinguishable from a random sequence and used only once to encrypt a message, it
has the same security as a one time pad. Particular requirements for a good stream cipher are a long period and high linear complexity, but not all ciphers with these requirements are
necessarily secure.
They are often built using counters, linear feedback shift registers, nonlinear feedback shift registers, nonlinear filters and/or S-boxes, cryptographic sponges, T-functions or even
more complicated things.
Regardless of their internal components, stream ciphers can generally be seen as finite state machines: they take some input (internal state, key and, optionally, as in the case of self-synchronizing stream ciphers, past ciphertext), perform some operations and output the next internal state. A part (or even a nonlinear function of parts) of the internal state is also output at each step as the keystream.
This implies that a stream cipher can never really attain the security level of a one time pad, as sequences generated by a finite state machine are always periodic and, therefore,
non-random (it might just have a period that exceeds the remaining time until the heat death of the Universe, but it's still finite).
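A toy Python sketch of that state-machine view (deliberately insecure and invented purely for illustration, just to make the state/keystream/XOR roles concrete):

    def keystream(key, iv):
        """Toy keystream generator: a 32-bit state machine. NOT secure."""
        state = iv & 0xFFFFFFFF
        while True:
            # next-state function: mix the state with the key
            state = (state * 1103515245 + key) & 0xFFFFFFFF
            yield (state >> 24) & 0xFF      # expose only part of the state

    def xor_crypt(data, key, iv):
        ks = keystream(key, iv)
        return bytes(b ^ next(ks) for b in data)

    ct = xor_crypt(b"attack at dawn", key=0xC0FFEE, iv=42)
    pt = xor_crypt(ct, key=0xC0FFEE, iv=42)   # same (key, IV) -> same keystream

Running xor_crypt twice with the same (key, IV) pair recovers the plaintext precisely because the keystream repeats, which is also why reusing that pair across different messages is dangerous.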
One very obvious "problem" with a stream cipher is that if you re-use a key (or a key+IV pair), the generated keystream will be the same, compromising the security of the plaintexts
encrypted with such a keystream (but, hey... that's not a bug, it's a feature! otherwise, the other party wouldn't be able to replicate the correct keystream and therefore decrypt your messages!).
Why not just use a block cipher?
A block cipher, unlike a stream cipher, operates at the level of blocks, providing a (key-dependent) permutation family which should resemble, as much as possible, a group of
pseudo-random permutations (PRP). This implies that thorough diffusion (mixing) and confusion (nonlinear layers) are required for a certain level of robustness against cryptanalysis.
On the other hand, a stream cipher usually only exposes part (or even a nonlinear combination of parts) of its internal state at each step, which implies that it can probably afford
less mixing and nonlinearity than a full block cipher between each step (with LFSR being an extreme example, with very slow mixing of its internal state between each step). They are
also often more efficient in hardware than block ciphers, being therefore a very valid choice for symmetric encryption in embedded systems and low-power requirements situations (e.g.
Nonetheless, it is true that the design and attack of block ciphers is much better understood in academia, which generally grants block ciphers a higher sense of security (due to
heightened scrutiny regarding their designs). Also, it's trivial to build a secure stream cipher using a secure block cipher in counter mode and/or using a block cipher to mix some
internal state.
Examples of stream ciphers
Unitary and triangular = diagonal matrix
April 13th 2009, 12:30 AM
Unitary and triangular = diagonal matrix
can anybody help me with this question?
Show that if a matrix A is both triangular and unitary, then it is diagonal.
April 13th 2009, 02:15 AM
i'll assume that the matrix $A=[a_{ij}], \ 1 \leq i,j \leq n,$ is upper triangular. the lower triangular case is the same. proof is by induction over $n$: for $n=2$ it's easy. now if $[b_{ij}]=AA
^*=I_n,$ where $A^*$
is the conjugate transpose of $A,$ then we'll have $b_{in}=0, \ \forall \ i \leq n-1.$ but $b_{in}=a_{in} \overline{a_{nn}}$ and $a_{nn} \neq 0.$ therefore $a_{in}=0, \ \forall \ i \leq n-1.$ now
apply the induction hypothesis for the $(n-1) \times (n-1)$
matrix $C=[a_{ij}], \ 1 \leq i,j \leq n-1$ to finish the proof.
April 13th 2009, 03:05 AM
then we'll have $b_{in}=0, \ \forall \ i \leq n-1,$ but $b_{in}=a_{in} \overline{a_{nn}}$ and $a_{nn} \neq 0.$ therefore $a_{in}=0, \ \forall \ i \leq n-1.$
I don't really understand how
$b_{in}=a_{in} \overline{a_{nn}}$ and $a_{nn} \neq 0.$
April 13th 2009, 03:10 AM
$b_{in}=a_{in} \overline{a_{nn}}$ and $a_{nn} \neq 0.$
From what I'm reading here is that for a say a 2X2 matrix, the element
$b_{i2}$ is the $a_{i2}$ element multiplied by the
$b_{22}$ element.
Is that correct?
i'm going to try that on paper.
April 13th 2009, 03:18 AM
Sorry, I mean the conjugate of the $a_{22}$ element,
not the $b_{22}$ element.
Don't worry,I will do some paper work here.
I think I'm starting to understand some of your proof using a 2X2 Identity matrix.
thanks for your response.
U.S. Global Change Research Information Office
Figure 3 An illustration of how a small change in the mean or average value of a meteorological variable can have a large effect on the expected number of extreme readings. UPPER FIGURE (black curve,
A): the probability of different temperature readings when the mean temperature is 50°F. UPPER FIGURE (blue curve, B): the same, when the mean temperature rises 5°, to 55° F. The shape or spread of
the bell-shaped pattern by which expected values are distributed on either side of the mean is the same in both cases. LOWER FIGURE: the corresponding changes in the probability of the temperature
readings shown on the bottom scale. The solid blue curve (with scale at left) is the difference in probability of the readings shown on the bottom scale. The greatest difference is for temperatures
about 10° above and below the mean value of 55°. The dashed blue curve (scale at right) is the percentage change in probability. Readings of 60 to 70°, for example, are expected roughly twice as
often as before.
|
{"url":"http://www.gcrio.org/CONSEQUENCES/vol5no1/fig_3a.html","timestamp":"2014-04-20T18:25:32Z","content_type":null,"content_length":"11582","record_id":"<urn:uuid:cc8795b9-ded8-49bf-b842-a2a3b25dfb7d>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00125-ip-10-147-4-33.ec2.internal.warc.gz"}
|
de Morgan, Augustus (1806–1871)
British mathematician, born in India, who was an important innovator in the field of mathematical logic. The system he devised to express such notions as the contradictory, the converse, and the
transitivity of a relation, as well as the union of two relations, laid some of the groundwork for his friend George Boole. de Morgan lost the sight of his right eye shortly after birth, entered
Trinity College, Cambridge, at the age of 16, and received his B.A. However, he objected to a theological test required for the M.A. and returned to London to study for the Bar. In 1827, he applied
for the chair of mathematics in the newly-founded University College, London and, despite having no mathematical publications, he was appointed. In 1831, he resigned on principle after another
professor was fired without explanation but regained his job five years later when his replacement died in an accident. He resigned again in 1861.
His most important published work, Formal Logic, included the concept of the quantification of the predicate, an idea that solved problems that were impossible under the classic Aristotelian logic.
de Morgan coined the phrase "universe of discourse," was the first person to define and name mathematical induction, and developed a set of rules to determine the convergence of a mathematical
series. In addition, he devised a decimal coinage system, an almanac of all full moons from 2000 BC to AD 2000, and a theory on the probability of life events that is still used by insurance
companies. de Morgan was also deeply interested in the history of mathematics. In Arithmetical Books (1847) he describes the work of over fifteen hundred mathematicians and discusses subjects such as
the history of the length of a foot, while in A Budget of Paradoxes he gives a marvelous compendium of eccentric mathematics including the poem:
Great fleas have little fleas upon their backs to bite 'em,
And little fleas have lesser fleas, and so ad infinitum,
And the great fleas themselves, in turn, have greater fleas to go on,
While these again have greater still, and greater still, and so on.
the first lines of which paraphrase a similar rhyme by Jonathan Swift. On one occasion, when asked his age, De Morgan replied: "I was x years old in the year x²." How old must he have been at the time? (Answer: 43 – the only number that, when squared, gives a number between the years of De Morgan's birth and death.)
MathGroup Archive: September 2000 [00207]
Re: Random spherical troubles
• To: mathgroup at smc.vnet.net
• Subject: [mg25199] Re: Random spherical troubles
• From: dkeith at sarif.com
• Date: Tue, 12 Sep 2000 21:24:44 -0400 (EDT)
• References: <8pkl8u$m80@smc.vnet.net>
• Sender: owner-wri-mathgroup at wolfram.com
Hello Barbara,
Your problem is similar to others I have considered doing Monte Carlo
simulations. I think it can be understood best in two steps:
1) You want the probability of a point occurring in the region
dTheta*dPhi to be proportional to the actual Euclidean area on the
surface of the sphere represented by dTheta*dPhi. But that area is not
just that product. If you consider how Euclidean area is swept out by
theta and phi, in just the same way you would use to integrate the
surface, you will see that dArea is given by r^2*Sin[theta]
*dTheta*dPhi. So the probability of finding a point in dTheta*dPhi is
proportional to r^2*Sin[theta].
2) The next problem is to create a transformation which takes a random
variable distribution we have into this one we want. There is a nice
theory in probability that says if p[x] is a distribution function and
f[x] is its cumulative distribution (integral), and the distribution p
is normalized, then Inverse[f][Random[0,1]] is a variable having p
distribution. (Random[0,1] is a random variable of uniform distribution
in (0,1).)
So to make use of this we take p[theta] to be A*Sin[theta], since we
need probability proportional to Sin[theta] and independent of phi. "A"
is for normalization. The cumulative distribution is gotten by
integrating and normalizing to get f[theta]=(1-Cos[theta])/2. The
transformation is then the inverse function theta=ArcCos[1-2*f].
So our new random variable can be generated as
{r, theta, phi}={r, ArcCos[1-2 Random[]],Random[Real,{0,2Pi}]}
or for example
polarPoints =
Table[{1, ArcCos[1 - 2Random[]], Random[Real, {0, 2Pi}]}, {1000}];
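Spelled out in standard notation, the step above is the usual inverse-CDF method:

    p(\theta) = \tfrac{1}{2} \sin\theta, \qquad 0 \le \theta \le \pi,
    F(\theta) = \int_0^{\theta} \tfrac{1}{2} \sin t \, dt = \tfrac{1 - \cos\theta}{2},
    \theta = F^{-1}(u) = \arccos(1 - 2u), \qquad u \sim \mathrm{Uniform}(0,1).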
In article <8pkl8u$m80 at smc.vnet.net>,
Barbara DaVinci <barbara_79_f at yahoo.it> wrote:
> Hi MathGrouppisti
> This time, my problem is to generate a set of
> directions randomly
> distributed over the whole solid angle.
> This simple approach is incorrect (spherical
> coordinates are assumed) :
> Table[{Pi Random[], 2 Pi Random[]} , {100}]
> because this way we obtain a set of point uniformly
> distributed
> over the [0 Pi] x [0 2Pi] rectangle NOT over a
> spherical surface :-(
> If you try doing so and plot the points {1,
> random_theta , random_phi}
> you will see them gathering around the poles because
> that simple
> transformation from rectangle to sphere isn't
> "area-preserving" .
> Such a set is involved in a simulation in statistical
> mechanics ...
> and I can't get out this trouble.
> May be mapping [0 Pi] x [0 2Pi] in itself , using an
> suitable
> "non-identity" transformation, can spread points in a
> way balancing
> the poles clustering effect.
> ====================================================================
> While I was brooding over that, an intuition flashed
> trought my mind :
> since spherical to cartesian transformation is
> x = rho Sin[ theta ] Cos[ phi ]
> y = rho Sin[ theta ] Sin[ phi ]
> z = rho Cos[ theta ]
> perhaps the right quantities to randomly spread
> around are Cos[ theta ] and
> Cos[ phi ] rather than theta and phi for itself. Give
> a glance at this :
> Table[{
> ArcCos[ Random[] ],
> ArcCos[ Random[] Sign[ 0.5 - Random[] ]
> } , {100}]
> Do you think it is close to the right ? Do you see a
> better way ?
> Have you just done the job in the past ? Should I
> reinvent the wheel ?
> ====================================================================
> I thanks you all for prior replies and in advance
> this time.
> Distinti Saluti
> (read : "Faithfully yours")
> Barbara Da Vinci
> barbara_79_f at yahoo.it
> ______________________________________________________________________
> Do You Yahoo!?
> Il tuo indirizzo gratis e per sempre @yahoo.it su http://mail.yahoo.it
Sent via Deja.com http://www.deja.com/
Before you buy.