Partial Derivatives and rates of change.
November 2nd 2010, 07:18 PM #1
The question is "Find the rate of change of z at P in the direction of the origin."
So I found the gradient vector to be <1, -2y>, or at the point P it would be <1, -2>.
Can anyone help me find the rate of change of z at the point P in the direction of the origin?
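Since the original function and the coordinates of P aren't quoted in the post, here is a sketch of the general method in Python. The point P = (1, 1) is a hypothetical choice (it is consistent with the stated gradient <1, -2> if z = x - y², but it is an assumption all the same): the rate of change toward the origin is the dot product of the gradient with the unit vector pointing from P to the origin.

```python
import math

# Hypothetical point consistent with the poster's gradient <1, -2>:
# if z = x - y**2, then grad z = <1, -2y>, so P must have y = 1.
# The x-coordinate isn't given in the post; we assume P = (1, 1).
P = (1.0, 1.0)
grad = (1.0, -2.0)

# Unit vector from P toward the origin is -P / |P|.
norm = math.hypot(*P)
u = (-P[0] / norm, -P[1] / norm)

# Directional derivative: D_u z = grad z . u
rate = grad[0] * u[0] + grad[1] * u[1]
print(rate)  # 1/sqrt(2), about 0.707, for this assumed P
```

For this assumed P the result is positive, meaning z increases as you move from P toward the origin.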
|
{"url":"http://mathhelpforum.com/calculus/161897-partial-derivatives-rates-change.html","timestamp":"2014-04-16T10:45:59Z","content_type":null,"content_length":"29318","record_id":"<urn:uuid:e4e30680-996b-41e0-a4ac-79dbed0eff95>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00191-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Memorizing Pi to 400 Decimal Places
The obvious first question is, What exactly IS pi to 400 decimal places? Here it is:
There are many different ways to memorize pi. There are even those pi purists who refuse to use the mnemonic alphabet, and attempt to learn the numbers as numbers themselves. University of
Edinburgh professor Alexander Craig Aitken learned it to a particular rhythm. Others assign meaning directly to the numbers themselves. For example, look at the last four numbers in the first row
above (1971). Some might remember this number as the year they were born, or as the year of some other memorable event.
Are these methods effective? They certainly seem to be for the individuals who create them. It can be tricky, though, for others to try to learn these methods, especially as the associations are
often highly personal.
What advantages does this method offer? First, it can be taught to anyone who is familiar with both the mnemonic alphabet and the English language (indeed, it can be easily adapted to almost any
Germanic language). Second, it doesn't just teach the digits in order, but out of order at the same time! What do I mean by out of order?
Imagine not just knowing the digits of pi themselves, but also where they are relative to each other. You could face challenges like these:
* Given the proper location, you can recall the corresponding group of four digits.
* You can recall the single digit in the Nth position after the decimal point.
* Given a group of four digits, you can recall its location.
* You can even recall entire sequences of numbers from pi.
The traditional mnemonic alphabet method for pi is based on converting the numbers into words, and then linking them into a story. As you can see, if you forget just one element of the story, the
entire number is thrown off! The method taught here eliminates the story aspect, and makes the memorization both simpler AND much more effective at the same time!
Link System
Major System
When you're ready, click to continue.
Pi Chart
To make this easier, we'll break the 400 digits after the decimal point (note that the initial 3 isn't in the chart itself) into a 10x10 grid of four-digit numbers:
A 1415 9265 3589 7932 3846 2643 3832 7950 2884 1971
B 6939 9375 1058 2097 4944 5923 0781 6406 2862 0899
C 8628 0348 2534 2117 0679 8214 8086 5132 8230 6647
D 0938 4460 9550 5822 3172 5359 4081 2848 1117 4502
E 8410 2701 9385 2110 5559 6446 2294 8954 9303 8196
F 4428 8109 7566 5933 4461 2847 5648 2337 8678 3165
G 2712 0190 9145 6485 6692 3460 3486 1045 4326 6482
H 1339 3607 2602 4914 1273 7245 8700 6606 3155 8817
I 4881 5209 2096 2829 2540 9171 5364 3678 9259 0360
J 0113 3053 0548 8204 6652 1384 1469 5194 1511 6094
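The chart is mechanical to build: chunk the digit string into groups of four, ten groups per lettered row. A minimal sketch using only the first 40 digits of pi (row A); the full chart would use all 400 digits:

```python
# Chunk a digit string into four-digit groups and label them A1, A2, ...
# Shown here with just the first 40 digits of pi (row A of the chart).
digits = "1415926535897932384626433832795028841971"
groups = [digits[i:i + 4] for i in range(0, len(digits), 4)]
rows = "ABCDEFGHIJ"
chart = {f"{rows[i // 10]}{i % 10 + 1}": g for i, g in enumerate(groups)}
print(chart["A1"], chart["A10"])  # 1415 1971
```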
Now, we'll turn each set of coordinates and each four-digit number into words we can associate with each other. For example, since 1 is equal to a t sound, we can make A1 represent the word
ATe. The number at that point in the grid, 1415, translates into the sounds of t, r, t, and l, so we'll represent that number with the word TuRTLe. Now, you link the word ATe to the word TuRTLe
in a humorous or exaggerated way. Picturing yourself having just eaten a turtle should do it. In a similar manner, you can turn A2 into ANnoy and 9265 into PuNCH Low, and picture yourself being
annoyed by a punch low on your body, perhaps by a disembodied fist (just to make the picture unusual and memorable).
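Decoding a mnemonic word back into digits is just a matter of reading off its consonant sounds. The sketch below uses this document's convention that the coded sounds are capitalized, plus the standard Major-system digit mapping; it only handles the digraphs needed for these examples (a fuller version would also need to collapse doubled letters such as the BB in SHaBBy):

```python
# Standard Major-system mapping; digraphs like CH and SH count as one sound.
DIGRAPHS = {"CH": "6", "SH": "6"}
SINGLE = {"T": "1", "D": "1", "N": "2", "M": "3", "R": "4", "L": "5",
          "J": "6", "K": "7", "G": "7", "F": "8", "V": "8",
          "P": "9", "B": "9", "S": "0", "Z": "0"}

def decode(mnemonic):
    """Read the capitalized consonants of a mnemonic back into digits."""
    caps = [c for c in mnemonic if c.isupper()]
    out, i = [], 0
    while i < len(caps):
        pair = "".join(caps[i:i + 2])
        if pair in DIGRAPHS:
            out.append(DIGRAPHS[pair])
            i += 2
        else:
            out.append(SINGLE[caps[i]])
            i += 1
    return "".join(out)

print(decode("TuRTLe"), decode("PuNCH Low"))  # 1415 9265
```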
Below is a complete list of associations. Go through each one and associate the words together in silly ways. The more memorable the image, the stronger a memory key it will be! Assuming you already
know what sounds go with what number in the Peg system, you should have little trouble remembering the entire chart in a short time!
You can also find alternate mnemonics, courtesy of Train Your Brain and Entertain user Wallace Gluck, in my New Pi Mnemonics post.
Once you've made each of the associations, go to the next section to learn various ways to present this feat. If you'd rather be tested right away, practice with the 400-digit Pi quiz!
Mnemonics
A1: ATe - TuRTLe B1: BaD - SHaBBy MoB
A2: ANnoy - PuNCH Low B2: BoNe - PuMa CLaw
A3: AiM - MaLe FiB B3: BuM - weT SaLiVa
A4: AiR - KeeP MooN! B4: BeeR - NoSe PiCK
A5: ALe - Mmmmm...FReSH! B5: BeLL - RePaiReR
A6: ASH - New GeRM B6: BaDGe - Law, By NaMe
A7: ACHe - MoVe hiM? No! B7: BaG - SaCK FooD
A8: A Vow - CouPLeS B8: BuFF - CHaiR'S waSH
A9: (h)APPy - iN FaVor B9: BiB - NePHew CHiN
A10: ACe - ToP CaT B10: BuS - SaVe BoB!
C1: CaT - FiSH kNiFe D1: DoT - SoaP 'eM oFF
C2: CaN - SMuRF D2: DeN - aiR RuSHeS
C3: CoMa - aNNuL MaRRy D3: DaMn - PoLo LoSS
C4: CaR - huNT DoG D4: DRy - LV NeoN
C5: CoaL - iCe aGe CuBe D5: DeaL - MaDe GaiN
C6: CaSH - FiNDeR D6: DaSH - LiMb LeaP
C7: CooK - haVe hiS FuDGe D7: DoG - ReCeiVeD
C8: CaVe - LouD MaN D8: DiVe - NaVy aRRiVe
C9: CuP - VeNoMS D9: DoPe - iDea iDioTiC
C10: CaSe - JuDGe woRK D10: DiCe - RoLL SooN
E1: EDDy - FjoRDS F1: FighT - waRRioR kNiFe
E2: EN - eNCaSeD F2: FuN - FaT SPy
E3: EM - BeaM FeLL F3: FoaM - CoLLeGe waSH
E4: ERR - eNTiTieS F4: FeaR - heLP! MoMMy!
E5: EEL - Lay Low heLP F5: FiLe - ReRuSHeD
E6: EDGe - SHeaR RiDGe F6: FiSH - New FoRK
E7: EGG - New NeighBoR F7: FaKe - Lie, SHeRiFF?
E8: EVe - ViP LuRe F8: FiFe - eNeMy MoCK
E9: EBB - Bay MuSeuM F9: FiB - FiSH GooF
E10: EaSy - PHoTo PaGe F10: FaCe - MeeT JuLie
G1: GuT - NiCoTiNe H1: HaTe - DooM MoB
G2: GowN - STePS H2: HeN - Ma haTCHeS eGG
G3: GaMe - PaTRoL H3: HoMe - eNJoy SuN
G4: GeaR - SHRiVeL H4: HaRe - RaBBiT eaR
G5: GoaL - eaCH CHiP iN H5: HeLLo - aDD iNCoMe
G6: GuSH - eMeRGeS H6: HeDGE - CoiN RoLe
G7: GaG - MoRe FiSH H7: HoG - haVe eXCeSS
G8: GaFF - DiCe ReaL? H8: HaVe - JuDGe'S iSSue
G9: GaP - Re-MaNaGe H9: HiP - MeTaL aLLoy
G10: GaS - SHaRe VaN H10: HoSe - halF-oFF TaG
I1: IT - ReViVe iT J1: JeT - STaDiuM
I2: INN - LoNe SPa J2: JoiN - MoSLeM
I3: I'M - NoiSy, BiTCHy J3: JaM - haS LaRVa
I4: IRe - eNouGH! uNhaPPy! J4: JeeR - oFteN SouR
I5: ILL - iNhaLeRS J5: JaiL - huGe JaiL Now
I6: ItCH - PeT CaT J6: JuDGe - DeeM FaiR
I7: IKe - Law MaJoR J7: JacK - TiRe SHoP
I8: IVy - MaGiC iVy J8: JaVa - OlD BRew
I9: (y)IPe - PaiN yeLP J9: JoB - DeLeTeD
I10: ICe - SMaSHeS J10: JaS - JaSPeR
Presenting Your Knowledge of Pi
The best way to be ready to demonstrate this feat is to create a small chart you can carry around in your wallet. Create a small ID-card sized chart, including the coordinates (A-J & 1-10), on your
printer and have it laminated. You are now ready to be quizzed in a variety of ways.
Basic 4-Digits
If you've made the associations properly, you can already do this one! Simply have someone name a set of coordinates, and recall the correct number via the mnemonic association.
Intermediate 4-Digits
Ask for a set of coordinates, and mention that you'll give the number backwards! When you recall the mnemonic word, simply convert it into digits starting with the rightmost (last) digit, and
continuing through to the leftmost (first) digit. This gets easier with practice. To those watching, though, this seems MUCH tougher than it actually is.
Advanced 4-Digits
In this version, you ask for a set of four digits, and you recall the coordinates at which they are located. This will take a little practice, as you have to be able to quickly translate a given
number into its mnemonic association, and then recall the coordinate mnemonic. It's a startling addition to your pi feat, though.
Nth Digit of Pi
This will require some mental calculation. Have someone name any position up to 400 digits after the decimal point. Let's say they ask for the 157th digit after the decimal point. You first divide
by 4, and remember both the quotient and the remainder. In our example, this is 39 with a remainder of 1 (157 / 4 = 39 r 1). The 3 in the 39 tells us to skip 3 complete rows (A, B, and C)
and look in the next row (row D). The 9 in the 39 tells us to skip 9 complete columns in that row, taking us through D9. Since we have a remainder of 1, we're looking for the 1st digit after
the D9 group; in other words, the first digit of D10. We know D10 is DiCe, which is associated with RoLL SooN. The first sound of the phrase, R, translates to 4, so we can state that 4 is the
157th digit after the decimal point!
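That row-and-column arithmetic can be sketched as a small function. The single chart entry below is taken from the grid above; a full implementation would carry all 100 cells:

```python
ROWS = "ABCDEFGHIJ"
CHART = {"D10": "4502"}  # from the chart above; a full version has 100 entries

def locate(n):
    """Map the n-th digit after the decimal point to (cell, digit index)."""
    q, r = divmod(n, 4)
    if r == 0:               # the n-th digit is the last digit of group q
        q, r = q - 1, 4
    row, col = divmod(q, 10) # q complete groups precede this one
    return f"{ROWS[row]}{col + 1}", r

cell, idx = locate(157)
print(cell, CHART[cell][idx - 1])  # D10 4
```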
Rows, Columns and Diagonals
This is impressive enough to use as a finale, but not much more difficult than the Basic 4-Digits feat above. You simply ask for any row (A-J) and recite it starting with the 1st set of four
digits, and continuing through with the 10th set of four digits in that row. Columns can be called instead, and you just start with the A coordinates for that column, and continue through to the J
coordinates. You can even have them ask for diagonals, either A1 through J10 or A10 through J1. With a little extra practice, you can run through every row, column or diagonal backwards!
Practice now with the pi quiz!
No Response to "400 Digits of Pi"
|
{"url":"http://gmmentalgym.blogspot.com/2010/10/400-digits-of-pi.html","timestamp":"2014-04-19T02:23:20Z","content_type":null,"content_length":"675196","record_id":"<urn:uuid:289d8533-42b2-40f0-a9d2-19c76e7b276a>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00629-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Package cern.jet.stat.tdouble
Tools for basic and advanced statistics: Estimators, Gamma functions, Beta functions, Probabilities, Special integrals, etc.
See: Description
• Class Summary
Class Description
DoubleDescriptive Basic descriptive statistics.
Gamma Gamma and Beta functions.
Probability Custom tailored numerical integration of certain probability distributions.
Package cern.jet.stat.tdouble Description
Tools for basic and advanced statistics: Estimators, Gamma functions, Beta functions, Probabilities, Special integrals, etc.
SCaVis 1.7 © jWork.org
|
{"url":"http://jwork.org/scavis/api/doc.php/cern/jet/stat/tdouble/package-summary.html","timestamp":"2014-04-16T21:54:54Z","content_type":null,"content_length":"18568","record_id":"<urn:uuid:8a133d1d-2a1d-4914-93dc-fc4d3d0f06f9>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00491-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Points, Lines, Angles, and Planes Complementary, My Dear Watson Quiz
Think you’ve got your head wrapped around Points, Lines, Angles, and Planes? Put your knowledge to the test. Good luck — the Stickman is counting on you!
Q. "An angle is bigger than its complement." This statement is…
True for all angles
True for all acute angles
True for all obtuse angles
True for some angles
Never true
Q. If two angles are supplementary, which of the following must be true?
They are adjacent
They are vertical
They are congruent
Their measures add to 90 degrees
Their measures add to 180 degrees
Q. A, B, C, and D are not collinear, and B is between A and C. Which of the following is impossible?
AC and BD are parallel
B is the midpoint of AB
∠DBA is a right angle
∠DBA and ∠DBC are supplementary
∠DBA and ∠DBC are congruent
Q. ∠1 measures 40° and ∠2 measures 45°. Which of the following is possible?
∠1 and ∠2 are vertical angles
∠1 and ∠2 are congruent
∠1 and ∠2 are complementary
∠1 and ∠2 are supplementary
None of the above is possible
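For this last question the arithmetic can be checked directly (a sketch using the standard definitions: complementary angles sum to 90°, supplementary angles to 180°, and vertical angles are congruent):

```python
# Check which relationships are possible for 40 and 45 degree angles.
a, b = 40, 45
print("congruent:", a == b)           # vertical angles would also be congruent
print("complementary:", a + b == 90)
print("supplementary:", a + b == 180)
```

Since 40 + 45 = 85, none of the listed relationships can hold.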
Q. What is the relationship between ∠CHF and ∠CFH?
The angles are congruent
The angles are adjacent
The angles are vertical
The angles are complementary
The angles are supplementary
Q. If m∠ACH = 3x + 7 and m∠DCE = 4x – 3, what is the measure of ∠ACB?
Q. What is the measure of ∠DCB?
Q. If m∠CFH = 45, what is m∠GFH equal to?
Q. If line p bisects DH, what is the midpoint of DH?
Q. If C is the midpoint of BF and BC = 8, what is the measurement of BF?
|
{"url":"http://www.shmoop.com/points-lines-angles-planes/quiz-2.html","timestamp":"2014-04-17T13:16:54Z","content_type":null,"content_length":"46176","record_id":"<urn:uuid:38f4d00e-37f8-43e1-98d0-fcac44fe6d97>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00264-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How many miles from charlottesville virginia to fairfax virginia?
You asked:
How many miles from charlottesville virginia to fairfax virginia?
Assuming you meant
• Fairfax, the place in City of Fairfax, Virginia, USA
|
{"url":"http://www.evi.com/q/how_many_miles_from_charlottesville_virginia_to_fairfax_virginia","timestamp":"2014-04-20T16:16:42Z","content_type":null,"content_length":"54395","record_id":"<urn:uuid:6070c8bb-6482-4043-9139-0cf378a1042a>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00438-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Equation for band stop filter
Show that the expression below acts as a band-stop filter:
x'(t) is the Hilbert transform of x(t).
Re: band stop filter
As an example case, I took x(t) to be sin(t), and x'(t) would then be -cos(t). I then implemented the formula you have given in MATLAB as:
y = exp(-cos(2 * pi * 10 * t)) .* cos( 2 * pi * f1 * t + sin(2* pi * 10 * t));
Here, I didn't know what to assume for f1, so I took the value as the 10 Hz input sine wave itself. I took the transform of y and divided it by the transform of sin(t), which is x(t), to get the
frequency response H(w), and took the ifft to get the impulse response h(t). Then I used freqz(h) to get the magnitude and phase response.
And it didn't look like a band-stop filter. What did I do wrong here? That is, assuming the claim in the question can actually be proved...
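One piece of the setup can at least be sanity-checked numerically: the Hilbert transform of sin(t) is indeed -cos(t). A sketch assuming NumPy and SciPy are available (scipy.signal.hilbert returns the analytic signal x + 1j*H(x), so its imaginary part is the transform itself):

```python
import numpy as np
from scipy.signal import hilbert

# An integer number of cycles keeps the FFT-based transform clean.
t = np.linspace(0, 20 * np.pi, 4000, endpoint=False)
x = np.sin(t)
xh = np.imag(hilbert(x))          # H(sin) should be -cos

mid = slice(500, 3500)            # ignore any edge effects
err = np.max(np.abs(xh[mid] + np.cos(t[mid])))
print(err)                        # prints a value close to 0
```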
|
{"url":"http://www.edaboard.com/thread97311.html","timestamp":"2014-04-16T07:13:20Z","content_type":null,"content_length":"58440","record_id":"<urn:uuid:95df8200-5a61-4ddb-ad11-82fc06c96f3a>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00526-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Sorcerer Dice Mechanic, Examined
Let's say you're Binding an enormous demon. You have 7 dice for the Binding roll, and the hulking fiend has 15. We're using d10s. What would you say is the most likely outcome of this roll? Obviously
the Demon is favoured, but what is the expected value of the Binding? In other words, if we did this roll 1000 times, what would be the most common outcome? How about the average outcome? +8 in
favour of the Demon?
I did an informal test with two handfuls of dice and about two dozen trials. To my surprise, the most common outcome by a long shot was +1 in favour of the demon. There were a couple of +2's, a +7,
and a few Successes for the sorcerer. Really not what I was expecting! So I modeled this roll in a spreadsheet, copied it 1000 times, and took some statistics.
Here is the probability distribution of 15 d10s vs 7 d10s :
Mode (most common outcome): 1 Victory for the Demon
Mean (average outcome): 1.38 in favour of the Demon
65% of the time : Demon Success
18% of the time : Sorcerer Success
(Why don't these add up to 100%? See "Errors, Limitations" below.)
58.1% of the time : Binding roll is 1 or 2, for either party
36.0% of the time : Binding roll is 1 for either party
24.8% of the time : Binding roll is 3 or greater, for either party
13.4% of the time : Binding roll is 4 or greater, for either party
Errors, Limitations:
17% of the results are "Zero." This is a limitation of my simple spreadsheet. It doesn't know what to do when both characters have the same number of highest results, e.g. both characters roll two
10's. I can't think of a quick way to sort out these cases automatically, and I don't want to spend all night on this.
This tells me that, in general, most opposed rolls in Sorcerer are going to end up with a low number of victories, just 1 or 2 - no matter how asymmetric the dice pools are. Highly asymmetric
Bindings (4 or greater Victories) are expected to be rare, even with enormous demons (Will 15!).
This was significant because I'm working on developing some rules specifically for Demon Lords (à la Elric/S&S). I had an idea that depended on highly asymmetric binding rolls, but now I see that I
can't count on these occurring. Hmmm.
Was anyone else surprised by this? Does it seem "right" that a highly unbalanced Binding roll should end up with just 1 or 2 victories most of the time?
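A Monte Carlo version of the spreadsheet experiment is easy to sketch in Python. The tie-handling here is an assumption (a tied highest die is resolved by walking down both sorted pools and is worth a single victory to whichever side breaks it), not necessarily the official Sorcerer rule, but the headline result matches the post: lopsided pools still produce mostly 1-victory margins.

```python
import random
from collections import Counter

def roll(n, sides=10):
    return sorted((random.randint(1, sides) for _ in range(n)), reverse=True)

def margin(a, b):
    """Positive = a wins by that many victories, negative = b wins, 0 = push.
    Victories = winner's dice strictly above the loser's highest die; a tied
    highest die is broken by comparing successive pairs (an assumption) and
    is worth one victory."""
    av = sum(1 for d in a if d > b[0])
    bv = sum(1 for d in b if d > a[0])
    if av:
        return av
    if bv:
        return -bv
    for x, y in zip(a, b):      # highest dice tie: walk down both pools
        if x != y:
            return 1 if x > y else -1
    return 1 if len(a) > len(b) else (-1 if len(b) > len(a) else 0)

random.seed(1)
trials = [margin(roll(15), roll(7)) for _ in range(40000)]
demon = sum(m > 0 for m in trials) / len(trials)
sorc = sum(m < 0 for m in trials) / len(trials)
mode = Counter(trials).most_common(1)[0][0]
print(f"demon wins {demon:.0%}, sorcerer wins {sorc:.0%}, "
      f"most common margin {mode:+d}, mean {sum(trials) / len(trials):+.2f}")
```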
|
{"url":"http://www.indie-rpgs.com/forge/index.php?topic=30125.msg278385","timestamp":"2014-04-17T06:57:56Z","content_type":null,"content_length":"53687","record_id":"<urn:uuid:5d6ab2d0-3ef2-4c6e-b23e-aab94a89ad1c>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00406-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Well, I have some doubts on gravitation, please help.
Why is g maximum on Jupiter?
I'm thinking.
What I think is that Jupiter is the biggest planet.
But according to that thinking, g should be smallest on Jupiter!!
Please help.
Please wait, I should check my books.
Are you sure g is max? Jupiter's r is big.
Sorry, biggest, not big.
I think that since its mass is much larger in proportion to its radius, g is so big. I answer this with this formula: \[mg'=GmM/R^{2}\rightarrow g'=GM/R^{2}\] with M = 1.8986 × 10^27 kg and R = 11 × 6400 × 10^3 m.
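Plugging those numbers into g = GM/R² (a sketch; R is taken as the rough estimate of about 11 Earth radii of 6400 km used in the reply):

```python
# g = GM/R^2 with the replier's approximate values for Jupiter.
G = 6.674e-11          # m^3 kg^-1 s^-2
M = 1.8986e27          # kg, mass of Jupiter
R = 11 * 6400e3        # m, rough radius (about 11 Earth radii)

g = G * M / R**2
print(round(g, 1))     # roughly 25.6 m/s^2, versus about 9.8 on Earth
```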
Sorry, so late.
OK, next doubt.
A hole is drilled from the surface of the earth and a ball is thrown inside. The ball executes SHM. Why?
This is not a question on SHM!!!
The gravitational force is the cause of it: \[F_{gravitation}=ma\] I will explain in detail as soon as possible.
The ball executes SHM because, once you're below the surface of the earth, the gravitational force is directly proportional to the radial distance from the center of the earth. Hence the
force the ball experiences is exactly like the force it would experience on a spring.
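The resulting oscillation has a period independent of where the ball starts. Inside a uniform-density Earth the restoring force is F = -(mg/R)x, so ω² = g/R, which gives the classic 84-minute figure (a sketch under the uniform-density idealization):

```python
import math

# Tunnel through a uniform-density Earth: omega^2 = g/R, T = 2*pi*sqrt(R/g).
g = 9.81        # m/s^2 at the surface
R = 6.371e6     # m, Earth's mean radius

T = 2 * math.pi * math.sqrt(R / g)
print(round(T / 60, 1))  # about 84.4 minutes
```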
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/4f2e554ce4b0571e9cbad254","timestamp":"2014-04-18T11:04:14Z","content_type":null,"content_length":"145013","record_id":"<urn:uuid:a1a2f8af-c374-4e37-ada4-abbf52eb344d>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00544-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MathFiction: Harvey Plotter and the Circle of Irrationality (Nathan Carter / Dan Kalman)
a list compiled by Alex Kasman (College of Charleston)
Home All New Browse Search About
Harvey Plotter and the Circle of Irrationality (2011)
Nathan Carter / Dan Kalman
Harvey Plotter, who has a scar shaped like a radical sign on his forehead, must find all of the rational points on the circularum unititatus before the evil Lord Voldemorphism.
The reader follows along as Harvey and his Graphindor friends (Rong and Hymernie) prove that the points on the unit circle for which both coordinates are rational numbers (aside from (0,1))
correspond to the lines with rational slope through the point (0,-1), and then (with help from Alphas Jumblemore) parametrize these points in terms of integer parameters p and q.
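The parametrization the story walks through is easy to check with exact rational arithmetic: the line of slope m = p/q through (0, -1) meets the unit circle again at x = 2pq/(p² + q²), y = (p² - q²)/(p² + q²). (Slope 3 recovers the familiar 3-4-5 triangle.)

```python
from fractions import Fraction

def rational_point(p, q):
    """Rational point on the unit circle cut out by the line of slope p/q
    through (0, -1)."""
    d = p * p + q * q
    return Fraction(2 * p * q, d), Fraction(p * p - q * q, d)

for p, q in [(3, 1), (2, 1), (5, 2)]:
    x, y = rational_point(p, q)
    assert x * x + y * y == 1          # exactly on the unit circle
    print(f"slope {p}/{q} -> ({x}, {y})")
```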
The puns are heavy and the plot light, but they are probably enough to coax the reader to continue on through a very clear and precise elementary worked example in rational algebraic geometry.
Published in Math Horizons, November 2011, pp 10-14. (This may not be the last we see of these characters. The Great Harvey Plotter Writing Contest is announced on page 2 of the same issue.)
(Note: This is just one work of mathematical fiction from the list. To see the entire list or to see more works of mathematical fiction, return to the Homepage.)
Works Similar to Harvey Plotter and the Circle of Irrationality
According to my `secret formula', the following works of mathematical fiction are similar to this one:
1. Uncle Georg's Attic by Ben Schumacher
2. Pythagoras's Darkest Hour by Colin Adams
3. The Gangs of New Math by Robert W. Vallin
4. Donald in Mathmagic Land by Hamilton Luske (director)
5. Lost in Lexicon: An Adventure in Words and Numbers by Pendred Noyce
6. Unreasonable Effectiveness by Alex Kasman
7. The Case of the Murdered Mathematician by Julia Barnes / Kathy Ivey
8. Cardano and the Case of the Cubic by Jeff Adams
9. The Cat in Numberland by Ivar Ekeland (author) / John O'Brien (illustrator)
10. Flatterland: like Flatland, only more so by Ian Stewart
Ratings for Harvey Plotter and the Circle of Irrationality:
Content: 5/5 (1 vote)
Literary Quality: 1/5 (1 vote)
Have you seen/read this work of mathematical fiction? Then click here to enter your own votes on its mathematical content and literary quality, or send me comments to post on this Webpage.
Genre: Humorous, Fantasy, Didactic, Children's Literature
Topic: Geometry/Topology/Trigonometry, Algebra/Arithmetic/Number Theory
Medium: Short Stories
(Maintained by Alex Kasman, College of Charleston)
|
{"url":"http://kasmana.people.cofc.edu/MATHFICT/mfview.php?callnumber=mf1012","timestamp":"2014-04-19T17:27:11Z","content_type":null,"content_length":"8713","record_id":"<urn:uuid:eb0c1443-e76d-41b1-bb99-53f89dee21cb>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00345-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Medina, WA Algebra 2 Tutor
Find a Medina, WA Algebra 2 Tutor
...Also available for piano lessons, singing and bridge.I personally scored 800/800 on the SAT Math as well as 800/800 on the SAT Level II Subject Test. I have a lot of experience in helping
students prepare for any of the SAT Math tests to be able to find solutions to the problems quickly and accu...
43 Subjects: including algebra 2, chemistry, calculus, physics
...I am confident that with a little bit of time and motivation I can help any student understand and think more critically about the subject they are struggling with. In many cases, students
just need a push toward the more interesting side of learning, and I always encourage students to look past...
14 Subjects: including algebra 2, writing, geometry, biology
...Unfortunately, the school system is set up in such a manner that teachers do not always have the time to explain why mathematics works the way it does and many students get lost and confused.
This is the main reason why math is one of the most disliked subjects for some. As a tutor I have devel...
20 Subjects: including algebra 2, reading, calculus, statistics
...I have also been performing at weddings for the past 20 years. I do stress proper warmups and warmdowns as well as voice strengthening exercises. I also want my students to know how to read
46 Subjects: including algebra 2, reading, English, algebra 1
...I get requests from all over the country, so I usually use online meeting software. The software allows us to talk in real time (just like we're on Skype or on the phone), and we see and work
the same problems together. I've taught in classrooms, over the kitchen table, and I have to say that the online experience is by far the best.
15 Subjects: including algebra 2, algebra 1, ASVAB, MCAT
Related Medina, WA Tutors
Medina, WA Accounting Tutors
Medina, WA ACT Tutors
Medina, WA Algebra Tutors
Medina, WA Algebra 2 Tutors
Medina, WA Calculus Tutors
Medina, WA Geometry Tutors
Medina, WA Math Tutors
Medina, WA Prealgebra Tutors
Medina, WA Precalculus Tutors
Medina, WA SAT Tutors
Medina, WA SAT Math Tutors
Medina, WA Science Tutors
Medina, WA Statistics Tutors
Medina, WA Trigonometry Tutors
|
{"url":"http://www.purplemath.com/medina_wa_algebra_2_tutors.php","timestamp":"2014-04-17T07:33:39Z","content_type":null,"content_length":"23974","record_id":"<urn:uuid:6ab9fb93-b7b4-4109-970d-8a31264f7022>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00609-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Golden Ratio
August 2nd, 2009 by Creation-Facts Leave a reply »
In the year 1202 the Italian mathematician Leonardo Fibonacci worked out a problem about rabbits having babies and discovered a pattern of numbers: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89,
144, 233, 377, and so on. Every number in the pattern is the sum of the two that came before it: 0+1=1, 1+1=2, 1+2=3, 2+3=5, 3+5=8, 5+8=13, 8+13=21, and so on. We see these Fibonacci
numbers showing up over, and over, and over in nature.
If you ignore the zero, and divide each Fibonacci number by the one before it, you get: 1, 2, 1.5, 1.67, 1.6, 1.625, 1.615, 1.619, 1.617, 1.618, 1.617, 1.618, 1.618, etc. After the first few, the
answer is always close to 1.618. Now you might ask yourself, "So what? What does 1.618 have to do with anything?" Well, as it turns out, that's a very special number, so special, in fact, that it's
called the "golden ratio".
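You can watch the convergence happen (a short sketch; phi = (1 + √5)/2 is the exact value of the golden ratio):

```python
import math

# Ratios of consecutive Fibonacci numbers converge on the golden ratio.
phi = (1 + math.sqrt(5)) / 2
a, b = 1, 1
for _ in range(20):
    a, b = b, a + b
print(b / a, phi)  # both close to 1.6180
```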
The ancient Greeks based a lot of their art and buildings on the golden ratio (often shown as the Greek letter Phi Φ). The length of the Parthenon, for example, is a rectangle 1.618 times as long as
it is wide (known as a golden rectangle). They also designed much of their pottery with the same ratio. Now why did they do that? They did it because they believed that this special ratio was much
more pleasing to the human eye than any other ratio. Many of the great artists used the golden ratio in their art. For example, Mona Lisa’s face is 1.618 times as long as it is wide. Beautiful
symphonies also have the same golden ratio. The first movement is usually 1.618 times as long as the second one.
So why do we find that number so pleasing to the eye and to the ear? Do we find it beautiful because it copies creation, the work of the Master Artist, God? Could it be that the golden ratio is one
of the blueprints God used in His creation? Let’s look at a few of the other ways that the number 1.618 shows up over, and over, and over again throughout the universe.
Φ Each segment in your finger is roughly 1.618 times as long as the next one.
Φ Your forearm is approximately 1.618 times as long as your hand.
Φ People with mouths 1.618 times as wide as their noses, are often considered the most beautiful.
Φ In addition, the distance between their pupils is about 1.618 times as wide as their mouths.
The leaves and stems of some trees are arranged at 137.5 degrees from each other. That angle lets the sun shine on the greatest number of leaves. When you draw that angle inside a circle, you get two
pieces. Divide 137.5° into 222.5° and you get … 1.618!
If you make a spiral based on Fibonacci numbers, where every quarter turn is 1.618 times as far from the center as the previous one, you get what is known as a “golden spiral”. Amazingly, most of the
spirals found in nature are golden spirals.
The list of golden ratios goes on and on and on. From art and music to nature and science, 1.618 keeps showing up over and over. It is almost as if “Somebody” used that number as a measuring stick
for the universe. It just can’t be an accident. Many people call the golden ratio the “divine proportion” because it is clear only God could have done it!
The above document was published by Alpha Omega Institute, Kids Think and Believe, March-April 2006 and authored by Lanny & Marilyn Johnson.
|
{"url":"http://www.creation-facts.org/scientific-law/the-golden-ratio/","timestamp":"2014-04-19T19:33:01Z","content_type":null,"content_length":"19658","record_id":"<urn:uuid:4bd41380-cf01-4718-8c78-9e8b502f2ddc>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00443-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Fresnel equations at normal incidence
Wow, I didn't consider this aspect!!! Thank you for pointing this out! I'm studying these things right now.
Well, I think that you are right, the plane of incidence loses its meaning.
Though the real point here may be something else. I say "may" because I've only been thinking about it for 10 minutes, but as it makes sense to me I'll write it here.
The information from the reflection coefficients is not only about the amplitude of the reflected wave, but also about its phase.
Now, despite the lack of a plane of incidence, at normal incidence the coefficients still have to tell you that the electric component has a 180° phase shift, while the magnetic one doesn't. This is
why, I think, you get that the coefficients are opposite: as you say, there is no difference as regards the plane of incidence (the amplitudes of the reflected wave are the same), but there is still
a difference in the phase of the reflected wave.
I think this is the explanation. I'm not sure, but it makes sense. Hope it is clear.
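The sign flip is easy to see in the normal-incidence limit of the Fresnel formulas. A sketch for a hypothetical air-to-glass interface, using the common convention in which r_s and r_p are defined so that they differ in sign at normal incidence:

```python
# Fresnel amplitude coefficients at normal incidence (theta_i = theta_t = 0).
n1, n2 = 1.0, 1.5   # assumed indices: air into glass

r_s = (n1 - n2) / (n1 + n2)   # s-polarization
r_p = (n2 - n1) / (n2 + n1)   # p-polarization

print(r_s, r_p)  # same magnitude, opposite sign
```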
Patent US6873994 - Conflict detection and resolution in association with data allocation
This invention relates to the field of data allocation, and more particularly to conflict detection and resolution in association with data allocation.
It is often desirable within a business or other planning environment to generate information regarding demand, available supply, selling price, or other data concerning a product or other item. Data
for products may often be dependent in some manner on data for other hierarchically related products. For example, demand for a product with multiple components may drive the demand for a particular
one of those components. Similarly, demand for products in a particular geographic region may drive the demand for the products in a particular territory in the region. Because of these hierarchical
dependencies, the data concerning various products or other items may be stored hierarchically in data storage or derived in a hierarchical fashion. Furthermore, the data may be stored at a storage
location associated with multiple dimensions, such as a product dimension (the storage location being associated with a particular product or product component), a geography dimension (the storage
location being associated with a particular geographical area), and a time dimension (the storage location being associated with a particular time or time period).
It is often desirable to update product data by forecasting demand values or other appropriate values for a particular product or group of products. As an example, using the current and/or past
demand values associated with a particular product in a particular state, the demand for the product in that state at a time in the future may be forecasted. However, it may not be feasible or
accurate to forecast demand values for the product in a particular region of the state or to forecast demand values for the product in the entire country in which the state is included. Instead, the
demand value for the product in the particular state may be used to determine other hierarchically related demand values using allocation techniques. For example, the forecasted demand value may be
determined by aggregating it with demand values for the product in other states in the country to determine a demand value for the product in the entire country. Alternatively, the demand value may
be allocated by disaggregating it to determine a demand value for the product in each of the regions of the state. However, many current allocation methods do not provide a sufficiently accurate
allocation of forecasted values and thus negatively affect demand planning, supply planning, or other planning based on the allocated values.
According to the present invention, disadvantages and problems associated with previous data allocation techniques have been substantially reduced or eliminated.
According to one embodiment of the present invention, a method for detecting and resolving conflicts in association with a data allocation includes determining a relationship between each of a
plurality of positions in a hierarchical organization of data. The method also includes selecting a position i and determining a total weight of position i. If the total weight of position i is
effectively non-zero, the method further includes removing the influence of position i from the other positions and adding position i to a set of conflict-free positions. Alternatively, if the total
weight of position i is effectively zero, the method includes selecting a position k with which position i has a relationship, reintroducing the effect of position k on the other positions if k is
already in the conflict-free set, removing position k from the conflict-free set if k is already in the conflict-free set, removing the influence of position i from the other positions if i is not
the selected position, and adding position i to the conflict-free set if i is not the selected position. The method also includes successively repeating the method for each position, with each
successive position becoming position i.
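The procedure above is stated abstractly. As a rough sketch only (my simplification, not the patented algorithm: it performs detection and skips conflicting positions, omitting the resolution step of swapping in a related position k), the "remove the influence" and "total weight" operations behave like Gram-Schmidt orthogonalization of the constraint rows:

```python
import numpy as np

def conflict_free_rows(R, tol=1e-9):
    """Keep constraint rows of R that are linearly independent of the
    rows already accepted.  Subtracting a row's projection onto the
    accepted basis plays the role of "removing the influence" of
    earlier positions; a vanishing residual norm plays the role of an
    "effectively zero total weight", i.e. a conflict."""
    accepted, basis = [], []
    for i, row in enumerate(np.asarray(R, dtype=float)):
        r = row.copy()
        for b in basis:                 # remove influence of accepted rows
            r -= (r @ b) * b
        norm = np.linalg.norm(r)
        if norm > tol:                  # effectively non-zero weight
            basis.append(r / norm)
            accepted.append(i)
        # else: row i conflicts with the accepted set and is skipped
    return accepted

# Three constraints on three children; the third row is the sum of the
# first two, so fixing all three targets may be infeasible -> conflict.
R = np.array([[1, 1, 0],
              [0, 0, 1],
              [1, 1, 1]])
print(conflict_free_rows(R))  # [0, 1]
```

The intermediate residuals computed here are the kind of values the patent notes can be re-used in the subsequent allocation, which is why the combined process is computationally efficient.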
Embodiments of the present invention may provide one or more technical advantages. For example, particular embodiments may be used in conjunction with an allocation of values to detect and resolve
potential conflicts associated with the allocation. When a complex allocation is to be performed, it may be difficult to identify the various hierarchical dependencies that may lead to conflicts.
Certain embodiments may ensure that when a set of positions in an hierarchical organization of data are selected from which to allocate values to lower level positions in the hierarchy, the selected
positions will be free from conflicts that might make the allocation infeasible. Furthermore, particular embodiments may use techniques for identifying such conflicts that generate values that may be
used when actually performing the allocation. Therefore, many of the computations performed during a conflict identification and resolution process by particular embodiments may be re-used in the
associated allocation and thus the conflict identification and resolution process is computationally efficient.
Other important technical advantages are readily apparent to those skilled in the art from the figures, descriptions, and claims included herein.
To provide a more complete understanding of the present invention and the features and advantages thereof, reference is made to the following description taken in conjunction with the accompanying
drawings, in which:
FIG. 1 illustrates an example system for allocating data in a hierarchical organization of data;
FIG. 2 illustrates an example product dimension within a multi-dimensional organization of data;
FIG. 3 illustrates an example geography dimension within a multi-dimensional organization of data;
FIG. 4 illustrates an example method for allocating data within a business or other planning environment;
FIG. 5 illustrates an example allocation of a forecasted value associated with a single parent in one dimension;
FIG. 6 illustrates an example allocation of forecasted values associated with multiple parents associated with multiple dimensions; and
FIG. 7 illustrates an example method of detecting and resolving conflicts before an allocation.
FIG. 1 illustrates an example system 10 for allocating data, such as forecasted data, in a hierarchical organization of data associated with a business or other planning environment. As described
below, system 10 implements an allocation strategy that may be used to allocate a value associated with a particular data member in a data storage device or in a representation of data to
hierarchically related data members. In general, forecasted data provides an estimate of the forecasted demand, available supply, selling price, or other quantifiable data measure associated with a
particular product or other item. Although products are typically referred to herein, system 10 may be used to allocate forecast data for appropriate tangible or non-tangible items other than
products, including but not limited to services or other benefits. Furthermore, although forecasts are primarily discussed herein, system 10 may also allocate historical or other data, separately or
in combination with forecast data, according to particular planning needs of an enterprise, facility, or user. Moreover, although the allocation of demand forecasts for products is primarily
described, those skilled in the art will appreciate that system 10 may also allocate forecasts for available supply, selling price, and any other suitable data.
System 10 includes server 12, client 14, and data storage 16. Client 14 may include one or more processes to provide appropriate administration, analysis, and planning input. Although these processes are preferably separate processes running on a dedicated client processor, these processes may be integrated, in whole or in part, and run on one or more processors within the same or different computers. Similarly, server 12 may include one or more processes to receive administration, analysis, and planning input from client 14 and interact with data storage 16 to provide corresponding output to client 14. Although the processes are preferably separate processes running on a dedicated server processor, these processes may be integrated, in whole or in part, and run on one or more processors within the same or different computers. Client 14 and server 12 may be fully autonomous or may operate at least in part subject to input from users of system 10.
The term “data storage” is used to refer to any appropriate data source, representation of data, or other organization of data. Data storage 16 may be hierarchical in nature, may be
multi-dimensional, and/or may provide persistent data storage for system 10. For example, data storage 16 may be a multi-dimensional database that stores data in a hierarchical and multidimensional
format or data storage 16 may be a representation of data derived by server 12 or other appropriate component from data stored in a relational database, in memory, or in any other appropriate
location. Server 12 or other appropriate component may use a multi-dimensional hierarchical transformation layer to create such a representation of the data. In one embodiment, data storage 16
includes three-dimensional data and, for each data measure, associates with each storage location 18 a particular member from the product dimension, a particular member from the geography dimension,
and a particular member from the time dimension. Each of these particular combinations of members of these three dimensions is associated with a corresponding storage location 18 in data storage 16,
similar to each combination of coordinates from the x, y, and z axes being associated with a point in three-dimensional Euclidian space. Furthermore, position within a particular dimension may be
changed independent of members of other dimensions, much like the position of a coordinate along the x axis may be changed independent of the positions of other coordinates along the y and z axes in
three-dimensional Euclidian space.
Data storage 16 may have as few or as many dimensions as appropriate for the particular application. For example, and not by way of limitation, an enterprise associated with system 10 may not
consider geography in connection with its data forecasting needs. This might be the case when products are ordered using the Internet or the telephone and then distributed from a single distribution
point. In this example, data storage 16 might be two-dimensional rather than three-dimensional and might not reflect positions or members within the geography dimension. Furthermore, the demand or
other data might be quantified per specified time interval, in which case data storage 16 might be two-dimensional and might not reflect positions or members within the time dimension. Other possible
scenarios involving more or fewer than three dimensions will be apparent to those skilled in the art. Data storage 16 may have any number of dimensions appropriate for the needs of the enterprise or
facility associated with system 10.
In the three-dimensional case, the values of the data measures within the set for a particular storage location 18 depend on the combined positions of members within product, geography, and time
dimensions for that storage location 18. As a result, the values of the data measures typically vary with these combined positions as appropriate to accurately reflect the demand, available supply,
selling price, or other data associated with these members. As described below, when a suitable combination of members is specified in the product, geography, and time dimensions according to
operation of system 10, data storage 16 accesses the data measures for storage location 18 associated with that combination of members to assist system 10 in allocating demand forecasts or other
suitable data. Other suitable dimensions may replace or be combined with the product, geography, and time dimensions according to particular needs.
In one embodiment, data storage 16 supports multi-dimensional on-line analytical processing (OLAP) capability and is populated with data measures received from one or more transactional data sources
that are internal, external, or both internal and external to the enterprise or facility associated with system 10. For example, and not by way of limitation, data measures received from sources
internal to a manufacturing or warehousing facility may include unit shipping data, dollar shipping data, inventory data, pricing data, and any other suitable information applicable to demand
forecasting. Data measures received from external sources, such as from syndicated partners of the enterprise or facility, may include point-of-sale demographic data and any other suitable
information. Appropriate data measures may be stored in data storage 16 in any suitable manner.
Server 12 is coupled to data storage 16 using link 32, which may be any wireline, wireless, or other link suitable to support data communications between server 12 and data storage 16 during
operation of system 10. Data storage 16 may be integral to or separate from server 12, may operate on one or more computers, and may store any information suitable to support the operation of system
10 in allocating demand forecasts or other data. Server 12 is coupled to client 14 using link 30, which may be any wireline, wireless, or other link suitable to support communications between server
12, client 14, and the processes of server 12 and client 14 during operation of system 10. Although link 30 is shown as generally coupling server 12 to client 14, processes of server 12 may
communicate directly with one or more corresponding processes of client 14.
System 10 may operate on one or more computers 20 that are integral to or separate from the hardware and software that support server 12, client 14, and data storage 16. Computer 20 may include a
suitable input device 22, such as a keypad, mouse, touch screen, microphone, or other device to input information. An output device 24 may convey information associated with the operation of system
10, including digital or analog data, visual information, or audio information. Computer 20 may include fixed or removable storage media, such as magnetic computer disks, CD-ROM, or other suitable
media to receive output from and provide input to system 10. Computer 20 may include one or more processors 26 and associated memory to execute instructions and manipulate information according to
the operation of system 10. Although only a single computer 20 is shown, server 12, client 14, and data storage 16 may each operate on separate computers 20 or may operate on one or more shared
computers 20. Each of the one or more computers 20 may be a work station, personal computer (PC), network computer, personal digital assistant (PDA), wireless data port, or any other suitable
computing device.
FIG. 2 illustrates an example product dimension 50 within data storage 16 that includes a hierarchy of product levels 52 each having one or more members 54. The value of each data measure associated
with a member 54 is an aggregation of the values of corresponding data measures associated with hierarchically related members 54 in lower levels 52 of product dimension 50. In an example embodiment
in which system 10 provides demand forecasts, the demand associated with a member 54 is the aggregate demand for these hierarchically related members 54 in lower levels 52 of product dimension 50. In
the illustrated embodiment, product levels 52 for product dimension 50 include an all products level 58, a product type level 60, a product category level 62, and a product family level 64. Selected
and merely example hierarchical relationships between members 54 are shown using links 56, as described more fully below. Links 56 between hierarchically related members 54 in adjacent levels 52 of
product dimension 50 reflect parent-child relationships between members 54. Although FIG. 2 is described primarily in connection with demand relationships, the following description is similarly
applicable to other data relationships, such as available supply, selling price, or any other relationships relating to data measures associated with an item or set of items.
In the particular example shown in FIG. 2, all products level 58 contains “All” member 54 representing the aggregate demand for all members 54 in lower levels 60, 62, and 64 of product dimension 50.
Product type level 60 contains “Components,” “Base Units,” and “Options” members 54. “Components” member 54 represents the aggregate demand for hierarchically related members 54 below “Components”
member 54 in levels 62 and 64 of product dimension 50. Similarly, “Base Units” member 54 represents the aggregate demand for hierarchically related members 54 below “Base Units” member 54 and
“Options” member 54 represents the aggregate demand for hierarchically related members 54 below “Options” member 54. Links 56 between “All” member 54 and “Components,” “Base Units,” and “Options”
members 54 indicate the hierarchical relationships between these members 54.
Product category level 62 contains, under “Components” member 54, “Hard Drives,” “Memory Boards,” and “CPUs” members 54. “Hard Drives” member 54 represents the aggregate demand for hierarchically
related members 54 below “Hard Drives” member 54 in level 64 of product dimension 50. Similarly, “Memory Boards” member 54 represents aggregate demand for hierarchically related members 54 below
“Memory Boards” member 54 and “CPUs” member 54 represents the aggregate demand for hierarchically related members 54 below “CPUs” member 54. Links 56 between “Components” member 54 and “Hard Drives,”
“Memory Boards,” and “CPUs” members 54 indicate the hierarchical relationships between these members 54. Analogous links 56 reflect hierarchical relationships between “Base Units” and “Options”
members 54 of product type level 60 and corresponding members 54 in lower levels 62 and 64 within product dimension 50.
Product family level 64 contains, under “Hard Drives” member 54, “4 GB” and “6 GB” members 54. Links 56 between “Hard Drives” member 54 and “4 GB” and “6 GB” members 54 indicate hierarchical
relationships between these members 54. Analogous links 56 reflect hierarchical relationships between “Memory Boards,” “CPUs,” “Servers,” “Desktops,” “Laptops,” “Monitors,” “Keyboards,” and
“Printers” members 54 of product category level 62 and corresponding members 54 in lower level 64 within product dimension 50. Although no links 56 are shown between members 54 in product family
level 64 and possible lower levels 52, such further levels 52 may exist within product dimension 50 and analogous links 56 may exist to reflect the corresponding hierarchical relationships.
Furthermore, members 54 shown in FIG. 2 are examples only and are not intended to be an exhaustive set of all possible members 54. Those skilled in the art will appreciate that other suitable members
54 and associated links 56 may exist.
FIG. 3 illustrates an example geography dimension 70 within data storage 16 that includes a hierarchy of geography levels 72 each having one or more members 74. The value of each data measure
associated with a member 74 is an aggregation of the values of corresponding data measures associated with hierarchically related members 74 in lower levels 72 of geography dimension 70. In the
example embodiment in which system 10 provides demand forecasts, the demand associated with a member 74 is the aggregate demand for these hierarchically related members 74. In this embodiment,
geography levels 72 for geography dimension 70 include a world level 78, a country level 80, a region level 82, and a district level 84. Selected and merely example hierarchical relationships between
members 74 are shown using links 76, which are analogous to links 56 described above with reference to FIG. 2. Although FIG. 3 is described primarily in connection with demand relationships, the
following description is similarly applicable to other data relationships, such as available supply, selling price, or any other relationships relating to one or more data measures associated with an
item or set of items.
In the particular example illustrated in FIG. 3, world level 78 contains “World” member 74 representing aggregate worldwide demand. Country level 80 contains “U.S.” and “Canada” members 74, which
represent aggregate demand for the United States and Canada, respectively. Link 76 between “U.S.” members 74 in country level 80 and “World” members 74 in world level 78 indicates a hierarchical
relationship between these members 74. Similarly, link 76 between “Canada” member 74 and “World” member 74 indicates a hierarchical relationship between these members 74. In this example, worldwide
demand is an aggregation of aggregate demand in the United States as well as aggregate demand in Canada. Although other links 76 are not described in detail, those skilled in the art will appreciate
that links 76 are analogous to links 56 described above with reference to FIG. 2 in that each represents a corresponding hierarchical relationship between members 74 in the various levels 72 of
geography dimension 70. As discussed above, geography dimension 70 may be eliminated or otherwise not considered in allocating data, for example, if geography dimension 70 is not relevant to
particular data forecasting needs. Data storage 16 might in this situation be two-dimensional.
Demand or other forecasts may be derived using traditional forecasting techniques and suitable information concerning products, geographic areas, customers, and/or other data dimensions. Such
information may include historical sales, causal factors, key account input, market intelligence, and the like. Forecasting techniques may rely on hierarchical relationships between members 54, 74 to
allocate data forecasts for products corresponding to members 54, 74. As described above, the data measures associated with each member 54, 74 are an aggregation of the data measures associated with
some or all members 54, 74 in lower levels 52, 72 within the same hierarchy of parent-child links 56, 76. Therefore, given forecast data for a member 54, 74 (a parent) at one level 52, 72, the
forecasts for each of the related members 54 in the next lowest level 52, 72 (the children of the parent) may be determined by disaggregating the forecast data for the parent between the children.
Furthermore, although the terms “parent” and “children” are used above to identify a relationship between members 54, 74 of a single dimension 50, 70, these terms may also be used to refer to the
relationship between data measures or values associated with a storage location 18 associated with a member from each of a number of dimensions. For example, a storage location 18 that includes a
demand value for a particular product in a particular state may be hierarchically related to a storage location 18 that includes a demand value for the product in a city of that state (the value
associated with the former storage location 18 being a parent of the value associated with the latter storage location 18).
When allocating a forecast from one or more parents to their children, a “top-down” proportional allocation strategy is often used. In this strategy, the value of the forecast (such as a demand
forecast) associated with a parent is divided proportionally among its children according to the relative current values (such as current demand values) associated with the children. Therefore, using
such proportional allocation, children having larger values get a larger share of the number being allocated and children having smaller values get a proportionately smaller share. For example, if a
parent with a forecasted demand of 1800 units has a first child that currently has an associated demand of 1000 units and a second child that currently has an associated demand of 500 units, then
1200 units of the forecasted demand would be allocated to the first child and 600 units of the forecasted demand would be allocated to the second child.
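The worked numbers above can be reproduced with a minimal sketch (function name hypothetical, not from the patent):

```python
def proportional_allocate(parent_forecast, child_values):
    """Split a parent's forecast among its children in proportion to
    the children's current values (classic top-down allocation)."""
    total = sum(child_values)
    return [parent_forecast * v / total for v in child_values]

# Parent forecast of 1800 units, children currently at 1000 and 500:
print(proportional_allocate(1800, [1000, 500]))  # [1200.0, 600.0]
```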
Top-down allocation, proportional or otherwise, may be used for many reasons. For example, forecasts that are estimated at a higher level 52, 72 often are more accurate and a forecast planner may
want to keep those values intact and adjust the forecasts at the lower levels 52, 72 to agree with the higher level forecast. Alternatively, the forecasts at a high level 52, 72 may be specified,
such as objectives and targets, and the lower level forecasts are adjusted to achieve the target forecast at the higher level 52, 72. However, proportional allocation is often too restrictive and may
adversely affect the accuracy of the forecast values determined for children in the lower level 52, 72. For example, a scenario where proportional allocation may create inaccurate forecasts is when
the value associated with a child to which an estimated forecast is to be allocated has a relatively high variance (for example, the value varies widely over time). In this case, a proportional
allocation based on the current value associated with the child (or based on an average of a selected number of past values) may be skewed by a temporary fluctuation in the value.
System 10 may use an allocation strategy that accounts for variance in the values associated with children when allocating a forecasted value from a parent of the children. Furthermore, it is
possible for the values of the children to have positive or negative relationships between themselves, so that a higher value associated with one child may have a correspondence with a higher or
lower value associated with another child. The allocation strategy may also account for these relations.
A distance measure may be defined as follows in order to take into account the variance and correspondence between children when allocating a forecast:

d = ({overscore (x)} − {overscore (x)}′)^T Σ^−1({overscore (x)} − {overscore (x)}′)  (1)

In this equation, {overscore (x)} is the vector of current values (such as demand values) associated with the children of a particular parent. Σ is the variation matrix that identifies the variation of each child (Σ^−1 being the inverse of the variance matrix).
The variation of a particular child may be expressed as a standard deviation, variance, or any other suitable measure and may be determined using statistical equations, models, or any other
appropriate technique. {overscore (x)}′ is the vector of the values associated with the children after the allocation of the forecast from the parent. To optimally allocate the forecast, the
selection of the values of {overscore (x)}′ should minimize the distance d.
The determination of {overscore (x)}′ may be subject to the constraint of the parent-child relationships. For general linear relationships, such as when the value associated with a parent equals the
sum or average of the values associated with its children, it is possible to define a suitable parent-child relationship matrix R such that if {overscore (y)} is the vector of values associated with
one or more parents of the children represented in {overscore (x)} and {overscore (x)}′, then the parent-child relationship can be expressed as follows:
R{overscore (x)}′ = {overscore (y)}  (2)
It should be noted that a child may have multiple parents in the same dimension or in multiple dimensions. This concept is described below with reference to FIG. 6. Given the above two equations, an
optimal {overscore (x)}′ may be given by the following equation:
{overscore (x)}′ = {overscore (x)} + ΣR^T(RΣR^T)^−1({overscore (y)} − R{overscore (x)})  (3)
where R^T is the transpose of R.
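Equation (3) is a direct matrix computation. A minimal sketch (assuming NumPy; the function name and example numbers are mine, chosen for illustration rather than taken from the patent):

```python
import numpy as np

def variance_weighted_allocate(x, y, R, Sigma):
    """Equation (3): x' = x + Sigma R^T (R Sigma R^T)^-1 (y - R x)."""
    x = np.asarray(x, dtype=float)
    y = np.atleast_1d(np.asarray(y, dtype=float))
    adjustment = Sigma @ R.T @ np.linalg.solve(R @ Sigma @ R.T, y - R @ x)
    return x + adjustment

# One parent whose value must equal the sum of its three children.
x = [100.0, 200.0, 300.0]            # current child values (sum = 600)
R = np.array([[1.0, 1.0, 1.0]])      # parent = sum of children
Sigma = np.diag([1.0, 2.0, 3.0])     # child variances, no covariances
print(variance_weighted_allocate(x, 660.0, R, Sigma))
# the surplus of 60 is split 1:2:3 by variance -> [110. 220. 330.]
```

Note that a child with larger variance absorbs more of the adjustment, which is the behavior the proportional strategy above cannot provide.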
As an example only and not by way of limitation, consider a local hierarchy with one parent and three children for a time series of length T. The values of each child i may be denoted by a separate column vector (x_{i,1}, . . . , x_{i,T}) and the values of the parent may be denoted by a single column vector (y_1, y_2, . . . , y_T). A single column vector including the values of all children i for all times t may be expressed as follows:

{overscore (x)} = [x_{1,1}, x_{2,1}, x_{3,1}, x_{1,2}, x_{2,2}, x_{3,2}, . . . , x_{1,T}, x_{2,T}, x_{3,T}]^T
Assuming that an example parent-child relationship indicates that, at each time, the sum of the values of all the children should equal the value of the parent, the parent-child relationship matrix is a T × 3T block matrix with one row per time step:

R =
[1 1 1 0 0 0 . . . 0 0 0]
[0 0 0 1 1 1 . . . 0 0 0]
[       . . .          ]
[0 0 0 0 0 0 . . . 1 1 1]
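As a side note (my own observation, not part of the patent), this block structure is exactly a Kronecker product, which makes R easy to construct programmatically:

```python
import numpy as np

# With 3 children and T time steps, R stacks one [1 1 1] row per time
# step along a block diagonal -- i.e. kron(I_T, [1 1 1]).
T, n_children = 3, 3
R = np.kron(np.eye(T), np.ones((1, n_children)))
print(R.shape)               # (3, 9)
print(R[0].astype(int))      # [1 1 1 0 0 0 0 0 0]
```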
In general, the matrix Variation(x) representing the variations of all children at all times is a square matrix of dimension equal to the product of the number of children and T. In such a matrix, most elements are equal to zero. For example, assume that the values of {overscore (x)} are predictions from a model. Variation(x) assumes a typical block-diagonal structure as follows:

Variation(x) = diag(Σ^1, Σ^2, . . . , Σ^T)

where the off-diagonal zeros represent variation matrices of all elements zero of appropriate order and each Σ^t is of the general form (for the example case of three children):

Σ^t =
[σ^t_{1,1} σ^t_{1,2} σ^t_{1,3}]
[σ^t_{2,1} σ^t_{2,2} σ^t_{2,3}]
[σ^t_{3,1} σ^t_{3,2} σ^t_{3,3}]
where σ^t_{i,i} is the variation (such as the variance) of a particular child i at time t and σ^t_{i,j} is the correlated variation or "covariation" (such as the covariance) between two different children i and j at time t. The variations and covariations may be determined using any appropriate methods, including but not limited to standard statistical formulas or more complex statistical models. In certain embodiments, the covariations are not utilized and are replaced by zeros in the above matrix.
After algebraic manipulation of the expression for {overscore (x)}′ described above, the allocation to each child i for a particular time t amounts to adding to its value {overscore (x)} at time t a proportion,

(Σ_j σ^t_{i,j}) / (Σ_{i,j} σ^t_{i,j}),

of the difference at time t between the value associated with the parent and the sum of the values associated with the children. Under the most common scenario of univariate modeling and forecasting methods, the adjustment proportion would take the simpler form

σ^t_{i,i} / (Σ_i σ^t_{i,i}).
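The simplified univariate proportion can be sketched as follows (function name and numbers are mine, for illustration):

```python
import numpy as np

def univariate_adjustment(x, y, variances):
    """Each child i absorbs a share sigma_ii / sum_i(sigma_ii) of the
    gap between the parent value y and the current child total."""
    x = np.asarray(x, dtype=float)
    v = np.asarray(variances, dtype=float)
    gap = y - x.sum()
    return x + gap * v / v.sum()

# Parent target 540 vs. current child total 600: the shortfall of 60
# is absorbed in ratio 1:1:2 according to the children's variances,
# so the children become [85, 185, 270].
print(univariate_adjustment([100, 200, 300], 540, [1, 1, 2]))
```

A child with a stable history (small σ^t_{i,i}) keeps its value nearly unchanged, while a volatile child soaks up most of the discrepancy.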
Unlike previous allocation techniques, system 10 accounts for the variation of the data values associated with a child in a hierarchical organization of data when allocating a value to that child
from a parent. Therefore, system 10 provides a more accurate allocation. Furthermore, system 10 may also take into account parent-child relationships involving different dimensions of data storage 16
when allocating a value. In addition, system 10 typically does not require complex computations to perform an allocation. Example allocation scenarios are described below with reference to FIGS. 5
and 6.
FIG. 4 illustrates an example method for allocating data, such as forecast data, in hierarchical organization of data associated with a business or other planning environment. The method begins at
step 102 where the value of one or more parents at a time t is forecasted or otherwise determined. As described above, any appropriate technique for generating a forecast for a particular value (such
as a demand value) associated with a parent may be used. Values associated with multiple parents in different dimensions within data storage 16 may be determined and those values may be allocated to
the children of those parents according to the present method. If there is a single parent value, then this value is represented in Equations (1), (2), and (3) above as a value y (instead of a vector
{overscore (y)}). If there are multiple parent values, those parent values are represented in the equations as a column vector {overscore (y)}. At step 104, the current values of the children (or the
values otherwise existing before allocation of the forecasted parent value) are determined. These values are represented in Equations (1), (2), and (3) as the column vector {overscore (x)}.
At step 106, the parent-child relationship matrix R is determined. As described above, the parent-child relationship matrix is formulated such that the value of a parent or parents at a particular
time is equal to the product of the parent-child relationship matrix and the vector of the child values at that time. The variation matrix Σ for the children at the relevant time t is determined at
step 108. As described above, the variations included in the variation matrix may be determined using any appropriate methods. At step 110, the values of the children at time t are determined
according to Equation (3) presented above. In this equation, the values of the children at time t are represented as the vector {overscore (x)}′ and are determined using the values of y, {overscore
(x)}, R, and Σ that were determined in steps 102, 104, 106, and 108, respectively. However, it should be understood that these values may be determined in any appropriate order and in any appropriate
FIG. 5 illustrates an example allocation of a forecasted value associated with a single parent 200 in one dimension using the method of FIG. 4. In this example, the current value (such as a demand
value) associated with parent 200, which may represent a product category C1 in product dimension 50, is 600 units. Parent 200 has a first child 210 a representing a product family F1 and having a
current associated value of 100 units, a second child 210 b representing a product family F2 and having a current associated value of 200 units, and a third child 210 c representing a product family
F3 and having a current associated value of 300 units. These values may be expressed in a vector as follows: $x _ = [ 100 200 300 ]$
In this example, the sum of the values of children 210 equals the value of parent 200. Therefore, the parent-child relationship matrix may be expressed as follows:
R=[1 1 1]
Furthermore, assume that the variation matrix for children 210 is as follows: $∑ = [ 5 0 0 0 25 0 0 0 10 ]$
Assuming that the forecasted value y associated with parent 200 at time t is 700 units, the values allocated to children 210 for time t using Equation (3) above may be determined as follows: $x _ ′ =
x _ + ∑ R T ( R ∑ R T ) - 1 ( y - R x _ )$ $x _ ′ = [ 100 200 300 ] + [ 5 25 10 ] ( 0.025 ) ( 700 - [ 1 1 1 ] [ 100 200 300 ] )$ $x _ ′ = [ 112.5 262.5 325 ]$
It should be noted that the sum of these allocated values equals the forecasted parent value. Furthermore, these values may be compared to the values obtained using a proportional allocation
technique. Using such a technique, the values of the first, second, and third children 210 would be 116.67 units, 233.33 units, and 350 units, respectfully. However, these values do not account for
the variations in the values associated with each child 210 and thus likely to be less accurate than the values that are obtained above using the example method.
FIG. 6 illustrates an example allocation of forecasted values associated with multiple parents 220 in multiple dimensions using the method of FIG. 4. In this example, a first parent 220 a is
associated with a territory T1 in geography dimension 70 and also with a product category C1 in product dimension 50. The product category C1 includes two families F1 and F2. Furthermore, a second
parent 220 b is associated with a district D1 in geography dimension 70 and also with family F2 in product dimension 50. District D1 includes territories T1, T2, and T3. As can be seen, the first and
second parents 220 each represent values (such as demand values) associated with two dimensions. Parent 220 a represents values associated with product category C1 in territory T1. Parent 220 b
represents values associated with district D1 for product family F2.
Parent 220 a has a first child 230 a that represents values associated with product family F1 in territory T1 and has a second child 230 b that represents values associated with product family F2 in
territory T1. Parent 220 b has a first child 230 b that represents values associated with product family F2 in territory T1, a second child 230 c that represents values associated with product family
F2 in territory T2, and a third child 230 d that represents values associated with product family F2 in territory T3. Therefore, parents 220 share a single child 230 b representing values associated
with product family F2 in territory T1.
In this example, the current value (such as a demand value) associated with parent 220 a is 300 units. Child 230 a has an associated current value of 100 units and child 230 b has an associated
current value of 200 units. The current value associated with parent 220 b is 900 units. As described above, child 230 b (which is shared with parent 220 a) has an associated current value of 200
units. Child 230 c has an associated current value of 300 units and a child 230 d has an associated current value of 400 units. The values associated with children 230 may be expressed in a vector as
follows: $x _ = [ 100 200 300 400 ]$
In this example, the sum of the values of children 230 equals the value of their respective parents 220. Therefore, the parent-child relationship matrix may be expressed as follows: $R = [ 1 1 0 0 0
1 1 1 ]$
Furthermore, assume that the variance matrix for children 230 is as follows: $∑ = [ 10 0 0 0 0 20 0 0 0 0 30 0 0 0 0 40 ]$
Assuming that the forecasted value associated with parent 220 a at time t is 400 units and the forecasted value associated with parent 220 b at time t is 1000 units, the values allocated to children
230 at time t using Equation (3) above may be determined as follows: $x _ ′ = x _ + ∑ R T ( R ∑ R T ) - 1 ( y _ - R x _ )$ $x _ ′ = [ 100 200 300 400 ] + [ 10 0 20 20 0 30 0 40 ] [
0.039 - 0.008 - 0.009 0.013 ] ( [ 400 1000 ] - [ 1 1 0 0 0 1 1 1 ] [ 100 200 300 400 ] )$ $x _ ′ = [ 130.43 269.57 313.04 417.39 ]$
It should be noted that the sum of the allocated values for children 230 equal the forecasted values of their respective parents 220. Furthermore, these values may be compared to the values obtained
using a proportional allocation technique. Using such a technique, the values of children 230 a and 230 b of parent 220 a would be 133.33 units and 266.67 units, respectively. The values of the
children 230 b, 230 c, and 230 d of parent 220 b would be 222.22 units, 333.33 units, and 444.44 units, respectively. However, these values do not account for the variations in the values associated
with each child 230 and thus likely to be less accurate than the values that are obtained above using the example method. Furthermore, the proportional allocation technique produces two different
values for child 230 b since the proportional allocation method is performed separately for each parent 220 and the allocation from each parent 220 produces a different result for the value to be
associated with child 230 b. These different results could then have to be reconciled. Therefore, system 10 is also advantageous in that it can simultaneously allocate values from multiple parents
220 to their children 230 (some of which may be common to two or more parents).
As can be seen from the above examples, system 10 provides for the allocation of data from parents to children in a hierarchical organization of data having one or more dimensions in a manner likely
to be more accurate than with previous techniques. System 10 provides a number of advantages. For example, the representation of parent child relations using the parent-child relationship matrix R is
a flexible and general mathematical representation allowing greater flexibility and rigor in allocation from parents to children. Furthermore, when the quantities involved in an allocation are
appropriate statistical quantities, the result from the allocation is statistically optimal.
Although particular examples are described above, the described techniques may be used for many other applications. For example, one advantage is the generality offered by the parent-child
relationship matrix in handling and representing parent-child relations. The parent-child relationship matrix can have as many rows as there are parents whose values need to be allocated to their
children. However, if multiple parents do not have common children, computation may be simplified by separating such rows into different parent-child relationship matrices.
The most elementary case is a parent-child relationship matrix having a single row. The columns of the matrix represent the total number of children involved in a parent child relationship with a
parent represented by the row of the matrix. Each child typically has only one column regardless of the number of parents the child has. The value of an element in a row will typically be zero if the
corresponding column is not a child of the parent represented by the row. A nonzero value indicates that the corresponding column is a child of the parent represented by the row. The nonzero value
itself could be any number, so that any linear relationship could exist between a set of children and their parent.
One example of a type of parent-child relationship is when the parent is equal to the sum of its children (an aggregation relation), as described above. In this case, each element of a row in the
parent-child relationship matrix is zero if a child is not involved in the aggregation relation and is a one when the child is involved in the aggregation. Another type of parent-child relationship
is when the parent is the average of its children. In this case, each element of a row of the parent-child relationship matrix is zero when the corresponding child is not involved in averaging and is
the fraction 1/n (where n is equal to the number of children of the parent) when the corresponding child is involved in the averaging. Yet another example of a parent-child relationship is when the
parent is a dollar value of a group of items and the children are quantities of each item contributing to the dollar value of the parent. In this case, each element of a row of the parent-child
relationship matrix is zero when the corresponding child is not involved in contributing to the dollar value represented by the parent. The value of the element is equal to the price of the
corresponding child when the child is involved in contributing to the dollar value represented by the parent.
It should be noted that a row of the parent-child relationship matrix may represent a direct parent of the children involved in an allocation or an indirect parent. For example, the parent
represented by a row may be a parent of another member that is a parent of the children involved in the allocation. Furthermore, although the values in the parent-child relationship matrices
described above are numerical, the values may also be semantic or have particular business meanings.
The flexibility offered by the parent-child relationship matrix is not restricted to cases where some or all of the values to be allocated are statistical quantities. For example, {overscore (x)} may
be zero or may be the result of other types of computations or user inputs. Similarly, Σ also may include either statistical quantities, more simplified user-inputs, results of other non-statistical
computations, or any other appropriate values. For certain cases, Σ may include the same values as {overscore (x)}, or be directly related to those values, along its diagonal and have off-diagonal
elements equal to zero.
For example, {overscore (x)} may be regarded as zero and Σ may be populated with values along its diagonal which are not all zero. In this case, system 10 could be used to perform a proportional
allocation of the desired values of the parents to the children, taking advantage of the ability of the method to handle multiple parents with shared children. A similar example is when it is
desirable to allocate the difference between the current value of a parent and the desired value, according to a particular proportion among its children. In this case, not all elements of {overscore
(x)} may be regarded as zero. The current values of {overscore (x)} (or their functions, such as their square root or square) may be used as weights for an allocation and appear as the diagonal
elements of Σ. Again, the advantage is the ability to allocate multiple parents to their children in a consistent fashion.
The above examples show that the parent-child relationship matrix and a variety of choices for {overscore (x)} and Σ allow for a flexible and generalized allocation scheme with respect to
parent-child relations. Additional flexibility and rigor is obtained in the allocation by using a variety of different types of values as the contents of Σ. As a example only, one can design a Σ
matrix with the variances or a measure of the relative variation of the children along its diagonal. Further, unlike in the previous examples, the off-diagonal values of Σ can be non-zero and made
equal to measures of covariances or relative co-variation of each pair of children. This structure of Σ, when used in an allocation scheme, can account for relations between the children themselves.
For example, when the value of one child i is higher, another child j may tend to be higher or lower to a degree specified by the quantity in the ith row and jth column of Σ.
Furthermore, the final allocated quantity may not be one that is explicitly produced by using Equation (3) presented above. For example, determining the final allocated quantity may involve selecting
between and/or combining outputs obtained from different allocations using Equation (3). One reason for such selection and/or combination is that there may be uncertainties about the accuracy of the
various quantities involved in allocation. In such cases, it may be preferable to use alternative quantities in the allocation method and combine the results of the allocations in an appropriate
manner (for example, by averaging the results) such that the final quantities after allocation might not be the result of applying the method to any one choice of input quantities. Similarly the
output from one or more of the allocations may be selected based on appropriate criteria.
Another problem that may arise during an allocation are inconsistencies or conflicts in relationships between parents and children or between parents. A conflict may arise when the underlying
“degrees of freedom” for allocating a value are less than the number of rows in R. As an example, assume that there is a parent whose value is to be allocated between two children, C1 and C2. The
user may specify that C1 must have a particular fixed value; however, a particular fixed value can not then be specified for C2 since the parent value could not be allocated (there would not be
enough degrees of freedom to allocate the parent value since both child values are fixed). When there are multiple parents being allocated to common children (possibly in multiple dimensions) the
relationships are much more complicated and the conflicts are harder to identify. However, the method described below can be employed to determine the dependencies in an allocation set. Furthermore,
the quantities computed are intermediate to the allocation method itself and thus there is no additional overhead for detecting the conflicts. Moreover, the method can be used to automatically
resolve these conflicts by elimination without any user intervention, when desirable. Alternatively, it is also possible to accept incremental modifications to the current allocation set when a
conflict is detected (with no significant overhead), so that the user can specify what gets eliminated.
As described above, conflicts can occur when specifying various points (such as a parent or any other position where a user desires to specify a predetermined value) in the hierarchies where certain
predetermined values need to be fixed for allocation to their children. These conflicts result in conflicting requirements for the values at the target positions (for example, children), and cannot
be resolved unless one or more of the conflicting requirements are eliminated. The cause of such conflicts is in the selection of the set of source positions (for example, parents) whose effects have
to be allocated to the target positions. Consider an N dimensional space spanned by the children and p dimensional space spanned by the parent-child relationship matrix R (where the p dimensional
space is a subspace of the N dimensional space). Therefore, p can not be greater than N without introducing conflicts. When a source position for allocation is conflicting with a set of other
positions for allocation, the space spanned by the matrix R is degenerate in the sense that at least one of the rows of R is determined by a linear combination of the remaining rows (for example, if
the row rank of R is less than the number of rows in R). Therefore, a conflict in allocation may be defined as a condition in which some source positions in the allocation set completely determine
some other source position(s) in the allocation set. Since the latter are completely dependent on the former, they are not available for change. Any change in their value(s) are the result of changes
in the values of the other source positions which completely determine them.
This creates a linear dependency problem which may be resolved by removing at least one of the rows of R (which may be each associated with a parent) involved in the dependency, in order for the rest
of the requirements to be satisfied exactly (another approach to resolving conflicts might be to satisfy the requirements approximately without excluding any requirement completely). Regardless of
the allocation method used, analyzing the vectors of the parent-child relationship matrix R or, alternatively, the parent-parent relationship matrix RΣR^T (as described below) will enable the
detection and resolution of conflicts in an allocation.
Particular embodiments of the present invention provide techniques for detection and resolution of conflicts in an allocation based on the properties of the matrix RΣR^T (or RR^T), as discussed
above. According to these techniques, a conflict-free allocation set is a set having the matrix RΣR^T (or RR^T) with a non-zero determinant. The determinant is zero if a conflict is present. The
basic approach of the techniques is to compute the inverse of the relevant matrix using the pivoting method described below and, in the process, detecting the rows (or columns) of the matrix which
are involved in a dependency with previously considered rows (or columns). When a dependency is detected, the position involved in the dependency is removed from the allocation scheme and the
corresponding elements are removed from the matrix, resulting in resolution of the conflict that was detected. The techniques described below may be implemented in any appropriate combination of
software and/or hardware operating in association with one or more computers 20 of system 10.
Two operators referred to as SWEEP and INVSWEEP may be used in association with determining the inverse. The result of SWEEP on all diagonal elements of a positive definite matrix is the inverse of
the matrix. The effect of a single SWEEP is the removal of the influence of one row (or column) vector from the remaining vectors. One of the useful properties of the SWEEP operator is its ability to
compute the determinant. The determinant may not be computed explicitly, with only the rows (or columns) causing the determinant to be zero being detected and eliminated, thus producing a generalized
inverse of the original matrix when its determinant is zero. The INVSWEEP operator is the inverse of the SWEEP operator so that INVSWEEP can be used to reintroduce the effect of a vector which has
previously been sweeped.
There are several variants of the operator SWEEP. In the following methods, a variant is used that results in the negative inverse of the original matrix. One advantage of this particular variant of
SWEEP is that it is easy to undo the effect of the operator by using the given INVSWEEP operator; however, any other appropriate variant may be used. The particular variant of SWEEP that is used is
as follows:
Let A be a symmetric matrix. Sweeping on the kth diagonal entry, a[k,k]≠0, results in a new symmetric matrix Â=(â[i,j]) where, $a ^ k , k = - 1 a k , k$ $a ^ i , k = a i , k a k , k$ $a ^ k , j = a k
, j a k , k$ $a ^ i , j = a i , j - a i , k a k , j a k , k$
for i,j≠k
Inverse sweeping on the kth diagonal entry, a[k,k]≠0, results in a new matrix Â=(â[i,j])where $a ^ k , k = - 1 a k , k$ $a ^ i , k = a i , k a k , k$ $a ^ k , j = a k , j a k , k$ $a ^ i , j = a i ,
j - a i , k a k , j a k , k$
for i,j≠k.
Three example techniques are presented below for inverting RΣR^T using the above operators and determining a set P of positions that are free from conflicts (RR^T may be similarly inverted, if
appropriate). The first technique computes the inverse when the matrix is full rank. When the matrix is rank-deficient, the technique eliminates one parent involved in each dependency detected and
produces a generalized inverse of the matrix that can be used for allocation. However, in eliminating the dependencies of the parents, this technique does not expect user inputs and runs
automatically. The second technique works like the first technique, except that when a dependency is detected, the technique allows for user input to eliminate a parent involved in the dependency and
proceeds to produce a generalized inverse. These two techniques demonstrate the basic functioning of the method of determining a conflict-free set P of positions for allocation from a given set.
Using aspects from first and second techniques, several levels of user interaction and automation may be implemented. The third technique presents an example of one such approach where the user may
control the elimination of a subset of positions (as in the second technique) while the remaining positions are free for automatic elimination (as in the first technique). The outputs of all of these
techniques can then be used for computing the allocation, as described above.
The first technique (automatic resolution of conflicts) may be implemented as follows:
Let A=RΣR^T (a p×p matrix), let δ be a very small number (such that δ is not zero, but as close to zero as is appropriate for computational purposes), and let P be a set of conflict-free positions.
The following process may then be performed:
Initialize P as an empty set
For each i such that 1 ≦ i ≦ p
If a[i,i ]> δ
SWEEP on a[i,i]
Add position i to P
Set all values in ith row and ith column to zero.
After the process is completed the set P will contain a set of positions that are consistent and free of conflicts. Furthermore, the matrix resulting from both the techniques will be a negative
inverse of A which can be used in the allocation computation as the quantity (RΣR^T)^−1.
Similarly, the second technique (a technique for interactive resolution of conflicts) may be performed as follows:
Initialize P as an empty set
For each i such that 1 ≦ i ≦ p
If a[i,i ]> δ
SWEEP on a[i,i].
Add position i to P
dependency ← TRUE
Show P and i to user for elimination and get input k
such that a[i,k ]≠ 0 or k=i
Set all values in kth row and kth column to zero.
If (k=i)
dependency ← FALSE
INVSWEEP on a[k,k]
Remove position k from P
If a[i,i ]> δ
SWEEP on a[i,i].
Add position i to P
dependency ← FALSE
}until dependency = FALSE
As with the first technique, after the process is completed the set P will contain the set of allocations that are consistent and free of conflicts. Furthermore, the matrix resulting from both the
techniques will be a negative inverse of A which can be used in the allocation computation as the quantity (RΣR^T)^−1.
The third technique (a technique for selectively controlled resolution of conflicts) is a generalization of the two techniques above and allows the user to control conflict resolution on a certain
subset of the positions while allowing the technique to automatically resolve on all other positions. This technique may be implemented as follows:
Let A=RΣR^T (a p×p matrix), let δ be a very small number (such that δ is not zero, but as close to zero as is appropriate for the calculations), let P be the set of conflict-free positions, and let C
contain the row numbers corresponding to the allocation positions that the user wants to control. The following process may then be performed:
Initialize P as an empty set
For each i such that 1 ≦ i ≦ p
If a[i,i ]> δ
SWEEP on a[i,i].
Add position i to P
If i ∉ C
Set all values in ith row and ith column to zero.
dependency ← TRUE
Show P and i to user for elimination and get
input k such that a[i,k ]≠ 0 or k=i
Set all values in kth row and kth column to zero.
If (k=i)
dependency ← FALSE
INVSWEEP on a[k,k]
Remove position k from P
If a[i,i ]> δ
SWEEP on a[i,i].
Add position i to P
dependency ← FALSE
}until dependency = FALSE
After the process is completed the set P will contain the set of allocations that are consistent and free of conflicts. Furthermore, as with the previous techniques, the matrix resulting from both
the techniques will be a negative inverse of A which can be used in the allocation computation as the quantity (RΣR^T)^−1.
FIG. 7 illustrates an example method of detecting and resolving conflicts before an allocation. The method starts at step 300 where a rule set is selected for identifying and resolving conflicts
associated with an allocation. For example, one of the three techniques described above may be selected for use (or any other appropriate technique may be selected) and/or other parameters associated
with conflict detection and resolution may be selected. At step 302, the set P of conflict-free positions is initialized as an empty set and index i is set as the first parent or other position to be
analyzed. System 10 determines at step 304 whether the total “weight” of position i (the total influence of the position) is greater than zero (or a very small number may be used to effectively
represent zero for computational purposes). The total weight of position i is the value a[i,i ]described above (when A=RΣR^T or RR^T).
If the total weight of position i is greater than zero (or greater than the very small number that effectively represents zero), then the influence of position i is removed from the other parents at
step 306. As described above, this step may be carried out by performing a SWEEP on a[i,i ]or using any other appropriate technique. At step 308, position i is added to set P and system 10 determines
at step 310 whether position i is the last position. If so, the method ends. If not, index i is set to the next position at step 312 and the method returns to step 304.
If the total weight of position i is zero (or not greater than the very small number that effectively represents zero), then the weight that position i shares with other positions (the “shared
weight”) is examined at step 314. For example, the shared weight a[i,k ](described above) identifies the relationship, if any, between position i and a selected position k. At step 316, system 10 may
determine the subset of positions in set P with which position i has a relationship. A position k may have a relationship with position i if the shared weight a[i,k ]is not equal to zero (or is
greater in absolute value than the very small number used to effectively represent zero). A position k, which could be either i or a member from set P, is selected for elimination at step 318. The
manner in which a position k is selected may differ based on the conflict resolution technique that is used. For example, if the automatic resolution technique is used, then the position k that is
selected may simply be the last position i that was evaluated. If the technique for interactive resolution of conflicts is used, then the choice of position k may be left to the user. If the
technique for selectively controlled resolution of conflicts is used, then the user may select a position k if the position i being evaluated is included in the set C (as described above).
Alternatively, the rule set defined for identifying positions for elimination might impose a priority ordering on the positions involved in the conflict for identifying the positions for elimination.
At step 320, system 10 determines whether the selected position k is the current position i. If it is, then the method proceeds to step 310. If it is not, then the position k is a member of the set P
and the effect of position k is reintroduced on the other positions at step 322. This step may be carried out by performing an INVSWEEP on a[k,k]. Position k is removed from set P at step 324 and
then all values in the kth row and kth column of A to zero. The method then proceeds to step 106 where the influence of position i is removed from all other positions, as described above (for
example, using a SWEEP), and position i is added to set P at step 308. The method may then proceed to step 310 where system 10 determines whether position i is the last position to be evaluated. As
described above, if it is not, then the method proceeds to step 312. If position i is the last position, then the method ends. Using this example method, conflicts in an allocation set may be
detected and resolved before the allocation is performed to prevent problems during the allocation. These conflicts may be detected and resolved even though complex hierarchical dependencies (that
may lead to the conflicts) may exist between positions. Furthermore, certain quantities calculated during the conflict resolution process may be used during the allocation process, thus leading to
computational efficiencies.
Although the present invention has been described with several embodiments, numerous changes, substitutions, variations, alterations, and modifications may be suggested to one skilled in the art, and
it is intended that the invention encompass all such changes, substitutions, variations, alterations, and modifications as fall within the spirit and scope of the appended claims.
|
{"url":"http://www.google.es/patents/US6873994?dq=flatulence","timestamp":"2014-04-21T12:29:40Z","content_type":null,"content_length":"187697","record_id":"<urn:uuid:528fccbb-bf99-4a7b-8595-87af6b7f6cb5>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00276-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Aritcle- QUEUEING THEORY,USAGE FACTOR AND SYSTEM PERFORMANCE
In this write up I have tried to give a very basic overview of queuing theory concepts, as it applies to system performance.
Queuing Theory is a set of mathematical solutions to waiting line or process type problems. In these problems, there is some process which consumes time, units arrive in the system, and are processed
though it.
A simple queue can be thought of as a set of requests that need servicing arriving at a particular rate with a particular probability of arrival (red dots). A processor that services the request at a
particular speed and serviced requests (green dots).
The arrival rate and the probability determine the input queue length at any given time. For further reading you can try a google search on queuing theory.
What we are interested in, is the Total elapsed time which is the sum of queue wait time and the request processing time. For the sake of simplicity let us assume that all the arriving requests are
of equal complexity, which means that each request takes the same time to process when it is the only request to be processed.
In most real-life situations the processor is shared among requesters (only a few tellers for a queue of customers at the bank, only one CPU complex for thousands of SQL requests, only one VSAM
dataset on a disk for thousands of SQLs on that table).
The questions most of us would like to answer are:
If one SQL takes "m" milliseconds when run by itself, how long will 5000 requests take to run?
Is it 5000 x "m" milliseconds? If so, how long will 10000 SQLs take? It is definitely not 10000 x "m" milliseconds. Or is it? Is there a point at which your processor becomes overwhelmed and simply
breaks down? If so, where is that point? How many SQLs can I run concurrently without reaching the point of meltdown? And so on.
To understand and answer business critical questions like these, one needs to have at least a basic understanding of resource usage factor, how it affects service times etc.
Usage factor U, very simply put, is the ratio of the current usage of a resource to its maximum available usage.
A 100 GB disk which has 60 GB data in it has a usage factor of 0.6 .
A 1000 MIPS CPU complex which has applications running on it that consume 750 MIPS has a usage factor of 0.75.
A Truck that can carry 5000 Kg with a maximum axle rating of 10000 Kg has a usage factor of 0.5.
In our queue example, if the processor has a capacity to service 1000 requests per second and if it has 1000 requests every second to process it has a usage factor of 1.0
Understanding the effect on usage factor is key in estimating the point of melt down.
Total Elapsed time = Total Queue Wait time + Actual Request Service time
Request Service time = ( Ideal Request Service time x Usage factor ) / (1 - Usage factor ), where 0 <= Usage factor < 1
The Request Service time is proportional to U/(1-U).
When we invest in infrastructure we want to get the maximum return on our investment. Naturally, we are tempted to use the resource close to its maximum rated capacity, i.e. a usage factor close to 1.0.
Let us look at what happens to service time as we approach a usage factor of 1: U/(1-U) approaches 1/(1-1), which is 1/0.
Therefore as U ==> 1, U/(1-U) ==> infinity. Hence your service time also approaches infinity.
As a simple plot of U/(1-U) against U makes clear, once U reaches 0.95 you are fast approaching the meltdown point.
As U gets closer to 0.95 the service time of the system reacts violently and starts approaching infinity.
The "system" might be your CPU, disk, network, employee or your motor car.
It is just a bad idea to push the average resource utilization factor beyond 0.9, and the peak resource utilization factor beyond 0.95.
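To make the blow-up concrete, here is a small sketch (my own illustration, not from the article; the function name and parameters are mine) of the service-time multiplier U/(1-U) from the formula above:

```python
def service_time(ideal, u):
    """Expected service time at usage factor u, per the article's
    'Request Service time = ideal * U / (1 - U)' model."""
    if not 0 <= u < 1:
        raise ValueError("usage factor must satisfy 0 <= u < 1")
    return ideal * u / (1 - u)

# With an ideal (unloaded) service time of 1 ms:
#   U = 0.50 ->  1 ms of extra delay
#   U = 0.90 ->  9 ms
#   U = 0.95 -> 19 ms
#   U = 0.99 -> 99 ms
```

Note how the step from 0.90 to 0.95 doubles the delay and the step to 0.99 quintuples it again, which is why staying below 0.9 on average is prudent.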
The next time someone in your company tells you to use a DASD volume to its full capacity, or gives you just enough buffers for your requirement, think about the meltdown point where U approaches 1.
FOM: Mayberry on justification of logic
Neil Tennant neilt at mercutio.cohums.ohio-state.edu
Sun Jan 25 17:11:56 EST 1998
John Mayberry wrote:
you cannot *justify* a system of proof procedures without using
the general notion of a model.
Surely it is obvious that you have to justify a system of
formal logical proof. Why should logical consequence be *defined* to be
"formally derivable in just *this* system of formal rules and logical
axioms"?
Clearly Mayberry is talking here of classical logic, rather than, say,
intuitionistic logic.
But even so, it seems to me that the question of justification
ought to be distinguished from the question of logical
completeness. For justification, all one needs is a soundness
theorem. (Some people with certain sorts of philosophical scruples
might even object to the claim that a proof of soundness furnished an
adequate justification; but their attitude only provides further
evidence that the question of soundness ought to be addressed
separately from the question of completeness.)
The history of logic is one of a great variety of alternative
formalisms consisting of axioms and rules of inference (for, say,
classical logic) all of whose deducibility relations turn out to be
coextensive. This history perhaps furnishes Mayberry's question
Why should logical consequence be *defined* to be "formally
derivable in just *this* system of formal rules and logical axioms"?
with more rhetorical force than ought to be conceded, in the light of
what we now know to be the best system.
The validity of logical reasoning is grounded in the rules we
can discern as somehow "at work" and underlying both the local and
global manoeuvres that mathematicians make when reasoning from axioms
to lemmas and theorems (and their corollaries). It took logicians a
long time to distill what I would regard as the essence of those
moves, namely the rules of introduction and elimination of logical
operators. Gentzen got them in 1936; and they were almost immediately
adopted by G"odel.
An introduction rule tells one how to infer a conclusion with
a dominant occurrence of the logical operator in question; while the
corresponding elimination rule tells one how to "reason away from" a
major premiss with that operator dominant.
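For concreteness (an illustration of mine, not part of Tennant's post), here are the introduction and elimination rules for conjunction as they appear in the Lean proof assistant:

```lean
-- ∧-introduction: from proofs of p and q, conclude p ∧ q
-- (a conclusion with the operator dominant).
example (p q : Prop) (hp : p) (hq : q) : p ∧ q := And.intro hp hq

-- ∧-elimination: "reason away from" a major premiss p ∧ q.
example (p q : Prop) (h : p ∧ q) : p := h.left
example (p q : Prop) (h : p ∧ q) : q := h.right
```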
For intuitionistic mathematics, the introduction and
elimination rules suffice. [For the cognoscenti who might object that
I have not mentioned the absurdity rule---ex falso quodlibet---I
should quickly add that one can show that it is *unnecessary* for the
logic of intuitionistic mathematics.]
These introduction and elimination rules are so basic, elegant
and beautiful, and so obviously a distillation of the essence of the
reasoning going on in (intuitionistic) mathematics, that the system
consisting of them should, in my view, be exempt from the implicature
in Mayberry's question. That is to say, this system is *not* "just one
system among many". Rather, it is THE system; and what remains is to
justify it, IF anyone really has a seriously felt need for such a
I contend that anyone expressing such a seriously felt need is
in the grip of a misconception as to what it is to expose, and
recognize, a final foundation for one's reasoning. It would be
impossible to convince the skeptic about these natural deduction rules
anyway, since those very rules would need to be used in any supposed
"justification" of the rules themselves. A soundness theorem would
have to be conducted in a language governed by the self-same rules of
logical reasoning. So too would any *further* philosophical argument
purporting to do more than a soundness theorem by way of justification
for the rules. [There is room here, by the way, for a technical
result, analogous to G"odel's 2nd incompleteness theorem: can one show
that it is impossible to establish the soundness of a system of
logical rules without using---hence, assuming the validity of---those
very rules?]
If pressed, the intuitionist can say more about why the rules
are right, without (contra Mayberry) having to invoke anything like
models. For example, one could justify the rules by showing that they
preserve the property of *having a canonical warrant* from premisses
to conclusion of any proof constructed by means of the rules. This
requires an inductive specification of canonical warrant; and the
resulting soundness theorem is, in effect, the normalization theorem
for the system of natural deduction. This approach goes back to Dag
Prawitz's seminal paper "On the idea of a general proof theory",
Synthese 1974. Interestingly, this approach provides a "reduction" of
the whole system of rules to just the introduction rules. It is the
introduction rules by reference to which the notion of canonical
warrant is defined. The elimination rules are then justified (modulo
the introduction rules) by means of the reduction procedures involved
in normalizing proofs. If we need a slogan, we can say: "The
introduction rules FIX the meanings of the logical operators; the
elimination rules merely UNPACK the consequences of those meanings."
To summarize: so far as intuitionistic logic is concerned, the
system of deductive rules is so basic as to need no
justification. But, if one insists on justification, a justification
can be given that makes no use of the notion of a model. Moreover, it
effects a conceptual reduction from the whole set of rules
(introduction and elimination) to just the introduction rules.
As I said earlier, however, Mayberry's claims concerned
*classical* logic rather than intuitionistic logic. So let us imagine
the system of basic deductive rules extended by the addition of any
one of the usual classical negation rules (double negation
elimination; classical reductio ad absurdum; the law of excluded
middle; the rule of constructive dilemma). Not one of these extra
classical rules can be justified without invoking the very rule
itself---or one provably equivalent to it, modulo intuitionistic
logic---in the course of providing the justification. So why not, as a
classical reasoner, simply adopt one of the classical rules without
further ado? The resulting system is classically sound; and that is
all that is required for *justification* of the choice of such a
system as a codification of correct classical reasoning.
It is, of course, a major happy accident that we discovered
relatively early on the necessary truth that the resulting classical
system is also complete (at least, at first order). But it would have
been conceptually possible to concern oneself with the foundations of
mathematics even in ignorance of the (necessary fact of)
completeness. If, for example, no one had succeeded in proving
completeness, we could still be arguing about the comparative merits
of set theory v. category theory as a foundation for mathematics while
bearing in mind the conceptual distinction between deducibility by
means of our rules, and truth-preservation under all (classical) models.
Neil Tennant
Physics Forums - View Single Post - Harmonic oscillation with friction
You mean that [itex]F_\mathrm{friction}(t) = \mu x' (t)[/itex]?
How would you solve it without the friction term? I guess you would make an Ansatz for the form of the solution such as [itex]x(t) = e^{\lambda t}[/itex] ?
Most engineers (at least in the UK and US) would call that "viscous damping", not "friction".
For the Coulomb model of friction, F in the OP's equation is constant, and its sign depends on the sign of the velocity.
You can easily solve the two separate cases where F is positive or negative. The solution is the same as if the mass and spring was vertical, and F was the weight of the mass.
For the complete solution, you start with one of the two solutions (depending on the initial conditions) until the velocity = 0, then you switch to the other solution, and so on. You can't easily get
a single equation that gives the complete solution in one "formula" for x.
The graph of displacement against time will look like a sequence of half-oscillations of simple harmonic motion, with amplitudes that decrease in a linear progression (not exponentially). The mass
will stop moving after a finite number of half-oscillations, at some position where the static friction force can balance the tension in the spring.
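A small sketch of the piecewise solution described above (my own illustration, not from the thread; names and parameters are mine): with Coulomb friction of constant magnitude F, each half-oscillation is simple harmonic motion about a shifted equilibrium at +/- F/k, so successive turning points shrink by the constant amount 2F/k, i.e. linearly rather than exponentially:

```python
import math

def turning_points(x0, k=1.0, F=0.05):
    """Successive instantaneous-rest positions of a mass-spring system
    with Coulomb (dry) friction of constant magnitude F, released from
    rest at x0. Motion stops once the spring force cannot exceed F."""
    pts = [x0]
    x = x0
    step = 2 * F / k               # amplitude lost per half-oscillation
    while abs(x) > F / k:          # |k*x| > F: spring overcomes friction
        x = -math.copysign(abs(x) - step, x)
        pts.append(x)
    return pts

# Released from x0 = 1 with 2F/k = 0.1, the extrema march down in a
# linear progression (1.0, -0.9, 0.8, -0.7, ...) until the mass sticks.
```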
Discrete Topology
March 12th 2011, 11:45 AM #1
Junior Member
May 2010
(b) If A, as a subspace of X, has discrete topology, then X has discrete topology.
False. X = Q (the rationals) and A = Z (the integers). (My guess.)
(c) If X has discrete topology then every subspace of X has discrete topology.
I don't know to get these answers can someone plz give me a hint
Right (I suppose the usual topology on $\mathbb{Q}$).
(c) If X has discrete topology then every subspace of X has discrete topology.True.
True: if $B\subset A$ then $B=A\cap B$, and $B$ is open in $X$ because $X$ is discrete, so $B$ is open in the subspace $A$.
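For completeness, a one-line check of the counterexample in (b); this is standard and not from the original post:

```latex
\{n\} = \mathbb{Z}\cap\left(n-\tfrac{1}{2},\;n+\tfrac{1}{2}\right)
\ \text{is open in the subspace }\mathbb{Z}\ \text{for every }n,
\qquad\text{while}\qquad
\{q\}\ \text{is not open in }\mathbb{Q}.
```

So $\mathbb{Z}$ is discrete as a subspace of $\mathbb{Q}$, but $\mathbb{Q}$ itself is not discrete.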
March 12th 2011, 12:10 PM #2
The Pressure Gauge On A Cylinder Of Gas Registers ... | Chegg.com
The pressure gauge on a cylinder of gas registers the gauge pressure, which is the difference between the interior pressure and the exterior pressure P_0. Let's call the gauge pressure P_g. When
the cylinder is full, the mass of the gas in it is m_i at a gauge pressure of P_gi. Assuming the temperature of the cylinder remains constant, show that the mass of the gas remaining in the
cylinder when the pressure reading is P_gf is given by
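The formula itself was lost with the problem's image; assuming the ideal-gas law at constant volume and temperature, so that the mass of gas is proportional to the absolute pressure P_g + P_0, the requested result is presumably

```latex
PV = \frac{m}{M}RT
\quad\Longrightarrow\quad
m \propto P_{\text{abs}} = P_g + P_0
\quad\Longrightarrow\quad
m_f = \left(\frac{P_{gf} + P_0}{P_{gi} + P_0}\right) m_i .
```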
Submitted by metaperl on Tue, 04/18/2006 - 10:13pm.
We first see
What bind does is to take a container of type (m a) and a function of type (a -> m b). It first maps the function over the container, (which would give an m (m b)) and then applies join to the
result to get a container of type (m b).
But then we see
Joining is equivalent to binding a container with the identity map. This is indeed still called join in Haskell:
So then the question becomes: if bind uses join and join uses bind, then we have a serious circularity issue...
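The circularity is only apparent: either operation can be taken as primitive and the other derived from it (plus fmap). A sketch using the list monad, translated into Python for concreteness (function names are mine, not from the post):

```python
def fmap(f, xs):
    """Map a function over a list container."""
    return [f(x) for x in xs]

def join(xss):
    """Primitive choice here: flatten one level, m (m a) -> m a."""
    return [x for xs in xss for x in xs]

def bind(xs, f):
    """bind derived from fmap and join: map f, then flatten."""
    return join(fmap(f, xs))

def join_via_bind(xss):
    """join recovered as bind with the identity map."""
    return bind(xss, lambda x: x)
```

In Haskell the Monad class takes (>>=) as primitive and derives `join m = m >>= id`, while container-style presentations take join as primitive and derive bind; no circularity arises in practice.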
Submitted by metaperl on Tue, 04/18/2006 - 8:45pm.
From the ocaml book we have the following
Certain algorithms are easier to write in this (imperative) programming style. Take for instance the computation of the product of two matrices. Even though it is certainly possible to translate
it into a purely functional version, in which lists replace vectors, this is neither natural nor efficient compared to an imperative version.
Submitted by metaperl on Tue, 04/18/2006 - 7:13am.
I was reading Cale Gibbard's Monads as Containers and thought "now this is what I learned Haskell for" and then I began to wonder about Ocaml and monads, which led me to google which led me to this
post on the non-deterministic monad of streams, which the author says "cannot be (naively) done in either Prolog or in Haskell's MonadPlus monad, both of which would go into an infinite loop on this example."
Submitted by metaperl on Tue, 04/18/2006 - 6:45am.
[19:33:05] /Cale/ But in FP, things are usually sort of 'dual' to OO in a strange way. Data is inextensible, but the operations on it are very extensible, which is sort of the reverse of the
situation in OO-land.
Submitted by metaperl on Sat, 04/15/2006 - 8:29pm.
[19:26:57] /Cale/ loufoque: another thing is that it's just
fun to program in Haskell -- you don't feel so much like
you're writing boilerplate code all the time, and if it
compiles, it usually works, since the typesystem catches
80 or 90 percent of all the stupid mistakes which the
compilers in other languages wouldn't.
Submitted by metaperl on Fri, 04/14/2006 - 7:30pm.
this is more of a personal blog post than anything, so either you get it or you don't
[18:10:44] palomer: but what does return do?
[18:12:17] it takes a value of type b and returns a value of type (m b)
[18:12:29] a value of type (m b) is a box with value of type b in it
[18:14:49] palomer: but that's just my point: (m b) has _one_ element
[18:15:12] therefore m (m a) is something where m has (m a) and (m a) has one element
[18:18:55] "The third method, join, also specific to monads, takes a container of containers m (m a), and combines them into one m a in some sensible fashion." ---- why is this a container of containers instead of a container of one
[18:21:55] @type [True, False, True]
[18:21:57] [Bool]
[18:22:07] @type [[True, False, True], [False, True]]
[18:22:07] [[Bool]]
[18:22:15] That's for metaperl .
[18:23:39] monochrom: ah! the second one is m (m a)
[18:23:55] YES!
[18:36:15] return takes a single element and "containerizes it", yielding something of type m a... but that (m a) just so happens to be an (m a) where the container has one element. There is nothing about the notation (m a) which constrains (m a) to only contain one type as the input to join (as well as monochrom's example) shows
Submitted by metaperl on Tue, 04/11/2006 - 11:24am.
In section 8.3 of the discussion on stringifying tree structures Hudak says:
Because (++) has time complexity linear in the length of its left argument, showTree is potentially quadratic in the size of the tree.
in response to this code:
showTree (Leaf x) = show x
showTree (Branch l r) = "<" ++ showTree l ++ "|" ++ showTree r ++ ">"
So this brings up two questions:
1. Why does (++) have time complexity linear in the length of its left argument?
2. Why is showTree potentially quadratic in the size of the tree?
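On question 1: `xs ++ ys` must rebuild every cons cell of xs before it can splice on ys, so its cost is proportional to `length xs`. On question 2: the string built for a subtree gets re-copied as the left argument of (++) at every enclosing level, so for a tree degenerated into a left spine the total copying is quadratic in the tree size. A rough cost model of repeated left-argument copying, sketched in Python for illustration (my own, not from the tutorial):

```python
def concat_cost(lengths):
    """Cells copied when repeatedly appending chunks of the given
    lengths, where each append copies its whole accumulated left
    argument (the Haskell (++) cost model)."""
    total = left = 0
    for n in lengths:
        total += left          # (++) rebuilds the entire left argument
        left += n
    return total

# n unit-length chunks cost 0 + 1 + ... + (n-1) ~ n^2 / 2: quadratic.
assert concat_cost([1] * 10) == 45
```

The standard fix is the `shows`/difference-list style, which threads a continuation instead of concatenating eagerly.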
Submitted by metaperl on Wed, 04/05/2006 - 4:19pm.
For a function fn (x:xs) ... what happens if it is called like this fn []?
An error
A convenient way to alias a type is how?
type String = [Char]
Submitted by metaperl on Mon, 04/03/2006 - 9:33am.
1. A Haskell paste page which syntax highlights Haskell code. Using paste.lisp.org is not a good idea because people don't respond to paste questions listed there
Submitted by Revision17 on Sun, 04/02/2006 - 7:16pm.
Alright, I'm new to haskell, and I just wrote a cellular automata simlator that a few people have expressed interest in seeing, so here it is:
module Main where
import Data.Bits
import Text.Printf (printf)
newtype State = State ([Bool],Int)
stateLineToString (x:xs) | x == True = 'X':(stateLineToString xs)
| otherwise = ' ':(stateLineToString xs)
stateLineToString [] = "|"
instance Show (State) where
show (State (x,y)) = printf "%04d:%s|" y (stateLineToString x)
-- hood = 3-cell neighborhood; this determines which bit of the ruleNumber we look at for the result; its three least significant bits determine which neighborhood it is
applyRule :: Int -> (Bool,Bool,Bool) -> Bool
applyRule ruleNum hood = testBit ruleNum (tripleToInt hood)
tripleToInt :: (Bool,Bool,Bool) -> Int
tripleToInt (x,y,z) = (x `trueNum` 4) + (y `trueNum` 2) + (z `trueNum` 1)
trueNum x y | x == True = y
| otherwise = 0
applyRuleToState :: ((Bool,Bool,Bool) -> Bool) -> State -> State -- [Bool] -> [Bool]
applyRuleToState f (State (x,y)) = State (False:(applyRuleToList f x),(y+1))
applyRuleToList :: ((Bool,Bool,Bool) -> Bool) -> [Bool] -> [Bool]
applyRuleToList rule (a:b:c:rest) = (rule (a,b,c)):(applyRuleToList rule (b:c:rest))
applyRuleToList _ [x,y] = [x]
testState = State ((take 100 (repeat False)) ++ [True] ++ (take 100 (repeat False)),0)
test = applyRuleToState rule30 testState
rule30 = applyRule 30
rule30All = iterate (applyRuleToState rule30) testState
rule90 = applyRule 90
rule90All = iterate (applyRuleToState rule90) testState
rulesToString :: [State] -> String
rulesToString (x:xs) = ((show x) ++ ['\n'])++(rulesToString xs)
rulesToString [] = ['\n']
main :: IO ()
main = putStrLn (rulesToString (take 100 rule90All))
As I'm still new there are some sloppy things with it, but it'll output a rule 90 cellular automata. With slight modification, it'll output any that use 3 cell neighborhoods.
Mathematics: The science of patterns
- APEIRON , 2001
Cited by 5 (2 self)
re applied to derive the following results for the observed association between prime number distribution and quantum-like chaos. (i) Number theoretical concepts are intrinsically related to the
quantitative description of dynamical systems. (ii) Continuous periodogram analyses of different sets of adjacent prime number spacing intervals show that the power spectra follow the model predicted
universal inverse power-law form of the statistical normal distribution. The prime number distribution therefore exhibits self-organized criticality, which is a signature of quantum-like chaos. (iii)
The continuum real number field contains unique structures, namely, prime numbers, which are analogous to the dominant eddies in the eddy continuum in turbulent fluid flows. Keywords: quantum-like
chaos in prime numbers, fractal structure of primes, quantification of prime number distribution, prime numbers and fluid flows 1. Introduction he continuum real number field (infinite numbe
Cited by 3 (0 self)
Mathematics is our “invisible culture ” (Hammond 1978). Few people have any idea how much mathematics lies behind the artifacts and accoutrements of modern life. Nothing we use on a daily
basis—houses, automobiles, bicycles, furniture, not to mention cell phones, computers, and Palm Pilots—would be possible without mathematics. Neither would our economy nor our democracy: national
defense, Social Security, disaster relief, as well as political campaigns and voting, all depend on mathematical models and quantitative habits of mind. Mathematics is certainly not invisible in
education, however. Ten years of mathematics is required in every school and is part of every state graduation test. In the late 1980s, mathematics teachers led the national campaign for high,
publicly visible standards in K-12 education. Nonetheless, mathematics is the subject that parents most often recall with anxiety and frustration from their own school experiences. Indeed,
mathematics is the subject most often responsible for students ’ failure to attain their educational goals. Recently, mathematics curricula have become the subject of ferocious debates in school
districts across the country. My intention in writing this essay is to make visible to curious and uncommitted outsiders some of the forces that are currently shaping (and distorting) mathematics
education. My focus is on the
, 2005
Cited by 2 (0 self)
We present the proof of Diophantus’ 20th problem (book VI of Diophantus’ Arithmetica), which consists in wondering if there exist right triangles whose sides may be measured as integers and whose
surface may be a square. This problem was negatively solved by Fermat in the 17th century, who used the wonderful method (ipse dixit Fermat) of infinite descent. This method, which is, historically,
the first use of induction, consists in producing smaller and smaller non-negative integer solutions assuming that one exists; this naturally leads to a reductio ad absurdum reasoning because we are
bounded by zero. We describe the formalization of this proof which has been carried out in the Coq proof assistant. Moreover, as a direct and no less historical application, we also provide the proof
(by Fermat) of Fermat’s last theorem for n = 4, as well as the corresponding formalization made in Coq.
researchers with a novel, and perhaps startling perspective on mathematical thinking. However, as evidenced by reviewers ’ criticisms (Gold, 2001; Goldin, 2001; Madden, 2001), their perspective –
though liberating for many, with its humanistic emphases – remains controversial. Nonetheless, we believe this perspective deserves further constructive response. In this paper, we propose that
several of the book’s flaws can be addressed through a more rigorous establishment of conceptual distinctions as well as a more appropriate set of methodological approaches. In the past decade,
several mathematics education researchers have emphasised the embodied nature of mathematical understanding, working toward displacing the more prevalent, conventional views found in both psychology
and philosophy and studying implications for mathematics learning. These researchers have argued that sensory-motor action plays a crucial role in mathematical activity. A major struggle has been to
explain how abstract, formal mathematical ideas can emerge from concrete sensory-motor experiences. This struggle has more recently found promising
, 2004
Introduction DNA topology is of fundamental importance for a wide range of biological processes [1]. Since the topological state of genomic DNA is of importance for its replication, recombination and
transcription, there is an immediate interest to obtain information about the supercoiled state from sequence periodicities [2,3]. Identification of dominant periodicities in DNA sequence will help
understand the important role of coherent structures in genome sequence organization [4,5]. Li [6] has discussed meaningful applications of spectral analyses in DNA sequence studies. Recent studies
indicate that the DNA sequence of letters A, C, G and T exhibit the inverse power law form 1/f frequency spectrum where f is the frequency and a the exponent. It is possible, therefore, that the
sequences have longrange order [7-14]. Inverse power-law form for power spectra of fractal space-time fluctuations is generic to dynamical systems in nature and is identified as self-organized
this paper shows (Section 2) that Fibonacci series underlies fractal fluctuations on all space-time scales
, 2010
Abstract. The goal of this essay is a description of modern mathematical practice, with emphasis on differences between this and practices in the nineteenth century. I explain how and why these
differences greatly increased the effectiveness of mathematical methods and enabled sweeping developments in the twentieth century. A particular concern is the significance for mathematics education:
elementary education remains modeled on the mathematics of the nineteenth century and before, and use of modern methodologies might give advantages similar to those seen in mathematics. This draft is
about 90 % complete, and comments are welcome. 1.
functions to model and solve problems. With this learning goal in mind, Minnesota students will have the opportunity to pursue the following instructional components: • Recognize, describe, and
generalize patterns and build mathematical models to make predictions. • Analyze the interaction between quantities and/or variables to model patterns of change. • Use algebraic concepts and
processes to represent and solve problems that involve variable quantities. As biology is the science of life and physics the science of energy and matter, so mathematics is
government sponsorship are encouraged to express freely their judgement in professional and technical matters. Points of view or opinions do not, therefore, necessarily represent
To develop an informed citizenry and to support a democratic government, schools must graduate students who are numerate as well as literate.
binomial standard deviations
April 14th 2012, 04:01 PM #1
Apr 2012
French roulette, 37 pockets.
The chance that a chosen 3-number section reaches 3 standard deviations in 1000-2000 trials is 27/10000, or 0.27%.
a) Is the chance of reaching 3 standard deviations the same over 1000, 2000 or 10,000 trials?
b) What is the chance of finding some 3-number section on the wheel with 3 standard deviations? 1/10? (27/10000 times 37)
c) What is the chance of finding 3 scattered numbers on the wheel whose combined count reaches 3 standard deviations? It is bigger; how do you calculate it? The 3 numbers do not each need 3 standard
deviations: 3 scattered numbers with 2 standard deviations each can together reach 3 standard deviations.
d) Which is harder at 1000, 2000 or however many trials: one isolated number with 3 standard deviations, a 3-number section, or a group of 10 numbers in a section? Why?
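Where the quoted 27/10000 likely comes from (my reconstruction, not from the thread): under the normal approximation to the binomial, the two-sided probability of a count landing at least 3 standard deviations from its mean is about 0.0027, and by the central limit theorem this is essentially the same whether n is 1000, 2000 or 10,000, which bears on question (a):

```python
import math

def two_sided_tail(z):
    """P(|Z| >= z) for a standard normal Z, via the complementary
    error function."""
    return math.erfc(z / math.sqrt(2))

p = two_sided_tail(3)
# p is about 0.0027, i.e. the 27/10000 quoted above, for any n large
# enough that the normal approximation to the binomial applies.
```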
Survey on the Power of Non-Uniformity
Non-Uniformity is quite powerful in complexity theory.
For example: BPP is a subset of P/poly. If NP is a subset of P/poly, then the polynomial hierarchy collapses to Sigma^2.
Is there a good survey on the power/limitations of non-uniformity?
computational-complexity reference-request
1 You should also consider asking this question at [cstheory.stackexchange.com]. – Niel de Beaudrap Oct 6 '11 at 22:29
Why is this not appropriate for the complexity theory subsection of math overflow? It's asking for a survey of a research topic. – nonuniformity Oct 6 '11 at 23:31
@nonuniformity: If you're right that "this is not appropriate for the complexity theory subsection", then the only reason I could think of for that would be that essentially every general CS
theory paper was such a survey. – Ricky Demer Oct 6 '11 at 23:44
1 I think Niel's point is that there is larger complexity theory audience on cstheory and you might get more answers, not that it is not appropriate here. (pay attention to "also" in his comment). I
am not sure, but probably we have had a question similar to this on cstheory sometime ago. – Kaveh Oct 7 '11 at 1:19
ps: it is better to register an account so you can edit your posts in future. – Kaveh Oct 7 '11 at 1:20
|
{"url":"http://mathoverflow.net/questions/77403/survey-on-the-power-of-non-uniformity","timestamp":"2014-04-16T10:36:17Z","content_type":null,"content_length":"51547","record_id":"<urn:uuid:9f85c151-9b6c-40c2-a5e6-496357c7b1bf>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00122-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Calculus (update) 2nd Edition | 9780072937299 | eCampus.com
List Price: $237.15
Only one copy in stock at this price.
In Stock Usually Ships in 24 Hours.
This is a hard-to-find title. We are making every effort to obtain this item, but do not guarantee stock.
Questions About This Book?
Why should I rent this book?
Renting is easy, fast, and cheap! Renting from eCampus.com can save you hundreds of dollars compared to the cost of new or used books each semester. At the end of the semester, simply ship the book
back to us with a free UPS shipping label! No need to worry about selling it back.
How do rental returns work?
Returning books is as easy as possible. As your rental due date approaches, we will email you several courtesy reminders. When you are ready to return, you can print a free UPS shipping label from
our website at any time. Then, just return the book to your UPS driver or any staffed UPS location. You can even use the same box we shipped it in!
What version or edition is this?
This is the 2nd edition with a publication date of 4/22/2003.
What is included with this book?
• The New copy of this book will include any supplemental materials advertised. Please check the title of the book to determine if it should include any CDs, lab manuals, study guides, etc.
• The Used copy of this book is not guaranteed to include any supplemental materials. Typically, only the book itself is included.
• The Rental copy of this book is not guaranteed to include any supplemental materials. You may receive a brand new copy, but typically, only the book itself.
The wide-ranging debate brought about by the calculus reform movement has had a significant impact on calculus textbooks. In response to many of the questions and concerns surrounding this debate,
the authors have written a modern calculus textbook, intended for students majoring in mathematics, physics, chemistry, engineering and related fields. The text is written for the average student --
one who does not already know the subject, whose background is somewhat weak in spots, and who requires a significant motivation to study calculus.The authors follow a relatively standard order of
presentation, while integrating technology and thought-provoking exercises throughout the text. Some minor changes have been made in the order of topics to reflect shifts in the importance of certain
applications in engineering and science. This text also gives an early introduction to logarithms, exponentials and the trigonometric functions. Wherever practical, concepts are developed from
graphical, numerical, and algebraic perspectives (the "Rule of Three") to give students a full understanding of calculus. This text places a significant emphasis on problem solving and presents
realistic applications, as well as open-ended problems.
Table of Contents
0 Preliminaries
0.1 The Real Numbers and the Cartesian Plane
0.2 Lines and Functions
0.3 Graphing Calculators and Computer Algebra Systems
0.4 Solving Equations
0.5 Trigonometric Functions
0.6 Exponential and Logarithmic Functions
0.7 Transformations of Functions
0.8 Preview of Calculus
1 Limits and Continuity
1.1 The Concept of Limit
1.2 Computation of Limits
1.3 Continuity and its Consequences
1.4 Limits Involving Infinity
1.5 Formal Definition of the Limit
1.6 Limits and Loss-of-Significance Errors
2 Differentiation
2.1 Tangent Lines and Velocity
2.2 The Derivative
2.3 Computation of Derivatives: The Power Rule
2.4 The Product and Quotient Rules
2.5 Derivatives of Trigonometric Functions
2.6 Derivatives of Exponential and Logarithmic Functions
2.7 The Chain Rule
2.8 Implicit Differentiation and Related Rates
2.9 The Mean Value Theorem
3 Applications of Differentiation
3.1 Linear Approximations and L'Hopital's Rule
3.2 Newton's Method
3.3 Maximum and Minimum Values
3.4 Increasing and Decreasing Functions
3.5 Concavity
3.6 Overview of Curve Sketching
3.7 Optimization
3.8 Rates of Change in Applications
4 Integration
4.1 Antiderivatives
4.2 Sums and Sigma Notation
4.3 Area
4.4 The Definite Integral
4.5 The Fundamental Theorem of Calculus
4.6 Integration by Substitution
4.7 Numerical Integration
5 Applications of the Definite Integral
5.1 Area Between Curves
5.2 Volume
5.3 Volumes by Cylindrical Shells
5.4 Arc Length and Surface Area
5.5 Projectile Motion
5.6 Work, Moments, and Hydrostatic Force
5.7 Probability
6 Exponentials, Logarithms, and Other Transcendental Functions
6.1 The Natural Logarithm Revisited
6.2 Inverse Functions
6.3 The Exponential Function Revisited
6.4 Growth and Decay Problems
6.5 Separable Differential Equations
6.6 Euler's Method
6.7 The Inverse Trigonometric Functions
6.8 The Calculus of the Inverse Trigonometric Functions
6.9 The Hyperbolic Functions
7 Integration Techniques
7.1 Review of Formulas and Techniques
7.2 Integration by Parts
7.3 Trigonometric Techniques of Integration
7.4 Integration of Rational Functions using Partial Fractions
7.5 Integration Tables and Computer Algebra Systems
7.6 Indeterminate Forms and L'Hopital's Rule
7.7 Improper Integrals
8 Infinite Series
8.1 Sequences of Real Numbers
8.2 Infinite Series
8.3 The Integral Test and Comparison Tests
8.4 Alternating Series
8.5 Absolute Convergence and the Ratio Test
8.6 Power Series
8.7 Taylor Series
8.8 Fourier Series
9 Parametric Equations and Polar Coordinates
9.1 Plane Curves and Parametric Equations
9.2 Calculus and Parametric Equations
9.3 Arc Length and Surface Area in Parametric Equations
9.4 Polar Coordinates
9.5 Calculus and Polar Coordinates
9.6 Conic Sections
9.7 Conic Sections in Polar Coordinates
10 Vectors and the Geometry of Space
10.1 Vectors in the Plane
10.2 Vectors in Space
10.3 The Dot Product
10.4 The Cross Product
10.5 Lines and Planes in Space
10.6 Surfaces in Space
11 Vector-Valued Functions
11.1 Vector-Valued Functions
11.2 The Calculus of Vector-Valued Functions
11.3 Motion in Space
11.4 Curvature
11.5 Tangent and Normal Vectors
12 Functions of Several Variables and Differentiation
12.1 Functions of Several Variables
12.2 Limits and Continuity
12.3 Partial Derivatives
12.4 Tangent Planes and Linear Approximations
12.5 The Chain Rule
12.6 The Gradient and Directional Derivatives
12.7 Extrema of Functions of Several Variables
12.8 Constrained Optimization and Lagrange Multipliers
13 Multiple Integrals
13.1 Double Integrals
13.2 Area, Volume and Center of Mass
13.3 Double Integrals in Polar Coordinates
13.4 Surface Area
13.5 Triple Integrals
13.6 Cylindrical Coordinates
13.7 Spherical Coordinates
13.8 Change of Variables in Multiple Integrals
14 Vector Calculus
14.1 Vector Fields
14.2 Line Integrals
14.3 Independence of Path and Conservative Vector Fields
14.4 Green's Theorem
14.5 Curl and Divergence
14.6 Surface Integrals
14.7 The Divergence Theorem
14.8 Stokes' Theorem
|
{"url":"http://www.ecampus.com/calculus-update-2nd-smith-robert-t/bk/9780072937299","timestamp":"2014-04-18T06:05:10Z","content_type":null,"content_length":"59457","record_id":"<urn:uuid:9895226b-c41e-4df9-94e4-8e06ec145d00>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00658-ip-10-147-4-33.ec2.internal.warc.gz"}
|
undirected graph
(data structure)
Definition: A graph whose edges are unordered pairs of vertices. That is, each edge connects two vertices.
Formal Definition: A graph G is a pair (V,E), where V is a set of vertices, and E is a set of edges between the vertices E ⊆ {{u,v} | u, v ∈ V}. If the graph does not allow self-loops, adjacency is
irreflexive, that is E ⊆ {{u,v} | u, v ∈ V ∧ u ≠ v}.
See also directed graph, hypergraph, multigraph.
Note: An undirected graph may be represented as a directed graph with two directed edges, one "to" and one "from," for each undirected edge.
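As a concrete sketch of the note above (the names are illustrative, not part of the NIST entry), here is a minimal adjacency-list representation in Python, where each undirected edge {u, v} is stored as the two directed entries u -> v and v -> u:

```python
def make_graph(vertices, edges):
    """Build an undirected graph as an adjacency map: each undirected
    edge {u, v} becomes the two directed entries u -> v and v -> u."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        if u == v:
            continue  # irreflexive variant: disallow self-loops
        adj[u].add(v)
        adj[v].add(u)
    return adj

g = make_graph([1, 2, 3, 4], [(1, 2), (2, 3), (1, 3)])
# adjacency is symmetric: v is in g[u] exactly when u is in g[v]
```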
Author: PEB
Go to the Dictionary of Algorithms and Data Structures home page.
If you have suggestions, corrections, or comments, please get in touch with Paul E. Black.
Entry modified 18 October 2007.
HTML page formatted Fri Mar 25 16:20:35 2011.
Cite this as:
Paul E. Black, "undirected graph", in Dictionary of Algorithms and Data Structures [online], Paul E. Black, ed., U.S. National Institute of Standards and Technology. 18 October 2007. (accessed TODAY)
Available from: http://www.nist.gov/dads/HTML/undirectedGraph.html
|
{"url":"http://www.darkridge.com/~jpr5/mirror/dads/HTML/undirectedGraph.html","timestamp":"2014-04-19T18:25:07Z","content_type":null,"content_length":"2877","record_id":"<urn:uuid:f8d993e5-9b2a-42d5-9081-e2f319e0b0d8>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00030-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Visualizing Galois Fields
Visualizing Galois Fields May 17th, 2012
Patrick Stein
Galois fields are used in a number of different ways. For example, the AES encryption standard uses them.
Arithmetic in Galois Fields
The Galois fields of size $2^n$ for various $n$ are convenient in computer applications because of how nicely they fit into bytes or words on the machine. The Galois field $GF(2^n)$ has $2^n$
elements. These elements are represented as polynomials of degree less than $n$ with all coefficients either 0 or 1. So, to encode an element of $GF(2^n)$ as a number, you need an $n$-bit binary
For example, let us consider the Galois field $GF(2^3)$. It has $2^3 = 8$ elements. They are (as binary integers) $000$, $001$, $010$, $011$, $100$, $101$, $110$, and $111$. The element $110$, for
example, stands for the polynomial $x^2 + x$. The element $011$ stands for the polynomial $x + 1$.
The coefficients add together just like the integers-modulo-2 add. In group theory terms, the coefficients are from $\mathbb{Z}/2\mathbb{Z}$. That means that $0 + 0 = 0$, $0 + 1 = 1$, $1 + 0 = 1$,
and $1 + 1 = 0$. In computer terms, $a + b$ is simply $a$ XOR $b$. That means, to get the integer representation of the sum of two elements, we simply have to do the bitwise-XOR of their integer
representations. Every processor that I’ve ever seen has built-in instructions to do this. Most computer languages support bitwise-XOR operations. In Lisp, the bitwise-XOR of $a$ and $b$ is (logxor a
Multiplication is somewhat trickier. Here, we have to multiply the polynomials together. For example, $(x + 1) \cdot x = x^2 + x$. But, if we did $(x^2 + x) \cdot x$, we’d end up with $x^3 + x^2$. We
wanted everything to be polynomials of degree less than $n$ and our $n$ is 3. So, what do we do?
What we do, is we take some polynomial of degree $n$ that cannot be written as the product of two polynomials of less than degree $n$. For our example, let’s use $x^3 + x + 1$ (which is $1011$ in our
binary scheme). Now, we need to take our products modulo this polynomial.
You may not have divided polynomials by other polynomials before. It’s a perfectly possible thing to do. When we divide a positive integer $a$ by another positive integer $b$ (with $b$ bigger than
1), we get some answer strictly smaller than $a$ with a remainder strictly smaller than $b$. When we divide a polynomial of degree $m$ by a polynomial of degree $n$ (with $n$ greater than zero), we
get an answer that is a polynomial of degree strictly less than $m$ and a remainder that is a polynomial of degree strictly less than $n$.
Dividing proceeds much as long division of integers does. For example, if we take the polynomial (with integer coefficients) $x^3 + 2x + 5$ and divide it by the polynomial $x + 3$, we start by
writing it as:
$x + 3 \overline{\left) x^3 + 0x^2 + 2x + 5\right.}$
We notice that to get $(x+3) \cdot q(x)$ to start with $x^3$, we need $q(x)$ to start with $x^2$. We then proceed to subtract $(x+3) \cdot x^2$ and then figure out that we need a $-3x$ for the next term, and so on. We end up with $x^2 - 3x + 11$ with a remainder of $-28$ (a degree zero polynomial).
$\begin{array}{ccrcrcrcr}& & & & x^2 & - & 3x & + & 11 \\& \multicolumn{8}{l}{\hrulefill} \\(x+3) & ) & x^3 & + & 0x^2 & + & 2x & + & 5 \\& & x^3 & + & 3x^2 & & & & \\& & \multicolumn{3}{l}{\hrulefill} & & & & \\& & & - & 3x^2 & + & 2x & & \\& & & - & 3x^2 & - & 9x & & \\& & & \multicolumn{4}{l}{\hrulefill} & & \\& & & & & & 11x & + & 5 \\& & & & & & 11x & + & 33 \\& & & & & & \multicolumn{3}{l}{\hrulefill} \\& & & & & & & - & 28\end{array}$
For the example we cited earlier, we had $x^3 + x^2$, which we needed to take modulo $x^3 + x + 1$. Well, dividing $x^3 + x^2$ by $x^3 + x + 1$, we see that it goes in one time with a remainder of $x^2 + x + 1$. [Note: addition and subtraction are the same in $GF(2^n)$, since each coefficient is its own additive inverse.]
$\begin{array}{ccrcrcrcr}& & & & & & & & 1 \\& \multicolumn{8}{l}{\hrulefill} \\(x^3+x+1) & ) & x^3 & + & x^2 & + & 0x & + & 0 \\& & x^3 & + & 0x^2 & + & x & + & 1 \\& & \multicolumn{7}{l}{\hrulefill} \\& & & & x^2 & + & x & + & 1\\\end{array}$
For a reasonable way to accomplish this in the special case of our integer representations of polynomials in $GF(2^8)$, see this article about Finite Field Arithmetic and Reed Solomon Codes. In (tail-call style) Lisp, that algorithm might look something like this, to multiply $a$ by $b$ with modulus $m$:
(defun gf256-mult (a b m)
  "Multiply A and B in GF(2^8) with modulus polynomial M, e.g. #x11B."
  (flet ((next-a (a)
           (ash a -1))
         (next-b (b)
           (let ((overflow (plusp (logand b #x80))))
             (if overflow
                 (mod (logxor (ash b 1) m) #x100)
                 (ash b 1)))))
    (labels ((mult (a b r)
               (cond ((zerop a) r)
                     ((oddp a) (mult (next-a a) (next-b b) (logxor r b)))
                     (t (mult (next-a a) (next-b b) r)))))
      (mult a b 0))))
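For checking, the same shift-and-XOR loop can be transcribed into Python (a sketch, not the post's code; 0x11B is the AES modulus 100011011 discussed below):

```python
def gf_mult(a, b, m=0x11B):
    """Multiply a and b as elements of GF(2^8), reducing by the
    degree-8 modulus polynomial m (default: the AES polynomial)."""
    r = 0
    while a:
        if a & 1:        # low bit of a set: add the current b into the result
            r ^= b
        a >>= 1
        b <<= 1
        if b & 0x100:    # b overflowed past degree 7: reduce by m
            b ^= m
    return r

# (x + 1) * x = x^2 + x: 0b011 * 0b010 = 0b110, matching the worked example
```

Multiplication distributes over XOR-addition, which makes a handy sanity check.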
How is the Galois field structured?
The additive structure is simple. Using our 8-bit representations of elements of $GF(2^8)$, we can create an image where the pixel in the $i$-th row and $j$-th column is the sum (in the Galois field)
of $i$ and $j$ (written as binary numbers). That looks like this:
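Since addition is just bitwise XOR of the 8-bit representations, the table behind that image can be regenerated directly (a sketch):

```python
# Addition table for GF(2^8): the entry in row i, column j is i XOR j.
add_table = [[i ^ j for j in range(256)] for i in range(256)]

# Sanity properties of XOR-addition: the table is symmetric, the diagonal
# is all zeros (every element is its own additive inverse), and each row
# is a permutation of 0..255.
```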
Just before reading the above-mentioned article, I got to wondering if the structure of the Galois field was affected at all by the choice of polynomial you used as the modulus. So, I put together some code to try out all of the polynomials of degree 8.
Remember way back at the beginning of multiplication, I said that the modulus polynomial had to be one which couldn’t be written as the product of two polynomials of smaller degree? If you allow
that, then you have two non-zero polynomials that when multiplied together will equal your modulus polynomial. Just like with integers, if you’ve got an exact multiple of the modulus, the remainder
is zero. We don’t want to be able to multiply together two non-zero elements to get zero. Such elements would be called zero divisors.
Zero divisors would make these just be Galois rings instead of Galois fields. Another way of saying this is that in a field, the non-zero elements form a group under multiplication. If they don’t,
but multiplication is still associative and distributes over addition, we call it a ring instead of a field.
Galois rings might be interesting in their own right, but they’re not good for AES-type encryption. In AES-type encryption, we’re trying to mix around the bits of the message. If our mixing takes us
to zero, we can never undo that mixing—there is nothing we can multiply or divide by to get back what we mixed in.
So, we need a polynomial that cannot be factored into two smaller polynomials. Such a polynomial is said to be irreducible. We can just start building an image for the multiplication for a given
modulus and bail out if it has two non-zero elements that multiply together to get zero. So, I did this for all elements which when written in our binary notation form odd numbers between (and
including) 100000001 and 111111111 (shown as binary). These are the only numbers which could possibly represent irreducible polynomials of degree 8. The even numbers are easy to throw out because
they can be written as $x$ times a degree 7 polynomial.
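That search can be sketched as a brute-force factorization test in Python (illustrative, not the author's code): a degree-8 polynomial over GF(2) is reducible exactly when it is the carry-less product of a factor of degree 1 through 4 and a cofactor of degree at most 7.

```python
def clmul(a, b):
    """Carry-less multiplication: multiply a and b as polynomials over GF(2)."""
    r = 0
    while a:
        if a & 1:
            r ^= b
        a >>= 1
        b <<= 1
    return r

def irreducible(m):
    """True if the degree-8 polynomial m has no nontrivial factorization;
    any factorization must include a factor of degree 1..4."""
    return all(clmul(f, g) != m
               for f in range(2, 32)       # candidate factors of degree 1..4
               for g in range(2, 256))     # cofactors of degree at most 7

good = [m for m in range(0b100000001, 0b1000000000, 2) if irreducible(m)]
# 30 of the odd candidates survive; the first is 0b100011011, the AES modulus
```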
The ones that worked were: 100011011, 100011101, 100101011, 100101101, 100111001, 100111111, 101001101, 101011111, 101100011, 101100101, 101101001, 101110001, 101110111, 101111011, 110000111,
110001011, 110001101, 110011111, 110100011, 110101001, 110110001, 110111101, 111000011, 111001111, 111010111, 111011101, 111100111, 111110011, 111110101, and 111111001. That first one that worked
(100011011) is the one used in AES. Its multiplication table looks like:
Here it is again on the left, with the multiplication table when 110011111 is the modulus on the right:
So, the addition image provides some insight into how addition works. The multiplication tables, at least for me, provide very little insight into anything. They don’t even make a good stereo pair.
To say two of the multiplication tables have the same structure, means there is some way to map back and forth between them so that the multiplication still works. If we have table $X$ and table $Y$,
then we need an invertible function $f$ such that $f(a \cdot_X b) = f(a) \cdot_Y f(b)$ for all $a$ and $b$ in the table $X$.
What’s next?
If there is an invertible map between two multiplication tables and there is some element $a$ in the first table, you can take successive powers of it: $a$, $a^2$, $a^3$, $\ldots$. There are only $2^
n$ elements in $GF(2^n)$ no matter which polynomial we pick. So, somewhere in there, you have to start repeating. In fact, you have to get back to $a$. There is some smallest, positive integer $k$ so
that $a^{k+1} \equiv a$ in $GF(2^n)$. If we pick $a = 0$, then we simply have that $k = 1$. For non-zero $a$, we are even better off because $a^k \equiv 1$.
So, what if I took the powers of each element of $GF(2^n)$ in turn? For each number, I would get a sequence of its powers. If I throw away the order of that sequence and just consider it a set, then
I would end up with a subset of $GF(2^n)$ for each $a$ in $GF(2^n)$. How many different subsets will I end up with? Will there be a different subset for each $a$?
I mentioned earlier that the non-zero elements of $GF(2^n)$ form what’s called a group. The subset created by the powers of any fixed $a$ forms what’s called a subgroup. A subgroup is a subset of a
group such that given any two members of that subset, their product (in the whole group) is also a member of the subset. As it turns out, for groups with a finite number of elements, the number of
items in a subgroup has to divide evenly into the number of elements in the whole group.
The element zero in $GF(2^n)$ forms the subset containing only zero. The non-zero elements of $GF(2^n)$ form a group of $2^n - 1$ elements. The number $2^n - 1$ is odd (for all $n \ge 1$). So,
immediately, we know that all of the subsets we generate are going to have an odd number of items in them. For $GF(2^8)$, there are 255 non-zero elements. The numbers that divide 255 are: 1, 3, 5,
15, 17, 51, 85, and 255.
It turns out that the non-zero elements of $GF(2^n)$ form what’s called a cyclic group. That means that there is at least one element $a$ whose subset is all $2^n - 1$ of the non-zero elements. If
we take one of those $a$'s in $GF(2^8)$ whose subset is all 255 of the elements, we can quickly see that the powers of $a^3$ form a subset containing 85 elements, the powers of $a^5$ form a subset
containing 51 elements, …, the powers of $a^{85}$ form a subset containing 3 elements, and the powers of $a^{255}$ form a subset containing 1 element. Further, if both $a$ and $b$ have all 255
elements in their subset, then $a^k$ and $b^k$ will have the same number of elements in their subsets for all $k$. We would still have to check to make sure that if $a^i + a^j = a^k$ that $b^i + b^j
= b^k$ to verify the whole field structure is the same.
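These subgroup claims can be checked numerically; the sketch below is my own check, not the post's code, and it reuses the shift-and-XOR multiplication with the AES modulus:

```python
def gf_mult(a, b, m=0x11B):
    """Multiply in GF(2^8) with modulus polynomial m (the AES choice here)."""
    r = 0
    while a:
        if a & 1:
            r ^= b
        a >>= 1
        b <<= 1
        if b & 0x100:
            b ^= m
    return r

def order(a):
    """Smallest k > 0 with a^k = 1 in the multiplicative group (a nonzero)."""
    p, k = a, 1
    while p != 1:
        p, k = gf_mult(p, a), k + 1
    return k

orders = sorted({order(a) for a in range(1, 256)})
# Every order divides 255, and all eight divisors of 255 occur,
# exactly as the subgroup argument above requires.
```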
This means there are only 8 different subset of $GF(2^8)$‘s non-zero elements which form subgroups. Pictorially, if we placed the powers of $a$ around a circle so that $a^0 = 1$ was at the top and
the powers progressed around the circle and then drew a polygon connecting consecutive points, then consecutive points in the $a^3$ sequence and consecutive points in the $a^5$ sequence, etc…. we’d
end up with this:
If we don’t reorder them and just leave them in the numerical order of their binary representation, the result isn’t as aesthetically pleasing as the previous picture. Here are the same two we used
before 100011011 (on the left) and 110011111 (on the right). They are easier to look at. They do not lend much more insight nor make a better stereo pair.
*shrug* Here's the source file that I used to generate all of these images.
Last reply was 2012-05-23
1. Jan Van lent
2012-05-18
I suspect that if you use the ordering 0, a^0, a^1, … the multiplication table looks simpler since
a^i a^j = a^(i+j).
No idea what the addition table then looks like.
PS: typos: “logior”, “something like something like”
pat replied (2012-05-18):
Indeed, I should re-render the multiplication table with the cyclic ordering. And fixed the typos. Thanks.
Well explained, thanks. By the way, DEC's PDP-8 minicomputer didn't have an exclusive-or instruction, or a plain or instruction come to that. (It did have and and not, and from nand flows everything else.)
Very cool!
I learned about GF(q^n) when I was studying network coding, but I never thought about drawing out the + and * tables.
I recommend that you set “vertical-align:middle” to the math equation in your post.
In fact, you might as well use MathJax which is the best way since sliced bread and handles all kinds of LaTeX through javascript, UTF8 and STIX fonts.
pat replied:
Yes, I do need to work on the alignment of the equations. I’m not a fan of MathJax though. It takes tons of time to get all of the equations rendered on my iPhone and even on some desktop
browser (do not recall which, right now) where I thought the browser had crashed because I couldn’t scroll for minutes at a time.
4. 2012-05-23
[...] my previous article, I should have finished by remapping the multiplication and addition tables so that the [...]
Tags: Galois, Galois fields, group theory, lisp, vecto
|
{"url":"http://nklein.com/2012/05/visualizing-galois-fields/","timestamp":"2014-04-20T09:16:17Z","content_type":null,"content_length":"86730","record_id":"<urn:uuid:e3d3a495-0f97-41df-a08d-76c45a6c7764>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00270-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Writing an expression to a power with positive exponents.
Can someone explain to me why...
x^7/2 x (x) = x^9/2 ?
Re: Writing an expression to a power with positive exponents.
Originally Posted by
Can someone explain to me why...
x^7/2 x (x) = x^9/2 ?
Re: Writing an expression to a power with positive exponents.
Originally Posted by
Can someone explain to me why...
x^7/2 x (x) = x^9/2 ?
Please use * for multiplication sign...plus brackets when necessary:
x^(7/2) * x^1 = x^(7/2) * x^(2/2) = x^(7/2 + 2/2) = x^(9/2)
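A quick numeric spot-check of the rule x^a * x^b = x^(a+b) (the sample bases are arbitrary):

```python
import math

# x^(7/2) * x^1 should equal x^(9/2) for any positive base x
for x in (0.5, 2.7, 10.0):
    assert math.isclose(x ** (7 / 2) * x, x ** (9 / 2))
```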
|
{"url":"http://mathhelpforum.com/algebra/195520-writing-expression-power-positive-exponents-print.html","timestamp":"2014-04-19T03:42:12Z","content_type":null,"content_length":"5204","record_id":"<urn:uuid:90e95910-9bef-4760-8824-df91bd566c9d>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00168-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Representations of partial derivatives in thermodynamics
Representations of partial derivatives in thermodynamics Documents
Main Document
written by John R. Thompson, Corinne A. Manogue, David J. Roundy, and Donald B. Mountcastle
One of the mathematical objects that students become familiar with in thermodynamics, often for the first time, is the partial derivative of a multivariable function. The symbolic representation of a
partial derivative and related quantities present difficulties for students in both mathematical and physical contexts, most notably what it means to keep one or more variables fixed while taking the
derivative with respect to a different variable. Material properties are themselves written as partial derivatives of various state functions (e.g., compressibility is a partial derivative of volume
with respect to pressure). Research in courses at the University of Maine and Oregon State University yields findings related to the many ways that partial derivatives can be represented and
interpreted in thermodynamics. Research has informed curricular development that elicits many of the difficulties using different representations (e.g., geometric) and different contexts (e.g.,
connecting partial derivatives to specific experiments).
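As a concrete illustration of holding a variable fixed (my own example, not from the paper): for an ideal gas with V(P, T) = nRT/P, the isothermal compressibility, the partial derivative kappa_T = -(1/V)(dV/dP) taken with T held constant, works out analytically to 1/P, and a central finite difference at fixed T confirms it.

```python
import math

R = 8.314   # gas constant, J/(mol K)
n = 1.0     # amount of gas, mol

def V(P, T):
    """Ideal-gas volume as a function of pressure and temperature."""
    return n * R * T / P

P, T, h = 1.0e5, 300.0, 1.0  # evaluate at 1 bar and 300 K; h is the step in P
dVdP_at_fixed_T = (V(P + h, T) - V(P - h, T)) / (2 * h)  # T is held constant
kappa = -dVdP_at_fixed_T / V(P, T)
assert math.isclose(kappa, 1 / P, rel_tol=1e-6)
```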
Published February 6, 2012
Last Modified April 24, 2012
|
{"url":"http://www.compadre.org/per/document/ServeFile.cfm?ID=11819&DocID=2669&DocFID=4453&Attachment=1","timestamp":"2014-04-18T05:56:18Z","content_type":null,"content_length":"14525","record_id":"<urn:uuid:02f25849-65c4-47dc-9cf5-76a993b5a16c>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00294-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Code / Software Modules
Chara: A General-purpose Software Package for Characteristic Tracking on Unstructured Meshes
ELLAM: Eulerian-Lagrangian Localized Adjoint Methods
LinLite: Linear Solvers Lite
This is a compact/lite suite of C++ code for vectors, matrices, and linear solvers.
LinSys.h matrix.h vector.h
LinSys.cpp matrix.cpp vector.cpp
For solving large scale linear systems, we recommend PETSc and Trilinos.
This is an ongoing collaborative research project. Nonconforming finite elements, including the locally divergence-free (LDF) elements, are used to solve Maxwell source and eigenvalue problems on
3-dim unstructured tetrahedral meshes. Preliminary results are very encouraging: no spurious eigenvaules for eigenproblems and optimal convergence rates for source problems. About 3000 lines of C++
code have been developed and tested, and more are coming.
We would like to share our C++ code for the quadratic finite volume element methods on quadrilateral meshes for elliptic and parabolic problems. All the files listed below have been compressed into a single downloadable archive:
Headers:      Ex2Fxns.h, fve2.h, GaussQuad.h, LinSys.h, matrix.h, PtVec2d.h, Quadri2Mesh.h, subs.h, vector.h
Sources:      Ex2Fxns.cpp, fve2.cpp, LinSys.cpp, main.cpp, matrix.cpp, PtVec2d.cpp, Quadri2Mesh.cpp, subs.cpp, vector.cpp
Example data: Ex2QuadriMesh8.dat, Ex2QuadriMesh16.dat, Ex2QuadriMesh32.dat, Ex2QuadriMesh64.dat
See our recently published paper for the mathematical details:
Min Yang and Jiangguo Liu, A quadratic finite volume element method for parabolic problems on quadrilateral meshes, IMA J. Numer. Anal., 31(2011), pp.1038--1061. PDF
UTMC: Uniform Tetrahedral Meshes for Cubes (3-dim Rectangles)
I apologize if this causes confusion with the University of Texas Medical Center. TetView can be used to visualize your tetrahedral meshes. For generation of unstructured tetrahedral meshes, we
recommend Gmsh and Tetgen.
|
{"url":"http://www.math.colostate.edu/~liu/code.html","timestamp":"2014-04-17T05:04:32Z","content_type":null,"content_length":"12099","record_id":"<urn:uuid:5d0f89bf-4cba-47af-816a-7b7331ea2002>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00322-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Is there an "accepted" jamming limit for hard spheres placed in the unit cube by random sequential adsorption?
I have a unit cube, and operating in the continuum limit (i.e. not on a lattice), I sequentially place spheres of some radius $r$ inside the cube until a filled volume "jamming limit" $\theta_
{spheres}$ is achieved s.t. no further spheres can be placed inside of the cube. Is there an accepted range of values for $\theta_{spheres}$ in the literature?
For circular discs in the unit square, we can find a value of $\theta_{circles} \approx 0.547 \pm 0.003$ in the literature$^{1,2}$. See this MathOverflow question (Packing density of randomly deposited circles on a plane) and:
1. Hinrichsen, E.L., Feder, J., Jøssang, T. Geometry of random sequential adsorption. Journal of Statistical Physics 44(5-6), pp. 793-827 (1986).
2. Cadilhe, A., Araujo, N.A.M., Privman, V. Random sequential adsorption: from continuum to lattice and pre-patterned substrates. J. Phys. Cond. Mat. 19, 065124 (2007).
However, I'm having trouble finding such a range for the 3D case with spheres? Also, is there an accepted value for packing spheres of radius $r$ into a larger unit sphere?
In terms of running simulations, I seem to be hitting a "soft" wall around a value of $\theta_{spheres} \approx 0.335$ for packing $r = 0.02$ radius spheres in a unit cube. I say "soft" because
further spheres can still be added with (what seems like) an exponentially growing number of attempts at placement.
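For reference, here is a deliberately naive sketch in Python of the 3D RSA process being described (my own illustration: fixed seed, a bounded number of attempts, O(N^2) overlap checks, and an illustrative radius). A short run like this stops well below the saturation density:

```python
import math
import random

def rsa_spheres(r=0.05, attempts=5000, seed=1):
    """Random sequential addition of radius-r spheres in the unit cube:
    propose uniform centers (kept r away from the walls) and keep a
    sphere only if it overlaps none of the spheres placed so far."""
    rng = random.Random(seed)
    centers = []
    for _ in range(attempts):
        c = tuple(rng.uniform(r, 1 - r) for _ in range(3))
        if all(sum((a - b) ** 2 for a, b in zip(c, p)) >= (2 * r) ** 2
               for p in centers):
            centers.append(c)
    return centers

centers = rsa_spheres()
fraction = len(centers) * (4 / 3) * math.pi * 0.05 ** 3  # filled volume fraction
```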
geometry sphere-packing packing
1 Answer
The process you describe is usually called Random Sequential Addition (RSA). In this paper, Torquato, Uche, and Stillinger compute the saturation density up to $d=6$ (see
Table I).
For $d=3$ they have $\phi_s\approx 0.38278$ as the saturation density.
Also, I don't think anybody would have computed a value for the expected RSA saturation density for any specific size finite box. – Yoav Kallus May 19 '13 at 17:24
@Yoav Kallus Right, I just mentioned the unit cube for the sake of defining a geometry, and without defining the radius of the spheres being packed therein. – BlueLight
May 19 '13 at 18:08
The shape of the container should not make a difference in the limit of a very large container. – Yoav Kallus May 19 '13 at 18:45
@Yoav Kallus Actually that's a good point. – BlueLight May 19 '13 at 19:28
|
{"url":"http://mathoverflow.net/questions/131134/is-there-an-accepted-jamming-limit-for-hard-spheres-placed-in-the-unit-cube-by","timestamp":"2014-04-19T17:54:55Z","content_type":null,"content_length":"56599","record_id":"<urn:uuid:8892eeab-cc06-40d0-84ba-2ee293d7457f>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00598-ip-10-147-4-33.ec2.internal.warc.gz"}
|
soma cube
A puzzle of the class of spatial puzzles.
The soma cube is a cube, 3x3x3 units. It is subdivided into several unique pieces, each constructed of multiple 1x1x1 units attached.
The puzzle is to get all those pieces to fit together back into the body of 3x3x3.
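For reference, the standard Soma set has seven pieces: one bent piece of three cubes and six pieces of four cubes, so the cell counts add up to exactly the 27 unit cubes of the 3x3x3 body. A quick check (the labels are the conventional piece names, used here only as dictionary keys):

```python
# The standard Soma set: the "V" tricube plus six tetracubes.
piece_sizes = {"V": 3, "L": 4, "T": 4, "Z": 4, "A": 4, "B": 4, "P": 4}

total_cells = sum(piece_sizes.values())  # 3 + 6*4 = 27 = 3*3*3
```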
Many people go on to use the cube pieces as a sculpture, trying to form minimalist shapes, instead of forming the basic cube. Hundreds or thousands of patterns are named after the animals and things
they vaguely resemble.
The tangrams and pentominoes puzzles are somewhat related.
|
Two Staggered Finite Circular Cylinders in Cross-Flow
Abstract (Summary)
Circular cylinders in cross-flow have been extensively studied in the last century. However, there are still many unsolved problems in this area, one of which is the flow structure around two
staggered finite circular cylinders. This thesis mainly focuses on an experimental investigation of the vortex shedding characteristics of two staggered finite circular cylinders of equal diameter in
cross-flow. Wind tunnel experiments were conducted to measure the vortex shedding frequency at the mid-height of the two cylinders and along the height of the two cylinders. Two identical circular
cylinders of aspect ratio AR = 9 were partially immersed in a flat-plate turbulent boundary layer, where the boundary layer thickness to cylinder height ratio at the location of the cylinders was δ/H = 0.4. The Reynolds number based on the cylinder diameter was ReD = 2.4 × 10^4. Centre-to-centre pitch ratios of P/D = 1.125, 1.25, 1.5, 2, 2.5, 3, 4 and 5 were examined and the incidence angle was incremented in small steps from α = 0° to 180°. For each configuration of the cylinders, the vortex shedding frequency, represented in dimensionless form as the Strouhal number, St, was measured with
a single-component hot-wire anemometer. Also, a seven-hole pressure probe was used to measure the time-averaged wake velocity field behind the cylinders at selected configurations in order to get a
better understanding of the wake structure.
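For reference, the Strouhal number used above is, by its standard definition (the abstract does not restate it), St = fD/U, where f is the vortex shedding frequency, D the cylinder diameter, and U the freestream velocity. A minimal sketch with illustrative numbers, not measurements from the thesis:

```python
def strouhal(freq_hz, diameter_m, velocity_m_s):
    """Dimensionless vortex-shedding frequency St = f*D/U (standard definition)."""
    return freq_hz * diameter_m / velocity_m_s

# Illustrative values only:
St = strouhal(freq_hz=100.0, diameter_m=0.03, velocity_m_s=15.0)
# 100 * 0.03 / 15 = 0.2, a typical value for circular cylinders in cross-flow.
```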
The vortex shedding frequencies measured at the mid-height of the cylinders clearly showed the similarities and differences of vortex shedding between two staggered finite and infinite circular
cylinders. The Strouhal number behavior of the two finite circular cylinders is generally similar to that of two infinite circular cylinders, but the values of St for the two finite cylinders were
found for most cases to be smaller than the case of the infinite cylinders.
The measurements of vortex shedding frequency along the heights of each finite cylinder revealed that, for most incidence angles, the value of the Strouhal number remains constant along the height of
the cylinder, but a notable variation in the shape and strength of the vortex shedding peak along the heights of the cylinders is observed. Sharp and strong peaks in the power spectra are measured
around the mid-height of the cylinder. Broader and weaker peaks are found both at the base of the cylinder and near the free end. At several particular configurations, the vortex shedding frequency
changes along the height of the cylinder, caused by the varying flow pattern in the vertical direction.
Wake measurements showed the velocity field behind the two finite circular cylinders arranged in tandem configurations of P/D = 1.125, 2 and 5. The experimental data revealed that the flow structure
behind two finite circular cylinders arranged in a tandem configuration is much more complicated than that of the single finite circular cylinder. The downwash flow from the tip of the downstream
cylinder is weaker due to the flow interaction between the free ends of two cylinders, and this downwash flow becomes stronger with increasing P/D. A similar trend happens to the vorticity of the tip
vortex structures. However, the upwash flow behind the downstream cylinder is not strongly affected by the existence of the upstream cylinder.
Bibliographical Information:
Advisor:Sumner, D.; Bugg, J.D.; Bergstrom, D.J.; Mazurek, K.
School:University of Saskatchewan
School Location:Canada - Saskatchewan
Source Type:Master's Thesis
Keywords:circular cylinders vortex shedding
Date of Publication:02/20/2008
|
How do you add UA values for a space if you have walls and a ceiling?
jasno999 (Aerospace) 6 May 08 23:06
How do you add UA values for a space if you have walls and a ceiling? I have a space with walls built up of several materials, a ceiling, then an air gap, and a metal outer surface (call it a roof).
I know how to find the U value for the wall or the ceiling, and therefore I can determine the UA value for the wall or for the ceiling and outer surface. However, I am not sure how you find the overall UA value for the entire space. Do you simply add the wall UA value and the ceiling UA value together, or do you need to do some type of series/parallel addition?
MintJulep (Mechanical) 7 May 08 7:38
One method is to simply treat the space between the ceiling and the roof (let's call it an attic) as a space.
If you know UA of the attic floor (or room ceiling), and of the roof, then you can calculate the attic temperature.
Now you can use that temperature in the UA delta-T calculation for the room ceiling.
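MintJulep's suggestion amounts to a steady-state energy balance on the attic: the heat flowing in through the ceiling equals the heat flowing out through the roof. A minimal sketch with made-up numbers (all values illustrative, not from the thread):

```python
def attic_temperature(ua_ceiling, ua_roof, t_room, t_out):
    """Steady-state attic temperature from the energy balance
    UA_ceiling*(T_room - T_attic) = UA_roof*(T_attic - T_out)."""
    return (ua_ceiling * t_room + ua_roof * t_out) / (ua_ceiling + ua_roof)

# Illustrative numbers only:
t_attic = attic_temperature(ua_ceiling=23.0, ua_roof=46.0, t_room=80.0, t_out=20.0)
q_ceiling = 23.0 * (80.0 - t_attic)   # heat through the room ceiling
q_roof = 46.0 * (t_attic - 20.0)      # heat through the roof; equals q_ceiling
```

With these numbers the attic sits at 40 F, so the ceiling's delta-T is 40 F rather than the full 60 F to outdoors; this is exactly why the ceiling's Q must be computed with its own delta-T.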
jasno999 (Aerospace) 7 May 08 9:08
I don't think that answered the basic question. How do you get the overall UA value for the room? Is the process simply that you add the UA value you get for the wall to the UA value for the ceiling?
How about this as an example. Pretend you have a two-story building that is open and has nothing in it. Attached to one wall is a one-story garage, and between the ceiling and the roof you have a 3-inch air gap.
How do you go about finding the UA value for the building?
I know how to get a UA value across the wall to the outside, and I know how to find the UA value through the ceiling, air gap, and roof. The U value between the building and the garage should be the same as for the other walls if the wall itself has the same layers.
Granted, the Q (heat transfer) into the garage section will be lower because the temperature in that area will be greater than what you have outside - I get that.
The question becomes: how do I determine the overall UA value for the building? I thought I would just add the UA values for each area together; however, I saw an example in a book that leads me to believe that this might not be true. That example is unclear, and that is why I need some help.
vpl (Nuclear) 7 May 08 10:59
I would take the inverse of each of the UA's (1/UA), add all those, then take the inverse of that.
So 1/[1/UA(wall) + 1/UA(ceiling) + 1/UA(whatever else)]
Patricia Lougheed
Please see FAQ731-376: Eng-Tips.com Forum Policies for tips on how to make the best use of the Eng-Tips Forums.
jasno999 (Aerospace) 8 May 08 7:40
vpl - I believe what you stated is not correct. When you are looking for the overall U value, you use the equation:
U = 1 / (R1 + R2 + R3...)
However, I don't think you add the inverses of the UA values together to get an overall UA value.
Just think about the math. Say the wall UA = 45 and the ceiling UA = 23.
Your method says (1/45) + (1/23) = 1/UA, so UA = 1/0.0657 = 15.2.
I believe the process is a simple addition of 45 + 23, to give an overall room UA value of 68 in this example.
vpl (Nuclear) 8 May 08 8:26
I think we're both wrong.
I should have told you to remove the area from each UA value, then invert the U's, calculate the total U and the total area separately, and then multiply back together. For example, assume your room is a perfect cube, where one wall is 8x8 and the roof is 8x8, so each has an area of 64. Divide 45/64 to give you the U value for your wall. Do the same with the ceiling. Invert the U values, add them, and then invert again. Add your two areas. Then multiply U*A again.
Doing this, I came up with a total UA of around 30.
Patricia Lougheed
jasno999 (Aerospace) 8 May 08 9:37
Sorry, I have to disagree again. I don't think that is how it works, because the numbers just don't make sense.
When you think about it, heat transfer Q is equal to UA(Delta T).
So if your wall UA value is 45, and let's assume a delta T = 40F, then Q is 1800.
That is just for one wall. If you do the addition like you suggest, you are telling me the Q for the entire room is less than what really goes through just one wall.
Like I said, I think I answered my own question - it appears that you simply add UA values to come up with the overall UA value.
IRstuff (Aerospace) 8 May 08 12:03
No, you do not set the deltaT. You set the heat. The deltaT is a consequence of the amount of heat available.
If you only have 500 W of heat, then the deltaT is correspondingly lower.
FAQ731-376: Eng-Tips.com Forum Policies
DRWeig (Electrical) 8 May 08 13:56
Answer to the original posted question is just add 'em up. If you've calculated the UA correctly for the walls and ceiling, then UAroom = UAwalls + UAceiling.
In your case, though, it might not make sense to do this addition. Your ceiling delta-T will probably be different from your wall delta-T, so you need to keep them separate to determine total
heat load:
UAdeltaTwall + UAdeltaTceiling = UAdeltaTroom
Good on ya,
Goober Dave
jasno999 (Aerospace) 8 May 08 15:54
DRWeig - I agree with you totally.
IRstuff - I am not sure what you are talking about. The deltaT is the difference in temperature between the inside area and the outside area, or between the inside space and the garage space.
If you want to keep a house at 80F and the outdoor design temperature is 20F, then the difference between inside and outside is 60F. If you are looking at getting the UA between the space and, say, a garage area, then you need to look at all the UA values from the garage to the outside and solve for the garage temperature - then with that you can find the UA between the garage and the space.
I have the answer to the original question: the UA values, once you know them, are used to find the Q or heat transfer out of each section (wall/ceiling/garage/etc.). Then you can simply add all the Q values to find out what the overall heat transfer out of the room is.
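The procedure jasno999 arrives at (one UA and one delta-T per heat-loss path, then a sum of the individual Q values) can be sketched as follows; all the numbers are illustrative, not from the thread:

```python
# One (UA, delta_T) pair per heat-loss path; all values are made up.
surfaces = {
    "walls":   (45.0, 60.0),  # e.g. 80 F inside, 20 F outside
    "ceiling": (23.0, 40.0),  # ceiling to an attic at an intermediate temperature
    "garage":  (15.0, 25.0),  # wall to a garage warmer than outdoors
}

q_by_surface = {name: ua * dt for name, (ua, dt) in surfaces.items()}
q_total = sum(q_by_surface.values())  # total heat the HVAC system must supply
```

Keeping each path's delta-T separate, as DRWeig notes above, is the point: a single "overall UA" is only meaningful for surfaces that share the same delta-T.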
IRstuff (Aerospace) 8 May 08 16:15
The deltaT is a manifestation, i.e., the RESULT of heat flow. You cannot arbitrarily set the interior temperature without some means of making heat flows create the deltaT in question.
In your 60ºF deltaT example, if you had diamond walls, the interior temperature would rapidly drop due to the fact that the resistance to heat flow is low and there isn't enough heat
generated inside to compensate. Likewise, if your walls are made from vacuum insulation packs, the interior temperature will actually climb, since the walls cannot move the heat out.
The classic heat flow equation is about a HEAT FLOW, meaning that for the equation to remain with a static deltaT, you need a heat source that is constantly supplying the heat flow. Without
that heat source, the equation becomes transient, resulting in the eventual result of a 0ºF deltaT. So, that's consistent with poor insulation, which is that heat leaks out, and the inside
gets colder.
jasno999 (Aerospace) 8 May 08 17:28
IRstuff - Here is how I see it. I do understand what you are saying, but my goal is to maintain a specific temperature in a given space. Therefore, the steps to take would be to look at the given temperature and the outdoor air design temperatures (both hot and cold). Solving for all the UA values and adding together all of the Q values (heat transfers out of the space), you come up with an overall heat transfer in (hot day) or out (cold day).
Now that I know Q, I can determine how much heating or cooling I need to put into the space via forced air through a heating or cooling system.
vpl (Nuclear) 9 May 08 8:25
It sounds like you've made up your mind, so it doesn't matter what anyone else tells you.
However, using your method, in a hypothetical one-room house with 4 walls with a UA of 45 and a completely uninsulated roof, the total UA value would be 200. This sounds like a really good UA
value, but I sure wouldn't want to be the homeowner paying the heating bills.
It doesn't make real world sense.
Patricia Lougheed
DRWeig (Electrical) 9 May 08 9:41
jasno999 --
If you get stuck, consider posting over in the HVAC/R Engineering forum. When it comes to heating and cooling a house, there are lots of simple rules-of-thumb that can be applied, proven to
work by hundreds of millions of installations.
I agree with each answer I've seen here, from IRStuff, MintJulep, and vpl (Patricia) -- none are wrong, they're just from different perspectives. There are just easier ways to get to your end result.
Good on all of y'all,
Goober Dave
|
Pre Calculus - Math 170
Math 170, pre-calculus, is a review of algebra and trigonometry that you will use in calculus courses 180, 185, and 280.
We will review basic algebra (adding the rational roots theorem), solve compound inequalities and compose graphs of typical algebraic functions. In addition, we will cover the basic definitions of
functions as they apply in calculus.
We will review trigonometry, including solutions to trigonometric equations, the laws of sines and cosines, graph the trig and inverse trig functions and review those sum and difference identities
useful in math 180 and 185.
By the end of this course, you will
1. Use algebraic, numerical, and graphical processes to manipulate and analyze equations, inequalities, and functional relationships.
2. Formulate and analyze mathematical models for a variety of real-world phenomena and use mathematical and technological tools to determine the veracity of the model.
For a copy of the course overview, click on this link: class handout
For the assignments collected, click on this link: Graded Assignments
For your class standings, click on the following link : Grade so far
Or you may return to Mr. Smith's home page by clicking: Home Page
|
IS weak isospin conserved by all interactions?
Spontaneous symmetry breaking is not what's involved. As the book points out, any interaction that fails to preserve handedness, fails to conserve weak isospin.
So how do you reconcile this claim with the fact that if you write down the Standard Model Lagrangian before electroweak symmetry breaking, weak isospin is an exact symmetry and so must be exactly conserved?
Surely spontaneous symmetry breaking is at the heart of the matter here. Before electroweak symmetry breaking, the mass term for the electron (say) is actually a three-particle interaction between
the left-handed electron, the Higgs doublet, and the right-handed electron. So the electron can change from left-handed to right-handed as long as it emits a Higgs boson, which carries away the
conserved weak isospin. Here is an interaction that changes the handedness of an electron but manifestly conserves isospin.
If you write down the Lagrangian after spontaneous symmetry breaking things are more confusing, to me. Then there is an electron mass term that seems to violate weak isospin. There is also an
electron-Higgs interaction term that violates weak isospin. But if you add these terms together the sum conserves weak isospin.
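In symbols (standard Standard Model notation, not taken from the thread: $L$ is the left-handed lepton doublet, $H$ the Higgs doublet with vacuum expectation value $v$, $h$ the physical Higgs field, and $y_e$ the electron Yukawa coupling), the single isospin-invariant Yukawa operator produces both of those terms at once:

```latex
\mathcal{L}_Y \;=\; -\,y_e\,\bar{L}\,H\,e_R + \mathrm{h.c.}
\;\;\xrightarrow{\;H \,\to\, \frac{1}{\sqrt{2}}\binom{0}{v+h}\;}\;\;
-\,\underbrace{\frac{y_e v}{\sqrt{2}}}_{m_e}\,\bar{e}_L e_R
\;-\;\frac{y_e}{\sqrt{2}}\,h\,\bar{e}_L e_R + \mathrm{h.c.}
```

Each term on the right looks isospin-violating on its own, but their common origin on the left is why only their sum respects the symmetry.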
Also, the electromagnetic interaction fails to conserve weak isospin, the photon being a mixture of T[w] = 1 and T[w] = 0.
I think the electromagnetic interaction definitely doesn't violate weak isospin. Electromagnetism is the unbroken part of weak isospin and weak hypercharge; how can it break weak isospin or weak hypercharge?
It's true that the photon isn't an *eigenstate* of weak isospin. But despite the claims of the book you cited, I don't see how this proves anything about whether weak isospin is conserved. A
spontaneously broken symmetry isn't manifest in the particle spectrum, but the corresponding current is still conserved.
|
SAT Math Scores Reveal HUGE Gender Differences
On a previous post, I documented the statistically significant male-female test score gap for the 2008 SAT math exam, and the graph above shows that this statistically significant difference of more
than 30 points has persisted over time. Could the male-female SAT math test score gap be explained by: a) males taking more math classes than females in high school, or b) males demonstrating higher
performance in high school math classes than females, or c) male high school students having higher GPAs than female students? The answers appear to be NO, using data from the 2008 SAT report.
Table 13 below shows that female high school students dominate male students at the highest GPA levels (A+, A and A-) by wide margins, and male students dominate female students at
the lowest GPA levels (C, D, E or F). For example, there are 150 female students earning GPAs at the highest A+ level for every 100 male students, and there are 160 male students earning GPAs at the
lowest D/E/F level for every 100 female students. Further, the overall GPA for all female students (3.38) is higher than the overall GPA for male students (3.23).
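Ratios of this "per 100" form follow directly from each group's share at a GPA level. A minimal sketch with hypothetical shares (the actual percentages are in Table 13 and are not reproduced here):

```python
def per_100_males(share_female, share_male):
    """Female students per 100 male students, given each group's share."""
    return 100.0 * share_female / share_male

# Hypothetical illustration: if 60% of A+ earners are female and 40% male,
# that is 150 female students per 100 male students.
ratio = per_100_males(0.60, 0.40)
```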
Table 14 below shows that there is essentially no male-female difference for average years of math study (3.9 years for males vs. 3.8 years for females) or math GPA (3.12 for both
male and female students).
Table 15 below shows no male-female differences for: a) years of math study or b) highest level of math achieved, and shows that 54% of students taking AP/Honors math classes
are female vs. 46% male. That is, there are 117 female students taking AP/Honors math classes for every 100 male students.
Bottom Line: Female high school students are better students on average compared to male high school students, and they are equally or better prepared than males for the math SAT exam based on the
number and level of math classes taken in high school. And yet, male students score significantly higher on the SAT math test than females, and the statistically significant male-female test score
gap of more than 30 points persists over time.
Based on the statistical evidence, is there any other conclusion than this obvious one: In general and on average, male high school students in the U.S. are just plain better at math than female high
school students? If there are other reasonable conclusions, please share them.
And yet, we hear statements like this: "There just aren't gender differences anymore in math performance," says University of Wisconsin-Madison psychology professor Janet Hyde. "Stereotypes are very, very resistant to change," she says, "but as a scientist I have to challenge them with data."
|
North Bergen Math Tutor
Find a North Bergen Math Tutor
...Students learn such topics as: measurements of center (mean, median, midrange, and mode) and spread (standard deviation, interquartile range, range), basic set theory, theoretical probability,
conditional probability, independence, probability distributions (including the binomial, geometric, nor...
21 Subjects: including geometry, physics, economics, econometrics
I am a highly motivated, passionate math teacher who has taught in high performing schools in four states and two countries. I have previously taught all grades from 5th to 10th and am extremely
comfortable teaching all types of math to all level learners. I am a results driven educator who motivates and educates in a fun, focused atmosphere.
7 Subjects: including algebra 2, American history, accounting, prealgebra
...Where you can do anything with the ball except touch it with your hands. You play with 11 people on a soccer field that consist of a goalkeeper, defenders, midfielders, and forwards. All must
cooperate together in order to win and score goals.
27 Subjects: including ACT Math, probability, SAT math, algebra 1
...I use contemporary, traditional, and personal techniques depending on the needs of the student to build proficiency in comprehension of stories, understanding of grammar, and mastery of
vocabulary. In addition, I have tremendous experience teaching classes and tutoring individuals in English and...
27 Subjects: including SAT math, ACT Math, Spanish, English
...I am photography-based artist and a graduate of the International Center of Photography General Studies Program in 2012. My work featured at the Rita K. Hillman gallery at ICP.
21 Subjects: including calculus, elementary (k-6th), grammar, study skills
|
Virginia Gardens, FL Math Tutor
Find a Virginia Gardens, FL Math Tutor
...My teaching philosophy is that with a positive attitude and willingness to learn, math can be easy and fun. The success rate of my tutoring is positive and many students have improved their
grades by a letter grade or more. I worked in a wet laboratory for six years doing medical research.
18 Subjects: including geometry, study skills, biochemistry, cooking
...I have taken several University courses on films in California. I know about directors and directors' style, actors, classics from the 1930s and beyond. Having taught Precalculus for many
years, I have had great results with my students, one on one.
48 Subjects: including precalculus, chemistry, elementary (k-6th), physics
...I have worked in churches, wellness centers, gyms and yoga studios. I have been helping students for the SAT Math for the last two years. I can accommodate to the needs of the students and
give options on how to prepare better.
16 Subjects: including SAT math, algebra 1, algebra 2, chemistry
...In the past I have tutored students ranging from elementary school to college in a variety of topics including FCAT preparation, Biology, Anatomy, Math and Spanish. I enjoy teaching and
helping others and always do my best to make sure the information is enjoyable and being presented effectively...
30 Subjects: including algebra 2, biology, calculus, prealgebra
...This vigorous and challenging degree left me ready for the Finance world where I have three years of contributions and experience. My expertise in the educational and real world side of
Finance can not only help you have a much better understanding of it, but allow you to have conceptual knowled...
8 Subjects: including algebra 2, finance, algebra 1, accounting
|
Probability on the distance
Let $A$ be an $n\times n$ gaussian matrix whose entries are i.i.d. copies of a gaussian variable, and $\left\{ a_{j}\right\} _{j=1}^{n}$ be the column vectors of $A$. How to show that the probability
$\mathbb{P}\left(d\geq t\right)\leq Ce^{-ct}$ for some $c,C>0$ and every $t>0$, where $d$ is the distance between $a_{1}$ and the $n-1$-dimensional subspace spanned by $a_{2},\cdots,a_{n}$.
Thanks a lot!
Is this Gaussian variable assumed to have mean 0? – Harald Hanche-Olsen Mar 13 '10 at 13:23
Yes, we assume it has the standard normal distribution. – user4606 Mar 14 '10 at 3:46
Symmetry shows that you can suppose that $$\text{span}(a_2, \ldots, a_n) = ( x \in \mathbb{R}^n: x_1=0 ) = H.$$ Hence you just want to show that $d(a_1, H) = |A_{1,1}|$ is exponentially small - there is a closed-form expression for that: $$ P(d>t) = P(|\mathcal{N}(0,1)|>t) = \sqrt{\frac{2}{\pi}} \int_t^{\infty} e^{-\frac{x^2}{2}} dx.$$
I don't see how to use symmetry to get $$\text{span}(a_2, \ldots, a_n) = ( x \in \mathbb{R}^n: x_1=0 ) = H$$. Could you please explain a little more on that? – user4606 Mar 14 '10 at
You can see it using the concepts of conditional probability. For each possible $H$ (using Alekk's notation) there is a probability distribution of $d$ conditional on that $H$, and the
total (unconditional) distribution of $d$ is obtained from that by integrating over $H$. Now the conditional distribution of $d$ given $H$ is independent of $H$, because the joint
probability distribution of all the $a$'s is invariant under rotation. So integrating over $H$ is unnecessary – just use one fixed $H$. – Harald Hanche-Olsen Mar 14 '10 at 4:40
If you want $\ell_1$ distance, I think the formula (if one can be found) will be rather complex, and its derivation horrendously so. You might experiment with a computer algebra system
and see how it works out for small values of $n$. – Harald Hanche-Olsen Mar 14 '10 at 21:53
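The identification of $d$ with $|\mathcal{N}(0,1)|$ in the accepted answer is easy to check numerically. A minimal pure-Python sketch (the dimension, trial count, and seed are arbitrary choices of mine):

```python
import math
import random

rng = random.Random(0)

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def distance_to_span(a1, others):
    """Distance from a1 to span(others), via modified Gram-Schmidt."""
    basis = []
    for v in others:
        w = list(v)
        for q in basis:
            c = dot(w, q)
            w = [wi - c * qi for wi, qi in zip(w, q)]
        norm = math.sqrt(dot(w, w))
        if norm > 1e-12:
            basis.append([wi / norm for wi in w])
    r = list(a1)
    for q in basis:
        c = dot(r, q)
        r = [ri - c * qi for ri, qi in zip(r, q)]
    return math.sqrt(dot(r, r))

n, trials = 15, 1000
dists = []
for _ in range(trials):
    cols = [[rng.gauss(0.0, 1.0) for _ in range(n)] for _ in range(n)]
    dists.append(distance_to_span(cols[0], cols[1:]))

# If d is distributed as |N(0,1)|, then P(d > 1) = 0.3173...
frac = sum(d > 1.0 for d in dists) / trials
```

The empirical fraction of trials with d > 1 lands near the half-normal value 0.3173, consistent with the rotational-invariance argument in the comments.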
|
Statistics theory
Statistics theory/Related Articles
From Citizendium, the Citizens' Compendium
See also changes related to Statistics theory, or pages that link to Statistics theory or to this page or whose text contains "Statistics theory".
Parent topics
Related topics
|
Growing Patterns
Students explore growing patterns. They analyze, describe, and justify their rules for naming patterns. Since students are likely to see growing patterns differently, this is an opportunity to engage
them in communicating about mathematics.
Start this lesson by reading a counting book of your choice. (Ten Black Dots by Donald Crews is especially appropriate, but any book which uses a "count on by 1" strategy will work.) Then ask
students to tell what happened in the book. Next, tell the students that in this lesson they will explore patterns that grow according to a rule. Display the following growing pattern (without the number labels):
Ask, "What will come next in this pattern?" [Students may find this question easier to answer if they copy the pattern onto paper.] Have the students explain how they got the answer. When someone has
given the correct answer, write the number of dots in each row. Solicit student responses to add additional rows to this pattern and label them. Ask the students if they know a name for this pattern
and the rule they would use to add more rows to the pattern.
Next display the pattern below and tell the students this is called an L pattern. Ask students how each L is changing. After students state the answer, or as a hint, write the number of dots used
below each L. Ask several students to state the rule they would use to add more figures to the pattern. Call on students to draw the next three L shapes in the pattern.
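For reference, the dot counts in the L pattern are the odd numbers: the nth L uses n dots in the vertical arm plus n - 1 in the base, i.e. 2n - 1 dots. A quick check of the rule described above:

```python
def l_dots(n):
    """Dots in the nth L figure: a vertical arm of n plus a base of n - 1."""
    return 2 * n - 1

counts = [l_dots(n) for n in range(1, 8)]
# Matches the rule students state: each new L adds one dot at the top
# and one dot at the bottom, so consecutive counts differ by 2.
```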
Distribute the Growing Pattern activity sheet to students.
Growing Patterns Activity Sheet
Have them add three more steps in the pattern and write a number pattern to match the figures. Ask students to share their shape and number patterns, explaining how they identified the pattern.
• Chart Paper with Growing Patterns
• Paper
• Crayons
Collect students’ Growing Patterns activity sheets.
Questions for Students
1. What will come next in this pattern (first growing pattern)? How do you know?
[There will be 5 dots. I used the counting numbers.]
2. What is a name for this pattern?
[Counting on or counting numbers.]
3. What is the rule?
[You have to add one more to each row.]
4. How many dots are in the first figure of the L pattern? How many are in the second figure? The third?
[The first L figure has 1 dot. The second figure has 3 dots. The third figure has 5 dots.]
5. How is each L changing?
[Each L has two more dots.]
6. What is the rule for the L pattern?
[Add one dot at the top and add one dot at the bottom of the next L.]
7. How many dots in the next three L shapes in the pattern?
[The next L figures will contain 9, 11, and 13 dots.]
8. Have you ever seen this pattern before?
[It is the set of odd numbers.]
9. How long could we continue this pattern?
[We could keep going forever.]
10. What will be the next three figures in the triangle growing pattern (from student activity sheet)?
[They will be 5 triangles, 6 triangles, and 7 triangles.]
11. What number pattern did you use to describe the pattern?
[Possible answers include 1, 2, 3, 4, 5, 6, 7 (number of triangles); 3, 5, 7, 9, 11, 13, 15 (number of sides); 3, 4, 5, 6, 7, 8, 9 (number of vertices/corners). Accept all reasonable answers.]
Teacher Reflection
• Were students able to analyze and describe growing patterns? If so, what extension activities are appropriate now?
• Were students able to write number patterns to match the growing patterns?
• What other examples of growing patterns could I use in this lesson or for continued practice?
• Did I encourage students to explain and defend their thinking?
In this lesson students seriate objects and review the meaning of ordinal numbers. They describe orderings in words and in pictures. [This lesson gives you an opportunity to review or teach
vocabulary such as before, after, and next.] At the conclusion of the lesson, students make an entry in their portfolios. A Science extension is suggested.
[At this point you may wish to make pretty pasta for the students to use in this unit. Simply place uncooked pasta of various shapes in a plastic bag; add a few drops of food color and a few drops of
rubbing alcohol. Shake the bag until the pieces are coated, then spread them out to dry.]
Students sort objects and symbols and make patterns with sorted objects. They make Venn diagrams and use their sortings to create linear patterns. They extend a pattern created by the teacher.
Students will begin identifying pattern cores and reading patterns. A Social Studies connection is suggested as an extension.
Pre-K-2, 3-5
In this lesson, students make patterns with objects, read patterns and find patterns in the environment. They should be encouraged to classify patterns by type (i.e. AAB, ABC). They continue learning
about patterns by extending a given pattern, identifying missing elements in a pattern, and recording a pattern.
Students use objects and symbols to make repeating linear patterns. They extend patterns and translate patterns from one modality (auditory, visual, and kinesthetic) to another. A Physical Education
connection is suggested as an extension. This lesson is intended to take two class periods to ensure that all students have multiple opportunities to create original patterns.
Students extend their knowledge of linear patterns by recognizing and discussing familiar patterns. Students make auditory and visual patterns from names. An art activity is suggested as an extension.
Students explore patterns which involve doubling. They use objects and numbers in their exploration and record them using a table.
Pre-K-2, 3-5
Students make and extend numerical patterns using hundred charts. They also explore functions at an intuitive level. This lesson integrates technology.
This final lesson reviews the work of the previous lessons. Students explore patterns in additional contexts and record their investigations. Students will rotate through center activities. Teachers
may add other centers they feel will benefit the students.
Learning Objectives
Students will:
• Extend growing patterns
• Describe growing patterns
• Analyze how growing patterns are created
Common Core State Standards – Mathematics
Grade 4, Algebraic Thinking
• CCSS.Math.Content.4.OA.C.5
Generate a number or shape pattern that follows a given rule. Identify apparent features of the pattern that were not explicit in the rule itself. For example, given the rule "Add 3" and the
starting number 1, generate terms in the resulting sequence and observe that the terms appear to alternate between odd and even numbers. Explain informally why the numbers will continue to
alternate in this way.
|
Carlo W. J. Beenakker Quotes : FinestQuotes.com
... mathematicians are much more concerned for example with the structure behind something or with the whole edifice. Mathematicians are not really puzzlers. Those who really solve mathematical
puzzles are the physicists. If you like to solve mathematical puzzles, you should not study mathematics but physics!
Carlo W. J. Beenakker
|
Semiprime (but not prime) ring whose center is a domain
The center of a prime ring is a domain and the center of a semiprime ring is reduced.
Now I have no evidence to believe that if the center of a semiprime ring R is a domain,
then R has to be a prime ring. So I'm looking for some examples of semiprime rings R
with these properties: R is not prime and the center of R is a domain.
This is not important but it'd be nice if the center of R is not too small.
Thanks for reading
noncommutative-algebra ra.rings-and-algebras
I find it preposterous to completely answer a technical question providing an explanation and references, and get exactly ZERO response. – Victor Protsak Aug 5 '10 at 16:42
1 Answer
A whole class of examples of this kind can be obtained from prime ideals in enveloping algebras with the same central character. I will sketch the construction in the case of primitive
ideals in simple Lie algebras, but these conditions can be considerably relaxed.
Let $\mathfrak{g}$ be a complex simple Lie algebra of rank at least $2$ (i.e. not $\mathfrak{sl}_2$), $U(\mathfrak{g})$ its universal enveloping algebra, $I_1, I_2$ two incomparable
primitive ideals with the same infinitesimal character, and $I=I_1\cap I_2$ their intersection. Then $A=U(\mathfrak{g})/I$ is semiprimitive, and hence semiprime. By the assumption, $I_1$ and
$I_2$ intersect $Z(\mathfrak{g})$ at the same maximial ideal, so $Z(A)=\mathbb{C},$ which is a domain. To get a larger center, you can repeat this construction with incomparable prime ideals
whose intersection with $Z(\mathfrak{g})$ is the same non-maximal prime ideal of the latter ring.
If you know representation theory of simple Lie algebras, here is an explicit construction of a pair of ideals with these properties: let $\lambda$ be an integral dominant weight, choose two different simple reflections $s_i, i=1, 2$ in the Weyl group, and let $I_i=\text{Ann}\ L(s_i*\lambda)$ be the annihilator of the simple highest weight module with highest weight $s_i(\lambda+\rho)-\rho.$ The ideals $I_1$ and $I_2$ have the same infinitesimal character by the Harish-Chandra isomorphism and they are incomparable by the theory of the $\tau$-invariant: $\tau(I_i)=\{s_i\}$, but the $\tau$-invariant is compatible with the containment of primitive ideals.
Everything except for the $\tau$-invariant is explained in Dixmier's "Enveloping algebras", and you can find the rest in Borho–Jantzen's or Vogan's old papers (you need the main property of
the $\tau$-invariant stated above) or read Jantzen's book about the enveloping algebras (in German) for the whole story.
|
suppose box has V = 100cc, bottom area 50 cm^2; find height
#3 Height of a box:
Suppose that a box has a volume of 100 cubic centimeters and that the area of the bottom of the box is 50 square centimeters. Find the height of the box.
Re: suppose box has V = 100cc, bottom area 50 cm^2; find height
daphnel wrote:Suppose that a box has a volume of 100 cubic centimeters and that the area of the bottom of the box is 50 square centimeters. Find the height of the box.
Hint: What is the formula for the volume V of a box with height h, length L, and width w? What is the area of the base of this box? What happens if you divide...?
|
Turing : entry for the Routledge Encyclopaedia of The Philosophy of Science (2005)
This entry on Alan Turing was first submitted to the editors in 2001. It has been revised slightly for publication in 2005.
The work of the British mathematician, Alan Turing, stands as the foundation of computer science. Turing's 1936 definition of computability remains a classic paper in the elucidation of an abstract
concept into a new paradigm. His 1950 argument for the possibility of Artificial Intelligence is one of the most cited in modern philosophical literature. These papers, his best known, have led his
contributions to be defined as theoretical. But his work was highly practical, both in codebreaking during World War II and in the design of an electronic digital computer. Indeed
Turing's expression for the modern computer was 'Practical Universal Computing Machine,' a reference to his 1936 'universal machine.' This combination of theory and practice meant that Turing's work
fitted no conventional category of 'pure' or 'applied.' Likewise his life involved many contradictions. Detached from social and economic motivations, and perceived as an eccentric, apolitical,
unworldly innocent, he was swept into a central position in history as the chief scientific figure in the Anglo-American mastery of German communications.
The matter of Mind
Amidst this complexity there is one constant theme: Turing's fascination with the description of mental action in scientific terms. His computational model can be seen as a twentieth-century
renovation of materialist philosophy, with a claim that the discrete state machine is the appropriate level of description for mental states. However it did not begin in that way: Turing's interest
in the material embodiment of mind comes first in a private letter of about 1932 (Hodges 1983, page 63) in which he alluded to the newly elucidated quantum-mechanical nature of matter, and then,
influenced by Eddington, speculated on 'Will' as a physical effect in a region of the brain. It was after studying von Neumann's axioms of quantum mechanics, then Russell on the foundations of
mathematics, that he learnt of logic, and so of the question that was to make his name.
The question, proposed by Hilbert, but transformed through the 1931 discovery of Kurt Gödel, was that of the decidability of mathematical propositions. Is there a definite method or procedure which
can (in principle) be applied to a mathematical proposition and which will decide whether it is provable? Turing learnt of this outstanding Entscheidungsproblem from the lectures of the Cambridge
mathematician Max Newman. The difficulty of the question was that it demanded an unassailable definition of the concept of method, procedure, or algorithm. This is what Turing supplied in 1936
through the definition of the Turing machine. Specifically, he modelled the action of a human being in following a definite method, either through explicit instructions, or through following a
sequence of 'states of mind.'
Turing's definition came shortly after another elucidation of 'effective calculability' by the American logician Alonzo Church. Thus in a narrow sense Turing was pre-empted. Church's definition
turned out to be mathematically equivalent to Turing's definition of computability. But Church (and Gödel) agreed that Turing's definition gave an intuitively compelling account of why this
definition encompassed 'effectiveness'.
The Turing machine breaks down the concept of a procedure into primitive atomic steps. The exact details are somewhat arbitrary and nowadays other equivalent formulations are often used. The
essential point is that the machines should have finite descriptions (as 'tables of behaviour'), but be allowed unlimited time and space for computation.
Church's thesis, that his definition of effective calculability would capture any natural notion of that concept, became the Church-Turing thesis, and opened a new area for mathematical 'decision
problems.' But the work had much wider consquences: Turing's bold modelling of 'states of mind' opened a new approach to what are now called the cognitive sciences. It is often asserted that
modelling the mind with a computer shows the influence of a dominant technology, but in fact Turing's work went in the reverse direction. For a striking and visionary aspect of Turing's paper was his
definition of a 'universal' Turing machine, which led to the computer. A machine is universal if it can read the 'table of behaviour' of any other machine, and then execute it. This is just what a
modern computer does, the instructions in programs being equivalent to 'tables of behaviour'. It was essential in Turing's description that the instructions must be stored and read like any other
form of data; and this is the idea of the internally stored program. It is now hard to study Turing machines without the programmers' mind-set, and to remember that when Turing formulated them,
computers did not exist.
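The stored-program idea — instructions stored and read like any other form of data — can be made concrete with a minimal sketch: one fixed interpreter loop that will execute any 'table of behaviour' handed to it as data. This is an illustrative reconstruction in Python, not Turing's own notation; the names `run` and `flipper` are ours:

```python
# A fixed "universal" interpreter: the loop below never changes,
# only the table of behaviour supplied to it as data does.
from collections import defaultdict

def run(table, tape_input, start="q0", halt="halt", max_steps=10_000):
    """Execute a machine description. `table` maps (state, symbol) to
    (symbol_to_write, move, next_state); '_' is the blank symbol."""
    tape = defaultdict(lambda: "_")        # blank-filled, unbounded tape
    for i, ch in enumerate(tape_input):
        tape[i] = ch
    head, state = 0, start
    for _ in range(max_steps):
        if state == halt:
            break
        write, move, state = table[(state, tape[head])]
        tape[head] = write
        head += 1 if move == "R" else -1
    # Read back the non-blank portion of the tape.
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# One "table of behaviour": flip every bit, halting at the first blank.
flipper = {
    ("q0", "0"): ("1", "R", "q0"),
    ("q0", "1"): ("0", "R", "q0"),
    ("q0", "_"): ("_", "R", "halt"),
}
print(run(flipper, "0110"))  # -> 1001
```

Swapping in a different table changes what the system computes without touching the interpreter, which is the point of Turing's universal machine: the program is just more data.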
Turing and machines
Turing's work, being based on answering Hilbert's question, modelled the human being performing a methodical computation. However the imagery of the teleprinter-like 'machine' was striking. Newman
(1955), in stressing the boldness of this innovation, said that Turing embarked on 'analyzing the general notion of a computing machine.' This was criticised by Gandy (1988) as giving a false
impression of Turing's approach. But Newman's account of the flavour of Turing's thought should not be entirely discounted; Turing certainly became fascinated by machines and engineered machines with
his own hands in 1937-9. Church also makes it clear that the notion of a computing machine was current at this time. Church (1937) wrote (while Turing was working with him at Princeton) that Turing
... proposes as a criterion that an infinite sequence of digits 0 and 1 be "computable" that it shall be possible to devise a computing machine, occupying a finite space and with working
parts of finite size, which will write down the sequence to any desired number of terms if allowed to run for a sufficiently long time. As a matter of convenience, certain further restrictions
are imposed on the character of the machine, but these are of such a nature as obviously to cause no loss of generality — in particular, a human calculator, provided with pencil and paper and
explicit instructions, can be regarded as a kind of Turing machine.
Yet neither Turing nor Church analyzed the general concept of a computing machine with 'working parts.' Turing (1939), giving a definitive statement of the Church-Turing thesis, used the expression
'purely mechanical' without further analysis. Only in 1948 did he give some more discussion.
This topic has recently been made controversial by B. J. Copeland, who holds that the Church-Turing thesis is widely misunderstood, because it was never intended to apply to machines. Copeland
overlooks Church's characterisation of computability, as quoted above, which assumes that all finitely defined machines fall within the scope of computability. To support his claim, Copeland points
to the 'oracle' defined by Turing (1939), which supplies an uncomputable function, and holds that it gives a broader characterization of computation. But the whole point of Turing's oracle is that it
facilitates the mathematical exploration of the uncomputable. The oracle, as Turing emphasised, cannot be a machine. It performs non-mechanical steps. His 'oracle-machines,' defined so as to call
upon 'oracles', are not purely mechanical.
Turing's oracle is related to Gödel's theorem, which seems to show that the human mind can do more than a mechanical system, when it sees the truth of formally unprovable assertions. Turing described
this as mental 'intuition'. The oracle, as Newman (1955) interpreted it, can be taken as a model of 'intuition.' But Turing left open the question of how intuition was to be considered as actually
embodied. He was not at this stage committed to the computability of all mental acts, as came to be his position after 1945. His 1936 work had considered the mind only when applied to a definite
method or procedure. Turing had to resolve this question before embarking on his Artificial Intelligence program.
Copeland has gone even further and has described Turing's 'oracle' as heralding a new revolution in computer science, illustrating 'what Turing imagined' by sketching an oracle supposed to operate by
measuring a physical quantity to infinite precision. But Turing's oracle models what machines cannot do, and the question for him was, and always remained, whether machines can do as much as minds.
He did not suggest the opposite idea, stated in Copeland (1997), that 'among a machine's repertoire of atomic operations there may be those that no human being unaided by machinery can perform.'
The historical question of what Turing thought must naturally be distinguished from scientific truth. It is a serious (and unanswered) question as to whether actual physical objects do necessarily
give rise to computable effects. Nowadays we demand a closer analysis of 'finite size.' Gandy (1980) arrived at conclusions that generally support Church's assumptions: the limitations of
computability follow from quite general assumptions about the construction of a machine. But if the constraint of finiteness is interpreted so as to allow a 'machine' with infinitely many
sub-components built on smaller and smaller scales, or working faster and faster without limit, then it is easy to show that Turing's computability can be surpassed in a finite time. In any imaginary
universe such a construction might be possible; such examples may therefore be said to show that the Church-Turing thesis has a physical content.
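A standard version of the accelerating-machine argument alluded to above runs as follows (a sketch, not a construction Turing gave): suppose an imaginary device performs its $n$th atomic step in time $2^{-(n+1)}$ seconds. Then the total time for infinitely many steps is

$$\sum_{n=0}^{\infty} 2^{-(n+1)} = 1,$$

so the device completes an infinite computation within one second. Such a device could, for instance, simulate a given Turing machine 'to completion' and report whether it ever halts, which no Turing machine can do in general. The construction violates no logic, only (apparently) physics, which is why the finiteness constraint gives the Church-Turing thesis its physical content.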
'Effective' means 'doing' (as opposed to postulating or imagining), thus depending on some concept of realistic action, and hence on physical law. Quantum computation has already shown that the
classical picture of 'doing' is incomplete. The nature of quantum mechanics, still mysterious in its non-local and 'reduction' properties, means that there may yet be more to be found out, and in
recent years the work of Penrose (1989, 1994), who like Turing focuses on the mind and brain, has drawn new attention to this question.
Turing's practical machinery
It is perhaps surprising that Turing himself did not in this 1936-39 period say anything about the physics of the 'machine' concept, in view of his interest in quantum mechanics. He might have done
so but for the war. War disrupted Turing's investigation of the uncomputable, along with his conversations with Wittgenstein on the foundations of mathematics (Diamond 1976) — never extended, as many
now wish they had been, into the philosophy of mind. But war work gave Turing an intimate acquaintance with the power of algorithms and with advanced technology for implementing them. For Turing
became the chief scientific figure at Bletchley Park, the British cryptanalysis centre and a key location in modern history.
Turing had anticipated this development in 1936. He had applied his ideas to 'the most general code or cipher', and in fact one of the machines he made himself was to implement a particular cipher
system (Hodges 1983, page 138). This, together with his Cambridge connection with influential figures such as J. M. Keynes, may explain why he was the first mathematician brought into the
codebreaking department .
Turing transformed the Government Code and Cypher School with the power of scientific method. Turing's logic and information theory, applied with advanced engineering, achieved astounding feats, with
particular effect in decrypting the U-Boat Enigma signals for which he was personally responsible. By 1944 the power reliability and speed of electronic technology showed Turing that his universal
machine could be implemented. The plethora of advanced algorithms employed in cryptanalysis also supplied ample practical motivation.
In 1945, Turing was appointed to the National Physical Laboratory, with the commission of designing an electronic computer. Turing's plans soon emerged as the ACE proposal (Turing 1946). Again,
Turing was pre-empted by work in the United States, for the EDVAC report of 1945 had preceded his own publication. Turing's plans were however independent, more detailed, and more far-reaching.
Furthermore, a recent survey (Davis 2000) suggests that von Neumann needed his knowledge of Turing's work when shaping the EDVAC. As a point of interest in the history of science, none of the
mathematical leaders — Turing, von Neumann, Newman — clearly defined the stored program concept or its debt to symbolic logic in treating instructions as data. Turing never published the book on the
theory and practice of computation that he might have done, and so neglected his own good claim to be the inventor of the computer.
It is an unobvious fact, long resisted, now familiar, that more complex algorithms do not need more complex machines, only sufficient storage space and processor speed. This was Turing's central
idea. In a world familiar with the power of a universal machine we can better appreciate the remark in (Turing 1946) that '...every known process has got to be translated into instruction table form
at some stage...' Turing emphasised that arithmetical calculations were only one aspect of the computer's role — partly the influence of non-numerical Bletchley Park work, but more deeply, his base
in symbolic logic. His hardware design was probably impractical in detail, but he far surpassed von Neumann in seeing the significance of software, and that this would use the computer itself, a fact
now familiar in compilers and editors:
The work of constructing instruction tables should be very fascinating. There need be no real danger of it ever becoming a drudge, for any processes that are quite mechanical may be turned over
to the machine itself.
Turing's insight into programming by modifying instructions led to the idea of simulating learning, training and evolution by an extension of these ideas. As he put it, human intelligence required
something other than 'discipline', namely 'initiative.' By 1945 Turing had convinced himself that human faculties of an apparently non-mechanical nature did not require explanation in terms of the
uncomputable. Turing was well aware of the paradox of expecting intelligence from a machine capable only of obeying orders. But he believed that with sufficient complexity machines need not appear
'mechanical' as in common parlance.
A crucial point is that Turing had by this stage formulated an argument from human 'mistakes' to explain why Gödel's theorem did not show the existence of an uncomputable human intuition; indeed a
sentence in (Turing 1946) shows the early influence of this view in his project for Artificial Intelligence (Hodges 1997). He now expected self-modifying machines to exhibit the apparently
'non-mechanical' aspects of mental behaviour.
With this as his strategic goal, Turing sketched networks of logical elements, to be organised into function by 'training' or by an analogy with evolution. Although he did not develop his specific
schemes, he anticipated the 'connectionist' approach to Artificial Intelligence. Nowadays the term 'non-algorithmic' is confusingly used for systems where the program is implicitly developed, rather
than explicitly written by a programmer. Turing was however quite clear that such operations still lie within the realm of the computable. All these developments were sketched in (Turing 1948). He
also in this paper gave his only systematic account of the concept of 'machine'. In doing so he introduced the possibility of random elements into the Turing machine model, but he made no reference
to a need for uncomputable elements in randomness. Indeed he indicated that pseudo-random elements, clearly computable, would suffice.
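Pseudo-random elements of the kind Turing indicated are indeed computable: a fully deterministic rule can produce a stream that looks random. A linear congruential generator makes the point — this is a modern textbook construction chosen for illustration, not one Turing specified:

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """A deterministic (hence computable) pseudo-random stream:
    x_{n+1} = (a * x_n + c) mod m, scaled into [0, 1)."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m

gen = lcg(seed=42)
print([next(gen) for _ in range(3)])  # same seed always gives the same stream
```

The determinism is the point: restarting with the same seed replays the identical sequence, so nothing uncomputable is involved, yet the stream serves wherever a 'random element' is wanted in a machine.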
Turing's Intelligent Machinery
Disappointed with the lack of progress at the NPL, Turing moved in 1948 to Manchester. There Michael Polanyi stimulated Turing to write for a more general audience on the question of whether machines
could in principle rival the human mind. The idea in (Turing 1950) was to cut through traditional philosophical assumptions about Mind, by the thought-experiment now known as the Turing Test, but
which he called 'the imitation game'. A human and a programmed computer compete to convince an impartial judge that they are human, by textual messages alone. Turing's position was that thought or
intelligence, unlike other human faculties, is capable of being fairly tested through a communication channel like a teleprinter.
Critics have raised questions about non-verbal communication, cultural assumptions, animal intelligence, and other issues. Turing's principal defence against all such arguments was that we can only
judge the intelligence of other humans by making such external comparisons, and that is unfair to impose more stringent criteria on a computer. But it may be held that, when addressing human
consciousness with moral seriousness, there is something inadequate about a definition of intelligence that depends upon deceit. Turing also confused the issue by introducing the 'imitation game'
with a poor analogy: a parlour game in which a man has to pretend to be a woman under the same conditions of remote questioning. In such a game imitation proves nothing, so the analogy is misleading
and has confused many readers. However, Turing's drama has the merit of expressing the full-bloodedness of his program. His wit has attracted lasting popular interest. Turing's references to gender
have also fascinated cultural critics, who speculate widely over biographical and social issues in their commentaries (Lassègue 1998).
A dryer but stronger feature of (Turing 1950) lies in Turing's setting out of the level of description of the discrete state machine, and his emphasis on explaining computability and the universal
machine. Critics who point out that the brain is not structured like a computer miss his essential point that any algorithm can be implemented on the computer. This applies to explicit algorithms,
and to those arrived at by processes as in neural networks; Turing described both. Another strength of Turing's paper lies in his advocating both approaches, never seeing programming as standing in
opposition to the modelling of intelligence by learning and adaptation. Artificial Intelligence research has tended towards division between the two camps dominated by expert systems and neural
networks, but recently hybrid approaches have appeared. Thus Turing's ideas still have force. Also futuristic was Turing's prophecy that by the end of the century 'one will be able to speak of
machines thinking without expecting to be contradicted.'
It is probably true to say that this prophecy was not fulfilled in 2000, but Turing was prepared to take this risk:
The popular view that scientists proceed inexorably from well-established fact to well-established fact, never being influenced by any unproved conjecture, is quite mistaken. Provided it is made
clear which are proved facts and which are conjectures, no harm can result. Conjectures are of great importance since they suggest useful lines of research.
Turing's conjecture was that the brain's action is computable. If this is true, then it is hard to refute his argument that given sufficient storage space, a computer can perform the function of the
mind. Turing emphasised this argument by putting a figure on the storage capacity of the brain. But he still did not directly address that underlying question of whether physical objects necessarily
have computable behaviour. Artificial Intelligence research has generally accepted without question the assumption that they do. But Penrose, linking the still ill-understood 'reduction' in quantum
mechanics with Gödel's theorem, has thrown the spotlight back on the problems that Turing himself found most perplexing.
Turing's unfinished work
Turing (1951) did refer to the problem of quantum mechanics when giving a popular radio talk. As Copeland (1999) has noted, he gave a more qualified assertion of the computability of mental action
than in Turing (1950). But this was not because of believing oracles to reside in the brain; it was because of quantum unpredictability. Harking back to his schooldays reading, he referred to Eddington in connection with the mechanism of the brain. This brief allusion may explain
why Turing thereafter gave fresh attention to quantum theory. His friend, student and colleague Gandy (1954) wrote in a letter to Newman of Turing's ideas, in particular of his attention to the
question of the 'reduction' or 'measurement' process and the 'Turing paradox' that according to standard theory, continuous observation should prevent a system from evolving.
These ideas were not developed into publication. They were curtailed by his suicide in 1954. One of many ironies of Turing's life, lived for science, was that he suffered in 1952-53 a 'scientific'
treatment by oestrogen supposed to negate his homosexuality. This treatment was the alternative to prison after his arrest in February 1952. His openness and defiance did not command the admiration
of authority, and there was at least one further crisis when he found himself under watch. One may regret that he did not write more of his own sense of liberty and will. Indeed it is remarkable that
he, so original and unconventional, should champion the possibility of programming the mind. But he left few hints as to the personal dilemmas encountered on the roads to freedom. He was of course
constrained by intense state secrecy, his brain the repository of the most highly guarded Anglo-American secrets.
Besides his new enquiries in physics, he had a large body of incomplete theory and computational experiment in mathematical biology. This, neglected until the 1970s, is now the foundation of a lively
area of non-linear dynamics. Turing described his theory as intended to oppose 'The Argument from Design.' It was a theme parallel to (and through his interest in brain function connected with) his
quest for a new materialism of mind. He had not exhausted his ideas, and their impact has not yet been fully absorbed.
Church, A. (1937) Review of Turing 1936-7, Journal of Symbolic Logic, 2, 42-43.
Copeland, B. J. (1997), The Church-Turing thesis, in E. N. Zalta (ed.), Stanford Encyclopaedia of Philosophy, http://plato.stanford.edu
Copeland, B. J. (1999) A lecture and two radio broadcasts on machine intelligence by Alan Turing, in K. Furukawa, D. Michie and S. Muggleton (eds.), Machine Intelligence 15, 445-476 (Oxford: Oxford
University Press)
Davis, M. (2000) The universal computer (New York: Norton)
Diamond, C. (ed.) (1976) Wittgenstein's lectures on the foundations of mathematics, Cambridge 1939, (Hassocks, UK: Harvester Press)
Gandy, R. O. (1954) letter to M. H. A. Newman. Included in the Mathematical Logic volume of the Collected Works.
Gandy, R. O. (1980) Principles of Mechanisms, in The Kleene Symposium, eds. J. Barwise, H. J. Keisler and K. Kunen (Amsterdam: North-Holland)
Gandy, R. O. (1988) The confluence of ideas in 1936, in (Herken 1988)
Herken, R. (ed.) (1988) The universal Turing machine: a half-century survey (Berlin: Kammerer und Unverzagt, Oxford: Oxford University Press)
Hodges, A. (1983) Alan Turing: the enigma (London: Burnett, New York: Simon & Schuster; new editions London, Vintage 1992, New York: Walker 2000).
Hodges, A. (1997) Turing, a natural philosopher (London: Phoenix; also New York: Routledge, 1999). Included in: The Great Philosophers (eds. R. Monk and F. Raphael, London: Weidenfeld and Nicolson)
Lassègue, J. (1998) Turing (Paris: Les Belles Lettres)
Newman, M. H. A. (1955), Alan M. Turing, Biographical Memoirs of Fellows of the Royal Society, 1, 253-263
Penrose, R. (1989) The emperor's new mind (Oxford, New York: Oxford University Press)
Penrose, R. (1994) Shadows of the mind (Oxford, New York: Oxford University Press)
Turing, A. M., The Collected Works of A. M. Turing: Pure Mathematics, ed. J. L. Britton, (1993); Mechanical Intelligence, ed. D. C. Ince, (1993); Morphogenesis, ed. P. T. Saunders, (1993); Mathematical
Logic, eds. R. O. Gandy and C. E. M. Yates, (2001). (Amsterdam: North-Holland)
Turing, A. M., ed. B. J. Copeland (2004) The Essential Turing (Oxford: Oxford University Press)
Turing, A. M. (1936-7), On computable numbers, with an application to the Entscheidungsproblem, Proceedings of the London Mathematical Society, ser. 2, 42, 230-265
Turing, A. M. (1939), Systems of Logic defined by Ordinals, Proceedings of the London Mathematical Society, ser. 2, 45, 161-228
Turing, A. M. (1946), Proposed electronic calculator, unpublished report for National Physical Laboratory; published in A. M. Turing's ACE Report of 1946 and other papers (eds. B. E. Carpenter and R.
W. Doran, Cambridge, Mass.: MIT Press, 1986)
Turing, A. M. (1948), Intelligent machinery, report for National Physical Laboratory; script available at http://www.turingarchive.org; published (ed. D. Michie) in Machine Intelligence 5, 3-23 (1969).
Turing, A. M. (1950), Computing machinery and intelligence, Mind 59, 433-460
Turing, A. M. (1951) BBC Radio talk, script available at http://www.turingarchive.org; also in Copeland (1999)
Thiel College
Course Offerings
PHYS 123: Astronomy
3 CH / Offered Every Fall / WIC
General introduction to astronomy, open to all students. The course focuses on observation of the night sky, history of astronomy, modern views of the universe, star composition and development,
structure and fate of the universe, astronomical instruments, interaction between astronomy and physics, accomplishments and expectations of space exploration. Viewing the sky is weather dependent.
The course can be taken at any time and there are no prerequisites. The course satisfies the natural/physical non-lab science requirements of Depth and Diversity of the IR for either the B.A. or B.S.
degrees. It is an evening class.
PHYS 154: Introductory Physics I (non-calculus)
4 CH / Lab Fee / WIC
A non-calculus course for students enrolled in academic disciplines not requiring or recommending calculus-based physics as part of their respective programs. Topics to be covered include vectors,
forces, motion, Newton's laws, work, energy, fluids, elasticity, oscillations, waves and theory of heat. Three lecture periods and one three-hour laboratory each week. This course may be held in
conjunction with PHYS 174, but assignments and tests are different. Offered fall of even-numbered years.
PHYS 164: Introductory Physics II (non-calculus)
4 CH / Lab Fee / WIC
A continuation of PHYS 154, also non-calculus. Topics to be covered include electricity, magnetism, and optics. Three lecture periods and one three-hour laboratory each week. This course may be held
in conjunction with PHYS 184, but assignments and tests are different. (P: PHYS 154 or permission of instructor) Offered spring of odd-numbered years.
PHYS 174: Introductory Physics I (calculus-based)
4 CH / Offered Every Fall / Lab Fee / WIC
Foundation course for students majoring in physics or binary engineering or enrolled in other academic disciplines requiring or recommending calculus-based physics as part of their respective
programs. Topics to be covered are vectors, forces, motion, Newton's laws, work, energy, fluids, elasticity, oscillations, waves and theory of heat. Three lecture periods and one three-hour
laboratory each week. (P or corequisite: Calculus I)
PHYS 184: Introductory Physics II (calculus-based)
4 CH / Offered Every Spring / Lab Fee / WIC
A continuation of PHYS 174. Topics to be covered include electricity, magnetism and optics. Three lecture periods and one three-hour laboratory each week. (P: PHYS 174 or permission of instructor and
corequisite: Calculus II).
PHYS 194: Alternative Energies
4 CH / Lab Fee / WIC
This course examines the generation and use of energy in modern technological societies. Some basic principles of physics concerning the concept of energy and a variety of heat engines are
introduced. Conventional energy sources like coal, oil, gas and nuclear energy are discussed. Alternative sources of energy examined are solar, wind, biomass, hydropower and geothermal energy.
Strategies for energy conservation and the implications of alternative energies on transportation are discussed. Finally, the connection between energy uses and air pollution and other global effects
is examined. Three hour lecture, three hour lab weekly. The course is accepted as a laboratory course for the IR. (P: MATH 107 or equivalent) Offered on an irregular basis.
PHYS 213: Analog Electronics
3 CH / Offered Every Spring / Lab Fee
This course is laboratory based. It begins at a level suitable for those with no previous exposure to electronics, but with basic knowledge of electricity. The treatment is largely non-mathematical
with an emphasis on hands-on experience. This course involves circuits with diodes, transistors, operational amplifiers and power supplies. This course is independent of PHYS 243 (Digital
Electronics). It is suitable for students in the natural and computer sciences and binary engineering. Two three-hour laboratory afternoons per week. (P: PHYS 164 or PHYS 184)
PHYS 223: Thermophysics
3 CH / Offered Every Fall
The course introduces the fundamental ideas of heat, work and internal energy, reversibility and entropy, enthalpy, Maxwell's relations and conversion of heat into work in an engine. Application of
thermodynamics in physics, chemistry and engineering and an introduction to statistical physics are presented. (P: PHYS 174, P or corequisite: Calculus II) Offered fall semester, as needed.
PHYS 243: Digital Electronics
3 CH / Offered Every Fall / Lab Fee
Digital Electronics is laboratory based. It begins at a level suitable for those with no previous exposure to electronics or the theory of electricity. The course is largely non-mathematical with an
emphasis on hands-on experience. Basic elements of the course are digital logic, Boolean algebra, logic gates and networks, logic families, flip-flops, clocks, registers, counters and memories. The
course can be taken independently of PHYS 213 (Analog Electronics), and is suitable for physics, binary engineering and computer science students. Two three-hour laboratory afternoons per week.
PHYS 253: Statics and Dynamics
3 CH / Offered Every Fall
This course introduces the student to the concepts of internal and external forces, equilibrium, structures, friction, the moment of inertia and systems of forces. These concepts are applied to
mechanical structures and devices which are typical components of engineering designs like bridges, joints, gears, etc. The dynamics section covers particle kinematics of a rigid body. (P: PHYS 174;
P or corequisite Calculus II)
PHYS 263: Modern Physics
3 CH / Offered Every Fall
Basic concepts of classical physics: the electron, electromagnetic radiation, the classical theory vs. quantum effects, and the Rutherford-Bohr model of the atom. Multi-electron atoms. Basic concepts
of quantum mechanics without rigorous mathematical formalism. Structure of nuclei, radioactivity, particle and high-energy physics, and special relativity. (P: PHYS 174, 184)
PHYS 343: Electromagnetic Fields and Waves
3 CH / Offered Every Spring
Properties of dielectric and magnetic materials. Solutions for static electric and magnetic fields under a wide variety of conditions. Time-dependent solutions of Maxwell's equations. Radiation and
wave propagation. Oriented towards engineering applications. (P: PHYS 184, Calculus II) Offered spring semester, as needed.
PHYS 353: Intermediate Lab
3 CH / Offered Every Spring / Lab Fee / WIC
This course is designed to expose junior and/or senior students to advanced methods of experimental physics. Students will perform a variety of experiments involving electrical measurements,
cryogenics, vacuum systems, microwave measurements, plasma physics, thermodynamics, atomic physics, nuclear physics and optics. Two three-hour laboratory/lecture periods per week. (P: PHYS 263)
PHYS 363: Mathematical Physics
3 CH / Offered Every Spring
A course in mathematical methods in physics: Matrices and determinants; selected ordinary and partial differential equations; and Fourier series and integrals, complex numbers and special functions.
This course is designed primarily for physics majors, mathematics majors, and binary engineering students. (P: PHYS 174, 184, P or corequisite: Differential Equations)
PHYS 414: Cooperative Education
1-4 CH / Offered Every Semester
PHYS 424: Seminar and Senior Research
2-4 CH / Offered Every Semester / WIC
An introduction to the literature, teaching and research methods in physics. Preparation and presentation of papers on selected topics from the current literature of physics. Education students
majoring in physics may attend the seminar in their junior year concentrating on preparation and presentation of topics related to the teaching of physics. A technical report on a special problem
based on library as well as laboratory and/or computational research. The student will be expected to report on his or her project findings as the senior comprehensive examination. May be taken as an
extended course. (P: Consent of department chair)
Math Forum Discussions
Topic: PUT DOWN CALCULATORS, AND A RESPONSE
Replies: 0
Posted: May 22, 1998 11:27 AM
New York Post, Thursday, May 21, 1998
[Note: David Gelernter is a professor of computer science at Yale
University and author
of "Drawing Life," among other books. He is The Post's new Thursday columnist.
Look for his commentary every week in this space.
http://www.nypostonline.com/commentary/2735.htm ]
PUT DOWN THAT CALCULATOR, STUPID!
By DAVID GELERNTER
CALCULATORS should be banned from American elementary schools. We have
deeper educational problems, but calculators are interesting because they
pose a
concrete policy choice. We could kick them out tomorrow if we wanted to; the
cost would be zero, and the education establishment couldn't stop us if we'd
made up our minds. We won't do it, but we ought to. The practical gain would
be large, the symbolic value even greater.
If you hand a child a calculator, you must take care that it is used
judiciously or the result is catastrophic: an adult who can't do basic
arithmetic. Such a person is condemned to stumble through life's numeric
moments in a haze.
The National Council of Teachers of Mathematics has a position paper
recommending "the integration of the calculator into the school mathematics
program at all grade levels in class work, homework and evaluation." Most
schools reject this bad advice and use calculators only occasionally: students
work some problems by hand and use calculators for the rest.
From its perch on the sidelines, the calculator subtly undermines the whole
math curriculum. (Walking to school isn't bad if you do it every day - but if
you sometimes ride, walking can start to seem like a pain.) And "once the
calculator goes on," says Mike McKeown, a geneticist at the Salk Institute in
San Diego, "the brain goes off, no matter what we hope." McKeown is a
co-founder of "Mathematically Correct," a group that lobbies for common sense
in math education.
My generation of schoolchildren mostly learned the times tables in second
grade. (Japanese children still do.) You can't proceed to long multiplication
and division, and fractions and decimals, without knowing the times tables.
But at the school my kids attend, which seems fairly typical for Connecticut,
students don't master the times tables until fourth grade. These children burn
lots of class hours in second and third grades learning something other than
basic arithmetic; have they mastered some marvelous new kind of mathematics?
Not so you'd notice.
It appears that, mostly, they've spent the extra time learning how to mouth
off, which they were pretty good at already. Along the way they've cranked out
the occasional essay about the larger role of mathematics in society, but
they'd have more to say on this topic if they knew what mathematics was.
Teachers and principals who defend calculators make this argument:
Calculators are cheap, handy and accurate. To the extent we allow children to
rely on them, teachers needn't waste time on basic arithmetic - and can
proceed faster and deeper into more advanced terrain.
As most parents realize, this is complete nonsense.
If you haven't mastered basic arithmetic by hand, you can't do arithmetic at
all - with or without calculators. Calculators are reliable but people aren't;
they hit wrong keys. You can't solve a problem unless you start with a general
idea of the right answer. Otherwise you don't catch your errors, and you and
your calculator are a menace.
But suppose you're perfect; you never hit wrong keys. Even so, if you can't
do arithmetic manually you can't do it mentally; and you will need to do rough
mental arithmetic all the time. Is there time to do this before that? What
year was he born, how long ago did that happen, when will I arrive, how much
cash will that leave me, what do I tip, is this a bargain or an outrage? You
encounter such problems shopping, strolling, driving, lying on the beach,
waiting at McDonald's, paying the cab driver - yes you could whip out your
calculator on such occasions, and you could skip learning how to drive and
simply consult the owner's manual each time you needed to make a right turn;
but is that what we want for our children?
We're told (in effect) "you can leave the easy problems to your calculator;
the advanced stuff you'll really learn." Which is clearly upside-down. Common
sense suggests that you master the basic material and look up the advanced
stuff. Most people have no use for "mathematical concepts" anyway - arithmetic
yes, group theory no. For the others, the theory that "real math" has nothing
to do with arithmetic is wrong - engineers and hard scientists are invariably
intimate with numbers. They have to be. So if you don't go on in math, basic
arithmetic is crucial. Whereas if you do go on in math, basic arithmetic is crucial too.
It comes down to this: Knowledge you can "look up" is knowledge you don't
have. To be educated is to master a body of facts and skills and have them
on-call 24 hours a day, as you talk and walk and read and work and garden and
scheme and think. You can't master everything, but after many centuries of
mulling we are agreed on a time-tested basic agenda - reading and writing and
history; basic arithmetic.
Our education establishment is deeply confused. Recently, Carol Innerst of
the Washington Times investigated teacher training in today's ed schools;
teachers-to-be, she discovered, are taught how to "think like children." Back
in real life, adults don't need to think like children; children need to think
like adults. That's what education is for.
The yawning chasm between ed-school doctrine and common sense has already
swallowed up (to our national shame) a whole generation of American kids. Big
reforms are needed, but the electronic calculator perfectly captures what the
struggle is about. When you hand children an automatic, know-it-all crib
sheet, you undermine learning - obviously. So let's get rid of the damned
things. Professional educators are leading us full-speed towards a world of
smart machines and stupid people.
Copyright (c) 1998, N.Y.P. Holdings, Inc. All rights reserved.
Topic No. 11
Date: Fri, 22 May 1998 06:53:03 -0700
From: ruthp@pacificrim.net (Ruth Parker)
To: amte@csd.uwm.edu
Subject: Re: K-16: Yale Prof on Calculators and Ed Profs (fwd)
Message-ID: <v01540b00b18b309d028a@[199.236.228.152]>
In his New York Post diatribe against mathematics education, Dr. Gelernter
states, "Teachers and principals who defend calculators make this argument:
Calculators are cheap, handy and accurate. To the extent we allow children
to rely on them, teachers needn't waste time on basic arithmetic - and can
proceed faster and deeper into more advanced terrain... We're told (in
effect) 'you
can leave the easy problems to your calculator; the advanced stuff you'll
really learn.'"
I would like to know Dr. Gelernter's sources. I know of no mathematics
educator who would make such a claim. It is certainly not a position that
I've ever heard from the National Council of Teachers of Mathematics
(NCTM). If he's going to be a regular columnist who comments on education,
I hope Dr. Gelernter will soon do his homework. If he looks at any of the
elementary mathematics programs recently developed to support NCTM-based
reform efforts, he will clearly see that work with number facts still plays
a predominant role at the primary level. To suggest otherwise is simply
irresponsible. Many mathematics educators, who have thought deeply about
this issue, would agree that the ready availability of calculators and
computers makes number sense and facility with numbers (large and small)
even more important, not less so.
As for having memorized his multiplication facts in the 2nd grade, I'm
curious to know when and where Dr. Gelernter went to school. I clearly
remember memorizing my multiplication facts. Mrs. LeMaster taught them to
me and she was my 4th grade teacher. And I'm too old to have experienced
the 1960's "new math" movement.
I hope Dr. Gelernter will read the National Council of Supervisors of
Mathematics' latest monograph titled "Future Basics: Developing Numerical
Power." It is a far more accurate representation of the position taken by
many within the mathematics education community than are many of Dr.
Gelernter's inflammatory accusations. I'm sure he can locate the document
at NCSM's web site: forum.swarthmore.edu/mcsm.
Ruth Parker
Jerry P. Becker
Dept. of Curriculum & Instruction
Southern Illinois University
Carbondale, IL 62901-4610 USA
Fax: (618)453-4244
Phone: (618)453-4241 (office)
E-mail: JBECKER@SIU.EDU
One more slice of PI
Re: One more slice of PI
Message #4 Posted by Valentin Albillo on 26 Nov 2004, 8:54 a.m.,
in response to message #3 by Larry Corrado
Larry posted:
"I don't think I've ever heard of a program that lets you find 6-digits of pi starting anywhere without finding earlier digits."
Have a look at this
The caveat is, it only works for hexadecimal digits of Pi. No such formulae are known or believed to exist for base-10 digits, though they actually exist for some transcendental constants, such as
log(9/10), where the corresponding formula has been used recently to compute its ten billionth decimal digit (US 'billions', i.e., 10^9).
This also applies to other bases and constants, such as base 2, for which you can compute individual digits of log(n), for n=2,3,5,7,11,13,17,19,29,31,37,41,43,61,73,... ...,18837001, 22366891, ...
Alas, no such luck for base-10 and Pi.
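For reference, the hexadecimal digit-extraction scheme being alluded to is the Bailey-Borwein-Plouffe (BBP) formula, pi = sum over k of 16^(-k) * (4/(8k+1) - 2/(8k+4) - 1/(8k+5) - 1/(8k+6)). A sketch of the digit-extraction trick in Python (our illustration, not part of the original post; plain double precision limits it to modest positions):

```python
def pi_hex_digit(n):
    """Hexadecimal digit of pi at fractional position n (pi = 3.243F6A88..., so n=1 gives 2)."""
    def s(j):
        # fractional part of sum over k of 16**(n-1-k) / (8k + j),
        # using three-argument pow for modular exponentiation on the head terms
        total = 0.0
        for k in range(n):
            total = (total + pow(16, n - 1 - k, 8 * k + j) / (8 * k + j)) % 1.0
        k, term = n, 1.0
        while term > 1e-17:  # tail terms shrink by a factor of 16 each step
            term = 16.0 ** (n - 1 - k) / (8 * k + j)
            total += term
            k += 1
        return total % 1.0

    x = (4 * s(1) - 2 * s(4) - s(5) - s(6)) % 1.0
    return int(16 * x)

print(pi_hex_digit(1), pi_hex_digit(2), pi_hex_digit(3), pi_hex_digit(4))  # 2 4 3 15
```

The point of the formula is exactly what the post describes: the head of each series is reduced modulo 1 with modular exponentiation, so digit n is obtained without computing digits 1 through n-1.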
Best regards from V.
arithmetic mean
<mathematics> The mean of a list of N numbers calculated by dividing their sum by N. The arithmetic mean is appropriate for sets of numbers that are added together or that form an arithmetic series.
If all the numbers in the list were changed to their arithmetic mean then their total would stay the same.
For sets of numbers that are multiplied together, the geometric mean is more appropriate.
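A minimal sketch of both definitions (the helper names are ours):

```python
import math

def arithmetic_mean(xs):
    # sum-preserving average: replacing each number by this value keeps the total unchanged
    return sum(xs) / len(xs)

def geometric_mean(xs):
    # product-preserving average, appropriate for numbers that are multiplied together
    return math.prod(xs) ** (1.0 / len(xs))

nums = [2.0, 4.0, 8.0]
am = arithmetic_mean(nums)   # 14/3
gm = geometric_mean(nums)    # cube root of 64, i.e. 4
assert abs(am * len(nums) - sum(nums)) < 1e-12        # total stays the same
assert abs(gm ** len(nums) - math.prod(nums)) < 1e-9  # product stays the same
```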
Last updated: 2007-03-20
Try this search on Wikipedia, OneLook, Google
Nearby terms: ARI Service « ARITH-MATIC « Arithmetic and Logic Unit « arithmetic mean » arity » arj » Arjuna
Copyright Denis Howe 1985
Diagnosing and Treating Bifurcations in Perturbation Analysis of Dynamic Macro Models
Finance and Economics Discussion Series: 2007-14
Diagnosing and Treating Bifurcations in Perturbation Analysis of Dynamic Macro Models
Keywords: bifurcation, perturbation, relative price distortion, optimal monetary policy
In perturbation analysis of nonlinear dynamic systems, the presence of a bifurcation implies that the first-order behavior of the economy cannot be characterized solely in terms of the first-order
derivatives of the model equations. In this paper, we use two simple examples to illustrate how to detect the existence of a bifurcation. Following the general approach of Judd (1998), we then show
how to apply l'Hospital's rule to characterize the solution of each model in terms of its higher-order derivatives. We also show that in some cases the bifurcation can be eliminated through
renormalization of model variables; furthermore, renormalization may yield a more accurate first-order solution than applying l'Hospital's rule to the original formulation.
JEL Classification: C63; C61; E52.
In recent analysis of nonlinear dynamic macroeconomic models, the characterization of their first-order dynamics has been an important step for understanding theoretical implications and evaluating
empirical success. However, the presence of a bifurcation in perturbation analysis of nonlinear dynamic systems implies that the first-order behavior of the economy cannot be characterized solely in
terms of the first-order derivatives of the model equations.
In this paper, we use two simple macroeconomic models to address several issues regarding bifurcations. In particular, the bifurcation problem emerges in conjunction with the price dispersion
generated by staggered price setting on the part of firms. We then show how to apply l'Hospital's rule to characterize the solution of each model in terms of its higher-order derivatives. We also
show that in some cases the bifurcation can be eliminated through renormalization of model variables; furthermore, renormalization may yield a more accurate first-order solution than applying
l'Hospital's rule to the original formulation.
Before presenting our results, it is noteworthy that our definition of bifurcation is distinct from the one analyzed in Benhabib and Nishimura (1979). In particular, their analysis on bifurcation is
associated with time evolution of dynamic systems. However, our concern with bifurcation arises in the process of approximating nonlinear equations, as discussed in Judd (1998).
We proceed as follows. Section 2 describes the two examples and illustrates how to detect the existence of a bifurcation problem. Section 3 follows the general approach of Judd (1998) and applies
l'Hospital's rule to characterize the first-order behavior of each model. Section 4 shows how the bifurcation can be eliminated through renormalization of model variables. Section 5 concludes.
This section discusses how we can detect the existence of bifurcation in two simple economies. In both models, Calvo-style price setting behavior of firms can be summarized by the following law of
motion for the relative price distortion:
where the distortion index is defined as The parameters and represent the percentage of firms that cannot change their price in each period and the elasticity of substitution across goods,
respectively. The variable is the gross inflation rate of the price index aggregated over firms.
To discuss the issue of bifurcation, we have to close the model with another equation. In the first example, we simply assume that inflation follows an exogenous stochastic process,
where the logarithm of follows a mean zero process. We can rationalize this process in terms of monetary policy by a version of strict inflation targeting around the exogenous process or a version of
strict output-gap targeting in a model with cost-push shocks.
By combining the two equations, we now have a single-equation model:
Since this equation is backward looking, this exact nonlinear form can be used for any dynamic analysis. However, we suppose that we have to rely on approximation methods to analyze this model as
would be the case when there are forward-looking equations.
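Equation (2.1) itself does not render in this extract. Purely as an illustration, the sketch below iterates the standard Calvo distortion recursion from the literature, Delta_t = (1 - alpha) * ((1 - alpha * pi_t^(eps-1)) / (1 - alpha))^(eps/(eps-1)) + alpha * pi_t^eps * Delta_{t-1}; both the functional form and the notation (alpha for the share of non-adjusting firms, eps for the elasticity of substitution) are assumptions on our part, not taken from the paper. It numerically confirms the Woodford (2003) and Benigno and Woodford (2005) point discussed in the next paragraph: the log of the distortion index is second order in net inflation.

```python
import math

def distortion_ss(pi, alpha=0.75, eps=6.0, T=2000):
    """Iterate the (assumed) standard Calvo distortion recursion to its steady state."""
    d = 1.0
    for _ in range(T):
        reset = ((1.0 - alpha * pi ** (eps - 1.0)) / (1.0 - alpha)) ** (eps / (eps - 1.0))
        d = (1.0 - alpha) * reset + alpha * pi ** eps * d
    return d

# zero net inflation leaves the distortion index at its steady-state value of one
assert abs(distortion_ss(1.0) - 1.0) < 1e-12

# second-order behavior: halving the net inflation rate roughly quarters log(Delta)
ratio = math.log(distortion_ss(1.001)) / math.log(distortion_ss(1.0005))
assert 3.8 < ratio < 4.3
```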
Woodford (2003) and Benigno and Woodford (2005) pointed out that, when deviations of the (net) inflation rate from its zero steady state are of first order in terms of exogenous variations,
deviations of the distortion index from one are of second order. Based on this observation, one can naturally approximate the system with respect to the square root of the logarithm of the relative
price distortion index. Note that this distortion index is unity at the steady state with zero inflation rate. We follow the convention of using lower cases for log deviations,
Specifically, corresponds to the approximation in Woodford (2003) and Benigno and Woodford (2005). It will also be shown in Section 4 that can be used as the basis of an alternative approximation.
Under the choice of as the approximation variable, (2.2) can be rewritten as follows:
Now let's see what happens if we try a Taylor approximation of this system with respect to and . It is easy to see that the derivative with respect to the endogenous variable would be zero at the
steady state. Based on this zero derivative, we can diagnose the bifurcation problem in this case. Put another way, the implicit function theorem cannot be applied when the derivative with
respect to the endogenous variable is zero.
It is worth seeing what happens if we feed this case into computer codes commonly available for dynamic macroeconomic analysis. The Dynare package (version 3.05) produces a message
saying `Warning: Matrix is singular to working precision', and AIM (developed by Gary Anderson and George Moore, and widely used at the Federal Reserve Board) returns a code indicating `Aim: too many
exact shiftrights'. The routine developed by Christopher Sims (gensys.m) ends without any output or error message.
The second example is a case with multiple equations. Our example is a prototypical Calvo-style sticky-price model, and the optimal policy problem is to maximize the household welfare subject to the
following four constraints: the law of motion for relative price distortions, the social resource constraint, the firms' profit maximization condition, and the present-value budget constraint of the
household. However, it is shown in Yun (2005) that the optimal policy problem can be reduced to minimizing the index for relative price distortion (2.1). At the optimum, we have the following
Therefore, the solution to the optimal policy problem can be represented with the following bivariate nonlinear system:
As in the single-equation case, we start with a normalization according to which and are endogenous variables and is exogenous:
where is the net inflation rate. When there are multiple equations in the system, the assumption of the implicit function theorem involves the non-singularity of the Jacobian. Computing the
determinant for the Jacobian, we have Since the Jacobian is singular, the implicit function theorem cannot be applied and the regular perturbation method does not work. We need to rely on the
bifurcation method.
As explained in Judd (1998) and Judd and Guu (2001), the bifurcation problem can be resolved by using l'Hospital's rule.
To understand the approximated behavior of in the single-equation example, we need to compute and where is defined as an implicit function as follows:
In cases for which regular perturbation analysis could be applied, the first-order approximation of would come from the implicit function theorem as follows: The number in the parenthesis indicates
the order of approximation. However, the assumption of the implicit function theorem does not hold in our case since . We need to adopt an advanced asymptotic method--the bifurcation method in this
case.
Noting that the derivatives in the numerators are also zero at the steady state, we apply l'Hospital's rule to the two ratios in the form of and obtain the following first-order approximation:
This is an example of the transcritical bifurcation.
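For readers unfamiliar with the technique, the canonical one-variable illustration of a transcritical bifurcation (following Judd and Guu (2001); generic, not specific to the model above) runs as follows:

```latex
Solve $H(x,\varepsilon)=0$ for $x(\varepsilon)$ near the origin, where
\[
  H(x,\varepsilon) = \varepsilon x - x^{2}.
\]
The implicit function theorem fails, since $H_x(0,0)=0$, and formal
differentiation of $H(x(\varepsilon),\varepsilon)=0$ gives the $0/0$ form
\[
  x'(\varepsilon) = -\frac{H_\varepsilon}{H_x} = \frac{x}{\,2x-\varepsilon\,}.
\]
Applying l'Hospital's rule (differentiating numerator and denominator
with respect to $\varepsilon$) yields
\[
  x'(0) = \frac{x'(0)}{\,2x'(0)-1\,},
\]
with solutions $x'(0)=0$ and $x'(0)=1$: the two solution branches
$x(\varepsilon)=0$ and $x(\varepsilon)=\varepsilon$ that cross at the
bifurcation point.
```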
In this single-equation model, it is easy to avoid the bifurcation problem when we consider the following equation that is equivalent to (2.3),
The derivative becomes nonzero, so the assumption of the implicit function theorem is satisfied. However, we still have to use l'Hospital's rule in computing the derivative with respect to the
exogenous variables: and .
To illustrate how we can invoke the bifurcation method in the multi-dimensional example, we substitute the second equation in (2.5) into the first to obtain:
Were the assumptions of the bifurcation theorem to hold, then differentiation of the implicit expression with respect to would produce the equation
However, since both derivatives on the right-hand side are zero at the steady state, we need to apply l'Hospital's rule to compute . The first-order solution for is
and the second-order accurate expression for inflation is
Note that the dependence of on is purely quadratic (i.e. the zero coefficient for the linear term) around the steady state with zero inflation rate.
The presence of bifurcations is not only related to the economic model in hand, but also to the choice of the variable with respect to which the Taylor approximation is applied. This section shows
that the bifurcation can be eliminated through renormalization of model variables; furthermore, renormalization may yield a more accurate first-order solution than applying l'Hospital's rule to the
original formulation.
In the single-equation setting, if we can approximate the model with respect to and instead of and , then the bifurcation problem would not emerge. To see this, rewrite (2.2) as follows:
With this renormalization, the second-order Taylor approximation of yields the second-order solution for the endogenous variable: This choice of expansion variable implies that, when the initial
relative price distortion is of first--rather than purely second--order, the current relative price distortion is also of first order. That is, the relative price distortion is of the same order of
magnitude as the shocks. This equation differs from what we would obtain by squaring both sides of (
) because the renormalization leads to the presence of the term.
Under this renormalization, the expression for the relative price distortion is richer--and more accurate--than (3.6) derived using l'Hospital's rule. Another renormalization that produces a solution
similar to (3.6) is to approximate with respect to (instead of ). This alternative way is based on the interpretation that the initial relative price distortion is of second order. Specifically, we
rewrite the model as
and the second-order behavior of the endogenous variable becomes purely quadratic, Since this expression is purely second order, it is consistent with the results under the timeless perspective--a la
Woodford (2003) and Benigno and Woodford (2005)--that the relative price distortions are zero when we focus solely on the first-order approximation.
In the multi-dimensional case, the two ways of renormalization would correspond to
and Either way, the determinant of the Jacobian is nonzero, and the implicit function theorem can be applied. The computer codes written for the regular perturbation methods would work.
According to the first renormalization, the second-order approximation of (2.1) is
and the logarithmic transformation of (
) is Therefore, the second-order solution of this problem would be
It is noteworthy to point out that, according to this renormalization, the first-order relationship between inflation and relative price distortions ( ) replicates the exact nonlinear relationship (
The alternative renormalization consistent with the timeless perspective is to adopt (instead of ) as an exogenous variable. Based on this choice of an expansion parameter, Woodford (2003) concluded
that the optimal inflation rate is zero to the first order in the absence of cost-push shocks. Under this normalization, the two model equations are approximated as follows:
The second-order solution to this system of equations would be purely quadratic
The first-order approximation of this solution is consistent with the optimality of zero inflation, as derived in the linear-quadratic approximation by Woodford (2003), Benigno and Woodford (2005),
and Levine, Pearlman and Pierse (2006). Furthermore, the second-order solution for inflation is equivalent to the one via the bifurcation method, (
After presenting two different renormalizations, it is natural to compare approximation errors for these two methods. For this purpose, we use as a reference point the closed-form solution to the
optimal policy problem (2.5). Specifically, as shown in Yun (2005), the exact nonlinear solution for the optimal inflation rate is
It is noteworthy that this closed-form solution is feasible only when the relative price distortion is the only distortion--due to the assumption that there is an optimal subsidy and there are no
cost-push shocks. The optimal rate of inflation is less than zero as long as there are initial price distortions .
The difference between the two methods is that the expansion parameter of the first renormalization is , while that of the second is . Figure 1 compares the accuracy of the two normalizations based
on the first-order solution under each normalization. The black solid line represents the exact closed-form solution for annualized inflation ( ) in terms of initial relative distortion ( ). The blue
line with crosses is the linear approximation of this nonlinear solution. This corresponds to the first-order approximation of when the expansion parameter is --that is, . It is evident that this
approximation is more accurate than : the first-order approximation with as the expansion parameter, depicted by the red circles.
We can provide an intuitive understanding about the improved accuracy of the approximation with respect to as follows. Since is the square of , the first-order approximation with respect to is
equivalent to the second-order approximation with respect to :
Note that the equality holds because no linear terms are included in with zero steady-state inflation rate.
We have illustrated how to detect the existence of a bifurcation and demonstrated how to apply l'Hospital's rule to characterize the solution. We have also shown that the bifurcation can be
eliminated through renormalization of model variables; furthermore, renormalization may yield a more accurate first-order solution than applying l'Hospital's rule to the original formulation. This
paper has focused on the consequences of renormalization on the treatment of bifurcations. However, the renormalization is also associated with the welfare evaluation of different policies as in
Benigno and Woodford (2005).
Benhabib, Jess and Kazuo Nishimura. "The Hopf Bifurcation and the Existence and Stability of Closed Orbits in Multisector Models of Optimal Economic Growth." Journal of Economic Theory, December 1979, Vol. 21(3), pp. 421-444.
Benigno, Pierpaolo and Michael Woodford. "Inflation Stabilization and Welfare: The Case of a Distorted Steady State." Journal of the European Economic Association, December 2005, Vol. 3(6), pp. 1185-1236.
Calvo, Guillermo A. "Staggered Prices in a Utility Maximizing Framework." Journal of Monetary Economics, 1983, 12(3), pp. 383-398.
Judd, Kenneth L. Numerical Methods in Economics, MIT Press, 1998.
Judd, Kenneth L. and Sy-Ming Guu. "Asymptotic Methods for Asset Market Equilibrium Analysis." Economic Theory, 2001, 18(1), pp. 127-157.
Levin, Andrew, Alexei Onatski, John Williams and Noah Williams. "Monetary Policy under Uncertainty in Micro-Founded Macroeconometric Models", in M. Gertler and K. Rogoff, eds., NBER Macroeconomics Annual 2005. Cambridge, MA: MIT Press, 2006.
Levine, Paul, Joseph Pearlman and Richard Pierse. "Linear-Quadratic Approximation, External Habit and Targeting Rules." Unpublished manuscript, University of Surrey, October 2006.
Schmitt-Grohé, Stephanie and Martín Uribe. "Optimal Simple and Implementable Monetary and Fiscal Rules." Forthcoming in Journal of Monetary Economics, 2006.
Woodford, Michael. Interest and Prices: Foundations of a Theory of Monetary Policy, Princeton, NJ: Princeton University Press, 2003.
Yun, Tack. "Optimal Monetary Policy with Relative Price Distortions." American Economic Review, March 2005, 95(1), pp. 89-109.
Anonymous Monk: Remember this: even though the "array" data-structure in Perl is "only one-dimensional," each element in this (or any other type of ...) data structure can be a reference to "anything at all." Therefore, you can construct data structures of arbitrary complexity. "Multi-dimensional arrays" are the least of what you can do.
American Mathematical Society honors three UCSC professors
Three UC Santa Cruz professors are among the mathematical scientists from around the world named Fellows of the American Mathematical Society (AMS) for 2013, the program's initial year. They are
Richard Montgomery and Maria Schonbek, both professors of mathematics, and Harold Widom, professor emeritus of mathematics.
The inaugural class of 1,119 AMS fellows represents over 600 institutions. The fellows are recognized by the society for their outstanding contributions to the creation, exposition, advancement,
communication, and utilization of mathematics. A complete list of this year's class of fellows is available online.
Montgomery is known for his work on the three-body problem from celestial mechanics, one of the oldest problems in mathematics and physics. Since 2000, he has worked primarily on this N-body problem
and the geometry of distributions. He joined the UCSC faculty in 1990.
Schonbek's research focuses on non-linear partial differential equations derived from models in fluid dynamics. The primary questions she is concerned with are related to the qualitative behavior of
solutions, specifically questions concerning long-time behavior of solutions. She has been a UCSC faculty member since 1986.
Widom is well known for his work on random matrix theory, which has earned several awards for him and his collaborator, Craig Tracy of UC Davis. They discovered a new class of distribution functions
called Tracy-Widom distributions that are of great interest to physicists. A fellow of the American Academy of Arts and Sciences, Widom has been a UCSC faculty member since 1968.
"The new AMS Fellows Program recognizes some of the most accomplished mathematicians--AMS members who have contributed to our understanding of deep and important mathematical questions, to
applications throughout the scientific world, and to educational excellence," said AMS president Eric M. Friedlander. "The AMS is the world's largest and most influential society dedicated to
mathematical research, scholarship, and education. Recent advances in mathematics include solutions to age-old problems and key applications useful for society."
CAD Program for Aircraft Design
November 11th, 2012, 08:22 PM #16
Re: CAD Program for Aircraft Design
G'Day all, I'm looking for a decent CAD program that I can use to model a few designs I have on paper. Ideally, the sort of program I'd like would be:
a) Easy to use
b) Cheap!
c) Have the facility to accurately model airfoil sections
d) Able to integrate with a CFD program to calculate lift, pressure and drag coefficients for a design
e) Able to perform "aeronautical" calculations on a design, such as calculating wing area, wetted area, etc
f) Able to save in a format compatible with a CNC mill or lathe
Any help would be greatly appreciated, Brad
Drawing airfoils is dead easy with Design CAD. That's because DC uses a cubic spline for a curve. I don't know of many others that do that. Anyway, draw a horizontal line and dimension it to 100 units long. Then working from the table of ordinates for the airfoil you want to draw, set points at those locations (using point relative from the leading edge of the line). When all the points are there, just wrap a curve around the points and save the file. Then, to set the actual chord length, just scale that 100 unit drawing by the chord length you want (in inches) and save it as a new file. And you can add offsets for skin thickness with the parallel line command. It takes me about 10 minutes to do one these days. As for the calculations part, you are dreaming of something that none of us can afford! That's what spreadsheets and programming languages are for. CAD and Spread Sheet. Tools of creation. Richard
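As an aside on where such a table of ordinates can come from: for the NACA four-digit symmetric sections the ordinates are generated by a published thickness formula, and the "draw at 100 units, then scale to chord" workflow described above is a one-liner. A rough Python sketch (the function names and the 0012/48-inch values are made up for illustration, not tied to any CAD package):

```python
# Generate NACA four-digit half-thickness ordinates on a unit chord,
# then scale to the desired chord length -- the same "draw small, then
# scale" workflow described in the post above.

def naca4_thickness(t, n=11):
    """Half-thickness y_t at n stations x in [0, 1] for a NACA 00xx
    section with maximum thickness t (as a fraction of chord)."""
    pts = []
    for i in range(n):
        x = i / (n - 1)
        y = 5.0 * t * (0.2969 * x ** 0.5 - 0.1260 * x
                       - 0.3516 * x ** 2 + 0.2843 * x ** 3
                       - 0.1015 * x ** 4)
        pts.append((x, y))
    return pts

def scale_to_chord(points, chord):
    """Scale unit-chord ordinates to a real chord length (e.g. inches)."""
    return [(x * chord, y * chord) for x, y in points]

unit = naca4_thickness(0.12)          # NACA 0012: 12% thick, max near x = 0.3
section = scale_to_chord(unit, 48.0)  # 48-inch chord
```

The spline fit through those points is then whatever curve tool the CAD package provides.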
Re: CAD Program for Aircraft Design
Cubic splines are in almost every CAD package now, even tunable versions of different linear algebra for curve fitting. The highest end packages have the best tools of course. I was always really
impressed with the design CAD kernel for the price. It has a little bit of a learning curve but it really is a capable engineering package for even a small company. For an Individual it is really
good and shines above all the others sub $100. I think I paid ten times that for it more than a decade ago back when 3.5" floppy discs were an amazing new invention.
Jay K.
VT USA
Re: CAD Program for Aircraft Design
On the subject of computer aided Design...
This is a simple minded old BASIC program.
I have the .exe file and it does run on a XP box -
as Win95 compatible.
If anyone wants the .EXE just ask.
Or maybe use it as a guide for your own wing
design spread sheet?
It might offer a quick and dirty way to get wing
parameters in the ball park quickly.
And it's kind of fun to look at numbers for say -
a Boeing 747. Or Gossamer Albatross.
Solve wing questions for Lift Velocity Surface Clift.
Assign values to 3 parameters and solve for the 4th.
'This was developed on a Tandy pocket computer.
'But this file is from the QuickBASIC 4.5 compiler,
FF = 0
TT = NOT FF
doRun = TT
KK$ = "LVSCQF?"
'initial values:
L = 555
W = L
A = 55
V = A * 1.4666
Cl = 1.2
S = 125
DataFile$ = "Wing.Txt"
PRINT " Finite Wing Theory: ---== R. Lamb 1983 ==---"; : PRINT
PRINT " Data file is "; DataFile$
WHILE doRun = TT
PRINT " solve for Lift Velocity Surface Clift Quit File [LVSCQF?] ";
Z$ = ""
WHILE Z$ = ""
Z$ = UCASE$(INKEY$):
IF Z$ = CHR$(13) THEN Z$ = ""
IF Z$ = CHR$(27) THEN Z$ = "Q"
IF INSTR(KK$, Z$) = 0 THEN Z$ = ""
SELECT CASE UCASE$(Z$)
CASE "L"
PRINT " Calculate LIFT.............."
GOSUB doLift
CASE "V"
PRINT " Calculate Velocity.........."
GOSUB doVel
CASE "S"
PRINT " Calculate Wing Area........."
GOSUB doSurf
CASE "C"
PRINT " Calculate Coef. of Lift....."
GOSUB doCL
CASE "?"
'GOSUB doDump
PRINT " Variable dump:---------------------------"
PRINT " Lift / Weight "; L; " lbs"
PRINT " Airspeed "; A; " mph = "; V; " fps"
PRINT " Coefec of Lift "; Cl
PRINT " Wing Area "; S; " sq ft"
PRINT " -----------------------------------------"
CASE "Q"
doRun = FF
CASE "F"
OPEN DataFile$ FOR APPEND AS #1
PRINT #1, ""
PRINT #1, "W=";
PRINT #1, USING "#,###,###"; W;
PRINT #1, TAB(15); "L=";
PRINT #1, USING "#,###,###"; L;
PRINT #1, TAB(30); "C=";
PRINT #1, USING "#.###"; Cl;
PRINT #1, TAB(40); "S=";
PRINT #1, USING "####"; S;
PRINT #1, TAB(50); "V=";
PRINT #1, USING "####"; A
CLOSE 1
END SELECT
WEND
END

doLift:
GOSUB GetSurf
GOSUB GetVel
GOSUB GetCL
L = .001188 * Cl * V * V * S
PRINT " Lift = ";
PRINT USING "#,###,###"; L
RETURN

doVel:
GOSUB GetSurf
GOSUB GetCL
GOSUB GetWgt
V = SQR(L / (.001188 * Cl * S))
A = V * .681
PRINT " Velocity = ";
PRINT USING "#,###.#"; A;
PRINT " MPH = ";
PRINT USING "#,###.#"; V;
PRINT " FPS"
RETURN

doSurf:
GOSUB GetCL
GOSUB GetWgt
GOSUB GetVel
S = L / (.001188 * Cl * V * V)
PRINT " Surface = ";
PRINT USING "#,###"; S
RETURN

doCL:
GOSUB GetWgt
GOSUB GetVel
GOSUB GetSurf
Cl = L / (.001188 * S * V * V)
PRINT USING " Coeff. Lift = #.###"; Cl
RETURN

GetSurf:
PRINT " Wing Area (sq ft) ["; S; "]";
INPUT ""; X
IF X <> 0 THEN S = X
RETURN

GetVel:
PRINT " Airspeed (mph) ["; A; "]";
INPUT ""; X
IF X <> 0 THEN
A = X
V = X * 1.467
END IF
RETURN

GetCL:
PRINT " Coeff. Lift (#.##) [";
PRINT USING "#.###"; Cl;
PRINT "]";
INPUT ""; X
IF X <> 0 THEN Cl = X
RETURN

GetWgt:
PRINT " Gross Weight (lbs) ["; L; "]";
INPUT ""; X
IF X <> 0 THEN
W = X ' steady state W = L
L = X
END IF
RETURN
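For anyone who would rather not run the BASIC: the core relation in the program is just the lift equation L = ½ρV²SC_L, where the constant .001188 is one-half the sea-level air density (½ × 0.002377 slug/ft³), with V in ft/s, S in ft², and L in lbf. A rough Python equivalent of the four solve modes (function names are mine, not from the program):

```python
# Sea-level lift equation L = (1/2) * rho * V^2 * S * CL, as used in
# the BASIC program above. 0.001188 is half the sea-level air density
# in slug/ft^3; V in ft/s, S in ft^2, L in lbf.

HALF_RHO = 0.001188

def lift(cl, v_fps, s):
    return HALF_RHO * cl * v_fps ** 2 * s

def velocity(l, cl, s):
    return (l / (HALF_RHO * cl * s)) ** 0.5

def area(l, cl, v_fps):
    return l / (HALF_RHO * cl * v_fps ** 2)

def coeff_lift(l, s, v_fps):
    return l / (HALF_RHO * s * v_fps ** 2)
```

Each function is the same equation rearranged for one unknown, which is all the SELECT CASE menu in the BASIC does.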
Re: CAD Program for Aircraft Design
Alibre 3d $99, has a CAM module also.
Re: CAD Program for Aircraft Design
Realize this is an old thread but just found a reference to some CAD software called "SpaceClaim". Does anyone have some experience with this software?????
SpaceClaim|3D Direct Modeling Software for Engineering, Manufacturing and CAE
Re: CAD Program for Aircraft Design
I've heard from a few folks who would like to play with these programs,
but don't have the programming background (in ancient languages).
So I've recompiled them using FreeBASIC to run on a windows machine.
The zip file contains both the source code and executable files.
WingX (listed up-thread) does simple wing parameter calculations;
Area, Coefficient of Lift, velocity, and lift (weight).
It assumes sea level standard day conditions.
It was originally designed to run on a Tandy Pocket Computer.
Re: CAD Program for Aircraft Design
This is the other program I posted a while back.
It is a bit more ambitious - to design cantilever wing spars.
It took a bit of rewrite to get it to compile, since FreeBASIC
doesn't allow low level access to the underlying machine like
QuickBASIC 4.5 did. And it doesn't like the dot notation I
used back then (precursor to object oriented properties).
I don't have all the resources that I did when I wrote this.
But it still seems to be running right.
Wouldn't hurt to give it a reality check or two.
If you run into any problems with it, please let me know.
Re: CAD Program for Aircraft Design
Another old program - calculates stresses in strut based spars.
Not my code, so I can't offer the source.
But might be helpful...
Re: CAD Program for Aircraft Design
CATIA and Pro/E are far from cheap. And Autocad is quite obsolete.
[Numpy-discussion] Counting array elements
Tim Hochberg tim.hochberg at cox.net
Sun Oct 31 14:24:01 CST 2004
Robert Kern wrote:
> Tim Hochberg wrote:
>> Could you describe the SciPy axis convention: I'm not familiar with it.
> axis=-1
OK, so Numarray (currently) and Numeric use axis=0, SciPy uses axis=-1,
and there is some desire to use axis=ALL instead.
One advantage of ALL is that it breaks everyone's code equally, so there
wouldn't be any charges of favoritism <0.8 wink>.
I can't come up with any way to reconcile the three, but I can suggest a
transition strategy whatever the decision. Supply an option so that one
can require axis arguments to all calls to reduce. Then it's relatively
easy to track down all the reduce calls and fix the ones that are
broken. Something like numarray.setRequireReduceAxisArg(True).
FWIW, it wouldn't bother me much to use SciPy's default here: supporting
SciPy is a worthwhile goal and I think SciPy's choice here is a
reasonable one. Another alternative that wouldn't bother me much is "In
the face of ambiguity, refuse the temptation to guess". That is, always
require axis arguments for multidimensional arrays. While not backwards
compatible, this would make the transition relatively easy, since uses
that might fail would raise exceptions.
A model of the evaluation of logic programs (i.e., resolving Horn clauses)
The point is not to re-implement Prolog with all the conveniences but to formalize evaluation strategies such as SLD, SLD-interleaving, and others. The formalization is aimed at reasoning about
termination and solution sequences. See App A and B of the full FLOPS 2008 paper (the file arithm.pdf in this directory).
type VStack = [Int]
A logic var is represented by a pair of an integer and a list of `predicate marks' (which are themselves integers). Such an odd representation is needed to ensure the standardization apart: different
instances of a clause must have distinctly renamed variables. Unlike Claessen, we get by without the state monad to generate unique variable names (labels). See the discussion and examples in the
tests section below. Our main advantage is that we separate the naming of variables from the evaluation strategy. Evaluation no longer cares about generating fresh names, which notably simplifies the
analysis of the strategies. Logic vars are typed, albeit phantomly.
type Subst term = Map (LogicVar term) term
A finite map from vars to terms (parameterized over the domain of terms)
data Formula term Source
A formula describes a finite or _infinite_ AND-OR tree that encodes the logic-program clauses that may be needed to evaluate a particular goal. We represent a goal g(t1,...,tn) by a combination of
the goal g(X1,...,Xn), whose arguments are distinct fresh logic variables, and a substitution {Xi=ti}. For a goal g(X1,...,Xn), a |Formula| encodes all the clauses of the logic program that could
possibly prove g, in their order. Each clause
g(t1,...,tn) :- b1(t11,...,t1m), ...
is represented by the (guarding) substitution {Xi=ti, Ykj=tkj} and the sequence of the goals bk(Yk1,...,Ykm) in the body. Each of these goals is again encoded as a |Formula|. All variables in the
clauses are renamed to ensure the standardization apart.
Our trees are similar to Antoy's definitional trees, used to encode rewriting rules and represent control strategies in term-rewriting systems and _functional logic_ programming. However, in
definitional trees, function calls can be nested, and patterns are linear.
The translation from Prolog is straightforward; the first step is to re-write clauses so that the arguments of each goal are logic variables: A fact
is converted to
g(X) :- X = term.
A clause
g(term) :- g1(term1), g2(term2).
is converted to
g(X) :- X = term, _V1 = term1, g1(_V1), _V2=term2, g2(_V2).
See the real examples at the end
A formula (parameterized by the domain of terms) is an OR-sequence of clauses
(Clause term) :+: (Formula term)
data Clause term
A clause is a guarded body; the latter is an AND-sequence of formulas
G (Subst term) (Body term)
data Body term
(Formula term) :*: (Body term)
prune :: Unifiable term => Formula term -> Subst term -> Formula term
The evaluation process starts with a formula and the initial substitution, which together represent a goal. The guarding substitution of the clause conjoins with the current substitution to yield the
substitution for the evaluation of the body. The conjunction of substitutions may lead to a contradiction, in which case the clause is skipped (pruned).
Evaluation as pruning: we traverse a tree and prune away failed branches
eval :: Unifiable term => Formula term -> Subst term -> [Subst term]
A different evaluator: Evaluate a tree to a stream (lazy list) given the initial substitution s. Here we use the SLD strategy.
class Unifiable term where
Evaluation, substitutions and unification are parametric over terms (term algebra), provided the following two operations are defined. We should be able to examine a term and determine if it is a
variable or a constructor (represented by an integer) applied to a sequence of zero or more terms. Conversely, given a constructor (represented by an integer) and a list of terms-children we should
be able to build a term. The two operations must satisfy the laws:
either id build . classify === id
classify . build === Right
Unifiable UN Constructor UZ is represented by 0, and US is represented by 1
unify :: Unifiable term => Subst term -> Subst term -> Maybe (Subst term)
Conjoin two substitutions (see Defn 1 of the FLOPS 2008 paper). We merge two substitutions and solve the resulting set of equations, returning Nothing if the two original substitutions are
solve :: Unifiable term => Subst term -> [(Either (LogicVar term) term, term)] -> Maybe (Subst term)
Solve the equations using the naive realization of the Martelli and Montanari process
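As a language-neutral sketch of the term-unification step underlying this process (written in Python rather than Haskell; the tuple encoding of terms is invented here to mirror the classify/build interface, and the occurs check is omitted, as in a naive realization):

```python
# Naive Martelli-Montanari-style unification over first-order terms.
# A term is either a variable ('V', name) or a constructor application
# ('C', tag, [children]).  Substitutions are dicts from variable names
# to terms.  No occurs check (naive).

def walk(t, s):
    """Follow variable bindings in substitution s."""
    while t[0] == 'V' and t[1] in s:
        t = s[t[1]]
    return t

def unify(t1, t2, s):
    """Extend substitution s so that t1 == t2, or return None on clash."""
    t1, t2 = walk(t1, s), walk(t2, s)
    if t1 == t2:
        return s
    if t1[0] == 'V':
        return {**s, t1[1]: t2}
    if t2[0] == 'V':
        return {**s, t2[1]: t1}
    # Both constructors: tags and arities must match; children unify pairwise.
    if t1[1] != t2[1] or len(t1[2]) != len(t2[2]):
        return None
    for a, b in zip(t1[2], t2[2]):
        s = unify(a, b, s)
        if s is None:
            return None
    return s

# Example in the UN term algebra: UZ is tag 0, US is tag 1.
# Unifying S(X) with S(S(Z)) binds X to S(Z).
Z = ('C', 0, [])
X = ('V', 'x')
s = unify(('C', 1, [X]), ('C', 1, [('C', 1, [Z])]), {})
# s == {'x': ('C', 1, [('C', 0, [])])}
```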
data UN
UNv (LogicVar UN)
Eq UN
Show UN
Unifiable UN Constructor UZ is represented by 0, and US is represented by 1
genu :: Formula UN
Encoding of variable names to ensure standardization apart. A clause such as genu(X) or add(X,Y,Z) may appear in the tree (infinitely) many times. We must ensure that each instance uses distinct
logic variables. To this end, we name variables by a pair (Int, VStack) whose first component is the local label of a variable within a clause. VStack is a path from the root of the tree to the
current occurrence of the clause in the tree. Each predicate along the path is represented by an integer label (0 for genu, 1 for add, 2 for mul, etc). To pass arguments to a clause, we add to the
current substitution the bindings for the variables of that clause. See the genu' example below: whereas (0,h) is the label of the variable X in the current instance of genu, (0,genu_mark:h) is X in
the callee.
A logic program
genu([u|X]) :- genu(X).
and the goal genu(X) are encoded as follows. The argument of genu' is the path of the current instance of genu' from the top of the AND-OR tree.
Chapter Introduction
NAG Toolbox Chapter Introduction
F12 — Large Scale Eigenproblems
Scope of the Chapter
This chapter provides functions for computing eigenvalues and eigenvectors of large-scale (sparse) standard and generalized eigenvalue problems. It provides functions for:
• solution of symmetric eigenvalue problems;
• solution of nonsymmetric eigenvalue problems;
• solution of generalized symmetric-definite eigenvalue problems;
• solution of generalized nonsymmetric eigenvalue problems;
• partial singular value decomposition.
Functions are provided for both real and complex data.
The functions in this chapter have all been derived from the ARPACK software suite (see Lehoucq et al. (1998)), a collection of Fortran 77 subroutines designed to solve large scale eigenvalue problems. The interfaces provided in this chapter have been chosen to combine ease of use with the flexibility of the original ARPACK software. The underlying iterative methods and algorithms remain essentially the same as those in ARPACK and are described fully in Lehoucq et al. (1998).
The algorithms used are based upon an algorithmic variant of the Arnoldi process called the Implicitly Restarted Arnoldi Method. For symmetric matrices, this reduces to a variant of the Lanczos process called the Implicitly Restarted Lanczos Method. These variants may be viewed as a synthesis of the Arnoldi/Lanczos process with the Implicitly Shifted QR technique that is suitable for large scale problems. For many standard problems, a matrix factorization is not required. Only the action of the matrix on a vector is needed.
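The last point — that only the action of the matrix on a vector is needed — is the defining feature of this class of methods. As an illustration of the access pattern (not of ARPACK's algorithm, which is far more sophisticated), here is plain power iteration written so that the solver only ever calls a matrix-vector product:

```python
# Matrix-free power iteration: the solver only ever calls matvec(v);
# it never inspects matrix entries directly.  This is the same access
# pattern the Arnoldi/Lanczos-based functions rely on.

def power_iteration(matvec, v0, iters=200):
    v = v0[:]
    lam = 0.0
    for _ in range(iters):
        w = matvec(v)
        lam = max(w, key=abs)          # crude eigenvalue estimate
        v = [x / lam for x in w]       # renormalize
    return lam, v

# Example operator: the 2x2 matrix [[2, 1], [1, 2]], eigenvalues 3 and 1.
def matvec(v):
    return [2 * v[0] + 1 * v[1], 1 * v[0] + 2 * v[1]]

lam, v = power_iteration(matvec, [1.0, 0.0])
# lam converges to the dominant eigenvalue 3, v to the (1, 1) direction
```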
Background to the Problems
This section is only a brief introduction to the solution of large-scale eigenvalue problems. For a more detailed discussion see, for example, Saad (1992) and Lehoucq (1995), in addition to Lehoucq et al. (1998). The basic factorization techniques and definitions of terms used for the different problem types are given in Section [Background to the Problems] in the F08 Chapter Introduction.
Sparse Matrices and their Storage
A matrix A may be described as sparse if the number of zero elements is so large that it is worthwhile using algorithms which avoid computations involving zero elements. If A is sparse, and the chosen algorithm requires the matrix coefficients to be stored, a significant saving in storage can often be made by storing only the nonzero elements. A number of different formats may be used to represent sparse matrices economically. These differ according to the amount of storage required, the amount of indirect addressing required for fundamental operations such as matrix-vector products, and their suitability for vector and/or parallel architectures. For a survey of some of these storage formats see Barrett et al. (1994).
Most of the functions in this chapter have been designed to be independent of the matrix storage format. This allows you to choose your own preferred format, or to avoid storing the matrix
altogether. Other functions are
general purpose
, which are easier to use, but are based on fixed storage formats. One such format is currently provided. This is the banded coordinate storage format as used in
Chapters F07
(LAPACK) for storing general banded matrices.
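As a concrete illustration of one such format (coordinate storage — shown here as a generic sketch, not the banded format this chapter actually provides): only the nonzero entries are kept, as (row, column, value) triples, and a matrix-vector product needs nothing else.

```python
# Coordinate (COO) sparse storage: only nonzero entries are kept, as
# (row, col, value) triples.  A matvec touches each nonzero exactly once.

def coo_matvec(triples, x, nrows):
    y = [0.0] * nrows
    for i, j, a in triples:
        y[i] += a * x[j]
    return y

# The 3x3 matrix [[4, 0, 0], [0, 0, 2], [1, 0, 3]] has 4 nonzeros:
A = [(0, 0, 4.0), (1, 2, 2.0), (2, 0, 1.0), (2, 2, 3.0)]
coo_matvec(A, [1.0, 1.0, 1.0], 3)   # -> [4.0, 2.0, 4.0]
```

Note the indirect addressing (`x[j]`, `y[i]`) mentioned above as one of the costs that distinguishes storage formats.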
Symmetric Eigenvalue Problems
The symmetric eigenvalue problem is to find the eigenvalues, λ, and corresponding eigenvectors, z ≠ 0, such that

A z = λ z,   A = A^T,   where A is real.

For the Hermitian eigenvalue problem we have

A z = λ z,   A = A^H,   where A is complex.

For both problems the eigenvalues λ are real.
The basic task of the symmetric eigenproblem functions is to compute some of the values of λ and, optionally, corresponding vectors z for a given matrix A. For example, we may wish to obtain the first ten eigenvalues of largest magnitude, of a large sparse matrix A.
Generalized Symmetric-definite Eigenvalue Problems
This section is concerned with the solution of the generalized eigenvalue problems A z = λ B z, A B z = λ z, and B A z = λ z, where A and B are real symmetric or complex Hermitian and B is positive definite. Each of these problems can be reduced to a standard symmetric eigenvalue problem, using a Cholesky factorization of B as either B = L L^T or B = U^T U (L L^H or U^H U in the Hermitian case).
With B = L L^T, we have

A z = λ B z   ⇒   (L^-1 A L^-T) (L^T z) = λ (L^T z).

Hence the eigenvalues of A z = λ B z are those of C y = λ y, where C is the symmetric matrix C = L^-1 A L^-T and y = L^T z. In the complex case, C is Hermitian with C = L^-1 A L^-H and y = L^H z.
The basic task of the generalized symmetric eigenproblem functions is to compute some of the values of λ and, optionally, corresponding vectors z for a given matrix pair A and B. For example, we may wish to obtain the first ten eigenvalues of largest magnitude of a large sparse matrix pair A and B.
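A small dense sketch of this reduction (NumPy, not the NAG functions): the eigenvalues of the standard problem for C = L^-1 A L^-T coincide with those of the generalized problem A z = λ B z.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
M = rng.standard_normal((n, n))
A = M + M.T                          # real symmetric A
N = rng.standard_normal((n, n))
B = N @ N.T + n * np.eye(n)          # symmetric positive definite B

L = np.linalg.cholesky(B)            # B = L L^T
Linv = np.linalg.inv(L)
C = Linv @ A @ Linv.T                # C = L^-1 A L^-T, symmetric

lam_std = np.sort(np.linalg.eigvalsh(C))                              # C y = lambda y
lam_gen = np.sort(np.real(np.linalg.eigvals(np.linalg.solve(B, A))))  # A z = lambda B z
assert np.allclose(lam_std, lam_gen)
```

In practice L is never inverted explicitly; triangular solves are used instead, as described later for the shift-invert machinery.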
Nonsymmetric Eigenvalue Problems
The nonsymmetric eigenvalue problem is to find the eigenvalues, λ, and corresponding eigenvectors, v ≠ 0, such that

A v = λ v.
More precisely, a vector v as just defined is called a right eigenvector of A, and a vector u ≠ 0 satisfying

u^T A = λ u^T   (u^H A = λ u^H when u is complex)

is called a left eigenvector of A. A real matrix A may have complex eigenvalues, occurring as complex conjugate pairs.
This problem can be solved via the Schur factorization of A, defined in the real case as

A = Z T Z^T,

where Z is an orthogonal matrix and T is an upper quasi-triangular matrix with 1 by 1 and 2 by 2 diagonal blocks, the 2 by 2 blocks corresponding to complex conjugate pairs of eigenvalues of A. In the complex case, the Schur factorization is

A = Z T Z^H,

where Z is unitary and T is a complex upper triangular matrix.
The columns of Z are called the Schur vectors. For each k (1 ≤ k ≤ n), the first k columns of Z form an orthonormal basis for the invariant subspace corresponding to the first k eigenvalues on the diagonal of T. Because this basis is orthonormal, it is preferable in many applications to compute Schur vectors rather than eigenvectors. It is possible to order the Schur factorization so that any desired set of k eigenvalues occupies the k leading positions on the diagonal of T.
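As a sketch (SciPy's dense schur function, not a function from this chapter), the real Schur factorization and its quasi-triangular structure can be checked directly:

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))

# Real Schur factorization A = Z T Z^T: Z orthogonal, T upper
# quasi-triangular (2x2 diagonal blocks hold complex conjugate pairs).
T, Z = schur(A, output="real")

assert np.allclose(Z @ T @ Z.T, A)        # A = Z T Z^T
assert np.allclose(Z.T @ Z, np.eye(5))    # Z is orthogonal
assert np.allclose(np.tril(T, -2), 0.0)   # zero below the first subdiagonal
```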
The two basic tasks of the nonsymmetric eigenvalue functions are to compute, for a given matrix A, some values of λ and, if desired, their associated right eigenvectors v, and the Schur factorization.
Generalized Nonsymmetric Eigenvalue Problem
The generalized nonsymmetric eigenvalue problem is to find the eigenvalues, λ, and corresponding eigenvectors, v ≠ 0, such that

A v = λ B v,   A B v = λ v,   and   B A v = λ v.
More precisely, a vector v as just defined is called a right eigenvector of the matrix pair (A,B), and a vector u ≠ 0 satisfying

u^T A = λ u^T B   (u^H A = λ u^H B when u is complex)

is called a left eigenvector of the matrix pair (A,B).
The Singular Value Decomposition
The singular value decomposition (SVD) of an m × n matrix A is given by

A = U Σ V^T   (A = U Σ V^H in the complex case),

where U and V are orthogonal (unitary) and Σ is an m × n diagonal matrix with real diagonal elements, σ_i, such that

σ_1 ≥ σ_2 ≥ ⋯ ≥ σ_min(m,n) ≥ 0.

The σ_i are the singular values of A and the first min(m,n) columns of U and V are the left and right singular vectors of A. The singular values and singular vectors satisfy

A v_i = σ_i u_i   and   A^T u_i = σ_i v_i   (or A^H u_i = σ_i v_i),   so that   A^T A v_i = σ_i^2 v_i   (A^H A v_i = σ_i^2 v_i),

where u_i and v_i are the i-th columns of U and V respectively.
Thus selected singular values and the corresponding right singular vectors may be computed by finding eigenvalues and eigenvectors for the symmetric matrix A^T A (or the Hermitian matrix A^H A if A is complex).
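A NumPy sketch of this equivalence (not using the NAG functions): the eigendecomposition of A^T A reproduces the singular values of A, and the relations above recover a left singular vector from a right one.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 8, 5
A = rng.standard_normal((m, n))

w, V = np.linalg.eigh(A.T @ A)             # ascending eigenvalues sigma_i^2
sigma = np.sqrt(np.maximum(w, 0.0))[::-1]  # descending singular values

assert np.allclose(sigma, np.linalg.svd(A, compute_uv=False))

# u_i = A v_i / sigma_i, and then A^T u_i = sigma_i v_i as stated above.
v1 = V[:, -1]                              # right singular vector for sigma_1
u1 = A @ v1 / sigma[0]
assert np.allclose(A.T @ u1, sigma[0] * v1)
```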
An alternative approach is to use the relationship

[0, A; A^T, 0] [U; V] = [U; V] Σ

and thus compute selected singular values and vectors via the symmetric matrix

C = [0, A; A^T, 0]   (C = [0, A; A^H, 0] if A is complex).
In many applications, one is interested in computing a few (say k) of the largest singular values and corresponding vectors. If U_k and V_k denote the leading k columns of U and V respectively, and if Σ_k denotes the leading principal submatrix of Σ, then

A_k ≡ U_k Σ_k V_k^T   (or U_k Σ_k V_k^H)

is the best rank-k approximation to A in both the 2-norm and the Frobenius norm. Often a very small k will suffice to approximate important features of the original A, or to approximately solve least squares problems involving A.
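The optimality of A_k is easy to check numerically (a NumPy sketch; the dense SVD here stands in for the iterative partial SVD):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((20, 12))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 3
Ak = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # A_k = U_k Sigma_k V_k^T

# 2-norm error of the best rank-k approximation is sigma_{k+1} ...
assert np.isclose(np.linalg.norm(A - Ak, 2), s[k])
# ... and the Frobenius error is the root-sum-square of the discarded sigma_i.
assert np.isclose(np.linalg.norm(A - Ak, "fro"), np.sqrt(np.sum(s[k:] ** 2)))
```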
Iterative Methods
Iterative methods for the solution of the standard eigenproblem

A x = λ x

approach the solution through a sequence of approximations until some user-specified termination criterion is met or until some predefined maximum number of iterations has been reached. The number of iterations required for convergence is not generally known in advance, as it depends on the accuracy required and on the matrix A: its sparsity pattern, conditioning and eigenvalue spectrum.
Recommendations on Choice and Use of Available Functions
Types of Function Available
The functions available in this chapter divide essentially into three suites of basic reverse communication functions and some general purpose functions for banded systems.
Basic functions are grouped in suites of five, and implement the underlying iterative method. Each suite comprises a setup function, an options setting function, a solver function, a function to return additional monitoring information and a post-processing function. The solver function is independent of the matrix storage format (indeed the matrix need not be stored at all) and the type of preconditioner. It uses reverse communication, i.e., it returns repeatedly to the calling program with the parameter irevcm set to specified values which require the calling program to carry out a specific task (either to compute a matrix-vector product or to solve the preconditioning equation), to signal the completion of the computation or to allow the calling program to monitor the solution. Reverse communication has the following advantages:
(i) Maximum flexibility in the representation and storage of sparse matrices. All matrix operations are performed outside the solver function, thereby avoiding the need for a complicated interface
with enough flexibility to cope with all types of storage schemes and sparsity patterns. This also applies to preconditioners.
(ii) Enhanced user interaction: you can closely monitor the solution and tidy or immediate termination can be requested. This is useful, for example, when alternative termination criteria are to be
employed or in case of failure of the external functions used to perform matrix operations.
At present there are suites of basic functions for real symmetric and nonsymmetric systems, and for complex systems.
General purpose functions call basic functions in order to provide easy-to-use functions for particular sparse matrix storage formats. They are much less flexible than the basic functions, but do not
use reverse communication, and may be suitable in many cases.
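The reverse communication pattern itself is easy to mimic in a few lines. The sketch below is purely illustrative (it is not the NAG interface, and simple power iteration stands in for the Arnoldi process): the solver yields requests, the caller performs each matrix-vector product, and the solver never sees the matrix or its storage format.

```python
import numpy as np

def power_iteration_rc(n, maxit=200):
    """Reverse-communication power iteration: yields ('Av', v) requests,
    receives the product back, and never touches the matrix itself."""
    v = np.ones(n) / np.sqrt(n)
    lam = 0.0
    for _ in range(maxit):
        w = yield ("Av", v)     # ask the caller for A @ v
        lam = v @ w             # Rayleigh quotient estimate
        v = w / np.linalg.norm(w)
    yield ("done", lam)

# The caller owns the matrix (dense here, but any storage format would do).
A = np.diag([5.0, 2.0, 1.0])
solver = power_iteration_rc(3)
task, data = next(solver)
while task != "done":
    task, data = solver.send(A @ data)   # satisfy each request
assert abs(data - 5.0) < 1e-8            # dominant eigenvalue of A
```

Because all matrix operations happen on the caller's side, the same solver works unchanged for any sparse format, or for a matrix that is never stored at all.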
The structure of this chapter has been designed to cater for as many types of application as possible. If a general purpose function exists which is suitable for a given application you are
recommended to use it. If you then decide you need some additional flexibility it is easy to achieve this by using basic and utility functions which reproduce the algorithm used in the general
purpose function, but allow more access to algorithmic control parameters and monitoring.
Iterative Methods for Real Nonsymmetric and Complex Eigenvalue Problems
The suite of basic functions
nag_sparseig_real_init (f12aa)
nag_sparseig_real_iter (f12ab)
nag_sparseig_real_proc (f12ac)
nag_sparseig_real_option (f12ad)
nag_sparseig_real_monit (f12ae)
implements the iterative solution of real nonsymmetric eigenvalue problems, finding estimates for a specified spectrum of eigenvalues. These eigenvalue estimates are often referred to as Ritz values
and the error bounds obtained are referred to as the Ritz estimates. These functions allow a choice of termination criteria and many other options for specifying the problem type, allow monitoring of
the solution process, and can return Ritz estimates of the calculated Ritz values of the problem A x = λ x.
For complex matrices there is an equivalent suite of functions:
nag_sparseig_complex_init (f12an)
nag_sparseig_complex_iter (f12ap)
nag_sparseig_complex_proc (f12aq)
nag_sparseig_complex_option (f12ar)
nag_sparseig_complex_monit (f12as)
are the basic functions which implement corresponding methods used for real nonsymmetric systems. Note that these functions are to be used for both Hermitian and non-Hermitian problems. Occasionally,
when using these functions on a complex Hermitian problem, eigenvalues will be returned with small but nonzero imaginary part due to unavoidable round-off errors. These should be ignored unless they
are significant with respect to the eigenvalues of largest magnitude that have been computed.
There are general purpose functions for the case where the matrices are known to be banded. In these cases an initialization function is called first to set up default options, and the problem is
solved by a single call to a solver function. The matrices are supplied, in LAPACK banded-storage format, as arguments to the solver function. For real general matrices these functions are
nag_sparseig_real_band_init (f12af)
nag_sparseig_real_band_solve (f12ag)
; and for complex matrices the pair is
nag_sparseig_complex_band_init (f12at)
nag_sparseig_complex_band_solve (f12au)
. With each pair non-default options can be set, following a call to the initialization function, using
nag_sparseig_real_option (f12ad)
for real matrices and
nag_sparseig_complex_option (f12ar)
for complex matrices. For real matrices that can be supplied in the sparse matrix compressed column storage (CCS) format, the driver function
nag_eigen_real_gen_sparse_arnoldi (f02ek)
is available. This function uses functions from Chapter F12 in conjunction with direct solver functions from Chapter F11.
There is little computational penalty in using the non-Hermitian complex functions for a Hermitian problem. The only additional cost is to compute eigenvalues of a Hessenberg rather than a
tridiagonal matrix. The difference in computational cost should be negligible compared to the overall cost.
Iterative Methods for Real Symmetric Eigenvalue Problems
There is a general purpose function pair for the case where the matrices are known to be banded. In this case an initialization function,
nag_sparseig_real_symm_band_init (f12ff)
, is called first to set up default options, and the problem is solved by a single call to a solver function,
nag_sparseig_real_symm_band_solve (f12fg)
. The matrices are supplied, in LAPACK banded-storage format, as arguments to
nag_sparseig_real_symm_band_solve (f12fg)
. Non-default options can be set, following a call to
nag_sparseig_real_symm_band_init (f12ff)
, using
nag_sparseig_real_symm_option (f12fd).
Iterative Methods for Singular Value Decomposition
The partial singular value decomposition, A_k (as defined in Section [The Singular Value Decomposition]), of an m × n matrix A can be computed efficiently using functions from this chapter. For real matrices, the suite of functions listed in Section [Iterative Methods for Real Symmetric Eigenvalue Problems] (for symmetric problems) can be used; for complex matrices, the corresponding suite of functions for complex problems can be used; however, there are no general purpose functions for complex matrices.
The driver function
nag_eigen_real_gen_partialsvd (f02wg)
is available for computing the partial SVD of real matrices. The matrix is not supplied to nag_eigen_real_gen_partialsvd (f02wg); rather, a user-defined function argument provides the results of performing matrix-vector products.
For both real and complex matrices, you should use the default options (see, for example, the options listed in Section [Optional Parameters]) for problem type, computational mode and spectrum (Largest Magnitude). The operation to be performed on request by the reverse communication function (e.g., nag_sparseig_real_symm_iter (f12fb)) is, for real matrices, to multiply the returned vector by the symmetric matrix A^T A if m ≥ n, or by A A^T if m < n. For complex matrices, the corresponding Hermitian matrices are A^H A and A A^H.
The right (m ≥ n) or left (m < n) singular vectors are returned by the post-processing function (e.g., nag_sparseig_real_symm_proc (f12fc)). The left (or right) singular vectors can be recovered from the returned singular vectors. Provided the largest singular values are not multiple or tightly clustered, there should be no problem in obtaining numerically orthogonal left singular vectors from the computed right singular vectors (or vice versa).
The second example in Section [Example] illustrates how the partial singular value decomposition of a real matrix can be performed using the suite of functions for finding some eigenvalues of a real symmetric matrix. In this case m ≥ n; however, the program is easily amended to perform the same task in the case m < n.
Similarly, functions in this chapter may be used to estimate the 2-norm condition number

K_2(A) = σ_1 / σ_n.

This can be achieved by setting the option Both Ends to get the largest and smallest few singular values, then taking the ratio of the largest to the smallest computed singular values as your estimate.
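A sketch of this estimate (SciPy's ARPACK wrapper in place of the NAG option setting): the extreme eigenvalues of A^T A give σ_1 and σ_n, with shift-invert about zero used to reach the smallest one.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
AtA = (A.T @ A).tocsc()

# Largest eigenvalue of A^T A directly; smallest via shift-invert about 0.
big = spla.eigsh(AtA, k=1, which="LM", return_eigenvectors=False)[0]
small = spla.eigsh(AtA, k=1, sigma=0.0, return_eigenvectors=False)[0]

cond_est = np.sqrt(big / small)   # K_2(A) = sigma_1 / sigma_n
assert np.isclose(cond_est, np.linalg.cond(A.toarray()), rtol=1e-6)
```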
Alternative Methods
Other functions for the solution of sparse linear eigenproblems can currently be found in Chapters F02 and F08. In particular, the following functions allow the direct solution of real symmetric systems:
Band: nag_eigen_real_band_geneig (f02sd) and nag_lapack_dsbgv (f08ua)
Sparse: nag_eigen_real_symm_sparse_eigsys (f02fj)
General Use of Functions
This section will describe the complete structure of the reverse communication interfaces contained in this chapter. Numerous computational modes are available, including several shift-invert
strategies designed to accelerate convergence. Two of the more sophisticated modes will be described in detail. The remaining ones are quite similar in principle, but require slightly different tasks
to be performed with the reverse communication interface.
This chapter is structured as follows. The naming conventions used in this chapter, and the data types available, are described in Section [Naming Conventions]; spectral transformations are discussed in Section [Shift and Invert Spectral Transformations]. Spectral transformations are usually extremely effective, but there are a number of problem-dependent issues that determine which one to use. In
Section [Reverse Communication and Shift-invert Modes]
we describe the reverse communication interface needed to exercise the various shift-invert options. Each shift-invert option is specified as a computational mode and all of these are summarised in
the remaining sections. There is a subsection for each problem type and hence these sections are quite similar and repetitive. Once the basic idea is understood, it is probably best to turn directly
to the subsection that describes the problem setting that is most interesting to you.
Perhaps the easiest way to rapidly become acquainted with the modes in this chapter is to run each of the example programs which use the various modes. These may be used as templates and adapted to
solve specific problems.
Naming Conventions
Functions for solving nonsymmetric (real and complex) eigenvalue problems have as first letter after the chapter name, the letter ‘A’, e.g.,
nag_sparseig_real_iter (f12ab)
; equivalent functions for symmetric eigenvalue problems will have this letter replaced by the letter ‘F’, e.g.,
nag_sparseig_real_symm_iter (f12fb)
. For the letter following this, functions for real eigenvalue problems will have letters in the range ‘A to M’ while those for complex eigenvalue problems will have letters correspondingly shifted
into the range ‘N to Z’; so, for example, the complex equivalent of
nag_sparseig_real_option (f12ad) is nag_sparseig_complex_option (f12ar)
, while the real symmetric equivalent is
nag_sparseig_real_symm_option (f12fd)
A suite of five functions is named consecutively, e.g.,
nag_sparseig_real_init (f12aa)
nag_sparseig_real_iter (f12ab)
nag_sparseig_real_proc (f12ac)
nag_sparseig_real_option (f12ad)
nag_sparseig_real_monit (f12ae)
. Each general purpose function has its own initialization function, but uses the option setting function from the suite relevant to the problem type. Thus each general purpose function can be viewed
as belonging to a suite of three functions, even though only two functions will be named consecutively. For example,
nag_sparseig_real_option (f12ad)
nag_sparseig_real_band_init (f12af)
nag_sparseig_real_band_solve (f12ag)
represent the suite of functions for solving a banded real nonsymmetric eigenvalue problem.
Shift and Invert Spectral Transformations
The most general problem that may be solved here is to compute a few selected eigenvalues and corresponding eigenvectors for

A x = λ B x,   where A and B are real or complex n × n matrices.
The shift and invert spectral transformation is used to enhance convergence to a desired portion of the spectrum. If (x, λ) is an eigen-pair for (A, B) and σ ≠ λ, then

(A − σB)^-1 B x = ν x,   where ν = 1/(λ − σ).
This transformation is effective for finding eigenvalues near σ, since the n_ν eigenvalues of C ≡ (A − σB)^-1 B that are largest in magnitude correspond to the n_ν eigenvalues λ_j of the original problem that are nearest to the shift σ in absolute value. These transformed eigenvalues of largest magnitude are precisely the eigenvalues that are easy to compute with a Krylov method (see Barrett et al. (1994)). Once they are found, they may be transformed back to eigenvalues of the original problem. The direct relation is

λ_j = σ + 1/ν_j,
and the eigenvector x_j associated with ν_j in the transformed problem is also an eigenvector of the original problem, corresponding to λ_j. Usually the Arnoldi process will rapidly obtain good approximations to the eigenvalues of C of largest magnitude. However, to implement this transformation, you must provide the means to solve linear systems involving A − σB, either with a matrix factorization or with an iterative method.
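A dense NumPy check of this correspondence (a standard problem, B = I, is used for brevity): the largest-magnitude eigenvalue ν of C = (A − σB)^-1 B maps back, via λ = σ + 1/ν, to the eigenvalue of A nearest the shift.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 8
M = rng.standard_normal((n, n))
A = M + M.T                     # symmetric, so eigenvalues are real
B = np.eye(n)                   # standard problem for simplicity
sigma = 0.5

C = np.linalg.solve(A - sigma * B, B)     # C = (A - sigma*B)^-1 B
nus = np.linalg.eigvals(C)
nu = nus[np.argmax(np.abs(nus))]          # largest-magnitude nu
lam_from_nu = np.real(sigma + 1.0 / nu)   # lambda_j = sigma + 1/nu_j

lams = np.linalg.eigvalsh(A)
lam_nearest = lams[np.argmin(np.abs(lams - sigma))]
assert np.isclose(lam_from_nu, lam_nearest)
```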
In general, C will be non-Hermitian even if A and B are both Hermitian. However, this is easily remedied. The assumption that B is Hermitian positive definite implies that the bilinear form

〈x, y〉 ≡ x^H B y

is an inner product. If B is positive semidefinite and singular, then a semi-inner product results. This is a weighted B-inner product, and vectors x and y are called B-orthogonal if 〈x, y〉 = 0. It is easy to show that if A is Hermitian (self-adjoint) then C is self-adjoint with respect to this B-inner product (meaning 〈Cx, y〉 = 〈x, Cy〉 for all vectors x and y). Therefore, symmetry will be preserved if we force the computed basis vectors to be orthogonal in this B-inner product. Implementing this B-orthogonality requires you to provide a matrix-vector product Bv on request along with each application of C. In the following sections we shall discuss some of the more familiar transformations to the standard eigenproblem. However, when B is positive (semi)definite, we recommend using the shift-invert spectral transformation with B-inner products if at all possible. This is a far more robust transformation when B is ill-conditioned or singular. With a little extra manipulation (provided automatically in the post-processing functions), the semi-inner product induced by B prevents corruption of the computed basis vectors by roundoff-error associated with the presence of infinite eigenvalues. These very ill-conditioned eigenvalues are generally associated with a singular or highly ill-conditioned B. A detailed discussion of this theory may be found in Chapter 4 of Lehoucq et al. (1998).
Shift-invert spectral transformations are very effective and should even be used on standard problems, B = I, whenever possible. This is particularly true when interior eigenvalues are sought or when the desired eigenvalues are clustered. Roughly speaking, a set of eigenvalues is clustered if the maximum distance between any two eigenvalues in that set is much smaller than the minimum distance between these eigenvalues and any other eigenvalues of (A, B).
If you have a generalized problem, B ≠ I, then you must provide a way to solve linear systems with either A, B or a linear combination of the two matrices in order to use the reverse communication suites in this chapter. In this case, a sparse direct method should be used to factor the appropriate matrix whenever possible. The resulting factorization may be used repeatedly to solve the required linear systems once it has been obtained. If instead you decide to use an iterative method, the accuracy of the solutions must be commensurate with the convergence tolerance used for the Arnoldi iteration: a slightly more stringent tolerance is needed relative to the desired accuracy of the eigenvalue calculation.
The main drawback with using the shift-invert spectral transformation is that the coefficient matrix A − σB is typically indefinite in the Hermitian case and has zero-valued eigenvalues in the non-Hermitian case. These are often the most difficult situations for iterative methods and also for sparse direct methods.
The decision to use a spectral transformation on a standard eigenvalue problem, B = I, or to use one of the simple modes is problem dependent. The simple modes have the advantage that you only need to supply a matrix-vector product Av. However, this approach is usually only successful for problems where extremal non-clustered eigenvalues are sought. In non-Hermitian problems, extremal means eigenvalues near the boundary of the spectrum of A. For Hermitian problems, extremal means eigenvalues at the left- or right-hand end points of the spectrum of A. The notion of non-clustered (or well separated) is difficult to define without going into considerable detail. A simplistic notion of a well-separated eigenvalue λ_j for a Hermitian problem would be |λ_i − λ_j| > τ |λ_n − λ_1| for all i ≠ j, with τ ≫ ε, where λ_1 and λ_n are the smallest and largest eigenvalues algebraically. Unless a matrix-vector product is quite difficult to code or extremely expensive computationally, it is probably worth trying to use the simple mode first if you are seeking extremal eigenvalues.
The remainder of this section discusses additional transformations that may be applied to convert a generalized eigenproblem to a standard eigenproblem. These are appropriate when B is well-conditioned (Hermitian or non-Hermitian).
B is Hermitian positive definite
If B is Hermitian positive definite and well-conditioned (i.e., ‖B‖ ‖B^-1‖ is of modest size), then computing the Cholesky factorization B = L L^H and converting the generalized problem A x = λ B x to

(L^-1 A L^-H) y = λ y,   where L^H x = y,

provides a transformation to a standard eigenvalue problem. In this case, a request for a matrix-vector product would be satisfied with the following three steps:
(i) Solve L^H z = v for z.
(ii) Matrix-vector multiply z ← A z.
(iii) Solve L w = z for w.
Upon convergence, a computed eigenvector y for L^-1 A L^-H is converted to an eigenvector x of the original problem by solving the triangular system L^H x = y. This transformation is most appropriate when A is Hermitian, B is Hermitian positive definite and extremal eigenvalues are sought. This is because when A is Hermitian, so is L^-1 A L^-H.
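The three steps can be sketched directly in NumPy (an illustration, not the NAG interface), together with the recovery of x from y:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 6
M = rng.standard_normal((n, n))
A = M + M.T                   # Hermitian (real symmetric) A
N = rng.standard_normal((n, n))
B = N @ N.T + n * np.eye(n)   # Hermitian positive definite B
L = np.linalg.cholesky(B)     # B = L L^H (here L L^T)

def apply_op(v):
    """One matrix-vector product with L^-1 A L^-H, via the three steps."""
    z = np.linalg.solve(L.T, v)   # (i)   solve L^H z = v
    z = A @ z                     # (ii)  z <- A z
    return np.linalg.solve(L, z)  # (iii) solve L w = z

Linv = np.linalg.inv(L)
C = Linv @ A @ Linv.T
v = rng.standard_normal(n)
assert np.allclose(apply_op(v), C @ v)

# Upon convergence, x is recovered from an eigenvector y of C by L^H x = y.
w, Y = np.linalg.eigh(C)
x = np.linalg.solve(L.T, Y[:, 0])
assert np.allclose(A @ x, w[0] * (B @ x))   # A x = lambda B x
```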
If A is Hermitian positive definite and the smallest eigenvalues are sought, then it would be best to reverse the roles of A and B in the above description and ask for the largest algebraic eigenvalues or those of largest magnitude. Upon convergence, a computed eigenvalue λ̂ would then be converted to an eigenvalue of the original problem by the relation λ ← 1/λ̂.
B is not Hermitian positive semidefinite
If neither A nor B is Hermitian positive semidefinite, then a direct transformation to standard form is required. One simple way to obtain a direct transformation of the generalized problem A x = λ B x to a standard eigenvalue problem C x = λ x is to multiply on the left by B^-1, which results in C = B^-1 A. Of course, you should not perform this transformation explicitly, since it will most likely convert a sparse problem into a dense one. If possible, you should obtain a direct factorization of B, and when a matrix-vector product involving C is called for, it may be accomplished with the following two steps:
(i) Matrix-vector multiply z ← A v.
(ii) Solve B w = z for w.
Several problem-dependent issues may modify this strategy. If B is singular, or if you are interested in eigenvalues near a point σ, then you may choose to work with C ≡ (A − σB)^-1 B, but without using the B-inner products discussed previously. In this case you will have to transform the converged eigenvalues of C to eigenvalues of the original problem.
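A sparse sketch of the two steps (SciPy's splu plays the role of the direct factorization of B, computed once and reused for every product):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(6)
n = 100
A = (sp.random(n, n, density=0.05, random_state=7) + sp.eye(n)).tocsc()
B = sp.diags(rng.uniform(1.0, 2.0, n), format="csc")   # nonsingular B

lu = spla.splu(B)        # factor B once; reuse for every product with C

def apply_C(v):
    """One matrix-vector product with C = B^-1 A, via the two steps."""
    z = A @ v            # (i)  matrix-vector multiply z <- A v
    return lu.solve(z)   # (ii) solve B w = z

v = rng.standard_normal(n)
assert np.allclose(apply_C(v), spla.spsolve(B, A @ v))
```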
Reverse Communication and Shift-invert Modes
The reverse communication interface function for real nonsymmetric problems is nag_sparseig_real_iter (f12ab); for complex problems it is nag_sparseig_complex_iter (f12ap); and for real symmetric problems it is nag_sparseig_real_symm_iter (f12fb). First the reverse communication loop structure will be described, and then the details and nuances of the problem setup will be discussed. We use the symbol OP for the operator that is applied to vectors in the Arnoldi/Lanczos process, and B will stand for the matrix to use in the weighted inner product described previously. For the shift-invert spectral transformation mode, OP denotes (A − σB)^-1 B.
The basic idea is to set up a loop that repeatedly calls one of nag_sparseig_real_iter (f12ab), nag_sparseig_complex_iter (f12ap) or nag_sparseig_real_symm_iter (f12fb). On each return, you must either apply OP or B to a specified vector, or exit the loop, depending upon the value returned in the reverse communication parameter irevcm.
Shift and invert on a generalized eigenproblem
The example program in Section [Example] illustrates the reverse communication loop for nag_sparseig_real_iter (f12ab) in shift-invert mode for a generalized nonsymmetric eigenvalue problem. This loop structure will be identical for the symmetric problem calling nag_sparseig_real_symm_iter (f12fb), and also for the complex arithmetic function nag_sparseig_complex_iter (f12ap).
In the example, the matrix B is assumed to be symmetric and positive semidefinite. In the loop structure, you will have to supply a function to obtain a matrix factorization of (A − σB) that may repeatedly be used to solve linear systems. Moreover, a function needs to be provided to perform the matrix-vector product z = Bv, and a function is required to solve linear systems of the form (A − σB) w = z as needed, using the previously computed factorization.
When convergence has taken place (indicated by irevcm = 5 and ifail = 0), the reverse communication loop will be exited. Then, post-processing using the relevant function from nag_sparseig_real_proc (f12ac), nag_sparseig_complex_proc (f12aq) or nag_sparseig_real_symm_proc (f12fc) must be done to recover the eigenvalues and corresponding eigenvectors of the original problem. When operating in shift-invert mode, the eigenvalue selection option is normally set to Largest Magnitude. The post-processing function is then used to convert the converged eigenvalues of OP to eigenvalues of the original problem. Also, when B is singular or ill-conditioned, the post-processing function takes steps to purify the eigenvectors and rid them of numerical corruption from eigenvectors corresponding to near-infinite eigenvalues.
These procedures are performed automatically when operating in any one of the computational modes described above and later in this section.
You may wish to construct alternative computational modes using spectral transformations that are not addressed by any of the modes specified in this chapter. The reverse communication interface will easily accommodate these modifications. However, it will most likely be necessary to construct explicit transformations of the eigenvalues of OP to eigenvalues of the original problem in these cases.
Using the computational modes
The problem set up is similar for all of the available computational modes. In the previous section, a detailed description of the reverse communication loop for a specific mode (Shift-invert for a
Generalized Problem) was given. To use this or any of the other modes listed below, you are strongly urged to modify one of the example programs.
The first thing to decide is whether the problem will require a spectral transformation. If the problem is generalized, B ≠ I, then a spectral transformation will be required (see Section [Shift and Invert Spectral Transformations]). Such a transformation will most likely be needed for a standard problem if the desired eigenvalues are in the interior of the spectrum, or if they are clustered at the desired part of the spectrum.
Once this decision has been made and OP has been specified, an efficient means to implement the action of the operator OP on a vector must be devised. The expense of applying OP to a vector will of course have a direct impact on performance.
Shift-invert spectral transformations may be implemented with or without the use of a weighted B-inner product. The relation between the eigenvalues of OP and the eigenvalues of the original problem must also be understood in order to choose the appropriate eigenvalue selection option (e.g., Largest Magnitude) to recover eigenvalues of interest for the original problem. You must specify the number of eigenvalues to compute, which eigenvalues are of interest, the number of basis vectors to use, and whether or not the problem is standard or generalized. These items are controlled by setting options via the option setting function.
Setting the number of eigenvalues nev and the number of basis vectors ncv (in the setup function) for optimal performance is very much problem dependent. If possible, it is best to avoid setting nev in a way that will split clusters of eigenvalues. As a rule of thumb, ncv ≥ 2 × nev is reasonable. There are trade-offs due to the cost of the user-supplied matrix-vector products and the cost of the implicit restart mechanism. If the user-supplied matrix-vector product is relatively cheap, then a smaller value of ncv may lead to more user matrix-vector products and implicit Arnoldi iterations but an overall decrease in computation time. Convergence behaviour can be quite different depending on which of the spectrum options (e.g., Largest Magnitude) is chosen. The Arnoldi process tends to converge most rapidly to extreme points of the spectrum. Implicit restarting can be effective in focusing on and isolating a selected set of eigenvalues near these extremes. In principle, implicit restarting could isolate eigenvalues in the interior, but in practice this is difficult and usually unsuccessful. If you are interested in eigenvalues near a point that is in the interior of the spectrum, a shift-invert strategy is usually required for reasonable convergence.
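To make the nev/ncv and spectrum-option trade-offs concrete, here is a hedged sketch using SciPy's ARPACK wrapper (not the NAG interface; the diagonal test matrix is invented). Its k, ncv and which arguments play the roles of nev, ncv and the spectrum options; extreme eigenvalues converge with a small basis following the rule of thumb, while interior eigenvalues are obtained through a shift σ, for which the library factorizes A − σI internally:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
A = sp.diags([np.arange(1.0, n + 1.0)], [0], format="csc")  # eigenvalues 1..n

k = 4                              # plays the role of nev
# Extreme (rightmost) eigenvalues, with the rule-of-thumb basis ncv = 2*nev:
ext, _ = spla.eigsh(A, k=k, which="LA", ncv=2 * k)

# Interior eigenvalues near 100.2: plain Lanczos would struggle, so use
# shift-invert (the library factorizes A - sigma*I behind the scenes).
interior, _ = spla.eigsh(A, k=k, sigma=100.2, which="LM")
print(np.sort(ext), np.sort(interior))
```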
An integer argument serves as the reverse communication flag that specifies a requested action on return from one of the solver functions nag_sparseig_real_iter (f12ab), nag_sparseig_complex_iter (f12ap) and nag_sparseig_real_symm_iter (f12fb). Options specify whether this is a standard or generalized eigenvalue problem. The dimension of the problem is specified on the call to the initialization function only; this value, together with the number of eigenvalues and the dimension of the basis vectors, is passed through the communication array. There are a number of spectrum options which specify the eigenvalues to be computed; these options differ depending on whether a Hermitian or non-Hermitian eigenvalue problem is to be solved. For example, the Both Ends option is specific to Hermitian (symmetric) problems while the Largest Imaginary option is specific to non-Hermitian eigenvalue problems (see Section [Description of the Optional Parameters]). The specification of the problem type will be described separately, but the reverse communication interface and loop structure are the same for each problem type and for each of the basic modes: Regular, Regular Inverse and Shifted Inverse (Shifted Inverse Real and Shifted Inverse Imaginary for real nonsymmetric problems). There are some additional specialised modes for symmetric problems (Buckling and Cayley, see Table 2), and for real nonsymmetric problems with complex shifts applied in real arithmetic. You are encouraged to examine the documented example programs for these modes.
A tolerance option specifies the accuracy requested. If you wish to supply shifts for implicit restarting then the Supplied Shifts option must be selected; otherwise the default Exact Shifts strategy will be used. The Supplied Shifts option should only be used when you have a great deal of knowledge about the spectrum and about the implicitly restarted Arnoldi method and its underlying theory. The Iteration Limit option should be set to the maximum number of implicit restarts allowed. The cost of an implicit restart step (major iteration) is of the order of 4n(ncv − nev) floating-point operations for the dense matrix operations, plus ncv − nev matrix-vector products w ← Av with the matrix A.
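The matrix-vector product count above can be observed directly. The sketch below (SciPy's ARPACK wrapper, with an invented matrix; the counter is an illustration, not the NAG monitoring facility) wraps the user product in a counting closure, morally the number of times a reverse communication loop would return asking for w ← Av:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 300
A = sp.diags([np.arange(1.0, n + 1.0)], [0], format="csc")

calls = {"matvec": 0}

def counted_matvec(v):
    # This is the "user-supplied" product w <- A v; count each request.
    calls["matvec"] += 1
    return A @ v

op = spla.LinearOperator((n, n), matvec=counted_matvec, dtype=np.float64)
vals, _ = spla.eigsh(op, k=4, which="LA", ncv=20)
print(np.sort(vals), calls["matvec"], "matrix-vector products")
```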
The choice of computational mode through the option setting function is very important. The legitimate computational mode options available differ with each problem type and are listed below for each
of them.
Computational modes for real symmetric problems
The reverse communication interface function for symmetric eigenvalue problems is
nag_sparseig_real_symm_iter (f12fb)
. The option for selecting the region of the spectrum of interest can be one of those listed in
Table 1
┃ Largest Magnitude │ The eigenvalues of greatest magnitude ┃
┃ Largest Algebraic │ The eigenvalues of largest algebraic value (rightmost) ┃
┃ Smallest Magnitude │ The eigenvalues of least magnitude. ┃
┃ Smallest Algebraic │ The eigenvalues of smallest algebraic value (leftmost) ┃
┃ Both Ends │ The eigenvalues from both ends of the algebraic spectrum ┃
Table 2 lists the spectral transformation options for symmetric eigenvalue problems together with the specification of OP and B for each mode and the problem type option setting.
┃ Problem Type │ Mode │ Problem │ OP │ B ┃
┃ Standard │ Regular │ Ax = λx │ A │ I ┃
┃ Standard │ Shifted Inverse │ Ax = λx │ (A − σI)^(−1) │ I ┃
┃ Generalized │ Regular Inverse │ Ax = λBx │ B^(−1)A │ B ┃
┃ Generalized │ Shifted Inverse │ Ax = λBx │ (A − σB)^(−1)B │ B ┃
┃ Generalized │ Buckling │ Kx = λK_G x │ (K − σK_G)^(−1)K │ K ┃
┃ Generalized │ Cayley │ Ax = λBx │ (A − σB)^(−1)(A + σB) │ B ┃
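The rows of Table 2 map naturally onto SciPy's symmetric ARPACK driver, where M plays B, sigma plays σ, and there are even mode options named after the Buckling and Cayley rows. As a hedged sketch (invented matrices, SciPy rather than the NAG interface), the generalized Shifted Inverse row looks like this:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Generalized symmetric problem A x = lambda B x; invented matrices:
# A = diag(1..n) and B = 2*I, so the exact pencil eigenvalues are j/2.
n = 100
A = sp.diags([np.arange(1.0, n + 1.0)], [0], format="csc")
B = 2.0 * sp.identity(n, format="csc")

# "Shifted Inverse" row of Table 2: OP = (A - sigma*B)^(-1) B with a
# weighted B-inner product; in SciPy this is eigsh(..., M=B, sigma=...).
vals, _ = spla.eigsh(A, k=3, M=B, sigma=10.1, which="LM")
print(np.sort(vals))              # the three pencil eigenvalues nearest 10.1
```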
Computational modes for non-Hermitian problems
If A is a general non-Hermitian matrix and B is Hermitian and positive semidefinite, then the selection of the eigenvalues is controlled by the choice of one of the options in Table 3.
┃ Largest Magnitude │ The eigenvalues of greatest magnitude ┃
┃ Smallest Magnitude │ The eigenvalues of least magnitude ┃
┃ Largest Real │ The eigenvalues with largest real part ┃
┃ Smallest Real │ The eigenvalues with smallest real part ┃
┃ Largest Imaginary │ The eigenvalues with largest imaginary part ┃
┃ Smallest Imaginary │ The eigenvalues with smallest imaginary part ┃
Table 4 lists the spectral transformation options for real nonsymmetric eigenvalue problems together with the specification of OP and B for each mode and the problem type option setting. The equivalent listing for complex non-Hermitian eigenvalue problems is given in Table 5.
┃ Problem Type │ Mode │ Problem │ OP │ B ┃
┃ Standard │ Regular │ Ax = λx │ A │ I ┃
┃ Standard │ Shifted Inverse Real │ Ax = λx │ (A − σI)^(−1) │ I ┃
┃ Generalized │ Regular Inverse │ Ax = λBx │ B^(−1)A │ B ┃
┃ Generalized │ Shifted Inverse Real with real σ │ Ax = λBx │ (A − σB)^(−1)B │ B ┃
┃ Generalized │ Shifted Inverse Real with complex σ │ Ax = λBx │ real{(A − σB)^(−1)B} │ B ┃
┃ Generalized │ Shifted Inverse Imaginary with complex σ │ Ax = λBx │ imag{(A − σB)^(−1)B} │ B ┃
Note that there are two shifted inverse modes with complex shifts in Table 4. Since σ is complex, these both require the factorization of the matrix A − σB in complex arithmetic even though, in the case of real nonsymmetric problems, both A and B are real. The only advantage of using these modes for real nonsymmetric problems, instead of using the equivalent suite for complex problems, is that all of the internal operations in the Arnoldi process are executed in real arithmetic. This results in a factor of two saving in storage and a factor of four saving in computational cost. There is additional post-processing, somewhat more complicated than in the other modes, in order to obtain the eigenvalues and eigenvectors of the original problem. These modes are only recommended if storage is extremely critical.
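SciPy's ARPACK wrapper exposes the same device through its OPpart argument, which selects real{·} or imag{·} of the complex-shifted inverse while keeping the Arnoldi vectors real. A hedged sketch follows (an invented real matrix built from 2×2 rotation-like blocks, so its eigenvalues are known conjugate pairs; SciPy rather than the NAG interface):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Invented real matrix with complex conjugate eigenvalue pairs a +/- b*i.
pairs = [(1.0, 2.0), (3.0, 4.0), (5.0, 1.0), (7.0, 2.0)]
blocks = [np.array([[a, b], [-b, a]]) for a, b in pairs]
A = sp.block_diag(blocks, format="csc")   # stored entirely in real arithmetic

# Complex shift near 1 + 2i. OPpart="r" asks ARPACK to iterate with
# real{(A - sigma*I)^(-1)}, keeping the Arnoldi basis real, and the
# complex eigenvalues are recovered in post-processing, the same idea
# as the Shifted Inverse Real mode with complex sigma described above.
vals, _ = spla.eigs(A, k=2, sigma=1.1 + 2.1j, OPpart="r")
print(np.sort_complex(vals))              # the conjugate pair 1 +/- 2i
```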
┃ Problem Type │ Mode │ Problem │ OP │ B ┃
┃ Standard │ Regular │ Ax = λx │ A │ I ┃
┃ Standard │ Shifted Inverse │ Ax = λx │ (A − σI)^(−1) │ I ┃
┃ Generalized │ Regular Inverse │ Ax = λBx │ B^(−1)A │ B ┃
┃ Generalized │ Shifted Inverse │ Ax = λBx │ (A − σB)^(−1)B │ B ┃
Post processing
On the final successful return from a reverse communication function, the corresponding post-processing function must be called to obtain the eigenvalues of the original problem and, if desired, the corresponding eigenvectors. In the case of the Shifted Inverse modes for generalized problems, there are some subtleties to recovering eigenvectors when B is ill-conditioned. This process is called eigenvector purification; it prevents eigenvectors from being corrupted with noise due to the presence of eigenvectors corresponding to nearly infinite eigenvalues. These operations are completely transparent to you, and there is negligible additional cost to obtain eigenvectors. An orthonormal (Arnoldi/Lanczos) basis is always computed. The approximate eigenvalues of the original problem are returned in ascending algebraic order. An option relevant to each post-processing function may be set to values that determine whether only eigenvalues are desired or whether corresponding eigenvectors and/or Schur vectors are also required. The value of the shift σ used in spectral transformations must be passed to the post-processing function through the appropriately named argument(s). The eigenvectors returned are normalized to have unit length with respect to the semi-inner product that was used. Thus, if B = I they will have unit length in the standard norm; in general, a computed eigenvector x will satisfy x^H B x = 1.
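The normalization convention can be checked numerically. In the hedged sketch below (SciPy, invented matrices), the generalized symmetric driver returns eigenvectors normalized in the B-inner product, and the explicit renormalization simply makes the convention x^T B x = 1 visible:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Invented generalized problem A x = lambda B x with B = 4*I,
# so the exact pencil eigenvalues are j/4.
n = 50
A = sp.diags([np.arange(1.0, n + 1.0)], [0], format="csc")
B = 4.0 * sp.identity(n, format="csc")

vals, vecs = spla.eigsh(A, k=3, M=B, sigma=5.1, which="LM")

x = vecs[:, 0]
x = x / np.sqrt(x @ (B @ x))      # enforce the convention x^T B x = 1
bnorm = float(x @ (B @ x))
print(np.sort(vals), bnorm)       # pencil eigenvalues near 5.1; bnorm is 1
```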
Solution monitoring and printing
The option setting function for each suite allows the setting of three options that control solution printing and the monitoring of the iterative and post-processing stages: two output-channel options and Print Level. By default, no solution monitoring or printing is performed. One output option controls where solution details are printed; the other controls where monitoring details are to be printed and is mainly used for debugging purposes; the Print Level option controls the amount of detail to be printed (see the individual option setting function documents for the specification of each print level). The values passed to the two output options can be the same, but it is recommended that the two sets of information be kept separate. Note that the monitoring information can become very voluminous for the highest settings of Print Level.
Functionality Index
Standard or generalized eigenvalue problems for complex matrices,
    banded matrices,
        initialize problem and method nag_sparseig_complex_band_init (f12at)
        selected eigenvalues, eigenvectors and/or Schur vectors nag_sparseig_complex_band_solve (f12au)
    general matrices,
        initialize problem and method nag_sparseig_complex_init (f12an)
        option setting nag_sparseig_complex_option (f12ar)
        reverse communication implicitly restarted Arnoldi method nag_sparseig_complex_iter (f12ap)
        reverse communication monitoring nag_sparseig_complex_monit (f12as)
        selected eigenvalues, eigenvectors and/or Schur vectors of original problem nag_sparseig_complex_proc (f12aq)
Standard or generalized eigenvalue problems for real nonsymmetric matrices,
    banded matrices,
        initialize problem and method nag_sparseig_real_band_init (f12af)
        selected eigenvalues, eigenvectors and/or Schur vectors nag_sparseig_real_band_solve (f12ag)
    general matrices,
        initialize problem and method nag_sparseig_real_init (f12aa)
        option setting nag_sparseig_real_option (f12ad)
        reverse communication implicitly restarted Arnoldi method nag_sparseig_real_iter (f12ab)
        reverse communication monitoring nag_sparseig_real_monit (f12ae)
        selected eigenvalues, eigenvectors and/or Schur vectors of original problem nag_sparseig_real_proc (f12ac)
Standard or generalized eigenvalue problems for real symmetric matrices,
    banded matrices,
        initialize problem and method nag_sparseig_real_symm_band_init (f12ff)
        selected eigenvalues, eigenvectors and/or Schur vectors nag_sparseig_real_symm_band_solve (f12fg)
    general matrices,
        initialize problem and method nag_sparseig_real_symm_init (f12fa)
        option setting nag_sparseig_real_symm_option (f12fd)
        reverse communication implicitly restarted Arnoldi (Lanczos) method nag_sparseig_real_symm_iter (f12fb)
        reverse communication monitoring nag_sparseig_real_symm_monit (f12fe)
        selected eigenvalues and eigenvectors and/or Schur vectors of original problem nag_sparseig_real_symm_proc (f12fc)
References
Barrett R, Berry M, Chan T F, Demmel J, Donato J, Dongarra J, Eijkhout V, Pozo R, Romine C and Van der Vorst H (1994) Templates for the Solution of Linear Systems: Building Blocks for Iterative
Methods SIAM, Philadelphia
Lehoucq R B (1995) Analysis and implementation of an implicitly restarted iteration PhD Thesis Rice University, Houston, Texas
Lehoucq R B (2001) Implicitly restarted Arnoldi methods and subspace iteration SIAM Journal on Matrix Analysis and Applications 23 551–562
Lehoucq R B and Scott J A (1996) An evaluation of software for computing eigenvalues of sparse nonsymmetric matrices Preprint MCS-P547-1195 Argonne National Laboratory
Lehoucq R B and Sorensen D C (1996) Deflation techniques for an implicitly restarted Arnoldi iteration SIAM Journal on Matrix Analysis and Applications 17 789–821
Lehoucq R B, Sorensen D C and Yang C (1998) ARPACK Users' Guide: Solution of Large-scale Eigenvalue Problems with Implicitly Restarted Arnoldi Methods SIAM, Philadelphia
Saad Y (1992) Numerical Methods for Large Eigenvalue Problems Manchester University Press, Manchester, UK
© The Numerical Algorithms Group Ltd, Oxford, UK. 2009–2013
deformation of stable curve
Let $R$ be a DVR, and $k$ the residue field of $R$. We suppose that $X_{0}$ is a stable curve over Spec $k$.
Does there exist a stable model $X$ over $R$ such that the special fiber is isomorphic to $X_{0}$?
If we assume $R=\mathbb{C}[[t]]$, where $\mathbb{C}$ is the field of complex numbers, how does one find a deformation which makes the generic fiber smooth?
If by a stable model you mean a flat family of stable curves, then the answer is "yes" as stable curves have unobstructed deformations (and all formal deformations are algebraic). Probably you can
also make the generic fibre smooth by choosing the deformations carefully... – anon Jan 24 '13 at 10:57
@anon: Unfortunately the OP did not specify that his DVR is complete (or even just Henselian). Otherwise your comment is perfectly correct. – Jason Starr Jan 24 '13 at 11:25
Ah, yes, I missed the potential non-completeness, thanks! Is the claim clear in the henselian case without excellence assumption? – anon Jan 24 '13 at 12:11
@anon: I think the claim is okay in the Henselian case even without excellence (but I need to think it through). – Jason Starr Jan 24 '13 at 13:24
@kiseki: Choose a pluri-canonical projective embedding $j:X_0\hookrightarrow \mathbf{P}^n_k$, and consider the Hilbert scheme $H$ of $\mathbf{P}^n$ over $R$. By the proof of smoothness of the
moduli stack of stable curves (of arithmetic genus $\ge 2$), $H$ is smooth near the point $x_0 \in H(k)$ corresponding to $(X_0,j)$ and stability holds for the universal curve over an open
neighborhood of $x_0$. An open $U$ in $H$ around $x_0$ is etale over an affine space, so by approximating an $\widehat{R}$-point of that with an $R$-point we win when R is henselian. – user30180
Jan 24 '13 at 14:32
1 Answer
The comments above fully answer the OP's question. I am simply collecting some of these into an answer.
First, the answer to the original question is "no" if one does not impose some additional hypothesis on $R$ such as being complete or Henselian. As with many similar such questions, one
negative answer comes from the Harris-Mumford(-Eisenbud) theorem that $\overline{M}_g$ is non-uniruled for $g\geq 23$. If $X_0$ is a stable, genus $g$ curve that is reducible with a single
node $p$, say $(X',p') \cup (X'',p'')$ where $p$ is identified with the point $p'$ in the first irreducible component $X'$ and with $p''$ in the second irreducible component $X''$, and if $
(X',p')$ and $(X'',p'')$ are sufficiently general pointed curves, then there is no deformation to a smooth curve over the (non-complete, non-Henselian) DVR $R=\mathbb{C}[t]_{\langle t \
rangle}$. If there were, this would give a rational curve in $\overline{M}_g$ that intersects a general point of a boundary divisor. This would imply that a general point of the "interior"
is also contained in a rational curve, contradicting the Harris-Mumford(-Eisenbud) theorem.
On the other hand, if $R$ is complete, or just Henselian, then there does exist a deformation. It is clear from the comments that the OP is looking for a very explicit formulation of this result. Here is one such formulation. Every proper curve is projective, and for stable curves, there is even an explicit tensor power of the dualizing sheaf that is very ample. Thus, assume that $X_0$ is given as a closed curve in some projective space $\mathbb{P}^n$. Up to re-embedding by a $2$-uple Veronese embedding (only necessary in positive characteristic), a sufficiently general pencil of hyperplane sections is "Lefschetz". More precisely, for a sufficiently general codimension $2$ linear subspace $L \subset \mathbb{P}^n$ that is disjoint from $X_0$, for the associated linear projection $$\pi_L:(\mathbb{P}^n\setminus L) \to \mathbb{P}^1,$$ the restriction of $\pi_L$ to $X_0$, $$\pi:X_0\to \mathbb{P}^1,$$ has sheaf of relative differentials $\Omega_\pi$ that is the pushforward to $X_0$ of an invertible sheaf from an effective Cartier divisor $D$ of $X_0$ with (a) no two distinct points of $D$ are contained in a common fiber of $\pi$, (b) the length of $D$ at every double point of $X_0$ equals $2$, and (c) for every smooth point of $X_0$ contained in $D$, the length of $D$ equals $1$.
The branch divisor of $\pi$ is, by definition, $\pi_*D$: an effective Cartier divisor in $\mathbb{P}^1$ that has length $2$ at the image of every double point of $X_0$ and has length $1$ at
the image of every other point of $D$. By the analysis in Stable Maps and Branch Divisors of B. Fantechi and R. Pandharipande (the map $\pi$ is a "stable map"), for every formal deformation
of the divisor $\pi_* D$ in $\mathbb{P}^1$, there exists a unique formal deformation of the stable map $(X_0,\pi:X_0\to \mathbb{P}^1)$ to $\mathbb{P}^1$ such that the associated branch
divisor of the deformation equals the deformation of the branch divisor. In particular, choosing a deformation of $\pi_*D$ to a reduced divisor in $\mathbb{P}^1$ gives a formal deformation
of $X_0$ to a smooth, stable curve.
Mplus Discussion >> EFA positive definite
david posted on Tuesday, January 22, 2013 - 6:04 am
May I ask for some guidance on an EFA I am doing using dichotomous variables. The correlation matrix is non-positive definite (observed in R). However, the model runs in Mplus (dependent on rotation choice and number of factors).
I assume that Mplus calculates pairwise tetrachoric correlations for the data. Does Mplus apply any smoothing method to the correlation matrix to allow the model to fit? If so, do the model fit statistics need to be adjusted?
Is there a reason why ML cannot be used in this EFA, or is WLSMV preferable? It would be great if you could point me to a reference for either of these.
thanks for any help, david
Linda K. Muthen posted on Tuesday, January 22, 2013 - 9:29 am
Yes, a pairwise matrix is used. There is no smoothing.
Maximum likelihood can be used if there are not too many factors. The issue with maximum likelihood and categorical variables is that numerical integration is required, one dimension for each factor.
More than four dimensions can be computationally unrealistic.
david posted on Monday, February 11, 2013 - 12:56 am
Thank you for your reply.
I also have an issue with zero cells. Is there a facility to add a continuity correction when calculating the tetrachorics? If not, I have seen previous posts where you recommend removing one of the variables; do you think this is necessary when only one table has a zero cell out of, say, 465 correlations?
thanks, david
Linda K. Muthen posted on Monday, February 11, 2013 - 6:42 am
For binary variables, a zero cell implies a correlation of one, so both variables should not be used in the analysis. For polytomous items, some zero cells should be okay. Mplus automatically adds a constant to the frequency of a zero cell. See the ADDFREQUENCY option.
david posted on Monday, February 11, 2013 - 7:40 am
Thanks for your help, Dr Muthen.
So the default is to add 0.5/(sample size) to the zero cell. I had been wondering why I wasn't seeing a correlation of -1 between the relevant variables. With this added, the correlation between the two variables is not equal to 1 (or -1).
But I am a bit confused. Why do you advocate removing the variables when the continuity correction has removed some of the bias that tetrachorics have to tend towards -1 when zero cells are present?
thanks, david
Linda K. Muthen posted on Monday, February 11, 2013 - 8:06 am
You don't see a correlation of plus/minus one; that is the problem. The implied correlation is not what is estimated. The addition of a constant does not necessarily work well, particularly in the case of binary items. With binary items, we recommend using only one of the items when they correlate one, just as we would recommend if two continuous variables correlated one.
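To see concretely why a zero cell drives a tetrachoric correlation to plus/minus one, and what adding a small frequency does, here is a rough Python sketch of the textbook tetrachoric definition (an illustration only, not Mplus's actual estimator; the tables are invented). The estimate is the rho at which a bivariate normal, cut at thresholds matching the margins, reproduces the observed (1,1) cell:

```python
import numpy as np
from scipy import stats, optimize, integrate

def bvn_cdf(a, b, rho):
    # P(X <= a, Y <= b) for a standard bivariate normal with correlation
    # rho, integrating the conditional law Y | X=x ~ N(rho*x, 1 - rho^2).
    s = np.sqrt(1.0 - rho * rho)
    f = lambda x: stats.norm.pdf(x) * stats.norm.cdf((b - rho * x) / s)
    return integrate.quad(f, -np.inf, a)[0]

def tetrachoric(n00, n01, n10, n11):
    # Cell counts: nXY = #(item1 = X, item2 = Y).
    n = n00 + n01 + n10 + n11
    tau_x = stats.norm.ppf((n00 + n01) / n)    # threshold for item 1
    tau_y = stats.norm.ppf((n00 + n10) / n)    # threshold for item 2

    def gap(rho):
        # P(X > tau_x, Y > tau_y) by inclusion-exclusion, minus the
        # observed (1,1) proportion; the root in rho is the estimate.
        p11 = (1.0 - stats.norm.cdf(tau_x) - stats.norm.cdf(tau_y)
               + bvn_cdf(tau_x, tau_y, rho))
        return p11 - n11 / n

    return optimize.brentq(gap, -0.999, 0.999)

r = tetrachoric(80, 20, 20, 80)        # clear association, no empty cell
# With n01 = 0 exactly, the root sits at rho = 1, outside the open
# interval, and the solver fails. Adding 0.5/n to the zero cell (the
# correction discussed in the thread) pulls the estimate inside (-1, 1).
r_fix = tetrachoric(50.0, 0.5 / 200.0, 30.0, 120.0)
print(round(r, 3), round(r_fix, 3))
```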
How can I access information stored after I run a command in Stata (returned results)?
Stata FAQ
In addition to the output shown in the results window, many of Stata's commands store information about the command and its results in memory. This allows the user, as well as other Stata
commands, to easily make use of this information. Stata calls these returned results. Returned results can be very useful when you want to use information produced by a Stata command to do something
else in Stata. For example, if you want to mean center a variable, you can use summarize to calculate the mean, then use the value of the mean calculated by summarize to center the variable. Using
returned results will eliminate the need to retype or cut and paste the value of the mean. Another example of how returned results can be useful is if you want to generate predicted values of the
outcome variable when the predictor variables are at a specific set of values, again here, you could retype the coefficients or use cut and paste, but returned results make the task much easier.
The best way to get a sense of how returned results work is to jump right in and start looking at and using them. The code below opens an example dataset and uses summarize (abbreviated sum) to
generate descriptive statistics for the variable read. This produces the expected output, but more importantly for our purposes, Stata now has results from the summarize command stored in memory. But
how do you know what information has been stored? A listing of the information saved by each command is included in the help file and/or printed manual, so I could look there, but I can also just
type return list, which will list all the returned results in memory.
use http://www.ats.ucla.edu/stat/stata/notes/hsb2, clear
sum read
Variable | Obs Mean Std. Dev. Min Max
read | 200 52.23 10.25294 28 76
return list
r(N) = 200
r(sum_w) = 200
r(mean) = 52.23
r(Var) = 105.1227135678392
r(sd) = 10.25293682648241
r(min) = 28
r(max) = 76
r(sum) = 10446
Above is a list of the returned results; as you can see, each result is of the form r(...), where the ellipsis ("...") stands for a short label. We could check the help file for the summarize command to find out what each item on the list is, but it is often easy to figure out which value is assigned to which result. For example, r(mean), not surprisingly, contains the mean of read (you can check this against the output), but others are not as obvious, for example r(sum_w); for these, you may need to consult the manual if you think you might want to use them. Most of the time the process will be relatively easy because you'll know what result you want to access: you will be looking at the list to find out what name it is stored under, rather than looking at the list and trying to figure out what each item is.
As you might imagine, different commands, and even the same command with different options, store different results. Below we summarize the variable read again, but add the detail option. Then we use
return list to get the list of returned results. Just as the detail option adds additional information to the output, it also results in additional information stored in the returned results. The new
list includes all of the information returned by the sum command above, plus skewness; kurtosis; and a number of percentiles, including the 1st ( r(p25) )and 3rd ( r(p75) ) quartiles and the median (
r(p50) ).
sum read, detail
reading score
Percentiles Smallest
1% 32.5 28
5% 36 31
10% 39 34 Obs 200
25% 44 34 Sum of Wgt. 200
50% 50 Mean 52.23
Largest Std. Dev. 10.25294
75% 60 73
90% 67 73 Variance 105.1227
95% 68 76 Skewness .1948373
99% 74.5 76 Kurtosis 2.363052
return list
r(N) = 200
r(sum_w) = 200
r(mean) = 52.23
r(Var) = 105.1227135678392
r(sd) = 10.25293682648241
r(skewness) = .1948372909440272
r(kurtosis) = 2.363051990033788
r(sum) = 10446
r(min) = 28
r(max) = 76
r(p1) = 32.5
r(p5) = 36
r(p10) = 39
r(p25) = 44
r(p50) = 50
r(p75) = 60
r(p90) = 67
r(p95) = 68
r(p99) = 74.5
Now that we have some sense of what results are returned by the summarize command, we can make use of the returned results. Following through with one of the examples mentioned above, we will mean
center the variable read. Assuming that the last command we ran was the summarize command above, the code below generates a new variable, c_read, that contains the mean-centered values of read. Notice that instead of using the actual value of the mean of read in this command, we used the name of the returned result (i.e., r(mean)); Stata knows when it sees r(mean) that we actually mean the
value stored in that system variable. On the next line we summarize the new variable c_read, while the mean is not exactly equal to zero, it is within rounding error of zero, so we know that we have
properly mean centered the variable read.
gen c_read = read - r(mean)
sum c_read
Variable | Obs Mean Std. Dev. Min Max
c_read | 200 2.18e-07 10.25294 -24.23 23.77
As the code above suggests, we can use returned results pretty much the same way we would use an actual number. This is because Stata uses the r(...) as a placeholder for a real value. For another
example of this, say that we want to calculate the variance of read from its standard deviation (ignoring the fact that summarize returns the variance in r(Var)). We can do this on the fly using the
display command as a calculator; the first command below does this. We can even check the result by cutting and pasting the value of the standard deviation from the output, which is done in the second command below. The results are basically the same; the very slight difference is rounding error because the stored estimate r(sd) contains more digits of accuracy than the value of the standard deviation displayed in the output.
display r(sd)^2
display 10.25294^2
Types of returned results, r-class and e-class
Now that you know a little about returned results and how they work you are ready for a little more information about them. Returned results come in two main types, r-class, and e-class (there are
also s-class and c-class results/variables, but we will not discuss them here). Commands that perform estimation, for example regressions of all types, factor analysis, and anova are e-class
commands. Other commands, for example summarize, correlate and post-estimation commands, are r-class commands. The distinction between r-class and e-class commands is important because Stata stores
results from e-class and r-class commands in different "places." This has two ramifications for you as a user. First, you need to know whether results are stored in r() or e() (as well as the name of
the result) in order to make use of them. If you're not sure which class a command you've run is in, you can either look it up in the help file, or "look" in one place (using the appropriate command
to list results), if the results are not stored there they are probably in the other. A potentially more important ramification of the difference in how results from r-class and e-class commands are
returned is that returned results are held in memory only until another command of the same class is run. That is, returned results from previous commands are replaced by subsequent commands of the
same class. In contrast, running a command of another class will not affect the returned results. For example, if I run a regression, and then a second regression, the results of the first regression
(stored in e()) are replaced by those for the second regression (also stored in e()). However, if instead of a second regression I ran a post-estimation command, the results from the regression
would remain in e() while the results from the post estimation command would be placed in r().
While there is a distinction between the two, the actual use of results from r-class and e-class commands is very similar. For starters, the commands are parallel: to list the r-class results stored in memory the command is return list; to do the same for e-class results it is ereturn list. Further, except for the difference in naming conventions (r() vs. e()), the results are accessed in the same way. The example below demonstrates this: first we regress write on female and read, and then use ereturn list to look at the returned results.
regress write female read
Source | SS df MS Number of obs = 200
-------------+------------------------------ F( 2, 197) = 77.21
Model | 7856.32118 2 3928.16059 Prob > F = 0.0000
Residual | 10022.5538 197 50.8759077 R-squared = 0.4394
-------------+------------------------------ Adj R-squared = 0.4337
Total | 17878.875 199 89.843593 Root MSE = 7.1327
write | Coef. Std. Err. t P>|t| [95% Conf. Interval]
female | 5.486894 1.014261 5.41 0.000 3.48669 7.487098
read | .5658869 .0493849 11.46 0.000 .468496 .6632778
_cons | 20.22837 2.713756 7.45 0.000 14.87663 25.58011
ereturn list
scalars:
 e(N) = 200
 e(df_m) = 2
 e(df_r) = 197
 e(F) = 77.21062421518363
 e(r2) = .4394192130387506
 e(rmse) = 7.132734938503835
 e(mss) = 7856.321182518186
 e(rss) = 10022.5538174818
 e(r2_a) = .4337280375366059
 e(ll) = -675.2152914029985
 e(ll_0) = -733.0934827146213

macros:
 e(cmdline) : "regress write female read"
 e(title) : "Linear regression"
 e(vce) : "ols"
 e(depvar) : "write"
 e(cmd) : "regress"
 e(properties) : "b V"
 e(predict) : "regres_p"
 e(model) : "ols"
 e(estat_cmd) : "regress_estat"

matrices:
 e(b) : 1 x 3
 e(V) : 3 x 3

functions:
 e(sample)
The list of returned results for regress includes several types of returned results listed under the headings scalars, macros, matrices and functions. We will discuss the types of returned results
below, but for now we will show how you can use the scalar returned results the same way that we used the returned results from summarize. For example, one way to calculate the variance of the errors
after a regression is to divide the residual sum of squares by the total degrees of freedom (i.e., n-1). The residual sum of squares is stored in e(rss), and the n for the analysis is stored in e(N). Below we use the display command as a calculator, along with the returned results, to calculate the variance of the errors.
display e(rss)/(e(N)-1)
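Since the returned scalars are just numbers, the same arithmetic can be checked outside Stata. A minimal Python sketch, with the values copied from the ereturn list output above:

```python
# Values taken from the ereturn list output above.
rss = 10022.5538174818   # e(rss): residual sum of squares
n = 200                  # e(N): number of observations

# Variance of the errors = RSS / (n - 1), mirroring the display command above.
error_variance = rss / (n - 1)
```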
How results are returned: Scalars, strings, matrices and functions
As mentioned above, for both r-class and e-class commands, there are multiple types of returned results, including scalars, strings, matrices, and functions. In the lists of returned results, each
type is listed under its own heading. The results listed under the heading "scalars" are just that: a single numeric value. Their usage is discussed above, so we won't say any more about them in this section.
Returned results listed under "macros" are generally strings that give information about the command that was run. For example, in the returned results for the regression shown above, e(cmdline)
contains the command the user issued (without any abbreviations). These are generally used in programming Stata.
Results listed under "matrices" are, as you would expect, matrices. While the lists of results returned by return list and ereturn list show you the values taken on by most of the returned results,
this is not practical with matrices, instead the dimensions of the matrices are listed. To see the contents of matrices you must display them using matrix commands. We do this below with the matrix
of coefficients (e(b)) using the command matrix list e(b). (Note that there is another way to access coefficients and their standard errors after you fit a model, this is discussed below.) If we
would like to perform matrix operations on returned matrices, or wish to access individual elements of the matrix, we can move the matrix stored as a returned result to a normal Stata matrix. This is
done in the final line of syntax below.
matrix list e(b)
female read _cons
y1 5.486894 .56588693 20.228368
matrix b = e(b)
Finally, the results returned under the heading "functions" contain functions that can be used in a manner similar to other Stata functions. The most common function returned by Stata estimation
commands is probably e(sample). This function marks the sample used in the estimation of the last analysis. This is useful because datasets often contain missing values, so not all cases in the dataset are used in a given analysis. Assuming that the last estimation command run was the regression of write on female and read shown above, the first line of code below uses e(sample) to find
the mean of read among those cases used in the model. The second line of code uses e(sample) to create a new variable called flag which is equal to 1 for cases that were used in the analysis, and
zero otherwise. (Note since the example dataset contains no missing data, all of the cases are included in the analysis, and flag is a constant equal to one.)
sum read if e(sample)==1
Variable | Obs Mean Std. Dev. Min Max
read | 200 52.23 10.25294 28 76
gen flag = e(sample)
Coefficients and their standard errors
As discussed above, after one fits a model, coefficients and their standard errors are stored in e() in matrix form. These matrices allow the user access to the coefficients, but Stata gives you an
even easier way to access this information by storing it in the system variables _b and _se. To access the value of a regression coefficient after a regression, all one needs to do is type _b
[varname] where varname is the name of the predictor variable whose coefficient you want to examine. To access the standard error, you can simply type _se[varname]. To access the coefficient and
standard error of the constant we use _b[_cons] and _se[_cons] respectively. Below we run the same regression model we ran above (omitting the output), using female and read to predict write. Once we
have estimated the model, we use the display command to show that the values in _b are equal to our regression coefficients. Finally, we calculate the predicted value of write when a female (female=
1) student has a read score of 52.
regress write female read
Source | SS df MS Number of obs = 200
-------------+------------------------------ F( 2, 197) = 77.21
Model | 7856.32118 2 3928.16059 Prob > F = 0.0000
Residual | 10022.5538 197 50.8759077 R-squared = 0.4394
-------------+------------------------------ Adj R-squared = 0.4337
Total | 17878.875 199 89.843593 Root MSE = 7.1327
write | Coef. Std. Err. t P>|t| [95% Conf. Interval]
female | 5.486894 1.014261 5.41 0.000 3.48669 7.487098
read | .5658869 .0493849 11.46 0.000 .468496 .6632778
_cons | 20.22837 2.713756 7.45 0.000 14.87663 25.58011
display _b[_cons]
display _b[female]
display _b[read]
display _b[_cons] + _b[female]*1 + _b[read]*52
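The prediction in the last display command can also be verified by hand; here Python stands in for Stata's display as a calculator, with the coefficients copied from the regression output above:

```python
# Coefficients from the regression output above.
b_cons = 20.228368    # _b[_cons]
b_female = 5.486894   # _b[female]
b_read = 0.5658869    # _b[read]

# Predicted write score for a female student (female = 1) with read = 52.
predicted = b_cons + b_female * 1 + b_read * 52
```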
The content of this web site should not be construed as an endorsement of any particular web site, book, or software product by the University of California.
|
{"url":"http://www.ats.ucla.edu/stat/stata/faq/returned_results.htm","timestamp":"2014-04-20T23:27:05Z","content_type":null,"content_length":"34666","record_id":"<urn:uuid:8ad2acf6-c8b2-4e9f-877b-053d6dc72751>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00657-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Divisors of an integer
up vote 0 down vote favorite
Say $M$ is the number of divisors of an integer. Is there a simple formula for the minimal integer $n$ so that the number of divisors of $n$ is $M$?
1 If $M$ is ordinary - yes (ordinary in sense defined here - researchgate.net/publication/…). I guess you found that already, but that's my 2¢. – Harun Šiljak Sep 4 '12 at 15:01
I guess the growth rate is all you could expect to say anything about. – i. m. soloveichik Sep 4 '12 at 15:14
3 oeis.org/A005179 - there are references to partial results – Julian Kuelshammer Sep 4 '12 at 15:21
Given n has prime factorization with exponents a,b,c..., M is (a+1)(b+1)((c+1)..., and the exponent bases are the first however many primes. You have an upper bound of 2^(M+1), which can be
optimized quickly. If M itself has k prime factors, you get (p_k)(kf) as a quick upper bound, where p_k is the kth smallest prime and f is the largest of the k prime factors. However, even a
greedy strategy may not yield the minimum, so you will still need to do some searching. Gerhard "Ask Me About System Design" Paseman, 2012.09.04 – Gerhard Paseman Sep 4 '12 at 15:28
Do you insist that the number of divisors is exactly $M$, or is at least $M$ what you are interested in? If you care about the exact count, note that this depends quite a bit on $M$ itself and not just its rough size (and this is what the comments refer to). To highlight something implicit in the other comments: if $M$ is prime, for example, it is clear that the only eligible $n$ are $(M-1)$th prime powers, and then clearly one needs to take a power of two. If you care just about at least $M$, this is a different question; then a keyword is "highly composite number". – quid Sep 4 '12 at 15:45
show 2 more comments
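For small $M$, the sequence Julian links (OEIS A005179) can be reproduced by brute force; a rough Python sketch (the function names are my own, and this is far too slow for large $M$):

```python
def num_divisors(n):
    """Count the divisors of n by trial division up to sqrt(n)."""
    count, i = 0, 1
    while i * i <= n:
        if n % i == 0:
            count += 1 if i * i == n else 2  # i and n//i, once if they coincide
        i += 1
    return count

def min_with_divisors(m):
    """Smallest n whose number of divisors is exactly m."""
    n = 1
    while num_divisors(n) != m:
        n += 1
    return n
```

Note the behavior quid's comment predicts: for prime $M = 5$ the minimum is forced to be $2^4 = 16$, which is larger than the minimum $12$ for $M = 6$, so the sequence is not monotone.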
Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook.
Browse other questions tagged nt.number-theory or ask your own question.
|
{"url":"http://mathoverflow.net/questions/106352/divisors-of-an-integer","timestamp":"2014-04-19T00:08:25Z","content_type":null,"content_length":"51372","record_id":"<urn:uuid:f5c6d5d8-8eee-4d35-907e-b9fa44873770>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00395-ip-10-147-4-33.ec2.internal.warc.gz"}
|
CTHULHU, that "friend" - Elder Sign
From quite far from Arkham, New England: Spain.
Hey pals!
Don't you think that combating Cthulhu isn't as hard as this Ancient One deserves?
I mean, it doesn't matter how many "points" of stamina or sanity you have: he attacks at the maximum level. So, if you have a standard 5-5 investigator, 4-4 after resolving Cthulhu's permanent effect, you can take Cthulhu's attacks 6 times before you die… I find it a bit disappointing; don't you?
One doubt: an investigator can try to complete a task any time he/she wants, but I assume that if you complete one part, those dice stay locked on that part of the task; I mean, you don't have all the dice available to try to complete the task again, do you? (Is my English clear enough?)
Lovecraft lovers: a hug from this side of the ocean
|
{"url":"http://community.fantasyflightgames.com/index.php?showtopic=63609","timestamp":"2014-04-18T20:54:37Z","content_type":null,"content_length":"82172","record_id":"<urn:uuid:15cb8436-8bb4-433c-b51b-3fa8e25f5628>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00513-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Occoquan Prealgebra Tutor
Find an Occoquan Prealgebra Tutor
I have been teaching for the last ten years. I would love to work one-on-one with students that really need my help. This way I can give them all the time and attention they need. I have a
bachelors degree in Math education and love making math fun and easy to understand.
5 Subjects: including prealgebra, geometry, algebra 1, algebra 2
...Even if you just need a little reminder of math you used to know, I'm happy to help you remember the fundamentals. I feel very strongly about helping students succeed in math because I believe a
true understanding of math can make many other subjects and future studies easier and more rewarding. I graduated from University of Virginia with a degree in economics and mathematics.
22 Subjects: including prealgebra, calculus, geometry, algebra 1
...I have been tutoring Anatomy and Physiology, Cell Structure and Function, General Biology, and Microbiology since 2012. Tutoring similar college level biology courses as listed are my
specialty. Prior to 2012, I have had varying other tutoring experiences such as algebra one and high school chemistry and writing.
9 Subjects: including prealgebra, writing, biology, ESL/ESOL
...I have remained on the Dean's List throughout my higher education years. I am excellent with children and adults. I am a swimming instructor, and by trait, am talented in getting children to
excel at things they didn't think they liked doing. I am a certified Life Guard and Water Safety Instructor through the American Red Cross.
17 Subjects: including prealgebra, ASVAB, algebra 2, elementary (k-6th)
...I have tutored Biology concepts to many students over the years. I have completed Pre Med studies and I know how to present the subject matter to my students. Special areas include (Genetics,
Mitosis, Meiosis, Organelles, Cellular Respiration, etc.). I have tutored chemistry to many students and I know how to effectively communicate the material to facilitate the learning process.
23 Subjects: including prealgebra, chemistry, calculus, geometry
|
{"url":"http://www.purplemath.com/Occoquan_Prealgebra_tutors.php","timestamp":"2014-04-17T13:07:56Z","content_type":null,"content_length":"24187","record_id":"<urn:uuid:497df8c4-66a0-4ef5-8d4c-1e380929ef04>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00267-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Kurt Gödel
Gödel, Kurt (göˈdəl), 1906–78, American mathematician and logician, b. Brünn (now Brno, Czech Republic), grad. Univ. of Vienna (Ph.D., 1930). He came to the United States in 1940 and was
naturalized in 1948. He was a member of the Institute for Advanced Study, Princeton, until 1953, when he became professor of mathematics at Princeton. He is best known for his work in mathematical
logic, particularly for his theorem (1931) stating that the various branches of mathematics are based in part on propositions that are not provable within the system itself, although they may be
proved by means of logical (metamathematical) systems external to mathematics. Gödel shared the 1951 Albert Einstein Award for achievement in the natural sciences with Julian Schwinger, Harvard
mathematical physicist. His writings include Foundations of Mathematics (1969).
See H. Wang, Reflections on Kurt Gödel (1987); E. Nagel et al., Gödel's Proof (rev. ed. 2001); R. Goldstein, The Proof and Paradox of Kurt Gödel (2005); P. Yourgrau, A World without Time: The
Forgotten Legacy of Gödel and Einstein (2005).
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
|
{"url":"http://www.factmonster.com/encyclopedia/people/godel-kurt.html","timestamp":"2014-04-18T21:49:39Z","content_type":null,"content_length":"20354","record_id":"<urn:uuid:057b7abc-c7e8-4e2e-aae6-74d9eee52ad8>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00036-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Homework 2
General Physics II, Spring '10
Homework 2 -- Due Friday, Feb. 5, 11:30 AM
Please don't forget that Alec and I are both available for help -- Alec in person after dinner Thursday night, and me by email during the day Thursday, and of course either of us any other time you
find us.
1. Project 2.8. Most of this should be done in Excel, but you will need to do a little theoretical work first to figure out, e.g., some of the formulas that you'll need to use in Excel. Incidentally,
I find this to be one of the most ingenious and beautiful things in the whole of astronomy.
2. Project 2.9. You don't actually have to do anything here -- just figure out how a certain thing could be done. Still, figuring that out and presenting it in a clear and detailed way, will take
some work.
3. Project 2.13. The idea here is just to "read off" (probably using a ruler) from Figure 2.16 the periods and orbital radii for the four moons of Jupiter. Put the numbers in Excel or something, and
see if you can find some Kepler's-third-law-ish relationship.
4. Project 2.14. More precisely: pick something from pages 67-69 to work out the mathematical proof of. For example, you could show how Equation (2.4) follows from Equation (2.3). Or vice versa. Or
you could show how Equation (2.6) follows from or implies one of the earlier equations defining the ellipse.
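For Project 2.13, here is what such a Kepler's-third-law check might look like in code rather than Excel. This sketch uses published approximate values for the Galilean moons (you would substitute the numbers you read off Figure 2.16); the third law predicts the ratio T²/a³ comes out nearly the same for all four:

```python
# Published (approximate) orbital periods in days and orbital radii in 10^5 km
# for the four Galilean moons; replace with your values from Figure 2.16.
moons = {
    "Io":       (1.77,   4.22),
    "Europa":   (3.55,   6.71),
    "Ganymede": (7.15,  10.70),
    "Callisto": (16.69, 18.82),
}

# Kepler's third law: T^2 / a^3 should be (roughly) constant across the moons.
ratios = {name: T**2 / a**3 for name, (T, a) in moons.items()}
```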
Last modified: Monday, December 19, 2011, 9:18 AM
|
{"url":"https://courses.marlboro.edu/mod/page/view.php?id=4816","timestamp":"2014-04-18T11:04:56Z","content_type":null,"content_length":"23410","record_id":"<urn:uuid:fc989492-766a-4a34-9897-dd51da9106df>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00485-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Fairless Hills SAT Math Tutor
Find a Fairless Hills SAT Math Tutor
...In a one-on-one tutoring session, I am able to tailor my lessons right to the specific student. I begin by getting to know the child. In order to best individualize their lessons, I like to
know the student's interests, motivations, learning styles, and strengths and weaknesses.
7 Subjects: including SAT math, geometry, algebra 2, algebra 1
...I am willing to tutor a range of subjects - from history, to writing skills, to social studies, to languages (Arabic, German, ESL and Spanish). I am looking for engaged students from elementary
school to college. I guarantee significant improvement in my students' performance.I studied classical...
24 Subjects: including SAT math, English, reading, writing
...Many of my students have tried several centers and packaged systems before beginning with me, and I often hear them say that they've never learned like this before and that my advice is perfect
for their unique needs. When I work with you, I'm not following a scripted curriculum. Instead, I'm e...
47 Subjects: including SAT math, chemistry, reading, English
...If you need help with basic math or science, Algebra, Geometry, Earth Science, Biology, GED, SAT, and/or overall study skills, I would love to talk to you. But don't just take my word for it,
talk to students of mine who have turned an almost-failing into a definitely-passing, and not only lived...
22 Subjects: including SAT math, reading, biology, English
...I have been weight lifting steadily for 10 years. I was a personal trainer in college (2007-2009) and have tutored individuals on proper form, diet, and exercise regimens. I have worked in the
broadcast industry since 2099, upon graduating with a degree in Communication.
45 Subjects: including SAT math, English, reading, writing
|
{"url":"http://www.purplemath.com/fairless_hills_pa_sat_math_tutors.php","timestamp":"2014-04-16T19:08:17Z","content_type":null,"content_length":"24227","record_id":"<urn:uuid:295654c3-2991-469d-9539-69272c437927>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00230-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How Do You Know? Let's Try With Math
Accuracy and Precision
The accuracy of a measurement refers to the closeness between the measurement obtained and the true value. Since scientists rarely know the true value, it is generally impossible to determine the
accuracy completely. One approach to evaluating accuracy is to make a measurement by two completely independent methods. If the results from independent measurements agree, scientists have more
confidence in the accuracy of their results.
Accuracy is affected by determinate errors, that is, errors due to poor technique or incorrectly calibrated instruments. Careful evaluation of an experiment may eliminate determinate errors.
Precision refers to the closeness of repeated measurements to each other. If the mass of an object is determined as 35.43 grams, 35.41 grams, and 35.44 grams in three measurements, the results may be considered precise because they differ only in the last of the four digits. There is no guarantee that they are accurate, however, unless the balance was properly calibrated and the correct methods were used in weighing the object. With proper techniques, precise results suggest, but do not guarantee, accurate results.
Precision is affected by indeterminate errors, that is, errors that arise in estimating the last, uncertain digit of a measurement. Indeterminate errors are random errors and cannot be eliminated.
Statistical analysis deals with the theory of random errors.
Significant Figures
Every experimental measurement is made in such a way as to obtain the most information possible from whatever instrument is used. As a result, measurements involve numbers in which the last digit is
uncertain. Scientists characterize a measured number based on the number of significant figures it contains.
The number of significant figures in a measurement includes all digits in that number from the first nonzero digit on the left to the last digit on the right. For exponential numbers, the number of
significant figures is determined from the digits to the left of the multiplication sign. Thus, 3.456 × 10^4 has four significant figures.
Sometimes there are trailing zeros on the right side of a number. If the number contains a decimal point, trailing zeros are always significant. Trailing zeros that are used to complete a number,
however, may or may not be significant. Scientists avoid writing a number such as 12,000 since it does not definitely indicate the number of significant figures. Scientific notation is used instead.
Twelve thousand can be written as 1.2 × 10^4, 1.20 × 10^4, 1.200 × 10^4, or 1.2000 × 10^4, indicating two, three, four, and five significant figures, respectively. It is the responsibility of the experimenter to write numbers in such a way that there is no ambiguity.
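These rules can be made mechanical; a rough Python sketch that counts significant figures in a number written as a string (following the convention above, ambiguous trailing zeros in a number without a decimal point are treated as not significant):

```python
def sig_figs(number_string):
    """Count significant figures in a number written as a string.

    Trailing zeros with no decimal point are ambiguous and treated as
    NOT significant, per the convention described above.
    """
    s = number_string.lower().split("e")[0].lstrip("+-")  # drop exponent, sign
    digits = s.replace(".", "")
    significant = digits.lstrip("0")      # leading zeros are never significant
    if "." in s:
        return len(significant)           # trailing zeros after a point count
    return len(significant.rstrip("0"))   # ambiguous trailing zeros dropped
```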
Some numbers are exact numbers, which involve no uncertainty. The number of plates set on a table for dinner can be determined exactly. If five plates are observed and counted, that measurement is
exactly five. In chemistry many defined equalities are exact. For instance, there are exactly 4.184 joules in each calorie and exactly two hydrogen atoms in each water molecule. Other exact numbers
are stoichiometric coefficients and subscripts in chemical formulas.
The reason for determining the number of significant figures in a measured value is that this number tells us how to write the answers to calculations based on that value.
There are two types of uncertainty, absolute uncertainty and relative uncertainty. The absolute uncertainty is the uncertainty of the last digit of a measurement. For example 45.47 milliliters is a
measurement of volume, and the last digit is uncertain. The absolute uncertainty is ± 0.01 milliliter. The measurement should be regarded as somewhere between 45.46 and 45.48 milliliters.
The relative uncertainty of a number is found by dividing the absolute uncertainty of the measurement by the number itself. For the above example, the relative uncertainty
0.01 mL / 45.47 mL = 2 × 10^-4
The absolute uncertainty governs the principles used for addition and subtraction. The relative uncertainty governs the principles used for multiplication and division.
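In code, the relative uncertainty for the example above is a one-line calculation (a Python sketch; the variable names are mine):

```python
measurement = 45.47          # mL
absolute_uncertainty = 0.01  # mL, the uncertainty in the last digit

# Relative uncertainty = absolute uncertainty / measurement (dimensionless).
relative_uncertainty = absolute_uncertainty / measurement
```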
Calculations, especially those done using an electronic calculator, often generate more, but sometimes fewer, significant figures or decimal places than are required.
These answers must be rounded to the proper number of significant figures or decimal places.
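For checking hand-rounded answers, Python's general format specifier g rounds to a given number of significant figures directly; a small helper sketch:

```python
def round_sig(x, n):
    """Round x to n significant figures via the general-format specifier."""
    return float(f"{x:.{n}g}")
```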
Significant Figures in Atomic and Molar Masses
All of the atomic masses listed in the Periodic Table have four or more significant figures. These are measured values and must be included in the determination of the correct number of significant
figures in any calculation that uses them. However, most calculations involve fewer than four significant figures, and the significant figures in atomic and molar masses usually do not affect the
number of significant figures in the answers. As a result, atomic and molar masses are often rounded to the nearest whole number. One exception is chlorine, which usually has its atomic mass rounded
to 35.5.
A graph is used to illustrate the relationship between two variables. Graphs are often a more effective method of communication than tables of data. A graph has two axes: a horizontal axis, called
the x-axis (abscissa), and a vertical axis, called the y-axis (ordinate).
It is customary to use the x-axis for the independent variable, and the y-axis for the dependent variable, in an experiment. An independent variable is one that the experimenter selects. For
instance, the concentrations of standard solutions that a chemist prepares are independent variables since any concentrations may be chosen. The dependent variable is a measured property of the
independent variable. For instance, the dependent variable may be the amount of light that each of the standard solutions prepared by the chemist absorbs, since the absorbed light is dependent on the concentration. Each data point is an xy pair representing the value of the independent variable and the value of the dependent variable determined in the experiment.
The first step in constructing a graph is to label the x and y axes to indicate the identity of the independent and dependent variables. Next, the axes are numbered, usually from zero to the largest value expected for each variable. Finally, each data point is plotted by drawing a vertical line at the value of the independent variable and a horizontal line at the value of the dependent variable. The intersection of these two lines determines where that data point belongs on the graph as shown below.
(figure available in print form)
Most graphs show a linear relationship between two variables. Others, such as kinetic curves, have curved lines. In both cases, data points are plotted on the graph and then the best smooth line is drawn through the points. Lines are never drawn by connecting the data points with straight lines. In very accurate work a statistical analysis called the method of least squares is used to determine the best straight line for the data. In most cases, however, the line is drawn by eye, attempting to have all data points as close as possible to the line. The usual result is a line that has the same number of data points above and below.
(figure available in print form)
The graph illustrates that it is incorrect to draw the line beyond the measured data points. The reason is that anything beyond the measured data is unknown. Extending the line implies information that has not been verified by experimental data.
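The method of least squares mentioned above can be written out in a few lines using the standard textbook formulas; a Python sketch (this is essentially what statistical software does for a straight-line fit):

```python
def least_squares(xs, ys):
    """Best-fit slope and intercept minimizing squared vertical deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return slope, intercept
```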
The slope of a curve or line is often needed as an experimental result. To determine the slope of a line, two points on the line are chosen. The left-hand point has coordinates (x1 , y1), and the right-hand point has coordinates (x2 , y2). The values of x and y at these points are determined from the graph, and the following equation is used to determine the slope:
Slope = (y2 - y1)/(x2 - x1)
In a graph with a curved line the slope is determined by drawing a tangent to the curve and then determining the slope of the tangent, as is done for a straight line.
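A direct translation of the two-point slope formula (a Python sketch; for scattered data the least-squares fit is preferred):

```python
def slope(x1, y1, x2, y2):
    """Slope of the line through (x1, y1) and (x2, y2): rise over run."""
    return (y2 - y1) / (x2 - x1)
```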
Mathematical Literacy
As society changes, so must the school. How can we as mathematics educators prepare our students for the challenges of the 21st century? One important way is to expand our educational goals to
reflect the importance of mathematical literacy. In order for students to become mathematically literate in the 21st century we must:
a. Prepare students for successful work lives in a changing society.
b. Equip students with reasoning tools they need as good citizens.
c. Develop students personal capacities to enjoy and appreciate mathematics.
Only then will students attain the mathematical literacy needed in a world where mathematics is rapidly growing and being applied to many diverse fields.
To address the growing concern about mathematical literacy, the National Council of Teachers of Mathematics has recommended fundamental changes in the teaching and learning of mathematics. Its Curriculum and Evaluation Standards for School Mathematics calls for students to gain mathematical power, which is a student's ability "to explore, conjecture, and reason logically." (p.5) Most mathematics educators agree that mathematical power must be the central concern of mathematics education. Many state departments of education are also calling for mathematics reform.
According to the Mathematics Framework for California Public School, "Mathematically powerful students think and communicate, drawing on mathematical ideas and using mathematical tools and
techniques." (p.3)
In addition to the four dimensions of mathematical power, students are expected to work both individually and cooperatively, to appreciate mathematics in history and society, and to develop positive
attitudes toward mathematics. Because society is demanding mathematical power from all citizens, it must be available for all students. Let's examine mathematical power in more detail.
1. Think refers to higher-order thinking skills such as classifying, analyzing, planning, comparing, conjecturing, deducing, inferring, hypothesizing, and synthesizing. These are characterized in the
reasoning, problem solving, and connections standards in the NCTM standards.
2. Communication refers to the verbal or written expression of understanding. As students work, they communicate their understanding to themselves and others.
3. Ideas refer to the mathematical content. These are concepts such as proportional relationships, geometry, logic, and so on.
4. Tools and Techniques refer to literal tools, such as calculators, computers, and manipulative, and also figurative tools, such as computational procedures and mathematical representations.
Cooperation is necessary for full-time participation in society. It is relevant on a large scale, as nations interact with each other, and on a small scale, where individuals relate to their
neighbors and their families. Cooperation is expected of workers in the workplace and of citizens in a democracy. In our society, the diversity of our population dictates that individuals must be
willing to work with people who differ from themselves. As part of society, our classrooms may be composed of students from many different cultural, ethnic, and language backgrounds. Regardless of
their differences, all students must learn to cooperate for the common good.
Cooperative learning involves a small group of learners, working together as a team to accomplish a common goal. According to the NCTM standards, the goal should be to solve meaningful problems.
Cooperative learning provides a structure in which students are given more responsibility for their own learning, while the teacher's responsibility shifts from the giver of knowledge to that of a
facilitator or mentor.
Cooperative learning is not having students sit together in groups and work on problems individually. It is not having one student in the group do all the work while the other students listen.
Teamwork is the key to effective cooperative learning. (See Good Decisions Through Teamwork Appendix)
Learning is an active process that is both an individual and a social experience. According to Judah L. Schwartz, professor of education at HGSE, "learning takes place in the minds of the individual
learners when they make connections to what they already know." Constructivist learning is enhanced by cooperative learning. An important component of cooperative learning is the communication that
occurs among group members. According to the standards, "Small groups provide a forum in which students ask questions, discuss ideas, make mistakes, learn to listen to other ideas, offer constructive
criticism, and summarize their discoveries in writing." (p. 79) These are valuable activities because they allow students to help each other make connections and discover their own meanings. Small
groups are safe; students can take risks and make mistakes. Students have more chances to communicate, and feel more comfortable doing so, in small groups than in whole-class discussions. When
students work in cooperative groups, they receive encouragement from their peers in their efforts to learn mathematical processes and concepts.
Another aspect of cooperative learning is that students create their own meaning when they are given many opportunities to experience and do mathematics through exploration. "This happens most
readily when students work in groups, engage in discussion, make presentations, and in other ways take charge of their own learning." (Everybody Counts, p.59) How Do You Know? Let's Try With Math supports the strategy of cooperative learning.
How to Graph Scientific Data
Graphs are a useful tool for displaying scientific data because they show relationships among variables in a compact, visual form. You probably know how to make and interpret several types of graphs
such as pie charts and bar graphs (or histograms). You may have also use X-Y graphs (or Cartesian graphs) in our math classes. However, you may not know how to use X-Y graphs to display experimental
data in chemistry laboratory work.
The following guidelines will help
1. Determine the independent variable
-Determine which of the quantities that you will be graphing is the independent variable and which is the dependent variable. The independent variable, denoted as x, is the variable whose values are chosen by the experimenter. The independent variable is plotted on the horizontal axis of the graph. The values of the dependent variable are determined by the independent variable.
-For example, the data shown in the table to the left was gathered in an experiment in which the temperature of a gas was increased and the resulting volume increase was measured. In this case,
temperature was the independent variable and volume was the dependent variable. In the graph for this experiment, temperature is plotted on the horizontal axis and volume is plotted on the vertical axis.
2. Scale of axes
-Each axis must have a scale with equal divisions.
-Allow as much room as possible between divisions.
-Each division must represent a whole number of units of the variable being plotted such as 1, 2, 5, 10 or some multiple of these. To decide which multiple to use for the horizontal axis, divide the
maximum value of the independent variable by the number of major divisions on your graph paper. For example, Graph A,in the Appendix, shows 10 divisions along the grid on the horizontal axis. The
data used to plot the curve for Graph A is shown in the Appendix. The maximum value of T is 480 K. Divide the number of divisions into the maximum value of the variable to get 48 K/10 divisions, or 48
K per division. To simplify, round up to allow 50 K per division on the horizontal axis.
-To scale the vertical axis follow the same procedure. The maximum value of the dependent variable, V, is 4L. The grid allows for 6 divisions. Round up to allow 1.0 L per division. Then there will be
2 divisions left over on the top of the vertical axis. Check Graph A to see how this looks.
-Label each axis with the quantity to be plotted and the units used to express each measurement. For example, the axes of Graph A are Volume (L) and Temperature (K.)
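The scaling arithmetic in step 2 can be sketched as a short Python helper (`nice_step` is our own name, and rounding up to a 1, 2, 5, or 10 multiple is based on the multiples suggested above):

```python
import math

def nice_step(max_value, divisions):
    """Divide the axis range by the grid divisions, then round the result up
    to the nearest 1, 2, 5, or 10 times a power of ten."""
    raw = max_value / divisions            # e.g. 480 K / 10 divisions = 48 K
    exp = math.floor(math.log10(raw))
    for mult in (1, 2, 5, 10):
        step = mult * 10 ** exp
        if step >= raw:
            return step
    return 10 ** (exp + 1)

print(nice_step(480, 10))  # 50 K per division, as in the Graph A example
print(nice_step(4, 6))     # 1.0 L per division, as on the vertical axis
```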
3. Plot the data
-Plot each data point by locating the proper coordinates for the ordered pair on the graph grid. If the data points look as if they fall roughly on a straight line, use a transparent ruler to find the
line of best fit for the data points. Draw the best-fit line through or between the points.
-If the data points clearly do not fall along a straight line, but appear to fit another smooth curve, lightly sketch in the smooth curve that connects the points.
-Once you have sketched a smooth curve, draw over it in ink.
4. Title your graph
-Title your graph to indicate the x and y variables. If you can also tell how the variables relate to one another without making the title too long, include this information. For example, "Volume
Versus Temperature Change in a Gas" is a suitable title for Graph A. Write the title at the top of the graph.
5. Interpret your graph
-If your data points lie roughly along a straight line, the x and y variables have a linear relationship or are directly proportional. This means that as one variable increases, the other does too, in
a constant proportion -- as x doubles, y doubles; as x triples, y triples; etc. Directly proportional quantities, x and y, relate to one another through mathematical equations of the form y = mx + b,
where m is a constant and b is zero. The equation for the directly proportional linear relationship shown in Graph A is V=kT. Here, m= k and b= 0.
-If your data points lie along a curve that drops from left to right as shown in Graph B, then the quantities have an inverse relationship or are inversely proportional. In an inverse relationship,
one quantity increases as the other decreases. Graph B shows that gas volume decreases as pressure increases. The mathematical relationship that expresses an inverse relationship is y = k/x. The expression
relating gas pressure and volume follows the form PV = k. Note that inverse relationships are nonlinear because the increase of one variable is not accompanied by a constant rate of decrease in the
other variable.
6. Use your graph
-Straight-line graphs are the easiest graphs to analyze and to express as equations. More complex graphs illustrate inversely proportional, exponential, or logarithmic relationships. It is often
useful to replot a nonlinear graph to obtain a straight-line graph.
Graph C shows the inverse relationship PV = k replotted as a straight line. To obtain this graph, both sides of the equation PV = k were divided by P.
The resulting equation, V = k * 1/P, has the same form as y = mx, which if plotted would produce a straight line that passes through the origin. To plot the actual data, the pressure values in the
table must be converted to 1/P values. The first pressure conversion is as follows.
1/0.100 = 10.0
V is plotted on the y axis and 1/P is plotted on the x axis.
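A small Python sketch of this replotting step; the pressure values here are illustrative (chosen so that PV = 2.00), not data from the text:

```python
pressures = [0.100, 0.200, 0.400, 0.500]   # atm (assumed units)
volumes = [2.00 / p for p in pressures]    # L, generated from V = k/P with k = 2.00

inv_p = [1.0 / p for p in pressures]       # x axis of the replotted straight line
# Slope of the best-fit line through the origin: k = sum(x*y) / sum(x*x)
k = sum(x * y for x, y in zip(inv_p, volumes)) / sum(x * x for x in inv_p)
print(round(k, 2))  # 2.0, recovering the constant in PV = k
```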
How to Use Significant Figures and Scientific Notation
Scientists use significant figures to report information about the certainty of measurements and calculations. With this method, a measurement of 2.25 m means that while the 2 in the ones place and
the 2 in the tenths place are certain, the 5 in the hundredths place is an estimate. If this measurement is combined with several other measurements in a formula, there must be some way of tracking
the amount of uncertainty in each measurement and in the final result. For example, using a calculator to find the volume of a cube that measures 2.25 m on a side, you get 11.390625 m3. This answer
indicates far greater precision in the volume measurement than is realistic. Remember that the 5 in 2.25 is an estimated digit.
The rules and examples that follow will show you how to work with the uncertainty in measurements to express your results with an appropriate level of precision.
1. Determining the number of significant figures
The first set of rules shows you how to look at a measurement to determine the number of significant figures. A measurement expressed to the appropriate number of significant figures includes all
digits that are certain and one digit in the measurement that is uncertain.
Rules for Determining the number of Significant Figures
A decimal point written after trailing zeros makes those zeros significant: 34800. mL has five significant figures, and 200. cm has three.
Exact conversion factors are understood to have an unlimited number of significant figures
By definition there are exactly 100 cm in 1 m so the conversion factor 100 cm/1 m is understood to have an unlimited number of significant figures
There are exactly 30 days in June, not 30.1 or 20.005, so an unlimited number of significant figures is understood in the expression "30 days"
When measurements are used in calculations, you must apply the rules regarding significant figures so that your results reflect the number of significant figures of the measurements.
The product or quotient should be rounded off to the same number of significant figures as the measurement with the fewest number of significant figures.
To obtain the correct number of significant figures in a measurement or calculation, numbers must often be rounded. When the digit following the last digit to be retained is:
-Greater than 5, increase the last digit by 1.
-Less than 5, do not change the last digit.
-5, followed by non-zero digit(s), increase the last digit by 1.
-5, not followed by a non-zero digit and preceded by an odd digit, increase the last digit by 1.
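These four rounding cases can be sketched as a small helper operating on a digit string (`round_digits` is our own name, not from the text):

```python
def round_digits(digits, keep):
    """Round the digit string `digits` off to `keep` digits using the rules above."""
    head, tail = digits[:keep], digits[keep:]
    first, rest_nonzero = tail[0], any(c != '0' for c in tail[1:])
    if (first > '5'
            or (first == '5' and rest_nonzero)
            or (first == '5' and not rest_nonzero and int(head[-1]) % 2 == 1)):
        head = str(int(head) + 1).zfill(keep)   # increase the last kept digit
    return head

print(round_digits("2347", 3))   # 235: dropped digit > 5
print(round_digits("2342", 3))   # 234: dropped digit < 5
print(round_digits("23451", 3))  # 235: 5 followed by a nonzero digit
print(round_digits("2335", 3))   # 234: bare 5 preceded by an odd digit
print(round_digits("2345", 3))   # 234: bare 5 preceded by an even digit
```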
Measurements made in chemistry often involve very large or small numbers. To express these numbers conveniently, scientific notation is used. In scientific notation, numbers are expressed in terms of
their order of magnitude. For example, 54,000 can be expressed as 5.4 X 10^4 in scientific notation, and the number 0.000008765 can be expressed as 8.765 X 10^-6.
As the preceding examples show, each value expressed in scientific notation has two parts. The first factor is always between 1 and 10, but it may have any number of digits. To write the first factor
of the number, move the decimal point to the right or left so that there is only one nonzero digit to the left of it. The second factor of the number is 10 raised to an exponent that is
determined by counting the number of places the decimal point must be moved. If the decimal point is moved to the left, the exponent is positive. If the decimal point is moved to the right, the
exponent is negative.
Using scientific notation along with significant figures is especially useful for measurements such as 200 L, 2560 m, or 10 000 kg, because it is unclear which zeros are significant.
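The decimal-point-counting procedure described above can be sketched as a Python helper (`to_scientific` is our own name for it):

```python
def to_scientific(x):
    """Return (first factor, exponent) with exactly one nonzero digit
    to the left of the decimal point in the first factor."""
    exp = 0
    while abs(x) >= 10:      # decimal point moves left: positive exponent
        x /= 10
        exp += 1
    while 0 < abs(x) < 1:    # decimal point moves right: negative exponent
        x *= 10
        exp -= 1
    return x, exp

m, e = to_scientific(54000)
print(m, e)             # 5.4 4, i.e. 5.4 X 10^4
m, e = to_scientific(0.000008765)
print(round(m, 3), e)   # 8.765 -6, i.e. 8.765 X 10^-6
```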
1. How many significant figures are there in these expressions?
____a. (8.369 X 10^3 + 4.58 X 10^2 - 6.30 X 10^3)/4.156 X 10^7
____b. (6.499 X 10^2)(5.915 X 10^4 + 3.4733 X 10^5)
____c. (7.23780 X 10^-3 - 3.65 X 10^-5)(3.6792 X 10^2 + 2.67)
____d. (2.1267 X 10^-5)(3.3456 X 10^-2 - 0.012)/(2.6 X 10^-2 - 3.23 X 10^-2)
Students will use basic concepts of probability and statistics to collect, organize, display and analyze data, simulate and test hypotheses. Good Decisions Through Teamwork will assure that students:
1. Estimate probabilities, predict outcomes and test hypotheses using statistical techniques.
2. Design a sampling experiment, interpret the data, and recognize the role of sampling in statistical claims.
3. Use the law of large numbers to interpret data from a sample of a particular size.
4. Select appropriate measures of central tendency, dispersion and correlation.
5. Design and conduct a statistical experiment and interpret its results.
6. Draw conclusions from data and identify fallacious arguments or claims.
7. Use scatterplots and curve-fitting techniques to interpolate and predict from data.
8. Use relative frequency and probability to represent and solve problems involving uncertainty; and use simulations to estimate probabilities.
Use the products of the weight of each atom multiplied by the corresponding specific heat to find:
2. Prepare a graph representing the solubilities of potassium nitrate, KNO3, in water at the following temperatures. From your graph, estimate the solubility of KNO3 at 65°C and at 105°C.
b. Do you have an inverse or direct relation?
c. Write the equation that expresses the mathematical relationship between P and V.
d. Replot the data to obtain a straight-line graph. Write the equation that represents the linear relationship shown by your graph.
The purpose of this activity is to analyze data via statistical methods in order to determine the equation of a line. Given either a point and a slope, or two points, we will derive the equation of
the line determined by the data. A brief discussion of linear regression by the least squares method will be included.
Consider a set of data consisting of only two points, (x1, y1) and (x2, y2). The regression line for this data is simply the line that passes through these two points. The slope of the line could be
calculated algebraically. The resulting slope could then be substituted for a in the equation y = ax + b, and one of the given points could be substituted for (x, y). We could then solve for the only unknown left, b. Although this is not a particularly difficult task to perform, it can become tedious. It is important to recognize that the regression analysis features of the TI calculators can be employed to focus our attention where it belongs, on the analysis of the data, rather than on the mathematical manipulation of numbers and equations in order to produce a solution. Understanding the concept takes us beyond number crunching and makes us aware of the importance of recognizing patterns and relationships useful in predicting and correcting estimated data.
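The two-point derivation above, and the least-squares fit that the TI's regression feature carries out for larger data sets, can both be sketched by hand (`line_through` and `least_squares` are our own helper names):

```python
def line_through(p1, p2):
    """Slope a = (y2 - y1)/(x2 - x1), then b from substituting one point."""
    (x1, y1), (x2, y2) = p1, p2
    a = (y2 - y1) / (x2 - x1)
    return a, y1 - a * x1

def least_squares(xs, ys):
    """Ordinary least-squares slope and intercept for y = ax + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

print(line_through((1, 3), (4, 9)))               # (2.0, 1.0): y = 2x + 1
print(least_squares([1, 2, 3, 4], [3, 5, 7, 9]))  # same line, fit from 4 points
```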
Volume of solid of revolution
April 9th 2010, 07:49 AM
Volume of solid of revolution
can u plzz help me with this math..??
find the volume of the solid that results when the region enclosed by y = √x, y = 0 and x = 9 is revolved about the line x = 9.
plzz help mee.....
April 9th 2010, 05:08 PM
The best thing you can do here is to graph your equations. When you revolve the region around the line x = 9, you get a solid shaped like a paraboloid of revolution (not an ellipsoid, since the boundary y = √x is a parabola on its side). So now what you have to do is ask yourself, what would a horizontal slice of this solid look like? It would be a thin circular disk of radius 9 - x = 9 - y².
So if you use the disk method, you want to find the volume of each slice, which is π · (radius)² · (thickness). The slices are stacked along the y-axis here, so we integrate with respect to y:
$\pi \displaystyle\int^b_a r(y)^2\,dy$
April 9th 2010, 10:57 PM
Easy. Using the method of cylindrical shells:
$V = 2\pi\int_0^9(9-x)\cdot(\sqrt{x})dx$
$V = 2\pi\int_0^9 \left(9x^\frac{1}{2} - x^\frac{3}{2}\right) dx$
$V = 2\pi\left[6x^\frac{3}{2} - \frac{2x^\frac{5}{2}}{5}\right]_0^9$
$V = \frac{648\pi}{5}$
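A quick numerical cross-check of this result (not part of the original thread): Simpson's rule applied to the shell integrand should reproduce 648π/5 ≈ 407.15.

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

# Shell method: V = 2*pi * integral of (9 - x) * sqrt(x) over [0, 9]
V = simpson(lambda x: 2 * math.pi * (9 - x) * math.sqrt(x), 0, 9)
print(round(V, 2), round(648 * math.pi / 5, 2))  # both print 407.15
```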
Peregrine's 'Soliton' observed at last
A figure showing a calculated Peregrine soliton upon a distorted background. This illustrates how such extreme wave structures may appear as they emerge suddenly on an irregular surface such as the
open ocean. The destructive power of such a steep nonlinear wave on the ocean can be easily imagined.
(PhysOrg.com) -- An old mathematical solution proposed as a prototype of the infamous ocean rogue waves responsible for many maritime catastrophes has been observed in a continuous physical system
for the first time.
The Peregrine 'Soliton', discovered over 25 years ago by the late Howell Peregrine (1938-2007), an internationally renowned Professor of Applied Mathematics formerly based at the University of
Bristol, is a localised solution to a complex partial differential equation known as the nonlinear Schrodinger equation (NLSE).
The Peregrine solution is of great physical significance because its intense localisation has led it to be proposed as a prototype of ocean rogue waves and also represents a special mathematical
limit of a wide class of periodic solutions to the NLSE.
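The article quotes no formula, but the Peregrine solution has a well-known closed form. In one common normalization of the NLSE (iψ_t + ½ψ_xx + |ψ|²ψ = 0, an assumed convention here), it is ψ(x,t) = [1 - 4(1+2it)/(1+4x²+4t²)]e^{it}, a pulse localized in both space and time whose hallmark is a peak exactly three times the background amplitude:

```python
import cmath

def peregrine(x, t):
    """Peregrine solution on a unit background, in the normalization above."""
    return (1 - 4 * (1 + 2j * t) / (1 + 4 * x**2 + 4 * t**2)) * cmath.exp(1j * t)

print(abs(peregrine(0, 0)))             # 3.0, the rogue-wave peak
print(round(abs(peregrine(50, 0)), 3))  # ~1.0, plain background far from the peak
```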
Yet despite its central place as a defining object of nonlinear science for over 25 years, the unique characteristics of this very special nonlinear wave have never been directly observed in a
continuous physical system - until now.
An international research team from France, Ireland, Australia and Finland report the first observation of highly localised waves possessing near-ideal Peregrine soliton characteristics in the
prestigious journal, Nature Physics.
The researchers carried out their experiments using light rather than water, but were able to rigorously test Peregrine’s prediction by exploiting the mathematical equivalence between the
propagation of nonlinear waves on water and the evolution of intense light pulses in optical fibres.
By building on decades of advanced development in fibre-optics and ultrafast optics instrumentation, the researchers were able to explicitly measure the ultrafast temporal properties of the generated
soliton wave, and carefully compare their results with Peregrine’s prediction.
Their results represent the first direct measurements of Peregrine soliton localisation in a continuous wave environment in physics. In fact, the authors are careful to remark that a mathematically
perfect Peregrine solution may never actually be observable in practice, but they also show that its intense localisation appears even under non-ideal excitation conditions.
This is an especially important result for understanding how high intensity rogue waves may form in the very noisy and imperfect environment of the open ocean.
The findings also highlight the important role that experiments from optics can play in clarifying ideas from other domains of science. In particular, since related dynamics governed by the same
NLSE propagation model are also observed in many other systems such as plasmas and Bose Einstein Condensates, the results are expected to stimulate new research directions in many other fields.
8/24/2010: This is a corrected version of the article.
More information: The paper is available to view via the following URL: dx.doi.org/10.1038/NPHYS1740
3.8 / 5 (10) Aug 23, 2010
The term is 'soliton', not 'solition'. Note also the thousands of dupes on Google of this mistake. Did noone even read the abstract of the original research?
4.1 / 5 (9) Aug 23, 2010
Did noone even read the abstract of the original research?
'Soliton' is indeed the correct term, but I believe "none" is the word you were looking for, not "noone".
3.7 / 5 (3) Aug 23, 2010
There are also successful approaches to model single particles (mesons, baryons) as solitons (so called skyrmion models)...
So maybe all of them are just solitons and so we should search for a field which structure of solitons correspond well to known menagerie of particles and their behavior?
Natural extension of quantum phase - ellipsoid field (between too abstract skyrmions and too simple optical vortices) seems to fulfill these requirements - excitations comes in 3 spin 1/2 families,
simplest charged particle has to have spin, further excitations correspond to mesons and baryons which can join into nucleus-like constructions, their natural interactions are two sets of Maxwell's
equations: for electromagnetism and gravity (4th section of http://arxiv.org/pdf/0910.2724 )
4.4 / 5 (7) Aug 23, 2010
Did noone even read the abstract of the original research?
'Soliton' is indeed the correct term, but I believe "none" is the word you were looking for, not "noone".
"Noone" is a common misspelling of the phrase "no one," not of the word "none."
3.9 / 5 (64) Aug 23, 2010
� ever is uneven any tenttalking but never was not even one tenttalkst unever, bey quadrate. Mean, all tenttalks are always without theory because with practum is also not the real
tenttalk �
4.4 / 5 (7) Aug 23, 2010
Genastropsychicallst appears to be a Dutch blogger that doesn't speak a scrap of English but consistently spams Physorg with these messages straight from a Dutch to English translator in a lame
attempt to attract visitors to his blog.
3.9 / 5 (64) Aug 23, 2010
Did noone even read the abstract of the original research?
'Soliton' is indeed the correct term, but I believe "none" is the word you were looking for, not "noone".
"Noone" is a common misspelling of the phrase "no one," not of the word "none."
You didn't need to put the word none in quotes, and the comma and period should have been outside the quotes.
3.9 / 5 (63) Aug 23, 2010
Genastropsychicallst appears to be a Dutch blogger that doesn't speak a scrap of English but consistently spams Physorg with these messages straight from a Dutch to English translator in a lame
attempt to attract visitors to his blog.
I just checked out his "blog", looks like an insane amount of copy-and-pasting words from other sources,... odd.
3.7 / 5 (3) Aug 23, 2010
@ Noumenon
I thought when a word in quotes ends a sentence then the period does belong in the quotes?!?
Beware the Syntax attacks!
3 / 5 (2) Aug 23, 2010
thats like chirping with lasers
3.7 / 5 (3) Aug 23, 2010
Did noone even read the abstract of the original research?
'Soliton' is indeed the correct term, but I believe "none" is the word you were looking for, not "noone".
"Noone" is a common misspelling of the phrase "no one," not of the word "none."
You didn't need to put the word none in quotes, and the comma and period should have been outside the quotes.
actually commas and periods are placed inside of quoatation marks...
4.3 / 5 (6) Aug 23, 2010
Did noone even read the abstract of the original research?
'Soliton' is indeed the correct term, but I believe "none" is the word you were looking for, not "noone".
"Noone" is a common misspelling of the phrase "no one," not of the word "none."
You didn't need to put the word none in quotes, and the comma and period should have been outside the quotes.
actually commas and periods are placed inside of quoatation marks...
That depends on a lot of rules. Furthermore, these rules differ between British English and American English.
3 / 5 (3) Aug 23, 2010
@ Noumenon
I thought when a word in quotes ends a sentence then the period does belong in the quotes?!?
Beware the Syntax attacks!
Actually you could replace "?!?" with an interrobang: ‽
3 / 5 (2) Aug 23, 2010
@ Noumenon
I thought when a word in quotes ends a sentence then the period does belong in the quotes?!?
Beware the Syntax attacks!
Actually you could replace "?!?" with an interrobang (well you'd be able to, but Physorg report an error if you try - come of Physorg accept the interrobang!)
3.8 / 5 (6) Aug 23, 2010
actually commas and periods are placed inside of quoatation marks...
Which is often silly at best. If I write a sentence that "quotes someone" internally the period should come after the quotes just I am doing here. If I am quoting an entire sentence than the period
DOES belong inside the quotes. And if someone that does grammar for a living disagrees with me on this then they need to get real, instead of insisting on Good Grammar, no matter how much it destroys
the sense or scan of the sentence.
Genastropsychicallst has already been banned once. Perhaps it is taking a page from the MultiNamed AWITSBS spammer. Who was very kind in making an empty post. He didn't even claim that AWITSBS
explains English Grammar. NOTHING explain English Grammar adequately.
2.7 / 5 (3) Aug 23, 2010
Do I forsee a new understanding of sunspots/solar flares, CMEs as a result of this observation?
3 / 5 (2) Aug 24, 2010
I think "Genastropsychicallst" is a rogue wave, himself. Language is a sea of ideas and words are the waves. Genastropsychicallst is but a Peregrine Event which has now been mathematically been
proven to occur eventually. Like the open ocean, few see such events, but the internet provides the perfect vantage point to witness one propagate. Interrobang if you will, if comment you must.
And THAT'S how I'm attempting to bring this discussion BACK ON TOPIC! ;-)
5 / 5 (3) Aug 29, 2010
Don't you people recognize a new word when it's coined? Solition is short for soliton solution.
not rated yet Aug 30, 2010
@ Noumenon
I thought when a word in quotes ends a sentence then the period does belong in the quotes?!?
Beware the Syntax attacks!
Actually you could replace "?!?" with an interrobang (well you'd be able to, but Physorg report an error if you try - come of Physorg accept the interrobang!)
interrobang? sounds like porn slang :D
Square Root
July 18th 2008, 05:45 AM
Square Root
Why is the answer to a square root problem always positive and negative?
sqrt{16} = -4 and +4
Why two answers?
July 18th 2008, 05:58 AM
Because $(-4)^{2}=16$ and $4^{2}=16$
July 18th 2008, 05:59 AM
They both lead to the same answer when squared...
$(-4)^2 = 16$
$4^2 = 16$
$\implies \sqrt{16} = \pm 4$
Visually it may appear clearer, consider the quadratic graph of $y = x^2$ (Shown Below), for every $\pm x$ value, it is mapped onto the same $y$ value.
July 18th 2008, 06:59 AM
Very good reply...
I thank both replies. I want to thank Air for the picture, which makes the answer clearer.
By the way, does the same apply to variables?
For exmaple:
sqrt{x^2} = -x and +x??? True or false?
July 18th 2008, 09:37 AM
True. (Clapping)
Variables are just numbers in disguise. (A range of constants)
$\sqrt{x^2} = \left|x\right| = <br /> \begin{cases} <br /> x, & \mbox{if }x \ge 0 \\<br /> -x, & \mbox{if }x \le 0 <br /> \end{cases}$
July 18th 2008, 09:37 AM
I agree that there are two square roots of any positive real number.
But, I disagree completely that $\sqrt {16} = \pm 4$. That is simply an abuse of notation!
This is a standard discussion in any elementary mathematics course.
The two square roots of 16 are: $\sqrt {16} = 4\,\& \, - \sqrt {16} = - 4$.
Therefore, $\sqrt {x^2 } = \left| x \right|$.
July 18th 2008, 10:31 AM
I agree that there are two square roots of any positive real number.
But, I disagree completely that $\sqrt {16} = \pm 4$. That is simply an abuse of notation!
This is a standard discussion in any elementary mathematics course.
The two square roots of 16 are: $\sqrt {16} = 4\,\& \, - \sqrt {16} = - 4$.
Therefore, $\sqrt {x^2 } = \left| x \right|$.
Completely agree with that !
If you see the graph of the function y=sqrt(x), you will see that y can't be negative.
Actually, working with the graph y=x² is a mistake because sqrt(x) is not the inverse function of x² !!!!
If one has x²=16, then for sure x=+ or - sqrt(16) because x²=16 --> x²-16=0 --> x²-(sqrt(16))²=0 --> (x-sqrt(16))(x+sqrt(16))=0 and the rest follows.
Actually, I'd say that these two messages are contradictory. You state clearly that sqrt(x²)=|x|, which is true.
So since 16=(-4)²=4², sqrt(16)=|4|=|-4|=4 ! ;)
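Numerically, the principal-root convention the thread settles on is easy to check (Python's `math.sqrt` returns only the nonnegative root):

```python
import math

print(math.sqrt(16))                  # 4.0, never -4.0
for x in (-4, -1.5, 0, 7):
    assert math.sqrt(x**2) == abs(x)  # sqrt(x^2) = |x| for each sample value
print("sqrt(x**2) == abs(x) holds for the samples")
```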
July 27th 2008, 11:13 AM
Fabulous work!
I thank all those who took time to help me understand this concept more and more.
A Chaotic Pulse-Time Modulation Method for Digital Communication
Abstract and Applied Analysis
Volume 2012 (2012), Article ID 835304, 15 pages
Research Article
School of Electronics and Telecommunications, Hanoi University of Science and Technology, 1 Dai Co Viet, Hanoi, Vietnam
Received 15 January 2012; Revised 6 March 2012; Accepted 6 March 2012
Academic Editor: Muhammad Aslam Noor
Copyright © 2012 Nguyen Xuan Quyen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in
any medium, provided the original work is properly cited.
We present and investigate a method of chaotic pulse-time modulation (PTM) named chaotic pulse-width-position modulation (CPWPM) which is the combination of pulse-position-modulation (PPM) and
pulse-width modulation (PWM) with the inclusion of chaos technique for digital communications. CPWPM signal is in the pulse train format, in which binary information is modulated onto
chaotically-varied intervals of position and width of pulses, and therefore two bits are encoded on a single pulse. The operation of the method is described and the theoretical evaluation of
bit-error rate (BER) performance in the presence of additive white Gaussian noise (AWGN) is provided. In addition, the chaotic behavior with tent map and its effect on average parameters of the
system are investigated. Theoretical estimation and numerical simulation of a CPWPM system with specific parameters are carried out in order to verify the performance of the proposed method.
1. Introduction
In recent years, chaotic behavior has been investigated in various research fields such as physics, biology, chemistry, and engineering [1]. Chaos-based digital communication has been receiving
significant attention [2] due to its potentials in improving the privacy of information [3]. Many chaos-based modulation methods have been proposed using different modulation schemes [3, 4]. Each
method has its own advantages and disadvantages but most of them use the chaotic carrier created by a chaotic dynamical system to convey information, so they are sensitive to distortion and noise
that can strongly affect the synchronization [5–7] and cause errors in recovering information.
Pulse-time modulation (PTM) technique was reported in the late 1940s [8] and it has received significant attention for the development of digital communication, especially with optical fiber
transmission system. In PTM, the binary information is modulated onto one of time-dependent parameters such as position, width, interval, or frequency in order to create the corresponding methods
which are pulse-position modulation (PPM), pulse-width modulation (PWM), pulse-interval modulation (PIM) or pulse-frequency modulation (PFM) [9].
A chaotic PTM method named chaotic-pulse-position modulation (CPPM) was proposed [10, 11] to reduce the effect of the channel on chaos synchronization. Since binary information is only modulated onto
the interpulse intervals, the impact of distortion and noise on the pulse shape does not seriously affect the synchronization process. The principal advantage of CPPM is the automatic synchronization
with the noncoherent demodulation type and without the need of specific hand-shaking protocols [12].
In this research, we present and investigate a method named chaotic pulse-width-position modulation (CPWPM), which combines PPM and PWM with the inclusion of a chaos technique. The binary information is modulated onto two chaotically varied intervals: the position and the width of the pulses. The position and width of a pulse are determined by the time intervals from its rising edge to the previous rising edge and to its own falling edge, respectively. With each received pulse, two bits of binary information are recovered, and thus the transmission rate can be improved. Since the CPWPM signal also has the pulse-train format, which guides the synchronization in an automatic way, this method performs well in distortion- and noise-affected channels and achieves a high level of information privacy.
The rest of this paper is organized as follows: the operation of the CPWPM method is described in Section 2. Section 3 presents the theoretical evaluation of the BER performance in AWGN channel. In
Section 4, we investigate the chaotic behavior of CPWPM with the tent map, from which the average parameters of the system are determined. A CPWPM system with specific parameters is calculated and simulated,
and their results are shown in Section 5. Finally, concluding remarks are given in Section 6.
2. Description of CPWPM
In this section we describe the operation of the CPWPM method by means of an analysis of the modulation and demodulation schemes, which are illustrated in Figures 1(a) and 1(b), respectively. Basically, each
scheme is built around a chaotic pulse regenerator (CPRG) as shown in Figure 2.
2.1. CPRG
In the CPRG, a counter operates in free-running mode to produce a linearly increasing signal whose value is proportional to the time elapsed since the last reset, with a slope set by the count-step. This linearly increasing signal is reset to zero by the input pulse. Just before the reset time, the output value of the counter is stored in the sample-and-hold circuit, whose output is fed to the nonlinear converter. An amplifier with a gain factor is used to produce another linearly increasing signal, which has a higher slope than that of the counter's signal. When the magnitude of the amplifier's output and that of the counter each reach the value held at the output of the nonlinear converter, two narrow pulses are generated at Outputs 2 and 1, respectively. The pulse at Output 2 occurs earlier than the pulse at Output 1, and both times can be controlled by the values of the gain factor and the count-step. With a proper choice of parameters, when Output 1 is connected back to the input to form a closed loop, the CPRG generates two chaotic pulse trains at its two outputs.
2.2. Modulation
In the modulation scheme, the binary information is modulated separately onto the interpulse intervals of two consecutive pulses at the outputs of CPRG by using delay modulators in the corresponding
feedback loops. At the delay modulators, the input pulses trigger data source to get the next binary bits and . Depending on the values of these binary bits, the input pulses, and , are delayed by
time durations, and , respectively. Note that and are constant time delays inserted to guarantee the synchronization of the system, and are modulation depths which are delayed-time differences
between “0” and “1” bits. Therefore, the delayed pulses and at the outputs of the delay modulators 2 and 1 are generated at the times, and , respectively. After that, the modulated chaotic pulse
trains are applied to a pulse-triggered edge generator (PTEG) whose output switches to high and low levels when triggered by the inputs, and , to define the position and width of pulses,
respectively. The pulse train at the output of PTEG is the CPWPM signal which is mathematically expressed as follows: where is the unit-step function, is the time to generate the th pulse, and and
are the amplitude and width of pulses, respectively. It is clear that the width of the th pulse and the position of the th pulse are determined by the following intervals:
The comparison between the PPM, PWM, and CPWPM signals in the time domain is illustrated in Figure 3. In the conventional PPM, the binary information is modulated onto interpulse interval (interval
of inter-rising edge) which determines the position of the current pulse compared to the previous pulse, while the width of pulses is fixed. In contrast, in the PWM method, the interpulse intervals
are fixed, and the information is modulated onto the pulse widths (interval of rising and falling edges of a same pulse). With the PPM and PWM methods, time difference between modulated intervals of
“0” and “1”bits is a constant . In our proposed method CPWPM, both the interpulse interval and the width of pulses convey the binary information and their variation is controlled by the nonlinear
function (). This can be seen from the expression in (2.2). Values of parameters , and () are chosen so that chaotic behavior exhibits in (2.2), in other words, the position and width of CPWPM pulses
vary chaotically.
2.3. Demodulation
In the demodulation scheme, the received signal is applied to an edge-triggered pulse generator (ETPG). ETPG is triggered by the rising and falling edges of input pulses to produce narrow pulses at
Outputs 1 and 2, respectively. Output 1 of ETPG is connected to a CPRG identical to the one in the modulation scheme. As the synchronization state is maintained, the reproduced chaotic pulse trains at
the outputs of CPRG are identical to those in the modulation scheme. At Delay detectors 1 and 2, these pulses are compared with the corresponding ones from ETPG to determine the delayed-time
durations, and , respectively. Consequently, data bits are recovered as follows:
Like CPPM, the CPWPM system can automatically synchronize due to its pulse train format. Equation (2.3) points out that the demodulation scheme only needs to correctly detect three consecutive
intervals, and , in order to resynchronize and decode correctly. Note that the set of values of and () is considered as a secret key. The binary information is only correctly recovered when a
receiver has full information on these parameters.
Since two data bits are recovered with each received pulse, the transmission bit rate is doubled in comparison with PPM, PWM, and CPPM. Furthermore, data bits at the inputs (i.e., Data_ins
1 and 2) in the modulation scheme are recovered separately at their corresponding outputs (i.e., Data_outs 1 and 2) in the demodulation scheme. Therefore, CPWPM can provide a multiaccess method of
two users.
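The key-based recovery just described can be illustrated with a toy discrete-time sketch. This is an illustration only, not the authors' circuit: a tent map stands in for the nonlinear converter (), the pair of initial state and map slope plays the role of the secret key, and a single data-dependent interval per bit replaces the paired position/width intervals of the real scheme.

```python
def tent(x, a=1.99):
    """Tent map; chaotic for slopes 1 < a <= 2 (standard result)."""
    return a * x if x < 0.5 else a * (1.0 - x)

def modulate(bits, x0=0.37, base=1.0, depth=0.1):
    """Each data bit delays a chaotically varying interval by 0 or `depth`,
    mimicking how CPWPM hides bits in pulse intervals."""
    x, intervals = x0, []
    for b in bits:
        x = tent(x)
        intervals.append(base + x + (depth if b else 0.0))
    return intervals

def demodulate(intervals, x0=0.37, base=1.0, depth=0.1):
    """A receiver holding the same key (x0 and map parameters) regenerates
    the chaotic sequence and reads each bit from the leftover delay."""
    x, bits = x0, []
    for t in intervals:
        x = tent(x)
        bits.append(1 if (t - base - x) > depth / 2 else 0)
    return bits

data = [1, 0, 1, 1, 0, 0, 1, 0]
assert demodulate(modulate(data)) == data
# A receiver with a wrong x0 regenerates a different chaotic sequence,
# so the leftover delays become meaningless: this is the privacy claim.
```

Because the receiver reproduces the transmitter's chaotic sequence exactly, the data-dependent residue of each interval decodes cleanly; without the key the residue is buried in the chaotic variation.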
3. Theoretical Evaluation of BER Performance
The analytical method to evaluate the CPPM error probability reported in [11] is employed for evaluating the BER of CPWPM in this research. For simplicity, let us consider a system model presented in
Figure 4. The input signal of the threshold detector, , is the sum of the transmitted signal and channel noise (AWGN), and it is compared with a threshold value . When the magnitude of changes over ,
corresponding edges are produced and thus a rectangular pulse with an amplitude is regenerated at the output. The resulting pulse train of is put into the CPWPM demodulator for recovering the data
The detection windows of the rising and falling edges of the th pulse in the demodulator are defined as in Figure 5. Assuming that the demodulator maintains synchronization at all times, the
reproduced pulse trains at the outputs of CPRG are identical to those in the modulator, and therefore the instances and are determined. The rising and falling edge detection durations are taken from
to and from to , respectively. The width of each detection duration is equal to the corresponding modulation depth, and it is divided into “0” and “1” windows, both of which have the same width. Due to the effect of noise on the signal , a bit error will occur when the shifted pulse edges of the pulse train fall into unexpected windows in the corresponding detection durations; that is, the pulse edges of transmitted “0” bits fall into “1” windows and vice versa. Here, we divide each window into bins; each bin has the width, which is also the fundamental sampling period of the system. It is noted that is the frequency of the clock pulse supplied to the counter of the CPRG at the demodulator. The signal is sampled once at the end of every bin with sampling cycle .
Each CPWPM pulse is equivalent to one symbol from , or , which carries the binary information of two bits from “00”, “01”, “10” or “11”, respectively. We consider the case in which the symbol is transmitted, and the correct detection probability of this symbol is , where and are the probabilities of detecting a “1” bit when a “1” bit is transmitted in the rising and falling edge detection durations, respectively. Let us first evaluate , which is the probability that the signal from any bin in the “0” window does not exceed the threshold value . Using the statistical independence of the measurements for each window in the case of AWGN, we have
Secondly, we evaluate , which is the probability that the signal from any bin in the “0” window remains higher than the threshold value . Thus, it is determined as follows:
In (3.2) and (3.3), and are the window widths in the rising and falling edge detection durations, respectively; the rate ; and are the energy per bit and the spectrum power density of noise,
The recovery will be unsuccessful if at least one of four symbols is decoded incorrectly. From (3.1), (3.2), and (3.3), the error probability of CPWPM can be estimated by the following equation:
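The per-bin independence argument can be sketched generically. The notation V, A, sigma, W below is assumed for illustration and does not reproduce the paper's exact expressions (3.1)–(3.3): a Gaussian noise sample exceeds a threshold with the Q-function tail probability, and independence across the W bins of a window turns each per-bin probability into a W-th power.

```python
import math

def Q(x):
    # Gaussian tail probability, Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def p_window_clean(V, sigma, W):
    """All W noise-only bins stay below the threshold V (AWGN, std sigma)."""
    return (1.0 - Q(V / sigma)) ** W

def p_window_held(A, V, sigma, W):
    """All W bins carrying a pulse of amplitude A stay above the threshold."""
    return (1.0 - Q((A - V) / sigma)) ** W

# Toy numbers: the symbol is detected correctly only if both window
# conditions hold, so the symbol error probability is one minus the product.
p_ok = p_window_clean(1.0, 0.3, 10) * p_window_held(2.0, 1.0, 0.3, 10)
print(1.0 - p_ok)
```

The same structure explains why error probability grows with the number of bins per window and shrinks as the signal-to-noise ratio improves.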
4. Chaotic Behavior with Tent Map and Average Parameters
The tent map is a discrete-time, one-dimensional nonlinear function with a piecewise-linear I/O characteristic curve [13], and it is used for generating chaotic values seen as pseudorandom numbers [14]. In communication, the tent map has been proposed for chaotic modulation [15], with such advantages as simplified calculation and a robust regime of chaos generation for a rather
broad range of modulation parameters. Here, the utilization of tent map for chaotic behavior of CPWPM is investigated. Based on average fixed point of the map, average parameters of the CPWPM system
are determined theoretically. These are very important for design process to guarantee the chaotic behavior in the system.
4.1. CPWPM Tent Map
The conventional tent map is iteratively generated through a transformation function as given by
In this equation, represents the time step; is the initial value; is the output value at the th step, and the parameter controls the chaotic behavior of the map.
In CPWPM, from (2.2), the position and width of the th pulse are rewritten as follows: then these intervals can be converted to the following: here, , and are the input and output values of the
nonlinear converter () at the th and th steps, respectively. After that, we have From (4.1) and (4.4), the tent map for the CPWPM system, called the CPWPM tent map, is derived as
4.2. Chaotic Behavior
The equation of the CPWPM tent map above points out that its chaotic behavior depends not only on the control parameter , but also on the parameters and . The chaos of depends on and ; the chaos of depends on the chaos of with a difference, . In other words, the chaos of leads to the chaos of the system.
The Lyapunov exponent of the map is determined by Based on (4.5) and (4.6), the behavior of the CPWPM tent map becomes chaotic in with the following condition: which is equivalent to
Figure 6 shows the bifurcation diagram of the CPWPM tent map according to and , . Here, corresponds to the conventional tent map; the more the value of increases, the smaller the chaotic area is, and for sufficiently large values the chaotic area disappears. It is easy to see that the bifurcation diagram of is also the bifurcation diagram of shifted vertically by a distance, .
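The chaos criterion can be illustrated with the common tent-map form x_{n+1} = a·x_n for x_n < 1/2 and a·(1 − x_n) otherwise, whose Lyapunov exponent is ln a, positive exactly when a > 1 (a standard result; the parameter values below are illustrative, not taken from the paper). A short sketch shows the resulting sensitive dependence on initial conditions:

```python
def tent(x, a):
    # Common tent-map form; chaotic (Lyapunov exponent ln a > 0) for 1 < a <= 2.
    return a * x if x < 0.5 else a * (1.0 - x)

def diverge_steps(a, x0=0.2, eps=1e-9, tol=0.1):
    """Steps until two orbits started eps apart separate by more than tol."""
    x, y = x0, x0 + eps
    for n in range(1, 500):
        x, y = tent(x, a), tent(y, a)
        if abs(x - y) > tol:
            return n
    return None  # orbits never separated: no sensitive dependence

print(diverge_steps(1.9))  # a few dozen steps: chaotic regime
print(diverge_steps(0.9))  # None: the perturbation only shrinks
```

With a > 1 a nanometre-scale perturbation is amplified by roughly a factor of a per step; with a < 1 it decays, which mirrors the shrinking chaotic area in the bifurcation diagram.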
In the modulation process, the binary bit varies between “0” and “1”, and thus has two values, and . Based on (4.8), the conditions to guarantee chaotic behavior in the CPWPM method are or
4.3. Average Parameters
In the iteration process, the CPWPM tent map varies chaotically around a fixed point [1] determined by
In the modulation process, due to the variation between “0” or “1" of input binary bits and , this fixed point is shifted around an average fixed point as follows:
Due to this feature, the intervals of position and width of the CPWPM signal vary chaotically around average intervals: and its spectrum therefore has an average fundamental harmonic and an average
bandwidth which are
The value of the average fundamental harmonic is equal to the average number of pulses transmitted in one second. Since each CPWPM pulse conveys two bits, the average bit-rate of the system is
evaluated as follows:
5. Calculation and Simulation Results
In this section, the CPWPM system as the model in Figure 4 with specific parameters is calculated theoretically and simulated numerically in order to verify the analysis and performance of the
presented method. The estimation and simulation results as well as comparison are provided. The specific parameters of the CPWPM system are chosen as follows: the fundamental sampling period μs, ;
the nonlinear converter () uses the tent map with .
5.1. Theoretical Calculation
Based on (4.3), the CPWPM tent map is determined by the following parameters:
With , the condition for the chaos of the method according to (4.10) becomes . Therefore, we choose to guarantee the chaotic behavior of the CPWPM system. Based on the analysis in Section 4.3,
the average parameters of the system are calculated as follows:
5.2. Numerical Simulation
Numerical simulation of the CPWPM system with the above specific parameters is carried out in Simulink. Simulated signals in the time domain of the modulator within the duration from starting time 0
to are presented in Figure 7. The intervals of position and width vary chaotically in the ranges μs to μs and μs to μs, respectively. When the synchronization state of the system is maintained, the
recovered signals in the demodulator exactly match their corresponding signals in the modulator. The chaotic behavior of the system is verified by attractor diagram in Figure 8. In the modulation
process, the fixed point is shifted on the bisector and around the average fixed point, (red point). Average spectrum of the CPWPM signal is shown in Figure 9. Values of the average fundamental
harmonic and average bandwidth can be determined from this spectrum graph. We can observe that the values of the average parameters in the simulation results agree well with those of the theoretical calculation above. This confirms the validity of the theoretical analysis.
BER performance obtained from simulation of CPWPM, CPPM, PPM systems in the AWGN channel as well as the evaluation BER of the CPWPM system according to (3.4) is presented in Figure 10. Simulation
BERs are calculated as the number of error bits divided by the total number of bits transmitted. With the CPWPM system, the simulation BER is slightly higher than the evaluation one. The cause of
these differences is the loss of synchronization. In the theoretical estimation, we suppose that the synchronization state is maintained at all times, thus errors in position and width of pulses
leading to bit error only occur due to noise. However, in the numerical simulation, the effect of noise may cause not only the errors in position and width of pulses, but also the loss of
synchronization which also leads to bit error. It can be observed that as the increases, the synchronization of the system becomes better and thus the simulation results move closer to the estimation
results. Both the CPWPM and CPPM systems perform about 4 dB poorer in BER than the conventional PPM system. This is due to the simple demodulation of PPM, in which the information recovery does not depend on previously received intervals; each data bit is determined by comparing the current interval with a reference interval. The BER simulation results also point out that the CPWPM
system performs slightly worse than the CPPM system, but in return the bit rate of the CPWPM system is twice as high as that of the CPPM system with equivalent parameters.
6. Conclusion
The paper has presented and investigated the chaotic-pulse-position-width modulation method for chaos-based digital communication. The performance of the method is analyzed using both theoretical
evaluation and numerical simulation in terms of time- and frequency-domain signals and BER performance. In addition, the chaotic behavior of CPWPM with tent map is investigated considering the
determination of the average parameters of the system in the modulation process. It can be seen from the obtained results that: (1) the CPWPM system provides a significant improvement in bit rate with a slightly worse performance in comparison with an equivalent CPPM system; (2) two separate data streams can be conveyed by the CPWPM pulses, and they are recovered separately at two corresponding outputs in the demodulator; thus CPWPM can be used as a multiaccess method for two users; (3) regarding privacy, the CPWPM method offers an improvement compared with CPPM and a strong improvement
compared with the PPM and PWM. Due to the chaotically-varied intervals of both the position and width, the CPWPM method can eliminate any trace of periodicity from the spectrum of the transmitted
signal. Moreover, the chaotic variation depends on the privacy key with several parameters. It is impossible for an intruder to recover correctly the binary data without having full information on
the structure of modulation and the private key; (4) the CPWPM pulses can be considered as a time-modulated baseband binary signal and thus it can be conveyed by conventional binary sinusoidal
carrier modulation methods such as on-off keying (OOK), binary-frequency-shift keying (BFSK) and binary-phase-shift keying (BPSK). All these features make the CPWPM method attractive for development
of chaos-based digital communications.
This work is supported by the Vietnam’s National Foundation for Science and Technology Development (NAFOSTED) under Grant no. 102.99-2010.17.
1. S. H. Strogatz, Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering, Westview Press, Boulder, Colo, USA, 2001.
2. M. P. Kennedy and G. Kolumban, “Special issue on non-coherent chaotic communications,” IEEE Transactions on Circuits and Systems I, vol. 47, no. 12, pp. 1661–1764, 2000.
3. P. Stavroulakis, Chaos Applications in Telecommunications, CRC Press, New York, NY, USA, 2005.
4. W. M. Tam, F. C. M. Lau, and C. K. Tse, Digital Communication with Chaos: Multiple Access Techniques and Performance, Elsevier Science, Amsterdam, The Netherlands, 2006.
5. L. M. Pecora and T. L. Carroll, “Synchronization in chaotic systems,” Physical Review Letters, vol. 64, no. 8, pp. 821–824, 1990.
6. N. F. Rulkov and L. S. Tsimring, “Synchronization methods for communication with chaos over band-limited channels,” International Journal of Circuit Theory and Applications, vol. 27, no. 6, pp. 555–567, 1999.
7. C. C. Chen and K. Yao, “Numerical evaluation of error probabilities of self-synchronizing chaotic communications,” IEEE Communications Letters, vol. 4, no. 2, pp. 37–39, 2000.
8. M. M. Levy, “Some theoretical and practical considerations of pulse modulation,” Journal of the Institution of Electrical Engineers, vol. 94, no. 13, pp. 565–572, 1947.
9. B. Wilson and Z. Ghassemlooy, “Pulse time modulation techniques for optical communications,” IEE Proceedings Optoelectronics, vol. 140, no. 6, pp. 347–357, 1993.
10. M. M. Sushchik, N. Rulkov, L. Larson, et al., “Chaotic pulse position modulation: a robust method of communicating with chaos,” IEEE Communications Letters, vol. 4, no. 4, pp. 128–130, 2000.
11. N. F. Rulkov, M. M. Sushchik, L. S. Tsimring, and A. R. Volkovskii, “Digital communication using chaotic-pulse-position modulation,” IEEE Transactions on Circuits and Systems, vol. 48, no. 12, pp. 1436–1444, 2001.
12. H. Torikai, T. Saito, and W. Schwarz, “Synchronization via multiplex pulse trains,” IEEE Transactions on Circuits and Systems I, vol. 46, no. 9, pp. 1072–1085, 1999.
13. J. T. Bean and P. J. Langlois, “Current mode analog circuit for tent maps using piecewise linear functions,” in Proceedings of the 1994 IEEE International Symposium on Circuits and Systems, vol. 6, pp. 125–128, June 1994.
14. T. Addabbo, M. Alioto, A. Fort, S. Rocchi, and V. Vignoli, “The digital tent map: performance analysis and optimized design as a low-complexity source of pseudorandom bits,” IEEE Transactions on Instrumentation and Measurement, vol. 55, no. 5, pp. 1451–1458, 2006.
15. H. Ruan, E. E. Yaz, T. Zhai, and Y. I. Yaz, “A generalization of tent map and its use in EKF based chaotic parameter modulation/demodulation,” in Proceedings of the 43rd IEEE Conference on Decision and Control (CDC '04), vol. 2, pp. 2071–2075, December 2004.
|
{"url":"http://www.hindawi.com/journals/aaa/2012/835304/","timestamp":"2014-04-16T22:35:41Z","content_type":null,"content_length":"335597","record_id":"<urn:uuid:a1653660-a9d8-4f88-b05b-fa6f4427d0b0>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00003-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Distributive Property?
March 11th 2013, 01:36 AM
Distributive Property?
Hi All,
I am teaching myself algebra using Khan Academy and I've come across a problem I can't figure out. I am trying to solve composite functions and I can't figure out why a plus sign is added to the equation. Could I be missing something that I am supposed to distribute?
The functions is:
g(−1)=(−1)^3+2(−1)^2 + (−7)(−1)+5+f(−1)
where does this plus sign (the one written before (−7)(−1)) come from?
March 11th 2013, 03:37 AM
Re: Distributive Property?
You have done right, have confidence in yourself. Just to repeat
f(t)=3t-h(t)= 3t - (4t+2) So f(-1) = 3(-1) - ( -4 + 2 ) = -3 + 2 = -1
So g(f(-1))=g(-1) = (-1)^3+2(-1)^2−7(-1)+5+f(-1)
= -1 +2+7+5-1 = 12
March 11th 2013, 04:28 AM
Re: Distributive Property?
Ok I understand now, Thank you
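The reply's arithmetic can be checked with a short sketch (h(t) = 4t + 2 is inferred from the reply's line "f(t)=3t-h(t)= 3t - (4t+2)"):

```python
h = lambda t: 4 * t + 2        # h as used in the reply: h(t) = 4t + 2
f = lambda t: 3 * t - h(t)     # f(t) = 3t - (4t + 2)
g = lambda x: x**3 + 2 * x**2 - 7 * x + 5 + f(x)

print(f(-1))  # -1
print(g(-1))  # 12
```

Substituting x = −1 into −7x is what produces the "+(−7)(−1)" term the question asked about.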
|
{"url":"http://mathhelpforum.com/algebra/214574-distributive-property-print.html","timestamp":"2014-04-23T17:08:58Z","content_type":null,"content_length":"4498","record_id":"<urn:uuid:cba3cff7-fff7-44c8-b284-782a4036d445>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00223-ip-10-147-4-33.ec2.internal.warc.gz"}
|
i need help!! [2.03] Solve: 2(x + 1) = 2x + 5. Answer choices: Any Real Number, No Solution, 0, 3
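The page does not include an answer; a quick check (not from the original thread): expanding the left side gives 2(x + 1) = 2x + 2, so the equation reduces to 2 = 5, a contradiction, and the correct choice is "No Solution".

```python
# The two sides of 2(x + 1) = 2x + 5 always differ by a constant 3,
# so no value of x makes them equal.
lhs = lambda x: 2 * (x + 1)
rhs = lambda x: 2 * x + 5
gaps = {rhs(x) - lhs(x) for x in range(-1000, 1001)}
print(gaps)  # {3}
```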
{"url":"http://openstudy.com/updates/50a6d6f3e4b082f0b852d351","timestamp":"2014-04-21T04:42:28Z","content_type":null,"content_length":"121242","record_id":"<urn:uuid:793fe41d-3994-4f92-a4f9-a3803d414c2a>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00249-ip-10-147-4-33.ec2.internal.warc.gz"}
|
if it is noon in Arkansas what time is it in France
7:00:00pm Central European Summer Time
7:00:00pm Central European Daylight Time (the European timezone UTC+2)
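The seven-hour gap can be checked with a short sketch using fixed offsets for a summer date: Arkansas is on CDT (UTC−5) and France on CEST (UTC+2). (Python's zoneinfo would pick the offsets automatically; outside daylight saving both offsets shift by an hour, so the gap stays seven hours.)

```python
from datetime import datetime, timedelta, timezone

cdt = timezone(timedelta(hours=-5), "CDT")    # Arkansas, summer
cest = timezone(timedelta(hours=2), "CEST")   # France, summer

noon_arkansas = datetime(2023, 7, 1, 12, 0, tzinfo=cdt)
in_france = noon_arkansas.astimezone(cest)
print(in_france.strftime("%H:%M %Z"))  # 19:00 CEST
```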
|
{"url":"http://www.evi.com/q/if_it_is_noon_in_arkansas_what_time_is_it_in_france","timestamp":"2014-04-18T02:59:27Z","content_type":null,"content_length":"69388","record_id":"<urn:uuid:41bff340-3cea-4380-a7c9-5bc19d0478a6>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00226-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Using triple integrals to find centre of gravity of an object with varying density
October 6th 2010, 02:49 AM #1
Aug 2009
I have a really hard homework question that I have spent hours trying to figure out.
The question is:
Find the centre of gravity of a solid bounded by $z=1-y^2$ (for $y \ge 0$), $z=0$, $y=0$, $x=-1$ and $x=1$, which has a mass density of $\rho(x,y,z)=yz$ grams/m^3.
i have done some research on the web and found that the centre of gravity of an object with varying density is cg * W = g * SSS x * rho(x,y,z) dx dy dz where cg is center of gravity, W is weight
(which i dont have), g is gravity(which is also not specified), SSS indicates a triple integral with respect to dx,dy,dz and rho(x,y,z) is the object density.
I asked my lecturer about it and this is what she replied:
You need an integral to find the mass (this will be just a number, not a function). You then need an integral to find the x-coordinate of the centre of gravity (this will also be just a number,
not a function). Then one for the y-component and then one for the z-component. So in total, you should do 4 triple integrals.
please help as i cannot figure it out.
This is very strange. Since this is homework, you must be taking a class in which you are being taught this, yet you used the internet to find a formula that should be in your text book- and you
appear not to know how to use an integral to find the weight of an object with given density. All of those should have been covered in class before you were given homework like this.
One of the very first things you should have learned about triple integrals is that they can be used to find the volume of a three-dimensional region. And, in particular, if you are given the
region as bounded by x= a, x= b, y= c, y= d, z= e, z= f(x,y) (where f(x,y) is a function of x and y) then its volume is given by
$\int_{x= a}^b \int_{y= c}^d\int_{z= e}^{f(x,y)} dzdydx$
And if the density is given by $\rho(x,y,z)$, you just include that instead of "1" as integrand to find the mass:
$\int_{x= a}^b \int_{y= c}^d\int_{z= e}^{f(x,y)} \rho(x, y, z) dzdydx$
"Weight" is the density times g but you don't really need g- it will cancel out. The "center of mass" is exactly the same as the "center of gravity".
The x- coordinate of the center of mass is given by
$\int_{x= a}^b \int_{y= c}^d\int_{z= e}^{f(x,y)} x\rho(x,y,z) dzdydx$
divided by the mass. The y-coordinate is given by
$\int_{x= a}^b \int_{y= c}^d\int_{z= e}^{f(x,y)} y\rho(x,y,z) dzdydx$
and the z- coordinate is given by
$\int_{x= a}^b \int_{y= c}^d\int_{z= e}^{f(x,y)} z\rho(x,y,z) dzdydx$
The mass integral (which has no "x", "y", or "z" multiplying $\rho(x,y,z)$) and those three integrals are the four integrals your lecturer was referring to. In your problem, $f(x,y)= 1- y^2$, $\rho(x,y,z)= yz$, a= -1, b= 1 (x between -1 and 1), c= 0 ($y\ge 0$), and e= 0 (z= 0), but you do not give an upper bound on y. An upper bound for y, either as a number or as a function of x, must be given in order for this region to be properly defined.
Thanks very much for that, it cleared up a lot.
With the y upper bound (y = d), am I right in thinking that it can be found by using z = 1 - y^2 where z = 0? This would give me an upper limit of y = 1 (d = 1).
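Working the four integrals numerically (not part of the original thread): with the bounds x ∈ [−1, 1], y ∈ [0, 1] (the deduction y = 1 from z = 1 − y² meeting z = 0 is right), and 0 ≤ z ≤ 1 − y², a pure-Python midpoint-rule sketch reproduces the exact answers, which work out to mass 1/6 and centre of gravity (0, 16/35, 1/2):

```python
def triple_integral(f, nx=60, ny=60, nz=60):
    """Midpoint rule over the solid: -1<=x<=1, 0<=y<=1, 0<=z<=1-y^2."""
    total = 0.0
    hx, hy = 2.0 / nx, 1.0 / ny
    for i in range(nx):
        x = -1.0 + (i + 0.5) * hx
        for j in range(ny):
            y = (j + 0.5) * hy
            hz = (1.0 - y * y) / nz      # z-column height adapts to y
            for k in range(nz):
                z = (k + 0.5) * hz
                total += f(x, y, z) * hx * hy * hz
    return total

rho = lambda x, y, z: y * z              # mass density
mass = triple_integral(rho)              # exact value: 1/6
xbar = triple_integral(lambda x, y, z: x * rho(x, y, z)) / mass
ybar = triple_integral(lambda x, y, z: y * rho(x, y, z)) / mass
zbar = triple_integral(lambda x, y, z: z * rho(x, y, z)) / mass
print(mass, (xbar, ybar, zbar))          # ≈ 1/6 and (0, 16/35 ≈ 0.457, 1/2)
```

Note that the x-coordinate comes out as 0 by the symmetry of the region and the density in x, as expected.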
October 6th 2010, 03:41 AM #2
MHF Contributor
Apr 2005
October 6th 2010, 04:21 AM #3
Aug 2009
|
{"url":"http://mathhelpforum.com/calculus/158576-using-triple-integrals-find-centre-gravity-object-varying-density.html","timestamp":"2014-04-17T15:17:39Z","content_type":null,"content_length":"41258","record_id":"<urn:uuid:28e0e157-a5b0-4f08-bf10-3f6e977096b4>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00056-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Bayes factors in practice
, 1995
"... In a 1935 paper, and in his book Theory of Probability, Jeffreys developed a methodology for quantifying the evidence in favor of a scientific theory. The centerpiece was a number, now called
the Bayes factor, which is the posterior odds of the null hypothesis when the prior probability on the null ..."
Cited by 981 (70 self)
In a 1935 paper, and in his book Theory of Probability, Jeffreys developed a methodology for quantifying the evidence in favor of a scientific theory. The centerpiece was a number, now called the
Bayes factor, which is the posterior odds of the null hypothesis when the prior probability on the null is one-half. Although there has been much discussion of Bayesian hypothesis testing in the
context of criticism of P -values, less attention has been given to the Bayes factor as a practical tool of applied statistics. In this paper we review and discuss the uses of Bayes factors in the
context of five scientific applications in genetics, sports, ecology, sociology and psychology.
- JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION , 1996
"... ... In this article we present the general outlook and discuss general families of elaborations for use in practice; the exponential connection elaboration plays a key role. We then describe
model elaborations for use in diagnosing: departures from normality, goodness of fit in generalized linear mo ..."
Cited by 13 (1 self)
... In this article we present the general outlook and discuss general families of elaborations for use in practice; the exponential connection elaboration plays a key role. We then describe model
elaborations for use in diagnosing: departures from normality, goodness of fit in generalized linear models, and variable selection in regression and outlier detection. We illustrate our approach
with two applications.
, 1999
"... The Schwarz information criterion (SIC, BIC, SBC) is one of the most widely known and used tools in statistical model selection. The criterion was derived by Schwarz (1978) to serve as an
asymptotic approximation to a transformation of the Bayesian posterior probability of a candidate model. Althoug ..."
Cited by 4 (1 self)
The Schwarz information criterion (SIC, BIC, SBC) is one of the most widely known and used tools in statistical model selection. The criterion was derived by Schwarz (1978) to serve as an asymptotic
approximation to a transformation of the Bayesian posterior probability of a candidate model. Although the original derivation assumes that the observed data is independent, identically distributed,
and arising from a probability distribution in the regular exponential family, SIC has traditionally been used in a much larger scope of model selection problems. To better justify the widespread
applicability of SIC, we derive the criterion in a very general framework: one which does not assume any specific form for the likelihood function, but only requires that it satisfies certain
non-restrictive regularity conditions.
, 805
"... for sequence pattern models ∗ ..."
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1348075","timestamp":"2014-04-20T07:02:37Z","content_type":null,"content_length":"18829","record_id":"<urn:uuid:fb4f71ec-685f-4422-ab32-8805bd64482b>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00569-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Operations with the Golden Beads
Multiplication 1
As for addition
To understand the concept of multiplication as equal addition
To learn the vocabulary: multiplication, multiplicand, multiplier, and product
(Note: Because multiplication is the addition of equal quantities, it may be introduced at any time after children have learned addition. Since these problems do not include exchanging, the
product of any hierarchy must not be greater than 9, and the largest product possible is 9999.)
5 years onwards
A group exercise for a few children. The material is arranged as for addition. One child is responsible for the golden beads material. One child is in charge of the large number cards. One child
oversees the small number cards.
The teacher thinks of a problem, e.g. 2322 x 3. The teacher makes 2322, three times in small number cards and puts them on each of three trays. She gives a tray to each of three children. She
asks the children to read the number on their tray in turn. When they have done so, she stresses the fact that they each have the same numeral. "You each have the same numeral. You each have
2322. Will you all go to the bank and collect that amount in golden beads."
The children collect 2322 each and come back to the teacher.
The teacher takes each tray in turn. She takes the quantities off each tray, saying, "You have brought 2322." She arranges the quantities on the mat one below the other, and takes the small
number cards and places them under each other at the top of the mat. The teacher says, indicating the golden beads, "We have 2322 three times."
We will add them together and see how much we have altogether. She adds the hierarchies. She asks a child to count the result beginning with the units. After each hierarchy has been counted, the
corresponding amount in large number cards is placed beside it.
The teacher superimposes the number cards and places them under the small ones at the top of the table. She recaps, "We had 2322 three times. We added them together and got 6966." Other problems
can be worked in this way.
(Note: In multiplication small equal quantities are added together to make a larger quantity. Small number cards are used for the quantities and large number cards for the product to help give
this impression.)
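The worked example (2322 taken three times, with no column needing an exchange) can be sketched place by place. This is an illustration of the lesson's constraint, not part of the Montessori material itself:

```python
# 2322 taken 3 times, added place by place. No "exchanging" (carrying)
# is needed because every column sum stays at or below 9.
multiplicand, multiplier = 2322, 3
digits = [int(d) for d in str(multiplicand)]
column_sums = [d * multiplier for d in digits]
assert all(s <= 9 for s in column_sums)   # the lesson's no-exchange rule
product = int("".join(str(s) for s in column_sums))
print(product)  # 6966, matching the recap "We had 2322 three times ... got 6966"
```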
|
{"url":"http://www.montessoriworld.org/Math/Decimals/goldbead/beadmul1.html","timestamp":"2014-04-17T18:58:24Z","content_type":null,"content_length":"3891","record_id":"<urn:uuid:83c2ccea-9606-4edd-a27d-fae07405ea24>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00267-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Help
October 14th 2008, 07:46 PM #1
Junior Member
Sep 2008
a) Prove that if n is a perfect square, then n + 2 is not a perfect square.
b) Use a proof by cases to show that min(a, min(b, c)) = min(min(a, b), c).
a) If n is a perfect square, it has the form 4b or 4b+1.
Let n = a^2.
If a=2k (even) then n = 4(k^2) = 4b.
If a=2k+1 (odd) then n= 4k^2+4k+1 = 4(k^2+k) + 1 = 4b+1.
4b+2 and 4b+3 are not of these forms, and it is thus impossible that they are perfect squares.
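A numerical sanity check of the mod-4 argument (not a proof, just evidence over a finite range):

```python
# Every perfect square leaves remainder 0 or 1 when divided by 4, so a
# square plus 2 leaves remainder 2 or 3 and can never itself be a square.
squares = {k * k for k in range(1, 1000)}
assert {s % 4 for s in squares} == {0, 1}
assert all(n + 2 not in squares for n in squares)
```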
October 14th 2008, 07:57 PM #2
|
{"url":"http://mathhelpforum.com/number-theory/53762-proofs.html","timestamp":"2014-04-20T23:54:27Z","content_type":null,"content_length":"31298","record_id":"<urn:uuid:24314feb-87cc-4247-973c-84882bde748e>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00351-ip-10-147-4-33.ec2.internal.warc.gz"}
|
This is a virtual maths board game where you have to keep track on how much money you have to deposit at the end of the month.
Martin Burrett on 07 May 13
A truly amazing Apple App for learning maths. Download the host or client app to your Apple device and set maths quizzes to complete in real time together in class. The apps communicate through a
wireless network or Bluetooth and the host device tracks and keeps all the data for each quiz so you can see where your class need to improve. To set questions you just turn the sections on or
off and the app chooses questions from these at random.
Martin Burrett on 06 May 13
This is a wonderfully entertaining maths game where players adjust the force and angle to throw bananas at the opponent gorilla to score points.
Martin Burrett on 06 May 13
Start your students on a roll with this fun multiplication game iPad app. Roll the virtual marble to the correct answers to complete the level.
Mark Gleeson on 01 Apr 13
A range of apps covering aspects of number and algebra, measurement and geometry, statistics and probability. Most are free.
Martin Burrett on 07 Feb 13
This site has a great set of maths games that are sorted into different primary age groups. Practise multiplication, more/less than, decimals and much more. No sign in or registration needed.
Martin Burrett on 24 Jan 13
A useful virtual maths geoboard to explore a range of shape and angle work. There is also an iPad version which can be found at https://itunes.apple.com/us/app/geoboard-by-math-learning/
Martin Burrett on 23 Jan 13
A useful flash fractions activity. You can either read fractions or make equivalents. A good resource for whiteboards.
S J R on 14 Jan 13
"If students could develop their school's curriculum this is what it would look like. Find lessons that teach core academic subjects through popular culture including Math, Science, English,
History and many more!"
Martin Burrett on 10 Jan 13
A fun dinosaur archaeologist themed coordinates game. Find the correct coordinates to uncover the dinosaur bones.
Martin Burrett on 10 Jan 13
Kenken is a logic puzzle game similar to Sudoku, but players are given + - x or ÷ questions to solve the area and complete the grid. This site has a range puzzles at various complexities and
difficulty levels.
Martin Burrett on 06 Jan 13
This is a superb maths number line resource. Choose the scale and then run calculations of counting on and counting back.
Jac Londe on 14 Dec 12
Google always surprises me with its new tools.
Interesting for maths problems ...
Martin Burrett on 04 Dec 12
This is a great YouTube channel with a range of recorded maths lessons aimed at 11-18 year olds. Watch as lessons expertly presented about areas of sectors, probability, simultaneous equations
and much more. Check out @mathschallenge for daily maths questions to try out in your class.
Martin Burrett on 22 Nov 12
This great maths site has an amazing collection of maths self-marking problem solving questions. Search by age level or topic. This covers both Primary and Secondary levels. Topics include
numbers, geometry, algebra, data analysis, probability and more.
|
{"url":"https://groups.diigo.com/group/diigoineducation/content/tag/maths?page_num=1","timestamp":"2014-04-18T03:28:34Z","content_type":null,"content_length":"157986","record_id":"<urn:uuid:ae1df4e0-85a3-481e-8cba-d2ce28dc7647>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00175-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Number System Conversions
When writing programs for microcontrollers we’re usually stuck dealing with 3 different number systems: decimal, binary and hexadecimal (or hex). We use decimal because it comes naturally; that’s the
way we count. Unfortunately, it’s not how computers count. Since computers and microcontrollers are limited to 1’s and 0’s, they count using sequences of these numbers. This is the binary number
system. Binary numbers are usually prefixed with the '0b' characters which are not part of the number. Sometimes they are also subdivided into groups of 4 digits to make them easier to read as well
as easier to relate to the hexadecimal number system. An example of a binary number is 0b0100.1011. The periods in the number don't represent anything, they just make it easier to read the number.
The binary system is simple to understand, but it takes a lot of digits to use the binary system to represent large numbers. The hexadecimal system can represent much larger numbers using fewer
characters, and it closely resembles binary numbers. Hexadecimal numbers are usually prefixed with the characters '0x' which are not part of the number. A single hexadecimal digit can represent four
binary digits!
Binary numbers can only consist of 1’s and 0’s; typically a binary number consists of 8 digits (or some multiple of 8) if it’s being used in some kind of a computer (or microcontroller). It’s useful
to know how to convert a binary number into a decimal number and vice versa. So how do we convert between number systems? First consider how we determine the value of a decimal number. The number 268
can be broken down as 200 + 60 + 8, or 2 * (10^2) + 6 * (10^1) + 8 * (10^0). There are two important numbers that we have to know to ‘deconstruct’ the number - the base of the number system and the
location of the digit within the number. The base of a decimal number is 10. When we’re converting the number 268, 2 is the second digit, 6 is the first digit and 8 is the zeroth digit. Each digit has
to be scaled according to its place within the number. The scale of the digit is the base of the number system raised to the power of the digit's location in the number. So each number is scaled, and
then all of the scaled digits are added to find the total value of the number.
The same method can be used to find the value of a binary number. For example, let’s look at the number 0b1011.0101. The base of the binary system is 2 (the prefix 0b is often used in code to
indicate that the number is in the binary format). The value of our number is: 1*(2^7)+0*(2^6)+1*(2^5)+1*(2^4)+0*(2^3)+1*(2^2)+0*(2^1)+1*(2^0), which is equal to 181.
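That positional sum translates directly into code. A small Python sketch of the arithmetic just described (the tutorial itself targets microcontrollers, so treat this as illustration, not firmware code):

```python
def binary_to_decimal(bits):
    """Sum each binary digit scaled by 2 to the power of its position,
    counting positions from the rightmost digit (position 0)."""
    total = 0
    for position, digit in enumerate(reversed(bits)):
        total += int(digit) * 2 ** position
    return total

print(binary_to_decimal("10110101"))  # the 0b1011.0101 example -> 181
```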
0b1011.0101. What a completely inefficient way of typing a number! But we can represent the same binary number using only 2 hexadecimal digits. First though, we'll start by converting a hexadecimal
(hex) number to decimal like we did for a binary number. How about 0xB5? Wait, what?! The prefix 0x is used in code to indicate that the number is being written in hex. But what is ‘B’ doing in
there? The hexadecimal format has a base of 16, which means that each digit can represent up to 16 different values. Unfortunately, we run out of numerical digits after ‘9,’ so we start using
letters. The letter ‘A’ represents 10, ‘B’ is 11, ‘C’ is 12, ‘D’ is 13, ‘E’ is 14 and ‘F’ is 15. ‘F’ is the largest digit in the hex numbering system. We convert the number the same way as before.
The value of 0xB5, then, is: B*(16^1)+5*(16^0) or 181.
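The same loop works for hexadecimal once the letter digits A-F are mapped to the values 10-15; another illustrative sketch:

```python
HEX_DIGITS = "0123456789ABCDEF"

def hex_to_decimal(digits):
    """Sum each hex digit scaled by 16 to the power of its position;
    the position of a digit in HEX_DIGITS is its value (A = 10 ... F = 15)."""
    total = 0
    for position, digit in enumerate(reversed(digits.upper())):
        total += HEX_DIGITS.index(digit) * 16 ** position
    return total

print(hex_to_decimal("B5"))  # B*(16^1) + 5*(16^0) = 181
```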
Knowing how to convert binary and hex to decimal is important, but the most useful number conversion is probably converting between hex and binary. These two numbering systems actually work pretty
well together. The numbering systems happen to be related such that a single hex digit represents exactly 4 binary digits, and so 2 hex digits can represent 8 bits (or binary digits). Here’s a table
that shows how each hex digit is related to the binary system:
│Binary Value │Hex Value │
│0000 │0 │
│0001 │1 │
│0010 │2 │
│0011 │3 │
│0100 │4 │
│0101 │5 │
│0110 │6 │
│0111 │7 │
│1000 │8 │
│1001 │9 │
│1010 │A │
│1011 │B │
│1100 │C │
│1101 │D │
│1110 │E │
│1111 │F │
For example, to convert the hex number 0x1C to binary, we would find the corresponding binary value for 1 and C and combine them. So, 0x1C in binary is 0b0001.1100. If we wanted to figure out the hex
value for a binary number we just go the other way. To find the hex representation of the binary number 0b0010.1011 we first find the hex value for 0010, then the hex value for 1011 and combine them;
the hex value would be 0x2B.
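The digit-for-digits substitution described above can be sketched with a lookup table built from the same mapping:

```python
# one hex digit corresponds to exactly four binary digits (see table above)
HEX_TO_BITS = {format(v, "X"): format(v, "04b") for v in range(16)}
BITS_TO_HEX = {bits: digit for digit, bits in HEX_TO_BITS.items()}

def hex_to_binary(digits):
    """Replace each hex digit with its 4-bit group, dot-separated for reading."""
    return ".".join(HEX_TO_BITS[d] for d in digits.upper())

def binary_to_hex(bits):
    """Split the bit string into groups of four and look up each group."""
    groups = [bits[i:i + 4] for i in range(0, len(bits), 4)]
    return "".join(BITS_TO_HEX[g] for g in groups)

print(hex_to_binary("1C"))        # -> 0001.1100
print(binary_to_hex("00101011"))  # -> 2B
```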
There are many free tools available to help convert between these numbering systems; just google ‘hex number conversion.’ If you use Windows as an operating system you have a great tool built into
the calculator. Just change the calculator to scientific mode and you can convert between number systems by typing a number then changing the format of the calculator!
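Most programming languages also build these conversions in. In Python, for instance:

```python
n = int("B5", 16)          # parse a hexadecimal string into an integer
print(n)                   # 181
print(bin(n))              # 0b10110101
print(hex(n))              # 0xb5
print(int("10110101", 2))  # parse a binary string -> 181
```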
What… No love for octal? ;-)
I just registered on SparkFun and I already found something interesting. Now I can convert from binary and hexadecimal to decimal, but how do I convert from decimal to binary or hexadecimal? I
don’t know if that could be useful at all, I just want to do it for the fun of it.
I’m having trouble with the statement “A single hexadecimal digit can represent four binary digits!”.
My understanding is that HexaDecimal is 16 so it would hold up to 16 binary digits, not 4.
I’m a programmer that has worked with Packed Fields (AS/400) and I can’t figure how it only holds 4 binary digits.
A four-bit field can be used to represent 16 different possible values. Values = 2^storage bits.
There isn’t as much bit-twiddling with the AS/400 as there is in embedded systems. My co-workers were in awe when I did an uppercase conversion by using %BITOR with a 0x80. It’s a different
To work out how much information (number of combinations) a number of a given length (n digits) in base m (i.e. 2 for binary, 10 for decimal, 16 for hexadecimal) can hold: it is m^n.
1 hex digit can store 16^1 = 16 combinations or ‘states’, which is the same as 4 binary digits (2^4 = 16).
Some common conversions:
1 octal = 3 binary
1 hex = 4 binary
1 base-64 = 8 binary
1 base-64 digit is 6 binary digits, not 8.
A single hex digit:
0,1,2,3,4,5,6,7,8,9,A,B,C,D,E,F
Check out the table near the bottom of the tutorial.
seanroberts: Why does it have to be 0b and 0x to tell which form it’s in, not b and x?
I understand the need for a prefix, I just want to know why it has to be 0b or 0x instead of just b or x. You were talking about making things ambiguous, and having different types of numbers start with 0 sounds ambiguous.
More to your point, the reason is speed and convenience for a parser. When a computer program is trying to read text it helps to break things down into smaller pieces. If you were to write a compiler or parser or whatnot, you might start by breaking all the text down into words based on where you find blank space. With that done, your program may loop over every word trying to make sense of it. For each token (word) you need to decide whether it’s a number, punctuation, keyword, or other word. What’s a quick way for a program to know if a token is actually a number? It makes things easier if you make a rule that all numbers must start with a number, regardless of the number base (also called the radix).
Have you noticed that many programming languages have a rule that function and variable names can have letters and numbers but they must not start with a number? It’s for the same reasons. That rule is the “other side of the coin” if you follow what I mean.
Using 0 allows for quick determination that the token is a number and that the following character will tell you what kind of number. That’s why zero is used. It’s for speed.
I don’t see a compiler needed for the tutorial. In fact each number has its radix written along with it, as in “hex number 0x1C in binary”. The problem here is three fold. First, 0x1C is not a hex number, 1C is a hex number. Second, many languages do not use the C pre-processor conventions (they are not part of C IIRC). Third, I commend the tutorial author for using upper case for hex numbers. The C pre-processor encourages a very sloppy hex style by allowing upper and lower case letters. Why is it sloppy? Because upper and lower case are completely different ASCII values and some languages allow arbitrary radix, like base 60, where the upper and lower are actually different numbers.
To be precise in writing about computers it is better practice (can not bring myself to use that horrible “bxxx practices” phrase) to use upper case for hex, and state the radix without using compiler specific prefixes. Besides, those Roman numerals are confusing.
The notation will be determined by the compiler that you’re using. For example, Freescale’s software CodeWarrior defaults to decimal if no prefix or suffix is added. Hexadecimal is indicated by adding ‘$’ as a prefix, binary by adding ‘b’ as a suffix, and decimal by adding ‘t’ as a suffix. That’s for Freescale assembly, I assume the C notation would be the same.
A tutorial on binary math would be very useful…
Why does it have to be 0b and 0x to tell which form it’s in, not b and x?
0b and 0x are very widely-held conventions to distinguish the base you are using. Many programming languages (C, C++, Python, etc.) and much of computer science in general follow this convention.
For instance (you are on your private island or something), you could write numbers as 10101 (base: binary) or 66 (base: hexadecimal). The immediate problem is that as you read left to right, you automatically see 10,101 (decimal) or 66 (decimal) - neither one of which is right.
To distinguish, a prefix is added because your eyes see the prefix first. 0x is used as a prefix for hex notation and 0b is for binary. The examples I listed then become 0b10101 or 0x66.
If you see a number without a prefix, assume base 10 (decimal). 0b is binary (b for binary). 0x is hex (it has an x in it).
Octal (base 8) has BAD prefix notation but fortunately octal is only rarely encountered. Octal is just prefixed with a zero. This is highly ambiguous and VERY prone to misreading, confusion, and error. For instance, if I wrote the number 067, the most common interpretation is to strip the zeroes in front, thus giving a decimal 67. If we were talking octal, this leading zero would mean it would be an octal number.
The number fifty-five in binary, octal, decimal, and hex:
0b110111 = 067 = 55 = 0x37
seanroberts: Why does it have to be 0b and 0x to tell which form it’s in, not b and x?
I understand the need for a prefix, I just want to know why it has to be 0b or 0x instead of just b or x. You were talking about making things ambiguous, and having different types of numbers start with 0 sounds ambiguous.
Remember that it is a computer program (compiler) that normally reads this stuff. So, when it sees a letter after a space, it assumes a word. If it is a numeral, then, it is a number.
By default a number is a decimal. If it starts with 0x or 0b then it is a hex or binary number.
|
{"url":"https://www.sparkfun.com/tutorials/216","timestamp":"2014-04-16T10:48:49Z","content_type":null,"content_length":"69271","record_id":"<urn:uuid:91ce6a65-3d46-4a56-938e-203d39fd84ec>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00418-ip-10-147-4-33.ec2.internal.warc.gz"}
|
I’ve been reading Dijkgraaf, Gukov, Kazakov and Vafa. They point out the obvious fact that the Vandermonde determinant (which arises from fixing the overall U(N) symmetry of the integral) that
appears in the Matrix Integral can be represented using Faddeev-Popov “ghosts”.
So far, not too surprising. What had not occurred to me before they mentioned it, is that this leads to a much simpler set of Feynman rules for the perturbative evaluation of the Matrix Integral.
It’s obvious, in retrospect, but leads to some very nice calculations.
Posted by distler at October 29, 2002 3:28 PM
|
{"url":"http://golem.ph.utexas.edu/~distler/blog/archives/000021.html","timestamp":"2014-04-16T14:34:03Z","content_type":null,"content_length":"9069","record_id":"<urn:uuid:dc3ec8d0-b6c5-4100-9611-369ab7430be4>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00407-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Portability uses gnuplot and ImageMagick
Stability provisional
Maintainer Alberto Ruiz (aruiz at um dot es)
This module is deprecated. It can be replaced by improved drawing tools available in the plot/plot-gtk packages by Vivian McPhail or Gnuplot by Henning Thielemann.
mplot :: [Vector Double] -> IO ()Source
plots several vectors against the first one
> let t = linspace 100 (-3,3) in mplot [t, sin t, exp (-t^2)]
plot :: [Vector Double -> Vector Double] -> (Double, Double) -> Int -> IO ()Source
Draws a list of functions over a desired range and with a desired number of points
> plot [sin, cos, sin.(3*)] (0,2*pi) 1000
parametricPlot :: (Vector Double -> (Vector Double, Vector Double)) -> (Double, Double) -> Int -> IO ()Source
Draws a parametric curve. For instance, to draw a spiral we can do something like:
> parametricPlot (\t->(t * sin t, t * cos t)) (0,10*pi) 1000
splot :: (Matrix Double -> Matrix Double -> Matrix Double) -> (Double, Double) -> (Double, Double) -> Int -> IO ()Source
Draws the surface represented by the function f in the desired ranges and number of points, internally using mesh.
> let f x y = cos (x + y)
> splot f (0,pi) (0,2*pi) 50
mesh :: Matrix Double -> IO ()Source
Draws a 3D surface representation of a real matrix.
> mesh $ build (10,10) (\i j -> i + (j-5)^2)
In certain versions you can interactively rotate the graphic using the mouse.
|
{"url":"http://hackage.haskell.org/package/hmatrix-0.11.0.3/docs/Graphics-Plot.html","timestamp":"2014-04-17T17:16:28Z","content_type":null,"content_length":"16996","record_id":"<urn:uuid:e8802c5e-0af0-4bbd-ada4-ca89478b3889>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00435-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math/Optics: How can I derive the focal length of a lens from the magnification percentage of an image?
[more inside] posted by ScarletPumpernickel on Mar 13, 2011 - 10 answers
I have a poetic, creative, intuitive brain. I feel out of my element talking to programmers, mathematicians, and physicists, yet I find their ideas aesthetically intriguing and I want to understand
[more inside] posted by xenophile on Mar 9, 2011 - 17 answers
Any recommendations for a good graduate level text book for an introduction to mathematical finance course?
[more inside] posted by jeffburdges on Mar 8, 2011 - 1 answer
I'm trying to express my love for a certain dessert in a math/logic formula. I have this:
π > ∼π
, which I take to mean "Pi is greater than not Pi", and this:
π > ∞-π
, which I take to mean "Pi is greater than Everything but Pi". Do these make any sense or hold up in any legit way?
posted by TheCoug on Mar 5, 2011 - 15 answers
Post grad studies in Game Theory? Which universities have a focus in Game Theory? Are there Masters available for this? Sorry for the grunt like presentation to my question. I just thought the
abbreviated style would be best for a summary review of what I'm looking for. Thanks!
posted by EricBrotto on Mar 1, 2011 - 6 answers
The solution to 12 = x - (square root of x) is 16, but how do I manipulate the equation to show that?
[more inside] posted by Wanderboy on Feb 23, 2011 - 47 answers
If I know the number of widgets, number of days, and total budget, can I break down a per widget price?
[more inside] posted by pencroft on Feb 22, 2011 - 10 answers
Chemistry, math nerds, and smart people please help me with this cooking question involving salt.
[more inside] posted by anonymous on Feb 20, 2011 - 13 answers
Should I pursue my interest and study college physics more? Specifically, should I take a year-long calculus-based program, until I get comfortable that I'm on top of it-- and can really judge my
level of interest and aptitude fairly? Considering I'm a Junior English major. Also considering that it's been 15 years since HS algebra and I've never been good at math. But I want to be.
[more inside] posted by reenka on Feb 10, 2011 - 31 answers
A friend in college once mentioned a theorem for deciding the number of people to meet before deciding on the person to marry. Was it just a joke, or is there really such a theorem?
[more inside] posted by bleary on Feb 8, 2011 - 16 answers
if I'm looking through a library... does being published with Springer Verlag mean that the research is legit, and does being published with Nova Publishers mean that it isn't?
posted by moorooka on Jan 30, 2011 - 6 answers
I've been asked to teach a week at a summer university. Help me to design the most awesome computer science/discrete math course ever.
[more inside] posted by mathemagician on Jan 27, 2011 - 7 answers
I have a series of tables describing a set of data and their values for certain points in time. I want to create a multi-line chart from this data so I can see the growth curves. Is there a name for
this kind of plot? It must be a solved problem and some tools to do this but I can't google for it if I don't know the name of what I want.
[more inside] posted by rdhatt on Jan 26, 2011 - 7 answers
When doing mixed-operation arithmetic keeping significant figures, do you round after each operation- according to the rules for that particular step? Or do you keep everything in long-form
calculator numbers, then round at the final answer?
[more inside] posted by tremspeed on Jan 25, 2011 - 21 answers
Please help me evaluate integrals more accurately. Sloppy calculus is killing me.
[more inside] posted by PFL on Jan 24, 2011 - 14 answers
Please refresh my memory with regard to a straightforward question of fair dice and probability.
[more inside] posted by Justinian on Jan 20, 2011 - 14 answers
Math Filter: I am trying to find a name for this equation. I am not sure if I can solve it given the information that I currently have.
[more inside] posted by occidental on Jan 17, 2011 - 14 answers
Do the economics in Nick Spencer and Christian Ward's
Infinite Vacation
make sense? Are supply and demand meaningless concepts when an infinite multiverse is brought into the picture?
[more inside] posted by davextreme on Jan 12, 2011 - 9 answers
Can someone give me some insight as to how to approach and make sense of mathermatical proofs?
[more inside] posted by Listener on Jan 11, 2011 - 30 answers
I'm about to start a 300-level Statistics course, but my math skills are very rusty. How can I quickly rejuvenate my old skills?
[more inside] posted by double block and bleed on Jan 8, 2011 - 5 answers
I just failed my math class (a finance class). I'm smart at math but I don't handle organization well for math-related classes. How can I handle them classes better?
[more inside] posted by dubadubowbow on Dec 17, 2010 - 17 answers
Counting-Filter: A friend once asked me how many points the entire 52-card deck is worth in the game of Cribbage...not that you could actually be dealt the whole deck, but just a curiosity.
[more inside] posted by klausman on Dec 6, 2010 - 5 answers
I'm a pure mathematician with a Ph.D. and I'm currently a visiting professor at a large state university. However, I'm looking to switch gears and get into the quantitative finance field. The problem
is that I don't know anything at all about finance. I know that there are companies such as D. E. Shaw that hire mathematicians that don't have financial experience; what other companies should I
look at? Is there any general advice you'd give someone in my position? Also, I have my Ph.D. from a well-regarded state school, but I'm not an Ivy-leaguer; does that put me at a disadvantage?
posted by Frobenius Twist on Nov 29, 2010 - 6 answers
I have a learning disability (dyscalculia/mathematics disorder). Could I handle the formal language component of an undergrad Introduction to Logic class?
[more inside] posted by autoclavicle on Nov 23, 2010 - 19 answers
When do I look for non-academic jobs if I'm trying to transition out of academia, and which jobs do I look for? (I have a math PhD.)
[more inside] posted by madcaptenor on Nov 20, 2010 - 18 answers
I get to teach myself Cryptography for a class, please help me pick my book!
[more inside] posted by zidane on Nov 14, 2010 - 12 answers
But Sir, what is x? Teaching students algebra for the first time. They keep wanting to put values in for x, and write that down. E.g. x + x = . Becomes 1 + 1 = 2, in their books. Whereas I want x + x
= 2x. Does anyone have any strategies, or techniques to overcome this?
[more inside] posted by 92_elements on Nov 11, 2010 - 23 answers
I've been seriously considering becoming a high school math teacher. I have some experience teaching in a classroom, and lots of experience tutoring. What I'd like to know now - before I jump into an
alternative degree program - is what it's actually like being a teacher day in and day out. If I'm not going to be able to hack it, I'd rather know now.
[more inside] posted by Gori Girl on Nov 9, 2010 - 30 answers
Poor student.
The prices of TI calculators are 2 damn high.
Looking for:
Free software that performs wide range of operations (most important: a "matrix" function that is easy to punch in data).
posted by Taft on Nov 8, 2010 - 34 answers
Assuming a grasp of college algebra, what maths would one need to understand the entirety of the
wikipedia entry
on white noise. Which order would be the most advisable way to learn those maths and which books might be good for a self study on those topics?
posted by Drama Penguin on Oct 28, 2010 - 7 answers
I'm writing a graphics app where I'm generating a simple gear transmission system with a variable number of gears each with variable number of teeth and placed at random orbits around each other. I
generate gears in succession, placing and rotating each with respect to its parent/driver gear before moving on to the next. The problem I'm having is how to determine the initial orientation of each
subsequent gear such that its teeth are properly meshed with that of its driver.
[more inside] posted by Epsilon-minus semi moron on Oct 25, 2010 - 6 answers
Textbooks on data mining techniques / statistical analysis on large data sets?
[more inside] posted by wooh on Oct 22, 2010 - 5 answers
I need help untangling a formula for tracking poker winnings. All this math is making my head hurt! Totally legal details within.
[more inside] posted by c:\awesome on Oct 7, 2010 - 8 answers
Why doesn't the OED have better coverage of mathematical terms? Is this an area they want to improve on, or have they drawn a line of obscurity somewhere that just leaves out more than I expected?
[more inside] posted by ErWenn on Oct 5, 2010 - 9 answers
The area between f(x), the x-axis and the lines x=a and x=b is revolved around the x-axis. The volume of this solid of revolution is b^3-a*b^2 for any a,b. What is f(x)?
[more inside] posted by stuart_s on Sep 29, 2010 - 27 answers
this isn't homework filter, I promise: what is the cosine of 30?
posted by pipti on Sep 24, 2010 - 16 answers
How do I reduce something by -800% when I'm working with a scale between -99% and 99%?
[more inside] posted by tunestunes on Sep 13, 2010 - 15 answers
I'm interested in learning everything there is to know about waves. Sound waves, ocean waves, light waves, electromagnetic waves, waves in math, in economics, brain waves, etc, etc....
[more inside] posted by empath on Sep 7, 2010 - 15 answers
|
{"url":"http://ask.metafilter.com/tags/math?page=6","timestamp":"2014-04-18T16:52:12Z","content_type":null,"content_length":"78457","record_id":"<urn:uuid:01a7f7cf-e5de-4802-8772-19acdfa81fdf>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00034-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MathGroup Archive: May 2007 [00147]
Re: question (for Mathematica 6!)
• To: mathgroup at smc.vnet.net
• Subject: [mg75584] Re: [mg75519] question (for Mathematica 6!)
• From: Andrzej Kozlowski <akoz at mimuw.edu.pl>
• Date: Sat, 5 May 2007 06:13:49 -0400 (EDT)
• References: <200705040809.EAA04730@smc.vnet.net>
On 4 May 2007, at 17:09, dimitris wrote:
> I don't have a copy of Mathematica 6 but having spent much time
> reading the online documentation (not only because of curiosity!) I
> have one question...
> Can somebody explain me the introduction of the HeavisideTheta
> function
> http://reference.wolfram.com/mathematica/ref/HeavisideTheta.html
> in version 6?
> Note that the UnitStep function is still here
> http://reference.wolfram.com/mathematica/ref/UnitStep.html
> Dimitris
I think WRI has finally decided to make a clear distinction between
distributions (HeavisideTheta) and piecwise defined fucntions
(UnitStep). The difference can be seen here:
In[2]:= PiecewiseExpand[HeavisideTheta[x]]
Out[2]= HeavisideTheta[x]
In[3]:= PiecewiseExpand[UnitStep[x]]
Out[3]= Piecewise[{{1, x >= 0}}]
HeavisideTheta is a distribution and not a piecewise-defined
function so is not expanded. Another example that illustrates this:
Integrate[D[HeavisideTheta[x], x], x]
Integrate[D[UnitStep[x], x], x]
Note that in traditional notation both HeavisideTheta[x] and UnitStep[x] look identical (at least to me!). It might mean that
introduction of HeavisideTheta is a result of, on the one hand,
recognizing that the behaviour of UnitStep was incorrect (for a
distribution) and, on the other, of trying to preserve compatibility.
However, since UnitStep remains fully documented and supported, we
now have two superficially similar functions (the more the merrier!)
but with a deep underlying difference in meaning.
Note also the following:
in 5.2:
Integrate[DiracDelta[x - a], {x, b, c},
Assumptions -> a ∈ Reals && b < c]
UnitStep[a - b]*UnitStep[c - a]
In 6.0:
Integrate[DiracDelta[x - a], {x, b, c}, Assumptions -> Element[a,
Reals] && b < c]
HeavisideTheta[a - b]*HeavisideTheta[c - a]
In TraditionalForm the outputs look identical.
Andrzej Kozlowski
|
{"url":"http://forums.wolfram.com/mathgroup/archive/2007/May/msg00147.html","timestamp":"2014-04-20T13:29:17Z","content_type":null,"content_length":"36135","record_id":"<urn:uuid:4ab9c38d-fa3a-4f78-9ef0-73b2ae0cbb04>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00178-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MathGroup Archive: May 2013 [00055]
Re: Mathematica-assisted learning was .. Re: Speak errors
• To: mathgroup at smc.vnet.net
• Subject: [mg130717] Re: Mathematica-assisted learning was .. Re: Speak errors
• From: Helen Read <readhpr at gmail.com>
• Date: Mon, 6 May 2013 02:26:33 -0400 (EDT)
• References: <17744967.14121.1366277874273.JavaMail.root@m06> <kl0arj$l43$1@smc.vnet.net> <20130422071048.E1C5A6AF5@smc.vnet.net> <kl8bed$no7$1@smc.vnet.net>
<20130425065103.A637D6A12@smc.vnet.net> <2E7A3077-D754-4293-AEA6-168242B12BFE@math.umass.edu> <klddjp$d0o$1@smc.vnet.net> <klia9d$s97$1@smc.vnet.net> <klipeg$11h$1@smc.vnet.net>
On 4/28/2013 5:17 AM, Richard Fateman wrote:
> On 4/27/2013 9:58 PM, Helen Read wrote:
>> On 4/26/2013 4:24 AM, Richard Fateman wrote:
>>> I note that the MIT regular calculus, 18.01
>>> http://math.mit.edu/classes/18.01/Spring2013/
>>> apparently uses a computer algebra system, but not Mathematica.
>>> I do not see how it is used or how it could be used on the exams.
>> I teach calculus in a classroom (we have two such rooms) equipped with a
>> computer for each student. We use Mathematica routinely throughout the
>> semester, in and out of class, and most of the students like having it
>> and using it. We have a site license that allows the students to install
>> Mathematica on their own laptops so they can use it outside of class.
> I was speaking of how the MIT course could use computer algebra. If you
> look at the review problems
> http://math.mit.edu/classes/18.01/Spring2013/Supplementary%20notes/01rp1.pdf
> you see that quite a few of them are trivial if you just type them in to
> a computer algebra system, and presumably would not be much of a
> learning experience if in fact they were just typed in. Others require
> explain/prove/give examples.
> I have no doubt that a calculus course could be constructed using
> computer algebra systems --- I would hope it would be quite different,
> emphasizing (say) the calculational aspects of the subject and then
> observing the symbolic, almost coincidental, solutions to the same
> evaluations. It sounds like you are doing something along those lines.
> It does not surprise me that MIT is still doing the same old thing; when
> I was covering recitation sections, the main lecturer was Arthur
> Mattuck. In 1971. The notes used in 2013 are apparently by Arthur Mattuck.
> It is presumably possible to do things at U. Vermont without overcoming
> such massive inertia. I have encountered substantial inertia at UC
> Berkeley in mathematics and engineering, too.
> On the other hand, the question remains for any of these courses as to
> whether one can objectively demonstrate that students learn calculus
> more than those in a control group not using computers. I am not
> doubting for a moment that instructors who like computers prefer
> teaching using them. (Including me.) Yet there are still math
> instructors who, for whatever reason, prefer not.
It's a difficult thing to measure. Many, many years ago I developed a
"reform" version of the "baby" calculus at UVM. (This is our
two-semester, easier, calculus sequence taken by students whose majors
require them to take calculus but not at the level that is required by
say math or engineering majors. It is taken by more students at UVM than
any other course, including English 001, and many of the students have
terrible deficits in pre-calculus skills such as algebra and
trigonometry.) For the "new wave" version of the course we used a
textbook written by some folks at Clemson University that emphasized
concepts over algebra, and used data driven examples to motivate the
concepts. The emphasis was on understanding and interpretation, using
graphing calculators to handle the drudge work. Almost everything was a
"word problem" and rote skill-and-drill problems were downplayed (though
we still assigned some). Many of the faculty freaked out over this
approach, and we ended up with two separate tracks taught by different instructors.
One semester we did make an attempt to compare outcomes by putting some
common questions on the final exams, but the more conceptual questions
that those of us teaching the "new wave" course proposed were rejected
by those teaching the traditional course as "unfair" questions that
their students should not be expected to answer. Which tells you
something right there. In the end we found no statistical difference
between the two groups on the skill questions (e.g., product rule), and
on the (very few) mildly conceptual questions that we were permitted to
ask the students in the "reform" group outperformed (in a statistically
significant way) the students from the traditional group. Nonetheless,
there was so much faculty resistance that the "reform" version ended up
being given a separate course number, and was eventually killed off
because almost all of the client departments continued to require the
original traditional version.
I haven't taught the baby calculus in ages (I stopped when the reform
version was discontinued) and don't really know what they are doing with
it these days, but my sense is that it is somewhere in between the two
versions, but closer to the old traditional way.
> <snip> ....
>> I find that
>> overall they seem to end up with a better understanding of series than
>> my students did years ago when all we did was paper-and-pencil
>> convergence (which the students found to be terribly abstract).
> Can you quantify this? (This is somewhat unfair -- you are stating your
> own observations and I'm asking you to be an expert on human factors,
> learning, etc. I've often seen and participated in "innovation" in
> teaching and rarely tried to prove the innovation had positive results!
> Nevertheless, it would be nice to have "evidence".)
Unfortunately I have nothing more than anecdotal evidence.
>> My students do use Mathematica on exams, but not for everything. I make
>> up exams in two parts. Part 1 is paper and pencil only, and I keep the
>> computers "locked" (using monitoring software installed on all the
>> student computers). When a student finishes Part 1, s/he hands it in and
>> I unlock that particular computer (which I can do remotely from the
>> instructor's desk), and the student has full use of Mathematica for Part
>> 2. I can monitor what the students are doing on their computers from the
>> instructor's station (and of course I get up and walk around and answer
>> questions if they get stuck on something like a missing comma). We have
>> a printer in the room so that the students can print their work and
>> staple it to their test paper when they hand it in.
> I have no doubt that there are interesting calculations that are vastly
> easier to do with the help of a computer algebra system.
> I would be interested to see what kinds of questions you can ask on a
> calculus exam that (a) test something that students are expected to know
> from a calculus course and (b) require (or are substantially assisted
> by) Mathematica.
>> I've been teaching this way since the late 1990s, and wouldn't dream of
>> going back to doing it without technology.
> Another question, based on my own observations ... If you are on
> sabbatical and not available to teach this course, does someone else
> pick it up and teach it the same way? What I've seen is that when the
> computer enthusiast is not available, the course reverts to something
> rather more traditional.
My department enacted a policy that requires some use of Mathematica
throughout the three semester "grown up" calculus sequence, and drew up
a document of minimum Mathematica competence that should be achieved by
all students. From my point of view Mathematica competence in and of
itself isn't really the main point, it's just a means to a greater end.
Still, most of the faculty are on board with this, and many are very
enthusiastic and integrate Mathematica in ways that we believe benefit
the students (again, no hard evidence -- but I don't have hard evidence
for lots of the choices I make in teaching; all I have is years of
experience and observation). If I were not around, there are enough
others to carry on.
International Journal of Mathematics and Mathematical Sciences
Volume 21 (1998), Issue 3, Pages 519-532
Regularized sum for eigenfunctions of multi-point problem in the commensurable case
Department of Mathematics, Faculty of Science, Tanta University, Tanta, Egypt
Received 3 January 1995; Revised 22 May 1995
Copyright © 1998 S. A. Saleh. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium,
provided the original work is properly cited.
Consider the eigenvalue problem given in the interval $[0,\pi]$ by the differential equation
$$-y''(x)+q(x)y(x)=\lambda y(x),\qquad 0\le x\le\pi, \qquad (0.1)$$
together with the multi-point conditions
$$U_1(y)=\alpha_1 y(0)+\alpha_2 y(\pi)+\sum_{K=3}^{n}\alpha_K y(x_K\pi)=0,\qquad U_2(y)=\beta_1 y(0)+\beta_2 y(\pi)+\sum_{K=3}^{n}\beta_K y(x_K\pi)=0, \qquad (0.2)$$
where $q(x)$ is a sufficiently smooth function defined on the interval $[0,\pi]$. We assume that the points $x_3,x_4,\dots,x_n$ divide the interval $[0,1]$ into commensurable parts and that $\alpha_1\beta_2-\alpha_2\beta_1\ne 0$. Let $\lambda_{k,s}=\rho_{k,s}^2$ be the eigenvalues of problem (0.1)-(0.2), which we assume to be simple, where $k,s$ are positive integers, and let $H_{k,s}(x,\xi)$ be the residues of Green's function $G(x,\xi,\rho)$ of problem (0.1)-(0.2) at the points $\rho_{k,s}$. The aim of this work is to calculate the regularized sum
$$\sum_{k}\sum_{s}\bigl[\rho_{k,s}^{\sigma}H_{k,s}(x,\xi)-R_{k,s}(\sigma,x,\xi,\rho)\bigr]=S_\sigma(x,\xi). \qquad (0.3)$$
This sum can be represented by the coefficients of the asymptotic expansion of $G(x,\xi,\rho)$ in negative powers of $k$. In (0.3), $\sigma$ is an integer, while $R_{k,s}(\sigma,x,\xi,\rho)$ is a function of the variables $x,\xi$ defined on the square $[0,\pi]\times[0,\pi]$ which ensures the convergence of the series (0.3).
[FOM] About ln |x|
Harvey Friedman friedman at math.ohio-state.edu
Mon Feb 9 18:43:13 EST 2004
On 2/9/04 4:58 PM, "Adam Epstein" <adame at maths.warwick.ac.uk> wrote:
> I don't know anything about the status of these questions. But matters such
> as these are already an issue for much more basic (if less physical)
> dynamical systems. For example, consider the one-parameter family of
> polynomial maps x->x^2+c. These are often studied over C (yielding
> Julia sets and the Mandelbrot set) but for now restrict attention to real
> values of x and real parameters c.
If we are going to be talking about function iteration, then here is a
favorite of mine. (I like yours too).
In calculus, one considers the map from R\{0} into R given by
f(x) = ln |x|.
QUESTION: Is the orbit of 2 finite or infinite?
There are endless questions one can ask about the set of x such that the
orbit of x is finite.
I have not heard of any nontrivial results about this.
Harvey Friedman
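A small floating-point sketch (my own illustration, not part of the original message) shows what the early orbit of 2 under f(x) = ln |x| looks like; rounding error means such a computation can only suggest, never settle, whether the exact orbit is finite:

```python
import math

# Iterate f(x) = ln|x| starting at x = 2 and record the first few values.
# math.log(abs(x)) would raise ValueError only if an iterate were exactly
# 0.0, which does not occur in these first few floating-point steps.
x = 2.0
orbit = [x]
for _ in range(10):
    x = math.log(abs(x))
    orbit.append(x)
print([round(v, 6) for v in orbit])  # orbit[1] is ln 2, about 0.693147
```

In exact arithmetic the orbit is finite only if it eventually cycles (or lands on 0, where f is undefined), and as the message notes, nothing nontrivial seems to be known about that.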
E/M should be constant
As the speed of light is constant in the universe, E/M should be constant.
Suppose that in some corner of the universe mass is being converted into energy. Then there must be some other corner of the universe where energy is being converted into mass at the same time. Is my argument right?
Sorry, but this makes no sense.
You are confusing an equation that describes the conversion of one to another with a conservation law. That equation is not a conservation law just because c is a constant.
15% of x = 15 what is x
do u know how to expand 15% as a fraction?
ok..listen any A% can be written as A/100..k?
ok lost already math is not my strong suit
ok lets take 20%..k now 20% of ure maths marks... let ure maths marks be x then, 20% of ure maths marks will be, 20/100 *x.. get it 20/100 is a fraction u do know what a fraction is rite?
so u get the % thing?
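Following the A% = A/100 idea in the exchange above, the original question works out in one line (a quick sketch, not part of the thread):

```python
# "15% of x = 15" means (15/100) * x = 15; divide both sides by 15/100:
x = 15 / (15 / 100)
print(x)  # approximately 100
```

So x = 100, and indeed 15% of 100 is 15.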
North Andover Math Tutor
Find a North Andover Math Tutor
...I have been recognized by Cambridge Who's Who for my expertise in special education. I was also nominated for the Disney Teacher of the Year. I enjoy working with special needs students and
have throughout my years of service.
14 Subjects: including algebra 2, reading, prealgebra, algebra 1
I am a full-time tutor at a local high school where I primarily tutor Geometry, Algebra, SAT, and MCAS. In addition, I am an experienced writing tutor who worked as an English tutor at a
university writing center for a number of semesters. I am a fun, enthusiastic tutor who would be excited to wor...
29 Subjects: including SAT math, statistics, linear algebra, ACT Math
...The biggest changes were to the verbal reasoning section. The GRE is very similar to the SAT but with two essays instead of one in the Analytical Writing section, and more variety of question
formats in the rest. I focus not only on the essential reading, quantitative, and writing skills, but a...
44 Subjects: including econometrics, algebra 1, algebra 2, calculus
...I received a 5/5 on the AP Calculus AB exam in 9th grade. Over a year ago I took the GRE and received a perfect score in math. I have tutored students younger and older than myself at every
stage of education, be it elementary, middle, high school, or being employed by RPI as a one-on-one tutor.
27 Subjects: including algebra 1, algebra 2, biology, calculus
I am a motivated tutor who strives to make learning easy and fun for everyone. My teaching style is tailored to each individual, using a pace that is appropriate. I strive to help students
understand the core concepts and building blocks necessary to succeed not only in their current class but in the future as well.
16 Subjects: including calculus, physics, logic, algebra 1
Ashburnham Math Tutor
...Vocabulary is critical to understanding. English grammar rules are often a difficult topic for students. I can help you look at sentence diagrams and understand what's going on and what the
rules are.
20 Subjects: including algebra 1, English, prealgebra, reading
...I have also led Bible studies for youth for a number of years. I became certified in secondary social studies in Massachusetts after passing the teaching exam. I have tutored students in AP
53 Subjects: including algebra 1, algebra 2, linear algebra, biology
I have a Bachelor's degree in Math, as well as a Master's degree in Education. I have experience teaching in 1st, 3rd, 4th, 5th, and 7th grade classrooms. I have tutored students in math from 1st
grade through college level, including standardized testing.
13 Subjects: including trigonometry, algebra 1, algebra 2, geometry
...I hope you find my experience and style suitable for your child's needs!I am special Ed certified in Pre-k through 8th grade and have taught elementary math for 10 years. I also have taken a
manipulative based graduate level math class. I am very confident in this subject area.
29 Subjects: including algebra 1, algebra 2, SAT math, reading
...Since the Wilson system has a very specific method for proceeding through each step of each lesson, this was an important aspect in my training. Since my daughter is dyslexic and I was
homeschooling her, my daughter was my student. I brought her with me every month to work with my trainer.
25 Subjects: including calculus, discrete math, algebra 1, algebra 2
New Dilemmas for the Prisoner
Dictators and Extortionists
One mischievous strategy might be called the dictator: It unilaterally sets the other player’s long-term average score to any value between the mutual-defection payment and the mutual-cooperation
payment. (For the standard payoff values 0, 1, 3, 5, that means anywhere between 1 and 3.)
Consider the strategy (4/5, 2/5, 2/5, 1/5), where the four numbers again indicate the probability of choosing to cooperate after cc, cd, dc, dd, respectively. If Y plays this strategy against X, then
X’s average score per round will converge on the value 2.0 after a sufficiently long series of games, no matter what strategy X chooses to play. In the lower illustration on the previous page X
responds to Y’s coercion with four different strategies, but in each case X’s average score gravitates ineluctably toward 2.0. It should be emphasized that dictating X’s score does not require Y to
make any active adjustments or responses as the game proceeds. Y can set the four probabilities and then “go to lunch,” as Press and Dyson put it.
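The "set the probabilities and go to lunch" claim can be sanity-checked with a small Monte-Carlo simulation (my own sketch, not from the article). It assumes the Press-Dyson convention that the four probabilities are conditioned on (own previous move, opponent's previous move), and the standard payoffs T=5, R=3, P=1, S=0:

```python
import random

# X's payoff, keyed by (X's move, Y's move): T=5, R=3, P=1, S=0.
PAYOFF_X = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

# Y's "dictator" strategy (4/5, 2/5, 2/5, 1/5): probability that Y cooperates,
# conditioned on (Y's previous move, X's previous move) -- assumed convention.
P_Y = {("C", "C"): 4/5, ("C", "D"): 2/5, ("D", "C"): 2/5, ("D", "D"): 1/5}

def x_average(x_strategy, rounds=200_000, seed=1):
    """Long-run average payoff to X when Y plays the dictator strategy."""
    rng = random.Random(seed)
    x_prev, y_prev = "C", "C"       # arbitrary first-round memory
    total = 0
    for _ in range(rounds):
        x = x_strategy(x_prev, y_prev, rng)
        y = "C" if rng.random() < P_Y[(y_prev, x_prev)] else "D"
        total += PAYOFF_X[(x, y)]
        x_prev, y_prev = x, y
    return total / rounds

strategies = {
    "always cooperate": lambda xp, yp, rng: "C",
    "always defect":    lambda xp, yp, rng: "D",
    "tit-for-tat":      lambda xp, yp, rng: yp,
    "random 50/50":     lambda xp, yp, rng: "C" if rng.random() < 0.5 else "D",
}
for name, s in strategies.items():
    print(f"{name:17s} -> X averages {x_average(s):.3f}")
```

However X plays, the printed averages all settle near 2.0, as claimed.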
A second form of mischief manipulates the ratio between X’s score and Y’s score. If S[X] and S[Y] are the players’ long-term average scores, the strategy allows Y to enforce the linear relation S[Y]
= 1 + M(S[X] – 1), where M is an arbitrary constant greater than 1. X has the option of playing an always-defect strategy, which consigns both players to the minimal payoff of one point per round.
But if X takes any steps to improve this return, every increment to S[X] will increase S[Y] by M times as much. Press and Dyson call the technique extortion. As an example they cite the strategy (11/
13, 1/2, 7/26, 0), which sets M = 3. If Y adopts this rule, X can play always-defect (or tit-for-tat) to limit both players to one point per round. When X chooses other strategies, however, Y comes
out ahead. If X plays Pavlov, the scores are approximately S[X] = 1.46 and S[Y] = 2.36. To maximize his or her score, X must cooperate unconditionally, earning an average of 1.91 points, but then Y
gets 3.73 points.
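These numbers can be checked (my own sketch, again assuming the probabilities are conditioned on (own move, opponent's move), with payoffs T=5, R=3, P=1, S=0) by power-iterating the joint Markov chain of the two memory-one strategies to its stationary distribution:

```python
# Joint states are pairs (Y's previous move, X's previous move).
STATES = [("C", "C"), ("C", "D"), ("D", "C"), ("D", "D")]

# Y's extortionate strategy (11/13, 1/2, 7/26, 0).
q_Y = {("C", "C"): 11/13, ("C", "D"): 1/2, ("D", "C"): 7/26, ("D", "D"): 0.0}

PAY_Y = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
PAY_X = {("C", "C"): 3, ("C", "D"): 5, ("D", "C"): 0, ("D", "D"): 1}

def long_run_scores(p_X, iters=2000):
    """(S_X, S_Y) for a memory-one X (cooperation probabilities keyed by
    (X's move, Y's move)) against the extortioner, via power iteration."""
    dist = {s: 0.25 for s in STATES}
    for _ in range(iters):
        new = {s: 0.0 for s in STATES}
        for (y, x), mass in dist.items():
            py, px = q_Y[(y, x)], p_X[(x, y)]
            for y2, wy in (("C", py), ("D", 1 - py)):
                for x2, wx in (("C", px), ("D", 1 - px)):
                    new[(y2, x2)] += mass * wy * wx
        dist = new
    S_X = sum(dist[s] * PAY_X[s] for s in STATES)
    S_Y = sum(dist[s] * PAY_Y[s] for s in STATES)
    return S_X, S_Y

# Pavlov (win-stay, lose-shift): cooperate iff both moves matched last round.
pavlov = {("C", "C"): 1.0, ("C", "D"): 0.0, ("D", "C"): 0.0, ("D", "D"): 1.0}
all_c = {s: 1.0 for s in STATES}
for name, p in [("Pavlov", pavlov), ("always cooperate", all_c)]:
    S_X, S_Y = long_run_scores(p)
    print(f"X = {name:16s}: S_X = {S_X:.4f}, S_Y = {S_Y:.4f}")
```

This gives S_X about 1.4545 and S_Y about 2.3636 for Pavlov, and about 1.9091 and 3.7273 for unconditional cooperation; both pairs are consistent with the approximate figures quoted above, and both satisfy S_Y = 1 + 3(S_X - 1) to machine precision.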
The discovery of dictatorial and extortionate strategies came as a great surprise, and yet there were precedents. Aspects of the discovery were anticipated in the 1990s by Maarten C. Boerlijst,
Martin A. Nowak, and Karl Sigmund. Moreover, not all of the zero-determinant strategies are exotic ideas that no one ever thought of trying. Tit-for-tat, the most famous of all IPD rules, is in fact
an extortionate zero-determinant strategy. It sets M = 1, forcing equality of scores.
Watching the coercive strategies in action (or playing against them), I can’t help feeling there is something uncanny going on. In a game whose structure is fully symmetrical, how can one contestant
wield such power over the other? In the case of the dictatorial strategies, the symmetry isn’t so much broken as transformed: When I take control of your score, I lose control of my own; although
there’s nothing you can do to alter your own score, you have the power to set mine.
The extortionate strategies can’t be explained away so easily. There really is an asymmetry, with one player grabbing an unfair share of the spoils, and the only defense is to retreat to the policy
of universal defection that leaves everyone impoverished. IPD seems to be back in the same dreary jail cell where it all began.
Integration Problem to solve. Cannot quite understand what's required.
May 14th 2013, 01:14 AM
Hi MHF!
The final problem in an assignment I have is making absolutely no sense, mainly because it sounds technical and I can't determine what's required of me.
The following is the problem:
A current:
$i = 45\sin(100\pi t)$
amperes is applied across an electric circuit. Determine its mean and r.m.s values each correct to 4s.f., over the range t=0 to t=10 ms.
So, first up, because of the theme of the assignment I'm assuming integration is involved with limits ranging from 0 to 0.01.
Integrating $45\sin(100\pi t)\,dt$ would result in $\frac{-45\cos(100\pi t)}{100\pi}$ (correct me if I'm wrong; I believe the chain rule works in integration as in differentiation, except that you divide by the derivative of the inner function instead of multiplying by it).
Finally, the main problem is, I need the mean and r.m.s, which I have no clue what it equates to from the values I find. Can someone please help clarify this problem?
EDIT: Just remembered, since this is what previous problems involved in the assignment, should I use the Trapezium/Mid-ordinate/Simpson's Rule instead by any chance, since Mean probably means
average and r.m.s might have something to do with that?
Thanks and regards,
May 14th 2013, 03:24 AM
Re: Integration Problem to solve. Cannot quite understand what's required.
Hey Luponius.
Hint: The RMS is going to be according to the wiki
May 15th 2013, 03:09 AM
Re: Integration Problem to solve. Cannot quite understand what's required.
Thanks for the hint, reading through it I'm positive it doesn't sound familiar at all, we haven't done any of this in class specifically.
If I understood correctly I'm to acquire the mean initially. This is done by doing an integration between 0 and 0.01 of $45\sin(100\pi t)$. The result will be the mean value or average?
Following that I'll square root the answer?
Can you please confirm?
May 15th 2013, 04:34 AM
Re: Integration Problem to solve. Cannot quite understand what's required.
The mean value is calculated by integrating the function over the given range and then dividing that result by the width of the interval.
To calculate the R.M.S. value,
(1) square the function;
(2) find the mean value (of the square of the function) as above over the given interval;
(3) take the square root of this mean value.
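These three steps can be cross-checked numerically (my own sketch, using a simple midpoint-rule integration rather than Mathematica or the trapezium rule mentioned earlier in the thread):

```python
import math

A, w = 45.0, 100 * math.pi    # i(t) = A sin(w t), in amperes
t0, t1 = 0.0, 0.01            # 0 to 10 ms (half a period of this current)
N = 100_000                   # number of midpoint-rule subintervals

dt = (t1 - t0) / N
vals = [A * math.sin(w * (t0 + (k + 0.5) * dt)) for k in range(N)]

# Mean: integrate, then divide by the interval width.
mean = sum(vals) * dt / (t1 - t0)
# RMS: square, take the mean of the square, then square-root.
rms = math.sqrt(sum(v * v for v in vals) * dt / (t1 - t0))
print(f"mean = {mean:.4f} A")   # analytic value 2A/pi = 90/pi, about 28.65
print(f"rms  = {rms:.4f} A")    # analytic value A/sqrt(2), about 31.82
```

Both agree with the closed-form answers to 4 s.f.: mean = 2A/pi (about 28.65 A) and r.m.s. = A/sqrt(2) (about 31.82 A). Over this particular half-period the sine is non-negative, which is why the mean is nonzero.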
May 15th 2013, 06:25 AM
Re: Integration Problem to solve. Cannot quite understand what's required.
So for the mean: integrate from 0 to 0.01 and divide the answer by 0.01, since that's the width of the interval in seconds. Mean acquired.
Now the R.M.S steps are confusing me, square the function... am I to raise the power of f(i) to 2 or something else?
Next up on step two you mention I find the mean by integrating the squared function, which would not be the same as a standard definite integration.
I then square root the mean, this I find fine, but do I actually do f(i) ^ 2 first before doing any of the rest or do I not?
Sorry if I'm being slow, I'm seriously confused, every other question leading to this was straightforward definite/indefinite integration and this popped up and I'm completely lost with the
english of it.
May 15th 2013, 10:18 AM
Re: Integration Problem to solve. Cannot quite understand what's required.
to update this is solved, thank you very much BobP, chiro. Greatly appreciate your assistance!
Thanks once again and regards,
Randomness extractors for independent sources and applications
Abstract: The use of randomized algorithms and protocols is ubiquitous in computer science. Randomized solutions are typically faster and simpler than deterministic ones for the same problem. In
addition, many computational problems (for example in cryptography and distributed computing) are impossible to solve without access to randomness. In computer science, access to randomness is
usually modeled as access to a string of uncorrelated uniformly random bits. Although it is widely believed that many physical phenomena are inherently unpredictable, there is a gap between the
computer science model of randomness and what is actually available. It is not clear where one could find such a source of uniformly distributed bits. In practice, computers generate random bits in
ad-hoc ways, with no guarantees on the quality of their distribution. One aim of this thesis is to close this gap and identify the weakest assumption on the source of randomness that would still
permit the use of randomized algorithms and protocols. This is achieved by building randomness extractors ... Such an algorithm would allow us to use a compromised source of randomness to obtain
truly random bits, which we could then use in our original application. Randomness extractors are interesting in their own right as combinatorial objects that look random in strong ways. They fall
into the class of objects whose existence is easy to check using the probabilistic method (i.e., almost all functions are good randomness extractors), yet finding explicit examples of a single such
object is non-trivial. Expander graphs, error correcting codes, hard functions, epsilon biased sets and Ramsey graphs are just a few examples of other such objects. Finding explicit examples of
extractors is part of the bigger project in the area of derandomization of constructing such objects which can be used to reduce the dependence of computer science solutions on randomness. These
objects are often used as basic building blocks to solve problems in computer science. The main results of this thesis are:

Extractors for Independent Sources: The central model that we study is the model of independent sources. Here the only assumption we make (beyond the necessary one that the source of randomness has some entropy/unpredictability) is that the source can be broken up into two or more independent parts. We show how to deterministically extract true randomness from such sources as long as a constant (as small as 3) number of sources is available with a small amount of entropy.

Extractors for Small Space Sources: In this model we assume that the source is generated by a computationally bounded process -- a bounded-width branching program or an algorithm that uses small memory. This seems like a plausible model for sources of randomness produced by a defective physical device. We build on our work on extractors for independent sources to obtain extractors for such sources.

Extractors for Low Weight Affine Sources: In this model, we assume that the source gives a random point from some unknown low-dimensional affine subspace with a low-weight basis. This model generalizes the well studied model of bit-fixing sources. We give new extractors for this model that have exponentially small error, a parameter that is important for an application in cryptography. The techniques that go into solving this problem are inspired by the techniques that give our extractors for independent sources.

Ramsey Graphs: A Ramsey graph is a graph that has no large clique or independent set. We show how to use our extractors and many other ideas to construct new explicit Ramsey graphs that avoid cliques and independent sets of the smallest size to date.

Distributed Computing with Weak Randomness: Finally, we give an application of extractors for independent sources to distributed computing. We give new protocols for Byzantine Agreement and Leader Election that work when the players involved only have access to defective sources of randomness, even in the presence of completely adversarial behavior at many players and limited adversarial behavior at every player. In fact, we show how to simulate any distributed computing protocol that assumes that each player has access to private truly random bits, with the aid of defective sources of randomness.
Comparing Population Parameters (Z-test, t-tests and Chi-Square test) Dr. M. H. Rahbar. Professor of Biostatistics. Department of Epidemiology
Filesize: 5033 KB | format : .PPT
Estimation of population parameters: Confidence Z and t tests are used to determine if the Z test . Is the proportion of male executives the
Filesize: 5047 KB | format : .PPT
Tests whether the mean of a population is different In SPM t-tests are one-tailed (i.e. for contrast X-Y The a parameters reflect the independent contribution
Filesize: 5061 KB | format : .PPT
Actual population parameters are unknown since the cost z-test . . . involve hypothesis testing using interviews of only female students and comparing
Filesize: 5075 KB | format : .PPT
Z-test . Classic example: what is the mean of data Chi-square test . Consider a multinomial distribution t-tests could be used if we feel they have different
Filesize: 5089 KB | format : .PPT
… we are comparing the means of two samples. In chi-square test, we can check the equality of more than two population parameters using z-test, t run 3 t-tests. The
Filesize: 5103 KB | format : .PPT
Simple Random Sample from a population with known s One sample z-test. One sample t-test. Two sample z-test. pair of groups by performing all pair-wise t-tests.
Filesize: 5117 KB | format : .PPT
To do Normal Z test using STATA, akin to immediate parameters being studied (μ) under H 1, are allowed In the discussion of paired t-tests thus. far, each person
Filesize: 5131 KB | format : .PPT
… the null for the ANOVA test and the T-tests for 2-sample Z-test (Jack in the Box vs. In N Out) proportions are not the same as the population proportions. Chi
Filesize: 5146 KB | format : .PPT
Physics Question......many thanks
Posted by Jessy on Monday, January 15, 2007 at 11:29am.
A projectile is shot directly away from Earth's surface. Neglect the rotation of Earth.
(a) As a multiple of Earth's radius RE, what is the radial distance a projectile reaches if its initial speed is one-fifth of the escape speed from Earth?
____ times RE
(b) As a multiple of Earth's radius RE, what is the radial distance a projectile reaches if its initial kinetic energy is one-fourth of the kinetic energy required to escape Earth?
_____ timesRE
(c) What is the least initial mechanical energy required at launch if the projectile is to escape Earth?
These problems all can be solved by using the Conservation of Energy relationship
Total Energy = KE + PE =
(1/2) m V^2 - GMm/R = constant
where M is the mass of the earth, m is the mass of the projectile and R is the distance from the center of the Earth.
The "escape speed" Ve is the launch velocity that allows V to be zero when R = infinity. Thus
(1/2) m Ve^2 - G M m/Re = constant
Ve = sqrt (2 G M/Re)
Here is how to do (a):
If V = (1/5) Ve at R = Re, then
(1/2) m V^2 - G M m/R = constant = (1/50) m Ve^2 - G M m/Re
When V = 0 (maximum value of R),
(1/50) Ve^2 = G M [(1/Re) - (1/R)]
(1/25) G M/Re = G M [(1/Re) - (1/R)]   (using Ve^2 = 2 G M/Re)
G M/R = (24/25)(G M/Re)
R/Re = 25/24
Proceed similarly for (b).
For (c), the least initial mechanical energy (kinetic plus potential) required to escape is zero, since potential energy is defined as zero at infinite distance, as is almost always done in inverse-square-law situations.
With PE = - G M m/R at launch, this means the launch kinetic energy must be at least G M m/Re.
Your explanation is very clear to me. Thank you.
• Physics Question......many thanks - Max, Tuesday, November 9, 2010 at 11:35pm
Sorry, your explanation made too many logical leaps. How did you get from 1/5 to 1/50??? It's garbage like this that makes you want to never see physics problems again.
• Physics Question......many thanks - Jon, Saturday, November 20, 2010 at 1:08am
Amen Max!
• Physics Question......many thanks - ., Thursday, March 31, 2011 at 11:21pm
1/2(1/5Ve)^2 = 1/2 * 1/25 * Ve^2 = 1/50Ve^2
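The thread's energy bookkeeping generalizes neatly: writing f = (V/Ve)^2 for the launch kinetic energy as a fraction of the escape kinetic energy, conservation of energy gives GM/R = (1 - f) GM/Re, so R/Re = 1/(1 - f). A sketch using exact rational arithmetic (not part of the original thread; the function name is mine):

```python
from fractions import Fraction

def max_radius_ratio(f):
    # f = (V/Ve)^2, the launch KE as a fraction of the escape KE.
    # (1/2)V^2 - GM/Re = -GM/R  with  Ve^2 = 2GM/Re
    # => f*(GM/Re) - GM/Re = -GM/R  =>  R/Re = 1/(1 - f)
    return 1 / (1 - Fraction(f))

# (a) launch speed = Ve/5, so f = (1/5)^2 = 1/25
print(max_radius_ratio(Fraction(1, 25)))   # 25/24
# (b) launch KE = escape KE / 4, so f = 1/4
print(max_radius_ratio(Fraction(1, 4)))    # 4/3
```

This reproduces the 25/24 result for part (a) and gives 4/3 for part (b).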
Need Help: Applied Optimization
March 7th 2007, 06:04 PM #1
Feb 2007
Need Help: Applied Optimization
1. A commercial cherry grower estimates from past records that if 51 trees are planted per acre, each tree will yield 31 pounds of cherries for a growing season. Each additional tree per acre (up
to 20) results in a decrease in yield per tree of 1 pound. How many trees per acre should be planted to maximize yield per acre, and what is the maximum yield?
2. A parcel delivery service will deliver a package only if the length plus the girth (distance around) does not exceed 108 inches. Find the maximum volume of a rectangular box with square ends
that satisfies the delivery company's requirements.
3. A fence is to be built to enclose a rectangular area of 270 square feet. The fence along three sides is to be made of material that costs 4 dollars per foot, and the material for the fourth
side costs 16 dollars per foot. Find the length L and width W (with W<=L) of the enclosure that is most economical to construct.
4. 1 pt) Let Q=(0
To solve this problem, we need to minimize the following function of x:
over the closed interval [a
We find that f(x) has only one critical number in the interval at x=
where f(x) has value
Since this is smaller than the values of f(x) at the two endpoints, we conclude that this is the minimal sum of distances.
Any help is appreciated. Thanks!
Last edited by CaptainBlack; March 7th 2007 at 09:21 PM.
1. A commercial cherry grower estimates from past records that if 51 trees are planted per acre, each tree will yield 31 pounds of cherries for a growing season. Each additional tree per acre (up
to 20) results in a decrease in yield per tree of 1 pound. How many trees per acre should be planted to maximize yield per acre, and what is the maximum yield?
From what you are told, the yield per tree is:

y = 31 - (t - 51), for 51 <= t <= 71

where t is the number of trees per acre. Therefore the total yield per acre is:

Y = t*y = 31*t - t*(t - 51) = 82*t - t^2

This takes a maximum where dY/dt = 0, or at an end point of the interval.

dY/dt = 82 - 2t,

so dY/dt = 0 gives t = 41, which lies outside the allowed interval [51, 71]. Since dY/dt < 0 throughout that interval, Y is decreasing there, and the maximum occurs at the left end point, t = 51, with Y = 51*31 = 1581 pounds per acre. (At the other end point, t = 71, each tree yields only 11 pounds, so Y = 71*11 = 781 pounds per acre.)

So the maximum yield is 1581 pounds per acre at a tree density of 51 trees per acre.
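The model can also be sanity-checked by brute force over the allowed planting densities (a sketch; the helper name is mine, and note the algebra: 31t + 51t = 82t):

```python
def total_yield(t):
    # yield per tree is 31 lb at t = 51 trees, dropping 1 lb per extra tree
    return t * (31 - (t - 51))

# the model is only stated for 51..71 trees per acre
best_t = max(range(51, 72), key=total_yield)
print(best_t, total_yield(best_t))   # 51 1581
```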
March 7th 2007, 09:20 PM #2
Grand Panjandrum
Nov 2005
Finding the First Trillion Congruent Numbers
Posted by
from the after-that-it's-easy dept.
"First stated by al-Karaji about a thousand years ago, the congruent number problem is simplified to finding positive whole numbers that are the area of a right triangle with rational number sides.
Today, discovering these congruent numbers is limited only by the size of a computer's hard drive. An international team of mathematicians recently decided to push the limits on finding congruent
numbers and came up with the first trillion. Their two approaches are outlined in detail, with pseudo-code, in their paper (PDF) as well as details on their hardware. For those of you familiar with
this sort of work, the article provides links to solving this problem — from multiplying very large numbers to identifying square-free congruent numbers."
• Re:Why? (Score:5, Informative)
by immakiku (777365) on Tuesday September 22, 2009 @11:35AM (#29504749)
Well math is usually all for fun anyway. And it seems like they're having fun. But here's where someone found an application: http://en.wikipedia.org/wiki/Congruent_number. Please don't ask us to explain why elliptic curves are useful.
• Birch-Swinnerton-Dyer (Score:5, Informative)
by JoshuaZ (1134087) on Tuesday September 22, 2009 @11:41AM (#29504831) Homepage
Among other issues, which numbers are congruent numbers is deeply related to the Birch-Swinnerton-Dyer conjecture which is a major open problem http://en.wikipedia.org/wiki/
Birch_and_Swinnerton-Dyer_conjecture [wikipedia.org]. This is due to a theorem which relates whether a number is a congruent number or not to the number of solutions of certain ternary quadratic forms.
The summary isn't quite accurate in that regard: The problem of finding congruent numbers is not completely solved. If BSD is proven then we can reasonably call the question solved. But it
doesn't look like there's much hope for anyone resolving BSD in the foreseeable future. There's also hope that the data will give us further patterns and understanding of ternary quadratic forms
and related issues which show up in quite a few natural contexts (such as Julia Robinson's proof that first order Q is undecidable).
• Terrible summary (Score:5, Informative)
by theskov (556173) <philipskov@g[ ]l.com ['mai' in gap]> on Tuesday September 22, 2009 @11:49AM (#29504957) Homepage
They didn't find a trillion numbers - they found all numbers up to a trillion.
FtFP (From the F***ing Paper):
We report on a computation of congruent numbers, which subject to the Birch and Swinnerton-Dyer conjecture is an accurate list up to 10^12.
• Slight correction (Score:5, Informative)
by mcg1969 (237263) on Tuesday September 22, 2009 @11:52AM (#29504993)
I don't believe they found the first trillion congruent numbers; rather, they tested the first trillion whole numbers for congruency.
• Great work demonstrating important algorithms! (Score:2, Informative)
by onionman (975962) on Tuesday September 22, 2009 @12:05PM (#29505191)
This is a fantastic piece of work by some of the leading computational number theorists today. Most of the authors are involved in the Sage [sagemath.org] project in some form or another and
their algorithms and code are driving the cutting edge of the field. Great work guys!!
• Re:Why? (Score:1, Informative)
by Triela (773061) on Tuesday September 22, 2009 @12:40PM (#29505647)
> this isn't particularly useful in itself, but the new techniques they had to develop to solve it are important. Wiles' Fermat proof is a paramount example.
• Re:Terrible summary (Score:3, Informative)
by Intron (870560) on Tuesday September 22, 2009 @01:09PM (#29506037)
Although in the body it says they found 3,148,379,694 congruent numbers, the title is "A Trillion Triangles" and the web page is titled "The first 1 trillion coefficients of the congruent number
curve" so I think you should let the lazy editors put their sandaled feet up and sip their lattes on this one. It's the article that's got it wrong.
• For anybody who's curious... (Score:3, Informative)
by clone53421 (1310749) on Tuesday September 22, 2009 @01:21PM (#29506217) Journal
I had to look up congruent numbers to make sense of the definition in TFS (I was thinking the "sides" of a right triangle meant only base and height, instead of all 3 sides of a triangle...
needless to say this didn't make sense).
So, for the mathematically inclined, here's an explanation with as little English as possible.
Find all positive integers (1/2)bh where b, h, and sqrt(b^2 + h^2) are rational numbers.
They found all such integers up to 10^12 (one trillion), not the first trillion such integers (as is incorrectly claimed). The reason for this error was that the article claims they used an algorithm to determine whether a number is congruent, then tested the first trillion numbers (some of them were congruent, some were not).
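For readers who want to experiment, here is one way to generate small congruent numbers (a sketch with names of my own, and not the method of the paper, which relies on Tunnell's theorem): every rational right triangle scales to a primitive Pythagorean triangle with legs m^2 - n^2 and 2mn, whose area is mn(m^2 - n^2); rational scaling changes the area only by a square factor, so the squarefree part of each such area is a congruent number.

```python
from math import gcd

def squarefree_part(n):
    # strip out all square factors by trial division
    d = 2
    while d * d <= n:
        while n % (d * d) == 0:
            n //= d * d
        d += 1
    return n

def congruent_numbers(search_depth, limit):
    # enumerate primitive Pythagorean triangles via (m, n) with m > n,
    # gcd(m, n) = 1 and opposite parity; collect squarefree parts of areas
    found = set()
    for m in range(2, search_depth):
        for n in range(1, m):
            if (m - n) % 2 == 1 and gcd(m, n) == 1:
                area = m * n * (m * m - n * n)
                k = squarefree_part(area)
                if k <= limit:
                    found.add(k)
    return sorted(found)

print(congruent_numbers(50, 30))
```

This quickly turns up 5, 6, 7, 14, 15, 21, 30, and others; be warned that a fixed search depth is not guaranteed to find every congruent number below the limit, since the smallest witnessing triangle can be enormous.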
• Re:Why? (Score:3, Informative)
by Shaterri (253660) on Tuesday September 22, 2009 @01:27PM (#29506279)
Strictly speaking, there aren't any seriously new methods of multiplying numbers here; even the techniques they use for handling multiplicands larger than the computer's memory (sectional FFTs,
using the Chinese Remainder Theorem to solve the problem by reducing modulo a lot of small primes) are pretty well-established from things like computations of pi, with this group offering a few
improvements to the core ideas. What they did provide, and what sounds particularly promising, is a library (judging from the article, likely even open-source) for handling bignums like this that
they've made available for general use. It'll be interesting to see if anyone else picks up this ball and runs with it.
• Re:Terrible summary (Score:3, Informative)
by clone53421 (1310749) on Tuesday September 22, 2009 @02:16PM (#29506921) Journal
Hmm, that's correct. Today, in fact. I didn't realize.
However, this PDF [cms.math.ca] (the top result for Al-Karaji congruent -trillion [google.com]) does support the edit:
A congruent number k is an integer for which there exists a square such that the sum and difference of that square with k are themselves squares.
• Re:Why? (Score:2, Informative)
by William Stein (259724) <wstein@gmail.com> on Tuesday September 22, 2009 @04:49PM (#29508725) Homepage
It is an *open problem* to show that there exists algorithm at all to decide whether a given integer N is a congruent number. Full stop. It's not a question of speed, or even skipping previous
integers. We simply don't even know that it is possible to decide whether or not integers are congruent numbers. However, if the Birch and Swinnerton-Dyer conjecture is true (which we don't
know), then there is an algorithm.
Posts by
Total # Posts: 562
what is the biggest holiday for followers of Shinto?
Can someone help me solve this please A*.3=B A-B=150,000
Folic acid, iron and calcium
project management
for example increase profits or increase employee moral.. these are just examples.
project management
Using the MBO approach (management by objectives)come up with a strategy.(make it up) Break them into the 4 levels level 1 top level 2 division level 3 department level 3 employ
project management
You find a problem in a project. The project has a defined baseline and task progress has been entered for all tasks for the first report period. The ACWP field show the right cost for the work
reported, but the BCWP and BCWS fields are zero. what could be the problem?
thank you for the articles. I'm focusing on external communications in north america. Does the txt messaging article also refer to North america?
Hi there, I am doing a project on Cadbury Chocolate and how to uses (external communication) could you give me ideas of what i should talk about? Or any useful websites or videos? Thanks.
On a bull.
Math check
To complete the square, you divide the number before the x by 2, square it, and add it to the end. The answer is d.
Math help
-8a+6(a+7)=1 -8a+6a+42=1 -2a=-41 a=20.5
-3x+y=-2 -2=7x-y
Mike took a 2"x4" picture to the photo store. He had it made into a 4"x8" print. Which geometric concept best describes the relationship between the two prints? Similarity, Linear, Symmetry or
math-3rd grade
Explain your thinking. Use the digits 1, 3, 4, 5, 7, and 9 to write two money amounts that you can subtract by trading 1 dollar for 10 dimes. Then solve.
what are the physical changes of ethylene glycol at 197 degrees
[-50/2+3] (20-9)
Greetings: I was wondering how to calculate the weight/volume % of vinegar using the formula ---------------w/v Per Cent = g/100 mL or grams per deciliter: WHAT WEIGHT DO I USE NaOH, Acetic ACID? the
data include: volume of NaOH needed to neutralize Vingegar : 14.24 mL Volume ...
how can i help my son matthew about division and subtract borrow . thanks
9 in the thousands and 2 in the ones
9 in the thousands and 2 in the ones
Why is plastic an expensive material to use?
Which is an equation of the line with slope 2 and y-intercept 6? A). y=2x-6 B). y=6x+2 C). 2x+6y=1 D). y+2x+6. thanks if you can help
Social Studies
I am presently creating a unit about land transportation in the community and how these transportations affect our daily lives eg. taxi, garbage truck, train etc. Do you have any ideas or suggestions
for a fun lesson plan.... do you have any websites that could at least give m...
Science 4th grade
Why is it important for 4th graders to learn about animals that live in the ocean?
Chemistry; to drwls or drbob
i think i have posted this before except i forgot to put the decimal in the temperature. i was hoping you could help me now that it might make more sense. The total pressure in a flask containing air
and ethanol at 25.7 C is 878 mm Hg. The pressure of the air in the flask at 2...
what is the chance of a woman having five female children in a row?
what is mm^2?
Digital Logic
How many flip-flops, in a shift register, would be required to hold an initial count of 16?
Digital Logic
The input voltage to a certain op-amp inverting amplifier is 10 mV, and the output is 2 V. What is the closed-loop voltage gain?
English Grammar
In the sentence "They are always late to the ball game.", what part of speech is "always" and "late"?
Suppose that 3 ≤ f'(x) ≤ 5 for all values of x. Show that 18 ≤ f(8) - f(2) ≤ 30. (The ≤ signs mean less than or equal to.) I'm supposed to apply the Mean Value Theorem or Rolle's Theorem... I don't understand either, so I can't do the question! Please help!
Algebra Word Problem
A rectangle's length is 8 cm more than three times it's width. If the perimeter is 128 cm, find the length and width. What is the equation for this?
Can you show me how to do this problem please. Thanks. |4-6to the second power ÷ 2 |- |-6|=
Algebra Absolute Value
|-2/3(x-6)| = 12
How did you get the answer?
Why did Jefferson as the Democratic-Republican candidate win the election of 1800? -Clearly stated thesis -Identify and describe 3 factors that influenced the outcome of the election -A conclusion
that pulls the ideas together
is that good? =)
Thomas Jefferson won the election of 1800 due to his strong Democratic - Republican Party, Major support of the majority of the 16 states, and friendly help from the federalist Alexander Hamilton.
Jefferson was able to win his election in 1800 due to the strong Democratic ...
is this a good thesis? Thomas Jefferson won the election of 1800 due to his strong Democratic - Republican Party, Major support of the majority of the 16 states, and friendly help from the federalist
Alexander Hamilton
Why did Jefferson as the Democratic-Republican candidate win the election of 1800?
A silver bar has mass of 368g. What is the volume, in cm^3, of the bar? the density of silver is 19.5 g/cm^3.?
The diameter of a carbon atom is 77 pm. Express this measurement in um?
What are the 3 rule the govern the filling of atomic orbitals by electrons?
What the difference between an orbit in the Bohr model and an orbital in the quantum mechanical model of the atom?
A suit designer has 15-3/5 yards of fabric. Each suit uses 2-1/8 yards of fabric. (1) How many suits can be made? (2) How much fabric is left over?
what is the fourth largest country Check this site for your answer. http://worldatlas.com/aatlas/populations/ctyareal.htm In area? In population? What? http://www.infoplease.com/ce6/us/A0850088.html
http://en.wikipedia.org/wiki/List_of_countries_by_population ??
Do you need to regroup 10 tens for 1 hundred when you add 241 + 387? Yes. Notice the 8 and the 4 in the tens column, that is 12 in the tens column (120), or 10 tens and 20. Regroup, leaving 20.
Substitution Method!PLZ HELP 2222!!
y-2x=-5 3y-x=5 In the top equation: 1) y=2x -5 Put that in the second equation 3( ) -x =5 solve for x, then put that value of x back in to 1) to solve for y. So would it be: 3(2x-5)=5? would it?PLz
help me bob and margie!
Algebra!Problem Solving!PLZ HELP!
Sunset rents an SUV at $21.95 plus $0.23 per mile. Sunrise rents the same vehicle for $24.95 plus $0.19 per mile. For what mileage is the cost the same? This means that one cost equals the other. Let
X stand for the number of miles. Thus the basic cost plus the mileage costs f...
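Following that hint: 21.95 + 0.23X = 24.95 + 0.19X gives 0.04X = 3.00, so X = 75 miles, where both companies charge the same. A quick numeric check (a sketch; the names are mine):

```python
sunset = lambda m: 21.95 + 0.23 * m    # Sunset's cost for m miles
sunrise = lambda m: 24.95 + 0.19 * m   # Sunrise's cost for m miles

# set the costs equal and solve for the crossover mileage
m = (24.95 - 21.95) / (0.23 - 0.19)
print(m, sunset(m), sunrise(m))   # crossover at 75 miles, both about $39.20
```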
Substitution Method in Algebra!HELP PLZ!
Use the Substitution method to solve the system of equations. 3x + y = 5 4x - 7y = -10 multiply first equation by 4 multiply second equation by 3 thus both equations have same x or y value in this
case it is the x value. 12x + 4y = 20 12x - 21y = -30 solve by elimination then ...
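Either route gives x = 1, y = 2. A sketch of the elimination step suggested above, using exact rational arithmetic (variable names are mine):

```python
from fractions import Fraction

# subtract (12x - 21y = -30) from (12x + 4y = 20): the x terms cancel,
# leaving 25y = 50
y = Fraction(20 - (-30), 4 - (-21))
x = (5 - y) / 3            # back-substitute into 3x + y = 5
print(x, y)                # 1 2

# verify against both original equations
assert 3 * x + y == 5 and 4 * x - 7 * y == -10
```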
I really need help with this Essay question and I haven't got a clue. It is to do with Fitzgeralds "The Great Gatsby" please answer before Monday 6th November. Thankyou. here is the question: Through
analysis of patterning, examine Fitzgerald's use of symboli...
what is a SQUARE ROOT? sqrt(x) * sqrt(x) = x sqrt(x) = x ^ (1/2) details: http://en.wikipedia.org/wiki/Square_root
What disease is caused when cells in body divide too rapidly by mitosis? It has been suggested that both certain types of cancer and Alzheimers have a relationship to mitotic disease. 1. Cancer 2.
Elephantiosis (spelling?)The Elephant Man Is a good example, and movie. 3. Osteo...
What are 3 chemical properties of iron and 2 chemical properties of sodium? http://chemistry.allinfoabout.com/periodic/fe.html http://chemistry.allinfoabout.com/periodic/na.html Thanks. That helped a
civil war- true/false
The soldiers died more of disease than gunshot
What is range Range is the possible values a dependent variable may take on. For instance, If y= 3x+2, and the domain of x is {0,1,2}, then the range of y is [2,5,8]
Pages: <<Prev | 1 | 2 | 3 | 4 | 5 | 6
|
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=Matthew&page=6","timestamp":"2014-04-16T04:29:26Z","content_type":null,"content_length":"18234","record_id":"<urn:uuid:fc63c8b7-c755-45f7-92cb-e07ee3645765>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00621-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Stat 401F/XW - Lab Activities
Lab activities
lab 1: 22 Aug Introduction to SAS, tomato.sas, output
lab 2, 31 Aug Randomization tests, t-tests and paired t-tests SAS code and output. The first 5 pages are blank for question time.
lab 3, 5 Sept Paired T tests and transformations. SAS code and output. The first 5 pages are blank for question time.
lab 4, 14 Sept Wilcoxon rank sum test, pooling residuals SAS code and output. The first 5 pages are blank for question time.
lab 5, 21 Sept Fitting an ANOVA model, estimating means and linear combinations of means. SAS code and output. The first 5 pages are blank for question time.
lab 6, 28 Sept More 'after the ANOVA' analyses: multiple comparisons and false discovery rate. SAS code and output. The first 5 pages are blank for question time.
lab 7, 3 Oct Fitting linear regression in GLM and REG. SAS code and output. The first 5 pages are blank for question time.
lab 8, 10 Oct More fitting linear regressions in GLM. Lots more ways to plot data. SAS code and output. The first 5 pages are blank for question time.
lab 9, 17 Oct More fitting linear regressions in GLM. Calculate T and F quantiles. Fitting multiple linear regression. SAS code and output. The first 5 pages are blank for question time.
lab 10, 24 Oct Constructing variables SAS code and output. The first 5 pages are blank for question time.
lab 11, 31 Oct Model selection SAS code and output. The first 5 pages are blank for question time.
lab 12, 7 Nov 2 way ANOVA SAS code and output. The first 5 pages are blank for question time.
lab 13, 14 Nov RCBD and proportions SAS code and output. The first 5 pages are blank for question time.
lab 14, 28 Nov Miscellaneous things you can do in SAS (macros, merging data, dates, times, keyboard shortcuts) SAS code and output. The first 5 pages are blank for question time.
lab 15, 5 Dec Logistic regression, overdispersion, Poisson regression SAS code and output. The first 5 pages are blank for question time.
Integers, Floating-point
Number Systems
Human beings use decimal (base 10) and duodecimal (base 12) number systems for counting and measurements (probably because we have 10 fingers and two big toes). Computers use the binary (base 2) number system, as they are made from binary digital components (known as transistors) operating in two states - on and off. In computing, we also use hexadecimal (base 16) or octal (base 8) number systems, as a compact form for representing binary numbers.
Decimal (Base 10) Number System
Decimal number system has ten symbols: 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9, called digits. It uses positional notation. That is, the least-significant digit (right-most digit) is of the order of 10^0
(units or ones), the second right-most digit is of the order of 10^1 (tens), the third right-most digit is of the order of 10^2 (hundreds), and so on. For example,
735 = 7×10^2 + 3×10^1 + 5×10^0
We shall denote a decimal number with an optional suffix D if ambiguity arises.
Binary (Base 2) Number System
Binary number system has two symbols: 0 and 1, called bits. It is also a positional notation, for example,
10110B = 1×2^4 + 0×2^3 + 1×2^2 + 1×2^1 + 0×2^0
We shall denote a binary number with a suffix B. Some programming languages denote binary numbers with prefix 0b (e.g., 0b1001000), or prefix b with the bits quoted (e.g., b'10001111').
A binary digit is called a bit. Eight bits is called a byte (why 8-bit unit? Probably because 8=2^3).
Hexadecimal (Base 16) Number System
Hexadecimal number system uses 16 symbols: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, and F, called hex digits. It is a positional notation, for example,
A3EH = 10×16^2 + 3×16^1 + 14×16^0
We shall denote a hexadecimal number (in short, hex) with a suffix H. Some programming languages denote hex numbers with prefix 0x (e.g., 0x1A3C5F), or prefix x with hex digit quoted (e.g.,
Each hexadecimal digit is also called a hex digit. Most programming languages accept lowercase 'a' to 'f' as well as uppercase 'A' to 'F'.
Computers uses binary system in their internal operations, as they are built from binary digital electronic components. However, writing or reading a long sequence of binary bits is cumbersome and
error-prone. Hexadecimal system is used as a compact form or shorthand for binary bits. Each hex digit is equivalent to 4 binary bits, i.e., shorthand for 4 bits, as follows:
0H (0000B) (0D) 1H (0001B) (1D) 2H (0010B) (2D) 3H (0011B) (3D)
4H (0100B) (4D) 5H (0101B) (5D) 6H (0110B) (6D) 7H (0111B) (7D)
8H (1000B) (8D) 9H (1001B) (9D) AH (1010B) (10D) BH (1011B) (11D)
CH (1100B) (12D) DH (1101B) (13D) EH (1110B) (14D) FH (1111B) (15D)
Conversion from Hexadecimal to Binary
Replace each hex digit by the 4 equivalent bits, for examples,
A3C5H = 1010 0011 1100 0101B
102AH = 0001 0000 0010 1010B
Conversion from Binary to Hexadecimal
Starting from the right-most bit (least-significant bit), replace each group of 4 bits by the equivalent hex digit (pad the left-most bits with zero if necessary), for examples,
1001001010B = 0010 0100 1010B = 24AH
10001011001011B = 0010 0010 1100 1011B = 22CBH
It is important to note that hexadecimal number provides a compact form or shorthand for representing binary bits.
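Since each hex digit stands for exactly 4 bits, both conversions are mechanical and easy to automate. A sketch in Python (function names are mine; the built-in int and format do the radix work):

```python
def bin_to_hex(bits):
    # pad to a multiple of 4 bits, then map each 4-bit group to a hex digit
    bits = bits.zfill((len(bits) + 3) // 4 * 4)
    return "".join(format(int(bits[i:i + 4], 2), "X")
                   for i in range(0, len(bits), 4))

def hex_to_bin(hexdigits):
    # each hex digit expands to exactly 4 bits
    return "".join(format(int(d, 16), "04b") for d in hexdigits)

print(bin_to_hex("1001001010"))   # 24A
print(hex_to_bin("A3C5"))         # 1010001111000101
```

Both outputs match the worked examples above.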
Conversion from Base r to Decimal (Base 10)
Given an n-digit base r number: d(n-1) d(n-2) d(n-3) ... d3 d2 d1 d0 (base r), the decimal equivalent is given by:
d(n-1) × r^(n-1) + d(n-2) × r^(n-2) + ... + d1 × r^1 + d0 × r^0
For examples,
A1C2H = 10×16^3 + 1×16^2 + 12×16^1 + 2 = 41410 (base 10)
10110B = 1×2^4 + 1×2^2 + 1×2^1 = 22 (base 10)
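The positional sum translates directly into a loop; processing digits left to right (Horner's rule) avoids computing powers explicitly. A sketch, with digit values above 9 written A-F (the function name is mine):

```python
def to_decimal(digits, r):
    # evaluate d(n-1)*r^(n-1) + ... + d1*r + d0 by Horner's rule
    value = 0
    for d in digits:
        value = value * r + "0123456789ABCDEF".index(d.upper())
    return value

print(to_decimal("A1C2", 16))   # 41410
print(to_decimal("10110", 2))   # 22
```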
Conversion from Decimal (Base 10) to Base r
Use repeated division/remainder. For example,
To convert 261D to hexadecimal:
261/16 => quotient=16 remainder=5
16/16 => quotient=1 remainder=0
1/16 => quotient=0 remainder=1 (quotient=0 stop)
Hence, 261D = 105H
The above procedure is actually applicable to conversion between any 2 base systems. For example,
To convert 1023(base 4) to base 3:
1023(base 4)/3 => quotient=25D remainder=0
25D/3 => quotient=8D remainder=1
8D/3 => quotient=2D remainder=2
2D/3 => quotient=0 remainder=2 (quotient=0 stop)
Hence, 1023(base 4) = 2210(base 3)
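The repeated-division procedure, collecting the remainders in reverse order of generation, can be sketched as follows (the function name is mine):

```python
def from_decimal(n, r):
    # repeatedly divide by the target radix; the remainders, read in
    # reverse order of generation, are the digits of the result
    if n == 0:
        return "0"
    digits = ""
    while n > 0:
        n, rem = divmod(n, r)
        digits = "0123456789ABCDEF"[rem] + digits
    return digits

print(from_decimal(261, 16))             # 105
print(from_decimal(18, 2))               # 10010
# base 4 -> base 3, going through decimal as an intermediate
print(from_decimal(int("1023", 4), 3))   # 2210
```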
General Conversion between 2 Base Systems with Fractional Part
1. Separate the integral and the fractional parts.
2. For the integral part, divide by the target radix repeatably, and collect the ramainder in reverse order.
3. For the fractional part, multiply the fractional part by the target radix repeatably, and collect the integral part in the same order.
Example 1:
Convert 18.6875D to binary
Integral Part = 18D
18/2 => quotient=9 remainder=0
9/2 => quotient=4 remainder=1
4/2 => quotient=2 remainder=0
2/2 => quotient=1 remainder=0
1/2 => quotient=0 remainder=1 (quotient=0 stop)
Hence, 18D = 10010B
Fractional Part = .6875D
.6875*2=1.375 => whole number is 1
.375*2=0.75 => whole number is 0
.75*2=1.5 => whole number is 1
.5*2=1.0 => whole number is 1
Hence .6875D = .1011B
Therefore, 18.6875D = 10010.1011B
Example 2:
Convert 18.6875D to hexadecimal
Integral Part = 18D
18/16 => quotient=1 remainder=2
1/16 => quotient=0 remainder=1 (quotient=0 stop)
Hence, 18D = 12H
Fractional Part = .6875D
.6875*16=11.0 => whole number is 11D (BH)
Hence .6875D = .BH
Therefore, 18.6875D = 12.BH
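The fractional-part procedure (multiply by the radix, peel off the whole part) can be sketched the same way. Exact rational arithmetic avoids floating-point noise, and a digit limit is needed because many decimal fractions, unlike .6875, never terminate in binary (the function name is mine):

```python
from fractions import Fraction

def frac_to_base(frac, r, max_digits=20):
    # repeatedly multiply by the radix; the whole parts, collected in
    # the order generated, are the digits after the radix point
    digits = ""
    while frac > 0 and len(digits) < max_digits:
        frac *= r
        whole = int(frac)
        digits += "0123456789ABCDEF"[whole]
        frac -= whole
    return digits

print(frac_to_base(Fraction(6875, 10000), 2))    # 1011
print(frac_to_base(Fraction(6875, 10000), 16))   # B
print(frac_to_base(Fraction(1, 3), 2))           # 01 repeating, cut at 20 digits
```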
Exercises (Number Systems Conversion)
1. Convert the following decimal numbers into binary and hexadecimal numbers:
1. 108
2. 4848
3. 9000
2. Convert the following binary numbers into hexadecimal and decimal numbers:
1. 1000011000
2. 10000000
3. 101010101010
3. Convert the following hexadecimal numbers into binary and decimal numbers:
1. ABCDE
2. 1234
3. 80F
4. Convert the following decimal numbers into binary equivalent:
1. 19.25D
2. 123.456D
Answers: You could use the Windows' Calculator (calc.exe) to carry out number system conversion, by setting it to the scientific mode. (Run "calc" ⇒ Select "View" menu ⇒ Choose "Programmer" or
"Scientific" mode.)
1. 1101100B, 1001011110000B, 10001100101000B, 6CH, 12F0H, 2328H.
2. 218H, 80H, AAAH, 536D, 128D, 2730D.
3. 10101011110011011110B, 1001000110100B, 100000001111B, 703710D, 4660D, 2063D.
4. 19.25D = 10011.01B; 123.456D ≈ 1111011.01110100101111B (the fractional part does not terminate in binary).
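Besides the Windows Calculator, Python's built-in base handling makes it easy to check answers like these (a sketch):

```python
# exercise 1: decimal -> binary and hexadecimal
for n in (108, 4848, 9000):
    print(n, "=", bin(n)[2:] + "B", "=", hex(n)[2:].upper() + "H")

# exercise 3: hexadecimal -> binary and decimal
for h in ("ABCDE", "1234", "80F"):
    n = int(h, 16)
    print(h + "H", "=", bin(n)[2:] + "B", "=", n)
```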
Computer Memory & Data Representation
Computer uses a fixed number of bits to represent a piece of data, which could be a number, a character, or others. A n-bit storage location can represent up to 2^n distinct entities. For example, a
3-bit memory location can hold one of these eight binary patterns: 000, 001, 010, 011, 100, 101, 110, or 111. Hence, it can represent at most 8 distinct entities. You could use them to represent
numbers 0 to 7, numbers 8881 to 8888, characters 'A' to 'H', or up to 8 kinds of fruits like apple, orange, banana; or up to 8 kinds of animals like lion, tiger, etc.
Integers, for example, can be represented in 8-bit, 16-bit, 32-bit or 64-bit. You, as the programmer, choose an appropriate bit-length for your integers. Your choice will impose constraint on the
range of integers that can be represented. Besides the bit-length, an integer can be represented in various representation schemes, e.g., unsigned vs. signed integers. An 8-bit unsigned integer has a
range of 0 to 255, while an 8-bit signed integer has a range of -128 to 127 - both representing 256 distinct numbers.
It is important to note that a computer memory location merely stores a binary pattern. It is entirely up to you, as the programmer, to decide on how these patterns are to be interpreted. For
example, the 8-bit binary pattern "0100 0001B" can be interpreted as an unsigned integer 65, or an ASCII character 'A', or some secret information known only to you. In other words, you have to first
decide how to represent a piece of data in a binary pattern before the binary patterns make sense. The interpretation of binary pattern is called data representation or encoding. Furthermore, it is
important that the data representation schemes are agreed upon by all the parties, i.e., industrial standards need to be formulated and strictly followed.
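The point that one binary pattern has many readings can be demonstrated directly (a sketch):

```python
import struct

raw = bytes([0x41])                 # the 8-bit pattern 0100 0001B
print(struct.unpack("B", raw)[0])   # 65  -- read as an unsigned integer
print(raw.decode("ascii"))          # A   -- read as an ASCII character
```

The byte in memory never changes; only the interpretation applied to it does.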
Once you decided on the data representation scheme, certain constraints, in particular, the precision and range will be imposed. Hence, it is important to understand data representation to write
correct and high-performance programs.
Rosetta Stone and the Decipherment of Egyptian Hieroglyphs
Egyptian hieroglyphs were used by the ancient Egyptians since 4000BC. Unfortunately, since 500AD, no one could read the ancient Egyptian hieroglyphs any longer, until the re-discovery of the Rosetta Stone in 1799 by Napoleon's troops (during Napoleon's Egyptian invasion) near the town of Rashid (Rosetta) in the Nile Delta.
The Rosetta Stone (left) is inscribed with a decree in 196BC on behalf of King Ptolemy V. The decree appears in three scripts: the upper text is Ancient Egyptian hieroglyphs, the middle portion
Demotic script, and the lowest Ancient Greek. Because it presents essentially the same text in all three scripts, and Ancient Greek could still be understood, it provided the key to the decipherment
of the Egyptian hieroglyphs.
The moral of the story is unless you know the encoding scheme, there is no way that you can decode the data.
Reference and images: Wikipedia.
Integer Representation
Integers are whole numbers or fixed-point numbers with the radix point fixed after the least-significant bit. They stand in contrast to real numbers or floating-point numbers, where the position of the radix point varies. It is important to take note that integers and floating-point numbers are treated differently in computers. They have different representations and are processed differently (e.g., floating-point numbers are processed in a so-called floating-point processor). Floating-point numbers will be discussed later.
Computers use a fixed number of bits to represent an integer. The commonly-used bit-lengths for integers are 8-bit, 16-bit, 32-bit or 64-bit. Besides bit-lengths, there are two representation schemes
for integers:
1. Unsigned Integers: can represent zero and positive integers.
2. Signed Integers: can represent zero, positive and negative integers. Three representation schemes had been proposed for signed integers:
1. Sign-Magnitude representation
2. 1's Complement representation
3. 2's Complement representation
You, as the programmer, need to decide on the bit-length and representation scheme for your integers, depending on your application's requirements. Suppose that you need a counter for counting a small quantity from 0 up to 200: you might choose the 8-bit unsigned integer scheme, as no negative numbers are involved.
n-bit Unsigned Integers
Unsigned integers can represent zero and positive integers, but not negative integers. The value of an unsigned integer is interpreted as "the magnitude of its underlying binary pattern".
Example 1: Suppose that n=8 and the binary pattern is 0100 0001B, the value of this unsigned integer is 1×2^0 + 1×2^6 = 65D.
Example 2: Suppose that n=16 and the binary pattern is 0001 0000 0000 1000B, the value of this unsigned integer is 1×2^3 + 1×2^12 = 4104D.
Example 3: Suppose that n=16 and the binary pattern is 0000 0000 0000 0000B, the value of this unsigned integer is 0.
An n-bit pattern can represent 2^n distinct integers. An n-bit unsigned integer can represent integers from 0 to (2^n)-1, as tabulated below:
n Minimum Maximum
8 0 (2^8)-1 (=255)
16 0 (2^16)-1 (=65,535)
32 0 (2^32)-1 (=4,294,967,295) (9+ digits)
64 0 (2^64)-1 (=18,446,744,073,709,551,615) (19+ digits)
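Java has no unsigned byte type, so the same 8-bit pattern can be viewed either way; a minimal sketch (the class name is made up for illustration):

```java
public class UnsignedDemo {
    public static void main(String[] args) {
        // 0100 0001B read as an 8-bit unsigned integer
        System.out.println(0b01000001);            // prints 65

        // Java's byte is signed; mask with 0xFF to recover the unsigned view
        byte b = (byte) 0b11111111;                // bit pattern 1111 1111B
        System.out.println(b & 0xFF);              // prints 255 (unsigned view)
        System.out.println(b);                     // prints -1 (2's complement view)
    }
}
```

The `& 0xFF` mask promotes the byte to an int and zeroes the upper 24 bits, which is the standard idiom for reading unsigned bytes in Java.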
Signed Integers
Signed integers can represent zero, positive integers, as well as negative integers. Three representation schemes are available for signed integers:
1. Sign-Magnitude representation
2. 1's Complement representation
3. 2's Complement representation
In all the above three schemes, the most-significant bit (msb) is called the sign bit. The sign bit is used to represent the sign of the integer - with 0 for positive integers and 1 for negative
integers. The magnitude of the integer, however, is interpreted differently in different schemes.
n-bit Sign Integers in Sign-Magnitude Representation
In sign-magnitude representation:
• The most-significant bit (msb) is the sign bit, with a value of 0 representing a positive integer and 1 representing a negative integer.
• The remaining n-1 bits represent the magnitude (absolute value) of the integer. The absolute value of the integer is interpreted as "the magnitude of the (n-1)-bit binary pattern".
Example 1: Suppose that n=8 and the binary representation is 0 100 0001B.
Sign bit is 0 ⇒ positive
Absolute value is 100 0001B = 65D
Hence, the integer is +65D
Example 2: Suppose that n=8 and the binary representation is 1 000 0001B.
Sign bit is 1 ⇒ negative
Absolute value is 000 0001B = 1D
Hence, the integer is -1D
Example 3: Suppose that n=8 and the binary representation is 0 000 0000B.
Sign bit is 0 ⇒ positive
Absolute value is 000 0000B = 0D
Hence, the integer is +0D
Example 4: Suppose that n=8 and the binary representation is 1 000 0000B.
Sign bit is 1 ⇒ negative
Absolute value is 000 0000B = 0D
Hence, the integer is -0D
The drawbacks of sign-magnitude representation are:
1. There are two representations (0000 0000B and 1000 0000B) for the number zero, which could lead to inefficiency and confusion.
2. Positive and negative integers need to be processed separately.
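The decoding rule above is simple enough to sketch in Java (the class and method names are illustrative, not a standard API):

```java
public class SignMagnitude {
    // Decode an 8-bit sign-magnitude pattern, given as an int in 0..255
    public static int decode(int bits) {
        int magnitude = bits & 0x7F;               // lower 7 bits
        return (bits & 0x80) == 0 ? magnitude : -magnitude;
    }

    public static void main(String[] args) {
        System.out.println(decode(0b01000001));    // prints 65  (Example 1)
        System.out.println(decode(0b10000001));    // prints -1  (Example 2)
        System.out.println(decode(0b10000000));    // prints 0   (the -0 of Example 4)
    }
}
```

Note how `-0` and `+0` both decode to the same int 0, illustrating the redundancy of the two zero patterns.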
n-bit Sign Integers in 1's Complement Representation
In 1's complement representation:
• Again, the most significant bit (msb) is the sign bit, with a value of 0 representing positive integers and 1 representing negative integers.
• The remaining n-1 bits represent the magnitude of the integer, as follows:
□ for positive integers, the absolute value of the integer is equal to "the magnitude of the (n-1)-bit binary pattern".
□ for negative integers, the absolute value of the integer is equal to "the magnitude of the complement (inverse) of the (n-1)-bit binary pattern" (hence called 1's complement).
Example 1: Suppose that n=8 and the binary representation 0 100 0001B.
Sign bit is 0 ⇒ positive
Absolute value is 100 0001B = 65D
Hence, the integer is +65D
Example 2: Suppose that n=8 and the binary representation 1 000 0001B.
Sign bit is 1 ⇒ negative
Absolute value is the complement of 000 0001B, i.e., 111 1110B = 126D
Hence, the integer is -126D
Example 3: Suppose that n=8 and the binary representation 0 000 0000B.
Sign bit is 0 ⇒ positive
Absolute value is 000 0000B = 0D
Hence, the integer is +0D
Example 4: Suppose that n=8 and the binary representation 1 111 1111B.
Sign bit is 1 ⇒ negative
Absolute value is the complement of 111 1111B, i.e., 000 0000B = 0D
Hence, the integer is -0D
Again, the drawbacks are:
1. There are two representations (0000 0000B and 1111 1111B) for zero.
2. The positive integers and negative integers need to be processed separately.
n-bit Sign Integers in 2's Complement Representation
In 2's complement representation:
• Again, the most significant bit (msb) is the sign bit, with a value of 0 representing positive integers and 1 representing negative integers.
• The remaining n-1 bits represent the magnitude of the integer, as follows:
□ for positive integers, the absolute value of the integer is equal to "the magnitude of the (n-1)-bit binary pattern".
□ for negative integers, the absolute value of the integer is equal to "the magnitude of the complement of the (n-1)-bit binary pattern plus one" (hence called 2's complement).
Example 1: Suppose that n=8 and the binary representation 0 100 0001B.
Sign bit is 0 ⇒ positive
Absolute value is 100 0001B = 65D
Hence, the integer is +65D
Example 2: Suppose that n=8 and the binary representation 1 000 0001B.
Sign bit is 1 ⇒ negative
Absolute value is the complement of 000 0001B plus 1, i.e., 111 1110B + 1B = 127D
Hence, the integer is -127D
Example 3: Suppose that n=8 and the binary representation 0 000 0000B.
Sign bit is 0 ⇒ positive
Absolute value is 000 0000B = 0D
Hence, the integer is +0D
Example 4: Suppose that n=8 and the binary representation 1 111 1111B.
Sign bit is 1 ⇒ negative
Absolute value is the complement of 111 1111B plus 1, i.e., 000 0000B + 1B = 1D
Hence, the integer is -1D
Computers use 2's Complement Representation for Signed Integers
We have discussed three representations for signed integers: signed-magnitude, 1's complement and 2's complement. Computers use 2's complement in representing signed integers. This is because:
1. There is only one representation for the number zero in 2's complement, instead of two representations in sign-magnitude and 1's complement.
2. Positive and negative integers can be treated together in addition and subtraction. Subtraction can be carried out using the "addition logic".
Example 1: Addition of Two Positive Integers: Suppose that n=8, 65D + 5D = 70D
65D → 0100 0001B
5D → 0000 0101B(+
0100 0110B → 70D (OK)
Example 2: Subtraction is treated as Addition of a Positive and a Negative Integer: Suppose that n=8, 65D - 5D = 65D + (-5D) = 60D
65D → 0100 0001B
-5D → 1111 1011B(+
0011 1100B → 60D (discard carry - OK)
Example 3: Addition of Two Negative Integers: Suppose that n=8, -65D - 5D = (-65D) + (-5D) = -70D
-65D → 1011 1111B
-5D → 1111 1011B(+
1011 1010B → -70D (discard carry - OK)
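Java's integer types use 2's complement, so the three examples above can be reproduced directly; the narrowing cast to byte plays the role of discarding the carry:

```java
public class TwosComplementAdd {
    public static void main(String[] args) {
        // byte operands are promoted to int; the cast back keeps the low 8 bits
        System.out.println((byte) (65 + 5));       // prints 70   (Example 1)
        System.out.println((byte) (65 + (-5)));    // prints 60   (Example 2)
        System.out.println((byte) (-65 + (-5)));   // prints -70  (Example 3)
    }
}
```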
Because of the fixed precision (i.e., fixed number of bits), an n-bit 2's complement signed integer has a certain range. For example, for n=8, the range of 2's complement signed integers is -128 to
+127. During addition (and subtraction), it is important to check whether the result exceeds this range, in other words, whether overflow or underflow has occurred.
Example 4: Overflow: Suppose that n=8, 127D + 2D = 129D (overflow - beyond the range)
127D → 0111 1111B
2D → 0000 0010B(+
1000 0001B → -127D (wrong)
Example 5: Underflow: Suppose that n=8, -125D - 5D = -130D (underflow - below the range)
-125D → 1000 0011B
-5D → 1111 1011B(+
0111 1110B → +126D (wrong)
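One way to detect 8-bit overflow, sketched in Java: compute the sum at a wider width and compare it with the narrowed result.

```java
public class OverflowCheck {
    public static void main(String[] args) {
        byte a = 127, b = 2;
        int wide = a + b;                  // computed in int: 129
        byte narrow = (byte) wide;         // wraps around to -127
        System.out.println(narrow);        // prints -127
        System.out.println(wide != narrow);// prints true (overflow detected)
    }
}
```

If `wide` and `narrow` disagree, the true result did not fit in 8 bits.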
The following diagram explains how the 2's complement works. By re-arranging the number line, values from -128 to +127 are represented contiguously by ignoring the carry bit.
Range of n-bit 2's Complement Signed Integers
An n-bit 2's complement signed integer can represent integers from -2^(n-1) to +2^(n-1)-1, as tabulated. Take note that the scheme can represent all the integers within the range, without any gap. In other words, there are no missing integers within the supported range.
n minimum maximum
8 -(2^7) (=-128) +(2^7)-1 (=+127)
16 -(2^15) (=-32,768) +(2^15)-1 (=+32,767)
32 -(2^31) (=-2,147,483,648) +(2^31)-1 (=+2,147,483,647)(9+ digits)
64 -(2^63) (=-9,223,372,036,854,775,808) +(2^63)-1 (=+9,223,372,036,854,775,807)(18+ digits)
Decoding 2's Complement Numbers
1. Check the sign bit (denoted as S).
2. If S=0, the number is positive and its absolute value is the binary value of the remaining n-1 bits.
3. If S=1, the number is negative. You could "invert the n-1 bits and plus 1" to get the absolute value of the negative number.
Alternatively, you could scan the remaining n-1 bits from the right (least-significant bit). Look for the first occurrence of 1. Flip all the bits to the left of that first occurrence of 1. The
flipped pattern gives the absolute value. For example,
n = 8, bit pattern = 1 100 0100B
S = 1 → negative
Scanning from the right and flip all the bits to the left of the first occurrence of 1 ⇒ 011 1100B = 60D
Hence, the value is -60D
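Both decoding procedures agree with Java's built-in interpretation; a small sketch (decode is an illustrative helper, not a library method):

```java
public class DecodeTwos {
    // "Invert the bits and plus 1" for an 8-bit pattern given as an int in 0..255
    public static int decode(int bits) {
        if ((bits & 0x80) == 0) return bits;       // sign bit 0: positive as-is
        return -((~bits & 0xFF) + 1);              // negative: recover the magnitude
    }

    public static void main(String[] args) {
        System.out.println(decode(0b11000100));    // prints -60
        System.out.println((byte) 0b11000100);     // prints -60 (Java's own view agrees)
    }
}
```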
Big Endian vs. Little Endian
Modern computers store one byte of data in each memory address or location, i.e., byte-addressable memory. A 32-bit integer is, therefore, stored in 4 memory addresses.
The term "Endian" refers to the order of storing bytes in computer memory. In the "Big Endian" scheme, the most significant byte is stored first, in the lowest memory address (big end first), while "Little Endian" stores the least significant byte in the lowest memory address.
For example, the 32-bit integer 12345678H (305419896D) is stored as 12H 34H 56H 78H in big endian; and 78H 56H 34H 12H in little endian. The 16-bit pattern 00H 01H is interpreted as 0001H in big endian, and 0100H in little endian.
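In Java, java.nio.ByteBuffer makes the byte order explicit, which gives a quick way to see both layouts (a sketch):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class EndianDemo {
    public static void main(String[] args) {
        int value = 0x12345678;
        byte[] big = ByteBuffer.allocate(4)
                .order(ByteOrder.BIG_ENDIAN).putInt(value).array();
        byte[] little = ByteBuffer.allocate(4)
                .order(ByteOrder.LITTLE_ENDIAN).putInt(value).array();
        System.out.printf("big-endian:    %02X %02X %02X %02X%n",
                big[0], big[1], big[2], big[3]);             // 12 34 56 78
        System.out.printf("little-endian: %02X %02X %02X %02X%n",
                little[0], little[1], little[2], little[3]); // 78 56 34 12
    }
}
```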
Exercise (Integer Representation)
1. What are the ranges of 8-bit, 16-bit, 32-bit and 64-bit integer, in "unsigned" and "signed" representation?
2. Give the value of 88, 0, 1, 127, and 255 in 8-bit unsigned representation.
3. Give the value of +88, -88 , -1, 0, +1, -128, and +127 in 8-bit 2's complement signed representation.
4. Give the value of +88, -88 , -1, 0, +1, -127, and +127 in 8-bit sign-magnitude representation.
5. Give the value of +88, -88 , -1, 0, +1, -127 and +127 in 8-bit 1's complement representation.
6. [TODO] more.
1. The range of unsigned n-bit integers is [0, 2^n - 1]. The range of n-bit 2's complement signed integer is [-2^(n-1), +2^(n-1)-1];
2. 88 (0101 1000), 0 (0000 0000), 1 (0000 0001), 127 (0111 1111), 255 (1111 1111).
3. +88 (0101 1000), -88 (1010 1000), -1 (1111 1111), 0 (0000 0000), +1 (0000 0001), -128 (1000 0000), +127 (0111 1111).
4. +88 (0101 1000), -88 (1101 1000), -1 (1000 0001), 0 (0000 0000 or 1000 0000), +1 (0000 0001), -127 (1111 1111), +127 (0111 1111).
5. +88 (0101 1000), -88 (1010 0111), -1 (1111 1110), 0 (0000 0000 or 1111 1111), +1 (0000 0001), -127 (1000 0000), +127 (0111 1111).
Floating-Point Number Representation
A floating-point number (or real number) can represent a very large (1.23×10^88) or a very small (1.23×10^-88) value. It could also represent a very large negative number (-1.23×10^88) and a very small negative number (-1.23×10^-88), as well as zero, as illustrated:
A floating-point number is typically expressed in the scientific notation, with a fraction (F), and an exponent (E) of a certain radix (r), in the form of F×r^E. Decimal numbers use radix of 10 (F×10
^E); while binary numbers use radix of 2 (F×2^E).
Representation of floating point number is not unique. For example, the number 55.66 can be represented as 5.566×10^1, 0.5566×10^2, 0.05566×10^3, and so on. The fractional part can be normalized. In
the normalized form, there is only a single non-zero digit before the radix point. For example, decimal number 123.4567 can be normalized as 1.234567×10^2; binary number 1010.1011B can be normalized
as 1.0101011B×2^3.
It is important to note that floating-point numbers suffer from loss of precision when represented with a fixed number of bits (e.g., 32-bit or 64-bit). This is because there are infinitely many real numbers (even within a small range, say 0.0 to 0.1). On the other hand, an n-bit binary pattern can represent only a finite number (2^n) of distinct values. Hence, not all real numbers can be represented. The nearest approximation is used instead, resulting in loss of accuracy.
It is also important to note that floating-point arithmetic is much less efficient than integer arithmetic. It can be sped up with a so-called dedicated floating-point co-processor. Hence, use integers if your application does not require floating-point numbers.
In computers, floating-point numbers are represented in scientific notation of fraction (F) and exponent (E) with a radix of 2, in the form of F×2^E. Both E and F can be positive as well as negative.
Modern computers adopt IEEE 754 standard for representing floating-point numbers. There are two representation schemes: 32-bit single-precision and 64-bit double-precision.
IEEE-754 32-bit Single-Precision Floating-Point Numbers
In 32-bit single-precision floating-point representation:
• The most significant bit is the sign bit (S), with 0 for positive numbers and 1 for negative numbers.
• The following 8 bits represent the exponent (E).
• The remaining 23 bits represent the fraction (F).
Normalized Form
Let's illustrate with an example, suppose that the 32-bit pattern is 1 1000 0001 011 0000 0000 0000 0000 0000, with:
• S = 1
• E = 1000 0001
• F = 011 0000 0000 0000 0000 0000
In the normalized form, the actual fraction is normalized with an implicit leading 1 in the form of 1.F. In this example, the actual fraction is 1.011 0000 0000 0000 0000 0000B = 1 + 1×2^-2 + 1×2^-3 = 1.375D.
The sign bit represents the sign of the number, with S=0 for a positive and S=1 for a negative number. In this example with S=1, this is a negative number, i.e., -1.375D.
In normalized form, the actual exponent is E-127 (so-called excess-127 or bias-127). This is because we need to represent both positive and negative exponent. With an 8-bit E, ranging from 0 to 255,
the excess-127 scheme could provide actual exponent of -127 to 128. In this example, E-127=129-127=2D.
Hence, the number represented is -1.375×2^2=-5.5D.
De-Normalized Form
Normalized form has a serious problem: with an implicit leading 1 for the fraction, it cannot represent the number zero! Convince yourself of this!
De-normalized form was devised to represent zero and other numbers.
For E=0, the numbers are in the de-normalized form. An implicit leading 0 (instead of 1) is used for the fraction; and the actual exponent is always -126. Hence, the number zero can be represented
with E=0 and F=0 (because 0.0×2^-126=0).
We can also represent very small positive and negative numbers in de-normalized form with E=0. For example, if S=1, E=0, and F=011 0000 0000 0000 0000 0000. The actual fraction is 0.011=1×2^-2+1×2^-3
=0.375D. Since S=1, it is a negative number. With E=0, the actual exponent is -126. Hence the number is -0.375×2^-126 = -4.4×10^-39, which is an extremely small negative number (close to zero).
In summary, the value (N) is calculated as follows:
• For 1 ≤ E ≤ 254, N = (-1)^S × 1.F × 2^(E-127). These numbers are in the so-called normalized form. The sign bit represents the sign of the number. The fractional part (1.F) is normalized with an implicit leading 1. The exponent is biased (or in excess) by 127, so as to represent both positive and negative exponents. The range of the actual exponent is -126 to +127.
• For E = 0, N = (-1)^S × 0.F × 2^(-126). These numbers are in the so-called denormalized form. The exponent of 2^-126 evaluates to a very small number. The denormalized form is needed to represent zero (with F=0 and E=0). It can also represent very small positive and negative numbers close to zero.
• For E = 255, it represents special values, such as ±INF (positive and negative infinity) and NaN (not a number). This is beyond the scope of this article.
Example 1: Suppose that IEEE-754 32-bit floating-point representation pattern is 0 10000000 110 0000 0000 0000 0000 0000.
Sign bit S = 0 ⇒ positive number
E = 1000 0000B = 128D (in normalized form)
Fraction is 1.11B (with an implicit leading 1) = 1 + 1×2^-1 + 1×2^-2 = 1.75D
The number is +1.75 × 2^(128-127) = +3.5D
Example 2: Suppose that IEEE-754 32-bit floating-point representation pattern is 1 01111110 100 0000 0000 0000 0000 0000.
Sign bit S = 1 ⇒ negative number
E = 0111 1110B = 126D (in normalized form)
Fraction is 1.1B (with an implicit leading 1) = 1 + 2^-1 = 1.5D
The number is -1.5 × 2^(126-127) = -0.75D
Example 3: Suppose that IEEE-754 32-bit floating-point representation pattern is 1 01111110 000 0000 0000 0000 0000 0001.
Sign bit S = 1 ⇒ negative number
E = 0111 1110B = 126D (in normalized form)
Fraction is 1.000 0000 0000 0000 0000 0001B (with an implicit leading 1) = 1 + 2^-23
The number is -(1 + 2^-23) × 2^(126-127) = -0.500000059604644775390625 (may not be exact in decimal!)
Example 4 (De-Normalized Form): Suppose that IEEE-754 32-bit floating-point representation pattern is 1 00000000 000 0000 0000 0000 0000 0001.
Sign bit S = 1 ⇒ negative number
E = 0 (in de-normalized form)
Fraction is 0.000 0000 0000 0000 0000 0001B (with an implicit leading 0) = 1×2^-23
The number is -2^-23 × 2^(-126) = -2^(-149) ≈ -1.4×10^-45
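These worked examples can be cross-checked with the JDK's Float.intBitsToFloat, which interprets an int bit pattern as an IEEE-754 single-precision value:

```java
public class Ieee754Check {
    public static void main(String[] args) {
        // Example 1: 0 10000000 110 0000 0000 0000 0000 0000
        System.out.println(Float.intBitsToFloat(0b0_10000000_11000000000000000000000)); // 3.5
        // Example 2: 1 01111110 100 0000 0000 0000 0000 0000
        System.out.println(Float.intBitsToFloat(0b1_01111110_10000000000000000000000)); // -0.75
        // Example 4: 1 00000000 000 0000 0000 0000 0000 0001 (de-normalized)
        System.out.println(Float.intBitsToFloat(0b1_00000000_00000000000000000000001)); // -1.4E-45
    }
}
```

The underscores in the binary literals just mark the S, E, and F fields; Java ignores them.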
Exercises (Floating-point Numbers)
1. Compute the largest and smallest positive numbers that can be represented in the 32-bit normalized form.
2. Compute the largest and smallest negative numbers can be represented in the 32-bit normalized form.
3. Repeat (1) for the 32-bit denormalized form.
4. Repeat (2) for the 32-bit denormalized form.
1. Largest positive number: S=0, E=1111 1110 (254), F=111 1111 1111 1111 1111 1111.
Smallest positive number: S=0, E=0000 0001 (1), F=000 0000 0000 0000 0000 0000.
2. Same as above, but S=1.
3. Largest positive number: S=0, E=0, F=111 1111 1111 1111 1111 1111.
Smallest positive number: S=0, E=0, F=000 0000 0000 0000 0000 0001.
4. Same as above, but S=1.
Notes For Java Users
You can use the JDK methods Float.intBitsToFloat(int bits) or Double.longBitsToDouble(long bits) to create a single-precision 32-bit float or a double-precision 64-bit double with a specific bit pattern, and print their values. For example:
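A minimal sketch along those lines, re-creating the -5.5D pattern worked out earlier:

```java
public class FloatBits {
    public static void main(String[] args) {
        // S=1, E=1000 0001, F=011 0...0 - the -5.5D example
        int bits = 0b1_10000001_01100000000000000000000;
        System.out.println(Float.intBitsToFloat(bits));    // prints -5.5

        // The reverse direction: inspect the bit pattern of a float
        System.out.println(Integer.toBinaryString(Float.floatToIntBits(-5.5f)));
        // prints 11000000101100000000000000000000
    }
}
```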
IEEE-754 64-bit Double-Precision Floating-Point Numbers
The representation scheme for 64-bit double-precision is similar to the 32-bit single-precision:
• The most significant bit is the sign bit (S), with 0 for positive numbers and 1 for negative numbers.
• The following 11 bits represent the exponent (E).
• The remaining 52 bits represent the fraction (F).
The value (N) is calculated as follows:
• Normalized form: For 1 ≤ E ≤ 2046, N = (-1)^S × 1.F × 2^(E-1023).
• Denormalized form: For E = 0, N = (-1)^S × 0.F × 2^(-1022). These are in the denormalized form.
• For E = 2047, N represents special values, such as ±INF (infinity), NaN (not a number).
More on Floating-Point Representation
There are three parts in the floating-point representation:
• The sign bit (S) is self-explanatory (0 for positive numbers and 1 for negative numbers).
• For the exponent (E), a so-called bias (or excess) is applied so as to represent both positive and negative exponent. The bias is set at half of the range. For single precision with an 8-bit
exponent, the bias is 127 (or excess-127). For double precision with a 11-bit exponent, the bias is 1023 (or excess-1023).
• The fraction (F) (also called the mantissa or significand) is composed of an implicit leading bit (before the radix point) and the fractional bits (after the radix point). The leading bit for
normalized numbers is 1; while the leading bit for denormalized numbers is 0.
Normalized Floating-Point Numbers
In normalized form, the radix point is placed after the first non-zero digit, e.g., 9.8765D×10^-23D, 1.001011B×2^11B. For binary numbers, the leading bit is always 1, and need not be represented explicitly - this saves 1 bit of storage.
In IEEE 754's normalized form:
• For single-precision, 1 ≤ E ≤ 254 with excess of 127. Hence, the actual exponent is from -126 to +127. Negative exponents are used to represent small numbers (< 1.0); while positive exponents are
used to represent large numbers (> 1.0).
N = (-1)^S × 1.F × 2^(E-127)
• For double-precision, 1 ≤ E ≤ 2046 with excess of 1023. The actual exponent is from -1022 to +1023, and
N = (-1)^S × 1.F × 2^(E-1023)
Take note that an n-bit pattern has a finite number of combinations (=2^n), and so can represent only finitely many distinct numbers. It is not possible to represent the infinitely many numbers on the real axis (even a small range, say 0.0 to 1.0, contains infinitely many numbers). That is, not all floating-point numbers can be accurately represented. Instead, the closest approximation is used, which leads to loss of accuracy.
The minimum and maximum normalized floating-point numbers are:
Precision Normalized N(min) Normalized N(max)
0080 0000H 7F7F FFFFH
0 00000001 00000000000000000000000B 0 11111110 00000000000000000000000B
Single E = 1, F = 0 E = 254, F = 0
N(min) = 1.0B × 2^-126 N(max) = 1.1...1B × 2^127 = (2 - 2^-23) × 2^127
(≈1.17549435 × 10^-38) (≈3.4028235 × 10^38)
0010 0000 0000 0000H 7FEF FFFF FFFF FFFFH
Double N(min) = 1.0B × 2^-1022 N(max) = 1.1...1B × 2^1023 = (2 - 2^-52) × 2^1023
(≈2.2250738585072014 × 10^-308) (≈1.7976931348623157 × 10^308)
Denormalized Floating-Point Numbers
If E = 0, but the fraction is non-zero, then the value is in denormalized form, and a leading bit of 0 is assumed, as follows:
• For single-precision, E = 0,
N = (-1)^S × 0.F × 2^(-126)
• For double-precision, E = 0,
N = (-1)^S × 0.F × 2^(-1022)
Denormalized form can represent very small numbers close to zero, and zero itself, which cannot be represented in normalized form, as shown in the above figure.
The minimum and maximum of denormalized floating-point numbers are:
Precision Denormalized D(min) Denormalized D(max)
0000 0001H 007F FFFFH
0 00000000 00000000000000000000001B 0 00000000 11111111111111111111111B
Single E = 0, F = 00000000000000000000001B E = 0, F = 11111111111111111111111B
D(min) = 0.0...1 × 2^-126 = 1 × 2^-23 × 2^-126 = 2^-149 D(max) = 0.1...1 × 2^-126 = (1-2^-23)×2^-126
(≈1.4 × 10^-45) (≈1.1754942 × 10^-38)
0000 0000 0000 0001H 000F FFFF FFFF FFFFH
Double D(min) = 0.0...1 × 2^-1022 = 1 × 2^-52 × 2^-1022 = 2^-1074 D(max) = 0.1...1 × 2^-1022 = (1-2^-52)×2^-1022
(≈4.9 × 10^-324) (≈2.2250738585072009 × 10^-308)
Special Values
Zero: Zero cannot be represented in the normalized form, and must be represented in denormalized form with E=0 and F=0. There are two representations for zero: +0 with S=0 and -0 with S=1.
Infinity: The value of +infinity (e.g., 1/0) and -infinity (e.g., -1/0) are represented with an exponent of all 1's (E = 255 for single-precision and E = 2047 for double-precision), F=0, and S=0 (for
+INF) and S=1 (for -INF).
Not a Number (NaN): NaN denotes a value that cannot be represented as real number (e.g. 0/0). NaN is represented with Exponent of all 1's (E = 255 for single-precision and E = 2047 for
double-precision) and any non-zero fraction.
Character Encoding
In computer memory, characters are "encoded" (or "represented") using a chosen "character encoding scheme" (aka "character set", "charset", "character map", or "code page").
For example, in ASCII (as well as Latin1, Unicode, and many other character sets):
• code numbers 65D (41H) to 90D (5AH) represent 'A' to 'Z', respectively.
• code numbers 97D (61H) to 122D (7AH) represent 'a' to 'z', respectively.
• code numbers 48D (30H) to 57D (39H) represent '0' to '9', respectively.
It is important to note that the representation scheme must be known before a binary pattern can be interpreted. E.g., the 8-bit pattern "0100 0010B" could represent anything under the sun, known only to the person who encoded it.
The most commonly-used character encoding schemes are: 7-bit ASCII (ISO/IEC 646) and 8-bit Latin-x (ISO/IEC 8859-x) for western European characters, and Unicode (ISO/IEC 10646) for internationalization (i18n).
A 7-bit encoding scheme (such as ASCII) can represent 128 characters and symbols. An 8-bit character encoding scheme (such as Latin-x) can represent 256 characters and symbols; whereas a 16-bit encoding scheme (such as Unicode UCS-2) can represent 65,536 characters and symbols.
7-bit ASCII Code (aka US-ASCII, ISO/IEC 646, ITU-T T.50)
• ASCII (American Standard Code for Information Interchange) is one of the earlier character coding schemes.
• ASCII is originally a 7-bit code. It has been extended to 8-bit to better utilize the 8-bit computer memory organization. (The 8th bit was originally used for parity check in the early computers.)
• Code numbers 32D (20H) to 126D (7EH) are printable (displayable) characters as tabulated:
Hex 0 1 2 3 4 5 6 7 8 9 A B C D E F
2 SP ! " # $ % & ' ( ) * + , - . /
3 0 1 2 3 4 5 6 7 8 9 : ; < = > ?
4 @ A B C D E F G H I J K L M N O
5 P Q R S T U V W X Y Z [ \ ] ^ _
6 ` a b c d e f g h i j k l m n o
7 p q r s t u v w x y z { | } ~
□ Code number 32D (20H) is the blank or space character.
□ '0' to '9': 30H-39H (0011 0000B to 0011 1001B) or (0011 xxxxB, where xxxx is the equivalent integer value)
□ 'A' to 'Z': 41H-5AH (0100 0001B to 0101 1010B) or (010x xxxxB). 'A' to 'Z' are continuous without gap.
□ 'a' to 'z': 61H-7AH (0110 0001B to 0111 1010B) or (011x xxxxB). 'a' to 'z' are also continuous without gap. However, there is a gap between uppercase and lowercase letters. To convert between
upper and lowercase, flip the value of bit-5.
• Code numbers 0D (00H) to 31D (1FH), and 127D (7FH) are special control characters, which are non-printable (non-displayable), as tabulated below. Many of these characters were used in the early
days for transmission control (e.g., STX, ETX) and printer control (e.g., Form-Feed), which are now obsolete. The remaining meaningful codes today are:
□ 09H for Tab ('\t').
□ 0AH for Line-Feed or newline (LF, '\n') and 0DH for Carriage-Return (CR, '\r'), which are used as line delimiters (aka line separator, end-of-line) for text files. There is unfortunately no standard for the line delimiter: Unixes and Mac use 0AH ("\n"), while Windows uses 0D0AH ("\r\n"). Programming languages such as C/C++/Java (which were created on Unix) use 0AH ("\n").
□ In programming languages such as C/C++/Java, line-feed (0AH) is denoted as '\n', carriage-return (0DH) as '\r', tab (09H) as '\t'.
DEC HEX Meaning DEC HEX Meaning
0 00 NUL Null 17 11 DC1 Device Control 1
1 01 SOH Start of Heading 18 12 DC2 Device Control 2
2 02 STX Start of Text 19 13 DC3 Device Control 3
3 03 ETX End of Text 20 14 DC4 Device Control 4
4 04 EOT End of Transmission 21 15 NAK Negative Ack.
5 05 ENQ Enquiry 22 16 SYN Sync. Idle
6 06 ACK Acknowledgment 23 17 ETB End of Trans. Block
7 07 BEL Bell 24 18 CAN Cancel
8 08 BS Back Space '\b' 25 19 EM End of Medium
9 09 HT Horizontal Tab '\t' 26 1A SUB Substitute
10 0A LF Line Feed '\n' 27 1B ESC Escape
11 0B VT Vertical Feed 28 1C IS4 File Separator
12 0C FF Form Feed '\f' 29 1D IS3 Group Separator
13 0D CR Carriage Return '\r' 30 1E IS2 Record Separator
14 0E SO Shift Out 31 1F IS1 Unit Separator
15 0F SI Shift In
16 10 DLE Datalink Escape 127 7F DEL Delete
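The "flip bit-5" rule for case conversion can be checked directly in Java, since char values for ASCII letters coincide with their code numbers:

```java
public class CaseFlip {
    public static void main(String[] args) {
        // Upper- and lowercase letters differ only in bit-5 (20H)
        System.out.println((char) ('a' ^ 0x20));   // prints A
        System.out.println((char) ('Z' ^ 0x20));   // prints z
        // Digits: subtract 30H ('0') to get the integer value
        System.out.println('7' - '0');             // prints 7
    }
}
```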
8-bit Latin-1 (aka ISO/IEC 8859-1)
ISO/IEC-8859 is a collection of 8-bit character encoding standards for the western languages.
ISO/IEC 8859-1, aka Latin alphabet No. 1, or Latin-1 in short, is the most commonly-used encoding scheme for western European languages. It has 191 printable characters from the Latin script, which covers languages like English, German, Italian, Portuguese and Spanish. Latin-1 is backward compatible with the 7-bit US-ASCII code. That is, the first 128 characters in Latin-1 (code numbers 0 to 127 (7FH)) are the same as US-ASCII. Code numbers 128 (80H) to 159 (9FH) are not assigned. Code numbers 160 (A0H) to 255 (FFH) are given as follows:
Hex 0 1 2 3 4 5 6 7 8 9 A B C D E F
A NBSP ¡ ¢ £ ¤ ¥ ¦ § ¨ © ª « ¬ SHY ® ¯
B ° ± ² ³ ´ µ ¶ · ¸ ¹ º » ¼ ½ ¾ ¿
C À Á Â Ã Ä Å Æ Ç È É Ê Ë Ì Í Î Ï
D Ð Ñ Ò Ó Ô Õ Ö × Ø Ù Ú Û Ü Ý Þ ß
E à á â ã ä å æ ç è é ê ë ì í î ï
F ð ñ ò ó ô õ ö ÷ ø ù ú û ü ý þ ÿ
ISO/IEC-8859 has 16 parts. Besides the most commonly-used Part 1, Part 2 is meant for Central European (Polish, Czech, Hungarian, etc.), Part 3 for South European (Turkish, etc.), Part 4 for North European (Estonian, Latvian, etc.), Part 5 for Cyrillic, Part 6 for Arabic, Part 7 for Greek, Part 8 for Hebrew, Part 9 for Turkish, Part 10 for Nordic, Part 11 for Thai, Part 12 was abandoned, Part 13 for Baltic Rim, Part 14 for Celtic, Part 15 for French, Finnish, etc., and Part 16 for South-Eastern European.
Other 8-bit Extension of US-ASCII (ASCII Extensions)
Besides the standardized ISO-8859-x, there are many 8-bit ASCII extensions, which are not compatible with each other.
ANSI (American National Standards Institute) (aka Windows-1252, or Windows Codepage 1252): for Latin alphabets used in the legacy DOS/Windows systems. It is a superset of ISO-8859-1 with code numbers
128 (80H) to 159 (9FH) assigned to displayable characters, such as "smart" single-quotes and double-quotes. A common problem in web browsers is that all the quotes and apostrophes (produced by "smart
quotes" in some Microsoft software) were replaced with question marks or some strange symbols. This is because the document was labeled as ISO-8859-1 (instead of Windows-1252), where these code numbers are undefined. Most modern browsers and e-mail clients treat charset ISO-8859-1 as Windows-1252 in order to accommodate such mis-labeling.
Hex 0 1 2 3 4 5 6 7 8 9 A B C D E F
8 € ‚ ƒ „ … † ‡ ˆ ‰ Š ‹ Œ Ž
9 ‘ ’ “ ” • – — ™ š › œ ž Ÿ
EBCDIC (Extended Binary Coded Decimal Interchange Code): Used in the early IBM computers.
Unicode (aka ISO/IEC 10646 Universal Character Set)
Before Unicode, no single character encoding scheme could represent characters in all languages. For example, western European languages use several encoding schemes (in the ISO-8859-x family). Even a single language like Chinese has a few encoding schemes (GB2312/GBK, BIG5). Many encoding schemes are in conflict with each other, i.e., the same code number is assigned to different characters.
Unicode aims to provide a standard character encoding scheme, which is universal, efficient, uniform and unambiguous. Unicode standard is maintained by a non-profit organization called the Unicode
Consortium (@ www.unicode.org). Unicode is an ISO/IEC standard 10646.
Unicode is backward compatible with the 7-bit US-ASCII and 8-bit Latin-1 (ISO-8859-1). That is, the first 128 characters are the same as US-ASCII; and the first 256 characters are the same as Latin-1.
Unicode originally uses 16 bits (called UCS-2 or Unicode Character Set - 2 byte), which can represent up to 65,536 characters. It has since been expanded to more than 16 bits, currently stands at 21
bits. The range of the legal codes in ISO/IEC 10646 is now from U+0000H to U+10FFFFH (21 bits or about 2 million characters), covering all current and ancient historical scripts. The original 16-bit
range of U+0000H to U+FFFFH (65536 characters) is known as Basic Multilingual Plane (BMP), covering all the major languages in use currently. The characters outside BMP are called Supplementary
Characters, which are not frequently-used.
Unicode has two encoding schemes:
• UCS-2 (Universal Character Set - 2 Byte): Uses 2 bytes (16 bits), covering 65,536 characters in the BMP. BMP is sufficient for most of the applications. UCS-2 is now obsolete.
• UCS-4 (Universal Character Set - 4 Byte): Uses 4 bytes (32 bits), covering BMP and the supplementary characters.
UTF-8 (Unicode Transformation Format - 8-bit)
The 16/32-bit Unicode (UCS-2/4) is grossly inefficient if the document contains mainly ASCII characters, because each character occupies two bytes of storage. Variable-length encoding schemes, such
as UTF-8, which uses 1-4 bytes to represent a character, were devised to improve the efficiency. In UTF-8, the 128 commonly-used US-ASCII characters use only 1 byte, but some less-commonly-used characters may require up to 4 bytes. Overall, efficiency is improved for documents containing mainly US-ASCII text.
The transformation between Unicode and UTF-8 is as follows:
Bits Unicode UTF-8 Code Bytes
7 00000000 0xxxxxxx 0xxxxxxx 1 (ASCII)
11 00000yyy yyxxxxxx 110yyyyy 10xxxxxx 2
16 zzzzyyyy yyxxxxxx 1110zzzz 10yyyyyy 10xxxxxx 3
21 000uuuuu zzzzyyyy yyxxxxxx 11110uuu 10uuzzzz 10yyyyyy 10xxxxxx 4
In UTF-8, Unicode numbers corresponding to the 7-bit ASCII characters are padded with a leading zero; thus has the same value as ASCII. Hence, UTF-8 can be used with all software using ASCII. Unicode
numbers of 128 and above, which are less frequently used, are encoded using more bytes (2-4 bytes). UTF-8 generally requires less storage and is compatible with ASCII. The drawback of UTF-8 is more
processing power needed to unpack the code due to its variable length. UTF-8 is the most popular format for Unicode.
• UTF-8 uses 1-3 bytes for the characters in BMP (16-bit), and 4 bytes for supplementary characters outside BMP (21-bit).
• The 128 ASCII characters (basic Latin letters, digits, and punctuation signs) use one byte. Most European and Middle East characters use a 2-byte sequence, which includes extended Latin letters
(with tilde, macron, acute, grave and other accents), Greek, Armenian, Hebrew, Arabic, and others. Chinese, Japanese and Korean (CJK) use three-byte sequences.
• All the bytes, except the 128 ASCII characters, have a leading '1' bit. In other words, the ASCII bytes, with a leading '0' bit, can be identified and decoded easily.
Example: 您好 (Unicode: 60A8H 597DH)
Unicode (UCS-2) is 60A8H = 0110 000010 101000B (grouped as zzzz yyyyyy xxxxxx)
⇒ UTF-8 is 11100110 10000010 10101000B = E6 82 A8H
Unicode (UCS-2) is 597DH = 0101 100101 111101B (grouped as zzzz yyyyyy xxxxxx)
⇒ UTF-8 is 11100101 10100101 10111101B = E5 A5 BDH
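The 3-byte transformation above can be coded directly with bit operations. The sketch below (the class and method names are mine, not from any library) packs a BMP code point in the range 0800H-FFFFH into the 1110zzzz 10yyyyyy 10xxxxxx layout:

```java
public class Utf8Encode {
   // Encode a BMP code point (0800H-FFFFH) into its 3-byte UTF-8 sequence
   // using the bit layout 1110zzzz 10yyyyyy 10xxxxxx.
   public static byte[] encode3(int cp) {
      return new byte[] {
         (byte) (0xE0 | (cp >> 12)),          // 1110zzzz
         (byte) (0x80 | ((cp >> 6) & 0x3F)),  // 10yyyyyy
         (byte) (0x80 | (cp & 0x3F))          // 10xxxxxx
      };
   }

   public static void main(String[] args) {
      for (byte b : encode3(0x60A8)) {   // 您
         System.out.printf("%02X ", b);  // E6 82 A8
      }
      System.out.println();
   }
}
```

You can cross-check the result against the library: "\u60A8".getBytes(StandardCharsets.UTF_8) produces the same three bytes.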
UTF-16 (Unicode Transformation Format - 16-bit)
UTF-16 is a variable-length Unicode character encoding scheme, which uses 2 to 4 bytes. UTF-16 is not commonly used. The transformation table is as follows:
Unicode UTF-16 Code Bytes
xxxxxxxx xxxxxxxx Same as UCS-2 - no encoding 2
000uuuuu zzzzyyyy yyxxxxxx 110110ww wwzzzzyy 110111yy yyxxxxxx 4
(uuuuu≠0) (wwww = uuuuu - 1)
Take note that for the 65536 characters in the BMP, UTF-16 is the same as UCS-2 (2 bytes per character). A supplementary character outside the BMP requires 4 bytes: a pair of 16-bit values, the first from the high-surrogates range (\uD800-\uDBFF), the second from the low-surrogates range (\uDC00-\uDFFF).
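The surrogate formula in the table can be checked with a few lines of Java. The code point U+1F600 used here is just an arbitrary supplementary character; Character.toChars is the library routine that performs the same transformation:

```java
public class SurrogatePair {
   public static void main(String[] args) {
      int cp = 0x1F600;               // a supplementary character (outside BMP)
      int v = cp - 0x10000;           // the 20-bit value wwwwzzzzyyyyyyxxxxxx
      int high = 0xD800 | (v >> 10);  // high surrogate: 110110ww wwzzzzyy
      int low  = 0xDC00 | (v & 0x3FF);// low surrogate:  110111yy yyxxxxxx
      System.out.printf("%04X %04X%n", high, low);   // D83D DE00

      // Cross-check with the library routine
      char[] pair = Character.toChars(cp);
      System.out.printf("%04X %04X%n", (int) pair[0], (int) pair[1]);
   }
}
```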
UTF-32 (Unicode Transformation Format - 32-bit)
Same as UCS-4, which uses 4 bytes for each character - unencoded.
Formats of Multi-Byte (e.g., Unicode) Text Files
Endianess (or byte-order): For a multi-byte character, you need to take care of the order of the bytes in storage. In big endian, the most significant byte is stored at the memory location with the
lowest address (big byte first). In little endian, the most significant byte is stored at the memory location with the highest address (little byte first). For example, 您 (with Unicode number of
60A8H) is stored as 60 A8 in big endian; and stored as A8 60 in little endian. Big endian, which produces a more readable hex dump, is more commonly-used, and is often the default.
BOM (Byte Order Mark): BOM is a special Unicode character having code number of FEFFH, which is used to differentiate big-endian and little-endian. For big-endian, BOM appears as FE FFH in the
storage. For little-endian, BOM appears as FF FEH. Unicode reserves these two code numbers to prevent it from crashing with another character.
Unicode text files could take on these formats:
• Big Endian: UCS-2BE, UTF-16BE, UTF-32BE.
• Little Endian: UCS-2LE, UTF-16LE, UTF-32LE.
• UTF-16 with BOM. The first character of the file is a BOM character, which specifies the endianess. For big-endian, BOM appears as FE FFH in the storage. For little-endian, BOM appears as FF FEH.
A UTF-8 file is byte-oriented, so endianess does not apply and BOM plays no part in byte ordering. However, some systems (in particular Windows) add a BOM as the first character in a UTF-8 file as a signature to identify the file as UTF-8 encoded. The BOM character (FEFFH) is encoded in UTF-8 as EF BB BF. Adding a BOM as the first character of the file is not recommended, as it may be incorrectly interpreted in other systems. You can have a UTF-8 file without BOM.
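A simple BOM sniffer can be sketched as below (the class and method names are mine; since a file may legitimately carry no BOM at all, this is a heuristic, not a reliable detector):

```java
import java.nio.charset.StandardCharsets;

public class BomCheck {
   // Guess the Unicode format of a byte stream from its first bytes.
   public static String sniff(byte[] b) {
      if (b.length >= 3 && (b[0] & 0xFF) == 0xEF && (b[1] & 0xFF) == 0xBB && (b[2] & 0xFF) == 0xBF)
         return "UTF-8 with BOM";
      if (b.length >= 2 && (b[0] & 0xFF) == 0xFE && (b[1] & 0xFF) == 0xFF)
         return "UTF-16 big-endian";
      if (b.length >= 2 && (b[0] & 0xFF) == 0xFF && (b[1] & 0xFF) == 0xFE)
         return "UTF-16 little-endian";
      return "no BOM";
   }

   public static void main(String[] args) {
      // Java's "UTF-16" charset writes a big-endian BOM when encoding
      byte[] utf16 = "A".getBytes(StandardCharsets.UTF_16);
      System.out.println(sniff(utf16));   // UTF-16 big-endian
   }
}
```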
Formats of Text Files
Line Delimiter or End-Of-Line (EOL): Sometimes, when you use the Windows NotePad to open a text file (created in Unix or Mac), all the lines are joined together. This is because different operating
platforms use different character as the so-called line delimiter (or end-of-line or EOL). Two non-printable control characters are involved: 0AH (Line-Feed or LF) and 0DH (Carriage-Return or CR).
• Windows/DOS uses 0D0AH (CR+LF, "\r\n") as EOL.
• Unixes use 0AH (LF, "\n") only.
• Mac uses 0DH (CR, "\r") only.
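In Java, all three conventions can be normalized to LF with one regular expression (toUnix is a hypothetical helper of mine, not a library method):

```java
public class NormalizeEol {
   // Convert Windows (CR+LF) and old-Mac (CR) line delimiters to Unix (LF).
   // The regex matches a CR optionally followed by a LF.
   public static String toUnix(String s) {
      return s.replaceAll("\r\n?", "\n");
   }

   public static void main(String[] args) {
      String windows = "line1\r\nline2\r\n";
      String oldMac  = "line1\rline2\r";
      System.out.println(toUnix(windows).equals("line1\nline2\n"));  // true
      System.out.println(toUnix(oldMac).equals("line1\nline2\n"));   // true
   }
}
```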
End-of-File (EOF): [TODO]
Windows' CMD Codepage
Character encoding scheme (charset) in Windows is called codepage. In CMD shell, you can issue command "chcp" to display the current codepage, or "chcp codepage-number" to change the codepage.
Take note that:
• The default codepage 437 (used in the original DOS) is an 8-bit character set called Extended ASCII, which is different from Latin-1 for code numbers above 127.
• Codepage 1252 (Windows-1252) is not exactly the same as Latin-1. It assigns code numbers 80H to 9FH to letters and punctuation, such as smart single-quotes and double-quotes. The common browser problem of quotes and apostrophes showing as question marks or boxes occurs because the page is actually Windows-1252 but mislabelled as ISO-8859-1.
• For internationalization and chinese character set: codepage 65001 for UTF8, codepage 1201 for UCS-2BE, codepage 1200 for UCS-2LE, codepage 936 for chinese characters in GB2312, codepage 950 for
chinese characters in Big5.
Chinese Character Sets
Unicode supports all languages, including asian languages like Chinese (both simplified and traditional characters), Japanese and Korean (collectively called CJK). There are more than 20,000 CJK
characters in Unicode. Unicode characters are often encoded in the UTF-8 scheme, which unfortunately, requires 3 bytes for each CJK character, instead of 2 bytes in the unencoded UCS-2 (UTF-16).
Worse still, there are also various chinese character sets, which are not compatible with Unicode:
• GB2312/GBK: for simplified chinese characters. GB2312 uses 2 bytes for each chinese character. The most significant bit (MSB) of both bytes are set to 1 to co-exist with 7-bit ASCII with the MSB
of 0. There are about 6700 characters. GBK is an extension of GB2312, which include more characters as well as traditional chinese characters.
• BIG5: for traditional chinese characters. BIG5 also uses 2 bytes for each chinese character, with the most significant bit of both bytes set to 1. BIG5 is not compatible with GBK, i.e., the same code number is assigned to different characters.
For example, the world is made more interesting with these many standards:
              Standard  Characters  Codes
Simplified    GB2312    和谐        BACD D0B3
              UCS-2     和谐        548C 8C10
              UTF-8     和谐        E5928C E8B090
Traditional   BIG5      和諧        A94D BFD3
              UCS-2     和諧        548C 8AE7
              UTF-8     和諧        E5928C E8ABA7
Notes for Windows' CMD Users: To display the chinese character correctly in CMD shell, you need to choose the correct codepage, e.g., 65001 for UTF8, 936 for GB2312/GBK, 950 for Big5, 1201 for
UCS-2BE, 1200 for UCS-2LE, 437 for the original DOS. You can use command "chcp" to display the current code page and command "chcp codepage_number" to change the codepage. You also have to choose a
font that can display the characters (e.g., Courier New, Consolas or Lucida Console, NOT Raster font).
Collating Sequences (for Ranking Characters)
A string consists of a sequence of characters in upper or lower cases, e.g., "apple", "BOY", "Cat". In sorting or comparing strings, if we order the characters according to the underlying code
numbers (e.g., US-ASCII) character-by-character, the order for the example would be "BOY", "apple", "Cat" because uppercase letters have a smaller code number than lowercase letters. This does not
agree with the so-called dictionary order, where the same uppercase and lowercase letters have the same rank. Another common problem in ordering strings is that "10" (ten) is ordered in front of "2" to "9", because strings are compared character-by-character.
Hence, in sorting or comparison of strings, a so-called collating sequence (or collation) is often defined, which specifies the ranks for letters (uppercase, lowercase), numbers, and special symbols.
There are many collating sequences available. It is entirely up to you to choose a collating sequence to meet your application's specific requirements. Some case-insensitive dictionary-order
collating sequences have the same rank for same uppercase and lowercase letters, i.e., 'A', 'a' ⇒ 'B', 'b' ⇒ ... ⇒ 'Z', 'z'. Some case-sensitive dictionary-order collating sequences put the uppercase
letter before its lowercase counterpart, i.e., 'A' ⇒'B' ⇒ 'C'... ⇒ 'a' ⇒ 'b' ⇒ 'c'.... Typically, space is ranked before digits '0' to '9', followed by the alphabets.
Collating sequence is often language dependent, as different languages use different sets of characters (e.g., á, é, a, α) with their own orders.
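In Java, the default String order is the code-number order, while java.text.Collator provides a locale-dependent collating sequence. The sketch below sorts the same words both ways:

```java
import java.text.Collator;
import java.util.Arrays;
import java.util.Locale;

public class CollationDemo {
   public static void main(String[] args) {
      String[] words = {"BOY", "apple", "Cat"};

      // Code-number order: all uppercase letters rank before lowercase ones
      String[] byCode = words.clone();
      Arrays.sort(byCode);
      System.out.println(Arrays.toString(byCode));   // [BOY, Cat, apple]

      // Dictionary order via a locale-aware collating sequence
      String[] byDict = words.clone();
      Arrays.sort(byDict, Collator.getInstance(Locale.ENGLISH));
      System.out.println(Arrays.toString(byDict));   // [apple, BOY, Cat]
   }
}
```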
For Java Programmers - java.nio.charset.Charset
JDK 1.4 introduced a new java.nio.charset package to support encoding/decoding of characters from the UCS-2 used internally in Java programs to any supported charset used by external devices.
Example: The following program encodes some Unicode texts in various encoding scheme, and display the Hex codes of the encoded byte sequences.
import java.nio.ByteBuffer;
import java.nio.charset.Charset;

public class TestCharsetEncodeDecode {
   public static void main(String[] args) {
      // Try these charsets for encoding
      String[] charsetNames = {"US-ASCII", "ISO-8859-1", "UTF-8", "UTF-16",
                               "UTF-16BE", "UTF-16LE", "GBK", "BIG5"};
      String message = "Hi,您好!";   // message with non-ASCII characters

      // Print the internal UCS-2 representation in hex codes
      System.out.printf("%10s: ", "UCS-2");
      for (int i = 0; i < message.length(); i++) {
         System.out.printf("%04X ", (int) message.charAt(i));
      }
      System.out.println();

      for (String charsetName : charsetNames) {
         // Get a Charset instance given the charset name string
         Charset charset = Charset.forName(charsetName);
         System.out.printf("%10s: ", charset.name());
         // Encode the Unicode UCS-2 characters into a byte sequence in this charset.
         ByteBuffer bb = charset.encode(message);
         while (bb.hasRemaining()) {
            System.out.printf("%02X ", bb.get());   // Print hex code
         }
         System.out.println();
      }
   }
}
UCS-2: 0048 0069 002C 60A8 597D 0021 [16-bit fixed-length]
H i , 您 好 !
US-ASCII: 48 69 2C 3F 3F 21 [8-bit fixed-length]
H i , ? ? !
ISO-8859-1: 48 69 2C 3F 3F 21 [8-bit fixed-length]
H i , ? ? !
UTF-8: 48 69 2C E6 82 A8 E5 A5 BD 21 [1-4 bytes variable-length]
H i , 您 好 !
UTF-16: FE FF 00 48 00 69 00 2C 60 A8 59 7D 00 21 [2-4 bytes variable-length]
BOM H i , 您 好 ! [Byte-Order-Mark indicates Big-Endian]
UTF-16BE: 00 48 00 69 00 2C 60 A8 59 7D 00 21 [2-4 bytes variable-length]
H i , 您 好 !
UTF-16LE: 48 00 69 00 2C 00 A8 60 7D 59 21 00 [2-4 bytes variable-length]
H i , 您 好 !
GBK: 48 69 2C C4 FA BA C3 21 [1-2 bytes variable-length]
H i , 您 好 !
Big5: 48 69 2C B1 7A A6 6E 21 [1-2 bytes variable-length]
H i , 您 好 !
For Java Programmers - char and String
The char data type is based on the original 16-bit Unicode standard called UCS-2. Unicode has since evolved to 21 bits, with code range of U+0000 to U+10FFFF. The set of characters from U+0000
to U+FFFF is known as the Basic Multilingual Plane (BMP). Characters above U+FFFF are called supplementary characters. A 16-bit Java char cannot hold a supplementary character.
Recall that in the UTF-16 encoding scheme, a BMP character uses 2 bytes (the same as UCS-2), while a supplementary character uses 4 bytes: a pair of 16-bit values, the first from the high-surrogates range (\uD800-\uDBFF), the second from the low-surrogates range (\uDC00-\uDFFF).
In Java, a String is a sequence of Unicode characters. Java, in fact, uses UTF-16 for String and StringBuffer. For BMP characters, they are the same as UCS-2. For supplementary characters, each character requires a pair of char values.
Java methods that accept a 16-bit char value do not support supplementary characters. Methods that accept a 32-bit int value support all Unicode characters (in the lower 21 bits), including
supplementary characters.
This is meant to be an academic discussion. I have yet to encounter the use of supplementary characters!
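Even so, String does provide code-point-aware methods for the curious. The sketch below builds a String from an arbitrary supplementary code point (U+1F600) and shows the difference between char units and actual characters:

```java
public class SupplementaryDemo {
   public static void main(String[] args) {
      // Build a String from a supplementary character (outside the BMP)
      String s = new String(Character.toChars(0x1F600));

      System.out.println(s.length());                      // 2 (char units: a surrogate pair)
      System.out.println(s.codePointCount(0, s.length())); // 1 (actual characters)
      System.out.printf("%X%n", s.codePointAt(0));         // 1F600
   }
}
```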
Displaying Hex Values & Hex Editors
At times, you may need to display the hex values of a file, especially in dealing with Unicode characters. A Hex Editor is a handy tool that a good programmer should possess in his/her toolbox. There
are many freeware/shareware Hex Editors available. Try googling "Hex Editor".
I used the followings:
• NotePad++ with Hex Editor Plug-in: Open-source and free. You can toggle between Hex view and Normal view by pushing the "H" button.
• PSPad: Freeware. You can toggle to Hex view by choosing "View" menu and select "Hex Edit Mode".
• TextPad: Shareware without expiration period. To view the Hex value, you need to "open" the file by choosing the file format of "binary" (??).
• UltraEdit: Shareware, not free, 30-day trial only.
Let me know if you have a better choice, which is fast to launch, easy to use, can toggle between Hex and normal view, free, ....
The following Java program can be used to display hex code for Java Primitives (integer, character and floating-point):
public class PrintHexCode {

   public static void main(String[] args) {
      int i = 12345;
      System.out.println("Decimal is " + i);                          // 12345
      System.out.println("Hex is " + Integer.toHexString(i));         // 3039
      System.out.println("Binary is " + Integer.toBinaryString(i));   // 11000000111001
      System.out.println("Octal is " + Integer.toOctalString(i));     // 30071
      System.out.printf("Hex is %x\n", i);                            // 3039
      System.out.printf("Octal is %o\n", i);                          // 30071

      char c = 'a';
      System.out.println("Character is " + c);       // a
      System.out.printf("Character is %c\n", c);     // a
      System.out.printf("Hex is %x\n", (short)c);    // 61
      System.out.printf("Decimal is %d\n", (short)c);// 97

      float f = 3.5f;
      System.out.println("Decimal is " + f);         // 3.5
      System.out.println(Float.toHexString(f));      // 0x1.cp1 (Fraction=1.c, Exponent=1)

      f = -0.75f;
      System.out.println("Decimal is " + f);         // -0.75
      System.out.println(Float.toHexString(f));      // -0x1.8p-1 (F=-1.8, E=-1)

      double d = 11.22;
      System.out.println("Decimal is " + d);         // 11.22
      System.out.println(Double.toHexString(d));     // 0x1.670a3d70a3d71p3 (F=1.670a3d70a3d71 E=3)
   }
}
In Eclipse, you can view the hex code for integer primitive Java variables in debug mode as follows: In debug perspective, "Variable" panel ⇒ Select the "menu" (inverted triangle) ⇒ Java ⇒ Java
Preferences... ⇒ Primitive Display Options ⇒ Check "Display hexadecimal values (byte, short, char, int, long)".
Summary - Why Bother about Data Representation?
Integer number 1, floating-point number 1.0, character symbol '1', and string "1" are totally different inside the computer memory. You need to know the difference to write good and high-performance programs.
• In 8-bit signed integer, integer number 1 is represented as 00000001B.
• In 8-bit unsigned integer, integer number 1 is represented as 00000001B.
• In 16-bit signed integer, integer number 1 is represented as 00000000 00000001B.
• In 32-bit signed integer, integer number 1 is represented as 00000000 00000000 00000000 00000001B.
• In 32-bit floating-point representation, number 1.0 is represented as 0 01111111 0000000 00000000 00000000B, i.e., S=0, E=127, F=0.
• In 64-bit floating-point representation, number 1.0 is represented as 0 01111111111 0000 00000000 00000000 00000000 00000000 00000000 00000000B, i.e., S=0, E=1023, F=0.
• In 8-bit Latin-1, the character symbol '1' is represented as 00110001B (or 31H).
• In 16-bit UCS-2, the character symbol '1' is represented as 00000000 00110001B.
• In UTF-8, the character symbol '1' is represented as 00110001B.
If you "add" a 16-bit signed integer 1 and Latin-1 character '1' or a string "1", you could get a surprise.
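In Java the surprise looks like this:

```java
public class AddSurprise {
   public static void main(String[] args) {
      // '1' is the character with code number 49 (31H), not the integer 1
      System.out.println(1 + '1');   // 50  (int + char -> integer arithmetic)
      System.out.println("1" + 1);   // 11  (string concatenation)
      System.out.println(1 + 1.0);   // 2.0 (int promoted to double)
   }
}
```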
Exercises (Data Representation)
For the following 16-bit codes:
0000 0000 0010 1010;
1000 0000 0010 1010;
Give their values, if they are representing:
1. a 16-bit unsigned integer;
2. a 16-bit signed integer;
3. two 8-bit unsigned integers;
4. two 8-bit signed integers;
5. a 16-bit Unicode characters;
6. two 8-bit ISO-8859-1 characters.
Ans: (1) 42, 32810; (2) 42, -32726; (3) 0, 42; 128, 42; (4) 0, 42; -128, 42; (5) '*'; '耪'; (6) NUL, '*'; PAD, '*'.
1. (Floating-Point Number Specification) IEEE 754 (1985), "IEEE Standard for Binary Floating-Point Arithmetic".
2. (ASCII Specification) ISO/IEC 646 (1991) (or ITU-T T.50-1992), "Information technology - 7-bit coded character set for information interchange".
3. (Latin-I Specification) ISO/IEC 8859-1, "Information technology - 8-bit single-byte coded graphic character sets - Part 1: Latin alphabet No. 1".
4. (Unicode Specification) ISO/IEC 10646, "Information technology - Universal Multiple-Octet Coded Character Set (UCS)".
5. Unicode Consortium @ http://www.unicode.org.