Esoteric Boxscore of the Day
This one's a doozy. When I discovered this gem, I couldn't believe my eyes...that it could really have happened.
August 15, 1989: Seattle 2, Texas 0
TEX 0 0 0 0 0 0 0 0 0 - 0 13 1
SEA 0 0 0 0 0 1 1 0 x - 2 1 0
Today's Boxscore
How could it be possible to throw a one-hitter, bang out 13 hits...and lose by two runs?!
To put it in perspective, consider this. Both teams achieved rare feats. In the 35 years I have complete records for (1969-2004), over 75,000 games have been played. In them, a team has scored two or more runs on one or fewer hits 36 times, or about once every 2,100 games; a team has been shut out on as many as 13 hits only four times, or around once every 19,000 games. So, for these two to happen in the same game is unbelievable! In fact, the probability of this happening even once in 35 years is less than 1 in 500!! Wow. No, double wow. Here's the story.
The tough luck loser of this one was Charlie Hough. It isn't easy to pitch a one-hitter and lose--especially when that one hit was a single. Though walking five, throwing a wild pitch, and committing a balk helps a little. Even so, all he needed was just a little help from his friends to win his best start of the year.
No such luck. His teammates let him down when it counted. Rangers hitters established a strange pattern throughout the game, getting a baserunner in every inning, including two or more baserunners in several innings. But the hits and walks were scattered so perfectly by Seattle pitcher Brian Holman that none of the 15 who got on base came around to score. Their typical inning went: out, out, hit, hit, out; or hit, out, out, hit, out. They would threaten with two outs and then fail to come through. Check out their situational hitting:
• Bases empty, 2-out: 5-for-5
• Runner on first, 2-out: 3-for-6
• Runner in scoring position, 2-out: 0-for-6
• No runner in scoring position: 13-for-31 (.419)
• Runner in scoring position: 0-for-8
Pretty pathetic. Hough must have been huffing around the dugout after watching the bottom fall out of every inning.
As for Seattle, they managed to score on two unusual rallies (with a lot of help from Texas). Check it out:
• Sixth Inning: single, balk, intentional walk, wild pitch, sacrifice fly
• Seventh Inning: walk, stolen base, 3-base error
That sixth inning sequence must be one-of-a-kind! Believe it or not, they almost scored in the first inning as well, after loading the bases on three walks (with a passed ball in the mix, too).
But they apparently used up all their luck for a while, because this was their last win for two weeks...immediately after this improbable win, the Mariners began a 12-game losing streak!
Crazy stuff!
• The Texas 2-3-4 hitters went 8-for-14, including 4-for-5 by Rafael Palmeiro, hitting from the two-hole.
• I thought it was pretty unusual for a team to commit a balk, a wild pitch, a passed ball, and an error in the same game, but it's actually not that uncommon--it has been done in 146 other games as well.
• Holman, who gave up the first 10 hits (and no runs), also allowed 10 hits in each of the starts before and after this one (both eight-plus innings), but allowed a total of nine runs in those games!
• Incredibly, there were two 13-hit shutouts last month (on 8/13 and 8/31), after there being none in the previous 12 years and only four in the previous 35.
• There have been two games in which a team was no-hit but still scored at least two runs. The no-hit team has won both of those games!
• This happens to be the same day that Dave Dravecky broke his arm, ending his career.
4 Comments:
• Nice post.
I think there's a slight error though - nothing serious. At the end of the first paragraph after the boxscore, you write that "It's worse than 1-in-500 chance." Isn't it more like 1 in 500,000?
By Satchmo, at 12:58 PM
• The way I thought about it was this (using rounded numbers): You have 75,000 white marbles (games). First, you pick 36 at random, color them blue, and put them back in. That means that the
probability of choosing a blue marble is around 1 in 2,100. Then, blindly, you pick out 4 marbles. The chance of picking one of the blue marbles (meaning, the same game) is approximately 4 in 2,100, or 1 in 525. Sorry not to be more mathematical, but this is not the ideal space!
By , at 4:34 PM
• The chance of it happening in a specific game is about 1 in 40,000,000. The chance that it happens over the course of 75,000 games is about 1 in 525. You didn't really make it clear that these
were the odds you were listing. Personally, I think the 1 in 40 million odds sounds more impressive.
By , at 4:53 PM
• You're right. The implied question I was answering was: "Over the past 35 years, what is the probability that this kind of game occurred?" One in 40,000,000 is the answer to the question: "If I
pick a game at random from the past 35 years, what is the probability that it would be a game like this?" I've edited the post to be more clear.
By , at 11:20 PM
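For anyone who wants to double-check the arithmetic in the comments, here's a quick back-of-the-envelope calculation using the rounded figures from the post (a sketch, assuming the two feats occur independently):

```python
games = 75_000
p_two_runs_one_hit = 36 / games   # about 1 in 2,100
p_shutout_13_hits = 4 / games     # about 1 in 19,000

# Chance that one specific game is both at once:
p_single = p_two_runs_one_hit * p_shutout_13_hits
print(f"1 in {1 / p_single:,.0f}")   # roughly 1 in 39,000,000

# Chance it happens at least once over all 75,000 games:
p_ever = 1 - (1 - p_single) ** games
print(f"1 in {1 / p_ever:,.0f}")     # roughly 1 in 520 (the comments' 1 in 525)
```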
|
{"url":"http://esotericboxscore.blogspot.com/2005/09/huh.html","timestamp":"2014-04-21T04:32:42Z","content_type":null,"content_length":"25767","record_id":"<urn:uuid:7f1de9b8-8336-4c45-81a8-bd4881e46f10>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00396-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Fundamentals of Algebra
Chapter 9, Lesson 9
9-9.A recognize positions of sides and angles in triangles
For any triangle KLM, name:
a. the side that is opposite angle KLM.
b. the angle whose sides are KL and KM.
9-9.B classify triangles by lengths of sides
Replace each __?__ with equilateral, scalene, or isosceles.
a. A triangle with no congruent sides is called __?__ .
b. A triangle with all sides congruent is called __?__ .
|
{"url":"http://www.sadlier-oxford.com/math/mc_prerequisite.cfm?sp=family&grade=7&id=1930","timestamp":"2014-04-20T10:48:21Z","content_type":null,"content_length":"15362","record_id":"<urn:uuid:5b02e74d-516c-45f0-91d3-f3dc5a9f661c>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00080-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Double integral of piecewise function
1. The problem statement, all variables and given/known data
Let f(x,y)= 1 if x is rational, 2*y if x is irrational
Compute both double integrals of f(x,y) over [0,1]x[0,1]
2. Relevant equations
3. The attempt at a solution
I'm tempted to say that we can do the dydx integral, since when x is rational, integrating over y gives ∫ 1 dy = 1 (a unit square), and when x is irrational, integrating over y gives ∫ 2y dy = 1 (a triangle of area 1), so when we integrate over x, we just get 1. Then, since f(x,y) is bounded, the other integral has the same value.
Is this reasoning any good at all, or am I just crazy?
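For what it's worth, here is the inner-integral arithmetic spelled out (a sketch; for fixed y ≠ 1/2 the integrand is not Riemann integrable in x, so the dx direction is handled with upper and lower integrals):

$$\int_0^1 f(x,y)\,dy = \begin{cases} \int_0^1 1\,dy = 1, & x \text{ rational} \\ \int_0^1 2y\,dy = 1, & x \text{ irrational} \end{cases} \quad\Rightarrow\quad \int_0^1\!\!\int_0^1 f\,dy\,dx = 1,$$

$$\overline{\int_0^1} f(x,y)\,dx = \max(1,2y), \qquad \underline{\int_0^1} f(x,y)\,dx = \min(1,2y),$$

so the upper and lower iterated integrals in the dx dy order are $\int_0^1 \max(1,2y)\,dy = 5/4$ and $\int_0^1 \min(1,2y)\,dy = 3/4$.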
|
{"url":"http://www.physicsforums.com/showthread.php?t=297373","timestamp":"2014-04-18T03:09:29Z","content_type":null,"content_length":"26857","record_id":"<urn:uuid:81daf03d-cce0-4544-baf4-ff9978b4fa34>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00491-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Cosmological Constant - Sean M. Carroll
4.1 Supersymmetry
Although initially investigated for other reasons, supersymmetry (SUSY) turns out to have a significant impact on the cosmological constant problem, and may even be said to solve it halfway. SUSY is a spacetime symmetry relating fermions and bosons to each other. Just as ordinary symmetries are associated with conserved charges, supersymmetry is associated with ``supercharges'' $Q_\alpha$, where $\alpha$ is a spinor index (for introductions see [131, 132, 133]). As with ordinary symmetries, a theory may be supersymmetric even though a given state is not supersymmetric; a state which is annihilated by the supercharges, $Q_\alpha|\psi\rangle = 0$, preserves supersymmetry, while states with $Q_\alpha|\psi\rangle \neq 0$ are said to spontaneously break SUSY.
Let's begin by considering ``globally supersymmetric'' theories, which are defined in flat spacetime (obviously an inadequate setting in which to discuss the cosmological constant, but we have to start somewhere). Unlike most non-gravitational field theories, in supersymmetry the total energy of a state has an absolute meaning; the Hamiltonian is related to the supercharges in a straightforward way:

$$H = \sum_\alpha \{Q_\alpha, Q_\alpha^\dagger\},$$

where braces represent the anticommutator. Thus, in a completely supersymmetric state (in which $Q_\alpha|\psi\rangle = 0$ for all $\alpha$), the energy vanishes, $\langle\psi|H|\psi\rangle = 0$. In the case of vacuum fluctuations, contributions from bosons are exactly canceled by equal and opposite contributions from fermions when supersymmetry is unbroken. Meanwhile, the scalar-field potential in supersymmetric theories takes on a special form; scalar fields $\phi^i$ must be complex (to match the degrees of freedom of the fermions), and the potential is derived from a function called the superpotential $W(\phi^i)$ which is necessarily holomorphic (written in terms of $\phi^i$ and not its complex conjugate $\bar\phi^{\bar i}$). In the simple Wess-Zumino models of spin-0 and spin-1/2 fields, for example, the scalar potential is given by

$$V(\phi^i, \bar\phi^{\bar j}) = \sum_i |\partial_i W|^2,$$

where $\partial_i W = \partial W / \partial\phi^i$. In such a theory, one can show that SUSY will be unbroken only for values of $\phi^i$ such that $\partial_i W = 0$, implying $V(\phi^i, \bar\phi^{\bar j}) = 0$.
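As a concrete illustration (a standard textbook example, not part of the original text): for a single scalar field with superpotential $W(\phi) = \frac{1}{2}m\phi^2 + \frac{1}{3}\lambda\phi^3$, the scalar potential is

$$V = |\partial_\phi W|^2 = |m\phi + \lambda\phi^2|^2,$$

which vanishes (and SUSY is preserved) at the critical points $\phi = 0$ and $\phi = -m/\lambda$.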
So the vacuum energy of a supersymmetric state in a globally supersymmetric theory will vanish. This represents rather less progress than it might appear at first sight, since: 1.) Supersymmetric states manifest a degeneracy in the mass spectrum of bosons and fermions, a feature not apparent in the observed world; and 2.) The above results imply that non-supersymmetric states have a positive-definite vacuum energy. Indeed, in a state where SUSY was broken at an energy scale $M_{\rm SUSY}$, we would expect a corresponding vacuum energy $\rho_\Lambda \sim M_{\rm SUSY}^4$. In the real world, the fact that accelerator experiments have not discovered superpartners for the known particles of the Standard Model implies that $M_{\rm SUSY}$ is of order $10^3$ GeV or higher. Thus, we are left with a discrepancy

$$\frac{M_{\rm SUSY}}{M_{\rm vac}} \geq 10^{15}.$$

Comparison of this discrepancy with the naive discrepancy (54) is the source of the claim that SUSY can solve the cosmological constant problem halfway (at least on a log scale).
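For orientation, the size of this gap follows directly from the scales involved (a rough estimate; the observed vacuum energy scale $M_{\rm vac} \equiv \rho_\Lambda^{1/4} \sim 10^{-12}$ GeV, about $10^{-3}$ eV, is quoted here as an assumption from the broader review rather than from this excerpt):

$$\frac{M_{\rm SUSY}}{M_{\rm vac}} \sim \frac{10^{3}\ {\rm GeV}}{10^{-12}\ {\rm GeV}} = 10^{15}.$$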
As mentioned, however, this analysis is strictly valid only in flat space. In curved spacetime, the global transformations of ordinary supersymmetry are promoted to the position-dependent (gauge) transformations of supergravity. In this context the Hamiltonian and supersymmetry generators play different roles than in flat spacetime, but it is still possible to express the vacuum energy in terms of a scalar field potential $V(\phi^i, \bar\phi^{\bar j})$. In supergravity $V$ depends not only on the superpotential $W(\phi^i)$, but also on a ``Kähler potential'' $K(\phi^i, \bar\phi^{\bar j})$, and the Kähler metric $K_{i\bar j}$ constructed from the Kähler potential by $K_{i\bar j} = \partial^2 K / \partial\phi^i \partial\bar\phi^{\bar j}$. (The basic role of the Kähler metric is to define the kinetic term for the scalars, which takes the form $g^{\mu\nu} K_{i\bar j} \partial_\mu \phi^i \partial_\nu \bar\phi^{\bar j}$.) The scalar potential is

$$V = e^{K/M_{\rm Pl}^2} \left[ K^{i\bar j} (D_i W)(\bar D_{\bar j} \bar W) - 3 M_{\rm Pl}^{-2} |W|^2 \right],$$

where $D_i W$ is the Kähler derivative,

$$D_i W = \partial_i W + M_{\rm Pl}^{-2} (\partial_i K) W.$$

Note that, if we take the canonical Kähler metric $K_{i\bar j} = \delta_{i\bar j}$, in the limit $M_{\rm Pl} \to \infty$ ($G \to 0$) the first term in square brackets reduces to the flat-space result (56). But with gravity, in addition to the non-negative first term we find a second term providing a non-positive contribution. Supersymmetry is unbroken when $D_i W = 0$; the effective cosmological constant is thus non-positive. We are therefore free to imagine a scenario in which supersymmetry is broken in exactly the right way, such that the two terms in brackets cancel to fantastic accuracy, but only at the cost of an unexplained fine-tuning (see for example [135]). At the same time, supergravity is not by itself a renormalizable quantum theory, and therefore it may not be reasonable to hope that a solution can be found purely within this context.
|
{"url":"http://ned.ipac.caltech.edu/level5/Carroll2/Carroll4_1.html","timestamp":"2014-04-20T15:52:08Z","content_type":null,"content_length":"9471","record_id":"<urn:uuid:c96b0e49-c52c-48fa-8252-219533cb1db3>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00358-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MAT Mock Test Mathematics
1. If a parallelogram with area P, a rectangle with area R and a triangle with area T are all constructed on the same base and all have the same altitude, then a false statement is
(a) P = 2T
(b) T = 1/2R
(c) P = R
(d) P + T = 2R
2. A shopkeeper allows a discount of 10% on his goods. For cash payment, he further allows a discount of 20%. Find a single discount equivalent to the above offer.
(a) 30%
(b) 18%
(c) 28%
(d) 15%
3. Anil spends Rs. 3620 on buying pants at the rate of Rs. 480 each and shirts at the rate of Rs. 130 each. What will be the ratio of pants to shirts when the maximum number of pants is bought?
(a) 7 : 2 (b) 7 : 3 (c) 2 : 7 (d) none of these
4. The average weight of 45 students in a class is 52 kg. Five of them, whose average weight is 48 kg, leave the class, and 5 other students, whose average weight is 54 kg, join the class. What is the new average weight (in kg) of the class?
5. A sum of money becomes eight times itself in 3 years if the rate is compounded annually. In how much time will the same amount at the same compound interest rate become sixteen times itself?
(a) 6 years (b) 4 years (c) 8 years (d) 5 years
6. A batsman makes a score of 98 runs in the 19th innings and thus increases his average by 4. What is his average after the 19th innings?
(a) 22 (b) 24 (c) 28 (d) 26
7. Terry is twice as good a workman as Sumit. Terry finished a piece of work in 3 hours less than Sumit. In how many hours could they have finished that piece of work working together?
8. A loss of 19% gets converted into a profit of 17% when the selling price is increased by Rs. 162. Find the cost price of the article.
(a) Rs 450 (b) Rs 600 (c) Rs 360 (d) Rs 540
9. Two-thirds of one-seventh of a number is 87.5% of 240. What is the number?
(a) 2670 (b) 2450 (c) 2205 (d) 1470
10. The digit at the units place of a two-digit number is increased by 100% and the ten's digit of the same number is increased by 50%. The new number thus formed is 9 more than the original number. What is the original number?
(a) 22 (b) 63 (c) 44 (d) none of these
11. A piece of cloth costs Rs. 3500. If the piece were 4 m longer and each metre cost Rs. 100 less, the total cost would remain unchanged. How long is the piece?
(a) 14m (b) 10m (c) 12m (d) 9m
12. A number of points are marked on a plane and are connected pairwise by line segments. If the total number of line segments is 10, how many points are marked on the plane?
(a) 4 (b) 10 (c) 5 (d) 9
13. Find 1^3 + 2^3 + 3^3 + 4^3 + … + 15^3
(a) 11025 (b) 13400 (c) 900 (d) 14400
14. Two cars start together in the same direction from the same place. The first goes with a uniform speed of 10 km/h. The second goes at a speed of 8 km/h in the first hour and increases its speed by ½ km/h each succeeding hour. After how many hours will the second car overtake the first if both cars go non-stop?
(a) 9 hours (b) 5 hours (c) 7 hours (d) 8 hours
15. There are 6 parallel vertical lines and 7 parallel horizontal lines. These two groups of parallel lines intersect each other. How many parallelograms will be formed?
(a) 294 (b) 42 (c) 315 (d) none of these
16. The age of a man is 3 times that of his son. 15 years ago, the man was 9 times as old as his son. What will be the age of the man after 15 years?
(a) 45 years (b) 60 years (c) 75 years (d) 65 years
17. In a group of buffaloes and ducks, the number of legs is 24 more than twice the number of heads. What is the number of buffaloes in the group?
(a) 6 (b) 12 (c) 8 (d) none of these
18. When a commission of 36% is given on the retail price, the profit is 8.8%. Find the profit percent when the commission is decreased by 24 percentage points (i.e., to 12%).
(a) 76% (b) 54% (c) 58% (d) 49.6%
19. In climbing a 21 m long round pole, a monkey climbs 6 m in the first minute and slips 3 m in the next minute. How much time (in minutes) would the monkey take to reach the top of the pole?
20. The difference between a two-digit number and the number obtained by interchanging the digits is 36. What is the difference between the sum and the difference of the digits of the number, if the ratio between the digits of the number is 1:2?
(a) 15 (b) 4 (c) 8 (d) none of these
21. A cylindrical tub of radius 12 cm contains water to a depth of 20 cm. A spherical iron ball is dropped into the tub, and the level of the water rises by 6.75 cm. What is the radius of the ball?
(a) 6cm (b) 9cm (c) 8cm (d) none of these
22. A motor boat, whose speed is 15 km/h in still water, goes 30 km downstream and comes back in a total of 4 hours and 30 minutes. Determine the speed of the stream.
(a) 10km/h (b) 4km/h (c) 5km/h (d) 6km/h
23. If the base of a triangle is doubled and its height is halved, the ratio between the area of the original triangle and the new triangle will be
(a) 1 : 1 (b) 2 : 3 (c) 3 : 2 (d) 1 : 3
24. A ladder reaches a window which is 12 m above the ground on one side of the street. Keeping its foot at the same point, the ladder is turned to the other side of the street to reach a window 9 m high. Find the width of the street if the length of the ladder is 15 m.
(a) 21m (b) 12m (c) 9m (d) none of these
25. The angles of elevation of the top of a tower from two points P and Q, at distances of x and y respectively from the base and in the same straight line with it, are complementary. Find the height of the tower.
26. A plane left 30 minutes later than the scheduled time, and in order to reach its destination 1500 km away in time, it had to increase its speed by 250 km/h over the usual speed. What is the usual speed?
(a) 1000 km/h (b) 750km/h (c) 850km/h (d) 650km/h
27. A toy is in the form of a cone mounted on a hemisphere of radius 3.5 cm. The total height of the toy is 15.5 cm. Find the total surface area (use π = 22/7).
(a) 137.5 cm^2 (b) 214.5 cm^2 (c) 154 cm^2 (d) 291.5 cm^2
28. Harry ordered 4 pairs of white socks and some additional pairs of blue socks. The price of white socks per pair was twice that of the blue socks. When the order was filled, it was found that the number of pairs of the two colours had been interchanged. This increased the bill by 50%. Find the ratio of the number of pairs of white socks to the number of pairs of blue socks in the original order placed by Harry.
(a) 1 : 4 (b) 4 : 1 (c) 1 : 5 (d) 5 : 1
29. Pipes A and B can fill a tank in 5 and 6 hours respectively. Pipe C can empty it in 12 hours. The tank is half full and all three pipes are set in operation simultaneously. After how much time will the tank be full?
(a) 32/17 hours (b) 11 hours (c) 28/11 hours (d) 30/17 hours
30. At a certain party, there was a bowl of rice for every two guests, a bowl of broth for every three of them, and a bowl of meat for every four of them. If in all there were 65 bowls of food, how many guests were at the party?
(a) 65 (b) 24 (c) 60 (d) 48
31. The number of numbers between 1000 and 10000 that can be formed using the digits 1, 3, 5 and 7 (each exactly once) is
(a) 16 (b) 24 (c) 60 (d) none of these
32. From among 36 teachers in a school, one principal and one vice-principal are to be appointed. In how many ways can this be done?
(a) 1260 (b) 1250 (c) 1240 (d) 1800
33. A boy has 3 library tickets and 8 books of his interest in the library. Of these 8, he does not want to borrow Chemistry Part II unless Chemistry Part I is also borrowed. In how many ways can he choose the three books to be borrowed?
(a) 56 (b) 27 (c) 26 (d) 41
34. In the following number series there is a wrong number: 56, 72, 90, 108, …
(a) 72 (b) 90 (c) 108 (d) none of these
35. What should come in the place of the question mark in the following series: 0, 1, 4, ?, 64, 325
(a) 15 (b) 12 (c) 36 (d) 32
36. The average weight of 20 students in a class increases by 0.75 kg when one of the students, weighing 30 kg, is replaced by a new student. The weight of the new student (in kg) is
(a) 35 (b) 40 (c) 45 (d) 50
37. By adding the same constant to each of 31, 7, −1, a geometric progression results. The common ratio is
(a) 3 (b) 1/3 (c) -2 (d) -4
38. If 16 men or 20 women can do a piece of work in 25 days, in what time will 28 men and 15 women do it?
39. (1 − 2(1 − 2)^-1)^-1 equals
(a) 1/3 (b) – 1/3 (c) – 1 (d) ½
40. The volumes of two spheres are in the ratio of 8:27. The ratio of their surface areas is
(a) 4 : 9 (b) 2 : 3 (c) 4 : 5 (d) 5 : 6
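As a quick sanity check of question 13 (a minimal sketch; the closed form used is the standard sum-of-cubes identity):

```python
# Question 13: 1^3 + 2^3 + ... + 15^3 equals (n(n+1)/2)^2 for n = 15.
n = 15
brute_force = sum(i**3 for i in range(1, n + 1))
closed_form = (n * (n + 1) // 2) ** 2
print(brute_force, closed_form)  # both print 14400, matching option (d)
```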
|
{"url":"http://www.excellup.com/MAT/mock_mat_mathone.aspx","timestamp":"2014-04-17T12:32:23Z","content_type":null,"content_length":"38815","record_id":"<urn:uuid:00e3e4bb-6ba7-410e-83d7-d20482cd9423>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00531-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Vestibular System Implements a Linear–Nonlinear Transformation In Order to Encode Self-Motion
Although it is well established that the neural code representing the world changes at each stage of a sensory pathway, the transformations that mediate these changes are not well understood. Here we
show that self-motion (i.e. vestibular) sensory information encoded by VIIIth nerve afferents is integrated nonlinearly by post-synaptic central vestibular neurons. This response nonlinearity was
characterized by a strong (~50%) attenuation in neuronal sensitivity to low frequency stimuli when presented concurrently with high frequency stimuli. Using computational methods, we further
demonstrate that a static boosting nonlinearity in the input-output relationship of central vestibular neurons accounts for this unexpected result. Specifically, when low and high frequency stimuli
are presented concurrently, this boosting nonlinearity causes an intensity-dependent bias in the output firing rate, thereby attenuating neuronal sensitivities. We suggest that nonlinear integration
of afferent input extends the coding range of central vestibular neurons and enables them to better extract the high frequency features of self-motion when embedded with low frequency motion during
natural movements. These findings challenge the traditional notion that the vestibular system uses a linear rate code to transmit information and have important consequences for understanding how the
representation of sensory information changes across sensory pathways.
Author Summary
Understanding how the coding of sensory information changes at different stages of sensory processing remains a fundamental challenge in systems neuroscience. Here we address this question by
studying early sensory processing in vestibular pathways of monkeys, a system for which sensory stimuli are relatively easy to describe. Peripheral vestibular afferents detect and encode head motion
in space to ensure posture and gaze is accurate and stable during everyday life. In this study, we show that central vestibular neurons nonlinearly integrate their afferent inputs, which helps
explain the mechanisms that generate enhanced feature detection in sensory pathways. In addition, our results overturn conventional wisdom that early vestibular processing is linear, revealing a
striking boosting nonlinearity that is a hallmark of the first central stage of vestibular processing. Studies from other sensory systems have shown that higher-order neurons can more efficiently
detect specific features of sensory input, and that nonlinear transformations can increase this efficiency. We suggest that nonlinear integration of afferent input by central vestibular neurons
extends their coding range and facilitates the detection of natural vestibular stimuli.
Citation: Massot C, Schneider AD, Chacron MJ, Cullen KE (2012) The Vestibular System Implements a Linear–Nonlinear Transformation In Order to Encode Self-Motion. PLoS Biol 10(7): e1001365.
Academic Editor: James Ashe, University of Minnesota, United States of America
Received: January 13, 2012; Accepted: June 13, 2012; Published: July 24, 2012
Copyright: © 2012 Massot et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction
in any medium, provided the original author and source are credited.
Funding: Sources of funding that have supported the work include: FQRNT (MJC, KEC), CIHR (MJC, KEC), CFI (MJC), CRC (MJC), and NIH grant DC002390 (KEC). The funders had no role in study design, data
collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
Abbreviations: CV, coefficient of variation; LN, linear-nonlinear; STD, standard deviation; SEM, standard error of the mean; VO, vestibular-only; VN, vestibular nuclei
Multiple representations of the sensory environment are found across the hierarchical stages of sensory systems [1]. Each of these representations is defined by the activities of a population of
neurons in response to their afferent inputs. How neurons decode and then encode sensory information, and the ways in which neural strategies for coding change across successive brain areas, remains
a central problem in neuroscience. Studies across sensory systems have shown that representations in higher order brain areas are more efficient because individual neurons detect specific features of
sensory input [2]–[5]. Although theoretical studies predict that more efficient representations are achieved by nonlinear transformations of afferent input [3],[6],[7], to date the nature of these
transformations is largely unknown.
If nonlinear transformations mediate a more efficient representation of the sensory environment across hierarchical stages of processing, then they should be revealed by experimental approaches
specifically designed to probe nonlinear processing. Here, we used the vestibular system as a model to address whether central neurons nonlinearly integrate their afferent inputs in order to give
rise to enhanced feature detection. An advantage of the vestibular system, which is essential for providing information about our self-motion and spatial orientation relative to the world, is that
the sensory stimulus is relatively easy to describe. Conventional wisdom is that early vestibular processing is inherently linear. This is supported by numerous studies showing that both afferents
and central neurons accurately encode the detailed time course of horizontal rotational head motion through linear changes in firing rate over a wide range of frequencies (reviewed in [8],[9]; [10]).
Further support for this proposal has come from the fact that central vestibular neurons linearly transduce synaptic inputs into changes in firing rate output [11]. Indeed, to date, prior studies
have demonstrated remarkable linearity of vestibular behaviours such as the vestibulo-ocular reflex [12]–[15]. However, all these results are at odds with the expectation that central vestibular
neurons achieve more efficient representations of sensory space through nonlinear transformations of their afferent input. Such nonlinear transformations could be advantageous as they would enable
vestibular neurons to detect specific features of natural vestibular stimuli. For instance, it would be theoretically beneficial that the central vestibular neurons which mediate vestibulo-spinal
reflexes preferentially respond to unexpected transient stimuli, such as those experienced when slipping on ice, in order to optimize compensatory postural responses.
A comprehensive rethinking of the neural code used by the vestibular system is thus necessary to reveal whether more efficient representations of the sensory environment emerge in central vestibular
pathways through nonlinear transformations of their afferent input. Notably, prior experiments have characterized early vestibular processing mostly using stimuli that were not designed to
systematically probe nonlinear behaviour (e.g., single sinewaves and trapezoids) [8]–[10]. In order to test for the existence of such nonlinear transformations, it is necessary to compare neural
response to a given stimulus “A” when presented in isolation to that obtained when the same stimulus was presented concurrently with another stimulus “B.” If, as suggested by previous studies,
central vestibular neurons respond linearly, then we would expect that the response to stimulus “A” should not depend on whether stimulus “B” is present or not (i.e., the principle of superposition
is valid because, by definition, a linear system must be additive). If, instead, central vestibular neurons nonlinearly integrate afferent input, we might expect that the response to stimulus “A”
would be altered contingent on the presence of stimulus “B.”
We explicitly investigated how the neural strategy for coding self-motion changes across the afferent-central neuron synapses by testing whether central vestibular neurons nonlinearly integrate their
afferent inputs. We found that, unlike afferents, central vestibular neurons do not obey the principle of superposition because they displayed strong nonlinear responses when sums of low and high
frequency stimuli were used. Indeed, the response to low frequency stimuli was strongly attenuated when these were presented concurrently with high frequency stimuli. Through a combination of
mathematical modeling and analysis, we show how a static boosting nonlinearity in the input-output relationship can lead to this effect. Our results force a rethinking of the processing of
self-motion stimuli in early vestibular pathways. We suggest that nonlinear processing by central vestibular neurons could serve to enhance their coding range and selectivity to high frequency
transient self-motion.
Central Vestibular Neurons Respond Nonlinearly to Self-Motion
We tested response nonlinearity in both central vestibular neurons and afferents by recording their activities in response to a stimulus when presented in isolation and when presented concurrently
with another stimulus (Figure 1A). During experiments, the animal was comfortably seated on a motion platform (Figure 1B). We first recorded central vestibular neuron responses to random noise
stimuli with frequency content spanning the range of natural head rotations (0–20 Hz) [14]. Specifically, we applied stimuli that spanned two different frequency ranges: low (0–5 Hz) (Figure 1C,
black traces) and high (15–20 Hz) (Figure 1D, black traces). Both noise stimuli were applied either individually (Figure 1C,D) or simultaneously (Figure 1E). The neuronal responses from an example
cell to each of these three stimuli are shown by the red traces in Figure 1C,D,E. We found that, when both stimuli were applied simultaneously, the response was not equal to the sum of the responses
to each individual stimulus as would be expected for a linear system. This is because the firing rate modulation in response to the low frequency stimulus when presented alone was much larger than
that observed when the high frequency stimulus was presented simultaneously (compare red traces in Figure 1C,E). In contrast, the firing rate modulation in response to the high frequency stimulus was
comparable regardless of whether the stimulus was presented alone or in combination with the low frequency input (compare red traces in Figure 1D,E). This was reflected in the response power spectrum
(compare red traces in the insets of Figure 1C,E and Figure 1D,E).
Figure 1. Central vestibular neurons respond nonlinearly to sums of noise stimuli.
(A) Vestibular information is transmitted from the sensory end organs through two types of afferents (regular and irregular) that converge on first order central neurons within the vestibular nuclei.
(B) During the experiment the monkey was comfortably seated in a chair placed on a motion platform. (C–E) The firing rate (red traces) of an example central vestibular neuron in response to noise
stimuli (black traces) whose frequency content spanned 0–5 Hz (C), 15–20 Hz (D), and 0–5 Hz+15–20 Hz (E). The upper insets show the power spectrum of each stimulus, while the lower insets show the
power spectrum of the firing rates (red). (F) Population-averaged normalized gains curves for central neurons. Note the attenuated response at low frequency (0–5 Hz, arrow). (G) Population-averaged
normalized gains for central neurons. Here and in all subsequent figures, the bands (F) and error bars (G) show 1 SEM. The firing rate estimates were obtained by convolving the spike trains with a
Kaiser filter (see Materials and Methods).
To quantify this effect, we computed the response gain in each condition for our population of central vestibular neurons (see Materials and Methods). Consistent with previous results [10], the
neuronal gains of central vestibular neurons were higher for high frequency stimuli (Figure 1F, compare blue and red traces). However, we found that the population-averaged response gains at low
frequencies were significantly attenuated (~50%) (p<10^−6, paired t test, n = 15) when both stimuli were applied simultaneously (Figure 1F,G). The population-averaged response gains at high frequencies were, however, unaffected (p = 0.4, paired t test, n = 15) (Figure 1F,G).
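The gain computation referenced above can be sketched as follows (a minimal illustration, not the authors' exact pipeline; it assumes the time-dependent firing rate has already been estimated from the spike train, e.g., by the Kaiser-filter smoothing described in the Materials and Methods):

```python
import numpy as np
from scipy import signal

def response_gain(stimulus, firing_rate, fs, band):
    """Neuronal gain over a frequency band, estimated as
    |cross-spectrum(stimulus, rate)| / power-spectrum(stimulus)."""
    f, Psr = signal.csd(stimulus, firing_rate, fs=fs, nperseg=1024)
    f, Pss = signal.welch(stimulus, fs=fs, nperseg=1024)
    mask = (f >= band[0]) & (f <= band[1])
    return np.mean(np.abs(Psr[mask]) / Pss[mask])

# e.g., compare the 0-5 Hz gain with and without the 15-20 Hz stimulus:
# gain_alone = response_gain(stim_low, rate_low_alone, fs=1000.0, band=(0.5, 5.0))
# gain_combined = response_gain(stim_low, rate_combined, fs=1000.0, band=(0.5, 5.0))
```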
Thus, contrary to the common assumption that early vestibular processing is essentially linear, the results above establish that central vestibular neurons respond nonlinearly to sums of low and high
frequency head rotations since the principle of superposition is violated. Notably, responses to low frequency self-motion are suppressed in the presence of high frequency self-motion. In contrast,
responses to high frequency self-motion are relatively unaffected by the presence of low frequency self-motion.
We next asked whether the response nonlinearity that we observed using gain measures would also be evident when using information theoretic measures such as the coherence. Unlike gain measures,
coherence measures are computed using the signal-to-noise ratio and thus take variability into account. This is important because previous studies have shown that a given neuron can display
qualitatively different frequency tuning depending on whether gain or coherence measures are used [16]–[18]. Again, we found that the principle of superposition was violated. Indeed,
population-averaged coherence values at low frequencies were significantly lower (~50%) (p<0.001, paired t test, n = 20) when both noise stimuli were presented simultaneously. In contrast,
population-averaged coherence values at high (15–20 Hz) frequencies were not significantly different (p = 0.87, paired t test, n = 15) (Figure S1A, S1B, S1C). As expected given that there is a
one-to-one relationship between coherence and mutual information measures, comparable results were obtained when computing the latter (unpublished data). Thus, taken together, our results using both
gain and coherence measures confirm our hypothesis that central vestibular neurons respond nonlinearly to sums of low and high frequency stimuli.
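For readers who want to compute this measure, here is a minimal sketch of the standard magnitude-squared coherence (the estimator parameters are illustrative assumptions, not the authors' exact settings):

```python
import numpy as np
from scipy import signal

def band_coherence(stimulus, firing_rate, fs, band):
    """Magnitude-squared coherence |S_sr|^2 / (S_ss * S_rr), averaged over a
    frequency band; values near 1 mean the response encodes the stimulus
    with little trial-to-trial variability (noise)."""
    f, Cxy = signal.coherence(stimulus, firing_rate, fs=fs, nperseg=1024)
    mask = (f >= band[0]) & (f <= band[1])
    return np.mean(Cxy[mask])
```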
We also tested that these nonlinear responses were not specific to the noise stimuli used. Indeed, we found that central vestibular neurons also responded nonlinearly to sums of low and high
frequency sinusoidal stimuli. Indeed, when 3 and 17 Hz sinusoidal stimuli were applied simultaneously, the response was not equal to the linear sum of the responses to each individual stimulus (
Figure S2). We note that this is not due to our filtering the spike trains to obtain the time-dependent firing rate since this effect was also evident in the power spectra from the unfiltered spike
trains (Figure S3).
Further, the observed nonlinear responses of central vestibular neurons were not due to trivial nonlinearities such as rectification (i.e., cessation of firing) or saturation (i.e., the firing rate
reaching a plateau at a finite value) since these were not elicited by the stimuli used in this study (Figure S4A).
Peripheral Vestibular Afferents Respond Linearly to Sums of Low and High Frequency Motion
Perhaps the simplest explanation for the nonlinear responses of central vestibular neurons shown in Figure 1 is that they are inherited from their afferent input. Peripheral vestibular afferents
display marked heterogeneities in their baseline activity and response to stimulation. Most notably, regularly discharging afferents are characterized by low coefficients of variation (CV) and encode
the detailed time course of self-motion as they are broadly tuned to the behaviourally relevant frequency range (0–20 Hz). In contrast, irregularly discharging afferents are characterized by higher
CVs and detect fast transient changes in self-motion as they respond preferentially to high frequencies [8],[18]–[20].
To address whether the nonlinear responses of central vestibular neurons are inherited from their afferent inputs, we recorded from single regular and irregular afferents using the same random noise
stimuli. In contrast to their target central vestibular neurons, neither regular (Figure 2A) nor irregular afferents (Figure 2A) displayed significant nonlinearities. Indeed, the population-averaged
gain values at low frequencies were not significantly altered by the presence of the high frequency stimulus (regular: p = 0.9, paired t test, n = 5; Figure 2C; irregular: p = 0.23, paired t test, n
= 10; Figure 2D). Similarly, the population-averaged gain values at high frequencies were not significantly altered by the presence of the low frequency stimulus (regular: p = 0.84, paired t test, n
= 5; irregular: p = 0.19, paired t test, n = 10). We note that the applied stimuli also did not elicit “trivial” nonlinearities in afferents such as rectification or saturation (Figure S4B,C) and
that similar results were obtained when we instead used the coherence measure (regular: Figure S1D,E,F; irregular: Figure S1G,H,I). We note that similar results were observed when using sums of low
and high frequency sinusoidal stimuli (unpublished data). Accordingly, unlike central neurons, individual afferents do not respond nonlinearly to sums of low and high frequency stimuli.
Figure 2. Afferents respond linearly to sums of noise stimuli.
(A, B) Population-averaged normalized gain curves as a function of frequency for regular (A) and irregular (B) afferents. (C, D) Population-averaged normalized gains for regular (C) and irregular (D)
afferents. (E) Population-averaged attenuation indices for central neurons, regular afferents, and irregular afferents.
We quantified the gain attenuation at low frequencies in the presence of the high frequency stimulus for both central vestibular neurons and afferents. While central vestibular neurons displayed
strong and significant attenuation (~50%, p<0.001, signrank test, n = 15), both regular and irregular afferents instead displayed weak attenuation (~10%) that was not significantly different from
zero (regular: p = 0.25, signrank test, n = 5; irregular: p = 0.13, signrank test, n = 10) (Figure 2E). These findings imply that the origin of the response nonlinearity seen in central neurons is
due to nonlinear integration of afferent synaptic input.
Central Vestibular Neurons Display Nonlinear Responses to High Frequency But Not Low Frequency Head Rotations When These Are Applied in Isolation
In order to understand how central vestibular neurons nonlinearly integrate their afferent input, we next characterized the relationship between head velocity input and output firing rate for both
afferents and central neurons by plotting one as a function of the other. The schematic of the approach used is illustrated in Figure 3A. If the relationship between input head velocity and output
firing rate is linear, then the curve relating the two should be well fit by a straight line.
Figure 3. Central vestibular neurons but not afferents display a nonlinear relationship between output firing rate and input head velocity.
(A) Output firing rate as a function of head velocity. The inset shows the instantaneous firing rate and the head velocity stimulus as a function of time and the various symbols correspond to
different values of the head velocity and the corresponding firing rates. If the firing rate is related linearly to the head velocity stimulus, then the curve relating the two should be well fit by a
straight line. The slope of this line is then the response gain. (B) Population-averaged firing rate response as a function of head velocity for afferents when stimulated with 0–5 Hz noise alone
(solid blue) and concurrently with 15–20 Hz noise (solid black). In both cases, the curves were well fit by straight lines (dashed lines) and largely overlapped (0–5 Hz alone: R^2 = 0.99, slope =
0.70 (spk/s)/(deg/s), y-intercept = 98 spk/s; 0–5 Hz with 15–20 Hz: R^2 = 0.99, slope = 0.72 (spk/s)/(deg/s), y-intercept = 98 spk/s). (C) Population-averaged firing rate response as a function of
head velocity for afferents when stimulated with 15–20 Hz noise alone (solid red) and concurrently with 0–5 Hz noise (long dashed black). Both curves were again well fit by straight lines (short
dashed lines) and largely overlapped (15–20 Hz alone: R^2 = 0.99, slope = 1.97 (spk/s)/(deg/s), y-intercept = 102 spk/s; 15–20 Hz with 0–5 Hz: R^2 = 0.99, slope = 2.06 (spk/s)/(deg/s), y-intercept =
102 spk/s). Note, however, the increased slope with respect to panel B. (D) Population-averaged firing rate response as a function of head velocity for central neurons when stimulated with 0–5 Hz
noise alone (solid blue) and concurrently with 15–20 Hz noise (solid black). In both cases, the curves were well fit by straight lines (dashed lines) although the solid black curve had a lower slope
(i.e., gain) than the solid blue curve (0–5 Hz: R^2 = 0.98, slope = 1.56 (spk/s)/(deg/s), y-intercept = 67 spk/s; 0–5 Hz with 15–20 Hz: R^2 = 0.87, slope = 0.83 (spk/s)/(deg/s), y-intercept = 81 spk/
s). (E) Population-averaged firing rate response as a function of head velocity for central neurons when stimulated with 15–20 Hz noise alone (solid red) and concurrently with 0–5 Hz noise (long
dashed black). While both curves were similar and largely overlapped, they were not well fit by straight lines (short dashed lines) that underestimated the firing rate for head velocities <−10 deg/s
(15–20 Hz: R^2 = 0.64, slope = 2.32 (spk/s)/(deg/s), y-intercept = 79 spk/s; 15–20 Hz with 0–5 Hz: R^2 = 0.27, slope = 2.78 (spk/s)/(deg/s), y-intercept = 79 spk/s). We note that central neurons did
not display rectification since the firing rate was always above zero.
We found that the relationships between head velocity stimuli and peripheral afferent responses were well fit by straight lines. The population-averaged relationships for low and high frequency
self-motion obtained for afferents are shown in Figure 3B and 3C, respectively. It can further be seen that these relationships are comparable when a given stimulus is applied alone and when it is
applied concurrently with the other stimulus (Figure 3B, 3C) (low frequency: p = 0.93, pairwise t test, n = 15; high-frequency: p = 0.89, pairwise t test, n = 15), demonstrating that the principle of
superposition applies. This was also seen for single neurons (insets of Figure S5). Further, these results were observed for both regular (low frequency: p = 0.59; high frequency: p = 0.58, pairwise
t tests, n = 5) and irregular (low frequency: p = 0.77; high frequency: p = 0.35, pairwise t tests, n = 10) afferents when considered separately (Figure S5). Notably, comparison of Figure 3B and 3C
further revealed that the afferent gain (i.e., the slope of the input-output relationship) was higher in response to the high as compared to the low frequency stimulus. This observation is consistent
with previous studies showing that high frequency head rotations give rise to greater afferent firing rate modulations (reviewed in [8]).
We next computed the population-averaged relationships for central vestibular neurons and found that they were well fit by straight lines when the low frequency stimulus was presented alone (Figure
3D, solid blue curves). We note that this was also true for single neurons (Figure S6A, solid blue curve). The head velocity-neuronal response relationship (solid black curve) was also linear when
low frequency stimulation was applied concurrently with high frequency stimulation (population average: Figure 3D; single neuron: Figure S6A, solid black curves). However, in the combined condition,
the slope of the curve (i.e., the gain) was lower (compare solid black and blue traces in Figures 3D and S6A). These results are consistent with our previous analysis of response gain (Figure 1G),
thus confirming our earlier findings.
In contrast, qualitatively different results were observed for high frequency head rotations. Notably, we found that the relationships between head velocity stimuli and central neuron responses were
nonlinear as they were characterized by significantly lower gains (i.e., the slope of the curve) for head velocities less than −10 deg/s as compared to those for head velocities greater than −10 deg/
s (p = 0.01, pairwise t test, n = 20). This was seen for both the population averages (Figure 3E) and single neurons (Figure S6B). We will henceforth refer to the shape of these curves as a boosting
nonlinearity [21]. Moreover, the relationships obtained for high frequency head rotations were comparable when the stimulus was presented alone or concurrently with low frequency head rotations (p =
0.43, pairwise t test, n = 20) (Figures 3E and S6B, compare red and black-dashed traces).
Thus, again consistent with our results using gain measures, central vestibular neuron responses were comparable when high frequency stimuli were applied alone or concurrently with low frequency
stimuli. Notably, unlike afferents, central vestibular neurons respond nonlinearly to sums of low and high frequency stimuli. Moreover, our analysis of their stimulus input–firing rate output
relationships further revealed a boosting nonlinearity characterized by lower slopes for head velocities less than −10 deg/s as compared to those obtained for head velocities greater than −10 deg/s.
This nonlinearity was only seen when high frequency stimuli were applied (Figures 3E and S6B).
The Greater Afferent Firing Rate Modulations Elicited by High Frequency Stimuli Elicit Nonlinear Responses in Central Vestibular Neurons
Thus far, we have looked at the relationship between head velocity stimuli and output firing rates for both central neurons and afferents. We found that afferents responded linearly to both low and
high frequency stimuli. In contrast, central neurons responded linearly to low frequency stimuli but nonlinearly to high frequency stimuli. A priori, this effect could be mediated by a dynamic
non-linearity that would be activated exclusively under high frequency stimulation (e.g., a network-based mechanism such as feedback input from higher centers). Alternatively, the nonlinearity might
be static in nature (e.g., due to intrinsic mechanisms such as voltage-gated conductances) and be preferentially elicited by the afferent input due to high frequency stimulation. Figure 4A
illustrates the sequential processing of low (top) and high (bottom) frequency stimuli when applied in isolation. It is important to note that, for high frequency stimulation, the afferent input to
central vestibular neurons will span a greater range (Figure 4A, compare green traces) because afferents display greater sensitivities (compare Figure 3B,C). As a result, at the next stage of
processing, these larger afferent firing rate modulations should evoke greater central neuron firing rate modulations as compared to those evoked by low frequency head rotations (Figure 4A, compare
purple traces). Thus, if the nonlinearity is static, we predict that (1) the smaller range of afferent firing rates evoked by low frequency stimulation are contained in a region for which the central
vestibular neuron input-output relationship is approximately linear, (2) the greater range of afferent firing rates evoked by high frequency stimulation extend into a region of the input-output
relationship that elicits the boosting nonlinearity (Figure 4A, VO neuron box), and as a result, (3) central vestibular neuron output firing rate is then a fixed function of the afferent input firing
rate, regardless of whether low or high frequency head rotations are applied in isolation.
Figure 4. Central neurons display a static nonlinear relationship between their output firing rate and their afferent input.
(A) Low (top) and high (bottom) frequency head velocity stimuli (gray) cause smaller and larger changes in afferent firing rate (green), respectively. These differential changes in afferent firing
rate in turn cause differential changes in central neuron firing rate (purple), respectively. Notably, the changes in afferent firing rate caused by high frequency head velocity stimuli are
distributed over a greater range and thus elicit nonlinear responses from VO neurons, whereas this is not the case for those caused by low frequency head velocity stimuli. Note that the same scales
were used for corresponding panels in the bottom and upper rows. (B) Population-averaged firing rates of central VO neurons as a function of afferent firing rate for low (blue) and high (red)
frequency noise stimuli presented in isolation. Note that the curve obtained for the low frequency stimulus (blue) extends over a smaller range than that obtained for high frequency (red) stimuli.
Further, both curves are linear over the range for which they overlap. Also shown are best linear fits to the portion of the curve below and above 90 Hz (dashed red lines). As such, the curve can be
approximated by a piecewise linear function. Inset: population-averaged firing rates of afferents as a function of the head velocity stimulus for low (blue) and high (red) frequency noise stimuli
presented alone. (C) Population-averaged firing rates of central VO neurons as a function of afferent input firing rates: (1) for the low frequency stimulus when presented alone (blue) and
concurrently with the high frequency stimulus (solid black); (2) for the high frequency stimulus when presented alone (red) and concurrently with the low frequency stimulus (dashed black). Note that
the curves obtained in response to the high frequency stimulus when presented alone (red) and when presented concurrently with the low frequency stimulus (dashed black) overlapped before (Figure 3E)
and thus, not surprisingly, also overlap. Note also that only the curve obtained when the low frequency stimulus was presented concurrently with the high frequency stimulus (solid black) does not
overlap with the others. This is because the central VO neuron firing rate is higher than that obtained for the low frequency stimulus when applied alone for values lesser than 110 Hz. Inset:
population-averaged normalized slopes under all four conditions. The afferent activity was estimated by fitting a linear model to previous experimental recordings from a large population of afferents
(see Materials and Methods).
To test whether the nonlinearity is static or dynamic, we next experimentally characterized the input-output relationship of central neurons by plotting their output firing rates as a function of
their afferent input rather than head velocity. Given that central neurons receive input from many afferents that display significant heterogeneities (see [8] for review), we obtained an estimate of
this activity by fitting a linear model to previous data (see Materials and Methods). The input-output relationship obtained for low frequency stimuli was approximately linear (Figure 4B, blue
curve), confirming our first prediction. In addition, the input-output relationship obtained for high frequency stimuli displayed a boosting nonlinearity (Figure 4B, red curve), such that the slope
for afferent inputs less than 90 spk/s was much lower than that for afferent inputs greater than 90 spk/s (Figure 4B, compare solid and dashed red curves). Thus, the afferent input–central neuron
output relationship can be approximated by the piecewise linear function illustrated in Figure 4A, confirming our second prediction. Moreover, we found that both curves overlapped when only the
smaller range of afferent firing rates evoked by low frequency stimuli was considered (Figure 4B, compare red and blue curves). Accordingly, this finding confirmed our third prediction that central
vestibular neuron firing rate is a fixed function of the afferent input firing rate when either low or high frequency head rotations are applied in isolation. Accordingly, there is a striking
contrast between the results of this analysis and that of our previous analysis of the relationship between head velocity input and afferent output. Notably, the head velocity input–afferent output
relationships obtained for low and high frequency stimulation did not overlap consistently with the known frequency-dependent sensitivities of afferents (Figure 4B, inset). Thus, taken together, our
results show that central vestibular neuron responses are characterized by a static nonlinearity that is primarily elicited by the greater afferent firing rate modulations caused by high frequency
stimuli. We suggest that the intrinsic properties of central vestibular neurons and/or network interactions within this vestibular pathway underlie this boosting nonlinearity (see Discussion).
We next plotted the afferent input–firing rate output relationships obtained when low frequency stimulation was applied alone or concurrently with high frequency stimulation for central vestibular
neurons. We found significantly different slopes in both conditions (Figure 4C, compare black and blue curves and inset). Specifically, central vestibular neuron firing rates in response to afferent
firing rates below 110 spk/s were higher when the low frequency stimulus was applied concurrently with the high frequency stimulus than when it was applied alone (Figure 4C, arrow). We also note
that, as can be expected from Figure 3E, the central vestibular neuron input-output relationships obtained when high frequency stimulation was applied alone or concurrently with low frequency
stimulation overlapped (Figure 4C, red and dashed black curves) and did not differ significantly in their slopes (Figure 4C, inset), which confirms that central vestibular neurons display a static
boosting nonlinearity in response to these stimuli.
Modeling and Predicting Central Vestibular Neuron Responses to Sums of Arbitrary Stimuli
Does the static boosting nonlinearity in the input-output relationship of central vestibular neurons account for their nonlinear responses to sums of low and high frequency stimuli? To address this
question, we fit the experimentally recorded central vestibular neuron input-output relationship in response to afferent input when a given stimulus was presented in isolation. Since individual
central vestibular neurons receive input from a large heterogeneous population of afferents [8], we estimated their average activity by fitting a linear model to existing data (see Materials and
Methods). The input-output relationship in response to this stimulus when another stimulus is presented concurrently can then be obtained by averaging (see Materials and Methods). Accordingly, it
becomes possible, using this model, to predict the change in the central vestibular neuron input-output relationship to a given stimulus when another stimulus is applied concurrently. Our results
show that, when compared to experimental data, this relatively simple model is surprisingly accurate at predicting the change in afferent to central neuron input-output relationship to the low
frequency stimulus when the high frequency stimulus is applied concurrently (Figure 5A, compare solid and dashed curves). The same model also predicts little change in the input-output relationship
to the high frequency stimulus when the low frequency stimulus is applied concurrently, consistent with our experimental results (Figure 5B, compare solid and dashed curves).
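To make the averaging argument concrete, here is a minimal numerical sketch (not the authors' fitted model; the breakpoint, slopes, and afferent gains are illustrative assumptions): a static piecewise-linear boosting nonlinearity applied to summed afferent drive attenuates the measured gain to the low frequency component when the high frequency masker is present.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, T = 1000, 100.0
t = np.arange(0, T, 1 / fs)

def bandlimited_noise(f_lo, f_hi):
    """Gaussian noise restricted to [f_lo, f_hi] Hz, unit variance."""
    X = np.fft.rfft(rng.standard_normal(t.size))
    f = np.fft.rfftfreq(t.size, 1 / fs)
    X[(f < f_lo) | (f > f_hi)] = 0
    x = np.fft.irfft(X, t.size)
    return x / x.std()

low = bandlimited_noise(0.1, 5.0)     # low frequency "head velocity"
high = bandlimited_noise(15.0, 20.0)  # high frequency "head velocity"

def afferent(stim_low, stim_high):
    # Afferent stage: linear, with higher gain at high frequencies (illustrative).
    return 100.0 + 10.0 * stim_low + 40.0 * stim_high  # baseline 100 spk/s

def boosting(r, breakpoint=85.0, slope_below=0.3, slope_above=1.5):
    """Static piecewise-linear nonlinearity: shallow slope below the
    breakpoint, steep above it (hypothetical parameters)."""
    return 80.0 + np.where(r < breakpoint,
                           slope_below * (r - breakpoint),
                           slope_above * (r - breakpoint))

def gain_to(component, response):
    """Regression gain of the response onto one (zero-mean) stimulus component."""
    return np.dot(component, response - response.mean()) / np.dot(component, component)

alone = boosting(afferent(low, 0.0))
combined = boosting(afferent(low, high))
print(gain_to(low, alone))     # larger: low frequency stimulus alone
print(gain_to(low, combined))  # smaller with the high frequency masker present
```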
Figure 5. A simple model accurately predicts nonlinear central VO neuron responses to sums of low and high frequency stimuli.
(A) Model (solid) and data (dashed) relationships between afferent firing rate and central VO neuron firing rate when the low frequency stimulus was presented alone (blue) and concurrently with the
high frequency stimulus (black). Note that the model accurately reproduces the decrease in slope seen experimentally as evidenced by the large overlap between the model and data curves (R^2 = 0.92).
(B) Model (solid) and data (dashed) relationships between afferent firing rate and VO neuronal firing rate when the high frequency stimulus was presented alone (red) and concurrently with the low
frequency stimulus (black). Note that the model also accurately reproduces the lack of change seen experimentally as the model curves largely overlap with the experimental ones (R^2 = 0.99). (C) %
gain attenuation plotted as a function of signal and masker frequency. The stimulus for which the response is computed is referred to as the signal, while the other stimulus is referred to as the
masker. Maskers with higher frequency content lead to greater gain attenuation. (D) % gain attenuation as a function of masker amplitude and frequency. Maskers of greater amplitude and frequency lead
to greater gain attenuation.
Importantly, using this model, we were further able to predict the relative gain attenuation in response to sums of stimuli with given intensities and frequencies within the behaviourally relevant
range. It then becomes useful to introduce terminology that distinguishes the two stimuli by means other than their frequency content, which is how we have referred to them until now. Thus, we will henceforth refer to
one stimulus as the “signal” and to the other as the “masker.” Note that, while the terms “signal” versus “masker” are arbitrary, this division allows us to focus on the coding of one input (i.e.,
the input designated as the signal). Our model shows stronger attenuation of the response gain to a low frequency signal by maskers with higher frequency content (Figure 5C). This is because
vestibular afferents display gains that increase as a function of frequency. Moreover, our model shows stronger attenuation of the response gain to a given signal by maskers with higher intensity (
Figure 5D). This is because maskers of greater intensities are more effective at eliciting nonlinear responses from central vestibular neurons. Thus, although it is not experimentally feasible to
test all combinations of maskers and signals, our model allows us to make testable predictions of how a static nonlinear input-output relationship attenuates central vestibular neuron responses to a
given signal in the presence of a masker over the physiologically relevant range of frequencies and intensities. For example, our model makes the prediction that a masker with a given frequency
content is equally effective at attenuating the sensitivity to signals with either low or high frequency content (Figure 5C).
A Linear-Nonlinear Cascade Model Verifies That Central Vestibular Neurons Display a Static Boosting Nonlinearity
So far, our data and modeling results show that a static boosting nonlinearity can explain why central neurons display reduced gain to low frequency motion when it is applied concurrently with high
frequency motion. If this is true, then central vestibular neurons should respond nonlinearly to any stimulus that contains high frequencies. Moreover, the form of nonlinearity should be stimulus
independent. To test this prediction experimentally, we recorded from afferents and central vestibular neurons during broadband noise stimulation and used a more general approach to characterize
their responses. Specifically, we used a linear-nonlinear (LN) cascade model [22] that is illustrated in Figure 6A (see Materials and Methods). This model assumes that a neuron's firing rate at any
instant is a function f of the convolution between the stimulus and an optimal linear filter (i.e., the linear prediction) [22]. The form of the function f can then be estimated by plotting the
actual firing rate as a function of the linear prediction (Figure 6A).
Figure 6. A linear-nonlinear (LN) cascade model reveals that central vestibular neurons respond nonlinearly to broadband noise stimulation.
(A) Schematic showing the LN model's assumptions. The stimulus (left) is convolved with a filter H(t) that is given by the inverse Fourier transform of the transfer function in order to generate the
linear predicted firing rate (middle). This linear prediction is then passed through a static function f (which can be linear or nonlinear) to give rise to the predicted output firing rate (right).
(B) Population-averaged function f for afferents. Also shown is the best-fit line (R^2 = 0.998±0.001, n = 15) (red) whose slope did not significantly differ from unity (p = 0.99, n = 15, paired t
test). Inset: population-averaged filter H(t) for afferents. (C) Population-averaged function f for central VO neurons. Also shown are the best-fit straight lines for the intervals (0–80 spk/s) and
(80–160 spk/s) (red) whose slopes were significantly different from one another (p = 0.0014, n = 13, paired t test). Inset: population-averaged filter H(t) for central VO neurons.
We first applied this model to our afferent data and found that their output firing rates were well predicted by the optimal linear filter alone as all data points were located close to the identity
line (R^2 = 0.998±0.001, n = 15) (Figure 6B). This was seen for both regular (Figure S7A,B) and irregular (Figure S7C,D) afferents. Notably, the slope of best straight line fit to the curve (Figure
6B, red line) was not significantly different from unity (p = 0.966, n = 15, paired t test).
Qualitatively different results were obtained for central vestibular neurons. Indeed, we found that their output firing rates were not well predicted by the optimal linear filter alone (Figure 6C) as
evidenced by significant deviations from the identity line (Figure S7E,F). Notably, the slope of the best straight line fit to the curve over the range (0–80 spk/s) was significantly lower than the
slope of the best straight line fit to the curve over the range (80–160 spk/s) (p = 0.0014, n = 13, paired t test) (Figure 6C, compare red lines). Additionally, the curve relating the actual firing
rate to the linear prediction in response to broadband noise stimuli closely resembled the nonlinear input-output relationship obtained in response to high frequency narrowband noise stimuli (compare
Figures 6C and 3E), which suggests that the frequency filtering properties of central vestibular neurons are mostly inherited from afferents. The actual responses were well predicted by the full LN
model (R^2 = 0.94±0.07, n = 13). We also note that the firing rate values extrapolated from the best straight line fit to the curve over the range (80–160 spk/s) are negative over the range (0–20 spk/s),
while the actual firing rate values are of course positive. We shall return to this point in the discussion.
Finally, we compared the curves relating the actual firing rate to the linear prediction for afferents and central vestibular neurons for different stimuli (i.e., low frequency, high frequency,
low+high frequency, and broadband noise stimuli). The afferent curves overlapped and were all located close to the identity line (Figure S8A), confirming that the responses were well fit by linear
models. The curves for central vestibular neurons also overlapped, but exhibited significant deviations from linearity only for stimuli that contained high frequencies (Figure S8B). As such, our
results using LN models provide additional strong evidence that central vestibular neurons indeed display a static boosting nonlinearity that is preferentially elicited by the greater afferent firing
rate modulations caused by high frequency motion and that their frequency filtering properties are largely inherited from those of afferents.
How Does a Static Boosting Nonlinearity Give Rise to Suppressed Response to Low Frequency Stimuli in the Presence of High Frequency Stimuli?
Our results above have shown that a static boosting nonlinearity can indeed account for the nonlinear responses of central vestibular neurons. Here, we provide an intuitive explanation of how a
static boosting nonlinearity leads to the experimentally observed response attenuation to low frequency stimuli when presented concurrently with high frequency stimuli. First, consider a piecewise
linear input-output relationship between afferent firing rate and central neuron firing rate such as that illustrated in Figure 7A. If the afferent input is normally distributed with low intensity
such that it is constrained to the right side of the vertex (i.e., the point at which the slope suddenly changes), then the corresponding output firing rate will be linearly related to the afferent
input and thus will also be normally distributed (Figure 7A, distribution and mean plotted in light purple). This is the situation when low frequency stimuli are applied in isolation. In contrast, if
a normally distributed afferent input has a greater intensity and thus spans a greater range of values extending past the vertex (e.g., when high frequency stimuli are applied), then the output
firing rate will be a nonlinear function of the input and thus will not be normally distributed any longer. This is because the output firing rate distribution has become skewed, thus shifting its
mean to higher values than what would be predicted if the input-output relationship were linear (Figure 7A, distribution and mean plotted in dark purple). Notably, the skew in the input-output
distribution will increase as a function of the input distribution intensity (compare the three distributions in Figure 7B), which in turn will increase the bias in the mean with respect to what is
expected if the input-output relationship were linear (Figure 7B, inset). We note that, under experimental conditions, the input intensity will increase when the head velocity stimulus increases in either
intensity or frequency content.
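To make the skew argument concrete, here is a toy numerical illustration (our own sketch; the vertex location and slopes are arbitrary choices, not fitted values): Gaussian inputs of increasing standard deviation are passed through a hypothetical piecewise linear boosting function, and the output mean is compared to the linear prediction.

```python
import numpy as np

rng = np.random.default_rng(0)

def boost(x, vertex=-10.0, slope_left=0.25):
    """Hypothetical piecewise linear boosting function: unit slope to the
    right of the vertex, shallower slope to the left, so that the curve
    lies above its rightward linear extension for inputs below the vertex."""
    return np.where(x >= vertex, x, vertex + slope_left * (x - vertex))

# The linear prediction is the image of the mean input (here 0) under the
# right-hand branch, i.e., boost(0) = 0.
for sd in (2.0, 5.0, 10.0):
    x = rng.normal(0.0, sd, size=200_000)
    bias = boost(x).mean() - boost(np.array([0.0]))[0]
    print(f"input SD = {sd:4.1f} -> output mean bias = {bias:+.3f}")
```

For a small input standard deviation the bias is essentially zero (the input stays right of the vertex), while larger standard deviations produce a progressively larger positive bias, mirroring the behavior described for Figure 7B.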
Figure 7. Schematic showing how a nonlinear static relationship between input and output can lead to attenuated sensitivity to sums of low and high frequency stimuli.
(A) Input-output relationship showing a vertex (i.e., a sudden change in slope) (black curve). If we assume that the input is normally distributed with low intensity (i.e., standard deviation) such
that all the input values are to the right of the vertex (light green distribution on x-axis), then the corresponding output distribution will also be normally distributed (light purple distribution
on y-axis). The mean output (light purple circle on y-axis) corresponds to the image of the mean input (dashed purple circle on y-axis; note that the light purple and dashed purple circles were
offset for clarity) as both input and output are linearly related. In contrast, for a higher intensity input that extends significantly past the vertex (dark green distribution on x-axis), the
corresponding output distribution (dark purple on y-axis) is skewed with respect to the linear prediction (dashed purple on y-axis). The mean output (dark purple circle on y-axis) is thus greater
than the linear prediction (dashed purple circle on y-axis). (Note that here and below, we represented the distributions to have the same maximum value in order to emphasize the fact that we are
changing the standard deviation.) (B) Increasing the input distribution intensity for a given mean (compare red, yellow, and blue distributions) causes a greater skew in the corresponding output
distribution (unpublished data) and thus an increased bias in their means (red, yellow, and blue dots on the y-axis and inset) as compared to the linear prediction (dashed yellow and blue dots on the
y-axis). (C) Shifting the mean of the high intensity input distribution to the left (compare points 1, 2, and 3 on the x-axis and the inset) makes it extend to the left of the vertex more and more
(compare the green curves on the x-axis), causing greater skewness in the corresponding output distributions (purple curves on the y-axis), which creates a greater bias in the mean (dark purple
points on y-axis) with respect to the linear prediction (light purple points on y-axis). As a result, the mean output in response to a given value of the low intensity input (points 1, 2, and 3 on
the x-axis) when the high intensity signal is present (dark purple line) has a lower slope (i.e., gain) than when the high intensity signal is absent (light purple line). (D) Shifting the mean of the
high intensity input distribution to the left (compare points 1, 2, and 3 on the x-axis and the inset) makes the corresponding distributions of the low intensity input extend to the left of the
vertex more and more (green curves on the x-axis), causing greater skewness in the output distribution (purple curves on the y-axis), which creates a greater bias in the mean (dark purple points on y
-axis) with respect to the linear prediction (light purple points on y-axis). Note, however, that the bias in the mean will be lower than in (C) since the input distributions now have a lower
intensity as explained in (B). Thus, the input-output relationship when the low intensity signal is present (dark purple line) will have a lower slope (i.e., gain) than when the low intensity signal
is absent (light purple line) but the effect will be weaker than in (C).
Why then does a skewed output distribution result in higher sensitivity to the low frequency stimulus when applied in isolation than when applied concurrently with the high frequency stimulus? To
answer this question, note that the output firing rate in response to a given value of the afferent input firing rate caused by the low frequency stimulus must be averaged over the normal
distribution of values of the high frequency stimulus. This is because the two stimuli are uncorrelated. For a high value of the low frequency stimulus (point 1, Figure 7C), the distribution of the
high frequency stimulus spans the linear range of the piecewise linear input-output relationship. As such, the average output firing rate in response to this value of the low frequency stimulus when
presented concurrently with the high frequency stimulus is equal to that obtained when the low frequency stimulus is presented in isolation. However, this is not the case for lower values of the low
frequency stimulus (points 2 and 3, Figure 7C). Indeed, in these cases, the distribution of the high frequency input extends past the vertex. As a consequence, the distribution of output firing rates
is skewed as explained above. The average central vestibular neuron output in response to low values of the low frequency stimulus is thus greater than what would be expected if the input-output
relationship were linear. Moreover, the skewness becomes greater for lower values of the low frequency stimulus (compare the purple output distributions corresponding to points 2 and 3, Figure 7C),
resulting in a greater bias in the output firing rate. This bias, in turn, reduces the slope of the input-output relationship between output and input firing rates when the low frequency stimulus is
presented concurrently with the high frequency stimulus, as compared to that obtained when the low frequency stimulus is presented in isolation.
Finally, the above argument leads to the crucial question of why central vestibular neurons display similar sensitivities to high frequency stimuli when applied in isolation or concurrently with low
frequency stimuli. As illustrated in Figure 7D, low frequency stimuli will tend to give rise to narrower distributions of afferent input firing rates and thus smaller biases than high frequency
stimuli because of the high-pass filtering characteristics of afferents (compare distributions in Figure 7D and 7C, respectively), thereby leading to smaller attenuations in sensitivity.
Summary of Results
What is the neural code used by the brain to represent self-motion (i.e., vestibular) information? We showed that neurons at the first central stage of vestibular processing respond nonlinearly to
sums of low and high frequency stimuli. This is because, when stimuli contained low and high frequency motion components, responses to the low frequency component were strongly attenuated. Given that
such responses were not observed in afferents, we hypothesized that this occurs because central vestibular neurons nonlinearly integrate their afferent inputs. Computing input-output relationships
revealed that afferent firing rates were related linearly to head velocity in all stimulation paradigms. In contrast, the relationship between head velocity and central neuron firing rate was
characterized by a significant boosting nonlinearity for high frequency stimulation. Prior studies have shown that higher frequency stimuli elicit greater changes in afferent firing rate than do low
frequency stimuli (reviewed in [8]). We hypothesized that this frequency-dependent afferent response plays a vital role in establishing the conditions for which central vestibular neurons will
preferentially display nonlinear responses. We confirmed this hypothesis by plotting the central vestibular neuron firing rate output as a function of the afferent firing rate input, and then
formulated a model to explain our findings. We then demonstrated the generality of this model by predicting neuronal responses to sums of arbitrary stimuli and conclude that high-pass filtering
characteristics displayed by afferents combined with the nonlinear input-output relationship of central vestibular neurons underlie their attenuated responses to low frequency motion when presented
concurrently with high frequency motion. To test that this boosting nonlinearity was indeed static and preferentially elicited by high frequency stimulation, we used LN cascade models to predict
responses to broadband noise stimulation. We found that central vestibular neuron responses were well fitted by these models and that the form of the nonlinearity closely matched that obtained for
high frequency narrowband noise stimulation with our previous analysis, suggesting that the frequency filtering properties of central vestibular neurons are mostly inherited from those of afferents.
Finally, we provided an intuitive explanation as to why a static boosting nonlinearity can lead to the attenuation of the response to low frequency motion in the presence of high frequency motion.
Specifically, the nonlinear response of central neurons to high frequency motion creates a skew in the output firing rate distribution, which increases its mean with respect to what would be expected
if the input-output relationship were linear. This bias in turn decreases the input-output relationship slope when low frequency motion is presented concurrently with high frequency motion.
Origins of the Nonlinear Processing in Early Vestibular Pathways
While our findings confirm that vestibular afferents display linear responses over a wide frequency range, they further show the novel result that central vestibular neurons respond nonlinearly to
sums of low and high frequency stimuli, as such responses violate the principle of superposition. This is surprising given that previous reports have found that the high conductance state of neurons in vivo
can have a significant influence on their processing of synaptic input through linearization in their input-output relations [23]–[26], which is thought to extend the neuronal coding range [27]. Our
results further show that the nonlinear responses of central vestibular neurons to sums of low and high frequency self-motion are caused by a static boosting nonlinearity in their input-output
relationships. This nonlinearity differs from those (directional asymmetry, soft saturation) described in prior studies examining the responses of these same neurons [28],[29]. We note that our
stimuli were designed so as not to elicit “trivial” nonlinearities such as rectification and saturation from both afferents and central vestibular neurons but that these will indeed be elicited by high
intensity stimuli [30].
What causes the observed boosting nonlinearity in central vestibular neurons? Our results show that this nonlinearity is static, and thus support the hypothesis that it is caused by intrinsic
mechanisms such as short-term synaptic plasticity [31], voltage-dependent conductances [32], or the diversity in the innervations patterns of regular versus irregular afferent inputs onto central
vestibular neurons [33] rather than network mechanisms such as nonlinear inhibitory connections within the known recurrent feedback loops of the vestibular nuclei/cerebellum [34],[35]. It is,
however, difficult to determine the exact nature of these mechanisms for several reasons. (1) Intrinsic mechanisms such as synaptic conductance, passive membrane properties, and voltage-gated
currents of neurons in the vestibular nuclei have primarily been studied in mice and guinea pigs (reviewed in [36]) and not in primates. This is important because previous studies have shown
significant differences in the activities of rodent and monkey vestibular nuclei neurons in vivo [37]. (2) Most prior characterizations of intrinsic mechanisms were performed under in vitro
conditions, whereas the integration properties of vestibular neurons differ significantly in vivo and in vitro [38]. Thus, further studies involving in vivo intracellular recordings from single
primate central vestibular neurons are needed to uncover the mechanisms that mediate the observed nonlinearity.
Consequences of Nonlinear Central Vestibular Processing for Higher Vestibular Pathways and Perception
During everyday activities, such as walking or running, the predominant frequencies of head rotation and translation are within 0.6–10 Hz in both humans [39]–[41] and monkeys [14],[42]. While
significant harmonics up to 15–20 Hz can be present, their magnitude is generally <5% of the power found in the predominant frequency range. Taken together, these findings indicate that while active
head movements cover a wide range of frequencies, most stimulation occurs at relatively low frequencies. This then leads to the question: What is the functional significance of nonlinear integration
of afferent input by central vestibular neurons leading to attenuated responses to the low frequency components of self-motion?
One possibility is that the relative enhancement of high frequency power serves to effectively “whiten” (i.e., flatten) the output power spectrum of sensory neurons during everyday activities. For
example, in vision, natural scenes are typically described by a spatial frequency amplitude spectrum that decreases as 1/frequency—or equivalently as a power spectrum that decreases as 1/frequency^2
[43],[44]. A widespread view is that early visual neurons are tuned in such a way as to compensate for this decrease. Indeed, whitening would serve to equalize the neural responses across frequencies
as originally proposed by Field [43]. Specifically, a neuron tuned to high frequencies would require an increased response gain to produce the same response as a neuron tuned to low frequencies
(reviewed in [45],[46]). This mechanism bears a striking resemblance to preferential encoding of high frequency stimuli by central vestibular neurons demonstrated in the present study. Another
possible mechanism that has been proposed to underlie whitening in the visual system is decorrelation [47], which includes neurons with bandpass tuning curves for which a portion of the curve rises
with frequency. This latter model is not a likely candidate strategy for early vestibular processing since vestibular afferents and central neurons are characterized by high-pass rather than
band-pass tuning.
Another possibility, which relates to the argument above, is that neuronal responses optimize our ability to reflexively respond to transient unexpected events. In particular, central vestibular
neurons make descending projections to the spinal cord and mediate the vestibulo-spinal reflexes that ensure stable posture [9]. We note that, to date, the vestibular stimuli experienced during
voluntary activities such as walking and running have primarily been quantified while subjects locomoted “in place” [39]. However, these studies might have underestimated the frequency content of
natural vestibular stimuli. Indeed, higher frequency stimuli are experienced during natural locomotion since heel strikes can produce vibrations with frequencies as high as 75 Hz [48]. It is likely
that these high frequency components are filtered out as the vibration passes up through the body. Thus, the enhanced neural responses to high frequency motion could be an effective coding strategy
for countering the biomechanical filtering properties of the body segments during unexpected postural perturbations. Indeed, recent studies have demonstrated such frequency-specific filtering of
vestibular-evoked postural responses in humans [49]. It is also noteworthy that central vestibular neurons are much less responsive to active than to passive motion [50],[51]. Accordingly, their
response selectivity is likely to optimize our ability to reflexively respond to unexpected transient events. For example, if standing while riding the metro, or walking/running, one is likely to
experience sudden stops or unexpected motion for which it is vital to generate compensatory postural reflexes.
Yet another possibility is that the nonlinear responses of central vestibular neurons constitute an adaptation mechanism that preserves the coding of both low and high frequency components of
self-motion by preventing rectification (i.e., a complete cessation of firing). Specifically, such adaptation would serve to enhance the coding range by allowing responses to higher stimulus
intensities through gain control. Gain control has been widely observed across systems and can be caused by multiple mechanisms [52]–[55]. Further studies that focus on how central vestibular neurons
adapt to changes in natural self-motion stimuli are needed to investigate this possibility.
Finally, the central vestibular neurons that were the focus of the present study make contributions to higher-order vestibular processing, including the computation of self-motion perception and spatial
orientation (reviewed in [56]). However, to date, prior studies of self-motion perception [57] have focused on responses to motion containing frequencies <5 Hz and thus have only probed the lower
portion of the physiologically relevant frequency range (i.e., 0–20 Hz) [14]. Accordingly, it is unlikely that the nonlinearities observed in the present study would have been significantly evoked in
these studies. Interestingly, several studies have reported that perceptual responses to low frequency vestibular input are enhanced by a network property, termed velocity storage, which functions to
lengthen the time constant of the vestibulo-ocular reflex [58]–[60]. This mechanism is mediated via reciprocal connections between the vestibular cerebellum and nuclei, and its dynamics are encoded
in the responses of single central neurons. Our results predict that central neurons would exhibit dynamics consistent with velocity storage but that the amplitude of this effect should be reduced
when low and high frequency stimuli are applied concurrently. Future experiments will be needed to investigate how the response selectivity of central vestibular neurons shapes postural responses as
well as the perception of self-motion and spatial orientation.
The Emergence of Feature Extraction: Function and General Principles Across Systems
As an alternative to the whitening hypothesis mentioned above, theoretical studies suggest that a common underlying principle of sensory processing is that the representation of information becomes
more efficient in higher brain centers because neurons in these areas respond more selectively to specific features of natural sensory stimuli. This principle, commonly referred to as “sparse
coding,” has been investigated in different sensory systems (see [4] for a review). Some of the most compelling evidence for a sparse code comes from experiments using stimuli resembling those which
would be encountered during natural vision in primary visual cortex [61] and area V4 [62]. Parallel findings in the auditory [63], somatosensory [64], and olfactory [65] systems have provided further
evidence that sensory processing is generally characterized by an increase in sparseness at higher levels. Here we focused on understanding the mechanisms underlying integration of afferent input by
central vestibular neurons. While the linear filtering properties of central vestibular neurons and afferents were similar, confirming our previous results [10], we have shown here that a static
nonlinearity causes a decreased response to low frequency stimuli in the presence of high frequency stimuli in central vestibular neurons but not afferents. We propose that this decreased response to
the low frequency components of self-motion corresponds to feature detection in that it enables central vestibular neurons to respond selectively to the high frequency components. This is consistent
with our previous results showing that individual central vestibular neurons transmit less information about the detailed time course of the stimulus than individual afferents [10]. We suggest that
this enhanced feature selectivity displayed by central vestibular neurons could constitute a signature of sparse coding and that further sparsening occurs at subsequent levels of processing.
Our findings also suggest the intriguing possibility that central vestibular neurons implement gain control through divisive normalization, similar to that previously shown to occur in the visual
[66], auditory [67], and olfactory [68] systems (see [69] for a review). In sensory systems for which neurons are tuned to different features of complex natural stimuli, divisive normalization
provides an efficient nonlinear coding strategy that can reduce dependencies between stimulus features. Specifically, when multiple features are present in a given stimulus, the activity of a neuron
tuned to a given feature is obtained by normalizing the response to that feature presented in isolation by the summed activity of neighbouring neurons tuned to the other features. As a result, an
advantage is that divisive normalization effectively implements sensory gain control such that the neural response to a given feature is adaptively attenuated when other features are present. The
attenuated response to low frequency head rotations that we observed in central vestibular neurons when these are presented concurrently with high frequency head rotations could be a signature of
divisive normalization. Further studies are, however, needed to fully test this hypothesis and to understand the functional implications of the relatively negligible attenuation that was seen for
high frequency stimulation.
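As a rough sketch of the divisive normalization computation described above (a generic textbook form with placeholder parameters, not a fitted model of vestibular neurons):

```python
import numpy as np

def divisive_normalization(drive, sigma=1.0, gamma=60.0):
    """Generic normalization: r_i = gamma * d_i / (sigma + sum_j d_j),
    where drive[i] is neuron i's response to its feature in isolation."""
    drive = np.asarray(drive, dtype=float)
    return gamma * drive / (sigma + drive.sum())

alone = divisive_normalization([10.0, 0.0])       # feature 1 presented alone
together = divisive_normalization([10.0, 10.0])   # features 1 and 2 together
print(alone[0], together[0])  # neuron 1's response is attenuated by feature 2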
Finally, our results provide evidence for a nonlinear mechanism that enables the preferential attenuation of the response to a given stimulus when multiple stimuli are presented at the same time.
Such responses to stimuli consisting of sums of low and high frequency components are also seen in other systems and may thus be a general feature of sensory processing. For example, simultaneous
masking presents some similarities with the effect described here as the presence of a high frequency sound can significantly degrade the perception of a low frequency sound [70]–[72]. Further,
non-classical receptive field stimulation can strongly attenuate the responses to low but not high frequency input [61],[73]. We hypothesize that mechanisms similar to those described here might
mediate these effects in other systems.
Materials and Methods
Three macaque monkeys (two Macaca mulatta and one Macaca fascicularis) were prepared for chronic extracellular recording using aseptic surgical techniques [10],[74],[75]. All procedures were approved
by the McGill University Animal Care Committee and were in compliance with the guidelines of the Canadian Council on Animal Care.
Data Acquisition
The experimental setup and methods of data acquisition have been previously described for both vestibular afferents [18],[19],[76] and vestibular nuclei neurons [10],[51]. We used standard techniques
to perform single unit recordings from 18 vestibular afferents [10],[76],[77] that innervate the horizontal semicircular canals and 21 vestibular-only (VO) neurons [10],[51],[74] in the medial
vestibular nuclei that were sensitive to horizontal rotations. Resting discharge regularity in afferents was quantified by the normalized coefficient of variation (CV*) [10],[78]. Vestibular
afferents with a CV*<0.15 were classified as regular, whereas those with a CV*≥0.15 were classified as irregular as done previously [18],[19],[79]. As such, five afferents were classified as regular
and the remaining 13 were classified as irregular. VO neurons were classified as either type I or type II depending on whether they are excited or inhibited by rotations towards the ipsilateral side,
respectively [80]. Nine VO neurons were type I and 12 were type II. Data from both groups were pooled as no notable difference was observed when quantifying their responses to the stimuli used here
(unpublished data).
Experimental Design
We used two classes of head velocity stimuli to characterize the responses of vestibular afferents and central neurons to horizontal rotations. The first class of stimuli consisted of noise stimuli
characterized by a Gaussian distribution of angular velocities with zero mean and standard deviation (SD) of 20°/s each lasting 80 s. We used four different noise stimuli whose frequency content
spanned the frequency range of natural vestibular stimuli (0–20 Hz) [14]: (1) low-pass filtered Gaussian white noise (8^th order Butterworth, 5 Hz cutoff frequency), henceforth referred to as the low
frequency noise stimulus; (2) band-pass filtered Gaussian white noise (4^th order Butterworth, 15–20 Hz band), henceforth referred to as the high frequency noise stimulus; (3) the linear sum of the
low and high frequency noise stimuli; and (4) low-pass filtered Gaussian white noise (8^th order Butterworth, 20 Hz cutoff frequency), henceforth referred to as the broadband noise stimulus. Our
noise stimulation protocol consisted of the low frequency stimulus by itself, then the high frequency stimulus by itself, then the linear sum of the two, and finally the broadband noise stimulus.
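For illustration, the following sketch shows one way such stimuli could be generated with SciPy; the sampling rate and the use of zero-phase filtering are our assumptions, as the text does not specify them:

```python
import numpy as np
from scipy import signal

fs, dur, sd = 1000.0, 80.0, 20.0            # Hz, s, deg/s (80-s stimuli, 20 deg/s SD)
t = np.arange(0.0, dur, 1.0 / fs)
rng = np.random.default_rng(1)

def filtered_noise(btype, Wn, order):
    sos = signal.butter(order, Wn, btype=btype, fs=fs, output="sos")
    x = signal.sosfiltfilt(sos, rng.normal(size=t.size))  # zero-phase (assumption)
    return sd * x / x.std()                               # rescale to the target SD

low = filtered_noise("lowpass", 5.0, 8)              # 0-5 Hz stimulus
high = filtered_noise("bandpass", (15.0, 20.0), 4)   # 15-20 Hz stimulus
both = low + high                                    # linear sum of the two
broad = filtered_noise("lowpass", 20.0, 8)           # 0-20 Hz broadband stimulus
```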
The second class of stimuli consisted of single frequency sinusoidal rotations each lasting 80 s of amplitude 15°/s and frequencies 3 Hz and 17 Hz, henceforth referred to as the low and high
frequency sinusoidal stimuli, respectively. These frequencies were chosen because they span the frequency range of natural vestibular stimuli (0–20 Hz) [14]. Our stimulation protocol consisted of
delivering the low frequency sinusoidal stimulus, then the high frequency sinusoidal stimulus, and then the linear sum of the two.
Traditional Linear System Analysis
For the analysis of responses to sinusoidal stimuli s(t), the spike train from each neuron was converted into a binary sequence r(t) with a bin width of 1 ms. The value of any given bin was set to 1
if it contained an action potential and 0 otherwise, as done previously [18]. This binary sequence was then convolved with a Kaiser window with cutoff frequency 0.1 Hz above the stimulus frequency to
obtain an estimate of the time dependent firing rate f[measured](t) [81],[82]. The response gain was then computed by fitting a first order model f[fit](t) = b+g * s(t−t[d]) to the data. Here b is
the bias, g is the gain, and t[d] is the latency. We used a least squares regression to find the best fit parameter values that provide the maximum variance accounted for (VAF) given by
1−[var[f[fit](t)−f[measured](t)]/var(f[measured](t))]. Here var is the variance and f[measured](t) represents the actual firing rate [50],[74].
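A minimal implementation of this fitting procedure might look as follows (our own sketch; the grid of candidate latencies is an assumption):

```python
import numpy as np

def fit_first_order(fr, s, t, lags=np.arange(0.0, 0.05, 0.001)):
    """Least-squares fit of f_fit(t) = b + g * s(t - t_d), scanning a grid
    of candidate latencies t_d and keeping the fit with the highest VAF."""
    best = None
    for td in lags:
        s_shift = np.interp(t - td, t, s)              # delayed stimulus
        A = np.column_stack([np.ones_like(s_shift), s_shift])
        (b, g), *_ = np.linalg.lstsq(A, fr, rcond=None)
        fit = b + g * s_shift
        vaf = 1.0 - np.var(fit - fr) / np.var(fr)      # variance accounted for
        if best is None or vaf > best[0]:
            best = (vaf, b, g, td)
    return best  # (VAF, bias b, gain g, latency t_d)
```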
For noise stimuli, the stimulus waveform s(t) was also sampled with timesteps of 1 ms. The response sensitivity was computed from the gain G(f) = |P[sr](f)/P[ss](f)|, where P[sr](f) is the
cross-spectrum between the stimulus s(t) and binary sequence r(t), and P[ss](f) is the power spectrum of the stimulus s(t). All spectral quantities (i.e., power-spectra and cross-spectra) were
estimated using multitaper techniques with 8 Slepian functions [83]. Estimates of gain for low (0–5 Hz) and high (15–20 Hz) frequencies were obtained by averaging the gain curves G(f) between 0 and 5
Hz and between 15 and 20 Hz, respectively.
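A bare-bones multitaper estimate of the gain could be implemented as below (a sketch under stated assumptions: the time-bandwidth product NW and the absence of segmenting are our choices; the text specifies only the use of 8 Slepian tapers):

```python
import numpy as np
from scipy.signal.windows import dpss

def multitaper_gain(s, r, fs, n_tapers=8, NW=4.5):
    """Gain G(f) = |P_sr(f)/P_ss(f)| from multitaper spectral estimates.
    Taper normalization cancels in the ratio, so it is omitted."""
    tapers = dpss(len(s), NW, Kmax=n_tapers)   # 8 Slepian functions
    S = np.fft.rfft(tapers * s, axis=1)        # tapered stimulus spectra
    R = np.fft.rfft(tapers * r, axis=1)        # tapered response spectra
    P_ss = (np.abs(S) ** 2).mean(axis=0)
    P_sr = (np.conj(S) * R).mean(axis=0)
    f = np.fft.rfftfreq(len(s), 1.0 / fs)
    return f, np.abs(P_sr / P_ss)

# Band-averaged gains, as in the text:
# f, G = multitaper_gain(s, r, fs=1000.0)
# G_low = G[(f >= 0) & (f <= 5)].mean()
# G_high = G[(f >= 15) & (f <= 20)].mean()
```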
Coherence Measures
We also used the coherence function to measure the neural response to the noise stimuli used in this study. The coherence is defined by:

C(f) = |P[sr](f)|^2/(P[ss](f) P[rr](f))     (1)

Here P[rr](f) is the power spectrum of the response r(t). Based on the number of trials and tapers used in this study, the confidence limit for the magnitude of the coherence being significantly
different from zero at the p = 0.05 level is 0.097 [83],[84] and all neurons in our dataset displayed maximum coherence values that were greater than 0.097 for at least one of the stimulation
conditions.
It is important to note that, unlike the sensitivity G(f), the coherence is based on the signal-to-noise ratio SNR(f) = C(f)/[1−C(f)] and thus takes neural variability into account [16]. As such,
measuring the response using gain and coherence measures can sometimes give qualitatively different results [17],[18],[85]. The coherence is also related to a lower bound on the mutual information
[86] that measures the amount of information that can be decoded linearly [87].
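The coherence and the associated signal-to-noise ratio follow from the same tapered spectra; a hypothetical implementation:

```python
import numpy as np
from scipy.signal.windows import dpss

def multitaper_coherence(s, r, fs, n_tapers=8, NW=4.5):
    """Coherence C(f) = |P_sr|^2 / (P_ss * P_rr), plus SNR(f) = C/(1 - C)."""
    tapers = dpss(len(s), NW, Kmax=n_tapers)
    S = np.fft.rfft(tapers * s, axis=1)
    R = np.fft.rfft(tapers * r, axis=1)
    P_ss = (np.abs(S) ** 2).mean(axis=0)
    P_rr = (np.abs(R) ** 2).mean(axis=0)
    P_sr = (np.conj(S) * R).mean(axis=0)
    C = np.abs(P_sr) ** 2 / (P_ss * P_rr)
    f = np.fft.rfftfreq(len(s), 1.0 / fs)
    return f, C, C / (1.0 - C)
```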
We tested that the neural responses to both sinusoidal and noise stimuli were stationary in the following way. We divided each recorded neural response r(t) into 4 epochs of length 20 s and computed
the mean firing rate, gain, and coherence in each epoch. We found that these did not differ significantly from one another for all neurons in our dataset and all stimuli (p>0.05, one-way ANOVAs).
All gain and coherence measures were normalized in the following way. The curves in response to the high frequency stimuli (noise or sinusoidal) were normalized by their values at 17 Hz. The curves
in response to low frequency stimuli were also normalized by these values. The curves obtained in response to the sum of the low and high frequency stimuli were normalized by their values at 17 Hz.
We quantified response gain attenuation by:

% gain attenuation = 100 × (1 − G[stim,together]/G[stim,alone])     (2)
where G[stim,alone] is the gain in response to stimulus “stim” when it is presented by itself and G[stim,together] is the gain in response to stimulus “stim” when it is presented concurrently with
another stimulus. We also quantified coherence response attenuation by:

% coherence attenuation = 100 × (1 − C[stim,together]/C[stim,alone])     (3)
where C[stim,alone] is the coherence in response to stimulus “stim” averaged over the stimulus's frequency range when it is presented by itself and C[stim,together] is the coherence in response to
stimulus “stim” averaged over the stimulus's frequency range when it is presented concurrently with another stimulus.
Input-Output Relationships
We quantified the output as the time varying firing rate, which was obtained by filtering the response r(t) using a Kaiser filter with cutoff frequency 5 Hz above the highest frequency contained in
the stimulus input [81]. We then computed the cross-correlation function between the filtered response and the horizontal head velocity stimulus s(t) and noted the lag at which it was maximal. This
lag was then used to align the response r(t) with the stimulus s(t). We then plotted r(t) as a function of s(t) and took the average of values in bins of 1 deg/s. To quantify whether these curves
were well-fit by a straight line, we performed a linear least-squares fit over the range 10 to 20 deg/s and computed R^2 over the range −30 to −20 deg/s.
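A sketch of the alignment-and-binning step (assuming the firing rate has already been estimated with the Kaiser filter described above):

```python
import numpy as np

def input_output_curve(rate, s, bin_width=1.0):
    """Align the firing rate to the stimulus at the lag of maximal
    cross-correlation, then average the rate in 1 deg/s stimulus bins."""
    xc = np.correlate(rate - rate.mean(), s - s.mean(), mode="full")
    lag = xc.argmax() - (len(s) - 1)           # lag of peak correlation
    aligned = np.roll(rate, -lag)              # shift response onto stimulus
    edges = np.arange(s.min(), s.max() + bin_width, bin_width)
    idx = np.digitize(s, edges)
    centers = 0.5 * (edges[:-1] + edges[1:])
    means = np.array([aligned[idx == i].mean() if np.any(idx == i) else np.nan
                      for i in range(1, len(edges))])
    return centers, means
```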
Rescaled Input-Output Relationships
We rescaled input-output relationships in order to plot the output firing rate of VO neurons as a function of the input afferent firing rate. Because central vestibular neurons receive input from a
heterogeneous population of afferents, we estimated the afferent input firing rate in the following manner. First, we took the average gain curves of regular and irregular afferents as a function of
frequency obtained by Sadeghi et al. [19] since this corresponds, to the best of our knowledge, to the largest dataset on primate vestibular afferents. We then fit these curves using the following
expression [20],[88],[89]:

G(f) = A |u T[c] (1 + u T[1])| / |(1 + u T[c]) (1 + u T[2])|     (4)
where u = 2 π i f. Here T[c] and T[2] are the long and short time constants of the torsion–pendulum model of canal biomechanics and T[1] is proportional to the ratio of acceleration to velocity
sensitivity of the afferent response. Similar models have more recently been shown to provide an accurate description of canal afferent responses in monkeys [79],[90] up to 20 Hz [91], in chinchillas
[88],[92] and mice [93]. We used A = 0.428 (spk/s)/(deg/s), T[1] = 0.015 s, T[2] = 0.003 s, and T[c] = 5.7 s to fit the average gain curve for regular vestibular afferents [20]. A was adjusted to
match the data of Sadeghi et al. [19] under control conditions. To fit the average gain curve of irregular afferents, we used A = 0.765 (spk/s)/(deg/s), T[1] = 0.0085 s, T[2] = 0.003 s, and T[c] =
5.7 s. A and T[1] were adjusted to match the average gain curve for C and D-irregulars from Sadeghi et al.'s [19] data under control conditions since C and D-irregulars were encountered with roughly
equal probability [19].
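Given the reconstruction of equation (4) above (whose exact functional form should be treated as our assumption), the average afferent gain curves can be evaluated as follows:

```python
import numpy as np

def afferent_gain(f, A, T1, T2=0.003, Tc=5.7):
    """Gain from the reconstructed equation (4), with u = 2*pi*i*f."""
    u = 2j * np.pi * f
    return np.abs(A * u * Tc * (1 + u * T1) / ((1 + u * Tc) * (1 + u * T2)))

f = np.linspace(0.1, 20.0, 400)
G_reg = afferent_gain(f, A=0.428, T1=0.015)    # regular afferents
G_irreg = afferent_gain(f, A=0.765, T1=0.0085) # irregular afferents
G_aff = 0.5 * (G_reg + G_irreg)                # population average (see eq. 5)
```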
The input afferent firing rate is then given by:

r[aff](t) = bias + G[aff] s(t)     (5)
where G[reg] and G[irreg] are the gains of regular and irregular afferents averaged over the stimulus's frequency content, respectively, and G[aff] is the average between the two values. We took the
average since about 50% of afferents encountered were regular and the other 50% were irregular in Sadeghi et al.'s [19] dataset. We used a bias of 104.30 spk/s, which corresponds to the average
baseline firing rate of the afferent population observed experimentally [19].
Our model assumes that VO neurons display a static input-output relationship with respect to their afferent input. We estimated this relationship by fitting a 6^th order polynomial to the
input-output relationship obtained experimentally with the high frequency noise stimulus. As a result, the output firing rate of the VO neuron is given by:

r[VO] = F(r[aff])     (7)
where r[VO] is the VO neuron's firing rate, r[aff] is the afferent firing rate, and F is the estimate of the static input-output relationship.
We now consider the input s to consist of two stimuli. We will refer to one stimulus as the “signal” and to the other as the “masker.” Note that, while the terms “signal” versus “masker” are
arbitrary, this division allows us to focus on the coding of one input (i.e., the input designated as the signal).
The VO neuron's response to the signal and masker stimuli is then given by:

r[VO] = F(bias + G[aff,signal] signal + G[aff,masker] masker)     (8)
where G[aff,signal] and G[aff,masker] are the afferent gains to the signal and masker, respectively. These are obtained by averaging the afferent gains over the signal and masker's frequency
contents, respectively. In order to obtain the VO neuron's firing rate as a function of the signal alone, it is necessary to average over the distribution of values that can be taken by the masker.
As signal and masker are not correlated, this distribution is equal to the probability distribution of the masker, which is taken to be normal with mean 0 and standard deviation σ[masker], thus:

P(masker) = exp(−masker^2/(2 σ[masker]^2))/(σ[masker] √(2π))     (10)
The VO neuron's firing rate is then given by:

r[VO](signal) = ∫ F(bias + G[aff,signal] signal + G[aff,masker] x) P(x) dx     (11)
where x is the masker. The integral was evaluated numerically using a Riemann sum approximation with binwidth 1 deg/s. This model can then be used to predict the VO neuron's input-output relationship
when arbitrary signal and masker stimuli are used. In order to get some intuition, we expanded F into a Taylor series in equation (12) to obtain:

r[VO](signal) = Σ[n] (1/n!) F^(n)(bias + G[aff,signal] signal) (G[aff,masker])^n ⟨masker^n⟩     (13)

where F′ and F″ are the first and second derivatives of F, respectively, and F^(n) denotes the n^th derivative. The first term simply corresponds to the firing rate when no masker is present (i.e.,
r[VO,0](signal)) and the term ⟨masker^n⟩ is equal to the n^th order moment of the Gaussian distribution P(masker). In particular, all moments for n odd are equal to zero (this comes from the fact
that the distribution is symmetric with respect to its mean) while the second moment is simply equal to the variance σ[masker]^2. Neglecting all higher order moments gives:

r[VO](signal) ≈ r[VO,0](signal) + (1/2) F″(bias + G[aff,signal] signal) (G[aff,masker])^2 σ[masker]^2     (14)
where r[VO](signal) is the VO neuron's firing rate for a given value of the signal in the presence of the masker and r[VO,0](signal) is the firing rate for the same value of the signal when the
masker is absent (i.e., σ[masker] = 0). Inspection of equation (14) shows that the masker has no effect on the output firing rate r[VO](signal) if F is a linear function, as we then have F″(x) = 0 for any x.
Further, the sign of the correction depends solely on the sign of the second derivative since all other terms are positive. As such, the masker will increase the average firing rate in response to
the signal in regions where F is convex and decrease it in regions where F is concave. The amount by which the firing rate increases/decreases grows in magnitude with the masker variance
σ[masker]^2 but also depends on the gain of the afferents to the masker G[aff,masker]. Since the afferents display gains that increase as a function of frequency, maskers with higher frequency
content will lead to a greater correction in firing rate than maskers with lower frequency content for a given variance σ[masker]^2. Equation (14) then allows us to evaluate the percentage attenuation in gain by taking its derivative
and evaluating it at signal = 0 and substituting the result into equation (2):

% gain attenuation = −100 F‴(bias) (G[aff,masker])^2 σ[masker]^2 / (2 F′(bias))     (15)
where F‴ is the third derivative of F.
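Putting the pieces together, the prediction step can be sketched as below; the polynomial F, gains, and masker parameters are placeholders to be replaced by the fitted values described above:

```python
import numpy as np

def predicted_io_curve(F, signals, G_sig, G_mask, sigma_mask, bias=104.30):
    """Average the static nonlinearity F over the masker distribution with
    a 1 deg/s Riemann sum, giving the input-output curve in the presence
    of the masker (reconstructed equation (11))."""
    x = np.arange(-5 * sigma_mask, 5 * sigma_mask + 1.0, 1.0)  # masker grid
    P = np.exp(-x ** 2 / (2 * sigma_mask ** 2))
    P /= P.sum()                                # discrete probabilities
    out = [np.sum(F(bias + G_sig * s + G_mask * x) * P) for s in signals]
    return np.array(out)

# F would be obtained as, e.g., a 6th order polynomial fit:
# F = np.poly1d(np.polyfit(r_aff_data, r_vo_data, 6))
```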
Linear Nonlinear Cascade Model
We used a linear-nonlinear (LN) cascade model [22] to characterize the response properties of both afferents and VO neurons to noise stimuli. This model predicts that a neuron's firing rate r
[predicted] at any instant is a function f of the linear firing rate r[linear] plus the baseline firing rate r[bias]. The linear firing rate is obtained by convolving the stimulus with the optimal
linear filter H(t). Thus, we have:

r[predicted](t) = f((H * s)(t) + r[bias])     (16)

where “*” denotes the convolution operation and H(t) is the inverse Fourier transform of the transfer function P[sr](f)/P[ss](f). We estimated f by plotting the actual firing rate r(t), which was computed as
described above, as a function of the linear prediction r[linear] [22]. To quantify whether these curves were well-fit by a straight line, we performed a linear least-squares fit over the ranges
80–120 and 100–140 spk/s for central VO neurons and afferents, respectively. We then computed the R^2 over the ranges −17–120 and 20–140 spk/s for central VO neurons and afferents, respectively. In
practice, H(t), r[bias], and f were all computed using the first half of the recorded activity for a given neuron. We then compared the predicted firing rate r[predicted](t) computed using equation
(16) against the actual firing rate r(t) for the second half of the recorded activity and quantified the goodness-of-fit of the LN model by computing R^2.
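A schematic of this train/test pipeline (our sketch: the filter is estimated with the standard cross-spectral formula, f by binning, and the recording is assumed to split into two equal halves):

```python
import numpy as np

def fit_and_score_ln(s, r, n_bins=50):
    """Estimate H and f on the first half of the recording, then score the
    LN prediction on the second half (assumes an even-length recording)."""
    half = len(s) // 2
    s1, r1 = s[:half], r[:half]
    S1 = np.fft.rfft(s1)
    H_f = np.fft.rfft(r1 - r1.mean()) * np.conj(S1) / (np.abs(S1) ** 2 + 1e-12)

    def linear_pred(x):                        # circular convolution with H
        return np.fft.irfft(np.fft.rfft(x) * H_f, n=half) + r1.mean()

    lp1 = linear_pred(s1)
    edges = np.linspace(lp1.min(), lp1.max(), n_bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    idx = np.digitize(lp1, edges)
    f_lut = np.array([r1[idx == i].mean() if np.any(idx == i) else np.nan
                      for i in range(1, n_bins + 1)])
    ok = ~np.isnan(f_lut)

    lp2 = linear_pred(s[half:2 * half])        # validate on the second half
    pred = np.interp(lp2, centers[ok], f_lut[ok])
    resid = r[half:2 * half] - pred
    return 1.0 - resid.var() / r[half:2 * half].var()   # R^2
```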
Values are reported as mean ± STD in the text. Error bars or gray bands represent 1 SEM. Throughout, “**” and “*” indicate statistical significance using a paired t test at the p = 0.01 and p = 0.05
levels, respectively. “NS” indicates that the p value was above 0.05.
Supporting Information
Figure S1. Central VO neurons but not afferents respond nonlinearly to sums of low and high frequency noise stimuli as quantified by coherence measures. (A, B) Coherence curves as a function of frequency for an
example VO neuron (A) and averaged over the population (B). (C) Population-averaged average normalized coherence values for central VO neurons. (D, E) Coherence curves as a function of frequency for
an example regular afferent (D) and averaged over the population (E). (F) Population-averaged average normalized coherence values for regular afferents. (G, H) Coherence curves as a function of
frequency for an example irregular afferent (G) and averaged over the population (H). (I) Population-averaged average normalized coherence values for irregular afferents.
Figure S2. Central vestibular neurons respond nonlinearly to sums of sinusoidal stimuli. (A–C) Example central vestibular neuron responses to 3 Hz (A), 17 Hz (B), and 3+17 Hz (C) sinusoidal rotations. The
insets show the power spectra of the input stimuli (black) and output firing rate (red and blue). (D, E) Comparison between the actual response and that predicted from a linear system for the same
example neuron for the 3 Hz (D) and 17 Hz (E) components of 3+17 Hz stimulation. (F) Population-averaged normalized gains for central VO neurons. Note the gain for 3 Hz is strongly attenuated in the
presence of 17 Hz (p<10^−3, paired t test, n = 11). In contrast, the gain at 17 Hz was not significantly altered by simultaneously presenting the 3 Hz stimulus (p = 0.97, paired t test, n = 8). (G)
Population-averaged percentage attenuation at low (3 Hz) and high (17 Hz) frequencies for central neurons. The firing rate estimates were obtained by convolving the spike trains with a Kaiser filter (see
Materials and Methods).
Figure S3. Analysis of unfiltered spike trains confirms that central vestibular neurons respond nonlinearly to sums of sinusoidal stimuli. (A–C) Spike train power spectra for the same example central VO neuron
shown in Figure S2 to 3 Hz (A), 17 Hz (B), and 3+17 Hz (C) sinusoidal rotations. Note that the power at 3 Hz was lower for 3+17 Hz than for 3 Hz stimulation and that the power at 17 Hz for 17 Hz
stimulation was similar to that for 3+17 Hz stimulation.
Figure S4. Central VO neurons as well as afferents do not show rectification and/or saturation when stimulated by the low and high frequency head rotations used in this study. (A–C) Phase histograms for an
example VO neuron (A), regular afferent (B), and irregular afferent (C). The solid curves show the best sinusoidal fits. The dashed lines indicate the mean firing rates. Note that in no case do the
histograms display either saturation or rectification. The population-averaged percentage of bins in the phase histograms corresponding to values less than 5% of the mean was 0 in more than 98% of
cases, indicating no significant rectification. This was also true for 3 Hz and 3+17 Hz sinusoidal rotation (unpublished data) and for all neurons in the population. The population-averaged
Variance-Accounted-For (VAF) of the sinusoidal fit for all three types of neurons was not significantly different between the different sinusoidal stimuli (p>0.15, t tests). This was also true for
the noise stimuli (unpublished data).
Figure S5. Afferents display a linear relationship between output firing rate and input head velocity. (A) Population-averaged firing rate as a function of head velocity for regular afferents when the low
frequency (0–5 Hz) noise stimulus was applied in isolation (blue) and concurrently with the high frequency (15–20 Hz) noise stimulus (black). Inset: firing rate as a function of head velocity for an
example regular afferent. (B) Population-averaged firing rate as a function of head velocity for regular afferents when the high-frequency (15–20 Hz) noise stimulus was applied in isolation (red) and
concurrently with the low frequency (0–5 Hz) noise stimulus (dashed black). Inset: firing rate as a function of head velocity for the same regular afferent. (C) Population-averaged firing rate as a
function of head velocity for irregular afferents when the low frequency (0–5 Hz) noise stimulus was applied in isolation (blue) and concurrently with the high frequency (15–20 Hz) noise stimulus
(black). Inset: firing rate as a function of head velocity for an example irregular afferent. (D) Population-averaged firing rate as a function of head velocity for irregular afferents when the
high-frequency (15–20 Hz) noise stimulus was applied in isolation (red) and concurrently with the low frequency (0–5 Hz) noise stimulus (dashed black). Inset: firing rate response as a function of
head velocity for the same irregular afferent.
Figure S6. Individual central neurons display nonlinear responses. (A) Firing rate as a function of head velocity for an example central VO neuron when the low frequency (0–5 Hz) noise stimulus was applied in
isolation (blue) and concurrently with the high frequency (15–20 Hz) noise stimulus (black). Both curves were well fit by straight lines (dashed lines). (B) Firing rate as a function of head velocity
for the same example central VO neuron when the high frequency (15–20 Hz) noise stimulus was applied in isolation (red) and concurrently with the low frequency (0–5 Hz) noise stimulus (long dashed
black). Note that both curves were not well fit by straight lines (short dashed lines).
Figure S7. Characterization of central VO neurons and afferents by LN cascade models. (A) Actual firing rate as a function of the linear prediction for an example regular afferent. Inset: the filter H(t) for
this same afferent. (B) Population-averaged actual firing rate as a function of the linear prediction for regular afferents. Inset: population-averaged filter H(t) for regular afferents. (C) Actual
firing rate as a function of the linear prediction for an example irregular afferent. Inset: the filter H(t) for this same afferent. (D) Population-averaged actual firing rate as a function of the
linear prediction for irregular afferents. Inset: population-averaged filter H(t) for irregular afferents. (E) Actual firing rate as a function of the linear prediction for an example central VO
neuron. Inset: the filter H(t) for this same VO neuron. (F) Population-averaged actual firing rate as a function of the linear prediction for central VO neurons. Inset: population-averaged filter H
(t) for central VO neurons. Throughout, the identity line is shown in green.
Figure S8. LN analysis reveals that central vestibular neurons but not afferents display a static nonlinearity in response to different self-motion stimuli. (A) Population-averaged actual firing rate as a
function of the linear prediction for afferents in response to 0–20 Hz noise (green), 0–5 Hz noise (blue), 15–20 Hz noise (red), and 0–5 Hz+15–20 Hz noise (black). Note that all the curves are linear
and overlap but that the blue curve extends over a narrower range than all the others. All the curves were further well fit by straight lines (R^2 = 0.99 in all cases). (B) Population-averaged actual
firing rate as a function of the linear prediction for central VO neurons in response to 0–20 Hz noise (green), 0–5 Hz noise (blue), 15–20 Hz noise (red), and 0–5 Hz+15–20 Hz noise (black). Note that
all the curves overlap but that the blue curve extends over a narrower range than all the others. As such, the blue curve is relatively better fit by a straight line (0–5 Hz: R^2 = 0.91; 15–20 Hz: R^
2 = 0.58; 0–5 Hz+15–20 Hz: R^2 = 0.37; 0–20 Hz: R^2 = 0.62).
Author Contributions
The author(s) have made the following declarations about their contributions: Conceived and designed the experiments: KEC MJC. Performed the experiments: CM. Analyzed the data: CM ADS. Contributed
reagents/materials/analysis tools: CM ADS MJC. Wrote the paper: KEC MJC.
One sided limit
December 5th 2006, 03:31 AM
One sided limit
Find the value of the one sided limit
$\lim_{x \to 2} \frac{4-x^2}{\sqrt{2+x-x^2}}$
Where will the limit be taken from: the left ( - ) or the right ( + )? And why?
Thank you very much for all your help!
December 5th 2006, 04:20 AM
Since both the numerator and the denominator $\longrightarrow 0$, use l’Hopital’s rule.
December 5th 2006, 04:53 AM
I got it now
i got it now:
since the denominator is undefined for x > 2, the limit must be taken from the left
$\lim_{x \to 2^-} \frac{(2-x)(2+x)}{\sqrt {(2-x)(1+x)}}$
- Adam
December 5th 2006, 05:04 AM
$\frac{(2-x)(2+x)}{\sqrt {(2-x)(1+x)}}\ =\ \frac{\sqrt{2-x}\,(2+x)}{\sqrt{1+x}}$
$\to \left[\lim_{x \to 2^-}\right] \to$
$\frac{\sqrt{0}\cdot (2+2)}{\sqrt{1+2}}\ =\ 0$
December 5th 2006, 05:58 AM
Factor, your idea was correct.
$\lim_{x\to 2} \frac{(2-x)(2+x)}{\sqrt{(2-x)(1+x)}}$
Now when we approach the limit from the right,
$x>2$, which means $2-x<0$.
But then $(2-x)(1+x)<0$,
and the function is undefined
(the radicand is a negative number!).
So the limit (the regular limit) does not exist.
However, the one-sided limit might. We have already shown that the right-handed limit does not exist. Let us see the left-handed one.
When we approach the limit from the left,
$x<2$ on some open interval $(\delta, 2)$ where $\delta <2$ (a number just below 2), because we do not care what happens far away, only what happens really close to the point.
In that case, $2-x>0$, so $(2-x)(1+x)>0$.
Thus, the radical is defined.
And furthermore we can factor,
$\lim_{x\to 2^-} \frac{(2-x)(2+x)}{\sqrt{2-x}\cdot \sqrt{1+x}}$
$\lim_{x\to 2^-} \frac{\sqrt{2-x}(2+x)}{\sqrt{1+x}}=\frac{0\cdot 4}{\sqrt{3}}=0$
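If you have Python handy, SymPy's one-sided limits give a quick sanity check:

```python
import sympy as sp

x = sp.symbols('x')
expr = (4 - x**2) / sp.sqrt(2 + x - x**2)
print(sp.limit(expr, x, 2, '-'))   # prints 0
```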
December 5th 2006, 09:46 AM
Since both the numerator and the denominator $\longrightarrow 0$, use l’Hopital’s rule.
But since the denominator is not continuous on an open interval containing x = 2, CAN we use L'Hopital?
December 5th 2006, 10:15 AM
Just because the denominator tends to zero does not mean that you can use l'Hopital.
For instance,
$\lim_{x\to 1}\frac{1}{1-x}$
It is obvious that the limit does not exist.
In order to use l'Hopital you need to have an indeterminate form.
December 5th 2006, 12:35 PM
I think he was asking whether you can use it here, in this example, because the denominator is not continuous on an open interval containing 2, rather only on a half-open interval.
December 5th 2006, 01:47 PM
I don't have an example of where it doesn't work; I was simply speculating. I was just trying to be careful, since the derivative of $\sqrt{2 + x - x^2}$ doesn't exist at x = 2 (because the function doesn't exist for x > 2). Apparently, as long as the limit of the derivative exists, L'Hopital is usable, if I'm understanding TPH correctly.
December 6th 2006, 04:13 AM
But why doesn't the right-hand limit exist in this case? Aren't complex numbers allowed?
December 6th 2006, 04:18 AM
First I would like to mention that square roots of negative numbers are undefined. Note, it is mathematically incorrect to say $\sqrt{-4}=2i$ (I know that is what they teach in high schools, but it is still wrong). I have already discussed that on the forum. Next, L'Hopital's rule was only proven for real differentiable functions (at least I think). Finally, this problem itself is real, which implies that the function the poster was using mapped the real numbers into the real numbers.
December 6th 2006, 01:50 PM
My world picture has been shattered! :( ;_;
Quarter Wit, Quarter Wisdom: Bagging the Graphs - Part III
If you haven’t read parts I and II of this topic, I strongly suggest you read them first: Bagging the Graphs – Part I, Bagging the Graphs – Part II
While introducing this concept in Part I of Graphs, I had mentioned: “the one thing that I would suggest to increase speed in Co-ordinate Geometry and Algebra is Graphs”
Did you wonder why I included “Algebra” here? If yes, then this post will answer your question. In part 2, I gave an example of a Geometry question that can be easily solved using Graphs. In this
post, I will take up an Algebra question for which you can do the same.
In part I of Graphs, I had also mentioned “Learn how to draw a line from its equation in under ten seconds and you shall solve the related question in under a minute.” After this post, you won’t have
to take my word for it!
Before I begin, let me also add here that this Data Sufficiency question is not a question I created. So don’t think I made it to conveniently suit my needs. It is a question someone asked me on a
GMAT forum and was created by some third party. I chose it to demonstrate the beauty of graphs to you.
Question: If x and y are positive, is 4x > 3y?
1. x > y – x
2. x/y < 1
Let us look at the question stem first: x and y are positive, ‘is 4x > 3y?’ or rephrase it as ‘is 4x – 3y > 0?’
Let us draw 4x – 3y = 0. Then we can figure out which region represents 4x – 3y > 0. When x = 0, y = 0 so the line passes through (0, 0). The slope of the line is 4/3. This is what it looks like:
The line has divided the graph into two regions: 4x – 3y < 0 and 4x – 3y > 0. Let us check in which region, the point (3, 0) lies. (This is an arbitrary choice. You can check for any point.) When
you put x = 3 and y = 0 in 4x – 3y > 0, you get 12 > 0 which is true. Hence the point (3, 0) lies in 4x – 3y > 0 region. Since x and y are positive, we are only concerned with quadrant I. The shaded
Green region is where 4x – 3y > 0. So the question boils down to: “Does the point (x, y) lie in the shaded Green region for all values of x and y?”
Statement 1: x > y – x or 2x – y > 0
Draw 2x – y = 0. When x = 0, y = 0, so this line passes through the origin. The slope of the line is 2. The slope of this line, 2, is greater than 4/3, the slope of the line drawn above. Hence, this
line will be steeper than the line drawn above. Check for point (3, 0) again to find whether it lies in region 2x – y > 0. Putting x = 3 and y = 0, we get 6 > 0 which is true so the region 2x – y >
0 includes the point (3, 0) and is as shown below:
The Red shaded region here includes all the points of the Green shaded region above plus some more. Hence, not all points of the Red region lie in the Green region. Therefore, if values of x and y satisfy 2x – y > 0, they may or may not satisfy 4x – 3y > 0. Hence statement 1 alone is not sufficient.
Statement 2: x/y < 1
Since x and y are positive, we can multiply both sides of the inequality by y to get x < y or x – y < 0. Draw x – y = 0. When x = 0, y = 0, so this line passes through the origin. The slope of the line
is 1. The slope of this line, 1, is less than 4/3, the slope of the line in the question. Hence, this line will be less steep than the line in the question stem. Check for point (3, 0) again to find
whether it lies in region x – y < 0. Putting x = 3 and y = 0, we get 3 < 0 which is not true so the region x – y < 0 does not include the point (3, 0) and is as shown below:
Note: Our concern is limited to first quadrant since x and y are both positive.
The Blue shaded region here includes some points of the Green shaded region above plus some more. Hence, not all points of the Blue region lie in the Green region. Therefore, if values of x and y satisfy x – y < 0, they may or may not satisfy 4x – 3y > 0. Hence statement 2 alone is not sufficient.
Taking both statements together, x and y will have values that overlap in the Red and the Blue region as shown in the graph below. Some of these values will lie in the Green shaded region above, some
will not.
So even if we take both statements together, they are not sufficient. Answer (E).
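As a quick sanity check, one can sample points that satisfy both statements and see whether 4x > 3y is forced (a rough sketch in Python, using nothing beyond the standard library):

    # Sample positive (x, y) pairs satisfying both statements
    # (2x - y > 0 and x - y < 0) and test whether 4x > 3y is forced.
    import random

    outcomes = set()
    for _ in range(10000):
        x, y = random.uniform(0, 10), random.uniform(0, 10)
        if 2*x - y > 0 and x - y < 0:   # statements 1 and 2 together
            outcomes.add(4*x > 3*y)     # the question's inequality

    print(outcomes)  # {False, True}: both occur, so even together
                     # the statements are not sufficient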
I hope you have come to appreciate the wide range of applicability of graphs. Next time, I will introduce a graphical way of working with Modulus and Inequalities.
Karishma, a Computer Engineer with a keen interest in alternative Mathematical approaches, has mentored students in the continents of Asia, Europe and North America. She teaches the GMAT for Veritas
Prep in Detroit, Michigan, and regularly participates in content development projects such as this blog!
7 Responses
1. Hi,
I think, we can solve this problem faster by using algebra.
Stmt 1:
x > y – x
2x > y
4x > 2y
Not enough information.
Stmt 2.
x/y < 1
x < y
4x< 4y
Again, not enough information.
Stmt 1 and 2 combined : 4y2y
Again, not enough information.
What say thou?
2. Correction:
Stmt 1 and 2 combined : 4y< 4x<2y or 2y<2x<y
Again, not enough information.
3. Oops Correction 2:
Stmt 1 and 2 combined : 4y> 4x>2y or 2y>2x>y
Again, not enough information.
4. Yes, most people like to use Algebra to solve such questions. You can use whatever method you like as long as you get the correct answer within a reasonable frame of time. Through this blog, I
aim to provide some alternative approaches that have worked very well for me and some others. I strongly believe that once a person is good at drawing graphs, he/she can save a lot of time and
reduce the scope of errors (especially in Data Sufficiency questions). Thereafter, working with equations/inequalities feels exhausting and time consuming. But that’s just my opinion.
5. Hi karishma,
A basic question may be, but its not allowing me to understand the concept. Please help me.
Initially you considered the point (3, 0), which was arbitrarily chosen. But what if we considered a point like (2, 4)? That point would lie in 4x - 3y < 0, but in the first quadrant. And then for the equation 2x - y = 0, if I substitute this value (2, 4), it becomes zero. So the point lies on that line. How would I decide the direction of the region in such a case?
□ Pick points which you know will lie in the region in which you are interested. From the figure, I see that (3, 0) does not lie on the line. That is why I tried it. You could try (10, 1) or
something – whatever you know will not lie on the line. If you pick a point which lies on the line, it will not help you in determining what you want to know.
6. Karishma, shouldn’t the green line in both graphs above have the equation 4x-3y=0 instead of 4x-y=0?
I followed almost everything you wrote until this point – The Red shaded region here includes all the points of the Green shaded region above plus some more. Hence all points of Red region may
not lie in the Green region. Therefore, if values of x and y satisfy 2x – y > 0, they may or may not satisfy 4x – 3y > 0. Hence statement 1 alone is not sufficient.
Please shed some light on the above mentioned point, since I am really interested in learning this approach. I must admit that after following your approach of using number line to solve
inequalities with Mods, I feel very confident and get the questions right in less than a minute. So thank you for that.
Annuity calculation formula
Calculation Of The Maximum Transferable Amount
Step 2: Calculation of the Division Annuity Deduction Under plans which are integrated with the Canada and Quebec Pension (C/QPP), the member's pension entitlement is reduced, usually at age 65,
based on the years of service after 1965. … Access Doc
CHAPTER OBJECTIVES – Risk Management And Insurance
65, the annuity certain calculation must be adjusted for the life contingency (Table 10).
TABLE 10: Temporary Annuity
Age | Year | Probability of Survival | × Payment | × PVF | = NSP
66 | 1 | 8,347,117 / 8,467,345 | 100.00 | 0.962 | $94.83
67 | 2 | 8,215,925 … Get Doc
Policy 2.91 New Charitable Gift Annuity (CGA) Accounting Policy
Annually a new annuity payable calculation must be made (each August, based on July’s market value and the current age of the annuitant). The new payable must then be entered in the accounting system.
IV. Definitions. Charitable Gift Annuity (CGA): … Access Full Source
Examples Of Market Value Adjustment (MVA) And Surrender …
How does an MVA work? If you make a full or partial surrender of an annuity with an MVA during the surrender charge period, you could have money added to, or subtracted … Fetch Here
The following formula illustrates this calculation:
Amount of annuity included in the decedent’s gross estate = (Contributions to the purchase of the annuity attributable to the decedent ÷ Purchase price of the annuity) × Value of the annuity
To use this formula, the estate … Access Doc
Calculating Railroad Retirement Employee Annuities
The following describes the various components of a railroad retirement annuity, defines terms, The calculation of annuities is presented in general terms. For specific cases, please contact your
local Railroad Retirement Board office. … Fetch Full Source
1. What Choices Do I Have Concerning My The Difference …
Variable Annuity Program. You make this election as part of your retirement application process by completing a Retiree’s … How is the Full Formula calculation method adjusted to reflect my
participation in the Variable Program? … Access Doc
Payout Annuity Reserving And Financial Reporting
Annuity Payment Calculation • Using standard actuarial notation, we know the present value of an annuity due, where payments are made for 10 years, 1/12 of the annual payment is made each month, and
the annual effective interest rate is 5%, is: … View Doc
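(The excerpt cuts off before giving the value. As a rough sketch of the computation it describes, for a unit annual payment, using the rate and term stated above:)

    # Present value of an annuity-due paying 1/12 at the start of each month
    # for 10 years, at a 5% annual effective rate (illustrative sketch only).
    i = 0.05                       # annual effective interest rate
    j = (1 + i) ** (1 / 12) - 1    # equivalent monthly effective rate
    n = 10 * 12                    # number of monthly payments

    # Annuity-due: payments fall at the start of each period.
    pv = (1 / 12) * (1 - (1 + j) ** -n) / j * (1 + j)
    print(round(pv, 4))            # about 7.9293 times the annual payment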
Annuity (European Financial Arrangements) – Wikipedia, The …
An annuity is a financial contract which provides an income stream in return for an initial payment with specific parameters. published in the German language an Introduction to the Calculation of
Life Annuities and Assurances. … Read Article
The FERS Enhanced Annuity
Instead of the normal annuity calculation rate of 1%, the enhanced annuity is calculated at 1.1%. This results in a 10% higher annuity and subsequently higher COLAs (cost of living adjustments),
which are applied immediately beginning at age 62. … Access Document
California State University Foundation
calculation, the charitable gift annuity checklist form, and the completed Direct Deposit form with sample bank deposit slip, if applicable. Retains a photocopy of all documents for campus files.
Receives copies of finalized agreements from CSU Foundation and forwards to the donor(s). … Document Viewer
Chapter 18 Real Estate Finance Tools: Present Value And …
Future Value of an Annuity Calculating Yields or Borrowing Costs More Mortgage Calcs on a Financial Calculator The payment is based on the annuity that equates to a present value of the mortgage loan
when discounted at the contract rate of interest Effective Yield Calculation Effective … Fetch Here
Equity Director Fixed And Variable Annuity
Calculation of the value of each component of the Benefit First, we determine the Benefit Base. If IncomeLOCK was selected after Contract issue and prior to May 1, 2012, the annuity contract are
subject to restrictions specified in Treasury … Access Content
Variable At Retirement
Benefit calculation at retirement? Answer. PERS uses three retirement benefit calculation methods: Money Match, Full Formula, and fixed annuity as part of your monthly retirement benefit. Q4. Can I
change my variable election once my benefits begin? A. … Access This Document
Actuarial Notation – Wikipedia, The Free Encyclopedia
Indicates an annuity of 1 unit per year payable 12 times a year (1/12 unit per month) until death to someone currently age 65. indicates an annuity of 1 unit per year payable at the start of each
year until death to someone currently age 65. or in general: … Read Article
Time Value Of Money – Wikipedia, The Free Encyclopedia
Present value of an annuity An annuity is a series of equal payments or receipts that occur at evenly spaced intervals. If you are using a financial calculator or a spreadsheet, you can usually set
it for either calculation. … Read Article
United States General Accounting Office Washington, D.C. 20548
Publications pertaining to retirement procedures and the retirement annuity calculation. We collected data on how OPM computes the annuity amount and prepares the required supporting tax
documentation. We collected data relevant to IRS’ use of information returns … Fetch Doc
The First Mathematically Correct Life Annuity Valuation Formula
The First Mathematically Correct Life Annuity. Journal of Legal Economics 15(1): pp. 59-63. calculation of life expectancy.2 They focused on the distribution of deaths; or, looked at from another
point of view, what might be termed the distri- … Doc Retrieval
Gift Annuities And The 10% Deduction Requirement
Now, you're ready to re-run the calculation using an annuity rate of 9.5736%. The deduction turns out to be exactly $1,000, or 10% of the funding amount. This means you will need to reduce the
annuity rate slightly. Entering … View This Document
Math Help
What postulate or theorem justifies the statement that Triangle ABC is similar to Triangle RST?
Two triangles are said to be similar if: 1) corresponding angles are equal, or 2) the ratios of corresponding sides are equal. The second condition is what justifies the statement that these triangles are similar. We see that side rs of one triangle corresponds to side ab of the other, and side st of one triangle corresponds to side bc of the other. If these triangles are similar, then, for example: rs/st = ab/bc. Now, rs/st = 5/6 and ab/bc = 10/12 = 5/6 = rs/st; thus the triangles are similar.
I don't really understand this question. From the given information we cannot say if these two triangles are similar or not because we have no information about the third side and don't know any of
the angles. IF the ratio of the lengths of sides rt/ac = 5/10 = 6/12 or IF angles s and b are equal then we may say that the two triangles are similar. -Dan
The first 2 sides are proportional, with an enlargement factor of two. So the third side will also be proportional. To prove this theorem takes a little bit more... We first have to construct a triangle the size of the little one inside the bigger one, then prove them congruent. Then we must prove their angles equal, which is relatively easy. If you want me to thoroughly explain it to you, just ask and I'll be happy to do it.
I disagree. We may construct two triangles, one with angle s equal, say, to 30 degrees, and angle b equal to 25 degrees. Then, by the law of cosines: rt = sqrt{5^2 + 6^2 - 2*5*6*cos(30)} = 3.0064. Then ac = sqrt{10^2 + 12^2 - 2*10*12*cos(25)} = 5.1465, and rt/ac = 0.5842, which is not the required 1/2. -Dan
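(A quick numeric check of that counterexample; a sketch assuming the two included angles above:)

    # Law of cosines: c^2 = a^2 + b^2 - 2*a*b*cos(C).
    import math

    rt = math.sqrt(5**2 + 6**2 - 2*5*6*math.cos(math.radians(30)))
    ac = math.sqrt(10**2 + 12**2 - 2*10*12*math.cos(math.radians(25)))
    print(rt, ac, rt / ac)  # ~3.0064, ~5.1465, ~0.5842 (not the required 1/2)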
If the 1st two sides are proportional, then the 3rd side MUST be proportional. I cannot see any other way.
How do you know that the other 2 sides aren't proportional? Could someone please prove this? It seems I've been taught wrong all this time...
Two triangles are similar if they have the same angles, or if ALL of the sides are in the same proportions to each other. We don't have enough information from either triangle to state what any of
the angles are, and we don't have enough information to calculate that third side, so we can't say they are similar. This is almost a trick question. The diagrams that are given are drawn to show
that the angles are the same, which is meant to throw you off. NEVER trust the way a sketch looks, depend only on the numbers. -Dan
dailysudoku.com :: View topic - This is ridiculous
Discussion of Daily Sudoku puzzles
Marty R. | Posted: Mon Feb 20, 2006 1:31 am | Post subject: This is ridiculous
(Joined: 12 Feb 2006 | Posts: 5061 | Location: Rochester, NY, USA)

This puzzle is rated "Evil" by the site I got it from, but I usually solve them; the rating sounds much harder than the actual puzzle is, since I solve the majority of them. But this one I have tried at least six times. Every time when it looks like I'm nearing the end, I'm backed into a corner with a duplicate number. That normally means I've made some sort of mechanical error, and I usually solve it with one or two more tries. But six times is unreal! When a puzzle is too difficult for me, the usual scenario is that I make some degree of progress, then reach an impasse and can't solve another cell.
I probably shouldn't embarrass myself by asking, but have you ever seen a published puzzle that was incorrect? I know the odds are very low, but having to erase everything and restart six times is unprecedented for me in my short Sudoku career.
David Bryant | Posted: Mon Feb 20, 2006 2:02 am | Post subject: This puzzle is OK
(Joined: 29 Jul 2005 | Posts: 559 | Location: Denver)

Hi, Marty!
There are some invalid puzzles floating around on the web. But this isn't one of them. It has a unique solution, and doesn't require anything tougher than finding a couple of triplets. dcb
keith | Posted: Mon Feb 20, 2006 2:16 am | Post subject: The solution
(Joined: 19 Sep 2005 | Posts: 3136 | Location: near Detroit, Michigan, USA)

The puzzle is valid, and does not require heroic efforts.
Here is the solution as given by Sudoku Susser:
* The puzzle seems to be in good shape. All solved squares are correct, and the remaining possibilities in the unsolved squares are accurate.
* R5C1 is the only square in row 5 that can be <4>.
* R7C4 is the only square in row 7 that can be <9>.
* R8C5 is the only square in row 8 that can be <1>.
* R9C9 is the only square in block 9 that can be <2>.
From this deduction, the following moves are immediately forced:
R3C9 must be <6>.
* R9C6 is the only square in row 9 that can be <5>.
* R6C9 is the only square in column 9 that can be <5>.
* R8C8 is the only square in column 8 that can be <5>.
From this deduction, the following moves are immediately forced:
R8C1 must be <3>.
* R4C8 is the only square in column 8 that can be <6>.
* Intersection of row 1 with block 2. The value <7> only appears in one or more of squares R1C4, R1C5 and R1C6 of row 1. These squares are the ones that intersect with block 2.
Thus, the other (non-intersecting) squares of block 2 cannot contain this value.
R2C5 - removing <7> from <2479> leaving <249>.
R2C6 - removing <7> from <167> leaving <16>.
* Intersection of row 3 with block 3. The value <9> only appears in one or more of squares R3C7, R3C8 and R3C9 of row 3. These squares are the ones that intersect with block 3.
Thus, the other (non-intersecting) squares of block 3 cannot contain this value.
R1C8 - removing <9> from <1239> leaving <123>.
R2C7 - removing <9> from <13479> leaving <1347>.
R2C8 - removing <9> from <12349> leaving <1234>.
* Intersection of column 5 with block 5. The value <8> only appears in one or more of squares R4C5, R5C5 and R6C5 of column 5. These squares are the ones that intersect with
block 5. Thus, the other (non-intersecting) squares of block 5 cannot contain this value.
R4C6 - removing <8> from <378> leaving <37>.
R6C4 - removing <8> from <2378> leaving <237>.
* Intersection of column 8 with block 3. The values <24> only appears in one or more of squares R1C8, R2C8 and R3C8 of column 8. These squares are the ones that intersect with
block 3. Thus, the other (non-intersecting) squares of block 3 cannot contain these values.
R2C7 - removing <4> from <1347> leaving <137>.
R3C7 - removing <4> from <149> leaving <19>.
* Intersection of block 9 with column 7. The values <346> only appears in one or more of squares R7C7, R8C7 and R9C7 of block 9. These squares are the ones that intersect with
column 7. Thus, the other (non-intersecting) squares of column 7 cannot contain these values.
R2C7 - removing <3> from <137> leaving <17>.
R4C7 - removing <3> from <3789> leaving <789>.
R6C7 - removing <3> from <13789> leaving <1789>.
* A set of 3 squares form a comprehensive hidden triplet. R1C3, R1C6 and R1C8 each contain one or more of the possibilities <136>. No other squares in row 1 have those
possibilities. Since the 3 squares are the only possible locations for 3 possible values, any additional possibilities these squares have (if any) can be eliminated. These
squares now become a comprehensive naked triplet.
R1C3 - removing <2> from <1236> leaving <136>.
R1C6 - removing <7> from <167> leaving <16>.
R1C8 - removing <2> from <123> leaving <13>.
* R4C6 is the only square in column 6 that can be <7>.
* R7C6 is the only square in column 6 that can be <3>.
From this deduction, the following moves are immediately forced:
R7C7 must be <4>.
R8C7 must be <6>.
R9C7 must be <3>.
* R7C2 is the only square in row 7 that can be <8>.
From this deduction, the following moves are immediately forced:
R8C2 must be <7>.
R8C3 must be <4>.
R9C2 must be <6>.
R8C4 must be <8>.
* R3C6 is the only square in row 3 that can be <8>.
* R5C9 is the only square in row 5 that can be <7>.
From this deduction, the following moves are immediately forced:
R2C9 must be <3>.
R1C8 must be <1>.
R1C6 must be <6>.
R5C8 must be <3>.
R2C7 must be <7>.
R3C7 must be <9>.
R4C7 must be <8>.
R4C5 must be <2>.
R6C7 must be <1>.
R5C2 must be <1>.
R6C8 must be <9>.
R1C3 must be <3>.
R2C6 must be <1>.
R6C5 must be <8>.
R6C4 must be <3>.
R6C2 must be <2>.
R4C3 must be <5>.
R4C1 must be <9>.
R7C3 must be <2>.
R6C3 must be <7>.
R2C2 must be <9>.
R7C1 must be <5>.
R2C3 must be <6>.
R3C3 must be <1>.
R2C5 must be <4>.
R4C2 must be <3>.
R1C1 must be <2>.
R2C8 must be <2>.
R9C5 must be <7>.
R3C4 must be <2>.
R3C8 must be <4>.
R1C4 must be <7>.
R9C4 must be <4>.
R1C5 must be <9>.
Best wishes,
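(An aside for the programmatically inclined: if you ever want to machine-check a finished grid, a minimal validity test is sketched below. The 'grid' variable is a placeholder, since the puzzle's givens are not reproduced in this thread.)

    # Check that a completed 9x9 grid (a list of 9 lists of ints 1-9)
    # satisfies the Sudoku constraints.
    def is_valid(grid):
        digits = set(range(1, 10))
        rows = all(set(row) == digits for row in grid)
        cols = all({grid[r][c] for r in range(9)} == digits for c in range(9))
        boxes = all(
            {grid[r + i][c + j] for i in range(3) for j in range(3)} == digits
            for r in (0, 3, 6) for c in (0, 3, 6)
        )
        return rows and cols and boxes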
Marty R. | Posted: Mon Feb 20, 2006 5:44 pm
(Joined: 12 Feb 2006 | Posts: 5061 | Location: Rochester, NY, USA)

Thanks David and Keith. Deep down I really knew that the odds of this being an invalid puzzle were extremely low, but as I mentioned, having to erase and restart six times is pretty unusual for me. I guess I'll go for seven!!
Excel Functions&Formulas
microsoft.public.excel.worksheet.functions - This section provides information about Excel commands and functions, Excel states, worksheet and expression evaluation, active vs. current worksheet, and
worksheet references
2. How do I change the autoshape size from inches to cm
I do drawings in Excel. I work in metric and not imperial measurements. In Format AutoShape, under Size, the dimensions of a shape are in inches, and I want to change them to centimeters. How do I do that?
Having a problem with Excel recently (last week or so) whereby the "Format Picture" option is displaying the measurements in inches, despite the OS regional setting of Metric.
1) I have switched the regional setting from Metric to US, then back to Metric, which fixes the problem, but it then reverts to inches when I reopen Excel.
2) Only affecting Excel, not Word etc.
3) Only affecting the Format Picture option; the margins and page sizes are still displaying in cm/mm.
4) Excel 2003, on WinXP.
Any help would be greatly appreciated.
In Excel I want to change the unit of measure from cm to inches. It's in the Size tab of Format Picture.
8. Formula converts value into Feet, Inches & Fraction of an Inch
If you've ever wanted to display Feet and Inches with Fractions of an Inch, then this formula is for you. For example, this will change a value of 15.125 to be 1' 3-1/8". Type a number in A1, and paste this formula into B1:
=IF(A1<0,"("," ") & IF(TRUNC(ABS(ROUND(A1*16,0)/16)/12,0)<1,"",TRUNC(ABS(ROUND(A1*16,0)/16)/12,0) & "'" & IF(MOD(ABS(A1),12)=0,""," ")) & IF(TRUNC(ABS(ROUND(A1*16,0)/16)-12*TRUNC(ABS(ROUND(A1*16,0)/16)/12,0))>=1,TRUNC(ABS(ROUND(A1*16,0)/16)-12*TRUNC(ABS(ROUND(A1*16,0)/16)/12,0)) & IF(MOD(ABS(A1),1)=0,"","-"),"") & IF(MOD(A1,1)=0,"",MOD(ABS(A1),1)*16/GCD(ROUND(ABS(A1)*16,0),16) & "/" & 16/ABS(GCD(ROUND(ABS(A1)*16,0),16))) & IF(MOD(ABS(A1),12)=0,"","""") & IF(A1<0,")"," ")
It will reformat the number into Feet, Inches, and Fractions of 1/16". It will round the number to the nearest sixteenth of an inch. Notice that it also places parentheses () around negative numbers. If you want something other than 16ths of an inch, simply do a mass change of 16 to 8ths or whatever you want. Be sure to leave the 12's alone; they are for Feet. Hope that it's useful to someone out there. Mark
Computer History Museum
Easy...and Accurate: Mechanical Calculators
For some calculations, the slide rule’s flexibility and simplicity outweighed its imprecision. But many tasks, such as accounting or taking inventory, demand the exact results that come from counting
instead of measuring. Mechanical calculators do that.
Solutions ranged widely, from “Napier’s Bones” in 1617, which used a sequence of sticks for multiplication, to the extremely elegant Curta calculator of the 1940s.
Napier's Bones
The original idea for this multiplication aid probably came from the Middle East. Europeans learned it from John Napier’s Rabdologia in 1617.
Users arranged rows of “bones” (bars inscribed with number sequences), then added certain numbers on the bars to do multiplication.
ancestors (11)
Commentary related to Andrew Wuensche and Mike Lesser's book, ``The Global Dynamics of Cellular Automata.'' Addison-Wesley, 1992 (ISBN 0-201-55740-1) continues. Some observations on Langton's
parameter, lambda, are in order.
We have described a 'Statistical Gerschgorin Theorem' (which is more of a formula than a theorem) which assigns a prominent role to the fraction of neighborhoods begetting each state in the enumeration
of ancestors. These fractions enter into the calculation of moments with a correction term which experience shows to be small; if not always zero, its size is predictable and consistent.
If one calls such fractions 'Langton's parameter,' one has a solid basis for classifying automata according to such a parameter, whatever it is called. In other contexts, the fraction plays a role in
calculating self-consistent probabilities, although there it yields a zero-order'th approximation to the fixed point.
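(For concreteness, this fraction is easy to read off the rule table; a small sketch, assuming Wolfram-style rule numbering for (2,1) automata:)

    # Langton's lambda for an elementary (k=2, r=1) cellular automaton,
    # taken as the fraction of the 8 neighborhoods mapped to the
    # non-quiescent state 1.
    def langton_lambda(rule_number):
        outputs = [(rule_number >> n) & 1 for n in range(8)]
        return sum(outputs) / 8

    print(langton_lambda(18))   # 0.25
    print(langton_lambda(126))  # 0.75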
As a predictor of automaton behavior, lambda has gained a mixed acceptance; Wuensche and Lesser introduce Z with the claim that it is a more sensitive indicator. The reason for this, among other
things, is the bad company which Rules 18 and 126 are seen to be keeping in the example which follows. However, in browsing through an Atlas such as theirs, there is a tendency to see what one
expects to see, particularly given the mass of data and their similarity to one another. So it behooves us to sharpen our tastes somewhat.
The situation may be likened to that prevailing in Botany before the advent of Carolus Linnaeus; there were tall trees and bushy trees and trees that kept their leaves through all seasons and those
that shed their bark instead of their leaves, and those that smelled good and those that raccoons climbed in and others which monkeys preferred, and so on. Classifying them and every thing else
according to the layout of their reproductive organs seemed rather prosaic, but in the end it brought order to a lot of chaos. And the monkeys even got to keep their tree (Araucaria araucana).
Calculating the average number of ancestors is like calculating the bushiness of our tree, in which case calculating their standard deviation amounts to observing whether this bushiness is strictly
observed or whether it can vary considerably. Once again, examining a specific example may be helpful. Suppose lambda is 25% (lambda ratio = 0.5 in the Atlas). There are 56 (2,1) Rules with this
ratio (including those with lambda = 75%), which the Atlas assigns to 11 clusters. The growth factors for the quiescent configuration and for the standard deviation in the number of ancestors are
shown in the table below.
The histogram on the side gives an idea of how the growth factors of the A matrix, the column in the table titled 'ancestors ...,' are distributed around their nominal mean of 1.500. The other column has a parallel distribution.
The range 1.1 to 1.6 should be compared with the nominal values of 1.25 for ab=35 Rules and 1.75 for ab=17 Rules; if there is any transgression, it is for the Rules with small growth rates.
What we have to decide by looking at the Atlas is, whether this is all true, and whether, by knowing it, there are some features of the basins there displayed which should attract our attention,
maybe even stand out.
(Has anyone noticed that in the Atlas, although the custom is to place the panel showing evolution from a single cell on the left and that from a random initial configuration on the right, this has been reversed for Rule 4 on page 88? Such are the delights of trying to publish an exceedingly detailed book and getting everything right.)
Up until now, we have been unwilling to identify ``maximal growth rate'' with ``eigenvalue of A,'' because we have not proven rigorously that this is, in fact, the maximal growth rate. On the other
hand, it is evident by inspection that the quiescent configuration will have the most ancestors, both according to the theory which has been presented in this series, and the empirical data
comprising the Atlas; it only lacks a proof.
Another source of discrepancy is the fact that we have elaborated a general theory without boundary condition, whereas the Atlas is concerned exclusively with periodic configurations. Nevertheless,
the nature of the general theory is such that all conclusions regarding multipliers, such as growth rates, apply equally to every variant of the boundary conditions which can be obtained by varying
the metric matrix. This specifically includes periodic boundary conditions.
The one restraint which must be observed, is that it takes time for growth factors to reach the maximum eigenvalue of a matrix, so that conclusions should not be drawn for periodic configurations of
a very short length. What constitutes 'very short' varies from Rule to Rule, but is closely related to the variation in size of the matrix elements within A, and especially to its pattern of zeroes.
One might wonder whether Garden of Eden configurations are included within this sweeping generalization, and the answer is yes, subject to the same precautions. The reason is that in a general
automaton, poison words, and hence Gardens of Eden, arise because of incompatibilities - the requisite series of ancestral neighborhoods simply can't be found. In addition, likely ancestors may fail
to meet boundary conditions.
For small rings, more ancestors will be lost because of boundary conditions. Recall that for Rule 22, eight is the shortest ring which has a poison word. But as rings grow longer, it is easier and
easier to accommodate boundary conditions - two fragments which won't work separately may join together to compensate each other's deficiencies. Consequently for long rings, the Garden of Eden may be
reduced by 1/4th, but otherwise its growth will follow that of the general theory.
Returning to ``maximal growth rate'' and being willing to equate it with the larger of the dominant eigenvalues of the A, B pair (which implies that the quiescent configuration has the maximal growth rate; if no state is quiescent, the pair of alternating uniform configurations plays this role, or the uniform ancestor of the quiescent configuration when the dominant eigenvalue does not belong to the latter), we could compare growth and eigenvalue for the 26 (lambda ratio = 0.5) clusters:
Except for three clusters with many small basins, the agreement is exemplary.
More commentary will follow.
Harold V. McIntosh
Calculus 1
Calculus 1
Calculus can be thought of as the mathematics of CHANGE. Because everything in the world is changing, calculus helps us track those changes. Algebra, by contrast, can be thought of as dealing with a
large set of numbers that are inherently CONSTANT. Solving an algebra problem, like y = 2x + 5, merely produces a pairing of two predetermined numbers, although an infinite set of pairs. Algebra is
even useful in rate problems, such as calculating how the money in your savings account grows at a simple interest rate R, via Y = X₀(1 + Rt), where t is elapsed time and X₀ is the initial deposit. With compound interest, things get complicated for algebra, as the balance grows on itself: Y = X₀(1 + R)^t. Now we have a rate of change which itself is changing. Calculus came to the rescue, as Isaac Newton introduced the world to mathematics specifically designed to handle those things that change.
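(To make that last point concrete, here is a small added illustration: with continuous compounding the balance is Y(t) = X₀e^(Rt), and its rate of change is dY/dt = R·Y(t). The rate of growth is proportional to the balance itself, so it grows as the balance grows, which is exactly the kind of "changing change" calculus is built to track.)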
Calculus is among the most important and useful developments of human thought. Even though it is over 300 years old, it is still considered the beginning and cornerstone of modern mathematics. It is
a wonderful, beautiful, and useful set of ideas and techniques. You will see the fundamental ideas of this course over and over again in future courses in mathematics as well as in all of the
sciences (e.g., physical, biological, social, economic, and engineering). However, calculus is an intellectual step up from your previous mathematics courses. Many of the ideas you will gain in this
course are more carefully defined and have both a functional and a graphical meaning. Some of the algorithms are quite complicated, and in many cases, you will need to make a decision as to which
appropriate algorithm to use. Calculus offers a huge variety of applications and many of them will be saved for courses you might take in the future.
This course is divided into five learning sections, or units, plus a reference section, or appendix. The course begins with a unit that provides a review of algebra specifically designed to help and
prepare you for the study of calculus. The second unit discusses functions, graphs, limits, and continuity. Understanding limits could not be more important, as that topic really begins the study of
calculus. The third unit introduces and explains derivatives. With derivatives, we are now ready to handle all of those things that change mentioned above. The fourth unit makes visual sense of
derivatives by discussing derivatives and graphs. The fifth unit introduces and explains antiderivatives and definite integrals. Finally, the reference section provides a large collection of
reference facts, geometry, and trigonometry that will assist you in solving calculus problems long after the course is over.
This course provides students the opportunity to earn actual college credit. It has been reviewed and recommended for 4 credit hours by The National College Credit Recommendation Service (NCCRS).
While credit is not guaranteed at all schools, we have partnered with a number of schools who have expressed their willingness to accept transfer of credits earned through Saylor. You can read more
about our NCCRS program here.
Welcome to MA005:
Calculus I. General information about this course and its requirements can be found below.
Course Designer:
Lenny Tevlin
Primary Resources:
While this course comprises a range of different free, online materials, the primary source used for this course is:
The chapters and sections of the original text have been reorganized and carefully aligned with the course subunits so that while solving any mathematical problem, the theory you might need to solve
the problem is only a page or two away. The best way to proceed through the course is to read the assigned section in the order it is presented. You may download each assigned reading as you work
through each subunit. If you prefer to download the entire text for the course, remember to refer to the reading titles, rather than the section or page numbers as these have been revised for the
purpose of this course. Be advised that, depending upon your Internet speed, the file can take a couple of minutes to download, as it contains 329 pages and is more than 300 megabytes in size.
Requirements for Completion:
In order to complete this course, you will need to work through each unit and all of its assigned materials. Pay special attention to units 1 and 2, as these lay the groundwork for understanding the
more advanced, exploratory material presented in the latter units. You will also need to complete problem sets in each unit and the final exam.
Note that you will only receive an official grade on your final exam. However, in order to adequately prepare for this exam, you will need to work through the problems presented for solution.
In order to pass this course, you will need to earn a 70% or higher on the final exam. Your score on the exam will be tabulated as soon as you complete it. If you do not pass the exam, you may take
it again.
Time Commitment:
We recommend that you dedicate approximately 2 or 3 hours of work every weeknight and 6 or 7 hours each weekend if you expect to perform highly in this course.
This course should take you a total of approximately 130.75 hours to complete. Each unit includes a time advisory that lists the amount of time you are expected to spend on each subunit. These should
help you plan your time accordingly. It may be useful to take a look at these time advisories, determine how much time you have over the next few weeks to complete each unit, and then set goals for
yourself. For example, Unit 1 should take you 7.75 hours. Perhaps you can sit down with your calendar and decide to complete Subunit 1.1 and Subunit 1.2 (a total of 2.5 hours) on Monday night;
Subunit 1.3 and Subunit 1.4 (a total of 3.5 hours) on Tuesday night; Subunit 1.5 and the unit assessment (a total of 1.75 hours) on Wednesday night; and so forth.
Calculus takes time. Most people who fail a calculus course do so because they are unwilling, or unable, to devote the necessary time to the course.
• Do not skip topics. The understanding of calculus is typically sequential. It is very difficult to understand one topic after lightly skipping over a preceding topic.
• Test yourself. You are testing yourself when you follow the procedure of always solving a problem independently BEFORE looking at a solution of the same.
• Work on details. Focus on the parts you missed. Determine what you did not understand before moving on.
This course has been developed through a partnership with the Washington State Board for Community and Technical Colleges. Unless otherwise noted, all materials are licensed under a
Creative Commons Attribution 3.0 Unported License
. The Saylor Foundation has modified some materials created by the Washington State Board for Community and Technical Colleges in order to best serve our users.
This course features a number of Khan Academy™ videos. Khan Academy™ has a library of over 3,000 videos covering a range of topics (math, physics, chemistry, finance, history and more), plus over
300 practice exercises. All Khan Academy™ materials are available for free at
Upon successful completion of this course, you will be able to:
• calculate or estimate limits of functions given by formulas, graphs, or tables by using properties of limits and L’Hopital’s Rule;
• state whether a function given by a graph or formula is continuous or differentiable at a given point or on a given interval, and justify the answer;
• calculate average and instantaneous rates of change in context, and state the meaning and units of the derivative for functions given graphically;
• calculate derivatives of polynomial, rational, and common transcendental functions, compositions thereof, and implicitly defined functions;
• apply the ideas and techniques of derivatives to solve maximum and minimum problems and related rate problems, and calculate slopes and rates for functions given as parametric equations;
• find extreme values of modeling functions given by formulas or graphs;
• predict, construct, and interpret the shapes of graphs;
• solve equations using Newton’s method;
• find linear approximations to functions using differentials;
• restate in words the meanings of the solutions to applied problems, attaching the appropriate units to an answer;
• state which parts of a mathematical statement are assumptions, such as hypotheses, and which parts are conclusions;
• find antiderivatives by changing variables and using tables; and
• calculate definite integrals.
In order to take this course, you must:
√ have access to a computer;
√ have continuous broadband Internet access;
√ have the ability/permission to install plug-ins or software (e.g., Adobe Reader or Flash);
√ have the ability to download and save files and documents to a computer;
√ have the ability to open Microsoft files and documents (.doc, .ppt, .xls, etc.);
√ have competency in the English language;
√ have read the Saylor Student Handbook; and
√ have completed the prerequisite course, or the equivalent course in Intermediate College Algebra.
• Unit 1: Preview and Review
While a first course in calculus can strike you as something new to learn, it is not comparable to learning a foreign language where everything seems different. Calculus still depends on most of
the things you learned in algebra, and the true genius of Isaac Newton was to realize that he could get answers for this something new by relying on simple and known things like graphs, geometry,
and algebra. There is a need to review those concepts in this unit, where a graph can reinforce the adage that a picture is worth one thousand words. This unit starts right off with one of the
most important steps in mastering problem solving: Have a clear and precise statement of what the problem really is about.
• 1.1 Preview of Calculus
• 1.1.1 Problems for Solution
• 1.1.2 Reading Review
Before moving on to the next assigned reading, you should be comfortable with each of the topics listed in the reading review below:
√ The Slope of a Tangent Line
To review this topic, focus on page 1 and page 2 of the reading.
√ The Area of a Shape
To review this topic, focus on page 3 and page 4 of the reading.
√ Limits
To review this topic, focus on page 4 of the reading.
√ Differentiation and Integration
To review this topic, focus on page 4 of the reading.
• 1.2 Lines in the Plane
• 1.2.1 Problems for Solution
• 1.2.2 Reading Review
Before moving on to the next assigned reading, you should be comfortable with each of the topics listed in the reading review below:
√ The Real Number Line
To review this topic, focus on page 1 of the reading.
√ The Cartesian Plane
To review this topic, focus on page 2 of the reading.
√ Increments and Distance between Points in the Plane
To review this topic, focus on page 2 and page 3 of the reading.
√ Slope between Points in the Plane
To review this topic, focus on pages 4 - 6 of the reading.
√ Equations of Lines
To review this topic, focus on page 6 of the reading.
√ Two-Point and Slope-Intercept Equations
To review this topic, focus on page 6 and page 7 of the reading.
√ Angles between Lines
To review this topic, focus on page 8 of the reading.
√ Parallel and Perpendicular Lines
To review this topic, focus on page 8 and page 9 of the reading.
√ Angles and Intersecting Lines
To review this topic, focus on page 10 of the reading.
• 1.3 Functions and Their Graphs
• 1.3.1 Problems for Solution
• 1.3.2 Reading Review
Before moving on to the next assigned reading, you should be comfortable with each of the topics listed in the reading review below:
√ Definition of a Function
To review this topic, focus on page 1 of the reading.
√ Function Machines
To review this topic, focus on page 2 of the reading.
√ Functions Defined by Equations
To review this topic, focus on page 2 and page 3 of the reading.
√ Functions Defined by Graphs and Tables of Values
To review this topic, focus on page 3 and page 4 of the reading.
√ Creating Graphs of Functions
To review this topic, focus on pages 4 and 5 of the reading.
√ Reading Graphs
To review this topic, focus on pages 6 - 8 of the reading.
• 1.4 Combinations of Functions
• 1.4.1 Problems for Solutions
• 1.4.2 Reading Review
Before moving on to the next assigned reading, you should be comfortable with each of the topics listed in the reading review below:
√ Multiline Definition of Functions
To review this topic, focus on page 1 of the reading.
√ Wind Chill Index Sample
To review this topic, focus on pages 1 - 3 of the reading.
√ Composition of Functions – Functions of Functions
To review this topic, focus on page 3 and page 4 of the reading.
√ Shifting and Stretching Graphs
To review this topic, focus on page 5 and page 6 of the reading.
√ Iteration of Functions
To review this topic, focus on page 6 and page 7 of the reading.
√ Absolute Value and Greatest Integer
To review this topic, focus on pages 7 - 9 of the reading.
√ Broken Graphs and Graphs with Holes
To review this topic, focus on page 10 and page 11 of the reading.
• 1.5 Mathematical Language
• 1.5.1 Problems for Solution
• 1.5.2 Reading Review
Before moving on to the next assigned reading, you should be comfortable with each of the topics listed in the reading review below:
√ Equivalent Statements
To review this topic, focus on page 1 of the reading.
√ The Logic of “And” and “Or”
To review this topic, focus on page 1 of the reading.
√ Negation of a Statement
To review this topic, focus on page 2 of the reading.
√ “If-Then” Statements
To review this topic, focus on page 2 and page 3 of the reading.
√ Contrapositive of “If-Then” Statements
To review this topic, focus on page 4 of the reading.
√ Converse of “If-Then” Statements
To review this topic, focus on page 4 and page 5 of the reading.
• Unit 1: Assessment
□ Assessment: The Saylor Foundation’s “Problem Set 1”
Link: The Saylor Foundation’s “Problem Set 1” (HTML)
Instructions: You are now ready to complete “Problem Set 1.” If you have not already done so, create a free account on the Moodle website in order to access the quiz and then work on
answering the 10 multiple-choice questions. When you have completed the quiz, click on “submit all and finish” to tabulate your score.
Completing this assessment should take approximately 30 minutes.
• Unit 2: Functions, Graphs, Limits, and Continuity
The concepts of continuity and the meaning of a limit form the foundation for all of calculus. Not only must you understand both of these concepts individually, but you must understand how they
relate to each other. They are a kind of Siamese twins in calculus problems, as we always hope they show up together.
A student taking a calculus course during a winter term came up with the best analogy that I have ever heard for tying these concepts together: The weather was raining ice - the kind of weather
in which no human being in his right mind would be driving a car. When he stepped out on the front porch to see whether the ice-rain had stopped, he could not believe his eyes when he saw the
headlights of an automobile heading down his road, which ended in a dead end at a brick house. When the car hit the brakes, the student’s intuitive mind concluded that at the rate at which the
velocity was decreasing (assuming continuity), there was no way the car could stop in time and it would hit the house (the limiting value). Oops. He forgot that there was a gravel stretch at the
end of the road and the car stopped many feet from the brick house. The gravel represented a discontinuity in his calculations, so his limiting value was not correct.
• 2.1 Tangent Lines, Velocities, and Growth
• 2.1.1 Problems for Solution
• 2.1.2 Reading Review
Before moving on to the next assigned reading, you should be comfortable with each of the topics listed in the reading review below:
√ The Slope of a Tangent Line
To review this topic, focus on pages 1 - 3.
√ Average Velocity and Instantaneous Velocity
To review this topic by examining an example with a falling tomato, focus on pages 3 - 5.
√ Average Population Growth Rate and Instantaneous Population Growth Rate
To review this topic by examining an example of growing bacteria, focus on pages 5 - 7.
• 2.2 The Limit of a Function
□ Reading: Washington State Board for Community and Technical Colleges: Dale Hoffman’s Contemporary Calculus: “Section 2.2: The Limit of a Function”
Link: Washington State Board for Community and Technical Colleges: Dale Hoffman’s Contemporary Calculus: “Section 2.2: The Limit of a Function” (PDF)
Instructions: Read Section 2.2 on pages 1 - 7 for an introduction to connecting derivatives to quantities we can see in the real world. Work through practice problems 1 - 4. For solutions to
these practice problems, see page 10.
Reading this section and completing the practice problems should take approximately 1 hour.
Terms of Use: This resource is licensed under a Creative Commons Attribution 3.0 Unported License. It is attributed to Dale Hoffman, and the original version can be found here.
• 2.2.1 Problems for Solution
• 2.2.2 Reading Review
Before moving on to the next assigned reading, you should be comfortable with each of the topics listed in the reading review below:
√ Informal Notion of a Limit
To review this topic, focus on pages 1 - 3 of the reading.
√ Algebra Method for Evaluating Limits
To review this topic, focus on pages 4 - 6 of the reading.
√ Table Method for Evaluating Limits
To review this topic, focus on pages 4 - 6 of the reading.
√ Graph Method for Evaluating Limits
To review this topic, focus on pages 4 - 6 of the reading.
√ One-Sided Limits
To review this topic, focus on page 6 and page 7 of the reading.
• 2.3 Properties of Limits
□ Reading: Washington State Board for Community and Technical Colleges: Dale Hoffman’s Contemporary Calculus: “Section 2.3: Properties of Limits”
Link: Washington State Board for Community and Technical Colleges: Dale Hoffman’s Contemporary Calculus: “Section 2.3: Properties of Limits” (PDF)
Instructions: Read Section 2.3 on pages 1 - 8 to learn about the properties of limits. Work through practice problems 1 - 6. For the solutions to these problems, see page 14.
Reading this section and completing the practice problems should take approximately 1 hour.
Terms of Use: This resource is licensed under a Creative Commons Attribution 3.0 Unported License. It is attributed to Dale Hoffman, and the original version can be found here.
□ Web Media: YouTube: RootMath’s “Solving Limits (Rationalization)”
Link: YouTube: RootMath’s “Solving Limits (Rationalization)” (YouTube)
Instructions: Watch this video on finding limits algebraically. Be warned that removing x - 4 from the numerator and denominator in Step 4 of this video is only legal inside this limit. The
function (x - 4)/(x - 4) is not defined at x = 4; however, when x is not 4, it simplifies to 1. Because the limit as x approaches 4 depends only on values of x different from 4, inside that
limit (x - 4)/(x - 4) and 1 are interchangeable. Outside that limit, they are not! However, this kind of cancellation is a key technique for finding limits of algebraically complicated functions.
Watching this video and pausing to take notes should take approximately 30 minutes.
Terms of Use: This resource is licensed under a Creative Commons Attribution 3.0 Unported License. It is attributed to RootMath.
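As a worked illustration of the same idea (an added sketch in the spirit of the video, not one of its examples), rationalizing the numerator turns an indeterminate 0/0 form into a computable limit:

$$\lim_{x\to 0}\frac{\sqrt{x+9}-3}{x} = \lim_{x\to 0}\frac{\sqrt{x+9}-3}{x}\cdot\frac{\sqrt{x+9}+3}{\sqrt{x+9}+3} = \lim_{x\to 0}\frac{1}{\sqrt{x+9}+3} = \frac{1}{6}$$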
□ Web Media: Khan Academy’s “Calculating Slope of Tangent Line Using Derivative Definition”
Link: Khan Academy’s “Calculating Slope of Tangent Line Using Derivative Definition” (YouTube)
Instructions: Watch this video on limits as the slopes of tangent lines.
An earlier Khan Academy video (not used in this course) defined the limit that gives the slope of the tangent line to a curve as y = f(x) at a point x = a and called it the derivative of f(x)
at a. Your text will introduce this term in Unit 3.
Watching this video and taking notes should take approximately 30 minutes.
Terms of Use: This video is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 United States License. It is attributed to the Khan Academy.
• 2.3.1 Problems for Solution
• 2.3.2 Reading Review
Before moving on to the next assigned reading, you should be comfortable with each of the topics listed in the reading review below:
√ Main Limit Theorem
To review this topic, focus on page 1 of the reading.
√ Limits by Substitution
To review this topic, focus on page 2 of the reading.
√ Limits of Combined or Composed Functions
To review this topic, focus on pages 2 - 4 of the reading.
√ Tangent Lines as Limits
To review this topic, focus on page 4 and page 5 of the reading.
√ Comparing the Limits of Functions
To review this topic, focus on page 5 and page 6 of the reading.
√ Showing that a Limit Does Not Exist
To review this topic, focus on pages 6 - 8 of the reading.
□ Assessment: The Saylor Foundation’s “Problem Set 2”
Link: The Saylor Foundation’s “Problem Set 2” (HTML)
Instructions: You are now ready to complete “Problem Set 2.” If you have not already done so, create a free account on the Moodle website in order to access the quiz and then work on
answering the 10 multiple-choice questions. When you have completed the quiz, click on “submit all and finish” to tabulate your score.
Completing this assessment should take approximately 1 hour.
• 2.4 Continuous Functions
• 2.4.1 Problems for Solution
• 2.4.2 Reading Review
Before moving on to the next assigned reading, you should be comfortable with each of the topics listed in the reading review below:
√ Definition and Meaning of Continuous
To review this topic, focus on page 1 of the reading.
√ Graphic Meaning of Continuity
To review this topic, focus on pages 1 - 4 of the reading.
√ The Importance of Continuity
To review this topic, focus on page 5 of the reading.
√ Combinations of Continuous Functions
To review this topic, focus on page 5 and page 6 of the reading.
√ Which Functions Are Continuous?
To review this topic, focus on pages 6 - 8 of the reading.
√ Intermediate Value Property
To review this topic, focus on page 8 and page 9 of the reading.
√ Bisection Algorithm for Approximating Roots
To review this topic, focus on pages 9 - 11 of the reading.
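Since this reading connects the Intermediate Value Property to the Bisection Algorithm, a minimal Python sketch of the idea may help (illustrative only; not part of the course materials):

    def bisect(f, a, b, tol=1e-10):
        """Approximate a root of f in [a, b], assuming f(a) and f(b) differ in sign.

        By the Intermediate Value Property, a continuous f with a sign change
        on [a, b] has a root there; halving the interval preserves the sign change.
        """
        fa = f(a)
        while b - a > tol:
            m = (a + b) / 2
            fm = f(m)
            if fa * fm <= 0:
                b = m          # the sign change (hence a root) is in the left half
            else:
                a, fa = m, fm  # the sign change is in the right half
        return (a + b) / 2

    # Example: the root of x^2 - 2 between 1 and 2 is sqrt(2), about 1.41421356.
    print(bisect(lambda x: x * x - 2, 1.0, 2.0))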
• 2.5 Definition of a Limit
• 2.5.1 Problems for Solution
• 2.5.2 Reading Review
Before moving on to the next assigned reading, you should be comfortable with each of the topics listed in the reading review below:
√ Intuitive Approach to Defining a Limit
To review this topic, focus on pages 1 - 7 of the reading.
√ The Formal Definition of a Limit
To review this topic, focus on pages 7 - 10 of the reading.
√ Two Limit Theorems
To review this topic, focus on page 10 and page 11 of the reading.
• Unit 2: Assessment
□ Assessment: The Saylor Foundation’s “Problem Set 3”
Link: The Saylor Foundation’s “Problem Set 3” (HTML)
Instructions: You are now ready to complete “Problem Set 3.” If you have not already done so, create a free account on the Moodle website in order to access the quiz and then work on
answering the 10 multiple-choice questions. When you have completed the quiz, click on “submit all and finish” to tabulate your score.
Completing this assessment should take approximately 1 hour.
• Unit 3: Derivatives
In this unit, we start to see calculus become more visible when abstract ideas such as a derivative and a limit appear as parts of slopes, lines, and curves. Then, there are circles, ellipses,
and parabolas that are even more geometric, so what was previously an abstract concept can now be something we can see. Nothing makes calculus more tangible than to recognize that the first
derivative of an automobile’s position is its velocity and the second derivative of that position is its acceleration. We are at the very point that started Isaac Newton on his quest to master
this mathematics, what we now call calculus, when he recognized that the second derivative was precisely what he needed to formulate his Second Law of Motion F = MA, where F is the force on any
object, M is its mass, and A is the second derivative of its position. Thus, he could connect all the variables of a moving object mathematically, including its acceleration, velocity, and
position, and he could explain what really makes motion happen.
Unit 3 Time Advisory
Unit 3 Learning Outcomes
• 3.1 Introduction to Derivatives
• 3.1.1 Problems for Solution
• 3.1.2 Reading Review
Before moving on to the next assigned reading, you should be comfortable with each of the topics listed in the reading review below:
√ Slopes of Tangent Lines
To review this topic, focus on page 1 and page 2 of the reading.
√ Tangents to y = x^2
To review this topic, focus on pages 2 - 5 of the reading.
• 3.2 The Definition of a Derivative
• 3.2.1 Problems for Solution
• 3.2.2 Reading Review
√ Formal Definition of a Derivative
To review this topic, focus on page 1 and page 2 of the reading.
√ Calculations Using the Definition
To review this topic, focus on pages 2 - 6 of the reading.
√ Tangent Line Formula
To review this topic, focus on page 4 of the reading.
√ sin and cos Examples
To review this topic, focus on page 4 and page 5 of the reading.
√ Interpretations of the Derivative
To review this topic, focus on pages 6 - 8 of the reading.
√ A Useful Formula: D(x^n)
To review this topic, focus on pages 8 - 10 of the reading.
√ Important Definitions, Formulas, and Results for the Derivative, Tangent Line Equation, and Interpretations of f′(x)
To review this topic, focus on page 10 of the reading.
• 3.3 Derivatives, Properties and Formulas
• 3.3.1 Problems for Solution
• 3.3.2 Reading Review
Before moving on to the next assigned reading, you should be comfortable with each of the topics listed in the reading review below:
√ Which Functions Have Derivatives?
To review this topic, focus on pages 1 - 3 of the reading.
√ Derivatives of Elementary Combination of Functions
To review this topic, focus on pages 3 - 6 of the reading.
√ Using the Differentiation Rules
To review this topic, focus on page 7 and page 8 of the reading.
√ Evaluating a Derivative at a Point
To review this topic, focus on page 9 of the reading.
√ Important Results for Differentiability and Continuity
To review this topic, focus on page 9 of the reading.
• 3.4 Derivative Patterns
• 3.4.1 Problems for Solution
• 3.4.2 Reading Review
Before moving on to the next assigned reading, you should be comfortable with each of the topics listed in the reading review below:
√ A Power Rule for Functions: D(f ^n(x))
To review this topic, focus on pages 1 and 2 of the reading.
√ Derivatives of Trigonometric and Exponential Functions
To review this topic, focus on pages 3 - 6 of the reading.
√ Higher Derivatives – Derivatives of Derivatives
To review this topic, focus on page 6 and page 7 of the reading.
√ Bent and Twisted Functions
To review this topic, focus on page 7 and page 8 of the reading.
√ Important Results for Power Rule of Functions and Derivatives of Trigonometric and Exponential Functions
To review this topic, focus on page 9 of the reading.
• 3.5 The Chain Rule
• 3.5.1 Problems for Solution
• 3.5.2 Reading Review
Before moving on to the next assigned reading, you should be comfortable with each of the topics listed in the reading review below:
√ Chain Rule for Differentiating a Composition of Functions
To review this topic, focus on page 1 of the reading.
√ The Chain Rule Using Leibnitz Notation Form
To review this topic, focus on page 2 of the reading.
√ The Chain Rule Composition Form
To review this topic, focus on pages 2 - 5 of the reading.
√ The Chain Rule and Tables of Derivatives
To review this topic, focus on page 5 and page 6 of the reading.
√ The Power Rule for Functions
To review this topic, focus on page 7 of the reading.
□ Assessment: The Saylor Foundation’s “Problem Set 4”
Link: The Saylor Foundation’s “Problem Set 4” (HTML)
Instructions: You are now ready to complete “Problem Set 4.” If you have not already done so, create a free account on the Moodle website in order to access the quiz and then work on
answering the 10 multiple-choice questions. When you have completed the quiz, click on “submit all and finish” to tabulate your score.
Completing this assessment should take approximately 1 hour.
• 3.6 Some Applications of the Chain Rule
• 3.6.1 Problems for Solution
• 3.6.2 Reading Review
Before moving on to the next assigned reading, you should be comfortable with each of the topics listed in the reading review below:
√ Derivatives of Logarithms
To review this topic, focus on page 1 and page 2 of the reading.
√ Derivative of a^x
To review this topic, focus on page 2 and page 3 of the reading.
√ Applied Problems
To review this topic, focus on pages 3 - 5 of the reading.
√ Parametric Equations
To review this topic, focus on page 5 and page 6 of the reading.
√ Speed
To review this topic, focus on page 8 of the reading.
• 3.7 Related Rates
• 3.7.1 Problems for Solution
• 3.7.2 Reading Review
Before moving on to the next assigned reading, you should be comfortable with each of the topics listed in the reading review below:
√ The Derivative as a Rate of Change
To review this topic, focus on pages 1 - 7 of the reading.
• 3.8 Newton’s Method for Finding Roots
• 3.8.1 Problems for Solution
• 3.8.2 Reading Review
Before moving on to the next assigned reading, you should be comfortable with each of the topics listed in the reading review below:
√ Newton’s Method Using the Tangent Line
To review this topic, focus on pages 1 - 3 of the reading.
√ The Algorithm for Newton’s Method
To review this topic, focus on pages 3 - 5 of the reading.
√ Iteration
To review this topic, focus on page 5 of the reading.
√ What Can Go Wrong with Newton’s Method?
To review this topic, focus on page 5 and page 6 of the reading.
√ Chaotic Behavior and Newton’s Method
To review this topic, focus on pages 6 - 8 of the reading.
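Since this section walks through the algorithm for Newton's method and its failure modes, a minimal Python sketch may help fix the ideas (illustrative only; not part of the course materials):

    def newton(f, fprime, x0, tol=1e-12, max_iter=50):
        """Approximate a root of f by Newton's method: x <- x - f(x)/f'(x)."""
        x = x0
        for _ in range(max_iter):
            fx = f(x)
            if abs(fx) < tol:
                return x
            x -= fx / fprime(x)
        # Reaching here illustrates "What Can Go Wrong": a bad starting point
        # can make the iteration diverge, cycle, or behave chaotically.
        raise RuntimeError("Newton's method did not converge")

    # Example: sqrt(2) as the positive root of f(x) = x^2 - 2, starting at x0 = 1.
    root = newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)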
• 3.9 Linear Approximation and Differentials
• 3.9.1 Problems for Solution
• 3.9.2 Reading Review
Before moving on to the next assigned reading, you should be comfortable with each of the topics listed in the reading review below:
√ Linear Approximation and Its Process
To review this topic, focus on pages 1 - 4 of the reading.
√ Applications of Linear Approximation to Measurement Error
To review this topic, focus on pages 4 - 6 of the reading.
√ Relative Error and Percentage Error
To review this topic, focus on page 6 and page 7 of the reading.
√ The Differential of a Function
To review this topic, focus on page 7 and page 8 of the reading.
√ The Linear Approximation Error
To review this topic, focus on pages 8 - 10 of the reading.
• 3.10 Implicit and Logarithmic Differentiation
• 3.10.1 Problems for Solution
• 3.10.2 Reading Review
Before moving on to the next assigned reading, you should be comfortable with each of the topics listed in the reading review below:
√ Implicit Differentiation
To review this topic, focus on pages 1 - 3 of the reading.
√ Logarithmic Differentiation
To review this topic, focus on pages 3 - 5 of the reading.
• Unit 3: Assessment
□ Assessment: The Saylor Foundation’s “Problem Set 5”
Link: The Saylor Foundation’s “Problem Set 5” (HTML)
Instructions: You are now ready to complete “Problem Set 5.” If you have not already done so, create a free account on the Moodle website in order to access the quiz and then work on
answering the 10 multiple-choice questions. When you have completed the quiz, click on “submit all and finish” to tabulate your score.
Completing this assessment should take you approximately 1 hour.
• Unit 4: Derivatives and Graphs
A visual person should find this unit extremely helpful in understanding the concepts of calculus, as a major emphasis in this unit is to display those concepts graphically. That allows us to see
what, so far, we could only imagine. Graphs help us to visualize ideas that are hard to conceptualize - like limits at infinity that nevertheless have a finite value, or asymptotes - lines that a
curve approaches but never quite reaches.
Graphs can also be used in a kind of reverse, by displaying behavior that calls for another mathematical look. It is hard to imagine a process that goes on forever and never quite gets there; but
when the graph suggests that the quantity settles toward a finite value, we had better take a serious look at it mathematically.
Unit 4 Time Advisory
Unit 4 Learning Outcomes
• 4.1 Finding Maximums and Minimums
• 4.1.1 Problems for Solution
• 4.1.2 Reading Review
Before moving on to the next assigned reading, you should be comfortable with each of the topics listed in the reading review below:
√ Methods for Finding Maximums and Minimums
To review this topic, focus on page 1 of the reading.
√ Terminology: Global Maximum, Local Maximum, Maximum Point, Global Minimum, Local Minimum, Global Extreme, and Local Extreme
To review this topic, focus on page 2 of the reading.
√ Finding Maximums and Minimums of a Function
To review this topic, focus on pages 3 - 5 of the reading.
√ Is f(a) a Maximum, Minimum, or Neither?
To review this topic, focus on page 5 of the reading.
√ Endpoint Extremes
To review this topic, focus on pages 5 - 7 of the reading.
√ Critical Numbers
To review this topic, focus on page 7 of the reading.
√ Which Functions Have Extremes?
To review this topic, focus on page 7 and page 8 of the reading.
√ Extreme Value Theorem
To review this topic, focus on page 8 and page 9 of the reading.
• 4.2 The Mean Value Theorem and Its Consequences
• 4.2.1 Problems for Solution
• 4.2.2 Reading Review
Before moving on to the next assigned reading, you should be comfortable with each of the topics listed in the reading review below:
√ Rolle’s Theorem
To review this topic, focus on page 1 and page 2 of the reading.
√ The Mean Value Theorem
To review this topic, focus on pages 2 - 4 of the reading.
√ Consequences of the Mean Value Theorem
To review this topic, focus on pages 4 - 6 of the reading.
• 4.3 The First Derivative and the Shape of a Function f(x)
□ Reading: Washington State Board for Community and Technical Colleges: Dale Hoffman’s Contemporary Calculus: “Section 4.3: The First Derivative and the Shape of f”
Link: Washington State Board for Community and Technical Colleges: Dale Hoffman’s Contemporary Calculus: “Section 4.3: The First Derivative and the Shape of f” (PDF)
Instructions: Read Section 4.3 on pages 1 - 8 to learn how the first derivative is used to determine the shape of functions. Work through practice problems 1 - 9. For the solution to these
problems, see pages 10 - 12.
Reading this section and completing the practice problems should take approximately 1 hour and 30 minutes.
Terms of Use: This resource is licensed under a Creative Commons Attribution 3.0 Unported License. It is attributed to Dale Hoffman, and the original version can be found here.
□ Web Media: YouTube: RootMath’s “First Derivative Test, Example 2, Part 1” and “First Derivative Test, Example 2, Part 2”
• 4.3.1 Problems for Solution
• 4.3.2 Reading Review
Before moving on to the next assigned reading, you should be comfortable with each of the topics listed in the reading review below:
√ Definitions of the Function f
To review this topic, focus on page 1 of the reading.
√ First Shape Theorem
To review this topic, focus on pages 2 - 4 of the reading.
√ Second Shape Theorem
To review this topic, focus on pages 4 - 7 of the reading.
√ Using the Derivative to Test for Extremes
To review this topic, focus on page 7 and page 8 of the reading.
• 4.4 The Second Derivative and the Shape of a Function f(x)
• 4.4.1 Problems for Solution
• 4.4.2 Reading Review
Before moving on to the next assigned reading, you should be comfortable with each of the topics listed in the reading review below:
√ Concavity
To review this topic, focus on page 1 and page 2 of the reading.
√ The Second Derivative Condition for Concavity
To review this topic, focus on page 2 and page 3 of the reading.
√ Feeling the Second Derivative: Acceleration Applications
To review this topic, focus on page 3 and page 4 of the reading.
√ The Second Derivative and Extreme Values
To review this topic, focus on page 4 and page 5 of the reading.
√ Inflection Points
To review this topic, focus on page 5 and page 6 of the reading.
□ Assessment: The Saylor Foundation’s “Problem Set 6”
Link: The Saylor Foundation’s “Problem Set 6” (HTML)
Instructions: You are now ready to complete “Problem Set 6.” If you have not already done so, create a free account on the Moodle website in order to access the quiz and then work on
answering the 10 multiple-choice questions. When you have completed the quiz, click on “submit all and finish” to tabulate your score.
Completing this assessment should take you approximately 1 hour.
• 4.5 Applied Maximum and Minimum Problems
• 4.6 Infinite Limits and Asymptotes
• 4.6.1 Problems for Solution
• 4.6.2 Reading Review
Before moving on to the next assigned reading, you should be comfortable with each of the topics listed in the reading review below:
√ Limits as x Approaches Infinity
To review this topic, focus on pages 1 - 4 of the reading.
√ Using Calculators to Find Limits as x Goes to Infinity
To review this topic, focus on page 5 of the reading.
√ The Limit Is Infinite
To review this topic, focus on page 5 and page 6 of the reading.
√ Horizontal Asymptotes
To review this topic, focus on page 6 and page 7 of the reading.
√ Vertical Asymptotes
To review this topic, focus on page 7 and page 8 of the reading.
√ Other Asymptotes as x Approaches Infinity
To review this topic, focus on page 8 and page 9 of the reading.
√ Definition of lim [x -> ∞] f(x) = k
To review this topic, focus on page 9 and page 10 of the reading.
• 4.7 L’Hopital’s Rule
• 4.7.1 Problems for Solution
• 4.7.2 Reading Review
Before moving on to the next assigned reading, you should be comfortable with each of the topics listed in the reading review below:
√ A Linear Example
To review this topic, focus on page 1 of the reading.
√ 0/0 Form of L’Hopital’s Rule
To review this topic, focus on page 2 of the reading.
√ Strong Version of L’Hopital’s Rule
To review this topic, focus on page 2 and page 3 of the reading.
√ Which Function Grows Faster?
To review this topic, focus on page 4 of the reading.
√ Other Indeterminate Forms
To review this topic, focus on pages 4 - 6 of the reading.
• Unit 4: Assessment
□ Assessment: The Saylor Foundation’s “Problem Set 7”
Link: The Saylor Foundation’s “Problem Set 7” (HTML)
Instructions: You are now ready to complete “Problem Set 7.” If you have not already done so, create a free account on the Moodle website in order to access the quiz and then work on
answering the 10 multiple-choice questions. When you have completed the quiz, click on “submit all and finish” to tabulate your score.
Completing this assessment should take you approximately 1 hour.
• Unit 5: The Integral
While previous units dealt with differential calculus, this unit starts the study of integral calculus. As you may recall, differential calculus began with the development of the intuition behind
the notion of a tangent line. Integral calculus begins with understanding the intuition behind the notion of an area. In fact, we will be able to extend the notion of the area and apply these more
general areas to a variety of problems. This will allow us to unify differential and integral calculus through the Fundamental Theorem of Calculus. Historically, this theorem marked the beginning
of modern mathematics and is extremely important in all applications.
Unit 5 Time Advisory
Unit 5 Learning Outcomes
• 5.1 Introduction to Integration
• 5.1.1 Problems for Solution
• 5.1.2 Reading Review
Before moving on to the next assigned reading, you should be comfortable with each of the topics listed in the reading review below:
√ Area
To review this topic, focus on pages 1 - 4 of the reading.
√ Applications of Area like Distance and Total Accumulation
To review this topic, focus on pages 5 - 7 of the reading.
• 5.2 Sigma Notation and Riemann Sums
• 5.2.1 Problems for Solution
• 5.2.2 Reading Review
Before moving on to the next assigned reading, you should be comfortable with each of the topics listed in the reading review below:
√ Sigma Notation
To review this topic, focus on page 1 and page 2 of the reading.
√ Sums of Areas of Rectangles
To review this topic, focus on page 3 and page 4 of the reading.
√ Area under a Curve – Riemann Sums
To review this topic, focus on pages 5 - 8 of the reading.
√ Two Special Riemann Sums – Lower and Upper Sums
To review this topic, focus on page 9 and page 10 of the reading.
• 5.3 The Definite Integral
• 5.3.1 Problems for Solution
• 5.3.2 Reading Review
Before moving on to the next assigned reading, you should be comfortable with each of the topics listed in the reading review below:
√ The Definition of the Definite Integral
To review this topic, focus on pages 1 - 3 of the reading.
√ Definite Integrals of Negative Functions
To review this topic, focus on pages 3 - 5 of the reading.
√ Units for the Definite Integral
To review this topic, focus on page 5 and page 6 of the reading.
• 5.4 Properties of the Definite Integral
• 5.4.1 Problems for Solution
• 5.4.2 Reading Review
Before moving on to the next assigned reading, you should be comfortable with each of the topics listed in the reading review below:
√ Properties of the Definite Integral
To review this topic, focus on page 1 and page 2 of the reading.
√ Properties of Definite Integrals of Combinations of Functions
To review this topic, focus on pages 3 - 5 of the reading.
√ Functions Defined by Integrals
To review this topic, focus on page 5 and page 6 of the reading.
√ Which Functions Are Integrable?
To review this topic, focus on page 6 and page 7 of the reading.
√ A Nonintegrable Function
To review this topic, focus on page 8 of the reading.
• 5.5 Areas, Integrals, and Antiderivatives
• 5.5.1 Problems for Solution
• 5.5.2 Reading Review
Before moving on to the next assigned reading, you should be comfortable with each of the topics listed in the reading review below:
√ Area Functions as an Antiderivative
To review this topic, focus on page 1 and page 2 of the reading.
√ Using Antiderivatives to Evaluate Definite Integrals
To review this topic, focus on pages 2 - 4 of the reading.
√ Integrals, Antiderivatives, and Applications
To review this topic, focus on pages 4 - 6 of the reading.
• 5.6 The Fundamental Theorem of Calculus
• 5.6.1 Problems for Solution
• 5.6.2 Reading Review
Before moving on to the next assigned reading, you should be comfortable with each of the topics listed in the reading review below:
√ Antiderivatives
To review this topic, focus on pages 1 - 3 of the reading.
√ Evaluating Definite Integrals
To review this topic, focus on page 4 and page 5 of the reading.
√ Steps for Calculus Application Problems
To review this topic, focus on pages 6 - 8 of the reading.
√ Leibnitz’s Rule for Differentiating Integrals
To review this topic, focus on page 9 of the reading.
□ Assessment: The Saylor Foundation’s “Problem Set 8”
Link: The Saylor Foundation’s “Problem Set 8” (HTML)
Instructions: You are now ready to complete “Problem Set 8.” If you have not already done so, create a free account on the Moodle website in order to access the quiz and then work on
answering the 10 multiple-choice questions. When you have completed the quiz, click on “submit all and finish” to tabulate your score.
Completing this assessment should take you approximately 1 hour.
• 5.7 Finding Antiderivatives
□ Reading: Washington State Board for Community and Technical Colleges: Dale Hoffman’s Contemporary Calculus: “Section 5.7: Finding Antiderivatives”
Link: Washington State Board for Community and Technical Colleges: Dale Hoffman’s Contemporary Calculus: “Section 5.7: Finding Antiderivatives” (PDF)
Instructions: Read Section 5.7 on pages 1 - 9 to see how one can (sometimes) find an antiderivative. In particular, we will discuss the change of variable technique. Change of variable, also
called substitution or u-substitution (for the most commonly-used variable), is a powerful technique that you will use time and again in integration. It allows you to simplify a complicated
function to show how basic rules of integration apply to the function. Work through practice problems 1 - 4. For solutions to these problems, see page 12 and page 13. (A short worked example of the technique follows the video resources below.)
Reading this section and completing the practice problems should take approximately 1 hour.
Terms of Use: This resource is licensed under a Creative Commons Attribution 3.0 Unported License. It is attributed to Dale Hoffman, and the original version can be found here.
□ Web Media: YouTube: RootMath’s “Integration: U-Substitution - Ex. 5” and “Integration: U-Substitution - Ex. 6”
Link: YouTube: RootMath’s “Integration: U-Substitution - Ex. 5” (YouTube) and “Integration: U-Substitution - Ex. 6” (YouTube)
Instructions: Watch these videos on change of variable, also called substitution or u-substitution.
Watching these videos and taking notes should take approximately 30 minutes.
Terms of Use: These resources are licensed under a Creative Commons Attribution 3.0 Unported License. They are attributed to RootMath.
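As promised above, one representative change-of-variable computation of the kind these resources cover (a generic textbook example, not necessarily one from the reading or videos): with the substitution $u = x^2$, so that $du = 2x\,dx$,

$\int 2x\cos(x^2)\,dx = \int \cos(u)\,du = \sin(u) + C = \sin(x^2) + C.$

Differentiating $\sin(x^2) + C$ with the chain rule recovers the integrand, which is a quick check that the substitution was carried out correctly.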
• 5.7.1 Problems for Solution
• 5.7.2 Reading Review
Before moving on to the next assigned reading, you should be comfortable with each of the topics listed in the reading review below:
√ Indefinite Integrals and Antiderivatives
To review this topic, focus on page 1 of the reading.
√ Properties of Antiderivatives (Indefinite Integrals)
To review this topic, focus on page 2 and page 3 of the reading.
√ Antiderivatives of More Complicated Functions
To review this topic, focus on page 3 and page 4 of the reading.
√ Getting the Constant Right
To review this topic, focus on page 4 and page 5 of the reading.
√ Making Patterns More Obvious - Changing Variables
To review this topic, focus on pages 5 - 8 of the reading.
√ Changing the Variables and Definite Integrals
To review this topic, focus on page 8 and page 9 of the reading.
√ Special Transformations - Antiderivatives of sin^2(x) and cos^2(x)
To review this topic, focus on page 9 of the reading.
• 5.8 First Application of Definite Integral
• 5.8.1 Problems for Solution
• 5.8.2 Reading Review
Before moving on to the next assigned reading, you should be comfortable with each of the topics listed in the reading review below:
√ Area between Graphs of Two Functions
To review this topic, focus on pages 1 - 4 of the reading.
√ Average (Mean) Value of a Function
To review this topic, focus on pages 4 - 6 of the reading.
√ A Definite Integral Application - Work
To review this topic, focus on pages 6 - 8 of the reading.
• 5.9 Using Tables to Find Antiderivatives
• 5.9.1 Problems for Solution
• 5.9.2 Reading Review
Before moving on to the next assigned reading, you should be comfortable with each of the topics listed in the reading review below:
√ Table of Integrals
To review this topic, focus on pages 1 - 3 of the reading.
√ Using Recursive Formulas
To review this topic, focus on page 3 of the reading.
• Unit 5: Assessment
□ Assessment: The Saylor Foundation’s “Problem Set 9”
Link: The Saylor Foundation’s “Problem Set 9” (HTML)
Instructions: You are now ready to complete “Problem Set 9.” If you have not already done so, create a free account on the Moodle website in order to access the quiz and then work on
answering the 10 multiple-choice questions. When you have completed the quiz, click on “submit all and finish” to tabulate your score.
Completing this assessment should take you approximately 1 hour.
• Unit 6: Appendix
By reviewing and having access to this unit, you will have an invaluable list of references to assist you in solving future calculus problems after this course has ended. It is a standard
experience, when solving calculus problems on your own, to react to the new problem with the following: “We did not solve that kind of problem in the course.” Ah, but we did, in that the new
problem is often a combination, or composition, of two problem types that were covered.
The course could not cover all possible trigonometric functions you will encounter. If you encounter a need for the derivative of tan(x), it is sufficient to recall that tan(x) = sin(x)/cos(x) and
that sine and cosine were covered. You can eventually become so good at this that future calculus problems can almost seem to be little more than plugging into formulas.
Engineering students, who have to take several courses that involve the use of calculus, are noted for having a Table of Integrals on their hip wherever they go, such as the one posted on Wikipedia.
* Terms of Use: This resource is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License. The original Wikipedia version can be found here.
Unit 6 Time Advisory
□ Reading: Washington State Board for Community and Technical Colleges: Dale Hoffman’s Contemporary Calculus: “Section 6.1: Calculus Reference”
Link: Washington State Board for Community and Technical Colleges: Dale Hoffman’s Contemporary Calculus: “Section 6.1: Calculus Reference” (PDF)
Instructions: There are neither readings nor problems associated with this section. Rather, it consists of two pages of formulas that can be useful to you in your further explorations of
calculus, including the final exam. You should be able to quickly print out these two pages or save them where they can be easily located as a quick reference when needed.
• Final Exam
□ Final Exam: The Saylor Foundation’s “MA005 Final Exam”
Link: The Saylor Foundation’s “MA005 Final Exam” (HTML)
Instructions: You must be logged into your Saylor Foundation School account in order to access this exam. If you do not yet have an account, you will be able to create one, free of charge,
after clicking the link.
Completing this assessment should take approximately 1 hour.
• NCCRS Credit Recommended Exam
□ Optional Final Exam: The Saylor Foundation's “MA005 Final Exam”
Link: The Saylor Foundation’s “MA005 Final Exam” (HTML)
Instructions: The above linked exam has been specially created as part of our National College Credit Recommendation Service (NCCRS) review program. Successfully passing this exam will make
students eligible to receive a transcript with 4 hours of recommended college credit.
Please note that because this exam has the possibility to be a credit-bearing exam, it must be administered in a proctored environment, and is therefore password protected. Further
information about Saylor's NCCRS program and the options and requirements for proctoring can be found here. Make sure to read this page carefully before attempting this exam.
If you choose to take this exam, you may want to first take the regular, certificate-bearing MA005 Final Exam as a practice test, which you can find above.
Wolfram Demonstrations Project
Pack Disconnected Components
In Mathematica 8 you can specify how disconnected components of a graph should be packed together using the suboption "PackingLayout" to the option GraphLayout.
This Demonstration shows the five available packing methods applied to a highly disconnected graph with a variable number of vertices: there is an edge between n and m when m is the left rotation
of n in base 2. For instance, the edge 27 - 23 appears because the binary representation of 27 is 11011, which after a left rotation becomes 10111, the binary representation of 23.
This is related to Josephus' problem, which considers a group of men arranged in a circle under the edict that every second man will be executed, going around the circle until only one remains. The
problem is, where should you sit to be the last survivor? Hint: with 27 men you should occupy position 23.
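A minimal Python sketch of the underlying construction (an illustration of the rotation rule and the Josephus connection; not the Demonstration's Mathematica source, and the vertex range and the treatment of fixed points such as 1, whose rotation is itself, are assumptions):

    def left_rotate(n):
        """Rotate the binary representation of a positive integer left by one bit.

        E.g. 27 = 0b11011 -> 0b10111 = 23.
        """
        b = n.bit_length()
        return ((n ^ (1 << (b - 1))) << 1) | 1  # strip the leading bit, shift, append it

    def josephus_survivor(n):
        """Brute force: men 1..n stand in a circle and every second man is executed."""
        men = list(range(1, n + 1))
        i = 0
        while len(men) > 1:
            i = (i + 1) % len(men)  # skip one man, remove the next
            men.pop(i)
        return men[0]

    # The survivor's position is the left rotation of n in base 2:
    assert left_rotate(27) == josephus_survivor(27) == 23

    # Edge list for a graph on vertices 1..N, as in the Demonstration:
    N = 30
    edges = [(n, left_rotate(n)) for n in range(1, N + 1) if left_rotate(n) <= N]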
Wolfram Research Contributes Central Ideas to Web Math Standard
Mathematica's Typesetting Technology Inspires MathML's Framework
February 24, 1998--The web has gained new powers of technical communication, thanks to MathML, the recently promulgated standard for describing mathematical expressions. The MathML standard has been
adopted by the World Wide Web Consortium, an international organization which defines the formats for storing and transmitting web information. And the key ideas forming the core of the MathML
standard are derived directly from Wolfram Research's typesetting technology.
Since this means there is a very close relationship between the web's markup standard and Mathematica's internal representation, Mathematica is in a unique position to integrate a full-featured
MathML authoring and evaluation environment for documents with MathML expressions into the flexible, extensible technical document authoring system built into Mathematica 4.
Technical users of the web know that HTML, the markup language used to describe the layout and contents of a web page, has had until now a serious shortcoming: its inability to present mathematical
expressions in an efficient manner. Anyone wanting to display an equation of even minimal complexity had to choose between two unsatisfactory workarounds: either an ASCII approximation of the
equation, or a snapshot of the expression saved as a GIF or JPEG file. In either case, readers were unable to cut expressions from a web site and paste them into a technical computing system like
Mathematica in the same way they can cut text from a web page and paste it into a word processor.
MathML changes that. It allows for a much more efficient use of bandwidth because it carries only the kind of information needed for the web browser to redraw the equation properly. To work, though,
MathML required a standard way of describing the layout of a mathematical expression. This is where Mathematica's contribution begins.
The layout of mathematical expressions, as drawn on the screen or printed on paper, is determined by a set of recursively nested rectangular areas called "boxes." Drawing an expression properly
involves building it up out of smaller pieces, which may themselves be built from pieces even smaller. These boxes can be thought of as stencils for fractions, superscripts, square roots and other
radicals, and so on. As an example, we'll demonstrate the box structure of Newton's Law of Universal Gravitation, from the Philosophiae naturalis principia mathematica of 1687.
Here is the expression, which in linear form reads F→ = -G (m M / r^2) e→_r:
And here is the nested box structure:
Literally hundreds of layout conventions and special mathematical characters have accumulated through the centuries, and changing the position of even a single element in an expression can
drastically change its meaning. Mathematica, however, can represent these layout conventions using combinations of under two dozen basic box types. Mathematica represents the box structure of
mathematical expressions using objects like RowBox, SuperscriptBox, and FractionBox. The box structure of Newton's equation looks like this in Mathematica:
OverscriptBox["F", "->"],
RowBox[{"-", "G"}],
RowBox[{"m", " ", "M"}],
SuperscriptBox["r", "2"]],
OverscriptBox["e", "->"], "r"]}]}]
The indentation shows the nested structure of the representation. Compare that nested structure with the MathML code for the same expression, and you will see that, although the syntax is different,
the basic structure (as shown by the indentation) is fundamentally the same:
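The original MathML listing is not reproduced here; the following presentation-MathML rendering of the same expression is a plausible reconstruction for illustration:

    <mrow>
      <mover><mi>F</mi><mo>&#x2192;</mo></mover>
      <mo>=</mo>
      <mrow>
        <mrow><mo>-</mo><mi>G</mi></mrow>
        <mfrac>
          <mrow><mi>m</mi><mi>M</mi></mrow>
          <msup><mi>r</mi><mn>2</mn></msup>
        </mfrac>
        <msub>
          <mover><mi>e</mi><mo>&#x2192;</mo></mover>
          <mi>r</mi>
        </msub>
      </mrow>
    </mrow>

Each MathML element (mfrac, msup, mover, and so on) plays the role of the corresponding Mathematica box, with the same nesting.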
The similarity in approach between Mathematica and MathML is no coincidence. Neil Soiffer and Bruce Smith of Wolfram Research were founding members of the drafting committee and creators of the
original proposal upon which MathML was based. They brought to the committee Mathematica's simple and elegant vocabulary for the layout of mathematical expressions. With only minor modifications,
that vocabulary forms the core of MathML's capability to transmit how an expression looks.
Why did the drafting committee turn to Mathematica's box structure as a starting place? Wolfram Research had done many years of research into the representation, interpretation, and display of
mathematical expressions as part of the development process of Mathematica 3. That effort gave us a thorough understanding of the difficult issues involved in fully integrating presentation and
evaluation within a single environment.
"The MathML standard will play a central role in the technical communication of the future," said Neil Soiffer. "We worked hard to insure that MathML is powerful enough so that technical text can not
only be displayed, but used directly in computations.
"This is a win-win situation," Soiffer said. "Mathematica users will be able to communicate their results to more people, and they will be able to use other people's results more readily. I am really
pleased to have been a part of this groundbreaking work."
FOM: proof theory and the question of consistency
Anton Setzer setzer at math.uu.se
Fri Apr 10 16:27:06 EDT 1998
This is a reply to Stephen Simpson's FOM posting on 9 Apr 1998 10:13:46 -0400 (EDT)
> (i) The deemphasis of consistency proofs is not only my opinion.
> Feferman and other eminent proof-theorists have expressed similar
> opinions. Indeed, are there any proof-theorists who still take the
> traditional view? (By the traditional view, I mean the view that
> consistency proofs are a principal reason to be interested in
> Gentzen-style ordinal analysis.)
> Would some of the other proof-theorists on the FOM list (Tait,
> Rathjen, Friedman, Pohlers, Takeuti, Sommer, Sieg, Buss, Kohlenbach,
> Pfeiffer, Fine, ...) care to commment on this matter?
Although I belong to the "..." in the list of proof theorists above, I would
like to comment on this matter, since I am personally very much interested
in the question of consistency.
1. A lot of research has been carried out recently in analyzing Martin-Loef's
type theory proof theoretically. One reason for this interest was to fulfill
the original goal of proof theory: to prove the consistency of large theories.
In traditional proof theory we reduce the consistency of theories for
carrying out mathematics to the well-ordering of ordinal notation systems.
The ordinal notation systems usually used for the analysis of stronger
theories (impredicative in the sense of proof theory, i.e. of strength
bigger than the ordinal $\Gamma_0$) are usually presented in a set theoretical
way by using cardinals and even large cardinals. These large cardinals
can be replaced by their recursive analogues, but still the problem
remains that one reduces the consistency of theories to a fragment of
the standard model of set theory, so one does not gain a real
consistency proof.
Now, using other constructive theories one can go with this reduction one step
further: we prove the well-ordering of the ordinal notation systems in some
constructive theory, which we consider as intuitively consistent. Then we get
a proof of the consistency of the original theory. The constructive theories
replaces the finitary methods in the original Hilbert program, which didn't
suffice by Goedel's incompleteness theorem.
Of course, what remains is the question, whether the constructive
theory used is really intuitively consistent. I personally believe that
there are very good reasons for believing in the consistency of Martin-Loef's
type theory, at least if one considers the versions with one universe and
W-type and any extensions by inductive recursive definitions. If one
is a little bit more sceptical one can say, that at least Martin-Loef's
type theory is defined in the best possible way to guarantee its
However, in the case of theories which go slightly beyond one recursive Mahlo
ordinal, (they are still extremely weak relative to full set theory), the only
extension of Martin-Loef's type theory of this strength which could be
regarded as predicative is the Mahlo-universe, developed by myself, and there
has now for several years been a discussion about whether this can be really
accepted as consistent in itself (I personally believe in the consistency
of it in itself, but of course I have as well a model for it). And
even if one accepts the Mahlo-universe, it seems to be at the moment
almost impossible to define extensions of Martin-Loef's type theory
which go substantially beyond Mahlo. (Of course one can build
certain hierarchies of Mahlo-universes but I don't regard such extensions
as substantially stronger).
2. Another approach taken by myself is to try to present the ordinal notation
systems in a different way together with some argument, that they are
intuitively well-ordered.
I started this project originally, because I was all the time
facing the criticism, that proof theory is too technical and nobody can
understand it any longer. Although proof theory has become very technical
I don't think that it is much more technical than other subjects of
mathematical logic. In my opinion the real hindrance to proof theory is
the mysterious way of using large cardinals for denoting small ordinals:
it is easy to understand the proof theoretical analysis, but at the end of
it one has reduced the consistency of a theory to some --- for the
outsider --- strange looking ordinal notation system and the question is:
what have we gained? Part 1. above gives a partial answer, but the question
remains: isn't it possible to reduce the consistency of the theories considered
to constructive theories in a more direct way? Are ordinals not only a
technical tool for carrying out this reduction, and if we can get rid of them,
their role vanishes?
Now small ordinal notation systems (below $\Gamma_0$) and even some
extensions (so called "Schuette Klammer Symbole") are in my opinion
intuitively well-ordered and therefore a reduction to their
well-ordering provides a real consistency proof. This is in my opinion one
reason, why Gentzen's result was appreciated so much, and these intuitive
well-ordering proofs make it easy to teach ordinal notation systems of this
We can formalize the intuitive argument for it's well-ordering by selecting
systematically ordinal notations in such a way, that the new one will be
always chosen as the least one with respect to some termination ordering out
of a certain (during the process increasing) set of notations
which are well-ordered (provable in primitive recursive arithmetic relative
to the well-ordering of the ordinal notations already selected) in such
a way that the resulting sequence of ordinal notations is increasing,
exhausts all ordinal notations and is therefore well-ordered. I denote this
process as "from below": we systematically climb up the proof theoretical
scale and refer always only to ordinals which we have introduced before.
A complete description of this argument is given in my article Ordinal systems
(see below).
This argument can now be extended. First I thought the Bachmann-Howard
ordinal is the limit, but I found ways of getting beyond it and the
strength I have reached up to now (although not everything is written down
yet) is Kripke Platek set theory with one recursive Mahlo ordinal.
And Mahlo does not seem to be the end of this approach, the only limit I have
reached up to now is my mental capacity.
3. Of course, whatever approach we take, we cannot bypass Goedel's results.
What we can do is reduce the consistency of theories to some principles
which we regard as less doubtful than the theories in question. The
consistency of the principles has to be shown by philosophical
methods, and there is always space for mistakes in such considerations;
we cannot achieve the same degree of security as in mathematical proofs.
References (those marked with (*) can be found on my web page of articles):
For 1.:
S.: Well-ordering proofs for Martin-Löf Type Theory with
W-type and one Universe. To appear in APAL. (*)
Griffor, E. and Rathjen, M.: The strength of some Martin Loef type theories,
Arch. math. log, 33, 1994, pp. 347 - 385.
S.: Extending Martin-Löf Type Theory by one Mahlo-Universe (*)
S.: A Model for a type theory with Mahlo Universe (*)
For 1. and 2.:
S.: Well-ordering proofs in Martin-Löf's Type Theory. To appear in
"25 years of constructive type theory".(*)
For 2.:
S.: Ordinal Systems (*)
Name: Anton Setzer
Position: Research Assistant
Institution: Department of Mathematics, Uppsala University
Research interest: Proof theory, especially of Martin-Loef Type Theory
More information: http://www.math.uu.se/~setzer
Anton Setzer Telephone:
Department of Mathematics (national) 018 4713284
Uppsala University (international) +46 18 4713284
P.O. Box 480 Fax:
S-751 06 Uppsala (national) 018 4713201
SWEDEN (international) +46 18 4713201
More information about the FOM mailing list
way-below relation
Wiki says that an infinite set may be way-below another set. More precisely, in a power set, pow(S), with subset order and with X and Y subsets of S, does X way-below Y entail that X is finite?
"...but even infinite sets can be way below some other set"
3 Answers
I think that Andrej Bauer's answer doesn't really address the important point. Obviously being way below is an order theoretic notion, i.e., it is preserved under isomorphism, while being a
finite set is not an order theoretic notion. In particular, whenever you have any order with an element way below another, you can replace the smaller one by an infinite set that fits into
the order exactly as the original element. Now there is an infinite set way below another set. But this is artificial and doesn't really say anything.
However, even in partial orders of sets an infinite set can be way below another set. Identify each $r\in\mathbb R$ with the set $\{q\in\mathbb Q:q\le r\}$, i.e., with the left half of a
Dedekind cut in $\mathbb Q$ that corresponds to $r$.
Now the order on $\mathbb R$ is just set-theoretic inclusion of these sets of rational numbers. However, as Andrej points out, $0$ is way below $1$. This does not conflict with James
Cranch's answer, since the partial order of sets that we are looking at is not the full power set of $\mathbb Q$ but just a subordering of that, and in particular one that does not contain
any finite set.
It can be shown that every partial order is in fact isomorphic to a partial order of infinite sets ordered by inclusion, but the point in my example (or Andrej's example, for that matter)
is that whenever $\mathbb P$ is a collection of sets such that $(\mathbb P,\subseteq)$ is isomorphic to $(\mathbb R,\leq)$, then the element corresponding to $0$ is way below the element
corresponding to $1$, simply since this is an order theoretic notion, but $0$ is never represented by a finite set, since the corresponding element of $\mathbb P$ it has infinitely many
sets below it, i.e., infinitely many subsets.
Indeed, an ad hoc way of doing that replacement is to consider two infinite sets $A\subset B$, and work with the set of sets $A\subset X\subset B$: a kind of relative powerset. Of
course, this is just the same thing as the powerset $\mathbb{P}(B\backslash A)$, so the sets that are way below others are the sets which are $A\cup F$ for $F$ finite. If $A$ is infinite,
these are infinite sets! – James Cranch May 12 '11 at 9:21
In a power set, isn't Y always the supremum of the directed set of all its finite subsets? That would seem to me to entail that X must be finite.
EDIT: Perhaps the authors were thinking of a different structure. If you consider the partial ordering on isomorphism classes of subsets of $A$, where $X\leq Y$ if there is an
injection from $X$ to $Y$ (which need not be an inclusion), then things are rather different.
The poset then looks like a truncation of the poset of ordinals.
And the way-below relation is very different. For example, $X$ is way below $Y$ if there is a set $Z$ whose size is a limit cardinal such that $X\lt Z\leq Y$.
Hence a set of cardinality $\aleph_{17}$ will be way below a set of cardinality $\aleph_{\omega}$.
This is not a poset. – Emil Jeřábek May 12 '11 at 16:15
I've edited it so that it will be. – James Cranch May 12 '11 at 19:27
I think what Wikipedia wants to say (but does so badly) is that there can be other order relations in which one infinite set is way below another. For example, if we view real numbers as
equivalence classes of Cauchy sequences, then every real is an infinite set, yet 0 is way below 1 with respect to the usual order of the reals.
Of course, the way below relation arising from the subset relation is "finite subset".
So, don't trust everything that's written in Wikipedia.
inverse tan x power series
For the power series of inverse tan(x), the geometric series is:
$1 - t^2 + t^4 - t^6 + \cdots + t^{4n}$
Find the sum and hence prove that for $0 < t < x$:
$\frac{1}{1+t^2} < 1 - t^2 + t^4 - t^6 + \cdots + t^{4n}$
I haven't got a clue how to approach this.
I'm a bit puzzled by the reference to the series for the inverse tan
function here. It seems unrelated to the problem.
Just a mo - it seems that this series is the derivative (term-by-term)
of the Gregory series:
$\arctan(x)=x-x^3/3+x^5/5-\cdots+(-1)^n x^{2n+1}/(2n+1)+\cdots$
which may be what it wants you to use.
I am more than a bit puzzled myself; in fact, I have no idea. But that is what the question said. I have got only as far as:
a = 1 and n = 2n + 1
$r = -t^2$
$S_n = \frac{(-t^2)^{2n+1} - 1}{-t^2 - 1}$
but I am not sure about my n = 2n + 1 and even if that is right, I have no idea how to proceed from there to get the proof.
Start with:
$\arctan(x)=x-x^3/3+x^5/5-\cdots+(-1)^n x^{2n+1}/(2n+1)+\cdots$
then differentiate to get:
$\frac{d}{dx}\arctan(x)=\frac{1}{1+x^2}=1-x^2+x^4-\cdots+(-1)^n x^{2n}+\cdots$.
Now if we truncate the series after a positive term, the remainder is again a geometric series like the one we had originally, but multiplied by -1 times the
last term included in the sum, and the sum of the series comprising the remainder is negative. Which will prove the result (after a bit of jiggery-pokery).
(Or use the remainder form for Taylor series; that should also give the result.)
We also need the restriction that $|x|<1$.
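A worked version of the finite-sum computation behind the hints above (a sketch added here; it is not from the original thread): the series has first term $1$, common ratio $-t^2$, and $2n+1$ terms, so for $t \neq 0$

$1 - t^2 + t^4 - \cdots + t^{4n} = \frac{1 - (-t^2)^{2n+1}}{1 + t^2} = \frac{1 + t^{4n+2}}{1 + t^2} = \frac{1}{1+t^2} + \frac{t^{4n+2}}{1+t^2} > \frac{1}{1+t^2},$

which is exactly the inequality asked for in the original post; integrating both sides from $0$ to $x$ then relates $\arctan(x)$ to the partial sums of the Gregory series.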
There is a much simpler way. If you use a finite geometric expansion and then take the limit, it will hold for $x=\pm 1$ as well, but you will have a remainder term. It can be shown very
easily that the remainder term approaches zero because, for $0 \le t \le 1$,
$\int_0^t \frac{x^n}{1+x^2}\, dx \le \int_0^t x^n\, dx = \frac{t^{n+1}}{n+1}$
Since the dominating term converges to zero, so too does the remainder term.
Except the question apparently wants the student to use the series for the inverse tan function.
Vernon Hills Math Tutor
Find a Vernon Hills Math Tutor
...I love to teach and develop a way to meet each student's needs when it comes to learning. This is something I really enjoy doing and will work extremely hard to make it successful for the
student. The learning environment should be structured, hard working, and encouraging.
19 Subjects: including prealgebra, reading, chemistry, biology
...I love math and helping students understand it. I first tutored math in college and have been tutoring for a couple years independently. My students' grades improve quickly, usually after only
a few sessions.
26 Subjects: including ACT Math, trigonometry, Spanish, precalculus
I have a PhD in microbial genetics and have worked in academic research as a university professor and for commercial companies in the biotechnology manufacturing sector. I have a broad background
in science and math, a love of written and oral communication and a strong desire to share the knowledg...
35 Subjects: including algebra 2, ACT Math, geometry, physics
...Three times, I served as a home-bound tutor for students who were home with a long term illness. When working in my science classroom, if my students did not understand something I changed
methods and tried again. I have taught Earth science, ecology, chemistry, biology/genetics, and physics.
13 Subjects: including prealgebra, algebra 1, physics, biology
...Fowler's, MLA, Chicago, the Economist, Bierce, Strunk and White), growing up loving the most eminent dictionary in the English tongue (i.e. the Oxford English) and dozens of quotations books that
exemplify these concepts in the best way to have ever graced the ear of anglophone-kind. And I underst...
57 Subjects: including trigonometry, differential equations, linear algebra, SAT math
Efficient and Exact Sampling of Simple Graphs with Given Arbitrary Degree Sequence
Uniform sampling from graphical realizations of a given degree sequence is a fundamental component in simulation-based measurements of network observables, with applications ranging from epidemics,
through social networks to Internet modeling. Existing graph sampling methods are either link-swap based (Markov-Chain Monte Carlo algorithms) or stub-matching based (the Configuration Model). Both
types are ill-controlled, with typically unknown mixing times for link-swap methods and uncontrolled rejections for the Configuration Model. Here we propose an efficient, polynomial time algorithm
that generates statistically independent graph samples with a given, arbitrary, degree sequence. The algorithm provides a weight associated with each sample, allowing the observable to be measured
either uniformly over the graph ensemble, or, alternatively, with a desired distribution. Unlike other algorithms, this method always produces a sample, without back-tracking or rejections. Using a
central limit theorem-based reasoning, we argue that for large N, and for degree sequences admitting many realizations, the sample weights are expected to have a lognormal distribution. As examples,
we apply our algorithm to generate networks with degree sequences drawn from power-law distributions and from binomial distributions.
Citation: Del Genio CI, Kim H, Toroczkai Z, Bassler KE (2010) Efficient and Exact Sampling of Simple Graphs with Given Arbitrary Degree Sequence. PLoS ONE 5(4): e10012. doi:10.1371/
Editor: Fabio Rapallo, University of East Piedmont, Italy
Received: February 15, 2010; Accepted: March 8, 2010; Published: April 8, 2010
Copyright: © 2010 Del Genio et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and
reproduction in any medium, provided the original author and source are credited.
Funding: CIDG and KEB are supported by the National Science Foundation (NSF) through grant DMR-0908286 and by the Norman Hackerman Advanced Research Program through grant 95921. HK and ZT are
supported in part by the NSF BCS-0826958 and by the Defense Threat Reduction Agency (DTRA) through HDTRA 201473-35045. The funders had no role in study design, data collection and analysis, decision
to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
Network representation has become an increasingly widespread methodology of analysis to gain insight into the behavior of complex systems, ranging from gene regulatory networks to human
infrastructures such as the Internet, power-grids and airline transportation, through metabolism, epidemics and social sciences [1]–[4]. These studies are primarily data driven, where connectivity
information is collected, and the structural properties of the resulting graphs are analyzed for modeling purposes. However, rather frequently, full connectivity data is unavailable, and the modeling
has to resort to considerations on the class of graphs that obeys the available structural data. A rather typical situation is when the only information available about the network is the degree sequence of its nodes. For example, in epidemiology studies of sexually transmitted diseases [5], anonymous surveys may only collect the number of sexual partners of a person in a given period of
time, not their identity. Epidemiologists are then faced with constructing a typical contact graph having the observed degree sequence, on which disease spread scenarios can be tested. Another reason
for studying classes or ensembles of graphs obeying constraints comes from the fact that the network structure of many large-scale real-world systems is not the result of a global design, but of
complex dynamical processes with many stochastic elements. Accordingly, a statistical mechanics approach [1] can be employed to characterize the collective properties of the system emerging from its
node level (microscopic) properties. In this approach, statistical ensembles of graphs are defined [6], [7], representing “connectivity microstates” from which macroscopic system level properties are
inferred via averaging. Here we focus on the degree as a node characteristic, which could represent, for example, the number of friends of a person, the valence of an atom in a chemical compound, the
number of clients of a router, etc.
In spite of its practical importance, finding a method to construct degree-based graphs in a way that allows the corresponding graph ensemble to be properly sampled has been a long-standing open
problem in the network modeling community (references using various approaches are given below). Here we present a solution to this problem, using a biased sampling approach. We consider degree-based graph ensembles on two levels: 1) sequence level, where a specific sequence of degrees is given, and 2) distribution level, where the sequences are themselves drawn from a given degree distribution P(d).
In the remainder we will focus on the fundamental case of labeled, undirected simple graphs. In a simple graph any link connects a single pair of distinct nodes, and self-loops and multiple links between the same pair of nodes are not allowed. Without loss of generality, consider a sequence of positive integers D = (d_1, d_2, ..., d_n), arranged in non-increasing order: d_1 >= d_2 >= ... >= d_n. If there is at least one simple graph with degree sequence D, the sequence is called a graphical sequence, and we say that the graph realizes D. Note that not every sequence of positive integers can be realized by simple graphs. For example, there is no simple graph with degree sequence (3,3,1,1) or (1,1,1), while the sequence (2,2,2) can obviously be realized by a simple graph (a triangle). In general, if a sequence is graphical, then there can be several graphs having the same degree sequence. Also note that, given a graphical sequence, the careless or random placing of links between the nodes may not result in a simple graph.
Recently, a direct, swap-free method to systematically construct all the simple graphs realizing a given graphical sequence was presented [8]. However, in general (for exceptions see Ref. [9]), the number of elements of the set G(D) of all graphs that realize a sequence D increases very quickly with n: a simple upper bound is provided by the number of all graphs with sequence D when multiple links and loops are allowed, namely the number (2M - 1)!! of pairings of the 2M = d_1 + ... + d_n stubs. Thus, typically, systematically constructing all graphs with a given sequence is practical only for short sequences, such as when determining the structural isomers of alkanes [8].
For larger sequences, and in particular for modeling real-world complex networks, it becomes necessary to sample . Accordingly, several variants based on the Markov Chain Monte Carlo (MCMC) method
were developed. They use link-swaps [10] (“switches”) to produce pseudo-random samples from . Unfortunately, most of them are based on heuristics, and apart from some special sequences, little has
been rigorously shown about the methods' mixing time, and accordingly they are ill-controlled. The literature on such MCMC methods is simply too extensive to be reviewed here; instead, we refer the interested reader to Refs. [11]–[13] and the references therein. Finally, we recall the main swap-free method producing uniform random samples from G(D), namely the configuration model (CM) [14]–[17].
This method picks a pair of nodes uniformly at random and connects them, until a rejection occurs due to a double link or a self-loop, in which case it restarts from the very beginning. For this
reason, the CM can become very slow, as shown in the Discussion section. The CM has inspired approximation methods as well [18], and methods that construct random graphs with given expected degrees.
Here, by developing new results from the theorems in Ref. [8], we present an efficient algorithm that solves this fundamental graph sampling problem, and it is exact in the sense that it is not based
on any heuristics. Given a graphical sequence, the algorithm always finishes with a simple graph realization in polynomial time, and it is rejection free. While the samples obtained are not uniformly
generated, the algorithm also provides the exact weight for each sample, which can then be used to produce averages of arbitrary graph observables measured uniformly, or following any given distribution over G(D).
Mathematical foundations
Before introducing the algorithm, we state some results that will be useful later on. We begin with the Erdös-Gallai (EG) theorem [20], a fundamental result that allows us to determine whether a given sequence of non-negative integers, called a "degree sequence" hereafter, is graphical.

Theorem 1 (Erdös-Gallai).

A non-increasing degree sequence d_1 >= d_2 >= ... >= d_n is graphical if and only if the sum of the degrees is even and, for all 1 <= k <= n:

    d_1 + ... + d_k <= k(k - 1) + sum_{i=k+1..n} min(d_i, k).   (1)
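To make the test concrete, here is a direct, unoptimized transcription of Theorem 1 in Python; the function name and the input checks are my own, not from the paper:

    import numpy as np

    def is_graphical(d):
        # Direct Erdos-Gallai test; d must be a non-increasing sequence
        # of non-negative integers.
        d = np.asarray(d, dtype=int)
        n = len(d)
        if n == 0:
            return True
        if d.sum() % 2 == 1 or d[-1] < 0 or d[0] >= n:
            return False
        for k in range(1, n + 1):
            lhs = d[:k].sum()
            rhs = k * (k - 1) + np.minimum(d[k:], k).sum()
            if lhs > rhs:
                return False
        return True

For example, is_graphical([3, 3, 1, 1]) returns False, while is_graphical([2, 2, 2]) returns True.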
A necessary and sufficient condition for the graphicality of a degree sequence which is constrained from having links between some node v and a "forbidden set" F of other nodes is given by the star-constrained graphicality theorem [8]. In this case the forbidden links are all incident on one node and thus form a "star". To state the theorem, we first define the "leftmost adjacency set" of a node v with degree d(v) in a degree sequence D as the set consisting of the d(v) nodes with the largest degrees that are not in the forbidden set. If D is non-increasing, then the nodes in the leftmost adjacency set are the first d(v) nodes in the sequence that are not in the forbidden set. The forbidden set could represent nodes that are either already connected to v, so that subsequent connections to them are forbidden, or just imposed arbitrarily. Using this definition, the theorem is:
Theorem 2 (Star-constrained graphical sequences).
Let D be a non-increasing graphical degree sequence. Assume there is a set of forbidden links incident on a node v. Then a simple graph avoiding the forbidden links can be constructed if and only if a simple graph can be constructed in which v is connected to all the nodes in its leftmost adjacency set.
A direct consequence [8] of Theorem 2 for the case of an empty forbidden set is the well-known Havel-Hakimi result [21], [22], which in turn implies:
Corollary 1.
Let D be a non-increasing unconstrained graphical degree sequence. Then, given any node v, there is a realization of D that includes a link between the first (highest-degree) node and v.
Another result we exploit here is Lemma 3 of Ref. [8], extended to star-constrained sequences:
Lemma 1.
Let D be a graphical sequence, possibly with a star constraint incident on a node v. Let i and j be distinct nodes not in the forbidden set and different from v, such that d_i > d_j. Then the sequence D' obtained from D by decreasing d_i by one and increasing d_j by one is also a graphical sequence with the same star constraint.
Proof. Let F denote the set of nodes forbidden to connect to node v. Since D is star-constrained graphical, there is a simple graph G realizing the sequence with no connections between v and F. Since d_i > d_j, there is a node k, different from both i and j, to which i is connected but j is not. Note that k could be in F. Now cut the edge (i, k) of G, creating a stub at i and another at k. Remove the stub at i, so that its degree becomes d_i - 1, and add a stub at j, so that its degree becomes d_j + 1. Since there are no connections in G between j and k, connect the two stubs at these nodes, creating a simple graph G' that realizes D'. Clearly there are still no connections between v and F in G', and thus D' is also star-constrained graphical.
Finally, using Lemma 1 and Theorem 2, we prove:
Theorem 3.
Let D be a degree sequence, possibly with a star constraint incident on a node v, and let i and j be two nodes with degrees d_i >= d_j that are not constrained from linking to v. If the residual degree sequence obtained from D by reducing the degrees at v and i by unity is not graphical, then the degree sequence obtained from D by reducing the degrees at v and j by unity is also not graphical.
Proof. Denote by D_i the sequence obtained from D by reducing the degrees at v and i by unity, and by D_j the sequence reduced at v and j. If d_i = d_j the two sequences coincide up to a relabeling of nodes, so we consider d_i > d_j; the proof is not affected by this assumption. By assumption, D_i is not graphical. Using proof by contradiction, assume that D_j is graphical. Clearly, in D_j the degree of node i exceeds that of node j (d_i > d_j - 1), and thus we can apply Lemma 1 to this sequence. As a result, the sequence obtained by decreasing the degree of i by one and increasing the degree of j by one, which is exactly D_i, is graphical, a contradiction.
Note that if a sequence is non-graphical, then it is not star-constrained graphical either, and thus Theorem 3 is in its strongest form.
Biased sampling
The sampling algorithm described below is ergodic in the sense that every possible simple graph with the given finite degree sequence is generated with non-zero probability. However, it does not
generate the samples with uniform probability; the sampling is biased. Nevertheless, the algorithm can be used to compute network observables that are unbiased, by appropriately weighing the averages
measured from the samples. According to a well known principle of biased sampling [23], [24], if the relative probability of generating a particular sample G is p(G), then an unbiased estimator for an observable Q measured from a set S of randomly generated samples is the weighted average

    <Q> = [ sum over G in S of w(G) Q(G) ] / [ sum over G in S of w(G) ],   (2)

where the weights are w(G) = 1/p(G), and the denominator is a normalization factor. The key to this method is to find the appropriate weight to associate with each sample. Note that, in addition to uniform sampling, it is in fact possible to sample with any arbitrary distribution by choosing an appropriate set of sample weights.
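As a small illustration of Eq. 2, the following Python helper computes such a weighted average; the function and argument names are mine, and `observable` stands for any scalar graph property:

    def weighted_average(samples, observable):
        # `samples` is a sequence of (graph, weight) pairs, where the weight
        # is the inverse relative generation probability, as in Eq. 2.
        num = sum(w * observable(g) for g, w in samples)
        den = sum(w for _, w in samples)
        return num / den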
The algorithm
Let D be a non-increasing graphical sequence. We wish to sample the set G(D) of graphs that realize this sequence. The graphs can be systematically constructed by forming all the links involving each node. To do so, begin by choosing the first node in the sequence as the "hub" node, and then build the set A of the "allowed nodes" that can be connected to it. A contains all the nodes that can be connected to the hub such that, if a link is placed between the hub and a node from A, a simple graph can still be constructed, thus preserving graphicality. Choose a node from A uniformly at random, and place a link between it and the hub. If the chosen node still has "stubs", i.e., remaining links to be placed, then add it to the set F of "forbidden nodes", which contains all the nodes that can no longer be linked to the hub and which initially contains only the hub itself; otherwise, if the chosen node has no more stubs to connect, remove it from further consideration. Repeat the construction of A and link the hub with one of its randomly chosen elements until the stubs of the hub are exhausted. Then remove the hub from further consideration, and repeat the whole procedure until all the links are made and the sample construction is complete. Each time the procedure is repeated, the degree sequence considered is the "residual degree sequence", that is, the original degree sequence reduced by the links that have previously been made, and with any zero-residual-degree node removed from the sequence. Then choose a new hub, empty the set of forbidden nodes, and add the new hub to it. It is convenient, but not necessary, to choose the new hub to be a node with maximum degree in the residual degree sequence.
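The sketch below implements this procedure in Python under two loudly flagged simplifications: the allowed set is built by brute-force graphicality tests via Theorem 2 (calling is_graphical from above) rather than by the paper's faster fail-degree search, and all function names are my own. It is meant to show the control flow and the weight bookkeeping (see Eq. 3 below), not to be an efficient implementation.

    import math
    import random

    def graphical_after_link(residual, hub, v, forbidden):
        # Would placing the link (hub, v) preserve graphicality?  By
        # Theorem 2 it suffices to connect the hub's remaining stubs to its
        # leftmost adjacency set and Erdos-Gallai-test what is left.
        r = list(residual)
        r[hub] -= 1
        r[v] -= 1
        banned = set(forbidden) | {v, hub}
        candidates = sorted((u for u in range(len(r))
                             if u not in banned and r[u] > 0),
                            key=lambda u: -r[u])
        if len(candidates) < r[hub]:
            return False
        for u in candidates[:r[hub]]:   # leftmost adjacency set of the hub
            r[u] -= 1
        r[hub] = 0
        seq = sorted((x for x in r if x > 0), reverse=True)
        return is_graphical(seq)

    def sample_graph(degrees):
        # Returns (edge list, weight w of Eq. 3); `degrees` must be graphical.
        residual = list(degrees)
        nodes = set(range(len(degrees)))
        edges = []
        log_w = 0.0
        while any(residual[u] > 0 for u in nodes):
            hub = max(nodes, key=lambda u: residual[u])  # convenient choice
            forbidden = {hub}
            log_w -= math.lgamma(residual[hub] + 1)      # divide by k_h!
            while residual[hub] > 0:
                allowed = [u for u in nodes
                           if u not in forbidden and residual[u] > 0
                           and graphical_after_link(residual, hub, u, forbidden)]
                log_w += math.log(len(allowed))          # multiply by |A|
                v = random.choice(allowed)
                edges.append((hub, v))
                residual[hub] -= 1
                residual[v] -= 1
                forbidden.add(v)
            nodes.discard(hub)
        return edges, math.exp(log_w)

A call such as sample_graph([3, 3, 2, 2, 1, 1]) returns one realization together with its weight, and pairs collected over many calls can be fed directly to weighted_average above.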
The sample weights needed to obtain unbiased estimates using Eq. 2 are the inverse relative probabilities of generating the particular samples. If in the course of the construction of a sample m different nodes are chosen as the hub, and they have residual degrees k_1, ..., k_m when they are chosen, then the sample weight can be computed by first taking the product of the sizes of all the allowed sets constructed, then dividing this quantity by a combinatorial factor, the product of the factorials of the residual degrees of each hub:

    w = [ product over all steps of |A| ] / (k_1! k_2! ... k_m!).   (3)

The weight accounts for the fact that at each step the hub node has |A| nodes it can be linked to, which is the size of the allowed set at that point, and that the number of equivalent ways to connect the k_h residual stubs of a new hub is k_h!. Note that it is always true that w >= 1, with w = 1 occurring for sequences for which there is only one possible graph.
Building the allowed set.
The most difficult step in the sampling algorithm is to construct the set of allowed nodes A. To do so, first note that Theorem 3 implies that if a non-forbidden node, that is, a node not in F, can be added to A, then all non-forbidden nodes with equal or higher degree can also be added to A. Conversely, if it is determined that a non-forbidden node cannot be added to A, then all nodes with equal or smaller degree also cannot be added to A. Therefore, referring to the degrees of nodes that cannot be added to A as "fail-degrees", the key to efficiently constructing A is to determine the maximum fail-degree, if fail-degrees exist.
The first time A is constructed for a new hub, according to Corollary 1, there is no fail-degree and A consists of all the other nodes. However, constructing A becomes more difficult once links have been placed from the hub to other nodes. In this case, to find the maximum fail-degree, note that at any step during the construction of a sample the residual sequence being used is graphical. Then, since according to Theorem 2 any connection to the leftmost adjacency set of the hub preserves graphicality, it follows from Theorem 3 that any fail-degree has to be strictly less than the degree of any node in the leftmost adjacency set of the hub.
If there are non-forbidden nodes in the residual degree sequence that have degree less than any in the hub's leftmost adjacency set, then the maximum fail-degree can be found with a procedure that exploits Theorem 2. In particular, if the hub is connected to a node with a fail-degree, then, by Theorem 2, even if all the remaining links from the hub were connected to the remaining nodes in the leftmost adjacency set, the residual sequence would not be graphical. Our method to find fail-degrees, given below, is based on this argument.
Begin by constructing a new residual sequence D~ by temporarily assuming that links exist between the hub and all the nodes in its leftmost adjacency set except for the last one, which has the lowest degree in the set. The nodes temporarily linked to the hub should also be temporarily added to the set of forbidden nodes F. The nodes in D~ should be ordered so that the sequence is non-increasing, that forbidden nodes appear before non-forbidden nodes of the same degree, and that the hub, which now has residual degree 1, is last.
At this point, one could in principle find the maximum fail-degree by systematically connecting the last link of the hub with non-forbidden nodes of decreasing degree, testing each time for graphicality using Theorem 1. If the result is not graphical, then the degree of the last node connected to the hub is a fail-degree, and the node with the largest degree for which this is true has the maximum fail-degree. However, this procedure is inefficient, because each time a new node is linked with the hub the residual sequence changes, and every new sequence must be tested for graphicality.
A more efficient procedure to find the maximum fail-degree instead involves testing only the sequence D~. To see how this can be done, note that D~ is a graphical sequence, by Theorem 2. Thus, by Theorem 1, for all relevant values of k, the left-hand side of Inequality 1, L_k, and the right-hand side of it, R_k, satisfy L_k <= R_k. Furthermore, for the purposes of finding fail-degrees it is sufficient to consider linking the final stub of the hub with only the last non-forbidden node of a given degree, if any exists. After any such link is made, the resulting degree sequence will be non-increasing, and thus Theorem 1 can be applied to test it for graphicality. Therefore, if the degree of the node connected with the last stub of the hub is a fail-degree, then Inequality 1 for the resulting sequence must fail for some k. For each k, the possible changes in L_k and R_k caused by making the link are as follows. R_k is always reduced by 1, because the residual degree of the hub is reduced from 1 to 0. R_k may be reduced by another unit if the last node connected to the hub, having index i and degree d_i, is such that i > k and d_i <= k. L_k is reduced by 1 if i <= k; otherwise it is unchanged.
Considering these conditions that can cause Inequality 1 to fail, the set of allowed nodes can be constructed with the following algorithm, which requires testing only D~. Starting with k = 1, compute the values of L_k and R_k. There are three possible cases: (1) L_k = R_k, (2) L_k = R_k - 1, and (3) L_k < R_k - 1. In case (1), fail-degrees occur whenever L_k is unchanged by making the final link to the hub. Thus, the degree of the first non-forbidden node whose index is greater than k is the largest fail-degree found with this value of k. In case (2), fail-degrees occur whenever L_k is unchanged and R_k is reduced by 2 by making the final link to the hub. Thus, the degree of the first non-forbidden node whose index is greater than k and whose degree is no greater than k is the largest fail-degree found with this value of k. In case (3), no fail-degree can be found with this value of k. Repeat this process, sequentially increasing k, until all the relevant values have been considered, then retain the maximum fail-degree. It can be shown that the algorithm can be stopped early, either after a case (1) occurs, or after k reaches the lowest index of any node in D~ having the minimum degree of the sequence. Once the maximum fail-degree is found, remove the nodes that were temporarily added to F, and construct A by including all non-forbidden nodes with a higher degree. If no fail-degree is ever found, then all non-forbidden nodes are included in A. A will always include the leftmost adjacency set of the hub and any non-forbidden nodes of equal degree.

Note that after a link is placed in the sample construction process the residual degree sequence changes, and therefore A has to be determined anew every time.
Implementing the Erdös-Gallai test.
Finally, L_k and R_k should be calculated efficiently. Computing the sums that comprise them from scratch for each new value of k can be computationally intensive, especially for long sequences. Even computing them only for as many distinct terms as there are in the sequence, as suggested in Ref. [25], can still become slow if the degree distribution does not decrease quickly. Instead, it is much more efficient to use recurrence relations to calculate them.

A recurrence relation for L_k is simply

    L_k = L_{k-1} + d_k,   (4)

with L_0 = 0.
For non-increasing degree sequences, define the "crossing-index" x_k for each k as the index of the first node that has degree less than k, that is, the smallest index such that d_i < k for all i >= x_k. If no such index exists, such as for k = 1 when the minimum degree of any node in the sequence is 1, then set x_k = n + 1. Then, a recurrence relation for R_k is

    R_k = R_{k-1} + 2(k - 1) - min(d_k, k - 1) + Theta(x_k - k - 1)(x_k - k - 1),   (5)

where Theta is a discrete equivalent of the Heaviside function, defined to be 1 on positive integers and 0 otherwise, and R_0 = 0. Or, since the crossing-index cannot increase with k, that is, x_{k+1} <= x_k for all k, a value k* will exist for which x_k <= k + 1 for all k >= k*, and so for those k Eq. 5 can be written

    R_k = R_{k-1} + 2(k - 1) - min(d_k, k - 1).   (6)

Thus, there is no need to find x_k for k >= k*.
Using Eqs. 4 and 6, the calculation of L_k and R_k at sequential values of k is shifted from a slow repeated calculation of sums of many terms to the much less computationally intensive evaluation of the recurrence relations. In order to perform the test efficiently, a table of the values of the crossing-index for each relevant k can be created as D~ is constructed.

It should be noted that the usefulness of this method for calculating L_k and R_k is broader than its use for calculating fail-degrees in our sampling algorithm. In particular, it can be used in an Erdös-Gallai test to efficiently determine whether any degree sequence is graphical.
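One way to organize these recurrences in code is sketched below. Since Eq. 5 above is reconstructed from the garbled text, the update line marked "Eq. 5" should be read as an assumption; it has been checked against the direct test on small sequences, and the names are mine:

    def is_graphical_fast(d):
        # Erdos-Gallai test via the recurrences; d must be a non-increasing
        # sequence of non-negative integers.
        n = len(d)
        if n == 0:
            return True
        if sum(d) % 2 or d[0] >= n or d[-1] < 0:
            return False
        x = n + 1            # crossing-index x_k; it never increases with k
        L = R = 0
        for k in range(1, n + 1):
            L += d[k - 1]                          # Eq. 4
            while x > 1 and d[x - 2] < k:          # slide x_k to the left
                x -= 1
            R += 2 * (k - 1) - min(d[k - 1], k - 1) + max(0, x - k - 1)  # Eq. 5
            if L > R:
                return False
        return True

Each element of the sequence is visited a constant number of times, so after sorting the test runs in linear time; a quick sanity check is to compare is_graphical_fast with is_graphical above on small random non-increasing sequences.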
Sample weights
As previously stated, the weight associated with a particular sample, given by Eq. 3, is the product of the sizes of all the sets of allowed nodes built for each hub node, divided by the product of the factorials of the initial residual degrees of each hub node. The logarithm of this weight is

    ln w = sum over hubs h of [ sum over the allowed sets built for hub h of ln |A| - ln(k_h!) ].   (7)

Generally, degree sequences with large n admit many graphical realizations. When this is true, the terms in square brackets in Eq. 7 are effectively random and independent, and, by virtue of the central limit theorem, their sum will be normally distributed. That is, the weight of graph samples generated from a given degree sequence with large n is typically log-normally distributed. However, degree sequences with large n that have only a small number of realizations do exist, and w is not expected to be log-normally distributed for those sequences.
Furthermore, one can consider not just samples of a particular graphical sequence, but of an ensemble of sequences. By an argument similar to that given above for individual sequences, the weight of graph samples generated from an ensemble of sequences will also typically be log-normally distributed in the limit of large n. For example, consider an ensemble of sequences of randomly chosen power-law distributed degrees, that is, sequences of random integers chosen from a probability distribution P(d) ~ d^(-γ). Hereafter, we refer to such sequences as "power-law sequences." Figure 1 shows the probability distribution of the logarithm of the weights for realizations of power-law sequences of fixed exponent γ and length n. Note that this distribution is well approximated by a Gaussian fit.

Figure 1. Probability distribution of the logarithm of the weights for an ensemble of power-law sequences of fixed exponent and length.
The ensemble contained many graphical sequences, and for each sequence a number of graph samples were produced. The simulation data is given by the solid black line, and a Gaussian fit of the data is shown by the dashed red line, which nearly obscures the black line.
We have also studied the behavior of the mean and the standard deviation of the probability distribution of the logarithm of the weights of such power-law sequences as a function of n. As shown in Fig. 2, they scale as a power law. We have found qualitatively similar results, including power-law scaling of the growth of the mean and variance of the distribution of ln w, for binomially distributed degree sequences, which correspond to those of Erdös-Renyi random graphs with fixed node connection probability, and for uniformly distributed degree sequences, that is, power-law sequences with γ = 0, with an upper limit, or cutoff, imposed on the degree of a node. However, for uniformly distributed degree sequences without an imposed upper limit on node degrees, we find that the sample weights are not log-normally distributed.

Figure 2. Mean and standard deviation of the distributions of the logarithm of the weights vs. number of nodes for samples from an ensemble of power-law sequences.
The black circles correspond to the mean, the red squares to the standard deviation. The error bars are smaller than the symbols. The solid black line and the dashed red line show the outcomes of fits to the data. The linearity of the data on a logarithmic scale indicates that the mean and the standard deviation follow power-law scaling relations with n; the slopes of the fit lines are estimates of the corresponding exponents.
Discussion

In this section we discuss the algorithm's computational complexity. We first provide an upper bound on the worst-case complexity, given a degree sequence D. Then, using extreme value arguments, we conservatively estimate the average-case complexity for degree sequences of n random integers chosen from a distribution P(d). The latter is useful for realistically estimating the computational costs of sampling graphs from ensembles of long sequences.

To determine an upper bound on the worst-case complexity of constructing a sample from a given degree sequence D, recall that the algorithm connects all the stubs of the current hub node before it moves on to the hub node of the new residual sequence. For every stub from the hub one must construct the allowed set A. The algorithm for constructing A, which includes constructing D~, performing the L_k vs. R_k comparisons, and determining the maximum fail-degree, can be completed in O(n_h) steps, where n_h is the maximum possible number of nodes in the residual sequence after eliminating h hubs from the process. Therefore, an upper bound on the worst-case complexity of the algorithm given a sequence is

    O( sum over hubs h of k_h n_h ),   (8)

where k_h is the residual degree of hub h when it is chosen and the sum involves at most n terms. Equivalently, the bound is O(nM), with M being the number of links in the graph. For simple graphs, the maximum possible number of links is n(n - 1)/2, and the minimum possible number is n/2. If M = O(n), the bound is O(n^2), and if M = O(n^2), it is O(n^3), which is an upper bound independent of the sequence.
From Eq. 8, the expected complexity for the algorithm to construct a sample for a degree sequence of n random integers chosen from a distribution P(d), normalized to unity, can be conservatively estimated as

    <C> = O( n sum over i of <d_i> ).   (9)

Here <d_i> is the expectation value of the degree of the node with index i, which is the largest degree for which the expected number of nodes with equal or larger degree is at least i. That is, <d_i> is the largest d satisfying

    n sum from d' = d to d_max of P(d') >= i.   (10)

Notice that the sum in the above equation runs to the maximum allowed degree in the network, d_max, which is nominally n - 1, but a different value can be imposed. For example, in the case of power-law sequences, the so-called structural cutoff d_max ~ n^(1/2) is necessary if degree correlations are to be avoided [19], [26], [27]. However, such a cutoff needs to be imposed only for γ < 3, because the expected maximum degree in a power-law network grows like n^(1/(γ-1)). Thus, for γ >= 3, the maximum degree grows no faster than n^(1/2), and no degree correlations exist for large n [28].
Given a particular form of the distribution P(d), Eq. 9 can be computed for different values of n. Subsequent fits of the results to a power-law function allow the order of the complexity of the algorithm to be estimated. Figure 3 shows the results of such calculations for power-law sequences with and without the structural cutoff, as a function of the exponent γ. Note that, in the absence of a cutoff, the results indicate that the order of the complexity goes to a value of 3 as γ approaches 0, that is, in the limit of a uniform degree distribution. However, if the structural cutoff is imposed, the order of the complexity is only 5/2 in this limit. Both these results are easily verified analytically.

Figure 3. The estimated computational complexity of the algorithm for power-law sequences.
The leading order of the computational complexity of the algorithm, as a power of the number of nodes n, is plotted as a function of the degree distribution power-law exponent γ. The black circles correspond to ensembles of sequences without cutoff, while the red squares correspond to ensembles of sequences with the structural cutoff n^(1/2) imposed on the maximum degree. The fits that yielded the data points were carried out considering sequences over a wide range of sizes.
We have tested the estimates shown in Fig. 3 with our implementation of the sampling algorithm for power-law sequences, with and without the structural cutoff, for certain values of γ, including 0, 2, and 3. This was done by measuring the actual execution times for generating samples at different n and fitting the results to a power-law function. In every case, the actual order of the complexity of our implementation of the sampling algorithm was equal to or slightly less than the estimated value shown in Fig. 3.
We have solved the long-standing problem of how to efficiently and accurately sample the possible graphs of any graphical degree sequence, and of any ensemble of degree sequences. The algorithm we present for this purpose is ergodic and is guaranteed to produce an independent sample in at most O(nM) steps. Although the algorithm generates samples non-uniformly, and thus is biased, the relative probability of generating each sample can be calculated explicitly, permitting unbiased measurements to be made. Furthermore, because the sample weights are known explicitly, the algorithm makes it possible to sample with any arbitrary distribution by appropriate re-weighting.
It is important to note that the sampling algorithm is guaranteed to successfully and systematically proceed in constructing a graph. This behavior contrasts with that of other algorithms, such as the configuration model (CM), which can run into dead ends that require back-tracking or restarting, leading to considerable losses of time and potentially introducing an uncontrollable bias into the results. While there are classes of sequences for which it may be preferable to use the CM instead of our algorithm, in other cases its performance relative to ours can be remarkably poor. For example, a configuration model code failed to produce even a single sample of a long, uniformly distributed graphical sequence after running for more than 24 hours, while our algorithm produced samples of the very same sequence within 30 seconds. Furthermore, each sample generated by our algorithm is independent. This behavior contrasts with that of algorithms based on MCMC methods. Because our algorithm works for any graphical sequence and for any ensemble of random sequences, it allows arbitrary classes of graphs to be studied.
One of the features of our algorithm that makes it efficient is the method of calculating the left and right sides of the inequality in the Erdös-Gallai theorem using recurrence relations. Testing a sequence for graphicality can thus be accomplished without repeated computations of long sums, and the method is efficient even when the sequence has few repeated degrees. The usefulness of this method is not limited to the graph sampling algorithm presented here; it can be used any time a fast test of the graphicality of a sequence of integers is needed.
There are now over 6000 publications focusing on complex networks. In many of these publications various processes, such as network growth, flow on networks, epidemics, etc., are studied on toy
network models used as “graph representatives” simply because they have become customary to study processes on. These include the Erdös-Rényi random graph model, the Barabási-Albert preferential
attachment model, the Watts-Strogatz small-world network model, random geometric graphs, etc. However, these toy models are based on specific processes that constrain their structure beyond their
degree-distribution, which in turn might not actually correspond to the processes that have led to the structure of the networks investigated with them, thus potentially introducing dangerous biases
in the conclusions of these studies. The algorithm presented here provides a way to study classes of simple graphs constrained solely by their degree sequence, and nothing else. However, additional
constraints, such as connectedness, or any functional of the adjacency matrix of the graph being constructed, can in principle be added to the algorithm to further restrict the graph class built.
After this paper was accepted for publication, we became aware of an unpublished work by J. Blitzstein and P. Diaconis that provides another direct construction method for sampling graphs with given
degree sequences.
The authors gratefully acknowledge Y. Sun, B. Danila, M. M. Ercsey Ravasz, I. Miklós, E. P. Erdös and L. A. Székely for fruitful comments, discussions and support.
Author Contributions
Conceived and designed the experiments: KEB. Performed the experiments: CIDG HK ZT KEB. Analyzed the data: CIDG. Wrote the paper: CIDG ZT KEB. Implemented and tested algorithm: CIDG HK.
Briggs, CA Math Tutor
Find a Briggs, CA Math Tutor
...I have also learned which topics are commonly forgotten by students over a year of geometry (this gap in algebra is one of the biggest reasons a tutor can be so important during Algebra 2). So
if you're feeling overwhelmed by all the material you are expected to know from Algebra 1, I can get you...
14 Subjects: including geometry, trigonometry, statistics, SAT math
...Because of my careful attention to detail and devotion to understanding mathematics, I cultivated an uncanny ability to convey the concepts in a clear, digestible manner. I understand what it
takes to achieve academic and professional excellence because I have walked the path myself. After high school, I attended the University of Michigan where I majored in computer science and
16 Subjects: including calculus, public speaking, algebra 1, algebra 2
...For the last four years, I have also taught a graduate course in survey research, with content similar to that of the managerial inquiry course, at The Chicago School and was recently
nominated for Adjunct Faculty of The Year. My unique blend of professional experience and academic study make fo...
9 Subjects: including SPSS, reading, writing, piano
Hi, my name is Saba. I graduated from UCLA with a BA in English in 2012. I am working towards receiving my master's in Psychology with an emphasis in School Counseling at Phillips Graduate Institute.
34 Subjects: including trigonometry, linear algebra, English, geometry
I have two children in college and they both learned differently. One was a visual learner and had to see everything multiple times before she could remember it and master it. The other could
remember and master things by reading and writing them.
9 Subjects: including algebra 2, reading, geometry, algebra 1
Related Briggs, CA Tutors
Briggs, CA Accounting Tutors
Briggs, CA ACT Tutors
Briggs, CA Algebra Tutors
Briggs, CA Algebra 2 Tutors
Briggs, CA Calculus Tutors
Briggs, CA Geometry Tutors
Briggs, CA Math Tutors
Briggs, CA Prealgebra Tutors
Briggs, CA Precalculus Tutors
Briggs, CA SAT Tutors
Briggs, CA SAT Math Tutors
Briggs, CA Science Tutors
Briggs, CA Statistics Tutors
Briggs, CA Trigonometry Tutors
Nearby Cities With Math Tutor
Bicentennial, CA Math Tutors
Century City, CA Math Tutors
Cimarron, CA Math Tutors
Dowtown Carrier Annex, CA Math Tutors
Farmer Market, CA Math Tutors
Lafayette Square, LA Math Tutors
Miracle Mile, CA Math Tutors
Preuss, CA Math Tutors
Rancho Park, CA Math Tutors
Rimpau, CA Math Tutors
Santa Western, CA Math Tutors
Westvern, CA Math Tutors
Westwood, LA Math Tutors
Wilcox, CA Math Tutors
Wilshire Park, LA Math Tutors
Does abcd = 1? (1) abc = 1 (2) bcd = 1
spacelandprep [#permalink] 03 Mar 2011, 11:27

Q. Does abcd = 1?
(1) abc = 1
(2) bcd = 1

Without any work, explain why the answer must be TOGETHER NOT sufficient.

Solution:
The question asks whether the product of four variables will be 1. The variables are completely unqualified, so to answer this question something must be known about each variable.
Statement (1) provides a relationship between three of the variables so it is insufficient.
Statement (2) provides a relationship between three of the variables so it is insufficient.
This eliminates three answers choices.
Taken together the Statements do provide information about all four variables, but is that information enough? Let's use the Schrute test.
Dwight Schrute once said, "Whenever I'm about to do something, I think, "Would an idiot do that?" And if they would, I do not do that thing."
Let's modify this for the GMAT: "Whenever I'm about to pick an answer, I think, 'Would a typical GMAT taker pick that answer?' And if they would, I do not pick that answer."
Would a typical GMAT taker think that Statements (1) and (2) were sufficient? Yes, he would. Therefore we must pick TOGETHER NOT sufficient.
Spoiler: OA
Last edited by spacelandprep on 03 Mar 2011, 20:31, edited 2 times in total.
beyondgmatscore, Re: Spaceland Prep Strategy Question #1 [#permalink] 03 Mar 2011, 11:33

We want to find out: is abcd = 1?
Statement 1 says abc = 1. It doesn't say anything about d, so insufficient.
Statement 2 says bcd = 1. It doesn't say anything about a, so insufficient.
Combining 1 and 2, we get a(bc)(bc)d = 1, or abcd*bc = 1.
Now abcd would be 1 only if bc = 1, but neither 1 nor 2 nor them combined can definitely tell us that bc = 1. For example, bc can be 1/2 with a = 2 and d = 2, and both statements still hold.
So, answer should be E.
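A quick numeric check (the snippet is mine, not from the thread) makes the counterexample concrete:

    for (a, b, c, d) in [(1.0, 1.0, 1.0, 1.0),     # abcd = 1
                         (2.0, 0.5, 1.0, 2.0)]:    # abcd = 2
        print(a * b * c == 1, b * c * d == 1, a * b * c * d)
    # Both tuples satisfy (1) and (2), yet abcd differs (1.0 vs 2.0),
    # so even together the statements are not sufficient: answer E.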
IanStewart (GMAT Instructor, Toronto), Re: Spaceland Prep Strategy Question #1 [#permalink] 03 Mar 2011, 19:05

spacelandprep wrote:
The question asks whether the product of four variables will be 1. The variables are completely unqualified, so to answer this question something must be known about each variable.

I've seen prep company representatives repeat this idea in many places, and it's mathematical nonsense. If you change your question to:

Does abcd = 0?
(1) abc = 0
(2) bcd = 0

then you certainly do not need to know 'about each variable' to answer the question; the answer is D even though we don't know the value of any of the variables. There are many official DS questions with multiple unknowns which follow a similar pattern - the question asks for the value of some combination of unknowns, and while you cannot determine the value of the unknowns themselves, you can still answer the question. Q168 in the DS section is one of many examples.
spacelandprep wrote:
Let's modify this for the GMAT: "Whenever I'm about to pick an answer, I think, 'Would a typical GMAT taker pick that answer?' And if they would, I do not pick that answer."
There are many GMAT questions which are genuinely simple. Applying this test to real GMAT questions could easily lead a test taker to 'outsmart' him or herself, and pick the wrong
answer rather than the correct one. While I agree one should be suspicious of 'obvious' answers in DS, the safest thing for the test taker to do is to understand the math involved,
rather than apply gimmicky 'tricks' which will only give you the right answer some of the time.
Nov 2011: After years of development, I am now making my advanced Quant books and high-level problem sets available for sale. Contact me at ianstewartgmat at gmail.com for details.
Private GMAT Tutor based in Toronto
beyondgmatscore, Re: Spaceland Prep Strategy Question #1 [#permalink] 03 Mar 2011, 20:25

Well said Ian, +1. I would be wary of any strategy which would want me to take a decision based on my limited understanding of what a "typical" GMAT taker would do. Much easier and simpler to stick to the underlying math - it would work every time.
spacelandprep, Re: Spaceland Prep Strategy Question #1 [#permalink] 03 Mar 2011, 20:54

IanStewart wrote:
spacelandprep wrote:
The question asks whether the product of four variables will be 1. The variables are completely unqualified, so to answer this question something must be known about each variable.

I've seen prep company representatives repeat this idea in many places, and it's mathematical nonsense. If you change your question to:
Does abcd = 0?
(1) abc = 0
(2) bcd = 0
then you certainly do not need to know 'about each variable' to answer the question; the answer is D even though we don't know the value of any of the variables. There are many official DS questions with multiple unknowns which follow a similar pattern - the question asks for the value of some combination of unknowns, and while you cannot determine the value of the unknowns themselves, you can still answer the question. Q168 in the DS section is one of many examples.

It is true that something must be known about each variable for Q168. The sufficient statement mentions both variables.

IanStewart wrote:
spacelandprep wrote:
Let's modify this for the GMAT: "Whenever I'm about to pick an answer, I think, 'Would a typical GMAT taker pick that answer?' And if they would, I do not pick that answer."

There are many GMAT questions which are genuinely simple. Applying this test to real GMAT questions could easily lead a test taker to 'outsmart' him or herself, and pick the wrong answer rather than the correct one. While I agree one should be suspicious of 'obvious' answers in DS, the safest thing for the test taker to do is to understand the math involved, rather than apply gimmicky 'tricks' which will only give you the right answer some of the time.

While knowing the math is the safest route to a correct answer, it's not always a possible route. Familiarity with the mathematical concepts covered on the GMAT is essential, but sometimes a question is simply going to be beyond you - well, maybe not you, but some of us. Having an alternate form of approach is necessary in those instances. It's by no means a guarantee, but anything that improves the odds of getting a tough question correct when using math is not an option is a great tool.
POP Seminar, 08-07-09
K. Subramani
West Virginia University
A Combination Algorithm for Horn Programs
In this talk we discuss a simple, greedy algorithm for a class of linear programs called Horn programs. This algorithm, which we term as the Lifting Algorithm, runs in time O(m·n²) on a Horn system
with m constraints and n variables.
Inasmuch as Horn constraints subsume difference constraints, and all known algorithms for the problem of checking feasibility in Difference Constraint Systems run in time Ω(m·n), the running time of
our algorithm is only a factor n worse than the best known running time for checking feasibility in Difference Constraint Systems. Horn programs arise in a number of application areas including
econometrics and program verification; consequently, their study is well-motivated. An important feature of our algorithm is that it uses only one operator, viz., addition.
We also show that our algorithm can identify the linear and lattice point feasibility of Extended Horn Systems in O(m·n²) time.
Host: Frank Pfenning
Appointments: ameliaw@cs.cmu.edu
Friday, August 7, 2009
3:30 p.m.
Wean Hall 8220
Principles of Programming Seminars
Mathematical Surveys and Monographs
1990; 237 pp; hardcover
Volume: 32
ISBN-10: 0-8218-1533-4
ISBN-13: 978-0-8218-1533-5
List Price: US$98
Member Price: US$78.40
Order Code: SURV/32
The geometry and analysis of CR manifolds is the subject of this expository work, which presents all the basic results on this topic, including results from the "folklore" of the subject. The book
contains a careful exposition of seminal papers by Cartan and by Chern and Moser, and also includes chapters on the geometry of chains and circles and the existence of nonrealizable CR structures.
With its detailed treatment of foundational papers, the book is especially useful in that it gathers in one volume many results that were scattered throughout the literature.
Directed at mathematicians and physicists seeking to understand CR structures, this self-contained exposition is also suitable as a text for a graduate course for students interested in several
complex variables, differential geometry, or partial differential equations. A particular strength is an extensive chapter that prepares the reader for Cartan's approach to differential geometry. The
book assumes only the usual first-year graduate courses as background.
"Recommended to specialists."
-- Zentralblatt MATH
• CR Structures
• Some automorphism groups
• Formal theory of the normal form
• Geometric theory of the normal form
• Background for Cartan's work
• Cartan's construction
• Geometric consequences
• Chains
• Chains and circles in complex projective geometry
• Nonsolvability of the Lewy Operator
[SciPy-dev] scipy.stats._chk_asarray
josef.pktd@gmail.com
Wed Jun 3 09:26:17 CDT 2009
On Wed, Jun 3, 2009 at 10:05 AM, Bruce Southey <bsouthey@gmail.com> wrote:
> josef.pktd@gmail.com wrote:
>> On Wed, Jun 3, 2009 at 12:55 AM, Robert Kern <robert.kern@gmail.com> wrote:
>>> On Tue, Jun 2, 2009 at 23:50, Pierre GM <pgmdevlist@gmail.com> wrote:
>>>> On Jun 2, 2009, at 11:09 PM, josef.pktd@gmail.com wrote:
>>>>> I tried to see if I can introduce a second version _check_asanyarray,
>>>>> that doesn't convert to basic np.array, but I didn't get very far.
>>>>> nanmedian, and nanstd are not easy to convert to work with matrices,
>>>>> nanstd uses multiplication and nanmedian uses np.compress
>>>> Well, what about that:
>>>> * convert the inputs to ndarray w/ _chk_asarray
>>>> * compute as usual
>>>> * return a view of the result using the type of the input (using the
>>>> type keyword of view)
>>>> That should work w/ nanmedian. There might be some adjustment to make
>>>> for nanstd (pb of dimensions?)
>>> That is what I was suggesting, only in decorator form so it could be
>>> applied everywhere. It's not worth wasting time making a small handful
>>> of functions work and be inconsistent with all of the others.
>> If someone gives me this decorator, I will use it, but I don't know
>> how to write a decorator that works for all input and output cases,
>> and doesn't screw up our documentation system.
>> But I can change 2 lines per function, and I know I still have the
>> same signature and docstring. It looks like it will work for all
>> descriptive statistics and data transformation in scipy.stats. It
>> won't be relevant for most of the remainder.
>> Josef
>> _______________________________________________
>> Scipy-dev mailing list
>> Scipy-dev@scipy.org
>> http://mail.scipy.org/mailman/listinfo/scipy-dev
> Hi,
> Using stats._chk_asarray should be completely unnecessary because most
> of numpy functions accept array-like inputs and use flattened arrays by
> default unless the axis keyword is used. That is why I did not use it
> for the stats.gmean and stats.hmean patches.
> I am also curious why the nanmean is so involved when I would think
> that, for some array b and axis, you can just do:
> numpy.nansum(b,axis=axis)/numpy.sum(numpy.isfinite(b), axis=axis)
For large, badly scaled arrays this might not be a numerically precise
way of doing it. But I agree that many functions could be written as
one liners where the only advantage I see, is that we don't have to
remember the formula.
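For concreteness, a toy example of the one-liner discussed above (the example is mine, not part of the original message):

    import numpy as np
    b = np.array([[1.0, np.nan],
                  [3.0, 4.0]])
    # nanmean along axis 0: nansum divided by the count of finite entries
    print(np.nansum(b, axis=0) / np.sum(np.isfinite(b), axis=0))  # -> [ 2.  4.]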
> Granted nanstd is more complex and, in both cases, these probably should
> be part of numpy.
a**2 and a*b have completely different meaning for matrices than for
ndarrays. Without conversion, writing any more complex statistical
function would be a major hassle.
As I mentioned before, I tried with nanmedian and nanstd and gave up
very fast, since many functions don't work correctly or have a
different meaning. Writing code that is not allowed to use `*` looks
pretty hard to read and to write. I haven't tried what happens if
someone throws a sparse matrix at the stats functions, but we get
wrong results using for example np.dot.
> Bruce
Griffith, CA Geometry Tutor
Find a Griffith, CA Geometry Tutor
...My degree in Mathematics is from UCLA, which was a very rigorous course of study. I know how to break math problems down into simple steps. I can analyze what your son or daughter needs to
succeed in math.
14 Subjects: including geometry, Spanish, ESL/ESOL, reading
...I love to learn and love to teach! Algebra is the study of the rules of operations and relations in variable math. The concepts include terms, polynomials, equations and algebraic structures. I depend on algebra daily in my personal life and in my work as an engineer.
5 Subjects: including geometry, algebra 1, algebra 2, prealgebra
I've been teaching and tutoring math for the past eight years, first in New York, and now here in California. I graduated from Columbia University with a bachelor's degree in English, but then
became a math teacher, earning a Master's degree in Math Education. I am very well-rounded as a tutor; I ...
22 Subjects: including geometry, English, Spanish, algebra 2
...I am willing to travel anywhere within the Los Angeles area for a tutoring session. I look forward to hearing from you and playing a part in your successful academic career! I have completed two years of master's level physics coursework in graduate school, and have taught twelve different colleg...
21 Subjects: including geometry, chemistry, calculus, physics
...For example, I give my students customized quizzes to help them prepare for exams. I am confident that my passion and experience are the qualities you are looking for in a tutor. I look
forward to working with you.
18 Subjects: including geometry, algebra 1, GRE, GED
Related Griffith, CA Tutors
Griffith, CA Accounting Tutors
Griffith, CA ACT Tutors
Griffith, CA Algebra Tutors
Griffith, CA Algebra 2 Tutors
Griffith, CA Calculus Tutors
Griffith, CA Geometry Tutors
Griffith, CA Math Tutors
Griffith, CA Prealgebra Tutors
Griffith, CA Precalculus Tutors
Griffith, CA SAT Tutors
Griffith, CA SAT Math Tutors
Griffith, CA Science Tutors
Griffith, CA Statistics Tutors
Griffith, CA Trigonometry Tutors
Nearby Cities With geometry Tutor
Briggs, CA geometry Tutors
Cimarron, CA geometry Tutors
Glassell, CA geometry Tutors
Glendale Galleria, CA geometry Tutors
La Canada, CA geometry Tutors
Magnolia Park, CA geometry Tutors
Oakwood, CA geometry Tutors
Playa, CA geometry Tutors
Rancho La Tuna Canyon, CA geometry Tutors
Santa Western, CA geometry Tutors
Sherman Village, CA geometry Tutors
Starlight Hills, CA geometry Tutors
Toluca Terrace, CA geometry Tutors
Vermont, CA geometry Tutors
Westwood, LA geometry Tutors
Critical review of quark gluon plasma signatures (1999)
Stefan Scherer Steffen A. Bass Marcus Bleicher Mohamed Belkacem Larissa V. Bravina Jörg Brachmann Adrian Dumitru Christoph Ernst Lars Gerland Markus Hofmann Ludwig Neise Manuel Reiter Sven Soff
Christian Spieles Henning Weber Eugene E. Zabrodin Detlef Zschiesche Joachim A. Maruhn Horst Stöcker Walter Greiner
Nonequilibrium models (three-fluid hydrodynamics and UrQMD) are used to discuss the uniqueness of often proposed experimental signatures for quark matter formation in relativistic heavy ion collisions. It is demonstrated that these two models - although they do treat the most interesting early phase of the collisions quite differently (thermalizing QGP vs. coherent color fields with virtual particles) - both yield reasonable agreement with a large variety of the available heavy ion data.
Physics opportunities at RHIC and LHC (1999)
Stefan Scherer Steffen A. Bass Mohamed Belkacem Marcus Bleicher Jörg Brachmann Adrian Dumitru Christoph Ernst Lars Gerland Nils Hammon Markus Hofmann Jens Konopka Ludwig Neise Manuel Reiter Stefan
Schramm Sven Soff Christian Spieles Henning Weber Detlef Zschiesche Joachim A. Maruhn Horst Stöcker Walter Greiner
Nonequilibrium models (three-fluid hydrodynamics, UrQMD, and quark molecular dynamics) are used to discuss the uniqueness of often proposed experimental signatures for quark matter formation in
relativistic heavy ion collisions from the SPS via RHIC to LHC. It is demonstrated that these models - although they do treat the most interesting early phase of the collisions quite differently
(thermalizing QGP vs. coherent color fields with virtual particles) -- all yield a reasonable agreement with a large variety of the available heavy ion data. Hadron/hyperon yields, including J/
Psi meson production/suppression, strange matter formation, dileptons, and directed flow (bounce-off and squeeze-out) are investigated. Observations of interesting phenomena in dense matter are
reported. However, we emphasize the need for systematic future measurements to search for simultaneous irregularities in the excitation functions of several observables in order to come close to
pinning down the properties of hot, dense QCD matter from data. The role of future experiments with the STAR and ALICE detectors is pointed out.
Excitation function of entropy and pion production from AGS to SPS energies (1998)
Manuel Reiter Adrian Dumitru Jörg Brachmann Joachim A. Maruhn Horst Stöcker Walter Greiner
Entropy production in the initial compression stage of relativistic heavy-ion collisions from AGS to SPS energies is calculated within a three-fluid hydrodynamical model. The entropy per
participating net baryon is found to increase smoothly and does not exhibit a jump or a plateau as in the 1-dimensional one-fluid shock model. Therefore, the excess of pions per participating net
baryon in nucleus-nucleus collisions as compared to proton-proton reactions also increases smoothly with beam energy.
Impact parameter dependencies in Pb(160 AGeV)+Pb reactions : hydrodynamical vs. cascade calculations (1999)
Jörg Brachmann Manuel Reiter Marcus Bleicher Adrian Dumitru Joachim A. Maruhn Horst Stöcker Walter Greiner
We investigate the impact parameter dependence of the specific entropy S/A in relativistic heavy ion collisions. In particular, the anti-Lambda/anti-proton ratio is found to be a useful tool to distinguish between the chemical equilibrium assumed in hydrodynamics (here: the 3-fluid model) and the chemical non-equilibrium scenario of microscopic models such as the UrQMD model.
Entropy production in collisions of relativistic heavy ions: a signal for quark-gluon plasma phase transition? (1998)
Manuel Reiter, Adrian Dumitru, Jörg Brachmann, Joachim A. Maruhn, Horst Stöcker, Walter Greiner
Entropy production in the compression stage of heavy ion collisions is discussed within three distinct macroscopic models (i.e. generalized RHTA, geometrical overlap model and three-fluid
hydrodynamics). We find that within these models \sim 80% or more of the experimentally observed final-state entropy is created in the early stage. It is thus likely followed by a nearly
isentropic expansion. We employ an equation of state with a first-order phase transition. For low net baryon density, the entropy density exhibits a jump at the phase boundary. However, the
excitation function of the specific entropy per net baryon, S/A, does not reflect this jump. This is due to the fact that for final states (of the compression) in the mixed phase, the baryon
density \rho_B increases with \sqrt{s}, but not the temperature T. Calculations within the three-fluid model show that a large fraction of the entropy is produced by nuclear shockwaves in the
projectile and target. With increasing beam energy, this fraction of S/A decreases. At \sqrt{s}=20 AGeV it is on the order of the entropy of the newly produced particles around midrapidity.
Hadron ratios are calculated for the entropy values produced initially at beam energies from 2 to 200 AGeV.
Distinguishing hadronic cascades from hydrodynamic models in Pb(160 AGeV)+Pb reactions by impact parameter variation (1998)
Marcus Bleicher, Manuel Reiter, Adrian Dumitru, Jörg Brachmann, Christian Spieles, Steffen A. Bass, Horst Stöcker, Walter Greiner
We propose to study the impact parameter dependence of the anti-Lambda/anti-proton ratio in Pb(160 AGeV)+Pb reactions. The anti-Lambda/anti-proton ratio is a sensitive tool to distinguish between hadronic cascade models and hydrodynamical models which incorporate a QGP phase transition.
Maximum entropy spectral analysis for circadian rhythms: theory, history and practice
There is an array of numerical techniques available to estimate the period of circadian and other biological rhythms. Criteria for choosing a method include accuracy of period measurement, resolution
of signal embedded in noise or of multiple periodicities, and sensitivity to the presence of weak rhythms and robustness in the presence of stochastic noise. Maximum Entropy Spectral Analysis (MESA)
has proven itself excellent in all regards. The MESA algorithm fits an autoregressive model to the data and extracts the spectrum from its coefficients. Entropy in this context refers to “ignorance”
of the data and since this is formally maximized, no unwarranted assumptions are made. Computationally, the coefficients are calculated efficiently by solution of the Yule-Walker equations in an
iterative algorithm. MESA is compared here to other common techniques. It is normal to remove high frequency noise from time series using digital filters before analysis. The Butterworth filter is
demonstrated here and a danger inherent in multiple filtering passes is discussed.
Physiological processes in almost all plants and animals have adapted to the cycles in the environment, be they daily (circadian), tidal, lunar, synodic lunar monthly or annual [1]. Oscillatory
behavior with periods of less than 24-h, termed ultradian, are also commonly found, occasionally embedded in circadian or other rhythms [2]. This adaptation to cycles in the environment has occurred
through the evolution of a biological timekeeper, a true temperature-compensated oscillator providing temporal information at all levels of physiology and behavior [1]. Thorough investigation of
these oscillators requires that the periodic evolution of the processes in time be characterized precisely as to the length of the periods seen, as these are the manifestation of the clock process [3
]. In addition, the relative robustness and regularity of the rhythms is of considerable interest. Numerical samplings of any process that evolves in time, taken at appropriate intervals, form time
series, the stuff and substance of biological rhythm research.
Analysis of time series may be simply done. In early work by Bünning on bean plant leaf movements, periodicity was estimated by measuring peak-to-peak intervals on kymographs that registered leaf
position [4]. Analysis technique has progressed considerably since that time and now offers an array of possibilities for estimates of period length [5]. This paper deals with a very useful method
called Maximum Entropy Spectral Analysis, or MESA, developed by John Parker Burg in the 1960s in answer to shortcomings of the principal analysis technique up to that time, Fourier analysis [6-8]. We
will first discuss Fourier analysis, noting the problems that MESA was developed to fix and how they can be circumvented with MESA. We will pay attention to the theoretical underpinnings so that this
popular method will not be a “black box” and will show the basics of how the spectrum is computed. Given that the biologist necessarily works with time series that are either inherently irregular or
contain major trends, tools that can ameliorate these problems when used in conjunction with MESA will be introduced and examples of their benefits discussed.
Biological rhythm data
Circadian rhythms are studied in systems ranging from intracellular fluorescence to complex behaviors such as running wheel activity; data acquisition and format vary accordingly. For example, when
studying the activity of an enzyme, the variable may be continuous and an appropriate sampling interval must be chosen. This must be rapid enough to avoid “aliasing” in the periodicity region of interest. This occurs when sampling is too slow relative to the period being recorded, and is famously seen in old western movies when the spokes of wagon wheels seem to be going backwards; sampling frequency must be no less than twice the frequency of the cyclic process of interest. This constraint is the Nyquist or foldover frequency [9]. Faster sampling is normal, however, to ensure
no detail is lost and that accurate period estimates result. There are two important things to consider. The first is resolution, which is the ability to separate two frequencies as distinct, for example a 24-h circadian peak from a 24.8-h lunar daily peak. This is equivalent to optical resolution, in which two objects in an image can just be distinguished as separate [10]. Resolution is theoretically limited
by the number of cycles in the data set, or in optics, by the diameter of the lens. A completely separate problem is the ability to discern a periodic signal in noise. This is sensitivity, and both
are important.
Biological rhythm data are commonly not continuous and consist of discrete events, unlike the record left on a kymograph by a bean plant. Here, other constraints begin to play a role. Running wheel
activity in mammals and the breaking of an infrared light beam by Drosophila are useful examples. Here, individual events occur, and are summed across arbitrary intervals or “bins”. Bin size affects
the output of time series analysis and this effect can be profound when bin size is too small (Review: [11]). Bin sizes of 10 min up to an hour are common in rhythms work.
For the purposes of illustrating the techniques being discussed, we will analyze a simulated data set. This is useful in this sort of discussion, as one may know precisely what the parameters of the signal are. The series considered here is a 23-h square wave with 20% white noise added, consisting of 336 values produced at half-hour intervals for 7 days using our own software. I chose a square wave as it is important to show that any analysis be effective for waveforms deviating markedly from the badly overused sinusoid. A simple time plot of the data is shown in Figure 1.
Figure 1. An artificially produced time series with an arbitrary maximum amplitude of one. It is a square wave with 20% white noise added. The power in the series is: 0.62.
The autocovariance and autocorrelation functions
Given a particular signal, even if it appears clearly rhythmic in a simple time plot, it is important that an objective statistical test be employed to determine if significant periodicity is
present. Such a robust test is autocorrelation analysis [5]. In this analysis, the time series is initially lined up with itself in register and correlation analysis is applied yielding the
coefficient, r. In this case, no matter what the signal looks like, correspondence is one to one and r = 1. The two series are then set out of register or “lagged” by one interval. The result is a
decrement in r. The drop depends on the series; if it is a noiseless sinusoid, the change will initially be small, but if it is white noise, the drop will be very large, since the value of any given
point has no relation whatsoever to any other point either near or far in time. Lagging proceeds one interval at a time up to about N/3. The process is usually limited to this point since the power of the test is reduced as a pair of points is lost off the ends of the series with each lag. r values are plotted as a function of the lag, yielding the autocorrelogram function. In a rhythmic series, r
will continue to decline, becoming negative and reaching a minimum when the peaks and valleys in the two series are out of phase by one half cycle. A second positive peak will occur when the peaks
and valleys are back in phase, but one cycle out of register. The envelope of decay of the autocorrelation peaks is a function of the regularity in the series and this can provide a useful way of
characterizing the regularity in the signal, as will be discussed below (Reviews: [5,12]).
When computing the correlation coefficient, the output is normalized by dividing by the variance of the complete data set, but this need not be so and the output is then “covariance”, or the
autocovariance function [5]. Autocorrelation is commonly employed, as it allows comparisons among wide-ranging experiments.
The autocorrelation function also yields a valuable way to quantify the regularity of the signal both in terms of variation in period and the presence of noise. The height of the third peak in this
function, counting the peak at lag zero as one, is taken as the Rhythmicity Index, or RI. This value relies on the decay of the envelope of the function and is normally distributed so it may be used
in statistical analyses (review: [12]). The RI of the test signal is 0.697.
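As a concrete illustration (my own sketch in Python, not part of the original paper), the autocorrelation function and the RI can be computed directly; the test series is regenerated here under the stated assumptions (336 points, half-hour bins, 23-h square wave, 20% white noise), so the exact RI will differ somewhat from the 0.697 quoted above:

import numpy as np

def autocorrelation(x, max_lag):
    # r at lags 0..max_lag, normalized by the full-series variance
    x = np.asarray(x, dtype=float) - np.mean(x)
    var = np.dot(x, x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / var
                     for k in range(max_lag + 1)])

# simulated test series
dt = 0.5                                      # hours per bin
t = np.arange(336) * dt
rng = np.random.default_rng(0)
square = (np.sin(2 * np.pi * t / 23.0) > 0).astype(float)
x = square + 0.2 * rng.standard_normal(t.size)

r = autocorrelation(x, max_lag=t.size // 3)   # lag out to ~N/3

# Rhythmicity Index: height of the third peak, counting lag 0 as the first;
# with one peak per cycle, the third peak sits near lag = 2 periods
lag2 = int(round(2 * 23.0 / dt))
ri = r[lag2 - 4:lag2 + 5].max()               # search a small window
print("RI ~ %.2f, 95%% band +/- %.2f" % (ri, 2 / np.sqrt(t.size)))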
It is useful to have a formal criterion for the significance of rhythmicity in data. The 95% confidence interval for testing the significance of a given peak in the autocorrelogram is 2/√N [5]. Plus
and minus confidence intervals are commonly plotted as flat lines, the decrement in N as values are lost being ignored (see above discussion). Repeated peaks equaling or exceeding the confidence
interval are usually taken to imply significant rhythmicity, but there is room for subjective interpretation [5,12]. Figure 2 shows the autocorrelation function for the test data set depicted in
Figure 1. Since the lagging can be done in either direction, Lag 0, where r = 1, is at the center, and the function is mirrored to the left as well. This view can make visual interpretation easier.
Figure 2. This is the autocorrelation of the data depicted in Figure1. Note the height of the third peak, which is the RI and equals 0.697.
Fourier analysis
Beginning in the late 19th century, the process of producing a spectrum from digital time series was largely accomplished by Fourier analysis. Fourier showed that any function showing certain
minimal properties called the “Dirichlet Conditions” can be approximated by a harmonic series of orthogonal sine and cosine terms [6,7]. The series must have a finite number of maxima and minima, be
defined at all points and not have an infinite number of discontinuities, conditions met by most data encountered in biology [12]. Here we take a function f(t) and approximate it with a Fourier series:

f(t) ≈ a_0/2 + Σ_{n=1}^{∞} [ a_n cos(nωt) + b_n sin(nωt) ]
If our function consists of an ordered set of values x(t), then the “power” in the series is the ensemble average of the squared values. If the mean is zero, this is variance. The Fourier transform
is an extension of a fit of the Fourier series and has the property that the coefficients approximate the spectrum of the power, meaning the power at each frequency for which a computation can be
made (review: [5]). For a continuous series, we have:

F(ω) = ∫ f(t) e^{-iωt} dt
The exponential function consolidates the sine and cosine terms. F(ω) is the spectrum of the function, with ω being the angular velocity, or 2πf, where f is frequency. This process was carefully
described by Schuster [13] and he termed it the “periodogram” of the function. This procedure should not be confused with the Whittaker-Robinson algorithm [14], improperly given the same name, which was largely discredited by Kendall on formal mathematical grounds [15]. (See [12] for further discussions and examples.) Figure 3 depicts the Whittaker-Robinson “periodogram” for the data set.

Figure 3. This is the so-called Whittaker-Robinson “periodogram”, which is not the same as the true periodogram of Schuster.
Fourier analysis has undergone considerable development and sees a great deal of use in many fields, with chronobiology prominent among them; it has done yeoman service. If the spectrum is calculated
directly from data sampled at intervals, it is termed the Discrete Fourier Transform or DFT. Fourier spectra are seldom computed directly from the raw data however, rather they are produced from
either the autocovariance or autocorrelation functions. One argument for using the autocovariance function is that the output is equivalent to partitioning the variance in the signal by frequency and
the area under the curve is the power (review: [5]). Figure 4 depicts the DFT of the test data set, which is the most basic way to visualize the process. The period is reported as 22.4 h, which contrasts with the known value of 23 h. The reason for this discrepancy is discussed below.
Figure 4. The Discrete Fourier Transform of the test time series. The period is calculated to be 22.4. Note in particular the paucity of spectral estimates in the crucial range between 20 and 30
hours. This would normally be corrected in more advanced Fourier Transform algorithms, but at a cost (see text).
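The sparseness of the raw transform grid is easy to see without any plotting: for a record of total length T, unpadded estimates fall only at periods T/k for integer k. A minimal sketch (my own illustration, not the paper's software; the exact gridpoints depend on padding and on whether the transform is taken from the autocorrelation, which is why Figure 4 reports slightly different values):

import numpy as np

N, dt = 336, 0.5              # samples and bin width (hours)
T = N * dt                    # record length: 168 h
k = np.arange(1, N // 2 + 1)
periods = T / k               # the only periods a raw DFT can report
print(periods[(periods >= 20) & (periods <= 30)])   # 28.0, 24.0, 21.0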
Compromises inherent in Fourier analysis
Since the autocovariance and autocorrelation functions lose power with each pair of points lost, usually no more than one third of the data are used to compute correlation coefficients, adversely
affecting the potential resolution in the spectrum. To alleviate this, the rest of the function is padded out with zeroes. This is an outright falsification of data points not in evidence, since
there is no reason to suspect these data points would all be zeroes. An added problem occurs at the point where the zeroes abruptly start, since this abrupt discontinuity will cause artifactual peaks
in the spectrum called “side lobes” owing to the Gibbs phenomenon [16]. To correct for this, the real data are blended into the zeros to soften the transition. Here, yet more actual data must be
altered to allow for side lobe suppression. One further development that exacerbates these compromises is the Fast Fourier Transform, or FFT. In this algorithm, computational efficiency, and
concomitantly speed of calculation, are increased by constraining the input series to consist of 2^N data points. Here again, the chances of a data set containing an integer power of 2 points are
slim, and again, zeroes are added to pad the series out (Review [16]).
Further data corruption can occur when tighter spacing of spectral estimates is required. If the series is long, consisting of multiple cycles, this is usually not a problem. However, when short data
sets are at hand, as is commonly the case with circadian rhythm work, there will be few cycles available. Fourier analysis is based on harmonics and these are constrained. In practice, this means
that the spacing between estimates can be very wide. For a one-week long experiment with data sampled at half hour intervals (as in the test data) and analyzed using a simple DFT, spectral estimates
are produced only for 22.4 and 24 h in the critical interval between 22 and 25 h. This leaves enormous gaps with little chance that the single estimate at 22.4 is even remotely close to the true
period which is 23.0 h. Once again, it is straightforward to tighten up the interval between estimates, but once again, zeroes are added with further problems in using false data points for which
there is no justification [16].
Maximum entropy spectral analysis
In the late 1960s, John Parker Burg developed a new method for producing a spectrum that tackles these problems [17,18]. It initially found acceptance in astrophysics and quickly spread to other fields. It began to be used for circadian rhythm work in the 1980s and is an excellent choice for a wide range of biological time series. This technique is called “Maximum Entropy Spectral Analysis” (MESA) [17,18]. MESA delivers the highest possible resolution, while eliminating side lobe problems. It is also extremely sensitive, as defined above. It is particularly useful in the short, noisy time series typical in biological systems.
The linchpin of this powerful technique is stochastic modeling. Time series evolve in time according to probabilistic laws and there are a number of models that can underlie such processes. One
example is an autoregressive (AR) function (Review: [5]). The assumption is that the system moves forward in time as a function of previous values and a random noise component. The simplest example
is a Markov process:

X[t] = a·X[t-1] + Z[t]
Where t is time, a is a coefficient derived from the data and Z[t] is white noise [5]. This simplest process may be extended by going backwards in time to earlier and earlier values, with each
weighted by a coefficient derived from the known observed values [5]:

X[t] = a·X[t-1] + b·X[t-2] + c·X[t-3] + … + Z[t]
and, again, a, b, c,… are the model’s coefficients and p is the order of the filter. These coefficients form the prediction error filter (PEF) [21]. Crucially, it is possible to use the model to
predict future values based on what is known of all the past values. In the case at hand, the analysis is functionally extending the autocorrelation function out to the needed number of values by
prediction from those that can be reliably estimated [16]. Entropy, in information theory, is equivalent to ignorance. If one can formally calculate estimates that maximize ignorance, this means
these values are the most honest based on what is known from data in hand and this is demonstrated through the calculus of variations [16-18]. A pile of zeros certainly does not fit this criterion.
The spectrum is constructed from the coefficients as follows [17,18]:

S(ω) = P / |1 + Σ_{k=1}^{p} a[k] e^{-iωk}|²
Where: S(ω) is spectral power as a function of angular velocity (see above), P is the power passed by the PEF, p is the order of the PEF and a[k] is the set of PEF coefficients.
The algorithm commonly used by us and others calculates the filter in an iterative fashion and is based on the work of Andersen [22]. Each iteration extends the AR model by one. The number of coefficients in the prediction error filter employed to construct the spectrum is hence not fixed and requires some care in its choice. If a number that is too low is chosen, resolution and important detail can be lost. On the other side of the coin, if the number of coefficients is run up too high, there may be spurious peaks [21]. An objective method, based on information theory, has been developed using the methods of Akaike [21]. The filter length chosen is consistent with the maximum amount of useful information being extracted as each iteration extends the length of the filter. This is used in the MESA software employed in our work, but we also commonly set a minimum filter length of about N/4 for biological rhythm analyses to ensure adequate representation of any long period cycles in the presence of noise, which can be considerable. N/3 is a good safe maximum [5,21].
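For readers who want to see the machinery, here is a compact Python rendering of the Burg/Andersen recursion and the resulting spectrum. This is my own minimal sketch, not the author's FORTRAN; the order-selection and minimum-filter-length policies described above are left to the caller:

import numpy as np

def burg(x, order):
    # returns the PEF polynomial a (a[0] = 1) and residual power P
    x = np.asarray(x, dtype=float)
    n = len(x)
    ef = x.copy()                 # forward prediction errors
    eb = x.copy()                 # backward prediction errors
    a = np.array([1.0])
    P = np.dot(x, x) / n
    for m in range(order):
        # reflection coefficient minimizing forward + backward error power
        num = -2.0 * np.dot(ef[m + 1:], eb[m:n - 1])
        den = np.dot(ef[m + 1:], ef[m + 1:]) + np.dot(eb[m:n - 1], eb[m:n - 1])
        k = num / den
        # Levinson-style update of the filter coefficients
        a = np.concatenate([a, [0.0]]) + k * np.concatenate([[0.0], a[::-1]])
        P *= 1.0 - k * k
        ef_old = ef.copy()
        ef[m + 1:] = ef_old[m + 1:] + k * eb[m:n - 1]
        eb[m + 1:] = eb[m:n - 1] + k * ef_old[m + 1:]
    return a, P

def mesa_spectrum(a, P, dt, periods):
    # S evaluated at the requested periods; spacing can be as fine as desired
    freqs = 1.0 / np.asarray(periods, dtype=float)
    lags = np.arange(len(a))
    denom = np.exp(-2j * np.pi * np.outer(freqs, lags) * dt) @ a
    return P * dt / np.abs(denom) ** 2

# e.g. with the test series x from the earlier sketch:
# a, P = burg(x, order=len(x) // 4)
# grid = np.arange(20.0, 30.0, 0.05)        # as fine a period grid as needed
# print(grid[np.argmax(mesa_spectrum(a, P, 0.5, grid))])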
MESA has proven itself superior to ordinary Fourier analysis, as it does not produce artifacts from the various manipulations needed in the absence of a model for the function, and both resolution and side lobe suppression are superior to standard Fourier analysis [16,23]. To show the difference between Fourier analysis and MESA in one critical area, it was noted above that the estimates that can be computed for period in Fourier analysis are constrained to fixed values that are harmonics based on the length of the time series at hand. Longer series mean more tightly spaced estimates, and this can be “faked” by adding zeros. MESA does not need to do this. Since the spectrum is extracted from an AR model, the spacing can be narrowed to any needed level. As an example, for the time series we have been working with, we have one week's worth of data, sampled at half hour intervals. Note that shortening the bin size will have no effect on the spacing of the samples. Figure 5 shows the MESA for the test data set with the number of estimates increased by a factor of 32. Unlike the DFT, which had only 2 estimates in the interval between 22 and 25 h, MESA produced 60. The number of estimates could be increased as far as needed, the only downside being a growing number of values that need plotting. Here 32X is more than sufficient to tease out a good estimate.
Figure 5. This is the MESA spectrum with the coefficients upped to 32X. The period reported is 22.88, compared to the known input of 23.0. The tiny discrepancy is likely a result of the 20% added
noise in the signal.
Data conditioning
Biological signals can be notoriously non-stationary and noisy. This variation can take the form of linear or nonlinear trends in amplitude, variations in period, and changes in waveform. As with any signal
analysis system, MESA output can be improved by conditioning the signal. It should be noted, however, that MESA is robust in the face of such problems from the start. Incorporated into our MESA
program is a detrending step which fits a line by regression and subtracts it. This eliminates linear trend and removes the mean. Mean removal is highly recommended, as this DC component can obscure
the rhythmic one if it is excessive [12]. Removal of nonlinear trend can be accomplished by high-pass filtering by numerous methods and will not be discussed here as it is beyond the scope of this
work. Low pass filtering to remove high frequency noise, however, is of considerable interest and is commonly done in preparation for spectral analysis (Review: [12,24]).
We have had excellent success with Butterworth recursive filters [9,12,24]. They are considered recursive because in addition to incorporating the original time series data into the moving filtering
process, previously filtered values are used as well. Butterworth filters are highly accurate and reliable, and the cutoff frequency is sharp [9]. In Figure 6, the artificial test signal depicted in
Figure 1 is shown after filtering with a two-pole low-pass Butterworth filter with a ~3 dB amplitude rolloff at the specified period of 4 h. The number of poles reflects the depth of the recursion [9
]. The filter equation showing the recursion is:

Y[t] = (X[t] + 2·X[t-1] + X[t-2] + A·Y[t-1] + B·Y[t-2]) / C
Figure 6. This is the original data set after being filtered twice with a Butterworth recursive digital filter. The second pass reverses the filter’s introduction of a four-hour phase delay owing to
its recursive nature.
Where X[t] is the original data series and Y[t] is the output series. A and B are the filter coefficients: A = 9.656 and B = −3.4142. C is the “gain” or amplitude change of the filter and equals
10.2426. See [9,12,24] for a more detailed description of this filter. Owing to the recursion there is a 4-h phase delay in this example and this needs either to be made clear when the data are
plotted or actually reversed. Reversal can easily be accomplished by running the filter in reverse. Since it is highly inadvisable to run a filter more than once to achieve additional smoothing
before spectral analysis, as this will result in artifact [9], this reversal must only be done for display of simple plots for visualizing data. A single pass with the phase change is not an issue
for MESA, since the attendant phase shift is of no consequence in this context. A second reversing pass with this filter actually resulted in a widening of the MESA peak (data not shown). After
filtering, the RI (see above) is improved from 0.697 to 0.715.
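The same conditioning is available off the shelf; the sketch below (an illustration, not the lab's own code) uses SciPy's Butterworth design, so the coefficients differ from the A, B and C quoted above, but the behavior is equivalent. Consistent with the caution above, a single forward pass is used before spectral analysis, and the zero-phase forward-backward pass only for display:

from scipy.signal import butter, lfilter, filtfilt

fs = 2.0                    # samples per hour (half-hour bins)
cutoff_period = 4.0         # hours: remove cycles shorter than this
wn = (1.0 / cutoff_period) / (fs / 2.0)   # cutoff as a fraction of Nyquist

b, a = butter(2, wn, btype="low")         # two-pole low-pass Butterworth

y_analysis = lfilter(b, a, x)    # one pass: phase delay, fine before MESA
y_display = filtfilt(b, a, x)    # forward-backward: zero phase, for plots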
MESA at work
MESA has seen notable success since first being implemented for use in biological rhythms in the 1980s. It is useful for an extremely wide range of living oscillatory processes. It was instrumental in discovering the presence of ultradian rhythms in Drosophila locomotor activity early on, most remarkably in flies bearing the per^0 and per^- mutations, which have no overt circadian
periodicity [25,26]. These ultradian rhythms have been central in a competing hypothesis describing the mechanism of the circadian clock [26,27]. MESA has been used extensively in circadian rhythm work since that time. Given its superior resolving power, it settled an old dispute about the presence of lunar rhythmicity in physiological activity in marine organisms [9]. When applied in conjunction
with powerful trend removal techniques, it was instrumental in teasing out the role of the gene cryptochrome (cry) in the fly clock system. Luciferase activity was monitored in antennae bearing
either tim-luc or per-luc constructs, and a central role for CRY protein in the peripheral antennal clock was established [28,29]. Ultradian and circadian rhythms were examined in premature infants (24–29 weeks) prior to their developing robust circadian periodicity, enabling inferences on prenatal periodicity in normal pregnancies [30]. A cryptic human core body temperature rhythm with a period of about one hour was elucidated [31]. Electroencephalography has yielded considerable information on ultradian periodicity in rats, with MESA analysis combined with aggressive filtering enabling the incorporation of these high-frequency rhythms into models of sleep-wake dynamics [32]. A genetic component in strain differences among normal mice was discerned when locomotor activity was investigated with MESA,
revealing robust ultradian components [33]. The presence of an endogenous vertical migration rhythm in Antarctic krill was verified [34]. Work on the cardiac pacemaker of the fly heart has benefitted
from the use of MESA for measuring heart rate [35,36]. When combined with a novel preliminary Fourier treatment to alter the sampling structure, the presence of rhythmicity in the spacing of pulses
in the Drosophila mating song was confirmed and it was shown to be under the control of the period gene [37].
In summary, Maximum Entropy Spectral Analysis has proven itself to be a highly useful and versatile tool for the investigation of periodic biological phenomena.
Technical note
A full explanation of the mathematics underlying MESA and the ways in which algorithms have been implemented is beyond the scope of this paper. For those wishing to explore these topics in detail, the author recommends the following: for a good general introduction to the basic logic of MESA, see Ables' review [16]; Burg's original papers are the next step in seeing how the technique developed [17,18]; the very thorough paper by Ulrych and Bishop [21] should be sufficient to answer almost any mathematical question on the procedure; and the algorithm used in our version of the technique, which we implemented in FORTRAN, is found in Andersen's contribution [22]. Some of these papers, notably those of Burg himself, are difficult to locate. The compendium edited by D.G. Childers entitled “Modern Spectrum Analysis” (1978, Wiley) has all these recommended papers and many more that are on point.
All software used in this lab, including the FORTRAN source code, is available free of charge from the author: dowse@maine.edu. A step-by-step annotated guide to its use has been published by this author [38].
1. Pittendrigh CS: General Perspective.
In Handbook of Behavioral Neurobiology 4 Biological Rhythms Edited by Aschoff J. 1981, 57-55.
2. Lloyd D, Rossi E: Ultradian rhythms in life processes: An inquiry into fundamental principles of chronobiology and psychobiology. Springer-Verlag; 1992.
3. Dowse HB, Ringo JM: Summing locomotor activity into “bins”: How to avoid artifact in spectral analysis.
Biol Rhythm Research 1994, 25:2-14. Publisher Full Text
4. Dowse HB: Analyses for physiological and behavioral rhythmicity. In Methods in Enzymology, Computer Methods, Part A, Volume 454. Edited by Johnson ML, Brand L. Elsevier; 2009:141-174.
5. Schuster A: On the investigation of hidden periodicities with application to a supposed 26-day period of meteorological phenomena.
Terrestrial Magnetism and Atmospheric Electricity. 1898, 3:13-41. Publisher Full Text
6. Kendall MG: Contributions to the study of oscillatory time series. Cambridge University Press; 1946.
7. Burg JP: Maximum Entropy Spectral Analysis. In Modern Spectrum Analysis. Edited by Childers DG. Wiley; 1978:34-41.
8. Burg JP: A new analysis technique for time series data. In Modern Spectrum Analysis. Edited by Childers DG. Wiley; 1978:42-48.
9. Dowse HB, Ringo JM: The search for hidden periodicities in biological time series revisited.
J Theoret Biol 1989, 139:487-515. Publisher Full Text
10. Dowse HB, Ringo JM: Comparisons between “periodograms” and spectral analysis: Apples are apples after all.
11. Ulrych T, Bishop T: Maximum entropy spectral analysis and autoregressive decomposition.
Rev. Geophysics and Space Physics 1975, 13:183-300. Publisher Full Text
12. Andersen N: On the calculation of filter coefficients for maximum entropy spectral analysis.
Geophys 1974, 39:69-72. Publisher Full Text
13. Levine J, Funes P, Dowse H, Hall J: Signal analysis of behavioral and molecular cycles.
14. Dowse HB, Hall JC, Ringo JM: Circadian and ultradian rhythms in period mutants of Drosophila melanogaster.
Behav Genet 1987, 17:19-35. PubMed Abstract | Publisher Full Text
15. Dowse HB: Mid-range ultradian rhythms in Drosophila and the circadian clock problem. In Ultradian Rhythms from Molecules to Mind: A New Vision of Life. Edited by Lloyd DL, Rossi E. Springer
Verlag; 2008:175-199.
16. Dowse H, Ringo J: Further evidence that the circadian clock in Drosophila is a population of coupled ultradian oscillators.
J Biol Rhythms 1987, 2:65-76. PubMed Abstract | Publisher Full Text
17. Krishnan B, Levine J, Sisson K, Dowse H, Funes P, Hall J, Hardin P, Dryer S: A new role for cryptochrome in a Drosophila circadian oscillator.
Nature 2001, 411:313-317. PubMed Abstract | Publisher Full Text
18. Plautz J, Straume M, Stanewsky R, Jamison C, Brandes C, Dowse H, Hall J, Kay S: Quantitative analysis of Drosophila period gene transcription in living animals.
J Biol Rhythms 1997, 12:204-217. PubMed Abstract | Publisher Full Text
19. Tenreiro S, Dowse H, D’Souza S, Minors DS, Chiswick M, Simms D, Waterhouse JM: Rhythms of temperature and heart rate in premature babies in intensive care.
Early Hum Dev 1991, 27:33-52. PubMed Abstract | Publisher Full Text
20. Lindsley G, Dowse H, Burgoon P, Kilka M, Stephenson L: A persistent circhoral ultradian rhythm is identified in human core temperature.
Chronobiol Int 1999, 16:69-78. PubMed Abstract | Publisher Full Text
21. Stephenson R, Joonbum L, Famina S, Caron A, Dowse H: Sleep-wake behavior in the rat: ultradian rhythms in a light–dark cycle and continuous bright light.
J Biol Rhythms 2012, 27:490-501. PubMed Abstract | Publisher Full Text
22. Dowse H, Umemori J, Koide T: Ultradian components in the locomotor activity rhythms of the genetically normal mouse, Mus musculus.
J Exp Biol 2010, 213:1788-1795. PubMed Abstract | Publisher Full Text
23. Gaten E, Tarling G, Dowse H, Kyriacou C, Rosato E: Is vertical migration in Antarctic krill (Euphausia superba) influenced by an underlying circadian rhythm?
J Genetics 2008, 87:473-483. Publisher Full Text
24. Dowse HB, Ringo JM, Power J, Johnson E, Kinney K, White L: A congenital heart defect in Drosophila caused by an action potential mutation.
J Neurogenet 1995, 10:153-168. PubMed Abstract | Publisher Full Text
25. Bodmer R, Wessels RJ, Johnson E, Dowse H: Heart development and function. In Comprehensive Molecular Insect Science V2. Edited by Gilbert LI, Latrou K, Gill S. Elsevier; 2005:199-250.
26. Alt S, Ringo J, Talyn B, Bray W, Dowse H: The period gene controls courtship song cycles in Drosophila melanogaster.
Anim Behav 1998, 56:87-97. PubMed Abstract | Publisher Full Text
1 Estimates in this publication are subject to non-sampling and sampling errors.
2 As estimates in this publication are based on information obtained from a sample, they are subject to sampling variability. That is, they may differ from those estimates that would have been
produced if all dwellings had been included in the survey.
3 One measure of the likely difference between the sample and population estimates is given by the standard error (SE), which indicates the extent to which an estimate might have varied by chance,
because only a sample of dwellings (or occupants) was included.
4 There are about two chances in three (67%) that a sample estimate will differ by less than one SE from the number that would have been obtained if all dwellings had been included, and about 19
chances in 20 (95%) that the difference will be less than two SEs.
5 Another measure of the likely difference between the sample estimate and the population estimate is the relative standard error (RSE), which is obtained by expressing the SE as a percentage of the estimate. RSEs for estimates from the survey of Community Preparedness for Emergencies, WA, 2011 are published for each individual data cell. The Jackknife method of variance estimation is used for this process, which involves the calculation of 30 'replicate' estimates based on sub-samples of
the original sample. The variability of estimates obtained from these sub-samples is used to estimate the sample variability surrounding the main estimate.
Limited publication space does not allow for the separate indication of the SEs and/or RSEs of all the estimates in this publication. However, RSEs for all these estimates are available free-of-charge on the ABS web site.
In this publication, only estimates (numbers and proportions) with RSEs less than 25% are considered sufficiently reliable for most purposes. Estimates with RSEs between 25% and 50% have been
included and are preceded by an asterisk (e.g. *3.4) to indicate they are subject to high uncertainty and should be used with caution.
Estimates with RSEs greater than 50% are preceded by a double asterisk (e.g. **2.1) to indicate that they are considered too unreliable for general use.
SEs can be calculated using the estimates (counts or proportions) and the corresponding RSEs. For example, Table 1 shows that the estimated number of households in Perth that have experienced any
major emergencies was 41,100 (rounded to the nearest 100). The RSE table corresponding to the estimates in Table 1 (see Relative Standard Errors in the 'Relative Standard Error Table' section at the
end of these Technical Notes) shows the RSE for this estimate is 14.6%. The SE is calculated by:

SE = (RSE ÷ 100) × estimate = 0.146 × 41,100 ≈ 6,000
Therefore, there are about two chances in three that the actual number of Perth households that have experienced any major emergencies was in the range of 35,100 to 47,100, and about 19 chances in 20 that the value was in the range of 29,100 to 53,100.
The estimates in this publication were obtained using a post-stratification procedure. This procedure ensured that the survey estimates conformed to an independently estimated distribution of the
population, by state, part of state, age and sex rather than the observed distribution among respondents.
Proportions and percentages formed from the ratio of two estimates are also subject to sampling errors. The size of the error depends on the accuracy of both the numerator and the denominator. A formula to approximate the RSE of a proportion is given below. This formula is only valid when the numerator (x) is a subset of the denominator (y).

RSE(x/y) ≈ √[ RSE(x)² − RSE(y)² ]
As an example, using estimates from Table 1, of the approximately 665,900 households in Perth, 49.4%, that is approximately 329,100 households, will need to have their pets evacuated in a major
emergency. The RSE for 329,100 is 2.5% and for 665,900 is 0.9% (see Relative Standard Errors in the 'Relative Standard Error Table' section at the end of these Technical Notes). Applying the above
formula, the RSE for the proportion of households in Perth with pets needing evacuation in an emergency is:

√[ (2.5)² − (0.9)² ] ≈ 2.3%
Differences in the RSE calculated in the example above (2.3%) and the published RSE value (2.4%), are due to the use of rounded figures in the example above.
DIFFERENCES

16 Published estimates may also be used to calculate the difference between two survey estimates (of numbers or proportions). Such an estimate is also subject to sampling error. The sampling error of
the difference between two estimates depends on their SEs and the relationship (correlation) between them. An approximate SE of the difference between two estimates (x-y) may be calculated by the following formula:

SE(x − y) = √[ SE(x)² + SE(y)² ]
While this formula will only be exact for differences between separate and uncorrelated characteristics or subpopulations, it provides a good approximation for all differences likely to be of
interest in this publication.
A statistical significance test for any comparisons between estimates can be performed to determine whether it is likely that there is a difference between two corresponding population
characteristics. The standard error of the difference between two corresponding estimates (x and y) can be calculated using the formula in paragraph 16, above. The standard error is then used to
create the following test statistic:

(x − y) / SE(x − y)
If the value of this test statistic is greater than 1.96, then there is evidence, with a 95% level of confidence, of a statistically significant difference in the two populations with respect to that
characteristic. Otherwise, it cannot be stated with confidence that there is a real difference between the populations with respect to that characteristic.
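The formulas above are mechanical and easy to script; the Python helpers below (an illustration, not ABS-supplied code) reproduce the worked examples:

import math

def se(estimate, rse_pct):
    # standard error from an estimate and its RSE in percent
    return estimate * rse_pct / 100.0

def rse_proportion(rse_x_pct, rse_y_pct):
    # approximate RSE of x/y when x is a subset of y
    return math.sqrt(rse_x_pct ** 2 - rse_y_pct ** 2)

def se_difference(se_x, se_y):
    # approximate SE of (x - y) for uncorrelated estimates
    return math.sqrt(se_x ** 2 + se_y ** 2)

def significantly_different(x, y, se_x, se_y):
    # True when |x - y| exceeds 1.96 combined SEs (95% confidence)
    return abs(x - y) / se_difference(se_x, se_y) > 1.96

print(se(41_100, 14.6))           # ~6,000
print(rse_proportion(2.5, 0.9))   # ~2.3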
The RSE values for Table 1, used to calculate the figures in the examples described in paragraphs 10, 11 and 14, are given in the table below.
Evacuation and selected household characteristics(a), by Perth Major Statistical Region(b) - WA
| RSE of Households (%) | Central Metropolitan | Eastern Metropolitan | Northern Metropolitan | South West Metropolitan | South East Metropolitan | Perth MSR |
|---|---|---|---|---|---|---|
| Experienced any major emergencies | 47.2 | 31.9 | 30.7 | 34.8 | 24.5 | 14.6 |
| Require assistance to exit dwelling | 19.5 | 12.3 | 9.4 | 12.7 | 10.1 | 3.3 |
| Require transport assistance | 17.0 | 15.4 | 9.7 | 11.3 | 11.1 | 4.0 |
| Have access to alternate accommodation | 12.1 | 8.7 | 5.0 | 8.1 | 7.3 | 1.3 |
| Pets needing evacuation | 16.6 | 11.3 | 6.8 | 9.5 | 8.9 | 2.5 |
| Provide unpaid care to non-household member | 26.6 | 35.3 | 13.6 | 21.1 | 26.4 | 9.1 |
| Unable to understand English | np | 35.3 | 20.3 | np | 25.3 | 12.5 |
| Unable to speak English | 59.5 | 37.1 | 21.5 | 38.0 | 24.4 | 11.8 |
| Unable to understand emergency instructions in English | np | 59.8 | 31.0 | np | 30.5 | np |
| Not willing to evacuate | 45.0 | 27.9 | 20.0 | 19.4 | 17.5 | 9.6 |
| Total households | 12.0 | 8.2 | 4.8 | 8.1 | 6.5 | 0.9 |
np not available for publication but included in totals where applicable, unless otherwise indicated
(a) For details regarding Comparability of estimates over time, see Explanatory Notes 9 to 11.
(b) Based on the 2010 edition of ASGC. See Explanatory Note 8 for details.
NON-SAMPLING ERROR

21 Non-sampling errors may arise as a result of errors in the reporting, recording or processing of data. These errors can be introduced through inadequacies in the questionnaire, treatment of
non-response, inaccurate reporting by data providers, errors in the application of survey procedures, incorrect recording of answers and errors in data capture and processing.
The extent to which non-sampling error affects the results is difficult to measure. Every effort is made to minimise non-sampling error by careful design and testing of the collection instrument,
intensive training and supervision of interviewers and the use of efficient operating procedures and systems.
In addition to the non-sampling errors outlined in the Reliability of statistics section, other factors may affect the comparability of estimates over time with earlier iterations of this State Supplementary Survey topic. These factors are described in
Explanatory notes
9 to 11.
Sum of sine and cosine
What is the formula and how to derive the formula of sina+sin2a+sin3a...........?
Take a look at this Wikipedia article on trigonometric identities. The sum and difference formulas are in there. For this problem, start writing out each term as a sum of the previous term and look for a pattern. $\sin(a)$; $\sin(2a)=\sin(a)\cos(a)+\cos(a)\sin(a)=2\sin(a)\cos(a)$; $\sin(3a)=\sin(2a+a)=\sin(2a)\cos(a)+\sin(a)\cos(2a)$. We already know the $\sin(2a)$ term, and it isn't hard to find $\cos(2a)$. Every next term can use the previous term to help expand it. Try writing out a few more terms and see what patterns you can find.
Tomorrow is my exam and i need this formula pls.......
What is the formula and how to derive the formula of sina+sin2a+sin3a...........? Jameson already did the groundwork by providing that excellent link. I did some work and here is what I got: $$\sin(\alpha) + \sin(\alpha + \beta) + \sin(\alpha + 2\beta) + \cdots \text{ to } n \text{ terms} = \frac{\sin(n\beta/2)}{\sin(\beta/2)}\,\sin\!\left(\alpha + \frac{(n-1)\beta}{2}\right)$$ This was a general case for which I derived a formula. Things become simpler for the formula which you require, as we just need to put $\beta = \alpha$ and plug into the above formula, giving $$\sin(a) + \sin(2a) + \cdots + \sin(na) = \frac{\sin(na/2)\,\sin((n+1)a/2)}{\sin(a/2)}.$$
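A quick numerical check of the closed form in Python, if you want to convince yourself before the exam:

import math

def lhs(a, n):
    return sum(math.sin(k * a) for k in range(1, n + 1))

def rhs(a, n):
    return math.sin(n * a / 2) * math.sin((n + 1) * a / 2) / math.sin(a / 2)

print(lhs(0.7, 25), rhs(0.7, 25))   # the two values agree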
Last edited by fardeen_gen; December 2nd 2008 at 05:35 AM.
Many-Sided Dice, Lots of Sums -- and Even More Combinations
Date: 01/13/2011 at 14:05:45
From: Ryan
Subject: RPG dice algorithm
First off: you don't have to solve this, but ANY information would help.
A few of my friends and I want to compare the probabilities and bell
curves of the dice we use in our role-playing games (RPG) -- such as d4,
d6, d8, d10, d12, d20, d24, and d100 -- or at least be able to generate an
algorithm, given enough information.
It is easy to calculate the probability of rolling an individual number on
a fair die:
(times desired # appears) / (number of sides)

Two dice are just as easy:

(times desired # appears) / (number of sides)^2
If pressed or stumped, I could even resort to brute force and write out
all those possibilities on paper.
But we're interested in including multiple dice of the same values, such
as 1d20 vs 2d10 vs 5d4, or 1d12 vs 2d6 vs 3d4 vs 4d3 -- and figuring out
the algorithm beyond 2 of the same dice seems difficult.
1d20 = 1/20 for all values
2d10 = {0, 1/100, 1/50, 3/100, ... up to 1/10 (at 11), then down}
The only way I've been able to generate 3 or more of any single die is to
list all possible values:
1-1-1, 1-1-2, 1-1-3, etc.
I'm not even sure how to graphically write in three dimensions, let alone
four or more.
Date: 01/16/2011 at 23:34:32
From: Doctor Vogler
Subject: Re: RPG dice algorithm
Hi Ryan,
Thanks for writing to Dr. Math.
So you want to compute the probability of rolling a k on an n-d-m -- that
is, n rolls of an m-sided die which sum to k.
As you say, this is easy when n = 1, because then the probability is 1/m
when 1 <= k <= m, and zero otherwise. And it seems like there should be a
simple formula for larger n ... but things change rapidly when n grows.
For example, for n = 2 when 2 <= k <= m + 1, you get
(k - 1)/m^2
But when m + 1 <= k <= 2m, you get
(2m + 1 - k)/m^2
(In the first case, the first die can be anything from 1 to k - 1 and then
the second die is determined; in the second case, the first die can be
anything from k - m to m, and then the second die is determined.)
To see what happens in the case of n = 3, check out this Dr. Math conversation.
It turns out that you get three different regions:
3 <= k <= m + 2 : (k - 1)(k - 2)/(2m^3)
m + 2 <= k <= 2m + 1 : [(k - 1)(k - 2) - 3(k - m - 1)(k - m - 2)]/(2m^3)
2m + 1 <= k <= 3m : (3m + 2 - k)(3m + 1 - k)/(2m^3)
Now have a look at this conversation:
It explains why, when n <= k <= m + n - 1, the probability of the first
region is always ...
1/m^n * (k - 1 choose k - n),
... which is the same as
(k - 1 choose n - 1).
(In case you're unfamiliar with this notation of binomial coefficients,
n choose r, or nCr, is equal to n!/(r!(n - r)!).)
Similarly, the last region is the mirror image of this. Namely, when
mn - m + 1 <= k <= mn, then the probability is ...
1/m^n * (mn + n - k - 1 choose mn - k),
... which is the same as
(mn + n - k - 1 choose n - 1)
It takes a little more work to figure out what the second region will be.
You get the first region by putting n balls (one ball each) in the n boxes
that are your dice and then counting the number of ways to divide up the
k - n remaining balls into the n boxes. Since k - n is at most m - 1, no
box gets more than m - 1 balls, so there is no problem with asking too
much of one die. If you do the same thing when k - n is a little bigger
than m - 1, then you will also be counting ways where some box gets too
many balls, so you should subtract off the number of ways to distribute
the k - n remaining balls into n boxes such that one of the boxes gets
more than m - 1 balls. That's the same number as n (to pick the box with
at least m additional balls) times the number of ways to distribute the
k - n - m other balls, which is
n*((n - 1) + (k - n - m) choose n - 1) = n*(k - m - 1 choose n - 1)
This computation is correct as long as it's not possible to get two or
more boxes each with more than m - 1 balls, so you need k - n <= 2(m - 1).
In other words, if n + m - 1 <= k <= n + 2(m - 1), then the probability of
the second region is
1/m^n * [(k - 1 choose n - 1) - n*(k - m - 1 choose n - 1)]
(You can check that this gives the same result as the last region in the
case where n = 2.)
Similarly, if mn - 2(m - 1) <= k <= mn - (m - 1), then the probability of
the second-to-last region is
1/m^n * [(mn + n - k - 1 choose n - 1)
- n*(n - 1 + mn - k - m choose n - 1)]
(You can check that this gives the same result as the first region in the
case where n = 2; and gives the same result as the second region in the
case where n = 3.)
To continue this kind of analysis to get formulas for the third and fourth
regions, and so on, the inclusion-exclusion principle might come in useful.
There is a simple formula to determine the region index i from k, namely
i = 1 + floor((k - n)/(m - 1)).
Now, I do believe that one could write an efficient algorithm to compute
the probability with a computer program. But I expect you'll find that the
formulas get increasingly complicated as n grows and the regions
proliferate. In particular, I don't think that you'll be able to find a
simple formula for the i'th region in terms of the number i.
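That said, building the exact distribution by computer is easy, and the inclusion-exclusion idea above collapses all the regions into a single sum. A short Python sketch (my own, not part of the original exchange; function names are illustrative):

from math import comb

def dice_pmf(n, m):
    # P(sum = k) for n fair m-sided dice, built by convolving one die at a time
    pmf = {0: 1.0}
    for _ in range(n):
        new = {}
        for total, p in pmf.items():
            for face in range(1, m + 1):
                new[total + face] = new.get(total + face, 0.0) + p / m
        pmf = new
    return pmf

def dice_prob(n, m, k):
    # closed form: inclusion-exclusion over dice that "overflow" past m
    s = sum((-1) ** j * comb(n, j) * comb(k - j * m - 1, n - 1)
            for j in range((k - n) // m + 1))
    return s / m ** n

# cross-check on 3d6 summing to 10: both give 27/216 = 0.125
print(dice_pmf(3, 6)[10], dice_prob(3, 6, 10))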
I hope this was helpful. If you have any questions about this or need more
help, please write back and show me what you have been able to do, and I
will try to offer further suggestions.
- Doctor Vogler, The Math Forum
1. <probability> Describes a system whose time evolution can be predicted exactly.
Contrast probabilistic.
2. <algorithm> Describes an algorithm in which the correct next step depends only on the current state. This contrasts with an algorithm involving backtracking, where at each point there may be several possible actions and no way to choose between them except by trying each one and backtracking if it fails.
Last updated: 1995-09-22
99 questions/Solutions/59
From HaskellWiki
(**) Construct height-balanced binary trees
In a height-balanced binary tree, the following property holds for every node: The height of its left subtree and the height of its right subtree are almost equal, which means their difference is not
greater than one.
The solutions below assume the Tree type used throughout these problems:

data Tree a = Empty | Branch a (Tree a) (Tree a)
              deriving (Show, Eq)

hbalTree x = map fst . hbalTree'
where hbalTree' 0 = [(Empty, 0)]
hbalTree' 1 = [(Branch x Empty Empty, 1)]
hbalTree' n =
let t = hbalTree' (n-2) ++ hbalTree' (n-1)
in [(Branch x lb rb, h) | (lb,lh) <- t, (rb,rh) <- t
, let h = 1 + max lh rh, h == n]
Alternative solution:
hbaltree :: a -> Int -> [Tree a]
hbaltree x 0 = [Empty]
hbaltree x 1 = [Branch x Empty Empty]
hbaltree x h = [Branch x l r |
(hl, hr) <- [(h-2, h-1), (h-1, h-1), (h-1, h-2)],
l <- hbaltree x hl, r <- hbaltree x hr]
If we want to avoid recomputing lists of trees (at the cost of extra space), we can use a similar structure to the common method for computation of all the Fibonacci numbers:
hbaltree :: a -> Int -> [Tree a]
hbaltree x h = trees !! h
    where -- trees !! n holds all height-balanced trees of height n
          trees = [Empty] : [Branch x Empty Empty] :
                  zipWith combine (tail trees) trees
          -- a tree of height n+2 pairs subtrees of heights
          -- (n+1, n), (n+1, n+1) or (n, n+1)
          combine ts shortts = [Branch x l r |
                                (ls, rs) <- [(shortts, ts), (ts, ts), (ts, shortts)],
                                l <- ls, r <- rs]
Westvern, CA SAT Math Tutor
Find a Westvern, CA SAT Math Tutor
...Please don't hesitate to reach out to me with questions! Shane

Hello, I studied Mandarin Chinese as one of my majors at Cornell University. There I was also enrolled in the FALCON program, which is the only program of its kind in the US that teaches students intensive Mandarin Chinese for 12-15 hours per day over the course of one year.
30 Subjects: including SAT math, Spanish, English, writing
...Teaching a few fundamental principles, I try to make the subject engaging, and maybe, just a bit fun. Algebra 2 continues the student's trip into the realm of abstract thinking. You encounter
word problems that can make the subject more engaging and relate to everyday problems in a practical context.
39 Subjects: including SAT math, English, reading, physics
...I previously graduated from UCLA with a bachelor's of science degree in environmental science. I minored in atmospheric and oceanic science, and have engaged in years of water quality
research. I also have associates degrees in physical science, behavioral arts, and language arts.
41 Subjects: including SAT math, English, chemistry, physics
...I'm very patient and have a good sense of humor, and I do whatever is needed to help the student scale the next hurdle. I have been a film and video editor in Hollywood for many years, and am expert in both AVID and Final Cut Pro. I also have working knowledge of Adobe After Effects. While it is ve...
13 Subjects: including SAT math, English, reading, writing
...Authors that I have read include Plato, Herodotus, Homer, Lysias, Demosthenes, Sophocles, Euripides, New Testament, Lucian, and selections of others. Linear Algebra: My background in
Mathematics is quite deep and extensive, and Linear Algebra happens to be my favorite of the sub-disciplines. It...
20 Subjects: including SAT math, chemistry, calculus, reading
Doppler Effect and Math
Date: 6/1/96 at 14:55:52
From: Anonymous
Subject: Doppler Effect
Dear Dr. Math,
I am doing a research project for Algebra class on the Doppler Effect.
My partner and I have done extensive research and gathered much
information, but we are stumped on how the Doppler Effect is related
to mathematics. The point of this project is to find the relevance
between these two subjects, and we just can't figure it out. Please
help us!
Brian and Tim
Date: 6/1/96 at 19:27:39
From: Doctor Sarah
Subject: Re: Doppler Effect
Hi Brian and Tim -
Here's part of an answer we sent out a while back to a question about
the Doppler effect. Maybe you can use it to figure out more examples.
"As an ambulance approaches, the sound waves from its siren
are compressed towards the observer. The intervals between
waves diminish, which translates into an increase in frequency
or pitch. As the ambulance recedes, the sound waves are
stretched relative to the observer, causing the siren's pitch to
decrease. By the change in pitch of the siren, you can determine
if the ambulance is coming nearer or speeding away. If you
could measure the rate of change of pitch, you could also
estimate the ambulance's speed."
Can you work out how this can be done?
The whole answer can be found at:
-Doctor Sarah, The Math Forum
Check out our web site! http://mathforum.org/dr.math/
Date: 6/3/96 at 11:18:50
From: Doctor Mike
Subject: Re: Doppler Effect
Hello Doppler Questioner,
Let me add to Doctor Sarah's answer.
I'm glad you asked that question about what the Doppler
Effect has to do with mathematics. Except for details,
the answer is the same for any scientific subject area.
Whenever you use an equation or a function to express
the relationship between related things, you are using
mathematics. The value in Dollars of a pile of Pennies
is expressed by the equation D = P/100. Often this is
expressed using the idea of a function, like D = H(P)
where H is the "divide by 100" function.
Science research results in theories about how things
relate to each other. For the Doppler Effect, if the
distance changes between you and a sound wave source,
like the horn of a car driving past as you walk on a
road, the sound frequency Fh that you "hear" depends on
both the "true" sound frequency Ft, and also the Speed
of the passing car. So, you get a conversion function
Fh = C( Ft , Speed )
Your research should tell you what this equation is.
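For instance, one common textbook form of the equation, for a sound source moving toward or away from a stationary observer, can be sketched numerically as below. The formula, the 343 m/s speed of sound, and the example numbers are illustrative assumptions here, since the original reply deliberately leaves the equation for you to look up:

```python
# Doppler shift for a moving sound source and a stationary observer.
V_SOUND = 343.0  # approximate speed of sound in air at ~20 C, m/s

def heard_frequency(true_freq_hz: float, source_speed_ms: float,
                    approaching: bool) -> float:
    """Frequency a stationary observer hears from a moving sound source."""
    sign = -1.0 if approaching else 1.0
    return true_freq_hz * V_SOUND / (V_SOUND + sign * source_speed_ms)

# Example: a 440 Hz horn on a car moving at 30 m/s (~67 mph)
print(heard_frequency(440, 30, approaching=True))   # ~482 Hz, pitch rises
print(heard_frequency(440, 30, approaching=False))  # ~405 Hz, pitch falls
```

Running the example shows exactly the rise and fall in pitch Doctor Sarah described.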
You can then do interesting variants, such as how the
equation changes if you aren't at the side of the road,
but a block away. Graph these functions. Try to do a
Science Fair demonstration showing Doppler Effect for
water waves, which are the ones we can see directly.
Try to get equations for other wave situations, like
Red-Shift in astronomy, sonar, or Police radar speed measurement.
I hope this helps.
-Doctor Mike, The Math Forum
Check out our web site! http://mathforum.org/dr.math/
Patent application title: ELECTROMAGNETIC FIELD TREATMENT APPARATUS AND METHOD FOR USING SAME
Larmor Precession makes specific predictions about bound ion dynamics, based upon specific combinations of AC and DC magnetic fields. Especially significant is the fact that the external magnetic
field environment determines the overall qualities of resonances or particular changes in bio-effects. Given a target with a particular gyromagnetic ratio, Larmor Precession makes predictions that
are determined solely by a magnetic field environment itself. An embodiment according to the present invention comprises specific combinations of AC and DC magnetic fields configured to produce
specific bio-effects. Preferably an embodiment according to the present invention comprises using Larmor Precession to develop Electromagnetic Field environments targeted towards enhancing or
diminishing specific biological processes, including tumor growth, bone and tissue repair, and other biological processes, and using Larmor Precession to generate magnetic field conditions that take advantage of specific behaviors, including resonance conditions.
A method for treating tissue, the method comprising applying a bio-effective magnetic field to a target tissue wherein the bio-effective magnetic field is configured to include spatiotemporal
magnetic field components superimposed with a magnetic field configured according to a Larmor Precession model.
The method of claim 1, further comprising measuring an ambient magnetic field and using the measurement of the ambient magnetic field to configure the bio-effective magnetic field.
The method of claim 2, wherein the ambient magnetic field includes a geomagnetic field.
The method of claim 1, wherein the bio-effective magnetic field configuration includes at least one of an AC DC parallel field configuration, an AC DC perpendicular field configuration, and an AC DC
arbitrary field configuration.
The method of claim 1, wherein the bio-effective magnetic field includes a DC magnetic field having an amplitude of about 0.01 G to 5,000 G.
The method of claim 1, wherein the bio-effective magnetic field includes an AC magnetic field having an amplitude of about 0.01 G to 5,000 G and frequency from about 0.01 Hz to 36 MHz.
The method of claim 1, wherein the bio-effective magnetic field includes at least one of a DC and AC magnetic field having amplitude of about 0.01 G to 5,000 G in superposition with at least one of an AC and DC magnetic field having an amplitude of about 0.01 G to 5,000 G and frequency from about 0.01 Hz to 36 MHz.
The method of claim 1, wherein the bio-effective magnetic field includes at least one of a DC and AC magnetic field having amplitude of about 0.01 G to 5,000 G in superposition with at least one of an AC and DC magnetic field having an amplitude of about 0.01 G to 5,000 G and frequency from about 0.01 Hz to 36 MHz to enhance biochemical processes in tissues, organs, cells and molecules.
The method of claim 1, wherein the bio-effective magnetic field includes at least one of a DC and AC magnetic field having amplitude of about 0.01 G to 5,000 G in superposition with at least one of an AC and DC magnetic field having an amplitude of about 0.01 G to 5,000 G and frequency from about 0.01 Hz to 36 MHz to inhibit biochemical processes in tissues, organs, cells and molecules.
The method of claim 1, wherein the bio-effective magnetic field includes a bio-effective magnetic field comprising superposition of a signal satisfying Larmor Precession conditions, the signal having a bipolar pulse train of known characteristics, yielding a signal of variable waveform, with amplitude from about 0.01 G to 5,000 G.
The method of claim 1, wherein the bio-effective magnetic field includes a bio-effective magnetic field comprising superposition of a signal satisfying Larmor Precession conditions, the signal having a bipolar pulse train of known characteristics, yielding a signal of variable waveform, with amplitude from about 0.01 G to 5,000 G to enhance biochemical processes in tissues, organs, cells and molecules.
The method of claim 1, wherein the bio-effective magnetic field includes a bio-effective magnetic field comprising superposition of a signal satisfying Larmor Precession conditions, the signal having a bipolar pulse train of known characteristics, yielding a signal of variable waveform, with amplitude from about 0.01 G to 5,000 G to inhibit biochemical processes in tissues, organs, cells and molecules.
The method of claim 1, wherein the bio-effective magnetic field is employed in conjunction with pharmacological agents.
The method according to claim 1, wherein the bio-effective magnetic field is employed in conjunction with dressings and braces.
The method according to claim 1, wherein the bio-effective magnetic field is employed in conjunction with therapeutic procedures including at least one of heat, cold and ultrasound.
An electromagnetic apparatus configured to provide a bio-effective magnetic field according to a Larmor Precession model, the apparatus comprising: a controller for configuring a magnetic field to a
bio-effective magnetic field configuration that satisfies the Larmor Precession model; a power supply for supplying power to the electromagnetic apparatus; and a coil for generating a bio-effective
magnetic field directed to a treatment site, wherein the generated bio-effective magnetic field is configured by the controller to satisfy the Larmor Precession model.
The electromagnetic apparatus of claim 16, wherein the controller is configured to superpose spatiotemporal magnetic field components into the bio-effective magnetic field configuration.
The electromagnetic apparatus of claim 16, wherein the controller is configured to use measurement of ambient magnetic field to configure the bio-effective magnetic field.
CROSS REFERENCE TO RELATED APPLICATIONS [0001]
This application is a continuation of U.S. patent application Ser. No. 12/082,944, filed Apr. 14, 2008 entitled "ELECTROMAGNETIC FIELD TREATMENT APPARATUS AND METHOD FOR USING SAME", which claims the
benefit under 35 U.S.C. 119 of U.S. Provisional Patent Application No. 60/922,894, filed Apr. 12, 2007, titled "APPARATUS AND METHOD FOR ELECTROMAGNETIC FIELD TREATMENT OF TISSUES, ORGANS, CELLS, AND
MOLECULES THROUGH THE GENERATION OF SUITABLE ELECTROMAGNETIC FIELD CONFIGURATIONS", which are herein incorporated by reference in their entirety.
INCORPORATION BY REFERENCE [0002]
All publications and patent applications mentioned in this specification are herein incorporated by reference to the same extent as if each individual publication or patent application was
specifically and individually indicated to be incorporated by reference.
BACKGROUND OF THE INVENTION [0003]
1. Field of the Invention
This invention pertains generally to an apparatus and a method for therapeutically and prophylactically treating humans, animals and plants using static ("DC") and time-varying ("AC") magnetic fields
("MF")that are selected by optimizing amplitude and waveform characteristics of a time-varying electromagnetic field ("EMF") at target pathway structures such as molecules, cells, tissues and organs.
An embodiment according to the present invention spatiotemporally configures MF to satisfy Larmor Precession conditions at the target pathway structure so that treatment can be provided for tissue
growth and repair of humans, animals and plants. A method for configuring bio-effective EMF signals is provided, based upon the precise knowledge given by LP conditions of the effect of EMF upon a
biological target. This knowledge is used to produce specific bio-response in the target. The method of construction of devices based upon LP conditions is given, including devices which directly
employ the ambient EMF, including the geomagnetic field, as an integral component of the LP configured bio-effective field.
2. Discussion of Related Art
An important class of bio-effective EMFs exists, including those due to static magnets, having an amplitude and frequency of MF that are clearly too small to result in significant induced electric
field ("IEF") effects. The observed bio-effects and therapeutic efficacy of these EMFs must thus be due directly to the MF. It is suggested that specific combinations of DC and low-frequency AC MFs
may be configured to enhance or reduce specific biological processes.
DC and AC magnetic fields in the 1 Gauss ("G") to 4,000 G range have been reported to have significant therapeutic benefits for treatment of pain and edema from musculoskeletal injuries and
pathologies. At the molecular level ambient range fields less than 1 G accelerated phosphorylation of a muscle contractile protein in a cell-free enzyme assay mixture. Fields ranging from 23 G to
3,500 G have been reported to alter the electrical properties of solutions as well as their physiological effects. At the cell level, a 300 G field doubled alkaline phosphatase activity in osteoblast-like cells. Fields between 4,300 G and 4,800 G significantly increased turnover rate and synthesis of fibroblasts but had no effect on osteoblasts. Neurite outgrowth from embryonic chick ganglia was significantly increased by using fields in the range of 225 G and 900 G. Rat tendon fibroblasts exposed to 2.5 G showed extensive detachment of pre-attached cells, as well as temporarily altered morphology. A minimum MF gradient of 15 G/mm was required to cause 80% action potential blockade in an isolated nerve preparation. A series of studies demonstrated 10 G fields could significantly affect cutaneous microcirculation in a rabbit model. One of those studies showed a biphasic response dependent upon the pharmacologically determined state of a target.
Several double blind clinical studies using static magnets have been performed. A single 45 minute treatment using 300 G to 500 G fields reduced pain in post-polio patients by 76%. The magnets were
placed on pain pressure points and not directly on a pain site. Discoloration, edema and pain were reduced by 40% up to 70% over a 7 day period post suction lipectomy. Pads containing arrays of 150 G
to 400 G ceramic magnets were placed over a liposuction site immediately post-operative and remained over the site for 14 days. The outcome measures of fibromyalgia (pain, sleep disorders, etc.) were
reduced by approximately 40% in patients who slept on a mattress pad containing arrays of 800 G ceramic magnets over a 4 month period. 90% of patients with diabetic peripheral neuropathy received
significant relief of pain, numbness and tingling using 475 G alternating pole magnetic insoles in a randomized, placebo-controlled crossover foot study. Only 30% of non-diabetic subjects showed
equivalent improvement. Chronic lower back pain was not affected by application of a pad over the lumbar region having a geometric array of alternating pole 300 G fields for 6 hours/day, 3 times per
week for one week.
The proven therapeutic efficacy of static MF devices and the wide range of bio-effects for low-frequency AC devices has resulted in the development of several models to explain the phenomena. Early
observations of DC and AC magnetic field effects on calcium efflux and binding processes stimulated research into ion and ligand binding as the primary transduction pathway for a variety of observed
effects. Early observations of amplitude windows and a dependence upon specific frequency and amplitude characteristics of DC and AC fields prompted the development of models predicting resonance
conditions for particular combinations of fields. The ion cyclotron resonance ("ICR") model shows that magnetic fields act directly on the classical trajectory of a charged ion or ligand. However, that model has been said to be physically unrealistic on the grounds that cyclotron motion could not occur in a viscous medium and that the diameter of the cyclotron orbit at observed field strengths would be much larger than the total size of the biological target itself.
Reports of amplitude windows for AC magnetic fields led to the development of quantum mechanical ion parametric resonance ("IPR") models that predict resonances. Those models appear to hold promise
for predicting the location of resonances for combinations of AC and static magnetic fields. However one of the foremost objections to the predictive use of these models is that the numerical values
produced depend critically upon factors such as the spherical symmetry of the Calcium ("Ca") binding site. Small perturbations from this symmetry will produce very large deviations from theoretical
predictions. This suggests that apparent resemblance between experimental and theoretical resonances may be coincidental. Observed resonances have been suggested to also involve complex combinations
of different target ions and the involvement of charged lipids on the surface of liposomes.
Models involving classical Lorentz force avoid the difficulties inherent in the ICR and IPR models.
Therefore, a need exists for an apparatus and a method that comprises controlling DC and ELF magnetic field effects by using a Larmor precession mechanism, such that an effective acceleration, deceleration or inhibition of a number of physiological biochemical cascades will occur.
SUMMARY OF THE INVENTION [0013]
The apparatus and method according to the present invention comprise delivering a pulsed electromagnetic field to human, animal and plant molecules, cells, tissues and organs for therapeutic and
prophylactic purposes. Particularly an embodiment according to the present invention comprises the generation of any combination of AC and/or DC magnetic fields specifically configured to conform to
LP conditions and resonances as described in detail below and the generation of any signal with AC and/or DC characteristics targeted towards the specific biochemical characteristics of a target.
Preferably an embodiment according to the present invention comprises modulation of any carrier EMF by any secondary signal or pattern designed to couple to a target by satisfying requirements of LP
conditions, including but not limited to selection of specific numerical parameters employed in producing any specific waveform having specific characteristics targeted towards the specific
biochemical characteristics of a target. Modulation may be achieved through superposition, amplitude and frequency modulation, and the generation of effective envelopes using characteristic waveforms that satisfy LP conditions of a carrier waveform of varying or constant amplitude and frequency, to form signals of known characteristics, including waveform and power spectra tuned to the dynamics and resonance frequencies of ion and ligand binding.
An embodiment according to the present invention comprises a method by which an ambient magnetic field, including the geomagnetic field, is detected to produce feedback which will allow spatial
components of the geomagnetic field to be selectively enhanced, selectively reduced, or cancelled completely in order to configure a specific bio-effective magnetic field, based upon empirical
evidence and/or a mathematical model.
An embodiment according to the present invention comprises a specific signal that is generated to satisfy LP conditions whereby a resulting composite MF signal is configured that can be applied to
target pathway structures such as molecules, cells, tissues and organs for an exposure time of about 1 minute to about several hours per day; however, other exposure times can be used.
Another embodiment according to the present invention comprises a MF modulated to satisfy LP conditions comprising any DC MF having an amplitude of 0.01 G to 5,000 G.
Another embodiment according to the present invention comprises a MF modulated to satisfy LP conditions comprising any AC MF having an amplitude of about 0.01 G to 5,000 G and a frequency from about
0.01 Hz to 36 MHz.
Another embodiment according to the present invention comprises a MF modulated to satisfy LP conditions comprising any DC or AC MF having an amplitude of about 0.01 G to 5,000 G in superposition with
any AC or DC MF having an amplitude of about 0.01 G to 5,000 G and a frequency from about 0.01 Hz to 36 MHz for treatment of tissues, organs, cells and molecules.
Another embodiment according to the present invention comprises a MF modulated to satisfy LP conditions comprising any DC or AC MF having an amplitude of about 0.01 G to 5,000 G in superposition with
any AC or DC MF having an amplitude of about 0.01 G to 5,000 G and a frequency from about 0.01 Hz to 36 MHz to enhance any biochemical process in tissues, organs, cells and molecules.
Another embodiment according to the present invention comprises a MF modulated to satisfy LP conditions comprising any DC or AC MF having an amplitude of about 0.01 G to 5,000 G in superposition with
any AC or DC MF having an amplitude of about 0.01 G to 5,000 G and a frequency from about 0.01 Hz to 36 MHz to inhibit any biochemical process in tissues, organs, cells and molecules.
Another embodiment according to the present invention comprises superposition of any signal satisfying LP conditions with a bipolar pulse train of known characteristics yielding a signal of variable
waveform with amplitude from about 0.01 G to 5,000 G to enhance or to inhibit any biochemical process in tissues, organs and cells.
Another embodiment according to the present invention comprises superposition of any signal satisfying LP conditions with a bipolar pulse train of known characteristics yielding a signal of variable
waveform having an amplitude from about 0.01 G to 5,000 G for treatment of tissues, organs, cells or molecules.
Another embodiment according to the present invention comprises application of any carrier signal modulated to satisfy LP conditions using inductively coupled signal transmission equipment,
electrodes implanted into or placed on a surface of a target, or any other method of applying the signal for treatment of tissues, organs, cells, and molecules.
Another embodiment according to the present invention comprises at least one flexible inductively coupled transmission coil that can be incorporated into anatomical wraps and supports for treatment
of tissues, organs, cells and molecules.
Another embodiment according to the present invention comprises at least one flexible inductively coupled transmission coil that can be incorporated into bandages and dressings for treatment of
tissues, organs, cells and molecules.
Another embodiment according to the present invention comprises at least one flexible inductively coupled transmission coil that can be incorporated into everyday garments and articles of clothing to
allow for the within described treatment of tissues, organs, cells and molecules on an ambulatory basis.
Another embodiment according to the present invention comprises at least one flexible inductively coupled transmission coil that can be incorporated into beds, mattresses, pads, chairs, benches and
any other structure designed to support an anatomical structure of a human and animal.
Another embodiment according to the present invention comprises employing a plurality of flexible inductively coupled transmission coils such that the coils provide increased coverage area for
treatment of large areas of tissues, organs, cells and molecules.
Another embodiment according to the present invention comprises an apparatus that operates at lower power levels than conventional electro-medical devices.
"About" for purposes of the invention means a variation of plus or minus 50%.
"Ambient Field" for purposes of this invention includes geomagnetic fields and fields generated by any devices that may be transmitted to the treatment site.
"Bio-effective" for purposes of the invention means biological and physiological outcomes of biochemical cascades related to augmenting or diminishing tissue growth and repair.
"LP resonances" for purposes of the invention means the computation of resonance conditions through any means that employs the dynamics of LP in order to compute resonance conditions.
The above and yet other aspects and advantages of the present invention will become apparent from the hereinafter set forth Brief Description of the Drawings and Detailed Description of the Invention.
BRIEF DESCRIPTION OF THE DRAWINGS [0036]
Methods and apparatus that are particular embodiments of the invention will now be described, by way of example, with reference to the accompanying diagrammatic drawings:
FIG. 1 illustrates an effect of a magnetic field on charged ion bound within a signaling molecule;
FIG. 2 depicts Larmor Precession of a bound ion wherein thermal noise and an applied magnetic field are present;
FIG. 3 is a graph depicting a bio-effect of CaCaM binding;
FIG. 4 depicts precessional frequencies of Ca, and oxygen and hydrogen arms of a water molecule;
FIG. 5 depicts Larmor Precession frequency of Ca for parallel superposition of 50 μT AC and DC magnetic fields;
FIG. 6 illustrates reactivity for AC magnetic field bio-effects;
FIGS. 7A and 7B are graphs illustrating reactivity on binding lifetime for an AC/DC parallel magnetic field combination;
FIG. 7C is a graph that illustrates reactivity for an AC/DC parallel magnetic field combination;
FIG. 8 is a graph of typical reactivity of mean deviation from random oscillator orientation, as a function of AC frequency and amplitude;
FIG. 9 depicts a comparative example of on-resonance and off-resonance behavior for parallel AC/DC magnetic field combinations;
FIG. 10 illustrates on the left a sample declination of the axis of precession from the z-axis as a function of time and the ratio of DC/AC amplitudes, and on the right the reactivity for perpendicular field combinations;
FIG. 11 is an example of LP conditions for combined AC/DC field combinations;
FIG. 12 is a graph of results of an experiment with calcium flux in bone cells, on the top right showing a region of the LP resonance landscape relevant to this experiment and on the bottom a
relevant predictive frequency response;
FIG. 13 illustrates, on the top left 1301, a resonance landscape for a specific field configuration showing the z-component of the oscillator trajectory subject to an applied AC/DC parallel and perpendicular field combination, with the bottom left showing that the reactivity, determined by mean z-excursion displacement from zero, exhibits both inhibitory and excitatory responses, while the top right shows that the resonance landscape for a modified field configuration produces a predictable change in reactivity, as shown on the bottom right graph;
FIG. 14 illustrates LP resonance conditions: on the left, 1401, resonance conditions predicted by Larmor precession for an AC/DC perpendicular field combination, meaning extrema of reactivity, measured via z-declination of the precessing oscillator, occur at half-integer multiples of the Larmor frequency of the oscillator in the DC field, noting that resonance conditions are dependent upon AC frequency and the ratio of AC/DC amplitudes, thus the location of resonances will depend upon the contribution due to ambient fields; and on the right, the location of resonances for AC amplitude = 0.5 DC amplitude, meaning regions of inhibited reactivity occur at integer multiples of the Larmor frequency of the DC field;
FIG. 15 illustrates spatial components of the magnetic field due to a 6 inch diameter single-turn coil, whereby precise knowledge of the spatiotemporal components of the field due to the device allows this field to be employed in superposition with the ambient magnetic field to produce a resultant bio-effective field;
FIG. 16 illustrates predicted LP resonances for an AC/DC parallel magnetic field combination DC=37 μT, AC frequency=24 Hz, taken for 74 msec Larmor period of the 37 μT DC field, as per the method
shown in FIGS. 7A, 7B and 7C having applied field frequency plotted on the x-axis 711 and angular displacement plotted on the y-axis 712;
FIG. 17 illustrates predicted LPM resonances for combined Parallel+Perpendicular AC/DC fields, in this case LPM fits the Ca2+ flux data for parallel AC/DC at 20 μT, and 15 μT perpendicular DC
reported by Fitzsimmons in 1994 and LPM also predicts inhibition of Ca2+ flux at lower frequencies, not subharmonics of ICR resonance;
FIG. 18 illustrates predicted LPM resonances for neurite outgrowth from PC-12 cells for 366 mG (36.6 μT) parallel vs. perpendicular field AC/DC combinations with variation in AC amplitude at 45 Hz,
as reported by Blackman and LPM accurately fits the data for both parallel and perpendicular orientations and predicts resonance behavior for each orientation at higher AC amplitudes;
FIG. 19 is a block diagram of a method according to an embodiment of the present invention; and
FIG. 20 is a block diagram of an apparatus according to an embodiment of the present invention.
DETAILED DESCRIPTION OF THE INVENTION [0058]
LP is a means by which a magnetic field introduces coherence into the motion of bound ions. Larmor's theorem states that for a magnetic moment, the introduction of a magnetic field results in the
original motion transferred into a frame of reference rotating at Larmor frequency:
$$\omega_L = \Gamma B \tag{1}$$

where \(\Gamma\) is the gyromagnetic ratio of the precessing system; \(\Gamma = q/2m\), where q is charge and m is mass for a target such as a single calcium ion.
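By way of illustration only (this sketch is not part of the patent's disclosure), equation (1) can be evaluated numerically. The Ca2+ charge and mass values below are standard physical constants; the 50 μT field matches the example discussed later in the text.

```python
import math

Q_E = 1.602176634e-19      # elementary charge, C
AMU = 1.66053906660e-27    # atomic mass unit, kg

def larmor_frequency_hz(charge_c: float, mass_kg: float, b_tesla: float) -> float:
    """Larmor precession frequency f_L = Gamma*B/(2*pi), with Gamma = q/2m."""
    gamma = charge_c / (2.0 * mass_kg)
    return gamma * b_tesla / (2.0 * math.pi)

# Ca2+ (charge 2e, mass ~40 u) in a 50 uT field: ~19 Hz, the same order as
# the ~18 Hz value quoted in the text (the small difference reflects rounding).
print(larmor_frequency_hz(2 * Q_E, 40.078 * AMU, 50e-6))
```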
Bound charges in a biological target will generally undergo thermally induced oscillations thus giving rise to a magnetic moment for a system. Such a system can be expected to undergo LP. This motion
will persist in superposition with thermal forces until thermal forces eventually eject the oscillator from a binding site. For a magnetic field oriented along the z-axis, precessional motion will be
confined to the x-y plane. In addition to coherent precessional motion of a bound oscillator, contributions to the motion due to thermal noise itself are also expected to undergo precession.
Larmor Precession is an effect of magnetic fields on magnetic moments that, while the underlying mechanism is quantum mechanical and involves a change in relative phases of the spin-up and spin-down
components of a magnetic moment, can be described through a classical model. An illustrative classical model utilizes a Lorentz-Langevin equation for an ion bound in a potential well subject to a
magnetic field oriented along the z-axis, in the presence of thermal noise:

$$\frac{d^2\vec{r}}{dt^2} = -\beta\frac{d\vec{r}}{dt} + \gamma\,\frac{d\vec{r}}{dt}\times B_o\hat{k} - \omega^2\vec{r} + \vec{n} \tag{2}$$

where \(\vec{r}\) is the position vector of the particle; \(\beta\) is the viscous damping coefficient per unit mass due to molecular collisions in the thermal bath; \(\gamma\) is the ion charge-to-mass ratio; \(B_o\) is the magnitude of the magnetic field vector; \(\hat{k}\) is the unit vector along the z-axis; \(\omega\) is the angular frequency of the oscillator; and \(\vec{n}\) is the random thermal noise force per unit mass. Although the potential energy function shown here is that of the harmonic oscillator, the precession is not limited to the case of a linear isotropic potential but is expected to occur for any central restorative potential.
Solution of the Lorentz equation in closed form is possible for special cases or through numerical integration. The addition of the thermal term n to the Lorentz equation produces a solution that can
be assessed via statistical mechanical methods to produce the ensemble average <r(t)> for the ion position as a function of time. From the ensemble average, the effects on bound lifetime of thermal
noise, exogenous magnetic fields and changes in physical parameters can be evaluated.
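By way of illustration only, the numerical-integration route mentioned above can be sketched as follows. The integration scheme and all parameter values are assumptions chosen for readability (scaled, non-physical units), not the patent's own solver.

```python
import math, random

def simulate(beta=0.1, gamma_b=0.05, omega=1.0, noise=0.01, dt=1e-3, steps=20000):
    """Euler integration of equation (2) in the x-y plane, B along z."""
    x, y, vx, vy = 1.0, 0.0, 0.0, 0.0
    path = []
    s = noise / math.sqrt(dt)  # discrete white-noise force scaling
    for _ in range(steps):
        nx, ny = random.gauss(0, s), random.gauss(0, s)
        # v x (B k-hat) = (vy*B, -vx*B, 0): the magnetic term couples x and y.
        ax = -beta * vx + gamma_b * vy - omega**2 * x + nx
        ay = -beta * vy - gamma_b * vx - omega**2 * y + ny
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
        path.append((x, y))
    return path  # the (x, y) orbit slowly precesses at ~gamma_b/2, the Larmor rate
```

Plotting the returned path shows the damped oscillation with its orientation slowly rotating, the coherence the text attributes to the magnetic field.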
The solution of the Lorentz-Langevin equation is
$$u(t) = c_1 e^{\lambda_1 t} + c_2 e^{\lambda_2 t} + \psi_n(t) \tag{3}$$

where \(u = x + iy\); \(c_1 = -c_2 = -\dfrac{u_0'}{\lambda_2 - \lambda_1}\); \(u(0) = 0\), \(u'(0) = -u_0'\); \(u_0'^2 = \dfrac{2kT}{m}\); and

$$\lambda_{1,2} = \frac{-\alpha \pm \sqrt{\alpha^2 - 4\omega^2}}{2}, \qquad \alpha = \beta + i B_o \gamma. \tag{4}$$
The ionic trajectory thus comprises a coherent part, \(c(t) = c_1 e^{\lambda_1 t} + c_2 e^{\lambda_2 t}\), and a component due to thermal noise, \(\psi_n(t)\), i.e. the particular solution to the non-homogeneous equation. The coherent component of the solution has been shown to be, for physically realistic values of parameters, a damped oscillation in the infrared range, undergoing precessional motion at the Larmor frequency about the axis of the magnetic field:

$$c(t) \approx \frac{u_0'}{\omega}\, e^{-\frac{\beta}{2}t}\, e^{-i\omega_L t}\left[\frac{e^{-i\omega t} - e^{i\omega t}}{2i}\right] = -\frac{u_0'}{\omega}\, e^{-\frac{\beta}{2}t}\, e^{-i\omega_L t}\sin(\omega t). \tag{5}$$
The particular solution to the non-homogeneous equation including thermal noise is given by

$$\psi_n(t) = \frac{1}{\lambda_2 - \lambda_1}\left[e^{\lambda_2 t}\int_0^t e^{-\lambda_2\tau}\, n(\tau)\,d\tau \;-\; e^{\lambda_1 t}\int_0^t e^{-\lambda_1\tau}\, n(\tau)\,d\tau\right]. \tag{6}$$
The rate of growth of the thermal term \(\psi_n(t)\) has been assessed previously via the ensemble average of the oscillator amplitude, where it was shown that the accumulation term grows with time, eventually overwhelming the attenuation of the oscillator trajectory due to viscous damping, \(e^{-\frac{\beta}{2}t}\). Thermal accumulation causes the oscillating ion to be ejected from the binding site after a bound lifetime dependent upon the thermal noise spectral density, \(\sigma_n^2 = \dfrac{2\beta kT}{m}\). It was also shown that binding lifetimes on the order of one second result for physically relevant values of the oscillator frequency (\(\omega\) in the infrared range), viscous damping (\(\beta \approx 1\text{-}10\)), and magnetic field strength \(B_o \ll 1\) T.
The time-dependence of \(\psi_n(t)\) may also be evaluated by expanding equation (6):

$$\psi_n(t) = \frac{1}{\lambda_2 - \lambda_1}\, e^{-\frac{\alpha}{2}t}\left[e^{-\frac{\sqrt{\alpha^2 - 4\omega^2}}{2}t}\int_0^t e^{-\lambda_2\tau}\, n(\tau)\,d\tau \;-\; e^{+\frac{\sqrt{\alpha^2 - 4\omega^2}}{2}t}\int_0^t e^{-\lambda_1\tau}\, n(\tau)\,d\tau\right] \tag{7}$$

or

$$\psi_n(t) = \frac{1}{\lambda_2 - \lambda_1}\, e^{-\frac{\beta}{2}t}\, e^{-i\omega_L t}\,[Y(t)] \tag{8}$$

where

$$[Y(t)] = e^{-\frac{\sqrt{\alpha^2 - 4\omega^2}}{2}t}\int_0^t e^{-\lambda_2\tau}\, n(\tau)\,d\tau \;-\; e^{+\frac{\sqrt{\alpha^2 - 4\omega^2}}{2}t}\int_0^t e^{-\lambda_1\tau}\, n(\tau)\,d\tau \tag{9}$$

is the accumulation of the thermal component with respect to time. Thus, equation (7) shows that the thermal component of the oscillation itself also undergoes Larmor precession.
The specific rate of growth of the precessing term may be found by assessing the physically relevant case \(\alpha^2 \ll 4\omega^2\), for which \(\pm\frac{\sqrt{\alpha^2 - 4\omega^2}}{2}t \to \pm i\omega t\), so that

$$\psi_n(t) = \frac{1}{\lambda_2 - \lambda_1}\, e^{-\frac{\beta}{2}t}\, e^{-i\frac{B_o\gamma}{2}t}\left[e^{-i\omega t}\int_0^t e^{+\frac{\alpha}{2}\tau}\, e^{i\omega\tau}\, n(\tau)\,d\tau \;-\; e^{i\omega t}\int_0^t e^{+\frac{\alpha}{2}\tau}\, e^{-i\omega\tau}\, n(\tau)\,d\tau\right] \tag{10}$$

$$= -\frac{2i}{\lambda_2 - \lambda_1}\, e^{-\frac{\beta}{2}t}\, e^{-i\frac{B_o\gamma}{2}t}\left[\int_0^t e^{+\frac{\alpha}{2}\tau}\,\sin(\omega(t - \tau))\, n(\tau)\,d\tau\right]. \tag{11}$$

Thus, the thermal component of the ion trajectory itself comprises a thermal oscillator driven by thermal noise \(n(\tau)\), subject to viscous damping and undergoing precessional motion at the Larmor frequency about the axis defined by the magnetic field. Note that the exponentials in the integrand of equation (10) receive a + sign, due to the terms in \(-\lambda_{1,2}\) in equation (10), in accord with the physical expectation that thermal noise acts to increase the oscillator amplitude.
The accumulation of \(\psi_n(t)\) may be evaluated in a straightforward manner via the ensemble average of the oscillator position. Assessing the thermal term, \(u(t) = x + iy = \psi_n(t)\), it is convenient to retain the exponential terms in equation (10):

$$\langle x^2 + y^2\rangle = \langle|\psi_n(t)|^2\rangle = \frac{e^{-\beta t}}{|\lambda_2 - \lambda_1|^2}\left[\int_0^t e^{\left(\frac{\alpha}{2} + \frac{\alpha^*}{2}\right)\tau}\left(e^{i\omega(t-\tau)} - e^{-i\omega(t-\tau)}\right)\left(e^{-i\omega(t-\tau)} - e^{i\omega(t-\tau)}\right)\langle n(\tau)\, n^*(\tau)\rangle\,d\tau\right]. \tag{12}$$

Employing the fact that the viscosity and the thermal noise spectral density \(\sigma_n^2\) are related by \(\sigma_n^2 = \dfrac{2\beta kT}{m}\), where k is the Boltzmann constant, T the absolute temperature, and m the mass of the particle, the ensemble average is, since \(\alpha + \alpha^* = 2\beta\),

$$\langle|\psi_n(t)|^2\rangle = \frac{\sigma_n^2}{|\lambda_2 - \lambda_1|^2}\, e^{-\beta t}\left[\int_0^t e^{\beta\tau}\left(2 - e^{2i\omega(t-\tau)} - e^{-2i\omega(t-\tau)}\right)d\tau\right] \tag{13}$$

$$= \frac{\sigma_n^2}{|\lambda_2 - \lambda_1|^2}\, e^{-\beta t}\left(e^{\beta t} - 1\right)\left(\frac{2}{\beta} - \frac{1}{\beta - 2i\omega} - \frac{1}{\beta + 2i\omega}\right) \tag{14}$$

or,

$$\langle|\psi_n(t)|^2\rangle = \frac{2kT}{m\,|\lambda_2 - \lambda_1|^2}\left(1 - \frac{\beta^2}{\beta^2 + 4\omega^2}\right)\left(1 - e^{-\beta t}\right). \tag{15}$$
Thus, the thermal term \(\psi_n(t)\) will increase in amplitude with time. Note that the time-dependence of the magnetic field contribution to the thermal accumulation disappears when \(\alpha^2 \ll 4\omega^2\); the more general case was described previously. FIG. 1 illustrates a schematic of the effect of a magnetic field 101 on the motion of a charged ion 102, such as calcium, bound inside a signaling molecule such as calmodulin or troponin C. It can be seen that the magnetic field introduces coherence into the ion trajectory within the binding site.
The thermal component of an ion trajectory comprises a harmonic oscillator driven by thermal noise \(n(\tau)\), subject to viscous damping and undergoing precessional motion at the Larmor frequency about an axis defined by the magnetic field. FIG. 2 depicts the angular position 203 of a bound ion versus time 202 in the presence of thermal noise and an applied magnetic field 201. This significant result applies to all such bound charged oscillators, indicating that the LP mechanism can be responsible for MF effects in a wide variety of target systems.
For physically relevant values of the oscillator frequency (\(\omega\) in the infrared range), viscous damping (\(\beta \approx 1\text{-}10\)), and magnetic field strength \(B_o \ll 1\) Tesla, the results have been assessed via numerical simulation. It has been shown that the accumulation term grows with time, eventually overwhelming the attenuation of the oscillator trajectory due to viscous damping, \(e^{-\frac{\beta}{2}t}\). Thermal accumulation thus causes the oscillating ion to be ejected from the binding site after a bound lifetime dependent upon the thermal noise spectral density.
Although thermal forces will in general be distributed throughout the spherical solid angle available in the binding site, it is important to bear in mind that the ion or ligand is not executing random motions in an isotropic region. Rather, it is strongly bound in an oscillator potential, with oscillator frequency in the infrared. Thus, the motion is that of a thermally driven oscillator rather than a random motion, as shown above through examination of the accumulation term \(\psi_n(t)\). Rather than simply rapidly ejecting an ion or ligand from a binding site, thermal noise forces will themselves contribute to the amplitude of the precessional component of the motion. Thus, both the coherent and the thermal parts of the total motion \(u(t) = c(t) + \psi_n(t)\) will undergo LP. The implications of this are wide-ranging: an extensive variety of charged oscillators in the biological target system can be expected to undergo LP, resulting in a wide variety of target systems exhibiting similar responses to applied magnetic fields.
Larmor Precession conditions are described below according to an embodiment of the present invention.
For precessional motion of a bound oscillator to influence a biochemical process, it is clear that the motion must be able to move through a significant portion of one precessional orbit. Thus, the time constant of a target process must be on the order of a period of the LP in order for a bio-effect to occur. Weaker magnetic fields can only be expected to target relatively slower biological processes, and a lower limit for magnetic field effects can be established. For example, the Larmor frequency for Ca at 50 μT is approximately 18.19 Hz, so that a bound lifetime of about 55 msec is required for one orbit to occur. Ca binding to calmodulin ("CaM") has a maximal lifetime on the order of ≈1 sec, for the slow pair of binding sites on the CaM molecule, resulting in a lower limit of about 1-3 μT for detectability by CaCaM.
Precessional motion of the oscillator will result in a coherent modulation of the rate at which the oscillator moves through the available range of motion. Although the mechanism by which this coherent motion can influence kinetics will certainly vary from one target system to another, the basic properties of the Larmor model will be similar for a wide variety of systems. The rate at
which the oscillator passes through various orientations, including preferred orientations that may influence kinetics, will be modulated coherently by the precessional motion at the Larmor
frequency. This introduction of coherence into a process that, in the absence of magnetic fields, is governed by thermal perturbations, allows the magnetic field to impart information to the system
without requiring substantial energetic input on the part of the field. It has been shown that the angular momentum of a calcium ion undergoing LP in a 50 μT magnetic field is on the order of
Planck's constant.
Larmor precession results in the oscillator sweeping out an angular area within the binding site, at a rate determined by the gyromagnetic ratio of the target and the magnetic field. For example, for
CaCaM binding, LP will result in a modulation of the rate at which the oscillator makes contact with various portions of the binding site. Stronger magnetic fields will increase this rate, thus
increasing the probability or frequency at which the oscillator contacts orientations that favor dissociation. Increasing MF strength thus results in a reduction of the bound lifetime of Ca, resulting in a greater availability of free Ca, consistent with the increased reaction rates observed in a cell-free preparation.
For a system such as CaCaM, bio-effects are expected to increase with field strength, reaching a saturation level, beyond which further increases in DC field strength result in only small changes in
binding time, relative to the initial kinetics of the system. The percentage change in reactivity, or binding lifetime, as compared to the zero-field lifetime is given by:
$$\Delta\% = 100\,\frac{T_{B=0} - T_{B=B_o}}{T_{B=0}}. \tag{16}$$
Thus, saturation occurs as field strength grows: further increases in amplitude result in ever smaller relative changes in kinetics. It is important to note that, since the Larmor frequency increases linearly with increasing field strength, for a given target system (i.e., specific binding lifetime), effects will be limited to a narrow range of MFs. Refer to FIG. 3, which depicts a graphical representation of a bio-effect for CaCaM binding with increasing field strength derived from equation (16). For example, for a binding time of 0.1 sec, field strengths 301 below about 10 μT are expected to be ineffective, whilst saturation will occur as the field strength approaches several mT.
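By way of illustration only, equation (16) is straightforward to evaluate numerically; the lifetimes in this sketch are made-up values, not data from the text.

```python
# Percent change in binding lifetime relative to the zero-field lifetime,
# per equation (16). Example lifetimes are illustrative.
def reactivity_change_percent(t_zero_field: float, t_with_field: float) -> float:
    return 100.0 * (t_zero_field - t_with_field) / t_zero_field

# e.g. a field that shortens a 0.10 s bound lifetime to 0.08 s:
print(reactivity_change_percent(0.10, 0.08))  # -> 20.0 (% change)
```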
LP can affect targets other than bound charged ions. For example, the water molecule carries partial charges, resulting in water's unique chemical characteristics. The resulting strong electric
polarization causes water molecules in cells and tissues to form organized, polarized hydration layers, such as the inner and outer Helmholtz layers observed around charge carriers and charged
membranes. These bound waters themselves are likely to be subject to LP as applied magnetic fields introduce coherence into the thermal fluctuations of hydration layers via LP. The resultant change
in hydration orientation angles alters the potential energy of hydration, and thus the local dielectric constant ε(t) at the binding site; the kinetics of binding processes moving through the Helmholtz planes thus depend on LP.
Since Larmor frequencies for the oxygen and hydrogen arms of water also lie near the Ca2+ frequency, observations of bio-effects near the Larmor frequency may also be attributable to precession of the water
molecules themselves or complexes of hydrated ions, for which the gyromagnetic ratio must be estimated before an accurate determination of the Larmor frequency can be made. FIG. 4 illustrates
precessional frequencies 401 for Ca and the arms of water molecules, from equation (1).
AC and AC DC combined resonance are described below according to an embodiment of the present invention.
The current invention aims to take advantage of conditions, such as resonance and particular changes with field strength and frequency, that are intrinsic to LP. The relative parallel or perpendicular orientation of the AC and DC fields is shown to be a critical determinant of the strength and direction of bio-effects. Bio-effects are dependent upon the amplitudes, frequencies, and spatial directions of all spatiotemporal components of the MF. The precise reactivity of the biological target can be computed, as a function of target physicochemical
characteristics and magnetic field characteristics in order to take advantage of specific dose-responses, resonance phenomena such as maxima and minima of reactivity, and treatment regimes that are
programmed to take advantage of the specifics of LP.
Resonance conditions are described below according to an embodiment of the present invention.
The LP mechanism yields resonance behavior for a wide variety of combinations of AC and DC MFs, including the geomagnetic field. These resonances are conditions for which maxima, minima or other
bio-responses, specifically characteristic of LP, are expected for specific spatiotemporal MF conditions. These specialized conditions can be employed to develop innovative means of maximizing,
minimizing, enhancing, inhibiting, or otherwise modulating the bio-responses to applied and ambient MFs. Although the specific examples shown below employ sinusoidally varying AC MFs, LP conditions
may be computed to determine specific resonance conditions for any arbitrary combination of DC and non-sinusoidally varying MF waveforms.
LP resonances will be considered to be the computation of resonance conditions through any means that employs the dynamics of LP in order to compute resonance conditions. For illustrative purposes,
several methods of computing resonance conditions are illustrated below. However, due to the complexity of the possible orbits of the precessing oscillator and the complexity of bio-molecules
generally, it is not possible to treat in detail all possible methods of computing resonances.
AC magnetic field bio-effects are described below according to an embodiment of the present invention.
When an AC magnetic field is added to a DC field a break in the spatiotemporal symmetry of Larmor precession results due to periodic reversals in precession direction with changing AC phase and
amplitude, and the interaction with DC magnetic fields in perpendicular or parallel orientations. This symmetry breaking results in modulation, via the applied field geometry, of the oscillator
orientation within the binding site and, thus, the probability of contact with a preferred orientation. For example, when the AC phase causes the field strength to be near zero, or causes a
destructive interference with DC fields, the oscillator will `dwell` at a specific region of the binding site, covering very little angular distance until the field rises significantly. Resonance
conditions are thus expected for the case of a single AC sinusoidal field alone.
For example, resonance conditions may be assessed through the computation of the mean distance the oscillator spends from a preferred orientation, taken over a time period less than or equal to the binding lifetime:

$$R(t) = c_o\left\langle\left[(x(t) - o_x)^2 + (y(t) - o_y)^2 + (z(t) - o_z)^2\right]^{1/2}\right\rangle \tag{17}$$

where \(c_o\) is a constant; x(t), y(t) and z(t) are the spatial components of the precessing oscillator; and \(o_x\), \(o_y\), \(o_z\) are the spatial components of a preferred orientation. Clearly, the actual preferred orientation(s) determines the specific reactivity. However, as mentioned above, given a specific biomolecular environment, R(t) will take a specific form.
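By way of illustration only, equation (17) can be sketched numerically for the simplest case of a circular precession orbit in the x-y plane. The orbit shape, the preferred orientation, and all numbers are illustrative assumptions.

```python
import math

def reactivity(w_larmor, lifetime, preferred, radius=1.0, n=2000, c_o=1.0):
    """Mean distance of a circularly precessing oscillator from a preferred
    orientation (o_x, o_y, o_z), averaged over the binding lifetime."""
    ox, oy, oz = preferred
    total = 0.0
    for i in range(n):
        t = lifetime * i / n
        x = radius * math.cos(w_larmor * t)  # precession confined to the
        y = radius * math.sin(w_larmor * t)  # x-y plane for B along z
        z = 0.0
        total += math.sqrt((x - ox)**2 + (y - oy)**2 + (z - oz)**2)
    return c_o * total / n

# e.g. the ~18.19 Hz Ca Larmor frequency quoted above, 0.1 s lifetime:
print(reactivity(w_larmor=2 * math.pi * 18.19, lifetime=0.1,
                 preferred=(1.0, 0.0, 0.0)))
```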
To illustrate the basic characteristics of LPM, the examples provided here employ an arbitrary location for the preferred orientation. R(t) can be computed via the parametric equations for an oscillator precessing at Larmor angular frequency \(\omega_L\) in the plane perpendicular to the resultant magnetic field:

$$\omega_L = \Gamma B_r = \Gamma\left[B_{perp}^2 + \left(B_{para} + B_{ac}\cos(\omega_{ac}\, t)\right)^2\right]^{1/2} \tag{18}$$

where \(B_r\) is the resultant field from the perpendicular, \(B_{perp}\), and parallel, \(B_{para}\), components of the DC field, and the AC field component, \(B_{ac}\), having frequency \(\omega_{ac}\). As shown in FIG. 6, which has AC amplitude plotted on the x-axis 601, AC frequency plotted on the y-axis 602 and reactivity plotted on the z-axis 603, due to the specific dynamics of LP the Larmor frequency due to the AC field is time-varying, resulting in a complex modulation of the mean distance from a specific preferred orientation.
AC DC parallel field combination is described below according to an embodiment of the present invention.
For the case of an alternating MF aligned parallel to a static (DC) field, the angular area swept out per unit time, A(t), increases linearly with time bound. For an AC/DC parallel combination, \(B_r = B_o + B_1\cos(\omega_{AC}\, t)\), so that in general the Larmor frequency \(\omega_L\) will be a time-varying function of both AC and DC amplitudes. FIG. 5 shows the Larmor frequency 501 for Ca, from equation (1), for parallel superposition of 50 μT AC and DC magnetic fields. \(T/T_{DC}\) is the ratio of time 502 elapsed in units of one period of the DC-field Larmor frequency, and \(\omega_{AC}/\omega_{DC}\) is the AC frequency 503 normalized to the DC Larmor frequency. The total angular distance traversed by the oscillator is given by the integral of the absolute value of the Larmor frequency:

$$A(t) = C_o\int_0^t |\omega_L|\,dt = \frac{C_o\, q}{2m}\int_0^t \left|\vec{B_o} + \vec{B_1}\cos(\omega_{AC}\, t)\right|\,dt \tag{19}$$

where \(\omega_L\) is the Larmor frequency; \(\vec{B_o}\) is the DC MF vector; \(\vec{B_1}\cos(\omega_{AC}\, t)\) is the AC MF, with frequency \(\omega_{AC}\); m is the mass of the bound oscillator; and \(C_o\) is a proportionality constant. A(t) may be evaluated for any ion or ligand and any combination of AC and DC MFs with any relative orientation, and is in general a function of target gyromagnetic ratio and DC/AC MF geometry.
The total angular area A(t) swept by the oscillator over time is determined by the Larmor frequency:

$$A(t) = C_o\int_{t_0}^{t_1}\omega_L\,dt = \frac{C_o\, q}{2m}\int_{t_0}^{t_1}\left[B_o + B_1\cos(\omega_{AC}\, t)\right]dt = \frac{C_o\, q}{2m}\left[B_o(t_1 - t_0) + \frac{B_1}{\omega_{AC}}\left(\sin(\omega_{AC}\, t_1) - \sin(\omega_{AC}\, t_0)\right)\right]. \tag{20}$$
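By way of illustration only, the closed form of equation (20) can be sketched as below. It assumes \(B_{ac} \le B_{dc}\), so the absolute value in equation (19) can be dropped; \(C_o = 1\), the approximate Ca2+ gyromagnetic ratio, and the field values are illustrative.

```python
import math

def swept_angle(gyro, b_dc, b_ac, w_ac, t0, t1, c_o=1.0):
    """A = C_o*Gamma*[B0*(t1-t0) + (B1/w_ac)*(sin(w_ac*t1) - sin(w_ac*t0))],
    per equation (20), with gyro = Gamma = q/2m."""
    return c_o * gyro * (b_dc * (t1 - t0)
                         + (b_ac / w_ac) * (math.sin(w_ac * t1) - math.sin(w_ac * t0)))

GYRO_CA = 2.41e6  # Gamma = q/2m for Ca2+, rad s^-1 T^-1 (approximate)
print(swept_angle(GYRO_CA, 50e-6, 50e-6, 2 * math.pi * 18.19, 0.0, 0.1))
```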
Thus, reactivity is a function of the bounds of integration in equations (19) and (20) and the time-varying Larmor frequency. Because the bounds of integration represent the binding lifetime, the
position of resonances will in general be dependent upon the kinetics of the target system, and thus, not be dependent solely upon the Larmor frequency of the binding species. FIGS. 7A and 7B show
the dependence of total angular displacement upon kinetics for Ca in an AC=DC=50 μT parallel field combination. FIG. 7B, right hand plot 701, shows that for systems with relatively short binding lifetimes, for example up to 3 times the DC Larmor period, broad resonance peaks occur and may be predicted via equation (20). FIG. 7A, left hand plot 702, shows the dependence of total angular displacement upon kinetics as the binding lifetime approaches 1 second or more (18 times the Larmor period), whereby the resonances substantially disappear.
The relative amplitudes of the AC and DC fields are also critical in determining the height and position of resonance conditions. Because the Larmor frequency is dependent on the resultant AC+DC
amplitude, for AC>DC the oscillator will undergo periodic changes in precessional direction. The result shows addition of an AC MF to the precessing oscillator can either accelerate or inhibit its
time to reach a reactive orientation. For parallel AC/DC field combinations, the results are remarkably similar to reported experimental verifications of IPR, and suggest LP as a viable alternative
mechanism for weak DC and AC MF bio-effects. The resulting resonance conditions may be reflected in the conditions employed by Koch. FIG. 16 shows the prediction of the LP model for DC=37 μT, AC=1.7
μT, for applied AC fields ranging from approximately 18 to 35 Hz, or about 1.4-2.6 times the Larmor frequency for Ca in the 37 μT DC field. These calculations were made for a target system with a
lifetime of about 4 times the period of the Larmor frequency for Ca in the DC field. The results compare favorably to FIG. 8 of the published Koch experiments.
Additionally for a parallel AC and DC MF combination, complex resonance conditions are expected for specific AC amplitudes and frequencies, based upon the coherent precessional motion of the
oscillator. For example, as shown in FIG. 8 the mean deviation from a random distribution (i.e., mean oscillator position=π) of oscillator positions varies with AC frequency 801 and amplitude 802.
Complex deviations in FIG. 8 are produced via equation (20), for mean(A(t))−πt. The position of the peaks and troughs in FIG. 8 provides an example of one means of determining the specific AC/DC
field combinations that are expected to produce enhanced or diminished bio-effects.
FIG. 9 shows slices of FIG. 8 at specific AC frequencies, to detail the structure of resonances and the effect of shifting to slightly off-resonance frequencies. For example, FIG. 9, left hand plot 901, shows reactivity as a function of AC/DC amplitude, at the 2nd harmonic of the Larmor frequency (2×ωL) and slightly off-resonance at 2ωL−0.1ωL. Note that directly on the Larmor harmonic, a pronounced resonance occurs for AC=3×DC amplitude. Shifting the AC frequency by 10% of ωL (5% of the AC frequency at the 2nd harmonic) effectively destroys this resonance. Thus, precise knowledge of the LP conditions for a system allows for accurate generation of AC/DC combinations that will produce resonances, and clinically significant bio-effects.
More complex resonance behaviors occur at other AC frequencies, including sub-harmonic frequencies of the Larmor frequency. For example, FIG. 9, right hand plot 902, shows resonance conditions for AC frequency=ωL/3 and ωL/3−0.05ωL. For these conditions, a slight shift in AC frequency (0.05ωL) results in an increase in the number of resonance peaks, and a concomitant decrease in the resonance amplitude.
AC DC perpendicular field combination is described below according to an embodiment of the present invention.
For the case of an AC MF in perpendicular orientation with a DC field, the spatial direction of the resultant MF varies in time, breaking the cylindrical symmetry of the previous example. It has been suggested previously that the resultant excursion of the oscillator out of the cylindrical geometry will result in changes in bio-effects, due to changes in the angular area A(t) swept per unit time. Thus, both the Larmor frequency and the axis of precession are time-varying, and the accumulation of angular area given by equation (17) will be modulated by the component of the precession in the z-direction. By geometry, the results are:

$$z(t) = C_z\,\frac{B_o\cos(\omega_{Lresultant}\, t)}{\left[B_o^2 + \left(B_1\cos(\omega_{AC}\, t)\right)^2\right]^{1/2}}; \qquad A(t) = C_o\int_{t_0}^{t_1}\omega_{Lresultant}\,dt \tag{21}$$

where

$$\omega_{Lresultant} = \Gamma B_r = \Gamma\left(B_o^2 + \left(B_1\cos(\omega_{AC}\, t)\right)^2\right)^{1/2}. \tag{22}$$
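By way of illustration only, equation (22) is easy to evaluate directly; the gyromagnetic ratio and field values in this sketch are illustrative placeholders.

```python
import math

def larmor_resultant(gyro, b0, b1, w_ac, t):
    """omega_L(t) = Gamma * sqrt(B0^2 + (B1*cos(w_ac*t))^2), per equation (22)."""
    return gyro * math.sqrt(b0**2 + (b1 * math.cos(w_ac * t))**2)

# The frequency oscillates between Gamma*B0 and Gamma*sqrt(B0^2 + B1^2)
# twice per AC cycle, which is the time variation discussed in the text.
print(larmor_resultant(2.41e6, 50e-6, 100e-6, 2 * math.pi * 18.19, 0.01))
```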
Due to the excursions of the axis of precession away from the z-axis, the Larmor frequency for perpendicular fields varies with time in a manner somewhat more complicated than that for the case of the parallel AC/DC combination. The complicated dynamics that arise imply that changes in reactivity are caused by both the AC-modulated Larmor frequency as well as the time-varying changes in the precession axis.
It has been established that resonances will occur when excursions of the oscillator attain their maxima, i.e., the AC frequency is an integer multiple of the Larmor frequency. This means that
resonances may be observed for the perpendicular field configuration by scanning along increasing AC field strength, holding DC constant. For example, FIG. 10, left hand plot 1001, shows the excursion of the oscillator axis from the z-axis from equation (21), as a function of the ratio \(B_1/B_o\), shown here for AC frequency = Larmor frequency of the DC field strength. Note the appearance of regions of constant oscillator excursion, equally spaced in \(B_1/B_o\). A(t) achieves maxima and minima for these conditions, computed as the time average of the z-declination from equation (21), taken over a binding lifetime of 7 Larmor periods of the DC field shown in
FIG. 10, left hand plot. The shape of the landscape in FIG. 10, left hand plot 1001, and appearance of larger numbers of minor resonances with increasing AC strength, reflects the increasingly
complicated dynamics of the oscillator with increasing AC amplitude. FIG. 10, right hand plot 1002, shows the reactivity A(t) for these conditions. Note that both inhibitory and excitatory responses
occur, corresponding to the extrema shown in the left hand plot 1001. This therapeutically relevant example indicates that for systems governed by the ion binding process considered, a magnetic field
can be configured such that inhibition of the process may be obtained at AC amplitude ≈ twice DC amplitude, and excitation, or enhancement of the process, may be obtained at AC amplitude ≈ 4 times DC amplitude.
Arbitrary combinations of AC and DC magnetic fields are described below according to an embodiment of the present invention.
Larmor precession conditions may also be predicted based upon the mean distance of the oscillator from a preferred orientation favoring or impeding the molecular binding process. Resonance conditions
may be computed for AC alone, AC parallel to DC, AC perpendicular to DC, and combined parallel and perpendicular magnetic fields. For example, LP conditions allow for the precise calculation of the
trajectory of the precessing oscillator:
$$\vec{r}(t) = x(t)\,\hat{i} + y(t)\,\hat{j} + z(t)\,\hat{k} \tag{23}$$

where x(t), y(t) and z(t) are found through solution of the equation of motion of the oscillator, generalized from equation (2) to the 3-dimensional case:

$$\frac{d^2\vec{r}}{dt^2} = -\beta\frac{d\vec{r}}{dt} + \gamma\,\frac{d\vec{r}}{dt}\times B_o\hat{k} - \omega^2\vec{r} + \vec{n} \tag{24.1}$$

where \(\hat{i}\), \(\hat{j}\), \(\hat{k}\) are the unit vectors in the three spatial directions x, y and z.
FIG. 11 shows the reactivity, from equations (17), (18) and (23), via mean distance to a preferred orientation for an arbitrary AC/DC field combination, as a function of AC field frequency 1101 and
the ratio of AC to DC field strengths. It can be seen that specific resonance conditions exist, yielding both excitatory and inhibitory responses. By choosing specific combinations of AC and DC
parallel and perpendicular magnetic fields, specific resonance conditions can be applied to the biological target.
FIG. 12 shows the results of an experiment measuring Ca flux in bone cells. These results have not been adequately explained to date and are clinically relevant for the configuration of bio-effective
EMF signals. FIG. 12, top left 1201, shows the resonance observed experimentally, with a prominent peak in Ca flux in the range of 16 Hz for the applied AC field. FIG. 12, top right 1203, shows the
region of the LP resonance landscape relevant to this experiment, as computed via equations (17), (18) and (23). The precise location of resonances may be found, given knowledge of the detailed shape
of the relevant resonance `landscape.` FIG. 12, bottom 1202, shows a slice through the region at AC=20 μT providing the relevant frequency response, successfully predicting results of the experiment.
Thus, knowledge of LP conditions for a specific target system allows for the prediction of the relevant bio-effective waveform.
Thus, through detailed knowledge of the solution to equation (24.1), and thus the LP resonance landscape, specific MFs may be configured to yield therapeutically relevant excitation and inhibition.
For example, FIG. 13, left top 1301, shows reactivity for a combined parallel/perpendicular AC/DC configuration, determined by the mean z-excursion displacement from zero; the plot exhibits both inhibitory and
excitatory responses. For this case, the AC frequency is equal to 0.5 times the Larmor frequency of the DC parallel MF. Note that a specific pattern of responses is obtained for the field
configuration, as shown in the plot on the lower left 1304. In comparison, FIG. 13, top right 1302, shows the z-excursion of the oscillator for the same conditions, except with the frequency of the
AC MF now equal to 1.0 times the Larmor frequency of the DC parallel MF. This change in AC frequency results in a predictable change in reactivity, as shown in FIG. 13, lower right plot 1303.
Larmor Precession--bio-effective fields generated by coupling with ambient fields--is described below according to an embodiment of the present invention.
The present invention comprises a method of precisely controlling the magnetic field environment at the biological target in order to produce a magnetic field configuration designed to produce
specific bio-effects, according to empirical data or a mathematical model.
The present invention comprises a configuration of coils and/or permanent magnets, in any geometric arrangement, including triaxial, biaxial or uniplanar, that delivers a magnetic field to a target.
All spatiotemporal components of the magnetic field are controlled in order to deliver a specific magnetic field configuration to the biological target. The ambient geomagnetic and environmental
magnetic field is monitored in order to use these components for the purpose of configuring the applied bio-effective field.
In general, the magnetic field applied to a biological target by a system of coils is the superposition of: 1) the field $\vec{B}_{device}$ due directly to the currents applied to the coils; 2) the field $\vec{B}_{ambient}$ due to ambient sources such as the local geomagnetic field (on the order of 0.5 Gauss and varying geographically in magnitude and direction) and all other sources such as medical equipment, power lines, etc. The total resultant magnetic field is:
$\vec{B}_{total}(x,y,z,t) = \vec{B}_{device}(x,y,z,t) + \vec{B}_{ambient}(x,y,z,t)$
Thus, total magnetic field may be completely controlled by selecting the device magnetic field to superpose in a meaningful fashion with the ambient field. For therapeutic purposes, a mathematical or
empirical model detailing the interactions of applied magnetic fields with the biological target may be employed to develop a bio-effective therapeutic field configuration. Rather than shielding the
target from ambient magnetic fields, the present invention makes use of these fields to form the final bio-effective field
$\vec{B}_{total}(x,y,z,t) = \vec{B}_{device}(x,y,z,t) + \vec{B}_{ambient}(x,y,z,t)$, (25)
so that the magnetic field required by the device is
$\vec{B}_{device}(x,y,z,t) = \vec{B}_{total}(x,y,z,t) - \vec{B}_{ambient}(x,y,z,t)$. (26)
The present invention employs this fact, thus utilizing the ambient magnetic field as an integral component of the total specifically configured magnetic field.
The present invention makes a precise measurement of the spatiotemporal components of the ambient magnetic field via a triaxial magnetometer probe. This measurement is then compared to the desired
bio-effective magnetic field configuration to produce a magnetic field to be generated by the device via equation (26).
A combined AC/DC magnetic field configuration may be produced by several different methods: triaxial, Helmholtz, uniplanar, or arbitrary coil combinations, both with and without the addition of
permanent magnets. For example, a given magnetic field may be obtained simply by canceling the ambient field, then adding, through superposition, the desired field components:
$\vec{B}_{device}(x,y,z,t) = -\vec{B}_{ambient}(x,y,z,t) + \vec{B}_{desired}(x,y,z,t)$. (27)
This approach generally requires the use of triaxial or biaxial coils in Helmholtz configuration.
Thus, for the general case, given an empirical or mathematical model used to determine the bio-effective magnetic field configuration, the following method may be employed (a sketch in code follows the list):
1) Measurement and cancellation/modulation of undesired components of the ambient magnetic field using appropriate coils and/or permanent magnets.
2) The use of the remaining components of the ambient magnetic field to calculate components of bio-effective field dependent upon ambient values (see Larmor precession example below).
3) The use of the remaining components of the ambient magnetic field to generate components of bio-effective field.
4) The application of additional spatiotemporal field components using appropriate coils and/or permanent magnets, in order to complete the bio-effective field configuration.
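A minimal sketch of this method in Python, assuming a triaxial ambient reading is available; the numbers below are placeholder values, not real measurements, and the superposition is just the componentwise bookkeeping of equations (25) through (27).

import numpy as np

def coil_field(B_target, B_ambient):
    # Equation (26): the device supplies B_target - B_ambient componentwise.
    return np.asarray(B_target, dtype=float) - np.asarray(B_ambient, dtype=float)

B_ambient = np.array([0.12, 0.05, 0.48])  # gauss; assumed geomagnetic reading
B_target  = np.array([0.12, 0.05, 2.00])  # keep ambient x-y, impose 2 G along z

print(coil_field(B_target, B_ambient))    # -> [0.   0.   1.52]; only z is driven

Here the ambient x and y components are deliberately retained as part of the bio-effective configuration (steps 2 and 3), while the coil both cancels the ambient z-component and adds the desired 2 Gauss z-component (steps 1 and 4).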
A specific example employing LPM predicts that one bio-effective configuration comprises the combination of a constant (DC) and a sinusoidal alternating (AC) magnetic field, oriented perpendicular to each other. For this configuration, extrema of bio-effects are expected at the Larmor frequency of the target in the DC field, and its half-integer multiples, as shown in FIG. 1. Such extrema have
biological implications of enhancing or reducing the reactivity of the target ion/ligand binding pathway. FIG. 14, left hand plot 1401 shows the structure of these resonances, proportional to
z-declination of the precessing oscillator, as a function of AC frequency and the ratio of AC to DC amplitude. Note that the AC frequency is a function of the Larmor frequency of the DC field, so
that resonance conditions varying with frequency and AC amplitude are also a direct function of the perpendicular DC field strength. FIG. 14, right hand plot 1402 shows the reactivity as mean
z-declination as a function of AC frequency from equation (21), for AC amplitude=0.5 DC amplitude. Note that regions of inhibited reactivity occur at integer multiples of the Larmor frequency. In
general, mathematical and empirical models make it possible to configure combined AC/DC magnetic fields targeted towards specific processes with specific bio-responses.
An embodiment according to the present invention makes use of the ambient magnetic field to produce the bio-effective field configuration. For this case, a single planar coil may be employed and
measurements of the ambient field components used to generate the bio-effective field via equation (26). A single planar coil may be employed, rather than coils in Helmholtz configuration, because
the magnitude and geometry of the field delivered by such a coil is precisely determined by the input current into the coil and may be calibrated through spatial measurements. For example, a 6-inch
diameter applicator coil delivers a resultant magnetic field with x, y, and z components that are primarily in the direction perpendicular to the plane of the coil, as illustrated in FIG. 15. Note
that, due to the circular symmetry of the system and field cancellation across the axis of the coil, the perpendicular X-Y components yield values close to ambient level (approximately 2 milliGauss)
at the center of the coil (FIG. 15, left hand 1501 and middle plots 1502). As shown in FIG. 15, right hand plot 1503 the dominant component of the magnetic field in the central treatment region is
the 2.0 Gauss z-component, perpendicular to the plane of the coil.
Thus, for this case of LPM for perpendicular magnetic fields and a single circular coil, measurement of the ambient magnetic field allows for:
1) The cancellation of the z-component of the ambient magnetic field.
2) The use of the remaining x and y components of the ambient magnetic field in order to calculate:
a) the required frequency of the applied AC magnetic field.
b) the required AC amplitude (see FIG. 1).
3) The application of an AC magnetic field in the z-direction via a signal applied through the coil.
The field that must be produced by the coil is thus:
$\vec{B}_{coil}(x,y,z,t) = -\vec{B}_{Z\,ambient}(x,y,z,t) + \vec{B}_{AC}(x,y,z,t)$,
where $\vec{B}_{Z\,ambient}$ is the z-component of the ambient field, and $\vec{B}_{AC}$ is the desired AC field.
The resultant field produced will be composed of an AC component oriented along the z-axis, combined with the ambient (DC geomagnetic) component in the x-y plane, fulfilling Larmor precession
conditions for the perpendicular AC/DC resonance described above.
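Step 2 of the single-coil procedure can be made concrete: after the z-component is cancelled, the residual in-plane DC component sets the Larmor frequency f_L = qB/(4*pi*m) (half the cyclotron frequency) at which the AC drive is applied. In this Python sketch the Ca2+ target ion and the residual field values are illustrative assumptions.

import math

q = 2 * 1.602176634e-19      # charge of Ca2+ (C)
m = 40.078 * 1.66053907e-27  # mass of Ca (kg)

Bx, By = 18e-6, 12e-6        # assumed residual ambient x-y components (tesla)
B_perp = math.hypot(Bx, By)

f_larmor = q * B_perp / (4 * math.pi * m)
print(f"B_perp = {B_perp*1e6:.1f} uT -> Larmor frequency ~ {f_larmor:.1f} Hz")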
FIG. 19 is a flow diagram of a method for configuring a bio-effective magnetic field according to an embodiment of the present invention. A Larmor Precession
mathematical model is used to determine a bio-effective magnetic field configuration. (Step 1901) A mathematical model such as that described in equations 17 through 20 can be used for the
determination, but other mathematical models can also be used. Ambient magnetic fields at the target treatment site are measured by using detection means such as a Hall effect probe. (Step 1902) The
detected ambient magnetic field can be broken down into components. Some of those components can be partially incorporated into the bio-effective magnetic field by using appropriate coils and/or
permanent magnets to cancel and/or modulate any components of the bio-effective magnetic field as described in equations 24 through 27. Additional spatiotemporal magnetic field components are
superposed into the bio-effective magnetic field by using appropriate coils and/or permanent magnets. (Step 1903) The resultant bio-effective magnetic field will be applied to a treatment area
through one or a plurality of coils and/or permanent magnets by generating a signal that satisfies a required AC DC bio-effective magnetic field configuration according to the Larmor Precession
model. (Step 1904)
FIG. 20 depicts a block diagram of an apparatus for configuring a bio-effective magnetic field according to an embodiment of the present invention. The bio-effective magnetic field apparatus produces
signals that drive a generating device such as one or more coils. The bio-effective magnetic field apparatus can be activated by any activation means such as an on/off switch. The bio-effective
magnetic field apparatus has an AC DC power supply 2001. The AC DC power supply 2001 can be an internal power source such as a battery or an external power source such as an AC/DC electric current
outlet that is coupled to the present invention for example by a plug and wire. The AC DC power supply 2001 provides power to an AC generator 2002, a micro-controller 2003 and DC power to an AC/DC
mixer 2004. A preferred embodiment uses an 8-bit, 4 MHz micro-controller 2003, but micro-controllers with other bit-width and clock-speed combinations may be used. The micro-controller controls AC
current flow into an AC/DC mixer 2004. The AC/DC mixer 2004 combines and regulates AC and DC currents that will be used to create a bio-effective magnetic field. A voltage level conversion
sub-circuit 2005 controls a transmitted magnetic field delivered to a target treatment site. Output of the voltage level conversion is amplified by an output amplifier 2006 to be delivered as output
2007 that routes a signal to at least one coil 2008. Preferably at least one coil 2008 has a probe 2009 that measures an ambient magnetic field, including geomagnetic components, and sends
measurements back to the AC DC mixer 2004 thereby regulating and controlling the configuration of the bio-effective magnetic field. When using ambient magnetic field components to generate a
bio-effective magnetic field, a single planar coil may be employed, with measurements of the ambient field components used to generate the bio-effective magnetic field via equation (26). As an alternative to triaxial or biaxial coils in Helmholtz configuration, a single planar coil suffices because the magnitude and geometry of the field delivered by such a coil is precisely determined by the input current into the coil and may be calibrated through spatial measurements. For example, a 6-inch diameter applicator coil delivers a resultant magnetic field with x,
y, and z components that are primarily in the direction perpendicular to the plane of the coil, as illustrated in FIG. 15. Note that, due to the circular symmetry of the system and field cancellation
across the axis of the coil, the perpendicular X-Y components yield values close to ambient level (approximately 2 milliGauss) at the center of the coil (FIG. 15, left hand 1501 and middle plots
1502). As shown in FIG. 15, right hand plot 1503 the dominant component of the magnetic field in the central treatment region is the 2.0 Gauss z-component, perpendicular to the plane of the coil.
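The probe-to-mixer feedback loop of FIG. 20 can be sketched as a single control step: re-measure the ambient field, then drive the coil with whatever the target configuration lacks, per equation (26). In this Python sketch, read_probe and drive_coil are hypothetical placeholders for the magnetometer and output stages, not real device APIs.

import numpy as np

def control_step(B_target, read_probe, drive_coil):
    B_ambient = np.asarray(read_probe(), dtype=float)   # triaxial measurement
    drive_coil(np.asarray(B_target, dtype=float) - B_ambient)

log = []
control_step([0.0, 0.0, 2.0],                      # desired field (gauss)
             read_probe=lambda: [0.1, 0.05, 0.5],  # stubbed ambient reading
             drive_coil=log.append)
print(log[0])                                      # -> [-0.1  -0.05  1.5 ]

Repeating this step continuously lets the apparatus track slow drifts in the ambient field while holding the delivered configuration fixed.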
EXAMPLES
LP explains important experimental results. FIG. 16, having magnetic field combination 1601 plotted on the x-axis and calcium efflux ratio plotted on the y-axis 1602, shows the effect of extremely low frequency magnetic fields on the transport of Ca2+ in highly purified plasma membrane vesicles. Vesicles were exposed for 30 min at 32° C. and the calcium efflux was studied using radioactive 45Ca as a tracer. Static magnetic fields ranging from 27 to 37 μT and time-varying magnetic fields with frequencies between 7 and 72 Hz and amplitudes between 13 and 114 μT (peak) were used. The
relative amplitudes of the AC and DC fields are critical in determining the height and position of resonance conditions. Because the Larmor frequency is dependent on the resultant AC+DC amplitude,
for AC>DC the oscillator will undergo periodic changes in precessional direction. The resulting resonance conditions may be evaluated for the experimental conditions employed by Koch [Koch, et al.,
2003] using equations 17 and 18 with Bperp=0. As may be seen, the LPM fit to the experimental data is essentially identical to that of IPR, but via a more physically realistic mechanism.
FIG. 17 having AC frequency plotted on the x-axis 1701 and reactivity plotted on the y-axis 1702 shows the predictions of LP, via equations 17 and 18, of the results of an experiment measuring Ca
flux in bone cells. For this study, net Ca2+ flux was used as a possible early marker of the primary transduction response of human bone cells to low-amplitude EMF. The combined DC magnetic field and AC magnetic field were initially configured to couple to calcium binding according to ion cyclotron resonance theory. Although this theory has subsequently been discredited, the experimental results still hold and are
successfully explained by LP. The experimental results show a prominent peak in Ca flux in the range of 16 Hz for the applied AC field.
LP predictions for this system, with combined parallel and perpendicular AC/DC fields, for parallel AC/DC at 20 μT combined with 15 μT perpendicular DC, satisfactorily describes the data and also
predicts inhibition of Ca2+ flux at lower frequencies that are not sub-harmonics of ICR resonance. These results are clinically relevant for the configuration of bio-effective therapeutic EMF signals.
FIG. 18, having magnetic field plotted on the x-axis 1801 and neurite outgrowth plotted on the y-axis 1802, shows LP predictions of amplitude windows for AC magnetic fields. Recent tests of the influence of
parallel ac and dc magnetic fields on neurite outgrowth in PC-12 cells showed good agreement with the predictions of an ion parametric resonance model. However, experimental results from earlier work
involving both a perpendicular (160 mG) and a parallel (366 mG) dc magnetic field were not as consistent with the ion parametric resonance model predictions. Test results reported here show that the
cell response to perpendicular ac and dc magnetic fields is distinct and predictably different from that found for parallel ac and dc magnetic fields, and that the response to perpendicular fields is
dominant in an intensity dependent nonlinear manner.
FIG. 18 shows LP predictions of amplitude windows for AC magnetic fields as compared with experimental results obtained by Blackman, for the substantially different effects of AC perpendicular or
parallel to a DC magnetic field on neurite outgrowth from PC-12 cells in culture. LP predictions are made via equations 17 and 18 wherein R(t) is evaluated for 75 msec, the Larmor period of the 366
mG DC field. Experimental conditions were 366 mG (36.6 μT) parallel vs perpendicular field AC/DC combinations, along with variation of AC amplitude at 45 Hz. As may be seen, LP satisfactorily
describes the results obtained with both perpendicular and parallel field geometry. This observed change in reactivity for parallel vs. perpendicular field orientations is an inherent feature of LP,
not explained by any other models.
While the apparatus and method have been described in terms of what are presently considered to be the most practical and preferred embodiments, it is to be understood that the disclosure need not be
limited to the disclosed embodiments. It is intended to cover various modifications and similar arrangements included within the spirit and scope of the claims, the scope of which should be accorded
the broadest interpretation so as to encompass all such modifications and similar structures. The present disclosure includes any and all embodiments of the following claims.
Chrystal: Algebra Preface
George Chrystal is perhaps best known for his book on algebra. The first volume of the book, whose full title is Algebra : An Elementary Textbook for the Higher Classes of Secondary Schools and for
Colleges, was published in 1886. The authors of this History of Mathematics Archive [JJO'C and EFR] are particularly proud to be members of the Department of the University of St Andrews once led by
Chrystal and they attempt to follow his example as a fine teacher of mathematics. To give a feel for Chrystal's Algebra, and of his ideas on teaching mathematics, we reproduce the Preface to the
First Edition:-
The work on Algebra of which this volume forms the first part, is so far elementary that it begins at the beginning of the subject. It is not, however, intended for the use of absolute beginners.
The teaching of Algebra in the earlier stages ought to consist in a gradual generalisation of Arithmetic; in other words, Algebra ought, in the first instance, to be taught as Arithmetica Universalis
in the strictest sense. I suppose that the student has gone in this way the length of, say, the solution of problems by means of simple or perhaps even quadratic equations, and that he is more or
less familiar with the construction of literal formulae, such, for example, as that for the amount of a sum of money during a given term at simple interest.
Then it becomes necessary, if Algebra is to be anything more than a mere bundle of unconnected rules, to lay down generally the three fundamental laws of the subject, and to proceed deductively - in
short, to introduce the idea of Algebraic Form, which is the foundation of all the modern developments of Algebra and the secret of analytical geometry, the most beautiful of all its applications.
Such is the course followed from the beginning in this work.
As mathematical education stands at present in this country, the first part might be used in the higher classes of our secondary schools and in the lower courses of our colleges and universities. It
will be seen on looking through the pages that the only knowledge required outside of Algebra proper is familiarity with the definition of the trigonometrical functions and a knowledge of their
fundamental addition-theorem.
The first object I have set before me is to develop Algebra as a science, and thereby to increase its usefulness as an educational discipline. I have also endeavoured so to lay the foundations that
nothing shall have to be un-learned and as little as possible added when the student comes to the higher parts of the subject. The neglect of this consideration I have found to be one of the most
important of the many defects of the English text-books hitherto in vogue. Where immediate practical application comes in question, I have striven to adapt the matter to that end as far as the main
general educational purpose would allow. I have also endeavoured, so far as possible, to give complete information on every subject taken up, or, in default of that, to indicate the proper sources;
so that the book should serve the student both as a manual and as a book of reference. The introduction here and there of historical notes is intended partly to serve the purpose just mentioned, and
partly to familiarise the student with the great names of the science, and to open for him a vista beyond the boards of an elementary text-book.
As examples of the special features of this book, I may ask the attention of teachers to chapters iv. and v. With respect to the opening chapter, which the beginner will doubtless find the hardest in
the book, I should mention that it was written as a suggestion to the teacher how to connect the general laws of Algebra with the former experience of the pupil. In writing this chapter I had to
remember that I was engaged in writing, not a book on the philosophical nature of the first principles of Algebra, but the first chapter of a book on their consequences. Another peculiarity of the
work is the large amount of illustrative matter, which I thought necessary to prevent the vagueness which dims the learner's vision of pure theory; this has swollen the book to dimensions and
corresponding price that require some apology. The chapters on the theory of the complex variable and on the equivalence of systems of equations, the free use of graphical illustrations, and the
elementary discussion of problems on maxima and minima, although new features in an English text-book, stand so little in need of apology with the scientific public that I offer none.
The order of the matter, the character of the illustrations, and the method of exposition generally, are the result of some ten years' experience as a university teacher. I have adopted now this, now
that deviation from accepted English usages solely at the dictation of experience. It was only after my own ideas had been to a considerable extent thus fixed that I did what possibly I ought to have done sooner, viz., consulted foreign elementary treatises. I then found that wherever there had been free consideration of the subject the results had been much the same. I thus derived moral
support, and obtained numberless hints on matters of detail, the exact sources of which it would be difficult to indicate. I may mention, however, as specimens of the class of treatises referred to,
the elementary text-books of Baltzer in German and Collin in French. Among the treatises to which I am indebted in the matter of theory and logic, I should mention the works of De Morgan, Peacock,
Lipschitz, and Serret. Many of the exercises have been either taken from my own class examination papers or constructed expressly to illustrate some theoretical point discussed in the text. For the
rest I am heavily indebted to the examination papers of the various colleges in Cambridge. I had originally intended to indicate in all cases the sources, but soon I found recurrences which rendered
this difficult, if not impossible.
The order in which the matter is arranged will doubtless seem strange to many teachers, but a little reflection will, I think, convince them that it could easily be justified. There is, however, no
necessity that, at a first reading, the order of the chapters should be exactly adhered to. I think that, in a final reading, the order I have given should be followed, as it seems to me to be the
natural order into which the subjects fall after they have been fully comprehended in their relation to the fundamental laws of Algebra.
With respect to the very large number of Exercises, I should mention that they have been given for the convenience of the teacher, in order that he might have, year by year, in using the book, a
sufficient variety to prevent mere rote-work on the part of his pupils. I should much deprecate the idea that any one pupil is to work all the exercises at the first or at any reading. We do too much
of that kind of work in this country.
I have to acknowledge personal obligations to Professor Tait, to Dr Thomas Muir, and to my assistant, Mr R E Allardice, for criticism and suggestions regarding the theoretical part of the work; to
these gentlemen and to Messrs Mackay and A Y Fraser for proof reading, and for much assistance in the tedious work of verifying the answers to exercises. In this latter part of the work I am also
indebted to my pupil, Mr J Mackenzie, and to my old friend and former tutor, Dr David Rennet of Aberdeen.
Notwithstanding the kind assistance of my friends and the care I have taken myself, there must remain many errors both in the text and in the answers to the exercises, notification of which either to
my publishers or to myself will be gratefully received.
EDINBURGH, 26th June 1886.
JOC/EFR March 2006
Electrical Engineering Archive | May 19, 2012 | Chegg.com
Consider the signal x(t) = e^(-|t|).
(A) Plot the signal x(t) and determine if this signal is finite-energy. That is, compute the energy integral and determine if it is finite.
(B) If you determine that x(t) is absolutely integrable, or that the integral is finite, could you say that x(t) has finite energy? Explain why or why not. HINT: Plot |x(t)| and |x(t)|^2 as functions of time.
(C) From your results above, is it true that the energy of the signal y(t) = e^(-t) cos(2*pi*t) u(t) is less than half the energy of x(t)? Explain. To verify your result, use symbolic MATLAB to plot y(t) and compute its energy.
(D) To discharge a capacitor of 1 mF charged with a voltage of 1 volt, we connect it, at time t = 0, to a resistor of R ohms. When we measure the voltage we find it to be V_R(t) = e^(-t) u(t). Determine the resistance R. If the capacitor has a capacitance of 1 μF, what would R be? In general, how are R and C related?
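A symbolic cross-check of parts (A) through (D), sketched here with Python/SymPy rather than MATLAB; the variable names and the SymPy route are my own, and the energies follow directly from the definitions in the problem.

import sympy as sp

t = sp.symbols('t', real=True)

# (A)/(B): energy of x(t) = e^{-|t|}; by symmetry, twice the integral over t >= 0.
Ex = 2 * sp.integrate(sp.exp(-2*t), (t, 0, sp.oo))
print(Ex)                                   # 1 -> x(t) has finite energy

# (C): energy of y(t) = e^{-t} cos(2 pi t) u(t).
Ey = sp.integrate(sp.exp(-2*t) * sp.cos(2*sp.pi*t)**2, (t, 0, sp.oo))
print(sp.simplify(Ey), float(Ey))           # ~0.256 < Ex/2 = 0.5, consistent with (C)

# (D): V_R(t) = e^{-t} u(t) means the time constant RC = 1 s, so R = 1/C.
for C in (1e-3, 1e-6):
    print(f"C = {C:g} F -> R = {1/C:g} ohm")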
Make a Smoother or Rougher Plot
Mathematica gives you the ability to fine-tune the level of detail for your plots. To get a rough sketch of a plot, you can tell Mathematica to plot fewer points. The more points you have in a plot,
the more detailed the results will be.
First, plot a simple function:
Set the option Mesh->All to see the default sample points that Mathematica uses:
You can adjust the number of sample points plotted with PlotPoints. MaxRecursion controls how many recursive subdivisions can be made:
Setting values for PlotPoints and MaxRecursion high makes a very detailed picture at the cost of speed. AbsoluteTiming prints how long (in seconds) it takes to evaluate:
Smaller values for PlotPoints and MaxRecursion give a rougher but much faster result:
You can get a misleading result if these parameter values are chosen poorly. This is the accurate plot produced by Mathematica's default settings:
Setting PlotPoints too low yields a misleading result:
You can compensate for a low PlotPoints value by increasing MaxRecursion, but this will often take much longer, and may still be inaccurate:
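For comparison, here is a rough Python/matplotlib analogue of the sampling idea, using an assumed oscillatory test function. matplotlib has no direct counterpart to MaxRecursion's adaptive refinement, so only the fixed sampling density is varied; markers are drawn so the sample mesh stays visible, much like Mesh->All.

import numpy as np
import matplotlib.pyplot as plt

f = lambda x: np.sin(x**2)                    # assumed test function

fig, axes = plt.subplots(1, 2, figsize=(9, 3))
for ax, n in zip(axes, (15, 500)):            # few vs. many sample points
    x = np.linspace(0, 4, n)
    ax.plot(x, f(x), marker='.')
    ax.set_title(f"{n} sample points")
plt.show()

With only 15 points, the fast oscillations near x = 4 are misrepresented, which is the same failure mode described above for a too-low PlotPoints setting.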
field of order p^2
Hi again!
My problem is to show that a field of order $p^2$ exists for every prime p.
In an earlier problem I found that there were $p^2$ monic quadratics in $Z_p[x]$, but I don't know if that's useful. I also showed that $(1/2)(p^2 + p)$ of those were factorable. Again, don't know
if that's related at all but thought I'd throw it out there. Any ideas or theorems would be super helpful, thanks!
Ok, let $p(x)\in\mathbb{F}_p[x]:=\mathbb{Z}_p[x]$ be one of those monic quadratics which is also irreducible (can you prove such a thing must always exist? Note that according to what you said, you can!), so that the ideal $I:=<p(x)>$ is maximal in $\mathbb{F}_p[x]$ (why?) $\iff$ the factor ring $\mathbb{F}_{p^2}:=\mathbb{F}_p[x]/I$ is a field (why?) .
Well, now just prove that $\left|\mathbb{F}_{p^2}\right|=p^2$ ...
Hint: every element in $\mathbb{F}_{p^2}$ as defined above has the form $f(x)+I\,,\,\,f(x)\in\mathbb{F}_p[x]$, but using the Euclidean algorithm in $\mathbb{F}_p[x]$ show that any element in $\mathbb{F}_{p^2}$ has a representative $f(x)+I$ with $\deg f<2$, and that any two of these represent different elements of the field.
Well, now just count up all the different constant and linear polynomials in $\mathbb{F}_p[x]$ ...
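A quick brute-force check of these counts in Python (nothing beyond the definitions is assumed): a monic quadratic over $Z_p$ is reducible exactly when it has a root in $Z_p$, so the irreducible ones should number $p^2 - (1/2)(p^2+p) = (1/2)(p^2-p)$, which is positive for every prime $p$.

def irreducible_monic_quadratics(p):
    # x^2 + b*x + c is irreducible over Z_p iff it has no root mod p.
    return [(b, c) for b in range(p) for c in range(p)
            if all((x*x + b*x + c) % p for x in range(p))]

for p in (2, 3, 5, 7, 11):
    n = len(irreducible_monic_quadratics(p))
    print(p, n, n == (p*p - p) // 2)   # True each time: such p(x) always exists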
Replace or Remove Invalid or Missing Data
In data analysis, it is often necessary to clean a dataset before analyzing it. Data points with missing entries or that contain invalid values must be removed or replaced by some estimate.
Mathematica provides a rich environment for this type of preprocessing.
It is common to use a particular symbol, string, or out-of-range number to represent missing information in a dataset. The Missing symbol is used for this purpose in Mathematica built-in data
The following gets the gross domestic product (GDP) for each country known by CountryData. The large output is suppressed by a semicolon (;):
This shows the values for the first 10 countries ordered alphabetically:
You may want to do some analysis on this data, such as finding the maximum GDP, but values are not available for some countries.
The maximum cannot be completely determined from the raw data:
Data can be missing for many reasons, but CountryData will represent them all by expressions starting with Missing. The number of missing values in the dataset can be determined by counting the
expressions with the head Missing.
Use Count with the pattern _Missing to count expressions with head Missing:
In comparison with the number of data points in the entire set, the number of missing results is relatively small:
It would be quite reasonable to remove the missing points and perform any analysis on the remaining points. A simple way to remove these data points is to use DeleteCases with the same pattern used
for Missing expressions in Count:
From the length and maximum value of the new dataset, you can see that missing values have been removed:
Any calculations that can be carried out on a list of numbers can now be performed on the new filtered dataset.
In the previous example a very specific form for invalid data points was known: they all had head Missing. In practice, notation used for missing information varies. A person entering data in a
spreadsheet might type NA for a value that is not available or does not apply, while some data acquisition software may represent missing measurements with a specific out-of-range number. As a
result, it can be important to know what the data represents to determine which values are not valid.
Here is a dataset where the first elements represent a group number for one of two groups and the other two entries are numeric measurements for an individual in that group:
Displaying the data in a grid can make it easier to see where there are problems with the data:
In this case, there are two problems. While there are only two groups, 1 and 2, there is an entry with a group number of 4. There is also a non-numeric entry in the third column. Something will need
to be done to remove or modify the problematic points.
Valid data entries for this dataset will either have 1 or 2 as the first element and numbers for the second and third elements. A pattern for data of this type is {1|2, _?NumberQ, _?NumberQ}. The symbol | indicates an alternative: the first element must be 1 or 2. NumberQ tests whether its argument is a number, and _?NumberQ is a pattern for numbers. You can use this pattern along with MatchQ and Not to write a function that identifies bad data points.
This function identifies such points by checking if its input matches the pattern. It returns True if the input does not match the pattern:
To remove invalid data points, you can use as a pattern in DeleteCases:
It is often desirable to replace invalid values by estimates based on other data points, rather than remove them entirely. However, for the current dataset it is not clear how to replace the group 4
entry other than to randomly choose 1 or 2 to replace it. You may still want to remove the data point in such cases.
Define a function , which determines if a group entry is invalid by checking if the entry's first element is 1 or 2:
Use the pattern with DeleteCases to remove data points with invalid group values:
Since is the only entry without data in the third column and belongs to group 1, you could replace it with the mean or median of all the other third column data points from group 1.
Replacing by the group mean could be done directly by selecting the data points for group 1, computing the mean for the third elements that are numbers, and replacing by that mean value.
Use Select to pick out data points from group 1:
Use Select again to pick out the numbers from the last elements (third column) of the data and then compute their mean:
Now use a replacement rule to substitute this mean for :
Use Grid to display the filtered data:
The previous steps still require a fair amount of manual effort. You can instead write a function to perform the replacement on all invalid entries in a given column based on the value from another
column. The function can then be used to process each column in a dataset.
The following function takes a dataset, the number of the column to process, the number of the column to group by, and a function as its arguments. It replaces any non-numeric values in the column
being processed with the result of the function applied to the elements in that column, grouped by the value in the grouping column:
First define the original data, omitting the group 4 entry:
Replace the non-numeric entries in column 3 by their associated group mean:
Alternatively, replace the non-numeric entries in column 3 by their associated group medians:
You can then use Table to process the dataset column by column. Use Transpose with Grid to display the newly cleaned result in tabular form. This example uses group means as the replacement values:
Because the function is set up to operate on a single column, you can use different estimates for each column.
Here, invalid elements in the second column are replaced by the respective group mean for that column, and invalid elements in the third column are replaced by the respective group median for that column.
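For readers working outside Mathematica, essentially the same cleaning pipeline can be sketched with Python/pandas; the small frame below is an assumed stand-in for the dataset above, not the actual data.

import numpy as np
import pandas as pd

df = pd.DataFrame({
    "group": [1, 1, 2, 4, 2, 1],
    "m1":    [2.0, 2.2, 5.1, 4.9, np.nan, 2.4],
    "m2":    [7.1, np.nan, 9.0, 8.8, 9.2, 6.9],
})

df = df[df["group"].isin([1, 2])]   # drop rows with an invalid group label
df["m1"] = df["m1"].fillna(df.groupby("group")["m1"].transform("mean"))
df["m2"] = df["m2"].fillna(df.groupby("group")["m2"].transform("median"))
print(df)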
minimal prime ideals in noetherian local rings
I can't figure it out. The question is this:
Are there only a finite number of minimal prime ideals in a local noetherian ring?
This question arises in the dimension theory of such rings. An affirmative answer seems to be assumed in the proof I've seen of this proposition:
Let A be such a ring and assume that dim A = d. Let m be the maximal ideal of A. Then there is an m-primary ideal in A generated by d elements.
This is one of the steps towards the dimension theorem asserting the equivalence of three definitions of dimension for local noetherian rings.
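A sketch of the standard affirmative answer, for what it's worth; it uses only that the ring $A$ is Noetherian, not that it is local. In a Noetherian ring the zero ideal admits a primary decomposition $(0) = \mathfrak{q}_1 \cap \dots \cap \mathfrak{q}_n$. If $\mathfrak{p}$ is any prime, then $\mathfrak{q}_1 \cdots \mathfrak{q}_n \subseteq (0) \subseteq \mathfrak{p}$, so $\mathfrak{q}_i \subseteq \mathfrak{p}$ and hence $\sqrt{\mathfrak{q}_i} \subseteq \mathfrak{p}$ for some $i$. A minimal prime is therefore equal to one of the finitely many primes $\sqrt{\mathfrak{q}_i}$, so there are only finitely many of them.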
NAME
expr - Evaluate an expression
SYNOPSIS
expr arg ?arg arg ...?
DESCRIPTION
Concatenates args (adding separator spaces between them), evaluates the result as a Tcl expression, and returns the value. The operators permitted in Tcl expressions are a subset of the operators
permitted in C expressions, and they have the same meaning and precedence as the corresponding C operators. Expressions almost always yield numeric results (integer or floating-point values). For
example, the expression
expr 8.2 + 6
evaluates to 14.2. Tcl expressions differ from C expressions in the way that operands are specified. Also, Tcl expressions support non-numeric operands and string comparisons. A Tcl expression
consists of a combination of operands, operators, and parentheses. White space may be used between the operands and operators and parentheses; it is ignored by the expression's instructions. Where
possible, operands are interpreted as integer values. Integer values may be specified in decimal (the normal case), in octal (if the first character of the operand is 0), or in hexadecimal (if the
first two characters of the operand are 0x). If an operand does not have one of the integer formats given above, then it is treated as a floating-point number if that is possible. Floating-point
numbers may be specified in any of the ways accepted by an ANSI-compliant C compiler (except that the f, F, l, and L suffixes will not be permitted in most installations). For example, all of the
following are valid floating-point numbers: 2.1, 3., 6e4, 7.91e+16. If no numeric interpretation is possible (note that all literal operands that are not numeric or boolean must be quoted with either
braces or with double quotes), then an operand is left as a string (and only a limited set of operators may be applied to it).
On 32-bit systems, integer values MAX_INT (0x7FFFFFFF) and MIN_INT (-0x80000000) will be represented as 32-bit values, and integer values outside that range will be represented as 64-bit values (if
that is possible at all.)
Operands may be specified in any of the following ways:
As a numeric value, either integer or floating-point.
As a boolean value, using any form understood by string is boolean.
As a Tcl variable, using standard $ notation. The variable's value will be used as the operand.
As a string enclosed in double-quotes. The expression parser will perform backslash, variable, and command substitutions on the information between the quotes, and use the resulting value as the operand.
As a string enclosed in braces. The characters between the open brace and matching close brace will be used as the operand without any substitutions.
As a Tcl command enclosed in brackets. The command will be executed and its result will be used as the operand.
As a mathematical function whose arguments have any of the above forms for operands, such as sin($x). See below for a list of defined functions.
Where the above substitutions occur (e.g. inside quoted strings), they are performed by the expression's instructions. However, the command parser may already have performed one round of substitution
before the expression processor was called. As discussed below, it is usually best to enclose expressions in braces to prevent the command parser from performing substitutions on the contents.
For some examples of simple expressions, suppose the variable a has the value 3 and the variable b has the value 6. Then the command on the left side of each of the lines below will produce the value
on the right side of the line:
expr 3.1 + $a 6.1
expr 2 + "$a.$b" 5.6
expr 4*[llength "6 2"] 8
expr {{word one} < "word $a"} 0
The valid operators are listed below, grouped in decreasing order of precedence:
-  +  ~  !
Unary minus, unary plus, bit-wise NOT, logical NOT. None of these operators may be applied to string operands, and bit-wise NOT may be applied only to integers.
*  /  %
Multiply, divide, remainder. None of these operators may be applied to string operands, and remainder may be applied only to integers. The remainder will always have the same sign as the divisor and an absolute value smaller than the divisor.
+  -
Add and subtract. Valid for any numeric operands.
<<  >>
Left and right shift. Valid for integer operands only. A right shift always propagates the sign bit.
<  >  <=  >=
Boolean less, greater, less than or equal, and greater than or equal. Each operator produces 1 if the condition is true, 0 otherwise. These operators may be applied to strings as well as numeric operands, in which case string comparison is used.
==  !=
Boolean equal and not equal. Each operator produces a zero/one result. Valid for all operand types.
eq  ne
Boolean string equal and string not equal. Each operator produces a zero/one result. The operand types are interpreted only as strings.
&
Bit-wise AND. Valid for integer operands only.
^
Bit-wise exclusive OR. Valid for integer operands only.
|
Bit-wise OR. Valid for integer operands only.
&&
Logical AND. Produces a 1 result if both operands are non-zero, 0 otherwise. Valid for boolean and numeric (integers or floating-point) operands only.
||
Logical OR. Produces a 0 result if both operands are zero, 1 otherwise. Valid for boolean and numeric (integers or floating-point) operands only.
x ? y : z
If-then-else, as in C. If x evaluates to non-zero, then the result is the value of y. Otherwise the result is the value of z. The x operand must have a boolean or numeric value.
See the C manual for more details on the results produced by each operator. All of the binary operators group left-to-right within the same precedence level. For example, the command
expr 4*2 < 7
returns 0.
The &&, ||, and ?: operators have ``lazy evaluation'', just as in C, which means that operands are not evaluated if they are not needed to determine the outcome. For example, in the command
expr {$v ? [a] : [b]}
only one of [a] or [b] will actually be evaluated, depending on the value of $v. Note, however, that this is only true if the entire expression is enclosed in braces; otherwise the Tcl parser will
evaluate both [a] and [b] before invoking the expr command.
MATH FUNCTIONS
Tcl supports the following mathematical functions in expressions, all of which work solely with floating-point numbers unless otherwise noted:
abs cosh log sqrt
acos double log10 srand
asin exp pow tan
atan floor rand tanh
atan2 fmod round wide
ceil hypot sin
cos int sinh
abs(arg)
Returns the absolute value of arg. Arg may be either integer or floating-point, and the result is returned in the same form.
acos(arg)
Returns the arc cosine of arg, in the range [0,pi] radians. Arg should be in the range [-1,1].
asin(arg)
Returns the arc sine of arg, in the range [-pi/2,pi/2] radians. Arg should be in the range [-1,1].
atan(arg)
Returns the arc tangent of arg, in the range [-pi/2,pi/2] radians.
atan2(y, x)
Returns the arc tangent of y/x, in the range [-pi,pi] radians. x and y cannot both be 0. If x is greater than 0, this is equivalent to atan(y/x).
ceil(arg)
Returns the smallest integral floating-point value (i.e. with a zero fractional part) not less than arg.
cos(arg)
Returns the cosine of arg, measured in radians.
cosh(arg)
Returns the hyperbolic cosine of arg. If the result would cause an overflow, an error is returned.
double(arg)
If arg is a floating-point value, returns arg, otherwise converts arg to floating-point and returns the converted value.
exp(arg)
Returns the exponential of arg, defined as e**arg. If the result would cause an overflow, an error is returned.
floor(arg)
Returns the largest integral floating-point value (i.e. with a zero fractional part) not greater than arg.
fmod(x, y)
Returns the floating-point remainder of the division of x by y. If y is 0, an error is returned.
hypot(x, y)
Computes the length of the hypotenuse of a right-angled triangle sqrt(x*x+y*y).
int(arg)
If arg is an integer value of the same width as the machine word, returns arg, otherwise converts arg to an integer (of the same size as a machine word, i.e. 32-bits on 32-bit systems, and 64-bits on 64-bit systems) by truncation and returns the converted value.
log(arg)
Returns the natural logarithm of arg. Arg must be a positive value.
log10(arg)
Returns the base 10 logarithm of arg. Arg must be a positive value.
pow(x, y)
Computes the value of x raised to the power y. If x is negative, y must be an integer value.
rand()
Returns a pseudo-random floating-point value in the range (0,1). The generator algorithm is a simple linear congruential generator that is not cryptographically secure. Each result from rand completely determines all future results from subsequent calls to rand, so rand should not be used to generate a sequence of secrets, such as one-time passwords. The seed of the generator is initialized from the internal clock of the machine or may be set with the srand function.
round(arg)
If arg is an integer value, returns arg, otherwise converts arg to integer by rounding and returns the converted value.
sin(arg)
Returns the sine of arg, measured in radians.
sinh(arg)
Returns the hyperbolic sine of arg. If the result would cause an overflow, an error is returned.
sqrt(arg)
Returns the square root of arg. Arg must be non-negative.
srand(arg)
The arg, which must be an integer, is used to reset the seed for the random number generator of rand. Returns the first random number (see rand()) from that seed. Each interpreter has its own seed.
tan(arg)
Returns the tangent of arg, measured in radians.
tanh(arg)
Returns the hyperbolic tangent of arg.
wide(arg)
Converts arg to an integer value at least 64-bits wide (by sign-extension if arg is a 32-bit number) if it is not one already.
In addition to these predefined functions, applications may define additional functions using Tcl_CreateMathFunc().
All internal computations involving integers are done with the C type long, and all internal computations involving floating-point are done with the C type double. When converting a string to
floating-point, exponent overflow is detected and results in a Tcl error. For conversion to integer from string, detection of overflow depends on the behavior of some routines in the local C library,
so it should be regarded as unreliable. In any case, integer overflow and underflow are generally not detected reliably for intermediate results. Floating-point overflow and underflow are detected to
the degree supported by the hardware, which is generally pretty reliable.
Conversion among internal representations for integer, floating-point, and string operands is done automatically as needed. For arithmetic computations, integers are used until some floating-point
number is introduced, after which floating-point is used. For example,
expr 5 / 4
returns 1, while
expr 5 / 4.0
expr 5 / ( [string length "abcd"] + 0.0 )
both return 1.25. Floating-point values are always returned with a ``.'' or an e so that they will not look like integer values. For example,
expr 20.0/5.0
returns 4.0, not 4. String values may be used as operands of the comparison operators, although the expression evaluator tries to do comparisons as integer or floating-point when it can, i.e., when
all arguments to the operator allow numeric interpretations, except in the case of the eq and ne operators. If one of the operands of a comparison is a string and the other has a numeric value, the
numeric operand is converted back to a string using the C sprintf format specifier %d for integers and %g for floating-point values. For example, the commands
expr {"0x03" > "2"}
expr {"0y" > "0x12"}
both return 1. The first comparison is done using integer comparison, and the second is done using string comparison. Because of Tcl's tendency to treat values as numbers whenever possible, it isn't
generally a good idea to use operators like == when you really want string comparison and the values of the operands could be arbitrary; it's better in these cases to use the eq or ne operators, or
the string command instead. Enclose expressions in braces for the best speed and the smallest storage requirements. This allows the Tcl bytecode compiler to generate the best code.
As mentioned above, expressions are substituted twice: once by the Tcl parser and once by the expr command. For example, the commands
set a 3
set b {$a + 2}
expr $b*4
return 11, not a multiple of 4. This is because the Tcl parser will first substitute $a + 2 for the variable b, then the expr command will evaluate the expression $a + 2*4.
Most expressions do not require a second round of substitutions. Either they are enclosed in braces or, if not, their variable and command substitutions yield numbers or strings that don't themselves
require substitutions. However, because a few unbraced expressions need two rounds of substitutions, the bytecode compiler must emit additional instructions to handle this situation. The most
expensive code is required for unbraced expressions that contain command substitutions. These expressions must be implemented by generating new code each time the expression is executed.
EXAMPLES
Define a procedure that computes an "interesting" mathematical function:
proc calc {x y} {
    expr { ($x*$x - $y*$y) / exp($x*$x + $y*$y) }
}
Convert polar coordinates into cartesian coordinates:
# convert from ($radius,$angle)
set x [expr { $radius * cos($angle) }]
set y [expr { $radius * sin($angle) }]
Convert cartesian coordinates into polar coordinates:
# convert from ($x,$y)
set radius [expr { hypot($y, $x) }]
set angle [expr { atan2($y, $x) }]
Print a message describing the relationship of two string values to each other:
puts "a and b are [expr {$a eq $b ? {equal} : {different}}]"
Set a variable to whether an environment variable is both defined at all and also set to a true boolean value:
set isTrue [expr {
    [info exists ::env(SOME_ENV_VAR)] &&
    [string is true -strict $::env(SOME_ENV_VAR)]
}]
Generate a random integer in the range 0..99 inclusive:
set randNum [expr { int(100 * rand()) }]
SEE ALSO
array, for, if, string, Tcl, while
KEYWORDS
arithmetic, boolean, compare, expression, fuzzy comparison
Copyright © 1993 The Regents of the University of California.
Copyright © 1994-2000 Sun Microsystems, Inc.
Copyright © 1995-1997 Roger E. Critchlow Jr.
In a series circuit, as light bulbs are added, does the voltage at the battery increase, decrease, or remain the same? In a series circuit, as light bulbs are added, does the current at the battery increase, decrease, or remain the same?
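A minimal Ohm's-law sketch in Python (assuming identical ideal bulbs of resistance R and an ideal battery of fixed voltage V; both values below are made up): the battery voltage is fixed by the source, while the series resistances add, so the current falls as 1/n.

V, R = 12.0, 6.0     # assumed battery voltage (V) and per-bulb resistance (ohm)
for n in (1, 2, 3, 4):
    print(n, "bulb(s): I =", V / (n * R), "A")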
Items tagged with equation
Hi, all of you.
I'm an engineering college student, and this is my first time checking roots using Maple.
I tried to solve simultaneous equations and ran into some difficulties.
Actually, I'd like to get 6 roots: A[i], t[i], i = 1, 2, 3.
The initial values are A[1]+A[2]+A[3]=1 and t[1]=0.
So I've simplified to 4 equations, to get 4 roots, before asking you.
Below is my code to get A[2], A[3], t[2] and t[3], but it doesn't work well.
Generic freeness II
July 30, 2009
Posted by Akhil Mathew in algebra, commutative algebra.
Tags: algebra, commutative algebra, generic freeness, localization
Today’s goal is to partially finish the proof of the generic freeness lemma; the more general case, with finitely generated algebras, will have to wait for a later time though.
Recall that our goal was the following:
Theorem 1 Let ${A}$ be a Noetherian integral domain, ${M}$ a finitely generated ${A}$-module. Then there exists ${f \in A - \{0\}}$ with ${M_f}$ a free ${A_f}$-module.
The argument proceeds using dévissage. By the last post, we can find a filtration
$\displaystyle 0 = M_0 \subset M_1 \subset \dots \subset M_n = M,$
with ${M_{i+1}/M_i}$ isomorphic to ${A/\mathfrak{p}_i}$ for prime ideals ${\mathfrak{p}_i}$. Now, consider the nonzero prime ideals ${\mathfrak{p}'_j}$ that occur in the above filtration. Since ${A}$
is a domain, we have
$\displaystyle \prod \mathfrak{p}'_j \neq 0,$
and we may choose ${f \neq 0}$ with ${f \in \prod \mathfrak{p}_j'}$. Then when we localize at ${f}$, there is still a filtration
$\displaystyle 0 = (M_0)_f \subset (M_1)_f \subset \dots \subset (M_n)_f = M_f,$
such that ${(M_{i+1})_f/(M_i)_f = (A/\mathfrak{p}_i)_f}$. (Essentially, this uses the fact that localization is an exact functor.) By the definition of localization and the choice of ${f}$, this is zero when ${\mathfrak{p}_i \neq 0}$; this is ${A_f}$ when ${\mathfrak{p}_i=0}$.
So we have a filtration of ${M_f}$ by ${A_f}$-modules:
$\displaystyle 0 = N_0 \subset N_1 \subset \dots \subset N_m = M_f$
such that each successive quotient is a free module of rank 1.
The proof will be completed by the following lemma:
Lemma 2 Suppose ${F', F''}$ are free modules over a ring ${A}$ and ${M}$ is an ${A}$-module. If there is an exact sequence
$\displaystyle 0 \rightarrow F' \rightarrow M \rightarrow F'' \rightarrow 0,$
then ${M}$ is free.
Proof: The exact sequence splits. Indeed, we can lift a basis of ${F''}$ to elements of ${M}$ by surjectivity; then define a map ${F'' \rightarrow M}$ from that lifting, which gives a section of ${M
\rightarrow F''}$. Thus the sequence splits. $\Box$
Now, by induction, we can show that the ${N_k}$‘s are free over ${A_f}$. Hence ${N_m = M_f}$ is free.
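To see the dévissage in a toy case (an illustration added here, with the numbers chosen purely for concreteness): take ${A = \mathbb{Z}}$ and ${M = \mathbb{Z} \oplus \mathbb{Z}/6\mathbb{Z}}$. One filtration as above is

$\displaystyle 0 \subset 0 \oplus 3\mathbb{Z}/6\mathbb{Z} \subset 0 \oplus \mathbb{Z}/6\mathbb{Z} \subset M,$

with successive quotients ${A/(2)}$, ${A/(3)}$, and ${A/(0) = \mathbb{Z}}$. The nonzero primes occurring are ${(2)}$ and ${(3)}$, so ${f = 6}$ lies in their product and is a valid choice. Localizing kills both torsion quotients, and indeed ${M_f = \mathbb{Z}[1/6]}$ is free of rank ${1}$ over ${A_f = \mathbb{Z}[1/6]}$.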
[For a better proof, see the comments below. -AM, 8/17]
Often times in algebraic geometry, to prove a statement “generically” it suffices to show it for the generic point. Then one is able to “spread out” from there.
It's clear that every coherent sheaf is free at the generic point because it becomes a module over a field, i.e. a vector space. Do you see a way of showing that the module will become free at
some finite number of localizations without using your filtration lemma?
Actually yes, because if you are working with a coherent sheaf $\mathcal{F}$ over a noetherian scheme $X$, then $F_x$ being free implies $F$ is free in some neighborhood of $x$. This follows
because we can take finitely many sections $s_1, \dots, s_n$ in some neighborhood $U$ of $x$ which form a basis for $\mathcal{F}_x$. There is thus a homomorphism of sheaves $O_U^n \to \mathcal{F}
|_U$ sending each coordinate to $s_i$. We can take the kernel $\mathcal{K}$ and the cokernel $\mathcal{C}$. Then $\mathcal{K}_x = \mathcal{C}_x = 0$. But the set of points where the stalk of a
coherent sheaf is zero is open, since it is locally the complement of the supports of certain sections. Hence $\mathcal{K} = \mathcal{C} = 0$ in some neighborhood of $x$, so in that neighborhood $
\mathcal{F}$ is free.
This is a better argument than what I posted above. Then again, it doesn't (as far as I know) work for the general freeness lemma.
Wait — I thought if the stalk of *any* sheaf was zero then we could find a neighborhood where it was zero because the stalk is the direct limit. So … I think we can replace coherent with
quasi-coherent, and this subsumes the general generic freeness lemma, no? (I'm not too clear on this stuff, it's been a while, but I thought the general lemma was if X is a finite-type scheme over
S and F is a coherent sheaf on X, we can find a neighborhood U in S so F is free on the pullback of U to X.)
I’m pretty sure this is not the case, unless we know that $\mathcal{F}$ is locally finitely generated or something like that.
For instance consider the $\mathbb{Z}$-module
$M := \bigoplus_{n} \mathbb{Z}/n\mathbb{Z}$, and consider the associated quasi-coherent sheaf $\tilde{M}$ on $Spec \ \mathbb{Z}$. The stalk at the generic point $(0)$ is $M \otimes \mathbb{Q} =
0$, but the localization $M_f \neq 0$ for any nonzero $f \in \mathbb{Z}$, so $\tilde{M}$ is not zero on any open set.
Ah, of course! My bad.
What other “general” lemmas do you know of? Nakayama’s Lemma can be generalized as follows: let A->B be a local homomorphism of local rings and M be a finite B-module; then if m_A M = M, then M is trivial. But I’m not familiar with any other results in this vein. Ideas?
There’s a result to the effect that if $S \subset A$ is a multiplicative subset of some Noetherian ring, and $M$ is a f.g. $A$-module with $S^{-1} M$ free over $S^{-1}A$, then there is $f \in S$
with $M_f$ free over $A_f$ (this e.g. implies Th. 1 above). The proof is basically the same as before: find a homomorphism $F \to M$ which becomes an isomorphism when tensoring with $S^{-1} A$,
thus the kernels and cokernels vanish upon localization by $S$ (localization being exact), and then one uses the fact that the kernels and cokernels are finitely generated to find one element of
$S$ annihilating them. We could also remove the hypothesis $A$ Noetherian if $M$ is finitely presented.
There are variants of Nakayama’s lemma that allow you to find generators of $M$ from $M/\mathfrak{m}M$. I might actually do a Nakayama post sometime, so I’ll give more details then. There is the
standard fun corollary about f.g. projective modules over local rings being free. I kind of want to do that using $Tor$ though.
There is another lemma in Hartshorne that if you have local rings $A, B$ with a local homomorphism $A \to B$ which induces an isomorphism on the residue fields, makes $B$ into a f.g. $A$-module,
and the map $m_A \to m_B/m_B^2$ is surjective, then $A \to B$ is surjective. The proof is basically Nakayama repeated several times, and I will probably mention it in the (future) Nakayama post.
Sorry; when I said “general”, I meant “Grothendieck-style” in the following sense. In classical A.G. you might have a result like: if M is a finite A-module satisfying P, then Q is true
generically over A. I’m looking for results that look like: if M is a finite B-module where B is a f.g. algebra over A and M satisfies P, Q is true generically over A.
Examples are Grothendieck’s generic freeness and flatness lemmas, and the version of Nakayama I posted before. But I have no idea what sorts of lemmas can be generalized to this sort of
statement. Do you have any idea?
Well, first of all I think “generic freeness” and “generic flatness” actually refer to essentially the same argument, except that the conclusion of the former is of course stronger.
I’m not sure however how the extended version of Nakayama you posted above is “general” in the sense you gave: it doesn’t say anything holds generically, nor does it require any finite-type hypotheses.
Other results I know of that may count at least as “generic” are Chevalley’s theorem that fiber dimension is generic under suitable nice conditions (e.g. noetherian schemes, finite type) and even
lower (or is it upper?) semicontinuous, and generic smoothness: with varieties over an algebraically closed field of characteristic zero $X,Y$ with $X$ nonsingular and a map $f: X \to Y$, there
is an open subset $V \subset Y$ with $f^{-1}(V) \to V$ smooth. I may post about this kind of material in the future, since I’d like to understand it myself.
Ugh — I should really think before posting these comments. Under anonymity, there is no pressure to be careful or accurate or correct or anything :)
You’re right that I totally butchered the type of lemma that I was asking for. And that the form of Nakayama I gave didn’t at all fit into it. I guess what I meant is: there are a bunch of
statements whose hypothesis includes: “M is a finite module over A”. Sometimes the hypothesis can be weakened: “M is a finite module over B, where B is an A-algebra satisfying [blah]“.
For Grothendieck’s generic freeness lemma, blah is finitely generated and for the version of Nakayama I mentioned, blah is nothing. I hear that this is a general phenomenon (and is useful in
proving things in the relative framework of A.G.), but I know very few results of this form.
There is indeed a categorical Nakayama Lemma, as follows:
I should first introduce some notions:
C is an abelian category(with arbitrary intersection of subobjects)
Cpr is the proper subcategory generated by M of C such that for any nonzero M, any nonzero inclusion L—>M factors through a maximal proper subobject of M
Categorical Nakayama Lemma:
M is an object of Cpr, and N—>M is an inclusion. The following conditions are equivalent:
1 N—>radM is inclusion, where radM=intersection of Kernel(M–>L), L goes through simple objects of Cpr.
2 if L—>M is inclusion such that N+L=M, then L=M.
Then the usual Nakayama Lemma is the corollary of this one.
Comments: Nakayama Lemma is crucial for proving noncommutative grassmannian is formally smooth.
Sorry, I’m getting a bit confused here. Could you clarify what Cpr refers to in the case R-mod, for instance?
Volume of revolution
April 21st 2010, 01:06 PM #1
The area between the curve y=sin2x, the x-axis between 0 and 1 and the line x=1 is rotated through 360 degrees about the x-axis. Find the volume.
I found the volume by multiplying pi by the integral between the lower and upper limits of 0 and 1, respectively. My answer is 1.87. Can someone find the volume and tell me what they get? Thanks!
$V = \pi \int_0^1 \sin^2(2x) \, dx \approx 1.87$
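Spelling out the evaluation, using the identity $\sin^2(2x) = \tfrac{1}{2}(1-\cos{4x})$:

$V = \pi \int_0^1 \tfrac{1}{2}(1-\cos{4x})\,dx = \pi\left[\frac{x}{2} - \frac{\sin{4x}}{8}\right]_0^1 = \pi\left(\frac{1}{2} - \frac{\sin{4}}{8}\right) \approx 1.87$

which confirms your 1.87.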
Distance-Time Graphs
Information about movement can be presented in a number of ways. Data for the total distance moved at different times during movement can be recorded in a table. Alternatively, the same information
can be presented in a graph.
The table and graph both present information about an object's position at certain times. In this unit we will be looking at how such data can be interpreted and how additional quantities can be
determined from graphs.
Motion sensors
A number of different techniques can be used for recording the way distance changes over time. One method commonly used in the school laboratory is the ultrasonic motion sensor. (Sensors used in electronics produce a change in their resistance when some feature of their surrounding environment changes; the resistance of a thermistor, for example, changes as the surrounding temperature changes.)

This device emits a pulse of ultrasound which travels through the air at 330 ms^−1 before rebounding off a target. The distance of the reflecting object from the sensor can be found by measuring the time interval between the emitted and reflected pulses and using the following relationship:

distance = (speed of sound × time interval) ÷ 2

The distance can be measured at regular time intervals, as the trolley moves, and a graph plotted. This type of measurement is best done by a computer where the rapid timings and calculations can be done quickly and results displayed on the monitor.
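For example (illustrative numbers, not from a real measurement): if the reflected pulse arrives 0.01 s after emission, the pulse has travelled 330 × 0.01 = 3.3 m there and back, so the target is 3.3 ÷ 2 = 1.65 m from the sensor.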
Other techniques are available for detecting distance and speed. A 'radar gun' emits a pulse of electromagnetic radiation with a certain frequency. (Electromagnetic waves, such as light, are made up from oscillating electric and magnetic fields; because of this, they are self-propagating and can travel through a vacuum, and all types travel at the same speed in a vacuum, 3 × 10^8 ms^−1. The wave frequency f is the number of complete waves passing any point each second, measured in hertz, Hz.) A passing vehicle will reflect the pulse back to a detector on the gun. The frequency of the reflected radiation will be dependent upon the speed of the moving reflector.
The speed of the moving reflector can be determined by measuring differences between the emitted and reflected pulses.
Distance time graphs
The graph in Fig.2 shows a set of distance and time axes. Its y-axis shows the horizontal distance of the helicopter from its starting position. (This is indicated on the animation by the length of
the line between the take-off and landing positions).
Start the helicopter in Fig.2 flying and drag it horizontally towards the landing pad.
As the helicopter flies towards the landing pad, the distance from its starting point increases and so the line on the graph moves from the bottom left-hand side of the axes towards the top right.
Use the animation in Fig.2 to answer the following questions.
A line drawn through the points on a graph, such as that shown in Fig.3, allows us to see trends. We can also use the line in the graph to estimate values that would lie between data points given in
a results table, so a graph of Fig.3 contains more information than the table below.
Time / s Distance from start / m
Clearly in Fig.3 the helicopter is moving away from the starting point so the graph has a positive gradient. Additionally, the distance from the reference point increases in equal steps as time passes. This means that the graph is a straight line and therefore has a constant or uniform gradient.

The size of the gradient shows how quickly the distance travelled is changing with time. In this example the distance is changing at a rate of 10 m every second. Consequently we can say that this graph records the motion of an object moving with a constant speed of 10 ms^−1.
The balls shown in Fig.5 are moving with different speeds but the speed of each is constant.
Click on the figure below to interact with the model.
A straight line on a distance time graph indicates that an object is moving at a steady speed. The steeper the line, the faster the speed. Both balls in the simulation of Fig.5 are moving with a steady speed and in a fixed direction. Therefore the velocities as well as the speeds are constant. However, in order to state the velocity (an object's velocity states both the speed and direction of motion relative to a fixed reference point) we would have to indicate the direction of movement in addition to the speed.
Grab the ball in the simulation of Fig.6 and drag it to the top of its range of motion. Release the ball and observe its motion and the distance versus time graph as it falls.
Click on the figure below to interact with the model.
The distance values in the graph of Fig.6 decrease as time passes because the ground is being used as the point from which distances are measured. The ball falling in Fig.6 is moving towards the
reference point, so the measured distances are decreasing with time.
Finding the speed
Fig.7 shows the distance time graph for an object speeding up as it moves away from its reference point. The curve of the line shows that the ball is getting faster as time passes. To find the speed
at any instant during this motion we would have to draw a tangent to the curve and determine its gradient.
On distance time graphs, bigger speeds are indicated by steeper lines. We can regard the curve in Fig.7 as being made up from a large number of short straight lines each having a slightly different
The curve in Fig.7 gets steeper as the distance increases, showing that the object is moving progressively faster as it moves away from the observation point. This object is accelerating away from
its starting position.
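For a concrete instance (the numbers here are chosen purely for illustration): if the distance grows as d = 5t^2 metres, the tangent at t = 2 s has a gradient of 20 ms^−1, while the tangent at t = 3 s has a gradient of 30 ms^−1, so the speed read off from the graph increases from point to point along the curve.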
When the mass in Fig.8 falls the trolley accelerates (an object's acceleration is its rate of change of velocity) and the distance time graph for the motion is plotted.
By clicking on the right-hand green button in Fig.8, you can switch from a trolley towed by a falling mass to one towed by a falling chain. The falling mass and the falling chain produce different
distance time graphs because they represent different types of acceleration.
When the trolley is being towed by the falling mass its speed increases uniformly. The falling mass produces a constant acceleration. As the chain falls, the towing force increases and the increase
in the speed each second is not uniform. The falling chain produces an increasing acceleration.
Displacement time graphs
In most of the situations we will meet, movement is in a straight line and so displacements from the starting point are easy to calculate. The label on the y-axis of Fig.9 shows that an increase in
the y-axis value represents motion due north.
The object whose motion is represented by the graph in Fig.10 moves north at a steady speed of 3 ms^−1 for 2 seconds. After this time the displacement (an object's displacement quotes both its bearing and distance relative to a fixed reference point) decreases as the object starts to move in the opposite direction. After 6 seconds the object is just 3 m north of its starting point even though it has travelled a total distance of 9 m.

The object's velocity after 6 seconds is determined from the gradient of the line at that point. The size of the gradient is 0.75 ms^−1, so after 6 seconds the velocity of the object is 0.75 ms^−1 in a southerly direction.
Match graphs
In the set-up of Fig.11 the position of the trolley relative to the motion sensor is plotted. Move the trolley to match the motion described on the preset graph drawn and then answer the questions
below. There are a number of preset graphs that you might want to investigate.
Complete the following statements.
Moving the trolley to the left causes the measured distance to ___ while motion to the right causes the distance to ___. Slow motion produces a line with a ___ slope while fast movement produces a ___ line. Motion to the left produces a line with a ___ gradient and motion to the right results in a line with a ___ gradient.
Drawing a line on a graph shows the trends in data presented in a table. Additional information can be determined from the slope of the graph.
The size of the slope or the gradient of a distance versus time graph gives the speed at which an object is moving.
The gradient of a displacement versus time graph indicates an object's velocity.
1. Fig.12 shows the speed of a train as it travels along a certain section of track.
What time does the train take to travel along this section of track?
s (to the nearest whole number)
What is the total length of this section of track?
m (to the nearest whole number)
What is the average speed of the train during its journey along this section of track?
ms^−1 (to 1 d.p.)
2. Fig.13 shows the speed time graph for an object starting from rest and increasing its speed as it travels.
What is the speed of the object 5 seconds after starting?
ms^−1 (to the nearest whole number)
What is the average speed over the first 5 seconds?
ms^−1 (to 1 d.p.)
What distance does the object travel during the first 5 seconds?
m (to 1 d.p.)
What is its average speed between times of 5 and 10 seconds?
ms^−1 (to the nearest whole number)
4. In an experiment, as shown in Fig.15, a ball, held directly underneath a motion sensor, is released. A pupil makes the following statements about the graph. Decide whether they are true or false.
• At point P the ball is held directly underneath the motion sensor.
• Between Q and R the ball is in contact with the ground.
• Between S and T the ball is rising.
Vertex-Transitive $q$-Complementary Uniform Hypergraphs
For a positive integer $q$, a $k$-uniform hypergraph $X=(V,E)$ is $q$-complementary if there exists a permutation $\theta$ on $V$ such that the sets $E, E^{\theta}, E^{\theta^2},\ldots, E^{\theta^{q-1}}$ partition the set of $k$-subsets of $V$. The permutation $\theta$ is called a $q$-antimorphism of $X$. The well studied self-complementary uniform hypergraphs are 2-complementary.

For an integer $n$ and a prime $p$, let $n_{(p)}=\max\{i : p^i \text{ divides } n\}$. In this paper, we prove that a vertex-transitive $q$-complementary $k$-hypergraph of order $n$ exists if and only if $p^{n_{(p)}}\equiv 1 \pmod{q^{\ell+1}}$ for every prime number $p$, in the case where $q$ is prime, $k = bq^\ell$ or $k=bq^{\ell}+1$ for a positive integer $b < k$, and $n\equiv 1 \pmod{q^{\ell+1}}$. We also find necessary conditions on the order of these structures when they are $t$-fold-transitive and $n\equiv t \pmod{q^{\ell+1}}$, for $1\leq t < k$, in which case they correspond to large sets of isomorphic $t$-designs. Finally, we use group theoretic results due to Burnside and Zassenhaus to determine the complete group of automorphisms and $q$-antimorphisms of these hypergraphs in the case where they have prime order, and then use this information to write an algorithm to generate all of these objects. This work extends previous, analogous results for vertex-transitive self-complementary uniform hypergraphs due to Muzychuk, Potočnik, Šajna, and the author. These results also extend the previous work of Li and Praeger on decomposing the orbitals of a transitive permutation group.
Trouble understanding rewriting of an expression
March 22nd 2013, 11:52 AM #1
I'm trying to invert a simple function but got stuck on some algebra I can't remember.
I've got f(x) = (2x-1)/(x-1) and so far have rewritten it to y(x-1) = 2x-1, but got stuck here. Reading the solution, it shows the next step as (y-2)x = y-1, but I'm at a loss trying to understand how they arrived here.
Re: Trouble understanding rewriting of an expression
I assume that you mean
$y=\frac{2x-1}{x-1}$. If so:
$\begin{array}{rcl}y(x-1)&=&2x-1 \\ yx-y&=&2x-1 \\ 1-y&=& 2x-yx \\ 1-y&=&(2-y) x \\ \frac{1-y}{2-y}&=&x \end{array}$
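To land exactly on the book's form, collect the $x$ terms on the other side instead; the steps below are equivalent to the ones above:

$\begin{array}{rcl}yx-y&=&2x-1 \\ yx-2x&=&y-1 \\ (y-2)x&=&y-1 \\ x&=&\frac{y-1}{y-2} \end{array}$

and $\frac{y-1}{y-2} = \frac{1-y}{2-y}$ (multiply top and bottom by $-1$), so both answers agree.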
100 Days of Cool Lesson Plan | Scholastic.com
Lesson Plan
100 Days of Cool Lesson Plan
About this book
Grade Level Equivalent: 2.6
Age: 5, 6, 7
Genre: Comedy and Humor
Subject: Elementary School, Counting and Numbers, Early Math, 100th Day of School
This lesson is taken from Teaching with Favorite 100th Day of School Books available from Scholastic Professional Books.
In 100 Days of Cool by Stuart J. Murphy, it’s the first day of school and the students in Mrs. Lopez’s class are dressed in anything but ordinary outfits. They’re wearing sequins, wild sunglasses, a
big bow tie, lively patterns, eye-catching colors, and more. When Toby asks why, he finds out they’re planning on celebrating 100 days of cool. That’s “School!” he tells them, but it’s too late to
change, and they’re on their way to finding 99 more ways to be the coolest class ever.
Meeting the Math Standards
Number & Operations:
• one-to-one correspondence
• recognize how many
• connect number words and numerals to the quantities they represent
• fractions
• computation
• number combinations
• estimation
• develop concepts of time and the way it is measured
Data Analysis & Probability:
• gather data
• sort and classify data according to attributes
• organize data
• represent data with objects, pictures, and graphs
• discuss events as likely or unlikely
Before Reading
Cool! That’s a favorite word among children. Let children say the title aloud. Then play with the -ool word family as a warm-up to this book. What other words could take the place of cool in this
title? There’s school, of course. But students will have fun substituting pool, toadstool, spool, uncool, preschool, and so on.
Send students on a calendar adventure to find out what day the 100th day of school is. Now add some fractions to the fun: What day will it be when students are 1/4 of the way to the 100th day? 1/2? 3/4? (Talk about how many days each of these fractions represents.)
After Reading
This book is full of invitations to have fun with math. Use the questions below to take the text and pictures further with a discussion that invites students to think about the relative position of
whole numbers, commonly used fractions, grouping, and estimating.
• In the number line on pages 4–5, why is the number one bigger than the other numbers and red, while the other numbers are smaller and blue? (It lets the reader know what day of school it is.)
• Find the number line that shows the second day of school (pages 10–11). Where would you put Day 8 on this number line? Why?
• What place on the number line shows halfway to 100 days of cool? What are some ways you can tell?
• On Day 82 Toby says, “They still need almost 20 new ideas.” Is the actual number more than 20 or less than 20? How do you know?
• How many groups of 20 days are there in 100 days? (Ask this question a different way: “What number on the number line shows when children are 1/5 of the way there?”) “What other groups can you
see on the number line?”
More to the Story
• After reading this book, children will know the ways in which the students in Mrs. Lopez’s class were cool on days 1, 2, 5, 8, 10, 17, 21, 25, 33, 41, 49, 50, 75, 82, 99, and 100. But what about
the other days? Let students add on to the story to come up with more ways to be cool.
• Reread the story, stopping to record the number for each day as it is mentioned.
• Together, count the number of days mentioned. (16) Ask, “If the students are celebrating 100 days of cool, how many other days did they do something to be cool?” Let students explain their
strategies for answering this question. Students might count up from the total number of days (16), using tallies or manipulatives to keep track. They might subtract 16 from 100, or they might
show an understanding of composing and decomposing numbers by breaking 16 into 10 and 6, then subtracting 10 from 100 to get 90 and 6 more to get 84.
• Give each child a copy of the Number Line (PDF) . Have children cut apart the number line strips and tape them together to make a number line that shows 0 to 100.
• Ask children to glue the left-hand edge of the number line to the top of a large sheet of drawing paper, positioned horizontally. Then they can fold up the number line along the taped sections so
that it will fit on the paper, and clip it in place.
• Let children write a new part of the story, telling how the children are cool on one of the days not mentioned in the story. First have them write the number of the day in red on the number line,
making it bigger than the other numbers. Then ask them to write a draft of their addition to the story on plain paper and write a final version on a copy of the Story Page (PDF) . Have children
mount their stories on the paper with the number line and then illustrate them, using colors as bright as those in the book.
• Let children work together to arrange their pages in chronological order and bind together. Display the new parts of the story in order, along with a copy of the book.
Cool Kids
• Has Mrs. Lopez’s class inspired your students to team up and be “cool” for a day? Brainstorm ways your class could be cool for a day. Take a vote and then invite students to participate in a “Day
of Cool.” With one day down, how many more days would students have to go to be cool for 10 days of school? 20? 30? 40? Continue, encouraging children to explain how they arrive at their answers
(9, 19, 29, and so on). Use this pattern to reinforce counting by tens.
How Many to Go?
• Strengthen understanding of number relationships and combinations with an activity that embellishes the story with word bubbles.
• Reread the story, this time stopping at page 9 when Mrs. Lopez says “. . . only 99 more days.” Write the number sentence to go with Day 1 (1 + 99 = __ ) and let students solve it.
• Continue reading, paying attention to Day 2 (pages 10–11) and Toby’s word bubble that states “. . . 98 days to go.” Invite a volunteer to write the number sentence for Day 2: 2 + 98 = 100.
• Repeat this procedure for Days 1, 5, 8, 17, 21, 33, 41, 50, 75, and 99. Note that there are no word bubbles for these days to tell how many days there are to go, so invite students to make them.
Draw word bubbles on large sticky notes. For each of the days mentioned, have students write a remark in Toby’s voice that indicates how many days to go—for example, for Day 5 they might write
“Wow! What are you going to do for the next 95 days?” For each comment, have students write below the word bubble the matching number sentence, such as 5 + 95 = 100.
• Place the sticky notes on the corresponding pages and let students revisit the book to read their additions to the story.
• Subjects:
100th Day of School
Posts from August 14, 2007 on The Unapologetic Mathematician
So we have the basic data of a category $\mathcal{C}$ enriched over a monoidal category $\mathcal{V}$. Of course, what I left out were the relations that have to hold. And they’re just the same as
those from categories, but now written in terms of $\mathcal{V}$ instead of $\mathbf{Set}$: associativity and identity relations, as encoded in the following commutative diagrams:
Notice how these are very similar to the axioms for a monoidal category or a monoid object. And this shouldn’t be unexpected by now, since we know that a monoid is just a (small) category with only
one object. In fact, if we only have one object in a $\mathcal{V}$-enriched category we get back exactly a monoid object in $\mathcal{V}$!
Now, often we’re thinking of our hom-objects as “hom-sets with additional structure”. There should be a nice way to forget that extra structure and recover just a regular category again. To an extent
this is true, but for some monoidal categories $\mathcal{V}$ the “underlying set” functor isn’t really an underlying set at all. For now, though, let’s look at a familiar category of “sets with extra
structure” and see how we get the underlying set out of the category itself.
Again, the good example to always refer back to for enriched categories is $\mathbf{Ab}$, the category of abelian groups with tensor product as the monoidal structure. We recall that the functor
giving the free abelian group on a set is left adjoint to the forgetful functor from abelian groups to sets. That is, $\hom_\mathbf{Ab}(F(S),A)\cong\hom_\mathbf{Set}(S,U(A))$. We also know that we
can consider an element of the underlying set $U(A)$ of an abelian group as a function from a one-point set into $U(A)$. That is, $\hom_\mathbf{Set}(\{*\},U(A))\cong U(A)$. Putting these together, we
see that $U(A)\cong\hom_\mathbf{Ab}(\mathbb{Z},A)$, since $\mathbb{Z}$ is the free abelian group on one generator.
But $\mathbb{Z}$ is also the identity object for the tensor product! The same sort of argument goes through for all our usual sets-with-structure, telling us that in all these cases the “underlying
set” functor is represented by the monoidal identity $\mathbf{1}$, which is the free object on one generator. We take this as our general rule, giving the representable functor $V(\underline{\
hphantom{X}})=\hom_{\mathcal{V}_0}(\mathbf{1},\underline{\hphantom{X}}):\mathcal{V}_0\rightarrow\mathbf{Set}$. In many cases (but not all!) this is the usual “underlying set” functor, but now we’ve
written it entirely in terms of the monoidal category $\mathcal{V}$!
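As a quick check, take ${\mathcal{V}=\mathbf{Set}}$ itself: here the monoidal identity is the one-point set, and ${\hom_\mathbf{Set}(\{*\},X)\cong X}$, so the functor $V$ is (naturally isomorphic to) the identity functor, which is exactly what the “underlying set” of a set ought to be.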
As time goes by, we’ll use this construction to recover the “underlying category” of an enriched category. The basic idea should be apparent, but before we can really write it down properly we need
to enrich the notions of functors and natural transformations.
So I’m up at 03:00 because of an unexpected nap this evening. When I got back from dinner, there was a blackout that included my building. So I slept a bit, then woke up completely a few hours later.
Having little else to do, I finally hunkered down on some tweaks to the site.
I moved the tagline a bit and dropped the “rant” phrase. I wanted to change “outraged” to “outspoken”, but even then it wouldn’t really look nice where it now goes. Actually, not only have I not been
that outraged, I haven’t been as ranty as I expected I’d be.
Over the last seven months this place has gone in directions I didn’t really expect, and it’s really taken on a life of its own. As for myself — the nominal author — I find myself taken along for the
ride. I’m reminded of the apocryphal story about the university that planted grass everywhere, waited to see where it got trampled from people walking across it, then put the sidewalks where people
naturally walked. Making predictions is difficult, especially about the future. Unless you’re an anti-blogger, in which case I suppose it’s difficult to predict the past, but that’s a whole ‘nother
In the place of the old tagline, I’ve got a more robust (and appropriate) “about” panel. It should give a better explanation of what the hell is going on here, and how to read the page for newcomers.
I haven’t started tearing through and restructuring the “categories” panel in the sidebar, but I’ll hold off on that until I get the cable modem set up here rather than the slow and buggy “Mu-Fi“.
I’m also going to see what I can do about getting a search panel set up, but without paying for the capability to edit my CSS directly (in both money to WordPress and time invested in learning how),
I’m not sure what I’ll be able to cobble together.
Finally, inspired by the adventures this evening, I’m starting a more “bloggy” weblog about my life as a Connecticut Yankee in the Lower Garden District. I’m using a phrase I picked up from Nick
Maggio (a grad student here) for the title: Yankee Freak-Out. There’s not much there yet, but I’ll start posting tomorrow afternoon.
Assuming, of course, that I’ve got the power.
graph topology
A graph $(V,E)$ is identified by its vertices $V=\{v_{1},v_{2},\ldots\}$ and its edges $E=\{\{v_{i},v_{j}\},\{v_{k},v_{l}\},\ldots\}$. A graph also admits a natural topology, called the graph
topology, by identifying every edge $\{v_{i},v_{j}\}$ with the unit interval $I=[0,1]$ and gluing them together at coincident vertices.
This construction can be easily realized in the framework of simplicial complexes. We can form a simplicial complex $G=\left\{\{v\}\mid v\in V\right\}\cup E$. And the desired topological realization
of the graph is just the geometric realization $|G|$ of $G$.
Remark: A graph can be regarded as a one-dimensional $CW$-complex.
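For instance, the realization of a graph with two vertices and a single edge is the unit interval $I$ itself, while the realization of a triangle (three vertices joined pairwise by three edges) is homeomorphic to the circle $S^1$.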
June 2012
This is the latest in a series of blog posts about enhancing applications by using NAG methods and routines; previously, we've described in detail how to invoke methods from the NAG Library for .NET in LabVIEW, and how to call routines from the NAG libraries from within that programming environment. In addition, we supplemented those descriptions with an archive of examples which is available from the NAG LabVIEW page.
The examples we looked at previously were all in the 32 bit environment, but some users have asked whether all this works in the 64 bit world, being keen to take advantage of the larger address space
of that architecture. Indeed it does, as we shall show here.
One complication in using the NAG C Library from .NET is callback functions which, in C, have arrays in their parameter lists. Take for example the optimization algorithm e04nfc(nag_opt_qp) for
quadratic programming problems. This NAG function requires a callback function qphess of the following C prototype:
void qphess(Integer n, Integer jthcol, const double h[], Integer tdh,
const double x[], double hx[], Nag_Comm *comm);
In C# the corresponding delegate is
public delegate void NAG_E04NFC_QPHESS (int n, int jthcol,
IntPtr h_ptr, int tdh, IntPtr x_ptr, [In, Out] IntPtr hx,
ref CommStruct comm);
If you follow the C example program for nag_opt_qp as well as the style of the NAG C# examples you will write something like this for qphess in C#:
static void qphess0(int n, int jthcol, IntPtr h_ptr, int tdh,
                    IntPtr x_ptr, [In, Out] IntPtr hx_ptr,
                    ref CommStruct comm)
{
    // Marshal the native Hessian h and the vector x into managed copies
    double[] xloc = new double[n];
    double[] hloc = new double[n * n];
    double[] hxloc = new double[n];
    Marshal.Copy(h_ptr, hloc, 0, n * n);
    Marshal.Copy(x_ptr, xloc, 0, n);

    // Compute hx = H x on the managed copies
    for (int i = 0; i < n; ++i) hxloc[i] = 0;
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            hxloc[i] += hloc[i * n + j] * xloc[j];

    // Copy the result back out to the native hx array
    Marshal.Copy(hxloc, 0, hx_ptr, n);
}
Marshaling can be fairly expensive, and in some cases it may be best to avoid it. The cost is trivial for the small example programs NAG provides, but it quickly mounts with increasing problem size.
For n of 1000 and this version of qphess, nag_opt_qp takes about 7 seconds to solve a typical problem on my laptop.
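One way to avoid the copies altogether is to cast the IntPtr arguments to raw pointers in an unsafe block (the project must be compiled with /unsafe). The sketch below is not NAG-supplied code: it assumes the native arrays may safely be read and written in place and that tdh equals n as in the example above, and qphess_unsafe is a hypothetical drop-in replacement for qphess0:

static unsafe void qphess_unsafe(int n, int jthcol, IntPtr h_ptr, int tdh,
                                 IntPtr x_ptr, [In, Out] IntPtr hx_ptr,
                                 ref CommStruct comm)
{
    // View the native arrays in place; no Marshal.Copy round trips.
    double* h = (double*)h_ptr;
    double* x = (double*)x_ptr;
    double* hx = (double*)hx_ptr;

    // Compute hx = H x directly on the native memory
    // (assumes tdh == n, mirroring the managed version above).
    for (int i = 0; i < n; ++i)
    {
        double sum = 0.0;
        for (int j = 0; j < n; ++j)
            sum += h[i * n + j] * x[j];
        hx[i] = sum;
    }
}

On the same n = 1000 problem one would expect most of the per-call marshaling cost to disappear, though the actual saving should of course be measured.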
Roots of unity.
January 6th 2010, 05:08 PM
I'm trying to find the value of $(-64)^\frac{1}{6}$.
Expressing $-64$ in the form $r(\cos{\theta}+i\sin{\theta})$, we have:
$-64 = 64(\cos{\pi}+i\sin{\pi})$
Which gives:
$(-64)^{\frac{1}{6}} = 64^{\frac{1}{6}}(\cos{\pi}+i\sin{\pi})^{\frac{1}{6}}$
Then using De Moivre's theorem:
$(-64)^{\frac{1}{6}} = 64^{\frac{1}{6}}(\cos{\pi}+i\sin{\pi})^{\frac{1}{6}} = 2\left[\cos{\frac{\pi}{6}}+i\sin{\frac{\pi}{6}}\right]$
But that is just one solution. How do you get the rest?
January 6th 2010, 07:54 PM
Chris L T521
[quote of the original post]
Just tweak your answer to generate them: $2\left[\cos{\frac{(2k+1)\pi}{6}}+i\sin{\frac{(2k+1)\pi}{6}}\right]$ for $k=0,1,2,3,4,5$.
January 8th 2010, 06:45 PM
Thanks Chris. Here is what I've done:

k = 0: $2\left[\cos{\frac{(2(0)+1)\pi}{6}}+i\sin{\frac{(2(0)+1)\pi}{6}}\right] = 2\left(\cos{\frac{\pi}{6}}+i\sin{\frac{\pi}{6}}\right)$

k = 1: $2\left[\cos{\frac{(2(1)+1)\pi}{6}}+i\sin{\frac{(2(1)+1)\pi}{6}}\right] = 2\left(\cos{\frac{\pi}{2}}+i\sin{\frac{\pi}{2}}\right)$

k = 2: $2\left[\cos{\frac{(2(2)+1)\pi}{6}}+i\sin{\frac{(2(2)+1)\pi}{6}}\right] = 2\left(\cos{\frac{5\pi}{6}}+i\sin{\frac{5\pi}{6}}\right)$

k = 3: $2\left[\cos{\frac{(2(3)+1)\pi}{6}}+i\sin{\frac{(2(3)+1)\pi}{6}}\right] = 2\left(\cos{\frac{7\pi}{6}}+i\sin{\frac{7\pi}{6}}\right)$

k = 4: $2\left[\cos{\frac{(2(4)+1)\pi}{6}}+i\sin{\frac{(2(4)+1)\pi}{6}}\right] = 2\left(\cos{\frac{3\pi}{2}}+i\sin{\frac{3\pi}{2}}\right)$

k = 5: $2\left[\cos{\frac{(2(5)+1)\pi}{6}}+i\sin{\frac{(2(5)+1)\pi}{6}}\right] = 2\left(\cos{\frac{11\pi}{6}}+i\sin{\frac{11\pi}{6}}\right)$

So, the answers are: $2\left(\cos{\frac{\pi}{6}}+i\sin{\frac{\pi}{6}}\right)$, $2\left(\cos{\frac{\pi}{2}}+i\sin{\frac{\pi}{2}}\right)$, $2\left(\cos{\frac{5\pi}{6}}+i\sin{\frac{5\pi}{6}}\right)$, $2\left(\cos{\frac{7\pi}{6}}+i\sin{\frac{7\pi}{6}}\right)$, $2\left(\cos{\frac{3\pi}{2}}+i\sin{\frac{3\pi}{2}}\right)$, and $2\left(\cos{\frac{11\pi}{6}}+i\sin{\frac{11\pi}{6}}\right)$.

But the book's answers are: $2\left(\cos{\frac{\pi}{6}}+i\sin{\frac{\pi}{6}}\right)$, $2\left(\cos{\frac{\pi}{2}}+i\sin{\frac{\pi}{2}}\right)$, $2\left(\cos{\frac{5\pi}{6}}+i\sin{\frac{5\pi}{6}}\right)$, $2\left(\cos{\left(-\frac{\pi}{6}\right)}+i\sin{\left(-\frac{\pi}{6}\right)}\right)$, $2\left(\cos{\left(-\frac{\pi}{2}\right)}+i\sin{\left(-\frac{\pi}{2}\right)}\right)$, and $2\left(\cos{\left(-\frac{5\pi}{6}\right)}+i\sin{\left(-\frac{5\pi}{6}\right)}\right)$.
(I'm thinking that this has to do with the conjugacy of complex roots). (Thinking)
January 8th 2010, 06:51 PM
Chris L T521
[quote of the previous post]
The two answers are the same! (See why?)
January 8th 2010, 06:54 PM
Prove It
Recall that $-\pi < \theta \leq \pi$ by definition of $\mathrm{Arg}(z)$...
January 8th 2010, 07:15 PM
January 8th 2010, 07:28 PM
Prove It
Correct. And as I said, we say the principal argument is defined for $-\pi < \theta \leq \pi$.
January 8th 2010, 07:49 PM
Got it! $-\pi < \frac{\pi}{6} \leq \pi$, $-\pi < \frac{\pi}{2} \leq \pi$, and $-\pi < \frac{5\pi}{6} \leq \pi$, so no problem. But $\frac{7\pi}{6}$, $\frac{3\pi}{2}$, and $\frac{11\pi}{6}$ do not satisfy $-\pi < \theta \leq \pi$. However, $\frac{7\pi}{6}$ gives the same value as $-\frac{5\pi}{6}$, and $-\pi < -\frac{5\pi}{6} \leq \pi$, so we take that. $\frac{3\pi}{2}$ gives the same value as $-\frac{\pi}{2}$, and $-\pi < -\frac{\pi}{2} \leq \pi$, so we take that. And $\frac{11\pi}{6}$ gives the same value as $-\frac{\pi}{6}$, and $-\pi < -\frac{\pi}{6} \leq \pi$, so we take that.
Thanks a lot, guys. (Cool)
January 8th 2010, 08:32 PM
Hey, I've a follow-up question: What guarantees that all the nth roots of unity will lie between $-\pi$ and $\pi$?
January 8th 2010, 08:58 PM
Prove It
January 9th 2010, 06:05 PM
Yeah, you did have the very same problem :) Glad that we solved it!
To what extent can one recover a manifold from its group of homeomorphisms
Question. Suppose that $M$ is a closed connected topological manifold and $G$ is its group of homeomorphisms (with compact-open topology). Does $G$ (as a topological group) uniquely determine $M$?
One can ask the same question where we regard $G$ as an abstract group (ignoring topology), replace topological category by smooth category (here one can equip $G=Diff(M)$ with a finer structure of a
Frechet manifold), varying degree of smoothness, dropping compactness assumption, recovering $M$ up to homotopy, etc.
I do not know how to answer any of these questions. I do not even know if one can recover the dimension of $M$ from its group of homeomorphisms. In low dimensions, or assuming that $M$ has a
locally-symmetric Riemannian metric, and if $dim(M)$ is given, I know few things. For instance, among 2-dimensional manifolds one can recover $M$ from $G$ since $G/G_0$ is the mapping class group
$Mod(M)$ of $M$ and one can tell the genus of $M$ from maximal rank of free abelian subgroups of $Mod(M)$. Same for, say, closed hyperbolic manifolds with non-isomorphic isometry groups. However,
given, for instance, two closed hyperbolic 3-manifolds $M_1, M_2$ with trivial isometry groups, I do not know how to distinguish $M_i$'s by, say, $Homeo(M_i)$ (the problem reduces to a question about
homeomorphism groups of the unit ball commuting with $\pi_1(M_i)$, $i=1,2$, but I do not see how to solve it).
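(For the surface case, the relevant fact, due to Birman, Lubotzky, and McCarthy, is that the maximal rank of a free abelian subgroup of $Mod(M)$ for a closed orientable surface of genus $g\geq 2$ is $3g-3$, realized by Dehn twists along a pants decomposition; so the rank does determine $g$.)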
Update: Results quoted by Igor and Martin give the complete answer in topological and smooth category in the strongest possible form (much more than I expected!). Positive answer is also known in the
symplectic category, but, apparently, is open for contact manifolds and their groups of contactomorphisms.
Another reference in the smooth case, sent to me by Benson Farb, is the book by Augustin Banyaga, "The structure of classical diffeomorphism groups."
gt.geometric-topology at.algebraic-topology
This has really interesting applications in theoretical physics! For group actions on the discrete structures embedded in a manifold determined by causal dynamical triangulations (or other
approaches to Quantum Gravity), connecting the topology to the group action is very important. – Samuel Reid Mar 28 '12 at 6:03
@Samuel: any references for the things you mention? – Igor Rivin Mar 29 '12 at 1:49
4 Answers
Answer is: Yes, one can recover $M$ if it is a compact manifold. See J. V. Whittaker: On Isomorphic groups and homeomorphic spaces, Annals of Math 1963.
EDIT Actually, one knows a lot more; see, for example, Tomasz Rybicki, Proc. Amer. Math. Soc. 123 (1995), 303-310 (MathSciNet review: 1233982), and references therein.
ANOTHER EDIT
A quite different proof of a stronger theorem (actually a large set of theorems) than Whittaker's (actually, Whittaker's paper seems to be rather badly written) is given by Matatyahu Rubin in: On the reconstruction of topological spaces from their groups of homeomorphisms. Trans. Amer. Math. Soc. 312 (1989), no. 2, 487–538.
Great answer, Igor! – Misha Mar 28 '12 at 1:58
Thanks for these references! – Samuel Reid Mar 28 '12 at 6:04
The paper by Whittaker can be found here: webpages.ursinus.edu/nscoville/Whittaker%201963.pdf. It has the more general result that every isomorphism of abstract groups $\mathrm{Aut}(X) \cong \mathrm{Aut}(Y)$ is the conjugation of some homeomorphism $X \cong Y$, when $X,Y$ are compact manifolds. – Martin Brandenburg Mar 28 '12 at 7:07
This result was generalized to the $C^r$-setting by Filipkiewicz in Isomorphisms between diffeomorphism groups. The article by Tomasz Rybicki mentioned by Igor, which can be found here ams.org/journals/proc/1995-123-01/S0002-9939-1995-1233982-7/… tries to enhance these results. – Martin Brandenburg Mar 28 '12 at 7:13
Very erudite answer, Igor: thanks! – Georges Elencwajg Mar 28 '12 at 9:37
This is more of a longish comment rather than an answer, but I'd like to point out that while Igor's reference in principle gives a complete answer, actually reading off any specific information about $M$
(such as its dimension) from some topological invariants of $Diff(M)$ is likely hard.
Moreover, if one relaxes the categories somewhat then the answer to Misha's question can even be negative! Specifically, one can ask if the homotopy type of the monoid of self homotopy
equivalences of $M$ determines $M$ up to homotopy type. I actually don't know the answer to this but if one relaxes the category even further and looks at the rational homotopy type then in
contrast with the diffeomorphism case the answer is actually NO.
Specifically, it's rather easy to compute that the rational homotopy type of the identity component $Aut(M)$ of the monoid of self homotopy equivalences of an equal rank biquotient of Lie
groups $M=G//H$ that satisfies Halperin's conjecture (which says that in this case $H^\ast (M,\mathbb Q)$ has no negative degree derivations) is determined by rational homotopy and homology
groups of $M$. In this case $Aut(M)$ is rationally equivalent to a product of finitely many odd dimensional spheres and one can write an explicit (if somewhat ugly) formula for the dimensions
of the spheres that show up in terms of $\pi_\ast(M)\otimes \mathbb Q$ and $H_\ast(M,\mathbb Q)$.
But there are plenty of examples of such biquotients in dimensions above 5 which have distinct rational types but the same rational homotopy and homology. For example, one can take $G//T$
where $G$ is a simply connected Lie group and $T\le G\times G$ is a torus of the same rank as rank $G$. All such biquotients satisfy Halperin's conjecture so the formula I mention above
applies. It is then clear that rational homology and homotopy groups of $G//T$ are completely determined by $G$ but the rational type of $G//T$ can be different depending on the embedding $T \to G\times G$. There are infinitely many such examples already in dimension 6 of the form $(S^3\times S^3\times S^3)//T^3$. Still, in this case one can read off, for example, the dimension of
$M$ from the knowledge of the rational homotopy groups of $Aut(M)$, but I don't know how to get such a formula for a general closed simply connected manifold $M$ (and I'm not even sure if it's possible).
If the manifold is aspherical and the fundamental group has trivial center, then the monoid of homotopy self-equivalences has contractible components and the set of components is the outer
automorphism group of the fundamental group. This is often trivial. (Maybe this is a silly example, since the base-point preserving version yields the automorphism group, which might
recover the group.) – Ben Wieland Mar 28 '12 at 4:41
that's a nice observation. I was only thinking about the simply connected case which I think is more difficult here but your example certainly shows that in the non simply connected case
the homotopy type of the monoid of self equivalences of $M$ does not determine the homotopy type of $M$. I still wonder whether it's true in the simply connected case. – Vitali Kapovitch
Mar 28 '12 at 4:58
@Ben Wieland: In the case of mapping class groups of surfaces, the kernel of the homomorphism $Mod(S,p)\to Mod(S)$ is $\pi_1(S)$. Here $Mod(S,p)$ is the "pointed" mapping class group. Do you know
if the same holds in higher dimensions? If so, then one can recover $\pi_1(M)$ from the two mapping class groups. – Misha Mar 28 '12 at 17:41
Yes, the groups are the automorphism group and its outer quotient, so the kernel is the group modulo its center. (I originally excluded center so that the components are contractible. In
general, they are the classifying space of the center.) – Ben Wieland Mar 29 '12 at 16:39
My comment was about the homotopy category. However, the LES of homotopy groups of Homeo(M,p) -> Homeo(M) -> M shows that the kernel is the quotient of the fundamental group by an
abelian group. The aspherical assumption and comparison with the similar sequence for Aut(M) shows that the abelian group is contained in the center. – Ben Wieland Mar 29 '12 at 18:12
For the smooth case, the result is in:
• Takens, F. (1979). Characterization of a differentiable structure by its group of diffeomorphisms. Bol. Soc. Brasil. Mat., 10, 17–25. MR552032
and the answer is "Yes".
For completeness, Takens' theorem is:
Theorem Let $\Phi \colon M_1 \to M_2$ be a bijection between two smooth $n$-manifolds such that $\lambda \colon M_2 \to M_2$ is a diffeomorphism iff $\Phi^{-1} \circ \lambda \circ \Phi$ is
a diffeomorphism. Then $\Phi$ is a diffeomorphism.
that's a much easier question though. The original question doesn't assume that the isomorphism between the diffeomorphism groups comes from a bijection of the underlying manifolds. –
Vitali Kapovitch Mar 28 '12 at 15:13
Results that Igor and Martin quote give complete answer in the smooth case. – Misha Mar 28 '12 at 17:43
Misha: If you do the reference chase then the results that Igor and Martin quote use this result, so this one is the original that sparked all of the others. I thought that worth
mentioning. Vitali: I hadn't picked up on that point. However, when I posted, this question already had an accepted answer, so I wasn't trying to provide an answer as such; I
just thought this might be a useful addition to what was already there. – Andrew Stacey Mar 29 '12 at 6:55
I suppose that the answer to this question depends upon how much information you are willing to allow yourself to extract from $G$. Since your manifold is connected, $G$ acts transitively
upon it, and so if $x \in M$ is any point, and $G_x$ the stabiliser of $x$ in $G$, then there is a homeomorphism $G/G_x \cong M$. So $M$ can be completely reconstructed from $G$.
To be fair, though, this presupposes that you have a very good understanding of $G$ and its subgroups, perhaps more than is reasonable. Furthermore, this is not the sort of information
that is preserved by passing to the mapping class group (e.g., a surface is clearly not homeomorphic to a quotient of its mapping class group).
Okay, but there's no way you can extract $G_x$ given only $G$ as a topological group. – Qiaochu Yuan Mar 28 '12 at 0:31
@Qiaochu, according to Igor's answer, you actually can! (in the compact case) It'd be cool to know how. – Mariano Suárez-Alvarez♦ Mar 28 '12 at 1:34
|
{"url":"http://mathoverflow.net/questions/92422/to-which-extent-can-one-recover-a-manifold-from-its-group-of-homeomorphisms?sort=newest","timestamp":"2014-04-17T04:23:37Z","content_type":null,"content_length":"89455","record_id":"<urn:uuid:c48fff3e-41e6-4d0a-817e-4e87a527eb0f>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00030-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[lm-sensors] [patch 2.6.23-rc8 3/3] lm75: use 12 bit samples
David Brownell david-b at pacbell.net
Thu Oct 4 05:34:42 CEST 2007
> > (...)
> > > The rounding is there because the user doesn't know the resolution.
> >
> > That could now be exposed...
> We do not have a standard sysfs file name for this, and quite frankly I
> don't see the value.
Me either; I was just pointing out that we now have the option.
(Since your "why bother rounding?" explanation was effectively
that the option wasn't available before...)
> > (...)
> > But I updated it to just say that it's "round away from zero",
> > which is "unusual". The squirrely bit is that rounding should
> > be done on the fixed point binary value, although I don't see
> > any motivation for using "round away from zero" here.
> It's not rounding "away from zero" (whatever you mean with that
> expression).
It is; see http://www.diycalculator.com/popup-m-round.shtml for
a decent survey of rounding schemes.
> It's just rounding to the nearest representable value, I
> see nothing unusual there. If you see anything else than regular
> rounding, that would be a bug, please explain and we'll fix it.
No, it's not "Round-Toward-Nearest" since it considers the sign of
the original value, and changes the rounding direction accordingly.
That's what makes it "Round-Away-From-Zero" rather than one of the
more typical rounding schemes.
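For readers unfamiliar with the distinction, here is a small stand-alone illustration (C# rather than the driver's C, purely for demonstration; it is not part of the patch under discussion):

using System;

class RoundingDemo
{
    static void Main()
    {
        foreach (double v in new[] { 2.5, -2.5 })
        {
            // Round-half-away-from-zero considers the sign and rounds
            // the .5 case outward, as described above.
            double away = Math.Round(v, MidpointRounding.AwayFromZero);
            // .NET's default Math.Round is round-half-to-even.
            double even = Math.Round(v);
            Console.WriteLine(v + ": away-from-zero=" + away + ", to-even=" + even);
        }
        // Output:  2.5: away-from-zero=3, to-even=2
        //         -2.5: away-from-zero=-3, to-even=-2
    }
}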
> > (...)
> > > The original code was stripping away the undefined bits, while yours
> > > doesn't. You can't assume that these unused bits will always be 0. So it
> > > matters to divide first and multiply last, taking the effective chip
> > > resolution into account. Please fix.
> >
> > No can do ... that's where the fractional precision is stored!
> > Dividing by 256 would discard all fractional bits, producing
> > a resolution of 8 bits not (max) 12. These routines are set
> > to handle arbitrary resolution, not just 9 bits.
> >
> > I can mask off those low order bits though.
> You could do it, it's mathematically doable.
Only by discarding the fractional precision, unless you mean to
imply changing the representation first. (And there's no point
to doing that, near as I can tell.) But the point of these
new conversion routines is to use *all* the available precision.
> Masking the unused bits
> afterwards as you plan to do is probably more efficient and cleaner
> though.
The followup patch did exactly that: mask after converting from
signed millidegrees to signed binary fixed point.
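A rough sketch of that mask-after-convert order of operations (again C#, for illustration only; the real driver is C, and the 12-bit mask constant below is my assumption):

using System;

class Lm75Sketch
{
    // Convert signed millidegrees C to signed 8.8 binary fixed point,
    // then mask the low-order bits a 12-bit sensor cannot represent.
    static short MilliDegreesToFixedPoint(int milliDeg)
    {
        int fixp = (milliDeg * 256) / 1000; // signed 8.8 fixed point
        return (short)(fixp & ~0x0F);       // assumed: keep 4 fraction bits
    }

    static void Main()
    {
        Console.WriteLine(MilliDegreesToFixedPoint(25500)); // 6528 = 0x1980
    }
}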
- Dave
More information about the lm-sensors mailing list
|
{"url":"http://lists.lm-sensors.org/pipermail/lm-sensors/2007-October/021364.html","timestamp":"2014-04-21T09:37:24Z","content_type":null,"content_length":"5957","record_id":"<urn:uuid:6f758f36-2b00-40cf-bf10-275be0ab84d4>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00316-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Hey guys, I am looking for some help with calculus. It is in regard to rieman sums, and I would really appreciate any help understanding. I will post links below and explain further.
The graph shown is y=x. I just don't see how those steps were derived.
And this is part of it also.
Is theorem 5 something that must be memorized? Or is there any logic to it?
What does k represent?
k is from 1 to n, if you notice the summation sign in your first photo.
But why is that over n the height?
And is the width 1/n just because we are taking it as n approaches infinity? So therefore 1/n becomes infinitely small?
@kc_kennylau ?
yes, and sorry I'm not too familiar with this topic... furthermore I'm busy :'(
sorry :'(
Thanks for the input anyways kc!
Ok, I can try drawing a picture for you and posting it. But first let's see if you can see the pattern. The very first number is a_1=(1-1)/n. The next number is a_2=(2-1)/n=1/n <--- this is how far we are away from 0, or from a_1, since a_1 is 0. Then a_3=(3-1)/n, ... and going to the kth interval we have the x-value a_k=(k-1)/n, ... all the way to the nth interval, where we have a_n=(n-1)/n. We are plugging in the left endpoint of each interval and not the right endpoint, because we are doing the left-endpoint rule: we take the left endpoints of all n intervals beginning at 0.
Now to find the heights of the rectangles we do f(a_k).
By a_1 does that mean a/1?
a_1 usually means a subscript 1.
The base of each rectangle is 1/n.
Alright, okay, so f(k_n) would be f((k-1)/n)?
And that's the height at any given k?
You mean f(a_k)?
If so, yes.
But since this function is f(x)=x, then f((k-1)/n)=(k-1)/n.
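For reference, this left-endpoint sum can be evaluated in closed form for f(x)=x on [0,1] (a standard computation, added here for illustration):
\[\sum_{k=1}^{n}\frac{1}{n}\cdot\frac{k-1}{n}=\frac{1}{n^{2}}\sum_{k=1}^{n}(k-1)=\frac{1}{n^{2}}\cdot\frac{n(n-1)}{2}=\frac{n-1}{2n}\longrightarrow\frac{1}{2}=\int_{0}^{1}x\,dx\quad\text{as }n\rightarrow\infty\]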
Alright, I think it is starting to make sense, but when I try an example I still don't exactly see it. I'll post it if you could take a look.
Ok, so looking at this example, can you tell me what puzzles you and I will see if I can explain it?
Sure, just a moment.
Well first of all, do we know that it is left-hand because of the n-1 on top of the sigma?
If that is so, for left-hand problems like this can we always do (1/n)f(k/n)?
So we just set that equal to the function that is given?
Yep, they started at 0 and went to n-1, so left endpoint.
And what does the k=xn really represent?
We are trying to find what f(x) is.
\[\int\limits_{a}^{b}f(x) dx=\lim_{n \rightarrow \infty}\sum_{i=0}^{n-1} \Delta x \, f(a+i \cdot \Delta x)\]
where delta x = (b-a)/n.
Right, and x_i = a + i*(delta x).
The easiest thing to do is assume a equals 0, so we can just look at i*(delta x).
Oh I see, and how about the 1?
Well, they chose the base to be 1/n, which is delta x.
If we have (b-0)/n = 1/n then b has to be?
Now, this is one way to do the problem; you don't have to choose it this way and the integral will still have the same value. And yes, b=1 for this example.
We can take this same problem and go a different way though.
Would you like to?
Yes sure!
I still want to choose a to be 0 because that is just plain easiest!
But what if we choose the intervals to have width 2/n instead of 1/n?
Then b would have to be what?
Then b would be two.
So are you also saying that you can choose some of the conventions, as long as they match up?
Yes, but now we have \[\frac{2}{n} f(2 \cdot \frac{k}{n}) \text{ instead of } \frac{1}{n} f(\frac{k}{n}) \]
Our answer will look different, but it will still hold the same value.
\[\text{ so we want } \frac{2}{n} f(\frac{2k}{n})=\frac{1}{k+5n} \arctan(\frac{k+2n}{k+n})\]
Solve for f(2k/n) by multiplying both sides by n/2.
So n/(2(k+5n))?
\[f(\frac{2k }{n})=\frac{n}{2k+10n} \arctan(\frac{k+2n}{k+n})\]
Okay yeah, and then how do you deal with the 2k/n?
Now we want to know what f(x) is, right?
So what is k if 2k/n=x?
Basically solve that for k, so we can figure out what to replace k with, so that we will just have x inside and nothing else.
Okay, so we bring in an x?
Because we are trying to find out what integral notation looks like for this summation notation.
Looks good, so we will replace all the k's we see with that.
\[f(x)=\frac{n}{xn+10n}\arctan(\frac{\frac{xn}{2}+2n}{\frac{xn}{2}+n})\] \[f(x)=\frac{n(1)}{n(x+10)}\arctan(\frac{xn+4n}{xn+2n})=\frac{1}{x+10}\arctan(\frac{n(x+4)}{n(x+2)})\] \[f(x)=\frac{1}{x+10}\arctan(\frac{x+4}{x+2})\] So this is what our f(x) looks like from choosing a=0 and b=2. There isn't a unique answer to your question; the answer can totally vary.
Basically you get to choose a and b.
And nothing else.
Wow, great. Thanks a lot for your help! Do you know the exact names of these types of problems? I'm looking to find some practice problems.
These are just applications of the definition of Riemann sums. Let me see if I can find you some problems, one moment.
They choose a and b for you.
Alright, thanks!
There, c_i represents a+i*(delta x), by the way.
If you get bored of those, attempt this one I made up: Write this as an integral: \[\lim_{n \rightarrow \infty} \sum_{i=0}^{n-1} \frac{i}{i+8n} \cos(\frac{i+3n}{n})\]
|
{"url":"http://openstudy.com/updates/52c26b98e4b0fef2cb9f73de","timestamp":"2014-04-18T21:19:23Z","content_type":null,"content_length":"290307","record_id":"<urn:uuid:b8a440c6-94c4-4c81-900b-99eaf35dc47e>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00643-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Characterizing primes that split completely vs. primes with a given splitting behavior
Given a finite abelian extension of number fields $L/K$, the prime ideals $\mathfrak{p}$ in $O_K$ split into primes $\mathfrak{P}$ in $O_L$. The number of primes $\mathfrak{p}$ splits into is
necessarily a divisor $d$ of $[L:K]$. One can ask:
Question (1): Which primes in $O_K$ split completely in $O_L$?
Question (2): Given a divisor $d$ of $[L :K]$, which primes $\mathfrak{p}$ split into $d$ primes in $O_L$?
For $K = \mathbb{Q}$ and $L$ a cyclotomic extension of $\mathbb{Q}$, Question (2) seems genuinely more difficult than Question (1). A friend pointed this out to me in response to some notes that I
put together titled A Prelude to the Study of Reciprocity Laws.
Let $p$ be a prime and let $$\Phi_{p}(x) = x^{p-1} + x^{p-2} + \ldots + x^2 + x + 1.$$
As I discuss in the notes, a prime $\ell$ divides a number of the form $\Phi_{p}(n)$ if and only if $\ell = p$ or $\ell \equiv 1 \pmod p$. In Section 7 of the notes, I show how this follows
immediately from the basic theorems of modular arithmetic.
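A quick numerical check of this fact (my own example, not from the notes): for $p = 5$, $\Phi_{5}(2) = 2^4 + 2^3 + 2^2 + 2 + 1 = 31 \equiv 1 \pmod 5$, and $\Phi_{5}(3) = 121 = 11^2$ with $11 \equiv 1 \pmod 5$.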
Let $\zeta_p$ be a root of the polynomial $\Phi_{p}(x)$. Then a prime $\ell \in \mathbb{Z}$, $\ell \neq p$ divides a number of the form $\Phi_{p}(n)$ if and only if $\ell$ splits completely in $\
mathbb{Z}[\zeta_p]$. So the above theorem answers Question (1) in this case where $K = \mathbb{Q}$ and $L = \mathbb{Q}(\zeta_p)$ .
In my notes I (apparently) erroneously remark that the answer to Question (1) in this case implies quadratic reciprocity. In fact, quadratic reciprocity follows from the answer to Question (2). I knew
this, but thought that the answer to Question (2) follows formally from the answer to Question (1). My friend questioned whether this is true, and after spending a few hours on it, I realized that I
can't see a way to use the answer to Question (1) to derive the answer to Question (2).
Is there a way to do this?
The proofs of the theorems of class field theory proceed by answering Question (2) directly, which is consistent with the above paragraph, but I always thought that the reason for this was that the
only way to answer Question (1) in general was to answer Question (2). For this reason, I saw the questions as being of comparable difficulty. So I was surprised to see a case in which this doesn't
seem to be true.
Aside from cyclotomic extensions of $\mathbb{Q}$, are there finite extensions of number fields $L/K$ for which it's easier to answer Question (1) than Question (2)?
A few potentially relevant thoughts:
1. The imaginary quadratic case is similar to the cyclotomic case in the sense that one has an explicit construction of the number fields in question via complex multiplication, so it's natural to
try to answer the latter question by looking at, e.g., the polynomials over $\mathbb{Z}[i]$ that generate the ray class fields of $\mathbb{Q}(i)$ and try to see if the prime divisors of their
values at the Gaussian integers can be determined in a direct way analogous to as in the cyclotomic case.
2. In the case where $[L : K]$ is a prime, Questions (1) and (2) are the same, because a prime splits completely if and only if it splits at all.
3. The set of primes that splits completely in a number field uniquely determines the number field and so in principle determines the splitting behavior of the other primes.
nt.number-theory class-field-theory
Your second sentence is not true, unless $L/K$ is Galois. – Ari Shnidman Dec 27 '12 at 2:45
@Ari - Fixed. I had Galois extensions in mind. – Jonah Sinick Dec 27 '12 at 3:17
I'll be surprised if your questions can be answered in our lifetime. – Chandan Singh Dalawat Dec 27 '12 at 3:51
@ Chandan - why? – Jonah Sinick Dec 27 '12 at 4:32
The adjective abelian was not mentioned in the earlier version of your question. The answer for abelian extensions (of global fields) is contained in class field theory. – Chandan Singh Dalawat Dec 27 '12 at 5:41
|
{"url":"http://mathoverflow.net/questions/117296/characterizing-primes-that-split-completely-vs-primes-with-a-given-splitting-be","timestamp":"2014-04-16T19:43:46Z","content_type":null,"content_length":"55178","record_id":"<urn:uuid:67719836-63f8-4711-a099-ba00e7a86d9c>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00224-ip-10-147-4-33.ec2.internal.warc.gz"}
|
copBasic: Basic Theoretical Copula, Empirical Copula, and Various Utility Functions
This package implements extensive, but select, functions for copula computations and is used by several other packages by the author. This particular package provides the lower, upper, product,
"PSP," and Plackett copulas. Plackett parameter estimation is provided. Expressions available for an arbitrary copula include the diagonal of a copula, the survival copula, the dual of a copula, and
co-copula. Levels curves, such as for drawing, are available, through inverses of copulas. Sections (horizontal and vertical) and derivatives of these sections are supported. The numerical derivative
for the derivative of a copula is provided as are inverses of these the numerical derivatives. Inverses of copula derivatives are important for random variate generation, which is provided using the
conditional distribution method and the derivative of a copula. Composition of a single copula for two external parameters, composition of two copulas through use of two external parameters, and the
composition of two copulas through the use of four external parameters is provided. Composite copula random variates can be generated—compositions generally yield asymmetric copulas. A data set is
provided that contains darts thrown at the L-comoment space of a Plackett-Plackett composited copula; these data might be used for experimental copula estimation by the method of L-comoments.
Measures of association through concordance include Kendall Tau, Spearman Rho, Gini Gamma, and Blomqvist Beta. Schweizer-Wolff Sigma is provided as a measure of dependency in contrast to the
concordance measures. Upper- and lower-tail dependence is computed by numerical limit convergence. Whether a copula is left-tail decreasing or right-tail increasing also is provided. Quantile and
median regression for V with respect to U and U with respect to V is available. Empirical copulas (EC) are supported and the computation of a data frame for each sample value also is provided. ECs
are heavily dependent on a simple grid or matrix structure for which generation capability is provided. The derivatives of the EC grid, which are the conditional CDFs of copula sections, are
computable. Also, the inverses of the derivatives, which are the conditional QDFs of copula sections are computable. Median and quantile regression of an EC is supported. Lastly, support for EC
simulation of V conditional on U is provided.
Version: 1.5.4
Depends: R (≥ 2.10), lmomco
Published: 2013-05-10
Author: William H. Asquith
Maintainer: William H. Asquith <william.asquith at ttu.edu>
License: GPL-2 | GPL-3 [expanded from: GPL]
NeedsCompilation: no
Materials: ChangeLog
In views: Distributions
CRAN checks: copBasic results
Reference manual: copBasic.pdf
Package source: copBasic_1.5.4.tar.gz
MacOS X binary: copBasic_1.5.4.tgz
Windows binary: copBasic_1.5.4.zip
Old sources: copBasic archive
Reverse dependencies:
|
{"url":"http://cran.repo.bppt.go.id/web/packages/copBasic/index.html","timestamp":"2014-04-19T04:50:09Z","content_type":null,"content_length":"5244","record_id":"<urn:uuid:e6d0a5d5-685e-4bf6-a193-57425ddcd2bc>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00396-ip-10-147-4-33.ec2.internal.warc.gz"}
|
M Theory Lesson 150
Quite a while back, a paper by Duff and Ferrara on the two-way entanglement of three qutrits was recommended here. The paper actually begins by looking at hyperdeterminants associated to $4D$ stringy black holes, and in particular the use of a hyperdeterminant to express the entropy. This hyperdeterminant is invariant under the U triality, which is a kind of three (spatial) dimensional analogue of the two dimensional duality currently generating much interest. Thus it is no surprise that when they move on to seven qubits and tripartite entanglement (giving seven lines with three nodes on the Fano plane) we start to see circulants, this time associated with $E(7)$, namely a certain $7 \times 7$ matrix [shown as an image in the original post].
Observe that this circulant is basically the $7 \times 7$ circulant for the Hamming code, with $1$ added to each entry, and indeed this circulant is associated to the Fano plane and seven bits of information. Moreover, an $E(8)$ interpretation has the advantage of agreeing with the 3 Time interpretation of the spatial dimensions, at least in the context of M Theory.
By considering the entries of the matrix above to be qutrit elements, $A_{ij} \in \{ 0,1,2 \} = \mathbb{F}_{3}$, we see that the addition of $1$ to each entry again yields a circulant, which is twice
the complement of the Hamming circulant. And finally, yet another addition of unit entries returns the matrix to the Hamming circulant. Thus a triality is made manifest by the root vector circulants.
Duff and Ferrara point out that the question of real forms for $E(7)$ is not really important in this context, since the coefficients defining the state are allowed to be complex. Hmm. This also sounds like something that came up recently.
4 Comments:
phil said...
Kea, thanks for the links. Yesterday I started adding an entry about hyperdeterminants into Wikipedia. I hope it will be useful.
Good to hear you are helping wikipedia, phil. I am quite interested not only in hyperdeterminants, but in all sorts of multidimensional operators to which they may be related. Your post on the j
invariant was really cool.
When I wrote that and pointed out that the 2x2x2x2 hyperdeterminant is of degree 24 I was not very serious when I suggested that it might be connected to the dimension of the Leech lattice and
other mysteries of the number 24, but then I learnt that the discriminant of the quartic used to construct the hyperdeterminant is linked to the 24th power of the eta function which makes the
connection look more promising.
I looked for signs of the Golay code in the structure of the hyperdeterminant but it is not there. However, the connection you have highlighted between the Hamming code from the Fano plane and
the hyperdeterminant in the quartic invariant of E_7 makes it look like there could be more to find. I fear it is buried too deeply for me.
There are also strong connections between hyperdeterminants and generalised hypergeometric functions. Hypergeometric functions are connected to all kinds of interesting and relevant things. Just
follow the links in Wikipedia from here!
Dear Phil,
If you are interested in the structure of the hyperdeterminant of type 2x2x2x2
as the one connected to four qubit entanglement see my paper in J.Phys.A.39
(2006) 9533-9545 which is based on the work of Luque and Thibon.
Moreover, the connection between the Hamming code, block designs, the tripartite entanglement of seven qubits, and Cartan's quartic invariant is further clarified in my Frascati lecture notes to
be published soon in the SAM2007 proceedings.
This is an extended version of my recent papers on the connection between error correction, stringy black holes, the Fano plane etc. (Phys. Rev. D76, 106011 (2007), Phys. Rev. D75, 024024 (2007),
Phys. Rev. D74, 024030 (2006)).
These can also be found on the hepth and quant-ph archives.
Best regards
Peter Levay
|
{"url":"http://kea-monad.blogspot.com/2008/01/m-theory-lesson-150.html","timestamp":"2014-04-20T08:21:50Z","content_type":null,"content_length":"32393","record_id":"<urn:uuid:ab6da4d0-1bd4-4caf-8f6c-9060ec45c912>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00225-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mathematica 4.0 contains several new functions that allow systems of real algebraic equations and inequalities to be solved. In this article we describe the functions, show examples of their use, and
comment on the algorithms used in their implementation.
Let us first specify what is meant by a system of algebraic equations and inequalities, and what its solutions are.
An algebraic expression in variables x1, ..., xn is built from rational numbers and the variables using arithmetic operations, rational powers, and the Root function. [The original article displays here an example of an algebraic expression in x and y, and an example of a system of real algebraic equations and inequalities in x and y.]
Functions that solve systems of real algebraic equations and inequalities always use LogicalExpand first to put the system in the disjunctive normal form. For the remainder of this article, we assume
that the systems are in the disjunctive normal form; that is, they are disjunctions of conjunctions of equations and inequalities.
A tuple of real numbers satisfies (or is a solution of) a system if substituting its entries for the variables makes the system true; entries may be exact algebraic numbers represented by Root objects. What does it mean to "solve" a system of real algebraic equations and inequalities? The solutions form semialgebraic subsets of R^n. [A figure illustrating such a set appears in the original.]
In many practical situations (e.g., geometric theorem proving using assumptions), we may need to check very simple properties of systems, like whether a system has any solutions at all, whether a
system is always satisfied, or whether the set of solutions of one system is contained in the set of solutions of another system (the first system implies the second). All these problems are
equivalent, and we describe functions solving them in the section "Decision Problem."
The set of solutions of any system of real algebraic equations and inequalities in variables x1, ..., xn can be written as a finite union of cylindrical parts. [The defining expression is lost from this copy.]
In the section "Solving Systems of Equations and Inequalities," we present functions which find solution sets of systems of real algebraic equations and inequalities. We represent these solutions in the cylindrical solution form, i.e., in the form of a finite number of disjoint cylindrical parts, with Root functions, rational functions, or simple closed-form functions of parameters a, b, and c serving as the bounds.
A quantified system of real algebraic equations and inequalities in variables
where S is a system of real algebraic equations and inequalities in
By Tarski's theorem the solution sets of quantified systems of real algebraic equations and inequalities are semi-algebraic sets. The function Resolve, presented in the section "Quantifier
Elimination," finds the solution sets and represents them in the cylindrical solution form.
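As a standard illustration of quantifier elimination (this example is mine, not drawn from the article): for a quadratic in one variable, "for all x, x^2 + b x + c > 0" holds if and only if b^2 - 4c < 0, so Resolve can replace the quantified statement by a quantifier-free condition on the parameters b and c.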
Finally, we show new functions for global optimization of algebraic functions subject to algebraic equation and inequality constraints, and we comment on the algorithms used by the inequality solving functions.
|
{"url":"http://www.mathematica-journal.com/issue/v7i4/features/strzebonski/contents/html/Links/index_lnk_1.html","timestamp":"2014-04-18T15:38:49Z","content_type":null,"content_length":"18004","record_id":"<urn:uuid:9a7dcb1d-3691-493b-9793-09aa6ff34c57>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00310-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: st: Peer variable coefficient estimate nonsense
Re: st: Peer variable coefficient estimate nonsense
From Jian Zhang <jian32@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: Peer variable coefficient estimate nonsense
Date Sat, 3 Mar 2012 12:26:10 +0800
Hi Clyde Schechter,
Thanks for your comments. As you guessed, the peer group is the student's class and the score is his/her math score. In my regression analysis I include grade and semester dummies; that is, I have controlled for grade and semester, so I am comparing students in the same grade and the same semester. The question is then really: for a given student, does a one-year increase in his peers' average age matter more for his math achievement than a one-year increase in his own age? My intuition is that his own age should matter more than his peers' average age. Of course, there is no theory or empirical evidence supporting this; it is just my bold intuition...
On Sat, Mar 3, 2012 at 1:52 AM, Clyde B Schechter
<clyde.schechter@einstein.yu.edu> wrote:
> Leaving aside the technical issues brought up by Nick Cox of how correlated variables might partition variance among themselves in a regression analysis, substantively, I wonder why Jian Zhang thinks it is nonsensical for peer age to be a stronger predictor of student score than the student's own age. It isn't explained what kind of scores and peer groups these are, but if the "peer" group in question is the student's class, it wouldn't surprise me at all that mean peer group age, which is then a very strong proxy for grade level, would be a stronger predictor of, say, math achievement, than the child's individual age when the two are used together. In fact, it _would_ surprise me if the opposite were true. After all, we would expect 5th graders to have higher math scores than 4th graders, but there is no reason to think that an older 4th grader would outperform a younger 5th grader.
> More generally, there are many other instances where a group attribute is a stronger predictor of an individual outcome than the analogous attribute of the individual.
> Clyde Schechter
> Dept. of Family & Social Medicine
> Albert Einstein College of Medicine
> Bronx, NY, USA
|
{"url":"http://www.stata.com/statalist/archive/2012-03/msg00125.html","timestamp":"2014-04-20T18:34:00Z","content_type":null,"content_length":"10110","record_id":"<urn:uuid:2bec15af-fa6b-4443-a047-46ab2ab799bd>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00184-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Neural Network Input-Process-Output Mechanism
Understanding the feed-forward mechanism is required in order to create a neural network that solves difficult practical problems such as predicting the result of a football game or the movement of a
stock price.
An artificial neural network models biological synapses and neurons and can be used to make predictions for complex data sets. Neural networks and their associated algorithms are among the most
interesting of all machine-learning techniques. In this article I'll explain the feed-forward mechanism, which is the most fundamental aspect of neural networks. To get a feel for where I'm headed,
take a look at the demo program in Figure 1 and also a diagram of the demo neural network's architecture in Figure 2.
Figure 1. Neural network feed-forward demo.
If you examine both figures you'll see that, in essence, a neural network accepts some numeric inputs (2.0, 3.0 and 4.0 in this example), does some processing and produces some numeric outputs (0.93
and 0.62 here). This input-process-output mechanism is called neural network feed-forward. Understanding the feed-forward mechanism is required in order to create a neural network that solves
difficult practical problems such as predicting the result of a football game or the movement of a stock price.
Figure 2. Neural network architecture.
If you're new to neural networks, your initial impression is likely something along the lines of, "This looks fairly complicated." And you'd be correct. However, I think the demo program and its
explanation presented in this article will give you a solid foundation for understanding neural networks. This article assumes you have expert-level programming skills with a C-family language. I
coded the demo program using C#, but you shouldn't have too much trouble refactoring my code to another language such as Visual Basic or Python.
The demo can be characterized as a fully connected three-input, four-hidden, two-output neural network. Unfortunately, neural network terminology varies quite a bit. The neural network shown in
Figure 2 is most often called a two-layer network (rather than a three-layer network, as you might have guessed) because the input layer doesn't really do any processing. I suggest this by showing
the input nodes using a different shape (square inside circle) than the hidden and output nodes (circle only).
Each node-to-node arrow in Figure 2 represents a numeric constant called a weight. For example, assuming nodes are zero-indexed starting from the top of the diagram, the weight from input node 2 to
hidden node 3 has value 1.2. Each hidden and output node, but not any input node, has an additional arrow that represents a numeric constant called a bias. For example, the bias for hidden node 0 has
value -2.0. Because of space limitations, Figure 2 shows only one of the six bias arrows.
For a fully connected neural network, with numInputs input nodes, numHidden hidden nodes and numOutput output nodes, there will be (numInput * numHidden) + (numHidden * numOutput) weight values. And
there will be (numHidden + numOutput) bias values. As you'll see shortly, biases are really just a special type of weights, so for brevity, weights and biases are usually collectively referred to as
simply weights. For the three-input, four-hidden, two-output demo neural network, there are a total of (3 * 4) + (4 * 2) + (4 + 2) = 20 + 6 = 26 weights.
The demo neural network is deterministic in the sense that for a given set of input values and a given set of weights and bias values, the output values will always be the same. So, a neural network
is really just a form of a function.
Computing the Hidden-Layer Nodes
Computing neural network output occurs in three phases. The first phase is to deal with the raw input values. The second phase is to compute the values for the hidden-layer nodes. The third phase is
to compute the values for the output-layer nodes.
In this example, the demo does no processing of input, and simply copies raw input into the neural network input-layer nodes. In some situations a neural network will normalize or encode raw data in
some way.
Each hidden-layer node is computed independently. Notice that each hidden node has three weight arrows pointing into it, one from each input node. Additionally, there's a single bias arrow into each
hidden node. Understanding hidden node computation is best explained using a concrete example. In Figure 2, hidden node 0 is at the top of the diagram. The first step is to sum each input times each
input's associated weight: (2.0)(0.1) + (3.0)(0.5) + (4.0)(0.9) = 5.3. Next the bias value is added: 5.3 + (-2.0) = 3.3. The third step is to feed the result of step two to an activation function.
I'll describe this in more detail shortly, but for now: 1.0 / (1.0 + Exp(-3.3)) = 0.96. The values for hidden nodes 1, 2 and 3 are computed similarly and are 0.55, 1.00 and 0.73, as shown in Figure 1
. These values now serve as inputs for the output layer.
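The three steps can be written directly in code. The following stand-alone fragment (mine, not part of the demo program listed later) reproduces the hidden node 0 computation:

double[] inputs = { 2.0, 3.0, 4.0 };
double[] wToHidden0 = { 0.1, 0.5, 0.9 }; // weights into hidden node 0
double bias0 = -2.0;

double sum = 0.0;
for (int i = 0; i < inputs.Length; ++i)
  sum += inputs[i] * wToHidden0[i];        // step 1: weighted sum = 5.3
sum += bias0;                              // step 2: add bias -> 3.3
double h0 = 1.0 / (1.0 + Math.Exp(-sum)); // step 3: activation -> ~0.96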
Computing the Output-Layer Nodes
The output-layer nodes are computed in the same way as the hidden-layer nodes, except that the values computed into the hidden-layer nodes are now used as inputs. Notice there are a lot of inputs and
outputs in a neural network, and you should not underestimate the difficulty of keeping track of them.
The sum part of the computation for output-layer node 0 (the topmost output node in the diagram) is: (0.96)(1.3) + (0.55)(1.5) + (1.00)(1.7) + (0.73)(1.9) = 5.16. Adding the bias value: 5.16 + (-2.5)
= 2.66. Applying the activation function: 1.0 / (1.0 + Exp(-2.66)) = 0.9349 or 0.93 rounded. The value for output-layer node 1 is computed similarly, and is 0.6196 or 0.62 rounded.
The output-layer node values are copied as is to the neural network outputs. In some cases, a neural network will perform some final processing such as normalization.
Neural network literature tends to be aimed more at researchers than software developers, so you'll see a lot of equations with Greek letters such as the one in the lower-right corner of Figure 2.
Don't let these equations intimidate you. The Greek letter capital phi is just an abbreviation for the activation function (many other Greek letters, such as kappa and rho, are used here too). The
capital sigma just means "add up some terms." The lowercase x and w represent inputs and weights. And the lowercase b is the bias. So the equation is just a concise way of saying: "Multiply each
input times its weight, add them up, then add the bias, and then apply the activation function to that sum."
The Bias As a Special Weight
Something that often confuses newcomers to neural networks is that the vast majority of articles and online references treat the biases as weights. Take a close look at the topmost hidden node in
Figure 2. The preliminary sum is the product of three input and three weights, and then the bias value is added: (2.0)(0.1) + (3.0)(0.5) + (4.0)(0.9) + (-2.0) = 3.30. But suppose there was a dummy
input layer node with a value of 1.0 that was added to the neural network as input x3. If each hidden-node bias is associated with the dummy input value, you get the same result: (2.0)(0.1) + (3.0)
(0.5) + (4.0)(0.9) + (1.0)(-2.0) = 3.30.
Treating biases as special weights that are associated with dummy inputs that have constant value 1.0 simplifies writing research-related articles, but with regard to actual implementation I find the
practice confusing, error-prone and hack-ish. I always treat neural network biases as biases rather than as special weights with dummy inputs.
Activation Functions
The activation function used in the demo neural network is called a log-sigmoid function. There are many other possible activation functions that have names like hyperbolic tangent, Heaviside and
Gaussian. It turns out that choosing activation functions is extremely important and surprisingly tricky when constructing practical neural networks. I'll discuss activation functions in detail in a
future article.
The log-sigmoid function in the demo is implemented like so:
private static double LogSigmoid(double z)
{
  if (z < -20.0) return 0.0;
  else if (z > 20.0) return 1.0;
  else return 1.0 / (1.0 + Math.Exp(-z));
}
The method accepts a type double input parameter z. The return value is type double with value between 0.0 and 1.0 inclusive. In the early days of neural networks, programs could easily generate
arithmetic overflow when computing the value of the Exp function, which gets very small or very large, very quickly. For example, Exp(-20.0) is approximately 0.0000000020611536224386. Even though
modern compilers are less susceptible to overflow problems, it's somewhat traditional to specify threshold values such as the -20.0 and +20.0 used here.
Overall Program Structure
The overall program structure and Main method of the demo program (with some minor edits and WriteLine statements removed) is presented in Listing 1. I used Visual Studio 2012 to create a C# console
application named NeuralNetworkFeedForward. The program has no significant Microsoft .NET Framework dependencies, and any version of Visual Studio should work. I renamed file Program.cs to the more
descriptive FeedForwardPrograms.cs and Visual Studio automatically renamed class Program, too. At the top of the template-generated code, I removed all references to namespaces except the System
Listing 1. Feed-Forward demo program structure.
using System;
namespace NeuralNetworkFeedForward
{
  class FeedForwardProgram
  {
    static void Main(string[] args)
    {
      try
      {
        Console.WriteLine("\nBegin neural network feed-forward demo\n");
        Console.WriteLine("Creating a 3-input, 4-hidden, 2-output NN");
        Console.WriteLine("Using log-sigmoid function");
        const int numInput = 3;
        const int numHidden = 4;
        const int numOutput = 2;
        NeuralNetwork nn = new NeuralNetwork(numInput, numHidden, numOutput);
        const int numWeights = (numInput * numHidden) +
          (numHidden * numOutput) + numHidden + numOutput;
        double[] weights = new double[numWeights] {
          0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2,
          -2.0, -6.0, -1.0, -7.0,
          1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0,
          -2.5, -5.0 };
        Console.WriteLine("\nWeights and biases are:");
        ShowVector(weights, 2);
        Console.WriteLine("Loading neural network weights and biases");
        nn.SetWeights(weights);
        Console.WriteLine("\nSetting neural network inputs:");
        double[] xValues = new double[] { 2.0, 3.0, 4.0 };
        ShowVector(xValues, 2);
        Console.WriteLine("Loading inputs and computing outputs\n");
        double[] yValues = nn.ComputeOutputs(xValues);
        Console.WriteLine("\nNeural network outputs are:");
        ShowVector(yValues, 4);
        Console.WriteLine("\nEnd neural network demo\n");
      }
      catch (Exception ex)
      {
        Console.WriteLine(ex.Message);
      }
    } // Main

    public static void ShowVector(double[] vector, int decimals) { . . }
    public static void ShowMatrix(double[][] matrix, int numRows) { . . }
  } // Program

  public class NeuralNetwork
  {
    private int numInput;
    private int numHidden;
    private int numOutput;
    private double[] inputs;
    private double[][] ihWeights; // input-to-hidden
    private double[] ihBiases;
    private double[][] hoWeights; // hidden-to-output
    private double[] hoBiases;
    private double[] outputs;

    public NeuralNetwork(int numInput, int numHidden, int numOutput) { . . }
    private static double[][] MakeMatrix(int rows, int cols) { . . }
    public void SetWeights(double[] weights) { . . }
    public double[] ComputeOutputs(double[] xValues) { . . }
    private static double LogSigmoid(double z) { . . }
  } // Class
} // ns
The heart of the program is the definition of a NeuralNetwork class. That class has a constructor, which calls helper MakeMatrix; public methods SetWeights and ComputeOutputs; and private method
LogSigmoid, which is used by ComputeOutputs.
The class containing the Main method has two utility methods, ShowVector and ShowMatrix. The neural network is instantiated using values for the number of input-, hidden- and output-layer nodes:
const int numInput = 3;
const int numHidden = 4;
const int numOutput = 2;
NeuralNetwork nn = new NeuralNetwork(numInput,
numHidden, numOutput);
Next, 26 arbitrary weights and bias values are assigned to an array:
const int numWeights = (numInput * numHidden) +
(numHidden * numOutput) +
numHidden + numOutput;
double[] weights = new double[numWeights] {
0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2,
-2.0, -6.0, -1.0, -7.0,
1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0,
-2.5, -5.0 };
The weights and bias values are stored so that the first 12 values are the input-to-hidden weights, the next four values are the input-to-hidden biases, the next eight values are the hidden-to-output
weights, and the last two values are the hidden-to-output biases. Assuming an implicit ordering isn't a very robust strategy, you might want to create four separate arrays instead.
After the weights are created, they're copied into the neural network object, and then three arbitrary inputs are created:
double[] xValues = new double[] { 2.0, 3.0, 4.0 };
Method ComputeOutputs copies the three input values into the neural network, then computes the outputs using the feed-forward mechanism, and returns the two output values as an array:
double[] yValues = nn.ComputeOutputs(xValues);
Console.WriteLine("\nNeural network outputs are:");
ShowVector(yValues, 4);
Console.WriteLine("\nEnd neural network demo\n");
The Neural Network Data Fields and Constructor
The NeuralNetwork class has three data fields that define the architecture:
private int numInput;
private int numHidden;
private int numOutput;
The class has four arrays and two matrices to hold the inputs, weights, biases and outputs:
private double[] inputs;
private double[][] ihWeights;
private double[] ihBiases;
private double[][] hoWeights;
private double[] hoBiases;
private double[] outputs;
In ihWeights[i][j], index i is the 0-based index of the input-layer node and index j is the index of the hidden-layer node. For example, ihWeights[2][1] is the weight from input node 2 to hidden
node 1. Similarly, hoWeights[3][0] is the weight from hidden node 3 to output node 0. C# supports a true two-dimensional array type, and you may want to use it instead of implementing a matrix
as an array of arrays.
The constructor first copies its input arguments into the associated member fields:
this.numInput = numInput;
this.numHidden = numHidden;
this.numOutput = numOutput;
Then the constructor allocates the inputs, outputs, weights and biases arrays, and matrices. I tend to leave out the this. qualifier for member fields:
inputs = new double[numInput];
ihWeights = MakeMatrix(numInput, numHidden);
ihBiases = new double[numHidden];
hoWeights = MakeMatrix(numHidden, numOutput);
hoBiases = new double[numOutput];
outputs = new double[numOutput];
Helper method MakeMatrix is just a convenience to keep the constructor code a bit cleaner and is defined as:
private static double[][] MakeMatrix(int rows, int cols)
{
  double[][] result = new double[rows][];
  for (int i = 0; i < rows; ++i)
    result[i] = new double[cols];
  return result;
}
Setting the Weights and Biases
Class NeuralNetwork method SetWeights transfers a set of weights and bias values stored in a linear array into the class matrices and arrays. The code for method SetWeights is presented in Listing 2.
Listing 2. The SetWeights method.
public void SetWeights(double[] weights)
{
  int numWeights = (numInput * numHidden) +
    (numHidden * numOutput) + numHidden + numOutput;
  if (weights.Length != numWeights)
    throw new Exception("Bad weights array");

  int k = 0; // Points into weights param
  for (int i = 0; i < numInput; ++i)
    for (int j = 0; j < numHidden; ++j)
      ihWeights[i][j] = weights[k++];
  for (int i = 0; i < numHidden; ++i)
    ihBiases[i] = weights[k++];
  for (int i = 0; i < numHidden; ++i)
    for (int j = 0; j < numOutput; ++j)
      hoWeights[i][j] = weights[k++];
  for (int i = 0; i < numOutput; ++i)
    hoBiases[i] = weights[k++];
}
Neural networks that solve practical problems train the network by finding a set of weights and bias values that best correspond to training data, and will often implement a method GetWeights to
allow the calling program to fetch the best weights and bias values.
Computing the Outputs
Method ComputeOutputs implements the feed-forward mechanism and is presented in Listing 3. Much of the code consists of display messages to show intermediate values of the computations. In most
situations you'll likely want to comment out the display statements.
Listing 3. Method ComputeOutputs implements the feed-forward mechanism.
public double[] ComputeOutputs(double[] xValues)
{
  if (xValues.Length != numInput)
    throw new Exception("Bad inputs");

  double[] ihSums = new double[this.numHidden]; // Scratch
  double[] ihOutputs = new double[this.numHidden];
  double[] hoSums = new double[this.numOutput];

  for (int i = 0; i < xValues.Length; ++i) // xValues to inputs
    this.inputs[i] = xValues[i];

  Console.WriteLine("input-to-hidden weights:");
  FeedForwardProgram.ShowMatrix(this.ihWeights, -1);

  for (int j = 0; j < numHidden; ++j) // Input-to-hidden weighted sums
    for (int i = 0; i < numInput; ++i)
      ihSums[j] += this.inputs[i] * ihWeights[i][j];
  Console.WriteLine("input-to-hidden sums before adding i-h biases:");
  FeedForwardProgram.ShowVector(ihSums, 2);

  Console.WriteLine("input-to-hidden biases:");
  FeedForwardProgram.ShowVector(this.ihBiases, 2);
  for (int i = 0; i < numHidden; ++i) // Add biases
    ihSums[i] += ihBiases[i];
  Console.WriteLine("input-to-hidden sums after adding i-h biases:");
  FeedForwardProgram.ShowVector(ihSums, 2);

  for (int i = 0; i < numHidden; ++i) // Input-to-hidden output
    ihOutputs[i] = LogSigmoid(ihSums[i]);
  Console.WriteLine("input-to-hidden outputs after log-sigmoid activation:");
  FeedForwardProgram.ShowVector(ihOutputs, 2);

  Console.WriteLine("hidden-to-output weights:");
  FeedForwardProgram.ShowMatrix(hoWeights, -1);
  for (int j = 0; j < numOutput; ++j) // Hidden-to-output weighted sums
    for (int i = 0; i < numHidden; ++i)
      hoSums[j] += ihOutputs[i] * hoWeights[i][j];
  Console.WriteLine("hidden-to-output sums before adding h-o biases:");
  FeedForwardProgram.ShowVector(hoSums, 2);

  Console.WriteLine("hidden-to-output biases:");
  FeedForwardProgram.ShowVector(this.hoBiases, 2);
  for (int i = 0; i < numOutput; ++i) // Add biases
    hoSums[i] += hoBiases[i];
  Console.WriteLine("hidden-to-output sums after adding h-o biases:");
  FeedForwardProgram.ShowVector(hoSums, 2);

  for (int i = 0; i < numOutput; ++i) // Hidden-to-output result
    this.outputs[i] = LogSigmoid(hoSums[i]);

  double[] result = new double[numOutput]; // Copy this.outputs to result
  this.outputs.CopyTo(result, 0);
  return result;
}
Method ComputeOutputs uses three scratch arrays for computations:
double[] ihSums = new double[this.numHidden]; // Scratch
double[] ihOutputs = new double[this.numHidden];
double[] hoSums = new double[this.numOutput];
An alternative design is to declare these scratch arrays as class data members instead of method-local variables. In the demo program, method ComputeOutputs is called only once. But a neural network that solves a practical problem will likely call ComputeOutputs many thousands of times, so, depending on compiler optimization, there may be a significant penalty associated with thousands of array instantiations. However, if the scratch arrays are declared as class members, you'll have to remember to zero each one out in ComputeOutputs, which also has a performance cost.
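If you do promote the scratch arrays to fields, one inexpensive way to re-zero them at the top of ComputeOutputs is Array.Clear. A sketch, assuming the arrays have become class members with the same names:

// Only the accumulators strictly need clearing, because they are built up
// with += below; ihOutputs is fully overwritten on every call.
Array.Clear(this.ihSums, 0, this.ihSums.Length);
Array.Clear(this.hoSums, 0, this.hoSums.Length);

Clearing two short arrays is typically cheaper than allocating three new ones on every call, though profiling is the only way to be sure for a given network size.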
Method ComputeOutputs uses the LogSigmoid activation method for both input-to-hidden and hidden-to-output computations. In some cases different activation functions are used for the two layers. You
may want to consider passing the activation function in as an input parameter using a delegate.
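A hedged sketch of one way to do that: factor the activation loops into a helper that accepts a Func&lt;double, double&gt; delegate. The helper name below is illustrative, not from the article.

private static void Activate(double[] sums, double[] outputs,
  Func<double, double> activation)
{
  // Apply the supplied activation function to each weighted sum.
  for (int i = 0; i < sums.Length; ++i)
    outputs[i] = activation(sums[i]);
}

// Usage inside ComputeOutputs, for example:
//   Activate(ihSums, ihOutputs, LogSigmoid);   // hidden layer
//   Activate(hoSums, this.outputs, Math.Tanh); // output layer, if tanh is preferred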
Wrapping Up
For completeness, utility display methods ShowVector and ShowMatrix are presented in Listing 4.
Listing 4. Utility display methods.
public static void ShowVector(double[] vector, int decimals)
{
  for (int i = 0; i < vector.Length; ++i)
  {
    if (i > 0 && i % 12 == 0) // Max of 12 values per row
      Console.WriteLine("");
    if (vector[i] >= 0.0) Console.Write(" "); // Align positive values
    Console.Write(vector[i].ToString("F" + decimals) + " ");
  }
}
public static void ShowMatrix(double[][] matrix, int numRows)
{
  int ct = 0; // Count of rows displayed
  if (numRows == -1) numRows = int.MaxValue; // If numRows == -1, show all rows
  for (int i = 0; i < matrix.Length && ct < numRows; ++i)
  {
    for (int j = 0; j < matrix[0].Length; ++j)
    {
      if (matrix[i][j] >= 0.0) Console.Write(" "); // Align positive values
      Console.Write(matrix[i][j].ToString("F2") + " ");
    }
    Console.WriteLine("");
    ++ct;
  }
}
In order to fully understand the neural network feed-forward mechanism, I recommend experimenting by modifying the input values and the values of the weights and biases. If you're a bit more
ambitious, you might want to change the demo neural network's architecture by modifying the number of nodes in the input, hidden or output layers.
The demo neural network is fully connected. An advanced but little-explored technique is to create a partially connected neural network by virtually severing the weight arrows between some of a
neural network's nodes. Notice that with the design presented in this article you can easily accomplish this by setting some weight values to 0.
Some complex neural networks, in addition to sending the output from one layer to the next, may send their output backward to one or more nodes in a previous layer. As far as I've been able to
determine, neural networks with a feedback mechanism are almost completely unexplored.
It's possible to create neural networks with two hidden layers. The design presented here can be extended to support multi-hidden-layer neural networks. Again, this is a little-explored topic.
|
{"url":"http://visualstudiomagazine.com/articles/2013/05/01/neural-network-feed-forward.aspx","timestamp":"2014-04-19T14:30:36Z","content_type":null,"content_length":"75871","record_id":"<urn:uuid:d2656dd5-bcc2-4616-9cea-9862d6bbd37b>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00207-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A New Kind of Science: The NKS Forum - A new logic for the language of Mathematics
Doron Shadmi
Registered: Aug 2004
Posts: 9
A new logic for the language of Mathematics
Hi people,
During the last 20 years I developed a new Mathematical theory, which is based on what I call: an included-middle reasoning.
Included-middle reasoning is:
The Art of interactions between independent opposites in non-destructive ways.
If we compare it with excluded-middle reasoning (the standard reasoning that stands at the basis of the standard language of mathematics),
we can find these major differences:
In an excluded-middle reasoning, two opposites are simultaneously contradicting each other, and the result is no-middle(=excluded-middle reasoning).
In an included-middle reasoning, two opposites are simultaneously preventing/defining their middle domain, and the result is a middle(=included-middle reasoning).
The goal of my work is to find a logical reasoning system, which can be used as a common basis for both our morality development and our technological developments.
If we achieve this goal, then I think that we improve our chances to survive the power of our technology.
I have found that included-middle reasoning has the properties to achieve this goal, after I developed some fundamental Mathematical works, which are based on its reasoning.
From this point of view, the mathematician's cognition abilities are taken as natural parts of the mathematical research itself, and this approach can be used to develop a gateway between his own
morality development and his technical mathematical developments.
The main trigger behind this work is my interpretation to Drake's equation.
If we look at Drake's equation http://www.setileague.org/general/drake.htm we can find parameter L.
L = The "lifetime" of communicating civilizations, or in other worlds, if there is no natural catastrophe in some given planet, then how some civilization survives the power of its own technology?
If we look on our civilization, I think that we cannot ignore L and in this case we should ask every day "how we survive the power of our technology?"
My work for the last 20 years is one of many possible ways to answer this every day question.
Through my research I have found that if some civilization has no balance between its morality level and its technological level, then there is a very high probability that its L = some n, or in other words, that it no longer exists.
Now, let us look at our L and let us ask ourselves: "Do we do all what we have to do in order to avoid some n?"
Most of the power of our technology is based on the Language of Mathematics and its reasoning, where the current reasoning is generally based on 0_XOR_1 logical reasoning, and there is nothing in
this reasoning which researches the most important question which is: "How do we use this powerful Language in order to find the balance between our morality level and our technological level"?
If our answer is: "The Language of Mathematics has nothing to do with these kinds of questions", then in my opinion we quickly bring ourselves to find the exact n of our L.
In my opinion, in order to avoid the final n of our L, we have no choice but to find the balance between our morality level and our technological level within the framework of what is called the
Language of Mathematics.
Furthermore, we should not leave this question to be answered beyond the framework of our scientific methods, because no other framework except our scientific method can really determine the destiny of our L.
My work can be found in http://www.geocities.com/complementarytheo...ry/CATpage.html and it is hard to follow, especially for professional mathematicians, most of whose reasoning is based on excluded-middle reasoning.
Anyway I would like to share with you my work and I'll be glad to get any detailed questions, comments and insights.
Thank you,
My goal is to fulfill the dream of the great mathematician Gottfried Wilhelm Leibniz ( http://www.andrews.edu/~calkins/mat...aph/bioleib.htm )
Actually my number system ( which some arithmetic of it can be found in http://www.scienceforums.net/forums...89&postcount=20 ) is the fulfillment of Leibniz's Monads, and beyond it.
Last edited by Doron Shadmi on 08-19-2004 at 11:44 AM
Report this post to a moderator | IP: Logged
|
{"url":"http://forum.wolframscience.com/showthread.php?postid=3039","timestamp":"2014-04-19T04:19:53Z","content_type":null,"content_length":"23965","record_id":"<urn:uuid:caf1a431-9fcc-439f-b967-a2229df0ea93>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00517-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Theory of Error-Correcting Codes
North-Holland Mathematical Library, Volume 16, 1977 (11th reprint, 2003)
Coding theory began in the late 1940's with the work of Golay, Hamming and Shannon. Although it has its origins in an engineering problem, the subject has developed by using more and more
sophisticated mathematical techniques. It is our goal to present the theory of error-correcting codes in a simple, easily understandable manner, and yet also to cover all the important aspects of the
subject. Thus the reader will find both the simpler families of codes – for example, Hamming, BCH, cyclic and Reed-Muller codes – discussed in some detail, together with encoding and decoding
methods, as well as more advanced topics such as quadratic residue, Golay, Goppa, alternant, Kerdock, Preparata, and self-dual codes and association schemes.
Our treatment of bounds on the size of a code is similarly thorough. We discuss both the simpler results – the sphere-packing, Plotkin, Elias and Varshamov-Gilbert bounds – as well as the very powerful linear programming method and the McEliece-Rodemich-Rumsey-Welch bound. Therefore this book can be used both by the beginner and by the expert, as an introductory textbook and as a reference book,
and both by the engineer and the mathematician. Of course, this has not resulted in a thin book, and so we suggest the following menus:
[Sequence of chapters for an elementary first course on coding theory for mathematicians, a second course for mathematicians, an elementary first course on coding theory for engineers, and a second
course for engineers.]
[List of principal codes discussed, encoding methods given, and decoding methods given.]
When reading the book, keep in mind this piece of advice, which should be given in every preface: if you get stuck on a section, skip it, but keep reading! Don't hesitate to skip the proof of a
theorem: we often do. Starred sections are difficult or dull, and can be omitted on the first (or even second) reading.
The book ends with an extensive bibliography. Because coding theory overlaps with so many other subjects (computers, digital systems, group theory, number theory, the design of experiments, etc.)
relevant papers may be found almost anywhere in the scientific literature. Unfortunately this means that the usual indexing and reviewing journals are not always helpful. We have therefore felt an
obligation to give a fairly comprehensive bibliography. The notes at the ends of the chapters give sources for the theorems, problems and tables, as well as small bibliographies for some of the
topics covered (or not covered) in the chapter.
Only block codes for correcting random errors are discussed; we say little about codes for correcting other kinds of errors (burst or transpositions) or about variable length codes, convolutional
codes or source codes (see the Notes to Ch. 1). Furthermore we have often considered only binary codes, which makes the theory a lot simpler. Most writers take the opposite point of view; they think
in binary but publish their results over arbitrary fields.
[List of omitted topics.]
Table of Contents
1. Linear Codes
2. Nonlinear Codes, Hadamard Matrices, Designs and the Golay Code
3. An Introduction to BCH Codes and Finite Fields
4. Finite Fields
5. Dual Codes and Their Weight Distribution
6. Codes, Designs and Perfect Codes
7. Cyclic Codes
8. Cyclic Codes: Idempotents and Mattson-Solomon Polynomials
9. BCH Codes
10. Reed-Solomon and Justesen Codes
11. MDS Codes
12. Alternant, Goppa and Other Generalized BCH Codes
13. Reed-Muller Codes
14. First-Order Reed-Muller Codes
15. Second-Order Reed-Muller, Kerdock and Preparata Codes
16. Quadratic-Residue Codes
17. Bounds on the Size of a Code
18. Methods for Combining Codes
19. Self-dual Codes and Invariant Theory
20. The Golay Codes
21. Association Schemes
Appendix A: Tables of the Best Codes Known
Appendix B: Finite Geometries
|
{"url":"http://www.agnesscott.edu/lriddle/women/abstracts/macwilliams_theory.htm","timestamp":"2014-04-20T10:55:56Z","content_type":null,"content_length":"6841","record_id":"<urn:uuid:e7a8c93d-26d1-4427-9d63-0fb9de583d20>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00408-ip-10-147-4-33.ec2.internal.warc.gz"}
|
89.5% in a class. What are the odds the prof will round up?
areusure? wrote:
89.49999999999 double rounds to 89.5 then to 90. Here is an idea...why not just study harder and earn a 90 or above?
I actually let my kids earn the rounding or XC by explaining concepts or doing a problem on the board for me, not in front of a class; meaning I let them teach me as if I were their student. I do understand some may not be good "test takers," so I give them a chance to teach me the concept in question. I hint and help them step through a problem and subjectively determine if they explained enough of the concept to show they understand the material at an excellent, above average, or average level.
Agreed with the just study harder and earn the 90, but, ummm...double rounds? You can only do "double rounds" if you take each digit individually, and that's, uh, not how you round.
Since you're a teacher, areusure, here's how rounding works:
You have to round to something, to the nearest hundredth, to the nearest tenth, to the nearest whole number, to the nearest ten, etc.
89.5 rounded to the nearest 10th is, well, 89.5. To the nearest whole number is 90. To the nearest ten is 90. To the nearest hundred is 100.
If 90.0 is the cutoff, the OP's 89.5 is a B+. If 90 is the cutoff, a whole number, you round to the nearest whole number, which would be a 90, which would be an A-.
89.4999999999 rounded to the nearest whole number is 89. If you want to get to a whole number, this is how you do it. According to your logic, 89.445 would "round to 90" for the nearest whole number
because 89.445 would round to 89.45, which would round to 89.5, which would round to 90. But clearly 89.445 does not round to 90.
There is no "double rounding."
But your most recent point of dropping the last digit is a valid one. You have to do this for GPAs -- if you have a 3.39 you can either say you have a 3.39 or a 3.3, but you can't round up. The
college might have a policy on this, but I doubt it; it's usually up to the prof.
This is usually where the subjective part comes in if there's no policy. If you've been working hard all semester, going to office hours, improving each exam, etc., the prof might give you the higher
grade. That won't work to get an 89 up to a 90, but just might for an 89.5.
As a prof, I would consider an 89.5 to be a 90, and I think most do.
|
{"url":"http://www.letsrun.com/forum/flat_read.php?thread=3822316&page=2","timestamp":"2014-04-20T18:23:16Z","content_type":null,"content_length":"40242","record_id":"<urn:uuid:ece061e8-3230-4301-be54-a072aaf58106>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00330-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Kidzworld Help
Homework Help
Need help with math, geography, science or any other school work? Post your questions here! Or maybe you are an expert. Share you knowledge here.
Posted By: Sweet_Bubbles_38
Member since:
February, 2010
Status: Offline
i love math but i dont get how to divide two digits can sumone help me
Posted By: fettucine
Member since:
December, 2009
Status: Offline
i can!
all systems go.
the sun hasn't died
THE HOLY SPIRIT AIN'T GOT A PEN
Posted By: goldenhairbear
Member since:
October, 2010
Status: Offline
I can. Is there something specific you do not understand?
Life is like a coin. You can spend it any way you wish, but you only spend it once.
Lillian Dickson
Posted By: nyiriti11
Member since:
October, 2010
Status: Offline
Dividing by two digits is the same as dividing by one digit. Take 144/12: 12 goes into 14 one time; 14-12=2; bring down the 4 to make 24; 24/12=2. Your answer is 12, which is correct. It's like that.
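To make the bring-down step explicit, here is one more example of the same method: 288/12. 12 goes into 28 two times (2 x 12 = 24); 28 - 24 = 4; bring down the 8 to make 48; 48/12 = 4. So 288/12 = 24.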
Posted By: cookiegirl11
Member since:
August, 2009
Status: Offline
hey if u want to know, dividing, adding and multiplying fractions is my one weakness
(enter awsome sig here)
Me: ise made you a kookie
buddy: really!
me: buts ise ate it
buddie: aws
stares suspiciusly
bye byes now time to go eats more kookies
Posted By: toria2001
Member since:
September, 2010
Status: Offline
I do not like math i need help i am in the 4th grade HELP
Posted By: goldenhairbear
Member since:
October, 2010
Status: Offline
I can help you. What is the problem?
Life is like a coin. You can spend it any way you wish, but you only spend it once.
Lillian Dickson
Posted By: nyiriti11
Member since:
October, 2010
Status: Offline
Yeah, we kind of need to know the conflict before we can find the solution.
Posted By: toria2001
Member since:
September, 2010
Status: Offline
Haha i want to stay in my strings class
Posted By: PrincessCarly
Member since:
August, 2010
Status: Offline
Just remember inverse operations. For example, 99/11: what times 11 equals 99? 9. Now 92/8: how many times does 8 go into 92 without going over? 11, right? 8*11 = 88; 92 - 88 = 4. Your answer is 11 remainder 4. Hope I helped!
I just want to be the only girl you love all your life
|
{"url":"http://www.kidzworld.com/forums/homework-help/t/911739-math","timestamp":"2014-04-17T10:56:08Z","content_type":null,"content_length":"77932","record_id":"<urn:uuid:c78fe675-7c7d-4c3f-812f-ad350c10eec6>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00014-ip-10-147-4-33.ec2.internal.warc.gz"}
|
a question in vectors
October 27th 2009, 03:46 PM #1
Junior Member
Jun 2009
a question in vectors
although this question i'm having trouble with is in calculus,
it's really in the chapter about vectors before multivariable calculus.
the question goes like this:
i need to give an example of a circle centered at the point Q,
in which the points - A(7, -7) and B(-4, 6) are on that circle
and the angle AQB equals to pi/2.
basically this looks like a pretty easy question,
i've called the coordinates of Q - (q1, q2),
and created the two vectors AQ [q1-7, q2+7] and QB [-4-q1, 6-q2].
obviously the dot product of the two should be equal to 0 in order for the
angle between the two vectors to be pi/2, and also the length
of AQ and QB should be the same in order for both A and B to be on the circle.
please help - cause i've tried running over the algebra of this and got
to some pretty nasty figures again and again, and worst yet that these
figures are also not correct...
thanks in advance and sorry for the long preface
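One way through the algebra (a sketch of a standard approach, not necessarily the thread's intended one): since $|QA| = |QB| = r$ and $\angle AQB = \pi/2$, triangle $AQB$ is an isosceles right triangle, so $|AB| = r\sqrt{2}$ and $Q$ lies on the perpendicular bisector of $AB$ at distance $|AB|/2$ from the midpoint $M$. With $A(7,-7)$ and $B(-4,6)$:

$$M = (3/2,\,-1/2), \qquad \vec{AB} = (-11, 13), \qquad |AB|^2 = 290,$$

and a unit vector perpendicular to $\vec{AB}$ is $(13,11)/\sqrt{290}$, so

$$Q = M \pm \tfrac{\sqrt{290}}{2}\cdot\tfrac{(13,\,11)}{\sqrt{290}} = (3/2,\,-1/2) \pm (13/2,\,11/2),$$

giving $Q = (8, 5)$ or $Q = (-5, -6)$. Check for $Q = (8,5)$: $\vec{QA} = (-1,-12)$, $\vec{QB} = (-12,1)$, whose dot product is $12 - 12 = 0$, and $|QA| = |QB| = \sqrt{145}$.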
|
{"url":"http://mathhelpforum.com/calculus/110887-question-vectors.html","timestamp":"2014-04-20T22:03:32Z","content_type":null,"content_length":"29256","record_id":"<urn:uuid:732d6ead-4c4f-4cf7-ac8d-c01811a72830>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00618-ip-10-147-4-33.ec2.internal.warc.gz"}
|
When the underlying process is one-dimensional diffusion, as well as in certain restricted stochastic volatility settings, a contingent claim's delta is always bounded by the infimum and supremum of its delta at maturity. Further, if the claim's payoff is convex (concave), then the claim's price is a convex (concave) function of the underlying's value. However, when volatility is less specialized, or when the underlying price follows a discontinuous or non-Markovian process, then call prices can have properties very different from those of the Black-Scholes model: a call's price
can be a decreasing, concave function of the underlying price over some range; increasing with the passage of time; and decreasing in the level of interest rates.
We study a model of renegotiation between a borrower and lender in which there is the potential for moral hazard on each side of the relationship. The borrower may add risk to the project, while the
lender may opportunistically hold up the borrower by threatening to demand early payment. The result is a model in which banks play a unique role in monitoring borrowers' activities, and in which
risk is endogenous and state-dependent. The model also yields explicit predictions about renegotiation outcomes, and conditions under which bank loans add value to the firm.
The goal of this paper is to examine the impact of 1975 Congressional mandate to integrate the trading of NYSE-listed stocks. The conclusions are: Most of the time, the NYSE quote matches or
determines the best displayed quote, and the NYSE is the most frequent initiator of quote changes. Non-NYSE markets attract a significant portion of their volume when they are posting inferior bids
or offers, indicating they obtain order flow for other reasons, such as "payment for order flow." Yet, when a non-NYSE market does post a better bid or offer, it does attract additional order flow.
This paper examines the optimal consumption and investment problem for a "large" investor, whose portfolio choices affect the instantaneous expected returns on the traded assets. Alternatively, our
analysis can be interpreted in terms of an optimal growth problem with nonlinear technologies. Existence of optimal policies is established using martingale and duality techniques under general
assumptions on the securities price process and the investor s preferences. As an illustration of our characterization result, explicit solutions are provided for specific examples involving an
agent with logarithmic utilities, and a generalized two-factor version of the CCAPM is derived. The analogy of the consumption problem examined in this paper to the consumption problem with
constraints on the portfolio choices is emphasized.
This study explores multivariate methods for investment analysis based on a sample of return histories that differ in length across assets. The longer histories provide greater information about
moments of returns, not only for the longer-history assets, but for the shorter-history assets as well. To account for the remaining parameter uncertainty, or "estimation risk," portfolio
opportunities are characterized by a Bayesian predictive distribution. Examples involving emerging markets demonstrate the value of using the combined sample of histories and accounting for
estimation risk, as compared to truncating the sample to produce equal-length histories or ignoring estimation risk by using maximum-likelihood estimates.
It is often stated that bidders acquire poorly-run targets in order to improve firm performance. This inefficient management hypothesis is frequently tested by examining target stock returns in the
years prior to an acquisition. While the hypothesis is commonly assumed in the literature to be true, previous papers generally do not show significantly negative returns for targets in the years
prior to acquisition. Our paper re-examines this issue thoroughly with a number of methodological improvements and a large sample of acquisitions over the period from 1930 to 1987. We find that the
abnormal returns are insignificant over the four years prior to the bid. But over the ten-year period before the bid, target firms experience a statistically significant abnormal return of -7% to
-18%. Our results suggest that takeovers discipline managers, but with a delay that may protect them through much of their normal tenures. However, this delay is shorter during periods of lenient
anti-trust enforcement, during merger waves, and for unregulated firms.
This paper formally incorporates parameter uncertainty and model error into the estimation of contingent claim models and the formulation of forecasts. This allows inference on functions of interest
(option values, bias functions, hedge ratios) consistent with uncertainty in both parameters and models. We show how to recover the exact posterior distributions of the parameters or any function of
the parameters. Exact posterior or predictive densities are crucial because a frequent updating setup results in small samples and requires the incorporation of specific prior information. Markov
Chain Monte Carlo estimators are developed to solve this estimation problem. Within sample and predictive model specification tests are provided which can be used in dynamic testing (or trading
systems) making use of cross-sectional and time series options data. Finally, we discuss several generalizations of the error structure.
These new techniques are applied to equity options using the Black-Scholes model. When model error is taken into account, the Black-Scholes appears very robust, in contrast with previous studies
which at best only incorporated parameter uncertainty. We extend the Black-Scholes model by adding polynomial functions of its inputs. This allows for intuitive specification tests. Although these
simple extended models improve the in-sample error properties of the Black-Scholes, they do not result in major improvements in out of sample predictions. The differences between these models are
important, however, because they produce different hedge ratios and posterior probabilities of mispricing.
This paper examines the use of seven mechanisms to control agency problems between managers and shareholders. These mechanisms are: shareholdings of insiders, institutions, and large blockholders;
use of outside directors; debt policy; the managerial labor market; and the market for corporate control. We present direct empirical evidence of interdependence among these mechanisms in a large
sample of firms. This finding suggests that cross-sectional OLS regressions of firm performance on single mechanisms may be misleading. Indeed, we find relations between firm performance and four of
the mechanisms when each is included in a separate OLS regression. These are insider shareholdings, outside directors, debt, and corporate control activity. Importantly, the effect of insider
shareholdings disappears when all of the mechanisms are included in a single OLS regression, and the effects of debt and corporate control activity also disappear when estimations are made in a
simultaneous systems framework. Together, these findings are consistent with optimal use of each control mechanism except outside directors.
The threat of takeover acts to discipline managers, but it also makes shareholders' assurances to managers less reliable and so interferes with contracting between them. These two effects have
opposing implications about the level of executive compensation: the disciplinary effect implies a reduction in compensation; the contracting effect implies an increase. Which effect dominates is an
empirical issue. We examine the relation between managerial compensation and the industry-wide threat of takeover to address this issue. Using compensation data for the CEOs of over 500 firms and
after controlling for other determinants of executive compensation found in previous studies, we find a positive effect of takeover threat, indicating that the contracting effect dominates. Moreover, the effect occurs only in firms that do not provide CEOs with compensation assurance (such as a golden parachute). The size of the effect is economically significant. For CEOs without golden parachutes, the most popular compensation assurance provision, a 10% increase in the annual probability of takeover from 4.6% to 5.06% results in $11,200 more in the typical CEO's salary and bonus and $15,000
more in total compensation. We also find a direct positive effect of compensation assurance provisions on CEO compensation. These results do not seem to be driven by industry effects and are robust
to alternative specifications. Together, they provide evidence on an important way in which the market for corporate control affects internal contracting and add to the growing literature on the
determinants of the level of executive compensation.
Most states (Vermont is the exception) have constitutional or statutory limitations restricting their ability to run deficits in the state's general fund. Balanced budget limitations may be either
prospective (beginning-of-the-year) requirements or retrospective (end-of-the-year) requirements. Importantly, the state limits apply only to the general fund, leaving other funds (capital, pensions,
social insurance) as potential sources for deficits financing. Do these general fund balanced budget requirements limit deficit financing? If so, which balanced budget rules are most effective in
constraining state deficit financing? Finally, how are state spending and taxation decisions affected by balanced budget rules? Using budget data from a panel of 47 U.S. states for the period
1970-1991, the analysis finds that state end-of-the-year (not prospective) balance requirements do have significant positive effects on a state's general fund surplus. The surplus is accumulated
through cuts in spending, not through tax increases. It is saved in a state "rainy day" fund in anticipation of possible future general fund deficits. We find little evidence here that the
constraints "force" deficits into other fiscal accounts.
We provide a monotonic transformation of an initial diffusion with a level-dependent diffusion parameter that yields a second, deterministic diffusion parameter process. Altering the diffusion
parameter while maintaining the original Brownian motion at the expense of the drift can be viewed as a counterpart to Girsanov's Theorem. The transformed process provides a tractable basis for the
analysis of the initial probability distribution, and hence provides insights into the value-at-risk (VAR), hedging and valuation of alternate investment strategies. Restrictions on the initial
process imply theoretical bounds on VAR, position deltas and state prices, and an empirical bound on option deltas.
There are two distinct components to a specialist's price schedule, prices and depths. This paper presents a model of a specialist's problem of choosing prices and depths jointly in order to maximize
profits. Closed form solutions are provided for both constrained and unconstrained versions of the model. The contribution of this work is twofold. First, the model demonstrates the strategic
importance of depths for the specialist and highlights its effect on overall liquidity. Second, the joint responses of prices and depths to various concerns of the specialist may be useful in
differentiating between competing microstructure effects. Comparative static results show how depths respond to changes in: (1) the amount of asymmetric information, (2) uncertainty about the
terminal value, (3) the prior probability assessments of future prices and (4) the distribution of liquidity trades.
In recent years, the number of downgrades in corporate bond ratings has exceeded the number of upgrades. This fact has led some to conclude that the credit quality of US corporate debt has declined.
However, declining credit quality is not the only possible explanation. An alternative explanation of this apparent decline in credit quality is that the rating agencies are now using more stringent
standards in assigning ratings. An ordered probit analysis of a panel of firms from 1973 through 1992 suggests that rating standards have indeed become more stringent. The implication is that at
least part of the downward trend in ratings is the result of changing standards and does not reflect a decline in credit quality.
The returns of assets that are traded on financial markets are more volatile than the returns offered by intermediaries such as banks and insurance companies. This suggests that individual investors
are exposed to more risk in countries which rely heavily on financial markets. In the absence of a complete set of Arrow-Debreu securities, there may be a role for institutions that can smooth asset
returns over time. In this paper, we consider one such mechanism. We present an example of an overlapping generations economy in which the incompleteness of financial markets leads to underinvestment
in reserves. There exist allocations where by building up large reserves it is possible to smooth asset returns and eliminate nondiversifiable risk. This allows an ex ante Pareto improvement. We then
argue that a long-lived intermediary may be able to implement this type of smoothing. However, the position of the intermediary is fragile; competition from financial markets can cause the
intertemporal smoothing mechanism to unravel, in which case the intermediary will do no better than the market.
We solve an optimal managerial compensation contract's wage, equity and options components, vesting dates, and control rights when firms are more complicated than standard principal-agent theory allows. Firms have assets-in-place, endure through time, and have many managers. A firm's owner can transfer some control rights to a manager, thereby entrenching her. Managerial entrenchment makes deferred compensation credible but creates a hold-up problem. Deferring some, but not all, compensation reduces a manager's incentive to free-ride on her replacement while simultaneously solving the
hold-up problem. Under an optimal contract a senior manager will be entrenched, make no effort, and receive apparently performance-intensive compensation.
|
{"url":"http://finance.wharton.upenn.edu/~rlwctr/workingpapers/abstracts1996.html","timestamp":"2014-04-16T16:00:56Z","content_type":null,"content_length":"26062","record_id":"<urn:uuid:77f62c86-05e4-44dd-a5f6-b96d4d085534>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00142-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Zentralblatt MATH
Publications of (and about) Paul Erdös
Zbl.No: 707.11071
Autor: Erdös, Paul; Szalay, M.
Title: On some problems of the statistical theory of partitions. (In English)
Source: Number theory. Vol. I. Elementary and analytic, Proc. Conf., Budapest/Hung. 1987, Colloq. Math. Soc. János Bolyai 51, 93-110 (1990).
Review: [For the entire collection see Zbl 694.00005.]
Let $\pi$ be a generic ``unrestricted'' partition of the positive integer $n$, that is, a partition $\lambda_1+\lambda_2+\cdots+\lambda_m = n$, where the $\lambda_j$'s are integers such that $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_m$, and let $p(n)$ be the number of such partitions. The number of conjugacy classes of the symmetric group of degree $n$ is equal to $p(n)$, and the number of conjugacy classes of the alternating group of degree $n$ is asymptotically equal to $p(n)/2$. By choosing a suitable prime summand, a proof that almost all partitions $\pi$ of $n$ have a summand which is $> 1$ and relatively prime to the other summands was given by L. B. Beasley, J. L. Brenner, P. Erdös, M. Szalay and A. G. Williamson [Period. Math. Hung. 18, 259-269 (1987; Zbl 617.20045)], and was used to simplify a proof originally given by Beasley, Brenner, and Williamson that almost all conjugacy classes of the alternating group of degree $n$ contain a pair of generators.
It is now shown that the choice of a prime summand was necessary, in the sense that for almost all $\pi$'s, if $\lambda_j > 1$ and $(\lambda_i,\lambda_j) = 1$ for each $i \ne j$ then $\lambda_j$ is a prime. Also, let $\pi^x$ be a generic unequal partition of $n$, that is, $\pi^x$ represents a partition $\alpha_1+\alpha_2+\cdots+\alpha_m = n$, where the $\alpha_j$'s are integers such that $\alpha_1 > \alpha_2 > \cdots > \alpha_m$, and let $M(\pi^x)$ denote the maximal number of consecutive summands in $\pi^x$. It is shown that for almost all $\pi^x$,

$$M(\pi^x) = \frac{\log n}{2\log 2} - \frac{\log\log n}{\log 2} + O(\omega(n)),$$

where $\omega(n) \to \infty$ (arbitrarily slowly). Finally, let $T_n(k)$ denote the number of solutions of $x^k = e$ in the symmetric group of degree $n$, where $e$ is the identity element. Others have investigated the behavior of $T_n(k)$ as $n \to \infty$ for fixed $k \geq 2$. An estimate is now established for $T_n(k)$ for $1 \leq k \leq n^{1/4-\epsilon}$, $0 < \epsilon < 10^{-2}$, as $n \to \infty$.
Reviewer: B.Garrison
Classif.: * 11P81 Elementary theory of partitions
20P05 Probability methods in group theory
00A07 Problem books
Keywords: unequal partition
Citations: Zbl 626.20059; Zbl 694.00005; Zbl 617.20045
© European Mathematical Society & FIZ Karlsruhe & Springer-Verlag
|
{"url":"http://www.emis.de/classics/Erdos/cit/70711071.htm","timestamp":"2014-04-16T16:07:15Z","content_type":null,"content_length":"6772","record_id":"<urn:uuid:817d639d-c165-4b71-8e5f-ecd2d2229b43>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00632-ip-10-147-4-33.ec2.internal.warc.gz"}
|
N Richlnd Hls, TX Algebra 1 Tutor
Find a N Richlnd Hls, TX Algebra 1 Tutor
...This also involved teaching and using geometry, algebra, trigonometry, and calculus. Recently I have developed considerable skill in chemistry tutoring. My most effective and enjoyable
teaching is one-on-one instruction where I can assess the student's level of understanding of math or science ...
15 Subjects: including algebra 1, chemistry, calculus, statistics
...I have tutored TAKS math exit level portion. I am very proficient in the TAKS objectives, both as tutor and as a former mathematics high school teacher. I helped a student who failed the Math
TAKS three times without tutoring pass it on his 4th with tutoring.
40 Subjects: including algebra 1, reading, statistics, English
...I am in the process of developing curriculum that is part of STEAM, STEM, and METS initiatives. The average college student spends more than five years in school and pays between 120% and 300% of the theoretical college costs. Tutoring can save between $100,000 and $300,000 per student.
93 Subjects: including algebra 1, reading, chemistry, English
...I have taught algebra 1 and 2, geometry, and pre-calculus (including trigonometry). I have also taught statistics at the college level. I know how frustrating math can be, and I look for the
missing pieces in a student's knowledge to help them to gain a secure footing in their current math class...
8 Subjects: including algebra 1, geometry, algebra 2, SAT math
I am a Mechanical Engineer by education and currently work as a Mechanical, Electrical, and Plumbing coordinator in large commercial construction. Advanced mathematics is applied every day at my work. Someone once said, "Everything should be made as simple as possible, but not simpler." My approach to tutoring is "make it simple" to help students understand math.
8 Subjects: including algebra 1, physics, calculus, geometry
|
{"url":"http://www.purplemath.com/N_Richlnd_Hls_TX_algebra_1_tutors.php","timestamp":"2014-04-18T00:56:27Z","content_type":null,"content_length":"24440","record_id":"<urn:uuid:e724f3dd-6022-4b6f-b158-db8ff17e69a9>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00141-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Solution For Generic Host Network Viruses
After the Sify network migration, most of the offices were infected by viruses; the most common is the Generic Host virus. It blocks network activities: shared folders cannot be accessed, even on the same machine, i.e., even on the server.
Dear SAs,
The Microsoft patches (hotfixes) for the Generic Host problem do not work on SP2 and later. Never try the generic "solutions" such as batch files or registry tricks.
Generic Host Process For Win32 Services encountered a problem and needs to close
The Generic Host problem has been happening because of a Windows error, and for it we suggested a fix by running the tool above. This tool may cause problems with LAN access to other computers in a LAN environment. We regret the inconvenience caused to you by this LAN access problem. If you are facing such a problem, please follow the steps below to fix the issue:
• Download/purchase Symantec 12.1 from its official site: Download Trial
• Update the virus definitions to the latest from: Download Definitions
• After updating to the latest virus definitions, run a complete system scan to remove the viruses.
• Always check whether your antivirus definitions are up to date.
Friends.. as said in our earlier post, PA/SA Exam 2014, for 07 Postal Circles, viz. Delhi, Kerala, Rajasthan, Assam, Chhattisgarh, North East, Odisha is scheduled for 27 April 2014. Click Here to see
the schedule.
Candidates applied for the above 07 Postal Circles can now download their Admit Card/Hall Ticket from the website by entering your registration number and password.
1. Candidates are requested to appear for the written test at the Centre on the date and time specified. Candidates should reach the test centre 60 minutes before the commencement of the examination. Candidates are permitted entry up to 15 minutes after the commencement of the examination, but they are not permitted any extended time.
2. Maximum candidates have been accommodated at the examination city of their Preference 1. However, their Preferences 2 and 3 have also been kept in view, keeping in mind the volume of candidates appearing in the examination and the exigencies.
3. No change of Test Centre/test date and address of correspondence will be entertained.
4. Bring this Admit Card in ORIGINAL to the examination centre. Keep a photocopy of the Admit Card with you for future reference. You are required to put your Signatures on this Admit Card in the
given box ONLY in the presence of Invigilator in the examination hall and it will be collected during the examination.
5. The Admit Card is displayed only on the website www.pasadrexam2014.in. Check your particulars in the Admit Card carefully. Any discrepancy should be reported immediately by mail to admitcardhelpdesk.dopexam@gmail.com. Relevant corrections, if any and if accepted, will be made to your on-line Admit Card, and the same can be downloaded from the website again.
6. Admit Cards will not be issued at the Examination Centre under any circumstances. Printout of Downloaded Admit Card from the website www.pasadrexam2014.in are valid. Candidates will not be
permitted without valid admit card.
7. Please ensure that you bring only ball point pen of colour Black or Blue for marking the OMR Answer Sheet. Marking with “gel” or any other type of pen is NOT allowed.
8. The Answer Sheet is of OMR (Optical Mark Recognition) type. For every question there is only one correct option. Darken the appropriate circle to indicate the correct answer. Please note that no change is permissible once marked. Answers where more than one circle is darkened for a question cannot be evaluated and will be rejected. DO NOT put any stray marks anywhere on the answer sheet. Rough work may be done in the blank space in your question booklet. There is "No Negative Marking".
9. The OMR answer sheet has two copies. Do not separate them. Impressions marked on the original sheet will automatically get transferred to the copy. The original copy will be used for result processing. After completion of the exam and before leaving the examination hall, the candidate is to hand over the ORIGINAL copy to the invigilator; the candidate may retain the carbonless copy. It is the responsibility of the candidate to hand over the ORIGINAL OMR sheet to the room invigilator before leaving the examination hall. The Question Booklet may be retained by the candidate.
10. Candidates are likely to undergo searching/ frisking at the entrance of the exam centre. Please cooperate.
11. MOBILE PHONES/SCANNING DEVICES/ANY ELECTRONIC GADGET/ WEAPONS / FIRE ARMS are strictly prohibited inside the examination centre. Examination centre will not be responsible for its safety/loss.
There is no arrangement made for securing your valuables.
12. Impersonation and/or possession of any material, electronic equipment even remotely connected or adverse to proper conduct and performance of the examination may render a candidate liable to be
expelled from the Examination Hall and/or cancellation of candidature apart from any other punishment/penalty that may be imposed upon the candidate.
13. Candidates are not allowed to carry any papers, note books, books, calculators, etc. into the examination hall. Any candidate found using or in possession of such unauthorized material or
indulging in copying or adopting unfair means will be summarily disqualified.
14. The duration of Paper is 120 minutes. The syllabus for the Paper I is already furnished in the Notification & FAQ on website.
15. Result of successful candidates in this Paper–I shall be published on the website. Candidate is advised to regularly visit the web site www.pasadrexam2014.in for ascertaining updates.
16. The candidature for this test is PROVISIONAL and is subject to fulfilling the education and other eligibility criteria as prescribed in the Notification/Advertisement advertised in the newspaper
& also at the website www.pasadrexam2014.in. If Candidate is found ineligible at a later date his/her candidature will be summarily rejected. Mere appearing OR qualifying in the test does not confer
any right on the applicant for claiming selection.
17. A Visually Impaired (PH - I) candidate is requested to bring a "Scribe" at his/her own cost. The scribe should read the questions in a low voice and mark the options as told by the candidate, and should not disturb other candidates in the room. Any misconduct by the scribe or by the candidate himself shall be treated as having been done by that candidate and shall lead to cancellation of the candidature. A Visually Impaired (PH - I) candidate will be allowed 40 minutes compensatory (extra) time for completing the examination.
18. Canvassing in any form by the candidate would entail his/her disqualification.
19. Candidate will not be allowed to leave the examination hall till the examination is over.
20. Please read the instructions carefully on Question Booklet & OMR Sheet. The candidate should fill up all the particulars on the OMR Answer sheet legibly before starting to answer the questions.
21. Decision of the Department of Posts in respect of all matters pertaining to this recruitment test would be final and binding on all the Applicants/Candidates.
Courtesy : http://www.currentaffairs4examz.com/
Friends.. we have already shared all the Question Papers of the PA/SA Exam held last year (2013). As the exam dates for PA/SA Exam 2014 are now announced and very little time is available for preparation, it is very important to have a look at the previous papers to get an idea of the question paper and pattern. Hence we have compiled all the questions here for your easy reference. Hope this helps you. Prepare well... all the best in advance.
Questions with Answer key- Postal Assistant Exam 2013 (Phase 1- 21 April 2013) Exam Part A - General Knowledge
1. Government of India has recently granted "Maharatna" status to two more Navratna public sector enterprises. Which are these new companies joining the club of Maharatna public sector enterprises?
(A) Steel authority of India Limited and National Thermal Power Corporation Limited
(B) Bharat heavy Electrical Ltd and Gas authority of India Limited
(C) Indian Oil Corporation and Coal India Limited
(D) Bharat Heavy Electricals Limited and Oil and Natural Gas Corporation
Ans. (B) Bharat heavy Electrical Ltd and Gas authority of India Limited
2. India's newly built and tested missile 'Astra' is meant for
(A) Surface to surface strike
(B) Air to Air Strike
(C) Surface to Air Strike
(D) Air to surface Strike
Ans. (B) Air to Air Strike
3. The Governor General who adopted a policy of Europeanization of Bureaucracy and an exlusion of Indian from higher posts
(A) Warren Hastings
(B) Wellesley
(C) Cornwallis
(D) Dalhousie
Ans. (C) Cornwallis
4. During which movements was Bal Gangadhar Tilak given the epithet Lokmanya and did Gandhiji give the slogan 'Swaraj in a year', respectively?
(A) Home rule Movement and Non Cooperation Movement
(B) Swadeshi Movement and Dandi March
(C) Quit India Movement and Civil Disobedience Movement
(D) Khilafat Movement and Civil Disobedience Movement
Ans. (A) Home rule Movement and Non Cooperation Movement
5. Recently Dargah Muinuddin Chisti at Ajmer was in news due to visit of Pakistani Prime Minister Dargah Muinudding Chisti at Ajmer was built by:
(A) Qutubuddin Aibak
(B) Alauddin Khilji
(C) IItutmish
(D) Khijra Khan
Ans. (B) Alauddin Khilji
6. Bauxite is an ore of:
(A) Zinc
(B) Aluminium
(C) Lead
(D) Copper
Ans. (B) Aluminium
7. Which one of the following is the land area in the extreme south of India?
(A) Indira Point
(B) Rameshwaram
(C) Cape Cammiran
(D) Puducherry
Ans. (A) Indira Point
8. The ozone layer lies in the:
(A) Troposphere
(B) Tropopause
(C) Stratosphere
(D) None of these
Ans. (C) Stratosphere
9. In Union Budget 2013-14, "Voluntary Compliance Encouragement Scheme" was launched by Government of India. This scheme is related to:
(A) Service Tax
(B) Commodity Transaction Tax
(C) Income Tax
(D) Securities Transaction Tax
Ans. (A) Service Tax
10. Economic Survey 2012-13 states that Non-Performing Assets increased from 2.36% to 3.57% in September 2012. The description of Non-Performing Assets was with reference to which sector?
(A) Telecom Sector
(B) Human Resource Sector
(C) Oil and Gas Sector
(D) Banking Sector
Ans. (D) Banking Sector
11. This is the rate at which Reserve Bank of India borrows money from commercial banks:
(A) Reverse Repo rate
(B) Statutory Liquidity ratio
(C) Repo rate
(D) Cash reserve ratio
Ans. (A) Reverse Repo rate
12. As per election commission of India, how many recognized national political parties are there in India?
(A) 6
(B) 2
(C) 7
(D) None of these
Ans. (A) 6
13. The present deputy chairman of Rajya Sabha is elected member of Parliament from which State of India?
(A) Andhra Pradesh
(B) Tamilnadu
(C) Kerala
(D) Karnataka
Ans. (C) Kerala
14. The constitution of India was adopted on:
(A) 26th January, 1950
(B) 15th August, 1947
(C) 26th November, 1949
(D) 26th January, 1951
Ans. (C) 26th November, 1949
15. The Present strength of Rajya Sabha members is ..... out of which..... are representatives of states and union territories of Delhi and Puducherry and .... are nominated by president:
(A) 250, 238, 12
(B) 247, 235, 12
(C) 245, 233, 12
(D) 248, 236, 12
Ans. (C) 245, 233, 12
16. Which of the following glands produces insulin in human body?
(A) Pancreas
(B) Spleen
(C) Liver
(D) Pituitary
Ans. (A) Pancreas
17. Which of the following is food poisoning organism
(A) Clostridium botulinum
(B) Streptomyces fecalis
(C) Lacto bacillus
(D) None of these
Ans. (A) Clostridium botulinum
18. Development of goitre (Enlarged thyroid gland) is mainly due to deficiency of:
(A) Sodium
(B) Calcium
(C) Iodine
(D) Iron
Ans. (C) Iodine
19. The emission of which casuses global warming?
(A) Carbon dioxide
(B) Carbon monoxide
(C) Nitrogen
(D) Hydrocarbon
Ans. (A) Carbon dioxide
20. Who was adjudged player of the tournament by scoring most number of runs in the tournament in women's world cup cricket held recently:
(A) Jess Cameron
(B) Megan Schutt
(C) Anya Shrubsole
(D) Suzie Bates
Ans. (D) Suzie Bates
21. Who won the inaugural Hockey India League ended in Ranchi recently:
(A) Ranchi Rhinos
(B) Uttar Pradesh Wizards
(C) Delhi Wave Riders
(D) Mumbai magicians
Ans. (A) Ranchi Rhinos
22. FIFA-2014 world cup is proposed to be held in:
(A) Quatar
(B) Brazil
(C) Spain
(D) Russia
Ans. (B) Brazil
23. Wanchoo committee Dealt with:
(A) Direct Taxes
(B) Right to Information Reforms
(C) Agriculture Prices
(D) Parliamentary Reforms
Ans. (A) Direct Taxes
24. Kyoto Protocol is:
(A) An international agreement for extradition of foreign enemy
(B) An international agreement describing formula for conversion of foreign exchange
(C) An international agreement to reduce green house gases
(D) An international agreement to deal with International Terrorists organization
Ans. (C) An international agreement to reduce green house gases
25. India successfully completed its 101st space mission by launching the Indo-French satellite SARAL recently. This satellite is for:
(A) Preparation of a detailed and complete map of Antarctica
(B) Study the climate on Mars
(C) Oceanographic study
(D) None of these is true
Ans. (C) Oceanographic study
Questions with Answer key- Postal Assistant Exam 2013 (Phase 1- 21 April 2013) Exam Part C - English
DIRECTIONS: In the following questions, identify the correct reported speech from the given alternatives.
01. He said, "I must go home at once."
(A) He said that he had to go home then
(B) He said that he must go home then and there
(C) He said that he must have gone home at once
(D) He said that he had to go home at once
Ans. (A) He said that he had to go home then
02. Miss Ragini said to me, "Put these pencil shavings in the dustbin."
(A) Miss Ragini said to me to put these pencil shavings in the dustbin
(B) Miss Ragini ordered me to put these pencil shavings in the dustbin
(C) Miss Ragini asked me to put those pencil shavings in the dustbin
(D) Miss Ragini told me to put those pencil shavings in the dustbin
Ans. (C) Miss Ragini asked me to put those pencil shavings in the dustbin
03. She said, "oh dear! I have just missed the bus.":
(A) She said with regret that she had just missed the bus
(B) She exclaimed that she has just missed the bus
(C) She regretted that she just missed the bus
(D) She narrated that she just missed the bus
Ans. (C) She regretted that she just missed the bus
DIRECTIONS: Fill in the blank with correct preposition
04. He invited all his friends............ tea
(A) On
(B) For
(C) To
(D) In
Ans. (C) To
05. Rita drove............ a red light.
(A) In
(B) Through
(C) From
(D) Among
Ans. (B) Through
06. Distribute the sweets equally............ four children.
(A) Among
(B) In
(C) Between
(D) Through
Ans. (A) Among
07. Identify the adverb in the following sentence.
He spoke well at the meeting last night.
(A) Last Night
(B) Meeting
(C) Well
(D) All of these
Ans. (C) Well
08. Identify which is not adverb from among following underlined words in the given sentences.
She sings pretty well. I do my work carefully. He is wise enough to understand the trick. The flower smells sweet.
(A) Enough
(B) Carefully
(C) Pretty
(D) Sweet
Ans. (D) Sweet
09. Given below is the sentence in active voice. Choose the correct sentence given in passive voice among the alternatives.
Ought you not to reveal the truth now?
(A) Should the truth need not be revealed by you then?
(B) Ought the truth not to be revealed by you now?
(C) Ought the truth not to be revealed by you then?
(D) Ought the truth need not be revealed by you now?
Ans. (B) Ought the truth not to be revealed by you now?
10. Choose the incorrect sentence among the following:
(A) By next December we shall have been living here for six years.
(B) I have been writing four letters since morning.
(C) At this time tomorrow we will be watching a film
(D) I have been standing here for hours.
Ans. (A) By next December we shall have been living here for six years.
11. Given below are four substitutions for the underlined part. Choose the correct alternative to make the sentence grammatically correct.
Make haste lest you should not be caught in the storm:
(A) You might be
(B) You could be
(C) You should be
(D) Otherwise you can be
Ans. (A) You might be
DIRECTIONS: In this section each sentence has three parts indicated by (A), (B) and (C). Read each sentence to find out whether there is an error. If you find an error in any one of the parts (A),
(B) or (C), indicate your response by blackening the letter related to that part in the OMR sheet provided. If a sentence has no error, indicate this by blackening "D", which stands for no error. Errors may relate to grammar or usage.
12. Seldom we have been treated (A)/ in such a rude manner (B)/ by the police Personnel (C)/ No error (D)
Ans. No error
13. Some men are born great (A)/ some achieve greatness (B)/ and some had greatness thrust on them (C)/ No error (D)
Ans. some had greatness thrust on them
14. Beware of (A)/ a fair-weather friend (B)/ who is neither a friend need nor a friend indeed (C)/ No error (D).
Ans. who is neither a friend need nor a friend indeed
15. What shall be correct combination of two simple sentences into a complex sentence by using a noun clause?
Can you come? He asked me:
(A) He asked me to come
(B) He asked me if I could come
(C) He asked me whether I can come?
(D) He asked me whether I might come
Ans. (B) He asked me if I could come
16. Identify the noun used for both the singular and plural forms:
(A) Sheep
(B) Wolf
(C) Axis
(D) Goat
Ans. (A) Sheep
DIRECTIONS: Fill the correct article
17. Ramesh's father is ____ M.P and Suresh's father is _______Member of Legislative Assembly.
(A) An, A
(B) An, An
(C) A, An
(D) An, The
Ans. (A) An, A
DIRECTIONS: Complete the sentence correctly
18. Ram is so proud of his position that he _________ his subordinates.
(A) Looks for
(B) Looks into
(C) Looks down upon
(D) Looks after
Ans. (C) Looks down upon
DIRECTIONS: Complete the sentence with correct adversative conjunction
19. Shruti ran fast ______ she missed the train
(A) But
(B) Nevertheless
(C) Yet
(D) Nonetheless
Ans. (A) But
DIRECTIONS: Choose the word which is similar or most nearly similar in meaning as the word given in bold and mark your answer.
20. Sagacious:
(A) Clever
(B) Strong
(C) Obstinate
(D) Ridiculous
Ans. (A) Clever
21. Curtly:
(A) Sadly
(B) Rudely
(C) Frankly
(D) Happily
Ans. (B) Rudely
22. Abbreviate:
(A) Extend
(B) Abridge
(C) Conclude
(D) Thwart
Ans. (B) Abridge
DIRECTIONS: Choose the word which is opposite in meaning as the word given in bold and mark your answer.
23. Indifference:
(A) Responsiveness
(B) Different
(C) Neutrality
(D) Sophistication
Ans. (A) Responsiveness
24. Exceptional:
(A) Formal
(B) Neutral
(C) Normal
(D) Traditional
Ans. (C) Normal
25. Tragic:
(A) Humorous
(B) Funny
(C) Light
(D) Comic
Ans. (D) Comic
Postal Assistant Exam 2013 question paper and answer key for Part B Mathematics (Phase 1- 21 April 2013)
01. Tarun travels a distance of 24 km at 6 km/hr, another distance of 24 km at 8 km/hr and a third distance of 24 km at 12 km/hr. His average speed for the whole journey (in km/hr) is
(A) 8 2/3
(B) 8
(C) 9
(D) 7
Ans. (B) 8
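Working: total distance = 72 km; total time = 24/6 + 24/8 + 24/12 = 4 + 3 + 2 = 9 hr; average speed = 72/9 = 8 km/hr.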
02. Rohan lends 10000 in four parts. If he gets 8% interest on 2000, 7 1/2% on 4000 and 8 1/2% on 1400, what percent of interest must he get for the remainder if his average annual interest is 8.13%?
(A) 10%
(B) 9%
(C) 8%
(D) None of these
Ans. (B) 9%
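Working: the remainder is 10000 - 2000 - 4000 - 1400 = 2600; then 160 + 300 + 119 + 26x = 813 (8.13% of 10000), so 26x = 234 and x = 9%.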
03. A solid piece of iron in the form of a cuboid of dimensions (49 cm x 33 cm x 24 cm) is melted and moulded to form a solid sphere. The radius of the sphere is
(A) 23 cm
(B) 21 cm
(C) 19cm
(D) 25cm
Ans. (B) 21 cm
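Working: volume = 49 x 33 x 24 = 38808 cm³ = (4/3)(22/7)r³, so r³ = 9261 and r = 21 cm.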
04. Twelve solid spheres of the same size are made by melting a solid metallic cylinder of base diameter 2 cm and height 16cm. The diameter of each sphere is:
(A) 4cm
(B) 3cm
(C) 2 cm
(D) 6 cm
Ans. (C) 2 cm
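Working: cylinder volume = π(1)²(16) = 16π cm³; each sphere gets 16π/12 = (4/3)πr³, so r = 1 cm and the diameter is 2 cm.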
05. Perimeter of a circle is equal to the perimeter of a square whose area is 484 cm². What is the radius of the circle?
(A) 14cm
(B) 12 cm
(C) 7 cm
(D) 16 cm
Ans. (A) 14cm
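Working: side = √484 = 22 cm, so the perimeter is 88 cm; 2πr = 88 gives r = 88 x 7/44 = 14 cm.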
06. Puneeta borrowed a certain sum from Reena for two years at simple interest. Puneeta lent this sum to Venu at the same rate for two years at compound interest. At the end of two years she received
110 as compound interest but paid 100 as simple interest. Find the sum and rate of interest:
(A) 250, rate 20% per annum
(B) 250, rate 25% per annum
(C) 250, rate 10% per annum
(D) None of these
Ans. (A) 250, rate 20% per annum
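Working: simple interest is 50 per year; the extra 10 of compound interest is one year's interest on 50, so the rate is 20%; then 20% of the sum = 50, giving 250.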
07. 13 chairs and 5 tables were bought for 8280. If the average cost of a table is 1227, what is the average cost of a chair?
(A) 2145
(B) 165
(C) 175
(D) None of these
Ans. (B) 165
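Working: tables cost 5 x 1227 = 6135; chairs cost 8280 - 6135 = 2145; average = 2145/13 = 165.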
08. A man goes uphill with an average speed of 24 kmph and comes down with an average speed of 36 kmph. The distance travelled being the same in both cases, the average speed (in km/hr) for the
entire journey is:
(A) 30.8
(B) 32.8
(C) 28.8
(D) None of these
Ans. (C) 28.8
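Working: for equal distances the average speed is 2 x 24 x 36/(24 + 36) = 1728/60 = 28.8 km/hr.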
09. 40% of the employees of a certain company are men, and 75% of the men earn more than 25000 per year. If 45% of the company's employees earn more than 25000 per year, what fraction of the women
employed by the company earn 25000 per year or less?
(A) 1/4
(B) 3/4
(C) 1/3
(D) 2/11
Ans. (B) 3/4
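Working: men earning over 25000 are 75% of 40% = 30% of all employees, so women earning over 25000 are 45% - 30% = 15%; women are 60% of employees, so (60 - 15)/60 = 3/4 of the women earn 25000 or less.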
10. A sum of money invested at compound interest amounts in 3 years to 800 and in 4 years to 840 what is the percentage rate of interest?
(A) 6%
(B) 5%
(C) 4%
(D) 3%
Ans. (B) 5%
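Working: the fourth year's interest is 840 - 800 = 40 on 800, i.e. 5%.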
11. If the difference between simple and compound interest on some principal amount at 20% per annum for three years is 48, then the principal amount is:
(A) 390
(B) 375
(C) 450
(D) None of these
Ans. (B) 375
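Working: over three years CI - SI = P[(1.2)³ - 1 - 3(0.2)] = 0.128P = 48, so P = 375.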
12. A rational number between 1/2 and 3/5 is:
(A) 4/7
(B) 3/5
(C) 2/5
(D) None of these
Ans. (A) 4/7
13. The curved surface of a cylinder is 264m². Its volume is 924m³. The height of the cylinder must be:
(A) 8 m
(B) 6 m
(C) 4 m
(D) None of these
Ans. (B) 6 m
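Working: volume ÷ curved surface = (πr²h)/(2πrh) = r/2 = 924/264, so r = 7 m; then h = 264/(2 x (22/7) x 7) = 6 m.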
14. If by selling 110 apples, the cost price of 120 apples is realized. The gain % is?
(A) 10 10/11%
(B) 9 1/9%
(C) 11 1/9%
(D) 9 1/11%
Ans. (D) 9 1/11%
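Working: the gain is the cost price of 10 apples on the cost of 110, i.e. 10/110 = 1/11 = 9 1/11%.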
15. If 8 men or 12 women can do a piece of work in 25 days, in how many days can the same work be done by 6 men and 11 women?
(A) 15 days
(B) 13 1/2 days
(C) 12 days
(D) 18 days
Ans. (A) 15 days
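Working: 8 men = 12 women, so 1 man = 1.5 women; 6 men + 11 women = 20 women; the work is 12 x 25 = 300 woman-days, so 300/20 = 15 days.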
16. Pujara has a certain average of runs for his 8 matches in Border Gavaskar Trophy. In the ninth match he scores 100 runs and thereby increases his average by 9 runs. His new average of runs is:
(A) 28
(B) 24
(C) 20
(D) 32
Ans. (A) 28
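Working: if the old average is a, then 8a + 100 = 9(a + 9), so a = 19 and the new average is 28.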
17. It costs 1 to photocopy a sheet of paper. However, 2% discount is allowed on all photocopies done after first 1000 sheets. How much will it cost to photocopy 5000 sheets of paper?
(A) 3920
(B) 4900
(C) 4920
(D) 3980
Ans. (C) 4920
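Working: 1000 x 1 + 4000 x 0.98 = 1000 + 3920 = 4920.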
DIRECTIONS (Question No. 18 & 19): Answer the following questions on the basis of the information given below.
i. Trains A and B are travelling on the same route heading towards the same destination. Train B has already covered a distance of 220km before the train A started.
ii. The two trains meet each other 11 hours after the start of train A.
iii. Had the trains been travelling towards each other (from a distance of 220 km) they would have met after one hour.
18. What is the speed of train 'B' in kmph:
(A) 116
(B) 180
(C) 100
(D) None of these
Ans. (C) 100
19. What is the speed of train 'A' in kmph:
(A) 118
(B) 80.5
(C) 102
(D) None of these
Ans. (D) None of these
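Working: in the same direction, 11(A - B) = 220, so A - B = 20 kmph; towards each other, A + B = 220 kmph; hence B = 100 kmph and A = 120 kmph (not among the options for question 19).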
DIRECTIONS: What approximate value (you are not expected to calculate the exact value) will come in place of the question mark (?) in the following equation?
20. 384.996 x 15.001 + 44.99 = ?
(A) 5080
(B) 5820
(C) 5280
(D) 5420
Ans. (B) 5820
21. Three men or eight boys can do a piece of work in 17 days. How many days will two men and six boys together take to finish the same work?
(A) 12 days
(B) 17 days
(C) 11 days
(D) None of these
Ans. (A) 12 days
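Working: 3 men = 8 boys, so 2 men = 16/3 boys; 2 men + 6 boys = 34/3 boys; the work is 8 x 17 = 136 boy-days, so 136 ÷ (34/3) = 12 days.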
22. Vaibhav got a monthly increment of 12 percent of Pushpa's monthly salary. Pushpa's monthly salary is 7800. Vaibhav's monthly salary before the increment was 6500. What amount will he earn in five
months after his increment?
(A) 37,180
(B) 35,180
(C) 36,180
(D) None of these
Ans. (A) 37,180
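Working: the increment is 12% of 7800 = 936, so the new salary is 6500 + 936 = 7436 per month; over five months, 5 x 7436 = 37,180.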
23. The ratio of two numbers is 4:5. When the first is increased by 20% and the second is decreased by 20% the ratio of the resulting numbers is:
(A) 5:6
(B) 5:4
(C) 6:5
(D) 4:5
Ans. (C) 6:5
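Working: (4 x 1.2) : (5 x 0.8) = 4.8 : 4 = 6 : 5.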
24. If the numerator of a fraction is increased by 12% and its denominator decreased by 2%, the value of the fraction becomes 6/7. Thus, the original fraction is:
(A) 3/4
(B) 4/3
(C) 2/3
(D) None of these
Ans. (A) 3/4
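Working: (1.12x)/(0.98y) = 6/7, so x/y = (6 x 0.98)/(7 x 1.12) = 5.88/7.84 = 3/4.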
25. What approximate value will come in place of the question mark (?) in the following equation?
239.99 / 8.0001 x 9.99 = ?
(A) 350
(B) 260
(C) 400
(D) 300
Ans. (D) 300
PA/SA Exam 2013-Phase 1: Questions & Answer Key for Part D Reasoning and Analytical Ability
DIRECTIONS (Question No. 01 to 04): Read the following passage carefully and answer the questions given below.
A group of seven friends Anil, Vinod, Sumit, Dilip, Indra, Firoz and Gaurav work as Engineer, Accountant, IT Officer, Technician, Clerk, Physiotherapist and Research Analyst for companies L, M, N, P,
Q, R and S but not necessarily in the same order. Sumit works for company N and is neither a Research Analyst nor a clerk. Indra is an IT officer and works for company R. Anil work as Physiotherapist
and does not work for company L or Q. The one who is an Accountant works for company M. The one who works for company L works as a technician. Firoz works for company Q. Gaurav works for company P as
Research Analyst. Dilip is not an accountant.
01. Who amongst the following works as accountant?
(A) Firoz
(B) Anil
(C) Vinod
(D) Dilip
Ans. (C) Vinod
02. What is the profession of Sumit?
(A) Engineer
(B) Clerk
(C) Technician
(D) None of these
Ans. (A) Engineer
03. For which company does Dilip work?
(A) L
(B) R
(C) Q
(D) N
Ans. (A) L
04. Which of the following combinations of person, profession and company is correct?
(A) Vinod - Accountant - R
(B) Firoz - Clerk - Q
(C) Anil - Physiotherapist - M
(D) None of these
Ans. (B) Firoz - Clerk - Q
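Working: Gaurav (Research Analyst, P) and Indra (IT Officer, R) are given. Anil, the Physiotherapist, cannot work for L, Q, M (the Accountant's company), N (Sumit's) or P (Gaurav's), so he works for S. That leaves L and M for Vinod and Dilip; since Dilip is not the Accountant, Vinod is the Accountant at M and Dilip the Technician at L. Sumit, being neither Research Analyst nor Clerk, is the Engineer, and Firoz (Q) is the Clerk.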
05. There are five books A, B, C, D and E. C is above D. E is below A. D is above A; and B is below E. Which is the bottom most book?
(A) C
(B) E
(C) B
(D) D
Ans. (C) B
06. The second and the third digits of the following numbers are interchanged and each interchanged number is written in reverse order. Now, if they are arranged in the ascending order, which of the
original numbers will be fourth in that order?
592, 987, 176, 468, 719, 398
(A) 987
(B) 176
(C) 468
(D) 719
Ans. (A) 987
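Working: after interchanging the second and third digits and reversing, the numbers become 925, 879, 761, 684, 197 and 983; in ascending order (197, 684, 761, 879, 925, 983) the fourth is 879, which came from 987.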
07. Rajesh walked 3 km towards East. Then he turned to his left and walked 6 km. He again turned to his right and walked 3 km. After that he turned to his right and walked 9 km. At last he turned to
his right and walked 6 km. At what distance is he from the starting point and in which direction?
(A) 3 km South
(B) 1 km North
(C) 2 km South
(D) 3 km North
Ans. (A) 3 km South
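Working: taking east as +x and north as +y, the walk runs (0,0) → (3,0) → (3,6) → (6,6) → (6,-3) → (0,-3), which is 3 km due south of the starting point.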
08. In the following number series only one number is wrong. Find out the wrong number.
1, 4, 27, 256, 3125, 46669:
(A) 27
(B) 256
(C) 3125
(D) 46669
Ans. (D) 46669
DIRECTIONS: (Question No. 09 & 10) Study the following information carefully and answer the questions given below.
Five persons are travelling in a train - A, B, C, D and E. A is the mother of C, who is the wife of E. D is the brother of A, and B is the husband of A.
09. How is B related to E?
(A) Brother in law
(B) Mother in law
(C) Father
(D) Father in law
Ans. (D) Father in law
10. How is A related to E?
(A) Mother in law
(B) Mother
(C) Sister
(D) Niece
Ans. (A) Mother in law
11. If A = 26, SUN = 27 then CAT = ?
(A) 58
(B) 57
(C) 24
(D) None of these
Ans. (B) 57
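Working: the letters are numbered in reverse (A = 26, ..., Z = 1), so SUN = 8 + 6 + 13 = 27 and CAT = 24 + 26 + 7 = 57.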
12. If A = 1, FAT = 27 then FAINT = ?
(A) 50
(B) 42
(C) 44
(D) None of these
Ans. (A) 50
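Working: with A = 1, FAT = 6 + 1 + 20 = 27 and FAINT = 6 + 1 + 9 + 14 + 20 = 50.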
13. There are some girls and buffalos at a place. If total number of heads is 15 and total number of legs is 46, then how many girls and how many buffalos are there?
(A) 7 girls and 8 buffalos
(B) 9 girls and 6 buffalos
(C) 8 girls and 7 buffalos
(D) 6 girls and 9 buffalos
Ans. (A) 7 girls and 8 buffalos
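Working: with g girls and b buffalos, g + b = 15 and 2g + 4b = 46, so 2b = 16, giving b = 8 and g = 7.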
14. In the following Question pick the choice that establishes the logical relationship:
(A) FG
(B) EC
(C) DG
(D) GD
Ans. (C) DG
15. 8 boys sat in a row. Raju is to the immediate right of Rakesh and to the immediate left of Shekhar. Shyam is sitting after 3 boys on the right of Shekhar. How many boys were there between Raju and Shyam:
(A) 5
(B) 4
(C) 3
(D) None of these
Ans. (B) 4
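Working: the order Rakesh, Raju, Shekhar is fixed, and Shyam sits three boys to Shekhar's right; between Raju and Shyam sit Shekhar and those three boys, i.e. 4 boys.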
16. Given below are two positions of the same dice.
When 2 is at the bottom which number is at the top:
(A) 1
(B) 4
(C) 5
(D) 6
Ans. (B) 4
17. In a class examination the average marks in mathematics by Girls was 80% and by Boys 60%. It is logical to conclude:
(A) All boys are weaker at maths than girls
(B) Some girls are better than boys in maths
(C) All girls are more intelligent than boys
(D) None of these
Ans. (B) Some girls are better than boys in maths
18. Pick the odd one out.
(A) Leopard
(B) Lion
(C) Tiger
(D) Wolf
Ans. (D) Wolf
19. In a certain code language 'PROPORTION' is written as 'PORPRONOIT'. How is 'CONVERSION' written in that code language?
(A) VNOCRENOIS
(B) VNCORENOIS
(C) VNOCERONIS
(D) VNOCREIONS
Ans. (A) VNOCRENOIS
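Working: the first four, middle two and last four letters are each reversed as blocks: PROP-OR-TION → PORP-RO-NOIT; likewise CONV-ER-SION → VNOC-RE-NOIS.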
20. If 'A B' means 'A is the father of B', 'A @ B' means 'A is the mother of B', 'A ! B' means 'A is the wife of B', then which of the following means 'T is the grandmother of U'?
(A) T@R! S!U
(B) T@S U!y
(C) T@R y!U
(D) None of these
Ans. (B) T@S U!y
21. Five students participated in an examination and each scored different marks. Naina scored higher than Meena. Kamla scored lower than Praveen but higher than Naina. Anuj's score was between Meena
and Naina. Which of the following pairs represents the highest and lowest scores respectively?
(A) Praveen Anuj
(B) Naina Praveen
(C) Praveen Naina
(D) Praveen Meena
Ans. (D) Praveen Meena
22. A set of figures carrying certain numbers is given.
Assuming that the numbers in each figure follow a similar pattern, the missing number (?) is:
(A) 49
(B) 53
(C) 81
(D) 62
Ans. (B) 53
23. How many triangles does the following figure contain?
(A) 9
(B) 13
(C) 11
(D) None of these
Ans. (B) 13
DIRECTIONS (Question No. 24 & 25): Seven players were available to be sent to the Sydney Olympics for taking part in events, but the Government approved sending only four. Ram, Shyam, Kumar, Sita,
Geeta, Rita and Anita are the possibles. Two males and two females were to go. Due to likes and dislikes, Shyam cannot go if Sita goes, Kumar cannot go if Anita goes, and Sita cannot go if Rita goes.
24. If Rita is selected and Shyam is rejected, the team will consist of:
(A) Ram, Kumar, Geeta, Rita
(B) Ram, Kumar, Anita, Rita
(C) Ram, Kumar, Sita, Rita
(D) Ram, Geeta, Anita, Rita
Ans. (A) Ram, Kumar, Geeta, Rita
25. If Sita goes, which other players are also on the team:
(A) Ram, Kumar, Geeta
(B) Ram, Kumar, Rita
(C) Ram, Shyam, Rita
(D) Ram, Kumar, Anita
Ans. (A) Ram, Kumar, Geeta
A Higher-Order Generalized Singular Value Decomposition for Comparison of Global mRNA Expression from Multiple Organisms
PLoS One. 2011; 6(12): e28072.
Dongxiao Zhu, Editor
The number of high-dimensional datasets recording multiple aspects of a single phenomenon is increasing in many areas of science, accompanied by a need for mathematical frameworks that can compare
multiple large-scale matrices with different row dimensions. The only such framework to date, the generalized singular value decomposition (GSVD), is limited to two matrices. We mathematically define
a higher-order GSVD (HO GSVD) for N≥2 matrices, each exactly factored as D[i] = U[i]Σ[i]V^T, where V, identical in all factorizations, is obtained from the eigensystem SV = VΛ of the arithmetic mean
S of all pairwise quotients A[i]A[j]^(-1) of the matrices A[i] = D[i]^T D[i], i≠j. We prove that this decomposition extends to higher orders almost all of the mathematical properties of the GSVD. The
matrix S is nondefective with V and Λ real. Its eigenvalues satisfy λ[k]≥1. Equality holds if and only if the corresponding eigenvector v[k] is a right basis vector of equal significance in all
matrices D[i] and D[j], that is σ[i,k]/σ[j,k] = 1 for all i and j, and the corresponding left basis vector u[i,k] is orthogonal to all other vectors in U[i] for all i. The eigenvalues λ[k] = 1,
therefore, define the "common HO GSVD subspace." We illustrate the HO GSVD with a comparison of global mRNA expression from S. pombe, S. cerevisiae and human. Unlike existing algorithms, a mapping among the genes of these
disparate organisms is not required. We find that the approximately common HO GSVD subspace represents the cell-cycle mRNA expression oscillations, which are similar among the datasets. Simultaneous
reconstruction in the common subspace, therefore, removes the experimental artifacts, which are dissimilar, from the datasets. In the simultaneous sequence-independent classification of the genes of
the three organisms in this common subspace, genes of highly conserved sequences but significantly different cell-cycle peak times are correctly classified.
In many areas of science, especially in biotechnology, the number of high-dimensional datasets recording multiple aspects of a single phenomenon is increasing. This is accompanied by a fundamental
need for mathematical frameworks that can compare multiple large-scale matrices with different row dimensions. For example, comparative analyses of global mRNA expression from multiple model
organisms promise to enhance fundamental understanding of the universality and specialization of molecular biological mechanisms, and may prove useful in medical diagnosis, treatment and drug design
[1]. Existing algorithms limit analyses to subsets of homologous genes among the different organisms, effectively introducing into the analysis the assumption that sequence and functional
similarities are equivalent (e.g., [2]). However, it is well known that this assumption does not always hold, for example, in cases of nonorthologous gene displacement, when nonorthologous proteins
in different organisms fulfill the same function [3]. For sequence-independent comparisons, mathematical frameworks are required that can distinguish and separate the similar from the dissimilar
among multiple large-scale datasets tabulated as matrices with different row dimensions, corresponding to the different sets of genes of the different organisms. The only such framework to date, the
generalized singular value decomposition (GSVD) [4]–[7], is limited to two matrices.
It was shown that the GSVD provides a mathematical framework for sequence-independent comparative modeling of DNA microarray data from two organisms, where the mathematical variables and operations
represent biological reality [7], [8]. The variables, significant subspaces that are common to both or exclusive to either one of the datasets, correlate with cellular programs that are conserved in
both or unique to either one of the organisms, respectively. The operation of reconstruction in the subspaces common to both datasets outlines the biological similarity in the regulation of the
cellular programs that are conserved across the species. Reconstruction in the common and exclusive subspaces of either dataset outlines the differential regulation of the conserved relative to the
unique programs in the corresponding organism. Recent experimental results [9] verify a computationally predicted genome-wide mode of regulation that correlates DNA replication origin activity with
mRNA expression [10], [11], demonstrating that GSVD modeling of DNA microarray data can be used to correctly predict previously unknown cellular mechanisms.
We now define a higher-order GSVD (HO GSVD) for the comparison of N≥2 matrices with different row dimensions and the same column dimension.
To clarify our choice of [5], the matrix Appendix S1). We observe that this Appendix S1), and therefore, as Paige and Saunders showed [6], can be computed in a stable way. We also note that in the
GSVD, the matrix
We prove that [7], we interpret the
Recent research showed that several higher-order generalizations are possible for a given matrix decomposition, each preserving some but not all of the properties of the matrix decomposition [12]–
[14] (see also Theorem S6 and Conjecture S1 in Appendix S1). Our new HO GSVD extends to higher orders all of the mathematical properties of the GSVD except for complete column-wise orthogonality of
the left basis vectors that form the matrix
We illustrate the HO GSVD with a comparison of cell-cycle mRNA expression from S. pombe [15], [16], S. cerevisiae [17] and human [18]. Unlike existing algorithms, a mapping among the genes of these
disparate organisms is not required (Section 2 in Appendix S1). We find that the common HO GSVD subspace represents the cell-cycle mRNA expression oscillations, which are similar among the datasets.
Simultaneous reconstruction in this common subspace, therefore, removes the experimental artifacts, which are dissimilar, from the datasets. Simultaneous sequence-independent classification of the
genes of the three organisms in the common subspace is in agreement with previous classifications into cell-cycle phases [19]. Notably, genes of highly conserved sequences across the three organisms
[20], [21] but significantly different cell-cycle peak times, such as genes from the ABC transporter superfamily [22]–[28], phospholipase B-encoding genes [29], [30] and even the B cyclin-encoding
genes [31], [32], are correctly classified.
HO GSVD Construction
Suppose we have a set of
where each [7], in the HO GSVD comparison of global mRNA expression from Figure 1 and Section 2 in Appendix S1). We obtain
Higher-order generalized singular value decomposition (HO GSVD).
HO GSVD Interpretation
In this construction, the rows of each of the Appendix S1). As in our GSVD comparison of two matrices, we interpret the [7]. A ratio of
We prove that an eigenvalue of
It follows that each of the right basis vectors [6], we note that the common HO GSVD subspace can also be computed in a stable way by computing all pairwise GSVD factorizations of the matrices [33].
Such a formulation may lead to a stable numerical algorithm for computing the HO GSVD, and possibly also to a higher-order general Gauss-Markov linear statistical model [34]–[36].
We show, in a comparison of
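For readers who wish to experiment with the decomposition, the construction described above can be sketched in a few lines of NumPy. This is a minimal illustrative sketch following the definitions
given in the abstract — not the authors' Mathematica implementation (Mathematica Notebooks S1 and S2) — and it assumes each D[i] has full column rank, so that every A[j] = D[j]^T D[j] is invertible:

```python
import numpy as np

def ho_gsvd(D):
    # D: list of N matrices D[i], each m_i x n, assumed full column rank
    N = len(D)
    A = [Di.T @ Di for Di in D]                  # A[i] = D[i]^T D[i]
    S = sum(A[i] @ np.linalg.inv(A[j])           # arithmetic mean of all
            for i in range(N) for j in range(N)  # pairwise quotients
            if i != j) / (N * (N - 1))           # A[i] A[j]^-1, i != j
    lam, V = np.linalg.eig(S)                    # eigensystem S V = V Lambda
    lam, V = lam.real, V.real                    # real by Theorem 1
    B = [Di @ np.linalg.inv(V).T for Di in D]    # so that D[i] = B[i] V^T
    sig = [np.linalg.norm(Bi, axis=0) for Bi in B]
    U = [Bi / s for Bi, s in zip(B, sig)]        # unit left basis vectors
    Sigma = [np.diag(s) for s in sig]            # D[i] = U[i] Sigma[i] V^T
    return U, Sigma, V, lam
```

Eigenvalues lam[k] equal to one (up to numerical precision) mark the columns of V that span the common HO GSVD subspace. In exact arithmetic each D[i] = U[i]Σ[i]V^T is recovered exactly; as noted in the text, a numerically stable route would instead go through the pairwise GSVD factorizations of the matrices [6], [33].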
HO GSVD Mathematical Properties
Theorem 1
The matrix S is nondefective (it has n independent eigenvectors) and its eigensystem is real.
Proof. From Equation (2) it follows that
and the eigenvectors of
Let the SVD of the matrices
Since the matrices
and the eigenvalues of
A sum of real, symmetric and positive definite matrices,
is real with
Thus, from Equation (5),
Theorem 2
The eigenvalues of S satisfy λ[k] ≥ 1.
Proof. Following Equation (9), asserting that the eigenvalues of
From Equations (6) and (7), the eigenvalues of
under the constraint that
where [37] (see also [4], [34], [38]) for the real nonzero vectors
With the constraint of Equation (11), which requires the sum of the
for all
Theorem 3
The common HO GSVD subspace. An eigenvalue of S satisfies λ[k] = 1 if and only if the corresponding eigenvector v[k] is a right basis vector of equal significance in all D[i] and D[j], that is,
σ[i,k]/σ[j,k] = 1 for all i and j, and the corresponding left basis vector u[i,k] is orthonormal to all other vectors in U[i] for all i. The "common HO GSVD subspace" of the N matrices D[i] is,
therefore, the subspace spanned by the right basis vectors v[k] corresponding to the eigenvalues of S that satisfy λ[k] = 1.
Proof. Without loss of generality, let [37], where, from Equation (13), the corresponding eigenvalue equals
Following Equations (14) and (15), where
with zeroes in the
The corresponding higher-order generalized singular values are
Corollary 1
An eigenvalue of S satisfies λ[k] = 1 if and only if the corresponding right basis vector v[k] is a generalized singular vector of all pairwise GSVD factorizations of the matrices D[i] and D[j], with
equal corresponding generalized singular values for all i and j.
Proof. From Equations (12) and (13), and since the pairwise quotients
We prove (Theorems S1–S5 in Appendix S1) that in the case of
Note that since the GSVD can be computed in a stable way [6], the common HO GSVD subspace we define (Theorem 3) can also be computed in a stable way by computing all pairwise GSVD factorizations of
the matrices [33]. Such a formulation may lead to a stable numerical algorithm for computing the HO GSVD, and possibly also to a higher-order general Gauss-Markov linear statistical model [34]–[36].
HO GSVD Comparison of Global mRNA Expression from Three Organisms
Consider now the HO GSVD comparative analysis of global mRNA expression datasets from S. pombe, S. cerevisiae and human (Section 2.1 in Appendix S1, Mathematica Notebooks S1 and S2, and Datasets
S1, S2 and S3). The datasets are tabulated as matrices of 3167 S. pombe genes, 4772 S. cerevisiae genes or 13,068 human genes, each across 17 arrays (Figure 1).
Following Theorem 3, the approximately common HO GSVD subspace of the three datasets is spanned by the five genelets S. pombe, S. cerevisiae and human datasets, respectively (Figure 2 a and b). The
five corresponding arraylets in each dataset are Appendix S1).
Genelets or right basis vectors.
Common HO GSVD Subspace Represents Similar Cell-Cycle Oscillations
The expression variations across time of the five genelets that span the approximately common HO GSVD subspace fit normalized cosine functions of two periods, superimposed on time-invariant
expression (Figure 2 c and d). Consistently, the corresponding organism-specific arraylets are enriched [39] in overexpressed or underexpressed organism-specific cell cycle-regulated genes, with 24
of the 30 P-values Table 1 and Section 2.2 in Appendix S1). For example, the three 17th arraylets, which correspond to the 0-phase 17th genelet, are enriched in overexpressed G2 S. pombe genes, G2/M
and M/G1 S. cerevisiae genes and S and G2 human genes, respectively, representing the cell-cycle checkpoints in which the three cultures are initially synchronized.
Arraylets or left basis vectors.
Simultaneous sequence-independent reconstruction and classification of the three datasets in the common subspace outline cell-cycle progression in time and across the genes in the three organisms
(Sections 2.3 and 2.4 in Appendix S1). Projecting the expression of the 17 arrays of either organism from the corresponding five-dimensional arraylets subspace onto the two-dimensional subspace that
approximates it (Figure S4 in Appendix S1), Figure 3 a–c). In these two-dimensional subspaces, the angular order of the arrays of either organism describes cell-cycle progression in time through
approximately two cell-cycle periods, from the initial cell-cycle phase and back to that initial phase twice. Projecting the expression of the genes, S. pombe genes classified as cell
cycle-regulated, 554 of the 641 S. cerevisiae cell-cycle genes, and 632 of the 787 human cell-cycle genes (Figure 3 d–f). Simultaneous classification of the genes of either organism into cell-cycle
phases according to their angular order in these two-dimensional subspaces is consistent with the classification of the arrays, and is in good agreement with the previous classifications of the genes
(Figure 3 g–i). With all 3167 S. pombe, 4772 S. cerevisiae and 13,068 human genes sorted, the expression variations of the five arraylets from each organism approximately fit one-period cosines, with
the initial phase of each arraylet (Figures S5, S6, S7 in Appendix S1) similar to that of its corresponding genelet (Figure 2). The global mRNA expression of each organism, reconstructed in the
common HO GSVD subspace, approximately fits a traveling wave, oscillating across time and across the genes.
Common HO GSVD subspace represents similar cell-cycle oscillations.
Note also that simultaneous reconstruction in the common HO GSVD subspace removes the experimental artifacts and batch effects, which are dissimilar, from the three datasets. Consider, for example,
the second genelet. With S. pombe, S. cerevisiae and human datasets, respectively, this genelet is almost exclusive to the S. cerevisiae dataset. This genelet is anticorrelated with a time decaying
pattern of expression (Figure 2a). Consistently, the corresponding S. cerevisiae-specific arraylet is enriched in underexpressed S. cerevisiae genes that were classified as up-regulated by the S.
cerevisiae synchronizing agent, the P-value S. cerevisiae-approximately exclusive pattern of expression variation from the three datasets.
Simultaneous HO GSVD Classification of Homologous Genes of Different Cell-Cycle Peak Times
Notably, in the simultaneous sequence-independent classification of the genes of the three organisms in the common subspace, genes of significantly different cell-cycle peak times [19] but highly
conserved sequences [20], [21] are correctly classified (Section 2.5 in Appendix S1).
For example, consider the G2 S. pombe gene BFR1 (Figure 4a), which belongs to the evolutionarily highly conserved ATP-binding cassette (ABC) transporter superfamily [22]. The closest homologs of BFR1
in our S. pombe, S. cerevisiae and human datasets are the S. cerevisiae genes SNQ2, PDR5, PDR15 and PDR10 (Table S1a in Appendix S1). The expression of SNQ2 and PDR5 is known to peak at the S/G2 and
G2/M cell-cycle phases, respectively [17]. However, sequence similarity does not imply similar cell-cycle peak times, and PDR15 and PDR10, the closest homologs of PDR5, are induced during stationary
phase [23], which has been hypothesized to occur in G1, before the Cdc28-defined cell-cycle arrest [24]. Consistently, we find PDR15 and PDR10 at the M/G1 to G1 transition, antipodal to (i.e., half a
cell-cycle period apart from) SNQ2 and PDR5, which are projected onto S/G2 and G2/M, respectively (Figure 4b). We also find the transcription factor PDR1 at S/G2, its known cell-cycle peak time,
adjacent to SNQ2 and PDR5, which it positively regulates and might be regulated by, and antipodal to PDR15, which it negatively regulates [25]–[28].
Simultaneous HO GSVD classification of homologous genes of different cell-cycle peak times.
Another example is the S. cerevisiae phospholipase B-encoding gene PLB1 [29], which peaks at the cell-cycle phase M/G1 [30]. Its closest homolog in our S. cerevisiae dataset, PLB3, also peaks at M/G1
[17] (Figure 4d). However, among the closest S. pombe and human homologs of PLB1 (Table S1b in Appendix S1), we find the S. pombe genes SPAC977.09c and SPAC1786.02, which expressions peak at the
almost antipodal S. pombe cell-cycle phases S and G2, respectively [19] (Figure 4c).
As a third example, consider the S. pombe G1 B-type cyclin-encoding gene CIG2 [31], [32] (Table S1c in Appendix S1). Its closest S. pombe homolog, CDC13, peaks at M [19] (Figure 4e). The closest
human homologs of CIG2, the cyclins CCNA2 and CCNB2, peak at G2 and G2/M, respectively (Figure 4g). However, while periodicity in mRNA abundance levels through the cell cycle is highly conserved
among members of the cyclin family, the cell-cycle peak times are not necessarily conserved [1]: The closest homologs of CIG2 in our S. cerevisiae dataset, are the G2/M promoter-encoding genes CLB1,2
and CLB3,4, which expressions peak at G2/M and S respectively, and CLB5, which encodes a DNA synthesis promoter, and peaks at G1 (Figure 4f).
We mathematically defined a higher-order GSVD (HO GSVD) for two or more large-scale matrices with different row dimensions and the same column dimension. We proved that our new HO GSVD extends to
higher orders almost all of the mathematical properties of the GSVD: The eigenvalues of
The only property that does not extend to higher orders in general is the complete column-wise orthogonality of the normalized left basis vectors in each factorization. Recent research showed that
several higher-order generalizations are possible for a given matrix decomposition, each preserving some but not all of the properties of the matrix decomposition [12]–[14]. The HO GSVD has the
interesting property of preserving the exactness and diagonality of the matrix GSVD and, in special cases, also partial or even complete column-wise orthogonality. That is, all
The complete column-wise orthogonality of the matrix GSVD [5] enables its stable computation [6]. We showed that each of the right basis vectors that span the common HO GSVD subspace is a generalized
singular vector of all pairwise GSVD factorizations of the matrices [33].
It would be ideal if our procedure reduced to the stable computation of the matrix GSVD when [34]–[36].
It was shown that the GSVD provides a mathematical framework for sequence-independent comparative modeling of DNA microarray data from two organisms, where the mathematical variables and operations
represent experimental or biological reality [7], [8]. The variables, subspaces of significant patterns that are common to both or exclusive to either one of the datasets, correlate with cellular
programs that are conserved in both or unique to either one of the organisms, respectively. The operation of reconstruction in the subspaces common to both datasets outlines the biological similarity
in the regulation of the cellular programs that are conserved across the species. Reconstruction in the common and exclusive subspaces of either dataset outlines the differential regulation of the
conserved relative to the unique programs in the corresponding organism. Recent experimental results [9] verify a computationally predicted genome-wide mode of regulation [10], [11], and demonstrate
that GSVD modeling of DNA microarray data can be used to correctly predict previously unknown cellular mechanisms.
Here we showed, comparing global cell-cycle mRNA expression from the three disparate organisms S. pombe, S. cerevisiae and human, that the HO GSVD provides a sequence-independent comparative
framework for two or more genomic datasets, where the variables and operations represent biological reality. The approximately common HO GSVD subspace represents the cell-cycle mRNA expression
oscillations, which are similar among the datasets. Simultaneous reconstruction in the common subspace removes the experimental artifacts, which are dissimilar, from the datasets. In the simultaneous
sequence-independent classification of the genes of the three organisms in this common subspace, genes of highly conserved sequences but significantly different cell-cycle peak times are correctly
Additional possible applications of our HO GSVD in biotechnology include comparison of multiple genomic datasets, each corresponding to (i) the same experiment repeated multiple times using different
experimental protocols, to separate the biological signal that is similar in all datasets from the dissimilar experimental artifacts; (ii) one of multiple types of genomic information, such as DNA
copy number, DNA methylation and mRNA expression, collected from the same set of samples, e.g., tumor samples, to elucidate the molecular composition of the overall biological signal in these
samples; (iii) one of multiple chromosomes of the same organism, to illustrate the relation, if any, between these chromosomes in terms of their, e.g., mRNA expression in a given set of samples; and
(iv) one of multiple interacting organisms, e.g., in an ecosystem, to illuminate the exchange of biological information in these interactions.
Supporting Information
Appendix S1
A PDF format file, readable by Adobe Acrobat Reader.
Mathematica Notebook S1
Higher-order generalized singular value decomposition (HO GSVD) of global mRNA expression datasets from three different organisms. A Mathematica 5.2 code file, executable by Mathematica 5.2 and
readable by Mathematica Player, freely available at http://www.wolfram.com/products/player/.
Mathematica Notebook S2
HO GSVD of global mRNA expression datasets from three different organisms. A PDF format file, readable by Adobe Acrobat Reader.
Dataset S1
S. pombe global mRNA expression. A tab-delimited text format file, readable by both Mathematica and Microsoft Excel, reproducing the relative mRNA expression levels of S. pombe gene clones from
Rustici et al. [15] with the cell-cycle classifications of Rustici et al. or Oliva et al. [16].
Dataset S2
S. cerevisiae global mRNA expression. A tab-delimited text format file, readable by both Mathematica and Microsoft Excel, reproducing the relative mRNA expression levels of S. cerevisiae open reading
frames (ORFs), or genes, from Spellman et al. [17].
Dataset S3
Human global mRNA expression. A tab-delimited text format file, readable by both Mathematica and Microsoft Excel, reproducing the relative mRNA expression levels from Whitfield et al. [18].
We thank G. H. Golub for introducing us to matrix and tensor computations, and the American Institute of Mathematics in Palo Alto and Stanford University for hosting the 2004 Workshop on Tensor
Decompositions and the 2006 Workshop on Algorithms for Modern Massive Data Sets, respectively, where some of this work was done. We also thank C. H. Lee for technical assistance, R. A. Horn for
helpful discussions of matrix analysis and careful reading of the manuscript, and L. De Lathauwer and A. Goffeau for helpful comments.
Competing Interests: The authors have declared that no competing interests exist.
Funding: This research was supported by Office of Naval Research Grant N00014-02-1-0076 (to MAS), National Science Foundation Grant DMS-1016284 (to CFVL), as well as the Utah Science Technology and
Research (USTAR) Initiative, National Human Genome Research Institute R01 Grant HG-004302 and National Science Foundation CAREER Award DMS-0847173 (to OA). The funders had no role in study design,
data collection and analysis, decision to publish, or preparation of the manuscript.
1. Jensen LJ, Jensen TS, de Lichtenberg U, Brunak S, Bork P. Co-evolution of transcriptional and post-translational cell-cycle regulation. Nature. 2006;443:594–597.
2. Lu Y, Huggins P, Bar-Joseph Z. Cross species analysis of microarray expression data. Bioinformatics. 2009;25:1476–1483.
3. Mushegian AR, Koonin EV. A minimal gene set for cellular life derived by comparison of complete bacterial genomes. Proc Natl Acad Sci USA. 1996;93:10268–10273.
4. Golub GH, Van Loan CF. Matrix Computations. Baltimore: Johns Hopkins University Press, third edition; 1996.
5. Van Loan CF. Generalizing the singular value decomposition. SIAM J Numer Anal. 1976;13:76–83.
6. Paige CC, Saunders MA. Towards a generalized singular value decomposition. SIAM J Numer Anal. 1981;18:398–405.
7. Alter O, Brown PO, Botstein D. Generalized singular value decomposition for comparative analysis of genome-scale expression data sets of two different organisms. Proc Natl Acad Sci USA. 2003;100:3351–3356.
8. Alter O. Discovery of principles of nature from mathematical modeling of DNA microarray data. Proc Natl Acad Sci USA. 2006;103:16063–16064.
9. Omberg L, Meyerson JR, Kobayashi K, Drury LS, Diffley JF, et al. Global effects of DNA replication and DNA replication origin activity on eukaryotic gene expression. Mol Syst Biol. 2009;5:312.
10. Alter O, Golub GH. Integrative analysis of genome-scale data by using pseudoinverse projection predicts novel correlation between DNA replication and RNA transcription. Proc Natl Acad Sci USA. 2004;101:16577–16582.
11. Omberg L, Golub GH, Alter O. A tensor higher-order singular value decomposition for integrative analysis of DNA microarray data from different studies. Proc Natl Acad Sci USA. 2007;104:18371–18376.
12. De Lathauwer L, De Moor B, Vandewalle J. A multilinear singular value decomposition. SIAM J Matrix Anal Appl. 2000;21:1253–1278.
13. Vandewalle J, De Lathauwer L, Comon P. The generalized higher order singular value decomposition and the oriented signal-to-signal ratios of pairs of signal tensors and their use in signal processing. Proc ECCTD'03 - European Conf on Circuit Theory and Design. 2003. pp. I-389–I-392.
14. Alter O, Golub GH. Reconstructing the pathways of a cellular system from genome-scale signals using matrix and tensor computations. Proc Natl Acad Sci USA. 2005;102:17559–17564.
15. Rustici G, Mata J, Kivinen K, Lió P, Penkett CJ, et al. Periodic gene expression program of the fission yeast cell cycle. Nat Genet. 2004;36:809–817.
16. Oliva A, Rosebrock A, Ferrezuelo F, Pyne S, Chen H, et al. The cell cycle-regulated genes of Schizosaccharomyces pombe. PLoS Biol. 2005;3:e225.
17. Spellman PT, Sherlock G, Zhang MQ, Iyer VR, Anders K, et al. Comprehensive identification of cell cycle-regulated genes of the yeast Saccharomyces cerevisiae by microarray hybridization. Mol Biol Cell. 1998;9:3273–3297.
18. Whitfield ML, Sherlock G, Saldanha A, Murray JI, Ball CA, et al. Identification of genes periodically expressed in the human cell cycle and their expression in tumors. Mol Biol Cell. 2002;13:1977–2000.
19. Gauthier NP, Larsen ME, Wernersson R, Brunak S, Jensen TS. Cyclebase.org: version 2.0, an updated comprehensive, multi-species repository of cell cycle experiments and derived analysis results. Nucleic Acids Res. 2010;38:D699–D702.
20. Altschul SF, Gish W, Miller W, Myers EW, Lipman DJ. Basic local alignment search tool. J Mol Biol. 1990;215:403–410.
21. Pruitt KD, Tatusova T, Maglott DR. NCBI reference sequences (RefSeq): a curated nonredundant sequence database of genomes, transcripts and proteins. Nucleic Acids Res. 2007;35:D61–D65.
22. Decottignies A, Goffeau A. Complete inventory of the yeast ABC proteins. Nat Genet. 1997;15:137–145.
23. Mamnun YM, Schüller C, Kuchler K. Expression regulation of the yeast PDR5 ATP-binding cassette (ABC) transporter suggests a role in cellular detoxification during the exponential growth phase. FEBS Lett. 2004;559:111–117.
24. Werner-Washburne M, Braun E, Johnston GC, Singer RA. Stationary phase in the yeast Saccharomyces cerevisiae. Microbiol Rev. 1993;57:383–401.
25. Meyers S, Schauer W, Balzi E, Wagner M, Goffeau A, et al. Interaction of the yeast pleiotropic drug resistance genes PDR1 and PDR5. Curr Genet. 1992;21:431–436.
26. Mahé Y, Parle-McDermott A, Nourani A, Delahodde A, Lamprecht A, et al. The ATP-binding cassette multidrug transporter Snq2 of Saccharomyces cerevisiae: a novel target for the transcription factors Pdr1 and Pdr3. Mol Microbiol. 1996;20:109–117.
27. Wolfger H, Mahé Y, Parle-McDermott A, Delahodde A, Kuchler K. The yeast ATP binding cassette (ABC) protein genes PDR10 and PDR15 are novel targets for the Pdr1 and Pdr3 transcriptional regulators. FEBS Lett. 1997;418:269–274.
28. Hlaváček O, Kučerová H, Harant K, Palková Z, Váchová L. Putative role for ABC multidrug exporters in yeast quorum sensing. FEBS Lett. 2009;583:1107–1113.
29. Lee KS, Patton JL, Fido M, Hines LK, Kohlwein SD, et al. The Saccharomyces cerevisiae PLB1 gene encodes a protein required for lysophospholipase and phospholipase B activity. J Biol Chem. 1994;269:19725–19730.
30. Cho RJ, Campbell MJ, Winzeler EA, Steinmetz L, Conway A, et al. A genome-wide transcriptional analysis of the mitotic cell cycle. Mol Cell. 1998;2:65–73.
31. Martin-Castellanos C, Labib K, Moreno S. B-type cyclins regulate G1 progression in fission yeast in opposition to the p25^rum1 cdk inhibitor. EMBO J. 1996;15:839–849.
32. Fisher DL, Nurse P. A single fission yeast mitotic cyclin B p34^cdc2 kinase promotes both S-phase and mitosis in the absence of G1 cyclins. EMBO J. 1996;15:850–860.
33. Chu MT, Funderlic RE, Golub GH. On a variational formulation of the generalized singular value decomposition. SIAM J Matrix Anal Appl. 1997;18:1082–1092.
34. Rao CR. Linear Statistical Inference and Its Applications. New York, NY: John Wiley & Sons, second edition; 1973.
35. Rao CR. Optimization of functions of matrices with applications to statistical problems. In: Rao PSRS, Sedransk J, editors. W.G. Cochran's Impact on Statistics. New York, NY: John Wiley & Sons; 1984. pp. 191–202.
36. Paige CC. The general linear model and the generalized singular value decomposition. Linear Algebra Appl. 1985;70:269–284.
37. Marshall AW, Olkin L. Matrix versions of the Cauchy and Kantorovich inequalities. Aequationes Mathematicae. 1990;40:89–93.
38. Horn RA, Johnson CR. Matrix Analysis. Cambridge, UK: Cambridge University Press; 1985.
39. Tavazoie S, Hughes JD, Campbell MJ, Cho RJ, Church GM. Systematic determination of genetic network architecture. Nat Genet. 1999;22:281–285.
In basic terms maths, or to give it its full name mathematics, is the study of numbers and the effects that their patterns and systems can have. Maths, as a science, can work as a stand-alone study
or can be applied to a variety of scientific orientated disciplines and involves looking at numbers to solve problems or to come to specific conclusions.
On a day to day basis we all use maths in some form. We may count change in a shop or use a calculator to plan our household budgets. The study of maths that we all go through at school is usually
divided into various disciplines. Students may, for example, study algebra, geometry and arithmetic at various times during their studies. Other options here include mathematical set theory and
A mathematician, on the other hand, will use numbers to help them make deductions, prove abstractions, count and measure. The aim here is to use maths to work towards proving that something is true
or feasible. Maths plays an integral part in various research fields including other sciences, technology, astronomy, engineering, medicine and economics.
In general terms there are two primary types of maths. Applied maths is used to foster research and developments in other fields and sectors. So, for example, maths may be used as part of a space
research project with the aim of contributing to a greater whole. On the other hand pure maths is studied on its own merits and is not necessarily designed for use with other disciplines. Success in
pure maths, however, can have a knock on effect that can then be applied to other disciplines over time.
Fairmount Heights, MD SAT Math Tutor
Find a Fairmount Heights, MD SAT Math Tutor
...I can help you study for the math, verbal, and writing sections to maximize your score. You tell me what area or areas you want to focus on, and I will plan our sessions accordingly to build
your skills and confidence. I can give you advice on how to approach math and reading comprehension problems, how to learn vocabulary efficiently, and how to write effectively.
46 Subjects: including SAT math, reading, Spanish, English
...I studied macro and micro, price and allocation theory, game theory, capital markets, environmental economics and natural resource management, and international trade. I have tutored many
students in these subjects. I received a 170 out of 180 on the LSAT in December 2009, which is the 98th percentile.
21 Subjects: including SAT math, statistics, geometry, algebra 1
John received his Bachelor's Degree in Computer Science from Morehouse College and a Master of Business Administration (MBA) from Georgia Tech with concentrations in Finance and Information
Technology. He has served as a Life Leadership Adviser for the NBMBAA Leaders of Tomorrow Program (LOT) for t...
18 Subjects: including SAT math, statistics, geometry, algebra 1
...I have taught multiple people how to swim and I personally tutored more than 20 people in stroke, turn, and start technique for all four strokes. I am a college graduate who will apply to
medical school in the coming year. I took over 20 MCAT practice tests, averaging a score of 35 with a high score of 37.
39 Subjects: including SAT math, Spanish, chemistry, writing
...In tutoring, I always make it a point to figure out the student's style of learning and I plan my tutoring sessions accordingly, spending extra time to prepare for the session prior to meeting
with the student. My broad background in math, science, and engineering combined with my extensive rese...
16 Subjects: including SAT math, calculus, physics, statistics
Math Forum: Alejandre - Dürer Classroom Activity
Suzanne Alejandre's
Table of Contents
[NCTM Standards: Number and Operations and Connections]
1. Students will learn the definition of a magic square.
2. Students will learn the historical background of magic squares.
3. Students will experience the artistic aspects of magic squares.
Method 1: If Internet display capabilities are available or students have access to the Web at individual computer stations, the activity can be structured by viewing Melancholia, the magic
square found in Melancholia, and information on Albrecht Dürer [Albrecht Dürer - engraver; More about Albrecht Dürer] interactively, and then continuing with the activity.
Method 2: Prepare overhead transparencies and/or handouts before presenting the activity, including:
For both Methods 1 and 2:
1. Blank paper
2. Blank overhead transparency and pens
3. Rulers
Display the Dürer Magic Square.
History Questions:
1. In what year did Albrecht Dürer create the engraving, Melancholia?
2. After reading about Albrecht Dürer's background, why do you think he included a magic square in his engraving?
3. If you were to make a magic square with this year's date [1996], what size grid would you need to fill?
Number Questions:
1. What is magic about the arrangement of the numbers in the 4x4 cell square?
2. What is the first number that Dürer used?
3. What is the last number?
4. How many numbers are there?
5. Is any number repeated?
6. What is the sum of the numbers in the 1st row? 2nd row? 3rd row? 4th row?
7. What is the sum of the numbers in the 1st column? 2nd column? 3rd column? 4th column?
8. What is the sum of the numbers on one diagonal? the other diagonal?
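Teachers may also like to check the sums in questions 6-8 computationally. The following short Python snippet (offered only as an illustration — any language the class knows would do) encodes the square as it appears in the engraving:

```python
# Duerer's magic square, as read off the engraving row by row
durer = [[16,  3,  2, 13],
         [ 5, 10, 11,  8],
         [ 9,  6,  7, 12],
         [ 4, 15, 14,  1]]

rows  = [sum(row) for row in durer]
cols  = [sum(row[c] for row in durer) for c in range(4)]
diags = [sum(durer[i][i] for i in range(4)),        # main diagonal
         sum(durer[i][3 - i] for i in range(4))]    # anti-diagonal
print(rows, cols, diags)  # every sum is 34, the magic constant
```

Every row, column and diagonal sums to 34 — and the two middle cells of the bottom row, 15 and 14, give the date of the engraving.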
With the teacher modeling on the overhead projector, have the students construct a 4x4 grid. Use the engraving and write in the grid the numerals used by Dürer. [The numerals should be positioned in
the center of each cell of the grid.]
Draw a dot in the center of each numeral for reference. Using a ruler, connect the dots starting at 1, going to 2, 3, ....to 16. [Refer to Dürer Magic Square with lines.]
Symmetry Questions:
1. What patterns do you see?
2. What are the symmetrical relationships?
3. Is the line design you have created, an example of
rotation symmetry?
translation symmetry?
reflection symmetry?
glide reflection symmetry?
[Note: To discover the symmetry involved, create a transparency (just trace the original) of the lines and use it to test for rotation, translation (slide), and reflection (flip).]
Cushion - Math Genius
The above represents a square of brocade.
A lady wishes to cut it in four pieces so that two pieces will form one perfectly square cushion top, and the remaining two pieces another square cushion top.
How is she to do it?
Of course, she can only cut along the lines that divide the twenty-five squares, and the pattern must "match" properly without any irregularity whatever in the design of the material.
There is only one way of doing it. Can you find it?
Selection of weights Help for Central Tendency - Transtutors
Selection of weights
An important point that arises in the calculation of the weighted mean is the selection of weights. Weights may be either actual or arbitrary. If actual weights are available, there is no problem in
calculating the weighted mean. If, however, the weights are arbitrary, it becomes difficult to determine them, since different persons may assign different weights to the same items. Even so, a
change in weights does not generally affect the series as much as a change in the value of an item, so an error in a weight is less serious than a corresponding error in the size of the item. It is
for this reason that King observed that "the items should be as exact as possible and the weights used should be approximately accurate".
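King's point is easy to see numerically. The following sketch uses hypothetical figures (items 10, 20, 30 with weights 1, 2, 3), chosen only to show the effect:

```python
def weighted_mean(items, weights):
    # Weighted mean = sum of (item x weight) / sum of weights
    return sum(x * w for x, w in zip(items, weights)) / sum(weights)

print(weighted_mean([10, 20, 30], [1, 2, 3]))  # ~23.33 -> true value
print(weighted_mean([10, 20, 30], [1, 2, 4]))  # ~24.29 -> last weight off by a third
print(weighted_mean([10, 20, 40], [1, 2, 3]))  # ~28.33 -> last item off by a third
```

The same relative error shifts the weighted mean far less when it sits in a weight than when it sits in an item, which is exactly why the items should be as exact as possible while the weights need only be approximately accurate.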
Solve the following equation on the interval \([0, 2\pi)\): \(1 - \cos x = -\sin x\)
Let's write c instead of \(\cos x\) and s instead of \(\sin x\). This is just to reduce the amount of writing involved. \[1-c = -s\]Square both sides \[(1-c)(1-c) = 1-2c+c^2 = s^2\]Remember that \[\sin^2x+\cos^2x = 1\]so we can rewrite the right hand side as \[1-2c+c^2=1-c^2\]If we solve that for \(c\) \[-2c+2c^2 = 0\]\[2c(c-1)=0\]\[c=0 \text{ or } c=1\](be careful not to divide through by \(c\) here, or you lose a root). Undoing the substitution gives \(\cos x = 0\) or \(\cos x = 1\), i.e. the candidates \(x = 0, \pi/2, 3\pi/2\). Because we squared the equation, each candidate must be checked in the original: \(x=\pi/2\) gives \(1 = -1\) and fails, while \(x=0\) and \(x=3\pi/2\) both satisfy \(1-\cos x = -\sin x\). So the solutions in \([0,2\pi)\) are \(x=0\) and \(x=3\pi/2\).
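For anyone who wants to avoid the squaring step (and the extraneous-root check it forces), the half-angle identities give the same answer directly:
\[1-\cos x = 2\sin^2\tfrac{x}{2}, \qquad -\sin x = -2\sin\tfrac{x}{2}\cos\tfrac{x}{2}\]
\[2\sin^2\tfrac{x}{2} = -2\sin\tfrac{x}{2}\cos\tfrac{x}{2} \implies \sin\tfrac{x}{2}\left(\sin\tfrac{x}{2}+\cos\tfrac{x}{2}\right)=0\]
So either \(\sin\frac{x}{2}=0\), giving \(x=0\), or \(\tan\frac{x}{2}=-1\), giving \(\frac{x}{2}=\frac{3\pi}{4}\), i.e. \(x=\frac{3\pi}{2}\): the same two solutions, with no extraneous roots to discard.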
thank you so much
Proof of Variance for Continuous Uniform Distribution
Hi everyone, I had a proof up earlier for the expectation of a Continuous Uniform Distribution that people helped me with. I was wondering if anybody could walk me through this one as well?
If X ~ U(a,b), then:
Var(X) = (b - a)^2/12.
Now Var(X) is given by:
Var(X) = E[X^2] - (E[X])^2.
Any help would be very much appreciated
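Here is a sketch of the standard derivation, assuming the uniform density \(f(x) = 1/(b-a)\) on \([a,b]\):
\[E[X] = \int_a^b \frac{x}{b-a}\,dx = \frac{b^2-a^2}{2(b-a)} = \frac{a+b}{2}\]
\[E[X^2] = \int_a^b \frac{x^2}{b-a}\,dx = \frac{b^3-a^3}{3(b-a)} = \frac{a^2+ab+b^2}{3}\]
\[\mathrm{Var}(X) = E[X^2] - (E[X])^2 = \frac{a^2+ab+b^2}{3} - \frac{(a+b)^2}{4} = \frac{(b-a)^2}{12}\]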
Re: Proof of Variance for Continuous Uniform Distribution
On both of the ones you have posted, all the steps are clearly there. There is absolutely nothing to add.
Either you know elementary calculus or you don't. If you don't, why are you trying to use it?
You also need to realize this is not a tutorial service.
I was wondering how to use the billboarding effect (where a certain poly always faces the viewer). I am using the LookAt function, so I don't really know the rotation values. Does anybody know how to make a certain polygon always face the viewer while using the LookAt function?
P.S. Any code would be greatly appreciated
Re: Billboarding???
I saved this from some messages back. I found this very helpful information. The answer to your problem lies within. If there is a function to resolve the camera pitch and bank while using gluLookAt(), I would like to know.
Alternate option - you could use the details below to write your own LookAt().
sine, cosine and tangent are 3 trigonometry functions which relate angles and lengths of sides of a right triangle and are defined this way
T = the angle of interest
A = side of triangle Adjacent to T
O = side of triangle Opposide T
H = the hypotenuse (the side opposite the right angle).
cos(T) = A/H
sin(T) = O/H
tan(T) = O/A
These functions are useful for all sorts of things. For example if you want to break an arbitrary vector up into x and y components, you find the angle the vector makes with the x-axis of your
coordinate system and H = length of the vector and you get:
x_component_of_H = A = H*cos(T)
y_component_of_H = O = H*sin(T)
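For example, in Python (a small illustrative sketch; the vector length and angle are made up):

import math

H = 5.0                          # length of the vector
T = math.radians(30.0)           # angle the vector makes with the x-axis

x_component = H * math.cos(T)    # adjacent side
y_component = H * math.sin(T)    # opposite side
print(x_component, y_component)  # roughly 4.33 and 2.5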
I had to modify the drawing from the original post. Sorry, I cannot remember who posted it.
a is your camera
b is your object
c.x = object.x; // b's x-coordinate
c.y = camera.y; // a's y-coordinate, so the right angle sits at c
Solve for T to get your angle. This is in 2D only; you can fix it for 3D in the same fashion. Once you have created the right angle, use the Pythagorean theorem to solve the whole triangle, assuming you have the x,y coords for a & b.
To create the billboard effect (I nearly forgot): this rotation is across the X,Z plane.
[This message has been edited by dans (edited 03-09-2000).]
Re: Billboarding???
Here's some linear Algebra for you! whee
(in code so it's actually useful!!)
#include <math.h>
#define PI 3.14159265358979

// assumes a Vector class where '^' is the cross product and '*' is the dot product
Vector Vo, Vn, Vr;
double VoMag, VrMag, theta;

// this will get the vector between the camera and the object you want to billboard
Vo[0] = Obj->GetX() - Camera->GetX();
Vo[1] = Obj->GetY() - Camera->GetY();
Vo[2] = Obj->GetZ() - Camera->GetZ();

// distance between object and camera (magnitude of the vector)
VoMag = sqrt(Vo[0]*Vo[0] + Vo[1]*Vo[1] + Vo[2]*Vo[2]);

// this is the direction the polygon initially faces (the Z-axis in this example)
Vn[0] = 0;
Vn[1] = 0;
Vn[2] = 1;

// the cross product gives you the perpendicular axis about which you want to
// rotate so the polygon always faces the camera
Vr = Vn ^ Vo;

// normalize the rotation axis before using it, guarding against a zero-length vector
VrMag = sqrt(Vr[0]*Vr[0] + Vr[1]*Vr[1] + Vr[2]*Vr[2]);
if (VrMag != 0) {
    Vr[0] = Vr[0]/VrMag;
    Vr[1] = Vr[1]/VrMag;
    Vr[2] = Vr[2]/VrMag;
}

// ok we have the vector to rotate about, now what angle do we rotate by?
// this can be found in any linear algebra book
theta = acos((Vn * Vo)/VoMag) * 180/PI;
// the magnitude of the Z-axis is 1, so it drops out of the equation
there you go!
just use
glRotated(theta, Vr[0], Vr[1], Vr[2]);
and you will always face the camera!
(the object billboarded will rotate about the Z on its own... but i am still fixing that..
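If you want to sanity-check the math outside of OpenGL, here is a quick Python/NumPy sketch of the same axis-angle computation (the camera and object coordinates are made up for illustration):

import numpy as np

cam = np.array([0.0, 0.0, 0.0])   # hypothetical camera position
obj = np.array([3.0, 1.0, 2.0])   # hypothetical billboard position

Vo = obj - cam                    # camera-to-object vector
Vn = np.array([0.0, 0.0, 1.0])    # direction the polygon initially faces

Vr = np.cross(Vn, Vo)             # rotation axis
mag = np.linalg.norm(Vr)
if mag != 0:
    Vr = Vr / mag

theta = np.degrees(np.arccos(np.dot(Vn, Vo) / np.linalg.norm(Vo)))
print(theta, Vr)                  # the arguments you would hand to glRotated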
Fast way to do 8 at a time half square triangles (HST) with straight of grain edges!
This is a tutorial on how to make 8 half square triangles (HST) at a time. What's nice with this method vs. the 4-at-a-time is that the bias is where it's supposed to be and your final edges are on the straight of grain (the 4-at-a-time method has messed me up before: you sew on the straight of grain and end up with bias edges).
Math: I started with 5" squares and my HSTs were each slightly over 2" (they will ultimately finish at 1.5" in the quilt). You will lose about 1" with this method. So if you want HSTs to ultimately be 4" finished in the quilt, they would be 4.5" unfinished. You would want to start with a 4.5" x 2 + 1" = 10" square. If you want 6.5" finished, 7" unfinished, you would start with a 7" x 2 + 1" = 15" square.
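If you'd rather let the computer do this arithmetic, here is a tiny Python sketch of the same formula (the function name is mine, just for illustration):

def starting_square(finished_size):
    # unfinished = finished + 0.5 (two 1/4" seam allowances),
    # and this method needs: start = unfinished * 2 + 1
    unfinished = finished_size + 0.5
    return unfinished * 2 + 1

print(starting_square(1.5))  # 5.0  -> the 5" squares used here
print(starting_square(4.0))  # 10.0
print(starting_square(6.5))  # 15.0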
Let me know if you have questions
Put 2 squares right side together. Draw 2 diagonal lines on one
Sew 1/4" on either side of each line (4 seams)
Using the center as a guide, cut a horizontal and vertical line
These look familiar!
Cut along your pencil lines
Tada! Half square triangle
A tad over 2". Trim to size
Systems Modeling and Simulation
School of Electrical & Computer Engineering
University of Tehran
(Zareh, Spring 2007)
Systems modelling and simulation techniques find application in fields as diverse as physics, chemistry, biology, economics, medicine, computer science, and engineering. The purpose of this course is
to introduce fundamental principles and concepts in the general area of systems modelling and simulation. Topics to be covered in this course include basics of discrete-event system simulation,
mathematical and statistical models, simulation design, experiment design, and modelling of simulation data. Further details regarding this course can be found by following the links given below.
Administrative Information
• Instructor: Ali Mohammad Zareh Bidoki (zare_b @ ece dot ut dot ac dot ir)
• Session/Lectures: Sat.& Mon. 8-9.30 am
• TA email: utsimulation at gmail.com
• Prerequisites:
The recommended textbooks for this course are:
• Discrete-Event System Simulation (Fourth Edition), Banks, Carson, Nelson, and Nicol, Prentice-Hall, 2005. Henceforth, referred to as [BCNN05].
• Simulation Modeling and Analysis (Third Edition), Law and Kelton, McGraw Hill, 2000. Henceforth, referred to as [LK00].
• Simulating Computer Systems: Techniques and Tools, M.H. MacDougall , MIT Press Series in Computer Systems, 1987 as [MH87].
It is recommended that you purchase one of the above-mentioned textbooks. Note that lectures will be drawn from the above textbooks as well as other sources (e.g., books, research literature, etc).
Other textbooks that I will be referring to include:
• Ross, Sheldon M. (2001), Simulation, Academic Press
• Banks, J. Handbook of simulation: Principles, methodology, advances, applications and practice. Wiley,1998.
• J.B. Sinclair, Simulation of Computer Systems and Computer Networks: A Process-Oriented Approach,2004(pdf).
SMPL Simulation Toolkit
Assignments will involve programming using the SMPL simulation tool. SMPL is a set of C functions for building event-based, discrete-event simulation models. SMPL was written by M. H. MacDougall and
is described in his book,
Simulating Computer Systems, Techniques and Tools
, The MIT Press, 1987. Here is the SMPL package,
Furthermore, a C++ version of SMPL is in here,
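SMPL itself is a set of C functions, but the event-scheduling style of simulation it supports can be sketched in a few lines of Python. The following is a hypothetical M/M/1 queue illustration (the rates, run length, and all names are made up; this is not SMPL's actual API):

import heapq, random

random.seed(1)
ARRIVAL_RATE, SERVICE_RATE, END_TIME = 1/200.0, 1/100.0, 200000.0

events = [(random.expovariate(ARRIVAL_RATE), "arrive")]  # the event list
clock, queue, busy, served = 0.0, 0, False, 0

while events and clock < END_TIME:
    clock, kind = heapq.heappop(events)
    if kind == "arrive":
        # schedule the next arrival, then try to seize the server
        heapq.heappush(events, (clock + random.expovariate(ARRIVAL_RATE), "arrive"))
        if busy:
            queue += 1
        else:
            busy = True
            heapq.heappush(events, (clock + random.expovariate(SERVICE_RATE), "depart"))
    else:  # a departure: release the server or start the next customer
        served += 1
        if queue > 0:
            queue -= 1
            heapq.heappush(events, (clock + random.expovariate(SERVICE_RATE), "depart"))
        else:
            busy = False

print("customers served:", served)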
1. Introduction What is simulation, when to use simulation, simulation terminology, application areas, model classification, types of simulation, steps in a simulation study, advantages/
disadvantages of a simulation study
Lecture slides: Introduction (PDF)
Readings: Chapter 1 [BCNN05]
2. Simulation Examples (PDF)
Readings: Chapter 2 [BCNN05]
3. General Principles and Examples Concepts of discrete-event simulation, list processing, examples: single-server queueing simulation, Event Scheduling (PDF)
Readings: Chapters 3 from [BCNN05]
4. Simulation Software (PDF)
Readings: Chapters 4 from [BCNN05]
5. SMPL Simulation Language (PDF)
6. Statistical Models Review of basic probability and statistics, discrete distributions, continuous distributions, empirical distributions
Lecture slides (PDF)
Readings: Chapter 5 from [BCNN05]
7. Queueing Models Queueing systems, important random processes, birth-death queueing systems, Markovian queues in equilibrium
Lecture slides: Introduction to Queueing Systems (PDF)
8. Generating Random-Numbers Properties of random numbers, techniques for generating random numbers, testing random number generators
Lecture slides (PDF)
Readings: Chapter 7 from [BCNN05]
9. Generating Random-Variates Inverse-transform technique, acceptance-rejection technique, composition, convolution
Lecture slides (PDF)
Readings: Chapter 8 from [BCNN05]
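As a small taste of the inverse-transform technique covered in this unit, here is a Python sketch using the exponential distribution (the rate and sample size are arbitrary):

import math, random

def exponential_variate(rate):
    # If U ~ Uniform(0,1), then X = -ln(1 - U)/rate is Exponential(rate),
    # because the CDF F(x) = 1 - exp(-rate*x) inverts to F^-1(u) = -ln(1-u)/rate.
    u = random.random()
    return -math.log(1.0 - u) / rate

samples = [exponential_variate(0.5) for _ in range(10000)]
print(sum(samples) / len(samples))  # should be close to 1/0.5 = 2.0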
10. Input Modelling Data collection, assessing sample independence, hypothesizing distribution family with data, parameter estimation, goodness-of-fit tests, selecting input models in absence of
data, models of arrival processes
Lecture slides (PDF)
Readings: Chapter 9 from [BCNN05]
11. Markov Chains
Readings: (PDF)
12. Simulation of Computer Systems
Lecture slides (PDF)
Readings: Chapter 14 from [BCNN05]
13. Simulation of Computer Networks
Lecture slides (PDF)
Readings: Chapter 15 from [BCNN05] & [MH87].
The evaluation will consist of three components, namely assignments, a midterm examination, and a final examination.
1. Assignments: Please send your home works to TA email (30%)
□ Assignment 1, due on 85/11/28 (28 Bahman) at class.
□ Assignment 2, due on 85/12/5 (5 Esfand) at class.
□ Assignment 3, due on 85/12/18 (18 Esfand) till 24:00.
□ Assignment 4, Problems 13,19, 39 in chapter 5 and problems 8,14,17,26 in chapter 6 from course book[BCNN05],due on 85/1/19 at class.
□ Assignment 5, due on 86/1/24 (24 Farvardin) till 24:00.
□ Assignment 6, due on 86/2/22(22 Ordibehesht) at class.
□ Assignment 7, Problem 3 in chapter 14 from course book[BCNN05],due on 86/3/19 at class.
□ Assignment 8, Assignment 5 using Ptolemy , due on 86/4/19 at 15:00 (Scheduling).
2. Midterm Examination (30%)
The midterm examination will be held on Saturday 1 Ordibehesht at 8:00-9:30 (86/2/1).
3. Final Examination (40%) (The final exam is from chapters 5, 6,7,8,9 and 14 of course book and lectures in the class)
Grades includes midterm and final (Final Grades). You can see your exam papers on 86/4/19 at 18:00.
a battery with E = 9V supplies current to the circuit. When the double-throw switch S is open as shown, the current in the battery is 1.5 mA. When the switch is closed in position a, the current in the battery is 1.8 mA. When the switch is closed in position b, the current is 2 mA. The internal resistance of the battery is negligible.
1. Find the values of the resistances R1, R2 and R3 in the circuit
When the switch is open the current goes through `R_1`, `R_2` and `R_3`, as they are connected in series.
We are using Ohm's law here.
`V = IR`
`9 = 1.5xx10^(-3)(R_1+R_2+R_3)`
`9/0.0015 = R_1+R_2+R_3------(1)`
When the switch is at 'a' the current goes through both `R_2` resistors together with `R_3` and `R_1`. The `R_2` resistors are connected in parallel. The resultant (`R'`) of the two `R_2` resistors can be given as:
`1/R' = 1/R_2+1/R_2`
`R' = (R_2)/2`
Then `R'`, `R_1` and `R_3` are in series.
`9 = 1.8xx10^(-3)(R_1+R_2/2+R_3)`
`9/0.0018 = R_1+R_2/2+R_3 ------(2)`
When the switch is at 'b' the current goes through both `R_3` resistors together with `R_2` and `R_1`. The `R_3` resistors are connected in parallel. The resultant (`R''`) of the two `R_3` resistors can be given as:
`1/R'' = 1/R_3+1/R_3`
`R'' = (R_3)/2`
Then `R''`, `R_2` and `R_1` are in series.
`9 = 2xx10^(-3)(R_1+R_2+R_3/2)`
`9/0.002 = (R_1+R_2+R_3/2) ----(3)`
Subtracting (1) from (2):
`9/0.0018-9/0.0015 = -R_2/2`
`R_2 = 2000`
Subtracting (1) from (3):
`9/0.002-9/0.0015 = -R_3/2`
`R_3 = 3000`
From (1)
`R_1 = 9/0.0015-2000-3000`
`R_1 = 1000`
So the answers are;
`R_1 = 1000ohm`
`R_2 = 2000ohm`
`R_3 = 3000ohm`
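As a quick numerical cross-check, equations (1)-(3) form a 3x3 linear system that can be solved directly (a small Python/NumPy sketch):

import numpy as np

# coefficients of R1, R2, R3 in equations (1), (2) and (3)
A = np.array([[1.0, 1.0, 1.0],    # switch open:  R1 + R2   + R3
              [1.0, 0.5, 1.0],    # position a:   R1 + R2/2 + R3
              [1.0, 1.0, 0.5]])   # position b:   R1 + R2   + R3/2
b = np.array([9/0.0015, 9/0.0018, 9/0.002])

print(np.linalg.solve(A, b))      # -> [1000. 2000. 3000.]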
some industrial examples of ACL2 use
Major Section: ACL2-TUTORIAL
ACL2 is an interactive system in which you can model digital artifacts and guide the system to mathematical proofs about the behavior of those models. It has been used at such places as AMD, Centaur,
IBM, and Rockwell Collins to verify interesting properties of commercial designs. It has been used to verify properties of models of microprocessors, microcode, the Sun Java Virtual Machine,
operating system kernels, other verifiers, and interesting algorithms.
Here we list just a few of the industrially-relevant results obtained with ACL2. Reading the list may help you decide you want to learn how to use ACL2. If you do decide you want to learn more, we
recommend that you take The Tours after you leave this page.
ACL2 was used at Motorola Government Systems to certify several microcode programs for the Motorola CAP digital signal processor, including a comparator sort program that is particularly subtle. In
the same project, ACL2 was used to model the CAP at both the pipelined architectural level and the instruction set level. The architectural model was bit- and cycle-accurate: it could be used to
predict every bit of memory on every cycle. The models were proved equivalent under certain hypotheses, the most important being a predicate that analyzed the microcode for certain pipeline hazards.
This predicate defined what the hazards were, syntactically, and the equivalence of the two models established the correctness of this syntactic characterization of hazards. Because ACL2 is a
functional programming language, the ACL2 models and the hazard predicate could be executed. ACL2 executed the microcode interpreter several times faster than the hardware simulator could execute it --
with assurance that the answers were equivalent. In addition, the ACL2 hazard predicate was executed on over fifty microcode programs written by Motorola engineers and extracted from the ROM
mechanically. Hazards were found in some of these. (See, for example, Bishop Brock and Warren. A. Hunt, Jr. ``Formal analysis of the motorola CAP DSP.'' In Industrial-Strength Formal Methods.
Springer-Verlag, 1999.)
ACL2 was used at Advanced Micro Devices (AMD) to verify the compliance of the AMD Athlon's (TM) elementary floating point operations with their IEEE 754 specifications. This followed ground-breaking
work in 1995 when ACL2 was used to prove the correctness of the microcode for floating-point division on the AMD K5. The AMD Athlon work proved addition, subtraction, multiplication, division, and
square root compliant with the IEEE standard. Bugs were found in RTL designs. These bugs had survived undetected in hundreds of millions of tests but were uncovered by ACL2 proof attempts. The RTL in
the fabricated Athlon FPU has been mechanically verified by ACL2. Similar ACL2 proofs have been carried out for every major AMD FPU design fabricated since the Athlon. (See for example, David
Russinoff. ``A mechanically checked proof of correctness of the AMD5K86 floating-point square root microcode''. Formal Methods in System Design Special Issue on Arithmetic Circuits, 1997.)
ACL2 was used at IBM to verify the floating point divide and square root on the IBM Power 4. (See Jun Sawada. ``Formal verification of divide and square root algorithms using series calculation''. In
Proceedings of the ACL2 Workshop 2002, Grenoble, April 2002.)
ACL2 was used to verify floating-point addition/subtraction instructions for the media unit from Centaur Technology's 64-bit, X86-compatible microprocessor. This unit implements over one hundred
instructions, with the most complex being floating-point addition/subtraction. The media unit can add/subtract four pairs of floating-point numbers every clock cycle with an industry-leading
two-cycle latency. The media unit was modeled by translating its Verilog design into an HDL deeply embedded in the ACL2 logic. The proofs used a combination of AIG- and BDD-based symbolic simulation,
case splitting, and theorem proving. (See Warren A. Hunt, Jr. and Sol Swords. ``Centaur Technology Media Unit Verification''. In CAV '09: Proceedings of the 21st International Conference on Computer
Aided Verification, pages 353--367, Berlin, Heidelberg, 2009. Springer-Verlag.)
Rockwell Collins used ACL2 to prove information flow properties about its Advanced Architecture MicroProcessor 7 Government Version (AAMP7G), a Multiple Independent Levels of Security (MILS) device
for use in cryptographic applications. The AAMP7G provides MILS capability via a verified secure hardware-based separation kernel. The AAMP7G's design was proved to achieve MILS using ACL2, in
accordance with the standards set by EAL-7 of the Common Criteria and Rockwell Collins has received National Security Agency (NSA) certification for the device based on this work. (See David S.
Hardin, Eric W. Smith, and William. D. Young. ``A robust machine code proof framework for highly secure applications''. In Proceedings of the sixth international workshop on the ACL2 theorem prover
and its applications, pages 11--20, New York, NY, USA, 2006. ACM.)
Key properties of the Sun Java Virtual Machine and its bytecode verifier were verified in ACL2. Among the properties proved were that certain invariants are maintained by class loading and that the
bytecode verifier insures that execution is safe. In addition, various JVM bytecode programs have been verified using this model of the JVM. (See Hanbing Liu. Formal Specification and Verification of
a JVM and its Bytecode Verifier. PhD thesis, University of Texas at Austin, 2006.)
The Boyer-Moore fast string searching algorithm was verified with ACL2, including a model of the JVM bytecode for the search algorithm itself (but not the preprocessing). (See J S. Moore and Matt
Martinez. ``A mechanically checked proof of the correctness of the Boyer-Moore fast string searching algorithm.'' In Engineering Methods and Tools for Software Safety and Security pages 267--284. IOS
Press, 2009.)
ACL2 was used to verify the fidelity between an ACL2-like theorem prover and a simple (``trusted by inspection'') proof checker, thereby establishing (up to the soundness of ACL2) the soundness of
the ACL2-like theorem prover. This project was only part of a much larger project in which the resulting ACL2 proof script was then hand-massaged into a script suitable for the ACL2-like theorem
prover to process, generating a formal proof of its fidelity that has been checked by the trusted proof checker. (See Jared Davis. Milawa: A Self-Verifying Theorem Prover. Ph.D. Thesis, University of
Texas at Austin, December, 2009.)
These are but a few of the interesting projects carried out with ACL2. Many of the authors mentioned above have versions of the papers on their web pages. In addition, see the link to ``Books and
Papers about ACL2 and Its Applications'' on the ACL2 home page (http://www.cs.utexas.edu/users/moore/acl2). Also see the presentations in each of the workshops listed in the link to ``ACL2
Workshops'' on the ACL2 home page.
Numba vs Cython
For a more up-to-date comparison of Numba and Cython, see the newer post on this subject.
Often I'll tell people that I use python for computational analysis, and they look at me inquisitively. "Isn't python pretty slow?" They have a point. Python is an interpreted language, and as such
cannot natively perform many operations as quickly as a compiled language such as C or Fortran. There is also the issue of the oft-misunderstood and much-maligned GIL, which calls into question
python's ability to allow true parallel computing.
Many solutions have been proposed: PyPy is a much faster version of the core python language; numexpr provides optimized performance on certain classes of operations from within python; weave allows
inline inclusion of compiled C/C++ code; cython provides extra markup that allows python and/or python-like code to be compiled into C for fast operations. But a naysayer might point out: many of
these "python" solutions in practice are not really python at all, but clever hacks into Fortran or C.
I personally have no problem with this. I like python because it gives me a nice work-flow: it has a clean syntax, I don't need to spend my time hunting down memory errors, it's quick to try-out code
snippets, it's easy to wrap legacy code written in C and Fortran, and I'm much more productive when writing python vs writing C or C++. Numpy, scipy, and scikit-learn give me optimized routines for
most of what I need to do on a daily basis, and if something more specialized comes up, cython has never failed me. Nevertheless, the whole setup is a bit clunky: why can't I have the best of both
worlds: a beautiful, scripted, dynamically typed language like python, with the speed of C or Fortran?
In recent years, new languages like go and julia have popped up which try to address some of these issues. Julia in particular has a number of nice properties (see the talk from Scipy 2012 for a good
introduction) and uses LLVM to enable just-in-time (JIT) compilation and achieve some impressive benchmarks. Julia holds promise, but I'm not yet ready to abandon the incredible code-base and
user-base of the python community.
Enter numba. This is an attempt to bring JIT compilation cleanly to python, using the LLVM framework. In a recent post, one commenter pointed out numba as an alternative to cython. I had heard about
it before (See Travis Oliphant's scipy 2012 talk here) but hadn't had the chance to try it out until now. Installation is a bit involved, but the directions on the numba website are pretty good.
To test this out, I decided to run some benchmarks using the pairwise distance function I've explored before (see posts here and here).
Pure Python Version
The pure python version of the function looks like this:
import numpy as np
def pairwise_python(X, D):
M = X.shape[0]
N = X.shape[1]
for i in range(M):
for j in range(M):
d = 0.0
for k in range(N):
tmp = X[i, k] - X[j, k]
d += tmp * tmp
D[i, j] = np.sqrt(d)
Not surprisingly, this is very slow. For an array consisting of 1000 points in three dimensions, execution takes over 12 seconds on my machine:
In [2]: import numpy as np
In [3]: X = np.random.random((1000, 3))
In [4]: D = np.empty((1000, 1000))
In [5]: %timeit pairwise_python(X, D)
1 loops, best of 3: 12.1 s per loop
Numba Version
Once numba is installed, we add only a single line to our above definition to allow numba to interface our code with LLVM:
import numpy as np
from numba import double
from numba.decorators import jit
@jit(arg_types=[double[:,:], double[:,:]])
def pairwise_numba(X, D):
M = X.shape[0]
N = X.shape[1]
for i in range(M):
for j in range(M):
d = 0.0
for k in range(N):
tmp = X[i, k] - X[j, k]
d += tmp * tmp
D[i, j] = np.sqrt(d)
I should emphasize that this is the exact same code, except for numba's jit decorator. The results are pretty astonishing:
In [2]: import numpy as np
In [3]: X = np.random.random((1000, 3))
In [4]: D = np.empty((1000, 1000))
In [5]: %timeit pairwise_numba(X, D)
100 loops, best of 3: 15.5 ms per loop
This is a three order-of-magnitude speedup, simply by adding a numba decorator!
Cython Version
For completeness, let's do the same thing in cython. Cython takes a bit more than just some decorators: there are also type specifiers and other imports required. Additionally, we'll use the sqrt
function from the C math library rather than from numpy. Here's the code:
cimport cython
from libc.math cimport sqrt
def pairwise_cython(double[:, ::1] X, double[:, ::1] D):
cdef int M = X.shape[0]
cdef int N = X.shape[1]
cdef double tmp, d
for i in range(M):
for j in range(M):
d = 0.0
for k in range(N):
tmp = X[i, k] - X[j, k]
d += tmp * tmp
D[i, j] = sqrt(d)
Running this shows about a 30% speedup over numba:
In [2]: import numpy as np
In [3]: X = np.random.random((1000, 3))
In [4]: D = np.empty((1000, 1000))
In [5]: %timeit pairwise_cython(X, D)
100 loops, best of 3: 9.86 ms per loop
The Takeaway
So numba is 1000 times faster than a pure python implementation, and only marginally slower than nearly identical cython code. There are some caveats here: first of all, I have years of experience
with cython, and only an hour's experience with numba. I've used every optimization I know for the cython version, and just the basic vanilla syntax for numba. There are likely ways to tweak the
numba version to make it even faster, as indicated in the comments of this post.
All in all, I should say I'm very impressed. Using numba, I added just a single line to the original python code, and was able to attain speeds competitive with a highly-optimized (and significantly
less "pythonic") cython implementation. Based on this, I'm extremely excited to see what numba brings in the future.
All the above code is available as an IPython notebook: numba_vs_cython.ipynb. For information on how to view this file, see the IPython page. Alternatively, you can view this notebook (but not modify it) using the nbviewer here.
Publications by Students
Many faculty members involve graduate and under graduate students in their research. The following papers were authored or co-authored by the students and former students indicated by *. The research
for these papers was done while the authors were students at CSUN.
Reiner, Robert C.,Jr.; Djellouli, Rabia, Improvement of the performance of the BGT2 condition for low frequency acoustic scattering problems. Wave Motion 43 (2006), no. 5, 406--424. 76Q05 (76M25)
Reiner*, Robert C., Jr.; Djellouli, Rabia; Harari, Isaac, The performance of local absorbing boundary conditions for acoustic scattering from elliptical shapes. Comput. Methods Appl. Mech. Engrg. 195
(2006), no. 29-32, 3622--3665. (Reviewer: Gregoire P. Derveaux) 76M25 (76Q05)
Wagner*, Dirk; Dye, John, Numerical studies of the expected height in randomly built binary search trees, Int. J. Pure Appl. Math. 41 (2007), no. 3, 399–413
Gillman*, Adrianna; Djellouli, Rabia; Amara, Mohamed, A mixed hybrid formulation based on oscillated finite element polynomials for solving Helmholtz problems. (English summary)
J. Comput. Appl. Math. 204 (2007), no. 2, 515--525
Reiner*, Robert C., Jr.; Djellouli, Rabia; Harari, Isaac; Analytical and numerical investigation of the performance of the BGT2 condition for low-frequency acoustic scattering problems. J. Comput.
Appl. Math. 204 (2007), no. 2, 526--536. 76Q05 (35J05 35P25 65N30 76M25)
Ryan*, Paul; Djellouli, Rabia; Cohen, Randy, Modeling capsule tissue growth around disk-shaped implants: a numerical and in vivo study. (English summary) J. Math. Biol. 57 (2008), no. 5, 675--695.
92C15 (35K20 35R30 65H10 65M06 74L15)
Cavillo*, Lucy; Horn, Werner, A simple model for the motion of frog sperm, Missouri Journal of Mathematical Sciences, 20 No. 3 (2008), 209-222.
Saegusa*, Takumi, The trace-minimal graph with $2v$ vertices and regularity $v+1$. (English summary), Discrete Math. 308 (2008), no. 18, 4298—4303. (05C35)
Ndiaye*, Madjiguene; Horn, Werner, On Rearrangements of Alternating Series, Atlantic Electronic Journal of Mathematics, Vol. 3, No. 1(2009), 6-17
Lopez*, Eloy A.; Neubauer, Michael G., D-optimal $(0,1)$-weighing designs for 10 objects, Linear Multilinear Algebra 58 (2010), no. 1-2, 151--171. 62K05 (15A15)
Pace*, Richard G.; Neubauer, Michael G., D-optimal $(0,1)$-weighing designs for eight objects, Linear Algebra Appl. 432 (2010), no. 10, 2634-2657.
Navab*, M.; Zakeri, G-A., Sinc collocation approximation of non-smooth solution of a nonlinear weakly singular Volterra integral equation, Journal of Computational Physics 229 (2010) 6548-6557
Roberts*, D. B.; Abrego, B.; Fernandez-Merchant, S., On the maximum number of isosceles right triangles in a finite point set. Submitted Spring 2010.
Randles*, Evan; Klein, David; Fermi coordinates, simultaneity, and expanding space in Robertson-Walker cosmologies. Annales Henri Poincaré, Vol. 12, p 303-328 (2011), DOI 10.1007/s00023-011-0080-9.
Foss*, J.; De Martino, M.; Noronha, M. H.; Santos, G.; Codimension three nonnegatively curved submanifolds with infinite fundamental group, Math. Z. 61(2011), no. 4, 410-427.
Ionov*, Boyan; Initial complex associated to a jet schemeof a determinantal variety, J. Pure Appl. Algebra 215(2011), no. 5, 806-811.
Dagum, P.; Luby, M.: Approximating probabilistic inference in Bayesian belief networks is NP-hard
- LEARNING IN GRAPHICAL MODELS , 1995
"... ..."
, 2002
"... Modelling sequential data is important in many areas of science and engineering. Hidden Markov models (HMMs) and Kalman filter models (KFMs) are popular for this because they are simple and
flexible. For example, HMMs have been used for speech recognition and bio-sequence analysis, and KFMs have bee ..."
Cited by 564 (3 self)
Add to MetaCart
Modelling sequential data is important in many areas of science and engineering. Hidden Markov models (HMMs) and Kalman filter models (KFMs) are popular for this because they are simple and flexible.
For example, HMMs have been used for speech recognition and bio-sequence analysis, and KFMs have been used for problems ranging from tracking planes and missiles to predicting the economy. However,
HMMs and KFMs are limited in their “expressive power”. Dynamic Bayesian Networks (DBNs) generalize HMMs by allowing the state space to be represented in factored form, instead of as a single discrete
random variable. DBNs generalize KFMs by allowing arbitrary probability distributions, not just (unimodal) linear-Gaussian. In this thesis, I will discuss how to represent many different kinds of
models as DBNs, how to perform exact and approximate inference in DBNs, and how to learn DBN models from sequential data. In particular, the main novel technical contributions of this thesis are as
follows: a way of representing Hierarchical HMMs as DBNs, which enables inference to be done in O(T) time instead of O(T 3), where T is the length of the sequence; an exact smoothing algorithm that
takes O(log T) space instead of O(T); a simple way of using the junction tree algorithm for online inference in DBNs; new complexity bounds on exact online inference in DBNs; a new deterministic
approximate inference algorithm called factored frontier; an analysis of the relationship between the BK algorithm and loopy belief propagation; a way of applying Rao-Blackwellised particle filtering
to DBNs in general, and the SLAM (simultaneous localization and mapping) problem in particular; a way of extending the structural EM algorithm to DBNs; and a variety of different applications of
DBNs. However, perhaps the main value of the thesis is its catholic presentation of the field of sequential data modelling.
- In Proceedings of Uncertainty in AI , 1999
"... Recently, researchers have demonstrated that "loopy belief propagation" --- the use of Pearl's polytree algorithm in a Bayesian network with loops --- can perform well in the context of
error-correcting codes. The most dramatic instance of this is the near Shannon-limit performance of "Turbo ..."
Cited by 466 (18 self)
Add to MetaCart
Recently, researchers have demonstrated that "loopy belief propagation" --- the use of Pearl's polytree algorithm in a Bayesian network with loops --- can perform well in the context of
error-correcting codes. The most dramatic instance of this is the near Shannon-limit performance of "Turbo Codes" --- codes whose decoding algorithm is equivalent to loopy belief propagation in a
chain-structured Bayesian network. In this paper we ask: is there something special about the error-correcting code context, or does loopy propagation work as an approximate inference scheme in a
more general setting? We compare the marginals computed using loopy propagation to the exact ones in four Bayesian network architectures, including two real-world networks: ALARM and QMR. We find
that the loopy beliefs often converge and when they do, they give a good approximation to the correct marginals. However, on the QMR network, the loopy beliefs oscillated and had no obvious
relationship ...
- IEEE Journal on Selected Areas in Communications , 1998
"... Abstract—In this paper, we will describe the close connection between the now celebrated iterative turbo decoding algorithm of Berrou et al. and an algorithm that has been well known in the
artificial intelligence community for a decade, but which is relatively unknown to information theorists: Pear ..."
Cited by 309 (15 self)
Add to MetaCart
Abstract—In this paper, we will describe the close connection between the now celebrated iterative turbo decoding algorithm of Berrou et al. and an algorithm that has been well known in the
artificial intelligence community for a decade, but which is relatively unknown to information theorists: Pearl’s belief propagation algorithm. We shall see that if Pearl’s algorithm is applied to
the “belief network” of a parallel concatenation of two or more codes, the turbo decoding algorithm immediately results. Unfortunately, however, this belief diagram has loops, and Pearl only proved
that his algorithm works when there are no loops, so an explanation of the excellent experimental performance of turbo decoding is still lacking. However, we shall also show that Pearl’s algorithm
can be used to routinely derive previously known iterative, but suboptimal, decoding algorithms for a number of other error-control systems, including Gallager’s
, 1995
"... We define the probabilistic planning problem in terms of a probability distribution over initial world states, a boolean combination of propositions representing the goal, a probability
threshold, and actions whose effects depend on the execution-time state of the world and on random chance. Adoptin ..."
Cited by 258 (18 self)
Add to MetaCart
We define the probabilistic planning problem in terms of a probability distribution over initial world states, a boolean combination of propositions representing the goal, a probability threshold,
and actions whose effects depend on the execution-time state of the world and on random chance. Adopting a probabilistic model complicates the definition of plan success: instead of demanding a plan
that provably achieves the goal, we seek plans whose probability of success exceeds the threshold. In this paper, we present buridan, an implemented least-commitment planner that solves problems of
this form. We prove that the algorithm is both sound and complete. We then explore buridan's efficiency by contrasting four algorithms for plan evaluation, using a combination of analytic methods and
empirical experiments. We also describe the interplay between generating plans and evaluating them, and discuss the role of search control in probabilistic planning.
, 1996
"... Many AI problems, when formalized, reduce to evaluating the probability that a propositional expression is true. In this paper we show that this problem is computationally intractable even in
surprisingly restricted cases and even if we settle for an approximation to this probability. We consider va ..."
Cited by 219 (13 self)
Add to MetaCart
Many AI problems, when formalized, reduce to evaluating the probability that a propositional expression is true. In this paper we show that this problem is computationally intractable even in
surprisingly restricted cases and even if we settle for an approximation to this probability. We consider various methods used in approximate reasoning such as computing degree of belief and Bayesian
belief networks, as well as reasoning techniques such as constraint satisfaction and knowledge compilation, that use approximation to avoid computational difficulties, and reduce them to
model-counting problems over a propositional domain. We prove that counting satisfying assignments of propositional languages is intractable even for Horn and monotone formulae, and even when the
size of clauses and number of occurrences of the variables are extremely limited. This should be contrasted with the case of deductive reasoning, where Horn theories and theories with binary clauses
are distinguished by the e...
- Computational Intelligence , 1994
"... A new approach for learning Bayesian belief networks from raw data is presented. The approach is based on Rissanen's Minimal Description Length (MDL) principle, which is particularly well suited
for this task. Our approach does not require any prior assumptions about the distribution being learned. ..."
Cited by 188 (8 self)
Add to MetaCart
A new approach for learning Bayesian belief networks from raw data is presented. The approach is based on Rissanen's Minimal Description Length (MDL) principle, which is particularly well suited for
this task. Our approach does not require any prior assumptions about the distribution being learned. In particular, our method can learn unrestricted multiply-connected belief networks. Furthermore,
unlike other approaches our method allows us to trade off accuracy and complexity in the learned model. This is important since if the learned model is very complex (highly connected) it can be conceptually and computationally intractable. In such a case it would be preferable to use a simpler model even if it is less accurate. The MDL principle offers a reasoned method for making this tradeoff. We also show that our method generalizes previous approaches based on Kullback cross-entropy. Experiments have been conducted to demonstrate the feasibility of the approach. Keywords: Knowledge Acquisition; Bayes Nets; Uncertainty Reasoning.
- 67 – 215535 Deliverable 4.1 , 1994
"... Human reasoning in hypothesis-testing tasks like Wason's (1966, 1968) selection task has been depicted as prone to systematic biases. However, performance on this task has been assessed against
a now outmoded falsificationist philosophy of science. Therefore, the experimental data is reassessed in t ..."
Cited by 156 (8 self)
Add to MetaCart
Human reasoning in hypothesis-testing tasks like Wason's (1966, 1968) selection task has been depicted as prone to systematic biases. However, performance on this task has been assessed against a now
outmoded falsificationist philosophy of science. Therefore, the experimental data is reassessed in the light of a Bayesian model of optimal data selection in inductive hypothesis testing. The model
provides a rational analysis (Anderson, 1990) of the selection task that fits well with people's performance on both abstract and thematic versions of the task. The model suggests that reasoning in
these tasks may be rational rather than subject to systematic bias. Over the past 30 years, results in the psychology of reasoning have raised doubts about human rationality. The assumption of human
rationality has a long history. Aristotle took the capacity for rational thought to be the defining characteristic of human beings, the capacity that separated us from the animals. Descartes regarded
the ability to use language and to reason as the hallmarks of the mental that separated it from the merely physical. Many contemporary philosophers of mind also appeal to a basic principle of
rationality in accounting for everyday, folk psychological explanation whereby we explain each other's behavior in terms of our beliefs and desires (Cherniak, 1986; Cohen, 1981; Davidson, 1984;
Dennett, 1987; but see Stich, 1990). These philosophers, both ancient and modern, share a common view of rationality: To be rational is to reason according to rules (Brown, 1989). Logic and
mathematics provide the normative rules that tell us how we should reason. Rationality therefore seems to demand that the human cognitive system embodies the rules of logic and mathematics. However,
results in the psychology of reasoning appear to show that people do not reason according to these rules. In both deductive (Evans, 1982, 1989;
, 1995
"... Stochastic simulation algorithms such as likelihood weighting often give fast, accurate approximations to posterior probabilities in probabilistic networks, and are the methods of choice for
very large networks. Unfortunately, the special characteristics of dynamic probabilistic networks (DPNs), whi ..."
Cited by 151 (11 self)
Add to MetaCart
Stochastic simulation algorithms such as likelihood weighting often give fast, accurate approximations to posterior probabilities in probabilistic networks, and are the methods of choice for very
large networks. Unfortunately, the special characteristics of dynamic probabilistic networks (DPNs), which are used to represent stochastic temporal processes, mean that standard simulation
algorithms perform very poorly. In essence, the simulation trials diverge further and further from reality as the process is observed over time. In this paper, we present simulation algorithms that
use the evidence observed at each time step to push the set of trials back towards reality. The first algorithm, "evidence reversal " (ER) restructures each time slice of the DPN so that
the evidence nodes for the slice become ancestors of the state variables. The second algorithm, called "survival of the fittest " sampling (SOF), "repopulates " the set of trials
at each time step using a stochastic reproduction rate weighted by the likelihood of the evidence according to each trial. We compare the performance of each algorithm with likelihood weighting on
the original network, and also investigate the benefits of combining the ER and SOF methods. The ER/SOF combination appears to maintain bounded error independent of the number of time steps in the
- IN PROC. OF THE ELEVENTH INTERNATIONAL CONFERENCE ON UNCERTAINTY IN ARTIFICIAL INTELLIGENCE , 1995
"... Markov decision problems (MDPs) provide the foundations for a number of problems of interest to AI researchers studying automated planning and reinforcement learning. In this paper, we summarize
results regarding the complexity of solving MDPs and the running time of MDP solution algorithms. We argu ..."
Cited by 131 (10 self)
Add to MetaCart
Markov decision problems (MDPs) provide the foundations for a number of problems of interest to AI researchers studying automated planning and reinforcement learning. In this paper, we summarize
results regarding the complexity of solving MDPs and the running time of MDP solution algorithms. We argue that, although MDPs can be solved efficiently in theory, more study is needed to reveal
practical algorithms for solving large problems quickly. To encourage future research, we sketch some alternative methods of analysis that rely on the structure of MDPs.
convert arabic numbers to english text format in excel
How do I create a hyperlink to a web page in Excel?
I click on Insert/Hyperlink, click the "Browse the Web" button, and select a web site. After creating the hyperlink, I am unable to open it. I get an error message box saying, "Unable to open. Unable to locate Internet server or Proxy server".
In other words, the APIs of OpenGL follow the nomenclature of trigonometry and motion used by physics and mathematics. It also means that a good base in math (especially coordinate geometry and trigonometry) and physics is a requirement.
Therefore, in this discussion the first two sections will focus on the basic coordinate geometry essential for understanding animation APIs. In the third section, I will focus on the core APIs and techniques for animation, and the fourth section will demonstrate basic animation in action. These are the issues to be discussed in this article.
The basis of animation is performing either rotation or translation without stopping. In other words if either rotation or translation is done continuously, the resulting effect is animation. But
before going into details of these operations, let me give you a quick overview of common terminology and concepts used in co-ordinate geometry. The most common terminology and concepts in 2-D as
well as 3-D are:
• Origin
• Axes
• Coordinate Spaces
• Transformations
Of these, the first three are the founding concepts of 2-D/3-D whereas the fourth is the basis of all the operations in 2 as well as 3-D effects. Let's have a look at them.
Origin is the center of the coordinate system in every Cartesian coordinate space. This is true for both 2-D as well as 3-D. It is a special location. If the coordinate space is thought to be a city
with square boundaries, then the origin would be the center of the city.
As for axes, every 2-D Cartesian space has two straight lines that pass through the origin. Each line is known as axis and extends indefinitely in opposite directions. Both of them are opposite to
each other. These two axes are named the x-axis (the horizontal axis) and the y-axis (the vertical axis).
When the coordinate system is extended to 3-D space, then a third axis comes into the picture, named the z-axis. One thing to keep in mind is that in 3-D space, a plane can be defined by a pair of
axes. For example, the plane defined by the x-axis and y-axis is the xy-plane; it is perpendicular to the z-axis. (Figure omitted: the three coordinate axes and the xy-plane.)
The positive and negative position of an axis is based on the coordinate system, which I will be describing next.
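Before getting to the OpenGL calls themselves, here is a small illustrative sketch (plain Python/NumPy, not OpenGL) of the idea that incrementing a rotation angle on every frame produces animation:

import numpy as np

def rotate_z(point, angle_deg):
    # rotate a 3-D point about the z-axis by angle_deg degrees
    a = np.radians(angle_deg)
    rz = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
    return rz @ point

point = np.array([1.0, 0.0, 0.0])
angle = 0.0
for frame in range(5):        # each iteration stands in for one rendered frame
    print(frame, rotate_z(point, angle))
    angle += 10.0             # a little more rotation every frame = animation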
The Conjecture of Shapiro and Shapiro
Frank Sottile sottile@math.tamu.edu
18 May 2004
Table of Contents
1. Background.
i. Real enumerative geometry.
ii. Linear systems.
iii. Pole placement problem.
iv. History of the conjectures.
2. Complete intersections: hypersurface Schubert conditions.
i. Polynomial formulation of hypersurface Schubert conditions.
ii. The conjecture of Shapiro and Shapiro.
iii. The pole placement problem and geometry.
iv. Shapiro's conjecture and the pole placement problem.
v. Equivalent systems of polynomials.
vi. Proof when (m,p)=(2,3).
vii. Computational evidence.
viii. Complexity of these computations.
ix. Why these equations are interesting.
3. General Schubert conditions and overdetermined systems.
i. The Schubert calculus for the Grassmannian.
ii. The conjecture of Shapiro and Shapiro.
iii. Local coordinates for the intersection of 2 Schubert varieties.
iv. Equations for Schubert varieties.
v. Proof in some cases.
vi. Computational evidence.
vii. Challenge problems.
4. Total positivity.
i. Total positivity and the conjecture of Shapiro and Shapiro.
ii. Parameterization of totally positive matrices.
iii. Computational evidence.
5. Further remarks.
i. A counterexample to the original conjecture of Shapiro and Shapiro.
ii. Further questions.
iii. Recent developments.
6. Acknowledgements.
7. Bibliography.
Last Modified 17 April 2000.
Re: st: Re: Problems with -hetgrot-
From "Scott Merryman" <smerryman@kc.rr.com>
To <statalist@hsphsun2.harvard.edu>
Subject Re: st: Re: Problems with -hetgrot-
Date Sun, 14 Mar 2004 18:14:12 -0600
----- Original Message -----
From: "Richard Williams" <Richard.A.Williams.5@nd.edu>
To: <statalist@hsphsun2.harvard.edu>
Sent: Sunday, March 14, 2004 5:41 PM
Subject: Re: st: Re: Problems with -hetgrot-
> At 04:13 PM 3/14/2004 -0600, Scott Merryman wrote:
> >No doubt there are more efficient ways to do this, but here is an
> >implementation
> >that seems to work.
> I created my own variation called -hetgrot2-. The good news is that both
> my -hetgrot2- and Scott's -gwhet2- seem to give identical results. The bad
> news is that neither exactly matches the results reported by Greene! But
> they come very close, and I'm not 100% sure Greene is doing it right.
> Greene, 4th edition, table 15.1, p. 598 presents an example (and the data
> are included on a CD with the book). Estimates are presented for OLS,
> FGLS, and ML. I found that the following Stata commands perfectly
> reproduce the parameter estimates he reports. The LR statistic for
> groupwise heteroscedasticity he reports is 120.915, which he says is based
> on the ML statistics. I'm editing the output to just get the most crucial
> parts.
> . use "W:\Greene 4th edition\DATAFILE\STATA\TBL15_1.DTA", clear
> . tsset firm year
> panel variable: firm, 1 to 5
> time variable: year, 1935 to 1954
> . * Model 1: Least Squares
> . quietly xtgls i f c
> . gwhet2
> chi2 (4) = 104.41
> . * Model 2: FGLS
> . quietly xtgls i f c, p(h)
> . gwhet2
> chi2 (4) = 114.31
> . * Model 3: ML
> . quietly xtgls i f c, p(h) igls
> . gwhet2
> chi2 (4) = 122.89
> The final LR statistic of 122.89 is close to, but not quite, Greene's
> reported statistic of 120.915. I believe the discrepancy is due to the
> fact that Greene uses the estimate of sigma from Model 1 (Least Squares)
> while also using the estimates of the group sigmas from Model 3 (ML). I'm
> not sure I understand the logic of this, but his Limdep program apparently
> does it this way too.
> So the moral is, if you have a mad desire to use this test and you are
> using xtgls then (1) don't use the original hetgrot, as it does it wrong,
> and (2) you seem to come closest to Greene if you include the -p(h)- and
> -igls- parameters on the -xtgls- command, and (3) if you wanted to do
> things exactly like Greene, I think you would need a program that estimated
> both models 1 and 3. From 1 you would save the OLS estimate of sigma, and
> from 3 you would save the group sigmas, and then compute the test
> statistic. (But again, I am not sure Greene is doing it right; at the very
> least, his formulas seem internally inconsistent.)
> If your life really depends on getting these statistics right, you also may
> just want to use Limdep, as these tests and the others Greene presents are
> easily implemented with Limdep's TSCS (Time Series/Cross Section) command.
I believe there is an error in Greene's text. As you reported, Greene gives the
LR statistic as 120.915. However, if you compute it by hand, given the
individual sigma^2 values, the result is:
. disp 100*ln(15708.84) - 20*(ln(9410.91) + ln(755.85) + ln(34288.49) + ln(633.42) + ln(33455.51))
which is what you and I got.
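For reference, the statistic being computed by hand here is the usual
likelihood-ratio test for groupwise heteroscedasticity on a balanced panel of
n groups with T observations each (here n = 5 firms and T = 20 years; the test
has n - 1 = 4 degrees of freedom, matching the chi2(4) output above):

$$ \mathrm{LR} \;=\; nT\,\ln\hat{\sigma}^2 \;-\; T\sum_{i=1}^{n}\ln\hat{\sigma}_i^2 \;\sim\; \chi^2_{n-1}, $$

where \hat{\sigma}^2 is the pooled (homoscedastic) variance estimate and the
\hat{\sigma}_i^2 are the group-specific estimates; the disp line above is this
formula with the quoted variance estimates plugged in.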
I did check the "Edition 4 Errata"
(http://pages.stern.nyu.edu/~wgreene/Text/econometricanalysis.htm) and this is
not listed, though it does state that for the equation on page 599 the two
occurrences of "log" should be "ln."
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
MathGroup Archive: January 2000 [00325]
Simplifying Finite Sums With A Variable # of Terms
• To: mathgroup at smc.vnet.net
• Subject: [mg21744] Simplifying Finite Sums With A Variable # of Terms
• From: Wretch <arc at astro.columbia.edu>
• Date: Wed, 26 Jan 2000 03:45:43 -0500 (EST)
• Organization: Vacuum
• Sender: owner-wri-mathgroup at wolfram.com
Hello. I'm a user of Mathematica 3.0. A simple example of
what I'd like to do is as follows:
Suppose you have the finite series S[i], which is equal to
x[1]+x[2]+...+x[i], where the total # of terms i is left variable.
I'd like Mathematica to calculate the difference
S[N]-S[N-1] = x[N]-x[0].
I've tried commands like
Expand[S[N]-S[N-1]] and Simplify[S[N]-S[N-1]],
but Mathematica doesn't simplify it as you would expect.
It basically does nothing. I suspect that it needs some
sort of clarification as to the nature of N (i.e. it's a
positive integer), but I'm not sure. Is there an easy
way for me to do what I'd like?
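One possible workaround, sketched here as an illustration (it assumes
S[i] = x[1]+...+x[i] as described above, in which case the difference should
reduce to the last term; lowercase n is used for the symbolic limit because N
is a built-in Mathematica function):

(* Sum with a symbolic upper limit and an undefined summand stays
   unevaluated, so the telescoping step has to be made explicit. *)
S[m_] := Sum[x[k], {k, 1, m}]

(* Rewrite only the sum whose upper limit is exactly n as
   "everything up to n-1, plus the last term"; the two identical
   partial sums then cancel. *)
(S[n] - S[n - 1]) /. Sum[x[j_], {j_, 1, n}] :> S[n - 1] + x[n]

(* output: x[n] *)

The rule deliberately matches the upper limit n literally, so it rewrites S[n]
but leaves S[n - 1] untouched; the two identical Sum expressions then cancel
syntactically, leaving x[n].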