Dummies guide to the latest “Hockey Stick” controversy
MBH98 were particularly interested in whether the tree ring data showed significant differences from the 20th century calibration period, and therefore normalized the data so that the mean over this period was zero. As discussed above, this will emphasize records that have the biggest differences from that period (either positive or negative). Since the underlying data have a ‘hockey stick’-like shape, it is therefore not surprising that the most important PC found using this convention resembles the ‘hockey stick’. There are actually two significant PCs found using this convention, and both were incorporated into the full reconstruction.
4) Does using a different convention change the answer?
As discussed above, a different convention (MM05 suggest one that has zero mean over the whole record) will change the ordering, significance and number of important PCs. In this case, the number of significant PCs increases to 5 (maybe 6) from 2 originally. This is the difference between the blue points (MBH98 convention) and the red crosses (MM05 convention) in the first figure. Also, PC1 in the MBH98 convention moves down to PC4 in the MM05 convention. This is illustrated in the figure on the right: the red curve is the original PC1 and the blue curve is MM05 PC4 (adjusted to have the same variance and mean). But as we stated above, the underlying data have a hockey stick structure, and so in either case the ‘hockey stick’-like PC explains a significant part of the variance. Therefore, using the MM05 convention, more PCs need to be included to capture the significant information contained in the tree ring network.
This figure shows the difference in the final result between using the original convention with 2 PCs (blue) and the MM05 convention with 5 PCs (red). The MM05-based reconstruction is slightly less skillful when judged over the 19th century validation period but is otherwise very similar. In fact, any calibration convention will lead to approximately the same answer as long as the PC decomposition is done properly and one determines how many PCs are needed to retain the primary information in the original data.
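The effect of the centering convention is easy to reproduce numerically. The sketch below uses a synthetic proxy network (my own toy construction, not the MBH98 data): unit-variance noise series, a few of which carry a 20th-century "blade". Centering on the late "calibration" period inflates the share of variance captured by the blade-carrying leading PC relative to full-record centering, which is why the convention changes which PCs look important.

```python
import numpy as np

rng = np.random.default_rng(0)
n_t, n_series = 200, 30
t = np.arange(n_t)

# Toy proxy network: white noise everywhere, plus a "blade" rising over
# the last 50 steps in the first 5 series (values chosen for illustration).
blade = np.where(t >= 150, 3.0 * (t - 150) / 50.0, 0.0)
X = rng.normal(size=(n_t, n_series))
X[:, :5] += blade[:, None]

def pc1_variance_fraction(X, center_slice):
    """Center every series on the chosen reference period, then return the
    fraction of total variance captured by the first principal component."""
    Xc = X - X[center_slice].mean(axis=0)
    s = np.linalg.svd(Xc, compute_uv=False)
    return s[0] ** 2 / (s ** 2).sum()

frac_calib = pc1_variance_fraction(X, slice(150, None))  # MBH98-style centering
frac_full = pc1_variance_fraction(X, slice(None))        # MM05-style centering

print(f"PC1 fraction, calibration-period centering: {frac_calib:.2f}")
print(f"PC1 fraction, full-record centering:        {frac_full:.2f}")
```

With these toy numbers the calibration-period convention hands the hockey-stick pattern a noticeably larger share of the variance; under full-record centering the same information is spread over more PCs, so more of them must be retained.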
5) What happens if you just use all the data and skip the whole PCA step?
This is a key point. If the PCs being used were inadequate in characterizing the underlying data, then the answer you get using all of the data will be significantly different. If, on the other hand, enough PCs were used, the answer should be essentially unchanged. This is shown in the figure below. The reconstruction using all the data is in yellow (the green line is the same thing but with the ‘St-Anne River’ tree ring chronology taken out). The blue line is the original reconstruction, and as you can see the correspondence between them is high. The validation is slightly worse, illustrating the trade-off mentioned above, i.e., when using all of the data, over-fitting during the calibration period (due to the increased number of degrees of freedom) leads to a slight loss of predictability in the validation step.
6) So how do MM05 conclude that this small detail changes the answer?
Applications of the natural bilinear forms on the direct sum between a vector space and its dual
As is known, the vector space $V\oplus V^\ast$ admits the natural symmetric and skew-symmetric bilinear forms $$\langle X+\xi, Y+\eta\rangle_\pm := \frac{1}{2}\left(\xi(Y) \pm \eta(X)\right).$$
I am interested in collecting results concerning these bilinear forms and their applications. They were used for example in
linear-algebra clifford-algebras big-list
Both these forms are indefinite (trace zero), so perhaps they should not be called inner products in the title? – Noah Stein Feb 28 '13 at 15:25
This is a local version of a global result, namely that cotangent bundles are symplectic manifolds, and this gets used in mathematical physics. – Qiaochu Yuan Feb 28 '13 at 18:30
1 Answer
Search for Dirac structures or Courant algebroids in MathSciNet: these are common generalizations of symplectic and Poisson structures and use the symmetric bilinear form on $TM\times_M T^*M$ on a manifold. Namely, the graph of a symplectic structure, as well as the graph of a Poisson structure, are maximal isotropic subbundles, with further properties.
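As a one-line check of the isotropy claim (my own computation, using the pairing defined in the question): the graph of a two-form $\omega$, viewed as the set of elements $X + \omega(X,\cdot)$, is isotropic for the symmetric pairing because

```latex
\langle X + \omega(X,\cdot),\, Y + \omega(Y,\cdot)\rangle_+
  = \tfrac{1}{2}\bigl(\omega(X,Y) + \omega(Y,X)\bigr) = 0,
```

since $\omega$ is antisymmetric; maximality then follows from a dimension count.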
There is a lot of literature on them now.
Thank you. Indeed, Dirac structures are also mentioned in the works on Generalized complex geometry. – Cristi Stoica Mar 1 '13 at 8:22
The Symmetric Eigenproblem, 1980
"... When computing eigenvalues of sym metric matrices and singular values of general matrices in finite precision arithmetic we in general only expect to compute them with an error bound
pro-portional to the product of machine precision and the norm of the matrix. In particular, we do not expect to comp ..."
Cited by 80 (14 self)
When computing eigenvalues of symmetric matrices and singular values of general matrices in finite precision arithmetic we in general only expect to compute them with an error bound proportional to the product of machine precision and the norm of the matrix. In particular, we do not expect to compute tiny eigenvalues and singular values to high relative accuracy. There are some important classes of matrices where we can do much better, including bidiagonal matrices, scaled diagonally dominant matrices, and scaled diagonally dominant definite pencils. These classes include many graded matrices, and all symmetric positive definite matrices which can be consistently ordered (and thus all symmetric positive definite tridiagonal matrices). In particular, the singular values and eigenvalues are determined to high relative precision independent of their magnitudes, and there are algorithms to compute them this accurately. The eigenvectors are also determined more accurately than for general matrices, and may be computed more accurately as well. This work extends results of Kahan and Demmel for bidiagonal and tridiagonal matrices.
Cool art with geogebra!
Re: Cool art with geogebra!
Hi bobbym,
Yes, they are beautiful!
Hi pisquared,
I don't have much idea how it was constructed.
"Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha?
"Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=195174","timestamp":"2014-04-18T21:33:31Z","content_type":null,"content_length":"18712","record_id":"<urn:uuid:a9bfd238-edc9-4671-9d71-d585d01bacdd>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00574-ip-10-147-4-33.ec2.internal.warc.gz"} |
how to convert formula units to moles
From Wikipedia
Units conversion by factor-label
Many, if not most, parameters and measurements in the physical sciences and engineering are expressed as a numerical quantity and a corresponding dimensional unit; for example: 1000 kg/m³, 100 kPa/
bar, 50 miles per hour, 1000 Btu/lb. Converting from one dimensional unit to another is often somewhat complex and being able to perform such conversions is an important skill to acquire. The
factor-label method, also known as the unit-factor method or dimensional analysis, is a widely used approach for performing such conversions. It is also used for determining whether the two sides of
a mathematical equation involving dimensions have the same dimensional units.
The factor-label method for converting units
The factor-label method is the sequential application of conversion factors expressed as fractions and arranged so that any dimensional unit appearing in both the numerator and denominator of any of
the fractions can be cancelled out until only the desired set of dimensional units is obtained. For example, 10 miles per hour can be converted to meters per second by using a sequence of conversion
factors as shown below:
(10 mile / 1 hour) × (1609 meter / 1 mile) × (1 hour / 3600 second) = 4.47 meter / second
It can be seen that each conversion factor is equivalent to the value of one. For example, starting with 1 mile = 1609 meters and dividing both sides of the equation by 1 mile yields 1 mile / 1 mile
= 1609 meters / 1 mile, which when simplified yields 1 = 1609 meters / 1 mile.
So, when the units mile and hour are cancelled out and the arithmetic is done, 10 miles per hour converts to 4.47 meters per second.
As a more complex example, the concentration of nitrogen oxides (i.e., NOx) in the flue gas from an industrial furnace can be converted to a mass flow rate expressed in grams per hour (i.e., g/h) of
NOx by using the following information as shown below:
NOx concentration :
= 10 parts per million by volume = 10 ppmv = 10 volumes/10^6 volumes
NOx molar mass :
= 46 kg/kgmol (sometimes also expressed as 46 kg/kmol)
Flow rate of flue gas :
= 20 cubic meters per minute = 20 m³/min
The flue gas exits the furnace at 0 °C temperature and 101.325 kPa absolute pressure.
The molar volume of a gas at 0 °C temperature and 101.325 kPa is 22.414 m³/kgmol.
(10 m³ NOx / 10^6 m³ gas) × (20 m³ gas / 1 minute) × (60 minute / 1 hour) × (1 kgmol NOx / 22.414 m³ NOx) × (46 kg NOx / 1 kgmol NOx) × (1000 g / 1 kg) = 24.63 g NOx / hour
After cancelling out any dimensional units that appear both in the numerators and denominators of the fractions in the above equation, the NOx concentration of 10 ppm[v] converts to mass flow rate of
24.63 grams per hour.
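The bookkeeping above can be mechanized as a chain of (numerator, denominator) factors; a minimal sketch covering both worked examples (the helper name is my own, not a standard library):

```python
from functools import reduce

def convert(value, factors):
    """Apply a chain of conversion factors, each given as
    (numerator, denominator), to a numeric quantity."""
    return reduce(lambda v, f: v * f[0] / f[1], factors, value)

# 10 miles per hour -> meters per second
mph_to_ms = convert(10, [(1609, 1),    # 1609 meter / 1 mile
                         (1, 3600)])   # 1 hour / 3600 second

# 10 ppmv NOx in 20 m^3/min of flue gas -> grams of NOx per hour
nox_g_per_h = convert(10, [
    (1, 10**6),   # ppmv: m^3 NOx per 10^6 m^3 gas
    (20, 1),      # 20 m^3 gas / 1 minute
    (60, 1),      # 60 minute / 1 hour
    (1, 22.414),  # 1 kgmol NOx / 22.414 m^3 NOx (at 0 degC, 101.325 kPa)
    (46, 1),      # 46 kg NOx / 1 kgmol NOx
    (1000, 1),    # 1000 g / 1 kg
])

print(f"{mph_to_ms:.2f} m/s")    # 4.47
print(f"{nox_g_per_h:.2f} g/h")  # 24.63
```

The dimensional cancellation is implicit here (the comments track it by hand); a fuller implementation would carry the unit labels along and check that they cancel.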
Checking equations that involve dimensions
The factor-label method can also be used on any mathematical equation to check whether or not the dimensional units on the left hand side of the equation are the same as the dimensional units on the
right hand side of the equation. Having the same units on both sides of an equation does not guarantee that the equation is correct, but having different units on the two sides of an equation does
guarantee that the equation is wrong.
For example, check the Universal Gas Law equation of P·V = n·R·T, when:
• the pressure P is in pascals (Pa)
• the volume V is in cubic meters (m³)
• the amount of substance n is in moles (mol)
• the universal gas law constant R is 8.3145 Pa·m³/(mol·K)
• the temperature T is in kelvins (K)
(Pa)(m³) = (mol / 1) × ((Pa)(m³) / (mol)(K)) × (K / 1)
As can be seen, when the dimensional units appearing in the numerator and denominator of the equation's right hand side are cancelled out, both sides of the equation have the same dimensional units.
The factor-label method can convert only unit quantities for which the units are in a linear relationship intersecting at 0. Most units fit this paradigm. An example for which it cannot be used is
the conversion between degrees Celsius and kelvins (or Fahrenheit). Between degrees Celsius and kelvins, there is a constant difference rather than a constant ratio, while between Celsius and
Fahrenheit there is both a constant difference and a constant ratio. Instead of multiplying the given quantity by a single conversion factor to obtain the converted quantity, it is more logical to
think of the original quantity being divided by its unit, being added or subtracted by the constant difference, and the entire operation being multiplied by the new unit. Mathematically, this is an
affine transform (ax+b), not a linear transform (ax). Formally, one starts with a displacement (in some units) from one point, and ends with a displacement (in some other units) from some other point.
For instance, the freezing point of water is 0 in Celsius and 32 in Fahrenheit, and a 5 degrees change in Celsius correspond to a 9 degrees change in Fahrenheit. Thus to convert from Fahrenheit to
Celsius one subtracts 32 (displacement from one point), divides by 9 and multiplies by 5 (scales by the ratio of units), and adds 0 (displacement from new point). Reversing this yields the formula
for Celsius; one could have started with the equivalence between 100 Celsius and 212 Fahrenheit, though this would yield the same formula at the end.
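The offset-then-scale structure of the affine transform is clearest in code; a minimal sketch:

```python
def f_to_c(f):
    # Affine transform: remove the 32-degree offset, then scale by the
    # ratio of degree sizes (5 Celsius degrees per 9 Fahrenheit degrees).
    return (f - 32) * 5 / 9

def c_to_f(c):
    # Inverse: scale first, then add the offset back.
    return c * 9 / 5 + 32

print(f_to_c(32))   # 0.0   (freezing point of water)
print(f_to_c(212))  # 100.0 (boiling point of water)
print(c_to_f(-40))  # -40.0 (the two scales agree at -40)
```

A single multiplicative conversion factor could never reproduce the first two lines, which is exactly the point made above.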
From Yahoo Answers
Question:Would anyone happen to know the answer to the following questions: *For each of these I need a number answer* 1. How many atoms would be in 2 moles of an element? 2. How many atoms would be
in half a mole of an element? 3. How many atoms would be in 5 moles of Sulfur? 4. What is the mass of one mole of Sulfur? Thanks, Amy Levin
Answers:
1. 2 × 6.022×10^23
2. 6.022×10^23 / 2
3. 5 × 6.022×10^23
4. 32, I believe; it's the mass number on the periodic table.
A mole is 6.022×10^23 particles, and the mass is the bottom number on the periodic table.
Question:please show work.
Answers:Moles = 1 x 10^24 / 6.02 x 10^23 =1.7
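The conversions in these answers are a single multiplication or division by Avogadro's number; a small sketch:

```python
AVOGADRO = 6.022e23  # particles per mole

def moles_to_particles(moles):
    return moles * AVOGADRO

def particles_to_moles(particles):
    return particles / AVOGADRO

print(moles_to_particles(2))     # 1.2044e+24 atoms in 2 moles
print(moles_to_particles(0.5))   # 3.011e+23 atoms in half a mole
print(particles_to_moles(1e24))  # about 1.66 moles (1.7 when rounded)
```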
Answers: Molar mass. So, say you want to know what 1 mole of C2H6 is. The atomic mass of C is 12, so 12×2 is 24. The atomic mass of H is 1, so 24+6 is 30. So the molar mass of C2H6 is 30 g/mol; that is, 30 grams of C2H6 is 1 mole of C2H6.
Question: I need this for a chemistry project. It's a very small project. I do need to show some work, though. I found out that a marshmallow is 20 grams. 1 mol is 6.022 × 10^23, so 1 mole of marshmallows is 1.21 × 10^25 grams. How can I figure out how much this is in distance? I have no idea how to do this. Please, help! Thank you!
Answers: How can you possibly know the moles of marshmallows if you don't know their molecular formula? Did your teacher just assign Avogadro's number = 1 mol? Because typically this is a conversion. Secondly, by finding out how much it is in distance, are you referring to just converting your answer to meters? If that's the case, moles to meters... you got me there.
From Youtube
A Mole Is A Unit: HEY, before blaming me for stealing, I AM FORK; forknshoe was disbanded. So yeah, watch what you say. ACTUAL DESCRIPTION: An epic about the unit of measurement, moles, and examples of how huge a mole is! Made in gmod! Fork: Yeah, another school project, but I like this one a lot more than the last one. Let me know what you think :D Special thanks to Darkroy12 for all the help :D
How to Convert Units - Unit Conversion Made Easy: "The most entertaining unit conversion video ever made!" This tutorial demonstrates the basics of how to convert from one unit to another. We set up the general formula, then convert from meters to centimeters, pounds to kilograms, and dollars to yen.
Computation of $\sqrt {{x \mathord{\left/ {\vphantom {x d}} \right. \kern-\nulldelimiterspace} d}}$ in a Very High Radix Combined Division/Square-Root Unit with Scaling and Selection by Rounding
February 1998 (vol. 47 no. 2)
pp. 152-161
Abstract—A very-high radix digit-recurrence algorithm for the operation $\sqrt {{x \mathord{\left/ {\vphantom {x d}} \right. \kern-\nulldelimiterspace} d}}$ is developed, with residual scaling and
digit selection by rounding. This is an extension of the division and square-root algorithms presented previously, and for which a combined unit was shown to provide a fast execution of these
operations. The architecture of a combined unit to execute division, square-root, and $\sqrt {{x \mathord{\left/ {\vphantom {x d}} \right. \kern-\nulldelimiterspace} d}}$ is described, with inverse
square-root as a special case. A comparison with the corresponding combined division and square-root unit shows a similar cycle time and an increase of one cycle for the extended operation with
respect to square-root. To obtain an exactly rounded result for the extended operation a datapath of about 2n bits is needed. An alternative is proposed which requires approximately the same width as
for square-root, but produces a result with an error of less than one ulp. The area increase with respect to the division and square root unit should be no greater than 15 percent. Consequently,
whenever a very high radix unit for division and square-root seems suitable, it might be profitable to implement the extended unit instead.
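The digit-recurrence unit itself is beyond a short sketch, but for contrast, here is the multiplicative scheme such units compete with: a Newton iteration for the inverse square root, arranged to evaluate $\sqrt{x/d}$ (my own illustrative code, not the architecture described in this paper):

```python
import math

def sqrt_x_over_d(x, d, iterations=30):
    """Evaluate sqrt(x/d) for x, d > 0 via Newton's iteration
    y <- y * (3 - r*y*y) / 2, which converges to 1/sqrt(r).
    With r = d/x we get 1/sqrt(r) = sqrt(x/d)."""
    r = d / x
    # Seed within a factor sqrt(2) of the answer so the iteration converges.
    y = 2.0 ** (-math.floor(math.log2(r)) / 2)
    for _ in range(iterations):
        y = y * (3.0 - r * y * y) / 2.0
    return y

print(sqrt_x_over_d(2.0, 8.0))  # 0.5, since sqrt(2/8) = 1/2
```

Unlike a digit-recurrence unit, this scheme produces no exact remainder, which is one reason the paper's approach needs extra care (and a wider datapath) to deliver exact rounding.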
Index Terms:
Digit-recurrence algorithm, division, high-radix methods, inverse square-root, square-root.
Elisardo Antelo, Tomás Lang, Javier D. Bruguera, "Computation of $\sqrt {{x \mathord{\left/ {\vphantom {x d}} \right. \kern-\nulldelimiterspace} d}}$ in a Very High Radix Combined Division/
Square-Root Unit with Scaling and Selection by Rounding," IEEE Transactions on Computers, vol. 47, no. 2, pp. 152-161, Feb. 1998, doi:10.1109/12.663761
Posts about Z boson on A Quantum Diaries Survivor
Posted by dorigo in physics, science.
Tags: bremsstrahlung, CMS, LHC, PDF, QCD, QED, Z boson
comments closed
Thanks to the many offers for help received a few days ago, when I asked for hints on possible functional forms to interpolate a histogram I was finding hard to fit, I have successfully solved the
problem, and can now release the results of my study.
The issue is the following one: at the LHC, Z bosons are produced by electroweak interactions, through quark-antiquark annihilation. The colliding quarks have a variable energy, determined by parton distribution functions (PDF) which determine how much of the proton’s energy they carry; and the Z boson has a resonance shape which has a sizable width: 2.5 GeV, for a 91 GeV mass. The varying energy of the center of mass, determined by the random value of quark energies due to the PDF, “samples” the resonance curve, creating a distortion in the mass distribution of the produced Z bosons.
The above is not the end of the story, but just the beginning: in fact, there are electromagnetic corrections (QED) due to the radiation of photons, both “internally” and by the two muons into which
the Z decays (I am focusing on that final state of Z production: a pair of high-momentum muons from $Z \to \mu^+ \mu^-$). Also, electromagnetic interactions cause an interference with Z production,
because a virtual photon may produce the same final state (two muons) by means of the so-called “Drell-Yan” process. All these effects can only be accounted for by detailed Monte Carlo simulations.
Now, let us treat all of that as a black box: we only care to describe the mass distribution of muon pairs from Z production at the LHC, and we have a pretty good simulation program, Horace
(developed by four physicists in Pavia University: C.M. Carloni Calame, G. Montagna, O. Nicrosini and A. Vicini), which handles the effects discussed above. My problem is to describe with a simple
function the produced Z boson lineshape (the mass distribution) in different bins of Z rapidity. Rapidity is a quantity connected to the momentum of the particle along the beam direction: since the
colliding quarks have variable energies, the Z may have a high boost along that direction. And crucially, depending on Z rapidity, the lineshape varies.
In the post I published here a few days ago I presented the residuals of lineshape fits which used the original resonance form, neglecting all PDF and QED effects. By fitting those residuals with a
proper parametrized function, I was trying to arrive at a better parametrization of the full lineshape.
After many attempts, I can now release the results. The template for residuals is shown below, interpolated with the function I obtained from advice by Lubos Motl:
I then interpolated those residuals with parabolas, and extracted their fit parameters. Then, I could parametrize the parameters, as the graph below shows: the three degrees of freedom have roughly
linear variations with Z rapidity. The graphs show the five parameter dependences on Z rapidity (left column) for lineshapes extracted with the CTEQ set of parton PDF; for MRST set (center column);
and the ratio of the two parametrizations (right column), which is not too different from 1.0.
Finally, the 24 fits which use the $f(m,y)$ shape, now with all of the rapidity-dependent parameters fixed, are shown below (the graph shows only one fit; click to enlarge to see all of them).
The function used is detailed in the slide below:
I am rather satisfied by the result, because the residuals of these final fits are really small, as shown on the right: they are certainly smaller than the uncertainties due to PDF and QED effects.
The $f(m,y)$ function above will now be used to derive a parametrization of the probability that we observe a dimuon pair with a given mass $m$ at a rapidity $y$, as a function of the momentum scale
in the tracker and the muon momentum resolution.
Posted by dorigo in mathematics, personal, physics, science.
Tags: function, Z boson
I have a problem today -actually I’ve fiddled with it for a couple of days now. So, since it does not involve particles (at least not directly), I figured I’d bounce it off the mathematically
inclined among you: maybe I get an answer before I can figure my problem out by myself!
The problem is simple: find a functional form that can be a good fit, with suitable parameters, to the following graph:
(This is a residual of a Z lineshape fit to a relativistic Breit-Wigner function by the way, but you need not bother with these unnecessary details).
As you can see, we have a negative asymptote and a positive asymptote that have different values, and a central wiggling which has different “width” for the negative and positive component. I have
been trying several combinations like $f(x) = atan(h(x))*g(x)$, where g(x) is a gaussian and h(x) some kind of “warping factor” with a different slope in the negative and positive side (with respect
to x=91)… But I am getting nowhere. I am sure there is somebody out there that has a good advice, so please shoot!
UPDATE: Marius suggests a function in the comments thread below. I thank him for his input, but as is, the function $f(x) = A atan(x) + B atan(x+C) + D$ does not work well: see the best fit below
(parameters in the upper right legend are A,B,C,D as in the function suggested by Marius):
Maybe with suitable modifications this might work, though. Hmmmm…
UPDATE: Using the hint by Marius that the addition of another arctangent could account for the different height of the two asymptotes, I have cooked up a better fit:
This is better, but I am really not satisfied. The function has 11 degrees of freedom -which is not too troublesome since there are 300 points in the graph to fit anyway; but the function is UGLY:
$p_0 atan[(p_1-x) e^{-((x-p_3)/p_4)^2}] + p_5 atan[p_6(p_7-x)] + p_8 e^{-((x-p_9)/p_{10})^2}$
Any further idea on how to improve it ?
Hmmm, and I should add that having 11 parameters is a curse for me, because what I am going to do after I have a reasonable functional form is to study the parameters as a function of Z rapidity
(which modifies the original graph), and parameterize those 11 dependencies… I already have a headache!
UPDATE: Lubos makes a very good attempt with a simple ratio of polynomials in the comments thread, offering $f(x) = p_0 (x-p_1)/(p_2 x^2 + p_3 x + p_4)$ (he even offers some eyeballed parameters). Nice
try, but the problem is that the function seems to be very irregular. If one fits the center region, Lubos’ function obtains a good fit (see upper plot below); if one tries to extend the fit further
out on the tails, however, the fit rapidly worsens (lower plot).
Despite the shortcomings, I think I will investigate some ways to fix the function offered by Lubos -it has the potential of describing with few parameters the whole shape, once tweaked a bit…
UPDATE: Lubos himself tried to mend the function he proposed above, by adding a hyperbolic tangent. The function fits the whole range better, but it still fails to catch the subtleties of the slopes…
Here is a fit using his suggested parameters:
I think I will remove the hyperbolic tangent and work on some warping of the polynomial…
UPDATE: warping the x values above 91 GeV from Lubos’ polynomial with a function $f(x)=-p_0+p_1 x +\sqrt{(p_1-1)^2 x^2 + p_0^2}$ seems to work. The result is below:
The fit is not extremely precise, but these are residuals from a Breit-Wigner, so I guess that the multiplication of this function by the original shape will give a more than adequate
parametrization, for my goals. Next up is obtaining 50 different fits like the one above, one per each interval in Z rapidity from 0 to 5.0, and parametrizing each of the seven parameter of the fits…
Posted by dorigo in physics.
Tags: coupling constants, exams, GIM mechanism, QCD, subnuclear physics, top quark, University, W boson, Z boson
Here are the questions asked at an exam in Subnuclear Physics this morning:
• Draw the strong and electromagnetic coupling constants as a function of $Q^2$, explain their functional dependence using Feynman graphs of the corrections to the photon and gluon propagators, write their formula, and compute the value of the constants at $Q^2=M_Z^2$, given the values at $Q^2=1 MeV^2$ (QED) and $Q^2=1 GeV^2$ (QCD).
• The GIM mechanism: explain the need for a fourth quark using box diagrams of kaon decays to muon pairs. How does the charm contribution depend on its mass? What conclusion could be drawn from that dependence in the case of B mixing measurements in the eighties?
• Discuss a measurement of the top quark mass. For a dileptonic decay of top quark pairs, discuss the final state and its production rate.
• Discuss decay modes of W bosons and their branching fraction values. Discuss decay modes of Z bosons and their branching fraction values.
The student answered all questions well and got 30/30 points.
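A back-of-the-envelope version of the first question (my own sketch: one-loop QED running with charged-lepton loops only; the real answer also needs quark loops, which bring $1/\alpha(M_Z^2)$ down to roughly 128):

```python
import math

ALPHA0_INV = 137.036  # 1/alpha in the low-Q^2 (Thomson) limit
MZ = 91.1876          # Z mass in GeV
LEPTON_MASSES = [0.000511, 0.10566, 1.77686]  # e, mu, tau (GeV)

def inv_alpha(q):
    """One-loop QED running: each charged lepton lighter than q lowers
    1/alpha by ln(q^2/m^2) / (3*pi)."""
    return ALPHA0_INV - sum(
        math.log(q**2 / m**2) / (3.0 * math.pi)
        for m in LEPTON_MASSES if q > m
    )

print(f"1/alpha(M_Z^2), lepton loops only: {inv_alpha(MZ):.1f}")  # about 132
```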
Posted by dorigo in physics, science.
Tags: anomalous muons, CDF, D0, Higgs boson, LHC, Lubos Motl, new physics, PDF, QCD, Tevatron, top mass, top quark, Z boson
Here is the second part of the list of useful physics posts I published on this site in 2008. As noted yesterday when I published the list for the first six months of 2008, this list includes neither guest posts nor conference reports, which may be valuable but belong to a different place (and are linked from permanent pages above). In reverse chronological order:
December 29: a report on the first measurement of exclusive production of charmonium states in hadron-hadron collisions, by CDF.
December 19: a detailed description of the effects of parton distribution functions on the production of Z bosons at the LHC, and how these effects determine the observed mass of the produced Z
bosons. On the same topic, there is a maybe simpler post from November 25th.
December 8: description of a new technique to measure the top quark mass in dileptonic decays by CDF.
November 28: a report on the measurement of extremely rare decays of B hadrons, and their implications.
November 19, November 20, November 20 again, November 21, and November 21 again: a five-post saga on the disagreement between Lubos Motl and yours truly on a detail of the multi-muon analysis by CDF, which becomes an endless diatribe since Lubos won't listen to my attempts at making his brain work, and insists on his mistake. This leads to a back-and-forth between our blogs and a surprising happy ending when Motl finally apologizes for his mistake. Stuff for expert lubologists, but I could not help adding the above links to this summary. Beware: most of the fun is in the comments.
November 8, November 8 again, and November 12: a three-part discussion of the details in the surprising new measurement of anomalous multi-muon production published by CDF (whose summary is here).
Warning: I intend to continue this series as I find the time, to complete the detailed description of this potentially groundbreaking study.
October 24: the analysis by which D0 extracts evidence for diboson production using the dilepton plus dijet final state, a difficult, background-ridden signature. The same search, performed by CDF,
is reported in detail in a post published on October 13.
September 23: a description of an automated global search for new physics in CDF data, and its intriguing results.
September 19: the discovery of the $\Omega_b$ baryon, an important find by the D0 experiment.
August 27: a report on the D0 measurement of the polarization of Upsilon mesons -states made up by a $b \bar b$ pair- and its relevance for our understanding of QCD.
August 21: a detailed discussion of the ingredients necessary to measure with the utmost precision the mass of the W boson at the Tevatron.
August 8: the new CDF measurement of the lifetime of the $\Lambda_b$ baryon, which had previously been in disagreement with theory.
August 7: a discussion of the new cross-section limits on Higgs boson production, and the first exclusion of the 170 GeV mass, by the two Tevatron experiments.
July 18: a search for narrow resonances decaying to muon pairs in CDF data excludes the tentative signal seen by CDF in Run I.
July 10: An important measurement by CDF on the correlated production of pairs of b-quark jets. This measurement is a cornerstone of the observation of anomalous multi-muon events that CDF published
at the end of October 2008 (see above).
July 8: a report of a new technique to measure the top quark mass which is very important for the LHC, and the results obtained on CDF data. For a similar technique of relevance to LHC, also check
this other CDF measurement.
Posted by dorigo in news, personal, physics, science.
Tags: CDF, graviton, Higgs boson, Z boson
About a year ago I reported here on a search performed by CDF for events featuring two Z bosons, both decaying to electron-positron pairs: I had been an internal reviewer of that analysis, and I
discussed it in some detail after we approved it for publication. While the standard model expectation for electroweak production of two Z bosons is about 1.5 pb, and the process has indeed been established in CDF and D0 Run II data, the analysis was rather focused on a search for heavy mass resonances decaying to the ZZ final state: new physics, that is, either in the form
of a heavy Higgs boson, or of a graviton (in the Randall-Sundrum scenario), or other still fancier (and improbable) beasts.
CDF has now repeated that search by increasing the dataset size by a factor of three, and by including mixed final states which include muon pairs and even jet pairs. This makes the analysis
intrinsically interesting to me, since I have started a similar analysis with the CMS experiment, together with a PhD student in Padova, Mia Tosi. Mia and I will be looking for Higgs bosons in the
dilepton plus dijet final state, with particular emphasis on the $Z \to b \bar b$ decay, which is a signal with which we have quite some familiarity.
The new CDF search for high-mass ZZ events is a "signature-based" one: despite the reference to the Randall-Sundrum graviton, the analysis cuts are kept generic, such that a signal can be found for anything that decays to two Z bosons, and in case no signal is seen, a model-independent limit on the cross section can be set. The only limitation of the search is that the four-body mass is studied only above a minimum value of 300 GeV. Such a requirement steers the analysis away from phase-space regions where backgrounds dominate.
Once four objects (electrons, muons, and jets, with the specification that at most two jets are present) are selected with loose cuts, a statistical estimator is built to test the hypothesis that
they originate from the decay $X \to ZZ \to llll (lljj)$. It is a simple $\chi^2$ function, which uses the expected resolution on the two two-body masses and the resulting four-body mass to estimate how much the event departs from the tentative signal interpretation. Only in the case of jet pairs, an explicit cut requires the dijet mass to lie between 65 and 120 GeV, to avoid accepting too many random jet combinations.
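The exact estimator used by CDF is not spelled out here, but a toy version of such a $\chi^2$ might look like this (the resolutions and their values are hypothetical placeholders):

```python
def zz_chi2(m1, m2, m4, m_x_hypothesis,
            sigma1=3.0, sigma2=3.0, sigma4=10.0, m_z=91.19):
    """Toy chi^2 testing the X -> ZZ hypothesis: how far the two
    two-body masses are from M_Z, and the four-body mass from the
    hypothesised M_X.  Resolutions (in GeV) are illustrative."""
    return ((m1 - m_z) / sigma1) ** 2 \
         + ((m2 - m_z) / sigma2) ** 2 \
         + ((m4 - m_x_hypothesis) / sigma4) ** 2

# A clean candidate scores low; a random combination scores high.
print(zz_chi2(91.0, 92.0, 405.0, 400.0)
      < zz_chi2(70.0, 130.0, 405.0, 400.0))  # True
```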
While the $M_X > 300$ GeV region is the one where the signal is sought, the complementary region of the four-body mass is used as a control sample, to verify that background estimates obtained with
Monte Carlo simulations are in agreement with the observed data. The nice thing about such a spectacular signature as the production of two Z bosons is that backgrounds are exclusively of electroweak
nature: by having at least one $Z \to ll$ decay in the final state, the signal cannot be mimicked easily by purely quantum chromodynamical processes, which plague most hadron collider searches with
high rates. Besides regular $ZZ$ pairs from standard model processes, backgrounds include WZ, WW, and Z+jets production. At high four-body mass, however, all of these are really small, and even in
the 3 inverse femtobarns of proton-antiproton collisions analyzed by CDF for this search, they contribute only a few events; only the dilepton+dijet signature accepts a few hundred events, because of
the large cross-section of Z+2 jet production processes.
In the end, no signal is seen, and a cross-section limit is extracted as a function of the X mass. The limit is shown below, compared to the expected cross section for graviton production and decay to the ZZ final state. The comparison of the upper limit (the red curve) with the theory hatched line excludes gravitons with masses below 491 GeV, for a particular choice of the model parameter $k/M_p = 0.1$ (k is a warp factor for the extra dimensions, and $M_p$ is the Planck mass).
As a by-product of this analysis, a new set of excellent standard-model-like ZZ decay candidates has been selected. I am unable to show any of the new event displays here, because they have not been approved for public consumption by CDF yet… So please see the lego plot of a $ZZ \to eeee$ candidate below, extracted last year by the same authors. The two pairs of electrons make masses very close to that of the Z boson, as evidenced by the two pink numbers.
To read this graph, you have to know that the Greek letter $\eta$ is the pseudorapidity, basically a function of the angle that particles make with the beam axis. A pseudorapidity of zero means that the particle is emitted at 90 degrees from the beam, while positive and negative values indicate the proton and antiproton directions. The other coordinate, $\phi$, indicates the azimuthal angle in the transverse plane. The z axis (the height of the bars) indicates how much energy is deposited in the $\eta - \phi$ interval spanned by the bars. In bright pink are shown the four electron candidates, as measured by the CDF calorimeter, and each bar is labeled by the energy in GeV measured for it.
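For readers who want numbers, pseudorapidity is defined as $\eta = -\ln \tan(\theta/2)$, where $\theta$ is the polar angle to the beam:

```python
import math

def eta_from_theta(theta_deg):
    """Pseudorapidity from the polar angle to the beam axis:
    eta = -ln(tan(theta/2))."""
    theta = math.radians(theta_deg)
    return -math.log(math.tan(theta / 2.0))

print(round(eta_from_theta(90.0), 6))  # 0.0: perpendicular to the beam
print(eta_from_theta(10.0))            # large positive: close to the beam
```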
I am only left with the pleasant task of congratulating my colleagues Antonio Boveia, Ben Brau, and David Stuart for this new result, which greatly extends the scope of the analysis I reviewed last year. During my review I had encouraged them to pursue the other decay modes of ZZ pairs, and so they did. Well done, folks!
Posted by dorigo in personal, physics, science.
Tags: CMS, momentum scale, PDF, Z boson
Yesterday I posted a nice-looking graph without much explanation of how I determined it. Let me fill that gap here today.
A short introduction
Z bosons will be produced copiously at the LHC in proton-proton collisions. What happens is that a quark from one proton hits an antiquark of the same flavour in the other proton, and the pair
annihilates, producing the Z. This is a weak interaction: a relatively rare process, because weak interactions are much less frequent than strong interactions. Quarks carry colour charge as well as weak hypercharge, and most of the time when they hit each other what "reacts" is their colour, not their hypercharge. Similarly, when you meet John at the coffee machine you discuss football more often than Chinese checkers: in particle physics terms, that is because your football coupling with John is stronger than your Chinese-checkers coupling.
Posted by dorigo in personal, physics, science.
Tags: CMS, momentum scale, PDF, Z boson
Tonight I feel accomplished, since I have completed a crucial update of the cornerstone of the algorithm which provides the calibration of the CMS momentum scale. I have no time to discuss the
details tonight, but I will share with you the final result of a complicated multi-part calculation (at least, for my mediocre standards): the probability distribution function of measuring the Z boson mass at a certain value $M$, using the four-momenta of two muon tracks which correspond to an estimated mass resolution $\sigma_M$, when the rapidity of the Z boson is $Y_Z$.
The above might -and should, if you are not a HEP physicist- sound rather meaningless, but the family of two-dimensional functions $P(M,\sigma_M)_Y$ is needed for a precise calibration of the CMS tracker. They can be derived by convoluting the production cross-section of Z bosons at a given rapidity $Y$ with the proton's parton distribution functions using a factorization integral, and then convoluting the resulting functions with a smearing Gaussian distribution of width $\sigma_M$.
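The second convolution step, smearing a lineshape with a Gaussian of width $\sigma_M$, can be sketched numerically; here a bare Breit-Wigner stands in for the PDF-weighted cross section:

```python
import math

M_Z, GAMMA = 91.19, 2.5  # GeV

def breit_wigner(m):
    """Non-relativistic Breit-Wigner lineshape (unnormalized)."""
    return (GAMMA / 2.0) / ((m - M_Z) ** 2 + GAMMA ** 2 / 4.0)

def smeared(m, sigma, n=400, half_width=5.0):
    """Convolute the lineshape with a Gaussian of width sigma by direct
    numerical integration (midpoint rule) over +-5 sigma."""
    lo, hi = m - half_width * sigma, m + half_width * sigma
    step = (hi - lo) / n
    total = 0.0
    for i in range(n):
        mp = lo + (i + 0.5) * step
        g = math.exp(-0.5 * ((m - mp) / sigma) ** 2) \
            / (sigma * math.sqrt(2.0 * math.pi))
        total += breit_wigner(mp) * g * step
    return total

# Smearing lowers and broadens the peak:
print(smeared(M_Z, 2.0) < breit_wigner(M_Z))  # True
```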
Still confused? No worry. Today I will only show one sample result – the probability distribution as a function of $M$ and $\sigma_M$ for Z bosons produced at a rapidity $2.8 < |Y| < 2.9$, and
tomorrow I will explain in simple terms how I obtained that curve and the other 39 I have extracted today.
In the three-dimensional graph above, one axis has the reconstructed mass of muon pairs $M$ (from 71 to 111 GeV), the other has the expected mass resolution $\sigma_M$ (from 0 to 10 GeV). The height
of the function is the probability of observing the mass value $M$, if the expected resolution is $\sigma_M$. On top of the graph one also sees in colors the curves of equal probability displayed on
a projected plane. It will not escape the keen eye that the function is asymmetric in mass around its peak: that is entirely the effect of the parton distribution functions…
Posted by dorigo in personal, physics, science.
Tags: CMS, PDF, QCD, Z boson
The Z boson mass has been measured with exquisite precision in the nineties by the LEP experiments ALEPH, OPAL, DELPHI and L3, and by the SLD experiment at SLAC: we know its value to better than a
few MeV precision. The PDG gives $M_Z = 91.1876 \pm 0.0021 GeV$. Now, a precise Z mass is an important input to our theory, the Standard Model, and through its measurement, as well as that of other
Z-related quantities that the four LEP experiments and SLD measured with great precision, a giant leap forward has been made in the understanding of the subtleties of electroweak interactions.
For an experimental physicist, however, the knowledge of the Z mass is more a tool for calibration purposes than a key to theoretical investigations. Indeed, as I have discussed elsewhere recently, I am working on the calibration of the CMS tracker using the decays of Z bosons, as well as of lower-mass resonances. We take $Z \to \mu \mu$ decays, we measure muon tracks, determine the measured mass
of the Z boson with them, and compare the latter to the world average. This provides us with precious information on the calibration of the momentum measurement of muon tracks.
In CMS we will quickly collect large numbers of Z bosons, so statistics is not an issue: we will be able to study the calibration of tracks very effectively with those events. However, when
statistics is large, experimentalists start worrying about systematic uncertainties. Indeed, there are several effects that cause a difference between the mass value we reconstruct with muon tracks
and the true value of the Z boson mass -the one so well determined which sits in the PDG.
I decided to study one of those effects today: the mass shift due to parton distribution functions (PDF). When you collide protons against other protons, what creates a Z boson is the hard interaction between a quark and an antiquark. These constituents of the projectiles carry a fraction of the total proton momentum, but this fraction is unknown on an event-by-event basis. By studying proton collisions in different conditions and environments for a long time, we have been able to extract functions $f_q(x)$ -the parton distribution functions- which describe how likely it is that a quark q in the proton carries a fraction x of the proton's momentum. As an example, if the proton travels at 5 TeV as in the LHC, an x value of 0.1 means that the quark q will carry 500 GeV of momentum. Each quark flavour q (u,d,s,c,b) has its own different parton distribution function. The proton contains two valence up-quarks and one valence down-quark: it has a (uud) composition. Those quarks carry a good part of
the proton’s momentum, but a large share is due to the rest of partons the proton is made of: sea quark-antiquark pairs, and gluons. Protons do carry antiquarks of all kinds -five in total-, as well
as gluons, and these, too, get their own distribution function. A plot of the parton distribution functions of the proton (with a logarithmic x-axis to enhance the low-x behavior) is shown on the
right. Note the bumps of u- and d- quark distributions, in blue and green, respectively: those bumps are due to the valence quark contributions.
In reality, things are even more complicated than what I discussed above: you do not simply get away with one function for each of the 11 partons I mentioned this far, because these functions have a value which depends on the energy at which you probe the proton, $Q^2$: in a soft collision (which means a small $Q^2_1$), $f(x, Q^2_1)$ is very different from what it is in a harder one, $f(x, Q^2_2)$ (with a larger $Q^2_2$).
The reason for the weird behavior of parton distribution functions -their evolution with $Q^2$- is that quarks have the tendency of emitting gluons, becoming less energetic, and this tendency in turn
depends on the energy Q at which they are studied. What is stated above is encoded in very famous functions called DGLAP (Dokshitzer-Gribov-Lipatov-Altarelli-Parisi) equations. They are in a sense
another consequence of the “asymptotic freedom” exhibited by strongly interacting particles: at high energy they behave as free particles, emitting little color radiation, while at low energy their
interaction with the gluon field increases in strength. It is all due to the fact that the coupling constant of the theory, $\alpha_s$, is large at small Q. That constant is not a constant by any means: it runs with the energy scale.
You have every reason to be confused now: I was talking about calibrating the CMS tracker using muons, and now we are deep into Quantum ChromoDynamics. What gives? Well: Z bosons are created by
quark-antiquark annihilations, and those are found inside the colliding protons with probabilities which depend on their momentum fraction x, and on the total collision energy Q. Since the PDFs of quarks and antiquarks peak at very small values of x, the probability of a collision yielding a Z boson -which has a respectable mass of 91 GeV- is small. If the Z were lighter, more of them would be
produced. Now, the Z boson is a resonance, and like every resonance, it has a finite width. What that means is that not all Z bosons have exactly the same mass: while the peak is at 91.186 GeV, the
width is 2.5 GeV, which means that it is not infrequent for a Z boson to have a mass of 89, or 92 GeV, rather than the average value. This is described by the Z lineshape, a function called
$F(\Gamma,M) = \frac{\Gamma/2}{(M-M_Z)^2 + \Gamma^2/4}$.
The function is shown below.
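A few values of the lineshape, relative to its peak, can be computed directly from the formula above:

```python
def lineshape(m, m_z=91.19, gamma=2.5):
    """Breit-Wigner F = (Gamma/2) / ((M - M_Z)^2 + Gamma^2/4)."""
    return (gamma / 2.0) / ((m - m_z) ** 2 + gamma ** 2 / 4.0)

peak = lineshape(91.19)
for m in (89.0, 90.0, 91.19, 92.0, 94.0):
    # Probability relative to the peak value
    print(m, round(lineshape(m) / peak, 3))
```

A Z mass one width away from the peak is only mildly suppressed, which is why masses of 89 or 92 GeV are, as stated above, not infrequent.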
As you can see, there is a non-negligible probability that a Z boson has a mass quite different -even a few GeV off- from 91.19 GeV. Now, since Z bosons can be created at masses lower than $M_Z$, parton distribution functions will privilege them over Z bosons created at masses higher than $M_Z$ by the same amount, because parton distribution functions are larger at lower x. This creates a bias: the perfectly symmetric Breit-Wigner lineshape gets distorted by the preference of partons to carry a lower fraction of the proton momentum.
The distortion is very small, but it is very important to take it into account when one wants to use measured Z masses to precisely calibrate the track momentum measurement. To size up the effect of the PDF on the Z lineshape, one can compute an integral of the Breit-Wigner weighted with the PDF $f(x)$, taking into account the different combinations of quarks which give rise to a Z boson in proton-proton collisions.
A Z can be produced by the following quark-antiquark interactions:
• $u \bar u$: this can originate from a valence u-quark and a sea anti-u-quark, as well as from a sea u-quark and a sea anti-u-quark. The probability that this quark pair creates a Z depends on the
coupling of u-quarks to the Z boson, and this probability is a function of some coefficient predicted by electroweak theory. It is proportional to 0.11784.
• $d \bar d$: same as above, but the coupling is proportional to 0.15188.
• $s \bar s$: these can only occur through sea-sea interactions. The coefficient is the same as for d-quarks.
• $c \bar c$: these are due to the small charm component of the proton sea. They get the 0.11784 coefficient as u-quarks too.
• $b \bar b$: these are tiny, but still exist. b-quarks couple to the Z with the 0.15188 factor.
• $t \bar t$: these are basically zero.
• $g g$: gluon-gluon collisions cannot produce a Z boson: a massive spin-1 particle cannot be produced by two massless spin-1 gluons (the Landau-Yang theorem). Note that the same does not hold for the Higgs boson, which is a scalar (spin 0) particle: a gluon-gluon-scalar coupling is possible (through a quark loop), and in fact gluon fusion is the largest contribution to H production at the LHC.
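The post does not spell out the normalization of the 0.11784 and 0.15188 coefficients, but their ratio can be checked against the summed squared chiral couplings of the Z, assuming $\sin^2\theta_W \approx 0.23122$:

```python
def z_coupling(T3, Q, sin2w=0.23122):
    """Sum of squared left- and right-handed Z couplings for a fermion
    with weak isospin T3 and electric charge Q:
    g_L = T3 - Q*sin^2(theta_W),  g_R = -Q*sin^2(theta_W)."""
    g_l = T3 - Q * sin2w
    g_r = -Q * sin2w
    return g_l ** 2 + g_r ** 2

up = z_coupling(+0.5, +2.0 / 3.0)    # u, c
down = z_coupling(-0.5, -1.0 / 3.0)  # d, s, b
print(round(down / up, 3))  # 1.289, matching 0.15188 / 0.11784
```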
The size of the mass shift also depends on the rapidity of the Z boson, a quantity labeled by the letter Y (the dependence is shown in the last graph of this post, below). Rapidity is a measure of how fast the Z boson is moving in the detector reference frame: when one of the partons has a much larger momentum fraction than the one it is colliding against, the produced Z boson has a large momentum in the direction of the more energetic parton.
The rapidity distribution of Z bosons is shown in the graph below, separately for Z bosons produced by valence-sea collisions (in red) and by sea-sea collisions (in blue).
A rapidity Y=0 means that the Z was produced at rest in the detector, +5 is a fast-forward-moving Z, and -5 is a Z moving in the opposite direction with as much speed. As you can see, the valence-sea
interactions are the most asymmetric ones, predominantly producing a forward-moving Z boson.
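The link between the momentum fractions and the rapidity is simple, $Y = \frac{1}{2}\ln(x_1/x_2)$, so an asymmetric pair of fractions gives a boosted Z:

```python
import math

def z_rapidity(x1, x2):
    """Rapidity of a Z produced by partons with momentum fractions x1, x2
    (the proton moving along +z carries x1): Y = 0.5 * ln(x1/x2)."""
    return 0.5 * math.log(x1 / x2)

print(z_rapidity(0.01, 0.01))           # 0.0: symmetric collision, Z at rest
print(round(z_rapidity(0.1, 0.001), 2)) # 2.3: forward-moving Z
```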
On the right here I also plot with the same color-coding the x distribution of quarks taking part in the Z creation. The red distribution has both a very small-x and a very large-x component,
highlighting the asymmetric production.
Despite being in black and white, the most interesting plot is however the following one. It shows the average mass of the Z bosons (on the vertical scale, in GeV) as a function of the Z rapidity. The downward shift from 91.186 GeV is sizable -about 0.25 GeV overall- and it increases at large values of rapidity, where one of the two partons has a very small value of x, so that the collision “samples” a rapidly varying PDF for that parton.
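The downward pull described above can be reproduced with a toy model: weight the Breit-Wigner by a falling parton luminosity $\sim m^{-p}$, a stand-in for the real PDF convolution (the power p = 8 is purely illustrative):

```python
def lineshape(m, m_z=91.19, gamma=2.5):
    """Breit-Wigner lineshape of the Z."""
    return (gamma / 2.0) / ((m - m_z) ** 2 + gamma ** 2 / 4.0)

def mean_mass(luminosity_power=0.0, lo=71.0, hi=111.0, n=4000):
    """Average Z mass when the Breit-Wigner is weighted by a toy parton
    luminosity falling like m^(-p).  p = 0 recovers the bare lineshape."""
    num = den = 0.0
    step = (hi - lo) / n
    for i in range(n):
        m = lo + (i + 0.5) * step
        w = lineshape(m) * m ** (-luminosity_power)
        num += m * w * step
        den += w * step
    return num / den

bare = mean_mass(0.0)
weighted = mean_mass(8.0)  # steeply falling toy luminosity
print(weighted < bare)     # True: the weighting pulls the mean mass down
```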
2013 - 2014 Graduate Catalog
Doctor of Philosophy in Applied and Industrial Mathematics
Major: MATH Degree Awarded: Ph.D. Unit: GA Program Webpage: http://www.math.louisville.edu/graduate
Program Information
Undergraduate coursework equivalent to a major in mathematics from an accredited university. This should include at least a one-year course in either analysis or abstract algebra, equivalent to
Mathematics 501-502 and 521-522 at the University of Louisville. Candidates who have not taken both must complete the remaining sequence as part of their program.
All students admitted to the program must complete the following or their equivalent:
A. Core Courses - 24 semester hours
1. Two sequences, each of six (6) semester hours, chosen from:
□ Algebra MATH 621-622
□ Combinatorics MATH 681-682
□ Real Analysis MATH 601-602
2. Two sequences, each of six (6) semester hours, chosen from:
□ Applied Statistics MATH 665-667
□ Mathematical Modeling MATH 635-636
□ Probability & Mathematical Statistics MATH 660-662
B. Additional Topics and Area of Specialization - 18 semester hours
In addition to the core, an application area of 18 hours will be required. The courses may be in a department outside Mathematics. They will be chosen in consultation with the student's advisory committee.
C. Qualifying Examinations
Students must pass three written examinations. Two of these will be chosen from the areas of Algebra, Combinatorics and Real Analysis. The third will be chosen from the areas of Applied Statistics,
Mathematical Modeling and Probability & Mathematical Statistics. Normally, these will be taken within a year of completion of the core coursework. These examinations need not be taken together and
each may be attempted at most twice.
D. Industrial Internship
Each student, with prior approval of the Graduate Studies Director, has to complete an internship in an appropriate industrial or governmental setting, or have equivalent experience.
Computing Project: Each student must complete an approved computer project related to the student’s area of concentration.
Candidacy Examination: Each student must pass an oral examination in the chosen area of concentration. Usually, at most two attempts at passing this examination will be permitted. Students who wish to make a third attempt must petition the Graduate Studies Committee of the department for permission to do so.
Dissertation – 12 to 18 semester hours: A doctoral dissertation is required of each student.
Dual Degree Program in Applied and Industrial Mathematics and Biostatistics - Decision Science
Dual degrees in Biostatistics-Decision Science and Applied and Industrial Mathematics are offered by the College of Arts and Sciences and the School of Public Health and Information Sciences. Upon
completion of the program, students will receive a Ph.D. in Applied and Industrial Mathematics and an M.S.P.H. in Biostatistics-Decision Science.
Application Procedure
To be admitted to the program, the student is required to apply to and be accepted by both the Department of Mathematics and the Biostatistics-Decision Science Program. A student seeking admission
into this program must submit letters to both the Department of Mathematics and the Department of Bioinformatics and Biostatistics stating the intent to take advantage of the dual degree program, and
stating whether the student is interested in the Biostatistics or the Decision Science concentration. Students must submit two (2) recent letters of recommendation with their letter of intent.
Applicants will receive written notification stating whether their admission request has been approved or disapproved.
Degree Requirements
Required Courses
The required courses for the dual degree program consist of all non-overlapping core courses for both the Ph.D. in Applied and Industrial Mathematics and the M.S. in Biostatistics - Decision Science,
as well as the requirements for either the Decision Science or Biostatistics concentration within the Biostatistics-Decision Science program.
• Core course requirements for the Ph.D. in Applied and Industrial Mathematics (24 semester hours).
□ Two sequences, each of six (6) semester hours, chosen from:
☆ Algebra - Mathematics 621 and 622
☆ Combinatorics - Mathematics 681 and 682
☆ Real Analysis - Mathematics 601 and 602
□ Two sequences, each of six (6) semester hours, chosen from:
☆ Mathematical Modeling - Mathematics 635 and 636
☆ Applied Statistics - Mathematics 665 and 667
☆ Probability and Mathematical Statistics - Mathematics 660 and 662
□ Courses taken to satisfy the mathematics component of the dual degree program can be used to satisfy the 6 to 9 semester hours of electives required for the M.S. in Biostatistics-Decision Science.
• Core course requirements derived from the M.S. in Biostatistics-Decision Science (12 to 18 semester hours).
□ The following courses are required for both tracks:
□ Introduction to Public Health and Epidemiology - PHEP 511 (3 semester hours)
□ Social and Behavioral Sciences in Health Care - PHCI 631 (2 semester hours)
□ Introduction to Environmental Health
□ Health Economics
□ Biostatistics-Decision Science Seminar - PHDA 602 (4 semester hours)
□ Probability and Mathematical Statistics - PHST 661 and 662 (6 semester hours)*
* This requirement is waived if the student takes the Mathematics 660, 662 sequence listed above.
• Requirements from one of the two possible concentrations for the M.S. in Biostatistics - Decision Science. (5 to 6 semester hours)
□ Biostatistics Concentration Requirements:
☆ Biostatistical Methods I and II - PHDA 680 and 681 (6 semester hours)
□ Decision Science Concentration Requirements:
☆ Ethical Issues in Decision Making - PHDA 605 (2 semester hours)
☆ Decision Analysis - PHDA 663 (3 semester hours)
Courses taken to satisfy the Biostatistics-Decision Science component of the dual degree program can be applied to the 18 semester hours of electives which are required for the Ph.D. in Applied and
Industrial Mathematics.
Combined Industrial Internship, Practicum and Masters Thesis. (6-8 semester hours)
The Industrial Internship required by the Department of Mathematics and the Public Health Practicum and Masters thesis required for the M.S. can be satisfied by a single internship and technical
report which simultaneously satisfies the requirements for both degrees. Specifically, the internship must both focus on public health so that it satisfies the Public Health Practicum (PHDA 603 and
PHDA 604), and contain advanced mathematical content, so that it satisfies the Ph.D.-level Industrial Internship (Math 694). Likewise, the technical report must meet two requirements: it must satisfy
the requirements for a Master’s thesis for the M.S. degree (PHDA 666) and it must be written at an advanced mathematical level expected for the Ph.D.-level Industrial Internship. The six (6) to eight
(8) semester hours of the internship will be divided evenly between the Department of Mathematics and the Biostatistics-Decision Science Program.
Dissertation and Qualifying Examinations
In order for the student to fulfill the Ph.D. requirements, the student must satisfy both the qualifying examination and dissertation requirements for the Ph.D. in Applied and Industrial Mathematics.
Failure to complete these requirements will not jeopardize the M.S. degree, if all its requirements have been satisfactorily completed.
Special Considerations: Students who have already completed a Master’s degree in the Department of Mathematics
To preserve the spirit of a dual degree, such students need to complete 36 semester hours of courses as required for the M.S. in Biostatistics-Decision Science. Six (6) semester hours from the
previous Master’s degree coursework can be applied to this requirement. The remaining semester hours must be chosen from the list of approved electives (not covered by core courses) for the Department of Bioinformatics and Biostatistics, with preference given to courses in the Departments of Mathematics and Bioinformatics and Biostatistics. The Combined Industrial Internship, Practicum and Masters
Thesis cannot be replaced by a previous Master’s thesis. This requirement must be satisfied as previously described, meeting the specifications of both departments.
Departmental Faculty
Thomas Riedel
Department Chair
Csaba Biró
Assistant Professor
Mary E. Bradley
Associate Professor
Udayan B. Darji
Richard M. Davitt
Arnab Ganguly
Assistant Professor
Roger H. Geeslin
Ryan S. Gill
Associate Professor
Graduate Advisor
Changbing Hu
Associate Professor
Chakram S. Jayanthi
Adjunct Professor
Thomas M. Jenkins
André Kézdy
Director of Graduate Studies
Lael F. Kinch
Ewa Kubicka
Grzegorz Kubicki
Hamid Kulosman
Associate Professor
Lee Larson
Kiseop Lee
Associate Professor
Bingtuan Li
Jiaxu Li
Associate Professor
Jinjia Li
Assistant Professor
Robert B. McFadden
Alica Miller
Associate Professor
Robert Powers
Distinguished Teaching Professor
Assistant Chair
Prasanna Sahoo
Steven Seif
Associate Professor
David Swanson
Associate Professor
Undergraduate Director
Cristina Tone
Assistant Professor
David J. Wildstrom
Associate Professor
W. Wiley Williams
Shi-Yu Wu
Adjunct Professor
Yongzhi Steve Xu
Stephen Young
Assistant Professor
Wei-Bin Zeng
Associate Professor | {"url":"http://louisville.edu/graduatecatalog/programs/degree-programs/academic/ga/mathphd/","timestamp":"2014-04-17T15:49:24Z","content_type":null,"content_length":"32348","record_id":"<urn:uuid:0f72e8b8-99f1-4f24-90a2-1b4da62b70af>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00214-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts Tagged 'Algebra'
I like to say that algebra is the scaffolding that lets us do calculus in different structures.—
range (@djrange) March 31, 2011
Last Thursday, I gave my first lecture in a graduate class of mathematics. There were three other students in that class, and the professor. All of the students were graduate students in Algebra. I
was the sole person in Analysis. At first, this intimate setting was pretty daunting. I hadn’t taken the class last semester and the prof obviously didn’t like me being part of it¹.
Continue reading ‘Giving Lectures & Presentations in Graduate Classes (Commutative Algebra)†’
Published October 24, 2009 education Leave a Comment
category theory
commutative diagram
couniversal object
universal object
One of the marks of being a good prof is when they see that students didn’t understand something and go over it again. My algebra prof has made a habit of this, especially when he goes over stuff in
class too quickly. This happened last week and left me quite furious. Someone must have mentioned something to the prof², and he went over what we saw in the last hour again. This took an hour out of
the three-hour class. I appreciated it. I realized that there were some undergrads in our class, who weren't familiar with some of the more abstract concepts of category theory³. It was an issue. Class
was great today. I was drinking my strong milk tea and noting stuff down. We saw direct sums and (co)universal objects. Having proofs done with commutative diagrams is so elegant and simple.
Continue reading ‘Categorically Yours & Typographical Musings†’
Published October 22, 2009 cycling , education , mathematics 1 Comment
azumaya algebra
clifford algebra
differential manifold
graduate school
graduate studies
holomorphic analysis
riemann surfaces
Things got abstract very quickly in complex analysis. We are constructing differentiable manifolds in the complex plane, to see the topology of holomorphic domains. It blends together quite a few
algebraic notions, as well as some beautiful topology, and it’s extremely interesting. The prof told us that this would fit neatly into a Riemann manifold or Riemann surfaces class.
Why is this so interesting? It explains exactly why derivatives and integrals actually work in the complex plane. Well, that’s not really true. It’s more than that. Applying calculus to complex
functions is certainly richer than for real functions. We delve into the differential k-forms and their construction⁷. It’s quite elegant, I have to say. Some of my classmates were a bit dismayed by
the abstract nature of this week’s lectures, but it had my full attention⁴.
I also noticed that we started using Berenstein & Gay’s book, Complex Variables¹. We’re about 5 weeks into the semester and we are on page 10 or so⁵. The level of difficulty in this class just went
up a notch. Also, the level of complexity went up. That’s why they call it complex analysis!
Published October 14, 2009 education , mathematics , travelogue 2 Comments
borel measure
Disposable Teachers
graduate school
graduate studies
haar measure
hausdorff dimension
Jo Rees
lebesgue measure
Measure and Integral
Measure Theory
Paul Halmos
radon measure
teaching directors
I’ve been working hard this week at learning more about measure theory. It’s a really interesting research subject and there are quite a few things that I didn’t know about it. In class, we are
currently seeing the Lebesgue measure and related topics. I've read up on the Borel, Haar, Radon, and Daniell measures.
I’ve got quite a few books in this area, including Paul Halmos’ Measure Theory¹ that I got for $6. The Measure and Integral² book that is used in my real analysis class is finally available. I have
it photocopied, but I’d rather buy it. It’s a bit more expensive, but not that much. It’s $46. Einstein has it for $69.
The real analysis professor spends 3hrs a week copying that book onto the blackboard. It’s really strange. He doesn’t give any further examples and quite a few of my classmates abandoned the class
after the first week.
As I mentioned before, the classes are what you make of them. At my level, having a great professor doesn’t really matter, unless he’s my thesis adviser. I’m actually lucky that 2 out of my 3 profs
are good. Since I am going to specialize in analysis, probably abstract analysis and topology, the real analysis class is fundamental to my mathematical development, as it introduces all sorts of
concepts that were probably not seen at an undergraduate level. We’ve started the Lebesgue integral and I hadn’t seen it before.
Published October 8, 2009 education , mathematics 1 Comment
abstract algebra
azumaya algebra
clifford algebra
cyclic groups
graduate school
graduate studies
I’ve spent about 4-5 hours on my algebra homework. I still have another 27 problems to finish¹. Naturally, they get harder as you go along. Kind of annoying. I like writing easier ones first and then
moving to harder ones a bit later. I like this to happen in each problem set. For some reason, I had trouble with cyclic groups and had to review the subject matter before completing two problems.
With these types of abstract math, it's best to stop when you feel it slipping away or when you hit a problem that looks impossible; let it stew and come back to it. This has been my technique for
the last few years and it works well. I have to be really careful with the solutions. I have all of the solutions of the problems that I’m doing in Hungerford’s Algebra².
Published October 6, 2009 art , education , mathematics , travelogue Leave a Comment
university life
The rain has finally abated. I love the rain in Canada, but I hate it here. Why? You just get wet all the time. You get wet when you get on the scooter, when you drive around, and when you get off.
Rain gear does wonders, but it’s annoying to have to carry it around and wait for it to dry. Also, driving in the rain is a lot more dangerous. I tend to be really careful.
Temperatures have cooled down significantly. It’s no longer 30C, but only 24C³. It’s getting a bit chilly when riding on the scooter. I’ll need to take a scarf.
Published October 1, 2009 education , mathematics 3 Comments
abstract algebra
graduate school
graduate studies
Graduate Texts in Mathematics
James Wilson
thesis adviser
Thomas Hungerford
For some reason, our Algebra prof gave us 40 problems to solve. They are all in the first chapter and review comprehensively what I’ve learned in past algebra and algebraic structures classes. Still,
40 exercises, that’s a lot. Seeing as Hungerford’s book is pretty much a reference for graduate students in algebra, I was looking around for solutions to all of the problems since I am unsure if our
prof will give us any or if he will give us hints.
I found Dr. James Wilson's book on Hungerford's problem sets. It's available for free here. If it's no longer there, you can launch a Google search and you should find it easily enough.
Part of me almost wants to print it out. It’s 167 pages long. At any copy shop, it will cost about $3 to print that out. That’s including binding. That’s really cool. It will help me out quite a bit.
Luckily, I’ve seen most of the subject matter before, so it shouldn’t be a problem. The prof mentioned that he’d take our midterm exam out of this problem set. Sounds good to me. Midterms are in week
9 and we’ve just finished week 3.
Published September 25, 2009 education , mathematics 2 Comments
abstract algebra
Francesco Iachello
Galois theory
graduate school
graduate studies
lie algebra
My wife teaches university students and she really enjoys using Powerpoint presentations in class³. Most lectures by visiting scholars, as well as research, are usually presented with some kind of
presentation. In the math world, it’s usually some Linux-based derivative.
I’ve been going to a class where the professor solely relies on using Powerpoint presentations. I have come to hate them. The reason is that the professor doesn’t understand how much time it takes
for students to note down what they see on the slides. Sure, the presentation is made available later on the web, but I like taking notes. That’s how my learning process works. I know that most
students work in similar fashion.
The professor shows a theorem, barely explaining it, and then rushes through a demonstration. I haven't even finished noting down the theorem when he's already midway through the demo. It's very
annoying. The other extremely annoying fact is that the demos, or parts of them, vanish because animation is used in the Powerpoint. Extremely frustrating⁵.
Continue reading ‘Using Powerpoint Presentations In Classrooms’
Published September 24, 2009 education , mathematics , paperblog 3 Comments
graduate school
graduate studies
MD Paper
midori japan
rhodia drive
I had to get another Moleskine for my Modern Algebra class. I just received the textbook, appropriately called Algebra [Hungerford, Springer Verlag, vol 73], and needed to get back to class quickly.
I spent an obscene amount of time waiting at the ESlite at Gongguan to get served. I wanted to get a red Moleskine XL Plain Paper softcover. They had shown me something similar, but I must have
remembered it wrong, because they don’t have those types of MS. They only had the L size in red, as well as a bunch of Cahier Journals. For some reason, it took about 20 minutes to get this answer. I
was getting pissed off. I almost left without buying anything.
But I got the softcover Moleskine XL Plain Paper instead. Why? Well I need to transfer over my notes before class tomorrow and this is the only class that’s left that I haven’t done so. I plan on
using a Japanese notebook from MD Paper (Midori Paper Japan) to note stuff about the colloquium we attend every week³.
Ph.D. Theses (Mathematics)
Feed: http://hdl.handle.net/1957/15739 (retrieved 2014-04-16)

The local conservation laws of the nonlinear Schrödinger equation
Barrett, Johnner. http://hdl.handle.net/1957/44749 (posted 2013-11-08)
The nonlinear Schrödinger equation is a well-known partial differential equation that provides a successful model in nonlinear optic theory, as well as other applications. In this dissertation, following a survey of mathematical literature, the geometric theory of differential equations is applied to the nonlinear Schrödinger equation. The main result of this dissertation is that the known list of local conservation laws for the nonlinear Schrödinger equation is complete. A theorem is proved and used to produce a sequence of local conservation law characteristics of the nonlinear Schrödinger equation. The list of local conservation laws as given by Faddeev and Takhtajan and a theorem of Olver, which provides a one-to-one correspondence between equivalence classes of conservation laws and equivalence classes of their characteristics, are then used to prove the main result. Graduation date: 2014. Access restricted to the OSU Community, at author's request, from Dec. 12, 2013 to June 12, 2014.

Adiabatic and stable adiabatic times
Bradford, Kyle B. http://hdl.handle.net/1957/40078 (posted 2013-05-15)
While the stability of time-homogeneous Markov chains has been extensively studied through the concept of mixing times, the stability of time-inhomogeneous Markov chains has not been studied in as much depth. In this manuscript we will introduce special types of time-inhomogeneous Markov chains that are defined through an adiabatic transition. After doing this, we define the adiabatic and the stable adiabatic times as measures of stability for these special time-inhomogeneous Markov chains. To construct an adiabatic transition one needs to make a transitioning convex combination of an initial and a final probability transition matrix over the time interval [0, 1] for two time-homogeneous, discrete time, aperiodic and irreducible Markov chains. The adiabatic and stable adiabatic times depend on how this convex combination transitions. In the most general setting, we suggested that as long as P : [0, 1] → P_n^ia is a Lipschitz continuous function with respect to the ‖·‖₁ matrix norm, then the adiabatic time is bounded above by a function of the mixing time of the final probability transition matrix [equation]. For the stable adiabatic time, the most general result we achieved was for nonlinear adiabatic transitions P_φ(t) = (1 − φ(t))P₀ + φ(t)P₁, where φ is a Lipschitz continuous function that is piecewise defined over a finite partition of the interval [0, 1] so that on each subinterval φ is a bi-Lipschitz continuous function. In this setting we asymptotically bounded the stable adiabatic time by the largest mixing time of P_φ(t) over all t ∈ [0, 1]. We found that [equation]. We also have some additional results that bound the stable adiabatic time in this manuscript, but they are included to show the different attempts we took and to highlight how important it is to pick the right variables to compare. We also provide examples from queueing and statistical mechanics. Graduation date: 2013.

Continued fractions and the divisor at infinity on a hyperelliptic curve: examples and order bounds
Daowsud, Katthaleeya. http://hdl.handle.net/1957/38663 (posted 2013-04-25)
We use the theory of continued fractions over function fields in the setting of hyperelliptic curves of equation y² = f(x), with deg(f) = 2g + 2. By introducing a new sequence of polynomials defined in terms of the partial quotients of the continued fraction expansion of y, we are able to bound the sum of the degrees of consecutive partial quotients. This allows us both (1) to improve the known naive upper bound for the order N of the divisor at infinity on a hyperelliptic curve; and (2) to apply a naive method to search for hyperelliptic curves of given genus g and order N. In particular, we present new families defined over ℚ with N = 11 and 2 ≤ g ≤ 10. Graduation date: 2013.

Some results in probability from the functional analytic viewpoint
Gelbaum, Zachary A. http://hdl.handle.net/1957/38592 (posted 2013-04-17)
This dissertation presents some results from various areas of probability theory, the unifying theme being the use of functional analytic intuition and techniques. We first give a result regarding the existence of certain stochastic integral representations for Banach space valued Gaussian random variables. Next we give a spectral geometric construction of Gaussian random fields over various manifolds that generalize classical fractional Brownian motion. Lastly we present a result describing the limiting distribution for the largest eigenvalue of a product of two random matrices from the β-Laguerre ensemble. Graduation date: 2013.
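The adiabatic transition described in the Bradford abstract, a convex combination of an initial and a final transition matrix over [0, 1], can be sketched with toy matrices. The 2-state chains below are illustrative choices, not taken from the thesis:

```python
import numpy as np

# Two aperiodic, irreducible transition matrices (illustrative 2-state chains)
P0 = np.array([[0.9, 0.1],
               [0.2, 0.8]])
P1 = np.array([[0.5, 0.5],
               [0.5, 0.5]])

def adiabatic(t):
    """Linear adiabatic transition: P_t = (1 - t) * P0 + t * P1."""
    return (1 - t) * P0 + t * P1

# Every intermediate matrix is again row-stochastic, so each P_t is a
# valid transition matrix of the resulting time-inhomogeneous chain.
for t in np.linspace(0.0, 1.0, 11):
    Pt = adiabatic(t)
    assert (Pt >= 0).all()
    assert np.allclose(Pt.sum(axis=1), 1.0)
```

The thesis studies nonlinear transitions φ(t) as well; replacing `t` by any Lipschitz continuous `phi(t)` with φ(0) = 0 and φ(1) = 1 gives the same row-stochastic property.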
Math Forum Discussions
Topic: They Didn't Eat Beans and Other Stories
Replies: 1 Last Post: Jun 25, 1993 1:53 PM
They Didn't Eat Beans and Other Stories
Posted: Jun 20, 1993 1:45 PM
I have been asked to write something which addresses the issue that
there are no mathematicians who are household words. This means that
there are no famous role models for kids to emulate. At first I was
planning to do a series of detailed several page descriptions of the
lives of a few specific mathematicians. However, after considering
what would have intrigued me as a kid, this will only contain some
of the exciting parts about some great mathematicians and historical
Many sources try to make mathematicians and mathematics sound far
removed from the world, but this is not at all the case. There are
politically active mathematicians, mathematicians who steal others'
ideas, and even murder over theorems. There is passion and excitement,
as in any history when the people involved really care about it. I am
sorry for the lack of women in this description, but there really are
not yet that many famous women mathematicians. Emmy Noether, Sophie
Germain, and Sonya Kovalevsky are the only three that I can think of.
Born around 532 B.C., the ancient Greek Pythagoras was the founder
of a school of mathematicians and is credited with the discovery of
the relationship between the lengths of the sides of a right triangle
(although it is not clear Pythagoras actually deserves credit for this
theorem). Pythagoras was also important politically; he founded the
religious sect of the Pythagoreans, who became a major political force
in Southern Italy, even gaining the rule of some of the cities. The
major beliefs of the Pythagoreans included the transmigration of
souls, that everything depended on whole numbers, and the sinfulness
of eating beans. Other laws included not touching a white cock and not
looking in a mirror beside a light.[Russell, Bertrand, "A History of
Western Philosophy," Simon and Schuster, NY, 1945.] So great was the
importance of whole numbers that the discovery that the square root of
two is irrational remained a religious secret. It is said that when
the Pythagorean Hippasus disclosed the secret, other members of the
sect drowned him in the sea.[Eves, Howard, "An Introduction to the
History of Mathematics," third edition, Holt, Rinehart and Winston,
NY, 1964.]
In the sixteenth century, mathematicians wanted to find formulas
like the quadratic formula for factoring third and fourth degree
polynomials. The answers were first published by Cardan (1501-76),
though it was not his work. He found out the secret of how to solve
the cubic from Tartaglia (1500-57), who probably also did not discover
it. Cardan's publication came after he promised Tartaglia that he
would never reveal the secret. According to Boyer, it is probably
Scipione del Ferro (1465-1526) who actually discovered the formula. He
kept it a secret, revealing it to one student before he died.[Boyer,
Carl, "A History of Mathematics," John Wiley & Sons, NY, 1968.]
After the discovery of formulas to factor third and fourth degree
polynomials, it is natural to wonder about five and beyond. In fact,
it is impossible to write down a general formula to factor polynomials
of any degree greater than four. It was Galois (1812-1832) who proved
this result in the course of developing a branch of mathematics now
called Galois theory. Through a series of unfortunate circumstances,
Galois repeatedly was denied entrance to the Ecole Polytechnique, the
most prestigious university in France, as well as never getting his
work recognized in his lifetime, although two papers were published in
1830. This same year, Galois became a revolutionary, fighting for
France to be a republic. Through this political activity (or perhaps
over a woman), he was challenged to a duel. It was in this duel that
he died at the age of twenty. According to legend, knowing that he
would die, he wrote down many of his ideas in a letter to a friend the
night before the duel. The letter and other partial manuscripts were
finally published in the Journal de Mathematiques in 1846.[Boyer]
I will not write much about Newton (1642-1727), but there are a few
interesting things to mention. Newton was the first to discover
calculus, but because he did not publish for more than ten years,
Leibniz independently arrived at the same discovery and published
first. The result was a terrible fight between the two, making the
last part of Newton's life unhappy. In 1696, he was appointed Warden
of the Mint and promoted to Master of the Mint in 1699.[Eves] He took
the job seriously, saving the country money by introducing the idea of
coin milling. This meant that people were no longer able to clip
silver off the edges of the coins.[Barrow, John, "The World Within the
World," Oxford University Press, 1990.]
Credited with the invention of modern analysis, Euler (1707-83) is
probably the most prolific mathematician ever. Spending the last
seventeen years of his life blind did not slow down his productivity.
He just dictated to his children. Aside from the mathematical content
of his work, Euler standardized mathematical notation. He is
responsible for the use of the letter e for exponential functions, the
capital sigma for summation, i for the square root of minus one, and
even for the use of the letter pi for the ratio of the circumference
to diameter of the circle! [Boyer] Thus it is that we can write one of
the most fundamental equations of modern mathematics, voted the most
beautiful theorem by readers of the Mathematical Intelligencer.[Wells,
David, "Are These the Most Beautiful?" Mathematical Intelligencer, Vol
12, No 3, 1990.] Namely:

    e^(i*pi) + 1 = 0
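Euler's identity, e^(i*pi) + 1 = 0, is easy to check numerically; for example, in Python:

```python
import cmath

# Euler's identity: e^(i*pi) + 1 = 0, up to floating-point rounding
residual = cmath.exp(1j * cmath.pi) + 1
assert abs(residual) < 1e-12
```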
That mathematicians participate in the world is not something of the
past. The contemporary mathematician Steve Smale, who is very
important in many areas including Dynamical Systems, had to appear in
front of the House Un-American Activities Committee and was active in
the Free Speech Movement in Berkeley. He caused quite a bit of
controversy when he spoke against the U.S. and Soviet involvement in
Vietnam in Moscow, 1966. The University of California denied him
summer support; he then had his NSF proposal returned for political
reasons.[Smale, Steve, "The Story of the Higher Dimensional Poincare
Conjecture (What Actually Happened on the Beaches of Rio),"
Mathematical Intelligencer, Vol 12, No 2, 1990.]
Perhaps the most telling comment regarding the importance of
mathematics comes from the algebraic geometer Alexandre Grothendieck,
when he was teaching math in Vietnam in 1967. He says: "In general, I
can attest that both the political leaders and the senior academic
people are convinced that scientific research--including theoretical
research having no immediate practical applications--is not a luxury,
and that it is necessary ... starting now, without waiting for a
better future."[Koblitz, Neal, "Recollections of Mathematics in a
Country Under Siege," Mathematical Intelligencer, Vol 12, No 3, 1990.]
I would like to thank Scott Carlson for sharing his knowledge and
books with me for this article.
Date Subject Author
6/20/93 They Didn't Eat Beans and Other Stories Evelyn Sander
6/25/93 Re: They Didn't Eat Beans and Other Stories Susan Ross
Fourier analysis and resynthesis in Pd
Next: Narrow-band companding: noise suppression Up: Examples Previous: Examples Contents Index
Example I01.Fourier.analysis.pd (Figure 9.14, part a) demonstrates computing the Fourier transform of an audio signal using the fft~ object:
fft~: Fast Fourier transform. The two inlets take audio signals representing the real and imaginary parts of a complex-valued signal. The window size is given by Pd's block size.
The Fast Fourier transform [SI03] reduces the computational cost of Fourier analysis in Pd to only that of between 5 and 15 osc~ objects in typical configurations. The FFT algorithm in its simplest
form takes the window size to be a power of two.
Example I02.Hann.window.pd (Figure 9.14, parts b and c) shows how to control the block size using a block~ object, how to apply a Hann window, and a different version of the Fourier transform. Part
(b) shows the invocation of a subwindow which in turn is shown in part (c). New objects are:
rfft~: real Fast Fourier transform. The imaginary part of the input is assumed to be zero, and only the first half of the output channels are filled in, which makes it cheaper than the more general fft~ object.
tabreceive~: repeatedly outputs the contents of a wavetable. Each block of computation outputs the same first portion of the table (one block of samples).
In this example, the table ``$0-hann" holds a Hann window function of length 512, in agreement with the specified block size. The signal to be analyzed appears (from the parent patch) via the inlet~
object. The channel amplitudes (the output of the rfft~ object) are reduced to real-valued magnitudes: the real and imaginary parts are squared separately, the two squares are added, and the result
passed to the sqrt~ object. Finally the magnitude is written (controlled by a connection not shown in the figure) via tabwrite~ to another table, ``$0-magnitude", for graphing.
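The magnitude computation can be sketched outside Pd as well. The NumPy translation below is an illustration, not the patch itself: it applies a 512-point Hann window, takes the real FFT, and reduces each channel to a magnitude the same way, squaring the real and imaginary parts separately, adding, and taking the square root:

```python
import numpy as np

N = 512                                        # block (window) size
n = np.arange(N)
hann = 0.5 - 0.5 * np.cos(2 * np.pi * n / N)   # Hann window, length 512

# test signal: a cosine sitting exactly on analysis channel 10
x = np.cos(2 * np.pi * 10 * n / N)

spectrum = np.fft.rfft(x * hann)               # N/2 + 1 complex channels
magnitude = np.sqrt(spectrum.real ** 2 + spectrum.imag ** 2)

# the energy concentrates at channel 10, as expected
assert magnitude.argmax() == 10
```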
Figure 9.15: Fourier analysis and resynthesis, using block~ to specify an overlap of 4, and rifft~ to reconstruct the signal after modification.
Example I03.resynthesis.pd (Figure 9.15) shows how to analyze and resynthesize an audio signal following the strategy of Figure 9.7. As before there is a sub-window to do the work at a block size
appropriate to the task; the figure shows only the sub-window. We need one new object for the inverse Fourier transform:
rifft~: real inverse Fast Fourier transform. Since neither transform is normalized, using an rfft~/rifft~ pair together results in a gain of N (the window size). An ifft~ object is also available which computes an unnormalized inverse for the fft~ object,
reconstructing a complex-valued output.
The block~ object, in the subwindow, is invoked with a second argument which specifies an overlap factor of 4. This dictates that the sub-window will run four times every 512 samples. The inlet~ object does the
necessary buffering and rearranging of samples so that its output always gives the 512 latest samples of input in order. In the other direction, the outlet~ object adds segments of its previous four
inputs to carry out the overlap-add scheme shown in Figure 9.7.
The 512-sample blocks are multiplied by the Hann window both at the input and the output. If the rfft~ and rifft~ objects were connected without any modifications in between, the output would
faithfully reconstruct the input.
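The whole round trip can be sketched in NumPy. This is an illustrative translation, not the Pd patch: NumPy's irfft, unlike Pd's unnormalized transforms, already divides by N, so the only remaining gain is the 3/2 that the overlapped, squared Hann windows sum to at an overlap of 4:

```python
import numpy as np

N, H = 512, 128                       # block size, hop (overlap factor 4)
n = np.arange(N)
hann = 0.5 - 0.5 * np.cos(2 * np.pi * n / N)

rng = np.random.default_rng(0)
x = rng.standard_normal(8 * N)        # arbitrary test signal
y = np.zeros_like(x)

for start in range(0, len(x) - N + 1, H):
    frame = x[start:start + N] * hann          # analysis Hann window
    spectrum = np.fft.rfft(frame)
    # (a graphical equalizer would scale each channel here)
    resynth = np.fft.irfft(spectrum, N)
    y[start:start + N] += resynth * hann       # synthesis window + overlap-add

y /= 1.5    # the four overlapped squared Hann windows sum to 3/2

# away from the edges (which lack full overlap), the input is reconstructed
core = slice(N, len(x) - N)
assert np.allclose(x[core], y[core])
```

Scaling `spectrum` channel by channel before the inverse transform turns this identity round trip into the graphical equalization filter described below.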
A modification is applied, however: each channel is multiplied by a (positive real-valued) gain. The complex-valued amplitude for each channel is scaled by separately multiplying the real and
imaginary parts by the gain. The gain (which depends on the channel) comes from another table, named ``$0-gain". The result is a graphical equalization filter; by mousing in the graphical window for
this table, you can design gain-frequency curves.
There is an inherent delay introduced by using block~ to increase the block size (but none if it is used, as shown in Chapter 7, to reduce block size relative to the parent window.) The delay can be
measured from the inlet to the outlet of the sub-patch, and is equal to the difference of the two block sizes. In this example the buffering delay is 512-64=448 samples. Blocking delay does not
depend on overlap, only on block sizes.
Miller Puckette 2006-12-30
CurvFit 5.07
CurvFit Details
5.07Version :
Windows 95/98 Platform :
1.4 MbFile Size :
Free to try; $53.85 to buy License :
December 21, 2006 Date Added :
Rating :
CurvFit Review
CurvFit (tm) is a curve fitting program for Windows 95/98. Lorentzian, Sine, Exponential and Power series are available models to match your data. A Lorentzian series is highly recommended for real data, especially for data with multiple peaks and valleys. CurvFit is an example of Fortran Calculus programming, i.e. minutes to solve, days or years to understand the solution and what it implies (e.g. wrong model, sampling rate error). See comments in EX-*.?
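For context on the model names: CurvFit's exact parameterization is not documented here, but a Lorentzian peak (the model recommended above for multiply peaked data) is commonly written as a / (1 + ((x - c) / w)^2), and a Lorentzian series is a sum of such terms:

```python
def lorentzian(x, a, c, w):
    """One Lorentzian peak: height a, center c, half-width w (a common form;
    CurvFit's own parameterization may differ)."""
    return a / (1.0 + ((x - c) / w) ** 2)

def lorentzian_series(x, peaks):
    """Sum of Lorentzian terms; peaks is a list of (a, c, w) tuples."""
    return sum(lorentzian(x, a, c, w) for (a, c, w) in peaks)

# full height at the center, half the height one half-width away
assert lorentzian(5.0, 2.0, 5.0, 1.0) == 2.0
assert lorentzian(6.0, 2.0, 5.0, 1.0) == 1.0
```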
Changes in version 5.07:
All resolutions should now be fixed.
Software related to CurvFit
Choosing & using 'Evidence of Change Near the Arctic Circle'
• :
□ 5-8:
☆ A - Science as inquiry:
○ Abilities necessary to do scientific inquiry
☆ D - Earth and space science:
○ Earth in the solar system
○ Structure of the earth system
☆ E - Science and technology:
○ Understandings about sci. / tech.
□ 9-12:
☆ A - Science as inquiry:
○ Abilities necessary to do scientific inquiry
☆ D - Earth and space science:
○ Energy in the earth system
☆ E - Science and technology:
○ Understandings about science and technology
• :
□ Places and regions:
☆ The physical and human characteristics of places
This resource supports math standards for grades 8-12 in the topics of data analysis and probability. These assignments are based on the National Council of Teachers of Mathematics (NCTM) standards.
This resource supports earth science standards for grades 8-12. This assignment is based on the Virginia Standards of Learning ES.1c: The student will plan and conduct investigations in which scales,
diagrams, maps, charts, graphs, tables, and profiles are constructed and interpreted. ES.2c: The student will demonstrate scientific reasoning and logic by comparing different scientific explanations
for a set of observations about the Earth. ES.11a: The student will investigate and understand that oceans are complex, interactive physical, chemical, and biological systems and are subject to long-
and short-term variations. Key concepts include physical and chemical changes (tides, waves, currents, sea level and ice cap variations, upwelling, and salinity variations).
Physics Forums - Interactive math exercises web (www.emathematics.net) Fantastic!!!
evagavila Mar15-09 02:16 AM
Interactive math exercises web (www.emathematics.net) Fantastic!!!
I am a math teacher and I am sending this e-mail to suggest a new link to an interactive math exercises website.
The web site is www.emathematics.net
and has interactive exercises, lessons, and worksheets to practice knowledge of whole numbers, integers, divisibility, fractions, exponents and powers, percentages, proportional reasoning, linear equations, quadratic equations, monomials, polynomials, special products, radicals, systems of equations, exponential and logarithmic equations, geometry, sequences and series, functions and graphs, trigonometry, determinants, matrices, inner product, factorial, variations, permutations and combinations.
It is very useful to work by yourself or with your students (if you are a teacher).
A survey conducted by the American Automobile Association showed that a family of four spends an average of $215.60 per day while on vacation. Suppose a sample of 64 families of four vacationing at
Niagara Falls resulted in a sample mean of $252.45 per day and a sample standard deviation of $77.00.
a. Develop a 95% confidence interval estimate of the mean amount spent per day by a family of four visiting Niagara Falls (to 2 decimals). | {"url":"http://expresshelpline.com/support-question-5257809.html","timestamp":"2014-04-19T11:57:28Z","content_type":null,"content_length":"15948","record_id":"<urn:uuid:d3a74b1b-58cd-4802-8268-55492cb616af>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00488-ip-10-147-4-33.ec2.internal.warc.gz"} |
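A worked answer to part (a) can be sketched in Python (this is an illustration, not part of the original question; it assumes SciPy is available and uses the t distribution with n - 1 = 63 degrees of freedom):

```python
import math

from scipy import stats

n, xbar, s = 64, 252.45, 77.00
t_crit = stats.t.ppf(0.975, df=n - 1)   # two-sided 95% critical value
margin = t_crit * s / math.sqrt(n)      # t * s / sqrt(n)
lo, hi = xbar - margin, xbar + margin
print(round(lo, 2), round(hi, 2))       # roughly 233.22 271.68
```

With the large sample size a normal critical value (1.96) would give a very similar, slightly narrower interval.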
Merion, PA Prealgebra Tutor
Find a Merion, PA Prealgebra Tutor
...Having worked with a diverse population of students, I have strong culturally competent teaching practices that are adaptive to diverse student learning needs. I have a robust knowledge of
various math curricula and resources that will get your child to love math in no time! I look forward to w...
9 Subjects: including prealgebra, geometry, ESL/ESOL, algebra 1
...I have owned a Mac since my senior year of high school (2007), and have been using one ever since. I am especially well-versed in helping people to make the transition from Windows to Mac, as
I assisted my parents in the process. My services would be best suited to helping customize a Mac in order to best serve the needs of the user.
21 Subjects: including prealgebra, reading, calculus, physics
...Dr. Peter is always willing to offer flexible scheduling to suit the client's needs. He is also prepared to be responsive to any budgetary concerns.My qualification for tutoring GMAT is based
upon (1) my academic record and (2) my workplace experience.
10 Subjects: including prealgebra, calculus, algebra 1, GRE
...Middle school and early High School are the ages when most children develop crazy ideas about their abilities regarding math. It upsets me when I hear students say, 'I'm just not good in math!
' Comments like that typically mean that a math teacher along the way wasn't able to present the materi...
9 Subjects: including prealgebra, geometry, precalculus, algebra 2
...I completed my Master of Science degree in biostatistics at the University of Pittsburgh. While in graduate school, I received merit-based appointments to serve as a teaching assistant for 3
separate courses offered by the biostatistics department - two were core introductory biostatistics cours...
26 Subjects: including prealgebra, geometry, statistics, algebra 1
Related Merion, PA Tutors
Merion, PA Accounting Tutors
Merion, PA ACT Tutors
Merion, PA Algebra Tutors
Merion, PA Algebra 2 Tutors
Merion, PA Calculus Tutors
Merion, PA Geometry Tutors
Merion, PA Math Tutors
Merion, PA Prealgebra Tutors
Merion, PA Precalculus Tutors
Merion, PA SAT Tutors
Merion, PA SAT Math Tutors
Merion, PA Science Tutors
Merion, PA Statistics Tutors
Merion, PA Trigonometry Tutors | {"url":"http://www.purplemath.com/Merion_PA_prealgebra_tutors.php","timestamp":"2014-04-17T22:02:05Z","content_type":null,"content_length":"24136","record_id":"<urn:uuid:4242a907-68e9-4de2-83b5-ca2d2cd97ce6>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00348-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics Forums - (y')^2+y^2=-2 why this equation has no general solution ?
young_eng Oct15-10 05:41 PM
(y')^2+y^2=-2 why this equation has no general solution ?
why this differential equation has no general solution ?
Pengwuino Oct15-10 07:01 PM
Re: (y')^2+y^2=-2 why this equation has no general solution ?
It's non-linear. Having a general solution for these types of equations is the exception, not the rule.
Not that there isn't a general solution because I don't know. It's just that non-linear equations rarely have general solutions
Dickfore Oct15-10 07:19 PM
Re: (y')^2+y^2=-2 why this equation has no general solution ?
Because the lhs is necessarily non-negative (sum of squares), whereas the rhs is negative. You can have a solution in complex numbers.
jackmell Oct16-10 12:14 AM
Re: (y')^2+y^2=-2 why this equation has no general solution ?
Quote by young_eng (Post 2935154)
why this differential equation has no general solution ?
Why not just solve it the regular way:
[tex]\frac{dy}{\sqrt{-2-y^2}}=\pm dx[/tex]
or [itex]y=\pm i\sqrt{2}[/itex] are solutions, maybe singular ones. Not sure. Otherwise:
[tex]\frac{y\sqrt{-2-y^2}}{2+y^2}=\tan(c\pm x)[/tex]
[tex]y(x)=\pm \frac{\sqrt{2}\tan(c\pm x)}{\sqrt{-\sec^2(c\pm x)}}[/tex]
so that the solution is in the form of y(z)=u+iv
HallsofIvy Oct17-10 11:31 AM
Re: (y')^2+y^2=-2 why this equation has no general solution ?
There is no general solution in terms of real valued functions because if y' and y are both real numbers (for a given x) then [itex](y')^2+ y^2[/itex] cannot be negative!
Oops! Dickfore had already said that, hadn't he?
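The point made in the thread can be checked symbolically; a small sketch (not from the thread, assuming SymPy is available) verifies that the constant y = i*sqrt(2) satisfies the equation, so solutions exist only over the complex numbers:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.I * sp.sqrt(2)               # constant candidate solution y = i*sqrt(2)
residual = sp.diff(y, x)**2 + y**2  # (y')^2 + y^2
print(sp.simplify(residual))        # -2, so the ODE holds over the complexes
```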
© 2014 Physics Forums | {"url":"http://www.physicsforums.com/printthread.php?t=438409","timestamp":"2014-04-18T10:45:16Z","content_type":null,"content_length":"7233","record_id":"<urn:uuid:05d2d5b4-bb25-4781-85ef-39161426eef1>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00462-ip-10-147-4-33.ec2.internal.warc.gz"} |
Corona, NY Prealgebra Tutor
Find a Corona, NY Prealgebra Tutor
...Real Experience - I have tutored over 300 students in all areas of math including ACT/SAT math, algebra, geometry, pre-calculus, Algebra Regents, and more. I specialize in SAT/ACT Math. I teach
students how to look at problems, how to break them down, which methods, strategies, and techniques to apply, and how to derive the quickest solution.
30 Subjects: including prealgebra, reading, English, physics
...My educational experience includes home tutoring New York City high/middle/elementary school students on an individual basis (2 years), and teaching college freshman in traditional classroom
settings of small groups (5 years). I have taught chemistry, biology and mathematics to hundreds of studen...
4 Subjects: including prealgebra, chemistry, ACT Math, elementary math
...I have a weird obsession with Military History so if you need tutoring for anything related to that, I'm your man.I have extensive experience tutoring K-6 students in English, Math, and Social
Studies. I have been a part of the Supplemental Education Services tutoring initiative on behalf of the...
37 Subjects: including prealgebra, reading, English, writing
...My levels-- grades 85 and better-- correspond to what is currently the "advanced" designation or "mastery" in all of those subjects. Many of my classes were AP (Advanced Placement) as well. I
did not utilize prep courses or tutoring.
55 Subjects: including prealgebra, reading, English, Spanish
...I have also worked with young children teaching in a Sunday school setting. I am patient and calm, and have a friendly personality, I am sure that you will love working with me and I will make
sure that you see results!I attended a private Christian school from Kindergarten to 12th grade, attend...
16 Subjects: including prealgebra, geometry, biology, algebra 1 | {"url":"http://www.purplemath.com/corona_ny_prealgebra_tutors.php","timestamp":"2014-04-16T16:04:35Z","content_type":null,"content_length":"24188","record_id":"<urn:uuid:b39c1b99-92aa-4804-807e-49bd9ee41650>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00272-ip-10-147-4-33.ec2.internal.warc.gz"} |
st: RE:
st: RE:
From Finne Håkon <Hakon.Finne@sintef.no>
To statalist@hsphsun2.harvard.edu
Subject st: RE:
Date Wed, 19 Jun 2002 17:02:53 +0200
Kurmas Akdogan,
If the previous answer (see archives) was not sufficient, perhaps your
question was not understood the way you meant it to be. Let me try to
suggest some alternative interpretations.
You have a panel of data with variables A and B for a number of units and
for each of the years 1993 - 2000. For each year, you want to calculate the
means of A and the means of B. This will give you two variables Amean and
Bmean, each with 8 distinct values. Correct or incorrect?
Then I see at least two interpretations for the next step:
1. For each year, you want a scatterplot of Amean versus Bmean. This will
give you 8 distinct points on the graph. Each of the points could be marked
with the year. This case basically corresponds to the interpretation of the
first answer on the list.
2. Alternatively, you want two lines that possibly cross each other. The
first line is a graph of Amean versus year. Amean is on the Y axis and year
is on the X axis. The second line is a graph of Bmean versus year. For this
line, Bmean is on the X axis and year is on the Y axis.
So please give some more details if you need an answer different from the
first one.
Håkon Finne
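For readers without Stata, interpretation 1 above can be sketched in Python with pandas (the panel data here are made up purely for illustration):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
years = np.repeat(np.arange(1993, 2001), 10)   # 8 years x 10 panel units
df = pd.DataFrame({"year": years,
                   "A": rng.normal(size=years.size),
                   "B": rng.normal(size=years.size)})

means = df.groupby("year")[["A", "B"]].mean()  # Amean and Bmean, one row per year
# Interpretation 1: one point per year, e.g. with matplotlib:
#   plt.scatter(means["B"], means["A"])
print(means.shape)  # (8, 2)
```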
> -----Original Message-----
> From: Kurmas Akdogan [mailto:kurmas1@yahoo.com]
> Sent: 19. juni 2002 10:11
> To: statalist
> Subject:
> Hi everybody,
> I want to make a cross-plot of 'means' of two variables (say A and B)
> against a time variable (from 1993 to 2000).
> Y axis: means of A from 1993 to 2000
> X axis: means of B from 1993 to 2000
> I use stata 6.0.
> Thanks
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2002-06/msg00275.html","timestamp":"2014-04-19T14:55:27Z","content_type":null,"content_length":"6850","record_id":"<urn:uuid:6396d5b5-149a-4565-a567-eb72a4b25816>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00478-ip-10-147-4-33.ec2.internal.warc.gz"} |
The near-critical scaling window for directed polymers on disordered trees
Tom Alberts (California Institute of Technology) Marcel Ortgiese (TU Berlin)
We study a directed polymer model in a random environment on infinite binary trees. The model is characterized by a phase transition depending on the inverse temperature. We concentrate on the
asymptotics of the partition function in the near-critical regime, where the inverse temperature is a small perturbation away from the critical one with the perturbation converging to zero as the
system size grows large. Depending on the speed of convergence we observe very different asymptotic behavior. If the perturbation is small then we are inside the critical window and observe the same
decay of the partition function as at the critical temperature. If the perturbation is slightly larger the near critical scaling leads to a new range of asymptotic behaviors, which at the extremes
match up with the already known rates for the sub- and super-critical regimes. We use our results to identify the size of the fluctuations of the typical energies under the critical Gibbs measure.
Pages: 1-24
Publication Date: January 30, 2013
DOI: 10.1214/EJP.v18-2036
This work is licensed under a
Creative Commons Attribution 3.0 License | {"url":"http://www.kurims.kyoto-u.ac.jp/EMIS/journals/EJP-ECP/article/view/2036.html","timestamp":"2014-04-20T23:57:43Z","content_type":null,"content_length":"22787","record_id":"<urn:uuid:21d03ec7-08fe-49a6-8cca-aadf9c449d0d>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00274-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
- User Profile for: st9468s_@_rexel.edu
User Profile: st9468s_@_rexel.edu
UserID: 193598
Name: EJC
Registered: 1/25/05
Total Posts: 3
Show all user messages | {"url":"http://mathforum.org/kb/profile.jspa?userID=193598","timestamp":"2014-04-20T13:33:10Z","content_type":null,"content_length":"10631","record_id":"<urn:uuid:1e946771-0807-4833-be09-5cf3d355683c>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00631-ip-10-147-4-33.ec2.internal.warc.gz"} |
Precal help.. word problem w/ vectors
March 11th 2010, 04:33 PM #1
Mar 2010
okay here's the problem:
The magnitude and direction of two forces acting on an object are 70 pounds, S56°E, and 50 pounds, N72°E, respectively. Find the magnitude, to the nearest hundreth of a pound, and the direction
angle, to the nearest tenth of a degree, of the resultant force.
how do i find the magnitude and the direction angle?
everytime i sketch out the graph, it turns out weird. i do not understand this at all and when i do get an answer, it is incredibly different than the answer in the back.
Answer:108.21 lbs ; S 77.4°E
directions relative to the + x-axis (east)
$R_x = 50\cos(18) + 70\cos(-34)$
$R_y = 50\sin(18) + 70\sin(-34)$
magnitude, $|R| = \sqrt{R_x^2 + R_y^2}$
direction, $\theta = \arctan\left(\frac{R_y}{R_x}\right)$
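The component method above can be checked numerically; a quick sketch in Python (not part of the thread) reproduces the textbook answer:

```python
import math

# components relative to the +x axis (east): N72E -> +18 deg, S56E -> -34 deg
Rx = 50 * math.cos(math.radians(18)) + 70 * math.cos(math.radians(-34))
Ry = 50 * math.sin(math.radians(18)) + 70 * math.sin(math.radians(-34))

magnitude = math.hypot(Rx, Ry)
theta = math.degrees(math.atan2(Ry, Rx))   # about -12.6 deg (south of east)
bearing = 90 + theta                       # measured from south toward east
print(round(magnitude, 2), round(bearing, 1))  # 108.21 77.4, i.e. S 77.4 E
```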
March 11th 2010, 05:23 PM #2 | {"url":"http://mathhelpforum.com/pre-calculus/133384-precal-help-word-problem-w-vectors.html","timestamp":"2014-04-20T16:59:50Z","content_type":null,"content_length":"36106","record_id":"<urn:uuid:153bd7d1-6a71-4297-bd0a-481ad3841755>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00357-ip-10-147-4-33.ec2.internal.warc.gz"} |
Pappus (păpˈəs) [key], fl. c.300, Greek mathematician of Alexandria. He recorded and enlarged on the results of his predecessors, including Euclid and Apollonius of Perga, in his Mathematical
Collection (8 books; date conjectural). The six and a half extant books, edited and translated into Latin by Commandinus (1588), stimulated a revival of geometry in the 17th cent.; Descartes
expounded several of his problems. The collection was reedited by Frederick Hultsch (1876–78). Pappus' other works include a commentary on Ptolemy's Almagest.
See T. L. Heath, A Manual of Greek Mathematics (1931).
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
More on Pappus from Fact Monster: | {"url":"http://www.factmonster.com/encyclopedia/people/pappus.html","timestamp":"2014-04-21T05:21:16Z","content_type":null,"content_length":"20732","record_id":"<urn:uuid:74c92c8d-f910-4a11-b1aa-151baed985a2>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00515-ip-10-147-4-33.ec2.internal.warc.gz"} |
ALEX Lesson Plans
Subject: Mathematics (9 - 12)
Title: Imaginary numbers? What do you mean imaginary?
Description: Is it any wonder that students are suspicious? We lead, sometimes drag, them through Algebra I insisting they must follow the order of operations. We make them learn the "hard way" of
doing an assignment one day only to show them the "short cut" the next. We give them difficult equations that require what appears to be nothing less than magic to solve. The one reality to which
they've managed to cling is what "someone" told them a long time ago. . ."You can't take the square root of a negative number." Just when students are convinced we've been making all of this up all
along, we introduce imaginary numbers. This lesson is designed as a computer lab activity. If you're looking for a more traditional classroom activity, please see this lesson developed by Jerry
Weeks: http://alex.state.al.us/lesson_view.php?id=11364 This lesson plan was created by exemplary Alabama Math Teachers through the AMSTI project.
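The claim students cling to ("you can't take the square root of a negative number") is only true over the reals; a one-line illustration (not part of the lesson plan) with Python's standard library shows the complex answer:

```python
import cmath

# math.sqrt(-4) raises ValueError, but over the complex numbers:
print(cmath.sqrt(-4))  # 2j, i.e. the imaginary number 2i
```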
Subject: Mathematics (9 - 12), or Science (9 - 12)
Title: Writing Word Equations
Description: The students will learn to write word and formula equations. We will watch a web video about chemical equations. We will then complete the science in motion lab “The Color of Chemistry”.
The students will also be required to write word and formula equations.This lesson plan was created as a result of the Girls Engaged in Math and Science, GEMS Project funded by the Malone Family | {"url":"http://alex.state.al.us/all.php?std_id=54367","timestamp":"2014-04-21T07:07:57Z","content_type":null,"content_length":"30501","record_id":"<urn:uuid:c596cd05-a77f-4e50-ab55-ef925f0094dc>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00549-ip-10-147-4-33.ec2.internal.warc.gz"} |
regular and exact completions
Category theory
Universal constructions
Regular and exact completions
The forgetful 2-functors $Reg \to Lex$, $Ex \to Lex$, and $Ex \to Reg$ (where $Lex$, $Reg$, and $Ex$ denote the 2-categories of finitely complete, regular, and exact categories, respectively)
have left adjoints, and in fact are (2-)monadic. Their left adjoints are called (free) regular or exact completions.
In the third case the 2-monad is idempotent, so the left adjoint can properly be called a completion, while in the first two cases, the 2-monad is only lax-idempotent, so the left adjoint should
technically be called a free completion. However, the phrases regular completion and exact completion are also commonly used for the first two cases. To disambiguate the second and third cases the
phrases ex/lex completion and ex/reg completion are also used, and so by analogy the first case is called reg/lex completion.
In fact, the reg/lex and ex/lex completion can be applied to a category that merely has weak finite limits, although in this case they are not left adjoint to the obvious forgetful functor. A general
context which includes all of these types of these completions is the 2-category of unary sites, in which the categories of regular and exact categories form reflective sub-2-categories.
The ex/lex completion
There are several constructions of the ex/lex completion. Perhaps the quickest one to state (Hu-Tholen 1996) is that if $C$ is small, then $C_{ex/lex}$ is the full subcategory of its presheaf
category $Set^{C^{op}}$ spanned by those presheaves $F$ such that
• $F$ admits a regular epimorphism $y(X)\twoheadrightarrow F$ from a representable presheaf,
• with the additional property that if $K\rightrightarrows y(X)$ is the kernel pair of $y(X)\twoheadrightarrow F$, then $K$ also admits a regular epi $y(Z)\to K$ from a representable presheaf.
A more explicit construction is as follows. Let us think, informally, of the objects of $C$ as presets and the morphisms of $C$ as “proofs”. An object of $C_{ex} = C_{ex/lex}$ will then be a “set” or
setoid constructed from $C$. Precisely, we take the objects of $C_{ex}$ to be the pseudo-equivalence relations in $C$: a pseudo-equivalence relation consists of:
• an object $X\in C$ and
• a parallel pair $s,t\colon R\rightrightarrows X$, such that
• there exists an arrow $i\colon X\to R$ with $s i = t i = 1_X$,
• there exists an arrow $v\colon R\to R$ with $s v = t$ and $t v = s$, and
• there exists an arrow $c\colon R\times_X R \to R$ with $s c = s \pi_1$ and $t c = t \pi_2$. (If $C$ has merely weak finite limits, we assert this for some, and hence every, weak pullback $R\times
^w_X R$.)
If $(s,t)\colon R\to X\times X$ is a monomorphism, then these conditions make it precisely a congruence or internal equivalence relation in $C$. In general, we can think of the fiber of $R$ over $
(x_1,x_2)$ as giving a collection of “reasons” or “proofs” that $x_1 \mathrel{R} x_2$. Then $i$ supplies a uniform proof that $x \mathrel{R} x$ for every $x$, while $v$ supplies a uniform proof that
$x \mathrel{R} y$ implies $y \mathrel{R} x$, and $c$ supplies a uniform proof that $x \mathrel{R} y$ and $y \mathrel{R} z$ imply $x \mathrel{R} z$.
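A toy example (not from the original text, but a standard illustration) may help: already in $Set$, a pseudo-equivalence relation need not be monic.

```latex
% In Set, take X = {a} and R = {r_1, r_2}, with s = t : R -> X the unique map.
% Reflexivity:  i(a) = r_1  gives  s i = t i = 1_X.
% Symmetry:     v = 1_R     works, since s = t.
% Transitivity: any  c : R \times_X R \to R  works, as all maps into X agree.
X = \{a\}, \qquad R = \{r_1, r_2\}, \qquad s = t \colon R \to X
```

Here there are two distinct "proofs" that $a \mathrel{R} a$, so $(s,t)\colon R\to X\times X$ is not a monomorphism; nevertheless its quotient in $C_{ex}$ is again just the one-point set.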
If $R\rightrightarrows X$ and $S\rightrightarrows Y$ are two pseudo-equivalence relations, a morphism between them in $C_{ex}$ is defined to be a morphism $f\colon X\to Y$ in $C$, such that there
exists a morphism $f_1\colon R\to S$ with $s f_1 = f s$ and $t f_1 = f t$. That is, $f_1$ supplies a uniform proof that if $x \mathrel{R} y$ then $f(x) \mathrel{S} f(y)$. Moreover, we declare two
such morphisms $f,g\colon X\to Y$ to be equal if there exists a morphism $h\colon X\to S$ such that $s h = f$ and $t h = g$ (that is, a uniform proof that $f(x) \mathrel{S} g(x)$). Because $S\
rightrightarrows Y$ is a pseudo-equivalence relation, this defines an actual equivalence relation on the morphisms $f\colon X\to Y$, which is compatible with composition; thus we have a well-defined
category $C_{ex}$.
We have a full and faithful functor $C\to C_{ex}$ sending an object $X$ to the pseudo-equivalence relation $X\rightrightarrows X$. One can then verify directly that $C_{ex}$ is exact, that this
embedding preserves finite limits, and that it is universal with respect to lex functors from $C$ into exact categories.
There are also other constructions. Of course, the ex/lex completion can also be obtained by composing (any construction of) the reg/lex completion with (any construction of) the ex/reg completion.
The reg/lex completion
The reg/lex completion $C_{reg}= C_{reg/lex}$ of a lex category $C$ is perhaps most succinctly described as the subcategory of $C_{ex}$ consisting of those objects which admit monomorphisms into
objects of $C$. That is, instead of adding all quotients of pseudo-equivalence relations in $C$, we only add those quotients which are necessary in order to be able to construct images of morphisms
in $C$. For many construction of $C_{ex}$, this idea can then be made more explicit and sometimes simplified.
For instance, if we regard $C_{ex}$ as a full subcategory of $Set^{C^{op}}$ as above, then we can likewise regard $C_{reg}$ as the full subcategory of $Set^{C^{op}}$ determined by those presheaves
$F$ such that
• $F$ admits a regular epimorphism $y(X) \twoheadrightarrow F$ from a representable presheaf, and
• $F$ admits a monomorphism $F\rightarrowtail y(Z)$ into a representable presheaf.
If we construct $C_{ex}$ using pseudo-equivalence relations, as above, then we can characterize the pseudo-equivalence relations which we need to form $C_{reg}$ as precisely the kernel pairs of
morphisms of $C$ (or finite families of such). Therefore, we obtain an equivalent definition of $C_{reg}$ as follows. Its objects are morphisms of $C$ (regarded as stand-ins for their formally added
images). A morphism from $p\colon X\to Y$ to $q\colon Z\to W$ should be a morphism $f\colon X\to Z$ for which there exists an $f_1$ relating the kernel of $p$ to the kernel of $q$, modulo an
equivalence relation generated by maps from $X$ to the kernel of $q$. But by definition of kernel pairs, two morphisms will be identified under this latter equivalence relation if and only if they
have the same composite with $q$, so it makes sense to define the morphisms of $C_{reg}$ from $p\colon X\to Y$ to $q\colon Z\to W$ to be certain morphisms $\overline{f}\colon X\to W$ which factor
through $q$ (non-uniquely). We still have to impose the condition that $f$ should preserve the kernel pairs, but in terms of $\overline{f}$ this is simply the statement that $\overline{f} r = \
overline{f} s$, where $(r,s)$ is the kernel pair of $p$. This is the definition of $C_{reg}$ given in the Elephant.
We can then verify that $C_{reg}$ is regular, that we have a full and faithful functor $C\to C_{reg}$, which preserves finite limits, and is universal among lex functors from $C$ to regular
categories. Again, there are also other constructions.
Also, just as for the free exact completion, the construction works essentially the same if $C$ has only weak finite limits. In this case, instead of the objects of $C_{ex}$ admitting a monomorphism
to a single object of $C$, we have to consider those admitting a jointly-monic finite family of morphisms into objects of $C$, with similar modifications for the other descriptions.
The ex/reg completion
If $C$ is regular, a quick definition of $C_{ex/reg}$ is as the full subcategory of the category $Sh(C)$ of sheaves for the regular coverage on $C$ spanned by those sheaves which are quotients of
congruences in $C$. (Lack 1999)
A more explicit description can be obtained by first passing from $C$ to its allegory of internal relations, then splitting the idempotents which are equivalence relations in $C$, and finally
reconstructing a regular category from the resulting allegory. Yet more explicitly, this means that the objects of $C$ are congruences in $C$, and the morphisms are relations which are entire and
functional relative to the given congruences.
The higher categorical approach
A somewhat more unified approach to all these completions can be obtained as follows. Observe that in the classical situation (that is, in the presence of choice), sets can be identified with all of
the following:
• 0-trivial groupoids, i.e. groupoids in which any two parallel morphisms are equal, i.e. equivalence relations.
• 0-trivial 2-groupoids, i.e. 2-groupoids in which any two parallel 2-morphisms are equal and any two parallel 1-morphisms are isomorphic.
• and so on
• 0-trivial n-groupoids for any $0\le n \le \infty$.
In the absence of choice, this is still true as long as the morphisms between 0-trivial n-groupoids are $n$-anafunctors. If instead we consider only actual functors, however, in the absence of choice
what we obtain are various completions of $Set$. Specifically:
• $Set_{reg/lex}$ can be identified with the category whose objects are 0-trivial groupoids, and whose morphisms are natural isomorphism classes of functors.
• $Set_{ex/lex}$ can be identified with the category whose objects are 0-trivial 2-groupoids, and whose morphisms are pseudonatural equivalence classes of 2-functors. In the notion of 2-groupoid
here we also demand that each 1-cell be equipped with a specified inverse equivalence.
This idea can be generalized to provide alternate constructions of the completions for an arbitrary $C$ with finite limits. The notions of internal $n$-category and internal $n$-functor in such a $C$
make perfect sense for any $n$. The same is true of the notion of $n$-groupoid, as long as we interpret this to mean the structure of “inverse-assigning” morphisms in $C$. The statement “any two
parallel $n$-cells are equal” also makes sense in any lex category, since it demands that a certain specified morphism is monic. Finally, we can also interpret “any two parallel $k$-cells are
equivalent” algebraically by specifying a particular equivalence between any such pair. (Note that for $k=(n-1)$, since parallel $n$-cells are equal there is a unique way to do this.) We thereby
obtain a notion of internal 0-trivial $n$-groupoid in any lex category, and we write $0 triv n Gpd(C)$ for the category of such things and internal $n$-natural equivalence classes of functors. We
then have:
• It is fairly clear from the above explicit description that $C_{reg/lex}$ is the full subcategory of $0 triv 1 Gpd(C)$ determined by the kernel pairs (which are congruences, i.e. internal
0-trivial 1-groupoids). If, like $Set$, $C$ is already exact, so that every congruence is a kernel pair, then $C_{reg/lex}\simeq 0 triv 1 Gpd(C)$.
• $C_{ex/lex}$ is always equivalent to $0 triv 2 Gpd(C)$. To see this, note that a pseudo-equivalence relation (together with chosen maps $i$, $c$, and $v$) can be regarded as the 1-skeleton of an
internal bicategory in $C$ with specified inverse equivalences for every 1-cell. There is then a unique way to add 2-cells to make it a 0-trivial bigroupoid.
It is not clear how 0-trivial $n$-groupoids fit into this picture for $n\gt 2$, although it seems likely that the objects of iterated reg/lex and ex/lex completions can be identified with some type
of internal n-fold category.
Now if $C$ is already regular, then we can define a notion of internal anafunctor between internal $n$-categories. It is then easily seen that
• $C_{ex/reg}$ is equivalent to the category of 0-trivial 1-groupoids, and natural isomorphism classes of internal anafunctors between them.
Again, it is not entirely clear how the 0-trivial $n$-groupoids and anafunctors behave for $n\gt 1$, although it seems fairly likely (to me) that in this case the process will stabilize at $n=1$,
i.e. 0-trivial $n$-groupoids with equivalence classes of ana-$n$-functors will give $C_{ex/reg}$ for all $n\ge 1$.
Completions of unary sites
The descriptions of the ex/lex and ex/reg completions in terms of pseudo-equivalence relations and equivalence relations, respectively, have a common generalization. Let $C$ be a unary site, so that
it has a notion of “covering morphism” and admits “finite local unary prelimits”. In particular, any cospan $X\to Z\leftarrow Y$ has a local unary pre-pullback, which is a commutative square
$\array{ P & \to & Y \\ \downarrow && \downarrow\\ X & \to & Z }$
such that for any other commutative square
$\array{ V & \to & Y \\ \downarrow && \downarrow\\ X & \to & Z }$
there is a cover $U\to V$ and a map $U\to P$ such that the induced composites $U\to X$ and $U\to Y$ are equal.
Now we can define a unary congruence in $C$ to consist of:
• An object $X\in C$
• A parallel pair $s,t:R\toto X$, such that
• There exists a cover $p:Y\to X$ and a map $i:Y\to R$ with $s i = t i = p$,
• There exists a cover $q:S\to R$ and a map $v:S\to R$ with $s v = t q$ and $t v = s q$, and
• There exists a local unary pre-pullback $T$ of the cospan $R \xrightarrow{t} X \xleftarrow{s} R$ and an arrow $c:T\to R$ such that $s c = s \pi_1$ and $t c = t \pi_2$, where $\pi_1$ and $\pi_2$
are the projections of $T$ to $R$.
If $C$ has a trivial topology, then local unary prelimits are simply weak limits, and this reduces to the definition of pseudo-equivalence relation. On the other hand, if $C$ is regular with its
regular topology, then these conditions ensure exactly that the image of $R\to X\times X$ is an internal equivalence relation on $X$.
Now we can define morphisms between unary congruences using a suitable kind of either entire and functional relations or anafunctors, and obtain the exact completion of the unary site $C$. This
construction exhibits the 2-category of exact categories as a reflective sub-2-category of the 2-category of unary sites, and restricts to the ex/wlex and ex/reg completions on the sub-2-categories
of categories with weak finite limits and trivial topologies and of regular categories with regular topologies, respectively. It can also be modified to construct regular completions. See (Shulman)
for details.
Generalizations to higher arity
More generally, any $\kappa$-ary site has a $\kappa$-ary exact completion, which is a $\kappa$-ary exact category. This exhibits the 2-category of $\kappa$-ary exact categories as a reflective sub-2-category of that of $\kappa$-ary sites. See (Shulman) for details.
Properties of regular and exact completions
Many categorical properties of interest are preserved by one or more of the regular and exact completions. That is, if $C$ has these properties, then so does the completion, and the inclusion functor
preserves them. Note that frequently, for a completion to have some structure, it suffices for $C$ to have a “weak” version of that structure.
• Of course, finite limits are preserved by all three completions. In fact, as we have remarked, for the ex/lex and reg/lex completions, $C$ need only have weak finite limits.
• $C$ is lextensive if and only if $C_{ex/lex}$ is, and if and only if $C_{reg/lex}$ is, and in this case the embeddings preserve coproducts (Menni 2000). It follows that if $C$ is a pretopos, then
so is $C_{ex/lex}$, although the inclusion $C\to C_{ex/lex}$ is not a “pretopos functor” as it does not preserve regular epis.
• If $C$ is lextensive and has coequalizers (and hence has finite colimits), then so do $C_{ex/lex}$ and $C_{reg/lex}$ (Menni 2000). However, the inclusion functors do not preserve coequalizers. In fact, it suffices for $C$ to be lextensive with quasi-coequalizers, meaning that for every $f,g\colon Y\rightrightarrows X$ there exists $q\colon X\to Q$ with $q f = q g$, such that for any $h\colon X\to Z$ with $h f = h g$, $h$ coequalizes the kernel pair of $q$.
• The categories $C_{reg/lex}$ and $C_{ex/lex}$ always have enough (regular) projectives. In fact, the objects of $C$ are precisely the projective objects of these categories. Moreover, an exact category $D$ is of the form $C_{ex/lex}$ for some $C$ (with weak finite limits) if and only if it has enough projectives, in which case of course $C$ can be taken to be the subcategory of projectives (Carboni–Vitale 1998). Note that if $D$ has enough projectives, then its subcategory of projectives always has weak finite limits. Similarly, a regular category $D$ is of the form $C_{reg/lex}$ for some $C$ (with weak finite limits) if and only if it has enough projectives and every object can be embedded in a projective one.
• If $C$ is a regular category satisfying the “regular” axiom of choice (i.e. every regular epi splits), then it is equivalent to $C_{reg/lex}$, and hence the latter also satisfies the axiom of
choice. Similarly, if $C$ is exact and satisfies choice, then it is equivalent to $C_{ex/lex}$. Conversely, if the inclusion $C\to C_{reg/lex}$ or $C\to C_{ex/lex}$ is an equivalence, then since
the objects of $C$ are projective in these completions, $C$ must satisfy the axiom of choice.
In fact, if we assume merely that $C_{ex/lex} \to (C_{ex/lex})_{ex/lex}$ is an equivalence, then since the objects of $C_{ex/lex}$ are projective in $(C_{ex/lex})_{ex/lex}$, they must also all be
projective in $C_{ex/lex}$, and therefore $C\to C_{ex/lex}$ is also an equivalence. It follows by induction that if the sequence of iterations of $(-)_{ex/lex}$ stabilizes at any finite stage, it
must in fact stabilize at the very beginning and $C$ must satisfy the axiom of choice. A similar argument applies to the reg/lex completion. (The ex/reg completion, of course, always stabilizes
after one application.)
• Cartesian closure is preserved by the ex/lex completion (Carboni–Rosolini 2000). In fact, $C_{ex/lex}$ is cartesian closed if and only if $C$ has weak simple products, meaning weak dependent products along product projections.
• Local cartesian closure is also preserved by the ex/lex completion (Carboni–Rosolini 2000). In fact, $C_{ex/lex}$ is locally cartesian closed if and only if $C$ is weakly locally cartesian closed, meaning that each slice category has weak dependent products. It follows in particular that if $C$ is a $\Pi$-pretopos, then so is $C_{ex/lex}$. For each $\Pi$-pretopos $C$ we thus obtain a sequence $C$, $C_{ex}$, $(C_{ex})_{ex}$, … of $\Pi$-pretopoi, which in general does not stabilize.
• (Local) cartesian closure is seemingly not always preserved by the reg/lex completion, but it is under certain hypotheses. Recalling that $C_{reg/lex}$ is the full subcategory of $C_{ex/lex}$ consisting of the kernel pairs, suppose that $C$ has pullback-stable (epi,regular mono) factorizations and that every regular congruence is a kernel pair. Then $C_{reg/lex}$ is reflective in $C_{ex/lex}$ (BCRS 1998, Menni 2000): the reflection of a pseudo-equivalence relation $R\rightrightarrows X$ is its (epi,regular mono) factorization. Moreover, the reflection preserves products, and also pullbacks along maps in $C_{reg/lex}$, from which it follows that if $C_{ex/lex}$ is cartesian closed or locally cartesian closed, so is $C_{reg/lex}$.
Thus, if $C$ is weakly cartesian closed (resp. weakly locally cartesian closed), has pullback-stable (epi,regular mono) factorizations, and every regular congruence is a kernel pair, then $C_{reg/lex}$ is cartesian closed (resp. locally cartesian closed). In particular, the local versions of these hypotheses apply to Top and to any quasitopos. Note that $Top_{reg/lex}$ is called the category of equilogical spaces.
• If $C$ is lextensive with coequalizers (or “quasi-coequalizers”) and a strong-subobject classifier, then so is $C_{reg/lex}$ (Menni 2000). It follows that if $C$ is a lextensive quasitopos, then
so is $C_{reg/lex}$. For each lextensive quasitopos $C$ we thus obtain a sequence $C$, $C_{reg}$, $(C_{reg})_{reg}$, … of lextensive quasitopoi, which in general does not stabilize.
• If $C$ has a natural numbers object, then so do $C_{reg/lex}$ and $C_{ex/lex}$. (Does $C_{ex/reg}$? What about more general W-types?)
• $C_{ex/lex}$ is an elementary topos iff $C$ has weak dependent products and a generic proof (Menni 2000). Note that if $C$ is a topos satisfying the axiom of choice, then its subobject classifier
is a generic proof. It follows that in this case $C_{ex/lex}$ is a topos—but we already knew that, because $C_{ex/lex}$ is equivalent to $C$ for such a $C$.
• If $C$ is regular, locally cartesian closed, and has a generic mono, i.e. a monomorphism $\tau\colon \Upsilon\to \Lambda$ such that every monomorphism is a pullback of $\tau$ (not necessarily
uniquely), then $C_{ex/reg}$ is a topos (Menni 2000).
On the other hand, some properties are not preserved by the completions.
• Of course, all the completions are regular categories, but the inclusions are not regular functors, since they do not preserve regular epis.
• We have seen that the existence of a subobject classifier or power objects is not, in general, preserved by the completions (although if $C$ is a topos, then of course so is $C_{ex/reg}$, since
it is equivalent to $C$).
• Similarly, if $C$ is well-powered, it does not follow that $C_{reg/lex}$ or $C_{ex/lex}$ are. In particular, for $X\in C$, the subobject preorders $Sub_{C_{reg/lex}}(X)$ and $Sub_{C_{ex/lex}}(X)$ are equivalent to the preorder reflection of the slice category $C/X$, and it is easy to construct examples in which this is not essentially small.
• If $C$ is a coherent category, it does not follow that $C_{ex/lex}$ or $C_{reg/lex}$ is. However, if $C$ is additionally lextensive, we have seen above that so are these completions, and hence in
particular also coherent (any extensive regular category is coherent). One can also write down the “free coherent completion” and the “free pretopos completion” of a lex category, and the
“pretopos completion” of a coherent category; see familial regularity and exactness for some clues on how to proceed.
• If $C$ is a Heyting category, it does not follow that $C_{ex/lex}$ or $C_{reg/lex}$ is. However, if $C$ is additionally lextensive and locally cartesian closed, we have seen above that so are
these completions, and hence Heyting (any lextensive locally cartesian closed regular category is Heyting).
• Unsurprisingly, if $C$ is a Boolean category, it does not follow that $C_{ex/lex}$ or $C_{reg/lex}$ is, even if $C$ is lextensive and LCC so that its completions are Heyting.
In fact, a stronger statement is true: if $C$ is lextensive and regular, then $C_{reg/lex}$ and $C_{ex/lex}$ are Boolean if and only if $C$ satisfies the axiom of choice (in which case they are of course equivalent to $C$). More precisely, if $X\in C$ is such that every subobject of $X$ in $C_{reg/lex}$ is complemented, then $X$ is projective in $C$. (The same argument applies to $C_{ex/lex}$.) For suppose that $p\colon Y\to X$ is a regular epi in $C$. Recall that $Sub_{C_{reg/lex}}(X)$ is the preorder reflection of $C/X$. Thus $p$, considered as an object of $C/X$, defines a monomorphism in $C_{reg/lex}$. By assumption, this monic is complemented; let its complement be represented by $q\colon Z\to X$. Since complements are disjoint, and meets in $Sub_{C_{reg/lex}}(X)$ are given by pullbacks in $C/X$, the pullback of $p$ and $q$ admits a morphism to the initial object $0$, and hence is itself initial since $C$ is extensive. Now $p$ is regular epi, hence so is its pullback $0\to Z$. But in a lextensive regular category, disjointness of the coproduct $1+1$ implies that $0\to 1$ is the equalizer of the coprojections $1\rightrightarrows 1+1$, and therefore any epimorphism with domain $0$ is an isomorphism; thus $Z$ is also initial. Now since joins in $Sub_{C_{reg/lex}}(X)$ are given by coproducts in $C/X$, the induced map $Y+Z \to X$ must become an isomorphism in $Sub_{C_{reg/lex}}(X)$, which means that it must admit a section; but since $Z$ is initial this means that $p$ itself has a section.
• If $C$ is well-pointed, it does not follow that $C_{ex/lex}$ or $C_{reg/lex}$ are (in the stronger sense appropriate for non-toposes). It is of course always true that $1$ is projective in the
completions, and if it is a generator in $C$ then it will also be so in the completions. And if $C$ is lextensive, so that its completions are coherent, then $1$ is indecomposable in them as soon
as it is so in $C$. However, it does not follow that $1$ is a strong generator in the completions even if it is so in $C$, since the completions have (in general) many more monomorphisms than $C$.
Of course, if $C$ is a well-pointed topos such that $C_{ex/lex}$ is also a topos, then the latter is also well-pointed, since any generator in a topos is a strong generator.
(to be written…)
• Realizability toposes arise as ex/lex completions of categories of partitioned assemblies based on a partial combinatory algebra $A$. In fact the reg/lex completion gives, as an interesting intermediate step, the category of assemblies based on $A$ (which turns out to be the quasitopos of $\neg\neg$-separated objects inside the realizability topos). This is discussed in Menni.
• The category $TF$ of torsion-free abelian groups is regular, but not exact. For instance, the congruence $\{ (a,b) | a \equiv b \mod 2 \} \subseteq \mathbb{Z}\times\mathbb{Z}$ is not a kernel pair in $TF$. Unsurprisingly, the ex/reg completion of $TF$ is equivalent to the category $Ab$ of all abelian groups.
Note that although $TF$ is not exact, its inclusion into $Ab$ does have a left adjoint (quotient by torsion), and thus $TF$ is cocomplete. Herein lies a subtle trap for the unwary: since the ex/reg completion monad is idempotent, it is in particular lax-idempotent, which means that any left adjoint to the unit $C \hookrightarrow C_{ex/reg}$ is in fact a (pseudo) algebra structure; but since the monad is actually idempotent, any algebra structure is an equivalence. Of course, the reflection $Ab \to TF$ is not an equivalence, which doesn't contradict the general facts, because this left adjoint is not a regular functor, and hence not an adjunction in the 2-category on which the monad $(-)_{ex/reg}$ lives. In fact, it is not hard to check that $C \hookrightarrow C_{ex/reg}$ has a left adjoint in $Cat$ if and only if $C$ has coequalizers of congruences (while if it has a left adjoint in $Reg$ then it must be an equivalence).
• Carboni and Celia Magno, “The free exact category on a left exact one”, J. Austral. Math. Soc. (Ser. A), 1982.
• Hu and Tholen, “A note on free regular and exact completions and their infinitary generalizations”, TAC 1996.
• Carboni and Vitale, “Regular and exact completions”, JPAA 1998.
• Birkedal and Carboni and Rosolini and Scott, “Type Theory via Exact Categories,” 1998
• Stephen Lack, “A note on the exact completion of a regular category, and its infinitary generalizations” TAC 1999.
• The Elephant, Sections A1.3 and A3.
• Carboni and Rosolini, “Locally cartesian closed exact completions”, JPAA 2000
• Matías Menni, “Exact completions and toposes,” Ph.D. Thesis, University of Edinburgh, 2000. (web)
• Michael Shulman, “Exact completions and small sheaves”. Theory and Applications of Categories, Vol. 27, 2012, No. 7, pp 97-173. Free online | {"url":"http://ncatlab.org/nlab/show/regular+and+exact+completions","timestamp":"2014-04-19T07:28:55Z","content_type":null,"content_length":"146935","record_id":"<urn:uuid:435c4cce-e8b5-49ac-a51b-babf4833db1e>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00161-ip-10-147-4-33.ec2.internal.warc.gz"} |
Traversable Networks - Part One
Welcome to the Maths Resources of Adrian Bruce
Problem Solving - Traversable Networks
What is a traversable network? Which of these networks are traversable? Who was Leonhard Euler? Is this network traversable? What is the Königsberg Bridge Problem? What is the solution to the Königsberg Bridge Problem?
Just choose a starting planet and travel
across EVERY path without lifting your finger.
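Euler's criterion (not stated on this page, but it is the standard answer to the questions above) says a connected network is traversable exactly when it has zero or two vertices of odd degree. A small Python sketch applying it, with the classic Königsberg network as the test case:

```python
from collections import defaultdict, deque

def traversable(edges):
    """Euler-path test: traversable iff the network is connected
    (ignoring isolated vertices) and has 0 or 2 odd-degree vertices."""
    degree = defaultdict(int)
    adj = defaultdict(list)
    for u, v in edges:
        degree[u] += 1; degree[v] += 1
        adj[u].append(v); adj[v].append(u)
    # Connectivity check via BFS from any vertex that has an edge.
    start = next(iter(adj))
    seen, frontier = {start}, deque([start])
    while frontier:
        for w in adj[frontier.popleft()]:
            if w not in seen:
                seen.add(w); frontier.append(w)
    if seen != set(adj):
        return False
    odd = sum(1 for d in degree.values() if d % 2)
    return odd in (0, 2)

# Königsberg bridges: 4 land masses A, B, C, D and 7 bridges; all four
# vertices have odd degree, so no traversal exists (Euler, 1736).
konigsberg = [("A","B"),("A","B"),("A","C"),("A","C"),("A","D"),("B","D"),("C","D")]
print(traversable(konigsberg))              # False

# A simple open path has exactly two odd vertices, so it is traversable.
print(traversable([("P","Q"),("Q","R")]))   # True
```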
Traversable Network Puzzle 1
- A4 Word Version
Traversable Network Puzzle 1
- A4 Acrobat Version
Traversable Network Puzzle 2
- A4 Word Version
Math Activities | {"url":"http://www.greatmathsgames.com/number/item/33-brain-teasers/32-traversable-networks-part-1.html","timestamp":"2014-04-18T15:53:32Z","content_type":null,"content_length":"31126","record_id":"<urn:uuid:aee34561-8ff4-4ba2-a376-1632352366db>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00471-ip-10-147-4-33.ec2.internal.warc.gz"} |
model structure on dg-coalgebras
A model category structure on the category of dg-coalgebras.
Let $k$ be a field of characteristic 0.
There is a pair of adjoint functors
$(\mathcal{L} \dashv \mathcal{C}) \;\colon\; dgLieAlg_k \stackrel{\overset{\mathcal{L}}{\leftarrow}}{\underset{\mathcal{C}}{\to}} dgCoCAlg_k$
between the category of dg-Lie algebras (on unbounded chain complexes) and that of dg cocommutative coalgebras, where the right adjoint sends a dg-Lie algebra $(\mathfrak{g}_\bullet, [-,-])$ to its
“Chevalley-Eilenberg coalgebra”, whose underlying coalgebra is the free graded co-commutative coalgebra on $\mathfrak{g}_\bullet$ and whose differential is given on the tensor product of two
generators by the Lie bracket $[-,-]$.
There exists a model category structure on $dgCoCAlg_k$ for which
• the cofibrations are the (degreewise) injections;
• the weak equivalences are those morphisms that become quasi-isomorphisms under the functor $\mathcal{L}$ of the adjunction above.
Moreover, this is naturally a simplicial model category structure.
This is (Hinich98, theorem 3.1). More details on this are in the relevant sections at model structure for L-infinity algebras.
Relation to the model structure on dg-Lie algebras
Hence $(\mathcal{L} \dashv \mathcal{C})$ is a Quillen adjunction with respect to the model structure on dg-Lie algebras.
This is (Hinich98, theorem 3.2).
• Dan Quillen, Rational homotopy theory , Annals of Math., 90(1969), 205–295. (see Appendix B)
• Ezra Getzler, Paul Goerss, A model category structure for differential graded coalgebras (ps)
• Vladimir Hinich, Homological algebra of homotopy algebras , Comm. in algebra, 25(10)(1997), 3291–3323. | {"url":"http://www.ncatlab.org/nlab/show/model+structure+on+dg-coalgebras","timestamp":"2014-04-17T12:30:01Z","content_type":null,"content_length":"56320","record_id":"<urn:uuid:75aa55ae-05f5-426f-8685-233cfbe93119>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00181-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Best Bits
A new technology called compressive sensing slims down data at the source
The last big challenge of compressive sensing is decompressing! Given a vector of m sums and an m × N matrix of random bits, how do you recover the original signal vector of N elements? It’s one
thing to know that a unique solution exists, another to find it.
The approach devised by Candès and his colleagues treats the decoding of the compressed signal as an optimization problem. The aim is to find, among the infinite set of solutions to the m equations,
the solution that optimizes some measure of sparseness. The obvious thing to optimize is sparseness itself; in other words, look for the solution that minimizes the number of nonzero signal elements.
An algorithm for conducting such a search is straightforward; unfortunately, it is also utterly impractical, requiring a blind search among all possible arrangements of the k nonzero elements. A
camera based on this technology would produce pictures you could view only by solving a certifiably hard computational problem.
A more familiar mode of optimization is the method of least squares, known for more than 200 years. This prescription calls for minimizing the sum of the squares of the vector elements. There are
efficient methods of finding a least-squares solution, so practicality is not an issue. However, there’s another show-stopper: In compressive sensing the least-squares solution is seldom the correct
one. Thus a camera using this algorithm gives you a picture you can see quickly, but the image is badly garbled.
These two optimization methods seem quite different on the surface, but they have a hidden connection. For each vector element x, the least-squares rule calculates x^2 and then sums all the results.
The search for a sparsest vector can be framed in similar terms, the only change being that x^2 is replaced by x^0. The zeroth power of 0 is 0, but for any other value of x, x^0 is equal to 1. Thus
the sum of the zeroth powers counts the number of nonzero elements in the vector. This is just the result we want, but there is no efficient algorithm for finding it.
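A toy contrast between the two failed optimizations above (my own miniature example, not the article's): with the single equation x1 + x2 + x3 = 1, the minimum-norm least-squares solution spreads the energy evenly across all elements, while the sparsest consistent solution concentrates it in one.

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0]])   # one measurement of three unknowns
b = np.array([1.0])

# Least squares (minimum l2 norm) spreads the signal out...
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x_ls)                        # [1/3, 1/3, 1/3]: no zero entries at all

# ...whereas the sparsest consistent solution is a single spike.
x_sparse = np.array([1.0, 0.0, 0.0])
print(np.count_nonzero(x_ls), np.count_nonzero(x_sparse))  # 3 1
```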
At this point, having found that x^2 doesn’t work and neither does x^0, the Goldilocks alternative is irresistible. How about x^1? Of course x^1 is simply x, and so the optimization strategy derived
from this rule amounts to a search for the solution vector whose elements have the smallest sum. There are efficient algorithms for performing this minimization. Equally important, the method
produces correct results. Given a sparse input (all but k elements exactly zero), the compression-decompression cycle almost always reconstructs the input exactly. For an approximately sparse input
(all but k elements near zero), the reconstruction is a good approximation.
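The l1 search described above can be posed as a linear program: write x = u - v with u, v >= 0 and minimize sum(u) + sum(v) subject to A(u - v) = b. A SciPy sketch; the problem sizes and the Gaussian sampling matrix are my choices for illustration (the article notes that practical sampling vectors may differ):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, m, k = 40, 20, 2                      # signal length, measurements, sparsity

x_true = np.zeros(N)
x_true[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, N))          # random sampling matrix
b = A @ x_true                           # the m compressed sums

# min sum(u) + sum(v)  s.t.  A u - A v = b,  u, v >= 0   (so x = u - v)
c = np.ones(2 * N)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b, bounds=(0, None))
x_rec = res.x[:N] - res.x[N:]

# x_true is itself feasible, so the optimum is at most ||x_true||_1;
# for m this large relative to k, the recovery is typically exact.
print(res.success, np.linalg.norm(x_rec - x_true))
```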
The version of compressive sensing I have presented here is something of a caricature. In particular, the pretense that a signal arrives at the sensor preformatted as a sparse vector of numbers
glosses over a great deal of real-world complexity. In practice some transformation (such as conversion from the time domain to the frequency domain) is usually needed. And the random vectors that
govern the sampling of the signal may have elements more complicated than 0’s and 1’s. But the basic scheme remains intact. Here is the compressed version of compressive sensing: Find a sparse
domain. Sum random subsets of the signal. Decompress by finding the solution that minimizes the sum of the signal elements. | {"url":"http://www.americanscientist.org/issues/pub/the-best-bits/6","timestamp":"2014-04-20T01:32:05Z","content_type":null,"content_length":"125057","record_id":"<urn:uuid:9e5fea0d-13c9-407f-8d05-1bd5da08f30c>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00564-ip-10-147-4-33.ec2.internal.warc.gz"} |
joint probability density function
April 7th 2010, 01:10 PM #1
Dec 2008
If someone could help me out with this problem I'd really appreciate it. What I'm having trouble with on the second one is the bounds: if x plus y has to be less than or equal to 3, then shouldn't the bounds just be 0<x<2 and 1<y<3, because x can't be 3 and y can't be 0?
The bounds you mentioned won't work because for example they include the point (x, y) = (2, 3) where x+y = 5 > 3. (I'm not being very careful to distinguish between $>$ and $\geq$, etc., because
it's not important for these kinds of problems, and the meaning should be clear.)
Geometrically the inequality x+y < 3 represents the region under the line x+y = 3 in the xy plane. So in D, this is the region under the diagonal going from the upper left to the lower right
corner of the square.
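The geometry can also be checked numerically. A sketch (the thread doesn't show the actual density, so this only measures the region; the square D = (0,2) x (1,3) is inferred from the bounds quoted above): the line x + y = 3 runs from the square's upper-left corner (0,3) to its lower-right corner (2,1), so the constraint keeps exactly half the square.

```python
import numpy as np

# Midpoint grid over the square D = (0,2) x (1,3).
n = 1000
x = (np.arange(n) + 0.5) * 2.0 / n          # 0 < x < 2
y = 1.0 + (np.arange(n) + 0.5) * 2.0 / n    # 1 < y < 3
X, Y = np.meshgrid(x, y)

frac = np.mean(X + Y < 3)       # fraction of D satisfying x + y < 3
print(frac)                     # ≈ 0.5: the triangle under the diagonal

# The corner (2, 3) allowed by the proposed bounds violates the constraint:
print(2 + 3 <= 3)               # False
```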
Here's what I get:
April 7th 2010, 03:35 PM #2 | {"url":"http://mathhelpforum.com/calculus/137788-joint-probability-density-function.html","timestamp":"2014-04-18T19:07:24Z","content_type":null,"content_length":"37527","record_id":"<urn:uuid:64df38ca-dc8a-48f5-af20-474cc2317ce7>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00526-ip-10-147-4-33.ec2.internal.warc.gz"} |
FinAid | Professional Judgment | Counterintuitive Results
Dependency Override Increases EFC
In some cases a dependency override can increase the student's EFC instead of decreasing it. Although the formula for independent students is generally more favorable than the formula for dependent
students, there is a little wiggle room that can lead to an increase in EFC instead of a decrease.
This quirk is caused by differences in the income protection allowance. The income protection allowance for independent students is approximately $3,000 higher than the allowance for dependent
students. So normally an independent student will have a lower available income. However, when a student becomes independent, cash support from parents can be included on Worksheet B. If this support
is greater than the difference in income protection allowances, and most of the parent's income was previously sheltered by allowances, the student's EFC might increase as a result of a dependency
override instead of decreasing. | {"url":"http://www.finaid.org/educators/pj/counterintuitive.phtml","timestamp":"2014-04-19T12:43:27Z","content_type":null,"content_length":"21062","record_id":"<urn:uuid:129dc7a2-0a50-4177-9a9f-3ef966030e9f>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00182-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mean, Median, Range, IQR, Standard Deviation...
June 29th 2010, 01:47 AM #1
Jan 2009
Hello! I am suffering a great deal of perplexity due to Part B of this question. I have indicated in bold the source of my confusion. Any help would be greatly appreciated. Thank you in advance.
The community nurse collected data on the ages of women who had given birth to their first child in the last year in each of the two towns that she visited. Her findings were:
Town A: 23, 22, 23, 28, 33, 40, 22, 18, 34, 28
Town B: 27, 26, 24, 30, 29, 32, 28, 27
a. Calculate the mean age, median age, range, interquartile range, and standard deviation for each town. (I have done so.)
b. Give a brief summary and compare the age of women giving birth to their first child in each town. What implications might this have for the services that the community nurse needs to provide
in each town?
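For part (a), the arithmetic can be checked with Python's statistics module. A sketch; note that quartile/IQR conventions vary between textbooks, so the statistics.quantiles result may differ slightly from your method:

```python
import statistics as st

town_a = [23, 22, 23, 28, 33, 40, 22, 18, 34, 28]
town_b = [27, 26, 24, 30, 29, 32, 28, 27]

for name, ages in (("Town A", town_a), ("Town B", town_b)):
    q1, _, q3 = st.quantiles(ages, n=4)   # 'exclusive' method by default
    print(name,
          "mean:", st.mean(ages),
          "median:", st.median(ages),
          "range:", max(ages) - min(ages),
          "IQR:", q3 - q1,
          "sample sd:", round(st.stdev(ages), 2))

# Town A: mean 27.1, median 25.5, range 22: younger on average, widely spread.
# Town B: mean 27.875, median 27.5, range 8: older and tightly clustered.
```

For part (b), these numbers support the comparison: Town A's mothers span ages 18 to 40, so the nurse there needs services for both teenage and older first-time mothers, while Town B's narrow cluster in the mid-to-late twenties suggests a more uniform service profile.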
June 30th 2010, 10:04 PM #2 | {"url":"http://mathhelpforum.com/statistics/149665-mean-median-range-iqr-standard-deviation.html","timestamp":"2014-04-19T22:46:50Z","content_type":null,"content_length":"36074","record_id":"<urn:uuid:0a590cec-926a-4a91-a3e5-c2c6fcdeab8e>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00547-ip-10-147-4-33.ec2.internal.warc.gz"} |
Foy, CA Algebra 1 Tutor
Find a Foy, CA Algebra 1 Tutor
...I encourage students to use an assignment book and track their homework due dates, test dates, and plan accordingly. Other study skills that students learn while working with me include
goal-setting, prioritizing, and predicting possible test questions. I have worked with many students that have attention problems.
24 Subjects: including algebra 1, chemistry, writing, geometry
...I will provide the appropriate learning activity through accommodations and modifications to every student based on each student’s individual needs and current curriculum practices. I am a
Highly Qualified, Certified Special Education Teacher. I have been teaching elementary students with Autism for +12 years.
15 Subjects: including algebra 1, English, writing, dyslexia
...I am currently living in the West Hollywood area and I am willing to commute to your home. Feel free to contact me at any time, and I will get back to you at my earliest convenience. Thanks!
17 Subjects: including algebra 1, geometry, algebra 2, elementary (k-6th)
...I've had the privilege of working with many math teachers and professors while I've been tutoring and have learned much from being the student and instructor. As a student, I understand the
material, but as the instructor I become aware of the mechanics of the subject and find ways to make it ea...
7 Subjects: including algebra 1, geometry, algebra 2, trigonometry
...I am a student at Cal State LA, and I am studying Business and Computer Information Systems. I used to be a pre-med, so I have taken many Biology and Chemistry classes. I have taken a lot of
these classes that WyzAnt provides tutoring for, so I feel that I could be of great help for you.
19 Subjects: including algebra 1, Spanish, English, writing
| {"url":"http://www.purplemath.com/Foy_CA_algebra_1_tutors.php","timestamp":"2014-04-18T00:53:10Z","content_type":null,"content_length":"23973","record_id":"<urn:uuid:224b09c6-dc3d-4fc9-8742-9414999ab7b6>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00062-ip-10-147-4-33.ec2.internal.warc.gz"} |
Strange things about Xeno's Paradox
Strange things about Xeno's Paradox
According to what Xeno said about measuring and motion. Something can't travel to somewhere without first travelling half-way. So this means that a measurement of 0 to 1 would be 0.5. This is not
logical and this post is to explain what a quack Xeno was.
P.S For those who don't know, Xeno was a greek person who
liked measuring things.
0 can be nothing and something.
Re: Strange things about Xeno's Paradox
Wikipedia wrote:
What the paradox says is that if you are walking 5 feet, you must first move 2.5, then 1.25, then 0.625, etc, dividing by 2, and never reaching 0 because division doesn't reach 0. But humans
don't walk via division of distance. We walk via subtraction of step length from distance. So if we take 1-foot steps, our objective is 5 feet away, then 4, then 3, and eventually we're there.
What you have to remember is that Zeno was trained in arguing, not math (primarily, anyway).
Last edited by Patrick (2007-01-04 01:44:18)
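Both readings in the quoted passage can be checked numerically (a sketch; the 5-foot walk and the 1-foot steps are taken from the quote): halving the remaining distance gives a geometric series whose partial sums approach 5 but whose terms never reach 0, whereas subtracting a fixed step length finishes in 5 steps.

```python
# Zeno's reading: keep halving the remaining distance.
remaining, travelled = 5.0, 0.0
for _ in range(50):
    step = remaining / 2        # 2.5, 1.25, 0.625, ...
    travelled += step
    remaining -= step
print(travelled)                # approaches 5.0, never quite reaching it

# The quoted rebuttal: walk by subtracting a fixed step length.
distance, steps = 5, 0
while distance > 0:
    distance -= 1               # one 1-foot step
    steps += 1
print(steps)                    # 5
```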
Re: Strange things about Xeno's Paradox
According to what Xeno said about measuring and motion. Something can't travel to somewhere without first travelling half-way. So this means that a measurement of 0 to 1 would be 0.5. This is not
logical and this post is to explain what a quack Xeno was.
I'm sorry, but I don't see where you explained how Xeno is a quack.
To me, this is a very important critique of our view of space as being continuous. The problem is that we don't know what happens at the Planck length. My opinion on this is that it is a scientific
concept (not mathematical), and thus, I accept space as being continuous on the fact that the statement has yet to be falsified.
"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."
Re: Strange things about Xeno's Paradox
Well, as I see it, Zeno of Elea (it's not Xeno) got this all wrong. Instead of saying that before you can move one unit you have to move a half unit, you should be saying that when you move one unit you also move a half unit.
Re: Strange things about Xeno's Paradox
Instead of saying before you can move one unit, you have to move a half unit...
But is that first part wrong? If so, how?
"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."
Re: Strange things about Xeno's Paradox
Because it's implying that moving a half unit is a separate action from moving the whole unit: an action that has to be done on its own and before the other.
Re: Strange things about Xeno's Paradox
Patrick wrote:
Because it's implying that moving a half unit is a separate action from moving the whole unit: an action that has to be done on its own and before the other.
Are you saying that to move a half unit, one must also move a full unit? Because if that isn't the case, then they are separate actions.
"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."
Re: Strange things about Xeno's Paradox
Well I agree that you have to move half a unit before you move one... but that wouldn't make a measurement from 0 to 1 = 0.5.
Re: Strange things about Xeno's Paradox
...but that wouldn't make a measurement from 0 to 1 = 0.5.
You lost me there. Is the measurement equal to 0.5? Or is whatever you're measuring from 0 to 1 equal to what's at its median?
I'm not even sure why you're talking about measurements being equal to something.
"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."
Re: Strange things about Xeno's Paradox
Zeroface wrote:
According to what Xeno said about measuring and motion, something can't travel somewhere without first travelling half-way. So this means that a measurement of 0 to 1 would be 0.5. This is not logical, and this post is to explain what a quack Xeno was.
P.S. For those who don't know, Xeno was a Greek person who liked measuring things.
That's what I was responding to
Re: Strange things about Xeno's Paradox
I guess I really don't understand that either.
"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."
Re: Strange things about Xeno's Paradox
To get from point A to point B you need to move half the distance there first. Then from there to B you need to move half the remaining distance, and so on. So it's a "half the distance to the goal" thing that keeps repeating. The limit will be the goal point, but it will never actually get there. Which I'm guessing is why Zeroface is calling him a quack. (Because we all know you can get there and move beyond it.) But his wording is still unclear.
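The "half the distance to the goal" process above is a geometric series; here is a quick numeric sketch (my own, not part of the thread) of the partial sums falling short while the limit equals the goal:

```python
# Partial sums of 1/2 + 1/4 + 1/8 + ... : every finite sum falls short of the
# goal, but the limit of the series is the goal itself.

goal = 1.0
covered = 0.0
for n in range(1, 21):
    covered += goal / 2 ** n   # add half of what was left at the start of step n
    assert covered < goal      # after finitely many terms, still short

print(goal - covered)  # shortfall after 20 terms is exactly 2**-20
```

Since each term is a power of two, the floating-point arithmetic here is exact, so the shortfall really is 2**-20.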
Last edited by mikau (2007-01-05 18:29:00)
A logarithm is just a misspelled algorithm.
Re: Strange things about Xeno's Paradox
But the argument is against continuous space, not whether or not you can travel.
"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."
Re: Strange things about Xeno's Paradox
Aye, but I think he may have been using one to prove/disprove the other. I don't know; it was the first thing that ran through my head.
A logarithm is just a misspelled algorithm.
Re: Strange things about Xeno's Paradox
Ricky wrote:
Patrick wrote:
Because it's implying that moving a half unit is a separate action from moving the whole unit: an action that has to be done on its own and before the other.
Are you saying that to move a half unit, one must also move a full unit? Because if that isn't the case, then they are separate actions.
Okay, you got me wrong there. I thought you would understand that all I was saying was about the case where the goal is to move one unit. In that process, the half-unit movement is a part of the full-unit movement, not a separate process.
Re: Strange things about Xeno's Paradox
I think Zeno is more clever than you're giving him credit for. He obviously didn't actually believe that his assertions were true - he just made them and then used the real world to contradict them.
Hence it being called Zeno's paradox instead of Zeno's theorem.
Zeno liked to make up things that used apparently sound mathematics and yet that were clearly false, to annoy all the mathematicians of the time. A more famous example is of the tortoise and the
athlete, although the reasoning used is kind of similar.
Why did the vector cross the road?
It wanted to be normal.
Re: Strange things about Xeno's Paradox
mathsyperson wrote:
I think Zeno is more clever than you're giving him credit for. He obviously didn't actually believe that his assertions were true - he just made them and then used the real world to contradict
them. Hence it being called Zeno's paradox instead of Zeno's theorem.
Zeno liked to make up things that used apparently sound mathematics and yet that were clearly false, to annoy all the mathematicians of the time. A more famous example is of the tortoise and the
athlete, although the reasoning used is kind of similar.
Well, that's exactly my point! It doesn't really contradict the real world, or the world as we experience it anyway, because the premises are wrong. This is one of the things that can qualify a statement as a paradox, so Zeno succeeded in making a paradox, I guess. Though, I don't think it's of any importance to mathematics, only to the study of rhetoric.
Re: Strange things about Xeno's Paradox
I'm sorry, I'm still not understanding. Which premise is the one that you are saying is wrong?
And I don't see this as a paradox of math, just of a continuous-space universe. If space is not continuous, then the paradox breaks down, as there will be a point where you can't split a distance in half.
"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."
Re: Strange things about Xeno's Paradox
It would help if we knew what space and length are.
I defined Space as "The region in which objects exist.", and Length as "Distance. How far from end to end." in the math dictionary.
(If anyone else would like to offer concise definitions better than those, please do.)
"The physicists defer only to mathematicians, and the mathematicians defer only to God ..." - Leon M. Lederman
Re: Strange things about Xeno's Paradox
I don't think the definitions are the problem. Just let space and length be as we naively think about them.
"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."
Re: Strange things about Xeno's Paradox
Hmm... I think that what Zeno was referring to is that all change can be reduced to 0.000...(on to infinity)...1. Since the limit of this is zero, we can never move, hence the paradox. The problem with this is that all change can't in fact be reduced to 0.000...1, which of course leads us to the nature of the continuity of change. This leads us on to continuous functions on the complex plane, I think... or maybe I'm just a freshman who knows nothing.
Re: Strange things about Xeno's Paradox
Oh wait, I totally got it.
The universe can't be continuous, or else we run into this paradox, where any division of distance can be reduced to 0.000...1, which is obviously a paradox. Therefore, space must be discrete, which is the only other logical possibility: a sort of proof by contradiction of the discreteness of the universe. As mentioned before, the Planck length is the discrete unit of space, just like the quantum is the discrete unit of energy, which was discovered by Max Planck. This in itself leads to interesting physical conclusions, but also mathematical ones. Yay!
Re: Strange things about Xeno's Paradox
The universe can't be continuous, or else we run into this paradox, where any division of distance can be reduced to 0.000...1, which is obviously a paradox. Therefore, space must be discrete, which is the only other logical possibility: a sort of proof by contradiction of the discreteness of the universe. As mentioned before, the Planck length is the discrete unit of space, just like the quantum is the discrete unit of energy, which was discovered by Max Planck. This in itself leads to interesting physical conclusions, but also mathematical ones. Yay!
The bolded part ("the only other logical possibility") is a logical fallacy known as a False Dilemma. You are forcing me to choose between X and Y, when in reality other choices such as Z exist.
A more accurate statement is "which is the only other possibility that I can think of".
Perhaps movement is discrete and space is continuous. Perhaps space is continuous but acts with very weird properties on the small scale. Actually, we already know the 2nd is true.
The list goes on...
"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."
Re: Strange things about Xeno's Paradox
Ricky wrote:
The universe can't be continuous, or else we run into this paradox, where any division of distance can be reduced to 0.000...1, which is obviously a paradox. Therefore, space must be discrete, which is the only other logical possibility: a sort of proof by contradiction of the discreteness of the universe. As mentioned before, the Planck length is the discrete unit of space, just like the quantum is the discrete unit of energy, which was discovered by Max Planck. This in itself leads to interesting physical conclusions, but also mathematical ones. Yay!
The bolded part ("the only other logical possibility") is a logical fallacy known as a False Dilemma. You are forcing me to choose between X and Y, when in reality other choices such as Z exist.
A more accurate statement is "which is the only other possibility that I can think of".
Perhaps movement is discrete and space is continuous. Perhaps space is continuous but acts with very weird properties on the small scale. Actually, we already know the 2nd is true.
The list goes on...
Ah... you have got me thinking again. In fact, when I wrote that I almost put in the "that I can think of" part.
Re: Strange things about Xeno's Paradox
I mean, the discrete-space theory of the Planck length is accepted, right?
I've never even heard of a non-philosophical discrete-space theory; that is, one which comes out of actual physics instead of thought experiments. The Planck length is how far one has to "zoom in" until something known as quantum foam is observed. This is where space gets really weird and freaky, and general relativity, which requires space to be nice and smooth, breaks down.
"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..." | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=55208","timestamp":"2014-04-20T21:16:13Z","content_type":null,"content_length":"43545","record_id":"<urn:uuid:fcc73205-0784-459c-9ccb-de7a632a6e6e>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00325-ip-10-147-4-33.ec2.internal.warc.gz"} |
Motivation, Not IQ, Matters Most for Learning New Math Skills | TIME.com
You don’t have to be born with math skills; solving problems is a matter of studying and motivation.
That may not seem like such a surprise, but it’s become easy to say ‘I just can’t do math.’ While some element of math achievement may be linked to natural inborn intelligence, when it comes to
developing skills during high school, motivation and math study habits are much more important than IQ, according to a new study.
“It’s not how smart we are; it’s how motivated we are and how effectively we study that determines growth in math achievement over time,” says Kou Murayama, a post-doctoral psychology researcher at the University of California, Los Angeles, and lead author of the study published in the journal Child Development.
Murayama and his colleagues studied math achievement among roughly 3,500 public school students in the German state of Bavaria. The students were tracked from fifth grade through tenth grade and given a grade-appropriate standardized math exam each year. The kids were also given an IQ test and asked about their attitudes toward math.
In particular, the psychologists were interested in how much the adolescents believed that math achievement was something within their control, and whether the kids were interested in math for its
own sake. They also asked the students about study strategies, such as whether they would try to link concepts together when learning new material, or simply try to memorize the steps to typical problems.
To their surprise, the researchers found that IQ does not predict new learning. In other words, intelligence as measured by the IQ test does not indicate how likely students are to pick up new
concepts or accumulate new skills. While children with higher IQs did have higher test scores from the beginning of the study, how much new material the kids learned over the years was not related to
how smart they were, at least not once demographic factors were taken into account.
“Students with high IQ have high math achievement and students with low IQ have low math achievement,” Murayama says. “But IQ does not predict any growth in math achievement. It determines the
starting point.”
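One way to picture the intercept-versus-slope distinction Murayama describes is a toy growth model. The formula and numbers below are entirely hypothetical (my own construction, not the study's data or statistical model):

```python
# Toy linear growth model (hypothetical numbers, not the study's):
# IQ sets where a student starts; motivation sets how fast scores grow.

def achievement(iq, motivation, grade):
    intercept = 50 + 0.5 * (iq - 100)  # starting point, driven by IQ
    slope = 2.0 * motivation           # yearly growth, driven by motivation
    return intercept + slope * (grade - 5)

# A high-IQ, low-motivation student vs. an average-IQ, high-motivation one:
for grade in (5, 7, 10):
    bright = achievement(iq=120, motivation=0.5, grade=grade)
    driven = achievement(iq=100, motivation=2.0, grade=grade)
    print(grade, bright, driven)
# In this toy setup, the driven student starts lower (50 vs. 60)
# but overtakes the brighter one by tenth grade.
```

The point of the sketch is only the shape of the finding: IQ fixes the intercept, while motivation and study habits fix the slope.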
So the children who improved in math over the years were disproportionately those who said they “agreed” or “strongly agreed” with statements such as, “When doing math, the harder I try, the better I
perform,” or “I invest a lot of effort in math, because I am interested in the subject”– even if they had not started out as high-achieving students. In contrast, kids who said they were motivated
purely by the desire to get good grades saw no greater improvement over the average. As for study strategies, those who said they tried to forge connections between mathematical ideas typically
improved faster than kids who employed more cursory rote-learning techniques.
While not entirely surprising — it makes sense that more motivated students would do better and that those who put in more effort to learn would see better results — the findings provide reassuring
confirmation that academic success is not governed by a student’s cognitive abilities alone. Instead, students who want to learn math and who work at it may find they make faster gains and learn
better than students who are bright but less motivated.
That’s encouraging not just for students, but for schools as well, says Murayama. He notes that it’s not clear how generalizable the results from the German school system are to other nations, but he
is intrigued enough by the results to investigate different instructional styles that teachers and parents may use to inspire kids to learn. While certain intelligence traits seem to be based in genetics and therefore hard to change, previous research suggests that motivation is not innate, but largely learned. Even, it seems, when it comes to math.
95 comments
Well, no kidding. But given that most American parents and far too many American teachers will proudly declare that they're "not a math person," I don't see how this is likely to help much. After
all, if the people who should be motivating you to work hard and try your best at math already accept that "math is tough," how likely is it that that attitude won't be passed down to the kids?
eetom:
Isn't it obvious that motivation is the propelling force behind success in anything? Why do some face death to climb high mountains while some prefer to stay at home and watch TV? To be a mathematician you have to be gifted. But to pass high school mathematics and to learn enough arithmetic for a "normal life" one does not need to be a genius. He only needs to be normal. If a person has no appetite and starves himself to death, who is to be blamed?
Agile Mind has a program called Academic Youth Development that is perfect to help districts with student motivation and effective effort in relation to STEM courses.
@phillipshuskies @zite Absolutely! Motivation is critical to learning/achievement regardless of IQ!!
Practice makes perfect! To motivate students to exercise math is simple : challenge them with real life problems that are adapted to their level. This is an example of such a challenge :
@nbucka reminds me of Dr. Lauren Resnick- effort vs. ability
@danicamckellar @TIME @TIMEHealthland, I agree. Motivation is controlled by the student. Teachers can try to spark it, but to no avail.
@Vari_Audrey @mindshiftkqed I wish more Southern Africans believed this. Persistence not IQ are the major determiners of excellence in Maths
@MindShiftKQED Now if we could just figure out how to teach/foster effort.
@earlsamuelson I think it's valuable to be + about worksheet development but most, I mean MOST, don't motivate but demotivate
@johnkuhntx Thank goodness for the trenchant insights of the mainstream press.
Isn't this obvious in learning any subject or in trying to achieve anything? Did someone just discovered recently that 1 plus 1 equals 2?
@danicamckellar Mathematics is just Greek (μάθημα máthēma) for 'study hard!' ;0)
@danicamckellar after last calculus ten years ago, going back to school with differential equations, it's going to take more than motivation
@BrianFe53088939 Have to check out- reminds me of Carol Dwecks Growth Mindset & IQ measures' validity has been questioned/disproven 4years
@jboyd_math Granted. On the other hand, teachers can KILL motivation in any subject fairly easily. Teaching math in a way that encourages memorization without real understanding is a great way to
do it.
So true that motivation is student controlled. My husband teaches high school mathematics. He has said, countless times, that until the academics becomes important to the student, and the student
has the epiphany of "hey I have to give it my all," it doesn't matter how entertaining he is, how "real world" he makes it or how hands on, it all comes down to the student and their "aha" moment.
@Vari_Audrey I agree - support teachers to be able to teach math with passion!
@rodaniel How is a worksheet defined? I can throw together some problems that are all connected & call it a worksheet; very valuable. #abed
@earlsamuelson I agree valuable for students to reinforce understanding but consistent use likely doesn't motivate or offer engagement
Procedural fluency is a key to understanding. WHat do you mean by "understanding"? And CCSS apparently contains a balance of both.
@rodaniel Well planned problems on "worksheets" can serve as extensions of existing knowledge, creating curiosity for what comes later #abed
@rodaniel It would be impossible to weave concepts together in a meaningful way with no significant knowledge of those concepts. #abed
@rodaniel I've heard many times over the years from "experts" that "anyone can teach math". #abed
@rodaniel I'm having a struggle understanding why ANY "math" teacher would do anything BUT that. #abed
@earlsamuelson I fear teachers missing the point of #CCSS that pushes for deeper learning rather than proficiency at discreet skills
@earlsamuelson my point is I would rather see students showing what they know about small set of connected topics & depth of learning
@earlsamuelson won't argue that past isn't all bad but I see too many worksheets in use that use routine problem sets that aren't important
@rodaniel My point is that we can't be throwing out everything from "the past" just because someone claims it to be "ineffective". #abed
Irene Sikora, Fourth Grade
Homework, Week of April 14
Week of April 14
Monday, April 14
5th Math; WB p.107; Sim.sol.#93
4th Math Sim.sol.#93
Religion: Study Stations of the Cross.
Tuesday, April 15
5th Math: Sim.sol.#94
5th Science: Study for test.
4th Math: Sim.sol.#94
homework; week of April7
Monday, April 7
4th and 5th Math Study for test; Sim.Sol.#89
Tuesday, April 8
4th and 5th Math; Sim.sol.#90
Wednesday, April 9
4th and 5th Math; Sim.sol.#91
Religion: Start studying the Stations of the Cross. Test next Tues.
Thursday, April 10
5th Math: p.442, #13 to 32, Sim.sol.#92
4th Math: WB p.119; Sim.sol.#92
Homework, week of March 31
Monday, March 31
5th Math: Sim.sol.#85, WB p.103
4th Math: Sim.sol.#85
5th Science: Questions on p.191 &193
Wednesday, April 2
5th Math:Sim.sol.#87
4th Math: Sim.sol.#87
5th Science: Study the plates of the earth for test tomorrow.
Thursday, April 3
4th and 5th Math:Simple sol.#88
Homework, Week of March 24
Monday, March 24
5th Math: Sim.sol.#81
4th Math: Sim.sol.#81
4th Science: Study vocab.; test on Thurs.
Tuesday, March 25
5th Math: Study graphs for test tomorrow; Sim.sol.#82
4th Math: Sim.sol.#82; WB p.108, #1 to 16
4th Science: Study for vocab. test on Thurs.
Wednesday,March 26
5th Math: Sim.sol.#83
4th Math:Sim.sol.#83; p.472, #11 to 24
4th Science: Study for test on vocab. and water cycle.
Thursday, March27
5th Math: WB p.101; Sim.sol.#84
4th Math: Study for test; Sim.sol.#84
Homework, Week of March 17
Monday, March 17
4th and 5th Math Sim.sol.#77
4th and 5th Science Study vocab.; test on Wed.
Tuesday, March 18
5th Math: Sim.sol.#78
5th Science: Study vocab. Test tomorrow.
4th Math: p.446, #15 to 18
4th Science: Study vocab. Test tomorrow.
Wednesday, March 19
5th Math: WB p.61; Sim. sol.#79
4th Math: Sim.sol. #79
Thursday, March 20
4th and 5th Math: Sim.sol.#80
Homework; Week of March 10
Monday, March 10
5th Math: Sim.sol. #73
4th Math: sim.sol.#73; Chapter test on Wed.
Religion: Finish Lenten poster
Tuesday, March 11
5th Math: Finish WB p.99, #3 to 12; Sim.sol.#74
4th Math: Test tomorrow; Study p.424, Sets A, B, & D; Sim.sol.#74
Wednesday, March 12
5th Math: Finish WB p.100; Sim.sol.#75
4th Math: Sim.sol.#75
Thursday, March 13
5th Math: Sim.sol.#76
5th Science: Finish definitions of 13 vocab. words. Test next Wed.
4th Math Sim.sol.#76
4th Science: Finish definitions of 10 vocab. words. Test next Wed.
Terra Nova Tips
Tips for Terra Nova Week
Get a good night's sleep.
Eat a healthy breakfast.
Bring a healthy snack.
Have 2 #2 pencils and erasers.
Relax and do the best you can do. There is nothing to worry about!
Homework for Grade 4
Simple Solutions #85, English Plus Workbook p. 36, 7-18; Spelling on Tuesday, Feb. 25, Inventor information due 2/26
Homework, Week of Feb.24
Monday, Feb. 24
5th Math: Study for test on adding/subtracting mixed numbers; sim.sol.#69
4th Math: Sim.sol.#69
Tuesday, Feb. 25
5th Math: Sim.sol.#70
4th Math: 5 math problems; Sim.sol.#70
Wednesday, Feb. 26
4th and 5th Math: Sim.sol.#71
Religion Start studying the Precepts of the Church. Test next Monday.
Thursday, Feb. 27
5th Math: Sim.sol.#72; Study for test tomorrow.
4th Math: Sim.sol.#72
4th Religion: Study Precepts of the Church for test on Mon.
Homework, Week of Feb. 18
Tuesday, Feb.18
5th Math:p.367, #5 to 8; sim.sol.#65
5th Science: Make sure the 5 definitions from p.242 are finished. Test on Thurs.
4th Math: Sim.sol. #65
4th Science: Study definitions; test on Thurs.
Wednesday, Feb. 19
5th Math: p.371, #13 to 17; sim.sol.#66
5th Science: Study def. and notes for test tomorrow.
4th Math:sim.sol.#66
4th Science: Study for test tomorrow.
Thursday, Feb. 20
5th Math: WB p.90#1 to 8; sim.sol.#67
4th Math:sim.sol.#67
Religion:Finish Name Acrostic.
Friday, Feb. 21
I have had problems with students completing homework on time lately. Starting on Monday, Feb.24 students will have to stay in at recess to do the homework. When they accumulate 3 missing homeworks,
they will get an after-school detention, from 2:30 to 3:00. I will give you advance notice and a note to be signed if they get an after-care solution.
4th and 5th Math: Sim.sol. #68
Homework, Week of Feb. 10
Monday, Feb. 10
5th Math: Sim. sol. must be finished to p.61.
5th Science p.240 & 241, #1 to 16, must be finished by Wed.
4th Math Sim.sol.#61
Tuesday, Feb. 11
5th Math:p.358, set C; sim.sol.#62
5th Science:Finish p. 240 # 1 to 16.
4th Math: WB p.86, #5 to 8. Sim.sol.#62
4th Science:Start studying definitions; test will be next Tues.
Homework, Week of Feb. 3
Monday, Feb. 3
snow day
Tues., Feb. 4
5th Math: Sim.sol. #58
5th Science: Test that was supposed to be today is postponed until Thurs. Test will be on Thurs., regardless of Wed.'s weather, so remember to bring home notes to study.
4th math: Sim.sol. #57
Wed., Feb. 5
Snow day
Thurs., Feb. 6
5th Math: Sim.sol. #59; WB p.80
4th Math:Sim.sol.#58 and 59.
Homework, week of Jan.27
Monday, Jan. 27
4th Math: Sim.sol. #56
Tuesday, Jan.28
5th Math:Sim.sol.#55. WB p.76
4th Math: Test on Division tomorrow.
Homework, Week of Jan.21
Thursday, Jan.23
5th Math: WB p.75; sim.sol.#54
4th Math: p.312, #9 to 17; sim.sol.#54
Homework, Week of Jan.13
Monday, Jan.13
5th Math: Finish p.298, #27 to 36 and #43 to 50.
5th Science: Study vocab. for test on Thurs.
4th Math: p.295, Set F, # 1 to 10
Tuesday, Jan.14
5th Math: Finish p.301, Set E, #1 to 10 and 17 to 20
5th Science: Study for Science test on Thurs.
4th Math: WB p.70, #1 to 12. Prepare for test tomorrow.
Wednesday, Jan.15
5th Math:WBp.72; Study for test on Prime Factorization; Sim.sol.#51
5th Science:Study for vocab. test
4th Math: Sim.sol.#51
Homework, Week of Jan.6
Monday, Jan.6
5th Math: WBp.66, #1 to 9; sim.sol.#47
4th Math: WB p.66; sim.sol.#47
4th Science: Study vocab. for test on Fri.
Tues., Jan.7
5th Math: Finish p.282, #7 to 16; Sim.sol.#48
4th Math: Sim.sol.#48
4th Science: Study vocab. for test on Fri.
Wed., Jan.8
5th Math: WB p.67, #11 to 19; Study for tomorrow's test; Sim.sol.#49
5th Science: Finish Open Book test.
4th Math: p.284, #13 to 17; Sim.sol.#49
4th Science: Study vocab. for Fri.'s test.
Religion: Study Corporal Works of Mercy.
Thursday, Jan.9
5th Math:Sim.sol.#50
4th Math:Sim.sol.#50
4th Science: Study for Science test tomorrow.
Homework, Week of Jan. 2
Thurs., Jan. 2
5th Math: Sim.sol.#46
4th Math: WB p.64; Sim.sol.#46
4th Science: Start studying new vocab.
Homework Week of Dec.16
Monday, Dec.16
5th Math: Finish WB p.54; Sim.sol.#45
4th Math: WB p.41
Religion: Study Beatitudes.
Tuesday, Dec. 17
No homework for 4th or 5th grade. Some 5th graders have make-up work to do.
Homework, Week of Dec.9
Monday, Dec. 9
5th Math: Sim.sol. # 43; Prepare for division of decimals test tomorrow; Finish p. 204, Set B, #9 to 29
5th Science: Study for vocab. test on Wed.
4th Math: WB p.39; Sim.sol.#43
Wednesday, Dec.11
5th Math:Sim.sol.#44
5th Science:Study vocabulary and order of steps for Mitosis for test tomorrow.
4th Math: Prepare for test on time and temperature; sim.sol.#44
4th Religion: Study Beatitudes; must match proper beginnings with endings.
Thursday, Dec. 12
No Homework; See you at the Concert!
Homework, Week of Dec.2
Monday, Dec.2
5th Math: p.189, #3 to 17 to prepare for tomorrow's test; sim.sol.#39
5th Science: Finish Ch. Open Book Test.
4th Math: WB p.36; Sim.sol.#39
Tuesday, Dec.3
4th and 5th Math:Sim.sol.#40
Thursday, Dec.4
5th Science:Start studying definitions; test next Wed.
4th Math: Sim.sol.#41
Homework, week of Nov.25
Monday, Nov.25
4th and 5th Math:Sim.sol.#36
4th and 5th Science: Study for Science test tomorrow.
Tuesday, Nov.26
5th Math: Sim.sol.#38
4th Math: WB p.35; Sim.sol.#38
Homework, Week of Nov.18
Monday, Nov.18
5th Math: Sim.sol.#33
4th Math: Sim.sol.#33; WB p.31
4th Science: Finish p.92 & 93, #1 to 16
Tuesday, Nov.19
5th Math Sim.sol.#34
4th Math: WB p.33; Sim.sol.#34
Wednesday, Nov.20
5th Math:Study for test; Sim.sol.#35
5th Science: Test next Tues.
4th Math Study for test tomorrow. Sim.sol.#35
4th Science Study vocab. definitions.
Thursday, Nov,21
4th and 5th Math: Sim.sol.#36
4th and 5th Science: Start studying notes for test next Tues.
Homework, Week of Nov.,12
Tues., Nov.12
5th Math:Study for Ch. test tomorrow; Sim.sol.#29
5th Science: Study vocab. and notes for test on Thurs.
4th Math: Sim.sol.#29
Wed., Nov. 13
5th Math:WB p.37; Sim.sol.#30
5th Science: Study for test; remember, there is no word bank
4th Math: WB p.29; Sim.sol.#30
Thurs., Nov. 14
5th Math: WB p.38; Sim.sol.#31
4th Math: WB p.30; Sim.sol.#31
Homework, Week of Nov.4
Monday, Nov. 4
5th Math: Sim.sol.#25
5th Science: Finish p.62 & 63, #1 to 16. Write out each question and correct answer.
4th Math: Sim.sol.#25
4th Science: Start studying for Science test on Thurs.
Tuesday, Nov.5
5th Math: WB p.33; Sim.sol.#26
4th Math: Sim.sol.#26
5th Science: Finish Open Book test.
4th Science: Study definitions and notes for test on Thurs.
Wednesday, Nov.6
5th Math:WB p.34; Sim.sol.#27
4th Math Sim.sol.#27
4th Science: Study for test tomorrow.
Thursday, Nov.7
5th Math:Sim.sol.#28
4th Math: WB p.28; Sim.sol.#28
5th Science: Start studying definitions and notes on classification. Test will be next Wed. or Thurs.
Homework; Week of Nov.4
Thank you to all the parents, aunts, and grandparents who were able to join us for breakfast today. A special thanks to Mrs. Nolan for setting it up for us. It was great that Father Bill and Mr.
Sorci were also able to join us. The children and I appreciate your participation. Hope we can do it again sometime.
Homework, Week of Oct.28
Monday, Oct. 28
5th Math:Sim.sol.#21; Chapter test on Wed.
5th Science: Study notes on Skeletal, Muscular, and Excretory System and text book pages on Nervous system; test on Wed.
4th Math: WB p.25, Sim.sol.#21
Tuesday;, Oct.29
5th Math: Test tomorrow; Study p.123; Sim.sol.#22
5th Science: Test on Skeletal, Muscular, Nervous, and Excretory Systems
4th Math: WB p.26; Sim.sol.#22
Wednesday, Oct. 30
5th Math: Sim.sol.#23 and 24.
4th Math: Sim.sol.#23 and 24; Math test on Friday.
Thursday, Oct. 31
No Homework; HAPPY HALLOWEEN!
Homework, Week of Oct,21
Monday, Oct. 21
5th Math: Math p.98 #19 to 25
4th and 5th Science: Study notes for test tomorrow.
4th Math: 4 problems in notebook; Sim. Sol. #17
Tuesday, Oct.22
5th Math:Worksheet on evaluating expressions; sim.sol.#17
4th Math:4 mult. problems in notebooks; sim.sol.#18.
Wednesday, Oct.23
5th Math: Sim.sol.#18
5th Science:Start studying new set of notes for test next Tues.
4th Math:Sim.sol.#19
Thursday, Oct.24
5th Math:sim.sol.#19 and 20
4th Math: sim.sol.#20
5th Science: Study notes on skeletal and muscular systems; more notes to come for test next Wed.
4th Science: Start studying 6 vocab. words. Notes to come; test next Tues. or Wed.
Homework, Week of Oct.15
Tuesday, Oct. 15
5th Math: p.81, #11 to 14; Sim.sol.#14
4th Math: WB p.20
Wednesday, Oct.16
5th Math: Sim.Sol.#15
4th and 5th Science: Study Science notes for test next week.
4th Math:Sim.sol.#14
Thursday, Oct.17
5th Math:Sim.sol.#16
4th and 5th Science: Study notes; test next Tues.
4th Math: Sim.sol.#15; Study 2,3,4, &5 times tables.
Religion Study 10 Commandments.
Homework, Week of Oct.7
Monday, Oct. 7
4th and 5th Math:Simple sol. #12
4th and 5th Science: Study notes for test tomorrow.
4th and 5th Math: Simple sol. #13
Homework; Week of Sept. 30
Monday, Sept. 30
5th Math: finish p.69, #8 to 22 ; Sim. Sol. #8
4th Math: Sim.Sol. # 8.
Tuesday, Oct.1
5th Math:Worksheet and Sim.Sol.#9
4th Math: Finish WS; Sim.Sol.#9
4th and 5th Science:Study notes; test will be next week.
Wednesday, Oct.2
4th and 5th Math:Sim.sol.#10; Both classes will have a math test on Fri.
4th and 5th Science: Study notes in notebooks. Both classes will be having a Science test on their notes next Tues. or Wed.
Thursday, Oct.3
5th Math:Study for test tomorrow. Study notes. Sim.sol.#11
5th Science: Study notes for test next Tues.
4th Math: Study for Ch. 3 test. Sim. sol. #11
4th Science:Study notes for test next Tues.
Homework, Week of Sept. 23
Monday, Sept. 23
5th Math:Do #1 to 5 on worksheet
4th and 5th Science: Study for test on notes and vocab. tomorrow.
4th Math:P.50, #8 to 12
Tuesday, Sept. 24
5th Math: Test tomorrow; study samples on pages 52 & 53.
4th Math: Test tomorrow; study samples on p.56.
Wednesday, Sept.25
4th and 5th Math: Simple Solutions # 5
Thursday, Sept. 26
4th and 5th Math: Simple Solutions #6
Homework, Week of Sept. 16
Monday, Sept. 16
5th Math: Finish Classwork: p.37, #11 to 20; p.39, #8 to 22
5th Science:Definitions are due on Wed.; test will be next Tues.
4th Math- Worksheet
4th Science: start studying new Science vocab. Test will be next Tues.
Tuesday, Sept. 17
5th Math: 5 multiplication problems in your notebook.
5th Science: Finish notes from p.32 & 33; Finish vocab.
4th Math: p.36, #15 to 18 and 21,22
Wednesday, Sept. 18
5th Math: Simple Solutions worksheet
5th Science: Bring in 8 different things (pieces of candy or otherwise) to represent the organelles of plant and animal cells. Students may have a little extra to eat while doing the project; but not
an entire meal!
4th Science: Study vocab. for test on Tues.
Thursday, Sept. 19
5th Math: P.45, #10 to 14
4th and 5th Science:Study notes and vocab. for test on Tues.
4th Math: p.40, #2 to 11
Homework, Week of Sept. 9
Monday, Sept.9
5th Science:Study notes and vocab. for Thurs.'s test
4th Math: Worksheet, do odd numbers only
4th Science: Study notes and vocab. for Thurs.'s test
Wednesday, Sept. 11
4th Math:Worksheet
4th Science: Study vocab. and notes for test tomorrow
5th Math: p.21, #5 to 9
5th Science: Study vocab. and notes for test tomorrow
Thursday, Sept.12
5th Math: Prepare for Ch. 1 test tomorrow. Study/practice problems from pages 28 & 29
4th Math:Prepare for Ch. 1 test tomorrow. Study/ practice problems from p.24
Homework; Week of Sept. 3
Wed., Sept. 4
5th Math: p.9, # 2,3,4,7,8,9
5th Science:Start studying vocab.; test next Thurs.
4th Science: Write definitions of vocabulary words on p.x.
Thurs., Sept. 5
5th Math: Worksheet, both sides
4th Math: Worksheet, side 1.3
All books should be covered, all supplies should be purchased, and all summer choice projects should be handed in by Friday, Sept. 6.
Fri., Sept. 6
No homework; 4th grade has gym on Mon.
Summer Message
Dear Parents,
I would like to welcome you to 4th grade at Mother Teresa Regional School. I am eager to get to know you and your children. I have heard nothing but good things about this group of students, and I
look forward to working with them. We have lots of good projects in store, and I'm sure it will be a great year. I will see you at Back to School Night. If you have any questions or comments for me,
feel free to e-mail me and I will get back to you as quickly as I can.
Mrs. Sikora
Irene Sikora, Fourth Grade
I am Mrs. Sikora, and this is my fifth year teaching at MTRS. I have over 30 years' experience teaching in the Diocese of Trenton, covering all subject areas in grades 4 through 8. I graduated from
College Misericordia in Dallas, PA with a BS in Elementary Education. I am certified to teach all major subjects in grades K through 8. This year I will teach the 4th grade Religion, Science, and
Math. I will also teach 5th grade Math and Science. I am looking forward to getting to know the MTRS families.
Patent US5955932 - Q-controlled microresonators and tunable electric filters using such resonators
This is a divisional of application Ser. No. 07/989,396 filed Dec. 11, 1992 now U.S. Pat. No. 5,491,604.
The present invention relates generally to resonant microstructures, and more particularly to Q-control for resonant microstructures and electronic filters using such microstructures.
The need for high-frequency bandpass filters with high selectivity for telecommunication systems has stimulated interest in integrated versions of such filters wherein entire systems may be
integrated onto a single silicon chip. Examples of systems requiring these filters include radio-frequency (RF) receiver systems, mobile phone networks, and satellite communication systems.
Previously, intermediate frequency (IF) filtering in frequency modulated (FM) receivers has been performed at a 10.7 Mega-Hertz (MHz) IF frequency, using highly selective inductance-capacitance (LC),
ceramic, or crystal filters. Recently, integrated versions using integrated circuit (IC) switched-capacitor techniques have been attempted. However, designs based upon coupled biquad filter
architectures suffer from dynamic range reduction introduced when attempting high-Q operational simulation of LC stages. (Q is a figure of merit equal to reactance divided by resistance. The Q of a
system determines the rate of decay of stored energy.) Modulation filtering techniques, such as N-path designs, suffer from the generation of extraneous signals, such as image and clock components
inside the signal band, resulting from the remodulation process.
Recent advances in micromachining offer another analog, high frequency, high-Q, tunable integrated filter technology that can enhance filter performance over that of previous integrated versions
while maintaining design characteristics appropriate for bulk fabrication in very large-scale integrated (VLSI) systems. Specifically, micromachined mechanical resonators or resonant microstructures
may be used. These microresonators are integrated electromechanical devices with frequency selectivity superior to integrated resistance-capacitance (RC) active filtering techniques. Using integrated
micromechanical resonators, which have Q-factors in the tens of thousands, microelectromechanical filters with selectivity comparable to macroscopic mechanical and crystal filters may be fabricated
on a chip.
Since the passband shape of these filter designs depends strongly on the Q of the constituent resonators, a precise technique for controlling resonator Q is required to optimize the filter passband.
Such a Q-control technique would be most convenient and effective if the Q was controllable through a single voltage or an element value, e.g., a resistor, and if the controlled value of Q was
independent of the original Q.
An object of the present invention is thus to provide feedback techniques for precise control of the Q-factor of a micromechanical resonator.
Another object of the present invention is to provide very high Q microelectromechanical filters constructed of Q-controlled microresonator biquads in biquad filter architectures. In addition, the
invention provides a means for passband correction of spring-coupled or parallel micromechanical resonators via control over the Q-factor of the constituent resonators.
Additional objects and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the
invention. The objects and advantages of the invention may be realized and obtained by means of the instrumentalities and combinations particularly pointed out in the claims.
The present invention is directed to a resonator structure. The resonator structure comprises a first electrode at which an input signal may be applied and a second electrode at which an output
signal may be sensed. The resonator structure further includes a feedback means for applying the output signal to the first electrode for controlling the Q of the resonator structure.
The equivalent circuit series resistance (R.sub.x) of the resonator of the present invention is proportional to the inverse of the Q of the resonator. As such, the controlled value of Q is
independent of the original Q of the resonator. Rather, it is dependent only on the control voltage (V.sub.Q) or some other controlling factor such as resistance values.
Additionally, the gain of the resonator (v.sub.0 /v.sub.i) is equal to the number of input fingers divided by the number of feedback fingers. This is advantageous in that it offers very precise gain
values. This enables construction of bandpass biquads with precisely settable gains. Also, the gain will stay constant as the Q is changed.
Dimensions of a microresonator of the present invention may be: a length between about 5 microns (μm) and 1000 μm, a width between about 5 μm and 100 μm, and a thickness between about 0.1 μm and 100 μm.
High-Q tunable electronic filters based upon the Q-controlled microresonators of the present invention are suitable for batch fabrication using standard complementary metal-oxide semiconductor (CMOS)
integrated circuit and micromachining technologies. The Q-controlled microresonators may serve as adjustable biquad stages in various filter architectures such as coupled (or cascaded) biquad,
follow-the-leader feedback (FLF), or other multiple-loop feedback techniques. Frequency and bandwidth are independently voltage-controllable. This permits adaptive signal processing.
Noise analysis determines that the dynamic range of a proposed high-Q filter is much higher than that of its high-Q active RC counterparts, i.e., switched-capacitor, MOSFET-C, and g.sub.m -C filters.
Specifically, a dynamic range in excess of 90 decibels (dB) is predicted for a filter centered at 10.7 MegaHertz (MHz) with a bandwidth of 56 KiloHertz (kHz).
With the resonators of the present invention, temperature insensitivity can be achieved through micro-oven control, which, on a micron scale, provides orders of magnitude improvement in power
dissipation and thermal time constant over equivalent macroscopic methods.
The present invention will be described in terms of a number of different embodiments. It is directed to Q-control for microresonators. These resonators may be used to build very high Q
microelectromechanical filters. The filters may be constructed of coupled, Q-controlled microresonator biquads, spring-coupled resonators or resonators electrically connected in parallel.
Spring-coupled resonators and resonators electrically connected in parallel are described in the above-identified, co-pending application entitled "Microelectromechanical Signal Processors," which
has been incorporated by reference.
A basic Q-control architecture for a microresonator 20 is shown in FIG. 1. The microresonator is of the type shown in U.S. Pat. No. 5,025,346, issued Jun. 18, 1991, which is hereby incorporated by reference.
The resonator shown in U.S. Pat. No. 5,025,346 is preferred in the context of the present invention. However, the principles of the present invention equally apply to other types of resonators, and
the Q-control scheme discussed herein may be used with those resonators. Also the filter architectures, frequency-pulling schemes and micro-oven schemes discussed below may be applied to these other
types of resonators. Such resonators include, but are not limited to, those which use piezoelectric, piezoresistive, parallel-plate electrostatic, or magnetic drive and sense, and to resonators with
arbitrary geometries, such as cantilevers or double-ended tuning forks.
As shown in FIG. 1, resonator 20 has three ports, comprising a drive electrode 22, a sense electrode 23, and a feedback electrode 24. The resonator is driven electrostatically by the drive electrode
and capacitive motional current is sensed at the sense electrode. Signals are fed back to the microresonator via the feedback electrode.
The electrodes comprise interdigitated finger (comb) structures 25 and 27. The fingers 25 are stationary, being anchored to a substrate 29a, which may be a silicon wafer substrate, or anchored to
passivation layers, which may be a nitride layer 29b over an oxide layer 29c, over the substrate. The darkly shaded region 28 represents the anchor point for the drive electrode 22 and its associated
fingers 25. The fingers 27 are attached to a suspended, movable shuttle 27a; thus, they are movable. The shuttle 27a and fingers 27 are spaced above the substrate, and are allowed to move laterally
relative to the substrate overlayers and stationary fingers 25. A folded-beam suspension arrangement, represented generally by reference numeral 30, allows shuttle 27a and attached fingers 27 to move.
The folded beam suspension 30 comprises folded beams 31a, 31b, 31c, and 31d, and truss support beam 31f, all of which are suspended above the substrate 29a and associated overlayers 29b and 29c.
Motivations for this truss suspension are its large compliance and its capability for relief of built-in residual strains in the structural film. The cantilever beams 31b and 31d are anchored at one
end to a ground plane 29d, which is fabricated over the substrate 29a and substrate overlayers 29b and 29c, near a center point 31e (a darkly shaded region) and attached at the other end to the
folding truss beam 31f. Cantilever beams 31a and 31c are attached at one end to the folding truss beam 31f and at the other to the shuttle 27a. The folded beam suspension 30 allows expansion or
contraction of the four beams along the y-axis, increasing the linear range of operation of the resonator 20. The folded beam suspension 30', comprising 32a, 32b, 32c, 32d, and 32f, is anchored
through beams 32b and 32c to ground plane 29d and/or overlayers 29b and 29c at location 32e, and the suspension operates like beams 31a-31f.
The long effective support length of beams 31a-31d and 32a-32d results in a highly compliant suspension for the movable fingers 27 of the drive, sense, and feedback electrodes. In an alternate
arrangement, the substrate overlayers may be eliminated. The anchor points would then be formed on the substrate, and the substrate would serve as the ground plane.
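The compliance of this folded-flexure suspension can be estimated with the standard folded-beam approximation k.sub.sys = 2Eh(W/L).sup.3. Both the formula and every dimension below are assumptions for illustration, not values taken from the patent:

```python
# Rough sizing sketch for a folded-beam comb resonator. The stiffness formula
# k_sys = 2*E*h*(W/L)^3 is the standard folded-flexure approximation; the
# material constant and dimensions are assumed example values.
import math

E = 150e9        # Young's modulus of polysilicon, Pa (assumed)
h = 2e-6         # structural film thickness, m (assumed)
W = 2e-6         # beam width, m (assumed)
L = 200e-6       # beam length, m (assumed)
m_eff = 5.7e-11  # effective shuttle mass incl. beams and truss, kg (assumed)

k_sys = 2 * E * h * (W / L) ** 3               # system spring constant, N/m
f0 = math.sqrt(k_sys / m_eff) / (2 * math.pi)  # natural frequency, Hz

print(f"k_sys = {k_sys:.2f} N/m, f0 = {f0 / 1e3:.1f} kHz")
```

With these assumed dimensions the resonance lands in the tens of kilohertz, the same range as the f.sub.0 = 20 kHz typical value quoted for the equivalent circuit elements.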
The motion of the movable fingers is sensed by detecting the motional current through the time-varying interdigitated finger capacitor formed by the movable and stationary fingers of the sense
electrode 23 with a direct current (dc) bias voltage V.sub.p applied to ground plane 29d, which is attached to the shuttle 27a and movable fingers 27 through anchor points 31e and 32e. The driving
force F.sub.I and the output sensitivity are proportional to the variation of the comb capacitance C with the lateral displacement x of the structure, ∂C/∂x.
A key feature of the electrostatic-comb drive is that ∂C/∂x is a constant, independent of the displacement x, so long as x is less than the finger overlap. Note that ∂C/∂x for a given port is a
function of the number of overlaps between movable and stationary fingers 27 and 25, respectively, of the port in question. Thus, it can be different for drive port or drive electrode 22, sense port
or sense electrode 23, and feedback port or feedback electrode 24. To distinguish these values, (∂C/∂x).sub.d, (∂C/∂x).sub.s, and (∂C/∂x).sub.fb may be used for the drive, sense, and feedback ports,
At sense electrode 23, harmonic motion of the structure results in a sense current I.sub.s which is represented by:
I.sub.s =V.sub.p (∂C/∂x).sub.s (∂x/∂t) (1)
At drive electrode 22, the static displacement is a function of drive voltage v.sub.D given by: ##EQU1## where F.sub.x is the electrostatic force in the x direction and k.sub.sys is the system spring constant.
For a drive voltage V.sub.D (t)=V.sub.p +v.sub.d sin (ωt) the time derivative of x is ##EQU2## where v.sub.d is the amplitude of the input ac signal, V.sub.p is the previously-mentioned dc-bias
applied to the resonator, and where the fact that (∂C/∂x).sub.d is a constant for the inter-digitated-finger capacitor 23 or 24 is used. The second-harmonic term on the right-hand side of Equation
(3) is negligible if v.sub.d <<V.sub.p. Furthermore, if a push-pull (differential) drive is used, this term results in a common-mode force and is cancelled to the first order. At mechanical
resonance, the magnitude of the linear term in Equation (3) is multiplied by the Q-factor, from which it follows that the magnitude of the transfer function T(jω.sub.r)=X/V.sub.d relating the phasor
displacement X to the phasor drive voltage V.sub.d at the resonant frequency ω.sub.r is: ##EQU3##
The transconductance of the resonant structure is defined by Y(jω)=I.sub.s /V.sub.d. Its magnitude at resonance can be found by substitution of Equation (4) into the phasor form of Equation (1): ##EQU4##
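The drive-to-sense chain around Equations (1), (4) and (5) can be sketched numerically. The closed forms used here, |X/v.sub.d| = QV.sub.p (∂C/∂x).sub.d /k.sub.sys at resonance and |Y(jω.sub.r)| = ω.sub.r QV.sub.p.sup.2 (∂C/∂x).sub.d (∂C/∂x).sub.s /k.sub.sys, are the standard comb-resonator results matching the text's description (the patent's own equations are not reproduced in this extraction), and every element value is an assumed example:

```python
# Small-signal drive-to-sense chain at resonance, following the discussion of
# Equations (1), (4) and (5). All numerical values are assumed examples.
import math

Q = 50_000        # quality factor
Vp = 20.0         # dc bias, V (assumed)
dCdx_d = 4e-10    # drive-port dC/dx, F/m (assumed)
dCdx_s = 4e-10    # sense-port dC/dx, F/m (assumed)
k_sys = 0.6       # system spring constant, N/m (assumed)
w_r = 2 * math.pi * 20e3   # resonant frequency, rad/s

v_d = 1e-3        # 1 mV ac drive amplitude

X = Q * Vp * dCdx_d * v_d / k_sys                # displacement amplitude, m (Eq. 4)
I_s = Vp * dCdx_s * w_r * X                      # sense current amplitude, A (Eq. 1)
Y = w_r * Q * Vp**2 * dCdx_d * dCdx_s / k_sys    # transconductance |Y(jw_r)|, S (Eq. 5)

# Equation (5) is just Equation (4) substituted into Equation (1):
assert abs(I_s / v_d - Y) < 1e-9 * Y
```

The point of the chain is visible in the numbers: the Q-factor multiplies a sub-nanometer static deflection into a micron-scale motion, which the biased sense comb converts into a measurable motional current.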
Planar electrode or ground plane 29d (FIGS. 1A and 1B) can be grounded or set to a dc potential in order to minimize parasitic capacitive coupling between the drive, feedback and sense ports. An
additional function of this electrode is to suppress the excitation of undesired modes of the structure.
As noted, the motional current output from the resonator is electronically sensed by means of sense electrode 23. The motional current is applied to a transimpedance or transresistance amplifier 34,
where it is converted to a voltage v.sub.o. The voltage v.sub.o is fed back to the microresonator via feedback electrode 24. The drive voltage v.sub.d is applied to the resonator via drive electrode
22. The microresonator sums the drive voltage and the negative feedback signal, v.sub.fb =v.sub.o, closing the loop and reducing its own original Q. The Q of the microresonator is effectively
controlled by the gain of amplifier 34, which can be made voltage controllable through the voltage V.sub.Q.
The equivalent system block diagram for the architecture of FIG. 1A is shown in FIG. 2, where Y.sub.d and Y.sub.fb are the drive port-to-output and feedback port-to-output transfer functions, respectively. Using FIG.
2, and modelling the resonator m port to n port transfer functions Y.sub.mn as ##EQU5## where R.sub.xmn is the equivalent series resistance of the resonator from any port m to any port n, and ω.sub.0 is the natural resonance frequency. The
equivalent series resistance is discussed below in relation to FIG. 6. In the equations that follow, any port m or n may be d, s, or fb, corresponding to drive, sense, or feedback ports,
respectively. Direct analysis of FIG. 2 yields ##EQU6## where R.sub.amp is the value of the transresistance or transimpedance of amplifier 34 and where ##EQU7## is the controlled value of the
Q-factor. For large loop gain, the gain of Equation (7) reduces to (R.sub.xfb /R.sub.xd), which, as will be seen, is determined by the number of input and feedback fingers, and stays constant as Q is varied. The
Q can be changed, as noted, by adjusting the gain of amplifier 34 through the voltage V.sub.Q.
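The Q-lowering action of the loop can be illustrated numerically. The relation Q' = Q/(1 + A), with loop gain A = R.sub.amp /R.sub.xfb, is an assumed standard negative-feedback form consistent with the discussion of Equation (8); the resistance values are assumed examples:

```python
# Sketch of Q lowering via the feedback loop of FIG. 1A, assuming the standard
# negative-feedback relation Q' = Q/(1 + A) with loop gain A = R_amp/R_xfb.
# The resistances below are assumed example values, not the patent's.
def controlled_q(q0, r_amp, r_xfb):
    """Closed-loop Q for original Q q0, amplifier gain r_amp, feedback R_xfb."""
    return q0 / (1.0 + r_amp / r_xfb)

q0 = 50_000      # original (open-loop) resonator Q
r_xfb = 500e3    # feedback-to-sense series resistance, ohms (assumed)

# Sweeping the amplifier transresistance (set by V_Q) sweeps the bandwidth:
for r_amp in (0.0, 500e3, 5e6, 50e6):
    print(f"R_amp = {r_amp:11.0f} ohm -> Q' = {controlled_q(q0, r_amp, r_xfb):9.1f}")
```

Each decade of amplifier gain cuts the controlled Q by roughly a decade once the loop gain exceeds unity, which is what lets a single control voltage V.sub.Q set the filter bandwidth.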
A schematic of the Q-control architecture for a two-port resonator 40 is shown in FIG. 3. Although FIG. 3 shows a resonator with equal numbers of drive and sense fingers, the number of fingers need
not be equal. This resonator includes only a drive electrode 22 and a sense electrode 23. A summing amplifier 42 is provided to sum the input and feedback signals v.sub.d and v.sub.o, respectively,
which in FIG. 1A were summed by the multi-port resonator itself. The resistances R.sub.k and R.sub.f are variable. These resistances and R.sub.sum provide gain factors for signals applied to
amplifier 42. Thus, they directly determine the Q and gain of the Q-control circuit.
FIG. 4 shows the single-ended system block diagram equivalent of the circuit of FIG. 3. Referring to FIGS. 3 and 4, gain factor ##EQU8## and gain factor ##EQU9## Using FIG. 4, and modeling the
resonator with the transfer function ##EQU10## where R.sub.xd is the equivalent drive-to-sense series resistance of the resonator. Direct analysis yields ##EQU11## where ##EQU12## is the controlled value of the Q-factor. For large loop
gain, the gain of Equation (10) reduces to K/f, which in turn reduces to R.sub.f /R.sub.k. In addition, Q' can be varied by changing R.sub.f, with R.sub.k tracking this change.
The discussion of Q-control has so far concentrated on the lowering of Q through the application of a negative feedback voltage. By using a positive feedback, however, the Q of a resonator can be
raised. Positive feedback implementations of Q-control can be realized by merely changing the amplification of amplifier 34 from positive to negative on the architectures of FIGS. 1A and 3.
Alternatively, and more conveniently, positive feedback may be obtained by interchanging finger connections as shown in FIG. 5. Specifically, the connections to microresonator 20 of FIG. 1A are
reversed so sense electrode 23 becomes drive electrode 22' in the embodiment of FIG. 5. Similarly, drive electrode 22 of FIG. 1A becomes sense electrode 23', and the feedback electrode 24' is at the
input or drive side of microresonator 20 where the input voltage v.sub.1 is applied. The equation for controlled Q under positive feedback is: ##EQU13##
To design for a specific Q and voltage gain v.sub.o /v.sub.d for the architecture of FIG. 1A, the equivalent drive-to-sense and feedback-to-sense series resistances, R.sub.xd and R.sub.xfb, must be known. To calculate
these resistances, reference may be made to an equivalent circuit for a three-port micromechanical resonator. The equivalent circuit, as shown in FIG. 6, is biased and excited as in the circuit of
FIG. 1A. The equations for the circuit elements are as follows: ##EQU14## where n corresponds to the port of the resonator (drive, sense, or feedback) in question, C.sub.on is the overlap capacitance
across the motionless shuttle and electrode fingers, and the Φ's represent multiplication factors for the current-controlled current sources shown in the figure. Typical element values for high-Q (Q=
50,000) operation of a microresonator are f.sub.0 =20 kHz, C.sub.0 =15 fF, C.sub.x =0.3 fF, L.sub.x =100 kH, and R.sub.x =500 kΩ.
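These typical element values can be cross-checked against the series-RLC relations f.sub.0 = 1/(2π√(L.sub.x C.sub.x)) and Q = √(L.sub.x /C.sub.x)/R.sub.x. Since the quoted values are representative round numbers, the check lands in the right ballpark (tens of kHz, tens of thousands) rather than exactly at f.sub.0 = 20 kHz and Q = 50,000:

```python
# Consistency check of the typical series-RLC element values quoted above,
# using the standard series-resonator relations.
import math

Cx = 0.3e-15   # motional capacitance, 0.3 fF
Lx = 100e3     # motional inductance, 100 kH
Rx = 500e3     # motional resistance, 500 kOhm

f0 = 1.0 / (2 * math.pi * math.sqrt(Lx * Cx))   # series resonance frequency
Q = math.sqrt(Lx / Cx) / Rx                     # quality factor

print(f"f0 ~= {f0 / 1e3:.0f} kHz, Q ~= {Q:.0f}")
```

The enormous motional inductance (on the order of 10^5 H) and sub-femtofarad motional capacitance are exactly why such Q values are unreachable with on-chip electrical passives, and why the mechanical element is attractive.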
The equivalent drive-to-sense resistance of the microresonator may be calculated from the following equation: ##EQU15## Driving the equivalent circuit of FIG. 6 at the input port d and grounding the
other ports, the output motional current i.sub.s at resonance is: ##EQU16## Applying Equation (15) to (14), gives: ##EQU17## A similar analysis yields ##EQU18## To maximize the range of Q-control
afforded by a given amplifier 34, the loop gain of the circuit, A=(R.sub.amp /R.sub.xfb), must be variable over a wide range. Thus, R.sub.xfb must be minimized and Φ.sub.sfb maximized. Reduction
in R.sub.xfb can be achieved by increasing the number of feedback fingers, decreasing the gaps between these fingers, and increasing finger thickness. Φ.sub.sfb is increased with similar
modifications to the output fingers.
The number of input and feedback fingers also determines the gain of the Q-control circuit. Using Equations (17) and (18), the equation for gain at resonance is: ##EQU19## where N.sub.d and N.sub.fb
are the number of input and feedback fingers, respectively. The last equality assumes identical finger gaps and thicknesses for both ports. Thus, the gain is determined by resonator geometry and is
independent of variables which determine the controlled Q.
FIG. 3 presented a schematic of Q-control using a two-port microresonator, two amplifiers, and linear resistors. In order to implement variability of Q through voltage control, metal oxide
semiconductor (MOS) resistors can replace the linear resistors of FIG. 3. The value of resistance realized by an MOS resistor can be varied through variation of the gate voltage of such devices.
However, MOS resistors suffer from the drawback that they are less linear than their passive counterparts. In order to linearize MOS resistors, a balanced architecture must be used.
Such a balanced architecture is shown in FIG. 7, which illustrates Q-control using MOS resistors and a four-port microresonator 50. The microresonator 50 is similar in construction to microresonator
20 in that it includes movable and stationary, interdigitated fingers forming differential drive and sense electrodes 52 and 54, respectively. As in the embodiment of FIG. 1A, stationary electrode
fingers 55 are anchored to the overlayers 29b and 29c (see FIG. 1B) at the darkly shaded regions or anchor points 56. The movable fingers 57 are suspended above the ground plane by means of the
folded beam suspension arrangement 58.
Drive voltages v.sub.i(-) and v.sub.i(+) are applied to the drive electrodes. The output voltages v.sub.o(-) and v.sub.o(+) represent amplifications of the signals sensed by sense electrodes 54. Since the shuttle
and its fingers are electrically connected to the ground plane, they are at the same voltage, V.sub.p, as the ground plane.
The architecture of FIG. 7 also utilizes metal oxide semiconductor (MOS) resistors M.sub.Q1, M.sub.Q2, M.sub.K1, M.sub.K2, M.sub.sum1, and M.sub.sum2. Such resistors are normally nonlinear, unless
operated in a fully balanced architecture, such as that depicted in FIG. 7. Fully balanced operation minimizes the even ordered harmonics of the MOS resistor voltage-to-current response, thus greatly
reducing the total nonlinearity in such devices. In FIG. 7, MOS resistors M.sub.Q1 and M.sub.Q2 serve to feed back the output signal v.sub.o with the appropriate gain factor f=R.sub.sum /R.sub.Qn =(W
/L).sub.Qn /(W/L).sub.sumn, (see FIG. 4) where n is either 1 or 2, to the summing amplifier composed of balanced operational amplifier 62 and shunt-shunt MOS resistors M.sub.sum1 and M.sub.sum2. Note
that gain factor f is determined by a ratio of MOS W/L's, which are the width over length ratios, and thus can be accurately set to a 0.2% or better tolerance using integrated circuit processes. MOS
resistors M.sub.K1 and M.sub.K2 direct the input signal v.sub.i with the appropriate gain factor K=R.sub.sumn /R.sub.Kn =(W/L).sub.Kn /(W/L).sub.sumn to the summing amplifier to be summed with the
negative feedback signal from MOS resistors M.sub.Q1 and M.sub.Q2. This summation completes the feedback loop for Q-control as in the block diagram for the equivalent single-ended version given in
FIG. 3. The equations dictating Q-control for the balanced version of FIG. 7 are similar to those for FIG. 3, Equations (9) through (11), except for changes in the drive-to-sense resistance R.sub.xd
due to the balanced nature of the resonator, and can be easily obtained using an analysis similar to that of Equations (13) through (18).
The circuitry further includes a balanced transimpedance or transresistance amplifier 60, which may or may not be variable. As shown, it is voltage-controllable via V.sub.R.
For large loop gain, the gain in the scheme of FIG. 7 is determined by a ratio of MOS resistor gate width over gate length ratios (W/L)'s, specifically ##EQU20## wherein K=R.sub.sum /R.sub.k =(W/
L).sub.Kn /(W/L).sub.sumn and f=R.sub.sum /R.sub.Q =(W/L).sub.Qn /(W/L).sub.sumn. The gain of the stage in FIG. 7 stays constant with changing Q, since the channel resistances of M.sub.Q and M.sub.K
track with V.sub.Q.
Any Q may be realized using the embodiment discussed herein; thus, any bandpass biquad transfer function may be implemented. Since both the Q and gain of the stage of the embodiment of FIG. 7 depend
mainly on ratios of the MOS resistors, which can be made to tolerances as low as 0.2%, this scheme, as well as the other embodiments of the present invention, is quite suitable for bulk fabrication.
The initial high Q of microresonators allows for the fabrication of high-Q filters. In addition, the Q of the Q-control circuit and thus the bandwidth of a filter in which the circuit may be
incorporated, may be adjusted by changing the loop gain of the circuit. This can be achieved by merely changing a single voltage V.sub.Q which controls the value of the channel resistance realized
by, for example, resistors M.sub.Q1 and M.sub.Q2. This simple control of a filter bandwidth encourages adaptive circuit techniques for very precise control of filter characteristics.
As shown in FIG. 8, the Q-control scheme of the embodiment of FIG. 7 can be further simplified by using additional microresonator ports to sum the input and feedback signals, removing the requirement
for summing amplifier 62. In this scheme, only one transresistance amplifier 60 is required per two filter poles.
As shown in FIG. 8, microresonator 70 is a six-port resonator using one balanced transresistance amplifier 60. The drive voltages v.sub.i(+) and v.sub.i(-) are applied to drive electrodes 71 and 72
which, as in the other embodiments, comprise stationary and movable interdigitated fingers. The output signal from amplifier 60, voltages v.sub.0(+) and v.sub.0(-), is channeled directly back to
resonator 70 via feedback electrodes 73 and 74. The output at sense electrodes 75 and 76 is applied to the negative and positive inputs, respectively, of amplifier 60. Q is controlled by varying the
transresistance (transimpedance) of amplifier 60, which is controllable via the control voltage V.sub.Q.
By expanding Equation (8) using elements from above analyses resulting from the equivalent circuit of FIG. 6, it can be shown that the value of controlled Q is independent of the original Q. Doing
this, the controlled Q for the embodiment of FIG. 1A is: ##EQU21## where M.sub.EFF is an effective mass of the resonator (including support beams and folding truss), k.sub.sys is the system spring
constant, V.sub.p is the applied dc-bias, and (∂C/∂x).sub.fb and (∂C/∂x).sub.s are the change in capacitance per displacement of the microresonator's feedback and sense ports, respectively. Equation
(20) shows no dependence on the original Q, and thus, the Q-factor can be set irrespective, for example, of the ambient operating pressure.
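A numerical sketch of the independence claimed for Equation (20). The closed form below follows the verbal description of Equation (20); the relation R.sub.xfb = √(k.sub.sys M.sub.EFF)/(QV.sub.p.sup.2 (∂C/∂x).sub.fb (∂C/∂x).sub.s) used for the cross-check, Q' = Q/(1 + A), and all parameter values are assumptions for illustration:

```python
# Illustration of Equation (20) as described in the text:
#   Q' = sqrt(k_sys*M_EFF) / (R_amp * Vp^2 * dCdx_fb * dCdx_s),
# which contains no dependence on the original Q. The series-resistance model
# R_xfb = sqrt(k_sys*M_EFF)/(Q*Vp^2*dCdx_fb*dCdx_s) and Q' = Q/(1 + A) are
# assumed standard relations; all numbers are assumed examples.
import math

k_sys, m_eff = 0.6, 5.7e-11          # N/m, kg (assumed)
Vp, dCdx_fb, dCdx_s = 20.0, 4e-10, 4e-10   # V, F/m, F/m (assumed)
R_amp = 500e6                        # amplifier transresistance, ohms (assumed)

q_ctrl_closed_form = math.sqrt(k_sys * m_eff) / (R_amp * Vp**2 * dCdx_fb * dCdx_s)

# Cross-check against Q' = Q/(1 + R_amp/R_xfb) for two different original Qs;
# since R_xfb itself scales as 1/Q, the original Q cancels at large loop gain.
for q0 in (50_000, 10_000):
    r_xfb = math.sqrt(k_sys * m_eff) / (q0 * Vp**2 * dCdx_fb * dCdx_s)
    q_ctrl = q0 / (1.0 + R_amp / r_xfb)
    print(q0, round(q_ctrl, 1))   # nearly the same controlled Q for both q0
```

The cancellation is the practical payoff: the closed-loop Q depends only on the electronics and the dc bias, not on the resonator's original Q, and hence not on, for example, the ambient operating pressure.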
A similar expansion applied to the architecture of FIG. 3 yields ##EQU22## which is also independent of the original Q.
As discussed, by using positive feedback, the Q of a resonator can be raised. Positive feedback implementations of Q-control can be realized by merely changing the transresistance amplification
R.sub.amp, from positive to negative, in the embodiments of FIGS. 7 and 8. Alternatively, positive feedback can also be achieved by keeping the R.sub.amp of amplifier 60 positive and interchanging
(crossing) any two parallel leads in the feedback loop. For the one amplifier Q-control version (FIG. 8), the equation for controlled Q under positive feedback is ##EQU23## where R.sub.xfb is the equivalent series resistance from the feedback
port to the sense port. For positive feedback, the controlled Q is dependent upon the original Q.
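Positive-feedback Q enhancement can be sketched the same way. The form Q' = Q/(1 − A), with A = R.sub.amp /R.sub.xfb, is an assumed illustration of Equation (22); it shows both the retained dependence on the original Q and the oscillation limit as the loop gain approaches unity:

```python
# Positive-feedback Q enhancement (a sketch; the specific form Q' = Q/(1 - A)
# with A = R_amp/R_xfb is our assumption for illustrating Equation (22)).
def q_positive(q0, r_amp, r_xfb):
    a = r_amp / r_xfb
    if a >= 1.0:
        # At unity loop gain the loss is fully cancelled and the loop oscillates.
        raise ValueError("loop gain >= 1: circuit oscillates")
    return q0 / (1.0 - a)

q0, r_xfb = 50_000, 500e3   # original Q and feedback resistance (assumed)
for r_amp in (0.0, 250e3, 450e3, 495e3):
    print(f"A = {r_amp / r_xfb:.2f} -> Q' = {q_positive(q0, r_amp, r_xfb):,.0f}")
```

Unlike the negative-feedback case, Q' here scales directly with q0, which matches the text's remark that the positive-feedback controlled Q depends on the original Q.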
The Q-controlled microresonator architectures described above, the embodiments of FIGS. 1, 3, 7 and 8, can implement any arbitrary bandpass biquad transfer function. Thus, they can be used as biquad
stages in various filter architectures such as follow-the-leader feedback (FLF), coupled (or cascaded) biquad, or other multiple-loop feedback techniques. FLF designs are quite desirable, since they
have low element sensitivities, comparable or superior to those of leapfrog designs.
A FLF version of a filter, represented generally by reference numeral 75, is shown in FIG. 9, and the equivalent system block diagram for a general FLF filter design is shown in FIG. 10A. In filter
75, the bandpass biquad stages 80, 81 and 82 all have identical center frequency and Q (but differing gains K.sub.i). They may be implemented using any of the Q-control microresonator architectures
of FIGS. 1, 3, 7, or 8.
Filter 75 includes MOS transistors M.sub.KA, M.sub.KB, M.sub.FBA, M.sub.FBB, M.sub.F3A, M.sub.F2A, M.sub.F2B, M.sub.F3B, M.sub.B1A, M.sub.B2A, M.sub.B3A, M.sub.B1B, M.sub.B2B, M.sub.B3B, M.sub.FFA,
and M.sub.FFB connected to implement the feedback in the total system. The transistors M.sub.Fnx, where n can be 2 or 3 and x can be A or B in correspondence with FIG. 9, are used as variable MOS
resistors to realize the feedback gains F.sub.n depicted in FIG. 10A. The MOS resistors are directed into operational amplifier 76, which is connected as a summing amplifier with MOS resistors
M.sub.FBA and M.sub.FBB. In this configuration, the feedback gains are given by F.sub.n =(W/L).sub.FBx /(W/L).sub.Fnx, where x can be either A or B and n can be either 2 or 3 in correspondence with
FIG. 9. The M.sub.Kx are also used as MOS resistors going into the amplifier 76. They realize the gain factor K in FIG. 10A via the equation K=(W/L).sub.FBx /(W/L).sub.Kx, where again, x can be
either A or B in correspondence with FIG. 9.
The transistors M.sub.Bnx, where n can be 1, 2 or 3 and x can be A or B in correspondence with FIG. 9, are used as variable MOS resistors to realize the feedforward gains B.sub.n depicted in FIG.
10A. The MOS resistors are directed into operational amplifier 72, which is connected as a summing amplifier with MOS resistors M.sub.FFA and M.sub.FFB. In this configuration, the feedforward gains
are given by B.sub.n =(W/L).sub.FFx /(W/L).sub.Bnx, where x can be either A or B and n can be 1, 2, or 3, in correspondence with FIG. 9. Both the center frequency and bandwidth of the filter are
variable via the single voltage V.sub.Q.
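The ratio relationships above can be checked numerically. The sketch below computes the feedback and input gains from hypothetical W/L values; the patent fixes only the ratios F.sub.n =(W/L).sub.FBx /(W/L).sub.Fnx and K=(W/L).sub.FBx /(W/L).sub.Kx, not the absolute device sizes used here.

```python
# Sketch: MOS-resistor ratio gains for the FLF filter of FIG. 9.
# The W/L values below are hypothetical; only the ratio relationships
# F_n = (W/L)_FB / (W/L)_Fn and K = (W/L)_FB / (W/L)_K come from the text.

def mos_gain(wl_ref, wl_branch):
    """Gain realized by a matched pair of MOS resistors: a ratio of W/L's."""
    return wl_ref / wl_branch

wl_fb = 10.0                 # summing-amplifier feedback device W/L (assumed)
wl_f = {2: 25.0, 3: 40.0}    # feedback branch devices W/L (assumed)
wl_k = 5.0                   # input device W/L (assumed)

F = {n: mos_gain(wl_fb, wl) for n, wl in wl_f.items()}  # feedback gains F_2, F_3
K = mos_gain(wl_fb, wl_k)                               # input gain K

print(F)  # {2: 0.4, 3: 0.25}
print(K)  # 2.0
```

Because each gain is a ratio of like devices, it tracks well over process and temperature, which is the reason the text compares this precision to capacitor ratios in switched-capacitor filters.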
Filter 75 uses its three identical microresonator biquads 80, 81 and 82 to realize a sixth order bandpass filter with equiripple passband and stopband. Loss pole locations are determined by the loop
gains of balanced feedback loops 84a and 84b, and 85a and 85b, while stopband zeros are determined by the feedforward coefficients realized by the M.sub.FFx 's and M.sub.Bnx 's. The bandpass stages
80, 81 and 82 determine the center frequency and Q-factor of the filter.
In filter 75, the feedback gains -F.sub.2, -F.sub.3 and -F.sub.n (FIG. 10A) are implemented by ratios of MOS W/L's, as are the biquad gains K.sub.i. Since the Q of the biquads 80, 81 and 82 is
controllable via the voltage V.sub.Q (FIGS. 1, 3, 7 or 8), the bandwidth of the whole filter is likewise controllable via this single voltage.
Pole/zero precision for the filter should be comparable to that for switched-capacitor circuits, since poles and zeros can be made dependent on microresonator matching and ratios of the MOS resistors
W/L's, i.e., (W/L).sub.2 /(W/L).sub.1, in much the same way capacitor ratios determine the characteristics of switched-capacitor filters. Fabrication of such filters may be achieved through a
combination of standard CMOS integrated circuit and micromachining technologies, such as the recent Modular Integration of CMOS and Sensors (MICS) process.
FIG. 11 shows simulated responses, v.sub.o /v.sub.i in decibels (dB), using SPICE for filter 75, for different values of V.sub.Q, V.sub.Q1 and V.sub.Q2, demonstrating bandwidth control and the
potential for high Q. The filter Q for the solid plot is about 250, and the bandwidth is less than 100 Hz.
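A rough consistency check on these figures uses the standard relation between filter Q, center frequency, and 3-dB bandwidth, Q = f.sub.0 /BW. The center frequency below is an assumed value for illustration; the text gives only Q ≈ 250 and a bandwidth under 100 Hz.

```python
# Sketch: Q = f0 / BW for a bandpass filter. f0 is assumed for illustration.

def filter_q(f0_hz, bw_hz):
    """Quality factor of a bandpass response with center f0 and 3-dB width BW."""
    return f0_hz / bw_hz

def bandwidth(f0_hz, q):
    """3-dB bandwidth implied by a center frequency and Q."""
    return f0_hz / q

f0 = 20e3                      # assumed center frequency, Hz
print(filter_q(f0, 80.0))      # Q for an 80 Hz bandwidth -> 250.0
print(bandwidth(f0, 250.0))    # bandwidth at Q = 250 -> 80.0 Hz
```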
The dynamic range of the high-Q filter 75 has been calculated to be much higher than that of its high-Q active RC counterparts, i.e., switched-capacitor, MOSFET-C and g.sub.m -C filters. Such active
RC filters, which are designed via operational simulation of LC ladders, have reduced dynamic range when implementing high-Q filters, because the noise per stage is amplified by a factor
approximately equal to the filter Q. This comes about because the large currents and voltages present in high-Q LC circuits are represented by integrator outputs in the active RC equivalent; thus,
attenuation must be provided at appropriate nodes to prevent saturation. Q-controlled microresonator filters do not share this drawback, because the high-Q elements, the microresonators, are
effectively passive transconductance devices.
The noise block diagram of FIG. 10B, wherein the block 100 schematically represents a two-port resonator, such as in FIG. 3, can be used to calculate the output noise per Q-control stage.
Straightforward analysis yields ##EQU24## which at resonance, reduces to ##EQU25## where R.sub.x is the equivalent drive-to-sense resistance of resonator 100. Equation (24) shows that noise in the
high-Q filter is not amplified by filter Q.
Using Equation (24), the dynamic range of filter 75 (FIG. 9), having a bandwidth of 56 kHz and a 5V supply, is calculated to be in excess of 90 dB.
The amplifiers 34 and 60 represent single-ended and balanced versions of transimpedance or transresistance amplifiers of any general design. The design could be as simple as shunt-shunt feedback
applied to an operational amplifier or commercial designs of transimpedance amplifiers used in optical receivers.
If it is desired to obtain large loop gains for the Q-control architectures described above, amplifiers 34 or 60 should be designed for maximum gain bandwidth product. One such design which utilizes
CMOS transistors, but can use any technology, be it bipolar, BiCMOS, etc., is shown in FIG. 18. (MOS technology has the advantage that the input noise current into the gate of a transistor is
minuscule at lower frequencies.) In this design, which is fully balanced, transistors M1 through M9, as shown in FIG. 18, comprise a current feedback pair input stage, which has the advantages of low
input noise current and large gain bandwidth product. Transistors M10 through M25 comprise a video amplifier second stage, featuring a current feedback pair architecture for high bandwidth. The
bandwidth of this amplifier is large because all nodes in its signal path are low impedance nodes. Finally, transistors M26 through M29 make up a common-mode feedback loop, which minimizes the
common-mode gain of the amplifier and forces the output dc level to the "Balancing Level" voltage. All transistors in FIG. 18 operate as MOS transistors in the saturation region, except for M.sub.11,
M.sub.12, M.sub.13, and M.sub.14, which operate as MOS resistors for the current feedback pairs in which they operate. The gain of the amplifier is variable through voltages V.sub.QA and V.sub.QB, or
V.sub.Q if these nodes are tied as shown by the dashed connections.
Using the design of FIG. 18, gains of over 100 mega-ohms with bandwidths over 100 MHz can be attained, depending upon the technology being used. A single-ended version of the amplifier follows
readily from FIG. 18.
Because of squeeze-film damping, Couette flow, or similar fluid-based damping mechanisms, the quality factor of a microresonator is strongly dependent upon the ambient pressure in which it operates.
In addition, the intrinsic Q of a microresonator is a function of the anchor and is also temperature dependent. For lateral electrostatic-comb driven resonators, the Q ranges from under 50 in
atmosphere to over 50,000 in 10 mTorr vacuum. Since the operational pressure for a microresonator is not easily controlled, a Q-control method independent of the original Q of the resonator is desirable.
The controlled Q in the resonators of the present invention can be shown to be independent of the original resonator Q, and thus, of ambient pressure, using the equivalent series resistance discussed
above. Inserting Equation (18) in (8) and assuming sufficient loop gain ##EQU26## yields ##EQU27## where the equation for the first mode resonance frequency ω.sub.o =√(k.sub.sys /M.sub.eff) has been
inserted. In the above equations, M.sub.eff is an effective mass of the resonator, including the support beams and folding truss. Note that the controlled quality factor Q' depends only upon the
transresistance amplification R.sub.amp, the bias voltage V.sub.p, and microresonator geometry. It has no dependence on the original Q provided there is sufficient loop gain.
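The resonance-frequency relation quoted above can be illustrated numerically. The spring constant and effective mass below are assumptions of the order typical for comb-driven polysilicon microresonators, not values from the text.

```python
import math

# Sketch: first-mode resonance frequency w0 = sqrt(k_sys / M_eff), as given
# in the text. Parameter values are hypothetical but of a realistic order
# for comb-driven polysilicon microresonators.

def resonance_frequency_hz(k_sys, m_eff):
    """f0 in Hz from system spring constant (N/m) and effective mass (kg)."""
    return math.sqrt(k_sys / m_eff) / (2.0 * math.pi)

k_sys = 0.5    # N/m, assumed
m_eff = 5e-11  # kg, assumed (tens of nanograms)
print(resonance_frequency_hz(k_sys, m_eff))  # ~15.9 kHz
```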
Initial experimental verification of the feasibility of the filters of the present invention has been achieved by demonstrating the Q-control techniques described above. FIG. 12 shows measured
microresonator transconductance spectra under different loop gains, varied by changing the value of the transresistance of amplifier 34 in the circuit of FIG. 1A. As shown, the measured values of Q
are 53,000 for R.sub.amp =1 mega-ohm and 18,000 for R.sub.amp =3.3 mega-ohms. The measurements were made under vacuum at a pressure of 10 mTorr.
FIG. 13 presents experimental verification that the value of the controlled Q is invariant under changing ambient pressures, being dependent only on the Q-controlling feedback set by transresistance
(transimpedance) amplifier 34 (FIG. 1A). Without Q-control, the original Q at 8 mTorr is 53000 and that at 50 mTorr is 84000. With Q-control, the Q for both cases is 18000.
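One way to see why the controlled Q converges for different ambient pressures is a simplified damping-addition model, 1/Q' = 1/Q.sub.0 + 1/Q.sub.fb, where Q.sub.fb is the damping contributed by the feedback alone. This is an assumed illustrative model, not the patent's exact expressions, but it reproduces the qualitative behavior: once feedback damping dominates, the controlled Q barely depends on the intrinsic Q.

```python
# Sketch (assumed model): feedback adds damping, so 1/Q' = 1/Q0 + 1/Q_fb,
# with Q_fb set by the transresistance amplifier alone. As feedback damping
# grows, Q' becomes insensitive to the pressure-dependent intrinsic Q0.

def controlled_q(q0, q_fb):
    """Controlled Q from intrinsic Q0 and feedback-only Q_fb (additive damping)."""
    return 1.0 / (1.0 / q0 + 1.0 / q_fb)

# Two intrinsic Q's at different ambient pressures, as measured in FIG. 13:
q_low_pressure, q_high_pressure = 53000.0, 84000.0

# Weak feedback: the two controlled Q's still differ noticeably.
weak = (controlled_q(q_low_pressure, 40000.0),
        controlled_q(q_high_pressure, 40000.0))

# Strong feedback (Q_fb far below either Q0): the controlled Q's converge.
strong = (controlled_q(q_low_pressure, 2000.0),
          controlled_q(q_high_pressure, 2000.0))

print(weak)    # differ by roughly 19%
print(strong)  # differ by under 2%
```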
The present invention also contemplates different methods for voltage-controlled tuning of the resonance frequency of a microresonator, and thus, of a filter in which it may be used. One method
involves the introduction of some nonlinearity into the voltage-to-force transfer function of the microresonator, which gives rise to a bias dependence of the resonance frequency. For an
electrostatic-comb driven lateral micromechanical resonator, the most convenient way to do this is to use sloped drive fingers, as shown in FIGS. 14A and 14B.
Specifically, sloped drive fingers 92 of microresonator 90 form part of the interdigitated fingers (comb) of the frequency-pulling electrode pair 91a. As shown, drive electrodes 91 and 93 also
include straight, movable electrode fingers 94 and straight, fixed electrode fingers 95. The sense electrodes are represented by reference numeral 96, and as discussed above, include fixed and
movable fingers.
As shown in FIG. 14B, sloped drive fingers 92 may be sloped at an angle θ. A distance d.sub.0 may separate sloped fingers 92 and straight fingers 94. An overlap L.sub.0 may exist between sloped
fingers 92 and straight fingers 94. By way of example, θ can be about 15°, d.sub.o about 2 μm, and L.sub.0 about 20 μm. The straight movable fingers 94 are displaced in the x direction when the
resonator is driven by the drive electrodes 91 and 93. The straight fingers 95 of drive fingers 91 and 93 can also be sloped to enhance the frequency-pulling effect. The sloped drive fingers
introduce a nonlinear voltage-to-force transfer function, which in turn results in a bias dependent resonance frequency, allowing center frequency tunability. Sloped drive fingers cause the
capacitance variation with displacement ∂C/∂x to be nonlinear, which makes the voltage-to-force transfer function nonlinear. The force versus voltage transfer function is given in phasor form by: ##
EQU28## where N.sub.d is the number of shuttle or movable fingers surrounded by straight drive, fixed fingers, N.sub.p is the number of shuttle fingers surrounded by sloped fingers, and (∂C/
∂x).sub.lin corresponds to the straight drive fingers. Using Equation (26) to derive the equation for ##EQU29## and then extracting the resonance frequency, the following is obtained: ##EQU30## where
##EQU31## Equations (27) and (28) indicate that resonator resonance frequency can be pulled by simply varying the bias voltage V.sub.p.
Sloped drive fingers are not the only way to introduce a nonlinearity into the voltage-to-force transfer function. A third polylayer as shown in FIGS. 15A and 15B, would also work, as would other
geometrical configurations.
Here, microresonator 100 includes sense electrodes 101 and differential drive electrodes 102. The fixed fingers 103 of one electrode pair 110 are triangular in shape and include a third polylayer 107
wherein a first polylayer 109 forms a shuttle ground plane 105a and an electrode ground plane 105b, and a second polylayer 108 forms the movable fingers 104. As shown, fingers 104 (second polylayer
108) are disposed between third polylayer 107 and electrode ground plane 105b.
The third polylayer 107 and electrode ground plane 105b introduce a non-linear variation of the voltage-to-force transfer function of the resonator, i.e., introduces a nonlinear capacitance versus
displacement transfer function, allowing for resonance frequency pulling via variation of the applied voltage V.sub.Δf. The first polylayer 109 forming electrode ground plane 105b matches the third
polylayer 107 under the triangular-areas to balance vertically-directed electrostatic forces, preventing the possible pull-in of the suspended or movable fingers 104.
Another method for tuning the center frequency involves pulling the "springs" (beams) of a microresonator 110, as shown in FIG. 16A. The tension in the suspending springs is varied by
electrostatically pulling on the truss support, where the supporting beams 114a-114d and 115a-115d fold. The pulling force is applied via voltage source (V.sub.Δf) which is different from bias
voltage V.sub.p and applied to spring-pulling electrodes 116 and 118 located on opposite sides of folded beam arrangement 112.
Initial analysis indicates that for a parallel-plate electrostatic pull with a gap g.sub.o =0.5 μm between the electrode 116 or 118 and the spring-folding truss 119 and capacitor area of 400
μm.sup.2, a force of 17.7 μN is generated for an applied pulling voltage of 50 volts (V.sub.Δf) corresponding to a 1% change in resonance frequency. Smaller gaps and larger capacitor area, of course,
will result in much larger frequency shifts, as large as 10%. FIG. 16B shows a plot of resonance frequency versus frequency-pulling voltage V.sub.ΔF for a fabricated device of the type shown in FIG.
16A. For V.sub.ΔF =40V, a 0.2% shift in frequency is measured.
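The quoted 17.7 μN can be reproduced from the standard parallel-plate electrostatic force, F = ε.sub.0 AV²/(2g²), using the dimensions given in the text (g.sub.o = 0.5 μm, A = 400 μm², V = 50 V).

```python
# Sketch: checking the quoted spring-pulling force with the standard
# parallel-plate electrostatic force F = eps0 * A * V^2 / (2 * g^2),
# using the gap, area, and voltage stated in the text.

EPS0 = 8.854e-12  # permittivity of free space, F/m

def parallel_plate_force(area_m2, gap_m, volts):
    """Attractive force between parallel plates at spacing gap_m."""
    return EPS0 * area_m2 * volts**2 / (2.0 * gap_m**2)

f = parallel_plate_force(400e-12, 0.5e-6, 50.0)
print(f)  # ~1.77e-5 N, i.e. the 17.7 uN quoted in the text
```

The inverse-square dependence on the gap is also why the text notes that smaller gaps yield much larger frequency shifts.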
The variation of filter characteristics with temperature is determined mainly by the dependence of resonator resonance frequency on temperature. In macroscopic crystal oscillator circuits, two
methods for minimizing the temperature dependence of the crystal resonance frequency are: (1) temperature compensation, where circuit techniques which pull the frequency of resonance are used to
compensate for frequency changes due to temperature variation; and (2) temperature control, where the temperature of the system is held at a certain point in an attempt to eliminate from the start
the mechanism for frequency variation.
Although temperature control can achieve better frequency stability than compensation, the former has been less frequently used due to the following drawbacks: (1) a large volume is required for
thermal isolation; (2) a warm-up time for the oven is needed; and (3) the power consumption, particularly in cold environments, is large (up to 10 watts (W)).
Thus, temperature compensation has proven to be the predominant technique for achieving temperature stable oscillators in the macroscopic world.
For microresonators, however, there is a strong potential for reversing the situation. Micro-miniaturization can eliminate many of the drawbacks noted above. In particular, microminiaturization
offers, of course, smaller volume, and this combined with the potential for using a vacuum shell and/or special micromachining processing techniques for thermal isolation, solves all of the above
problems, since orders of magnitude less warm-up time and power consumption are required to stabilize the temperature of micron-sized structures.
Thus, for a micro-oven control, the resonance frequency of a micromechanical resonator may be stabilized by using heating and sensing resistors in a feedback loop to maintain a constant temperature.
Such a scheme is depicted in FIG. 17A.
In this embodiment, the voltage V.sub.th is initially high and causes the amplifier 121 to supply current to the heating resistors 122. As the temperature rises, the resistance of thermistors 123,
which may be polysilicon resistors, decreases, causing V.sub.th to rise to the optimum value V.sub.ref, where the feedback loop, represented by connection 124, attempts to stabilize V.sub.th. The
temperature of the system is, thus, set by V.sub.ref, and this temperature may be chosen at a point in the fractional frequency change versus temperature curve where the slope is zero, and the
temperature exceeds room temperature.
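A minimal sketch of the feedback loop described above, using a first-order thermal model with entirely hypothetical parameters, shows the platform temperature settling just below the setpoint fixed by V.sub.ref.

```python
# Sketch: first-order thermal model of the micro-oven loop of FIG. 17A.
# All parameters (thermal resistance/capacitance, loop gain, setpoint) are
# hypothetical; the point is only that proportional feedback drives the
# platform temperature toward the setpoint defined by V_ref.

def simulate_oven(t_set, t_ambient, gain, r_th, c_th, dt, steps):
    """Proportional heater control of platform temperature; returns final T (C)."""
    temp = t_ambient
    for _ in range(steps):
        power = max(0.0, gain * (t_set - temp))  # heater drive, W (no cooling)
        # Explicit Euler step of dT/dt = P/C - (T - T_amb)/(R*C)
        temp += dt * (power / c_th - (temp - t_ambient) / (r_th * c_th))
    return temp

final = simulate_oven(t_set=300.0, t_ambient=25.0, gain=0.01,
                      r_th=1e5, c_th=1e-6, dt=1e-4, steps=20000)
print(final)  # settles just below the 300-degree setpoint
```

With proportional control alone there is a small steady-state droop set by the loop gain and thermal leak; the low thermal loss of a micromachined platform is what keeps the required heater power small.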
The power consumption required to maintain the specified temperature is determined by the thermal loss in the system, which should be minimized to minimize the power requirement. Herein lies the main
advantage of miniaturized resonators, since it is in the reduction of thermal loss where microminiaturization proves most rewarding.
In the embodiment of FIG. 17A, microresonator 120, heating resistors 122, and thermistors 123 are fabricated on a microplatform 125, which is connected to a substrate (not shown) by only thin
supporting beams 126. Designs where the filter circuitry and micro-oven control circuits are fabricated on the microplatform are possible as well. Such a microplatform for thermal isolation purposes
has been previously considered wherein bulk micromachining processes were used to achieve a silicon nitride microplatform. Experimental measurements found that the power required to maintain a platform temperature of 300° was small, with a warm-up time of only 3.3 msec. These figures are to be compared with up to 10 W and 15 to 30 minutes for macroscopic temperature-controlled quartz crystal oscillators. Evidently, several orders of magnitude improvement
in power dissipation and warm-up time can be achieved with microresonators. A scanning electron micrograph (SEM) of a resonator fabricated on top of a thermally-isolated microplatform is shown in
FIG. 17B.
Using additional ports on a micromechanical resonator, electrostatic feedback techniques which control the Q of the microresonator have been demonstrated. Such Q-control techniques can be applied to
passband smoothing of micromechanical filters and/or Q-controlled biquads in biquad filter architectures. The solid curves in FIGS. 19A and 19B show frequency versus amplitude responses for a fourth-order parallel microresonator filter as described in the above-identified application entitled "Microelectromechanical Signal Processors." FIG. 19A also shows the responses of the two resonators,
resonator 1 and resonator 2, which constitute the filter. Immediately after fabrication, and in a vacuum, the Q's of the resonators constituting the filter are large and unpredictable, resulting in a
filter frequency response similar to the one in FIG. 19A. By applying Q-control to each resonator, as described herein and in accordance with the present invention, the passband may be corrected to
be flat as shown in FIG. 19B.
FIG. 20 shows an implementation of such passband correction. In FIG. 20, two four-port resonators are represented by equivalent circuit diagrams 130, where the central structure depicts the shuttle
and supporting springs, and the vertical lines represent ports, and it is understood that this resonator circuit diagram can be generalized to any number of ports. In the scheme of FIG. 20, each
resonator has one drive port 136 and 137, two sense ports 132, 135 and 133, 138, and one feedback port 139 and 134. As in the normal parallel microresonator bandpass filter implementation, the drive
voltages v.sub.i(+) and v.sub.i(-) to each resonator are 180° out of phase. The sense currents are summed and then amplified to a voltage by amplifier 34, generating the output of the filter. The quality factor of each resonator is
controlled by negative feedback loops involving negative transimpedance (or transresistance) amplifiers 131, which amplify sense currents from ports 135 and 138, and feed them back to ports 134 and
139, as shown in FIG. 20. The Q-control implementation operates as discussed above. Using the implementation of FIG. 20, corrected bandpass filter responses as shown in FIG. 19B can be obtained.
Although Q-control has been discussed using multiport resonators, single-port resonator implementations are also possible. FIG. 21 shows a schematic of Q-control for a single-port resonator. Here,
single-port resonator 140 is driven at port 143. The motional current resulting from capacitive variation of port 143 flows through the resonator 140 and into node 144, and is 90° out of phase with the drive voltage at port 143. The current is
sensed directly from the resonator via capacitive amplifier 141. The lead to node 144 from resonator 140 is electrically connected to the resonator ground plane (not shown). As discussed, the ground
plane and the resonator shuttle are at the same voltage potential. Capacitive amplifier 141 has amplification factor C.sub.amp and provides an additional +90° phase shift, applying the output signal v.sub.0 to the summing
amplifier consisting of operational amplifier 42 and resistor R.sub.sum. Reverse-biased diode 142 is provided to bias node 144 to the dc voltage V.sub.p.
With these changes, the circuit of FIG. 21 then operates as the previous embodiments, with control of Q through variation of R.sub.X and R.sub.Q, which track each other.
The ability to control Q to the above precision also has implications beyond this. For example, using the Q-control architecture of FIG. 3, changes in pressure can be quantified by measuring the
feedback signal at the output of the summing amplifier, which adjusts to maintain constant Q under varying pressure. Such a Q-balanced resonator pressure sensor would have the advantage of automatic
limiting of the resonator amplitude, and thus, would have a wide sensing range.
The present invention has been described in terms of a number of different embodiments. The invention, however, is not limited to the embodiments depicted and described. Rather, the scope of the
invention is defined by the appended claims.
The accompanying drawings, which are incorporated in and constitute a part of the specification, schematically illustrate a preferred embodiment of the invention and, together with a general
description given above and the detailed description of the preferred embodiment given below, will serve to explain the principles of the invention.
FIG. 1A is a schematic representation of a Q-control scheme for a three-port electrostatic-comb driven microresonator.
FIG. 1B is a schematic cross-section along lines 1B--1B of FIG. 1A.
FIG. 2 is a system block diagram for the circuit of FIG. 1A.
FIG. 3 is a schematic representation of a Q-control scheme for a two-port microresonator.
FIG. 4 is a system block diagram for the circuit of FIG. 3.
FIG. 5 is a schematic representation of a scheme for raising the Q of a three-port microresonator.
FIG. 6 is an equivalent circuit diagram for a three-port microresonator biased and excited as shown in FIG. 1A.
FIG. 7 is a schematic representation of a balanced Q-control scheme for a four-port microresonator using two balanced amplifiers (one of them transimpedance) and metal oxide semiconductor (MOS) resistors.
FIG. 8 is a schematic representation of a balanced Q-control scheme for a six-port microresonator using one balanced transimpedance amplifier.
FIG. 9 is a schematic representation of a Q-controlled microresonator filter using a balanced FLF architecture.
FIG. 10A is a system block diagram for a general FLF filter.
FIG. 10B is a single-ended noise block diagram for the circuit of FIG. 3 or 6.
FIG. 11 is a graphical representation of simulated responses for the filter of FIG. 9.
FIG. 12 is a graphical representation of the measured transconductance spectra of the embodiment of FIG. 1A using different values of R.sub.amp and demonstrating control of the Q-factor through
control of R.sub.amp.
FIG. 13 is a graphical representation of the transconductance spectra for the microresonator of FIG. 1A subjected to Q-control with R.sub.amp =3.3 mega-ohms and with varying ambient pressure.
FIG. 14A is a schematic representation of a microresonator including sloped drive fingers, which allow resonance frequency-pulling.
FIG. 14B is an enlarged schematic representation of the relationship between the sloped and straight drive fingers.
FIG. 15A is a schematic representation of a microresonator including a third polylayer to introduce a nonlinear variation in the voltage-to-force transfer function of the resonator and thus allow resonance frequency-pulling.
FIG. 15B is a view along lines 15B--15B of FIG. 15A.
FIG. 16A is a schematic representation of a microresonator including spring-pulling electrodes for frequency tuning.
FIG. 16B is a graphical representation of resonance frequency versus frequency pulling voltage for the microresonator of FIG. 16A.
FIG. 17A is a schematic representation of feedback control circuitry for a micro-oven controlled resonator fabricated on a microplatform for thermal and mechanical isolation.
FIG. 17B is a scanning electron micrograph of a resonator fabricated on top of a thermally-isolated microplatform.
FIG. 18 is a circuit diagram of a high gain transresistance amplifier which may be used in the present invention.
FIGS. 19A and 19B are graphical representations of filter passband correction.
FIG. 20 is a circuit diagram showing the implementation of passband correction for a parallel microresonator filter.
FIG. 21 is a circuit diagram for Q control of a resonator structure with a single port.
Twelve Active Learning Strategies
Example 1
Example 1 Explanation
In order for students to learn effectively, they must make connections between what they already know (prior knowledge) and new content to which they're exposed. The opening of a lecture should
facilitate these connections by helping students exercise their prior knowledge of the day's subject matter. The following four slides illustrate strategies which stimulate students' thinking and
prepare them to learn.
One useful strategy is to open the lecture with a question. Present an "opening question" on a PowerPoint slide, give students a moment to think about their response, and then ask a few members of
the class for answers. This strategy is easy to initiate, takes very little time, works in small or large classes, and effectively focuses students' attention on the day's topic. It also provides the
instructor with useful feedback on what students know and don't know about the material being presented.
Example 2
Example 2 Explanation
"Think-Pair-Share" is an active learning strategy that engages students with material on an individual level, in pairs, and finally as a large group. It consists of three steps. First, the instructor
poses a prepared question and asks individuals to think (or write) about it quietly. Second, students pair up with someone sitting near them and share their responses verbally. Third, the lecturer
chooses a few pairs to briefly summarize their ideas for the benefit of the entire class.
When used at the beginning of a lecture, a Think-Pair-Share strategy can help students organize prior knowledge and brainstorm questions. When used later in the session, the strategy can help
students summarize what they're learning, apply it to novel situations, and integrate new information with what they already know. The strategy works well with groups of various sizes and can be
completed in as little as two or three minutes, making it an ideal active learning strategy for classes in which lecture is the primary instructional method.
Example 3
Example 3 Explanation
Focused listing is a strategy in which students recall what they know about a subject by creating a list of terms or ideas related to it. To begin, the instructor asks students to take out a sheet of
paper and begin generating a list based on a topic presented on a PowerPoint slide. Topics might relate to the day's assigned reading, to a previous day's lecture material, or to the subject of the
current session. Instructors often move around the room and look at students' lists as they write, briefly summarizing major trends or themes as a way of closing the exercise. Others ask students
randomly to share the contents of their lists before moving on with their lecture. In either case, focused listing need not take more than a few minutes. It's an effective way to get students to
actively engage the material, and it offers feedback that the instructor can use to tailor the subsequent presentation of material to students' needs.
Example 4
Example 4 Explanation
Like focused listing, brainstorming is an active learning strategy in which students are asked to recall what they know about a subject by generating terms and ideas related to it. In brainstorming,
however, students are encouraged to stretch what they know by forming creative connections between prior knowledge and new possibilities. To initiate the strategy, the instructor asks students, via a
PowerPoint slide, what they know about a topic. Students are instructed to begin with those things they know to be true and systematically work toward formulating surprising relationships they hadn't
considered before.
Brainstorming can work well at the beginning of a lecture to gain students' attention and prepare them to receive the day's material, or it can be used at the end of a lecture to summarize and help
students formulate connections between what they've just learned and the world outside the classroom. Like the previous strategies we've discussed, brainstorming can be adapted to large or small
classes and can be completed in as little as a minute.
Example 5
Example 5 Explanation
Most instructors set aside time for student questions when planning their lectures. In the heat of the moment, however, it's easy to forget to ask them. One of the advantages of PowerPoint is that
the instructor can plan breaks for student questions in advance. By inserting a slide that asks for questions, the instructor is reminded to step back from his material and interact with his
students. This is also an opportunity for students to catch their breath and reflect on the material. When brief question breaks or other active learning strategies are planned every fifteen minutes
throughout the lecture, students' attention is less likely to wander and they're more likely to understand and remember the material after the lecture is over.
Example 6
Example 6 Explanation
One way to gain students' attention and to remind yourself to stop for questions is to insert a blank slide into your presentation. Imagine a lecture hall. The instructor is discussing material,
moving through slides, and then the screen goes dark. Students are immediately transfixed. Did the machine break? What is the instructor going to do? At this point you have your students' full
attention. You can ask for questions and move on to the next part of your lecture.
Example 7
Example 7 Explanation
Think-Pair-Share and the other active learning strategies we've discussed can be used at transition points in the lecture. Employed in this way, these strategies give students an opportunity to think
about and work with material just presented before moving to new information. They also help the instructor gauge how well students have understood the content, perhaps shaping what the instructor
discusses during the remainder of the period.
Example 8
Example 8 Explanation
The note check is a strategy in which the instructor asks students to partner with someone near by and compare their notes, focusing on summarizing key information and locating misconceptions.
Students can also generate questions or solve a problem posed by the instructor. The exercise can be completed in as little as two or three minutes.
Some instructors find this strategy problematic because they assume that students will simply not take notes, relying instead on their peers to do the work for them. It's important to remember that
students are not giving their notes to one another in this exercise, but working together to fill gaps in their collective understanding of the information. In this way, instructors can help students
learn good note taking skills, as well as monitor whether or not students are able to identify the key ideas in the day's material.
Example 9
Example 9 Explanation
Question and answer pairs is an exercise in which teams of students practice asking and answering challenging questions. To begin, the instructor asks students to partner with someone nearby. Each
student takes a minute to formulate one question based on the information presented in the lecture or course readings. Student A begins by posing her question for student B to answer. Then the roles
are reversed, with student B becoming the questioner. The instructor may choose to ask for a sampling of student questions, either verbally or by collecting them at the end of the period.
Particularly good questions can be highlighted in subsequent lectures or used on practice examinations. The strategy is particularly useful for teaching students how to frame good questions. It can
also be used to encourage students to prepare for class if the instructor asks students to formulate questions based on their reading.
Example 10
Example 10 Explanation
In this strategy, the instructor pauses and asks students to write in response to a question presented on a PowerPoint slide. The strategy can be used at any point in a lecture, but it's particularly
useful at the end as a way of encouraging students to summarize the day's content. The minute paper forces students to put information in their own words, helping them internalize it and identify
gaps in their understanding.
When collected at the end of the period, the minute paper can serve as a classroom assessment technique to help instructors gauge how well students are learning the material, what they understand,
and what the instructor needs to spend more time on.
Example 11
Example 11 Explanation
Most instructors end their lectures by asking for questions. To encourage students to think deeply about the material before they leave the room, create a PowerPoint slide which asks them to come up
with a final question. The instructor can choose students randomly and answer their questions in the time remaining. If collected in writing, the questions can also serve as a classroom assessment
technique to help instructors judge how well their students are learning.
Example 12
Example 12 Explanation
In the spirit of active learning, we have a final question for you. Which of the strategies we've discussed in the tutorial would you like to try in your own classes?
Download Example Slides
Please feel free to download a PowerPoint presentation of these 12 slides (ppt).
Mplus Discussion >> Construct validity with CFA-SEM
Anonymous posted on Thursday, October 07, 2004 - 5:21 am
I want to check the construct validity of an expected two-factor model (one factor with 7 observed variables and one factor with 4 variables). I treat the 11 variables as categorical. Making a CFA
with Analysis: Type=General and Output: Standardized, I get a CFI of 0.93. How is it possible to see whether, or which, items should be taken out of the model to get a better CFI? How is it possible
to see that maybe some items are highly correlated with the items of the other factor? How is it possible to compute residual covariances? Some residual variances are quite high. Which is the best
estimator for such a check of construct validity? If I look at the model in an EFA, I can see in a varimax rotation that some items load high on the other factor. But in an EFA there is no CFI. Do I
have to change between EFA and CFA to check the construct validity? Thank you.
Linda K. Muthen posted on Tuesday, October 12, 2004 - 4:43 pm
The way to see which items may not behave the way you expect them to is to do an EFA as you have done. You can also do an EFA in a CFA framework where you can obtain standard errors, other fit
measures like CFI, and modification indices.
Anonymous posted on Wednesday, October 13, 2004 - 12:27 am
Thank you very much. But what does "EFA in a CFA framework" mean? If I make a CFA with for example: Type is General, and Model: f1 by y1 y2 etc. and Output: Standardized. I get standard errors and
see significant unstandardized loadings. The standardized loadings nearly match those estimated (unrotated) in an EFA. I can also see RMSEA and SRMR in this CFA. But this does not seem to be
what you mean by "EFA in a CFA framework". The other thing is: which "Estimator" is a "good" one for this check of the construct validity?
bmuthen posted on Wednesday, October 13, 2004 - 4:53 pm
For EFA in a CFA framework, please see the handout for Day 1 of our short courses - see the Mplus web site under Web Training and Handouts.
Finding an Automated Algorithm to Solve Hess' Law
Last week in my chemistry class, I was exposed to Hess’ Law and while doing some example problems in class, I immediately wondered if there was some way to apply linear algebra to automagically solve
the equations. I asked my professor if he knew if linear algebra could be applied to solve them, but he said he was unsure since he’s actually not a math person.
I really hate throwing things like that out in class where it doesn’t really apply to what’s being taught (you know, those jerks that ask these stupid wiseguy questions in class just to look like
they know more than everyone else), but I just had to get some affirmation if it was possible.
During the weekend I tried to see if I can do it with matrix operations, but I eventually realized that the linear algebra I know is too limited to find a solution. I’ve completely mapped out what
the problem looks like, I know how it can be solved via human guess and check, but I really feel there’s a set of consistent mathematical operations that would just give me what I need (kind of like
Gaussian elimination to solve linear systems of equations).
If anyone knows the algorithm, I will give $100 USD to the first person who does. Note that I’ve also posted this on LinkedIn, meaning whoever is the first, gets it. The algorithm needs to be able to
solve all matrices that can be assembled from problems found in Hess’ Law without human intervention (other than inputting initial and final values of the problem). What I’m looking for is an
automatic generation of the intermediate Xn values that are required to multiply each row to get the final values.
Edit 1: Piyush Pant discovered that Wolfram Alpha can be used to solve these problems. However, it does not give the algorithm.
Here is an actual application of this possible algorithm:
Solution: http://www95.wolframalpha.com/input/?i=Solve+{w%2Cx}.{{1%2C3%2C-2%2C-3%2C0}%2C+{0%2C3%2C-2%2C-2%2C1}}%3D{-1%2C0%2C0%2C1%2C1}
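The Wolfram Alpha query above encodes the system {w,x}·A = b, with A = {{1,3,-2,-3,0}, {0,3,-2,-2,1}} and b = {-1,0,0,1,1}. Since there are more equations than unknowns, least squares is the natural "automated algorithm" for this. A pure-Python sketch via the normal equations — it assumes exactly two given reactions, as in this example, so the Gram matrix is 2×2:

```python
def solve_hess(rows, target):
    """Least-squares weights (w, x) with w*rows[0] + x*rows[1] = target,
    solved via the 2x2 normal equations (two given reactions assumed)."""
    r1, r2 = rows
    dot = lambda a, b: sum(p * q for p, q in zip(a, b))
    # Gram matrix entries and right-hand side of the normal equations
    g11, g12, g22 = dot(r1, r1), dot(r1, r2), dot(r2, r2)
    b1, b2 = dot(r1, target), dot(r2, target)
    det = g11 * g22 - g12 * g12
    # Cramer's rule on the 2x2 system
    w = (b1 * g22 - g12 * b2) / det
    x = (g11 * b2 - g12 * b1) / det
    return w, x

# The matrices from the Wolfram Alpha link above:
rows = [[1, 3, -2, -3, 0], [0, 3, -2, -2, 1]]
target = [-1, 0, 0, 1, 1]
print(solve_hess(rows, target))  # -> (-1.0, 1.0)
```

The weights (-1, 1) reproduce the target equation exactly, which is easy to verify component by component.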
Pythagorean Circles Activity - Explanation
The purpose of this activity was to have the students take the smaller steps necessary to solve a problem which may, at first, seem too difficult. The original problem posed was as follows:
The diagonals of the rectangle OABC intersect at a point P. The point P lies on circle 1, O is the center of circles 1, 2 and 3, and AO = 3.
1. Find the area of the annulus between circle 1 and circle 2.
2. Find the area of the annulus between circle 2 and circle 3.
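For the annulus parts, the only formula the students need is the difference of two disc areas, A = π(R² − r²). A small sketch — the radii below are hypothetical, since the actual values depend on the activity's figure, which is not reproduced here:

```python
import math

def annulus_area(R, r):
    """Area between two concentric circles of radii R >= r."""
    return math.pi * (R * R - r * r)

# Hypothetical radii, for illustration only:
print(annulus_area(5.0, 3.0))  # -> 16*pi, about 50.27
```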
Breaking the activity into smaller parts allowed the students to work at their own pace, with hints given to the students who were having trouble. We also found that by doing the activity over
two days of class the students were more involved the second time, possibly more invested by having already put time into their work.
In parts II.A and II.B we left blanks for the area and the radius of a circle. These parts were explained before they began that section.
Resources: handout.
Post a reply
hi {7/3}
At points where the function is 'well-behaved', yes. eg. For your example, everywhere except x = 2.
At this point, it may still be ok. It is continuous there, so it passes that hurdle. You then need to consider whether there is a left limit for the chord gradient and a right limit, and whether they
are the same.
So, whilst f(x) = 2x, the gradient function is 2 for all x, so the left limit, as x tends to 2, is 2.
Whilst f(x) = x^2, the gradient function is 2x, so the right limit, as x tends to 2, is 4.
So there is no consistency between the gradient as x approaches 2 from the left and the gradient as x approaches 2 from the right, and so the function is said to be not differentiable at this point.
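The two one-sided limits can be checked numerically with shrinking chords. A sketch assuming the thread's function is f(x) = 2x for x ≤ 2 and f(x) = x² for x > 2 (the exact piecewise definition is implied rather than stated):

```python
def f(x):
    # Piecewise function assumed from the discussion above
    return 2 * x if x <= 2 else x * x

def one_sided_slope(f, a, h):
    """Chord gradient from a to a+h; h < 0 gives the left chord."""
    return (f(a + h) - f(a)) / h

h = 1e-6
left = one_sided_slope(f, 2, -h)   # tends to 2 as h shrinks
right = one_sided_slope(f, 2, h)   # tends to 4 as h shrinks
print(left, right)
```

The mismatched one-sided slopes (2 vs 4) are exactly the inconsistency described above: continuity holds at x = 2, but differentiability fails.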
Arc Length, and Area of a sector
October 2nd 2009, 04:47 PM #1
Junior Member
Sep 2009
Arc Length, and Area of a sector
Hi, I need some help on doing a problem please! I actually asked this problem to the teacher and showed him my work to check if this was how to do the problem, and he said yes, it's all correct. So I input it
in the computer at home and it said the answer was wrong! Here is the problem.
An arc of length 3 feet is cut off by a central angle of π/2 radians. Find the area of the sector formed. (Round the answer to two decimal places.)
answer goes
______ ft^2
So what I did was: since they gave me s=3 and r=pi/2, I did 3/(pi/2) and I get 1.909859317, and this gives me theta. So then I put it in the (1/2)(r)^2(theta) and get 2.35619449, then round
that to 2.36, and it says I'm wrong o_0. Help please! Thanks!
Hi, I need some help on doing a problem please! I actually asked this problem to the teacher and showed him my work to check if this was how to do the problem, and he said yes, it's all correct. So I input it
in the computer at home and it said the answer was wrong! Here is the problem.
An arc of length 3 feet is cut off by a central angle of π/2 radians. Find the area of the sector formed. (Round the answer to two decimal places.)
answer goes
______ ft^2
So what I did was: since they gave me s=3 and r=pi/2, I did 3/(pi/2) and I get 1.909859317, and this gives me theta. So then I put it in the (1/2)(r)^2(theta) and get 2.35619449, then round
that to 2.36, and it says I'm wrong o_0. Help please! Thanks!
$A = \frac{1}{2}r^2 \theta$
$\frac{1}{2} \cdot \frac{36}{\pi^2} \cdot \frac{\pi}{2} = \frac{9}{\pi} \approx 2.86$
How did you get 36/pi?
Hi, I need some help on doing a problem please! I actually asked this problem to the teacher and showed him my work to check if this was how to do the problem, and he said yes, it's all correct. So I input it
in the computer at home and it said the answer was wrong! Here is the problem.
An arc of length 3 feet is cut off by a central angle of π/2 radians. Find the area of the sector formed. (Round the answer to two decimal places.)
answer goes
______ ft^2
So what I did was: since they gave me s=3 and r=pi/2, I did 3/(pi/2) and I get 1.909859317, and this gives me theta. So then I put it in the (1/2)(r)^2(theta) and get 2.35619449, then round
that to 2.36, and it says I'm wrong o_0. Help please! Thanks!
I must confess that the last paragraph mystifies me. How does one "do" 3/(pi/2)? What do "s" and "r" represent and why did you divide s by r?
Do you know that pi/2 (90 degrees) is 1/4 of a complete circle? Do you know that the circumference of a circle is 2pi r (r is the radius of the circle). Here you know that 1/4 of the circle has
circumference 3, what is the circumference of the entire circle? What is the radius of that circle?
Do you know that the area of a circle is pi r^2? What would the area of a circle of this radius be? What is the area of 1/4 of that circle?
I must confess that the last paragraph mystifies me. How does one "do" 3/(pi/2)? What do "s" and "r" represent and why did you divide s by r?
Do you know that pi/2 (90 degrees) is 1/4 of a complete circle? Do you know that the circumference of a circle is 2pi r (r is the radius of the circle). Here you know that 1/4 of the circle has
circumference 3, what is the circumference of the entire circle? What is the radius of that circle?
Do you know that the area of a circle is pi r^2? What would the area of a circle of this radius be? What is the area of 1/4 of that circle?
Well, this was my thinking: since I know A = (1/2)r^2*theta, and since I was given the radius... omg -_- never mind, I keep getting the radius and the angle in radians mixed up. But yeah, I'll keep going:
since I thought they had already given me the radius, I was going to find theta, so I used s = r*theta, and that got me theta, and I plugged that in and voila -_- But yeah, r = radius, not the angle in radians.
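The correct computation from the thread (r = s/θ = 6/π, so A = ½r²θ = 9/π ≈ 2.86) can be verified with a short script:

```python
import math

s = 3.0              # arc length, in feet
theta = math.pi / 2  # central angle, in radians

r = s / theta                # radius from s = r*theta: 6/pi
area = 0.5 * r * r * theta   # sector area: 9/pi
print(round(area, 2))  # -> 2.86
```

Note that π/2 here is the angle θ, not the radius r — the very mix-up the thread resolves.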
FOM: Tennant's re-invention of the numbers
Solomon Feferman sf at Csli.Stanford.EDU
Mon Feb 2 02:09:40 EST 1998
In his posting of 27 Jan 12:15, Neil Tennant took issue with my comment
the day before (7:59) on the quotation from Wigner's "The unreasonable
effectiveness of mathematics". Recall:
>I would say that mathematics is the science of skillful operations
>with concepts and rules invented just for this purpose. The principal
>emphasis is on the invention of concepts.
>This seems to point to the subjective origin of mathematical concepts
>and rules.
>Fundamental question:
>Why should the invention of concepts be a *subjective* matter?
Answer (American Heritage Dictionary, 3d edn.):
"Invent, tr.v. 1. To produce or contrive (something previously unknown) by
the use of ingenuity or imagination. 2. To make up, fabricate."
Neil, what definition do you use? Perhaps Wigner didn't mean what he said,
but taken at face value it's hard to read it other than the way that I
did. Note that I spoke about the subjective *origin* of mathematical
concepts. This does not imply that once invented, mathematical concepts
have no objective status. On the contrary, they gain that status through
their intersubjective communication. Inventions have a subjective origin
and objective products: witness e-mail and post-its. (I have no doubt as
to the objectivity of the internet--but what exactly is it?)
The main part of Tennant's riposte is devoted to a neo-Fregean
re-invention of the number concept. Basically this comes to the
(contextual) introduction of such numbers as 2 in a conservative extension
of logic, with #xFx = 2 equivalent to "there are exactly two F's". Is
this really a re-invention? More importantly, is this an explanation of
the number concept? NO. It is only a partial explanation for each
natural number n, as to how the equation #xFx = n may be reasoned with.
It does not explain the general notion of natural number. If that is to
have the properties of Peano's axioms for 0 and successor, with induction
expressed for arbitrary properties, much more has to be done. Doing so
may indeed "command rational assent", but first one has to have the idea
how that is to be done (Dedekind, Frege being two approaches), then it has to
be communicated.
My own view is that our *usual* conception of the structure of natural
numbers is the ordinal one descended from Dedekind via Peano. Frege
instead tried to extract the notion from the more general notion of cardinal
number. We should not conflate the two. However explained, once you and
I have a meeting of minds on what we are talking about, it's in an objective
arena. What this involves is no more mysterious than the objective
status of chess or go, at least not as far as the fundamental conception
is concerned.
Sol Feferman
ButterWOrth filter using matlab
There are 8 messages in this thread.
ButterWOrth filter using matlab - Suman - 2003-07-18 06:07:00
hi everybody,
I am working in Matlab on verification of a Butterworth low-pass filter. I have written the following program for it:
N ;%Order of Filter
fc 0;%Cut-off frequency of filter
fsP00;%Sampling Frequency
fin00;%Input frequency of pure sine wave
n1=fs/fin; %No. of samples in sine value
n2=fs/fc;
i=1;
%Input Coefficients
for m=0:2*pi/n1:2*pi,
x(i)=sin(m);
i=i+1;
end;
%Filter Coefficients
i=1;
for m=0:n2:2*fc,
y(i)=1/(1+((m/fc).^(2*N)));
i=i+1;
end;
%Output Coefficients.
z=conv(x,y1);
If the above program is correct and the output obtained is correct, I will be able to go forward with the same program to be implemented on the ADSP-21061 EZ-Kit Lite.
Please help me regarding this.
Re: ButterWOrth filter using matlab - Clay S. Turner - 2003-07-18 11:31:00
Hello Suman,
While I'm not a Matlab guy, it appears (If I read your program correctly)
that you are convolving your data with an FIR filter whose tap values are
the magnitude of a Butterworth filter. If this is the case, it will fail for
several reasons. One: the Butterworth filter is an IIR design. Two: the
Butterworth filter has a phase response different from what you have here.
Third: the magnitude response for a Butterworth needs a square root which is
left out. And of course the taps in an FIR filter are the impulse response
and not the frequency response.
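On Clay's third point, the standard order-N Butterworth magnitude is |H(f)| = 1/sqrt(1 + (f/fc)^(2N)) — the square root is what the posted y(i) formula leaves out. A quick Python sketch (illustration only: it evaluates the gain at a frequency, it is not a filter implementation):

```python
def butterworth_gain(f, fc, N):
    """|H(f)| of an order-N Butterworth lowpass with cutoff fc."""
    return 1.0 / (1.0 + (f / fc) ** (2 * N)) ** 0.5

# At the cutoff the gain is 1/sqrt(2) (-3 dB) for any order N:
print(butterworth_gain(50.0, 50.0, 4))   # -> 0.7071...
print(butterworth_gain(100.0, 50.0, 4))  # well into the stopband
```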
"Suman" <s...@yahoo.com> wrote in message
> hi everybody,
> I am working on MAtlab for verification of Butterworth Filter for low
> pass filter, I have written the following program for it,
> N ;%Order of Filter
> fc 0;%Cut-off frequency of filter
> fsP00;%Sampling Frequency
> fin00;%Input frequency of pure sine wave
> n1=fs/fin; %No. of samples in sine value
> n2=fs/fc;
> i=1;
> %Input Coefficients
> for m=0:2*pi/n1:2*pi,
> x(i)=sin(m);
> i=i+1;
> end;
> %Filter Coefficients
> i=1;
> for m=0:n2:2*fc,
> y(i)=1/(1+((m/fc).^(2*N)));
> i=i+1;
> end;
> %Output Coefficients.
> z=conv(x,y1);
> If the above program is correct and the output obtained is correct, I
> am able go forward with the same program to be implemented in ADSP
> 21061 EZKit Lite.
> please help me regarding this.
Re: ButterWOrth filter using matlab - Eric Jacobsen - 2003-07-18 16:41:00
On Fri, 18 Jul 2003 11:31:31 -0400, "Clay S. Turner"
<p...@bellsouth.net> wrote:
>Hello Suman,
>While I'm not a Matlab guy, it appears (If I read your program correctly)
>that you are convolving your data with an FIR filter whose tap values are
>the magnitude of a Butterworth filter. If this is the case, it will fail for
>several reasons. One: the Butterworth filter is an IIR design. Two: the
>Butterworth filter has a phase response different from what you have here.
>Third: the magnitude response for a Butterworth needs a square root which is
>left out. And of course the taps in an FIR filter are the impulse response
>and not the frequency response.
Actually, you can get a very good Butterworth response with a FIR
design. See
for a brief description how.
>"Suman" <s...@yahoo.com> wrote in message
>> hi everybody,
>> I am working on MAtlab for verification of Butterworth Filter for low
>> pass filter, I have written the following program for it,
>> N ;%Order of Filter
>> fc 0;%Cut-off frequency of filter
>> fsP00;%Sampling Frequency
>> fin00;%Input frequency of pure sine wave
>> n1=fs/fin; %No. of samples in sine value
>> n2=fs/fc;
>> i=1;
>> %Input Coefficients
>> for m=0:2*pi/n1:2*pi,
>> x(i)=sin(m);
>> i=i+1;
>> end;
>> %Filter Coefficients
>> i=1;
>> for m=0:n2:2*fc,
>> y(i)=1/(1+((m/fc).^(2*N)));
>> i=i+1;
>> end;
>> %Output Coefficients.
>> z=conv(x,y1);
>> If the above program is correct and the output obtained is correct, I
>> am able go forward with the same program to be implemented in ADSP
>> 21061 EZKit Lite.
>> please help me regarding this.
Eric Jacobsen
Minister of Algorithms, Intel Corp.
My opinions may not be Intel's opinions.
Re: ButterWOrth filter using matlab - Peter J. Kootsookos - 2003-07-18 18:27:00
Hi All,
No comments on what you're doing, but how you're doing it.
s...@yahoo.com (Suman) writes:
> hi everybody,
> I am working on MAtlab for verification of Butterworth Filter for low
> pass filter, I have written the following program for it,
> N ;%Order of Filter
> fc 0;%Cut-off frequency of filter
> fsP00;%Sampling Frequency
> fin00;%Input frequency of pure sine wave
> n1=fs/fin; %No. of samples in sine value
> n2=fs/fc;
> i=1;
> %Input Coefficients
> for m=0:2*pi/n1:2*pi,
> x(i)=sin(m);
> i=i+1;
> end;
It's easier (and faster for more complex loops) to write:
x = sin(0:2*pi/n1:2*pi);
> %Filter Coefficients
> i=1;
> for m=0:n2:2*fc,
> y(i)=1/(1+((m/fc).^(2*N)));
> i=i+1;
> end;
Same here:
 y = 1./(1+(([0:n2:2*fc]/fc).^(2*N)));
> %Output Coefficients.
> z=conv(x,y1);
> If the above program is correct and the output obtained is correct, I
> am able go forward with the same program to be implemented in ADSP
> 21061 EZKit Lite.
> please help me regarding this.
Peter K.
Peter J. Kootsookos
"Na, na na na na na na, na na na na"
- 'Hey Jude', Lennon/McCartney
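For readers following along in another language: the same loop-elimination idea looks like this in Python, with comprehensions standing in for Matlab's vector operations. This is a hypothetical sketch — the numeric parameters in the original post were garbled, so the values below are made up:

```python
import math

# Hypothetical parameters (the original post's values did not survive):
N, fc = 4, 50.0    # filter order, cutoff frequency
n1, n2 = 50, 10.0  # samples per sine period, frequency step

# x(i) = sin(m) for m = 0 : 2*pi/n1 : 2*pi
x = [math.sin(2 * math.pi * k / n1) for k in range(n1 + 1)]

# y(i) = 1 / (1 + (m/fc)^(2*N)) for m = 0 : n2 : 2*fc
y = [1.0 / (1.0 + (k * n2 / fc) ** (2 * N)) for k in range(int(2 * fc / n2) + 1)]

print(len(x), len(y))  # -> 51 11
```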
Re: ButterWOrth filter using matlab - Eric Jacobsen - 2003-07-19 12:52:00
On 19 Jul 2003 08:27:08 +1000, p...@remove.ieee.org (Peter J.
Kootsookos) wrote:
>Hi All,
>No comments on what you're doing, but how you're doing it.
>> %Filter Coefficients
>> i=1;
>> for m=0:n2:2*fc,
>> y(i)=1/(1+((m/fc).^(2*N)));
>> i=i+1;
>> end;
>Same here:
> y = 1./(1+([0:n2:2*fc].^2*N));
>Peter J. Kootsookos
I always have this struggle with my love-hate relationship with
matlab. Being able to do things like you've illustrated make it nice
to work with, but there are so many other things I really dislike
about it. We're in the middle of transitioning almost all of our
sims to C (after years of me threatening to do so), but I know I'll
miss this sort of thing.
Eric Jacobsen
Minister of Algorithms, Intel Corp.
My opinions may not be Intel's opinions.
Re: ButterWOrth filter using matlab - Peter J. Kootsookos - 2003-07-19 20:11:00
e...@ieee.org (Eric Jacobsen) writes:
> I always have this struggle with my love-hate relationship with
> matlab. Being able to do things like you've illustrated make it nice
> to work with, but there are so many other things I really dislike
> about it. We're in the middle of transitioning almost all of our
> sims to C (after years of me threatening to do so), but I know I'll
> miss this sort of thing.
I had an occasion where I was working with a really good coder. He
understood how to take C code and vectorise it into matlab with
minimal performance hit. At one stage, he had some C code to
vectorise and, due to the incredibly silly way the C code was written,
his matlab implementation was 3 times faster.
We tweaked the algorithm a little and he re-coded it in C from scratch.
The final C code ran 9 times faster than the original C code with
slightly better performance.
That experience has always made me think (when people complain about
matlab's [lack of] speed) that they should look at their code first.
Peter K.
Peter J. Kootsookos
"Na, na na na na na na, na na na na"
- 'Hey Jude', Lennon/McCartney
Re: ButterWOrth filter using matlab - Eric Jacobsen - 2003-07-26 13:22:00
On 20 Jul 2003 10:11:44 +1000, p...@remove.ieee.org (Peter J.
Kootsookos) wrote:
>e...@ieee.org (Eric Jacobsen) writes:
>> I always have this struggle with my love-hate relationship with
>> matlab. Being able to do things like you've illustrated make it nice
>> to work with, but there are so many other things I really dislike
>> about it. We're in the middle of transitioning almost all of our
>> sims to C (after years of me threatening to do so), but I know I'll
>> miss this sort of thing.
>I had an occassion where I was working with a really good coder. He
>understood how to take C code and vectorise it into matlab with
>minimal performance hit. At one stage, he had some C code to
>vectorise and, due to the incredible silly way the C code was written,
>his matlab implementation was 3 times faster.
>We tweaked the algorithm a little and he re-coded it in C from scratch.
>THe final C code ran 9 times faster than the original C code with
>slightly better performance.
>That experience has always made me think (when people complain about
>matlab's [lack of] speed) that they should look at their code first.
>Peter K.
Absolutely. Writing fast Matlab requires good coding skills, though,
and an understanding of what makes things fast or inefficient.
Someone who can write fast Matlab code can usually write much faster C
code. That's a little obvious, I guess, but it's one of the things
we're up against.
We have a pretty large, computationally intensive Matlab suite that
we're in the middle of wringing out. The coder was complaining it was
taking about seven hours to generate a single output point (on a
1.2GHz P4). I was given an overview of the code architecture and
then suggested some changes. I'm told those changes provided an
improvement of about 10x!
Matlab can be pretty fast, C can be faster, but either is highly
dependant on the skill of the coder.
Eric Jacobsen
Minister of Algorithms, Intel Corp.
My opinions may not be Intel's opinions.
Re: ButterWOrth filter using matlab - Rune Allnor - 2003-07-27 09:49:00
e...@ieee.org (Eric Jacobsen) wrote in message
> On 20 Jul 2003 10:11:44 +1000, p...@remove.ieee.org (Peter J.
> Kootsookos) wrote:
> >e...@ieee.org (Eric Jacobsen) writes:
> >
> >> I always have this struggle with my love-hate relationship with
> >> matlab. Being able to do things like you've illustrated make it nice
> >> to work with, but there are so many other things I really dislike
> >> about it. We're in the middle of transitioning almost all of our
> >> sims to C (after years of me threatening to do so), but I know I'll
> >> miss this sort of thing.
> >
> >I had an occassion where I was working with a really good coder. He
> >understood how to take C code and vectorise it into matlab with
> >minimal performance hit. At one stage, he had some C code to
> >vectorise and, due to the incredible silly way the C code was written,
> >his matlab implementation was 3 times faster.
> >
> >We tweaked the algorithm a little and he re-coded it in C from scratch.
> >
> >THe final C code ran 9 times faster than the original C code with
> >slightly better performance.
I don't understand? Do you measure performance in other units than speed?
> >That experience has always made me think (when people complain about
> >matlab's [lack of] speed) that they should look at their code first.
> >
> >Ciao,
> >
> >Peter K.
> Absolutely. Writing fast Matlab requires good coding skills, though,
> and an understanding of what makes things fast or inefficient.
> Someone who can write fast Matlab code can usually write much faster C
> code. That's a little obvious, I guess, but it's one of the things
> we're up against.
> We have a pretty large, computationally intensive Matlab suite that
> we're in the middle of wringing out. The coder was complaining it was
> taking about seven hours to generate a single output point (on a
> 1.2GHz P4). I was given an overview of the code architecture and
> then suggested some changes. I'm told those changes provided an
> improvement of about 10x!
> Matlab can be pretty fast, C can be faster, but either is highly
> dependant on the skill of the coder.
As far as C is concerned, performance definitely depends on the skill
of the coder. Matters are somewhat more complex with matlab. The problem
is (was) that matlab is based on computation primitives that are (were)
optimized for operations on 1D or 2D arrays. As long as the operation
can be expressed in terms of available primitives that work on arrays
of dimensions not larger than 2, matlab code can be "vectorized" to
way better performance than code with explicit, naive control loops.
So if the code requires nested "for to do" loops in three or more
dimensions, C code runs significantly faster than matlab. I had a little loop that
demonstrated the difference, that I posted here a few months ago. The
naive for-to-do code ran some 40 times slower than the "vectorized"
version. Somebody made me aware that the performance had been significantly
improved in the last releases of matlab (Matlab 6.5 and higher).
There are a few other issues as well, as code interpreters vs compilers,
copying arguments to and from functions vs reference-by-pointer, but matlab
is, all in all, a quick'n dirty prototyping tool where ease of use (from
a programmer's point of view) has been achieved at the cost of a severe
penalty in run-time performance and code "maintainability".
The "lab" in "matlab" is there for a reason. | {"url":"http://www.dsprelated.com/showmessage/22510/1.php","timestamp":"2014-04-19T12:03:10Z","content_type":null,"content_length":"40207","record_id":"<urn:uuid:de400c84-fa6b-44ef-88d0-8167ee1cc6c3>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00305-ip-10-147-4-33.ec2.internal.warc.gz"} |
digitalmars.D - support for various angular units in std.math
Colin Wallace <wallacoloo gmail.com>
I've been doing a lot of opengl work in D. Opengl deals primarily
(maybe even completely?) with degrees.
When using trigonometric functions, they all deal with radians. So
I've been having to do lots of converting. Converting is very
straightforward, but I still think it would be nice if there were some
built in functions for dealing with other units than radians. I think
a system that looked like what is used for the Duration struct in
core.time would make things more readable. The code for taking the
sine of a number measured in degrees could look like:
rather than
It doesn't make a huge difference to me, but neither does the dur!()
function, yet somebody decided it would be helpful and it made its way
into the standard library. So I figured I would at least share this
idea to see what other people thought of it.
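Mechanically, the convenience asked for here is just a thin wrapper over the radian-based routines. A hypothetical sketch — in Python rather than D, purely to illustrate the idea:

```python
import math

def sin_deg(x):
    """Sine of an angle given in degrees."""
    return math.sin(math.radians(x))

print(sin_deg(90.0))  # -> 1.0
print(sin_deg(30.0))  # -> ~0.5
```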
Feb 27 2011
Colin Wallace <wallacoloo gmail.com> wrote:
I've been doing a lot of opengl work in D. Opengl deals primarily
(maybe even completely?) with degrees.
When using trigonometric functions, they all deal with radians. So
I've been having to do lots of converting. Converting is very
straightforward, but I still think it would be nice if there were some
built in functions for dealing with other units than radians. I think
a system that looked like what is used for the Duration struct in
core.time would make things more readable. The code for taking the
sine of a number measured in degrees could look like:
rather than
It doesn't make a huge difference to me, but neither does the dur!()
function, yet somebody decided it would be helpful and it made its way
into the standard library. So I figured I would at least share this
idea to see what other people thought of it.
I think a better solution would be a proper units struct:

unit!"degrees" a;
sin(a); // Automagically behaves correctly
bool foo(unit!"radians" bar) { return bar < PI; }
foo(a); // Compile-time error: degrees not equal to radians.

-- Simen
Feb 27 2011
Walter Bright <newshound2 digitalmars.com>
Colin Wallace wrote:
When using trigonometric functions, they all deal with radians. So
I've been having to do lots of converting. Converting is very
straightforward, but I still think it would be nice if there were some
built in functions for dealing with other units than radians. I think
a system that looked like what is used for the Duration struct in
core.time would make things more readable. The code for taking the
sine of a number measured in degrees could look like:
rather than
I appreciate the suggestion, but suspect that adding a parallel set of trig functions that do nothing more than multiply the arg by a constant is more cognitive load for the user than benefit.
Feb 27 2011
Andrei Alexandrescu <SeeWebsiteForEmail erdani.org>
On 2/28/11 12:49 AM, Walter Bright wrote:
Colin Wallace wrote:
When using trigonometric functions, they all deal with radians. So
I've been having to do lots of converting. Converting is very
straightforward, but I still think it would be nice if there were some
built in functions for dealing with other units than radians. I think
a system that looked like what is used for the Duration struct in
core.time would make things more readable. The code for taking the
sine of a number measured in degrees could look like:
rather than
I appreciate the suggestion, but suspect that adding a parallel set of trig functions that do nothing more than multiply the arg by a constant is more cognitive load for the user than benefit.
Agreed. What would add value is a units library that refuses calls to e.g. trig functions unless the user specifically inserts a conversion. That is, instead of

sin!"degrees"(x);

you'd use:

sin(to!"degrees"(x));

That way the conversion remains explicit but is hoisted out of the multitude of math functions.

Andrei
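This "refuse unless the conversion is explicit" idea can be sketched at run time in Python (D's unit!/to! templates would enforce it at compile time instead; the Degrees/Radians class names below are invented purely for illustration):

```python
import math

class Radians(float):
    """A float tagged as an angle in radians (illustrative only)."""

class Degrees(float):
    """A float tagged as an angle in degrees (illustrative only)."""
    def to_radians(self):
        return Radians(math.radians(self))

def sin(angle):
    # Refuse anything that has not been explicitly converted to radians
    if not isinstance(angle, Radians):
        raise TypeError("sin() takes Radians; insert an explicit conversion")
    return math.sin(angle)

a = Degrees(90)
assert abs(sin(a.to_radians()) - 1.0) < 1e-12   # explicit conversion: accepted
try:
    sin(a)                                      # implicit degrees: rejected
except TypeError:
    pass
else:
    raise AssertionError("expected a TypeError for a Degrees argument")
```

A statically typed language can reject the bad call before the program ever runs, which is the whole appeal of the units-library approach.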
Feb 28 2011
bearophile <bearophileHUGS lycos.com>
you'd use:
That way the conversion remains explicit but is hoisted out of the
multitude of math functions.
Have you seen Don's answer? http://www.digitalmars.com/webnews/newsgroups.php?art_group=digitalmars.D&article_id=130757 Bye, bearophile
Feb 28 2011
For a very tidy and typesafe implementation of units I would recommend checking
out the units feature in F#.
Probably not the best link, but the compiler will flag if units are not consistent in computations.
Granted, they adopt a 'tagging' system to declare units, but conversely it is
very extensible to any set of defined units.
Feb 28 2011
Peter Alexander <peter.alexander.au gmail.com>
On 28/02/11 1:17 AM, Colin Wallace wrote:
I've been doing a lot of opengl work in D. Opengl deals primarily
(maybe even completely?) with degrees.
When using trigonometric functions, they all deal with radians. So
I've been having to do lots of converting. Converting is very
straightforward, but I still think it would be nice if there were some
built in functions for dealing with other units than radians. I think
a system that looked like what is used for the Duration struct in
core.time would make things more readable. The code for taking the
sine of a number measured in degrees could look like:
rather than
It doesn't make a huge difference to me, but neither does the dur!()
function, yet somebody decided it would be helpful and it made its way
into the standard library. So I figured I would at least share this
idea to see what other people thought of it.
When you start doing OpenGL properly, degrees don't show up at all. In fact, I believe all degrees functions are deprecated in the latest version (could be wrong on this). Really, you should be doing
all the matrix calculations and rotations yourself. If anything should come out of this, it would be a nice, small linear algebra library.
Feb 28 2011
Trass3r <un known.com>
Really, you should be doing all the matrix calculations and rotations
yourself. If anything should come out of this, it would be a nice, small
linear algebra library.
I think you even have to do that in OpenGL 4 (maybe even 3).
Feb 28 2011
On 28/02/11 10:12 AM, Trass3r wrote:
Really, you should be doing all the matrix calculations and rotations
yourself. If anything should come out of this, it would be a nice,
small linear algebra library.
I think you even have to do that in OpenGL 4 (maybe even 3).
Yep, I'm pretty sure you're right.
Feb 28 2011
Don <nospam nospam.com>
Colin Wallace wrote:
I've been doing a lot of opengl work in D. Opengl deals primarily
(maybe even completely?) with degrees.
When using trigonometric functions, they all deal with radians. So
I've been having to do lots of converting. Converting is very
Actually, it isn't!

assert(sin(360 * PI / 180) == 0.0); // fails!

There are supposed to be sinPi(), cosPi(), tanPi() functions, but they are not yet implemented (they are quite difficult to get right). You should use:

sinPi(360/180);

to get sine in degrees.
Feb 28 2011
Colin Wallace <wallacoloo gmail.com>
Don Wrote:
Colin Wallace wrote:
When using trigonometric functions, they all deal with radians. So
I've been having to do lots of converting. Converting is very
Actually, it isn't! assert(sin(360 * PI / 180 ) == 0.0); // fails!
That's a very marginal error (1e-19). For graphics, I never need that much precision. Besides, it has nothing to do with the conversion in this case. sin(2*PI) results in the same value, causing the
assertion to fail.
Feb 28 2011
Colin Wallace wrote:
Don Wrote:
Colin Wallace wrote:
When using trigonometric functions, they all deal with radians. So
I've been having to do lots of converting. Converting is very
assert(sin(360 * PI / 180 ) == 0.0); // fails!
That's a very marginal error (1e-19). For graphics, I never need that much precision.
Yes, but someone doing high school trigonometry with degrees will be confused when it fails.
Besides, it has nothing to do with the conversion in this case. sin(2*PI)
results in the same value, causing the assertion to fail.
No, it's the conversion. It's because the mathematical number pi is not precisely representable in floating point. But 360 is precisely representable, and sin(360 degrees) is also precisely representable.

assert(sinPi(360/180) == 0) will pass.

So multiplying by PI is *not* the correct way to calculate trigonometric functions in degrees. 2*PI is not 360 degrees.
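This point is easy to check numerically. The sin_pi below is only a minimal sketch of the argument-reduction idea (not the actual Phobos implementation): rounding to the nearest integer and the subtraction x - n are exact for moderate floats, so the inexact pi never touches the part of the argument that must reduce exactly:

```python
import math

# Multiplying by an inexact PI does not give an exact zero:
assert math.sin(360 * math.pi / 180) != 0.0   # off by roughly 2e-16

def sin_pi(x):
    """sin(pi * x), reducing the argument exactly before multiplying by pi."""
    n = round(x)        # nearest integer (exact)
    r = x - n           # exact for moderate x, with |r| <= 0.5
    s = math.sin(math.pi * r)
    return -s if n % 2 else s

assert sin_pi(360 / 180) == 0.0   # sine of 360 degrees: exactly zero
```

Because the reduction happens before the multiplication by pi, integer multiples of 180 degrees map to an exact sin(0).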
Mar 01 2011
Merion, PA SAT Math Tutor
Find a Merion, PA SAT Math Tutor
...However, I really found myself seeking a higher pursuit and many people close to me encouraged me to teach. I thought back on my first teaching experience back when I was in college. I took
part in this program where students from my university taught a group of public school children how to make a model rocket and how it worked.
16 Subjects: including SAT math, Spanish, calculus, physics
...Instead, we study how the language works. My goal is to help my students become lifelong writers (and readers), not only to master a test. Areas of instruction include: SAT Writing SAT
Critical Reading SAT Math GRE Verbal GRE Quantitative Reasoning GRE Analytical Writing MCAT verbal GMAT Prax...
47 Subjects: including SAT math, chemistry, reading, English
...I use multi-sensory techniques for teaching reading, writing, and phonics, and also use student interests and games to begin work in an area of difficulty. I am certified in PA to teach
Special Education grades PK-12. In NJ public schools I worked with children with special needs from ages PK-12.
20 Subjects: including SAT math, reading, dyslexia, algebra 1
...This included taking classes such as Calculus 1, 2, 3, 4, and Advanced Calculus (where we prove the theorems used in Calculus 1). Also, while at Rutgers, I worked as a math tutor, tutoring
students in subjects that included Calculus 1 and 2. I have obtained a bachelor's degree in mathematics from Rutgers University.
16 Subjects: including SAT math, English, physics, calculus
...I taught Desktop Publishing (including Microsoft Publisher) at Rhodes High School in Philadelphia to two classes. One class was composed of Honors 7th graders and the other class was a mixed
class of 9-12 graders. I also worked in IT for years doing computer troubleshooting for a local law firm.
37 Subjects: including SAT math, reading, geometry, algebra 1
High-resolution radar via compressed sensing
Results 1 - 10 of 54
, 2009
Cited by 69 (15 self)
Wideband analog signals push contemporary analog-to-digital conversion systems to their performance limits. In many applications, however, sampling at the Nyquist rate is inefficient because the
signals of interest contain only a small number of significant frequencies relative to the bandlimit, although the locations of the frequencies may not be known a priori. For this type of sparse
signal, other sampling strategies are possible. This paper describes a new type of data acquisition system, called a random demodulator, that is constructed from robust, readily available components.
Let K denote the total number of frequencies in the signal, and let W denote its bandlimit in Hz. Simulations suggest that the random demodulator requires just O(K log(W/K)) samples per second to
stably reconstruct the signal. This sampling rate is exponentially lower than the Nyquist rate of W Hz. In contrast with Nyquist sampling, one must use nonlinear methods, such as convex programming,
to recover the signal from the samples taken by the random demodulator. This paper provides a detailed theoretical analysis of the system’s performance that supports the empirical observations.
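As a toy illustration of the sparse-recovery setting these papers describe, here is a minimal Orthogonal Matching Pursuit sketch in Python, a greedy stand-in for the convex programming mentioned above (the matrix sizes, sparsity level, and coefficient values below are arbitrary choices for the demo):

```python
import numpy as np

def omp(A, y, k):
    """Greedy Orthogonal Matching Pursuit: estimate a k-sparse x with y = A @ x."""
    support, residual = [], y.copy()
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most-correlated column
        if j not in support:
            support.append(j)
        # Least-squares fit on the columns selected so far
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
m, n, k = 60, 128, 3                     # far fewer measurements than unknowns
A = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = [1.5, -2.0, 1.0]
x_hat = omp(A, A @ x, k)
assert np.allclose(x_hat, x, atol=1e-8)
```

With a random Gaussian measurement matrix and m well above k log n, exact recovery of the support is the typical outcome, which is the qualitative message of the abstracts above.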
- IEEE Int. Work. on Comp. Adv. Multi-Sensor Adaptive Proc., CAMPSAP , 2007
Cited by 65 (4 self)
Abstract. This paper outlines a new framework for compressive sensing: convolution with a random waveform followed by random time domain subsampling. We show that sensing by random convolution is a
universally efficient data acquisition strategy in that an n-dimensional signal which is S sparse in any fixed representation can be recovered from m � S log n measurements. We discuss two imaging
scenarios — radar and Fourier optics — where convolution with a random pulse allows us to seemingly super-resolve fine-scale features, allowing us to recover high-resolution signals from
low-resolution measurements. 1. Introduction. The new field of compressive sensing (CS) has given us a fresh look at data acquisition, one of the fundamental tasks in signal processing. The message
of this theory can be summarized succinctly [7, 8, 10, 15, 32]: the number of measurements we need to reconstruct a signal depends on its sparsity rather than its bandwidth. These measurements,
however, are different than the samples that
, 2008
Cited by 63 (38 self)
The rapid developing area of compressed sensing suggests that a sparse vector lying in a high dimensional space can be accurately and efficiently recovered from only a small set of non-adaptive
linear measurements, under appropriate conditions on the measurement matrix. The vector model has been extended both theoretically and practically to a finite set of sparse vectors sharing a common
sparsity pattern. In this paper, we treat a broader framework in which the goal is to recover a possibly infinite set of jointly sparse vectors. Extending existing algorithms to this model is
difficult due to the infinite structure of the sparse vector set. Instead, we prove that the entire infinite set of sparse vectors can be recovered by solving a single, reduced-size
finite-dimensional problem, corresponding to recovery of a finite set of sparse vectors. We then show that the problem can be further reduced to the basic model of a single sparse vector by randomly
combining the measurements. Our approach is exact for both countable and uncountable sets as it does not rely on discretization or heuristic techniques. To efficiently find the single sparse vector
produced by the last reduction step, we suggest an empirical boosting strategy that improves the recovery ability of any given sub-optimal method for recovering a sparse vector. Numerical experiments
on random data demonstrate that when applied to infinite sets our strategy outperforms discretization techniques in terms of both run time and empirical recovery rate. In the finite model, our
boosting algorithm has fast run time and much higher recovery rate than known popular methods.
- RADON SERIES COMP. APPL. MATH XX, 1–95 © DE GRUYTER 20YY
Cited by 59 (13 self)
These notes give a mathematical introduction to compressive sensing focusing on recovery using ℓ1-minimization and structured random matrices. An emphasis is put on techniques for proving
probabilistic estimates for condition numbers of structured random matrices. Estimates of this type are key to providing conditions that ensure exact or approximate recovery of sparse vectors using
- Online]. Available: http://www.ece.wisc.edu/ ∼nowak/sub08 toep.pdf
Cited by 43 (8 self)
Compressed sensing (CS) has recently emerged as a powerful signal acquisition paradigm. In essence, CS enables the recovery of high-dimensional sparse signals from relatively few linear observations
in the form of projections onto a collection of test vectors. Existing results show that if the entries of the test vectors are independent realizations of certain zero-mean random variables, then
with high probability the unknown signals can be recovered by solving a tractable convex optimization. This work extends CS theory to settings where the entries of the test vectors exhibit structured
statistical dependencies. It follows that CS can be effectively utilized in linear, time-invariant system identification problems provided the impulse response of the system is (approximately or
exactly) sparse. An immediate application is in wireless multipath channel estimation. It is shown here that time-domain probing of a multipath channel with a random binary sequence, along with
utilization of CS reconstruction techniques, can provide significant improvements in estimation accuracy compared to traditional least-squares based linear channel estimation strategies. Abstract
extensions of the main results are also discussed, where the theory of equitable graph coloring is employed to establish the utility of CS in settings where the test vectors exhibit more general
statistical dependencies. Index Terms circulant matrices, compressed sensing, Hankel matrices, restricted isometry property, sparse channel estimation, Toeplitz matrices, wireless communications. I.
- in Proc. of Conf. on Information Sciences and Systems (CISS , 2008
Cited by 40 (9 self)
Abstract—Reliable wireless communications often requires accurate knowledge of the underlying multipath channel. This typically involves probing of the channel with a known training waveform and
linear processing of the input probe and channel output to estimate the impulse response. Many real-world channels of practical interest tend to exhibit impulse responses characterized by a
relatively small number of nonzero channel coefficients. Conventional linear channel estimation strategies, such as the least squares, are ill-suited to fully exploiting the inherent
low-dimensionality of these sparse channels. In contrast, this paper proposes sparse channel estimation methods based on convex/linear programming. Quantitative error bounds for the proposed schemes
are derived by adapting recent advances from the theory of compressed sensing. The bounds come within a logarithmic factor of the performance of an ideal channel
, 2009
Cited by 26 (2 self)
Abstract—The theory of compressed sensing suggests that successful inversion of an image of the physical world (e.g., a radar/sonar return or a sensor array snapshot vector) for the source modes and
amplitudes can be achieved at measurement dimensions far lower than what might be expected from the classical theories of spectrum or modal analysis, provided that the image is sparse in an apriori
known basis. For imaging problems in passive and active radar and sonar, this basis is usually taken to be a DFT basis. The compressed sensing measurements are then inverted using an ℓ1-minimization
principle (basis pursuit) for the nonzero source amplitudes. This seems to make compressed sensing an ideal image inversion principle for high resolution modal analysis. However, in reality no
physical field is sparse in the DFT basis or in an apriori known basis. In fact the main goal in image inversion is to identify the modal structure. No matter how finely we grid the parameter space
the sources may not lie in the center of the grid cells and there is always mismatch between the assumed and the actual bases for sparsity. In this paper, we study the sensitivity of basis pursuit to
mismatch between the assumed and the actual sparsity bases and compare the performance of basis pursuit with that of classical image inversion. Our mathematical analysis and numerical examples show
that the performance of basis pursuit degrades considerably in the presence of mismatch, and they suggest that the use of compressed sensing as a modal analysis principle requires more consideration
and refinement, at least for the problem sizes common to radar/sonar. I.
, 2009
Cited by 21 (3 self)
Abstract—We analyze the Basis Pursuit recovery method when observing signals with general perturbations (i.e., additive, as well as multiplicative noise). This completely perturbed model extends the
previous work of Candès, Romberg and Tao on stable signal recovery from incomplete and inaccurate measurements. Our results show that, under suitable conditions, the stability of the recovered signal
is limited by the noise level in the observation. Moreover, this accuracy is within a constant multiple of the bestcase reconstruction using the technique of least squares. I.
- Appl. Comput. Harmon. Anal
Cited by 19 (5 self)
In the theory of compressed sensing, restricted isometry analysis has become a standard tool for studying how efficiently a measurement matrix acquires information about sparse and compressible
signals. Many recovery algorithms are known to succeed when the restricted isometry constants of the sampling matrix are small. Many potential applications of compressed sensing involve a
data-acquisition process that proceeds by convolution with a random pulse followed by (nonrandom) subsampling. At present, the theoretical analysis of this measurement technique is lacking. This
paper demonstrates that the sth order restricted isometry constant is small when the number m of samples satisfies m � (s log n) 3/2, where n is the length of the pulse. This bound improves on
previous estimates, which exhibit quadratic scaling. 1
- IEEE Trans. Signal Process , 2011
Cited by 17 (6 self)
Abstract—Compressed sensing (CS) is an emerging field that has attracted considerable research interest over the past few years. Previous review articles in CS limit their scope to standard
discrete-to-discrete measurement architectures using matrices of randomized nature and signal models based on standard sparsity. In recent years, CS has worked its way into several new application
areas. This, in turn, necessitates a fresh look on many of the basics of CS. The random matrix measurement operator must be replaced by more structured sensing architectures that correspond to the
characteristics of feasible acquisition hardware. The standard sparsity prior has to be extended to include a much richer class of signals and to encode broader data models, including continuoustime
signals. In our overview, the theme is exploiting signal and measurement structure in compressive sensing. The prime focus is bridging theory and practice; that is, to pinpoint the potential of
structured CS strategies to emerge from the math to the hardware. Our summary highlights new directions as well as relations to more traditional CS, with the hope of serving both as a review to
practitioners wanting to join this emerging field, and as a reference for researchers that attempts to put some of the existing ideas in perspective of practical applications. Index
Terms—Approximation algorithms, compressed sensing, compression algorithms, data acquisition, data compression, sampling methods. I.
IMAGE_PROCESSING_AND_ITS_APPLICATION Ppt Presentation
Technical Seminar on IMAGE PROCESSING AND ITS APPLICATION:
1 Technical Seminar on IMAGE PROCESSING AND ITS APPLICATION. Presented by Rakhi Ghosh (CS200157261), under the guidance of Mr. Anisur Rahman.
Slide 2:
2 Digital image processing fundamentals. Digital image processing methods stem from two principal application areas: improvement of pictorial information, and processing of scene data.
Slide 3:
3 IMAGES: An image is a replica of an object. An image defined in the "real world" is considered to be a function of two real variables x and y. TYPES OF IMAGES: gray-tone images, line-copy images, half-tone images.
Slide 4:
4 EXAMPLE: A digital image a[m,n] described in a 2D discrete space is derived from an analog image a(x,y). The 2D continuous image a(x,y) is divided into N rows and M columns.
5 STEPS IN IMAGE PROCESSING: image acquisition, preprocessing, segmentation, representation and description, recognition, interpretation, knowledge base.
Slide 6:
7 IMAGE TRANSFORMATION: FOURIER TRANSFORM. The Fourier transform produces a representation of a signal as a weighted sum of complex exponentials. Because of Euler's formula, e^(jθ) = cos(θ) + j·sin(θ), the defining formulas for the forward Fourier and the inverse Fourier transforms are as follows.
Slide 8:
8 The forward transform goes from the spatial domain, either continuous or discrete, to the frequency domain, which is always continuous. The inverse Fourier transform goes from the frequency domain back to the spatial domain.
Slide 9:
9 The specific formulas for transforming back and forth between the spatial domain and the frequency domain are given below. In 2D continuous space: F(u,v) = ∫∫ f(x,y) e^(−j2π(ux+vy)) dx dy, with inverse f(x,y) = ∫∫ F(u,v) e^(j2π(ux+vy)) du dv. In discrete space: F(u,v) = Σ_x Σ_y f(x,y) e^(−j2π(ux/M + vy/N)), with inverse f(x,y) = (1/MN) Σ_u Σ_v F(u,v) e^(j2π(ux/M + vy/N)).
10 WALSH TRANSFORM. The discrete Walsh transform of a function f(x), where N = 2^n, denoted by W(u), is obtained by substituting the kernel g(x,u) = (1/N) Π_{i=0..n−1} (−1)^(b_i(x)·b_{n−1−i}(u)), giving W(u) = (1/N) Σ_{x=0..N−1} f(x) Π_{i=0..n−1} (−1)^(b_i(x)·b_{n−1−i}(u)), where b_k(z) is the k-th bit in the binary representation of z. The inverse transform is the relation f(x) = Σ_{u=0..N−1} W(u) Π_{i=0..n−1} (−1)^(b_i(x)·b_{n−1−i}(u)).
Slide 11:
11 HADAMARD TRANSFORM. The 1-D forward Hadamard kernel is the relation g(x,u) = (1/N) (−1)^(Σ_{i=0..n−1} b_i(x)·b_i(u)), and the 1-D Hadamard transform is H(u) = (1/N) Σ_{x=0..N−1} f(x) (−1)^(Σ_{i=0..n−1} b_i(x)·b_i(u)).
Slide 12:
12 An inverse kernel that, except for the 1/N term, is equal to the forward Hadamard kernel: h(x,u) = (−1)^(Σ_{i=0..n−1} b_i(x)·b_i(u)). The inverse Hadamard transform: f(x) = Σ_{u=0..N−1} H(u) (−1)^(Σ_{i=0..n−1} b_i(x)·b_i(u)), for x = 0, 1, 2, …, N−1.
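The "same kernel except for the 1/N term" relationship can be checked numerically. A small sketch using the standard Kronecker construction of the natural-order Hadamard matrix, for which H·H = N·I:

```python
import numpy as np

def hadamard_matrix(n):
    """Hadamard matrix of order N = 2**n via the Kronecker construction."""
    H = np.array([[1.0]])
    core = np.array([[1.0, 1.0], [1.0, -1.0]])
    for _ in range(n):
        H = np.kron(H, core)
    return H

N = 8
H = hadamard_matrix(3)
f = np.arange(N, dtype=float)
Hf = H @ f / N          # forward transform, with the 1/N term
f_back = H @ Hf         # inverse transform: same kernel, no 1/N term
assert np.allclose(f_back, f)
```

Since the Hadamard matrix is symmetric and H·H = N·I, applying the kernel twice and dividing by N once recovers the original signal exactly.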
13 IMAGE ENHANCEMENT. The process of image acquisition frequently leads (inadvertently) to image degradation. The principal objective of enhancement techniques is to process an image so that the result is more suitable than the original image for a specific application. Image enhancement techniques are used to increase the signal-to-noise ratio and to make certain features easier to see by modifying the colors or intensities of an image.
Slide 14:
14 The enhancement techniques fall into two categories. Spatial domain methods: techniques in this category are based on direct manipulation of pixels in an image; that is, the gray values of the pixels are directly manipulated to obtain the enhanced image. Frequency domain methods: processing techniques are based on modifying the Fourier transform of an image; that is, the image f(x,y) is Fourier transformed to F(u,v) before any modification is done.
15 BASIC IMAGE ENHANCEMENT TECHNIQUES: SPATIAL DOMAIN METHODS. The term spatial domain refers to the aggregate of pixels composing an image, and spatial domain methods are procedures that operate directly on these pixels. An image processing function in the spatial domain may be expressed as g(x,y) = T[f(x,y)], where f(x,y) is the input image, g(x,y) is the processed image, and T is an operator on f, defined over some neighborhood about (x,y).
Slide 16:
16 FREQUENCY DOMAIN METHODS: The foundation of frequency domain techniques is the convolution theorem. Let g(x,y) be an image formed by the convolution of an image f(x,y) and a linear, position-invariant operator h(x,y); that is, g(x,y) = h(x,y) * f(x,y). Then, from the convolution theorem, the following frequency domain relation holds: G(u,v) = H(u,v) F(u,v), where G, F and H are the Fourier transforms of g, f and h respectively.
Slide 17:
17 ENHANCEMENT BY POINT PROCESS: Contrast stretching. Gray-level slicing. Histogram processing. Histogram specification. Image subtraction. Image averaging.
18 ENHANCEMENT BY: Spatial filtering: mean filter, median filter, smoothing filter. Filtering in the frequency domain: low-pass filter, ideal low-pass filter, Butterworth low-pass filter, homomorphic filtering.
Slide 19:
19 APPLICATION OF FILTERS Application of median filter Application of Smoothing filter
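A median filter like the one applied above can be sketched in a few lines; this toy version (deliberately unoptimized) makes it clear why the filter removes isolated "salt" spikes:

```python
import numpy as np

def median_filter(img, size=3):
    """Slide a size x size window over img, replacing each pixel by the window median."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + size, j:j + size])
    return out

noisy = np.zeros((5, 5))
noisy[2, 2] = 255.0                 # a single "salt" spike
clean = median_filter(noisy)
assert clean[2, 2] == 0.0           # the outlier is gone
```

An isolated outlier is always outvoted by its neighbors in the window, which is why the median filter suppresses impulse noise while a mean filter would merely smear it.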
Slide 20:
20 Morphology tools such as dilation and erosion can be used in conjunction with edge detection to detect and outline a prostate cancer cell. The effect of homomorphic filtering on a noisy image.
Slide 21:
21 IMAGE COMPRESSION MODELS: F(x,y) → Source Encoder → Channel Encoder → Channel → Channel Decoder → Source Decoder → F'(x,y). SOURCE ENCODER: The source encoder is responsible for reducing or eliminating any coding, interpixel, or psychovisual redundancies in the input image. SOURCE DECODER: The source decoder contains only two components, the symbol decoder and an inverse mapper.
Slide 22:
22 ERROR-FREE COMPRESSION: VARIABLE-LENGTH CODING: Huffman coding, arithmetic coding, bit-plane coding. LOSSY COMPRESSION:
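The Huffman coding step listed above can be sketched compactly; this demo builds a code table with a heap (a simplified construction for illustration, not tied to any particular textbook's pseudocode):

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a prefix-free Huffman code table for the symbols in `data`."""
    freq = Counter(data)
    if len(freq) == 1:
        return {s: "0" for s in freq}
    # Heap entries: (total frequency, tiebreaker, {symbol: codeword-so-far})
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)   # two least-frequent subtrees
        f2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

codes = huffman_codes("aaaabbc")
# More frequent symbols get codewords no longer than rarer ones
assert len(codes["a"]) <= len(codes["b"]) <= len(codes["c"])
# Prefix-free: no codeword is a prefix of another
words = list(codes.values())
assert all(not v.startswith(w) for v in words for w in words if v != w)
```

Repeatedly merging the two least-frequent subtrees is what removes coding redundancy: frequent symbols end up near the root with short codewords.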
Slide 23:
23 IMAGE SEGMENTATION: Thresholding (fixed threshold, histogram-derived thresholds). Edge finding.
Slide 24:
24 IMAGE RESTORATION. The ultimate goal of restoration techniques is to improve an image. Restoration techniques are oriented toward modeling the degradation and applying the inverse process in order to recover the original image.
Slide 25:
25 CONCLUSION. The various aspects of image processing, their practical usage, and the steps involved in their processing have been studied. This has given a good and practical idea of using various transform techniques on images.
Slide 26:
26 THANK YOU
the definition of symmetric
Computing Dictionary
symmetric definition
1. A relation R is symmetric if, for all x and y,
x R y => y R x
If it is also antisymmetric
(x R y & y R x => x == y) then x R y => x == y, i.e. no two different elements are related.
2. In linear algebra, a member of the tensor product of a vector space with itself one or more times is symmetric if it is a fixed point of all of the linear
isomorphisms of the tensor product generated by permutations of the ordering of the copies of the vector space as factors. It is said to be antisymmetric
precisely if the action of any of these linear maps, on the given tensor, is equivalent to multiplication by the sign of the permutation in question.
Corrections in Linkage Book
Jurg Ott OTT at NYSPI.bitnet
Fri Jul 3 11:47:29 EST 1992
J. Ott 25 June 1992
Corrections and clarifications to
"Analysis of Human Genetic Linkage"
Below, the currently known corrections to the revised
edition of this book (1991, Johns Hopkins University Press,
Baltimore) are listed. I am grateful to the readers who made me
aware of errors and inaccuracies.
In this e-mail version of the list of corrections, mathematical
and other non-ASCII characters are given in the syntax of
the WordPerfect equation editor (version 5.1).
Page 14, line 4 up: Assumption (2) is sufficient for that
statement; (2) implies (1).
Page 18, line 8: Replace (1.3) by (1.2).
Page 38, Problem 2.2: Replace 200 cM by 100 cM.
Page 44, line 8 below table 3.1 should read: "Generally,
for phase known data, if T=k/n is the value of...". Also, line
12 should read: "Since T is unbiased, ..."
Page 47, lines 5-8: These two sentences are clearer when
worded as follows: "Consider now our previous hypothetical
example of one recombinant and four nonrecombinants and test
H_0:theta=1/2 against H_1:theta=0.1. For these data, the
likelihood ratio is calculated as T_{obs}=[0.1 times(0.9)^4]
Page 48, line 16: Replace A approx (1-beta) by A approx
Page 59, line 3 from the bottom: ..., P(0 <= theta < 1/2) =
1/22, ...
Page 60, lines 6 and 7 should read: The ith segment
(i = 1..s), of length b_i, then contains the likelihood ratio,
L^*(theta_i), where b_1 = 1/2 (theta_2 + theta_1),
b_i = 1/2 (theta_{i+1} + theta_i) - 1/2 (theta_i + theta_{i-1}) =
1/2 (theta_{i+1} - theta_{i-1}),
b_s = 0.5 - 1/2 (theta_s + theta_{s-1}); SUM b_i = 0.5.
Line 17 should read: 52.672, resulting in a value of 0.71
for Smith's (1959) posterior...
Table 4.1: The values of b_i for i=1 (now 0.025) and i=2
(now 0.050) should be 0.030 and 0.045, respectively. This way,
they are consistent with the definition of the b_i's further up
on page 60.
Page 34, lines 17 and 18 up are clearer when formulated as
follows: "... often used before linkage analysis as a preliminary
test of paternity."
Page 45, lines 12 and 13 should be phrased more exactly as
follows: "..., which allows the calculation of approximate
confidence intervals from asymptotic variances... ."
Page 68, last line before section 4.5: Replace 11.7 by
Page 74, line 3: Replace Z(theta hat) and Z(theta hat_f) by
Z_1(theta hat_m) and Z_2(theta hat_f).
Page 75, line 5: Replace (1-alpha_1)^n by (1-alpha_1)^g.
Page 92, line 3: Replace A1 by A2.
Page 93, table 5.3, line i=4: Replace AB-22 by AB-11.
Page 101, after equation (5.15): Replace 1/[n times i(r)]
by 1/[n times i(r)]^{1/2}.
Page 101, line 6 in section 5.9 should read: "type 1 is a
recombinant under one of the parental phases (phase I, say) but a
nonrecombinant under the other, ..."
Page 117, lines 21-23: The last sentence in this paragraph
should read: The second child has genotype 121/222 or 122/221,
each of which requires at least one recombination in the father
or the mother.
Page 137, first line should be: ..between the loci C and D.
Page 139, Table 6.10, line R: Replace "444 theta_{BC}" by
"444 theta_{AB}".
Page 148, line 11: Replace f_{dd} by f_{DD}.
Page 149, table 7.1, line d1/d1: replace 1/2 by {1/2}r for
P(g;r) (as on the line above it).
Page 216, Problem 9.2, line 2: Replace "table 9.6" by
"table 9.7".
Page 250: The last sentence of the top paragraph contains a
typo: -2 should be 2, and Z(alpha,x) was not defined. For better
clarity, the last two sentences in that paragraph should read:
"In practice, this means that one evaluates Z(alpha hat,x) at
each map position, x, where Z(alpha,x) is analogous to (9.9) with
theta_1 replaced by x, and alpha hat is determined by the maximum
of Z(alpha,x) at the given x value. Only those points x are then
excluded for which Z(alpha hat,x)<2 and Z(x)<-2, where Z(x) is
the lod score under homogeneity."
Page 268, Solution 9.2, line 2: Replace "table 9.6" by
"table 9.7".
Page 270, line 1: Replace 1/3 by 2/3. Line 3: Replace
"with that mutation" by "without that mutation".
Page 279, ref. Hall et al. (1990): Replace "Anserson" by
Page 294, line 2 up should read: "...tetraploid..."
Page 302, Support interval: Replace 110 by 55.
parse truly large numbers to binary
Hi, there
I am trying to convert the integers from 0 up to some large value (say, pow(a,c)) into base-a digits, where pow(a,c), a and c could be large; say a=3, c=100, so the result could easily exceed the range of an unsigned long.
What I am looking for is: say a=3, c=3, so I'd deal with 0-26 and convert them into base-3 digits (sorry if I call it by the wrong name). Let's pick 11 (in DEC); I want to convert it to 102 (1*3^2 + 0*3^1 + 2*3^0)
and store it in an int array, s.t.
bit[0] = 2
bit[1] = 0
bit[2] = 1
I know that if pow(a,c) < 2^64-1 I can pick off each digit's value with modulo and division, but can anyone help if pow(a,c) > 2^64-1? I don't even know how to deal with it, since the value cannot fit at
all... Actually I do not care what the large number is; all I want is each digit's value (positions 0 to c-1) in that range.
Thanks in advance!
thanks !
could you please explain more? I went through the intro and it seems like some free optimized code, so all I do is install it and find the proper func and run it? Is that simple?
Originally Posted by Salem
It's a library to do maths with very large numbers.
I can't really say much more because I can't really see what you're trying to achieve.
Plainsboro Math Tutor
...Dan: Discrete math is often called "finite mathematics". It does not deal with the real numbers and their continuity. I have studied discrete math as I obtained my BS in mathematics from Ohio
14 Subjects: including algebra 1, algebra 2, calculus, geometry
...It is a very useful and powerful tool and I enjoy using and teaching it. I am a chemist and teacher. I've often helped my chemistry students learn how to use Word more proficiently.
10 Subjects: including algebra 1, algebra 2, prealgebra, GED
I am an experienced high school teacher who has spent her entire educational career in one place. And I love it. Teaching is a second career for me.
9 Subjects: including algebra 2, prealgebra, algebra 1, chemistry
...I have been tutoring and teaching since I was in high school myself because I love doing it! During and after earning a Master of Science in mathematics, I spent 8 years teaching at the
post-secondary level in universities and community colleges. I have also worked with middle and high school students.
10 Subjects: including algebra 1, algebra 2, calculus, geometry
...I have been tutoring elementary, junior high, and high school students in math for over a year now. I have prepared students for integrated algebra and geometry regents. As of now, I am
tutoring junior high students for SHSAT and a sophomore for PSAT.
15 Subjects: including calculus, trigonometry, algebra 1, algebra 2
San Ramon Algebra 1 Tutor
Find a San Ramon Algebra 1 Tutor
...Courses that I have taught include Conceptual Physics, General Physics, Honors Physics, Advanced Placement Physics (both Calculus and non-Calculus based), Advanced Topics in Physics, Astronomy,
Earth Science/Geoscience, Meteorology, Oceanography, Algebra Readiness, Algebra I, Algebra II and Preca...
32 Subjects: including algebra 1, reading, ACT Math, elementary math
...I also have years of experience in Yoga as a form of physical exercise that connects body postures with the depth of understanding of the mind. I can help students who want to do more in depth
research of the philosophical issues dealt with in Yoga philosophy. I have a BA in Anthropology Magna ...
73 Subjects: including algebra 1, reading, Spanish, English
...I had students who needed help in pre-algebra, algebra, trigonometry, geometry, precalculus, calculus, differential equations and linear algebra. I also have extensive tutoring experience in
high school physics and below. Moreover, I got qualified over the last two years of my higher education to be grading papers from different lower division astronomy classes.
10 Subjects: including algebra 1, physics, geometry, algebra 2
...Being a math tutor for more than 20 years, I am committed to taking the important responsibility of providing my students with extra practice, guidance, and personal encouragement. I also
nurture my students' self-confidence. With support, my student(s) will be kept interested in learning Geometry concepts and excel in this class.
17 Subjects: including algebra 1, calculus, statistics, geometry
...I had to take 8 Bible classes during my four years there. I also am a student of the Bible and teach a children's Bible based Sunday school each week. I currently teach at Heritage Baptist
11 Subjects: including algebra 1, geometry, ASVAB, GED
NA Digest
NA Digest Sunday, October 11, 1998 Volume 98 : Issue 38
Today's Editor:
Cleve Moler
The MathWorks, Inc.
Submissions for NA Digest:
Mail to na.digest@na-net.ornl.gov.
Information about NA-NET:
Mail to na.help@na-net.ornl.gov.
URL for the World Wide Web: http://www.netlib.org/na-net/na_home.html
From: Elias A. Lipitakis <eal@aueb.gr>
Date: Thu, 8 Oct 1998 15:47:17 +0300 (EET DST)
Subject: Honorary Degree for B. B. Mandelbrot
It is with great pleasure that we inform you that Professor Benoit B. Mandelbrot,
Yale University, New Haven - IBM/T.J. Watson Research Center, NY,
received an Honorary Doctorate of Science from the Department of
Informatics of the Athens University of Economics and Business
on September 28, 1998. The citation for the degree is as follows:
"for pioneering work and fundamental contributions to Mathematics
and Computer Graphics, and their significant application in
Economic, Physical and Social Sciences"
Please join us in congratulating Professor B.B. Mandelbrot on his seminal
contributions to Sciences.
Professor Elias A. Lipitakis
From: Frank D Uhlig <uhligfd@mail.auburn.edu>
Date: Tue, 6 Oct 1998 13:29:23 -0500 (CDT)
Subject: Symmetric Models
Generally symmetric matrices are introduced in elementary math courses as
representing "symmetric phenomena" in the physical sciences, in economics
etc. Just look at the symmetry of the human body, ...
Having scratched my mathematician's brain for real world examples of such
processes (that can easily be presented to a sophomore class), I could only
think of discretizations of DEs via central differences, leading to
symmetric systems of linear equations.
Further search in a few dozen elementary Linear Algebra textbooks brought
up the above phrase on the importance of symmetric matrices a few times,
yet without any substantial concrete examples for their use.
Surely I must be brain dead or blind: what are, where can I find a couple
of significant elementary examples in physics, chemistry, ... that lead to
symmetric matrices in their description, both as linear systems of
equations and as eigenvalue problems. And how does a symmetric physical
problem interact with its symmetric matrix description? With an
unsymmetric one? Advantages of one over the other?
Thank you very much,
Frank Uhlig
Department of Mathematics
Auburn University
Auburn, AL 36849 - 5310
From: Henry Wolkowicz <hwolkowi@orion.math.uwaterloo.ca>
Date: Wed, 7 Oct 1998 14:19:43 -0400 (EDT)
Subject: Semidefinite Relaxations in a Branch and Bound Strategy
We would like to use the Matlab package SDPpack (or other packages)
in a Branch&Bound strategy, to solve a mixed integer-and-continuous
program. In particular, we would like to use semidefinite programming
relaxations; solve them with SDPpack and add a B&B strategy.
Is anyone doing this, or something similar, yet? Is there any available
software? We would like to discuss formulations and implementations and,
if possible, share results.
We will share references obtained with all who are interested.
Thank you in advance for any help.
Henry Wolkowicz
frossell@orion.math.uwaterloo.ca and/or
From: Billy Stewart <billy@eos.ncsu.edu>
Date: Tue, 6 Oct 1998 14:32:50 -0400
Subject: Numerical Solution of Markov Chains
The Third International Meeting on the
Numerical Solution of Markov Chains
will take place at the
Centro Politécnico Superior,
Universidad de Zaragoza
Zaragoza, Spain
on September 6-10, 1999
Joint Chairs for this meeting are
Brigitte Plateau (France) and Billy Stewart (USA)
This meeting will be held jointly with the Workshops on
Petri Nets and Performance Models (PNPM) and
Process Algebra and Performance Modelling (PAPM)
Papers are solicited on all aspects of the numerical solution of
Markov chains. Both theoretical and practical contributions are
welcome. Work in progress and poster sessions may be organized in
addition to regular sessions.
A non-exhaustive list of possible topics is available on the WWW
at the URL: http://www-apache.imag.fr/~plateau/nsmc/
The following is a list of important dates:
* Submission deadline for papers is February 10, 1999
* Authors notifications will be sent on April 25, 1999
* Camera ready versions of accepted papers are due on June 10, 1999
* Tutorials and short one-day workshops be held on 6-7 September 1999
* The meeting will place from 8th to 10th of September 1999.
Papers should be written in English and should not exceed 20 double-spaced
pages, excluding figures and tables. Papers must be unpublished and must
not be submitted for publication elsewhere.
Please send
* an electronic postscript version
* and a single hard copy version
to Brigitte Plateau at the following address.
Brigitte Plateau
100 rue des Mathematiques
BP 53 --- Campus Universitaire
38041 Grenoble cedex 9
Email: Brigitte.Plateau@imag.fr
From: Wayne Mastin <mastin@nrcpet1.wes.hpc.mil>
Date: Wed, 7 Oct 1998 10:49:14 -0500
Subject: Workshop on Computational Structural Mechanics
November 3-4, 1998
U.S. Army Engineer Waterways Experiment Station
Vicksburg, Mississippi
Sponsored by the U.S. Army Engineer Waterways Experiment Station (CEWES),
Defense Special Weapons Agency (DSWA), and Army Research Office (ARO)
in partnership with the DoD HPC Modernization Program Major Shared
Resource Center (MSRC) at CEWES
Theme: Multiphysics/interdisciplinary large-scale applications in
computational structural mechanics
The workshop will bring together users, developers and researchers to
present the latest theoretical and computational developments and the
applications in addressing challenges simulating practical problems in
computational structural mechanics (CSM). Topics of interest include:
three-dimensional coupled problems (blast-medium interaction,
soil-medium interaction, etc.), multi-scale in time and space (blast
initiation, propagation, structural assessment, etc.), scalable algorithm=
in CSM, scalability issues, pre-processing and visualization of large dat=
sets, benchmarking and validation, and large-scale applications. A
special session will focus on the status of simulating real-world
problems in CSM.
Invited Speakers: Dr. Robert Whalin, Director, CEWES; Dr. George Ullrich,
Deputy Director, DSWA; Mr. C. B. McFarland Jr., DSWA; Dr. N. Radhakrishnan,
CEWES; Dr. J. Shang, Air Force Research Laboratory; Prof. J. T. Oden,
University of Texas; Prof. T. Belytschko, Northwestern University; Prof.
M. Ortiz, California Institute of Technology; Prof. Joseph E. Flaherty,
Rensselaer Polytechnic Institute; Prof. James Glimm, SUNY Stony Brook;
Prof. Tom Geers, University of Colorado; Prof. Graham Carey, University of
Texas; Dr. L. Taylor, Sandia National Laboratory; Dr. Tim Trucano,
Sandia National Laboratory; Dr. P. Raboin, Lawrence Livermore National
Laboratory; Dr. R. Couch, Lawrence Livermore National Laboratory;
Dr. J. Baum, SAIC; Dr. M. Ito, TRT; Dr. H. Levine, Weidlinger Associates;
Mr. C. Charman, General Atomics; Mr. K. Kimsey, Army Research Laboratory;
Dr. Andrew Mark, Army Research Laboratory; Dr. David Horner, CEWES;
Dr. M. Emery, Naval Research Laboratory; and Dr. R. Namburu, CEWES
Information: The workshop program and updated information will appear
on the web at http://www.wes.hpc.mil/msrc/training/f_cewes.html.
Travel information, including a map of CEWES, directions, and lodging,
may be found at http://www.wes.army.mil/WES/welcome.html. For
further information, contact Dr. Wayne Mastin, Nichols Research, Phone:
(601) 634-3063, Email: mastin@nrcpet1.wes.hpc.mil or Dr. Raju
Namburu, CEWES, Phone:(601) 634-3811, Email:
Registration: Advance registration required. To register, contact
the CEWES MSRC Customer Assistance Center at 1-800-500-4722 or
info-hpc@wes.hpc.mil and mention the CSM workshop, or register on the
web at http://www.wes.hpc.mil/msrc/training/registration/reg_form.html.
Foreign nationals should request information about security requirements
necessary for entering CEWES. A $40 registration fee will be charged
to cover the workshop banquet and refreshments during the two days.
From: Nicolette Goodwin <n.goodwin@auckland.ac.nz>
Date: Fri, 9 Oct 1998 16:00:06 +1300
Subject: Symposium in Honour of John Butcher
Symposium to mark the retirement of John Butcher
At the end of 1998, John retires from his position at the
University of Auckland which he has held for 33 years.
To mark the occasion, the Department of Mathematics
is organising a symposium. Details are
14 - 16 December 1998
University of Auckland
Auckland, New Zealand
The following have been invited to speak at this symposium:
Alan Feldstein, Tempe
Joe Flaherty, Troy
Arieh Iserles, Cambridge
Zdzislaw Jackiewicz, Tempe
Gaven Martin, Auckland
Ander Murua, Donostia (San Sebastian)
Reinout Quispel, Melbourne
Manfred Trummer, Vancouver
Gerhard Wanner, Geneva
There will be a limited scope for additional lectures and anyone
interested in presenting a lecture is urged to
contact a member of the organising committee as soon
as possible. The members of the organising committee are
Robert Chan (Chair) chan@math.auckland.ac.nz
Marston Conder conder@math.auckland.ac.nz
Nicolette Goodwin goodwin@math.auckland.ac.nz
Bev Grove grove@math.auckland.ac.nz
Allison Heard heard@math.auckland.ac.nz
Further information is available on the symposium
web page, which is still being developed.
This has a link to an on-line registration form.
From: Michaela Schulze <sigopt99@msun9.uni-trier.de>
Date: Fri, 9 Oct 1998 17:15:46 +0200
Subject: International Conference on Optimization
International Conference on Optimization
organized by
Special Interest Group in Optimization
of the
Deutsche Mathematiker Vereinigung (DMV)
March 22-24, 1999
at the University of Trier, Germany
SIGOPT provides a forum for discussing current and future developments
in a broad variety of disciplines associated with optimization, and actively
supports interdisciplinary research and applications to industry. In
particular, SIGOPT encourages students and younger scientists to become
involved in research in optimization.
Program Committee: U. Rieder (Ulm)
E. Sachs (Trier)
U. Zimmermann (Braunschweig)
Local Organizers: E. Sachs (chair)
R. Horst
R. Tichatschke
Invited presentations:
William J. Cook, Houston
John E. Dennis, Houston
Ruediger Schultz, Duisburg
Contributed talks are invited in the areas of continuous, discrete and
stochastic optimization. The program committee also encourages proposals for
minisymposia in these areas.
Important deadlines
registration (reduced fee) Jan. 31, 1999
titles and abstracts for contributed talks Jan. 31, 1999
Information / Registration:
Electronic registration via www is preferred. Please find further information
and an electronic registration form on
To contact us by e-mail, please use the address
Postal address:
Ekkehard W. Sachs
Department of Mathematics
University of Trier
D-54286 Trier
Phone: ++49 651 - 201 3474
Fax: ++49 651 - 201 3973
From: C. T. H. Baker <cthbaker@ma.man.ac.uk>
Date: Sat, 10 Oct 1998 16:16:07 +0100 (BST)
Subject: Research Position at UMIST and Manchester University
Researchers with a PhD who would be interested in working with
Christopher Baker (Manchester University) and Ruth Thomas (UMIST,
Manchester) on a project in evolutionary functional differential
equations (delay and Volterra integro- differential equations) are
asked to email us both
cthbaker@ma.man.ac.uk, rmt@lanczos.ma.umist.ac.uk
with a short curriculum vitae and a note indicating interests and
availability. The E-mail should please have
Baker/Thomas Research position
in the Subject line.
The funding is expected to cover about one year's employment on the
basic RA scale (details are negotiable); someone who is not a member
of the European Community would require a visa and work permit, so
nationality is an important detail. Ideally, we look for someone who
might start as soon as possible.
The research team is part of the inter-institutional MANCHESTER CENTRE
FOR COMPUTATIONAL MATHEMATICS, which has a group of 5 staff and 4 PhD
students interested in this area, in additional to numerical analysts
in other areas (ODEs, PDEs, Numerical Linear Algebra, Parallel
Computing, etc.).
Professor Christopher T H Baker
Fax: +44 161 275 5819 (Int); 0161 275 5819 (UK)
From: Zhaojun Bai <bai@ms.uky.edu>
Date: Tue, 6 Oct 1998 12:05:40 -0400 (EDT)
Subject: Postdoctoral Position at University of Kentucky
An immediate opening is available for a post-doctoral
research associate in the area of Large Eddy Simulation (LES)
for environmental flows. The successful candidate will perform
state-of-the-art computational fluid dynamics research using LES
and finite element methods. Applicants with a Ph.D. in the areas
of computational fluid dynamics, LES and parallel computing
are encouraged to apply. Experience in finite element methods
and domain decomposition is preferred. This position would commence
immediately and is available for two years. The research is supported
by the US Environmental Protection Agency. Applicants should send
an email to Prof. Tate T.H. Tsang (tsang@engr.uky.edu, Dept. of
Chemical and Materials Engineering, University of Kentucky,
Lexington, KY 40506-0046) including a brief CV, publication list,
the names of at least three references (with their (email) addresses,
telephone and fax numbers).
University of Kentucky is an Affirmative Action/Equal Opportunity Employer.
From: Candy Ellis <candi@math.lsa.umich.edu>
Date: Tue, 6 Oct 1998 13:20:26 -0400 (EDT)
Subject: Faculty Positions at University of Michigan
The University of Michigan, Department of Mathematics has several openings
at the tenure-track or tenure level. We invite applications or inquiries
from all interested parties. Candidates for these positions should hold
the Ph.D. in mathematics or a related field, and should show outstanding
promise and/or accomplishments in both research and teaching (commensurate
with years past receipt of the Ph.D.). Areas of special need for us this
year are: Applied mathematics, probability, analysis, topology/ geometry
and actuarial mathematics, although any area of pure or applied
mathematics is of possible interest to us, and we encourage inquiries.
Salaries are competitive, based on candidate's credentials. Send
applications materials (cv, bibliography, research statement, teaching
statement) to: Personnel Committee, University of Michigan, Department of
Mathematics, 2074 East Hall, Ann Arbor MI 48109-1109
Information regarding available positions is also on our web-page:
The University of Michigan is an equal opportunity, affirmative action employer.
From: Rick Miranda <miranda@math.colostate.edu>
Date: Tue, 06 Oct 1998 15:25:48 -0600
Subject: Faculty Positions at Colorado State University
The Department of Mathematics at Colorado State University
invites applications for three regular tenure-track faculty
positions and one postdoctoral position beginning Fall of 1999.
The appointment level for the faculty positions is open, but preference
will be given to candidates at the Assistant Professor level. The
individuals appointed must hold a Ph.D. at the time of
appointment and be capable of fulfilling the highest expectations in
research and in teaching. The Department currently has areas of
strength in both applied/computational and pure mathematics, including
dynamical systems, numerical analysis, optimization, partial differential
equations, pattern analysis, algebra, algebraic geometry/topology,
combinatorics, and analysis. While our primary needs are in algebraic
geometry, numerical partial differential equations, and optimization,
exceptional candidates in other areas of interest may also be considered.
A one-semester visiting professorship is also being offered for
Applicants should submit a complete curriculum vita and a summary
of future research plans; evidence of strong teaching credentials is also
desired. Applicants should also arrange for at least three letters of
recommendation to be sent on their behalf to:
Faculty Hiring Committee
Department of Mathematics
Colorado State University
Fort Collins, CO 80523-1874.
Applications received by January 15, 1999, will receive full consideration,
but screening will continue until the positions are filled. A job
description can be found at http://www.math.colostate.edu/jobs.html .
Colorado State University is an EEO/AA employer.
From: So-Hsiang Chou <chou@zeus.bgsu.edu>
Date: Wed, 7 Oct 1998 15:50:13 -0400
Subject: Faculty Position at Bowling Green State University
Bowling Green State University, Bowling Green, OH
Position Announcement
The Department of Mathematics & Statistics at Bowling Green State University
invites applications for a tenure-track position at the Assistant Professor
rank in the area of Applied Mathematics starting August, 1999.
We are searching for a candidate who has a broad interest in applied
mathematics with a preferred emphasis in computational mathematics.
Usual duties consist of teaching two courses each semester, conducting scholarly
research and participating in service activities. A candidate for this
position will have a doctorate in mathematics, be committed to outstanding
teaching and interaction with students at all levels of undergraduate and graduate
study, and be able to demonstrate an exceptional potential for research.
BGSU is an AA/EEO employer and strongly encourages applications from women,
minorities, veterans, and persons with disabilities. To apply send a cover
sheet (AMS Cover Sheet preferred), curriculum vitae, three current letters
of reference (one
addressing teaching), and a transcript showing the highest degree to
Search Committee
Department of Mathematics & Statistics
Bowling Green State University
Bowling Green, Ohio 43403-0221
Email: math-stat@bgnet.bgsu.edu
Phone: (419) 372-2636
Deadline for applications is January 15, 1999.
For more details, visit our website at
From: Bjorn Sjogreen <bjorns@nada.kth.se>
Date: Thu, 08 Oct 1998 16:03:20 +0200
Subject: Postdoctoral Position at Swedish Royal Institute of Technology
The department of numerical analysis and computing sciences offers
a post-doctoral position, within the framework of the TMR network
"viscosity solutions and their applications"
Requirements: Citizen of a European Union member state other
than Sweden, or of a state associated with the TMR program
( Iceland, Liechtenstein, Norway, Israel ). Less than 35 years of age.
Recent holder of a PhD in applied mathematics, numerical analysis,
or related field.
Ongoing projects at the department, related to viscosity solutions
include: numerical simulation of dendritic solidification using phase
field models, adaptive numerical methods for stochastic differential
equations, numerical simulation of advancing fronts in combustible fluids.
We encourage applications from anyone with a research interest
related to viscosity solutions, and numerical analysis of
non-linear partial differential equations. More applied
research such as computational fluid dynamics might also be considered.
Professor Bjorn Engquist is the scientific director of the research
in numerical analysis at the department.
Senior researchers include: G. Kreiss, B. Sjogreen, A. Szepessy.
High performance computing can be done on the machines of
the nearby center for parallel computers (PDC).
Applicants are asked to submit a curriculum vitae with a list of
publications, a research proposal, and two letters of recommendation
before November 6th, to the address below:
Pernilla Östlund
100 44 Stockholm
Further questions can be answered by:
Bjorn Sjogreen
Pernilla Östlund
From: Christoph Borgers <borgers@math.tufts.edu>
Date: Thu, 8 Oct 1998 11:05:50 -0400 (EDT)
Subject: Faculty Position at Tufts University
Department of Mathematics
Tufts University
Applications are invited for a tenure-track Assistant Professorship
to begin September 1, 1999. Applicants must show promise of outstanding
research with specialization in numerical methods for inverse problems
or numerical methods for partial differential equations, and excellent
teaching. The teaching load will be two courses per semester.
We are building a group in applied mathematics to work together and with
other units in the university. Preference will be given to candidates who
show promise of research interaction with members of our department and
other departments at Tufts University.
Applicants should send a curriculum vitae and have three letters of
recommendation sent to Christoph Borgers, Search Committee Chair,
Department of Mathematics, Tufts University, Medford, MA 02155.
Review of applications will begin January 20, 1999 and continue until
the position is filled.
Tufts University is an Affirmative Action/Equal Opportunity employer.
We are committed to increasing the diversity of our faculty. Members
of underrepresented groups are strongly encouraged to apply.
From: Jan Griffin <griffin@mcs.anl.gov>
Date: Thu, 08 Oct 1998 16:31:00 -0500
Subject: Postdoctoral Position at Argonne National Laboratory
Mathematics and Computer Science Division
Argonne National Laboratory
Argonne National Laboratory invites outstanding candidates to apply for a
postdoctoral research position with the ALICE project in the Mathematics
and Computer Science Division. Candidates should have a Ph.D. in
mathematics, computer science, or a related discipline. Knowledge of
high-performance computing methodology is required, particularly in the
areas of the design and implementation of high-performance numerical software.
The successful candidate will participate in a project involving the
development of high-performance sparse matrix software, with particular
attention to achieving a higher fraction of peak performance through a
combination of new algorithms, data structures, and code generation
techniques. The project requires the ability to work in an
interdisciplinary research environment. Information on the ALICE project
can be found at Website http://www.mcs.anl.gov/alice.
The Mathematics and Computer Science Division has a vigorous research
program in applied mathematics and computer science. The computational
environment includes scalable parallel computers, a distributed systems
laboratory, and a virtual environments laboratory. For further
information, see http://www.mcs.anl.gov/.
Argonne is located in the southwestern Chicago suburbs offering the
advantages of affordable housing and good schools, as well as easy access
to the cultural attractions of the city.
Applicants must have received their Ph.D. not more than three years prior
to the beginning of the appointment. The appointment is available
immediately and for a one-year term (renewable). Applications should be
addressed to Walter McFall, Box mcs-127674, Employment and Placement,
Argonne National Laboratory, 9700 S. Cass Avenue, Argonne, IL 60439, and
must include a resume and the names and addresses of three references. To
submit resumes electronically, please send e-mail to griffin@mcs.anl.gov.
For further information, contact Bill Gropp at gropp@mcs.anl.gov.
Argonne is an affirmative action/equal opportunity employer.
From: Jan Griffin <griffin@mcs.anl.gov>
Date: Thu, 08 Oct 1998 16:29:08 -0500
Subject: Research Position at Argonne National Laboratory
Mathematics and Computer Science Division
Argonne National Laboratory
Argonne National Laboratory invites applications for a research position in
the Mathematics and Computer Science Division. The successful candidate
will develop methods and build computational tools for the solution of
computational science problems on scalable parallel computers and will
perform research in the area of numerical methods and software.
Candidates must have a Ph.D. in mathematics, computer science, or a related
discipline. Considerable knowledge of analytical and numerical methods,
principles of advanced scientific computing, software design, and software
methodology for high-performance computing is required.
The Mathematics and Computer Science Division has a vigorous research
program in applied mathematics and computer science, with numerous
opportunities for interactions with scientists from other disciplines. The
computational environment includes scalable parallel computers, a
distributed supercomputing laboratory, and a virtual environments
laboratory. For further information, see http://www.mcs.anl.gov/.
Argonne is located in the southwestern Chicago suburbs, offering the
advantages of affordable housing and easy access to the cultural
attractions of the city.
Resumes should be addressed to Walter McFall, Box mcs-127726, Employment
and Placement, Argonne National Laboratory, 9700 S. Cass Avenue, Argonne,
IL 60439, and must include the names and addresses of three references.
The position is available immediately; applications will be accepted until
the position is filled. Applications may be submitted electronically.
Argonne National Laboratory is an affirmative action/equal opportunity employer.
From: Soon Chul Park <scp@math.ufl.edu>
Date: Mon, 5 Oct 1998 11:07:17 -0400 (EDT)
Subject: Contents, Computational Optimization and Applications
Table of Contents for Volume 10
Volume 10, Issue 1, April 1998
Masao Fukushima, Zhi-Quan Luo, Jong-Shi Pang : A Globally Convergent
Sequential Quadratic Programming Algorithm for Mathematical Programs
with Linear Complementarity Constraints
pp. 5-34
Dao Li Zhu, Patrice Marcotte : Convergence Properties of Feasible
Descent Methods for Solving Variational Inequalities in Banach Spaces
pp. 35-49
Renato D.C. Monteiro, Fangjun Zou : On the Existence and Convergence
of the Central Path for Convex Programming and Some Duality Results
pp. 51-77
Hiroshi Yabe, Hideho Ogasawara : Quadratic and Superlinear Convergence
of the Huschens Method for Nonlinear Least Squares Problems
pp. 79-103
Volume 10, Issue 2, May 1998
Jens Clausen, Stefan E. Karisch, Michael Perregaard, Franz Rendl :
On the Applicability of Lower Bounds for Solving Rectilinear
Quadratic Assignment Problems in Parallel
pp. 127-147
Nguyen Van Thoai : Global Optimization Techniques for Solving
the General Quadratic Integer Programming Problem
pp. 149-163
J.M. Belenguer, E. Benavent : The Capacitated Arc Routing Problem
(Valid Inequalities and Facets)
pp. 165-187
Egon Balas : Projection with a Minimal System of Inequalities
pp. 189-193
Mohamad Akra, Louay Bazzi : On the Solution of Linear Recurrence Equations
pp. 195-210
Volume 10, Issue 3, July 1998
Yuying Li : A Newton Acceleration of the Weiszfeld Algorithm for
Minimizing the Sum of Euclidean Distances
pp. 219-242
Erling D. Andersen, Yinyu Ye : A Computational Study of the Homogeneous
Algorithm for Large-scale Convex Optimization
pp. 243-269
Artur Swietanowski : A New Steepest Edge Approximation for the
Simplex Method for Linear Programming
pp. 271-281
Zhi Wang, K. Droegemeier, L. White : The Adjoint Newton Algorithm
for Large-Scale Unconstrained Optimization in Meteorology Applications
pp. 283-320
From: Siberian Journal of Numerical Mathematics <sibjnm@oapmg.sscc.ru>
Date: Fri, 9 Oct 1998 13:16:08 +0600
Subject: Contents, Siberian Journal of Numerical Mathematics
CONTENTS, Siberian Journal of Numerical Mathematics
Volume 1, No. 3 (July 1998)
For information to contributors and about subscriptions
see http://www.sscc.ru/SibJNM/
I.A. Blatov
On incomplete factorization for the fast Fourier transform
for the discrete Poisson equation in a curvilinear boundary domain
(in Russian) pp. 197-216
L.V. Gilyova
A cascadic multigrid algorithm in the finite element method
for the three-dimensional Dirichlet problem
(in Russian) pp. 217-226
V.A. Debelov, A.M. Matsokin, and S.A. Upol'nikov
Subdivision of a plane and set operations on domains
(in Russian) pp. 227-247
A.I. Zadorin
Numerical solution of the equation with a small parameter
and a point source on the infinite interval
(in Russian) pp. 249-260
B.G. Mikhailenko and O.N. Soboleva
Absorbing boundary conditions for the elastic theory equations
(in Russian) pp. 261-269
V.F. Raputa, A.I. Krylova, and G.A. Platov
Inverse problem for estimating the total emission
for the nonstationary boundary layer of the Atmosphere
(in Russian) pp. 271-279
G.I. Shishkin
Grid approximations of singularly perturbed systems
for parabolic convection-diffusion equations with counterflow
pp. 281-297
CONTENTS, Siberian Journal of Numerical Mathematics
Volume 1, No. 4 (October 1998)
For information to contributors and about subscriptions
see http://www.sscc.ru/SibJNM/
On the anniversary of Anatoly Semenovich Alekseev
(in Russian) pp. 299-300
V.A. Vasilenko and A.V. Elyseev
Abstract splines with the tension as the functions
of parameters in energy operator (in Russian) pp. 301-311
A.V. Gavrilov
On best quadrature formulas in the reproducing kernel
Hilbert space (in Russian) pp. 313-320
V.P. Il'in and K.Yu. Laevsky
On incomplete factorization methods with generalized
compensation (in Russian) pp. 321-336
O.A. Klimenko
Stability of an inverse problem for transport equation
with discrete data
pp. 337-345
Yu.M. Laevsky and O.V. Rudenko
On the locally one-dimensional schemes for solving the third
boundary value parabolic problems in nonrectangular domains
(in Russian) pp. 347-362
V.A. Leus
On the differentially conditioned function generating based
on degree potentials (in Russian) pp. 363-371
A.I. Rozhenko
Spline approximation in tensor product spaces
(in Russian) pp. 373-390
V.V. Smelov
On completeness of hemispherical harmonics system
(in Russian) pp. 391-395
Author Index of Volume 1 (in Russian)
pp. 397-398
Author Index of Volume 1
pp. 399-400
End of NA Digest
Evaluating Functions
September 15th 2011, 02:40 PM #1
Evaluating Functions
Hi everyone,
I'm having a hard time with a math problem. It gets too messy for me.
The given function is: F(t)= 4/t-3
Asks to compute and simplify: (a) F(3y+4 / y) and (b) F(3y+4) / F(y)
Can somebody please help me with this?!
Thanks in advance!
**(attached is a .doc with the correct math notation of the functions.)
Re: Evaluating Functions
Hi everyone,
I'm having a hard time with a math problem. It gets too messy for me.
The given function is: F(t)= 4/(t-3)
Asks to compute and simplify: (a) F[(3y+4) / y] and (b) F(3y+4) / F(y)
Can somebody please help me with this?!
Thanks in advance!
**(attached is a .doc with the correct math notation of the functions.)
note ... please use grouping symbols.
$F(t) = \frac{4}{t-3}$
$F\left(\frac{3y+4}{y}\right) = \frac{4}{\frac{3y+4}{y} - 3} = \frac{4y}{(3y+4) - 3y} = \frac{4y}{4} = y$
now ... you try (b)
Re: Evaluating Functions
should be 3y+1 in the denominator
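Both simplifications in this thread can be double-checked symbolically. A quick sketch using the third-party sympy library (assumed installed):

```python
import sympy as sp

y = sp.symbols('y')

def F(t):
    # The function from the thread: F(t) = 4 / (t - 3)
    return 4 / (t - 3)

# Part (a): F((3y + 4)/y) = 4 / ((3y + 4)/y - 3) = 4y / ((3y + 4) - 3y) = y
part_a = sp.simplify(F((3*y + 4) / y))

# Part (b): F(3y + 4) / F(y) = [4/(3y + 1)] / [4/(y - 3)] = (y - 3)/(3y + 1)
part_b = sp.simplify(F(3*y + 4) / F(y))

print(part_a)  # reduces to y
print(part_b)  # reduces to (y - 3)/(3*y + 1)
```

This agrees with the replies above: part (a) collapses to y, and part (b) does indeed have 3y + 1 in the denominator.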
MAT 090: STRATEGIES FOR SUCCESS IN MATH
MAT090 is designed to provide students with the tools they need to achieve a higher level of success in their entry level mathematics courses. Students who have fully participated in but have been
unsuccessful in 0-level math courses should take this course. The course is designed to help students understand and learn the skills that are required to be successful in mathematics. Students will
learn to be active rather than passive participants in the learning process. Students will work individually and collaboratively throughout the course. Co-requisite: at least one
MAT course.
Credits: 1
Type: Lecture
Attributes: Remedial Math, Remedial
Department: Math, Physical & Computer Sci
All sections for this course: Spring 2014 Summer 2014 Fall 2014
MAT 091: BEGINNING ALGEBRA
Beginning Algebra is intended for students who need a foundation in, or to review the general topics related to Algebra. Topics covered include operations with fractions, signed numbers, solving
equations, factoring, linear equations and polynomials. A grade of C or better is required for entrance into MAT 095/096/097, 099, 109, 118, or 131. List of pre-requisites: Student never took Regents
Algebra 2/Trig exam (if student took this exam, then the student should be placed higher). Regents Geometry score of 1-49 in the last two years, OR Regents Integrated Algebra score of 50-74 in the
last two years, OR CSM 094 with grade of C or higher, OR Compass Pre-Algebra score of 36 or more, OR Compass Algebra score of 23-48.
Credits: 3
Type: Online, Lecture
Attributes: Remedial Math, Remedial
Department: Math, Physical & Computer Sci
All sections for this course: Spring 2014 Summer 2014 Fall 2014
MAT 092: MATH LITERACY FOR COLL STUDNTS
This course will provide students with the essential quantitative skills and knowledge needed in the workplace, and needed for entrance into BUS 101, MAT 109, MAT 116, MAT 118, or 100-level general
education science courses. It will emphasize number sense, percents, computational ability, and basic applications of mathematics including graphs and rate of change. Pre-requisites: Student never
took Regents Algebra 2/Trig exam (if student took this exam, then the student should be placed higher). Regents Geometry score of 1-49 in the last two years, OR Regents Integrated Algebra score of
50-74 in the last two years, OR CSM 094 with grade of C or higher, OR Compass Pre-Algebra score of 36 or more, OR Compass Algebra score of 23-48.
Credits: 3
Type: Online, Lecture
Attributes: Remedial Math, Remedial
Department: Math, Physical & Computer Sci
All sections for this course: Spring 2014 Summer 2014 Fall 2014
MAT 095: INTERMEDIATE ALGEBRA PART I
This is the first credit of the three-credit Intermediate Algebra sequence of courses. Functions Part 1 is intended to introduce students to functions and function notation. The course will teach
students how to recognize linear, quadratic, and exponential functions when given in symbolic or numeric or graphical form. Topics covered include using function notation, finding domain and range,
and identifying basic features of linear, quadratic, and exponential functions. A TI-83 or TI-84 calculator is required. If placed into this course, then a grade of C or higher in this course is
required for entrance to the Functions Part 2 module. Pre-requisites: Regents Algebra 2/Trig score 1-49 in the last two years, OR Regents Geometry score of 50 or more in the last 2 years, OR Regents
Integrated Algebra of 75 or more in the last two years, OR MAT 091 with a C or higher, OR Compass Algebra score of 49 or higher.
Credits: 1
Type: Lecture
Attributes: Remedial Math, Remedial
Department: Math, Physical & Computer Sci
All sections for this course: Spring 2014 Summer 2014 Fall 2014
MAT 096: INTERMEDIATE ALGEBRA PART II
This is the second credit of the three-credit Intermediate Algebra sequence of courses. Intermediate Algebra Part 2 is intended to give students a more detailed understanding of linear and quadratic
and exponential functions. The course will teach students how to graphically, numerically, and symbolically solve problems involving these three types of functions. Topics covered include graphically
and numerically solving problems with the calculator, factoring and the quadratic formula, finding equations of linear functions, and interpreting the real-world meaning of points and slope of lines
and rates in exponential functions. A TI-83 or TI-84 calculator is required. If placed into this course, then a grade of C or higher in this course is required for entrance to MAT097, Intermediate
Algebra Part 3. Pre-requisite: MAT095, Intermediate Algebra Part 1, with C or higher.
Credits: 1
Type: Lecture
Attributes: Remedial Math, Remedial
Department: Math, Physical & Computer Sci
All sections for this course: Spring 2014 Summer 2014 Fall 2014
MAT 097: INTERMEDIATE ALGEBRA PART III
This is the final credit of the three-credit Intermediate Algebra sequence of courses. Preparation for College Level is intended to prepare students for College Algebra (MAT110) or Algebra and Trig
for PreCalculus (MAT184) or Mathematics for Elementary School Teachers (MAT107). Topics covered include fractions without a calculator, exponent rules, systems of equations, and basic applications. A
TI-83 or TI-84 calculator is required. If placed into this course, then a grade of C or higher in this course is required for entrance to College Algebra (MAT110) or Math for Elementary School
Teachers (MAT107) or College Algebra and Trigonometry (MAT184). Prerequisites: MAT096, Intermediate Algebra 2, with a C or higher.
Credits: 1
Type: Lecture
Attributes: Remedial Math, Remedial
Department: Math, Physical & Computer Sci
All sections for this course: Spring 2014 Summer 2014 Fall 2014
MAT 099: INTERMEDIATE ALGEBRA COMBINED
MAT099 is intended for students who must bring their mathematics proficiency to the level necessary for entrance into MAT110, 184, or 107. This course cannot be used to satisfy the mathematics
requirement of the Associate in Arts degree program. MAT109 will fulfill the mathematics requirement for many students in Associate of Arts degree programs. Topics include: Functions, Linear
Functions, Quadratic Functions, Exponential Functions, Solving Equations symbolically and graphically and numerically, Systems of Linear Equations, Factoring and Graphing. A TI-83, TI-83 Plus,
TI-84, or TI-84 Plus calculator is required. Pre-requisites: Regents Algebra 2/Trig score 1-49 in the last two years, OR Regents Geometry score of 50 or more in the last 2 years, OR Regents Integrated Algebra
of 75 or more in the last two years, OR MAT 091 with a C or higher, OR Compass Algebra score of 49 or higher.
Credits: 3
Type: Lecture
Attributes: Remedial Math, Remedial
Department: Math, Physical & Computer Sci
All sections for this course: Spring 2014 Summer 2014 Fall 2014
MAT 107: MATHEMATICS FOR ELEM TEACHERS
This course meets the Math requirement for students who are enrolled in the Liberal Arts and Sciences: Education, Early Childhood Education (Birth - Grade 2) and Childhood Education (Grade 1-6) dual
certification with SUNY New Paltz, A.S. degree program and who plan to transfer to SUNY New Paltz. The emphasis is on problem-solving as it relates to the number system. Probability and statistics
are also introduced. Pre-requisites: Regents Algebra 2/Trig score 50-64 in the last two years, OR Regents Integrated Algebra score of 85 or more in the last two years, OR MAT 097 or MAT 099 or MAT
131 with a C or higher, OR Compass Algebra score of 76 or higher.
Credits: 3
Type: Online, Lecture
Attributes: Elective
Department: Math, Physical & Computer Sci
All sections for this course: Spring 2014 Summer 2014 Fall 2014
MAT 109: SURVEY OF MATHEMATICS
The course will allow students the opportunity to explore mathematics through interesting real life applications, as they strengthen their critical thinking and practical problem solving skills.
Students will be required to use contemporary technology, perform web research and will work collaboratively throughout the course. Topics will include geometry, probability, statistics, and finance.
Other topics may include history of mathematics and modern mathematical systems. Pre-requisites: Regents Algebra 2/Trig score ANY score in the last two years, OR Regents Geometry score of 50 or more
in the last 2 years, OR Regents Integrated Algebra of 75 or more in the last two years, OR MAT 092 or MAT 091 with a C or higher (note that MAT 092 is recommended instead of 091), OR Compass Algebra
score of 49 or higher.
Credits: 3
Type: Online, Lecture
Attributes: SUNY Gen Ed Appendix A, Elective
Department: Math, Physical & Computer Sci
All sections for this course: Spring 2014 Summer 2014 Fall 2014
MAT 110: COLLEGE ALGEBRA
This course satisfies the SUNY General Education mathematics requirement and is the prerequisite for Business Calculus (MAT125). Topics include applications of linear, reciprocal, exponential,
logarithmic, power, and quadratic functions; composition and inverses of functions; systems of equations; regression; and piecewise equations. Students will solve equations both algebraically and
graphically. Use of the one of the following graphing calculators will be required: TI-83, 83 Plus, 84 or 84 Plus. Not for students who intend to take MAT185, 221, 222 or 223. Pre-requisites: Regents
Algebra 2/Trig score 50-64 in the last two years, OR Regents Integrated Algebra score of 85 or more in the last two years, OR MAT 097 or MAT 099 or MAT 131 with a C or higher, OR Compass Algebra
score of 76 or higher.
Credits: 3
Type: Online, Lecture
Attributes: SUNY Gen Ed Appendix A, Elective
Department: Math, Physical & Computer Sci
All sections for this course: Spring 2014 Summer 2014 Fall 2014
MAT 116: EXPLORING APPLICATIONS OF MATH
This course gives students the opportunity to explore mathematics through interesting, real life applications. Each semester students will select an area of study such as forensic science, amusement
park ride design, encryption, the cellular phone industry, etc. Mathematics will be presented in class, as it is needed, within the context of the problem being explored. The emphasis of this course
is on helping students get a better understanding of the links between mathematics and real life applications as they strengthen their critical thinking and practical problem solving skills. Students
will be required to do web research and will work collaboratively throughout the course. Pre-requisites: Compass Algebra Score of at least 49 OR Math A Regents/Integrated Algebra Regents within the
last 2 years of at least 65 OR MAT 091 with at least a C.
Credits: 3
Type: Lecture
Attributes: SUNY Gen Ed Appendix A, Elective
Department: Math, Physical & Computer Sci
All sections for this course: Spring 2014 Summer 2014 Fall 2014
MAT 117: MATH FOR ELEM SCHL TEACHRS II
This course is a second semester requirement for students in the elementary education programs (EDC and EDE). It emphasizes background information for the teaching of elementary school geometry.
Topics include spatial visualization, measurement, coordinate geometry, similarity and congruence, and transformational geometry. Students learn mathematical theory and application, and experience
the role of elementary school students through a variety of classroom activities and demonstrations. Pre-requisite: MAT107 with a grade of C or better
Credits: 3
Type: Lecture
Attributes: SUNY Gen Ed Appendix A
Department: Math, Physical & Computer Sci
All sections for this course: Spring 2014 Summer 2014 Fall 2014
MAT 118: ELEMENTARY STATISTICS
Satisfies the mathematics requirement of the Associate in Arts degree program. Basic statistical procedures are developed. Topics include descriptive statistics; probability; probability
distributions; hypothesis testing; confidence intervals; correlation and regression. Technology (either a graphing calculator from the TI-83/84 family or statistical analysis software) will be used
regularly throughout the course. Prerequisites: Regents Algebra 2/Trig score ANY score in the last two years, OR Regents Geometry score of 50 or more in the last 2 years, OR Regents Integrated Algebra of
75 or more in the last two years, OR MAT 092 or MAT 091 with a C or higher (note that MAT 092 is recommended instead of 091), OR Compass Algebra score of 49 or higher.
Credits: 3
Type: Online, Lecture
Attributes: SUNY Gen Ed Appendix A, Elective
Department: Math, Physical & Computer Sci
All sections for this course: Spring 2014 Summer 2014 Fall 2014
MAT 125: CALCULUS W/ BUSINESS APPL
A survey of the basic concepts and operations of calculus with business and management applications. Designed for students in the Business Administration Transfer program and should not be taken by
mathematics and science majors. Students will use Microsoft Excel extensively throughout the course. No previous knowledge of Excel is required. Prerequisite: Compass College Algebra Score of at
least 46 OR Math B Regents/Algebra II and Trigonometry Regents within the last 2 years of at least 85 OR MAT 110 with at least a C.
Credits: 4
Type: Online, Lecture
Attributes: SUNY Gen Ed Appendix A
Department: Math, Physical & Computer Sci
All sections for this course: Spring 2014 Summer 2014 Fall 2014
MAT 128: TECHNICAL MATHEMATICS A
This is the first course in a two-semester sequence of intermediate algebra and trigonometry with technical applications. Topics include operations in the real number system, functions and graphs,
first-degree equations, lines and linear functions, systems of linear equations, right triangle trigonometry, geometry (perimeters, areas, volumes of common figures), rules of exponents, polynomial
operations, factoring, operations on rational expressions, quadratic equations, and binary and hexadecimal notation. A calculator and a laptop computer will be used throughout.
Credits: 4
Type: Lecture
Department: Math, Physical & Computer Sci
All sections for this course: Spring 2014 Summer 2014 Fall 2014
MAT 129: TECHNICAL MATHEMATICS B
This is the second course in a two-semester sequence of intermediate algebra and trigonometry with technical applications. Topics include the operations of exponents and radicals, exponential and
logarithmic functions and equations, trig functions of any angle, radians, sinusoidal functions and graphing, vectors, complex numbers and their applications, oblique triangles, inequalities, ratio
and proportion, variation, introduction to statistics (optional) and an intuitive approach to calculus. The graphing calculator and laptop computer will be integrated throughout the course.
Prerequisite: MAT128.
Credits: 4
Type: Lecture
Attributes: SUNY Gen Ed Appendix A
Department: Math, Physical & Computer Sci
All sections for this course: Spring 2014 Summer 2014 Fall 2014
MAT 131: TECHNICAL MATHEMATICS I
This course satisfies the math requirement for the Applied Academic Certificate in ACR. It is designed for those students who need to improve their math proficiency for entrance into MAT 132. Topics
include: review of operations on whole numbers, fractions, and decimals; operations using signed numbers; exponents and roots; scientific notation; unit analysis; percentage; algebraic expressions;
factoring; linear equations; literal equations; geometry of the triangle, circle and regular polygons; measurement conversions; and introduction to basic trigonometry. Use of a scientific calculator
is required. Prerequisites: Regents Algebra 2/Trig score 1-49 in the last two years, OR Regents Geometry score of 50 or more in the last 2 years, OR Regents Integrated Algebra of 75 or more in the
last two years, OR MAT 091 with a C or higher, OR Compass Algebra score of 49 or higher.
Credits: 3
Type: Lecture
Department: Math, Physical & Computer Sci
All sections for this course: Spring 2014 Summer 2014 Fall 2014
MAT 132: TECHNICAL MATHEMATICS II
This course satisfies the mathematics requirement for students in ARC, CNS, FIR and FTP. Students enrolled in the above curricula may receive credit for MAT 132 or MAT 110, but not both. Topics
include a review of right triangle trigonometry, law of sines and cosines, vectors, factoring, literal, fractional and quadratic equations and applications. Use of a scientific calculator is
required. Prerequisites: Regents Algebra 2/Trig score 50-64 in the last two years, OR Regents Integrated Algebra score of 85 or more in the last two years, OR MAT 097 or MAT 099 or MAT 131 with a C
or higher, OR Compass Algebra score of 76 or higher.
Credits: 3
Type: Lecture
Attributes: SUNY Gen Ed Appendix A
Department: Math, Physical & Computer Sci
All sections for this course: Spring 2014 Summer 2014 Fall 2014
MAT 184: ALGEBRA & TRIG FOR PRECALCULUS
Satisfies the mathematics requirement of the Associate in Arts degree program, and is intended to prepare students for MAT185 (Precalculus). Topics include equations and inequalities, graphing
techniques, analysis of a variety of functions, and triangle trigonometry including the Laws of Sines and Cosines. Prerequisites: Regents Algebra 2/Trig score 50-64 in the last two years, OR Regents
Integrated Algebra score of 85 or more in the last two years, OR MAT 097 or MAT 099 or MAT 131 with a C or higher, OR Compass Algebra score of 76 or higher.
Credits: 3
Type: Online, Lecture
Attributes: SUNY Gen Ed Appendix A, Elective
Department: Math, Physical & Computer Sci
All sections for this course: Spring 2014 Summer 2014 Fall 2014
MAT 185: PRECALCULUS
This course is intended primarily for students planning to take calculus. Topics include a review of the fundamental operations; polynomial, rational, trigonometric, exponential, logarithmic, and
inverse functions; modeling and data analysis. A graphing calculator from the TI-83/84 family of calculators is required for this course. Pre-requisites: Compass College Algebra Score of at least 46
OR Math B Regents/Algebra II and Trigonometry Regents within the last 2 years of at least 65 OR MAT 184 with at least a C OR MAT 132 with at least a C OR MAT 110 with at least an A-.
Credits: 4
Type: Lecture
Attributes: SUNY Gen Ed Appendix A, Elective
Department: Math, Physical & Computer Sci
All sections for this course: Spring 2014 Summer 2014 Fall 2014
MAT 214: DISCRETE MATHEMATICS
Intended primarily for students in the CPS or LAM curriculum. Topics include: Set Theory, Boolean Algebra, Methods of Proof, Counting Techniques, Functions and Relations, Graph Theory and Computer
Applications. Pre- or Co-requisite: MAT 221.
Credits: 3
Type: Lecture
Attributes: SUNY Gen Ed Appendix A, Elective
Department: Math, Physical & Computer Sci
All sections for this course: Spring 2014 Summer 2014 Fall 2014
MAT 215: INTRO TO LINEAR ALGEBRA
A basic introduction to linear algebra. Topics include vector spaces, systems of linear equations, matrices and determinants and linear transformations. Required for prospective mathematics majors.
Prerequisite: MAT 222 with a grade of C or better.
Credits: 3
Type: Lecture
Attributes: SUNY Gen Ed Appendix A, Elective
Department: Math, Physical & Computer Sci
All sections for this course: Spring 2014 Summer 2014 Fall 2014
MAT 221: ANLYTC GEOM & CALC I
This course is the first of a three-semester sequence developing calculus for the student majoring in engineering, mathematics, or the sciences. Topics include the derivative, limits, continuity,
differentiability, the definite integral, the Fundamental Theorem of Calculus, techniques of differentiation (including for transcendental functions), applications of differentiation, mathematical
modeling and computer applications. A graphing calculator from the TI-83/84 family of calculators is required for this course. Pre-requisites: Compass Trigonometry Score of at least 46 OR Math B
Regents/Algebra II and Trigonometry Regents within the last 2 years of at least 65 AND 1 year of high school Precalculus with a grade of at least C OR MAT 185 with a grade of at least C OR permission
of the instructor.
Credits: 4
Type: Lecture
Attributes: SUNY Gen Ed Appendix A, Elective
Department: Math, Physical & Computer Sci
All sections for this course: Spring 2014 Summer 2014 Fall 2014
MAT 222: ANLYTC GEOM & CALC II
This course is the second of a three-semester sequence developing calculus for the student majoring in engineering, mathematics or the sciences. Topics include The Fundamental Theorem, constructing
antiderivatives, definite and indefinite integrals, techniques of integration, improper integrals, applications of integration (including probability distribution functions), differential equations
(first and second order linear, separation of variables, numerical approximations, systems, and applications to growth and decay and oscillations), Taylor and other series, mathematical modeling and
computer applications. A graphing calculator from the TI83/84 family of calculators is required for this course. Prerequisite: MAT 221 with a grade of C or better, or permission of the department.
Credits: 4
Type: Lecture
Attributes: SUNY Gen Ed Appendix A, Elective
Department: Math, Physical & Computer Sci
All sections for this course: Spring 2014 Summer 2014 Fall 2014
MAT 223: ANLYTC GEOM & CALC III
A continuation of MAT 222. Topics include vectors in the plane, solid analytic geometry, functions of several variables, partial differentiation, multiple integration, line integrals and vector
fields, Green's Theorem, Stokes' Theorem, applications. A graphing calculator from the TI-83/84 family of calculators is required for this course. Prerequisite: MAT 222 with a grade of C or better or
advanced placement with the permission of the department.
Credits: 4
Type: Lecture
Attributes: SUNY Gen Ed Appendix A, Elective
Department: Math, Physical & Computer Sci
All sections for this course: Spring 2014 Summer 2014 Fall 2014
MAT 224: DIFFERENTIAL EQUATIONS
An introductory course in differential equations for students in mathematics, engineering and sciences. Topics include the theory, solution and estimation of differential equations of the first and
second order, Laplace transforms, systems of differential equations, power series and an introduction to Fourier series and partial differential equations. Prerequisite: MAT 223 with a grade of C or better.
Credits: 4
Type: Lecture
Attributes: SUNY Gen Ed Appendix A, Elective
Department: Math, Physical & Computer Sci
All sections for this course: Spring 2014 Summer 2014 Fall 2014
MAT 271: SPECIAL STUDY PROJECT I
A special learning experience designed by one or more students with the cooperation and approval of a faculty member. Proposed study plans require departmental approval. Projects may be based on
reading, research, community service, work experience, or other activities that advance the student's knowledge and competence in the field of mathematics or related areas. The student's time
commitment to the project will be approximately 35-50 hours.
Credits: 1
Type: Lecture, Independent Study
Attributes: Elective
Department: Math, Physical & Computer Sci
All sections for this course: Spring 2014 Summer 2014 Fall 2014
MAT 272: SPECIAL STUDY PROJECT II
Similar to MAT 271, except that the student's time commitment to the project will be approximately 70-90 hours.
Credits: 2
Type: Lecture, Independent Study
Attributes: Elective
Department: Math, Physical & Computer Sci
All sections for this course: Spring 2014 Summer 2014 Fall 2014
MAT 273: SPECIAL STUDY PROJECT III
Similar to MAT 271, except that the student's time commitment to the project will be approximately 105-135 hours.
Credits: 3
Type: Lecture, Independent Study
Attributes: Elective
Department: Math, Physical & Computer Sci
All sections for this course: Spring 2014 Summer 2014 Fall 2014
A factor of a given number is a whole number that divides exactly into the given number.
So, 4 is a factor of 12 as it divides exactly into 12, and 3 is also a factor of 12.
If a number can be expressed as a product of two whole numbers, then the whole numbers are called factors of that number.
So, the factors of 12 are 1, 2, 3, 4, 6 and 12.
Common Factors
Common factors are factors that are common to two or more numbers.
Example 16
Find the common factors of 10 and 20.
Solution:
The factors of 10 are 1, 2, 5 and 10. The factors of 20 are 1, 2, 4, 5, 10 and 20.
So, the common factors of 10 and 20 are 1, 2, 5 and 10.
Example 17
Find the common factors of 22 and 33.
Solution:
The factors of 22 are 1, 2, 11 and 22. The factors of 33 are 1, 3, 11 and 33.
So, the common factors of 22 and 33 are 1 and 11.
Highest Common Factor
The highest common factor (HCF) of two (or more) numbers is the largest of their common factors.
For example, the common factors of 8 and 12 are 1, 2 and 4; and 4 is the largest common factor. So, the HCF of 8 and 12 is 4.
Setting out:
Often, we set out the solution as follows:
Example 18
Find the highest common factor of 16 and 32.
Solution:
The factors of 16 are 1, 2, 4, 8 and 16. The factors of 32 are 1, 2, 4, 8, 16 and 32.
The common factors of 16 and 32 are 1, 2, 4, 8 and 16.
So, the highest common factor of 16 and 32 is 16.
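The listing method used in these examples translates directly into code. A small sketch (plain trial division, which is fine for numbers of this size):

```python
def factors(n):
    """All whole-number factors of n, found by trial division."""
    return [d for d in range(1, n + 1) if n % d == 0]

def hcf(a, b):
    """Highest common factor: the largest of the common factors."""
    return max(set(factors(a)) & set(factors(b)))

factors(12)  # [1, 2, 3, 4, 6, 12]
hcf(16, 32)  # 16
```

For large numbers the Euclidean algorithm is far faster, but listing the factors mirrors the working shown in the examples.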
Key Terms
factors, common factors, highest common factor, HCF
Computational Methods In Commutative Algebra And Algebraic Geometry
May 18, 2004
408 pages
This ACM volume deals with tackling problems that can be represented by data structures which are essentially matrices with polynomial entries, mediated by the disciplines of commutative algebra and
algebraic geometry. The discoveries stem from an interdisciplinary branch of research which has been growing steadily over the past decade. The author covers a wide range, from showing how to obtain
deep heuristics in a computation of a ring, a module or a morphism, to developing means of solving nonlinear systems of equations - highlighting the use of advanced techniques to bring down the cost
of computation. Although intended for advanced students and researchers with interests both in algebra and computation, many parts may be read by anyone with a basic abstract algebra course.
1 Fundamental Algorithms 7
1.1 Gröbner Basics 8
1.2 Division Algorithms 12
1.4 Hilbert Functions 21
2 Toolkit 29
2.1 Elimination Techniques 30
2.2 Rings of Endomorphisms 35
2.3 Noether Normalization 37
2.4 Fitting Ideals 41
2.5 Finite and Quasi-Finite Morphisms 46
2.6 Flat Morphisms 49
2.7 Cohen-Macaulay Algebras 58
3 Principles of Primary Decomposition 65
3.1 Associated Primes and Irreducible Decomposition 67
3.2 Equidimensional Decomposition of an Ideal 77
3.3 Equidimensional Decomposition Without Exts 83
3.4 Mixed Primary Decomposition 85
3.5 Elements of Factorizers 90
4 Computing in Artin Algebras 103
4.1 Structure of Artin Algebras 104
4.2 Zero-Dimensional Ideals 109
4.3 Idempotents versus Primary Decomposition 113
4.4 Decomposition via Sampling 115
4.5 Root Finders 120
5 Nullstellensätze 127
5.1 Radicals via Elimination 128
5.2 Modules of Differentials and Jacobian Ideals 130
5.3 Generic Socles 134
5.4 Explicit Nullstellensätze 136
5.5 Finding Regular Sequences 141
5.6 Top Radical and Upper Jacobians 146
6 Integral Closure 149
6.1 Integrally Closed Rings 151
6.2 Multiplication Rings 154
6.3 S2-ification of an Affine Ring 159
6.4 Desingularization in Codimension One 167
6.5 Discriminants and Multipliers 173
6.6 Integral Closure of an Ideal 176
6.7 Integral Closure of a Morphism 184
7 Ideal Transforms and Rings of Invariants 189
7.1 Divisorial Properties of Ideal Transforms 190
7.2 Equations of Blowup Algebras 193
7.3 Subrings 202
7.4 Rings of Invariants 209
8 Computation of Cohomology 219
8.1 Eyeballing 220
8.2 Local Duality 222
8.3 Approximation 224
9 Degrees of Complexity of a Graded Module 227
9.1 Degrees of Modules 230
9.2 Index of Nilpotency 244
9.3 Qualitative Aspects of Noether Normalization 249
9.4 Homological Degrees of a Module 263
9.5 Complexity Bounds in Local Rings 273
A Primer on Commutative Algebra 281
A.2 Krull Dimension 288
A.3 Graded Algebras 295
A.4 Integral Extensions 298
A.5 Finitely Generated Algebras over Fields 305
A.6 The Method of Syzygies 309
A.7 Cohen-Macaulay Rings and Modules 321
A.8 Local Cohomology 329
A.9 Linkage Theory 338
B Hilbert Functions 343
B.2 The Study of R via gr_F(R) 347
B.3 The Hilbert-Samuel Function 352
B.4 Hilbert Functions, Resolutions and Local Cohomology 356
B.5 Lexsegment Ideals and Macaulay Theorem 359
B.6 The Theorems of Green and Gotzmann 362
C Using Macaulay 2 367
C.1 Elementary Uses of Macaulay 2 368
C.2 Local Cohomology of Graded Modules 382
C.3 Cohomology of a Coherent Sheaf: Mathematical Background 387
References 393
Index 405
A. Krylov–Bogolyubov formalism for qubit–resonator system
B. Inductive coupling with LCR resonator. Parametric inductance
1. Low-quality qubit (T1 ≪ T): Phase shift probes the parametric inductance of qubit
2. Higher-quality qubit (): Parametric resistance due to qubit's lagging
C. Capacitive coupling with nanomechanical resonator. Parametric capacitance
A. Inductance of superconducting qubits
B. Equilibrium-state measurement
C. Resonant transitions in the charge qubit
D. One- and multiphoton transitions in the flux qubit
E. Interferometry with nanoresonator
A. Equations for a system of coupled qubits
B. Weak-driving spectroscopy
C. Direct and ladder-type multiphoton transitions
D. Lasing in the two-qubit system
How long ago was 11/09/1979?
34 years, 5 months and 9 days
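Reading 11/09/1979 in the US style as 9 November 1979, and taking 18 April 2014 as the date the answer was computed against (both are assumptions on our part), the stated difference can be reproduced with a simple borrow-style calendar subtraction:

```python
from datetime import date, timedelta

def elapsed_ymd(start, end):
    """Calendar difference from start to end as (years, months, days)."""
    years = end.year - start.year
    months = end.month - start.month
    days = end.day - start.day
    if days < 0:  # borrow days from the month preceding `end`
        months -= 1
        last_of_prev = date(end.year, end.month, 1) - timedelta(days=1)
        days += last_of_prev.day
    if months < 0:  # borrow months from the previous year
        years -= 1
        months += 12
    return years, months, days

elapsed_ymd(date(1979, 11, 9), date(2014, 4, 18))  # (34, 5, 9)
```

Month-end edge cases (e.g. 31 January plus one month) are convention-dependent and not handled by this sketch.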
Updated August 18, 2013
Click here to go to our page on microstrip loss
Click here to go to our page on bends in transmission lines
Click here to go to our main page on microwave transmission lines
Click here to go to our microwave calculators
Click here to go to our page on calculating transmission line loss
Click here to go to our page on microstrip patch antennas
Click here to go to our page on microstrip dispersion (new for March 2012!)
Microstrip is by far the most popular transmission line used in microwave engineering for circuit design.
History of microstrip
Microstrip is a planar transmission line, similar to stripline and coplanar waveguide. Microstrip was developed by ITT Federal Telecommunications Laboratories in Nutley New Jersey, as a competitor to
stripline (first published by Grieg and Engelmann in the December 1952 IRE proceedings). According to Pozar, early microstrip work used fat substrates, which allowed non-TEM waves to propagate which
makes results unpredictable. In the 1960s the thin version of microstrip became popular.
By 1955 ITT had published a number of papers on microstrip in the IEEE transactions on microwave theory and technique. A paper by M. Arditi titled Characteristics and Applications of Microstrip for
Microwave Wiring is a good one. The author seems to apologize for the inability of microstrip to support slotted-line measurements (as it is "rather unconventional"), and concludes that it is dispersion-free despite its non-TEM mode; he admits that he didn't analyze this, and bases the assumption on limited measured data. The "microstrip kit" shown below is a priceless artifact. Presumably it
would be sent to customers to let them try out microstrip for themselves. Networks such as rat-races can somehow be clipped together to form a receiver or something useful. It has plenty of
transitions to waveguide and coax (called "transducers" in the paper) so you can actually measure what you just made... if anyone has one of these gems, please tell us your address we will fly the
Microwaves101 private jet there to take some proper pictures!
This kit is similar to the Raytheon "Lectron", which was an educational toy marketed starting in 1967 in which magnets held circuit elements imprinted with their schematic symbols together, it had
everything you needed to build a simple transistor radio, including a tiny speaker. One of these days we'll go out to the garage, find our old Lectron and take some photos. Not long ago, a Raytheon
lawyer warned us not to use their good name on this web site. To which we respond, "ptttthhhhh!" with extra ejected saliva.
In 1996, a serious crime against microwave history was committed when ITT Nutley tore down their 300 foot microwave tower to make way for some ugly condos. The video below is not great, we'll keep an
eye out for a better one.
Overview of microstrip
Microstrip transmission lines consist of a conductive strip of width "W" and thickness "t" and a wider ground plane, separated by a dielectric layer (a.k.a. the "substrate") of thickness "H" as shown
in the figure below. Microstrip is by far the most popular microwave transmission line, especially for microwave integrated circuits and MMICs. The major advantage of microstrip over stripline is
that all active components can be mounted on top of the board. The disadvantages are that when high isolation is required such as in a filter or switch, some external shielding may have to be
considered. Given the chance, microstrip circuits can radiate, causing unintended circuit response. A minor issue with microstrip is that it is dispersive, meaning that signals of different
frequencies travel at slightly different speeds. Microstrip does not support a TEM mode, because of its filling factor. For coupled lines, the even and odd modes will not have the same phase
velocity. This property is what causes the asymmetric frequency of microstrip bandpass filters, for example.
Variants of microstrip include embedded microstrip and coated microstrip, both of which add some dielectric above the microstrip conductor. Anyone care to donate some material on these topics?
Effective dielectric constant
Because part of the fields from the microstrip conductor exist in air, the effective dielectric constant "Keff" is somewhat less than the substrate's dielectric constant, also known as the relative permittivity. (Thanks to Brian KC2PIT for reminding us that the term "relative dielectric constant" is an oxymoron only used by microwave morons!) According to Bahl and Trivedi [1], the effective dielectric constant ε_eff (a.k.a. Keff) of microstrip is calculated by:
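The displayed equation appears not to have survived; the closed form commonly quoted for this zero-strip-thickness approximation (a reconstruction, which may differ in detail from the Bahl and Trivedi expression originally shown) is:

```latex
\varepsilon_{\mathrm{eff}} =
\begin{cases}
\dfrac{\varepsilon_r + 1}{2} + \dfrac{\varepsilon_r - 1}{2}
\left[\left(1 + 12\dfrac{H}{W}\right)^{-1/2} + 0.04\left(1 - \dfrac{W}{H}\right)^{2}\right],
& W/H < 1 \\[2ex]
\dfrac{\varepsilon_r + 1}{2} + \dfrac{\varepsilon_r - 1}{2}
\left(1 + 12\dfrac{H}{W}\right)^{-1/2},
& W/H \ge 1
\end{cases}
```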
All microstrip equations are approximate. The above equations ignore strip thickness, so we wouldn't recommend relying on them for critical designs on thick copper boards.
The effective dielectric constant is a seen to be a function of the ratio of the width to the height of a microstrip line (W/H), as well as the dielectric constant of the substrate material. Be
careful, the way it is expressed here it is also a function of H/W!
We have a table of "hard" substrate material properties here, and "soft" substrate material properties here, in case you want to look up the dielectric constant of a specific material.
Note that there are separate solutions for cases where W/H is less than 1, and when W/H is greater than or equal to 1. These equations provide a reasonable approximation for ε_eff (effective dielectric constant). This calculation ignores strip thickness and frequency dispersion, but their effects are usually small.
Go to our microwave calculator page, our microstrip calculator does this calculation for you!
Here's a calculator that was suggested to us that takes into account the metal thickness effect:
(link fixed May 1, 2008 thanks to Brian...)
Let us know if you find it accurate (or not!)
Wavelength for any transmission line can be calculated by dividing free space wavelength by the square root of the effective dielectric constant, which is explained above.
Characteristic impedance
Characteristic impedance Z0 of microstrip is also a function of the ratio of the width to the height, W/H (and of the ratio of height to width, H/W), of the transmission line, and it likewise has separate solutions depending on the value of W/H. According to Bahl and Trivedi [1], the characteristic impedance Z0 of microstrip is calculated by:
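Here too the displayed equation seems to be missing; the standard zero-thickness closed form (a reconstruction, possibly differing in detail from the article's original figure) is:

```latex
Z_0 =
\begin{cases}
\dfrac{60}{\sqrt{\varepsilon_{\mathrm{eff}}}}
\,\ln\!\left(\dfrac{8H}{W} + \dfrac{W}{4H}\right),
& W/H \le 1 \\[2ex]
\dfrac{120\pi}{\sqrt{\varepsilon_{\mathrm{eff}}}
\left[\dfrac{W}{H} + 1.393 + 0.667\,\ln\!\left(\dfrac{W}{H} + 1.444\right)\right]},
& W/H \ge 1
\end{cases}
```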
Again, these equations are approximate, and don't take into account strip thickness. When strip thickness is a substantial fraction of substrate height, you should use a more accurate calculation.
Suggestion: use Agilent ADS's Linecalc. One of these days we'll post some more accurate equations.
Go to our microwave calculator page, our microstrip calculator does this calculation for you!
It's time for a Microwaves101 Rule of Thumb!
For pure alumina (ε_r = 9.8), the ratio of W/H for fifty-ohm microstrip is about 95%. That means on ten mil (254 micron) alumina, the width for fifty-ohm microstrip will be about 9.5 mils (241 microns). On GaAs (ε_r = 12.9), the W/H ratio for fifty ohms is about 75%. Therefore on four mil (100 micron) GaAs, fifty-ohm microstrip will have a width of about 3 mils (75 microns). On PTFE-based soft board materials (ε_r = 2.2), W/H to get fifty ohms is about 3. Remember these!
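These ratios can be sanity-checked against zero-strip-thickness closed-form equations for ε_eff and Z0. The sketch below assumes the standard Hammerstad-style forms, which may differ slightly from the Bahl and Trivedi expressions this article references:

```python
from math import log, pi, sqrt

def eps_eff(er, w_h):
    """Effective dielectric constant, zero-strip-thickness approximation."""
    e = (er + 1) / 2 + (er - 1) / 2 * (1 + 12 / w_h) ** -0.5
    if w_h < 1:  # extra correction term for narrow lines
        e += (er - 1) / 2 * 0.04 * (1 - w_h) ** 2
    return e

def z0(er, w_h):
    """Microstrip characteristic impedance in ohms."""
    e = eps_eff(er, w_h)
    if w_h <= 1:
        return 60 / sqrt(e) * log(8 / w_h + w_h / 4)
    return 120 * pi / (sqrt(e) * (w_h + 1.393 + 0.667 * log(w_h + 1.444)))

# Rule-of-thumb check: each should land close to 50 ohms.
z0(9.8, 0.95)   # alumina,        W/H about 95%
z0(12.9, 0.75)  # GaAs,           W/H about 75%
z0(2.2, 3.0)    # PTFE soft board, W/H about 3
```

All three cases come out within a couple of ohms of 50, consistent with the rule of thumb.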
Effect of metal thickness on calculations
Having a finite thickness of metal for conductor strips tends to increase the capacitance of the lines, which affects the ε_eff and Z0 calculations. We'll add this correction factor at a later date.
Effect of cover height on calculations
Having a lid in close proximity raises the capacitance per unit length, and therefore lowers the impedance. We suggest that if your impedance calculation is important, you use EDA software to make the final calculation on line widths!
Cutoff frequency
Below we present a microstrip rule of thumb, based on experience and not theory. In order to prevent higher-order transmission modes which might ruin your day, you should limit the thickness of your
microstrip substrate to 10% of a wavelength. Examples of what this means: 15 mil alumina is good up to 25 GHz, 4 mil GaAs is good up to 82 GHz, and 5 mil quartz is good up to 121 GHz. There are
formulas associated with calculating the exact cut-off frequencies, some day maybe we'll post them.
Note to MMIC designers: increasing the height of a microstrip substrate decreases metal loss proportionately. It also increases the parasitic inductance associated with via holes. You may find that a
two-mil substrate gives more gain than a four-mil substrate for the same active device, because of reduced inductance, and in many cases that "extra" gain will offset the loss associated with reduced
strip widths. For switch designs, reduced substrate height means reduced switch isolation.
Note to GaN fab operators: as GaN-on-SiC moves into mainstream and up into millimeter-waves, it seems like most fabs are trying to duplicate the 50 and 100um (2 and 4 mil) thicknesses of GaAs
substrates. If you consider that SiC has DK=10 versus GaAs DK of 12.9, you can get away with a 75um (three mil) process to serve all the way to 110 GHz. You will get better heat spreading and lower
metal loss to boot.
Microstrip loss calculations
This topic now has its own page! Also, check out our page on transmission line loss calculations.
Dispersion
The effective dielectric constant (and therefore phase velocity and characteristic impedance) of microstrip is a slight function of frequency. This effect is not a big deal in most cases.
Here's our page on the topic of dispersion.
Our main discussion of microstrip dispersion is now here.
[1] Reference: I. J. Bahl and D. K. Trivedi, "A Designer's Guide to Microstrip Line", Microwaves, May 1977, pp. 174-182. Go to our book section and buy a book on microstrip!
Negative Binomial or Pascal and Geometric distribution
Anscombe (1950) defines the Negative Binomial (NB) distribution as:
where m is the distribution mean, k is a parameter and Γ() is the Gamma function. Letting p=m/(m+k) this expression can be re-written as:
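The two displayed equations did not survive; in Anscombe's parameterization they are standardly written (a reconstruction, with symbols as defined in the surrounding text) as

```latex
P(X = x) \;=\; \frac{\Gamma(k + x)}{\Gamma(k)\,x!}
\left(\frac{m}{m + k}\right)^{\!x} \left(\frac{k}{m + k}\right)^{\!k},
\qquad x = 0, 1, 2, \ldots
```

and, substituting p = m/(m+k),

```latex
P(X = x) \;=\; \frac{\Gamma(k + x)}{\Gamma(k)\,x!}\; p^{x}\,(1 - p)^{k}.
```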
Plots of this distribution for p=0.5 and varying values of k are shown below.
[Figure: Negative Binomial distribution, p = 0.5, k = 0.5, 1, 3, 5]
Originally this distribution was introduced as a model of the number of successes in trials before a failure is observed, where p is the probability of success. However, the distribution has been
more widely used as a model for count data that are more clustered than one would expect for a purely random process (i.e. more clustered than under a Poisson process). A quick test to see if the
Negative Binomial might be appropriate when the Poisson is not is to see if the variance>mean. If an observed distribution shows more clustering than can be modeled effectively with a NB
distribution, some other form of clustered or contagious distribution may be more effective. The distribution is always positively skewed (right-skewed, with a long right tail) and for large values of the parameter k tends to a symmetric distribution.
Ehrenberg (1959 [EHR1]) used the NB distribution (based on Anscombe's formulation [ANS1]) with great success to model consumer purchasing behavior. He found that for a very large range of regularly
purchased branded products, such as breakfast cereals, canned goods, soft drinks, detergents etc, the number of units purchased by consumers over time could be modeled using the Negative Binomial.
Furthermore, a convenient and effective fit of the model could be obtained by calculating the mean of the sample, m, and the proportion of non-buyers, p(0), both of which are readily available from
the survey data. Ehrenberg cites the example of purchases made of a specific product over a 26-week period by a consumer panel of 2000 households, and demonstrates that using the fitting method just
described the fit for 0 units is exact, and for up to 10 units is very good. The distribution of recorded purchases did have a very long tail, with a few consumers buying much larger numbers of the
product than expected (e.g. 20+). This partly reflects the problem of fitting such distributions where there are varying packaging sizes, brand mixes, bulk offers etc., issues that have increased
since the time Ehrenberg produced these findings. However, his core observations regarding consumer purchasing habits and the usefulness of NB models remains broadly valid today.
Key measures for the NB are shown below, where q=1-p:
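Under the parameterization above, with q = 1 − p, the standard results are mean = kp/q (which equals m), variance = kp/q² = m + m²/k, and skewness = (1 + p)/√(kp); in particular the variance always exceeds the mean, which is exactly the quick overdispersion test mentioned earlier. A minimal numerical check of the first two (a sketch; the truncation length and tolerances are our own choices, not from the article):

```python
def nb_pmf(k, p, n_terms=10_000):
    """First n_terms Negative Binomial probabilities,
    P(x) = Gamma(k + x) / (Gamma(k) * x!) * p**x * (1 - p)**k,
    built with the stable recurrence P(x+1) = P(x) * (k + x) / (x + 1) * p."""
    probs = [(1 - p) ** k]
    for x in range(n_terms - 1):
        probs.append(probs[-1] * (k + x) / (x + 1) * p)
    return probs

k, p = 3.0, 0.5
q = 1 - p
probs = nb_pmf(k, p)
mean = sum(x * pr for x, pr in enumerate(probs))
var = sum((x - mean) ** 2 * pr for x, pr in enumerate(probs))

assert abs(sum(probs) - 1) < 1e-9        # probabilities sum to 1
assert abs(mean - k * p / q) < 1e-6      # mean = kp/q
assert abs(var - k * p / q ** 2) < 1e-6  # variance = kp/q^2 > mean
```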
[ANS1] Anscombe F J (1950) Sampling Theory of the Negative Binomial and Logarithmic Series Distributions. Biometrika, 37(3/4), 358-382
[EHR1] Ehrenberg A S C (1959) The Pattern of Consumer Purchases. Applied Statistics, 8(1), 26-41
[JOH1] Johnson N L, Kotz S (1969) Discrete distributions. Houghton Mifflin/J Wiley & Sons, New York
Mathworld/Weisstein E W: Negative Binomial Distribution: http://mathworld.wolfram.com/NegativeBinomialDistribution.html
Wikipedia: Negative Binomial Distribution: http://en.wikipedia.org/wiki/Negative_binomial
Measures of Dispersion
Date: 6/30/96 at 1:24:37
From: Anonymous
Subject: Measures of Dispersion
Good day!
I'm from Manila, Philippines. My question is:
What are the different characteristics of these different measures of
a.) Range
b.) Mean Absolute Deviation
c.) Standard Deviation
d.) Variance
e.) Quantiles (percentile, decile, quartiles)
f.) CV
I'm not quite sure about that last one's real description...
I've gone through different books but still I can't find any answers.
Date: 7/1/96 at 8:55:45
From: Doctor Anthony
Subject: Re: Measures of Dispersion
Let us go through the list you have given.
(a) Range - Very simple. This is simply the difference between the
largest and smallest members of the population. So if age is what you
are looking at, and the oldest is 90, the youngest 35, then the range
is 90 - 35 = 55
(b) Mean deviation. First calculate the mean (= total of all the
measurements divided by number of measurements) If the population is
made up of x1, x2, x3, ....xn, then mean = (x1+x2+x3+...+xn)/n
If mean = m then to get mean deviation we calculate the numerical
value i.e. ignore negative signs of |x1-m|, |x2-m|, |x3-m|, ... |xn-m|
now add all the numbers together (they are all positive) and divide
by n.
(c) Standard deviation - This is the square root of the variance, so I
will describe that in section (d), and then you get the s.d. by taking
the square root of the variance.
(d) Variance. To avoid the problem of negative numbers we encountered
with mean deviation, we could square the deviations and then average
those. This is the variance. So variance =
{(x1-m)^2 + (x2-m)^2 + ... + (xn-m)^2}/n. As mentioned earlier, the
standard deviation is then found by taking the square root of the
(e) Quantiles. You need to plot a cumulative frequency diagram to
make use of these. The cumulative frequency shows the total number of
the population less than any given value of x. If you plot x
horizontally and cumulative frequency on the vertical axis, then to
find the median you go half way up the vertical axis, across to the
curve and down to the x axis to read off the median value of x. This
is the value of x which divides the population exactly in half. i.e.
half the population have values below x and half have values above x.
The interquartile range is found by going 1/4 and then 3/4 of the way
up the vertical axis, across to the curve and down to the x axis. 1/4
of the population have values of x below the lower quartile and 1/4 of
the population have values of x above the upper quartile. This means
that the middle half of the population have values within the
interquartile range. A decile is found by going 10% up the vertical
axis, and corresponds to 10% of the population. A percentile
represents 1% of the population, so the 30th percentile is the value
of x below which 30% of the population lies.
(f) I think you mean covariance by the letters CV. This applies to
situations where you have two variables x and y which affect each
other. (If x and y are independent the covariance is 0). When finding
the variance of (x+y) we have:
var(x+y) = var(x) + var(y) + 2*cov(xy)
Cov(xy) is calculated from {x1y1 + x2y2 + ... + xnyn}/n - mean(x)*mean(y)
As mentioned above, cov(xy) = 0 if x and y are independent.
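Apart from the quantiles (which need the sorted sample or a cumulative frequency plot), these measures are easy to compute directly. A small sketch using the population formulas given above; the sample data are made up for illustration, and "CV" here is computed as the coefficient of variation (the reading suggested in the follow-up message below), since covariance needs a second variable:

```python
from math import sqrt

def dispersion(xs):
    """Population versions of the measures described above."""
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / n  # (d) variance
    sd = sqrt(var)                           # (c) standard deviation
    mad = sum(abs(x - m) for x in xs) / n    # (b) mean (absolute) deviation
    rng = max(xs) - min(xs)                  # (a) range
    cv = 100 * sd / m                        # coefficient of variation, %
    return {"mean": m, "range": rng, "mad": mad, "var": var, "sd": sd, "cv": cv}

dispersion([2, 4, 4, 4, 5, 5, 7, 9])
# mean 5, range 7, mean deviation 1.5, variance 4, sd 2, CV 40%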
-Doctor Anthony, The Math Forum
Check out our web site! http://mathforum.org/dr.math/
Date: Thu, 23 Oct 1997 16:49:16 -0400 (EDT)
From: Amal Jasentuliyana
Subject: Clarification of a Dr. Math answer
Hi. I think this is a wonderful site.
I noticed a point of possible confusion in one of the archived
Dr. Math answers. The question asks about a number of different
measures of the spread of a distribution, and the last one is "CV".
In the answer, CV was assumed to be covariance. I _think_ that
coefficient of variation might have been what was intended in the
question (since everything else was univariate), and that this may
also be a more common usage of the abbreviation.
Coeff. of variation is defined as (sd * 100)/mean (see Sokal
and Rohlf [1981] _BIOMETRY_, 2nd ed., ch. 4.10).
Keep up the excellent work, - Amal.
For more on the meanings of "quartile" and mathematicians'
disagreements about them, see
Defining Quartiles
- Doctor Melissa, The Math Forum
History of the Metric System Part 2 Non-Metric System
About the history of the metric system, the simple system of measurement the United States refuses to accept.
Meet the Meter
The metric system, like the U.S. currency, is a decimal system; all the units are related to each other by a factor of 10. Humans have used their 10 fingers to count on since the dawn of history, so
the manipulation of units of 10 is almost 2nd nature. To convert from one metric unit to another, all one has to do is add easily memorized prefixes and shift the decimal point. For example, the
prefix "centi" means one hundredth: c = 1/100 = 10^-2 = 0.01. "Milli" means one thousandth: m = 1/1000 = 10^-3 = 0.001. One millimeter is 1/1000 of a meter and 1/10 of a centimeter. "Kilo" means one thousand: k = 1000 = 10^3. One thousand grams is one kilogram, which is approximately 2.2 lbs.; a thousand grams of water also fills one liter in liquid measure, which corresponds roughly to one quart. Lists follow
giving all the established prefixes and their equivalent values in nonmetric units; these have been prepared not only for weights and measures but for other physical quantities as well.
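To make the "shift the decimal point" rule concrete, here is a small illustrative sketch (my addition, not from the original article):

```python
# Powers of ten for a few of the SI prefixes discussed above;
# the empty string stands for the unprefixed base unit.
PREFIXES = {"kilo": 3, "": 0, "centi": -2, "milli": -3}

def convert(value, from_prefix, to_prefix):
    """Convert between prefixed units by shifting the decimal point,
    i.e. multiplying by the appropriate power of ten."""
    return value * 10 ** (PREFIXES[from_prefix] - PREFIXES[to_prefix])

print(convert(1, "milli", "centi"))  # 1 millimeter = 0.1 centimeter
print(convert(1000, "", "kilo"))     # 1000 grams = 1 kilogram
print(convert(2.2, "kilo", ""))      # 2.2 kilograms = 2200 grams
```

The same dictionary lookup works for any quantity (meters, grams, liters), which is exactly the uniformity the article is praising.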
The nonmetric system and its shortcomings can be traced back to historical developments that took place before uniform reference standards could be established. For example, a medieval British ruler
changed the Roman mile of 5,000' to 5,280' to make it conformable with the length of 8 furlongs. Another British king proclaimed that 3 kernels of grain--wheat or barley--laid end to end were the
equivalent of one inch which, in turn, was 1/12 the length of a human foot. As a result we have remained saddled with a complicated system of units which have no relation to one another. There are
troy ounces and avoirdupois ounces and liquid ounces. A quart of water has 57.75 cubic inches, but a quart of dry measure is equivalent to 67.20 cubic inches. Pricing or cost accounting of such
irregular units using our decimal currency system is an unavoidably laborious process.
The only reason for the continued use of the nonmetric system is human inertia and an unwillingness to accept change. But, contrary to popular opinion, the metric system is already so well
established in the U.S. that its official adoption does not constitute the introduction of a radically new system but merely the recognition of one which is already in use in many areas. For
instance, we are used to 8-, 16-, or 35-millimeter film; doctors prescribe, pharmacists fill, and nurses administer medicine in cubic centimeter units. The consumption of electricity is measured in
watts and kilowatts, and the engine displacement of automobiles is now commonly given in cubic centimeters. Spurred on by General Motors, Ford, IBM, Honeywell, and scores of other large companies who
have announced their orderly conversion to the metric system, subcontractors, suppliers, and machine-tool manufacturers will increasingly work to metric standards. Presumably they will produce goods
in nonmetric dimensions too, for some time to come, to satisfy the replacement market. But since that market will disappear in time, production eventually will be geared exclusively to metric standards.
Block Pounds Can You Measure Up? Get the Turtle to the Pond Going Places How Many Steps? Ladybug Lengths Magnificent Measurement My Pet What Should I Measure Next? How About Me! Zoe's Pet Shelter
Bell Curve Competing Coasters Cylinders and Scale Heights of Students in Our Class How Long? How Wide? How Tall? How Deep? Makeshift Measurements Numerical and Categorical Data Pitching Cards Sizing
Up Scuba Diving in Belize Water, Water What's Your Wingspan?
Apple Pi Allison's Star Balls Area Contractor Area Formulas Castles & Cornerstones Constant Dimensions Cubed Cans Discovering Volume Garden Shed Project Hay Bale Farmer Inclined Plane Linking Length,
Perimeter, Area, and Volume Mandalas and Polygons Planning a Playground What Can Data Tell Us?
Atlatl Lessons Frieze Designs in Indigenous Art Growth Rate How Tall? Impact of a Superstar Investigating Pick's Theorem Pi Line Popcorn, Anyone? Tetrahedral Kites Varying Motion
Undergraduate Communication Requirement
CI-M Subjects for Course 18 Majors
CI-M subjects fulfill the CI component for the designated major(s). Non-majors who enroll in these subjects will have the same educational experience, but will not receive CI credit for completing
these subjects. You should check the degree charts in the MIT Bulletin to identify the CI-M subjects for your major program(s).
18 Two of:
18.104 Seminar in Analysis
18.304 Undergraduate Seminar in Discrete Mathematics
18.384 Undergraduate Seminar in Physical Mathematics
18.424 Seminar in Information Theory
18.434 Seminar in Theoretical Computer Science
18.504 Seminar in Logic
18.704 Seminar in Algebra
18.784 Seminar in Number Theory
18.821 Project Laboratory in Mathematics
18.904 Seminar in Topology
18.994 Seminar in Geometry
or one from the above list and one of:
8.06 Quantum Physics III
14.33 Research and Communication in Economics: Topics, Methods, and Implementation
18.100C Real Analysis
18.310 Principles of Discrete Applied Mathematics
18-C Two of:
18.104 Seminar in Analysis
18.304 Undergraduate Seminar in Discrete Mathematics
18.384 Undergraduate Seminar in Physical Mathematics
18.424 Seminar in Information Theory
18.434 Seminar in Theoretical Computer Science
18.504 Seminar in Logic
18.704 Seminar in Algebra
18.784 Seminar in Number Theory
18.821 Project Laboratory in Mathematics
18.904 Seminar in Topology
18.994 Seminar in Geometry
or one from the above list and one of:
6.033 Computer System Engineering
8.06 Quantum Physics III
14.33 Research and Communication in Economics: Topics, Methods, and Implementation
18.100C Real Analysis
18.310 Principles of Discrete Applied Mathematics
Mandelbrot Set
Date: 10/22/96 at 14:33:17
From: Justin Rhees
Subject: What is the Mandelbrot set?
I would like to know the definition of the Mandelbrot set and what it is.
Thank you.
Date: 10/23/96 at 17:24:17
From: Doctor Anthony
Subject: Re: What is the Mandelbrot set?
You will need to know a little about complex numbers to understand the
Mandelbrot set.
A complex number z is given by z = x + iy where i is sqrt(-1) and
x and y are real numbers.
The Argand diagram has the usual x and y axes, with REAL numbers
plotted along the x axis and the y numbers called IMAGINARY plotted
along the y axis. So if z = 3 + 4i, then z would be plotted at the
point (3,4), and would (by Pythagoras) be 5 units from the origin.
This number 5 is called the modulus of the complex number.
The Mandelbrot set is a portion of the Argand diagram which satisfies
a particular condition. To test whether a particular complex number c
is in the set, we carry out the following iteration, starting at
z = 0:
z1 = (z0)^2 + c
z2 = (z1)^2 + c
z3 = (z2)^2 + c . . . . . and on and on and on!
This iteration is continued for as long as is necessary to see if z is
heading off to infinity. If z begins to move further and further from
the origin, then the point c does not belong to the set. If x or y
becomes greater than 2 or less than -2, it is surely heading off to
infinity. But if the program repeats the calculation many times
(thousands if necessary) and neither the real nor the imaginary part
becomes greater than 2 (or less than -2), then the point c is part of the set.
The program is repeated for every point c of the complex plane (in
practice thousands of points on a grid), and the results are
displayed. Points in the set can be colored black, other points
white. For a more vivid picture, the white points can be replaced by
colored gradations. If the iteration exceeds 2 after 10 repetitions,
for example, the program might plot a red dot; for 20 repetitions an
orange dot; for 40 repetitions a yellow dot and so on. The colors
reveal the contours of the terrain just outside the set proper. The
resulting shape is remarkable for its intricate and curious geometry.
It has been described as the most complex mathematical shape ever
invented - yet you can get a computer to draw it with about ten lines
of program code.
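In the spirit of those "ten lines of program code," here is a minimal sketch of the escape-time test described above (a modern illustration, not part of the original Dr. Math answer):

```python
def mandelbrot_iterations(c, max_iter=100):
    """Iterate z -> z^2 + c starting from z = 0. Return how many steps it
    takes for |z| to exceed 2, or max_iter if the orbit never escapes
    (in which case c is taken to be in the Mandelbrot set)."""
    z = 0
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n  # escaped: c is outside the set; n can color the contour
    return max_iter

print(mandelbrot_iterations(0 + 0j))   # never escapes: the origin is in the set
print(mandelbrot_iterations(1 + 0j))   # orbit 1, 2, 5, ... escapes almost at once
print(mandelbrot_iterations(-1 + 0j))  # orbit -1, 0, -1, 0, ... is periodic: in the set
```

Plotting the returned counts over a grid of c values (black for max_iter, graded colors otherwise) reproduces the familiar picture, including the colored contours described above.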
The most startling feature of the Mandelbrot set is the way it retains
its highly complicated structure if you zoom in on it at ever higher
levels of magnification. It is infinitely scalable, so that even
after enlargements of many millions, it shows the same structure of
whirlpools, scrolls, seahorses, lumps, sprouts, cacti, coils, blobs
and zigzags. And every so often, buried deep within the structure,
perhaps a millionth of the size, you can find an exact replica of the
original shape, complete in every detail together with its own
replicas at an even deeper level.
Standard geometry takes an equation and asks for the set of points
that satisfy it. Thus we obtain simple equations for circles,
ellipses, parabolas and straight lines. But if we iterate an equation
instead of solving it, the equation becomes a process instead of a
description, dynamic instead of static. When a number goes into the
equation, a new number comes out; the new number then itself goes in
and so on, points hopping from place to place. A point is plotted not
when it satisfies the equation, but when it produces a certain kind of
behaviour. One behaviour might be a steady state. Another might be a
convergence to a periodic repetition of states. Another might be an
out-of-control race to infinity. With computers, trial and error
geometry of this sort became possible.
The Mandelbrot set is the boundary between two types of radically
different patterns and is a model for chaotic behaviour.
For a good online introduction to fractals, please see:
-Doctor Anthony, The Math Forum
Check out our web site! http://mathforum.org/dr.math/
Shorewood Prealgebra Tutor
Find a Shorewood Prealgebra Tutor
...I have had numerous students tell me that I should teach math, as they really enjoy my step-by-step, simple breakdown method. I have helped a lot of people conquer their fear of math. Before I
knew I was going to teach French, I was originally going to become a math teacher.
16 Subjects: including prealgebra, English, chemistry, French
...Talk to you soon! Kelly. I have been using Apple products daily since my first purchase in 2009. I use an iMac, iPad, iPhone, and iPod, and have tutored both adults and children on how to use these devices.
26 Subjects: including prealgebra, Spanish, geometry, chemistry
I hold a bachelor's degree in accounting and classics. I've worked in the corporate world for many years, and tutored math on the side during that time. However, I've come to realize my true passion lies in teaching mathematics, and I want to do this as much as possible.
27 Subjects: including prealgebra, chemistry, Spanish, English
...Eventually, I began teaching ACT Reading/English, began to teach math and reading to all ages, and eventually became a sought after subject tutor. Later, I would become Exam Prep Coordinator
and Managing Director of the Learning Center. However, my next venture was being involved in the martial...
26 Subjects: including prealgebra, Spanish, English, reading
...Whether it is math abilities, general reasoning, or test taking abilities that need improvements, I can help you progress substantially. I work with systems of linear equations and matrices
almost every day. My PhD in physics and long experience as a researcher in theoretical physics make me well qualified for teaching linear algebra.
23 Subjects: including prealgebra, physics, calculus, statistics
Don’t take the spray pattern lightly
The first time I studied defensive alignments, I was so proud of my analysis that I started a blog just to show that work.
Nevertheless there was something in that study that left me unsatisfied; I even expected criticism for it, and it duly arrived.
What was wrong (or needed at least a big improvement)? My analysis showed an efficient alignment to prevent balls in play from becoming base hits.
This can be accepted in a world where batting average is the queen stat, thus neither here at THT nor by a team trying to gain an edge to win one more ballgame.
The criticisms I was expecting suggested that, for example, ground balls going through the infield down the line have a different impact than rollers up the middle; thus improving your out/hit ratio doesn’t necessarily imply you are improving your run prevention. In this article I’ll show a way to take this issue into account.
Let’s take Derek Jeter’s 2008 ground balls.
In the following chart the x axis represents the batted ball trajectory angle (from -45° down the left field line, to 0° up-the-middle, to 45° down the right field line); the y axis shows in a
continuum the density of ground balls hit by Jeter.
I gave a brief explanation of the density plot a couple of weeks ago (References and Resources section); anyway, as you may have guessed, the higher the line, the more ground balls hit at the
corresponding angle.
You have surely noticed the four dotted lines on the graph. I will refer to them as the infielders’ average positioning, and I obtained them as follows:
I selected all the ground balls hit by right-handed batters (should I consider a lefty instead of Jeter, I’ll take the ground balls hit by left-handed batters) with the bases empty and resulting in
an out by one of the four infielders. Then I averaged the trajectory angle for outs made by the third baseman and considered the resulting value as the average position of third basemen. I repeated
the same process to get the average positioning for shortstops, second basemen and first basemen.
I chose the empty bases situation because infielders shouldn’t be shifted for other reasons than the batter’s spray pattern. I can’t wait for the day when we’ll have the actual positioning of the
fielders on every batted ball, but for the time being I consider the values I got as reasonable estimates.
Have another look at the density plot. Where do you think the fielders should play Derek Jeter? If you believe the dotted lines should cross the (blue) density line in places where it peaks, you are
quite right. Thus the middle infielders should play The Captain close to their average positions (maybe a little farther from the bag), the third baseman should move way toward the hole leaving the
line unguarded and the first baseman, having no peak to cover, should cheat toward his right as far as he can and still get to the base in time to receive the assists from his peers.
Right? Well, no. Again, we are minimizing the proportion of ground balls by Jeter that go through the infield for a base hit. How can we minimize his run production?
The following chart will give us some help.
This one represents (on the vertical axis) the run value of every ground ball hit by a right-handed batter and fielded by an outfielder. On the x axis we have again the trajectory angle of the batted
ball. In other words, we are looking at the run values of ground balls gone through the infield. As we would expect, we have the highest values at the corners, while there’s not much variation over
the rest of the field, except for a little hump in the left/center gap.
Now we just have to multiply one chart by the other to get what we are after. To be honest, density is not probability, thus a transformation is needed, but I believe nobody wants to hear about the details.
The resulting chart has a shape very similar to the density plot. Wait! All the smoothing, charting, integrating, multiplying to discover we could just have used the first chart? If that’s the case,
it hasn’t been a waste of time anyway. If preventing base hits on grounders is equivalent to preventing runs on grounders, that was something that needed to be proved.
Anyway, we have looked at just one player, one known to spray the ball to all directions. I ran the same kind of analysis for a bunch of hitters and I’m going to report an interesting case in the
paragraphs that follow.
Ryan Howard is a power lefty who hits home runs to the opposite field as well as to the pull side. His grounders, on the contrary, tend to cluster on the right side of the diamond, and so do
infielders of teams playing against him. Here is Howard’s density plot for grounders.
Where would you put your four infielders? (The dotted lines show the average positioning against lefty hitters). Don’t base your answer on this chart.
Let’s look at the run value of grounders hit by left-handed batters gone through the infield.
It’s a mirror image of the one for righties. Again, highest values on the corners and a hump, this time in right center. Howard hits a significant portion of grounders close to the right field line,
thus the chart that results after smoothing/integrating/multiplying shows something new.
Now we have a couple of peaks on the right instead of just one. Okay, so the first baseman should place himself toward the line, the second baseman in the hole (closer to a first baseman’s usual
position), the shortstop over the keystone, and the third baseman in the left side hole. (Basically this is how opposing ballclubs play Howard.)
Note: while placing fielders exactly on the peaks may be a good rule of thumb, we should note that there isn’t always symmetry along the peaks’ lines. Thus, the optimal positioning would be slightly
off center.
Now we know that for some hitters it’s worth doing some extra work to devise an optimal defensive alignment.
I’m not done on fielders’ positioning yet. Finding the optimal alignment would require taking into account the range of the glove men and their different abilities going to either side. It’s not an
easy task, but it can be done. Moreover, I would like to quantify (in runs prevented) the impact of moving a fielder to a better position.
…and then there is the issue of the batter’s willingness and ability to hit against the shift and all the cat-and-mouse games that would follow…
References & Resources
Data from 2008 MLBAM Gameday.
I owe you some explanations about the weighted charts. Basically I’ve cut the field into 0.2 degree slices from foul line to foul line. For each slice I calculated the probability of a ground ball
going through the slice, multiplied by the expected run value of a ground ball that reaches the outfield through that slice, multiplied by the total number of batted balls for the batter. If you sum these products across all the slices, you get the run value of the batter’s ground balls if none of them are converted into an out by an infielder.
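The slice computation described in this section can be sketched as follows (a toy illustration with made-up probabilities and run values, not the author's actual data):

```python
# Each slice of the field (0.2 degrees wide, from -45 to +45 degrees) carries:
#   p:  the probability that a ground ball goes through that slice, and
#   rv: the expected run value of a grounder reaching the outfield there.
# Summing p * rv over the slices, times the batter's total batted balls,
# gives the run value of his grounders if no infielder records an out.

def unguarded_run_value(slices, n_batted_balls):
    return sum(p * rv for p, rv in slices) * n_batted_balls

# Toy example: three slices with hypothetical numbers.
toy_slices = [(0.02, 0.9), (0.05, 0.5), (0.01, 1.1)]
print(unguarded_run_value(toy_slices, 200))  # about 10.8 runs
```

With the real 0.2-degree grid there would be 450 slices, but the sum works the same way; comparing this total for different fielder placements is what quantifies the value of a shift.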
Math Forum Discussions
Topic: How to generate continuous variable corresponding to Odds Ratio X
Replies: 3 Last Post: Nov 7, 2006 5:58 PM
Re: How to generate continuous variable corresponding to Odds Ratio X
Posted: Nov 6, 2006 12:59 PM
Haris wrote:
> Greg Heath wrote:
> > Haris wrote:
> > > I am looking to generate a continuous normally distributed random
> > > variable with a given MEAN and SD for two groups. Is there a
> > > systematic way to generate two such groups with N number of cases so
> > > that their odds ratio in a logistic regression would be predictable?
> > > In my case I am looking for OR=1.5 but any other number would do.
> > >
> > > Two normally distributed groups is the idealized case. If there are
> > > other distributions that I can use to solve this problem I would love
> > > to hear about them as well.
> >
> > I'm probably missing something. Why won't two normal distributions
> > with the same standard deviation and priors P1 = 2/5, P2 = 3/5
> > do the trick?
That's the answer to a different question. I was thinking of a 2
component univariate Gaussian mixture classification scenario
and misinterpreted the term odds ratio to be the odds (ratio of
the posterior probabilities). When s2=s1=s,
P(1|x)/[1-P(1|x)] = [P1/(1-P1)]*exp{K*(x-M)/s},
where M = (m1+m2)/2 and K = (m1-m2)/s (known in the radar target detection and classification fields as the "K-factor"). Choosing a threshold at x = M then yields odds of at least 3/2 when P1 = 3/5.
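Greg's closed form can be checked numerically against a direct Bayes computation; here is a small sketch with arbitrary illustrative parameters (my addition, not part of the thread):

```python
import math

def posterior_odds_closed_form(x, m1, m2, s, p1):
    """P(1|x)/[1 - P(1|x)] = [p1/(1 - p1)] * exp(K*(x - M)/s),
    with M = (m1 + m2)/2 and K = (m1 - m2)/s, for equal variances."""
    M = (m1 + m2) / 2
    K = (m1 - m2) / s
    return (p1 / (1 - p1)) * math.exp(K * (x - M) / s)

def posterior_odds_bayes(x, m1, m2, s, p1):
    """Direct Bayes' rule with equal-variance normal likelihoods
    (the shared normalizing constant cancels in the ratio)."""
    def kernel(m):
        return math.exp(-((x - m) ** 2) / (2 * s * s))
    return (p1 * kernel(m1)) / ((1 - p1) * kernel(m2))

x, m1, m2, s, p1 = 1.2, 2.0, 0.0, 1.5, 0.6
print(posterior_odds_closed_form(x, m1, m2, s, p1))
print(posterior_odds_bayes(x, m1, m2, s, p1))  # agrees with the closed form
# At the threshold x = M with p1 = 3/5, the odds are exactly 3/2:
print(posterior_odds_closed_form(1.0, 2.0, 0.0, 1.5, 0.6))
```

Expanding the difference of squared exponents shows why the two agree: (x-m1)^2 - (x-m2)^2 = -2(m1-m2)(x-M), so the exponent reduces to K(x-M)/s.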
> The problem I am trying to simulate has to do with two equal
> populations: those with events and without. I need to relate
> differences in the normally distributed properties of those populations
> to the presence of event. The simulation is for power calculations and
> those are normally based on 1SD difference between the two means. What
> you are proposing may work, I need to look into this. However, I am
> not sure how I would be able to link the mean difference and OR.
As stated above, I was thinking about a different problem.
Hope this helps.
Keeping Answers Pretty
Algebra may not be naturally pretty, but if we have the right foundation, blush, and lipstick, we can enhance its natural features. Of course, we won't be using literal makeup, since that makes the
monitor all smeary.
When solving for a variable, we might wind up with a formula that involves some kind of fraction. When this result happens, make sure to give the fraction in reduced form.
Sample Problem
Solve the equation 4z + 2y = 8 for z.
By rearranging, we find that
z = (8 - 2y)/4 = (4 - y)/2
Ooh la la! Hey equation—are you a model?
Another way to find the same answer is to divide both sides of the original equation by 2 to get
2z + y = 4
and then solve for z to get z = (4 - y)/2. Thankfully, we find the same answer either way. No going back to the drawing board for us.
Keeping Answers Pretty Practice:
Solve the equation 2x + 4y = 16 for y.
Solve the equation 3x + 6y = 27 for x.
Solve the equation 6x + 2y = 14 for x.
Solve the equation y - x + 3(x + 4) = 5y for x.
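One way to gain confidence in answers like these (my addition, not part of the original page) is to substitute the solved form back into the original equation for several test values:

```python
from fractions import Fraction

def check(original_holds, solved_for, test_values):
    """Return True if plugging the solved expression back into the
    original equation works for every test value of the free variable."""
    return all(original_holds(solved_for(v), v) for v in test_values)

# Example: 2x + 4y = 16 solved for y gives y = (16 - 2x)/4 = 4 - x/2.
original_holds = lambda y, x: 2 * x + 4 * y == 16
solved_for = lambda x: Fraction(16 - 2 * x, 4)  # exact arithmetic, auto-reduced

print(check(original_holds, solved_for, [-3, 0, 5, 10]))  # True
print(solved_for(2))  # y = 3 when x = 2
```

Using Fraction keeps the answer in reduced form automatically, which matches the "keep fractions reduced" advice above.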
The tub of a washer goes into its spin cycle, starting from rest and gaining angular speed steadily for 8.0 s, when it is turning at 5.0 rev/s. At this time the person doing the laundry opens the lid and a safety switch turns off the washer. The tub smoothly slows to rest in 12.0 s. Through how many revolutions does the tub turn while it is in motion?
Look at the problem in two separate parts. Before the lid is opened, the initial angular speed is 0. Your final w is 5 rev/s. Time is 8 seconds. Angular acceleration is change of angular velocity over time. Use the common form wf^2 - wi^2 = 2(alpha)(theta) and isolate theta to get the number of revolutions of the first part. You should get 20 rev. Afterwards, when the system slows down, you now have your initial speed as 5 rev/s. It comes to rest, so wf is 0. Solve for alpha. Plug it into the common form and solve for theta. You get 30 rev. Add the two parts; you should get 50 rev.
That answer is wrong
@Inspired's answer is correct. Doing it another way you get:
\[\Delta \theta = \bar{\omega}_1 t_1 + \bar{\omega}_2 t_2\]
\[\Delta \theta = \frac{(0 + 5(2\pi\ \text{rad/s}))(8\ \text{s})}{2} + \frac{(5(2\pi\ \text{rad/s}) + 0)(12\ \text{s})}{2}\]
\[\Delta \theta = 20(2\pi\ \text{rad}) + 30(2\pi\ \text{rad}) = 50(2\pi\ \text{rad}) = 100\pi\ \text{rad}\]
\[\frac{100\pi\ \text{rad}}{2\pi\ \text{rad}} = 50 \text{ revolutions}\]
For more info see: http://hyperphysics.phy-astr.gsu.edu/hbase/rotq.html
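The same kinematics in a few lines (my sketch of the computation above; angular speeds in rev/s):

```python
import math

def revolutions(omega_i, omega_f, t):
    """Angle turned under constant angular acceleration:
    average angular speed times elapsed time."""
    return (omega_i + omega_f) / 2 * t

spin_up = revolutions(0.0, 5.0, 8.0)     # rest -> 5 rev/s over 8 s
spin_down = revolutions(5.0, 0.0, 12.0)  # 5 rev/s -> rest over 12 s
total = spin_up + spin_down
print(spin_up, spin_down, total)  # 20.0 30.0 50.0 revolutions
print(total * 2 * math.pi)        # the same angle expressed in radians (100*pi)
```

Working in rev/s avoids the 2-pi bookkeeping entirely; converting to radians only at the end gives the 100*pi radians shown in the thread.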
Never Ending Thoughts...
Let’s assume your yearly base income is I, out of which you make some investment, which gives you a capital gain, G. Your tax T, in this case, would be a function of I and G:

T = f(I, G) ... (1)

For your investment to make up for all this tax, the capital gain G must be greater than or equal to the tax T:

G >= T ... (2)

Considering a flat tax rate x (as a fraction, applied to I + G), equation (2) becomes:

G >= x * I / (1 - x) ... (3)
Now, let’s see what kind of investments can actually produce that kind of gain. As you can see from equation (3), it really depends on your base income and the tax you’d end up paying. If you plot a graph of x versus x/(1 - x), as shown on the right, you can see that as the tax rate increases, it becomes harder to find an investment that would produce a gain large enough to zero out the taxes. Even worse, the required gain grows faster and faster, blowing up as the rate approaches 100%.
So how does it look for a person who makes a mere $65K a year and falls in the 15% tax bracket? He/she would need to invest such that the gain is at least 65 x 0.15 / 0.85 ≈ $11.5K. If this person invests in anything that produces a healthy 10% yearly gain, he/she should be investing 11.5/0.1 = $115K, which is roughly 1.8 times the base income. And how would it look for a person who makes $500K a year and falls in the 33% tax bracket? He/she would need to invest such that the gain is 500 x 0.33 / 0.67 ≈ $246.3K. If this person invests in anything that produces a similar 10% yearly gain, he/she should be investing 246.3/0.1 ≈ $2.46M, which is roughly 4.9 times the base income!
This means the higher your base income, the harder it will be for you to find investments that produce gains big enough to zero out your taxes. The only way you’d be able to do that is to make risky investments in the hope they’d flourish.
Lesson: do your own math; use your own brain and invest carefully. Because earning more wouldn’t necessarily give you more leverage as far as taxes are concerned :-).
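The arithmetic behind equation (3) can be sketched in a few lines (my illustration; it assumes, as above, that the flat rate applies to the combined I + G):

```python
def required_gain(income, rate):
    """Smallest gain G satisfying G >= rate * (income + G),
    which rearranges to G = rate * income / (1 - rate)."""
    return rate * income / (1 - rate)

def required_investment(income, rate, annual_return):
    """Principal needed to produce that gain at a given annual return."""
    return required_gain(income, rate) / annual_return

# Figures in thousands of dollars, 10% yearly return on the investment.
for income, rate in [(65, 0.15), (500, 0.33)]:
    gain = required_gain(income, rate)
    principal = required_investment(income, rate, 0.10)
    # 65 -> gain ~11.47 (~1.8x income); 500 -> gain ~246.27 (~4.9x income)
    print(income, round(gain, 2), round(principal / income, 1))
```

The ratio principal/income is what grows so quickly with the tax rate: at 15% it is under 2x, at 33% nearly 5x, and it diverges as the rate approaches 100%.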
Java Geolocation API
This is my Java implementation of the Geolocation API specifications put together by W3C: https://bitbucket.org/gautamtandon/java-geolocation
The reason I wrote this implementation is because I often had to deal with converting IP addresses to geographic location using some sort of IP to geolocation database feed, and I realized that while
there are so many such utilities available, none of them follow a standard approach. This becomes a challenge specially when you want to prototype something quickly so you pick up a cheap/free
library just to prove your point, but then later when it comes to developing “the actual product” you can’t really reuse much of your geolocation code because now you are dealing with some
“production grade” geolocation service. And of course, your cheap/free geolocation provider doesn’t follow the same API signatures, etc. that your production grade geolocation service provider uses.
Worse, neither of them follows any common industry standard at all!
My code is available under the MIT License, and I hope one day all geolocation service providers out there will use this code (or its variant) in order to provide a standardized way to perform IP to geolocation conversion.
Please check it out and do not hesitate to get back to me if you have questions, doubts, suggestions, etc. You can simply reply to this post, or find my contact information in the MIT License file.
Will there ever be a “Software Architectural Wonder” in the World?
The field of Architecture has played a vital role throughout the history of mankind. It has helped build structures that have lasted for centuries; some of them are so breathtaking that we call them
“the wonders of our World”. It has played a key role in battles throughout history, and it has even triggered revolutions. In some sense every architectural marvel acts as a tribute to its
civilization. It’s a way for a civilization to pass on their message to its future civilizations that anything can be accomplished if you believe in yourself.
Being in touch with the “Software World” for over two decades it never struck me how differently we perceive the word “Architecture” when it comes to “Software”. Could an “Architectural Wonder”
really exist in the World of “Software”? Would there ever be “that amazing piece of software” that people say two hundred years from now would look at and go spell bound, and really appreciate what
our civilization had accomplished with “so little software knowledge” as compared to them? Would there ever be references made to “great architectural work done in software”, that perhaps would help
future generations understand the cultural values of our current society?
Today, software is everywhere, and yet it’s nowhere! From the very basic program burnt into the Flash ROM of a quartz watch to sophisticated software programs that coordinate flight plans, run trains
and telephones across the globe, we have woven software everywhere into our lives. But it’s not visible at all. Perhaps it is the most “hidden” form of product ever developed by mankind. So how a
“Software Architectural Wonder” would look like? How would we “see” it? And moreover, how would future generations look at it? Is a “software program” really capable of reflecting the true cultural
values of a society just like other (non-software) works of art are? And if so, how?
Perhaps one could say that the Internet and the TCP/IP standard are the true testament of mankind’s software architectural capability, because they are responsible for the “connected World” as we see it today. Or perhaps the UNIX operating system is the true epitome of mankind’s software architectural achievement, because it is in a way the genesis of all modern operating systems – the very basic code that brings “life” into the piece of silicon, metal and plastic we call a “computer”. If you think along these lines, then perhaps a “Software Architectural Wonder” would be more comparable to an outstanding work of “poetry” or “painting” than to a “physical structure”. The question then becomes – are current software programming languages rich enough to produce such an outstanding work? In other words, do the basic building blocks of current programming languages allow us to produce a work of art so “beautiful” and “meaningful” that it would truly be respected by future civilizations as a “Software Architectural Wonder”? One could argue that we already have those building blocks. Just as a painting masterpiece, such as a sketch, can be created with only two contrasting colors – black and white – so can a “Software Architectural Wonder”. Only in the case of software, the fundamental building blocks would be the binary Zero and One.
I believe the question of whether there will ever be a “Software Architectural Wonder” in the World can be better answered by understanding the “philosophy of writing software”. Does our society write software only to solve certain problems? Or is it also written to express a thought? Is it “beautiful”? Is it “meaningful”? Is it worth “cherishing”? Perhaps only when software development matures to a point where its use goes beyond the basic need of solving problems will our society be ready to create a true “Software Architectural Wonder”.
Looked at from a different angle, it took mankind literally hundreds of years to reach the point where spoken and written languages could be used for purposes beyond basic communication. Perhaps in the same manner it shall take a few centuries before we see a true “Software Architectural Wonder”. Until then, let’s keep learning and perfecting our software development skills. This is to all the amazing software engineers out there – some day one of You will be acknowledged as the creator of a “Software Architectural Wonder”!
jQuery plugin for showing informational popup
I recently created a jQuery plugin for showing an informational popup right next to any HTML element. I have made it available at https://bitbucket.org/gautamtandon/jquery.attachinfo. It is free and open source (under the MIT license). All the information related to this plugin is available at that link, so I’m not going to duplicate it here :-).
If you are using jQuery and need something like this, give it a try! And feel free to send me your feedback!
Here’s how it looks:
Mathematical proof of why most quarrels generally start because of one single stupid reason
Two days back I had a heated argument with my wife. The good part is, like every other time, we quickly patched things up and life was back to normal. Later, we looked back and laughed at how stupid we were, fighting over a “small thing” that didn’t even matter anymore!
The next day, when I was introspecting (like I usually do when I’m stuck in an hour-long jam on I-880) and replaying the whole situation in my head, it struck me: how did that “one small thing” become such a “big thing” after all? Big enough to shake the entire relationship, even if only for a moment? And why is there generally always “that one thing” that unleashes the “beast” in us? When people have quarrels, why don’t they generally say “It all started because he or she did so, and so, and so”? Or maybe “It all started because he said this and this and that to me”? It’s generally always “just one stupid reason”. Why?
I’m sure many people would try to answer this from a psychological point of view. Some would perhaps throw in a biological angle. And a few would choose the path of Mathematics. I decided to go in that direction and see if I could come up with some conclusion, mathematically, behind it all. This post is about my quest to understand a part of human psychology using Mathematics. Here’s my take:
1. A quarrel can only begin if there are at least two people involved. This is kind of “obvious”, but since we are under the shelter of Mathematics, let’s call it out loud.
2. A quarrel is always initiated by one person. While many quarrels may seem to start with both people taking action at the same time, if you take a careful look you’ll notice that it is always initiated by one person. After that it may continue back and forth.
3. A quarrel can be described as the inverse of understanding. In other words, when someone quarrels with someone, his/her “understanding” of the other person depletes. This again is one of those “obvious” things, but it’s important to call it out here.
To make our equations easy to discuss, let’s assume there are two individuals, $a$ and $b$, and that $a$ started the quarrel. With these assumptions, let’s define Quarrel as $Q$ and the understanding of person $a$ about person $b$ as $U(a, b)$. So our equation of the quarrel between $a$ and $b$ becomes:
Now let’s dig deeper into the function $U(a, b)$ because that seems to be the central theme here. $U(a, b)$ signifies the “understanding” person $a$ has for person $b$. This can be further broken down into a function of the traits that $b$ shows and how much weight $a$ gives to each of those traits. Hence each “weighted trait” $T_{i}$ can be represented as:
Here $w_{a_{i}}$ is the weight person $a$ assigns to the $i^{th}$ trait $t(b_{i})$ of person $b$.
Since traits can be positive or negative we can safely assume that the best way to combine traits in order to form the understanding will be by performing a root mean square of all the $T_{i}'s$.
This brings us to our third equation:
Combining $(2)$ and $(3)$ gives us:
In the case of a quarrel, the understanding $U(a, b)$ becomes zero. Let’s represent that as the limit $\mu \rightarrow 0$. Hence $(4)$ becomes:
Further simplification of this equation gives us:
Since $w^2_{a_{i}}t^2(b_{i})$ will always be a positive number, the only way equation $(6)$ can resolve is if all the $w^2_{a_{i}}t^2(b_{i})$ values are significantly small. In practical terms this becomes harder to achieve as the value of $n$ increases: the higher $n$ becomes, the more traits are being evaluated, and the more traits are evaluated, the harder it is for their sum to approach zero! Hence, the least possible value of $n$ gives us the highest probability of a quarrel. That means, for the highest probability of a quarrel, $n = 1$. In other words person $a$ only considers one trait of person $b$. Hence, we have:
Assuming traits in general don’t vary drastically, while weights may change with the “heat of the situation”, we can combine equations $(6)$ and $(7)$ and arrive at this:
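The numbered equations in the original post were embedded as images and did not survive; reconstructed from the definitions in the surrounding text (so the exact original notation is a guess), they presumably read:

$Q(a, b) = \dfrac{1}{U(a, b)}$  $(1)$

$T_{i} = w_{a_{i}} \, t(b_{i})$  $(2)$

$U(a, b) = \sqrt{\dfrac{1}{n} \sum_{i=1}^{n} T_{i}^{2}}$  $(3)$

$Q(a, b) = \left( \dfrac{1}{n} \sum_{i=1}^{n} w_{a_{i}}^{2} \, t^{2}(b_{i}) \right)^{-1/2}$  $(4)$

$U(a, b) = \mu, \; \mu \rightarrow 0 \;\Rightarrow\; \sqrt{\dfrac{1}{n} \sum_{i=1}^{n} w_{a_{i}}^{2} \, t^{2}(b_{i})} \rightarrow 0$  $(5)$

$\sum_{i=1}^{n} w_{a_{i}}^{2} \, t^{2}(b_{i}) \rightarrow 0$  $(6)$

$w_{a_{1}}^{2} \, t^{2}(b_{1}) \rightarrow 0 \quad \text{(with } n = 1\text{)}$  $(7)$

$w_{a_{1}} \rightarrow 0 \quad \text{(with } t(b_{1}) \text{ held fixed)}$  $(8)$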
In other words, equations $(7)$ and $(8)$ tell us that the possibility of a quarrel becomes extremely high when person $a$ “judges” person $b$ on only the one trait that stands out during the argument, and applies a very low weight to that trait. In doing so person $a$ completely disregards all the other traits of person $b$ that otherwise would have created a healthy understanding $U(a, b)$ between the two of them – enough not to lead to any quarrel.
When you are in a heated debate with someone, while you may be “tempted” to pick just that “one trait” that stands out at the moment, try to look at as many other traits as possible. This will help you “judge” the person in a much more “neutral” way and will possibly avoid quarrels in the first place! It’s common sense! It works! And Mathematics has just proved it!
Are you a bot or a human – php utility
Are you looking for a quick and easy way to detect whether a call to your server came from a human or a bot? Here’s a simple PHP utility that I wrote for just that: https://bitbucket.org/
It’s extremely simple; it does not require any extra software or changes in your code; and last but not least, it’s free and open source!
Lego Mindstorms NXT Touchless Motion Sensing Faucet
This is my first Lego creation using the Mindstorms NXT set. After trying out the usual basic creations such as the ball sorter and the car, I wanted to build something that would be just as much fun, simple, and at the same time useful in day-to-day activities. So I came up with the idea of converting our normal kitchen faucet into a “touchless faucet”. And around two hours later, here’s what I was able to do:
I used one motor and a gear assembly to handle the kind of power needed for turning the faucet.
Most of the assembly actually goes into securing the motor and gear system to the faucet as tightly as possible.
The ultrasonic sensor at the top senses if there’s anyone in front of the faucet.
As soon as someone approaches within 20 inches range, the sensor notifies the mindstorms brick, which turns on the faucet. Once the person moves farther than 20 inches, the faucet is turned off.
Here is the programming that went behind this.
I start by setting a logic variable called “isOpen” to “false”. This signifies that the faucet is currently closed.
Then, every 2 seconds, I check whether there’s someone within 20 inches using the ultrasonic sensor. If there is someone within range and the “isOpen” variable is “false”, I run the motor to open the faucet and set “isOpen” to “true”. Otherwise, I run the motor the other way to close the faucet and set “isOpen” to “false”. Simple!
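As a sketch, the control loop above can be written out in Python. The sensor-reading and motor functions here are hypothetical stand-ins for the Mindstorms blocks, and I’ve added a small guard so the faucet isn’t re-closed while someone is still standing in range:

```python
THRESHOLD_INCHES = 20  # trigger distance from the ultrasonic sensor
POLL_SECONDS = 2       # how often the real brick polls the sensor

def faucet_controller(read_distance, open_faucet, close_faucet, ticks):
    """Poll the sensor and toggle the faucet.

    read_distance: callable returning the measured distance in inches.
    open_faucet / close_faucet: callables that drive the motor.
    ticks: number of polling iterations (the real loop runs forever).
    Returns the final open/closed state.
    """
    is_open = False  # the faucet starts closed
    for _ in range(ticks):
        distance = read_distance()
        if distance <= THRESHOLD_INCHES and not is_open:
            open_faucet()
            is_open = True
        elif distance > THRESHOLD_INCHES and is_open:
            close_faucet()
            is_open = False
        # on real hardware: sleep POLL_SECONDS between readings
    return is_open
```

A dry run with simulated readings (someone approaching, lingering, then leaving) produces exactly one "open" followed by one "close".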
Nested loops with mapply
December 31, 2012 7 Comments
So as I sink deeper into the second level of R enlightenment, one thing troubled me. “lapply” is fine for looping over a single vector of elements, but it doesn’t do a nested loop structure. These tend to be pretty ubiquitous for me; I’m forever doing the same thing to a set of two or three different variables. “apply” smells like a logical candidate, but it will really only allow you to do the same operation over a set of vectors. Meh. “tapply” is more of the same, but applies over a “ragged” array. “mapply”, however, fits the bill. As it turns out, using mapply is incredibly easy. I found that the trickiest thing to implement is the logic to create the set of all possible combinations over which I want to loop.
Let’s look at that first. Say that you have three variables. To keep things simple, each one is a two-dimensional character vector as below.
a = c("A", "B")
b = c("L", "M")
c = c("X", "Y")
I poked around for a function that would easily render the Cartesian product of those three vectors. “interaction” seemed like a natural choice, but it seems to want to work with factors, and my first attempts to use it returned an error which had something to do with the number of elements. Diagnosing errors in R can be a Kafka-esque adventure and you have to choose your battles. I decided to look elsewhere. If you only have two vectors, it’s easy to handle manually: just replicate each, order one of them and bind the results together, sort of like this:
var1 = rep(a, length(b))
var1 = var1[order(var1)]
var2 = rep(b, length(a))
df = data.frame(a = var1, b = var2)
The ordering step is necessary so that all combinations are represented. So, this is fine for two variables, but won’t work for three or more. Extension of the idea above is straightforward, though: after two variables you have a matrix, and you simply need to replicate it just as you would a vector. I coded a function that takes two arguments. The first is a matrix (or a vector) and the second is the next vector we want to cross with it.
CartProduct = function(CurrentMatrix, NewElement)
{
  if (length(dim(NewElement)) != 0)
  {
    warning("New vector has more than one dimension.")
    return (NULL)
  }
  if (length(dim(CurrentMatrix)) == 0)
  {
    CurrentRows = length(CurrentMatrix)
    CurrentMatrix = as.matrix(CurrentMatrix, nrow = CurrentRows, ncol = 1)
  } else {
    CurrentRows = nrow(CurrentMatrix)
  }
  var1 = replicate(length(NewElement), CurrentMatrix, simplify = F)
  var1 = do.call("rbind", var1)
  var2 = rep(NewElement, CurrentRows)
  var2 = matrix(var2[order(var2)], nrow = length(var2), ncol = 1)
  CartProduct = cbind(var1, var2)
  return (CartProduct)
}
Note that using rep or replicate with a character matrix may not give you the results you intended: rep converts a matrix into a one-dimensional array. So I coerce results into matrices and replicate using a list structure, rather than the simplified result from replicate.
So. Nested loops. At this point, it’s easy.
someFunction = function(a, b, c)
{
  aList = list(a = toupper(a), b = tolower(b), c = c)
  return (aList)
}
mojo = CartProduct(a, b)
mojo = CartProduct(mojo,c)
aList = mapply(someFunction, mojo[,1], mojo[,2], mojo[,3], SIMPLIFY = F)
Compare this with the following:
for (a in 1:length(a))
{
  for (b in 1:length(b))
  {
    for (c in 1:length(c))
    {
      aListElement = someFunction(a, b, c)
    }
  }
}
Ugh. Note that with mapply you can’t easily do things like check for critical values and break out early. But for execution over many categories this will spare me a bit of sanity.
7 Responses to Nested loops with mapply
1. I didn’t think it was all that unobvious, at least for “sapply”. I’ve written functions which call sapply(sapply(sapply(stuff,…),stuff…),stuff) pretty regularly.
□ Although I find that a novel use of sapply, I don’t think it would work in this case. The nesting of sapply which you illustrate presumes that the output of a function may also serve as the
input. The specific problem which I had was rapid application of a function which pulls NFL results for a single season for a single team. Given a vector of seasons and a vector of teams, I
wanted to get a dataframe which had results for all teams and all seasons. Having done that, I thought about how to apply it generally to problems where I was simply running the same function
against combinations of many variables.
2. FYI, an even easier way to form the cartesian product: expand.grid(a, b, c)
3. you should check out expand.grid. It is in base and works well for this.
□ One of the trickiest things about R is that it’s so hard to identify solutions that you presume must be out there. I had done some searching on the word “cartesian” but only turned up some
ggplot2 functions. No way in the world would I have guessed that expand.grid was what I was looking for. Great tip!
4. How about something like:
whatever.summary <- ddply(.data=whatever, .variables=c("a", "b", "c"), some.function)
□ The ply functions are ones that I’m not yet familiar with. I expect I’ll be looking into them eventually. Thanks for the suggestion!
Patent US20030036988 - Computerised financial services method
[0001] This application is a continuation-in-part of my International Application number PCT/GB01/03948, with the International filing date of Sep. 4, 2001, and publication number WO 02/23418.
[0002] This invention relates to a computerised method for valuing shares for use in identifying whether they are overvalued or undervalued, and to apparatus and programs for use in the method.
[0003] The valuation of entire businesses (known as “enterprise” or “entity” value) and groups of companies is an important tool in measuring the worth of their shares on the stockmarkets. It is also
important in identifying whether current share prices are too high or too low.
[0004] Professional investors and investment advisors use computerised information systems to retrieve data and forecasts about individual companies. They also use computerised systems to value
shares and groups of shares.
[0005] Background—Computerised Share Valuation & Information Systems
[0006] There are many different types of system used by professional investors. However their aim is similar, namely to assist with the valuation of shares and the management of portfolios of shares.
To do this they use a wide variety of data which helps them to predict the likely price of their investments in the future and to establish whether they should buy or sell the shares in a particular
company (or corporation).
[0007] There are also a number of organisations that provide data to professional investors. These companies are known as “estimate” or “forecast” providers. Two well-known companies are First Call
and I/B/E/S International (both now merged as part of the Thomson Financial group of companies), both of which collect from share analysts and brokers their forecasts relating to the future
performance of companies. Other similar companies include Multex, Zacks and JCF.
[0008] Having collected broker forecasts these estimate providers then use computer systems to manipulate and present this data in a variety of ways. This is stored in databases which are then sold
either directly to investment fund managers (along with the software needed to manipulate and extract the information) or indirectly through the systems of other information providers such as Factset
and Reuters, who have their own manipulation and reporting software.
[0009] The use of computers is essential in modern investment. For example, I/B/E/S carries data for approximately 18,000 companies in 56 countries. For each of these 18,000 companies they collect, store and manipulate numerous different forecast data items for up to 5 different years.
[0010] Share price indices also rely on computers; the Dow Jones and FTSE index are recalculated continuously. Investment fund managers and analysts regularly perform complex calculations on their
own specific share portfolios, which could not be done without computerised valuation and management systems.
[0011] Background—Enterprise Value (EV)
[0012] In the investment community one valuation measure in common use is called Enterprise (or Entity) Value (EV). This measure represents the market value of the capital the company uses within the
business. It consists of the following components.
[0013] The most important component is market capitalisation, which is the price an investor would need to pay in order to buy all the issued shares of the company at today's share price. Effectively
this is the market value of the equity (or share) capital of the company, that is, the number of shares multiplied by the share price.
[0014] The second component is the value of the debt capital within the company. This is the long term borrowing of the company.
[0015] The third component is optional and represents adjustments to the first two components. Different people adjust Enterprise Value in different ways. For example, some make adjustments to cash,
others to pension fund liabilities, and some for assets that are not key to the operational performance of the business.
[0016] The above can be summarised by the expression:
EV = MC + VOD + ADJ
[0017] where MC is market capitalisation, VOD is the value of debt, and ADJ represents any adjustments.
[0018] Whether or not adjustments are made, the rationale behind EV is however the same, namely to establish the market value of the capital employed within the entire company.
[0019] Background—Valuations Based on Enterprise Value
[0020] To aid the comparison of the values of different companies, EV is often expressed as part of a ratio, such as EV/Sales or EV/Cash-flow. The use of EV-based ratios enables one company to be
compared with another. For example company A may have an EV of $1 million and company B may have an EV of $100 million. As such their performance and values are difficult to compare. But if the sales
of company A are $500,000 and of company B are $10 million then the ratios of EV/Sales for these companies are 2:1 for company A and 10:1 for company B. In other words, for every dollar of capital
invested in these organisations, company A generates 50 cents of sales but company B only generates 10 cents. So company A, although smaller, may be a better investment than company B. Looking at it
another way, company A may be argued to be undervalued and company B may be overvalued.
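The EV/Sales comparison above can be sketched in a few lines of Python, using the same figures as in the text:

```python
def ev_to_sales(enterprise_value, sales):
    """EV/Sales ratio: market value of capital per dollar of sales."""
    return enterprise_value / sales

# Company A: EV $1 million, sales $500,000 -> ratio 2:1
ratio_a = ev_to_sales(1_000_000, 500_000)
# Company B: EV $100 million, sales $10 million -> ratio 10:1
ratio_b = ev_to_sales(100_000_000, 10_000_000)

# Sales generated per dollar of capital employed is the reciprocal:
sales_per_dollar_a = 1 / ratio_a  # 50 cents
sales_per_dollar_b = 1 / ratio_b  # 10 cents
```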
[0021] Background—Discounted Cash Flow
[0022] For many years investors have realised that $1 received in one year's time is worth less than $1 received today. They recognise that money has a “time value” and that interest is earned to
compensate for that time value. So $100 invested for one year at 10% interest may be worth $110 in one year's time. The interest rate is often expressed as the “cost of capital” invested. Another way
of looking at this is that, at a cost of capital of 10%, the worth today of $110 received in one year's time is $100 ($110/1.1=$100). This $100 figure is known as the “present value” or PV of the
$110 receipt in one year's time. We have “discounted” the future cash flow in order to arrive at this present value—hence the term discounted cash flow or DCF.
[0023] The PV of a sum R received in period “n” is therefore defined as:
PV = R / (1 + i)^n
[0024] where R is the sum received, “i” is the interest rate expressed as a decimal, and “n” is a variable number of years in the future.
[0025] The method (also known as NPV or net present value) can apply to receipts in any particular year. So the present value of $121 received in 2 year's time is also $100 if we discount using a 10%
cost of capital; that is $121/(1.1)^2.
[0026] If moneys are received in more than one year the PV will be the sum of the receipts, suitably discounted. So the PV of $110 received at the end of year one and $121 received at the end of year
two, using a 10% cost of capital will be $200; namely $110/1.1 plus $121/(1.1)^2.
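The discounting described above is straightforward to sketch in Python (figures from the worked example):

```python
def present_value(cash_flows, rate):
    """PV of a stream where cash_flows[k] is received at the end of year k+1,
    discounted at the given cost of capital."""
    return sum(cf / (1 + rate) ** (k + 1) for k, cf in enumerate(cash_flows))

# $110 received at the end of year 1, discounted at 10% -> $100
pv_one_year = present_value([110], 0.10)
# $110 at the end of year 1 plus $121 at the end of year 2 -> $200
pv_two_years = present_value([110, 121], 0.10)
```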
[0027] This DCF method has been used as one of the ways of valuing the shares of companies for many years. In this case the receipts are the stream of dividends paid by the company to its
shareholders. This is known as the Dividend Discount Model (DDM). Variants of the model have also been developed to reflect growth in the dividend stream.
[0028] The DDM has also been modified to reflect the fact that some companies do not pay dividends. The amended models use the earnings or cash generated by the company in each accounting period, instead of the dividends it pays. The rationale is that the company could pay these amounts as dividends but may choose not to, in order to reinvest for the future.
[0029] The current theory of company valuation using DCF is summarised in Bartley J. Madden's book “CFROI Valuation” (Cash Flow Return On Investment) Butterworth-Heinemann Finance, 1999, ISBN 0 7506
3865 6. He states on page 9 that “the firm's warranted value is driven by a forecast net cash receipt (NCR) stream which is translated into a present value by use of the investor's discount rate”.
[0030] In theory the forecasts for the net cash receipt stream (also known as “free cash flow” or FCF) should continue to infinity. In practice such forecasts tend to be for a limited number of years
so, in order to compensate for the lack of forecasts beyond the horizon, a “terminal value” (TV) is often substituted into the formula at time “n”, the date at which forecasts finish.
[0031] We have appreciated that the forecasts produced by estimate providers can be of direct use in the calculation of share valuations using the DCF or NPV method.
[0032] Background—Economic Profit and EVA
[0033] Stern Stewart & Co. of New York, USA, have commercialised a particular version of economic profit which they have trademarked as EVA, economic value added. This is defined as earnings less a
charge for the book value of capital invested at the beginning of each period. The charge is calculated using cost of capital and it is based on balance-sheet or accounting values, suitably adjusted
in accordance with Stern Stewart's methods.
[0034] Adjustments that may be made include for example: capitalising research and development and long-term marketing expenses and depreciating them over future periods; adding acquired goodwill
into the capital employed number (where this has not been followed in the normal accounting policies of the company); changing the method of depreciation; and capitalising leases and treating them as
if the assets had been purchased and the money borrowed.
[0035] So, the initial adjusted balance-sheet value is used when measuring the EVA for period 1. However when measuring the EVA for period 2 a different figure is used.
[0036] Background—Residual Income (RI)
[0037] EVA is an example of a well-known technique called residual income valuation, which uses DCF methods but which also deducts an interest charge to cover the cost of capital from the earnings
for each period considered. There are however problems with the existing RI techniques, some of which are described in a paper by James A. Ohlson “Residual Income Valuation: The Problems”, Stern
School of Business, New York University, New York City, N.Y. 10012, USA; March 2000.
[0038] We have appreciated that one particular problem is that the existing RI techniques are optimised for corporate use and do not provide optimum results for an investor in the company.
Furthermore, the existing commercial RI models necessitate complex adjustments.
[0039] The invention in its various aspects is defined in the independent claims appended to this description, to which references should now be made. Advantageous features of the invention are set
forth in the appendant claims.
[0040] In accordance with this invention a measure of residual income is obtained which is not based on adjusted balance-sheet values. Instead a measure of residual income is obtained using a
valuation known as “Enterprise Value” or EV to calculate the charge for the cost of capital. Enterprise value is determined by adding market capitalisation and debt, with optional adjustments. The
cost of capital charge (EV×i) is deducted from the cash flow or earnings.
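A minimal sketch of the EV-based residual-income calculation described here (the function names and forecast figures are illustrative, not from the source):

```python
def enterprise_value(market_cap, value_of_debt, adjustments=0):
    """EV = MC + VOD + ADJ, as defined earlier in the text."""
    return market_cap + value_of_debt + adjustments

def ev_plus(cash_flows, ev, cost_of_capital):
    """EV-based residual income: each period's cash flow or earnings
    less a capital charge of EV * i, with EV held at today's value
    rather than period-opening balance-sheet capital."""
    charge = ev * cost_of_capital
    return [cf - charge for cf in cash_flows]

ev = enterprise_value(market_cap=800, value_of_debt=200)      # EV = 1000
stream = ev_plus([120, 130, 140], ev, cost_of_capital=0.10)   # charge 100/period
```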
[0041] This way of calculating the charge for cost of capital can be said to comply with RI theory more accurately than the prior methods.
[0042] As an alternative to using the enterprise value, it is also possible to base the residual income calculation simply on the market capitalisation. In this case, the interest rate used is the
cost of equity capital only, and the cash flow/earnings used in the calculation are taken after deduction of interest paid.
[0043] In either case a number of useful subsidiary metrics can be developed and calculated by special computer programs as discussed below. These metrics can be scored and evaluated in various ways
so that the results can be added together by the software programs. The software can then produce a ranking list of the shares being evaluated, identifying those with the highest and lowest scores,
representing those that are the most (or least) undervalued (or overvalued). As a result these programs and metrics help to identify overpricings and underpricings in shares.
[0044] The invention will now be described in more detail, by way of example, with reference to the accompanying drawings, in which:
[0045]FIG. 1 is a high-level flow chart illustrating the main steps involved in a program embodying the invention in a first aspect and which produces new valuation measures based upon EV+.
[0046]FIG. 2 is a flow chart illustrating the steps for calculating the enterprise value in the method of FIG. 1.
[0047]FIG. 3 is a flow chart illustrating the steps for calculating the cost of capital in the method of FIG. 1.
[0048]FIG. 4 is a flow chart illustrating the steps for calculating the residual income stream EV+ in the method of FIG. 1, using EV as an input.
[0049]FIG. 5 is a flow chart illustrating the steps for calculating the warranted enterprise value and absolute overpricing or underpricing in the method of FIG. 1.
[0050]FIG. 6 is a flow chart illustrating the steps for calculating examples of the subsidiary valuation metrics in the method of FIG. 1.
[0051]FIG. 7 is a flow chart illustrating the steps for calculating the aggregates of subsidiary valuation metrics in the method of FIG. 1.
[0052]FIG. 8 is a flow chart illustrating the steps for calculating tables of relative valuation in the method of FIG. 1.
[0053]FIG. 9 is a high-level flow chart illustrating the main steps involved in a program embodying the invention in a second aspect which produces new valuation measures based on MC+.
[0054]FIG. 10 is a flow chart illustrating the steps for calculating a market capitalisation value for use in the method of FIG. 9.
[0055]FIG. 11 is a flow chart illustrating the steps for calculating the cost of equity capital in the method of FIG. 9.
[0056]FIG. 12 is a flow chart illustrating the steps for calculating the residual income stream MC+ in the method of FIG. 9.
[0057]FIG. 13 is a flow chart illustrating the steps for calculating the warranted market capitalisation MC[w] and absolute overpricing or underpricing in the method of FIG. 9.
[0058]FIG. 14 is a flowchart illustrating a routine for calculating examples of subsidiary valuation metrics based on market capitalisation values, in the method of FIG. 9.
[0059]FIG. 15 is a flow chart illustrating a routine for calculating the aggregates of subsidiary valuation metrics like that shown in FIG. 14, in the method of FIG. 9.
[0060]FIG. 16 is a flow chart illustrating the steps for calculating tables of relative valuation in the method of FIG. 9.
[0061] It is possible to represent the numbers in a DCF calculation in different ways yet achieve an identical result. We showed above that the PV of $110 received in 1 year, using a 10% cost of
capital, is $100. This is another way of saying that the warranted value or present value (PV) of that receipt is $100. But, if it were to cost us only $90 today to secure that receipt in the future,
then it would have a net present value (NPV) of $10 ($100-$90). This is the NPV of the investment. It represents the underpricing (or overpricing) of the investment.
[0062] Another way of arriving at this underpricing, or NPV, figure is to deduct an interest charge from the $110 receipt in order to arrive at the residual income and then to discount that. In other
words, the cost today of $90 would attract an interest charge of $9 (using the same 10% cost of capital). If we were to deduct $9 from the anticipated receipt of $110 we would have a residual income
of $101. If we now deduct the original cost of $90 we have a figure of $11 ($101-$90), which represents the surplus of the residual income over the original cost. Discounting this at 10% to give a PV
we arrive at $10 ($11/1.1), the underpricing, or NPV, of the cash flow receipt and payment. This is identical to the result produced in the alternative calculation above.
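The two routes to the $10 underpricing can be checked numerically (a sketch using the figures from this example):

```python
cost_today = 90
receipt = 110
rate = 0.10

# Route 1: discount the receipt, then subtract the cost (NPV).
npv = receipt / (1 + rate) - cost_today          # 100 - 90 = 10

# Route 2: residual income -- deduct an interest charge on the cost,
# subtract the cost, then discount the surplus.
residual_income = receipt - cost_today * rate    # 110 - 9 = 101
surplus = residual_income - cost_today           # 101 - 90 = 11
ri_value = surplus / (1 + rate)                  # 11 / 1.1 = 10
```

Both routes yield the same underpricing, which is the equivalence of RI and DCF that the text relies on.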
[0063] The calculation would be more complex for multiple periods but the same principles apply. These complexities are explained in relation to the determination of EV[w] described in detail in the first embodiment below.
[0064] So RI techniques can be seen to produce the same results as DCF calculations.
[0065] When calculating RI the interest charge is calculated by reference to the value of the original investment. In our example, $90×10%=$9.
[0066] The residual income (RI) model is a variant of the Discounted Cash Flow Model (DCF) that has been commercialized as the way of measuring economic profit, or shareholder value added in a given
period. Stern Stewart's EVA is a good example of this as described above, though other DCF-based metrics are available. EVA is defined as a company's earnings, suitably adjusted, less a charge for
the capital employed. Capital employed is defined by reference to adjusted balance-sheet values at the beginning of the period in which the RI is being measured. The known EBO (Edwards-Bell-Ohlson)
model is another example of balance-sheet-based DCF valuation.
[0067] Balance sheets contain valuation problems that impact on RI results. To overcome the valuation problems in balance sheets and intangibles, commercial RI models necessitate complex adjustments.
[0068] We have appreciated that corporates and investors might choose to measure residual income differently. Their perspectives on the “investment”, or capital employed, are different. For RI
calculations corporates might utilise the market values of assets in the balance sheet, as proposed by Stern Stewart with EVA, but investors should substitute the market values of their investment or
the company as a whole, i.e. the values of the shares (or market capitalisation) or enterprise value. However they fail to do this because existing RI measures are based on the balance sheet.
[0069] The time period when “capital” is measured is also critical to RI calculations. It can be argued that pure RI mathematics requires the charge to be based on the value of capital invested
today, even though commercial RI models use capital at the beginning of each period to generate an intuitive “economic profit”. This can seriously affect the capital charge made in computing the RI
and therefore the valuation of any share price that may result.
[0070] We have appreciated that, since RI techniques generate identical results to DCF calculations, RI can also be used to value shares in a similar way to the use of the DCF share valuation model.
[0071] We have appreciated that by using alternative valuation methods (employing EV as opposed to the balance-sheet value of capital employed) and timings (using the opening EV number to calculate
the interest charged in all subsequent periods), a new RI model can be developed specifically for investors. This model can itself generate a family of valuation metrics, and these can all be
produced by software to aid with share valuation and selection. This model is different from existing valuation approaches in 3 significant ways:
[0072] It uses EV, not balance-sheet values, as the basis for the charge in the RI calculation. We call this metric “Enterprise Value Added” or “EV+”.
[0073] It preferably uses the current value of EV as the basis for the capital charge, irrespective of the period for which the RI is being measured. Existing approaches use the balance-sheet values
at the beginning of each different period.
[0074] It can exploit the data available from investment information providers to produce a large variety of new valuation metrics.
[0075] We have also appreciated that the new model can be varied in a number of valuable ways, and that, in order to compute values for all listed companies and allow flexibility in the calculations,
it is desirable to use sophisticated software delivery systems. In particular:
[0076] the market capitalisation of the company can be substituted for EV. This is described with reference to the second embodiment.
[0077] the calculation of EV can be modified to include or exclude optional adjustments.
[0078] the valuation of debt can be done in a variety of ways.
[0079] the interest rate used to calculate the capital charge can be computed in a variety of ways.
[0080] the user may prefer to calculate metrics and values based upon earnings, dividends or cash flows.
[0081] the terminal value can be computed in a variety of ways.
[0082] the choice of metrics, scoring methods and reporting formats is a choice to be exercised by the user.
[0083] Using this EV+ model it is possible to create a series of RI-based share valuation metrics similar to EPS (earnings per share), P/E (price to earnings ratio) and PEG (P/E growth) but
incorporating proper RI techniques. These metrics are primarily useful for screening shares within a given sector to highlight those that, prima facie, appear relatively overpriced or underpriced
over the medium term.
[0084] Strictly, the RI model is based on future cash flows. Multiple-period residual-income forecasts can be computed using data from providers such as First Call, I/B/E/S and others. Such data
tends to be in the form of “consensus” information, namely the arithmetic mean of the estimates received from individual analysts. The consensus cash flow is therefore a useful input to the
calculation of EV-based RI metrics, which can then be used to screen large ranges of stocks in order to highlight those that may be suitable for purchase or sale.
[0085] Certain users may prefer to select only certain analyst estimates for use in the EV+ valuation software.
[0086] We have appreciated that it will be desirable to produce sophisticated computer programs in order to generate these metrics and make them available to the investment community. The software
should produce valuations for around 20,000 companies at least once per day, but ideally intra-day (every time there is a price or estimate movement). The rankings, scoring and reports will also
change every time a valuation is adjusted. We have also appreciated that we will need to enable flexibility in the calculation of the EV+ metrics, particularly in relation to the key variables, such
as cash flow.
[0087] There are many differing views about the way of calculating the earnings or cash figure that should be used in DCF-based share valuation calculations, and these same arguments apply in the
case of EV+ RI metrics. Share analysts may wish to recalculate the model results by substituting their own estimates of key variables such as cash flow. The preferred programs to be described are
therefore tailored to each customer's requirements in this respect and may include the flexibility to use alternative known cash flow and earnings measures and adjustments, such as:
[0088] Earnings before interest, tax, depreciation and amortisation (EBITDA)
[0089] Earnings before goodwill (EBG)
[0090] Cash earnings
[0091] Earnings per share (EPS)
[0092] Free cash flow (FCF)
[0093] Earnings before interest (EBIT)
[0094] Funds from operations
[0095] Working capital movements
[0096] Cash flow per share (CPS)
[0097] Tax paid
[0098] Capital expenditure
[0099] Intangible expenditure, such as Research and Development.
[0100] We have appreciated that the model can be used to value long term cash-flow streams, but for practical reasons (published analyst forecasts tend not to exceed a 5-year horizon) the subsidiary
metrics currently developed largely focus on the short term, namely the next 5 years.
[0101] We have appreciated that the subsidiary short-term valuation metrics we have designed are probably best applied within sectors and generate the most meaningful results when there is an
earnings stream to analyse. Thus “dotcom” companies (with no forecast earnings for the next few years) are not as well suited to some of the metrics in this model as would be companies with a
reasonably stable and steady forecast performance. We have therefore designed:
[0102] flexibility into the model to enable the inclusion of “intangible” and “option” values (which are known techniques) which some consider to give a truer picture of the performance of certain
companies. These values are often not recognised in traditional accounting statements. In the standard program, these values need not be included so the use of these values within the program will
depend upon the application and the intentions of the customer;
[0103] some of the metrics to measure the growth rate of EV+ and of the present value of EV+ per share. This enables us to produce metrics which may apply to certain technology and internet stocks.
[0104] We have appreciated that, by focussing on the short term, that is the next 4 or 5 years, these metrics improve on the use of balance-sheet values and some of them avoid the need for
terminal-value calculations. However some definitional problems remain with EV, and we have therefore built flexibility into our programs to enable users to adjust the values of EV that they prefer
to use in the calculation of our metrics.
[0105] We have appreciated that investment information providers, such as First Call and I/B/E/S, are continually expanding their sets of data items and this enables our metrics to be calculated with
increasing efficiency. In future, additional information will enable stock valuers to refine their own estimates of the key variable, cash flow. The programs should therefore be designed to enable
the incorporation of improved data sets as they are produced.
[0106] The EV+ forecasts of today can therefore be improved by:
[0107] Standardising the definition of key variables within our model.
[0108] Utilising published forecasts for EBITDA or Free Cash Flow in the calculations of EV+.
[0109] Improving the consistency of calculation of the key data items collected by information providers.
[0110] First Embodiment
[0111] Now turning to the first preferred embodiment of the invention, this comprises a method of determining a residual income metric in which the cost of capital is based on the enterprise value.
That is to say that in the preferred embodiment the warranted enterprise value is equal to the present value of the stream of all residual income from t periods, that is year 1 to year n, plus the
present value of the difference between the terminal value in year n TV[n ]and the current enterprise value EV[0], plus the current enterprise value EV[0].
[0112] That is, assuming that Free Cash Flow (FCF) is chosen as the measure of earnings, warranted EV is as follows:

EV[w]=Σ[t=1 . . . n](EV+[t]/(1+i)^t)+(TV[n]−EV[0])/(1+i)^n+EV[0]
[0113] It will be recalled from above that:

EV=MC+VOD+ADJ
[0114] where MC is market capitalization, VOD is the value of debt, and ADJ represents any adjustments. Thus the value of enterprise value used is determined by adding debt and adjustments, if any,
to the current market capitalisation of the investment.
[0115] The following is a numerical example of the application of the above formula to give a warranted Enterprise Value, EV[w]. If:
[0116] Free cash flow (FCF) in year 1 is 30
[0117] Free cash flow (FCF) in year 2 is 40
[0118] There are only 2 forecasts so “n” is 2
[0119] The terminal value (TV) in year 2 is 110
[0120] The mean cost of capital, “i”, is 10%
[0121] The enterprise value (EV) now, EV[0], is 100,
[0122] then:

EV+[1]=30−(100×0.1)=20 and EV+[2]=40−(100×0.1)=30, so that

EV[w]=20/1.1+30/1.1^2+(110−100)/1.1^2+100=18.18+24.79+8.26+100=151.23
[0123] So the warranted value of the company is 151.23. Since the actual value, EV[0], is 100, the company is underpriced by 51.23 (151.23−100).
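The example can be reproduced directly from the formula (exact discounting gives 151.24; the text's 151.23 comes from summing components rounded to two decimals):

```python
fcf = {1: 30, 2: 40}    # free cash flow forecasts
n, i = 2, 0.10          # horizon and mean cost of capital
tv_n, ev0 = 110, 100    # terminal value in year n, current EV

# Present value of the residual income stream, EV+ = FCF - (EV0 * i)
pv_ev_plus = sum((fcf[t] - ev0 * i) / (1 + i) ** t for t in range(1, n + 1))

# Terminal value premium plus the current enterprise value
ev_w = pv_ev_plus + (tv_n - ev0) / (1 + i) ** n + ev0

print(round(ev_w, 2))        # warranted EV, about 151.24
print(round(ev_w - ev0, 2))  # underpricing, about 51.24
```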
[0124] The above formula is based upon defining EV+ (Enterprise Value Added) as follows:

EV+=FCF−(EV×i)
[0125] where FCF is the free cash flow, EV is a measure of enterprise value based upon market capitalisation and debt, and i is an assumed mean interest rate expressed as a decimal value. The mean
interest rate is a mean calculated taking both the cost of equity capital and the cost of debt into account. FCF is therefore taken before interest payments in this embodiment. In accordance with the
above equality, EV+ is thus obtained from the free cash flow from which a deduction is made to reflect the cost of capital employed. This deduction is based upon enterprise value multiplied by the
cost of capital, “i”. Enterprise value is itself dependent upon market capitalisation and debt. As described above, while FCF is preferred, other alternative cash flow and/or earnings measures may be
used such as EPS, EBIT, EBITDA, CPS, and so on. Thus, more generally, if E is the earnings (or cash flow) measure employed, and CC is the charge for the cost of capital employed, then:

EV+=E−CC
[0126] or, in this case:

EV+=FCF−(EV×i)
[0127] The resultant value of EV+ thus figures in the equation for warranted EV as:

EV[w]=Σ[t=1 . . . n](EV+[t]/(1+i)^t)+(TV[n]−EV[0])/(1+i)^n+EV[0]
[0128] The term EV+/(1+i)^t is the present value of EV+, and this is evaluated for all time periods t from 1 to n. The values thus obtained are then summed. To this is added the Terminal Value
Premium, that is the present value of the difference between the terminal value in year n TV[n ]and the current enterprise value EV[0], (TV[n]−EV[0])/(1+i)^n. Finally the current value EV[0 ]of the
enterprise value is added.
[0129] A high-level flow-chart illustrating the main steps of the method embodying the invention is shown in FIG. 1, and the main steps are shown in FIGS. 2 to 8. FIG. 1 sets out the main steps in
the method, and refers to the relevant subsequent figures. The main steps are calculating the enterprise value (EV), FIG. 2, calculating the cost of capital, FIG. 3, calculating the residual income
stream using the enterprise value, FIG. 4, and then calculating the warranted enterprise value and from that the absolute underpricing or overpricing, FIG. 5. The results obtained in this way can
then be used as desired to calculate a variety of subsidiary valuation metrics, see FIG. 6 which gives examples, and the resultant metrics can be calculated as aggregates of subsidiary valuation
metrics to give an overall result, FIG. 7. The results thus obtained for a large number of companies or investments can be calculated and tables or charts of relative valuations produced, FIG. 8.
[0130]FIG. 2 illustrates a routine which calculates enterprise value (EV). It allows for adjustments to be made if required by the user but this is not essential to the method. The EV figure thus
calculated is used in later parts of the method. It first retrieves predefined policies on debt valuation and adjustments, step 2.10. It then retrieves the current share price (P) step 2.20, and the
number of shares in issue (N) step 2.30. Both of these values are obtainable from commercial data providers. From these the market capitalisation is calculated as P×N, step 2.40, and stored. A
decision is now made in step 2.50 as to whether debt is to be valued at market value, this decision being taken in accordance with the policy retrieved in step 2.10. If it is not, the procedure moves
to step 2.51, where the book value of the debts is retrieved from the stored accounts for the company. If it is, the market value of the debt is calculated using standard present value (PV) yield
techniques, step 2.52. Other known ways of deriving a market value for debt could be used. In any event, in step 2.60 the thus-derived value for debt is stored.
[0131] In step 2.70, the values obtained from steps 2.40 and 2.60, namely the market capitalisation and the debt, are summed together. A determination is made in step 2.80 as to whether any
adjustments are needed and if they are the chosen adjustments are retrieved and stored, step 2.81. In either event, the enterprise value (EV) is outputted in step 2.90 by summing the value from step
2.70 with any value from step 2.81.
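A minimal sketch of the FIG. 2 routine (function and parameter names here are illustrative, not from the text):

```python
def enterprise_value(price, shares, debt, adjustments=0.0):
    """EV = market capitalisation + debt + any adjustments.

    `debt` may be a book value (step 2.51) or a market value
    derived by PV yield techniques (step 2.52).
    """
    market_cap = price * shares              # step 2.40
    return market_cap + debt + adjustments   # steps 2.70-2.90

# Figures from the Company A example later in the text:
# $0.5m equity (100,000 shares at $5) and $0.2m debt
ev = enterprise_value(price=5.0, shares=100_000, debt=200_000)
print(ev)  # 700000.0
```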
[0132] The policies on debt valuation and EV adjustments will be predetermined to suit any particular application as required.
[0133]FIG. 3 illustrates a routine which calculates the mean cost of capital, “i”. This method is illustrative only since the program may be written to allow the user to employ their preferred method
of calculation. The “i” figure thus calculated is used in later parts of the method. In FIG. 3 a weighted average cost of capital is obtained which constitutes the desired mean cost of capital. It is
possible simply to choose a desired value for mean cost of capital on an arbitrary basis and use that instead, or to use alternative calculations. In the method illustrated in FIG. 3, in step 3.10
data required for the calculation of the cost of equity capital K[c] is first obtained. This comprises three elements, namely a coefficient beta (β), the market return rate (r[m]), and a risk-free
interest rate (r[f]). The coefficient beta is a published and well-known data item and represents the coefficient of correlation between the returns on a single share, and the returns on the market
as a whole. It can, in principle, vary between zero and a value greater than one, and provides a weighting factor to weight the contributions of the market premium return (r[m]−r[f]). In step 3.20,
the cost of equity capital is calculated in accordance with the equation:
K[c]=r[f]+β(r[m]−r[f]).
[0134] The next step, step 3.30, is to calculate the cost of debt capital (K[d]). This step uses data from step 2.52 in FIG. 2. The final step in the calculation of the mean interest rate in FIG. 3
is step 3.40 where the weighted average cost of capital i is calculated. This is calculated in accordance with the equation:

i=(K[c]×MC+K[d]×D)/(MC+D)

where MC is the market capitalisation from step 2.40 and D is the value of debt from step 2.60.
[0135] There are other known ways of calculating cost of capital which could be used.
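Steps 3.20 to 3.40 can be sketched as follows; the value-weighted averaging in `wacc` is one plausible scheme, since the text allows alternative calculations, and all input figures are illustrative:

```python
def cost_of_equity(r_f, beta, r_m):
    """CAPM, as in step 3.20: K_c = r_f + beta * (r_m - r_f)."""
    return r_f + beta * (r_m - r_f)

def wacc(k_c, k_d, equity, debt):
    """Weighted average cost of capital, weighted by market values."""
    total = equity + debt
    return k_c * equity / total + k_d * debt / total

k_c = cost_of_equity(r_f=0.04, beta=1.2, r_m=0.09)  # about 0.10
i = wacc(k_c, k_d=0.06, equity=500_000, debt=200_000)
print(round(i, 4))  # about 0.0886
```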
[0136]FIG. 4 illustrates a routine which calculates the key new measure EV+. This is the residual income stream. It is noted that an alternative RI metric, using market capitalisation, could also be
similarly computed, and this is described below in relation to the second embodiment of the invention. The routine of FIG. 4 is an important feature of the method of the invention. The EV+ values
thus calculated are also used in later parts of the method. In the initial step 4.10, the earnings or cash flow estimates, in this case the free cash flow, are retrieved from store for all the time
periods t under consideration, namely period 1 to period n. Normally the time periods will be years. In step 4.20 the interest charge is now calculated by multiplying the mean cost of capital
calculated in step 3.40 in FIG. 3 by the enterprise value which results from step 2.90 in FIG. 2. The procedure then enters a loop, and initially in step 4.30 the period t is set equal to 1. For this
period, in step 4.40, EV+ is calculated as:
EV+ [t] =FCF [t] −CC
[0137] where CC is the charge for the cost of capital employed. After this calculation a determination is made in step 4.50 as to whether all the periods have been processed, that is as to whether t
equals n, and if they have not t is incremented by 1 in step 4.51 and the procedure returns to step 4.40 for calculation of EV+ for the next time period. When step 4.50 determines that EV+ has been
calculated for all time periods, the values of EV+ are stored in step 4.60, and finally in step 4.70 the residual income stream determined as EV+ for the t periods 1 to n is output.
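The loop of steps 4.30 to 4.60 reduces to a short sketch (names are illustrative):

```python
def ev_plus_stream(fcf, ev0, i):
    """FIG. 4 sketch: EV+_t = FCF_t - CC, with the capital charge
    CC fixed at EV0 * i for every period (step 4.20)."""
    cc = ev0 * i
    return [f - cc for f in fcf]

# Figures from the earlier numerical example
stream = ev_plus_stream(fcf=[30, 40], ev0=100, i=0.10)
print([round(v) for v in stream])  # [20, 30]
```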
[0138] In step 4.20 in FIG. 4 it is assumed that the interest charge is always calculated by reference to the initial value of EV, that is EV[0]. Some users may prefer to spread the interest charges
that result in a different fashion in order to modify the metrics that are subsequently obtained. This could be achieved using standard DCF and compounding techniques. In addition, some users may
prefer to calculate the interest charge by reference to a number of different values for EV, namely EV[1 ]to EV[n], which are forecast separately from the beginning of each period. If this is
desired, step 4.20 is calculated by reference to a separate EV value for each of periods 1 to n.
[0139] As previously mentioned, the example described uses free cash flow as an input. It is possible to use other earnings or cash flow estimates instead of free cash flow.
[0140]FIG. 5 illustrates a method for calculating the warranted EV, EV[W]. This makes use of the above formula, for which a numerical example has already been given. This is also an important feature
of the method. The EV[W ]value thus calculated is used in later parts of the method and, when compared with EV[0], highlights the overpricing or underpricing of the company. This can be expressed per
share to identify an overpricing or underpricing of the share in both absolute and relative terms. In step 5.10 the terminal value in period n (TV[n]) is first retrieved. The warranted enterprise
value EV[w ]is then calculated in accordance with the equations given above for warranted EV. The first term represents the summation of the present value of EV+ for the time periods 1 to n. The
second term represents the present value of the change in value of the investment from the initial enterprise value to the final terminal value, and the final term represents the present initial
enterprise value. This assumes that a terminal value is used, that is to say n is a finite number. It may be possible to set n at infinity, in which case no terminal value needs to be entered;
however, in most circumstances n will not be set at infinity. The terminal value figure, when used, can be calculated in any desired manner. In step 5.30 the warranted enterprise value thus obtained
is stored, and in step 5.40 the underpricing or overpricing per share is determined as (EV[w]−EV[0]) divided by the number of shares in issue.
[0141] The underpricing or overpricing may be modified to allow for other values, such as “real options”, a known technique for valuing alternative future business decisions and the resulting
earnings. Alternatively, this may be incorporated in the method of determining the terminal value TV[n].
[0142] We have appreciated that, for some applications, it will be desirable that the measure of enterprise value used should differ from that described (EV[0]), and be based instead on a value of EV
which is calculated at the beginning of the period for which the residual income value is to be calculated (EV[1,2,3,etc.]).
[0143] A modification of the first preferred method described above will be described in more detail with respect to the second embodiment of the invention. In this modification the RI calculation is
changed so the cost of capital charge in step 4.20 is based not on an enterprise value but rather on market capitalisation, MC[0]. That is to say, the amount of any debt owed by the company is
ignored at this point.
[0144] The second embodiment of the invention uses the market capitalisation MC[0] to calculate a residual income stream based on MC, termed MC+, and the warranted market capitalisation MC[w]. This is then
used to calculate the under or overpricing of shares. Like EV, it may also be used in a number of different ways as a component in a metric. Examples of such metrics are described next with
particular reference to Enterprise Value EV+ explained in the first embodiment. It will be understood however that EV+ and MC+ may be interchanged in the metrics. A more detailed discussion of some
metrics in which MC+ is employed follows after the description of the second embodiment.
[0145] Dependent and Subsidiary Metrics
[0146] The residual income determined by either the preferred or modified method described can be used in a number of different ways, examples of which will now be given.
[0147] Examples of the ways in which the residual income determination described above can be used are given in the following metrics:
[0148] 1. The warranted enterprise value (EV[w]) can be determined for any given time period (or the warranted market capitalisation MC[w ]as appropriate).
[0149] 2. EV+ for any given time period (or MC+ as appropriate).
[0150] 3. EV+ per share for any given time period (or MC+ as appropriate).
[0151] 4. The PV of EV+ (“PVEV+”) for any given time period (or MC+ as appropriate).
[0152] 5. The PV of EV+ per share (“PVEV+ per share”) for any given time period (or MC+ as appropriate).
[0153] 6. The sum of a number of EV+ values for a number of time periods (or MC+ as appropriate), whether in absolute terms or on a per-share basis.
[0154] 7. The sum of a number of PVs of EV+ values for a number of time periods (or MC+ as appropriate) whether in absolute terms or on a per-share basis.
[0155] 8. The ratio, or if inverted the percentage, of the Current EV per share to (i) EV+ per share or (ii) PVEV+ per share for any given time period, whether for single or multiple periods (or MC+
as appropriate).
[0156] 9. The ratio, or if inverted the percentage, of the Current EV to EV+ or PVEV+ for any given time period whether for single or multiple periods (or MC+ as appropriate).
[0157] 10. The ratio of Current Price per share to EV+ per share or PVEV+ per share for any given time period (“P/EV+” or “P/PVEV+”), whether for single or multiple periods (or MC+ as appropriate).
[0158] 11. The mean of a number of EV+ values for a number of time periods (or MC+ as appropriate), whether in absolute terms or on a per-share basis.
[0159] 12. The mean of a number of PVs of EV+ values for a number of time periods (or MC+ as appropriate) whether in absolute terms or on a per-share basis.
[0160] 13. The growth rate in EV+ from one time period to another, whether per-share or in absolute terms and whether measured over one period or many.
[0161] 14. The growth rate in PV of EV+ from one time period to another, whether per-share or in absolute terms and whether measured over one period or many.
[0162] 15. The ratio that is produced by dividing either P/EV+ or P/PVEV+ or P/EV+ per share or P/PVEV+ per share by the growth rates in EV+ or PVEV+ or EV+ per share or PVEV+ per share (“PEVG” and
[0163] 16. Using long term growth (LTG) or other growth rates to compute a share value payback period on PVEV+ per share.
[0164] 17. Using mean or periodic PVEV+ values (see above) to compute a share value payback period.
[0165] 18. Using the values produced by the method to compute an EV+ spread, similar to a known metric, EVA spread, but employing EV+ based measures.
[0166] 19. Using any of the above measures to compute sector, country or global aggregates as shown in flowchart steps 7.10 and 7.20.
[0167] 20. The impact of a change in estimate can be valued by reference to the difference between the previous and current EV[w].
[0168] 21. Combining the ratios to price of the present values of both the EV+ stream and the terminal value premium, to determine the overall relative over/under pricing of a company in relation to
its peers.
[0169] 22. Calculating how many years it takes for an initially negative EV+ to become positive, to determine the ‘EV+ breakeven point’.
[0170] 23. The mean of each of the metrics 8, 9, and 10 above, where the values are determined for multiple time periods.
[0171] EV+ may be defined as the mispricing in any one year up to period ‘n’. When compared to share price, the resulting P/EV+ ratio, metric 10, (or alternatively EV+/P), is a powerful equivalent of
a P/E ratio. The present value of EV+ compared to share price (P/PVEV+), may also be used.
[0172] P/EV+ gives a percentage misvaluation in any of ‘n’ years and also shows the total percentage misvaluation over the whole of ‘n’ years. The real value of these percentages is when the
percentages of a company are compared to those of the company's peers to show relative mispricing. This enables sector-wide effects to be measured separately from company-specific misvaluations. For
example, while undervaluing of an entire sector would be indicated by positive P/EV+ ratios for all or most of the companies in that sector, the relative underpricing of a particular company in
relation to its peers can be deduced from the comparison of the respective P/EV+ ratios for the different companies and by comparison with the sector or peer group average or aggregate. If Company
A's ratio is larger than the peer group average by 5% say, then it can be deduced that Company A is perhaps undervalued in relation to its peers by 5%.
[0173] In the same way that the Price-Earnings Growth (PEG) ratio is a valuable refinement of the P/E ratio, it is useful to produce a ratio for EV+ incorporating the growth rate in EV+ from one year
to another, EV+ PEG, or P/EV+G, metric 15. This helps to identify those companies that are significantly mispriced compared to their peers and highlights those that will become increasingly so if
they achieve their forecast earnings growth rates.
[0174] The evaluations provided by P/EV+ or P/PVEV+ focus only on the short-term relative EV+ streams of a company and its comparator peer group. They ignore the relative terminal value premium,
which is longer term in nature. Metric 21 combines the short-term P/EV+ stream with the terminal value premia of all peers to provide an overall relative measure of misvaluation of a company in
relation to its peers.
[0175] The EV+ breakeven point, metric 22, is particularly useful for technology companies in which a great deal of potential shareholder interest lies in the company's ability to generate long term
growth. The EV+ figure of technology companies for the first few years may well be negative, indicating an overpricing of the shares. However, reference to the EV+ breakeven point of different
technology companies provides a useful comparison of their relative strength.
[0176] Another metric that allows evaluation of different companies is the growth rate implied within the Terminal Value (TV) calculation, as growth is implied in the majority of these calculations.
Those companies that require a higher growth rate in order to justify their current price may be deemed to be relatively overvalued or more risky than their peers.
[0177] A detailed example of the use of some of these metrics will now be explained.
[0178] Table 1 below shows a detailed example of how a business, Company A, can be valued using EV+ methods. The process is similar to that used in residual income calculations. This particular
example uses an analyst's terminal value of the company, $0.9 m, representing the value of the free cash flows from Year 4 onwards. The current market value of the company's equity is $0.5 m, and the
debt is $0.2 m, so the EV is $0.7 m ($0.5 m plus $0.2 m). The company's cost of capital has been calculated to be 11 percent. It has 100,000 shares in issue, so the current price per share is $5.
[0179] In this example, the free cash flow in the first three years is $140 000, $150 000 and $165 000 respectively, as shown in the first row of the table. There are three forecasts so n=3.
[0180] The second row of the table shows the capital charge value of (EV×i), and the third the residual income stream (EV+) which is the result of the subtraction between the free cash flow and the
capital charge.
[0181] The fourth row shows the discount factor, 1/(1+i)^t, for each of the three years, and the fifth row shows the EV+ stream value or present value of the EV+. This is the value given by the
first term in the equation for the warranted EV+ value calculated for each of the respective three years. The fourth column of this row shows the total of the summation as $179 700.
[0182] The sixth, seventh and eighth rows of the table show the figures for the calculation of the second term in the warranted EV+ equation. The terminal value profit (TV[n]−EV) is calculated to be
$200 000. This figure is then discounted by a factor 0.731 to reflect the real value over three years. The present value of the terminal profit is therefore $146 200.
[0183] Summing these two values gives the total under or over pricing of Company A as $325 900. If we add this to the current EV of $700 000 we get the warranted Enterprise value or Estimated true
Enterprise value as $1 025 900.
[0184] If the company's debts of $200 000 are then deducted, the estimated true value of the Equity capital is $825 900 which when divided by the number of shares gives a true share price of $8.26.
The company therefore appears to be undervalued by $3.26.
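The Table 1 arithmetic can be reproduced as follows; exact discounting gives figures a fraction of a percent away from the text's, which uses discount factors rounded to three decimals:

```python
fcf = [140_000, 150_000, 165_000]  # free cash flow, years 1-3
ev0, i, tv = 700_000, 0.11, 900_000
shares, debt = 100_000, 200_000

cc = ev0 * i  # capital charge, $77,000 per year
pv_stream = sum((f - cc) / (1 + i) ** (t + 1) for t, f in enumerate(fcf))
tv_premium = (tv - ev0) / (1 + i) ** 3

ev_w = ev0 + pv_stream + tv_premium  # warranted enterprise value
price_w = (ev_w - debt) / shares     # warranted share price

# pv_stream is about 180,350 and tv_premium about 146,238
# (Table 1 shows $179,700 and $146,200); price_w is about $8.27
print(round(pv_stream), round(tv_premium), round(price_w, 2))
```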
[0185] The next step is to perform a relative evaluation of Company A against its peers. The figures for this are shown in Table 2. The first five rows of Table 2 show the calculation of the total
present value of EV+, and the present values of EV+ for each of the years in the EV+ stream.
[0186] The sixth row shows the value of PVEV+ per share, metric 5, and the seventh, the Price to PVEV+ ratio, P/PVEV+, metric 10. This can be seen to be the share price of $5 divided by the EV+ per
share value. The eighth row of the table shows the mean P/PVEV+ for the peer group or sector in each of the years. The percentage difference between the P/PVEV+ ratio for Company A and the P/PVEV+
for the peer group is shown in the last row for each year. The percentage differences are 17.3% for the first year, 18.1% for the second and 2.6% for the third.
[0187] The initial valuation of Company A showed that it appeared undervalued as its current price of $5 is less than its target price of $8.26. The total undervaluation consists of the EV+ stream
and the PV of the terminal value profit. It results in positive values for the P/PVEV+ ratio. If the EV+ stream were negative, the share would be overpriced and the P/PVEV+ ratio would also be negative.
[0188] A similar analysis of peer group companies shows that they too are undervalued, as shown by the positive P/PVEV+ ratios in row 8 of table 2. Therefore it is possible that the entire sector has
been downrated, perhaps for general economic reasons, and this needs to be reflected in the valuation of Company A. The way to incorporate this downrating is to measure the relative ratios of P/
PVEV+. The comparisons in table 2 show that Company A's ratios are between 2.6% and 18.1% higher than the peer group. The mean of the percentages for company A is 12.6%, which indicates that all
things being equal, Company A is undervalued by around this figure. Its true share price in the current market should perhaps be around $5.63 ($5×1.126). This is a relatively short-term valuation as
it focuses on EV+ for only three years. As the market for this sector improves, this target price may rise towards the initial valuation of $8.26.
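The peer-relative adjustment is a simple average of the three percentage differences quoted above (the exact mean is about 12.7%, which the text rounds to 12.6%):

```python
# Percentage by which Company A's P/PVEV+ exceeds the peer mean, years 1-3.
pct_above_peers = [17.3, 18.1, 2.6]

mean_underpricing = sum(pct_above_peers) / len(pct_above_peers)
current_price = 5.00
target_price = current_price * (1 + mean_underpricing / 100)

print(round(mean_underpricing, 1))  # 12.7
print(round(target_price, 2))       # 5.63
```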
[0189] There is yet another factor which could be considered before finalising the target price. The valuation above focuses on the short term relative EV+ streams of Company A and its comparator
peer group, and as mentioned earlier ignores the relative terminal value premia, which are longer term in nature. It is possible that the terminal value profits of the peer group when compared with
the share price, may be significantly different from Company A. If so, this difference should also be taken into account.
[0190] By combining the ratios to price of both the EV+ stream and the terminal value profits, it is possible to identify the overall relative over/underpricing of a company in relation to its peers.
Similar calculations can be performed for the comparison of entire sectors.
[0191] As mentioned earlier, incorporating the growth rate in EV+ from one year to another helps to identify those companies that are significantly mispriced compared to their peers. Table 3
illustrates the calculation.
[0192] The first row of the table shows the values calculated before for the EV+ stream, and the second shows the relative growth in the EV+ stream from one year to the next. For example, the growth
for year 2 is given by 59.3/56.8 expressed as a percentage, while that for the third year is given by 63.6/59.3 expressed as a percentage.
[0193] The third row shows the P/PVEV+ ratio for each of the three years (shown in table 2). This figure is then divided by the percentage value of growth (row 2) to give P/PVEV+ per growth rate,
metric 15.
[0194] This ratio is useful for comparative share valuations. As with normal PEG ratios, the higher the growth rate the lower the ratio. Faced with a choice between two companies that have similar
positive P/PVEV+ ratios, investors should prefer the one that has the lower P/PVEV+ per growth (P/PVEV+G) ratio.
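A sketch of the metric 15 calculation described in [0191] to [0194], using the Table 3 stream values quoted above (56.8, 59.3, 63.6, presumably in $000s since they sum to the $179 700 total); the function name is illustrative:

```python
# Row 2 of Table 3: year-on-year growth of the PVEV+ stream, as percentages.
pv_ev_plus = [56.8, 59.3, 63.6]
growth_pct = [100 * b / a for a, b in zip(pv_ev_plus, pv_ev_plus[1:])]
print([round(g, 1) for g in growth_pct])   # [104.4, 107.3]

def p_pvev_per_growth(price, pv_per_share, growth):
    """Metric 15: the P/PVEV+ ratio divided by the growth percentage.
    As with a PEG ratio, higher growth gives a lower (better) figure."""
    return (price / pv_per_share) / growth
```

Faced with two companies on similar P/PVEV+ ratios, the one with the lower figure here offers more growth per unit of price.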
[0195] We have appreciated that the EV+ metrics produce different results depending upon the variables used in the calculation. We have appreciated that this enables sophisticated users (and/or
sophisticated delivery software) to produce alternative values using a variety of inputs. We have appreciated that the dispersion of these alternative values can be plotted and that statistical
measures of the dispersion of these values can be produced. We have appreciated that low statistical measures of dispersion give greater confidence in the range of values that result and help to
measure the riskiness of the investment. We have appreciated that this process could be automated within the software and that a family of valuation confidence and risk measures can be generated as a
[0196] We have further appreciated that the metrics can be used to evaluate the market sentiment in relation to a particular company. By using EV+ metrics to compare a company with its peers (or a
sector or index), it will be seen to be relatively overvalued or undervalued by a given percentage. By then comparing this percentage with previous values, a trend can be established, and it can be
demonstrated whether the company's relative value is increasing or decreasing, demonstrating whether market sentiment is moving in its favour or against it.
[0197]FIG. 6 illustrates the calculation of some of the many subsidiary metrics that can be computed using EV+. These are useful because they enable conclusions to be drawn about the share price. In
particular many of them focus on near-term EV+ values and avoid the need for a terminal value to be calculated. In the example shown in FIG. 6, in step 6.10 the EV+ values in relation to a company or
investment are first retrieved, for the t periods 1 to n. These were stored in step 4.70 described above. In step 6.20 the present value of each of these EV+ values is calculated as follows:
PVEV+ [t] =EV+ [t]/(1+i)^t
[0198] From these values, in step 6.30, the present value of EV+ per share is calculated for the t periods 1 to n, by dividing the resultant of step 6.20 by the number of shares in issue. Then in step 6.40 the
ratio of the resultant PVEV+ per share to the share price is calculated for each of the t periods 1 to n, by dividing the resultant of step 6.30 by the current share price which was determined in
step 2.20 in FIG. 2. Finally in step 6.50, the resultant can be multiplied by 100 to show a percentage mispricing in each of the t periods 1 to n. Thus, referring to the 23 metrics enumerated above,
step 6.20 generates metric 4, step 6.30 generates metric 5 and step 6.40 generates metric 10. The results thus obtained can be inverted if desired. The results obtained in step 6.50 may also be added
together, or a mean figure derived for the t periods 1 to n. Many other subsidiary valuation metrics can be developed using EV+. In particular, the growth ratio of PVEV+ can be calculated and
compared to the share price for example, metric 15.
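The step 6.10 to 6.50 pipeline can be sketched as follows; the variable names are illustrative and the figures in the demonstration call are toy values, not the patent's tables:

```python
def pct_mispricing(ev_plus, i, shares, price):
    """Per-period percentage mispricing from an EV+ stream:
    step 6.20 discounts each EV+ (metric 4), step 6.30 divides by the
    share count (metric 5), step 6.40 divides by the share price
    (metric 10), and step 6.50 scales the result by 100."""
    out = []
    for t, ev in enumerate(ev_plus, start=1):
        pv = ev / (1 + i) ** t      # step 6.20: PVEV+[t]
        per_share = pv / shares     # step 6.30
        ratio = per_share / price   # step 6.40
        out.append(100 * ratio)     # step 6.50
    return out

# Toy demonstration: both periods discount back to the same PV here.
result = pct_mispricing([110.0, 121.0], i=0.10, shares=10, price=5.0)
print(result)
```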
[0199]FIG. 7 illustrates the calculation of aggregate metrics for a group of companies. These are useful for comparing the price performance of one company with a group of its peers, indicating
whether it is relatively overpriced or underpriced. FIG. 7 comprises just two steps, namely step 7.10 in which the selected metrics for a number of companies are retrieved, these for example being
the metric produced in step 6.50 in FIG. 6, and the mean calculated as the sum of the values for each company divided by the number of companies, in step 7.20. The use of aggregate metrics enables
the comparison of one company with a group of companies, either within the same sector, country, or stock market. Aggregate metrics can be calculated using arithmetic means or using more
sophisticated techniques. For example, the mean can be weighted by the market capitalisations of the companies involved. The simple calculation shown in FIG. 7 is, therefore, only one straightforward
example. It demonstrates how EV+ metrics can be converted into aggregates, and in any particular application the choice of EV+ metrics for conversion to aggregates will be selected as desired.
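A sketch of the FIG. 7 aggregation, together with the market-capitalisation-weighted variant mentioned above; all input figures are illustrative:

```python
def simple_mean(values):
    # Step 7.20: sum of the per-company values divided by their count.
    return sum(values) / len(values)

def cap_weighted_mean(values, market_caps):
    # Weighted variant: each company's metric weighted by its market cap.
    return sum(v * m for v, m in zip(values, market_caps)) / sum(market_caps)

mispricings = [17.3, 18.1, 2.6]   # a selected metric for three companies
caps = [100.0, 50.0, 50.0]        # assumed market capitalisations

print(round(simple_mean(mispricings), 2))        # 12.67
print(round(cap_weighted_mean(mispricings, caps), 2))
```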
[0200]FIG. 8 illustrates the use of company and aggregate metrics in a table, enabling the results to be reported to users of the program. In step 8.10 the selected subsidiary valuation metrics, such
as the metric produced in step 6.50 in FIG. 6, are retrieved for a number of companies. In step 8.20 the aggregate metrics such as that produced in step 7.20 in FIG. 7 are likewise retrieved, and in
step 8.30 the thus retrieved values are sorted and listed in a table or chart. There are many ways of comparing the results of different companies using the combined or aggregated results of EV+
metrics of the type enumerated as metric 1 to metric 23 above.
[0201] In a proposed system, for example, short and long term metrics are employed and combined to give a total metric. The short term metric is calculated from the residual cash income that is
forecast to be earned after deducting the capital charge for EV[0], the current Enterprise Value. This residual income is divided by the number of shares in issue and then compared to share price to
provide a short term metric (metric 10).
[0202] The EV+ relative misvaluation data provided by this short term metric can be usefully combined on display with a rebased historic price chart. This would give a user of the proposed system an immediate insight into the way that prices have moved, and shows whether, based on earnings forecasts, the current price is too high or too low.
[0203] The long term misvaluation metric is represented by the premium of the terminal value above the current Enterprise value. This premium is discounted to a current value, divided by the number
of shares in issue and then compared to the share price. The values for each of the companies are preferably shown to a user in the form of a table or graph.
[0204] An additional, and worthwhile, long term metric is to compute the growth in the terminal value that is required to bring a company's valuation in line with the peer group mean. This enables
comparison of different companies and indicates the riskiness of those that need significantly higher levels of growth in order to justify their price.
[0205] The combination of the short and long term metrics provides a more complete misvaluation picture. The results may usefully be compared to the peer group average in order to separate sector and
non-sector effects.
[0206] Thus the preferred embodiments described above and illustrated in FIGS. 1 to 8 provide improved or at least alternative residual income valuations in that they use a measure of enterprise
value (EV[0]), defined as market capitalisation plus debt (plus adjustments, if any) as the basis for calculating the capital charge, or alternatively they use the market capitalisation alone.
[0207] Modification—Second Embodiment
[0208] As mentioned earlier, the second embodiment of the invention comprises a modified RI calculation, so that the cost of capital charge in step 4.20 is based not on an enterprise value, but on
the market capitalisation.
[0209] In doing this modification it is necessary to use the cost of equity capital (Ke), step 3.20, as the value for “i′”, rather than a value for mean cost of capital taking both equity and debt
into account. It is also necessary to use as the free cash flow figure in step 4.10, and in the calculation of TV[n], step 5.10, a figure for earnings after interest or FCF′. That is, FCF′ is derived
from FCF by deducting a charge for the interest payable on debt capital.
[0210] The formula for calculating the warranted market capitalisation, MC[w], is thus:
[0211] This thus comprises a method of determining a residual income metric in which the cost of capital is based on the market capitalisation. That is to say that in the second embodiment the
warranted market capitalisation is equal to the present value of the stream of all residual income from t periods, that is year 1 to year n, using earnings after interest, plus the present value of
the difference between the terminal value of the market capitalisation in year n TV[n]′ and the current market capitalisation MC[0], plus the current market capitalisation MC[0].
[0212] If in the above formula Market Capitalisation Plus MC+ is defined as follows:
[0213] where FCF′ is the free cash flow after debt interest, MC is market capitalisation, and i′ is the assumed equity interest rate after debt payments expressed as a decimal value. In accordance
with the above equality, MC+ is obtained from the free cash flow from which a deduction is made to reflect the market capitalisation. As described above, while FCF′ is preferred, other alternative
cash flow and/or earnings measures may be used. Thus, more generally, if E′ is the earnings (or cash flow) measure employed after debt interest, and CC′ is the charge for the cost of equity capital
employed, then:
[0214] or, in this case,:
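From the verbal descriptions in [0211] and [0213], and from the step 12.40 calculation below, the formulas referred to at [0210], [0212] and [0214] can plausibly be reconstructed as:

```latex
% Warranted market capitalisation ([0210]-[0211]):
MC_w = \sum_{t=1}^{n} \frac{MC^{+}_{t}}{(1+i')^{t}}
     + \frac{TV_n' - MC_0}{(1+i')^{n}} + MC_0

% Market Capitalisation Plus ([0212]-[0213], matching step 12.40):
MC^{+} = FCF' - (i' \times MC)

% More generally, for earnings measure E' and equity capital charge CC':
MC^{+} = E' - CC', \qquad CC' = i' \times MC
```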
[0215] The procedure illustrated in FIGS. 2 to 5 is followed with appropriate alterations. In particular, market capitalisation MC is substituted for EV in steps 4.10 to 4.70. Also, instead of the
calculation shown in FIG. 3, the interest rate i′ is set simply to the cost of equity capital, K[e]. Finally, the earnings or cash flow data used in step 4.10 is first adjusted so as to be the result
after interest, that is the interest payable on the debt capital is deducted from the forecast or earnings cash stream.
[0216] Thus this modified system uses market capitalisation alone as the basis for calculating the capital charge. In calculating the present value for future years, the measure of market
capitalisation can be based on MC[0], as described, or alternatively can be based on a value for MC which is calculated at the beginning of the period for which the residual income is to be
calculated (MC[1,2,3,etc]).
[0217]FIG. 9 is a high level flow chart illustrating the main steps in the method according to the second embodiment of the invention. FIG. 9 refers to relevant subsequent FIGS. 10 to 16 which
illustrate the steps in more detail. The main steps are similar to those for the first embodiment but involve modifications necessary for the calculation of MC, and the subsequent valuation process.
The main steps are calculating the market capitalisation, FIG. 10; calculating the cost of equity capital, FIG. 11; calculating the residual income stream using the market capitalisation, FIG. 12;
calculating the warranted market capitalisation and using that to calculate the absolute overpricing and underpricing FIG. 13. As before, these results can then be used as desired to calculate a
variety of subsidiary valuation metrics. Examples of these are shown in FIGS. 14 to 16.
[0218] FIGS. 10 to 13 show the procedure for calculating the warranted market capitalisation MC[w], in accordance with the second embodiment of the invention. It will be appreciated that the
procedure is similar to that shown in FIGS. 2 to 5 only with appropriate alterations. In particular, market capitalisation MC is substituted for EV in steps 4.10 to 4.70 to produce steps 12.10 to 12.
80. Also, instead of the calculation shown in FIG. 3, the interest rate i′ is simply set to the cost of the equity capital, K[e]. This is illustrated in FIG. 11. Also, the earnings or cash flow data used in step 12.10 differs from that used in step 4.10 in that it is adjusted so as to be the result after interest; that is, the interest payable on the debt is deducted from the forecast earnings or cash stream.
[0219]FIG. 10 illustrates a routine which calculates a value for the market capitalization MC[0].
[0220] As explained earlier with reference to FIG. 2, the routine first retrieves the current share price (P), step 10.10, and the number of shares in issue (N), step 10.20 and calculates, step 10.30
, a figure for the market capitalisation. This is also stored in Step 10.30 and is made available for later routines.
[0221]FIG. 11 illustrates a routine for calculating the cost of equity capital used in the calculation of the second embodiment. The interest rate value to be used will be set equal to the cost of
equity capital. The routine is similar to that illustrated in FIG. 3. In step 11.10, the data required for the calculation is obtained. The data comprises three elements, namely the coefficient beta
(β), the market return rate (r[m]) and the risk-free interest rate (r[f]). In step 11.20, the cost of equity capital K[e ]is calculated according to the equation K[e]=r[f]+β(r[m]−r[f]). In step 11.30
, the value of the interest rate i′ is simply set equal to the cost of equity capital K[e].
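The step 11.20 CAPM calculation, sketched with illustrative inputs:

```python
def cost_of_equity(r_f, beta, r_m):
    """Step 11.20: K_e = r_f + beta * (r_m - r_f)."""
    return r_f + beta * (r_m - r_f)

# Illustrative figures: 4% risk-free rate, beta 1.2, 10% market return.
i_prime = cost_of_equity(r_f=0.04, beta=1.2, r_m=0.10)   # step 11.30
print(round(i_prime, 3))   # 0.112, i.e. an 11.2% cost of equity
```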
[0222]FIG. 12 illustrates a routine which calculates the value for Market Capitalisation Added, MC+. In step 12.10, earnings or cash flow estimates after interest, in this case free cash flow, FCF′,
are retrieved from the store for all of the time periods under consideration, namely period 1 to period n.
[0223] In step 12.20, the interest charge on the Market Capital is then calculated by multiplying the interest rate i′ by MC.
[0224] The routine then enters a loop in which the period t is initially set to be equal to 1, in step 12.30, after which it is incremented by 1 for each cycle of the loop until t=n, where n is the
number of periods. For each period t, the Market Capitalisation Added value is calculated in step 12.40, according to the equation:
MC+ [t] =FCF′ [t]−(i′×MC)
[0225] After the calculation for a period t, a determination is made in step 12.50 as to whether all periods have been processed, that is whether t=n. If they have not, t is incremented by 1 in step
12.51 and the procedure returns to step 12.40 and calculates the value of MC+ for the next time period t+1.
[0226] When step 12.50 determines that MC+ has been calculated for all time periods, the values of MC+ are stored in step 12.60. Finally, in step 12.70, the residual income stream based on MC+ for the
periods t=1 to t=n is output.
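The loop of steps 12.20 to 12.70 can be sketched as follows, with illustrative inputs (i′ = 0.25 is chosen purely so the arithmetic is exact):

```python
def mc_plus_stream(fcf_after_interest, i_prime, mc):
    """Steps 12.20-12.70: MC+[t] = FCF'[t] - (i' x MC) for t = 1..n."""
    charge = i_prime * mc                                 # step 12.20
    return [fcf - charge for fcf in fcf_after_interest]   # step 12.40 loop

stream = mc_plus_stream([270.0, 280.0, 290.0], i_prime=0.25, mc=1000.0)
print(stream)   # [20.0, 30.0, 40.0]
```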
[0227] As described earlier with reference to FIG. 4, the calculation of MC+ can be varied by using standard techniques to calculate spread interest changes, or by using a value of MC that is
calculated separately for each period. Also, instead of free cash flow FCF, other earnings or cash flow estimates could also be used.
[0228]FIG. 13 illustrates a routine for calculating the warranted MC, MC[w]. The calculation is the same as that illustrated in FIG. 5, only MC+ is substituted for EV+, and MC[0 ]is substituted for
EV[0]. In step 13.10, the terminal value of the market capitalisation for the final period n (TV[n]′) is retrieved or calculated using known techniques. In step 13.20, the warranted Market
Capitalisation is calculated according to the equation:
[0229] In step 13.30, the value MC[w ]calculated in step 13.20 is stored. This value is then used in step 13.40, with the value MC[0 ]to calculate the underpricing or overpricing of shares.
[0230] FIGS. 14 to 16 show subsidiary metrics corresponding to those shown in FIGS. 6 to 8, only they are based on Market Capitalisation rather than enterprise value.
[0231] In FIG. 14 a routine is illustrated which shows a calculation for the percentage mispricing of share price. In step 14.10 the MC+ values in relation to a company or investment are first
retrieved for each of the t periods 1 to n. These were stored in step 12.70. Next in step 14.20, the present value of each of the MC+ values is calculated according to the equation:
PVMC+ [t] =MC+ [t]/(1+i′)^t
[0232] From the present values of each of the MC+ values, the present value of MC+ per share is calculated for the t periods 1 to n. This is done in step 14.30 by dividing the value of PVMC+[t ]by
the number of shares in issue N. In step 14.40 the resultant of step 14.30 is divided by the current share price, determined earlier in step 10.10 in FIG. 10. This gives a ratio of PVMC+ per share to
the share price for each of the t periods. Finally, in step 14.50, the resultant is multiplied by 100 to show the percentage mispricing in each of the t periods 1 to n. It can be seen that step 14.20
generates metric 4 of the 23 metrics referred to above, step 14.30 generates metric 5, and step 14.40 generates metric 10.
[0233] The results obtained in step 14.50 may be used in a number of ways. They may for example be added together or a mean figure derived for the t periods 1 to n.
[0234]FIG. 15 corresponds to FIG. 7 described earlier and illustrates the calculation of aggregate metrics for a group of companies. The routine shown in FIG. 15, may be used to calculate the
relative mispricings of a number of companies, based on the calculation made in step 14.50, that calculation being based on Market Capital values.
[0235] In step 15.10, a selected metric such as that produced in step 14.50 is retrieved for a number of companies, and in step 15.20 the mean value of this metric is then calculated. In the case of
the metric calculated in step 14.40, the result of step 15.20 can be used to indicate the relative mispricing of one company in comparison to a group of companies such as a sector of the market.
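FIGS. 14 and 15 can be sketched together: a per-company total mispricing percentage from the MC+ stream, then the group mean and each company's position relative to it. Company names and all figures are illustrative:

```python
def total_mispricing_pct(mc_plus, i_prime, shares, price):
    """Steps 14.20-14.50, summed over the periods: discount each MC+,
    divide by the share count, divide by the share price, scale by 100."""
    total = 0.0
    for t, v in enumerate(mc_plus, start=1):
        pv = v / (1 + i_prime) ** t           # step 14.20: PVMC+[t]
        total += 100 * (pv / shares) / price  # steps 14.30-14.50
    return total

companies = {
    "A": total_mispricing_pct([25.0, 25.0], 0.25, shares=10, price=2.0),
    "B": total_mispricing_pct([50.0, 50.0], 0.25, shares=10, price=2.0),
}
peer_mean = sum(companies.values()) / len(companies)   # step 15.20
relative = {name: v - peer_mean for name, v in companies.items()}
print(relative)   # A sits below the group mean, B above it
```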
[0236]FIG. 16 illustrates the use of company and aggregate metrics in a table, and corresponds to the process shown in FIG. 8. In step 16.10 the selected subsidiary valuation metrics, such as the
metrics produced in step 14.50 in FIG. 14, are retrieved for a number of companies. In step 16.20 aggregate metrics, such as that produced in step 15.20 in FIG. 15, are also retrieved, and in step 16
.30 the retrieved values are sorted and listed in a table or chart.
[0237] Thus this preferred embodiment described above with reference to FIGS. 9 to 16 uses the Market Capitalisation alone, instead of a measure of enterprise value, to provide improved or at least
an alternative residual income valuation.
[0238] Many more specific subsidiary metrics can be developed around EV+ and MC+ concepts. All the metrics can be varied in the manner of their calculation. In particular:
[0239] The interest rate used to calculate the Capital charge (“Cost of capital”) can be computed in a number of different known ways. For example rates may be computed for whole sectors or countries
rather than individual companies. An “internal rate of return” figure may also be used to calculate K[e], as shown in step 3.20. These are all examples of numerous alternative known methods.
[0240] Enterprise value can be computed in a variety of known ways other than that described in detail above. In particular debt may be valued at book or market value. In addition a number of
optional adjustments may be made to the combined figure of debt and equity.
[0241] Earnings could be expressed as dividends, profit or cash flow (each of which can be defined in different ways depending on taxation, accounting method and preferred usage of the model).
[0242] Present Value (PV) techniques can be applied in various different known ways to “discount” the value of a future sum.
[0243] The terminal value can be calculated in a number of different known ways.
[0244] In addition to the variety of metrics that EV+ and MC+ allow, they provide analysts with techniques for purposes other than straightforward stock evaluation. The techniques can be used to
measure the relative value of different sectors, which aids the asset allocation decisions faced by fund managers, as well as sensitivity analysis of valuation conclusions and to the measurement of
the statistical significance of mispricings.
[0245] Also, because EV+ and MC+ incorporate forecasts of cash flows or earnings, it is possible for analysts to substitute their own in-house forecasts rather than published consensus estimates. This
can generate unique insights that other analysts may not have.
[0246] Uses
[0247] The systems described can be used directly in specialised share information and valuation systems (such as “Estimates Direct” and “Active Express”, produced by First Call and I/B/E/S
respectively). They can also be used in general investment information systems such as Reuters 3000 Xtra, Bloomberg and Thomson Financial Datastream. They can also be used within similar information
systems on the internet or over the media used with mobile telephones employing the WAP (Wireless Application Protocol). The systems can also be used by computerised valuation, trading or fund
management systems, such as those used by fund managers, investment bankers and corporate financiers.
[0248] Further description of the uses of the invention in value based management are described in “Value Based Management”, Gary Ashworth and Paul James, ISBN 0 273 65404 7, published July 2001.
[0249] While preferred embodiments of the invention have been described by way of example, it will be appreciated that many changes and modifications may be made to the methods described and
illustrated within the scope of the present invention. | {"url":"http://www.google.com/patents/US20030036988?ie=ISO-8859-1","timestamp":"2014-04-18T17:33:05Z","content_type":null,"content_length":"147105","record_id":"<urn:uuid:cd3e50bb-bdfa-4ebb-a026-b4f1afa2c387>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00170-ip-10-147-4-33.ec2.internal.warc.gz"} |
Sic Bo Rules and Strategies
We are quite used to the game of Craps, which uses two dice. It's challenging enough to find winning strategies with two dice, considering all the dice combinations. How about an interesting game that uses three dice? The combinations would be much more intriguing, wouldn't they?
I'm referring to a game called "Sic Bo", which uses three dice on a layout quite different from the Craps layout. Sic Bo has recently become popular at the Casino of Montreal. It is an ancient Chinese dice game still played in many southeast Asian countries, where it's known as big and small. Sic Bo is both exciting and easy to play, and offers players a wide variety of options. Payouts range from even money up to 150 to 1.
The betting options available at Sic Bo are formed by the various combinations obtained with three dice. These wagers and their payouts are reproduced on the gaming table. Players may wager on as
many combinations as desired per game. The dice are shaken by the dealer by means of a vibrating platform under a round glass cover. After all bets have been made, the dealer activates the dice shaker. The outcome of each of the three dice appears on the display. At the same time, the spaces corresponding to the winning combinations light up on the table. The dealer then removes all losing bets from the table, as in roulette, and proceeds to pay all winners.
There are 8 different ways you can bet.
1) You can bet on one number, which must appear on all three dice. This is called a Three of a Kind. Obviously, you have the least chance of hitting a winning bet: there are 216 (6 × 6 × 6) dice combinations and only one of them shows your chosen triple (1,1,1 or 2,2,2 or 3,3,3 or 4,4,4 or 5,5,5 or 6,6,6). If you do win, however, it pays 150 to 1. With a winning chance of 1 out of 216 against a 150-to-1 payout, this gives the casino a huge advantage.
2) You can bet on one number, which must appear on two of the three dice. This is called a Two of a Kind, such as 1,1 or 2,2 or 3,3 or 4,4 or 5,5 or 6,6. This pays 8 to 1, although your chances of winning are only about 7%, as there are 15 of the 216 combinations showing exactly two of the chosen number (16 if the triple also counts).
3) You can bet that the same number from 1 to 6 will appear on all three dice, such as either a 1,1,1 or 2,2,2 or 3,3,3 or 4,4,4 or 5,5,5 or 6,6,6. This is called Any Three of a Kind and it pays 24
to 1. You have 6 times more chances to win relative to the Three of a Kind. The probability is still 6 out of 216 or about 2.78%.
4) There is an area called Small, which pays even money, where you bet that the sum of the three dice will be equal to 4, 5, 6, 7, 8, 9, or 10, excluding a Three of a Kind. A total of 3 (three 1s) falls outside the range anyway, and the triples 2,2,2 (total 6) and 3,3,3 (total 9) are not winning bets, which provides the casino its edge.
5) Similarly, there is an area called Big, which also pays even money, where you bet that the sum of the three dice will be equal to 11, 12, 13, 14, 15, 16, or 17, excluding a Three of a Kind. A total of 18 (three 6s) falls outside the range anyway, and the triples 4,4,4 (total 12) and 5,5,5 (total 15) are not winning bets, which again provides the casino its edge.
6) On the Sic Bo layout, there is a wide area with the numbers 4 to 17 written on them. Those numbers correspond to the sum of the three dice. So, you bet on a number from 4 to 17, which are the sum
of all 3 dice and the payout table is as follows:
• If the sum is 4, winning bets are paid 50 to 1;
• If the sum is 5, winning bets are paid 30 to 1;
• If the sum is 6, winning bets are paid 18 to 1;
• If the sum is 7, winning bets are paid 12 to 1;
• If the sum is 8, winning bets are paid 8 to 1;
• If the sum is 9, winning bets are paid 6 to 1;
• If the sum is 10, winning bets are paid 6 to 1;
• If the sum is 11, winning bets are paid 6 to 1;
• If the sum is 12, winning bets are paid 6 to 1;
• If the sum is 13, winning bets are paid 8 to 1;
• If the sum is 14, winning bets are paid 12 to 1;
• If the sum is 15, winning bets are paid 18 to 1;
• If the sum is 16, winning bets are paid 30 to 1;
• If the sum is 17, winning bets are paid 50 to 1;
You can see the symmetrical pattern of payouts, depending on the dice combinations and their probability of appearance, always with an edge on the casino side, of course.
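The symmetry is easy to verify by enumerating all 216 equally likely outcomes; a total of 4, for instance, occurs 3 ways, so the 50-to-1 payout returns 51 × 3 = 153 units per 216 staked, a house edge of roughly 29% on that bet:

```python
# Count how often each three-dice total occurs among the 216 outcomes.
from itertools import product
from collections import Counter

counts = Counter(sum(dice) for dice in product(range(1, 7), repeat=3))

print(counts[4], counts[10])   # 3 and 27 ways out of 216
# The payout table's symmetry mirrors the counts: totals s and 21 - s
# are equally likely (replace each face f by 7 - f).
print(all(counts[s] == counts[21 - s] for s in range(4, 18)))   # True
```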
7) You can bet on two different numbers, which must appear on at least two of the three dice. This bet is called Duo, and it pays 5 to 1. The dice combinations you can bet on are: 1,2 or 1,3 or 1,4
or 1,5 or 1,6 or 2,3 or 2,4 or 2,5 or 2,6 or 3,4 or 3,5 or 3,6 or 4,5 or 4,6 or 5,6, as long as the two numbers are not the same.
8) And the last type of bet is called a simple One of a Kind, where you bet on a single number that must appear on one, two or all three dice. This is shown in the layout as the dice face 1, or 2, or
3, or 4, or 5, or 6.
The payout table of this bet depends on the following:
• If the number you chose appears on one of the three dice, you are paid even;
• If the number you chose appears on two of the three dice, you are paid 2 to 1;
• If the number you chose appears on three of the three dice, you are paid 3 to 1.
The game is called Sic Bo, meaning Small Big, probably because of the type of bets that pay even - types 4 and 5 above, which could be attractive to bet using our even money bet strategies (Superior
Roulette, Reward, etc.). What we need to determine is the percentage of the casino edge, in order to compare this type of bet to Baccarat or Roulette.
Taking the Small bet, we can see that half of all dice combinations will give us a 3 or 4 or 5 or 6 or 7 or 8 or 9 or 10. If we exclude the Three of a Kind outcomes, we exclude the 1,1,1, the 2,2,2 and the 3,3,3; that is, 3 possibilities out of 108 (216/2). In other words, our chance of winning is 105 out of 216, or 48.61%.
Comparing this to roulette, our chances of winning on even bets are 18 out of 37, or 48.65%, on a single-zero wheel and 18 out of 38, or 47.37%, on a double-zero wheel. So playing Sic Bo on even money bets such as Small and Big is comparable to single-zero roulette, with the Three of a Kind outcome acting as the zero.
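The Small-bet arithmetic above checks out by brute-force enumeration:

```python
from itertools import product

outcomes = list(product(range(1, 7), repeat=3))
small_wins = [d for d in outcomes
              if 4 <= sum(d) <= 10 and not (d[0] == d[1] == d[2])]

p_win = len(small_wins) / len(outcomes)
print(len(small_wins), round(100 * p_win, 2))   # 105 48.61

# House edge on this even-money bet: the 6 triples lose for both
# Small and Big, so the edge is 6/216.
edge = 1 - 2 * p_win
print(round(100 * edge, 2))   # 2.78, versus about 2.70 for single-zero roulette
```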
So we can easily use any even bet strategy that we have developed for Roulette, that performs even better for single zero wheels.
| {"url":"http://www.letstalkwinning.com/Sic-Bo.htm","timestamp":"2014-04-21T07:03:38Z","content_type":null,"content_length":"21519","record_id":"<urn:uuid:d0f5f950-c0bf-4999-af6d-bb99ebee8e35>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00432-ip-10-147-4-33.ec2.internal.warc.gz"}
I’m working on a post defining probabilities. That’s the crux of the matter, so it’s worth laying out forthrightly. The unifying question probabilities aim to answer is: “When is A relevant for B?”
It’s fair to surmise I’m the only living person who believes that, so as preparation for that future post, I brought in Jaynes for moral support.
The following is from his paper “Where do we Stand on Maximum Entropy“. Although this paper is only remembered for Jaynes’s dice example, it’s the most important philosophy of science paper of the second half of the 20th century. And it’s not just philosophical; a Ph.D. in Statistics could fill a long and illustrious career by exploiting seams opened therein.
From bottom of page 16:
From Boltzmann’s reasoning, then, we get a very unexpected and nontrivial dynamical prediction by an analysis that, seemingly, ignores the dynamics altogether! This is only the first of many such examples where it appears that we are “getting something for nothing,” the answer coming too easily to believe. Poincaré, in his essays on “Science and Method” felt this paradox very keenly, and wondered how by exploiting our ignorance we can make correct predictions in a few lines of calculation, that would be quite impossible to obtain if we attempted a detailed calculation of the […]
It requires very deep thought to understand why we are not, in this argument and others to come, getting something for nothing. In fact, Boltzmann’s argument does take the dynamics into account,
but in a very efficient manner. Information about the dynamics entered his equations at two places: (1) the conservation of total energy; and (2) the fact that he defined his cells in terms of
phase volume, which is conserved in the dynamical motion (Liouville’s theorem). The fact that this was enough to predict the correct spatial and velocity distribution of the molecules shows that
the millions of intricate dynamical details that were not taken into account, were actually irrelevant to the predictions, and would have cancelled out anyway if he had taken the trouble to
calculate them.
Boltzmann’s reasoning was super-efficient; far more so than he ever realized. Whether by luck or inspiration, he put into his equations only the dynamical information that happened to be relevant
to the questions he was asking. Obviously, it would be of some importance to discover the secret of how this came about, and to understand it so well that we can exploit it in other problems.
Exploit it in other problems indeed! The future truly belongs to Bayesians.
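Jaynes’s dice example from that paper is easy to reproduce. The sketch below is my own illustration (not from the post): it finds the maximum-entropy distribution for a die whose mean is constrained to 4.5, using the tilted form p_k ∝ exp(λk) and bisecting on the multiplier λ:

```python
import math

FACES = range(1, 7)

def maxent_die(mean, tol=1e-12):
    """Max-entropy distribution over faces 1..6 with the given mean.
    The solution has the tilted form p_k proportional to exp(lam * k);
    we find the multiplier lam by bisection, since the tilted mean is
    monotonically increasing in lam."""
    def tilted_mean(lam):
        w = [math.exp(lam * k) for k in FACES]
        return sum(k * wk for k, wk in zip(FACES, w)) / sum(w)

    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if tilted_mean(mid) < mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2.0
    w = [math.exp(lam * k) for k in FACES]
    z = sum(w)
    return [wk / z for wk in w]

p = maxent_die(4.5)
print([round(x, 4) for x in p])  # probabilities rise toward the high faces
```

Running it with mean 3.5 returns the uniform die, as it should: a constraint that matches the symmetric default adds no information.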
December 6, 2013
29 comments »
• The future belongs to those who read Jaynes. There are lots of people under the Bayesian tent who just wouldn’t get it, e.g., Bayesian philosophers of science, or personalistic Bayesian
statisticians like Jay Kadane, who writes on page 1 of his 2011 text, Principles of Uncertainty,
Before we begin, I emphasize that the answers you give to the questions I ask you about your uncertainty are yours alone, and need not be the same as what someone else would say, even someone
with the same information as you have, and facing the same decisions.
(I should point out that nothing else in the book actually relies on the truth of this assertion.)
• Yeah, I’ve been shocked more times than I care to remember just how bad even prominent members of the Bayesian establishment can be. A big part of that is that I had been reading Jaynes’s papers
for a long time before encountering other Bayesians. It turns my stomach to think I might have been one of those knuckleheads under less fortunate circumstances. Thank God for the Corps!
• “Thank God for the Corps!”
Just curious, did the Corps influence you in reading Jaynes?
• Ha! no. Marines are very skeptical of eggheads. If you’d like to see what Marines read check out their official reading list. It’s set by the Commandant and usually changes slightly every year.
Here’s the current version broken down by ranks:
It did get me away from Academia though, which has gotten noticeably worse over my lifetime.
• I hadn’t looked at that list in a while, so it’s funny to see Taleb, Gladwell, Kuhn, and Kahneman on there.
• Corey, the quoted text doesn’t seem so bad when read generously enough.
Bayes rule applied to inference gives a consistent procedure for calculating your posterior uncertainty; the procedure relies on two aspects of your state of information: a Prior uncertainty and
a Likelihood. When two Bayesians have the same Prior and Likelihood they SHOULD get the same answers. But several times on this blog we have discussed how both Priors and Likelihoods are in
practice modeling decisions. We almost always throw away some information in constructing both of them (hence Gelman’s tendency to assert that answers that don’t “feel right” usually mean that
some implicit knowledge was left out).
Since two people with the same knowledge could make different modeling decisions, it shouldn’t surprise us that two Bayesians with the same knowledge could get different results about their
uncertainty. Two Bayesians with the same MODEL and information though should get the same results.
• The part that annoys me is where personalistic Bayesians assert that their results are more or less true by definition (i.e. no model checking needed). Ok fine, if you want to define truth as the
result of a bayesian computation with your personal prior and likelihood plugged in…. then you could do that… but then it’s a true statement about your internal life not a true statement about
the external world. I’m much more interested in the external world. Telling me that you believe a particular elephant weighs between 1 and 3 pounds with 99% certainty might be a true statement
about your internal state, but it’s more or less completely wrong with respect to the actual elephant.
So if that’s what you were getting at, then yes, I agree with you.
• Daniel,
That’s what I thought too, but there really are people who think, as a matter of principle, that it’s ok to associate two different distributions with the same precisely defined state of information.
Note this isn’t a practical consideration where one person throws away some of the information for convenience thereby implicitly using a different state of information. Nor is it case where one
analyst considers different states of information for their work. Or even a case where there is ambiguity in the state of information.
But even more than that, I took Corey to be referring to a whole family of blather, like the one you mentioned, that can be found in real Bayesian circles. I think he’s right most Bayesians
aren’t really going to get it any better than Frequentists will.
• From Jaynes’ paper you linked:
“If we can learn how to recognize and remove irrelevant information at the beginning of a problem, we shall be spared having to carry out immense calculations, only to discover at the end that
practically everything we calculated was irrelevant to the question we were asking”
Amen brother! So much of the stuff being done these days with enormous finite element or CFD models on supercomputers has this flavor to me. We do an enormous amount of computation, we get a
result, this result is applicable in some very specific set of circumstances which will never happen exactly in practice, we wonder what will happen in practice, so we’re forced to re-calculate
things under alternative circumstances… yikes.
Now admittedly, I don’t have a general way in which we can avoid all that stuff, but I think it’s worth considering this as a goal. Enormously precise CFD and other numerics are telling us
enormously precise information about things we only care about in some relatively vague way.
Perfect examples are things like predicting the future climate from essentially weather models.
• Also, I’d like to mention that Jaynes says some useful stuff about ergodic sampling and soforth. In my work on fitting parameters for my dissertation I began to suspect that all the fancy
Hamiltonian monte-carlo and markov chain theory in general were restricting the dynamics of MCMC type simulations more than necessary.
I read some papers on adaptive monte carlo and various schemes that don’t satisfy detailed-balance and it seems to me that there’s a LOT to be done in that area. Although there are some
technicalities related to continuous state spaces, there’s already a proof that detailed balance is too strong a condition. http://link.aip.org/link/JCPSA6/v110/i6/p2753/s1&Agg=doi
Although I love Stan and NUTS compared to other Bayesian computation schemes available today, I’d really love to see a system that is less restrictive of the type of models available (for example
something that could handle ODEs and mixed continuous and discrete parameters) and yet still very efficient at sampling the relevant high probability region of the posterior. I suspect that a
system which throws out strict detailed balance *and* uses continuous adaptation but yet still converges on the correct posterior distribution will ultimately be the answer. Jaynes indicates
related ideas in this paper.
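As a reference point, the detailed-balance baseline that such adaptive schemes relax fits in a few lines. This is a generic random-walk Metropolis sketch of my own (nothing to do with Stan or NUTS specifically):

```python
import math
import random

def metropolis(logp, x0, steps, scale=1.0, seed=0):
    """Minimal random-walk Metropolis sampler.  With a symmetric
    (Gaussian) proposal, the accept rule min(1, p(x')/p(x)) is exactly
    what enforces detailed balance with respect to the target."""
    rng = random.Random(seed)
    x, lp = x0, logp(x0)
    samples = []
    for _ in range(steps):
        xp = x + rng.gauss(0.0, scale)
        lpp = logp(xp)
        if math.log(rng.random()) < lpp - lp:   # Metropolis accept step
            x, lp = xp, lpp
        samples.append(x)
    return samples

# Target: a standard normal, log p(x) = -x^2/2 up to a constant.
samples = metropolis(lambda x: -0.5 * x * x, 0.0, 20000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(round(mean, 2), round(var, 2))  # roughly 0 and 1
```

Non-reversible and adaptive variants change the proposal or the accept rule; the cited result says this strict reversibility is sufficient for the correct stationary distribution, but not necessary.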
• Daniel,
I have a policy of taking people at their word very literally (when I don’t know of any incentive for them to be dishonest, anyway). So when Kadane writes that personalistic creed and
specifically mentions “same information” I take him to be denying Jaynes’s desideratum (IIIc), which you can find on page 14 of PTLOS (a link for convenience: http://bayes.wustl.edu/etj/prob/
book.pdf). That desideratum underlies the proper understanding of the Principle of Insufficient Reason (the explanation starts in the middle of page 34 of PTLOS), and also one of the most impressive
resolutions of a probabilistic paradox and predictions of empirical frequencies (on the basis of pure thought!) that I know of. (Link.)
• “The following is from his paper “Where do we Stand on Maximum Entropy”. Although this paper is only remembered for Jaynes’s dice example, it’s the most important philosophy of science paper of
the second half of the 20th century.”
It is indeed brilliant. When I read something like this, I get disappointed at how few people can communicate so well.
With regard to the personalistic Bayesian ideas I understand the discomfort, but I do see the value of “eliciting an expert’s probabilities” mostly because it is much easier than analysing all of
their prior information explicitly (i.e. becoming a better expert yourself).
• I don’t think “eliciting an expert’s probabilities” are illegitimate at all. Basically, any method whereby you can construct a distribution where the true value is in the high probability
manifold will work, including simply asking an expert where they think the true value is (if their guess is a good one that is). I’m thinking about putting out a post on a time I did exactly that
in Iraq and the results lead to 2 named military operations. The experts in that case were the local EOD (Explosive Ordinance Disposal) team.
The math of the subjective Bayesians like Savage/De Finetti et al is often great great stuff that really moves the subject forward. And what are called “subjective probabilities” are often fine
if properly understood. But the philosophical stance of the subjective Bayesians, including their weird attachment to Bayes Theorem as opposed to the sum/product rule more generally, is all kinds
of messed up.
Having said all that, Jaynes talks about meeting Savage once and them having a long conversation hashing out disagreements. He claimed that after properly understanding each other they were much
closer to being on the same page than a naive look at subjective vs objective Bayesians would indicate.
• My entry point to learning about statistics was Jaynes. I was extremely impressed by his book and papers. I also get the impression that many people who enter statistics from the side, like I
have done, particularly from engineering, computer science or physics take a similar route in finding Jaynes’ work an excellent starting place.
Jaynes’ role seems to be polarised within Bayesian groups: either dominating to the near exclusion of everything else, or else being ignored. I prefer to see him as one of many important figures,
but I like him a lot.
I read this paper: http://arxiv.org/abs/physics/0010064/ some time ago, when I was extremely impressed by Jaynes and found it quite difficult to accept (although, I am now pretty sympathetic). It
relates directly to Corey’s comment:
“Our goal is that inferences are to be completely ‘objective’ in the sense that
two persons with the same prior information must assign the same prior probability.”
[20] This is a very naïve idealistic statement of little practical relevance.
This is one difficulty with Jaynes. A reader of the book can (as I did) get the impression he has solved the problem of setting priors, in very general circumstances. In practice someone
constructing a hierarchical model or even a plain old normal mixture model (like I did) pretty much has to use convenience priors to make practical progress. Reading about Jeffreys priors and
transformation groups etc really won’t help you (as interesting and as brilliant as all this stuff is)… (FWIW I don’t think eliciting priors in these situations except in the crudest sense is
particularly helpful either)
My more fundamental doubts about the objective Bayes approach come from the following dilemma:
Is it reasonable to use objective Bayesian probability in order to compute the expected utility of decisions?
I think the answer is sometimes yes, sometimes no. This leads me to assess, if they were to conflict, which is more important, and for me the important problem is the ordering of decisions – and I
am willing to give up on the objective Bayesian ideal as attractive as that is.
A tangential point addressing Joseph’s last comment. “including their weird attachment to Bayes Theorem as opposed to the sum/product rule”. I am a bit puzzled by that comment. If you read (say)
Frank Lad, “Operational Subjective Statistical Methods” Bayes theorem is introduced after the fundamental theorem of prevision on page 150. de Finetti’s theory of probability (which was Lad’s
main inspiration) does much the same, Kadane referred to above is a little faster introducing it in Chapter 2.
I agree with Joseph’s last paragraph. For me one of the most exciting and under appreciated ideas on the interface of probability theory and philosophy of science is the idea that an exchangeably
extendable probability specification is a _restriction_ on being just an exchangeable probability sequence. This explains why, in a non-extendable probabilistic sequence (say, when we card count in
blackjack), inference runs in the opposite direction to normal, i.e. many low cards seen means low cards in the future are _less_ likely. This was of course due to de Finetti but perhaps explained
better by Jaynes in bayes.wustl.edu/etj/articles/applications.pdf. I find it interesting to see a version of this idea being used in quantum information in http://perimeterinstitute.ca/personal/
cfuchs/ based on the constraints putting an exchangeable distribution on _many_ particles (disclaimer: I don’t know much at all about physics).
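That card-counting reversal, where seeing many low cards makes further low cards less likely, is just sampling-without-replacement (hypergeometric) arithmetic. A toy sketch with invented deck numbers:

```python
from fractions import Fraction

def p_next_low(low_total, high_total, low_seen, high_seen):
    """Probability that the next card drawn (without replacement) is
    'low'.  Seeing low cards depletes them, so it LOWERS this
    probability -- the reverse of the i.i.d. intuition."""
    low_left = low_total - low_seen
    cards_left = low_left + (high_total - high_seen)
    return Fraction(low_left, cards_left)

# Invented split of a 52-card deck: 20 "low" cards (ranks 2-6), 32 others.
before = p_next_low(20, 32, 0, 0)    # 20/52 before any card is seen
after = p_next_low(20, 32, 10, 0)    # 10/42 after ten low cards are seen
print(before, after, after < before)
```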
• I hope this isn’t too much of a text dump, but I find this discussion by Ariel Caticha (from this paper: http://arxiv.org/pdf/0908.3212.pdf) very compelling on the goals of inference and
“subjective” vs “objective” Bayes.
“Different individuals may hold different beliefs and it is certainly important to figure out what those beliefs might be — perhaps by observing their gambling behavior — but this is not our present
concern. Our objective is neither to assess nor to describe the subjective beliefs of any particular individual. Instead we deal with the altogether different but very common problem that arises
when we are confused and we want some guidance about what we are supposed to believe. Our concern here is not so much with beliefs as they actually are, but rather, with beliefs as they ought to
be. Rational beliefs are constrained beliefs. Indeed, the essence of rationality lies precisely in the existence of some constraints. The problem, of course, is to figure out what those
constraints might be. We need to identify normative
criteria of rationality. It must be stressed that the beliefs discussed here are meant to be those held by an idealized rational individual who is not subject to practical human limitations. We
are concerned with those ideal standards of rationality that we ought to strive to attain at least when discussing scientific matters.
Here is our first criterion of rationality: whatever guidelines we pick they must be of general applicability—otherwise they fail when most needed, namely, when not much is known about a problem.
Different rational individuals can reason about different topics, or about the same subject but on the basis of different information, and therefore they could hold different beliefs, but they must
agree to follow the same rules.”
• David Rohde,
The point of noninformative measures isn’t necessarily to actually use them as priors in any given analysis — it’s to set the starting point for updating with whatever prior info you actually
have. Zero is the starting point for a sequence of addition operations; unity is the starting point for a sequence of multiplication operations; a noninformative measure is the starting point for
a sequence of Bayesian updates.
• David Rohde, I was a bit puzzled as to how one could start with Jaynes — the most uncompromising and acerbic writer on statistics I have ever read — and then get “converted” to the subjective
Bayesian approach. But then I Googled up the Lad book and found this review (<- hides a link) that describes the book as "even outdo[ing] de Finetti’s far-ranging opinionatedness and stubborn […]".
Ah, now I see: I hypothesize that Lad's and Jaynes's writing styles share a certain quality I might call "the quality of convincing exhortation". (Note to self: must master this style.)
• Brendon,
I guess you’ll be seeing Caticha next week in Australia!
“the most uncompromising and acerbic writer on statistics I have ever read” It’s funny you say this because I thought Jaynes was very mild. Indeed, I pictured myself as the "bad cop" to his "good
cop". I thought this until recently when it became clear what Jaynes's reputation in the wider world really is. Still, I can't help but think of him as anything other than a mild mannered guy who
had some definite, but highly constructive, ideas about statistics. I chalk up the differences in style between him and me as resulting from the fact that he was a Navy Officer, while I was a
Marine Officer.
There are those running around claiming the only way to ever change a distribution is through Bayes Theorem. It’s easy for anti-Bayesians to make this view look stupid. I believe it’s wrong in at
least two ways: the philosophical definition of probabilities P(x|K) allows us to change from K_1 to K_2 whenever we feel like it (as long as they’re both true or hypothesized to be so), and
secondly, the sum and product rules imply an infinite number of updating rules. See here:
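To spell out the product-rule point (a standard derivation, not the content of the missing link above): Bayes’ theorem is only one rearrangement of the product rule, so nothing singles it out as the unique way to move between distributions.

```latex
% Product rule, factored both ways:
p(A, B \mid K) = p(A \mid B, K)\, p(B \mid K) = p(B \mid A, K)\, p(A \mid K)
% Equating the two factorizations and dividing by p(B|K) gives
% Bayes' theorem as one special case:
p(A \mid B, K) = \frac{p(B \mid A, K)\, p(A \mid K)}{p(B \mid K)}
```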
Jaynes definitely did not consider the problem of how to convert states of information K into probability distributions P(x|K) as solved. The extreme opposite in fact: he not only thought this the
main open question in Statistics, but he also thought it open-ended since there are always new states of information K to consider. His main beef with Frequentists in practice (as opposed to the
philosophical) was that they diverted enormous amounts of mathematical talent away from this (in general) unsolved problem toward irrelevancies.
The idea that Jaynes’s Objective Bayes is some kind of impractical ideal is a very serious misreading of Jaynes and the points he was making. Reread that quoted passage again. What he’s talking
about is a tremendously powerful practical tool: namely the ability to throw away (or never learn in the first place) vast amounts of information and still get accurate answers to specific
questions. This is not some impossible ideal, it’s the essence of practicality. Indeed it’s a good deal more practical than the current fit-distribution-to-histogram paradigm and magically hope
the future looks like the past.
• Also, David, it’s often the case that those convenient conjugate priors are far better justified than most people think. Basically, we can imagine increasing states of knowledge:
all of which are true. These lead to different distributions:
with each distribution having smaller entropy than the one before it. While your true state of knowledge might be
Far from this being in conflict with Jaynes’s Bayesian viewpoint, his is the only viewpoint I know of in which this makes perfect sense and which also explains when it will and won’t work.
Frequentists will think this wrong because they think there is only one “correct” distribution. Subjective Bayesians will think it’s wrong because it doesn’t represent your true beliefs. Or
they’ll think it always works or some such.
• “I guess you’ll be seeing Caticha next week in Australia!”
Unfortunately he couldn’t make it. John Skilling will be arguing against some of the parts of Ariel’s philosophy that I am also skeptical of, so that should be interesting.
• I am trying to remember what caused me to ‘convert’ from objective Bayes to subjective. I think I found this special issue: http://ba.stat.cmu.edu/vol01is03.php as well as Lindley’s philosophy of
stats http://www.phil.vt.edu/dmayo/personal_website/Lindley_Philosophy_of_Statistics.pdf to be influential. Another review of ‘operational subjective statistical methods’ is this one (David
Banks) http://link.springer.com/article/10.1007%2Fs003579900047?LI=true (paywalled). In style Lad is pretty different to Jaynes, although both write very well. When writing my thesis, I tried to
emulate Jaynes writing style, but it didn’t work at all for me! Also, I was lucky enough to meet Lad at a past EBEB conference, incidentally he seemed to be a fan of Caticha (and Jaynes).
It seems that Fuchs also ‘converted’ from a Jaynesian to an operational subjective view, see: http://perimeterinstitute.ca/personal/cfuchs/VaccineQPH.pdf. Note how I immodestly compare myself
with a great physicist. FWIW, I am not trying to change anyone’s mind and expect all of you know most or all of these refs already.
I would agree that in terms of practical applications the Objective Bayesians have a good record. That said, I do think Jaynes’s desideratum (IIIc) is a problem in practice, but I don’t think we
lose all that much by letting it go. I agree ignoring information in order to simplify a problem is very practical, but if Jaynes developed a general way to do this then I don’t see it.
Joseph: I also like your old post on Quantum Mechanics (at least it is consistent with my prejudices!). Although as I see it the updating rules you speak about seem to be just combinations of
conditioning and marginalisation, something I think subjective Bayesians are on top of.
I find the rest of your comment interesting and challenging. I am not sure what you mean by $K$, it seems to be something more abstract than an observation perhaps like $I$ in Jaynes writings…
The case of ignoring an observation is I think very interesting, and practically necessary but philosophically troublesome (to me at least). Say we have a probability distribution P(a,b,c). Say
we want to know a and we learn b and c but we decide to ignore c. There are at least two things we can do
marginalize to get P(a,b) and then condition P(a|b) – but what does this distribution really mean? As Jaynes shows us using non-sufficient statistics can cause problems http://bayes.wustl.edu/etj
/articles/confidence.pdf but isn’t using non-sufficient statistics a big part of what approximation is all about?
Alternatively we could find the most extreme values of P(a,b,c) in order to compute an interval probability P_u(a|b) P_l(a|b). This has a clearer interpretation, but maybe the interval is too
wide to be useful and maybe it is harder not easier to compute.
For context some examples might be useful e.g. imagine a is a quantity to predict, b are posterior samples and c is the data. Another example could be a is a quantity to predict, b are imprecise
measurements and c are precise measurements.
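The two options can be made concrete with a toy joint over binary variables (all numbers invented for illustration). Because P(a|b) is a convex combination of the P(a|b,c), the marginalize-then-condition value always falls inside the interval produced by the second option:

```python
# A toy joint P(a, b, c) over binary variables; the probabilities are
# invented purely for illustration and sum to 1.
P = {
    (0, 0, 0): 0.20, (0, 0, 1): 0.05, (0, 1, 0): 0.10, (0, 1, 1): 0.05,
    (1, 0, 0): 0.05, (1, 0, 1): 0.20, (1, 1, 0): 0.05, (1, 1, 1): 0.30,
}
VALS = (0, 1)

def p_a1_given_b(b):
    """Option 1: marginalize c away, then condition on b."""
    num = sum(P[(1, b, c)] for c in VALS)
    den = sum(P[(a, b, c)] for a in VALS for c in VALS)
    return num / den

def p_a1_given_b_interval(b):
    """Option 2: condition on every possible c and report the extremes."""
    ps = [P[(1, b, c)] / sum(P[(a, b, c)] for a in VALS) for c in VALS]
    return min(ps), max(ps)

point = p_a1_given_b(1)
lo, hi = p_a1_given_b_interval(1)
print(round(point, 3), (round(lo, 3), round(hi, 3)))  # 0.7 (0.333, 0.857)
```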
• David Rohde, if you give up desideratum (IIIc), you’re giving up on the reasoning found in Jaynes’s article The Well-Posed Problem (link).
• Corey,
I absolutely love that paper. I think it’s viewed as a kind of cute result not relevant to most of statistics. I couldn’t disagree more. I think it’s a devastating critique of both Frequentists and
Subjective Bayesians using mathematics and experimental results, both of which are unassailable.
When I was still on speaking terms with Mayo, I tried to point out that the “frequency correspondence” stuff at the end was a deathblow to Frequentism, but she refused to even think about it. In
retrospect, I think the mathematical sophistication of the paper is just a little beyond her and she didn’t want to admit it. So much for philosophers’ untiring quest for the Truth and all that.
Even those who are favorable to the paper commonly think it has no bearing on applications. While it’s true that it differs from the usual statistical work seen in the life and social sciences,
most of that work is worthless crap anyway, so that isn’t much of a slight. It is however reminiscent of the way Physicists get their distributions, which in my physics education were always
theoretically derived rather than learned from data. It’s worth noting that Physicists have had dramatically better results with their probability distributions than most others have.
So there are applications and then there are APPLICATIONS.
• Joseph, maybe I should post a “good parts” version a la The Princess Bride (only mine would be an actual abridgment, not a fictional one).
Just the other day I was talking to a frequentistically-trained (but not doctrinaire) coworker about how invariance approaches for point estimators and intervals can be extended to invariant
distributions. I pointed him to the Wikipedia article on Bertrand’s paradox, which does not do full justice to Jaynes’s reasoning. He pointed out that Jaynes was, to all appearances, getting
something (a frequency distribution) for nothing (a lack of information in the problem statement), so I gave a short summary of the argument from the “no-skill” limit. I was dissatisfied with
leaving it at that, so yesterday I printed out a copy of the article and highlighted only those parts that contributed most succinctly to the philosophical approach underlying the derivation. It
included Figure 1 and amounted to about 2/5 of the text.
• That’s a great paper, but why is it a problem to a subjective Bayesian?
quoting de Finetti (1970):
The main points of view that have been put forward are as follows.
The classical view, based on physical considerations of symmetry, in which one should be obliged to give the same probability to such ‘symmetric cases’. But which symmetry? And, in any case why?
The original sentence becomes meaningful if reversed: the symmetry is probabilistically significant, in someone’s opinion, if it leads him to assign the same probabilities to such events.
• “But which symmetry? And, in any case why?”
Both those questions are answered in the paper. This problem involves connecting an input fact to an output fact. The input is the “low skill” of the thrower, and the output is a definite shape
to some histogram.
All the analysis, including the assumption of symmetry and the creation of a probability distribution, serves no other purpose than to show that in the vast, vast, vast, majority of cases where
the former (input) fact is true, then so is the latter (output) fact. Admittedly the derivation is done in an incredibly slick way that makes it difficult to see that’s what’s actually going on,
but as Jaynes explained that’s what it all means.
So if by “subjective” you mean nothing more than that recognition of facts is done by humans, then there’s no objection. But if anyone claims either of the following is just someone’s opinion:
(1) The mathematical derivation makes a definite prediction: if the thrower has low skill then we’ll likely see a histogram of a specified shape.
(2) Experimentally whenever the thrower does have low skill then the resulting histogram is observed to have that specified shape.
Then I would say they’ve lost touch with reality, because neither of those is just an opinion.
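Prediction (1) can also be checked numerically. A sketch of my own: assuming the invariant distribution Jaynes derives (the chord’s distance from the center uniform), the fraction of chords longer than the inscribed equilateral triangle’s side should come out at 1/2.

```python
import math
import random

def long_chord_freq(n, seed=1):
    """Simulate chords of the unit circle with the invariant
    distribution Jaynes derives: the chord's distance from the center
    is uniform on [0, 1].  A chord is longer than the side of the
    inscribed equilateral triangle (sqrt(3)) exactly when that
    distance is below 1/2, so the predicted frequency is 1/2."""
    rng = random.Random(seed)
    longer = 0
    for _ in range(n):
        d = rng.random()                          # distance from center
        if 2.0 * math.sqrt(1.0 - d * d) > math.sqrt(3.0):
            longer += 1
    return longer / n

print(round(long_chord_freq(100000), 2))  # close to 0.5
```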
• I think the central idea of objective Bayes is that the symmetrical assignment of probabilities always ought to have decision theoretic force. I don’t think Jaynes convincingly makes that
argument here or elsewhere. In the paper above, I think he describes the type of symmetry that he prefers to assign uniform probability to, reversing the direction of the argument.
Your point (2) suggests you yourself do not hold subjective probabilities that are i.i.d. replications of Jaynes’ solution. Presumably your willingness to look at experimental results suggests that
if the histogram did not resemble Jaynes’ solution you would adjust your predictive distribution away from Jaynes’ and towards the histogram. If you are willing to do that clearly your
probabilities are not i.i.d. repetitions of Jaynes’ solution, but more likely an exchangeable sequence constructed using a mixture of Jaynes’ solutions and other possibilities.
Anyway, I don’t have any huge beef with what you are saying, just suggesting that subjective Bayesians are probably less different and more sophisticated than your current (colourful bad cop)
presentation suggests. And thanks for an interesting blog!
• I really wish I knew Maxent was in Oz earlier than just now, it just clicked. I would have loved to have gone, it looks like a great program!
• David,
The purpose of the blog is to aid the development of my arguments/explanations. So I’m not harping on you, just trying to modify my approach. From what you said last, you are very seriously
misunderstanding both me and Jaynes. So here is a different route.
At heart Jaynes wasn’t making any argument about what kind of symmetry he wanted to assign a uniform distribution to. He was actually doing something of a very different nature. He was working
out what reproducible consequences, if any, the assumption of a “low skilled” tosser would have.
You could think of it as a kind of sensitivity analysis. The assumption of “low skilled” implies a certain range of possibilities might occur. So the key question is:
“What conclusions are highly insensitive to possible outcomes within that range?”
Jaynes showed the approximate shape of the empirical histogram is one of those robust conclusions, in the sense that the vast majority of possibilities that a “low skilled” thrower could have
achieved with their skill level yield similar shaped histograms.
Or if you think in terms of repeated trials, that approximate shape for the histogram should be reproducible from trial to trial.
All that symmetry/uniform-probability stuff which you are insisting is some kind of huge metaphysical assumption needing either objective or subjective Bayesian justification, is actually just a
slick mathematical trick for carrying out that sensitivity analysis.
Mathematical tricks don’t need philosophical justification!
[Numpy-discussion] bug or feature?
humufr@yah...
Thu Feb 8 15:21:09 CST 2007
I have a big problem with numpy, numarray and Numeric (all versions).
If I'm using the script at the bottom, I obtain these results:
var1 before function is [3 4 5]
var2 before function is 1
var1 after function must be [3 4 5] is [ 9 12 15] <------ problem
var2 after function must be 1 is 1
var3 must be the [9 12 15] is [ 9 12 15]
var4 must be the 'toto' is toto
I'm very surprised by the line noted. I always thought that the input variable couldn't be changed outside the function. That's the behavior for var2, but var1 is changed, and that's a big problem (at least for me). The only objects in Python with this behavior are the numeric objects (Numeric, numarray or numpy); with a list or another kind of object I get the expected result (var1 as it was before going into the function).
I can't keep the input variable unless I make a copy before calling the function.
Is this normal, and do I have to make a copy of the input data each time I'm calling a function?
#!/usr/bin/env python
import numpy
print "numpy version ", numpy.__version__

def test(var1, var2):
    #print "var1 input function is", var1
    #print "var2 input function is", var2
    var1 *= 3
    var2 = 'toto'
    return var1, var2

var1 = numpy.array([3, 4, 5])
var2 = 1
print "var1 before function is ", var1
print "var2 before function is ", var2
var3, var4 = test(var1, var2)
print "var1 after function must be [3 4 5] is ", var1
print "var2 after function must be 1 is ", var2
print "var3 must be the [9 12 15] is ", var3
print "var4 must be the 'toto' is ", var4
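For later readers: the behavior comes from in-place assignment on a shared array, not from NumPy passing arguments differently. A minimal sketch (Python 3 / current NumPy) of the distinction between mutating in place and building a new array:

```python
import numpy as np

def scale_inplace(a):
    a *= 3          # mutates the caller's array through the shared buffer
    return a

def scale_copy(a):
    return a * 3    # builds a new array; the caller's array is untouched

x = np.array([3, 4, 5])
scale_inplace(x)
print(x)            # changed outside the function: [ 9 12 15]

y = np.array([3, 4, 5])
z = scale_copy(y)
print(y)            # unchanged: [3 4 5]
print(z)            # [ 9 12 15]
```

So the answer to the question above is: either write the function without in-place operators, or pass `a.copy()` when the original must be preserved.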
More information about the Numpy-discussion mailing list
User Geoff Robinson
bio website
location United Kingdom
age 60
visits member for 3 years
seen 1 hour ago
stats profile views 4,160
2h revised Subgroups from which all class functions extend to class functions on the ambient group
added example.
2h revised Subgroups from which all class functions extend to class functions on the ambient group
Mentioned fusion/transfer type result. Corrected typos
14h Subgroups from which all class functions extend to class functions on the ambient group
comment I'm not sure what you would consider a "geometric" reason. There are any number of algebraic expanations, including the fact that $Q_{8}$ admits ${\rm Sp}(2,2) \cong {\rm SL}(2,2)$ as a
group of automorphisms.
14h revised Subgroups from which all class functions extend to class functions on the ambient group
added 110 characters in body
15h answered Subgroups from which all class functions extend to class functions on the ambient group
22h revised Properties to have matrices that commute in $\mathrm{GL}_n(\mathbb C)$
Tidying up text
1d revised Properties to have matrices that commute in $\mathrm{GL}_n(\mathbb C)$
minor bibliographic correction
1d awarded Enlightened
1d awarded Good Answer
1d Properties to have matrices that commute in $\mathrm{GL}_n(\mathbb C)$
comment @Yves: Yes, I was just pointing out that although finiteness is essential for the result as stated, there are things that can be said in some infinite groups - I wasn't disputing the
appropriateness of the "finite groups" tag
1d Properties to have matrices that commute in $\mathrm{GL}_n(\mathbb C)$
comment @YvesCornulier: Although there is an important extension of this type of result to discrete subgroups of linear groups by Zassenhaus, which led to the important notion of Zassenhaus
neighbourhoods in Lie groups.
1d revised Properties to have matrices that commute in $\mathrm{GL}_n(\mathbb C)$
1d awarded Mortarboard
1d revised Properties to have matrices that commute in $\mathrm{GL}_n(\mathbb C)$
Rearrangment, some explanation
1d revised Properties to have matrices that commute in $\mathrm{GL}_n(\mathbb C)$
Expanded explanations, gave additional references.
1d revised Bound for the Frattini subgroup of a $p$-group
deleted 1 character in body
1d revised Bound for the Frattini subgroup of a $p$-group
deleted 1 character in body
1d awarded Nice Answer
1d revised How to transform matrix to this form by unitary transformation?
How to transform matrix to this form by unitary transformation?
1d comment @Frederik Poloni: I was aware that $UMV$ would not have the same spectrum as $M.$ However, it is true that $UMV$ has the same operator norm (with respect to Euclidean norm on vectors )
as $M,$ namely $m_{1}.$ I was still careless, because this certainly need not imply that $UMV$ has spectral radius $m_{1}.$
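The norm invariance mentioned in this last comment is easy to check numerically. A small sketch (using random orthogonal matrices, the real special case of unitary): multiplying by $U$ and $V$ preserves the operator 2-norm of $M$, while the spectrum generally changes.

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((4, 4))

# Random orthogonal U, V obtained from QR factorizations of random matrices.
U, _ = np.linalg.qr(rng.standard_normal((4, 4)))
V, _ = np.linalg.qr(rng.standard_normal((4, 4)))

# Operator 2-norms agree...
print(np.linalg.norm(M, 2), np.linalg.norm(U @ M @ V, 2))

# ...but the eigenvalues of M and UMV are generally different.
print(np.sort(np.abs(np.linalg.eigvals(M))))
print(np.sort(np.abs(np.linalg.eigvals(U @ M @ V))))
```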
Wolfram Demonstrations Project
Two Wheel Belt
A classic trigonometry problem asks for the length of a belt wrapped around two wheels. The radii of the two wheels are $r_1$ and $r_2$ (with $r_1 \ge r_2$). The wheels are $d$ units apart.
Let A and C be the two arcs of the belt and let B be one of the two segments joining the ends of the arcs. B is an exterior tangent to both circles. Then, with $\beta = \arcsin\big((r_1 - r_2)/d\big)$, the lengths of A, B, and C are $r_1(\pi + 2\beta)$, $\sqrt{d^2 - (r_1 - r_2)^2}$, and $r_2(\pi - 2\beta)$.
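The total belt length can be computed numerically from the standard exterior-tangent formula (a sketch; the names `r1`, `r2`, `d` are illustrative):

```python
import math

def belt_length(r1, r2, d):
    """Total length of an open (uncrossed) belt around two wheels of radii
    r1 >= r2 whose centers are d apart: two straight tangent segments plus
    the two wrap arcs."""
    beta = math.asin((r1 - r2) / d)            # tilt of the tangent segments
    straight = math.sqrt(d**2 - (r1 - r2)**2)  # length of each tangent segment
    arc_big = r1 * (math.pi + 2 * beta)        # wrap arc on the larger wheel
    arc_small = r2 * (math.pi - 2 * beta)      # wrap arc on the smaller wheel
    return 2 * straight + arc_big + arc_small

# Sanity check with equal wheels: the belt degenerates to two straight runs
# of length d plus one full circle, i.e. 2*d + 2*pi*r.
print(belt_length(1.0, 1.0, 5.0))  # 2*5 + 2*pi ≈ 16.283
```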
What NYC progress reports ‘prove’ about charters
This is a continuation of my last post which you can read here.
In my last post I argued that there is almost no correlation between the progress ranks from one year to the next that New York City uses to calculate the report card grades that are then used to
shut down schools.
One commenter noted that when you make a graph of the progress scores from 2010 to 2011 rather than the ranks, there does appear to be a correlation. I’d like to address that here and add some more
analysis of the data in this context.
The commenter says that the city uses the scores and not the ranks to decide which schools to close so it is not appropriate to look at the change in ranks. I think that the unstable ranks actually
are the more relevant stat since the number of ‘F’ schools is based not on an absolute scale of what progress should be, but on the preordained decision that the bottom 5% of the schools are going to
get ‘F’s.
Still, the commenter makes a valid point, though one that I don’t think will weaken my argument, and actually one that will enable me to find some more weaknesses in the reformers plan.
I did make a graph of the progress scores (each is scored out of 60) comparing 2010 to 2011 scores. As you will see in the graph below, these do correlate a lot more than the ranks. It still looks a
lot like a blob of random points, but one that looks a bit like an upward sloping line. If you like this graph better, it proves that there is some stability in the metric from one year to the next.
I still don’t think that the metric actually measures anything important so it doesn’t matter to me that it might be somewhat stable.
Looking at this got me thinking about what sorts of conclusions I could make about the progress reports data if I suspend disbelief and pretend that I believe that they are reliable. Reformers might
criticize me here saying that if I don’t believe in these metrics aren’t I a hypocrite to use them to prove other points. I don’t see it that way. I see it like the way Clarence Darrow in the
Scopes trial used The Bible in his famous cross examination. It is great to show how even under their own concocted metrics the reformers still aren’t able to cover up what little progress they are making.
So under the assumption that this progress metric was good, I looked through the database and found that out of 1108 schools, 67 were charters and 1041 were non-charters. In the student progress
category, which accounts for 60% of the report card score, I found that there were exactly 86 F’s in this category. Looking closer, I saw that 9 of those Fs went to charters and 77 went to
non-charters. This means that 9 out of 67 charters got Fs in this category, or 13% while 77 out of 1041 non-charters got Fs, or 7%.
So my first conclusion using this metric as The Bible is that if you go to a charter school in New York City you are twice as likely to get a school that has an F in progress than if you go to a
non-charter school.
If we include Ds also, there were 175 schools that got either Ds or Fs. 15 out of the 67 charters got Ds or Fs, which is 22% while 160 out of 1041 non-charters got Ds or Fs, which is 15%.
So, even by the reformers own metric, a student would have a better chance of making progress at a non-charter school than at a charter school.
The next thing I studied was inspired by something I noticed in the progress report for one charter. I saw that their math progress score was significantly higher than their ELA progress score. I’m
a math teacher and I love math, but I think that it is definitely over-emphasized in its importance in this standardized testing age. Reading is a much more important skill to develop. But if a
school wants to maximize their test scores, they can focus on the math which is easier to test prep for.
So what I did was sort the list by the difference between the math and ELA scores. Out of 1108 schools there were only 126 schools whose math progress score was ten points or higher than their ELA
progress score. Of those 126 schools, 28 were charters and 98 were non-charters. So 28 out of 67 or 42% of the charters had significantly higher math scores while only 98 out of 1041 or 9% of the
non-charter schools did this. On the other end of the spectrum, only 2 of the 67 charters had significantly higher ELA than math scores. This is evidence of the type of intensive (and often
mindless) math test prep that happens in some schools. You can see these charter outliers on the bottom right of the blob.
So charters which represent 6% of the schools but, since the schools are smaller, only 4% of the students have a pretty big representation in the number of Fs and Ds in progress. They also have an
unusually high number of schools with way higher math progress than ELA progress.
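The percentages quoted throughout can be reproduced from the raw counts (a quick sketch; the counts are the ones reported in this post):

```python
# Counts reported in the post: 67 charters, 1041 non-charters.
def pct(part, whole):
    """Percentage rounded to the nearest whole point."""
    return round(100 * part / whole)

print(pct(9, 67), pct(77, 1041))    # F in progress: 13% vs 7%
print(pct(15, 67), pct(160, 1041))  # D or F in progress: 22% vs 15%
print(pct(28, 67), pct(98, 1041))   # math score >= 10 points above ELA: 42% vs 9%
```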
Again, I encourage everyone to download the files (see previous post for the link) and see what sorts of things you can find in there.
I find it interesting that in the peer rankings for elementary school they weigh being black/hispanic as much as being poor or identified for special education services, and three times more than
being an ELL.
I’ve wondered about that as well. A few years ago, I pored over the test results for District 2, broken down by those same demographic flags. At least in that data set, there was NO correlation
between “free lunch” (aka poverty), and scores.
Also, the range in percentage of students of a given demographic VARIES from category to category. If School A has 50% Black/Hispanic and 10% Disabilities, and School B has 10% Black/Hispanic and
50% Disabilities, (holding Free Lunch and ELL constant), is it reasonable to infer those are COMPARABLE schools? The weighting would suggest yes. I have my doubts.
And why isn’t there some weeder-outer for G&T schools or programs? The student bodies of those are pre-filtered for higher scores (but perhaps not higher “progress” in this screwy system).
remember, in the words of our chancellor “slow incremental change is unacceptable”
Attleboro Precalculus Tutor
Find an Attleboro Precalculus Tutor
I recently completed my undergraduate studies in pure mathematics at Brown University. I am available as a tutor for pre-algebra, algebra I, algebra II, geometry, trigonometry, pre-calculus,
calculus I, II, and III, SAT preparation, and various other standardized test preparations. I have extensiv...
22 Subjects: including precalculus, reading, Spanish, calculus
...MY EDUCATION: My two degrees are from MIT in theoretical math and literature. I also completed more than a semester of coursework at Harvard in English, philosophy, and intellectual history.
While at MIT, I won prizes for my writing in both the humanities and the sciences.
47 Subjects: including precalculus, English, chemistry, reading
...I can teach the basics of grammar, spelling, and punctuation for the lower levels (K-5), and essay writing, critical analysis, and critical essays of the classics for upper level grades. Before
I began a family I was in the actuarial field. I also worked at Framingham State University, in their CASA department, which provides walk-in tutoring for FSU students.
25 Subjects: including precalculus, English, reading, calculus
...I am also a chemistry instructor at Wheaton College. I majored in biochemistry and psychology as an undergraduate student at Wheaton College. I have years of experience as a tutor.
17 Subjects: including precalculus, chemistry, calculus, geometry
I believe that mathematics is a difficult subject for many students and I enjoy working with them to develop both their abilities and confidence. I have taught AP Calculus for over twenty-five
years. My students have always scored very well on the national AP tests.
8 Subjects: including precalculus, calculus, geometry, GRE
Institute for Mathematics and its Applications (IMA)
- 2006-2007 Algebraic Geometry and Applications Seminar
Diane Maclagan (Department of Mathematics, Rutgers University)
Equations and degenerations of the moduli space of genus zero stable curves with n marked points
Abstract: Curves are one of the basic objects of algebraic geometry, and so much attention has been paid to the moduli space of all curves of a given genus. This talk will focus on the moduli space
of genus zero stable curves with n marked points, which is a compactification of the space M[0,n] of isomorphism classes of n points on the projective line. After introducing this space, I will
describe joint work with Angela Gibney on explicit equations for it, which lets us see degenerations to toric varieties. November 2, 2006, 10:10-11:00 am, Lind Hall 409
Daniel Lichtblau (Wolfram Research, Inc.)
Computer assisted mathematics: Tools and tactics for solving hard problems
Talk Materials: IMA2006_Lichtblau_talk.pdf IMA2006_Lichtblau_talk.nb
Abstract: In this talk I will present several problems that have caught my attention over the past few years. We will go over Mathematica formulations and solutions. Along the way we will meet with a
branch-and-bound loop in its natural habitat, some rampaging Gröbner bases, a couple of tamed logic puzzles, and at least a dozen wild beasts.
As the purpose is to illustrate a few of the many ways in which Mathematica can be used to advantage in tackling difficult problems, we will go into a bit of detail in selected examples. Do not let
this deter you; there will be no exam, and it is the methods, not the problems, that are of importance. The examples are culled from problems I have seen on Usenet groups (primarily MathGroup), in
articles, or have been asked in person.
November 8, 2006, 11:15 am-12:15 pm, Lind Hall 409
Sorin Popescu (Department of Mathematics, Stony Brook University)
Excess intersection theory and homotopy continuation methods
Abstract: I will recall first basic techniques and results in (excess) intersection theory in algebraic geometry and then discuss their implications and also applications toward a numerical approach
to primary decomposition for ideals in polynomial rings.
November 13, 2006, 11:15 am-12:15 pm, Lind Hall 409
Uwe Nagel (Department of Mathematics, University of Kentucky)
Complexity measures
Abstract: It is well-known that on the one hand the costs for computing a Gröbner basis can be prohibitively high in the worst case, but on the other hand computations can often be carried out
successfully in practice. As an attempt to explain this discrepancy, several invariants that measure the size or the complexity of an ideal or a module have been introduced. The most prominent one is
the Castelnuovo-Mumford regularity, but there are also extended degrees introduced by Vasconcelos and, more recently, the extended regularity jointly proposed with Chardin. The latter two notions are
defined axiomatically. In the talk we will discuss the three concepts and their relations as well as some known results and open problems. November 15, 2006, 11:15 am-12:15 pm, Lind Hall 409
Richard Moeckel (School of Mathematics, University of Minnesota)
Tropical celestial mechanics
Abstract: Some interesting problems in mechanics can be reduced to solving systems of algebraic equations. A good example is finding relative equilibria of the gravitational n-body problem. These are
special configurations of the n point masses which can rotate rigidly such that the outward centrifugal forces exactly cancel the gravitational attractions. The algebraic equations are complicated
enough that it is a long-standing open problem even to show that the number of solutions is finite. I will describe a solution to this question for n=4 which makes use of some ideas from what is now
called tropical algebraic geometry – Puiseux series solutions, initial ideals, etc. The problem is open for larger n. November 15, 2006, 2:15 pm-3:15 pm, Lind Hall 409
Wenyuan Wu (Department of Applied Mathematics, University of Western Ontario)
On approximate triangular decompositions in dimension zero
Abstract: Triangular decompositions for systems of polynomial equations with n variables, with exact coefficients are well-developed theoretically and in terms of implemented algorithms in computer
algebra systems. However there is much less research about triangular decompositions for systems with approximate coefficients. In this talk we will discuss the zero-dimensional case, of systems
having finitely many roots. Our methods depend on having approximations for all the roots, and these are provided by the homotopy continuation methods of Sommese, Verschelde and Wampler. We introduce
approximate equiprojectable decompositions for such systems, which represent a generalization of the recently developed analogous concept for exact systems. We demonstrate experimentally the
favourable computational features of this new approach, and give a statistical analysis of its error. Our paper is available at http://publish.uwo.ca/~wwu26/
November 20, 2006, 11:15 am-12:15 pm, Lind Hall 409
Chehrzad Shakiban (IMA, University of Minnesota)
Computations in classical invariant theory of binary forms
Slides: pdf  ppt
Abstract: Recent years have witnessed a reflourishing of interest in classical invariant theory, both as a mathematical subject and in applications. The applications have required a revival of the
computational approach, a task that is now improved by the current availability of symbolic manipulation computer software. In this talk, we will review the basic concepts of invariants and
covariants of binary forms, and discuss some elementary examples. We will then use the symbolic method of Aronhold, and algebrochemical methods of Clifford and Sylvester for computations and present
some applications to syzygies and transvectants of covariants.
November 29, 2006, 11:15 am-12:15 pm, Lind Hall 409
Alicia Dickenstein (Departamento de Matemática, Universidad de Buenos Aires)
Tropical discriminants
Abstract: The theory of A-discriminants is a far going generalization of the discriminant of a univariate polynomial, proposed in the late 80's by Gel'fand, Kapranov and Zelevinsky, who also
described many of their combinatorial properties. We present a new approach to this theory using tropical geometry.
We tropicalize the Horn-Kapranov uniformization, which allows us to determine invariants of A-discriminants, even if the actual equations are too hard to be computed.
Joint work with
December 6, 2006, 11:15 am-12:15 pm, Lind Hall 409
Niels Lauritzen (Matematisk Institut, Aarhus Universitet)
Gröbner walks and Scarf homotopies
Abstract: We give a light introduction to Gröbner basis conversion and outline emerging insights on the connection to a classical algorithm by Scarf in optimization.
December 13, 2006, 11:15 am-12:15 pm, Lind Hall 409
Gennady Lyubeznik (School of Mathmatics, University of Minnesota)
Some algorithmic aspects of local cohomology
Abstract: One application of local cohomology is that it provides a lower bound on the number of defining equations of algebraic varieties. To be useful for this application, local cohomology must be
efficiently computable. We will discuss some computability issues and resulting lower bounds for the number of defining equations of some interesting varieties.
January 10, 2007, 11:15 am-12:15 pm, Lind Hall 409
Anton Leykin (IMA Postdoc, University of Minnesota)
Algorithms in algebraic analysis
Abstract: In the first part of this talk I will give an introduction to the algorithmic theory of D-modules. This would include the description of the properties of the rings of differential
operators, in particular, the ones that allow for computation of Gröbner bases.
The second part will show the applications of D-modules to the computation of local cohomology of a polynomial ring at a given ideal. The nonvanishing of the local cohomology module of a certain
degree may answer the question about the minimal number of generators for the ideal.
The presentation is going to be accompanied by the demonstration of the relevant computations in the D-modules for Macaulay 2 package.
January 24, 2007, 11:15 am-12:15 pm, Lind Hall 409
Mihai Putinar (Department of Mathematics, University of California)
Moments of positivity
Abstract: The seminar will offer an unifying overview of the theory of positive functionals, the spectral theorem, moment problems and polynomial optimization. We will treat only the commutative
case, in the following order:
1. The integral
2. Positive definite quadratic forms and the spectral theorem
3. Orthogonal polynomials and Jacobi matrices
4. Moment problems and continued fractions
5. Polynomial optimization
6. Open questions
We encourage the participants to have a look at Stieltjes' classical memoir on continued fractions, available at: http://www.numdam.org/numdam-bin/fitem?id=AFST_1894_1_8_4_J1_0
January 31, 2007, 11:15 am-12:15 pm, Lind Hall 229 [Note room change]
Jean Bernard Lasserre (LAAS, Centre National de la Recherche)
SDP and LP-relaxations in polynomial optimization: The power of real algebraic geometry
Summary: In this seminar we consider the general polynomial optimization problem: that is, finding the GLOBAL minimum of a polynomial over a compact basic semi-algebraic set, a NP-hard problem. We
will describe how powerful representation results in real algebraic geometry are exploited to build up a hierarchy of linear or semidefinite programming (LP or SDP) relaxations, whose monotone
sequence of optimal values converges to the desired value. A comparison with the usual Kuhn-Tucker local optimality conditions is also discussed.
February 7, 2007, 11:15 am-12:15 pm, EE/Sci 3-180 [Note room change]
Serkan Hosten (Department of Mathematics, San Francisco State University)
An introduction to algebraic statistics
Talks (A/V) Slides: pdf
Abstract: This will be a gentle introduction to the applications of algebraic geometry to statistics. The main goal of the talk is to present statistical models, i.e. sets of probability
distributions (defined parametrically most of the time), as algebraic varieties. I will give examples where defining equations of such statistical model varieties have been successfully computed:
various graphical models and models for DNA sequence evolution. I will also talk about the algebraic degree of maximum likelihood estimation with old and new examples. February 14, 2007, 11:15
am-12:15 pm, Lind Hall 229
Stephen E. Fienberg (Department of Statistics, Carnegie Mellon University)
Statistical formulation of issues associated with multi-way contingency tables and the links to algebraic geometry
Talks(A/V) Slides: pdf
Abstract: Many statistical problems arising in the context of multi-dimensional tables of non-negative counts (known as contingency tables) have natural representations in algebraic and polyhedral
geometry. I will introduce some of these problems in the context of actual examples of large sparse tables and talk about how we have treated them and why. For example, our work on bounds for
contingency table entries has been motivated by problems arising in the context of the protection of confidential statistical data; results on decompositions related to graphical model representations
have explicit algebraic geometry formulations. Similarly, results on the existence of maximum likelihood estimates for log-linear models are tied to polyhedral representations. It turns out that
there are close linkages that I will describe. February 21, 2007, 11:15 am-12:15 pm, EE/CSci 3-180
Evelyne Hubert (Institut National de Recherche en Informatique Automatique (INRIA) Sophia Antipolis)
Rational and algebraic invariants of a group action
Slides: pdf  Talks(A/V)
Abstract: We consider a rational group action on the affine space and propose a construction of a finite set of rational invariants and a simple algorithm to rewrite any rational invariant in terms
of those generators.
The construction can be extended to provide algebraic foundations to Cartan's moving frame method, as revised in [Fels & Olver 1999].
This is joint work with Irina Kogan, North Carolina State University.
February 28, 2007, 11:15 am-12:15 pm, EE/CSci 3-180
Peter J. Olver (School of Mathematics, University of Minnesota)
Moving frames in classical invariant theory and computer vision
Abstract: Classical invariant theory was inspired by the basic problems of equivalence and symmetry of polynomials (or forms) under the projective group. In this talk, I will explain how a powerful
new approach to the Cartan method of moving frames can be applied to classify algebraic and differential invariants for very general group actions, leading, among many other applications, to new
solutions to the equivalence and symmetry problems arising in both invariant theory, differential geometry, and object recognition in computer vision. March 14, 2007, 11:15 am-12:15 pm, EE/CSci 3-180
Mordechai Katzman (Department of Pure Mathematics, University of Sheffield)
Counting monomials
Abstract: The contents of this elementary talk grew out of my need to explain to non-mathematicians what I do for a living.
I will pose (and solve) two old chessboard enumeration problems and a new problem. We will solve these by counting certain monomials, and this will naturally lead us to the notion of Hilbert
functions. With these examples in mind, we will try and understand the simplest of monomial ideals, namely, edge ideals, and discover that these are not simple at all! On the way we will discover a
new numerical invariant of forests.
March 21, 2007, 11:15 am-12:15 pm, EE/CSci 3-180
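The passage from counting monomials to Hilbert functions can be seen in the simplest textbook case (not the edge ideals of the talk): the Hilbert function of the polynomial ring k[x_1,...,x_n] counts the monomials of degree d, and equals the binomial coefficient C(d+n-1, n-1). A quick brute-force check:

```python
import math
from itertools import product

def monomials_of_degree(n, d):
    """Count exponent vectors (e_1,...,e_n) with e_1 + ... + e_n == d,
    i.e. monomials of total degree d in n variables."""
    return sum(1 for e in product(range(d + 1), repeat=n) if sum(e) == d)

# Hilbert function of k[x_1,...,x_n] in degree d: C(d+n-1, n-1).
for n, d in [(2, 5), (3, 4), (4, 3)]:
    assert monomials_of_degree(n, d) == math.comb(d + n - 1, n - 1)
print("ok")
```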
Thorsten Theobald (Fachbereich Informatik und Mathematik, Goethe-Universität Frankfurt am Main)
Symmetries in SDP-based relaxations for constrained polynomial optimization
Abstract: We consider the issue of exploiting symmetries in the hierarchy of semidefinite programming relaxations recently introduced in polynomial optimization. After providing the necessary
background we focus on problems where either the symmetric or the cyclic group is acting on the variables and extend the representation-theoretical methods of Gatermann and Parrilo to constrained
polynomial optimization problems. Moreover, we also propose methods to efficiently compute lower and upper bounds for the subclass of problems where the objective function and the constraints are
described in terms of power sums.
(Joint work with L. Jansson, J.B. Lasserre and C. Riener)
March 28, 2007, 11:15 am-12:15 pm, EE/CSci 3-180
Seth Sullivant (Department of Mathematics, Harvard University)
Algebraic geometry of Gaussian Bayesian networks
Abstract: Conditional independence models for Gaussian random variables are algebraic varieties in the cone of positive definite matrices. We explore the geometry of these varieties in the case of
Bayesian networks, with a view towards generalizing the recursive factorization theorem. When some of the random variables are hidden, non-independence constraints are needed to describe the Bayesian
networks. These non-independence constraints have potential inferential uses for studying collections of random variables. In the case that the underlying network is a tree, we give a complete
description of the defining constraints of the model and show a surprising connection to the Grassmannian. April 4, 2007, 11:15 am-12:15 pm, EE/CSci 3-180
Frank Sottile (Department of Mathematics, Texas A&M)
Optimal fewnomial bounds from Gale dual polynomial systems
Abstract: In 1980, Askold Khovanskii established his fewnomial bound for the number of real solutions to a system of polynomials, showing that the complexity of the set of real solutions to a system
of polynomials depends upon the number of monomials and not on the degree. This bound, a fundamental finiteness result in real algebraic geometry, is believed to be unrealistically large.
I will report on joint work with Frederic Bihan on a new fewnomial bound which is substantially lower than Khovanskii's bound and asymptotically optimal. This bound is obtained by first reducing a
given system to a Gale system, and then bounding the number of solutions to a Gale system. Like Khovanskii's bound, this bound is the product of an exponential function and a polynomial in the
dimension, with the exponents in both terms depending upon the number of monomials. In our bound, the exponents are smaller than in Khovanskii's.
I will also discuss a continuation of this work with J Maurice Rojas in which we show that this fewnomial bound is optimal, in an asymptotic sense. We also use it to establish a new and significantly
smaller bound for the total Betti number of a fewnomial hypersurface.
April 11, 2007, 11:15 am-12:15 pm, EE/CSci 3-180
Saugata Basu (School of Mathematics, Georgia Institute of Technology)
Combinatorial complexity in o-minimal geometry
Abstract: We prove tight bounds on the combinatorial and topological complexity of sets defined in terms of n definable sets belonging to some fixed definable family of sets in an o-minimal
structure. This generalizes the combinatorial parts of similar bounds known in the case of semi-algebraic and semi-Pfaffian sets, and as a result vastly increases the applicability of results on
combinatorial and topological complexity of arrangements studied in discrete and computational geometry. As a sample application, we extend a Ramsey-type theorem due to Alon et al. originally proved
for semi-algebraic sets of fixed description complexity to this more general setting. The talk will be self-contained and I will go over the basic definitions of o-minimality for those who are
unfamiliar with the notion. April 25, 2007, 11:15am-12:15 pm, Lind Hall 229
Gregorio Malajovich (Departamento de Matemática Aplicada, Universidade Federal do Rio de Janeiro)
On sparse polynomial systems, mixed volumes and condition numbers
Abstract: pdf April 26, 2007, 10:15-11:10 am, EE/CSci 3-180
J. Maurice Rojas (Department of Mathematics, Texas A&M University)
Random polynomial systems and balanced metrics on toric varieties
Abstract: Suppose c[0],...,c[d] are independent identically distributed real Gaussians with mean 0 and variance 1. Around the 1940s, Kac and Rice proved that the expected number of real roots of the
polynomial c[0] + c[1] x + ... + c[d] x^di, then the expected number of real roots is EXACTLY the square root of d. Aside from the cute square root phenomenon, Kostlan also observed that the
distribution function of the real roots is constant with respect to the usual metric on the real projective line.
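The square-root phenomenon is easy to probe numerically. The sketch below is a hedged Monte Carlo check in Python; it assumes the Kostlan weighting (the coefficient of x^i drawn with variance C(d, i)), which is the standard ensemble for which the expected number of real roots is exactly sqrt(d) — the abstract's precise setup may differ in details.

```python
import numpy as np
from math import comb

# Assumed Kostlan ensemble: coefficient of x^i ~ N(0, C(d, i)).
# For this ensemble, the expected number of real roots is exactly sqrt(d).
rng = np.random.default_rng(0)
d, trials = 16, 2000
scale = np.sqrt([comb(d, i) for i in range(d + 1)])

counts = []
for _ in range(trials):
    c = rng.standard_normal(d + 1) * scale   # c[i] multiplies x^i
    roots = np.roots(c[::-1])                # np.roots expects highest degree first
    counts.append(int(np.sum(np.abs(roots.imag) < 1e-8)))

print(np.mean(counts))   # should hover near sqrt(16) = 4
```

With a few thousand trials the sample mean settles to within a few hundredths of sqrt(d), which is a cheap sanity check on the exact formula.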
The question of what a "natural" probability measure for general multivariate polynomials then arises. We exhibit two (equivalent) combinatorial constructions that conjecturally yield such a measure.
We show how our conjecture is true in certain interesting special cases, thus recovering earlier work of Shub, Smale, and McLennan. We also relate our conjecture to earlier asymptotic results of
Shiffman and Zelditch on random sections of holomorphic line bundles.
This talk will deal concretely with polynomials and Newton polytopes, so no background on probability or algebraic geometry is assumed.
May 2, 2007, 11:15am-12:15 pm, Lind Hall 409
Patricia Hersh (Department of Mathematics, Indiana University)
A homological obstruction to weak order on trees
Abstract: When sorting data on a network of computers, it is natural to ask which data swaps between neighbors constitute progress. In a linear array, the answer is simple, by virtue of the fact that
permutations admit pleasant notions of inversions and weak order. I will discuss how the topology of chessboard complexes constrains the extent to which these ideas may carry over to other trees; it
turns out that there are homological obstructions telling us that a tree does not admit an inversion function unless each node has at least as much capacity as its degree minus one. On the other
hand, we construct an inversion function and weak order for all trees that do meet this capacity requirement, and we prove a connectivity bound conjectured by Babson and Reiner for 'Coxeter-like
complexes' along the way.
May 9, 2007, 11:15am-12:15 pm, Lind Hall 409
Alexander Yong (School of Mathematics, University of Minnesota)
Schubert combinatorics and geometry
Abstract: The topic of Schubert varieties of homogeneous spaces G/P is at the interface between algebraic geometry and combinatorics. I'll describe work on two themes.
The first is Schubert calculus: counting points in intersections of Schubert varieties. A goal has been combinatorial rules for these computations. I'll explain the carton rule which manifests basic
symmetries of the numbers for the Grassmannian case; this version also has the advantage of generalizing to (co)minuscule G/P.
The second concerns singularities of Schubert varieties. I'll give a combinatorial framework for understanding invariants of singularities via a notion we call interval pattern avoidance.
The first half of this talk is joint work with Hugh Thomas (U. New Brunswick) while the second half is joint work with Alexander Woo (UC Davis).
May 15, 2007, 1:15-2:15 pm, Lind Hall 305
Sandra Di Rocco (Department of Mathematics, KTH)
Discriminants, dual varieties and toric geometry
Abstract: Given an algebraic variety, embedded in projective space, the closure of the set of all hyperplanes tangent at some nonsingular point is called the dual variety. A general embedding has dual variety
of co-dimension one (in the dual projective space) and hence defined by an irreducible homogeneous polynomial, called the discriminant. The study of the exceptional embeddings, i.e. the ones having
dual variety of lower dimension, is a very classical problem in algebraic geometry, still open for many classes of varieties. I will explain the problem and give the solution for the class of nonsingular toric varieties.
May 16, 2007, 11:15am-12:15 pm, Lind Hall 409
Peter Bürgisser (Mathematik - Informatik, Universität Paderborn)
Average volume, curvatures, and Euler characteristic of random real algebraic varieties
Abstract: We determine the expected curvature polynomial of random real projective varieties given as the zero set of independent random polynomials with Gaussian distribution, whose distribution is
invariant under the action of the orthogonal group. In particular, the expected Euler characteristic of such random real projective varieties is found. This considerably extends previously known
results on the number of roots, the volume, and the Euler characteristic of the solution set of random polynomial equations.
May 23, 2007, 11:15am-12:15 pm, Lind Hall 409
Ioannis Z. Emiris (Department of Informatics & Telecommunications, National Kapodistrian University of Athens)
On the Newton polytope of specialized resultants
Slides: pdf
Abstract: We overview basic notions from sparse, or toric, elimination theory and apply them in order to predict the Newton polytope of the sparse resultant. We consider the case when all but a
constant number of resultant parameters are specialized. Of independent interest is the problem of predicting the support of the implicit equation of a parametric curve or surface. We bound this
support by a direct approach, based on combinatorial geometry. The talk will point to various open questions.
June 6, 2007, 11:15am-12:15 pm, Lind Hall 302
Gabriela Jeronimo (Departamento de Matemática, Facultad de Ciencias Exactas y Naturales, University of Buenos Aires)
A symbolic approach to sparse elimination
Abstract: Sparse elimination is concerned with systems of polynomial equations in which each equation is given by a polynomial having non-zero coefficients only for those monomials lying in a
prescribed set.
We will discuss a new symbolic procedure for solving zero-dimensional sparse polynomial systems by means of deformation techniques. Roughly speaking, a deformation method to solve a zero-dimensional
polynomial equation system works as follows: the input system is regarded as a member of a parametric family of zero-dimensional systems. Then, the solutions to a particular parametric instance which
is "easy to solve" are computed and, finally, these solutions enable one to recover the solutions of the original system.
The algorithm combines the polyhedral deformation introduced by Huber and Sturmfels with symbolic techniques relying on the Newton-Hensel lifting procedure. Its running time can be estimated mainly
in terms of the input length and two invariants related to the combinatorial structure underlying the problem.
June 20, 2007, Lind Hall 305
Frank Sottile (Department of Mathematics Texas A&M University)
IMA program on applications of algebraic geometry (Special seminar as part of the annual PIC-IAB meeting)
Slides: small.pdf talk.pdf
Inference and Learning in Hybrid Bayesian Networks
Results 1 - 10 of 20
- Computing Science and Statistics , 2001
Cited by 176 (2 self)
The Bayes Net Toolbox (BNT) is an open-source Matlab package for directed graphical models. BNT supports many kinds of nodes (probability distributions), exact and approximate inference, parameter
and structure learning, and static and dynamic models. BNT is widely used in teaching and research: the web page has received over 28,000 hits since May 2000. In this paper, we discuss a broad
spectrum of issues related to graphical models (directed and undirected), and describe, at a high-level, how BNT was designed to cope with them all. We also compare BNT to other software packages for
graphical models, and to the nascent OpenBayes effort.
, 2002
Cited by 48 (0 self)
Many real-world systems are naturally modeled as hybrid stochastic processes, i.e., stochastic processes that contain both discrete and continuous variables. Examples include speech recognition,
target tracking, and monitoring of physical systems. The task is usually to perform probabilistic inference, i.e., infer the hidden state of the system given some noisy observations. For example, we
can ask what is the probability that a certain word was pronounced given the readings of our microphone, what is the probability that a submarine is trying to surface given our sonar data, and what
is the probability of a valve being open given our pressure and flow readings. Bayesian networks are
Cited by 27 (1 self)
Markov logic networks (MLNs) combine first-order logic and Markov networks, allowing us to handle the complexity and uncertainty of real-world problems in a single consistent framework. However, in
MLNs all variables and features are discrete, while most real-world applications also contain continuous ones. In this paper we introduce hybrid MLNs, in which continuous properties (e.g., the
distance between two objects) and functions over them can appear as features. Hybrid MLNs have all distributions in the exponential family as special cases (e.g., multivariate Gaussians), and allow
much more compact modeling of non-i.i.d. data than propositional representations like hybrid Bayesian networks. We also introduce inference algorithms for hybrid MLNs, by extending the MaxWalkSAT and
MC-SAT algorithms to continuous domains. Experiments in a mobile robot mapping domain—involving joint classification, clustering and regression—illustrate the power of hybrid MLNs as a modeling
language, and the accuracy and efficiency of the inference algorithms.
- In Workshop on Perceptual User-Interfaces , 1999
Cited by 18 (3 self)
The development of user interfaces based on vision and speech requires the solution of a challenging statistical inference problem: The intentions and actions of multiple individuals must be inferred
from noisy and ambiguous data. We argue that Bayesian network models are an attractive statistical framework for cue fusion in these applications. Bayes nets combine a natural mechanism for
expressing contextual information with efficient algorithms for learning and inference. We illustrate these points through the development of a Bayes net model for detecting when a user is speaking.
The model combines four simple vision sensors: face detection, skin color, skin texture, and mouth motion. We present some promising experimental results. 1
- Pattern Recognition , 2004
- PhD thesis, Program in Media Arts and Sciences, Massachusetts Institute of Technology , 2000
Cited by 7 (1 self)
Abstract—Facial expression is a natural and powerful means of human communication. Recognizing spontaneous facial actions, however, is very challenging due to subtle facial deformation, frequent head
movements, and ambiguous and uncertain facial motion measurements. Because of these challenges, current research in facial expression recognition is limited to posed expressions and often in frontal
view. A spontaneous facial expression is characterized by rigid head movements and nonrigid facial muscular movements. More importantly, it is the coherent and consistent spatiotemporal interactions
among rigid and nonrigid facial motions that produce a meaningful facial expression. Recognizing this fact, we introduce a unified probabilistic facial action model based on the Dynamic Bayesian
network (DBN) to simultaneously and coherently represent rigid and nonrigid facial motions, their spatiotemporal dependencies, and their image measurements. Advanced machine learning methods are
introduced to learn the model based on both training data and subjective prior knowledge. Given the model and the measurements of facial motions, facial action recognition is accomplished through
probabilistic inference by systematically integrating visual measurements with the facial action model. Experiments show that compared to the state-of-the-art techniques, the proposed system yields
significant improvements in recognizing both rigid and nonrigid facial motions, especially for spontaneous facial expressions. Index Terms—Facial action unit recognition, face pose estimation, facial
action analysis, facial action coding system, Bayesian networks. Ç 1
- In Intl. Conf. Document Analysis and Recognition (ICDAR , 2003
Cited by 6 (2 self)
for explicitly modeling components and their relationships of Korean Hangul characters. A Hangul character is modeled with hierarchical components: a syllable model, grapheme models, stroke models
and point models. Each model is constructed with subcomponents and their relationships except a point model, the primitive one, which is represented by a 2-D Gaussian for X-Y coordinates of point
instances. Relationships between components are modeled with their positional dependencies. For on-line handwritten Hangul characters, the proposed system shows higher recognition rates than the HMM
system with chain code features: 95.7% vs 92.9% on average.
, 1998
Cited by 5 (2 self)
Introduction We consider the problem of finding the Maximum Likelihood (ML) estimates of the parameters of a conditional Gaussian node Y with continuous parent X and discrete parent Q, i.e., p(y | x, Q = i) = c |Σ_i|^(-1/2) exp(-(1/2) (y - B_i x)' Σ_i^(-1) (y - B_i x)), where c = (2π)^(-d/2) is a constant and |y| = d. The j'th row of B_i is the regression vector for the j'th component of y given that Q = i. To allo
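The density above is the conditional Gaussian p(y | x, Q=i) = (2π)^(-d/2) |Σ_i|^(-1/2) exp(-(1/2)(y - B_i x)' Σ_i^(-1) (y - B_i x)). A minimal sketch evaluating it follows; the names B and Sigma are illustrative, and this is not the paper's code.

```python
import numpy as np

def cg_density(y, x, B, Sigma):
    """Conditional Gaussian density p(y | x, Q=i):
    (2*pi)^(-d/2) * |Sigma|^(-1/2) * exp(-0.5 (y - B x)' Sigma^{-1} (y - B x))."""
    d = len(y)
    r = y - B @ x                              # residual of the regression on x
    quad = r @ np.linalg.solve(Sigma, r)       # (y - Bx)' Sigma^{-1} (y - Bx)
    return (2 * np.pi) ** (-d / 2) / np.sqrt(np.linalg.det(Sigma)) * np.exp(-0.5 * quad)

# 1-D sanity check: reduces to an ordinary normal pdf with mean B*x and variance Sigma.
y = np.array([2.5]); x = np.array([1.0])
B = np.array([[2.0]]); Sigma = np.array([[0.25]])
p = cg_density(y, x, B, Sigma)
```

In one dimension this collapses to the familiar N(Bx, Sigma) density, which makes a convenient check on the matrix formula.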
Merrimack Trigonometry Tutor
Find a Merrimack Trigonometry Tutor
...The critical reading timing is essential and that is where most students suffer the most damage to their score. I have studied the SAT intensely over the last 10 years, developing a program to
help students overcome the difficulties of the SAT writing test. There are just 5 major errors that I have identified.
19 Subjects: including trigonometry, chemistry, geometry, algebra 2
...I've tutored high school students with much success in all sorts of math: Algebra, Trigonometry, Geometry, precalculus, and calculus. In some cases, I've worked under difficult
47 Subjects: including trigonometry, reading, chemistry, geometry
...My science background helps here, and I can often find examples of techniques I have used in my career despite not knowing the application when I was in high school. Although my educational
background is in chemistry, my second major in college was applied mathematics. I recently passed the Mas...
12 Subjects: including trigonometry, chemistry, calculus, physics
...I helped the students to improve their grades in math subjects. I helped the students in the following area, such as Algebra, Calculus, Trigonometry and Geometry. Also, most of the students
love my teaching skills.My major was math.
9 Subjects: including trigonometry, chemistry, calculus, geometry
...I have had many recent tutoring situations with students taking algebra, calculus, and statistics. Courses I have taught include algebra, trigonometry, precalculus, calculus, differential
equations, statistics, discrete mathematics, and advanced engineering mathematics.I have taught algebra clas...
12 Subjects: including trigonometry, calculus, geometry, statistics
probability theory (mathematics)
probability theory
probability theory, a branch of mathematics concerned with the analysis of random phenomena. The outcome of a random event cannot be determined before it occurs, but it may be any one of several
possible outcomes. The actual outcome is considered to be determined by chance.
The word probability has several meanings in ordinary conversation. Two of these are particularly important for the development and applications of the mathematical theory of probability. One is the
interpretation of probabilities as relative frequencies, for which simple games involving coins, cards, dice, and roulette wheels provide examples. The distinctive feature of games of chance is that
the outcome of a given trial cannot be predicted with certainty, although the collective results of a large number of trials display some regularity. For example, the statement that the probability
of “heads” in tossing a coin equals one-half, according to the relative frequency interpretation, implies that in a large number of tosses the relative frequency with which “heads” actually occurs
will be approximately one-half, although it contains no implication concerning the outcome of any given toss. There are many similar examples involving groups of people, molecules of a gas, genes,
and so on. Actuarial statements about the life expectancy for persons of a certain age describe the collective experience of a large number of individuals but do not purport to say what will happen
to any particular person. Similarly, predictions about the chance of a genetic disease occurring in a child of parents having a known genetic makeup are statements about relative frequencies of
occurrence in a large number of cases but are not predictions about a given individual.
This article contains a description of the important mathematical concepts of probability theory, illustrated by some of the applications that have stimulated their development. For a fuller
historical treatment, see probability and statistics. Since applications inevitably involve simplifying assumptions that focus on some features of a problem at the expense of others, it is
advantageous to begin by thinking about simple experiments, such as tossing a coin or rolling dice, and later to see how these apparently frivolous investigations relate to important scientific questions.
Experiments, sample space, events, and equally likely probabilities
Applications of simple probability experiments
The fundamental ingredient of probability theory is an experiment that can be repeated, at least hypothetically, under essentially identical conditions and that may lead to different outcomes on
different trials. The set of all possible outcomes of an experiment is called a “sample space.” The experiment of tossing a coin once results in a sample space with two possible outcomes, “heads” and
“tails.” Tossing two dice has a sample space with 36 possible outcomes, each of which can be identified with an ordered pair (i, j), where i and j assume one of the values 1, 2, 3, 4, 5, 6 and denote
the faces showing on the individual dice. It is important to think of the dice as identifiable (say by a difference in colour), so that the outcome (1, 2) is different from (2, 1). An “event” is a
well-defined subset of the sample space. For example, the event “the sum of the faces showing on the two dice equals six” consists of the five outcomes (1, 5), (2, 4), (3, 3), (4, 2), and (5, 1).
A third example is to draw n balls from an urn containing balls of various colours. A generic outcome to this experiment is an n-tuple, where the ith entry specifies the colour of the ball obtained
on the ith draw (i = 1, 2,…, n). In spite of the simplicity of this experiment, a thorough understanding gives the theoretical basis for opinion polls and sample surveys. For example, individuals in
a population favouring a particular candidate in an election may be identified with balls of a particular colour, those favouring a different candidate may be identified with a different colour, and
so on. Probability theory provides the basis for learning about the contents of the urn from the sample of balls drawn from the urn; an application is to learn about the electoral preferences of a
population on the basis of a sample drawn from that population.
Another application of simple urn models is to use clinical trials designed to determine whether a new treatment for a disease, a new drug, or a new surgical procedure is better than a standard
treatment. In the simple case in which treatment can be regarded as either success or failure, the goal of the clinical trial is to discover whether the new treatment more frequently leads to success
than does the standard treatment. Patients with the disease can be identified with balls in an urn. The red balls are those patients who are cured by the new treatment, and the black balls are those
not cured. Usually there is a control group, who receive the standard treatment. They are represented by a second urn with a possibly different fraction of red balls. The goal of the experiment of
drawing some number of balls from each urn is to discover on the basis of the sample which urn has the larger fraction of red balls. A variation of this idea can be used to test the efficacy of a new
vaccine. Perhaps the largest and most famous example was the test of the Salk vaccine for poliomyelitis conducted in 1954. It was organized by the U.S. Public Health Service and involved almost two
million children. Its success has led to the almost complete elimination of polio as a health problem in the industrialized parts of the world. Strictly speaking, these applications are problems of
statistics, for which the foundations are provided by probability theory.
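The two-urn clinical trial can be sketched numerically. This is a hedged Python simulation — the cure rates 0.6 and 0.5 are made-up numbers — showing how sample fractions of "red balls" from each urn recover the underlying difference between treatments:

```python
import numpy as np

rng = np.random.default_rng(1)
p_new, p_std = 0.6, 0.5            # hypothetical true cure rates (red-ball fractions)
n = 5000                            # balls drawn from each urn

cured_new = rng.random(n) < p_new   # each draw is red (cured) with probability p_new
cured_std = rng.random(n) < p_std

diff = cured_new.mean() - cured_std.mean()
se = np.sqrt(cured_new.mean() * (1 - cured_new.mean()) / n +
             cured_std.mean() * (1 - cured_std.mean()) / n)
print(f"estimated difference {diff:.3f} +/- {1.96 * se:.3f}")
```

With samples this large, the estimated difference lands close to the true 0.10, and the standard error quantifies how far a sample of this size can stray — which is exactly what such a trial must weigh before declaring one treatment better.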
In contrast to the experiments described above, many experiments have infinitely many possible outcomes. For example, one can toss a coin until “heads” appears for the first time. The number of
possible tosses is n = 1, 2,…. Another example is to twirl a spinner. For an idealized spinner made from a straight line segment having no width and pivoted at its centre, the set of possible
outcomes is the set of all angles that the final position of the spinner makes with some fixed direction, equivalently all real numbers in [0, 2π). Many measurements in the natural and social
sciences, such as volume, voltage, temperature, reaction time, marginal income, and so on, are made on continuous scales and at least in theory involve infinitely many possible values. If the
repeated measurements on different subjects or at different times on the same subject can lead to different outcomes, probability theory is a possible tool to study this variability.
Because of their comparative simplicity, experiments with finite sample spaces are discussed first. In the early development of probability theory, mathematicians considered only those experiments
for which it seemed reasonable, based on considerations of symmetry, to suppose that all outcomes of the experiment were “equally likely.” Then in a large number of trials all outcomes should occur
with approximately the same frequency. The probability of an event is defined to be the ratio of the number of cases favourable to the event—i.e., the number of outcomes in the subset of the sample
space defining the event—to the total number of cases. Thus, the 36 possible outcomes in the throw of two dice are assumed equally likely, and the probability of obtaining “six” is the number of
favourable cases, 5, divided by 36, or 5/36.
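The 5/36 computation is easy to verify by enumerating the sample space, for example in Python:

```python
from itertools import product
from fractions import Fraction

# Sample space: ordered pairs (i, j) for two distinguishable dice.
outcomes = list(product(range(1, 7), repeat=2))
favourable = [o for o in outcomes if sum(o) == 6]   # (1,5),(2,4),(3,3),(4,2),(5,1)
prob = Fraction(len(favourable), len(outcomes))
print(prob)   # 5/36
```

Enumeration like this scales to any equally-likely finite sample space, which is exactly the classical definition of probability used here.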
Now suppose that a coin is tossed n times, and consider the probability of the event “heads does not occur” in the n tosses. An outcome of the experiment is an n-tuple, the kth entry of which
identifies the result of the kth toss. Since there are two possible outcomes for each toss, the number of elements in the sample space is 2^n. Of these, only one outcome corresponds to having no
heads, so the required probability is 1/2^n.
It is only slightly more difficult to determine the probability of “at most one head.” In addition to the single case in which no head occurs, there are n cases in which exactly one head occurs,
because it can occur on the first, second,…, or nth toss. Hence, there are n + 1 cases favourable to obtaining at most one head, and the desired probability is (n + 1)/2^n.
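Both coin-toss counts can be checked by brute force over all 2^n outcomes:

```python
from itertools import product

n = 10
tosses = list(product("HT", repeat=n))                 # all 2**n outcomes
no_heads = sum(t.count("H") == 0 for t in tosses)      # exactly one such outcome
at_most_one = sum(t.count("H") <= 1 for t in tosses)   # n + 1 such outcomes

assert no_heads == 1                                   # P = 1 / 2**n
assert at_most_one == n + 1                            # P = (n + 1) / 2**n
print(no_heads / 2**n, at_most_one / 2**n)
```

The assertions confirm the counting argument in the text: one all-tails outcome, plus n single-head outcomes.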
Re: st: RE: FW: graphing ordinal panel data over time
Re: st: RE: FW: graphing ordinal panel data over time
From Roger Newson <r.newson@imperial.ac.uk>
To "statalist@hsphsun2.harvard.edu" <statalist@hsphsun2.harvard.edu>
Subject Re: st: RE: FW: graphing ordinal panel data over time
Date Tue, 8 Jun 2010 15:26:15 +0100
If Paul wants to track the proportions with the 3 values over time, with confidence limits, then Paul might define 3 identifier variables identifying the 3 values, and model the progress of the means
(proportions) of these with time (possibly using -ci- with -statsby- to make an output dataset with 1 obs per time point), and then use -eclplot- to produce the confidence interval plots,
Alternatively, if time is a continuous variable, then Paul might prefer to use a spline regression model of log(p/(1-p)) with respect to time, using the -frencurv- module of the -bspline- package to
compute a spline basis whose parameter values are values of the spline at points on the time axis, and then use -logit-, -logistic- or -glm-, with the -noconst- option, to define confidence intervals
for the p/(1-p) values at these time points, and then to use -parmest- to produce an output dataset with 1 observation per parameter and data on estimates and confidence limits, and then to do an
end-point transformation of the confidence intervals to the original probability scale before plotting the confidence intervals over time using -eclplot-. The examples of Newson (2001) and Newson
(2004) might be helpful here.
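The end-point transformation Roger describes is language-agnostic: build the Wald interval on the log-odds scale, then map both endpoints back through the inverse logit. A hedged sketch in Python — the estimate and standard error are made-up numbers, not Stata output:

```python
import math

def expit(z):                       # inverse of the logit link
    return 1.0 / (1.0 + math.exp(-z))

est, se = 0.40, 0.15                # hypothetical log-odds estimate and its SE
z = 1.96                            # 95% normal quantile
lo, hi = est - z * se, est + z * se

p_lo, p_hat, p_hi = expit(lo), expit(est), expit(hi)
# Transforming the endpoints keeps the interval inside (0, 1) and preserves
# ordering, which a Wald interval built directly on the probability scale
# need not do near 0 or 1.
print(f"{p_hat:.3f} ({p_lo:.3f}, {p_hi:.3f})")
```

Because expit is monotone, the transformed endpoints are themselves valid confidence limits for the proportion, ready for a confidence-interval plot over time.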
If Paul is principally interested in measuring the ordinal association of this 3-level ordinal variable with time, then the -somersd- package, downloadable from SSC, might provide a way to do this.
I hope this helps.
Best wishes
Newson R. 2001. Splines with parameters that can be explained in words to non-mathematicians. Presented at the 7th UK Stata User Meeting, 14 May, 2001. Download from
Newson R. 2004. B-splines and splines parameterized by their values at reference points. (This is a manual for the bspline package, downloadable from the SSC archive site, and is a post-publication
update of a Stata Technical Bulletin article.) Download from
Roger B Newson BSc MSc DPhil
Lecturer in Medical Statistics
Respiratory Epidemiology and Public Health Group
National Heart and Lung Institute
Imperial College London
Royal Brompton Campus
Room 33, Emmanuel Kaye Building
1B Manresa Road
London SW3 6LR
Tel: +44 (0)20 7352 8121 ext 3381
Fax: +44 (0)20 7351 8322
Email: r.newson@imperial.ac.uk
Web page: http://www.imperial.ac.uk/nhli/r.newson/
Departmental Web page:
Opinions expressed are those of the author, not of the institution.
On 08/06/2010 13:16, Maarten buis wrote:
--- Seed, Paul forwarding a.hense@jpberlin.de
I have got an ordered outcome variable with three values
over a period of 20 years. To describe the data, I want
to show how the three values of my outcome variable have
developed over time.
--- On Tue, 8/6/10, Nick Cox wrote:
There are several possible graphs here. For example, you
can -contract- and then just draw line plots of frequencies
or percents. Another way to do it directly is using
-catplot- (SSC).
Yet another way would be look at -proprcspline-, see
-ssc d proprcspline- and
An important difference is that -proprcspline- also applies
certain amount of smoothing, which can for example be helpful
when some the of the periods contain relatively few cases, such
that the changes between periods becomes too eratic.
Hope this helps,
Maarten L. Buis
Institut fuer Soziologie
Universitaet Tuebingen
Wilhelmstrasse 36
72074 Tuebingen
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
Frequency-Domain Validation of Linearization
Validate Linearization in Frequency Domain using Linear Analysis Tool
This example shows how to validate linearization results using an estimated linear model.
In this example, you linearize a Simulink^® model using the model initial conditions. You then estimate the frequency response of the model using the same operating point (model initial condition).
Finally, you compare the estimated response to the exact linearization result.
Step 1. Linearize Simulink model.
● Open a Simulink model.
sys = 'scdDCMotor';
● Open the Linear Analysis Tool for the model.
In the Simulink model window, select Analysis > Control Design > Linear Analysis.
● Select a visualization for the linearized model.
In the Plot Result list, choose New Bode.
● Linearize the model.
Click the Linearize button.
A new linearized model, linsys1, appears in the Linear Analysis Workspace.
The software used the model initial conditions as the operating point to generate linsys1.
Step 2. Create sinestream input signal.
● Click the Frequency Response Estimation tab.
In this tab, you estimate the frequency response of the model.
● Open the Create sinestream input dialog box.
Select Sinestream from the Input Signal list.
● Initialize the input signal frequencies and parameters based on the linearized model.
Click Initialize frequencies and parameters.
The Frequency content viewer is populated with frequency points. The software chooses the frequencies and input signal parameters automatically based on the dynamics of linsys1.
● In the Frequency content viewer of the Create sinestream input dialog box, select all the frequency points.
● Modify the amplitude of the input signal.
Enter 1 in the Amplitude box.
● Click OK.
The input signal in_sine1 appears in the Linear Analysis Workspace.
Step 3. Select the plot to display the estimation result.
In the Plot Result list, choose Bode Plot 1 to add the next computed linear system to Bode Plot 1.
Step 4. Estimate frequency response.
Click Estimate.
The estimated system, estsys1, appears in the Linear Analysis Workspace.
Step 5. Examine estimation results.
Bode Plot 1 now shows the Bode responses for the estimated model and the linearized model.
The frequency response for the estimated model matches that of the linearized model.
Choosing Frequency-Domain Validation Input Signal
For frequency-domain validation of linearization, create a sinestream signal. By analyzing one sinusoidal frequency at a time, the software can ignore some of the impact of nonlinear effects.
┃ Input Signal │ Use When │ See Also ┃
┃ Sinestream │ All linearization inputs and outputs are on continuous signals. │ frest.Sinestream ┃
┃ Sinestream with fixed sample time │ One or more of the linearization inputs and outputs is on a discrete signal │ frest.createFixedTsSinestream ┃
You can easily create a sinestream signal based on your linearized model. The software uses the linearized model characteristics to accurately predict the number of sinusoid cycles at each frequency
to reach steady state.
When diagnosing the frequency response estimation, you can use the sinestream signal to determine whether the time series at each frequency reaches steady state.
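The sinestream idea, one sinusoid per frequency with the transient discarded before measuring, can be sketched outside the Linear Analysis Tool. A minimal Python sketch (the first-order filter and all parameters are illustrative, not taken from scdDCMotor):

```python
import math

def filter_step(y_prev, u, a=0.9):
    # Simple first-order IIR plant: y[n] = a*y[n-1] + (1-a)*u[n]
    return a * y_prev + (1 - a) * u

def estimate_gain(omega, n_settle=200, n_meas=2000, a=0.9):
    """Drive the filter with sin(omega*n), skip the transient samples,
    then recover the steady-state amplitude by correlating with sin/cos."""
    y = 0.0
    cs = sn = 0.0
    for n in range(n_settle + n_meas):
        u = math.sin(omega * n)
        y = filter_step(y, u, a)
        if n >= n_settle:
            cs += y * math.cos(omega * n)
            sn += y * math.sin(omega * n)
    # amplitude of the sinusoidal steady-state response
    return 2.0 * math.hypot(cs, sn) / n_meas

def exact_gain(omega, a=0.9):
    # |H(e^{j*omega})| for H(z) = (1-a) / (1 - a*z^{-1})
    return (1 - a) / abs(1 - a * complex(math.cos(-omega), math.sin(-omega)))

for omega in (0.05, 0.2, 1.0):
    print(f"omega={omega}: estimated {estimate_gain(omega):.4f}, exact {exact_gain(omega):.4f}")
```

At each frequency, the correlation against sin and cos recovers the steady-state gain, which is what the estimated model is compared against in the linearization result.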
Statistics Glossary Random Variables And Probability Distributions
Search-result snippets collected on this page:
- Random variables and probability distributions (Khan Academy): "What I want to discuss a little bit in this video is the idea of a random variable; and random variables are at first a little bit confusing, because we would want to"
- Variance of a random variable (WyzAnt Resources): "Variance and standard deviation of a random variable. We have already looked at variance and standard deviation as measures of dispersion under the section on averages."
- Probability distribution (Wikipedia, the free encyclopedia): "To define probability distributions for the simplest cases, one needs to distinguish between discrete and continuous random variables. In the discrete case, one can"
- Tutorial: random variables and their probability distributions: "8.1: Random variables and distributions. Based on Section 8.1 in Finite Mathematics and Finite Mathematics and Applied Calculus. Note: to follow this tutorial, you need"
- Random variables (statistics, probability, and survey): "What is a random variable? This lesson defines random variables, and explains the difference between discrete vs. continuous and finite vs. infinite random variables."
- Probability and Statistics EBook (SOCR, UCLA): "Preface. This is an Internet-based probability and statistics e-book. The materials, tools and demonstrations presented in this e-book would be very useful for"
Related posts (all Feb 22, 2010):
- Statistical glossary - random variables and probability distributions: "Random Variable. The outcome of an experiment need not be a number, for example, the outcome"
- Random variables. Expected value. Probability distributions (both discrete and continuous). Binomial distribution. Poisson processes.
- Introduction to Probability Distributions - Random Variables: "A random variable is defined as a function that associates a real number (the probability"
Optimal Depth-First Strategies for And-Or Trees
Russell Greiner and Ryan Hayward, University of Alberta; Michael Molloy, University of Toronto
Many tasks require evaluating a specified boolean expression f over a set of probabilistic tests where we know the probability that each test will succeed, and also the cost of performing each test.
A strategy specifies when to perform which test, towards determining the overall outcome of f. This paper investigates the challenge of finding the strategy with the minimum expected cost. We observe
first that this task is typically NP-hard -- e.g., when tests can occur many times within f, or when there are probabilistic correlations between the test outcomes. We therefore focus on the situation
where the tests are probabilistically independent and each appears only once in f. Here, f can be written as an and-or tree, where each internal node corresponds to either the "And" or "Or" of its
children, and each leaf node is a probabilistic test. There is an obvious depth-first approach to evaluating such and-or trees: First evaluate each penultimate subtree in isolation; then reduce this
subtree to a single "mega-test" with appropriate cost and probability, and recur on the resulting reduced tree. After formally defining this approach, we show first that it produces the optimal
strategy for shallow (depth 1 or 2) and-or trees, then show it can be arbitrarily bad for deeper trees. We next consider a larger, natural subclass of strategies -- those that can be expressed as a
linear sequence of tests -- and show that the best such "linear strategy" can also be very much worse than the optimal strategy in general. Finally, we show that our results hold in a more general
model, where internal nodes can also be probabilistic tests.
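For the shallow trees discussed above, the depth-1 Or case has a classical closed-form ordering (a background fact, not quoted from the paper): performing independent tests in nondecreasing cost-to-success-probability ratio minimizes expected cost. A brute-force check in Python:

```python
from itertools import permutations

def expected_cost(order):
    """Expected cost of evaluating an Or-node: stop at the first test
    that succeeds; each test is a (cost, P(success)) pair."""
    total, prob_all_failed = 0.0, 1.0
    for c, p in order:
        total += prob_all_failed * c      # pay c only if all earlier tests failed
        prob_all_failed *= (1 - p)
    return total

tests = [(3.0, 0.5), (1.0, 0.9), (2.0, 0.2), (4.0, 0.8)]  # illustrative values

best = min(permutations(tests), key=expected_cost)         # exhaustive optimum
greedy = tuple(sorted(tests, key=lambda t: t[0] / t[1]))   # ascending c/p rule

print("brute force:", best, expected_cost(best))
print("greedy c/p :", greedy, expected_cost(greedy))
```

The greedy c/p ordering matches the exhaustive optimum; for deeper trees, as the abstract notes, no such simple rule survives.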
Illinois Learning Standards
Stage A - Math
6A —
Students who meet the standard can demonstrate knowledge and use of numbers and their many representations in a broad range of theoretical and practical settings. (Representations)
1. Count with understanding, including skip counting by 2's, 5's, and 10's from zero. **
2. Recognize how many in sets of objects. **
3. Demonstrate the concept of odd and even using manipulatives.
4. Develop initial understanding of place value and the base-ten number system using manipulatives. **
5. Describe numeric relationships using appropriate vocabulary.
6. Differentiate between cardinal and ordinal numbers in quantifying and ordering numbers.
7. Connect number words and numerals to the quantities they represent. **
8. Describe parts of a whole using 1/2, 1/3, and 1/4.
9. Order concrete representations of unit fractions.
6B —
Students who meet the standard can investigate, represent and solve problems using number facts, operations, and their properties, algorithms, and relationships. (Operations and properties)
1. Solve one-step addition and subtraction number sentences and word problems using concrete materials.
2. Construct number sentences to match word problems.
3. Demonstrate and describe the effects of adding and subtracting whole numbers using appropriate mathematical notation and vocabulary. **
4. Explore and apply properties of addition and subtraction.
5. Compute using fact families.
6C —
Students who meet the standard can compute and estimate using mental mathematics, paper-and-pencil methods, calculators, and computers. (Choice of method)
1. Develop and use strategies for whole number computations with a focus on addition and subtraction. *
2. Use mental math counting strategies.
3. Describe reasonable and unreasonable sums and differences.
4. Utilize a calculator for counting patterns.
6D —
Students who meet the standard can solve problems using comparison of quantities, ratios, proportions, and percents.
1. Compare two or more sets, using manipulatives, to solve problems.
7A —
Students who meet the standard can measure and compare quantities using appropriate units, instruments, and methods. (Performance and conversion of measurements)
1. Determine the attributes of an object that are measurable (e.g., length and weight are measurable; color and texture are not).
2. Compare and order objects according to measurable attributes. **
3. Measure objects using non-standard units.
4. Explore and describe chronological events (e.g., calendars, timelines, seasons).
5. Identify units of money and the value of each.
6. Count like sets of coins.
7B —
Students who meet the standard can estimate measurements and determine acceptable levels of accuracy. (Estimation)
1. Estimate nonstandard measurements of length, weight, and capacity.
7C —
Students who meet the standard can select and use appropriate technology, instruments, and formulas to solve problems, interpret results, and communicate findings. (Progression from selection of
appropriate tools and methods to application of measurements to solve problems)
1. Select appropriate nonstandard measurement units to measure length, weight, and capacity (e.g., number of handfuls of cubes to fill a container).
8A —
Students who meet the standard can describe numerical relationships using variables and patterns. (Representations and algebraic manipulations)
1. Describe common and uncommon attributes (all, some, none) in a set.
2. Recognize, describe, and extend patterns such as sequences of sounds, motions, shapes, or simple numeric patterns, and translate from one representation to another (e.g., red-blue-red-blue
translates to snap-clap-snap-clap). **
3. Describe given patterns using letters.
4. Analyze repeating patterns. **
8B —
Students who meet the standard can interpret and describe numerical relationships using tables, graphs, and symbols. (Connections of representations including the rate of change)
1. Describe and compare qualitative change, (e.g., student grows taller). **
8C —
Students who meet the standard can solve problems using systems of numbers and their properties. (Problem solving; number systems, systems of equations, inequalities, algebraic functions)
1. Solve simple number sentences with variables (e.g., missing addend problems).
8D —
Students who meet the standard can use algebraic concepts and procedures to represent and solve problems. (Connection of 8A, 8B, and 8C to solve problems)
1. Solve real life word problems using patterns.
9A —
Students who meet the standard can demonstrate and apply geometric concepts involving points, lines, planes, and space. (Properties of single figures, coordinate geometry and constructions)
1. Identify two- and three-dimensional shapes. **
2. Model two-dimensional geometric shapes by drawing or building. **
3. Describe and interpret relative positions in space and apply concepts of relative position (e.g., above/below). **
4. Recognize and describe shapes that have line symmetry. **
5. Identify geometric shapes and structures in the environment. **
6. Explore the effects of translations (slides), reflections (flips), and rotations (turns) with concrete objects.
9B —
Students who meet the standard can identify, describe, classify and compare relationships using points, lines, planes, and solids. (Connections between and among multiple geometric figures)
1. Identify objects that are the same shape.
2. Compare and sort two- and three-dimensional objects.
9C —
Students who meet the standard can construct convincing arguments and proofs to solve problems. (Justifications of conjectures and conclusions)
1. Recognize and explain a geometric pattern.
9D is Not Applicable for Stages A - F.
10A —
Students who meet the standard can organize, describe and make predictions from existing data. (Data analysis)
1. Organize, describe, and label simple data displays such as pictographs, tallies, tables, and bar graphs.
2. Compare numerical information derived from tables and graphs.
10B —
Students who meet the standard can formulate questions, design data collection methods, gather and analyze data and communicate findings. (Data Collection)
1. Gather data to answer a simple question.
10C —
Students who meet the standard can determine, describe and apply the probabilities of events. (Probability including counting techniques)
1. Identify possible and impossible results of probability events using concrete materials.
2. Determine all possible outcomes of a given situation.
* National Council of Teachers of Mathematics. Principles and Standards for School Mathematics. Reston, Va: National Council of Teachers of Mathematics, 2000.
** Adapted from: National Council of Teachers of Mathematics. Principles and Standards for School Mathematics. Reston, Va: National Council of Teachers of Mathematics, 2000.
RF PA linearity in GSM transmitters
Nonlinearity in GSM PAs for digitally modulated signals is acceptable within certain limits, beyond which phase error becomes a problem. Although IMD has only negligible effects on phase error,
higher-order IMD products have a more severe effect on spectral re-growth.
Consider a nonlinear element with the following characteristic of the voltage amplitude V ^[4]:
If amplitude V is small and coefficient µ is finite, this characteristic is linear. At high values of V ƒ(V) is limited by margins ±A[0]. At intermediate values of V it resembles the soft limiter
characteristic. By that measure, “softness” is determined by the dispersion σ^2. Parameter σ is the root-mean-square (rms) deviation of a signal. The coefficient μ defines the linear part of Eq. 1.,
for which the output signal phase completely resembles the input signal phase. If an input signal applied to the nonlinear part of Eq. 1 is the arbitrary narrowband signal V(t) = V(t)[cos][ω[0](t) +
φ(t)] with a spectrum S[in](ω), the output signal spectrum in the vicinity of the center frequency ω[0] can be represented in the form^[4]:
The functions Θ(ω) = S(ω) Ⓧ S(-ω), Θ[n] (ω) = Θ[n-1] (ω) Ⓧ Θ(ω) are convolutions of the input signal spectrum components and the function Θ[0](ω) = δ(ω) is the delta-function:
n! = 1*2*3* …*n ; n!! = 1*3*5*7* …*n.
Equation 2 shows that the output spectrum of an input signal with S[in](ω) passed through the nonlinear element of Eq. 1 comprises the linearly scaled input signal (first term in Eq. 2) and distorted
term of the spectrum, denoted ΔS(ω). The term ΔS(ω) represents the multiple self-convolution of the input signal. With the dispersions of the input and output signals defined accordingly, the output signal can also be expressed as^[5]:
The functions K(x) and D(x) are the full elliptic integrals^[5], and they can be represented in series form:
Dispersion associated with the undistorted first term in Eq. 2 is given by^[4]:
The difference between (5) and (3) is then:
So, the output power (dispersion) associated with the distorted part of a signal in Eq. 2 can be simply found out by Eq. 6. It comprises both so-called in-channel and out-of-channel distorted
components in the vicinity of ω[0] (harmonics are not taken into account). The maximum level of the distorted power, η[1] = 1 - π/4, is reached when μ = 0 (the hard limiter), since lim x → 1 [K(x) - D(x
)] = 1. Thus, 21.5% of the total output power of a signal passed through a hard limiter is spread over the limits of the useful signal, decreasing the in-channel signal-to-noise ratio and elevating
spectrum side lobes. This deteriorates the so-called spectral purity for GSM systems or the adjacent-channel power ratio (ACPR) for other digital communication standards. Extending the linear part of
Eq. 1 (elevating parameter), the distorted part of the power defined by Eq. 6 decreases and in limit becomes 0 at μ → ∞, so lim x → 0 [K(x) - D(x)] = π/4. To evaluate exact values of spectral
components at specified frequencies, Eq. 2 must be resolved knowing the input signal spectrum for each particular case. Note that equations 2 through 6 are valid for any kind of narrowband signals
passed through the nonlinear part of Eq. 1. For instance, the dispersion associated with the distorted part of a spectrum for a five-channel forward link CDMA signal passed through the hard limiter
(with linear phase characteristic) has been measured as 21%. By that, the output signal envelope becomes practically constant. Only 0.5% of the distorted power is assumed to reduce the in-channel
signal-to-noise ratio by 0.022 dB. In general, the CDMA-like random signals are more resistant to the in-channel noise growth imposed by nonlinearity, although, for out-of-channel spectral re-growth,
an opposite affirmation is valid^[6-8].
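The 21.5% figure can be reproduced numerically. A sketch under the usual complex-baseband model of a bandpass hard limiter (envelope forced to a constant, phase preserved), which corresponds to the μ = 0 case of Eq. 1 for a Gaussian input:

```python
import math, random

random.seed(1)
N = 200_000

S = 0.0   # accumulates w*conj(z); purely real here since phase is preserved
Pz = 0.0  # input power sum; output power sum is N (unit output envelope)
for _ in range(N):
    # complex-baseband Gaussian sample z = x + jy; limiter output w = z/|z|
    x, y = random.gauss(0, 1), random.gauss(0, 1)
    r = math.hypot(x, y)       # envelope |z|
    S += r                     # w*conj(z) = (z/|z|)*conj(z) = |z|
    Pz += r * r

# fraction of output power linearly correlated with the input:
# |E[w z*]|^2 / (E[|z|^2] * E[|w|^2]); theory gives pi/4
linear_fraction = S * S / (Pz * N)
distorted_fraction = 1.0 - linear_fraction
print(f"distorted fraction = {distorted_fraction:.4f}, theory 1 - pi/4 = {1 - math.pi/4:.4f}")
```

The Monte-Carlo estimate lands on 1 - π/4 ≈ 0.215, i.e. the 21.5% of output power spread by the hard limiter as stated above.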
Now consider a signal with a Gaussian spectrum applied to the nonlinear element in Eq. 1:
In this case^[4]:
Again, the first term in Eq. 8 is an appropriately linearly scaled input signal, and the sum represents the distorted part of a spectrum. In this case, the maximum value of the distorted component
imposed by the nonlinearity at any frequency does not exceed^[4]:
The function ζ(p) is the Riemann zeta-function^[4]. Equation 9 shows that, even for an ideal hard limiter, signals with an ideal Gaussian spectrum satisfy η2 < 1.29 ≈ 1.1 dB for any spectral component considered, so out-of-channel spectral re-growth is not the main concern for these signals passed through the ideal nonlinearity in Eq. 1. The measure of the in-channel noise growth influence on whole GSM system performance is the bit error rate (BER) and frame error rate (FER)^[3]. However, at the transmitter level this noise shows up as phase error. Considering the
constellation diagram in Figure 1, the maximum phase error is seen to be:
The term a is the undistorted component and b is the additional noise component. By this approach the maximum phase error can reach α[max] = 2sin ^-1(0.29/2) = 16.7°. However, simply choosing μ >
0.815 for the linear part of Eq. 1 avoids compliance violations with the maximum average phase error of 5° and the peak phase error of 20° specified for GSM. So, when μ = 0.815, it defines a margin
for the nonlinear characteristic of Eq. 1, below which phase error may cause trouble. For a PA near the saturation region, this condition results in back-off operation from the maximal achievable
power with some efficiency reduction. To hold the phase error to 5°, the following relation must be satisfied: b/a = -10.6 dB (a is the total channel power within ±100 kHz and b is the total additional noise within the same limits, for the worst case of the phase correlation). While the worst case of the noise components' phase correlation will rarely be achieved, predicting this
correlation in a particular case is difficult. Note that the spectral re-growth defined by Eq. 9 for each frequency component and the phase error determined in Eq. 10 are independent on the input
signal dispersion σ^2.
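The quoted numbers can be checked directly with the relation α = 2·sin⁻¹(x/2) stated above. The second figure assumes the b/a = -10.6 dB ratio is expressed as 10·log10 of the noise-to-signal amplitude ratio (an assumption about the convention used, since the underlying equation is given only by its numerical values):

```python
import math

# Worst-case phase error when a noise vector of relative amplitude x
# is anti-phased: alpha_max = 2*asin(x/2), evaluated at x = 0.29
alpha_max = 2 * math.degrees(math.asin(0.29 / 2))
print(f"max phase error for x = 0.29: {alpha_max:.1f} deg")  # matches the 16.7 deg quoted

# Reading -10.6 dB as 10*log10 of an amplitude ratio reproduces the 5 deg target
ratio = 10 ** (-10.6 / 10)
print(f"phase error at -10.6 dB: {math.degrees(math.asin(ratio)):.1f} deg")
```

Both outputs agree with the article's figures, which makes the stated margins internally consistent under that reading.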
Statistical consideration of distortion
So far, the ideal limiter characteristic described in Eq. 1 with a linear phase characteristic and the “pure” Gaussian spectrum signal defined by Eq. 7 have been considered. A real PA transfer
characteristic should take into account amplitude modulation (AM) and phase modulation (PM), considering not only AM-to-AM conversion but also AM-to-PM, PM-to-PM, PM-to-AM, memory effects, frequency
dependence of matching and biasing networks and non-symmetry of IMD products^[9]. The effects noted are especially important at high power levels with high efficiency where the gain compression and
power saturation become perceptible and the phase characteristic of the PA varies sharply. These effects result in deterioration of an out-of-channel spectrum purity and in-channel noise level
growth, even for the constant-envelope Gaussian signals.
As an example, Figure 2 represents the spectrum measured for a 4 W GSM MMIC PA (f = 880 MHz, filter rolloff α = 0.3, full data rate 270.833 kbps, P[SAT] = 36.5 dBm and spectral components were
measured at 30 kHz resolution bandwidth, pulsed-modulation disabled). In the main spectral lobe, the difference between the input and output spectral components does not exceed 1.1 dB. However,
starting from 350 kHz, this difference becomes obvious (in this particular case, the maximum difference is 8 dB at 450 kHz).
When representing the linearity of a PA for Gaussian-like signal applications, manufacturers usually include different orders of two-tone IMD products at some frequency spacing and harmonics level in
the datasheet. However, the system performance requires other measures of quality, such as the out-of-channel spectral purity and phase error for GSM standard, and this raises several questions. For
example, what is the relation between these different concepts and how to evaluate them? How can a circuit be tuned to satisfy all requirements? What part of the transmitter determines spectral
re-growth at particular frequencies?
Consider a statistical signal with the power spectral density PSD(ƒ). If the envelope and average power P[0] of a signal are constants, one can represent PSD(ƒ) as the probability that the spectral
component with power P[0] appears at frequency ƒ:
In this case, the average power is:
So, g(ƒ) is the density of a power probability distribution function (a distribution over frequency). The power spectrum is continuous owing to non-periodic driving of the PA. The probability of
an appearance of the spectral component with power P[0] at the frequency limits dƒ[1] is g(ƒ[1])dƒ[1] and, at limits dƒ[2], it is equal to g(ƒ[2])dƒ[2]. These simultaneous power components passing
through a nonlinear PA create different order instantaneous IMD products with the probability of g(ƒ[1])g(ƒ[2])dƒ[1]dƒ[2]. The frequency spacing (ƒ[1] - ƒ[2]) defines the frequency offsets ƒ[0] at
which different order IMD products will appear. The n^th-order IMD product is defined by driving the PA input with two signals, each with carrier level equal to P[0]. For a fixed offset ƒ[0] and a fixed order of IMD, the resulting effect is the sum of the input spectral component scaled by the gain value and the IMD contributions defined at different spacings, expressed as an integral:
The term G(ƒ[0]) is the gain of the PA at a certain input signal level. The term n is the order of IMD products (for the frequencies of interest n = 2m - 1 and m is an integer >2). Eq. 13 represents
the same physical entity as Eq. 2, but with only one convolution procedure. Repeating the procedure defined by Eq. 13 many times makes the results of Eqs. 13 and 2 entirely equal. The function F[n](ƒ[0]) plays the role of a weighting coefficient at a particular frequency offset ƒ[0] for each order of IMD product specified at the P[0] input drive level (analogous to results in^[6-8]).
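The spreading mechanism behind this weighting can be sanity-checked numerically: convolving Gaussian densities yields another Gaussian whose rms width grows as the square root of the number of factors, which is why each higher IMD order spreads over a wider, predictable band. A sketch (it illustrates the spreading only, not the exact weighting coefficients):

```python
import math

def gauss(f, sigma=1.0):
    # Gaussian power-probability density with rms width sigma (cf. Eq. 11)
    return math.exp(-f * f / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))

df = 0.02
grid = [i * df for i in range(-400, 401)]   # covers +/- 8 sigma
g = [gauss(f) for f in grid]

def convolve(a, b):
    """Discrete approximation of the convolution (a * b)(f) on the same grid."""
    n, half = len(a), len(a) // 2
    out = [0.0] * n
    for i in range(n):
        s = 0.0
        for k in range(n):
            j = i - k + half
            if 0 <= j < n:
                s += a[k] * b[j]
        out[i] = s * df
    return out

def rms_width(p):
    # zero-mean rms width of a density sampled on the grid
    return math.sqrt(sum(f * f * v for f, v in zip(grid, p)) * df)

g2 = convolve(g, g)    # one self-convolution: width sqrt(2)*sigma
g3 = convolve(g2, g)   # two self-convolutions: width sqrt(3)*sigma
print(rms_width(g2), math.sqrt(2))
print(rms_width(g3), math.sqrt(3))
```

The measured widths match the sqrt(n) rule, so each extra convolution in the Θ[n] chain widens the distorted-spectrum contribution by a fixed factor.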
Consider the ideal Gaussian spectrum signal defined in Eq. 7 (solid line in Figure 3). The frequency offset ƒ[0] is normalized to the rms deviation σ of an input signal. The value σ can be defined
knowing α = B[b]T of a Gaussian filter used in a system (B[b] is the filter bandwidth and T is the period of the baseband signal). The Gaussian filter complex transfer characteristic is represented
in the form:
Here, ƒ[0]=ω[0]/2π is the center frequency of the filter; Δ is the filter-equivalent bandwidth and τ[d] is time delay of the filter. Assuming ω >> Δ, ω[0] >> Δ and, for simplicity τ[d]=0, the
following is derived:
At ω - ω[0] = Δ: K(ω) = e^(-π/2) = 0.208045
Normalizing this point to Gaussian distribution (0.399043*0.208045 = 0.083019), one can get from the table^[5]:
In this case, the result of the appearance of several weighting coefficients F[n](ƒ[0]) in Eq. 13 is shown in Figure 5; F[n](ƒ[0]) is equal to the square restricted by the appropriate curve and the
x-axis. The term Δƒ is the frequency spacing at which IMD products are defined. Note that the F[n](Δƒ) distribution at a fixed ƒ[0] is a Gaussian one with clearly observed maximums and different
dispersions, while also noting the power distribution is not Gaussian. The resulting weighting functions for three IMD product orders n are presented in Figure 3. The frequency spacing at which the
IMD contribution is maximum is presented in Figure 4 by solid lines for the same IMD. Accordingly, IMD products must be measured (with equal two-tone drive levels) at 4 dB back-off from the real
output-power level P[0], recalling that the main lobe spectrum shape is almost a Gaussian one (Eq. 8), and the maximum spectral density is 4 dB less than the total power (Eq. 9). In general, the
simple rule to calculate the slope of the frequency spacing Δƒ vs. the frequency offset ƒ[0] is:
The frequency margins at which IMD products must be defined can be represented in the form. These functions are presented in Figure 4 by dashed lines. The power restricted by these margins comprises
95.5% of the total power determined by the second term in Eq. 13 if the variations of IMD through Δƒ are absent. The simple rule to define σ[n] is:
It is important to note that the two-tone frequency spacing for the IMD contribution to the total distortion does not depend on the frequency offset for the fixed order of IMD (in opposite, for flat
spectrum shape signals^[6, 7], the frequency spacing decays with an increase in offset). Results in Figure 3 and Figure 4 have been calculated by use of a single convolution procedure defined in Eq.
13. However, this procedure precisely accounts for the main contribution of IMD toward spectral re-growth.
In Figure.3, the maximum values of IMD relate to the carrier power maximum spectral density by the rule of Eq. 18. The shape of IMD curves vs. ƒ[0] is also a Gaussian one with different dispersions
calculated as:
For a properly designed PA, IMD products do not usually vary within frequency spacing margins. On the contrary, if these variations are observed, this is a sign of improperly matched circuits at
baseband frequencies, most likely the biasing circuits. It is known that two-tone IMD products may elevate sharply for very small frequency spacing^[9]. Considering Figure 3 and Figure 4 (and
analogous results in^[6, 7]) confirms that the Gaussian average power spectrum signals are more robust to the out-of-channel spectral re-growth than flat average-power spectrum signals.
Figure 3 presents the spectral mask at frequencies specified for GSM (DCS, PCS) mobile and small base station PAs. The difference between the IMD curves and the spectral mask in Figure 3 shows the
margin between IMD products (in decibel values) at specified output power levels and the level beyond which the system spec becomes invalid (if the influence of only one specified order of IMD is
considered). These margins for GSM (DCS, PCS) mobile and small base station PAs are presented in Figure 6. In Figure 3, it can be seen that the spectral mask resembles the properly weighted IM[3]
characteristic. Usually, this is likely for Class A solid-state PAs operating very close to saturation. For other classes of operation, the shape of the side-lobe spectrum may differ from IM[3]
presented in Figure 3 due to an increased influence of higher-order IMD products. Spectral re-growth problems can be considered through simulation or measurement of the different orders of IMD
products in accordance with Figure 3 for frequency offsets specified in Figure 4 and Equations 18 and 19. Then, the entire circuit can be tuned to eliminate the source of distortion.
Figure 6 shows that, for GSM (DCS, PCS) mobile and small base station PAs, the frequency offsets contributing the most to the deterioration of a spectrum are placed around 400 kHz, which is a
well-known “weak” point confirmed by the real PA testing presented in Figure 2. In this case, the frequency spacing margins Δƒ at which IMD products should be considered are presented in Figure 7.
Therefore, when designing GSM PAs, special care should be taken to match networks at baseband frequencies specified by Figure 7. At these frequencies, the IM[5] and higher-order IMD products play a
decisive role in spectral re-growth. In most practical cases, the margins shown in Figure 6 rarely can be achieved for classes A, AB and B PAs, even very close to saturation. However, for high
efficiency switching-type PAs, such as modes E, F and other rectangular-shaped voltage or current RF output signals^[10], it is problematic to achieve the maximum efficiency for a GSM modulated
signal due to an increased level of intermodulation products (except for the nonlinear transmitter approach used in^[11] or other linearization techniques^[9]), because the spectrum mask requirements
will be violated.
Figure 3 shows that up to the saturation region, the additional IMD-imposed noise components are far from the relation of b/a = -10.6 dB at ±100 kHz channel bandwidth, as indicated previously, and
that IMD products have a small influence on phase error in GSM systems. The main source of a phase error is the phase-transient response, not the steady state AM-PM conversion, and the amplitude
transient response of the PA, which is statistically the amount of time an output signal exists outside the input signal trajectory when passing from one phase state to another. During the operation
of a real GSM transmitter, the average phase error contribution imposed by the PA's nonlinear characteristic rarely exceeds 2°. The peak phase error usually is imposed by pulse shaping during the
burst operation. However, discussion of this topic is beyond the scope of this article.
Figure 3 also presents the channel offset for several different existing and old digital communications standards with Gaussian-like signals (DECT, CDPD, CT2 and Mobitex) for nominal modulation data
rates and standard filters. The paging system (Mobitex) has the lowest sensitivity to IMD products, and its PA can be driven closer to saturation.
Finally, to properly account for noise in the PA, the noise power spectral density term must be added to Eq. 13. To evaluate in-channel noise growth, the previous approach can be extended up to the
center of the frequency spectrum^[8].
Burst-mode operation of a PA results in additional spectral components within the spectrum. Though not discussed here, these components would depend on the burst repetition rate and the particular
pulse-shaping characteristics of the PA.
Transmitter intermodulation
One of the important parameters of a GSM transmitter is the transmitter intermodulation (IM) characteristic when a low-level RF signal from an antenna connector reaches the output of an RF power
amplifier. This small signal mixes with the large signal of the PA, resulting in unwanted signals on the transmitter output. The GSM standard places severe restrictions on the radiation of these
unwanted signals^[3]. Usually, transmitter intermodulation characteristics are checked only after the design of the whole transmitter is complete. However, the power amplifier specification often fails to include this important parameter. Except for a small contribution from passive components such as output filters, duplexers and power combiners, the main source of intermodulation distortion is the nonlinear output transconductance of the final stage of the transmitter amplifier. The challenge then becomes selecting components for this stage to avoid this difficulty.
An output transistor's transconductance can be represented as:
Driving this transconductance with a two-tone voltage signal is represented as follows:
The current through this transconductance is calculated as:
Assuming that ω[1] is the large-signal carrier frequency and ω[2] is the small-signal interference frequency, and restricting the analysis to third-order IMD (which usually gives the highest level of transmitter intermodulation), one obtains two IMD current levels at the transistor output reference plane:
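The expressions referenced above did not survive into this text. A standard power-series formulation consistent with the surrounding discussion is sketched below; the symbols and normalization in the original article may differ:

```latex
% Transconductance nonlinearity (truncated power series):
i(v) = G_1 v + G_2 v^2 + G_3 v^3
% Two-tone drive (large carrier at \omega_1, small interferer at \omega_2):
v(t) = V_1 \cos\omega_1 t + V_2 \cos\omega_2 t
% Expanding i(v(t)) and keeping only the third-order terms gives the
% two IMD currents closest to the carrier:
I_{2\omega_1-\omega_2} = \tfrac{3}{4}\, G_3\, V_1^2 V_2, \qquad
I_{2\omega_2-\omega_1} = \tfrac{3}{4}\, G_3\, V_1 V_2^2
```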
The passive linear part of the transmitter between the final stage of the PA and the antenna connector cannot change the nonlinearity, so it is convenient to carry out the analysis at the output of the PA, where 50 Ω matching has usually been established. Applying the GSM standard transmitter intermodulation margin^[3] defined in Eq. 23, one can define the nonlinearity requirements for the PA's final transistor
stage for different power levels and frequency offsets. The calculated results of these nonlinearity requirements for mobile transmitters and base station transmitters are presented in Figure 8 and
Figure 9. The IMD caused by frequency offsets between 1.2 MHz and 6 MHz should be verified first.
For DCS mobiles, the minimum nonlinearity requirement is G[3] = 0.009335 V^-2 for the maximum rated power of 33 dBm, and transmitter IM is not a concern when designing the PA. Therefore, different
kinds of transistors can be used in that situation. For GSM mobiles and mid- and low-power base stations, requirements for nonlinearity are restrictive. However, up to 43 dBm rated power, regular
MESFET transistors with flat epi-doping profiles can still be used^[12]. For higher power levels, only increased linearity transistors like doping-improved MESFET, HEMT or LDMOS should be used to
pass an intermodulation spec. At power levels higher than 49 dBm, the best choice seems to be LDMOS transistors.
Certainly, transmitter output loss and filtering characteristics are not taken into account in this analysis. However, Figures 8 and 9 give clear insights into the design of the transmitter
output chain when the nonlinear characteristics of the transistor chosen for the PA output stage are known.
1. F. Amoroso, R.A. Monzingo. “Digital Data Signal Spectral Side Lobe Regrowth in Soft Saturating Amplifiers.” Microwave Journal, February 1998, No. 2, p. 27-32.
2. J. Duclercq. “GSM Base Station Power Amplifier Module Linearity.” Microwave Journal, April 1999, No. 4, p.116-127.
3. Digital Cellular Telecommunication Systems (Phase 2+), Radio Transmission and Reception (GSM05.05 version 8.5.0 release 1999), Draft ETSI EN300910.
4. Y.A. Evsikov, V.V. Chapursky. “Random Process Transformation in Radio-technical Systems,” Moscow, Vyschaya Shkola, 1977.
5. G.A. Korn, T.M. Korn. “Mathematical Handbook for Scientists and Engineers,” McGraw-Hill Book Company, 1968.
6. O. Gorbachov. “IMD Products and Spectral Regrowth in CDMA Power Amplifiers.” Microwave Journal, March 2000, No. 3, p. 96-108.
7. O. Gorbachov, Y. Cheng, J.S.W. Chen. “Noise and ACPR Correlation in CDMA Power Amplifiers.” RF Design, May 2001, p. 48-56.
8. O. Gorbachov, J.S.W. Chen. “Evaluate Noise in GSM PAs” Microwaves & RF, February 2001, p. 69-74.
9. A. Katz. “SSPA Linearization.” Microwave Journal, April 1999, No. 4, p. 22-44.
10. F.J. Ortega-Gonzalez, J.L. Jimenez-Martin, A. Asensio-Lopez, G. Torregrosa-Penalva. “High-Efficiency Load-Pull Harmonic Controlled Class-E Power Amplifier.” IEEE Microwave and Guided Wave
Letters, Vol. 8, No. 10, October 1998, p. 348-350.
11. M. Heimbach. “Digital Multimode Technology Redefines the Nature of RF Transmission.” Applied Microwave & Wireless, August 2001.
12. J.A. Higgins, R.L. Kuvas. “Analysis and Improvement of Intermodulation Distortion in GaAs Power FET.” IEEE Trans. on MTT, vol. MTT-28, No. 1, January 1980, p. 9-17.
Oleksandr Gorbachov received his M.S. and Ph.D. degrees in electrical engineering in 1983 and 1990, respectively, from Kiev Polytechnic Institute in Kiev, Ukraine. Previous experience includes 25
years of different R&D and application engineering positions in RF and millimeter-wave technology. Currently, he is the RF application manager at STMicroelectronics in Taipei, Taiwan. His present
interests include RF semiconductor ICs, components, and modules for wireless communications.
Protein complex identification by supervised graph local clustering
Bioinformatics. Jul 1, 2008; 24(13): i250–i268.
Motivation: Protein complexes integrate multiple gene products to coordinate many biological functions. Given a graph representing pairwise protein interaction data, one can search for subgraphs
representing protein complexes. Previous methods for performing such search relied on the assumption that complexes form a clique in that graph. While this assumption is true for some complexes, it
does not hold for many others. New algorithms are required in order to recover complexes with other types of topological structure.
Results: We present an algorithm for inferring protein complexes from weighted interaction graphs. By using graph topological patterns and biological properties as features, we model each complex
subgraph by a probabilistic Bayesian network (BN). We use a training set of known complexes to learn the parameters of this BN model. The log-likelihood ratio derived from the BN is then used to
score subgraphs in the protein interaction graph and identify new complexes. We applied our method to protein interaction data in yeast. As we show, our algorithm achieves a considerable improvement over clique-based algorithms in terms of its ability to recover known complexes. We discuss some of the new complexes predicted by our algorithm and determine that they likely represent true complexes.
Availability: Matlab implementation is available on the supporting website: www.cs.cmu.edu/~qyj/SuperComplex
Contact: zivbj/at/cs.cmu.edu
Protein–protein interactions (PPI) are fundamental to the biological processes within a cell. Correctly identifying the interaction network among proteins in an organism is useful for deciphering the
molecular mechanisms underlying given biological functions. Beyond individual interactions, protein interaction graphs contain much more systematic information. Complex formation is one of the typical patterns in such graphs, and many cellular functions are performed by complexes containing multiple interacting proteins. As the number of species for which global high-throughput protein interaction data is measured becomes larger (Ito et al., 2001; Rual et al., 2003; Stelzl et al., 2005; Uetz et al., 2000), methods for accurately identifying complexes from such
data become a bottleneck for further analysis of the resulting interaction graphs.
High-throughput experimental approaches aiming to specifically determine the components of protein complexes on a proteome-wide scale suffer from high false positive and false negative rates (von
Mering et al., 2002). In particular, mass spectrometry methods (Gavin et al., 2002; Ho et al., 2002) may miss complexes that are not present under the given conditions; tagging may disturb complex
formation and weakly associated components may dissociate and escape detections. Therefore, accurately identifying protein complexes remains a challenge.
The logical connections between proteins in complexes can be best represented as a graph where the nodes correspond to proteins and the edges correspond to the interactions. Extracting the set of
protein complexes from these graphs can help obtain insights into both the topological properties and functional organization of protein networks in cells. Previous attempts at automatic complex
identification have mainly involved the use of binary protein–protein interaction graphs. Most methods utilized unsupervised graph clustering for this task, trying to discover densely connected subgraphs.
Automatic complex identification approaches can be divided into five categories: (1) Graph segmentation. To identify complexes, King et al. (2004) partitioned the nodes of a given graph into distinct
clusters using a cost-based local search algorithm. Zotenko et al. (2006) proposed a graph-theoretical approach to identify functional groups and provided a representation of overlaps between
functional groups in the form of the ‘Tree of Complexes’. (2) Overlapping clustering. Since some proteins participate in multiple complexes or functional modules, a number of approaches allow
overlapping clusters. Bader et al. (2003b) detected densely connected regions in large PPI networks using vertex weights representing local neighborhood density. Pereira-Leal et al. (2004) used the
line graph strategy of the network (where a node represents an interaction between two proteins and edges share interactors between interactions) to produce an overlapping graph partitioning of the
original PPI network. Adamcsek et al. (2006) identified overlapping densely interconnected groups in a given undirected graph using the k-clique percolation clusters in the network. Spirin et al. (
2003) discovered molecular modules that are densely connected with themselves but sparsely connected with the rest of the network by analyzing the multibody structure of the PPI network. (3) New
similarity measures. Rives et al. (2003) applied standard clustering algorithms to group similar nodes on the interaction graph. The cluster similarity is calculated on vectors of nodes’ attributes,
such as their shortest path distances to other nodes. (4) Conservation across species. Sharan et al. (2005) used conservation alignment to find protein complexes that are common to yeast and
bacteria. They formulated a log-likelihood ratio model to represent individual edges between proteins and assumed a clique structure for a protein complex. (5) Spatial constraints analysis. By
utilizing the spatial aspects of complex formation, Scholtens et al. (2005) applied a local modeling method to better estimate the protein complex membership from direct mass spectrometry complex
data and Y2H binary interaction data. Chu et al. (2006) proposed an infinite latent feature model to identify protein complexes and their constituents from large-scale direct mass spectrometry sets.
The methods presented above are based on the assumption that complexes form a clique in the interaction graph. While this is true for many complexes, there are many other topological structures that
may represent a complex on a PPI graph. One example is a ‘star’ or ‘spoke’ model, in which all vertices connect to a ‘Bait’ protein (Bader et al., 2003a). Another possible topology is a structure
that links several small densely connected components with loose linked edges. This topology is especially attractive for large complexes: due to spatial limitations, it is unlikely that all proteins
in a large complex can interact with all others. See Figure 1 for some examples of real complexes with different topologies.
Projection of selected yeast MIPS complexes on our PPI graph (weight thresholded). (a) Example of a clique. All nodes are connected by edges. (b) Example of a star-shape, also referred to as the
spoke model. (c) Example of a linear shape. (d). Example ...
While some previous work was carried out to identify such structures in PPI networks [most notably by looking for network motifs (Yeger-Lotem et al., 2004)], these structures were not exploited for
complex discovery. In this article we present a computational framework that can identify complexes without making strong assumptions about their topology. Instead of the ‘cliqueness’ assumption, we
derive several properties from known complexes, and use these properties to search for new complexes. Since our method relies on real complexes, it does not assume any prior model for complexes. Our
algorithm is probabilistic. Following training to determine the importance of different properties, it can assign a score to any subgraph in the graph. By thresholding this likelihood ratio score we
can label some of the subgraphs as complexes. Our model results in a significantly improved F1-score when compared to the density-based approaches. Using a cross validation analysis we show that the
graphs discovered by our method highly coincide with complexes from the hand-curated MIPS database and a recent high confidence mass spectrometry dataset (Gavin et al., 2006). The top-ranked new
complexes are likely to provide novel hypotheses for the mechanism of action or definition of function of proteins within the predicted complex as we discuss in Section 3.
2 METHODS
The main feature of our method is that it considers the possibility of multiple factors defining complexes in protein interaction graphs. Instead of assuming a specific topological model, we design a
general framework which learns to weigh possible subgraph patterns based on the available known complexes.
Previous analysis of known PPI graphs has already revealed multiple shapes forming subgraphs. For example, Bader et al. (2003a) proposed two topological models in the context of protein complexes.
The first is the ‘matrix model’ which assumes that each of the members in the complex physically interact with all other members (leading to a clique-like structure). The second shape is the ‘spoke
model’ that assumes that all proteins in a complex directly interact with one ‘bait’ protein, leading to a star shape. Hybrids of these or other models are also possible, resulting in more complex topologies.
Besides graph structures, there could be other features that characterize complexes. In particular, complexes have certain biological, chemical or physical properties that distinguish them from
non-complexes. For example, the physical size of a complex may be an important feature. There is a physical limitation of creating large complexes because inner proteins become inaccessible and
therefore more difficult to regulate. By incorporating such additional features into our supervised learning framework, the proposed model is able to integrate multiple evidence sources to identify
new complexes in the PPI graph.
The input to our algorithm is a weighted graph of interacting proteins. The network is modeled as a graph, where vertexes represent proteins and edges represent interactions. Edge weights represent
the likelihoods for the interactions. Since the current data does not provide any directionality information, the PPI graph considered in this article is a weighted undirected graph. Our objective is
to recover the protein complexes from this undirected PPI graph. Computationally speaking, complexes are a special kind of subgraph in the PPI network. A subgraph represents a subset of nodes with a specific set of edges connecting them. The number of distinct subgraphs, or clusters, grows exponentially with the number of nodes.
2.1 Complex features
Extracting appropriate features for subgraphs representing complexes is related to the problem of measuring the similarity between complex subgraphs. This task has been studied for other networks,
specifically social networks (Chakrabarti et al., 2005; Robins et al., 2005; Virtanen, 2003). In general, these previous approaches either (1) utilize properties of nodes or edges, such as indegree, outdegree and cliqueness (Borgwardt et al., 2007), or (2) rely on comparing non-trivial substructures such as triangles or rectangles (Przulj et al., 2007; Yan et al., 2002). We use both types to arrive
at a list of properties for a feature vector that describes a subgraph in the PPI network. The properties include topological measurements about the subgraph structures and biological properties of
the group of proteins in the subgraph.
Table 1 presents the set of features we use. We rely in part on prior work (Bader et al., 2003b; Barabasi et al., 2004; Chakrabarti et al., 2005; Stelzl et al., 2005; Zhu et al., 2005) to determine
which features may be useful for this complex identification task. Each row in Table 1 represents one group of features. In total, 33 features were extracted from the 10 groups.
Features for representing protein complex properties
Below we briefly discuss each of the feature types used. The numbers match the numbers in Table 1.
1. Given a complex subgraph G=(V,E), with |V| vertexes and |E| edges, the first property we considered is the number of nodes in the subgraph: |V|.
2. The density is defined as |E| divided by the theoretical maximum number of possible edges |E|[max]. Since we do not consider self interactions in the input weighted PPI graph, |E|[max]=|V|*(|V|
−1)/2. As mentioned above, in the ‘matrix’ model the graph density is expected to be very high, whereas it may be lower for the ‘spoke’ shape.
3. Degree statistics are calculated from the degrees of the nodes in the candidate subgraph. The degree of a node is defined as its number of interaction partners. This group includes the mean, variance, median and maximum degree.
4. The edge weight feature includes mean and variance of edge weights considering two different cases (with and without missing edges).
5. Density under weight cutoffs evaluates the possibility of topological changes under different weight cutoffs.
6. The degree correlation property measures the neighborhood connectivity of nodes within the subgraph. For each node, it is defined as the average number of links of the node's nearest neighbors. We use the mean, variance and maximum of this property in the feature set.
7. Clustering coefficient (CC) measures the number of triangles that go through nodes. To compute this feature we calculate the number of neighbors (q) and the number of links (t) connecting the q
neighboring nodes. We set CC=2t/q(q−1). This feature will have a small value for ‘star’ or ‘linear’ shapes while ‘matrix’ or ‘hybrid’ shapes receive a higher value.
8. The topological coefficient (TC) is a relative measure of the extent to which a protein shares interaction partners with other proteins. It reflects the number of rectangles that pass through a
node. See supporting website for details.
9. The first three largest singular values (SV) of the candidate subgraph's adjacency matrix. Different shapes have distinct value distributions for these three SV. For instance when comparing
subgraphs with the same size, the ‘matrix’ shape has higher value for the first SV than other shapes and the ‘star’ shape has a lower value of the third SV. See supporting website for details.
As for biological properties (No. 10), we use average and maximum protein length and average and maximum protein weight of each subgraph. This feature is based on the intuition that protein complexes
are unlikely to grow indefinitely, because proteins within the center of large complexes become inaccessible to interactions with other putative partners.
Our framework described below is general and it is straightforward to add other features if they are deemed relevant.
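As a concrete sketch of the topological portion of this feature vector, the helper below computes subgraph size, density, degree statistics and the mean clustering coefficient from an edge list. The function name and data representation are illustrative, not taken from the paper's Matlab implementation:

```python
from statistics import mean

def subgraph_features(nodes, edges):
    """Compute a few of the topological features described above
    (size, density, degree statistics, clustering coefficient)
    for a candidate subgraph given as a node list and an edge list."""
    nodes = list(nodes)
    n = len(nodes)
    adj = {v: set() for v in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    degrees = [len(adj[v]) for v in nodes]
    e_max = n * (n - 1) / 2          # no self-interactions allowed

    def cc(v):
        # fraction of a node's neighbor pairs that are themselves linked
        q = len(adj[v])
        if q < 2:
            return 0.0
        t = sum(1 for a in adj[v] for b in adj[v] if a < b and b in adj[a])
        return 2.0 * t / (q * (q - 1))

    return {
        "num_nodes": n,
        "density": len(edges) / e_max,
        "degree_mean": mean(degrees),
        "degree_max": max(degrees),
        "cc_mean": mean(cc(v) for v in nodes),
    }
```

For a triangle this yields density 1.0 and mean clustering coefficient 1.0, while a four-node star ('spoke' shape) yields density 0.5 and clustering coefficient 0, illustrating how these features distinguish the topologies discussed above.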
2.2 A supervised Bayesian network (BN) to model complexes
We assume a generative probabilistic model for complexes. Figure 2 presents an overview framework of our model. Our method uses a BN model. Features are generated, independently, based on two
parameters, (1) whether the subgraph is a complex or not (C) and (2) the number of nodes in the subgraph (N). The main reason we pay special attention to N and do not model it as another complex
property is because of the tendency of other properties to depend on N. For example, the larger the complex the more unlikely it is that all members will interact with each other (due to spatial
constraints). Thus, the density property is directly related to the size. Similarly other properties such as ‘mean of edge weight’ and ‘average clustering coefficient’ also depend on N. While it
would have been useful to assume more dependency among other features, the more dependencies our model has the more data we need in order to estimate its parameters. We believe that the current model
strikes a good balance between the need to encode feature dependencies and the available training data. Thus, other feature descriptors, X[1]…X[m] are assumed to be independent given the size (N) and
the label (C) of the subgraph.
A Bayesian probabilistic model for scoring a subgraph in our framework. The root node ‘Label’ is the binary indicator for complexes (1 if this subgraph is a complex, 0 otherwise). The second level
node ‘nodeSize’ represents ...
For a subgraph in our PPI network we can compute the conditional probability that it represents a complex using the following Equation (4).
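The equation itself is not reproduced in this text; from the step-by-step description that follows, the derivation has the form below (notation reconstructed, so details may differ from the original):

```latex
\begin{aligned}
& P(C_i{=}1 \mid X_1,\ldots,X_m, N) \\
&\quad = \frac{P(X_1,\ldots,X_m, N \mid C_i{=}1)\, P(C_i{=}1)}{P(X_1,\ldots,X_m, N)} \\
&\quad = \frac{P(X_1,\ldots,X_m \mid N, C_i{=}1)\, P(N \mid C_i{=}1)\, P(C_i{=}1)}{P(X_1,\ldots,X_m, N)} \\
&\quad = \frac{\prod_{k=1}^{m} P(X_k \mid N, C_i{=}1)\, P(N \mid C_i{=}1)\, P(C_i{=}1)}{P(X_1,\ldots,X_m, N)}
\end{aligned}
```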
The second row uses Bayes rule. The third row utilizes the chain rule. The fourth equation uses the conditional independence encoded in our graphical model to decompose the probability to products of
different features. Similarly, we can compute a posterior probability for a non-complex by replacing 1 with 0 in the above equation.
Using these two posteriors we can compute a log likelihood ratio score for each candidate subgraph:
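A reconstruction of this ratio consistent with the terms listed in the next sentence (the original notation may differ):

```latex
L \;=\; \log \frac{P(C_i{=}1 \mid X_1,\ldots,X_m, N)}
                  {P(C_i{=}0 \mid X_1,\ldots,X_m, N)}
  \;=\; \log \frac{P(C_i{=}1)\, P(N \mid C_i{=}1)\, \prod_{k=1}^{m} P(X_k \mid N, C_i{=}1)}
                  {P(C_i{=}0)\, P(N \mid C_i{=}0)\, \prod_{k=1}^{m} P(X_k \mid N, C_i{=}0)}
```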
Applying Bayes’ rule and canceling common terms in the numerator and denominator, the only terms we need to compute for the likelihood ratio L are the prior probability P(C[i]) and the conditional
probabilities P(N|C) and P(X[k]|N,C[i]).
Maximum likelihood estimation is used to learn these conditional distributions from the training data. We first discretized the continuous features and then used the multinomial distribution to model
their probabilities. We uniformly discretized each feature into 10 equal width bins in the experiments presented in Section 3. Due to the small sample size of the training data, we apply a Bayesian
Beta Prior to smooth the multinomial parameters in extreme cases (Manning et al., 1999). As for the prior p(C=1) of complexes, we assign a default value of 0.0001 which leads to good performance in
cross validation experiments.
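A minimal sketch of this estimation and scoring step: uniform 10-bin discretization of each feature, pseudocount smoothing of the multinomial counts, and the factorized log-likelihood ratio. The function names, the smoothing constant and the prior value are illustrative choices, not the paper's exact settings:

```python
import numpy as np

def fit_feature_cpd(values, bin_edges, alpha=1.0):
    """Smoothed multinomial estimate of P(X | class) for one feature:
    histogram the training values into fixed bins, then add a
    pseudocount so bins unseen in the small training set keep
    non-zero probability."""
    counts = np.histogram(values, bins=bin_edges)[0].astype(float) + alpha
    return counts / counts.sum()

def log_likelihood_ratio(x, cpds_pos, cpds_neg, bin_edges, prior_pos=1e-4):
    """Score one subgraph's feature vector x as
    log P(complex | x) - log P(non-complex | x), assuming the features
    are independent given the class (size handled as one more feature here)."""
    score = np.log(prior_pos) - np.log(1.0 - prior_pos)   # log prior odds
    for k, xk in enumerate(x):
        b = min(max(int(np.digitize(xk, bin_edges[k])) - 1, 0),
                len(bin_edges[k]) - 2)                     # clamp to a valid bin
        score += np.log(cpds_pos[k][b]) - np.log(cpds_neg[k][b])
    return score
```

With known complexes concentrated at high density values and non-complexes at low ones, a dense candidate subgraph then scores higher than a sparse one, as intended.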
The BN structure in Figure 2 was manually selected. We have also tried to learn the BN structure using tree augmented structure learning techniques (Witten et al., 2000). However, the resulting
performance of the learned network is not significantly better than our proposed structure (Fig. 2). Since our structure is simpler we omit the related results here. However potential improvements
may be possible with more training examples and better BN structure learning approaches.
2.3 Searching for new complexes
The above model can be used to evaluate candidate subgraphs. If the log-likelihood ratio exceeds a certain threshold, the subgraph is predicted to be a complex. This reduces the problem of identifying protein complexes to the problem of searching for high-scoring subgraphs in our PPI network. However, as we prove in the following lemma, this search problem is NP-hard.
LEMMA 2.1. —
Identifying the set of maximally scoring subgraphs in our PPI graph is NP-hard
PROOF. —
We prove this by reducing max-clique, an NP-hard problem (Cormen et al., 2001), to our search problem. For this reduction, we assume that we use only one property, the graph
density and that all edges in our graph have a weight of 1. Furthermore, we set the probability of a complex given a subgraph to:
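The probability assignment used in the proof did not reproduce here. Any choice of the following shape, given purely as an illustration, makes the argument go through:

```latex
P(C{=}1 \mid G) \;=\;
\begin{cases}
1 - 2^{-|V|} & \text{if } \mathrm{density}(G) = 1,\\
0            & \text{otherwise,}
\end{cases}
```

so that the log-likelihood ratio score equals \(\log(2^{|V|}-1)\) for a clique, which is positive and strictly increasing in \(|V|\), and \(-\infty\) for any non-clique.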
For this model, the only subgraphs with positive scores are the cliques in our graph. In addition, the bigger the clique the higher our score and so finding the highest scoring subgraph is equivalent
to finding the maximal clique.
The NP-hardness implies that there are no efficient (polynomial time) algorithms that can find an optimal solution for the search problem defined above. Thus, heuristic algorithms are needed. There
are many approaches for local graph search proposed in the literature, which include hill climbing, simulated annealing, heuristic based greedy search, or tabu-search heuristic (Virtanen, 2003). All
these strategies try to find local optima for certain fitness functions.
Here we choose to employ the iterated simulated annealing (ISA) search (Ideker et al., 2002; Virtanen, 2003), using the complex ratio score as the objective function (see Equation (6)). The basic
idea of ISA is: after each round of modifying the current cluster, we accept the new cluster candidate if it has a higher score L′ than the current score L; even if the score decreases, we still accept the new cluster with probability exp((L′−L)/T), where T is the temperature of the system. This allows the algorithm to escape local optima in some cases. After each round, the temperature is
decreased by a scaling factor α by setting T′=αT. The initial temperature T[0], the scaling factor α, and the number of rounds are parameters of the search process. After the algorithm terminates the
highest scoring subgraph is returned and the search continues. Ideker et al. (2002) pointed out that given a suitable parameter setting, ISA could identify the global optimum even though this setting
is generally unknown and can be impractically hard to find.
At the beginning, we connect each seeding protein to its highest weight neighbor and then use the pair as the starting cluster. Beginning from these clusters, we pursue the cluster modification
process and the simulated annealing search. A number of heuristics could be used for modifying the current cluster. The order in which we add new proteins to the cluster is based on their impact on
the cluster ratio score. We also explore the options of removing nodes from the cluster and merging two clusters. We chose to limit the number of rounds of iterative search to 20, which restricts the size of the complexes we search for to between 3 and 20. We use cross validation to choose the best values for the temperature and scaling factor parameters. To avoid revisiting the same or similar clusters, we
keep checking the overlap ratio between the current cluster to the investigated clusters so far. If the ratio is higher than a threshold, we stop searching for the current seed. See supporting
website for details about the complexity of the algorithm and values for the parameters it uses.
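The annealing loop described above can be sketched as follows; the proposal moves, parameter defaults and helper names here are illustrative rather than the paper's tuned settings:

```python
import math
import random

def simulated_annealing_search(seed_cluster, neighbors, score,
                               t0=1.0, alpha=0.9, rounds=20, rng=random):
    """One simulated-annealing pass of the iterated search: grow or
    shrink the current cluster, always accept improvements, and accept
    worse candidates with probability exp((L' - L) / T)."""
    current = set(seed_cluster)
    best, best_score = set(current), score(current)
    t = t0
    for _ in range(rounds):
        # candidate nodes adjacent to the current cluster
        frontier = {n for v in current for n in neighbors.get(v, ())} - current
        proposal = set(current)
        if frontier and (len(current) <= 3 or rng.random() < 0.7):
            proposal.add(rng.choice(sorted(frontier)))     # add a neighbor
        elif len(current) > 3:
            proposal.discard(rng.choice(sorted(current)))  # drop a member
        l_cur, l_new = score(current), score(proposal)
        if l_new >= l_cur or rng.random() < math.exp((l_new - l_cur) / t):
            current = proposal
        if score(current) > best_score:
            best, best_score = set(current), score(current)
        t *= alpha                                         # cool the system
    return best, best_score
```

The highest-scoring cluster seen during the pass is returned, so the result never scores below the seed cluster it started from.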
The complete proposed algorithm for complex identification is presented in Table 2. Our input is the weighted PPI graph and a set of known complexes and non-complexes (random collections of genes) as
training data. First, we learn model parameters for the probabilistic BN model from the training data. Next, we search for subgraphs to identify candidate complexes. The final output clusters are
those clusters found to have a ratio score larger than a predefined threshold.
Protein complex identification algorithm
2.4 Weighted undirected PPI graph
As discussed above, we assume that our model input is a weighted undirected graph representing the PPI network. The edge weight describes how likely an interaction happens between the two related
proteins based on the following rationale: While high-throughput experimental data for PPI is available, it has suffered from high false positive and false negative rates (von Mering et al., 2002).
In addition to direct experimental interaction data there are many indirect sources that may contain information about protein interactions. As has been shown in a number of recent papers (Jansen et
al., 2003), such indirect data can be combined with the direct data to improve the accuracy of protein interaction prediction. This type of analysis usually results in an interaction probability or
confidence score assigned to each protein pair. Edges in our graph are weighted using this interaction probability which is computed as follows. In previous work (Qi et al., 2006), we assembled a
large set of biological features (a total of 162 features representing 17 distinct groups of biological data sources) for the task of pairwise protein interaction prediction. Considering our current
goal of complex identification we remove the features derived from the two high throughput mass spectrometry data sets (Gavin et al., 2002; Ho et al., 2002). Training is based on the small scale
physical PPI data in the DIP database (Xenarios et al., 2002). Based on our previous evaluation, the support vector machine (SVM) classifier (Joachims et al., 2002) performs as well or better than
any of the other classifiers suggested for this physical interaction task. We have thus used the results of our SVM analysis [see details in Qi et al. (2006)] to obtain weights for edges in our
graph. Weights range from minus infinity to infinity where larger values indicate a higher likelihood to be an interacting pair. To reduce the number of edges in our graph we apply a cutoff and
remove all edges with weights below the cutoff. We have chosen a cutoff of 1.0 such that the number of remaining edges roughly corresponds to previous estimates of the number of protein interaction
pairs in yeast (von Mering et al., 2002).
To further improve the quality of the PPI graph, we filter the predicted weighted graph using a newly published yeast interaction data set from Reguly et al. (2006). For each of the remaining
interactions we keep the weight learned from our integrated data analysis. This data contains a comprehensive database of genetic and protein interactions in yeast, manually curated from over 31 793
abstracts and online publications. A total of 35 244 interactions are reported, including literature curated and high throughput interactions. To allow fair comparisons we removed those interactions
coming from the high-throughput mass spectrometry experiments in this data set.
3.1 Reference sets
The MIPS (Mewes et al., 2004) protein complex catalog is a curated set of 260 protein complexes for yeast that was compiled from the literature and is thus more accurate than large scale mass
spectrometry complex data. After filtering away those complexes composed of a single or a pair of proteins, 101 complexes in MIPS remained. The size of the complexes in MIPS is distributed as a power
law, with most of the complexes having fewer than five proteins. We use the projection of the MIPS complexes on our PPI graphs as the positive training examples. See Figure 1 for four examples of
such a projection.
As another independent positive set we used the core set of protein complexes from a newly published TAP-MS experiment (Gavin et al., 2006), one of the most comprehensive genome-wide screens for
complexes in budding yeast. Again, we removed those complexes with only two proteins leading to 152 complexes that were used as positive examples to test our method.
Since we are using a supervised learning method, we also need negative training data, which we generated by randomly selecting nodes in the graph. The size distribution of these non-complexes follows the same power-law distribution as the known complexes in MIPS. Figure 3 presents the histogram of these distributions for each of the three reference sets: 'MIPS', 'TAP06' and 'Non-complexes'. As can be seen, all roughly follow the same 'power law' distributions.
Histogram of number of proteins in each of the three reference sets: ‘MIPS’, ‘TAP06’ and ‘Non-complexes’. Note that all resemble ‘power law’ distributions. Horizontal axis is the number of ...
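A minimal sketch of this negative-example generation (illustrative only: the function name and the toy size list are ours, and the paper does not spell out the exact sampling procedure):

```python
import random

def sample_negatives(proteins, positive_sizes, n_samples, seed=0):
    """Draw random protein sets whose sizes follow the empirical size
    distribution of the known (positive) complexes."""
    rng = random.Random(seed)
    negatives = []
    for _ in range(n_samples):
        size = rng.choice(positive_sizes)      # empirical size distribution
        negatives.append(set(rng.sample(proteins, size)))
    return negatives

proteins = ["P%d" % i for i in range(100)]     # toy protein universe
negs = sample_negatives(proteins, positive_sizes=[3, 3, 4, 5], n_samples=10)
```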
Figure 4 presents the distribution of the two classes, real complexes (blue) versus negative examples (red), when projected onto the first three principal coordinates after applying SVD to the features. The distribution strongly indicates that the proposed features can separate the two sets reasonably well.
Reference examples' distribution when projected onto the first three principal components after applying SVD to the features.
3.2 Performance measures
In order to quantify the success of different methods in recovering the set of known complexes we define three descriptors for each pair of a known and predicted complex:
• A: Number of proteins only in the predicted complex
• B: Number of proteins only in the known complex
• C: Number of proteins in the overlap between the two
We say that a predicted complex recovers a known complex if
C/(B + C) ≥ p and C/(A + C) ≥ p,
where p is an input parameter between 0 and 1, which we set to 0.5. Thus we require that the majority of the proteins in the known complex be recovered and that the majority of the proteins in the predicted complex belong to that known complex.
Based on the above definition, three evaluation criteria are applied to quantify the quality of different protein complex identification methods:
• Recall (r): The number of known complexes detected by predicted complexes, divided by the total number of positive examples in the test set.
• Precision (p): The number of predicted complexes that match positive complexes, divided by the total number of predicted complexes.
• F1: The F1 score combines the precision and recall scores. It is defined as 2pr/(p+r).
All three values range from 0 to 1, with 1 being the best score. Recall quantifies the extent to which a solution set captures the labeled examples. Precision measures the accuracy of the solution set. A good protein complex detector should have both high precision and high recall. The F1 measure provides a reasonable combination of both. These three criteria are frequently used in many computational areas (Jones et al., 1981).
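The matching criterion and the three scores can be sketched as follows (a hedged illustration: the function names and toy complexes are ours, and we assume the overlap criterion C/(B+C) ≥ p and C/(A+C) ≥ p described in the previous subsection):

```python
def matches(pred, known, p=0.5):
    """Predicted complex recovers a known one if the overlap covers at
    least a fraction p of the known complex and of the prediction."""
    c = len(pred & known)
    return c >= p * len(known) and c >= p * len(pred)

def recall_precision_f1(predicted, known, p=0.5):
    r = sum(any(matches(pr, kn, p) for pr in predicted) for kn in known) / len(known)
    prec = sum(any(matches(pr, kn, p) for kn in known) for pr in predicted) / len(predicted)
    f1 = 2 * prec * r / (prec + r) if prec + r else 0.0
    return r, prec, f1

known = [{"A", "B", "C"}, {"D", "E", "F", "G"}]        # toy gold standard
predicted = [{"A", "B", "C", "X"}, {"H", "I"}]         # toy predictions
r, prec, f1 = recall_precision_f1(predicted, known)    # 0.5, 0.5, 0.5
```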
3.3 Performance comparison
To assess the performance in complex identification, we conducted experiments using MIPS as the positive training set and TAP06 as a test set, and vice versa. There are a total of 1376 proteins in the MIPS and TAP06 complexes. Thus, we applied our train-test analysis on a PPI graph containing these genes. The resulting graph contains 1376 proteins and 10 918 weighted edges.
We have compared our method, referred to as 'SCI-BN', with three other methods suggested for complex identification. (1) 'Density' uses the same search algorithm discussed in Section 2; however, unlike our method, which maximizes the BN likelihood ratio, for 'Density' we simply try to find the maximally dense subgraphs in the graph. (2) The 'MCODE' complex detection method was proposed by Bader et al. (2003b). MCODE finds clusters (highly interconnected regions) in any network loaded into Cytoscape. The method was developed for PPI networks, in which these clusters correspond to protein complexes (Bader et al., 2003b). (3) 'SCI-SVM' is used to determine whether the BN structure helps in identifying complexes. It uses the same features as our method, but instead of a BN it uses an SVM (Joachims et al., 2002).
The performance comparison is presented in Table 3. For each method, we report the precision, recall and F1 separately. As can be seen, our method dominates all other methods on all measures. The recall rate of our method is around 50%. This number is impressive considering that the training and testing were done on different datasets. Our precision is lower (between 20% and 30%). However, since many of the complexes are not included in either gold standard set, this precision value may partly reflect correct predictions that are not included in the available data. We discuss some of these complexes below. As for the other methods, surprisingly, the recall and F1 values reported by MCODE are much lower than those of both the 'Density' and 'SCI-SVM' methods. We investigated the clusters identified by 'MCODE' and determined that they were relatively large compared to the clusters determined by the other methods, which may have hurt performance. Interestingly, the performance of 'SCI-SVM' is not as good as that of 'SCI-BN'; this is largely caused by the unique way the BN can handle the 'node size' feature. The 'Density' approach performs reasonably well on recall but not as well on precision.
Performance comparison between our algorithm (‘SCI-BN’), SVM with the same set of features (‘SCI-SVM’), Clique based method using only the density feature (‘Density’) and the ‘MCODE’ methods ...
Using a threshold of 1.0 for the weights of the edges, our yeast PPI network contains 5234 proteins and 19 246 interaction edges. To identify and validate new complexes within this network graph, we
trained a new BN model on all of the MIPS manual complexes as positive examples and used 2000 randomly selected non-complex subgraphs as negative examples. Within the resulting full graph, we
predict 987 complexes using the ‘SCI-BN’ search method.
To identify new complexes within the predicted graph, we compared the predicted clusters with those reported in five reference datasets, the manually curated MIPS dataset (Mewes et al., 2004) and
four large-scale complex datasets obtained using high-throughput experimental approaches (Gavin et al., 2002, 2006; Ho et al., 2002; Krogan et al., 2006). After filtering those clusters matching
reference complexes, we are left with 570 novel predictions. These are either entirely new complexes or extensions to known complexes by adding new proteins.
Amongst the new complexes, the most highly ranked were of size 3–4. The size distribution agrees with the distribution of known complexes. While many of these top-scoring complexes took the shape of cliques, others displayed more diverse shapes. Examples are shown in Figure 5. Black edges in Figure 5 represent interactions with an SVM score higher than 4.0 (indicating strong evidence for interaction between the proteins).
Projection of predicted complexes on our weighted PPI graph. The edge weights are thresholded and color coded. See color legend (top right corner bar) for edge weights. Descriptions for each
predicted complex are provided in the ‘Validation’ ...
The clique complex shown in Figure 5a represents a protein complex involved in translation. CDC33, also known as eIF4E, is a translation initiation factor. PAB1 is a poly(A)-binding protein. TIF4632 is the 130-kD subunit of the translation initiation factor eIF4F/G, and TIF4631 is the 150-kD subunit of the same factor. Since these are two subunits of the same protein, we expect the evidence for this binary interaction to be very strong, represented by the black edge connecting these two proteins. eIF4F/G needs to interact with eIF4E to mediate cap-dependent mRNA translation. eIF4F/G can also interact with p20, but p20 competes with eIF4F/G for binding to eIF4E. Thus, in a complex involving eIF4E (CDC33), we expect to find eIF4F/G or p20, but not all three proteins together. This is indeed what is observed in this complex.
Figure 5b shows a high scoring cluster that is not a clique. This cluster contains four proteins with known or presumed roles in actin cytoskeleton structure, and a complex formation between them is
quite likely.
Figure 5c shows a cluster that is not listed in any of the databases used but is actually a known complex: the heterotrimeric G-protein [with alpha(GPA1)-, beta(STE4)- and gamma(STE18)-subunits]
binds to activated pheromone alpha-factor receptor(STE2) (Whiteway et al., 1989). This is a transient complex and would not be identified by high-throughput screening methods, although the formation
of this complex is a requirement for G protein coupled signal transduction (not only in yeast, but in all G protein coupled receptor signaling). The identification of this cluster by our methodology
is particularly encouraging, as such transient complexes can have crucial cellular roles. The G protein coupled receptors are the most abundant cell surface receptors in human, and some 60% of
currently marketed drugs are targeted at them (Muller, 2000).
The shape shown in Figure 5d constitutes several small cliques connected via common edges or nodes. This predicted cluster therefore potentially gives a higher-level view of the local functionalities of related proteins. Most proteins in this complex have defined roles in transcription regulation, and a subset of these was already known to form a complex (SIN3, RPD3, SDS3, UME6 and SAP30 are part of the histone deacetylase complex). The function of SRP1 (karyopherin-alpha) is somewhat enigmatic, with diverse roles in nuclear import on the one hand and protein degradation on the other. The prediction that SRP1 is part of this complex would be interesting to verify experimentally, because it would potentially link multiple processes.
Although the detected cluster shown in Figure 5e is a subcluster of a very large cluster previously detected by high-throughput methodology (Gavin et al., 2002), we present it here because of its interesting shape: two clusters (the triangle SEC27, COP1, CDC39 and the rectangle CAF40, POP2, CCR4, CDC39) connected by a common binding partner (CDC39). The first cluster contains proteins that are part of secretory pathway vesicles (SEC27, COP1), while the second contains proteins mostly with roles in transcription. CDC39, which links these two groups, is itself a protein involved in transcription. Its linking role to secretory pathway proteins is unsuspected and should be investigated experimentally.
In this article we presented a probabilistic algorithm for discovering complexes in a supervised manner. Specifically, we extract features that can be used to distinguish complexes from non-complexes and train a classifier using these features to identify new complexes in the PPI graph. Unlike previous methods that relied on the 'dense' assumption about complex subgraphs, our algorithm integrates subgraph topology and biological evidence, and learns the importance of each of the features from known complexes. This allows our algorithm to identify complexes with topologies that are missed by previous methods. We have shown that our algorithm can achieve better precision and recall rates for previously identified complexes. Finally, we discussed examples of new complexes determined by our algorithm and their possible functions.
Our framework of feature representation is general. It is straightforward to add other topological properties that are found to be relevant for this problem. It is also possible to add other types of
features. For example, information about the function of proteins can be encoded in our framework as well.
We hope to extend this work and improve both the feature representation and the search so that we can detect other types of interaction groups. Besides complexes, pathways of logically connected proteins also play a major role in both cellular metabolism and signaling. How to detect interesting pathways on the PPI graph within our framework is an interesting direction to pursue. Another interesting direction is to apply this method to other species for which protein interaction data has recently become available, including humans.
This work was supported in part by National Science Foundation grants CAREER 0448453, EIA0225656, EIA0225636, CAREER CC044917, and National Institutes of Health grant LM07994-01. The authors want to
express sincere thanks to Oznur Tastan of CMU for suggestions regarding one validation.
Conflict of Interest: none declared.
• Adamcsek B, et al. CFinder: locating cliques and overlapping modules in biological networks. Bioinformatics. 2006;22:1021–1023.
• Bader GD, Hogue CW. Analyzing yeast protein-protein interaction data obtained from different sources. Nat. Biotechnol. 2003a;20:991–997.
• Bader GD, Hogue CW. An automated method for finding molecular complexes in large protein interaction networks. BMC Bioinformatics. 2003b;4:2.
• Barabasi AL, Oltvai ZN. Network biology: understanding the cell's functional organization. Nat. Rev. Genet. 2004;5:101–113.
• Borgwardt KM, et al. Graph kernels for disease outcome prediction from protein-protein interaction networks. Pac. Symp. Biocomput. 2007;12:4–15.
• Chakrabarti D. Tools for Large Graph Mining. PhD thesis, School of Computer Science, Carnegie Mellon University; 2005.
• Chu W, et al. Identifying protein complexes in high-throughput protein interaction screens using an infinite latent feature model. Pac. Symp. Biocomput. 2006;11:231–242.
• Cherry JM, et al. Genetic and physical maps of Saccharomyces cerevisiae. Nature. 1997;387:67–73.
• Cormen TH, et al. Introduction to Algorithms. 2nd edn. McGraw-Hill; 2001.
• Gavin AC, et al. Functional organization of the yeast proteome by systematic analysis of protein complexes. Nature. 2002;415:141–147.
• Gavin AC, et al. Proteome survey reveals modularity of the yeast cell machinery. Nature. 2006;440:631–636.
• Ho Y, et al. Systematic identification of protein complexes in Saccharomyces cerevisiae by mass spectrometry. Nature. 2002;415:180–183.
• Ideker T, et al. Discovering regulatory and signalling circuits in molecular interaction networks. Bioinformatics. 2002;18(Suppl. 1):S233–S240.
• Ito T, et al. A comprehensive two-hybrid analysis to explore the yeast protein interactome. Proc. Natl Acad. Sci. USA. 2001;98:4569–4574.
• Jansen R, et al. A Bayesian networks approach for predicting protein-protein interactions from genomic data. Science. 2003;302:449–453.
• Joachims T. Learning to Classify Text Using Support Vector Machines. PhD thesis, Department of Computer Science, Cornell University; 2001.
• Jones KS, editor. Information Retrieval Experiment. London: Butterworths; 1981.
• Kim PM, et al. Relating three-dimensional structures to protein networks provides evolutionary insights. Science. 2006;314:1938–1941.
• King AD, et al. Protein complex prediction via cost-based clustering. Bioinformatics. 2004;20:3013–3020.
• Krogan NJ, et al. Global landscape of protein complexes in yeast Saccharomyces cerevisiae. Nature. 2006;440:637–643.
• Manning CD, Schütze H. Foundations of Statistical Natural Language Processing. MIT Press; 1999.
• Mewes HW, et al. MIPS: analysis and annotation of proteins from whole genomes. Nucleic Acids Res. 2004;32:D41–D44.
• Muller G. Towards 3D structures of G protein-coupled receptors: a multidisciplinary approach. Curr. Med. Chem. 2000;7:861–888.
• Pereira-Leal JB, et al. Detection of functional modules from protein interaction networks. Proteins. 2004;54:49–57.
• Przulj N. Biological network comparison using graphlet degree distribution. Bioinformatics. 2007;23:e177–e183.
• Qi Y, et al. Evaluation of different biological data and computational classification methods for use in protein interaction prediction. Proteins. 2006;63:490–500.
• Reguly T, et al. Comprehensive curation and analysis of global interaction networks in Saccharomyces cerevisiae. J. Biol. 2006;5:11.
• Rives AW, Galitski T. Modular organization of cellular networks. Proc. Natl Acad. Sci. USA. 2003;100:1128–1133.
• Robins G, et al. A workshop on exponential random graph (p*) models for social networks. Psychology Department, University of Melbourne; 2005.
• Rual JF, et al. Towards a proteome-scale map of the human protein-protein interaction network. Nature. 2005;437:1173–1178.
• Scholtens D, et al. Local modeling of global interactome networks. Bioinformatics. 2005;21:3548–3557.
• Sharan R, et al. Identification of protein complexes by comparative analysis of yeast and bacterial protein interaction data. J. Comput. Biol. 2005;12:835–846.
• Spirin V, Mirny LA. Protein complexes and functional modules in molecular networks. Proc. Natl Acad. Sci. USA. 2003;100:12123–12128.
• Stelzl U, et al. A human protein-protein interaction network: a resource for annotating the proteome. Cell. 2005;122:830–832.
• Uetz P, et al. A comprehensive analysis of protein-protein interactions in Saccharomyces cerevisiae. Nature. 2000;403:623–627.
• Virtanen SE. Properties of nonuniform random graph models. Research Report, Laboratory for Theoretical Computer Science, Helsinki University of Technology; 2003.
• von Mering C, et al. Comparative assessment of large-scale data sets of protein-protein interactions. Nature. 2002;417:399–403.
• Whiteway M, et al. The STE4 and STE18 genes of yeast encode potential beta and gamma subunits of the mating factor receptor-coupled G protein. Cell. 1989;56:467–477.
• Witten IH, Frank E. Data Mining: Practical Machine Learning Tools with Java Implementations. San Francisco: Morgan Kaufmann; 2000.
• Xenarios I, et al. DIP, the Database of Interacting Proteins: a research tool for studying cellular networks of protein interactions. Nucleic Acids Res. 2002;30:303–305.
• Yan X, Han J. gSpan: graph-based substructure pattern mining. Technical Report UIUCDCS-R-2002-2296, Department of Computer Science, UIUC; 2002.
• Yeger-Lotem E, et al. Network motifs in integrated cellular networks of transcription-regulation and protein-protein interaction. Proc. Natl Acad. Sci. USA. 2004;101:5934–5939.
• Zhu D, Qin ZS. Structural comparison of metabolic networks in selected single cell organisms. BMC Bioinformatics. 2005;6:8.
• Zotenko E, et al. Decomposition of overlapping protein complexes: a graph theoretical method for analyzing static and dynamic protein associations. Algorithms Mol. Biol. 2006;1:7.
Articles from Bioinformatics are provided here courtesy of Oxford University Press
Prove that it is smaller than 6

Prove that √8 + √10 < 6.
How do I prove this? Thank you...

Re: Prove that it is smaller than 6

A possibility is to take the square of both sides.

Re: Prove that it is smaller than 6

√8 + √10 < 6
⟺ 18 + 2√80 < 36 (squaring both sides)
⟺ 2√80 < 18
⟺ 320 < 324 (squaring both sides),
which is true.

Re: Prove that it is smaller than 6

Suppose otherwise; then by assumption √8 + √10 ≥ 6.
Now square both sides and follow the consequences to get a contradiction.

Re: Prove that it is smaller than 6

Normally, one would NOT start a proof by assuming what you want to prove, as alexmahone does here. However, this is a perfectly valid "synthetic proof". Every step is invertible. A "standard" proof would be given by going from "320 < 324" upward.
Probability of observing outcome with low individual probability
Suppose I throw a k-sided die n times and want to know the probability $p$ of observing a set of counts whose individual probability is higher than $x$.
For example, let k=2, n=2, with a fair die. The possible sets of counts are (0,2), (1,1), (2,0). The individual probabilities of those counts are 1/4, 1/2 and 1/4, respectively. The probability of getting an
outcome with individual probability above 0 is 1, above 1/4 is 1/2, and above 1/2 is 0.
What is the relationship between $p$ and $x$? For k=3, a line gives a surprisingly good fit
This is a generalization of a related unanswered question
Douglas Zare suggests thinking of the counts as lattice sites of a random walk and using the Central Limit Theorem. This suggests that the relationship is going to be quadratic for k=5, and indeed a parabola
seems to give a decent upper bound in that case
n = 21;
types = Flatten[
Permutations /@ (IntegerPartitions[n, {3}, Range[0, n]]/n), 1];
prob[p_, q_] := n! Times @@ MapThread[(#1)^(n #2)/(n #2)! &, {p, q}];
cum[p_, cutoff_] :=
  Total[Select[prob[p, #] & /@ types, # >= cutoff &]];
p0 = RandomChoice[Select[types, FreeQ[#, 0] &]];
pvals = prob[p0, #] & /@ Union[types];
cvals = cum[p0, #] & /@ pvals;
data = Transpose[{pvals, cvals}];
Show[ListPlot[data, PlotRange -> All],
Plot @@ {Fit[data, {1, x}, x], {x, 0, Max[pvals]}, PlotStyle -> Red}]
(0,2),(1,1),(2,0) are tuples of counts, not sets of counts. Also, what language is your code in? – Ricky Demer Oct 4 '10 at 21:12
@Ricky: Looks like Mathematica. – Charles Oct 4 '10 at 21:20
Tuple (b1,b2,...) represents set {(1,b1),(2,b2),...} where pair (a,b) indicates that event a happened b times. Language is Mathematica – Yaroslav Bulatov Oct 4 '10 at 21:38
How have you managed to obtain negative values for (p), which is a probability? Perhaps I am being dense. – Tom Smith Oct 4 '10 at 22:21
x axis was at y=0.2 which is confusing, fixed – Yaroslav Bulatov Oct 4 '10 at 22:30
The following is not rigorous, but it explains the linearity for $k=3$, and I believe it can be made rigorous.

The counts are naturally arranged in a simplex with $k$ vertices by projecting orthogonally to the line $x_1 = x_2 = ... = x_k$. You can view the counts as the endpoints of a random walk starting from the center of the simplex (the projection of the origin).

The Central Limit Theorem suggests that the multinomial coefficients at distance $d$ from the center of the simplex are about $c_1 \exp(-d^2/2)$. For $k=3$ and $d\lt n$, the number of vertices of distance at most $d$ is proportional to $d^2$, so there are about $c_2 d$ points of distance between $d$ and $d+1$. That suggests that the probability of encountering a probability at least $q=c_1\exp(-d^2/2)$ is about $\int_0^d c_2 x ~c_1 \exp(-x^2/2) dx$, which is linear in $q$.

In other dimensions ($k\ne3$), the result of the integral is not linear in $\exp(-d^2/2)$.
Thanks, this seems to be a promising approach. For k=3, random walks seem to be over the hexagonal lattice. Any idea what this lattice is called for general k? – Yaroslav Bulatov Oct 5
'10 at 20:15
To clarify, for $k=5$, the indefinite integral is $(a_1 x^2+a_2)\exp(−x^2/2)+C$ which is not quadratic in $exp(−x^2/2)$, it looks like $(a_3\log(q)+a_4)q+C$. – Douglas Zare Oct 5 '10 at
The endpoints are in a copy of the $A_k$ lattice, the lattice points in $\mathbf Z^{k+1}$ whose coordinate sum is $n$ instead of $0$. – Douglas Zare Oct 5 '10 at 21:11
$k=5$ adds two extra dimensions over $k=3$, shouldn't that multiply the integrand by $x^2$? – Yaroslav Bulatov Oct 5 '10 at 21:48
That $x$ is not the same $x$ as you are plotting, which is closer to $q=c_1 \exp(-x^2/2)$. – Douglas Zare Oct 5 '10 at 21:58
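The integral in this argument can be written out for general $k$; the following is a schematic restatement (the constants $c_i$, $a_i$ are not derived here, and the approximation is heuristic, as the answer itself notes):

```latex
% About c_2 x^{k-2} lattice points sit at distance between x and x+1 from
% the center of the (k-1)-dimensional simplex, each with multinomial
% weight about c_1 e^{-x^2/2}, giving
P(\text{individual probability} \ge q) \;\approx\;
  \int_0^{d} c_1 c_2\, x^{k-2} e^{-x^2/2}\, dx,
\qquad q = c_1 e^{-d^2/2}.
% For k = 3 the antiderivative is -c_1 c_2 e^{-x^2/2}, so the result is
% affine in q (the observed linearity). For k = 5 the antiderivative has
% the form (a_1 x^2 + a_2)\, e^{-x^2/2} + \text{const}, which is no
% longer linear in q.
```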
If you're concerned with very small values of x, it will be better to compute p = 1 − P(counts with individual probability at most x). This way you'll be summing far fewer terms, and unless your x is very, very small with k and n very, very large, you need not worry about round-off error.
I'm not sure about your coding, but your description deals with integer compositions here, not integer partitions.
Let's just handle the binomial case k=2. Then .5 is the probability of each outcome.
If x is given, then in your notation (and without my 1− trick): p = (sum over j from 0 to n, counting only the terms where (n choose j)(.5)^n > x) of (n choose j)(.5)^n. Note that (.5)^n is really best understood as (.5)^(n−j) · (.5)^j.
So it is easy to tell where you're going to hit the x axis: take the middle binomial coefficient times (.5)^n.

So this is really a step function. But yes, linear is close but not an exact relationship. If you look at successive left-hand endpoints of the steps, you should be able to convince yourself that if you interpolate linearly between them, the result would not be linear overall. I've sort of convinced myself that the slopes between the left-hand endpoints are basically 2·(.5)^n·(n choose j)/(n−1 choose j−1) for appropriate j, so the slopes between successive left endpoints differ by at most a factor of n/(n−1), and often a much smaller factor, which for large n basically goes to having the same slope between each adjacent pair of left endpoints.
I hope this helps. Since I'm new here I could use the reputation to put a bounty on one of my problems :) .
It's concave for k=2 and convex for k=4. k=3 seems to be the only one where relationship is linear – Yaroslav Bulatov Oct 5 '10 at 3:57
By the way, this problem for k=2 is well studied, and all the formulas I see (check the link in desc.) are either loose approximations or are complicated. k=3 seems to have a tight+simple
approximation over whole range of x. Nice formulas for large x and/or large k would be useful – Yaroslav Bulatov Oct 5 '10 at 4:08
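The $k=2$ computation described in this answer can be sketched as follows (a sketch under the stated assumptions; we sum over $j$ from 0 to $n$, and the function name is ours):

```python
from math import comb

def p_of_x(n, x):
    """k = 2 fair-coin case: total probability of count vectors whose
    individual probability exceeds x (a step function in x)."""
    return sum(comb(n, j) * 0.5**n
               for j in range(n + 1)
               if comb(n, j) * 0.5**n > x)

# n = 2 reproduces the example from the question: outcomes (0,2), (1,1),
# (2,0) have probabilities 1/4, 1/2, 1/4.
```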
Please explain this C code (it follows in the text). It is code for a guessing game (guess the right number). I am a beginning student and this example comes from my textbook. I can understand the while loop, the if/else, and the calls to functions, but it uses words as variables and I can't help interpreting their meaning, whereas a computer just sees a variable or string.
#include <stdio.h>

#define TARGET 17
#define LOW    0
#define HIGH   100
#define TRUE   1
#define FALSE  0

int process( int guess, int target );

int main( void )
{
    int a_guess, correct;

    correct = FALSE;
    printf( "I'm thinking of a number between %d and %d.\n", LOW, HIGH );
    while ( correct == FALSE )
    {
        printf( "Try to guess it now. " );
        scanf( "%d", &a_guess );
        correct = process( a_guess, TARGET );
    }
    return 0;
}

int process( int guess, int target )
{
    if ( guess < target )
        printf( "Too low!\n" );
    else if ( guess > target )
        printf( "Too high!\n" );
    else
    {
        printf( "You guessed it!\n" );
        return TRUE;
    }
    return FALSE;
}
What exactly is the issue?
I don't understand the nuts and bolts. Like how the while loop works, how correct is assigned FALSE, and then how correct is assigned the result of process. In class, we are just breezing through everything, and I seem stuck because I can't comprehend it.
I'm kind of piecing it together. The more I look at it, the 'process' function eventually returns TRUE, which tells the while loop to terminate.
Basically, the while loop works by asking the question repeatedly until your guess matches the target. Low is 0, high is 100. The target is 17. So basically you are asked for a number between 0 and 100. While it's false, it will ask you to enter a number; if you enter ANY number besides 17, you will go to the if statements, which state whether it is too high or too low. When you enter 17, it will tell you that you are correct and will set it to true, so the while loop stops.
Thank you. My textbook is kind of verbose and detailed and speaks the 'C language'; it never gives a plain-English explanation like the one you gave above. So I'm having trouble with basics here and there, but I'm getting it.
yup anything else needed?
OK, here is some more code my classmate started. It is not finished, but you might get the idea; the assignment was to write a menu-driven mini statistics package. I guess I get a little intimidated by all the code, and I don't follow a lot of this. Can you tell me what is going on?
#include <stdio.h>
#include <math.h>

#define MAX 200

void print_inventory( int inventory[], int num_items );
double stdDev( int inventory[], int num_items, int meen );   /* was missing ';' */
int input_inventory( int inventory[], int maximum );
int largest( int inventory[], int n );
int mean( int inventory[], int n );

/*********************************** main ***********************************/
int main( void )
{
    int inventory[MAX];
    int num_item, large, meen;
    double dev;

    printf( "enter 1 - 200 different numbers\n" );
    num_item = input_inventory( inventory, MAX );
    print_inventory( inventory, num_item );
    large = largest( inventory, num_item );
    printf( "%d is the largest number\n", inventory[large] );
    meen = mean( inventory, num_item );
    printf( "mean of array is %d\n", meen );
    dev = stdDev( inventory, num_item, meen );
    printf( "the standard deviation is %f\n", dev );
    return 0;
}

/******************************* input items *******************************/
int input_inventory( int inventory[], int maximum )
{
    int index = 0;
    scanf( "%d", &inventory[index] );
    /* stop at a -1 sentinel, or when the array is full */
    while ( index < maximum - 1 && inventory[index] != -1 )
    {
        index++;
        scanf( "%d", &inventory[index] );
    }
    if ( inventory[index] != -1 )
    {
        printf( "No room for more items." );
        return index + 1;
    }
    else
        return index;            /* don't count the sentinel itself */
}

/******************************* print items *******************************/
void print_inventory( int inventory[], int num_items )
{
    int index;
    for ( index = 0; index < num_items; index++ )
    {
        printf( " item number %d:\t", index + 1 );
        printf( "number on hand %5d\n", inventory[index] );
    }
}

/*********************************** mean ***********************************/
int mean( int inventory[], int n )
{
    int i, sum = 0;
    for ( i = 0; i < n; i++ )
        sum = sum + inventory[i];
    return sum / n;
}

/********************************* largest *********************************/
int largest( int inventory[], int n )
{
    int i, index = 0;
    for ( i = 0; i < n; i++ )        /* was i <= n: read past the end */
        if ( inventory[index] < inventory[i] )
            index = i;
    return index;
}

/******************************** deviation ********************************/
double stdDev( int inventory[], int num_items, int meen )
{
    int i;
    double sumdevs = 0;
    for ( i = 0; i < num_items; i++ )          /* was "9 < num_items" */
        sumdevs += (inventory[i] - meen) * (inventory[i] - meen);
    /* '^' is XOR in C, not exponentiation: use multiplication and sqrt */
    return sqrt( sumdevs / num_items );
}
He's creating a bunch of functions (what some languages call methods), and each one does something different. In coding, breaking a program into small, reusable pieces is important. Let's say you want to add two numbers; you could write a function like
int add(int a, int b) { return a + b; }
OK, thank you. I think I'm going to stop looking at the vast code and being intimidated, and take it in little nugget-sized pieces like your example.
Bivariate analysis
Bivariate analysis is one of the simplest forms of quantitative (statistical) analysis.^[1] It involves the analysis of two variables (often denoted as X, Y), for the purpose of determining the empirical relationship between them.^[1] In order to see if the variables are related to one another, it is common to measure how those two variables simultaneously change together (see also covariance).
Bivariate analysis can be helpful in testing simple hypotheses of association and causality – checking to what extent it becomes easier to know and predict a value for the dependent variable if we
know a case's value of the independent variable (see also correlation).^[2]
Bivariate analysis can be contrasted with univariate analysis, in which only one variable is analysed.^[1] Furthermore, the purpose of a univariate analysis is descriptive. Subgroup comparison – the descriptive analysis of two variables – can sometimes be seen as a very simple form of bivariate analysis (or as univariate analysis extended to two variables).^[1] The major differentiating point between univariate and bivariate analysis, in addition to the latter's looking at more than one variable, is that the purpose of a bivariate analysis goes beyond simple description: it is the analysis of the relationship between the two variables.^[1] Bivariate analysis is a simple (two-variable) special case of multivariate analysis (where multiple relations between multiple variables are examined simultaneously).^[1]
Types of analysis
Common forms of bivariate analysis involve creating a percentage table or a scatterplot graph and computing a simple correlation coefficient.^[1] The types of analysis that are suited to particular pairs of variables vary in accordance with the level of measurement of the variables of interest (e.g. nominal/categorical, ordinal, interval/ratio). If the dependent variable (the one whose value is determined to some extent by the other, independent variable) is a categorical variable, such as the preferred brand of cereal, then probit or logit regression (or multinomial probit or multinomial logit) can be used. If both variables are ordinal, meaning they are ranked in a sequence as first, second, etc., then a rank correlation coefficient can be computed. If just the dependent variable is ordinal, ordered probit or ordered logit can be used. If the dependent variable is continuous (either interval level or ratio level, such as a temperature scale or an income scale), then simple regression can be used.
If both variables are time series, a particular type of causality known as Granger causality can be tested for, and vector autoregression can be performed to examine the intertemporal linkages
between the variables.
Olema Algebra 2 Tutor
...I've used the concepts during my years as a programmer and have tutored many students in the subject. I have a strong background in linear algebra and differential equations. I have a Masters
in mathematics and a PhD in economics which requires a good understanding of both topics.
49 Subjects: including algebra 2, calculus, geometry, physics
...I am a commercial airline pilot with 26 years experience and am looking to retire soon. I would like to "give back" ("pay it forward," if you will) to a community and world that has given me
much by helping young people. As a retired Air Force Command Pilot with an educational background in Civ...
14 Subjects: including algebra 2, reading, ASVAB, elementary (k-6th)
...Different students will respond better to one style of teaching versus another. My job as a tutor is to find out what teaching style or presentation will help my student comprehend the subject at hand. First, I try to find out what sparks the student's interest, and then I try to relate the subject matter he or she has difficulty with to that interesting topic.
24 Subjects: including algebra 2, chemistry, calculus, physics
...By addressing a few basic concepts -- such as the fact that a quadratic expression can be represented by a rectangular figure -- and making certain that underlying skills are solid, the road
ahead can be free of barriers. I took my degree at Harvard in interdisciplinary social science, with an e...
22 Subjects: including algebra 2, English, writing, physics
...I would be honored to help you in your quest for this knowledge. I have a bachelor's degree in Physics from U.C. Berkeley.
12 Subjects: including algebra 2, chemistry, physics, calculus