| content | meta |
|---|---|
Narberth Geometry Tutor
Find a Narberth Geometry Tutor
...I find that children with ADHD often struggle to organize information and often prefer to learn about something in a way that relates to them and their interests. For this reason, I like to use
the 4-Square Writing Program for struggling writers. It allows them to easily organize their reading and writing, and the graphic organizer never changes; they just use it differently.
20 Subjects: including geometry, reading, dyslexia, algebra 1
...I have tutored math and sciences in many volunteer and job opportunities. I have experience with after school tutoring from 2003-2006. I was an Enon Tabernacle after school ministry tutor for
elementary and high school students 2011-2012.
13 Subjects: including geometry, chemistry, biology, algebra 1
...All of my students have seen grade improvement within their first two weeks of tutoring, and all of my students have reviewed me positively. Through WyzAnt, I have tutored math subjects from
prealgebra to precalculus; I have also tutored English writing, English grammar, and economics, and I am ...
38 Subjects: including geometry, Spanish, English, reading
...In his case, the only thing he needed was to be taught normally and not rushed ahead, and then he was able to catch up to grade level reading. I have had several students whose problem was
remembering what they had just read. For them, I've found the best way to correct that problem was working...
15 Subjects: including geometry, reading, English, writing
...Their success becomes my success. Every student brings a unique perspective and a unique set of expectations to his or her lesson, causing me to adapt my teaching style and approach to forge a
connection that works for both of us. I have learned a great deal from my students in this process!
21 Subjects: including geometry, reading, writing, algebra 1
|
{"url":"http://www.purplemath.com/narberth_geometry_tutors.php","timestamp":"2014-04-17T13:07:13Z","content_type":null,"content_length":"23975","record_id":"<urn:uuid:45df5fe3-5250-4102-9b02-e28a5a3bd3c9>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00601-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Homework Help
Posted by Sissy on Friday, May 18, 2007 at 10:13pm.
The sum of two numbers is 112. The second is 7 more than 4 times the first. What are the two numbers?
Am I doing something wrong? I seem to be stuck.
x + y = 112
That's fine for the first equation. The second one is a little tricky. "Is" means an equals sign, but you're also saying that the number on the left is 7 more than what's on the right, so you still have to add 7 to the left to balance it.
"four times the first" is:
y+7 = 4x
Can you solve it from there?
Made a mistake. In my paragraph that reads "is 7 more" I should say SUBTRACT 7 from the left side:
That will leave you with the equation y - 7 = 4x.
My apologies.
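(Carrying it through: substitute y = 4x + 7, which is the same as y - 7 = 4x, into x + y = 112 to get 5x + 7 = 112, so x = 21 and y = 91. Check: 21 + 91 = 112, and 91 is 7 more than 4 times 21.)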
Related Questions
Math 8th grade - I NEED THESE IN ALGEBRAIC EXPRESSION!! The larger of two sums ...
8th Grade Math - *I NEED THESE IN ALGEBRAIC EXPRESSIONS! 1.) The larger of two ...
math - The sum of two numbers is 76. The second is 8 more than 3 times the first...
Grade 8 Math - Can you write them in Mathematical/numbers form I'll solve them ...
Math - Algebra - The sum of two numbers is 25. Twice the second plus 4 is equal ...
math word problems - The sum of two numbers is 69. The second is 9 more than 4 ...
algebra - The sum of two numbers is 81. The second is 6 more than 2 times the ...
College Algebra - The sum of two numbers is less than 12. The second number is 8...
math - The second of two numbers is 7 times the first. Their sum is 72. Find the...
math - The second of two numbers is 6 times the first. Their sum is 77. Find the...
|
{"url":"http://www.jiskha.com/display.cgi?id=1179540793","timestamp":"2014-04-16T18:07:25Z","content_type":null,"content_length":"8812","record_id":"<urn:uuid:bf39954a-b5b4-44ae-9bf6-40098f647cc7>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00658-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Equation of lines
March 19th 2013, 04:40 AM #1
Oct 2009
Equation of lines
How would I find the
parametric equations for the line through (2,4,6) that
is perpendicular to the plane x-y+3z=7
It is my understanding that to find the equation of a line i need a parallel vector and a point.
Im assuming I have to do something with the normal vector (1,-1,3) but it's not parallel to the line in question, so what could i possibly do?
and also,
find the equation of a line passing through (2,1,0) and perpendicular to both i + j and j + k
Re: Equation of lines
How would I find the
parametric equations for the line through (2,4,6) that
is perpendicular to the plane x-y+3z=7
It is my understanding that to find the equation of a line i need a parallel vector and a point.
Im assuming I have to do something with the normal vector (1,-1,3) but it's not parallel to the line in question,
Why would you say that? The line is to be perpendicular to the plane and the normal vector (1, -1, 3) is perpendicular to the plane. How many perpendiculars do you think a plane has?
so what could i possibly do?
and also,
find the equation of a line passing through (2,1,0) and perpendicular to both i + j and j + k
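(Spelling the hint out: with direction vector (1, -1, 3) the first line is x = 2 + t, y = 4 - t, z = 6 + 3t. For the second part, a vector perpendicular to both i + j and j + k is their cross product, (1, 1, 0) x (0, 1, 1) = (1, -1, 1), so that line is x = 2 + t, y = 1 - t, z = t.)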
March 19th 2013, 05:28 AM #2
MHF Contributor
Apr 2005
|
{"url":"http://mathhelpforum.com/calculus/215065-equation-lines.html","timestamp":"2014-04-18T13:17:26Z","content_type":null,"content_length":"34770","record_id":"<urn:uuid:ee1ddc77-2353-4ba5-adcf-64582a888805>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00143-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Local Extreme Values Question
November 1st 2008, 09:13 AM #1
Sep 2008
Local Extreme Values Question
Let y=f(x) be differentiable and suppose that the graph of f does not pass through the origin. The distance D from the origin to a point P(x,f(x)) of the graph is given by
$D=\sqrt{x^2 + [f(x)]^2}$
Show that if D has a local extreme value at c, then the line through (0,0) and (c,f(c)) is perpendicular to the line tangent to the graph of f at (c,f(c))
Let y=f(x) be differentiable and suppose that the graph of f does not pass through the origin. The distance D from the origin to a point P(x,f(x)) of the graph is given by
$D=\sqrt{x^2 + [f(x)]^2}$
Show that if D has a local extreme value at c, then the line through (0,0) and (c,f(c)) is perpendicular to the line tangent to the graph of f at (c,f(c))
Don't bump.
You should realise that the gradient of the tangent is $f'(c)$ and the gradient of the line is $\frac{f(c)}{c}$. If they are normal then their product is equal to -1: $f'(c) \cdot \frac{f(c)}{c} = -1 \Rightarrow f'(c) \cdot f(c) = -c$.
Use the chain rule to differentiate D:
$\frac{d D}{dx} = \frac{1}{2 \sqrt{x^2 + [f(x)]^2}} \cdot [2x + 2 f(x) f'(x)]$.
$\frac{d D}{dx} = 0 \Rightarrow 2x + 2 f(x) f'(x) = 0 \Rightarrow f(x) \, f'(x) = -x$.
Therefore, at the extremum $x = c$ we have $f(c) \, f'(c) = -c$, that is, $f'(c) \cdot \frac{f(c)}{c} = -1$, which is exactly the perpendicularity condition.
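(A quick concrete check: take $f(x) = x + 2$, whose graph misses the origin. Then $D^2 = x^2 + (x+2)^2$ has derivative $4x + 4$, so the extremum is at $c = -1$. The slope of the line through the origin and $(-1, 1)$ is $-1$, the tangent slope is $f'(-1) = 1$, and their product is $-1$ as claimed.)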
November 1st 2008, 05:19 PM #2
|
{"url":"http://mathhelpforum.com/calculus/56896-local-extreme-values-question.html","timestamp":"2014-04-18T07:18:21Z","content_type":null,"content_length":"35367","record_id":"<urn:uuid:d7cd5aa9-330a-4ec9-b775-8cdb3a89366b>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00500-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Creating Professional Tables Using Estpost
Original Code
* This command should install the package estout.
ssc install estout
estpost is one of several commands included in the package estout.
In my previous post I dealt with the primary use of estout, to create post estimation tables fit for publication.
I pretty much read through the high quality documentation listed on the estout site and made my own examples.
I will probably do the same with estpost.
I strongly recommend reading through the documentation found on the website http://repec.org/bocode/e/estout/estpost.html.
See (http://www.econometricsbysimulation.com/2012/11/professional-post-estimation-tables.html)
In this post I will deal with the estpost command which takes the results of several common summary statistics commands and converts them to formats that will be used by esttab.
If you end up using this command to create your tables please cite the author Jann, Ben.
It is obvious a lot of work went into creating this package with probably very little reward:
Citing estout
Thanks for citing estout in your work. For example, include a note such as
"Tables produced by estout (Jann 2005, 2007)."
and add
Jann, Ben (2005): Making regression tables from stored estimates. The Stata Journal 5(3): 288-308.
Jann, Ben (2007): Making regression tables simplified. The Stata Journal 7(2): 227-244.
to the bibliography.
estpost is compatible with the following commands:
From: http://repec.org/bocode/e/estout/hlp_estpost.html#commands
command         description
summarize       post summary statistics
tabstat         post summary statistics
ttest           post two-group mean-comparison tests
prtest          post two-group tests of proportions
tabulate        post one-way or two-way frequency table
svy: tabulate   post frequency table for survey data
correlate       post correlations
ci              post confidence intervals for means, proportions, or counts
stci            post confidence intervals for means and percentiles of survival time
margins         post results from margins (Stata 11)
* Let us start with some made up data!
set obs 10
* Let's imagine that we are interested in 10 different products
gen prod_num = _n
* Each product has a base price
gen prod_price = rbeta(2,5)*10
* In six different markets
expand 6
sort prod_num
* I subtract the 1 because _n starts at 1 but mod (modular function) starts at 0
gen mercado = mod(_n-1, 6)
* Each mercado adds a fixed value to each product based on local demand
gen m_price = rbeta(2,5)*5 if prod_num == 1
bysort mercado: egen mercado_price = sum(m_price)
* Get rid of the reference price
drop m_price
* There are 104 weeks of observations for each product and mercado
expand 104
sort prod_num mercado
gen week = mod(_n-1,104)
* Each week there is a shared shock to all of the prices
gen week_p = rnormal()*.5 if mercado==0 & prod_num==1
bysort week: egen week_price=sum(week_p)
drop week_p
* Finally there is a product, market, and week specific shock that is independent of other shocks.
gen u = rnormal()*.5
* Let's generate some other random characteristics.
gen prod_stock = rnormal()
* Seasonality
gen seasonality = rnormal()
* Now let's calculate the price
gen price = prod_price + mercado_price + week_price + prod_stock + seasonality + u
* Finally in order to make things interesting let's say that our data set is incomplete because of random factors which occur 10% of the time.
gen missing = rbinomial(1,.1)
drop if missing==1
drop missing
* And to drop our unobservables
drop u week_price mercado_price prod_price
* Now that we have created our data, let's do some descriptive statistics that we will create tables from
* First the basic summarize command
estpost summarize price seasonality prod_stock
* This in effect tells us what statistics can be pulled from the summarize command.
* We can get more stats (such as medians) by using the detail option
estpost summarize price seasonality prod_stock, detail
* We can now create a table of estimates
esttab ., cells("mean sd count p1 p50 p99") noobs compress
* To save the table directly to a rtf (word compatible format)
esttab . using tables.rtf, replace cells("mean sd count p1 p50 p99") noobs compress
* Or excel
esttab . using tables.csv, replace cells("mean sd count p1 p50 p99") noobs compress
* Note the . after esttab is important. I don't know why, but it does not work without it.
* Now imagine we would like to assemble a table that has the mean price seasonality and prod_stock by mercado
estpost tabstat price seasonality prod_stock, statistics(mean sd) columns(statistics) listwise by(mercado)
* Everything looks like it is working properly up to this point but for some reason I can't get the next part to work.
esttab, main(mean) aux(sd) nostar unstack noobs nonote nomtitle nonumber
* The table only has one column when it should have 6 for the six different markets.
estpost tab prod_num
esttab . using tables.rtf , append cells("b(label(freq)) pct(fmt(2)) cumpct(fmt(2))")
* There is also a correlate function that will post information about the correlation between the first variable listed after corr and the other variables.
estpost corr price week seasonality mercado
esttab . using tables.rtf , append cell("rho p count")
* Unfortunately the alternative option, to generate the matrix of correlations that we would expect is not working either.
* This is the sad fate of these user-written programs (such as Ian Watson's tabout): Stata gets updated and they do not.
* I would find it very annoying to have to update code constantly so that a general public that I do not know can continue to use my code for free.
* However, perhaps if people are nice and send the author some emails requesting an update he might be encouraged to come back to his code knowing it is being used.
* His contact information listed on the package tutorial is Ben Jann, ETH Zurich, jann@soz.gess.ethz.ch.
2 comments:
1. Well, while testing your code, I got this:
. * This in effect tells us what statistics can be pulled from the summarize command.
. * We can get more stats (such as medians) by using the detail option
. estpost summarize price seasonality prod_stock, detail
| e(count) e(sum_w) e(mean) e(Var) e(sd) e(skewn~)
price | 5567 5567 4.179372 3.569746 1.889377 .0742438
seasonality | 5567 5567 .0044847 .991086 .995533 -.0089968
prod_stock | 5567 5567 -.0048699 .9900933 .9950343 .0260169
| e(kurto~) e(sum) e(min) e(max) e(p1) e(p5)
price | 2.940709 23266.56 -3.707268 10.74922 -.0037667 1.163866
seasonality | 2.932526 24.96656 -3.446181 3.271273 -2.343164 -1.645892
prod_stock | 2.989126 -27.11094 -3.376829 3.447521 -2.334032 -1.617825
| e(p10) e(p25) e(p50) e(p75) e(p90) e(p95)
price | 1.760584 2.858054 4.16412 5.445695 6.600117 7.375202
seasonality | -1.280259 -.6522442 .0002313 .6745954 1.298612 1.623599
prod_stock | -1.288978 -.669455 -.0005103 .6562532 1.295787 1.65028
| e(p99)
price | 8.694004
seasonality | 2.366776
prod_stock | 2.321529
. * We can now create a table of estimates
. esttab ., cells("mean sd count p1 p50 p99") noobs compress
current estimation results do not have e(b) and e(V)
2. Dear Bach.
This web page seems to be useful for you.
|
{"url":"http://www.econometricsbysimulation.com/2012/11/creating-professional-tables-using.html","timestamp":"2014-04-17T12:30:08Z","content_type":null,"content_length":"206651","record_id":"<urn:uuid:9ee0ad0b-c685-4000-bcc6-7b470d5f19e3>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00447-ip-10-147-4-33.ec2.internal.warc.gz"}
|
GPMD Publications 2007 - present
H.-G. Yu, Ab initio molecular dynamics simulation of photodetachment reaction of cyclopentoxide, Chem. Phys. Lett, 441, 20 (2007).
H.-G. Yu, J. T. Muckerman and J.S. Francisco, Quantum force molecular dynamics study of the O atoms with HOCO reaction, J. Chem. Phys. 127, 094302 (2007).
M. L. Costen and G. E. Hall, Coherent and incoherent orientation and alignment of ICN photoproducts, Phys. Chem. Chem. Phys. 9, 272-287 (2007).
H.-G. Yu, G. Poggi, J.S. Francisco and J. T. Muckerman, Energetics and molecular dynamics of the reaction of HOCO with HO[2] radicals, J. Chem. Phys. 129, 214307 (2008).
H.-G. Yu and J.S. Francisco, Energetics and kinetics of the reaction of HOCO with hydrogen atoms, J. Chem. Phys. 128, 244315 (2008).
H.-G. Yu, J.S. Francisco and J. T. Muckerman, Ab initio and direct dynamics study of the reaction of Cl atoms with HOCO, J. Chem. Phys. 129, 064301 (2008).
B.J. Braams and H.-G. Yu, Potential energy surface and quantum dynamics study of rovibrational states of HO[3](X^ 2A”), Phys. Chem. Chem. Phys. 10, 3150 (2008).
W.-Q. Han, H.-G. Yu, C. Zhi, J. Wang, Z. Liu, T. Sekiguchi and Y. Bando, Isotope effect on band gap and radiative transitions properties of boron nitride nanotubes, Nano Lett. 8, 491 (2008).
H.-G. Yu, A spherical electron cloud hopping model for studying product branching ratios of dissociative recombination, J. Chem. Phys. 128, 194106 (2008).
J. Hofstein, H. Xu, T. J. Sears, and P. M. Johnson, The fate of excited states in jet-cooled aromatic molecules: Bifurcating pathways and very long-lived species from the S-1 excitation of
phenylacetylene and benzonitrile, J. Phys. Chem. A, 112, 1195-1201 (2008).
Z. Wang, Y. Kim, G. E. Hall and T. J. Sears, State mixing and predissociation in the c-a band system of singlet methylene studied by optical-optical double resonance, J. Phys. Chem. A, 112, 9248-9254
J. J. Harrison, J. M. Brown, J. Chen, T. Steimle and T. J. Sears, The Zeeman effect on lines in the (1,0) band of the F ^4Δ – X ^4Δ transition of the FeH radical, Astrophys. J. 679, 854-861 (2008).
H.-G. Yu, Spherical electron cloud hopping molecular dynamics simulation on dissociative recombination of protonated water, J. Phys. Chem. A, 113, 6555 (2009).
H.-G. Yu and J.S. Francisco, A theoretical study of the reaction of CH3 with HOCO radicals, J. Phys. Chem. A 113, 3844 (2009).
H.-G. Yu, A general rigorous quantum dynamics algorithm to calculate vibrational energy levels of pentaatomic molecules, J. Mol. Spectrosc. 256, 287 (2009).
S. H. Kable, S. R. Reid and T. J. Sears, The halocarbenes: Model systems for understanding the spectroscopy, dynamics and chemistry of carbenes, Int. Rev. Phys. Chem. 28, 435-480 (2009).
H.-G. Yu and J.S. Francisco, Ab initio and RRKM study of the reaction of ClO with HOCO radicals, J. Phys. Chem. A, 113, 12932 (2009).
H.-G. Yu, Product branching ratios of the reaction of CO with H[3]^+ and H[2]D^+, Astrophys. J. Lett., 706, L52 (2009).
M. L. Hause, G. E. Hall, and T. J. Sears, Sub-Doppler laser absorption spectroscopy of the A ^2Π[i] − X ^2Σ+ (1,0) band of CN. Measurement of the ^14N hyperfine parameters in A ^ 2Π[i] CN. J. Molec.
Spectrosc. 253, 122-128 (2009); Corrigendum 260, 138 (2010).
M. L. Hause, G. E. Hall, and T. J. Sears, Sub-Doppler Stark spectroscopy in the A-X (1,0) band of CN. J. Phys. Chem. A 113, 13342-13346 (2009).
J.S. Francisco, J.T. Muckerman, and H.-G. Yu, HOCO radical chemistry, Acc. Chem. Res. 43, 1519 (2010).
W.-Q. Han, Z. Liu and H.-G. Yu, Synthesis and optical properties of GaN/ZnO solid solution nanocrystals, App. Phys. Lett., 96, 183112 (2010).
P. Sivakumar, C. P. McRaven, P. M. Rupasinghe, T. Zh. Yang, N. E. Shafer-Ray, G. E. Hall and T. J. Sears, Pseudo-continuous resonance-enhanced multiphoton ionization: application to the determination
of the hyperfine constants of ^208Pb^19F, Mol. Phys. 108, 927-935 (2010).
G. Hancock, G. Richmond, G.A.D. Ritchie, S. Taylor, M.L. Costen and G.E. Hall, Frequency modulated circular dichroism spectroscopy: application to ICN photolysis, Mol. Phys. 108, 1083-1095 (2010).
C.P. McRaven, P. Sivakumar, N.E. Shafer-Ray, Gregory E. Hall and Trevor J. Sears, Spectroscopic constants of the known electronic states of lead monofluoride, J. Molec. Spectrosc. 262, 89–92 (2010).
H.-F. Xu, P. M. Johnson and T. J. Sears, Effect of laser injection seeder on rotationally resolved spectra of benzonitrile , Chin. Phys. Lett. 27, 083301(3) (2010).
C.-H. Chang, G. Lopez, T. J. Sears and P. M. Johnson, Vibronic analysis of the S[1] – S[0] transition of phenyl acetylene using photoelectron imaging and spectral intensities derived from electronic
structure calculations, J. Phys. Chem. A 114, 8262-8270 (2010).
C.-H. Chang, G.E. Hall and T.J. Sears, Sub-Doppler spectroscopy of mixed state levels in CH[2], J. Chem. Phys. 133, 144310 (2010).
H.-G. Yu, An optimal density functional theory method for GaN and ZnO, Chem. Phys. Lett. 512, 231 (2011).
T.V. Tscherbul, H.-G. Yu, and A. Dalgarno, Sympathetic cooling of polyatomic molecules with S-state atoms in a magnetic trap, Phys. Rev. Lett., 106, 073201 (2011).
H.-G. Yu, An ab initio molecular dynamics study of the roaming mechanism of the H[2] + HOC^+ reaction, Physica Scripta 84, 028104 (2011).
S.-Y. Du, T. Germann, J. Francisco, K. Peterson, H.-G. Yu, and J. Lyons, The kinetics study of the S + S[2] → S[3] reaction by the Chaperon mechanism, J. Chem. Phys., 134, 154508 (2011).
W.-Q. Han, H.-G. Yu, Z. Liu Convert graphene sheets to boron nitride and boron nitride-carbon sheets via a carbon-substitution reaction, App. Phys. Lett., 98, 203112 (2011).
C.-H. Chang, J. Xin, T. Latsha, E. Otruba, Z. Wang, G.E. Hall and T.J. Sears, The CH[2] b^ 1B[1] - a^ 1A[1] band origin at 1.20μm, J. Phys. Chem. A 115, 9440-9446 (2011).
C.-H. Chang, Z. Wang, G.E. Hall, T.J. Sears and J. Xin, Transient laser absorption spectroscopy of CH[2] near 780 nm, J. Molec. Spectrosc. 267, 50-57 (2011).
C. P. McRaven, M. J. Cich, G. V. Lopez, T. J. Sears, D. Hurtmans and A. W. Mantz, Frequency comb-referenced measurements of self- and nitrogen-broadening in the ν[1] + ν[3] band of acetylene, J.
Molec. Spectrosc. 266, 43-51 (2011).
L. D. Alphei, J.-U Grabow, A. N. Petrov, R. Mawhorter, B. Murphy, A. Baum, T. J. Sears, T. Zh. Yang, P. M. Rupasinghe, C. P. McRaven and N. E. Shafer-Ray, Precision spectroscopy of the ^207Pb^19F
molecule: implications for measurement of P-odd and T-odd effects, , Phys. Rev. A 83 040501 (2011).
R. Mawhorter, B. Murphy, A. Baum, T. J. Sears, T. Zh. Yang, P. M. Rupasinghe, C.P. McRaven, N. E. Shafer-Ray, L.D. Alphei and J.-U. Grabow, Precise characterization of the ground X[1] state of ^206Pb
^19F, ^207Pb^19F and ^208Pb^19F, Phys. Rev. A 84 022508(12) (2011).
M.H. Alexander, G.E. Hall and P.J. Dagdigian, The Approach to Equilibrium: Detailed Balance and the Master Equation, J. Chem. Educ. 88 1538-1543 (2011).
H.-G. Yu and G. Nyman, The infrared and UV-visible spectra of polycyclic aromatic hydrocarbons containing (5,7)-member ring defects: A theoretical study, Astrophys. J. 751, 3 (2012).
P.P Dholabhai and H.-G. Yu, Exploring the ring opening pathways in the reaction of morpholinyl radicals with oxygen molecule, , J. Phys. Chem. A, 116, 7123 (2012).
T.V. Tscherbul, T.A. Grinev, H.-G. Yu, A. Dalgarno, J. Klos, L. Ma and M.H. Alexander, Cold collisions of polyatomic molecular radicals with S-state atoms in a magnetic field: An ab initio study of
He + CH[2](X^3B[1]), , J. Chem. Phys. 137, 104302 (2012).
M.J. Cich, C.P. McRaven, G.V. Lopez, T.J. Sears, D. Hurtmans and A.W. Mantz, Temperature-dependent pressure broadened line shape measurements of self- and nitrogen-broadening in the ν[1 ]+ ν[3] band
of acetylene, Appl. Phys. B. 109, 373-384 (2012).
G.V. Lopez, C.-H. Chang, P. M. Johnson, G.E. Hall, T. J. Sears, B. Markiewicz, M. Milan and A. Teslja, What is the best DFT function for vibronic calculations? A comparison of the calculated vibronic
structure of the S[1] – S[0] transition of phenylacetylene with accurate experimental band intensities, J. Phys. Chem .A 116, 6750-6758 (2012).
V.V. Goncharov and G.E. Hall, Broadband laser enhanced dual-beam interferometry, Optics Letters 37 2406-2408 (2012).
D. Forthomme, C.P. McRaven, G.E. Hall, and T.J. Sears, Hyperfine structures in the v=1-0 vibrational band of the B^3Π[g] – A^3Σ^+[u] of N[2] J. Molec. Spectrosc. 282, 50-55 (2012).
G. Nyman and H.-G. Yu, Quantum approaches to polyatomic reaction dynamics, Int. Rev. Phys. Chem., 32, 000 (2013).
P.P Dholabhai and H.-G. Yu, Electronic structure and quantum dynamics of photoinitiated dissociation of O[2] on rutile TiO[2] nanocluster, J. Chem. Phys., 138, 194705 (2013).
D. Forthomme, C.P. McRaven, T.J. Sears, and G.E. Hall, Argon-induced pressure broadening, shifting and narrowing in the CN A^2Π – X^2Σ^+ (1-0) band, J. Phys. Chem. A 117, 11837-11846 (2013) http://
M. J. Cich, D. Forthomme, C. P. McRaven, G. V. Lopez, G. E. Hall, T. J. Sears and A. W. Mantz, Temperature-dependent, nitrogen-perturbed line shape measurements in the ν1+ν3 band of acetylene using a
diode laser referenced to a frequency comb, J. Phys. Chem. A 117, 13908-13918 (2013) http://dx.doi.org/10.1021/jp408960e
R. Grimminger, D.J. Clouthier, R. Tarroni, Z. Wang and T.J. Sears, An experimental and theoretical study of the electronic spectrum of HPS, a second row HNO analog J. Chem. Phys. 139 174306 (2013)
D. Forthomme, C. P. McRaven, T.J. Sears and G.E. Hall, Collinear two-color saturation spectroscopy in CN A–X (1–0) and (2–0) bands, J. Molec. Spectrosc. 296, 36-42 (2014) http://dx.doi.org/10.1016/
Last Modified: February 11, 2014
Please forward all questions about this site to: Mahendra Kahanda
|
{"url":"http://www.bnl.gov/chemistry/gpmd/publications.asp","timestamp":"2014-04-16T05:58:53Z","content_type":null,"content_length":"29720","record_id":"<urn:uuid:7adae4de-8ebf-48df-bcaa-cbe1a4116c08>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00228-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Sequences, especially arithmetic and geometric ones, are good for word problems.
Sequence story problems come in two main flavors. If these flavors were ice cream, they'd be vanilla and rocky road. We may have to find
• the value of a particular term a[n]. This is the standard vanilla problem.
• the value of n at which the terms do something in particular. This is the more complicated, rocky road problem.
In general, it's good strategy to write out the first few terms of the sequence in question so we can see the pattern of the terms. Maybe we can do it with ice cream cone in hand.
Sample Problem
An old story goes that a peasant won a reward from the king, and asked for rice: one grain to be placed on the first square of a chessboard, two grains on the second square, four on the third square,
and so on. Each square was to contain double the number of grains on the previous square.
1. How many grains of rice would be on the 32nd square?
2. Which square would contain exactly 512 grains of rice?
If we look at the first few terms (1, 2, 4, 8, 16, ...), we can see the pattern: each square holds twice as many grains as the one before.
The n^th square contains 2^(n – 1) grains of rice. We have
a[n] = 2^(n – 1)
where n is the square and a[n] is how many grains of rice are on that square. Now we're prepared to answer the questions.
1. The question "How many grains of rice would be on the 32nd square?" is asking us to find the value of the 32nd term. No problem:
a[32] = 2^31 = 2,147,483,648.
2. The question "Which square would contain exactly 512 grains of rice?" is asking us what value of n makes a[n] = 512. We use the formula we have for a[n] and solve for n:
512 = a[n]
= 2^(n – 1)
log[2] 512 = n – 1
9 = n – 1
n = 10
This means the 10th square would contain exactly 512 grains of rice.
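If you like to double-check this kind of formula numerically, a minimal Python sketch (squares numbered from 1, nothing here beyond the formula above) does the job:
import math

def grains(n):
    return 2 ** (n - 1)           # grains on square n

print(grains(32))                 # 2147483648, matching part 1
print(int(math.log2(512)) + 1)    # 10, the square holding exactly 512 grains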
Be Careful: One type of sequence problem asks us to find a value of a[n]. Another asks us to find a value of n. We should be sure to provide the correct information in our answer.
Sometimes when we're asked to find a value of n, we might solve an equation or inequality and get a value of n that isn't a whole number. This is where the road can get bumpy, so we could take a lick
of our rocky road cone, and then we'd try out the whole numbers to either side and see which gives a better answer.
|
{"url":"http://www.shmoop.com/sequences/word-problem.html","timestamp":"2014-04-21T14:54:32Z","content_type":null,"content_length":"27776","record_id":"<urn:uuid:8a343b84-781d-4efd-aff1-ee10c7aecad0>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00500-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The n-Category Café
Question about Tensor Categories
Posted by Urs Schreiber
Hendryk Pfeiffer asked me to forward the following question to the Café.
Dear $n$-category people,
I have a question about tensor categories on which I would appreciate comments and references. As probably several people are interested in this, I decided to ask this question here.
The short version of my question is:
Are there examples of $k$-linear additive spherical categories that are non-degenerate, but not semisimple?
In more detail:
I am interested in $k$-linear additive spherical categories where $k$ is a field. The notion of a spherical category was defined in
[1] J. W. Barrett, B. W. Westbury, Spherical categories, Adv Math 143 (1999) 357, hep-th/9310164
A spherical category is a pivotal category in which left- and right-traces agree. A pivotal category is, roughly speaking, a monoidal category in which each object X has a specified dual $X^*$ and in
which $X^{**}$ is naturally isomorphic to $X$. The details are in
[2] P. Freyd, D. N. Yetter, Coherence theorems via knot theory, J Pure Appl Alg 78 (1992) 49
In a $k$-linear spherical category, the trace defines bilinear maps
\begin{aligned} \mathrm{tr}: Hom(X,Y) \otimes Hom(Y,X) &\to k \\ f \otimes g &\mapsto \mathrm{tr}_Y (f\circ g) \end{aligned}
A $k$-linear spherical category is called non-degenerate if all these traces are non-degenerate, i.e. if for any $f:X \to Y$ the following holds:
$\mathrm{tr}_Y (f\circ g)=0 \;\mathrm{for}\; \mathrm{all}\; g:Y \to X \; \mathrm{implies} f=0 \,.$
As usual, an object $X$ is called simple if $\mathrm{Hom}(X,X)=k$, and the category is called [finite] semisimple if every object is isomorphic to a finite direct sum of simple objects [and if the
set of isomorphism classes of simple objects is finite].
The following implication is known to hold:
If a $k$-linear additive spherical category is finite semisimple, then it is non-degenerate.
In
[3] J. W. Barrett, B. W. Westbury, Invariants of piecewise-linear 3-manifolds, Trans AMS 348 (1996) 3997, hep-th/9311155
the property of being non-degenerate is part of the definition of semisimple.
one for ribbon categories in Section II.4.2 of
[4] V. G. Turaev, Quantum invariant of knots and 3-manifolds, de Gruyter, 1994.
In order to understand why the converse implication fails, I am interested in learning about examples of k-linear additive spherical categories that are
(1) non-degenerate and not finite semisimple
(2) non-degenerate and not semisimple
(3) non-degenerate, not semisimple, and which are of the form A-mod for some finite-dimensional k-algebra A
I should be grateful for any sort of comments.
Hendryk Pfeiffer
Posted at August 13, 2007 4:57 PM UTC
Re: Question about Tensor Categories
I feel like (3) cannot be possible. If you look at the endomorphism induced by an element of the Jacobson radical on any representation, there’s no way that could have non-zero trace, since it
induces the 0 map on the associated graded by any filtration with simple quotients.
More generally, this should imply that if your category is non-degenerate, then all objects with a finite composition series must have semi-simple endomorphism rings. I
think this should imply that all objects of finite length must be semi-simple (if you have an object of finite length $M$ that isn’t semi-simple, then there’s a non-split injection of a simple object
$N\to M$. Consider the endomorphism algebra $\mathrm{End}(M\oplus N)$. The inclusion of $N$ into $M$ considered as an element of this algebra should be in the Jacobson radical. Q.E.D.).
Of course, if you’re willing to leave Artinian categories, you might get some more interesting stuff.
Posted by: Ben Webster on August 13, 2007 7:45 PM | Permalink | Reply to this
Re: Question about Tensor Categories
You seem to be thinking about matrix traces.
Consider a finite dimensional commutative algebra. Then any linear functional is a trace.
Posted by: Bruce Westbury on August 14, 2007 6:19 AM | Permalink | Reply to this
Re: Question about Tensor Categories
But can that trace extend to a trace on the category of representations of said algebra?
The only assumption I made about the traces on endomorphism algebras induced by a spherical structure is the following (this could be false. I’ve sketched a proof in my head, but am fairly willing to
believe that proof is wrong):
Fact?: If we have an exact sequence $M'\to M \to M''$ and $f:M\to M$ is a morphism which preserves $M'$, then $\mathrm{tr}(f,M)=\mathrm{tr}(f,M')+\mathrm{tr}(f,M'').$
If this true, then I stand by my previous post. If it’s not, then I guess I’ll just have to accept that traces on spherical categories are weirder than I thought.
Posted by: Ben Webster on August 15, 2007 3:44 AM | Permalink | Reply to this
Re: Question about Tensor Categories
Can we have some clarification on the wording of the question?
When you say $k$-linear do you require that each Hom-set is finitely generated as a $k$-module?
Do you want to assume that you have taken the idempotent completion (if necessary)?
Do you want to say that $k$ has characteristic zero?
Posted by: Bruce Westbury on August 13, 2007 8:05 PM | Permalink | Reply to this
Re: Question about Tensor Categories
Yes, I’d like all Hom-sets to be finitely generated k-modules. Idempotent completion is not required and k may be any field.
Posted by: Hendryk Pfeiffer on August 14, 2007 7:58 PM | Permalink | Reply to this
Re: Question about Tensor Categories
A simple example of a spherical category which is non-degenerate and which does not satisfy your definition of being semi-simple is the category of even dimensional vector spaces with the one
dimensional vector space. What has gone wrong here is that there are objects missing.
It seems plausible to me that you could start with a finite semisimple example; choose a simple V, add 2V and remove V. This should then give finite counterexamples.
There also seems to me to be another way this can fail but I have not checked this.
I suspect that there are Frobenius algebras which are not semisimple.
Given a finite non-degenerate spherical category then you can construct an algebra in the usual way by taking the direct sum of the Hom spaces. Then (maybe) this algebra is a Frobenius algebra. Also
(maybe) this algebra is semisimple if and only if the category is semisimple.
Posted by: Bruce Westbury on August 15, 2007 6:51 PM | Permalink | Reply to this
Re: Question about Tensor Categories
Bruce wrote:
I suspect that there are Frobenius algebras which are not semisimple.
Yes, even commutative ones. For example, take
$A = \mathbb{C}[x]/\langle x^n \rangle$
to be the algebra of polynomials in $x$ modulo the ideal generated by $x^n$. Equip it with the linear functional $tr: A \to \mathbb{C}$ for which $tr(x^i) = 0$ for $i \ne n$ and $tr(x^n) = \lambda$ for
some nonzero $\lambda \in \mathbb{C}$. It’s easy to see that this gives a commutative Frobenius algebra.
Since examples of this sort can be manufactured ad nauseum, it’s hopeless to classify commutative Frobenius algebras over $\mathbb{C}$ unless we demand that they’re semisimple.
But I urge that you look at page 6 of Steve Sawin’s paper on direct sum decompositions of TQFTs, where Proposition 2 ‘classifies’ the indecomposable commutative Frobenius algebras over $\mathbb{C}$ —
at least modulo the classification of commutative algebras with one-dimensional socle.
(Don’t be scared of ‘socles’. In the example I gave, the ‘socle’ of $A$ is the 1d subspace spanned by $x^n$. In general, it’s the space of all guys $a$ such that $a b = 0$ for all nilpotent $b$.
Intuitively speaking, they’re the guys at the very ‘top’ of $A$, which vanish into oblivion if you try to push them any higher.)
Posted by: John Baez on September 29, 2009 10:54 PM | Permalink | Reply to this
Re: Question about Tensor Categories
I believe examples of both 1 and 2 are given in Wenzl-Tuba Section 9.1. The example comes from the Kauffman polynomial and can be thought of as the "quantum group" U_q(O_t) where q is a root of unity but
t is not an integer.
Was question 3 ever settled? Like Ben my intuition here was that there’d be no examples of 3.
Also it’s worth noting the importance of working in characteristic 0 everywhere here, otherwise semisimple does not imply nondegenerate.
Posted by: Noah Snyder on September 29, 2009 7:44 PM | Permalink | Reply to this
Re: Question about Tensor Categories
By email Hendryk Pfeiffer pointed out that I was mistaken about the characteristic zero issue. What you need characteristic zero for is to guarantee that the global dimension (the sum of the squares
of the individual dimensions) is nonzero. The nonzero-ness of global dimension is important for a lot of things (ENO’s lifting criteria, semisimplicity of dual categories, etc.). Dimensions of
individual simple objects are nonzero in any characteristic (and hence semisimple implies nondegenerate) but outside of characteristic zero the sum of their squares could still be zero.
Posted by: Noah Snyder on September 29, 2009 11:29 PM | Permalink | Reply to this
|
{"url":"http://golem.ph.utexas.edu/category/2007/08/question_about_tensor_categori.html","timestamp":"2014-04-18T18:12:48Z","content_type":null,"content_length":"36242","record_id":"<urn:uuid:812124fe-144f-46a2-94b7-e70c15d2a312>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00322-ip-10-147-4-33.ec2.internal.warc.gz"}
|
C245 Homework 3
Due 2/23/05
First Name:
Last Name:
SID #:
E-mail Address for confirmation:
1. Calculate the deflection of a rectangular cantilever under its own weight. In class, we calculated an upper bound on the deflection by putting the entire mass of the cantilever at the
end. What is the error in this estimate (should be a ratio)?
2. You are designing a suspension using two constant-cross-section beams. One beam is anchored to the substrate and has length Lx, the other is attached to the end of the first beam at a 90
degree angle and has length Ly. When you apply a force along the x axis and measure x deflection you want to get the same result as when you apply a force along the y axis and measure a y
deflection. Calculate the ratio of the lengths of the two beams. (Hint: Write down the compliance matrix for the whole suspension, and make the diagonals equal).
3. Run the sugar tutorial and check your answer to the problem above.
4. Run the intellisuite tutorial and check your answer to the problem above.
5. Assuming a DRIE aspect ratio S, a minimum feature size l, a material density r, and a maximum voltage V,
1. calculate the force per unit area of an ideal gap-closing actuator array. (Use the zero-deflection force, not the force after pull-in; you may assume some magical sub-lithographic gap
stop if you like).
2. Calculate the force output per kilogram, assuming a film thickness t. Will micro robots with gap closing electrostatic actuators be able to lift themselves without massive gearing or
3. What is the optimum value for t, assuming we want to make minimum sized gaps and the maximum aspect ratio possible in our beams?
4. Calculate the maximum work done per cycle of the actuator against a constant force load (what is the load?).
5. Calculate the electrical energy input per cycle, assuming that some smart control circuit limits the total charge on the capacitor plates to twice the charge applied initially to the
actuator (the zero-deflection charge).
6. What is the energy efficiency of this actuator?
7. Assuming that you choose the support spring for your gap closing actuator such that when the gap closer has pulled in completely that the spring force is equal to the zero-deflection
force of the electrostatic actuator, calculate the resonant frequency of your actuator in terms of only V , l, and r. (remember sqrt(K/m)?)
8. Ignoring damping/resonance, assume that we can run the actuator close to its resonant frequency and estimate the maximum power output per kilogram of an electrostatic gap closing
9. Insect flight muscle has a peak power density of 100W/kg, about as high as it gets for animals. Will micro robots based on gap closing electrostatic actuators ever fly?
(Kris Pister)
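For orientation on problem 5, the zero-deflection force of an ideal parallel-plate gap closer is F = eps0*A*V^2/(2*g^2); the short Python sketch below just evaluates that pressure for assumed numbers (the voltage and gap values are illustrative, not given in the assignment):
eps0 = 8.854e-12          # permittivity of free space, F/m
V = 50.0                  # assumed drive voltage
g = 2e-6                  # assumed gap, on the order of the minimum feature size l
pressure = eps0 * V ** 2 / (2 * g ** 2)   # N per m^2 of facing electrode area
print(pressure)                           # roughly 2.8e3 N/m^2 for these numbers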
|
{"url":"http://bsac.eecs.berkeley.edu/~pister/245/2005S/HW/HW3/hw3.html","timestamp":"2014-04-19T09:24:27Z","content_type":null,"content_length":"5267","record_id":"<urn:uuid:ff240489-99de-49fd-ade2-036075f1fa80>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00631-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Applications of a splitting trick
, 2010
"... The performance of a dynamic dictionary is measured mainly by its update time, lookup time, and space consumption. In terms of update time and lookup time there are known constructions that
guarantee constant-time operations in the worst case with high probability, and in terms of space consumption ..."
Cited by 7 (3 self)
The performance of a dynamic dictionary is measured mainly by its update time, lookup time, and space consumption. In terms of update time and lookup time there are known constructions that guarantee
constant-time operations in the worst case with high probability, and in terms of space consumption there are known constructions that use essentially optimal space. In this paper we settle two
fundamental open problems: • We construct the first dynamic dictionary that enjoys the best of both worlds: we present a two-level variant of cuckoo hashing that stores n elements using (1+ϵ)n memory
words, and guarantees constant-time operations in the worst case with high probability. Specifically, for any ϵ = Ω((log log n / log n) 1/2) and for any sequence of polynomially many operations, with
high probability over the randomness of the initialization phase, all operations are performed in constant time which is independent of ϵ. The construction is based on augmenting cuckoo hashing with
a “backyard ” that handles a large fraction of the elements, together with a de-amortized perfect hashing scheme for eliminating the dependency on ϵ.
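For orientation, here is what plain cuckoo hashing, the scheme being augmented above, looks like in a few lines of Python; the backyard and the de-amortization machinery of the paper are omitted, and the table sizes and hash functions are only illustrative:
import random

def cuckoo_insert(tables, hash_fns, key, max_kicks=32):
    # try each table's slot; if both are full, evict an occupant and re-insert it
    for _ in range(max_kicks):
        for table, h in zip(tables, hash_fns):
            i = h(key) % len(table)
            if table[i] is None:
                table[i] = key
                return True
        j = random.randrange(len(tables))
        i = hash_fns[j](key) % len(tables[j])
        tables[j][i], key = key, tables[j][i]   # swap and retry with the evicted key
    return False                                # a real implementation would rehash here

tables = [[None] * 11, [None] * 11]
hash_fns = [hash, lambda k: hash((k, "second"))]
for k in range(8):
    cuckoo_insert(tables, hash_fns, k)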
- Information Systems
"... A hash function is a mapping from a key universe U to a range of integers, i.e., h: U↦→{0, 1,...,m−1}, where m is the range’s size. A perfect hash function for some set S ⊆ U is a hash function
that is one-to-one on S, where m≥|S|. A minimal perfect hash function for some set S ⊆ U is a perfect hash ..."
Cited by 2 (1 self)
A hash function is a mapping from a key universe U to a range of integers, i.e., h: U↦→{0, 1,...,m−1}, where m is the range’s size. A perfect hash function for some set S ⊆ U is a hash function that
is one-to-one on S, where m≥|S|. A minimal perfect hash function for some set S ⊆ U is a perfect hash function with a range of minimum size, i.e., m=|S|. This paper presents a construction for
(minimal) perfect hash functions that combines theoretical analysis, practical performance, expected linear construction time and nearly optimal space consumption for the data structure. For n keys
and m=n the space consumption ranges from 2.62n to 3.3n bits, and for m=1.23n it ranges from 1.95n to 2.7n bits. This is within a small constant factor from the theoretical lower bounds of 1.44n bits
for m=n and 0.89n bits for m=1.23n. We combine several theoretical results into a practical solution that has turned perfect hashing into a very compact data structure to solve the membership problem
when the key set S is static and known in advance. By taking into account the memory hierarchy we can construct (minimal) perfect hash functions for over a billion keys in 46 minutes using a
commodity PC. An open source implementation of the algorithms is available
"... The dynamic approximate membership problem asks to represent a set S of size n, whose elements are provided in an on-line fashion, supporting membership queries without false negatives and with
a false positive rate at most ϵ. That is, the membership algorithm must be correct on each x ∈ S, and may ..."
The dynamic approximate membership problem asks to represent a set S of size n, whose elements are provided in an on-line fashion, supporting membership queries without false negatives and with a
false positive rate at most ϵ. That is, the membership algorithm must be correct on each x ∈ S, and may err with probability at most ϵ on each x / ∈ S. We study a well-motivated, yet insufficiently
explored, variant of this problem where the size n of the set is not known in advance. Existing optimal approximate membership data structures require that the size is known in advance, but in many
practical scenarios this is not a realistic assumption. Moreover, even if the eventual size n of the set is known in advance, it is desirable to have the smallest possible space usage also when the
current number of inserted elements is smaller than n. Our contribution consists of the following results: • We show a super-linear gap between the space complexity when the size is known in advance
and the space complexity when the size is not known in advance. When the size is known in advance, it is well-known that Θ(n log(1/ϵ)) bits of space are necessary and sufficient (Bloom ’70, Carter et
al. ’78). However, when the size is not known in advance, we prove
"... Abstract — The dynamic approximate membership problem asks to represent a set S of size n, whose elements are provided in an on-line fashion, supporting membership queries without false
negatives and with a false positive rate at most ɛ. That is, the membership algorithm must be correct on each x ∈ ..."
Abstract — The dynamic approximate membership problem asks to represent a set S of size n, whose elements are provided in an on-line fashion, supporting membership queries without false negatives and
with a false positive rate at most ɛ. That is, the membership algorithm must be correct on each x ∈ S, and may err with probability at most ɛ on each x / ∈ S. We study a well-motivated, yet
insufficiently explored, variant of this problem where the size n of the set is not known in advance. Existing optimal approximate membership data structures require that the size is known in
advance, but in many practical scenarios this is not a realistic assumption. Moreover, even if the eventual size n of the set is known in advance, it is desirable to have the smallest possible space
usage also when the current number of inserted elements is smaller than n. Our contribution consists of the following results: • We show a super-linear gap between the space complexity when the size
is known in advance and the space complexity when the size is not known in advance. When the size is known in advance, it is well-known that Θ(n log(1/ɛ)) bits of space are necessary and sufficient
(Bloom ’70, Carter et al. ’78). However, when the size is not known in advance, we prove that at least (1 − o(1))n log(1/ɛ) + Ω(n log log n) bits of space must be used. In particular, the average
number of bits per element must depend on the size of the set. • We show that our space lower bound is tight, and can even be matched by a highly efficient data structure. We present a data structure
that uses (1+o(1))n log(1/ɛ)+O(n log log n) bits of space for approximating any set of any size n, without having to know n in advance. Our data structure supports membership queries in constant time
in the worst case with high probability, and supports insertions in expected amortized constant time. Moreover, it can be “de-amortized” to support also insertions in constant time in the worst case
with high probability by only increasing its space usage to O(n log(1/ɛ) + n log log n) bits. 1.
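For intuition about the n log(1/ε) space bound discussed above, a minimal fixed-capacity Bloom filter in Python is sketched below; note that it needs n up front, which is exactly the assumption these papers remove (the sizing formulas are the standard Bloom-filter ones, not taken from the papers):
import math, hashlib

class Bloom:
    def __init__(self, n, eps):
        self.m = max(1, round(n * math.log(1 / eps) / (math.log(2) ** 2)))  # bits
        self.k = max(1, round(math.log(1 / eps) / math.log(2)))             # hash functions
        self.bits = bytearray((self.m + 7) // 8)

    def _positions(self, key):
        for i in range(self.k):
            h = hashlib.blake2b(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, key):
        for p in self._positions(key):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, key):
        return all((self.bits[p // 8] >> (p % 8)) & 1 for p in self._positions(key))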
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=9999449","timestamp":"2014-04-25T02:07:30Z","content_type":null,"content_length":"24547","record_id":"<urn:uuid:6204d1e7-5cd6-4115-abf2-4e0a1c890931>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00546-ip-10-147-4-33.ec2.internal.warc.gz"}
|
If the parabola y + 5 = a(x + 2)^2 has a y-intercept 4, find a. A. .82 B. -5 C. 5 D. 2.25
@satellite73 Can you help?
So whats the answer?
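(Worked out: the y-intercept is the point (0, 4), so 4 + 5 = a(0 + 2)^2 = 4a, which gives a = 9/4 = 2.25, choice D.)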
|
{"url":"http://openstudy.com/updates/50c15c98e4b016b55a9e2aaf","timestamp":"2014-04-19T07:20:50Z","content_type":null,"content_length":"44822","record_id":"<urn:uuid:7924ec82-878a-40dc-82d6-e4cedf2d8f9e>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00378-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Alexis H.
I have a passion for Math. I have been teaching and tutoring part time for the last 25 years. While in college, I was on the Putnam Intercollegiate Math Competition team for 3 consecutive years, and
won several math competitions. I had a 4.0 GPA in math as an undergraduate (graduating with more than twice the number of required credit hours in math). I also obtained a perfect score in Computer
Science (20 out of 20) while a student at the University Pierre and Marie Curie, Paris, France. I enjoy explaining math to others and sharing the excitement that I have about the subject.
I received perfect scores (800 out of 800) on both SAT Math Level 1 and Math Level 2.
Besides regular math courses, I have taught problem-solving classes to prepare for Mathcounts, the Math League Contest, the Math Kangaroo Competition, the American Mathematics Competition (AMC 8, 10,
and 12), the Harvard-MIT Mathematics Tournament, and the American Invitational Mathematics Examination (AIME) -- these classes prepare kids to go on the USA and International Mathematical Olympiads.
I also provide online lessons via the WyzAnt Online Platform.
Please note that lesson time is for a minimum of 90 minutes (30 minutes minimum for online lessons.)
Alexis's subjects
|
{"url":"http://www.wyzant.com/Tutors/VA/Ashburn/7804265/?g=3JY","timestamp":"2014-04-19T09:41:53Z","content_type":null,"content_length":"99715","record_id":"<urn:uuid:db40a69f-73b2-4f0a-8189-38477aad84b2>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00593-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Guessing Simulations
February 26, 2009
Efficiently simulating nondeterministic Turing machines
Ron Book was a language theorist who unfortunately died in 1997.
Ron worked in a part of theory that is considered unimportant today (I think incorrectly). In the 1970s language theory was "the king of the hill": the core of theory then was all about the properties
of languages. Languages were classified according to their complexity: there were regular languages, context free languages, context sensitive languages, and others. Language theorists studied
closure properties–is the union of two languages of type X still of type X? They also studied decidability questions–is it decidable whether or not a language of type X has some property? And so on.
The theory was (is?) pretty, some of the questions were very hard, and parts of the theory had important practical applications. For example, Don Knuth, who is famous for many things, did his early
work on the structure of “deterministic context free languages”. All of modern parsers are based on this and related work.
While most consider this area less important today, it helped create many of the key ideas that we still find central. For example, the LBA question, which today we would phrase as “are
nondeterministic space machines closed under complement?” was created by the language theorists. See my post on we “guessed wrong” for a discussion of the wonderful result that showed that the answer
is yes.
Book and I overlapped at Yale in the mid 1970′s. He was the senior theorist in the Computer Science Department at Yale at the time. Ron thought for a while that he was close to a proof that P was not
equal to NP. Very close. His approach was based on trying to show that certain language theoretic properties of P and NP were different. One day I noticed that there was a fundamental obstacle to his
approach–the details today are not really important. I do remember that he was “really upset” with me for showing him that his approach could not work. It was a bit unreal–as if it was my fault that
the mathematics worked out the way it did. I only wish I could control such things. Oh well, on to a beautiful result of his.
Simulation of Turing Machines
Book and Greibach have an extremely beautiful result that I want to discuss today. Many of you may not know this result. What is neat about his result is that it is a critical part to a terrific
result of Albert Meyer and Charlie Rackoff that I will discuss in a later post. Also while their result is not hard to prove, it shows a distinction between nondeterministic time and deterministic
time. This difference could play a role, in the future, on the P=NP question. So I thought I might explain the result.
One of the basic questions then, and still today, is how efficiently can one model of computation simulate another model of computation. Besides being a natural question, there are many applications
of efficient simulations. One of the main application is in proving hierarchy theorems–such theorems show that more of a resource yields more power.
They looked at the following basic question: how efficiently could a fixed nondeterministic Turing Machine simulate any other nondeterministic Turing Machine? This may sound trivial: just simulate
the machine, after all they have the same abilities. But there is a catch. Suppose that our fixed machine ${M}$ has two working tapes, while a machine ${S}$ has a ${100}$ working tapes. If the
machine ${S}$ runs in time ${T(n)}$, then how can the machine ${M}$ simulate ${S}$ in about the same time? The difficulty is simple: ${M}$ must use its two tapes to somehow simulate efficiently the $
{100}$ tapes of ${S}$. This can be done but the obvious method would take time ${O(T(n)^{2})}$. A much harder simulation due to Mike Fischer and Nick Pippenger (I will talk about it in another post)
shows that such a simulation can be done in time ${O(T(n)\log T(n) )}$ for any type of Turing machines.
Their result was that for nondeterministic Turing machines the simulation could be done in ${O(T(n))}$. This is clearly optimal: essentially he showed that in this case there was no additional cost
in having only two tapes. So if you were buying a nondeterministic Turing Machine at Best Buy you should not be talked into buying the ${100}$ tape model–two tapes are plenty. Save your money.
The Method
So how did Book and Greibach show that two tapes could simulate ${100}$ tapes? If you have not seen the idea, take a second and ponder how you might do it. The method is, in my opinion, quite clever.
Suppose that we want to simulate the Turing Machine ${S}$ on a given input. The idea is to first guess a complete trace of what the machine ${S}$ does at each step and write this into one tape. This
trace will include the following information: the state of the finite control of ${S}$, the value of the currently scanned squares of each work tape and input tape position. This trace information
takes at most ${O(T(n))}$ space and therefore time to write down.
We next have to check that the trace is a correct description of an accepting computation of ${S}$. It is easy to check that the finite state control has been updated correctly and that the last
state is an accepting one. The hard part is to check whether or not each work tape was used correctly. The problem is this: suppose that at time ${i}$ we wrote that a certain working tape had ${0}$
on it. Then, at the next time that square is visited it must have the same symbol there.
We check this by using our second tape. For each of the working tapes we go over the trace and simulate it on the second tape. If the trace is correct, then there is no problem. If the trace is
wrong, then this will detect an inconsistency. We therefore do this for each working tape. This is the way that Book and Greibach show that we can simulate ${S}$ in time ${O(T(n))}$.
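To make the consistency check concrete, here is a minimal Python sketch of the one-tape replay; the trace format (one triple per step: the symbol the trace claims is under the head, the symbol written, and the head move) is my own encoding for illustration, not notation from the paper:
def tape_consistent(trace):
    tape, pos = {}, 0
    for claimed, written, move in trace:   # move is -1, 0 or +1; 'B' is the blank symbol
        if tape.get(pos, 'B') != claimed:
            return False                   # the guessed trace lied about this square
        tape[pos] = written
        pos += move
    return True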
Open Problems
There are at least two obvious open problems. First, can this type of result be proved for deterministic Turing machines? The method makes essential use of the ability of a nondeterministic machine
to guess. Still there may be a cool way to improve the simulation of deterministic machines. Second, could one exploit this fundamental property of nondeterminism to get a hold of the P=NP problem? I
do not see how it could be exploited, but it is an intriguing thought.
|
{"url":"http://rjlipton.wordpress.com/2009/02/26/guessing-simulations/","timestamp":"2014-04-18T18:19:30Z","content_type":null,"content_length":"79896","record_id":"<urn:uuid:56443bee-dbc9-4b8f-8903-9913bdf905ca>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00071-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Lower bounds on off-diagonal Ramsey numbers
January 30, 2011 by Qiaochu Yuan
The goal of this post is to prove the following elementary lower bound for off-diagonal Ramsey numbers $R(s, t)$ (where $s \ge 3$ is fixed and we are interested in the asymptotic behavior as $t$ gets large):
$\displaystyle R(s, t) = \Omega \left( \left( \frac{t}{\log t} \right)^{ \frac{s}{2} } \right).$
The proof does not make use of the Lovász local lemma, which improves the bound by a factor of $\left( \frac{t}{\log t} \right)^{ \frac{1}{2} }$; nevertheless, I think it’s a nice exercise in
asymptotics and the probabilistic method. (Also, it’s never explicitly given in Alon and Spencer.)
The alteration method
The basic probabilistic result that gives the above bound is actually quite easy to prove, and is an example of what Alon and Spencer call the alteration method: construct a random structure, then
alter it to get a better one. Recall that the Ramsey number $R(s, t)$ is the smallest positive integer $n$ such that every coloring of the edges of the complete graph $K_n$ by two colors (say blue
and yellow) contains either a blue $K_s$ or a yellow $K_t$.
Theorem: For any positive integer $n$ and any real number $0 < p < 1$, we have
$\displaystyle R(s, t) > n - {n \choose s} p^{{s \choose 2}} - {n \choose t} (1 - p)^{ {t \choose 2}}$.
Proof. Consider a random coloring of the edges of $K_n$ in which an edge is colored blue with probability $p$ and yellow with probability $1 - p$, and delete a vertex from every blue $K_s$ or yellow
$K_t$. How many vertices will be deleted, on average? Since the expected number of blue $K_s$'s is ${n \choose s} p^{ {s \choose 2} }$, and the expected number of yellow $K_t$'s is ${n \choose t} (1
- p)^{ {t \choose 2} }$, it follows that the expected number of vertices to be deleted is at most their sum (it should be less since deleting one vertex may mean we do not have to delete others).
Here we are using the fundamental fact that a random variable is at most (equivalently, at least) its expected value with positive probability, which is trivial when the sample space is finite.
This means that, with positive probability, we delete at most the expected number. The result is a coloring of the complete graph on $n - {n \choose s} p^{ {s \choose 2}} - {n \choose t} (1 - p)^{ {t
\choose 2} }$ vertices with no blue $K_s$ or yellow $K_t$, hence $R(s, t)$ must be greater than this number.
Now our goal is to choose close-to-optimal values of $n, p$. When $s = t$ it turns out that this method only gives a small improvement over Erdős’s classic bound $R(k, k) \ge 2^{k/2}$ (“the
inequality that launched a thousand papers”), but when $s$ is fixed and we are interested in the asymptotics as a function of $t$ then we can do quite a bit better than the obvious generalization of
Erdős’s bound.
We will choose $p$ first. For large $t$ the important term to control is ${n \choose t} (1 - p)^{ {t \choose 2} }$; if this term is too large then the bound above is useless, so we want it to be
small. This entails making $1 - p$ small, hence making $p$ large. However, ${n \choose s} p^{ {s \choose 2} }$ will overwhelm the only positive term $n$ if we choose $p$ too large. Since we are
willing to lose constant factors, let’s aim to choose $p$ so that
$\displaystyle {n \choose s} p^{ {s \choose 2}} \approx \frac{n}{2}$
since the positive contribution from $n$ will still occur up to a constant factor. Using the inequality ${n \choose s} \le \frac{n^s}{2}$ (which is good enough, since $s$ is fixed) we see that we
should choose $p$ so that $p^{ {s \choose 2} } \approx n^{1-s}$, so let’s choose $p = n^{- \frac{2}{s} }$. This gives
$\displaystyle R(s, t) \ge \frac{n}{2} - {n \choose t} (1 - p)^{ {t \choose 2} }$.
Now we need to choose $n$. To do this properly we’ll need to understand how the second term grows. This requires two estimates. First, the elementary inequality $t! \ge \left( \frac{t}{e} \right)^t$
(which one can prove, for example, by taking the logarithm of both sides and bounding the corresponding Riemann sum by an integral; see also Terence Tao’s notes on Stirling’s formula) gives
$\displaystyle {n \choose t} \le \left( \frac{en}{t} \right)^t$.
Second, the elementary inequality $1 - p \le e^{-p}$ (by convexity, for example) gives
$\displaystyle (1 - p)^{ {t \choose 2} } \le \exp \left( -n ^{- \frac{2}{s} } {t \choose 2} \right)$.
Let me pause for a moment. I was recently not assigned this problem as homework in a graph theory course. Instead, we were assigned to prove a weaker bound, and only for $R(3, t)$. When I described
the general argument to my supervision partner and supervisor, they commented on the “weird” (I forget the exact word) estimates it required, and didn’t seem particularly interested in the details.
These estimates are not weird! In order to get any kind of nontrivial lower bound it is necessary that $n \to \infty$, and in order to prevent the second term from overwhelming the first it is
necessary that $p \to 0$. In this regime, to estimate ${n \choose t}$ when both $n$ and $t$ go to infinity requires more detail than the trivial bound $n^t$, and the detail provided by the above
estimate (which ignores the small corrective factors coming from the rest of Stirling’s formula) is exactly suited to this problem. And in order to estimate $(1 - p)^{ {t \choose 2} }$ it is
perfectly natural to use the exponential inequality, since the exponential is much easier to analyze (indeed, bounding expressions like these is in some sense the whole point of the function $e^x$).
These are not contrived expressions coming from nowhere. The reader who is not comfortable with these estimates should read something like Steele’s The Cauchy-Schwarz Master Class and/or Graham,
Knuth, and Patashnik’s Concrete Mathematics.
Back to the mathematics. By our estimates, the logarithm of ${n \choose t} (1 - p)^{ {t \choose 2} }$ is bounded by
$\displaystyle t \left( \log n - \log t + 1 - n^{- \frac{2}{s} } \frac{t-1}{2} \right)$.
We want to choose $n$ as large as possible subject to the constraint that this logarithm is bounded by $\log n - \log 4$ or so. To get a feel for how the above expression behaves, let’s set $n = t^k$
for some $k$. This gives
$\displaystyle t \left( (k-1) \log t + 1 - t^{ - \frac{2k}{s} } \frac{t-1}{2} \right)$.
The first term is $O(t \log t)$ while the second term is $O \left( t^{2 - \frac{2k}{s}} \right)$, so to get these terms to match as close as possible we’d like $k$ to be slightly smaller than $\frac
{s}{2}$. To get the logarithmic factor to match, we’ll set
$\displaystyle n = C \left( \frac{t}{\log t} \right)^{ \frac{s}{2} }$
for some constant $C$. This gives a bound of
$\displaystyle t \left( \log C + \left( \frac{s}{2} - 1 \right) \log t - \frac{s}{2} \log \log t + 1 - C^{-\frac{2}{s}} \frac{t-1}{2t} \log t \right)$.

The dominant term here is $\left( \frac{s}{2} - 1 - \frac{1}{2} C^{ - \frac{2}{s} } \right) t \log t$. We’d like the whole expression to stay below $\log n - \log 4$, which requires the coefficient of this term not to be positive. But as long as the coefficient is strictly negative, modifying $C$ beyond that will only lead to a constant change in $n$ (which is the main contribution to our lower bound), so we’ll cheat a little: we’ll make the coefficient comfortably negative so that this term overwhelms the other terms, which will ensure that its exponential tends to zero, giving an estimate $R(s, t) = \left( \frac{1}{2} - o(1) \right) n$. For this it certainly suffices that $s < C^{ - \frac{2}{s} }$, hence $C < s^{ - \frac{s}{2} }$, so we will take
$\displaystyle n = \frac{1}{2} \left( \frac{t}{s \log t} \right)^{ \frac{s}{2} }$.
This gives the final estimate
$\displaystyle R(s, t) = \Omega \left( \left( \frac{t}{\log t} \right)^{ \frac{s}{2} } \right)$
as desired, where the implied constant is something like $\frac{1}{4} s^{ - \frac{s}{2} }$ (although I have made no particular effort to optimize it).
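As a quick numerical sanity check, here is a sketch in Python that evaluates the bound from the theorem with the choices of $p$ and $n$ made above (the specific values $s = 3$, $t = 1000$ are arbitrary):

    import math

    def alteration_bound(n, s, t, p):
        # n - C(n,s) p^C(s,2) - C(n,t) (1-p)^C(t,2), computed through logarithms
        # so that the huge binomial coefficients do not overflow a float
        def term(k, q):
            if n < k:
                return 0.0
            return math.exp(math.log(math.comb(n, k)) + math.comb(k, 2) * math.log(q))
        return n - term(s, p) - term(t, 1 - p)

    s, t = 3, 1000
    n = int(0.5 * (t / (s * math.log(t))) ** (s / 2))   # n = (1/2) (t / (s log t))^(s/2)
    p = n ** (-2 / s)
    print(n, alteration_bound(n, s, t, p))              # prints a positive lower bound for R(3, 1000)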
As for the question of what is currently known, see this MO question. Up to logarithmic factors, it seems the best known lower bound grows like $\tilde{\Omega} \left( t^{ \frac{s+1}{2} } \right)$
(which is what the local lemma gives), while the best known upper bound grows like $\tilde{O} \left( t^{s-1} \right)$, and the latter is conjectured to be tight. For $s = 3$ a result of Kim gives the
exact asymptotic
$\displaystyle R(3, t) = \Theta \left( \frac{t^2}{\log t} \right)$.
I have a question.
I think Kim’s result is $t^2 / \log t$ not $(t/\log t)^2$.
Am I right?
• Oops. Yes, you’re right.
|
{"url":"http://qchu.wordpress.com/2011/01/30/lower-bounds-on-off-diagonal-ramsey-numbers/","timestamp":"2014-04-17T15:26:07Z","content_type":null,"content_length":"108297","record_id":"<urn:uuid:07598d80-4d6b-4d99-9196-d470dcf8922e>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00053-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Determine how many slices you will cut your pizza into. You need to pick two different numbers of slices. Once you have determined the number of slices, calculate the following for both ways of cutting your pizza:
• the interior angle and arc length of the slices
• the area of each slice
Determine the delivery radius for your shop. Draw a point on a coordinate plane where your shop will be located. Create two different radii lengths from your shop, and construct the circles that
represent each delivery area. How much area will each delivery radius cover? Write the equation for each circle
Seems like one of those "do it yourself" problems to me.
I really don't understand how to do this tho :(
@inkyvoyd, help her
Let's start with a couple of things:
1. The formula for the area of a circle and sector
2. The formula for the arc length of a circle
3. The equation of a circle centered at (h,k)
@Hero, your turn. Alternatively, @lgbasallote
I'm too busy. I don't have any time.
@Hero is adequate enough though
Sorry, maybe on the weekends
Stop being lazy guys (I'm not lazy :P) @lgbasallote , helpy out.
Just please somebody help :( I've been stuck on this problem for the longest time.
I got like 9 minutes, and I have about 3 days worth of online coursework to catch up on for the moment. Let me retag that igbiw. @lgbasallote
@ParthKohli . My last chance. Help out please
First, pick any number to determine the size of the pizza (we know that a circle is defined by one thing: its radius); then pick the number of slices.
Let me make the first example: I have a 14 cm radius pizza, and I want 3 slices. First off, we know that a circle has 360 degrees inside. 3 equal slices, so 360/3 = 120 degrees for each slice. Then, to find the arc length of each slice, we can use the little formula we are going to invent ourselves: Circumference \(\div\) Number of slices = Arc length of each slice. The formula to find the circumference of a circle is \(2 \pi r\). Let's plug our radius in: \(2 \pi \times 14 = 28 \pi\). Okay, cool, we have our circumference and 3 slices. Apply our little formula, Circumference \(\div\) Number of slices = Arc length of each slice: \(28 \pi \div 3 \approx 29.32\) cm. So the arc length of each slice would be about 29.32 cm. Next, the area: simply use the formula \(\pi r^2\) for the whole pizza and divide by the number of slices, \(\pi \times 14^2 \div 3 \approx 205.25\) square cm per slice.
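A small sketch of the same calculations in Python (the 14 cm radius and 3 slices are just the numbers from the example above; any radius and slice count work the same way):

    import math

    def slice_stats(radius_cm, num_slices):
        # central angle (degrees), arc length and area of one equal slice
        angle = 360.0 / num_slices
        arc_length = 2 * math.pi * radius_cm / num_slices
        area = math.pi * radius_cm ** 2 / num_slices
        return angle, arc_length, area

    print(slice_stats(14, 3))   # -> (120.0, 29.32..., 205.25...)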
And for the shop problem, as I haven't learned ellipses yet, I can't help you out with this one. @Callisto, can you help with that last part?
Ok, thank you so much ! :)
|
{"url":"http://openstudy.com/updates/4fa6f4f9e4b029e9dc36620c","timestamp":"2014-04-18T16:03:37Z","content_type":null,"content_length":"87880","record_id":"<urn:uuid:90abe0a2-70a0-4e02-a219-f6276c76d825>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00300-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The NRICH Project aims to enrich the mathematical experiences of all learners. To support this aim, members of the NRICH team work in a wide range of capacities, including providing professional
development for teachers wishing to embed rich mathematical tasks into everyday classroom practice. More information on many of our other activities can be found here.
|
{"url":"http://nrich.maths.org/public/leg.php?code=-100&cl=2&cldcmpid=8289","timestamp":"2014-04-16T07:51:35Z","content_type":null,"content_length":"60021","record_id":"<urn:uuid:653b6c9d-83c7-4966-867e-05f18efb933e>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00262-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Coordinate System Change Matrix
December 17th 2011, 11:26 AM #1
Coordinate System Change Matrix
Let's say we have the default xyz coordinate system and want to use the bisectors of the angles between the default coordinate axes as our new coordinate system. What will the transformation matrix (A) be so that w = A*v, where w and v are the coordinates of a vector in our old and new systems respectively?
Sorry for syntax errors, but I am not learning math in English, so I am not comfortable with some expressions.
Re: Coordinate System Change Matrix
Use the following general result: if $B=\{e_1,e_2,e_3\}$ and $B'=\{e'_1,e'_2,e'_3\}$ are bases of $\mathbb{R}^3$ and $e'_i=\sum_{j=1}^3{a_{ij}e_j}$ for $i=1,2,3$, then $[v]_B=\begin{bmatrix}{a_{11}}&{a_{21}}&{a_{31}}\\{a_{12}}&{a_{22}}&{a_{32}}\\{a_{13}}&{a_{23}}&{a_{33}}\end{bmatrix}[v]_{B'}$
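A small numerical sketch of this in Python with numpy. The particular bisector basis chosen below, the normalised bisectors of the angles between the positive coordinate axes, is an assumption; there are several ways to pick the three bisectors:

    import numpy as np

    # New basis vectors written in the old (standard xyz) basis:
    b1 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)   # bisects the angle between x and y
    b2 = np.array([0.0, 1.0, 1.0]) / np.sqrt(2)   # bisects the angle between y and z
    b3 = np.array([1.0, 0.0, 1.0]) / np.sqrt(2)   # bisects the angle between x and z

    # Columns of A are the new basis vectors in old coordinates,
    # so old coordinates w and new coordinates v are related by w = A v.
    A = np.column_stack([b1, b2, b3])

    v = np.array([1.0, 2.0, 3.0])      # coordinates of some vector in the new basis
    w = A @ v                          # the same vector in xyz coordinates
    v_back = np.linalg.solve(A, w)     # recovers v (A is invertible here)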
|
{"url":"http://mathhelpforum.com/advanced-algebra/194407-coordination-system-change-matrix.html","timestamp":"2014-04-20T08:26:34Z","content_type":null,"content_length":"34241","record_id":"<urn:uuid:818972d1-936d-4574-9705-2456321a26b3>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00048-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Linear Interpolation FP1 Formula
Re: Linear Interpolation FP1 Formula
Their solution identifies a pattern and then uses the standard sum for n^2 to get to that formula. I tried a GF approach and it is too messy; I don't think it is a good approach for this part of the problem.
I can generate a gf but I can not get the zeroth coefficient from it.
Re: Linear Interpolation FP1 Formula
I have the same problem too, it might be quicker to use the 'normal' method (the problem is aimed at students who have never encountered GFs).
Re: Linear Interpolation FP1 Formula
Okay, I will have a look at that paper. Thanks for the link.
Re: Linear Interpolation FP1 Formula
I found out that she leaves only a couple of minutes before I do. She has early lessons on Tuesday and Wednesday, so I can meet her on the way to school on those days.
She doesn't seem to be that interested in maths though... I just need to show her something nice, but I don't know what she is interested in. She just said she likes 'pure maths', but she only says
that because she has never really done any applied maths. To be truthful, she doesn't even tend to do well in maths... but I'll see if I can find out why she likes it.
Re: Linear Interpolation FP1 Formula
Probably some reason that will be difficult to understand.
Re: Linear Interpolation FP1 Formula
Her Dad has a PhD in physics, maybe he was pushy...
Re: Linear Interpolation FP1 Formula
Yeccchh! Ever heard of the teakettle principle?
Re: Linear Interpolation FP1 Formula
Isn't that when you simplify a problem to one you how to solve? I remember reading something about a joke, don't know what it is though...
Re: Linear Interpolation FP1 Formula
A physicist and a mathematician are given an empty teakettle, a fire and a water supply and are asked to boil water. They both fill the teakettle, place it on the fire and boil the water. Now they
are both given a teakettle already filled with water. The physicist after much thought shouts "Eureka" and places the teakettle on top of the fire and boils the water. The mathematician immediately
empties his teakettle, the physicist asks why did you empty it? The mathematician says, "because I already know how to solve the empty teakettle problem."
Re: Linear Interpolation FP1 Formula
Haha, that is good. I'll remember that one.
Re: Linear Interpolation FP1 Formula
Vilenkin, the Russian combinatoricist is the source.
Re: Linear Interpolation FP1 Formula
Never heard of him...
Re: Linear Interpolation FP1 Formula
Neither did I until I came across his book. The Soviet authors did not get a lot of exposure over here.
Re: Linear Interpolation FP1 Formula
It's the same here. We are never taught the history of any maths, or where things came from.
Re: Linear Interpolation FP1 Formula
That is what I have found. It is a shame, the history is fascinating.
Re: Linear Interpolation FP1 Formula
It is a shame indeed. I was asking a maths teacher about Leibniz and do you know what he said?
"Oh, Leibniz! Those cost about £1.99 just across the street."
Turns out he wasn't joking, he thought Leibniz was a chocolate biscuit, not a man.
Re: Linear Interpolation FP1 Formula
I had a similar experience. I tried to compliment a supposedly pretty good math type with a phrase from Newton and one of the Bernoullis. He had never heard of it.
Remember we were talking about cf's and Pell equations? There is an example: Pell had nothing to do with that equation. It was Fermat's! A historical error names them Pell equations.
Re: Linear Interpolation FP1 Formula
Oh I see -- I've seen them called 'Pell-Fermat equations'... I do not really know what else Pell is famous for doing though. Pell equations are all I can associate with him.
Argand diagrams are a historical error too, aren't they? I don't think someone called Argand discovered them.
Re: Linear Interpolation FP1 Formula
I do not know about that one, I will have to look it up.
The Binet formula for the Fibonacci numbers, that was discovered by De Moivre, not Binet.
Re: Linear Interpolation FP1 Formula
I did not know that, and we never do question these things when we learn about them. I just accepted it...
Venn diagrams were actually first used by Euler I think...
Re: Linear Interpolation FP1 Formula
I heard that. It is strange that in some cases they did not even get the name of the discoverer right. Makes you wonder what other mistakes there are.
Re: Linear Interpolation FP1 Formula
Hopefully the errors are only historical ones...
Re: Linear Interpolation FP1 Formula
That is what we hope. I do not agree with cutting it out of the educational process. They should teach a little bit of the background of these men.
Re: Linear Interpolation FP1 Formula
Even fellow mathematicians care little for history or even for the beauty of maths itself. Not that those are necessary pre-requisites to study it, though.
|
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=256943","timestamp":"2014-04-20T18:36:38Z","content_type":null,"content_length":"35889","record_id":"<urn:uuid:5d321a18-06ee-467d-a645-b77400629272>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00650-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Python-ideas] Introducing where clauses
Nick Coghlan ncoghlan at gmail.com
Mon Jun 22 12:40:05 CEST 2009
Andrey Popp wrote:
> the value f(x) is calculated three times -- it is a problem if the function f
> takes much time to compute its value or has side effects. If we had a
> where clause, we could rewrite the expression above as:
> [(y, y) for x in some_iterable if y < 2 where y = f(x)]
> I think it is really useful. We can also expand this idea to lambdas
> or maybe to introducing arbitrary scoping blocks of code.
Or you could just bite the bullet and write a custom generator:
def g(iterable):
for x in iterable:
y = f(x)
if y < 2:
yield (y, y)
Give it a meaningful name and docstring and it can even be self-documenting.
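(For what it's worth, one possible workaround with no new syntax at all, assuming the f and some_iterable from the quoted proposal, is to nest a generator expression so that f runs once per element:

    [(y, y) for y in (f(x) for x in some_iterable) if y < 2]

though readability suffers compared to the named generator above.)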
Lambdas, comprehensions and expressions in general all have limits -
usually deliberate ones. When one runs up against those limits it is a
hint that it is time to switch to using multiple statements (typically
factored out into a function that can be substituted for the original
inline expression).
But then, I'll freely confess to not really understanding the apparently
common obsession with wanting to be able to do everything as an expression.
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia
More information about the Python-ideas mailing list
|
{"url":"https://mail.python.org/pipermail/python-ideas/2009-June/005028.html","timestamp":"2014-04-16T20:48:50Z","content_type":null,"content_length":"4086","record_id":"<urn:uuid:0e878e2d-45a2-423e-a85c-3f3ddad06baa>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00559-ip-10-147-4-33.ec2.internal.warc.gz"}
|
About the Chessmetrics Rating System
There are already two widely-accepted rating systems in place: the official FIDE ratings and the Professional ratings. The FIDE ratings have been calculated yearly since the early 1970's, twice a
year starting in 1980, and now four times a year starting in late 2000. Before 1970, the only widely-known historical ratings are those calculated by Arpad Elo, the inventor of the Elo ratings
system used by FIDE. These historical ratings, which appeared in Elo's 1978 book The Rating of Chessplayers Past & Present, were calculated every five years, using only games among top-80 players
within each five-year-span, and Elo only reported on the best of these ratings ever achieved by each player. With the exception of a tantalizing graph which displayed the progression of ratings
(every five years) for 36 different players, there was no way to see more than one single rating for each player's entire career.
Elo's historical rating calculations were clearly an incredible accomplishment, especially considering the lack of computational power available to him more than two decades ago. However, with
modern game databases, better computers, and more than two decades of rated games to indicate how well the FIDE ratings work, it is just as clear that the time is long overdue for the next
generation of historical ratings. In the past year, it has gradually become clear to me that I should be the one to calculate those ratings. Once I reached the decision to move forward with this
project, there were three big questions to answer:
(1) How far back in time should I go? That one is pretty easy to answer. The first international chess tournament was held in London 1851, and before that time most recorded games are either
individual matches or casual games. Starting in 1851, there are increasingly more recorded games available, but there were still enough games in the pre-1851 era to allow for an initial pool of
rated players to be built, based on games played between the start of 1830 and then end of 1850. Once that initial pool was built, it became possible to start calculating yearly ratings, with the
first rating list appearing as of December 31st, 1851.
(2) Where should the raw data come from? The first time I tried to do historical ratings, in early 2001, I used the only large game collection I owned, which was the Master Chess 2000 CD. To
supplement it with games right up to the present, I used games downloaded from the TWIC (The Week in Chess) site. The result was the ratings which have appeared on the Chessmetrics site for the
past several months. Unfortunately, there was no standardization of the spelling of player names on the MC2000 CD, so I had to do a tremendous amount of manual work in standardizing them, so that
Ratmir Kholmov (for instance) would show up as one person rather than five different people named "Holmov, R", "Kholmov, R", "Kholmov", "Holmov, Ratmir", and "Kholmov, Ratmir". I tried to do this
accurately, but I'm sure there must have been errors. In addition, there does seem to be extensive duplication or omission of games. The results were nevertheless quite useful, but the feedback I
got from readers led me to conclude that the ChessBase game collection CD's like MegaBase would work better, since many more games were included and there was much better (though not perfect)
standardization of player names. I still had to go through the process of identifying players with multiple spellings, and cleaning up duplicate games, but it was definitely easier than before. The
CD I bought only went through mid-2000, so I still had to supplement it with more recent games from TWIC.
(3) What formula should I use? When I did my first try at historical ratings, using the Master Chess 2000 games, I tried many different approaches, eventually settling on a compromise which
combined three different approaches: a simultaneous iterative approach similar to how the initial pool was generated for the Professional ratings (and for my initial pool of players), a statistical
"Most Likely Estimate" approach which used probability theory, and also the traditional Elo approach. I tried to use this compromise solution again on the ChessBase data, but eventually discarded
it because it was taking too long to calculate, and I was also identifying some problems with how provisional players were entering the system. I decided to start over from scratch and see what I
could do with the benefit of several months of experience in developing rating systems.
The obvious first step in calculating 150 years of retroactive historical ratings, based on the ChessBase games, would be to use the Elo formula itself. This is indeed the first thing that I did.
Then I went back and applied the expected-score formulas to all historical games in my database, using those ratings, and compared the prediction with the actual results, to see how well the
ratings worked at actually predicting the outcome of each historical game. I found significant deviations between the predicted and actual outcomes, suggesting that the Elo formula could itself be
improved upon. After considerable statistical analysis of this data, I eventually arrived at a formula which seemed to work significantly better than the Elo scheme. I believe that in addition to
working better than the Elo scheme, my Chessmetrics ratings have just as solid a grounding in statistical theory.
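The kind of comparison described here might look roughly like the following sketch in Python. The standard Elo expected-score formula is used, but the game-record format and the simple error measure are my own assumptions, not a description of the actual Chessmetrics methodology.

    def elo_expected_score(rating_a, rating_b):
        # standard Elo expected score for player A against player B
        return 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / 400.0))

    def mean_prediction_error(games):
        # games: iterable of (white_rating, black_rating, white_score), score in {0, 0.5, 1}
        errors = [abs(elo_expected_score(rw, rb) - score) for rw, rb, score in games]
        return sum(errors) / len(errors)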
In order to calculate a player's Chessmetrics rating, we need to know what their rating was exactly a year ago, as well as their performance rating based on all their (rated) games that were played
during the past year. The quantity of games played is also very important. If you played only a few games over the past year, then we are going to mostly believe your older rating, with only minor
adjustments based on those few games. This is similar to how the FIDE ratings work, where you have an ongoing rating which gets changed a little bit after each game you play. The Professional
ratings don't work very well in this scenario, since you have to go so far back in time to include the 100-most-recent games.
On the other hand, if you played a hundred games in the past year, then we don't really care too much what your rating was a year ago. There is so much evidence (from those hundred games) of your
current level of play, that we can basically say that your recent "performance rating" (over that entire year) is the best estimate of your current level of play. This is similar to how the
Professional ratings work, where a performance rating of your most-recent 100 games is calculated and becomes your new rating. The FIDE ratings don't work as well in this scenario, since the old
rating is increasingly out-of-date when a player plays frequently, yet the old rating is still what is used for the ongoing rating calculations, until the next rating period. Even having more
frequent FIDE calculations (now quarterly) doesn't help nearly as much as you would think.
Since most players' number of games per year will be somewhere in the middle, the best compromise is a combination of the two approaches, a rating formula that works equally well for frequent and
infrequent players. Of course, it is also important to know whether that older rating was based on just a few games, or whether there was a lot of evidence to justify the accuracy of the older
rating. For instance, if two years ago you were very active, then we can have a lot of confidence that your rating a year ago was a pretty good guess at your level of play at that time. On the
other hand, if you played very infrequently two years ago, then we will place correspondingly less emphasis on the accuracy of that rating from a year ago, and even more emphasis on your recent
results than we "normally" would.
You can think of a Chessmetrics rating as a weighted average between the player's rating a year ago, and the player's performance rating over the past year. The weights are determined by the
accuracy of that year-old-rating (e.g., whether it was based on many games or few games) as well as the accuracy of the performance rating over the past year (e.g., whether it represents many games
or few games). The ratings are era-corrected, anchored to a particular spot further down in the rating list (the specific spot is based on the world population; in 2001 the #30 player always gets a
particular rating number, and everyone else is adjusted relative to that player), such that a 2800 rating should typically be about the level needed to become world champion.
Ultimately, it is up to each person to decide which rating system they trust most. To help you in this decision, please allow me to mention some of the advantages that the Chessmetrics ratings have
over the FIDE and Professional ratings. To be fair, I will also mention all of the disadvantages that I am aware of, though you'll have to forgive me if I don't criticize my rating system too harshly.
The FIDE rating system has a serious drawback in that it is heavily dependent on how frequently ratings are calculated. For the same set of games, starting from the same initial ratings for
everyone, you will get a very different set of ratings after a few years, based on whether you are calculating ratings every year, every six months, every quarter, or every month. You might think
that the more frequent cycles would actually result in more accurate FIDE ratings, but that is actually not at all true.
The Chessmetrics and Professional ratings are relatively unaffected by how frequently the ratings are calculated. It can only help to calculate ratings very frequently, because that way you get
more up-to-date ratings. The Chessmetrics and Professional ratings differ significantly, however, in how far back they look while considering what games to use for the rating calculation. The
Professional ratings always look back exactly 100 games, whether those games were played in the past five months or the past five years. Further, the more recent games are much more heavily
weighted, so that half of your Professional rating is actually based on just your past thirty rated games. The Professional rating calculations don't care what a player's previous rating was; the
entire rating comes from those 100 games. The Chessmetrics ratings, on the other hand, always look back exactly a year, whether that year includes zero games or 200 games. Of course, it will put
correspondingly more emphasis on the past year's results, based on how many games were played. It is a matter of preference whether you think your "recent" results are best represented by a fixed
time period, or a fixed number of games (that go back however far is necessary in order to reach the prescribed number of games).
Another serious flaw in the FIDE and Professional ratings is that they do not provide any statement about how accurate the ratings are. In Elo's book from a quarter-century ago, he provides a small
table of numbers describing what the expected error would be in the ratings, for several different quantities of career games played (that table is the source of the "provisional until 30 career
games" rule), but that is simply based on theoretical considerations; there is no empirical evidence to support Elo's assertion that those errors have any correspondence to reality. That approach
also suggests that the accuracy of a player's rating is always increasing, as long as their number of career games keeps increasing. This is clearly wrong; if a player begins to play less
frequently, then even though their career number of games is increasing, we become less and less sure about the accuracy of their rating. The Professional ratings are at least accompanied by a
"variance", but that is simply a measure of how stable the player's performance rating tends to be in individual games; it says nothing about the accuracy of the ratings.
On the other hand, every Chessmetrics rating is accompanied by a corresponding +/- value, which represents the standard error (standard deviation) of the rating. Players can only qualify for the
world ranking list if their +/- value is small enough to indicate a "significant" rating. A rating is an estimate of what the player's level of performance currently is, and the +/- value indicates
the standard error of that estimate.
Another important drawback to the FIDE and Professional rating systems is that of inflation/deflation. This phenomenon has been widely studied, and it is clear that there has been considerable
inflation in the FIDE ratings in the past decades. For instance, in the early 1970's Bobby Fischer's rating peaked at 2780, and Fischer's domination of his peers was far greater than the current
domination of Vladimir Kramnik and Viswanathan Anand, both of whom have surpassed Fischer's 2780 mark in the past year. Any list of the highest-category-ever tournaments will invariably list only
tournaments from the past five or ten years, also due to the rating inflation at the top. It is impossible to meaningfully compare FIDE ratings that are even five years apart, let alone ten or
twenty. The Professional ratings have not been around nearly as long as the FIDE ratings, so it is not clear to what degree the inflation is occurring. However, I am not aware of any corrections
for inflation/deflation in the Professional calculations, and since it is an ongoing performance rating calculation, it seems likely that there is nothing anchoring the average ratings to a
particular standard.
On the other hand, the Chessmetrics ratings have been carefully adjusted in a serious attempt to eliminate any inflation or deflation. A rating of 2700 or 2500 should mean approximately the same
thing in 2001 that it did in 1901. To learn more about my corrections for inflation, read the section lower down about inflation. This correction enables the comparison of ratings across eras. Of
course, a rating always indicates the level of dominance of a particular player against contemporary peers; it says nothing about whether the player is stronger/weaker in their actual technical
chess skill than a player far removed from them in time. So while we cannot say that Bobby Fischer in the early 1970's or Jose Capablanca in the early 1920's were the "strongest" players of all
time, we can say with a certain amount of confidence that they were the two most dominant players of all time. That is the extent of what these ratings can tell us.
And, of course, the biggest flaw in the FIDE and Professional ratings is that they don't go far enough back in time. Elo's historical calculations and graphs are simply too coarse to be of any real
use, and even the official FIDE ratings are of limited availability in the 1970's. Further, the FIDE ratings since 1980 were only calculated twice a year (until very recently), which is simply not
frequent enough. The monthly Professional ratings are indeed more frequent, but they go back less than a decade.
My Chessmetrics ratings, on the other hand, are currently calculated weekly, and the monthly calculations go all the way back to 1980, and only the pre-1950 ratings are done as infrequently as once
per year. But the ratings go all the way back to 1851, so for historical analysis it seems clear that the Chessmetrics ratings are far more useful than the FIDE or Professional ones, as long as you
trust the accuracy of the Chessmetrics ratings.
Is there any reason why you shouldn't trust the accuracy of the Chessmetrics ratings? I'd love to say that they are perfect, but of course they are not. The biggest criticism of the ratings has to
be that the source of games is not as cleanly defined as it is for FIDE (I don't know what source of games is used for the Professional ratings). I have not excluded rapid or blitz games, or even
casual games, simply because there is no easy way to tell from a PGN game whether it should count as a "rated" game. Although I have invested considerable time working on the accuracy of the game
results, I have not omitted any games due to the conditions under which they were played.
Now, even though my ratings do include all games rather than just regulation-time-control "serious" games, remember that those ratings do nevertheless outperform the FIDE ratings in their accuracy
at predicting the outcomes of games. That fact goes a long way toward justifying the inclusion of those other games, but nevertheless it would be great if the 1.8 million games in my database could
be somehow pared down to only "serious" games. I simply don't have the resources to do that, and I'm not convinced that such an action would necessarily improve the accuracy of the ratings
themselves. It might, and then again it might not.
Not only does my game collection include too many games, you might just as well say that it includes too few games. Because I need up-to-date ratings for the purposes of my statistical analysis, I
elected to use the TWIC games as my source for the past 2.5 years. This necessarily means that many games are excluded that would normally be included in a huge database like the ChessBase Mega
Database. If that database were more timely, then perhaps I could use it, but instead I am almost forced to use the TWIC games, which could conceivably raise questions about the accuracy of the
ratings in the past couple of years, for people who don't have all their games included in TWIC. Mark Crowther's opinion was that the TWIC approach should work well at least for the top 50. I know
that when the next version of the big ChessBase database comes out, I can use it to plug some of the gaps, but that is a secondary concern right now. I apologize to anyone whose recent games are
not included as a result of this decision, but I'll do what I can to remedy this situation, and I urge all tournament directors to make their games available to TWIC.
In addition, it is very difficult to get a "perfect" set of games that were played many decades ago. I worked very hard to get an accurate set of games (even those where we only know the result and
not the moves; sometimes we don't even know who had the first move) up through 1880, but after that point it just became too difficult to keep up with the expanding tournament scene, and so there
could easily be missing games, especially for tournaments which didn't manage to preserve the entire gamescores.
Finally, it will always be true that somewhere out there is a slightly better formula. I know that my ratings work better than the FIDE ones, but of course that doesn't mean that the Chessmetrics
rating formula is the "best" one. I have tried to optimize it, based on the accumulated evidence of more than a million chess games, but it is almost certain that there is a better formula than the
one I currently use. Nevertheless, it's the best formula I could find, and I did try a large number of other alternatives.
The Statistical Theory Behind the Ratings
The formula is based upon considerable empirical chess data from a very large database of historical games. The statistical theory behind the formula is based upon the Method of Maximum Likelihood
and its application to certain variables which are assumed to follow normal distributions, those variables being:
(a) the error of a rating estimate; and
(b) the observed difference between a player's true rating and their performance rating over a subsequent time period (usually a year).
There is of course no abstract reason why those variables must follow a normal distribution (although (b) is a trinomial distribution which for more than a few games should indeed follow a normal
distribution), but experience predicts that they would follow a normal distribution, and the empirical data seems to indicate strong agreement. Using that empirical data, I have created formulas
which estimate the variance of those two variables listed above. The variance of (a) is based on the number of games played in recent years leading up to the calculation of the rating, and the
variance of (b) is based on the number of games played during that year. In both cases, the formula actually uses the inverse square root of that number of games, since the statistical theory
suggests that the variance typically would be proportional to that inverse square root.
The Method of Maximum Likelihood requires an "a priori", or "prior" distribution, as well as a "posterior" distribution. In the specific case of rating calculations, the prior distribution
describes our estimate of a player's true level of skill, exactly a year ago. The mean of this distribution is the actual calculated Chessmetrics rating a year ago, and the variance is based upon
the quantity of games played, leading up to that calculation. The posterior distribution represents a performance rating, namely the observed performance level of the player during the year in
question. The mean of that distribution is the player's true level of skill a year ago, and the variance is based upon the quantity of games played during the subsequent year.
When you use the Method of Maximum Likelihood, you consider many different guesses for what the player's true level of skill was a year ago. Certain guesses are more likely than others; the most
promising guess is that their calculated rating a year ago was exactly right, but of course it is quite likely that there was a certain amount of error in that rating estimate; probably the
player's true level of skill was either underrated or overrated by that calculation.
For each guess under consideration, you first assume that the guess was exactly right, and then see what the chance would be of the player actually scoring what they really did score. The
"likelihood" is then calculated as the probability of your original guess (as to the player's true skill over the past year) being right, times the probability (assuming that the guess was correct)
of the player's actual results.
Let's try a small example to illustrate how this works. A hypothetical Player X has a rating of 2400, with a particular uncertainty associated with that rating. To keep it simple, let's say that
there is one chance in two that Player X's true level of skill is actually 2400, one chance in five that Player X's true level of skill is actually 2500, and one chance in a hundred that Player X's
true level of skill is actually 2600. Then Player X plays fifteen more games, with a performance rating of 2600, and the big question is how we revise Player X's rating. Is it still near 2400, is
it near 2600, or is it somewhere in the middle?
Let's further pretend that if a player has a true rating of 2400, then they have one chance in fifty of scoring a performance rating of 2600 in fifteen games. And if they have a true rating of
2500, then they have one chance in ten of scoring a performance rating of 2600 in fifteen games. Finally, if they have a true rating of 2600, maybe there is one chance in three of scoring a
performance rating of 2600 in fifteen games. These are all hypothetical numbers, of course; the real trick is to figure out what the actual numbers should be!
Using these simple numbers, though, we can calculate the "likelihood" of a particular rating estimate as the product of those two chances. The chance of Player X's original "true rating" being 2400
(one in two) times the chance of a 2600 performance rating in fifteen games by a 2400-rated player (one in fifty) gives an overall "likelihood" of one in a hundred that their "true rating" really is 2400. The same calculation gives a likelihood of one in fifty for a 2500 rating, and a likelihood of one in three hundred for a 2600 rating. Thus, in this very simplistic example, our "most likely" estimate of the player's true skill is 2500, since that one has the greatest likelihood of being true (.02 vs. .01 or .003). And so, even though our previous guess of the player's true skill was
2400, the evidence of those fifteen subsequent games leads us to re-evaluate our current estimate of the player's true skill, to 2500.
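Written out as a quick sketch in Python (using the hypothetical probabilities from the example, not real Chessmetrics values):

    prior = {2400: 1/2, 2500: 1/5, 2600: 1/100}        # P(true rating)
    perf_2600 = {2400: 1/50, 2500: 1/10, 2600: 1/3}    # P(2600 performance in 15 games | true rating)

    likelihood = {r: prior[r] * perf_2600[r] for r in prior}
    best_guess = max(likelihood, key=likelihood.get)   # -> 2500, with likelihood 0.02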
This approach provides a middle ground between the conservative FIDE ratings, which will always be too slow to react to a drastic change in a player's ability, and the sensitive Professional
ratings, which place no emphasis at all on a player's prior rating, looking only at a weighted performance rating that may overstate whether a player really has improved as much as their recent
results would indicate.
Now, of course, there are more than just three possible "true ratings"; there are infinitely many, and this means you have to deal with probability densities rather than actual probabilities, and
those densities are based on the density of a normal variable, which is a pretty ugly exponential formula. However, it all has a happy ending. It turns out that if your prior distribution is
normally-distributed, and your posterior distribution is normally-distributed, then rather than maximizing the likelihood, you can instead maximize the logarithm of the likelihood, which lets you
cancel out all of the ugly exponential terms. Also, the logarithm of "X times Y" is the logarithm of X plus the logarithm of Y, and it works far better to take the derivative of a sum than it does
to take the derivative of a product, especially when you are going to be solving for one of your variables. Further, since you are maximizing it, you need only take the derivative of the
log-likelihood function, with respect to the player's "true" rating. This lets you zero out several terms (that are not related to the player's "true" rating). By setting the derivative equal to
zero and solving for the "true" rating, you get a very simple formula, which turns out to be a simple weighted average of the previous rating with the observed performance rating, with the weights
being the variances of (b) and (a), respectively. Since (a) and (b) were defined many paragraphs ago, let me state them again:
(a) the error of a rating estimate; and
(b) the observed difference between a player's true rating and their performance rating over a subsequent time period (usually a year).
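Written out, the update rule just described amounts to something like the following sketch (the actual Chessmetrics formulas for the two variances are not spelled out in this article, so var_a and var_b are placeholders):

    def updated_rating(old_rating, perf_rating, var_a, var_b):
        # var_a: variance of the error in last year's rating estimate, item (a)
        # var_b: variance of the performance rating about the true rating, item (b)
        # Weighted average with weights var_b and var_a respectively; if var_b
        # were zero this would reduce to the pure performance rating.
        return (var_b * old_rating + var_a * perf_rating) / (var_a + var_b)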
As long as you're still with me after all of that math, let me point out one more thing. The Professional ratings are just a special case of the more general equation. If you assume that the
variance of (b) is zero, then your resultant rating will be exactly equal to the observed performance rating, and that's how the Professional ratings work. So, the Professional ratings assume that
if your true rating is 2383 over a particular time period, you will always score an exact weighted performance rating of 2383 over a hundred games during that time period. That is clearly not true;
even a thousand games is probably not enough to ensure such accuracy. The variance of (b) is definitely nonzero. So, if the Professional ratings were truly an attempt to estimate, as accurately as
possible, a player's true level of skill, some weight needed to be given to what their rating was originally, since that does provide some useful information. However, perhaps the Professional
ratings are simply intended to be an accurate measure of a player's recent results, rather than an estimate of how good a player really is.
Another possibility is that my statistics are flawed and that the Professional ratings actually are a great way to estimate a player's true skill. The real proof, of course, would come from
comparing whether the Professional ratings work as well as the Chessmetrics ratings at predicting the results of future games. I would love to perform such an analysis, but unfortunately I have
been unable to obtain a satisfactory set of historical Professional rating lists, or specific definition of how the details of the calculations work (so I could do it myself). Specifically, I don't
understand how provisional players enter the list (since at the start they won't have 100 games played, and they won't be part of the original basis calculations); my inquiries to Vladimir
Dvorkovich and to Ken Thompson have been unanswered. Mark Glickman, inventor of the Glicko rating system, has been very helpful in general, but couldn't help me out in this particular case.
The FIDE ratings, on the other hand, are far more available than the Professional ratings, allowing me to check whether I was really improving on the FIDE approach, or whether I was out of my
league in my attempts to find a better approach. I can now confidently say that the Chessmetrics ratings work better than the FIDE ratings at predicting the results of future games, and thus the
Chessmetrics ratings are more accurate than the FIDE ratings at estimating the true level of skill of chess players.
Still not convinced? Want to see the numbers? In order to keep myself honest, I decided that my process would be to use all games up through 1994 to develop my rating formulas, and then I would use
the games of 1995 and 1996 to test whether the formulas really worked better. Otherwise, if I used all games through 2001 to develop my formulas, and then used some of those same games to compare
rating systems, it wouldn't be fair to the FIDE system, since my formulas would already be optimized for those same games. So, I pretended that I had invented everything in 1994, and had then spent
1995 and 1996 checking to make sure that I had really improved on the Elo formulas. I didn't want the cutoff times to be much later than 1996, since my switchover to using TWIC games (rather than
ChessBase) might influence the results.
This test was successful; the Chessmetrics ratings consistently outperformed the FIDE ratings, month after month after month. I can't provide the full details right now, though I promise to put
them up on the site soon. I was hoping to include the Professional ratings in the mix before doing a full-blown analysis, but for now the only Professional ratings I have access to are the monthly
top-fifty lists as published by Mark Crowther in his weekly TWIC issues. So perhaps we can only make conclusions about how the rating systems work among top-50 players. I did do an analysis of FIDE
vs. Professional a year ago, using those top-50 lists, and found that the Professional ratings did no better than the FIDE ratings at predicting the results of future games, and maybe even a little
worse than the FIDE ratings. The FIDE ratings still work quite well, and not really that much worse than my Chessmetrics ratings, but they are demonstrably inferior.
Correction for inflation/deflation
The final topic to be covered is that of rating inflation. Let me digress for a moment. If we wanted to compare the performance of today's golfers with past historical great golfers, we can do
that, because it is easy to measure the absolute performance of a golfer. The same argument applies even more strongly to individual sports such as swimming or high-jumping or javelin throwing.
There is still room to argue about whether the performances of today's swimmers are more impressive than the performances of past greats (who didn't have the benefits of today's training methods,
or whatever), but there can be no doubt that today's top athletes swim faster, jump higher, and throw further than any of their predecessors.
Do today's top chess players play better than any of their predecessors? That question is harder to answer objectively, without an absolute standard to measure against like we have in track and
field. Chess players compete against other chess players, and the average chess performance hasn't changed in centuries; it's still a 50% score. In the same way, we can't measure objectively the
relative performance of Barry Bonds vs. Babe Ruth, or Muhammad Ali vs. Joe Louis, or Michael Jordan's Chicago Bulls against Bill Russell's Boston Celtics. All we can do is measure the degree to
which they dominated their contemporaries. The same goes for trying to compare Garry Kasparov to Bobby Fischer to Jose Capablanca to Wilhelm Steinitz. If we had only had the foresight to lock Bobby
Fischer in a room in 1972 so he could play thousands of games against an incredible supercomputer running the state-of-the-art computer chess program, we could drag that same computer out of
mothballs today and begin to make progress on this question. We could emulate that same computer program on a Palm Pilot and pit it against Garry Kasparov, and then maybe we could start to say
something about who was truly stronger, although there are huge problems with even that approach, since players learn about their opponents during competition, and presumably each player would win
their final 1,000 games against that computer opponent.
To continue this ridiculous discussion a few sentences longer, we do have a special advantage in chess in that we have a near-perfect record of Bobby Fischer's performance in 1970-2, and the same
to varying degrees for Garry Kasparov in 1999 and Jose Capablanca in 1922 and Wilhelm Steinitz in 1878, since we have the moves of all of their games; we just don't have the skills yet to construct
an objective way for a computer to analyze whose technical play was truly "strongest". We have to resort to human analysis of their play, and so we enter the realm of subjectivity, which is
probably where this question belongs anyway, given the undeniable human element whenever a human plays a game of chess.
Nevertheless, it is possible to measure (in an objective way) a player's performance against contemporaries, allowing us to sort a list of players from strongest to weakest, and we can express the
relative level of skill of two players in terms of a "rating difference", which has a well-established meaning today. However, even if we know that Player A is the top-rated player, and Player B is
second, 40 points behind, and Player C is five points back of Player B, what ratings do we give them? Should Player A have a rating of 2800, and B 2760, and C 2755? Or should Player A get a rating
of 2.80, or 28 million? It doesn't matter for the purposes of a single list, but when we try to measure how much one player dominated in 1958 against how much another player dominated in 1859, it
would be great to have some sort of meaningful scale to allow that sort of comparison.
Unfortunately, the Elo scale itself doesn't have any safeguards built in to prevent rating inflation/deflation, and it is clear that the meaning of a 2700 rating (for instance) is different if you
are talking about 1971 versus 2001. In 1971, a 2700-rated player would be extremely dominant, and almost certainly the strongest player in the world. In 2001, a 2700-rated player is not even in the
top ten in the world, and almost certainly NOT the strongest player in the world.
My original approach to this problem was to adjust all of the ratings so that the #10-rated player in the world received a 2600 rating, for all of the lists from 1850 to 2001. This was an
improvement on having no correction at all, but hardly an optimal one. Long ago, there would have been far fewer players within 200 points of the world champion than we have today, so a world
champion would have been almost expected to be 200 points higher than the #10 player in the world, whereas today it would be almost unheard of. So the "#10 player is 2600" rule seems unfair to
modern players, among other problems.
I still liked the idea of anchoring a specific rating to a particular world rank #, but it needed to vary across time to reflect the fact that there are many more players today than ever before. I
eventually hit on the scheme (based on a suggestion from a reader of my original Chessmetrics site) of having the anchor world rank # depend upon the world's population at the time. The general
rule is that for every 200 million people in the world, I will use one slot further down in the world rank # as my anchor. So if the world population were 2 billion, I would use the #10 player as
my anchor, but in modern times as the world population neared 6 billion, I would eventually use the #30 slot as my anchor.
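A minimal sketch of that population rule (the exact rounding convention is an assumption on my part):

    def anchor_slot(world_population):
        # one anchor slot further down in the world rank for every 200 million people
        return max(1, round(world_population / 2e8))

    print(anchor_slot(2e9))   # 10, matching the two-billion example
    print(anchor_slot(6e9))   # 30, matching the modern example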
Does that mean that the anchor would always receive the same rating? No, since there is no guarantee that the spacing of players at the top should be directly related to the world population.
Instead, I used the actual historical lists that I had already generated, as a guide for what the anchor slot's rating should be. I wanted the rating of the top players each year to be about the
same (and I picked 2800 as the desirable rating for the #1 player), but I didn't want to follow that rule too closely, since then we would see Fischer and Capablanca and Steinitz and Kasparov all
having the same 2800 rating, which would be kind of pointless.
I plotted the gap between #1 and #5 across 150 years, and fit that data to a straight line or at most a simple curve. Then I did the same thing for the gap between #5 and #10, and for #10 and #12,
and in fact for many different gaps involving top-30 players. Using this data, I could predict what the gap should be between #1 and #12 on a particular year, by adding the three predicted gaps
together. Let's say that gap was 140 points. Then, if the anchor for that year was the #12 slot (because the world population was about 2.4 billion), I could work backward from the desired goal of
2800 for the #1 player, and the predicted gap of 140 points, to arrive at anchor values for that year: I would add/subtract the exact number of rating points to give the #12 player a rating of
2660, and then everyone else on that list would get the same number of points added/subtracted to their rating, so that the relative differences between ratings stayed the same. The top player
would be measured against other historical greats by looking at whether this top player really did rate 140 points higher than the #12 player, or whether they managed a gap higher or lower than that predicted 140 points.
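In code, the anchoring step boils down to a single additive shift applied to a whole list. A minimal sketch under the example numbers above (2800 target for #1, 140-point predicted gap, anchor slot #12), omitting the weighted-average-of-five refinement described below:

    def anchor_list(ratings, anchor_slot, predicted_gap, top_target=2800):
        # ratings: dict mapping player -> unanchored rating for one list
        ranked = sorted(ratings.values(), reverse=True)
        anchor_target = top_target - predicted_gap       # e.g. 2800 - 140 = 2660
        shift = anchor_target - ranked[anchor_slot - 1]  # same additive shift for every player
        return {player: r + shift for player, r in ratings.items()}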
By fitting the data to simple lines and curves, I hoped to capture overall trends among top players, while still allowing the #1 players across eras to differentiate themselves. There were still
potential pitfalls. For instance, what should we do if there happened to be a big clump of players right above or right below the anchor slot? Answer: use a weighted average of the five players
centered on the anchor slot, instead of just the one player. Another big hurdle was what to do when a few top players retired simultaneously (or showed up simultaneously), and suddenly threw off
the ranks. Answer: at the end, go through a few iterations of trying to minimize the overall change in ratings for players between their previous and subsequent ratings, possibly causing entire
lists to move up or down a significant amount, though I only considered players whose world rank hadn't changed by more than a few slots. Today the anchor slot is fixed at #30, and the rating given
to the #30 slot is slowly increasing, to reflect the fact that as the population of chess players increases, we should see slightly more clumping at the top, so the gap between #1 and #30 should be
slowly decreasing.
Conclusions, and Looking Ahead
So, there you have it; I don't have too much more to say at this point, other than the fact that I expect to eventually revise much or all of the above process thanks to my receiving constructive
criticism from all of you who are reading this. Interestingly enough, this effort was more of a means to an end, rather than an end in itself; I wanted a sound way to arrive at the error of a
rating estimate, to allow me to do better statistical analysis than what I have done so far in my articles for KasparovChess.com. Now I have those error values, so I can move on to interesting
topics like the true odds of players' winning the FIDE knockout championships, or what the best candidates' system is for determining a challenger to the world champion, and things like that.
However, I also have great hopes for improving this site. Here are my immediate plans for the future of this project:
(1) The whole point of switching over to the TWIC games was so I could have very up-to-date ratings. Currently the ratings only go up through September 10th, 2001 (since that was the last TWIC
issue I had imported before I did my final run of rating calculations), but as soon as I get some time I plan to implement a process where I can bring in a TWIC issue each week and somehow update
the site with the latest ratings. I'm still working on the finer points of some of that. Currently the site uses static HTML pages, but my plan is to switch over to dynamically-generated ASP pages
from my database, as soon as I know that I won't be hurting the performance of the site by doing that.
(2) I know that there are still some embarrassing parts of my rating system. It still has Gata Kamsky in the top 15, even though he seems quite retired, so perhaps I need to ease my rules about
what "retired" means. Also, if you look at some of the age graphs for very young players who didn't play very many games, you can see an interesting cyclical effect which seems to indicate problems
in my calculations of provisional ratings. Finally, I have a correction which I apply to provisionally-rated players to reflect the "regression to the mean" effect: the fact that their calculated
ratings are probably too far away from the average rating. That one seems to work, but probably I should do a similar thing for players who are no longer provisional, but who nevertheless have
uncertain ratings due to not playing very much. This would probably help in forcing down the ratings of older or semi-retired top players. It would be great if I could somehow adjust my algorithm
to take care of these problems. Probably I'll incorporate these changes the next time I modify the set of games used (like if I do more cleanup work on the 19th century games) and have to rerun the
entire ratings calculations from 1850 again.
(3) I know that my decision to use TWIC games will mean that I lose a lot of events, so many of the 13,000+ players will have some games excluded and their ratings will be correspondingly more
uncertain. In addition, at the other end of the continuum, I know that my game collection is imperfect, especially in the pre-1920 era. I tried to do a really good job through 1880, but I'm sure
that errors slipped through, and who knows what it's like between 1880 and 1920? Ideally, I would go through Jeremy Gaige's landmark books on tournament crosstables, and find the next-best-thing
for match records (which are not included in Gaige's books, I believe), and manually enter all of the results (or at least check them against the ChessBase data and add missing games, which is
basically what I did through 1880). At this point, I've done the best I could do in my limited spare time away from the rest of my life; maybe some of you can help me out somehow. If Gaige's books
were computerized, that would be a great first step. By the way, while I'm on the subject of Jeremy Gaige, I would like to mention that I made use of his excellent book "Chess Personalia: A
Biobibliography" to enter birth and death dates, as well as consistent spellings, accent marks, etc., for everyone who had ever been in the top 100 of one of my date lists. That book is about 15
years old, so I know that there are many players who must have died since then, and many spellings that I don't have the ability to check against a master source. I apologize to anyone who I have
wrong data for, and I tried to do the accent marks like Gaige did, though there were some special characters that my database wouldn't accept.
(4, 5, 6, ...) I want to add the ability to get graphical plots for any player, rather than just the 99 players who have ever been in the top 5 in the world. I want to include more top lists, like who
have the best peak ratings, or the best peak-five-year ratings. I want to include nationalities so my friends from Denmark and Sweden and Iceland who are looking for their countrymen can see
interesting lists limited only to a particular country. I want to show the analysis which compares the FIDE, Professional, and Chessmetrics rating performance. I want to add another dimension of
"openings" to all of this. I want to add the ability to drill down to individual lists of games for each player, and to view those games. I want to add my past and future articles about various
statistical topics. I want to add the ability for a user to generate their own graphs of historical data about requested players. We’ll see whether I manage to do any of those!
Thanks for reading all of this, and I hope you enjoy my site.
 - Jeff Sonas
Back to top
|
{"url":"http://chessmetrics.com/cm/Documents/AboutSystem.htm","timestamp":"2014-04-18T15:39:47Z","content_type":null,"content_length":"47184","record_id":"<urn:uuid:2a934e79-45f4-44b4-b554-bebf2a5617fe>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00203-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Localization of a UFD is again a UFD
November 9th 2010, 04:46 AM
I've been trying to do this problem for a while now.
Suppose our UFD is $R$ and we have a multiplicative set $W\subseteq R$; I want to show that the localization $W^{-1}R$ is again a UFD.
The problem I'm having is that irreducible elements in one ring (either $R$ or $W^{-1}R$) don't seem to always carry over. As a simple example, consider the ring $\mathbb{Q}[x,y]$. Then $x$ is
irreducible here, but is no longer irreducible in $\mathbb{Q}[x,y]_{(y)}$ (it becomes a unit). Conversely, the element $\frac{xy}{1}\in \mathbb{Q}[x,y]_{(y)}$ is irreducible (it is an associate
of the irreducible element $\frac{y}{1}$). However, $xy$ is certainly not irreducible in $\mathbb{Q}[x,y]$.
So I'm basically stuck on BOTH conditions of having a UFD (showing there always EXISTS a factorization into irreducibles, and then proving that it is unique). Does anyone have any ideas?
EDIT: I believe I figured it out. A quick sketch of the steps I used:
1. Prove that if $r\in R$ is irreducible, then $r/1$ is either irreducible or a unit.
2. This means that a factorization of $r$ into irreducibles in $R$ will give a corresponding factorization of $r/1$ into irreducibles (and units). Since an arbitrary element of $W^{-1}R$ looks
like $\frac{r}{w}=\frac{1}{w}\cdot \frac{r}{1}$, and $\frac{1}{w}\in (W^{-1}R)^*$, this implies that every nonzero nonunit of the localization has a factorization into irreducibles (existence).
3. Prove that if $r/w$ is irreducible, then $r/w$ is an associate of an element of the form $\frac{x^e}{1}$, where $x\in R$ is irreducible.
4. Use (3) to prove uniqueness, writing any irreducible in this form.
So, it works (I believe), but it's really just a lot of playing with elements and factorizations. I'm sure there's a better proof.
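For what it's worth, here is one way step (1) can be written out, assuming $0\notin W$ so that $W^{-1}R$ is again a domain (just a sketch of that single step): suppose $r\in R$ is irreducible and $\frac{r}{1}$ is not a unit in $W^{-1}R$, and suppose $\frac{r}{1}=\frac{a}{u}\cdot\frac{b}{v}$ with $a,b\in R$ and $u,v\in W$. Clearing denominators (using that $R$ is a domain and $0\notin W$) gives $ruv=ab$ in $R$. Since $R$ is a UFD, $r$ is prime, so $r$ divides $a$ or $b$; say $a=ra'$. Cancelling $r$ gives $a'b=uv$, and then $\frac{b}{v}\cdot\frac{a'}{1}=\frac{a'b}{v}=\frac{uv}{v}=\frac{u}{1}$, which is a unit in $W^{-1}R$. Hence $\frac{b}{v}$ is a unit, and $\frac{r}{1}$ is irreducible.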
|
{"url":"http://mathhelpforum.com/advanced-algebra/162631-localization-ufd-again-ufd-print.html","timestamp":"2014-04-20T01:30:06Z","content_type":null,"content_length":"8734","record_id":"<urn:uuid:7659296e-67bb-4648-95d7-fddbac0030ba>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00224-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Bonferroni inequalities
Let $E(1)$, $E(2),\ldots,E(n)$ be events in a sample space. Define
$\displaystyle S_{1}:=\sum_{{i=1}}^{n}\Pr(E(i))$
$\displaystyle S_{2}:=\sum_{{i<j}}\Pr(E(i)\cap E(j)),$
and for $2<k\leq n$,
$S_{k}:=\sum\Pr(E(i_{1})\cap\cdots\cap E(i_{k}))$
where the summation is taken over all $k$-tuples of indices with $1\leq i_{1}<i_{2}<\cdots<i_{k}\leq n$, so that (as in the definition of $S_{2}$) each set of $k$ distinct events is counted once.
For odd $k$, $1\leq k\leq n$,
$\Pr(E(1)\cup\cdots\cup E(n))\leq\sum_{{j=1}}^{k}(-1)^{{j+1}}S_{j},$
and for even $k$, $2\leq k\leq n$,
$\Pr(E(1)\cup\cdots\cup E(n))\geq\sum_{{j=1}}^{k}(-1)^{{j+1}}S_{j}.$
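A quick numerical check of these truncation bounds (a minimal sketch; the three events below are arbitrary subsets of a small uniform sample space, chosen only for illustration):

    from itertools import combinations

    omega = range(12)   # uniform sample space of 12 outcomes
    events = [set(range(0, 6)), set(range(4, 9)), set(range(7, 12))]
    pr = lambda s: len(s) / len(omega)

    union = pr(set().union(*events))
    S = [sum(pr(set.intersection(*combo)) for combo in combinations(events, k))
         for k in range(1, len(events) + 1)]

    for k in range(1, len(events) + 1):
        bound = sum((-1) ** (j + 1) * S[j - 1] for j in range(1, k + 1))
        relation = "<=" if k % 2 == 1 else ">="   # odd k: upper bound, even k: lower bound
        print(f"k={k}: P(union)={union:.3f} {relation} {bound:.3f}")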
|
{"url":"http://planetmath.org/BonferroniInequalities","timestamp":"2014-04-18T23:19:07Z","content_type":null,"content_length":"65058","record_id":"<urn:uuid:d40cde26-3773-4cab-b7ac-27e35a16b71c>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00455-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Concrete-Representational-Abstract Instructional Approach
What is it?
Concrete-Representational-Abstract-Approach or CRA for short “can enhance the mathematics performance of students with learning disabilities. It is a three-part instructional strategy, with each part
building on the previous instruction to promote student learning and retention and to address conceptual knowledge.” (The Access Center, 2004).
The purpose of teaching through a concrete to representational to abstract approach is to make sure students completely understand the skill or concept they are learning before executing the problem
on their own. The three steps of CRA include: Concrete, Representational and Abstract. The Concrete stage is the “doing” stage, the Representational is the “seeing” stage and the Abstract is the
“symbolic” stage.
1. Concrete
In this step, the teacher introduces a math concept by modeling examples using manipulatives such as unifix cubes, pattern blocks, beans, base ten blocks etc. Students are able to manipulate the
objects by using their visual, tactile and kinesthetic senses. Students are given many opportunities to use these objects to problem solve.
Research-based studies show “that students who use concrete materials develop more precise and more comprehensive mental representations, often show more motivation and on-task behavior, understand
mathematical ideas, and better apply these to life situations.” (Anstrom, 2006).
A teacher could model how to multiply by using marbles as manipulatives. The teacher would display three groups of three marbles each and ask the students how many marbles there are. The teacher
would allow students to touch and count the marbles.
Once students have demonstrated mastery using concrete materials, they are ready to move onto the Representational step.
2. Representational
In this step, the student would draw pictures that represent the concrete objects previously used. These pictures help the student visualize the math operations during problem solving. The teacher
must explain the relationship between the pictures and the concrete objects and allow the student numerous opportunities to practice until they solve the problems independently. After students are
successful with the representational step, they would move on to the abstract step.
3. Abstract
During this final step, the teacher models the concept at a symbolic level and uses math symbols to represent addition, subtraction, multiplication and division. It is often referred to as “doing math in
your head.” After students have handled multiplication manipulatives and made pictorial representations, the teacher would show the abstract form, which is “3 x 3.”
This is an example of what CRA would look like.
This instructional approach “benefits all students but has been shown to be particularly effective with students who have mathematics difficulties, mainly because it moves gradually from actual
objects through pictures and then to symbols. (Sousa, 2007).
While this video is a little lengthy, it gives a great overview of the CRA approach.
Connection to Multiple Intelligences
Students with strengths in tactile and kinesthetic learning styles learn best with hands-on experiences. These students might even prefer to act out the concepts. Visual learners can easily visualize
counters and auditory learners can repeat the concepts in their heads.
My thoughts…
I believe this is a great approach to use with learning disabled students as well as non-disabled students. It is especially beneficial to students with learning disabilities because they have a
harder time with abstract concepts. The CRA approach can help students connect ideas so they gain a deep understanding of the math concept. As a result, students are more likely to retain the
information.
CRA allows the teacher to differentiate instruction and meet the needs of all students.
The Access Center. (2004). Concrete-Representational-Abstract Approach. Retrieved from: http://www.k8accesscenter.org/training_resources/CRA_Instructional_Approach.asp
Anstrom, T. (2006). Supporting Students in Mathematics Through the Use of Manipulatives. Retrieved from Center of Implementing Technology in Education : http://www.cited.org/library/resourcedocs/
Kurczodyna, V., Cavanagh, C. & Curiel, J. (n. d.) Math Instructional Strategies. Retrieved from http://www.eiu.edu/~speebp/ppt/Math–_CRA_PPT.ppt
Math VIDS. (n.d.). Retrieved from http://fcit.usf.edu/mathvids/strategies/cra.html
Math Instructional Strategies. (n.d.).Retrieved from www.iu12.org/images/…/Math%20Instructional%20Strategies.ppt
Sousa, D. (2007). How the Brain Learns Mathematics. Thousand Oaks, CA: Corwin Press
Filed under Math
|
{"url":"http://cbennettrivier.wordpress.com/2012/03/29/concrete-representational-abstract-instructional-approach/","timestamp":"2014-04-19T12:49:11Z","content_type":null,"content_length":"355475","record_id":"<urn:uuid:5937c7e8-d388-4e84-a20d-47d35b711a0c>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00495-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Summary: A Counterexample to the Generalized Linial-Nisan Conjecture
Scott Aaronson
In earlier work [1], we gave an oracle separating the relational versions of BQP and the
polynomial hierarchy, and showed that an oracle separating the decision versions would follow
from what we called the Generalized LinialNisan (GLN) Conjecture: that ``almost kwise in
dependent'' distributions are indistinguishable from the uniform distribution by constantdepth
circuits. The original LinialNisan Conjecture was recently proved by Braverman [7]; we o#ered
a $200 prize for the generalized version. In this paper, we save ourselves $200 by showing that
the GLN Conjecture is false, at least for circuits of depth 3 and higher.
As a byproduct, our counterexample also implies that Π^p_2 ⊄ P^NP relative to a random oracle
with probability 1. It has been conjectured since the 1980s that PH is infinite relative to a
random oracle, but the highest levels of PH previously proved separate were NP and coNP.
Finally, our counterexample implies that the famous results of Linial, Mansour, and Nisan
[12], on the structure of AC^0 functions, cannot be improved in several interesting respects.
1 Introduction
Proving an oracle separation between BQP and PH is one of the central open problems of quantum
complexity theory. In a recent paper [1], we reported the following progress on the problem:
(1) We constructed an oracle relative to which FBQP ⊄ FBPP^PH, where FBQP and FBPP^PH are
|
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/722/3701216.html","timestamp":"2014-04-18T11:56:14Z","content_type":null,"content_length":"8742","record_id":"<urn:uuid:34bd0ae2-d039-4550-a69e-4d57fbb1b4ba>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00046-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Summary: MaxCut in H-free graphs
Noga Alon
Michael Krivelevich
Benny Sudakov
Dedicated to B´ela Bollob´as on his 60th birthday
For a graph G, let f(G) denote the maximum number of edges in a cut of G. For an integer
m and for a fixed graph H, let f(m, H) denote the minimum possible cardinality of f(G),
as G ranges over all graphs on m edges that contain no copy of H. In this paper we study
this function for various graphs H. In particular we show that for any graph H obtained by
connecting a single vertex to all vertices of a fixed nontrivial forest, there is a c(H) > 0 such
that f(m, H) m
2 + c(H)m4/5
, and this is tight up to the value of c(H). We also prove that
for any even cycle C2k there is a c(k) > 0 such that f(m, C2k) m
2 + c(k)m(2k+1)/(2k+2)
this is tight, up to the value of c(k), for 2k {4, 6, 10}. The proofs combine combinatorial,
probabilistic and spectral techniques.
1 Introduction
|
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/226/1705799.html","timestamp":"2014-04-21T10:07:33Z","content_type":null,"content_length":"7983","record_id":"<urn:uuid:6919a3f0-183f-4575-9269-77f1972c239a>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00069-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Atherton Science Tutor
Find an Atherton Science Tutor
...I'm flexible in my teaching style, and will work with parents, schools and students to determine the format of tutoring most likely to bring them success. In all cases, however, I emphasise
risk taking, self-sufficency and critical thinking when approaching problems. My aim is to help students to become independent learners who eventually won't need my help.
11 Subjects: including physics, chemistry, calculus, statistics
Hello! My name is Eduardo. I was born and raised in Puerto Rico, where I completed my bachelor's degree in Industrial Biotechnology at the University of Puerto Rico at Mayagüez.
13 Subjects: including genetics, biology, chemistry, physical science
...I love the look of excitement when a concept suddenly makes sense, or a new connection can be drawn between ideas. I believe all kids can understand math and science, if it is explained in a
way that makes sense to them. I have a Masters in Chemical Engineering from MIT, and over 10 years of industry experience.
26 Subjects: including genetics, Regents, algebra 1, algebra 2
...I build on those. Everyone has areas of weakness. I co-create lessons for skill-building.
8 Subjects: including biology, reading, English, grammar
...I strongly believe that everyone can be successful at math, and my motto is to give my students the concepts and confidence they need to achieve success in future endeavors. I have a BS degree in
Electrical Engineering. I have been working in the electrical engineering domain for the last 15+ years as a principal hardware engineer.
13 Subjects: including electrical engineering, chemistry, calculus, linear algebra
|
{"url":"http://www.purplemath.com/atherton_science_tutors.php","timestamp":"2014-04-19T20:06:44Z","content_type":null,"content_length":"23736","record_id":"<urn:uuid:761cf88b-8de1-46d8-b320-f801e916646b>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00336-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Carroll K. Johnson and Michael N. Burnett
Chemical Sciences Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831-6197, USA [e-mail (ckj@ornl.gov), (mnb@ornl.gov)].
Our Crystallographic Orbifold Atlas illustrates space-group topology by showing asymmetric units of space groups wrapped up to form closed spaces, called Euclidean 3-orbifolds, which have singular
sets corresponding to the Wyckoff sites. The Gaussian density for a crystal structure, based on overlapping Gaussian density functions centered on atomic sites, has a critical-net representation with
critical points joined by density gradient-flow separatrices. Crystal-structure critical nets, wrapped into the corresponding space-group orbifolds, form Crystal Orbifold Morse Functions (COMFs) with
the singular set of the space group acting as a template for the critical net. COMFs provide a new approach for classifying both crystal structures and space groups.
For simple crystal structures, each component of the critical net, which includes (a) peaks, (b) passes, (c) pales, and (d) pits, as well as (ab), (bc), and (cd) separatrices, plus the (da) steepest
gradient paths, corresponds to a classical crystallographic lattice complex. This geometric arrangement of lattice complexes provides the global characteristics needed to characterize and classify
crystal structure families using only the asymmetric units of the unit cells wrapped up as COMFs. Morse functions on orbifolds have unique topological characteristics which currently are not well
characterized in the mathematical topology literature.
Crystallographers have long bemoaned the fact that traditional space group nomenclature is more a hindrance than a help in classification requiring systematic symmetry breaking. We are trying to
derive a more structurally related space-group classification based on the imbedding properties of a basis set of simple COMFs into space group orbifolds. This classification also will incorporate
space-group/subgroup relationships as given by the color Shubnikov groups represented as color orbifolds.
Please visit our WWW site at http://www.ornl.gov/ortep/topology.html
Research sponsored by the Laboratory Directed R&D Program of ORNL, managed by LMERC for U.S. DOE under contract DE-AC05-96OR22464.
|
{"url":"http://web.ornl.gov/sci/ortep/topology/ckjiucr.html","timestamp":"2014-04-18T22:17:12Z","content_type":null,"content_length":"2741","record_id":"<urn:uuid:c02cba53-1c9f-4333-b5f5-d8a78368b7c9>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00504-ip-10-147-4-33.ec2.internal.warc.gz"}
|
sock sorting problem?
Professor Tovey sorts his socks in the following way. He grabs a sock from the laundry basket and places it on the table. Then he grabs another sock from the basket. If it matches a sock on the
table, he folds the two together and puts them away. If the sock does not match a sock on the table, he places it on the table. He continues selecting socks one at a time until all of the socks
have been paired up and put away. Let us assume that his laundry basket initially has n pairs of socks. Among the 2n socks in the basket, each sock has exactly one partner. Let us assume
that each time Prof. Tovey selects a sock from the basket, he selects one of the socks at random from the basket. (In this context, "at random" means that each sock remaining in the basket is
equally likely to be selected.) Let us assume n = 50, so initially there are 100 socks in the basket. (a) After Prof. Tovey has selected 50 socks from the basket, what is the probability that
there are no socks on the table?
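For part (a), the table is empty after 50 selections exactly when the first 50 socks drawn form 25 complete pairs, so the probability can be computed directly (a minimal sketch; the counting argument is worth double-checking):

    from math import comb

    # Choose which 25 of the 50 pairs make up the first 50 socks, out of all
    # equally likely 50-sock subsets of the 100 socks in the basket.
    n = 50
    p = comb(n, n // 2) / comb(2 * n, n)
    print(p)   # about 1.25e-15, i.e. vanishingly unlikely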
|
{"url":"http://openstudy.com/updates/510d6ac3e4b09cf125bcaa77","timestamp":"2014-04-20T13:58:59Z","content_type":null,"content_length":"28498","record_id":"<urn:uuid:ebcbc9ef-6329-48da-91eb-7faceab3ddac>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00308-ip-10-147-4-33.ec2.internal.warc.gz"}
|
What Is Measured in Mathematics Tests
Volume 31, Issue 4 (2002)
What Is Measured in Mathematics Tests? Construct Validity of Curriculum-Based Measurement in Mathematics
Robin Schul Thurber
pp. 498—513
Abstract. Mathematics assessment is often characterized in the literature as being composed of two broad components: Computation and Applications. Many assessment tools are available to evaluate
student skill in these areas of mathematics. However, not all math tests can be used in formative evaluation to inform instruction and improve student achievement. Mathematics curriculum-based
measurement (M-CBM) is one tool that has been developed for formative evaluation in mathematics. However, there is considerably less technical adequacy information on M-CBM than CBM reading.
particular interest is the construct that M-CBM measures, computation or general mathematics achievement. This study utilized confirmatory factor analysis procedures to determine what constructs
M-CBM actually measures in the context of a range of other mathematics measures. Other issues examined in this study included math assessment in general and the role of reading in math assessment.
Participants were 207 fourth-grade students who were tested with math computation, math applications, and reading tests. Three theoretical models of mathematics were tested. Results indicated that a
two-factor model of mathematics where Computation and Applications were distinct although related constructs, M-CBM was a measure of Computation, and reading skill was highly correlated with both
math factors best fit the data. Secondary findings included the important role that reading skills play in general mathematics assessment.
|
{"url":"http://www.nasponline.org/publications/spr/abstract.aspx?ID=1605","timestamp":"2014-04-19T12:59:55Z","content_type":null,"content_length":"24183","record_id":"<urn:uuid:5ec2d643-d8f7-415f-adc1-9e3cd3b0687d>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00087-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Jean-Pierre Serre
Born: 15 September 1926 in Bages, France
Jean-Pierre Serre was educated at the Lycée de Nimes and then the École Normale Supérieure in Paris from 1945 to 1948. Serre was awarded his doctorate from the Sorbonne in 1951. From 1948 to 1954 he
held positions at the Centre National de la Recherche Scientifique in Paris.
In 1954 Serre went to the University of Nancy where he worked until 1956. From 1956 he held the chair of Algebra and Geometry in the Collège de France until he retired in 1994 when he became an
honorary professor. His permanent position in the Collège de France allowed Serre to spend quite a lot of time making research visits. In particular he spent time at the Institute for Advanced Study
at Princeton and at Harvard University.
Serre's early work was on spectral sequences. A spectral sequence is an algebraic construction like an exact sequence, but more difficult to describe. Serre did not invent spectral sequences, these
were invented by the French mathematician Jean Leray. However, in 1951, Serre applied spectral sequences to the study of the relations between the homology groups of fibre, total space and base space
in a fibration. This enabled him to discover fundamental connections between the homology groups and homotopy groups of a space and to prove important results on the homotopy groups of spheres.
Serre's work led to topologists realising the importance of spectral sequences. The Serre spectral sequence provided a tool to work effectively with the homology of fiberings.
For this work on spectral sequences and his work developing complex variable theory in terms of sheaves, Serre was awarded a Fields Medal at the International Congress of Mathematicians in 1954.
Serre's theorem led to rapid progress not only in homotopy theory but in algebraic topology and homological algebra in general.
Over many years Serre has published many highly influential texts covering a wide range of mathematics. Among these texts, which show the topics Serre has worked on, are Homologie singulière des
espaces fibrés (1951), Faisceaux algébriques cohérents (1955), Groupes d'algébriques et corps de classes (1959), Corps locaux (1962), Cohomologie galoisienne (1964), Abelian l-adic representations
(1968), Cours d'arithmétique (1970), Représentations linéaires des groupes finis (1971), Arbres, amalgames, SL[2] (1977), Lectures on the Mordell-Weil theorem (1989) and Topics in Galois theory (1992).
These books are outstanding and led to Serre being honoured. In 1995 he was awarded the Steele Prize for mathematical exposition and the citation for the award reads [2]:-
It is difficult to decide on a single work by a mathematician of Jean-Pierre Serre's stature which is most deserving of the Steele Prize. Any one of Serre's numerous other books might have served
as the basis of this award. Each of his books is beautifully written, with a great deal of original material by the author, and everything smoothly polished. It would be hard to make any
significant improvement on his expositions; many are the everyday standard references in their areas, both for working mathematicians and graduate students. Serre brings his whole mathematical
personality to bear on the material of these books; they are alive with the breadth of real mathematics and are an example to all of how to write for effect, clarity, and impact.
The references [4] and [5] provide a fascinating view of Serre's views on some aspects of his career up to 1985:-
Presently, the topic which amuses me most is counting points on algebraic curves over finite fields. It is a kind of applied mathematics: you try to use any tool in algebraic geometry and number
theory that you know of, ... and you don't quite succeed!
The interview in [4] and [5] also provides a chance to examine Serre's views on mathematics.
Serre has received numerous awards. In addition to the Fields Medal in 1954 he was elected a Fellow of the Royal Society of London in 1974. He has also been made an Officer Légion d'Honneur and
Commander Ordre National du Mérite. He has been elected to many national academies in addition to the Royal Society, in particular the academies of France, Sweden, United States and the Netherlands.
He was awarded the Prix Gaston Julia in 1970, the Balzan Prize in 1985, the Steele Prize, described above, from the American Mathematical Society in 1995 and the Wolf Prize in 2000. He has been
awarded honorary degrees from the University of Cambridge in 1978, the University of Stockholm in 1980, the University of Glasgow in 1983, the University of Harvard in 1998 and the University of Oslo
in 2002. In 2003 he was awarded the first Abel Prize by the Norwegian Academy of Science and Letters.
Article by: J J O'Connor and E F Robertson
List of References (6 books/articles)
Honours awarded to Jean-Pierre Serre
Fields Medal 1954
Speaker at International Congress 1962
BMC plenary speaker 1970, 1984, 1995
LMS Honorary Member 1973
Fellow of the Royal Society 1974
AMS Steele Prize 1995
Wolf Prize 2000
Abel Prize 2003
JOC/EFR © April 1998, School of Mathematics and Statistics, University of St Andrews, Scotland
|
{"url":"http://turnbull.mcs.st-and.ac.uk/~history/Biographies/Serre.html","timestamp":"2014-04-17T15:26:13Z","content_type":null,"content_length":"16511","record_id":"<urn:uuid:8036eb9f-4a41-41af-8ae8-1e61bae511e5>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00435-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Graph drawing: unrooted undirected tree graphs with specified edge lengths.
Has Joseph Felsenstein's equal daylight layout been analyzed by the graph drawing community? The following description is taken from his drawtree documentation:
"This iteratively improves an initial tree by successively going to each interior node, looking at the subtrees (often there are 3 of them) visible from there, and swinging them so that the arcs
of "daylight" visible between them are equal. This is not as fast as Equal Arc but should never result in lines crossing. It gives particularly good-looking trees, and it is the default method
for this program. It will be described in a future paper by me."
Although this description is sufficient for some kind of computer implementation, it probably needs to be fleshed out before it is mathematically appealing. For example, although "swinging a subtree"
will not cause edges to cross, it could possibly completely occlude a different vertex so that no daylight reaches it. Additionally it may be reasonable to assume that if two layouts both have equal
daylight arcs at every vertex, then the layout with maximal daylight should be preferred. Maybe a generalization of this idea would be a hypothetical criterion like "minimum daylight angular
resolution" which would maximize the minimum daylight angle over all vertices of the tree. Other generalizations could look at daylight angles integrated over branches of the drawing, possibly
allowing curved edges with constrained arc lengths.
As additional background, here's the drawtree description of a simple equal arc algorithm that draws these edge-distance-constrained trees in a way that can be proved to not depend on the root (up to
rotation and translation of the drawing). It has the advantage of being an exact algorithm that runs in finite time, but it has the disadvantage of making subjectively uglier trees than the equal
daylight layout:
"This method, invented by Christopher Meacham in PLOTREE, the predecessor to this program, starts from the root of the tree and allocates arcs of angle to each subtree proportional to the number
of tips in it. This continues as one moves out to other nodes of the tree and subdivides the angle allocated to them into angles for each of that node's dependent subtrees. This method is fast,
and never results in lines of the tree crossing."
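A rough sketch of that proportional angle allocation, as I read the quoted description (placing each child on the bisector of its arc at the specified edge length is my own simplification, and I have not verified that this naive version preserves the non-crossing guarantee):

    import math

    def equal_arc_layout(tree, lengths, root, pos=(0.0, 0.0), lo=0.0, hi=2 * math.pi):
        # tree: dict mapping each node to a list of its children (a rooted view of the tree)
        # lengths: dict mapping (parent, child) edges to the specified edge lengths
        # Each child subtree gets an arc proportional to its number of tips.
        def tips(v):
            kids = tree.get(v, [])
            return 1 if not kids else sum(tips(c) for c in kids)

        coords = {root: pos}
        children = tree.get(root, [])
        if not children:
            return coords
        total = sum(tips(c) for c in children)
        start = lo
        for c in children:
            span = (hi - lo) * tips(c) / total
            angle = start + span / 2
            cpos = (pos[0] + lengths[(root, c)] * math.cos(angle),
                    pos[1] + lengths[(root, c)] * math.sin(angle))
            coords.update(equal_arc_layout(tree, lengths, c, cpos, start, start + span))
            start += span
        return coords

    # tiny example: a root with a single tip on one side and a two-tip subtree on the other
    tree = {'r': ['a', 'b'], 'b': ['c', 'd']}
    lengths = {('r', 'a'): 1.0, ('r', 'b'): 1.0, ('b', 'c'): 1.0, ('b', 'd'): 1.0}
    print(equal_arc_layout(tree, lengths, 'r'))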
One of the few papers that cites equal daylight layout is Improved Layout of Phylogenetic Networks.
A recent graph drawing paper that looks at unrooted tree drawing with edge length constraints is Angle and Distance Constraints on Tree Drawings.
Added: An algorithm suggested in an answer by David Eppstein: Trees with convex faces and optimal angles. The notion of daylight is implicit in this drawing in the sense that every vertex sees
daylight. Edge lengths can be set arbitrarily after the angles have been determined, and the properties that edges do not cross and every vertex sees daylight will be preserved.
reference-request computational-geometry trees graph-drawing
1 Answer
I'm not familiar with Felsenstein's work, and the documentation available from the link you give is not very conducive to understanding it: is there no description of what kind of
layout algorithms they're using? Or even examples of its output?
However, re: Maybe a generalization of this idea would be a hypothetical criterion like "minimum daylight angular resolution" which would maximize the minimum daylight angle over all
vertices of the tree.: I think one of my papers may be relevant for this. See:
"Trees with convex faces and optimal angles." J. Carlson and D. Eppstein. arXiv:cs.CG/0607113. 14th Int. Symp. Graph Drawing, Karlsruhe, Germany, 2006. Lecture Notes in Comp. Sci. 4372,
2007, pp. 77-88.
It finds tree drawings with the property that, if the leaves are extended to infinity, the result is a decomposition of the plane into convex cells (in particular, every vertex can see
out to infinity or, in your terms, every vertex can see daylight) and that the minimum angle between consecutive edges that share a vertex is maximized. It doesn't exactly maximize the
minimum daylight angle (that may be zero for some vertices) but I suspect that could be done with minor modifications to the algorithms.
For instance, below is an example of its output; in this example, the optimum vertex angle it finds is a little over π/2:
@David: Thanks for your response. I would have linked this paper in my question if it either used the concept of daylight or if it allowed edge length constraint. I agree that the
documentation of equal daylight layout is sparse, and I would be very interested to see modifications of your algorithm which include edge distance constraints and the concept of
daylight. – psd Oct 27 '11 at 22:54
Edge distance constraints are trivial to add to my algorithm: you can make the edges any length you want and they won't cross each other. – David Eppstein Oct 27 '11 at 23:23
I guess that you can make the edge lengths any length you want and not only will they not cross each other, but the every-vertex-sees-daylight property will also be preserved? – psd
Oct 27 '11 at 23:33
Yes, every vertex will continue to see daylight but the angle of some vertices' daylight wedges may shrink to zero as the edges get long. Instead, if you want all opening angles to
stay bounded away from zero even in the limit as the leaf edges get very long relative to the rest of the drawing, just use a similar style drawing in which the leaf edges' slopes are
all equally spaced around the circle of possible slopes. – David Eppstein Oct 28 '11 at 1:48
I've edited the question to mention these interesting properties of your algorithm. If leaf edge slopes are equally spaced then the terminal fork angles will be small $(2 pi / n)$
when there are $n$ leaves. – psd Oct 28 '11 at 3:03
|
{"url":"http://mathoverflow.net/questions/79315/graph-drawing-unrooted-undirected-tree-graphs-with-specified-edge-lengths","timestamp":"2014-04-20T11:21:22Z","content_type":null,"content_length":"62211","record_id":"<urn:uuid:a11b2eab-339b-4f2b-a0c7-d9b88a0f6900>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00088-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Projective resolution of modules over rings which are regular in codimension n
All rings are Noetherian and commutative, modules are finitely generated.
It is a theorem of Serre that over a regular ring $R$, every module has a finite projective resolution.
More generally, if $R$ is regular in codimension n, what can we say about projective resolution of modules over $R$? For example, is it true that every ideal with height less than n has a finite
projective resolution?
Similarly, over a Noetherian seperated regular scheme $X$, every coherent sheaf has a finite resolution by vector bundles. The same questions can be asked for schemes as for rings.
Examples are extremely appreciated. Thanks!
Edit: It is not true that every ideal with height less than n has a finite projective resolution. As inkspot pointed out, if $R$ is normal, excellent, local and all height 1 ideals have finite
projective dimension, then $R$ is factorial. So the local ring of a cone at origin gives a counterexample.
Since factorial is equivalent to $Cl(R)=0$ for $R$ normal, this makes me wonder for a local ring $R$ whether every ideal with height less than n has a finite projective resolution is equivalent to:
1. $R$ is regular in codimension n plus some other condition on the ring such as normal and excellent.
2. Some kind of "generalized divisor class group"(may be Chow group) vanishes.
If $R$ is not local, I think condition 2 should be replaced by something like:
2'. Some part of $K_0(R)$ and $G_0(R)$ are isomorphic.
where $K_0$ is the Grothendieck group of the category of projective modules over $R$, $G_0$ is the Grothendieck group of the category of finite generated modules over $R$.
Could above be true?
ac.commutative-algebra homological-algebra ag.algebraic-geometry
2 Answers
If $R$ is normal (so regular in codimension $1$), excellent and local and all height $1$ ideals $I$ have finite projective dimension, then $R$ is factorial. So there are many
counter-examples. (I don't have a reference to hand, but the argument is Serre's proof that regular implies factorial. Say $X= Spec\ R$ and $j:U\to X$ is the regular locus. A finite
projective (= free) resolution of $I$ restricts to a free resolution of the restriction $\mathcal{I}$ of $I$ to $U$. Now $\mathcal{I}$ is locally free of rank $1$; taking the determinant
of its resolution shows that it is free, and then $I=j_*\mathcal{I}$ is free, which means that $R$ is factorial.)
Since factorial is equivalent to $Cl(R)=0$ for normal ring. Suppose $R$ is local, is there some kind of "generalized divisor class group" such that if it vanishes then every ideal with
height less than n has a finite projective resolution? – Liu Hang Jan 29 '11 at 5:08
Dear Liu,
I like your updated question a lot. To make things easier to discuss, let me define the following properties for a Noetherian local ring $R$ and $n>0$:
($A_n$) every ideal with height less than $n$ has a finite projective resolution.
($B_n$) $R_P$ is regular for each prime $P$ of height at most $n$.
($C_n$) The Chow groups $CH^i(R)=0$ for codimensions $1\leq i \leq n$.
Your question asked whether $A_n$ implies $B_n$ and $C_n$.
It is not hard to see that $A_n$ implies $B_n$: localizing a resolution of $R/P$ over $R$ shows that the residue field of $R_P$ has finite projective dimension, which forces $R_P$ to be
regular.
I do not know whether or not $A_n$ implies $C_n$, and I am probably not alone! As far as I known, it is a conjecture that Chow groups of codimensions at least one in any regular local
ring are $0$. It is known if $R$ is essentially of finite type over a field (and perhaps a bit more generally, see this paper).
However, it is relatively easy to show that if $A_n$ holds, then $CH^i(R)$ is torsion for codimensions $1\leq i \leq n$. The brief reason is if an ideal $I$ has positive height and finite
projective dimension, the class of $R/I$ is $0$ in the Grothendieck group $G_0(R)$, and this group is equal to the total Chow group after tensoring with $\mathbb Q$.
I think $A_n$ for even $n=2$ is a pretty strong condition. I would not be too surprised if it implies regularity of $R$. For example, it is not hard to show that $A_n$ implies $B_{n+1}$
if $R$ is Cohen-Macaulay and $n\geq 2$ (so if $\dim R=3$, $A_2$ forces $R$ to be regular).
UPDATE per the comments: If $R$ is not local, $A_n \Rightarrow B_n$ still holds. However, $C_n$ is hopeless. Just take any non-singular affine curve with non-trivial Picard group. Then
any module still has finite projective dimension since the ring is regular. The trouble is that $A_n$ is a local condition, but $C_n$ is not.
Dear Hailong, thanks for the informative answer. – Liu Hang Feb 10 '11 at 12:08
What about the case when $R$ is not local? – Liu Hang Feb 12 '11 at 2:30
1 Liu: see my update. – Hailong Dao Feb 23 '11 at 22:47
|
{"url":"http://mathoverflow.net/questions/53583/projective-resolution-of-modules-over-rings-which-are-regular-in-codimension-n?sort=newest","timestamp":"2014-04-21T13:11:20Z","content_type":null,"content_length":"61827","record_id":"<urn:uuid:ba361702-cc6b-424c-a8b7-e0c04f77dc5d>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00545-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The nonsense maths effect
Stephen Hawking was once told by an editor that every equation in a book would halve the sales. Curiously, the opposite seems to happen when it comes to research papers. Include a bit of maths in the
abstract (a kind of summary) and people rate your paper higher — even if the maths makes no sense at all. At least this is what a study published in the Journal Judgment and decision making seems to
Maths: incomprehensible but impressive?
Kimmo Eriksson, the author of the study, took two abstracts from papers published in respected research journals. One paper was in evolutionary anthropology and the other in sociology. He gave these
two abstracts to 200 people, all experienced in reading research papers and all with a postgraduate degree, and asked them to rate the quality of the research described in the abstracts. What the 200
participants didn't know is that Eriksson had randomly added a bit of maths to one of the two abstracts they were looking at. It came in the shape of the following sentence, taken from a third and
unrelated paper:
A mathematical model
That sentence made absolutely no sense in either context.
People with degrees in maths, science and technology weren't fooled by the fake maths, but those with degrees in other areas, such as the humanities, social sciences and education, were: they rated
the abstract with the tacked-on sentence higher. "The experimental results suggest a bias for nonsense maths in judgements of quality of research," says Eriksson in his paper.
The effect is probably down to a basic feature of human nature: we tend to be in awe of things we feel we can't understand. Maths, with its reassuring ring of objectivity and definiteness, can boost
the credibility of research results. This can be perfectly legitimate: maths is a useful tool in many areas outside of hard science. But Eriksson, who moved from pure maths to interdisciplinary work
in social science and cultural studies, isn't entirely happy with the way it is being used in these fields. "In areas like sociology or evolutionary anthropology I found mathematics often to be used
in ways that from my viewpoint were illegitimate, such as to make a point that would better be made with only simple logic, or to uncritically take properties of a mathematical model to be properties
of the real world, or to include mathematics to make a paper look more impressive," he says in his paper. "If mathematics is held in awe in an unhealthy way, its use is not subjected to sufficient
levels of critical thinking."
You can read Eriksson's paper here. There is also an interesting discussion of this and other bogus maths effects in this article in the Wall Street Journal.
Submitted by Anonymous on February 7, 2013.
", or to uncritically take properties of a mathematical model to be properties of the real world".....string theory for example :-)
Submitted by Anonymous on February 4, 2013.
Although I agree that mathematics (specially statistics) is often abused in social sciences to obtain results that should not resist critical review - from assuming stronger results than the maths
actually show, failing to apply consistent methodology (for example not controlling variable dependency) or just plain non-sequiturs, I think the effect described here is not so much concerned with
mathematics itself as with confidence on competence.
When someone requests a service, especially a knowledge service, there is an implicit trust in the intellectual honesty of the provider - if I request legal counseling, I do not expect that the
service provider will behave in an incompetent fashion. If he cites bogus laws, how can I detect it unless I am a legal expert myself? When the social scientist sees a paper with a mathematical
model, if he does not truly understand it, he at least makes the assumption of competence from the writer. Since a mathematical model usually provides a non-ambiguous problem description, the
presence of one usually does imply greater rigor - at least to the extent that other scientists can reproduce and test the model. In that way, the social scientist is making what I think is a
reasonable decision when prefering papers that have underlying mathematical models, at least to the extent that the underlying experiment usually will be more reproducible.
What does this say about review processes? I think the only lesson we can take from this experiment is that papers should be peer reviewed and rated by those with competence to understand the
entirety of the paper. If a reviewer is uncomfortable with the maths, he should consult another expert that can help him in that regard - no one can be burdened with the duty to know everything.
Submitted by Anonymous on January 22, 2013.
In the picture, on the third line from bottom, they seem to have lost the h divisor.
|
{"url":"http://plus.maths.org/content/nonsense-maths-effect","timestamp":"2014-04-18T16:03:44Z","content_type":null,"content_length":"29148","record_id":"<urn:uuid:a8a6792c-abf9-4a23-86d5-66984c2b47f2>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00160-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Calculus Tutors
Chandler, AZ 85224
Mathematics and Science Tutor
...Also, the application of algebra to higher mathematics is a really important skill to have. Advanced algebra has applications in business, engineering, medicine, and many other professions.
It is a critical tool in the development of science, technology, engineering,...
Offering 10+ subjects including calculus
|
{"url":"http://www.wyzant.com/geo_Tempe_calculus_tutors.aspx?d=20&pagesize=5&pagenum=2","timestamp":"2014-04-18T01:05:22Z","content_type":null,"content_length":"60231","record_id":"<urn:uuid:5458cd37-ebc1-4413-877f-fa5877490963>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00437-ip-10-147-4-33.ec2.internal.warc.gz"}
|
East Palo Alto, CA Algebra 1 Tutor
Find an East Palo Alto, CA Algebra 1 Tutor
...All of the students that I have worked with have increased their scores. I have worked with students specifically in Elementary Math for 3 years. I use the coaching and teaching skills that I
developed while obtaining my certification to help students understand math concepts.
15 Subjects: including algebra 1, English, Spanish, geometry
I have over 30 years of experience as a tutor and lecturer in mathematics and physics, both before and after I completed my physics PhD at the University of Illinois. I like tutoring so much that
I started doing it as a sophomore in college and have been doing it ever since. I am available for tutoring at middle school, high school, and college levels.
17 Subjects: including algebra 1, calculus, physics, geometry
...For the past 12 years, I have been teaching Japanese to students of a variety of ages from preschoolers to corporate executives, and skill levels from the entry level to the advanced level.
Among the many reasons you should consider me are: I have an excellent ability to individualize curric...
3 Subjects: including algebra 1, Japanese, prealgebra
...PS: I have a PhD in theoretical physics, am a Phi Beta Kappa, graduated from the two best universities in China, and was once a NASA scientist.I have a PhD in theoretical physics which
requires comprehensive training in mathematical methods and have working experience with differential equations ...
15 Subjects: including algebra 1, calculus, statistics, physics
...It was more lucrative financially, but less emotionally satisfying. Now that I have two small children at home, they take up a lot of my time and energy... yet I find that I miss teaching. I
still work as a grant writer, evaluator, and researcher on a consulting basis for a variety of education companies.
22 Subjects: including algebra 1, English, reading, chemistry
Related East Palo Alto, CA Tutors
East Palo Alto, CA Accounting Tutors
East Palo Alto, CA ACT Tutors
East Palo Alto, CA Algebra Tutors
East Palo Alto, CA Algebra 2 Tutors
East Palo Alto, CA Calculus Tutors
East Palo Alto, CA Geometry Tutors
East Palo Alto, CA Math Tutors
East Palo Alto, CA Prealgebra Tutors
East Palo Alto, CA Precalculus Tutors
East Palo Alto, CA SAT Tutors
East Palo Alto, CA SAT Math Tutors
East Palo Alto, CA Science Tutors
East Palo Alto, CA Statistics Tutors
East Palo Alto, CA Trigonometry Tutors
|
{"url":"http://www.purplemath.com/east_palo_alto_ca_algebra_1_tutors.php","timestamp":"2014-04-19T20:08:16Z","content_type":null,"content_length":"24456","record_id":"<urn:uuid:b457dbb8-36fd-4c9d-87f0-3f4350ab4f62>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00317-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Borrego Springs ACT Tutors
...But that is also what makes it such a great subject to teach. My favorite part of tutoring students in math and specifically in algebra is seeing that little light in their eyes that announces:
"I get it now!" Algebra 2 can be increasingly and sometimes frustratingly more difficult that Algebra ...
11 Subjects: including ACT Math, calculus, geometry, ASVAB
...Teaching is my first love. For the past five years, I have been a teacher's aide in Special Ed at a high school. Here I meet with different challenges, but helping children overcome their
learning challenges is extremely satisfying.
14 Subjects: including ACT Math, reading, geometry, algebra 1
...I have a B.A. in mathematics and a PhD in theoretical physics. Also experience in research in chemistry and computational methods. I have a B.A. in mathematics and a Ph.
29 Subjects: including ACT Math, chemistry, calculus, physics
...I am well versed, energetic and astute. I look forward to helping you solve those tough problems and helping you become a mastermind in your subject. I am immediately available.
18 Subjects: including ACT Math, Spanish, ESL/ESOL, SAT math
...I create a different lesson plan for each student based on their needs and what they struggle with, and then set them up for success. I understand that sometimes life gets hectic, so I only
require 6 hours for cancellation. That gives me enough time to rearrange my day but also gives you the flexibility to cancel even if you're just having a tough day.
37 Subjects: including ACT Math, calculus, geometry, statistics
|
{"url":"http://www.algebrahelp.com/Borrego_Springs_act_tutors.jsp","timestamp":"2014-04-19T20:44:09Z","content_type":null,"content_length":"24647","record_id":"<urn:uuid:b6fe33a7-1267-42ce-8d25-bcc5e68dbb6b>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00094-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Elliptical Orbits
Elliptical Orbits and Kepler's Three Laws of Planetary Motion
Since the orbits of the planets are ellipses, let us review a few basic properties of ellipses.
1. For an ellipse there are two points called foci (singular: focus) such that the sum of the distances to the foci from any point on the ellipse is a constant. In terms of the diagram shown below,
with "x" marking the location of the foci, we have the equation:
a + b = constant
that defines the ellipse in terms of the distances a and b.
2. The amount of "flattening" of the ellipse is termed the eccentricity. Thus, in the following figure the ellipses become more eccentric from left to right. A circle may be viewed as a special case
of an ellipse with zero eccentricity, while as the ellipse becomes more flattened the eccentricity approaches one. Thus, all ellipses have eccentricities lying between zero and one.
The orbits of the planets are ellipses but the eccentricities are so small for most of the planets that they look circular at first glance. For most of the planets one must measure the geometry
carefully to determine that they are not circles, but ellipses of small eccentricity. Pluto and Mercury are exceptions: their orbits are sufficiently eccentric that they can be seen by inspection to
not be circles.
3. The long axis of the ellipse is called the major axis, while the short axis is called the minor axis. Half of the major axis is termed a semi-major axis.
The length of a semi-major axis is often termed the size of the ellipse. It can be shown that the average separation of a planet from the Sun as it goes around its elliptical orbit is equal to the
length of the semi-major axis. Thus, by the "radius" of a planet's orbit one usually means the length of the semi-major axis.
The Laws of Planetary Motion
Kepler obtained Brahe's data after his death despite the attempts by Brahe's family to keep the data from him in the hope of monetary gain. There is some evidence that Kepler obtained the data by
less than legal means; it is fortunate for the development of modern astronomy that he was successful. Utilizing the voluminous and precise data of Brahe, Kepler was eventually able to build on the
realization that the orbits of the planets were ellipses to formulate his Three Laws of Planetary Motion.
Kepler's First Law:
I. The orbits of the planets are ellipses, with the Sun at one focus of the ellipse.
Kepler's First Law is illustrated in the image shown above. The Sun is not at the center of the ellipse, but is instead at one focus (generally there is nothing at the other focus of the ellipse).
The planet then follows the ellipse in its orbit, which means that the Earth-Sun distance is constantly changing as the planet goes around its orbit. For purpose of illustration we have shown the
orbit as rather eccentric; remember that the actual orbits are much less eccentric than this.
Why are the orbits elliptical and not circular? More about elliptical orbits
Kepler's Second Law:
II. The line joining the planet to the Sun sweeps out equal areas in equal times as the planet travels around the ellipse.
Kepler's second law is illustrated in the preceding figure. The line joining the Sun and planet sweeps out equal areas in equal times, so the planet moves faster when it is nearer the Sun. Thus, a
planet executes elliptical motion with constantly changing angular speed as it moves about its orbit. The point of nearest approach of the planet to the Sun is termed perihelion; the point of
greatest separation is termed aphelion. Hence, by Kepler's second law, the planet moves fastest when it is near perihelion and slowest when it is near aphelion.
Kepler's Third Law:
III. The ratio of the squares of the revolutionary periods for two planets is equal to the ratio of the cubes of their semi-major axes.
In equation form, (P1/P2)^2 = (R1/R2)^3. In this equation P represents the period of revolution for a planet and R represents the length of its semi-major axis. The subscripts "1" and "2" distinguish quantities for planet 1 and planet 2 respectively. The periods for the two planets are assumed to be in the same time units and the lengths of the semi-major axes for the two planets are assumed to be in the same distance units.
Kepler's Third Law implies that the period for a planet to orbit the Sun increases rapidly with the radius of its orbit. Thus, we find that Mercury, the innermost planet, takes only 88 days to orbit
the Sun but the outermost planet (Pluto) requires 248 years to do the same.
(For more detailed mathematical explanation, see this link.)
Calculations Using Kepler's Third Law
A convenient unit of measurement for periods is in Earth years, and a convenient unit of measurement for distances is the average separation of the Earth from the Sun, which is termed an astronomical
unit and is abbreviated as AU. If these units are used in Kepler's 3rd Law, the denominators in the preceding equation are numerically equal to unity and it may be written in the simple form
P (years)^2 = R (A.U.)^3
This equation may then be solved for the period P of the planet, given the length of the semi-major axis,
P (years) = R (A.U.)^(3/2)
or for the length of the semi-major axis, given the period of the planet,
R (A.U.) = P (years)^(2/3)
As an example of using Kepler's 3rd Law, let's calculate the "radius" of the orbit of Mars (that is, the length of the semi-major axis of the orbit) from the orbital period. The time for Mars to
orbit the Sun is observed to be 1.88 Earth years. Thus, by Kepler's 3rd Law the length of the semi-major axis for the Martian orbit is
R = P^(2/3) = (1.88)^(2/3) = 1.52 AU
which is exactly the measured average distance of Mars from the Sun. As a second example, let us calculate the orbital period for Pluto, given that its observed average separation from the Sun is
39.44 astronomical units. From Kepler's 3rd Law
P = R^(3/2) = (39.44)^(3/2) = 248 years
which is indeed the observed orbital period for the planet Pluto.
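These two worked examples are easy to reproduce in a few lines of code. The sketch below is an editorial illustration (not part of the original page); it simply applies P^2 = R^3 with P in years and R in astronomical units, using the Mars and Pluto values quoted above.

# Kepler's third law in Earth years and astronomical units: P^2 = R^3.

def period_from_radius(r_au):
    """Orbital period in years, given the semi-major axis in AU."""
    return r_au ** 1.5

def radius_from_period(p_years):
    """Semi-major axis in AU, given the orbital period in years."""
    return p_years ** (2.0 / 3.0)

print(radius_from_period(1.88))    # Mars: about 1.52 AU
print(period_from_radius(39.44))   # Pluto: about 248 years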
|
{"url":"http://www.astro-tom.com/technical_data/elliptical_orbits.htm","timestamp":"2014-04-20T10:47:17Z","content_type":null,"content_length":"12274","record_id":"<urn:uuid:4d16f16f-a518-4128-9326-a18e6c2025b8>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00475-ip-10-147-4-33.ec2.internal.warc.gz"}
|
One simple multiple choice question
April 26th 2013, 07:36 PM #1
Apr 2013
Hi, I have one multiple choice question.
When you are designing a research study and considering what hypothesis test you might use, a common rule of thumb is to select the most powerful test. Why is this a good idea?
a. The most powerful test is the test most likely to get the right answer.
b. The most powerful test is the test most likely to result in a type II error.
c. The most powerful test is the test least likely to fail to reject the null hypothesis when it is false.
d. The most powerful test is the most likely to not reject the null hypothesis when it is true.
Thanks! (Brief explanation will be very helpful to me.)
Re: One simple multiple choice question
Hey therexists.
Hint: The power of a test measures the ability to reject the null hypothesis when it is not true. Do you know what the power represents in terms of probability regarding H0 and H1?
Re: One simple multiple choice question
I don't know exactly what the power represents in terms of H0 and H1.
Re: One simple multiple choice question
According to your hint, I guess the answer is C, but I'm not sure of "least likely".
Re: One simple multiple choice question
The definition of power is Power = P(H0 rejected | H0 false) = 1 - B, where B = P(H0 not rejected | H0 false) is the probability of a Type II error.
Your Type I and Type II error probabilities are Type I = P(H0 rejected | H0 true) and Type II = B = P(H0 not rejected | H0 false).
Re: One simple multiple choice question
so then the answer is B?
Re: One simple multiple choice question
No it's the opposite: You want to reduce a Type II error which means you want B to be as small as possible and therefore 1 - B to be as large as possible.
Re: One simple multiple choice question
The answer is C or D. You said the power of a test measures the "ability" to reject and C said "fail to reject". So then answer is D?
Re: One simple multiple choice question
It can't be b) because you want to minimize the Type II error.
It can't be d) because the power looks at the alternative hypothesis.
In terms of a) we would be tempted to say yes but it is not completely true.
The reason why I think the answer is c) is that being least likely to fail to reject H0 when H0 is false means being most likely to reject H0 when H0 is false, and thus to accept H1 when H1 is true, which is the definition of power.
Remember that Power = P(H1 accepted|H1 true) and so you want to maximize this probability to maximize the power of your test.
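To make these definitions concrete, here is a small simulation sketch (an editorial illustration, not a reply from the original thread). It estimates the power of a one-sided z-test with known standard deviation; the sample size, true mean shift, and significance level are assumed example values, not anything specified in the question.

import math, random, statistics

# Estimate power = P(reject H0 | H0 false) by simulating data under H1.
# H0: mean = 0. Assumed alternative: mean = 0.5, sigma = 1, n = 25, alpha = 0.05 (one-sided).
random.seed(0)
n, true_mean, sigma = 25, 0.5, 1.0
z_critical = 1.645          # one-sided 5% critical value of the standard normal
trials = 10000

rejections = 0
for _ in range(trials):
    sample = [random.gauss(true_mean, sigma) for _ in range(n)]
    z = statistics.mean(sample) / (sigma / math.sqrt(n))
    if z > z_critical:
        rejections += 1

power = rejections / trials
print("estimated power (1 - B):", power)
print("estimated Type II error (B):", 1 - power)

A more powerful test is simply one for which the estimated power is larger at the same significance level.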
Re: One simple multiple choice question
Yes, the answer is C. I was doing online homework. Thank you very much Chiro, and your explanations are really good. I will read it again before exam.
|
{"url":"http://mathhelpforum.com/statistics/218260-one-simple-multiple-choice-question.html","timestamp":"2014-04-18T17:31:09Z","content_type":null,"content_length":"51662","record_id":"<urn:uuid:1d473646-109c-41b3-aacf-1dfeb824344f>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00107-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Winthrop Harbor Algebra 2 Tutor
Find a Winthrop Harbor Algebra 2 Tutor
...I wanted to begin tutoring again now that I have free time, and I really enjoyed tutoring in the past. In high school, I worked in the tutoring center at school and did tutoring on the side as
well. I can tutor students from elementary to high school in sciences, math, and basic Spanish.
15 Subjects: including algebra 2, chemistry, statistics, Spanish
...I believe that when students feel comfortable that they have the gist of these works, and build on their knowledge of them, that they appreciate how useful this knowledge is. I can help
students to build that appreciation. I had an experience proofreading, editing, and writing headlines for news and sports for a newspaper in Colorado Springs.
20 Subjects: including algebra 2, reading, English, writing
...I have experience in all areas of high school math including Precalculus and Statistics. I teach predominately juniors and seniors. I am a math teacher at a local high school.
26 Subjects: including algebra 2, calculus, geometry, statistics
...I first began my journey helping others in college when I realized that I had a strong work ethic that allowed me to teach myself even if the material presented in class was inadequate. I often
found myself mentoring colleagues through Chemistry classes, Philosophy classes, Spanish classes, and ...
26 Subjects: including algebra 2, chemistry, English, reading
Hi!! My name is Harry O. I have been tutoring high school and college students for the past six years. Previously I taught at Georgia Institute of Technology from which I received a Bachelor's in
Electrical Engineering and a Master's in Applied Mathematics.
18 Subjects: including algebra 2, physics, calculus, geometry
|
{"url":"http://www.purplemath.com/winthrop_harbor_algebra_2_tutors.php","timestamp":"2014-04-21T05:18:59Z","content_type":null,"content_length":"24220","record_id":"<urn:uuid:91447ea0-1d2e-4204-a913-a0a9afe289f3>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00026-ip-10-147-4-33.ec2.internal.warc.gz"}
|
August 13th 2009, 01:58 PM #1
Apr 2009
Jan had the same summer job for four years, 1993 through 1996, earning $250 in 1993, $325 in 1994, $400 in 1995, and $475 in 1996.
a. what is the slope of this line? what does it represent?
It is 75 and it represents the yearly increase.
b. What points on this line are meaningful in this context?
Aren't they all equally meaningful?
c. Guess what Jan's earnings were for 1992 and 1998, assuming the same summer job.
1992 = $175 1998 = $625
d. Write an inequality that states that Jan's earnings in 1998 were within 10% of the amount you guessed.
This is the question i really needed help on.
I set up l 625 - X l < 62.5
-62.5 < 625 - X < 62.5
562.5 < X < 687.5
Is this correct?
I believe so. That's how I would answer it (although I may be naughty and leave it as |625-X|< 62.5 because (a) it's technically an inequality, and (b) I'm lazy and can't be bothered to do
arithmetic, and (c) it doesn't *say* you've got to work it out into a particular form).
As for "meaningful" I don't know what that means in this context.
I think so, any predicted points pre or post this series may not be as meaningful. For example if you were to find a value for 1989 it would be negative.
It is!
Thank you!!!
And I really appreciate your help on the lattice point question.
I kept on drawing the graph thinking I got the points wrong.
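As a quick numerical sanity check of that last step (an editorial addition, not one of the original replies), the absolute-value form and the interval form can be compared on a few sample values:

# Check that |625 - x| < 62.5 and 562.5 < x < 687.5 agree on sample values.
for x in (500, 562.5, 563, 625, 687, 687.5, 700):
    absolute_form = abs(625 - x) < 62.5
    interval_form = 562.5 < x < 687.5
    print(x, absolute_form, interval_form, absolute_form == interval_form)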
|
{"url":"http://mathhelpforum.com/algebra/97965-inequality.html","timestamp":"2014-04-19T22:35:22Z","content_type":null,"content_length":"44429","record_id":"<urn:uuid:a4f91151-8dd2-435f-832c-144cd490b2ab>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00427-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Valley Stream Statistics Tutor
Find a Valley Stream Statistics Tutor
...I have a BBA degree in Accounting, and would like to give subjects in this area a try. I have had to take subjects such as Management courses, economics, Business Law I and II, marketing and
more. Hopefully, you will give me a chance at this subject area.
47 Subjects: including statistics, chemistry, reading, accounting
...I am completely familiar with the concepts of geometry from my experience with teaching AP Physics and college physics courses. I am proficient in all topics of a full year college level
Physics curriculum. The course may include use of intermediate algebra, trigonometry and calculus.
16 Subjects: including statistics, physics, calculus, geometry
...Louis, and minored in German, economics, and writing. While there, I tutored students in everything from counting to calculus, and beyond. I then earned a Masters of Arts in Teaching from Bard
College in '07.
26 Subjects: including statistics, physics, calculus, geometry
...Let me be your guide and companion in your next academic journey and you will find the trip far easier and more pleasant than you imagined! I have taught algebra techniques not only as a topic
on its own but also in conjunction with the physical sciences and biology since the 1980s. I am an econo...
50 Subjects: including statistics, chemistry, calculus, physics
...What makes mathematics different from other subjects is that it builds upon itself and so the need of a good foundation is vital. The qualities that I have as a tutor are that I'm very
knowledgeable and enthusiastic with the subject matter, and I can deliver it in a very simple and understanding fash...
18 Subjects: including statistics, calculus, algebra 1, algebra 2
|
{"url":"http://www.purplemath.com/valley_stream_ny_statistics_tutors.php","timestamp":"2014-04-18T04:09:33Z","content_type":null,"content_length":"24147","record_id":"<urn:uuid:d9e5065d-1597-46ef-92f4-68f6e781c356>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00070-ip-10-147-4-33.ec2.internal.warc.gz"}
|
saa congruency postulate
Definition of SAA Congruency Postulate
● If two angles and a non-included side of one triangle are congruent to the corresponding two angles and non-included side of another triangle, then the two triangles are congruent.
More about SAA Congruency Postulate
● SAA postulate can also be called as AAS postulate.
● The side between two angles of a triangle is called the included side of the triangle.
● SAA postulate is one of the conditions for any two triangles to be congruent.
Example of SAA Congruency Postulate
● The triangles ABC and PQR are congruent, i.e., ΔABC ≅ ΔPQR, since ∠CAB = ∠RPQ, AC = PR, and ∠ABC = ∠PQR.
Solved Example on SAA Congruency Postulate
If the two triangles given are congruent by SAA postulate then identify the value of angle Q.
A. 80°
B. 60°
C. 75°
D. 70°
Correct Answer: A
Step 1: If two angles and the non-included side of one triangle are congruent to two angles and the non-included side of another triangle, then the two triangles are congruent by the SAA postulate.
Step 2: As the given triangles are congruent by SAA postulate
∠FDE = ∠RPQ, DF= PR, and ∠DEF = ∠PQR.
Step 3: And given ∠DEF = 80° it implies ∠PQR = 80° by SAA postulate.
Related Terms for SAA Congruency Postulate
● Angle
● Congruent
● Side
● Triangle
|
{"url":"http://www.icoachmath.com/math_dictionary/SAA_Congruency_Postulate.html","timestamp":"2014-04-18T08:18:47Z","content_type":null,"content_length":"8557","record_id":"<urn:uuid:940a18c0-6ac8-4390-8b1c-a4e8140b4136>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00245-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Unlikeliest Final Four
Posted by Neil Paine on March 28, 2011
Just how unlikely is this year’s Final Four of Kentucky, UConn, Virginia Commonwealth, and Butler?
Well, going by one measure, the odds of it happening were 0.00003% — only two entries (out of 5.9 million) correctly picked the four teams in ESPN.com’s Bracket Challenge. But I decided to see how this
year’s improbable group matched up against other inexplicable Final Fours since the tournament expanded to 64 teams in 1985. Here were the Final Fours with the highest average seed # since then:
Year Team A Seed Team B Seed Team C Seed Team D Seed Avg #1s
2011 KEN 4 CONN 3 VCU 11 BUTL 8 6.50 0
2000 UNC 8 FLA 5 WISC 8 MICS 1 5.50 1
2006 GEOM 11 FLA 3 LSU 4 UCLA 2 5.00 0
1986 KAN 1 DUKE 1 LSU 11 LOU 2 3.75 2
1992 IND 2 DUKE 1 MICH 6 CIN 4 3.25 1
2010 MICS 5 BUTL 5 WVIR 2 DUKE 1 3.25 1
1985 STJO 1 GTWN 1 VILL 8 MEM 2 3.00 2
1990 ARKA 4 DUKE 3 GEOT 4 UNLV 1 3.00 1
1996 MIST 5 SYRA 4 UMAS 1 KEN 1 2.75 2
2005 LOU 4 ILL 1 MICS 5 UNC 1 2.75 2
Aside from 2011, two other years stand out at the top of the list: 2000, when two 8-seeds crashed the Final Four, and 2006, when no #1 seeds made it (but George Mason did). In terms of pre-tournament
likelihood, how do those years stack up to 2011?
To answer that question, I simulated each tournament from scratch ten thousand times using the seed-based win probability formula I introduced here. In my 10,000 simulations, here’s how often each
team made the Final Four:
Year Team Count Probability
2011 KEN 1194 11.9%
2011 CONN 1631 16.3%
2011 VCU 24 0.2%
2011 BUTL 174 1.7%
2006 LSU 1140 11.4%
2006 UCLA 2261 22.6%
2006 FLA 1649 16.5%
2006 GEOM 50 0.5%
2000 FLA 749 7.5%
2000 UNC 192 1.9%
2000 MICS 3028 30.3%
2000 WISC 211 2.1%
Multiplying the probabilities together, we find that the 2006 Final Four had a 0.00213% chance of happening based on seeds, the 2000 Final Four had a 0.00092% chance of happening, and the 2011 Final
Four had a staggering 0.00008% chance (about 1 in 1,229,650) of happening. Since the field expanded to 64 teams, I think it’s safe to say that this year’s Final Four is easily the most improbable.
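The multiplication step can be reproduced directly from the simulation counts in the table above. The short sketch below is an editorial illustration of that arithmetic; it treats the four teams' Final Four appearances as independent events, exactly as the post does.

# Joint probability of a Final Four = product of each team's simulated probability
# (counts out of 10,000 simulated tournaments, taken from the table above).
counts = {
    2011: [1194, 1631, 24, 174],     # KEN, CONN, VCU, BUTL
    2006: [1140, 2261, 1649, 50],    # LSU, UCLA, FLA, GEOM
    2000: [749, 192, 3028, 211],     # FLA, UNC, MICS, WISC
}

def joint_probability(team_counts, trials=10000):
    p = 1.0
    for c in team_counts:
        p *= c / trials
    return p

for year, team_counts in sorted(counts.items()):
    print(year, "{:.5%}".format(joint_probability(team_counts)))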
8 Responses to “The Unlikeliest Final Four”
1. Dave Says:
March 28th, 2011 at 12:48 pm
How does it compare if you take out the play in game that VCU had to play?
2. Neil Paine Says:
March 28th, 2011 at 1:15 pm
If you take away the play-in, VCU’s expected path would be the same as George Mason faced in 2006 (about 0.5%), so the likelihood of the 2011 Final Four would become 0.00017%. That still would
make it the most improbable Final Four of the 64-team era, though.
3. AHL Says:
March 28th, 2011 at 1:32 pm
What’s sad is that we all saw this coming. Everyone was all “none of the 1 seeds strike me as strong favorites” and yet in that uncertainty many more people went chalk, including national media
talking heads (Obama lol). Yet even the more advanced “pick Texas to hedge your bets” analysis didn’t work out so hot. Is there an expected time when someone finally picks the Perfect Bracket?
4. BSK Says:
March 28th, 2011 at 7:28 pm
Does this include VCU’s “play in” game?
I’m also reminded of a thought experiment I read about (the name eludes me, but I’m sure someone here knows it). The premise was that if a lottery exists such that any individual ticket had such
infinitesimal odds of winning as to consider them zero, could it still be assumed that SOMEONE was guaranteed to win the lottery? Basically, if no one had a practical shot to win, then is it
possible that no one wins? To apply it here, if we assumed that no Final Four was likely, could we conclude that maybe the Final Four just won’t happen??? I SURE HOPE NOT!
5. deron Says:
March 28th, 2011 at 8:44 pm
This has been one the strangest tournaments I’ve ever seen. VCU’s win sets up a historic game with Butler, two unranked teams in Final Four game. This occurrence says a lot about college
basketball today, anyone can win it all.
6. David Zukerman Says:
March 29th, 2011 at 6:45 pm
Anyway to determine if the eight TV timeouts a game have an impact? Seems to me, the flow of the game goes for no more than a bit more than five minutes before there is a pause of four minutes.
Then another four minutes or so, and pause of four minutes,
another four minutes of play or so. Pause again four minutes. Then less than four minutes to the half (or end of regulation play).
These pauses — this interference with the normal momentum of play, this removal of the flow of game from control of coaches and players — all this has no impact on results?
Where does the money go? For coaches pay and recruiting costs?
|
{"url":"http://www.sports-reference.com/cbb/blog/?p=228","timestamp":"2014-04-17T21:35:41Z","content_type":null,"content_length":"30452","record_id":"<urn:uuid:08931c3a-7fb2-47b6-81df-ea72d2ffea7e>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00158-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Numbers (NVGR)
This is a blog about math. If that scares you, go ahead and return to reading about video games. You won't hurt my feelings.
What's your favorite number? You might call it a lucky number; your go-to number when someone asks you for a numerical input. Do you genuinely feel that this number is better than others, or do you
just feel a seemingly irrational attachment to it?
I like numbers. One of my favorites is 892. It's a nice, friendly number. Less than 1000, it's of manageable size, but has plenty of syllables, so it sounds impressive, but not too unwieldy. Beyond
that, there's no particular reason I like it better than any others. Whenever I see it in some unexpected context, I smile and think, "There it is again." Other numbers that capture my interest are
232 and multiples of 3, especially those whose digits are all multiples of 3. I find them more elegant than their peers.
I've never felt that any number has brought me luck, since luck doesn't make sense to me. Truly random processes may or may not exist, but for all intents and purposes we can call the throw of a die
a random event. Humans are good at seeing patterns, even patterns that don't exist. Attachment to items, even abstract items like numbers, can develop from exposure to apparently random experiences.
We get a winning or losing streak and we associate it with some correlating event, like the presence of an item, the performance of a ritual, or the choice of a number. Your lucky number is no more
likely to win at roulette than any other, but you have to choose something, right?
Math captured my interest around middle school, when I entered a mathematics program run by the University of Buffalo. I was introduced to number theory in seventh grade, when we were tasked with
defining numbers in terms of set theory.
As it turns out, this can be done. We can then define a successor function that builds the next number by taking the previous number and adjoining that number itself as a new element (Succ(n) = n ∪ {n}). All we have to do is define 0 as the empty set:
0 = {}
1 = Succ({}) = { {} }
2 = Succ({ {} }) = { {}, { {} }}
...and so on.
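A tiny sketch of this construction, added here as an editorial illustration, uses Python frozensets; it is one of several equivalent encodings, and the names ZERO and succ are just illustrative choices.

# Natural numbers as pure sets: 0 is the empty set, Succ(n) = n together with {n}.
ZERO = frozenset()

def succ(n):
    # Union of a frozenset with a set gives a frozenset, so the result stays hashable.
    return n | {n}

one = succ(ZERO)     # { {} }
two = succ(one)      # { {}, { {} } }
three = succ(two)

# The "value" of a number built this way is simply how many elements it contains.
print(len(ZERO), len(one), len(two), len(three))   # 0 1 2 3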
Interestingly enough, this construction shows up in computer science. In 1936, Alonzo Church published a paper in which he defined lambda calculus. It was essentially a formal programming language with exactly three constructions:
names - x
function declarations - λx.y
function applications - (x y)
Names are handles by which functions are referred to and applications result in functions, meaning everything in the language is a function. The most basic function in lambda calculus is the identity
λx.x - Takes an argument and gives it back.
So doing something like this:
(λx.x 1)
would apply the identity function to "1", resulting in 1. Many functional programming languages, especially the Lisp family, are based on lambda calculus.
But hold on. This is a language without numbers. Where did I get that 1 from? And if everything is a function, isn't it inherently weaker than a language that has more types (including numbers)?
As it turns out, we can build numbers out of these abstract functions. One way to do it is called the Church encoding, developed by the same Church I mentioned earlier. It looks like the following:
0 = λf.λx.x
1 = λf.λx.(f x)
2 = λf.λx.(f (f x))
3 = λf.λx.(f (f (f x)))
...and so on. Thus, every number is a function whose structure is both unique and usefully predictable.
Looks familiar, doesn't it? On the surface, at least, the Church encoding of natural numbers closely resembles the above set theory representation. Church and his students even constructed perfectly
reasonable ways to manipulate these numbers, including your basic arithmetic operations.
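These encodings carry over almost literally to any language with first-class functions. The sketch below is an editorial illustration in Python rather than raw lambda calculus; succ, add, and to_int are just illustrative helper names.

# Church numerals: the numeral n applies a function f to an argument x exactly n times.
zero = lambda f: lambda x: x
one  = lambda f: lambda x: f(x)
two  = lambda f: lambda x: f(f(x))

# Successor: apply f one more time than n does.
succ = lambda n: lambda f: lambda x: f(n(f)(x))

# Addition: apply f m times on top of n applications.
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

# Convert a Church numeral back to an ordinary int by counting applications of f.
to_int = lambda n: n(lambda k: k + 1)(0)

three = succ(two)
print(to_int(zero), to_int(one), to_int(two), to_int(three))   # 0 1 2 3
print(to_int(add(two)(three)))                                 # 5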
My point is that numbers are interesting, even though we typically take them for granted. When you count something, you don't think about how you're counting, you just start at 1 (or, if you're a
computer scientist, 0) and generate the successors automatically.
In fact, the concept of 0 as a number is an interesting topic on its own, and it might even be called the most interesting natural number. In some cultures, it started as a placeholder between
symbols, so something like 102 could be distinguished from 12 or 10020. It was only relatively recently that it was accepted as a number in its own right, for several philosophical and practical reasons.
On the other end of the spectrum, we have infinity. Most people don't consider infinite numbers. If you have two inifinite sets, they're both equally big, right?
As it turns out, that's not the case. Some infinite sets are countable and others are not. A countable set is one that is comparable to the natural numbers. That is, if we can assign one natural
number to each element in some set, we have a countable set.
But if the natural numbers are inifinite, how can you have an uncountable set? After all, we can keep assigning numbers forever and never run out.
However, there are uncountably more real numbers than there are natural numbers. Intuitively, you might get an idea of why this is. Take any two natural numbers and count how many numbers are between
them. This will obviously be some natural number. Now take any two real numbers and count how many numbers are between them. It can't be done. Regardless of how close together those numbers are,
there is always an uncountably large amount of numbers between them. This is no proof, but it plants the idea in your head.
In fact, we have a hierarchy of infinite sets: some are more infinite than others!
And let's not even get started on the transcendentals. Look up Euler's identity and try not to be impressed.
I'm a student of computer science, and I honestly chose the field because I like to program and mess with computers. But computer science reintroduced me to mathematics when I started to study
complexity theory, and I found the parallels between counting and computation quite profound.
Back in high school, people would ask, "When are we ever going to use this stuff?" Today, I don't care if I never use number theory. I'm just glad I learned about it.
So, what spurred this appreciative rant of all that is numerical? I was re-playing Metal Gear Solid 4 the other day and saw the side of Drebin's APC:
"893," I thought. "That's only one less than 892, and 892 is a damned good number."
|
{"url":"http://www.destructoid.com/blogs/Zyrshnikashnu/numbers-nvgr--177185.phtml","timestamp":"2014-04-19T00:20:42Z","content_type":null,"content_length":"94540","record_id":"<urn:uuid:c4da3a6f-038b-42a0-8eb8-06b7580efcb1>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00560-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mathematical Systems & Solutions Salary
Mathematical Systems & Solutions average salary is $78,541, median salary is $79,997 with a salary range from $69,222 to $84,947.
Mathematical Systems & Solutions salaries are collected from government agencies and companies. Each salary is associated with a real job position. Mathematical Systems & Solutions salary statistics
is not exclusive and is for reference only. They are presented "as is" and updated regularly.
Job Title Salaries City Year More Info
Mathematicians 84,947-84,947 Pasadena, CA, 91050 2010 Mathematical Systems & Solutions Mathematicians Salaries (1)
Mathematical Systems & Solutions Pasadena, CA Salaries
Mathematical Scientist 79,997-79,997 Pasadena, CA, 91050 2007 Mathematical Systems & Solutions Mathematical Scientist Salaries (2)
Mathematical Systems & Solutions Pasadena, CA Salaries
Mathematician 69,222-69,222 Pasadena, CA, 91050 2007 Mathematical Systems & Solutions Mathematician Salaries (1)
Mathematical Systems & Solutions Pasadena, CA Salaries
Mathematical Systems & Solutions Jobs
Mathematical Systems & Solutions Salary News & Advice
Related Mathematical Systems & Solutions Salary
Mathematical Systems & Solutions Jobs
Recent Mathematical Systems & Solutions Salaries (April 18, 2014)
Network System Engineer 3s Network $45,323 Foothill Ranch, CA, 92610 01/02/2012
Software Developer Multivision $70,000 Washington, DC, 20001 01/22/2012
Mathematical Systems & Solutions salary is full-time annual starting salary. Intern, contractor and hourly pay scale vary from regular exempt employee. Compensation depends on work experience, job
location, bonus, benefits and other factors.
|
{"url":"http://www.salarylist.com/company/Mathematical-Systems-Solutions-Salary.htm","timestamp":"2014-04-18T11:39:26Z","content_type":null,"content_length":"30216","record_id":"<urn:uuid:e8f88b10-ef2b-4069-b4ea-efa55c238ba7>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00024-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Newton's Law of Cooling
1. The aim of the experiment is to verify Newton's Law of Cooling of different materials and different liquids.
2. To draw the cooling curve.
Temperature difference in any situation results from energy flow into a system or energy flow from a system to surroundings. The former leads to heating, whereas latter leads to cooling of an object.
Newton's Law of Cooling states that the rate of change of the temperature of the body is proportional to the difference between the temperature of the body and that of the surrounding medium. This statement leads
to the classic equation of exponential decline over time which can be applied to many phenomena in science and engineering, including the discharge of a capacitor and the decay in radioactivity.
Newton's Law of Cooling is useful for studying water heating because it can tell us how fast the hot water in pipes cools off. A practical application is that it can tell us how fast a water heater
cools down if you turn off the breaker when you go on vacation.
Suppose that a body with initial temperature T[1]°C, is allowed to cool in air which is maintained at a constant temperature T[2]°C.
Let the temperature of the body be T°C at time t.
Then by Newton's Law of Cooling,
dT/dt = -k(T - T[2])     ... (1)
where k is a positive proportionality constant. Since the temperature of the body is higher than the temperature of the surroundings, T - T[2] is positive. Also, the temperature of the body is decreasing, i.e. it is cooling down, so the rate of change of temperature is negative.
The constant 'k' depends upon the surface properties of the material being cooled.
The initial condition is given by T = T[1] at t = 0.
Solving (1) gives the general solution
T = T[2] + C e^(-kt)     ... (2)
Applying the initial condition gives C = T[1] - T[2].
Substituting the value of C in equation (2) gives
T = T[2] + (T[1] - T[2]) e^(-kt)
This equation represents Newton's law of cooling.
Since k > 0, as t --> ∞, e^(-kt) --> 0 and T --> T[2],
or we can say that the temperature of the body approaches that of its surroundings as time goes on.
The graph drawn between the temperature of the body and time is known as cooling curve. The slope of the tangent to the curve at any point gives the rate of fall of temperature.
In general,
T(t) = T[A] + (T[H] - T[A]) e^(-kt)
where
T(t) = temperature at time t,
T[A] = ambient temperature (temperature of the surroundings),
T[H] = temperature of the hot object at time 0,
k = positive constant, and
t = time.
Example of Newton's Law of Cooling:
This kind of cooling data can be measured and plotted and the results can be used to compute the unknown parameter k. The parameter can sometimes also be derived mathematically.
1. To predict how long it takes for a hot object to cool down at a certain temperature.
2. To find the temperature of a soda placed in a refrigerator by a certain amount of time.
3. It helps to indicate the time of death given the probable body temperature at the time of death and current body temperature.
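As a concrete sketch of how the law is used in practice (an editorial illustration; the temperatures, times, and variable names below are assumed example values, not data from the original experiment page), k can be estimated from a single later reading and then used for prediction:

import math

# Newton's law of cooling: T(t) = T_A + (T_H - T_A) * exp(-k * t).
def temperature(t, t_ambient, t_hot, k):
    return t_ambient + (t_hot - t_ambient) * math.exp(-k * t)

# Assumed example: a body at 90 deg C cools in 25 deg C surroundings and reads
# 70 deg C after 10 minutes.
t_ambient, t_hot = 25.0, 90.0
t1, reading1 = 10.0, 70.0

k = -math.log((reading1 - t_ambient) / (t_hot - t_ambient)) / t1
print("estimated k:", k)

# Predict the temperature after 30 minutes, and the time needed to reach 40 deg C.
print("T(30 min):", temperature(30.0, t_ambient, t_hot, k))
time_to_40 = -math.log((40.0 - t_ambient) / (t_hot - t_ambient)) / k
print("time to reach 40 deg C (min):", time_to_40)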
|
{"url":"http://amrita.vlab.co.in/?sub=1&brch=194&sim=354&cnt=1","timestamp":"2014-04-21T12:08:02Z","content_type":null,"content_length":"18199","record_id":"<urn:uuid:5367490e-86f6-415f-adf4-f4a47ecb1be8>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00384-ip-10-147-4-33.ec2.internal.warc.gz"}
|
To find the differential dy, solved already, need to check with your expertise :)
March 9th 2013, 11:44 AM #1
To find the differential dy
I have the function y = tan(3x + 6); need to find dy when x=3 and dx=0.03
By the formula dy = f'(x)*dx ;
f'(x)= sec^2(3x+6)(4)dx = (1/cos^2(3x+6))(4)dx, (need to calculate in radians)
so,forf'(3) = (1/cos^2(3x+6)) = (1/cos^2(3(3)+6)) = (1/cos^2(15)) = (1/cos^2(45-30)).... am I on the right track , i'm confused on which identity to apply further
Just give me a hint, I don't need the solution,
Thank you
Last edited by dokrbb; March 9th 2013 at 04:15 PM. Reason: corrected from sec to tan
Re: To find the differential dy, solved already, need to check with your expertise :)
10 viewed and no reply - does this mean this is correct
Re: To find the differential dy
I have the function y = tan(3x + 6); need to find dy when x=3 and dx=0.03
By the formula dy = f'(x)*dx ;
f'(x)= sec^2(3x+6)(3)dx = (1/cos^2(3x+6))(3)dx, (need to calculate in radians)
so,for f'(3) = (1/cos^2(3x+6)) = (1/cos^2(3(3)+6)) = (1/cos^2(15)) = (1/cos^2(45-30)).... am I on the right track , i'm confused on which identity to apply further
Thank you
I realized I did completely wrong, I corrected all the mistakes but I'm confused on which identity to apply further
Just give me a hint, I don't need the solution,
I continued in this way:
f'(3) = (1/cos^2(3x+6)) = (1/cos^2(3(3)+6)) = (1/cos^2(15)) = (1/[(1+cos2(15)/2)]) = (2/(1+cos30)) = {2/[1+ sqrt(3)/2]} = [2/[(2+sqrt(3))/2]] = 4/((2+sqrt(3)),
so, dy = (4/((2+sqrt(3)))(3)(0.03) = 0.36/(2+sqrt(3)) = ... am I correct this time, please, tell me I am
Last edited by dokrbb; March 9th 2013 at 04:46 PM.
|
{"url":"http://mathhelpforum.com/calculus/214498-find-differential-dy-solved-alread-need-chek-your-expertize.html","timestamp":"2014-04-16T08:34:27Z","content_type":null,"content_length":"36520","record_id":"<urn:uuid:e719a7d1-280e-4fdc-aa3f-5be200b8567f>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00375-ip-10-147-4-33.ec2.internal.warc.gz"}
|
If points A, B, and C lie on a circle of radius 1, what is
If points A, B, and C lie on a circle of radius 1, what is [#permalink] 19 Mar 2011, 12:20
Posted by zaur2010 (Intern):
Quote:
If points A, B, and C lie on a circle of radius 1, what is the area of triangle ABC?
1. AB^2 =AC^2+BC^2
2. Angle CAB equals 30 degrees
The previous answers in this forum tended toward C as the correct answer. I've marked B, not C, and let me explain why.
statement (1) suggests that there's a right triangle, BUT the angle sides might be different and the area of triangle might vary with these angle mesaures. E.g. when angles
follow 45-45-90 the area of triangle would be 1, while with 30-60-90 the area of triangle is Sqrt(3)/2 Not Sufficient;
statement (2) Very interesting statement offering the inscribed angle measurement. If we find the angle CAB intercepted at the center, we get (30`)*2 OR 60`. Additionally,
with the centrally intercepted angle we have the isosceles triangle with the base angles 60` which convert into the equilateral triangle, since all angles are 60` (BC=OC=OB).
SO, side BC is equal to radius 1.
If we continue the line BO from the point O up-to the point D we receive height DC for the side BC of triangle ABC. Now we need to calculate the height which is easy by
knowing triangle BCD is a right triangle and angle CBD=60`. So, DC is Sqrt(3). The area of triangle ABC using all these properties ---> base (BC)*height (CD)/2 = 1*Sqrt(3)/2,
Sufficient as we can answer the questions area of triangle ABC=Sqrt(3)/2 therefore answer B.
dr03.JPG [ 11.91 KiB | Viewed 2861 times ]
Re: Area of Triangle inside a Circle [#permalink] 19 Mar 2011, 12:29
The answer is C. Take a minute and think about why the height of triangle ABC would be CD. The height of triangle ABC should be AE, where E is the point obtained by extending line BC from C and dropping a perpendicular from A.
Re: Area of Triangle inside a Circle [#permalink] 19 Mar 2011, 13:16
Posted by a Math Forum Moderator:
Ans is C. Possible inscribed triangles with [Attachment: Inscribed_Triangle_ABC.PNG]
All these triangles have different areas.
Re: Area of Triangle inside a Circle [#permalink] 19 Mar 2011, 18:55
Posted by VeritasPrepKarishma (Veritas Prep GMAT Instructor), Expert's post
zaur2010 wrote: [the question and reasoning quoted above]
First of all, I think it's a great effort. It is always refreshing when people try to analyze from different perspectives. There was one error though... Look at the diagram below and figure out which of the following colorful altitudes could help you find the area of the triangle. They are all perpendicular to their respective bases.
Attachment: Ques2.jpg
I think you will agree that the purple line cannot be used as an altitude to find the area of this triangle... I hope this helps you in identifying your mistake.
Re: Area of Triangle inside a Circle [#permalink] 19 Mar 2011, 19:21
Except for the right-angled triangle, the geometry cannot be defined with two parameters. First assume S1 is right angled: one parameter (the hypotenuse is known). The other parameter is S2 (one angle is known).
Posted from my mobile device
Re: Area of Triangle inside a Circle [#permalink] 29 Mar 2011, 11:18
Posted by Warlock007 (Manager):
zaur2010 wrote: [the question quoted above]
My take is A. By having (1) AB^2 = AC^2 + BC^2 we are clear that it will be a right-angled triangle with the right angle at C, and then AB will be the diameter = 2, and then we know all the angles and hence can find out the area. While with statement (2), angle CAB equals 30 degrees, there are various possibilities of triangles with different heights and different bases.
Please clarify.
Re: Area of Triangle inside a Circle [#permalink] 29 Mar 2011, 17:17
@zaur2010, please extend line BC upwards and draw a perpendicular line from A dropping onto that line; that will be the height of the triangle.
I don't know how to draw geometry figures online, else I would have done so.
Re: If points A, B, and C lie on a circle of radius 1, what is [#permalink] 27 Dec 2013, 15:02
Posted by jlgdr (VP):
zaur2010 wrote: [the question and reasoning quoted above]
I think I may have made a mistake. See, from Statement 1 we have that ABC is a right triangle; since it is inscribed in the circle, the hypotenuse = diameter = 2. So then how can we find the area? Well, can't we extend a height perpendicular to the diameter, which will in fact be the radius = 1, to find it? With base and height we could have the area.
Would anybody be so kind as to explain why this reasoning is wrong?
Thanks a lot
Re: If points A, B, and C lie on a circle of radius 1, what is [#permalink] 27 Dec 2013, 18:47
Posted by an Intern member:
Jlgdr, Warlock, I think you are considering the triangle as an isosceles triangle with diameter AB. But the point C can be very near to, say, point A or B and the angle at C would still be a right angle, so statement 1 would still be true. But the height (the perpendicular from C onto AB) would change, and thus so would the area.
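The point made in this reply is easy to see numerically. In the sketch below (an editorial illustration, not one of the original posts), C is placed at different spots on a unit circle whose diameter is AB, so every position gives a right angle at C, yet the areas differ; this is exactly why statement (1) alone is not sufficient.

import math

# AB is a diameter of a unit circle, so AB = 2 and, by Thales' theorem, any point C
# on the circle makes angle ACB a right angle. The area still depends on where C sits.
def inscribed_right_triangle_area(theta):
    # C = (cos(theta), sin(theta)); its height above the base AB (the x-axis) is |sin(theta)|.
    return 0.5 * 2.0 * abs(math.sin(theta))

for degrees in (90, 60, 30, 10):
    print(degrees, "->", round(inscribed_right_triangle_area(math.radians(degrees)), 4))
# 90 -> 1.0 (the 45-45-90 case), 60 -> 0.866 (the 30-60-90 case), and the area keeps
# shrinking as C slides toward B.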
Re: If points A, B, and C lie on a circle of radius 1, what is [#permalink] 27 Dec 2013, 19:36
Posted by jlgdr (VP):
I get it, that's it. I need to get better at visualizing those figures. Cheers!
Posted from my mobile device
Similar topics Author Replies Last post
The points A,B and C lie on a circle that has a radius 4. If positive soul 3 20 Jun 2006, 04:35
Pints A,B,C lie on a circle with radius 1. What is the area anonymousegmat 6 24 Jul 2007, 12:01
Points A, B, and C lie on a circle of radius. What is the bmwhype2 5 19 Nov 2007, 05:42
9 Points A, B, and C lie on a circle of radius 1. What is the Economist 18 27 Sep 2009, 08:00
Points A, B, C and D lie on a circle of radius 1. Let x be study 0 06 Oct 2013, 03:22
|
{"url":"http://gmatclub.com/forum/if-points-a-b-and-c-lie-on-a-circle-of-radius-1-what-is-111134.html","timestamp":"2014-04-23T14:30:35Z","content_type":null,"content_length":"187031","record_id":"<urn:uuid:01b36b14-cf00-4e4c-839f-d99a14485af7>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00194-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Percentiles and Box Plots
We saw that the median splits the data so that half lies below the median. Often we are interested in the percent of the data that lies below an observed value.
We call the r^th percentile the value such that r percent of the data fall at or below that value.
If you score in the 75^th percentile, then 75% of the population scored lower than you.
Suppose the test scores were
22, 34, 68, 75, 79, 79, 81, 83, 84, 87, 90, 92, 96, and 99
If your score was the 75, in what percentile did you score?
There were 14 scores reported and there were 4 scores at or below yours. We divide
4/14 × 100% ≈ 29%
So you scored in the 29^th percentile.
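The same computation can be written out in a couple of lines; the sketch below is an editorial illustration using the scores listed above, with the percentile rank taken as the percent of scores at or below yours, as in the text.

scores = [22, 34, 68, 75, 79, 79, 81, 83, 84, 87, 90, 92, 96, 99]
your_score = 75

at_or_below = sum(1 for s in scores if s <= your_score)
percentile_rank = at_or_below / len(scores) * 100
print(round(percentile_rank))   # about 29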
There are special percentiles that deserve recognition.
1. The second quartile (Q[2]) is the median or the 50th percentile
2. The first quartile (Q[1]) is the median of the data that falls below the median. This is the 25th percentile
3. The third quartile (Q[3]) is the median of the data falling above the median. This is the 75th percentile
We define the interquartile range as the difference between the first and the third quartile
IQR = Q[3] - Q[1]
An example will be given when we talk about Box Plots.
Box Plots
Another way of representing data is with a box plot. To construct a box plot we do the following:
1. Draw a rectangular box whose bottom is the lower quartile (25th percentile) and whose top is the upper quartile (75th percentile).
2. Draw a horizontal line segment inside the box to represent the median.
3. Extend horizontal line segments ("whiskers") from each end of the box out to the most extreme observations.
Box plots can either be shown vertically or horizontally. The steps above describe how to create a vertical box plot, while the graph below shows an example of a horizontal box plot that shows how students' commuting miles are distributed.
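The quartiles and the interquartile range behind a box plot can be computed along these lines (an editorial sketch; it follows the definition above, taking Q1 and Q3 as the medians of the lower and upper halves, which can differ slightly from the default convention in some statistics libraries):

def median(values):
    v = sorted(values)
    n = len(v)
    mid = n // 2
    return v[mid] if n % 2 else (v[mid - 1] + v[mid]) / 2

def quartiles(values):
    v = sorted(values)
    n = len(v)
    lower_half, upper_half = v[:n // 2], v[(n + 1) // 2:]
    return median(lower_half), median(v), median(upper_half)

scores = [22, 34, 68, 75, 79, 79, 81, 83, 84, 87, 90, 92, 96, 99]
q1, q2, q3 = quartiles(scores)
print("Q1 =", q1, "median =", q2, "Q3 =", q3, "IQR =", q3 - q1)

Plotting libraries can then draw the box-and-whisker diagram directly from the raw data (for example, matplotlib's boxplot function).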
|
{"url":"http://ltcconline.net/greenl/courses/201/descstat/percentileBox.htm","timestamp":"2014-04-17T03:49:04Z","content_type":null,"content_length":"6387","record_id":"<urn:uuid:80346692-340d-4a14-8062-fd611f0b3b76>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00436-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Syllabus Intermediate Biometry
PLNTSOIL 661
Fall Semester, Every Year
Instructor Wesley R. Autio
205 Bowditch Hall
Telephone: (413) 545-2963
Fax: (413) 545-0260
Email: autio@pssci.umass.edu
Office hours by appointment.
Texts Three course packets are available at the Textbook Annex. They include the course outline, homework assignments, and course notes in the first; 10 years of old exams (2001-10) and their solutions in the second; and SAS laboratories and SAS homework assignments in the third.
Optional Text: Damon, R. A., Jr. and W. R. Harvey. 1987. Experimental Design, ANOVA, and Regression. Harper & Row, New York. 508 pages (This book is out of print, but is published as a
course packet.).
Times Tuesdays & Thursdays, 11:15 AM to 12:30 PM (136 Hasbrouck)
(Fall: Tuesdays from 1:00 PM to 2:30 PM, 1667 W.E.B. Du Bois Library)
Class Students will be given the background necessary to design and analyze the results from field and laboratory experiments. The class will focus on statistical analysis for agricultural
Description scientists, but will be relevant for students in a variety of biological fields. Computer-assisted analysis will be presented and will be utilized by students in assignments.
Grading Three exams, a comprehensive take-home final, and written assignments will be used to assess student progress.
Class I. Basic terminology
A. Symbolic notation
B. Degrees of freedom
C. Mathematical models
D. Descriptive statistics
II. Analysis of variance
A. Introduction
B. One-way classification
C. Two-way classification, one measurement
D. Two-way classification, repeated measurements
E. Three+-way classification
F. Nested classification
G. Fixed and random effects
1. Fixed model
2. Random model
3. Mixed model
H. Unequal numbers
I. Mean separation, partitioning of sums of squares
1. Linear comparisons
2. Orthogonal polynomial comparisons
3. Range tests
4. Mean separation within interactions
III. Regression
A. The regression model
B. Linear and curvi- linear regression
C. Prediction
D. Analysis of covariance
IV. Experimental design
A. Completely randomized
B. Randomized complete block
C. Latin square
D. Greco-Latin square
E. Split plot
F. Combined designs
2011 Class Schedule
Readings & Practice Exercises

Course outline   Text sections        Pages              Exercises
I.               1.1-1.5.6            1-10               1.1-1.7
II.              2.1-2.9              12-31              2.1-2.6
II.              3.1-3.19             38-112             3.1-3.15
II.              4.1-4.19             152-173            4.1-4.8
III.             5.1-5.9, 5.14-5.15   185-233, 247-256   5.1-5.11
III.             8.1-8.7              389-411            8.1-8.4
IV.              6.1-6.15             280-314            6.1-6.8
IV.              7.1-7.9.1            317-368            7.1-7.8
Updated September 6, 2011.
|
{"url":"http://people.umass.edu/~autio/Biometry.htm","timestamp":"2014-04-18T05:50:47Z","content_type":null,"content_length":"9950","record_id":"<urn:uuid:36b235bb-6678-4217-b84a-7c4079bb1a75>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00481-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Here's the question you clicked on:
OS App Development Thread is Relocated
|
{"url":"http://openstudy.com/updates/50ff20c7e4b0426c63681ac8","timestamp":"2014-04-17T04:11:59Z","content_type":null,"content_length":"104028","record_id":"<urn:uuid:677ed5de-e3c3-4a7a-a871-4a53bba0a0a6>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00444-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Working Group: Biological Problems Using Binary Matrices
(L to R): Daniel B. Stouffer, Robert Dorazio, Richard Barker, Diego Vazquez, Steven Schwager, Stefano Allesina, Nicholas Gotelli, Joshua Ladau, Steven Kembel; (Not Pictured): Jennifer Dunne, Dan
Simberloff, Edward Connor.
Topic: Working Group on Biological Problems Using Binary Matrices
Meeting dates: May 26-29, December 10-13, 2009; May 4-7, 2010; December 14-17, 2010
Edward Connor (Dept. of Biology, San Francisco State Univ., San Francisco, CA);
Josh Ladau (Gladstone Institutes, GICD, San Francisco, CA)
Objectives: Many fundamental questions in ecology cannot be addressed experimentally because at the relevant large spatial and temporal scales, experimentation is impractical, unethical, or
impossible. Instead, to investigate these questions inferences must be made from observational data. Null model testing comprises a key tool for making these inferences, allowing large-scale effects
of processes such as environmental filtering, competition, and facilitation to be inferred from observations of species ranges, abundance distributions, body sizes, and other similar traits. Three
types of ecological data that are commonly analyzed using null models include binary presence-absence matrices, which give the distribution of species over a set of sites; ecological networks such as
food webs and pollinator networks; and phylogenetic patterns in community composition. All of these data can be coded in a binary form.
The Binary Matrices working group focused on null model tests of binary data, with a particular emphasis on the aforementioned examples. A key problem with null model tests is that they are generally
developed and justified based on intuition. However, multiple tests can all seem intuitively appropriate for the same data, yet yield conflicting conclusions. Hence, a pressing issue is developing
and implementing an overarching mathematical framework to guide the development and application of null model tests. One such framework is optimality; for instance consideration of methods that have
minimal Type II error rates subject to controlled Type I error rates. Further application of the optimality framework is possible, and the development and application other types of guiding
frameworks are worth considering.
To implement null model tests developed in an optimality framework, it is often necessary to simulate random quantities from non-standard probability distributions. For instance, simulating
presence-absence matrices from a uniform distribution over the set of binary matrices with the observed marginal totals is necessary for implementing methods that control Type I error rates under a
relatively broad class of statistical models. The algorithmic challenges associated with these simulations can be formidable, and require the development of unbiased Markov Chain Monte Carlo
algorithms and analysis of their mixing times and convergence properties.
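One widely used chain of this kind is the checkerboard swap algorithm; the sketch below is a simplified illustration (not code produced by the working group) that assumes a 0/1 NumPy matrix and sets aside the burn-in and mixing-time issues that are exactly the hard part noted above.

import numpy as np

def swap_chain(m, steps=10000, rng=None):
    # Checkerboard swaps: each accepted move flips the corners of a 2x2 submatrix of the
    # form [[1,0],[0,1]] or [[0,1],[1,0]], which preserves every row and column total.
    rng = rng if rng is not None else np.random.default_rng()
    m = m.copy()
    nrow, ncol = m.shape
    for _ in range(steps):
        r1, r2 = rng.choice(nrow, size=2, replace=False)
        c1, c2 = rng.choice(ncol, size=2, replace=False)
        sub = m[np.ix_([r1, r2], [c1, c2])]
        if sub[0, 0] == sub[1, 1] and sub[0, 1] == sub[1, 0] and sub[0, 0] != sub[0, 1]:
            m[np.ix_([r1, r2], [c1, c2])] = 1 - sub
    return m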
The development of null model tests for species co-occurrence, ecological networks, and community phylogenetics have all relied on the assumption that the presence-absence matrix, food web, or
community phylogeny is fully known. In the context of species co-occurrence this has usually been articulated as assuming that the probability of detecting a species if it is present is equal to 1.
However, with limited sampling, differences in species’ abundance, and differences in species’ behaviors, some species are easier to detect than others rendering such an assumption suspect at best. A
separate literature has developed arising from statistical inference based on mark-recapture-release data, that has begun to examine community patterns relaxing the assumption that detection
probability = 1. Further application of these methods to problems of multi-species co-occurrence patterns (number of species > 2) or to other community ecological patterns is ripe for development.
Focusing generally on these three issues, this working group will aim to foster the investigation and development of solutions to these problems.
The goals of the Binary Matrices Working group were to bring together biologists, statisticians, and mathematicians to address these and other related issues to improve quantitative inference from
binary data in biology.
Meeting Summaries for NIMBioS Working Group:
Biological Problems Using Binary Matrices
Meeting 1: May 26-29, 2009 Agenda (PDF) Participants Evaluation report (PDF)
Meeting 1 summary. The May 2009 meeting of the working group began with presentations that provided an overview of the existing analyses of binary matrices in biology. The presentations fostered
extensive discussion on the challenges facing the analysis of binary matrices and approaches for addressing these challenges. Following the presentations, the working group broke into subgroups to
develop specific research projects on the analysis of binary matrices. The subgroups identified four areas in which improved analyses are strongly needed:
- the analysis of food webs,
- pollination networks,
- incidence-based co-occurrence patterns, and
- abundance-based co-occurrence patterns.
The working group also developed a general strategy for developing improved analyses by combining ideas from ecology and mathematical statistics. The working group aims to complete four papers by
the end of 2009 and meets again in December 2009.
Meeting 2: Dec 10-13, 2009 Agenda (PDF) Participants Evaluation report (PDF)
Meeting 2 summary. The meeting began with presentations from the four subgroups (the analysis of food webs; pollination networks; incidence-based co-occurrence patterns; and abundance-based
co-occurrence patterns) and discussions about the progress that had been made since the last meeting. Following the presentations, the subgroups worked to further their projects. Substantial progress
was made acquiring data sets for analysis, coding statistical methods, and discussing data-related matters and models. The next meeting for the group is scheduled for May 2010.
Meeting 3: May 4-7, 2010 Agenda (PDF) Participants Evaluation report (PDF)
Meeting 3 summary. The third meeting began with discussions led by the four subgroups of the Working Group, i.e., analysis of food webs, pollination networks, incidence-based co-occurrence patterns,
and abundance-based co-occurrence patterns. Currently, the food web subgroup is focusing on detecting universal patterns of trophic interactions. The pollination network subgroup is refining models
and biological hypotheses. The incidence-based co-occurrence subgroup is focusing on characterizing the Plackett-Luce model of community assembly, while the abundance-based co-occurrence subgroup is
finishing analyses and writing a paper incorporating abundance data into examination of species co-occurrence patterns. The Working Group aims to complete four papers by the end of 2010, one on each
of the four sub-areas. The next meeting is scheduled for December 2010.
Meeting 4: Dec 14-17, 2010 Agenda (PDF) Participants Evaluation report (PDF)
Meeting 4 summary. The final meeting began with discussions led by the four subgroups of the Working Group, i.e., analysis of food webs, pollination networks, incidence-based co-occurrence patterns,
and abundance-based co-occurrence patterns. The discussions focused on outlining the progress that had been made since the last meeting and delineating the work remaining on the papers that the
working group is writing. Following the presentations, the subgroups worked to further their projects. The food web subgroup focused on detecting universal patterns of trophic interactions. In the
pollination network subgroup, work focused on refining models and biological hypotheses. The incidence-based co-occurrence subgroup focused on applying log-linear models to the problem of community
assembly and applying the Plackett-Luce model to data on the colonization process. And in the abundance-based co-occurrence subgroup, work focused on finishing analyses and writing. The
abundance-group also discussed extensions of the hierarchical model they were developing to address other problems such as food webs, pollinator networks, and community phylogenetics. The working
group is aiming to complete four papers by the end of 2011, one on each of the four sub-areas. Currently, each participant is working on two to three of these papers. There are no further meetings
scheduled for the Binary Matrices working group. However, three group members will meet in San Francisco in April 2011 to continue collaborations (Dorazio, Ladau, and Connor), and three group
members (Dorazio, Allesina, and Ladau) will present papers at the International Environmetrics Society (TIES) meeting in July 2011 on research performed as part of the working group.
NIMBioS Working Groups are chosen to focus on major scientific questions at the interface between biology and mathematics. NIMBioS is particularly interested in questions that integrate diverse
fields, require synthesis at multiple scales, and/or make use of or require development of new mathematical/computational approaches. NIMBioS Working Groups are relatively small (10-12 participants),
focus on a well-defined topic, and have well-defined goals and metrics of success. Working Groups will typically meet 2-4 times over a two-year period, with each meeting lasting 3-5 days; however,
the number of participants, number of meetings, and duration of each meeting is flexible, depending on the needs and goals of the Group.
|
{"url":"http://www.nimbios.org/workinggroups/WG_binary_matrices.html","timestamp":"2014-04-16T07:12:41Z","content_type":null,"content_length":"24683","record_id":"<urn:uuid:b5fa6272-822a-41dc-b21c-0a8698995fdf>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00210-ip-10-147-4-33.ec2.internal.warc.gz"}
|
quadrature, in mathematics, the process of determining the area of a plane geometric figure by dividing it into a collection of shapes of known area (usually rectangles) and then finding the limit (as
the divisions become ever finer) of the sum of these areas. When this process is performed with solid figures to find volume, the process is called cubature. A similar process called rectification is
used in determining the length of a curve. The curve is divided into a sequence of straight line segments of known length. Because the definite integral of a function determines the area under its
curve, integration is still sometimes referred to as quadrature.
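For illustration (an added sketch, not part of the entry above), the following approximates the area under f(x) = x^2 on [0, 1] with n rectangles and shows the sums approaching the exact value 1/3 as the divisions become finer:

def rectangle_area(f, a, b, n):
    # midpoint rule: sum the areas of n rectangles of equal width
    width = (b - a) / n
    return sum(f(a + (i + 0.5) * width) for i in range(n)) * width

for n in (10, 100, 1000):
    print(n, rectangle_area(lambda x: x * x, 0.0, 1.0, n))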
|
{"url":"http://media-3.web.britannica.com/eb-diffs/902/485902-8909-62136.html","timestamp":"2014-04-21T04:45:11Z","content_type":null,"content_length":"1449","record_id":"<urn:uuid:3eada3bb-5a68-4070-aa58-e737fb5fbba5>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00450-ip-10-147-4-33.ec2.internal.warc.gz"}
|
st: RE: saving output of -correlate, c- in two submatrixes for later use
From "Nick Cox" <n.j.cox@durham.ac.uk>
To <statalist@hsphsun2.harvard.edu>
Subject st: RE: saving output of -correlate, c- in two submatrixes for later use
Date Tue, 6 Nov 2007 17:32:16 -0000
-ereturn list- shows nothing after -correlate- because -correlate- is an
r-class command. This is documented, but think of it this way: in what
way is -correlate- fitting a model?
You can get correlation matrices in various ways. -findit correlation
matrix- points to various, including
FAQ . . . Obtaining the correlation ... W. 12/99
How can I obtain the correlation matrix as a Stata ...

corrmat from http://www.stata.com/users/sdriver
corrmat. Create and save correlation or covariance matrices, by Shannon Driver, StataCorp <sdriver@stata.com>. Creates a matrix, covariance matrix, or both and optionally saves them in return list and/or matrix dir.

cpcorr from http://fmwww.bc.edu/RePEc/bocode/c
'CPCORR': module for correlations for each row vs each column variable. cpcorr produces a matrix of correlations for rowvarlist versus colvarlist. cpspear does the same for Spearman correlations. This may thus be oblong, and need not be square. Both also allow a ...

Also check out -makematrix- from SSC.
You have two choices: get the whole matrix, and extract; or get separate
submatrices. -cpcorr- and -makematrix- support the latter approach.
The sum of squared values I would get by squaring a variable and finding
its sum, using -summarize, meanonly- and picking up r(sum).
Here is an example:
. gen ysq = y^2
. su ysq, meanonly
. scalar ysum = r(sum)
I have a panel of some dozens of variables: ns x* z*. I would like to:
(1) save some of the output of -correlate x* z*, c- in two matrixes, X
(containing only that portion of the output about the (co)variances
between the x* variables and themselves) and Z (containing only that
portion of the output about the covariances between the x* variables and
the z* variables), e.g.
x1 x2 ... xn z1 ... zn
z1 .......useless........
zn ..of...the...output...
Why can't I find the output with -ereturn list-, such as after -mean()-?
(2) and then to find the matrix A such that Z*A=X/sq where sq is the
scalar given by the sum of the squares of the values contained in
variable ns. (I know that A is -matrix A= inv(Z)*X/sq-, but has Stata 9
any simple command to calculate sq? And can I save the value of sq in
some variable for later use in a for i=1...sq statement?)
|
{"url":"http://www.stata.com/statalist/archive/2007-11/msg00130.html","timestamp":"2014-04-19T19:49:11Z","content_type":null,"content_length":"8212","record_id":"<urn:uuid:39213df0-3233-4764-ad80-41d186475614>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00117-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Heap queue algorithm
This module provides an implementation of the heap queue algorithm, also known as the priority queue algorithm.
Heaps are arrays for which heap[k] <= heap[2*k+1] and heap[k] <= heap[2*k+2] for all k, counting elements from zero. For the sake of comparison, non-existing elements are considered to be infinite.
The interesting property of a heap is that heap[0] is always its smallest element.
The API below differs from textbook heap algorithms in two aspects: (a) We use zero-based indexing. This makes the relationship between the index for a node and the indexes for its children slightly
less obvious, but is more suitable since Python uses zero-based indexing. (b) Our pop method returns the smallest item, not the largest (called a "min heap" in textbooks; a "max heap" is more common
in texts because of its suitability for in-place sorting).
These two make it possible to view the heap as a regular Python list without surprises: heap[0] is the smallest item, and heap.sort() maintains the heap invariant!
To create a heap, use a list initialized to [], or you can transform a populated list into a heap via function heapify().
The following functions are provided:
Push the value item onto the heap, maintaining the heap invariant.
Pop and return the smallest item from the heap, maintaining the heap invariant. If the heap is empty, IndexError is raised.
Transform list x into a heap, in-place, in linear time.
Pop and return the smallest item from the heap, and also push the new item. The heap size doesn't change. If the heap is empty, IndexError is raised. This is more efficient than heappop()
followed by heappush(), and can be more appropriate when using a fixed-size heap. Note that the value returned may be larger than item! That constrains reasonable uses of this routine.
Example of use:
>>> from heapq import heappush, heappop
>>> heap = []
>>> data = [1, 3, 5, 7, 9, 2, 4, 6, 8, 0]
>>> for item in data:
... heappush(heap, item)
>>> sorted = []
>>> while heap:
... sorted.append(heappop(heap))
>>> print sorted
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> data.sort()
>>> print data == sorted
True
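A further example (not part of the official documentation above) shows how heapreplace() can maintain a fixed-size heap, here keeping the three largest items seen in a stream:

>>> from heapq import heapify, heapreplace
>>> stream = [5, 1, 9, 3, 14, 7, 2, 11]
>>> largest = stream[:3]
>>> heapify(largest)              # min-heap of the first three items
>>> for item in stream[3:]:
...     if item > largest[0]:     # beats the smallest of the current top three
...         heapreplace(largest, item)
>>> largest.sort()                # sorting a heap keeps the heap invariant
>>> largest
[9, 11, 14]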
|
{"url":"http://wingware.com/psupport/python-manual/2.3/lib/module-heapq.html","timestamp":"2014-04-18T15:41:10Z","content_type":null,"content_length":"9058","record_id":"<urn:uuid:82b57c03-9e66-4e2d-a95a-46358191b4d8>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00646-ip-10-147-4-33.ec2.internal.warc.gz"}
|
non-zero, divergence-free vector fields on 2-torus
Suppose $X$ is a nowhere vanishing vector field on the 2-torus that preserves the standard area element $\mu=d\theta\wedge d\zeta$. By area preservation, $$ i_X\mu=dh+ad\theta+bd\zeta, $$ for some
smooth function $h$ and constants $a,b\in\mathbb{R}$. Is there a diffeomorphism $\phi$ of the 2-torus such that $\phi^*X$ is a (re-scaling of) the constant vector field $Y=b\frac{\partial}{\partial\theta}-a\frac{\partial}{\partial\zeta}$?
Based on the discussion in this old paper by T. Saito
it seems like the answer definitely could be yes, but I'm having a hard time proving it myself or finding a reference that addresses the question.
Progress so far
-When one of the constants, say $b$, is zero (note that both cannot be zero) the answer is yes. In this case one such $\phi$ is $\phi^{-1}(\theta,\zeta)=(\theta+\frac{1}{a}h(\theta,\zeta),\zeta)$.
-When the maximum value of $|\partial h/\partial\theta|^2+|\partial h/\partial\zeta|^2$ is less than $a^2+b^2$, then you can use Moser's trick (nice discussion of it here http://
concretenonsense.wordpress.com/2009/09/03/symplectic-geometry-ii/) to prove the answer is yes. In particular, you can show that $dh+ad\theta+bd\zeta$ is strongly isotopic to $a d\theta+bd\zeta$.
1 Answer
The orbits of the flow by the vector field $X$ form a foliation $\mathcal{F}_X$ of $T^2$. There is a transverse measure to the foliation: for a curve $\sigma$ transverse to $\mathcal{F}_X$, define the measure of $\sigma$ to be $\int_\sigma i_X\mu$. Since the vector field $X$ and the 2-form $\mu$ are preserved by the flow by $X$, this measure is invariant under the flow by $X$, and in fact by transverse isotopy to $\mathcal{F}_X$ rel endpoints.

I think it's well-known that a measured foliation $\mathcal{F}_X$ is homeomorphic to a foliation by lines of a fixed slope (in your case, it would be slope $a/b$). However, I don't know what regularity one can choose for this homeomorphism; in particular, is it a diffeomorphism?

Addendum: For one description of measured foliations, you can have a look at A primer on mapping class groups. However, I think the point I'm making is fairly simple. Consider a simple closed curve $\sigma$ transverse to $\mathcal{F}_X$. The curve $\sigma$ has a measure (absolutely continuous with respect to Lebesgue measure) coming from the transverse measure to $\mathcal{F}_X$. Every leaf of $\mathcal{F}_X$ must meet $\sigma$, and cutting $T^2$ along $\sigma$ gives an annulus, with a foliation consisting of intervals connecting both sides. This must be a product foliation, and the identification of opposite sides rotates $\sigma$ by some fraction $\alpha$ (since the flow from one side to the other preserves transverse measure). So take a Euclidean annulus $A$ with the same area as $T^2$, and with geodesic boundary components of the same length as the measure of $\sigma$. There is a canonical way to connect opposite sides by orthogonal lines, so glue opposite sides by an $\alpha$-fraction rotation of the circle. Then I think there is a homeomorphism sending this torus to the original $T^2$, and sending the foliation by lines orthogonal to $\partial A$ to $\mathcal{F}_X$. Being a bit more careful, I think one can modify this map to be an area-preserving map, but I haven't thought this through carefully.
Thanks very much for your answer. It's pretty slow going for me as I now look through the literature in order to find a more detailed description of why this is true. Could you possibly
point me to a reference, maybe a textbook? – Josh Burby Nov 5 '12 at 5:51
The answer I gave I don't think really answers your question, so you could uncheck it and wait for a better answer. – Ian Agol Nov 5 '12 at 16:22
|
{"url":"http://mathoverflow.net/questions/111494/non-zero-divergence-free-vector-fields-on-2-torus","timestamp":"2014-04-25T08:16:15Z","content_type":null,"content_length":"55870","record_id":"<urn:uuid:91aa3e60-fe0a-47c1-a7f2-0617488a7e73>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00121-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Something tells me I made a similar post before but I did not find it.
What is the rigorous mathematical definition of randomness? I was thinking like this: assume you have an algorithm which produces numbers; then we can define it as random if the limit of the correlation coefficient approaches zero as $n \rightarrow \infty$.
I am talking about concepts (probability and algorithms) which I have never studied, thus you might not understand what I am trying to say. But I am trying to say, for example, that an algorithm is defined in such a way that it produces only ones. Then we can create a plot with the x-axis being the number of times the algorithm is used and the y-axis being its output. Then by the correlation coefficient formula that coefficient is always one. Thus, the limit as we increase the number of times the algorithm is used is 1, not zero, and so that algorithm is not random. Does anyone understand what I am trying to ask?
CaptainBlack, this is what you know best, Computability Theory. Help me.
One good definition is that of algorithmic incompressibility: a sequence of bits is incompressible if the size S(n) of the shortest computer program (Turing machine) which produces the first n
bits satisfies S(n)/n -> 1: that is, you cannot write a shorter program than one which simply copies the bits from a file.
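A crude, finite illustration of this idea (only a heuristic: Kolmogorov complexity is uncomputable, and a general-purpose compressor merely gives an upper bound on description length) is to compare compression ratios in Python:

import os
import zlib

def compression_ratio(data):
    # compressed size divided by original size
    return len(zlib.compress(data, 9)) / float(len(data))

regular = b"\x01" * 100000        # the "algorithm that produces only ones"
random_bytes = os.urandom(100000)

print("regular:", compression_ratio(regular))       # tiny: highly compressible
print("random :", compression_ratio(random_bytes))  # close to (or above) 1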
I have no idea what you just said. No need to explain it to me, just asking what you think about my definition?
What is the rigorous mathematical definition of randomness? I was thinking like this: Assume you have an algorithm which produces numbers then we can define it as random ...
If you produce a sequence of numbers with an algorithm, that sequence is not random.
You might want to look at Gregory Chaitin's pages, lots of articles there
on this stuff. If you can, you might as well consult the horse's mouth.
The most recent of these may be the best place to start, as it is pitched
at a semi-popular level.
Last edited by CaptainBlack; January 14th 2006 at 12:58 PM.
Rigorous definition of randomness
Guess what? There is no such thing! We postulate a space of possibilities and a subset of events and assign probabilities to them. Some people will say that a selection in accordance with those
probabilities is a "random" selection, others will insist that only "equiprobable events" can provide random selection.
Actually, mathematicians don't discuss randomness very much. Probability measures, yes, randomness, no.
A rigorous definition of randomness
I think that the only problem with PerfectHacker's definition is his use of the word algorithm.
Otherwise I think it is a good starting point to work from. It seems to me that some definition of the notion of observation or measurement is necessary; consider quantum mechanics, where a physical process is completely non-random or deterministic (unitary evolution), until somebody decides to measure an observable of the system.
Any thoughts?
|
{"url":"http://mathhelpforum.com/advanced-statistics/1537-randomness.html","timestamp":"2014-04-17T23:33:45Z","content_type":null,"content_length":"48762","record_id":"<urn:uuid:22fe6645-760a-4ee2-b4d1-ad0a93cfe3d8>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00592-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Shell Theory and Masonry Domes
Shell Theory is a field of physics, mathematics, architecture, topography and engineering which provides insight into masonry dome structures. Today I’ll be looking at Shell Theory in the context of
many topics which I’ve already written about on this blog, and tying several things together toward some valuable insight into masonry dome and arch structures.
Shell Theory has its roots in the Euler–Bernoulli Beam Equation (also known as the “engineer’s beam theory”, “classical beam theory” or just “beam theory”) which was developed around 1750. This
theory describes a means of calculating the deflection and load-bearing capacity of a beam. This theory was not fully utilized until the late nineteenth century, with the construction of both the
Eiffel Tower and the Ferris Wheel. This theory played a major role in the engineering developments of the Industrial Revolution.
The Euler-Bernoulli equation describes the relationship between applied load and deformation; for a uniform beam it reads EI d^4w/dx^4 = q(x), where E is the elastic modulus, I is the second moment of area, w is the deflection of the beam at some point x, and q is the distributed load. Don't be scared by this equation! It says that if you push a beam this hard, in that place, it will bend this much, and that's all.
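As a quick numerical illustration (a side calculation with made-up numbers for a steel beam), the standard textbook solution of this equation for a simply supported beam under a uniform load gives a midspan deflection of 5qL^4/(384EI), which a few lines of Python can evaluate:

E = 200e9       # Pa, elastic modulus of steel
I = 8.0e-6      # m^4, second moment of area
L = 4.0         # m, span
q = 5000.0      # N/m, uniformly distributed load

midspan_deflection = 5 * q * L**4 / (384 * E * I)
print(midspan_deflection * 1000, "mm")   # about 10.4 mm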
Plate Theory is an expansion of beam theory to thin-walled structures, or “plates.” The plate is assumed to be thin enough that it can be treated as a two-dimensional element, rather than as a thick
beam, as in beam theory. Plates are assumed to be flat, or planar.
If the plates (in plate theory) are curved, then we enter the realm of shell theory. A round cylinder is curved in one dimension, whereas a sphere, or dome, is curved in two dimensions. A plate bent in two dimensions will have greater flexural rigidity than a plate bent in one dimension, which in turn is more rigid than a flat plate. (Flat plates are still shells; they're just boring, weak, flat shells.)
One good illustrative example of shell theory is provided by looking at an actual shell, like a chicken egg shell. If we return to Brunelleschi, who I talked briefly about here, it is interesting to
note how he convinced his patrons (Medici) to allow him to build his famous dome. He simply took eggs and squashed their wide bottoms, so the thin tips were pointing up! Voila! He seemed to say, the
egg creates a catenary arch which is simple, robust, rigid, symmetrical and will serve as a form to make the masonry dome, or duomo. His idea worked, it worked magnificently, and today we still have
his duomo as a testament to his insight. It is interesting to note that Brunelleschi had a highly developed intuitive sense of shell theory centuries before this theory had been mathematically
expressed and articulated by equations.
If we take this example of an actual egg shell and apply some of what was learned by Galileo’s mistake of applying his Square Cube Law to masonry structures (as I discussed here, here and here) the
results are pretty astounding. The reader will recall that one critical fact of masonry arches is that they are scaleable. This means that if an arch’s span is doubled, it will remain stable so long
as the wall thickness is also doubled. As long as proportions remain intact, a dome or arch remains stable, no matter how large it is made. This is a direct refutation of Galileo’s Square Cube Law.
Galileo was wrong when he applied his law to masonry arches.
An actual chicken egg has a ratio of wall thickness to diameter of about 7 to 1000. In other words, an egg 2 inches across has a shell which is about 0.014 inches thick. If we "scale up" the egg so that it has a diameter of, say, 25 feet, then the walls would be only 0.175 feet, or about 2.1 inches, thick. This points to the inherent strength of a masonry arch which is doubly curved, or domed. Any engineer must also include a safety factor. I am planning to make triangular block to build a 25 foot diameter dome with wall thickness of 4 inches. Compared directly with the scaled-up egg shell, that is a margin of a little under 2 (2.1 x factor = 4; factor = 1.9), so the eggshell analogy by itself is suggestive rather than conclusive; a typical engineering safety factor is around 10. Of course block differ from eggshell, so the comparison is tricky; more on that later.
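A two-line check of that scaling (using the 7:1000 ratio quoted above):

ratio = 7.0 / 1000.0          # shell thickness : diameter for a chicken egg
diameter_ft = 25.0
thickness_ft = ratio * diameter_ft
print(thickness_ft, "ft =", thickness_ft * 12, "inches")   # 0.175 ft = 2.1 inches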
I’ll talk more about Shell Theory and masonry domes in my next entry. This is a fascinating topic.
2 comments:
1. The question, I think, is what is the perfect geometry for a dome or arch in order to eliminate tension in the whole structure. The answer is in www.uni-str.com. You can optimize masonry arch and dome geometry online.
1. Hello Osman, I looked at your website. You have made available a catenary calculator. Catenary is not always ideal (See "Catenary reconsidered" entry on this site, February 26, 2012). A very
large number of different types of shapes are required to construct these catenary structures. Furthermore in an earthquake with sideways motion, the catenary thrust lines shift, and will not
correspond to the catenary your program calculates; it is a whole new shape (again, see "Catenary reconsidered"). Of course tension is never eliminated: it is simply resolved at the base
(foundation) with either a massive buttress or a tension ring, or both. I like your approach to catenary code, it might be useful for some.
|
{"url":"http://masonrydesign.blogspot.com/2012/01/shell-theory-and-masonry-domes.html","timestamp":"2014-04-19T04:36:31Z","content_type":null,"content_length":"124212","record_id":"<urn:uuid:9c72e510-b7c6-408e-9299-54a683e2a458>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00368-ip-10-147-4-33.ec2.internal.warc.gz"}
|
jordan decomposition and generalized eigenvectors
Jeremy Watts
science forum Guru Wannabe
Joined: 24 Mar 2005
Posts: 239
Posted: Tue Jul 18, 2006 6:49 pm Post subject:
ok, firstly excuse the length of this post and the fact that it is cross
posted.... (i really didnt know the most appropriate NG to send it to....)
anyway, i am using an algorithm to perform a jordan decomposition taken from
'schaums outlines for matrix operations'. the algorithm states on p.82 that
to form a canonical basis (this being the first step in forming a jordan
decomposition) , then :-
Step 1. Denote the multiplicity of lambda as m , and determine the
smallest positive integer p for which the rank of (A - lambda I )^p equals
n-m , where n denotes the number of rows (and columns in A), lambda denotes
an eigenvalue of A and I is the identity matrix.
Step 2. For each integer k between 1 and p, inclusive, compute the
'eigenvalue rank number Nk' as :-
Nk = rank(A - lambda I)^(k-1) - rank(A - lambda I)^k
Each Nk is the number of generalized eigenvectors of rank k that will appear
in the canonical basis
Step 3. Determine a generalized eigenvector of rank p, and construct the
chain generated by this vector. Each of these vectors is part of the
canonical basis.
Step 4. Reduce each positive Nk (k = 1,2,...,p) by 1. If all Nk are zero
then stop; the procedure is complete for this particular eigenvalue. If not
then continue to Step 5.
Step 5. Find the highest value of k for which Nk is not zero, and determine
a generalized eigenvector of that rank which is linearly independent of all
previously determined generalized eigenvectors associated with lambda. Form
the chain generated by this vector, and include it in the basis. Return to
Step 4.
Now, the matrix I am using the above procedure on is :-
0 0 1 0 i
0 -9+6i 0 1 0
A = 0 0 8 i 1
0 2i 0 -9 8
Now the eigenvalues and multiplicities are :-
-9+6i with multiplicity 1
8 with multiplicity 3
0 with multiplicity 1
Starting with -9+6i and going through the procedure, I make the value of p in step 1 to be p = 5. This immediately arouses my suspicions as it looks too
p in step 1 as p = 5. This immediately arouses my suspicions as it looks too
high, as Step 3 not only fails to find a generalized eigenvector of rank 5,
but also even if it existed, the vector plus its chain would be of length 5,
and so fill the entire canonical basis with the vectors generated by just
the first eigenvalue .
By the way I am using the definition of a 'generalized eigenvector' as the
one given in the same book, on the same page in fact as the above procedure,
which is :-
"A vector Xm is a generalized eigenvector of rank m for the square matrix A
and associated eigenvalue lambda if :-
(A - lambda I)^m Xm = 0 but (A - lambda I)^(m-1)Xm =/= 0
So, firstly does anyone agree that a generalized eigenvector of rank 5
cannot exist for the matrix A with the eigenvalue -9+6i , and if so what is
going wrong here generally?
|
{"url":"http://sci4um.com/post-323981---Tue-Jul-18--2006-6-49-pm.html","timestamp":"2014-04-20T13:34:41Z","content_type":null,"content_length":"27632","record_id":"<urn:uuid:bcc4c3f6-f0bb-4ed8-b7fd-6a557e3bea22>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00614-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Chain conditions in quotients of power sets
Several days ago a friend asked me the following:
We know that in $\mathcal P(\mathbb N)$ we can find a family of size continuum that every [distinct] two intersect in a finite set. Can we do that with $\mathcal P(\mathbb R)$, that is a family
of size $2^{\frak c}$ many subsets of real numbers that the intersection of any [distinct] two is finite, or at least less than $\frak c$?
The question, if so, asks about $(2^{\frak c})^+$-c.c. in the Boolean algebras $\mathcal B_\kappa=\mathcal P(\mathbb R)/\sim_\kappa$ where $\sim_\kappa$ is the equivalence relation defined as $A\sim
B\iff |A\triangle B|<\kappa$. The first question asks for $\mathcal B_\omega$ and the latter asks for $\mathcal B_\mathfrak c$.
Assuming GCH (or at least that $2^{\frak c}=\aleph_2$) gives a relatively simple positive answer to the latter question:
Consider the tree $2^{<\omega_1}$, it is of size $\aleph_1$ so we can encode the nodes as a real numbers. This tree has $2^{\omega_1}=\aleph_2$ many branches, each defines a subset of $\mathbb R$
using the encoding, and every distinct two branches meet at most at countable set of points.
I consulted with several other folks from the department and I was told that most of these questions are very well known, so an answer about consistency and provability is almost certainly out there.
Naive Google search got me nowhere, so I came to ask here the following:
1. In the particular case of the question above, can we say anything in ZFC about the chain-condition of $\mathcal B_\kappa$ for $\omega\leq\kappa\leq\frak c$?
2. My partial answer above shows that with GCH we have an answer for $\cal B_\frak c$, but does that also answer $\cal B_\omega$ or do we need to assume stronger principles as $\lozenge$ for
suitable cardinals?
3. How far does this generalized, when replacing $2^\omega$ by any infinite cardinal $\mu$, and asking the similar question about $(2^\mu)^+$-c.c. in the similar quotients?
I'd be glad to have a reference to a survey of such results, if it exists.
lo.logic set-theory boolean-algebras
There is a nice but old survey by Milner and Prikry in Surveys in combinatorics 1987 LMS Lecture Notes 123 - ams.org/mathscinet-getitem?mr=905279 – François G. Dorais♦ Jun 8 '12 at 14:18
3 Answers
The answer to your first question (with finite intersections) is negative.
Indeed, if $X$ is an infinite set and $I$ has cardinality greater than that of $X^{\aleph_0}$ then $X$ can't contain $I$ distinct subsets with pairwise finite intersection. This
answers your question since $c^{\aleph_0}=c$.
Indeed, let $(A_i)_{i\in I}$ be a family of subsets of $X$ with pairwise finite intersection. Let $B_i$ be the set of infinite countable subsets of $X$ contained in $A_i$. Then the
$B_i$ are pairwise disjoint. Moreover, $B_i$ is empty only when $A_i$ is finite, and we can remove such exceptional $i$'s because the number of finite subsets of $X$ is only the
cardinality of $X$.
The $B_i$ live in the set of infinite countable subsets of $X$, which has cardinality $X^{\aleph_0}$. So $I$ is at most the cardinal of $X^{\aleph_0}$.
Edit: the obvious generalization of the argument is the following: if $\alpha,\beta,\gamma$ are infinite cardinals, and if $\alpha$ admits $\beta$ subsets with pairwise intersection of
cardinal $<\gamma$, then $\beta\le\alpha^\gamma$. In particular, if $\alpha=2^\delta$ and $\gamma\le\delta$ then $\alpha^\gamma=\alpha$, so the conclusion reads as: $2^\delta$ does not
admit more than $2^\delta$ subsets with pairwise intersection of cardinal $<\delta$.
So this argument can be extended to "countable intersection" if $X$ is large enough, right? – Asaf Karagila Jun 8 '12 at 17:43
@Asaf: you mean, if $|X|^{\aleph_1}=|X|$ where $|X|$ is the cardinal of $X$; this is true if $|X|=2^\alpha$ with $\alpha\ge\aleph_1$ but this does not mean it is true for every
cardinal large enough (at least I don't claim it). – Yves Cornulier Jun 8 '12 at 21:53
It is consistent with ZFC that a set of size $\aleph_1$ does not have $2^{\aleph_1}$ subsets, each of size $\aleph_1$, with all pairwise intersections countable. This is an old result of
Jim Baumgartner; see "Almost-disjoint sets, the dense-set problem, and the partition calculus", Ann. Math. Logic 9(1976), 401-439, particularly Theorem 5.6(d) and the remark on page 422
after it. [Caution: I can't check the paper itself now; I'm going by an old e-mail from Jim.]
I'll add in that Shelah has used pcf theory to investigate related questions. Typically these results are tucked away inside long papers dealing with other questions, but I know that the
last section of [Sh:410] explicitly deals with ``strongly almost disjoint families", and characterizes their existence in terms of pcf.
For example, if $\aleph_0<\kappa\leq\kappa^{\aleph_0}<\lambda$, then the existence of a family of $\lambda^+$ sets in $[\lambda]^{\kappa}$ with pairwise finite intersection is equivalent
to a ``pcf statement''.
I'm not sure which version of the paper to link to, as the published version has been reworked a few times. I THINK that the most recent version is here:
|
{"url":"http://mathoverflow.net/questions/99119/chain-conditions-in-quotients-of-power-sets/99133","timestamp":"2014-04-18T14:12:12Z","content_type":null,"content_length":"63226","record_id":"<urn:uuid:86c89534-d241-4578-b775-768df1538cca>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00512-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Here's the question you clicked on:
Can someone please help me understand how to solve the following: f(x) = x^2, and f'(x) = 2x. Find a differential equation for f. For this example, it should be of the form f′(x) = f(x) ∗ g(x) where
your task is to find g(x).
The required g(x) = 2/x, where x is not equal to zero, since f'(x) = 2x = x^2 * (2/x) = f(x) * g(x).
|
{"url":"http://openstudy.com/updates/50a2ee57e4b079bc14512327","timestamp":"2014-04-16T16:51:01Z","content_type":null,"content_length":"27795","record_id":"<urn:uuid:809d2be8-b83f-4f87-aa83-21c005e6834b>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00070-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The book, Top-down Calculus, contains five chapters. The first four chapters constitute the core calculus course. Chapter five develops infinite series with an emphasis on intuition. The first four
chapters are presented below. Each chapter (a pdf file) has its own Table of Contents and Index. In addition, separate pdf files for Appendix 1, Math Tables, and Appendix 3, Solutions, Partial
Solutions, and Hints, have been provided. Some comments and hints to problems are being added to these files by the author. A list of Corrections to Top-down Calculus is provided below.
|
{"url":"http://cseweb.ucsd.edu/~gill/TopDownCalcSite/","timestamp":"2014-04-19T04:20:55Z","content_type":null,"content_length":"17959","record_id":"<urn:uuid:2b493d61-96ff-4e19-b7fc-1c02b96d334c>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00413-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Garrett Park Geometry Tutor
Find a Garrett Park Geometry Tutor
...I am very patient and know how to help them work efficiently. I am able to teach concepts and material in ways that accommodate to them. I am also able to teach them necessary study and
organizational skills that will help them in accomplishing their goals.
17 Subjects: including geometry, reading, writing, algebra 1
...Furthermore, throughout medical school, I received an extensive education on psychology and psychiatry. As a medical student, my expertise is in anatomy and biology. I have taken numerous
classes in human anatomy, starting in high school and continuing to my higher levels of education.
27 Subjects: including geometry, reading, English, writing
...I have a Bachelor's degree in Computer Science and a PhD in Applied Mathematics. I have published a number of research papers about computer science, mathematics and the teaching of children in
the most qualified international journals, such as Discrete Math, Applied Math, etc. I was selected one of the top 200 tutors in the entire country in 2011.
12 Subjects: including geometry, calculus, algebra 1, algebra 2
...I know that the most important and useful skill to develop for the math portion of the SAT exam is to be able to come up with the appropriate mental shortcut for the problem, which requires the
ability to see problems from multiple angles. Also, every year I review the current SAT prep materials. I am a native speaker of Russian.
27 Subjects: including geometry, chemistry, reading, biology
...In the past I was both a faculty high school mathematics instructor and a junior college instructor. I have a degree in Mechanical Engineering and my math skills got me through it; that
material is far tougher than discrete math, but more importantly I explain mathematics well. I can effectivel...
28 Subjects: including geometry, chemistry, calculus, physics
|
{"url":"http://www.purplemath.com/Garrett_Park_Geometry_tutors.php","timestamp":"2014-04-17T15:30:27Z","content_type":null,"content_length":"24246","record_id":"<urn:uuid:81deaae6-8621-4bac-b993-2ebf6e955fa0>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00491-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Chapter 22. Fast Prefiltered Lines
Eric Chan
Massachusetts Institute of Technology
Frédo Durand
Massachusetts Institute of Technology
This chapter presents an antialiasing technique for lines. Aliased lines appear to have jagged edges, and these "jaggies" are especially noticeable when the lines are animated. Although line
antialiasing is supported in graphics hardware, its quality is limited by many factors, including a small number of samples per pixel, a narrow filter support, and the use of a simple box filter.
Furthermore, different hardware vendors use different algorithms, so antialiasing results can vary from GPU to GPU.
The prefiltering method proposed in this chapter was originally developed by McNamara, McCormack, and Jouppi (2000) and offers several advantages. First, it supports arbitrary symmetric filters at a
fixed runtime cost. Second, unlike common hardware antialiasing schemes that consider only those samples that lie within a pixel, the proposed method supports larger filters. Results are
hardware-independent, which ensures consistent line antialiasing across different GPUs. Finally, the algorithm is fast and easy to implement.
22.1 Why Sharp Lines Look Bad
Mathematically speaking, a line segment is defined by its two end points, but it has no thickness or area. In order to see a line on the display, however, we need to give it some thickness. So, a
line in our case is defined by two end points plus a width parameter. For computer graphics, we usually specify this width in screen pixels. A thin line might be one pixel wide, and a thick line
might be three pixels wide.
Before we try to antialias lines, we must understand why we see nasty aliasing artifacts in the first place. Let's say we draw a black line that is one pixel wide on a white background. From the
point of view of signal processing, we can think of the line as a signal with a value of 1 corresponding to maximum intensity and 0 corresponding to minimum intensity. Because our frame buffer and
display have only a finite number of pixels, we need to sample the signal. The Sampling Theorem tells us that to reconstruct the signal without aliasing, we must sample the input signal at a rate no
less than twice the maximum frequency of the signal.
And that's where the problem lies. A line with perfectly sharp edges corresponds to a signal with infinitely high frequencies! We can think of an edge of a 1D line as a step function, as shown in
Figure 22-1a; discrete samples are shown as vertical blue lines in Figure 22-1b. Intuitively, we can see that no matter how finely we sample this step function, we cannot represent the step
discontinuity accurately enough. The three images in Figure 22-2 show what happens to the appearance of a line as we increase the pixel resolution. The results are as we expect: aliasing decreases as
resolution increases, but it never goes away entirely.
Figure 22-1 Trying to Sample a Line
Figure 22-2 Decreasing Aliasing by Increasing Resolution
What have we learned? The only way to reconstruct a line with perfectly sharp edges is to use a frame buffer with infinite resolution—which means it would take an infinite amount of time, memory, and
money. Obviously this is not a very practical solution!
22.2 Bandlimiting the Signal
A more practical solution, and the one that we describe in this chapter, is to bandlimit the signal. In other words, because we cannot represent the original signal by increasing the screen
resolution, we can instead remove the irreproducible high frequencies. The visual result of this operation is that our lines will no longer appear to have sharp edges. Instead, the line's edges will
appear blurry. This is what we normally think of when we hear the term "antialiased": a polygon or line with soft, smooth edges and no visible jaggies.
We can remove high frequencies from the original signal by convolving the signal with a low-pass filter. Figure 22-3 illustrates this process with a two-dimensional signal. Figure 22-3a shows the
sharp edge of a line. The x and y axes represent the 2D image coordinates, and the vertical z axis represents intensity values. The left half (z = 1) corresponds to the interior of the line, and the
right half (z = 0) lies outside of the line. Notice the sharp discontinuity at the boundary between z = 0 and z = 1. Figure 22-3b shows a low-pass filter, centered at a pixel; the filter is
normalized to have unit volume. To evaluate the convolution of the signal in Figure 22-3a with the filter shown in Figure 22-3b at a pixel, we place the filter at that pixel and compute the volume of
intersection between the filter and the signal. An example of such a volume is shown in Figure 22-3c. Repeating this process at every image pixel yields the smooth edge shown in Figure 22-3d.
Figure 22-3 Convolution of a Sharp Line with a Low-Pass Filter
Although the idea of convolving the signal with a low-pass filter is straightforward, the calculations need to be performed at every image pixel. This makes the overall approach quite expensive!
Fortunately, as we see in the next section, all of the expensive calculations can be done in a preprocess.
22.2.1 Prefiltering
McNamara et al. (2000) developed an efficient prefiltering method originally designed for the Neon graphics accelerator. We describe their method here and show how it can be implemented using a pixel
shader on modern programmable GPUs.
The key observation is that if we assume that our two-dimensional low-pass filter is symmetric, then the convolution depends only on the distance from the filter to the line. This means that in a
preprocess, we can compute the convolution with the filter placed at several distances from the line and store the results in a lookup table. Then at runtime, we evaluate the distance from each pixel
to the line and perform a table lookup to obtain the correct intensity value. This strategy has been used in many other line antialiasing techniques, including those of Gupta and Sproull (1981) and
Turkowski (1982).
This approach has several nice properties:
• We can use any symmetric filters that we like, such as box, Gaussian, or cubic filters.
• It doesn't matter if the filters are expensive to evaluate or complicated, because all convolutions are performed offline.
• The filter diameter can be larger than a pixel. In fact, according to the Sampling Theorem, it should be greater than a pixel to perform proper antialiasing. On the other hand, if we make the
filter size too large, then the lines will become excessively blurry.
To summarize, this approach supports prefiltered line antialiasing with arbitrary symmetric filters at a fixed runtime cost. Now that we have seen an overview of the prefiltering method, let's dig
into some of the details, starting with the preprocess.
22.3 The Preprocess
There are many questions that need to be addressed about this stage, such as how many entries in the table we need, which filter to use, and the size of the filter. We look at answers to these
questions as we proceed.
Let's start by studying how to compute the table for a generic set of filter and line parameters. Figure 22-4 shows a line of width w and a filter of radius R. We distinguish between the mathematical
line L, which is infinitely thin and has zero thickness, and the wide line whose edges are a distance w/2 from L. Let's ignore the line's end points for now and assume the line is infinitely long.
Figure 22-4 Line Configuration and Notation
When we convolve the filter with the wide line, we obtain an intensity value. Let's see what values we get by placing the filter at various distances from L. We get a maximum intensity when the
filter lies directly on L, as shown in Figure 22-5a, because this is where the overlap between the filter and the wide line is maximized. Similarly, we get a minimum intensity when the filter is
placed a distance of w/2 + R from the line, as shown in Figure 22-5b; this is the smallest distance for which there is no overlap between the filter and the wide line. Thus, intensity should drop off
smoothly as the filter moves from a distance of 0 from L to a distance of w/2 + R.
Figure 22-5 How Filter Placement Affects the Convolution
This observation turns out to be a convenient way to index the table. Instead of using the actual distance measured in pixels to index the table, we use a normalized parameter d that has a value of 1
when the filter is placed directly on L and a value of 0 when the filter is placed a distance of w/2 + R away. The reason for using this parameterization is that it allows us to handle different
values for R and w in a single, consistent way.
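In symbols, if dist denotes the distance in pixels from the filter center to L, the parameterization above is simply d = 1 - dist/(w/2 + R): d equals 1 on L and falls to 0 at the edge of the filter's influence.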
Let's get back to some of the questions we raised earlier about prefiltering the lines. For instance, which filter should we use, and how big should it be? Signal processing theory tells us that to
eliminate aliasing in the reconstructed signal, we should use the sinc filter. Unfortunately, this filter is not practical, because it has an infinite support, meaning that R would be unbounded. The
good news is that we can achieve good results using simpler filters with a compact support. In practice, for thick lines (that is, with higher values of w), we prefer to use a Gaussian with a
two-pixel radius and a variance of σ² = 1.0. For thinner lines, however, the results can be a bit soft and blurry, so in those cases, we use a box filter with a one-pixel radius. Blinn (1998)
examines these issues in more detail. Remember, everything computed in this stage is part of a preprocess, and runtime performance is independent of our choice of filter. Therefore, feel free to
precompute tables for different filters and pick one that gives results that you like.
Here's another question about our precomputation: How big do our tables need to be? Or in other words, at how many distances from L should we perform the convolution? We have found that a 32-entry
table is more than enough. The natural way to feed this table to the GPU at runtime is as a one-dimensional luminance texture. A one-dimensional, 32-entry luminance texture is tiny, so if for some
reason you find that 32 entries is insufficient, you can step up to a 64-entry texture and the memory consumption will still be very reasonable.
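To make the preprocess concrete, here is a small Python sketch of one way to fill such a table (an illustration, not the book's own code). It uses the fact that convolving a symmetric 2-D filter with an infinitely long wide line reduces to a 1-D convolution of the filter's projected profile along the perpendicular direction; the flat profile below stands in for that projection, and a Gaussian profile can be substituted in the same way.

    import numpy as np

    def build_table(w=1.0, R=1.0, entries=32, samples=512):
        half = w / 2.0 + R
        xs = np.linspace(-R, R, samples)              # 1-D slice through the filter
        profile = np.ones_like(xs)                    # box profile (swap in a Gaussian if desired)
        profile /= np.trapz(profile, xs)              # normalize to unit area
        table = np.empty(entries)
        for i in range(entries):
            d = i / (entries - 1)                     # table parameter, 0..1
            dist = (1.0 - d) * half                   # filter center's distance from L
            inside = np.abs(xs + dist) <= w / 2.0     # which filter samples lie on the wide line
            table[i] = np.trapz(profile * inside, xs) # convolution value at this distance
        return table

Entry 0 then corresponds to d = 0 (no overlap) and the last entry to d = 1 (filter centered on L), which is exactly the indexing the runtime lookup expects.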
One more question before we move on to the runtime part of the algorithm: What about the line's end points? We've completely ignored them in the preceding discussion and in Figure 22-4, pretending
that the line L is infinitely long. The answer is that for convenience's sake, we can ignore the end points during the preprocess and instead handle them entirely at runtime.
22.4 Runtime
The previous section covered the preprocess, which can be completed entirely on the host processor once and for all. Now let's talk about the other half of the algorithm. At runtime, we perform two
types of computations. First, we compute line-specific parameters and feed them to the GPU. Second, we draw each line on the GPU conservatively as a "wide" line, and for each fragment generated by
the hardware rasterizer, we use the GPU's pixel shader to perform antialiasing via table lookups. Let's dig into the details.
Each fragment produced by the rasterizer for a given line corresponds to a sample position. We need to figure out how to use this sample position to index into our precomputed lookup table so that we
can obtain the correct intensity value for this fragment. Remember that our table is indexed by a parameter d that has a value of 1 when the sample lies directly on the line and a value of 0 when the
sample is w/2 + R pixels away. Put another way, we need to map sample positions to the appropriate value of d. This can be done efficiently using the following line-setup algorithm.
22.4.1 Line Setup (CPU)
Let's say we want to draw the line L shown in Figure 22-6. This line is defined by its two end points (x0, y0) and (x1, y1). The actual wide line that we want to draw has width w, and its four edges surround L as shown. For a sample located at (x, y) in pixel coordinates, we can compute the parameter d efficiently by expressing d as a linear edge function of the form ax + by + c, where (a, b, c) are edge coefficients. Figure 22-6 shows four edges E0, E1, E2, and E3 surrounding L. We will compute the value of d for each edge separately and then see how to combine
the results to obtain an intensity value.
Figure 22-6 Edge Functions for a Line
First, we transform the line's end points from object space to window space (that is, pixel coordinates). This means we transform the object-space vertices by the modelview projection matrix to
obtain clip-space coordinates, apply perspective division to project the coordinates to the screen, and then remap these normalized device coordinates to window space. Let (x0, y0) and (x1, y1) be the coordinates of the line's end points in window space.
Next, we compute the coefficients of the four linear edge functions. Each set of coefficients is expressed as a three-vector (a, b, c).
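One way to build these three-vectors, consistent with the normalization used for the table (d = 1 on L and d = 0 at a distance of w/2 + R), is sketched below in Python. Treat it as an illustration; the arrangement of the constants may differ from McNamara et al.'s exact derivation.

    import math

    def line_setup(x0, y0, x1, y1, w, R):
        length = math.hypot(x1 - x0, y1 - y0)
        tx, ty = (x1 - x0) / length, (y1 - y0) / length   # unit tangent along L
        nx, ny = -ty, tx                                  # unit normal to L
        k = 1.0 / (w / 2.0 + R)                           # normalization factor
        # Side edges E0, E2: d falls off perpendicular to L.
        e0 = (-k * nx, -k * ny, 1.0 + k * (nx * x0 + ny * y0))
        e2 = ( k * nx,  k * ny, 1.0 - k * (nx * x0 + ny * y0))
        # End-cap edges E1, E3: d falls off beyond the end points.
        e1 = ( k * tx,  k * ty, 1.0 - k * (tx * x0 + ty * y0))
        e3 = (-k * tx, -k * ty, 1.0 + k * (tx * x1 + ty * y1))
        return e0, e1, e2, e3

With this arrangement, min(d0, d2) decreases from 1 at L to 0 at the sides, and min(d1, d3) does the same beyond the end points, which is what the shader below relies on.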
These calculations are performed once per line on the CPU.
22.4.2 Table Lookups (GPU)
The four sets of coefficients are passed to a pixel shader as uniform (that is, constant) parameters. The shader itself is responsible for performing the following calculations. If (x, y) are the
pixel coordinates (in window space) of the incoming fragment, then we evaluate the four linear edge functions using simple dot products:
d0 = (x, y, 1) · E0
d1 = (x, y, 1) · E1
d2 = (x, y, 1) · E2
d3 = (x, y, 1) · E3
If any of the four results is less than zero, it means that (x, y) is more than w/2 + R pixels away from the line and therefore this fragment should be discarded.
How do we use the results of the four edge functions? We need a method that antialiases both the sides of the wide line and the end points. McNamara et al. (2000) propose the following algorithm:
intensity = lookup(min(d0, d2)) * lookup(min(d1, d3))
Let's see how this method works. It finds the minimum of d0 and d2, the two functions corresponding to the two side edges E0 and E2. Similarly, it finds the minimum of d1 and d3, the two functions corresponding to the two end point edges E1 and E3 (see Figure 22-6). Two table lookups using these minimum values are performed. The lookup associated with min(d0, d2) returns an intensity value that varies in the direction perpendicular to L; as expected, pixels near L will have high intensity, and those near edges E0 or E2 will have near-zero intensity. If L were infinitely long, this would be the only lookup required.
Because we need to handle L's end points, however, the method performs a second lookup (with min(d1, d3)) that returns an intensity value that varies in the direction parallel to L; pixels near the end points of L will have maximum intensity, whereas those near edges E1 and E3 will have near-zero intensity. Multiplying the results of the two lookups yields a very close approximation
to the true convolution between a filter and a finite wide line segment. The resulting line has both smooth edges and smooth end points.
Notice that only a few inexpensive operations need to be performed per pixel. This makes line antialiasing very efficient.
Cg pixel shader source code is shown in Listing 22-1. A hand-optimized assembly version requires only about ten instructions.
Example 22-1. Cg Pixel Shader Source Code for Antialiasing Lines
void main (out float4 color : COLOR,
float4 position : WPOS,
uniform float3 edge0,
uniform float3 edge1,
uniform float3 edge2,
uniform float3 edge3,
uniform sampler1D table)
{
  float3 pos = float3(position.x, position.y, 1);
  float4 d = float4(dot(edge0, pos), dot(edge1, pos),
                    dot(edge2, pos), dot(edge3, pos));

  if (any(d < 0)) discard;

  // . . . compute color . . .

  color.w = tex1D(table, min(d.x, d.z)).x *
            tex1D(table, min(d.y, d.w)).x;
}
22.5 Implementation Issues
22.5.1 Drawing Fat Lines
For the pixel shader in Listing 22-1 to work, we have to make sure the hardware rasterizer generates all the fragments associated with a wide line. After all, our pixel shader won't do anything
useful without any fragments! Therefore, we must perform conservative rasterization and make sure that all the fragments that lie within a distance of w/2 + R are generated. In OpenGL, this can be
accomplished by calling glLineWidth with a sufficiently large value:
glLineWidth(ceil((2.0f * R + w) * sqrt(2.0f)));
For example, if R = 1 and w = 2, then we should call glLineWidth with a parameter of 6. We also have to extend the line by w/2 + R in each direction to make it sufficiently long.
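Extending the end points is just a matter of pushing them outward along the unit tangent; continuing the Python sketch from the line setup (again only an illustration):

    ext = w / 2.0 + R
    x0e, y0e = x0 - ext * tx, y0 - ext * ty   # extended start point
    x1e, y1e = x1 + ext * tx, y1 + ext * ty   # extended end point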
22.5.2 Compositing Multiple Lines
Up until now, we have only considered drawing a single line. What happens when we have multiple (possibly overlapping) lines? We need to composite these lines properly.
One way to accomplish this task is to use frame-buffer blending, such as alpha blending. In the pixel shader, we write the resulting intensity value into the alpha component of the RGBA output, as
shown in Listing 22-1. In the special case where the lines are all the same color, alpha blending is a commutative operation, so the order in which we draw the lines does not matter. For the more
general case of using lines with different colors, however, alpha blending is noncommutative. This means that lines must be sorted and drawn from back to front on a per-pixel basis. This cannot
always be done correctly using a standard z-buffer, so instead we can use a heuristic to approximate a back-to-front sort in object space. One heuristic is to sort lines by their midpoints. Although
this heuristic can occasionally cause artifacts due to incorrect sorting, the artifacts affect only a limited number of pixels and aren't particularly noticeable in practice.
22.6 Examples
Now that we've seen how to implement prefiltered lines on the GPU, let's take a look at some examples. Figure 22-7 compares hardware rendering with and without the GPU's antialiasing with the method
presented in this chapter. In the first row, we draw a single black line of width 1 on an empty, white background; the second row is a close-up view of this line. In the third row, we draw a thicker
black line of width 3; the fourth row provides a close-up view of the thick line. The third and fourth columns show the results of prefiltering the line using a box filter with R = 1 and a Gaussian
filter with R = 2 and σ² = 1.0, respectively. The advantages in image quality of the prefiltered approach over hardware antialiasing are especially noticeable with nearly horizontal and nearly
vertical lines.
Figure 22-7 Comparing Line Antialiasing Methods for Thin and Thick Lines
An interesting application of line antialiasing is the smoothing of polygon edges. Although graphics hardware offers built-in support for polygon antialiasing, we can achieve better quality by using
a simple but effective method proposed by Sander et al. (2001). The idea is first to draw the polygons in the usual way. Then we redraw discontinuity edges (such as silhouettes and material
boundaries) as antialiased lines. For example, Figure 22-8a shows a single triangle drawn without antialiasing. Figure 22-8b shows its edges drawn as prefiltered antialiased lines. By drawing these
lines on top of the original geometry, we obtain the result in Figure 22-8c.
Figure 22-8 Overview of the Discontinuity Edge Overdraw Method
Comparisons showing close-ups of the triangle's nearly horizontal edge are shown in Figure 22-9. Close-ups of the triangle's nearly vertical edge are shown in Figure 22-10.
Figure 22-9 Comparing Antialiasing Methods on a Nearly Horizontal Edge
Figure 22-10 Comparing Antialiasing Methods on a Nearly Vertical Edge
There are some limitations to this polygon antialiasing approach, however. One drawback is that we must explicitly identify the discontinuity edges for a polygonal model, which can be expensive for
large models. Another drawback is the back-to-front compositing issue described earlier. Standard hardware polygon antialiasing avoids these issues at the expense of image quality.
22.7 Conclusion
In this chapter we have described a simple and efficient method for antialiasing lines. The lines are prefiltered by convolving an edge with a filter placed at several distances from the edge and
storing the results in a small table. This approach allows the use of arbitrary symmetric filters at a fixed runtime cost. Furthermore, the algorithm requires only small amounts of CPU and GPU
arithmetic, bandwidth, and storage. These features make the algorithm practical for many real-time rendering applications, such as rendering fences, power lines, and other thin structures in games.
22.8 References
Blinn, Jim. 1998. "Return of the Jaggy." In Jim Blinn's Corner: Dirty Pixels, pp. 23–34. Morgan Kaufmann.
Gupta, Satish, and Robert F. Sproull. 1981. "Filtering Edges for Gray-Scale Devices." In Proceedings of ACM SIGGRAPH 81, pp. 1–5.
McNamara, Robert, Joel McCormack, and Norman P. Jouppi. 2000. "Prefiltered Antialiased Lines Using Half-Plane Distance Functions." In Proceedings of the ACM SIGGRAPH/Eurographics Workshop on Graphics
Hardware, pp. 77–85.
Sander, Pedro V., Hugues Hoppe, John Snyder, and Steven J. Gortler. 2001. "Discontinuity Edge Overdraw." In Proceedings of the 2001 Symposium on Interactive 3D Graphics, pp. 167–174.
Turkowski, Kenneth. 1982. "Anti-Aliasing Through the Use of Coordinate Transformations." ACM Transactions on Graphics 1(3), pp. 215–234.
|
{"url":"http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter22.html","timestamp":"2014-04-17T15:26:26Z","content_type":null,"content_length":"48845","record_id":"<urn:uuid:1b518c8f-4be4-4e5b-b0ad-c9df320a3171>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00180-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Constant and nonconstant growth valuation
September 20th 2009, 03:41 PM #1
Sep 2009
Constant and nonconstant growth valuation
Can someone please help me, I am stumped on these 2 problems? Thanks.
1- A company currently pays a dividend of $2 per share. It is estimated that the company's dividend will grow at a rate of 20% per year for the next 2 years and then grow at a constant 7% thereafter.
The company's stock has a beta equal to 1.2, the risk-free rate is 7.5% and the market risk premium is 4%. What is your estimate of the stock's current price?
2- The risk-free rate of return is 11%, the required rate of return on the market is 14% and Schuler Company's stock has a beta coefficient of 1.5.
a. If the dividend expected during the coming year is $2.25 and g = a constant 5%, at what price should Schuler's stock sell?
b. Now, suppose the Federal Reserve Board increases the money supply, causing the risk-free rate to drop to 9% and the rate on the market to fall to 12%. What would this do to the price of the stock?
It has been a little while since I did this type of question... so hopefully I haven't done anything incorrectly.
1- by the CAPM, the discount rate for the company is
$i=r_{f}+\beta \left( r_{m}-r_{f} \right) = 0.075+1.2(0.04)=0.123$ or $12.3\%$
Given that the company currently pays a dividend of $2, the dividend in year 1 will be $2.40, which gets discounted 1 year, and $2.88 in year 2, which gets discounted 2 years. In year three, the dividend grows by 7% and keeps growing indefinitely at this rate. Using the PV of a growing perpetuity formula discounts these future cash flows to year 2, so they then need to be discounted back a further 2 years to get the PV.
So the current share price is equal to the discounted value of these cashflows;
$PV = \frac{2.40}{1+0.123}+\frac{2.88}{(1+0.123)^2}+\frac{\left[\frac{2.88(1+0.07)}{0.123-0.07}\right]}{(1+0.123)^2}=\$50.52504$
edit: i made a mistake when i discounted the PV of the growing dividends... Something doesn't look right now?
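For what it's worth, a quick numerical check in Python (just redoing the arithmetic above, nothing more) lands on essentially the same figure:

    D1, D2 = 2.00 * 1.20, 2.00 * 1.20 ** 2   # dividends after two years of 20% growth
    g = 0.07                                 # constant growth thereafter
    i = 0.075 + 1.2 * 0.04                   # CAPM discount rate = 0.123
    TV2 = D2 * (1 + g) / (i - g)             # value at end of year 2 of the growing stream
    PV = D1 / (1 + i) + (D2 + TV2) / (1 + i) ** 2
    print(round(PV, 2))                      # about 50.53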
Last edited by Robb; September 21st 2009 at 05:15 PM.
September 21st 2009, 05:02 PM #2
Mar 2009
|
{"url":"http://mathhelpforum.com/business-math/103352-constant-nonconstant-growth-valuation.html","timestamp":"2014-04-18T04:08:48Z","content_type":null,"content_length":"33987","record_id":"<urn:uuid:12a43d26-e74f-4e2c-8417-ad6206388d83>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00619-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Post a reply
I assume they mean that only whole numbers are allowed (they should have said so!).
You can solve these by writing a factor table, like this:

 1    2    3    4    6
36   18   12    9    6
These are all the factors of 36. This means, "Multiply the top number by the bottom number, and you'll get 36." Since you find the area of a rectangle by multiplying the length by the width, then
these factor pairs are also your rectangles.
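If you ever need the same list for a bigger number, a few lines of Python will print the factor pairs for you:

    n = 36
    pairs = [(a, n // a) for a in range(1, n + 1) if n % a == 0 and a <= n // a]
    print(pairs)   # [(1, 36), (2, 18), (3, 12), (4, 9), (6, 6)]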
Good luck!
|
{"url":"http://www.mathisfunforum.com/post.php?tid=1589","timestamp":"2014-04-20T16:42:15Z","content_type":null,"content_length":"16611","record_id":"<urn:uuid:33dc376d-94a6-43f2-8072-5c618bb052de>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00632-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Orbits in commutative groups.
Let $A$ be a finite commutative group, say $(Z_m)^h$. I will say that $S \subset A$ is an orbit if there exists a group $H$ which acts on $A$ such that $S$ is an orbit of $H$.
Can one give a simple characterization of all orbits of $(Z_m)^h$?
By an action on $A$ I mean an action by automorphisms of the group $A$.
co.combinatorics ac.commutative-algebra rt.representation-theory linear-algebra
Is it possible to say something intelligent about what the orbits look like? – Klim Efremenko Oct 15 '10 at 6:38
1 Answer
The abelian group in question is the product of its Sylow-$p$ subgroups, which are preserved by automorphisms. Therefore the orbits in it are the products of orbits in the Sylow
$p$-subgroups. Therefore, we may consider the case where $m=p^k$ for some prime $p$ and some natural number $k$.
I can answer this question for maximal orbits (orbits under the full automorphism group). I think the more general questions may not have a nice answer.
In $(Z_{p^k})^h$, there are precisely $k+1$ orbits of the full automorphism group, represented by $e, pe, \ldots, p^ke$, where $e=(1,0,\ldots,0)$. The orbit of $p^i e$ consists of those vectors in which the gcd of the entries is divisible by $p^i$ but not by $p^{i+1}$ (except for $i=k$, where the orbit is just the element $0$).
For a general finite abelian group, this problem was solved more than a 100 years ago by Miller, and also discussed by Birkhoff and Baer. For the exact references, as well as a
modern treatment see http://arxiv.org/abs/1005.5222.
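As a quick sanity check, the orbit description is easy to verify by brute force for a small case; the Python sketch below (for $p=2$, $k=2$, $h=2$, i.e. $(Z_4)^2$ acted on by $\mathrm{GL}_2(\mathbb{Z}/4)$) finds exactly the three predicted orbits.

    from itertools import product

    m, h = 4, 2                                      # (Z_4)^2, so p = 2, k = 2
    vectors = list(product(range(m), repeat=h))

    # A 2x2 matrix over Z/4 is invertible iff its determinant is a unit, i.e. odd.
    autos = [(a, b, c, d) for a, b, c, d in product(range(m), repeat=4)
             if (a * d - b * c) % 2 == 1]

    def orbit(v):
        x, y = v
        return frozenset(((a * x + b * y) % m, (c * x + d * y) % m) for a, b, c, d in autos)

    orbits = {orbit(v) for v in vectors}
    print(len(orbits))                               # 3 = k + 1
    for o in sorted(orbits, key=len):
        print(sorted(o))                             # {0}, the "gcd 2" vectors, the "gcd 1" vectors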
|
{"url":"http://mathoverflow.net/questions/42160/orbits-in-commutative-groups?sort=newest","timestamp":"2014-04-16T07:32:06Z","content_type":null,"content_length":"52023","record_id":"<urn:uuid:2a8ddc15-7979-46b1-8c1f-e7fa3b36f400>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00530-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Savage, MD SAT Math Tutor
Find a Savage, MD SAT Math Tutor
...I have developed fun activities for students to actually have fun while they are learning. I look forward to helping your child become a huge success. In fact, several of my past students have
earned As and Bs on their exams and major tests after scoring Ds, Es, and Fs.
18 Subjects: including SAT math, reading, writing, calculus
I am currently an 8th grade math teacher for Anne Arundel County Public Schools. I have previously taught a wide variety of math subjects from 7th grade through entry level college classes. My
previous clients have gone on to significantly increase their score on their standardized tests as well as raise their class grades by an average of 1.5 letter grades.
12 Subjects: including SAT math, reading, writing, geometry
...In both my undergraduate education training and subsequent re-certification classes, I have received training in phonics instruction. In the classroom and tutoring, I have helped students use
phonics to learn to read and to sound-out new vocabulary words. I also have a background with Latin and...
32 Subjects: including SAT math, chemistry, reading, biology
...I have taught these subjects for more than 10 years inside and outside the classroom. Chemistry has been my second choice course in college. I have 2 semesters of college chemistry and have a
few years of experience teaching the subject in high school.
19 Subjects: including SAT math, chemistry, ASVAB, GRE
...No matter the method the desired teaching system is to achieve repetition of the material through different ways of thinking, until a specific pattern of thinking is established and thus
proceed teaching on that. Even though my PhD is in Biomedical Engineering, I am most comfortable at teaching ...
17 Subjects: including SAT math, calculus, physics, statistics
|
{"url":"http://www.purplemath.com/savage_md_sat_math_tutors.php","timestamp":"2014-04-16T19:12:03Z","content_type":null,"content_length":"24033","record_id":"<urn:uuid:8e8775c5-8bd0-4c5d-be5c-a658ca0f2504>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00453-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Towards in-place geometric algorithms and data structures
, 2005
"... We give space-efficient geometric algorithms for three related problems. Given a set of n axis-aligned rectangles in the plane, we calculate the area covered by the union of these rectangles
(Klee’s measure problem) in O(n 3/2 log n) time with O(√n) extra space. If the input can be destroyed and the ..."
Cited by 5 (0 self)
We give space-efficient geometric algorithms for three related problems. Given a set of n axis-aligned rectangles in the plane, we calculate the area covered by the union of these rectangles (Klee’s
measure problem) in O(n^{3/2} log n) time with O(√n) extra space. If the input can be destroyed and there are no degenerate cases and input coordinates are all integers, we can solve Klee’s measure
problem in O(n log² n) time with O(log² n) extra space. Given a set of n points in the plane, we find the axis-aligned unit square that covers the maximum number of points in O(n log³ n) time with O
(log² n) extra space.
"... We propose to design data structures called succinct geometric indexes of negligible space (more precisely, o(n) bits) that support geometric queries in optimal time, by taking advantage of the
n points in the data set permuted and stored elsewhere as a sequence. Our first and main result is a succi ..."
Cited by 2 (1 self)
We propose to design data structures called succinct geometric indexes of negligible space (more precisely, o(n) bits) that support geometric queries in optimal time, by taking advantage of the n
points in the data set permuted and stored elsewhere as a sequence. Our first and main result is a succinct geometric index that can answer point location queries, a fundamental problem in
computational geometry, on planar triangulations in O(lg n) time. We also design three variants of this index. The first supports point location using lg n + 2√(lg n) + O(lg^{1/4} n) point-line
comparisons. The second supports point location in o(lg n) time when the coordinates are integers bounded by U. The last variant can answer point location queries in O(H +1) expected time, where H is
the entropy of the query distribution. These results match the query efficiency of previous point location structures that occupy O(n) words or O(n lg n) bits, while saving drastic amounts of space.
We generalize our succinct geometric index to planar subdivisions, and design indexes for other types of queries. Finally, we apply our techniques to design the first implicit data structures that
support point location in O(lg^2 n) time.
, 2011
"... A constant-workspace algorithm has read-only access to an input array and may use only O(1) additional words of O(log n) bits, where n is the size of the input. We show that we can find a
triangulation of a plane straight-line graph with n vertices in O(n²) time. We also consider preprocessing a sim ..."
Cited by 2 (2 self)
A constant-workspace algorithm has read-only access to an input array and may use only O(1) additional words of O(log n) bits, where n is the size of the input. We show that we can find a
triangulation of a plane straight-line graph with n vertices in O(n²) time. We also consider preprocessing a simple n-gon, which is given by the ordered sequence of its vertices, for shortest path
queries when the space constraint is relaxed to allow s words of working space. After a preprocessing of O(n²) time, we are able to solve shortest path queries between any two points inside the
polygon in O(n²/s) time.
, 2007
"... Abstract We revisit a classic problem in computational geometry: preprocessing a planar n-point set to answer nearest neighbor queries. In SoCG 2004, Br"onnimann, Chan, and Chen showed that it
is possible to design an efficient data structure that takes no extra space at all other than the inpu ..."
Cited by 1 (1 self)
Abstract We revisit a classic problem in computational geometry: preprocessing a planar n-point set to answer nearest neighbor queries. In SoCG 2004, Brönnimann, Chan, and Chen showed that it is possible to design an efficient data structure that takes no extra space at all other than the input array holding a permutation of the points. The best query time known for such "in-place data structures" is O(log^2 n). In this paper, we break the O(log^2 n) barrier by providing a method that answers nearest neighbor queries in time O((log n) log^{3/2}_2 log log n) = O(log
"... In this paper, we consider the problem of designing in-place algorithms for computing the maximum area empty rectangle of arbitrary orientation among a set of points in 2D, and the maximum
volume empty axisparallel cuboid among a set of points in 3D. If n points are given in an array of size n, the ..."
In this paper, we consider the problem of designing in-place algorithms for computing the maximum area empty rectangle of arbitrary orientation among a set of points in 2D, and the maximum volume
empty axis-parallel cuboid among a set of points in 3D. If n points are given in an array of size n, the worst-case time complexity of our proposed algorithms for both problems is O(n^3); both algorithms use O(1) extra space in addition to the array containing the input points.
"... Asano et al. [JoCG 2011] proposed an open problem of computing the minimum enclosing circle of a set of n points in R2 given in a read-only array in sub-quadratic time. We show that Megiddo’s
prune and search algorithm for computing the minimum radius circle enclosing the given points can be tailore ..."
Asano et al. [JoCG 2011] proposed an open problem of computing the minimum enclosing circle of a set of n points in R2 given in a read-only array in sub-quadratic time. We show that Megiddo’s prune
and search algorithm for computing the minimum radius circle enclosing the given points can be tailored to work in a read-only environment in O(n^{1+ε}) time using O(log n) extra space, where ε is a
positive constant less than 1. As a warm-up, we first solve the same problem in an in-place setup in linear time with O(1) extra space.
"... One of the classic data structures for storing point sets in R 2 is the priority search tree, introduced by McCreight in 1985. We show that this data structure can be made in-place, i.e., it can
be stored in an array such that each entry stores only one point of the point set and no entry is stored ..."
One of the classic data structures for storing point sets in R^2 is the priority search tree, introduced by McCreight in 1985. We show that this data structure can be made in-place, i.e., it can be
stored in an array such that each entry stores only one point of the point set and no entry is stored in more than one location of that array. It combines a binary search tree with a heap. We show
that all the standard query operations can be answered within the same time bounds as for the original priority search tree, while using only O(1) extra space. We introduce the min-max priority
search tree which is a combination of a binary search tree and a min-max heap. We show that all the standard queries which can be done in two separate versions of a priority search tree can be done
with a single min-max priority search tree. As an application, we present an in-place algorithm to enumerate all maximal empty axis-parallel rectangles amongst points in a rectangular region R in R^2 in O(m log n) time with O(1) extra space, where m is the total number of maximal empty rectangles.
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=4040081","timestamp":"2014-04-16T21:20:18Z","content_type":null,"content_length":"27921","record_id":"<urn:uuid:a01736b0-01a7-45e1-b818-d67a70cc5cf5>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00042-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Million, billion, trillion
Thursday, 12 March 2009 | 3 Comments
I used to think that I knew what 1 billion was, i.e. 1 000 000 000 000. Then a couple of years ago, I looked on Wikipedia and found there were two defintions: Long and short scales.
It seems that changes in the way Australia defines one billion have only occurred relatively recently (quoting Wikipedia)
As of 1999, the Australian Government’s financial department did not consider short scale to be standard, but used it occasionally. The current recommendation by the Australian Department of
Finance and Administration (formerly known as AusInfo), and the legal definition, is the short scale.
According to the Metric Conversion page from the National Measurement Institute:
Common usage in Australia (AS/NZX 1376:1996 Conversion Factors, p31) is that:
□ million = 10^6 (i.e. 1 000 000)
□ billion = 10^9 (i.e. 1 000 000 000)
□ trillion = 10^12 (i.e. 1 000 000 000 000)
□ quadrillion = 10^15 (i.e. 1 000 000 000 000 000)
So, it seems that Australia does follow the short scale.
Now that we’ve sorted that out… here are some helpful photos found on the web for understanding large numbers.
This first image is a photo of one billion dollars (US of course).
This second visualization starts with a $100 dollar bill, goes to $10 000, $1 million, $100 million, $1 billion (pictured below) and then $1 trillion.
Click through both photos for a better version.
A couple of notes:
i was of the understanding that a billion in australia and england was a million million and that in usa was a thousand million so being a billionaire in america was more common
Perhaps this is more of a language issue than a maths issue. My high school dictionary has both definitions of a billion. The foreign language dictionaries I checked only went up to a million.
Post a comment
Commenting is not available in this channel entry.
|
{"url":"http://mathsclass.net/comments/million-billion-trillion/","timestamp":"2014-04-21T04:44:54Z","content_type":null,"content_length":"16553","record_id":"<urn:uuid:092f12dc-1603-4ec4-acf1-26de0a865968>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00396-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Why does Haskell's "flip id" have this type?
I'm curious about the expression flip id (It's not homework: I found it in the getOpt documentation).
I wonder why it has this type:
Prelude> :t (flip id)
(flip id) :: b -> (b -> c) -> c
For example, (flip id) 5 (+6) gives 11.
I know why id (+6) 5 gives 11, but I don't "get" the flip id thing.
I tried to figure this out myself using pen and paper but couldn't. Could anybody please explain this to me? I mean, how does flip id come to have the type b -> (b -> c) -> c ?
1 Answer
The id function has this type:
id :: a -> a
You get an instance of this type, when you replace a by a -> b:
id :: (a -> b) -> (a -> b)
which, because of currying, is the same as:
id :: (a -> b) -> a -> b
Now apply flip to this and you get:
flip id :: a -> (a -> b) -> b
In the case of id (+6) the instance is:
id :: (Num a) => (a -> a) -> (a -> a)
Now flip id gives you:
flip id :: (Num a) => a -> (a -> a) -> a
Side note: This also shows you how ($) is the same as id, just with a more restricted type:
($) :: (a -> b) -> a -> b
($) f x = f x
-- unpoint:
($) f = f
-- hence:
($) = id
10 Hey, ertes, you seem to have another account, both of which are unregistered. If you register your account, you can merge them and then have a single account for all your
answers (which are really good, by the way!). – dbaupp Sep 9 '12 at 16:12
Thanks, great answer. Your mention of $ makes it more intuitive to understand and I'm glad you didn't leave it out. It will take my brain a few more days to fully understand
your answer. – Niccolo M. Sep 10 '12 at 12:54
Nice answer. Thinking of flip id as flip ($) helps a lot. – Garrett Jul 22 '13 at 2:16
|
{"url":"http://stackoverflow.com/questions/12339822/why-does-haskells-flip-id-has-this-type","timestamp":"2014-04-18T06:30:16Z","content_type":null,"content_length":"66857","record_id":"<urn:uuid:11e4c317-e5de-474e-b704-9b6c3a6cfb3a>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00219-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Bits required to encode difference between number of subgraphs with odd number of edges and number of subgraphs with even number of edges
Let $H = ( V, E )$ be a $k$-uniform connected hypergraph, with $n = |V|$ vertices and $m = |E|$ hyperedges. Let $O_w$ be the number of edge induced subgraphs of $H$ having $w$ vertices and an odd
number of hyperedges. Let $E_w$ be the number of edge induced subgraphs of $H$ having $w$ vertices and an even number of hyperedges. Let $\Delta_w = O_w - E_w$.
Let $b_w$ be the number of bits required to encode $\Delta_w$. Let $b = \displaystyle\max_{\substack{w}} b_w$.
I'm interested in how $b$ grows. I would like to determine the best possible upper bound for $b$ which is expressible as a function of only $n$, $m$ and $k$. More precisely, I would like to determine
a function $f(n,m,k)$ having both the following properties:
1. $b \leq f( n, m, k )$ for any $k$-uniform hypergraph $H$ having $n$ vertices and $m$ hyperedges.
2. $f(n,m,k)$ grows slower than any other function which satisfies the 1^st property.
In general both $O_w$ and $E_w$ are exponential in $m$, therefore I expect that their difference $\Delta_w$ is not exponential in $m$ and thus that $b \in o( m )$.
However for the moment I've no clue on how to try to prove this.
□ How does $b$ grow with respect to $n$, $m$ and $k$?
□ Are there any relevant results in the literature?
□ Any hint on how to try to prove $b \in o(m)$?
Update 13/09/2013
Here are some clarifications:
• $O_w$ is the number of distinct edge-induced subgraphs in $H$. Repetitions are not allowed. The same holds for $E_w$ of course.
• By "the number of bits required to encode $\Delta_w$" I mean $log_2 \ \Delta_w$.
co.combinatorics graph-theory
1 Can you explain what is meant by edge-induced? Especially, will the subgraphs be connected and k-uniform for large w? – The Masked Avenger Sep 7 '13 at 15:49
If the subgraphs are not k-uniform, looking at k-complete graphs should get b greater than 2n - logn. – The Masked Avenger Sep 7 '13 at 16:00
1 mathworld.wolfram.com/Edge-InducedSubgraph.html – Giorgio Camerani Sep 7 '13 at 18:09
Of course every edge induced subgraph is $k$-uniform. – Giorgio Camerani Sep 7 '13 at 18:26
1 Even so, complete hypergraphs should set the bar pretty high for b. – The Masked Avenger Sep 7 '13 at 19:20
|
{"url":"http://mathoverflow.net/questions/141516/bits-required-to-encode-difference-between-number-of-subgraphs-with-odd-number-o","timestamp":"2014-04-16T19:50:05Z","content_type":null,"content_length":"53807","record_id":"<urn:uuid:c6f4d6df-8715-43f3-bb35-a74046282064>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00065-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Help
March 13th 2007, 06:56 AM #1
Junior Member
Mar 2007
Related Rates
problem: A 5-meter ladder is leaning against a vertical wall with the lower end on the floor. If the lower end is pulled away from the wall at a rate of 1 meter/sec, how fast is the upper end of
the ladder moving along the wall when the lower end is 3 meters from the wall?
First let's modify this diagram a bit, see below:
So when faced with a related rates problem, you want to begin by drawing a diagram and filling in what you know and don't know. Then you try to figure out a formula that connects what you know to
what you don't, hopefully, what you don't know will be the only unknown in this formula. Here goes:
The first formula that should come into your head for this is Pythagoras' formula, and it is the one we'll be using:
By Pythagoras:
5^2 = x^2 + y^2 .........now we differentiate implicitly with respect to time
=> 0 = 2x dx/dt + 2y dy/dt .........now solve for dy/dt
=> dy/dt = (-2x dx/dt)/2y
now we know dx/dt, the question told us. we also know we want x to be 3, but what is y? well, we go back to the faithful Pythagoras' formula to find out what y is when x = 3
by pythagoras:
y^2 = 5^2 - x^2
=> y^2 = 5^2 - 3^2
=> y^2 = 25 - 9 = 16
=> y = 4 ....now we know everything, let's plug all these values into our formula.
dy/dt = (-2x dx/dt)/2y
=> when x = 3, dx/dt = 1, y = 4, we get:
dy/dt = (-2(3) (1))/(2(4))
=> dy/dt = -6/8
=> dy/dt = -3/4 m/s
this should be a negative rate, since the length of y is decreasing (see diagram below to see what i call x and y)
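A quick numerical check, purely optional: nudge x a little and difference y = sqrt(25 - x^2); since dx/dt = 1 m/s, the difference quotient approximates dy/dt directly.

    from math import sqrt

    y = lambda x: sqrt(5 ** 2 - x ** 2)
    x, h = 3.0, 1e-6
    print((y(x + h) - y(x)) / h)   # ≈ -0.75, i.e. dy/dt = -3/4 m/s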
March 13th 2007, 07:22 AM #2
|
{"url":"http://mathhelpforum.com/calculus/12491-related-rates.html","timestamp":"2014-04-19T17:01:49Z","content_type":null,"content_length":"34919","record_id":"<urn:uuid:bf206aa9-8f66-47fc-94db-0b7e48d88fae>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00338-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Microsoft® Excel Data Analysis and Business Modeling
Safari Books Online is a digital library providing on-demand subscription access to thousands of learning resources.
Well known consultant, statistician, and business professor Wayne Winston teaches by example the best ways to use Microsoft Excel for data analysis, modeling, and decision making within real-world
business scenarios.
Sub-Categories: Desktop and Web Applications > Office & Productivity Applications; Product > Microsoft Excel; Vendor > Microsoft
Visit the catalog page for Microsoft® Excel Data Analysis and Business Modeling • Catalog Page
Visit the errata page for Microsoft® Excel Data Analysis and Business Modeling • Errata
Download the supplemental electronic content for Microsoft® Excel Data Analysis and Business Modeling • Supplemental Content
|
{"url":"http://my.safaribooksonline.com/0735619018?portal=oreilly&cid=orm-cat-readnow-0735619018","timestamp":"2014-04-18T23:45:20Z","content_type":null,"content_length":"260543","record_id":"<urn:uuid:01afa7ea-4a71-4f1c-be64-dcfc7df12163>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00160-ip-10-147-4-33.ec2.internal.warc.gz"}
|
mathematics :: Linear algebra
Differential equations, whether ordinary or partial, may profitably be classified as linear or nonlinear; linear differential equations are those for which the sum of two solutions is again a
solution. The equation giving the shape of a vibrating string is linear, which provides the mathematical reason why a string may simultaneously emit more than one frequency. The linearity of an
equation makes it easy to find all its solutions, so in general linear problems have been tackled successfully, while nonlinear equations continue to be difficult. Indeed, in many linear problems
there can be found a finite family of solutions with the property that any solution is a sum of them (suitably multiplied by arbitrary constants). Obtaining such a family, called a basis, and putting
them into their simplest and most useful form, was an important source of many techniques in the field of linear algebra.
Consider, for example, the system of linear differential equations
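dy1/dx = a y1 + b y2,   dy2/dx = c y1 + d y2, say, with constant coefficients a, b, c, d.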
It is evidently much more difficult to study than the system dy1/dx = αy1, dy2/dx = βy2, whose solutions are (constant multiples of) y1 = exp(αx) and y2 = exp(βx).
|
{"url":"http://www.britannica.com/EBchecked/topic/369194/mathematics/66027/Linear-algebra","timestamp":"2014-04-19T10:20:18Z","content_type":null,"content_length":"101802","record_id":"<urn:uuid:2ec733fa-f4a5-400b-8578-91924392f64d>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00228-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Formal Concept Analysis
Results 1 - 10 of 205
- JOURNAL OF EXPERIMENTAL AND THEORETICAL ARTIFICIAL INTELLIGENCE , 2002
"... Several algorithms that generate the set of all formal concepts and diagram graphs of concept lattices are considered. Some modifications of wellknown algorithms are proposed. Algorithmic
complexity of the algorithms is studied both theoretically (in the worst case) and experimentally. Conditions ..."
Cited by 91 (8 self)
Several algorithms that generate the set of all formal concepts and diagram graphs of concept lattices are considered. Some modifications of wellknown algorithms are proposed. Algorithmic complexity
of the algorithms is studied both theoretically (in the worst case) and experimentally. Conditions of preferable use of some algorithms are given in terms of density/sparseness of underlying formal
contexts. Principles of comparing practical performance of algorithms are discussed.
, 2001
"... Pattern structures consist of objects with descriptions (called patterns) that allow a semilattice operation on them. Pattern structures arise naturally from ordered data, e.g., from labeled
graphs ordered by graph morphisms. It is shown that pattern structures can be reduced to formal contexts, ..."
Cited by 39 (11 self)
Pattern structures consist of objects with descriptions (called patterns) that allow a semilattice operation on them. Pattern structures arise naturally from ordered data, e.g., from labeled graphs
ordered by graph morphisms. It is shown that pattern structures can be reduced to formal contexts; however, processing the former is often more efficient and obvious than processing the latter. Concepts, implications, plausible hypotheses, and classifications are defined for data given by pattern structures. Since computation in pattern structures may be intractable, approximations of
patterns by means of projections are introduced.
- in Proc. of ICPC'07 , 2007
"... The paper addresses the problem of concept location in source code by presenting an approach which combines Formal Concept Analysis (FCA) and Latent Semantic Indexing (LSI). In the proposed
approach, LSI is used to map the concepts expressed in queries written by the programmer to relevant parts of ..."
Cited by 37 (16 self)
The paper addresses the problem of concept location in source code by presenting an approach which combines Formal Concept Analysis (FCA) and Latent Semantic Indexing (LSI). In the proposed approach,
LSI is used to map the concepts expressed in queries written by the programmer to relevant parts of the source code, presented as a ranked list of search results. Given the ranked list of source code
elements, our approach selects most relevant attributes from these documents and organizes the results in a concept lattice, generated via FCA. The approach is evaluated in a case study on concept
location in the source code of Eclipse, an industrial size integrated development environment. The results of the case study show that the proposed approach is effective in organizing different
concepts and their relationships present in the subset of the search results. The proposed concept location method outperforms the simple ranking of the search results, reducing the programmers' effort.
, 2004
"... The application of clustering methods for automatic taxonomy construction from text requires knowledge about the tradeoff between, (i), their effectiveness (quality of result), (ii), efficiency
(run-time behaviour), and, (iii), traceability of the taxonomy construction by the ontology engineer. In t ..."
Cited by 36 (4 self)
The application of clustering methods for automatic taxonomy construction from text requires knowledge about the tradeoff between, (i), their effectiveness (quality of result), (ii), efficiency
(run-time behaviour), and, (iii), traceability of the taxonomy construction by the ontology engineer. In this line, we present an original conceptual clustering method based on Formal Concept
Analysis for automatic taxonomy construction and compare it with hierarchical agglomerative clustering and hierarchical divisive clustering.
- In General Lattice Theory, G. Grätzer editor, Birkhäuser , 1997
"... then the theory. Thereby, Formal Concept Analysis has created results that may be of interest even without considering the applications by which they were motivated. For proofs, citations, and
further details we refer to [2]. 1 Formal contexts and concept lattices A triple (G; M; I) is called a for ..."
Cited by 32 (0 self)
then the theory. Thereby, Formal Concept Analysis has created results that may be of interest even without considering the applications by which they were motivated. For proofs, citations, and
further details we refer to [2]. 1 Formal contexts and concept lattices A triple (G, M, I) is called a formal context if G and M are sets and I ⊆ G × M is a binary relation between G and M. We call the elements of G objects, those of M attributes, and I the incidence of the context (G, M, I). For A ⊆ G, we define A′ := {m ∈ M | (g, m) ∈ I for all g ∈ A}
, 2003
"... In order to tackle the need of sharing knowledge within and across organisational boundaries, the last decade has seen researchers both in academia and industry advocating for the use of
ontologies as a means for providing a shared understanding of common domains. But with the generalised use of ..."
Cited by 31 (10 self)
In order to tackle the need of sharing knowledge within and across organisational boundaries, the last decade has seen researchers both in academia and industry advocating for the use of ontologies
as a means for providing a shared understanding of common domains. But with the generalised use of large distributed environments such as the World Wide Web came the proliferation of many different ontologies, even for the same or similar domain, hence setting forth a new need of sharing---that of sharing ontologies. In addition, if visions such as the Semantic Web are ever going to become a reality, it will be necessary to provide as much automated support as possible to the task of mapping different ontologies. Although many efforts in ontology mapping have already been carried out, we
have noticed that few of them are based on strong theoretical grounds and on principled methodologies. Furthermore, many of them are based only on syntactical criteria. In this paper we present a
theory and method for automated ontology mapping based on channel theory, a mathematical theory of semantic information flow.
, 2004
"... Abstract. In this paper, we consider the problems of generating all maximal (bipartite) cliques in a given (bipartite) graph G = (V, E) with n vertices and m edges. We propose two algorithms for
enumerating all maximal cliques. One runs with O(M(n)) time delay and in O(n 2) space and the other runs ..."
Cited by 31 (1 self)
Abstract. In this paper, we consider the problems of generating all maximal (bipartite) cliques in a given (bipartite) graph G = (V, E) with n vertices and m edges. We propose two algorithms for
enumerating all maximal cliques. One runs with O(M(n)) time delay and in O(n^2) space and the other runs with O(Δ^4) time delay and in O(n + m) space, where Δ denotes the maximum degree of G, M(n) denotes the time needed to multiply two n × n matrices, and the latter one requires O(nm) time as a preprocessing. For a given bipartite graph G, we propose three algorithms for enumerating all maximal bipartite cliques. The first algorithm runs with O(M(n)) time delay and in O(n^2) space, which immediately follows from the algorithm for the nonbipartite case. The second one runs with O(Δ^3) time delay and in O(n + m) space, and the last one runs with O(Δ^2) time delay and in O(n + m + NΔ) space, where N denotes the number of all maximal bipartite cliques in G and both algorithms
require O(nm) time as a preprocessing. Our algorithms improve upon all the existing algorithms, when G is either dense or sparse. Furthermore, computational experiments show that our algorithms for
sparse graphs have significantly good performance for graphs which are generated randomly and appear in real-world problems. 1
, 2002
"... This is the second of a two-part paper to review ontology research and development, in particular, ontology mapping and evolving. Ontology is defined as a formal explicit specification of a
shared conceptualization. Ontology itself is not a static model so that it must have the potential to capture ..."
Cited by 30 (1 self)
This is the second of a two-part paper to review ontology research and development, in particular, ontology mapping and evolving. Ontology is defined as a formal explicit specification of a shared
conceptualization. Ontology itself is not a static model so that it must have the potential to capture changes of meanings and relations. As such, mapping and evolving ontologies is part of an
essential task of ontology learning and development. Ontology mapping is concerned with reusing existing ontologies, expanding and combining them by some means and enabling a larger pool of
information and knowledge in different domains to be integrated to support new communication and use. Ontology evolving, likewise, is concerned with maintaining existing ontologies and extending them
as appropriate when new information or knowledge is acquired. It is apparent from the reviews that current research into semi-automatic or automatic ontology research in all the three aspects of
generation, mapping and evolving have so far achieved limited success. Expert
- In Proc. of the 15th Int. Conf. on Advanced Information Systems Engineering (CAiSE 2003 , 2002
"... Peer-oriented computing is an attempt to weave interconnected machines into the fabric of the Internet. Service-oriented computing (exemplified by web-services), on the other hand, is an attempt
to provide a loosely coupled paradigm for distributed processing. In this paper we present an event-n ..."
Cited by 28 (0 self)
Peer-oriented computing is an attempt to weave interconnected machines into the fabric of the Internet. Service-oriented computing (exemplified by web-services), on the other hand, is an attempt to
provide a loosely coupled paradigm for distributed processing. In this paper we present an event-notification based architecture and formal framework towards unifying these two computing paradigms to
provide essential functions required for automating e-business applications and facilitating service publication, discovery and exchange.
- KNOWLEDGE ORGANIZATION , 2000
"... A lattice-based model for information retrieval has been suggested in the 1960's but has been seen as a theoretical possibility hard to practically apply ever since. This paper attempts to
revive the lattice model and demonstrate its applicability in an information retrieval system, FaIR, that in ..."
Cited by 21 (2 self)
A lattice-based model for information retrieval has been suggested in the 1960's but has been seen as a theoretical possibility hard to practically apply ever since. This paper attempts to revive the
lattice model and demonstrate its applicability in an information retrieval system, FaIR, that incorporates a graphical representation of a faceted thesaurus. It shows how Boolean queries can be
lattice-theoretically related to the concepts of the thesaurus and visualized within the thesaurus display. An advantage of FaIR is that it allows for a high level of transparency of the system which
can be controlled by the user.
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=53413","timestamp":"2014-04-16T21:16:21Z","content_type":null,"content_length":"37893","record_id":"<urn:uuid:af78e861-29c5-46ab-9dd5-c7ec964a3c9a>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00513-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Bending of space
This is my analogy. Please tell me if it makes any sense.
Let's say I have a rope of known length l and the coordinates of both ends, (x1, y1, z1) and (x2, y2, z2). With (x1, y1, z1) as the center and radius l, I construct a sphere. If (x2, y2, z2) lies on the sphere, I can confirm the rope is not curved. If not, the rope is curved. (In that case (x2, y2, z2) must be inside the sphere; I cannot think of a possibility where (x2, y2, z2) is outside the sphere.)
What I do not understand is: why does light have to bend if there's no space?
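A minimal sketch of the test described above, assuming ordinary straight-line (Euclidean) distance: the rope can only be straight if its endpoints are exactly l apart, i.e. the second endpoint lies on the sphere of radius l centred at the first. The function name and the tolerance are illustrative, not from the post.

import math

def rope_is_straight(p1, p2, length, tol=1e-9):
    # The rope is straight only if its endpoints are exactly `length` apart,
    # i.e. p2 lies on the sphere of radius `length` centred at p1.
    dx, dy, dz = p2[0] - p1[0], p2[1] - p1[1], p2[2] - p1[2]
    dist = math.sqrt(dx*dx + dy*dy + dz*dz)
    return abs(dist - length) <= tol

print(rope_is_straight((0, 0, 0), (3, 4, 0), 5.0))  # True  -> straight
print(rope_is_straight((0, 0, 0), (3, 3, 0), 5.0))  # False -> curved; the endpoint sits inside the sphere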
|
{"url":"http://www.physicsforums.com/showthread.php?t=311787","timestamp":"2014-04-19T22:45:10Z","content_type":null,"content_length":"54376","record_id":"<urn:uuid:637a54a9-55ad-4656-9875-476ac44bf6ef>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00543-ip-10-147-4-33.ec2.internal.warc.gz"}
|
the definition of triangle inequality
noun Mathematics.
the theorem that the absolute value of the sum of two quantities is less than or equal to the sum of the absolute values of the quantities.
the related theorem that the magnitude of the sum of two vectors is less than or equal to the sum of the magnitudes of the vectors.
(for metric spaces) the related theorem that the distance between two points does not exceed the sum of their distances from any third point.
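Stated symbolically (an illustrative restatement, not part of the dictionary entry; a, b are numbers, u, v are vectors, and d is a metric):
$$|a+b| \le |a|+|b|, \qquad \|\mathbf{u}+\mathbf{v}\| \le \|\mathbf{u}\|+\|\mathbf{v}\|, \qquad d(x,z) \le d(x,y)+d(y,z).$$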
|
{"url":"http://dictionary.reference.com/browse/triangle%20inequality","timestamp":"2014-04-16T12:40:41Z","content_type":null,"content_length":"93342","record_id":"<urn:uuid:b6db4629-4842-4c70-b618-7aeca3402417>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00317-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Physics Forums - View Single Post - Power of a thrust to maintain an extended spring
Exhaust stream velocity is not known, or at least beyond my ability to measure accurately.
For a bit more background, I want to measure the efficiency of an aeolipile. Mine is very similar to
this one
, but it hangs from a bearing over a bunsen instead of being mounted on a turntable. I have done this in the past by trying to measure the angular acceleration as the thing spins up to speed. Knowing
the moment of inertia, which I have measured, I can therefore work out the resultant torque (Thrust - friction). Measuring the angular deceleration as it comes to rest allowed me to estimate the
friction. Then I just applied W=Fx while it was spinning up to speed.
I made the measurements with a light gate which registered every half rotation, which wasn't a very precise method as the thing is at full speed in a few rotations (<5). Plus the fact that letting
the thing spin down to estimate the friction includes all sorts of assumptions about that friction that are probably dubious (e.g. that the friction is constant and independent of rotational
speed). Was there a better way of measuring the work done? I wondered whether it could be done by hanging the Aeolipile from a torsion spring, hence my original question.
However, I'm starting to think that I just don't have enough info. to solve this for the work done. The set up would allow me to measure the thrust of the Aeolipile, but not the power, right?
Could I then let the Aeolipile spin up to a constant angular velocity, measure that velocity, and then apply P=Fv using that measured thrust?
p.s. - In case you are interested, the power input is measured by timing how long it takes to boil away a known quantity of water and working out the energy needed to boil that water using water's
latent heat of vaporization.
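A rough sketch of the efficiency estimate described above, under stated assumptions (the thrust F acts at the nozzles a distance r from the axis, the terminal angular speed omega is constant so the output power is P = F*v = F*omega*r, and the input power comes from the latent heat of the water boiled away). All names and the example numbers are illustrative, not from the post.

L_VAPORIZATION = 2.26e6  # J/kg, latent heat of vaporization of water

def input_power(mass_boiled_kg, boil_time_s):
    # Average heating power, from the mass of water boiled away in the given time.
    return mass_boiled_kg * L_VAPORIZATION / boil_time_s

def output_power(thrust_N, omega_rad_s, nozzle_radius_m):
    # P = F*v, with v the tangential speed of the nozzles at constant angular velocity.
    return thrust_N * omega_rad_s * nozzle_radius_m

def efficiency(mass_kg, time_s, thrust_N, omega_rad_s, radius_m):
    return output_power(thrust_N, omega_rad_s, radius_m) / input_power(mass_kg, time_s)

print(efficiency(0.05, 300, 0.02, 30.0, 0.05))  # illustrative numbers only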
|
{"url":"http://www.physicsforums.com/showpost.php?p=4150446&postcount=3","timestamp":"2014-04-19T22:45:25Z","content_type":null,"content_length":"9115","record_id":"<urn:uuid:44801e4b-699c-4ed0-9b5e-3b809e3cff97>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00083-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Summary: Computational Mathematics and Mathematical Physics, Vol.40, No.9, 2000, pp. 1239-1254.
Translated from Zhurnal Vychislitel'noi Matematiki i Matematicheskoi Fiziki, Vol.40, No.9, 2000, pp. 1291-1307.
Original Russian Text Copyright © 2000 by Antipin.
English Translation Copyright © 2000 by MAIK Nauka/Interperiodika (Russia).
Solution Methods for Variational Inequalities
with Coupled Constraints
A.S. Antipin
Computing Center, Russian Academy of Sciences, ul. Vavilova 40, GSP-1, Moscow, 117967 Russia
Revised December 9, 2003
Variational inequalities with coupled constraints are considered. The class of symmetric vector functions that form coupled constraints is introduced. Explicit and implicit prediction-type gradient and proximal methods are proposed for solving variational inequalities with coupled constraints. The convergence of the methods is proved.
1. STATEMENT OF THE PROBLEM
To solve a variational inequality with coupled constraints means to find a vector $v^* \in \Omega_0$ such that
$\langle F(v^*),\, w - v^* \rangle \ge 0 \quad \forall w$
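The paper's specific prediction-type gradient and proximal methods are not reproduced here; the following is only a generic extragradient (predict-then-correct) sketch for a variational inequality of the above form, with projection onto a box standing in for projection onto the feasible set. The operator, step size and box bounds are illustrative assumptions.

import numpy as np

def project_box(v, lo, hi):
    # Stand-in for projection onto the feasible set: here, a box [lo, hi]^n.
    return np.clip(v, lo, hi)

def extragradient(F, v0, step=0.1, lo=0.0, hi=1.0, iters=500):
    # Prediction step:  u_k     = P(v_k - step * F(v_k))
    # Correction step:  v_{k+1} = P(v_k - step * F(u_k))
    v = np.asarray(v0, dtype=float)
    for _ in range(iters):
        u = project_box(v - step * F(v), lo, hi)
        v = project_box(v - step * F(u), lo, hi)
    return v

# Toy monotone operator F(v) = A v + b (skew part plus a small symmetric part).
A = np.array([[0.1, 1.0], [-1.0, 0.1]])
b = np.array([-0.3, 0.2])
print(extragradient(lambda v: A @ v + b, v0=[0.5, 0.5]))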
|
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/699/3712152.html","timestamp":"2014-04-20T16:02:14Z","content_type":null,"content_length":"8378","record_id":"<urn:uuid:5cabcca0-e30f-408f-a443-7600da6379b4>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00120-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Zentralblatt MATH
Publications of (and about) Paul Erdös
Zbl.No: 885.01023
Autor: Losonczi, László
Title: Paul Erdös on functional equations: Contributions and impact. (In English)
Source: Aequationes Math. 54, No.3, 221-233 (1997).
Review: Short biography of Erdös, and a survey of papers that were inspired by his results and conjectures on extensions of functional equations, almost everywhere additive functions, and additive
arithmetical functions. Bibliography of fifty entries.
Reviewer: D.Laugwitz (Darmstadt)
Classif.: * 01A70 Biographies, obituaries, personalia, bibliographies
39-03 Historical (functional equations)
Keywords: P. Erdös; functional equations; additive functions
Index Words: Obituary
Biogr.Ref.: Erdös, P.
© European Mathematical Society & FIZ Karlsruhe & Springer-Verlag
|
{"url":"http://www.emis.de/classics/Erdos/cit/88501023.htm","timestamp":"2014-04-18T21:16:21Z","content_type":null,"content_length":"3220","record_id":"<urn:uuid:5a136195-c680-4f89-baf9-47fab2bc93a0>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00559-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Boolean Algebra
December 7th 2010, 02:44 PM #1
MHF Contributor
Mar 2010
Boolean Algebra
In order to show a Lattice is a Boolean Algebra, the diagram needs to be bounded, distributive, complemented, and $|L|=2^n \ n\geq 1 \ n\in\mathbb{Z}$.
However, my book says, "A finite lattice is called a Boolean Algebra if it is isomorphic to $B_n$ for some nonnegative integer n."
What is $B_n$?
I know $D_n$ are the numbers that divide n but have no clue about this $B_n$.
In order to show a Lattice is a Boolean Algebra, the diagram needs to be bounded, distributive, complemented, and $|L|=2^n \ n\geq 1 \ n\in\mathbb{Z}$.
However, my book says, "A finite lattice is called a Boolean Algebra if it is isomorphic to $B_n$ for some nonnegative integer n."
What is $B_n$?
I know $D_n$ are the numbers that divide n but have no clue about this $B_n$.
I'm not quite sure what they mean, but a fundamental result in the study of Boolean algebras is that every finite Boolean algebra $B$ is isomorphic to $2^{[n]}$ (here $[n]=\{1,\cdots,n\}$) for
some $n\in\mathbb{N}$. This fact would make me guess that $B_n=2^{[n]}$.
So if $\displaystyle |L|=8$, what am I showing it is isomorphic too?
Last edited by dwsmith; December 7th 2010 at 03:50 PM.
Haven't a clue unless you are talking about the science sense or as in bomb.
Here is the Hasse diagram for $2^{[3]}$
How is that the Hasse Diagram for $2^3\mbox{?}$ How do I come up with the Hasse diagram when I have $2^n$ is the better question?
So 4 is the P(s)={1,2,3,4} for instance?
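A small sketch consistent with the reading $B_n \cong 2^{[n]}$ suggested above: it lists the subsets of $[n]$ and the covering relations (the edges of the Hasse diagram), so for $|L| = 8 = 2^3$ the lattice to compare against is the one on subsets of {1, 2, 3}. The helper name is illustrative.

from itertools import combinations

def power_set_lattice(n):
    # Elements of 2^[n] as frozensets, plus the covering relations S -> S ∪ {x}
    # for x not in S (the edges of the Hasse diagram).
    ground = range(1, n + 1)
    elements = [frozenset(c) for k in range(n + 1) for c in combinations(ground, k)]
    edges = [(s, s | {x}) for s in elements for x in ground if x not in s]
    return elements, edges

elems, edges = power_set_lattice(3)
print(len(elems))        # 8 = 2^3 elements, matching |L| = 8
for s, t in edges:       # 12 covering edges -- the cube-shaped Hasse diagram
    print(sorted(s), "->", sorted(t))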
|
{"url":"http://mathhelpforum.com/discrete-math/165602-boolean-algebra.html","timestamp":"2014-04-16T16:59:45Z","content_type":null,"content_length":"84327","record_id":"<urn:uuid:d8ea0a86-b169-446e-a45f-f980010a2d05>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00235-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Directional Derivative
September 14th 2011, 06:38 AM
Directional Derivative
Ok, so I kept getting an Latex Error:Unknown Error when I tried to enter my stuff in LaTex. Then I scanned my handwritten work and tried to attach it...it just timed out on me everytime. Sorry
this is so primitive, but I just typed everything out. I'm looking for someone to check my work and help me with the last part of #20. Here are the two problems:
19. Suppose that z=e^{xy+x-y}. How fast is z changing when we move away from the origin toward (2,1)?
20. In problem 19 in what direction should we move away from the origin for z to change most rapidly? What is the maximum rate of change? In what directions is the derivative zero at the origin?
Here is my work:
19. z=e^{xy+x-y}
Gradient of z(0,0)=<1,-1>
Directional Derivative of z at (0,0)=<1,-1> . (1/sqrt{5})<2,1>=(1/sqrt{5})
Answer to 19: (1/sqrt{5})
20. The gradient of z(0,0)=<1,-1> is the direction we should move away from (0,0) for z to change most rapidly.
The maximum rate is sqrt{2}
-I'm stuck on the last part of question 20: "In what directions is the derivative zero at the origin?" Can someone help me with this part? Thanks.
September 14th 2011, 09:36 AM
Re: Directional Derivative
I finally clued into the fact that Latex is only working with [tex] tags, so I retyped everything to make it easier.
19. $z=e^{xy+x-y}$
P: (0,0)
$\frac{\delta z}{\delta x}=(y+1)*e^{xy+x-y}=1$
$\frac{\delta z}{\delta y}=(x-1)*e^{xy+x-y}=-1$
$D_{\vec{u}}f(0,0)=<1,-1> \cdot\ \frac{1}{\sqrt{5}}<2,1>$
<1,-1> is the direction we should move away from the origin for z to change most rapidly.
The max rate of change is $\sqrt{2}$
I still have yet to figure out how to solve the last question in problem #20.
Any help with where I'm stumped or in just reviewing the work I have completed so far would be appreciated. Thanks.
September 14th 2011, 12:22 PM
Re: Directional Derivative
The "directional derivative" in the direction of unit vector, v, is the dot product of grad f and v. In particular, the derivative will be 0 is that dot product is 0 which means v must be
perpendicular to grad f. What vectors are perpendicular to <1, -1>?
September 15th 2011, 03:55 AM
Re: Directional Derivative
<-1,-1> & <1,1>
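A quick numeric check of the thread's results using sympy (the function and points come from the posts above):

import sympy as sp

x, y = sp.symbols('x y')
z = sp.exp(x*y + x - y)

grad = sp.Matrix([sp.diff(z, x), sp.diff(z, y)]).subs({x: 0, y: 0})  # gradient at the origin
print(grad.T)                                    # [1, -1]

u = sp.Matrix([2, 1]) / sp.sqrt(5)               # unit vector toward (2, 1)
print(sp.simplify(grad.dot(u)))                  # 1/sqrt(5)

print(sp.sqrt(grad.dot(grad)))                   # maximum rate of change: sqrt(2)
print(grad.dot(sp.Matrix([1, 1]) / sp.sqrt(2)))  # 0 along <1, 1> (and likewise <-1, -1>)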
|
{"url":"http://mathhelpforum.com/calculus/187978-directional-derivative-print.html","timestamp":"2014-04-18T10:41:43Z","content_type":null,"content_length":"8255","record_id":"<urn:uuid:d2f42824-608d-478d-8ba8-73b18e87591b>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00188-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Derivative of an integral
August 3rd 2010, 05:31 AM #1
Apr 2010
Derivative of an integral
Differentiate: $f(x) = \int_{0}^{x} \frac {x}{1 + t^2 + sin^2t}dt$
I am not sure how to do this, since there are 2 different variables in the integral (x and t).
Take the x out of the integral (assuming it doesn't depend on t, which it doesn't look like it). Then use the product rule on x and the integral.
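Spelling the hint out (a sketch using the product rule together with the Fundamental Theorem of Calculus):
$$f(x) = x\int_{0}^{x} \frac{dt}{1 + t^2 + \sin^2 t} \quad\Rightarrow\quad f'(x) = \int_{0}^{x} \frac{dt}{1 + t^2 + \sin^2 t} + \frac{x}{1 + x^2 + \sin^2 x}.$$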
|
{"url":"http://mathhelpforum.com/calculus/152679-derivative-integral.html","timestamp":"2014-04-17T08:25:18Z","content_type":null,"content_length":"32374","record_id":"<urn:uuid:acb4e9ec-df83-4f03-930f-6da0854b7bb8>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00226-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Petaluma ACT Tutor
Find a Petaluma ACT Tutor
...These opportunities have given me the chance to work with students of all ages. Many have had a debilitating fear of math, which I can personally relate to. As a high school and college
student, mathematics was always my worst subject.
10 Subjects: including ACT Math, geometry, statistics, algebra 1
...With each major I learned about different processes of learning and education and gained an overall understanding of how to efficiently study. I also gave a lecture on efficient study
techniques to incoming medical and podiatry students at my former medical school. Studying can be broken down ...
37 Subjects: including ACT Math, chemistry, statistics, physics
...I now know firsthand many of the ways that science in the classroom is applied in research, and I bring this perspective into my tutoring sessions. My Tutoring Approach Helping students improve
their understanding of difficult coursework boosts their grades, and ultimately their curiosity in lea...
50 Subjects: including ACT Math, reading, English, calculus
...I am a mother and am accustomed to exhibiting the patience necessary to encourage a child to learn. I know that children, like adults, have learning preferences whether they know it or not. I
always try to get through to a student by utilizing the methods that will be the most effective.
29 Subjects: including ACT Math, Spanish, reading, chemistry
...I have more than five years of tutoring experience. I worked as a math tutor for a year between high school and college and continued to tutor math and physics throughout my undergraduate
career. I specialize in tutoring high school mathematics, such as geometry, algebra, precalculus, and calculus, as well as AP physics.
25 Subjects: including ACT Math, calculus, physics, geometry
|
{"url":"http://www.purplemath.com/petaluma_act_tutors.php","timestamp":"2014-04-18T23:29:39Z","content_type":null,"content_length":"23546","record_id":"<urn:uuid:ff8b700d-be13-4993-894a-8c388a8638bf>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00052-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A Monte Carlo AIXI Approximation
Joel Veness, Kee Siong Ng, Marcus Hutter and David Silver
arXiv Number 0909.0801, 2009.
This paper describes a computationally feasible approximation to the AIXI agent, a universal reinforcement learning agent for arbitrary environments. AIXI is scaled down in two key ways: First, the
class of environment models is restricted to all prediction suffix trees of a fixed maximum depth. This allows a Bayesian mixture of environment models to be computed in time proportional to the
logarithm of the size of the model class. Secondly, the finite-horizon expectimax search is approximated by an asymptotically convergent Monte Carlo Tree Search technique. This scaled down AIXI agent
is empirically shown to be effective on a wide class of toy problem domains, ranging from simple fully observable games to small POMDPs. We explore the limits of this approximate agent and propose a
general heuristic framework for scaling this technique to much larger problems.
PDF - Requires Adobe Acrobat Reader or other PDF viewer.
|
{"url":"http://eprints.pascal-network.org/archive/00005841/","timestamp":"2014-04-16T07:32:21Z","content_type":null,"content_length":"7790","record_id":"<urn:uuid:aee338c2-8853-4155-bbc0-56f077342301>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00482-ip-10-147-4-33.ec2.internal.warc.gz"}
|
logistic growth problem
August 19th 2009, 12:15 PM #1
Aug 2008
logistic growth problem
The population of fish in a certain lake at time t months is given by the function:
P(t) = 20,000/(1+24e^(-t/4)) , where T is greater than or equal to 0. There is an upper limit on the fish population due to the oxygen supply, available food, etc.
A. What is the initial population of fish?
B. When will there be 15,000 fish?
C. What is the maximum number of fish possible in the lake?
A and B i know how to do, but C is a little confusing. To find the maximum, i took the derivative of P(t), which after simplifying, is 120,000 e^(-t/4) / (1+24e^(-t/4))^2. i tried to find
critical numbers by setting the numerator and denominator equal to 0, but on the top i end up with e^(-t/4) = 0 and exponential functions never equal 0 so that's no solution. and on the bottom i
get e^(-t/4) = -1/24 and since exponential functions are never negative, that one has no solution either. i saw the answer that said take the limit as t approaches infinite and you get 20,000 as
the maximum. but how come i couldn't use the derivative to find the maximum in this problem?
Consider the graph -- it is an increasing function with an inflection pt at P = 10,000 (half the limiting value)
Your derivative shows this as there are no critical pts and the derivative is positive
So the max occurs at infinity
since this is a logistic growth, the bottom of the graph is flat, the graph increases, and then the top of the graph is flat. how come the first derivative test didn't pick up a max or min at P
(t) = 0 or P(t) = 20,000? since the graph is flat in those 2 areas, shouldn't the derivative be 0 and thus register as max and mins?
Don't confuse "looks flat" with horizontal. If the rate of change is small but not 0 the graph appears flat relatively speaking but not horizontal
The graph is not flat in the sense the derivative is never 0.
Granted initially the rate of increase is small but not 0 otherwise the population would never increase.
Similarly as t - > infinity the graph flattens out but again as the derivative is never 0 the population approaches 20,000 asymptotically but again
as the derivative is not 0 the population is increasing at a very small rate
The population of fish in a certain lake at time t months is given by the function:
P(t) = 20,000/(1+24e^(-t/4)) , where T is greater than or equal to 0. There is an upper limit on the fish population due to the oxygen supply, available food, etc.
A. What is the initial population of fish?
B. When will there be 15,000 fish?
C. What is the maximum number of fish possible in the lake?
$\lim_{t \to \infty} P(t) = 20000$
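A quick numeric check of parts A-C (a sketch; the values follow directly from the formula in the thread):

import math

def P(t):
    return 20000 / (1 + 24 * math.exp(-t / 4))

print(P(0))                 # A: initial population = 20000/25 = 800 fish
t_15000 = 4 * math.log(72)  # B: 20000/P = 4/3 gives e^(-t/4) = 1/72, so t = 4 ln 72
print(t_15000, P(t_15000))  #    about 17.1 months
print(P(1000))              # C: approaches the limiting value 20000 as t grows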
|
{"url":"http://mathhelpforum.com/calculus/98603-logistic-growth-problem.html","timestamp":"2014-04-23T11:27:02Z","content_type":null,"content_length":"44097","record_id":"<urn:uuid:cadc5dd1-d786-48bb-a119-ecb69587073a>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00574-ip-10-147-4-33.ec2.internal.warc.gz"}
|
LKML: Peter Osterlund: Re: [PATCH] Apple USB Touchpad driver (new)
Messages in this thread
Subject Re: [PATCH] Apple USB Touchpad driver (new)
From Peter Osterlund <>
Date 10 Jul 2005 00:48:30 +0200
Vojtech Pavlik <vojtech@suse.cz> writes:
> Btw, what I don't completely understand is why you need linear
> regression, when you're not trying to detect motion or something like
> that. Basic floating average, or even simpler filtering like the input
> core uses for fuzz could work well enough I believe.
Indeed, this function doesn't make much sense:
+static inline int smooth_history(int x0, int x1, int x2, int x3)
+{
+ return x0 - ( x0 * 3 + x1 - x2 - x3 * 3 ) / 10;
+}
In the X driver, a derivative estimate is computed from the last 4
absolute positions, and in that case the least squares estimate is
given by the factors [.3 .1 -.1 -.3]. However, in this case you want
to compute an absolute position estimate from the last 4 absolute
positions, and in this case the least squares estimate is given by the
factors [.25 .25 .25 .25], ie a floating average. If the function is
changed to this:
+static inline int smooth_history(int x0, int x1, int x2, int x3)
+{
+ return (x0 + x1 + x2 + x3) / 4;
+}
the standard deviation of the noise will be reduced by a factor of 2
compared to the unfiltered values. With the old smooth_history()
function, the noise reduction will only be a factor of 1.29.
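The two noise-reduction factors quoted above follow from the filter coefficients (the old filter expands to 0.7*x0 - 0.1*x1 + 0.1*x2 + 0.3*x3); a quick check, assuming independent samples with equal noise variance:

import math

# Output noise std dev relative to input is sqrt(sum of squared coefficients).
old_coeffs = [0.7, -0.1, 0.1, 0.3]      # x0 - (3*x0 + x1 - x2 - 3*x3)/10
avg_coeffs = [0.25, 0.25, 0.25, 0.25]   # plain floating average

for name, c in (("old", old_coeffs), ("average", avg_coeffs)):
    gain = math.sqrt(sum(v * v for v in c))
    print(name, "noise reduction factor:", 1.0 / gain)
# old     -> about 1.29
# average -> 2.0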
Peter Osterlund - petero2@telia.com
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
|
{"url":"http://lkml.org/lkml/2005/7/9/138","timestamp":"2014-04-17T10:45:47Z","content_type":null,"content_length":"9284","record_id":"<urn:uuid:56604615-24f0-4cf1-961c-fb6366579714>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00275-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Programming Examples in RPN Mode
Programming on the HP 30b
The HP 30b Business Professional calculator includes a programming capability designed to help automate repetitive calculations and extend the usefulness of the built-in function set of the
calculator. The capability includes the creation of up to 10 separate programs using up to 290 bytes of memory among them.
Programs record keystrokes, with each keystroke using one byte of memory, although some commands use more than one byte, as described later. In addition, many program-only functions are provided for
conditional tests, conditional and unconditional 'gotos', looping, displaying intermediate results and even calling other programs as subroutines.
This learning module will cover using loops and subroutines in the HP 30b programming environment in some detail. Other learning modules will show how to enter and edit programs, how to automate
short, repetitive tasks, as well as showing several example programs to help get you started.
As shown in the picture, the HP 30b has additional functions assigned to the keys that are program-only functions. Other than the Black-Scholes function (shown as Black S), which is not a program
function but a financial function, these functions are not printed or labeled on the actual HP 30b itself. However, an overlay is provided that lays over the top rows of keys that help indicate how
these functions are mapped to the keys.
Each of these functions is inserted into a program by pressing the shift key and holding it down while pressing the key under which the program function is displayed. For example, to insert a LBL
(label) command, press
and, while holding it down, press
. In these learning modules describing programming, this will be shown as
. Pressing that key combination will insert a LBL instruction into a program in program edit mode. Pressing that key combination in calculation mode will do nothing.
There are 10 numbered slots available for programs, numbered from 0 to 9. These are displayed in the program catalog which is viewed by pressing
. In the image above, the program catalog is displayed, showing Prgm 0 or program 0. Pressing the
keys will scroll through the list of 10 programs. Pressing
will enter the selected program, allowing you to view the program steps stored in that program slot or to change the program steps. To exit this program editing mode and return to the program
catalog, press
. To exit the program catalog and return to calculation mode, press
When a program is displayed, a number will be shown below it indicating how many bytes are used. If the program name is shown in reverse video, then the program has been assigned to a key and can be
executed by pressing the appropriate key combination, even when in calculation mode. This is shown in the image below. When viewing a program in the program catalog, pressing
will delete the presently displayed program and return you to the calculation environment. To delete all programs, press
while in calculation mode.
At different places within a program, you can insert a Label (LBL) command. A label defines a location to which program control may be transferred. The HP 30b can handle up to 100 labels within the
entire program memory. These labels are a two-digit numeric value from 00 to 99. No label can be used more than once, which makes each label a 'global' label and defined only once within the global
program memory space. If you attempt to enter a label that has already been used, a message saying 'Exists!' will be briefly displayed.
Example 1: Calculating digits of PI
The first example program will compute a user-specified number of digits of the constant PI and place the result in the cash flow / statistics data registers. It uses Euler’s convergence improvement
applied to the Gregory series for PI.
This program illustrates an important feature of the HP 30b that is available to programmers: 100 data registers are available and can be accessed indirectly using data register 0 as an index or
pointer. The statistics data registers begin from one end of this 100 register area and the cash flow values begin from the other end. They cannot overlap, so any values stored in one data area
reduce the available number of registers for the other data area.
For example, if you press
and then press
. The 5 will be stored in position 6 of the data registers, which is Y(3). It will be stored in the 6th position because the first position is referenced with an index of 0. To recall a value from
the statistics data registers, store the proper index value into memory register 0 and press
To use the cash flow registers, press
and then press
.The 5 will be stored in position 6 of the cash flow registers, which is #CF(2). It will be stored in the 6th position because the first position is referenced with an index of 0. To recall a value
from the cash flow registers, store the proper index value into memory register 0 and press
This allows for the use of two separate data areas of up to 100 total values, if a programmer wishes.
Keys Explanation
Enters program mode and displays the last program previously viewed in the program catalog. If you wish to enter your program into a different program number in the catalog, press or until
the program number you wish to use is displayed. Use Prgm location 3 which is assumed to be empty. Then press:
Enters program edit mode and displays the first line of the program
Save the number of registers to fill.
Save the user’s mode to restore at end of program.
Set mode to RPN and fix 0 decimal places
Clear the cash flow registers
Clear the statistics registers. This provides maximum room for the statistics registers where the results will be stored.
Recall the number of registers to fill.
Multiply by 14, which is 7 digits per register x 2 (which is the loop increment).
Number of digits ÷ log base 2 of 10 are the number.of iterations needed (and the count down is by 2).
Accesses the IP (integer part) function in the math menu.
Saves the number of loops required in memory 1.
Label 80 is the top of the main loop.
Set up the number of registers for ISG loop in label 81.
Initial value of term.
Initial value for carry.
Label 81 is the top of the loop through the statistics registers.
Numerator is Data(i)*n + carry * 10^7.
Set up the number of registers for ISG loop in label 81.
Denominator is 2n+1.
Accesses the IP (integer part) function in the math menu. IP(Data(i)*n+carry*10^7)/(2n+1)) stored into Data(i).
Performs a swap of X and Y, since this program is in RPN mode.
Carry into the next register.
Inserts ISG 0. Checks for the end of the statistics register loop.
If not the end, loop back to label 81.
These next lines are needed because of the way the loop ends. A 2 is needed in Data(0).
Inserts DSE 1. Checks for the end of the term loop. If you add these two instructions BEFORE the DSE 1 at this step, you will see a 'countdown' displayed as the program executes this loop:
and . This can provide useful feedback on longer execution times.
If not the end, loop back to label 80.
This is the final 'fix up' loop. Just one pass through the registers to adjust the overflows.
Initial value of the carry.
Label 82 is the top of the loop through the statistics registers to adjust for any overflow.
Add carry to register.
Inserts ?<conditional test. If true, no overflow.
If true, skip over adjustment.
Back out overflow.
Carry into next register.
Label 83.
No carry.
Inserts DSE 0. Decreases register pointer and loops until all have been checked.
Loop over. Clean up by restoring user’s original mode settings.
Program will end showing the cash flow registers to allow for review.
Inserts Stop. Program over.
Exits program edit mode and returns to the program catalog.
This program takes 154 bytes and has a checksum of 189. This program uses over half of the available program memory on the HP 30b.
To execute this program, enter the number of registers you wish to use for the results and press
. The first register will always contain the integer value of PI: a value of 3. The registers after the first one contain the decimal digits of PI, shown as an integer. Entering 1 to use registers 0
and 1 for storage will compute 7 decimal digits of PI in about 1 second while a value of 5 (using registers 0 through 5) will compute 35 digits in just a few seconds. The maximum number of registers
that can be used is 99, which uses registers 0 through 99, for 693 digits of PI in under an hour. Also note that leading zeroes are not shown in the data registers. If the seven digits should be
0000023, the data register would simply show 23. The user must note and add any leading zeroes. If run with 5 as the number of registers to be used, the program ends with the following displayed.
to see additional results as shown below.
To 35 decimals, the value of PI is 3.14159265358979323846264338327950288.
Example 2: Finding prime factors of an integer
Don would like to develop a program to factor some numbers into their prime factors. This example program will find the prime factors of an integer. For example, the number 10 can be factored into
the product of two primes, 2 and 5. The number 13 is prime, as it can only be factored into 1 and 13.
Given a number, this program will return a series of prime factors. After each factor is returned, press
(which executes a R/S command) to continue the factoring of the number. If the original number is displayed, then the original number is prime. The program presented below MUST be run in RPN mode.
Keys Explanation
Enters program mode and displays the last program previously viewed in the program catalog. If you wish to enter your program into a different program number in the catalog, press or until
the program number you wish to use is displayed. Use Prgm location 4 which is assumed to be empty. Then press:
Enters program edit mode and displays the first line of the program.
Store number to be factored in memory 0.
Memory 1 stores the trial factor to use. Start with 2.
Memory 2 stores the increment to the trial factor. Starts with 1 to make 2nd factor tried equal to 3, then 2 to try 5, 7, 9…
Label 00 is the main loop.
Accesses the FP (fractional part) function in the math menu. If 0, found a factor in memory 1.
Inserts a Goto False command. If the result of the FP instruction is zero, go to label 02.
Fractional part was non-zero. Number in memory 1 is not a factor. Increment factor to try next by recalling value in memory 2 and adding it to value in memory 1.
These steps ensure the factor increment is 2, since the loop starts with this at a value of 1.
Trial factor squared. If larger than number being factored, stop the loop.
Number being factored.
Inserts a ?<= conditional test. If the value of memory 1, squared, is less than memory 0, places a 1 in the X register. Otherwise, places a 0 in the X register.
Inserts a Goto True command. If X is not equal to 0, go to label 00.
Compare last factor found to 1.
Inserts a ?= conditional test. If the value of memory 0 is equal to 1, places a 1 in the X register. Otherwise, places a 0 in the X register
Inserts a Goto True command. If X is not equal to 0, go to label 01.
Inserts R/S command and displays the present prime factor.
Label 01 is the destination if the last factor was 1.
Inserts a Stop command and displays a 0. Indicates all prime factors have been found
Label 02 indicates a prime factor was found.
Display factor found.
Inserts a R/S command and displays the current factor.
Update new number to factor by dividing number by factor found.
Inserts a Goto 00 command. Continues the loop.
Exits program edit mode and returns to the program catalog.
This program takes 59 bytes and has a checksum of 247. To execute this program from the program catalog, enter the number you wish to factor and press
. If you have left the program catalog, reenter it by pressing
Question 1
0 is displayed. This indicates the factors have been found. The prime factors of 55 are 5 and 11.
Question 2
What are the prime factors of 9999999967? Enter the program catalog by pressing
. Key 9999999967 and press
. Be aware that this will take several minutes to run.
0 is displayed. This indicates the factors have been found. 9999999967 is prime.
NOTE:The HP user club (not associated with Hewlett Packard) that came to be known as PPC published a journal for many years that included programs written by users. One such program was a 'Speedy
Factor Finder'. The value used as a test case for speed improvements was the largest 10-digit prime number, 9999999967. This number proved prime using a program written for the HP 67 calculator in
just under 3 hours.
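For comparison, a hedged Python sketch of the same trial-division idea used by the keystroke program above (try 2 first, then odd candidates, stopping once the trial factor squared exceeds what remains); this is an illustration, not a transcription of the calculator code.

def prime_factors(n):
    # Trial division mirroring the program's logic: factor 2, then 3, 5, 7, ...,
    # stopping when the trial factor squared exceeds the remaining number.
    factors = []
    f, step = 2, 1          # after 2 the increment becomes 2
    while f * f <= n:
        if n % f == 0:
            factors.append(f)
            n //= f
        else:
            f += step
            step = 2
    if n > 1:
        factors.append(n)   # whatever remains is prime
    return factors

print(prime_factors(55))          # [5, 11]
print(prime_factors(9999999967))  # [9999999967] -- prime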
Example 3: Base conversions
This example program converts a number from a base to another base in the range of bases 2 through 10, provided that one of the bases is in fact 10. For example, this program can convert from base 10
to a base 2 through 9, or can convert from a base 2 through 9 to base 10. To convert a number in base 8 to base 2, for example, you must perform an intermediate step by first converting the base 8
number to base 10 and then converting the resulting base 10 number to base 2. Bases greater than 10 are not supported by this program.
Inputs to this program are the number to convert, the input base, and the output base. The program presented below MUST be run in RPN mode.
Keys Explanation
Enters program mode and displays the last program previously viewed in the program catalog. If you wish to enter your program into a different program number in the catalog, press or until
the program number you wish to use is displayed. Use Prgm location 5 which is assumed to be empty. Then press:
Enters program edit mode and displays the first line of the program.
Store the output base in memory 2.
This executes a roll down of the 4-level stack.
Store the input base in memory 1.
This executes a roll down of the 4-level stack.
Store the number to convert in memory 0.
Initialize the output number in memory 3.
Initialize the multiplier to use in memory 4.
Label 10 is the main loop.
Divide number to convert by output base.
Store result back into memory 0.
Accesses the FP (fractional part) function in the math menu.
Multiply fractional part by output base to get digit.
Multiply by digit position multiplier and
add to accumulated total.
Update multiplier by multiplying by
the input base.
Accesses the IP (integer part) function in the math menu. Takes the integer part of the earlier computed quotient.
Store the result back into memory 0.
Inserts a Goto True command. If the integer part is not equal to zero, go to label 10. This will continue the loop until the quotient is zero.
Recall final output number in new base.
Inserts a Stop command. Program ends execution.
Exits program edit mode and returns to the program catalog.
This program takes 54 bytes and has a checksum of 155. To execute this program from the program catalog, enter the number you wish to convert, press
, enter the number’s present base,
, and enter the base you wish to convert it to and press
175 base 8 is equal to 125 base 10. Now convert this result to base 2. Since when executed, the program leaves the program catalog, to run it again press:
175 base 8 is equal to 125 base 10 which is equal to 1111101 base 2.
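A hedged Python sketch of the same trick the program uses (one of the two bases must be 10; the number is the numeral as it appears on the display, read as an ordinary integer; digits are peeled off with the output base and weighted by powers of the input base):

def convert(number, in_base, out_base):
    # Mirrors the calculator routine: peel a digit with the output base,
    # weight it by the matching power of the input base, accumulate.
    result, weight = 0, 1
    while number > 0:
        digit = number % out_base
        result += digit * weight
        weight *= in_base
        number //= out_base
    return result

print(convert(175, 8, 10))  # 125      (175 in base 8 equals 125 in base 10)
print(convert(125, 10, 2))  # 1111101  (125 in base 10 written in base 2)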
Example 4: Lunar lander game
This example program simulates landing on the moon. It was originally published by Hewlett Packard in 1975 and can be found in the HP 25 Applications Program book.
The game starts off with the rocket descending at a velocity of 50 feet/sec from a height of 500 feet. The velocity and height are shown in a combined display as -50.0500, the height appearing to the
right of the decimal point and the velocity to the left, with a negative sign on the velocity to indicate downward motion. If a velocity is ever displayed with no fractional part, for example, -15,
it means that you have crashed at a speed of 15 feet/sec. In game terms, this means that you have lost; in real-life, it signifies an even less favorable outcome.
You will start the game with 120 units of fuel. You may burn as much or as little of your available fuel as you wish (as long as it is an integer value) at each step of your descent; burns of zero
are quite common. A burn of 5 units will just cancel gravity and hold your speed constant. Any burn over 5 will act to change your speed in an upward direction. You must take care, however, not to
burn more fuel than you have; for if you do, no burn at all will take place, and you will free-fall to your doom! The final velocity shown will be your impact velocity. Any impact velocity over 5
feet/sec would probably doom your attempt. You may display your remaining fuel at any time by recalling memory 2.
Keys Explanation
Enters program mode and displays the last program previously viewed in the program catalog. If you wish to enter your program into a different program number in the catalog, press or until
the program number you wish to use is displayed. Use Prgm location 6 which is assumed to be empty. Then press:
NOTE:If you have the previous examples in memory, you will have to delete one of them before entering this program.
Enters program edit mode and displays the first line of the program.
Store the initial height in memory 0.
Store the initial downward velocity in memory 1.
Store the initial fuel in memory 2.
This executes a roll down of the 4-level stack.
Set mode to RPN and 4 decimal places shown.
Label 30 is the main loop.
Divide height by 10,000.
Compare velocity to 0.
Inserts a?<conditional test. If the velocity is less than 0, places a 0 in the X register. Otherwise, places a 1 in the X register.
Inserts a Goto True command. If X is equal to 0, go to label 31. Label 31 is when velocity is negative.
These steps are performed when velocity is positive. Performs a stack roll down.
Adds velocity to fraction displaying height.
Inserts a Goto command. Jumps to label 33.
Label 31. These steps are performed when velocity is negative.
Performs a stack roll down.
Performs a stack swap of the X and Y registers.
Subtracts a negative velocity from a positive height.
Label 33. Destination after alternate paths for positive or negative velocity.
Inserts R/S command and displays V.X, velocity.height
Inserts a ?< conditional test. If the input burn is greater than amount of fuel, prepare to crash.
Inserts a Goto True command. If X is equal to 0, go to label 34 and prepare to crash.
Performs a stack roll down. Burn is less than total fuel. Update acceleration, velocity, and height.
Subtract burn from fuel.
5 units cancels effects of gravity, so acceleration is burn minus 5.
Store acceleration into memory 3.
New height = original height plus velocity plus acceleration.
Store new height into memory 0.
Compare height to 0.
Inserts a ?< conditional test. If the height is less than 0, places a 0 in the X register. Otherwise, places a 1 in the X register.
Inserts a Goto True command. If X is equal to 0, go to label 35. Label 35 represents a crash with fuel remaining.
Have not crashed. Performs a stack roll down.
Recall acceleration.
Store new velocity into memory 1.
Inserts a Goto command. Jumps to label 30 to begin another loop.
Label 34. Determines velocity of crash with no fuel remaining.
Compute crash velocity as square root of (V^2 + 2gHeight), where g=5
Display as a negative number to indicate a crash.
Label 35. Create display to indicate a crash occurred.
Set mode to RPN and 0 decimal places shown for a crash.
Recall crash velocity.
Inserts a Stop command. Program ends execution.
Exits program edit mode and returns to the program catalog.
This program takes 105 bytes and has a checksum of 121. To run this program from the program catalog, press
The initial descent display is shown. The landing craft is 500 feet high and descending at 50 feet / sec. Burn 5 units of fuel by pressing
Check remaining fuel by pressing .
This is a crash. Perhaps you can do better?
Example 5: Guess the secret number game
This program generates a secret number between 0 and 99. The user enters a guess and the program indicates whether the guess is too high or too low. This looping process continues until you guess the
number. By making proper guesses, any number can be found in 7 or fewer attempts.
Keys Explanation
Enters program mode and displays the last program previously viewed in the program catalog. If you wish to enter your program into a different program number in the catalog, press or until
the program number you wish to use is displayed. Use Prgm location 7 which is assumed to be empty. Then press:
Enters program edit mode and displays the first line of the program.
Get random seed.
Multiply by 100.
Accesses the integer part (IP) function from Math menu. Displayed as 'Math Up Up ='
Initialize guess counter at 0.
Label 70. Main loop of program.
Inserts a R/S command. Enter your guess between 0 and 99.
Necessary to terminate digit entry of guess.
Increment guess counter.
Roll the stack down.
Swap. Puts your guess in X and secret number in Y.
Inserts a not equal conditional test.
Inserts a Goto True command. If the guess is not the secret number, go to label 71.
Recall guess count.
Insert a MSG command to display 'Yes'
Inserts a Y.
Inserts an e.
Inserts an s.
Terminates MSG character entry.
Inserts a Stop command. Game over. Number of guesses is in display.
Your guess was wrong.
Inserts ?> conditional test.
Inserts a Goto True command. If the secret number is greater than the guess, go to label 72.
Insert a MSG command to display 'High'
Inserts an H.
Inserts an i.
Inserts a g.
Inserts an h.
Terminates MSG character entry.
Go back for another guess.
Your guess was too low.
Insert a MSG command to display 'Low'
Inserts an L.
Inserts an o.
Inserts a w.
Terminates MSG character entry.
Go back for another guess. This is line 32 of the program.
Exits program edit mode and returns to the program catalog.
This program takes 59 bytes and has a checksum of 192. To play the game, press
while in the program catalog. Enter a guess of 40.
NOTE:Since the number generated will be random, the game play illustrated below will probably not match your own experience, since a different secret number will probably be generated.
The secret number was 92 and was found in 5 guesses!
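The "7 or fewer attempts" claim follows from halving the remaining range on each answer (2^7 = 128 > 100); a small sketch of that strategy (the function is illustrative, not part of the calculator program):

def guess_secret(secret, lo=0, hi=99):
    # Binary-search guessing: halve the remaining range on every High/Low answer.
    guesses = 0
    while True:
        guess = (lo + hi) // 2
        guesses += 1
        if guess == secret:
            return guesses
        elif guess > secret:   # the program would answer "High"
            hi = guess - 1
        else:                  # the program would answer "Low"
            lo = guess + 1

print(max(guess_secret(s) for s in range(100)))  # 7 -- no secret number needs more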
|
{"url":"http://h10025.www1.hp.com/ewfrf/wc/document?cc=us&lc=en&dlc=en&docname=c02047259","timestamp":"2014-04-19T09:32:07Z","content_type":null,"content_length":"199084","record_id":"<urn:uuid:4a427844-f358-4019-b359-0f16993671ef>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00614-ip-10-147-4-33.ec2.internal.warc.gz"}
|
EALR'S and GLE'S (Make the connections clear and specific)
Grade 8
Component 1.5: Understand and apply concepts and procedures from algebraic sense.
Patterns, functions, and other relations
1.5.1 Apply understanding of linear and non-linear relationships to analyze patterns, sequences, and situations. W
· Extend, represent, or create linear and non-linear patterns and sequences using tables and graphs. [RL]
· Explain the difference between linear and non-linear relationships. [CU]
· Predict an outcome given a linear relationship (e.g., from a graph of profit projections, predict the profit). [RL]
· Use technology to generate linear and non-linear relationship. [SP, RL]
1.5.6 Understand and apply a variety of strategies to solve multi-step equations and one-step inequalities with one variable. W
· Solve multi-step equations and one-step inequalities with one variable.
· Solve single variable equations involving parentheses, like terms, or variables on both sides of the equal sign.
· Solve one-step inequalities (e.g., 2x<6, x+4>10).
· Solve real-world situations involving single variable equations and proportional relationships and verify that the solution is reasonable for the problem. [SP, RL, CU]
Component 2.2: Apply strategies to construct solutions.
2.2.2 Apply mathematical tools to solve the problem. W
· Implement the plan devised to solve the problem or answer the question posed (e.g., in a table of values of lengths, widths, and areas find the one that shows the largest area; check smaller
increments to see if this is the largest that works).
· Identify when an approach is unproductive and modify or try a new approach (e.g., if an additive model didn’t work, try a multiplicative model).
· Check the solution to see if it works (e.g., if the solution for a speed of 19 feet per second is 5 steps per second, perhaps the assumption of linearity was incorrect).
Grade 9/10
Component 1.1: Understand and apply concepts and procedures from number sense.
1.1.4 Apply understanding of direct and inverse proportion to solve problems. W
· Explain a method for determining whether a real-world problem involves direct proportion or inverse proportion. [SP, CU, MC]
· Explain a method for solving a real-world problem involving direct proportion. [CU, MC]
· Explain a method for solving a real-world problem involving inverse proportion. [CU, MC]
· Solve problems using direct or inverse models (e.g., similarity, age of car vs. worth). [SP, MC]
· Explain, illustrate, or describe examples of direct proportion. [CU]
· Explain, illustrate, or describe examples of inverse proportion. [CU]
· Use direct or inverse proportion to determine a number of objects or a measurement in a given situation.
|
{"url":"http://academic.evergreen.edu/curricular/met/Family%20Portraits/LESSON.htm","timestamp":"2014-04-21T04:31:52Z","content_type":null,"content_length":"16358","record_id":"<urn:uuid:c5241dd7-e98e-45e4-ae8c-62e5df66793b>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00491-ip-10-147-4-33.ec2.internal.warc.gz"}
|
From Exampleproblems
For the numerical analysis algorithm, see bisection method. For the musical set theory concept see bisector (music).
Bisection is the general activity of dividing something into two parts. In geometry, the concept is limited to divisions into two equal parts, usually by a line, which is then called a bisector. The
most often considered types of bisectors are segment bisectors and angle bisectors.
A segment bisector passes through the midpoint of the segment. Particularly important is the perpendicular bisector of a segment, which, according to its name, meets the segment at right angles. The
perpendicular bisector of a segment also has the property that each of its points is equidistant from the segment's endpoints. Therefore Voronoi diagram boundaries consist of segments of such lines
or planes.
An angle bisector divides the angle into two equal angles. An angle only has one bisector. Each point of an angle bisector is equidistant from the sides of the angle. The interior bisector of an
angle is the line or line segment that divides it into two equal angles on the same side as the angle. The exterior bisector of an angle is the line or line segment that divides it into two equal
angles on the opposite side as the angle.
In classical geometry, the bisection is a simple ruler-and-compass construction, whose possibility depends on the ability to draw circles of equivalent radius and different centers.
The segment is bisected by drawing intersecting circles of equal radius, whose centers are the endpoints of the segment. The line determined by the points of intersection is the perpendicular
bisector, and crosses our original segment at its center. Alternately, if a line and a point on it are given, we can find a perpendicular bisector by drawing a single circle whose center is that
point. The circle intersects the line in two more points, and from here the problem reduces to bisecting the segment defined by these two points.
To bisect an angle, one draws a circle whose center is the vertex. The circle meets the angle at two points: one on each leg. Using each of these points as a center, draw two circles of the same
size. The intersection of the circles (two points) determines a line that is the angle bisector.
The proof of the correctness of these two constructions is fairly intuitive, relying on the symmetry of the problem. It is interesting to note that the trisection of an angle (dividing it into three
equal parts) is somewhat more difficult, and cannot be achieved with the ruler and compass alone (Pierre Wantzel).
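The constructions can also be checked with coordinates; the sketch below is a plain Cartesian computation, not the compass construction itself: the perpendicular bisector passes through the midpoint in a direction perpendicular to the segment, and an interior angle bisector points along the sum of the unit vectors of the two legs.

import math

def perpendicular_bisector(p, q):
    # Return (midpoint, direction) of the perpendicular bisector of segment pq in the plane.
    mid = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
    dx, dy = q[0] - p[0], q[1] - p[1]
    return mid, (-dy, dx)  # the segment direction rotated by 90 degrees

def angle_bisector_direction(vertex, a, b):
    # Interior bisector of angle a-vertex-b: sum of the unit vectors along the legs.
    def unit(u, v):
        n = math.hypot(u, v)
        return (u / n, v / n)
    ua = unit(a[0] - vertex[0], a[1] - vertex[1])
    ub = unit(b[0] - vertex[0], b[1] - vertex[1])
    return (ua[0] + ub[0], ua[1] + ub[1])

print(perpendicular_bisector((0, 0), (4, 0)))            # ((2.0, 0.0), (0, 4)): the vertical line x = 2
print(angle_bisector_direction((0, 0), (1, 0), (0, 1)))  # (1.0, 1.0): the 45-degree line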
This article incorporates material from Angle bisector on PlanetMath, which is licensed under the GFDL.
|
{"url":"http://exampleproblems.com/wiki/index.php/Bisection","timestamp":"2014-04-20T09:01:28Z","content_type":null,"content_length":"20842","record_id":"<urn:uuid:992e49ad-5a7a-46cb-bc7a-219428b20b60>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00227-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Relationship between the logs of a number with different bases
January 7th 2011, 02:17 PM
Relationship between the logs of a number with different bases
Can anyone explain step by step the relation between the logs of a number to different bases?
If we take a number x and do the following:
What is the relationship of the two?
I believe the formula that sums the relationship is:
$log_{b}x log_{a}x=log_{b}x$
Can anyone explain how this is reached?
P.S sorry for the formatting!
January 7th 2011, 02:20 PM
There is no relationship between them because the bases are different
If you said that $\log_b(x) = \log_a(x)$ then $a=b$ but as two expressions there is no relationship
You can use the change of base rule should you need to: $\log_a(x) = \dfrac{\log_b(x)}{\log_b(a)}$
January 7th 2011, 02:25 PM
Really? I have a book that states the following:
"The log of 0.278 is not equal to 1n 0.278. i.e. logarithms with different bases have different values. The different values are, however, related to eachother...."
January 7th 2011, 02:30 PM
Of course that is the case.
If $a\ne b~\&~N\ne 1$ then $\log_a(N)\ne \log_b(N)$.
January 7th 2011, 02:39 PM
I think it's referring to a different relationship then i posted (and to example the book uses.)
E.g. the relationship between:
$\log_{b}a$ and the $\log_{a}x$
defining the relationship as:
"This is the change of base formula which relates logarithms of a number relative to two different bases."
How is that the case if the logarithms are of two different numbers, namely a and x? or am i misunderstanding?
January 7th 2011, 02:53 PM
Uning the definitions $\log_b(a)=\dfrac{\ln(a)}{\ln(b)}~\&~ \log_a(x)=\dfrac{\ln(x)}{\ln(a)}$ it is clear the by multiplying we
get $\,~~\log_b(a)\log_a(x)= \dfrac{\ln(x)}{\ln(b)}=\log_b(x)$,
What is your question now?
January 7th 2011, 03:03 PM
I think i have it now. Thanks Plato. I was looking at the definition a different way.
January 7th 2011, 03:38 PM
If $\log_a(x)=A$, then in exponential form this is $x=a^A$.
If $\log_b(x)=B$, then in exponential form this is $x=b^B$.
Then the relation is $a^A=b^B$.
i think 2 question is not valid always.
January 7th 2011, 03:50 PM
If $\log_a(x)=A$, then in exponential form this is $x=a^A$.
If $\log_b(x)=B$, then in exponential form this is $x=b^B$. Then the relation is $a^A=b^B$.
i think 2 question is not valid always.
What a confusing posting.
Can you tell us what point you are trying to make?
We do know that $\log_b(a)=x$ is defined as $b^x=a$.
So what is your point?
|
{"url":"http://mathhelpforum.com/algebra/167735-relationship-between-logs-number-different-bases-print.html","timestamp":"2014-04-25T09:18:00Z","content_type":null,"content_length":"12203","record_id":"<urn:uuid:80308e82-ca8b-4a49-87f5-8ad6984daa9e>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00153-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Sums of Squares and Sums of Cubes
Copyright © University of Cambridge. All rights reserved.
Let $n$ be a whole number. When is it possible to write $n$ as a sum of two squares, say $n=a^2+b^2$, or as a sum of three squares, say $n=a^2+b^2 +c^2$, or as a sum of four squares, and so on? Of
course, $a$, $b$, $c$, $\ldots$ are also meant to be whole numbers here. We can also ask whether there is any convenient test to decide whether $n$ is the sum of one square; that is, whether $n$ is
itself a square number, say $n=a^2$. In this article we shall mention some of the interesting answers to these questions. The proofs, which belong to the subject called Number Theory, are too
difficult to be given here, but take a look at the
written by Tom Sanders, aged 16.
Although there seems to be no easy test to decide when $n$ is a square number, it is easy to give a test to decide when an even whole number $n$ is
a square number. Suppose that $n$ is even and $n=a^2$. Then $a$ must be even (for if $a$ is odd, then so is $a^2$), so that $a^2$ is divisible by $4$. Thus if $n$ is even and a square number, then
$4$ divides $n$ exactly. This shows, for example, that the number $68792734815298359030284382$ (which is too big to put into your calculator) is not a square number. Why does it show this? Well, this
number is of the form $100m+82$ and as $100$ is divisible by $4$, we see that $n$ is divisible by $4$ if and only if $82$ is (and it is not). Now you might like to write down some other (very large)
numbers that are not square numbers. Can you see why if $n$ is divisible by $3$ but not by $9$ then $n$ is not a square number? (Try some examples of this.) What can you say about numbers divisible
by $5$ but not by $25$?
Now let us consider writing $n$ as the sum of two squares. There is much we can say about this, but first we need to know about the idea of a prime number. A whole number is a prime number if it has
no factors other than itself and $1$; for example, the first seven prime numbers are $2$, $3$, $5$, $7$, $11$, $13$, $17$. Now every whole number can be written as a product of prime numbers, and
each prime number is either $2$, or an odd number of the form $4k+1$, or an odd number of the form $4k+3$; for example
$$350=2\times 5^2\times 7=2\times (4+1)^2\times (4+3)$$
$$490=2\times 5\times 7^2=2\times (4+1)\times (4+3)^2$$
$$2450=2\times 5^2\times 7^2=2\times (4+1)^2\times (4+3)^2$$
Given a whole number $n$, look at all of the different prime factors of the form $4k+3$ (ignoring the other prime factors) and also the number of times that they occur; if every one of these prime
factors occurs an even number of times then $n$ can be written as a sum of two squares; if not, then $n$ cannot be written as the sum of two squares. (Another way of saying this is that $n$ can be
written as the sum of two squares if and only if the product of all of its prime factors of the form $4k+3$ is itself a square number.) For example, $490$ and $2450$ can be written as a sum of two
squares but $350$ cannot. Try some other examples yourself; for example, $36=4\times 3^2$ so $36$ can be written as a sum of two squares, namely $6^2+0^2$. Try to decide which of the numbers $25$,
$37$, $99$ and $245$ can be written as a sum of two squares and when they can, find what the two squares are. For $25$ (and possibly some of the others) there is more than one answer.
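A small sketch of the test just described (simple trial-division factorisation, fine for numbers of this size): $n$ is a sum of two squares exactly when every prime factor of the form $4k+3$ occurs to an even power. The function name is illustrative.

def is_sum_of_two_squares(n):
    # True iff every prime factor of n of the form 4k+3 occurs an even number of times.
    d = 2
    while d * d <= n:
        if n % d == 0:
            count = 0
            while n % d == 0:
                n //= d
                count += 1
            if d % 4 == 3 and count % 2 == 1:
                return False
        d += 1
    return n % 4 != 3   # any leftover factor is a prime occurring once

for m in (25, 36, 37, 99, 245, 350, 490, 2450):
    print(m, is_sum_of_two_squares(m))
# 25, 36, 37, 245, 490 and 2450 can be written as a sum of two squares; 99 and 350 cannot.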
Next, let us try to write $n$ as the sum of three squares. It is known that this is possible if and only if $n$ is not of the form $4^m(8k+7)$ (where $m$ can be zero and $4^0=1$); for example, $53=4^0(8\times 6+5)$ can be (try it), but $60=4(8+7)$ cannot (again try it, but not for too long!). What about the numbers
$30$, $48$, $77$ and $79$?
The problem of writing $n$ as the sum of three squares is closely connected to the problem of writing $n$ as a sum of three triangular numbers (a whole number $m$ is a triangular number if it is of
the form $k(k+1)/2$). For instance, it is known that every number can be written as the sum of three triangular numbers, and this means that every number of the form $8k+3$ can be written as the sum
of three squares. To see this suppose that $n=8k+3$ and let $$k=a(a+1)/2+b(b+1)/2+c(c+1)/2$$
Then $n$ can be written as a sum of three squares because $$n=8k+3=(2a+1)^2+(2b+1)^2+(2c+1)^2$$
Finally, we may try to write $n$ as the sum of four (or more) squares. In this case the answer is easy to state (but not to prove) for
every whole number can be written as the sum of four squares
! Of course, we may wish to use $0$ as one of the square numbers, and there are often several ways to do this; for example, $$4=1^2+1^2+1^2+1^2=2^2+0^2+0^2+0^2$$
You should now choose some whole numbers yourself (not too large, though) and try to express each as a sum of four squares.
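Again, a brute-force sketch in Python can be used to check your four-square decompositions; it lists every way with $a\le b\le c\le d$:

```python
from math import isqrt

def four_square_decompositions(n):
    # All (a, b, c, d) with a <= b <= c <= d and a^2 + b^2 + c^2 + d^2 = n.
    results = []
    for a in range(isqrt(n) + 1):
        for b in range(a, isqrt(n - a * a) + 1):
            for c in range(b, isqrt(n - a * a - b * b) + 1):
                r = n - a * a - b * b - c * c
                d = isqrt(r)
                if d >= c and d * d == r:
                    results.append((a, b, c, d))
    return results

print(four_square_decompositions(4))  # [(0, 0, 0, 2), (1, 1, 1, 1)]
```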
One of the important steps in considering sums of two squares is the formula $$(a^2+b^2)(c^2+d^2)=(a c+b d)^2+(a d-b c)^2=(a c-b d)^2+(a d+b c)^2$$ which holds for any whole numbers $a$, $b$, $c$,
$d$. This formula shows us that if $n$ ($=a^2+b^2$) and $m$ ($=c^2+d^2$) are the sum of two squares then so is their product $m n$. To illustrate this, note that $5=2^2+1^2$ and $13=3^2+2^2$ so we
can take $a=2$, $b=1$, $c=3$, $d=2$ and so find $65$ ($=5\times 13$) as the sum of two squares in two different ways. A similar formula to this holds for sums of four squares but sadly not for sums
of three squares (and it is the lack of such a formula that makes the problem of sums of three squares more difficult to deal with).
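To spell out the $65$ example: with $a=2$, $b=1$, $c=3$, $d=2$ the two sides of the formula give the two different decompositions
$$(a c+b d)^2+(a d-b c)^2=(6+2)^2+(4-3)^2=8^2+1^2=65$$
$$(a c-b d)^2+(a d+b c)^2=(6-2)^2+(4+3)^2=4^2+7^2=65$$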
It is much harder to see when a number can be written as a sum of cubes (for example $n=a^3+b^3$, or $n=a^3+b^3+c^3$), but it is known that every whole number can be written as a sum of nine cubes
(including, if necessary, $0^3$); for example, $$23=2^3+2^3+1^3+1^3+1^3+1^3+1^3+1^3+1^3$$ $$239=4^3+4^3+3^3+3^3+3^3+3^3+1^3+1^3+1^3$$ Curiously, these two numbers ($23$ and $239$) are the only whole
numbers that really do need nine cubes; all other whole numbers need only at most eight cubes. See if you can express the numbers $12$, $21$ and $73$ as the sum of at most eight cubes in as many ways
as possible.
|
{"url":"http://nrich.maths.org/1343/index?nomenu=1","timestamp":"2014-04-18T03:14:56Z","content_type":null,"content_length":"9458","record_id":"<urn:uuid:297c023f-a095-49a7-8e8d-43d3a53f4cbf>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00108-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mathematics of Computation
ISSN 1088-6842(online) ISSN 0025-5718(print)
A Note on NUCOMP
Author: Alfred J. van der Poorten
Journal: Math. Comp. 72 (2003), 1935-1946
MSC (2000): Primary 11Y40, 11E16, 11R11
Published electronically: April 29, 2003
MathSciNet review: 1986813
Abstract: This note is a detailed explanation of Shanks-Atkin NUCOMP--composition and reduction carried out ``simultaneously''--for all quadratic fields, that is, including real quadratic fields.
That explanation incidentally deals with various ``exercises'' left for confirmation by the reader in standard texts. Extensive testing in both the numerical and function field cases by Michael J
Jacobson, Jr, reported elsewhere, confirms that NUCOMP as here described is in fact efficient for composition both of indefinite and of definite forms once the parameters are large enough to
compensate for NUCOMP's extra overhead. In the numerical indefinite case that efficiency is a near doubling in speed already exhibited for discriminants as small as
• 1. A. O. L. Atkin, Letter to Dan Shanks on the programs NUDUPL and NUCOMP, 12 December 1988; from the Nachlaß of D. Shanks and made available to me by Hugh C. Williams.
• 2. Duncan A. Buell, Binary quadratic forms, Springer-Verlag, New York, 1989. Classical theory and modern computations. MR 1012948 (92b:11021)
• 3. Henri Cohen, A course in computational algebraic number theory, Graduate Texts in Mathematics, vol. 138, Springer-Verlag, Berlin, 1993. MR 1228206 (94i:11105)
• 4. Felix Klein, Elementary mathematics from an advanced standpoint: Geometry, reprint (New York: Dover, 1939); see §IIff.
• 5. Hermann Grassmann, A new branch of mathematics, Open Court Publishing Co., Chicago, IL, 1995. The Ausdehnungslehre of 1844 and other works; Translated from the German and with a note by Lloyd
C. Kannenberg; With a foreword by Albert C. Lewis. MR 1637704 (99e:01015)
• 6. W. V. D. Hodge and D. Pedoe, Methods of algebraic geometry. Vol. I, Cambridge Mathematical Library, Cambridge University Press, Cambridge, 1994. Book I: Algebraic preliminaries; Book II:
Projective space; Reprint of the 1947 original. MR 1288305 (95d:14002a)
• 7. Michael J Jacobson Jr and Alfred J van der Poorten, ``Computational aspects of NUCOMP'', to appear in Claus Fieker and David Kohel eds, Algorithmic Number Theory (Proc. Fifth International
Symposium, ANTS-V, Sydney, NSW, Australia July 2002), Springer Lecture Notes in Computer Science 2369 (2002), 120-133.
• 8. H. W. Lenstra Jr., On the calculation of regulators and class numbers of quadratic fields, Number theory days, 1980 (Exeter, 1980) London Math. Soc. Lecture Note Ser., vol. 56, Cambridge Univ.
Press, Cambridge, 1982, pp. 123–150. MR 697260 (86g:11080)
• 9. Daniel Shanks, Class number, a theory of factorization, and genera, 1969 Number Theory Institute (Proc. Sympos. Pure Math., Vol. XX, State Univ. New York, Stony Brook, N.Y., 1969), Amer. Math.
Soc., Providence, R.I., 1971, pp. 415–440. MR 0316385 (47 #4932)
• 10. Daniel Shanks, On Gauss and composition. I, II, Number theory and applications (Banff, AB, 1988) NATO Adv. Sci. Inst. Ser. C Math. Phys. Sci., vol. 265, Kluwer Acad. Publ., Dordrecht, 1989,
pp. 163–178, 179–204. MR 1123074 (92e:11150)
Additional Information
Alfred J. van der Poorten
Affiliation: ceNTRe for Number Theory Research, 1 Bimbil Pl. Killara, New South Wales 2071, Australia
Email: alf@math.mq.edu.au
DOI: http://dx.doi.org/10.1090/S0025-5718-03-01518-7
PII: S 0025-5718(03)01518-7
Keywords: Binary quadratic form, composition
Received by editor(s): January 10, 2002
Published electronically: April 29, 2003
Additional Notes: The author was supported in part by a grant from the Australian Research Council
Article copyright: © Copyright 2003 American Mathematical Society
|
{"url":"http://www.ams.org/journals/mcom/2003-72-244/S0025-5718-03-01518-7/","timestamp":"2014-04-21T05:02:00Z","content_type":null,"content_length":"29273","record_id":"<urn:uuid:6f186584-9edd-4dc1-8c1b-002e0c31a022>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00492-ip-10-147-4-33.ec2.internal.warc.gz"}
|
World Urbanization Prospects, the 2011 Revision
Glossary of Demographic Terms
Total population: De facto population in a country, area or region as of 1 July of the year indicated. Figures are presented in thousands.
Urban population: De facto population living in areas classified as urban according to the criteria used by each area or country. Data refer to 1 July of the year indicated and are presented in thousands.
Rural population: De facto population living in areas classified as rural. Data refer to 1 July of the year indicated and are presented in thousands.
Percentage urban: Urban population as a percentage of the total population.
Percentage rural: Rural population as a percentage of the total population.
Total annual growth rate: See Average annual rate of change of the total population.
Urban annual growth rate: See Average annual rate of change of the urban population.
Rural annual growth rate: See Average annual rate of change of the rural population.
Average annual rate of change of the total population: Average exponential rate of growth of the population over a given period. It is calculated as ln(Pt/P0)/n, where n is the length of the period and P is the population. It is expressed as a per cent.
Average annual rate of change of the urban population: Average exponential rate of growth of the urban population over a given period. It is calculated as ln(UPt/UP0)/n, where n is the length of the period and UP is the urban population. It is expressed as a per cent.
Average annual rate of change of the rural population: Average exponential rate of growth of the rural population over a given period. It is calculated as ln(RPt/RP0)/n, where n is the length of the period and RP is the rural population. It is expressed as a per cent.
Average annual rate of change of the percentage urban: Average exponential rate of change of the percentage urban over a given period. It is calculated as ln(PUt/PU0)/n, where n is the length of the period and PU is the percentage urban. It is expressed as a per cent.
Average annual rate of change of the percentage rural: Average exponential rate of change of the percentage rural over a given period. It is calculated as ln(PRt/PR0)/n, where n is the length of the period and PR is the percentage rural. It is expressed as a per cent.
Urban agglomeration: Refers to the de facto population contained within the contours of a contiguous territory inhabited at urban density levels without regard to administrative boundaries. It usually incorporates the population in a city or town plus that in the suburban areas lying outside of, but adjacent to, the city boundaries.
Metropolitan area: Includes both the contiguous territory inhabited at urban levels of residential density and additional surrounding areas of lower settlement density that are also under the direct influence of the city (e.g., through frequent transport, road linkages, commuting facilities, etc.).
City proper: A locality defined according to legal/political boundaries and an administratively recognized urban status that is usually characterized by some form of local government.
Capital cities: The designation of any specific city as a capital city is done solely on the basis of the designation as reported by the country or area. The city can be the seat of the government as determined by the country. A few countries designate more than one city as a capital city, each with a specific function (e.g., administrative and/or legislative capital).
Urban agglomerations annual growth rate: See Average annual rate of change of urban agglomerations.
Average annual rate of change of urban agglomerations: Average exponential rate of growth of the population of urban agglomerations over a given period. It is calculated as ln(PUAt/PUA0)/n, where n is the length of the period and PUA is the population of urban agglomerations. It is expressed as a per cent.
Percentage of the urban population residing in urban agglomerations: Population residing in urban agglomerations as a percentage of the total urban population.
Percentage of the total population residing in urban agglomerations: Population residing in urban agglomerations as a percentage of the total population.
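As a worked illustration of the rate formula above (the figures are hypothetical, not taken from the revision): if an urban population grows from 10.0 million to 13.5 million over a 10-year period, the average annual rate of change is ln(13.5/10.0)/10, or about 3.0 per cent per year. A minimal sketch in Python:

```python
from math import log

def average_annual_rate_of_change(p0, pt, years):
    # ln(Pt/P0)/n, expressed as a per cent, following the glossary definition.
    return 100 * log(pt / p0) / years

# Hypothetical example: urban population rising from 10.0 to 13.5 million over 10 years.
print(round(average_annual_rate_of_change(10.0, 13.5, 10), 2))  # about 3.0 per cent
```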
|
{"url":"http://esa.un.org/unup/Documentation/WUP_glossary.htm","timestamp":"2014-04-20T05:43:25Z","content_type":null,"content_length":"30920","record_id":"<urn:uuid:dd8aa279-f88e-4ff9-9bbf-307f082c58d5>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00330-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Approximate Cloaking Using Transformation Optics and Negative Index Materials
Tuesday, November 27, 2012 at 11:00 am
LA 511
Speaker: Hoai-Minh Nguyen
Affiliation: University of Minnesota
Applied and Interdisciplinary Mathematics Seminar
Cloaking recently attracts a lot of attention from the scientific community due to the progress of advanced technology. There are several ways to do cloaking. Two of them are based on transformation
optics and negative index materials. Cloaking based on transformation optics was suggested by Pendry and Leonhardt using transformations which blow up a point into the cloaked regions. The same
transformations had previously used by Greenleaf et al. to establish the non-uniqueness for Calderon’s inverse problem. These transformations are singular and hence create a lot of difficulty in
analysis and practical applications. The second method of cloaking is based on the peculiar properties of negative index materials. It was proposed by Lai et al. and inspired from the concept of
complementary media due to Pendry and Ramakrishna. In this talk, I will discuss approximate cloaking using these two methods. Concerning the first one, I will consider the situation, first proposed
in the work of Kohn et al., where one uses transformations which blow up a small ball (instead of a point) into cloaked regions. Many interesting issues such as finite energy and resonance will be
mentioned. Concerning the second method, I provide the (first) rigorous analysis for cloaking using negative index materials by investigating the situation where the loss (damping) parameter goes to
0. I will also explain how the arguments can be used not only to establish rigorous results for other interesting related phenomena involving negative index materials, such as superlensing and illusion optics, but
also to shed light on the mechanism behind these phenomena.
|
{"url":"http://www.northeastern.edu/physics/event/approximate-cloaking-using-tranformation-optics-and-negative-index-materials/","timestamp":"2014-04-20T21:37:18Z","content_type":null,"content_length":"20256","record_id":"<urn:uuid:6aa715e6-dd97-4330-a0fd-d7953df82670>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00414-ip-10-147-4-33.ec2.internal.warc.gz"}
|
What Is Ultimately Possible in Physics?
Written for the "What's Ultimately Possible in Physics?" Fall 2009 FQXi Essay Contest.
This essay uses insights from studying the computational universe to explore questions about possibility and impossibility in the physical universe and in physical theories. It explores the ultimate
limits of technology and of human experience, and their relation to the features and consequences of ultimate theories of physics.
The history of technology is littered with examples of things that were claimed to be impossible—but later done. So what is genuinely impossible in physics? There is much that we will not know about
the answer to this question until we know the ultimate theory of physics. And even when we do—assuming it is possible to find it—it may still often not be possible to know what is possible.
Let's start, though, with the simpler question of what is possible in mathematics.
In the history of mathematics, particularly in the 1800s, many "impossibility results" were found [1, p. 1137]. Squaring the circle. Trisecting an angle. Solving a quintic equation. But these were
not genuine impossibilities. Instead, they were in a sense only impossibilities at a certain level of mathematical technology.
It is true, for example, that it is impossible to solve any quintic—if one is only allowed to use square roots and other radicals. But it is perfectly possible to write down a finite formula for the
solution to any quintic in terms, say, of elliptic functions [2]. And indeed, by the early 1900s, there emerged the view that there would ultimately be no such impossibilities in mathematics. And
that instead it would be possible to build more and more sophisticated formal structures that would eventually allow any imaginable mathematical operation to be done in some finite way.
Yes, one might want to deal with infinite series or infinite sets. But somehow these could be represented symbolically, and everything about them could be worked out in some finite way.
In 1931, however, it became clear that this was not correct. For Gödel's theorem [3] showed that in a sense mathematics can never be reduced to a finite activity. Starting from the standard axiom
system for arithmetic and basic number theory, Gödel's theorem showed that there are questions that cannot be guaranteed to be answered by any finite sequence of mathematical steps—and that are
therefore "undecidable" with the axiom system given.
One might still have thought that the problem was in a sense one of "technology": that one just needed stronger axioms, and then everything would be possible. But Gödel's theorem showed that no
finite set of axioms can ever be added to cover all possible questions within standard mathematical theories.
At first, it wasn't clear how general this result really was. There was a thought that perhaps something like a transfinite sequence of theories could exist that would render everything possible—and
that perhaps this might even be how human minds work.
But then in 1936 along came the Turing machine [4], and with it a new understanding of possibility and impossibility. The key was the notion of universal computation: the idea that a single universal
Turing machine could be fed a finite program that would make it do anything that any Turing machine could do.
In a sense this meant that however sophisticated one's Turing machine technology might be, one would never be able to go beyond what any Turing machine that happened to be universal can do. And so if
one asked a question, for example, about what the behavior of a Turing machine could be after an infinite time (say, does the machine ever reach a particular "halt" state), there might be no possible
systematically finite way to answer that question, at least with any Turing machine.
But what about something other than a Turing machine?
Over the course of time, various other models of computational processes were proposed. But the surprising point that gradually emerged was that all the ones that seemed at all practical were
ultimately equivalent. The original mathematical axiom system used in Gödel's theorem was also equivalent to a Turing machine. And so were all other reasonable models of what might constitute not
only a computational process, but also a way to set up mathematics.
There may be some quite different way to set up a formal system than the way it is done in mathematics. But at least within mathematics as we currently define it, we can explicitly prove that there
are impossibilities. We can prove that there are things that are genuinely infinite, and cannot meaningfully be reduced to something finite.
We know, for example, that there are polynomial equations involving integers where there is no finite mathematical procedure that will always determine whether the equations have solutions [5]. It is
not—as with the ordinary quintic equation—that with time some more sophisticated mathematical technology will be developed that allows solutions to be found. It is instead that within mathematics as
an axiomatic system, it is simply impossible for there to be a finite general procedure.
So in mathematics there is in a sense "genuine impossibility".
Somewhat ironically, however, mathematics as a field of human activity tends to have little sense of this. And indeed there is a general belief in mathematics—much more so than in physics—that with
time essentially any problem of "mathematical interest" will be solved.
A large part of the reason for this belief is that known examples of undecidable—or effectively impossible—problems tend to be complicated and contrived, and seem to have little to do with problems
that could be of mathematical interest. My own work [1] in exploring generalizations of mathematics gives strong evidence that undecidability is actually much closer at hand—and that in fact its
apparent irrelevance is merely a reflection of the narrow historical path that mathematics as a field has followed [1, sect. 12.9]. In a sense, the story is always the same—and to understand it sheds
light on some of what might be impossible in physics. The issue is computation universality. Just where is the threshold for computation universality?
For once it is possible to achieve computation universality within a particular type of system or problem, it follows that the system or problem is in a sense as sophisticated as any other—and it is
impossible to simplify it in any general way. And what I have found over and over again is that universality—and traces of it—occur in vastly simpler systems and problems than one might ever have
imagined [1, chap. 11; 6; 7].
Indeed, my guess is that a substantial fraction of the famous unsolved problems in mathematics today are not unsolved because of a lack of mathematical technology—but because they are associated with
universality, and so are fundamentally impossible to solve.
But what of physics?
Is there a direct correspondence of mathematical impossibility with physical impossibility? The answer is that it depends what physics is made of. If we can successfully reduce all of physics to
mathematics, then mathematical impossibility in a sense becomes physical impossibility.
In the first few decades of the modern study of computation, the various models of computation that were considered were thought of mainly as representing processes—mechanical, electronic or
mathematical—that a human engineer or mathematician might set up. But particularly with the rise of models like cellular automata (e.g. [8]), the question increasingly arose of how these models—and
computational processes they represent—might correspond to the actual operation of physics.
The traditional formulation of physics in terms of partial differential equations—or quantized fields—makes it difficult to see a correspondence. But the increasing implementation of physical models
on computers has made the situation somewhat clearer.
There are two common technical issues. The first is that traditional physics models tend to be formulated in terms of continuous variables. The second is that traditional physics models tend not to
say directly how a system should behave—but instead just to define an equation which gives a constraint on how the system should behave.
In modern times, good models of physical systems have often been found (e.g. [1, chap. 8]) that are more obviously set up like traditional digital computations—with discrete variables, and explicit
progression with time. But even traditional physical models are in many senses computational. For we know that even though there are continuous variables and equations to solve, there is an immense
amount that we can work out about traditional physical models using, for example, Mathematica [9].
Mathematica obviously runs on an ordinary digital computer. But the point is that it can symbolically represent the entities in physical models. There can be a variable x that represents a continuous
position, but to Mathematica it is just a finitely represented symbol, that can be manipulated using finite computational operations.
There are certainly questions that cannot obviously be answered by operating at a symbolic level—say about the precise location of some idealized particle represented by a real number. But when we
imagine constructing an experiment or an apparatus, we specify it in a finite, symbolic way. And we might imagine that then we could answer all questions about its behavior by finite computational operations.
But this is undoubtedly not so. For it seems inevitable that within standard physical theories there is computation universality. And the result is that there will be questions that are impossible to
answer in any finite way. Will a particular three-body gravitational system (or an idealized solar system) be stable forever? Or have some arbitrarily complicated form of instability?
Of course, it could be even worse.
If one takes a universal Turing machine, there are definite kinds of questions that cannot in general be answered about it—an example being whether it will ever reach a halt state from a given input.
But at an abstract level, one can certainly imagine constructing a device that can answer such questions: doing some form of "hypercomputation" (e.g. [10, 11]). And it is quite straightforward to
construct formal theories of whole hierarchies of such hypercomputations.
The way we normally define traditional axiomatic mathematics, such things are not part of it. But could they be part of physics? We do not know for sure. And indeed within traditional mathematical
models of physics, it is a slippery issue.
In ordinary computational models like Turing machines, one works with a finite specification for the input that is given. And so it is fairly straightforward to recognize when some long and
sophisticated piece of computational output can really be attributed to the operation of the system, and when it has somehow been slipped into the system through the initial conditions for the system.
But traditional mathematical models of physics tend to have parameters that are specified in terms of real numbers. And in the infinite sequence of digits in a precise real number, one can in
principle pack all sorts of information—including, for example, tables of results that are beyond what a Turing machine can compute. And by doing this, it is fairly easy to set things up so that
traditional mathematical models of physics appear to be doing hypercomputation.
But can this actually be achieved with anything like real, physical, components?
I doubt it. For if one assumes that any device one builds, or any experiment one does, must be based on a finite description, then I suspect that it will never be possible to set up hypercomputation
within traditional physical models [1, sect. 12.4 and notes].
In systems like Turing machines, there is a certain robustness and consistency to the notion of computation. Large classes of models, initial conditions and other setups are equivalent at a
computational level. But when hypercomputation is present, details of the setup tend to have large effects on the level of computation that can be reached, and there do not seem to be stable answers
to questions about what is possible and not.
In traditional mathematical approaches to physics, we tend to think of mathematics as the general formalism, which in some special case applies to physics. But if there is hypercomputation in
physics, it implies that in a sense we can construct physical tools that give us a new level of mathematics—and that answer problems in mathematics, though not by using the formalism of mathematics.
And while at every level there are analogs of Gödel's theorem, the presence of hypercomputation in physics would in a sense overcome impossibilities in mathematics, for example giving us ways to
solve all integer equations.
So could this be how our universe actually works?
From existing models in physics we do not know. And we will not ultimately know until we have a fundamental theory of physics.
Is it even possible to find a fundamental theory of physics? Again, we do not know for sure. It could be—a little like in hypercomputation—that there will never be a finite description for how the
universe works. But it is a fundamental observation—really the basis for all of natural science—that the universe does show order, and does appear to follow definite laws.
Is there in a sense some complete set of laws that provide a finite description for how the whole universe works? We will not know for sure until or unless we find that finite description—the
ultimate fundamental theory.
One can argue about what that theory might be like. Is it perhaps finite, but very large, like the operating system of one of today's computers? Or is it not only finite, but actually quite small,
like a few lines of computer code? We do not yet know.
Looking at the complexity and richness of the physical universe as we now experience it, we might assume that a fundamental theory—if it exists—would have to reflect all that complexity and richness,
and itself somehow be correspondingly complex. But I have spent many years studying what is in effect a universe of possible theories—the computational universe of simple programs. And one of the
clear conclusions is that in that computational universe it is easy to find immense complexity and richness, even among extremely short programs with extremely simple structure [1].
Will we actually be able to find our physical universe in this computational universe of possible universes? I am not sure. But certainly it is not obvious that we will not be able to do so. For
already in my studies of the computational universe, I have found candidate universes that I cannot exclude as possible models of our physical universe (e.g. [12, 13]).
If indeed there is a small ultimate model of our physical universe, it is inevitable that very few familiar features of our universe as we normally experience it will be visible in that model [1,
sect. 9.5]. For in a small model, there is in a sense no room to specify, say, the number of dimensions of space, the conservation of energy or the spectrum of particles. Nor probably is there any
room to have anything that corresponds directly to our normal notion of space or time [1, sects. 9.6–9.11].
Quite what the best representation for the model should be I am not sure. And indeed it is inevitable that there will be many seemingly quite different representations that only with some effort can
be shown to be equivalent.
A particular representation that I have studied involves setting up a large number of nodes, connected in a network, and repeatedly updated according to some local rewrite rule [1, chap. 9]. Within
this representation, one can in effect just start enumerating possible universes, specifying their initial conditions and updating rules. Some candidate universes are very obviously not our physical
universe. They have no notion of time, or no communication between different parts, or an infinite number of dimensions of space, or some other obviously fatal pathology.
But it turns out that there are large classes of candidate universes that already show remarkably suggestive features. For example, any universe that has a notion of time with a certain robustness
property turns out in an appropriate limit to exhibit special relativity [1, sect. 9.13]. And even more significantly, any universe that exhibits a certain conservation of finite dimensionality—as
well as generating a certain level of effective microscopic randomness—will lead on a large scale to spacetime that follows Einstein's equations for general relativity [1, sect. 9.15].
It is worth emphasizing that the models I am discussing are in a sense much more complete than models one usually studies in physics. For traditionally in physics, it might be considered quite
adequate to find equations one of whose solutions successfully represents some feature of the universe. But in the models I have studied the concept is to have a formal system which starts from a
particular initial state, then explicitly evolves so as to reproduce in every detail the precise evolution of our universe.
One might have thought that such a deterministic model would be excluded by what we know of quantum mechanics. But in fact the detailed nature of the model seems to make it quite consistent with
quantum mechanics. And for example its network character makes it perfectly plausible to violate Bell's inequalities at the level of a large-scale limit of three-dimensional space [1, sect. 9.16].
So if in fact it turns out to be possible to find a model like this for our universe, what does it mean?
In some sense it reduces all of physics to mathematics. To work out what will happen in our universe becomes like working out the digits of pi: it just involves progressively applying some particular
known algorithm.
Needless to say, if this is how things work, we will have immediately established that hypercomputation does not happen in our universe. And instead, only those things that are possible for standard
computational systems like Turing machines can be possible in our universe.
But this does not mean that it is easy to know what is possible in our universe. For this is where the phenomenon of computational irreducibility [1, sect. 12.6] comes in.
When we look at the evolution of some system—say a Turing machine or a cellular automaton—the system goes through some sequence of steps to determine its outcome. But we can ask whether perhaps there
is some way to reduce the computational effort needed to find that outcome—some way to computationally reduce the evolution of the system.
And in a sense much of traditional theoretical physics has been based on the assumption that such computational reduction is possible. We want to find ways to predict how a system will behave,
without having to explicitly trace each step in the actual evolution of the system.
But for computational reduction to be possible, it must in a sense be the case that the entity working out how a system will behave is computationally more sophisticated than the system itself.
In the past, it might not have seemed controversial to imagine that humans, with all their intelligence and mathematical prowess, would be computationally more sophisticated than systems in physics.
But from my work on the computational universe, there is increasing evidence for a general Principle of Computational Equivalence [1, chap. 12], which implies that even systems with very simple rules
can have the same level of computational sophistication as systems constructed in arbitrarily complex ways.
And the result of this is that many systems will exhibit computational irreducibility, so that their processes of evolution cannot be "outrun" by other systems—and in effect the only way to work out
how the systems behave is to watch their explicit evolution.
This has many implications—not the least of which is that it can make it very difficult even to identify a fundamental theory of physics.
For let us say that one has a candidate theory—a candidate program for the universe. How can we find out whether that program actually is the program for our universe? If we just start running the
program, we may quickly see that its behavior is simple enough that we can in effect computationally reduce it—and readily prove that it is not our universe.
But if the behavior is complex—and computationally irreducible—we will not be able to do this. And indeed as a practical matter in actually searching for a candidate model for our universe, this is a
major problem. And all one can do is to hope that there is enough computational reducibility that one manages to identify known physical laws within the model universe.
It helps that if the candidate models for the universe are simple enough, then there will in a sense always be quite a distance from one model to another—so that successive models will tend to show
very obviously different behavior. And this means that if a particular model reproduces any reasonable number of features of our actual universe, then there is a good chance that within the class of
simple models, it will be essentially the only one that does so.
But, OK. Let us imagine that we have found an ultimate model for the universe, and we are confident that it is correct. Can we then work out what will be possible in the universe, and what will not?
Typically, there will be certain features of the universe that will be associated with computational reducibility, and for which we will readily be able to identify simple laws that define what is
possible, and what is not.
Perhaps some of these laws will correspond to standard symmetries and invariances that have already been found in physics. But beyond these reducible features, there lies an infinite frontier of
computational irreducibility. If we in effect reduce physics to mathematics, we still have to contend with phenomena like Gödel's theorem. So even given the underlying theory, we cannot work out all
of its consequences.
If we ask a finite question, then at least in principle there will be a finite computational process to answer that question—though in practice we might be quite unable to run it. But to know what is
possible, we also have to address questions that are in some sense not finite.
Imagine that we want to know whether macroscopic spacetime wormholes are possible.
It could be that we can use some computationally reducible feature of the universe to answer this.
But it could also be that we will immediately be confronted with computational irreducibility—and that our only recourse will for example be to start enumerating configurations of material in the
universe to see if any of them end up evolving to wormholes. And it could even be that the question of whether any such configuration—of any size—exists could be formally undecidable, at least in an
infinite universe.
But what about all those technologies that have been discussed in science fiction?
Just as we can imagine enumerating possible universes, so also we can imagine enumerating possible things that can be constructed in a particular universe. And indeed from our experience in exploring
the computational universe of simple programs, we can expect that even simple constructions can readily lead to things with immensely rich and complex behavior.
But when do those things represent useful pieces of technology?
In a sense, the general problem of technology is to find things that can be constructed in nature, and then to match them with human purposes that they can achieve (e.g. [1, sects. 9.11 and 9.10]).
And usually when we ask whether a particular type of technology is possible, what we are effectively asking is whether a particular type of human purpose can be achieved in practice. And to know this
can be a surprisingly subtle matter, which depends almost as much on understanding our human context as it does on understanding features of physics.
Take for example almost any kind of transportation.
Earlier in human history, pretty much the only way to imagine that one would successfully achieve the purpose of transporting anything would be explicitly to move the thing from one place to another.
But now there are many situations where what matters to us as humans is not the explicit material content of a thing, but rather the abstract information that represents it. And it is usually much
easier to transport that information, often at the speed of light.
So when we say "will it ever be possible to get from here to there at a certain speed" we need to have a context for what would need to be transported. In the current state of human evolution, there
is much that we do that can be represented as pure information, and readily transported. But we ourselves still have a physical presence, whose transportation seems like a different issue.
No doubt, though, we will one day master the construction of atomic-scale replicas from pure information. But more significantly, perhaps our very human existence will increasingly become purely
informational—at which point the notion of transportation changes, so that just transporting information can potentially entirely achieve our human purposes.
There are different reasons for saying that things are impossible.
One reason is that the basic description of what should be achieved makes no sense. For example, if we ask "can we construct a universe where 2 + 2 = 5?", this makes no sense. From the very meaning
of the symbols in 2 + 2 = 5, we can deduce that it can never be satisfied, whatever universe we are in.
There are other kinds of questions where at least at first the description seems to make no sense.
Like "is it possible to create another universe?" Well, if the universe is defined to be everything, then by definition the answer is obviously "no". But it is certainly possible to create
simulations of other universes; indeed, in the computational universe of possible programs we can readily enumerate an infinite number of possible universes.
For us as physical beings, however, these simulations are clearly different from our actual physical universe. But consider a time in the future when the essence of the human condition has been
transferred to purely informational form. At that time, we can imagine transferring our experience to some simulated universe, and in a sense existing purely within it—just as we now exist within our
physical universe.
And from this future point of view, it will then seem perfectly possible to create other universes.
So what about time travel? There are also immediate definitional issues here. For at least if the universe has a definite history—with a single thread of time—the effect of any time travel into the
past must just be reflected in the whole actual history that the universe exhibits.
We can often describe traditional physical models—for example for the structure of spacetime—by saying that they determine the future of a system from its past. But ultimately such models are just
equations that connect different parameters of a system. And there may well be configurations of the system in which the equations cannot readily be seen just as determining the future from the past.
Quite which pathologies can occur with particular kinds of setups may well be undecidable, but when it seems that the future affects the past what is really being said is just that the underlying
equations imply certain consistency conditions across time. And when one thinks of simple physical systems, such consistency conditions do not seem especially remarkable. But when one combines them
with human experience—with its features of memory and progress—they seem more bizarre and paradoxical.
In some ancient time, one might have imagined that time travel for a person would consist of projecting them—or some aspect of them—far into the future. And indeed today when one sees writings and
models that were constructed thousands of years ago for the afterlife, there is a sense in which that conception of time travel has been achieved.
And similarly, when one thinks of the past, the increasing precision with which molecular archaeology and the like can reconstruct things gives us something which at least at some time in history
would have seemed tantamount to time travel.
Indeed, at an informational level—but for the important issue of computational irreducibility—we could reasonably expect to reconstruct the past and predict the future. And so if our human existence
was purely informational, we would in some sense freely be able to travel in time.
The caveat of computational irreducibility is a crucial one, however, that affects the possibility of many kinds of processes and technologies.
We can ask, for example, whether it will ever be possible to do something like unscramble an egg, or in general in some sense to reverse time. The second law of thermodynamics has always suggested
the impossibility of such things.
In the past, it was not entirely clear just what the fundamental basis for the second law might be. But knowing about computational irreducibility, we can finally see a solid basis for it [1, sect.
9.3]. The basic idea is just that in many systems the process of evolution through time in effect so "encrypts" the information associated with the initial conditions for the system that no feasible
measurement or other process can recognize what they were. So in effect, it would take a Maxwell's demon of immense computational power to unscramble the evolution.
In practice, however, as the systems we use for technology get smaller, and our practical powers of computation get larger, it is increasingly possible to do such unscrambling. And indeed that is the
basis for a variety of important control systems and signal processing technologies that have emerged in recent years.
The question of just what kinds of effective reversals of time can be achieved by what level of technology depends somewhat on theoretical questions about computation. For example, if it is true that
P != NP, then certain questions about possible reversals will necessarily require immense computational resources.
There are many questions about what is possible that revolve around prediction.
Traditional models in physics tend to deny the possibility of prediction for two basic reasons. The first is that the models are usually assumed to be somehow incomplete, so that the systems they
describe are subject to unknown—and unpredictable—effects from the outside. The second reason is quantum mechanics—which in its traditional formulation is fundamentally probabilistic.
Quite what happens even in a traditional quantum formulation when one tries to describe a whole sequence from the construction of an experiment to the measurement of its results has never been
completely clear. And for example it is still not clear whether it is possible to generate a perfectly random sequence—or whether in effect the operation of the preparation and measurement apparatus
will always prevent this [1, p. 1062]. But even if—as in candidate models of fundamental physics that I have investigated—there is no ultimate randomness in quantum mechanics, there is still another
crucial barrier to prediction: computational irreducibility.
One might have thought that in time there would be some kind of acceleration in intelligence that would allow our successors to predict anything they want about the physical universe.
But computational irreducibility implies that there will always be limitations. There will be an infinite number of pockets of reducibility where progress can be made. But ultimately the actual
evolution of the universe in a sense achieves something irreducible—which can only be observed, not predicted.
What if perhaps there could be some collection of extraterrestrial intelligences around the universe who combine to try to compute the future of the universe?
We are proud of the computational achievements of our intelligence and our civilization. But what the Principle of Computational Equivalence implies is that many processes in nature are ultimately
equivalent in their computational sophistication. So in a sense the universe is already as intelligent as we are, and whatever we develop in our technology cannot overcome that [1, sects. 9.10 and
9.12]. It is only that with our technology we guide the universe in ways that we can think of as achieving our particular purposes.
However, if it turns out—as I suspect—that the whole history of the universe is determined by a particular, perhaps simple, underlying rule, then we are in a sense in an even more extreme situation.
For there is in a sense just one possible history for the universe. So at some level this defines all that is possible. But the point is that to answer specific questions about parts of this history
requires irreducible computational work—so that in a sense there can still be essentially infinite amounts of surprise about what is possible, and we can still perceive that we act with free will [1,
sect. 12.7].
So what will the limit of technology in the future be like?
Today almost all the technology we have has been created through traditional methods of engineering: by building up what is needed one step at a time, always keeping everything simple enough that we
can foresee what the results will be.
But what if we just searched the computational universe for our technology? One of the discoveries from exploring the computational universe is that even very simple programs can exhibit rich and
complex behavior. But can we use this for technology?
The answer, it seems, is often yes. The methodology for doing this is not yet well known. But in recent years my own technology development projects [9, 14, 15] have certainly made increasingly
central use of this approach.
One defines some particular objective—say generating a hash code, evaluating a mathematical function, creating a musical piece or recognizing a class of linguistic forms. Then one searches the
computational universe for a program that achieves the objective. It might be that the simplest program that would be needed would be highly complex—and out of reach of enumerative search methods.
But the Principle of Computational Equivalence suggests that this will tend not to be the case—and in practice it seems that it is not.
And indeed one often finds surprisingly simple programs that achieve all sorts of complex purposes.
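As a toy illustration of this style of search (not the actual procedure used in the projects cited above, just a minimal sketch in Python): enumerate the 256 elementary cellular automaton rules, run each from a single black cell, and keep the rules whose center column comes out roughly half black and half white, a crude stand-in for a "generate something random-looking" objective.

```python
def step(cells, rule):
    # One step of an elementary cellular automaton with the given rule number (0-255).
    n = len(cells)
    return [(rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
            for i in range(n)]

def center_column_density(rule, width=301, steps=150):
    # Fraction of black cells in the center column, starting from a single black cell.
    cells = [0] * width
    cells[width // 2] = 1
    ones = 0
    for _ in range(steps):
        ones += cells[width // 2]
        cells = step(cells, rule)
    return ones / steps

# Keep rules whose center column is between 40% and 60% black.
candidates = [rule for rule in range(256) if 0.4 < center_column_density(rule) < 0.6]
print(candidates)  # rules such as rule 30 typically pass this crude test
```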
Unlike things created by traditional engineering, however, there is no constraint that these programs operate in ways that we as humans can readily understand. And indeed it is common to find that
they do not. Instead, in a sense, they tend to operate much more like many systems in nature—that we can describe as achieving a certain overall purpose, but can't readily understand how they do it.
Today's technology tends at some level to look very regular—to exhibit simple geometrical or informational motifs, like rotary motion or iterative execution. But technology that is "mined" from the
computational universe will usually not show such simplicity. It will look much more like many systems in nature—and operate in a sense much more efficiently with its resources, and much closer to
computational irreducibility.
The fact that a system can be described as achieving some particular purpose by definition implies a certain computational reducibility in its behavior.
But the point is that as technology advances, we can expect to see less and less computational reducibility that was merely the result of engineering or historical development—and instead to see more
and more perfect computational irreducibility.
It is in a sense a peculiar situation, forced on us by the Principle of Computational Equivalence. We might have believed that our own intelligence, our technology and the physical universe we
inhabit would all have different levels of computational sophistication.
But the Principle of Computational Equivalence implies that they do not. So even though we may strive mightily to create elaborate technology, we will ultimately never be able give it any
fundamentally greater level of computational sophistication. Indeed, in a sense all we will ever be able to do is to equal what already happens in nature.
And this kind of equivalence has fundamental implications for what we will consider possible.
Today we are in the early stages of merging our human intelligence and existence with computation and technology. But in time this merger will no doubt be complete, and our human existence will in a
sense be played out through our technology. Presumably there will be a progressive process of optimization—so that in time the core of our thoughts and activities will simply consist of some
complicated patterns of microscopic physical effects.
But looking from outside, a great many systems in nature similarly show complicated patterns of microscopic physical effects. And what the Principle of Computational Equivalence tells us is that
there can ultimately be no different level of computational sophistication in the effects that are the result of all our civilization and technology development—and effects that just occur in nature.
We might think that processes corresponding to future human activities would somehow show a sense of purpose that would not be shared by processes that just occur in nature. But in the end, what we
define as purpose is ultimately just a feature of history—defined by the particular details of the evolution of our civilization (e.g. [1, sect. 12.2 and notes]).
We can certainly imagine in some computational way enumerating all possible purposes—just as we can imagine enumerating possible computational or physical or biological systems. So far in human
history we have pursued only a tiny fraction of all possible purposes. And perhaps the meaningful future of our civilization will consist only of pursuing some modest extrapolation of what we have
pursued so far.
So which of our purposes can we expect to achieve in the physical universe? The answer, I suspect, is that once our existence is in effect purely computational, we will in a sense be able to program
things so as to achieve a vast range of purposes. Today we have a definite, fixed physical existence. And to achieve a purpose in our universe we must mold physical components to achieve that
purpose. But if our very existence is in effect purely computational, we can expect not only to mold the outside physical universe, but also in a sense to mold our own computational construction.
The result is that what will determine whether a particular purpose can be achieved in our universe will more be general abstract issues like computational irreducibility than issues about the
particular physical laws of our universe. And there will certainly be some purposes that we can in principle define, but which can never be achieved because they require infinite amounts of
irreducible computation.
In our science, technology and general approach to rational thinking, we have so far in our history tended to focus on purposes which are not made impossible by computational irreducibility—though we
may not be able to see how to achieve them with physical components in the context of our current existence. As we extrapolate into the future of our civilization, it is not clear how our purposes
will evolve—and to what extent they will become enmeshed with computational irreducibility, and therefore seem possible or not.
So in a sense what we will ultimately perceive as possible in physics depends more on the evolution of human purposes than it does on the details of the physical universe. In some ways this is a
satisfying result. For it suggests that we will ultimately never be constrained in what we can achieve by the details of our physical universe. The constraints on our future will not be ones of
physics, but rather ones of a deeper nature. It will not be that we will be forced to progress in a particular direction because of the specific details of the particular physical universe in which
we live. But rather—in what we can view as an ultimate consequence of the Principle of Computational Equivalence—the constraints on what is possible will be abstract features of the general
properties of the computational universe. They will not be a matter of physics—but instead of the general science of the computational universe.
About the Author
Stephen Wolfram is the CEO of Wolfram Research, the creator of Mathematica and Wolfram|Alpha, and the author of A New Kind of Science. Long ago he was officially a physicist, receiving his PhD from
Caltech in 1979 (at the age of 20). Many of his early papers on particle physics and cosmology continue to fare well. Every few years he makes an effort to continue his approach to finding the
fundamental theory of physics; his effort-before-last is in Chapter 9 of A New Kind of Science. Today he noticed the title of this essay competition, and this evening decided to have some fun writing
the essay here—perhaps as a procrastination for what he should have been doing on Wolfram|Alpha during that time.
1. S. Wolfram, A New Kind of Science, Wolfram Media, 2002.
2. Wolfram Research, "Solving the Quintic with Mathematica," Poster; M. Trott and V. Adamchik, library.wolfram.com/examples/quintic, 1994.
3. K. Gödel, "Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I," Monatshefte für Mathematik und Physik, 38, 1931, pp. 173-198.
4. A. Turing, "On Computable Numbers, with an Application to the Entscheidungsproblem," in Proceedings of the London Mathematical Society, Ser. 2, 42, 1937, pp. 230-265.
5. Y. Matiyasevich, Hilbert's Tenth Problem, MIT Press, 1993.
6. S. Wolfram, "The Wolfram 2,3 Turing Machine Research Prize,"
www.wolframscience.com/prizes/tm23, May 14, 2007.
7. A. Smith, "Universality of Wolfram's 2,3 Turing Machine," to appear in Complex Systems.
8. S. Wolfram, Cellular Automata and Complexity: Collected Papers, Addison-Wesley Publishing Company, 1994.
9. Wolfram Research, Mathematica, 1988-.
10. A. Turing, "Systems of Logic Based on Ordinals," Proc. London Math. Soc., Ser. 2-45, 1939, pp. 161-228.
11. B. Copeland, "Hypercomputation," Minds and Machines, 12(4), 2002, pp. 461-502.
12. S. Wolfram, Talk given at the Emergent Gravity Conference, MIT, Aug 26, 2008.
13. S. Wolfram, Talk given at the JOUAL 2009 Workshop, CNR-Area, Jul 10, 2009.
14. Wolfram Research, WolframTones, tones.wolfram.com, 2005-.
15. Wolfram Alpha LLC, Wolfram|Alpha, www.wolframalpha.com, 2009-.
|
{"url":"http://www.stephenwolfram.com/publications/what-ultimately-possible-physics/","timestamp":"2014-04-17T19:05:22Z","content_type":null,"content_length":"53387","record_id":"<urn:uuid:5f18b899-ac3e-47e3-abf2-b15242136475>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00659-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Understanding Filter Efficiency and Beta Ratios
Filter ratings are an often misunderstood area of contamination control. On several recent occasions, I have witnessed someone describing a filter by its nominal rating. A nominal rating is an
arbitrary micrometer value given to the filter by the manufacturer. These ratings have little to no value. Tests have shown that particles as large as 200 microns will pass through a nominally rated
10-micron filter. If someone tries to sell you a filter based on an "excellent" nominal rating of five microns, run away.
Absolute Rating
Another common rating for filters is the absolute rating. An absolute rating gives the size of the largest particle that will pass through the filter or screen. Essentially, this is the size of the largest opening in the filter, although no standardized test method exists to determine its value. Still, absolute ratings represent a filter's effectiveness better than nominal ratings do.
Beta Rating
The best and most commonly used rating in industry is the beta rating. The beta rating comes from the Multipass Method for Evaluating Filtration Performance of a Fine Filter Element (ISO 16889:1999).
Table 1. Effect of Filtration Ratio (Beta Ratio) on Downstream Fluid Cleanliness
To test a filter, particle counters accurately measure the size and quantity of upstream particles per known volume of fluid, as well as the size and quantity of particles downstream of the filter.
The beta ratio is defined as the particle count upstream divided by the particle count downstream at the rated particle size. By this definition, a five-micron filter with a beta rating of 10 will have, on average, 10 particles five microns and larger upstream of the filter for every one particle five microns and larger downstream.
The efficiency of the filter can be calculated directly from the beta ratio because the percent capture efficiency is ((beta-1)/beta) x 100. A filter with a beta of 10 at five microns is thus said to
be 90 percent efficient at removing particles five microns and larger.
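The arithmetic above is easy to check directly. Below is a minimal Python sketch of the beta-ratio and efficiency calculations described in this section; the particle counts in the example are made up purely for illustration.

def beta_ratio(upstream_count, downstream_count):
    """Filtration (beta) ratio: particles at or above the rated size counted
    upstream of the filter, divided by those counted downstream, per the same
    volume of fluid."""
    return upstream_count / downstream_count

def capture_efficiency(beta):
    """Percent capture efficiency, ((beta - 1) / beta) * 100."""
    return (beta - 1) / beta * 100

# Made-up example: 50,000 particles of five microns and larger upstream,
# 5,000 downstream, in the same volume of fluid.
beta = beta_ratio(50_000, 5_000)                        # beta = 10 at five microns
print(f"beta(5) = {beta:.0f}")                          # -> 10
print(f"efficiency = {capture_efficiency(beta):.1f}%")  # -> 90.0%, matching the text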
Caution must be exercised when using beta ratios to compare filters because they do not take into account actual operating conditions such as flow surges and changes in temperature.
A filter's beta ratio also does not give any indication of its dirt-holding capacity, the total amount of contaminant that can be trapped by the filter throughout its life, nor does it account for
its stability or performance over time.
Nevertheless, beta ratios are an effective way of gauging the expected performance of a filter.
I hope this new knowledge of filter efficiency ratings enables you to make a more informed purchase the next time you buy a filter.
Editor's Note
An estimate of the dirt-holding capacity is generated as part of the ISO 16889:1999 test.
|
{"url":"http://www.machinerylubrication.com/Read/1289/oil-filter-efficiency","timestamp":"2014-04-19T22:06:19Z","content_type":null,"content_length":"30226","record_id":"<urn:uuid:170b38ba-b796-4eb8-ae93-19b9f6258e63>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00423-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Queuing Theory problem, M/M/1/K queue with twist
July 3rd 2011, 05:49 PM #1
Jul 2011
Queuing Theory problem, M/M/1/K queue with twist
The problem statement
Customers arrive at a register according to a Poisson process with arrival rate λ, and service times are exponentially distributed with rate μ. When a customer arrives, he decides whether or not to join the line depending on how many people are currently in the system (not including himself). In other words, if n people are in the system at that moment, the new customer joins the line with probability β_n, and does not join the line (and receives no service) with probability 1 - β_n, where 0 ≤ β_n ≤ 1 for 0 ≤ n ≤ N and β_n = 0 for n > N.
(1) Write down the balance equations in terms of the steady-state probabilities p_n (where n is the number of customers in the system).
(2) Solve the equations from (1).
My attempt at a solution
So, my problem here is where β enters the equations. I'm thinking that the events "customer arrives" and "customer stays" are independent, so the effective arrival rate when n customers are in the system is λβ_n.
With this I would get:
$\lambda \beta_0 p_0 = \mu p_1$
$(\lambda \beta_n + \mu)\, p_n = \mu p_{n+1} + \lambda \beta_{n-1} p_{n-1}$ for $1 \le n \le N-1$
$\lambda \beta_{N-1} p_{N-1} = \mu p_N$
Is this correct? If not, any hint?
(2) If the above is correct, how to solve it? Any hint is appreciated.
July 7th 2011, 07:20 PM #2
Re: Queuing Theory problem, M/M/1/K queue with twist
You are right. Those events are independent.
To solve that, just use the recursion to express any $p_n$ in terms of $p_0$ and then remember $\sum_{n=0}^N p_n=1$.
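For concreteness, here is a sketch of where that recursion leads, using the balance equations exactly as written in the first post (this working is an editorial addition, not part of the original thread). The first equation gives $p_1 = \frac{\lambda\beta_0}{\mu}p_0$, and induction over the remaining equations yields the cut equations $\lambda\beta_{n-1}p_{n-1} = \mu p_n$, so that

$p_n = p_0 \prod_{k=0}^{n-1}\frac{\lambda\beta_k}{\mu}, \qquad 1 \le n \le N,$

and the normalization $\sum_{n=0}^{N} p_n = 1$ then fixes

$p_0 = \left(1 + \sum_{n=1}^{N}\prod_{k=0}^{n-1}\frac{\lambda\beta_k}{\mu}\right)^{-1}.$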
July 12th 2011, 10:05 PM #3
Jul 2011
Re: Queuing Theory problem, M/M/1/K queue with twist
Thanks! I calculated it and got a pretty messy, but hopefully correct, expression.
|
{"url":"http://mathhelpforum.com/advanced-statistics/184031-queuing-theory-problem-m-m-1-k-queue-twist.html","timestamp":"2014-04-18T05:40:03Z","content_type":null,"content_length":"36406","record_id":"<urn:uuid:b1539f57-a007-4444-8517-d2b338bb6e6b>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00093-ip-10-147-4-33.ec2.internal.warc.gz"}
|
cofficients of fourier series - File Exchange - MATLAB Central
This function is written to calculate the coefficients of a Fourier series.
In the function arguments, fs(x,T0,n,type), note that:
x : the function whose Fourier series coefficients are to be calculated.
*. Note that you should enter one period of the periodic function x.
**. Be careful! You should enter one period of the signal over the range [-T0/2, T0/2],
not [0, T0] or any other range.
T0 : period of the function.
n : number of coefficients to be calculated.
type : enter 0 to calculate the magnitude and phase of the coefficients;
enter 1 to calculate the real and imaginary parts of the coefficients.
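The MATLAB source itself is not reproduced on this page, so the sketch below is only a rough illustration of the same idea in Python: it numerically estimates the complex Fourier series coefficients of one period of a signal given on [-T0/2, T0/2]. The function and variable names here are my own and are not those of the submission.

import numpy as np

def fourier_coefficients(x, T0, n, num_samples=2048):
    """Estimate the complex Fourier series coefficients c_k, k = -n..n,
    of a periodic signal from one period x(t) defined on [-T0/2, T0/2].

    c_k = (1/T0) * integral over [-T0/2, T0/2] of x(t) * exp(-j*2*pi*k*t/T0) dt,
    approximated here by a simple Riemann sum over evenly spaced samples.
    """
    t = np.linspace(-T0 / 2, T0 / 2, num_samples, endpoint=False)
    dt = t[1] - t[0]
    k = np.arange(-n, n + 1)
    kernels = np.exp(-2j * np.pi * np.outer(k, t) / T0)
    c = kernels @ x(t) * dt / T0
    return k, c

# Example: a square wave of period 2; only the odd harmonics should be
# significant, with magnitudes close to 2 / (pi * |k|).
k, c = fourier_coefficients(lambda t: np.sign(np.sin(np.pi * t)), T0=2, n=5)
for kk, ck in zip(k.tolist(), c.tolist()):
    print(f"c_{kk:+d} = {ck.real:+.4f} {ck.imag:+.4f}j   |c_k| = {abs(ck):.4f}")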
|
{"url":"http://www.mathworks.com/matlabcentral/fileexchange/41684-cofficients-of-fourier-series","timestamp":"2014-04-19T19:55:40Z","content_type":null,"content_length":"24085","record_id":"<urn:uuid:19976974-fdb7-46ed-b3a8-21738dfdb99c>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00625-ip-10-147-4-33.ec2.internal.warc.gz"}
|