Math Forum Discussions
Topic: Coulomb law does charge; Ampere law does spin Chapt13.4.03 Charge
and spin #1025 New Physics #1145 ATOM TOTALITY 5th ed
Replies: 3 Last Post: Nov 23, 2012 4:15 AM
Posted: Nov 23, 2012 2:57 AM
Well, I had too much to eat for Thanksgiving. I saved up some special
for dinner tonight and ate too much. So looks like I will try to eat
just cereal for the next two days. I want to try to maintain my 137
lbs weight that I had in High School, so that means some days of near
fasting. But enough of that, let's get to important things.
I had to make a detour into the electric motor, the rotor, and thanks
to Tim's responses, I am pretty sure the problem is that the
Schrodinger Equation gives inaccurate descriptions of the "s"
orbitals. The Schrodinger Equation gives spherical orbitals to the
"s", but we all know the Dirac Equation relativizes the Schrodinger
Equation. It puts the Schrodinger Equation into motion, so that the
sphere is no longer an adequate description of the "s" orbital. So what
happens when you put a sphere into motion? What figure comes out?
Well, easily seen: the figure that a sphere produces when in motion
is a cylinder.
So the "s" orbitals of chemistry should really look like a cylinder
rather than a sphere. Now the Schrodinger Equation gets a lot of
elongated ellipses for the p, d, f orbitals. And if we put those into
the Dirac Equation, it elongates them even more so. The Dirac Equation
makes orbitals more like wire loops around the nucleus of an atom.
Now I had to be sure that no electric motor or rotor thereof was a
sphere-shaped wire loop. Now I am not saying such an object cannot
exist or is nonexistent. I am saying that the basic principle of an
electric motor is based on the cylinder shape.
Now I am getting closer to my goal of relating charge with spin. I am
centimetering my way there, rather than millimetering my way there.
Since the theme of New Physics is that the Maxwell Equations derive
all of physics, the concepts of charge and spin must be begotten
out of the Maxwell Equations. Charge and spin can be primitive
notions, but then the Maxwell Equations would define charge and spin
from the laws of the Maxwell Equations.
And that amounts to basically the Coulomb law defining charge and the
Ampere law defining spin.
And the way that works is that the Coulomb law would be a geometry
effect of opposite charges fitting inside one another as the inverse
square of distance, whereas like charges repel and cannot fit inside
one another. So that a proton and electron are nested, concentric
spheres radiating from the center of an atom, and the electron matches
every concentric sphere of the proton by composing the inside of that
sphere surface.
So charge is geometry, of the three types of geometry, Euclidean,
Elliptic and Hyperbolic.
That leaves us with spin. Spin in essence is the Ampere law which says
that parallel currents attract one another. It is this law that makes
electrons pair up in suborbitals and yields Hund's rule. It is
spin that creates the 3 p suborbitals of paired electrons. When
electrons flow in parallel, they attract and thus pair up and cause a
suborbital of two electrons.
So the Coulomb law describes charge and the Ampere law describes spin.
The charge geometry is elliptic for the proton and hyperbolic for the
electron, where the proton is the outer surface of a sphere and the
electron is the inner surface of the same sphere with its poles and
equator missing.
So what is spin in terms of geometry? Well, since it is the Ampere
law, the geometry involved is a choice of direction of motion of the
two electrons. If the electrons are in parallel motion they attract,
if antiparallel they repel.
So for charge there are 3 possible values for charge, -1,0,+1 and for
spin there cannot be more values, more possibilities than charge.
There can only be 3 possible spins, -1/2, 0, +1/2. If the spins are
parallel they are +1/2 with -1/2 equalling 0; if they are
antiparallel the spins repel and do not form a permanent structure,
with a net spin overall.
Now in ferromagnetism, we have electrons of unfilled suborbitals and
this large collection of electrons of unfilled suborbitals have a
parallel overall spin and that yields an overall attraction force and
we see it as ferromagnetism.
So what is the relationship of charge to spin? Well, it is the
relationship of Coulomb's law compared to Ampere's law. In effect
those two laws are independent since they are required in the Maxwell
Equations. So I cannot tie or connect them any more than I can tie
Coulomb's law to Ampere's law.
Google's New-Newsgroups censors AP posts and halted a proper
archiving of author, but Drexel's Math Forum does not and my posts
in archive form are seen here: http://mathforum.org/kb/profile.jspa?userID=499986
Archimedes Plutonium
whole entire Universe is just one big atom
where dots of the electron-dot-cloud are galaxies
Calculator words!
[750² + 150000 + √1452025]
seven-hundred-fifty squared, plus one-hundred-fifty-thousand, plus the square root of one-million-four-hundred-fifty-two-thousand-and-twenty-five
(3.4 × 10^-4) + [(191 ÷ 382)²] + (1.5 ÷ 25)
three-point-four times ten to the minus four, plus one-hundred-ninety-one divided by three-hundred-eighty-two all squared, plus one-point-five divided by twenty-five.
There are many ways to say these, and English is not always precise (for example in saying what exactly gets squared just above), so that is why we use formulas!
The "[" is an "open square bracket"
The "]" is a "close square bracket"
"The physicists defer only to mathematicians, and the mathematicians defer only to God ..." - Leon M. Lederman | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=17190","timestamp":"2014-04-21T09:55:01Z","content_type":null,"content_length":"10697","record_id":"<urn:uuid:c5bd9ebf-46a1-4c41-81d5-c91682933767>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00533-ip-10-147-4-33.ec2.internal.warc.gz"} |
Quantum yield
The quantum yield of a radiation-induced process is the number of times that a defined event occurs per photon absorbed by the system. Thus, the quantum yield is a measure of the efficiency with
which absorbed light produces some effect.
For example, in a chemical photodegradation process, when a molecule falls apart after absorbing a light quantum, the quantum yield is the number of destroyed molecules divided by the number of
photons absorbed by the system. Since not all photons are absorbed productively, the typical quantum yield will be less than 1.
Quantum yields greater than 1 are possible for photo-induced or radiation-induced chain reactions, in which a single photon may trigger a long chain of transformations. One example is the reaction of
hydrogen with chlorine, in which a few hundred molecules of hydrochloric acid are typically formed per quantum of blue light absorbed.
In optical spectroscopy, the quantum yield is the probability that a given quantum state is formed from the system initially prepared in some other quantum state. For example, a singlet to triplet
transition quantum yield is the fraction of molecules that, after being photoexcited into a singlet state, cross over to the triplet state. The fluorescence quantum yield is defined as the ratio of
the number of photons emitted to the number of photons absorbed.
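The definition above is just a ratio of counts. As a minimal sketch, with hypothetical photon counts (not from any specific experiment):

```python
def quantum_yield(events: int, photons_absorbed: int) -> float:
    """Number of defined events per photon absorbed by the system."""
    return events / photons_absorbed

# Hypothetical fluorescence measurement: 4.2 million photons emitted
# out of 6.0 million absorbed gives a fluorescence quantum yield of 0.70.
phi = quantum_yield(4_200_000, 6_000_000)
print(phi)  # 0.7
```

A chain reaction such as the hydrogen-chlorine example would simply give a ratio far above 1 for the same formula.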
See also
Quantum Efficiency
This article is licensed under the GNU Free Documentation License. It uses material from the Wikipedia article "Quantum_yield". A list of authors is available in Wikipedia.
Timer percent complete
Hi Me again
I have my program almost done thanks to peeps here. All I need to do now is figure out how to calculate percent complete and move it to D0010.
The set value for the timer is in D0001 and the timer value is T0000.
I made a function block in ST but have no idea how to hook up the timer to execute it and move the output result to D0010. I have attached a screen shot of the function so far.
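The arithmetic itself is just elapsed over preset, scaled to 100. A sketch in Python (the register names D0001, T0000 and D0010 come from the question; integer math mirrors what a PLC word register would hold):

```python
def percent_complete(elapsed: int, preset: int) -> int:
    """Percent of the timer preset that has elapsed, clamped to 0-100
    (integer math, as a data register like D0010 would store it)."""
    if preset <= 0:
        return 0  # guard against a zero/unset preset in D0001
    return min(100, (elapsed * 100) // preset)

# e.g. timer T0000 at 450 with preset D0001 = 1800 -> 25 into D0010
print(percent_complete(450, 1800))  # 25
```

In ST the body would be the same one-liner, with the multiply done before the divide so the intermediate value does not truncate to zero.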
Thanks in advance
Nextel (S)
This excerpt taken from the S 10-K filed Feb 26, 2010.
Interest Rate Risk
The communications industry is a capital intensive, technology driven business. We are subject to interest rate risk primarily associated with our
borrowings. Interest rate risk is the risk that changes in interest rates could adversely affect earnings and cash flows. Specific interest rate risk
includes: the risk of increasing interest rates on floating-rate debt and the risk of increasing interest rates for planned new fixed rate long-term
financings or refinancings.
About 90% of our debt as of December 31, 2009 was fixed-rate debt. While changes in interest rates impact the fair value of this debt, there is no impact to
earnings and cash flows because we intend to hold these obligations to maturity unless market and other conditions are favorable.
We perform interest rate sensitivity analyses on our variable rate debt. These analyses indicate that a one percentage point change in interest rates would
have an annual pre-tax impact of $18 million on our consolidated statements of operations and cash flows for the year ended December 31, 2009. We also
perform a sensitivity analysis on the fair market value of our outstanding debt. A 10% decline in market interest rates would cause an $872 million increase
in the fair market value of our debt to $20.9 billion.
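The sensitivity figure quoted above is essentially notional times the rate change. A rough back-of-the-envelope sketch (the ~$1.8 billion floating-rate balance is inferred from the stated $18 million impact, not a disclosed figure):

```python
def rate_sensitivity(variable_rate_debt: float, rate_change: float) -> float:
    """Annual pre-tax interest impact of a parallel rate move
    applied to the floating-rate portion of the debt."""
    return variable_rate_debt * rate_change

# A one-percentage-point move on ~$1.8 billion of floating-rate debt
impact = rate_sensitivity(1_800_000_000, 0.01)
print(impact)  # 18000000.0 -> the $18 million figure in the filing
```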
These excerpts taken from the S 10-K filed Feb 27, 2009.
Interest Rate Risk
The communications industry is a capital intensive, technology driven business. We are subject to interest rate risk primarily associated with our
borrowings. Interest rate risk is the risk that changes in interest rates could adversely affect earnings and cash flows. Specific interest rate risk
includes: the risk of increasing interest rates on floating-rate debt and the risk of increasing interest rates for planned new fixed rate long-term
financings or refinancings.
These excerpts taken from the S 10-K filed Feb 29, 2008.
Interest Rate Risk
The communications industry is a capital intensive, technology driven business. We are subject to interest rate risk primarily associated with our
borrowings. Interest rate risk is the risk that changes in interest rates could adversely affect earnings and cash flows. Specific interest rate risks
include: the risk of increasing interest rates on short-term debt; the risk of increasing interest rates for planned new fixed rate long-term financings; and
the risk of increasing interest rates for planned refinancings using long-term fixed rate debt.
This excerpt taken from the S 10-K filed Mar 1, 2007.
Interest Rate Risk
The communications industry is a capital intensive, technology driven business. We are subject to interest rate risk primarily associated with our
borrowings. Interest rate risk is the risk that changes in interest rates could adversely affect earnings and cash flows. Specific interest rate risks
include: the risk of increasing interest rates on short-term debt; the risk of increasing interest rates for planned new fixed rate long-term financings; and
the risk of increasing interest rates for planned refinancing using long-term fixed rate debt.
These excerpts taken from the S 8-K filed Sep 18, 2006.
Interest Rate Risk
The communications industry is a capital intensive, technology driven business. We are subject to interest rate risk primarily associated with our
borrowings. Interest rate risk is the risk that changes in interest rates could adversely affect earnings and cash flows. Specific interest rate risks
include: the risk of increasing interest rates on short-term debt; the risk of increasing interest rates for planned new fixed rate long-term financings; and
the risk of increasing interest rates for planned refinancing using long-term fixed rate debt.
Interest Rate Risk
The communications industry is a capital intensive, technology driven business. We are subject to interest rate risk primarily associated with our
borrowings. Interest rate risk is the risk that changes in interest rates could adversely affect earnings and cash flows. Specific interest rate risks
include: the risk of increasing interest rates on short-term debt; the risk of increasing interest rates for planned new fixed rate long-term financings; and
the risk of increasing interest rates for planned refinancings using long-term fixed rate debt.
This excerpt taken from the S 10-Q filed Aug 9, 2006.
Interest Rate Risk
The communications industry is a capital intensive, technology driven business. We are subject to interest rate risk primarily associated with our
borrowings. Interest rate risk is the risk that changes in interest rates could adversely affect earnings and cash flows. Specific interest rate risks
include: the risk of increasing interest rates on short-term debt; the risk of increasing interest rates for planned new fixed rate long-term financings; and
the risk of increasing interest rates for planned refinancings using long-term fixed rate debt.
This excerpt taken from the S 10-Q filed May 5, 2006.
Interest Rate Risk
The communications industry is a capital intensive, technology driven business. We are subject to interest rate risk primarily associated with our
borrowings. Interest rate risk is the risk that changes in interest rates could adversely affect earnings and cash flows. Specific interest rate risks
include: the risk of increasing interest rates on short-term debt; the risk of increasing interest rates for planned new fixed rate long-term financings; and
the risk of increasing interest rates for planned refinancings using long-term fixed rate debt.
This excerpt taken from the S 10-K filed Mar 7, 2006.
Interest Rate Risk
The communications industry is a capital intensive, technology driven business. We are subject to interest rate risk primarily associated with our
borrowings. Interest rate risk is the risk that changes in interest rates could adversely affect earnings and cash flows. Specific interest rate risks
include: the risk of increasing interest rates on short-term debt; the risk of increasing interest rates for planned new fixed rate long-term financings; and
the risk of increasing interest rates for planned refinancing using long-term fixed rate debt.
Cash Flow Hedges
We enter into interest rate swap agreements designated as cash flow hedges to reduce the impact of interest rate movements on future interest expense by
effectively converting a portion of our floating-rate debt to a fixed-rate. As of December 31, 2005, we had no outstanding interest rate cash flow hedges.
Fair Value Hedges
We enter into interest rate swap agreements to manage exposure to interest rate movements and achieve an optimal mixture of floating and fixed-rate debt
while minimizing liquidity risk. The interest rate swap agreements
designated as fair value hedges effectively convert our fixed-rate debt to a floating-rate by receiving fixed rate amounts in exchange for floating rate
interest payments over the life of the agreement without an exchange of the underlying principal amount. As of December 31, 2005, we had outstanding interest
rate swap agreements that were designated as fair value hedges.
Approximately 80% of our debt as of December 31, 2005 was fixed-rate debt excluding interest rate swaps. While changes in interest rates impact the fair
value of this debt, there is no impact to earnings and cash flows because we intend to hold these obligations to maturity unless market and other conditions
are favorable.
As of December 31, 2005, we held fair value interest rate swaps with a notional value of $1 billion. These swaps were entered into as hedges of the fair
value of a portion of our senior notes. These interest rate swaps have maturities ranging from 2008 to 2012. On a semiannual basis, we pay a floating rate of
interest equal to the six-month LIBOR plus a fixed spread and receive an average interest rate equal to the coupon rates stated on the underlying senior
notes. On December 31, 2005, the rate we would pay averaged 7.0% and the rate we would receive was 7.2%. Assuming a one percentage point increase in the
prevailing forward yield curve, the fair value of the interest rate swaps and the underlying senior notes would change by $36 million. These interest rate
swaps met all the requirements for perfect effectiveness under derivative accounting rules as all of the critical terms of the swaps perfectly matched the
corresponding terms of the hedged debt; therefore, there is no impact to earnings and cash flows for any fair value fluctuations.
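The semiannual exchange on such a swap can be sketched as follows (illustrative only; the 7.2% receive and 7.0% pay rates are the averages quoted above, and real day-count conventions are ignored):

```python
def swap_net_receipt(notional: float, receive_fixed: float, pay_floating: float) -> float:
    """Net semiannual amount received on a receive-fixed / pay-floating
    interest rate swap; positive when the fixed leg exceeds the floating leg.
    No principal is exchanged, only the net interest difference."""
    period_fraction = 0.5  # semiannual payment, flat half-year accrual
    return notional * (receive_fixed - pay_floating) * period_fraction

# $1 billion notional, receive 7.2% fixed, pay ~7.0% floating:
# roughly a $1 million net receipt for the half-year period.
net = swap_net_receipt(1_000_000_000, 0.072, 0.070)
print(net)
```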
We perform interest rate sensitivity analyses on our variable rate debt including interest rate swaps. These analyses indicate that a one percentage point
change in interest rates would have an annual pre-tax impact of $45 million on our statements of operations and cash flows as of December 31, 2005. While our
variable-rate debt may impact earnings and cash flows as interest rates change, it is not subject to changes in fair values.
We also perform a sensitivity analysis on the fair market value of our outstanding debt. A 10% decline in market interest rates would cause a $987 million
increase in fair market value of our debt to $29 billion. This analysis includes the hedged debt.
We have entered into a series of interest rate collars associated with the anticipated issuance of debt by Embarq at the time of its expected spin-off in
2006. These collars have been designated as cash flow hedges in Embarq's financial statements against the variability in interest payments that would result
from a change in interest rates before the debt is issued at the time of spin-off. However, because the forecasted interest payments of debt will occur after
the subsidiary is spun-off, the derivative instruments do not qualify for hedge accounting treatment in our consolidated financial statements, and so any
changes in the fair value of these instruments are recognized in earnings during the period of change. Based on market prices on December 31, 2005, a one
percentage point change in interest rates would result in a decrease in the fair value of these instruments by approximately $179 million.
This excerpt taken from the S 10-Q filed Nov 9, 2005.
Interest Rate Risk
The communications industry is a capital intensive, technology driven business. Sprint Nextel is subject to interest rate risk primarily associated with its
borrowings. Sprint Nextel selectively enters into interest rate swap agreements to manage its exposure to interest rate changes on its debt.
Approximately 80% of Sprint Nextel's outstanding debt at September 30, 2005 was fixed-rate debt, excluding interest rate swaps. While changes in interest
rates impact the fair value of this debt, there is no impact on earnings and cash flows because Sprint Nextel intends to hold these obligations to maturity
unless market conditions make it beneficial to refinance these obligations.
As of September 30, 2005, Sprint Nextel held fair value interest rate swaps with a notional value of $1 billion. These swaps were entered into as hedges of
the fair value of a portion of our senior notes. These interest rate swaps have maturities ranging from 2008 to 2012. On a semiannual basis, Sprint Nextel
pays a floating rate of interest equal to the six-month LIBOR, plus a fixed spread, which averaged 6.6% as of September 30, 2005, and received an average
interest rate equal to the coupon rates stated on the underlying senior notes of 7.2%. Assuming a one percentage point increase in the prevailing forward
yield curve, the fair value of the interest rate swaps and the underlying senior notes would change by $38 million. These interest rate swaps met all the
requirements for perfect effectiveness under derivative accounting rules; therefore, there is no impact on earnings and cash flows for any fair value fluctuations.
Sprint Nextel performs interest rate sensitivity analyses on its variable-rate debt, including interest rate swaps. These analyses indicate that a one
percentage point change in interest rates would have an annual pre-tax impact of $50 million on the Consolidated Statements of Operations and Consolidated
Statements of Cash Flows at September 30, 2005. While Sprint Nextel's variable-rate debt is subject to earnings and cash flows impacts as interest rates
change, it is not subject to changes in fair values.
Sprint Nextel also performs a sensitivity analysis on the fair market value of its outstanding debt. A 10% decrease in market interest rates would cause a
$722 million increase in fair market value of its debt to $28 billion.
This excerpt taken from the S 10-Q filed Aug 8, 2005.
Interest Rate Risk
The communications industry is a capital intensive, technology driven business. Sprint is subject to interest rate risk primarily associated with its
borrowings. Sprint selectively enters into interest rate swap agreements to manage its exposure to interest rate changes on its debt.
Approximately 95% of Sprint's outstanding debt at June 30, 2005 is fixed-rate debt, excluding interest rate swaps. While changes in interest rates impact the
fair value of this debt, there is no impact on earnings and cash flows because Sprint intends to hold these obligations to maturity unless market conditions
are favorable.
As of June 30, 2005, Sprint held fair value interest rate swaps with a notional value of $1 billion. These swaps were entered into as hedges of the fair
value of a portion of our senior notes. These interest rate swaps have maturities ranging from 2008 to 2012. On a semiannual basis, Sprint pays a floating
rate of interest equal to the six-month LIBOR, plus a fixed spread, which averaged 6.1% as of June 30, 2005, and received an average interest rate equal to
the coupon rates stated on the underlying senior notes of 7.2%. Assuming a one percentage point increase in the prevailing forward yield curve, the fair
value of the interest rate swaps and the underlying senior notes would change by $42 million. These interest rate swaps met all the requirements for perfect
effectiveness under derivative accounting rules; therefore, there is no impact on earnings and cash flows for any fair value fluctuations.
Sprint performs interest rate sensitivity analyses on its variable-rate debt including interest rate swaps. These analyses indicate that a one percentage
point change in interest rates would have an annual pre-tax impact of $14 million on the Statements of Operations and Consolidated Statements of Cash Flows
at June 30, 2005. While Sprint's variable-rate debt is subject to earnings and cash flows impacts as interest rates change, it is not subject to changes in
fair values.
Sprint also performs a sensitivity analysis on the fair market value of its outstanding debt. A 10% decrease in market interest rates would cause a $540
million increase in fair market value of its debt to $19 billion.
This excerpt taken from the S 10-K filed Apr 29, 2005.
Interest Rate Risk
The communications industry is a capital intensive, technology driven business. Sprint is subject to interest rate risk primarily associated with its
borrowings. Sprint selectively enters into interest rate swap and cap agreements to manage its exposure to interest rate changes on its debt.
Approximately 93% of Sprint's debt at December 31, 2004 was fixed-rate debt excluding interest rate swaps. While changes in interest rates impact the fair
value of this debt, there is no impact to earnings and cash flows because Sprint intends to hold these obligations to maturity unless market and other
conditions are favorable.
As of December 31, 2004, Sprint held fair value interest rate swaps with a notional value of $1 billion. These swaps were entered into as hedges of the fair
value of a portion of our senior notes. These interest rate swaps have maturities ranging from 2008 to 2012. On a semiannual basis, Sprint pays a floating
rate of interest equal to the six-month LIBOR plus a fixed spread and receives an average interest rate equal to the coupon rates stated on the underlying
senior notes. On December 31, 2004, the rate Sprint would pay averaged 5.0% and the rate Sprint would receive was 7.2%. Assuming a one percentage point
increase in the prevailing forward yield curve, the fair value of the interest rate swaps and the underlying senior notes would change by $46 million. These
interest rate swaps met all the requirements for perfect effectiveness under derivative accounting rules as all of the critical terms of the swaps perfectly
matched the corresponding terms of the hedged debt; therefore, there is no impact to earnings and cash flows for any fair value fluctuations.
Sprint performs interest rate sensitivity analyses on its variable rate debt including interest rate swaps. These analyses indicate that a one percentage
point change in interest rates would have an annual pre-tax impact of $18 million on the statements of operations and cash flows at December 31, 2004. While
Sprint's variable-rate debt may impact earnings and cash flows as interest rates change, it is not subject to changes in fair values.
Sprint also performs a sensitivity analysis on the fair market value of its outstanding debt. A 10% decline in market interest rates would cause a $579
million increase in fair market value of its debt to $20.1 billion. This analysis includes the hedged debt.
This excerpt taken from the S 10-K filed Mar 11, 2005.
Interest Rate Risk
The communications industry is a capital intensive, technology driven business. Sprint is subject to interest rate risk primarily associated with its
borrowings. Sprint selectively enters into interest rate swap and cap agreements to manage its exposure to interest rate changes on its debt.
Approximately 93% of Sprint's debt at December 31, 2004 was fixed-rate debt excluding interest rate swaps. While changes in interest rates impact the fair
value of this debt, there is no impact to earnings and cash flows because Sprint intends to hold these obligations to maturity unless market and other
conditions are favorable.
As of December 31, 2004, Sprint held fair value interest rate swaps with a notional value of $1 billion. These swaps were entered into as hedges of the fair
value of a portion of our senior notes. These interest rate swaps have maturities ranging from 2008 to 2012. On a semiannual basis, Sprint pays a floating
rate of interest equal to the six-month LIBOR plus a fixed spread and receives an average interest rate equal to the coupon rates stated on the underlying
senior notes. On December 31, 2004, the rate Sprint would pay averaged 5.0% and the rate Sprint would receive was 7.2%. Assuming a one percentage point
increase in the prevailing forward yield curve, the fair value of the interest rate swaps and the underlying senior notes would change by $46 million. These
interest rate swaps met all the requirements for perfect effectiveness under derivative accounting rules as all of the critical terms of the swaps perfectly
matched the corresponding terms of the hedged debt; therefore, there is no impact to earnings and cash flows for any fair value fluctuations.
Sprint performs interest rate sensitivity analyses on its variable rate debt including interest rate swaps. These analyses indicate that a one percentage
point change in interest rates would have an annual pre-tax impact of $18 million on the statements of operations and cash flows at December 31, 2004. While
Sprint's variable-rate debt may impact earnings and cash flows as interest rates change, it is not subject to changes in fair values.
Sprint also performs a sensitivity analysis on the fair market value of its outstanding debt. A 10% decline in market interest rates would cause a $579
million increase in fair market value of its debt to $20.1 billion. This analysis includes the hedged debt.
The communications industry is a capital intensive, technology driven business. We are subject to interest rate risk primarily associated with our borrowings. Interest rate risk is the risk that
changes in interest rates could adversely affect earnings and cash flows. Specific interest rate risk includes: the risk of increasing interest rates on floating-rate debt and the risk of increasing
interest rates for planned new fixed rate long-term financings or refinancings.
About 90% of our debt as of December 31, 2009 was fixed-rate debt. While changes in interest rates impact the fair value of this debt, there is no impact to earnings and cash flows because we intend
to hold these obligations to maturity unless market and other conditions are favorable.
We perform interest rate sensitivity analyses on our variable rate debt. These analyses indicate that a one percentage point change in interest rates would have an annual pre-tax impact of $18
million on our consolidated statements of operations and cash flows for the year ended December 31, 2009. We also perform a sensitivity analysis on the fair market value of our outstanding debt. A
10% decline in market interest rates would cause an $872 million increase in the fair market value of our debt to $20.9 billion.
The communications industry is a capital intensive, technology driven business. We are subject to interest rate risk primarily associated with our borrowings. Interest rate risk is the risk that changes in interest rates could adversely affect earnings and cash flows. Specific interest rate risks include: the risk of increasing interest rates on short-term debt; the risk of increasing interest rates for planned new fixed rate long-term financings; and the risk of increasing interest rates for planned refinancings using long-term fixed rate debt.
We enter into interest rate swap agreements designated as cash flow hedges to reduce the impact of interest rate movements on future interest expense by effectively converting a portion of our
floating-rate debt to a fixed-rate. As of December 31, 2005, we had no outstanding interest rate cash flow hedges.
We enter into interest rate swap agreements to manage exposure to interest rate movements and achieve an optimal mixture of floating and fixed-rate debt while minimizing liquidity risk. The interest
rate swap agreements
designated as fair value hedges effectively convert our fixed-rate debt to a floating-rate by receiving fixed rate amounts in exchange for floating rate interest payments over the life of the
agreement without an exchange of the underlying principal amount. As of December 31, 2005, we had outstanding interest rate swap agreements that were designated as fair value hedges.
Approximately 80% of our debt as of December 31, 2005 was fixed-rate debt excluding interest rate swaps. While changes in interest rates impact the fair value of this debt, there is no impact to
earnings and cash flows because we intend to hold these obligations to maturity unless market and other conditions are favorable.
As of December 31, 2005, we held fair value interest rate swaps with a notional value of $1 billion. These swaps were entered into as hedges of the fair value of a portion of our senior notes. These
interest rate swaps have maturities ranging from 2008 to 2012. On a semiannual basis, we pay a floating rate of interest equal to the six-month LIBOR plus a fixed spread and receive an average
interest rate equal to the coupon rates stated on the underlying senior notes. On December 31, 2005, the rate we would pay averaged 7.0% and the rate we would receive was 7.2%. Assuming a one
percentage point increase in the prevailing forward yield curve, the fair value of the interest rate swaps and the underlying senior notes would change by $36 million. These interest rate swaps met
all the requirements for perfect effectiveness under derivative accounting rules as all of the critical terms of the swaps perfectly matched the corresponding terms of the hedged debt; therefore,
there is no impact to earnings and cash flows for any fair value fluctuations.
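The cash-flow mechanics of the fair value swaps described above (receive the fixed note coupon, pay six-month LIBOR plus a spread, settle semiannually on the notional) can be sketched as follows; this illustrates the structure only, not Sprint's actual settlement calculation:

```python
def swap_net_receipt(notional, receive_fixed, pay_floating,
                     periods_per_year=2):
    """Net amount received per settlement period on a
    receive-fixed / pay-floating interest rate swap."""
    return notional * (receive_fixed - pay_floating) / periods_per_year

# Using the disclosed figures: $1B notional, receive 7.2%, pay an
# average of 7.0% -> a net $1M received per semiannual period.
net = swap_net_receipt(1_000_000_000, 0.072, 0.070)
```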
We perform interest rate sensitivity analyses on our variable rate debt including interest rate swaps. These analyses indicate that a one percentage point change in interest rates would have an
annual pre-tax impact of $45 million on our statements of operations and cash flows as of December 31, 2005. While our variable-rate debt may impact earnings and cash flows as interest rates change,
it is not subject to changes in fair values.
We also perform a sensitivity analysis on the fair market value of our outstanding debt. A 10% decline in market interest rates would cause a $987 million increase in fair market value of our debt to
$29 billion. This analysis includes the hedged debt.
We have entered into a series of interest rate collars associated with the anticipated issuance of debt by Embarq at the time of its expected spin-off in 2006. These collars have been designated as
cash flow hedges in Embarq's financial statements against the variability in interest payments that would result from a change in interest rates before the debt is issued at the time of spin-off.
However, because the forecasted interest payments of debt will occur after the subsidiary is spun-off, the derivative instruments do not qualify for hedge accounting treatment in our consolidated
financial statements, and so any changes in the fair value of these instruments are recognized in earnings during the period of change. Based on market prices on December 31, 2005, a one percentage
point change in interest rates would result in a decrease in the fair value of these instruments by approximately $179 million.
The communications industry is a capital intensive, technology driven business. Sprint Nextel is subject to interest rate risk primarily associated with its borrowings. Sprint Nextel selectively
enters into interest rate swap agreements to manage its exposure to interest rate changes on its debt.
Approximately 80% of Sprint Nextel's outstanding debt at September 30, 2005 was fixed-rate debt, excluding interest rate swaps. While changes in interest rates impact the fair value of this debt,
there is no impact on earnings and cash flows because Sprint Nextel intends to hold these obligations to maturity unless market conditions make it beneficial to refinance these obligations.
As of September 30, 2005, Sprint Nextel held fair value interest rate swaps with a notional value of $1 billion. These swaps were entered into as hedges of the fair value of a portion of our senior
notes. These interest rate swaps have maturities ranging from 2008 to 2012. On a semiannual basis, Sprint Nextel pays a floating rate of interest equal to the six-month LIBOR, plus a fixed spread,
which averaged 6.6% as of September 30, 2005, and received an average interest rate equal to the coupon rates stated on the underlying senior notes of 7.2%. Assuming a one percentage point increase
in the prevailing forward yield curve, the fair value of the interest rate swaps and the underlying senior notes would change by $38 million. These interest rate swaps met all the requirements for
perfect effectiveness under derivative accounting rules; therefore, there is no impact on earnings and cash flows for any fair value fluctuations.
Sprint Nextel performs interest rate sensitivity analyses on its variable-rate debt, including interest rate swaps. These analyses indicate that a one percentage point change in interest rates would
have an annual pre-tax impact of $50 million on the Consolidated Statements of Operations and Consolidated Statements of Cash Flows at September 30, 2005. While Sprint Nextel's variable-rate debt is
subject to earnings and cash flows impacts as interest rates change, it is not subject to changes in fair values.
Sprint Nextel also performs a sensitivity analysis on the fair market value of its outstanding debt. A 10% decrease in market interest rates would cause a $722 million increase in fair market value
of its debt to $28 billion.
The communications industry is a capital intensive, technology driven business. Sprint is subject to interest rate risk primarily associated with its borrowings. Sprint selectively enters into
interest rate swap agreements to manage its exposure to interest rate changes on its debt.
Approximately 95% of Sprint's outstanding debt at June 30, 2005 is fixed-rate debt, excluding interest rate swaps. While changes in interest rates impact the fair value of this debt, there is no
impact on earnings and cash flows because Sprint intends to hold these obligations to maturity unless market conditions are favorable.
As of June 30, 2005, Sprint held fair value interest rate swaps with a notional value of $1 billion. These swaps were entered into as hedges of the fair value of a portion of our senior notes. These
interest rate swaps have maturities ranging from 2008 to 2012. On a semiannual basis, Sprint pays a floating rate of interest equal to the six-month LIBOR, plus a fixed spread, which averaged 6.1% as
of June 30, 2005, and received an average interest rate equal to the coupon rates stated on the underlying senior notes of 7.2%. Assuming a one percentage point increase in the prevailing forward
yield curve, the fair value of the interest rate swaps and the underlying senior notes would change by $42 million. These interest rate swaps met all the requirements for perfect effectiveness under
derivative accounting rules; therefore, there is no impact on earnings and cash flows for any fair value fluctuations.
Sprint performs interest rate sensitivity analyses on its variable-rate debt including interest rate swaps. These analyses indicate that a one percentage point change in interest rates would have an
annual pre-tax impact of $14 million on the Statements of Operations and Consolidated Statements of Cash Flows at June 30, 2005. While Sprint's variable-rate debt is subject to earnings and cash
flows impacts as interest rates change, it is not subject to changes in fair values.
Sprint also performs a sensitivity analysis on the fair market value of its outstanding debt. A 10% decrease in market interest rates would cause a $540 million increase in fair market value of its
debt to $19 billion.
The communications industry is a capital intensive, technology driven business. Sprint is subject to interest rate risk primarily associated with its borrowings. Sprint selectively enters into
interest rate swap and cap agreements to manage its exposure to interest rate changes on its debt.
Approximately 93% of Sprint's debt at December 31, 2004 was fixed-rate debt excluding interest rate swaps. While changes in interest rates impact the fair value of this debt, there is no impact to
earnings and cash flows because Sprint intends to hold these obligations to maturity unless market and other conditions are favorable.
As of December 31, 2004, Sprint held fair value interest rate swaps with a notional value of $1 billion. These swaps were entered into as hedges of the fair value of a portion of our senior notes.
These interest rate swaps have maturities ranging from 2008 to 2012. On a semiannual basis, Sprint pays a floating rate of interest equal to the six-month LIBOR plus a fixed spread and receives an
average interest rate equal to the coupon rates stated on the underlying senior notes. On December 31, 2004, the rate Sprint would pay averaged 5.0% and the rate Sprint would receive was 7.2%.
Assuming a one percentage point increase in the prevailing forward yield curve, the fair value of the interest rate swaps and the underlying senior notes would change by $46 million. These interest
rate swaps met all the requirements for perfect effectiveness under derivative accounting rules as all of the critical terms of the swaps perfectly matched the corresponding terms of the hedged debt;
therefore, there is no impact to earnings and cash flows for any fair value fluctuations.
Sprint performs interest rate sensitivity analyses on its variable rate debt including interest rate swaps. These analyses indicate that a one percentage point change in interest rates would have an
annual pre-tax impact of $18 million on the statements of operations and cash flows at December 31, 2004. While Sprint's variable-rate debt may impact earnings and cash flows as interest rates
change, it is not subject to changes in fair values.
Sprint also performs a sensitivity analysis on the fair market value of its outstanding debt. A 10% decline in market interest rates would cause a $579 million increase in fair market value of its
debt to $20.1 billion. This analysis includes the hedged debt.
Show that a matrix ISNT invertible?
April 16th 2012, 08:37 AM #1
Junior Member
Mar 2012
Show that a matrix ISNT invertible?
So matrix A is
2 -1 1 3
how will I go about showing that this matrix isn't invertible?
Re: Show that a matrix ISNT invertible?
You double posted, you should probably edit one and say it was an accident or someone might get mad at you.
There's lots of ways,
Row reduce to find a zero row,
Find that the det = 0
and a whole bunch of other equivalent statements,
Google singular matrix and go to the wiki
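For a concrete version of the determinant test: the flattened "2 -1 1 3" in the question can't be unambiguously reconstructed, but if it is meant as the 2x2 matrix [[2, -1], [1, 3]], its determinant is 7, so that matrix actually IS invertible. A quick plain-Python check (helper names are my own):

```python
def det2(m):
    """Determinant of a 2x2 matrix given as [[a, b], [c, d]]."""
    (a, b), (c, d) = m
    return a * d - b * c

def is_invertible(m):
    """A square matrix is invertible iff its determinant is nonzero."""
    return det2(m) != 0

print(det2([[2, -1], [1, 3]]))   # 2*3 - (-1)*1 = 7 -> invertible
print(det2([[2, -1], [-4, 2]]))  # 4 - 4 = 0 -> singular
```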
Re: Show that a matrix ISNT invertible?
The easiest thing to do in this situation is for the OP to report the post him/herself and say that he/she accidentally double posted, and a moderator will clean it up
F1Stats – A Prequel to Getting Started With Rank Correlations
I finally, finally made time to get started on my statistics learning journey with a look at some of the results in the paper A Tale of Two Motorsports: A Graphical-Statistical Analysis of How Practice, Qualifying, and Past Success Relate to Finish Position in NASCAR and Formula One Racing.
Note that these notes represent a description of the things I learned trying to draw on ideas contained within the paper and apply it to data I had available. There may be errors… if you spot any,
please let me know via the comments.
The paper uses race classification data from the 2009 season, comparing F1 and NASCAR championships and claiming to explore the extent to which positions in practice and qualifying relate to race
classification. I won’t be looking at the NASCAR data, but I did try to replicate the F1 stats which I’ll start to describe later on in this post. I’ll also try to build up a series of interactive
apps around the analyses, maybe along with some more traditional print format style reports.
(There are so many things I want to learn about, from the stats themselves, to possible workflows for analysis and reporting, to interactive analysis tools that I’m not sure what order any of it will
pan out into, or even the extent to which I should try to write separate posts about the separate learnings…)
As a source of data, I used my f1com megascraper that grabs classification data (along with sector times and fastest laps) since 2006. (The ergast API doesn’t have the practice results, though it
does have race and qualifying results going back a long way, so it could be used to do a partial analysis over many more seasons). I downloaded the whole Scraperwiki SQLite database which I could
then load into R and play with offline at my leisure.
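When working with a downloaded SQLite database like this, a sensible first step is simply to list what the scraper stored before querying it. The post works in R, but the same inspection is easy in any language; this sketch discovers table names rather than assuming the scraper's schema:

```python
import sqlite3

def list_tables(path):
    """Return the table names in a SQLite file -- useful for finding
    the classification tables before pulling them into an analysis."""
    with sqlite3.connect(path) as con:
        rows = con.execute(
            "SELECT name FROM sqlite_master "
            "WHERE type='table' ORDER BY name"
        ).fetchall()
    return [r[0] for r in rows]
```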
The first result of note in the paper is a chart that claims to demonstrate the Spearman rank correlation between practice and race results, qualification and race results, and championship points
and race results, for each race in the season. The caption to the corresponding NASCAR graphs explains the shaded region: “In general, the values in the grey area are not statistically significant
and the values in the white area are statistically significant.” A few practical uses we might put the chart to come to mind (possibly!): is qualifying position or p3 position a good indicator of
race classification (that is, is the order folk finish at the end of p3 a good indicator of the rank order in which they’ll finish the race?)?; if the different rank orders are not correlated,
(people finish the race in a different order to the grid position), does this say anything about how exciting the race might have been? Does the "statistical significance" of the correlation value
add anything meaningful?
So what is this chart trying to show and what, if anything, might it tell us of interest about the race weekend?
First up, the statistic that’s being plotted is Spearman’s rank correlation coefficient. There are four things to note there:
1. it’s a coefficient: a coefficent is a single number that tends to come in two flavours (often both at the same time). In a mathematical equation, a coefficient is typically a constant number that
is used as multiplier of a variable. So for example, in the equation t = 2 x, the x variable has the coefficient 2. Note that the coefficient may also be a parameter, as for example in the
equation y= a.x (where the . means ‘multiply’, and we naturally read x as a dependent variable that is used to determine the value of y having been multiplied by the value of a). However, a
coefficient may also be a particular number that characterises a particular relationship between two things. In this case, it characterises the degree of correlation between two things…
2. The refrain “correlation is not causation” is one heard widely around the web that mocks the fact that just because two things may be correlated – that is, when one thing changes, another changes
in a similar way – it doesn’t necessarily mean that the way one thing changed caused the other to change in a similar way as a result. (Of course, it might mean that…;-). (If you want to find
purely incidental correlations between things, have a look at Google correlate, that identifies different search terms whose search trends over time are similar. You can even draw your own trend
over time to find terms that have trended in a similar way.)
Correlation, then, describes the extent to which two things tend to change over time in a similar way: when one goes up, the other goes up; when one goes down, the other goes down. (If they
behave in opposite ways – if one goes up steeply the other goes down steeply; if one goes up gently, the other goes down gently – then they are negatively or anti-correlated).
Correlation measures require that you have paired data. You know you have paired data if you can plot your data as a two dimensional scatterplot and label each point with the name of the person
or thing that was measured to get the two co-ordinate values. So for example, on a plot of qualifying position versus race classification, I can label each point with the name of the driver. The
qualification position and race classification is paired data around the set of drivers.
3. A rank correlation coefficient is used to describe the extent to which two rankings are correlated. Ranked orders do away with the extent to which different elements in a series differ from each
other and just concentrate on their rank order. That is, we don’t care how much change there is in each of the data series, we’re just interested in rank positions within each series. In the case
of an F1 race, the distribution of laptimes during qualifying may see the first few cars separated by a few thousandths of a second, but the time between the best laptimes of consecutively placed
cars at the back of the grid might be of the order of tenths of a second. (Distributions take into account, for example, the variety and range of values in a dataset.) However, in a rank ordered
chart, all we are interested in is the integer position: first, second, third, …, nineteenth, twentieth. There is no information about the magnitude of the actual time difference between the
laptimes, that is, how far close to or far apart from each other the laptimes of consecutively placed cars were, we just know the rank order of fastest to slowest cars. The distribution of the
rank values is not really very interesting, or subject to change, at all.
One thing that I learned that’s possibly handy to know when decoding the jargon: rank based stats are also often referred to as non-parametric statistics because no assumptions are made about how
the numbers are distributed (presumably, there are no parameters of note relating to how the values are distributed, such as the mean and standard deviation of a “normal” distribution). If we
think about the evolution of laptimes in a race, then most of them will be within a few tenths of the fastest lap each lap, with perhaps two bunches of slower lap times (in-lap and out-lap around
a pitstop). The distribution of these lap times may be interesting (for example, the distribution of laptimes on a lap when everyone is racing will be different to the distribution of lap times
on a lap when several cars are pitting). On the other hand, for each lap, the distribution of the rank order of laptimes during that lap will always be the same (first fastest, second fastest,
third fastest, etc.). That is not to say, however, that the rank order of the drivers’ lap times does not change lap on lap, which of course it might do (Webber might be tenth fastest on lap 1,
fastest on lap 2, eighth fastest on lap 3, and so on).
Of course, this being stats, “non-parametric” probably means lots of other things as well, but for now my rule of thumb will be: the distribution doesn’t matter (that is, the statistic does not
make any assumptions about the distribution of the data in order for the statistic to work; which is to say, that's one thing you don't have to check in order to use the statistic (erm, I think!).
4. The statistic chosen was Spearman's rank correlation coefficient. Three differently calculated correlation coefficients appear to be widely used (and also appear as possible methods in R's cor() function, which calculates correlations between lists of numbers): i) Pearson's product moment correlation coefficient (how well does a straight line through an x-y scatterplot of the data describe the relationship between the x and y values, and what's the sign of its gradient?); ii) Spearman's rank correlation coefficient (also known as Spearman's rho or r[s]) [this interactive is rather nice and shows how Pearson and Spearman correlations can differ]; iii) Kendall's τ (that is, Kendall's Tau; this coefficient is based on concordance, which describes how the sign of the difference in rank between pairs of numbers in one data series is the same as the sign of the difference in rank between a corresponding pair in the other data series). Other flavours of correlation coefficient are also available (for example, Lin's concordance correlation coefficient, as demonstrated in this example of identifying a signature collapse in a political party's electoral vote when the Pearson coefficient suggested it had held up, which I think is used to test for how close to a 45 degree line the x-y association between paired data points is…).
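To make the Pearson/Spearman distinction concrete, here is a small pure-Python sketch: Spearman's rho is just Pearson's r applied to the rank-transformed data (no tie handling here, which is fine for race classifications where positions are unique). In R this corresponds to cor(x, y, method="pearson") versus cor(x, y, method="spearman"):

```python
def pearson(x, y):
    """Pearson product moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def ranks(x):
    """Rank positions 1..n (assumes no ties)."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0] * len(x)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    """Spearman's rho: Pearson on the rank-transformed data."""
    return pearson(ranks(x), ranks(y))

# A monotonic but non-linear relationship: Spearman reports a
# perfect monotonic association (rho ~ 1.0) while Pearson falls
# short of 1 because the relationship isn't linear.
x = [1, 2, 3, 4, 5]
y = [1, 4, 9, 16, 25]
```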
The statistical significance test is based around the "null hypothesis" that the two sets of results are not correlated; the result is significant if they are more correlated than you might expect if both are randomly ordered. This made me a little twitchy: wouldn't it be equally valid to argue that F1 is a procession and we would expect the race position and grid position to be perfectly correlated, for example, and then define our test for significance on the extent to which they are not?
This comparison of Pearson’s product moment and Spearman’s rank correlation coefficients helped me get a slightly clearer view of the notion of “test” and how both these coefficients act as tests for
particular sorts of relationship. The Pearson product moment coefficent has a high value if a strong linear relationship holds across the data pairs. The Spearman rank correlation is weaker, in that
it is simply looking to see whether or not the relationship is monotonic (that is, things all go up together, or they all go down together, but the extent to which they do so need not define a linear
relationship, which is an assumption of the Pearson test.). In addition, when defining the statistical significance of the test, this is dependent on particular assumptions about the distribution of
the data values, at least in the case of the Pearson test. The statistical significance relates to how likely the correlation value was assuming a normal distribution in the values within the paired
data series (that is, each series is assumed to represent a normally distributed set of values).
If I understand this right, it means we separate the two things out: on the one hand, we have the statistic (the correlation coefficient); on the other, we have the significance test, which tells you
how likely that result is given a set of assumptions about how the data is distributed. A question that then comes to mind is this: is the definition of the statistic dependent on a particular
distribution of the data in order for the statistic to have something interesting to say, or is it just the significance that relies on that distribution. To twist this slightly, if we can’t do a
significance test, is the statistic then essentially meaningless (because we don’t know whether those values are likely to occur whatever the relationships (or even no relationship) between the data
sets?). Hmm.. maybe a statistic is actually a measurement in the context of its significance given some sort of assumption about how likely it is to occur by chance?
As far as Spearman’s rank correlation coefficient goes, I was little bit confused by the greyed “not significant” boundary on the diagram shown above. The claim is that any correlation value in that
grey area could be accounted for in many cases by random sorting. Take two sets of driver numbers, sort them both randomly, and much of the time the correlation value will fall in that region. (I
suspect this is not only arbitrary but misleading? If you have random orderings, is the likelihood that the correlation is in the range -0.1 to 0.1 the same as the likelihood that it will be in the
range 0.2 to 0.4? Is the probability distribution of correlations “uniform” across the +/- 1 range?) Also, my hazy vague memory is that the population size affects the confidence interval (see also
Explorations in statistics: confidence intervals) – isn’t this the principle on which funnel plots are built? The caption to the figure suggests that the population size (the “degrees of freedom”)
was different for different races (different numbers of drivers). So why isn’t the shaded interval differently sized for those races?
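One way to answer the "where does the grey band come from, and shouldn't it depend on the number of drivers?" question empirically is a quick Monte Carlo: shuffle one ranking against another many times and find the value of |rho| exceeded only 5% of the time. The band does shrink as n grows; the figures in the comment below are my rough simulation estimates, not tabled critical values:

```python
import random

def spearman_rho(r1, r2):
    """Spearman's rho for two tie-free rankings of 1..n:
    rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1))."""
    n = len(r1)
    d2 = sum((a - b) ** 2 for a, b in zip(r1, r2))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

def critical_abs_rho(n, trials=20000, alpha=0.05, seed=1):
    """Monte Carlo estimate of the two-sided critical |rho| under
    the null hypothesis that the two rankings are unrelated."""
    rng = random.Random(seed)
    base = list(range(1, n + 1))
    vals = []
    for _ in range(trials):
        perm = base[:]
        rng.shuffle(perm)
        vals.append(abs(spearman_rho(base, perm)))
    vals.sort()
    return vals[int((1 - alpha) * trials) - 1]

# For a 20-car grid the 5% band sits around +/-0.45; for 10 cars it
# is wider, around +/-0.65 -- so a single grey band for every race
# is indeed a simplification.
```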
Something else confused me about the interval values used to denote the significance of the Spearman rho values – where do they come from? A little digging suggested that they come from a table (i.e.
someone worked them out numerically, presumably by generating different random rank orderings and looking at the distribution of the resulting correlations, rather than algorithmically – I couldn't find a formula to calculate
them? I did find this on Sample Size Requirements for Estimating Pearson, Kendall and Spearman Correlations by D Bonett (Psychometrika Vol. 65, No. 1, 23-28, March 2000) though). A little more
digging suggested Significance Testing of the Spearman Rank Correlation Coefficient by J Zar (Journal of the American Statistical Association, Vol. 67, No. 339 (Sep., 1972), pp. 578- 580) as a key
work on this, with some later qualification in Testing the Significance of Kendall’s τ and Spearman’s r[s] by M. Nijsse (Psychological Bulletin, 1988, Vol. 103, No. 2,235-237).
Hmmm.. this post was supposed to be about running some of the stats used in A Tale of Two Motorsports: A Graphical-Statistical Analysis of How Practice, Qualifying, and Past Success Relate to Finish Position in NASCAR and Formula One Racing on some more recent data. But I'm well over a couple of thousand words into this post and still not started that bit… So maybe I'll finish now, and
hold the actual number crunching over to the next post…
PS I find myself: happier that I (think I) understand a little bit more about the rationale of significance tests; just as sceptical as ever I was about the actual practice;-)
Input the data
Thanks to suggestions of Kevin and Barry Fittler, and the insistence of many others, I've finally made a bit of time to do 'repair work' on the calculator. Please report problems, if any.
Design frequency MHz
Number of turns (twist)
Length of one turn wavelengths
Bending radius mm
Conductor diameter mm (optimum: - mm)
Width/height ratio
Wavelength - mm
Compensated wavelength - mm
Bending correction - mm
Larger loop
Total length - mm
Vertical separator - mm
Total compensated length - mm
Compensated vertical separation - mm
Antenna height H1= - mm
Internal diameter Di1= - mm
Horizontal separator D1= - mm
Compensated horiz. separation Dc1= - mm
Smaller loop
Total length - mm
Vertical tube - mm
Compensated total length - mm
Compensated vertical tube - mm
Antenna height H2= - mm
Internal diameter Di2= - mm
Horizontal separator D2= - mm
Compensated horiz. separator Dc2= - mm
Generate a drilling template
To generate a drilling template in PDF form, please enter the following extra data:
Vertical support tube diameter: mm
Horizontal support tube diameter: mm
For more information on the construction technique corresponding to this template, visit this page.
This calculator generates a lot of data! Take care in the use of the information in order not to make mistakes...
Here's a little explanation:
What's the twist of the antenna? (normally 0.5, i.e. 180 degrees)
A few variations of the antenna exist. Normally the circumference (length of the loop) is 1 wavelength, but 1.5 wavelength and 2 wavelength versions exist.
As it's impossible to bend the corner abruptly at 90 degrees, this value is needed for the calculations. It's measured from the bending center to the center of the tube.
External diameter of the tube or coax cable.
Most frequently this ratio is 0.44, but slightly lower values (0.3 to 0.4) give better horizon coverage.
Wavelength, corresponding to the selected frequency.
Wavelength, compensated according to the conductor diameter.
Correction value needed according to the bending diameter.
Total length of the loop, before compensation.
Total length of the loop, compensated for the bending effect, and the fact that the loop must be slightly larger (or smaller). This is the amount of tubing necessary for this loop.
Vertical separation (without the 'bends').
This is in fact the horizontal part without the 'bends', and corresponds to the horizontal pipe necessary to support the cable.
Height of the loop (twisted!).
The diameter of the (imaginary) cylinder on which the loop would be wound.
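Of the quantities explained above, the uncompensated wavelength is the one that follows from a single well-known formula, so it can be cross-checked directly. The sketch below covers only that step: the name `wavelengthMM` is mine, and the conductor-diameter and bending compensations the calculator applies afterwards are not reproduced here.

```haskell
-- Free-space wavelength in millimetres for a design frequency in MHz.
-- c = 299792458 m/s, so lambda[mm] = 299792.458 / f[MHz].
wavelengthMM :: Double -> Double
wavelengthMM fMHz = 299792.458 / fMHz
```

For a 435 MHz design, `wavelengthMM 435` gives roughly 689.2 mm, which should match the uncompensated "Wavelength" field above.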
(c) John Coppens ON6JC/LW3HAZ
WyzAnt Resources
It is the mark of an educated mind to be able to entertain a thought without accepting it. (Aristotle) This quote provokes me never to accept the status quo and always challenge assumptions. It is
the thought that through education we never stop learning or seeking after truth and knowledge.
search results
Results 1 - 25 of 29
1. CJM Online first
The Ordered $K$-theory of a Full Extension
Let $\mathfrak{A}$ be a $C^{*}$-algebra with real rank zero which has the stable weak cancellation property. Let $\mathfrak{I}$ be an ideal of $\mathfrak{A}$ such that $\mathfrak{I}$ is stable and
satisfies the corona factorization property. We prove that $ 0 \to \mathfrak{I} \to \mathfrak{A} \to \mathfrak{A} / \mathfrak{I} \to 0 $ is a full extension if and only if the extension is stenotic
and $K$-lexicographic. {As an immediate application, we extend the classification result for graph $C^*$-algebras obtained by Tomforde and the first named author to the general non-unital case. In
combination with recent results by Katsura, Tomforde, West and the first author, our result may also be used to give a purely $K$-theoretical description of when an essential extension of two
simple and stable graph $C^*$-algebras is again a graph $C^*$-algebra.}
Keywords:classification, extensions, graph algebras
Categories:46L80, 46L35, 46L05
2. CJM 2013 (vol 65 pp. 783)
Generalised Triple Homomorphisms and Derivations
We introduce generalised triple homomorphism between Jordan Banach triple systems as a concept which extends the notion of generalised homomorphism between Banach algebras given by K. Jarosz and
B.E. Johnson in 1985 and 1987, respectively. We prove that every generalised triple homomorphism between JB$^*$-triples is automatically continuous. When particularised to C$^*$-algebras, we
rediscover one of the main theorems established by B.E. Johnson. We shall also consider generalised triple derivations from a Jordan Banach triple $E$ into a Jordan Banach triple $E$-module,
proving that every generalised triple derivation from a JB$^*$-triple $E$ into itself or into $E^*$ is automatically continuous.
Keywords:generalised homomorphism, generalised triple homomorphism, generalised triple derivation, Banach algebra, Jordan Banach triple, C$^*$-algebra, JB$^*$-triple
Categories:46L05, 46L70, 47B48, 17C65, 46K70, 46L40, 47B47, 47B49
3. CJM 2012 (vol 65 pp. 481)
Correction of Proofs in "Purely Infinite Simple $C^*$-algebras Arising from Free Product Constructions'' and a Subsequent Paper
The proofs of Theorem 2.2 of K. J. Dykema and M. Rørdam, Purely infinite simple $C^*$-algebras arising from free product constructions}, Canad. J. Math. 50 (1998), 323--341 and of Theorem 3.1 of
K. J. Dykema, Purely infinite simple $C^*$-algebras arising from free product constructions, II, Math. Scand. 90 (2002), 73--86 are corrected.
Keywords:C*-algebras, purely infinite
4. CJM 2012 (vol 65 pp. 52)
C$^*$-algebras Nearly Contained in Type $\mathrm{I}$ Algebras
In this paper we consider near inclusions $A\subseteq_\gamma B$ of C$^*$-algebras. We show that if $B$ is a separable type $\mathrm{I}$ C$^*$-algebra and $A$ satisfies Kadison's similarity problem,
then $A$ is also type $\mathrm{I}$ and use this to obtain an embedding of $A$ into $B$.
Keywords:C$^*$-algebras, near inclusions, perturbations, type I C$^*$-algebras, similarity problem
5. CJM 2011 (vol 64 pp. 755)
Homotopy Classification of Projections in the Corona Algebra of a Non-simple $C^*$-algebra
We study projections in the corona algebra of $C(X)\otimes K$, where $K$ is the $C^*$-algebra of compact operators on a separable infinite dimensional Hilbert space and $X=[0,1],[0,\infty),(-\infty,\infty)$, or $[0,1]/\{ 0,1 \}$. Using BDF's essential codimension, we determine conditions for a projection in the corona algebra to be liftable to a projection in the multiplier algebra. We also
determine the conditions for two projections to be equal in $K_0$, Murray-von Neumann equivalent, unitarily equivalent, or homotopic. In light of these characterizations, we construct examples
showing that the equivalence notions above are all distinct.
Keywords:essential codimension, continuous field of Hilbert spaces, Corona algebra
Categories:46L05, 46L80
6. CJM 2011 (vol 64 pp. 544)
On the Simple Inductive Limits of Splitting Interval Algebras with Dimension Drops
A K-theoretic classification is given of the simple inductive limits of finite direct sums of the type I $C^*$-algebras known as splitting interval algebras with dimension drops. (These are the
subhomogeneous $C^*$-algebras, each having spectrum a finite union of points and an open interval, and torsion $\textrm{K}_1$-group.)
Categories:46L05, 46L35
7. CJM 2011 (vol 64 pp. 573)
Fundamental Group of Simple $C^*$-algebras with Unique Trace III
We introduce the fundamental group ${\mathcal F}(A)$ of a simple $\sigma$-unital $C^*$-algebra $A$ with unique (up to scalar multiple) densely defined lower semicontinuous trace. This is a
generalization of ``Fundamental Group of Simple $C^*$-algebras with Unique Trace I and II'' by Nawata and Watatani. Our definition in this paper makes sense for stably projectionless $C^*$-algebras. We show that there exist separable stably projectionless $C^*$-algebras such that their fundamental groups are equal to $\mathbb{R}_+^\times$ by using the classification theorem of
Razak and Tsang. This is a contrast to the unital case in Nawata and Watatani. This study is motivated by the work of Kishimoto and Kumjian.
Keywords:fundamental group, Picard group, Hilbert module, countable basis, stably projectionless algebra, dimension function
Categories:46L05, 46L08, 46L35
8. CJM 2011 (vol 63 pp. 381)
A Complete Classification of AI Algebras with the Ideal Property
Let $A$ be an AI algebra; that is, $A$ is the $\mbox{C}^{*}$-algebra inductive limit of a sequence $$ A_{1}\stackrel{\phi_{1,2}}{\longrightarrow}A_{2}\stackrel{\phi_{2,3}}{\longrightarrow}A_{3}\longrightarrow\cdots\longrightarrow A_{n}\longrightarrow\cdots, $$ where $A_{n}=\bigoplus_{i=1}^{k_n}M_{[n,i]}(C(X^{i}_n))$, the $X^{i}_n$ are $[0,1]$, and $k_n$, $[n,i]$ are positive integers. Suppose
that $A$ has the ideal property: each closed two-sided ideal of $A$ is generated by the projections inside the ideal, as a closed two-sided ideal. In this article, we give a complete classification
of AI algebras with the ideal property.
Keywords:AI algebras, K-group, tracial state, ideal property, classification
Categories:46L35, 19K14, 46L05, 46L08
9. CJM 2010 (vol 62 pp. 889)
Singular Integral Operators and Essential Commutativity on the Sphere
Let ${\mathcal T}$ be the $C^\ast $-algebra generated by the Toeplitz operators $\{T_\varphi : \varphi \in L^\infty (S,d\sigma )\}$ on the Hardy space $H^2(S)$ of the unit sphere in $\mathbf{C}^n$.
It is well known that ${\mathcal T}$ is contained in the essential commutant of $\{T_\varphi : \varphi \in \operatorname{VMO}\cap L^\infty (S,d\sigma )\}$. We show that the essential commutant of $
\{T_\varphi : \varphi \in \operatorname{VMO}\cap L^\infty (S,d\sigma )\}$ is strictly larger than ${\mathcal T}$.
Categories:32A55, 46L05, 47L80
10. CJM 2009 (vol 61 pp. 1239)
Periodicity in Rank 2 Graph Algebras
Kumjian and Pask introduced an aperiodicity condition for higher rank graphs. We present a detailed analysis of when this occurs in certain rank 2 graphs. When the algebra is aperiodic, we give
another proof of the simplicity of $\mathrm{C}^*(\mathbb{F}^+_{\theta})$. The periodic $\mathrm{C}^*$-algebras are characterized, and it is shown that $\mathrm{C}^*(\mathbb{F}^+_{\theta}) \simeq \mathrm{C}(\mathbb{T})\otimes\mathfrak{A}$ where $\mathfrak{A}$ is a simple $\mathrm{C}^*$-algebra.
Keywords:higher rank graph, aperiodicity condition, simple $\mathrm{C}^*$-algebra, expectation
Categories:47L55, 47L30, 47L75, 46L05
11. CJM 2008 (vol 60 pp. 975)
An AF Algebra Associated with the Farey Tessellation
We associate with the Farey tessellation of the upper half-plane an AF algebra $\AA$ encoding the ``cutting sequences'' that define vertical geodesics. The Effros--Shen AF algebras arise as
quotients of $\AA$. Using the path algebra model for AF algebras we construct, for each $\tau \in \big(0,\frac{1}{4}\big]$, projections $(E_n)$ in $\AA$ such that $E_n E_{n\pm 1}E_n \leq \tau E_n$.
Categories:46L05, 11A55, 11B57, 46L55, 37E05, 82B20
12. CJM 2007 (vol 59 pp. 343)
Weak Semiprojectivity in Purely Infinite Simple $C^*$-Algebras
Let $A$ be a separable amenable purely infinite simple \CA which satisfies the Universal Coefficient Theorem. We prove that $A$ is weakly semiprojective if and only if $K_i(A)$ is a countable
direct sum of finitely generated groups ($i=0,1$). Therefore, if $A$ is such a \CA, for any $\ep>0$ and any finite subset ${\mathcal F}\subset A$ there exist $\dt>0$ and a finite subset ${\mathcal
G}\subset A$ satisfying the following: for any contractive positive linear map $L: A\to B$ (for any \CA $B$) with $ \|L(ab)-L(a)L(b)\|<\dt$ for $a, b\in {\mathcal G}$ there exists a homomorphism $h
\from A\to B$ such that $ \|h(a)-L(a)\|<\ep$ for $a\in {\mathcal F}$.
Keywords:weakly semiprojective, purely infinite simple $C^*$-algebras
Categories:46L05, 46L80
13. CJM 2006 (vol 58 pp. 1268)
Gauge-Invariant Ideals in the $C^*$-Algebras of Finitely Aligned Higher-Rank Graphs
We produce a complete description of the lattice of gauge-invariant ideals in $C^*(\Lambda)$ for a finitely aligned $k$-graph $\Lambda$. We provide a condition on $\Lambda$ under which every ideal
is gauge-invariant. We give conditions on $\Lambda$ under which $C^*(\Lambda)$ satisfies the hypotheses of the Kirchberg--Phillips classification theorem.
Keywords:Graphs as categories, graph algebra, $C^*$-algebra
14. CJM 2006 (vol 58 pp. 1144)
Partial $*$-Automorphisms, Normalizers, and Submodules in Monotone Complete $C^*$-Algebras
For monotone complete $C^*$-algebras $A\subset B$ with $A$ contained in $B$ as a monotone closed $C^*$-subalgebra, the relation $X = AsA$ gives a bijection between the set of all monotone closed
linear subspaces $X$ of $B$ such that $AX + XA \subset X$ and $XX^* + X^*X \subset A$ and a set of certain partial isometries $s$ in the ``normalizer'' of $A$ in $B$, and similarly for the map $s \mapsto \Ad s$ between the latter set and a set of certain ``partial $*$-automorphisms'' of $A$. We introduce natural inverse semigroup structures in the set of such $X$'s and the set of partial
$*$-automorphisms of $A$, modulo a certain relation, so that the composition of these maps induces an inverse semigroup homomorphism between them. For a large enough $B$ the homomorphism becomes
surjective and all the partial $*$-automorphisms of $A$ are realized via partial isometries in $B$. In particular, the inverse semigroup associated with a type ${\rm II}_1$ von Neumann factor,
modulo the outer automorphism group, can be viewed as the fundamental group of the factor. We also consider the $C^*$-algebra version of these results.
Categories:46L05, 46L08, 46L40, 20M18
15. CJM 2005 (vol 57 pp. 983)
A Symmetric Imprimitivity Theorem for Commuting Proper Actions
We prove a symmetric imprimitivity theorem for commuting proper actions of locally compact groups $H$ and $K$ on a $C^*$-algebra.
Categories:46L05, 46L08, 46L55
16. CJM 2005 (vol 57 pp. 351)
Extensions by Simple $C^*$-Algebras: Quasidiagonal Extensions
Let $A$ be an amenable separable $C^*$-algebra and $B$ be a non-unital but $\sigma$-unital simple $C^*$-algebra with continuous scale. We show that two essential extensions $\tau_1$ and $\tau_2$ of
$A$ by $B$ are approximately unitarily equivalent if and only if $$ [\tau_1]=[\tau_2] \text{ in } KL(A, M(B)/B). $$ If $A$ is assumed to satisfy the Universal Coefficient Theorem, there is a
bijection from approximate unitary equivalence classes of the above mentioned extensions to $KL(A, M(B)/B)$. Using $KL(A, M(B)/B)$, we compute exactly when an essential extension is quasidiagonal.
We show that quasidiagonal extensions may not be approximately trivial. We also study the approximately trivial extensions.
Keywords:Extensions, Simple $C^*$-algebras
Categories:46L05, 46L35
17. CJM 2005 (vol 57 pp. 17)
On Amenability and Co-Amenability of Algebraic Quantum Groups and Their Corepresentations
We introduce and study several notions of amenability for unitary corepresentations and $*$-representations of algebraic quantum groups, which may be used to characterize amenability and
co-amenability for such quantum groups. As a background for this study, we investigate the associated tensor C$^{*}$-categories.
Keywords:quantum group, amenability
Categories:46L05, 46L65, 22D10, 22D25, 43A07, 43A65, 58B32
18. CJM 2004 (vol 56 pp. 3)
Locally Compact Pro-$C^*$-Algebras
Let $X$ be a locally compact non-compact Hausdorff topological space. Consider the algebras $C(X)$, $C_b(X)$, $C_0(X)$, and $C_{00}(X)$ of respectively arbitrary, bounded, vanishing at infinity,
and compactly supported continuous functions on $X$. Of these, the second and third are $C^*$-algebras, the fourth is a normed algebra, whereas the first is only a topological algebra (it is indeed
a pro-$C^\ast$-algebra). The interesting fact about these algebras is that if one of them is given, the others can be obtained using functional analysis tools. For instance, given the $C^\ast$-algebra $C_0(X)$, one can get the other three algebras by $C_{00}(X)=K\bigl(C_0(X)\bigr)$, $C_b(X)=M\bigl(C_0(X)\bigr)$, $C(X)=\Gamma\bigl( K(C_0(X))\bigr)$, where the right hand sides are the
Pedersen ideal, the multiplier algebra, and the unbounded multiplier algebra of the Pedersen ideal of $C_0(X)$, respectively. In this article we consider the possibility of these transitions for
general $C^\ast$-algebras. The difficult part is to start with a pro-$C^\ast$-algebra $A$ and to construct a $C^\ast$-algebra $A_0$ such that $A=\Gamma\bigl(K(A_0)\bigr)$. The pro-$C^\ast$-algebras
for which this is possible are called {\it locally compact\/} and we have characterized them using a concept similar to that of an approximate identity.
Keywords:pro-$C^\ast$-algebras, projective limit, multipliers of Pedersen's ideal
Categories:46L05, 46M40
19. CJM 2003 (vol 55 pp. 1302)
The Ideal Structures of Crossed Products of Cuntz Algebras by Quasi-Free Actions of Abelian Groups
We completely determine the ideal structures of the crossed products of Cuntz algebras by quasi-free actions of abelian groups and give another proof of A.~Kishimoto's result on the simplicity of
such crossed products. We also give a necessary and sufficient condition that our algebras become primitive, and compute the Connes spectra and $K$-groups of our algebras.
Categories:46L05, 46L55, 46L45
20. CJM 2002 (vol 54 pp. 694)
Cuntz Algebra States Defined by Implementers of Endomorphisms of the $\CAR$ Algebra
We investigate representations of the Cuntz algebra $\mathcal{O}_2$ on antisymmetric Fock space $F_a (\mathcal{K}_1)$ defined by isometric implementers of certain quasi-free endomorphisms of the
CAR algebra in pure quasi-free states $\varphi_{P_1}$. We pay special attention to the vector states on $\mathcal{O}_2$ corresponding to these representations and the Fock vacuum, for which we obtain explicit formulae. Restricting these states to the gauge-invariant subalgebra $\mathcal{F}_2$, we find that for natural choices of implementers, they are again pure quasi-free and are, in
fact, essentially the states $\varphi_{P_1}$. We proceed to consider the case for an arbitrary pair of implementers, and deduce that these Cuntz algebra representations are irreducible, as are
their restrictions to $\mathcal{F}_2$. The endomorphisms of $B \bigl( F_a (\mathcal{K}_1) \bigr)$ associated with these representations of $\mathcal{O}_2$ are also considered.
Categories:46L05, 46L30
21. CJM 2002 (vol 54 pp. 138)
On the Classification of Simple Stably Projectionless $\C^*$-Algebras
It is shown that simple stably projectionless $C^*$-algebras which are inductive limits of certain specified building blocks with trivial $K$-theory are classified by their cone of positive
traces with distinguished subset. This is the first example of an isomorphism theorem verifying the conjecture of Elliott for a subclass of the stably projectionless algebras.
Categories:46L35, 46L05
22. CJM 2001 (vol 53 pp. 1223)
Classification of Certain Simple $C^*$-Algebras with Torsion in $K_1$
We show that the Elliott invariant is a classifying invariant for the class of $C^*$-algebras that are simple unital infinite dimensional inductive limits of finite direct sums of building blocks
of the form $$ \{f \in C(\T) \otimes M_n : f(x_i) \in M_{d_i}, i = 1,2,\dots,N\}, $$ where $x_1,x_2,\dots,x_N \in \T$, $d_1,d_2,\dots,d_N$ are integers dividing $n$, and $M_{d_i}$ is embedded
unitally into $M_n$. Furthermore we prove existence and uniqueness theorems for $*$-homomorphisms between such algebras and we identify the range of the invariant.
Categories:46L80, 19K14, 46L05
23. CJM 2001 (vol 53 pp. 979)
Ranks of Algebras of Continuous $C^*$-Algebra Valued Functions
We prove a number of results about the stable and particularly the real ranks of tensor products of \ca s under the assumption that one of the factors is commutative. In particular, we prove the
following: {\raggedright \begin{enumerate}[(5)] \item[(1)] If $X$ is any locally compact $\sm$-compact Hausdorff space and $A$ is any \ca, then\break $\RR \bigl( C_0 (X) \otimes A \bigr) \leq \dim
(X) + \RR(A)$. \item[(2)] If $X$ is any locally compact Hausdorff space and $A$ is any \pisca, then $\RR \bigl( C_0 (X) \otimes A \bigr) \leq 1$. \item[(3)] $\RR \bigl( C ([0,1]) \otimes A \bigr) \geq 1$ for any nonzero \ca\ $A$, and $\sr \bigl( C ([0,1]^2) \otimes A \bigr) \geq 2$ for any unital \ca\ $A$. \item[(4)] If $A$ is a unital \ca\ such that $\RR(A) = 0$, $\sr (A) = 1$, and $K_1 (A)
= 0$, then\break $\sr \bigl( C ([0,1]) \otimes A \bigr) = 1$. \item[(5)] There is a simple separable unital nuclear \ca\ $A$ such that $\RR(A) = 1$ and\break $\sr \bigl( C ([0,1]) \otimes A \bigr)
= 1$. \end{enumerate}}
Categories:46L05, 46L52, 46L80, 19A13, 19B10
24. CJM 2001 (vol 53 pp. 592)
Ideal Structure of Multiplier Algebras of Simple $C^*$-algebras With Real Rank Zero
We give a description of the monoid of Murray-von Neumann equivalence classes of projections for multiplier algebras of a wide class of $\sigma$-unital simple $C^\ast$-algebras $A$ with real rank
zero and stable rank one. The lattice of ideals of this monoid, which is known to be crucial for understanding the ideal structure of the multiplier algebra $\mul$, is therefore analyzed. In
important cases it is shown that, if $A$ has finite scale then the quotient of $\mul$ modulo any closed ideal $I$ that properly contains $A$ has stable rank one. The intricacy of the ideal
structure of $\mul$ is reflected in the fact that $\mul$ can have uncountably many different quotients, each one having uncountably many closed ideals forming a chain with respect to inclusion.
Keywords:$C^\ast$-algebra, multiplier algebra, real rank zero, stable rank, refinement monoid
Categories:46L05, 46L80, 06F05
25. CJM 2001 (vol 53 pp. 161)
Classification of Simple Tracially AF $C^*$-Algebras
We prove that pre-classifiable (see 3.1) simple nuclear tracially AF \CA s (TAF) are classified by their $K$-theory. As a consequence all simple, locally AH and TAF \CA s are in fact AH algebras
(it is known that there are locally AH algebras that are not AH). We also prove the following Rationalization Theorem. Let $A$ and $B$ be two unital separable nuclear simple TAF \CA s with unique
normalized traces satisfying the Universal Coefficient Theorem. If $A$ and $B$ have the same (ordered and scaled) $K$-theory and $K_0 (A)_+$ is locally finitely generated, then $A \otimes Q \cong B
\otimes Q$, where $Q$ is the UHF-algebra with the rational $K_0$. Classification results (with restriction on $K_0$-theory) for the above \CA s are also obtained. For example, we show that, if $A$
and $B$ are unital nuclear separable simple TAF \CA s with the unique normalized trace satisfying the UCT and with $K_1(A) = K_1(B)$, and $A$ and $B$ have the same rational (scaled ordered) $K_0$,
then $A \cong B$. Similar results are also obtained for some cases in which $K_0$ is non-divisible such as $K_0(A) = \mathbf{Z} [1/2]$.
Categories:46L05, 46L35
A classy approach to parser combinators
Parser combinators
Parser combinators serve as a great introduction to functional programming, and are one of the most-studied topics in the field. At the same time, they are a broad and deep topic, touching on
concepts such as non-determinism, monads, and higher-order functions.
What people may not get as much exposure to, at least in my experience, is many of the Haskell type classes and their relationship to parsers. As we'll see in this article, the most common and useful
combinators are actually parser-specific versions of more widely useful generic operations.
Type classes
The definitions of the type classes used are based on both the standard Haskell classes of the same name (minus the '), and Brent Yorgey's Typeclassopedia. I've added a ' to the end of each of their
names to indicate that they're not identical to the standard Haskell classes, and in some cases are quite different. I've also added one type class of my own -- Switch' -- which represents the
ability to convert a failing computation into a successful one with a default value, and to convert a successful computation into a failing one. I wasn't able to find a type class providing this
functionality on Hoogle.
Parser definition and basic combinators
Following convention, parsers are modeled as functions that operate on token streams, either producing a result paired with the rest of the token stream, or failing. A convenient choice for
representing possible failure is the Maybe data type.
newtype Parser t a = Parser {
    getParser :: [t] -> Maybe ([t], a)
  }

run :: Parser t a -> [t] -> Maybe ([t], a)
run = getParser
In addition, we'll use these basic parsers repeatedly throughout the examples to build bigger and more exotic parsers:
-- succeeds, consuming one token, as
-- long as input is not empty
getOne :: Parser s s
getOne = Parser (\xs -> case xs of
    (y:ys) -> pure (ys, y)
    _      -> empty)
-- runs the parser, and if it succeeds,
-- checks that its result satisfies a predicate
check :: (a -> Bool) -> Parser s a -> Parser s a
check f p = p >>= \x ->
            guard (f x) >>
            pure x
-- consumes one token if the token
-- satisfies a predicate
satisfy :: (a -> Bool) -> Parser a a
satisfy p = check p getOne
-- builds a parser that only
-- matches the given token
literal :: Eq a => a -> Parser a a
literal tok = satisfy (== tok)
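To see these building blocks in action before moving on, here is a self-contained sketch that re-implements them against the standard Prelude (using Maybe's Monad and 'guard' directly, rather than the primed classes defined later in this article):

```haskell
import Control.Monad (guard)

newtype Parser t a = Parser { run :: [t] -> Maybe ([t], a) }

-- consume one token, failing on empty input
getOne :: Parser s s
getOne = Parser $ \xs -> case xs of
  (y:ys) -> Just (ys, y)
  _      -> Nothing

-- consume one token only if it satisfies a predicate
satisfy :: (a -> Bool) -> Parser a a
satisfy p = Parser $ \xs -> do
  (ys, y) <- run getOne xs
  guard (p y)
  Just (ys, y)

-- match exactly the given token
literal :: Eq a => a -> Parser a a
literal tok = satisfy (== tok)
```

In GHCi, `run (literal 'a') "abc"` yields `Just ("bc",'a')`, while `run (literal 'a') "xbc"` yields `Nothing`.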
Alternation and failure
Alternation and failure are covered by the semigroup and monoid classes, respectively. Semigroups are characterized by an associative, binary, closed operation.
The parser interpretation of semigroups is choice: given two parsers, use the first one if it succeeds, but use the second one if the first fails.
class Semigroup' a where
  (<|>) :: a -> a -> a

instance Semigroup' (Parser s a) where
  Parser f <|> Parser g = Parser (\xs -> f xs <|> g xs)
This implementation exploits the fact that the Maybe datatype can also form a left-biased semigroup.
Monoids are semigroups whose binary operation has an identity element; for parsers, this means that applying the choice operator to any parser plus the identity parser will always return the result
of the first parser, regardless of whether it fails or succeeds. The identity parser always ignores its input and fails:
class Semigroup' a => Monoid' a where
  empty :: a

instance Monoid' (Parser s a) where
  empty = Parser (const Nothing)
Here are some examples:
-- combining two parsers with choice: succeeds if either parser succeeds
a :: Parser Char Char
a = literal 'a'
b :: Parser Char Char
b = literal 'b'
ab :: Parser Char Char
ab = a <|> b
$ run ab "babcd"
Just ("abcd",'b')
$ run ab "abcd"
Just ("bcd",'a')
-- the empty parser always fails
fail :: Parser Char Char
fail = empty
$ run fail "abcd"
Nothing
-- the empty parser is both a right and a left identity
a_ :: Parser Char Char
a_ = a <|> fail
_a :: Parser Char Char
_a = fail <|> a
$ run a_ "abcd"
Just ("bcd",'a')
$ run a_ "babcd"
Nothing
$ run _a "abcd"
Just ("bcd",'a')
$ run _a "babcd"
Nothing
We're not limited to combining two parsers at a time, of course; there is also the 'mconcat' combinator:
mconcat :: Monoid' a => [a] -> a
mconcat = foldr (<|>) empty
$ run (mconcat []) "abcde"
Nothing
digits :: [Parser Char Char]
digits = map literal ['0' .. '9']
$ run (mconcat digits) "4hi!!"
Just ("hi!!", '4')
'mconcat' combines a list of monoids using the binary operation, and the identity element as the base case. This means that using 'mconcat' on an empty list will generate a parser that always fails.
Similarly to the parser that always fails, we have a parser that always succeeds. This is captured by the pointed class, which is the 'pure' part of the Applicative class in the standard Haskell
libraries. This class allows you to lift a value into a context; for parsers, we build a parser that always succeeds, with the specified value as its result, and consuming zero tokens.
class Pointed' f where
  pure :: a -> f a

instance Pointed' (Parser s) where
  pure a = Parser (\xs -> Just (xs, a))
pass :: Parser Integer String
pass = pure "Hello, world!"
$ run pass []
Just ([],"Hello, world!")
$ run pass [1,100,31]
Just ([1,100,31],"Hello, world!")
The parser 'pass' always succeeds, even with empty input; it simply returns its input token stream along with its value.
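One useful idiom that falls out of combining choice with the always-succeeding parser is an "option with default": try a parser, and if it fails, succeed with a fallback value while consuming no input. The sketch below is self-contained; 'option' is a conventional name borrowed from libraries like Parsec, not one this article defines, and the article's '(<|>)' and 'pure' are renamed ('orElse' and 'yield') only to avoid clashing with the Prelude:

```haskell
import Control.Applicative ((<|>))

newtype Parser t a = Parser { run :: [t] -> Maybe ([t], a) }

-- the article's (<|>): left-biased choice via Maybe's (<|>)
orElse :: Parser t a -> Parser t a -> Parser t a
orElse (Parser f) (Parser g) = Parser (\xs -> f xs <|> g xs)

-- the article's 'pure': succeed with a value, consuming nothing
yield :: a -> Parser t a
yield a = Parser (\xs -> Just (xs, a))

-- try p; on failure, succeed with the default d
option :: a -> Parser t a -> Parser t a
option d p = p `orElse` yield d

-- match exactly the given token
lit :: Eq t => t -> Parser t t
lit t = Parser $ \xs -> case xs of
  (y:ys) | y == t -> Just (ys, y)
  _               -> Nothing
```

For example, `run (option 'z' (lit 'a')) "qrs"` succeeds with `Just ("qrs",'z')` even though `lit 'a'` fails there.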
Mapping and sequencing
It's also useful to have access to a parser's value for further processing; a common use case is building up a parse tree. This concept is captured by the Functor class, which lifts a normal function
to a function that operates on the result value of a parser. The parser interpretation is that, given a function and a parser, if the parser succeeds, map the function over its results; whereas if
the parser fails, just propagate the failure.
class Functor' f where
  fmap :: (a -> b) -> f a -> f b

instance Functor' (Parser s) where
  -- one 'fmap' for the Maybe, one for the tuple
  fmap f (Parser g) = Parser (fmap (fmap f) . g)
The Applicative class enables not just lifting, but application in which both the function and its arguments are in contexts. It allows parsers to be run in sequence, where the first parser is run,
and if it fails, the whole chain fails; if it succeeds, the rest of the token stream is passed to the next parser and its result is collected, and so on. This implementation makes use of the Monad
instance of Maybe, although it could also be implemented without such an assumption.
class Functor' f => Applicative' f where
  (<*>) :: f (a -> b) -> f a -> f b

instance Applicative' (Parser s) where
  Parser f <*> Parser x = Parser h
    where
      h xs = f xs >>= \(ys, f') ->
             x ys >>= \(zs, x') ->
             Just (zs, f' x')
Here are some examples:
one :: Parser Char Char
one = literal '1'
oneInt :: Parser Char Int
oneInt = fmap (\x -> (read :: String -> Int) [x .. '9']) one
$ run oneInt "123"
Just ("23",123456789)
two :: Parser Char Char
two = literal '2'
twelve :: Parser Char (Char, Char)
twelve = pure (,) <*> one <*> two
$ run twelve "123"
Just ("3",('1','2'))
$ run twelve "1123"
Nothing
The first example shows a Char parser ('one') that is converted into an Int parser using 'fmap' and a function of type 'Char -> Int'. The second example applies the '(,)' function within an
Applicative parser context, tupling the results of the parsers 'one' and 'two'. The third example shows that parsers run in sequence must all succeed for the entire match to succeed; although the '1'
is matched, the '2' cannot be.
The power of Applicative parsers can also be harnessed to create parsers that ignore the results (but not the effects!) of some or all of their parsers:
(*>) :: Parser t a -> Parser t b -> Parser t b
l *> r = fmap (flip const) l <*> r
(<*) :: Parser t a -> Parser t b -> Parser t a
l <* r = fmap const l <*> r
Both '(*>)' and '(<*)' will only succeed if both of their arguments succeed in sequence; the difference is that '(*>)' only returns the result of the 2nd parser, while '(<*)' only returns the result
of the 1st parser. Examples, using the 'one' and 'two' parsers defined above:
$ run (two *> one) "212345"
Just ("2345",'1')
$ run (two <* one) "212345"
Just ("2345",'2')
Combining Applicatives with Semigroups, we can create repeating parsers:
many :: Parser t a -> Parser t [a]
many p = some p <|> pure []
some :: Parser t a -> Parser t [a]
some p = fmap (:) p <*> many p
(note that 'some' and 'many' are mutually recursive). 'many' tries to run its parser as many times as possible, progressively chewing up input; it always succeeds since it's fine with matching 0
times. On the other hand, 'some' matches its parser at least once, failing if it can't match it at all, but other than that is identical to 'many'. Examples (using 'one' from above):
$ run (fmap length $ many one) "111111234"
Just ("234",6)
$ run (many one) "23434593475dkljdfs"
Just ("23434593475dkljdfs","")
$ run (fmap length $ some one) "111111234"
Just ("234",6)
$ run (some one) "23434593475dkljdfs"
Nothing
Oftentimes, parsing conditions are easier to state in the negative than in the positive. For instance, if you were parsing a string, you might look for a double-quote character to open the string,
and another double-quote to end the string. Meanwhile, anything that's *not* a double-quote which comes after the opening will be part of the string. To capture this pattern, I created the 'Switch' class:
class Switch' f where
  switch :: f a -> f ()

instance Switch' (Parser s) where
  switch (Parser f) = Parser h
    where h xs = fmap (const (xs, ())) $ switch (f xs)
This converts a failing parser to a successful one and vice versa. Importantly, it consumes no input from the token stream -- it acts as a negative lookahead parser, which allows us to build flexible
parsers on top of it. Examples:
not1 :: Parser t b -> Parser t t
not1 p = switch p *> getOne
dq :: Parser Char Char
dq = literal '"'
not_dq :: Parser Char Char
not_dq = not1 dq
dq_string :: Parser Char String
dq_string = dq *> many not_dq <* dq
$ run dq_string "\"no ending double-quote"
Nothing
$ run dq_string "\"I'm a string\"abcxyz"
Just ("abcxyz","I'm a string")
The 'not1' combinator takes a parser as input, runs that parser, and if it succeeds, 'not1' fails; if that parser fails, 'not1' then tries to consume a single token (any token). In other words, it's
like saying "I want anything but what this parser matches."
The 'not_dq' parser matches any character that's not a double-quote; the string parser matches a double-quote followed by any number of non-double-quotes, followed by another double-quote; it throws
away the results of both double-quote parsers, only returning the body of the string.
Running many parsers in sequence
Traversable is an interesting type class. It allows you to 'commute' two functors; i.e. if you have '[Maybe Int]', it allows you to create 'Maybe [Int]' (that is, turn a list of 'Maybe Int's into a
'Maybe' list of Ints). This is also useful for parsing, where it allows one to convert a list of parsers into a (single) parser of lists. In this case, we don't need to supply an instance for 'Parser'
because the Functor in question is lists:
class Functor' t => Traversable' t where
commute :: (Pointed' f, Applicative' f) => t (f a) -> f (t a)
Here are some examples (using 'digits' from above):
six_fours :: [Parser Char Char]
six_fours = replicate 6 (literal '4')
$ run (commute digits) "0123456789abcxyz"
Just ("abcxyz","0123456789")
$ run (commute six_fours) "4444449999999"
Just ("9999999","444444")
$ run (commute six_fours) "44444 oops that was only 5 fours"
Nothing
What parsing article could be complete without mentioning monads? Monads are similar to applicatives, but add the extra ability to have computations depend on the result of previous computations.
Here's the class definition and parser implementation:
class (Applicative' m, Pointed' m) => Monad' m where
join :: m (m a) -> m a
instance Monad' (Parser s) where
join (Parser f) = Parser h
where h xs = f xs >>= \(o, Parser g) -> g o
A good example of putting this extra power to work is this combinator:
twice :: Eq a => Parser a a -> Parser a a
twice p = p >>= \x ->
literal x
It runs its input parser, and if it succeeds, attempts to match the *same* output a second time. Thus, the second match depends on the results of the first. We can't build such a parser using
applicatives (although we can build less general versions by enumerating multiple cases). Here's an example showing how it's different from an Applicative version, using the 'ab' parser from earlier:
ab_twice :: Parser Char Char
ab_twice = twice ab
-- using monads
$ run ab_twice "aa123"
Just ("123",'a')
$ run ab_twice "ab123"
Nothing
-- using applicatives
$ run (pure (,) <*> ab <*> ab) "aa123"
Just ("123",('a','a'))
$ run (pure (,) <*> ab <*> ab) "ab123"
Just ("123",('a','b'))
In the first example, which uses monadic parsing, 'ab_twice' parses the first input and fails on the second. However, the second example -- with applicatives -- successfully parses both inputs. It
sees the two parsers as being totally independent of each other and thus isn't able to require that the second one match the same tokens as the first one.
Relationship to BNF grammars, regular expressions, etc.
Of course, all of these useful parsing combinators have also been applied in other parsing approaches, such as grammars and regular expressions. Here's a quick correspondence:
BNF/regex combinators
| <|> of semigroups
sequencing <*> of applicatives
* many
+ some
grouping always explicitly grouped
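To make the correspondence concrete, here is a small Python sketch (my own illustration, not from the original article) showing the regex equivalent of the 'dq_string' parser built above: the pipeline dq *> many not_dq <* dq matches the same strings as the regex `"[^"]*"`.

```python
import re

# Regex analogue of: dq_string = dq *> many not_dq <* dq
#   literal '"'      -> "
#   many not_dq      -> [^"]*   ('many' is the Kleene star; 'not1' is a negated class)
#   sequencing       -> juxtaposition
dq_string = re.compile(r'"([^"]*)"')

def run(pattern, s):
    """Mimic the article's 'run': return (rest, result) on success, else None."""
    m = pattern.match(s)
    if m is None:
        return None
    return (s[m.end():], m.group(1))

print(run(dq_string, '"I\'m a string"abcxyz'))   # ('abcxyz', "I'm a string")
print(run(dq_string, '"no ending double-quote')) # None
```

The capture group plays the role of throwing away the delimiters and returning only the body of the string.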
What's next & further reading
There are a few topics that weren't covered in this article. First and foremost, good error detection and reporting is a key component of a parser library that's friendly and easy to use. Second,
although I chose to use the Maybe data type to model the results, this could be extended to use any arbitrary monad -- resulting in a much richer set of parsers. Two examples are the list monad, to
allow non-deterministic parses, and the state monad, to allow context-dependent parses.
If you're interested in learning more about parsing, Philip Wadler, Graham Hutton, and Doaitse Swierstra have published some excellent papers over the years on the topic; reading their papers was
what really helped me to understand parsing. And of course there's also the powerful Parsec tool, a Haskell-based library for parser combinators which illustrates these ideas in a practical context.
Items where Division is "Faculty of Physical Sciences and Engineering > Electronics and Computer Science" and Year is 1981
Number of items: 24.
Bowler, K C, Corvi, P J, Hey, A J G, Jarvis, P D and King, R C (1981) The Role of Sp (12,R) in Harmonic Oscillator Quark Model Calculations. UNSPECIFIED
Evci, C C, Steele, R and Xydeas, C S (1981) DPCM-AQF using second-order adaptive predictors for speech signals. IEEE Transactions on ASSP, 29, (3), 337-341.
Gatenby, P V, Hawkins, K C and Rutt, H N (1981) Line narrowed TEA CO2 laser for optical pumping of molecular gas lasers. Journal Of Physics E, Scientific Instruments, 14, 56-57.
Hartel, P. H. (1981) Pascal for systems programmes. Proc. 32nd Conf. European control data users (ECODU 32) Control Data corp., 2.54-2.61.
Haskell, B G and Steele, R (1981) Audio and video bit-rate reductions. Proceedings of IEEE, 69, (2), 252-262.
Haskell, B G and Steele, R (1981) Speech and video bit-reduction. IEEE Conference on Digital Processing of Signals in Communications, 395-411.
Henderson, P. (1981) System Design, State of the Art Report, Pergamon-Infotech Ltd.
Henderson, P. and Gimson, R. B. (1981) Modularisation of Large Programs. Software - Practice and Experience, 11, (5), 497-520.
Hey, A J G, Chung, S U and Lindenbaum, S J (eds.) (1981) Theories of Baryonium, Exotics and Multiquark States.
Hughes, J.F. (1981) Electrostatics and Applications to Industrial Processes. Proceedings of the IEE, 128 Pa, (4), 244-56.
Hughes, J.F. (1981) Industrial Hazards of Electrostatics. Physics in Technology, 12, (1), 10-6.
Hughes, J.F., Lees, P., McAllister, D., Smith, J.R. and Briton, L.G. (1981) An Experimental and Computational Investigation of Electrostatic Fields in Plastic Tanks. Journal of Electrostatics, 10,
Hughes, J.F. and Pavey, I.D. (1981) Electrostatic Emulsification. Journal of Electrostatics, 10, 45-55.
Hughes, J.F., Pavey, I.D. and Arimoto, M. (1981) An Experimental Electrostatic Emulsifier. In, UNSPECIFIED , IEEE/IAS, 1031-5.
Mohar, B., Pisanski, T. and Shawe-Taylor, J. (1981) Edge-colouring of Composite Regular Graphs. Colloquia Mathematica Societatis János Bolyai, 37, 591-600.
Nunn, D (1981) Performance Investigations of a Time Domain Adaptive Array Processor in a Broadband Environment. Southampton University Report. Report on RAE Research Contract. UNSPECIFIED , 380.
Patrick, P J, Steele, R and Xydeas, C S (1981) Voiced/unvoiced band switching system for transmission of 6 KHz speech over 3.4 KHz telephone channels. The Radio and Electronic Engineer, 51, (5),
Patrick, P J, Xydeas, C S, Steele, R and Cham, W K (1981) Wideband quality speech encoders with bit-rates of 16-32 Kb/s. Proceeding of IEEE ICASSP'81 , 844-847.
Pisanski, T. and Shawe-Taylor, J. (1981) Search for Minimal Trivalent Cycle Permutation Graphs with Girth Nine. Discrete Math., 36, 113-115.
Pisanski, Tomaž and Shawe-Taylor, John (1981) Cycle Permutation Graphs with Large Girth. In, Temperley, H N V (ed.) Abstracts of the Eighth British Combinatorial Conference, Swansea, Cambridge
University Press.
Steele, R, Evci, C C and Xydeas, C S (1981) Sequential adaptive predictors for ACPCM speech encoders. Proceedings of NTC'81 , E8.1.1-E8.1.5.
Steele, R and Vitello, D (1981) Simultaneous transmission of speech and data using code-breaking techniques. BSTJ, 60, (9), 2081-2105.
Wong, W C and Steele, R (1981) Adaptive discrete cosine transformation of pictures using an energy distribution logarithmic model. IEEE Conference on Digital Processing of Signals in Communications,
Wong, W C and Steele, R (1981) Adaptive discrete cosine transformation of pictures using an energy distribution logarithmic model. The Radio and Electronic Engineer, 51, (11/12), 571-578.
Scientific NotationAlgebraLAB: Lessons
Sometimes on the news you will hear about the federal deficit being close to 34 trillion dollars. But what does that number look like? To write out 34 trillion would look like 34,000,000,000,000.
That’s a lot of zeros. It’s also a very large and inconvenient number to write, unless we use scientific notation.
Scientific notation is a way of using exponents and powers of 10 to write very large or very small numbers. In general, a number written in scientific notation looks like a x 10^b, where:
□ The base number, a, must be a number one or larger but less than 10 (that is, 1 ≤ a < 10).
□ The exponent, b, can be either positive or negative.
The most common problems we see with scientific notation involve changing from standard notation to scientific notation and vice versa. Let’s look at these types of problems before moving on to operations with scientific notation.
We’ll start by writing the deficit amount in scientific notation.
Let's Practice:
i. Write 34,000,000,000,000 in scientific notation.
For a number to be written in scientific notation, we need a number between 1 and 10 and a power of 10. To determine the number between 1 and 10, look at the original number and determine where
you can place a decimal point to create a number between 1 and 10. For this problem, we can place the decimal point between 3 and 4 to get 3.4 which is between 1 and 10.
But what about all those zeros?
If the decimal point is between the 3 and 4, how many places would it need to be moved to get to the end of the number? If we move the decimal 13 places to the right, we end up back where we
started. The 13 then becomes our exponent. So 34,000,000,000,000 written in scientific notation would be 3.4 x 10^13. Notice that the exponent is positive because in determining our exponent we
moved the decimal 13 places to the right to get back to our original number.
ii. Write 0.000000000294 in scientific notation.
Once again we want to begin by creating a number between 1 and 10 which would be 2.94. We would need to move the decimal point 10 places to the left to get it back to where it originally started.
So 0.000000000294 written in scientific notation is 2.94 x 10^-10. In this case, we used a negative exponent because we would need to move the decimal point 10 places to the left to get back to
the original number.
iii. Write 2.7 x 10^15 in expanded notation.
In this case, we start with 2.7 and move the decimal point 15 places to the right (because the exponent is positive). You may be wondering how we can move the decimal point 15 spaces when after
one space we are at the end of our number. To keep moving the decimal point, keep adding zeros.
Once we know how to go back and forth between standard notation and scientific notation, it is very common to perform operations on numbers that are written in scientific notation. Doing this
involves making use of the rules of exponents.
To find the answer to this problem, we can multiply 9.3 and 6.2 and get 57.66.
We can also multiply the two powers of 10, since powers of 10 multiply by adding their exponents: 10^a x 10^b = 10^(a+b).
So putting these two pieces together we get 57.66 times the combined power of 10. But remember that to be in scientific notation, the number must be between 1 and 10. Clearly 57.66 is not between
1 and 10. But if we move the decimal so that the number becomes 5.766, we would be OK. But we can’t move the decimal point without affecting the exponent attached to the 10. Since we need to move
the decimal point one space to the left, we add 1 to the exponent, and the answer to the problem in scientific notation is 5.766 times the resulting power of 10.
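As a quick check of the conversions above, here is a short Python sketch (my own addition, not part of the original lesson) that converts a number to scientific notation and multiplies two numbers in that form. The exponents in the multiplication example did not survive formatting, so the ones used below (5 and 3) are illustrative.

```python
import math

def to_scientific(x):
    """Return (a, b) with x == a * 10**b and 1 <= a < 10."""
    b = math.floor(math.log10(abs(x)))
    return x / 10**b, b

# The deficit example: 34,000,000,000,000 = 3.4 x 10^13
a, b = to_scientific(34_000_000_000_000)
print(a, b)  # 3.4 13

# The small-number example: 0.000000000294 = 2.94 x 10^-10
a, b = to_scientific(0.000000000294)
print(round(a, 2), b)  # 2.94 -10

# Multiplying in scientific notation: multiply the base numbers,
# add the exponents, then renormalize so the base is in [1, 10).
def multiply(a1, b1, a2, b2):
    a, b = a1 * a2, b1 + b2
    if a >= 10:          # e.g. 9.3 * 6.2 = 57.66 -> 5.766, exponent + 1
        a, b = a / 10, b + 1
    return a, b

print(multiply(9.3, 5, 6.2, 3))  # (5.766..., 9)
```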
Variables in research
Explanations > Social Research > Measurement > Variables in research
Definitions | Type | Independence | Control | Correlation | Cause | See also
When doing social research, variables are both important and tricky. Here's a few words about them.
A variable is something that can change, such as 'gender' and are typically the focus of a study.
Attributes are sub-values of a variable, such as 'male' and 'female'. An exhaustive list contains all possible answers, for example gender could also include 'male transgender' and 'female
transgender' (and both can be pre- or post-operative).
Mutually exclusive attributes are those that cannot occur at the same time. Thus in a survey a person may be requested to select one answer from a list of alternatives (as opposed to selecting as
many as might apply).
Quantitative data is numeric. This is useful for mathematical and statistical analysis that leads to a predictive formula.
Qualitative data is based on human judgement. You can turn qualitative data into quantitative data, for example by counting the proportion of people who hold a particular qualitative viewpoint.
Units are the ways that variables are classified. These include: individuals, groups, social interactions and objects.
Descriptive variables are those that which will be reported on, without relating them to anything in particular.
Categorical variables result from a selection from categories, such as 'agree' and 'disagree'. Nominal and ordinal variables are categorical.
Numeric variables give a number, such as age.
Discrete variables are numeric variables that come from a limited set of numbers. They may result from counting, answering questions such as 'how many', 'how often', etc.
Continuous variables are numeric variables that can take any value, such as weight.
An independent variable is one is manipulated by the researcher. It is like the knob on a dial that the researcher turns. In graphs, it is put on the X-axis.
A dependent variable is one which changes as a result of the independent variable being changed, and is put on the Y-axis in graphs.
The holy grail for researchers is to be able to determine the relationship between the independent and dependent variables, such that if the independent variable is changed, then the researcher will
be able to accurately predict how the dependent variable will change.
Extraneous variables are additional variables which could provide alternative explanations or cast doubt on conclusions.
Variables may have the following characteristics:
• Period: When it starts and stops.
• Pattern: Daily, weekly, ad-hoc, etc.
• Detail: Overview through to 'in depth'.
• Latency: Time between measuring dependent and independent variable (some things take time to take effect).
Note that in an experiment there may be many additional variables beyond the manipulated independent variable and the measured dependent variables. It is critical in experiments that these variables
do not vary and hence bias or otherwise distort the results. There is a struggle between control vs. authenticity in managing this.
With perfect correlation, the X-Y graph of points (as a scatter diagram) will give a straight line. Whilst this may happen in physics, it seldom happens in social research and a probabilistic
relationship is the best that can be determined.
Correlation can be positive (increasing X increases Y), negative (increasing X decreases Y) or non-linear (increasing X makes Y increase or decrease, depending on the value of X).
Correlation can also be partial, that is across only a range of values X. As all possible values of X can seldom be tested, most correlations found are at best partial.
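The positive/negative distinction above can be made concrete with Pearson's correlation coefficient. This is my own illustrative sketch, not part of the original page:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation: +1 for a perfect positive linear relationship,
    -1 for a perfect negative one, near 0 when there is no linear trend."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

x = [1, 2, 3, 4, 5]
print(pearson_r(x, [2, 4, 6, 8, 10]))   # about 1.0  (positive correlation)
print(pearson_r(x, [10, 8, 6, 4, 2]))   # about -1.0 (negative correlation)
```

Note that a high coefficient is still only correlation, not cause: as in the ice-cream and drowning example below, a third variable may drive both.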
When correlation is determined, a further question is whether varying the independent variable caused the dependent variable to change. This adds complexity and debate to the situation.
Sometimes a third variable is the cause, such as when a correlation between ice-cream sales and drowning is actually due to the fact that both are caused by warm weather.
See also
Variation, Scatter diagram, Histogram
Equivalent Algebraic Expressions.
I am trying to see right now how these became equivalent: $\frac{1}{2}\left((x-a)^2+y_0^2\right)^{-1/2}\cdot 2(x-a)=\frac{x-a}{\sqrt{(x-a)^2+y_0^2}}$ Thanks in advance...
$x^{-k} = \dfrac{1}{x^k}$ and $x^{1/n} = \sqrt[n]{x}$ (where $k$ is any real number and $n$ is a positive integer). Do you see it now?
$\frac{1}{2}\cdot\frac{2(x-a)}{\left((x-a)^2+y_0^2\right)^{1/2}}$? I think this is maybe it. Thank you for your reply.
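Spelling out SlipEternal's hint as a worked derivation (my own write-up of the steps):

```latex
\begin{align*}
\frac{1}{2}\bigl((x-a)^2+y_0^2\bigr)^{-1/2}\cdot 2(x-a)
  &= (x-a)\,\bigl((x-a)^2+y_0^2\bigr)^{-1/2}
     && \text{cancel } \tfrac{1}{2}\cdot 2 \\
  &= \frac{x-a}{\bigl((x-a)^2+y_0^2\bigr)^{1/2}}
     && \text{using } u^{-k} = 1/u^{k} \\
  &= \frac{x-a}{\sqrt{(x-a)^2+y_0^2}}
     && \text{using } u^{1/2} = \sqrt{u}
\end{align*}
```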
PIRSA - Perimeter Institute Recorded Seminar Archive
Special Relativity 8 - Applications of Minkowskian Geometry
Abstract: Learning to use Minkowskian geometry to understand, very simply, a variety of aspects of Einstein’s spacetime.
Learning Outcomes:
• How a straight line is the longest path between two points in spacetime.
• How a light particle experiences space and time: its journey from one location in the universe to another involves zero spacetime distance, and is thus instantaneous!
• How Einstein’s special relativity has no difficulty handling accelerated observers.
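The first two learning outcomes can be summarized compactly in Minkowski notation (my own summary, not from the recorded talk):

```latex
% Minkowski interval between two events (one spatial dimension):
\Delta s^2 = c^2\,\Delta t^2 - \Delta x^2
% Proper time experienced along a worldline:
\tau = \int \sqrt{1 - v(t)^2/c^2}\;dt
% A straight (inertial) worldline has constant v and maximizes \tau:
% it is the "longest path" between two timelike-separated events.
% For light, v = c, so \Delta s^2 = 0 and \tau = 0: the journey takes
% zero proper time from the photon's point of view.
```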
Date: 12/08/2008 - 9:00 am
Physics Forums - View Single Post - Interesting Identity
sum[0 to x](sin(x))=180 * sin(x/2)*sin(x/2) + sin(x)/2
where x is in degrees
Didn't you mean f(180), instead of 180, above? f(180) being the sum[t=0 to 180](sin(t)).
Also, to avoid confusing the two meanings of 'x', you may want to express the left-hand side as sum[t=0 to x](sin(t)) ; 'x' is the limit of the sum, not really the argument to sin().
The x's in the right-hand side are correct, as they mean the same value of 'x' as the limit of the sum in the left-hand side.
So, if I get you right, the conjecture is[tex]\sum_{t=0}^x \sin t = \left( \sum_{t=0}^{180} \sin t \right) \sin^2(x/2) + (\sin x)/2[/tex]with the arguments in degrees, not radians.
The two overlapping curves (left-hand side, right-hand side) looking like this, for x=0 to 180:
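SlipEternal's corrected form of the conjecture is easy to check numerically. Here is a short Python sketch (my own addition, not part of the thread) confirming it for every integer x from 0 to 180:

```python
import math

def lhs(x):
    """Sum of sin(t degrees) for t = 0..x."""
    return sum(math.sin(math.radians(t)) for t in range(x + 1))

f180 = lhs(180)  # the coefficient: sum[t=0 to 180](sin(t)), about 114.59

def rhs(x):
    s = math.sin(math.radians(x / 2))
    return f180 * s * s + math.sin(math.radians(x)) / 2

# Verify the identity holds (to floating-point accuracy) for x = 0..180
assert all(abs(lhs(x) - rhs(x)) < 1e-9 for x in range(181))
print("identity verified; f(180) =", round(f180, 4))
```

In fact the standard closed form sum[t=0 to x](sin t) = sin(x/2) sin((x+1)/2) / sin(1/2) (arguments in degrees) reduces to exactly this expression, with f(180) = cot(0.5 degrees).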
University Park, TX Algebra 2 Tutor
Find an University Park, TX Algebra 2 Tutor
...I have been a teaching assistant for general chemistry and college physics courses for almost two years. In addition, I have been a course assistant for chemistry and physics courses and also
have been tutoring math classes. The most rewarding part of my career as a teacher is to work with a st...
19 Subjects: including algebra 2, chemistry, physics, statistics
...I took geometry in high school, as well as at the college level while studying to get my bachelor's degree. I have a bachelor's and master's degree in mathematics. Also, I would say that a
good 50% of my tutoring and teaching experience have been with Algebra 1 and 2 students and prealgebra is the basis for understanding algebra 1 and 2 in high school.
10 Subjects: including algebra 2, calculus, probability, algebra 1
Dear prospective student, Solving the problems is just half of the objective of a tutoring session. The most important thing is learning how to come from the basic principles (from lectures and
notes) to the solution of each problem. With the ability to do that, you can solve similar problems on your quizzes, lab reports, tests, exams (SAT, SAT 2). If you have problems in Gen.
24 Subjects: including algebra 2, English, reading, calculus
...Mark's School of Texas and then the University of Texas at Dallas, I have always been trained and excelled in critical thinking and mathematics. My success is largely through an education
which focuses on process rather than solution. In the same way you can give a man a fish you can cram for a test.
15 Subjects: including algebra 2, reading, geometry, algebra 1
...Overall I am passionate about tutoring and teaching chemistry, math and physics for undergraduate and high school students. My main target will be on helping and fulfilling students'
learning desire by providing clear and simple lessons on any of those subjects. I love to simplify complex concepts into an easier form and I also do a lot of examples in my tutoring.
Related University Park, TX Tutors
University Park, TX Accounting Tutors
University Park, TX ACT Tutors
University Park, TX Algebra Tutors
University Park, TX Algebra 2 Tutors
University Park, TX Calculus Tutors
University Park, TX Geometry Tutors
University Park, TX Math Tutors
University Park, TX Prealgebra Tutors
University Park, TX Precalculus Tutors
University Park, TX SAT Tutors
University Park, TX SAT Math Tutors
University Park, TX Science Tutors
University Park, TX Statistics Tutors
University Park, TX Trigonometry Tutors
Ionization and Thermal Structures of the Absorption-Line
Systems of Quasars
Next: Quasar Spectral Features Up: QSOs, Galaxies, and Previous: Absorption line spectra
Science with the Hubble Space Telescope -- II
Book Editors: P. Benvenuti, F. D. Macchetto, and E. J. Schreier
Electronic Editor: H. Payne
Shin Sasaki
Department of Physics, the University of Tokyo, Bunkyo, Tokyo 113, Japan
Fumio Takahara and K. Masai
Department of Physics, Tokyo Metropolitan University, Hachiouji, Tokyo 192-03, Japan
JSPS Research Fellow; present address: Department of Physics, Tokyo Metropolitan University, Hachiouji, Tokyo 192-03, Japan
We studied the ionization and thermal structures of the absorption-line systems of quasar spectra. We calculated ionization and thermal structures for several models by varying the density and size
of the absorption systems and the intensity and spectrum index of the ionizing spectrum. Comparison with the observations, we put constraints on the density and size of the absorption systems and the
intensity and spectrum index of the ionizing spectrum.
Keywords: cosmology : absorption systems
In quasar spectra, there are many types of absorption systems and their column density of neutral hydrogen (
In order to understand new observations precisely, we need to study the thermal properties of absorption systems composed of hydrogen and helium theoretically. In this paper, we study this using a
simple model based on the photo-ionization model, and consider what types of absorption systems and UVB are favorable for explaining the observations. We then need to treat radiative processes
carefully even when we study absorption systems whose optical depth of neutral hydrogen is less than unity. Because the optical depth of singly ionized helium is larger than that of HI unless the
spectrum of the UVB is too hard,
it is possible that the HeII optical depth is greater than unity even when the HI optical depth is less than unity.
We adopt the photo-ionization model as absorption systems. In this paper, we assume that they are flattened structures and consider the homogeneous plane-parallel slab comprising of hydrogen and
helium, for simplicity. In our model, the absorption systems are illuminated on both sides by diffuse UV background radiation L, the intensity of the UVB
We calculate thermal structures of absorption systems based on the model presented in the previous section. In the case of optically thin systems, thermal properties are constant everywhere in a
system, but they depend on the adopted model parameters. The fraction of HI decreases as the intensity of UVB increases and as the number density of the system decreases. On the other hand, the ratio
of number density of HeII to HI depends on the spectrum index of UVB strongly, and does not depend on the intensity of UVB and number density. In the case of optically thick systems, thermal
properties change with depth. From these results, we can see that the thermal properties of absorption systems strongly depend on the model parameters, as expected. Thus, we expect we can put
constraints on the model parameters by comparing them with observations.
Vogel & Reimers (1995) detected absorption features of HeI in optically thin Lyman limit systems whose HI column density
Recently, Jakobsen et al. (1994) detected HeII absorption toward Q0302-003 (
From these observations, we see that number density of absorption systems whose
In order to restrict to our model parameters, it is most promising to study
Figure: The ratio of column density of
We studied the ionization and thermal structures of the absorption-line systems of quasar spectra based on photo-ionization model. Comparison with the observations, we put constraints on the model
parameters. We find that the spectrum index
The author (S.S.) acknowledges the Research Fellowships of the Japan Society for the Promotion of Science.
Jakobsen, P. et al. 1994, Nature, 370, 35
Vogel, S. & Reimers, D. 1995, A&A, 294, 377
Multiply 11 up to 19 (shortcut)
I was astounded an hour ago when
I came across this video that teaches
an easier way in most cases to
multiply any two numbers together
from 11 to 19, such as 12 times 16.
Here is an example of 12 times 16, which we know is 192,
due to its popularity in base-2 as 128 + 64.
Okay here's how it goes:
Forget one of the one's, and just add 12 + 6 for 18,
and then tack on a zero for 180.
Next do the one's digits multiplied and add that on to
the 180. So 2 times 6 is 12, and 12 + 180 = 192.
Yes, it is correct.
If you break the 4 chiffre (digits in French) into parts
and multiply them the 4 ways you usually do, I think
you get a glimpse as to why this works.
The 1 times the 1 is 100, due to the ten's places.
This will always be true. Then the two cross-multiplications,
the 1 times the 2 and the 1 times the 6, on diagonals, add
up to 20 + 60 or 80, so 100 + 80 is 180.
So this method shortens all that by assuming you start with
ones in the ten's place and so you get the 180.
Then the only 4th multiple to do is the two one's places and
add that on. It is so wonderful, I am really excited I
found this video this morning!!
Here is another example:
Let's do 16 times 19.
16 + 9 = 25 so think 250 with the zero on the end.
Then add to this 6 times 9 or 54 + 250 = 304.
And my calculator says that is correct as I didn't
have that one down pat yet.
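The shortcut holds for every pair from 11 to 19, which the following Python sketch (my own check, not part of the post) confirms, along with the algebra behind it:

```python
# Shortcut for x * y where both x and y are between 11 and 19:
#   1. add x to the ones digit of y, then tack on a zero (multiply by 10)
#   2. add on the product of the two ones digits
def shortcut(x, y):
    return (x + y % 10) * 10 + (x % 10) * (y % 10)

# e.g. 12 * 16: (12 + 6) * 10 = 180, plus 2 * 6 = 12, gives 192
print(shortcut(12, 16))  # 192
print(shortcut(16, 19))  # 304

# Why it works: with x = 10 + a and y = 10 + b,
#   (10 + a)(10 + b) = 100 + 10a + 10b + ab
#                    = 10 * ((10 + a) + b) + ab
assert all(shortcut(x, y) == x * y
           for x in range(11, 20) for y in range(11, 20))
```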
Good Day!! Bonne Journee!!
An algorithm for analysis of the structure of finitely presented Lie algebras
Vladimir P. Gerdt, Vladimir V. Kornyak
We consider the following problem: what is the most general Lie algebra satisfying a given set of Lie polynomial equations? The presentation of Lie algebras by a finite set of generators and defining
relations is one of the most general mathematical and algorithmic schemes of their analysis. That problem is of great practical importance, covering applications ranging from mathematical physics to
combinatorial algebra. Some particular applications are construction of prolongation algebras in the Wahlquist-Estabrook method for integrability analysis of nonlinear partial differential equations
and investigation of Lie algebras arising in different physical models. The finite presentations also indicate a way to q-quantize Lie algebras. To solve this problem, one should perform a large
volume of algebraic transformations which is sharply increased with growth of the number of generators and relations. For this reason, in practice one needs to use a computer algebra tool. We
describe here an algorithm for constructing the basis of a finitely presented Lie algebra and its commutator table, and its implementation in the C language. Some computer results illustrating our
algorithmand its actual implementation are also presented.
Mathematics Weblog
Thursday 28 April 2005 at 11:05 pm
Who remembers Rubik’s cube? Rubik Unbound has an online version. Found that easy to solve? Then try the much harder 4-dimensional version at Magic Cube 4D Applet
Magic Cube 4D says the normal version has 43 252 003 274 489 856 000 unique positions whereas the 4D version has 1 756 772 880 709 135 843 168 526 079 081 025 059 614 484 630 149 557 651 477 156 021 733 236 798 970 168 550 600 274 887 650 082 354 207 129 600 000 000 000 000 unique positions, more than the number of atoms in the universe.
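For the curious, the 43-quintillion figure for the ordinary cube follows from a standard counting argument; here is a short Python sketch (my own, not from the post) that reproduces it:

```python
from math import factorial

# Corners: 8! positions and 3^7 twists (the last corner's twist is forced);
# edges:  12! positions and 2^11 flips (the last edge's flip is forced);
# corner and edge permutations must have equal parity (divide by 2).
positions = factorial(8) * 3**7 * factorial(12) * 2**11 // 2
print(f"{positions:,}")  # 43,252,003,274,489,856,000
```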
Friday 22 April 2005 at 2:57 pm
Students learning (finite) group theory often have to prove that 2 groups are isomorphic. They may construct a function from G to H, guided by their Cayley tables, then assume that the function is a
homomorphism. Maybe they will check a few cases but don't think it necessary to prove it in all cases.
They are told that isomorphic groups have the same properties and, in particular, have the same number of elements of the same order. Unfortunately, they assume the converse is true which it isn’t.
But the examples they see tend to confirm the converse; they don’t often see counter-examples.
To make things easier let’s say two finite groups G and H are similar if they have the same number of elements of the same order. I suspect this is entirely non-standard terminology 8-).
The counter-example of smallest order, 16, is where
Other examples of non-isomorphic similar groups are:
• p is an odd prime:
• p,q odd primes with
G:
H:
• q an odd prime such that
G:
H:
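The explicit group pairs in the bullets above were lost in extraction, but for the odd-prime case a standard example of similar yet non-isomorphic groups is the elementary abelian group (Z_p)^3 versus the mod-p Heisenberg group (both of order p^3, with every non-identity element of order p). An illustrative Python check, not from the original post:

```python
from itertools import product
from collections import Counter

p = 3  # any odd prime works

# Element orders in (Z_p)^3: the identity has order 1, everything else order p.
abelian_orders = Counter(1 if v == (0, 0, 0) else p
                         for v in product(range(p), repeat=3))

# Heisenberg group mod p, encoded as triples (a, b, c) for the matrix
# [[1, a, c], [0, 1, b], [0, 0, 1]]; multiplication adds entries with the
# twist c + c' + a*b', which makes the group non-abelian.
def mul(x, y):
    return ((x[0] + y[0]) % p, (x[1] + y[1]) % p,
            (x[2] + y[2] + x[0] * y[1]) % p)

def order(g):
    x, n = g, 1
    while x != (0, 0, 0):
        x, n = mul(x, g), n + 1
    return n

heis_orders = Counter(order(g) for g in product(range(p), repeat=3))
print(abelian_orders == heis_orders)  # True: similar, yet non-isomorphic
```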
Saturday 16 April 2005 at 10:58 pm
7 Comments
I have been teaching Norwegian students for some years; every year it’s a new group but every year they are a pleasure to teach. Since we follow their syllabus the textbook is in Norwegian, which is
fine for most mathematics but probability questions can be challenging; just a subtle change in wording can change the resulting probability.
Mathematics is a fairly universal language but there are occasional differences in Norway. In classes for British students I often use . or × for multiplication as in
Other interesting differences in symbols in the textbook are:
In differentiation, function notation is used but I have never before seen it used as in
The solution of
Of course, I am assuming it’s not just the book I’m using, but as the students are comfortable with the notation I expect it’s common in Norway.
The exams are interesting. They are much longer than in the UK, lasting 5 hours, so they can only have 1 exam per day. But what is really fascinating is that, to maintain national standards,
externally set exams are only sat by selected students, chosen in a lottery. The students only get short notice of whether or not they have been selected, and the external exam mark supersedes any
internal exam marks. Different selections are made for each subject.
The standard of mathematics they have to learn is roughly equivalent to A level, but the standard of behaviour, willingness to learn and participation is far superior! They study more subjects than
is common in the UK and not only do they all know who Niels Henrik Abel was, but even know his most famous result (insolubility of a quintic). Impressive. How many British students can do the same
for any British mathematician? They get taught multiplication tables up to 20, which is twice as far as here in the UK and further than the 12 in my day.
Question: Can you name any other famous Norwegian mathematicians? One of them is well known for theorems in group theory. Answers below
Continue reading Norwegian mathematics…
Thursday 7 April 2005 at 5:32 pm
4 Comments
I do sometimes tend to go on and on about common errors students make, such as dividing by zero or assuming what they are trying to prove. The Most Common Errors In Undergraduate Mathematics^1 is a
long-standing page written by Eric Schechter. It probably contains every error that I’ve ever seen (and more) so should be required reading by all maths students.
A few errors taken at random from the page; hopefully they will encourage you to read it in more detail.
Everything is additive
Everything is commutative
I do like this one, since it gives the right answer via multiple errors
Oh and he has a go at teachers:
Some teachers are hostile to questions. That is an error made by teachers. Teachers, you will be more comfortable in your job if you try to do it well, and don’t think of your students as the
enemy. This means listening to your students and encouraging their questions.
Surely there aren’t many of these type of teachers about are there
1. Undergraduate here is an American term so the page is also highly relevant to A-Level maths students in the UK
Items where Subject is "D - G > Functional analysis"
Jump to:
Number of items at this level: 7.
Belton, A. C. R. (1998) A matrix formulation of quantum stochastic calculus. PhD thesis, University of Oxford.
Dew, N. (2003) Asymptotic structure of banach spaces. PhD thesis, University of Oxford.
Lasis, Andris and Suli, Endre (2003) Poincaré-type inequalities for broken Sobolev spaces. Technical Report. Unspecified. (Submitted)
Loghin, Daniel and Wathen, A. J. (1997) Preconditioning the Advection-Diffusion Equation: the Green's Function Approach. Technical Report. Unspecified. (Submitted)
Norbury, John and Wei, J. and Winter, M. (2007) Stability of patterns with arbitrary period for a Ginzburg-Landau equation with a mean field. European Journal of Applied Mathematics, 18 . pp.
129-151. ISSN 0956-7925
Pathmanathan, S. (2002) The poisson process in quantum stochastic calculus. PhD thesis, University of Oxford.
Srivastava, S. (2002) Laplace transforms, non-analytic growth bounds and PhD thesis, University of Oxford.
C according to what reference frame?
You're drifting off into philosophy.
To address two points which seem reasonably concerned with physics:
I am trying to picture the difference between time inside of our galactic orbit and time in a place where there is comparatively little gravitational influence or magnitude of velocity.
It seems to me that our knowledge may be limited due to a self-referencing issue... that we really do not know just how "deep" the stillness can be outside of our present position.
These theories are named after "Relativity", which is the guiding principle behind them. Gravitational potential as well as velocity are relative concepts. You can't tell where things are slowest or
potential is highest (except in artificial toy models). That's not a matter of lacking knowledge. Both absolute velocity and absolute potential don't exist, so how could we know their values?
I am wondering if relativity skews our perception of the "billions of years" it takes light from distant galaxies to reach us. After all, the light must travel through this comparative void where
time, speed, and distance could be FAR different than we can conceive of.
You bet!
You can gain an understanding of these things if you study GR. You have the metric there as a basis (well, at least after finding a solution to the field equations), and all these concepts like distance, time, velocity, and potential have to be drawn from it. There are usually infinitely many ways of doing this, and you get a feeling for how much these concepts are relative.
factorial of n numbers using c# lambda..?
I just started playing with lambdas and LINQ expressions for self learning. I took the simple factorial problem for this, with the slightly more complex scenario of finding the factorial for given n numbers
(without using recursive loops).

Below is the code I tried. But this is not working.
public void FindFactorial(int range)
{
    var res = Enumerable.Range(1, range).Select(x => Enumerable.Range(0, x).Where(y => (y > 1)).Select(y => y * (y-1)));
    foreach (var outt in res)
        Console.WriteLine(outt);
}
This is the procedure I used:

• loop through the numbers 1 to n -- Enumerable.Range(1, range).
• select each number x and again loop up to x times (instead of recursion)
• and select the numbers greater than 1 -- Where(y => (y > 1)) -- and multiply each by (y-1)
I know I messed up somewhere. Can someone tell me what's wrong, and any other possible solution?

I am going to leave this thread open for some time. Since these are my initial steps towards lambdas, I found all the answers very useful and informative. And it's going to be fun and great learning
seeing the different ways of approaching this problem.
c# linq lambda
4 Answers
Currently there's no recursion - that's the problem. You're just taking a sequence of numbers, and projecting each number to "itself * itself-1".
The simple and inefficient way of writing a factorial function is:
Func<int, int> factorial = null; // Just so we can refer to it
factorial = x => x <= 1 ? 1 : x * factorial(x-1);
for (int i = 1; i <= range; i++)
{
    Console.WriteLine(factorial(i));
}
Typically you then get into memoization to avoid having to repeatedly calculate the same thing. You might like to read Wes Dyer's blog post on this sort of thing.
3 10 out of 10 for style simply for the use of "x => x <= 1 ? 1 : x * factorial(x-1);"... x => x <= 1 :) – veggerby Sep 15 '09 at 12:04
thanks Jon, I have tried this way earlier. But i thought its cool doing this without recursion. thanks for the links. – RameshVel Sep 15 '09 at 12:06
1 +1 for memoization... BTW, there's an interesting library called Elevate which provides an extension method for memoizing a function : elevate.codeplex.com/sourcecontrol/
changeset/view/… – Thomas Levesque Sep 15 '09 at 12:15
2 Man... I feel like an idiot. I had to go to Wikipedia and look up Memoization. I have a degree in Comp Sci, and I have never heard of this word before today. Thanks for
teaching me something. – John Kraft Sep 15 '09 at 16:21
1 Wiki Link for people like John and myself. – Scott Chamberlain Sep 29 '10 at 19:48
Just to continue on Jon's answer, here's how you can memoize the factorial function so that you don't recompute everything at each step :
public Func<T, TResult> Memoize<T, TResult>(Func<T, TResult> func)
{
    Dictionary<T, TResult> _resultsCache = new Dictionary<T, TResult>();
    return (arg) =>
    {
        TResult result;
        if (!_resultsCache.TryGetValue(arg, out result))
        {
            result = func(arg);
            _resultsCache.Add(arg, result);
        }
        return result;
    };
}

Func<int, int> factorial = null; // Just so we can refer to it
factorial = x => x <= 1 ? 1 : x * factorial(x-1);
var factorialMemoized = Memoize(factorial);
var res = Enumerable.Range(1, 10).Select(x => factorialMemoized(x));
foreach (var outt in res) Console.WriteLine(outt);
EDIT: actually the code above is not correct, because factorial calls factorial, not factorialMemoized. Here's a better version:
Func<int, int> factorial = null; // Just so we can refer to it
Func<int, int> factorialMemoized = null;
factorial = x => x <= 1 ? 1 : x * factorialMemoized(x-1);
factorialMemoized = Memoize(factorial);
var res = Enumerable.Range(1, 10).Select(x => factorialMemoized(x));
foreach (var outt in res) Console.WriteLine(outt);
With that code, factorial is called 10 times, against 55 times for the previous version
@thomas, u rock... i never considered abt the Memoization.. thanks for giving an insight.... – RameshVel Sep 15 '09 at 12:40
Note that it is faster for large values, but probably slower for small values, because of the overhead of the dictionary insertion and lookup – Thomas Levesque Sep 15 '09 at
Simple although no recursion here:
public static int Factorial(this int count)
{
    return count == 0
        ? 1
        : Enumerable.Range(1, count).Aggregate((i, j) => i * j);
}

3.Factorial() == 6
thats a nice trick... – RameshVel Sep 30 '10 at 6:05
I tried to come up with something resembling F#'s scan function, but failed since my LINQ isn't very strong yet.
Here's my monstrosity:
//this is similar to the folowing F# code:
//let result = [1..10] |> List.scan (fun acc n -> acc*n) 1
var result =
    Enumerable.Range(1, 10)
        .Aggregate(new List<int>(new[] { 1 }),
            (acc, i) =>
            {
                acc.Add(i * acc.Last());
                return acc;
            });

foreach(var num in result) Console.WriteLine("{0}", num);
If anyone knows if there actually is an equivalent of F#'s scan function in LINQ that I missed, I'd be very interested.
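Not a LINQ answer, but for comparison, Python's itertools.accumulate gives a similar running-product (scan) behaviour to F#'s List.scan (illustrative, not part of the original thread):

```python
from itertools import accumulate
from operator import mul

# Running factorials 1!, 2!, ..., 10! as a scan over 1..10.
factorials = list(accumulate(range(1, 11), mul))
print(factorials)  # [1, 2, 6, 24, 120, 720, 5040, 40320, 362880, 3628800]
```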
@cfern, thanks for the answer.. its cool .... you guys have given diffrent possibilities that i missed.... – RameshVel Sep 15 '09 at 13:12
Crow Extended Model - Reliability Growth Planning
Related Topics:
Reliability Growth Planning
Reliability Growth Planning Examples
Crow-AMSAA (NHPP) Model
Crow Extended Model
Crow Extended - Continuous Evaluation Model
Crow Extended Model - Reliability Growth Planning
The Crow Extended model for reliability growth planning is a revised and improved version of the MIL-HDBK-189 growth curve [13]. MIL-HDBK-189 can be considered as the growth curve based on the
Crow-AMSAA (NHPP) model. Using MIL-HDBK-189 for reliability growth planning assumes that the corrective actions for the observed failure modes are incorporated during the test and at the specific
time of failure. However, in actual practice, fixes may be delayed until after the completion of the test or some fixes may be implemented during the test while others are delayed and some are not
fixed at all. The Crow Extended model for reliability growth planning provides additional input to be able to account for a specific management strategy and delayed fixes with specified effectiveness
Management Strategy Ratio & Initial Failure Intensity
When a system is tested and failure modes are observed, management can make one of two possible decisions, either to fix or not fix the failure mode. Therefore, the management strategy places failure
modes into two categories: A modes and B modes. A modes are all failure modes such that, when seen during the test, no corrective action will be taken. This accounts for all modes for which
management determines that it is not economically or otherwise justified to take a corrective action. B modes are either corrected during the test or the corrective action is delayed to a later time.
The management strategy is defined by what portion of the failures will be fixed.
Let λ_I be the initial failure intensity of the system in test, λ_A the Type A mode initial failure intensity and λ_B the Type B mode initial failure intensity. λ_A is the failure intensity of the
system that will not be addressed by corrective actions even if a failure mode is seen during test; λ_B is the failure intensity of the system that will be addressed by corrective actions if a failure
mode is seen during testing.

Then, the initial failure intensity of the system is:

λ_I = λ_A + λ_B    (1)

The initial system MTBF is:

MTBF_I = 1/λ_I    (2)

Based on the initial failure intensity definitions, the management strategy ratio is defined as:

ms = λ_B/(λ_A + λ_B)    (3)

The ms is the portion of the initial system failure intensity that will be addressed by corrective actions, if seen during the test.

The Type A and Type B initial failure intensities are:

λ_A = (1 − ms)·λ_I    (4)
λ_B = ms·λ_I    (5)
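To make the A/B bookkeeping concrete, here is a small illustrative Python sketch (the function name and numeric inputs are assumptions for illustration, not from the source):

```python
# Split an initial failure intensity lambda_I into the Type A part (modes that
# will not receive corrective action) and the Type B part (modes that will),
# using the management strategy ratio ms = lambda_B / lambda_I.
def split_failure_intensity(lambda_i, ms):
    lambda_b = ms * lambda_i
    lambda_a = (1.0 - ms) * lambda_i
    return lambda_a, lambda_b

# Example: initial MTBF of 50 hours (lambda_I = 0.02) and a 95% strategy.
la, lb = split_failure_intensity(0.02, 0.95)
print(la, lb)  # the two parts sum back to lambda_I
```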
Effectiveness Factor
When a delayed corrective action is implemented for a Type B failure mode, in other words a BD mode, the failure intensity for that mode is reduced if the corrective action is effective. Once a BD
mode failure mode is discovered, it is rarely totally eliminated by a corrective action. After a BD mode has been found and fixed, a certain percentage of the failure intensity will be removed, but a
certain percentage of the failure intensity will generally remain. The fraction decrease in the BD mode failure intensity due to corrective actions, d, is called the effectiveness factor. A study on
EFs showed that an average EF, d, was about 70%. Therefore, typically about 30%, i.e., (1 − d), of the BD mode failure intensity will remain in the system after all of the corrective actions have been
implemented. However, individual EFs for the failure modes may be larger or smaller than the average. This average value of 70% can be used for planning purposes, or if such information is recorded,
an average effectiveness factor from a previous reliability growth program can be used.
MTBF Goal
When putting together a reliability growth plan, a goal MTBF, MTBF_G (or goal failure intensity, λ_G), is defined as the requirement or target for the product at the end of the growth program.
Growth Potential
The failure intensity remaining in the system at the end of the test will depend on the management strategy given by the classification of the Type A and Type B failure modes. The engineering effort
applied to the corrective actions determines the effectiveness factors. In addition, the failure intensity depends on the rate at which problem failure modes are being discovered during
testing. The rate of discovery drives the opportunity to take corrective actions based on the seen failure modes and it is an important factor in the overall reliability growth rate. The reliability
growth potential is the limiting value of the failure intensity as time increases. This limit is the maximum MTBF that can be attained with the current management strategy. The maximum MTBF will be
attained when all Type B modes have been observed and fixed.
If all seen Type B modes are corrected by time T, that is, no deferred corrective actions at time T, then the growth potential is the maximum attainable with the Type B designation of the failure modes
and the corresponding assigned effectiveness factors. This is called the nominal growth potential. In other words, the nominal growth potential is the maximum attainable growth potential assuming
corrective actions are implemented for every mode that is planned to be fixed. In reality, some fixes to modes might be implemented at a later time due to schedule, budget, engineering, etc.
If some seen Type B modes are not corrected at the end of the current test phase then the prevailing growth potential is below the maximum attainable with the Type B designation of the failure modes
and the corresponding assigned effectiveness factors.
If all Type B failure modes are seen and corrected with an average effectiveness factor, d, then the maximum reduction in the initial system failure intensity is the growth potential failure intensity:

λ_GP = λ_A + (1 − d)·λ_B    (6)

The growth potential MTBF is:

MTBF_GP = 1/λ_GP    (7)

Note that based on Eqns. (6), (1) and (3), the initial failure intensity is equal to:

λ_I = λ_GP/(1 − d·ms)    (8)
Growth Potential Design Margin
The Growth Potential Design Margin (DM_GP) can be considered as a safety margin when setting target MTBF values for the reliability growth plan. It is common for systems to degrade in terms of reliability
when a prototype product is going into full manufacturing, due to variation in material, processes, etc. Furthermore, the in-house reliability growth testing usually overestimates the actual product
reliability, since the field usage conditions may not be perfectly simulated during growth testing. Typical values for the DM_GP are around 1.2. Higher values yield less risk for the program, but require a
more rigorous reliability growth test plan. Lower values imply higher program risk, with less "safety margin."

During the planning stage, the growth potential MTBF, MTBF_GP, can be calculated based on the goal MTBF, MTBF_G, and the growth potential design margin, DM_GP:

MTBF_GP = DM_GP · MTBF_G    (9)

or in terms of failure intensity:

λ_GP = λ_G/DM_GP
Nominal Idealized Growth Curve
During developmental testing, management should expect that certain levels of reliability will be attained at various points in the program in order to have assurance that reliability growth is
progressing at a sufficient rate to meet the product reliability requirement. The idealized curve portrays an overall characteristic pattern, which is used to determine and evaluate intermediate
levels of reliability and construct the program planned growth curve. Note that growth profiles on previously developed, similar systems provide significant insight into the reliability growth
process and are valuable in the construction of idealized growth curves.
The nominal idealized growth curve portrays a general profile for reliability growth throughout system testing. The idealized curve has the baseline value MTBF_I until an initialization time, t_0, when
reliability growth begins. From that time and until the end of testing, which can be one or, most commonly, multiple test phases, the idealized curve increases steadily according to a learning curve
pattern until it reaches the final reliability requirement, MTBF_G. The slope of this curve on a log-log plot is the growth rate, α, of the Crow Extended model [13].
Nominal Failure Intensity Function
The nominal idealized growth curve failure intensity as a function of test time t is:

λ_NI(t) = λ_I, for 0 < t ≤ t_0
λ_NI(t) = λ_I·(t/t_0)^(−α), for t > t_0    (10)

where λ_I is the initial system failure intensity, t is test time and t_0 is the initialization time, which is discussed in the next section.
It can be seen that Eqn. (10) is the failure intensity equation of the Crow Extended model.
Initialization Time
Reliability growth can only begin after a Type B failure mode occurs, which cannot be at a time equal to zero. Therefore, there is a need for an initialization time, different than zero, to be
defined. The nominal idealized growth curve failure intensity is initially set equal to the initial failure intensity until the initialization time, t_0:

λ_NI(t) = λ_I, for 0 < t ≤ t_0    (11)

Using Eqn. (1) to substitute, we have:

λ_NI(t) = λ_A + λ_B, for 0 < t ≤ t_0    (12)

The initialization time, t_0, allows for growth to start after a Type B failure mode has occurred.
Nominal Time to Reach Goal
Assuming that we have a target MTBF or failure intensity goal, λ_G, we can solve Eqn. (10) to find out how much test time, T, is required (based on the Crow Extended model and the nominal idealized growth
curve) to reach that goal:

T = t_0·(λ_I/λ_G)^(1/α)    (13)

Note that when λ_I ≤ λ_G or, in other words, the initial failure intensity is lower than the goal failure intensity, there is no need to solve for the nominal time to reach the goal, because the goal is
already met. In this case, no further reliability growth testing is needed.
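Assuming the nominal curve follows the Crow-AMSAA form λ_NI(t) = λ_I·(t/t_0)^(−α) beyond the initialization time (a standard assumption consistent with the model discussed here, not a quote from the source), setting λ_NI(T) = λ_G gives T = t_0·(λ_I/λ_G)^(1/α). An illustrative Python sketch:

```python
# Nominal test time needed to grow from lambda_I down to the goal lambda_G,
# given initialization time t0 and growth rate alpha (0 < alpha < 1).
# Assumed form: lambda(t) = lambda_I * (t / t0) ** (-alpha) for t > t0.
def nominal_time_to_goal(lambda_i, lambda_g, t0, alpha):
    if lambda_i <= lambda_g:
        return 0.0  # goal already met; no growth testing needed
    return t0 * (lambda_i / lambda_g) ** (1.0 / alpha)

# Example: halve the failure intensity (0.02 -> 0.01) with t0 = 100 hr, alpha = 0.3.
T = nominal_time_to_goal(0.02, 0.01, t0=100.0, alpha=0.3)
print(T)  # roughly 1008 hours
```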
Growth Rate for Nominal Idealized Curve
The growth rate for the nominal idealized curve is defined in the same context as the growth rate for the Duane Postulate [8]. The nominal idealized curve has the same functional form for the growth
rate as the Duane Postulate and the Crow-AMSAA (NHPP) model.
For both the Duane Postulate and the Crow-AMSAA (NHPP) model, the average failure intensity is given by:

λ_ave(T) = λ·T^(β−1)    (14)

Also, for both the Duane Postulate and the Crow-AMSAA (NHPP) model, the instantaneous failure intensity is given by:

λ_inst(T) = λ·β·T^(β−1)
Taking the difference between the average failure intensity, λ_ave(T), and the instantaneous failure intensity, λ_inst(T), yields:

λ_ave(T) − λ_inst(T) = λ·(1 − β)·T^(β−1)

For reliability growth to occur, this difference must be decreasing.

The growth rate for both the Duane Postulate and the Crow-AMSAA (NHPP) model is the negative of the slope of log[λ_ave(T) − λ_inst(T)] as a function of log T.

The slope is negative under reliability growth and equals β − 1.

The growth rate for both the Duane Postulate and the Crow-AMSAA (NHPP) model is equal to the negative of this slope:

α = 1 − β
The instantaneous failure intensity for the nominal idealized curve is:
The cumulative failure intensity for the nominal idealized curve is:
Therefore, in accordance with the Duane Postulate and the Crow-AMSAA (NHPP) model, α is the growth rate for the reliability growth plan.
Lambda - Beta Parameter Relationship
Under the Crow-AMSAA (NHPP) model, the time to first failure is a Weibull random variable. The MTTF of a Weibull distributed random variable with parameters β and η is:

MTTF = η·Γ(1 + 1/β)    (15)

The parameter lambda is defined as:

λ = 1/η^β    (16)

Using Eqn. (16), the MTTF relationship shown in Eqn. (15) becomes:

MTTF = (1/λ)^(1/β)·Γ(1 + 1/β)    (17)
Or, in terms of failure intensity:
Actual Idealized Growth Curve
The actual idealized growth curve differs from the nominal idealized curve in that it takes into account the average fix delay that might occur in each test phase. The actual idealized growth curve
is continuous and goes through each of the test phase target MTBFs.
Fix Delays and Test Phase Target MTBF
Fix delays reflect how long it takes from the time a problem failure mode is discovered in testing, to the time the corrective action is incorporated into the system and reliability growth is
realized. The consideration of the fix delay is often in terms of how much calendar time it takes to incorporate a corrective action fix after the problem is first seem. However, the impact of the
delay on reliability growth is reflected in the average test time it takes between finding a problem failure mode and incorporating a corrective action. The fix delay is reflected in the actual
idealized growth curve in terms of test time.
In other words, the average fix delay is calendar time converted to test hours. For example, say that we expect an average fix delay of two weeks: if in two weeks the total test time is 1000 hours,
the average fix delay is 1000 hours. If in the same two weeks the total test time is 2000 hours (maybe there are more units available or more shifts) then the average fix delay is 2000 hours.
There can be a constant fix delay across all test phases or, as a practical matter, each test phase can have a different fix delay time. In practice, the fix delay will generally be constant over a
particular test phase. Δ_i denotes the fix delay for phase i, where i = 1, …, K and K is the total number of phases in the test. RGA 7 allows for a maximum of seven test phases.
Actual Failure Intensity Function
Consider a test plan consisting of phases. Taking into account the fix delay within each phase, we expect the actual failure intensity to be different (i.e. shifted) from the nominal failure
intensity. This is because fixes are not incorporated instantaneously, thus growth is realized at a later time compared to the nominal case.
Specifically, the actual failure intensity will be estimated as follows:
Test Phase 1
For the first phase of a test plan, the actual idealized curve failure intensity is:

Note that the end time of Phase 1, T_1, must be greater than the initialization time.

The actual idealized curve initialization time for Phase 1 is calculated from:

Therefore, using Eqn. (12):

Solving Eqn. (19) for the Phase 1 initialization time, we get:
Test Phase
For any test phase i, the actual idealized curve failure intensity is given by:

where i = 2, …, K and T_i is the test time of each corresponding test phase.
The actual idealized curve MTBF is:
Actual Time to Reach Goal
The actual time to reach the target MTBF or failure intensity goal is determined from the actual idealized growth curve, as follows.
Since the actual idealized growth curve depends on the phase durations and average fix delays, there are three different cases that need to be treated differently in order to determine the actual
time to reach the MTBF goal. The cases depend on when the actual MTBF that can be reached within the specific phase durations and fix delays becomes equal to the MTBF goal. This can be determined by
solving Eqn. (21) for phases 1 through K, then solving in terms of actual MTBF using Eqn. (22) for each phase and finding the phase during which the actual MTBF becomes equal to the goal MTBF. The three
cases are presented next.
Case 1: MTBF goal is met during the last phase
If T_K indicates the cumulative end time for the last phase and Δ_K indicates the fix delay for the last phase, then we have:

Starting to solve for the actual time to reach the goal yields:
We can substitute the left term by using Eqn. (14), thus we have:
Case 2: MTBF goal is met before the last phase
Eqn. (23) still applies, but in this case T and Δ are the end time and fix delay of the phase during which the goal is met.
Case 3: MTBF goal is met after the final phase
If the goal MTBF cannot be met within the planned phase durations and fix delays, then one or more of the following alternatives can be considered:
• Add more phase(s) to the program.
• Re-examine the phase duration of the existing phases.
• Investigate whether there are potential process improvements in the program that can reduce the average fix delay for the phases.
Other alternative routes for consideration would be to investigate the rest of the inputs in the model:
• Change the management strategy.
• Consider if further program risk can be acceptable, and if so, reduce the growth potential design margin.
• Consider if it is feasible to increase the effectiveness factors of the delayed fixes by using more robust engineering redesign methods.
Note that each change of input variables into the model can significantly influence the results. With that in mind, any alteration in the input parameters should be justified by actionable decisions
that will influence the reliability growth program. For example, increasing the average effectiveness factor value should be done only when there is proof that the program will pursue a different,
more effective path in terms of addressing fixes.
Re: st: Standard Error of a Wald Estimator and -nlcom-
Re: st: Standard Error of a Wald Estimator and -nlcom-
From Austin Nichols <austinnichols@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: Standard Error of a Wald Estimator and -nlcom-
Date Thu, 15 Oct 2009 15:33:02 -0400
Misha Spisok <misha.spisok@gmail.com> :
The Wald estimator with the correct standard error is here:
use http://pped.org/card.dta
ivreg lwage exper (educ=nearc4), nohe r
as shown in
You can also:
reg lwage exper nearc4, nohe r
loc b1=_b[nearc4]
loc s1=_se[nearc4]
reg educ exper nearc4, nohe r
loc b2=_b[nearc4]
loc s2=_se[nearc4]
di `b1'/`b2'
and use -suest- and so forth, but why would you? Unless you are
estimating on two separate datasets...
On Thu, Oct 15, 2009 at 3:19 PM, Misha Spisok <misha.spisok@gmail.com> wrote:
> Prof. Nichols,
> Please forgive my obtuseness, but I'm not sure what you mean by
> "rather than aggregating quantities over multiple regressions." I
> think you are referring to using the results from separate regressions
> on the same data set, as you point out in the first line of the post
> to which you referred me. Please correct me if I'm wrong.
> Also, by having "three ways to get a SE," do you mean to include the
> initial one from the code
> (http://www.stata.com/statalist/archive/2009-10/msg00498.html) that
> comes from the two separate OLS regressions and the following
> calculation?
> di `b1'/`b2'*sqrt((`s2'/`b2')^2+(`s1'/`b1')^2)
> I would understand this to be incorrect for the reasons given in the
> first line of the referenced post--i.e., it uses the results from
> separate estimates on the same data--in addition to the fact that it
> neglects to correct for any correlation between b1 and b2.
> And, for the sake of clarity, the other two being:
> di `b1'/`b2'*sqrt((`s2'/`b2')^2+(`s1'/`b1')^2-2*`c'/`b1'/`b2')
> which, I understand to be an approximate standard error formula with a
> correction for non-zero covariance, and
> nlcom [r1_mean]_b[nearc4]/[r2_mean]_b[nearc4]
> which, if I understand the documentation correctly, uses some
> numerical implementation of the delta method.
> Thank you for your time and patience. I appreciate you correcting my
> misunderstandings and taking the time to provide tidy examples.
> Best,
> Misha
> On Thu, Oct 15, 2009 at 8:49 AM, Austin Nichols <austinnichols@gmail.com> wrote:
>> Misha Spisok <misha.spisok@gmail.com> :
>> Instead of focusing on the final line, look at the first sentence of the post:
>> http://www.stata.com/statalist/archive/2009-10/msg00498.html
>> You have 3 ways to get a SE, not 2, and -ivreg- or equivalent (I use
>> -ivreg2- from SSC) is the way to go, rather than aggregating
>> quantities over multiple regressions.
>> On Wed, Oct 14, 2009 at 8:58 PM, Misha Spisok <misha.spisok@gmail.com> wrote:
>>> In brief, are the two following approaches for the standard error of a
>>> Wald estimate equivalent? If not, why not?
>>> use http://pped.org/card.dta
>>> reg lwage exper nearc4, nohe r
>>> loc b1=_b[nearc4]
>>> loc s1=_se[nearc4]
>>> reg educ exper nearc4, nohe r
>>> loc b2=_b[nearc4]
>>> loc s2=_se[nearc4]
>>> ivreg lwage exper (educ=nearc4), nohe r
>>> di `b1'/`b2'
>>> di `b1'/`b2'*sqrt((`s2'/`b2')^2+(`s1'/`b1')^2)
>>> qui reg lwage exper nearc4
>>> est sto r1
>>> qui reg educ exper nearc4, nohe
>>> est sto r2
>>> suest r1 r2
>>> mat v=e(V)
>>> matrix cov=v["r1_mean:nearc4","r2_mean:nearc4"]
>>> loc c=cov[1,1]
>>> -----Approach 1-----
>>> di `b1'/`b2'*sqrt((`s2'/`b2')^2+(`s1'/`b1')^2-2*`c'/`b1'/`b2')
>>> This final line is the result of the approach suggested by Austin
>>> Nichols (http://www.stata.com/statalist/archive/2009-10/msg00498.html)
>>> to get the standard error for the Wald estimator.
>>> Then, using the above results from -suest-,
>>> -----Approach 2-----
>>> nlcom [r1_mean]_b[nearc4]/[r2_mean]_b[nearc4]
>>> The results for the standard error are close (the difference is
>>> 0.00001913), but not exactly the same. Are the two approaches
>>> analytically equivalent but different only numerically?
>>> Thank you for your time and attention.
>>> Misha
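For readers outside Stata: the delta-method standard error for the Wald ratio b1/b2 discussed in this thread can be sketched in Python. The numbers below are placeholders, not the card.dta estimates.

```python
import math

def ratio_se(b1, s1, b2, s2, cov=0.0):
    """Delta-method SE of the ratio b1/b2.

    With cov=0 this is the first approximate formula in the thread,
    |b1/b2| * sqrt((s1/b1)^2 + (s2/b2)^2); the cov term mirrors the
    -2*c/(b1*b2) covariance correction.
    """
    rel_var = (s1 / b1) ** 2 + (s2 / b2) ** 2 - 2.0 * cov / (b1 * b2)
    return abs(b1 / b2) * math.sqrt(rel_var)

# placeholder coefficients and standard errors
se_no_cov = ratio_se(2.0, 0.2, 4.0, 0.4)
se_cov = ratio_se(2.0, 0.2, 4.0, 0.4, cov=0.04)
```

As the thread notes, -nlcom- after -suest- applies the same delta method numerically, so the two routes should agree up to numerical error.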
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2009-10/msg00713.html","timestamp":"2014-04-19T14:34:35Z","content_type":null,"content_length":"11359","record_id":"<urn:uuid:5c0e4adc-55df-4c32-8200-5fd31a0c627a>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00319-ip-10-147-4-33.ec2.internal.warc.gz"} |
The progress of atomic theory
From inside the book
18 pages matching crystal in this book
Results 1-3 of 18
Related books
manner 16
Result from the Lorentz electrodynamics given for comparison 18
WITH SYMMETRICAL 71
11 other sections not shown
Common terms and phrases
alpha-particles angle approximation assumed atomic model atomic number atomic weight axes bodies boron carbon carbon atom centre Chapter charge of 2e computed connecting electrons constant crystal
cube curve denoted diamond lattice direction displacement eccentricity electrostatic force elementary charge elements energy content equal equation equatorial plane equilibrium exist expression
face-centred cubic force due form of electrodynamics frequency given gives helium atom Hence hydrogen atom hydrogen molecule inverse isotope K-series Lorentz mass maximum distance minor axis motion
negative electron number of electrons numerical value oblate spheroid obtained orbit particles passing electron positive charge Poynting vector quantity r-4 term radiation radius vector ratio reason
represented repulsion ring rotation Rutherford Rydberg constant second atom selected atom semi-minor axis shown simple cubic lattice sphere spherical supposed surface symmetrical atoms theory total
number velocity of light vibration wave-length whole atom X-ray z-component zero
Bibliographic information
Result from the Lorentz electrodynamics given for comparison 18 | {"url":"http://books.google.com/books?id=bIBKAAAAMAAJ&q=crystal&dq=related:NYPL33433089969525&lr=&source=gbs_word_cloud_r&cad=6","timestamp":"2014-04-18T06:30:54Z","content_type":null,"content_length":"115750","record_id":"<urn:uuid:c333c9bd-170b-4d0a-b7e8-d04bb1b38775>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00489-ip-10-147-4-33.ec2.internal.warc.gz"}
Albert, John - Department of Mathematics, University of Oklahoma
• Estimating Sums June 27, 2006
• Counting in two ways Reid Barton
• The Classical Inequalities Putnam Practice
• Putnam Seminar 2004 More problems from "Mathematical Miniatures". Here are three shorter problems
• Progressions and Sums Putnam Practice
• (April, 2004) Crows are black,
• Pigeonhole Principle Putnam practice
• 3\22\07 OR 3\32\07 Christopher's NOTES
• Rational and Irrational Numbers Putnam Practice
• Putnam Seminar 2004 We continue with some problems from "Mathematical Miniatures" by S. Savchev and T. Andreescu.
• Pell Equations Putnam Practice
• Diophantine Equations I Putnam practice
• Putnam practice November 12, 2003
• Existence and stability of ground-state solutions of a Schrodinger-KdV system
• Polynomials Putnam practice
• Pafnuty Chebyshev, Steam Engines, and Polynomials by John Albert
• Contemporary Mathematics Volume 00, 0000
• Positivity Properties and Uniqueness of Solitary Wave Solutions
• Putnam Seminar Sept. 17, 2010
• Some problems from the American Mathematical Monthly 1. The perimeter of a triangle ABC is divided into three equal parts by three points P, Q, R. Show that
• Putnam Seminar 2004 More problems from "Mathematical Miniatures".
• A Strange Letter This letter is my translation of a Portuguese original that appeared in the "Novo Boletim do IMECC",
• Symmetric Functions Putnam Practice | {"url":"http://www.osti.gov/eprints/topicpages/documents/starturl/11/965.html","timestamp":"2014-04-16T13:36:34Z","content_type":null,"content_length":"10215","record_id":"<urn:uuid:29676a9d-f3c4-4e35-879f-0fa931882852>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00589-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: March 2008 [00842]
[Date Index] [Thread Index] [Author Index]
Re: Basic plotting of an evaluated function
• To: mathgroup at smc.vnet.net
• Subject: [mg86912] Re: Basic plotting of an evaluated function
• From: Jens-Peer Kuska <kuska at informatik.uni-leipzig.de>
• Date: Wed, 26 Mar 2008 04:52:11 -0500 (EST)
• Organization: Arcor
• References: <fsa5cd$ab2$1@smc.vnet.net>
it would be helpful
a) if you would consider reading the manual
b) use correct Mathematica syntax because
is remarkable nonsense
f[x_?NumericQ, y_?NumericQ] := x*(990 - y^2)
s[x_?NumericQ] := y /. Last[NMinimize[{f[x, y], y < 995, y > 989}, y]]
Plot[s[x], {x, -14, 46}]
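The same computation can be sketched outside Mathematica. This pure-Python grid search (not part of the thread) mimics what s[x] does: for each x it returns the approximately minimizing y on the interval, which could then be plotted against x.

```python
def f(x, y):
    # the example objective from the thread
    return x * (990.0 - y ** 2)

def argmin_y(x, lo=989.0, hi=995.0, n=6000):
    """Crude bounded minimization of f(x, .) by grid search on [lo, hi]."""
    best_y, best_v = lo, f(x, lo)
    for i in range(1, n + 1):
        y = lo + (hi - lo) * i / n
        v = f(x, y)
        if v < best_v:
            best_y, best_v = y, v
    return best_y

# pairs (x, s(x)) that a plotting routine could consume
pairs = [(x, argmin_y(float(x))) for x in range(-14, 47, 10)]
```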
lederer at ssb.rochester.edu wrote:
> I am trying to do something basic--but I cannot figure it out
> I have some function in two varaibles
> f[x_,y_]= some arbitrary polynomial
> Now I want to minimize the function with respect to y over some
> compact interval with fixed x
> NMinimize[{f[x,y],y=995,y=989},y]
> I want to plot the y's versus the x's.
> I have tried many things like defining
> s[x_]=Part[NMinimize[{f[x,y],y=995,y=989},y],2]
> Plot[s[x],{x,-14, 46}]
> or variations
> Plot[tt/.s[x],{x,-14, 46}].
> Many error messages or nothing at all results.........
> I am sure this is easy, and someone can suggest how to do this.
> However, one more thing would be greatly helpful (give a man a decent
> and readable Mathematica manual and he can feed himself...)
> I am baffled by what Mathematica wants when it comes to using a single
> part of a list and changing it to a scalar. Suppose as above q is a
> vector with a rule assigning a value to some variable x, that is x -> some number,
> and this assignment is in the third position of the list
> What is going on with q/.d[[3]] for example: Is q now a scalar? If
> not why not?
> Thanks for you help,
> Phil Lederer | {"url":"http://forums.wolfram.com/mathgroup/archive/2008/Mar/msg00842.html","timestamp":"2014-04-21T09:57:56Z","content_type":null,"content_length":"26888","record_id":"<urn:uuid:2a52f7eb-a758-4bf3-b0cc-6a3d1c046373>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00258-ip-10-147-4-33.ec2.internal.warc.gz"} |
Equations Involving Absolute Values
Example 1. Solve `|2x-7|=3`.
If `|a|=3` then either a=3 or a=-3. This means that the given equation is equivalent to the pair of equations: `2x-7=3, 2x-7=-3`.
From the first equation we find that x=5; from the second, that x=2. Thus, there are two solutions: 2 and 5.
Example 2. Solve `|2x-8|=3x+1`.
We need to consider two cases: `2x-8>=0` and `2x-8<0`.
If `2x-8>=0` then `|2x-8|=2x-8` and the given equation can be rewritten as `2x-8=3x+1`. From this we find that x=-9. However, when x=-9 the inequality `2x-8>=0` doesn't hold (`2*(-9)-8=-26<0`), so x=-9 is not a root of the equation.
If `2x-8<0` then `|2x-8|=-(2x-8)` and the given equation can be rewritten as `-(2x-8)=3x+1`. From this we find that `x=7/5`. When `x=7/5` the inequality `2x-8<0` holds (`2*(7/5)-8=-26/5<0`), so `x=7/5` is a root of the equation.
Therefore, there is only one root: `x=7/5`.
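The two-case procedure above is easy to mechanize. Here is a Python sketch (not part of the original note; the function name is ours) for equations of the form `|ax+b|=cx+d`, which reproduces both worked examples:

```python
def solve_abs_linear(a, b, c, d):
    """Solve |a*x + b| = c*x + d by the two-case method.

    Case a*x+b >= 0 gives a*x+b = c*x+d; case a*x+b < 0 gives
    -(a*x+b) = c*x+d.  A candidate is kept only when it satisfies
    the inequality defining its case, exactly as in Example 2.
    """
    roots = []
    if a != c:  # case 1: a*x + b >= 0
        x = (d - b) / (a - c)
        if a * x + b >= 0:
            roots.append(x)
    if -a != c:  # case 2: a*x + b < 0
        x = (d + b) / (-a - c)
        if a * x + b < 0:
            roots.append(x)
    return roots
```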
Note that an equation of the form `|x-a|=b` can also be solved geometrically. | {"url":"http://www.emathhelp.net/notes/algebra-2/trigonometry/equations-involving-absolute-values/","timestamp":"2014-04-17T06:46:44Z","content_type":null,"content_length":"158019","record_id":"<urn:uuid:7b19cdcd-2154-40e8-a9b7-8a206326708c>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00054-ip-10-147-4-33.ec2.internal.warc.gz"}
Author valueOf()
String txt = "Honey ";
Joined: Jan
10, 2006 System.out.println("double for the above string is: " + Double.valueOf(txt));
Posts: 7
why doesn't it execute.
Joined: Nov
"Honey" doesn't look like a parsable number to me
Posts: 14687
[My Blog]
I like... All roads lead to JavaRanch
Java Cowboy
Saloon Keeper
Joined: Aug Do you know what the method Double.valueOf() does with the string that you supply to it? If not, read the API documentation of that method, and you'll understand why giving it a string
16, 2005 like "Honey " doesn't work.
Posts: 13875
Java Beginners FAQ - JavaRanch SCJP FAQ - The Java Tutorial - Java SE 7 API documentation
I like... Scala Notes - My blog about Scala
Or, with other words: What would you expect it to do?
Joined: Jul
11, 2001
Posts: 14112 The soul is dyed the color of its thoughts. Think only on those things that are in line with your principles and can bear the light of day. The content of your character is your choice.
Day by day, what you do is who you become. Your integrity is your destiny - it is the light that guides your way. - Heraclitus
i would have no idea what the decimal value of the word "honey" is either, if somebody asked me.
Joined: Oct
02, 2003 the method you are calling tries to read the string and make it into a number. its looking for digits, a possible decimal point, possibly a +/- sign.
Posts: 10916
how is it supposed to convert letters to numbers?
I like...
There are only two hard things in computer science: cache invalidation, naming things, and off-by-one errors
Sheriff [Fred]: how is it supposed to convert letters to numbers?
Joined: Jan Well, if we get rid of the space at the end of "Honey ", and use an Integer or Long rather than a Double, then we could parse it as a number in base 35 or higher...
30, 2000
Posts: 18671 "I'm not back." - Bill Harding, Twister
Well, if we get rid of the space at the end of "Honey ", and use an Integer or Long rather than a Double, then we could parse it as a number in base 35 or higher...
Joined: Dec
30, 2005
Posts: 12
pffft. Only if we assume that it is not case sensitive. Otherwise we obviously cannot parse this in any number system less than base 61.
Sheriff Originally posted by Jim Yingst:
[Fred]: how is it supposed to convert letters to numbers?
Joined: Jul
11, 2001 Well, if we get rid of the space at the end of "Honey ", and use an Integer or Long rather than a Double, then we could parse it as a number in base 35 or higher...
Posts: 14112
Have you tried
Integer.parseInt("Honey", 35);
Sheriff Sure did. It's 26568324.
Joined: Jan Do you get a different result, Ilja?
30, 2000
Posts: 18671 [Joseph]: Only if we assume that it is not case sensitive.
Hey, I'm still flexible on assumptions here - I haven't even committed to a particular base yet. But if we're talking about the parseXxx() methods (and I did say I was using Integer,
or Long), it's documented that these are case insensitive. See details under Character.digit().
My apologies to the original poster, as this little discussion is almost certainly of no interest to you. The point is - what would you expect Double.valueOf() to do here?
[ January 10, 2006: Message edited by: Jim Yingst ]
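Jim's base-35 arithmetic is easy to cross-check outside Java — a Python aside, not from the thread (Python's int() follows the same case-insensitive digit scheme as Character.digit()):

```python
# in base 35 the digits are 0-9 then a-y, case-insensitively,
# so "Honey" is a perfectly good numeral
value = int("Honey", 35)   # 26568324, matching Jim's answer

# but, as the thread explains, it is not a parsable floating-point number
try:
    float("Honey")
    parsable = True
except ValueError:
    parsable = False
```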
Ranch Hand
Originally posted by AJKC Shekhawat:
Joined: Dec String txt = "Honey ";
06, 2001
Posts: 3061 System.out.println("double for the above string is: " + Double.valueOf(txt));
why doesn't it execute.
Actually, this DOES execute. However, I think you are confused about the result, since it is probably not what you expected. So what DO you expect this to do? Please help us help you
by providing some more details.
Java API Documentation
The Java Tutorial
subject: valueOf() | {"url":"http://www.coderanch.com/t/401991/java/java/valueOf","timestamp":"2014-04-19T02:52:27Z","content_type":null,"content_length":"41639","record_id":"<urn:uuid:cfae804d-2f57-47f3-aa99-dc7399eaffd0>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00093-ip-10-147-4-33.ec2.internal.warc.gz"} |
Homework Help
functions - MathMate, Friday, June 25, 2010 at 7:13pm
The answer to the question lies in the definition of a function, which is a transformation from a set X of values to a set Y of values, such that any value in the set X will be transformed to one and
only one value in the set Y.
An ordered pair (x,y) represents one particular element x in the set X that will be transformed into an element y in the set Y.
The formal notation is
y = f(x), where x∈X and y ∈Y.
The set X is called the domain, and contains all valid values of x. The set Y is called the range, and contains all possible values of y.
An example of a function is
y = f(x) = x², and some ordered pairs are: (-1,1), (0,0), (1,1), (2,4), (3,9).
Note that even though some values of y have been duplicated, the same value of x always give one and only one value of y.
If we had a relation such as
y = sqrt(x),
we would have ordered pairs such as
(0,0), (1,1), (1,-1), (4,2), (4,-2).
In this case, since y=±sqrt(x), the same value of x is not transformed to one single value of y, and so sqrt(x) is NOT a function.
Take it from here and post your answer for a check if you wish. | {"url":"http://www.jiskha.com/display.cgi?id=1277496699","timestamp":"2014-04-16T18:35:16Z","content_type":null,"content_length":"10206","record_id":"<urn:uuid:1abcc041-14ef-4bd5-983e-a2aef4a5e96e>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00126-ip-10-147-4-33.ec2.internal.warc.gz"} |
One Very Long Wire Carries Current 30.0 A To The ... | Chegg.com
One very long wire carries current 30.0 A to the left along the x axis. A second very long wire carries current 75.0 A to the right along the line (y = 0.280 m, z = 0).
(a) Where in the plane of the two wires is the total magnetic field equal to zero?
m (along the y axis)
(b) A particle with a charge of -2.00 µC is moving with a velocity of 150 ... at the point (y = 0.100 m, z = 0). Calculate the vector magnetic force acting on the particle.
(c) What If? A uniform electric field is applied to allow this particle to pass through this region undeflected. Calculate the required vector electric field. | {"url":"http://www.chegg.com/homework-help/questions-and-answers/one-long-wire-carries-current-300-ato-left-along-x-axis-second-long-wirecarries-current-75-q406443","timestamp":"2014-04-19T05:42:13Z","content_type":null,"content_length":"27081","record_id":"<urn:uuid:3635ebfa-a4df-4e74-a349-af16b8f23910>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00263-ip-10-147-4-33.ec2.internal.warc.gz"}
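Part (a) reduces to balancing the two wires' field magnitudes, B = μ0 I / (2πr). With opposite current directions the fields cancel only outside the pair, on the side of the weaker current. A Python sketch of that calculation (our reconstruction of the standard reading of this garbled problem, not a posted solution):

```python
I1 = 30.0   # A, along -x at y = 0
I2 = 75.0   # A, along +x at y = d
d = 0.280   # m

# On the y axis with y < 0 the two fields are antiparallel, so they cancel
# where I1 / |y| = I2 / |y - d|.  Solving I1*(d - y) = -I2*y for y:
y0 = I1 * d / (I1 - I2)

# sanity check: the two magnitudes (up to the common mu0/2pi factor) agree
b1 = I1 / abs(y0)
b2 = I2 / abs(y0 - d)
```

This gives y0 ≈ -0.187 m, i.e. about 18.7 cm below the 30.0 A wire.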
A study towards a unified approach to the joint estimation of objective and risk neutral measures for the purpose of option valuation
Results 1 - 10 of 66
- Journal of Financial Economics
"... Abstract: This paper examines the joint time series of the S&P 500 index and near-the-money short-dated option prices with an arbitrage-free model, capturing both stochastic volatility and
jumps. Jump-risk premia uncovered from the joint data respond quickly to market volatility, becoming more promi ..."
Cited by 210 (1 self)
Add to MetaCart
Abstract: This paper examines the joint time series of the S&P 500 index and near-the-money short-dated option prices with an arbitrage-free model, capturing both stochastic volatility and jumps.
Jump-risk premia uncovered from the joint data respond quickly to market volatility, becoming more prominent during volatile markets. This form of jump-risk premia is important not only in
reconciling the dynamics implied by the joint data, but also in explaining the volatility “smirks” of cross-sectional options data.
- Journal of the American Statistical Association , 2001
"... Using high-frequency data on deutschemark and yen returns against the dollar, we construct model-free estimates of daily exchange rate volatility and correlation that cover an entire decade. Our
estimates, termed realized volatilities and correlations, are not only model-free, but also approximately ..."
Cited by 150 (17 self)
Using high-frequency data on deutschemark and yen returns against the dollar, we construct model-free estimates of daily exchange rate volatility and correlation that cover an entire decade. Our
estimates, termed realized volatilities and correlations, are not only model-free, but also approximately free of measurement error under general conditions, which we discuss in detail. Hence, for
practical purposes, we may treat the exchange rate volatilities and correlations as observed rather than latent. We do so, and we characterize their joint distribution, both unconditionally and
conditionally. Noteworthy results include a simple normality-inducing volatility transformation, high contemporaneous correlation across volatilities, high correlation between correlation and
volatilities, pronounced and persistent dynamics in volatilities and correlations, evidence of long-memory dynamics in volatilities and correlations, and remarkably precise scaling laws under
temporal aggregation.
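At its core, the realized-volatility construction summarized in this abstract is just the square root of summed squared intraday returns. A minimal sketch with toy numbers (illustrative only — not the DM/yen series used in the paper):

```python
import math

def realized_vol(intraday_returns):
    """Daily realized volatility: sqrt of the sum of squared returns."""
    return math.sqrt(sum(r * r for r in intraday_returns))

# toy five-minute log returns for one trading day
returns = [0.001, -0.002, 0.0015, -0.0005, 0.001]
rv = realized_vol(returns)
```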
, 2002
"... We propose using the price range in the estimation of stochastic volatility models. We show theoretically, numerically, and empirically that range-based volatility proxies are not only highly
efficient, but also approximately Gaussian and robust to microstructure noise. Hence range-based Gaussian qu ..."
Cited by 114 (11 self)
We propose using the price range in the estimation of stochastic volatility models. We show theoretically, numerically, and empirically that range-based volatility proxies are not only highly
efficient, but also approximately Gaussian and robust to microstructure noise. Hence range-based Gaussian quasi-maximum likelihood estimation produces highly efficient estimates of stochastic
volatility models and extractions of latent volatility. We use our method to examine the dynamics of daily exchange rate volatility and find the evidence points strongly toward two-factor models with
one highly persistent factor and one quickly mean-reverting factor. VOLATILITY IS A CENTRAL CONCEPT in finance, whether in asset pricing, portfolio choice, or risk management. Not long ago,
theoretical models routinely assumed constant volatility (e.g., Merton (1969), Black and Scholes (1973)). Today, however, we widely acknowledge that volatility is both time varying and predictable
(e.g., Andersen and Bollerslev (1997)), and stochastic volatility models are commonplace. Discrete- and continuous-time stochastic volatility models are extensively used in theoretical finance,
empirical finance, and financial econometrics, both in academe and industry (e.g., Hull and
, 2001
"... This paper studies the empirical performance of jump-diffusion models that allow for stochastic volatility and correlated jumps affecting both prices and volatility. The results show that the
models in question provide reasonable fit to both option prices and returns data in the in-sample estimation ..."
Cited by 97 (2 self)
This paper studies the empirical performance of jump-diffusion models that allow for stochastic volatility and correlated jumps affecting both prices and volatility. The results show that the models
in question provide reasonable fit to both option prices and returns data in the in-sample estimation period. This contrasts previous findings where stochastic volatility paths are found to be too
smooth relative to the option implied dynamics. While the models perform well during the high volatility estimation period, they tend to overprice long dated contracts out-of-sample. This evidence
points towards a too simplistic specification of the mean dynamics of volatility.
- Journal of Finance , 2006
"... We especially thank an anonymous referee and Rob Stambaugh, the editor, for helpful suggestions that greatly improved the article. Andrew Ang and Bob Hodrick both acknowledge support from the
NSF. ..."
Cited by 82 (6 self)
We especially thank an anonymous referee and Rob Stambaugh, the editor, for helpful suggestions that greatly improved the article. Andrew Ang and Bob Hodrick both acknowledge support from the NSF.
- Statist. Sci , 2005
"... Abstract. This paper gives a brief overview of the nonparametric techniques that are useful for financial econometric problems. The problems include estimation and inference for instantaneous
returns and volatility functions of time-homogeneous and time-dependent diffusion processes, and estimation ..."
Cited by 35 (8 self)
Abstract. This paper gives a brief overview of the nonparametric techniques that are useful for financial econometric problems. The problems include estimation and inference for instantaneous returns
and volatility functions of time-homogeneous and time-dependent diffusion processes, and estimation of transition densities and state price densities. We first briefly describe the problems and then
outline the main techniques and main results. Some useful probabilistic aspects of diffusion processes are also briefly summarized to facilitate our presentation and applications.
, 2003
"... We study optimal investment strategies given investor access not only to bond and stock markets but also to the derivatives market. The problem is solved in closed form. Derivatives extend the
risk and return tradeoffs associated with stochastic volatility and price jumps. As a means of exposure to ..."
Cited by 33 (5 self)
We study optimal investment strategies given investor access not only to bond and stock markets but also to the derivatives market. The problem is solved in closed form. Derivatives extend the risk
and return tradeoffs associated with stochastic volatility and price jumps. As a means of exposure to volatility risk, derivatives enable non-myopic investors to exploit the time-varying opportunity
set; and as a means of exposure to jump risk, they enable investors to disentangle the simultaneous exposure to diffusive and jump risks in the stock market. Calibrating to the S&P 500 index and
options markets, we find sizable portfolio improvement from derivatives investing.
, 2006
"... In the econometric literature of high frequency data, it is often assumed that one can carry out inference conditionally on the underlying volatility processes. In other words, conditionally
Gaussian systems are considered. This is often referred to as the assumption of “no leverage effect”. This is ..."
Cited by 12 (3 self)
In the econometric literature of high frequency data, it is often assumed that one can carry out inference conditionally on the underlying volatility processes. In other words, conditionally Gaussian
systems are considered. This is often referred to as the assumption of “no leverage effect”. This is often a reasonable thing to do, as general estimators and results can often be conjectured from
considering the conditionally Gaussian case. The purpose of this paper is to try to give some more structure to the things one can do with the Gaussian assumption. We shall argue in the following
that there is a whole treasure chest of tools that can be brought to bear on high frequency data problems in this case. We shall in particular consider approximations involving locally constant
volatility processes, and develop a general theory for this approximation. As applications of the theory, we propose an improved estimator of quarticity, an ANOVA for processes with multiple
regressors, and an estimator for error bars on the Hayashi-Yoshida estimator of quadratic covariation Some key words and phrases: consistency, cumulants, contiguity, continuity, discrete observation,
efficiency, Itô process, likelihood inference, realized volatility, stable convergence
, 2005
"... Stochastic volatility (SV) is the main concept used in the fields of financial economics and mathematical finance to deal with the endemic time-varying volatility and codependence found in
financial markets. Such dependence has been known for a long time, early comments include Mandelbrot (1963) and ..."
Cited by 12 (0 self)
Stochastic volatility (SV) is the main concept used in the fields of financial economics and mathematical finance to deal with the endemic time-varying volatility and codependence found in financial
markets. Such dependence has been known for a long time, early comments include Mandelbrot (1963) and Officer (1973). It was also clear to the founding fathers of modern continuous time finance that
homogeneity was an unrealistic if convenient simplification, e.g. Black and Scholes (1972, p. 416) wrote “... there is evidence of non-stationarity in the variance. More work must be done to predict
variances using the information available. ” Heterogeneity has deep implications for the theory and practice of financial economics and econometrics. In particular, asset pricing theory is dominated
by the idea that higher rewards may be expected when we face higher risks, but these risks change through time in complicated ways. Some of the changes in the level of risk can be modelled
stochastically, where the level of volatility and degree of codependence between assets is allowed to change over time. Such models allow us to explain, for example, empirically observed departures
from Black-Scholes-Merton prices for options and understand why we should expect to see occasional dramatic moves in financial markets. The outline of this article is as follows. In section 2 I will
trace the origins of SV and provide links with the basic models used today in the literature. In section 3 I will briefly discuss some of the innovations in the second generation of SV models. In
section 4 I will briefly discuss the literature on conducting inference for SV models. In section 5 I will talk about the use of SV to price options. In section 6 I will consider the connection of SV
with realised volatility. A extensive reviews of this literature is given in Shephard (2005). 2 The origin of SV models The origins of SV are messy, I will give five accounts, which attribute the
subject to different sets of people. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=167792","timestamp":"2014-04-18T02:08:36Z","content_type":null,"content_length":"38467","record_id":"<urn:uuid:7de1e1a8-83e4-4fb0-98fb-980e322c0c72>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00285-ip-10-147-4-33.ec2.internal.warc.gz"} |
need some hints for homework
Write a program that finds all of the prime numbers between 1 and 100.
I was trying to get my head around the part: prime numbers. However, I am not sure how to get them work in programming.
Any idea or hints about it?
Well, there are two common approaches to Prime numbers. If you need to find a large number of primes, say 10000 or more, then you might use the sieve of Eratosthenes - google it. But in this case, I think it
would be simpler to loop through the numbers from 1 to 100 and test each one to see whether it is prime.
How to test whether a number is prime.
I'd recommend this is made into a self-contained function which takes an integer as a parameter, and returns a bool result.
The number 1 by definition is not prime.
Divide the number by each integer starting with 2 and smaller than itself. If the remainder is non-zero in every case then the number is prime. (there are ways to make this process more efficient).
+1 for "separate" primality check. As for the more efficient way, that would probably be the check up to the square root of the number.
Chervil, would you like to give me bit more hints about the ways to make the process?
[DEL:I think you mean for those true / false statements to be the other way round?:DEL]
Last edited on
Oh yeah, my mistake. Should I edit?
Yeah. That was for clarity. Plus, I think it looks neater just to use it that way always.
Even I have made that mistake before.
@MikeyBoy You're completely right. I will adopt that practice now.
Make it a habbit boys. Add those braces, it will just pay off in the long run.
I see the use of 'new' through out your code, but never delete. I thought you should always have a delete go after new for clean up.
Did I miss something?
The program is now more "complete."
Topic archived. No new replies allowed. | {"url":"http://www.cplusplus.com/forum/beginner/109051/","timestamp":"2014-04-18T18:26:31Z","content_type":null,"content_length":"27130","record_id":"<urn:uuid:5dcb4013-b190-4ee5-9dd0-8b0c3159c21c>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00051-ip-10-147-4-33.ec2.internal.warc.gz"} |
vector (rigid body) question
November 8th 2010, 06:30 PM #1
Super Member
Dec 2008
vecotr (rigid body) question
Can someone help me on the following question:
A rigid body rotates with an angular speed of 7 radians per second about an axis in the direction i+j+3k, which passes through the point (2,3,6). All coordinates are in metres. The linear velocity
(in m/sec) of the point (1,-,4) on the body is given by.
How would you start this question?
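One way to start: the linear velocity of a point p on a rigid body is v = ω × (p − p0), where ω is the angular speed times the unit vector along the axis and p0 is a point on the axis. A Python sketch with the given data (the point p is left as a parameter, since the point in the post is garbled):

```python
import math

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def linear_velocity(p, p0=(2.0, 3.0, 6.0), axis=(1.0, 1.0, 3.0), speed=7.0):
    """v = omega x (p - p0), with omega = speed * axis / |axis|."""
    norm = math.sqrt(sum(c * c for c in axis))
    omega = tuple(speed * c / norm for c in axis)
    r = tuple(pc - qc for pc, qc in zip(p, p0))
    return cross(omega, r)
```

Note that points on the axis itself have zero velocity, and every velocity is perpendicular to the axis.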
November 9th 2010, 02:03 AM #2 | {"url":"http://mathhelpforum.com/pre-calculus/162598-vecotr-rigid-body-question.html","timestamp":"2014-04-19T02:06:57Z","content_type":null,"content_length":"32857","record_id":"<urn:uuid:f0cbf9a3-56a1-4e8c-a4d9-19a7292a905f>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00502-ip-10-147-4-33.ec2.internal.warc.gz"} |
Really quick question, should be easy for most people
March 28th 2009, 04:44 PM #1
Mar 2009
Really quick question, should be easy for most people
I have a real quick question. I suck at math and this popped up while finding a derivative. I am not sure if this should be in this part of the board or not, but I am putting it here cause this
corresponds to the math level where this popped up.
If I have 3(2x+h)/h, would it be permissible to cancel out the h and leave 3*2x=6x, or is this incorrect?
This would be incorrect because if you expand it:
= (6x + 3h)/h
= 6x/h + 3h/h
= 6x/h +3
The h can only be canceled out if every term in the numerator had an h.
Hope this helps!
Ah, right, I see. I knew something was feeling wrong about it, but didn't know what. I'm not a good math person, but I know when I'm probably doing something wrong. Thank you.
Going back, I think I see where I might've went the wrong way.
I end up with $6xh+3h^2/h$
Can this be simplified to $6x+3?$ Cause what I did was just take out the h from the $3h^2$...
Last edited by mr fantastic; March 28th 2009 at 06:28 PM. Reason: Merged posts
$\frac{6xh + 3h^2}{h} = \frac{h (6x + 3h)}{h} = \frac{\not{h} (6x + 3h)}{\not{h}} = 6x + 3h$.
It would be better if you posted the whole question ..... Otherwise who knows whether you're even meant to have $\frac{6xh + 3h^2}{h}$ as part of the working ....
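For context, the thread is simplifying the difference quotient of f(x) = 3x²: (3(x+h)² − 3x²)/h = (6xh + 3h²)/h = 6x + 3h, which tends to 6x as h → 0. A quick numeric check (not from the thread):

```python
def f(x):
    return 3.0 * x * x

def difference_quotient(x, h):
    # equals 6x + 3h exactly, up to floating-point rounding
    return (f(x + h) - f(x)) / h

# as h shrinks the quotient approaches the derivative 6x
approx = [difference_quotient(2.0, h) for h in (1.0, 0.1, 0.001)]
```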
March 28th 2009, 04:58 PM #2
Junior Member
Nov 2008
March 28th 2009, 05:02 PM #3
Mar 2009
March 28th 2009, 06:27 PM #4
Discrete Math Tutors
Berkeley, CA 94720
Enthusiastic about tutoring all levels of math! :)
...As a masters student at Caltech, I was a TA for a graduate course on probability and random processes. And finally, as a PhD student at Cal, I have been a TA for an undergrad level intro to
discrete math
and probability class required for all computer science undergraduate...
Offering 10+ subjects including discrete math
Stop Watch - Math Genius
We have here a stop-watch with three hands.
The second hand, which travels once round the face in a minute, is the one with the little ring at its end near the centre.
Our dial indicates the exact time when its owner stopped the watch.
You will notice that the three hands are nearly equidistant.
The hour and minute hands point to spots that are exactly a third of the circumference apart, but the second hand is a little too advanced.
An exact equidistance for the three hands is not possible.
Now, we want to know what the time will be when the three hands are next at exactly the same distances as shown from one another.
Can you state the time?
See answer
Moment of Inertia of two particles located in the x-y plane
In this problem, you will answer several questions that will help you better understand the moment of inertia, its properties, and its applicability.
--------- Diagram ------------------------
1. On which of the following does the moment of inertia of an object depend?
A. linear speed
B. linear acceleration
C. angular speed
D. angular acceleration
E. total mass
F. shape and density of the object
G. location of the axis of rotation
Type the letters corresponding to the correct answers. Do not use commas. For instance, if you think that only assumptions C and D are correct, type CD.
2. Find the moment of inertia Ix of particle a with respect to the x axis (that is, if the x axis is the axis of rotation), the moment of inertia Iy of particle a with respect to the y axis, and the
moment of inertia Iz of particle a with respect to the z axis (the axis that passes through the origin perpendicular to both the x and y axes).
Express your answers in terms of m and r separated by commas.
3. It is useful to see how the formula for rotational kinetic energy agrees with the formula for the kinetic energy of an object that is not rotating. To see the connection, let us find the kinetic
energy of each particle.
Using the formula for the kinetic energy of a moving particle, K = (1/2)mv^2, find the kinetic energy of particle a and the kinetic energy of particle b.
Express your answers in terms of m, w, and r separated by a comma.
Two particles are located in the x-y plane. The moments of inertia of the two particles with respect to the x, y and z coordinate axes are found. Further, the kinetic energy of each particle is derived and
expressed in terms of the particle's mass, distance and angular speed.
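For a point particle at (x, y) in the x-y plane, the definitions reduce to I_x = m·y², I_y = m·x², and I_z = m(x² + y²) = I_x + I_y, the perpendicular-axis relation for planar bodies. A sketch with hypothetical positions, since the diagram is not reproduced here:

```python
def moments_of_inertia(m, x, y):
    """Moments of inertia of a point mass m at (x, y) in the x-y plane."""
    ix = m * y ** 2             # about the x axis: distance to the axis is |y|
    iy = m * x ** 2             # about the y axis: distance to the axis is |x|
    iz = m * (x ** 2 + y ** 2)  # about the z axis through the origin
    return ix, iy, iz

# hypothetical placement: a 2 kg particle on the y axis at distance 3 m
ix, iy, iz = moments_of_inertia(2.0, 0.0, 3.0)
print(ix, iy, iz)          # -> 18.0 0.0 18.0
assert iz == ix + iy       # perpendicular-axis relation holds for any planar placement
```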
Find the inserted element in the list
up vote 5 down vote favorite
In a recent interview I was asked:
If you have 2 lists
listA listB
Each is of size 1000 and contains the same elements 1 - 1000. If an element, N, is added to listB how can you determine the value of that element?
I responded correctly by saying to subtract listB from listA and the remainder would be the value.
Then he said what if we just have listB after N was added. How would you determine the value that was added?
I failed to answer this!!!! I should know it but I just can't think of it. The hint he gave was to do something similar to what I did in the first problem.
Any suggestions?
Are the elements {1,2,3 .... 1000}? – belisarius Oct 21 '11 at 2:31
"size 1000 and contain the same elements 1 - 1000": does this mean that each list contains each value between 1 and 1000 exactly once, or does it mean that each list contains 1000 elements, each element is in the range 1 - 1000, the two lists contain the same elements (perhaps not in the same order), but there may be dupes? If the latter, then without having listA, or a chance to see listB before N is added, it's clearly not possible to know which element was added to listB last. If the former then the problem's simple, e.g. just subtract 500500 from the sum of elements of listB. – Steve Jessop Oct 21 '11 at 2:31
betterexplained.com/articles/… – vikingosegundo Oct 29 '11 at 23:10
4 Answers
You can exclusive-XOR all the elements of both lists together, and whatever is new (N) is what you will have left. This is the answer to the first question.
up vote 2 down vote accepted
Yeah that is a good idea and much better than adding both and subtracting from each other. – segFault Oct 21 '11 at 2:28
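The XOR approach can be sketched as follows; it works because x ^ x = 0, so every value that appears in both lists cancels out, leaving only the extra element:

```python
from functools import reduce
from operator import xor

def find_added(list_a, list_b):
    """XOR every element of both lists; paired values cancel, leaving the extra one."""
    return reduce(xor, list_a + list_b)

list_a = list(range(1, 1001))
list_b = list_a + [417]            # 417 plays the role of N here
print(find_added(list_a, list_b))  # -> 417
```

A nice property: this still works when N duplicates an existing value, since that value then appears an odd number of times overall.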
Sum A into sumA. Insert a new element. Sum the list now into sumB. Subtract sumA from sumB. Viola is a musical instrument.
up vote 2 down vote
I know the first one, but I don't know the second. I only have listB now. – segFault Oct 21 '11 at 2:28
I understood the question that you don't keep listA and listB to diff. But you have to remember something, otherwise it's impossible. So sum the listA before you add the element,
remember the sum, then sum the listB after you add the element. Also, what @SteveJessop said - you should know by the question format what the sum of listA is. – Amadan Oct 21 '11 at
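Amadan's point, combined with Steve Jessop's comment, answers the second question: the sum of 1..1000 is known in advance (n(n+1)/2 = 500500), so only listB is needed after the insert. A sketch:

```python
def find_added_from_b(list_b, n=1000):
    """list_b held 1..n before one extra value was inserted somewhere."""
    expected = n * (n + 1) // 2        # 500500 for n = 1000
    return sum(list_b) - expected

list_b = list(range(1, 1001)) + [73]
print(find_added_from_b(list_b))       # -> 73
```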
Assuming the list is not automatically sorted when inserting new elements, you can determine what the new element is if you know what the list's "Add" method implementation is.
If the list adds the new element by appending it, you know the new element will be the last element in the list.
If the list adds the new element by pre-pending it, you know the new element will be the first element in the list.
up vote 1 down vote
If you insert the element into the list, you know exactly where the element is because you specified the location.
If the list randomly inserts the element upon adding it, then you're out of luck. In this case you'll have to have something to use as a comparison (the original list).
This is the result of an object not just having state, but also having behaviour.
Considering that most of what I do is SQL based:
up vote 0 down vote
select *
from listb b
left outer join lista a on (a.id = b.id)
where (a.id is null)
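The anti-join above can be tried out with Python's built-in sqlite3 module (a sketch; the table and column names follow the answer). Note it only finds N when N is not already present in lista; a duplicated value would need a count comparison instead:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE lista (id INTEGER)")
cur.execute("CREATE TABLE listb (id INTEGER)")
rows = [(i,) for i in range(1, 1001)]
cur.executemany("INSERT INTO lista VALUES (?)", rows)
cur.executemany("INSERT INTO listb VALUES (?)", rows + [(1500,)])  # 1500 stands in for N

# anti-join: rows of listb with no partner in lista
cur.execute("""
    SELECT b.id
    FROM listb b
    LEFT OUTER JOIN lista a ON (a.id = b.id)
    WHERE a.id IS NULL
""")
print(cur.fetchall())  # -> [(1500,)]
```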
By 10, by 9, by 5, by 3, and by chunks. This unit contains tricks for multiplication and for checking your work. A playful (and very useful) approach to multiplication. This unit is presented at
three levels of varying complexity.
A one page math lesson on the Properties of Equality includes: Addition Property of Equality, Multiplication Property of Equality, Reflexive Property of Equality, Symmetric Property of Equality,
and the Transitive Property of Equality. It is followed by two pages of equations for practice.
Four pages of worksheets for practicing multiplication by 0, 1, and 2. With answers and cute illustrations.
Cut out the factor squares, cut apart the answer boxes, place the correct answer box in the correct square.
• [member-created with abctools] Trace and cut out. This train-shaped shapebook is a fun way for students to learn to count to 100 by 10s. Makes a great shapebook or bulletin board decoration.
Master the multiplication tables with these wacky flashcard-holding animals (a different animal for each set). Up to 12 x 12.
“A group of migrating blue whales travels twenty-six miles per day. How far do they travel in eight days?” Five multiplication word problems with an endangered animal theme.
• Amanda had $3.00. She bought a hot dog for $1.35, chips for 35 cents, and a drink for 85 cents. Did she have enough money? Did she have money left over? If so, how much? Six word problems.
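For the money problem quoted above, the arithmetic works out like this (a sketch using integer cents to avoid floating-point issues):

```python
money = 300             # Amanda's $3.00, in cents
spent = 135 + 35 + 85   # hot dog + chips + drink
print(spent <= money)   # -> True: she had enough
print(money - spent)    # -> 45 cents left over
```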
Great for practice. Use alone, or link all the bookmarks together on a ring.
This lesson is designed to help students practice multiplication skills. Combine groups of threes and fives with ones to complete a chart; includes teaching suggestions and answer sheets.
Our math "machines" make multiplication drills fun. Cut out the two shapes and practice simple multiplication. With multiplicands up to 10.
"Lisa bought 5 pencils at the school store. The cost of each pencil is 33 cents. How much did Lisa pay for her purchase?" One page, five problems.
A one page explanation of the rules of using exponents, followed by a practice sheet and an answer page.
Colorful multiplication table bookmarks to use alone or in conjunction with the interactive multiplication games.
6 pages of worksheets to practice multiplication to 12x12, with answer sheets.
Explains factors and gives students the chance to practice their knowledge.
• Jacob loves school, but he hates multiplication. A realistic fiction reading comprehension.
• [member-created using abctools] Write a multiplication equation with prime numbers that equals the given number. (answers included) Common Core: 6.NS.4
A set of three posters featuring multiplication word problems with U. S. currency.
4 pages of worksheets to practice 1-digit multiplication.
"When you read PRODUCT... multiply!" Five posters of guidelines to help with reading word problems as equations.
Book comprehension and vocabulary enhancement for this installment of Marc Brown's popular "Arthur" series. Arthur has trouble with truth in advertising.
"There are three rows of pencils. Each row has four pencils. How many colored pencils does Yoko have?" Five school-themed multiplication problems.
"Eddie can pick and clean ten pumpkins per hour. How many can he do in four hours? How many in five? How many in six?" Five Thanksgiving-themed skip counting (by tens) word problems.
Explains common factors and gives students a chance to practice their knowledge.
3 pages of worksheets to practice 2-digit multiplication, plus answers.
5 pages of worksheets to practice 2-digit multiplication, plus answers.
All the word problems from sets A-U, unnumbered and unformatted; these can be cut into strips and glued into math journals for daily practice. Answers are provided.
"Dot-to-dot multiplication." Practice multiplication skills by finding and extending patterns, and then writing appropriate equations.
"Cowboy Jake is afraid of rattlesnakes. He sees a lot of them on the ranch where he lives. In fact, he sees an average of thirty every year. About how many has he seen in the nine years he has
lived on the ranch?"
Students provide missing products for twelve basic multiplication facts (autumn leaves) 3 pages.
Worksheet practicing rounding numbers to the nearest 100 and estimating the sums. Common Core: 4.NBT.A.3
"How many e-mails does he forward in one week altogether?" One page; 12 word problems with contemporary themes.
Is mathematics Important for all types of courses? Is there any valuable degree that doesn't require maths except Medical?
1. 6th June 2011 02:42 PM #1
Is mathematics Important for all types of courses? Is there any valuable degree that doesn't require maths except Medical?
Hello sir,
is maths really important for doing everything? Is there any valuable degree that doesn't require maths, except medical?
3. 6th June 2011 08:55 PM #2
Re: Is mathematics Important for all types of courses? Is there any valuable degree that doesn't require maths except Medical?
Dear friend
You can go for Biotechnology; it is a very emerging field and only elementary-level maths is required.
4. 6th June 2011 09:25 PM #3
Re: Is mathematics Important for all types of courses? Is there any valuable degree that doesn't require maths except Medical?
Mathematics is not compulsory for all types of courses; it depends on the course.
5. 6th June 2011 09:34 PM #4
Re: Is mathematics Important for all types of courses? Is there any valuable degree that doesn't require maths except Medical?
There are some courses which don't have mathematics. These courses are:
1. Animation
2. Fashion designing
3. Interior designing
6. 7th June 2011 10:37 AM #5
Re: Is mathematics Important for all types of courses? Is there any valuable degree that doesn't require maths except Medical?
Almost every valuable degree includes at least one paper in maths, for example BE, BCA, BBA, B.Com.
7. 7th June 2011 11:32 AM #6
Re: Is mathematics Important for all types of courses? Is there any valuable degree that doesn't require maths except Medical?
If you are interested in science but you don't want to study maths, then you may go for graduation, where you will get lots of options without maths. You may also go for mass media courses; they will help you a lot.
8. 7th June 2011 11:36 AM #7
Re: Is mathematics Important for all types of courses? Is there any valuable degree that doesn't require maths except Medical?
Yes, there are courses that do not require maths, like CA, CS, and LLB.
9. 7th June 2011 11:59 AM #8
Re: Is mathematics Important for all types of courses? Is there any valuable degree that doesn't require maths except Medical?
10. 7th June 2011 03:26 PM #9
Re: Is mathematics Important for all types of courses? Is there any valuable degree that doesn't require maths except Medical?
Yes, maths is very important for doing any professional course.
Let's take an example:
If you are thinking of doing MCA, maths is essential.
You can't do any engineering course without maths.
In commerce, you can apply without maths, but then after 12th you will have fewer opportunities compared to others.
For CA, maths is very important.
You can't do a computer science course without abstract maths.
So, it's better to have strong basics in maths, to get an edge over others.
11. 8th June 2011 08:29 PM #10
Re: Is mathematics Important for all types of courses? Is there any valuable degree that doesn't require maths except Medical?
No dear, math is a really important subject.
12. 10th June 2011 08:05 PM #11
Re: Is mathematics Important for all types of courses? Is there any valuable degree that doesn't require maths except Medical?
Every single course requires mathematics at one level or another; what matters is how, and at what level, you need to get familiar with it. At times only a little logic may be needed. A brief and basic knowledge of it should be enough for you to get into school. A conceptual approach towards mathematics should be there. All bachelor's degrees require one or two levels of maths as part of the general curriculum. Degrees in Fine Arts and Humanities require at most basic maths or algebra. You have the option of choosing subjects like Sociology, Psychology, History or Liberal Arts, which require maths only by way of statistics. Another option would be subjects like Literature or Journalism, which have low maths to
13. 15th June 2011 11:06 PM #12
Re: Is mathematics Important for all types of courses? Is there any valuable degree that doesn't require maths except Medical?
Without maths there are many degrees, but in those degrees a little maths is required. That maths will not be hard; it is quite easy.
the degree courses without maths are
biotechnology ,
bpharmacy ,
animation ,
fashion designing ,
bcom ,
BA english ,
CA ,
15. 6th July 2011 11:34 PM #14
Re: Is mathematics Important for all types of courses? Is there any valuable degree that doesn't require maths except Medical?
Yes, there are courses that do not require Maths and are also not in any way connected or related to the medical field. Nowadays, there are a number of courses opted for by people from the commerce field as well as people from the science field. These courses are popular among every category of students.
One of the most prominent examples is Chartered Accountancy, which nowadays is also done by a lot of people from a science background, and importantly, it does not require a person to be perfect in Maths.
It also does not require the person to have Maths as an optional or main subject in 12th standard. Apart from that, there are several other courses which also do not require a good hand at Mathematics. They can be listed as follows:
1) BCA (Bachelor of Computer Applications)
2) BBA (Bachelor of Business Administration)
3) B.Sc in Computer Science
4) B.Sc in Information Technology
5) B.Sc in Physics
6) B.Sc in Chemistry, etc.
All these have their own importance in their respective field and all have the potential to become a bright and successful career option. It only requires that you should have a real interest
and dedication in your respective field.
16. 9th July 2011 07:37 PM #15
Re: Is mathematics Important for all types of courses? Is there any valuable degree that doesn't require maths except Medical?
No, math is not compulsory for every course. For some technical and non-technical courses math is not necessary. If you want to make your career in management or computers, then advanced math is not required. You can take admission in BCA, BBA, BA, B.Sc IT, MBA, MCA, MA, and M.Sc IT, where math is not necessary.
There are also many other courses and fields which you can join to make a good career: animation, hardware, networking, software engineering, modelling, fashion designing, hospitality management, event management, banking, etc.
17. 13th January 2012 07:58 AM #16
Re: Is mathematics Important for all types of courses? Is there any valuable degree that doesn't require maths except Medical?
Which are the courses that I can join, as I do not have maths in 11th and 12th? I had taken bio-computer. Can you suggest me some courses and colleges?
Probability and Moment Generating Functions
May 22nd 2008, 02:16 AM
Probability and Moment Generating Functions
I am terrible when it comes to probability and moment generating functions. I missed 2 lectures this week and I am now completely lost on the topic. I have two questions that I need to do by
tomorrow and I was hoping if shown how to do one of the questions, I could work out the other one myself as it is similar.
Here is the first question:
Let Xn be a discrete random variable that takes the values 1,2,...,n with equal probability 1/n. Find the probability generating function of Xn and then determine its moment generating function.
Determine the moment generating function of Yn = Xn/n. Show that the moment generating function of Yn = Xn/n converges pointwise to the moment generating function of a random variable that is
uniformly distributed on (0,1).
I know this is a lot to do, so even if someone could tell me what it is I have to do here, that would be great. Like I said, I missed 2 lectures so I've tried to learn this concept on my own, and have
had nobody to correct any assumptions I make that are incorrect. Thank-you.
May 22nd 2008, 02:43 AM
mr fantastic
I am terrible when it comes to probability and moment generating functions. I missed 2 lectures this week and I am now completely lost on the topic.I have two questions that I need to do by
tomorrow and I was hoping if shown how to do one of the questions, I could work out the other one myself as it is similar.
Here is the first question:
Let Xn be a discrete random variable that takes the values 1,2,...,n with equal probability 1/n. Find the probability generating function of Xn and then determine its moment generating function.
Determine the moment generating function of Yn = Xn/n. Show that the moment generating function of Yn = Xn/n converges pointwise to the moment generating function of a random variable that is
uniformly distributed on (0,1).
I know this is a lot to do, so even if someone could tell me what it is I have to do here would be great. Like I said, I missed 2 lectures so Ive tried to learn this concept on my own, and have
had nobody to correct any assumptions I make that are incorrect. Thank-you.
This will get you started:
Probability generating function: Read Generating Functions.
So $G_{X_n} (t) = E(t^{X_n}) = \sum_{j = 1}^{n} \frac{t^j}{n} = \frac{1}{n} \sum_{j = 1}^{n} t^j$.
Moment generating function: Read Moment-generating function - Wikipedia, the free encyclopedia.
So $m_{X_{n}} (t) = E \left( e^{tX_n} \right) = \sum_{j = 1}^{n} \frac{e^{jt}}{n} = \frac{1}{n} \sum_{j = 1}^{n} e^{jt}$.
The moment generating function of Y = aX + b is $m_Y (t) = e^{bt} \, m_X(at)$: See Moment Generating Function.
In your question $a = \frac{1}{n}$ and b = 0.
The moment generating function of a random variable distributed uniformly on (a, b) is $\frac{e^{bt} - e^{at}}{t(b - a)}$: See Uniform distribution (continuous) - Wikipedia, the free encyclopedia
Substitute a = 0 and b = 1 to get the answer you're shooting for when finding $\lim_{n \rightarrow \infty} m_Y (t)$ ......
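One way to see the pointwise convergence numerically: the MGF of Y_n is a right Riemann sum of e^{tx} over (0, 1), so it approaches (e^t − 1)/t as n grows. A sketch:

```python
import math

def mgf_Yn(t, n):
    """MGF of Y_n = X_n / n, with X_n uniform on {1, ..., n}."""
    return sum(math.exp(t * j / n) for j in range(1, n + 1)) / n

def mgf_uniform01(t):
    """MGF of a Uniform(0, 1) random variable."""
    return (math.exp(t) - 1) / t if t != 0 else 1.0

t = 1.5
for n in (10, 100, 10000):
    print(n, mgf_Yn(t, n), mgf_uniform01(t))   # the gap shrinks as n grows
```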
May 22nd 2008, 03:11 AM
Thank-you so much. I'll see what I can come up with.
Number Theory
0912 Submissions
[3] viXra:0912.0043 [pdf] replaced on 21 Dec 2009
Imanol's Numbers
Authors: Imanol Pérez
Comments: 2 Pages.
Imanol's numbers are those whose digit sum is 2, 3, 5, 6, 8 or 9.
Category: Number Theory
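A minimal sketch of the definition as stated in the abstract; it assumes a plain (single-pass) digit sum, since the abstract does not say whether to iterate for multi-digit sums:

```python
IMANOL_SUMS = {2, 3, 5, 6, 8, 9}

def is_imanol(n):
    """Digit sum of n is one of 2, 3, 5, 6, 8, 9 (single-pass sum assumed)."""
    return sum(int(d) for d in str(n)) in IMANOL_SUMS

print([n for n in range(1, 31) if is_imanol(n)])
```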
[2] viXra:0912.0040 [pdf] submitted on 18 Dec 2009
Expansion of (1/x + 2/x + ... + a/x)^n
Authors: Imanol Pérez
Comments: 2 Pages. In Spanish
Expansion of (1/x + 2/x + ... + a/x)^n
Category: Number Theory
[1] viXra:0912.0030 [pdf] submitted on 12 Dec 2009
Diophantine Equation 1^N + 2^N + ... + (M-1)^N + M^N = (M+1)^N
Authors: Arkoprobho Chakraborty
Comments: 13 pages.
Erdos had conjectured that the equation of the title had no solutions in natural numbers except the trivial 1^1 + 2^1 = 3^1. Moser (1953) had shown that there are no solutions for M+1 < 10^10^6.
Butske et al (1993) had further shown that there are no solutions for M+1 < 9.3x10^6. In this paper I show that a solution to this equation cannot exist for any value of M > 2 hence proving Erdos'
conjecture. This is achieved using elementary number theoretic methods employing congruences and well-known identities.
Category: Number Theory
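The equation from the abstract above can be brute-force checked for tiny M and N (a sketch; this only illustrates the trivial solution 1 + 2 = 3, it proves nothing about the conjecture):

```python
def lhs(m, n):
    """1^n + 2^n + ... + m^n"""
    return sum(k ** n for k in range(1, m + 1))

# brute-force search of small cases of 1^N + ... + M^N = (M+1)^N
hits = [(m, n) for m in range(1, 50) for n in range(1, 20)
        if lhs(m, n) == (m + 1) ** n]
print(hits)  # -> [(2, 1)], the trivial solution 1 + 2 = 3
```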
In order to apply the Mann-Whitney test, the raw data from samples A and B must first be combined into a single set of N = n[a]+n[b] elements, which are then ranked from lowest to highest, including tied rank values where appropriate. These rankings are then re-sorted into the two separate samples.
If your data have already been ranked, these ranks can be entered directly into the cells headed by the label «Ranks». In this case, please note that the sum of all ranks for samples A and B combined
must be equal to [N(N+1)]/2. If this equality is not satisfied, you will receive a message asking you to examine your data entry for errors.
If your data have not yet been rank-ordered in this fashion, they can be entered into the cells labeled «Raw Data» and the ranking will be performed automatically. There is also an option, below, for
importing raw data from a spreadsheet.
After data have been entered, click one or the other of the «Calculate» buttons according to whether you are starting out with ranks or raw data.
• The value of U reported in this analysis is the one based on sample A, calculated as
U[A] = n[a]n[b] + [n[a](n[a]+1)]/2 - T[A]
where T[A] = the observed sum of ranks for sample A, and
n[a]n[b] + [n[a](n[a]+1)]/2 = the maximum possible value of T[A]
• With relatively small samples, the calculated value of U[A] can be referred directly to the sampling distribution of U[A]. For cases where n[a] and n[b] both fall between 5 and 21, inclusive, the
lower and upper limits of the critical intervals of U[A] are calculated by this page and placed in the designated table below. If either of the samples is of a size smaller than 5, additional
instructions will be given below.
• As n[a] and n[b] increase, the sampling distribution of T[A] becomes a reasonably close approximation of the unit normal distribution. If n[a] and n[b] are both equal to or greater than 5, this
page will also calculate the value of z, along with the corresponding one-tailed and two-tailed probabilities. Note, however, that the approximation to the normal distribution is best when n[a]
and n[b] are both equal to or greater than 10.
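The quantities described above are easy to reproduce; here is a minimal pure-Python sketch that ranks the combined samples (using midranks for ties, as under data entry) and applies the U[A] formula:

```python
def mann_whitney_u_a(sample_a, sample_b):
    """U_A = n_a*n_b + n_a*(n_a + 1)/2 - T_A, using midranks for ties."""
    combined = sorted(sample_a + sample_b)

    def midrank(v):
        first = combined.index(v) + 1          # 1-based position of first occurrence
        return first + (combined.count(v) - 1) / 2

    na, nb = len(sample_a), len(sample_b)
    t_a = sum(midrank(v) for v in sample_a)    # observed rank sum for sample A
    return na * nb + na * (na + 1) / 2 - t_a

# sample A entirely below sample B gives the maximum possible U_A = n_a * n_b
print(mann_whitney_u_a([1, 2, 3], [4, 5, 6]))  # -> 9.0
```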
Option for Importing Raw Data via Copy & Paste:
Within the spreadsheet application or other source of your data, select and copy the column of data for sample A. Then return to your web browser, click the cursor into the text area for sample A and
perform the 'Paste' operation from the 'Edit' menu. Perform the same procedure for sample B. In importing raw data into the Mann-Whitney test, it is absolutely essential that the numbers of data
items for samples A and B are precisely the same as the values of n[a] and n[b] that you have specified in setting up this page. For each sample, make sure that the final entry in the list is not
followed by a carriage return. To perform this check, click the cursor immediately to the right of the final entry in the list, then press the down-arrow key. If an extra line is present, the cursor
will move downward. Extra lines can be removed by pressing the down-arrow key until the cursor no longer moves, and then pressing the 'Backspace' key (on a Mac platform, 'delete') until the cursor
stands immediately to the right of the final entry.
Import Raw Data
[Text areas for Sample A and Sample B, with «Import data to data cells», «Clear A», and «Clear B» buttons]
Data Entry:
[Data-entry grid: «Ranks» and «Raw Data» columns for Samples A and B, with output cells for the mean ranks of the two samples, U[A], and z]
Note that mean ranks are provided only for descriptive purposes. They are not part of the Mann-Whitney test. Note also that the z-ratio is calculated only if n[a] and n[b] are both equal to or greater than 5.
Critical Intervals of U[A]

These critical intervals are calculated only if n[a] and n[b] both fall between 5 and 21, inclusive. For sample sizes smaller than 5, you can refer your results to a standard table of Mann-Whitney critical values, such as the one provided by the Department of Mathematics & Statistics at the University of Saskatchewan.

[Table of critical intervals: a lower limit and an upper limit of U[A] for each level of significance (directional test: .05, .025, .01; non-directional test: --, .05, .02); the cells are filled in by the calculator]

The observed value of U[A] is significant at or beyond the designated level if it is equal to or smaller than the indicated lower limit for that level or equal to or greater than the upper limit. It is non-significant if it is larger than the lower limit and smaller than the upper limit.
©Richard Lowry 2001-
All rights reserved.
How to use recursion with elements in the list?
I'm trying to use recursion in the list and I need to go through all elements. Here is my code:
(define compare
(lambda (ls pred?)
(if (null? list)
(pred? (list-ref ls (- (length ls) 2)) (list-ref ls (- (length ls) 1))))))
But it works only with last two elements. Result should be like this:
(compare '(1 2 3 4 5) <) -> #t
(compare '(1 2 8 4 5) <) -> #f
Do you have any idea what I should do?
1 Answer
You're not using recursion anywhere in the code. In fact, it has errors and I don't think you tested it thoroughly. For example:
• The if condition should be (null? ls)
• Using list-ref is not the way to go when traversing a list in Scheme, for that in general you want to use recursion, car, cdr, etc.
• Again, where is the recursive call? compare should be called at some point!
I believe this is what you intended, it's not recursive but it's the simplest way to implement the procedure:
(define (compare ls pred?)
(apply pred? ls))
Because this looks like homework I can only give you some hints for solving the problem from scratch, without using apply. Fill-in the blanks:
(define (compare ls pred?)
  (if <???>                    ; special case: if the list is empty
      <???>                    ; then return true
      (let loop ((prev <???>)  ; general case, take 1st element
                 (ls <???>))   ; and take the rest of the list
        (cond (<???>           ; again: if the list is empty
               <???>)          ; then return true
              (<???>           ; if pred? is false for `prev` and current element
               <???>)          ; then return false
              (else            ; otherwise advance the recursion
               (loop <???> <???>)))))) ; pass the new `prev` and the rest of the list
Notice that I used a named let for implementing the recursion, so loop is the recursive procedure here: you can see that loop is being called inside loop. Alternatively you could've
defined a helper procedure. I had to do this for taking into account the special case where the list is initially empty.
The recursion works like this for the general case: two parameters are required, prev stores the previous element in the list and ls the rest of the list. At each point in the traversal we check to see if the predicate is false for the previous and the current element - if that is the case, then we return false. If not, we continue the recursion with a new prev (the current element) and the rest of the list. We keep going like this until the list is empty, and only then do we return true.
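By way of illustration only - in Python rather than Scheme, so the blanks above remain an exercise - the traversal just described looks like this:

```python
def compare(ls, pred):
    """True if pred holds for every adjacent pair of ls."""
    if not ls:                  # special case: empty list -> vacuously true
        return True
    prev, rest = ls[0], ls[1:]  # take 1st element and the rest of the list
    while rest:                 # keep going until the list is empty
        if not pred(prev, rest[0]):
            return False        # pred failed for prev and current element
        prev, rest = rest[0], rest[1:]  # new prev, rest of the list
    return True

print(compare([1, 2, 3, 4, 5], lambda a, b: a < b))  # True
print(compare([1, 2, 8, 4, 5], lambda a, b: a < b))  # False
```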
I know about apply but I can't use it. It is the problem. :( – Ats Nov 24 '12 at 12:47
@Ats OK, I updated my answer with some hints ;) – Óscar López Nov 24 '12 at 13:11
I'm a little bit silly. What should be here: (<???> ; if pred? is false for prev and current element. And here: (loop <???> <???>)))))) ; pass the new prev and the rest of the list.
In ??-> (loop (car list) (cdr list))) or something else – Ats Nov 24 '12 at 15:23
@Ats: the second part is correct. For the first part: this is the core of the procedure, think about it: what would make the whole procedure return false? how about this: that the
previous element and the current element, when compared with pred?, evaluate to false. – Óscar López Nov 24 '12 at 15:52
I have tried it all the day, but nothing is working. I tried: (pre? (car lis) prev), (pre? prev (car lis)). I don't know what I should write there. – Ats Nov 25 '12 at 19:21
Wolfram Demonstrations Project
Discrete Versions of Continuous Functions
Math functions can be used to produce nested patterns in 2D images or 3D surfaces.
"Discrete Versions of Continuous Functions" from the Wolfram Demonstrations Project
Contributed by: Daniel de Souza Carvalho
Math Forum Discussions
Topic: Concretizable categories
Replies: 9 Last Post: Apr 11, 2006 10:30 AM
Messages: [ Previous | Next ]
Pham Re: Concretizable categories
Posted: Apr 6, 2006 10:39 AM
Nath Rao <XnaTthHErCaAPoS@yahoo.com> wrote:
> Jonathan Scott wrote:
> > The proof is in "The Steenrod Algebra and its Applications" (proceedings
> > from a conference in honour of Norman Steenrod's 60th birthday), in an
> > article by Lawvere (I think!).
> Lect. Notes Math, no. 168. The author is P. J. Freyd
The article in question is most likely
Peter Freyd: Homotopy is not concrete
It is also available as
Reprints in Theory and Applications of Categories, No. 6 (2004) pp 1-10
at <http://www.tac.mta.ca/tac/reprints/articles/6/tr6abs.html>
Date Subject Author
4/5/06 Jamie Vicary
4/5/06 Re: Concretizable categories Jonathan Scott
4/5/06 Re: Concretizable categories Nath Rao
4/6/06 Re: Concretizable categories Colin McLarty
4/6/06 Re: Concretizable categories David Madore
4/6/06 Re: Concretizable categories Tobias Fritz
4/7/06 Re: Concretizable categories Jamie Vicary
4/11/06 Re: Concretizable categories Philippe Gaucher
4/7/06 Re: Concretizable categories Colin McLarty
4/6/06 Re: Concretizable categories Pham
Q. You have said that Jung dealt with psychological reality implicitly in holistic mathematical terms. Can you start by briefly outlining his treatment of personality types?
PC The Jungian system is based on four functions (thinking, feeling, sensation and intuition) and two attitudes (extroversion and introversion). Combining each of these four functions with the
two attitudes, Jung came up with 8 personality types as follows:
Extrovert Thinking
Introvert Thinking
Extrovert Feeling
Introvert Feeling
Extrovert Sensation
Introvert Sensation
Extrovert Intuition
Introvert Intuition
Now the characteristics associated with each type have been well documented elsewhere so I will not deal with them here.
The Myers-Briggs personality type indicator - which builds on Jung's analysis - distinguishes 16 different personality types.
This system is based on the same four functions (sensing, thinking, feeling and intuition) and four attitudes (extroversion, introversion, perception and judgement).
One way of arriving at the 16 personality types is through permuting (i.e. arranging) the functions two at a time, one dominant and the other auxiliary, each having either an extroverted or introverted attitude respectively.
Thus if we take sensing and thinking as the initial two functions we can obtain the following four personality types:
ISTJ (sensing dominant and introverted, thinking auxiliary and extroverted)
ISTP (thinking dominant and introverted, sensing auxiliary and extroverted)
ESTP (sensing dominant and extroverted, thinking auxiliary and introverted)
ESTJ (thinking dominant and extroverted, sensing auxiliary and introverted)
When we take sensing and feeling as the initial two types, we obtain the following four:
ISFJ (sensing dominant and introverted, feeling auxiliary and extroverted)
ISFP (feeling dominant and introverted, sensing auxiliary and extroverted)
ESFP (sensing dominant and extroverted, feeling auxiliary and introverted)
ESFJ (feeling dominant and extroverted, sensing auxiliary and introverted)
Combining intuition and feeling as the two types, the following four emerge
INFJ (intuition dominant and introverted, feeling auxiliary and extroverted)
INFP (feeling dominant and introverted, intuition auxiliary and extroverted)
ENFP (intuition dominant and extroverted, feeling auxiliary and introverted)
ENFJ (feeling dominant and extroverted, intuition auxiliary and introverted)
Finally, combining intuition and thinking, we have the following:
INTJ (intuition dominant and introverted, thinking auxiliary and extroverted)
INTP (thinking dominant and introverted, intuition auxiliary and extroverted)
ENTP (intuition dominant and extroverted, thinking auxiliary and introverted)
ENTJ (thinking dominant and extroverted, intuition auxiliary and introverted)
However, when we select two functions from four, six (rather than four) distinct pairings are possible (i.e. 4C[2] = 6). Thus in terms of our functions we can also start with intuition and sensing, or alternatively with thinking and feeling.
Thus combining these with the two attitudes (extroversion and introversion) we can generate eight additional personality types (two more groupings representing in each case four distinct
personality types).
Therefore we should generate 24 - rather than 16 - distinct personality types.
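The count is easy to verify mechanically. The sketch below is my own encoding of the argument (letters are the usual Myers-Briggs function initials):

```python
from itertools import combinations

functions = ["S", "N", "T", "F"]   # sensing, intuition, thinking, feeling
pairs = list(combinations(functions, 2))
print(len(pairs))                  # 6 pairings, not just the 4 Myers-Briggs uses

# each pairing yields four types: either function may be dominant,
# and the dominant may be extroverted or introverted
print(len(pairs) * 2 * 2)          # 24
```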
Q. So you are saying that there are eight missing personality types in the Myers-Briggs typology?
PC Yes, this is clearly the case. The Myers-Briggs approach is based on the identification of polar opposites. Thus one is classified as either an extrovert or introvert (E or I); one is either
sense orientated or intuitive (S or N); one is either a thinking or a feeling type (T or F); finally one uses perception or judgement in making decisions (P or J).
However - as we will see shortly - the very essence of the eight "missing" types is the combination of these opposites in personality.
Q. So tell us now about your revision of this system?
PC The Jungian system of types and preferences - though admittedly highly useful - is in some respects confusing. Whereas in normal language feeling relates to emotional-affective experience,
Jung treats it as a rational evaluation function.
Also, intuition (which relates primarily to unconscious experience) can be explained as essentially resulting from the dynamic interaction of conscious functions.
Indeed the 24 personality types (including the 16 recognised in the Myers-Briggs personality type indicator) can actually be derived from just four aspects with two modes (i.e. functions) and two
directions (i.e. attitudes).
The two modes are the cognitive (rational) and the affective (emotional) respectively. The two directions are the external (objective) and internal (subjective) respectively. Combining these two
modes and two directions yields four functions.
The cognitive mode in an external direction is the thinking aspect. The corresponding mode in an internal direction is the judgement aspect (i.e. reason applied to subjective decisions).
The affective mode in an external direction is the perception aspect. The corresponding mode in an internal direction is the feeling aspect.
So we now have four aspects. Each permutation or arrangement of these four aspects (taking all four at a time) gives a distinct personality type.
In mathematics 4P[4] = 24, so 24 such permutations or configurations are possible thereby giving 24 personality types.
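The same count can be computed directly by enumerating the arrangements:

```python
from itertools import permutations

aspects = ["thinking", "judgement", "perception", "feeling"]
types = list(permutations(aspects))
print(len(types))   # 4P4 = 24 distinct personality types
```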
It would be very helpful in what follows to picture the four functions as the corners of a square.
The horizontal lines represent functions of same direction (and differing mode).
The vertical lines represent functions of differing direction (and same mode).
The diagonal lines represent functions both of differing direction and differing mode.
We can now use this diagram to derive three distinct groups (comprising in each case 8 personality types) giving 24 in all.
Each permutation (or arrangement) of aspects gives a distinct personality type. The first represents the dominant, the second the auxiliary, the third the less developed, and the final the inferior aspect.
Q. What are the characteristics of the first group?
PC The first group is the horizontal and includes those personality types with the two principal aspects of same direction (and differing mode). Because unconscious intuition is based on the
dynamic interaction of opposite directions, relatively little is generated by these types. They operate very much out of the conscious process. As they are thereby most firmly rooted in actual
reality they can be mathematically termed the "real" types. These indeed correspond to the sense types in the Myers-Briggs typology. The following are the eight types (with corresponding
Myers-Briggs designation).
P+T+F-J- (ESFP)
P+T+J-F- (ESFJ)
T+P+F-J- (ESTP)
T+P+J-F- (ESTJ)
F-J-P+T+ (ISFP)
F-J-T+P+ (ISFJ)
J-F-P+T+ (ISTP)
J-F-T+P+ (ISTJ)
The dominant and auxiliary functions are of the same sign (equally, both the less developed and inferior functions are of opposite sign). When the dominant and auxiliary functions carry a positive sign we get extroverts; when negative we get introverts.
Again, in terms of the Myers-Briggs type indicator, all of these personalities are (by definition) S types.
If the dominant function (in my classification) is affective (F or P), then for the 3rd letter (in Myers-Briggs) we get F; if the dominant function is cognitive (T or J), we get T (in Myers-Briggs).
Finally, if the third letter (in my classification) is affective (P or F), then the final letter in Myers-Briggs is P; otherwise (if T or J in my classification), it is J in Myers-Briggs.
Q. What about the second personality group?
PC The second group is the vertical which includes those personality types of differing direction (and same mode). When dominant and auxiliary aspects are well balanced much intuition can be
generated through this fusion of opposites. As these tend to operate so much out of the unconscious they are more flexible and creative being interested in the potential for changing reality.
They can be accurately described in mathematical terms as the "imaginary" types. These correspond to the intuitives (N) in the Myers-Briggs typology.
Again these are the corresponding eight types (with Myers-Briggs designation).
P+F-T+J- (ENFP)
P+F-J-T+ (ENFJ)
T+J-P+F- (ENTP)
T+J-F-P+ (ENTJ)
F-P+T+J- (INFP)
F-P+J-T+ (INFJ)
J-T+P+F- (INTP)
J-T+F-P+ (INTJ)
The dominant function here will decide the overall direction of experience. If the dominant function (on left) is positive, then we get an extrovert; if negative we get an introvert.
Because dominant and auxiliary are (by definition) of opposite signs, we get intuitives (N in Myers-Briggs) in all cases. If the auxiliary function (second on left) is cognitive (T or J), the third letter in Myers-Briggs is T; if the auxiliary (on left) is affective (P or F), then the third letter (in Myers-Briggs) is F.
Finally if third letter (on left) is positive, then we get P for fourth letter (in Myers Briggs).
If third letter (on left) is negative the fourth letter (in Myers Briggs) is J.
Q. Most intriguingly what are the characteristics of your "new" personality group?
PC The third (unrecognised) group is the diagonal, which includes those personality types both of differing direction and differing mode. These find it particularly difficult to find a centre of being around either a (conscious) sense-based or (unconscious) intuitive-based approach, and are driven - in reaching integration - towards the mid-point or centre of personality which connects both. As successful integration depends directly on this spiritual centre, they can be referred to mathematically as the infinite or "transfinite" types, orientated to the transparent or empty fundamental ground of what essentially is.
They can alternatively be described as the "complex" types reflecting the balancing of personality characteristics that are diagonally opposite (in both mode and direction).
From one perspective this personality group - with mature development - is the most simple of all, successfully balancing polar inclinations that exist with both the "real" and "imaginary" types.
Thus they are primarily neither extroverts nor introverts as such but rather centroverts. Thus in place of the E/I classification in the Myers Briggs we now have C for all 8 types in this group.
Likewise they are neither sense nor intuitive orientated but rather mystical (i.e. spiritual in a direct experiential way) Thus we can replace the S/N classification in the Myers Briggs with M
for the 8 types.
Again these types operate directly from neither feeling nor thinking as such but rather volition (i.e. the direct capacity of the will). Thus T/F can be replaced with V for the group.
Finally they neither display perception nor judgement but rather a more balanced attitude in what may be called discernment. Thus P/J can be replaced here with D.
So in a fundamental manner all 8 types of this third diagonal group share the same personality type which represents the golden mean as between the opposing tendencies of the other groups.
However just as these types (from this primary integrated perspective) are "simple", equally from a secondary perspective they are "complex", combining - in the same personality - opposite characteristics.
We can list these eight types as follows (giving the corresponding Myers Briggs types).
P+J-T+F- (ENFJ and ISTP)
P+J-F-T+ (ENFP and ISTJ)
J-P+F-T+ (INTP and ESFJ)
J-P+T+F- (INTJ and ESFP)
T+F-P+J- (ENTP and ISFJ)
T+F-J-P+ (ENTJ and ISFP)
F-T+J-P+ (INFJ and ESTP)
F-T+P+J- (INFP and ESTJ)
In a primary sense these are all centroverts. If first letter i.e. dominant aspect (on left) is positive, in terms of Myers Briggs we get a secondary extrovert (with strong shadow introvert
personality characteristics); if first letter (on left) is negative then we get a secondary introvert (with strong shadow extrovert tendencies).
Again in a primary sense, all these types are (potentially) mystical. Because dominant and auxiliary functions are of opposite signs, in secondary terms they are all intuitives (N) with strong
shadow sense (S) tendencies.
All - in a primary sense - are volitional (operate through the will). If the dominant function (on left) is cognitive (T or J), then in a secondary sense (in Myers-Briggs) we will get T as our 3rd letter; if the dominant function (on left) is affective (P or F), then in Myers-Briggs we will get F as 3rd letter. Again in each case these are strongly counterbalanced by opposite shadow tendencies.
Finally, in a primary sense, all these types use discernment as a preference.
The third function (on left), if cognitive (T or J), will in a secondary sense lead to the choice of J as 4th letter (in Myers-Briggs); if the third function (on left) is affective (P or F), then in a secondary sense we will have P as final letter. Once more these are counterbalanced by opposite shadow tendencies.
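The letter-mapping rules stated for the three groups can be collected into one small function. The encoding below is my own sketch of those rules, not code from the interview: each type is written as a permutation of four single-letter aspects, dominant first, with the sign and mode conventions given earlier.

```python
# Aspect conventions, per the interview:
#   T (thinking)   cognitive, external (+)
#   J (judgement)  cognitive, internal (-)
#   P (perception) affective, external (+)
#   F (feeling)    affective, internal (-)
SIGN = {"T": +1, "P": +1, "J": -1, "F": -1}
COGNITIVE = {"T", "J"}

def myers_briggs(perm):
    dom, aux, third, _ = perm
    ei = "E" if SIGN[dom] > 0 else "I"            # direction of the dominant
    sn = "S" if SIGN[dom] == SIGN[aux] else "N"   # same sign -> sense type
    tf = "T" if dom in COGNITIVE else "F"         # mode of the dominant
    if (dom in COGNITIVE) == (aux in COGNITIVE):
        # vertical ("imaginary") group: dominant and auxiliary share a mode,
        # so the 4th letter follows the sign of the third aspect
        pj = "P" if SIGN[third] > 0 else "J"
    else:
        # horizontal ("real") and diagonal ("complex") groups:
        # the 4th letter follows the mode of the third aspect
        pj = "J" if third in COGNITIVE else "P"
    return ei + sn + tf + pj

print(myers_briggs("PTFJ"))  # horizontal P+T+F-J-  -> ESFP
print(myers_briggs("PFTJ"))  # vertical   P+F-T+J-  -> ENFP
print(myers_briggs("PJTF"))  # diagonal   P+J-T+F-  -> ENFJ (secondary label)
```

For the diagonal group the function reproduces only the secondary Myers-Briggs label; the paired shadow label is not computed.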
When a high level of integration is achieved, well developed characteristics in the personality will also exist that literally shadow those presented at face value.
At low levels of integration, these opposite characteristics lead to considerable confusion and a lack of any coherent self identity. Putting it another way, such personalities are especially
sensitive to their shadow selves. This is why the development of a strong spiritual centre is so vital for true integration to be achieved. It is to this group that the lines from Dryden
especially apply.
"Great wits are sure to madness near allied
And thin partitions do their bounds divide"
Q. Can you know clarify the holistic mathematical rationale of your classification?
PC There is a fundamental logic underlying my four aspects of understanding. They can be simply expressed as the four "complex" co-ordinates corresponding to the circle of unit radius (i.e. the
cross within the circle with the horizontal line representing the "real" x axis and the vertical line representing the "imaginary" y axis). If we take the objective external direction of
understanding as positive, then the subjective internal direction - in relative terms - is negative. Likewise if we take the cognitive (rational) mode as "real", then again - in relative terms -
the affective (emotional) mode is "imaginary". This is the basis of so many mandalas which deeply symbolise integration of the psyche. We can now give a coherent mathematical reason as to why this is in fact the case. The four aspects of understanding - from which all personality types are derived - can be simply obtained by taking the four fourth roots of unity. In this sense all understanding is simply a reduced expression of unity.
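The roots in question are easy to exhibit; a minimal sketch (the identification of each root with an aspect follows the sign and mode conventions stated above):

```python
import cmath

# The four fourth roots of unity: the solutions of z**4 = 1.
roots = [cmath.exp(2j * cmath.pi * k / 4) for k in range(4)]

# Up to floating-point noise these are 1, i, -1 and -i -- the four "complex"
# co-ordinates of the unit circle, which the interview identifies with
# thinking (+1), perception (+i), judgement (-1) and feeling (-i).
for z in roots:
    print(complex(round(z.real, 9) + 0.0, round(z.imag, 9) + 0.0))
```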
This also helps to explain the limited nature of most of our actual experience of the world. Properly interpreted all four aspects are equally important and at the appropriate "high" level of
understanding are fully unified. However each of our 24 personality types is built on a certain specialisation around two aspects only.
Thus the "real" group attempts to unify with dominant and auxiliary aspects of the same direction (and opposite mode).
The "imaginary" group attempts to unify with dominant and auxiliary aspects of the same mode (and different direction).
The "transfinite" group finally attempts to unify with dominant and auxiliary aspects of differing mode (and differing direction).
However true integration requires that all four aspects be properly differentiated and then combined in a dynamic harmonious fashion. In a direct sense this is the task of transpersonal spiritual
development. It is to this we now turn.
Q. You believe that one's personality type is especially relevant to the manner of transpersonal development. Can you briefly elaborate on this important point?
PC It would be helpful at the outset to outline the major "higher" levels of psychological development. For convenience we can list four, i.e. the linear (gross realm), the circular (subtle realm), the point (causal realm) and the radial (non-dual reality).
In our Western culture, personality development rarely goes significantly beyond the rational linear level and is based on a high level of specialisation of the conscious mind.
The first "real" group are most firmly rooted in the conscious world. Integration for this type would normally not entail radical transpersonal development. They fit in too well with accepted
conventions and are less prone to serious existential questioning.
Specialisation of the linear level is largely completed by early adulthood. Further growth would essentially entail achieving a more moderate and flexible approach to understanding and the
cultivation of what might be termed "vision logic". Though this does not rule out authentic mystical experience, for the "real" group, the rational (linear) paradigm is likely to remain the
predominant means of interpreting reality.
For the second vertical group the most fundamental feature is that strongly developed structures are of the same mode but of opposite direction. Thus someone, for example, with perception as the
most developed structure, will tend to have the opposite affective structure of feeling as the main auxiliary. There is likely to be far more directional switching here, especially if both
structures approach equal strength. This entails that the (positive) conscious information of one structure is continually negated through unconscious switching to the other. This in turn means that - if co-ordinated - far more spiritual fusion or energy is likely to be developed, thus rendering experience very intuitive in character.
Such types are likely to be less well adapted to the world. There is always an underlying conflict as between their own unconscious tendencies and the prevailing cultural paradigm. However, if
resolved this tension can be very creative.
With their intuitive vision, they are not so much interested in actual reality but rather the creative potential for changing reality.
Integration for this group, requires going significantly beyond the linear level into the circular level. Otherwise, unconscious potential will never be properly realised, remaining untrained and
unduly instinctive and immature.
The third - and least recognised - is the diagonal group.
The defining characteristic of this group is that the two main structures are both of opposite mode and opposite direction. Thus for example, someone with the cognitive structure of reason,
strongly developed in an external direction, will have the affective structure of emotion, also well developed in an internal direction.
Whereas the first group can identify (directly) with the conscious process, and the second group (directly) with the unconscious process, this group can do neither. Not surprisingly there is
likely to be a considerable identity crisis involved.
At one end of the spectrum, many with unresolved psychotic problems, who have failed to achieve integration, belong to this group. At the other end, some of the greatest mystics, who achieved psychic harmony only after a long painful struggle, also belong to the same group. Resolution of the existential problem here is likely to be more demanding than with the other groups. In fact, the
answer depends especially on authentic spiritual development. In other words it is the third essential process that is now especially relevant for integration.
If the first group is "real", relating to actual reality, and the second "imaginary", relating to potential reality, this group is aptly defined "complex" (i.e. both "real" and "imaginary")
relating to essential reality. The paradox is that a high degree of true simplicity finally underlies integration of this inherently complex personality type. It primarily involves neither conscious nor unconscious understanding as such, but rather that point (i.e. will), at the centre of being itself, which unites both.
For someone belonging to this group, therefore full integration is likely to involve moving beyond both linear and circular levels into the point level.
The key determinant of whether some will go on to the radial level - representing the most complete level of personality development - depends essentially on whether the two weaker aspects can be
developed sufficiently to counterbalance the two strong aspects (dominant and auxiliary). This necessarily entails a lengthy period of intense exposure to the "shadow" side of one's personality.
Q. Because integration for your vertical and diagonal groups involves "higher" levels of transpersonal development are you implying that these personalities are superior?
PC No, not at all. I would be very opposed to any form of elitist categorisation on these lines.
Remember the classification of personality types is neutral as regards talent or intellectual ability. Most "successful" people will actually belong to the "real" group. For example leaders
successfully running governments, businesses etc. generally belong to this group and can display a wide range of abilities. Indeed as this group finds a grounding easily in actual reality, they
can often find their niche early in life subsequently developing in an extensive direction (with multiple interests).
Highly creative individuals will often belong to the "imaginary" group. However they can experience difficulties finding a satisfactory resolution as between the demands of society and their own
inclinations. Too often they fail to get their act together and do not realise their abilities.
The truly mystical types belong to the "diagonal" group. Though possessing the potential for the highest level of integration, personalities of this type are most prone to their shadow selves
(and psychological illness). Properly understood many of the great saints faced a stark dilemma in having to achieve a high level of spiritual integration to avoid falling into mental illness.
Q. Indicate now how your classification of personality types can lead to a "new" psychological interpretation of dimensions?
PC There is a fascinating - if unappreciated - connection as between our profile of personality types and the dimensions of space and time.
To his great credit Jung was deeply conscious of this potential link with physical reality and saw his four functions as - in a fundamental sense - mirroring the four dimensions of space and time
in physics. Unfortunately the conventional world-view of three dimensions of space and one of time is one of broken symmetry. Jung came very close to accurately expressing the true
four-dimensional "complex" symmetry of space-time with his classification of two rational functions (thinking and feeling) and two irrational functions (sensation and intuition).
In my own mathematically "complex" reclassification, we have - in relative terms - two modes ("real" cognitive and "imaginary" affective) and two directions ("positive" external and "negative"
internal). In this system the four functions are (in relative terms):
Real (positive) = thinking
Real (negative) = judgement
Imaginary (positive) = perception
Imaginary (negative) = feeling
Now, equally these four functions represent the "complex" symmetric aspects of space-time. In other words we have two dimensions of space (one positive, one negative) and two dimensions of time
(one positive, one negative). Thus we can rewrite our scheme as follows:
Time (positive) = thinking
Time (negative) = judgement
Space (positive) = perception
Space (negative) = feeling
In other words we literally create space and time in experience through the interaction of all four aspects.
Now in pure symmetry time and space cancel out in the experience of the eternal and immediate present which can be represented by the binary structures as 1 and 0 (i.e. both fullness and void).
However in terms of broken symmetry each personality type experiences space and time differently. Thus each personality type represents a unique configuration of space-time.
In the original Jungian profile of personality we have eight distinct types.
This - combined with the implicit binary structures - leads to 8 + 2 = 10 such unique configurations.
In our modified version of the Myers-Briggs we have 24 personality types which when combined with the implicit binary structures gives 24 + 2 = 26 unique configurations.
Thus when we look on dimensions psychologically in this new way - as representing unique (reduced) configurations of the 4 complex poles of space-time - we ultimately obtain 10 or alternatively 26 dimensions of space-time.
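The arithmetic is easy to check. In the sketch below Jung's 8 types are taken as given and the 24 are computed as before; the labels are my own shorthand:

```python
from itertools import permutations

aspects = ["T+", "J-", "P+", "F-"]      # the four "complex" poles of space-time
configs = list(permutations(aspects))   # one configuration per personality type
binary = 2                              # the implicit binary structures (1 and 0)

print(8 + binary)              # Jungian profile: 8 types  -> 10 dimensions
print(len(configs) + binary)   # revised profile: 24 types -> 26 dimensions
```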
What this simply means is that each personality type experiences space and time in a unique fashion. Thus a dimension in this sense refers not to space and time (as separately considered) but
rather to a unique configuration (i.e. permutation) of all four aspects.
Q. So full personality integration would require all 10 (or alternatively 26) "dimensions"?
PC Yes, this is true. Of necessity everyone uses all of them to some degree. However usually one - or at most two - aspects tend to dominate. The other aspects remain inferior and misunderstood, frequently breaking into consciousness through projection.
Now as I have already explained, those personalities of the third "diagonal" group are particularly sensitive to the projected "shadow". Thus full development for this type requires
differentiating all four aspects to a considerable degree. At the later stages of the point level this task approaches completion. So all dimensions are now considerably fused. The "real"
conscious and "imaginary" shadow personalities now approach integration.
So the 26 ways of organising dimensional qualitative reality now equally represent 26 ways of organising phenomenal (quantitative) reality. As we have already seen objects and dimensions are
"real" and "imaginary" with respect to each other.
Thus dimensions can be looked on as shadow (i.e. imaginary) objects. Likewise objects can be viewed as shadow dimensions. Thus reality (real) and its shadow (imaginary) now mutually reflect each other.
Thus saying that reality has a shadow counterpart is just another way of saying that reality is dynamically "complex" with "real" and "imaginary" aspects.
Q. All of this has a bearing on Superstring Theory. Can you explain how this is so?
PC Yes this represents an excellent example of what I term vertical complementarity (where "low level" physical reality is reflected in "high level" psychological reality).
So just as we have a "high-level" psychological understanding of space-time, we equally have a complementary "low-level" physical understanding relating to material reality.
Once again ultimately physical reality can be represented in symmetrical terms by the four "complex" poles of space-time. In original binary terms this represents pure nothingness or
alternatively pure fullness (i.e. the potential for all actual existence).
Now in the material physical world of broken symmetry, these four poles can be permuted (four at a time) to give unique (reduced) configurations of space-time. Combining these configurations
with the implicit binary dimensions, we can therefore represent the world as comprising 10 (or alternatively 26) unique dimensions.
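The arithmetic behind the 24 + 2 = 26 count can be checked directly. The short snippet below is only an illustration; the pole labels are arbitrary placeholders of my own choosing, not terminology from the text.

```python
from itertools import permutations

# Illustrative labels for the four "polar aspects" of space-time;
# the names themselves are arbitrary placeholders.
poles = ["real+", "real-", "imaginary+", "imaginary-"]

# Permuting four poles "four at a time" gives 4! = 24 configurations.
configs = list(permutations(poles, 4))
print(len(configs))        # 24
print(len(configs) + 2)    # 26, after adding the two binary "dimensions"
```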
This is intimately tied to the Theory of Superstrings which - in some respects - promises to provide the most satisfactory mathematical model of the fundamental nature of physical reality. One of
the most puzzling features of this approach is that it requires moving away from conventional 4 dimensional space-time.
One of the earlier versions seemed to work successfully only in 26 (24 + 2) dimensions, whereas recent versions seem to work best in 10 (8+2). Though mathematically satisfying, such an approach
seems highly non-intuitive (judged by conventional criteria).
However if we accept that such dimensions actually represent unique configurations of the original 4 (symmetric) poles of space-time then suddenly it all makes great sense.
The dimensions to which Superstring Theory apply can be simply expressed as differing configurations of the same four aspects of space-time (with each configuration representing a unique
dimension). The standard explanation of "compactification" is unsatisfactory. This states that the "extra" dimensions are curled up in incredibly small regions of space and invisible in the
conventional four dimensions. The true problem here is this rigid insistence on four dimensional space-time which so dominates physics. Simply redefining a "dimension" in more dynamic terms (as a
unique configuration of the four polar aspects of space and time) solves this dilemma.
There are deep and remarkable connections as between the standard analytical approach and this holistic mathematical treatment of Superstrings. Modular functions are especially relevant in terms
of the analytical approach. These functions describe a four dimensional hyperspace defined in the upper right quadrant of the complex plane (i.e. where space has two real and two imaginary
dimensions). These modular functions possess remarkable symmetry features.
Now in holistic mathematical terms I define space in complementary qualitative
"complex" fashion with two "real" and two "imaginary" dimensions. (However because this is a dynamic approach, space here has both positive and negative directions). This - as I have so often
stated - provides the appropriate way for understanding the dynamic symmetries of nature.
So here we have a good example of the comprehensive mathematical paradigm at work
where "real" quantitative and "imaginary" qualitative aspects complement each other.
The "real" aspect provides the quantitative analytical (rational) interpretation of space-time.
The "imaginary" aspect provides the corresponding qualitative holistic (intuitive) interpretation of the same space-time.
Physicists will readily admit that they have no real understanding of the dimensions of Superstring Theory. This is due to the lack of a coherent holistic element in mathematical understanding.
However once we supply this missing element we can give a surprisingly simple yet intuitively satisfying explanation of what these "dimensions" actually mean.
More importantly - in my opinion - it demonstrates the power of the holistic mathematical notion of vertical complementarity. Through this the psychological world of "personality types" and the
physical world of "particle types" are now seen to be intimately connected.
Q. Can you develop these connections any further?
PC Yes. As you will recall I defined my "personality types" in terms of three groups.
The first horizontal "real" group involve personalities of differing mode and same direction.
The corresponding "real" group on the particle side would be the fermions (e.g. matter particles such as protons, electrons etc.)
The second vertical "imaginary" group involves personalities of same mode and same direction.
The corresponding "imaginary" group on the particle side would be the bosons (e.g. radiation such as photons where opposite directions coincide).
The third diagonal "complex" group involves personalities of differing mode and differing direction.
The corresponding "complex" group on the particle side would be the superstrings (where bosons and fermions can no longer be identified separately).
The linear level is the domain of the horizontal personality type. Likewise the linear level is the domain of the horizontal particle type (i.e. fermions readily exist at this level).
The "higher" circular level is the domain of the vertical personality type. Likewise the "lower" circular level is the domain of the vertical particle type (i.e. bosonic activity is more evident
at this lower level of reality).
The "higher" point level is the domain of the diagonal personality type. Likewise the "lower" point level is the domain of the diagonal particle type (i.e. superstrings).
Now at the "higher" psychological point level, objects and dimensions are highly differentiated with respect to each other. At the "lower" physical point level, objects and dimensions are highly
undifferentiated i.e. confused. This state is what I refer to mathematically as prime structures.
Q. Can you elaborate on this point? What do you mean by prime structures?
PC Again this starts from the notion of a prime number in mathematics.
Now a prime number (e.g. 7) is one with no factors (other than itself and 1). A prime number is thus strictly a one dimensional number.
All other natural numbers (i.e. composite) are obtained from a unique combination of prime numbers. Thus 6 = 2 × 3 is two dimensional (i.e. has two factors). It therefore has both horizontal
(quantitative) and vertical (dimensional) aspects.
Prime numbers therefore are the basic building blocks of the number system.
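The number-theoretic fact invoked here - that primes have no smaller factors, while every composite is a unique product of primes - can be sketched with simple trial division. The function name below is my own, not from the text.

```python
def prime_factors(n):
    """Return the prime factorisation of n as a sorted list of primes
    with multiplicity, e.g. prime_factors(6) == [2, 3]."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:          # whatever remains is itself prime
        factors.append(n)
    return factors

print(prime_factors(7))   # [7]     -- prime: no factors besides itself and 1
print(prime_factors(6))   # [2, 3]  -- "two dimensional" in the text's sense
```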
Now we can look on phenomenal reality in dynamic complementary fashion.
Natural (i.e. composite) phenomena - starting with observable particles such as fermions and bosons - are obtained from a unique combination of prime elements. These prime ingredients in fact
are the superstrings.
Now just as a prime number involves the separation of quantitative and qualitative aspects and is one dimensional, a superstring - in complementary fashion - involves the confusion of
quantitative and qualitative aspects and is one dimensional.
Thus at the level of the superstrings we cannot meaningfully separate particles from background dimensions. As the observation of particles is always in the context of a background of space and
time, superstrings cannot be directly identified. In a true sense the dimensions are contained within the superstrings.
Now the standard interpretation of superstrings in many ways is quite similar. They are viewed as tiny one dimensional objects which vibrate with varying frequencies (i.e. interact with their
dimensions) to produce the observable phenomena of nature.
However the interpretation is too linear.
Clearly a one dimensional object has no phenomenal meaning (in the conventional sense). Therefore diagrams explaining the splitting and joining of strings cannot be taken literally and are
misleading. Also, even though it is meaningless to try to isolate superstrings as in any way separate from dimensions, this is what the standard approach seems to attempt.
This leads to an interesting problem. Superstring Theory predicts "two worlds" - the "real" world of particles and an identical "shadow" world where these are invisible.
When placed in a dynamic context this is easy to explain.
Because superstrings - dynamically - are equally prime "particles" and "prime" dimensions, in reduced static terms we can view them alternatively as particles (with dimensions fixed) or
dimensions (with particles fixed). This latter interpretation refers to the "shadow" universe predicted by the theory.
However just as in psychological terms ones persona and shadow are complementary aspects of the same personality, likewise in physical terms the universe and its shadow are complementary aspects
of the same reality.
Q. To be consistent with your principles of horizontal complementarity, there should be a psychological counterpart in experience to this reality. Is this so?
PC Yes this is one of the most remarkable applications of horizontal complementarity.
There is a fascinating connection as between the meaning of the word "prime" in holistic mathematics and "primitive" in psychology.
A prime structure can be defined as the confusion of the quantitative and qualitative aspects of reality. Equally this could be expressed as the confusion of the horizontal and vertical; the
confusion of "real" and "imaginary"; the confusion of objects and dimensions; the confusion of parts and wholes.
In other words at this level these aspects have no clear separate identity.
Now this is what defines primitive infant behaviour. Here fleeting instinctive impulses are at one and the same time the expression of qualitative whole desire (unconscious) and quantitative part
phenomena (conscious). However because of the high level of confusion, neither is clearly distinguishable. Thus neither the dimensions of space and time nor the phenomenal objects which they
contain can achieve any permanence. However these primitive interactions gradually lead to the emergence of natural phenomena in space and time.
Thus in holistic mathematical terms the (psychological) instinctive behaviour of the early infant involves prime objects (confused with prime dimensions).
Remarkably this directly complements (in horizontal terms) the (physical) behaviour of superstrings, which equally involves prime objects (confused with prime dimensions).
However it requires the "higher" understanding of the vertically complementary point level to properly interpret the nature of this "lower" level reality.
So just as we can interpret the fundamental ground of reality (physically and psychologically) in terms of qualitative binary numbers, equally we can interpret the next "lowest" level in terms of
qualitative prime numbers. Ultimately all other levels of reality can be interpreted in terms of the appropriate qualitative numbers.
Thus the holistic structures of reality - physically and psychologically - at all levels can be precisely explained in terms of qualitative numbers.
Q. Can you now briefly summarise your findings?
PC The original Jungian classification of personality types has eight distinctive types.
The Myers-Briggs revision has 16 types. However it can be easily shown that there are eight "missing" personality types. So the full system has 24 types.
This extended personality profile can be easily translated in holistic mathematical terms and classified as three distinct groups with "real", "imaginary" and "complex" types.
The personality group to which one belongs determines, in general terms, the subsequent manner of transpersonal development.
Each of the 24 personality types can be interpreted as representing a distinct arrangement of four polar co-ordinates representing space and time. Each configuration of space and time represents
a unique dimension. When combined with the universal binary "dimensions" (1 and 0), this leads to 26 (24+2) or alternatively 10 (8+2) "dimensions".
This simply entails that each personality type experiences space and time in a different manner.
Using vertical complementarity, there is a fascinating connection here with the "lower" level physical reality of superstrings. In like manner dimensions represent unique permutations or
configurations of the four polar co-ordinates of space and time, leading to 26 (24+2) or alternatively - in more general terms - 10 (8+2) dimensions.
These configurations simply arise due to the fact that by definition the four aspects of space and time cannot yet be successfully separated. Thus a "dimension" necessarily involves a combination
of all 4 types.
Thus psychologically at the "high-level" we have unique personality configurations (representing subjective experience of space-time). In complementary terms at the physical "low-level" we have
unique particle configurations (representing the objective nature of space-time).
Superstring theory therefore represents these fundamental "objective" organisations of "broken" space-time. They closely mirror Jung’s theory of personality types representing - in experience -
fundamental "subjective" organisations of the same broken space-time.
Holistic mathematics can also throw valuable light on other issues - such as "shadow" matter. It also precisely defines in qualitative number terms both the - highly interrelated - "objects" and
"dimensions" - which superstrings represent - as prime numbers.
FOM: Re: good news: serious physicists believe that intuitionistic logic is important for physics
Peter Apostoli apostoli at sympatico.ca
Sat Oct 6 06:49:35 EDT 2001
Dear All,
As far as I know, the first publications by a physicist in the use of topos
theory in quantum gravity were by Chris Isham and Jeremy Butterfield around
1997 to 1998. This was approximately one year after Professor Isham received
a copy of the co-authored book "Parts of the continuum: towards a modern
ontology of science", which at that time had already been reviewed and
accepted for publication by the Poznan Studies in the Phil. of Science and
the Humanities. The book presented a model of semi-constructive (Pi_2) set
theory in the form of CFG (an approximation space placed in type-lowering
correspondence with its own power space). CFG is a topos in the category of
approximation spaces. The book outlined in detail how CFG is the first
successful renormalization of the continuum and suggested that this
quantization of the set theoretic universe is a blue-print for a theory of
quantum gravity. Physicists were the first group to (tacitly) pick up on
this research, but not the first group to originate/develop/publish it. That
honour goes to my co-author, Akira Kanda (who describes himself as a
Prof. Isham describes his research as follows:
> In particular, I have been developing a new approach to quantum gravity
> based on a quantum-logic extension of the consistent-histories version of
> quantum theory. This enables the standard ideas of quantum theory to be
> extended to situations where there is no normal notion of time, including
> the possibility that a variety of non-metrical aspects of space-time may
> also be subject to quantisation. I have become particularly interested in
> the use of generalised set theory ('topoi') in the context of the
> consistent histories programme where the internal logic of a topos seems
> to play an important role. I am also planning to use topos ideas in the
> development of new models for spacetime; in particular, there may be
> important links with topological quantum field theory, especially the
> recent work involving ideas coming from the loopspace approach to the
> Ashtekar programme of canonical quantisation of gravity.
Peter Apostoli
----- Original Message -----
From: Vladik Kreinovich <vladik at cs.utep.edu>
To: <fom at math.psu.edu>
Sent: Friday, October 05, 2001 7:02 AM
Subject: FOM: good news: serious physicists believe that intuitionistic
logic is important for physics
> Dear Friends,
>
> For those who like intuitionistic logic and related research, you may want
> to read a new popular book by Lee Smolin, one of the world's leading
> specialists in quantum gravity (i.e., quantization of space-time), called
> "Three Roads to Quantum Gravity".
>
> He strongly believes that this logic - and a more general topos approach -
> is of great physical importance to quantum gravity.
>
> Not only does he believe in it, he cites papers by himself and others
> where these logical ideas have been transformed into working physical
> theories.
>
> He also emphasizes that the success in physics will eventually lead to a
> successful application of these logics to human decision making, economics.
> (Usually, when an idea is used in physics, it is researched over and over
> again, and as a result, in many cases, social sciences have advanced by
> adapting well-developed formalisms from physics.)
>
> Vladik
More information about the FOM mailing list
A plausible inequality.
I come across the following problem in my study.
Let $x_i, y_i\in \mathbb{R}$, $i=1,2,\cdots,n$ with $\sum\limits_{i=1}^nx_i^2=\sum\limits_{i=1}^ny_i^2=1$, and $a_1\ge a_2\ge\cdots \ge a_n>0$. Is it true that $$\left(\frac{\sum\limits_{i=1}^na_i(x_i^2-y_i^2)}{a_1-a_n}\right)^2\le 1-\left(\sum\limits_{i=1}^nx_iy_i\right)^2~~?$$
Has anyone seen this inequality before, or can you give a counterexample?
3 The LHS is invariant under affine transformations of the $(a_i)$, so you might as well assume $a_1=1$ and $a_n=0$. – Douglas Zare Apr 2 '10 at 18:54
You accepted an answer that is, according to the author, incorrect... – Mariano Suárez-Alvarez♦ Apr 3 '10 at 7:45
@Mariano: Maybe the outer square in the LHS should not be there after all? I'm sure I did not see it yesterday – Sergei Ivanov Apr 3 '10 at 9:16
@Sergei: if it makes you feel better, I didn't see the square either. So after failing to prove the inequality, I "verified" your counterexample was correct, and up-voted it. It seems that the
original post didn't use \left( and \right) for the brackets on the LHS... I blame that and my new glasses. – Willie Wong Apr 3 '10 at 9:34
5 Answers
[Wrong counter-example deleted]
This it true for all $n$. The case $n=2$ is handled by Hailong Dao, let's reduce the general case to $n=2$.
First, we may assume that $a_1=1$ and $a_n=0$ as others mentioned. So remove the denominator in LHS. Then forget the condition that $a_i$ are monotone, let's only assume that they are in
$[0,1]$ (we can rearrange the indices anyway). Also we may assume that the sum under the square in the LHS is nonnegative - otherwise swap $x$ and $y$.
Then, for every $i$ such that $x_i^2-y_i^2>0$, set $a_i=1$, otherwise $a_i=0$. The LHS grows, the RHS stays. So it suffices to prove the inequality for $a_i\in\{0,1\}$. Rearrange indices
so that the first $k$ of the $a_i$'s are 1. We arrive at $$ \left(\sum_{i=1}^k x_i^2-\sum_{i=1}^ky_i^2 \right)^2 \le 1 - \left(\sum_{i=1}^n x_iy_i\right)^2 . $$ Define $X_1,X_2,Y_1,Y_2\ge 0$
by $$ X_1^2 = \sum_{i=1}^k x_i^2, \ \ \ X_2^2 = \sum_{i=k+1}^n x_i^2, \ \ \ Y_1^2 = \sum_{i=1}^k y_i^2, \ \ \ Y_2^2 = \sum_{i=k+1}^n y_i^2. $$ Then the LHS equals $(X_1^2-Y_1^2)^2$ and
$X_1^2+X_2^2=Y_1^2+Y_2^2=1$. By Cauchy-Schwarz, $$ \left|\sum x_iy_i\right| \le X_1Y_1+X_2Y_2 , $$ so the RHS is greater than or equal to $1-(X_1Y_1+X_2Y_2)^2$. Now the inequality follows from
$$ (X_1^2-Y_1^2)^2 \le 1-(X_1Y_1+X_2Y_2)^2 , $$ which is the same inequality for $n=2$.
5 Erm... And how exactly is this a counterexample? If $x^2+y^2=1$, then $(x^2-y^2)^2=1-(2xy)^2$. – fedja Apr 3 '10 at 1:36
I think fedja is right! – Hailong Dao Apr 3 '10 at 3:35
Yup, I overlooked the square. In fact, inequality is correct modulo the case n=2. Please unaccept or read the new answer. – Sergei Ivanov Apr 3 '10 at 7:59
Darnit, you beat me by ten minutes! What are the odds?? – zeb Apr 3 '10 at 8:08
The inequality is true for all $n$.
First of all, we can simplify it a little - from Douglas Zare's comment, we can assume $a_1 = 1$, $a_n = -1$, and try to maximize the LHS by varying the $a_i$s. Since the set of values for
the $a_i$s under these conditions is compact, there is a maximum value, and the LHS is a convex function of each $a_i$ so we must have $a_i = \pm 1$ for all $i$. Then, we see that the LHS
is clearly maximized when we take $a_i = \frac{|x^2-y^2|}{x^2-y^2}$, so we just have to prove that:
$\left(\frac{\sum |x_i^2-y_i^2|}{2}\right)^2 \le 1 - \left(\sum x_iy_i\right)^2$

whenever $\sum x_i^2 = \sum y_i^2 = 1$. This follows from plugging $\alpha_i = \mbox{max}(x_i,y_i)$, $\beta_i = \mbox{min}(x_i,y_i)$ into the inequality

$\left(\frac{\sum \alpha_i^2 - \sum \beta_i^2}{2}\right)^2 \le \left(\frac{\sum \alpha_i^2 + \sum \beta_i^2}{2}\right)^2 - \left(\sum \alpha_i\beta_i\right)^2$,

which is just Cauchy-Schwarz in disguise.
The restriction doesn't change the inequality - read Douglas Zare's comment. Letting $a_n < 0$ slightly generalizes the inequality, and makes the proof look a lot nicer. – zeb Apr 3 '10
at 20:59
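Not from the thread, but a quick numerical sanity check of the inequality is easy to run. The sketch below (the function name is mine) samples random unit vectors $x, y$ and a decreasing positive sequence $a$, and reports whether any counterexample turns up:

```python
import math
import random

def check_inequality(n, trials=1000, seed=0):
    """Test  (sum a_i (x_i^2 - y_i^2) / (a_1 - a_n))^2 <= 1 - (sum x_i y_i)^2
    for random unit vectors x, y and a decreasing positive sequence a.
    Returns True if no counterexample is found."""
    rng = random.Random(seed)
    for _ in range(trials):
        x = [rng.gauss(0.0, 1.0) for _ in range(n)]
        y = [rng.gauss(0.0, 1.0) for _ in range(n)]
        nx = math.sqrt(sum(t * t for t in x))
        ny = math.sqrt(sum(t * t for t in y))
        x = [t / nx for t in x]                      # normalise: sum x_i^2 = 1
        y = [t / ny for t in y]                      # normalise: sum y_i^2 = 1
        a = sorted((rng.uniform(0.1, 10.0) for _ in range(n)), reverse=True)
        if a[0] - a[-1] < 1e-9:                      # need a_1 > a_n
            continue
        num = sum(ai * (xi * xi - yi * yi) for ai, xi, yi in zip(a, x, y))
        lhs = (num / (a[0] - a[-1])) ** 2
        rhs = 1.0 - sum(xi * yi for xi, yi in zip(x, y)) ** 2
        if lhs > rhs + 1e-9:
            return False
    return True
```

Random sampling of course proves nothing; it only fails to find a counterexample, consistent with the proofs above.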
Here is a proof for the case $n=2$:
Some general facts: As Douglas pointed out, one can assume $a_1=1, a_n=0$. Also, note that the RHS is:
$$(\sum x_i^2)(\sum y_i^2) -(\sum x_iy_i)^2 = \sum_{i<j}(x_iy_j-x_jy_i)^2$$
Now let $n=2$. The inequality becomes: $$(x_1^2 -y_1^2)^2 \leq (x_1y_2-x_2y_1)^2$$ or
$$|x_1^2 -y_1^2| \leq |x_1y_2-x_2y_1|$$
Let $x_1 = cos(\alpha), x_2=sin(\alpha), y_1=cos(\beta), y_2=sin(\beta)$. Then LHS is $$|\frac 12 (cos(2\alpha) -cos(2\beta))| = |sin(\alpha-\beta)sin(\alpha+\beta)|$$
while the RHS is $|sin(\alpha-\beta)|$
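For completeness (this step is left implicit above), the product-to-sum identity being used is

$$\cos 2\alpha - \cos 2\beta = -2\sin(\alpha+\beta)\sin(\alpha-\beta),$$

so that

$$\left|\tfrac12\left(\cos 2\alpha - \cos 2\beta\right)\right| = |\sin(\alpha-\beta)\,\sin(\alpha+\beta)| \le |\sin(\alpha-\beta)|,$$

since $|\sin(\alpha+\beta)|\le 1$.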
The following proof is a bit heavy-handed; I'm sure it can be simplified. Assume $a_1=1, a_n=0$ as suggested above.
Write $$F(x,y) = \sum_{i=1}^n a_i(x_i^2-y_i^2)$$, $$G(x,y)=F(x,y)^2+\langle x,y\rangle^2.$$ Let $(x,y)\in S^{n-1}\times S^{n-1}$ be a point where $G$ is maximized, where we may assume that
for each $i$ at least one of $x_i,y_i$ is non-zero. It is clear that $-\sum_i y_i^2 \leq F(x,y) \leq \sum_i x_i^2$ so we may also assume $\langle x,y\rangle \neq 0$.
By the method of Lagrange multipliers there exist $\xi,\eta$ such that for all $i$ $$ 4a_i x_i F+2\langle x,y\rangle y_i=2\xi x_i$$ and $$ -4a_i y_i F+2\langle x,y\rangle x_i=2\eta y_i.$$
Multiplying the first equation by $x_i$, the second by $y_i$, adding the two and summing over $i$ gives $ 4G = 2(\xi+\eta)$. Multiplying the second equation by $y_i$, the first by $x_i$ and
adding gives $$ \langle x,y\rangle (y_i^2+x_i^2) = (\xi+\eta)x_i y_i = 2G\cdot x_i y_i. $$ By assumption one of $x_i,y_i$ is non-zero. Dividing by the square of that number we see that the
quadratic $$\langle x,y\rangle t^2 - 2G t + \langle x,y\rangle = 0$$ has a real root. Evaluating the discriminant it follows that $$ G^2 \leq \langle x,y\rangle^2 \leq 1.$$
Looks like you got ninja'd as well... – zeb Apr 3 '10 at 8:29
A similar (maybe slightly simpler) counterexample: $n=2$, $x=(2,0)$, $y=(0,2)$, arbitrary $a_1>a_2$.
3 There is a condition $\sum x_i^2=\sum y_i^2=1$. – Sergei Ivanov Apr 2 '10 at 18:50
Oops, sorry. I didn't see the "=1" (even though I was suspecting that it should be there!), because the jsMath formula extended too far out on the right into the rest of the sentence so
that it was overwritten by the word "and". This annoying problem with formulas happens from time to time, and quite unpredictably. For example, the "1" and the "S" in your comment are
rendered on top of each other as I'm writing this, although it was shown correctly before I reloaded the page. Has anyone else experienced this too? – Hans Lundmark Apr 3 '10 at 10:18
This annoys me too. My preferred browser (Konqueror) sometimes skips jmath symbols or renders them on top on one another. The only solution I found, however unpleasant, was to use
another browser (Firefox). – Sergei Ivanov Apr 3 '10 at 17:25
Well, I'm using Firefox already, so that's not an option for me! Maybe I should try Konqueror. ;) – Hans Lundmark Apr 4 '10 at 16:15
3217 Angell Hall
Professor B. A. Taylor, Chair
May be elected as a departmental concentration program
David E. Barrett, Several Complex Variables
Andreas R. Blass, Logic, Set Theory, Category Theory, Computational Complexity, Combinatorics
Morton Brown, Topology
Daniel M. Burns, Jr., Complex Analysis, Algebraic and Differential Geometry
Joseph G. Conlon, Mathematical Physics, Applied Mathematics
Igor V. Dolgachev, Algebraic Geometry
Peter L. Duren, Real and Complex Analysis, Univalent Functions, Harmonic Analysis, Probability
Paul G. Federbush, Rigorous Quantum Field Theory and Statistical Mechanics
John Erik Fornaess, Several Complex Variables, Analysis
Frederick W. Gehring, (T.H. Hildebrandt Distinguished University Professor of Mathematics) Geometric Function Theory, Quasiconformal Mappings, Mobius Groups
Robert L. Griess, Jr., Finite Group Theory, Group Extension Theory, Simple Groups
Philip J. Hanlon, Combinatorics
Donald G. Higman, Group Theory, Algebraic Combinatorics
Peter G. Hinman, Mathematical Logic, Recursion Theory, Foundations of Mathematics, Computational Complexity
Melvin Hochster, (R.W. and L.H. Browne Professor of Science) Commutative Algebra, Algebraic Geometry
James M. Kister, Geometric Topology, Transformation Groups
Eugene F. Krause, Mathematics Education
Donald J. Lewis, Diophantine Equations, Algebraic Numbers and Function Fields
James S. Milne, Algebraic Geometry and Number Theory
Hugh L. Montgomery, Number Theory, Distribution of Prime Numbers, Fourier Analysis, Analytic Inequalities, Probability
Gopal Prasad, Representation Theory
M. S. Ramanujan, Functional Analysis, Nuclear Spaces
Jeffrey B. Rauch, Partial Differential Equations
Frank A. Raymond, Topology, Transformation Groups
G. Peter Scott, Geometric Topology, Combinatorial Group Theory
Carl P. Simon, Dynamical Systems, Singularity Theory, Mathematical Economics, Mathematical Epidemiology, Applied Mathematics
Joel A. Smoller, Nonlinear Partial Differential Equations
J. Tobias Stafford, Noetherian Rings, Lie Algebras, Algebraic K-theory, Rings of Differential Operators
Thomas F. Storer, Combinatorics
B. Alan Taylor, Complex Analysis
Arthur G. Wasserman, Differential Topology, Transformation Groups, Foliations, Applied Mathematics
Michael I. Weinstein, Nonlinear Partial Differential Equations
David J. Winter, Algebra, Lie Algebras, Algebraic Groups
Michael B. Woodroofe, Probability Theory, Mathematical Statistics
Associate Professors
Anthony M. Bloch, Geometric Mechanics, Nonlinear Control Theory
Christoph Borgers, Numerical Solution of Partial Differential Equations
Jack L. Goldberg, Special Functions, Linear Algebra
Manoussos Grillakis, Partial Differential Equations, Mathematical Physics
Thomas Hales, Lie Theory
Eduard Harabetian, Partial Differential Equations, Numerical Analysis
Robert Krasny, Partial Differential Equations, Fluid Dynamics
John W. Lott, Differential Geometry, Mathematical Physics
Robert Megginson, Geometry of Banach Spaces
Allen Moy, Representation Theory
Art J. Schwartz, Analysis, Computer Algebra
Chung-Tuo Shih, Probability Theory
Ralf J. Spatzier, Differential Geometry
John R. Stembridge, Algebraic Combinatorics
Berit Stensones, Several Complex Variables
Alejandro Uribe-Ahumada, Global Analysis
Assistant Professors
Alexander Barvinok, Optimization
David C. Butler, Algebraic Geometry
Richard Canary, Topology
Carolyn A. Dean, Noncommutative Algebra
Estela A. Gavosto, Several Complex Variables
Anthony Giaquinto, Algebra, Deformation Theory, Polynomial Algebras
Juha Heinonen, Geometric Function Theory
Pekka J. Koskela, Analysis, Potential Theory, Nonlinear PDE
Ruth Lawrence, Topology
Shenglin Lu, Mathematical Physics, Applied Mathematics
Eyal Markman, Algebraic Geometry
Nigel Pitt, Number Theory
Rodolfo Torres, Harmonic Analysis
Zhenghan Wang, Topology, Geometry, Dynamical Systems
Trevor Wooley, Number Theory
T.H. Hildebrandt Research Assistant Professors
Neil Dummigan, Number Theory
William Jockusch, Combinatorics
Richard Laugesen, Complex Analysis
Irena Swanson, Commutative Algebra
Adjunct Professors
Curtis Huntington, Actuarial Science
Howard Young, Employee Benefits, Actuarial Science
Patricia Shure, Mathematics Education
Professors Emeriti Robert C. F. Bartels, Douglas G. Dickson, Charles L. Dolph, Frank Harary, George E. Hay, Donald A. Jones, Phillip S. Jones, Wilfred Kaplan, Wilfred M. Kincaid, Chung-Nim Lee, Jack
E. McLaughlin, Cecil J. Nesbitt, Carl M. Pearcy, George Piranian, Maxwell O. Reade, Ronald H. Rosen, Charles J. Titus, Joseph L. Ullman, and James G. Wendel.
Mathematics is sometimes called the Queen of the Sciences; because of its unforgiving insistence on accuracy and rigor it is a model for all of science. It is a field which serves science but also
stands on its own as one of the greatest edifices of human thought. Much more than a collection of calculations, it is finally a system for the analysis of form. Alone among the sciences, it is a
discipline where almost every fact can and must be proved.
The study of mathematics is an excellent preparation for many careers; the patterns of careful logical reasoning and analytical problem solving essential to mathematics are also applicable in
contexts where quantity and measurement play only minor roles. Thus students of mathematics may go on to excel in medicine, law, politics, or business as well as any of a vast range of scientific
careers. Special programs are offered for those interested in teaching mathematics at the elementary or high school level or in actuarial mathematics, the mathematics of insurance. The other programs
split between those which emphasize mathematics as an independent discipline and those which favor the application of mathematical tools to problems in other fields. There is considerable overlap
here, and any of these programs may serve as preparation for either further study in a variety of academic disciplines, including mathematics itself, or intellectually challenging careers in a wide
variety of corporate and governmental settings.
Elementary Courses. In order to accommodate diverse backgrounds and interests, several course options are available to beginning mathematics students. All courses require three years of high school
mathematics; four years are strongly recommended and more information is given for some individual courses below. Students with College Board Advanced Placement credit and anyone planning to enroll
in an upper-level class should consider one of the Honors sequences and discuss the options with a mathematics advisor.
Students who need additional preparation for calculus are tentatively identified by a combination of the math placement test (given during orientation), college admissions test scores (SAT or ACT),
and high school grade point average. Academic advisors will discuss this placement information with each student and refer students to a special mathematics advisor when necessary.
Two courses preparatory to the calculus, Math 105 and Math 110, are offered. Math 105 is a course on data analysis, functions, and graphs with an emphasis on problem solving. Math 110 is a condensed
half-term version of the same material offered as a self-study course through the Math Lab and directed towards students who are unable to complete a first calculus course successfully. A maximum
total of 4 credits may be earned in courses numbered 110 and below. Math 103 is offered exclusively in the Summer half-term for students in the Summer Bridge Program.
Math 127 and 128 are courses containing selected topics from geometry and number theory, respectively. They are intended for students who want exposure to mathematical culture and thinking through a
single course. They are neither prerequisite nor preparation for any further course.
Each of Math 112, 115, 185, and 195 is a first course in calculus and generally credit can be received for only one course from this list. Math 112 is designed for students of business and the social
sciences who require only one term of calculus. It neither presupposes nor covers any trigonometry. The sequence 115-116-215 is appropriate for most students who want a complete introduction to
calculus. Math 118 is an alternative to Math 116 intended for students of the social sciences who do not intend to continue to Math 215. One of Math 215, 285, or 295 is prerequisite to most more
advanced courses in Mathematics. Math 112 does not provide preparation for any subsequent course.
Students planning a career in medicine should note that some medical schools require a course in calculus. Generally either Math 112 or 115 will satisfy this requirement, although most science
concentrations require at least a year of calculus. Math 112 is accepted by the School of Business Administration, but Math 115 is prerequisite to concentration in Economics and further math courses
are strongly recommended.
The sequences 175-176-285-286, 185-186-285-286, and 195-196-295-296 are Honors sequences. All students must have the permission of an Honors advisor to enroll in any of these courses, but they need
not be enrolled in the LS&A Honors Program. All students with strong preparation and interest in mathematics are encouraged to consider these courses; they are both more interesting and more
challenging than the standard sequences.
Math 185-285 covers much of the material of Math 115-215 with more attention to the theory in addition to applications. Most students who take Math 185 have taken a high school calculus course, but
it is not required. Math 175-176 assumes a knowledge of calculus roughly equivalent to Math 115 and covers a substantial amount of so-called combinatorial mathematics (see course description) as well
as calculus-related topics not usually part of the calculus sequence. Math 175 and 176 are taught by the discovery method: students are presented with a great variety of problems and encouraged to
experiment in groups using computers. The sequence Math 195-296 provides a rigorous introduction to theoretical mathematics. Proofs are stressed over applications and these courses require a high
level of interest and commitment. The student who completes Math 296 is prepared to explore the world of mathematics at the advanced undergraduate and graduate level.
In rare circumstances and with permission of a Mathematics advisor reduced credit may be granted for Math 185 or 195 after one of Math 112 or 115. A list of these and other cases of reduced credit
for courses with overlapping material is available from the Department. To avoid unexpected reduction in credit, students should always consult an advisor before switching from one sequence to
another. In all cases a maximum total of 16 credits may be earned for calculus courses Math 112 through Math 296, and no credit can be earned for a prerequisite to a course taken after the course itself.
Students with strong scores on either the AB or BC version of the College Board Advanced Placement exam may be granted credit and advanced placement in either the regular or Honors sequences. A table
explaining the possibilities is available from advisors and the Department. The Department encourages strong students to enter beginning Honors courses in preference to 116 or 215.
Students completing Math 215 may continue either to Math 216 (Introduction to Differential Equations) or to the sequence Math 217-316 (Linear Algebra-Differential Equations). Math 217-316 is required
for all students who intend to take more advanced courses in mathematics, particularly for those who may concentrate in mathematics. Math 217 both serves as a transition to the more theoretical
material of advanced courses and provides the background required for optimal treatment of differential equations.
Prerequisites to Concentration. Most programs require completion of one of the sequences ending with Math 215-217, 285-217, or 295-296. A working knowledge of a high-level computer language (e.g.,
FORTRAN, Pascal, or C) at a level equivalent to completion of EECS 183, and eight credits of Physics, preferably Physics 140/141 and 240/241, are recommended for all programs and required for some.
For detailed requirements consult the brochure Undergraduate Programs available from the Undergraduate Mathematics Program Office (UMPO), 3011 Angell Hall, (313) 763-4223.
Concentration Programs. A student considering concentration in mathematics should consult a mathematics concentration advisor in the UMPO as early as possible and certainly by the first term of the
sophomore year. The Department offers many different concentration programs with varying requirements; failure to meet some of these at the intended time may delay completion of the program and
graduation. A concentration plan must be designed with and approved by a concentration advisor. The departmental brochure Undergraduate Programs should be regarded as the most comprehensive and
up-to-date guide to the options and requirements for concentration programs in mathematics.
Pure Mathematics
(Students should consult the pamphlet Undergraduate Programs of the Department of Mathematics for its program requirements which take precedence over the descriptions in this Bulletin.)
a. Four basic courses (one course from each of the following four groups):
Modern Algebra: Math 412 or 512
Differential Equations: Math 286 or 316
Analysis: Math 451
Geometry/Topology: Math 432, 433, 490 or 531
b. Four elective courses (mathematics) chosen from a list of approved electives and approved by a concentration advisor.
c. One cognate course outside the Mathematics Department, but having advanced mathematical content.
Mathematical Sciences Program
(Students should consult the pamphlet Undergraduate Programs of the Department of Mathematics for its program requirements which take precedence over the descriptions in this Bulletin.)
Additional prerequisite: one term of computer programming (EECS 183 or the equivalent), and for the Numerical and Applied Analysis option, 8 credits of physics.
a. Four basic courses (one course from each of the following four groups):
Differential Equations: Math 286 or 316
Discrete Math/Modern Algebra: Math 312, 412 or 512
Analysis: Math 450 or 451
Probability: Math 425 or 525
b. At least three courses from ONE of the Program Options listed below (the list of possible electives for each option is given in the Undergraduate Programs pamphlet described above):
Discrete and Algorithmic Methods
Numerical and Applied Analysis
Operations Research and Modelling
Probabilistic Methods
Mathematical Economics
Control Systems
c. Two additional advanced mathematics (or related) courses, approved by a concentration advisor.
Honors Concentration
Outstanding students may elect an Honors concentration in Mathematics. The Honors Program is designed not only for students who expect to become mathematicians but also for students whose ultimate
professional goal lies in the humanities, law, medicine or the sciences.
Students intending an Honors concentration are strongly advised to take one of the Honors introductory calculus sequences (175 or 185)-286 or 195-296, or some combination of the two. Eight credits of
physics and familiarity with a high-level computer language are strongly recommended.
(Students should consult the pamphlet Undergraduate Programs of the Department of Mathematics for its program requirements which take precedence over the descriptions in this Bulletin.)
a. Four basic courses (one course from each of the following four groups):
Linear Algebra: Math 513
Modern Algebra: Math 512
Analysis: Math 451
Geometry/Topology: Math 433, 490, 590 or 531
b. Four elective (mathematics) courses, chosen with the approval of the Honors advisor.
c. One cognate course from outside the Mathematics department, but containing significant mathematical content, chosen with the approval of the Honors advisor.
Students who, in the judgment of the departmental Honors committee, have completed an Honors concentration with distinction are granted a citation upon graduating. Interested students should discuss
their program and the specific requirements for obtaining the citation with a Mathematics Honors advisor (appointments scheduled in
3011 Angell Hall) no later than the second term of their sophomore year.
Actuarial Mathematics
(Students should consult the pamphlet Undergraduate Programs of the Department of Mathematics for its program requirements which take precedence over the descriptions in this Bulletin.)
Additional prerequisite: At least one course in each of the following fields: Accounting (271, 272, 471), Computer Science (183, 280), and Economics (201, 202, 400).
a. Five basic courses (one from each of the following five groups):
1. Differential Equations: Math 286 or 316
2. Probability: Math 425 or 525
3. Analysis: Math 450 or 451
4. Statistics: Stat 426
5. Numerical Analysis: Math 371 or 471 (preferred)
b. Three special actuarial courses, including Math 424 and 520, and one of Math 521 or 522.
c. Two additional courses in areas relating to Actuarial Science, approved by an advisor.
Teaching Certificate
It is essential that students planning to obtain a teaching certificate consult a teaching certificate advisor, either Professor Krause (LS&A) or Professor Coxford (Education), prior to beginning
their concentration program.
Additional prerequisite: One term of computer programming, EECS 183 or the equivalent.
(Students should consult the pamphlet Undergraduate Programs of the Department of Mathematics for its program requirements which take precedence over the descriptions in this Bulletin.)
a. Four basic courses, one from each of the following four groups (chosen with the approval of a teaching certificate advisor):
1. Discrete Math/Modern Algebra: Math 312, 412, or 512
2. Geometry: Math 431, 432 or 531
3. Probability: Math 425 or 525
4. Secondary Mathematics: Math 486
b. Seven specific education courses, totaling 28 credits. Consult the Undergraduate Programs pamphlet for the list of courses.
c. A major or minor in a second academic area (normally requiring 20-24 credits in a structured program other than Mathematics; consult the Bulletin of the School of Education for acceptable programs).
d. Two additional courses, which must include a course in the Psychology Department, and a minimum of one additional mathematics course.
Students should consult with Professor Coxford in their sophomore year to be admitted to the certification program and to schedule practice teaching.
Advising. Appointments are scheduled at the Undergraduate Mathematics Program Office, 3011 Angell Hall. Students are strongly urged to consult with a concentration advisor each term before selecting
courses for the following term.
Foreign Languages. The language requirement of the A.B. or B.S. degrees with concentration in mathematics may be satisfied in any of the languages acceptable to the College. However, students
planning to do graduate work in mathematics should be aware that at most universities one of the requirements for a Ph.D. degree is a demonstration of the ability to read mathematical texts in two of
the three languages French, German, and Russian.
Special Departmental Policies. All prerequisite courses must be satisfied with a grade of C- or above. Students with lower grades in prerequisite courses must receive special permission of the
instructor to enroll in subsequent courses.
William Lowell Putnam Competition. A departmental team participates in the annual William Lowell Putnam Competition sponsored by the Mathematical Association of America. Interested students with
exceptional mathematical aptitude are asked to contact the department office for detailed information. The department also sponsors other competitions and activities.
Courses in Mathematics (Division 428)
A maximum total of 4 credits may be earned in Mathematics courses numbered 110 and below. A maximum total of 16 credits may be earned for calculus courses Math 112 through Math 296, and no credit can
be earned for a prerequisite to a course taken after the course itself.
101. Elementary Algebra. Only open to designated summer half-term Bridge students. (2). (Excl).
103. Intermediate Algebra. Only open to designated summer half-term Bridge students. (2). (Excl).
105. Data, Functions, and Graphs. Students with credit for Math. 103 can elect Math. 105 for only 2 credits. (4). (Excl). (QR/1).
110. Pre-Calculus (Self-Study). See Elementary Courses above. No credit granted to those who already have 4 credits for pre-calculus mathematics courses. (2). (Excl).
112. Brief Calculus. See Elementary Courses above. Credit is granted for only one course from among Math. 112, 113, 115, 185 and 195. (4). (N.Excl). (BS).
115. Calculus I. See Elementary Courses above. Credit usually is granted for only one course from among Math. 112, 115, 185, and 195. (4). (N.Excl). (BS). (QR/1).
116. Calculus II. Math. 115. Credit is granted for only one course from among Math. 116, 186, and 196. (4). (N.Excl). (BS). (QR/2).
118. Analytic Geometry and Calculus II for Social Sciences. Math. 115. (4). (N.Excl). (BS).
127. Geometry and the Imagination. Three years of high school mathematics including a geometry course. (4). (NS). (BS). (QR/1).
128. Explorations in Number Theory. High school mathematics through at least Analytic Geometry. (4). (NS). (BS). (QR/1).
147. Mathematics of Finance. Math. 112 or 115. (3). (Excl). (BS).
175. Combinatorics and Calculus. Permission of Honors advisor. (4). (N.Excl). (BS). (QR/1).
176. Dynamical Systems and Calculus. Math. 175 or permission of instructor. (4). (N.Excl). (BS).
185. Honors Analytic Geometry and Calculus I. Permission of the Honors advisor. Credit is granted for only one course from among Math. 112, 113, 115, and 185. (4). (N.Excl). (BS). (QR/1).
186. Honors Analytic Geometry and Calculus II. Permission of the Honors advisor. Credit is granted for only one course from among Math. 114, 116, and 186. (4). (N.Excl). (BS). (QR/1).
195. Honors Mathematics I. Permission of the Honors advisor. (4). (N.Excl). (BS). (QR/1).
196. Honors Mathematics II. Permission of the Honors advisor. (4). (N.Excl). (BS). (QR/1).
215. Calculus III. Math. 116 or 186. (4). (Excl). (BS). (QR/1).
216. Introduction to Differential Equations. Math. 215. (4). (Excl). (BS).
217. Linear Algebra. Math. 215. No credit granted to those who have completed or are enrolled in Math. 417, 419, or 513. (4). (Excl). (BS). (QR/1).
285. Honors Analytic Geometry and Calculus III. Math. 186 or permission of the Honors advisor. (4). (Excl). (BS).
286. Honors Differential Equations. Math. 285. (3). (Excl). (BS).
288. Math Modeling Workshop. Math. 216 or 316, and Math. 217 or 417. (1). (Excl). (BS). Offered mandatory credit/no credit. May be elected for a total of 3 credits.
289. Problem Seminar. (1). (Excl). (BS). May be repeated for credit with permission.
295. Honors Analysis I. Math. 196 or permission of the Honors advisor. (4). (Excl). (BS).
296. Honors Analysis II. Math. 295. (4). (Excl). (BS).
312. Applied Modern Algebra. Math. 217. (3). (Excl). (BS).
316. Differential Equations. Math. 215 and 217, or equivalent. Credit can be received for only one of Math. 216 or Math. 316, and credit can be received for only one of Math. 316 or Math. 404. (3).
(Excl). (BS).
333. Directed Tutoring. Math. 385 and enrollment in the Elementary Program in the School of Education. (1-3). (Excl). (EXPERIENTIAL). May be repeated for a total of three credits.
350/Aero. Eng. 350. Aerospace Engineering Analysis. Math. 216 or 316 or the equivalent. (3). (Excl). (BS).
354. Fourier Analysis and its Applications. Math. 216, 316, or 286. No credit granted to those who have completed or are enrolled in Math. 454. (3). (Excl). (BS).
362. Applications of Calculus and Linear Algebra. Math. 216 or 217. (3). (Excl). (BS).
371/Engin. 371. Numerical Methods for Engineers and Scientists. Engineering 103 or 104, or equivalent; and Math. 216. I and II. (3). (Excl). (BS).
385. Mathematics for Elementary School Teachers. One year each of high school algebra and geometry. No credit granted to those who have completed or are enrolled in 485. (3). (Excl).
398. Topics in Modern Mathematics. Junior standing with an interest and some background in mathematics. (3). (Excl). (BS).
399. Independent Reading. (1-6). (Excl). (INDEPENDENT). May be repeated for credit.
404. Intermediate Differential Equations. Math. 216. No credit granted to those who have completed Math. 286 or 316. (3). (Excl). (BS).
412. Introduction to Modern Algebra. Math. 215 or 285; and 217. No credit granted to those who have completed or are enrolled in 512. Students with credit for 312 should take 512 rather than 412. One
credit granted to those who have completed 312. (3). (Excl). (BS).
413. Calculus for Social Scientists. Not open to freshmen, sophomores or mathematics concentrators. (3). (Excl). (BS).
416. Theory of Algorithms. Math. 312 or 412 or CS 303, and CS 380. (3). (Excl). (BS).
417. Matrix Algebra I. Three courses beyond Math. 110. No credit granted to those who have completed or are enrolled in 217, 419, or 513. (3). (Excl). (BS).
419/EECS 400/CS 400. Linear Spaces and Matrix Theory. Four terms of college mathematics beyond Math 110. No credit granted to those who have completed or are enrolled in 217 or 513. One credit
granted to those who have completed Math. 417. I and II. (3). (Excl). (BS).
420. Matrix Algebra II. Math. 217, 417 or 419. (3). (Excl). (BS).
424. Compound Interest and Life Insurance. Math. 215 or permission of instructor. (3). (Excl). (BS).
425/Stat. 425. Introduction to Probability. Math. 215. (3). (N.Excl). (BS).
427/Social Work 603. Retirement Plans and Other Employee Benefit Plans. Junior standing. (3). (Excl).
431. Topics in Geometry for Teachers. Math. 215. (3). (Excl). (BS).
432. Projective Geometry. Math. 215. (3). (Excl). (BS).
433. Introduction to Differential Geometry. Math. 215. (3). (Excl). (BS).
450. Advanced Mathematics for Engineers I. Math. 216, 286, or 316. (4). (Excl). (BS).
451. Advanced Calculus I. Math. 215 and one course beyond Math. 215; or Math. 285. Intended for concentrators; other students should elect Math. 450. (3). (Excl). (BS).
452. Advanced Calculus II. Math. 217, 417, or 419; and Math. 451. (3). (Excl). (BS).
454. Boundary Value Problems for Partial Differential Equations. Math. 216, 286 or 316. Students with credit for Math. 354, 455 or 554 can elect Math. 454 for 1 credit. (3). (Excl). (BS).
462. Mathematical Models. Math. 216, 286 or 316; and 217, 417, or 419. Students with credit for 362 must have department permission to elect 462. (3). (Excl). (BS).
471. Introduction to Numerical Methods. Math. 216, 286, or 316; and 217, 417, or 419; and a working knowledge of one high-level computer language. (3). (Excl). (BS).
475. Elementary Number Theory. (3). (Excl). (BS).
476. Computational Laboratory in Number Theory. Prior or concurrent enrollment in Math. 475 or 575. (1). (Excl). (BS).
480. Topics in Mathematics. Math. 217 or 417, 412, or 451, or permission of instructor. (3). (Excl). (BS).
481. Introduction to Mathematical Logic. Math. 412 or 451 or equivalent experience with abstract mathematics. (3). (Excl). (BS).
485. Mathematics for Elementary School Teachers and Supervisors. One year of high school algebra or permission of instructor. No credit granted to those who have completed or are enrolled in 385. May
not be included in a concentration plan in mathematics. I and IIIb. (3 in I; 2 in IIIb). (Excl). (BS).
486. Concepts Basic to Secondary Mathematics. Math. 215. (3). (Excl). (BS).
489. Mathematics for Elementary and Middle School Teachers. Math. 385 or 485, or permission of instructor. May not be used in any graduate program in mathematics. (3). (Excl).
490. Introduction to Topology. Math. 412 or 451 or equivalent experience with abstract mathematics. (3). (Excl). (BS).
497. Topics in Elementary Mathematics. Math. 489 or permission of instructor. (3). (Excl). (BS). May be repeated for a total of six credits.
498. Topics in Modern Mathematics. Senior mathematics concentrators and Master Degree students in mathematical disciplines. (3). (Excl). (BS).
512. Algebraic Structures. Math. 451 or 513 or permission of the instructor. No credit granted to those who have completed or are enrolled in 412. Math. 512 requires more mathematical maturity than
Math. 412. (3). (Excl). (BS).
513. Introduction to Linear Algebra. Math. 412 or permission of instructor. Two credits granted to those who have completed Math. 417; one credit granted to those who have completed Math 217 or 419.
(3). (Excl). (BS).
516. Topics in Algorithms. Two mathematics courses at the 300-level or above, or equivalent computer science courses. (3). (Excl). (BS).
520. Life Contingencies I. Math. 424 and Math. 425; or permission of instructor. (3). (Excl). (BS).
521. Life Contingencies II. Math. 520; or permission of instructor. (3). (Excl). (BS).
522. Actuarial Theory of Pensions and Social Security. Math. 520; or permission of instructor. (3). (Excl). (BS).
523. Risk Theory. Math. 425. (3). (Excl). (BS).
524. Mortality Studies. Math. 520; or permission of instructor. (3). (Excl). (BS).
525/Stat. 525. Probability Theory. Math. 450 or 451; or permission of instructor. Students with credit for Math. 425/Stat. 425 can elect Math. 525/Stat. 525 for only 1 credit. (3). (Excl). (BS).
526/Stat. 526. Discrete State Stochastic Processes. Math. 525 or EECS 501. (3). (Excl). (BS).
531. Transformation Groups in Geometry. Math. 215. (3). (Excl). (BS).
537. Introduction to Differentiable Manifolds. Math. 513 and 590. (3). (Excl). (BS).
551. Advanced Multivariable Calculus. Math. 451 and 513; or permission of Honors advisor. (3). (Excl). (BS).
555. Introduction to Functions of a Complex Variable with Applications. Math. 450 or 451. Students with credit for Math. 455 or 554 can elect Math. 555 for one hour credit. (3). (Excl). (BS).
556. Methods of Applied Mathematics I. Math. 555 or 554. (3). (Excl). (BS).
557. Methods of Applied Mathematics II. Math. 556. (3). (Excl). (BS).
559. Selected Topics in Applied Mathematics. Math. 451 and 419, or equivalent. (3). (Excl). (BS). May be elected for a total of 6 credits.
561/SMS 518 (Business Administration)/IOE 510. Linear Programming I. Math. 217, 417, or 419. (3). (Excl). (BS).
562/IOE 511/Aero. 577/EECS 505/CS 505. Continuous Optimization Methods. Math. 217, 417 or 419. I. (3). (Excl). (BS).
565. Combinatorics and Graph Theory. Math. 412 or 451 or equivalent experience with abstract mathematics. (3). (Excl). (BS).
566. Combinatorial Theory. Math. 216, 286 or 316; or permission of instructor. (3). (Excl). (BS).
571. Numerical Methods for Scientific Computing I. Math. 217, 419, or 513; and 454 or permission of instructor. (3). (Excl). (BS).
572. Numerical Methods for Scientific Computing II. Math. 217, 419, or 513; and 454 or permission of instructor. (3). (Excl). (BS).
575. Introduction to Theory of Numbers I. Math. 451 and 513; or permission of instructor. Students with credit for Math. 475 can elect Math. 575 for 1 credit. (3). (Excl). (BS).
582. Introduction to Set Theory. Math. 412 or 451 or equivalent experience with abstract mathematics. (3). (Excl). (BS).
590. Introduction to Topology. Math. 451. (3). (Excl). (BS).
591. General and Differential Topology. Math. 451. (3). (Excl). (BS).
592. Introduction to Algebraic Topology. Math. 591. (3). (Excl). (BS).
593. Algebra I. Math. 513. (3). (Excl). (BS).
594. Algebra II. Math. 593. (3). (Excl). (BS).
596. Analysis I. Math. 451. (3). (Excl). (BS). Students with credit for Math. 555 may elect Math 596 for two credits only.
597. Analysis II. Math. 451 and 513. (3). (Excl). (BS).
Copyright © 1994-9
The Regents of the University of Michigan, Ann Arbor, MI 48109 USA
1.734.764.1817 (University Operator)
Find a basis for this subspace
October 5th 2010, 12:11 PM #1
Mar 2010
U is a subspace of $R^4$
U = {(a,b,c,d) $\in R^4$| b-2c+d = 0}
Find a basis for U.
Unfortunately, I don't even know where to begin. How do I know how many vectors are needed to span U. If someone can help me out and/or start me off, I'd really appreciate it.
Immediately, you know a=0, and that all 4-tuples that reside in U must then have zero first coordinate.
Take c and d free. Then b=2c-d. Now you can write, (a,b,c,d) = c(0,2,1,0) + d(0,-1,0,1).
So dim(U)=2, and a basis for U can be taken as {(0,2,1,0), (0,-1,0,1)}.
You can now express U by: <{(0,2,1,0), (0,-1,0,1)}>. which just says that U is the set of all linear combinations of the two guys inside the braces.
Last edited by PiperAlpha167; October 5th 2010 at 11:37 PM.
Thanks for your help, but I think you made a mistake.
a is not 0: a is free and can be anything, since it does not appear in b-2c+d=0. For example, a can be 100, as in u = (100, 1, 2, 3), or 10,000, as in (10,000, 0, 3, 6).
b- 2c+ d= 0 so d= 2c- b. Every vector (a, b, c, d) is equal to
(a, b, c, 2c-b) = (a, 0, 0, 0) + (0, b, 0, -b) + (0, 0, c, 2c) = a(1, 0, 0, 0) + b(0, 1, 0, -1) + c(0, 0, 1, 2).
That tells you everything.
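As a quick sanity check of that decomposition (my own addition, not from the thread), plain Python confirms that each of the three vectors lies in U and that the generic vector (a, b, c, 2c-b) is exactly the stated combination, so dim(U) = 3 rather than 2:

```python
def in_U(v):
    """Membership test for U = {(a, b, c, d) in R^4 : b - 2c + d = 0}."""
    a, b, c, d = v
    return b - 2*c + d == 0

# The basis from the corrected answer above.
basis = [(1, 0, 0, 0), (0, 1, 0, -1), (0, 0, 1, 2)]

def combo(a, b, c):
    """Return a*(1,0,0,0) + b*(0,1,0,-1) + c*(0,0,1,2)."""
    coeffs = (a, b, c)
    return tuple(sum(k * vec[i] for k, vec in zip(coeffs, basis))
                 for i in range(4))

print(all(in_U(v) for v in basis))   # True
print(combo(5, 7, -2))               # (5, 7, -2, -11), and -11 = 2c - b
```

The three vectors are independent since each introduces a new leading coordinate, which is why the first reply's two-vector answer was one vector short.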
October 5th 2010, 07:08 PM #2
Junior Member
Oct 2006
October 6th 2010, 02:30 AM #3
Mar 2010
October 6th 2010, 04:13 AM #4
MHF Contributor
Apr 2005 | {"url":"http://mathhelpforum.com/advanced-algebra/158507-find-basis-subspace.html","timestamp":"2014-04-17T22:48:48Z","content_type":null,"content_length":"38729","record_id":"<urn:uuid:6ee62586-1c92-4134-958c-f06f8b849d8b>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00360-ip-10-147-4-33.ec2.internal.warc.gz"} |
Question about black bodies, emissivity.
I'm guessing that, because the emissivity is zero, there is no way for heat energy to escape from the diamond through radiation. So heat keeps building up indefinitely at a rate of 0.3 J per second
inside the diamond, reaching infinity after an infinite amount of time. If the heat capacity of the diamond is finite, then the temperature at that time is infinite, since (temperature rise) = (heat energy) / (heat capacity).
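To make the runaway concrete, here is a minimal sketch (my own illustration, not from the thread) of the linear growth T(t) = T0 + P·t/C that follows from ΔT = Q/C with constant input power. The mass and specific heat below are assumed for illustration only; a real heat capacity varies with temperature.

```python
def temperature(t, t0=300.0, power=0.3, mass_g=1.0, c_per_g=0.5):
    """Temperature (K) after t seconds of heating with zero radiative loss.

    Assumes constant input power (W) and a constant specific heat
    (J per gram-kelvin) -- both simplifications.
    """
    heat = power * t                # accumulated heat energy, joules
    capacity = mass_g * c_per_g     # total heat capacity, J/K
    return t0 + heat / capacity

# Temperature grows linearly, without any upper bound.
for t in (0, 1_000, 1_000_000):
    print(t, temperature(t))        # ~300 K, ~900 K, ~600300 K
```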
Twos Complement Multiplication
A selection of articles related to twos complement multiplication.
Original articles from our library related to the Twos Complement Multiplication. See Table of Contents for further available material (downloadable resources) on Twos Complement Multiplication.
Twos Complement Multiplication is described in multiple online sources; in addition to our editors' articles, see the section below for printable documents, Twos Complement Multiplication books, and related discussion.
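Since the page itself never shows the technique, here is a minimal sketch of n-bit two's-complement multiplication (my own example, not drawn from the sources above): multiply as ordinary integers, keep only the low n bits, then reinterpret the sign bit.

```python
def to_signed(x, bits=8):
    """Reinterpret the low `bits` bits of x as a two's-complement value."""
    x &= (1 << bits) - 1            # keep only the low n bits
    if x & (1 << (bits - 1)):       # sign bit set -> negative value
        x -= 1 << bits
    return x

def mul_twos_complement(a, b, bits=8):
    """n-bit two's-complement multiply: wrap the product to n bits.

    a and b are ordinary Python ints standing in for n-bit registers.
    """
    return to_signed(a * b, bits)

print(mul_twos_complement(-3, 5))   # -15 (fits in 8 bits)
print(mul_twos_complement(100, 3))  # 44  (300 overflows and wraps mod 256)
```

The wraparound in the second call is the point of the representation: hardware multipliers simply discard the high bits, so overflow is silent.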
Great care has been taken to prepare the information on this page. Elements of the content come from factual and lexical knowledge databases, realmagick.com library and third-party sources. We
appreciate your suggestions and comments on further improvements of the site.
pp. 93-196. ERDŐS and R. RADO, A construction of graphs without triangles having pre-assigned order and chromatic number, The
- PUBLICATION OF THE MATHEMATICAL INSTITUTE OF THE HUNGARIAN ACADEMY OF SCIENCES , 1960
"... his 50th birthday. Our aim is to study the probable structure of a random graph rn N which has n given labelled vertices P, P2,..., Pn and N edges; we suppose_ ..."
Cited by 1849 (7 self)
Castle Rock, CO Algebra Tutor
Find a Castle Rock, CO Algebra Tutor
...I can help students struggling with Singapore Math understand how to build models to solve problems. I can help students increase their "Mental Math" capability that will enable them to do
more math, more quickly, and more reliably. I can administer a diagnostic test that helps me identify a particular student's math strengths, weaknesses, and gaps.
30 Subjects: including algebra 2, physics, geometry, algebra 1
My name is Logan and I'm enthusiastic about math and science. I'm patient, friendly, and easy-going while being goal-oriented, practical, and encouraging. My mission is to provide my clients with
the best tools possible to solve their own problems and succeed on their own.
13 Subjects: including algebra 2, algebra 1, reading, geometry
...I also tutor French and English students at the middle and high school levels, math at the middle school level, and all subjects at the elementary level. I have a B.S. in English. I have
taught writing classes and tutored writing students of all ages. My undergraduate degree is in English.
29 Subjects: including algebra 1, reading, English, ESL/ESOL
...High School and High School Math, Science and English related subjects. My educational background includes a BS in Electrical Engineering/Computer Sciences from the University of California at
Berkeley and an MBA form Santa Clara University. I have a working knowledge of Spanish, and have trave...
12 Subjects: including algebra 2, algebra 1, calculus, precalculus
...Math is my favorite subject although I do sports writing full time and have my own NFL blogging website. In my spare time I love to play the drums (16+ years exp), exercise, write, coach and
teach! The majority of people I know would say they love to be around me because I'm always upbeat, courteous and just fun to be around.
22 Subjects: including algebra 1, statistics, GED, elementary (k-6th)
Related Castle Rock, CO Tutors
Castle Rock, CO Accounting Tutors
Castle Rock, CO ACT Tutors
Castle Rock, CO Algebra Tutors
Castle Rock, CO Algebra 2 Tutors
Castle Rock, CO Calculus Tutors
Castle Rock, CO Geometry Tutors
Castle Rock, CO Math Tutors
Castle Rock, CO Prealgebra Tutors
Castle Rock, CO Precalculus Tutors
Castle Rock, CO SAT Tutors
Castle Rock, CO SAT Math Tutors
Castle Rock, CO Science Tutors
Castle Rock, CO Statistics Tutors
Castle Rock, CO Trigonometry Tutors
Nearby Cities With algebra Tutor
Aurora, CO algebra Tutors
Centennial, CO algebra Tutors
Cherry Hills Village, CO algebra Tutors
Denver algebra Tutors
Englewood, CO algebra Tutors
Federal Heights, CO algebra Tutors
Golden, CO algebra Tutors
Greenwood Village, CO algebra Tutors
Highlands Ranch, CO algebra Tutors
Lakewood, CO algebra Tutors
Littleton, CO algebra Tutors
Lone Tree, CO algebra Tutors
Lonetree, CO algebra Tutors
Parker, CO algebra Tutors
Wheat Ridge algebra Tutors | {"url":"http://www.purplemath.com/Castle_Rock_CO_Algebra_tutors.php","timestamp":"2014-04-19T07:40:15Z","content_type":null,"content_length":"24303","record_id":"<urn:uuid:5099d4b6-9548-49b9-85d0-d73d71eeafad>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00254-ip-10-147-4-33.ec2.internal.warc.gz"} |
Solving equation involving trigo. func.
August 21st 2007, 11:44 PM #1
Oct 2006
Solving equation involving trigo. func.
Solve, for $0<x<2\pi$, the equation $2\cos^2 x+\cos x\sin x-6\sin^2 x=0$
I tried to divide both sides by $\cos^2 x$ and obtain
$2+\tan x-6\tan^2 x=0$, which can be solved easily.
I can only divide both sides by $\cos^2 x$ provided that it is non-zero, right?
Do I have to check whether $\cos^2 x=0$ will give some other solution to the equation??
If $\cos x=0$, the equation forces $\sin x=0$; but $\sin x$ and $\cos x$ can't be 0 simultaneously, so $\cos x\ne 0$.
So you can divide the equation by $\cos^2 x$ without losing any solution.
Hello, acc100jt!
Another approach . . .
Solve, for $0 <x<2\pi$, the equation $2\cos^2\!x+\cos x\sin x-6\sin^2\!x\:=\:0$
Factor: . $(2\cos x - 3\sin x)(\cos x + 2\sin x) \:=\:0$
And solve the two equations . . .
$2\cos x \,- \,3\sin x \;=\;0\quad\Rightarrow\quad3\sin x\:=\:2\cos x\quad\Rightarrow\quad\frac{\sin x}{\cos x} \,=\,\frac{2}{3}$
. . $\tan x \,=\,\frac{2}{3}\quad\Rightarrow\quad x \,=\,\tan^{-1}\left(\frac{2}{3}\right)$
$\cos x + 2\sin x\;=\;0\quad\Rightarrow\quad 2\sin x \,=\,\text{-}\cos x\quad\Rightarrow\quad\frac{\sin x}{\cos x} \,=\,\text{-}\frac{1}{2}$
. . $\tan x \,=\,\text{-}\frac{1}{2}\quad\Rightarrow\quad x \:=\:\tan^{-1}\left(\text{-}\frac{1}{2}\right)$
. . . with the same results.
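As an aside, the factorization and the resulting roots are easy to sanity-check numerically; the following Python sketch (illustrative only, not part of the original thread) verifies that the factored form agrees with the left side and that the four angles in $(0, 2\pi)$ satisfy the equation.

```python
import math

def lhs(x):
    # Left side of the equation: 2cos^2 x + cos x sin x - 6 sin^2 x
    return 2*math.cos(x)**2 + math.cos(x)*math.sin(x) - 6*math.sin(x)**2

def factored(x):
    # Soroban's factorization: (2cos x - 3sin x)(cos x + 2sin x)
    return (2*math.cos(x) - 3*math.sin(x)) * (math.cos(x) + 2*math.sin(x))

# The two forms agree everywhere (spot check on a grid).
for k in range(100):
    x = k * 0.0628
    assert abs(lhs(x) - factored(x)) < 1e-9

# The four roots in (0, 2*pi): tan x = 2/3 or tan x = -1/2.
roots = sorted([math.atan(2/3), math.atan(2/3) + math.pi,
                math.atan(-1/2) + math.pi, math.atan(-1/2) + 2*math.pi])
assert all(abs(lhs(r)) < 1e-12 and 0 < r < 2*math.pi for r in roots)
```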
August 22nd 2007, 12:30 AM #2
August 22nd 2007, 06:04 AM #3
Super Member
May 2006
Lexington, MA (USA) | {"url":"http://mathhelpforum.com/trigonometry/17947-soving-equation-involving-trigo-func.html","timestamp":"2014-04-20T06:18:48Z","content_type":null,"content_length":"40253","record_id":"<urn:uuid:161f8d47-2b22-41c3-96ea-e6d6a6f71e46>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00594-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lectures on Quantum Mechanics by Weinberg
I'd not conclude that a textbook must be good, because it's written by a Nobel laureat, but in the case of Weinberg it's true. All his textbooks are just very well written with a clear exposition of
the subject in a deductive way, which I myself always prefer compared to inductive expositions of a subject. Of course, also the history of science is important, and that's also covered by Weinberg
in well written introductory chapters on the historical development of the theory.
Concerning the subjects covered the book is pretty standard for an advanced graduate course in non-relativistic quantum theory. All the important topics are covered, including a very clear foundation
of the Hilbert-space formalism, which is used from the very beginning (after one chapter, where the hydrogen atom and the harmonic oscillator are treated in the wave-mechanical way).
In chapter 3 he gives a complete foundation of the quantum theoretical formalism in terms of the abstract Hilbert-space formulation, using symmetry arguments to establish the operator algebra of
observables for non-relativistic quantum theory (i.e., using Galileo invariance as a starting point).
For me the most interesting part of chapter 3 is Sect. 3.7 on the interpretation of quantum theory, where after a very good summary about the various interpretations he finally comes to the
conclusion that a complete satisfactory interpretation of the quantum theoretical formalism has not yet been achieved.
The rest of the book is simply a very good presentation of the standard material that any quantum mechanics course should cover, including the quantum mechanical description of angular momentum,
time-independent and time-dependent perturbation theory, scattering theory (marvelous via the time-dependent wave-packet approach, which he has already used in his quantum theory of fields vol. 1 and
which is, in my opinion, the only satisfactory derivation for the S-matrix anyway!).
The book closes with a concise exhibition of "non-relativistic QED", i.e., the quantized electromagnetic field coupled to "Schrödinger particles" and the final (unfortunately rather short) chapter
about entanglement, discussing the interesting topics of EPR, the Bell inequalities, and quantum computing.
As usual with Weinberg's books, it's not written for beginners in the field but for the advanced graduate student. Beginners are better served by Ballentine's book, although that one is also rather tough
for the beginner. Compared to Weinberg's book, in my opinion its main advantage is that the mathematics of the rigged Hilbert space is also developed to a certain extent.
For me, the best introductory text still is J.J. Sakurai, Modern Quantum Mechanics but Weinberg's is a must-reading for the more advanced scholar! | {"url":"http://www.physicsforums.com/showthread.php?p=4277230","timestamp":"2014-04-18T15:51:43Z","content_type":null,"content_length":"29697","record_id":"<urn:uuid:d73e2846-9f91-4995-8241-0da1041d3b78>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00233-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mollier Diagram
Dry-Bulb Temperature - Tdb • Dry bulb temperature is usually referred to as air temperature, is the air property that is most common used. When people refer to the temperature of the air,
Title: mollier_chart_iapws95_eng.PDF Author: rsaini Created Date: 12/31/1998 3:12:41 PM
APPENDIX B Properties of Steam (English units) Table B.1 Saturated Steam: Temperature Table Table B.2 Saturated Steam: Pressure Table Figure B.1 Mollier Diagram for Steam (two parts)
Carbon Dioxide: Pressure - Enthalpy Diagram Melting Line-40 o -40 C-2 0-2 0 ... Title: mollier_chart_met.xls Created Date: 11/10/1999 5:42:45 PM
R. Mollier – The ix-diagram for air+water vapor mixtures, 1929 3 / 13 Oblique coordinates are advantageous for the ix-diagram. They permit a better use of the
Mollier diagram for water. Source: Joseph H. Keenan, Frederick G. Keyes, Philip G. Hill, and Joan G. Moore, Steam Tables (New York: John Wiley & Sons, 1969). cen84959_ap02.qxd 4/27/05 3:02 PM Page
951. TABLE A–11E Saturated refrigerant-134a—Temperature table
Concepts in Mollier's hx-diagram: the density of the air (ρ) is the vertical orange axis at the far left. Read off the air density by following the sloping orange
The Mollier diagram and the Psychrometric Chart If you think that the slightly skew edges of your Mollier or Psychrometric diagram are the result of
The steam-turbine expansion line on the Mollier diagram, and a short method of finding the reheat factor Author: Buckingham, E. Keywords" Created Date:
Title: DuPont™ Freon® 22 Pressure-Enthalpy Diagram, SI units Author: DuPont Fluoroproducts Subject: DuPont™ Freon®, technical literature Keywords
copyright Cogen Projects BV 2008 not for commercial use for ordering more detailed diagrams : mail to [email protected] (euro 100,- for 3 different ranges, pdf format, V.A.T excluded)
R. Mollier – A new diagram for air+water vapor mixtures, 1923 4 / 11 It is important for many applications to note that this equation represents as well the enthalpy of
The Mollier hx-diagram quantities ... Note that the hx-diagram used throughout this booklet applies to an atmospheric pressure of 1013 mbar. Note that the hx-diagram used in this guide applies to an
atmospheric pressure of 1013 mbar. Title:
MOLLIER DIAGRAM NORMAL TEMPERATURE I-P Units SEA LEVEL Chart by: HANDS DOWN SOFTWARE, www.handsdownsoftware.com. Hsuqe . Title: Mollier-1.cdr Author: Robert Hanna Created Date:
1 10 100 150 200 250 300 350 400 450 500 550 600 Pressure=Bar Temperature=ºC Enthalpy=kJ/ kg Entropy= kJ / kg.K Density= kg/m3 Enthalpy, Log Pressure diagram for
Mollier h/x-diagram Psychrometric Chart Barometric pressure 1013 mbar Rue du Dobbelenbergstraat 7 - 1130 Brussels Tel. (32) 2 - 240 61 61 - Fax (32) 2- 240 61 81
Mollier Diagram and AOl Uses Below is a Mollier diagram used to plot where the process is at with respect to Enthalpy and Entropy. This is a graphical representation of where you are in the steam
process. This diagram is used by many plant engineers that
Microclimate Control in Greenhouses based on Phytomonitoring Data and Mollier Phase Diagram U.Schmidt, Humboldt University, Institute for Horticultural Sciences, Berlin
Pressure-Enthalpy Diagram S=Specific Entropy, kJ/kg*K T=Temperature, ºC Ethylene Produced by I. Aartun, NTNU 2002. Based on the program Allprops, Center for Applied Thermodynamic Studies, University
of Idaho. 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 2 3 4 5 6 7 8 9. Title:
the Mollier diagram Objects of the experiment g To know the basic structure and the application of a Mollier diagram. g To be able to represent the thermo-dynamic cycle of the heat pump in a Mollier
diagram using the measured data.
NOLTR 67-32 MOLLIER DIAGRAM FOR NITROGEN NOL 4 MARCH 1967 UNITED STATES NAVAL ORDNANCE LABORATORY, WHITE OAK, MARYLAND II CMJ - -z 0 Distribution of this document is unlimited.
Title: Forane® 407A mollier diagram Author: Arkema Subject: Forane® 407A refrigerant mollier diagram Keywords: forane; arkema; forane mollier diagram; forane refrigerants mollier diagram; 407A
mollier diagram; forane 407a; 407a refrigerant diagram
INTRODUCTION A Mollier diagram is a graphical representation of the properties of a refrigerant, generally in terms of enthalpy and entropy. A familiarity with these dia-
T-S DIAGRAM Internally reversible process: dS rev Q T ... Isentropic process (S const=): Carnot Cycle: Qnet,in = W net,out h-s diagram (Mollier diagram)
Figure A–10 Mollier diagram for water ... P-h diagram for refrigerant-134a. Note: The reference point used for the chart is different than that used in the R-134a tables. Therefore, problems should
be solved using all property data
126 To open up the third dimension of the mollier phase diagram the description of an additional value as a crop status indicator is necessary.
The Mollier hx diagram represents the air water mixture. It is in such a way developed that the 0°C-Iso-therm is horizontal in the range of non saturated air.
a mollier diagram of ammonia with the approximation range and the rated refrigeration cycle. Saturated refrigerant properties For saturated refrigerants, enthalpy, temperature and density are
expressed as a function of pressure.
Decision support for greenhouse climate control using a computerised Mollier Diagram Uwe Schmidt Humboldt University Berlin Faculty for Agriculture and Horticulture
Figure 2. Mollier diagram for moist air - In the explanatory Mollier diagram (fig. 2. point 1.), primary air of 28°C/50% R.H. (11,8 gr humidity /kg air) is admitted and cooled down to 18,5°C.
Mollier TX diagram for: Nitrogen / Acetone / 1.040 bar Gas Steam Name Nitrogen Acetone Formula N2 C3H6O CAS 7727-37-9 67-64-1 Molecular weight kg/kMol 28.013 58.079
Title: mollier_chart_iapws95_eng.PDF Author: rsaini Created Date: 19981231151241Z
APPENDIX C Properties of Steam (SI units) Figure C.1 Isentropic Exponent for Steam Figure C.2 Mollier Diagram for Steam Figure C.3 PressureŒEnthalpy Diagram for Steam
The Mollier-diagram In the Mollier-diagram on the last page of the instruction you will draw a curve, corresponding to the data you have obtained during the lab exercise. The diagram shows the
pressure as a function of the enthalpy.
1/4 Diagram of the water-steam cycle By Jean-Yves Usually, for a steam turbine the cycle is represented on a Mollier diagram. This is justified by the fact that for any power and any place in the
cycle the conditions (pressure,
From Mollier diagram (H-S Diagram) estimate the theoretical heat extraction for the conditions mentioned in Step 1. Towards this: a) Plot the turbine inlet condition point in the Mollier chart -
corresponding to steam pressure and temperature.
... MOLLIER DIAGRAM ENTHALPY-ENTROPY CHART FOR STEAM IN SI UNITS. [ EntirEly in si units ] [ including molliEr charts ] About the book CoNteNt ...
THE H-S MOLLIER DIAGRAM. Statement. Analyse the . h-s (Mollier) diagram for a pure substance. In particular: a) Demonstrate that isobars are straight lines in the two-phase region.
MOLLIER R-427A Author: a0014454 Created Date: 5/4/2007 5:19:22 PM ...
4 Figure 6: Detailed p-h diagram of the PCF boiler steam/water heating process (red line) from figure 4. Enthalpy-entropy (Mollier, h-s) diagram
39 Exhaust temperature, to C 305 See steam Mollier diagram 40 Exhaust specific volume, vo m 3/kg 0.19936 See steam table 41 P LOSS kW 27.09 Equation 5
Fig. 4. Mollier diagram of refrigerant system Above on Fig. 4 real parameters of refrigerant system are shown. In comparison with cycle shown on Fig. 2 it looks very similarly, but it has to be
emphasized that increasing part of mass
of Mollier diagram that displays the thermodynamic properties in regard to pressu-re and enthalpy for HFC R134a. This Mollier diagram includes the same numbered flow path that is shown in the bottom
schematic. The Mollier diagram is a unique
Mollier Diagram • Can create similar state diagrams with other properties – Mollier Diagram h s p T. Title: Microsoft PowerPoint - idealdissocgas_summary.ppt Author: Jerry Created Date:
Entropy (Btu/lb OF) No tv-reheat "Al _ Q P 'in Non-reheat 20 ppb NaCl 10 ppb NaOH 10 ppb *uniš. = 10 ppb Entropy (Btu/lb OF) Figure 13. Mollier diagram for reheat and non-reheat turbine steam
expansion with
Figure 3 – Enthalpy-entropy (h-s) Mollier diagram for water-steam Stirling cycle machines and the Second Law In all textbooks reviewed, after developing the Carnot relations from the Second Law, the
09ME203 THERMAL ENGINEERING Credits 3:1:0 (Use of standard thermodynamic tables, Mollier diagram, Psychometric chart and
STEAM TURBINE CALCULATION SHEET Page : 2 No. Designation Quantity Note and additional information 1 he IMP kJ/kg 3331.2 Than make steam process in Mollier diagr.
The Mollier Diagram (left) is a graphical representation of the steam table plotted with enthalpy on the vertical axis and entropy on the horizontal axis. | {"url":"http://ebookily.org/pdf/mollier-diagram","timestamp":"2014-04-16T10:15:37Z","content_type":null,"content_length":"39790","record_id":"<urn:uuid:08330c7f-7c0e-4497-8fa5-07811687e790>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00013-ip-10-147-4-33.ec2.internal.warc.gz"} |
scaleboot Home Page
Hidetoshi Shimodaira
Department of Mathematical and Computing Sciences
Tokyo Institute of Technology
Ookayama, Meguroku, Tokyo 152-8552, JAPAN
shimo (a) is.titech.ac.jp
What is scaleboot?
scaleboot is an add-on package of R. This is for calculating approximately unbiased (AU) p-values from a set of multiscale bootstrap probabilities for a hypothesis. Scaling is equivalent to changing
the sample size of data set in bootstrap resampling. We calculate bootstrap probabilities at several scales, from which a very accurate p-value is calculated. This multiscale bootstrap method has
been implemented in CONSEL software and pvclust package. The thrust of scaleboot package is to calculate an improved version of AU p-values which are justified even for hypotheses with nonsmooth
boundaries by taking care of the singularity.
scaleboot package includes an interface to pvclust package of R for bootstrapping hierarchical clustering. We use pvclust to calculate multiscale bootstrap probabilities, from which we calculate an
improved version of AU p-values using scaleboot.
scaleboot has a front end for phylogenetic inference, and it can replace the CONSEL software for testing phylogenetic trees. Currently, scaleboot does not have a method for converting the file
formats of several phylogenetic programs, and so we must use CONSEL for this purpose before applying scaleboot to calculate an improved version of AU p-values for trees and edges.
The package vignette "Multiscale Bootstrap Using Scaleboot Package" (usesb.pdf) explains the methodology. It includes a simple example for illustration. It also includes real applications in
hierarchical clustering and phylogenetic inference. Further description is given in Shimodaira (2008). For the use of scaleboot, Shimodaira (2008) may be referenced.
scaleboot is easily installed from CRAN online. Windows users can install the package by choosing "scaleboot" from the pull-down menu. Otherwise, run R on your computer and type
> install.packages("scaleboot")
You can also download the package file below, and install manually.
Current version: scaleboot_0.3-3.tgz (2010/05/14) : only minor fixes for help pages from the previous version
Previous version: scaleboot_0.3-2.tgz (2008/02/05)
Manual: scaleboot.pdf (web version)
Package vignette: usesb.pdf ("Multiscale Bootstrap Using Scaleboot Package")
Dataset files
Supplementary dataset files for phylogenetic inference are available. See Note in help(mam15).
Description: INDEX.txt
Compressed file: mam15-files.tgz (unix), mam15-files.zip (win)
• Shimodaira, H. (2002). An approximately unbiased test of phylogenetic tree selection, Systematic Biology, 51, 492-508.
• Shimodaira, H. (2004). Approximately unbiased tests of regions using multistep-multiscale bootstrap resampling, Annals of Statistics, 32, 2616-2641. [PDF] [SUPPLEMENT]
• Shimodaira, H. (2006). Approximately Unbiased Tests for Singular Surfaces via Multiscale Bootstrap Resampling, Research Reports B-430. Department of Mathematical and Computing Sciences, Tokyo
Institute of Technology, Japan. [PDF]
• Shimodaira, H. (2006). Technical Details of Multiscale Bootstrap for Singular Surfaces, Research Reports B-431. Department of Mathematical and Computing Sciences, Tokyo Institute of Technology,
Japan. [PDF]
• Shimodaira, H. (2008). Testing Regions with Nonsmooth Boundaries via Multiscale Bootstrap, Journal of Statistical Planning and Inference, 138, 1227-1241, 2008. http://dx.doi.org/10.1016/ | {"url":"http://www.is.titech.ac.jp/~shimo/prog/scaleboot/","timestamp":"2014-04-19T04:19:49Z","content_type":null,"content_length":"6338","record_id":"<urn:uuid:2e05966f-29bb-48a8-8b30-22ca0165bcb5>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00501-ip-10-147-4-33.ec2.internal.warc.gz"} |
We begin with an old problem that no one managed to solve.
90. Let m be a positive integer, and let f(m) be the smallest value of n for which the following statement is true:
given any set of n integers, it is always possible to find a subset of m integers whose sum is divisible by m
Determine f(m).
[N. Sato] The value of f(m) is 2m - 1. The set of 2m - 2 numbers consisting of m - 1 zeros and m - 1 ones does not satisfy the property; from this we can see that n cannot be less than 2m - 1.
We first establish that, if $f(u) = 2u - 1$ and $f(v) = 2v - 1$, then $f(uv) = 2uv - 1$. Suppose that $2uv - 1$ numbers are given. Select any $2u - 1$ at random. By hypothesis, there exists a $u$-subset whose sum is divisible by $u$; remove these $u$ elements. Continue removing $u$-subsets in this manner until there are fewer than $u$ numbers remaining. Since $2uv - 1 = (2v - 1)u + (u - 1)$, we will have $2v - 1$ sets of $u$ numbers summing to a multiple of $u$. For $1 \le i \le 2v - 1$, let $ua_i$ be the sum of the $i$th of these $2v - 1$ sets. We can choose exactly $v$ of the $a_i$ whose sum is divisible by $v$. The $v$ $u$-sets corresponding to these form the desired $uv$ elements whose sum is divisible by $uv$. Thus, if we can show that $f(p) = 2p - 1$ for each prime $p$, we can use the fact that each number is a product of primes to show that $f(m) = 2m - 1$ for each positive integer $m$.

Let $x_1, x_2, \dots, x_{2p-1}$ be $2p-1$ integers. Wolog, we can assume that the $x_i$ have been reduced to their least non-negative residues modulo $p$ and that they are in increasing order. For $1 \le i \le p-1$, let $y_i = x_{p+i} - x_i$; we have that $y_i \ge 0$. If $y_i = 0$ for some $i$, then $x_{i+1} = \dots = x_{p+i}$, in which case $x_{i+1} + \dots + x_{p+i}$ is a multiple of $p$ and we have achieved our goal. Henceforth, assume that $y_i > 0$ for all $i$.

Let $s = x_1 + x_2 + \dots + x_p$. Replacing $x_i$ by $x_{p+i}$ in this sum is equivalent to adding $y_i$. We wish to show that there is a set of the $y_i$ whose sum is congruent to $-s$ modulo $p$; this would indicate which of the first $p$ of the $x_i$ to replace to get a sum which is a multiple of $p$.

Suppose that $A_0 = \{0\}$, and, for $k \ge 1$, that $A_k$ is the set of distinct numbers $i$ with $0 \le i \le p-1$ which either lie in $A_{k-1}$ or are congruent to $a + y_k$ for some $a$ in $A_{k-1}$. Note that each element of $A_k$ is equal to 0 or congruent (modulo $p$) to a sum of distinct $y_i$. We claim that the number of elements in $A_k$ must increase by at least one for every $k$ until $A_k$ is equal to $\{0, 1, \dots, p-1\}$.

Suppose that going from $A_{j-1}$ to $A_j$ yields no new elements. Since $0 \in A_{j-1}$, $y_j \in A_j$, which means that $y_j \in A_{j-1}$. Then $2y_j = y_j + y_j \in A_j = A_{j-1}$, $3y_j = 2y_j + y_j \in A_j = A_{j-1}$, and so on. Thus, all multiples of $y_j$ (modulo $p$) are in $A_{j-1}$. As $p$ is prime, we find that $A_{j-1}$ must contain $\{0, 1, \dots, p-1\}$. We deduce that some sum of the $y_i$ is congruent to $-s$ modulo $p$ and we obtain the desired result.
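This is the Erdős–Ginzburg–Ziv theorem. As an illustrative cross-check (not part of the proof), a short Python brute force can confirm $f(m) = 2m - 1$ for small $m$; since divisibility depends only on residues modulo $m$, it suffices to test multisets of residues.

```python
from itertools import combinations, combinations_with_replacement

def has_m_subset(nums, m):
    # Does some m-element subset of nums have a sum divisible by m?
    return any(sum(c) % m == 0 for c in combinations(nums, m))

def f(m):
    # Smallest n such that EVERY multiset of n integers contains an
    # m-subset whose sum is divisible by m.  Only residues mod m matter,
    # so testing multisets drawn from range(m) is exhaustive.
    n = m
    while not all(has_m_subset(t, m)
                  for t in combinations_with_replacement(range(m), n)):
        n += 1
    return n

values = [f(m) for m in (2, 3, 4)]
assert values == [3, 5, 7]   # matches 2m - 1
```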
Let ABC be a right triangle with $\angle A = 90^\circ$. Let P be a point on the hypotenuse BC, and let Q and R be the respective feet of the perpendiculars from P to AC and AB. For what position of P is the
length of QR minimum?
PQAR, being a quadrilateral with right angles at A, Q and R, is a rectangle. Therefore, its diagonals QR and AP are equal. The length of QR is minimized when the length of AP is minimized, and this
occurs when P is the foot of the perpendicular from A to BC.
Comment. P must be chosen so that PB:PC = AB^2:AC^2.
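Both the minimizing position and the ratio in the comment can be confirmed with coordinates; the following Python sketch uses the illustrative choice A = (0,0), B = (3,0), C = (0,4), so AB = 3, AC = 4, BC = 5.

```python
import math

# Illustrative coordinates: right angle at A, AB = 3, AC = 4, BC = 5.
A, B, C = (0.0, 0.0), (3.0, 0.0), (0.0, 4.0)

def P(t):
    # Point on BC parametrized by t in [0, 1]; P(0) = B, P(1) = C.
    return (B[0] + t*(C[0]-B[0]), B[1] + t*(C[1]-B[1]))

def qr(t):
    # |QR| = |AP|, the diagonal of rectangle PQAR.
    x, y = P(t)
    return math.hypot(x - A[0], y - A[1])

# Grid minimisation of |QR| over positions of P on BC.
t_best = min((i/100000 for i in range(100001)), key=qr)

# Foot of the perpendicular from A to BC.
d2 = (C[0]-B[0])**2 + (C[1]-B[1])**2
t_foot = ((A[0]-B[0])*(C[0]-B[0]) + (A[1]-B[1])*(C[1]-B[1])) / d2

assert abs(t_best - t_foot) < 1e-4         # minimum occurs at the foot
assert abs(qr(t_foot) - 12/5) < 1e-9       # altitude length AB*AC/BC
PB, PC = 5*t_foot, 5*(1 - t_foot)
assert abs(PB/PC - 9/16) < 1e-9            # PB:PC = AB^2:AC^2
```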
Suppose that ABC is an equilateral triangle. Let P and Q be the respective midpoints of AB and AC, and let U and V be points on the side BC with 4BU = 4VC = BC and 2UV = BC. Suppose that PV is
joined and that W is the foot of the perpendicular from U to PV and that Z is the foot of the perpendicular from Q to PV.
Explain how the four polygons APZQ, BUWP, CQZV and UVW can be rearranged to form a rectangle. Is this rectangle a square?
Consider a $180^\circ$ rotation about Q so that C falls on A, Z falls on $Z_1$ and V falls on $V_1$. The quadrilateral QZVC goes to $QZ_1V_1A$, $ZQZ_1$ is a line and $\angle QAV_1 = 60^\circ$. Similarly, a $180^\circ$ rotation about P takes quadrilateral PBUW to $PAU_1W_1$, with $WPW_1$ a line and $\angle U_1AP = 60^\circ$. Since $\angle U_1AP = \angle PAQ = \angle QAV_1 = 60^\circ$, $U_1AV_1$ is a line and
$$U_1V_1 = U_1A + AV_1 = UB + CV = \tfrac12 BC = UV.$$
Translate U and V to fall on $U_1$ and $V_1$ respectively; let W fall on $W_2$. Since
$$\angle W_1U_1W_2 = \angle W_1U_1A + \angle W_2U_1A = \angle WUB + \angle WUV = 180^\circ,$$
$$\angle W_2V_1Z_1 = \angle W_2V_1A + \angle AV_1Z_1 = \angle WVU + \angle CVZ = 180^\circ,$$
$$\angle W_2 = \angle W_1 = \angle Z_1 = \angle WZQ = 90^\circ,$$
it follows that $Z_1W_2W_1Z$ is a rectangle composed of isometric images of APZQ, BUWP, CQZV and UVW.
Since PU and QV are both parallel to the median from A to BC, we have that PQVU is a rectangle for which PU < PB = PQ. Thus, PQVU is not a square and so its diagonals PV and QU do not intersect at
right angles. It follows that W and Z do not lie on QU and so must be distinct.
Since PZQ and VWU are right triangles with $\angle QPZ = \angle UVW$ and $PQ = VU$, they must be congruent, so that $PZ = VW$, $PW = ZV$ and $UW = QZ$. Since
$$W_1W_2 = W_1U_1 + U_1W_2 = WU + UW = WU + QZ < UQ = PV = PZ + ZV = PZ + PW = PZ + PW_1 = W_1Z,$$
the adjacent sides of $Z_1W_2W_1Z$ are unequal, and so the rectangle is not square.
Comment. The inequality of the adjacent sides of the rectangle can be obtained also by making measurements. Take 4 as the length of a side of triangle ABC. Then
$$|PU| = \sqrt3, \quad |PQ| = 2, \quad |QU| = |PV| = \sqrt7.$$
Since the triangles PUW and PVU are similar, $UW : PU = VU : PV$, whence $|UW| = 2\sqrt{21}/7$. Thus, $|W_1W_2| = 4\sqrt{21}/7 \ne \sqrt7 = |W_1Z|$.
One can also use the fact that the areas of the triangle and rectangle are equal. The area of the triangle is $4\sqrt3$. It just needs to be verified that one of
the sides of the rectangle is not equal to the square root of this.
Let $a > 0$ and let $n$ be a positive integer. Determine the maximum value of
$$\frac{x_1 x_2 \cdots x_n}{(1 + x_1)(x_1 + x_2) \cdots (x_{n-1} + x_n)(x_n + a^{n+1})}$$
subject to the constraint that $x_1, x_2, \dots, x_n > 0$.
Let $u_0 = x_1$, $u_i = x_{i+1}/x_i$ for $1 \le i \le n-1$ and $u_n = a^{n+1}/x_n$. Observe that $u_0u_1 \cdots u_n = a^{n+1}$. The quantity in the problem is the reciprocal of
$$(1 + u_0)(1 + u_1) \cdots (1 + u_n) = 1 + \sum u_i + \sum u_iu_j + \cdots + \sum u_{i_1}u_{i_2} \cdots u_{i_k} + \cdots + u_0u_1 \cdots u_n.$$
For $k = 1, 2, \dots, n$, the sum $\sum u_{i_1}u_{i_2} \cdots u_{i_k}$ adds together all the $\binom{n+1}{k}$ $k$-fold products of the $u_i$; the product of all the terms in this sum is equal to $a^{n+1}$ raised to the power $\binom{n}{k-1}$, namely, to $a$ raised to the power $k\binom{n+1}{k}$. By the arithmetic-geometric means inequality,
$$\sum u_{i_1}u_{i_2} \cdots u_{i_k} \ge \binom{n+1}{k} a^k.$$
Hence
$$(1 + u_0)(1 + u_1) \cdots (1 + u_n) \ge 1 + (n+1)a + \cdots + \binom{n+1}{k} a^k + \cdots + a^{n+1} = (1 + a)^{n+1},$$
with equality if and only if $u_0 = u_1 = \cdots = u_n = a$. It follows from this that the quantity in the problem has maximum value $(1 + a)^{-(n+1)}$, with equality if and only if $x_i = a^i$ for $1 \le i \le n$.
Comment. Some of you tried the following strategy. If any two of the u[i] were unequal, they showed that a larger value could be obtained for the given expression by replacing each of these by
another value. They then deduced that the maximum occurred when all the u[i] were equal. There is a subtle difficulty here. What has really been proved is that, if there is a maximum, it can occur
only when the $u_i$ are equal. However, it begs the question of the existence of a maximum. To appreciate the point, consider the following argument that 1 is the largest positive integer. We note that, given any integer $n$ exceeding 1, we can find another integer that exceeds $n$, namely $n^2$. Thus, no integer exceeding 1 can be the largest positive integer. Therefore, 1 itself must be the largest positive integer.
Some of you tried a similar approach with the $x_i$, and showed that for a maximum, one must have all the $x_i$ equal to 1. However, they neglected to build in the relationship between $x_n$ and $a^{n+1}$, which of course cannot be equal if all the $x_i$ are 1 and $a \ne 1$. This leaves open the possibility of making the given expression larger by bettering the relationship between the $x_i$ and $a$ and possibly allowing inequalities of the variables.
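A numeric spot check of the conclusion (an illustrative Python sketch; the parameter choice n = 3, a = 2 is arbitrary) confirms that $x_i = a^i$ attains $(1+a)^{-(n+1)}$ and that random points do not exceed it.

```python
import random

def value(xs, a):
    # x1...xn / [(1+x1)(x1+x2)...(x_{n-1}+x_n)(x_n + a^{n+1})]
    n = len(xs)
    num = 1.0
    for x in xs:
        num *= x
    den = 1 + xs[0]
    for i in range(n - 1):
        den *= xs[i] + xs[i + 1]
    den *= xs[-1] + a**(n + 1)
    return num / den

a, n = 2.0, 3
claimed_max = (1 + a)**(-(n + 1))                 # (1+a)^{-(n+1)} = 1/81
opt = [a**i for i in range(1, n + 1)]             # claimed maximiser x_i = a^i
assert abs(value(opt, a) - claimed_max) < 1e-15

random.seed(1)
for _ in range(2000):
    xs = [random.uniform(0.05, 30.0) for _ in range(n)]
    assert value(xs, a) <= claimed_max + 1e-12
```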
For a given prime number $p$, find the number of distinct sequences of natural numbers (positive integers) $\{a_0, a_1, \dots, a_n, \dots\}$ satisfying, for each positive integer $n$, the equation
$$\frac{a_0}{a_1} + \frac{a_0}{a_2} + \cdots + \frac{a_0}{a_n} + \frac{p}{a_{n+1}} = 1.$$
For $n \ge 3$ we have that
$$\frac{a_0}{a_1} + \frac{a_0}{a_2} + \cdots + \frac{a_0}{a_{n-2}} + \frac{p}{a_{n-1}} = \frac{a_0}{a_1} + \frac{a_0}{a_2} + \cdots + \frac{a_0}{a_{n-1}} + \frac{p}{a_n},$$
so that
$$\frac{p}{a_n} = \frac{p - a_0}{a_{n-1}}, \qquad\text{i.e.,}\qquad a_n = \frac{p\,a_{n-1}}{p - a_0}.$$
Thus, for $n \ge 2$, we have that
$$a_n = \frac{p^{n-2} a_2}{(p - a_0)^{n-2}}.$$
Since $1 \le p - a_0 \le p - 1$, $p - a_0$ and $p$ are coprime. It follows that either $p - a_0$ must divide $a_2$ to an arbitrarily high power (impossible!) or $p - a_0 = 1$.
Therefore, $a_0 = p - 1$ and $a_n = p^{n-2}a_2$ for $n \ge 2$. Thus, once $a_1$ and $a_2$ are selected, then the rest of the sequence $\{a_n\}$ is determined. The remaining condition that has to be satisfied is
$$1 = \frac{a_0}{a_1} + \frac{p}{a_2} = \frac{p-1}{a_1} + \frac{p}{a_2}.$$
This is equivalent to
$$(p - 1)a_2 + pa_1 = a_1 a_2,$$
or
$$[a_1 - (p-1)][a_2 - p] = p(p-1).$$
The factors $a_1 - (p-1)$ and $a_2 - p$ must be both negative or both positive. The former case is excluded by the fact that $(p-1) - a_1$ and $p - a_2$ are respectively less than $p-1$ and $p$. Hence, each choice of the pair $(a_1, a_2)$ corresponds to a choice of a pair of positive divisors of $p(p-1)$. There are $d(p(p-1)) = 2d(p-1)$ such choices, where $d(n)$ is the number of positive divisors of the positive integer $n$.
Comment. When $p = 5$, for example, the possibilities for $(a_1, a_2)$ are $(5, 25)$, $(6, 15)$, $(8, 10)$, $(9, 9)$, $(14, 7)$, $(24, 6)$. In general, particular choices of sequences that work are
$$\{p-1, 2p-1, 2p-1, p(2p-1), \dots\} \qquad\text{and}\qquad \{p-1, p^2 - 1, p + 1, p(p+1), \dots\}.$$
A variant on the argument showing that the $a_n$ from some point on constituted a geometric progression started with the relation $p(a_n - a_{n-1}) = a_0 a_n$ for $n \ge 3$, whence
$$\frac{a_n}{a_{n-1}} = \frac{p}{p - a_0}.$$
Thus, for $n \ge 3$, the ratio $a_n/a_{n-1}$ is constant, which forces $\{a_2, a_3, \dots\}$ to be a geometric progression. The common ratio must be a positive integer $r$ for which $r = p/(p - a_0)$. This forces $p - a_0$ to be equal to 1.
Quite a few solvers lost points because of poor book-keeping; they did not identify the correct place at which the geometric progression began. It is often a good idea to write out the first few
equations of a general relation explicitly in order to avoid this type of confusion. You must learn to pay attention to details and check work carefully; otherwise, you may find yourself settling for
a score on a competition less than you really deserve on the basis of ability.
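The count $2d(p-1)$ and the comment's list for $p = 5$ can be reproduced directly from the factored condition; the Python sketch below (illustrative only) enumerates the divisor pairs and checks the remaining condition exactly with rational arithmetic.

```python
from fractions import Fraction

def d(n):
    # Number of positive divisors of n.
    return sum(1 for k in range(1, n + 1) if n % k == 0)

def pairs(p):
    # Solutions of [a1 - (p-1)] * [a2 - p] = p(p-1) in positive factors;
    # each pair (a1, a2) determines one admissible sequence.
    t = p * (p - 1)
    return [(u + p - 1, t // u + p) for u in range(1, t + 1) if t % u == 0]

for p in (2, 3, 5, 7):
    ps = pairs(p)
    assert len(ps) == 2 * d(p - 1)                  # the claimed count
    for a1, a2 in ps:
        # remaining condition: (p-1)/a1 + p/a2 = 1
        assert Fraction(p - 1, a1) + Fraction(p, a2) == 1

assert sorted(pairs(5)) == [(5, 25), (6, 15), (8, 10), (9, 9), (14, 7), (24, 6)]
```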
Consider a cube concentric with a parallelepiped (rectangular box) with sides a < b < c and faces parallel to those of the cube. Find the side length of the cube for which the difference between
the volume of the union and the volume of the intersection of the cube and parallelepiped is minimum.
Let x be the length of the side of the cube and let f(x) be the difference between the value of the union and the volume of the intersection of the two solids. Then
$$f(x) = \begin{cases} abc - x^3, & 0 \le x \le a,\\ abc + (x-a)x^2 - ax^2 = abc + x^3 - 2ax^2, & a \le x \le b,\\ x^3 + ab(c-x) - abx = abc + x^3 - 2abx, & b \le x \le c,\\ x^3 - abc, & x \ge c. \end{cases}$$
The function decreases for $0 \le x \le a$ and increases for $x \ge c$. For $b \le x \le c$,
$$f(x) - f(b) = (x - b)[x^2 + bx + b^2 - 2ab] = (x - b)[(x^2 - ab) + b(x - a) + b^2] \ge 0,$$
so that $f(x) \ge f(b)$. Hence, the minimum value of $f(x)$ must be assumed when $a \le x \le b$.
For $a \le x \le b$, $f'(x) = 3x^2 - 4ax = x(3x - 4a)$, so that $f(x)$ increases for $x \ge 4a/3$ and decreases for $x \le 4a/3$. When $b \le 4a/3$, then $f(x)$ is decreasing on the closed interval $[a, b]$ and assumes its minimum for $x = b$. If $b > 4a/3 > a$, then $f(x)$ increases on $[4a/3, b]$ and so achieves its minimum when $x = 4a/3$. Hence, the function $f(x)$ is minimized when $x = \min(b, 4a/3)$.
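The case analysis can be double-checked numerically; the Python sketch below (with arbitrary illustrative dimensions) evaluates $f$ on a fine grid and confirms that the minimizer is $x = \min(b, 4a/3)$ in both regimes.

```python
def f(x, a, b, c):
    # Volume of union minus volume of intersection of a cube of side x
    # and an a*b*c box, concentric with parallel faces, 0 < a < b < c.
    if x <= a:
        return a*b*c - x**3
    if x <= b:
        return a*b*c + x**3 - 2*a*x**2
    if x <= c:
        return a*b*c + x**3 - 2*a*b*x
    return x**3 - a*b*c

def argmin_on_grid(a, b, c, steps=200000):
    return min((c * i / steps for i in range(1, steps + 1)),
               key=lambda x: f(x, a, b, c))

for a, b, c in [(1.0, 1.2, 2.0),    # b <= 4a/3: minimum at x = b
                (1.0, 3.0, 4.0)]:   # b >  4a/3: minimum at x = 4a/3
    assert abs(argmin_on_grid(a, b, c) - min(b, 4*a/3)) < 1e-3
```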
The areas of the bases of a truncated pyramid are equal to $S_1$ and $S_2$ and the total area of the lateral surface is $S$. Prove that, if there is a plane parallel to each of the bases that partitions the truncated pyramid into two truncated pyramids within each of which a sphere can be inscribed, then
$$S = \left(\sqrt{S_1} + \sqrt{S_2}\right)\left(\sqrt[4]{S_1} + \sqrt[4]{S_2}\right)^2.$$
Solution 1.
Let $M_1$ be the larger base of the truncated pyramid with area $S_1$, and $M_2$ the smaller base with area $S_2$. Let $P_1$ be the entire pyramid with base $M_1$ of which the truncated pyramid is a part. Let $M_0$ be the base parallel to $M_1$ and $M_2$ described in the problem, and let its area be $S_0$. Let $P_0$ be the pyramid with base $M_0$ and $P_2$ the pyramid with base $M_2$.
The inscribed sphere bounded by M[0] and M[1] is determined by the condition that it touches M[1] and the lateral faces of the pyramid; thus, it is the inscribed sphere of the pyramid P[1] with base
M[1]; let its radius be R[1]. The inscribed sphere bounded by M[2] and M[0] is the inscribed sphere of the pyramid P[0] with base M[0]; let its radius be R[0]. Finally, let the inscribed sphere of
the pyramid with base M[2] have radius R[2].
Suppose Q[2] is the lateral area of pyramid P[2] and Q[1] the lateral area of pyramid P[1]. Thus, S = Q[1] - Q[2].
There is a dilation with factor R[0]/R[1] that takes pyramid P[1] to P[0]; since it takes the inscribed sphere of P[1] to that of P[0], it takes the base M[1] to M[0] and the base M[0] to M[2].
Hence, this dilation takes P[0] to P[2]. The dilation composed with itself takes P[1] to P[2]. Therefore
 R[0]/R[1] = R[2]/R[0] and Q[2]/Q[1] = S[2]/S[1] = R[2]^2/R[1]^2 .
Consider the volume of P[2]. Since P[2] is the union of pyramids of height R[2] and with bases the lateral faces of P[2] and M[2], its volume is (1/3)R[2](Q[2] + S[2]). However, we can find the
volume of P[2] another way. P[2] can be realized as the union of pyramids whose bases are its lateral faces and whose apexes are the centre of the inscribed sphere with radius R[0] with the removal
of the pyramid of base M[2] and apex at the centre of the same sphere. Thus, the volume is also equal to (1/3)R[0](Q[2] - S[2]).
 (Q[2] − S[2])/(Q[2] + S[2]) = R[2]/R[0] = √(R[2]/R[1]) = ⁴√S[2]/⁴√S[1]
 ⇒ Q[2]( ⁴√S[1] − ⁴√S[2] ) = S[2]( ⁴√S[1] + ⁴√S[2] ) ,
so that
 S = Q[1] − Q[2] = (Q[2]/S[2])(S[1] − S[2])
   = [ ( ⁴√S[1] + ⁴√S[2] )/( ⁴√S[1] − ⁴√S[2] ) ]( √S[1] − √S[2] )( √S[1] + √S[2] )
   = ( ⁴√S[1] + ⁴√S[2] )^2 ( √S[1] + √S[2] ) .
Solution 2. [S. En Lu] Consider an arbitrary truncated pyramid with bases A[1] and A[2] of respective areas s[1] and s[2], in which a sphere G of centre O is inscribed. Let the lateral area be s.
Suppose that C is a lateral face and that G touches A[1], A[2] and C in the respective points P[1], P[2] and Q.
C is a trapezoid with sides of lengths a[1] and a[2] incident with the respective bases A[1] and A[2]; let h[1] and h[2] be the respective lengths of the altitudes of triangles with apexes P[1] and
P[2] and bases bordering on C. By similarity (of A[1] and A[2]), a[1]/a[2] = h[1]/h[2], so that a[1]h[2] = a[2]h[1].
The plane that contains these altitudes passes through P[1]P[2] (a diameter of G) as well as Q, the point on C nearest to the centre of G. Since the height of C is h[1] + h[2] [why?], its area is

 (1/2)(a[1] + a[2])(h[1] + h[2]) = (1/2)[ a[1]h[1] + a[2]h[2] + a[1]h[2] + a[2]h[1] ]
 = (1/2)[ a[1]h[1] + a[2]h[2] + 2√(a[1]a[2]h[1]h[2]) ]
 = (1/2)[ a[1]h[1] + a[2]h[2] + 2√(a[1]h[1] · a[2]h[2]) ] .

Adding the corresponding equations over all the lateral faces C yields

 s = s[1] + s[2] + 2√(s[1]s[2]) = ( √s[1] + √s[2] )^2 .
With S[0] defined as in Solution 1, we have that S[1] / S[0] = S[0] / S[2], so that S[0] = Ö[(S[1]S[2])]. Using the results of the first paragraph applied to the truncated pyramids of bases (S[2], S
[0]) and (S[0], S[1]), we obtain that
 S = ( √S[1] + √S[0] )^2 + ( √S[0] + √S[2] )^2
   = ( √S[1] + ⁴√(S[1]S[2]) )^2 + ( ⁴√(S[1]S[2]) + √S[2] )^2
   = ( √S[1] + √S[2] )( ⁴√S[1] + ⁴√S[2] )^2 . | {"url":"http://cms.math.ca/Concours/MOCP/2002/sol_may","timestamp":"2014-04-16T19:08:46Z","content_type":null,"content_length":"71934","record_id":"<urn:uuid:13da69e1-567d-4dec-86a8-0f6205cee44e>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00201-ip-10-147-4-33.ec2.internal.warc.gz"}
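Both solutions close with the same algebraic identity. A numeric spot-check (an editorial addition, not part of the posted solutions) with S[0] = √(S[1]S[2]):

```python
import math
import random

# Check that, with S0 = sqrt(S1*S2),
#   (sqrt(S1) + sqrt(S0))^2 + (sqrt(S0) + sqrt(S2))^2
#     = (sqrt(S1) + sqrt(S2)) * (S1**0.25 + S2**0.25)^2
# for randomly chosen positive S1, S2.

random.seed(1)
for _ in range(100):
    S1, S2 = random.uniform(0.1, 50.0), random.uniform(0.1, 50.0)
    S0 = math.sqrt(S1 * S2)
    lhs = (math.sqrt(S1) + math.sqrt(S0)) ** 2 + (math.sqrt(S0) + math.sqrt(S2)) ** 2
    rhs = (math.sqrt(S1) + math.sqrt(S2)) * (S1 ** 0.25 + S2 ** 0.25) ** 2
    assert math.isclose(lhs, rhs, rel_tol=1e-12)
print("identity verified")
```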
Topological defects and controlling spatio-temporal chaos in extended hydrodynamical systems (Venue: MR3 CMS)
Meeting Room 3, CMS
Spatio-temporal chaos whose image is an ensemble of interacting topological defects arises in many physical systems where spatially periodic states break down: in Marangoni convection in a
layer heated from below, in the excitation of capillary ripples on the surface of a viscous layer (Faraday ripples), in the wake behind a streamlined cylinder, and in many other systems. When one
half-plane occupied by a periodic structure contains one more period than the other, we speak of a topological defect. Topological defects are a two-dimensional analog of boundary dislocations in
solids. The difference is that a boundary dislocation occurs in a solid when a semi-infinite layer of atoms is embedded in a perfect crystal, whereas in a periodic structure a defect arises when
there is an additional period.
common. We demonstrate that with increasing supercriticality (amplitude of layer oscillations for Faraday ripples, temperature difference for thermoconvection, etc.) the defects increase in number
and may form bound states, domain walls, and other structures. An increase of supercriticality in such systems leads to the transition from a regular state to spatio-temporal chaos. Controlling the
chaos of topological defects is therefore an important task. We study this problem using the example of chaos of topological defects in Faraday ripples. We show that the motion of topological
defects and the characteristics of the chaos can be controlled by means of pump-frequency modulation. | {"url":"http://www.newton.ac.uk/programmes/PFD/seminars/2005121415006.html","timestamp":"2014-04-21T14:43:17Z","content_type":null,"content_length":"5213","record_id":"<urn:uuid:5ed3f81e-d4a9-4ae4-913a-ea8ea2cee508>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00474-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: Looking for Unambiguous Non-LR(k) Grammar
Chris F Clark <cfc@world.std.com>
7 Nov 1998 01:29:14 -0500
From comp.compilers
From: Chris F Clark <cfc@world.std.com>
Newsgroups: comp.compilers
Date: 7 Nov 1998 01:29:14 -0500
Organization: The World Public Access UNIX, Brookline, MA
References: <Pine.BSI.3.91.981030111345.22214J-100000@ivan.iecc.com> 98-11-044
Keywords: parse
Zem Laski wrote (quoting others):
> > > Can anyone think of a context-free grammar that is unambiguous, and yet
> > > not part of LR(k)?
> > Sure. Let C be an LR(k) C grammar, and P be an LR(k) Pascal grammar (or
> > any other two languages you want):
> > program:: C "C"
> > | P "Pascal"
> Why isn't this grammar in LR(k)? If C is LR(k) and P is LR(k), then
> program is LR(k) too, unless FIRST(C) and FIRST(P) are not disjoint,
> in which case program is LR(k + n) for some finite n.
Actually, the "program" grammar may or may not be LR(k) depending on
the two grammars being merged. The question comes down to whether
there is some text that is both legal to the C and P parts of the
grammar that needs reduction and where only infinite lookahead will
determine which of the two reductions to apply.
An example of that is:
C: C1 C2
C1: "x"
C2: C2 "y" | "y"
P: P1 P2
P1: "x"
P2: P2 "y" | "y"
This problem grammar is not LR(k) for any finite k, since you can have
k y's after the x and before the C or Pascal which tells whether to
reduce the x to a C1 or a P1. The grammar is not ambiguous because
any complete sentence has only one parse (and it belongs to the simple
classes of RR(1) and RL(1), the right-to-left versions of LL and LR).
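A small illustration (an editorial addition, not part of the original post): classifying the leading "x" requires unbounded lookahead when reading left to right, but only one token when reading right to left.

```python
# Sentences of the example grammar are "x y...y C" or "x y...y Pascal".

def parse_left_to_right(tokens, k):
    """Try to classify the leading 'x' using only k tokens of lookahead."""
    lookahead = tokens[1:1 + k]          # what an LR(k) parser may inspect
    if "C" in lookahead:
        return "reduce x -> C1"
    if "Pascal" in lookahead:
        return "reduce x -> P1"
    return "undecided"                   # k or more y's exhaust the lookahead

def parse_right_to_left(tokens):
    """One token of lookahead from the right settles the choice."""
    return "reduce x -> C1" if tokens[-1] == "C" else "reduce x -> P1"

sentence = ["x"] + ["y"] * 5 + ["C"]     # x y y y y y C
print(parse_left_to_right(sentence, 3))  # undecided
print(parse_right_to_left(sentence))     # reduce x -> C1
```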
> I think a context-free grammar that is unambiguous but not LR(k) would
> be one where I would absolutely need to apply a derivation that is NOT
> rightmost at some point in a derivation of a sentence. But why would
> I have such a restriction? If I did, it seems I would no longer be
> dealing with a context-free grammar.
I believe the problem grammar at some point has a non-rightmost
derivation (the derivation step which expands C1|P1 depending on what
the last token is).
The only requirement for a grammar to be context free is that it have
only one non-terminal on the LHS.
Hope this helps,
Chris Clark Internet : cfc@world.std.com
Compiler Resources, Inc. CompuServe : 74252,1375
3 Proctor Street voice : (508) 435-5016
Hopkinton, MA 01748 USA fax : (508) 435-4847 (24 hours)
Web Site in Progress: Web Site : http://world.std.com/~compres
| {"url":"http://compilers.iecc.com/comparch/article/98-11-058","timestamp":"2014-04-19T19:53:49Z","content_type":null,"content_length":"6543","record_id":"<urn:uuid:fab7f05c-c88e-4ad2-802b-82461cb20142>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00406-ip-10-147-4-33.ec2.internal.warc.gz"}
Can someone help me step by step with finding derivatives?
October 8th 2008, 11:14 AM #1
Junior Member
Sep 2008
Can someone help me step by step with finding derivatives?
I missed my last math lecture and now im reallllly confused. And this hw is due tonight. The question is:
Suppose u(t)=w(t^2+5) and w'(6)=8. Find u'(1).
Honestly this seems like an entirely different language to me
October 8th 2008, 03:11 PM #2
$u(t) = w(t^2+5)$
take the derivative of both functions ... note the use of the chain rule for the composite function w
$u'(t) = w'(t^2+5) \cdot 2t$
$u'(1) = w'(6) \cdot 2(1)$
$u'(1) = 8 \cdot 2 = 16$ | {"url":"http://mathhelpforum.com/calculus/52656-can-someone-help-me-step-step-finding-derivatives.html","timestamp":"2014-04-17T04:24:02Z","content_type":null,"content_length":"33649","record_id":"<urn:uuid:91506d23-19da-4156-bef3-9d2f16ad3f15>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00253-ip-10-147-4-33.ec2.internal.warc.gz"}
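A quick numeric sanity check (an editorial addition, not part of the thread): pick any concrete w with w'(6) = 8 — here w(s) = 8s, chosen purely for illustration — and compare a finite-difference estimate of u'(1) with the chain-rule answer 16.

```python
# Sanity check of u'(1) = w'(6) * 2 = 16 with a concrete choice of w.
# w(s) = 8s is only an example; any w with w'(6) = 8 gives the same u'(1).

def w(s):
    return 8 * s            # so w'(s) = 8 everywhere, in particular w'(6) = 8

def u(t):
    return w(t ** 2 + 5)    # u(t) = w(t^2 + 5)

h = 1e-6
derivative_estimate = (u(1 + h) - u(1 - h)) / (2 * h)    # central difference
print(round(derivative_estimate, 3))                     # 16.0
```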
Natural Convection of Viscoelastic Fluid from a Cone Embedded in a Porous Medium with Viscous Dissipation
Mathematical Problems in Engineering
Volume 2013 (2013), Article ID 934712, 11 pages
Research Article
^1School of Mathematical Sciences, University of KwaZulu-Natal, Private Bag X01, Scottsville, Pietermaritzburg 3209, South Africa
^2Faculty of Military Science, Stellenbosch University, Private Bag X2, Saldanha 7395, South Africa
Received 11 March 2013; Accepted 9 September 2013
Academic Editor: Anders Eriksson
Copyright © 2013 Gilbert Makanda et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in
any medium, provided the original work is properly cited.
We study natural convection from a downward pointing cone in a viscoelastic fluid embedded in a porous medium. The fluid properties are numerically computed for different viscoelastic, porosity,
Prandtl and Eckert numbers. The governing partial differential equations are converted to a system of fourth order ordinary differential equations using the similarity transformations and then solved
together by using the successive linearization method (SLM). Many studies have been carried out on natural convection from a cone but they did not consider a cone embedded in a porous medium with
linear surface temperature. The results in this work are validated by the comparison with other authors.
1. Introduction
Natural convection of viscoelastic fluid in a porous medium with viscous dissipation is the transfer of heat due to density differences caused by temperature gradients through a permeable medium and
heat generated due to the interaction of fluid molecules is considered. There are examples in practical application such as thermal insulation, extraction of petroleum resources and the so-called
fracking, metal processing, performance of lubricants, application of paints, and extrusion of plastic sheets. The study of second grade fluids has been studied but there is no single constitutive
equation that can fully describe non-Newtonian fluids [1]; due to this fact many authors did not consider the appropriate constitutive energy equation for second grade fluids.
Natural convection on a cone geometry has been studied by among others Alim et al. [2], Awad et al. [3], Cheng [4, 5], and Kairi and Murthy [6]. Studies have been done on other geometries such as
flow over a flat plate, cylinders, vertical surfaces, stretching sheets, and inclined surfaces by, among others, Abbas et al. [7] who considered unsteady second grade fluid flow on an unsteady
stretching sheet; they did not consider the energy equation mainly due to difficulties in its characterization. Anwar et al. [8] studied mixed convection boundary layer flow of a viscoelastic fluid
over a horizontal circular cylinder; they solved the fourth order ordinary differential equations by considering the insufficiency of the boundary conditions by taking the zeroth, first, and second
order of the viscoelastic parameter and coming up with three systems of ordinary differential equations. Cortell [9] investigated flow and heat transfer of a viscoelastic fluid over a stretching
sheet. Damseh et al. [10] studied the transient mixed convection flow of a second grade viscoelastic fluid over a vertical surface. They used McCormack’s method to solve their differential equations.
Hayat et al. [11] studied mixed convection in a stagnation point flow adjacent to a vertical surface in a viscoelastic fluid.
The model in this work has been originally developed from the work of Ece [5] who studied heat and mass transfer from a downward pointing cone in a Newtonian fluid. In this paper the work of Ece [5]
is extended to take into account the flow of a second grade fluid in a porous medium and the effect of viscous dissipation is considered. Several other studies have been done in natural convection in
a viscoelastic fluid by among others Hsiao [12] who studied mixed convection for viscoelastic fluid past a porous wedge. Kasim et al. [13] investigated free convection boundary layer flow of a
viscoelastic fluid in the presence of heat generation. Massoudi et al. [14] studied natural convection flow of generalized second grade fluid between two vertical walls. Olajuwon [15] studied the
convection heat and mass transfer in a hydromagnetic flow of a second grade fluid in the presence of thermal radiation and thermal diffusion; it was shown that increasing the second grade parameter
causes reduction in the rate of the fluid flow and mass transfer, but heat transfer increases. Sajid et al. investigated fully developed mixed convection flow of a viscoelastic fluid between
permeable parallel vertical plates [16].
Studies for viscous dissipation in a second grade fluid have been done by many authors but some assumed that fluids are more viscous than elastic resulting in the energy equation without the elastic
term. Viscous dissipation has been studied by among others Subhas Abel et al. [17] who studied viscoelastic MHD flow and heat transfer over a stretching sheet with viscous and ohmic dissipations and
[18] in which a Newtonian fluid was considered. The viscous dissipation term which they used in [17] assumes that the fluid is more viscous in nature than elastic. Jha [19] investigated the effects
of viscous dissipation on natural convection flow between parallel plates with time periodic boundary conditions. Chen [20] studied the analytic solution of MHD flow and heat transfer for two types
of viscoelastic fluids over a stretching sheet with energy dissipation, internal heat source, and thermal radiation. Cortell [21] worked on viscous dissipation and thermal radiation effects on the
flow and heat transfer of a power law fluid past an infinite porous plate. Hsiao [22] investigated multimedia physical feature for unsteady MHD mixed convection viscoelastic fluid over a vertical
stretching sheet with viscous dissipation. Kameswaran et al. [23] studied hydromagnetic nanofluid flow due to a stretching sheet or shrinking sheet with viscous dissipation and chemical reaction
Studies have been done in porous media by among others Awad et al. [3, 4, 6, 24] and Singh and Agarwal [25] who studied heat transfer in a second grade fluid over an exponentially stretching sheet
through porous medium with thermal radiation and elastic deformation under the effect of magnetic field.
An investigation of available literature shows that, to the best of our knowledge, no analysis has been done on natural convection of a viscoelastic fluid embedded in a porous medium with viscous
dissipation under the given boundary conditions. The study takes into consideration a temperature that changes linearly along the surface of the cone (see Ece [5]).
2. Mathematical Formulation
A cone in a viscoelastic fluid embedded in a porous medium is heated and maintained at a linearly changing temperature (), and the ambient conditions are maintained at ; the fluid has a constant
viscosity . The vertex angle of the cone is . The velocity components and are in the directions of and , respectively, with the -axis being inclined at an angle to the vertical. A sketch of the
system and coordinate axis is illustrated in Figure 1.
The governing equations in this buoyant-driven flow are given by where , is the acceleration due to gravity, is the kinematic viscosity for the fluid, is the non-Newtonian parameter of the
viscoelastic fluid, is the coefficient of thermal expansion, is the thermal diffusivity, is the specific heat capacity for the fluid, is the density of the fluid, and is the permeability coefficient
of the porous medium. The boundary conditions are given as where is a constant, is the characteristic length, and the subscript refers to the ambient condition.
We introduce the nondimensional variables: where . Using (3) in (1) gives the following equations: where , is the viscoelastic parameter known as the Deborah number, is the Grashof number, is the
Prandtl number, and is the Eckert number. The corresponding boundary conditions are given as We now introduce the following similarity variables and defined by
Substituting (6) and the similarity variables in (4) gives the following ordinary differential equations: With boundary conditions, It is of interest to discuss the skin friction and the heat
transfer coefficient in this context. The shear stress at the surface of the cone is defined as (see Olajuwon [15]) where is the coefficient of viscosity. The skin friction is defined as The skin
friction coefficient can be expressed as The heat transfer rate at the surface of the cone is given by The Nusselt number can be expressed as Using the nondimensional variables (9)-(10), the
dimensionless wall heat rate is given by
3. Method of Solution
In this study, (7)–(10) were solved using the successive linearization method. The inclusion of the non-Newtonian term brings about the fourth order ordinary differential equation for the momentum
equation. The given boundary conditions are insufficient to obtain a unique solution. To overcome this problem the system is decomposed into the zeroth, first, and second order systems of the
viscoelastic parameter. Subhas Abel et al. [17] showed that if this method is applied small values of the viscoelastic parameter can be used without difficulty in convergence. It is also noticed in
this study that the direct application of the successive linearization method has difficulties in convergence for small values of the viscoelastic parameter. Anwar et al. [8] also confirmed the same
observation and solved a system of differential equations simultaneously and obtained better convergence for small values of the viscoelastic parameter. In this work we solve the system using the
successive linearization method. To solve the equations we seek the series solution of the form The skin friction can be computed using
We then substitute (17) into the system (7)–(10) and take the zeroth, first, and second orders of the viscoelastic parameter. We obtain the following system.
Zeroth order:
First order:
Second order: The functions in the system (19)–(30) may be expanded in series form as where , , and and , , and () are unknown functions and , , and and , , and are approximations that are found by
successively solving the linear part of equations that are obtained after substituting (31) into system (19)–(30). These linear equations have the form The coefficients , (), , and are defined as
Equations (32)–(43) must be solved simultaneously subject to certain initial approximations and . We choose these initial approximations so that they satisfy the given boundary conditions. In this
case suitable initial approximations are We note that when and () have been found, the approximate solutions and are obtained as where is the order of the SLM approximation. Equations (32) and (43)
can be solved by any numerical method. In this work the equations have been solved by the Chebyshev spectral collocation method. The method of solution is fully described in Awad et al. [3]. The
system of differential equations is solved simultaneously using the MATLAB SLM code.
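The paper does not reproduce its MATLAB code. For orientation only, the Chebyshev spectral collocation step it refers to amounts to replacing derivatives by a differentiation matrix acting on function values at the Gauss–Lobatto points; the sketch below uses the standard textbook formulas and is not the authors' implementation.

```python
import math

def cheb(N):
    """Chebyshev-Gauss-Lobatto points x_j = cos(pi*j/N) on [-1, 1] and the
    first-derivative collocation matrix D (standard formulas, with the
    diagonal computed by the negative-row-sum trick for stability)."""
    x = [math.cos(math.pi * j / N) for j in range(N + 1)]
    c = [2.0 if j in (0, N) else 1.0 for j in range(N + 1)]
    D = [[0.0] * (N + 1) for _ in range(N + 1)]
    for i in range(N + 1):
        for j in range(N + 1):
            if i != j:
                D[i][j] = (c[i] / c[j]) * (-1) ** (i + j) / (x[i] - x[j])
    for i in range(N + 1):
        D[i][i] = -sum(D[i][j] for j in range(N + 1) if j != i)
    return D, x

D, x = cheb(8)
# Spectral differentiation is exact (to round-off) for polynomials of degree <= N:
deriv = [sum(D[i][j] * x[j] ** 3 for j in range(9)) for i in range(9)]
max_err = max(abs(d - 3 * xi ** 2) for d, xi in zip(deriv, x))
print(max_err < 1e-10)   # True
```

In a boundary-value solver of the SLM type, each linearized ODE becomes a linear system in the unknown grid values, with rows of D (and its powers) enforcing the equation and boundary rows overwritten by the boundary conditions.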
3.1. Results and Discussion
The problem that is investigated in this study is the steady laminar flow and natural convection from a cone in a viscoelastic fluid in the presence of viscous dissipation in a porous medium. The
coupled nonlinear differential equations (7)–(10) were solved numerically using the successive linearisation method (SLM). In this section we discuss the effects of the viscoelastic parameter (),
porosity parameter (), Prandtl number (), and Eckert numbers () on both the velocity and temperature profiles.
In Table 1 the comparison between our results for the local skin friction and Nusselt numbers and those of Ece [5] who used the Thomas algorithm shows that our method gives satisfactory results, thus
confirming that the method is accurate.
To get a clear understanding of natural convection effects on the physics of the problem of a flow from a cone in a viscoelastic fluid with viscous dissipation, the investigation has been carried out
for different viscoelastic numbers , porosity parameter , the Eckert number , and the Prandtl number . The results for the skin friction and heat transfer coefficients are depicted in Tables 1 and 2.
In Table 2 the effect of increasing the viscoelastic parameter increases the skin friction coefficient and the opposite effect is noted on the Nusselt number in the presence of the porous medium and
viscous dissipation. Cortell [9] noted the same result. A faster increase is noted in the absence of the porous medium and the Eckert number. Increasing the porosity parameter reduces local skin
friction and the same trend is noted on the Nusselt number. Skin friction increases with increasing Eckert number and the opposite trend is noted on the Nusselt number. The skin friction decreases
with increasing Prandtl number, and the opposite trend is noted on the Nusselt number.
Figures 2–9 show the effects of various fluid properties on the velocity and temperature profiles.
Figure 2 shows that increasing the viscoelastic parameter increases the velocity across the boundary layer (see Butt et al. [24]).
Increasing the Prandtl number decreases the velocity profile in the boundary layer, as shown in Figure 3; this is because when the Prandtl number is increased, conduction is enhanced relative to
convection, implying lower molecular motion and causing the fluid velocity to decrease.
Figure 4 shows the variation of the porosity parameter with velocity profile for the linear surface temperature. Increasing porosity parameter reduces the velocity profile across the boundary layer.
The fluid particles move slower as the medium becomes less porous (see Singh and Agarwal [25]).
Figure 5 shows the variation of the Eckert number with velocity profile across the boundary layer. Increasing the Eckert number increases the velocity profile; this is caused by the increase in the
kinetic energy caused by viscous dissipation in the boundary layer which leads to a small temperature gradient.
Figure 6 shows the effect of increasing the viscoelastic parameter on the temperature profiles. Increasing the viscoelastic parameter increases the temperature profile.
Figure 7 depicts the variation of the Prandtl number with temperature profiles. Increasing the Prandtl number decreases the temperature profile; the thermal diffusivity becomes smaller than the
viscous diffusion rate, causing smaller temperature profiles.
Figure 8 shows the variation of the porosity parameter with the temperature profile. Increasing the porosity parameter increases the temperature profile; when the fluid moves much slower due to the
reduction in porosity heat transfer becomes more rapid.
In Figure 9 increasing the Eckert number increases the temperature profile; the heat produced due to viscous dissipation increases the temperature across the boundary layer.
Figure 10 shows the variation of the skin friction with the viscoelastic parameter at different values of the porosity parameter. Skin friction increases with increasing viscoelastic parameter and
increasing the porosity parameter reduces skin friction.
Figure 11 shows the variation of the Nusselt number with the viscoelastic parameter; increasing the viscoelastic parameter reduces Nusselt number and increasing the porosity parameter reduces the
Nusselt number.
Figure 12 shows the effect of increasing the Eckert number on the skin friction and viscoelastic parameter. Increasing viscoelastic parameter increases skin friction and increasing the Eckert number
increases the skin friction.
In Figure 13 the increase of viscoelastic parameter reduces the Nusselt number and increasing the Eckert number reduces the Nusselt number.
Figure 14 shows that generally increasing the viscoelastic parameter increases the skin friction and increasing the Prandtl number reduces skin friction.
In Figure 15 increasing the viscoelastic parameter reduces the Nusselt number and increasing the Prandtl number increases the Nusselt number.
4. Conclusion
This study presented an analysis of flow and heat transfer in natural convection of viscoelastic fluid from a cone embedded in a porous medium with viscous dissipation. The nonlinear coupled
governing equations were solved using the successive linearization method (SLM). The equations were first split into the zeroth, first, and second order of the viscoelastic parameter and solved
together under the linear surface boundary conditions. The velocity and temperature profiles together with local skin friction and local Nusselt numbers were presented and investigated. It was found
that increasing the viscoelastic parameter increased the skin friction, reduced the Nusselt number, and increased the velocity and temperature profiles. Increasing the porosity parameter decreased
the skin friction and Nusselt number and decreased the velocity profile and the opposite effect was noted in the temperature profile. Increasing the Eckert number increased both velocity and
temperature profiles and decreased the Nusselt number and the opposite was noted on the skin friction. The results compared well with those of Ece [5] in case when .
References
1. O. D. Makinde, “On thermal stability of a reactive third-grade fluid in a channel with convective cooling the walls,” Applied Mathematics and Computation, vol. 213, no. 1, pp. 170–176, 2009.
2. M. A. Alim, M. Alam, and M. K. Chowdhury, “Pressure work effect on natural convection flow from a vertical circular cone with suction and non-uniform surface temperature,” Journal of Mechanical
Engineering, vol. 36, pp. 6–11, 2006.
3. F. G. Awad, P. Sibanda, S. S. Motsa, and O. D. Makinde, “Convection from an inverted cone in a porous medium with cross-diffusion effects,” Computers & Mathematics with Applications, vol. 61,
no. 5, pp. 1431–1441, 2011.
4. C. Cheng, “Soret and Dufour effects on natural convection boundary layer flow over a vertical cone in a porous medium with constant wall heat and mass fluxes,” International Communications in
Heat and Mass Transfer, vol. 38, no. 1, pp. 44–48, 2011.
5. M. C. Ece, “Free convection flow about a cone under mixed thermal boundary conditions and a magnetic field,” Applied Mathematical Modelling, vol. 29, no. 11, pp. 1121–1134, 2005.
6. R. R. Kairi and P. V. S. N. Murthy, “Effect of viscous dissipation on natural convection heat and mass transfer from vertical cone in a non-Newtonian fluid saturated non-Darcy porous medium,”
Applied Mathematics and Computation, vol. 217, no. 20, pp. 8100–8114, 2011.
7. Z. Abbas, T. Hayat, and I. Pop, “Unsteady flow of a second grade fluid film over an unsteady stretching sheet,” Mathematical and Computer Modelling, vol. 48, no. 3-4, pp. 518–526, 2008.
8. I. Anwar, N. Amin, and I. Pop, “Mixed convection boundary layer flow of a viscoelastic fluid over a horizontal circular cylinder,” International Journal of Non-Linear Mechanics, vol. 43, no. 9,
pp. 814–821, 2008.
9. R. Cortell, “A note on flow and heat transfer of a viscoelastic fluid over a stretching sheet,” International Journal of Non-Linear Mechanics, vol. 41, no. 1, pp. 78–85, 2006.
10. R. A. Damseh, A. S. Shatnawi, A. J. Chamka, and H. M. Duwairi, “Transient mixed convection flow of a second grade visco-elastic fluid over a vertical surface,” Nonlinear Analysis: Modelling and
Control, vol. 13, no. 2, pp. 169–179, 2008.
11. T. Hayat, Z. Abbas, and I. Pop, “Mixed convection in the stagnation point flow adjacent to a vertical surface in a viscoelastic fluid,” International Journal of Heat and Mass Transfer, vol. 51,
no. 11-12, pp. 3200–3206, 2008.
12. K. Hsiao, “MHD mixed convection for viscoelastic fluid past a porous wedge,” International Journal of Non-Linear Mechanics, vol. 46, no. 1, pp. 1–8, 2011.
13. A. R. M. Kasim, M. A. Admon, and S. Shafie, “Free convection boundary layer flow of a viscoelastic fluid in the presence of heat generation,” World Academy of Science, Engineering and
Technology, vol. 75, pp. 492–499, 2011.
14. M. Massoudi, A. Vaidya, and R. Wulandana, “Natural convection flow of a generalized second grade fluid between two vertical walls,” Nonlinear Analysis: Real World Applications, vol. 9, no. 1,
pp. 80–93, 2008.
15. B. I. Olajuwon, “Convection heat and mass transfer in a hydromagnetic flow of a second grade fluid in the presence of thermal radiation and thermal diffusion,” International Communications in
Heat and Mass Transfer, vol. 38, no. 3, pp. 377–382, 2011.
16. M. Sajid, I. Pop, and T. Hayat, “Fully developed mixed convection flow of a viscoelastic fluid between permeable parallel vertical plates,” Computers & Mathematics with Applications, vol. 59,
no. 1, pp. 493–498, 2010.
17. M. Subhas Abel, E. Sanjayanand, and M. M. Nandeppanavar, “Viscoelastic MHD flow and heat transfer over a stretching sheet with viscous and ohmic dissipations,” Communications in Nonlinear
Science and Numerical Simulation, vol. 13, no. 9, pp. 1808–1821, 2008.
18. M. S. Abel, N. Mahesha, and J. Tawade, “Heat transfer in a liquid film over an unsteady stretching surface with viscous dissipation in presence of external magnetic field,” Applied Mathematical
Modelling, vol. 33, no. 8, pp. 3430–3441, 2009.
19. B. K. Jha and A. O. Ajibade, “Effect of viscous dissipation on natural convection flow between vertical parallel plates with time-periodic boundary conditions,” Communications in Nonlinear
Science and Numerical Simulation, vol. 17, no. 4, pp. 1576–1587, 2012.
20. C. Chen, “On the analytic solution of MHD flow and heat transfer for two types of viscoelastic fluid over a stretching sheet with energy dissipation, internal heat source and thermal
radiation,” International Journal of Heat and Mass Transfer, vol. 53, no. 19-20, pp. 4264–4273, 2010.
21. R. Cortell, “Suction, viscous dissipation and thermal radiation effects on the flow and heat transfer of a power-law fluid past an infinite porous plate,” Chemical Engineering Research and
Design, vol. 89, no. 1, pp. 85–93, 2011.
22. K. L. Hsiao, “Multimedia physical feature for unsteady MHD mixed convection viscoelastic fluid over a vertical stretching sheet with viscous dissipation,” International Journal of the Physical
Sciences, vol. 7, no. 17, pp. 2515–2524, 2012.
23. P. K. Kameswaran, M. Narayana, P. Sibanda, and P. V. S. N. Murthy, “Hydromagnetic nanofluid flow due to a stretching sheet or shrinking sheet with viscous dissipation and chemical reaction
effects,” International Journal of Heat and Mass Transfer, vol. 55, no. 25-26, pp. 7587–7595, 2012.
24. A. S. Butt, S. Munawar, A. Mehmood, and A. Ali, “Effect of viscoelasticity on entropy generation in a porous medium over a stretching plate,” World Applied Sciences Journal, vol. 17, no. 4,
pp. 516–523, 2012.
25. V. Singh and S. Agarwal, “Heat transfer in a second grade fluid over an exponentially stretching sheet through porous medium with thermal radiation and elastic deformation under the effect of
magnetic field,” International Journal of Applied Mathematics and Mechanics, vol. 8, no. 4, pp. 41–63, 2012. | {"url":"http://www.hindawi.com/journals/mpe/2013/934712/","timestamp":"2014-04-16T19:51:06Z","content_type":null,"content_length":"525081","record_id":"<urn:uuid:88a13083-18f9-40d1-a29c-47e703497510>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00210-ip-10-147-4-33.ec2.internal.warc.gz"}
Saint Davids, PA Math Tutor
Find a Saint Davids, PA Math Tutor
...I specialize in tutoring high school math and science as well as test preparation for the SAT and ACT with years of experience in the US and abroad. My approach to tutoring is to first
identify the area(s) where the student is struggling and then develop a personalized learning program based on ...
21 Subjects: including ACT Math, prealgebra, precalculus, SAT math
...His industrial career included technical presentations and workshops, throughout North America and Europe, to multinational companies, to NATO, and to trade delegations from China and Russia.
In particular, he was proud to be part of the NASA Space Shuttle program and the development of new-generation jet engines by the General Electric Company. Dr.
10 Subjects: including algebra 1, algebra 2, calculus, prealgebra
...Students I tutor are mostly college-age, but range from middle school to adult. As a tutor with multiple years of experience tutoring people in precalculus- and calculus-level courses,
tutoring precalculus is one of my main focuses. With a physics and engineering background, I encounter math at and above this level every day.
9 Subjects: including algebra 1, algebra 2, calculus, geometry
...Successful players will have patience, discipline, and a solid understanding of math and statistics. I have played poker in "home games" and at casinos for 10 years. The game of water polo
combines elements of swimming, wrestling, and soccer for an endurance-heavy explosive Olympic sport.
21 Subjects: including calculus, chemistry, swimming, world history
...I have taken three courses in macroeconomics, as well as two courses in statistics. I have taught probability and statistics on multiple occasions at the university level. I also passed the
WyzAnt test for economics with flying colors.
18 Subjects: including algebra 1, algebra 2, American history, chemistry
Question regarding lb of N / 1000 using liquid fert [Archive] - LawnSite.com™ - Lawn Care & Landscaping Business Forum, Discuss News & Reviews
05-28-2004, 10:52 PM
I know I'm going to look like a fool to some of you, but for the life of me part of my brain can't work this out.
Here's what I'm going to use.
Green Flo 30-0-0 at a rate of 1 lb of N / 1000 sq ft.
Running it in my PG Ultra, so covering 32-34,000 sq ft / 8 gal.
How much Green Flo should I use???
I've been trying to pull up a label on Lesco's site, but it's not showing me, just the MSDS. That doesn't really help me out with this question.
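For what it's worth, the pounds-of-product arithmetic can be sketched without the label. Assumptions flagged: the first number in a 30-0-0 analysis is percent nitrogen by weight (standard N-P-K labeling), and one tank covers 32,000 sq ft as in the post; a liquid label may instead quote lb of N per gallon, which would change the last step.

```python
# How much 30-0-0 product delivers 1 lb of N per 1,000 sq ft?
# Assumes the analysis number is percent nitrogen BY WEIGHT (standard labeling).

def product_lbs_needed(n_per_1000, area_sq_ft, n_fraction):
    """Pounds of fertilizer product needed for the given area."""
    total_n = n_per_1000 * area_sq_ft / 1000.0   # lb of actual nitrogen
    return total_n / n_fraction                  # lb of 30%-N product

per_1000 = product_lbs_needed(1.0, 1000, 0.30)
per_tank = product_lbs_needed(1.0, 32000, 0.30)  # one 8-gal tank's coverage
print(round(per_1000, 2), round(per_tank, 1))    # 3.33 lb/1000 sq ft, 106.7 lb/tank
```

At roughly 107 lb of product per 8 gallons of spray, that rate almost certainly cannot be mixed in one pass, which is the kind of sanity check the arithmetic buys before filling the tank.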
class (Eq a, Num a) => Bits a where
The Bits class defines bitwise operations over integral types.
• Bits are numbered from 0 with bit 0 being the least significant bit.
Minimal complete definition: .&., .|., xor, complement, (shift or (shiftL and shiftR)), (rotate or (rotateL and rotateR)), bitSize and isSigned.
Bits Int
Bits Int8
Bits Int16
Bits Int32
Bits Int64
Bits Integer
Bits Word
Bits Word8
Bits Word16
Bits Word32
Bits Word64
(Typed a, Bits a) => Bits (Stream a)
st: How the option "rhs" of -ovtest- works?
st: How the option "rhs" of -ovtest- works?
From <huihui.pku@gmail.com>
To Statalist <statalist@hsphsun2.harvard.edu>
Subject st: How the option "rhs" of -ovtest- works?
Date Mon, 28 Nov 2005 19:07:31 +0800
In Stata 8's help of "ovtest", it says that the option "rhs" specifies
that powers of the right-hand-side (explanatory) variables are to be
used in the test rather than powers of the fitted value.
But I want to know what powers of the explanatory variables are used
in the test?
I have run an experiment.
If I use 2 regressors in the model, the test will show that it used 6
power-variables in the test regression and the result is
If I use 3 regressors in the model, the test will show that it used 9
power-variables in the test regression and the result is
and if I use 8 regressors in the model, the test gives the result
I don't have the Stata manual, so I can't read the relevant parts in the books.
Thank you~
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
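For what it's worth, the counts the poster reports — 6 added test variables for 2 regressors, 9 for 3 — are consistent with the test regression being augmented with the squares, cubes, and fourth powers of each right-hand-side variable (three terms per regressor, with any collinear terms dropped). A hedged sketch of that construction, as an illustration of the idea rather than Stata's actual code:

```python
# RESET-style "rhs" augmentation: powers 2..4 of every regressor.
def reset_rhs_terms(X):
    """X: list of regressor columns. Returns the added power-term columns."""
    added = []
    for col in X:
        for p in (2, 3, 4):            # squares, cubes, fourth powers
            added.append([v ** p for v in col])
    return added

two_regressors = [[1.0, 2.0, 3.0], [0.5, 1.5, 2.5]]
print(len(reset_rhs_terms(two_regressors)))          # 6, matching the post
print(len(reset_rhs_terms([[1.0], [2.0], [3.0]])))   # 9 for three regressors
```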
Exponential Functions
This chapter deals with radicals and exponential functions--functions that contain variable exponents. Here, the reader will review the meanings of negative and fractional exponents, learn how to
solve equations containing radicals, and learn how to evaluate and graph exponential functions.
The first section reviews negative and fractional exponents. It explains how to evaluate expressions containing negative and fractional exponents. This material is also covered in Negative Exponents
and Fractional Exponents.
The next section deals with equations containing radicals. Solving radical equations is similar to solving ordinary equations using inverse operations, but with two key differences--taking the
inverse of a square leads to multiple solutions, and taking the inverse of a square root can lead to false solutions. This section explains how to find both solutions to an equation containing a
square, and how to recognize and eliminate the false solutions to an equation containing a square root.
The final section introduces exponential functions. It explains how to graph an exponential function and how to find the domain and range of an exponential function. It also addresses translations,
stretches, shrinks, reflections, and rotations of exponential functions.
Exponential functions are one of the many types of functions that mathematicians study. They are useful because they describe many real-world situations, including those in economics and in physics.
In addition, they are interesting from a mathematical perspective because they employ the variable in an unusual way. While rational and polynomial functions multiply the variable by itself a fixed
number of times, exponential functions vary the number of times a constant is multiplied by itself. | {"url":"http://www.sparknotes.com/math/algebra2/exponentialfunctions/summary.html","timestamp":"2014-04-21T07:15:19Z","content_type":null,"content_length":"51383","record_id":"<urn:uuid:71ec0482-f3f9-4db0-a6f7-db64b897bdb6>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00086-ip-10-147-4-33.ec2.internal.warc.gz"} |
Green, CA Science Tutor
Find a Green, CA Science Tutor
...During my Master's and PhD I also taught colleagues how to do the same, and through my help they achieved success. I have taken classes in discrete mathematics and computer logic, which have
much overlap with mathematical logic. I understand how to construct and use truth tables in order to find the answer to logic and false logic puzzles.
44 Subjects: including biology, biochemistry, ACT Science, physical science
...I've been a tutor at Caltech in a few advanced physics and math courses and used to tutor a lot in high school. I'm looking to attend graduate school in the near future in experimental
condensed matter physics. I love teaching math and physics and am perfectly happy to deviate from the standard material to introduce the sort of things that made me love math and science in high school.
26 Subjects: including physics, ACT Science, physical science, geometry
...Algebra is the foundation for all higher level mathematics courses. It is the study of a set of rules on how to manipulate mathematical expressions and solve them efficiently. A list of the
most common topics in introductory Algebra includes: Functions and Patterns, Linear Equations, Linear Ine...
22 Subjects: including ACT Science, astronomy, physics, physical science
I believe that the best way to learn something is to apply it. I have a true passion for science and find it really cool how one can use the material and apply it to surrounding issues. In
addition to my degrees in biology, I have tutored and led science activities for various age levels in after...
5 Subjects: including biology, chemistry, biochemistry, anatomy
...Having grown up in the theatre, I know this world inside out and have a great deal of knowledge to share. I've been practicing yoga for 10 years. I have taught beginning as well as advanced
students in Vinyasa flow.
29 Subjects: including nutrition, English, reading, ESL/ESOL
Time, Money, and Morality (and p-Hacking?)
A paper that is in press in Psychological Science tests the hypothesis that priming someone with the concept of time makes them cheat less than someone who is not thusly primed. Or, as the authors articulate the idea in the abstract:
I've said a lot already about the type of theorizing and experimenting in this type of priming research, so I just want to keep it simple this time and concentrate on something that is currently
under fire in the literature, even
on the pages of Psychological Science itself
, the p-value.
As the abstract indicates, there are four experiments. In each experiment, the key prediction is that exposure to the "time-prime" causes people to cheat less. Each prediction is evaluated on the
basis of a p-value. In Experiment 1, the prediction was that subjects would cheat less in the "time-prime" condition than in the control condition. (There also was a money-prime condition, but this
was not germane to the key hypothesis.) I've highlighted the key result.
In Experiment 2 the key hypothesis was that if priming time decreases cheating by making people reflect on who they are, cheating behavior in the latter condition would not differ between
participants primed with money and those primed with time. However, participants who were told that the game was a test of intelligence would show the same effect observed in Experiment 1. So the
authors predicted an interaction between reflection (reflection vs. no reflection) and type of prime (time vs. money). Here are the results.
In Experiment 3 the authors manipulated self-reflection in a literal way: subjects were or were not seated in front of a mirror and this was crossed with prime condition (money vs. time). Again, the
key prediction involved an interaction.
Finally, in Experiment 4 the three priming conditions of Experiment 1 were used (money, time, control), which produced the following results.
So we have four experiments, each with their key prediction supported by a p-value between .04 and .05. How likely are these results?
This question can be answered with a method developed by
Simonsohn, Simmons, and Nelson (in press)
. To quote from the abstract:
Because scientists tend to report only studies (publication bias) or analyses (p-hacking) that “work”, readers must ask, “Are these effects true, or do they merely reflect selective reporting?” We
introduce p-curve as a way to answer this question. P-curve is the distribution of statistically significant p-values for a set of studies (ps < .05).
Simonsohn and colleagues have developed a
web app
that makes it very easy to compute p-curves. I used that app to compute the p-curve for the four experiments, using the p-values for the key hypotheses.
So if I did everything correctly, the app concludes that the experiments in this study had no evidential value and were intensely p-hacked.
It is somewhat ironic that the second author of the Psych Science paper and the first author of the p-curve paper are at the same institution. This is illustrative of the current state of
methodological flux that our field is in: radically different views of what constitutes evidence co-exist in institutions and journals (e.g., Psychological Science).
10 comments:
1. Here is your answer: "no matter how one chooses the [N and the true effect size] under the alternatives, at most 3.7% of the p values will fall in the interval (.04; .05)". http://
Having four of those in a row is pretty unlikely!
1. Thanks, this is a good read.
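The 3.7% figure from the comment above can be reproduced directly for a two-sided z-test: a p-value lands in (.04, .05) exactly when |z| lands in (1.960, 2.054), and the probability of that is maximized when the true mean shift sits in the middle of that window. A sketch under the normal approximation (assumptions mine):

```python
from statistics import NormalDist

N = NormalDist()
lo = N.inv_cdf(1 - 0.05 / 2)   # |z| boundary for p = .05  (about 1.960)
hi = N.inv_cdf(1 - 0.04 / 2)   # |z| boundary for p = .04  (about 2.054)

def prob_p_in_window(mu):
    """P(.04 < p < .05) for a two-sided z-test with true mean shift mu."""
    upper = N.cdf(hi - mu) - N.cdf(lo - mu)     # z in (lo, hi)
    lower = N.cdf(-lo - mu) - N.cdf(-hi - mu)   # z in (-hi, -lo)
    return upper + lower

best = max(prob_p_in_window(mu / 100) for mu in range(0, 500))
print(round(best, 3))   # about 0.037 -- and 0.037**4 is roughly 2e-6
```

By symmetry only nonnegative shifts need to be searched. Under the null the window probability is exactly .01, and no alternative pushes it much past .037 — so four independent results all landing there is roughly a two-in-a-million event even at the most favorable effect size.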
2. A Bayes factor analysis shows that these kind of p-values (close to the .05 boundary) have almost no evidential impact. This goes back to Edwards, Lindman, & Savage, 1963 Psych Review, and has
recently been demonstrated again by Jim Berger, and, in 2013, by Valen Johnson ("Revised Standards for Statistical Evidence"). Johnson ends up recommending an alpha-level of .005. As Lindley
remarked: “There is therefore a serious and systematic difference between the Bayesian and Fisherian calculations, in the sense that a Fisherian approach much more easily casts doubt on the null
value than does Bayes. Perhaps this is why significance tests are so popular with scientists: they make effects appear so easily.”
1. Very important points that have certainly changed my outlook on things. I wonder how hard it would be to p-hack your way to a p-value of <.005.
3. It seems to me that, to paraphrase the British politician Peter Mandelson, social and positive psychologists are "intensely relaxed" about the possibility of Type 1 error. In fact, I suspect that
many of them don't sincerely consider Type 1 error to be, as the kids on the Internet say, "a thing". I found it, I published it, nobody has taken the time and effort to jump through the many
hoops (some of them flaming) needed to refute it, therefore I win.
I think that the people who pay for all this (i.e., the taxpayers in most cases) would be appalled to discover just how little understanding very many scientists have of the appropriate use of
the most basic tools of their trade. Perhaps this applies "especially" to psychologists when it comes to p-hacking, although abjectly bad statistical practice seems to be common in almost every
4. I agree that this paper appears to have the hallmarks of p-hacking. But I think we need some caution if we want to engage in post-hoc p-hackery analyses. It's one thing to state a priori "I think a
set of studies that have this feature may show p-hackery" versus looking at the p values first and then post-hoc look for evidence of p-hackery. Perhaps in the near future researchers interested
in p-hacking will develop post-hoc corrections for p-hack investigations.
1. I agree Chris. I'm sensitive to this issue as we had to deal with it when I served on the Smeesters Committee. In this particular case, others had expressed skepticism about this study on
Twitter, which I shared when I read the paper. I took a closer look and then I noticed the issue with the p-values. So here there was an a priori hypothesis, so to speak.
5. I think the issue is that the p-values are the clues to p-hacking. Basically, I think a collection of studies with p values just below .05, fluctuating ns without explanation, and weird effect
sizes (i.e., large relative to expectations) are clues to p-hacking. I hate seeing packages dotted with p values around .04.
The solution for p-hacking is fairly simple. Run the studies again under the same conditions (preferably with larger samples to get more precise estimates). If the results hold, the field has
increased confidence in the sturdiness of the findings. If the results don't duplicate, we learn another painful lesson about the impact of chance and the downsides of QRPs.
1. It would indeed be best to run replications. However, as someone mentioned to me in an email yesterday, you cannot possibly refute questionable studies given the rate at which they are
published. Experiment 4, however, was run on MTurk and so would be a good candidate for a replication. No worries about special booths or experimenters, etc.
6. I think it is good that this kind of analysis is being performed and shared in a public place. I wanted to consider some details of the analysis and an alternative approach.
Rolf focused on an effect that was repeatedly found across four experiments in Gino and Mogilner (2013): that participants were less likely to cheat when focused on time compared to participants
in a control or a money-focused condition.
These are not the only reasonable choices. Gino and Mogilner (2013) also explored the effect of a money-focus for a variety of main effects and contrasts. The p-values that are produced by these
different hypothesis tests are not independent of the p-values analyzed by Rolf, and such dependencies mean that it is not appropriate to include them all in the p-curve analysis. Table 1 (http:/
/www1.psych.purdue.edu/~gfrancis/Publications/TimeMoneyMorality/Table1.pdf ) highlights the statistics used for different analyses. The p-curve analysis for the money effect does not indicate
p-hacking (p=0.76). Details of the analysis are in Figure 1 (http://www1.psych.purdue.edu/~gfrancis/Publications/TimeMoneyMorality/Figure1.pdf ) These two different conclusions are not in
conflict because the statistics measure different effects. Nevertheless, concluding p-hacking from the p-curve analysis depends on which statistics are analyzed. Importantly, the p-curve analysis
cannot consider both sets of statistics simultaneously because of the dependencies.
Table 2 (http://www1.psych.purdue.edu/~gfrancis/Publications/TimeMoneyMorality/Table2.pdf ) shows the post hoc power for each experiment. Consider the column for the time-focused statistics. The
power estimates are all just above one half. The Test for Excess Significance (TES) notes that the probability of all four experiments like these rejecting the null hypothesis is the product of
the power values. The final row indicates that this probability is 0.079. This probability can be considered an estimate of the probability that a direct replication of the four experiments (with
the same sample sizes) would all produce statistically significant outcomes. Since this probability is less than the 0.1 criterion that is commonly used for these kinds of analyses, readers
should be skeptical that the reported results were produced with proper experiments and analyses.
The money-focused power values are higher, and their product is well above the 0.1 criterion. In this respect, the TES analysis gives essentially the same conclusions as the p-curve analysis.
The final column in Table 2 reports a more general TES analysis that considers the money-focused, the time-focused, and additional statistical results (highlighted in yellow in Table 1) that
were deemed by Gino and Mogilner (2013) as providing support for their theoretical ideas. The success probability for the full set was estimated with simulated experiments that used the
properties of the reported sample statistics. The 0.003 probability is so small that it is difficult to suppose that the experiments were fully reported, properly run, and properly analyzed.
This result does not mean that there is no merit to the reported results, but it means that readers should be skeptical about the theoretical conclusions that are derived from the reported
results. Moreover, it is not obvious which effects can be believed and which are suspect.
Unlike the p-curve analysis, the TES can consider the full set of experimental results used by Gino and Mogilner (2013) to support their theoretical ideas. Applying this more general approach
leads to a pretty convincing conclusion that readers should doubt the validity of the relationship between the experimental data and the theoretical claims.
A spreadsheet describing the effect size and power estimates, along with R code for estimating power, can be downloaded from
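The TES arithmetic in this comment can be sketched as follows: the post-hoc power of a two-sided test, evaluated at the observed effect, is a function of the reported p-value alone, and the product of the four powers estimates the chance that four exact replications would all reach significance. Using illustrative p-values in the .04–.05 range the post describes (the exact values would come from the paper):

```python
from statistics import NormalDist

N = NormalDist()
z_crit = N.inv_cdf(0.975)            # 1.960 for alpha = .05, two-sided

def posthoc_power(p):
    """Power at the observed effect size, from the observed two-sided p."""
    z_obs = N.inv_cdf(1 - p / 2)
    return (1 - N.cdf(z_crit - z_obs)) + N.cdf(-z_crit - z_obs)

p_values = [0.04, 0.045, 0.048, 0.042]   # illustrative, in the reported range
product = 1.0
for p in p_values:
    product *= posthoc_power(p)          # chance all four replications "work"
print(round(product, 3))                 # a bit under 0.1, echoing the 0.079
```

A p-value of exactly .05 yields post-hoc power of essentially one half, which is why four results just under the threshold multiply out to a success probability below the 0.1 criterion.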
probability about the ball drawing game
April 9th 2011, 07:31 AM
probability about the ball drawing game
Hi All,
Here is a question about a probability calculation in a ball-drawing game where I need your help.
In the game, there are n balls provided, including m black balls and n-m white balls. We are required to draw one ball after another until we get a black one. The problem is to calculate the probability of winning the game at the k-th drawing (i.e., drawing only white balls from the 1st to the (k-1)-th draw, and a black one at the k-th).
Here is my understanding so far:
Let Pr(n,m,k) represent the probability that we draw the first black ball at the k-th time, given m black balls and n-m white balls.
1) for k=1, certainly Pr(n,m,k)=m/n;
2) for k>n-m, certainly Pr(n,m,k)=1, because there are up to n-m white balls and there must be at least 1 black balls included if we draw more than n-m balls;
3) **Just where I got confused**
for 1<k<=n-m:
at the k-th time, only n-k+1 balls left, then the probability of drawing a black one is m/(n-k+1);
however, this is on the condition that at the (k-1)-th time we didn't draw a black with probability of 1-Pr(n,m,k-1);
so, I consider it as a conditional probability as Pr(n,m,k)=m/(n-k+1)/(1-Pr(n,m,k-1)).
Using n=5, m=2 as a test, I found that Pr(n,m,3)>1, which means the expression must be wrong.
Can anyone help figure it out?
Thanks in advance for your reply!
Best Regards,
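The step that trips things up is 3): the factor m/(n−k+1) is the probability of a black on draw k given that the first k−1 draws were all white, so it should be multiplied by the probability of that conditioning event — a product of white-draw factors, not 1 − Pr(n,m,k−1). (And for k > n−m+1 the probability is 0, not 1, since with only n−m white balls the first black must occur by draw n−m+1.) A sketch with exact fractions, using the n = 5, m = 2 case from the post:

```python
from fractions import Fraction

def first_black_at(n, m, k):
    """P(first black ball on draw k): m black among n balls, no replacement."""
    p = Fraction(1)
    for i in range(k - 1):             # draws 1 .. k-1 must all be white
        p *= Fraction(n - m - i, n - i)
    p *= Fraction(m, n - (k - 1))      # draw k is black
    return p

n, m = 5, 2
dist = [first_black_at(n, m, k) for k in range(1, n - m + 2)]
print([str(f) for f in dist])   # ['2/5', '3/10', '1/5', '1/10']
print(sum(dist))                # 1 -- a full probability distribution
```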
First-Order Logical Duality
Posted by David Corfield
This is the title of the PhD thesis in which Henrik Forssell presents
…an extension of Stone Duality for Boolean Algebras from classical propositional logic to classical first-order logic. The leading idea is, in broad strokes, to take the traditional logical
distinction between syntax and semantics and analyze it in terms of the classical mathematical distinction between algebra and geometry, with syntax corresponding to algebra and semantics to
Extending the duality between Boolean algebras and Stone spaces, Forssell derives a duality between Boolean coherent categories and topological groupoids. A Boolean coherent category corresponding to
a first-order theory has as dual the topological groupoid of its models and isomorphisms.
Analogously with the Makkai duality, we consider the groupoid of models and isomorphisms of a theory. Analogously with Stone duality, we use topological structure to equip the models and model
isomorphisms of a theory with sufficient structure to recover the theory from them. The result is a first order logical duality which, in comparison to Makkai’s, is more geometrical, in that it
uses topology and sheaves on spaces and topological groupoids rather than ultraproducts, and that moreover specializes to the traditional Stone duality. (p. 17)
The dualizing object (did we give up on ambimorphic?) has shifted from $2$ to $Sets$. Next stop $Groupoids$.
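The extension being described lines up as a two-row dictionary (my paraphrase of the introduction quoted above):

```latex
\begin{array}{l|l|l}
\text{logic} & \text{syntax (algebra)} & \text{semantics (geometry)} \\
\hline
\text{propositional} & \text{Boolean algebras} & \text{Stone spaces of models, dualizing object } 2 \\
\text{first-order} & \text{Boolean coherent categories} & \text{topological groupoids of models, dualizing object } \mathbf{Sets}
\end{array}
```

In each row the semantic side is obtained by mapping into the dualizing object — two truth values in the propositional case, $Sets$ in the first-order case — which is why the closing line above can half-jokingly propose $Groupoids$ as the next rung.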
Posted at March 27, 2009 10:15 AM UTC
Re: First-Order Logical Duality
So, Jim Dolan has spent many years telling us about the duality between first-order logic and groupoids: a theory in first order logic has a groupoid of models, and any groupoid arises as the models
of an essentially unique theory. Todd Trimble made this very precise and wrote up a portion here:
Most of this concerns the special case of groups, but his last sentence gives the generalization:
For more general (let us say finite) polyadic Boolean algebras $T$, the category $T$-$Mod$ is a groupoid. This is in part due to the classical (Boolean) nature of these hyperdoctrines, and may be related to the fact that a presheaf topos $Set^C$ is a Boolean topos iff $C$ is a groupoid.
From your quote above, it sounds that Makkai has done related work.
So, it would be very nice to 1) let Henrik Forssell know about Todd’s guest posts, 2) understand what Forssell has done.
As for 1): do you know Forssell? Maybe you can send him an email about this blog entry.
As for 2):
I’m really curious what’s the intuitive meaning of a topological groupoid of models. First, a point of terminology: is this being used to mean a groupoid internal to $Top$ (where we have a topology
on the space of objects and on the space of morphisms) or a groupoid enriched in $Top$ (where we only have a topology on each hom-set)? For no particular reason I bet he means the first. In the first
case: what does it mean, intuitively, for two models to be close to each other? In either case: what does it mean for two maps between models to be close to each other?
How, if at all, is this stuff related to the Joyal–Tierney theorem showing that Grothendieck topoi are all categories of sheaves on localic groupoids?
• A. Joyal and M. Tierney, An extension of the Galois theory of Grothendieck, Memoirs of the A.M.S. 309 (1984).
‘Localic groupoids’ are a lot like topological groupoids, since the concept of locale is just an improved version of the concept of topological space, with better properties.
A localic groupoid is a groupoid internal to the category of locales.
It’s a bit hard for me to imagine a result in heavy-duty categorical logic that would work better for topological groupoids than localic groupoids!
Posted by: John Baez on March 27, 2009 5:27 PM
Re: First-Order Logical Duality
I guess we shouldn’t be too surprised at the topological part of ‘topological groupoids appearing here in view of the syntax-semantics duality in the propositional case being captured by Stone
Topological groupoids as groupoids in Top are defined on p. 19.
The ‘logical’ topology on a space of models of a first-order theory is defined on p. 42.
A topology is also defined on the groupoid of sets and functions, allowing it to play the role of dualizing object.
Posted by: David Corfield on March 30, 2009 9:05 AM
Re: First-Order Logical Duality
John asks
How, if at all, is this stuff related to the Joyal-Tierney theorem showing that Grothendieck topoi are all categories of sheaves on localic groupoids?
Forssell writes on p. 23
The idea of the current approach is thus that in the first-order logical case, topoi of sheaves can play the role that is played by frames of open sets in the propositional case. This idea springs from the view of topoi as generalized spaces and the representation theorem of Joyal and Tierney ([11]) to the effect that any topos can be represented as equivariant sheaves on a localic groupoid.
Butz and Moerdijk show that if the topos $\mathcal{E}$ has enough points, in the sense that the collection of inverse image functors for geometric morphisms $Sets \to \mathcal{E}$ are jointly
faithful, then $\mathcal{E}$ has a representation in terms of a topological groupoid, i.e. there is a topological groupoid $\mathbb{G}$ such that $\mathcal{E} \simeq Sh (\mathbb{G})$.
Posted by: David Corfield on March 30, 2009 10:57 AM | Permalink | Reply to this
Re: First-Order Logical Duality
Thanks! Okay, so maybe ‘topological groupoid’ is serving as a stand-in for ‘localic groupoid with enough points’.
So, do you think this is roughly correct: he’s studying a class of topoi that are equivalent to topoi of sheaves on topological groupoids?
Posted by: John Baez on March 30, 2009 8:39 PM | Permalink | Reply to this
Re: First-Order Logical Duality
Here’s a handy description from pp. 38-39:
Recall that we call a coherent theory decidable if for each sort there is a predicate $eq$ satisfying axioms of inequality (see p. 28). The goal of the current section is the representation of such a theory $\mathbb{T}$ in terms of its semantical groupoid $G_{\mathbb{T}} \rightrightarrows X_{\mathbb{T}}$ of models and isomorphisms in Theorem 2.3.4.14, stating that the topos of sheaves for the coherent coverage on $\mathcal{C}_{\mathbb{T}}$ [the syntactic category associated with $\mathbb{T}$] is equivalent to the topos of equivariant sheaves on $G_{\mathbb{T}} \rightrightarrows X_{\mathbb{T}}$,
$Sh(\mathcal{C}_{\mathbb{T}}) \simeq Sh_{G_{\mathbb{T}}}(X_{\mathbb{T}})$
The overall form of the argument resembles that sketched in Section 1.2.1 for the representation of a Boolean algebra in terms of its space of models: Generalizing the Stone Representation Theorem, we embed $\mathcal{C}_{\mathbb{T}}$ in the topos of sets over the set of models, $Sets/X_{\mathbb{T}}$, in Lemma 2.3.2.1. Proceeding to introduce a logical topology on the set $X_{\mathbb{T}}$ of models and then to introduce $\mathbb{T}$-model isomorphisms for additional structure, we show how the embedding of $\mathcal{C}_{\mathbb{T}}$ into $Sets/X_{\mathbb{T}}$ factors through, first, the topos $Sh(X_{\mathbb{T}})$ of sheaves on the space $X_{\mathbb{T}}$ (Proposition 2.3.2.7) and, finally, the topos $Sh_{G_{\mathbb{T}}}(X_{\mathbb{T}})$ of equivariant sheaves on the topological groupoid $G_{\mathbb{T}} \rightrightarrows X_{\mathbb{T}}$ (Lemma 2.3.4.7). Showing that the image of the embedding generates $Sh_{G_{\mathbb{T}}}(X_{\mathbb{T}})$ in Lemma 2.3.4.13, we are then in a position to conclude that the embedding lifts to an equivalence $Sh(\mathcal{C}_{\mathbb{T}}) \simeq Sh_{G_{\mathbb{T}}}(X_{\mathbb{T}})$ in Theorem 2.3.4.14.
Posted by: David Corfield on March 31, 2009 10:04 AM | Permalink | Reply to this
Re: First-Order Logical Duality
This is neat. So a Boolean coherent category (associated with a first order logic) is dual to a topological groupoid of its models and isomorphisms. There is also a deep duality result for first
order logic involving profinite mathematics [1].
In terms of category theory or the theory of pretopoi or topoi, how would one best describe this first order logic:
With linear order, a first order logic with a least fixed point operator?
The reason I am asking this question is because such a logic gives the complexity class P. This is an example of descriptive complexity theory which is a branch of finite model theory. Then, the
famous P/NP problem becomes:
Can second order logic describe languages (of finite linearly ordered structures with nontrivial signature) that first order logic with least fixed point cannot?
Please note that there is also infinite model theory and model theory of metafinite structures [2], and so could there also be profinite model theory? In terms of category theory, a profinite or
inverse limit is an example of a (co)filtered limit [3].
[1] http://www.liafa.jussieu.fr/~jep/PDF/DualityWeb.pdf
[2] http://www-mgi.informatik.rwth-aachen.de/FMT/
[3] http://en.wikipedia.org/wiki/Filtered_category
Posted by: Charlie Stromeyer on July 14, 2009 6:59 AM | Permalink | Reply to this
Re: First-Order Logical Duality
There is also a related new paper about profinite methods in automata theory [1]. Here is perhaps another way to think about this problem:
Would it be possible to construct increasingly large finite instances X of a polynomial time problem such that an infinite instance of X is solvable? The hope is then that for a similar construction
of an NP-complete problem the infinite instance of the NP-complete problem would not be solvable.
I have read various papers on infinite constraint satisfaction problems (CSPs) such as [2] and [3]. Would it make sense to use profinite categories such as in [4] or even profinite domains such as in
[5] to construct an infinite CSP as the limit of increasingly large finite instances of the same underlying CSP?
[1] http://drops.dagstuhl.de/opus/volltexte/2009/1856/pdf/09001.PinJean_Eric.1856.pdf
[2] http://www.lix.polytechnique.fr/~bodirsky/publications.html
[3] http://www.brics.dk/~fvalenci/papers/icsp_sac05.pdf
[5] http://citeseer.ist.psu.edu/59953.html
Posted by: Charlie Stromeyer on July 15, 2009 4:33 PM | Permalink | Reply to this
Re: First-Order Logical Duality
Forssell and Awodey now have a preprint available – First-Order Logical Duality.
Posted by: David Corfield on August 20, 2010 10:38 AM | Permalink | Reply to this
eigenvectors question
So this means that the no. of eigenvectors associated with an eigenvalue tells you the size of the largest Jordan block of that eigenvalue? Which is the same thing as the power k of the [tex](x-\lambda_1)^k[/tex] of the minimal polynomial?
That is not quite right. The number of eigenvectors associated with an eigenvalue tells you the number of Jordan blocks of that eigenvalue.
One way to go is to consider the numbers
rank((A-aI)^k)
where A is an n x n square matrix, I is the identity matrix the same size as A, k = 0, 1, 2, 3, ..., and a is an eigenvalue.
rank((A-aI)^0) - rank((A-aI)^1) = n - rank(A-aI)
tells you the number of Jordan blocks (of dim at least 1),
rank((A-aI)^1) - rank((A-aI)^2)
tells you the number of Jordan blocks of dim at least 2,
rank((A-aI)^2) - rank((A-aI)^3)
tells you the number of Jordan blocks of dim at least 3,
and so on.
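As a quick sanity check, those rank differences can be computed numerically; a small sketch in Python/NumPy (function name and test matrix are illustrative):

```python
import numpy as np

def jordan_block_counts(A, a, tol=1e-9):
    """Count Jordan blocks of eigenvalue `a` via rank differences.

    Returns a list whose k-th entry (k = 0, 1, ...) is
    rank((A-aI)^k) - rank((A-aI)^(k+1)),
    i.e. the number of Jordan blocks of size at least k+1.
    """
    n = A.shape[0]
    M = A - a * np.eye(n)
    ranks = [n]              # rank of (A-aI)^0 = I
    P = np.eye(n)
    for _ in range(n):
        P = P @ M
        ranks.append(np.linalg.matrix_rank(P, tol=tol))
    return [ranks[k] - ranks[k + 1] for k in range(n)]

# A 3x3 matrix with one Jordan block of size 2 and one of size 1,
# both for eigenvalue 5:
A = np.array([[5.0, 1.0, 0.0],
              [0.0, 5.0, 0.0],
              [0.0, 0.0, 5.0]])
print(jordan_block_counts(A, 5.0))  # -> [2, 1, 0]: two blocks, one of size >= 2
```

Reading the output: 2 blocks in total, 1 of them of size at least 2, none of size 3.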
Hmm I haven't learnt anything yet about generalised eigenvectors.
eigenvectors have the property
(A - aI)v = 0 with v != 0.
There may not be enough eigenvectors to form a basis.
We define a generalized eigenvector (associated to the eigenvalue a) of order k = 1, 2, 3, ... as a vector v such that
(A - aI)^k v = 0 and (A - aI)^(k-1) v != 0,
where != means not equal.
With generalized eigenvectors we can always form a basis. One way of putting A in Jordan form is
A = S J S^(-1), i.e. J = S^(-1) A S,
where J is the Jordan form of A and S is a matrix whose columns are generalized eigenvectors.
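SymPy will compute the change-of-basis matrix and the Jordan form at once; note that SymPy's `jordan_form` returns the pair (P, J) with A = P J P^(-1), so the columns of P are the generalized eigenvectors (a sketch, same matrix as the counting example):

```python
from sympy import Matrix

A = Matrix([[5, 1, 0],
            [0, 5, 0],
            [0, 0, 5]])
P, J = A.jordan_form()        # A == P * J * P**-1
assert A == P * J * P.inv()   # columns of P: generalized eigenvectors
print(J)                      # one 2x2 Jordan block and one 1x1, eigenvalue 5
```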
pure geometry
October 2nd 2012, 02:25 AM #1
Junior Member
Sep 2012
pure geometry
Let ABC be an acute-angled triangle with angle BAC = 60 degrees and AB > AC. Let
I be the incenter, and H the orthocenter, of the triangle ABC. Prove that
angle AHI = (3/2) angle ABC.
Jackson Heights ACT Tutor
Find a Jackson Heights ACT Tutor
...Overall I prefer a non-stressful approach to establish a baseline from which to go on with each individual student. Preparation is also key, and having the right material to work on is very
important. Depending on the subject (and especially for computer subjects) before the first session I will...
9 Subjects: including ACT Math, algebra 1, algebra 2, precalculus
...Sinai Medical School and Columbia Mechanical Engineering. The work I've done with this group is being submitted to nature this winter, and I will be 3rd/4th author on the paper. In addition, I
coached a sophomore high school student to a gold medal in his country's national science fair.
32 Subjects: including ACT Math, reading, calculus, physics
...My courses in college include various types of economics and statistics. I have strong academic skills in math and English. I can also provide help in elementary subjects.
14 Subjects: including ACT Math, reading, geometry, algebra 1
...I love biology and have dabbled in chemistry in my time at Boston College. I have experience tutoring college athletes as part of my mentoring of incoming freshman and would love the
opportunity to work with you! I have taken a number of courses that covered genetics including basic biology courses, evolution courses and a course in genetics.
8 Subjects: including ACT Math, biology, algebra 2, anatomy
...I use full length tests, online resources and my own material. I have been teaching/tutoring the math section of GRE for the past ten years. During the first session, I evaluate the student
(s), then I use an individualized approach focusing on strengthening student's weakness as well as a comprehensive subject review.
23 Subjects: including ACT Math, reading, English, algebra 1
Re: More on lists and sets
From: Marshall Spight <marshall.spight_at_gmail.com> Date: 26 Mar 2006 18:24:21 -0800 Message-ID: <1143426260.988651.277030@e56g2000cwe.googlegroups.com>
David Cressey wrote:
> I want to turn my attention to the discussion of lists and sets from the
> logical and/or physical design perspective. Marshall Spight recently said
> that nearly all programming languages offer support, in one way or another,
> for lists. Support for sets is much more spotty. Marshall figures that a
> good language will have to allow for both. Without disputing that
> conjecture, I want to turn my attention to facilities that can convert
> easily back and forth between sets and lists.
I think that's a great idea. Let's think about what it means for a minute.
A set is an unordered collection of distinct elements. (As an aside, there are a few programming languages with support for plain sets; Icon has it and I believe Python does as well. These are simple
sets, though, not relations.) The sweet spot IMHO comes with relations, whose awesome power we are all familiar with, if not equally appreciative of. :-) Relations are sets whose element type is a
product type. I know of no general purpose programming language with any kind of support for relations. (Maybe Dataphor qualifies?) Although you can do some impressive things with languages with list
comprehensions; see Clean for a good example.
So, what is a list? A list is a mapping from the natural numbers to a set. A finite list (the usual case) is a mapping from a finite contiguous subset of the natural numbers, that is, from 0 .. n, where n+1 is the number of elements in the list. Note that this allows duplicates.
Interestingly, I never hear anyone talk about lists of product types. Why is that? (Guess: OOP is hogging the conversation as usual.) It seems to me that lists of product types would be almost as
much more useful than lists of scalar types as sets of product types are more useful than sets of scalar types. So let's think of lists as being lists of product types.
How then do we convert from a list to a set? There are two useful techniques I can think of, one information-preserving and one lossy.
The information preserving way simply considers a list to be a set with an additional attribute that is the position of the rest of the attributes in the list. Converting back and forth simply means
thinking about (or typing) the data in one way or another.
The lossy way is to consider the position as something extra compared to the set representation, and create it when converting set -> list, and discard it when converting list -> set.
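Both conversions fit in a few lines of Python, using tuples for the product-typed elements (all names here are illustrative):

```python
# Information-preserving: a list becomes a set of (position, *fields) tuples.
def list_to_set_preserving(lst):
    return {(i, *row) for i, row in enumerate(lst)}

def set_to_list_preserving(s):
    # Recover the list by sorting on the stored position attribute.
    return [row for _, row in sorted((t[0], t[1:]) for t in s)]

# Lossy: drop the position going list -> set; supply a total order going back.
def list_to_set_lossy(lst):
    return set(lst)

def set_to_list_lossy(s, key=None):
    return sorted(s, key=key)

records = [("b", 2), ("a", 1), ("b", 2)]            # note the duplicate
tagged = list_to_set_preserving(records)
assert set_to_list_preserving(tagged) == records     # round-trips; dupes kept
assert list_to_set_lossy(records) == {("a", 1), ("b", 2)}  # dupe collapsed
```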
In the set -> list case, we need to supply a total order to the set if we want to unambiguously convert it to a list.
But not every interesting order is a total order. Often, partial orders are also useful. Now, I have also observed that no programming language I know of makes any kind of distinction between a total
order and a partial order, which is a shame. (Roughly, a total order is one in which there are no ties-- every distinct element is either strictly less than or strictly greater than every other
element. In a partial order, two element may compare the same, even if they are not the same element.) SQL doesn't make this distinction. In fact, many order-by clauses in SQL end up specifying a
partial order, which SQL more or less treats as a total order. :-(
So I would propose that we need to consider sorting with partial orders and sorting with totals orders separately. In the total order case, we end up with a list whose element type is the same as the
element type of the set. In the partial order case, we end up with a list-of-sets, where the set has cardinality-1 in the no-ties case and cardinality > 1 in the case of ties.
It is interesting that we see list-of-sets emerging directly from the nature of ordered data.
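The list-of-sets structure falls out directly in code: sort by the partial order's key, then group the ties (a sketch; names are illustrative):

```python
from itertools import groupby

def sort_to_list_of_sets(elems, key):
    """Sort by a partial order given via `key`; ties become sets.

    Returns a list of sets: cardinality 1 where the order is strict,
    cardinality > 1 where elements compare equal under `key`.
    """
    ordered = sorted(elems, key=key)
    return [set(group) for _, group in groupby(ordered, key=key)]

people = [("alice", 30), ("bob", 25), ("carol", 30), ("dave", 41)]
by_age = sort_to_list_of_sets(people, key=lambda r: r[1])
print(by_age)
# -> [{('bob', 25)}, {('alice', 30), ('carol', 30)}, {('dave', 41)}]
```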
PS. I realize this post is more brain-dump than anything coherent. Received on Sun Mar 26 2006 - 20:24:21 CST
Braingle: 'Bingo Card 4' Brain Teaser
Bingo Card 4
Logic puzzles require you to think. You will have to be logical in your reasoning.
Puzzle ID: #47963
Category: Logic
Submitted By: cnmne
You are given a stack of bingo cards. Your task is to find a specific card. Given the following clues, what is the number arrangement of that card?
Columns, left to right, are: B (contains numbers 1 through 15), I (contains numbers 16 through 30), N (contains numbers 31 through 45), G (contains numbers 46 through 60), O (contains numbers 61
through 75). Rows, top to bottom, are: 1, 2, 3, 4, 5. An example of coordinate nomenclature: B1 identifies column B row 1. N3 is a free space (contains no number). No number appears more than once.
1) Each numeral (0 through 9) appears one time in Row 1.
2) The sum of the numbers in Row 4 is a square number.
3) There is only one two-digit prime number in each row.
4) The range of the numbers in Column N is 8.
5) Each number in Column G has a tens digit that is less than the units digit.
6) Each number in Column O is odd.
7) In only one column are the numbers in descending order from top to bottom.
8) Each column has only one numeral that appears exactly two times.
9) The smallest number in each column is located in Row 5.
10) The sums of each column share a single common prime factor.
11) The numeral 5 only appears one time on the card.
12) The sum of the numbers in each diagonal is an odd number.
13) The product of B3 and O3 has a units digit of 2.
14) The product of I3 and G3 has a units digit of 4.
krishnan Very nice teaser. Enjoyed the exercise of solving this one.
Sep 17, 2010
acornalice I found this one lots of fun
Sep 18, 2010
Quantum breakup is no heartbreaker
Getting together and breaking up is hard to do, but splitting a quantum couple is even more difficult.
In this case, the couples involve pairs of quantum bits, or qubits, and each bit represents a piece of information. Controlling quantum bits so that they communicate, or couple, with some but not all
of the other quantum bits is one of the fundamental problems in developing a quantum computer, said Franco Nori, physics professor at the University of Michigan, and also with RIKEN, in Japan.
The inability to control and direct qubits and turn their interactions on and off selectively makes it impossible to do quantum information processing.
Quantum computing is promising because such computers—if developed—will process information thousands of times faster than conventional computers, but researchers are still a long way off from
building the first large-scale quantum computer.
Nori’s team proposes a new method to control coupling and de-coupling by tuning the frequency of qubits. Simply put, qubits in the same frequency communicate, those on different frequencies do
not—think of interconnected microscale radios.
“This tuning frequency method should facilitate the implementation of quantum information processing by using superconducting quantum circuits,” Nori said.
The circuits may be scaled up to many qubits by applying certain external frequencies to the qubits. Those qubits with the correct frequencies are allowed to connect through the line.
“Similarly to a radio, qubits can be "in tune" with each other or out of tune, and thus decoupled,” Nori said. “Choosing appropriate frequencies requires varying these frequencies, so the radio can
tune to different stations at different times. Similarly, qubits can tune to different qubits at different times by varying the frequency of the applied magnetic field.”
The paper, “Controllable Coupling Between Flux Qubits,” will be published online Feb. 15 at Physical Review Letters, the Journal of the American Physical Society.
Source: University of Michigan
[SciPy-user] plot vertical lines
Afi Welbeck welby555@yahoo....
Mon Jul 13 08:32:56 CDT 2009
Thanks it worked.
But I observed that for the values I gave you,
you altered them a bit before it plotted it just
they way I expected. Is there some formula
for this sort of plot?
From: Angus McMorland <amcmorl@gmail.com>
To: SciPy Users List <scipy-user@scipy.org>
Sent: Monday, July 13, 2009 2:31:17 PM
Subject: Re: [SciPy-user] plot vertical lines
2009/7/13 Afi Welbeck <welby555@yahoo.com>:
> Hi,
> I'm a newbie. I'm stuck trying to link the following
> points with vertical lines in the xy plane
> (1,1) (1,4) and (3,2) (3,6)
> Could anyone please help me with the code?
Here's an verbose way to do it, so you can see what's going on. You'll
need matplotlib installed for this, and it's the generally recommended
2-d graphics package to accompany scipy.
import matplotlib.pyplot as plt
x0 = [1,1]
y0 = [1,4]
x1 = [3,3]
y1 = [2,6]
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(x0, y0, x1, y1)
ax.set_xlim([0, 4])
ax.set_ylim([0, 7])
Hoping that helps,
AJC McMorland
Post-doctoral research fellow
Neurobiology, University of Pittsburgh
SciPy-user mailing list
More information about the SciPy-user mailing list
Homogenous systems
September 18th 2009, 06:32 PM #1
Junior Member
Sep 2009
Homogenous systems
I'm trying to understand the point of homogeneous systems... You have a matrix, then you append a zero vector at the end to make it augmented, then solve. So, is the point just to add a column of zeros and then solve?
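To see concretely what solving [A | 0] produces: the zero column never changes under row operations, and the solutions of A x = 0 form the null space of A. A sketch with SymPy (the example matrix is illustrative):

```python
from sympy import Matrix

A = Matrix([[1, 2, -1],
            [2, 4, -2]])      # rank 1, so the solutions form a 2-dim space
basis = A.nullspace()         # basis vectors spanning all solutions of A x = 0
for v in basis:
    assert A * v == Matrix([0, 0])
print(len(basis))             # -> 2  (= number of columns minus rank)
```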
Loynes spaces, also called pseudo-Hilbert spaces
Let me first define my object:
First, a locally convex space $Z$ is called admissible in the sense of Loynes if
1. $Z$ is complete
2. There is a closed convex cone in $Z$, called $Z_+$, satisfying (for $x \neq 0$) that $x \in Z_+$ implies $-x \not\in Z_+$, which is used to define a partial order relation on $Z$ in the usual way.
3. There is an involution in $Z$ (which in the real case can be taken to be identity), $(x^*)^*=x$, $(\alpha z)^* = \bar{\alpha} z^*$ and finally $(x+y)^* = x^* + y^*$.
4. The topology of $Z$ is compatible with the order.
5. Any monotonously decreasing sequence in $Z_+$ is convergent.
An easy example for $Z$ could be a $C^*$-algebra.
Then let $Z$ be an admissible space in the sense of Loynes. A linear topological space $X$ is called pre-Loynes if it satisfies:
1. $X$ is endowed with a $Z$-valued inner product (also called gramian), that is, there exists a map $X \times X \to Z$, denoted $[x,y]$, with the properties:
2. $[x,x]\ge 0$, $[x,x]=0$ only if $x=0$. The usual bilinear properties and finally, $[x,y]^*=[y,x]$, that is, in the real case, symmetry.
There are also some topological conditions. If the space $X$ is complete in its topology it is called a Loynes space. In the literature one also sees the name pseudo-Hilbert space. Loynes himself originally used the names VE-space and VH-space; see the paper R. M. Loynes, "On generalized positive-definite functions", Proc. London Mathematical Soc. (3) 15 (1964), 373-84, and references therein.
One use of Loynes space is to define a more general version of stationary stochastic process, with a corresponding Spectral theory.
So my question: What have been the effect of the introduction of Loynes spaces, I am especially interested in work using these concepts in probability theory. Comments, and especially references to
papers or books are very welcome!
pr.probability fa.functional-analysis oa.operator-algebras
I don't know anything about them, but I presume in your definition that you want $[\cdot,\cdot]$ to generate the topology on $X$ or something? – Nate Eldredge Jun 23 '10 at 2:54
I ditto Nate's question. Also, if Z actually is a C*-algebra, then this concept is nothing but that of a "Hilbert C*-module", about which a lot is known (see e.g. Chris Lance's book, or the
K-Theory book by Wegge-Olsen, etc. etc.) – Matthew Daws Jun 23 '10 at 6:49
st: Extracting the largest possible common set of variables for a given
st: Extracting the largest possible common set of variables for a given number of surveys
From "Ergo, Alex" <aergo@jhsph.edu>
To "statalist@hsphsun2.harvard.edu" <statalist@hsphsun2.harvard.edu>
Subject st: Extracting the largest possible common set of variables for a given number of surveys
Date Wed, 26 Nov 2008 14:34:40 -0500
Dear all,
I have data from a large number of surveys (150). There are about 300 different variables, but not all of these variables are available in all the surveys.
I've entered the information on data availability in a STATA dataset as follows:
var001 var002 var003 var004 var005 ... var299 var300
dataset001 1 1 . . 1 . .
dataset002 1 . 1 . . 1 .
dataset003 . 1 . . 1 . 1
dataset150 1 1 1 1 1 1 1
(where the value is "1" if the variable is available, "." otherwise)
I'd like to be able to determine, for a given number of surveys, which combination of surveys I should select in order to have the largest number of variables in common.
For example, if I decide to include 15 surveys in my analysis, which ones should I select to have the maximum number of variables (available in all 15 surveys), and what are these variables? What if I decide to include 16 surveys in my analysis? etc...
Unfortunately, the command mvpatterns doesn't work with such a large number of variables. The command misschk doesn't do the job either.
Can anyone think of another way to extract this information using Stata?
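For a fixed subset size k this can also be brute-forced outside Stata; a Python sketch on toy data (names illustrative). Note that with 150 surveys an exhaustive search over all k-subsets is infeasible for moderate k, so in practice a greedy or branch-and-bound heuristic would be needed:

```python
from itertools import combinations

# Toy availability map: availability[s] = set of variables present in survey s.
availability = {
    "dataset001": {"var001", "var002", "var005"},
    "dataset002": {"var001", "var003", "var299"},
    "dataset003": {"var002", "var005", "var300"},
    "dataset004": {"var001", "var002", "var005", "var300"},
}

def best_subset(availability, k):
    """Return (surveys, common_vars) maximizing the number of variables
    available in every one of the k chosen surveys."""
    best_combo, best_common = (), set()
    for combo in combinations(sorted(availability), k):
        common = set.intersection(*(availability[s] for s in combo))
        if len(common) > len(best_common):
            best_combo, best_common = combo, common
    return best_combo, best_common

surveys, common = best_subset(availability, 2)
print(surveys, sorted(common))
# -> ('dataset001', 'dataset004') ['var001', 'var002', 'var005']
```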
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
sum of the digits in an integer
Author sum of the digits in an integer
write a method to compute the sum of the digits in an integer
Joined: Oct 30, 2002
Posts: 4
public static int sumDigits(long n)
sumDigits(234) returns 2+3+4=9
would someone help with this?
Joined: Jul 22, 2000
Posts: 9043
Which part are you having a problem with?
"Yesterday is history, tomorrow is a mystery, and today is a gift; that's why they call it the present." Eleanor Roosevelt
Author and Marshal
Joined: Jan 10, 2002
Posts: 60045
Hi fabio, welcome to the ranch. As others have pointed out, you won't get much help if you just ask people to write your homework for you.
You will, however, get LOTS of valuable and quality help with concepts and specific problems you might be having.
If you are having problems with getting started with this specific method, I'd venture you're having trouble separating out the individual decimal digits. If you are trying a mathematical solution, that'd be painful because decimal is not a natural numbering system for computers.
So as a hint, how could you somehow transform the incoming value in such a way that it would be easy to isolate the individual digits?
bear
I like... [Asking smart questions] [Bear's FrontMan] [About Bear] [Books by Bear]
Ranch Hand
Joined: Oct 06, 2002
Posts: 201
Show us some code and we will try to guide you in the right direction.
Author and Instructor, my book
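For the record, one standard approach peels off decimal digits with remainder and integer division (sketched here in Python for illustration; a Java version of the requested `sumDigits(long n)` is a direct transliteration):

```python
def sum_digits(n: int) -> int:
    """Sum the decimal digits of an integer; the sign is ignored."""
    n = abs(n)
    total = 0
    while n > 0:
        total += n % 10   # peel off the last decimal digit
        n //= 10          # drop that digit
    return total

print(sum_digits(234))  # -> 9, i.e. 2+3+4
```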
A compact, contented set
October 21st 2008, 10:46 PM #1
Thanks to Laurent's answer to my first post, I was encouraged to resume reading my Edwards, which I had all but given up on, due to the many difficult gap-fillers the reader is required to supply.
Sure enough, it didn't take long before I reached another impasse. Consider the attached proof of Proposition 6.1. The last sentence nonchalantly delivers the following opaque claim: "A+ is a
compact contented set". Do you understand why this should be so? (Please note that the definition of an "absolutely integrable" function is given on the left leaf.)
The excerpt is taken from C.H. Edwards' "Advanced Calculus of Several Variables", Dover, 1994, which is an unabridged, corrected republication of the work first published by Academic Press, New
York, 1973. The book is available for sale in such stores as Amazon and Barnes & Noble (to name a few).
I may be mistaken, but isn't it because $A^+\subset B^\varepsilon\cup A$ and both $A$ and $B^\varepsilon$ are compact contented? However, I couldn't find a definition of "compact contented" on
the internet so I can't be sure; is it what is usually called "compact"?
Some relevant definitions and theorems
Hi Laurent,
My reply is in the attached pdf file. Unfortunately, i couldn't get the mathematical symbols to display properly, when i copied the text from the pdf document directly to this message.
You're right, I gave it too quick a look. And it doesn't seem either correct as such or even simple to correct.
October 21st 2008, 11:45 PM #2
MHF Contributor
Aug 2008
Paris, France
October 22nd 2008, 10:29 AM #3
October 22nd 2008, 10:37 PM #4
MHF Contributor
Aug 2008
Paris, France
Recurrent Neural Networks Learn Deterministic Representations of Fuzzy Finite-State Automata (1998)
by Christian W. Omlin , C. Lee Giles
author = {Christian W. Omlin and C. Lee Giles},
title = {Recurrent Neural Networks Learn Deterministic Representations of Fuzzy Finite-State Automata},
year = {1998}
The paradigm of deterministic finite-state automata (DFAs) and their corresponding regular languages have been shown to be very useful for addressing fundamental issues in recurrent neural networks.
The issues that have been addressed include knowledge representation, extraction, and refinement, as well as development of advanced learning algorithms. Recurrent neural networks are also a very promising tool for modeling discrete dynamical systems through learning, particularly when partial prior knowledge is available. The drawback of the DFA paradigm is that it is inappropriate for modeling vague
or uncertain dynamics; however, many real-world applications deal with vague or uncertain information. One way to model vague information in a dynamical system is to allow for vague state
transitions, i.e. the system may be in several states at the same time with varying degree of certainty; fuzzy finite-state automata (FFAs) are a formal equivalent of such systems. It is therefore of
interest to study how uncertainty in the form of FFAs can be modeled by deterministic recurrent neural networks. We have previously proven that second-order recurrent neural networks are able to
represent FFAs, i.e. recurrent networks can be constructed that assign fuzzy memberships to input strings with arbitrary accuracy. In such networks, the classification performance is independent of
the string length. In this paper, we are concerned with recurrent neural networks that have been trained to behave like FFAs.In particular, we are interested in the internal representation of fuzzy
states and state transitions and in the extraction of knowledge in symbolic form.
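The fuzzy-automaton formalism the abstract describes can be sketched in a few lines. The toy below (the states, alphabet, and transition weights are invented for illustration; this is not the paper's neural encoding) keeps a degree of membership for every state at once and propagates it with max-min composition, so the machine really is "in several states at the same time with varying degrees of certainty":

```python
# Toy fuzzy finite-state automaton (FFA): the machine occupies several
# states at once, each with a degree of certainty in [0, 1].
# Transitions combine by max-min (fuzzy) composition. Illustrative
# sketch only -- not the Omlin/Giles neural-network construction.

def ffa_membership(string, delta, start, accepting):
    """delta[state][symbol] -> list of (next_state, transition_weight)."""
    memb = {s: 0.0 for s in delta}   # degree of membership in each state
    memb[start] = 1.0
    for sym in string:
        new = {s: 0.0 for s in delta}
        for s, deg in memb.items():
            for nxt, w in delta[s].get(sym, []):
                # certainty of a path is its weakest link (min),
                # certainty of a state is its best path (max)
                new[nxt] = max(new[nxt], min(deg, w))
        memb = new
    return max(memb[s] for s in accepting)

# Two-state FFA over {a, b}: reading 'a' moves q0 -> q1 with certainty 0.8
delta = {
    "q0": {"a": [("q1", 0.8)], "b": [("q0", 1.0)]},
    "q1": {"a": [("q1", 0.6)], "b": [("q0", 0.5)]},
}
print(ffa_membership("ba", delta, "q0", {"q1"}))  # 0.8
```

The min/max pair is one common fuzzy composition; other t-norms are possible, and the paper's encoding into second-order networks is a separate construction.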
Cosmology & Gravitation
This series consists of talks in the areas of Cosmology, Gravitation and Particle Physics.
Local-type primordial non-Gaussianity couples statistics of the curvature perturbation ζ on vastly different physical scales. Because of this coupling, statistics (i.e. the polyspectra) of ζ in our Hubble volume may not be representative of those in the larger universe -- that is, they may be biased. The bias depends on the local background value of ζ, which includes contributions from all modes with wavelength k ~ and is therefore enhanced if the entire post-inflationary patch is large
I'll discuss a number of insights into the process of nonlinear structure formation which come from the study of random walks crossing a suitably chosen barrier. These derive from a number of new
results about walks with correlated steps, and include a unified framework for the peaks and excursion set frameworks for estimating halo abundances, evolution and clustering, as well as nonlinear,
nonlocal and stochastic halo bias, all of which matter for the next generation of large scale structure datasets.
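The barrier-crossing picture the abstract refers to can be illustrated with a Monte Carlo toy. In the excursion-set approach the smoothed density contrast performs a random walk as a function of the variance S, and its first crossing of a constant barrier δ_c ≈ 1.686 is identified with halo formation at that scale. The sketch below uses uncorrelated (sharp-k) steps for simplicity; the talk's new results concern walks with correlated steps, which this toy deliberately does not model:

```python
import random

# Excursion-set toy: delta(S) random-walks as the smoothing scale shrinks
# (S = variance of the smoothed field). With a sharp-k filter the steps
# are independent Gaussians whose variances add linearly in S. The first
# up-crossing of the barrier delta_c ~ 1.686 marks halo formation.

def first_crossing(s_max=10.0, ds=0.01, barrier=1.686):
    """Return the variance S at the first up-crossing, or None by s_max."""
    delta, s = 0.0, 0.0
    while s < s_max:
        delta += random.gauss(0.0, ds ** 0.5)  # step variance = ds
        s += ds
        if delta >= barrier:
            return s
    return None

random.seed(0)
crossings = [first_crossing() for _ in range(2000)]
frac = sum(c is not None for c in crossings) / len(crossings)
# For Brownian walks, P(cross by S) = erfc(barrier / sqrt(2 S)),
# roughly 0.59 at S = 10 for this barrier.
print(f"fraction crossing by S = 10: {frac:.2f}")
```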
ΛCDM has become the standard cosmological model because its predictions agree so well with observations of the cosmic microwave background and the large-scale structure of the universe. However ΛCDM
has faced challenges on smaller scales. Some of these challenges, including the “angular momentum catastrophe" and the absence of density cusps in the centers of small galaxies, may be overcome with
improvements in simulation resolution and feedback. Recent simulations appear to form realistic galaxies in agreement
I will review recent developments in our theoretical understanding of the abundance and clustering of dark matter haloes. In the first part of this talk, I will discuss a toy model based on the
statistics of peaks of Gaussian random field (Bardeen et al 1986) and show how the clustering properties of such a point set can be easily derived from a generalised local bias expansion. In the
second part, I will explain how this peak formalism relates to the excursion set approach and present parameter-free predictions for the mass function and bias of dark matter halos.
We employ the effective field theory approach for multi-field inflation, which is a generalization of Weinberg's work. In this method the first correction terms in addition to the standard terms in the Lagrangian have been considered. These terms contain up to the fourth derivative of the fields, including the scalar field and the metric. The results show the possible shapes of the interaction terms resulting eventually in non-Gaussianity in a general formalism. In addition, generally the speed of sound is different but almost unity. Since in this
Screened Scalar-Tensor gravity such as chameleon and symmetron theories allow order one deviations from General Relativity on large scales whilst satisfying all local solar-system constraints. A lot
of recent work has therefore focused on searching for observational signatures of these
I will also present preliminary results of constraints to this model using up-to-date cosmological observations, which verify the above picture. The parameter space is interesting to explore due to a
strongly mass dependent covariance matrix, motivating comparisons between Metropolis-Hastings and nested sampling. Finally I discuss fine-tuning and naturalness in these models.
The non-Gaussian statistics of the primordial density perturbation have become a key test of the inflationary scenario of the very early universe. Currently many techniques are used to calculate the
non-Gaussian signatures of a given model of inflation. In particular, simple super-horizon techniques such as the deltaN formalism are often used for models with more than one field, while more
technical field theory techniques, referred to as the In-In formalism, are typically used for
I will present recent work, done in collaboration with Daniel Roberts, on the global memory of initial conditions that is sometimes, but not always, retained by fluctuating fields on de Sitter space,
Euclidean anti de Sitter space, and regular infinite trees. I will discuss applications to the structure of configuration space in de Sitter space and eternal inflation.
Cosmological results from Planck, a third-generation satellite mission to measure the cosmic microwave background, have just been announced. These results improve constraints on essentially all
cosmological parameters, and have implications for several preexisting sources of tension with the standard cosmological model, while also raising new puzzles. I will discuss these results and their
significance, as well as the next steps forward.
Impressions of Conway
"Have I done this to you yet?" He grabbed my hand and held it out in front of him, palm down. Before I could react, he pulled a rubber stamp out of his pocket, and my hand suddenly was
emblazoned with big red letters. "John H. Conway's Seal of Grudging Approval." Within seconds, it had smeared to three red lines that wouldn't wash off for several days. Still grasping my hand, he
pulled me toward his office. Brightly colored polyhedra hung in disarray from a network of strings dangling from the ceiling. The dim outline of a computer terminal was visible through a pile of
Rubik's cubes and wooden toroids. "We'll be better off in the undergraduate lounge. The doctor says I should rest, and I can lie down over there." He pulled me across the hall, into a room where
tinkertoys buried the shelves and tables but didn't cover the chairs. He lay down on a sofa, crossed his legs, and put his hands behind his head. "You didn't comment on my shirt," he said. It
depicted a boldly colored Escher print, where fishes transformed into birds, then boats, back to fish again, and finally into horses before disappearing under his belt. He didn't wait for a reply.
"Well, off we go! I was born in Liverpool, England. Do you want to know how old I am?"
"If you don't mind," I said, desperately trying to keep up.
"I was born on December the twenty-sixth, 1937. No, I don't mind. I actually did mind once ... I'll tell you about it some time if you like."
"O.K. Do you ..."
"When I was first married, my first wife was seven years older than I was," he continued. "She was very, very much worried by this. I'm afraid I wasn't particularly sympathetic. Every time she
passed a decade, she would be terribly, terribly depressed. And then a few years ago when I turned fifty - oh, my God, it was absolutely awful!" He grinned, slightly embarrassed. "I suddenly had a
wave of sympathy for her."
As Conway stroked his bushy beard, he resembled the unemployed younger brother of Santa Claus. His hair and clothing were in disarray, yet he looked distinguished. It was an odd
combination, yet perfectly suited to a mathematician.
"Anyhow, I enjoyed living in England, but it was really hard growing up during wartime. I remember I saw a banana, once." It was a really small one, and it wasn't yet ripe. Every day,
Conway would check to see whether it was ready to be eaten. "And then the great day came. My mother divided it, and I got a piece - and I didn't like it very much, after all that fuss." Once, when he
was five or six, he went to a birthday party, and each guest received a balloon. "Of course I had seen one, but I had never actually owned one." Conway frowned. "When it burst two days later, I was
absolutely inconsolable." His parents talked to the family who gave him the balloon, but they didn't have any more. There were none to be found anywhere, due to the wartime shortage of materials.
"Times were hard. My father was almost a teacher of Chemistry, he was actually a laboratory assistant officially. Mathematics wasn't his specialty, but he certainly knew about it."
Conway reached into his pocket and pulled out a well-worn box of cards. "Have I shown you this one?" He took the pack out of the box and started pulling cards off of the top and putting
them at the bottom. "A, C, E," he said, pulling off one card per letter. He then flipped over the next card, which happened to be the ace. "T, W, O," he continued, then flipped over the two. "Would
you like to try?" He handed the deck to me. "T, H, R, E, E," I said, then flipped over the next card. The joker. "No, silly," he cried. "Here's how you do it: T, H, R, E, E." He flipped over the
three. "Try again." "F, O, U, R." Joker. The deck was passed back and forth - I always got jokers while he got the correct card. At last, he finished. Carefully rearranging the deck into the
necessary order for the trick, he put it back in his pocket, ready for the next victim.
"Math was always there for me," Conway said, settling back on the sofa. "My mother said that when I was three or four, I knew the powers of two. When I was about eleven, I changed schools,
as do all English children. I remember that at an interview for my new school, I told the interviewer that I wanted to be a mathematician at Cambridge. I don't remember what could have sparked such
interest." Conway was at the top of the class in almost everything, until he reached high school, when it ceased. However, he was always the best student in math. "I liked math above all other
subjects, but I used to go through phases. I still do. I was very, very interested in fossils for a time, then I liked spiders. But most of all, I loved astronomy." Suddenly, he sat up and leaned
toward me. "Do you know that two years ago I became interested in astronomy again? I learned the names of all the visible stars in the northern hemisphere. Here's Orion." Jabbing his finger in the
air to denote the location of each star, he recited all of their names. "... Betelgeuse, Mintaka, and Saiph. I like knowing things. When a constellation is covered up by a cloud, I predict where the
stars will be when the cloud moves away. I really get a feeling of power - knowing what will happen is almost like making it happen."
A graduate student wandered into the lounge. "Nice shirt!" Conway looked at his belly. "Do you really like it?" He leaped off of the couch. "I'll be back in a moment." He stepped outside
with the student, rummaged around in his hopelessly cluttered office, and finally handed over a pile of papers. "That's going to my book, you see," he said, grinning. "Well, not really mine. This
person decided to write up some of my lectures, and I've finally gotten around to approving his writeups. That's what this silly stamp is for," he said, pulling it from his pocket and waving it. I
hid my hands behind my back. "The book will be called 'The Sensual Form,' as I describe how to 'see' a quadratic form - a second- degree polynomial - and 'hear' and 'feel' it, after a fashion. Now,
where were we ... oh, yes, astronomy. Oh! Have I shown you this yet? Name a day."
"Waning gibbous. Now give me a harder one." I chose another day, and within five seconds, he told me the phase of the moon on that day. "It's a very simple formula, actually, but it looks really
impressive. I just like knowing these things, and showing off. I like to impress people." He smiled. "You know, Cambridge had these lovely, lovely gardens. That's one of the things I miss most about
England. And I used to walk through the gardens and look at the flowers - and I knew why the petals of the daffodil were arranged the way they were." Conway stroked his beard as he thought. "When I
was a student at Cambridge, I used to go to a small coffee shop and do anagram crosswords. One day a procession came in, and one woman was carrying a fake daffodil - and it only had five petals.
There have to be six! It made me rather annoyed; I guess that it shouldn't have." He smiled. "You know, there is a beauty in nature that is too subtle for man. It really bothers me that when artists
paint a pineapple, they always make the lines symmetric. They aren't symmetric, and that's the real beauty of the thing. The asymmetry is related to the Fibonacci sequence - there might be eight
grooves in one direction, and thirteen in the other, so the lines don't meet symmetrically. And artists ignore this. Another thing that nobody notices is a brick wall." Six months ago, he had gone
through a 'brick wall' phase, which culminated in his giving a lecture called 'How to stare at a brick wall.' He described, in detail, the enormous variety of intricate patterns that can be found in
brick walls. "And nobody looks at these damn things!"
With no warning, Conway leaned forward and whipped out his pack of cards again. "Numbers or suits," he asked me. "Suits," I replied. After shuffling the deck a few times, he flipped four
cards off of the top. One was from each suit. He went through the deck, flipping over the cards, in groups of four, and each time, one was from each suit. He gathered up the cards, carefully
preserving the order. "Now, numbers." He counted off twelve cards. "Stop! What's missing?" He looked at the flipped cards, and noticed that ace through queen were all there. "This one must be the
king." He turned over the next card from the deck; "Behold! The king."
Putting the cards back in his pocket, Conway sank back into the sofa. "Game theory wasn't my first specialty, you know. In Cambridge, I got started in number theory." His advisor, Harold
Davenport, was an eminent number theorist. "He gave me a very difficult problem - proving a conjecture that said every integer can be written as the sum of thirty-seven numbers, each raised to the
fifth power. When I told him that I had solved it, he didn't believe me." But the proof was correct. After carefully going through Conway's solution, and finding no major flaws, Davenport said,
"Well, now, Mr. Conway, what we have here is a poor Ph.D. thesis." Though the comment annoyed Conway at first, he realized that it meant that he was free to do whatever he liked. He went on to study
set theory, especially transfinite numbers.
If you ask most people about infinity, they will probably respond something like, "Well, you can count one, two, three - just keep going forever, and that's infinity." In the realm of transfinite numbers, that is only the lowest level of infinity, dubbed ω (the Greek letter omega). Conway explained, "Omega is a transfinite number. These numbers have some very bizarre properties. For instance, ω + 1 does not equal 1 + ω; in fact, 1 + ω equals ω. In the same way, 2·ω also equals ω, while ω·2 does not. So you can see that adding and multiplying transfinite numbers together can give you some very odd results - but they are consistent, and they are very, very interesting." The mathematics of adding and multiplying numbers greater than infinity is very strange, but there is a beauty to it. Instead of counting one, two, three ... and finally getting stuck at infinity, you can keep going -- ω + 1, ω + 2, up to ω·2, ω·3, then go to ω², ω³, and keep going. There is no reason to stop - you can go to omega to the omega power, omega to the omega to the omega power, and so on. When you have done that omega times, just add one, and you have a new creature. There are infinitely many infinities in the realm of transfinite numbers.
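The asymmetries Conway quotes are standard identities of ordinal arithmetic; written out compactly (a summary of the textbook facts, not Conway's own notation):

```latex
% Ordinal arithmetic is not commutative: absorbing on the left,
% strictly increasing on the right.
\begin{align*}
1 + \omega &= \omega, & \omega + 1 &> \omega,\\
2 \cdot \omega &= \omega, & \omega \cdot 2 &= \omega + \omega > \omega,\\
\omega &< \omega^{2} < \omega^{\omega} < \omega^{\omega^{\omega}} < \cdots
\end{align*}
```

The last line is the "no reason to stop" ladder from the passage: each tower of omegas is itself exceeded by the next.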
Conway stared at the ceiling. "I didn't stay in set theory, however." Upon graduation, Conway got a job at Cambridge as a mathematical logician. "In my late twenties, I became very
depressed. I felt that I wasn't doing real mathematics; I hadn't published, and I was feeling very guilty because of that." He was studying the symmetries of a certain lattice, and all of a sudden
Conway stumbled upon a very large group which nobody had seen before. A group is a set (like the integers) associated with an operation (like addition), which follows certain very restrictive rules.
Finding a new group was so rare and important that Conway was made a Fellow of the Royal Society for his discovery. "You know, when they make you a Fellow, you get to sign a book which has the
signatures of all members of the Royal Society - since 1660, when it was founded. It was a great thrill to flip through the pages and see the signatures of I. Newton, C. Wren, and A. Einstein. And I
got to sign J. Conway in that same book. It gave me a warm feeling."
"Now what was I going to say?" he asked abruptly, stroking his beard. "I have a very odd sort of memory. I can remember the most useless, obscure details, but when it comes to things that
other people think are important, I can't recall them for the life of me. When I was at Cambridge, I never learned the names of some of my colleagues - and I worked with them for twenty years!"
Conway smiled a bit, embarrassed. "But I must have good memory. I know pi to one thousand places." He started rattling off digits as fast as he could catch his breath. "3.1415 92653 58979 32384 62643
38327 95028..." I joined in. "... 84197 16939 93751 05820 ..." He stopped abruptly. "How many digits do you know?" he asked, surprised. "Oh, only about seventy-five. I had nothing else to do in
computer science class, so I tried to memorize pi." Conway nodded. "I had a similar experience. When I was an undergraduate, I had a summer job at a biscuit factory. I had to clean the ceiling of the
oven room - and it was completely black with soot. We worked on a scaffolding which was fifty feet high, and we scrubbed the ceiling. It soon became apparent that it was futile; after an hour of hard
work, the ceiling changed color from a black matte to a black matte with a sheen." Conway and his friends would play poker on the scaffolding, and every so often, climb down, move the scaffolding a
few feet, climb up, and resume their game. Soon he tired of poker, so Conway decided to memorize pi. "I learned it to seven hundred and seven places - which was the extent to which it was known back
then. A few years later, I learned it up to one thousand." He sat up on the couch, and fluffed up the pillow behind his head. "I convinced my wife to learn it, too. In fact, every Sunday, we took a
romantic little walk to Grantchester, a lovely, lovely little town near Cambridge, and we ate lunch at a pub there. We would stroll along the road, reciting pi to each other; she would do twenty
places, then I would do twenty, and so forth."
Conway paused for a moment, and his bushy eyebrows furrowed. "Yes, I must really have a tremendous memory. As you know, I crossed the Atlantic in 1985 or 1986, and became a professor at
Princeton. Several years later, when what's-his-name ... Harold Shapiro became President of the university, he invited some of the faculty to a dinner party each week. There were about eight or ten
guests, and Shapiro asked each of us to say a few words about ourselves. I didn't like that one bit. It reminded me of the recitations of poetry we had to do in elementary school. So I recited a
little poem about elves and goblins that I learned when I was, oh, about six years old. I hadn't thought about since then, and I was able to recall it at an instant. Well, I wasn't invited to a
dinner party again. But I don't worry about that; I guess it looks as if I have an irresponsible attitude. However, to do good work in math, you have to be somewhat irresponsible. I only started
doing real mathematics after I found the Conway group. I got a much-needed ego boost - obviously I don't need one anymore. Anyhow, after I made my name, I could do what I like, even if it was totally
trivial. When I want to play backgammon instead of doing math, I play backgammon. If the people at Princeton don't feel that they're getting their money's worth out of me, that's their problem. They
bought me!"
After finding his group, Conway continued his work in group theory and published an "atlas" of groups. In fifteen years, Conway and his colleagues collected all the "interesting" groups,
classified them, described their properties, and put them into one volume. "The work of producing the book was quite heavy, especially in the last year. It turned me off of group theory and algebra
in general. At about the same time I started the atlas, I was trying to come up with a mathematical understanding of the game of Go. There were two very strong players at Cambridge, one of whom was
the British champion. I noticed that near the end of a match, the whole game looked like a sum of a lot of little games. And, to my great surprise, I discovered that certain games behaved like
numbers." By analyzing the properties of these games, Conway discovered a whole new set of numbers; they were soon dubbed "surreal." "The theory was a real big shock to me. It was bizarre ... crazy,
but it was true! It was like climbing to the top of the beanstalk, and there was the enchanted castle. I had no idea what to expect. The rules have all changed, like magic. It's like a new world.
Exploring that world took me some time." Soon it was a fully-fledged theory. "You can view it as an extension of the real numbers, but it also has a surreal quality. New numbers were appearing - and
nobody had seen them before. But they exist. Old numbers fell straight out of the definition - all the reals, and even the transfinite ordinals like w. By the way, I didn't coin the term 'surreal
numbers.' Donald Knuth did, in his book which should be around here." Conway jumped off of the sofa. After steadying himself for a moment, he shuffled toward his office. He looked through the
cluttered bookshelves for his copy. He found a trashy-looking novel, Soho Madonna, which sported a scantily clad woman on its cover. "I don't remember buying this." He tossed it aside. "Oh, well. In
any case, Knuth's book was in the form of a novel. I was God. There were two main characters, Alice and Bill, and a third character named C speaks from the sky. In the beginning, they find a stone
inscribed 'And Conway created numbers.'" After a frantic search, Conway gave up looking for the book, and trundled back to the lounge. "I daydreamed for weeks about these wonderful numbers. I often
do that. I have what I call a 'white hot period' which lasts for a few days. I can't sleep, and I am completely absorbed with a problem. This used to get my wife terribly, terribly upset. After the
white hot phase, I enter a daydreaming phase for a few weeks." He frowned slightly. "These phases are becoming less and less frequent."
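The construction behind those "wonderful numbers" can be stated in one line: every surreal number is a pair of sets of previously created numbers, with no member of the left set greater than or equal to any member of the right. A few landmarks (standard textbook examples, not drawn from the essay) show how the old numbers "fall straight out of the definition" and how genuinely new ones appear:

```latex
% Surreal numbers x = \{ L \mid R \}: built from earlier numbers,
% with no l in L satisfying l >= r for any r in R.
0 = \{\ \mid\ \}, \qquad
1 = \{\, 0 \mid\ \}, \qquad
\tfrac{1}{2} = \{\, 0 \mid 1 \,\}, \qquad
\omega = \{\, 0, 1, 2, \ldots \mid\ \}, \qquad
\omega - 1 = \{\, 0, 1, 2, \ldots \mid \omega \,\}.
```

The last example is one of the creatures "nobody had seen before": ω − 1 is not an ordinal at all, yet the definition produces it as easily as it produces 1/2.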
Conway led me back to the lounge. "I discovered surreal numbers a long time ago. I guess that it was around 1970. At about the same time I invented the game of Life." The roots of the idea
originated with the famous mathematician Von Neumann. He wanted to create a model of a "universal constructor," a machine that could build any other machine if correctly programmed; it would even be
able to make a copy of itself. He imagined an infinite board ruled into squares, and assigned one of twenty-nine values, or states, to each square. There was a complex set of rules to determine how
each square's state affected its neighbors' states. He succeeded in creating a universal constructor, but his system was so complicated that it was very unappealing. Conway changed the rules so that
each square had only two states - living or dead. He had only three rules. The first, the "birth rule," was that a dead cell becomes alive if it has three neighbors which are alive. The second, the
"isolation rule," was that a live cell dies if fewer than two of its neighbors are alive. The third was the "crowding rule"; a live cell dies if it has four or more live neighbors. "I decided to
observe this simple system, rather than build in the universal constructor property like Von Neumann did." Almost anyone who owns a computer is familiar with the game of Life - it has become a
popular screen saver, because of the unpredictable and interesting patterns that flicker across the screen as cells are born and die. "You can embed any mathematical question into the game of Life -
for instance, with a tremendous amount of effort, you can plug in Fermat's last theorem and see whether the program terminates." He stroked his beard and smiled. "You know, I occupy the John von
Neumann chair at Princeton. He was interested in transfinite numbers and game theory. I think it was just a coincidence; I wouldn't have been interested in building bombs. I don't think I would even
have liked the man. But our interests do have a great deal of overlap."
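The three rules as stated determine the entire game. A minimal sketch (the sparse-set representation is a convenience of this example, not part of Conway's formulation):

```python
from itertools import product

# Conway's rules from the passage, applied to a sparse set of live cells:
#   birth:     a dead cell with exactly three live neighbors comes alive
#   isolation: a live cell with fewer than two live neighbors dies
#   crowding:  a live cell with four or more live neighbors dies
# (so a live cell survives with exactly two or three live neighbors).

def life_step(live):
    """live: set of (x, y) tuples. Returns the next generation."""
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                cell = (x + dx, y + dy)
                counts[cell] = counts.get(cell, 0) + 1
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker" oscillates between a row and a column with period two:
blinker = {(0, 0), (1, 0), (2, 0)}
print(life_step(blinker))                        # vertical triple
print(life_step(life_step(blinker)) == blinker)  # True
```

Cells with no live neighbors never appear in the neighbor counts, so they correctly stay (or become) dead without being enumerated, which is what makes the infinite board tractable.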
Conway propped up the pillow behind his head and grinned. "I like showing off. When I make a new discovery, I really like telling people about it. I guess I'm not so much a
mathematician as a teacher. In America, kids aren't supposed to like mathematics. It's so sad." Conway sat up suddenly. "Most people think that mathematics is cold. But it's not at all! For me, the
whole damn thing is sensual and exciting. I like what it looks like, and I get a hell of a lot more pleasure out of math than most people do out of art!" He relaxed slightly, and he lowered his
voice. "I feel like an artist. I like beautiful things - they're there already; man doesn't have to create it. I don't believe in God, but I believe that nature is unbelievably subtle and clever. In
physics, for instance, the real answer to a problem is usually so subtle and surprising that it wasn't even considered in the first place. That the speed of light is a constant - impossible! Nobody
even thought about it. And quantum mechanics is even worse, but it's so beautiful, and it works!" Conway grinned. "I really do enjoy the beauty of nature - and math is natural. Nobody could have
invented the mathematical universe. It was there, waiting to be discovered, and it's crazy, it's bizarre. Math explains why the petals of the daffodil are arranged the way they are, and I think that
I get more pleasure by looking at the daffodil than others do because I understand this. I guess I'm a Sybarite. I like beauty, and I like to eat and drink." He patted his belly. "I used to anyway.
My heart attack changed that somewhat." There was an awkward pause. Conway stood up and walked toward his office. He pointed to the polyhedra that hung from the ceiling. "I'm getting interested in
group theory again. Each of these represents a different type of symmetry, and group theory is really the study of symmetries. Unfortunately, all my groupy friends dispersed, and Princeton is a
wasteland for group theory. I'm somewhat interested in two and three dimensional stuff. It's not very serious, I'm afraid."
I looked around the office, and noticed seven long sheets of paper with footprints on them. "They represent the different types of symmetry in two dimensions," he explained. "One day, I was
walking along, trying to think of an example of translation linked with reflection. All of a sudden, I realized that walking was just what I was looking for! I xeroxed my feet, and made up these
pictures. Each one represented a type of symmetry. And each symmetry has a polyhedron associated with it." Conway sank down into a chair. "I'm not really doing mathematics right now." He looked at
the ceiling. "I guess you can say that I'm expanding it. Instead of trying to prove new theorems, I'm trying to fill in the holes that other people have left behind. I want to have a better
understanding of what we already know. I want a more visual, more intuitive feel for math, like my book on the quadratic form." He stopped short, and looked a little stunned. "I guess you can say
that I've almost ceased being a mathematician."
All of a sudden, his eyes lit up. "Oh, I haven't shown you this! Do you have any pennies?" I fumbled through my pockets and came up empty. A graduate student walked by. Conway leapt up and
grabbed his arm. "Pennies?" he cried, and the startled student looked through his jacket and came up with six. "I guess that it will have to do. I am going to change your luck forever, for the
better." He pulled me toward a table that was only mildly cluttered. "Heads or tails."
"Uh ... heads."
"Good. Now help me." He began balancing the pennies on their edges. By the time that I had finally gotten one to stay standing, he had done four. "Come on, come on." He took the last one out of
my hands, and with a swift, well practiced motion, he had it balanced on its edge. He made sure that no two were facing in the same direction. "Heads, you said?" I nodded. "Well, here goes!" He
knocked the bottom of the table gently, and all six pennies fell. All heads. "Want to see it again?" Without waiting for my answer, he quickly set up the pennies again. "You try it." I tapped the
bottom of the table, and they fell. Heads. He smiled. "It's really much more impressive when you have twenty pennies or so. Now, heads or tails?"
"Tails. But what ..."
"Wait." He held his finger up. "Help me spin these." In a flash, he had four pennies skittering along the table. "Tails, heads, tails, tails. Come on!" I was finally able to get one up to speed.
Heads. He spun four more. "Heads, tails, tails, tails. You know, if you spin pennies this way, roughly two out of three are tails."
"It's very simple, really. Balance the penny on the table and look at it very closely. You will notice that it doesn't stand up exactly straight." I didn't notice, but I nodded anyhow. "That's
because the sides aren't flat like the edges of a disc. They're angled like a slice of a cone - and the tails side is narrower than the heads side. That's a result of the minting process. So, if you
balance them on their edges, they will lean toward the tails side, so when you disturb them gently, they will fall heads up."
"But when you spin them ..."
"The center of mass likes to be above the point of contact. If you think about it for a little, you will see that when the penny spins, the tails side is facing up." Conway grinned as he shuffled
back to his office. I was impressed.
[Copyright (c) 1994, The Sciences]
[This is actually a pre-edited version, and differs slightly from the published article.] | {"url":"http://www.users.cloud9.net/~cgseife/conway.html","timestamp":"2014-04-16T18:57:31Z","content_type":null,"content_length":"28382","record_id":"<urn:uuid:01846670-c8ae-4e2d-a3d7-6dfdd9683ead>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00471-ip-10-147-4-33.ec2.internal.warc.gz"} |
West Elsdon, Chicago, IL
Chicago, IL 60610
Talented Math, Music Tutor
...Young high school, one of the leading high schools in Chicago. I was a part of their Academic Decathlon team, a prestigious academic competition in ten different subjects encompassing Math, Music,
Economics, Science, History, Art, Literature, Speech, Interview,...
Offering 10+ subjects including algebra 1 | {"url":"http://www.wyzant.com/West_Elsdon_Chicago_IL_Algebra_1_tutors.aspx","timestamp":"2014-04-24T10:42:56Z","content_type":null,"content_length":"62960","record_id":"<urn:uuid:3a6fb21f-4ab2-4cf2-8712-d916a428595d>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00324-ip-10-147-4-33.ec2.internal.warc.gz"} |
Area of parallelogram and triangle Theorems
1) Area of a figure is a number with some unit associated with the part of the plane enclosed by that figure. Different types of figures have different areas. We have different formulas for finding
area of some particular figures.
2) Two figures are called congruent, if they have the same shape and the same size. And two congruent figures have equal areas but the converse need not be true.
3) If A and B are two congruent figures, then area(A) = area(B);
and if a planar region formed by a figure T is made up of two non- overlapping planar regions formed by figures P and Q, then area(T) = area(P) + area(Q).
4) Two figures are said to be on the same base and between the same parallels, if they have a common base or a side and the vertices or the vertex opposite to the common base of each figure lie on a line parallel to the base. In the above figure, parallelogram ABCD and triangle ABE are on the same base and between the same parallels.
5) Parallelograms on the same base or equal bases and between the same parallels are equal in area.
In the above figure parallelogram ABCD and ABEF are on the same base and between the same parallels, so they have equal areas.
6) Area of a parallelogram is the product of its base and the corresponding altitude.
7) Parallelograms on the same base or equal bases and having equal areas lie between the same parallels.
8) If a parallelogram and a triangle are on the same base and between the same parallels, then the area of the triangle is half the area of the parallelogram.
In the above figure parallelogram ABCD and triangle ABE are on the same base and between the same parallels, so the area of the triangle will be half the area of the parallelogram.
9) Triangles on the same base or equal bases and between the same parallels are equal in area.
In the above figure triangle ABC and BCP are on the same base and between the same parallels, so area of both triangles will be equal.
10) Area of a triangle is half the product of its base and the corresponding altitude.
11) Triangles on the same base or equal bases and having equal areas lie between the same parallels.
12) A median of a triangle divides it into two triangles of equal areas. | {"url":"http://iperform.classteacher.com/site/blog/index.php/area-of-parallelogram-and-triangle-theorems/","timestamp":"2014-04-19T22:41:24Z","content_type":null,"content_length":"33185","record_id":"<urn:uuid:3769eab3-fec0-4404-8f76-eb89732cad50>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00418-ip-10-147-4-33.ec2.internal.warc.gz"} |
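Both area formulas above (theorems 6 and 10) can be checked numerically with the shoelace formula; the coordinates below are an illustrative example, not taken from the figures:

```python
def shoelace_area(pts):
    # Polygon area via the shoelace formula (vertices given in order).
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2

# Parallelogram ABCD with base AB = 5 and altitude 3: area = base * altitude.
A, B, C, D = (0, 0), (5, 0), (7, 3), (2, 3)
par = shoelace_area([A, B, C, D])   # 15.0

# Triangle ABE on the same base AB, apex E on the parallel line y = 3:
# area = (1/2) * base * altitude.
E = (4, 3)
tri = shoelace_area([A, B, E])      # 7.5

print(par == 2 * tri)               # True
```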
Information Theoretic Evaluation of Change
Prediction Models for Large-Scale Software
Abstract (Summary)
During software development and maintenance, as a software system evolves, changes are made and bugs are fixed in various files. In large-scale systems, file histories are stored in software
repositories, such as CVS, which record modifications. By studying software repositories, we can learn about open source software development processes. Knowing in advance where these changes will happen gives managers and developers the power to concentrate on those files. Due to the unpredictability of the software development process, proposing an accurate change prediction model is hard. It is even harder to compare different models against the actual model of changes, which is not available.
In this thesis, we first analyze the information generated during the development process, which can be obtained through mining the software repositories. We observe that the change data follows a
Zipf distribution and exhibits self-similarity. Based on the extracted data, we then develop three probabilistic models to predict which files will have changes or bugs. One purpose of creating these
models is to rank the files of the software that are most susceptible to having faults.
The first model is Maximum Likelihood Estimation (MLE), which simply counts the number of events, i.e., changes or bugs, that occur in each file, and normalizes the counts to compute a probability
distribution. The second model is Reflexive Exponential Decay (RED), in which we postulate that the predictive rate of modification in a file is incremented by any modification to that file and
decays exponentially. The result of a new bug occurring to that file is a new exponential effect added to the first one. The third model is called RED Co-Changes (REDCC). With each modification to a
given file, the REDCC model not only increments its predictive rate, but also increments the rate for other files that are related to the given file through previous co-changes.
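As a rough sketch of the RED idea (the half-life constant and function below are illustrative assumptions, not parameters taken from the thesis):

```python
import math

def red_rate(event_times, now, half_life=30.0):
    # Reflexive Exponential Decay: every past modification adds an
    # exponentially decaying contribution to the file's predictive rate.
    # The 30-day half-life is an assumed, illustrative constant.
    decay = math.log(2) / half_life
    return sum(math.exp(-decay * (now - t)) for t in event_times if t <= now)

# One change 30 days ago contributes ~0.5; a change made right now adds 1.0.
print(red_rate([0.0], now=30.0))        # ~0.5
print(red_rate([0.0, 30.0], now=30.0))  # ~1.5
```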
We then present an information-theoretic approach to evaluate the performance of different prediction models. In this approach, the closeness of model distribution to the actual unknown probability
distribution of the system is measured using cross entropy. We evaluate our prediction models empirically using the proposed information-theoretic approach for six large open source systems. Based on
this evaluation, we observe that of our three prediction models, the REDCC model predicts the distribution that is closest to the actual distribution for all the studied systems.
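The cross-entropy comparison can be illustrated with a toy example (the file names and probabilities are made up; the thesis's estimator over real event streams is more involved):

```python
import math

def cross_entropy(actual, model, eps=1e-12):
    # H(p, q) = -sum_x p(x) log q(x); lower means the model distribution q
    # is closer to the (empirical) distribution p. eps guards zero mass.
    return -sum(p * math.log(max(model.get(f, 0.0), eps))
                for f, p in actual.items() if p > 0)

# Hypothetical per-file change probabilities.
empirical = {"a.c": 0.5, "b.c": 0.3, "c.c": 0.2}
uniform = {f: 1 / 3 for f in empirical}

# A model matching the empirical distribution scores lower (better) than a
# uniform one, by Gibbs' inequality: H(p, q) >= H(p, p).
print(cross_entropy(empirical, empirical) < cross_entropy(empirical, uniform))  # True
```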
Bibliographical Information:
School:University of Waterloo
School Location:Canada - Ontario
Source Type:Master's Thesis
Keywords:computer science change prediction models software repositories information theory evaluation approach
Date of Publication:01/01/2006 | {"url":"http://www.openthesis.org/documents/Information-Theoretic-Evaluation-Change-Prediction-162847.html","timestamp":"2014-04-17T15:29:50Z","content_type":null,"content_length":"10598","record_id":"<urn:uuid:953f7b43-2988-4f76-8f81-908e4f986fc0>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00356-ip-10-147-4-33.ec2.internal.warc.gz"} |
[SciPy-User] How to estimate error in polynomial coefficients from scipy.polyfit?
Charles R Harris charlesr.harris@gmail....
Thu Mar 25 15:52:32 CDT 2010
On Thu, Mar 25, 2010 at 2:32 PM, Jeremy Conlin <jlconlin@gmail.com> wrote:
> I am using scipy.polyfit to fit a curve to my data. Members of this
> list have been integral in my understanding of how to use this
> function. Now I would like to know how I can get the uncertainties
> (standard deviations) of polynomial coefficients from the returned
> values from scipy.polyfit. If I understand correctly, the residuals
> are sometimes called the R^2 error, right? That gives an estimate of
> how well we have fit the data. I don't know how to use rank or any of
> the other returned values to get the uncertainties.
> Can someone please help?
You want the covariance of the coefficients, var * (A.T * A)^-1, where var is
the residual variance and A is the design matrix. I'd have to see what the scipy fit returns to tell you
more. In any case, from that you can plot curves at +/- sigma to show the
error bounds on the result. I can be more explicit if you want.
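For reference, current NumPy can return the coefficient covariance directly via `np.polyfit(..., cov=True)`; the square roots of the diagonal are the 1-sigma coefficient uncertainties. A sketch on synthetic data (the model and noise level below are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 + 0.5 * x - 0.3 * x**2 + rng.normal(0, 0.5, x.size)  # synthetic data

# cov=True returns the covariance matrix of the coefficients; the square
# roots of its diagonal entries are the standard deviations asked about.
coeffs, cov = np.polyfit(x, y, deg=2, cov=True)
sigmas = np.sqrt(np.diag(cov))
for c, s in zip(coeffs, sigmas):
    print(f"{c:+.3f} +/- {s:.3f}")
```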
More information about the SciPy-User mailing list | {"url":"http://mail.scipy.org/pipermail/scipy-user/2010-March/024838.html","timestamp":"2014-04-17T19:44:08Z","content_type":null,"content_length":"4316","record_id":"<urn:uuid:9ea412d8-ecde-4181-9d38-b786bd52ff3e>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00322-ip-10-147-4-33.ec2.internal.warc.gz"} |
Simplifying Linear Equations
June 2nd 2008, 08:47 AM
Simplifying Linear Equations
This is some steps into the equation
But I am having difficulty simplifying this
$x- \frac{3}{2}x = -8\frac{1}{2}$
The author shows the above simplified as
$-\frac{1}{2}x = -8 \frac{1}{2}$
Can you just assume x = 1 and do $1 - 3/2$, thus equaling $-\frac{1}{2}x$?
June 2nd 2008, 08:57 AM
No. It could be $-\frac{1}{2}x^5$ as well. Well, you could only use this method in a test..
$x- \frac{3}{2}x = -8\frac{1}{2}$
Factor the x on LHS,
$x\cdot \left ( 1 - \frac{3}{2} \right ) = -8\frac{1}{2}$
$x\cdot \left (- \frac{1}{2} \right ) = -8\frac{1}{2}$
$-\frac{1}{2}x = -8\frac{1}{2}$
June 2nd 2008, 09:03 AM
Thanks for clarifying, that makes perfect sense. However, does LHS mean left hand side of the equation?
June 2nd 2008, 09:06 AM
June 3rd 2008, 08:54 AM
Actually one last question I forgot to ask yesterday the problem reads
$x + 8 = \frac{3}{2}x - \frac{1}{2}$
Which leads to
$x = \frac{3}{2}x - 8\frac{1}{2}$
I am not certain how the " $-8\frac{1}{2}$" is created exactly. The RHS reads $\frac{3}{2}x - \frac{1}{2} - 8$ So I assume you can just do (once you subtract 8 from both sides in the first
equation displayed) " $-\frac{1}{2} - 8$" which in a calculator = -8.5 or $-8\frac{1}{2}$
Could someone please clarify, thanks.
June 3rd 2008, 09:19 AM
Actually one last question I forgot to ask yesterday the problem reads
$x + 8 = \frac{3}{2}x - \frac{1}{2}$
Which leads to
$x = \frac{3}{2}x - 8\frac{1}{2}$
I am not certain how the " $-8\frac{1}{2}$" is created exactly. The RHS reads $\frac{3}{2}x - \frac{1}{2} - 8$ So I assume you can just do (once you subtract 8 from both sides in the first
equation displayed) " $-\frac{1}{2} - 8$" which in a calculator = -8.5 or $-8\frac{1}{2}$
Could someone please clarify, thanks.
Your reasoning is correct.
I find the whole $8 \frac{1}{2}$ notation hazardous to begin with. Unless you are told otherwise by your instructor I'd advise you to write it as an "improper fraction:"
$8 \frac{1}{2} = 8 + \frac{1}{2} = \frac{17}{2}$
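This arithmetic can also be checked mechanically, for example with Python's exact fractions module (just an illustrative aside):

```python
from fractions import Fraction

# 8 1/2 as an improper fraction: 8 + 1/2 = 17/2
half = Fraction(1, 2)
print(8 + half)                 # 17/2

# x + 8 = (3/2)x - 1/2  =>  x - (3/2)x = -1/2 - 8  =>  (-1/2)x = -17/2
x = Fraction(-17, 2) / Fraction(-1, 2)
print(x)                        # 17
print(x + 8 == Fraction(3, 2) * x - half)  # True, the solution checks out
```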
June 3rd 2008, 10:15 AM
Thanks for clarifying! Using improper fractions intuitively makes more sense so I appreciate the suggestion.
And as far as what my instructor does, I am teaching myself algebra all over again to eventually teach myself pre-cal, calculus I, and calculus II before I ever head into the class room. | {"url":"http://mathhelpforum.com/algebra/40379-simplifying-linear-equations-print.html","timestamp":"2014-04-16T05:20:00Z","content_type":null,"content_length":"13117","record_id":"<urn:uuid:fe32cd34-8180-4792-8d76-913893af7d31>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00606-ip-10-147-4-33.ec2.internal.warc.gz"} |
[FOM] 466: RETURN TO 463/Dominators
[FOM] 466: RETURN TO 463/Dominators
Harvey Friedman friedman at math.ohio-state.edu
Sun Jun 12 23:09:42 EDT 2011
The last comprehensive statement of the Pi01 independent statements is
#463, http://www.cs.nyu.edu/pipermail/fom/2011-May/015464.html This
features MAXIMAL CLIQUES, and the MAXIMAL CLIQUE SPLIT THEOREM is
implicitly Pi01. That posting is STILL OPERATIVE - but with caution to
the reader in the preliminary remarks there.
Postings #464, #465 made a huge jump into local facts about order
invariant graphs - with no cliques.
I am now confident that, UNFORTUNATELY, these statements are NOT
independent of ZFC.
This overly ambitious approach was generated by my seeking a really
good finite form of the MAXIMAL CLIQUE SPLIT THEOREM. I now want to
get back on track with this.
My working assumption is that I will come up with a manuscript for the
claims in #463 concerning the Maximal Clique Split Theorem.
We now present new explicitly Pi01 independent statements. We begin by
presenting a DUAL form of the Maximal Clique Split Theorem, called the
Independent Dominator Split Theorem. Here Cliques correspond to
Independent sets, and Maximality corresponds to Domination. However,
the main feature of Maximal Cliques is Cliques (and not maximality),
whereas the main feature of Independent Dominators is Dominators (and
not independence). This is a subtle kind of change of intuitive
perspective - even though formally we have a trivial kind of duality.
In fact, this change of intuitive perspective has driven us to the
INDEPENDENT DOMINATOR SPLIT THEOREM, but what is the advantage??
The advantage lies in the motivation and purpose of DOMINATION. For
the FINITE DOMINATOR SPLIT THEOREMS we need a quantitative form of
(relative) domination. This seems reasonably well motivated in light
of the motivation and purpose of DOMINATION.
Domination in Graphs is now a huge theoretical and applied topic. See
[1] Haynes, Hedetniemi, Slater, Fundamentals of Domination in Graphs,
where it is stated that there are over 1200 research papers on the
topic. That was in 1998, and presumably there are considerably more
papers now.
NOTE: Many many postings ago, we were using kernels/dominators in
DIRECTED GRAPHS, which is quite a different topic. They don't always
exist. [1] states that there are about 10% as many papers on the
directed graph situation.
In [1], there is a considerable discussion of motivation and
applications, in contexts including Facilities Location, Scheduling,
School Bus Routing, Computer Communication Networks, Radio Stations
(placement), Social Network Theory, Game Theory, and Chess.
E.g., we quote from Haynes et al:
"Suppose that each vertex in a graph represents a site where customers
are located, and we can choose one or more sites at which to locate
facilities to serve these customers optimally."
Such motivations for Domination discussed there provide sound general
motivation for the notion of Relative Domination and Relative c-
Domination that we use below for the Independent Dominator Split
NOTE: For c-Domination, we need a norm on the vertices of a graph. We
use a very natural norm on rational [-1,1]^k. That particular norm may
not be tailor made to any particular purpose of DOMINATION. (There is
the idea of this norm being related to accuracy of measurement, which
MIGHT mesh well with certain purposes of DOMINATION). So it is
compelling to determine the status of the FINITE DOMINATOR SPLIT
THEOREMS as we vary the norm. Fortunately, it appears that we can use
a very general class of norms, and also the mere existence of a
(reasonable) norm is enough to make the statement unprovable in ZFC.
This lack of sensitivity to a particular norm or even kind of norm is
very encouraging.
NOTE: We have not even begun to think about norms on the edges of a
graph. Also, we have not even begun to think about the myriad
fundamental pure and applied purposes of DOMINATION, and how they
might drive the search for necessary uses of large cardinals in the
EPILOGUE: This posting purports to show that explicitly finite forms,
even explicitly Pi01 forms, are under reasonably well motivated
control, through DOMINATION. It now becomes CRITICAL to see that the
so far very stable infinite forms: MAXIMAL CLIQUE SPLIT THEOREM,
INDEPENDENT DOMINATOR SPLIT THEOREM, do in fact have the properties
claimed. E.g., are independent of ZFC. So barring any new excitement,
I will be writing up a complete proof for these two infinite statements.
Harvey M. Friedman
June 12, 2011
1. Graphs, Cliques, Independence, Domination, Order Invariance, Splits.
2. Maximal Clique and Independent Dominator Split Theorems.
3. Normed Graphs, Relative Domination.
4. Finite Dominator Split Theorems.
1. GRAPHS, CLIQUES, INDEPENDENCE, DOMINATION, ORDER INVARIANCE, SPLITS.
A (simple) graph is a pair G = (V,E), where V is a set and E is an
irreflexive, symmetric relation on V.
We say that x,y are adjacent if and only if x E y.
A clique is a set of vertices, where any two distinct elements are adjacent.
An independent set is a set of vertices, where no two elements are adjacent.
A dominator is a set S of vertices, where every vertex is equal to or
adjacent to an element of S.
[1] focuses on minimal dominators.
THEOREM 1.1. Every graph has a maximal clique. Every graph has a
maximal independent set. The maximal independent sets are the same as
the independent dominators, and the minimal dominators.
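For concreteness, here is how the definitions read on a small finite graph (illustrative code only; the 4-cycle and the sets chosen are arbitrary):

```python
from itertools import combinations

def is_clique(E, S):
    return all((x, y) in E for x, y in combinations(S, 2))

def is_independent(E, S):
    return all((x, y) not in E for x, y in combinations(S, 2))

def is_dominator(V, E, S):
    # Every vertex is in S or adjacent to an element of S.
    return all(v in S or any((v, u) in E for u in S) for v in V)

# The 4-cycle 0-1-2-3-0, with E stored symmetrically (irreflexive, symmetric).
V = [0, 1, 2, 3]
E = {(0, 1), (1, 0), (1, 2), (2, 1), (2, 3), (3, 2), (3, 0), (0, 3)}

S = [0, 2]  # an independent dominator, i.e. a maximal independent set
print(is_independent(E, S), is_dominator(V, E, S))  # True True
print(is_clique(E, [0, 1]))                         # True
```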
We say that x,y in R^k are order equivalent if and only if for all 1
<= i,j <= k, x_i < x_j iff y_i < y_j.
Let V be a subset of R^k. We say that S containedin V is order
invariant if and only if for all order equivalent x,y in V, x in S iff
y in S.
We say that G is an order invariant graph on V if and only if G =
(V,E), where E is an order invariant subset of V^2.
The strict upper split of S contained in R^k is the subset of R^k
resulting from dividing all coordinates of elements of S by 2, and
then removing all vectors in which 0 is a coordinate.
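Both of these definitions, order equivalence and the strict upper split, are directly computable; a small illustrative sketch (the test vectors are arbitrary):

```python
from fractions import Fraction

def order_equivalent(x, y):
    k = len(x)
    return all((x[i] < x[j]) == (y[i] < y[j])
               for i in range(k) for j in range(k))

def strict_upper_split(S):
    # Halve every coordinate, then drop vectors having a 0 coordinate.
    halved = (tuple(c / 2 for c in v) for v in S)
    return {v for v in halved if 0 not in v}

print(order_equivalent((1, 2, 2), (5, 7, 7)))  # True
print(order_equivalent((1, 2), (2, 1)))        # False

S = {(Fraction(1), Fraction(-1, 2)), (Fraction(0), Fraction(1))}
# (0, 1) halves to (0, 1/2), which has a 0 coordinate and is removed.
print(strict_upper_split(S) == {(Fraction(1, 2), Fraction(-1, 4))})  # True
```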
We use Q[-1,1] for the interval [-1,1] in the rationals.
2. MAXIMAL CLIQUE AND INDEPENDENT DOMINATOR SPLIT THEOREMS.
MAXIMAL CLIQUE SPLIT THEOREM. Every order invariant graph on Q[-1,1]^k
has a maximal clique containing its strict upper split.
INDEPENDENT DOMINATOR SPLIT THEOREM. Every order invariant graph on
Q[-1,1]^k has an independent dominator containing its strict upper split.
THEOREM 2.1. The Maximal Clique and Independent Dominator Split
Theorems are provable in SMAH+ but not in any consistent fragment of
SMAH that proves RCA_0. The Maximal Clique and Independent Dominator
Split Theorems are provably equivalent to Con(SMAH) over RCA_0.
3. NORMED GRAPHS, RELATIVE c-DOMINATION.
We now introduce relative domination. Let A,B be sets of vertices in a
graph. We say that A dominates B, or A is a dominator of B, if and
only if every element of B is in A or adjacent to an element of A.
Given the motivation for domination cited from [1], it seems
reasonable to place a "bound" on the element of A in terms of the
given element of B. For this purpose, we introduce vertex normed graphs.
A vertex normed graph is a triple G = (V,E,h), where (V,E) is a graph,
and h:V into N.
Let c be a positive real constant. We say that A is a c-dominator of B
if and only if every x in B is in A or adjacent to some y in A with
h(y) <= ch(x).
We use the following natural norm on Q[-1,1]^k. h(x) is the least
positive integer n such that every coordinate of x can be written with
denominator of magnitude at most n.
This makes every graph on Q[-1,1]^k into a vertex normed graph.
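Concretely, this norm and relative c-domination can be computed as follows (an illustrative sketch; the two-point example graph is arbitrary):

```python
from fractions import Fraction

def h(x):
    # Least n >= 1 such that every coordinate of x can be written with
    # denominator of magnitude at most n: the max lowest-terms denominator.
    return max(c.denominator for c in x)

def c_dominates(E, A, B, c):
    # A is a c-dominator of B: every x in B is in A, or adjacent to
    # some y in A with h(y) <= c * h(x).
    return all(x in A or any((x, y) in E and h(y) <= c * h(x) for y in A)
               for x in B)

u = (Fraction(1, 2), Fraction(-1, 3))
v = (Fraction(1, 4), Fraction(0))
print(h(u), h(v))  # 3 4

# With u adjacent to v, {v} 2-dominates {u} since h(v) = 4 <= 2 * h(u) = 6.
E = {(u, v), (v, u)}
print(c_dominates(E, A={v}, B={u}, c=2))  # True
```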
4. FINITE DOMINATOR SPLIT THEOREMS.
FINITE DOMINATOR SPLIT THEOREM. In every order invariant graph on
Q[-1,1]^k, every finite set of vertices has an independent 8k-
dominator containing the strict upper split of their intersection.
It is clear that the Finite Dominator Split Theorem is Pi02, using the
decision procedure for (Q,<,doubling). The existential quantifier
corresponds to the size of the dominator.
But clearly the Finite Dominator Split Theorem is equivalent to the
FINITE DOMINATOR SPLIT THEOREM (1). In every order invariant graph on
Q[-1,1]^k, every finite set of vertices has an independent 8k-
dominator with at most twice as many elements, containing the strict
upper split of their intersection.
It is clear that the Finite Dominator Split Theorem (1) is Pi01, using
the decision procedure for (Q,<,doubling).
Here is an explicitly Pi01 form, involving only finite graphs. Write
Q[-1,1]^k|<=r for the set of elements of Q[-1,1]^k of norm at most r.
FINITE DOMINATOR SPLIT THEOREM (2). In every order invariant graph on
Q[-1,1]^k|<=n, Q[-1,1]^k|<=n/(8k)! has an independent 8k-dominator
containing the strict upper split of their intersection.
The quantities here are crude but safe. Assuming this is the form that
is written up, we will see just what is needed.
THEOREM 4.1. All three forms of the Finite Dominator Split Theorem are
provable in SMAH+ but not in any consistent fragment of SMAH that
proves EFA. All three forms of the Finite Dominator Split Theorem are
provably equivalent to Con(SMAH) over EFA. For the first two of the
three forms of the Finite Dominator Split Theorem, we obtain the same
results even if we fix k to be a sufficiently large integer. In fact,
k = 16 suffices, and likely much smaller k like k = 8 (or even lower)
will suffice.
We intend to modify the strict upper split using any given rational
partial piecewise linear function from Q into Q, for all of the
Theorems presented, and determine the truth or falsity using small
large cardinals. This is my Templating methodology.
I use http://www.math.ohio-state.edu/~friedman/ for downloadable
manuscripts. This is the 466th in a series of self contained numbered
postings to FOM covering a wide range of topics in f.o.m. The list of
previous numbered postings #1-449 can be found
in the FOM archives at http://www.cs.nyu.edu/pipermail/fom/2010-December/015186.html
450: Maximal Sets and Large Cardinals II 12/6/10 12:48PM
451: Rational Graphs and Large Cardinals I 12/18/10 10:56PM
452: Rational Graphs and Large Cardinals II 1/9/11 1:36AM
453: Rational Graphs and Large Cardinals III 1/20/11 2:33AM
454: Three Milestones in Incompleteness 2/7/11 12:05AM
455: The Quantifier "most" 2/22/11 4:47PM
456: The Quantifiers "majority/minority" 2/23/11 9:51AM
457: Maximal Cliques and Large Cardinals 5/3/11 3:40AM
458: Sequential Constructions for Large Cardinals 5/5/11 10:37AM
459: Greedy CLique Constructions in the Integers 5/8/11 1:18PM
460: Greedy Clique Constructions Simplified 5/8/11 7:39PM
461: Reflections on Vienna Meeting 5/12/11 10:41AM
462: Improvements/Pi01 Independence 5/14/11 11:53AM
463: Pi01 independence/comprehensive 5/21/11 11:31PM
464: Order Invariant Split Theorem 5/30/11 11:43AM
465: Patterns in Order Invariant Graphs 6/4/11 5:51PM
Harvey Friedman
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/2011-June/015567.html","timestamp":"2014-04-20T11:58:36Z","content_type":null,"content_length":"14242","record_id":"<urn:uuid:12b6fa8c-e5e2-4f32-83fb-ae83597a2116>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00159-ip-10-147-4-33.ec2.internal.warc.gz"} |
Number of results: 24,616
initial speed: 2 m/s a: -9.8 m/s^2 s: -2.5 m final speed: ? Vf^2 - Vi^2 = 2as v^2 - 2^2 = 2•-9.8•-2.5 v^2-4 = 49 v^2 = 53 final speed is approx. 7.28
Sunday, November 18, 2012 at 9:58pm by Anonymous
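The work above is the constant-acceleration identity v_f^2 = v_i^2 + 2as; a quick check:

```python
import math

# v_f^2 = v_i^2 + 2*a*s (constant acceleration)
def final_speed(v_i, a, s):
    return math.sqrt(v_i**2 + 2 * a * s)

print(round(final_speed(2.0, -9.8, -2.5), 2))  # 7.28
```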
Put the 36 mph into m/s units. acceleration = (final speed - original speed)/time; final speed is zero.
Sunday, November 17, 2013 at 4:10pm by bobpursley
When would instantaneous speed be the same as average speed? would it be when the speed is constant? when the position changes linearly with time or would it be in the limit when the time interval
goes to zero? I thought it would be when the speed is constant, but i'm not sure...
Monday, September 5, 2011 at 6:45pm by Katie
computers again
what is the difference between motherboard speed and the processor speed for computers such as the 80386DX Your use of the term "motherboard" speed is outdated. I assume you mean motherboard bus
speed. Processor speed can be higher than that, and normally, it is a fixed ...
Thursday, September 7, 2006 at 2:21am by Anonymous
Downstream, he is going with current, 2mph. Crosswise, he is going 1.8mph. Draw the diagram. I see TanTheta=1.8/2.0 His actual speed? speed=sqrt(2^2+1.8^2) or speed=2.0/sinTheta
Monday, May 10, 2010 at 12:54pm by bobpursley
Set a frictionless incline at various measured angles then determine the acceleration of a block by measuring the time it takes to increase from an initial speed to a final speed.If the angle is 20
and the initial speed is 0.68m/s and final speed is 2.43m/s and the time is 0....
Sunday, November 21, 2010 at 4:00pm by Trista
w=speed of wind a=speed of airplane Use speed*time = distance (a-w)*(10/3)=960 ...(1) (a+2)*(5/2)=960 ...(2) Solve for a and w.
Wednesday, September 22, 2010 at 9:47pm by MathMate
College Physics
a horizontal speed of 15.8 m/s and a vertical speed of 10.0 m/s upward. What is the vertical speed 1.12 s later?
Sunday, September 26, 2010 at 3:22pm by J
if boat speed = b, and river speed = r, then since speed = distance/time, b+r = 12/1.5 = 8 (downstream) b-r = 12/6 = 2 (upstream) b=5,r=3
Sunday, March 10, 2013 at 10:13pm by Steve
They are not asking you to graph the collision. All you need to do is apply a momentum conservation formula to compute a final speed. The final speed is half the initial speed of the moving car.
Monday, January 25, 2010 at 1:47am by drwls
Please I really need help with this!!! Question: Suppose a treadmill has an average acceleration of 4.7x10^-3 m/s^2. a) How much does its speed change after 5 min? b) If the treadmill's speed is 1.7 m/s,
what will its final speed be?
Tuesday, September 13, 2011 at 8:15pm by Leia
2m is not a speed. You must mean that he lifts the mass 2m at a constant speed. The speed is not needed. Work done is M*g*H H = 2.0 m M = 2.0 kg g = the acceleration of gravity. Use the Earth value.
Sunday, July 15, 2012 at 3:33am by drwls
easy answer. You are looking for the end speed, but to get there, you start with distance on each side. So XXXXN*2.5 is the distance (2.5 is the time, distance=speed*time). It could have been done
your way.. 1000N/2.5 (speed)= speed wind+ speed plane My way was.. 1000N (...
Wednesday, October 26, 2011 at 11:30am by bobpursley
tugboat goes 160 miles upstream in 10 hours. The return trip downstream takes 5 hours. Find the speed of the tugboat without a current and the speed of the current. speed of tugboat is speed of the
current is
Friday, March 30, 2012 at 10:29pm by scott
let Vc be the current speed and Vb be the boat speed in still water. Here is what you know: Vb + Vc = 10 (downstream speed) Vb - Vc = 6 (upstream speed) Adding the two equations gives you 2 Vb = 16
Solve for Vb and then Vc
Wednesday, January 21, 2009 at 10:58pm by drwls
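The elimination drwls describes can be written out directly (a quick sketch):

```python
# Downstream: b + c = 10 mph; upstream: b - c = 6 mph.
down, up = 10, 6
b = (down + up) / 2  # boat speed in still water: adding eliminates c
c = (down - up) / 2  # current speed: subtracting eliminates b
print(b, c)  # 8.0 2.0
```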
A baseball is hit with a horizontal speed of 16 m/s and a vertical speed of 16 m/s upward. What are these speeds 5 s later? horizontal speed ?? m/s vertical speed ?? m/s
Saturday, March 8, 2008 at 12:37am by jena
Car A is traveling at a constant speed v_A = 130 km/h at a location where the speed limit is 100 km/h. The police officer in car P observes this speed via radar. At the moment when A passes P, the police car begins to accelerate at the constant rate of 6 m/s^2 until a speed...
Sunday, September 9, 2007 at 12:58pm by Anonymous
You can always determine wave speed by knowing ship speed and direction, wave direction, and time it takes for wave to travel ship length (relative speed). What you have is wave speed relative to
ship speed, so you do a vector addition. Wind speed does not matter, as long as ...
Friday, November 23, 2007 at 3:34am by bobpursley
I do not know. Was this question in your text? It depends how squishy you are. F = m a = rate of change of momentum = m * (change in speed / time to change speed). But time to change speed from zero to the speed of the truck depends on how much the truck compresses you on contact. If ...
Tuesday, August 9, 2011 at 1:15pm by Damon
Think about it. The speed of the gears' boundaries must match. Since A has a larger radius, it turns more slowly in rpm. So, if A is 6Hz ccw, B is 12Hz cw. The linear speed must match, or the teeth will strip! A's speed is 2π(10)*6 = 120π cm/s B's speed is the same: 2...
Monday, October 3, 2011 at 1:21am by Steve
A object of mass 3.00 kg is subject to a force Fx that varies with position as in the figure below. Find the work done by the force on the object as it moves as follows: (a) from x = 0 to x = 4.00 m
(b) from x...
Tuesday, October 20, 2009 at 8:08pm by fdtc
The fuel efficiency, E, in litres per 100 km, for a car driven at speed v, in km/h, is E(v) = 1600v/(v^2+6400). a) Determine the legal speed that will maximize the fuel efficiency if the speed
limit is 50 km/h. The answer is 50 km/h but my teacher said it can't be that. Any...
Sunday, March 4, 2012 at 3:19pm by Jake
An earth satellite in an elliptical orbit moves with a) slower speed as it ages b) none of the above c) constant speed d) faster speed as it ages. Just need to know if my answer is right or wrong.
Thanks! :) d
Tuesday, August 17, 2010 at 4:12pm by ant
the angular speed is 2π/27 radians/second (all the other pieces of data are irrelevant for the angular speed. If you wanted the horse's linear speed then they would enter the picture.)
Thursday, April 28, 2011 at 1:46am by Reiny
angular speed=angle/time convert 220 degrees to radians.. Angle=220/360 ( 2PI) solve for angular speed. linear speed=angular speed* (3959+220)
Sunday, October 21, 2012 at 5:22pm by bobpursley
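bobpursley's recipe above, sketched in code; the elapsed time is not shown in this snippet, so `t` below is a placeholder (the 3959 + 220 term is the orbit radius in miles: Earth's radius plus altitude):

```python
import math

angle_rad = (220 / 360) * 2 * math.pi  # 220 degrees converted to radians
t = 24.0                                # hours; placeholder, not given in the snippet
omega = angle_rad / t                   # angular speed, rad/h
v = omega * (3959 + 220)                # linear speed, mi/h (orbit radius in miles)
print(omega, v)
```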
After 3 seconds she will have increased her speed by 3.9 m/s, attaining 3.9 + 2.2 = 6.1 m/s speed IF she was skating in a straight line. Since 6.1 m/s IS her final speed, she was skating along the
initial velocity direction The answer is zero degrees.
Thursday, January 15, 2009 at 8:25pm by drwls
The Danville Express travels 280 mi at a certain speed. If the speed had been increased by 5 mph, the trip could have been made in 1 hour less time. Find the actual speed.
Tuesday, February 15, 2011 at 7:29pm by Nicole
The train slows down uniformly from a speed of 50 m/s to a speed of 10 m/s in a time of 20 sec. During the next 30 sec, it accelerates uniformly to a speed of u metres/sec. Calculate the retardation from t = 0 to t = 20.
Friday, May 3, 2013 at 10:21am by Meggy lep
Physics (Momentum)
Can anyone please help me with this question? I don't understand why the speed is v m/s. A sport utility vehicle is travelling at a speed of v m/s. If its momentum has a magnitude of 32,800 kg·m/s,
what is the SUV's mass? Thanks I know that I have to use the formula p= m*v and...
Thursday, May 2, 2013 at 10:26pm by Kathy
An automobile tire has a radius of 0.335 m, and its center moves forward with a linear speed of v = 24.1 m/s. (a) Determine the angular speed of the wheel. (b) Relative to the axle, what is the
tangential speed of a point located 0.243 m from the axle?
Tuesday, July 16, 2013 at 3:01pm by katie
A roulette wheel with a 1.0m radius reaches a maximum angular speed of 18 rad/s before it stops 35 revolutions ( 220 rad ) after attaining the maximum speed. How long did it take the wheel to stop?
Unless you mean the 35 revs occur after max speed, it cannot be solved. The ...
Tuesday, December 19, 2006 at 4:05pm by Robert
Starting from rest, a boat increases its speed to 4.12 m/s with constant acceleration. (a) What is the boat's average speed? (b) If it takes the boat 4.77 s to reach this speed, how far has it
Thursday, November 17, 2011 at 5:35pm by taylor
mach = speed / speed of sound. What is the speed of sound at 1 C? v_sound = (if I remember correctly) 331(1 + 0.6*T in C); check that formula. Get the speed of sound, then multiply it by 2.5.
Wednesday, April 3, 2013 at 10:42pm by bobpursley
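The standard approximation the answer above is reaching for is v = 331 + 0.6*T m/s, with T in degrees C; assuming that form, Mach 2.5 at 1 C works out as:

```python
T = 1.0                   # air temperature, deg C
v_sound = 331 + 0.6 * T   # m/s, standard linear approximation for speed of sound
v_mach25 = 2.5 * v_sound  # Mach 2.5 in m/s
print(v_sound, v_mach25)  # 331.6 829.0 (approximately)
```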
speed of light
The meter is defined as the distance light travels in 1/(2.99792458×10^8) seconds. This definition requires that the speed of light be accurately known. This speed can be measured with high precision
and is defined to be exactly 2.99792458×10^8 meters/second. Thus the speed of ...
Monday, August 21, 2006 at 9:12pm by Anonymous
Starting from rest, a boat increases its speed to 4.92 m/s with constant acceleration. (a) What is the boat's average speed? (b) If it takes the boat 5.07 s to reach this speed, how far has it
Wednesday, August 29, 2012 at 9:16pm by stoic-rider77
Conservation of momentum requires that: (alpha particle mass) x (alpha particle speed) = (Th-234 mass) x (Th-234 speed). The speeds are in opposite directions. Therefore (Th-234 speed) = (alpha speed)*(4/234)
Saturday, January 16, 2010 at 3:57pm by drwls
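drwls's recoil relation can be sketched numerically; the alpha speed is not given in this snippet, so `v_alpha` below is a placeholder:

```python
m_alpha = 4.0    # u, alpha particle mass number
m_Th = 234.0     # u, thorium-234 mass number
v_alpha = 1.0e7  # m/s; placeholder, not given in the snippet

# Momentum conservation for recoil: m_alpha * v_alpha = m_Th * v_Th
v_Th = v_alpha * m_alpha / m_Th
print(v_Th)
```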
A cart starts from position 4 (which is 20m above ground) in the figure below with a velocity of 11 m/s to the left. Find the speed with which the cart reaches positions 1, 2, and 3. Neglect
friction. speed at position 1 m/s? speed at position 2 m/s? speed at position 3 m/s? ...
Monday, February 27, 2012 at 12:48am by Jacky
A Thomson's gazelle can reach a speed of 13 m/s in 3.0 s. A lion can reach a speed of 9.5 m/s in 1.0 s. A trout can reach a speed of 2.8 m/s in 0.12 s. Which animal has the greatest acceleration?
Thursday, October 28, 2010 at 3:56pm by Jay
The braking distance of a car is directly proportional to the square of its speed. When the speed is p metres per second, the braking distance is 6 m. When the speed is increased by 300%, find (a) an
expression for speed of the car (b) the braking distance (c) the % increase in...
Friday, October 21, 2011 at 6:48am by StuartKess
Your diameter and speed require dimensions. Numbers are not enough. Your question can be answered by using the continuity and Bernoulli equations: speed * area = constant; P + (1/2)*(density)*(speed)^2 = constant. Show your work for further assistance, if needed.
Sunday, April 24, 2011 at 11:19pm by drwls
A race car enters a flat 200 m radius curve at a speed of 20 m/s while increasing its speed at a constant 2 m/s^2. If the coefficient of static friction is 0.700, what will the speed of the car be when
the car begins to slide?
Thursday, October 13, 2011 at 5:54pm by holt
Well, I agree because it says changes and the cat changed direction twice, once to turn around and once to go from horizontal speed to vertical speed, thereby changing velocity without much change in
Monday, March 3, 2014 at 2:27pm by Damon
An aircraft flew 3hrs with the wind. The return trip took 4hrs against the wind. If the speed of the plane in still air is 180mph more than the speed of the wind, find the wind speed and the speed of
the plane in still air.
Tuesday, September 27, 2011 at 11:34pm by Ingrid
if a stone at the end of a string is whirled in a circle, the inward pull on the stone A) is known as the centrifugal force B) is inversely proportional to the speed of the object C) is inversely
proportional to the square of the speed D) is proportional to the speed E) is ...
Monday, May 28, 2012 at 9:58am by star
Intermediate algebra
Hello, I am kind stuck on this question In the lab you discover a relationship between the temperature of the air and the speed of sound. You find the speed of sound is never lower than 331 m/s and
increases by 60% of the current temperature. a. What is the equation from this ...
Monday, November 28, 2011 at 9:55pm by Jennifer
A wheel of diameter 0.70 meter rolls without slipping. A point at the top of the wheel moves with a tangential speed of 2.0 meters/second. a) At what speed is the axis of the wheel moving? b) What is the
angular speed of the wheel?
Monday, March 24, 2014 at 1:10pm by Anonymous
When a small object is launched from the surface of a fictitious planet with a speed of 52.0 m/s, its final speed when it is very far away from the planet is 32.0 m/s. Use this information to
determine the escape speed of the planet.
Tuesday, April 1, 2014 at 10:04pm by Ying
1) Change the speed to m/s: 61.1 m/s; check that. avg speed = (vf + vi)/2 = 30.5 m/s; distance = avg speed * time. I get considerably more than you. Remember, under constant deceleration to a stop, the average speed is 1/2 the initial speed.
Saturday, January 10, 2009 at 1:33pm by bobpursley
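bobpursley's point can be sketched with assumed numbers (the original problem's initial speed and stopping time are not shown here, so the 220 km/h and 5 s below are placeholders):

```python
v0_kmh = 220.0             # km/h; placeholder initial speed (converts to ~61.1 m/s)
v0 = v0_kmh * 1000 / 3600  # convert to m/s
v_avg = (v0 + 0.0) / 2     # average speed while decelerating uniformly to a stop
t = 5.0                    # s; placeholder stopping time
distance = v_avg * t       # distance covered while stopping
print(v0, v_avg, distance)
```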
10th grade Academic Math
hi can you help me with this problem please, thank you.----------------- If the speed of an experimental car is decreased by 8 km/h, it completes a 140 km trip in 2 hours more than it would have
traveling at its usual speed. Determine its usual speed.
Monday, June 7, 2010 at 6:41pm by Neil
They have provided more information than you need to answer the question. That is done to give you practice in knowing what to use and what to ignore. The average speed is 2/1.5 = 4/3 m/s. The
final speed is twice that, or 8/3 m/s The acceleration is the final speed divided...
Wednesday, September 15, 2010 at 10:57pm by drwls
Thank you. I don't know if this one is right- At the speed of 2.5m/s, how many seconnds will it take the kayak to run a 4,500 m course? what is the speed of the kayak in kilometers per hour? speed=
4,500km/2.5h*60min/1 h =270,000/2.5=108,000km/h Thank you Ms. Sue :-)
Wednesday, March 9, 2011 at 1:20pm by Annie
4 m/s is a very slow pitch. It is only 8.95 mph. A ball thrown at that speed would not make it to the batter's box without rolling there. Are you sure you copied the problem correctly? A good pitcher
throws fast balls over 90 mph. Average speed while throwing = 2 m/s. Time = ...
Thursday, June 23, 2011 at 1:33am by drwls
Math Ms. Sue please help
speed of plane in still air --- x mph speed of wind ------------ y mph so going with the wind, speed = x+y against the wind, speed = x-y x+y = 158 x-y = 112 add them 2x = 270 x = 135 then 135 + y =
158 y = 23 speed of wind is 23 mph, speed of plane in still air = 135 mph
Thursday, February 28, 2013 at 9:11pm by Reiny
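Reiny's sum-and-difference elimination can be verified in a couple of lines:

```python
with_wind = 158.0     # mph, speed with the wind (x + y)
against_wind = 112.0  # mph, speed against the wind (x - y)

# Adding the equations: 2x = 158 + 112 = 270.
plane = (with_wind + against_wind) / 2  # still-air speed x
wind = with_wind - plane                # wind speed y
print(plane, wind)  # 135.0 23.0
```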
change of speed=(final speed-initial speed) = (14+2.3*(12.2-9) - 14)=2.3*3.2 m/s
Tuesday, September 6, 2011 at 9:36am by bobpursley
During the first part of a trip, a canoeist travels 57 miles at a certain speed. The canoeist travels 13 miles on the second trip at a speed 5 mph slower. The total time for the trip is 5 hrs. what
was the speed on each part of the trip? The speed on the first part is? The ...
Sunday, April 22, 2012 at 3:36pm by Stella
Algebra 1
if emily's speed is s and the river current speed is r, 1(s+r) = 6 2(s-r) = 6 s + r = 6 2s - 2r = 6 4s = 18 s = 4.5 r = 1.5 So, Emily's speed is 4.5 mph Ashley's speed is 5.5 mph The river's speed is
1.5 mph Rowing separately, Emily's time is 6/(4.5+1.5) + 6/(4.5-1.5) = 3 ...
Tuesday, May 7, 2013 at 10:49am by Steve
A 9.00 kg object starting from rest falls through a viscous medium and experiences a resistive force F = -bv, where v is the velocity of the object. The object reaches one half its terminal speed in 5.54
s. (a) Determine the terminal speed. (b) At what time is the speed of the ...
Thursday, March 3, 2011 at 7:36am by kia
A .50 kg ball is rolling on a frictionless surface at a speed of .75 m/s. It collides with a second ball with a mass of 1 kg which is also moving in the same direction as the first ball with a speed of
.38 m/s. After the collision the first ball continues at a reduced speed of .35 m...
Saturday, March 26, 2011 at 5:44pm by Mari
A 0.43 kg object connected to a light spring with a spring constant of 21.4 N/m oscillates on a frictionless horizontal surface. The spring is compressed 4.0 cm and released from rest. (a) Determine
the maximum speed of the mass. (b) Determine the speed of the mass when the ...
Sunday, October 21, 2012 at 9:57pm by roro
The average of the two speeds does not give the real average. You would need to take the harmonic mean of the speeds, i.e. mean speed = 2/(1/4.7 + 1/2.8) = 3.5 m/s. You can also find the average speed by assuming
a distance of one (metre). Total distance = 2. Total time = 1/4.7 + 1/2.8 = 0....
Thursday, October 8, 2009 at 12:55pm by MathMate
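MathMate's harmonic-mean point, checked numerically (equal distances covered at two different speeds):

```python
v1, v2 = 4.7, 2.8  # m/s, the two leg speeds from the answer above

# Equal distances at different speeds -> harmonic mean, not arithmetic mean.
mean_speed = 2 / (1 / v1 + 1 / v2)
print(round(mean_speed, 1))  # 3.5
```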
waves and optics
Cherenkov radiation is light emitted by a particle moving through a medium with a speed greater than the speed of light in the medium. (Note: The speed of the particle is not greater than the speed
of light in a vacuum.) Consider a beam of electrons passing through water with ...
Sunday, May 27, 2012 at 6:42pm by jennifer
9 rev/sec = 18 pi radians/sec. That is called the angular speed, and usually has the symbol omega. Here, we often use w. You need to provide a dimension along with the "diameter" number. Multiply half
the diameter (the radius) by the angular speed in radians per second to get ...
Thursday, March 3, 2011 at 8:48am by drwls
PHYSICS 3
A solid disk (mass = 1 kg, R=0.5 m) is rolling across a table with a translational speed of 9 m/s. a.) What must the angular speed of the disk be? rad/s d.) The disk then rolls up a hill of height 2
m, where the ground again levels out. Find the translational and rotational ...
Monday, October 20, 2008 at 4:34am by kelsey
A dentist causes the bit of a high-speed drill to accelerate from an angular speed of 1.20x10^4 rad/s to an angular speed of 3.14x10^4 rad/s. In the process, the bit turns through 1.92x10^4 rad.
Assuming a constant angular acceleration, how long would it take the bit to reach ...
Tuesday, October 23, 2012 at 8:38am by Sam
let the speed of the river current be x mph speed going upstream = 9 -x mph speed going downstream = 9 + x mph 10/(9-x) = 20/(9+x) 90 + 10x = 180 - 20x 30x = 90 x = 3 the speed of the river is 3 mph
Monday, April 14, 2014 at 10:00pm by Reiny
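Reiny's cross-multiplication can be double-checked in code:

```python
# Cross-multiplying 10/(9-x) = 20/(9+x):
#   10*(9+x) = 20*(9-x)  ->  90 + 10x = 180 - 20x  ->  30x = 90
x = 90 / 30
print(x)  # 3.0

# Sanity check: both legs take the same time at this current speed.
t_up = 10 / (9 - x)
t_down = 20 / (9 + x)
```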
A girl riding a bicycle with a speed of 5 m/s towards the north direction observes rain falling vertically down. If she increases her speed to 10 m/s, the rain appears to meet her at 45 degrees to the vertical.
What is the speed of the rain?
Tuesday, August 28, 2012 at 11:15am by manish dagar
Let x = speed of A and x + 80 = speed of B. Speed = distance/time. therefore Distance = speed * time Distance traveled by A + distance traveled by B = 3200 This should help you solve your problem.
Thanks for asking.
Tuesday, October 16, 2007 at 8:47pm by PsyDAG
A hockey puck moving due east with speed 1.00 m/s collides with an identical puck moving 40 degrees south of west with speed 2.00 m/s. After they collide, the first puck is moving south with speed
1.50 m/s. What is the direction and speed of the second puck?
Thursday, September 16, 2010 at 4:58pm by Jess
speed of boat in still water ---- x km/h speed of current ----- 2 km/h speed against current = x-2 speed with the current = x+2 time upriver = 12/(x-2) time downriver = 12/(x+2) equation: 12/(x-2) +
12/(x+2) = 2.5 hint: multiply both sides by (x+2)(x-2), simplify and you have ...
Friday, October 15, 2010 at 11:21pm by Reiny
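Following Reiny's hint, multiplying through by (x+2)(x-2) leaves a quadratic; a quick numeric check (a sketch, not part of the original answer):

```python
import math

# 12/(x-2) + 12/(x+2) = 2.5, multiplied through by (x-2)(x+2):
#   24x = 2.5*(x^2 - 4)  ->  2.5x^2 - 24x - 10 = 0
a, b, c = 2.5, -24.0, -10.0
x = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)  # take the positive root
print(x)  # 10.0

# Verify against the original equation.
assert abs(12 / (x - 2) + 12 / (x + 2) - 2.5) < 1e-9
```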
1. speed of boat in still water --- x mph speed of current ---- y mph 10(x+y) = 210 ----> x+y = 21 70(x-y) = 210 ----> x-y = 3 add them 2x = 24 x=12 then y = 9 speed of boat = 12mph , speed of
current = 9 mph #2 even easier .... define x and y as above x+y = 158 x-y = ...
Monday, August 27, 2012 at 8:40am by Reiny
Car Speed=100km/h=100000m/3600s=27.8m/s Truck Speed =(50/100) * 27.8=13.9 m/s. V^2 = Vo^2 + 2a*d a = (V^2-Vo^2)/2d a = (0-(27.8-13.9)^2)/400 = -0.483 m/s^2 NOTE: The effective speed of the car is
less than the actual speed, because the truck is moving forward.
Wednesday, August 28, 2013 at 6:03pm by Henry
The frequency does not change. There is no waiting line. As many pulses per second travel on both sides of the boundary. The speed does change. wavelength = speed * period = speed /frequency
Friday, March 27, 2009 at 3:10pm by Damon
If a runner travels at a speed of 10 m/s for 5 seconds, stops for a short time then continues on at a speed of 8 m/s for 4 seconds. How long was the runner stopped for if his average speed at the end
of the trip is 4 m/s.
Tuesday, February 22, 2011 at 10:25pm by mikey
The speed of an automobile increases from 11 m/s to a speed of 47 m/s over a time period of 9.0 seconds. What is the average rate at which the speed of this car is changing during this time period?
Friday, September 2, 2011 at 11:03am by Anonymous
If a race car completes a 3 mi oval track in 50 s, what is its average speed? Did the car accelerate? 1. Yes, the speed changed. 2. No, the speed didn’t change. 3. Yes, the direction of the motion
Sunday, September 25, 2011 at 10:31am by anon
The CM moves diagonally NE at a speed that is the individual car speed multiplied by sqrt2. The individual car speed is therefore 51*0.707 = 36 km/h
Tuesday, November 8, 2011 at 9:15am by drwls
A boy walks to a school 6 km away at a speed of 2.5 km/h and walks back with a constant speed of 5 km/h. His average speed for the round trip in km/h is?
Saturday, June 25, 2011 at 11:16am by aleeza
A sound wave in a solid has a frequency of 15.0 kHz and a wavelength of 0.333 m. What would be the wave speed, and how much faster is this speed than the speed of sound in air?
Tuesday, March 4, 2014 at 10:45pm by Melanie
What is the distance an object travels per unit of time? It's called the speed Depends on the velocity (speed), if the speed is 10mph, then the object would travel 10 miles (distance) per hour (unit
of time). I'm unsure what your answer is saying, because velocity and speed ...
Monday, September 18, 2006 at 10:17pm by Ginger
A centrifuge takes 100 s to spin up from rest to its final angular speed with constant angular acceleration. If a point located 8.00 cm from the axis of rotation of the centrifuge moves with a speed
of 150 m/s when the centrifuge is at full speed, how many revolutions does the...
Sunday, June 24, 2012 at 7:12pm by Bloom
I come up with 8. Toyota: 2 options: 5-speed or automatic (both with no trunk) Mazda: 2 options: 5-speed w/trunk loaded; 5-speed w/trunk empty Honda: 4 options: 5-speed w/trunk loaded 5-speed w/trunk
empty automatic w/trunk loaded automatic w/trunk empty I lost you when you ...
Wednesday, May 21, 2008 at 12:00am by drwls
math - incomplete
speed increases at a rate of 1.2m/s/? If the speed increases, the value (m/s) increases by 1.2m.s every how often? acceleration is not measured as m/s. It looks to me like you should have written the
acceleration is 1.2m/s^2, and you want to find how long it takes the speed to...
Saturday, August 4, 2012 at 4:09am by Steve
science (motion)
Your movement relative to the bus is the speed of your walking. Relative to the ground, it would be the difference between your speed and the speed of the bus. I hope this helps. Thanks for asking.
Tuesday, April 28, 2009 at 7:18pm by PsyDAG
A car is driven from point A to B at a constant speed of 25.0 m/s along a straight line, and then back from point B to A at a constant speed of 20.0 m/s. The values of the average speed over the
entire trip is?
Tuesday, January 19, 2010 at 6:58pm by Mandy
If a car completes a 3.8 mi oval track in 69.7 s, what is the average speed? Answer in mph. Did the car accelerate? (A. yes, the direction of the motion changed; or B. yes, the speed changed; or C. no, the
speed didn't change)
Tuesday, June 14, 2011 at 12:11am by alexa
Physics - still confused -Please help
The velocity of pulling in the line is V = w * R, which means Linear speed = (Angular speed)*(Radius) = 2.1 m/9.7 s = 0.2165 m/s. Angular speed = 0.2165/0.3 = 0.7216 rad/s
Saturday, February 25, 2012 at 10:36am by drwls
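drwls's two steps, as a quick check (the 0.3 m radius comes from the answer above):

```python
line_pulled = 2.1  # m of line reeled in
t = 9.7            # s over which it was pulled
r = 0.3            # m, reel radius (from the answer above)

v = line_pulled / t  # linear speed of the line, ~0.2165 m/s
omega = v / r        # angular speed of the reel, ~0.7216 rad/s
print(v, omega)
```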
Car A and B are passing each other in different lanes. Car A has an initial speed of 15m/s and is gaining speed at 1.5m/s^2. Car B has an initial speed of 20m/s and is gaining speed at 2.0m/s^2. Find
when and where the cars will pass one another and their speeds as they are ...
Wednesday, February 20, 2008 at 8:02am by Anonymous
A patrol is timing vehicles over a section of road. Each vehicle travels the first half of the distance at a speed of 140 km/h. If the speed limit is 80 km/h, what should be the highest average speed of the car in
the second half of the section, to avoid being fined?
Sunday, October 10, 2010 at 12:09am by liz
a steel ball drops onto a thick steel plate. its speed just before it hits is 10 m/s and it rebounds with a speed of 9 m/s. what percentage of its kinetic energy did it lose during the collision?
what would you expect its speed to be after the second rebound?
Thursday, September 6, 2012 at 7:29pm by Megan
u = boat speed u-3 = upstream speed u+3 = downstream speed distance = speed * time 35 = (u-3) * 5/6 210 = 5u - 15 225 = 5u 45 = u downstream, 35 = 48 * t t = 35/48 t = 0.729 hr = 43min 45 sec
Thursday, October 6, 2011 at 7:52pm by Steve
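Steve's upstream/downstream arithmetic, verified:

```python
current = 3.0  # mph, river current
d = 35.0       # miles each way
t_up = 5 / 6   # hr (50 minutes), implied by the work above

# 35 = (u - 3) * 5/6  ->  u - 3 = 42  ->  u = 45
u = d / t_up + current            # boat speed in still water
t_down = d / (u + current)        # 35/48 hr, about 43 min 45 sec
print(u, t_down)
```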
Physics Please help
I'm still confused. To find angular speed = tangential speed/radius, would the radii cancel out, leaving only the tangential speed? Still confused as to the setup.
Saturday, February 25, 2012 at 8:47am by Nilan
average speed= .35rev/s so in 2.9 seconds, it goes... b) well, when it is half done, 2.9/2 seconds has elapsed. average speed= (.7+.35)/2 rev= avg speed*2.9/2 revs.
Wednesday, November 17, 2010 at 8:47pm by bobpursley
An oscillating pendulum, or anything else in nature that involves "simple harmonic" (sinusoidal) motion, spends 1/4 of its period going from zero speed to maximum speed, and another 1/4 going from
maximum speed to zero speed again, etc. After four quarter-periods it is back ...
Tuesday, February 19, 2008 at 11:04am by drwls
The figure below shows the speed of a person's body as he does a chin-up. Assume the motion is vertical and the mass of the person's body is 73.4 kg. Determine the force exerted by the chin-up bar on
his body at the following times. t=0s, speed= 0cm per second; t= .5s, speed= ...
Sunday, October 10, 2010 at 12:30am by Amber
With no river flow, the boat crosses the river at 90/4 = 22.5 meters/min. With river flow, its speed relative to the water is the same as above, but it crosses at a speed of 90/5 = 18 m/min, relative to
and perpendicular to the shore. Consider the vector right triangle with sides...
Thursday, December 27, 2012 at 10:48pm by drwls
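drwls's vector right triangle resolves to a Pythagorean calculation:

```python
import math

v_still = 90 / 4  # m/min, crossing speed with no current (22.5)
v_cross = 90 / 5  # m/min, actual speed perpendicular to the shore (18)

# Right triangle of velocities: the boat's speed through the water is the
# hypotenuse; the crossing component is one leg, the river speed the other.
v_river = math.sqrt(v_still**2 - v_cross**2)
print(v_river)  # 13.5
```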