Limit Comparison Test of the series of sin(1/n)
What series do I compare $\sum_{n=1}^\infty \sin \frac{1}{n}$ to when using the Limit Comparison Test?
But that won't be sufficient, because for large $n$, $\sin{\frac{1}{n}} < \frac{1}{n}$. The series $\sum_{n=1}^\infty \frac{1}{n}$ diverges, but the series in the original post consists of smaller terms, so you can't use that as a direct comparison. You can, however, use the fact that $\sin{\frac{1}{n}} > \frac{1}{2n}$ and the fact that $\sum_{n=1}^\infty \frac{1}{2n}$ diverges. In fact, the 2 in the latter series can be any number greater than 1.
@SworD, you did not read the question carefully. The question is about limit comparison, not basic comparison. Some authors, like Gillman, call it ratio comparison. $\lim_{n \to \infty} \frac{\sin\left(\frac{1}{n}\right)}{\frac{1}{n}} = 1$
Yes, the (ordinary) comparison test is: if $0 \le a_n \le b_n$ where $\sum_{n=1}^\infty b_n$ is a convergent series, then $\sum_{n=1}^\infty a_n$ converges; or if $0 \le b_n \le a_n$ where $\sum_{n=1}^\infty b_n$ is a divergent series, then $\sum_{n=1}^\infty a_n$ diverges. For the limit comparison test, if $\lim_{n \rightarrow \infty}\frac{a_n}{b_n}$ is finite and nonzero, then $\sum_{n=1}^\infty a_n$ converges if and only if $\sum_{n=1}^\infty b_n$ converges. So you need to compare $\sin(1/n)$ to $1/n$, since $\lim_{n \rightarrow \infty}\frac{\sin(1/n)}{1/n}=1$ is finite and nonzero. Of course, $\frac{1}{2n}$ or $\frac{35}{87n}$ would also work - the limits would still be finite and nonzero. - Hollywood
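For completeness, a short sketch of the computation behind that limit (my own addition, using only the standard fact $\lim_{x \to 0} \frac{\sin x}{x} = 1$): substituting $x = 1/n$,

$$\lim_{n \to \infty} \frac{\sin(1/n)}{1/n} = \lim_{x \to 0^+} \frac{\sin x}{x} = 1,$$

and since $\sum_{n=1}^\infty \frac{1}{n}$ diverges, the Limit Comparison Test then gives that $\sum_{n=1}^\infty \sin \frac{1}{n}$ diverges as well.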
|
{"url":"http://mathhelpforum.com/calculus/210182-limit-comparison-test-series-sin-1-n.html","timestamp":"2014-04-19T23:10:22Z","content_type":null,"content_length":"49575","record_id":"<urn:uuid:58b4b2f7-a33d-4ea4-aa9b-b294ea39ecc8>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00518-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[racket] a small programming exercise
From: namekuseijin (namekuseijin at gmail.com)
Date: Fri Oct 15 12:26:07 EDT 2010
On Fri, Oct 15, 2010 at 10:05 AM, Phil Bewig <pbewig at gmail.com> wrote:
> Not quite.
> Random numbers are uniformly distributed, so the first digits of a set of
> random numbers should all appear equally.
> Benford's Law most often applies to sets of naturally-occurring numbers that
> are scale-invariant. Consider the lengths of rivers, as Benford did. It
> doesn't matter whether the rivers are measured in miles or kilometers
> (scale-invariant). The first digits of the lengths of the rivers will
> conform to Benford's Law, as long as the set has enough elements.
> Auditors use Benford's Law to find anomalous records. Apply Benford's Law
> to a list of the amounts of all checks written by a company in the last
> year. If you see too many checks that start with the digits 7, 8, or 9,
> there is a clear indication of fraud. The embezzler wrote checks that were
> slightly less than $1000, on the theory that small checks would more likely
> be ignored. But instead of writing checks for $263 or $347 or $519, he
> wrote checks for $838 or $922 to maximize his payout.
> There was an external audit of the voting results in last year's Iranian
> elections. The audit clearly showed fraud, as there were far too many
> precinct tallies that started with the digits 8 or 9.
I just love seeing real useful application of seemingly abstract math
concepts... :)
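For reference, the first-digit distribution that Benford's Law predicts (a standard fact, not stated in the thread) is

$$P(d) = \log_{10}\left(1 + \frac{1}{d}\right), \qquad d \in \{1, \dots, 9\},$$

which gives roughly $30.1\%$ for a leading 1 but only about $5.1\%$ and $4.6\%$ for leading 8s and 9s - hence a surplus of high leading digits stands out in an audit.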
Posted on the users mailing list.
|
{"url":"http://lists.racket-lang.org/users/archive/2010-October/042307.html","timestamp":"2014-04-19T15:35:42Z","content_type":null,"content_length":"6841","record_id":"<urn:uuid:03011038-05e7-416c-8bc0-a829883f21d3>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00448-ip-10-147-4-33.ec2.internal.warc.gz"}
|
{-# LANGUAGE TypeFamilies, GADTs, MultiParamTypeClasses, TypeOperators,
FlexibleContexts, ScopedTypeVariables, ViewPatterns, FlexibleInstances,
QuasiQuotes, UndecidableInstances, Rank2Types #-}
{- |
Module : Data.Yoko.ReflectBase
Copyright : (c) The University of Kansas 2011
License : BSD3
Maintainer : nicolas.frisby@gmail.com
Stability : experimental
Portability : see LANGUAGE pragmas (... GHC)

The basic @yoko@ reflection concepts.
-}
module Data.Yoko.ReflectBase where
import Type.Yoko
import Data.Yoko.Generic
-- | The @Tag@ of a constructor type is a type-level reflection of its
-- constructor name.
type family Tag dc
-- | The @Recurs@ of a constructor type is the type-"Type.Yoko.Sum" of types
-- that occur in this constructor. NB: @Recurs t `isSubsumedBy` Siblings (Range
-- dc)@.
type family Recurs t
-- | The \"Datatype Constructor\" class.
class (DT (Range dc), dc ::: DCU (Range dc), Generic dc) => DC dc where
  -- | The string name of this constructor.
  occName :: [qP|dc|] -> String
  -- | The range of this constructor.
  type Range dc
  -- | The evidence that this constructor inhabits the datatype constructor
  -- universe of its range.
  tag :: DCOf (Range dc) dc; tag = inhabits
  -- | Project this constructor from its range.
  to :: Range dc -> Maybe (RMNI dc)
  to (disband -> NP tg fds) = case eqT tg (tag :: DCOf (Range dc) dc) of
    Just Refl -> Just fds
    _ -> Nothing
  -- | Embed this constructor in its range.
  fr :: RMNI dc -> Range dc
-- | Evidence that @t@ is the range of the constructor type @dc@.
data DCOf t dc where DCOf :: (DC dc, t ~ Range dc) => DCU t dc -> DCOf t dc
instance (DC dc, t ~ Range dc) => dc ::: DCOf t where inhabits = DCOf inhabits
type instance Inhabitants (DCOf t) = Inhabitants (DCU t)
instance Finite (DCU t) => Finite (DCOf t) where toUni (DCOf x) = toUni x
type instance Pred (DCOf t) dc = Elem dc (DCs t)
instance EqT (DCU t) => EqT (DCOf t) where eqT (DCOf x) (DCOf y) = eqT x y
data SiblingOf t s where SiblingOf :: (s ::: Uni (Siblings t), Siblings s ~ Siblings t, DT s) => Uni (Siblings t) s -> SiblingOf t s
instance (s ::: Uni (Siblings t), Siblings s ~ Siblings t, DT s) => s ::: SiblingOf t where inhabits = SiblingOf inhabits
type instance Inhabitants (SiblingOf t) = Siblings t
instance Finite (SiblingOf t) where toUni (SiblingOf x) = x
type instance Pred (SiblingOf t) s = Elem s (Siblings t)
instance EqT (SiblingOf t) where eqT (SiblingOf x) (SiblingOf y) = eqT x y
type AnRMN m u = NP u (RM m :. N)
type Disbanded m t = AnRMN m (DCOf t)
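-- | Pair a constructor's fields with the 'tag' evidence for its range.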
disbanded :: DC dc => RMN m dc -> Disbanded m (Range dc)
disbanded fds = NP tag fds
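-- | Rebuild a value of the datatype from a disbanded constructor, via 'fr'.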
band :: Disbanded IdM t -> t
band (NP (DCOf _) fds) = fr fds
-- | @LeftmostRange@ returns the @Range@ of the leftmost type in a type-sum.
type family LeftmostRange dcs
type instance LeftmostRange (N dc) = Range dc
type instance LeftmostRange (c :+ d) = LeftmostRange c
type DCs t = Inhabitants (DCOf t)
-- | The "DataType" class.
class (Finite (DCU t), EqT (DCU t),
       DCs t ::: All (DCOf t), -- DCs t ::: All (AsRep GistU),
       Siblings t ::: All (SiblingOf t)
      ) => DT t where
  -- | The string name of this datatype's original package.
  packageName :: [qP|t|] -> String
  -- | The string name of this datatype's original module.
  moduleName :: [qP|t|] -> String
  -- | A type-sum of the types in this type's binding group, including
  -- itself. @Siblings t@ ought to be the same for every type @t@ in the
  -- binding group. (It also ought to be equivalent to the transitive closure
  -- of @Recurs . DCs@, by definition.)
  type Siblings t
  -- | The data constructor universe. 'DCOf' is to be preferred as much as
  -- possible.
  data DCU t :: * -> * -- universe of constructor types
  -- | /Disband/ this type into one of its data constructors.
  disband :: t -> Disbanded IdM t
|
{"url":"http://hackage.haskell.org/package/yoko-0.2/docs/src/Data-Yoko-ReflectBase.html","timestamp":"2014-04-24T19:54:40Z","content_type":null,"content_length":"24533","record_id":"<urn:uuid:a85bc7e3-97db-494e-8e76-d9233fbd8aba>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00611-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Table of Contents for AP Economics Videos
Incidentally, there are other good teaching videos out there; consider Jodi Beggs, and also gewalker72.
Three preliminary videos that review linear equations: one linear equation, two linear equations, and recognizing linear equations. You should watch them once at the very start of the course and then
again after video 20.
1. The Entrepreneur
2. Rent vs. Buy
3. Opportunity Cost
4. Comparative Advantage
5. Production Possibility Frontier
6. Arbitrage
7. Factors of Production
8. Substitution
9. Outsourcing, part one
10. Outsourcing, part two
11. Outsourcing and technology
12. Outsourced beyond all recognition
13. Economic Concepts in starting businesses
14. GDP and what it measures
15. Nominal vs. real GDP
16. GDP Calculations
17. Growth Calculations
18. GDP and the Standard of Living
19. Income Equals Spending
20. Simple Keynesian Model
21. Fiscal Policy
22. Fiscal Policy with a tax function
23. Interest Rates, Investment, and Net Exports
24. Aggregate Supply and Demand
25. Long-run Aggregate Supply and Phillips Curve
26. The Great Depression
27. The 1970s and the Phillips Curve Shift
28. The Great Moderation and the Great Recession
29. Monetarism
30. Money
31. Neutrality of Money
32. How the Fed Creates Money
33. Tools of Monetary Policy
34. Liquidity Trap
35. Loanable Funds and the Money Market
36. The Real Interest Rate
37. Supply-side Economics
38. The Real Wage and AS
39. Saving, Investment, and the Trade Balance
40. The Foreign Exchange Market
41. Optimal Currency Areas
42. Market-clearing Price
43. Market Equilibrium
44. Supply and Demand Analysis
45. Substitutes and Complements
46. Elasticity of Demand
47. Income Elasticity of Demand
48. --none
49. Producers' and Consumers' Surplus
50. Price and Quantity Controls
51. Taxes, Consumers' Surplus, and Producers' Surplus
52. Private vs. Social Cost
53. Public Goods and Private Goods
54. Public Finance and Public Choice
55. Profit Maximization
56. Who's Yo' Demand Curve?
57. Total Revenue and Marginal Revenue
58. Average Cost and Marginal Cost
59. Profit Maximization, Demand and Cost
60. Monopolistic Competition
61. Perfect Competition
62. Oligopoly
63. Oligopoly and Game Theory
64. Natural Monopoly and Price Discrimination
65. The Law of Diminishing Returns
66. Fixed and Variable Cost
67. Costs: a numerical example
68. Price Discrimination Explains Everything
69. Consumer Utility
70. More Consumer Trade-offs
71. Factor Demand
72. PSST
73. When Copies are Cheap
Videos with my random opinions:
• Where I Stand vs. policy wonks
• My Jobs Speech
• The Great Debate
• Game Theory: Two Illustrations
|
{"url":"http://arnoldkling.com/econ/ecvidcontents.html","timestamp":"2014-04-16T13:04:09Z","content_type":null,"content_length":"8162","record_id":"<urn:uuid:6c2f0612-9058-46d1-8bec-8675b7099b96>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00552-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum: Math Library - Wkly/Mnthly Challenges
1. Algebra Problem of the Week - Math Forum
Algebra problems from a variety of sources, including textbooks, math contests, NCTM books, and puzzle books, and real-life situations, designed to reflect different levels of difficulty. The
goal is to challenge students with non-routine problems and encourage them to put their solutions into words. Different types of problems are used to reach a diverse group of algebra students.
A Teacher Support page is available for some problems.
2. Geometry Problem of the Week - Annie Fetter, the Math Forum
The Geometry Problem of the Week (PoW) is a regular feature of the Math Forum, providing students an opportunity to answer questions and receive feedback and recognition from Forum staff. A
Teacher Support page is available for some problems.
3. Math Fundamentals Problem of the Week - Math Forum
Math problems for students working with concepts of number, operation, and measurement, as well as introductory geometry, data, and probability. The goal is to challenge students with
non-routine problems and encourage them to put their solutions into words. A Teacher Support page is available for each problem.
4. Monthly Themes - NRICH Maths, Univ. of Cambridge
Past problems from the NRICH Online Maths Club, archived by month. Each problem has a symbol indicating the stage, which tells you how little or how much mathematics you need to know to solve
the problem but is no indication of its difficulty. The five stages correspond to ages 5-7, 7-11, 11-14, 14-16 and 16-18, and indicate that students in the UK normally meet the maths required
during that key stage.
5. Pre-Algebra Problem of the Week - Math Forum
Math problems for students learning algebraic reasoning, identifying and applying patterns, ratio and proportion, and geometric ideas such as similarity. The goal is to challenge students with
non-routine problems and encourage them to put their solutions into words. A Teacher Support page is available for each problem.
6. Sites with Problems Administered by Others - Math Forum
Problems of the week or month: a page of annotated links to weekly/monthly problem challenges and archives hosted at the Math Forum but administered by others, and to problems and archives
elsewhere on the Web, color-coded for the level(s) of the problems posted.
|
{"url":"http://mathforum.org/library/resource_types/pow_pom/?keyid=38675785&start_at=1&num_to_see=50","timestamp":"2014-04-18T16:13:52Z","content_type":null,"content_length":"12163","record_id":"<urn:uuid:e4f9b940-cf58-4bbd-93e3-e71405fb52a9>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00319-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Regular array synthesis using ALPHA
Results 1 - 10 of 12
- JOURNAL OF VLSI SIGNAL PROCESSING , 2000
"... The PICO-N system automatically synthesizes embedded nonprogrammable accelerators to be used as co-processors for functions expressed as loop nests in C. The output is synthesizable VHDL that
defines the accelerator at the register transfer level (RTL). The system generates a synchronous array of cu ..."
Cited by 60 (6 self)
The PICO-N system automatically synthesizes embedded nonprogrammable accelerators to be used as co-processors for functions expressed as loop nests in C. The output is synthesizable VHDL that defines
the accelerator at the register transfer level (RTL). The system generates a synchronous array of customized VLIW (very-long instruction word) processors, their controller, local memory, and
interfaces. The system also modifies the user's application software to make use of the generated accelerator. The user indicates the throughput to be achieved by specifying the number of processors
and their initiation interval. In experimental comparisons, PICO-N designs are slightly more costly than hand-designed accelerators with the same performance.
- INTERNATIONAL CONFERENCE ON APPLICATION--SPECIFIC ARRAY PROCESSORS , 1995
"... In recognition of the fundamental relation between regular arrays and systems of affine recurrence equations, the Alpha language was developed as the basis of a computer aided design methodology
for regular array architectures. Alpha is used to initially specify algorithms at a very high algorith ..."
Cited by 7 (4 self)
In recognition of the fundamental relation between regular arrays and systems of affine recurrence equations, the Alpha language was developed as the basis of a computer aided design methodology for
regular array architectures. Alpha is used to initially specify algorithms at a very high algorithmic level. Regular array architectures can then be derived from the algorithmic specification using a
transformational approach supported by the Alpha environment. This design methodology guarantees the final design to be correct by construction, assuming the initial algorithm was correct. In this
paper, we address the problem of validating an initial specification. We demonstrate a translation methodology which compiles Alpha into the imperative sequential language C. The C-code may then be
compiled and executed to test the specification. We show how an Alpha program can be naively implemented by viewing it as a set of monolithic arrays and their filling functions, implemented usin...
- In 7th Conference on Functional Programming Languages and Computer Architecture , 1995
"... : Alpha is a data parallel functional language which has the capability of specifying algorithms at a very high level. Our ultimate objective is to generate efficient parallel imperative code
from an Alpha program. In this paper, we discuss the related problem of generating efficient single processo ..."
Cited by 6 (6 self)
: Alpha is a data parallel functional language which has the capability of specifying algorithms at a very high level. Our ultimate objective is to generate efficient parallel imperative code from an
Alpha program. In this paper, we discuss the related problem of generating efficient single processor imperative code. Analysis techniques that were developed for the synthesis of systolic arrays are
extended and adapted for the compilation of functional programming languages. We also demonstrate how a transformational methodology can be used as a compilation engine to transform an Alpha program
to a sequential form. C--code is then generated using a straightforward pretty printer from the sequential form Alpha program. The C--code may then be compiled to efficiently execute the program.
Key-words: parallelizing compilers, functional languages (Résumé : tsvp) Supported by NSF grant MIP-910852 and Esprit Basic Research Action NANA2, Number 6632 email: quinton@irisa.fr email:
- In Proceedings IEEE 15th International Conference on Application-specific Systems, Architectures and Processors (ASAP 2004 , 2004
"... In this paper we present a significant extension of the quantified equation based algorithm class of piecewise regular algorithms. The main contributions of the following paper are: (1) the
class of piecewise regular algorithms is extended by allowing run-time dependent conditionals, (2) a mixed int ..."
Cited by 6 (5 self)
In this paper we present a significant extension of the quantified equation based algorithm class of piecewise regular algorithms. The main contributions of the following paper are: (1) the class of
piecewise regular algorithms is extended by allowing run-time dependent conditionals, (2) a mixed integer linear program is given to derive optimal schedules of the novel class we call dynamic
piecewise regular algorithms, and (3) in order to achieve highest performance, we present a speculative scheduling approach. The results are applied to an illustrative example.
- In World Congress on Formal Methods (2 , 1999
"... . In this paper we present the affine clock calculus as an extension of the formal verification techniques provided by the Signal language. A Signal program describes a system of clock
synchronisation constraints the consistency of which is verified by compilation (clock calculus) . Well-adapted ..."
Cited by 6 (0 self)
. In this paper we present the affine clock calculus as an extension of the formal verification techniques provided by the Signal language. A Signal program describes a system of clock
synchronisation constraints the consistency of which is verified by compilation (clock calculus) . Well-adapted in control-based system design, the clock calculus has to be extended in order to
enable the validation of Signal-Alpha applications which usually contain important numerical calculations. The new affine clock calculus is based on the properties of affine relations induced between
clocks by the refinement of Signal-Alpha specifications in a codesign context. Affine relations enable the derivation of a new set of synchronisability rules which represent conditions against which
synchronisation constraints on clocks can be assessed. Properties of affine relations and synchronisability rules are derived in the semantical model of traces of Signal. A prototype implementing a
subset of t...
, 1996
"... : We address the problem of computation upon ZZ-polyhedra which are intersections of polyhedra and integral lattices. We introduce a canonic representation for ZZ-polyhedra which allow to
perform comparisons and transformations of ZZ-polyhedra with the help of a computational kernel on polyhedra. Th ..."
Cited by 4 (1 self)
: We address the problem of computation upon Z-polyhedra, which are intersections of polyhedra and integral lattices. We introduce a canonic representation for Z-polyhedra which allows us to perform comparisons and transformations of Z-polyhedra with the help of a computational kernel on polyhedra. This contribution is a step toward the manipulation of images of polyhedra by affine functions, and has applications in the domain of automatic parallelization and parallel VLSI synthesis. Key-words: regular parallelism, polyhedron, lattices, loop nest, automatic synthesis methodology, VLSI (Résumé : tsvp) Centre National de la Recherche Scientifique (URA 227), Institut National de Recherche en Informatique et en Automatique -- unité de recherche de Rennes, Université de Rennes 1 -- Insa de Rennes. Manipulations of Z-polyhedra. Abstract: We address the problem of computation on Z-polyhedra (intersections of polyhedra and...
"... . In this paper we present affine transformations as an extension of the Signal language for the specification and validation of real-time systems. To each Signal program is associated a system
of equations which specify synchronization constraints on clock variables. The Signal compiler resolve ..."
Cited by 2 (0 self)
. In this paper we present affine transformations as an extension of the Signal language for the specification and validation of real-time systems. To each Signal program is associated a system of
equations which specify synchronization constraints on clock variables. The Signal compiler resolves these equations and verifies if the control of a program is functionally safe. By means of the new
transformations, affine relations can be defined between clocks and it gets necessary to enhance the compiler with facilities for the resolution of synchronization constraints on these clocks. We
propose thus an extension of the compiler based essentially on a canonical form of the affine relations. 1 Introduction Real-time systems, and more generally reactive systems [1], are in continuous
interaction with their environment. Therefore, they must respond in time to external stimuli. Moreover, real-time systems must be safe, thus one would wish to prove their correctness. Response time
- Journal of VLSI Signal Processing , 2000
"... The PICO-N system automatically synthesizes embedded nonprogrammable accelerators to be used as co-processors for functions expressed as loop nests in C. The output is synthesizable VHDL that
defines the accelerator at the register transfer level (RTL). The system generates a synchronous array of cu ..."
Cited by 1 (0 self)
The PICO-N system automatically synthesizes embedded nonprogrammable accelerators to be used as co-processors for functions expressed as loop nests in C. The output is synthesizable VHDL that defines
the accelerator at the register transfer level (RTL). The system generates a synchronous array of customized VLIW (very-long instruction word) processors, their controller, local memory, and
interfaces. The system also modifies the user's application software to make use of the generated accelerator. The user indicates the throughput to be achieved by specifying the number of processors
and their initiation interval. In experimental comparisons, PICO-N designs are slightly more costly than hand-designed accelerators with the same performance.
, 1997
"... : In this paper we present affine transformations as an extension of the Signal language for the specification and validation of real-time applications. A Signal program is a system of equations
which specify dependencies between program data and synchronization constraints on clock variables. In or ..."
Cited by 1 (1 self)
: In this paper we present affine transformations as an extension of the Signal language for the specification and validation of real-time applications. A Signal program is a system of equations
which specify dependencies between program data and synchronization constraints on clock variables. In order to test if a program is functionally safe, the Signal compiler resolves the clock
constraints and verifies that the data dependency graph contains no cycles. By means of the new transformations, affine relations can be defined between clock variables and it gets necessary to
enhance the compiler with facilities for the resolution of synchronization constraints on these clocks. To tackle these constraints, we propose an extension of the compiler based essentially on a
canonical form of the affine relations. This extension removes some of the limits of the existing compiler and enlarges the range of applications that can be validated using Signal. Key-words:
Signal, real-time languages, c...
, 2004
"... In this report we present a significant extension of the quantified equation based algorithm class of piecewise regular algorithms. The main contributions of the following report are: (1) the
class of piecewise regular algorithms is extended by allowing run-time dependent conditionals, (2) a mixed i ..."
Cited by 1 (1 self)
In this report we present a significant extension of the quantified equation based algorithm class of piecewise regular algorithms. The main contributions of the following report are: (1) the class
of piecewise regular algorithms is extended by allowing run-time dependent conditionals, (2) a mixed integer linear program is given to derive optimal schedules of the novel class we call dynamic
piecewise regular algorithms, and (3) in order to achieve highest performance, we present a speculative scheduling approach. The results are applied to an illustrative example.
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=249335","timestamp":"2014-04-17T08:02:59Z","content_type":null,"content_length":"40588","record_id":"<urn:uuid:4c9f14ee-32e9-47ac-bd86-3ef5dd61ee72>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00397-ip-10-147-4-33.ec2.internal.warc.gz"}
|
complex plane
April 14th 2008, 08:48 AM #1
Say, if I have (p) |z+2|=4 and (q) Re(z)=0, which are two different lines in the hyperbolic plane, and I want to find the point of intersection of p and q, how would I do this?
Hyperbolic plane! You mean complex plane. Note $|z+2|=4$ is the circle $(x+2)^2 + y^2 = 16$ while $\Re (z) = 0$ is the line $x=0$.
Sorry, I'm rubbish - I meant hyperbolic lines. OK, so if I take what you said, would I just substitute x=0 to get the intersection? I get confused as to how you get the circle from |z+2|=4:
|x+iy+2| = 4, group real and imaginary parts, then take the modulus:
sqrt( ((x+2)^2) + (iy)^2 ) = 4
and so on..
to get
((x+2)^2) - y^2 = 16
Where am I going wrong?
Sorry, I'm rubbish - I meant hyperbolic lines. OK, so if I take what you said, would I just substitute x=0 to get the intersection? I get confused as to how you get the circle from |z+2|=4:
|x+iy+2| = 4, group real and imaginary parts, then take the modulus:
sqrt( ((x+2)^2) + (iy)^2 ) = 4 Mr F says: And here's the mistake. ${\color{red}|a + ib| = \sqrt{a^2 + b^2}}$, NOT ${\color{red}\sqrt{a^2 + (ib)^2}}$.
and so on..
to get
((x+2)^2) - y^2 = 16
Where am I going wrong?
|z + 2| = 4 <=> |z - (-2)| = 4. So the geometric interpretation is the set of all points in the complex plane that are a distance 4 from z = -2. This is a circle of radius 4 and centre at z = -2,
that is, (-2, 0).
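To finish the computation the thread leaves open (a short sketch of my own, using the corrected circle equation): substituting $x = 0$ into $(x+2)^2 + y^2 = 16$ gives

$$4 + y^2 = 16 \quad\Rightarrow\quad y^2 = 12 \quad\Rightarrow\quad y = \pm 2\sqrt{3},$$

so the intersection points are $z = 2\sqrt{3}\,i$ and $z = -2\sqrt{3}\,i$.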
|
{"url":"http://mathhelpforum.com/pre-calculus/34455-complex-plane.html","timestamp":"2014-04-23T18:52:32Z","content_type":null,"content_length":"41079","record_id":"<urn:uuid:6c89f6ba-66e2-427a-9520-f2f1880aadd6>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00274-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Change max points lost per game & limit total max points won
This suggestion arises from the thread "Remove maximum point lost to deter farming". From posts in that thread it became clear there was resistance to using log and exponential scales (but I'm not sure of the details). Shame, so we have to keep it to plus, minus, times, divide,
minimum and maximum (+, -, *, /, max and min). Hence my new suggestion:
Concise description: Change the max number of points a player can lose in a game. Specifics:
• Current scoring: The points to be awarded are calculated as (loser's score / winner's score) * 20, up to a maximum of 100 points from each opponent.
• Proposed scoring:
□ Remove the overall maximum
□ Include a minimum winner's score
□ so you get: The points to be awarded are calculated as
(loser's score / max(winner's score, limit)) * 20,
□ where limit could be 500 points.
That's one solution, which is fine for the 1 v 1 case
• 1 beating 4,200 means 168 points are exchanged rather than 4,200 if the limit is 500.
• 1 beating 4,200 means 336 points are exchanged rather than 4,200 if the limit is 250.
But another tweak is required for games with more players... and I think that is simpler.
Concise description: Change the maximum number of points a player can lose in a game AND limit the maximum TOTAL points a player can win in a game
Same as above but with a tweak
• rather than have a maximum a winner can take from an individuial loser
• have a larger maximum TOTAL win a winner can take from a game. So...
□ One player on 1 beats seven players on 4,200 in an eight-player game.
□ Each 4,200 player loses 168 points if the limit is 500.
□ Each 4,200 player loses 336 points if the limit is 250.
□ Set a maximum that the winner can gain, say 500.
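A minimal sketch of the two variants in code (my own illustration; in particular, scaling each loser's payment proportionally under the total cap is an assumption, since the post doesn't say how the cap should be distributed):

def points_current(loser_score, winner_score):
    # Current rule: (loser / winner) * 20, capped at 100 per opponent.
    return min(100, loser_score / winner_score * 20)

def points_proposed(loser_score, winner_score, limit=500):
    # Proposed rule: drop the flat cap, but never divide by less than
    # `limit`, so a very low-scoring winner can't extract unbounded points.
    return loser_score / max(winner_score, limit) * 20

def winnings_with_total_cap(loser_scores, winner_score, limit=500, total_cap=500):
    # Multiplayer tweak: per-loser amounts as above, with the winner's
    # total haul capped; each payment is scaled down proportionally
    # (an assumption -- the post only says the total should be capped).
    per_loser = [points_proposed(s, winner_score, limit) for s in loser_scores]
    total = sum(per_loser)
    if total <= total_cap:
        return per_loser
    return [p * total_cap / total for p in per_loser]

# One player on 1 point beats seven players on 4,200 each:
print(points_proposed(4200, 1))                     # 168.0 per loser
print(sum(winnings_with_total_cap([4200] * 7, 1)))  # capped at 500.0 total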
Re: Change max points lost per game & limit total max points won
More examples, all with a limit of 500.
General (4000) invites Cook (500) for a 1 v 1.
• Current: General can win 3 but only lose 100.
• New: General can win 3 but lose 160.
This is fairer as there is more risk involved for taking on the Cook.
Cunning Cook (1) invites 1 Generals (4000).
• Current: General can win 1 from the Cook.
• New: General can win 1 from the Cook.
• Current: Cook can only win 100 from the General (the minimum of 100 and (4000/1) * 20).
• New: Cook can only win 500 in all, each General loses 100.
This is fairer as there is still risk involved for taking on the Cook, but the Cook can only benefit from this once - their next game will score as above.
Cunning Cook (1) invites 7 Generals (4000).
• Current: General can win 1 from the Cook and 20 each from the generals (141).
• New: General's winnings stay the same.
• Current: Cook can win 100 from each General, 700 total..
• New: Cook is limited to 500 points.
This is fairer as there is still risk involved for taking on the Cook, but the Cook isn't getting the additional benefit from hoodwinking 7 generals.
Re: Change max points lost per game & limit total max points won
this kills the purpose of battle royales and other special games where a person is supposed to gain tons of points when he/she wins
TheSaxlad wrote:The Dice suck a lot of the time.
And if they dont suck then they blow.
Re: Change max points lost per game & limit total max points won
Joodoo wrote:this kills the purpose of battle royales and other special games where a person is suppose to gain tons of points when he/she wins
No problem.
You just make "limit" a parameter and set it at different values for different games.
Problem solved!
Re: Change max points lost per game & limit total max points won
your 2nd post "This is fairer as there is more risk involved for taking on the Cook."
What if the general started the game and the cook joined? That's not fair at all. It will be fair only if you put a limit on who can join what games (e.g. must be 3000+ points to join a general's games), but we all know this won't happen...
RETIRED AS OF 2/27/10
Peace is a lie, there is only passion.
Through passion, I gain strength.
Through strength, I gain power.
Through power, I gain victory.
Through victory, my chains are broken.
The Force shall free me.
Re: Change max points lost per game & limit total max points won
DarthBlood wrote: it will be fair only if you put a limit on who can join what games, but we all know this won't happen...
So, we both agree on that!
What else do we agree on?
• The scoring system needs to be simple?
• There is a strong preference to keep it to plus, minus, times, divide, minimum and maximum (+, -, *, /, max and min).
The debate about what should be used can go on forever; the reason why one should use a given system is another matter.
For example, a cook plays a general 100 times. If their ranks are a fair reflection of their ability, then the general wins most of the games and the cook a few. The scoring should also reflect this: the general should win the same number of points as he loses. At the moment, that doesn't happen.
For me it comes down to this.
You want a system where, when players of different ranks play each other lots of times, at the end of the games their scores remain unchanged. [Note the argument is a little more subtle than that.] If one player's rank isn't true to their ability, then their score and rank will rise (or fall).
Re: Change max points lost per game & limit total max points won
I used to play on another website which I believe to be better than this website in some ways, but not others. The two things that I think the other website has over this website are:
1. When you create a game you can set Minimum and/or Maximum points limits. You do not have to set limits, but you do have the option to set either or both. For example, if I have 2100 points and I
don't want to play anyone with less than 2000 points, I can set that criteria when I create the game and everyone who wants to join my game must have 2000 points or more. If you're scared to play
anyone with more than 3000 points you can cap the maximum points limits if you want. But most importantly it means you never have to play against a chef again. I can't believe that during this thread
people were making the argument of a player with 4000 points playing against a chef with 1 point. I mean honestly who would ever do that. It's the most insane idea ever. After many games of hard work
to get up to 4000, you could literally lose hundreds of points in one game if you get screwed by lady luck. I've lost 26 armies on 3 before on this website. I think the dice are rigged, but that is
another subject for other threads. Anyway...
When I look for new games these days I search through all the pending games looking for players with high point scores because I don't like playing anyone who is a lieutenant or less. Personal
preference, maybe, but I think you'll find a lot of other people do that too. I know that I'm much more likely to win games against people who know how to play the game than against games with chefs.
Not only that, but if you ever get beaten by a chef then you get screwed hardcore in the points department. I think that by implementing this simple idea you would find that your initial problem
would be solved and you would also find a lot of the really good players playing in singles games rather than in team games (because you will have probably noticed that's all they usually play). To
be honest, all I want to do is play the guys with lots of points so I can take their points off them.
(I am aware that this is not part of the initial thread, but thought I'd add it here anyway while I'm having my spiel)
2. In the game playing interface the other website has a stats table listing each player, number of territories, armies killed, total current armies, number of cards, and their current points score.
This table is so valuable and saves so much time rather than having to count the armies of each player and their territories.
Being a programmer myself, I know these two ideas would be extremely easy to implement and add a lot of benefit to the game.
Please feel free to criticise or agree as you wish.
Play well.
Re: Change max points lost per game & limit total max points won
Number 2 is available by use of the BOB script. Look in the Plugins & Addons forum for it.
Re: Change max points lost per game & limit total max points won
Oh awesome. I'll check it out now.
Thanks heaps!
Re: Change max points lost per game & limit total max points won
If anyone else is interested, I also found a result to my first point. If you go to forums and then callouts, there are a bunch of threads relating to games created for players with specific ranks.
Another alternative is to join tournaments. Excellent!
Re: Change max points lost per game & limit total max points won
fuzzball wrote:If anyone else is interested, I also found a result to my first point. If you go to forums and then callouts, there are a bunch of threads relating to games created for players
with specific ranks. Another alternative is to join tournaments. Excellent!
Those are both excellent ideas. Another I have found useful, is to add someone to your 'friend' list each time you've had a good experience playing them, and then create your own private games,
sending out invitations to all those on your friend list. Also, joining a clan can be a good way to ensure you always have decent people to play with/against.
Wait a minute. What's this thread about again? Max points? I say 100 is plenty. It's quite the rare occurrence that the maximum is exchanged as is. Upping the limit would not make any significant difference.
jay_a2j wrote:lets not be so quick to judge Hitler
|
{"url":"http://www.conquerclub.com/forum/viewtopic.php?f=471&t=81549&p=1919149","timestamp":"2014-04-20T04:25:33Z","content_type":null,"content_length":"133966","record_id":"<urn:uuid:c13ae317-1dbe-4596-9dcb-d6e35f2a729f>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00120-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Question on the Irreducibility of Polynomials
May 18th 2013, 06:56 PM #1
Question on the Irreducibility of Polynomials
I am reading Dummit and Foote on Polynomial Rings. In particular I am seeking to understand Section 9.4 on Irreducibility Criteria.
Proposition 9 in Section 9.4 reads as follows:
Proposition 9. Let F be a field and let $p(x) \in F[x]$. Then p(x) has a factor of degree one if and only if p(x) has a root in F i.e. there is an $\alpha \in F$ with $p( \alpha ) = 0$
Then D&F state that Proposition 9 gives a criterion for the irreducibility of polynomials of small degree.
D&F then state Proposition 10 as follows:
Proposition 10: A polynomial of degree two or three over a field F is reducible if and only if it has a root in F
BUT! Here is my problem - why doesn't a root in F imply reducibility for polynomials of all degrees? A root in F means, I think, that the polynomial concerned has a linear factor and hence can be factored into a linear factor times a polynomial of degree n-1?
Can anyone clarify this for me?
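For contrast, the standard example of the asymmetry (my addition, not from D&F's text): a root does imply reducibility in every degree $\geq 2$, but the converse fails above degree 3. Over $\mathbb{Q}$,

$$x^4 + 2x^2 + 1 = (x^2 + 1)^2$$

is reducible yet has no rational root, which is why "reducible if and only if it has a root" is stated only for degrees 2 and 3, where any nontrivial factorization must include a degree-one factor.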
|
{"url":"http://mathhelpforum.com/advanced-algebra/219063-question-irreducibility-polynomials.html","timestamp":"2014-04-18T18:52:44Z","content_type":null,"content_length":"30608","record_id":"<urn:uuid:039b9545-58c1-4cf7-ae74-fdf9a2ff9c20>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00521-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Getting Used to Mathematica
• Arguments of functions are given in square brackets.
• Names of built-in functions have their first letters capitalized.
• Multiplication can be represented by a space.
• Powers are denoted by ^.
• Numbers in scientific notation are entered, for example, as 2.5*^-4 or 2.5 10^-4.
Important points to remember in Mathematica.
If you have used other computer systems before, you will probably notice some similarities and some differences. Often you will find the differences the most difficult parts to remember. It may help
you, however, to understand a little about why Mathematica is set up the way it is, and why such differences exist.
One important feature of Mathematica that differs from other computer languages, and from conventional mathematical notation, is that function arguments are enclosed in square brackets, not
parentheses. Parentheses in Mathematica are reserved specifically for indicating the grouping of terms. There is obviously a conceptual distinction between giving arguments to a function and grouping
terms together; the fact that the same notation has often been used for both is largely a consequence of typography and of early computer keyboards. In Mathematica, the concepts are distinguished by
different notation.
This distinction has several advantages. In parenthesis notation, it is not clear whether f(x) means the function f applied to x or f multiplied by x. Using square brackets for function arguments removes this ambiguity. It also allows multiplication to be indicated without an explicit * or other character. As a result, Mathematica can handle expressions like 2x and a x, treating them just as in standard mathematical notation.
You can also see from "Some Mathematical Functions" that built-in Mathematica functions often have quite long names. You may wonder why, for example, the pseudorandom number function for generating
reals is called RandomReal, rather than some abbreviated name. The answer, which pervades much of the design of Mathematica, is consistency. There is a general convention in Mathematica that all function names are
spelled out as full English words, unless there is a standard mathematical abbreviation for them. The great advantage of this scheme is that it is predictable. Once you know what a function does, you
will usually be able to guess exactly what its name is. If the names were abbreviated, you would always have to remember which shortening of the standard English words was used.
Another feature of built-in Mathematica names is that they all start with capital letters. "Defining Variables" and "Defining Functions" discuss how to define variables and functions of your own. The
capital letter convention makes it easy to distinguish built-in objects. If Mathematica used instead of Max to represent the operation of finding a maximum, then you would never be able to use as the
name of one of your variables. In addition, when you read programs written in Mathematica, the capitalization of built-in names makes them easier to pick out.
Some Mathematica Conventions
• Built-in functions are capitalized, and arguments to functions are wrapped in square brackets: Sin[x]
• Uppercase and lowercase letters are recognized as different characters, and lists are wrapped in curly brackets: {a, b, c}
• Built-in symbols are capitalized, and commas are used to separate arguments. A semicolon suppresses output, but the command is still evaluated: N[Pi, 50];
|
{"url":"http://reference.wolfram.com/mathematica/tutorial/GettingUsedToMathematica.html","timestamp":"2014-04-17T18:29:44Z","content_type":null,"content_length":"32604","record_id":"<urn:uuid:1a9756c2-0f08-43ff-ad04-84a03d749b89>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00631-ip-10-147-4-33.ec2.internal.warc.gz"}
|
What is Nemeth Code?
Nemeth Code is a special type of Braille used for math and science notations. It was developed in 1946 by Dr. Abraham Nemeth as part of his doctoral studies in mathematics. In 1952, the Braille
Authority of North America (BANA) accepted Nemeth Code as the standard code for representing math and science expressions in Braille. With Nemeth Code, one can render all mathematical and technical
documents into six-dot Braille, including expressions in these areas:
• Arithmetic
• Column arithmetic, including carrying and borrowing
• Long division
• Algebra
• Geometry (not including figure drawings)
• Trigonometry
• Calculus
• Modern mathematics up to research level
Because Nemeth Code is in six-dot Braille, it can be generated using some of the Braille tools, such as a computer, a slate and stylus, or the Perkins Braille Writer.
Freedom Scientific offers a free downloadable Nemeth Code self-study, which is designed for blind individuals to learn to read and write the Nemeth Code for Braille mathematics. To learn or refresh
skills using this code, visit Freedom Scientific's Nemeth Code Self-Study Instructional Material.
Last update or review: January 24, 2013
|
{"url":"http://www.washington.edu/doit/Stem/articles?42","timestamp":"2014-04-17T07:55:22Z","content_type":null,"content_length":"6674","record_id":"<urn:uuid:57310346-4bf7-4b66-a601-caafcc059d1f>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00316-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Help
June 23rd 2008, 07:10 PM #1
If $x_{1}, x_{2}$ are the roots of $ax^2 +bx+c = 0$, find the value of
$(ax_{1}+b)^{-2} +(ax_{2}+b)^{-2}$
Thank you in advance
Hello !
I think I figured it out...
Maybe there is a quicker way, but I do not have a lot of time.
Remember the sum of the roots & the product of the roots :
$x_1+x_2=-\frac ba \quad (1) \quad \quad \quad x_1x_2=\frac ca \quad (2)$
From (1), we know that $x_1=-\frac ba-x_2 \quad \Rightarrow \quad ax_1=-b-ax_2$.
Similarly, $ax_2=-b-ax_1$.
Substituting in N :
\begin{aligned} N&=\frac{1}{(-b-ax_2+b)^2}+\frac{1}{(-b-ax_1+b)^2} \\ &=\frac{1}{(ax_2)^2}+\frac{1}{(ax_1)^2} \\ &=\frac{1}{a^2} \cdot \left(\frac{1}{x_2^2}+\frac{1}{x_1^2}\right) \end{aligned}
Gathering it in a unique fraction :
$N=\frac{1}{a^2} \cdot \frac{x_1^2+x_2^2}{x_1^2 \cdot x_2^2}$
Completing the square above :
$N=\frac{1}{a^2} \cdot \frac{({\color{red}x_1+x_2})^2-2{\color{blue}x_1x_2}}{({\color{blue}x_1x_2})^2}$
But ${\color{blue}x_1x_2}=\frac ca$ and ${\color{red}x_1+x_2}=-\frac ba$.
This simplifies into :
$N=\frac{1}{a^2} \cdot \frac{\left(\frac ba\right)^2-2 \frac ca}{\left(\frac ca\right)^2}$
$N=\frac{1}{\bold{a^2}} \cdot \left(\frac{b^2}{a^2}-2 \frac ca\right) \cdot \frac{\bold{a^2}}{c^2}$
$N=\frac{1}{c^2} \cdot \left(\frac{b^2}{a^2}-2 \frac{ac}{a^2}\right)$
Another solution:
$x_1 \neq 0, \ x_2 \neq 0$ because, if $x_1=0$ then $c=0$ and $ax_2+b=0$, so the denominator is 0, contradiction.
Now, if $x_1, \ x_2$ are the roots of the equation, then $ax_1^2+bx_1+c=0$ and $ax_2^2+bx_2+c=0$.
Divide the first equality by $x_1$ and the second by $x_2$: $ax_1+b=-\frac{c}{x_1}$ and $ax_2+b=-\frac{c}{x_2}$.
Then the expression becomes
$\displaystyle\left(\frac{x_1}{c}\right)^2+\left(\frac{x_2}{c}\right)^2=\frac{x_1^2+x_2^2}{c^2}=\frac{(x_1+x_2)^2-2x_1x_2}{c^2}=\frac{\frac{b^2}{a^2}-\frac{2c}{a}}{c^2}=\frac{b^2-2ac}{a^2c^2}$
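A quick numerical check of the result (my own addition, not from the thread): take $a=1$, $b=3$, $c=2$, so $x_1=-1$, $x_2=-2$. Then

$$(ax_1+b)^{-2}+(ax_2+b)^{-2} = 2^{-2}+1^{-2} = \frac{5}{4}, \qquad \frac{b^2-2ac}{a^2c^2} = \frac{9-4}{4} = \frac{5}{4},$$

so the two agree.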
{"url":"http://mathhelpforum.com/algebra/42280-roots.html","timestamp":"2014-04-19T10:41:56Z","content_type":null,"content_length":"43957","record_id":"<urn:uuid:101eeea6-455e-4c30-971d-5978b71d3c02>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00517-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A Very Bumpy Ride
Simulations of turbulent flows hit with strong shockwaves
Sanjiva K. Lele (project webpage)
Stanford University
Turbulent flows that also involve interactions with strong shocks and density variations arise in many diverse areas of science and technology. For example, the explosive phenomena associated with
supernova explosions, volcanic eruptions, accidental detonations involving natural gas leaks, shock wave lithotripsy to break up kidney stones, as well as the implosion of a cryogenic fuel capsule
for inertial confinement fusion all involve dramatic compression and expansion of multi-phase materials, their turbulent mixing and chemical reactions. Strong shock waves, strong acceleration and
deceleration of heterogeneous materials and associated turbulent mixing play a critical part in these phenomena. Besides the multi-scale hydrodynamic processes, these phenomena also involve other
physics and chemistry rich in its complexity and nonlinearity, such as plasma physics, radiation transport, and complex chemical kinetics. The current ability to predict these flow phenomena is
strongly limited by the models of turbulence used, and by the computational algorithms employed. This project, utilizing the petascale computational capabilities envisioned by the Department,
provides an opportunity to revolutionize the scientific understanding of shock-turbulence interactions and multimaterial mixing in complex flows by simulations at unparalleled fidelity.
The project will consider turbulent flow configurations involving shock-turbulence interaction and multi-material mixing for fundamental scientific study, and for systematic model development, for
example for use in large-eddy simulations in the context of applications to accelerated multi-material flows. The team will also systematically evaluate different novel numerical approaches for
nonlinear, multi-scale shock-turbulence interaction flow problems to establish the best practices and rigorous benchmarks in large-eddy simulations.
Problems of shock-turbulence interaction present a philosophical dilemma in numerical algorithm development. Methods designed to treat discontinuities and shocks are inherently dissipative for
turbulence, and methods designed for turbulence (fluctuating fields with broadband variations) are ineffective for discontinuities. Capturing the interactions at unprecedented realism requires novel
algorithms and effective use of software tools which allow the full benefit of the new algorithms to be realized on the massively parallel computer architectures.
Flows involving the interaction of strong shocks with turbulence and density interfaces are central to laser-driven implosion of inertial confinement fusion plasmas, as well as in the broader
Stockpile Stewardship mission of DOE. However, the current scientific understanding of shock-turbulence interactions in complex configurations, and the ability to reliably predict these strongly
nonlinear multi-scale flows remains limited and imperfect. It is this area of science application, with relevance to inertial confinement fusion application and supernovae astrophysics, that the
current Project aims to revolutionize by bringing together a team with deep expertise in numerical simulations of turbulence and turbulence physics, computational gas dynamics and shock wave physics,
numerical analysis and nonlinear dynamics, and massively parallel computing.
Science Application: Turbulence
Project Title: Simulations of Turbulent Flows with Strong Shocks and Density Variations.
Principal Investigator: Sanjiva K. Lele
Affiliation: Stanford University
Project Webpage: http://shocks.stanford.edu/
Participating Institutions and Co-Investigators:
Stanford University - Sanjiva K. Lele (PI), Parviz Moin
University of California at Los Angeles - Xiaolin Zhong
Lawrence Livermore National Laboratory - Andrew Cook, William Cabot, Bjorn Sjögreen
NASA Ames Research Center - Helen C. Yee
Funding Partners: U.S. Department of Energy - Office of Science, Advanced Scientific Computing Research, and the National Nuclear Security Agency.
Budget and Duration: Approximately $0.8 million per year for five years ^1
Other SciDAC physics efforts
^1Subject to acceptable progress review and the availability of appropriated funds
|
{"url":"http://www.scidac.gov/physics/shock.html","timestamp":"2014-04-17T04:29:58Z","content_type":null,"content_length":"14771","record_id":"<urn:uuid:2c7313d0-dfbc-488e-b7b2-800fdbfdbc3c>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00073-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Digital Math Explanation
This is supposed to be a fairly complete overview of current systems (and projects) implementing "mathematics in the computer".
In order to qualify for this list a system has:
• to be about mathematics
so for instance a word processor will qualify for this list only if it is a mathematical word processor: which means that TeX qualifies and Word doesn't
• to have something to do with computers
a project doesn't necessarily have to produce software to qualify: a standard for encoding math - like OpenMath - or a project with nice ideas - like QED - are in this list too
• to be significant
it has to produce software, or some kind of sizable collection, or some interesting idea, or a large group of people have to be involved, etc.
• to be public
it should be possible to acquire the system: by way of the internet, or by buying it, or by contacting the author, etc.
• to be active
at the very least there has to be some web page about it that this list can point to
If this list can be improved, either because a relevant system is missing, or because the information about some system is wrong (for instance, a link to a web page might no longer work), or because there is a better web page to point to than the one that's in the list, then I would very much like to know. In that case please send mail to freek@cs.kun.nl.
For each system this overview contains the following information:
• the name of the system
• the name of the program implementing the system (if appropriate)
for instance there are quite a number of programs implementing the HOL system; similarly there are a great number of MACSYMA derivatives with all kinds of names; often not all implementations are in this overview: I restricted myself to the most "active" or the most "interesting" (to me) of the implementations; generally the other implementations can easily be found on the web page about the system
• the URL of a web page or ftp site about the system
• an e-mail address that can be used to contact the project that produced the system
• the name (or names) of the principal architect of the system
if there have been more people than just the architect(s) significantly involved in the writing of the software the name is followed by "e.a."; by "architect" I don't mean the person who
"designed" the the system, nor the (current) leader of the project, but the principal programmer: the one who took the main design decisions of the detailed implementation of the software; for
instance for the current coq implementation this seems to be Chet Murthy
• the programming language the system is implemented in
• the name of the specific variant of this programming language (if appropriate)
for instance "lisp" might be "franz lisp" or "common lisp" or "scheme", etc.
• the category of the system
see below for a list (and explanations) of the various categories
• the kind of interaction the system has with the user
see below for a list (and explanations) of the various kinds of interaction
• the kind of logic the system uses
there are four choices: none, classical, constructive and both
here the kind of logic that is commonly used in the system is shown: for instance in coq one can easily reason classically by taking the double negation rule as an axiom, but because most of the
applications of coq don't do this, coq is labeled "constructive"; similarly although Automath has a type theoretical foundation the large case studies in the Automath project were all classical:
hence Automath is labeled "classical"; only if both kinds of logic have been significantly developed in a system, as for instance in the Isabelle system, will it be labeled "both"
• the size of the effort that produced the system
there are three choices: small, large and commercial
the first two options are only used for non-commercial systems
a small system generally only involves one or two people; the home page of such a system is generally "in" the home page of its main author; the distribution of such a system is generally at most
a megabyte in size
a large system generally has been active for years; it generally involves a group of tens of people; such a system often has its own mailing list or newsgroup; such a system often has a number of
different implementations; the distribution of such a system often is tens of megabytes in size
• a sample of the language used within the system
I selected these samples in the following way: I looked for the biggest relevant "example" file from the distribution that I could find; in that file I went to the last of lines number 42, 137
and 666 that was present; and starting from that line I took a sample of 24 lines
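A small sketch of that selection rule in code (my own illustration of the procedure described above, not the author's actual script):

def take_sample(path):
    # Read the example file and pick the last of lines 42, 137 and 666
    # (1-indexed) that is present, then return 24 lines starting there.
    with open(path) as f:
        lines = f.readlines()
    candidates = [n for n in (42, 137, 666) if n <= len(lines)]
    start = max(candidates) if candidates else 1  # fall back to the top
    return lines[start - 1 : start - 1 + 24]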
Each system in this list falls in one of nine categories. All but the first two categories are for projects that produce software. The "non software" categories are:
• information
example: the QED project
this is for lists like this, bibliographies, people talking without doing anything, etc.
• representation
example: the OpenMath project
this is for languages/standards for encoding/exchanging mathematics
large collections of problems (used as a "benchmark" to compare first order theorem provers) are in this category too
The "mathematics software" is categorized into five main categories:
• authoring
example: TeX
these are the typesetting systems, the authoring tools, etc.: software that encodes/processes mathematics to communicate it to other humans, but doesn't process it "mathematically"
• computer algebra
example: Mathematica
these are the symbolic "calculators"; generally they don't have a very rigorous mathematical foundation, nor do they have much reasoning capacities; generally they implement very powerful and
complicated algorithms to calculate mathematical expressions
there are various kind of computer algebra systems: for numerical calculations, for doing "calculus"-like mathematics, for doing abstract algebra, etc.: I don't distinguish between those
• proof checker and theorem prover
examples: mizar and nqthm
these two categories are closely related: both are about "mechanical reasoning": the difference is whether the focus is on the computer checking the reasoning of the human, or on the human
watching and guiding the computer's proof efforts; the more a system tries to be "smart", the more chance it has of being in the "theorem prover" category
there are two kinds of theorem provers that have been put in two separate categories of their own (see below): so the "theorem prover" category might better have been called "other theorem provers"
• specification environment
example: obj
these are theorem provers too in a sense, but highly specialised ones which are used to specify and verify computer systems: so the focus here is not so much on trying to do mathematics as on the correctness of software
a system in this category has only been put in the list if it explicitly mentions something like logic or mathematics or automated reasoning on its web page: for instance the asf+sdf system
isn't there
There are three kinds of proof checkers and theorem provers that have a category of their own:
• logic education
example: hyperproof
this is the class of proof checkers which are intended to teach logic to students
• tactic prover
example: coq
these systems are often called "proof assistants"
these systems are descendants of the LCF system, generally based on a higher order and type theoretical foundation and often implemented in the ML programming language
in this kind of system something is proven interactively by a user invoking so-called "tactics" on a partially solved "proof goal"
• first order prover
example: otter
this is a highly specialised class of theorem provers, which often use clausal forms for their problems; there are competitions between the various members of this category to see which one is best, etc.
The interaction with a mathematical program has two orthogonal dimensions:
• either the user prepares a file and then runs the program on that file; or else the user has significant interaction with the program while it is running
• either the files that the user creates contain a sequence of commands for the "read-eval-print loop" of the program (so it is a "script", maybe called a "notebook"); or else the file contains
something more intricate than just a sequence of commands to the program
These two dimensions lead to four kinds of interaction:
A program is classified by its most common mode of producing texts: for instance coq can be used in the dialog way ("coqtop") and the script way ("coqc"): but because people most often write coq
proofs by copying fragments between an editor and a running coq interpreter, it's classified "dialog".
I probably made some mistakes in distinguishing between "dialog" and "script" systems.
Apart from these four kinds of interaction, there are three kinds of "non interaction" too:
• library:
this is software that isn't a finished program but a "software library" that can be linked to other people's programs
• interface:
here the software is about interfacing between other software: so it doesn't so much do something itself as communicate with other programs
• corpus:
in this case there is no software at all, but just a "static" collection of data, for instance like in the "benchmarks" for the first order provers
(last modification 1999-01-18)
|
{"url":"http://www.cs.ru.nl/~freek/digimath/explanation.html","timestamp":"2014-04-20T03:15:44Z","content_type":null,"content_length":"12764","record_id":"<urn:uuid:08a6822a-d5cf-41d2-a40b-90b20ac4bf50>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00094-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Change of Variables
October 20th 2010, 04:13 AM #1
I have a function $f(S,t)$, which satisfies some PDE, with the condition that $f(S,T) = max\{S-1,0\}$. I introduce two new variables $x$ and $\tau$ as follows:
$S = e^x$ and $t=T-\frac{2\tau}{\sigma^2}$, and define the function $W$ by $f(S,t) = W(x,\tau)$. Is the new initial condition going to be $W(x,0) = \max\{e^x - 1, 0\}$?
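A quick check of the substitution (this follows directly from the definitions above): $t = T$ corresponds to $\tau = 0$, so $W(x,0) = f(e^x, T) = \max\{e^x - 1, 0\}$.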
|
{"url":"http://mathhelpforum.com/calculus/160363-change-variables.html","timestamp":"2014-04-20T02:47:08Z","content_type":null,"content_length":"29834","record_id":"<urn:uuid:f01edc4f-814e-413c-9d1d-0a2177207554>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00375-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mplus Discussion >> Latent Profile Analysis and Binary distal outcome
Stephanie Fitzpatrick posted on Sunday, November 20, 2011 - 1:52 pm
I have a 2 class latent profile model and I want to compare these two profiles on prevalence/probability of prediabetes. Prediabetes is a binary outcome. I am not sure of the syntax to use.
I would think the syntax should be x on C, but this seems to only work when I just put Categorical Are x and nothing in the Model statement except the %Overall%. However, the output that I receive
does not seem to tell me if the probabilities for one class are different from those for the other. See output below (the columns are the estimate, standard error, est./S.E., and two-tailed p-value). Please advise. Thanks
Latent Class 1
Category 1 0.968 0.017 58.319 0.000
Category 2 0.032 0.017 1.920 0.055
Latent Class 2
Category 1 0.818 0.084 9.689 0.000
Category 2 0.182 0.084 2.158 0.031
Latent Class 1 Compared to Latent Class 2
Category > 1 0.148 0.116 1.270 0.204
Bengt O. Muthen posted on Sunday, November 20, 2011 - 8:23 pm
It sounds like you have a set of continuous latent class (latent profile) indicators from which you form 2 classes, and in that same analysis you want to compare the probability of another variable u = prediabetes across those 2 classes. If that's a correct understanding, you don't say u on c because the u probabilities vary across the classes by default - the output gives you different thresholds in logit scale for the 2 classes, and those logits can be translated into probabilities as
P = 1/(1+exp(threshold))
You can test the differences in Model Constraint, creating a New parameter that is the difference in thresholds, or probabilities, across the 2 classes.
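As a numerical sanity check, the threshold-to-probability conversion described above is easy to script (a minimal sketch; the two threshold values are hypothetical, chosen only so that the resulting probabilities roughly match the category 2 probabilities reported earlier in the thread):

import math

def prob_from_threshold(t):
    # convention quoted above: P = 1 / (1 + exp(threshold))
    return 1.0 / (1.0 + math.exp(t))

t1, t2 = 3.41, 1.50   # hypothetical logit thresholds for u in class 1 and class 2
p1, p2 = prob_from_threshold(t1), prob_from_threshold(t2)
print(p1, p2, p1 - p2)  # about 0.032 and 0.182, and their difference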
Stephanie Fitzpatrick posted on Monday, November 21, 2011 - 3:42 pm
Thanks for your reply Dr. Muthen and your understanding is correct in that I want to compare the probability of u across the 2 latent profiles. What exactly is the syntax for this? It seems you are
suggesting to look at the thresholds. Does the response in probability scale not give me what I need? Also, keep in mind that u is binary. Thank you.
Bengt O. Muthen posted on Tuesday, November 22, 2011 - 6:28 pm
If you get results in probability scale for the distal u in the two classes, those are the ones you want to compare. If you want a statistical test of their difference, you have to express the
difference either in terms of logits or in terms of probabilities using Model Constraint. In doing so, you give labels to the threshold for u in each class in your Model command, and then use Model
constraint to express a new parameter that is their difference - this also gives you a z test of the difference.
Melissa Yale posted on Tuesday, May 15, 2012 - 2:16 pm
I am running a latent profile analysis with a distal outcome variable. I have a set of continuous latent class indicators from which I am forming 4 latent classes. In that same analysis I want to add
a distal outcome variable and compare the probabilities across the 4 classes. The outcome variable is a 5 category nominal variable. In my output I am only getting mean values for 4 of the 5 nominal
categories under each class. My questions are how do I interpret these mean values/estimates? Should different output (probabilities) be present?
Secondly, I labeled the parameters of each nominal category differently for each class and then created new parameters for use in the MODEL CONSTRAINT command. For example:
%c#1%
[u#1] (p1);
%c#2%
[u#1] (p2);
Model Constraint: New (T1);
T1 = p1 - p2;
What is the estimate and p-value given for this new parameter in the output? Also, Mplus will not let me create a parameter for all 5 categories. How do I do class to class comparisons for the 5th
Any guidance would be much appreciated.
Bengt O. Muthen posted on Tuesday, May 15, 2012 - 6:45 pm
Take a look at our Topic 2 handout on our web site, slides 58 and on. That shows how to compute the probabilities based on the logits. You have only intercepts beta_0c in the slide 58 multinomial
expression (no x's), and they vary over your latent classes.
I don't know if you want to constrain logits or constrain probabilities of certain nominal categories. Both can be done in Model Constraint, but the probabilities would first have to be expressed
from the logits as per above.
Melissa Yale posted on Thursday, May 17, 2012 - 11:53 am
Thank you.
Just for clarification:
If the means for Class 1 in the output are
U#1 -0.811
U#2 -0.493
U#3 -0.185
U#4 0 (reference)
AND the means for Class 2 are
U#1 -0.198
U#2 -0.043
U#3 -0.156
U#4 0 (reference)
AND the means for Class 3 are
U#1 -0.830
U#2 -0.378
U#3 -1.412
U#4 0 (reference)
I would calculate the probability of U#1 in Class 3 by p13 = exp(-0.830)/(exp(-0.830)+exp(-0.378)+exp(-1.412)+1)
then do the same for U#1 in class 2 (p12) and Class 1 (p11). Then calculate the difference of probabilities between classes for u#1 as p13-p12?
Linda K. Muthen posted on Friday, May 18, 2012 - 12:09 pm
That is correct.
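For reference, that arithmetic can be scripted (a minimal sketch using the Class 3 logits quoted above; the reference category has logit 0):

import math

def category_probs(logits):
    # multinomial logit: p_k = exp(l_k) / sum over all categories,
    # with the reference category's logit fixed at 0
    e = [math.exp(l) for l in logits] + [1.0]  # exp(0) = 1 for the reference
    total = sum(e)
    return [v / total for v in e]

class3 = [-0.830, -0.378, -1.412]  # logits for U#1..U#3 in Class 3
print(category_probs(class3))      # about [0.18, 0.29, 0.10, 0.42] for U#1..U#4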
Back to top
|
{"url":"http://www.statmodel.com/cgi-bin/discus/discus.cgi?pg=prev&topic=23&page=8461","timestamp":"2014-04-20T03:13:21Z","content_type":null,"content_length":"28202","record_id":"<urn:uuid:d9e6d9bb-091f-4554-b451-a7ba66a21859>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00219-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Op Amp Applications - Integrator
Movie(s) created by
Tim Fiegenbaum
at North Seattle Community College.
Lecture Transcript
11.3 and we're looking at Op Amp applications. The next form we're going to look at is called the integrator. An integrator is a circuit that, among other things, you can use if you have, for example, a positive pulse or a square wave coming in: you can get a shape that is triangular coming out of it, and that's the kind of circuit we are going to be looking at here. They can be built around an RC time constant with R1 and C1. Here is the value of R1 and here is C1 up here, so 33K and ten Nano-Farads are the components this is actually built around. Let's take a look at this
let's see we've got R3 here. This is going to be Rb so we can basically ignore that. We have a signal source coming in and in this case, it is going to be a rectangular square wave. It's going to
be fed through R1, and this is going into an inverting amplifier. The output is inverted: as the input goes positive, the output will actually go negative. And note that as it goes in here, then here is the output
and the output is fed to a feedback loop through R2 and C1. The idea here is that we're going to have the signal come in, it's going to see the virtual ground right here, so there will be that
voltage across this resistance, there will be a current and since it's a square wave it will be a constant current for as long as it is positive and as long as it goes negative. That current is going
to be fed up to the output and it will charge this capacitor, and so that line that you see, that triangular shape you see, is actually the charging of the capacitor in the feedback loop. OK, a few things we need to consider here: let's see, we looked at R1 and C1. OK, the input voltage is felt across R1, and we've talked about how the current is fed through the feedback loop and charges the cap based on the time of the input. OK, this is an important thing: based on the time of the input. We have to calculate what the time is. Now if this were a 1000 Hertz signal then the time of one cycle would be, you know, one over one thousand, and that would be one millisecond. Since we're looking at a square wave and this would be a whole cycle, we're going to be looking at the charging time of the positive cycle and then the charging time of the negative cycle, but we need to separate them out. We're going to take this divided by two, so 0.5 milliseconds would be our time, and so when we put this into a formula we need to be able to specify the right time that we're going to be using.
Now the next page here is this same circuit, shown large so you can actually see it, and we have the signal source here and we have the O-scope readout, and this is the simulation from Workbench where I had captured the output and the actual circuit. Now this is the formula we'll be using, and this is going to be Vi times T, so that would be the input voltage times the time, and then that would be divided by this RC time constant, and that will tell us the output voltage. All we're looking at is: we're going to be sending a specific voltage for a specific amount of time, and we're going to see what that will develop, through the charging of this capacitor, in terms of what kind of voltage we will see. You'll see the results over here: the square wave is the input and this triangular shape that you see here is the output. Let's do the formula, so let's pull up our calculator and see what we're going to get here. Okay, first of all, we need to look at the time: we said we had 1000 Hertz as our input, so let's just do one over 1000, and that gives us one millisecond; that is the time of one cycle, and then we'll divide that by two. That is our time right here. If we look at our square wave here, observe that this is a one volt peak signal, so we would take that time times one volt, and the value is going to be the same. That is our numerator, and then we would divide that by the RC time constant: that would be R1, which is 33 exponent three, times C1, which is ten Nano-Farads, so that would be ten exponent minus nine in parentheses, and then equals. You'll notice that that value is 1.51 volts; let's write that down, 1.51 volts. What that's saying is that during this positive
cycle there is going to be a current through this device. It's going to charge this capacitor and it's going to develop a voltage. Now during this time, you'll notice that the voltage appears to be
going negative. Well, the reason it is going negative is because it's fed into the inverting side, so as this is positive the output's going negative, which you would expect. Likewise, as the peak voltage goes negative, you see the charge on that cap going positive. Now the question here is, what is the value of this triangular signal? Go down here and you can see right here that the value's 1.447 volts. That's from the bottom of the triangle to the top of the triangle. Remember, our calculated value was actually quite close to that, and when you're doing this the measured output is a peak-to-peak value while V1 would be the peak value. This is a quick look at a circuit called an integrator.
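The numbers worked out in the transcript can be reproduced with a short script (a minimal sketch using the component and signal values quoted above):

# Integrator output: Vout = Vin * T / (R * C), with T the half-period
f = 1000.0               # input frequency in Hz
t_half = 1.0 / f / 2.0   # charging time for one half-cycle: 0.5 ms
v_in = 1.0               # peak input voltage in volts
r1 = 33e3                # R1 = 33 kilo-ohms
c1 = 10e-9               # C1 = 10 nanofarads

v_out = v_in * t_half / (r1 * c1)
print(round(v_out, 3))   # about 1.515 V, close to the measured 1.447 V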
|
{"url":"http://www.allaboutcircuits.com/videos/80.html","timestamp":"2014-04-19T10:17:37Z","content_type":null,"content_length":"15788","record_id":"<urn:uuid:aa98628a-1d1e-4402-8914-0ad4392bb15c>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00608-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A simplified version of local predicativity”, Aczel
- Arch. Math. Log., 39:155, 1998
"... We define a type theory MLM, which has proof theoretical strength slightly greater then Rathjens theory KPM. This is achieved by replacing the universe in Martin-Lof's Type Theory by a new
universe V, which has the property that for every function f , mapping families of sets in V to families of set ..."
Cited by 15 (8 self)
We define a type theory MLM, which has proof theoretical strength slightly greater than Rathjen's theory KPM. This is achieved by replacing the universe in Martin-Löf's Type Theory by a new universe V, which has the property that for every function f, mapping families of sets in V to families of sets in V, there exists a universe closed under f. We show that the proof theoretical strength of MLM is ψΩ1(ΩM+ω). Therefore we reach a strength slightly greater than |KPM| and V can be considered as a Mahlo-universe. Together with [Se96a] it follows |MLM| = ψΩ1(ΩM+ω). 1 Introduction An ordinal M is recursively Mahlo iff M is admissible and every M-recursive closed unbounded subset of M contains an admissible ordinal. Equivalently, this is the case iff M is admissible and for all Δ0 formulas φ(x, y, z⃗), and all z⃗ ∈ LM such that ∀x ∈ LM ∃y ∈ LM φ(x, y, z⃗), there exists an admissible ordinal β < M such that ∀x ∈ Lβ ∃y ∈ Lβ φ(x, y, z⃗) holds. ...
, 1996
"... We present a type theory T T M, extending Martin-Löf Type Theory by adding one Mahlo universe V, a universe being the type theoretic analogue of one recursive Mahlo ordinal. A model, formulated
in a Kripke-Platek style set theory KP M +, is given and we show that the proof theoretical strength of T ..."
Cited by 7 (6 self)
We present a type theory TTM, extending Martin-Löf Type Theory by adding one Mahlo universe V, a universe being the type theoretic analogue of one recursive Mahlo ordinal. A model, formulated in a Kripke-Platek style set theory KPM+, is given and we show that the proof theoretical strength of TTM is ≤ |KPM+| = ψΩ1(ΩM+ω). By [Se96a], this bound is sharp. 1
, 1996
"... Introduction Mathematics is usually developed on the basis of set theory. When trying to use type theory as a new basis for mathematics, most of mathematics has to be reformulated. This is of
great use, because then the step to programs is direct and one can expect to get the best programs. However ..."
Cited by 1 (1 self)
Introduction Mathematics is usually developed on the basis of set theory. When trying to use type theory as a new basis for mathematics, most of mathematics has to be reformulated. This is of great
use, because then the step to programs is direct and one can expect to get the best programs. However, it seems that most mathematicians will continue to work in set theory. Even when changing to
type theory for the formalisation, usually the proofs will be developed first having classical set theory in the background. Therefore methods for transferring directly set theoretical arguments to
type theory could make the step from traditional mathematics to type theory and therefore to computer science far easier. The reason why set theory is used in mathematics is its high flexibility and
that it allows to write down expressions without having to care about the type of the object. Therefore, if set theoretical proofs can be transferred to type theory, one could use set theory as a
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=2199923","timestamp":"2014-04-21T13:54:00Z","content_type":null,"content_length":"17910","record_id":"<urn:uuid:20f7856b-8144-486a-9cc5-b7277d9a633c>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00188-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Pruning and Visualizing Generalized Association Rules in Parallel Coordinates
January 2005 (vol. 17 no. 1)
pp. 60-70
One fundamental problem for visualizing frequent itemsets and association rules is how to present a long border of frequent itemsets in an itemset lattice. Another problem comes from the lack of an
effective visual metaphor to represent many-to-many relationships. This paper proposes an approach for visualizing frequent itemsets and many-to-many association rules by a novel use of parallel
coordinates. An association rule is visualized by connecting items in the rule, one item on each parallel coordinate, with continuous polynomial curves. In the presence of item taxonomy, each
coordinate can be used to visualize an item taxonomy tree which can be expanded or shrunk by user interaction. This user interaction introduces a border, which separates displayable itemsets from
nondisplayable ones, in the generalized itemset lattice. Only those itemsets that are both frequent and displayable are considered to be displayed. This approach of visualizing frequent itemsets and
association rules has the following features: 1) It is capable of visualizing many-to-many rules and itemsets with many items. 2) It is capable of visualizing a large number of itemsets or rules by
displaying only those ones whose items are selected by the user. 3) The closure properties of frequent itemsets and association rules are inherently supported such that the implied ones are not
displayed. Usefulness of this approach is demonstrated through examples.
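The basic visual idea is easy to sketch (an illustration only: straight line segments stand in for the paper's continuous polynomial curves, and the items and rules below are made up):

import matplotlib.pyplot as plt

items = ["bread", "butter", "milk", "eggs"]                 # hypothetical items
rules = [["bread", "butter"], ["bread", "milk", "eggs"]]    # hypothetical rules

fig, ax = plt.subplots()
for rule in rules:
    # one parallel coordinate per position in the rule; the item picks the height
    xs = list(range(len(rule)))
    ys = [items.index(it) for it in rule]
    ax.plot(xs, ys, marker="o", label=" & ".join(rule))

ax.set_xticks(range(max(len(r) for r in rules)))
ax.set_yticks(range(len(items)))
ax.set_yticklabels(items)
ax.set_xlabel("parallel coordinate (position of item in rule)")
ax.legend()
plt.show()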
Index Terms:
Association rules, data visualization, data mining, interactive data exploration, mining methods and algorithms.
Li Yang, "Pruning and Visualizing Generalized Association Rules in Parallel Coordinates," IEEE Transactions on Knowledge and Data Engineering, vol. 17, no. 1, pp. 60-70, Jan. 2005, doi:10.1109/
|
{"url":"http://www.computer.org/csdl/trans/tk/2005/01/k0060-abs.html","timestamp":"2014-04-24T01:07:13Z","content_type":null,"content_length":"59868","record_id":"<urn:uuid:28243260-0251-4021-a6cf-e5b8d9693145>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00073-ip-10-147-4-33.ec2.internal.warc.gz"}
|
, 1994
"... We describe a paradigm for numerical computing, based on exact computation. This emerging paradigm has many advantages compared to the standard paradigm which is based on fixed-precision. We
first survey the literature on multiprecision number packages, a prerequisite for exact computation. Next ..."
Cited by 95 (10 self)
We describe a paradigm for numerical computing, based on exact computation. This emerging paradigm has many advantages compared to the standard paradigm which is based on fixed-precision. We first
survey the literature on multiprecision number packages, a prerequisite for exact computation. Next we survey some recent applications of this paradigm. Finally, we outline some basic theory and
techniques in this paradigm. 1 This paper will appear as a chapter in the 2nd edition of Computing in Euclidean Geometry, edited by D.-Z. Du and F.K. Hwang, published by World Scientific Press, 1994.
1 1 Two Numerical Computing Paradigms Computation has always been intimately associated with numbers: computability theory was early on formulated as a theory of computable numbers, the first
computers were number crunchers, and the original mass-produced computers were pocket calculators. Although one's first exposure to computers today is likely to be some non-numerical application,
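The contrast the paper draws between fixed precision and exact computation is easy to demonstrate (a minimal sketch using only Python's standard library; the example values are arbitrary):

from fractions import Fraction

# fixed precision: repeated operations accumulate rounding error
x = 0.1 + 0.1 + 0.1
print(x == 0.3)              # False with binary floating point

# exact computation: rationals carry no rounding error
y = Fraction(1, 10) + Fraction(1, 10) + Fraction(1, 10)
print(y == Fraction(3, 10))  # True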
- , 2000
"... In this paper we review the current state of the problem solving environment (PSE) field and make projections for the future. First we describe the computing context, the definition of a PSE and
the goals of a PSE. The state-of-the-art is summarized along with sources (books, bibliographics, web sit ..."
Cited by 16 (2 self)
In this paper we review the current state of the problem solving environment (PSE) field and make projections for the future. First we describe the computing context, the definition of a PSE and the
goals of a PSE. The state-of-the-art is summarized along with sources (books, bibliographics, web sites) of more detailed information. The principal components and paradigms for building PSEs are
presented. The discussion of the future is given in three parts: future trends, scenarios for 2010/2025, and research
- IEEE Computational Science & Engineering Mag, 1997
"... this article we describe our view of the area and relate it to the problem solving processes in science and engineering. Realities ..."
, 1995
"... In 1991, the US Congress passed the High Performance Computing and Communications bill, commonly known as the HPCC bill, enshrining the Grand Challenges as national priorities. The very nature
of these problems require the multidisciplinary teamwork of engineers plus computer, mathematical and physi ..."
Cited by 3 (3 self)
In 1991, the US Congress passed the High Performance Computing and Communications bill, commonly known as the HPCC bill, enshrining the Grand Challenges as national priorities. The very nature of
these problems require the multidisciplinary teamwork of engineers plus computer, mathematical and physical scientists. But many important scientific and engineering problems are solved daily on
workstations---these were dubbed the "petty challenges". Both classes of problem are demanding computational systems although quite different from non-scientific systems. We review a philosophical
background for CSE, using this development to point out how seemingly innocuous decisions made by engineers and scientists can have disastrous results. Hence, software engineers should see CSE as a
professional challenge. Our program is based on studying applications, the algorithms to solve problems arising in those applications, and the mapping of those algorithms to architectures. Using
Computing Reviews...
- In Papers of the Twenty-Sixth SIGCSETechnical Symposium on Computer Science Education , 1995
"... ion Bisection method Procedures Weightlifting kinematics Program structure Weightlifting kinematics Simple statements Rod stacking Functions Sliding block Conditionals Root finding While loops
Numerical integration For loops Heat flow Arrays Curve fitting Array parameters Visualizing heat flow File ..."
Cited by 1 (0 self)
ion. Bisection method: Procedures; Weightlifting kinematics: Program structure; Weightlifting kinematics: Simple statements; Rod stacking: Functions; Sliding block: Conditionals; Root finding: While loops; Numerical integration: For loops; Heat flow: Arrays; Curve fitting: Array parameters; Visualizing heat flow: File input/output (Table 1: Summary of revised lessons; each motivating problem is paired with the concept it introduces). ... related course materials have observed the
susceptibility of such material to "upward creep" in scope and difficulty. The typical revised lesson uses one motivating problem to illustrate one or two key computing or problem-solving concepts.
The most complicated mathematics that we use in any of the lessons is simple integration. Each of our original lessons introduced several new concepts and used them to solve one or two representative
problems. Even so, we decided that many of the lessons were not sufficiently centered around the problems. In retrospect, this is evident from the breakdown of the lessons listed previously: they are
"... s is typically a one- or two-year program in a field that is in some sense related to, but different from, the discipline chosen by the student in the first two cycles. Hence the third cycle is,
in the current context, the only framework for the organization of interdisciplinary CS&E programs. Stude ..."
s is typically a one- or two-year program in a field that is in some sense related to, but different from, the discipline chosen by the student in the first two cycles. Hence the third cycle is, in
the current context, the only framework for the organization of interdisciplinary CS&E programs. Students interested in CS&E can currently choose from a small range of advanced programs such as
Computer Science and Industrial Mathematics, Biomedical and Clinical Engineering, Biostatistics and so on. For the sake of completeness, we should mention that besides the university engineering
degree, which is referred to as civil engineering degree, there is also an industrial engineering degree. The educational program for industrial engineers takes four years and is taught at
non-university schools for higher education. However, in the sequel of the text we shall focus only on university curricula. May we remark that worldwide the Belgian educational system has a
"... This paper briefly introduces the field computational science and engineering (CS&E), and is an attempt to get other computer scientists more interested in CS&E related activities. It starts by
giving a short outline of the increased international activity in the field. Several of the definitions of ..."
This paper briefly introduces the field computational science and engineering (CS&E), and is an attempt to get other computer scientists more interested in CS&E related activities. It starts by
giving a short outline of the increased international activity in the field. Several of the definitions of CS&E that have been given are presented, with an emphasis on how the field is related to
computer science. The role of supercomputers is discussed, and we try to identify important challenges for future CS&E education and research, again with a hope to attract computer scientists.
"... Committee on Cyberinfrastructure, all opinions, findings, and recommendations expressed within it are those of the CLWD Task Force and do not necessarily reflect the views of ..."
Committee on Cyberinfrastructure, all opinions, findings, and recommendations expressed within it are those of the CLWD Task Force and do not necessarily reflect the views of
, 2012
"... Version 1.25 This bibliography records publications of John R. Rice. Title word cross-reference #4 [Ric84c]. ab x + c [Ric60b, Ric61b]. ADI [LR68b]. a ∏ x − rix + si [dBR63]. Erfc(x) [Ric64f]. Γ
(x) [Ric64f]. L1 [HR65, Ric64d, Ric64c]. L ∞ [Ric64f]. O(h 4) [HRV86, HVR88]. ..."
Version 1.25 This bibliography records publications of John R. Rice. Title word cross-reference #4 [Ric84c]. ab x + c [Ric60b, Ric61b]. ADI [LR68b]. a ∏ x − rix + si [dBR63]. Erfc(x) [Ric64f]. Γ(x)
[Ric64f]. L1 [HR65, Ric64d, Ric64c]. L ∞ [Ric64f]. O(h 4) [HRV86, HVR88].
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=25873","timestamp":"2014-04-19T13:15:27Z","content_type":null,"content_length":"33440","record_id":"<urn:uuid:460e46db-1f1f-47ec-a4ac-9bee8f9ab127>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00026-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Do Mathematics really help us to think and open our minds? | A conversation on TED.com
Well, I put that TED talk there because it was the only one I found that's somewhat related to this topic.
So a year ago I was in secondary school, where they teach many subjects but very little mathematical content. I saw this as a problem, since my intention after leaving school was to pursue an engineering career, which obviously has a lot of math. So I studied and practiced hard to pass the entrance examination. After weeks of study, I noticed that I was starting to think in a clearer and more analytical way.
I'm sure that more advanced mathematics can help us when it comes time to think.
Maybe it was a growth phenomenon? Or was it caused by mathematics?
Nov 17 2011: Mathematics at its earlier levels is one of the fields that helps cultivate an eye for patterns.
Mathematics at its higher levels centers on proof, which creates a model of precise deductive reasoning - the examination of what conclusions can be drawn definitively from premises.
Both aspects of exposure to mathematics have widespread application in situations that require logical thinking.
Mathematics also, for many students, offers an excellent opportunity to struggle with challenge and then, with adequate perseverance, to succeed. This practice and feeling of perseverance and
ultimate success may create a disposition to tackle other challenges across domains.
Nov 17 2011: Hi Tomas ~
Excellent observation and question.
Mathematics can be an "exercise" (and a very good one) for the brain.
My Mom, who loves Math, and was a math teacher - actually does Suduko everyday - she feels its wakes up her mind and sharpens her thought processes. I think that's what you're saying about the
changes you observe in yourself.
From a personal perspective, it's not something that comes easily to me - so I tend to get very distracted when engaging in most math related tasks, which means I have real difficulty focusing. I
did discover though, that when I can relate the math problems to be solved to something I am very interested in or understand, suddenly I feel as though a "light goes on" and I have a much easier
time with math.
My children are both lucky enough to have really gifted mathematics teachers - they teach in a manner that the kids relate to, and it makes a ton of difference regarding how quickly and completely they grasp the concepts and are able to put them into practice. A critical component of this is how they teach and that they really love kids and want to see them learn - so they are happy to work with them until they understand. My older daughter's teacher even taught me a few things recently - and did so enthusiastically, without an ounce of criticism. I have to say, it made a huge difference in the way I understood concepts I hadn't reviewed in over 20 years! And it made me wish I had a teacher like her when I was in school - methods have changed so very much! For the better, I think :)
I think a key part to it IS practice - like any exercise, the more you do - the better you get. Having a goal behind the repetition is also motivating - you are a great example of that!
Good luck to you! It sounds like you are well on your way to a wonderful career!
Nov 16 2011: A very nice question, Tomas!
I've been experiencing the same during the past 3 years! I started my maths undergrad 3 years ago.
In my mind, every thought structure has to have some physical representation inside of my skull.
Thought structures develop over time, and studying, like any learning process, alters the mind and the
connections within.
It is less maths than logic you have to use in order to understand the mathematical formulas.
Therefore I would say as long you try to understand the maths and the motivations and structural considerations
for the construction of formulas and proofs, you improve thinking logically.
However! I realized for myself, after reaching a decent logical level, it is hard for some people to follow some of my considerations. I have to learn now how to formulate my thoughts so that
others understand what I tell them.
Finally: I think it is important not to forget to grow the heart, for logic does not help you at all with your personal relationships with fellow humans (and with women in particular).
Cheers, mate! Good luck for your studies!
PS: To the ladies: No offense! To me men are as capable of logic as women are ;)
Nov 16 2011: You are a smart guy! You think and observe, wow! That is as far as I can see... first of all, I know nothing about this :)
I used my 'usual' approach to google an opinion or dissertation of some sort regarding your sophisticated subject.
Here are two links that might help you in some way.
My feeling is that approaching life with numbers is like a bean-counter running a company. Human context and relationships might be ignored or undervalued. Seeing the earth from a plane is great, but that leaves the feeling of connection and love not well present. See what I mean? It is in a way 'detached'.
My feeling is mathematics can represent or explain a situation or system but has more difficulty with feelings, relationships and spiritual values.
Nov 23 2011: Humans are networked beings and creativity is a network function not a single discipline. I'm not sure we do ourselves a service by preferring one discipline over another.
Mathematics is just one of our languages. It is no more or less creative than other languages. Each of us have talents toward one or more languages and creativity comes from how well we use these
languages to collaborate and bring multiple disciplines to bear on any given problem.
Consider that the most creative artist has nothing without a cultural context of collaboration among other artists and with potential audiences of said art.
Nov 22 2011: The problem with mathematics is the way it is taught. Just because there is a linear way of answering a question doesn't mean there has to be a linear question. For example: If Sally has five apples and you eat two, how many are left? This is a dumbed-down version of a dumb question. Making it more creative allows people options and relevance. The same question changed to be relevant: There are five questions on this test and you are on question 3; how many questions are left? This is relevant to the task at hand. It allows them to see the problem and then actually make it a reality, which is creative in its own right. Let's make this even harder and more relevant.
If a ball is rolling along a table at 50 inches per second, how long would it take to get to the end of a 50 foot table . . .
That was boring just writing it.
The change: You are late for work and driving 50 miles per hour to get to work, which is 50 miles away. How much gas does it take to get there if your car uses 10 miles per gallon? What speed would you have to go to get there 10 minutes earlier? (The arithmetic is sketched below.)
Changing the equation allows people to put themselves into the problem; making it relevant makes it creative, and then the question can be answered in a creative way. Even though there is a clear logical way of answering it, you still feel like you are in control of how to get to the answer, which is the goal of creativity.
Math can help us open our mind if it is applied to a real world personal situation and allows us the opportunity to engage with it.
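For what it's worth, the commuting example above works out like this (a quick sketch of the arithmetic only):

distance = 50.0   # miles to work
speed = 50.0      # miles per hour
mpg = 10.0        # miles per gallon

gas = distance / mpg                          # 5.0 gallons
time_min = distance / speed * 60              # 60 minutes at 50 mph
faster = distance / ((time_min - 10) / 60)    # speed to arrive 10 minutes earlier
print(gas, time_min, faster)                  # 5.0 gallons, 60.0 min, 60.0 mph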
Nov 22 2011: Life is all about applied mathematics.
When you buy a car, follow a recipe, or decorate your home, you're using math principles. People have been using these same principles for thousands of years, across countries and continents.
Whether you're sailing a boat off the coast of Japan or building a house in Peru, you're using math to get things done.
How can math be so universal? First, human beings didn't invent math concepts; we discovered them. Also, the language of math is numbers, not English or German or Russian. If we are well versed
in this language of numbers, it can help us make important decisions and perform everyday tasks. Math can help us to shop wisely, buy the right insurance, remodel a home within a budget,
understand population growth, or even bet on the horse with the best chance of winning the race.
In this exhibit, you'll look at the language of numbers through common situations, such as playing games or cooking. Put your decision-making skills to the test by deciding whether buying or
leasing a new car is right for you, and predict how much money you can save for your retirement by using an interest calculator.
Reference: http://www.learner.org/interactives/dailymath/
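As one concrete instance of the interest-calculator example above, compound interest reduces to a single formula (a minimal sketch; the principal, rate and horizon are made-up numbers):

# compound interest: A = P * (1 + r/n) ** (n * t)
principal = 10000.0   # initial savings
rate = 0.05           # 5% annual interest
n = 12                # compounded monthly
years = 30

amount = principal * (1 + rate / n) ** (n * years)
print(round(amount, 2))   # roughly 44677.44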
Nov 22 2011: Without the perception of a 'problem' there is no call for 'mind tools' such as logic or maths.
But, if a problem IS perceived, maths is the 'tool of choice' that humans use to solve them rationally.
'Problems' are things or situations that appear to threaten our safety, stability or future success.
Things that threaten our safety, stability or future success cause us to experience fear emotions and upset.
So maths is merely a tool used to avoid feeling upset and irrational should we NOT solve perceived problems.
So the human obsession with the rationality of maths is generated by the (adult) human fear of upset, or better put, 'aversion to irrationality'.
When we engage the courage necessary to 'not solve' through letting go of our rationale and instead face our fear of irrationality and upset, problems dissolve. So failing to let go of our
rationale or 'judgments', is failing to face fear.
Accepting our fundamental irrationality ought to be easy; it re-unites us with 'home', since 'fundamentally irrational' is the 'birth state' from which we each emerged.
So maths cannot open the mind. When there is a percieved 'problem', the mind is already closed. Maths cannot open it, it merely moves the mind away.
Nov 21 2011: Who cares? Math is beautiful and fun! Does any art expand our mind? Can you really prove it? Math is about understanding, not just solving a problem. Sometimes you do math and you aren't looking to solve anything; as you learn to play with an equation you learn all kinds of things, even things about yourself. Math is amazing for many reasons, including that there are so many ways to do anything, and it truly is simple.
Nov 21 2011: Mathematics is logic, and skill in using mathematics means skill in approaching a problem in an organized, logical way. That is clearly an advantage in planning and resolving issues.
I see that some commenters seem to equate mathematics with the simple rules learned in elementary and intermediate school. Those simple tools are of course essential, but the tools of more
advanced mathematics allow you to solve problems that those unskilled in math can't even begin to approach. Every addition of skills in higher mathematics gives access to new solutions and allows
you to see more possibilities. It may or may not make you more creative, but it will give wings to the creativity you have. At the highest levels, pure mathematical research is of course highly
creative, and has nothing to do with the tools we've all learned.
I'd like to beat the drum for a branch of mathematics that is not among the esoteric and theoretical, namely statistics. I think everyone should have at least a basic course in statistics,
because it corrects the way one thinks about truth. Statistics gives you the understanding to grasp that "truths" you hear or read in the media are often at best likelihoods.
Nov 21 2011: It probably depends on the person more than the method.
Howard Gardner talks about multiple intelligences, which is really just grouping the ways like-minded people think in general ways. There is probably some merit in this.
Personally, mathematics was like a foreign, dead language to me and still is.
Nov 21 2011: In fact, mathematics is the language of physics. And there is no doubt about the "open mind" of many physicists. It is almost impossible to imagine the ideas they have had. And the only way to transmit those ideas was through mathematics. On the other hand, "pure" mathematics also tries to explain abstract concepts, and to imagine those concepts you need a lot of imagination and an open mind.
In art, perhaps in a hidden way, there is always a kind of logic (normally named "style"). That means that the underlying concept is totally logical, even if it doesn't look like it. Is general relativity "logical" or intuitive? Not at all.
I think that the main way to get an open mind is to think as much as possible, and mathematics is one good way for that.
Nov 21 2011: Tomas,
As others have said, mathematics basically uses a specific algorithm to resolve different exercises (of course, there are lots of algorithms, formulae and theorems because there is a large
spectrum of domains and areas of mathematics). In other words, you learn to apply a pattern to various problems, which leads to convergent thinking. This is useful because there are so many
problems that you can't solve each one of them in a different way. So in this way, it does develop your analytical process of thinking, it helps you make rational connections and optimize the
path to a solution.
However, creativity in its main meaning is about divergent thinking, which is the opposite of what maths help you improve. Here, you find multiple solutions to a single problem (all kinds of
problems, not necessarily involving numbers and calculations), and you choose the one that represents the best compromise.
So in my opinion, the fact that you become more open-minded and creative is indirectly related to better understanding and knowing mathematics (for example, your passion can drive you to research
a specific mathematician; you find out more about his life and work and you can make a better connection between the rational side and how he did it).
I'm studying engineering like you, so I'm more interested in mathematics and physics more than the other subjects and this has indeed helped me rationalize most of the things in my life. I can
advise you though to not become hung up on it because it might block/lead to a slower development of your interpersonal abilities (because you can't find reason in feelings, emotions and so on).
All the best!
Nov 21 2011: Great comments; most comments touched the part where we seem to look at math as a sort of formulaic process. IMHO, the joy of math lies in its application in daily life. Example: I am a big fan of geometry, and when I drive I use it the most, like approximating turning radius, braking length etc. It is as if the math I learnt in college and engineering is in play as a reflex, a recreated formula set used to analyze things out of habit in the head.
That is sort of a kick, when you are able to apply the understanding of math in our daily life.
Maths making us open minded? I am still not sure how a 'clinical sense of being analytical' is related to 'being open minded' about things around. Maybe an extra degree of awareness is available (from proof of math fundamentals) to take an open-minded decision.
My wife and I have a small interest in machine learning, patterns and such things. While working on these subjects we particularly found math fun in application and less fun on paper. :)
Please correct me if this idea seems skewed.
Nov 21 2011: Speaking for myself, probability theory (and statistics) really opened my mind quite a lot: it teaches you to think inductively (in a correct, non-intuitive way... the intuitive one is too often wrong), and to express everything between any dichotomy (instead of 1 and 0, you have all values in between, and can compare uncertainties).
Mathematics is a great way to learn to think abstractly. And it has beauty and elegance.
Jake: http://comment.rsablogs.org.uk/2011/10/24/rsa-animate-divided-brain/
=> please revise your ideas around lateralization of the brain
Nov 19 2011: Mathematics uses the right side of your brain which is not creative. Math is not creative. However, I believe that learning math will allow you to think through a more realistic and
rational approach to problems in everyday life. However, if you want to open your mind to new ideas and innovations you need to use the left side of your brain, which involves music, art, science, literature, etc. Together, both sides of your brain keep your mind in check and provide you with a perfect amount of both creativity and rationality.
Nov 20 2011: I don't entirely agree with what you propose. Mathematics actually uses creativity. The mathematicians' theorems were created based on their creativity and imagination. They are fundamental. Someone who tries to solve a complex geometry problem must inevitably resort to imagination and ideas. Then comes the "technical" process. Thanks for your reply.
Nov 20 2011: When you solve a mathematics problem, you use a set of formulas/theorems that have already been discovered. Geometry is no different. When you want to solve a geometry
problem, depending on the problem, you graph it or you solve it using a set of formulas/theorems. Now, the people who created those theorems and formulas needed to think creatively, but
most people do not find formulas/theorems.
Nov 20 2011: It is not just a matter of invoking a formula or following a particular theorem. That would not be efficient. What you propose is a prelude to the ideas. Problems in calculus, geometry and algebra need ideas; I think that is irrefutable. If you ever have the opportunity to solve a geometry problem you'll notice what I mean. Also, you said that the "left side of your brain ... involves music, art, science, literature, etc." It is important to clarify that math is one of the most important sciences, not to say the most important.
I recommend you a nice book:
Nov 22 2011: I am taking Geometry right now. I have done many geometry problems, some of which I did today. Solving math, whatever type it may be, involves logical thinking and formulas to solve a problem. The person who thinks of formulas or who comes up with math problems is creative; the person who solves them is not.
Nov 23 2011: Isn't geometry fun? It was one of my favorite subjects in school (in the 1950s). No other subject illustrates logic so clearly, and when you really understand geometry
you'll find thousands of times throughout life that a problem, especially (but not only) in the physical sciences, engineering and mathematics, can be reduced to geometric
relationships, which improves insight into the problem and helps solutions to spring forth. Having the tools of geometry, trigonometry, and analysis at hand allows you to be creative
in problem solving in ways that you can't be without that knowledge. Enjoy your study of geometry, which is a study of space itself. Nothing is more fundamental.
Nov 19 2011: Human beings are natural-born problem solvers. We are constantly solving problems, whether it's getting to an appointment on time (a time-management problem), getting to a place (traversing from one point to another), or placing a football in the hands of a receiver 30 yards down the field (a physics problem). Underneath this natural problem-solving ability is a process our brains perform of combining real-time information to adapt to the environment around us. Mathematics is a vehicle (or means to an end, as Debra says) that opens up our mind to understanding how we are solving these problems. As we gain insight into how we solve simple problems, we can combine approaches and techniques to solve larger, more complex problems and turn ideas into reality. If we focus just on the how, such as equations, algebra, and calculus, math is perceived as hard and unintelligible. If we look at math as our portal to understanding what we as natural-born problem solvers are accomplishing, we gain understanding as to why it helps us adapt to the world around us.
Nov 19 2011: For me mathematics is a means to an end. I am not sure that it is the math itself but the things that it illustrates and makes apparent that opens my mind. For example: learning to
do statistics helped me to understand a lot about the world of people by showing me what was true or solid and what had no basis in fact.
Nov 19 2011: Mathematics is the most important part of life. It teaches you not only calculations but also the lessons of life.
Mathematics can teach you the real meaning of practice; actually, I feel that mathematics is an alternative to practice.
Mathematics can teach you a lesson in patience.
Mathematics is life, and it is everywhere. Mathematics gives you a different way of thinking.
I LOVE MATHEMATICS...
Nov 17 2011: Mathematics is a language as well as a means to develop creative ideas in a precise and understandable way. It can be an aid in understanding at least some aspects of any subject, from psychology to physics, but it has some limitations. For instance, every logical system is either inconsistent or incomplete (Kurt Goedel). Yet compared to most other languages it has the major advantages of precision and conciseness. It is the language of science. Why? Because science is about measuring things, and math is a big help in getting that right. Wittgenstein believed that math was tautologous, and it may be, but so are most languages when one concerns oneself with definitions and tries to be accurate. It has been my lifelong companion, and I hope will continue to be as long as I can continue to pursue truth.
|
{"url":"http://www.ted.com/conversations/7193/do_mathematics_really_help_us.html","timestamp":"2014-04-18T06:01:00Z","content_type":null,"content_length":"97432","record_id":"<urn:uuid:6b57a158-3154-4644-95e0-a5fbd5dfa260>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00007-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Linear Momentum: Conservation of Momentum
What happens when a group of particles are all interacting? Qualitatively speaking, each exerts equal and opposite impulses on the others, and though the individual momentum of any given particle might change, the total momentum of the system remains constant. This phenomenon of momentum constancy describes the conservation of linear momentum in a nutshell; in this section we shall prove the conservation of linear momentum by using what we already know about momentum and systems of particles.
Momentum in a System of Particles
Just as we first defined kinetic energy for a single particle, and then examined the energy of a system, so shall we now turn to the linear momentum of a system of particles. Suppose we have a system
of N particles, with masses m_1, m_2, ..., m_N. Assuming no mass enters or leaves the system, we define the total momentum of the system as the vector sum of the individual momenta of the particles:
P = p_1 + p_2 + ... + p_N
  = m_1*v_1 + m_2*v_2 + ... + m_N*v_N
Recall from our discussion of center of mass that:
v_cm = (m_1*v_1 + m_2*v_2 + ... + m_N*v_N)/M,
where M = m_1 + m_2 + ... + m_N is the total mass of the system. Comparing these two equations we see that:
P = M*v_cm
Thus the total momentum of the system is simply the total mass times the velocity of the center of mass. We can also take a time derivative of the total momentum of the system:
dP/dt = M*(dv_cm/dt) = M*a_cm
Recall also that, for a system of particles,
F_ext = M*a_cm
Clearly, then:
F_ext = dP/dt
Don't worry if the calculus here is complex. Though our definition of the momentum of a system of particles is important, the derivation of this equation only matters because it tells us a great deal
about momentum. When we explore this equation further we will generate our principle of conservation of linear momentum.
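As a quick numerical check (my addition, not part of the original lesson), the identity P = M*v_cm can be verified for an arbitrary collection of particles:

    import numpy as np

    rng = np.random.default_rng(0)
    m = rng.uniform(1.0, 5.0, size=4)        # particle masses
    v = rng.normal(size=(4, 3))              # particle velocities in 3D

    P = (m[:, None] * v).sum(axis=0)         # total momentum: sum of m_i * v_i
    M = m.sum()                              # total mass
    v_cm = (m[:, None] * v).sum(axis=0) / M  # velocity of the center of mass

    print(np.allclose(P, M * v_cm))          # True: P = M * v_cm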
Conservation of Linear Momentum
From our last equation we now consider the special case in which F_ext = 0. That is, no external forces act upon an isolated system of particles. In this situation the rate of change of the total momentum is zero, meaning the total momentum itself is constant, which proves the principle of the conservation of linear momentum:
When there is no net external force acting on a system of particles the total momentum of the system is conserved.
It's that simple. No matter the nature of the interactions that go on within a given system, its total momentum will remain the same. To see exactly how this concept works we shall consider an example.
Conservation of Linear Momentum in Action
Let's consider a cannon firing a cannonball. Initially, both the cannon and the ball are at rest. Because the cannon, the ball, and the explosive are all within the same system of particles, we can
thus state that the total momentum of the system is zero. What happens when the cannon is fired? Clearly the cannonball shoots out with considerable velocity, and thus momentum. Because there are no
net external forces acting on the system, this momentum must be compensated for by a momentum in the opposite direction as the velocity of the ball. Thus the cannon itself is given a velocity
backwards, and total momentum is conserved. This conceptual example accounts for the "kick" associated with firearms. Any time a gun, a cannon, or an artillery piece releases a projectile, it must
itself move in the direction opposite the projectile. The heavier the firearm, the slower it moves. This is a simple example of the conservation of linear momentum.
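As a concrete instance of this argument (with made-up numbers, added here for illustration): starting from zero total momentum, the recoil velocity follows directly from m_cannon*v_cannon + m_ball*v_ball = 0.

    m_ball, v_ball = 5.0, 200.0   # kg, m/s -- illustrative values only
    m_cannon = 1000.0             # kg

    # Total momentum is zero before and after firing, so
    #   m_cannon * v_cannon + m_ball * v_ball = 0.
    v_cannon = -m_ball * v_ball / m_cannon
    print(v_cannon)               # -1.0 m/s: the cannon recoils opposite the ball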
By both examining the center of mass of a system of particles, and developing the conservation of linear momentum we can account for a great deal of motion in a system of particles. We now know how
to calculate both the motion of the system as a whole, based on external forces applied to the system, and the activity of the particles within the system, based on momentum conservation within the
system. This topic, dealing with momentum, is as important as the last one, dealing with energy. Both concepts are universally applied: while Newton's Laws apply only to mechanics, conservation of
momentum and energy are used in relativistic and quantum calculations as well.
|
{"url":"http://www.sparknotes.com/physics/linearmomentum/conservationofmomentum/section3.rhtml","timestamp":"2014-04-20T03:24:05Z","content_type":null,"content_length":"60528","record_id":"<urn:uuid:594aa9b5-2961-4075-9d7a-ba1ee5d20c9b>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00390-ip-10-147-4-33.ec2.internal.warc.gz"}
|
STEP I, II, III 2011 Solutions Thread.
Since most of the discussion has died down now and the most common thing being posted in those threads is now solutions, I thought I may as well make a thread for it. If you spot any mistakes, feel
free to post in this thread and we'll make changes. Cheers! STEP I Paper here. STEP II Paper here. STEP III paper here. Completed - Nice one, everyone! STEP I: 1. Solution by Zuzuzu 2. Solution by
Zuzuzu 3. Solution by Zuzuzu 4. Solution by Zuzuzu 5. Solution by Farhan.Hanif93 6. Solution by Zuzuzu 7. Solution by Farhan.Hanif93 8. Solution by Zuzuzu 9. Solution by jbeacom600 10. Solution by
brianeverit 11. Solution by Farhan.Hanif93 12. Solution by DFranklin 13. Solution by Farhan.Hanif93 STEP II: 1. Solution by mikelbird 2. Solution by matt2k8, Solution by mikelbird 3. Solution by
mikelbird 4. Solution by mikelbird 5. Solution by mikelbird 6. Solution by Farhan.Hanif93, Solution by mikelbird 7. Solution by mikelbird 8. Solution by mikelbird 9. Solution by cpdavis 10. Solution
by Farhan.Hanif93 11. Solution by DFranklin 12. Solution by Farhan.Hanif93 13. Solution by ben-smith STEP III: 1. Solution by Farhan.Hanif93 2. Solution by mikelbird 3. Solution by mikelbird 4.
Solution by mikelbird 5. Solution by mikelbird 6. Solution by Farhan.Hanif93, Solution by piecewise 7. Solution by Schnecke 8. Solution by mikelbird 9. Solution by jbeacom600 10. Solution by
brianeverit 11. Solution by DFranklin 12. Solution by ben-smith 13. Solution by DFranklin Solutions written by TSR members: 1987 - 1988 - 1989 - 1990 - 1991 - 1992 - 1993 - 1994 - 1995 - 1996 - 1997
- 1998 - 1999 - 2000 - 2001 - 2002 - 2003 - 2004 - 2005 - 2006 - 2007 - 2008
STEP III Q7 (i) Apologies in advance for any mistakes, I rushed this a little bit.
OK, let's finish STEP II. Here's Q11 (I hope. This was very fiddly even before I started Latexing it...) To start, let's ignore the vertical dimension, and try to get a mental picture of where i and j
go. It's not that hard. If we do the normal 2D thing of j being the y-axis and i being the x-axis, then OA lies on the Y axis, OC is at roughly 4 o'clock (i.e. x > 0, y < 0) and OB is at roughly 8
o'clock (i.e. x < 0, y < 0). As we progress, we're going to have quite a few possible sign ambiguities (basically we're going to want to work out a unit vector (sin t, cos t) knowing only what tan t
is); this should be obvious given the diagram and a little thought. (i) Let the unit vector in direction PB = ai + bj + ck. If we write h for the size of the horizontal component (so that h^2 = a^2 +
b^2), then we have h^2+c^2 = 1 (since we have a unit vector), and also . So and so . So in fact we have (so that , since c > 0) and so that . Now consider a/b (keeping in mind OB is at roughly 8
o'clock). We have , so , so . So , so and since "x < 0" we choose a = -1/3. Then . Thus our unit vector is as desired. (ii) Having done (i), it's fairly clear the plan should be to find unit vectors
for PA and PC also. The force corresponding to each string will just be U, V, W times the appropriate unit vector. So again, write the unit vector in direction PA in the form ai + bj + ck. In this
case we know a = 0 (since OA is in the direction of j), and we also know . So , from which , giving a final unit vector . Finally write the unit vector in direction PC in the form ai + bj + ck.
Again, let h^2 = a^2 + b^2, then and so . Now we want to find a and b. We have a^2+b^2 = 3/4. Considering the position of C we have a > 0, b < 0 and . So . So . So . So . Thus our final unit vector
is And of course the unit vector in the direction of W is just -k. So adding these all up, we must have: Dotting with i, we get: , and so . Dotting next with j we get: . Replacing U with we end up
with . Finally dot with k to get: . Replace U and T with the appropriate multiples of V and we find and so . So finally .
STEP I - Q11 Draw yourself the necessary diagram. (I'll attach a copy tomorrow, if I get time) For this question; let AP=a, BP=b and define the point where the line perpendicular to AB through P
meets AB as X. Let the tensions in the string AP and BP be of magnitude T and take moments about G. It follows that , as required. Note that the length of the string is given by . Using the relation
we've just found; the fact that ; and the fact that alpha is acute, it follows that and thus that . It's worth noting that: By considering triangle APB, and . Therefore . Note that, if is the angle
of inclination of the bar to the horizontal, then . By considering triangle APX (also right angled), it follows that . Similarly, . Note that and considering triangle XPG, it follows that , as required.
STEP I, Q12: (i) Can always give change unless the first person has a £2 coin. There are m+1 people, only one of which has a £2 coin. Therefore p = m/(m+1). (ii) As in (i), we fail if the first person has a £2 coin. This has probability 2/(m+2). We also fail if the first person has a £1 coin but the next 2 people have £2 coins. This has probability . In all other cases we succeed. So p(fail) = . So p(succeed) = 1 - p(fail) = as desired. (iii) So now, the failure cases are: 2, any, ... with p = 3/(m+3); 1,2,2, any, with p = ; 1,1,2,2,2, any, with p = ; 1,2,1,2,2, any, with p = . Adding these, p(fail) = . Then p(succeed) = 1 - p(fail) = as desired.
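Since several of the intermediate expressions above were lost in transcription, here is a quick Monte Carlo check (my own, not from the thread) of this kind of probability: m customers hold £1 coins, k hold £2 coins, the ticket costs £1, and the seller starts with no change.

    import random

    def p_success(m, k, trials=200_000, seed=1):
        """Estimate P(the seller can always give change)."""
        rng = random.Random(seed)
        ok = 0
        for _ in range(trials):
            queue = [1] * m + [2] * k
            rng.shuffle(queue)
            ones = 0
            for coin in queue:
                if coin == 1:
                    ones += 1        # seller keeps the £1 coin
                elif ones == 0:
                    break            # no £1 available as change: failure
                else:
                    ones -= 1
            else:
                ok += 1
        return ok / trials

    m = 10
    print(p_success(m, 1), m / (m + 1))  # part (i): both ~0.909
    print(p_success(m, 2))               # part (ii) estimate
    print(p_success(m, 3))               # part (iii) estimate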
STEP I Q9: To do this question, we need only the equation for the trajectory of the particle, in terms of , and the initial velocity, which I choose to be . We let as time. It is quite easy to derive
the equation of motion, which is: . Now and both lie on the trajectory of the particle. Hence, on rearranging: (*) Now, and we may assume that so that if then . So and we may divide as follows:
Rearranging gives: and so we have that: Now rearranging (*) gives . Now it is straightforward enough to show that the range of the particle is given by : . We can rewrite as Hence, the range is and
so on cancelling out like terms and simplifying.
STEP III Q12 let denote the kth derivative of f(t) w.r.t. t. To find the expected value of Y we have to use the fact that so, using the chain rule: as required. Now, to find the variance: as
required. For the next part, notice the perfect fit between what the examiners have asked us to prove in the first part and the scenario of the second, the only thing left to see that we haven't been
explicitly told is that is a random variable denoting the outcome of the ith toss (1=heads, 0=tails). Using the first part, since G(t) is the pgf of N and H(t) is the pgf of Xi then the pgf of Y is G
(H(f)) so we need to find H(t) and G(t): To find the expected value: (You can alternatively do this by using . I have done both methods and I got the same answer for both). To find the variance, use
the first few steps in your derivation of Var(Y) in the first bit of the question to give yourself a shortcut where you can do it in terms of Y's pgf alone: . Now to find P(Y=r). First, consider G(H(t)): our most immediate problem is the 't' term, which makes it difficult to get the term we want on its own, so let's differentiate r times and then set t=0 to isolate the P(Y=r) term: which means
we only have to divide both sides by r! to get the required result:
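The specific distributions in the question are lost in the transcription above, so the following sketch (my own) just checks the general fact being used -- for a compound sum Y = X_1 + ... + X_N with pgf G(H(t)), E[Y] = E[N]E[X] -- using an arbitrary illustrative choice of N ~ Poisson(3) and fair coin tosses:

    import numpy as np

    rng = np.random.default_rng(0)
    lam, p = 3.0, 0.5                    # illustrative parameters, not the exam's

    N = rng.poisson(lam, size=200_000)   # random number of tosses
    Y = rng.binomial(N, p)               # heads among N tosses (X_i ~ Bernoulli(p))

    print(Y.mean(), lam * p)             # E[Y] = E[N] * E[X] ~ 1.5
    print(Y.var(), lam * p)              # Poisson thinning: Y ~ Poisson(lam * p)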
What do you guys reckon will be the question that the fewest number of people will attempt? After doing STEP III Q12 I have a feeling no-one will do that one. What modules are probability generating
functions even on?
(Original post by ben-smith) What do you guys reckon will be the question that the fewest number of people will attempt? After doing STEP III Q12 I have a feeling no-one will do that one. What
modules are probability generating functions even on? I didn't think STEP III Q12 was that bad, but for sure, not many will probably have covered PGFs. Generally no-one touches the "post-S2"
probability questions, so I suspect you might be right. STEP III, Q11 looks horrendous, but I suspect is not that bad (I am assuming that you get the right answer by assuming the vertical velocity =
0 at the point where the strings go taut - if that doesn't give the right answer then the difficulty has suddenly taken a quantum leap). STEP II, Q11 not only looks horrendous, but I found it to be
pretty long and tricky too. I suspect it will be one of the least answered questions on STEP II.
(Original post by DFranklin) STEP III, Q11 looks horrendous, but I suspect is not that bad (I am assuming that you get the right answer by assuming the vertical velocity = 0 at the point where the
strings go taut - if that doesn't give the right answer then the difficulty has suddenly taken a quantum leap). This question is pretty tough. The fact that they've given so many 'show that's gives a
good reflection of its trickiness. I've given it a little thought so far and I'm already stuck on the magnitude of the couple, even though it feels like I shouldn't be. It probably has something to do
with the fact that I don't know much of the stuff on M4 and thus know very little about couples. I should really be learning rotational dynamics properly first...
(Original post by DFranklin) I didn't think STEP III Q12 was that bad, but for sure, not many will probably have covered PGFs. Generally no-one touches the "post-S2" probability questions, so I
suspect you might be right. STEP III, Q11 looks horrendous, but I suspect is not that bad (I am assuming that you get the right answer by assuming the vertical velocity = 0 at the point where the
strings go taut - if that doesn't give the right answer then the difficulty has suddenly taken a quantum leap). STEP II, Q11 not only looks horrendous, but I found it to be pretty long and tricky
too. I suspect it will be one of the least answered questions on STEP II. STEP III Q12 is definitely not necessarily 'hard' by STEP standards, I mean, I have only done S1 and had never heard of a pgf
before today and I managed to get it out, so it can't be that bad. On the other hand, I've been looking and I can't find generating functions anywhere on the whole Edexcel syllabus, which is a bit tragic for those on Edexcel who made the effort to do up to S4. Did you do generating functions at A-level? OMG, I hadn't noticed STEP III Q11. That is a stomach-wrenchingly horrible question. The
amount of reading you have to do before you could even get into the question was already a bad sign.
OK, so let's have a go at STEP III, Q11... Consider the point P. It's hanging from the point (a, 0, 0). After rotation, P has position , where h is the height of the disk. So, what's the horizontal
displacement? It's . So its size is . But of course the horizontal displacement is also , hence the first result. Now let bT be the tension in the string. Then the horizontal component of the force
from the string is just . We want the size of the tangential component of this, which is going to be . So each string provides a turning moment . Suppose we have n strings. Resolving vertically, .
But . So . So the n strings provide a total couple of as desired. At this point, the disc is below the ceiling. When the strings go taut, the disc is b below the ceiling. So the loss in GPE is This
must equal the rotational KE. So , where I is the moment of inertia of the disk. That is, So as desired. This took about 26 minutes, including LaTeX. I'd say that puts it in the "not too bad"
category for STEP III Edit: on a little thought, I'm not 100% convinced about the method for calculating the "tangential component" I've used; in terms of the picture I had in my mind, I think there
are 2 compensatory sign errors (one for each x-component). I'd be very surprised to lose more than 1 mark for it though - it would be fine for a different mental picture. On the other hand, if you
draw an actual diagram, it's obvious - it's just that it's hard to draw diagrams on here.
(Original post by ben-smith) STEP III Q12 is definitely not necessarily 'hard' by STEP standards, I mean, I have only done S1 and had never heard of a pgf before today and I managed to get it out so
it can't be that bad. On the other hand, I've been looking and I can't find generator functions anywhere on the whole edexcel syllabus which is a bit tragic for those on edexcel who made the effort
to do up to S4. Did you do generator functions at a-level? I believe pgfs and mgfs used to be in the S5 module that no longer exists. When I did Further Maths, the applied was very mechanics heavy.
(From memory, only 2 of 12 questions on the applied paper would be probability based, although it might have been 3). I also did "Maths with Stats", and I believe they may have been mentioned there,
although I'm not sure if that was our teacher going beyond the syllabus. It must have been either taught or in a textbook I had, however, as I recall doing CCE questions involving pgfs. (And we
didn't have t'internet then).
|
{"url":"http://www.thestudentroom.co.uk/showthread.php?t=1697768","timestamp":"2014-04-20T11:42:11Z","content_type":null,"content_length":"386309","record_id":"<urn:uuid:d2ea4bcd-4476-4cee-809c-8b1abd497aed>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00151-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Online resources for dot product calculations?
Can someone please share a link to a site which can show me how to calculate the following:
Given 3 vectors A,B and C, calculate the following:
Theta AB
Which of the following can be calculated:
V[1].V[1] (dot product of a vector with itself)
If V[1] and V[2] are perpendicular, calculate V[1] .V [2]
If V[1] and V[2] are parallel, calculate V [1] .V[2]
I know a lot of this stuff is fairly straightforward but I'm having trouble finding information about the correct methods to calculate what's asked, and the provided course materials are very
Re: Online resources for dot product calculations?
Dot products are incredibly easy. Multiply the corresponding elements of each vector, then add them all together. You should get a scalar as the answer, which is why the dot product is sometimes
called the scalar product.
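Not from the thread, but a compact NumPy illustration of each computation the question asks about (the angle comes from cos(theta) = A.B / (|A||B|)):

    import numpy as np

    A = np.array([1.0, 2.0, 2.0])
    B = np.array([3.0, 0.0, 4.0])

    dot = A @ B                                   # 1*3 + 2*0 + 2*4 = 11
    theta_AB = np.arccos(dot / (np.linalg.norm(A) * np.linalg.norm(B)))
    print(dot, np.degrees(theta_AB))              # 11.0, ~42.8 degrees

    print(A @ A, np.linalg.norm(A)**2)            # V1.V1 = |V1|^2
    print(np.array([1, 0]) @ np.array([0, 1]))    # perpendicular: dot product is 0
    u = np.array([2.0, 1.0])
    print(u @ (3 * u))                            # parallel: +/- |V1||V2| (here 15)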
Re: Online resources for dot product calculations?
From what I've seen so far, yes, you are quite correct. I did however attempt a number of them without seeing a worked example and somehow got them wrong. I would like to see some worked examples rather than do the online test based on guesswork alone.
Re: Online resources for dot product calculations?
Well why not post what you have tried here and we'll see where you have gone wrong.
Re: Online resources for dot product calculations?
Thanks for your input Prove It, I did a bit of searching around and was able to answer all questions correctly.
|
{"url":"http://mathhelpforum.com/geometry/215514-online-resources-dot-product-calculations.html","timestamp":"2014-04-19T04:57:32Z","content_type":null,"content_length":"41627","record_id":"<urn:uuid:c173b6af-8a6e-4eec-a0bf-ac933f90c0e1>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00358-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Physics Forums - View Single Post - Undefined value and Infinite value
Welcome to PF, salil87!
Infinite and undefined are different things.
For a function, infinite usually implies undefined.
But undefined does not have to be infinite.
For instance, log(x) tends to negative infinity as x approaches zero, and as such is undefined at zero.
But for negative values of x, log(x) is simply undefined (but not infinite).
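A small code illustration of the distinction (my addition): NumPy separates a limit of negative infinity from a value that is simply undefined.

    import numpy as np

    with np.errstate(divide='ignore', invalid='ignore'):
        print(np.log(0.0))    # -inf: the one-sided limit exists (negative infinity)
        print(np.log(-1.0))   # nan: no real value at all -- undefined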
|
{"url":"http://www.physicsforums.com/showpost.php?p=3623577&postcount=2","timestamp":"2014-04-20T01:03:44Z","content_type":null,"content_length":"7621","record_id":"<urn:uuid:95788d9e-9468-424d-9bc0-c40a9caf74ae>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00229-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Beverly, MA Algebra 2 Tutor
Find a Beverly, MA Algebra 2 Tutor
...They come up naturally in problems, and we'd go through them at the appropriate level of detail. I'd also make sure you can apply them to problems. I've been well rated as a calculus tutor.
47 Subjects: including algebra 2, chemistry, reading, calculus
...While in college, my writing won major prizes in both the sciences and the humanities. I earned a perfect score of 5 on the AP Statistics exam and have been tutoring the subject in all its
forms -- from high school introductions to advanced college classes -- for more than ten years. And I don't just know the theory!
47 Subjects: including algebra 2, English, chemistry, reading
...I have been teaching at North Reading High School for 10 years, and I enjoy working with students. I have years of experience with tutoring one-on-one including honors students looking for
that top grade, and students just trying to pass the course. In addition, I am married with 3 children, aged 7, 5, and 18 months, so I have experience with a wide range of ages.
19 Subjects: including algebra 2, physics, calculus, SAT math
...I am currently a research associate in materials physics at Harvard, have completed a postdoc in geophysics at MIT, and received my doctorate in physics / quantitative biology at Brandeis
University. I will travel throughout the area to meet in your home, library, or wherever is comfortable for ...
16 Subjects: including algebra 2, calculus, physics, geometry
I have done tutoring in the past. It has been fun, both for me and my students. They learned and memorized quickly without any problem.
22 Subjects: including algebra 2, chemistry, physics, statistics
Related Beverly, MA Tutors
Beverly, MA Accounting Tutors
Beverly, MA ACT Tutors
Beverly, MA Algebra Tutors
Beverly, MA Algebra 2 Tutors
Beverly, MA Calculus Tutors
Beverly, MA Geometry Tutors
Beverly, MA Math Tutors
Beverly, MA Prealgebra Tutors
Beverly, MA Precalculus Tutors
Beverly, MA SAT Tutors
Beverly, MA SAT Math Tutors
Beverly, MA Science Tutors
Beverly, MA Statistics Tutors
Beverly, MA Trigonometry Tutors
Nearby Cities With algebra 2 Tutor
Arlington, MA algebra 2 Tutors
Brighton, MA algebra 2 Tutors
Chelsea, MA algebra 2 Tutors
Danvers, MA algebra 2 Tutors
Everett, MA algebra 2 Tutors
Lynn, MA algebra 2 Tutors
Malden, MA algebra 2 Tutors
Marblehead algebra 2 Tutors
Peabody, MA algebra 2 Tutors
Revere, MA algebra 2 Tutors
Salem, MA algebra 2 Tutors
Saugus algebra 2 Tutors
Swampscott algebra 2 Tutors
Wenham algebra 2 Tutors
Woburn algebra 2 Tutors
|
{"url":"http://www.purplemath.com/Beverly_MA_Algebra_2_tutors.php","timestamp":"2014-04-17T01:10:06Z","content_type":null,"content_length":"23794","record_id":"<urn:uuid:d538705c-f1d1-4843-a740-fb92881910d9>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00284-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Completely lost, someone please help. Limits and derivatives.
Alright, the question is:
By identifying each of the following limits as a derivative, find the value of the limit.
lim h->0 ((27+h)^(1/3) - 3)/h
I see how this fits into the definition of a derivative
f'(a) = lim h->0 (f(a+h) - f(a))/h, but I don't understand how I'm supposed to use this information to evaluate the limit.
I see that the f(x) = cuberoot (x) and a=27 so f(a) = 3
Any instruction would be highly appreciated. I'm taking an online calculus course so can only correspond with my tutor via email and am finding her to be extremely unhelpful.
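Completing the reasoning (the step the thread presumably went on to supply): once the limit is recognized as f'(27) for f(x) = x^(1/3), the power rule gives f'(x) = (1/3)x^(-2/3), so the limit is (1/3)*27^(-2/3) = 1/27. A sympy check:

    import sympy as sp

    x, h = sp.symbols('x h')
    f = x ** sp.Rational(1, 3)

    print(sp.limit(((27 + h) ** sp.Rational(1, 3) - 3) / h, h, 0))  # 1/27
    print(sp.diff(f, x).subs(x, 27))                                # 1/27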
|
{"url":"http://mathhelpforum.com/calculus/132106-completely-lost-someone-please-help-limits-derivatives.html","timestamp":"2014-04-18T12:57:26Z","content_type":null,"content_length":"34381","record_id":"<urn:uuid:813a2edf-4351-4aaa-a100-35d5a7c9f1d4>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00054-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Greg Mankiw's Blog
Tax Hedging?
Over at Intrade, you can bet on future tax rates. Currently, the implied probability of a hike in the top income tax rate in 2009 is about two-thirds.
This surprises me. Even assuming an Obama victory, I would put the probability much lower. As an economic matter, raising anyone's taxes with the economy so weak seems ill-advised. As a political
matter, why not just let the Bush tax cuts expire at the end of 2010? Obama could then claim in four years that he never signed a tax hike. It seems neither economically nor politically sensible for
the new President to push for an immediate tax increase, even if an eventual tax increase is his goal.
How then to explain the betting at Intrade? I can think of three hypotheses:
1. The Obama people are not as savvy as I think they are and will push for an immediate tax hike.
2. The Intrade market is so thin that the pricing there does not mean much.
3. Some people are using the Intrade market as a hedge. A high-income person bets that tax rates will go up and bids up the implied probability above the true probability. If the bet pays off, his
winnings reduce some of the hit his after-tax income takes by the tax change. It is a form of insurance. Those traders on the other side of this bet--who win if taxes do not rise--are buying a
high-risk asset, as measured by covariance with their consumption. They need to be compensated for taking this risk. Under this hypothesis, the Intrade price is not a good gauge of the actual
probability but includes a substantial risk premium.
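A toy model (my own construction, not Mankiw's) makes the third hypothesis concrete: a risk-averse taxpayer who loses income in the tax-hike state values a contract paying off in that state above its actuarial price, because it pays off exactly when his marginal utility is high.

    # Log-utility taxpayer with wealth 100 who loses 20 if the hike occurs.
    # The marginal price he will pay for a contract paying $1 in the hike state
    # is the "risk-neutral" probability
    #   pi = p*u'(W_hike) / (p*u'(W_hike) + (1-p)*u'(W_no_hike)),
    # which exceeds p whenever W_hike < W_no_hike.

    p = 0.5                           # assumed true probability of a hike
    W_no_hike, W_hike = 100.0, 80.0   # illustrative numbers

    mu = lambda W: 1.0 / W            # marginal utility for u(W) = log(W)
    pi = p * mu(W_hike) / (p * mu(W_hike) + (1 - p) * mu(W_no_hike))
    print(pi)                         # ~0.556: hedgers bid above the true probability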
Update: Tony Smith, an economics professor at Yale, emails this comment:
Dear Greg:
I read with great interest the post on your blog today about how to interpret the prediction market price for the event of a tax hike in 2009. Tyler Cowen made a similar point about interpreting
the market price for the event that Congress would approve a bailout before September 30. And, in fact, last May I wrote a comprehensive exam question for the Ph.D. students at Yale that revolved
around this same observation in the context of an election prediction market. But I have seen no formal papers that make this point. I think it is a critical one for evaluating the usefulness of
prediction markets in aiding decision-making.
The general point that rational investors will use prediction markets to hedge risks can also help to explain an apparent puzzle throughout the recent election campaign: in particular,
statistical models designed to predict election outcomes (see, for example, www.fivethirtyeight.com and David Stromberg's website) generally reported probabilities for an Obama victory that
exceeded the corresponding market prices on both intrade and betfair. If we take seriously the predictions of these statistical models--that is, if we view them as giving an accurate estimate of
the actual probability that Obama will win--then evidently investors seem to think that Obama will be relatively good for the economy compared to McCain, driving down the equilibrium price for an
Obama contract (since this contract pays off when marginal utility is low). To help voters make an informed decision, maybe you should post this "evidence" on your blog!
Tony Smith
Thanks, Tony.
|
{"url":"http://gregmankiw.blogspot.com/2008/11/tax-hedging.html","timestamp":"2014-04-18T05:36:02Z","content_type":null,"content_length":"34432","record_id":"<urn:uuid:e2da848e-f2d6-4e21-9ea5-77d6891b50c4>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00390-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[SciPy-User] Request for usage examples on scipy.stats.rv_continuous and scipy.ndimage.grey_dilate(structure)
[SciPy-User] Request for usage examples on scipy.stats.rv_continuous and scipy.ndimage.grey_dilate(structure)
Robert Kern robert.kern@gmail....
Mon Mar 22 12:13:56 CDT 2010
On Mon, Mar 22, 2010 at 11:58, Christoph Deil
<Deil.Christoph@googlemail.com> wrote:
> Dear Robert,
> thanks for the tip. I tried understanding the examples in scipy/stats/distributions.py, but being a python / scipy newbie I find the mechanism hard to understand and couldn't implement the simple examples I suggested below.
What confused you?
> Maybe it would be possible to add an example to the tutorial? At http://docs.scipy.org/scipy/docs/scipy-docs/tutorial/stats.rst/#stats there is an example on how to use rv_discrete, but none on how to use rv_continuous.
> Would it be possible to add a convenience function to scipy.stats that makes it easy to construct a distribution from a function:
>>>> p = lambda x: x**2
>>>> pdist = scipy.stats.rv_continuous_from_function(pdf=p, lim=[0,2]) # a suggestion, doesn't exist at the moment
>>>> samples = pdist.rvs(size=10)
class x2_gen(rv_continuous):
    def _pdf(self, x):
        return x * x * 0.375
x2 = x2_gen(a=0.0, b=2.0, name='x2')
> I would guess that getting random numbers from a user defined distribution function is such a common usage that it would be nice (at least for newbies like me :-) to being able to do it from the command line, without having to derive a class.
It's not particularly common, no.
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco
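For completeness (my addition, following the pattern in Robert Kern's reply): sampling from such a subclass then looks like the following -- rv_continuous inverts the CDF numerically under the hood, so no extra code is needed.

    import numpy as np
    from scipy.stats import rv_continuous

    class x2_gen(rv_continuous):
        # pdf f(x) = 0.375 * x**2 on [0, 2]; 0.375 = 3/8 normalizes x**2
        def _pdf(self, x):
            return 0.375 * x * x

    x2 = x2_gen(a=0.0, b=2.0, name='x2')
    print(x2.rvs(size=10))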
More information about the SciPy-User mailing list
|
{"url":"http://mail.scipy.org/pipermail/scipy-user/2010-March/024782.html","timestamp":"2014-04-16T22:35:31Z","content_type":null,"content_length":"4935","record_id":"<urn:uuid:d66578df-3bd6-4d7c-aca5-37cbabe50ebd>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00334-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Braingle: 'Quocorrian Math 3' Brain Teaser
Quocorrian Math 3
Math brain teasers require computations to solve.
Puzzle ID: #21404
Category: Math
Submitted By: CPlusPlusMan
Corrected By: cnmne
Bound by your promise to stay on Quocorria until you've mastered all of their numbering systems, you find yourself still stuck on Quocorria.
Now, while attending a Quocorrian mathematical seminar, the speaker mentions yet another form of math. After the speech, you approach him with questions on this system. Being like the rest of the
Quocorrians you've met, he only gives you a sheet of paper, this time with six problems on it, and leaves.
Here's the paper:
So, knowing that those are all true, what's 3*6?
|
{"url":"http://www.braingle.com/brainteasers/21404/quocorrian-math-3.html","timestamp":"2014-04-20T03:18:26Z","content_type":null,"content_length":"24025","record_id":"<urn:uuid:c349d43f-141d-45c0-a19e-a75903041882>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00259-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Wheeling, IL Math Tutor
Find a Wheeling, IL Math Tutor
...Music Theory and received college level music course credit for it. Currently, I am engaged in composing music and developing my own skills and repertoire. I would like to help teach GED to
someone in need of it.
16 Subjects: including geometry, SAT math, English, algebra 1
...During the sessions I try to clarify and eliminate such obstacles. The above strategy allows the significant improvement in Calculus scores of students after a few tutoring sessions. I am a
research professor at Northwestern University with PhD degree in mathematics and physics.
8 Subjects: including algebra 1, algebra 2, calculus, geometry
...I have a 14 year old girl cousin that is having trouble with Algebra 1 that I am currently tutoring as well as her sister who is 11 years old that I am helping with Science. I work great with
kids and this characteristic comes from my years at Pump It Up. Pump It Up is a birthday place for kids.
11 Subjects: including geometry, precalculus, linear algebra, algebra 1
...I completed an IB diploma in high school so it's fair to say I'm an expert at taking exams. I love to work with creative individuals who genuinely want to make themselves better people and I'm
a great role model and mentor for young people. I work well and have a lot of experience with college and high school students.
36 Subjects: including algebra 1, algebra 2, biology, chemistry
...Reading, Writing, and Proofreading are absolutely essential skills. My classes in this area included rhetoric and literature. I want to help students learn how to effectively present, convey,
and support arguments as well as include relevant information.
20 Subjects: including prealgebra, English, algebra 1, reading
Related Wheeling, IL Tutors
Wheeling, IL Accounting Tutors
Wheeling, IL ACT Tutors
Wheeling, IL Algebra Tutors
Wheeling, IL Algebra 2 Tutors
Wheeling, IL Calculus Tutors
Wheeling, IL Geometry Tutors
Wheeling, IL Math Tutors
Wheeling, IL Prealgebra Tutors
Wheeling, IL Precalculus Tutors
Wheeling, IL SAT Tutors
Wheeling, IL SAT Math Tutors
Wheeling, IL Science Tutors
Wheeling, IL Statistics Tutors
Wheeling, IL Trigonometry Tutors
|
{"url":"http://www.purplemath.com/Wheeling_IL_Math_tutors.php","timestamp":"2014-04-17T07:44:00Z","content_type":null,"content_length":"23796","record_id":"<urn:uuid:e1d6944b-217d-45f9-bcbc-212d4824ecca>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00658-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Haskell Code by HsColour
{-# LANGUAGE NoImplicitPrelude #-}
module Algebra.ToRational where
import qualified Algebra.Real as Real
import Algebra.Field (fromRational, )
import Algebra.Ring (fromInteger, )
import Number.Ratio (Rational, )
import Data.Int (Int, Int8, Int16, Int32, Int64, )
import Data.Word (Word, Word8, Word16, Word32, Word64, )
import qualified Prelude as P
import PreludeBase
import Prelude(Int,Integer,Float,Double)
{- |
This class allows lossless conversion
from any representation of a rational to the fixed 'Rational' type.
\"Lossless\" means - don't do any rounding.
For rounding see "Algebra.RealField".
With the instances for 'Float' and 'Double'
we acknowledge that these types actually represent rationals
rather than (approximated) real numbers.
However, this contradicts to the 'Algebra.Transcendental'
Laws that must be satisfied by instances:
> fromRational' . toRational === id
class (Real.C a) => C a where
-- | Lossless conversion from any representation of a rational to 'Rational'
toRational :: a -> Rational
instance C Integer where
{-#INLINE toRational #-}
toRational = fromInteger
instance C Float where
{-#INLINE toRational #-}
toRational = fromRational . P.toRational
instance C Double where
{-#INLINE toRational #-}
toRational = fromRational . P.toRational
instance C Int where {-#INLINE toRational #-}; toRational = toRational . P.toInteger
instance C Int8 where {-#INLINE toRational #-}; toRational = toRational . P.toInteger
instance C Int16 where {-#INLINE toRational #-}; toRational = toRational . P.toInteger
instance C Int32 where {-#INLINE toRational #-}; toRational = toRational . P.toInteger
instance C Int64 where {-#INLINE toRational #-}; toRational = toRational . P.toInteger
instance C Word where {-#INLINE toRational #-}; toRational = toRational . P.toInteger
instance C Word8 where {-#INLINE toRational #-}; toRational = toRational . P.toInteger
instance C Word16 where {-#INLINE toRational #-}; toRational = toRational . P.toInteger
instance C Word32 where {-#INLINE toRational #-}; toRational = toRational . P.toInteger
instance C Word64 where {-#INLINE toRational #-}; toRational = toRational . P.toInteger
|
{"url":"http://hackage.haskell.org/package/numeric-prelude-0.1.3/docs/src/Algebra-ToRational.html","timestamp":"2014-04-20T01:49:13Z","content_type":null,"content_length":"11451","record_id":"<urn:uuid:48d52e44-8cc8-4aab-ba22-f68cf42b5029>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00650-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Numpy-discussion] np.zeros(2, 'S') returns empty strings.
Nathaniel Smith njs@pobox....
Sun Jan 15 02:15:41 CST 2012
On Sat, Jan 14, 2012 at 2:12 PM, Charles R Harris
<charlesr.harris@gmail.com> wrote:
> This sort of makes sense, but is it the 'correct' behavior?
> In [20]: zeros(2, 'S')
> Out[20]:
> array(['', ''],
> dtype='|S1')
I think of numpy strings as raw fixed-length byte arrays (since, well,
that's what they are), so I would expect np.zeros to return all-NUL
strings, like it does. (Not just 'empty' strings, which just means the
first byte is NUL -- I expect all-NUL.) Maybe I've spent too much time
working with C data structures, but that's my $0.02 :-)
-- Nathaniel
More information about the NumPy-Discussion mailing list
|
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2012-January/059835.html","timestamp":"2014-04-16T10:31:15Z","content_type":null,"content_length":"3474","record_id":"<urn:uuid:ef0bd914-7321-4f48-be5b-299ebaa4df75>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00320-ip-10-147-4-33.ec2.internal.warc.gz"}
|
strongly stable functions
, 2004
"... In game semantics, the higher-order value passing mechanisms of the #-calculus are decomposed as sequences of atomic actions exchanged by a Player and its Opponent. Seen from this angle, game
semantics is reminiscent of trace semantics in concurrency theory, where a process is identified to the sequ ..."
Cited by 29 (6 self)
In game semantics, the higher-order value passing mechanisms of the λ-calculus are decomposed as sequences of atomic actions exchanged by a Player and its Opponent. Seen from this angle, game semantics is reminiscent of trace semantics in concurrency theory, where a process is identified to the sequences of requests it generates in the course of time. Asynchronous game semantics is an attempt to bridge the gap between the two subjects, and to see mainstream game semantics as a refined and interactive form of trace semantics. Asynchronous games are positional games played on Mazurkiewicz traces, which reformulate (and generalize) the familiar notion of arena game. The interleaving semantics of λ-terms, expressed as innocent strategies, may be analyzed in this framework, in the perspective of true concurrency. The analysis reveals that innocent strategies are positional strategies regulated by forward and backward confluence properties. This captures, we believe, the essence of innocence. We conclude the article by defining a non-uniform variant of the λ-calculus, in which the game semantics of a λ-term is formulated directly as a trace semantics, performing the syntactic exploration or parsing of that λ-term.
- In Logic in Computer Science 02, 2002
"... We present a new category of games on graphs and derive from it a model for Intuitionistic Linear Logic. Our category has the computational flavour of concrete data structures but embeds fully
and faithfully in an abstract games model. It differs markedly from the usual Intuitionistic Linear Logic s ..."
Cited by 17 (2 self)
We present a new category of games on graphs and derive from it a model for Intuitionistic Linear Logic. Our category has the computational flavour of concrete data structures but embeds fully and
faithfully in an abstract games model. It differs markedly from the usual Intuitionistic Linear Logic setting for sequential algorithms. However, we show that with a natural exponential we obtain a
model for PCF essentially equivalent to the sequential algorithms model. We briefly consider a more extensional setting and the prospects for a better understanding of the Longley Conjecture. 1
, 2003
"... We show that two models M and N of linear logic collapse to the same extensional hierarchy of types, when (1) their monoidal categories C and D are related by a pair of monoidal functors F : C D
: G and transformations Id C ) GF and Id D ) FG, and (2) their exponentials ! are related by distri ..."
Cited by 6 (3 self)
We show that two models M and N of linear logic collapse to the same extensional hierarchy of types, when (1) their monoidal categories C and D are related by a pair of monoidal functors F : C ⇄ D : G and transformations Id_C ⇒ GF and Id_D ⇒ FG, and (2) their exponentials ! are related by distributive laws λ : !_M G ⇒ G !_N commuting to the promotion rule. The key ingredient of the proof is a
notion of back-and-forth translation between the hierarchies of types induced by M and N. We apply this result to compare (1) the qualitative and the quantitative hierarchies induced by the coherence
(or hypercoherence) space model, (2) several paradigms of games semantics: error-free vs. error-aware, alternated vs. non-alternated, backtracking vs. repetitive, uniform vs. non-uniform.
- Theoretical Computer Science, North-Holland , 1995
"... It is known that the strongly stable functions which arise in the semantics of PCF can be realized by sequential algorithms, which can be considered as deterministic strategies in games
associated to PCF types. Studying the connection between strongly stable functions and sequential algorithms, two ..."
Cited by 4 (0 self)
It is known that the strongly stable functions which arise in the semantics of PCF can be realized by sequential algorithms, which can be considered as deterministic strategies in games associated to
PCF types. Studying the connection between strongly stable functions and sequential algorithms, two dual classes of hypercoherences naturally arise: the parallel and serial hypercoherences. The
objects belonging to the intersection of these two classes are in bijective correspondence with the so-called "serial-parallel" graphs, that can essentially be considered as games. We show how to
associate to any hypercoherence a parallel hypercoherence together with a projection onto the given hypercoherence and present some properties of this construction. Intuitively, it makes explicit the
computational time of a hypercoherence.
, 2003
"... We offer a short tour into the interactive interpretation of sequential programs. We emphasize streamlike computation – that is, computation of successive bits of information upon request. The
core of the approach surveyed here dates back to the work of Berry and the author on sequential algorithms ..."
We offer a short tour into the interactive interpretation of sequential programs. We emphasize streamlike computation – that is, computation of successive bits of information upon request. The core
of the approach surveyed here dates back to the work of Berry and the author on sequential algorithms on concrete data structures in the late seventies, culminating in the design of the programming
language CDS, in which the semantics of programs of any type can be explored interactively. Around one decade later, two major insights of Cartwright and Felleisen on one hand, and of Lamarche on the
other hand gave new, decisive impulses to the study of sequentiality. Cartwright and Felleisen observed that sequential algorithms give a direct semantics to control operators like call-cc and
proposed to include explicit errors both in the syntax and in the semantics of the language PCF. Lamarche (unpublished) connected sequential algorithms to linear logic and games. The successful
program of games semantics has spanned over the nineties until now, starting with syntax-independent characterizations of the term model of PCF by Abramsky, Jagadeesan, and Malacaria on one hand, and
by Hyland and Ong on the other hand. Only a basic acquaintance with λ-calculus, domains and linear logic is assumed in sections 1 through 3.
"... The true concurrency of innocence ..."
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=330833","timestamp":"2014-04-18T15:05:32Z","content_type":null,"content_length":"28829","record_id":"<urn:uuid:1b72de49-eb60-4fae-9f11-1d4dc815fd17>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00255-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Explain how chemists know that the oxygen molecule has unpaired electrons?
|
{"url":"http://openstudy.com/updates/5166cad6e4b066fca6619004","timestamp":"2014-04-16T04:42:12Z","content_type":null,"content_length":"56453","record_id":"<urn:uuid:6dbb6a74-5415-427c-97a3-54141e454b99>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00516-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Slow Motion in One-Dimensional Cahn-Morral Systems :: Institutional Repository
Slow Motion in One-Dimensional Cahn-Morral Systems
In this paper we study one-dimensional Cahn-Morral systems, which are the multicomponent analogues of the Cahn-Hilliard model for phase separation and coarsening in binary mixtures. In particular, we examine solutions that start with initial data close to the preferred phases except at finitely many transition points where the data has sharp transition layers, and we show that such solutions may evolve exponentially slowly; i.e., if ε is the interaction length then there exists a constant C such that in exp(C/ε) units of time the change in such a solution is o(1). This corresponds to extremely slow coarsening of a multicomponent mixture after it has undergone fine-grained decomposition.
SIAM J. Math. Anal., Vol. 26, No. 1, pp. 21-34, January 1995

SLOW MOTION IN ONE-DIMENSIONAL CAHN-MORRAL SYSTEMS

CHRISTOPHER P. GRANT (Center for Dynamical Systems and Nonlinear Studies, Georgia Institute of Technology, Atlanta, Georgia 30332. Present address: Department of Mathematics, Brigham Young University, Provo, Utah 84602.)

Abstract. In this paper we study one-dimensional Cahn-Morral systems, which are the multicomponent analogues of the Cahn-Hilliard model for phase separation and coarsening in binary mixtures. In particular, we examine solutions that start with initial data close to the preferred phases except at finitely many transition points where the data has sharp transition layers, and we show that such solutions may evolve exponentially slowly; i.e., if $\varepsilon$ is the interaction length then there exists a constant $C$ such that in $\exp(C/\varepsilon)$ units of time the change in such a solution is $o(1)$. This corresponds to extremely slow coarsening of a multicomponent mixture after it has undergone fine-grained decomposition.

Key words. Cahn-Hilliard equation, phase separation, transition layers, metastability

AMS subject classifications. 35B30, 35B25, 35K55

1. Introduction. One of the leading continuum models for the dynamics of phase separation and coarsening in a binary mixture is the Cahn-Hilliard equation, which in the one-dimensional case can be written as

$$u_t = (-\varepsilon^2 u_{xx} + W'(u))_{xx}, \quad x \in (0,1); \qquad u_x = u_{xxx} = 0, \quad x \in \{0,1\}. \tag{1.1}$$

Here $W$ represents the bulk free energy density as a function of the concentration $u$ of one of the two components of the mixture. (If, as is typically assumed, the total concentration of the mixture is a constant then the concentration of the second is determined by the concentration of the first.) The parameter $\varepsilon$ represents an interaction length and is assumed to be a small positive constant. This equation was derived in [8] based on the free energy functional of van der Waals [29],

$$E_\varepsilon[u] \equiv \int_0^1 \Big( W(u) + \frac{\varepsilon^2}{2}\,|u_x|^2 \Big)\,dx. \tag{1.2}$$

We will usually work with the scaled energy $\mathcal{E}_\varepsilon[u] \equiv \varepsilon^{-1} E_\varepsilon[u]$. Also, we will write $E_\varepsilon[u; a, b]$ when the integral is over the interval $[a,b]$ instead of $[0,1]$.

In the early 1970s, Cahn and Morral [24] and DeFontaine [13], [14] initiated the study of systems of partial differential equations that model the phase separation of mixtures of three or more components in essentially the same way that the Cahn-Hilliard equation models the separation of binary mixtures. (See Eyre [20] for a comprehensive survey of these systems.) If the domain is again taken to be $[0,1]$, then, after a change of variables, such systems can be written in the form

$$u_t = (-\varepsilon^2 u_{xx} + DW(u))_{xx}, \quad x \in (0,1); \qquad u_x = u_{xxx} = 0, \quad x \in \{0,1\}, \tag{1.3}$$

where $u$ is now an $n$-vector (for a mixture with $n+1$ components), and $W$ maps $D(W) \subseteq \mathbb{R}^n$ into $\mathbb{R}$. Again, $E_\varepsilon$ defined by (1.2) represents the total free energy of the mixture, and it is easy to check that it provides a Lyapunov functional for (1.3). Note, also, that the mass $\int_0^1 u\,dx$ of a solution is conserved.

We will make the following assumptions on $W$:
- $D(W)$ is open, convex, and connected;
- $W \ge 0$ throughout its domain, and $W$ has only finitely many zeros, call them $\{z_1, z_2, \ldots, z_m\}$ (corresponding to the preferred homogeneous states, or phases, of the system);
- $W$ is $C^3$ on $D(W)$ and has a continuous extension to its closure $\overline{D(W)}$;
- the Hessian $D^2 W$ is positive definite at each zero of $W$, and $W$ is bounded away from $0$ outside of each neighborhood of these points.

Additionally, we need to require that $W$ increases as the boundary $\partial D(W)$ of the domain is approached. The precise assumption we shall make is the following: For each point $u$ in $\partial D(W)$, there is a closed, convex set $S \subseteq D(W) \setminus \{u\}$ such that: 1. $W$ is nonzero on $\Omega$, the connected component of $D(W) \setminus S$ containing $u$; 2. the function $\varphi$ that maps each point of $\mathbb{R}^n$ to its nearest point in $S$ satisfies $W(\varphi(u')) \le W(u')$ for all $u' \in \Omega$.

This assumption is trivially satisfied when $D(W) = \mathbb{R}^n$. It also holds whenever $W$ is $C^1$ on $\overline{D(W)}$, $\partial D(W)$ is a locally compact, oriented hypersurface of class $C^2$, and the (exterior) normal derivative of $W$ is positive. (See, e.g., [21].) However, we state the assumption in this general way because some of the most important examples of $D(W)$ do not have smooth boundaries. For example, Eyre [20] and Elliott and Luckhaus [18] study situations where $D(W)$ is a convex polytope and $W$ satisfies the assumptions given above.

Note that any constant is an equilibrium solution to (1.3). A linear analysis of the equation about an unstable constant equilibrium suggests that typical solutions that start near such a constant undergo fine-grained decomposition with a characteristic length scale that is $O(\varepsilon)$. (See [22] for a precise mathematical formulation and rigorous verification of this heuristic concept in the two-component case.) This fine-grained decomposition of initially homogeneous mixtures has also been frequently observed in physical experiments [7], [9].

In this paper we investigate the way solutions evolve after this initial stage of decomposition. We, therefore, confine our attention to solutions to (1.3) with initial data $u(x,0) = u_0(x)$ close to the zeros of $W$ through most of the domain, with sharp transition layers, or interfaces, separating the intervals where $u$ is nearly constant.

Consider when $n = 1$ (i.e., the original Cahn-Hilliard equation (1.1)), the case for which the most work has been done. Carr, Gurtin, and Slemrod [10] showed that all of the local minimizers of $E_\varepsilon$ with any specified mass are monotone, so, in general, we would expect that the fine-grained structure of $u$ would coarsen as $t \to \infty$. Numerical work by Elliott and French [17] indicates that this evolution occurs very slowly. (Such slowly evolving states are sometimes said to be dynamically metastable.) Bronsard and Hilhorst [5] have shown that, in a certain space, this evolution occurs at a rate that is $O(\varepsilon^k)$ for any power $k$. Using completely different techniques, Alikakos, Bates, and Fusco [1] constructed a portion of the unstable manifold of a two-layer equilibrium that intersects a small neighborhood of a monotone equilibrium and showed that the speed of the flow along this connecting orbit, measured in the $H^{-1}$ norm, is $O(\exp(-C/\varepsilon))$ for some constant $C$. Recently, Bates and Xun [4] have found exponentially slow motion for the multi-layer states of (1.1) by combining the methods of [1] with those used by Carr and Pego [11] to study reaction-diffusion equations.

The results that we present here are similar to those of Bates and Xun in that we also obtain exponentially slow motion, but the methods we use are much simpler, and they are valid not only for the two-component Cahn-Hilliard equation (1.1) but for the multi-component Cahn-Morral system (1.3), as well. It should be mentioned, however, that our results for the two-component two-layer case are weaker than those of Alikakos, Bates, and Fusco, in the sense that we do not explicitly construct heteroclinic orbits. We deal only with the speed of motion and say nothing about the geometric structure of the attractor.

In this paper, we apply the elementary, yet powerful, approach introduced by Bronsard and Kohn [6] in their study of slow motion for reaction-diffusion equations. The improvement from superpolynomial to exponential speed is made possible by incorporating some ideas of Alikakos and McKinney [2] about the profile of constrained minimizers of (1.2). Use is also made of techniques of Sternberg [27] for describing the nature of globally stable steady-state solutions of (1.3) in the limit as $\varepsilon \to 0$.

In Section 2 we present a lower bound on the energy of any function that is sufficiently close to a given simple function whose range is a subset of $W^{-1}(\{0\})$. This result amounts to an error estimate for a convergence result of Baldo [3]. In Section 3 we show how this estimate yields our main result on slow evolution of solutions with transition layers. As in [6], the only information used about the time-dependent partial differential equation is the time rate of change of the energy along a solution path in phase space. Finally, in Section 4 we consider what the main result implies about the motion of the transition layers themselves.

The questions of existence and regularity of solutions for (1.1) and (1.3) have been extensively studied, and different authors have obtained various conditions on $W$ that ensure global existence of solutions [15], [16], [18], [19], [20], [25], [26], [28], [30]. Rather than restricting ourselves to one particular set of such conditions, we shall simply assume that $W$ is such that for sufficiently smooth initial data with range in $D(W)$ there exists a global solution that is in $C(\mathbb{R}^+; H^2(0,1)) \cap L^2(0,T; H^4(0,1))$. Given that global solutions exist, our goal is to provide some information about how some of them evolve.

2. Error Estimates. Fix $v : [0,1] \to W^{-1}(\{0\})$ having (exactly) $N$ jumps located at $\{x_1, x_2, \ldots, x_N\} \subset (0,1)$. Fix $r$ so small that $B(x_k, r) \subseteq [0,1]$ for each $k$, and $B(x_k, r) \cap B(x_\ell, r) = \emptyset$ whenever $k \ne \ell$. (Here and below, $B(x,r)$ represents the open ball of radius $r$ centered at $x$ in the relevant space.) Let $\lambda_j$ be the minimum of the eigenvalues of $D^2 W(z_j)$, and let $\lambda = \min\{\lambda_j : z_j \in W^{-1}(\{0\})\}$. For any function $z$ on $[0,1]$ we write $\bar z(x) \equiv \int_0^x z(s)\,ds$.

We are interested in solutions corresponding to initial data $u(x,0) = u_0(x)$ such that $u_0$ is close to $v$ in the $L^1$ norm. To the discontinuous function $v$ we assign an asymptotic energy

$$E_0[v] \equiv \sum_{k=1}^{N} \rho(v(x_k - r),\, v(x_k + r)),$$

where

$$\rho(\xi_1, \xi_2) \overset{\text{def}}{=} \inf\{J[z] : z \in AC([0,1]; D(W)),\ z(0) = \xi_1,\ z(1) = \xi_2\},$$

and

$$J[z] \overset{\text{def}}{=} \sqrt{2}\int_0^1 \sqrt{W(z(s))}\,|z'(s)|\,ds.$$

It is easy to check that $\rho$ is a metric on the domain of $W$. Also, note that Young's inequality and a change of variable imply that

$$E_\varepsilon[z; a, b] \ge \rho(z(a), z(b)).$$

Lemma 2.1. Let $C$ be any positive constant less than $r\sqrt{2\lambda}$. Then there are constants $C_1, \delta > 0$ (depending only on $W$, $v$ and $C$) such that, for $\varepsilon$ sufficiently small,

$$\int_0^1 |\bar u(x) - \bar v(x)|\,dx \le \delta \ \Longrightarrow\ E_\varepsilon[u] \ge E_0[v] - C_1 \exp(-C/\varepsilon).$$

Proof. Let $K$ be a compact set in the domain of $W$ containing $W^{-1}(\{0\})$ in its interior, and set $\eta = \sup\{\|D^3 W(\xi)\| : \xi \in K\}$. Choose $\hat r > 0$ and $\delta_1$ so small that $C \le (r - \hat r)\sqrt{2(\lambda - \eta\delta_1)}$ and that $B(z_j, \delta_1)$ is contained in $K$ for each $z_j \in W^{-1}(\{0\})$. Choose $\delta_2$ so small that

$$\inf\{\rho(\xi_1, \xi_2) : z_j \in W^{-1}(\{0\}),\ \xi_1 \notin B(z_j, \delta_1),\ \xi_2 \in B(z_j, \delta_2)\} > \sup\{\rho(z_j, \xi_2) : z_j \in W^{-1}(\{0\}),\ \xi_2 \in B(z_j, \delta_2)\},$$

and $|z_j - z_\ell| > 2\delta_2$ if $z_j$ and $z_\ell$ are different zeros of $W$. Let

$$F(\delta_2) = \inf\{\rho(\xi_1, \xi_2) : z_{j_1}, z_{j_2} \in W^{-1}(\{0\}),\ z_{j_1} \ne z_{j_2},\ \xi_1 \in B(z_{j_1}, \delta_2),\ |(\xi_2 - z_{j_2}) \cdot (z_{j_2} - z_{j_1})| \le \delta_2 |z_{j_2} - z_{j_1}|\}. \tag{2.1}$$

By our assumptions about $W$, $F(\delta_2) > 0$, so there exists $M \in \mathbb{N}$ such that $M F(\delta_2) > E_0[v]$.
Pick such an M, and set = ^r2 2=(5M2). Now assume that R 1 0 j u(x) v(x)j dx , and let us focus our attention on B(xk; r), a neighborhood of one of the transition points of v. For convenience, let v+ = v(xk + r) and v = v(xk r). Suppose ju vj 2 throughout (xk; xk + ^r), and let IM be an open subinterval of (xk; xk+^r) of width ^r=M. If we assume without loss of generality that E"[u] E0[v] then for " su ciently small there must be some ^x 2 IM such that u(^x) 2 B(zj1; 2) for some zj1 2 W 1(f0g). (Otherwise the rescaled bulk free energy would be too high.) If (u v) zj1 v+ jzj1 v+j 2 throughout IM then it is not hard to check that we would have Z IM j u(x) v(x)j dx Z IM ( u(x) v(x)) zj1 v+ jzj1 v+j dx > ; which is a contradiction. Hence, (u v) zj1 v+ jzj1 v+j < 2 SLOW MOTION IN ONE-DIMENSIONAL CAHN-MORRAL SYSTEMS 25 somewhere on IM. But then the rescaled energy on IM must be no less than F( 2). Partitioning (xk; xk +^r) into M equal intervals of width ^r=M and using the preceding result, we have E"[u; xk; xk + ^r] MF( 2) > E0[v], contrary to assumption. Hence, there is some r+ 2 (0; ^r) such that ju(xk + r+) v+j < 2: Similarly, there is some r 2 (0; ^r) such that ju(xk r ) v j < 2: Next, consider the unique minimizer z : [xk + r+; xk + r] ! Rn of the functional E"[z; xk + r+; xk + r] subject to the boundary condition z(xk + r+) = u(xk + r+): If the range of z is not contained in B(v+; 1) then E"[z; xk + r+; xk + r] inff (z(xk + r+); ) : 62 B(v+; 1)g (2.2) (z(xk + r+); v+); by the choice of 2 and the choice of r+. Suppose, on the other hand, that the range of z is contained in B(v+; 1). Then the Euler-Lagrange equation for z is z00(x) = " 2DW(z(x)); x2 (xk + r+; xk + r) z(x) = u(xk + r+); x= xk + r+ z0(x) = 0; x= xk + r: (2.3) If we dene (x) jz(x) v+j2 then 0 = 2(z v+) z0 and 00 = 2(jz0j2 + (z v+) z00) 2 "2 (2.4) (z v+) DW(z): Now Taylor's theorem and the choice of 1 imply that (2.5) DW(z) = D2W(v+)(z v+) + R; where jRj n jz v+j2=2. Substituting (2.5) into (2.4) gives 00 2 "2 (z v+) D2W(v+)(z v+) n "2 jz v+j3 2 "2 jz v+j2 n 1 "2 jz v+j2 2 "2 jz v+j2 = 2 "2 ; where = C=(r ^r). Thus, satises 00(x) ( =")2 (x) 0; x2 (xk + r+; xk + r) (x) = ju(xk + r+) v+j2; x= xk + r+ 0(x) = 0; x= xk + r: 26 CHRISTOPHER P. GRANT Following Alikakos and McKinney [2], we compare to the solution ^ of ^ 00(x) ( =")2 ^ (x) = 0; x2 (xk + r+; xk + r) ^ (x) = ju(xk + r+) v+j2; x= xk + r+ ^ 0(x) = 0; x= xk + r; which can be explicitly calculated to be ^ (x) = ju(xk + r+) v+j2 cosh [( =")(r r+)] cosh h " (x (xk + r)) i : By the maximum principle, (x) ^ (x), so, in particular, (xk + r) ju(xk + r+) v+j2 cosh [( =")(r r+)] 2 22 exp Copyright " : Consequently, jz(xk (2.6) + r) v+j 2p2 exp( C=(2")): Because W is quadratic at v+, (2.6) implies that, for some constant C1, E"[z; xk + r+; xk + r] (z(xk + r+); z(xk + r)) (z(xk + r+); v+) (v+; z(xk + r)) (2.7) (z(xk + r+); v+) (C1=(2N)) exp( C="): Combining (2.2) and (2.7), we see that the constrained minimizer of the proposed variational problem satises E"[z; xk + r+; xk + r] (z(xk + r+); v+) (C1=(2N)) exp( C="): But the restriction of u to [xk+r+; xk+r] is an admissable function, so it must satisfy the same estimate E"[u; xk + r+; xk + r] (u(xk + r+); v+) (C1=(2N)) exp( C="): A similar estimate holds for the energy of u on the interval [xk r; xk r ]. 
Hence, E"[u; xk r; xk + r] = E"[u; xk r; xk r ] + E"[u; xk r ; xk + r+] + E"[u; xk + r+; xk + r] (v ; u(xk r )) (C1=(2N)) exp( C=") + (u(xk r ); u(xk + r+)) + (u(xk + r+); v+) (C1=(2N)) exp( C=") (v(xk r); v(xk + r)) (C1=N) exp( C="): Assembling all of our estimates, E"[u] XN k=1 E"[u; xk r; xk + r] E0[v] C1 exp( C="); as was claimed. SLOW MOTION IN ONE-DIMENSIONAL CAHN-MORRAL SYSTEMS 27 3. Slow Evolution. In this section we will consider a family of solutions u"(x; t) to (1.3), parametrized by the corresponding interaction length ". Lemma 3.1. Suppose that C < rp2 and the initial data u" 0 satises Z 1 0 j u" 0(x) v(x)j dx 2 and E"[u" 0] E0[v] + 1 g(") for some function g and for all " small, where is as in Lemma 2.1. Then lim "!0 ( sup 0 t minfg(");exp(C=")g Z 1 0 j u"(x; t) u" 0(x)j dx ) (3.1) = 0: Proof. First note that the scaled total energy E"[u"( ; t)] of the solution of a Cahn-Morral system is nonincreasing in t, since d dt E"[u"( ; t)] = " 1 Z 1 0 DW(u") u"t + "2u" x u" xt dx = " 1 Z 1 0 (DW(u") "2u" xx) u"t dx = " 1 Z 1 0 j u"t j2 dx: Integrating this equation over t 2 (0; T) gives E"[u" 0] E"[u"( ; T )] = " 1 Z T 0 Z 1 0 j u"t j2 (3.2) dx dt: Next, assume that u" 0 satises the conditions of the lemma and that T is small enough that Z T 0 Z 1 0 j u"t j dx dt =2: Then Z 1 0 j u" 0(x) u"(x; T )j dx =2; so by the triangle inequality, Z 1 0 j u"(x; T ) v(x)j dx : Applying, Lemma 2.1 to u"( ; T) gives E"[u"( ; T )] E0[v] C1 exp( C="). In combination with (3.2), this yields Z T 0 Z 1 0 j u"t j2 dx dt = "(E"[u" 0] E"[u"( ; T )]) C1" 1 g(") + exp( C=") (3.3) ; 28 CHRISTOPHER P. GRANT assuming, without loss of generality, that C1 1. Using H older's inequality and (3.3) we have Z T 0 Z 1 0 j u"t j dx dt !2 Z T 0 Z 1 0 1 dx dt ! Z T 0 Z 1 0 j u"t j2 dx dt ! C1T" 1 g(") + exp( C=") : Hence, T 1 C1" 1 g(") + exp( C=") 1 Z T 0 Z 1 0 j u"t j dx dt !2 (3.4) : Now suppose that Z 1 0 Z 1 0 j u"t j dx dt =2: Then we can choose T such that R T 0 R 1 0 j u"t j dx dt = =2. For this choice of T , (3.4) yields T 2 4C1" h 1 g(") + exp( C=") i 2 8C1" min fg("); exp(C=")g : Then (3.3) implies that Z 2 minfg(");exp(C=")g=(8C1") 0 Z 1 0 j u"t j2 dx dt C1" 1 g(") + exp( C=") (3.5) : If, on the other hand, R 1 0 R 1 0 j u"t j dx dt < =2, then (3.3) must hold for every T ; therefore, (3.5) is also true for this case. Using H older's inequality and (3.5) we see that for " < 2=(8C1) sup 0 t minfg(");exp(C=")g Z 1 0 j u"(x; t) u" 0(x)j dx Z minfg(");exp(C=")g 0 Z 1 0 j u"t j dx dt min fg("); exp(C=")g Z minfg(");exp(C=")g 0 Z 1 0 j u"t j2 dx dt !1=2 min fg("); exp(C=")gC1" 1 g(") + exp( C=") 1=2 p 2C1": Letting " ! 0 we get (3.1). The strength of estimate (3.1) in Lemma 3.1 depends on the e ciency of the transition layers in the initial data. In Theorem 3.3 below, we show that, in a neighborhood of the step function v, there exist initial data that smooth out the discontinuities of v SLOW MOTION IN ONE-DIMENSIONAL CAHN-MORRAL SYSTEMS 29 in an e cient enough manner that the corresponding solutions of (1.3) evolve exponentially slowly. Before we present this theorem, we shall state and prove a technical lemma about the existence and regularity of minimizing geodesics for the degenerate Riemannian metric . Lemma 3.2. 1. For any two zeros zi and zj of W, there is a Lipschitz continuous path ij from zi to zj , parametrized by a multiple of Euclidean arclength, that realizes the distance (zi; zj ); i.e., (zi; zj) = J[ ij ]. 2. 
There exists a positive constant C2 such that j ij(y) zij C2y for y su - ciently small, and j ij(y) zjj C2(1 y) for y su ciently near 1. Proof. Recall that outside of a neighborhood of its zeros W is bounded away from 0; therefore, it is possible to nd a bounded set B D(W) such that if (0) = zi, (1) = zj, and J[ ] (zi; zj) + 1 then the image of is contained in B. Extend W continuously to B, and consider the problem of minimizing J[ ] over all satisfying these boundary conditions and having images contained in B. Now, J[ ] is a parametric integral, and it is known that this new minimization problem has an AC global minimizer ij [12]. The parameter of this minimizer can be chosen to be proportional to arclength, and then ij will be Lipschitz continuous. We claim that ij([0; 1]) is contained in D(W). Suppose it is not. Then there exists some y 2 (0; 1) such that ij(y) 2 @D(W). By the assumptions on W, there is a closed, convex set S D(W) n ij(y) such that W is nonzero on the connected component of D(W) nS containing ij (y), and the function ' that maps each point of Rn to its nearest point in S satises W('(u)) W(u) for all u in . Consider the modied path ij from zi to zj dened by ij (y) = '( ij (y)); if ij (y) 2 ij (y); otherwise : Note that ' is Lipschitz continuous with Lipschitz constant 1. Because of this and the fact that S separates from the rest of D(W), ij is Lipschitz continuous. It is also easy to check that J[ ij ] < J[ ij ]. This contradicts the optimality of ij ; hence, the claim holds. This veries that (zi; zj) = J[ ij ]. We now prove the estimate on ij near zi; the estimate near zj can be derived similarly. Again, we consider a modication of ij , this time the path ij dened by ij(y) = zi + (y= )( ij( ) zi); if 0 y ij(y); otherwise : The optimality of ij implies that p2 Z 0 q W( ij(s))j 0ij (s)j ds p2 Z 0 q W( ij(s)) j ij ( ) zij (3.6) ds: Because D2W(zi) is positive denite, there are positive constants M1 and M2 such that M1ju zij p W(u) M2ju zij in a small neighborhood of zi. Using this in (3.6), we nd that j ij( ) zij2 M3 Z 0 j ij(s) zij ds 30 CHRISTOPHER P. GRANT for some constant M3. Applying a variant of Gronwall's inequality [23] we obtain the desired estimate. Theorem 3.3. Given > 0, there exist constants C; ^" > 0 and a family of initial conditions fu" 0 : 0 " ^"g of (1.3) satisfying homogeneous Neumann boundary conditions and the estimate Z 1 0 j u" 0(x) v(x)j dx 2 such that the corresponding solutions u" of (1.3) satisfy lim "!0 ( sup 0 t exp(C=") Z 1 0 j u"(x; t) u" 0(x)j dx ) = 0: Proof. Lemma 3.2 shows that to each discontinuity xk of v there corresponds an optimal path connecting v(xk r) to v(xk + r). Note that it su ces to prove the present theorem under the assumption that none of these optimal paths passes through any zero of W (except at the endpoints of the path), since if the assumption is not satised then v can be perturbed slightly to create a new step function that does satisfy the assumption. Given ", set u" 0 = v outside of [mj =1B(xk; r). For xed xk, we shall again use the notation v for v(xk r) and will show that for " su ciently small we can dene u" 0 inside B(xk; r) in such a way that u" 0 is very close to v (in the L1 sense) on B(xk; r), E"[u" 0; xk r; xk + r] (v ; v+) + C3 exp( C=") for some C and C3, and u" 0 is continuous at the endpoints of B(xk; r). By taking C slightly smaller and applying Lemma 3.1, the proof of the theorem will then be complete. Let : [0; 1] ! Rn be an optimal path from v to v+ as described in Lemma 3.2. 
Let be the Euclidean arclength of . Let y : R ! [0; 1] be the solution of dy d = 1 p (3.7) 2W( (y( ))) satisfying y(0) = 1=2. (Since pW and are Lipschitz continuous, a unique C1 solution is guaranteed to exist.) Note that lim !1 y( ) = 1 and lim ! 1 y( ) = 0. Dene u" 0 inside B(xk; r) by u" 0(x) = 8>>>< >>>: v + ( (y (1 r=")) v )(x xk + r)="; xk r < x < xk r + " (y ((x xk)=")); xk r + " x xk + r " v+ + (v+ (y (r=" 1)))(x xk r)="; xk + r " < x < xk + r: It is easy to see that u" 0 is continuous and, for " su ciently small, will satisfy the L1 requirement; therefore, we only need to check the energy requirement. Note that E"[u" 0; xk r; xk + r] = Z r+" r 1 " W(u" 0(x + xk)) + " 2 ju"0 0 (x + xk)j2 dx + Z r " r+" 1 " W(u" 0(x + xk)) + " 2ju"0 0 (x + xk)j2 dx + Z r r " 1 " W(u" 0(x + xk)) + " 2ju"0 0 (x + xk)j2 dx def (3.8) = I1 + I2 + I3: SLOW MOTION IN ONE-DIMENSIONAL CAHN-MORRAL SYSTEMS 31 Now, using (3.7) and the denition of we have I2 = Z r " r+" 1 " W y x " + 1 2" 0 y x " y0 x " 2 dx = Z r=" 1 1 r=" W( (y( ))) + 1 2 j 0(y( ))y0( )j2 d = Z r=" 1 1 r=" p 2W( (y( )))j 0(y( ))jy0( ) d = Z y(r=" 1) y(1 r=") p 2W( (y))j 0(y)j dy (3.9) (v ; v+): Next, we estimate I1 (letting C3 represent a constant whose value may change from line to line): I1 = 1 " Z r+" r W v + (y (1 r=")) v " (x + r) dx + 1 2 y 1 r " v 2 C3 y 1 r " v 2 C3 y 1 r " 2 (3.10) : Now, Lemma 3.2 implies that there exists a constant C > 0 such that, for 0, y0( ) = 1 p 2W( (y( )) Copyright 2rC2 j (y( )) v j Copyright 2r (3.11) (y( )): Applying a simple comparison argument to (3.11) yields y( ) C3 exp Copyright 2r ; for 0. Substituting this into (3.10) we have (3.12) I1 C3 exp( C="): Similarly, (3.13) I3 C3 exp( C="): By substituting (3.9), (3.12), and (3.13) into (3.8), we see that u" 0 satises the energy requirement, so we are done. Remark. For the standard two-component case with W having two minima, the maximum principle can be used more directly in the proof of Lemma 2.1 (see [2]) 32 CHRISTOPHER P. GRANT and an explicit value of C can be obtained in Theorem 3.3. This C agrees with that obtained in [1] and [4]. Remark. The initial data u" 0 just constructed are in W1;1(0; 1). Since E" is continuous on this space and elements of this space can be approximated arbitrarily closely by Cp functions (for arbitrarily large p), the initial data in Theorem 3.3 can be assumed to be arbitrarily smooth. 4. Motion of Transition Layers. From Theorem 3.3, which establishes slow evolution in a certain abstract space, it is natural to infer that the transition layers themselves move extremely slowly. This concept can be made precise in a number of ways, one of which we present here. Fix some closed subset K of D(W) nW 1(f0g), and dene the interface I[u] of a function u by I[u] def = u 1(K): This terminology is natural, since the set K is bounded away from the phases of W, where the bulk energy is low. By analyzing how rapidly I[u] changes, we obtain information on how fast the transition layers move. Let d(A;B) denote the Hausdor distance between the sets A and B, i.e. d(A;B) = max sup a2A d(a;B); sup b2B d(b;A) : We shall show that d(I[u"( ; t)]; I[u" 0]) grows very slowly in t. Theorem 4.1. Fix ^r > 0 and ^ > 0. Then there exist constants C; ^" > 0 and a family of initial conditions fu" 0 : 0 " ^"g of (1.3) satisfying homogeneous Neumann boundary conditions and the estimate Z 1 0 j u" 0(x) v(x)j dx ^ such that the time T (^r) necessary for d(I[u"( ; T (^r))]; I[u" 0]) to exceed ^r satises (4.1) T (r^) exp(C="): Proof. 
Assume, without loss of generality, that ^r r. Choose ^ small enough that inf ( 1; 2) : zj 2 W 1(f0g); 1 2 K; 2 2 B(zj ; ^ ) > 4N sup (zj; 2) : zj 2 W 1(f0g); 2 2 B(zj ; ^ ) : Choose M 2 N so large that MF(^ ) > E0[v], where F is dened as in (2.1). We claim that there exists "0 > 0 such that for all " "0 and for all functions z : [0; 1] ! Rn satisfying Z 1 0 j z(x) v(x)j dx ^ ^r2 17M2 (4.2) and E"[z] E0[v] + 2N sup (zj; 2) : zj 2 W 1(f0g); 2 2 B(zj ; ^ ) (4.3) ; SLOW MOTION IN ONE-DIMENSIONAL CAHN-MORRAL SYSTEMS 33 we have d I[z]; fxkgN k=1 < ^r 2: Verication of Claim: Note rst that if " is su ciently small then for each k there exist xk 2 (xk ^r=2; xk) and xk+ 2 (xk+; xk + ^r=2) such that jz(xk ) v(xk )j < ^ . This follows as in the proof of Lemma 2.1. Now, suppose the claim is violated. Then, reasoning as before E"[z] XN k=1 E"[z; xk ; xk+] + inf ( 1; 2) : zj 2 W 1(f0g); 1 2 K; 2 2 B(zj ; ^ ) E0[v] 2N sup (zj; 2) : zj 2 W 1(f0g); 2 2 B(zj ; ^ ) + inf ( 1; 2) : zj 2 W 1(f0g); 1 62 K; 2 2 B(zj ; ^ ) > E0[v] + 2N sup (zj; 2) : zj 2 W 1(f0g); 2 2 B(zj ; ^ ) E"[z]; which is a contradiction. Thus, the claim is true. Apply Theorem 3.3 with = minf^ ; ^ ^r2=(17M2) to obtain a parametrized set of initial conditions fu" 0 : 0 " ^"g. Note that z = u" 0 satises (4.2) and, by the construction in the proof of Theorem 3.3, satises (4.3) for " su ciently small. Applying the claim we get d I[u" 0]; fxkgN k=1 < ^r 2 ; for " su ciently small. By Theorem 3.3, the triangle inequality, and the fact that E"[u"( ; t)] is decreasing in t, we see that there is a constant C > 0 such that for " su ciently small, z = u"( ; T ) satises (4.2) and (4.3) if T exp(C="). Thus, for all such T we also have d I[u"( ; T )]; fxkgN k=1 < ^r 2: By the triangle inequality we get d (I[u" 0]; I[u"( ; T )]) < ^r: This means that (4.1) must hold. Acknowledgments. I am indebted to K. Mischaikow for suggesting this problem and to P. Bates for helping to correct a mistake in the original manuscript. I would also like to thank the referees for their helpful suggestions. 34 CHRISTOPHER P. GRANT REFERENCES [1] N. D. Alikakos, P. W. Bates, and G. Fusco, Slow motion for the Cahn-Hilliard equation in one space dimension, J. Di erential Equations, 90 (1991), pp. 81{135. [2] N. D. Alikakos and W. R. McKinney, Remarks on the equilibrium theory for the Cahn- Hilliard equation in one space dimension, in Reaction-Di usion Equations, Oxford University Press, 1990. [3] S. Baldo, Minimal interface criterion for phase transitions in mixtures of Cahn-Hilliard fluids, Ann. Inst. H. Poincar e, Analyse Nonlin eaire, 7 (1990), pp. 67{90. [4] P. W. Bates and J. P. Xun, Metastable patterns for the Cahn-Hilliard equation, J. Di erential Equations, to appear. [5] L. Bronsard and D. Hilhorst, On the slow dynamics for the Cahn-Hilliard equation in one space dimension, Proc. Roy. Soc. London, Series A, 439 (1992), pp. 669{682. [6] L. Bronsard and R. V. Kohn, On the slowness of phase boundary motion in one space dimension, Comm. P. A. Math., 43 (1990), pp. 983{998. [7] E. P. Butler and G. Thomas, Structure and properties of spinodally decomposed Cu-Ni-Fe alloys, Acta Metall., 18 (1970), pp. 347{365. [8] J. W. Cahn, On spinodal decomposition, Acta Metall., 9 (1961), pp. 795{801. [9] , Spinodal decomposition, Trans. Metallurg. Soc. of AIME, 242 (1968), pp. 166{180. [10] J. Carr, M. E. Gurtin, and M. Slemrod, Structured phase transitions on a nite interval, Arch. Rational Mech. Anal., 86 (1984), pp. 317{351. [11] J. Carr and R. L. 
Pego, Metastable patterns in solutions of ut = "2uxx f(u), Comm. P. A. Math., 42 (1989), pp. 523{576. [12] L. Cesari, Optimization|Theory and Applications, vol. 17 of Applications of Mathematics, Springer-Verlag, New York, 1983. [13] D. DeFontaine, An analysis of clustering and ordering in multicomponent solid solutions - I. stability criteria, J. Phys. Chem. Solids, 33 (1972), pp. 297{310. [14] , An analysis of clustering and ordering in multicomponent solid solutions - II. fluctuations and kinetics, J. Phys. Chem. Solids, 34 (1973), pp. 1285{1304. [15] T. Dlotko, Fourth order semilinear parabolic equations, Tsukuba J. Math., 16 (1992), pp. 389{ 406. [16] C. M. Elliott, The Cahn-Hilliard model for the kinetics of phase separation, inMathematical Models for Phase Change Problems, J. F. Rodrigues, ed., Birkh auser Verlag, Basel, 1989, pp. 35{73. [17] C. M. Elliott and D. A. French, Numerical studies of the Cahn-Hilliard equation for phase separation, IMA J. Appl. Math., 38 (1987), pp. 97{128. [18] C. M. Elliott and S. Luckhaus, A generalised di usion equation for phase separation of a multi-component mixture with interfacial free energy, preprint. [19] C. M. Elliott and S. Zheng, On the Cahn-Hilliard equation, Arch. Rational Mech. Anal., 96 (1986), pp. 339{357. [20] D. J. Eyre, Systems of Cahn-Hilliard equations, SIAM J. Appl. Math., 53 (1993), pp. 1686{ 1712. [21] G. B. Folland, Introduction to Partial Di erential Equations, vol. 17 of Mathematical Notes, Princeton University Press, Princeton, New Jersey, 1976. [22] C. P. Grant, Spinodal decomposition for the Cahn-Hilliard equation, Comm. in P.D.E., 18 (1993), pp. 453{490. [23] P. Hartman, Ordinary Di erential Equations, Birkh auser, Boston, second ed., 1982. [24] J. E. Morral and J. W. Cahn, Spinodal decomposition in ternary systems, Acta Metall., 19 (1971), pp. 1037{1045. [25] K. Promislow, Time analyticity and Gevrey regularity for solutions of a class of dissipative partial di erential equations, Nonlinear Anal., 16 (1991), pp. 959{980. [26] S. M. Rankin, Semilinear evolution equations in Banach spaces with application to parabolic partial di erential equations, Trans. Amer. Math. Soc., 336 (1993), pp. 523{536. [27] P. Sternberg, The e ect of a singular perturbation on nonconvex variational problems, Arch. Rational Mech. Anal., 101 (1988), pp. 209{260. [28] R. Temam, Innite-Dimensional Dynamical Systems in Mechanics and Physics, vol. 68 of Applied Mathematical Sciences, Springer-Verlag, New York, 1988. [29] J. D. van der Waals, The thermodynamic theory of capillarity flow under the hypothesis of a continuous variation in density, Verhandel. Konink. Akad. Weten. Amsterdam, 1 (1893). [30] S. Zheng, Asymptotic behavior of solution to the Cahn-Hilliard equation, Appl. Anal., 23 (1986), pp. 165{184.
|
{"url":"http://contentdm.lib.byu.edu/cdm/singleitem/collection/IR/id/367/rec/6","timestamp":"2014-04-20T05:44:51Z","content_type":null,"content_length":"241580","record_id":"<urn:uuid:fcdce1c2-8f07-4921-82f8-1692546a9491>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00391-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Sets and Applications of Venn Diagrams
Could someone please help me figure out this word problem using a Venn Diagram? After a certain point I am just stuck! It would really help if I could see the steps in order to solve it too! Thanks!
In a recent survey, people were asked which radio station they listened to on a regular basis. The following results were obtained: 140 listened to WOLD (oldies), 95 listened to WJZZ (jazz), 134 listened to WTLK (talk show news), 235 listened to WOLD or WJZZ, 48 listened to WOLD and WTLK, 208 listened to WTLK or WJZZ, and 25 listened to none.
a. What percent of people in the survey listened only to WTLK on a regular basis?
b. What percent of people in the survey did not listen to WTLK on a regular basis?
Re: Sets and Applications of Venn Diagrams
KathleenLaura wrote: Could someone please help me figure out this word problem using a Venn Diagram? After a certain point I am just stuck!
Please show your work up to this "certain point". Thank you.
KathleenLaura wrote: In a recent survey, people were asked which radio station they listened to on a regular basis. The following results were obtained: 140 listened to WOLD (oldies), 95 listened to WJZZ (jazz), 134 listened to WTLK (talk show news), 235 listened to WOLD or WJZZ, 48 listened to WOLD and WTLK, 208 listened to WTLK or WJZZ, and 25 listened to none.
a. What percent of people in the survey listened only to WTLK on a regular basis?
b. What percent of people in the survey did not listen to WTLK on a regular basis?
To learn how Venn diagrams may be applied to exercises of this type, please study this topical lesson.
Since there are three radio stations involved in this particular exercise, it would be useful to start by drawing the standard three overlapping circles, labelling each with the call-letters of the
respective station. Note that, due to the inclusion of a "none" condition, it may be useful also to draw a "universe" box surrounding the three circles, so that the "25" label is understandable and
does not get misplaced.
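Once the circles are labelled, every region count follows from the inclusion-exclusion principle. If you would like to verify your arithmetic afterward (the point of the exercise is still to build the diagram yourself!), a short script such as the following reproduces the numbers:

# Region counts for three sets O (WOLD), J (WJZZ), T (WTLK)
# via inclusion-exclusion, using the survey data from the problem.
O, J, T = 140, 95, 134
O_or_J, T_or_J = 235, 208
O_and_T = 48
none = 25

O_and_J = O + J - O_or_J          # = 0
T_and_J = T + J - T_or_J          # = 21
# Since the O-J overlap is empty, the triple overlap is empty too.
all_three = 0

only_T = T - O_and_T - T_and_J + all_three
union = O + J + T - O_and_J - O_and_T - T_and_J + all_three
total = union + none

print(f"only WTLK: {100 * only_T / total:.1f}%")        # part (a)
print(f"not WTLK:  {100 * (total - T) / total:.1f}%")   # part (b)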
|
{"url":"http://www.purplemath.com/learning/viewtopic.php?p=7219","timestamp":"2014-04-21T04:58:41Z","content_type":null,"content_length":"20032","record_id":"<urn:uuid:8db1b322-294f-4241-b9c3-a47dfba0c098>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00460-ip-10-147-4-33.ec2.internal.warc.gz"}
|
What to make of the Ind of CH ?
Dave Barrington suggested I blog about Paul Cohen since he just died.
Scott's blog already reported on Paul Cohen's death, and there were many comments on C*-algebras and PAC learning (none of which Paul Cohen worked on). Paul Cohen's most important result was that CH is independent of ZFC. What does this mean and what do we make of it? CH is the statement that there is no cardinality strictly between N and R.
ZFC is Zermelo-Fraenkel Set Theory (with the Axiom of Choice). Virtually all of Math can be derived from these axioms. (There are quibbles about this which might be a later post.) Kurt Godel showed that there is a model of ZFC where CH is TRUE. Paul Cohen
showed that there is a model of ZFC where CH is FALSE. Together we have that CH is INDEPENDENT OF ZFC. What to make of this? Here are opinions I have heard over the years:
1. (Mathematical Realism or Platonism) There IS a model of the reals that is the RIGHT one. In that model CH is either true or false. ZFC just isn't up to the task of figuring it out. Paul Cohen thought that there were an INFINITE number of cardinalities between N and R. I've heard rumors that Kurt Godel thought there was exactly ONE cardinality between N and R. Hugh Woodin has some mathematical reasons to think there is exactly ONE (see his two-part Notices article on CH). Many people prefer the simplicity of having NONE---the infinity after N is R. Some people think that we need to add new axioms to ZFC such as Large Cardinals or the Axiom of Determinacy to settle the question. Are these really candidates for axioms? That may be a later post.
2. (Not sure what these people are called.) Since ZFC settles virtually everything else in math but not this question, CH has no answer. There is No `correct' copy of the reals. The weakness in this response may be the word "virtually". Are there questions in math that need it? Are there such questions outside of Set Theory? That may be a later post.
What do you think? ~
26 comments:
1. For people already familiar with ZFC and Choice, the absolute best introduction to "What additional axioms should we use?" and "What's at Stake?" is Shelah's "Logical Dreams" paper. Spoiler: he
disagrees strongly with Woodin.
2. It should be noted that Both Godel's and Cohen's results are relative consistency results.
Godel proved that if ZF has a model then ZFC + CH has a model.
Cohen proved that if ZF has a model then ZFC + not CH has a model. (He also proved ZF + not choice has a model.)
Godel's Second Incompleteness Theorem says that this is the best you can get unless ZF is actually inconsistent.
3. I think two camps in phil of math would accept (2), or at least a variation of it: fictionalists and intuitionists. Fictionalists think that asking these questions (or any mathematical questions)
about the reals is like asking questions about a fictional story. Some have a "real" answer, e.g. was Huck Finn male or female (it is *true* that he
is a male, according to the story) but others are simply beyond the scope of the story ("what did Huck Finn have for breakfast the day before he met Tom?"). In my opinion, the debate is pretty
boring if we consider mathematical statements
like CH. It's much more interesting when the statements are used as part of a mathematical theory about empirical phenomena in the world (like physics)... Hartry Field does a lot with this.
A fun quote from Michael Dummett:
"We are, after all, being asked to choose between two metaphors, two pictures. The platonist metaphor assimilates mathematical enquiry to the investigations of the astronomer: mathematical
structures, like galaxies, exist, independently of us, in a realm of reality which we do not inhabit but which those of us who have the skill are capable of observing and reporting on. The constructivist metaphor assimilates mathematical activity to that of the artificer fashioning objects in accordance with the creative power of his imagination. ...the activities of the mathematician seem strikingly unlike those either of the astronomer or of the artist. What basis can exist for deciding which metaphor is to be preferred?...We have first to decide on the correct model of meaning - either an intuitionistic one...or a platonistic one...and then one or other picture of the metaphysical character of mathematical reality will force itself on us."
4. Moshe Vardi, 4:39 PM, April 02, 2007
Harvey Friedman showed that there are fairly basic mathematical statements that are not decided in ZFC.
5. Well, if no one else is going to state the obvious, I guess I will: some people have suggested that P vs. NP is independent of ZFC. A couple of them have evidently even gone so far as to prove it
6. Stuart Kurtz, 6:03 PM, April 02, 2007
My advisor, Carl Jockusch Jr., told me that Paul Cohen himself believed that the CH fails badly--that there are unimaginably many infinities between aleph_0 and 2^{aleph_0}.
For what it's worth, I believe this too.
Zermelo "observed" his axioms of Set Theory as properties of his mental model of the cumulative hierarchy of sets. It turns out that these axioms are pretty much what you need to prove the
validity of definition by transfinite induction, and I don't believe this is an accident. [The exceptional axiom, as usual, is foundation, but this is easy to observe -- the witness for a given non-empty set is just an element of minimal rank, and therefore foundation is essentially the hypothesis within the Zermelo model that the ordinals are well ordered.]
Fraenkel's axiom of replacement came afterwards, and it was clearly informed by the notion that setness vs. proper classness is a matter of size -- i.e., if a class can be put into 1-1
correspondence with a set, then it is "small," and therefore gets to be a set too. This, too, is observable in Zermelo's model, as long as you assume that the ordinals don't have a small
(set-like) cofinality.
Note that Fraenkel's intuition is radically different here from Zermelo's -- you get to be a Fraenkel set by being "small," whereas you get to be a Zermelo set by being "finished." It is more startling than people seem to realize that such different views about "what makes a set" seem to be consistent with one another.
From a Continuum Hypothesis point of view, ZF says less than it should about the power set operator. Remember, each successor level of the Cumulative Hierarchy consists of the power set of the
preceding level, and therefore the power set operator is absolutely central to mental construction, so this is essentially an incompleteness of the model as well as the theory. The CH is merely
the assumption that power sets are small, and the standard (V=L) relative consistency proof for ZF+GCH over ZF is little more than a working out of the consequences of having the
smallest possible power sets.
So why do I stand with Cohen on the CH? The power set is a representation-free version of the exponential function, which enables its extension to the infinite. My experience (based admittedly on
the finite) is that the exponential function grows very quickly, and there is a lot of room between successive iterates of the exponential function. I carry this intuition to the infinite, and
conclude that the Continuum Hypothesis is false, and badly so.
7. Pascal Koiran, 2:39 AM, April 03, 2007
Like all statements of finite combinatorics, P<>NP is a property of the set of integers which can be written down as a first-order formula in the language of rings (+,*,=).
We have to figure out whether this formula is satisfied by the set of "true" integers.
If one would like to prove an independence result, it seems reasonable to start not with the all-powerful ZFC system but with some weaker system for proving properties of the integers, such as
e.g. Peano arithmetic (or perhaps some even weaker system). Is there any result along those lines?
8. Is there any result along those lines?
Yes, I think there are some results showing the independence of P != NP from considerably weak arithmetic.
I think that results of A. Razborov and Ran Raz show that "SAT has no poly-size circuits" has no small resolution proofs, which then implies an independence result for the P!=NP problem over a very weak arithmetic formal system.
Razborov has some more results on this I think. If I'm not wrong, he showed that if a plausible cryptographic assumption holds then P!=NP is independent of some weak subsystem of Peano Arithmetic
(relativized S^2_2 and relativized S^1_2).
Gaisi Takeuti had also some papers from the '90s about forcing and P!=NP.
9. Paul Beame, 11:32 AM, April 03, 2007
A propos of Kurt and anonymous' comments on the potential independence of P vs NP:
Miki Ajtai, who has a record on these kinds of complexity questions that is nonpareil, is among those who think it will be independent. At a DIMACS workshop in 2000 in which a number of leaders
in the field gave their opinions about the resolution of open complexity questions, he conjectured that P vs NP was independent. When asked to clarify whether he meant independent of Peano
arithmetic, to the surprise of many in the room he clarified that he meant independent of ZFC.
10. Miki Ajtai ... thinks it will be independent of ... ZFC
That's quite amazing.
Maybe a more `feasible' question is:
Assuming P!=NP (or other computational hardness conjecture of the kind), is it true that there are no short (ZFC/PA) proofs of P!=NP?
11. Pascal Koiran, 2:33 PM, April 03, 2007
Is a blog post on S^2_2 or S^1_2 on the menu?
12. Well, we do have all kinds of weird crap going on... for example, the partial consistency stuff of Krajicek and Pudlak shows that if NEXP \neq coNEXP, then there are propositional tautologies
that require superpolynomially large proofs in ZFC, but would have polysize proofs in some other system.
What that other system is is not entirely clear -- the natural thing to guess is that it would be ZFC + some large cardinal or determinacy stuff and that somehow yields a proof of soundness for some
slick method of proving certain propositional statements. But there's no reason to suspect that, it may be that the extra-ZFC principles that help map out the subsets of the reals are totally
different from ones that help us succinctly validate tautologies.
Perfesser Nine Toes
13. Is a blog post on S^2_2 or S^1_2 on the menu?
14. To #12:
what do you mean by
propositional tautologies that require superpolynomially large proofs in ZFC?
That is, what do you mean by a proof in ZFC (or any other first order system) of a propositional tautology?
15. In a formalization of ZFC you can define the concepts of "propositional formula" and "propositional tautology" in a straightforward manner.
So, I mean a proof system in the Cook-Reckhow sense where you take the input formula \tau, and then convert it into its encoding in your formalized ZFC. Then, using some standard first-order proof system with the axioms of set theory, you prove the sentence stating that this particular propositional formula is a tautology.
This is a proof system in the Cook-Reckhow sense, provided you choose a natural polytime encoding of formulas and tautologies.
It's all in the red book.
--- Perfesser Nines Toes
16. What's the red book?
17. I think that the main problem here is that people assume that there is a "correct" choice of axioms. People used to think the same thing about the parallel postulate, but nowadays we know that
this is actually an axiom and that there are essentially different, but equally useful, kinds of geometry depending on which type of parallel axiom one chooses.
I can't see why we should treat set theory differently. Just as we have several different geometries to study we have several different kinds of set theory. All of them equally correct.
18. Pascal Koiran, 4:20 AM, April 04, 2007
I know almost nothing about set theory, but that doesn't prevent me from having an opinion about the issue raised by klas m.
When we're talking about "exotic" stuff like large cardinals or CH, I would tend to agree that all set theories should be treated on an equal footing (provided that they're not downright
However, set theory can in principle have an impact on such "concrete" questions as the satisfiability of diophantine equations:
By Matiyasevich, we know that there must exist diophantine equations whose satisfiability is independent of ZFC (or of your favorite alternative system).
I would argue that one set theory is better than another set theory if it proves more true statements about the satisfiability of diophantine equations (and of course does not prove any wrong ones).
This makes sense only if you believe that there is a "true" set of the integers, but who doesn't?
19. It's all in the red book
I guess you mean:
I'll look it up then.
20. Pascal: I am not a set theorist either, I'm in combinatorics, so I shouldn't stick my neck out too far here.
My impression is that the integers and set theory in general are slightly different. For the integers there is at least some way of making sense of a "smallest" model for them, in terms of which
objects are included in the model, and I guess that would be what most people would call the "real integers".
However for set theoretic axioms like the Axiom of Choice or CH there only seems to different models rather than some which are smaller and some which are larger. For example, if we remove AC
then we can add the axiom that all sets are measurable, and we suddenly get many new theorems which are not true with AC, like all linear functions being continuous, and if we add AC we get more
objects, like non-measurable sets, and fewer theorems about them.
21. Pascal Koiran, 8:31 AM, April 04, 2007
Klas: I guess the issue is, whether by adding "exotic" set-theoretic axioms (such as e.g. CH or large cardinals axioms) you can prove more "combinatorial" results (about e.g. the satisfiability
of diophantine equations) ?
22. Pascal: Do we have any concrete examples of this kind where we can say that one choice of "exotic" axiom let us gain more than another choice?
23. Pascal Koiran, 9:22 AM, April 04, 2007
Klas: I certainly do not have such an example, but perhaps the more knowledgeable readers of this blog can answer your question?
Actually, I can cook up a stupid example: take a diophantine equation which is not satisfiable, and whose unsatisfiability is not provable in ZFC. Add its unsatisfiability as a new axiom.
The problem is that this is not a "natural" set-theoretic axiom such as CH, and moreover I can't actually show you the equation (it wouldn't fit on this blog).
24. Pascal: This kind of example is what I have in mind when I talk about the integers having some kind of minimal model.
For a statement of the type "these equations in k variables have no integer solutions" we can for each k-tuple of integers prove that it is not a solution, but the original statement is itself
unprovable. So if we add the statement as an axiom we still have the same integers and a new axiom to prove theorems with.
However if we, for the same equations, add the axiom "these equations in k variables have an integer solution", then we have a new axiom, and since no k-tuple of ordinary integers satisfy the
equations we have also stated that there exists some additional objects which we include among the integers, and they do satisfy the equations.
But for the set theoretic axioms it seems harder to talk about smaller and larger models in this way.
25. Pascal Koiran, 3:08 PM, April 04, 2007
Here is a related question that puzzles me.
We do not know whether ZFC is consistent, but if it is, does that imply that ZFC can only prove statements about the integers that are satisfied by the true integers?
One could imagine that ZFC is consistent, but that each model has a set of integers which, in addition to the true integers, contains "pathological" objects which make it satisfy equations that
are not satisfied in the true integers.
Can that possibility be ruled out?
26. The most plausible candidate for settling the Continuum Hypothesis (Freiling's symmetry axiom) has the disadvantage that, when combined with the plausible assumption that any subset of the reals
of cardinal less than c will have measure zero, it will imply that the continuum cannot be well ordered.
On the other hand, why should we simultaneously believe both the Power Set Axiom and the Axiom of Choice?
|
{"url":"http://blog.computationalcomplexity.org/2007/04/what-to-make-of-ind-of-ch.html","timestamp":"2014-04-20T01:20:10Z","content_type":null,"content_length":"219745","record_id":"<urn:uuid:44f1b1ec-d92a-467f-99fb-abb7ee7eaa48>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00385-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Daniel Keren, Izchak Sharfman, Assaf Schuster, Avishay Livne, "Shape Sensitive Geometric Monitoring," IEEE Transactions on Knowledge and Data Engineering, vol. 24, no. 8, pp. 1520-1535, Aug. 2012.
An important problem in distributed, dynamic databases is to continuously monitor the value of a function defined on the nodes, and check that it satisfies some threshold constraint. We introduce a
monitoring method, based on a geometric interpretation of the problem, which enables us to define local constraints at the nodes. It is guaranteed that as long as none of these constraints is violated, the value of the function has not crossed the threshold. We generalize previous work on geometric monitoring, and solve two problems which seriously hampered its performance: as opposed to the
constraints used so far, which depend only on the current values of the local data, here we incorporate their temporal behavior. Also, the new constraints are tailored to the geometric properties of
the specific monitored function. In addition, we extend the concept of safe zones for the monitoring problem, and show that previous work on geometric monitoring is a special case of the proposed
extension. Experimental results on real data reveal that the new approach reduces communication by up to three orders of magnitude in comparison to existing approaches, and considerably narrows the
gap between achievable results and a newly defined lower bound on communication complexity.
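To convey the flavor of geometric monitoring, here is a deliberately simplified sketch (assuming a central coordinator, synchronous rounds, and the particular monitored function f(x) = ||x||; the method described in the abstract is more general, and shapes its constraints to the function's geometry and the data's temporal behavior):

import numpy as np

THRESHOLD = 10.0

def ball_below_threshold(estimate, drift):
    # Ball whose diameter is the segment [estimate, drift]. For f = norm,
    # the maximum of f over this ball is ||center|| + radius, so the local
    # check is exact and needs no communication.
    center = (estimate + drift) / 2.0
    radius = np.linalg.norm(drift - estimate) / 2.0
    return np.linalg.norm(center) + radius < THRESHOLD

class Node:
    def __init__(self, v):
        self.v = np.asarray(v, dtype=float)  # current local statistics vector
        self.v_sync = self.v.copy()          # local vector at last synchronization

    def drift(self, estimate):
        # Drift vector: last shared estimate plus the local change since sync.
        return estimate + (self.v - self.v_sync)

def step(nodes, estimate):
    # The average of the drift vectors equals the true current average and
    # lies in their convex hull, which the union of the balls covers. So if
    # every local ball stays below the threshold, f(average) does too, and
    # this round costs nothing in communication.
    if all(ball_below_threshold(estimate, n.drift(estimate)) for n in nodes):
        return estimate, False
    # Otherwise synchronize: all nodes report, and the average is recomputed.
    estimate = np.mean([n.v for n in nodes], axis=0)
    for n in nodes:
        n.v_sync = n.v.copy()
    return estimate, True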
Index Terms:
Data streams, distributed systems, geometric monitoring, shape, data modeling.
|
{"url":"http://www.computer.org/csdl/trans/tk/2012/08/ttk2012081520-abs.html","timestamp":"2014-04-16T22:39:36Z","content_type":null,"content_length":"60117","record_id":"<urn:uuid:8da7988f-434d-451d-a923-f760d62bc498>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00631-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Remarkable Interaction between Mathematics and the Computer: Examples Old and New
"The image of the lonely genius locking the office door working away on a problem is something of the past. And it's not an accurate representation of the majority of advances in mathematics," said
Jill Pipher, director of the Institute for Computational and Experimental Research in Mathematics (ICERM) at Brown University. On the last day of Mathematics Awareness Month 2012, Pipher presented
“The Remarkable Interaction of Mathematics and the Computer” as part of MAA’s Distinguished Lecture Series.
The intersection of mathematics with computation and experimentation, Pipher argued, has changed mathematical practice in exciting ways that challenge stereotypes of the discipline (and its practitioners).
Acknowledging that a talk about her chosen subject matter “could really go on forever,” Pipher mastered her obvious enthusiasm and confined her remarks to the hour allotted. She organized the
“panorama of examples” she’d chosen into three categories: personal, classical, and contemporary.
Although tight-lipped about how to get rich in cryptography, Pipher counts the science of code making and breaking among her research interests and jointly holds four patents for the NTRU encryption
and digital signature algorithms. She treated her audience to a tantalizing overview of the history of the public key cryptosystems (PKC) on which Internet commerce relies.
Pipher quoted Whitfield Diffie’s declaration that he and his colleagues stood “on the brink of a revolution in cryptography” and showed a picture of a historic garment. Printed with the RSA
algorithm, the 1980s T-shirt represents to Pipher the “mathematical and political impact” of PKC. “The printing of that algorithm on that T-shirt at that moment in time . . . turned that T-shirt into
a munition,” Pipher explained. “It made it something that was not permissible to export under U.S. export law or even to show to a foreign national.”
For the classical segment of her talk, Pipher highlighted several questions that, while stated in centuries past, were “recently answered—by geological standards—with the help of a computer.” They
included the four-color theorem and the Kepler conjecture.
The attacks leveled against these problems by mathematicians armed with unprecedented computing power, Pipher noted, forced the mathematical community to grapple with what constitutes proof in
mathematics. In 1976, to prove that four colors suffice to color any map, Kenneth Appel and Wolfgang Haken used an approach that relied on a computer program verifying what happened in a little more
than a billion cases.
“Are we any more or less certain if a computer checks all these cases than if a mathematician sits down and checks them by hand?” Pipher asked.
Outsourcing key components of their work to computers may make mathematicians uneasy, but that disquiet hasn’t stopped them from doing it. Pipher’s final set of examples illustrated the fascinating
and useful results achieved by those open-minded enough to reconceive of how math is done.
A numerical experiment suggested a surprising solution to the three-body problem: an orbit in the shape of a figure eight. David Gracias and Govind Menon work at what Pipher called the "intersection
of mathematics and materials,” developing self-assembling, 3D nanostructures. And the Internet facilitates mathematical collaboration on a massive scale in Tim Gowers’s Polymath Project. Thousands of
contributors making incremental progress on unsolved problems demonstrate the power of “many mathematicians connecting their brains efficiently” in cyberspace.
“That’s mathematicians imitating computers, rather than computers imitating mathematicians,” Pipher said.
Pipher recognized that she was just scratching the surface, merely whetting her audience’s appetite for more information on the ways computers and experimentation have broadened what mathematics and
mathematicians can do. She referred interested listeners to other Distinguished Lectures. Those keen to learn more about cryptography should look up Alice Silverberg's Distinguished Lecture, for example.
And for the dirt on the Riemann zeta function? “I’m not going to be able to tell you everything that’s beautiful about this,” Pipher mourned—and recommended Brian Conrey’s lecture on the topic. —
Katharine Merow
Listen to the full lecture (mp3)
This MAA Distinguished Lecture was funded by the National Security Agency.
|
{"url":"http://www.maa.org/meetings/calendar-events/the-remarkable-interaction-between-mathematics-and-the-computer-examples-old-and-new","timestamp":"2014-04-17T09:36:23Z","content_type":null,"content_length":"98547","record_id":"<urn:uuid:9e5534df-6d45-4f65-ba13-18baf1f8dfb5>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00356-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A201073 - OEIS
A201073 Record (maximal) gaps between prime quintuplets (p, p+2, p+6, p+8, p+12).
6, 90, 1380, 14580, 21510, 88830, 97020, 107100, 112140, 301890, 401820, 577710, 689850, 846210, 857010, 986160, 1655130, 2035740, 2266320, 2467290, 2614710, 3305310, 3530220, 3880050, 3885420, 5290440, 5713800, 6049890
OFFSET 1,1
COMMENTS Prime quintuplets (p, p+2, p+6, p+8, p+12) are one of the two types of densest permissible constellations of 5 primes (A022006 and A022007). Average gaps between prime k-tuples can be
deduced from the Hardy-Littlewood k-tuple conjecture and are O(log^k(p)), with k=5 for quintuplets. If a gap is larger than any preceding gap, we call it a maximal gap, or a record gap.
Maximal gaps may be significantly larger than average gaps; this sequence suggests that maximal gaps are O(log^6(p)).
A201074 lists initial primes in quintuplets (p, p+2, p+6, p+8, p+12) preceding the maximal gaps. A233432 lists the corresponding primes at the end of the maximal gaps.
REFERENCES Hardy, G. H. and Littlewood, J. E. "Some Problems of 'Partitio Numerorum.' III. On the Expression of a Number as a Sum of Primes." Acta Math. 44, 1-70, 1923.
LINKS Alexei Kourbatov, Table of n, a(n) for n = 1..64
Tony Forbes, Prime k-tuplets
Alexei Kourbatov, Maximal gaps between prime quintuplets (graphs/data up to 10^15)
A. Kourbatov, Maximal gaps between prime k-tuples: a statistical approach, arXiv preprint arXiv:1301.2242, 2013. - From N. J. A. Sloane, Feb 09 2013
Alexei Kourbatov, Tables of record gaps between prime constellations, arXiv preprint arXiv:1309.4053, 2013.
Alexei Kourbatov, The distribution of maximal prime gaps in Cramer's probabilistic model of primes, arXiv preprint arXiv:1401.6959, 2014
Eric W. Weisstein, k-Tuple Conjecture
FORMULA (1) Upper bound: gaps between prime quintuplets are smaller than 0.0987*(log p)^6, where p is the prime at the end of the gap.
(2) Estimate for the actual size of the maximal gap that ends at p: maximal gap ~ a(log(p/a)-0.4), where a = 0.0987*(log p)^5 is the average gap between quintuplets near p, as predicted
by the Hardy-Littlewood k-tuple conjecture.
Formulas (1) and (2) are asymptotically equal as p tends to infinity. However, (1) yields values greater than all known gaps, while (2) yields "good guesses" that may be either above or
below the actual size of known maximal gaps.
Both formulas (1) and (2) are derived from the Hardy-Littlewood k-tuple conjecture via probability-based heuristics relating the expected maximal gap size to the average gap. Neither of
the formulas has a rigorous proof (the k-tuple conjecture itself has no formal proof either). In both formulas, the constant ~0.0987 is reciprocal to the Hardy-Littlewood 5-tuple constant.
EXAMPLE The initial four gaps of 6, 90, 1380, 14580 (between quintuplets starting at p=5, 11, 101, 1481, 16061) form an increasing sequence of records. Therefore a(1)=6, a(2)=90, a(3)=1380, and a
(4)=14580. The next gap (after 16061) is smaller, so a new term is not added.
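A brute-force check of the initial terms (an illustrative Python sketch added here, not part of the original OEIS entry; sympy's isprime is assumed to be available):
from sympy import isprime
def record_gaps(limit):
    records, prev, best = [], None, 0
    for p in range(5, limit, 2):
        if all(isprime(p + d) for d in (0, 2, 6, 8, 12)):
            if prev is not None and p - prev > best:
                best = p - prev
                records.append(best)
            prev = p
    return records
print(record_gaps(20000))  # [6, 90, 1380, 14580], matching a(1)..a(4)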
CROSSREFS Cf. A022006 (prime quintuplets p, p+2, p+6, p+8, p+12), A113274, A113404, A200503, A201596, A201598, A201051, A201251, A202281, A202361, A201062, A201074, A002386, A233432.
KEYWORD nonn
AUTHOR Alexei Kourbatov, Nov 26 2011
STATUS approved
|
{"url":"http://oeis.org/A201073","timestamp":"2014-04-17T01:13:50Z","content_type":null,"content_length":"20876","record_id":"<urn:uuid:a6aaccf0-d479-418a-b173-94b71319d7e2>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00282-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Definition of the billiard problem
Given the isotropic quadratic dispersion relation corresponding to (5.2), we can choose energy units such that the eigenwavenumbers and eigenenergies are simply related. [The inline equations on this page were images that did not survive conversion; only the surrounding prose is kept.]
The billiard eigenproblem consists of an eigenvalue equation (5.5) together with a boundary-condition equation (5.6), the latter being quadratic in the wavefunction.
The BCs have been incorporated as (5.6) rather than as the linear condition (5.3), because (5.6) measures how well a wavefunction satisfies the BCs (e.g. (5.3)): it gives the amount by which the desired BCs fail to be obeyed. Heller [91] named this quantity the tension.
Without a further condition, (5.5) and (5.6) admit the useless solution which is identically zero, so a condition which measures the normalization (norm) in the domain is required. Unit norm is fixed by (5.7).
For any given guess at the wavenumber, exact satisfaction of (5.6) will be replaced by a minimization of the tension. A sweep in the wavenumber then locates values at which (5.5), (5.6) and (5.7) are satisfied simultaneously. Such a sweep is shown in Fig. 5.2.
Figure 5.2: Tension (equal to the inverse largest eigenvalue of (5.14)), as a function of wavenumber.
Alex Barnett, 2001-10-03
|
{"url":"http://www.math.dartmouth.edu/~ahb/thesis_html/node57.html","timestamp":"2014-04-17T04:18:17Z","content_type":null,"content_length":"14486","record_id":"<urn:uuid:e417aa7a-374e-4305-a0c9-ad9427182feb>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00151-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Bridgeport, PA Trigonometry Tutor
Find a Bridgeport, PA Trigonometry Tutor
...I enjoy reading books on the World Wars and how European relations affected the initiation and outcome. While I have no formal degree in Government and politics, I have a great passion for it.
I took government classes in high school, scoring a 5 on the AP exam in Government.
14 Subjects: including trigonometry, chemistry, algebra 1, algebra 2
...I also have high proficiency in calculus and linear algebra, and have taken other classes which rely on differential equations. I began studying Koine Greek at Oxford University and then did a
follow-up independent study at Villanova University. I am currently solidifying my Attic Greek.
26 Subjects: including trigonometry, reading, English, algebra 2
Hi, my name is Zekai. I graduated from Drexel University last year, majoring in Mechanical Engineering with a minor in Business Administration. I am currently employed with a company as a design
engineer, but I want to fill my free time with something productive and at the same time earn a second income to pay off my heavy student debt.
8 Subjects: including trigonometry, algebra 1, algebra 2, precalculus
...I completed math classes at the university level through Advanced Calculus. This includes two semesters of elementary calculus, vector and multi-variable calculus, courses in linear algebra,
differential equations, analysis, complex variables, number theory, and non-Euclidean geometry. I taught Prealgebra with a national tutoring chain for five years.
12 Subjects: including trigonometry, calculus, writing, geometry
...I have a bachelor's degree in Math from the University of London and a master's degree in Education from Temple University. I am currently a Math Instructor for Temple University and
Cumberland County College. I taught high school math in Delaware.
22 Subjects: including trigonometry, calculus, writing, statistics
|
{"url":"http://www.purplemath.com/Bridgeport_PA_Trigonometry_tutors.php","timestamp":"2014-04-20T13:54:20Z","content_type":null,"content_length":"24435","record_id":"<urn:uuid:b83bd76c-5d16-43c1-9566-bcf91098e9dc>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00254-ip-10-147-4-33.ec2.internal.warc.gz"}
|
One leg of a right triangle has a length of 7 centimeters. The other sides have lengths that are consecutive integers. Find these lengths. - WyzAnt Answers
You know by the Pythagorean theorem that a^2 + b^2 = c^2, where a and b are the legs of the triangle and c is the hypotenuse, or longest side. When they say the other two sides are consecutive integers,
that means their lengths are x and x+1; in this case, c = b+1. By substituting c = b+1 and 7 cm into the Pythagorean theorem, you will get:
7^2 + b^2 = (b+1)^2
FOIL the (b+1)^2 and get all the b's on one side (the b^2 terms cancel, since b^2 appears on both sides). You should be able to solve for b. Then use that value to get c, since you know c = b+1. Check it by
plugging your values back into the Pythagorean theorem.
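Carrying those steps out: 7^2 + b^2 = (b+1)^2 gives 49 + b^2 = b^2 + 2b + 1, so 49 = 2b + 1, meaning 2b = 48 and b = 24; then c = b + 1 = 25.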
Do you know the Pythagorean triple: 7-24-25?
Answer: The other two sides are 24 cm and 25 cm.
|
{"url":"http://www.wyzant.com/resources/answers/4615/one_leg_of_a_right_triangle_has_a_length_of_7_centimeters_the_other_sides_have_lengths_that_are_consecutive_integers_find_these_lengths","timestamp":"2014-04-20T14:43:37Z","content_type":null,"content_length":"47558","record_id":"<urn:uuid:c15c6726-f20e-439c-b689-32a2ef641201>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00280-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Different Representations
In this activity, students create patterns using connecting cubes and describe various patterns they find in different sequences of cubes. Students explore and describe connecting patterns, and
extend their patterns using a sequence of sounds and shapes. Students investigate various ways to interpret the same sequence of cubes by exploring ways to describe patterns translating from one
representation to another.
This Internet Mathematics Excursion is a brief mathematics activity. To maximize student learning, certain prerequisites are necessary to use this activity. Therefore, it would be appropriate to
include this activity as part of a more fully developed Standards-based lesson, but it should not be used as a complete stand-alone lesson.
Even before formal schooling, children develop beginning concepts related to patterns, functions, and algebra. They learn repetitive songs, rhythmic chants, and predictable poems that are based on
repeating patterns. In this activity, students use the interactive math applet to create and study red and blue connecting-cube patterns. The interactive tool is designed so students can create the
entire pattern one connecting-cube at a time, or create the pattern, two connecting- cubes at a time. The Describing Patterns 1 activity sheet, guides students to describe the different patterns,
encourages them to explore different ways to interpret the patterns, and challenges students to translate the patterns generated, from one representation to another.
Before students visit the Web site, introduce the excursion by holding up a series of 12 red and blue connecting cubes.
Ask students to describe this "connecting-cube pattern" using the colors they see. Discuss with students why this pattern could be named "ABABABAB." Inform students that they will be using the
computer to explore similar "ABABAB" patterns, and investigating different ways to create the same pattern. They will also analyze different ways to describe an "ABABAB" pattern. Many students
explain the pattern by saying, "It's a red cube then a blue cube and it keeps going like that." Some students might describe it as an "ABAB" pattern. Most students see the pattern being formed as a
sequence of single cubes of alternating colors.
Place students into teams of two and distribute a Describing Patterns 1 activity sheet to each group. They should visit the following Web site Creating, Describing, and Analyzing Patterns and follow
the specific directions provided on the activity sheet.
Working together, partners share the responsibility of "Mouse Driver" and "Reader/Recorder". The "Reader/Recorder" will read the directions from the activity sheet and record observations while
guiding the activity. The "Mouse Driver" controls the action of the mouse and movement on the computer screen. Partners should switch roles until both have manipulated the cubes.
As students work through the activity, walk from group to group, encouraging them to describe the connecting-cube pattern using color, letters and sounds. Challenge students to explore several ways
to describe the pattern. The teacher's role during this activity is to help students draw connections between what is happening to the patterns while moving the cubes. Suggestions for guiding
questions will help facilitate this understanding.
When students have finished the Describing Patterns 1 activity sheet, the class should meet to debrief the lesson and learning objectives.
• Connecting cubes (6 red, 6 blue)
1. The Questions for Students will provide an opportunity for you and the students to assess what they have learned and what they still want or need to understand. This will give you ideas for
further instruction.
1. It would be useful for students to explore related activities using physical manipulatives. Partners can create a variety of connecting-cube patterns; for example, "ABC" patterns or "AABB"
patterns, and investigate how to describe each pattern in words, sounds and letters. This recognition lays the foundation for the idea that two very different situations can have the same
mathematical features. Students can further extend their thinking by using charts and tables for recording and organizing how many times a pattern repeats before it reaches a certain number of
connecting- cubes, predicting along the way what color various cubes will be. The fourth and final i-ME in this series provides opportunities for extending the concepts discussed in this
Questions for Students
1. Describe how the interactive tool created the connecting cube pattern.
2. Describe the connecting-cube pattern using colors.
3. Describe the connecting cube pattern using a clap/snap rhythm.
4. Describe the connecting-cube pattern using letters.
5. When looking at the color of the 12^th connecting-cube, what color do you think the 18^th connecting-cube would be if this pattern were extended? How do you know?
6. When looking at the color of the 12^th connecting-cube, what color do you think the 19^th connecting cube would be, if this pattern were extended? How do you know?
7. Describe other ways you discovered to create an "ABABAB" pattern.
8. Demonstrate how an "ABABAB" pattern sounds differently from an "AB, AB, AB," pattern.
In this activity, students analyze how repeating patterns are generated. Using the interactive computer applet, students create, compare, and contrast pattern units of two squares and predict how
patterns with different colors will appear when repeated in a grid and check their predictions.
In this activity, students create and analyze repeating patterns using pattern units of three, four, and five squares. They predict how patterns with different numbers of squares will appear when
repeated in a grid, and check their predictions. Students investigate similarities and differences between the rows, columns, and diagonal patterns created with each pattern unit.
In this activity, students create and explore more complex patterns such as "growing patterns" which have related but different relationships to "repeating patterns". Students form generalizations,
analyze, and describe growing patterns using connecting-cubes, and explore what happens when growing patterns "double" or "split."
Learning Objectives
Students will:
• Analyze a series of twelve red and blue connecting cubes, and describe different patterns they find in different sequences of cubes
• Recognize, describe, and extend patterns; such as, sequences of sounds and shapes and translate from one representation to another
• Analyze how repeating patterns are generated
|
{"url":"http://illuminations.nctm.org/Lesson.aspx?id=724","timestamp":"2014-04-21T07:04:40Z","content_type":null,"content_length":"72646","record_id":"<urn:uuid:1d2c71af-1d92-4e32-b9bb-3b2267484159>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00558-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Modifier and Type, Method, and Description (the original two-column method-summary table was flattened in extraction; entries below pair each signature with its modifier and summary where the pairing is recoverable):
static double divide(double first, double second)
    Given a numerator and denominator in log form, this calculates the conditional model probabilities.
protected double fnum(int x, int y)
GainCompute(Feature f, double errorGain)
    Computes the gain from a feature.
improvedIterative(int iters)
    Does a fixed number of IIS iterations.
double logLikelihood()
    Calculate the log-likelihood from scratch, hashing the conditional probabilities in pcond, which we will use later.
static void main(java.lang.String[] args)
    With arguments, this will print out the lambda parameters of a bunch of .lam files (which are assumed to all be the same size).
double pcond(int y, int x)
static double[] read_lambdas(java.io.DataInputStream rf)
    Read the lambdas from the stream.
readL(java.lang.String filename)
    Read the lambdas from the file.
static void save_lambdas(java.io.DataOutputStream rf, double[] lambdas)
    Writes the lambdas to a stream.
save_lambdas(java.lang.String filename)
    Writes the lambda feature weights to the file.
void setBinary()
void setNonBinary()
    This is a specialized procedure to change the values of parses for semantic ranking.
Method descriptions whose signatures were lost in extraction:
    Check whether the constraints are satisfied, the probabilities sum to one, etc.
    Each pair x,y has a value in p.data.values[x][y].
    Assuming we have the lambdas in the array and we need only the derivatives now.
    Assuming we have the probConds[x][y], compute the derivatives for the expectedValue function.
    Using the arrays calculated when computing the loss, it should not be too hard to get the derivatives.
    Assuming we have the lambdas in the array and we need only the derivatives now.
    Iterate until convergence.
    Calculate the log likelihood from scratch, hashing the conditional probabilities in pcond, which we will use for the derivative later.
    Calculate the loss for Dom ranking, using the numbers in p.data.values to determine domination relationships in the graphs: if values[x][y] > values[x][y'] then there is an edge (x,y) ->
    Print out p(y|x) for all pairs to the standard output.
|
{"url":"http://nlp.stanford.edu/nlp/javadoc/javanlp/edu/stanford/nlp/maxent/iis/LambdaSolve.html","timestamp":"2014-04-16T21:51:28Z","content_type":null,"content_length":"33652","record_id":"<urn:uuid:400632ed-50cf-42c1-b83b-3e6baf3ed66b>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00217-ip-10-147-4-33.ec2.internal.warc.gz"}
|
West Hollywood Calculus Tutor
Find a West Hollywood Calculus Tutor
...It is a brand new way of thinking, and because of this it can be either refreshing (and fun) or endlessly frustrating. Since Calculus is a new way of thinking, it can be very useful to have a
tutor with whom you can discuss the new topics and build your intuition. I have had countless prealgebra students, and I have worked with all different learning styles and personalities.
14 Subjects: including calculus, statistics, geometry, algebra 2
...Learning math has been part and parcel to learning these physics skills. Math is the language of physics. I understand the rules of math, and how to bend them.
44 Subjects: including calculus, reading, chemistry, Spanish
...Mandarin is becoming one of the most in-demand languages in business. Mandarin is the fastest growing language in Middle Schools, High Schools and Universities around the country. I found out
that the best way to learn Mandarin is to immerse my students in an environment where they are interested in what is going on.
7 Subjects: including calculus, Chinese, algebra 1, algebra 2
...I also played at the Intramural level at UC Irvine for one year and I continue to play regularly. As a Christian, I study the Old and New Testaments on a daily basis and have been doing so for
the last 10 years. I also know a bit about the Islamic and Jewish religions because, along with Christ...
26 Subjects: including calculus, reading, chemistry, French
...Whether it's writing, reading or other facet, a small range of skills can go a long way to achieve success. As a published author in several subjects, I can help unravel the myriad difficulties
that students face, and maybe make the process a bit fun. We seem to encounter government and politics almost every day, from news stories to political ads to personal experiences.
39 Subjects: including calculus, chemistry, reading, English
|
{"url":"http://www.purplemath.com/west_hollywood_ca_calculus_tutors.php","timestamp":"2014-04-18T11:10:58Z","content_type":null,"content_length":"24268","record_id":"<urn:uuid:72e7cdef-4655-400f-bd24-ca4501d4dc55>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00027-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Bayesian Networks: A Quick Intro
June 1st/2005
Bayesian Networks: A Quick Intro
Advances in technology have brought to molecular biology datasets that are bigger, more sophisticated, and, unfortunately, more difficult to interpret than ever before. One computational analysis
approach is called Bayesian networks, a machine learning tool that is able to automatically discover networks of dependencies and causal interactions among biomolecules of interest.
These dependencies are statistical in nature, so an edge from A to B indicates that knowing A can help us predict B. This may or may not indicate a causal relationship, i.e. one in which A (directly or indirectly) affects
B. Interventional data, in which biomolecules are specifically manipulated, can be used to discover causal connections.
The Bayesian network inference algorithm takes data in which biomolecules were quantified (and, ideally, also manipulated), and automatically reconstructs the underlying network of protein to protein
influences that may have created the data.
How does this process work? Consider a ski resort, with skiers and non-skiers (hot-tub sitters). A study discovers a strong statistical correlation between sunscreen lotion use and skiing injuries.
To further investigate this statistical dependency, a manipulation is performed on the lotion variable: all sunscreen lotions are secretly replaced with an ineffective placebo. This fails to affect
the number of skiing injuries, and so it is determined that lotion use does not causally affect skiing injuries. The variable ski is also well-correlated. When the ski variable is manipulated (the
ski slopes are closed for a day), both lotion use and injuries are greatly reduced or eliminated, thus implicating ski as the variable causally responsible for the other two. The study now expands to
include a tropical island. The correlation between skiing and sunscreen use is weakened; however, when tropical sports are included in the study, the ski and sports variables together are able to
predict sunscreen lotion use. If sun exposure is also included, it is found to be well predicted by skiing and tropical sports, and is itself a good predictor of lotion use.
Thus the algorithm can add necessary edges (ski → lotion, when exposure is not measured); and it can eliminate unnecessary edges (ski → lotion, when exposure is measured). Therefore, it is potentially able to automatically construct a network much
like the canonical pathways sketched out in biology text books. Our recent work (Sachs et al., 2005) shows an application of this approach to signaling proteins measured in single cells,
demonstrating the ability of Bayes nets to find a first order map of a signaling pathway, and serve as an in silico generator of testable hypotheses.
1 Comments
Great introductory tutorial
Submitted by Anonymous (not verified) on Wed, 01/18/2012 - 23:37.
Thanks for a simple introduction to Bayesian networks. I especially liked your analogy!
|
{"url":"http://www.biomedicalcomputationreview.org/content/bayesian-networks-quick-intro","timestamp":"2014-04-16T10:09:59Z","content_type":null,"content_length":"26399","record_id":"<urn:uuid:c833425d-3c0b-4fc0-8dc1-008a6dcde79a>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00474-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Pythagorean prime
A Pythagorean prime $p$ is a prime number of the form $4n+1$. The first few are 5, 13, 17, 29, 37, 41, 53, 61, 73, 89, 97, etc., listed in A002144 of Sloane’s OEIS. Because of its form, a Pythagorean
prime is the sum of two squares, e.g., 29 = 25 + 4. In fact, with the exception of 2, these are the only primes that can be represented as the sum of two squares (thus, in Waring’s problem, all other
primes require three or four squares).
Though Pythagorean primes are primes on the line of real integers, they are not Gaussian primes in the complex plane. Expressing a Pythagorean prime as $a^{2}+b^{2}$ (it doesn’t matter whether $a<b$
or vice versa) leads to the complex factorization by simply plugging in the values: $p=(a+bi)(a-bi)$, where $i$ is the imaginary unit.
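For example, with $29=5^{2}+2^{2}$ from above, this gives $29=(5+2i)(5-2i)$; indeed $(5+2i)(5-2i)=25-(2i)^{2}=25+4=29$.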
|
{"url":"http://planetmath.org/pythagoreanprime","timestamp":"2014-04-18T00:24:18Z","content_type":null,"content_length":"34214","record_id":"<urn:uuid:6b316c0f-f07b-45ee-8906-bded5a40ad99>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00595-ip-10-147-4-33.ec2.internal.warc.gz"}
|
identify the legs in the right triangle below
|
{"url":"http://openstudy.com/updates/514cfc9fe4b0ae0b658ab70f","timestamp":"2014-04-20T18:44:02Z","content_type":null,"content_length":"46705","record_id":"<urn:uuid:57656c3a-9fe4-4d9f-9f29-823ba8ee5494>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00470-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Linear Statistical Models, 2nd Edition
ISBN: 978-0-470-23146-3
474 pages
August 2009, ©2009
Praise for the First Edition
"This impressive and eminently readable text . . . [is] a welcome addition to the statistical literature."
—The Indian Journal of Statistics
Revised to reflect the current developments on the topic, Linear Statistical Models, Second Edition provides an up-to-date approach to various statistical model concepts. The book includes clear
discussions that illustrate key concepts in an accessible and interesting format while incorporating the most modern software applications.
This Second Edition follows an introduction-theorem-proof-examples format that allows for easier comprehension of how to use the methods and recognize the associated assumptions and limits. In
addition to discussions on the methods of random vectors, multiple regression techniques, simultaneous confidence intervals, and analysis of frequency data, new topics such as mixed models and curve
fitting of models have been added to thoroughly update and modernize the book. Additional topical coverage includes:
• An introduction to R and S-Plus® with many examples
• Multiple comparison procedures
• Estimation of quantiles for regression models
• An emphasis on vector spaces and the corresponding geometry
Extensive graphical displays accompany the book's updated descriptions and examples, which can be simulated using R, S-Plus®, and SAS® code. Problems at the end of each chapter allow readers to test
their understanding of the presented concepts, and additional data sets are available via the book's FTP site.
Linear Statistical Models, Second Edition is an excellent book for courses on linear models at the upper-undergraduate and graduate levels. It also serves as a comprehensive reference for
statisticians, engineers, and scientists who apply multiple regression or analysis of variance in their everyday work.
1 Linear Algebra, Projections.
1.1 Introduction.
1.2 Vectors, Inner Products, Lengths.
1.3 Subspaces, Projections.
1.4 Examples.
1.5 Some History.
1.6 Projection Operators.
1.7 Eigenvalues and Eigenvectors.
2 Random Vectors.
2.1 Covariance Matrices.
2.2 Expected Values of Quadratic Forms.
2.3 Projections of Random Variables.
2.4 The Multivariate Normal Distribution.
2.5 The χ², F, and t Distributions.
3 The Linear Model.
3.1 The Linear Hypothesis.
3.2 Confidence Intervals and Tests on η = c₁β₁ + ... + cₖβₖ.
3.3 The Gauss-Markov Theorem.
3.4 The Gauss-Markov Theorem For The General Case.
3.5 Interpretation of Regression Coefficients.
3.6 The Multiple Correlation Coefficient.
3.7 The Partial Correlation Coefficient.
3.8 Testing H₀: θ ∈ V₀ ⊂ V.
3.9 Further Decomposition of Subspaces.
3.10 Power of the F-Test.
3.11 Confidence and Prediction Intervals.
3.12 An Example from SAS.
3.13 Another Example: Salary Data.
4 Fitting of Regression Models.
4.1 Linearizing Transformations.
4.2 Specification Error.
4.3 Generalized Least Squares.
4.4 Effects of Additional or Fewer Observations.
4.5 Finding the "Best" Set of Regressors.
4.6 Examination of Residuals.
4.7 Collinearity.
4.8 Asymptotic Normality.
4.9 Spline Functions.
4.10 Nonlinear Least Squares.
4.11 Robust Regression.
4.12 Bootstrapping in Regression.
4.13 Quantile Regression.
5 Simultaneous Confidence Intervals.
5.1 Bonferroni Confidence Intervals.
5.2 Scheffé Simultaneous Confidence Intervals.
5.3 Tukey Simultaneous Confidence Intervals.
5.4 Comparison of Lengths.
5.5 Bechhofer's Method.
6 Two-and Three-Way Analyses of Variance.
6.1 Two-Way Analysis of Variance.
6.2 Unequal Numbers of Observations Per Cell.
6.3 Two-Way Analysis of Variance, One Observation Per Cell.
6.4 Design of Experiments.
6.5 Three-Way Analysis of Variance.
6.6 The Analysis of Covariance.
7 Miscellaneous Other Models.
7.1 The Random Effects Model.
7.2 Nesting.
7.3 Split Plot Designs.
7.4 Mixed Models.
7.5 Balanced Incomplete Block Designs.
8 Analysis of Frequency Data.
8.1 Examples.
8.2 Distribution Theory.
8.3 Conf. Ints. on Poisson and Binomial Parameters.
8.4 Log-Linear Models.
8.5 Estimation for the Log-Linear Model.
8.6 Chi-Square Goodness-of-Fit Statistics.
8.7 Limiting Distributions of the Estimators.
8.8 Logistic Regression.
The Statistical Language R.
James H. Stapleton, PhD, is Professor Emeritus in the Department of Statistics and Probability at Michigan State University. He is the author of Models for Probability and Statistical Inference:
Theory and Applications, also published by Wiley.
• A thorough discussion of mixed models, nonparametric regression, quantile regression, and Bayesian models.
• All of the examples have been updated, and a greater emphasis is placed on computer-driven examples using S-Plus®, R, and SAS® code.
• Figures illustrating the results of the computer-driven simulations have also been added.
• This new edition emphasizes the geometry of vector spaces (the coordinate free approach) because the author has found that the intuition it provides is vital to the understanding of the theory.
• Discussions on multiple comparison procedures, estimation of quantiles for regression models, and curve fitting of models have been added to the new edition.
• The author clearly discusses and illustrates key concepts with interesting examples, and the presentation of the material is systematic and natural.
• A list of problems that are suitable for homework exercises are at the end of each chapter, and selected solutions are available at the end of the book.
Buy Both and Save 25%!
Linear Statistical Models, 2nd Edition (US $139.00)
-and- Linear Models in Statistics, 2nd Edition (US $170.00)
Total List Price: US $309.00
Discounted Price: US $231.75 (Save: US $77.25)
Cannot be combined with any other offers.
|
{"url":"http://www.wiley.com/WileyCDA/WileyTitle/productCd-0470231467.html","timestamp":"2014-04-18T23:17:53Z","content_type":null,"content_length":"55738","record_id":"<urn:uuid:94c9d198-5bab-4e92-a453-e258139e4bb6>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00076-ip-10-147-4-33.ec2.internal.warc.gz"}
|
21 Feb 11:26 2013
Arimax with intervention dummy and multiple covariates
Jose Iparraguirre <Jose.Iparraguirre <at> ageuk.org.uk>
2013-02-21 10:26:45 GMT
I'm trying to measure the effect of a policy intervention (Box and Tiao, 1975).
This query has to do with the coding of the model rather than with the particulars of my dataset, so I'm not
providing the actual dataset (or a simulated one) in this case, apart from some general description.
The time series are of length n=34 (annual observations between 1977 and 2010). The policy measure was
introduced in 2000 and it has been implemented once a year ever since.
The variable of interest (VI) is continuous, and I have four continuous covariates (CO1-CO4), plus the
dummy intervention variable (DUM) which is equal to 0 between 1977 and 1999 and equal to 1 since 2000.
I thought of using an ARIMAX model, with the arimax() function in the TSA package to fit the transfer
function. I'm interested in modelling the intervention effect as a step function.
I specified the model thus:
a. I've checked the ARIMA properties of each series using the auto.arima() function (from the 'forecast'
package) -the VI was found to best fit an ARIMA(0,1,1) model and the first covariate an ARIMA(1,0,0),
whereas the other covariates were white noise.
b. To facilitate the specification of the various models (the different model specifications dropped
variables or added additional covariates, etc, without changing the general structure of the syntax
below), I defined the following design matrix:
> xreg.1 <- model.matrix(~CO1+ CO2+ CO3+ CO4)[,2:5]
c. Following Cryer and Chan (2008, ch. 11, p. 255), I wrote models such as this:
> arimax.1 <- arimax(VI, order=c(0,1,1),
|
{"url":"http://comments.gmane.org/gmane.comp.lang.r.general/287427","timestamp":"2014-04-18T18:20:39Z","content_type":null,"content_length":"9896","record_id":"<urn:uuid:13f94fc1-544f-4644-88ac-1f7e6b94f4ae>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00448-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Feeding a greedy algorithm
The idea of a greedy algorithm (for optimization) is that you do as best you can locally and you don’t worry about the big picture.
For some problems a greedy algorithm is guaranteed to get you to the global optimum. In other cases, no.
An intuitive example
Suppose you are walking and you want to get to a specific place that is north and west of you. The greedy algorithm in this case is to take a street that is going north and/or west; at each
intersection you decide whether to go in a new direction.
If you are in midtown Manhattan, you’re in luck — the greedy algorithm is going to get you there. The streets are on a grid.
If you are in London, the greedy algorithm might not work. You might be drawn into a mews that just ends; you might get on a street that circles around.
A lot of problems of interest are more like London than New York. However, greedy algorithms can still be useful in these cases. Greedy algorithms can be used to find locally good solutions while
some other mechanism is used to find reasonable locations for the greedy algorithms to explore.
A real example
In particular, trade optimization and generating random portfolios are not the perfect realm for greedy algorithms. Nonetheless, the Portfolio Probe software uses a few greedy algorithms when
performing these tasks. A genetic algorithm is the main driver, and the greedy algorithms refine the solutions.
One of the greedy algorithms is nicknamed “polishing”. Suppose we are optimizing a trade and we have a candidate trade in hand. We can think of looking at each asset in the trade in turn. We want
to find the best amount to trade of the asset while not changing the amount traded for the other assets.
The catch is that because of constraints some of the other assets will, in general, need to change. The polishing algorithm tries to move the other assets only a little but still satisfy the constraints.
It turns out that polishing is a quite successful adjunct to the genetic algorithm.
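As a generic illustration, here is a sketch of coordinate-wise greedy refinement in Python (a schematic only, not Portfolio Probe's actual implementation: the objective, the feasibility test, and the candidate step sizes are placeholders, and unlike real polishing it simply rejects infeasible moves rather than nudging the other assets back into feasibility):
def polish(weights, objective, feasible, steps):
    # Greedily refine one asset at a time, holding the others fixed,
    # keeping any feasible move that improves the objective.
    improved = True
    while improved:
        improved = False
        for i in range(len(weights)):
            for step in steps:
                trial = list(weights)
                trial[i] += step
                if feasible(trial) and objective(trial) > objective(weights):
                    weights = trial
                    improved = True
    return weights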
And he was rich, yes, richer than a king,
And admirably schooled in every grace:
In fine — we thought that he was everything
To make us wish that we were in his place.
from Richard Cory by Edwin Arlington Robinson
Manhattan picture by Daquella manera via everystockphoto.com.
London picture by Regent’s College London via everystockphoto.com.
4 Responses to Feeding a greedy algorithm
1. Thanks, Pat, for this insightful post on optimization, personally I seldom use optimization, not because the optimized values are wrong, but I feel insecure by using numbers jumping out of those
black boxes lack of economic sense (or I don’t get it). So may I know do you really use it a lot in reality, or you just have its results as a reference? If you don’t mind, a real application
example is appreciated. Thank you.
2. Dear Quant,
I presume you are talking about portfolio optimization rather than the more general case. First off, I don’t manage money, I just supply some tools. An answer to your question is that some fund
managers use optimization a lot. There are also fund managers who don’t use optimization at all.
I agree with your implicit position that an optimizer used badly is a dangerous thing. However, overcoming the “lack of economic sense” problem is easy to do. You merely need to restrict the
turnover that is allowed. This can be done either through a turnover constraint, or (better but harder) through transaction costs.
Portfolio Probe includes a different form of portfolio optimization which is to minimize the distance to a target ideal portfolio. This might be a more palatable approach for those fund managers
who don’t use optimization at all at present.
Does this answer your question?
3. The greedy algorithm determines the minimum number of coins to give while making change: these are the steps a human would take to emulate a greedy algorithm to represent 36 cents using coins. Note that in general the change-making problem requires dynamic programming to find an optimal solution; US and other currencies are special cases where the greedy strategy works. A greedy algorithm is any algorithm that follows the heuristic of making the locally optimal choice at each stage with the hope of finding the global optimum. For example, applying the greedy strategy to the travelling salesman problem yields the following algorithm: at each stage, visit the unvisited city nearest to the current city.
This entry was posted in optimization and tagged greedy algorithm.
|
{"url":"http://www.portfolioprobe.com/2010/12/16/feeding-a-greedy-algorithm/","timestamp":"2014-04-17T21:23:42Z","content_type":null,"content_length":"65858","record_id":"<urn:uuid:05217b3a-3490-40ed-9701-910357dc0445>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00022-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Some probability word problems. need correction
May 2nd 2010, 08:15 AM #1
Junior Member
May 2009
Some probability word problems. need correction
hi, i was doing some past paper questions on probabilities and need some checking and correction (if needed) for my answers.
Thanks in advance.
(a) Two clerks A and B work in an office. The probability that clerk A will be late on a given day is 0.15, while the prob that clerk B will be late is 0.1.
the prob that both will be late is 0.08
(i) The prob that either or both will be late on a given day
P(A or B) = 0.15 + 0.1 - 0.08
= 0.17
(ii) Only one clerk comes late on a given day.
0.17 - 0.08 = 0.09
Q2. An experiment consists of rolling a pair of fair dice. Find the prob of rolling a sum of 8 in such an experiment.
P(A) = s/n = 5/(6×6) = 5/36
Q3. The prob that a boy in a class is in the football team is 0.4, and the prob that he is in the chess team is 0.5. The prob that a boy in the class is in both teams is 0.2.
(i)Find the prob that a boy chosen at random is in the football team or chess team.
P(A or B) = 0.4 + 0.5 - 0.2 = 0.7
(ii)the prob that the boy is neither in football nor chess team
1 - 0.7
P(A)= 0.3
Q4. The maths students of a sixth form college want to elect a committee of 4 members. In all there are 14 candidates, 6 from 1st year, 8 from 2nd year.
(i) In how many ways can this committee be selected from all candidates?
C(14,4) = 1001 ways
(ii) Find the number of ways in which only 2nd year students are elected.
C(8,4) = 70 ways
(iii) Find the prob that the committee consists only of 2nd year students.
P(A) = s/n
70/1001 = 10/143
(iv) Deduce the prob that the committee does not contain any 2nd year students.
P(A) = 1 - 10/143
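A check on (iv): "does not contain any 2nd year students" means all four members come from the 6 first-year candidates, so P = C(6,4)/C(14,4) = 15/1001. The answer 1 - 10/143 given above is instead the probability that the committee is not made up entirely of 2nd year students.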
|
{"url":"http://mathhelpforum.com/statistics/142638-some-probability-word-problems-need-correction.html","timestamp":"2014-04-21T04:39:16Z","content_type":null,"content_length":"30908","record_id":"<urn:uuid:8c20fef3-f934-49b5-946d-28b28296ff07>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00537-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Homework Help
Posted by Erica on Monday, March 5, 2012 at 1:58pm.
the design below grows each day, as shown. if the pattern continues to grow like this, how many tiles will there be in the design on the tenth day? on the fiftieth day?
Day 1 1 tile
Day 2 5 tiles
Day 3 13 tiles
• Math - Susan, Monday, March 5, 2012 at 2:23pm
1+4(n-1) where n is the day number
on day 10
on day fifty
• Math - Steve, Monday, March 5, 2012 at 3:52pm
Day 3 does not fit: 1+4*2 = 9
Since the difference is changing by 4 each day, we will have a quadratic:
Day n: 2n^2 - 2n + 1
• Math - Sandro, Friday, April 27, 2012 at 5:50pm
What do you mean by the ^ in your equation Steve? I don't really understand the 2n^2-2n+1. I did figure out it was a quadratic equation, and actually happened to figure out the entire problem
until I noticed it said "Write a description or formula that allows me to figure out the number of tiles for any day number?"
• Math - Sandro, Friday, April 27, 2012 at 6:21pm
Oh, ok, do you mean to cube the first number by the 2nd, get back to me... Thnx
• Math - Sandro, Friday, April 27, 2012 at 7:09pm
Steve is wrong!!! D:
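(A note added for clarity: the ^ symbol means "raised to the power of," so 2n^2 - 2n + 1 means 2n² - 2n + 1. For n = 1, 2, 3 it gives 1, 5, and 13 tiles, matching the table above, so Steve's formula does fit the pattern; it predicts 2(10)^2 - 2(10) + 1 = 181 tiles on day 10 and 2(50)^2 - 2(50) + 1 = 4901 tiles on day 50.)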
|
{"url":"http://www.jiskha.com/display.cgi?id=1330973928","timestamp":"2014-04-19T12:50:05Z","content_type":null,"content_length":"10021","record_id":"<urn:uuid:44e7e56b-ea80-4b10-8a8e-e4240ac66eb8>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00529-ip-10-147-4-33.ec2.internal.warc.gz"}
|
DOCUMENTA MATHEMATICA, Extra Vol. ICM III (1998), 575-584
Leslie Greengard and Xiaobai Sun
Title: A New Version of the Fast Gauss Transform
The evaluation of the sum of $N$ Gaussians at $M$ points in space arises as a computational task in diffusion, fluid dynamics, finance, and, more generally, in mollification. The work required for
direct evaluation grows like the product $NM$, rendering large-scale calculations impractical. We present an improved version of the fast Gauss transform [L. Greengard and J. Strain, {\em SIAM J.
Sci. Stat. Comput.} {\bf 12}, 79 (1991)], which evaluates the sum of $N$ Gaussians at $M$ arbitrarily distributed points in $O(N+M)$ work, where the constant of proportionality depends only on the
precision required. The new scheme is based on a diagonal form for translating Hermite expansions and is significantly faster than previous versions.
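For scale, the direct evaluation whose $O(NM)$ cost the fast transform removes can be sketched as follows (a naive Python illustration only; the width parameter delta and the array shapes are notational assumptions, since the abstract does not fix them):
import numpy as np
def direct_gauss_transform(sources, targets, weights, delta):
    # Naive O(N*M) evaluation of G(t_j) = sum_i q_i * exp(-||t_j - s_i||^2 / delta),
    # with sources of shape (N, d), targets (M, d), weights (N,).
    out = np.empty(len(targets))
    for j, t in enumerate(targets):
        sq_dist = np.sum((sources - t) ** 2, axis=1)
        out[j] = np.dot(weights, np.exp(-sq_dist / delta))
    return out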
1991 Mathematics Subject Classification: 65R10, 44A35, 35K05
Keywords and Phrases: diffusion, fast algorithms, Gauss transform
Full text: dvi.gz 18 k, dvi 41 k, ps.gz 71 k.
|
{"url":"http://www.emis.de/journals/DMJDMV/xvol-icm/16/Greengard.MAN.html","timestamp":"2014-04-20T16:30:16Z","content_type":null,"content_length":"1869","record_id":"<urn:uuid:18e43ca1-136e-481f-ad83-bd2aae3f15f1>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00244-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Probability density function
March 17th 2010, 04:55 AM #1
Junior Member
Jan 2008
Probability density function
Hi, I have this question that I've been struggling with.
Given $D$ is a random distance with probability density function $f_d$ and expected value $\mu$, then the sampled distance distribution has probability density function $g$, where
$g(x)=\frac{xf_d(x)}{\mu}, x \geq 0$
Find the form of the distance probability density function $f_d$ and its expected value $\mu$ if the sample distances are fitted by the probability density function $g(x)=\frac{x^2 e^{-\frac{x}
{\alpha}}}{2 \alpha^3}, x \geq 0$ and $\alpha = 8.1$.
Any help will be appreciated. Thanks.
I have tried equating the two g(x) but I keep getting something ridiculous like showing $E_d(X) = \mu$
March 17th 2010, 10:42 AM #2
Do you agree that we have $\frac{f_d(x)}{\mu}=\frac{xe^{-x/\alpha}}{2\alpha^3}$?
Then $f_d(x)=\frac{\mu}{2\alpha^3}\cdot xe^{-x/\alpha}$, for any positive x.
But we want $f_d(x)$ to be a pdf, so its integral over 0 to infinity should be 1. And this computation will give you $\mu$.
Any further question?
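Carrying the suggested computation through, for completeness: $\int_0^\infty \frac{\mu}{2\alpha^3}\, x e^{-x/\alpha}\, dx = \frac{\mu}{2\alpha^3}\cdot\alpha^{2} = \frac{\mu}{2\alpha} = 1$, so $\mu = 2\alpha = 16.2$, and hence $f_d(x)=\frac{1}{\alpha^{2}}\, x e^{-x/\alpha}$ for $x \geq 0$.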
|
{"url":"http://mathhelpforum.com/advanced-statistics/134257-probability-density-function.html","timestamp":"2014-04-17T11:05:31Z","content_type":null,"content_length":"36061","record_id":"<urn:uuid:03ae8c90-10cc-4f72-a780-bfeece4eb488>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00623-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Standard Deviation (self.matheducation)
submitted by MisterHoward
I am teaching a chapter in probability and statistics to my 11th grade Algebra 2 class. What a nice relatable way to describe standard deviation to them? I am having difficulty coming up with ways to
explain it that they will understand easily. English is their second language as well although most are quite good at speaking it. Thanks for any ideas.
all 11 comments
[–]delcrossb 2 points ago
I also teach a lot of ESL learners and I found the best way to go about it was to show them visually. We made a couple of scatter plots, calculated the mean, and then showed that SOMETHING was different between them (the dispersion about the mean). You can usually lead them through to saying something to the effect of being more or less spread out, and then I try to walk them through a simple (i.e. 5-number) example of calculating standard deviations by hand. The "distance" part can often be difficult for them to grasp (depending on how they did or are doing in algebra) but they can usually get SOME kind of concept. After that I reinforced by showing them different scatter plots and asked them to arrange by standard deviation from low to high. Once they get the idea I teach them how to do it on the TI/on Excel.
[–]MakeWar90
I find this lesson follows nicely after grouped averages, as the formulas are similar. First talk about what "deviation" means, and that standard deviation is just the average deviation. Calculate some piece of data for the class, like height. Find the average height of the class. Get each member of the class to find their personal deviation. When it comes time to sum the deviations, you can talk about why you square the deviations so that they are all positive (try finding a pair of students whose deviations are negatives of each other). From there it's very similar to finding an average: just sum the squared deviations and divide by the number of data points. We square root at the end to "undo" our squaring from before.
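For what it's worth, that walk-through translates almost line for line into code; a small sketch (mine, not from the thread) mirroring each classroom step:

import math

def population_sd(data):
    mean = sum(data) / len(data)
    deviations = [x - mean for x in data]   # each student's personal deviation
    squared = [d * d for d in deviations]   # square so negatives don't cancel
    variance = sum(squared) / len(data)     # average the squared deviations
    return math.sqrt(variance)              # square root to scale back down

print(population_sd([150, 160, 165, 170, 180]))  # heights in cm -> 10.0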
[–]MakeWar90
This is not correct, the standard deviation is the average distance between your data points and your mean. To find the standard deviation, we take the distance between each data point and the mean,
square each of those distances to avoid negatives cancelling out positives, sum the distances, and divide by the number of data points in our sample (because it's the average). We then square root
our whole answer to "undo" the squaring we did previously. The number we get just tells us how spread out the data is.
[–]wiggyword
The standard deviation is not the average of anything. It is the square root of the average of the squares of the distances between your data points and your mean.
Squaring the distances is not primarily to "avoid negatives canceling out positives". It does have that effect, but so, too, would taking the absolute value of each distance. Why prefer to square
them rather than take absolute values?
Taking the square root of the average of the squares of the distances doesn't "undo" the squaring. It does have the advantage of converting your metric back into the same units as your data, which is
significant, but to say that it "undoes the squaring" is like saying that the distance formula has a square root in it to undo the squaring you did. If the squaring was going to need undoing why did
you do it in the first place? I cannot imagine a student whose understanding of the material will be enhanced by suggesting that you take the square root to "undo the squaring".
The number you get is an indicator of how spread out the data is, but there are other indicators as well. Variance is one. MAD is another. Why are some better than others? This is a complicated
question. You don't have to go into arbitrary detail on every topic, but don't treat things like this as magic that you do because you just do it. Come up with samples with the same mean but
different MAD and standard deviations, and calculate them for each sample. Challenge students to come up with their own data sets which have high MAD and low standard deviation, and vice versa. Let
them develop an intuition for what standard deviation is saying and what it isn't, the features of your data that it is good at capturing and what it may not be so good at.
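To make that last challenge concrete, here is a small sketch (my example data, not from the thread) of two data sets whose ordering by MAD and by standard deviation disagree — the outlier-heavy set has the smaller MAD but the larger standard deviation:

import statistics as st

def mad(xs):
    # mean absolute deviation about the mean
    m = st.mean(xs)
    return st.mean(abs(x - m) for x in xs)

uniform = [-6, 6] * 5      # evenly spread:   MAD = 6.0, SD = 6.0
outlier = [0] * 9 + [30]   # one big outlier: MAD = 5.4, SD = 9.0

for data in (uniform, outlier):
    print(mad(data), st.pstdev(data))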
[–]MakeWar90
"Undo" was in quotes because I meant it very lightly, perhaps a better phrase (the one I would use in the classroom) is "scale down". As for your comments about absolute value there are measures of
spread that use the absolute value of the distances (lookup "average absolute deviation"), but for whatever reason the standard deviation is more conventional and a part of the curriculum. I know it
isn't the average distance but for high school students who are taking data management because they've been told it is the easiest grade 12 math course (which it is for some, but many students still
find it very difficult) this is a fine way to make them understand why we would use standard deviation, and I've never had trouble with them not understanding the square root at the end, comparing
the differences in value between 1 and 1^2, and 20 and 20^2 works well here to help explain why our answer needs scaling down. Don't jump to conclusions saying I "treat things like magic" just
because I haven't mentioned other measures of spread. MisterHoward asked for a relatable way to explain standard deviation, and that's exactly what I've given him.
|
{"url":"http://www.reddit.com/r/matheducation/comments/14pqzi/standard_deviation/","timestamp":"2014-04-16T17:39:51Z","content_type":null,"content_length":"77747","record_id":"<urn:uuid:67f9c34e-547d-4d5b-b374-e1255bf260be>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00469-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Calculus/Integration techniques/Trigonometric Integrals
From Wikibooks, open books for an open world
When the integrand is primarily or exclusively based on trigonometric functions, the following techniques are useful.
Powers of Sine and Cosine[edit]
We will give a general method to solve integrands of the form $\cos^m(x)\sin^n(x)$. First let us work through an example:
$\int\cos^3(x)\sin^2(x)\,dx.$
Notice that the integrand contains an odd power of cos. So rewrite it as
$\int\cos^2(x)\sin^2(x)\cos(x)\,dx.$
We can solve this by making the substitution $u=\sin(x)$ so $du=\cos(x)\,dx$. Then we can write the whole integrand in terms of $u$ by using the identity $\cos^2(x)=1-\sin^2(x)=1-u^2$:
$\begin{matrix} \int\cos^3(x)\sin^2(x)\,dx &=&\int\cos^2(x)\sin^2(x)\cos(x)\,dx\\ &=&\int (1-u^2)u^2\,du\\ &=&\int u^2\,du - \int u^4\,du\\ &=&{1\over 3} u^3-{1\over 5}u^5 + C\\ &=&{1\over 3} \sin^3(x)-{1\over 5}\sin^5(x)+C \end{matrix}.$
This method works whenever there is an odd power of sine or cosine.
To evaluate $\int\cos^m(x)\sin^n(x)\,dx$ when either $m$ or $n$ is odd.
□ If $m$ is odd substitute $u=\sin(x)$ and use the identity $\cos^2(x)=1-\sin^2(x)=1-u^2$.
□ If $n$ is odd substitute $u=\cos(x)$ and use the identity $\sin^2(x)=1-\cos^2(x)=1-u^2$.
Find $\int_0^{\pi/2} \cos^{40}(x)\sin^3(x) dx$.
As there is an odd power of $\sin$ we let $u=\cos(x)$ so $du=-\sin(x)\,dx$. Notice that when $x=0$ we have $u=\cos(0)=1$ and when $x=\pi/2$ we have $u=\cos(\pi/2) = 0$.
$\begin{matrix} \int_0^{\pi/2} \cos^{40}(x)\sin^3(x)\, dx &=& \int_0^{\pi/2} \cos^{40}(x)\sin^2(x) \sin(x)\, dx \\ &=& -\int_{1}^{0} u^{40} (1-u^2)\, du \\ &=&\int_{0}^{1} u^{40} (1-u^2)\, du\\ &=& \int_{0}^{1} u^{40} - u^{42}\, du \\ &=& [\frac{1}{41}u^{41} - \frac{1}{43}u^{43}]_0^1 \\ &=& \frac{1}{41}-\frac{1}{43}. \end{matrix}$
When both $m$ and $n$ are even things get a little more complicated.
To evaluate $\int\cos^m(x)\sin^n(x)\,dx$ when both $m$ and $n$ are even.
Use the identities $\sin^2(x)=\frac{1}{2}(1-\cos(2x))$ and $\cos^2(x)=\frac{1}{2}(1+\cos(2x))$.
Find $\int\sin^2(x)\cos^4(x)\,dx.$
As $\sin^2(x)=\frac{1}{2}(1-\cos(2x))$ and $\cos^2(x)=\frac{1}{2}(1+\cos(2x))$ we have
$\int \sin^2(x)\cos^4(x)\,dx = \int \left( {1 \over 2}(1 - \cos(2x)) \right) \left( {1 \over 2}(1 + \cos(2x)) \right)^2 \,dx,$
and expanding, the integrand becomes
$\frac{1}{8} \int \left( 1 - \cos^2(2x) + \cos(2x)- \cos^3(2x) \right) \,dx.$
Using the multiple angle identities
$\begin{matrix} I & = & \frac{1}{8} \left( \int 1 \, dx - \int \cos^2(2x)\, dx + \int \cos(2x)\,dx -\int \cos^3(2x)\,dx \right) \\ & = & \frac{1}{8} \left( x - \frac{1}{2} \int (1 + \cos(4x))\,dx + \frac{1}{2}\sin(2x) -\int \cos^2(2x) \cos(2x) \,dx\right) \\ & = & \frac{1}{16} \left( x + \sin(2x) - \int \cos(4x) \,dx -2 \int(1-\sin^2(2x))\cos(2x)\,dx\right) \\ \end{matrix}$
then we obtain on evaluating
$I=\frac{x}{16}-\frac{\sin(4x)}{64} + \frac{\sin^3(2x)}{48}+C$
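As a quick sanity check (not part of the original wikibook), differentiating the result should recover the integrand, e.g. with SymPy:

from sympy import symbols, sin, cos, diff, simplify

x = symbols('x')
antiderivative = x/16 - sin(4*x)/64 + sin(2*x)**3/48
# Prints 0 if the antiderivative above is correct.
print(simplify(diff(antiderivative, x) - sin(x)**2 * cos(x)**4))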
Powers of Tan and Secant[edit]
To evaluate $\int\tan^m(x)\sec^n(x)\,dx$.
1. If $n$ is even and $n\ge 2$ then substitute $u=\tan(x)$ and use the identity $\sec^2(x)=1+\tan^2(x)$.
2. If $n$ and $m$ are both odd then substitute $u=\sec(x)$ and use the identity $\tan^2(x)=\sec^2(x)-1$.
3. If $n$ is odd and $m$ is even then use the identity $\tan^2(x)=\sec^2(x)-1$ and apply a reduction formula to integrate $\sec^j(x)dx\,$, using the examples below to integrate when $j=1,2$.
Example 1[edit]
Find $\int \sec^2(x)dx$.
There is an even power of $\sec(x)$. Substituting $u=\tan(x)$ gives $du = \sec^2(x)dx$ so
$\int \sec^2(x)dx = \int du = u+C = \tan(x)+C.$
Example 2[edit]
Find $\int \tan(x)dx$.
Let $u=\cos(x)$ so $du=-\sin(x)dx$. Then
$\begin{matrix} \int \tan(x)dx &=& \int \frac{\sin(x)}{\cos(x)} dx \\ &=& \int \frac{-1}{u} du \\ &=& -\ln |u| + C \\ &=& -\ln |\cos(x) | + C\\ &=& \ln |\sec(x)| +C. \end{matrix}$
Example 3[edit]
Find $\int \sec(x)dx$.
The trick to do this is to multiply and divide by the same thing like this:
$\begin{matrix} \int \sec(x)\,dx &=& \int \sec(x)\frac{\sec(x) + \tan(x)}{\sec(x) + \tan(x)}\, dx \\ &=& \int \frac{\sec^2(x) + \sec(x) \tan(x)}{\sec(x)+ \tan(x)}\,dx \end{matrix}.$
Making the substitution $u= \sec(x) + \tan(x)$ so $du = (\sec(x)\tan(x) + \sec^2(x))\,dx,$
$\begin{matrix} \int \sec(x) dx &=& \int \frac{1}{u} du\\ &=& \ln |u| + C \\ &=& \ln |\sec(x) + \tan(x)| + C \end{matrix}.$
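Again as a quick check (added here, not in the original), the derivative of $\ln|\sec(x)+\tan(x)|$ should give back $\sec(x)$:

from sympy import symbols, sec, tan, log, diff, simplify

x = symbols('x')
print(simplify(diff(log(sec(x) + tan(x)), x) - sec(x)))  # expected: 0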
More trigonometric combinations[edit]
For the integrals $\int \sin(nx)\cos(mx)\,dx$ or $\int \sin(nx)\sin(mx)\,dx$ or $\int \cos(nx)\cos(mx)\,dx$ use the identities
□ $\sin(a)\cos(b) = {1\over 2}(\sin{(a+b)}+\sin{(a-b)}) \,$
□ $\sin(a)\sin(b) = {1\over 2}(\cos{(a-b)}-\cos{(a+b)}) \,$
□ $\cos(a)\cos(b) = {1\over 2}(\cos{(a-b)}+\cos{(a+b)}) \,$
Example 1[edit]
Find $\int \sin(3x)\cos(5x)\,dx.$
We can use the fact that $\sin(a)\cos(b)=(1/2)(\sin(a+b)+\sin(a-b))$, so
$\sin(3x)\cos(5x)=(\sin(8x)+\sin{(-2x)})/2 \,$
Now use the oddness property of $\sin(x)$ to simplify
$\sin(3x)\cos(5x)=(\sin(8x)-\sin(2x))/2 \,$
And now we can integrate
$\begin{matrix} \int \sin(3x)\cos(5x)\,dx & = & \frac{1}{2} \int \sin(8x)-\sin(2x)dx \\ & = & \frac{1}{2}(-\frac{1}{8}\cos(8x)+\frac{1}{2}\cos(2x)) +C \\ \end{matrix}$
Example 2[edit]
Find:$\int \sin(x)\sin(2x)\,dx$.
Using the identities
$\sin(x) \sin(2x)= \frac{1}{2} \left( \cos(-x)-\cos(3x) \right) = \frac{1}{2} (\cos(x) -\cos(3x)).$
$\begin{matrix} \int \sin(x)\sin(2x)\,dx & = & \frac{1}{2} \int (\cos(x)-\cos(3x))\,dx \\ & = & \frac{1}{2}(\sin(x)-\frac{1}{3}\sin(3x)) + C \end{matrix}$
|
{"url":"https://en.wikibooks.org/wiki/Calculus/Integration_techniques/Trigonometric_Integrals","timestamp":"2014-04-23T14:30:35Z","content_type":null,"content_length":"47369","record_id":"<urn:uuid:82862fa5-fa96-4760-8db5-184f4d649844>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00573-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Posts about D-Wave One on Hack The Multiverse
I’ve been thinking about the BlackBox compiler recently and came up with a very interesting analogy to the way it works. There are actually lots of different ways to think about how BlackBox works,
and we’ll post more of them over time, but here is a very high level and fun one.
The main way that you use BlackBox is to supply it with a classical function which computes the “goodness” of a given bitstring by returning a real number (the lower this number, the better the
bitstring was).
Whatever your optimization problem is, you need to write a function that encodes your problem into a series of bits (x1, x2, x3…. xN) to be discovered, and which also computes how “good” a given
bitstring (e.g. 0,1,1…0) is. When you pass such a function to Blackbox, the quantum compiler then repeatedly comes up with ideas for bitstrings, and using the information that your function supplies
about how good its “guesses” are, it quickly converges on the best bitstring possible.
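Concretely, the callback has the shape "bitstring in, real-valued score out". A toy sketch (the function and its penalty rule are my own illustration; the actual BlackBox API may differ):

def goodness(bits):
    # Score a candidate bitstring; lower is better.
    # Toy objective: penalize every pair of adjacent equal bits.
    return float(sum(1 for a, b in zip(bits, bits[1:]) if a == b))

print(goodness([0, 1, 1, 0]))  # 1.0 -- one adjacent equal pair (1, 1)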
So using this approach the quantum processor behaves as a co-processor to a classical computing resource. The classical computing resources handles one part of the problem (computing the goodness of
a given bitstring), and the quantum computer handles the other (suggesting bitstrings). I realized that this is described very nicely by the two computers playing 20 questions with one another.
The quantum computer suggests creative solutions to a problem, and then the classical computer is used to give feedback on how good the suggested solution is. Using this feedback, BlackBox will
intelligently suggest a new solution. So in the example above, Blackbox knows NOT to make the next question “Is it a carrot?”
There is actually a deep philosophical point here. One of the pieces that is missing in the puzzle of artificial intelligence is how to make algorithms and programs more creative. I have always been
an advocate of using quantum computing to power AI, but we now start to see concrete ways in which it could really start to address some of the elusive problems that crop up when trying to build
intelligent machines.
At D-Wave, we have been starting some initial explorations in the areas of machine creativity and machine dreams, but it is early days and the pieces are only just starting to fall into place.
I was wondering if you could use the QC to actually play 20 questions for real. This is quite a fun application idea. If anyone has any suggestions for how to craft 20 questions into an objective
function, let me know. My first two thoughts were to do something with Wordnet and NLTK. You could try either a pattern matching or a machine learning version of ‘mining’ wordnet for the right
answer. This project would be a little Watson-esque in flavour.
Here is a video showing how some of the parts of a D-Wave Rainier processor go together to create the fabric of the quantum computer.
The animation shows how the processor is made up of 128 qubits, 352 couplers and nearly 24,000 Josephson junctions. The qubits are arranged in a tiling pattern to allow them to connect to one
There are two new tutorials on the website, complete with code snippets! Click on the images to go to the tutorial pages on the developer portal:
This tutorial (above) describes how to solve Weighted Maximum Independent Set (WMIS) problems using the hardware. Finding the Maximum Independent Set of a bunch of connected variables can be very useful. At a high level, the MIS gives us information about the largest number of 'things' that can be achieved from a set when lots of those 'things' have conflicting requirements. In the tutorial, an example is given of scheduling events for a sports team, but you can imagine all sorts of variants: train timetabling to improve services, assigning patients to surgeons to maximize the throughput of vital operations and minimize waiting lists, adjusting variable speed limits on motorways to reduce traffic jams during periods of congestion, etc.
This tutorial (above) describes how to find Maximum Common Subgraphs given two graphs. The example given in this tutorial is in molecule matching. Looking for areas where sub-structures in molecules
are very similar can give us information about how such molecules behave. This is just one simple example of MCS. You can also imagine the same technique being applied to social networks to look for
matches between the structuring of social groups. This technique could be used for improving ad placement or even for detecting crime rings.
These two tutorials are closely linked – as finding the MCS involves finding the MIS as part of the process. There are also lots of interesting applications of both these methods in graph theory and number theory.
If anyone would like to implement WMIS or MCS to solve any of the problem ideas mentioned in this post, please feel free!
On Friday Hamish Johnston from Physics World visited D-Wave to have a look round and investigate the ‘inside’ of the D-Wave box. Read his report here!
Physics World Blog >> Inside the box at D-Wave
Here are a few of the photos from his visit on Flickr:
The next generation of D-Wave’s technology is called Vesuvius, and it’s going to be a very interesting processor. The testing and development of this new generation of quantum processor is going
well. In the meantime, here are some beautiful images of Vesuvius!
Above: An entire wafer of Vesuvius processors after the full fabrication process has completed.
Above: Photographing the wafer from a different angle allows more of the structure to be seen. Exercise for the reader: Estimate the number of qubits in this image :)
Above: A slightly closer view of part of the wafer. The small scale of the structures (<1um) produces a diffraction grating effect (like you see on the underside of a CD) resulting in a beautiful
spectrum of colours reflecting from the wafer surface.
Above: A different angle of shot produces different colours and allows different areas of the circuitry to become visible.
Above: A close-up image of a single Vesuvius processor on the wafer. The white square seen to the right of the image contains the main ‘fabric’ of 512 connected qubits.
Above: An image of a processor wire-bonded to the chip carrier, ready to be installed into the computer system. The wires carry the signals to the quantum components and associated circuitry on the chip.
Above: A larger view of the bonded Vesuvius processor. More of the chip packaging is now also visible in the image.
Above: The full chip packaging is visible, complete with wafer.
So as part of learning how to become a quantum ninja and program the D-Wave One, it is important to understand the problem that the machine is designed to solve. The D-Wave machine is designed to
find the minimum value of a particular mathematical expression which I can write down in one line:
$E(s_1,\ldots,s_N)=\sum_i h_i s_i + \sum_{i<j} J_{ij}\, s_i s_j$
As people tend to be put off by mathematical equations in blogposts, I decided to augment it with a picture of a cute cat. However, unless you are very mathematically inclined (like kitty), it might
not be intuitive what minimizing this expression actually means, why it is important, or how quantum computing helps. So I’m going to try to answer those three questions in this post.
1.) What does the cat’s expression mean?
The machine is designed to solve discrete optimization problems. What is a discrete optimization problem? It is one where you are trying to find the best settings for a bunch of switches. Here’s a
graphical example of what is going on. Let’s imagine that our switches are light switches which each have a ‘bias value’ (a number) associated with them, and they can each be set either ON or OFF:
The light switch game
The game that we must play is to set all the switches into the right configuration. What is the right configuration? It is the one where when we set each of the switches to either ON or OFF (where ON
= +1 and OFF = -1) and then we add up all the switches’ bias values multiplied by their settings, we get the lowest answer. This is where the first term in the cat’s expression comes from. The bias
values are called h’s and the switch settings are called s’s.
So depending upon which switches we set to +1 and which we set to -1, we will get a different score overall. You can try this game. Hopefully you’ll find it easy because there’s a simple rule to
We find that if we set all the switches with positive biases to OFF and all the switches with negative biases to ON and add up the result then we get the lowest overall value. Easy, right? I can give
you as many switches as I want with many different bias values and you just look at each one in turn and flip it either ON or OFF accordingly.
OK, let’s make it harder. So now imagine that many of the pairs of switches have an additional rule, one which involves considering PAIRS of switches in addition to just individual switches… we add a
new bias value (called J) which we multiply by BOTH the switch settings that connect to it, and we add the resulting value we get from each pair of switches to our overall number too. Still, all we
have to do is decide whether each switch should be ON or OFF subject to this new rule.
But now it is much, much harder to decide whether a switch should be ON or OFF, because its neighbours affect it. Even with the simple example shown with 2 switches in the figure above, you can’t
just follow the rule of setting them to be the opposite sign to their bias value anymore (try it!). With a complex web of switches having many neighbours, it quickly becomes very frustrating to try
and find the right combination to give you the lowest value overall.
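To see why, here is a brute-force sketch (mine, not from the post) that scores a switch setting and then tries all of them — fine for a handful of switches, hopeless once the number of combinations explodes:

from itertools import product

def energy(s, h, J):
    # E(s) = sum_i h_i*s_i + sum_(i,j) J_ij*s_i*s_j, each s_i in {-1, +1}
    return (sum(h[i] * s[i] for i in range(len(h))) +
            sum(Jij * s[i] * s[j] for (i, j), Jij in J.items()))

def best_setting(h, J):
    # Exhaustive search over all 2^N ON/OFF combinations.
    return min(product((-1, 1), repeat=len(h)), key=lambda s: energy(s, h, J))

h = [1.0, -2.0, 0.5]
J = {(0, 1): 1.5, (1, 2): -1.0}
print(best_setting(h, J))  # (-1, 1, 1) with energy -5.0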
2.) It’s a math expression – who cares?
We didn’t build a machine to play a strange masochistic light switch game. The concept of finding a good configuration of binary variables (switches) in this way lies at the heart of many problems
that are encountered in everyday applications. A few are shown in figure below (click to expand):
Even the idea of doing science itself is an optimization problem (you are trying to find the best 'configuration' of terms contributing to a scientific equation which matches our real-world observations).
With a couple of switches you can just try every combination of ON's and OFF's; there are only four possibilities: [ON ON], [ON OFF], [OFF ON] or [OFF OFF]. But as you add more and more switches, the number of possible ways that the switches can be set grows exponentially: with $N$ switches there are $2^N$ possible configurations.
You can start to see why the game isn’t much fun anymore. In fact it is even difficult for our most powerful supercomputers. Being able to store all those possible configurations in memory, and
moving them around inside conventional processors to calculate if our guess is right takes a very, very long time. With only 500 switches, there isn’t enough time in the Universe to check all the
Quantum mechanics can give us a helping hand with this problem. The fundamental power of a quantum computer comes from the idea that you can put bits of information into a superposition of states.
Which means that using a quantum computer, our light switches can be ON and OFF at the same time:
Now lets consider the same bunch of switches as before, but now held in a quantum computer’s memory:
Because all the light switches are on and off at the same time, we know that the correct answer (correct ON/OFF settings for each switch) is represented in there somewhere… it is just currently
hidden from us.
What the D-Wave quantum computer allows you to do is take this ‘quantum representation’ of your switches and extract the configuration of ONs and OFFs with the lowest value.
Here’s how you do this:
You start with the system in its quantum superposition as described above, and you slowly adjust the quantum computer to turn off the quantum superposition effect. At the same time, you slowly turn
up all those bias values (the h and J’s from earlier). As this is performed, you allow the switches to slowly drop out of the superposition and choose one classical state, either ON or OFF. At the
end, each switch MUST have chosen to be either ON or OFF. The quantum mechanics working inside the computer helps the light switches settle into the right states to give the lowest overall value when
you add them all up at the end. Even though there are 2^N possible configurations it could have ended up in, it finds the lowest one, winning the light switch game.
Keen-eyed readers may have noticed a new section on the D-Wave website entitled ‘developer portal’. Currently the devPortal is being tested within D-Wave, however we are hoping to open it up to many
developers in a staged way within the next year.
We’ve been getting a fair amount of interest from developers around the world already, and we’re anxious to open up the portal so that everyone can have access to the tools needed to start
programming quantum computers! However given that this way of programming is so new we are also cautious about carefully testing everything before doing so. In short, it is coming, but you will have
to wait just a little longer to get access!
A few tutorials are already available for everyone on the portal. These are intended to give a simple background to programming the quantum systems in advance of the tools coming online. New
tutorials will be added to this list over time. If you’d like to have a look you can find them here: DEVELOPER TUTORIALS
In the future we hope that we will be able to grow the community to include competitions and prizes, programming challenges, and large open source projects for people who are itching to make a
contribution to the fun world of quantum computer programming.
|
{"url":"http://dwave.wordpress.com/tag/d-wave-one/","timestamp":"2014-04-20T01:28:48Z","content_type":null,"content_length":"115059","record_id":"<urn:uuid:a6e9cc8e-69e3-4989-916c-855e8d267906>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00466-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Henry John Stephen Smith
Born: 2 November 1826 in Dublin, Ireland
Died: 9 February 1883 in Oxford, England
Henry Smith's father was John Smith, an Irish barrister, and his mother was Mary Murphy from near Bantry Bay. Henry's father John died when he was less than two years old and Mary was left to bring
up their four children, of whom Henry was the youngest. One of his older sisters Eleanor Elizabeth Smith, born in 1822, went on to make a major contribution to education, and in particular to women's
education in Oxford. The family moved several times, but eventually found a permanent home in Ryde on the Isle of Wight in 1831.
Henry was a bright child who was taught first by his mother, then by private tutors in Hyde from 1838. He attended Rugby public school from the age of 15 as a boarder. He was outstanding over a range
of subjects and his ambition was a scholarship to Balliol College, Oxford. This was made harder since his health was poor (a brother and sister had both died) and he was taken to Italy after the
death of his brother in 1845 instead of completing his final year at Rugby. He undertook private reading while in Italy and was still able to win the scholarship.
At 19 he became a student at Balliol, but while spending the summer vacation in Italy his health problems became acute when he contracted smallpox. He was able to recommence his studies but while on
holiday in France in the following year he contracted malaria. He could not return to Oxford but this had the advantage that he was able to study with some of the top mathematicians at the Sorbonne
and the Collège de France such as Arago. After his health recovered he returned to Oxford in 1847 and in 1849 was awarded a double first in mathematics and classics.
Smith became a fellow, then a tutor at Balliol College. In 1860 he was appointed Savilian professor of geometry despite a strong field of applicants including George Boole. In 1861 he was elected a
fellow of the Royal Society but despite the status of the Savilian chair he could not afford to give up his income from lecturing and continued teaching at Balliol. This arrangement continued until
1873 when he was made a fellow of Corpus Christi College which provided him with an income but no duties. This enabled him to give up his teaching position at Balliol.
Smith did not marry but lived in Oxford with his mother until her death in 1857. At this time his sister Eleanor moved in with him to effectively become his housekeeper. At the time that Eleanor
moved in, Smith was living in St Giles' but in 1874 he was appointed as keeper of the University Museum and moved into the keeper's house in the Museum in South Parks Road.
While on the continent during his school and undergraduate years he had learnt French, German and Italian and read widely. He had been most influenced by the work of Gauss. Smith said:-
If we except the great name of Newton (and the exception is one that the great Gauss himself would have been delighted to make) it is probable that no mathematician of any age or country has ever
surpassed Gauss in the combination of an abundant fertility of invention with an absolute vigorousness in demonstration...
Influenced by Gauss, Smith's most important contributions are in number theory where he worked on elementary divisors. He proved that any integer can be expressed as the sum of 5 squares and as the
sum of 7 squares, showing in how many ways this could occur. In addition to solving these cases explicitly, he gave a method which would yield the number of ways that an integer can be expressed as
the sum of k squares for any fixed k. He published his results in The orders and genera of quadratic forms containing more than three indeterminates published in the Proceedings of the Royal Society
in 1867. Eisenstein had earlier proved the result for 3 squares and Jacobi for 2, 4 and 6 squares. Smith also extended Gauss's theorem on real quadratic forms to complex quadratic forms.
From 1859 to 1865 he prepared a report in five parts on the Theory of Numbers. In it Smith analyses the work of other mathematicians but adds much of his own. This work has been described as the:-
... the most complete and elegant monument ever erected to the theory of numbers.
Smith also wrote on geometrical topics. His first two papers were on geometry and, in 1868, he wrote Certain cubic and biquadratic problems which won him the Steiner prize of the Royal Academy of
Smith is remembered for the Smith normal form for matrices. It appears to be less well known that, in 1875, he gave examples of discontinuous sets which are similar to the Sierpinski gasket, see [6].
His paper on this topic in the Proceedings of the London Mathematical Society for 1875 contains a description of the Cantor set eight years before Cantor.
He is described in [2] as follows:-
He was a tall, good-looking man, renowned for his charm, generosity, warmth, and spontaneous wit ...
We learn much of Smith from the following comment from John Conington, the professor of Latin at Oxford (see for example [3]):-
I do not know what Henry Smith may be at the subjects of which he professes to know something; but I never go to him about a matter of scholarship, in a line where he professes to know nothing
without learning more from him than I can get from any one else.
Smith joined the London Mathematical Society during the first year of its existence and he became its sixth president in 1874-76. He gave his presidential address On the present state and prospects
of some branches of pure mathematics which was published in the Proceedings of the London Mathematical Society in 1876. He received many honours including honorary degrees from the universities of
Cambridge and Dublin. He was appointed to two royal commissions, the royal commission into scientific instruction and the royal commission into the universities.
Despite health problems when he was a student, Smith mostly enjoyed excellent health until 1881 when his heath began to deteriorate, mainly due to the extremely high level of work that he continued
to undertake. A rather unusual event happened near the end of his life. The Academy of Sciences in Paris set the question for the 1882 Grand Prix in Mathematics to be precisely the problem on the
number of ways that an integer can be expressed as the sum of k squares that Smith had solved in his 1867 paper The orders and genera of quadratic forms containing more than three indeterminates.
Smith wrote to Hermite who then realised that the Academy of Sciences had blundered by setting a problem which had already been solved. Hermite asked Smith if he would cooperate in trying not to make
the Academy look foolish, and simply submit a solution to the Grand Prix question. This Smith did but died before the prize was awarded. After his death the Academy awarded two full prizes, one to
Smith and one to Minkowski.
Article by: J J O'Connor and E F Robertson
Honours awarded to Henry Smith
Savilian Geometry Professor 1861
Fellow of the Royal Society 1861
Fellow of the Royal Society of Edinburgh 1876
LMS President 1874 - 1876
|
{"url":"http://www-history.mcs.st-and.ac.uk/Biographies/Smith.html","timestamp":"2014-04-19T14:30:34Z","content_type":null,"content_length":"18772","record_id":"<urn:uuid:adabe9c5-00ae-40b7-b17a-0fa88a11e8d8>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00242-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Can Be Fun
It's fun to try and figure out a problem in life. Math gives us the opportunity to use our minds, apply reasoning, and come up with solutions. Many of the links shown on the left really make math a game. Math games can be a fun way to expand your mind. The brain is really interesting in the way it can learn as you challenge it with things that are new and unusual. The brain will physically change as you learn. It seems impossible, but the more you challenge your mind, the more it will expand, and you can have fun growing your ability to think. It's that easy, and it's fun too.
Figuring out the different shapes can be interesting and even fun. A triangle seems so simple, but finding its sides and angles is interesting. Formulas seem difficult until you see that they're not that hard. Understanding formulas is the key to success in math. Geometry is unique and interesting in so many ways. The shapes, and the formulas that go with them, are exciting. One famous theorem was discovered in the days of the Greeks: Pythagorean math was way ahead of its time. But that was thousands of years ago. The Egyptians must have had a great knowledge of math to build their amazing pyramids. They even calculated the star positions in the night sky and mirrored the heavens in the positions of their pyramids on earth (Giza - Orion's Belt). The interesting thing about math is that it was understood by the ancient people of the world. Even in Mesoamerica (Mexico, Central America, and South America) there was a great understanding of math and how it worked. The calendar of the ancient people of Mesoamerica was very accurate. Some say that their understanding of math, the calendar, and the stars surpassed any other civilization, even the ancient Egyptians.
Math is more than what you learn in your math class. It is an amazing expanse of scientific and world thought. Some say that everything can be proven mathematically. That remains to be seen. This site is primarily for the novice and elementary student. But advanced math on the internet is also easy to find with a Google search. The Pythagorean Theorem is an interesting search. A search for "math facts" is good too, as are searches on general subjects such as geometry, algebra, trigonometry, and calculus. The Greek scientists and mathematicians had a lot to say about many subjects. In England in the 1600s, Sir Isaac Newton invented calculus (advanced math). He explained how the planets went around the sun when visited by Edmond Halley (who discovered Halley's Comet). As Newton was explaining how gravity worked in his own mathematical terms, he created calculus! Later he wrote a great treatise on math in the greatest book ever written about math, called "Philosophiae Naturalis Principia Mathematica". Today it is still considered one of the greatest books ever written, not just on math, but on any scientific subject. Newton had a gift. And Einstein expanded the concepts of Newton to an even further understanding of space and time. Einstein's E=mc^2 was just a theory until one day it created the atomic era in the world. Sometimes thinking about math can bring new insights into the world around us. When Alexander the Great was conquering the world many years ago, he would send information about his discoveries to Aristotle. Aristotle had a mathematical and scientific mind and was always searching for new and interesting things. Today we are moving into worlds of knowledge that have been there for millions of years, but we didn't see them until now. With the power of the micron microscope and the Hubble Space Telescope we can see so near and so far that our ancient relatives would be amazed. Our knowledge base has never been at such a fever pitch for understanding math, science, and whatever we are researching. Even DNA has been uncovered. What next? You might be that person finding a solution through math, science, and logic that can open the door to a new tomorrow. Have a great time with this web page. Jim Wyler
|
{"url":"http://www.mathcanbefun.net/","timestamp":"2014-04-21T02:00:14Z","content_type":null,"content_length":"11915","record_id":"<urn:uuid:394df081-df81-45b0-b07f-34da0d3f1f1f>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00644-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Java/Scala library for algebra, mathematics
up vote 16 down vote favorite
Can you advise me some flexible and powerful, yet fast library which could cover SciPy (both in performance and functionality). I found SciPy very expressive - but I want to try something in Scala.
I read a little about Scala - but is not as featured as SciPy. Any alternatives? Maybe Java library?
Did you try apache library ( commons.apache.org/math )? – Stas Kurilin Mar 4 '11 at 12:40
2 Seems like a duplicate of: stackoverflow.com/questions/482305/… – Andrey Adamovich Mar 4 '11 at 12:45
@Superfilin Yeah, but it includes Scala, which opens up the field a bit. – Daniel C. Sobral Mar 4 '11 at 15:47
It's not a duplicate - I was looking for a package with something like Matlab style - so more for Scala. – Robert Zaremba Mar 14 '11 at 12:57
3 Answers
The functionality in Scipy is rather Matlab-like. So the question is whether you just want the core linear algebra / vector-matrix mathematics operations, or all sorts of things like
If you are not aware of both Scalala (now called Breeze) and ScalaLab, you should check them out--maybe they'll suit your needs.
If you need a more diverse library, there are a couple of Java libraries that might be suitable: CERN Colt and Apache Commons Math; these are intended to be used in Java style, however, and you're pretty much limited to using them that way from within Scala. (Though of course you can wrap the bits that you use particularly heavily in something prettier.)
ScalaLab + Apache Commons Math sound like a good combination. But for now I didn't find anything as good (in terms of composition and strength) as SciPy – Robert Zaremba Mar 14 '11 at 12:59
Scalala seems to be deprecated and merged into Breeze now – BeniBela Feb 24 '13 at 16:41
There are Scalala (a Scala linear algebra library) and ScalaLab (more like a Matlab-style Scala environment).
There is la4j (Linear Algebra for Java), a library that can be used from a Scala environment, but it doesn't contain any Scala features like higher-order functions since Java doesn't support them.
la4j was designed to be used in an imperative environment (not functional). So if you want to use it in a functional way - Scalala is the best choice.
|
{"url":"http://stackoverflow.com/questions/5193781/java-scala-library-for-algebra-mathematics","timestamp":"2014-04-18T14:21:23Z","content_type":null,"content_length":"81655","record_id":"<urn:uuid:f3a546ac-bc2b-4db8-9002-f77244dcf0d4>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00147-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Philadelphia ACT Tutor
Find a Philadelphia ACT Tutor
...I have experience tutoring math at the levels of pre-algebra through calculus, and would also be able to tutor probability, statistics, and actuarial math. I graduated with a degree in Russian
Language, and spent a full year living in St. Petersburg, Russia.
14 Subjects: including ACT Math, Spanish, calculus, statistics
...I specialize in Chemistry, Biology, and Mathematics. The thing that separates me from the other tutors is that I will teach you the what, how, why, and when of everything science. Science and
Mathematics is all about understanding the material and why things happen.
18 Subjects: including ACT Math, chemistry, calculus, biology
...I hold Pennsylvania certifications in elementary education, secondary mathematics, and educational administration. One of my leadership roles included establishing an in-house tutoring center
at my last high school. I used student information (such as test results, classroom grades, student and...
19 Subjects: including ACT Math, reading, geometry, algebra 2
...Teaching is something that I have been inspired to do by my mother, who is a Special Education teacher, so I have a real passion for inspiring children and getting to know them. Additionally, I
have taken on multiple different tutoring positions over the years, have volunteered in my spare time,...
26 Subjects: including ACT Math, reading, English, writing
...I also have extensive experience with Microsoft Excel, having worked as a financial analyst (something which also helped me apply my knowledge of statistics to the real world!). If you need help learning Excel, from the basics up to Monte Carlo financial modeling, let me know. Or, if you would like...
25 Subjects: including ACT Math, English, writing, statistics
|
{"url":"http://www.purplemath.com/philadelphia_act_tutors.php","timestamp":"2014-04-16T04:16:55Z","content_type":null,"content_length":"23925","record_id":"<urn:uuid:2b1a2905-8e9b-4c52-9a97-2279c41dab90>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00342-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: Bug in classno, qbfclassno?
David R. Kohel on Tue, 11 Jan 2000 12:43:52 +1100
Re: Bug in classno, qbfclassno?
> On Mon, Jan 10, 2000 at 05:55:18PM +0100, Henri Cohen wrote:
> >
> > > classno(-174660)= 168
> > > classno(-272580)= 180
> >
> > This function (written by me) should definitely be removed, or rewritten correctly.
> > It is based on a naive version of Shanks baby step giant step method which assumes
> > that the approximate class number obtained by an Euler product is good enough, and
> > that the class group is not too far from cyclic. The main reason for the bug is that
> > the class group has large 2 rank. It is not difficult to write a correct baby step
> > giant step method which completely avoids this (one has to take into account the
> > full group structure, and I give the algorithm in my GTM 138 book, starting from
> > the second printing, the first printing has serious errors), but in 10 years I have
> > been lazy to do so because of the much more powerful quadclassunit algorithms
> > based on Hafner-McCurley subexponential methods (assuming GRH).
> >
> > Henri Cohen
> Powerful yet not flawless (at least implementation-wise) :-)
> ? setrand(1);quadclassunit(-108760)[1]
> 48
> ? setrand(58);quadclassunit(-108760)[1]
> 72
> Thanks
> Igor
When the factorization can be computed, the two-torsion group
can be easily determined. This usually accounts for most of
the noncyclicity, so the subgroup of squares is a better group
in which to do BSGS. It also has the advantage of being a
potentially smaller group. Some further work, of course, is
needed for a proof of correctness. Alternatively once a multiple
m of the group exponent is determined, it is a trivial task to
compute the smallest multiple of the form m/2^t before returning
an answer.
Similar post-BSGS verification can be done for other small primes
p at which the group may have non-trivial p-rank.
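A minimal sketch of the m/2^t remark above (the helper names and the toy group are mine, not from the thread):

def strip_twos(m, generators, power, identity):
    # Given a multiple m of the group exponent, return the smallest
    # multiple of the form m / 2^t that still annihilates every generator.
    while m % 2 == 0 and all(power(g, m // 2) == identity for g in generators):
        m //= 2
    return m

# Toy check in (Z/35Z)^*, which has exponent 12: start from the multiple 24.
print(strip_twos(24, [2, 3], lambda g, e: pow(g, e, 35), 1))  # -> 12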
|
{"url":"http://pari.math.u-bordeaux.fr/archives/pari-dev-0001/msg00023.html","timestamp":"2014-04-20T18:26:50Z","content_type":null,"content_length":"6364","record_id":"<urn:uuid:fb765e9d-b22a-4e26-9bc8-36392f8f7d4b>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00366-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Posts by Imani
Total # Posts: 26
What are three improper fractions equivalent to 4 1/2?
what is the rule for this table: 52 → 44, 47 → 39, 40 → __, 36 → 28
cultural diversity
a speaker you overheard saying ____ was employing literate-style oral language.
An aircraft takes off and climbs at a constant 29 degree angle until it reaches and altitude of 6 miles. At that point, how far away is the plane from the airport?
Factoring 12m^2n^2-8mn+1
A 1.5 liter flask is filled with nitrogen at a pressure of 12 atmospheres. What size flask would be required to hold this gas at a pressure of 2.0 atmosphere?
A block of mass m = 5.9 kg is pulled up a θ = 21° incline as in the figure with a force of magnitude F = 34 N. (a) Find the acceleration of the block if the incline is frictionless.
Two packing crates of masses m1 = 10.0 kg and m2 = 6.60 kg are connected by a light string that passes over a frictionless pulley as in the figure. The 6.60-kg crate lies on a smooth incline of angle
A football punter accelerates a football from rest to a speed of 10 m/s during the time in which his toe is in contact with the ball (about 0.16 s). If the football has a mass of 0.50 kg, what
average force does the punter exert on the ball?
A vector has an x-component of −21.0units and a y-component of 36.5 units. Find the magnitude and direction of the vector.
A person walks 15.0° north of east for 3.70 km. How far due north and how far due east would she have to walk to arrive at the same location?
how did the middle class grow?
I have no idea!! Help!!!!
Ben can complete 3 math problems in 21 minutes. If he continues working at the same rate, how long will it take Ben to complete 16 math problems? thanks .
Math-3rd grade
3rd math homework
A car going 8 m/s goes over a cliff 78.4 meters high. How far from the base does it land?
help, I don't know what to do - this math problem is hard. help!
Economics(12 Multiple Choice)
1.C 2.C 3.C 4.B 5.D 6.B 7.A 8.C 9.A 10.B 11.C 12.B
global history
how does Madison refute the prevailing view that democracy was possible only in a small state?
i don't know
4th grade
write in words 8,042,176
Social Studies
Where did the Egyptians go to expand their buisness?
how do I classify polygons ?
|
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=Imani","timestamp":"2014-04-20T22:58:06Z","content_type":null,"content_length":"10397","record_id":"<urn:uuid:2889a840-3858-4b77-8f00-7877f3beef5c>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00435-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Khan Academy -- Virtual Cliffs Notes for Math & Physics (& More!)
Have you ever been stumped by the nuances of your physics, statistics or calculus books? Maybe having a better understanding of some of the basics that never quite sunk in would help.
[Joshua A. Dijksman gave a talk earlier this week about the Khan Academy at the 2011 APS March Meeting.]
Earlier in the week, Joshua A. Dijksman gave a talk about the Khan Academy -- a website that offers a "free world-class education for anyone anywhere," according to the site. The Academy's most
prominent feature is its video library consisting of over 2,100 educational videos.
Like a sort of online collection of Cliffs Notes for hard-to-grasp subjects, the video library includes videos that are about 10 minutes long each covering about 40 subjects including physics,
developmental math, calculus, statistics, differential equations and many more. During each video, a narrator simultaneously explains concepts while drawing pictures or equations on a computer sketch
The videos are educational, so they can be difficult to swallow, but if you've ever had a tough time getting a concept down pat, they just might do the trick. It's worth a try. Here's a sample video on fluids:
[A sample educational video from the Khan Academy.]
In addition to the video library, the website offers math practice exercises that can be done online by students who can keep track of their own progress. Teachers can monitor the students' progress
online as well.
According to Dijksman's presentation blurb, the Academy's goal is "to allow educators to improve their teaching, but above all to bring simple, rewarding and enjoyable education to the minds of many young students."
3 comments:
1. The Khan Academy also has all the videos on its own Youtube channel - although I find it easier to navigate the various topics on their own website.
But what stands out if yo watch his videos on youtube are the comments people leave.
Things like "wow, you explain this so much better than my teacher at school" or "thanks to you I finally understand what I should have learnt years ago in class" are typical.
Compared to some commercial, paid-for offerings of math instruction (sorry, have not reviewed any physics), I have found Sal Khan's videos at least on par, if not better.
2. I do not know about other subjects but I didn't find the Physics lectures up to the mark.
In the above lecture Mr. Khan writes E = 2k pi sigma. The thing is, he writes an arrow above E (which I can't write here in print), which means he is taking E as a vector. But there is no vector on the right-hand side of the equation. In other words this equation implies that a vector quantity is equal to a non-vector quantity. That's obviously wrong.
Then in this video lecture he says that the gravitational force of attraction between any two objects is to be calculated by taking the distance between the center of masses of the objects in
question. Now this method works sometimes (as has worked in his video)but generally speaking its wrong. One can only take the entire mass of an object at its center of mass when he is solving a
physical quantity which is a linear function of distance. The rigorous method is to use integral calculus. As a matter of fact Newton invented the integral calculus because he wanted to prove
that if you are outside the earth then you may take the entire mass of the earth at its geometric center and then apply the force law and you will get correct results.
Though I admire Mr. Khan's website for providing free education world wide but such errors are bound to happen if one is teaching so many subjects. I think one should only concentrate on the
field in which he has a good mastery.
3. Khan sahib, I can't understand anything, sorry friend.............
|
{"url":"http://physicsbuzz.physicscentral.com/2011/03/khan-academy-virtual-cliffs-notes-for.html","timestamp":"2014-04-18T13:14:57Z","content_type":null,"content_length":"103303","record_id":"<urn:uuid:2d582309-9dc1-45e9-9a11-9697e5c292ed>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00396-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Proposition 110
If a medial area incommensurable with the whole is subtracted from a medial area, then two remaining irrational straight lines arise, either a second apotome of a medial straight line or a straight
line which produces with a medial area a medial whole.
As in the foregoing figures, let there be subtracted the medial area BD incommensurable with the whole from the medial area BC.
I say that the side of EC is one of two irrational straight lines, either a second apotome of a medial straight line or a straight line which produces with a medial area a medial whole.
Since each of the rectangles BC and BD is medial, and BC is incommensurable with BD, therefore each of the straight lines FH and FK is rational and incommensurable in length with FG.
Since BC is incommensurable with BD, that is, GH with GK, therefore HF is also incommensurable with FK.
Therefore FH and FK are rational straight lines commensurable in square only. Therefore KH is an apotome.
If then the square on FH is greater than the square on FK by the square on a straight line commensurable with FH, while neither of the straight lines FH nor FK is commensurable in length with the
rational straight line FG set out, then KH is a third apotome.
But KL is rational, and the rectangle contained by a rational straight line and a third apotome is irrational, and the side of it is irrational, and is called a second apotome of a medial straight
line, so that the side of LH, that is, of EC, is a second apotome of a medial straight line.
But, if the square on FH is greater than the square on FK by the square on a straight line incommensurable with FH, while neither of the straight lines HF nor FK is commensurable in length with FG,
then KH is a sixth apotome.
But the side of the rectangle contained by a rational straight line and a sixth apotome is a straight line which produces with a medial area a medial whole.
Therefore the side of LH, that is, of EC, is a straight line which produces with a medial area a medial whole.
Therefore, if a medial area incommensurable with the whole is subtracted from a medial area, then two remaining irrational straight lines arise, either a second apotome of a medial straight line or a
straight line which produces with a medial area a medial whole.
|
{"url":"http://aleph0.clarku.edu/~djoyce/java/elements/bookX/propX110.html","timestamp":"2014-04-16T10:31:11Z","content_type":null,"content_length":"5354","record_id":"<urn:uuid:a3309663-db38-419e-9700-27a7a37cd2bc>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00496-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Question on Understanding Row Echelon Form and Reduced Row Echelon Form
December 21st 2007, 10:05 AM
Question on Understanding Row Echelon Form and Reduced Row Echelon Form
My question is, let's say you have:
So in this example I made up, how would you find the reduced row echelon form and row echelon form for these equations?
December 21st 2007, 10:17 AM
First, write it in matrix format using the coefficients.
The idea is to use elementary row operations to hammer it into the form:
$\left[\begin{array}{ccc|c}1&0&0&a\\0&1&0&b\\0&0&1&c\end{array}\right]$
Your solutions will be a,b, and c.
I feel about Gaussian elimination the way Plato feels about partial fraction decompositions. With the technology that abounds these days, why go through the tedium of reduced row echelon? We can spend our mathematical time more efficiently. But, you gotta do what you gotta do.
It took my TI about 2 seconds to give me:
$\left[\begin{array}{ccc|c}1&0&0&\frac{557}{260}\\0&1&0&\frac{81}{260}\\0&0&1&\frac{21}{20}\end{array}\right]$
December 21st 2007, 11:44 AM
Most students cannot use calculators in an algebra course, so replace that two seconds with two minutes of arithmetic!
December 21st 2007, 12:26 PM
Really? That has certainly not been my experience. I have visited many secondary schools, both public and private, in the last twenty years. From that experience, it is my impression that not only are they allowed but in fact they are required.
December 21st 2007, 12:54 PM
I am going to go ahead and show a rref. This is one of many ways to tackle it. You may have a better method with less steps, but this is the idea:
$\left[\begin{array}{ccc|c}1&0&-9&\frac{-95}{13}\\0&0&-20&-21\\0&\frac{13}{39}&1&\frac{45}{39}\end{array}\right]$
$\left[\begin{array}{ccc|c}1&0&-9&\frac{-95}{13}\\0&\frac{20}{3}&0&\frac{27}{13}\\0&\frac{13}{39}&1&\frac{45}{39}\end{array}\right]$
$\left[\begin{array}{ccc|c}1&0&-9&\frac{-95}{13}\\0&\frac{20}{3}&0&\frac{27}{13}\\0&0&1&\frac{21}{20}\end{array}\right]$
$\left[\begin{array}{ccc|c}1&0&-9&\frac{-95}{13}\\0&1&0&\frac{81}{260}\\0&0&1&\frac{21}{20}\end{array}\right]$
$\left[\begin{array}{ccc|c}1&0&0&\boxed{\frac{557}{260}}\\0&1&0&\boxed{\frac{81}{260}}\\0&0&1&\boxed{\frac{21}{20}}\end{array}\right]$
December 22nd 2007, 05:24 PM
by the way, the final form galactus put the matrix in is called the reduced row echelon form. it is the form in which the first non-zero entry in each row is 1 and also, all other entries in that
1's column are zero. the row echelon form does not have this last condition. we require the first non-zero entry in any row to be 1 but we do not require that all other entries in the column of
the leading 1 be zero.
reduced row echelon form:
$\left[\begin{array}{ccc|c}1&0&0&\frac{557}{260}\\0&1&0&\frac{81}{260}\\0&0&1&\frac{21}{20}\end{array}\right]$
row echelon form:
$\left[\begin{array}{ccc|c}1&{\color{red}2}&0&\frac{557}{260}\\0&1&{\color{red}4}&\frac{81}{260}\\0&0&1&\frac{21}{20}\end{array}\right]$
the 2 and the 4 in the above matrix prevents it from being in reduced row echelon form.
in general, it is easier to solve a problem by bringing the matrix into reduced row echelon form if possible; otherwise, you will need to back-substitute to get your solutions if it is only in row
echelon form
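A quick computational aside (not from the original thread): sympy in Python will do this reduction for you. The matrix below is made up purely for illustration, since the OP's actual system isn't shown here.

```python
from sympy import Matrix

# Made-up augmented matrix [A | b]; replace with your own system's coefficients.
M = Matrix([
    [2, 1, -1, 3],
    [1, 3,  2, 5],
    [3, 1,  4, 7],
])

# rref() returns (reduced row echelon form, tuple of pivot column indices).
R, pivots = M.rref()
print(R)       # leading 1s, with zeros everywhere else in each pivot column
print(pivots)  # (0, 1, 2) here, since the coefficient part is invertible
```

When the pivots are (0, 1, 2), the last column of R reads off the unique solution directly.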
December 24th 2007, 04:31 AM
Maybe the schools I went to were a rarity. It's a shame that calculators are used that much.
December 24th 2007, 09:17 PM
my school is one of those rare schools. no calculators in algebra classes. in fact, other than those math courses that are basically number crunching, like statistics etc, calculators are not
allowed in any math course
|
{"url":"http://mathhelpforum.com/advanced-algebra/25169-question-understanding-row-echelon-form-reduced-row-echelon-form-print.html","timestamp":"2014-04-20T17:23:11Z","content_type":null,"content_length":"17498","record_id":"<urn:uuid:8cab5384-ff67-45cc-aa6d-fc7800387922>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00604-ip-10-147-4-33.ec2.internal.warc.gz"}
|
unique factorization domain
Let $R$ be an integral domain. We say that an element $r\in R$ is a unit if it is invertible. A non-unit is called irreducible if it cannot be represented as a product of two non-units.
A commutative integral domain $R$ is a unique factorization domain if every nonzero non-unit has a factorization $u = r_1 \cdots r_n$ as a product of irreducible non-units, and this decomposition is unique
up to renumbering and rescaling the irreducibles by units.
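For concreteness, a standard example and non-example (added here for illustration): $\mathbb{Z}$ and $k[x]$ are unique factorization domains, while $\mathbb{Z}[\sqrt{-5}]$ is not, since

$$6 = 2 \cdot 3 = (1+\sqrt{-5})(1-\sqrt{-5})$$

gives two genuinely different factorizations of $6$ into irreducibles (no factor on one side is a unit times a factor on the other).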
Created on September 26, 2009 23:12:16 by
Zoran Škoda
|
{"url":"http://ncatlab.org/nlab/show/unique+factorization+domain","timestamp":"2014-04-19T22:34:33Z","content_type":null,"content_length":"11839","record_id":"<urn:uuid:91ae5d09-1a19-4236-a0f8-91275efe9417>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00045-ip-10-147-4-33.ec2.internal.warc.gz"}
|
7.2.1 Efficient Sorting and Searching
In simulating urban systems it is often necessary to sort long arrays of numbers such as the coordinates of incidents, the addresses of mail recipients, or the times at which requests for service
occurred. Equally important, sorting algorithms play a central role as "building blocks" in techniques aimed at resolving several important geometrical problems, as we shall see shortly. It is
therefore important to develop a sorting algorithm that is as efficient as possible.
Suppose that we are given n integers and that we wish to sort them in, say, increasing order of magnitude with the smallest number at the top of the list. The straightforward approach is to scan the
initial list of numbers in order to find the smallest of them; this requires n − 1 comparisons. We make this smallest number the first element of our sorted array. We then scan the list of the
remaining n − 1 numbers for the second smallest number in the initial list (thus performing n − 2 comparisons) and continue in this manner until all the numbers are sorted. This is, then, an O(n²)
algorithm, since the total number of comparisons required is (n − 1) + (n − 2) + ⋯ + 2 + 1 = n(n − 1)/2.
A more subtle approach, however, leads to an algorithm which is O(n·log₂ n). This approach is illustrated in Figure 7.13 for an array of 16 numbers involving the integers 1 through 16.
To describe this approach, let us assume, at first, that it so happens that n = 2^k, where k (= log₂ n) is a positive integer. The algorithm then begins by forming a sorted pair of numbers after
comparing the first and the second numbers in the initial list, a second sorted pair from the third and the fourth numbers, a third pair from the fifth and the sixth, and so on (see the second row of
Figure 7.13). The algorithm then goes through k − 1 more merging passes over the list, forming in the process sorted sublists of numbers, first with four members, then with eight, then with sixteen,
until finally the whole list of numbers is fully sorted. The key observation that leads to the algorithm is that the merging of two previously sorted lists of length m requires at most 2m − 1
comparisons [i.e., this type of merging is an O(m) procedure]. Omitting the details, it is clear that since there are O(n) comparisons on each pass through the list of numbers and since there are k
passes until the list is completely sorted (e.g., for n = 16 there are log₂ 16 = 4 passes), the algorithm is O(nk) = O(n·log₂ n).
Exercise 7.4 Show in detail that the algorithm described above is O(n·log₂ n).
What if the length of the list of numbers to be sorted, n, is not a power of 2? Let then k = ⌈log₂ n⌉ (where the notation ⌈a⌉ stands for "the smallest integer that is greater than or equal to a").
We may then add 2^k − n very large numbers to our initial list. After the expanded list has been sorted, its first n elements will constitute a sorted list of the original n numbers. The time
required by our algorithm to sort the expanded list is O(k·2^k), and since n is of the same order as 2^k and log₂ n differs from k by less than 1 unit, this is equivalent to O(n log₂ n).
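As a rough sketch of the bottom-up merging procedure just described (Python, added here for illustration; the book itself presents no code):

```python
def merge(a, b):
    """Merge two sorted lists using at most len(a) + len(b) - 1 comparisons."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]

def bottom_up_mergesort(xs):
    """O(n log n): roughly log2(n) merging passes over runs of doubling length."""
    runs = [[x] for x in xs]          # pass 0: n sorted runs of length 1
    while len(runs) > 1:
        paired = []
        for k in range(0, len(runs) - 1, 2):
            paired.append(merge(runs[k], runs[k + 1]))
        if len(runs) % 2:             # an odd run out is carried to the next pass
            paired.append(runs[-1])
        runs = paired
    return runs[0] if runs else []

print(bottom_up_mergesort([13, 2, 16, 7, 1, 9, 4, 11]))
```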
The technique outlined above, which is typical of a common approach to many combinatorial problems, is often referred to as "divide and conquer." The idea, of course, is to solve the problem in small
parts (e.g., by sorting initially only pairs of numbers as we did in the algorithm described above) and then to combine these parts in an efficient manner to arrive at the solution of the whole problem.
Another example of this approach solves efficiently the problem of adding a new number (at its proper place on the list) to an already sorted list of n numbers. By determining, first, whether the new
number should be in the upper or lower half of the list [this can be done by comparing the new number with the (n/2)th element in the list], then the quarter in which it falls, and so on, this
problem can be answered in O(log₂ n) time. Note that this procedure is equivalent to one that we would have followed if we were searching for a specific number on the list. Therefore, the procedure
is known as an efficient algorithm for searching through a sorted list and, as we just saw, it is O(log₂ n).
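The halving search for the insertion point can be sketched the same way (again an illustrative addition, not from the book):

```python
def insertion_point(sorted_xs, value):
    """Binary search: O(log2 n) comparisons to find where value belongs."""
    lo, hi = 0, len(sorted_xs)
    while lo < hi:
        mid = (lo + hi) // 2          # compare against the middle element
        if sorted_xs[mid] < value:
            lo = mid + 1              # value belongs in the upper half
        else:
            hi = mid                  # value belongs in the lower half
    return lo

xs = [1, 2, 4, 7, 9, 11, 13, 16]
xs.insert(insertion_point(xs, 8), 8)  # keeps the list sorted
print(xs)
```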
|
{"url":"http://web.mit.edu/urban_or_book/www/book/chapter7/7.2.1.html","timestamp":"2014-04-19T22:20:49Z","content_type":null,"content_length":"5972","record_id":"<urn:uuid:e8efc2c2-441c-4091-b502-e99c6e53d300>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00163-ip-10-147-4-33.ec2.internal.warc.gz"}
|
On ChmInf Nan B points to this bit you have to agree to in order to join the ACS online communities:
You do not have to submit anything, but if you choose to submit something, including any user-generated content, ideas, concepts, techniques, and data, you must grant, and you actually grant by
agreeing to these Terms of Use, a non-exclusive, irrevocable, worldwide, perpetual, unlimited, assignable, sublicenseable, fully paid-up, and royalty-free right to ACS to copy, prepare derivative
work of, improve, distribute, publish, remove, retain, add, and use and commercialize, in any way now known or in the future discovered, anything that you submit to ACS without any further
consent, notice and/or compensation to you or any third parties..."
Nice. Why won't anyone join my community?
scio10: Online Civility and its (Muppethugging) Discontents
Dr Free-ride, Sheril Kirshenbaum, and Isis the Scientist
SK – definition of civility at your site – if you want children to feel welcome, for example. You have to set the tone. Some topics seem more important to be civil about.
F-r - politeness or is it being a decent human – in philosophical circles someone may rip your heart out and jump on it in perfectly polite language – so it's not just being polite. It's more like
taking each other seriously, assuming good faith, considering others' feelings. Hard to engage when you don't feel welcome.*
Hard to engage when you don't feel welcome – language (profane, technical, religious), composition of the community, am I being dismissed out of hand.
But respect doesn’t eliminate disagreement or hurt. Fundamental disagreements may surface, speaking up about experience may make you feel worse. Sometimes modeling good behavior is tedious – respect
your own limits and interests – sometimes you just have to sit out of some
I - @sshats aren’t useful
Call to civility has been used to suppress/repress minority groups.
She likes this definition: personal attacks, rudeness, aggression, or other behaviors aimed at disrupting a community's goals that lead to unproductive stress and conflicts.
q to the audience – what stands out to you as uncivil?
difference between using naughty words within discussion vs. making a personal attack
audience – they work in Congo and civility is used as a tool of white oppression
Chafee from Duke wrote a book about the civil rights movement and about how civility was used as a tool of oppression
q: how do you control – can you control civility on your site, and what effect does that have on the discussion on the blog. F-r says she moderates all of her comments and she also sets the tone –
she doesn’t seem to get the really serious trolls. Need to show your presence
S-k can’t be online all the time and there’s been a problem with commenters fighting each other and legal language ensuing.
I has some self policing among her commenters. She will ban someone who threatens physical violence.
q: what about ignoring them?
a: silence sometimes becomes assent and if you leave something unaddressed, it will scare other commenters away
q: we’ve been talking about commenters, what about blogger civility to civility
a: we conflate incivility with heated discussion.
q: if you meet each other f2f will you be more civil online
q: in the UK extremely tricky libel situation. Bloggers set policies – this is my house don’t pee on the carpet
“recreational outrage”, intimacy and distance
see the terrible bargain series of posts (here, I think) – how you can’t say things that need to be said in person because of social structures.
what if you’re not the person who can set the policies for the space? If you can’t set the policy about who can pee on the carpet. – there was then an extended discussion of the value of policies and
whether they promote civil discussion or whether they are exclusionary **
from the audience – need a group of people who buy into a set of collective norms that work in that environment
* how much of SH’s comments on the OSTP blog prevented others from participating?
** there is research that shows that policies are helpful in creating successful communities – see Preece.
A scientist talks about requirements for social software for scientists
I've weighed in a few times on how to build online communities or support scientists online, but it's really worth paying attention to when you get an actual scientist who is also very involved in
and interested in social software tell you what he thinks. Cameron Neylon did just that in a recent blog post (comments on ff). I'll quote liberally from his blog and feed back some ideas.
(he uses SS4S to stand for social software for science) All of the numbered paragraphs are direct quotes from his post.
1. SS4S will promote engagement with online scientific objects and through this encourage and provide paths to those with enthusiasm but insufficient expertise to gain sufficient expertise to
contribute effectively (see e.g. Galaxy Zoo). This includes but is certainly not limited to collaborations between professional scientists. These are merely a special case of the general.
There are a couple of interesting thing there - first that "citizen scientists" and interested non-scientists are welcome and encouraged to participate in the same tool. They are provided support to
move from legitimate peripheral participants [1] to more central contributors. So contrast this with the concern about "the public" seeing how the sausage is made. I found in a study I did for a
class a few years ago that the quickest way to kill an online community of engineers was to have undergrads inundate them demanding homework help.[2]
On the other hand, many scientists do want to engage with the public for lots of reasons, so supporting that is a good thing. This all feeds into some stuff I've been thinking about recently about
how to sort of merge scholarly communication models with popular communication, since your communication venues are findable and useable by the public (I like a lot of what Meyer and Schroeder say about this [3]).
2. SS4S will measure and reward positive contributions, including constructive criticism and disagreement (Stack overflow vs YouTube comments). Ideally such measures will value quality of
contribution rather than opinion, allowing disagreement to be both supported when required and resolved when appropriate.
Good policies [4], good moderators, charitable reading, a way to comment on the specific thing you mean and to do so clearly.
3. SS4S will provide single click through access to available online scientific objects and make it easy to bring references to those objects into the user's personal space or stream (see e.g.
Friendfeed "Like" button)
Absolutely. And then be able to interact with these things, annotate them, and then re-mix them into other things.
4. SS4S should provide zero effort upload paths to make scientific objects available online while simultaneously assuring users that this upload and the objects are always under their control.
This will mean in many cases that what is being pushed to the SS4S system is a reference not the object itself, but will sometimes be the object to provide ease of use. The distinction will
ideally be invisible to the user in practice barring some initial setup (see e.g. use of Posterous as a marshalling yard).
5. SS4S will make it easy for users to connect with other users and build networks based on a shared interest in specific research objects (Friendfeed again).
What metadata is required for this? What data must the system store and use to make this work?
6. SS4S will help the user exploit that network to collaboratively filter objects of interest to them and of importance to their work. These objects might be results, datasets, ideas, or people.
Or models or equations or modules...
7. SS4S will integrate with the user's existing tools and workflow and enable them to gradually adopt more effective or efficient tools without requiring any severe breaks (see Mendeley/Citeulike
/Zotero/Papers and DropBox)
8. SS4S will work reliably and stably with high performance and low latency.
Well, yeah!
9. SS4S will come to where the researcher is working both with respect to new software and also unusual locations and situations requiring mobile, location sensitive, and overlay technologies
(Layar, Greasemonkey, voice/gesture recognition - the latter largely prompted by a conversation I had with Peter Murray-Rust some months ago).
That's pretty cool. I mean mobile is fairly common, and location sensitive things are not uncommon, but these with overlay (like augmented reality? hmm, that could be very useful for sharing).
10. SS4S will be trusted and reliable with a strong community belief in its long term stability. No single organization holds or probably even can hold this trust so solutions will almost
certainly need to be federated, open source, and supported by an active development community.
Stability, reliability, and clear policies and provisions for preservation are important.
If you have comments on any of these or other suggestions, please leave them on Cameron's post or on friendfeed (or on here, I'll pass them along).
[1] Lave, J., & Wenger, E. (1991). Situated learning : legitimate peripheral participation. New York: Cambridge University Press.
[2] http://glue.umd.edu/~cpikas/708P/Pikas_Fostering_Collaboration_LBSC708P_10202005_(final).doc (word document)
[3] Meyer, E. T., & Schroeder, R. (2009). The world wide web of research and access to knowledge. Knowledge Management Research and Practice, 7(3), 218-233. doi:10.1057/kmrp.2009.13
[4] Preece, J. (2000). Online communities : designing usability, supporting sociability. New York: Wiley.
Comps readings: community detection
Last set of comps readings, I talked about sense of community: belonging, having influence, fulfillment of needs, and emotional support. Now, let's talk about the physics version of "community" - cohesive subgroups. In a graph, these are groups of nodes that are more connected to each other than to other parts of the graph. Clumpy spots. If you read old Wasserman and Faust, you'll probably think of cliques, cores, and lambda sets... somehow these didn't do it for me - literally, when I was trying to locate communities in science blog networks, it didn't work.

If you have a computer science or maybe even sociology background you'll probably just look at some sort of clustering (agglomerative or divisive). The hot thing for the past few years comes from physicists, and that's what's covered here. I did other posts on SNA articles, so those are mostly elsewhere. (BTW - if you ever take stats for the social sciences and can substitute R for stata, do so and take the time to learn it. The igraph package for R has all of the coolest community detection thingies in it.) (Note, too, that these readings are not necessarily for the dabbler in bibliometrics or social network analysis!)
Newman, M. E. J., & Girvan, M. (2004). Finding and evaluating community structure in networks. Physical Review E (Statistical, Nonlinear, and Soft Matter Physics), 69(2), 26113-21. (just go here)

This article, like the ones from Barabasi, sort of kicked off this flurry of research. They use a divisive clustering technique - so they start with the whole network, and break the connections with the highest betweenness. See the figure: if you remove that one line, you completely break up the thing. That line has high betweenness. So they calculate that for all of the lines using whatever method, then take the line with the highest out, then re-calculate and remove, and again. They then go on to talk about the actual algorithm to use to efficiently do all of this betweenness calculating and give some examples. There's a lot in this article, though, because they next talk about how to figure out when you're done and if you've got decent communities. This measure is modularity (see the article for the definition), but basically it's 0 if random and 1 is the maximum. If you calculate Q at each step, then you can stop when it's highest. Note that any given node can only be in one community, unfortunately. (In real life, people are nearly always in multiple communities.)
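If you'd rather not switch to R, here is a sketch of the same divisive procedure with Python's networkx instead (my illustration - the post itself recommends R's igraph):

```python
import itertools
import networkx as nx
from networkx.algorithms.community import girvan_newman, modularity

G = nx.karate_club_graph()            # the classic toy social network

# girvan_newman repeatedly removes the highest-betweenness edge
# (recomputing betweenness each time) and yields each resulting partition.
partitions = itertools.islice(girvan_newman(G), 6)

# Following Newman & Girvan, keep the partition that maximizes modularity Q.
best = max(partitions, key=lambda p: modularity(G, p))
print(len(best), "communities, Q =", round(modularity(G, best), 3))
```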
Reichardt, J., & Bornholdt, S. (2006). When are networks truly modular? Physica D, 224(1-2), 20-26. doi:10.1016/j.physd.2006.09.009 (or look here)

They review Newman and Girvan and suggest a new way that groups connected nodes and separates non-connected nodes. They go through a process and end up with an equation that's apparently like a Hamiltonian for a q-state Potts spin glass (dunno, ask a physicist if you need more info on that!). This method allows for overlapping communities, because there could be times when you could move a node from one community to the next without increasing the energy. They compared it for some standard graphs and it did better than N-G. Instead of just stopping at maximal modularity, they compare the modularity to a random graph with the same degree distribution.

Reichardt, J., & Bornholdt, S. (2007). Clustering of sparse data via network communities - a prototype study of a large online market. Journal of Statistical Mechanics: Theory and Experiment, P06016. doi:10.1088/1742-5468/2007/06/P06016

In this one they test the spin glass community detection method against the German version of eBay to look for market segmentation. The network has bidders as nodes, and if they bid on the same item there is an edge. The spin glass method was successful at pulling out clusters, and using odds ratios the authors showed that these clusters corresponded to groupings of subject categories. The Q was much higher than it would be for a random graph.
Comps readings: virtual communities
Sunday morning I was all set to do another essay - just had to pick a question source and question - when my mother-in-law called to say she would be stopping by at about the same time I would be finishing up the 2 hour window, leaving no time for emergency house cleaning (no, I haven't grown out of that yet despite being married for >10 years). So here are a few readings on "community" which I'll drop like a hot potato and then run to clean the house.

Both Wellman and Rheingold dispute the idea that we're all "Bowling Alone" and assert that virtual communities appearing in computer mediated communication are real communities, but what does "community" look like online? Is the implementation of a "community" software tool enough? We're in a second wave of all sorts of vendors offering their own online communities - this was also done in the 90s. Are these communities? Only when they succeed? Never? It depends? On what? At the same time, there are lots of articles coming out in the physics literature on mathematical ways to identify cohesive subgroups in networks, and they call this process identifying communities. Are they identifying communities or only cohesive subgroups? Could you develop an algorithm to locate a community? How would you test what you found to see if it's really a community (or maybe it's a group of people all disputing a knowledge claim, what Collins called a core set)? Is a binary yes or no enough, or do we need to know what participants feel and why?
Blanchard, A. L., & Horan, T. (1998). Virtual Communities and Social Capital. Social Science Computer Review, 16(3), 293-307.

This article is more or less in direct response to Putnam's Bowling Alone. His thesis was that increasing online activity led to decreasing community participation and civic engagement, and that this low participation hurts the community as a whole. They look at three possible outcomes of online communities: 1) that online communities enhance f2f communities, 2) that online communities detract from f2f communities, or 3) that they are unrelated. Since this was written, social capital has been defined (and operationalized) at an individual level, a group level, and then a societal level. Putnam looks really at the societal level. They quote him describing it as "the features of social organization such as networks, norms, and social trust that facilitate coordination and cooperation for mutual benefit." When they define virtual communities, they differentiate between online places for physical communities (my neighborhood has a Yahoo! Group) and online-only communities. Networks in virtual communities might be larger and more geographically dispersed. They might also encourage participation by some who might not participate in f2f. Norms in communities include reciprocity - doing favors and having favors returned. The idea in this article is that generalized reciprocity (not direct, Mary does for Bob, but Mary does for Bob, Sue sees, and Sue does a favor for Mary) is increased in virtual communities because helping acts are visible (see, however, Wasko & Faraj, discussed on my old blog - they found that reciprocity didn't really explain any variance in contribution to a professional virtual community). Blanchard and Horan also discuss lurking as a negative social norm, akin to free riding (see, however, various discussions by Nonnecke and Preece as well as those by Lave and Wenger on legitimate peripheral participation). With respect to trust, it might be increased by increased social identity in virtual groups and decreased social cues (less stereotyping by physical attributes), but it will be decreased by flaming, trolls, and deception.
Blanchard, A. L. (2004). Blogs as Virtual Communities: Identifying a Sense of Community in the Julie/Julia Project. Into the Blogosphere: Rhetoric, Community, and Culture of Weblogs. Retrieved from http://blog.lib.umn.edu/blogosphere/blogs_as_virtual.html

When I talk about blogs as communities, I mean like between blogs, or collections of blogs, or bloggers linking to each other and commenting on each others' blogs. In this paper, Blanchard looks at a community that formed within the comments of a single blog (that became a book, and isn't there a movie coming out?). The comments in this blog were like a forum and sometimes wandered from the topic of the post and had a life of their own. She asks the question whether this is truly a community or only a virtual settlement. Virtual settlement comes from a paper in JCMC by Jones. It is defined as when there is "a) a minimal number of b) public interactions c) with a variety of communicators in which d) there is a minimal level of sustained membership over a period of time." Communities, on the other hand, have a sense of community, which includes a) feelings of membership, b) feelings of influence, c) integration and fulfillment of needs, and d) shared emotional connection. This "sense of community" comes from f2f research on communities (the next article discusses measuring it in virtual situations). She did a survey of the commenters after the blog had been around for 11 months. Some respondents who commented frequently felt strongly that it was a community, while others, who kind of read it like they would a newspaper, thought not (oh, really?).
Blanchard, A. L. (2007). Developing a Sense of Virtual Community Measure. CyberPsychology & Behavior, 10(6), 827-830.

This one was done a few years later (obviously) and she was trying to develop a valid and repeatable sense of community measure for virtual communities. In previous work, people pretty much just adapted the f2f sense of community, but it turns out that community might feel different in virtual settings than f2f. This measure was developed like others - f2f scales were modified, and new questions were added to address things that are different in virtual settings. There was a pilot, and then it was tested with other groups (total n=256, 7 usenet groups and listservs). Factor analysis with maximum likelihood factoring and a promax rotation. Once things were dropped that didn't load where they were supposed to, the internal reliability coefficient for the SOVC scale was 0.93. Tested with the groups, it explained 53% of the variation while the standard sense of community only explained 46% (better, but eh).
|
{"url":"http://scientopia.org/blogs/christinaslisrant/category/online-communities/","timestamp":"2014-04-18T10:40:04Z","content_type":null,"content_length":"109830","record_id":"<urn:uuid:bb251869-8578-403b-a0e5-1701061edea2>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00212-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Decatur, GA
Find a Decatur, GA Precalculus Tutor
...I believe that education is the secret to a successful life. I have a BA in History with a focus on US Foreign Policy. Additionally, I have a minor in Mathematics with a focus on Calculus.
25 Subjects: including precalculus, reading, English, writing
...I intend to engage and challenge my students by providing a facilitated learning and balanced assessment environment in effort to inspire them. I feel that learning is most effective and
students are empowered when they are initially presented with a narrow scope of information that provides a s...
19 Subjects: including precalculus, calculus, statistics, economics
...My name is Giulianna. I have been tutoring students since I was in high school. I have tutored k-12 math, and my students have significantly improved their grades.
29 Subjects: including precalculus, chemistry, reading, Spanish
...I graduated from Georgia Tech in May 2011, and am currently tutoring a variety of math topics. I have experience in the following at the high school and college level:- pre algebra- algebra-
trigonometry- geometry- pre calculus- calculusIn high school, I took and excelled at all of the listed cl...
16 Subjects: including precalculus, calculus, geometry, algebra 2
...I have taught in community colleges, about ten years, in math, economics, philosophy and legal writing. I took the GRE in January 2014, with a 93 percentile on the writing portion, or a 5.0. I
taught trigonometry, brief calculus, finite mathematics which included statistics, and every level of algebra from introductory to college algebra.
14 Subjects: including precalculus, calculus, statistics, algebra 2
{"url":"http://www.purplemath.com/Decatur_GA_Precalculus_tutors.php","timestamp":"2014-04-21T07:48:40Z","content_type":null,"content_length":"24082","record_id":"<urn:uuid:0b4661f6-d580-44e9-8efb-69edf3d8f48b>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00081-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Higher mathematics
Higher mathematics is a subject that has always seemed completely inaccessible to all but the select few who could breathe in the rarefied atmosphere of the intellectual plane where it lives. Just
as mathematics seems to be beyond most people's intellectual grasp, however, it also seemed to make absolutely no difference to the great majority of the population. Number theory, probability
theory, mathematical modeling, the mysterious math used in computer technology, and even statistics and mathematical reasoning seemed to have little to do with daily life, work, or anything that
was of much interest to the average man, woman, or child. When a mathematician somewhere in Great Britain announced a few years ago that he had solved the problem of Fermat's Last Theorem the news
made no difference to the vast majority of people, while a few, vaguely remembering the story of the theorem, understood that this was an extremely clever thing to do. But number theory seemed far
more arcane, distant, and forbidding than Chinese politics, Russian poetry, Hindu mythology, or all those words the Eskimos use for snow. Yet, as the example of computers alone can tell us, higher
math is leaving its perch and beginning to walk among us.
Aside from its forbidding complexity and impracticality, however, mathematics also seems futile to many people. It is merely a matter, it seems, of learning more and more complex maneuvers that
have been done a thousand times--just like the arithmetic and a
|
{"url":"http://www.lotsofessays.com/viewpaper/1681178.html","timestamp":"2014-04-21T07:57:14Z","content_type":null,"content_length":"41662","record_id":"<urn:uuid:09c08948-c004-47ce-92d0-a0191b55f622>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00066-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Classical mechanics, probability and statistics neglect rare events. The associated stochastic systems represent smooth transition while real systems exhibit jumps as well as continuous
evolution. The universe is bumpy as well as continuous and so are most of natural systems, such as earth shifts, as well as social systems such as the economy and financial markets. We present a
new axiomatic treatment of probability and statistics with black swans, which are rare events with momentous consequences. The new axioms differ from traditional axioms of probability and
statistics in that we require 'sensitivity to rare events'. A representation theorem identifies a new type of measures on R that has both countably additive and purely finitely additive parts.
This leads to distributions with heavy tails, and to stochastic systems that result in jump - diffusion processes through time. The new axioms are compared with the standard axioms of mathematics
and probability theory and are shown to differ in a crucial axiom (“Monotone Continuity, S.P.4.”) that is generally invoked and is restrictive enough to eliminate heavy tails and to underestimate
rare events. The new theory is able to integrate ambiguous features of mathematics and includes aspects of Godel's Incompleteness Theorem as well as the Independence of the Continuum Hypothesis
and the Axiom of Choice, and is applied to practical examples of measure theory, probability and stochastic systems in natural systems as well as in financial markets.
How can we communicate the importance of fundamental research? How can we muster a democratic will to take action on climate change? What could be missing in the education of Canadians? What
difference could a scientist-politician make in parliament?
Ted Hsu, a former theoretical physicist, and now MP for Kingston and the Islands and the Liberal Party critic for science and technology and economic development in Ontario will share some
thoughts about these questions and propose practical suggestions for those who wish they could have a bigger influence on government.
Natural selection is based on competition between individuals. Cooperation means that individuals help one another. Why competitors should cooperate is a conundrum. Yet cooperation is abundant in
nature and can even be seen as the master architect of evolution. Without cooperation there is no construction. The emergence of cells, multi-cellular organisms, animal societies and human
language all require cooperation. I present five mechanisms for the evolution of cooperation: kin selection, direct reciprocity, indirect reciprocity, spatial selection and group selection.
Direct and indirect reciprocity are the key mechanisms for understanding pro-social behavior among humans and are needed for the survival of intelligent life on earth.
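For readers who want the thresholds in symbols: as summarized in the Science paper listed under Further Reading (my paraphrase of its notation - b is benefit, c cost, r relatedness, w the probability of another round, q the probability of knowing someone's reputation, k the number of neighbors, n group size, m number of groups), cooperation can be favored when

$$\frac{b}{c} > \frac{1}{r}\ (\text{kin}), \quad \frac{b}{c} > \frac{1}{w}\ (\text{direct}), \quad \frac{b}{c} > \frac{1}{q}\ (\text{indirect}), \quad \frac{b}{c} > k\ (\text{spatial}), \quad \frac{b}{c} > 1 + \frac{n}{m}\ (\text{group}).$$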
Further Reading:
Nowak MA (2006). Five rules for the evolution of cooperation. Science 314: 1560-1563
Nowak MA, CE Tarnita, EO Wilson (2010). The evolution of eusociality. Nature 466: 1057-1062.
Nowak MA & Highfield R (2011) SuperCooperators. Simon & Schuster.
The classical Green's function plays an important role in function theory of one complex variable or two real variables. In higher dimensions, from the point of view of complex analysis, its
proper generalization is as the pluricomplex Green's function, which is a solution of a complex Monge-Ampere equation with a Dirac mass. We discuss the geometric properties of pluricomplex
Green's functions, as well as methods for solving such Monge-Ampere equations, with emphasis on a priori estimates, geometric constructions, and the differences with real Monge-Ampere equations.
The theory of ocean waves has been an active topic of research for more than 150 years due to the significance of the sea in human history. The motion of waves is a very complex phenomenon and
its study has applications in every aspect of our lives.
I will show how methods of mathematical analysis combined with asymptotic theory and numerical simulations can contribute to a better understanding of propagation and interaction of large
amplitude ocean waves, both at the surface of the ocean and in its interior, in regular situations as well as in extreme events.
In particular, I will discuss the influence of bottom topography on wave dynamics. This is an important topic because of its relevance to coastal engineering, sediment transport, and global-scale
propagation of tsunamis. The horizontal length scales of tsunamis are so large that even in the deep oceans, their impact depends on the particular topography of the coastline and inshore
bathymetry. Uneven topography is also responsible for the generation of internal waves in the oceans. They are commonly observed in regions of sharp changes in temperature or salinity. Local
measurements and photographs taken from orbital spacecraft show that their presence has a significant effect on the surface of the sea.
|
{"url":"http://cms.math.ca/Events/winter12/res/ps","timestamp":"2014-04-17T09:48:36Z","content_type":null,"content_length":"16937","record_id":"<urn:uuid:4dbbeff0-e2dc-4057-807a-641a646442a2>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00310-ip-10-147-4-33.ec2.internal.warc.gz"}
|
I was helping my daughter with some homework the other night. She had been asked to use a spreadsheet program to produce a bar chart. I believe the numbers were densities (g/cm³) and they were something like:
92.5, 91, 93.5, 92
And here's what Excel produced:
The vertical axis starts at 89.5, so the height of each bar represents the density−89.5, which means ... ??
Junk Charts quotes Naomi Robbins, author of Creating More Effective Graphs, thus: "all bar charts must include zero". Indeed—otherwise what do the bar heights represent? That Excel's defaults violate this rule is, ahem, unfortunate. (I've tried this using Excel 2000 and Excel on a Mac, but perhaps it's been fixed in newer versions? Maybe?)
Excel can be coerced into starting its vertical axis at 0, but it takes a fair bit of clicking and navigating. The result is:
Relative to a density of zero, there's very little variation. But perhaps this hides the message in these numbers. Doesn't that just bring us back to the first bar chart? Well ... no.
This graph shows the data, with the vertical axis zoomed in to where the action is. Unlike the original bar chart, it doesn't show bars with arbitrary heights.
from Junk Charts
The "start-at-0" rule says that the vertical axis of any graph ought to start at value 0. The rule was mentioned in Huff's classic booklet, "How to Lie with Statistics": as the name implies, the
rule is intended to eradicate mischievous graphs that exaggerate small differences by not starting at 0, which is to say, by choosing a misleading scale.
Others, like Tufte and Wainer, have long realized that the start-at-0 rule is not absolute ... My own "anti-rule" stipulates that if all data appearing in a chart are far from 0, then don't start
at 0.
If, on the other hand, some of the plotted data are close to 0, then it is essential to start at 0.
This isn't too far from my view, but it doesn't address bar charts, which are a special case because they emphasize the heights of the bars, rather than the position of the tops of bars. Bar charts
are only appropriate for variables that are measured on ratio scales. For such variables, there is a non-arbitrary zero, which means that you can calculate a meaningful ratio; e.g. for weight: one thing might weigh twice as much as another. But some variables aren't
like that; e.g. IQ: an IQ of zero is meaningless, and so it doesn't make sense to say that someone with an IQ of 100 is twice as intelligent as someone with an IQ of 50. For variables of this kind
bar charts make no sense at all.
So, if your variable isn't ratio scaled (in other words, there isn't a meaningful zero), don't use a bar chart. If it is ratio scaled and you decide to use a bar chart, make sure your axis starts at 0.
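To see the two options side by side, here's a small sketch in Python/matplotlib rather than Excel (the A–D labels are my own invention; the densities are the ones above):

```python
import matplotlib.pyplot as plt

densities = [92.5, 91, 93.5, 92]
labels = ["A", "B", "C", "D"]

fig, (left, right) = plt.subplots(1, 2, figsize=(8, 3))

# Bar chart: the axis must start at 0 so the bar heights mean something.
left.bar(labels, densities)
left.set_ylim(0, 100)
left.set_title("Bar chart (axis starts at 0)")

# Dot plot: positions rather than heights, so a zoomed axis is honest.
right.plot(labels, densities, "o")
right.set_ylim(89.5, 94)
right.set_title("Dot plot (zoomed axis)")

plt.tight_layout()
plt.show()
```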
Derek puts it well in a comment at Pictures of Numbers:
There is a circumstance in which the would-be grapher absolutely must start with zero, and that's when creating a bar graph. If that causes problems, it's time to consider abandoning the bar
graph and adopting something which doesn't need a zero on the scale. I've seen bar graphs where the designer recognised the problem with zero, adopted and defended the solutions, but without
getting rid of the bar graph format. Those wavy gaps are the least bad of the abortive compromises resorted to by people who won't give up their bars.
In case anyone thinks this really isn't much of an issue, here are some examples I found quite easily:
Labels: bar chart, Excel, levels of measurement, ratio measurement, start-at-0
The global price of food has risen sharply over the last 18 months. This is most acutely the case with cereals. The New York Times reports that wheat has reached its highest price in 28 years. The
reasons for this phenomenon seem to be broadly accepted; see for example, Paul Krugman's column or a recent presentation (pdf) by Joachim von Braun of the International Food Policy Research Institute
Though the relative importance of the reasons is difficult to assess, the list itself seems clear (the price of oil, a growing middle class in China and India with an increasing demand for meat which
requires more grain for feed, droughts likely due to climate change, Western government subsidies for biofuels like corn ethanol).
But I wonder if we shouldn't consider a different aspect of this. As the New York Times points out:
Even the poorest fifth of households in the United States spend only 16 percent of their budget on food. In many other countries, it is less of a given. Nigerian families spend 73 percent of
their budgets to eat, Vietnamese 65 percent, Indonesians half.
What is wrong with our world that so many people are living so close to the edge? Hmmm ...
Update 14Apr2008: The graph below was produced using Technorati. It shows the number of blog posts (in "any language" on blogs with "some authority") containing "food crisis". Too bad most of us are
at least 6 months late.
Labels: developing countries, economics, food crisis, poverty
The Internet makes it possible to link a dispersed community of common interest. Now there are a number of blogs that focus entirely or in part on Statistics, but they seem not to be well connected.
So I've just set up a social bookmarking website just for applied statistics, data analysis, and visualization. It's called StatLinks.
It lists links that users submit, and allows other users to vote on their relevance. Links are listed in order of popularity (or in chronological order, if you prefer).
I encourage people to visit StatLinks, to submit links that are likely to be of interest, and to pass the word! I've put a few links in to get things started. (Hat tip to Slinkset whose technology
made it a breeze to set this up.)
Labels: data, data analysis, data visualization, social bookmarking, statistics
Line-ups are both eminently civilized and—really annoying! The first in first out (FIFO) principle is inherently egalitarian and respect for it is a sign of social order. But there's something crazy
about using our bodies as place keepers in a queue, sometimes for hours on end.
Inevitably, after waiting some time in a lineup, someone will need to step out for a while. Rather than lose one's priority in the sequence, the convention is to ask someone (a complete stranger if
need be), "Could you keep my place in line?"
The language here is metaphorical and indirect. The request is not really about keeping a place. It's about promising on the return of the person to vouch to any potential challengers that indeed
this particular person was previously in line at this particular point in the sequence.
The fact is, complete strangers generally do agree to "keep your place in line". And that's a further sign of civil behaviour. Maybe line ups aren't so bad after all!
I bet there are lots of good stories about line-ups. I'd love to hear some. Then we could publish a book (I'm trying to think of a queued name for it ...)
P.S. I've tried to give equal time to the different spellings lineup / line-up / line up. I really don't know which is correct. Those who wish to correct me should form an orderly line.
Labels: human behaviour, language
The perennial nature-vs-nurture debate just won't go away. This is particularly true with regards to gender differences, a subject of broad interest.
I'll acknowledge my biases up front. I have long been skeptical about biological determinism. This is partly because of its historical association with racism, sexism, classism, and the eugenics
movement. But it's also because, particularly in recent years, there has been a tendency to overstate the importance of genetics in explaining human behaviour. Part of the explanation for this
"genohype" may be the dramatic achievements of the Human Genome Project together with the rise of the biotechnology sector. Just as the success of Darwin's theory of natural selection led to Social
Darwinism, today's molecular genetics revolution has put a new wind in the sails of biological determinism.
In the scientific world, the nature-vs-nurture debate is generally accepted to be an ill-posed problem. Because the environment affects the expression of genes, it is not a question of nature versus
nurture, but of nature vis-à-vis nurture. Nevertheless, the ways in which and the extent to which nature and nurture influence human behaviour remain controversial. And beliefs about this can have
profound consequences.
But one thing's for certain, and that's uncertainty. Despite the way results from studies of gender differences are often portrayed, we're usually left with more questions than answers. Here I want
to comment briefly on two considerations that should be borne in mind.
Does the difference matter?
It's common to read reports stating that, for example, "women perform task X better than men". What this really means is "on average women perform task X better than men, and this effect was found to
be statistically significant". The magnitude of the effect may be small or large. The degree of overlap between women and men may be small or large. (And of course the study may be flawed.)
To what can the difference be attributed?
Assuming the difference is real and meaningful, we're still left with the question of whether it represents an innate biological difference or an environmental (cultural) difference. For some reason
it seems that people quickly jump to the conclusion that gender differences are innate. But in most cases it is extremely difficult to sort this out. Cultural effects can be extremely subtle. As has
been pointed out (by ?), the concept of "wet" wouldn't mean much to a fish.
Grist for the mill
Here are three interesting articles that touch on some of these issues. First, a review by Viv Groskop of "The Sexual Paradox: Troubled Boys, Gifted Girls and the Real Difference Between the Sexes"
by Susan Pinker. Next, an interview with professor of language and communication Deborah Cameron about her book "The Myth Of Mars And Venus". Finally, a New York Times article by Elizabeth Weil about
the movement for single-sex public education based on gender differences.
I've really only scratched the surface of this issue (not to mention related ones), and there's lots of stuff out there (a Google search of "gender differences" gives 2,450,000 results). Comments?
Update 09Apr2008: It seems there's an almost unlimited number of links that could be added. Here's another review of Susan Pinker's book, from the New York Times. Here's an entertaining retort to an
argument about gender differences based on evolutionary psychology. And here's a piece that argues: "Nowhere do scientific findings get more mangled than when they’re about the differences between
men and women." Finally, here's a conservative view on gender differences.
Update 11Apr2008: Here's a response to some of the arguments about single-sex schooling.
Labels: biological determinism, gender differences, genetics
|
{"url":"http://logbase2.blogspot.com/2008_04_01_archive.html","timestamp":"2014-04-18T05:45:14Z","content_type":null,"content_length":"65749","record_id":"<urn:uuid:a777a385-e2ee-4aeb-b788-597a5ff9ed88>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00475-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Prove the Monty Hall Problem (via Simulation)!
raziel_, on 22 September 2012 - 05:13 PM, said:
Maybe im just stupid over here but if one of the door is opened then dont you have 2 choices now?
Yes, you have the choice to stay with your original choice or to switch.
and your chance is 50/50?
No. Think of it this way: If you picked one of the two wrong doors at the beginning, you'll always get to the correct door if you switch (because the second wrong door will have been eliminated). If you picked the right door at the beginning, you'll get a wrong door if you switch. So what is the chance that you win if you switch? 2/3, because that's the chance of picking a wrong door at the beginning.
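Here is one minimal simulation sketch in Python (my own illustration of the thread's premise, not code posted in the thread):

```python
import random

def trial(switch):
    doors = [0, 0, 1]                 # exactly one door hides the car
    random.shuffle(doors)
    pick = random.randrange(3)
    # Host opens a goat door that isn't the player's pick.
    opened = next(d for d in range(3) if d != pick and doors[d] == 0)
    if switch:
        pick = next(d for d in range(3) if d not in (pick, opened))
    return doors[pick]                # 1 if the player wins the car

n = 100_000
print("stay:  ", sum(trial(False) for _ in range(n)) / n)   # ~1/3
print("switch:", sum(trial(True)  for _ in range(n)) / n)   # ~2/3
```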
|
{"url":"http://www.dreamincode.net/forums/topic/292496-prove-the-monty-hall-problem-via-simulation/page__st__30","timestamp":"2014-04-16T14:05:15Z","content_type":null,"content_length":"157429","record_id":"<urn:uuid:b7fd9769-e29e-4528-930a-fc65f5930595>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00314-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Advances in High Energy Physics
Volume 2013 (2013), Article ID 650617, 19 pages
Research Article
Bootstrap Dynamical Symmetry Breaking
Department of Physics, National Taiwan University, Taipei 10617, Taiwan
Received 22 June 2012; Accepted 24 February 2013
Academic Editor: Tao Han
Copyright © 2013 Wei-Shu Hou. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium,
provided the original work is properly cited.
Despite the emergence of a 125 GeV Higgs-like particle at the LHC, we explore the possibility of dynamical electroweak symmetry breaking by strong Yukawa coupling of very heavy new chiral quarks Q.
Taking the 125 GeV object to be a dilaton with suppressed couplings, we note that the Goldstone bosons G exist as longitudinal modes of the weak bosons and would couple to Q with Yukawa coupling λ_Q. With
m_Q pushed above the 550 GeV unitarity bound by LHC searches, the strong λ_Q could lead to deeply bound states. We postulate that the leading "collapsed state," the color-singlet (heavy) isotriplet pseudoscalar meson π1, is G itself, and a gap equation
without the Higgs is constructed. Dynamical symmetry breaking is effected via strong λ_Q, generating m_Q while self-consistently justifying treating G as massless in the loop; hence, "bootstrap." Solving such a
gap equation, we find that m_Q should be several TeV, and it would become much heavier if there is a light Higgs boson. For such heavy chiral quarks, we find analogy with a known strongly bound system, by which we
conjecture the possible annihilation phenomena of QQ̄ with high multiplicity, the search of which might be aided by Yukawa-bound resonances.
1. Introduction and Motivation
The field of particle physics is in a state of both jubilation and anxiety. On one hand, the long-awaited Higgs boson seems to have finally emerged [1, 2] at ~125GeV at the LHC, where a hint
appeared already with 2011 data [3, 4]. The case has been further strengthened with more data added by end of 2012, and the final results for the 7-8TeV run would be revealed by the Moriond meetings
in 2013. On the other hand, there appears to be no New Physics below theTeV scale, and one is worried what really stabilizes the Higgs mass at 125GeV.
As much as the existence of a 125GeV boson is beyond doubt, we note that it is not yet experimentally established that it is the Standard Model (SM) Higgs boson. In part because of the enhancement
in mode—for both ATLAS and CMS, and for both 7 and 8TeV data—and also because the subdominant production channels are not yet experimentally firm, the 125GeV object could still be a dilaton [5–7].
The dilaton couples like a Higgs boson, but with couplings suppressed by , where is the observed vacuum expectation value (v.e.v.) of electroweak symmetry breaking (EWSB), and is the “dilaton decay
constant” that is related to scale-invariance violation. The dilaton couplings to and , however, are determined by the trace anomaly of the energy-momentum tensor and would depend on UV details.
Keeping and effective and couplings free, it is found [6] that a “dilaton” interpretation is as consistent as an SM-Higgs. To reject the dilaton, one has to establish the subdominant vector boson
fusion (VBF) and Higgsstrahlung, or associated production (VH) processes at the expected SM level. Judging from the results available at the end of 2012, it seems [8] that the issue would have to
await the restart of the LHC at 13TeV.
Regardless of whether the 125GeV object is the SM Higgs boson or a dilaton, the Higgs mechanism is an experimental fact. That is, the electroweak (EW) gauge symmetry is experimentally established,
while the gauge bosons, as well as the chirally charged fermions, are all found to be massive, in apparent violation of the gauge symmetry. Thus, the Goldstone particle of EWSB gets “eaten” by the EW
gauge bosons, which become massive (the Meissner effect), as has been experimentally established since 30 years. The v.e.v. is simply related to the venerable Fermi constant .
With both quarks and gauge bosons massive, a heuristic argument was used to demonstrate [9] that, starting from the left-handed vector gauge coupling, the longitudinal component of the EW gauge
boson, that is, equivalently the Goldstone boson G, couples to quarks by the SM Yukawa coupling, with both left- and right-handed components. Thus, Yukawa couplings are experimentally established.
Furthermore [9], much of flavor physics and CP violation (CPV) studies probe the effects of Yukawa couplings, providing ample and highly nontrivial support for their "complex" existence.
A natural question to ask is as follows: given three quark generations already, could there be a fourth copy? Does it carry its own raison d'être? It is not our purpose to discuss in detail the
issues, merits, and demerits regarding this possible fourth generation (4G), for which we refer to [10]. At face value, we admit that the observed 125GeV new boson would pose a difficulty. The main
issue is not so much the existence of the Higgs boson, but one as light as 125GeV. The gluon-gluon fusion production, through the top loop in SM, is now augmented by the 4G quarks $t'$ and $b'$ in the loop,
leading to an enhancement of order 9 in production cross-section, which does not seem reflected by data. In fact, searches [11] assuming this enhancement factor rule out a Higgs boson in the full mass
range up to 600GeV. We are reminded, however, that by simply extending from three to four generations, the effective CPV increases [12] by a thousand-trillion-fold or more and may provide enough CPV to satisfy
the Sakharov conditions for baryogenesis. Given that the three-generation Kobayashi-Maskawa model [13] falls far short of the needed CPV, Nature might use such an enhancement factor. Furthermore,
before one rules out the dilaton possibility, the premise for the order of magnitude enhancement in production may not stand.
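To make the factor explicit: a minimal heavy-quark-limit estimate (our sketch, not a statement from the searches themselves) counts one quark-loop amplitude $A_{1/2} \to 4/3$ per heavy quark, so a degenerate $t'$, $b'$ doublet triples the top amplitude:
\[
\frac{\sigma(gg\to h)}{\sigma_{\rm SM}(gg\to h)} \simeq \left|\frac{A_{1/2}(\tau_t) + A_{1/2}(\tau_{t'}) + A_{1/2}(\tau_{b'})}{A_{1/2}(\tau_t)}\right|^2 \;\longrightarrow\; 3^2 = 9 .
\]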
Fourth generation $t'$ and $b'$ quarks have been pursued vigorously at the LHC, as it should be done for a hadron collider, independent of the Higgs situation. The current bound [14–20] on $m_Q$, where $Q$ stands for a
chiral quark doublet, comes from both $t'$ and $b'$ searches. We shall assume "heavy isospin" symmetry, $m_{t'} = m_{b'} \equiv m_Q$, and treat the doublet as degenerate, which can be viewed as part of the custodial SU(2)
symmetry. What is important is that the current bound is already above the nominal perturbative partial wave unitarity bound (UB) of 550GeV [21]. The Yukawa coupling $\lambda_Q \equiv \sqrt{2}\,m_Q/v$ (where $v \simeq 246$GeV) has already entered the
strong coupling regime, $\lambda_Q > 3$.
With the ever increasing bound on $m_Q$, the fourth generation may well not exist. But being beyond the UB, a new question arises: could the strong Yukawa coupling of $Q$ generate [22] EWSB itself? This is
an intriguing conjecture and provides a second reason for having a fourth generation. Along this line, a gap equation, given symbolically in Figure 1, was constructed [9] without ever invoking the
Higgs doublet, that is, the Higgs boson field of SM.
The logic or philosophy went as follows. The Goldstone boson $G$ is viewed as a tightly bound state, bound by the Yukawa coupling itself. Guided by a Bethe-Salpeter equation study [23], strong Yukawa
binding could lead to state collapse; that is, the bound state turns tachyonic, which is taken as suggestive of triggering EWSB itself. For further elucidation, see [24]. Reference [9] went one step
further to postulate that the leading collapsed state, the color-singlet isotriplet pseudoscalar meson, is the Goldstone boson $G$ itself. With no New Physics in sight at the LHC, not even the heavy
chiral quark itself, the loop momentum integration runs up to roughly $2m_Q$ (where the Goldstone boson ceases to exist), without the need to add any further effects (in the ladder approximation of
truncating corrections to propagation and vertex). This is therefore a "bootstrap" gap equation, in that the strong Yukawa coupling itself is the source of EWSB, or mass generation for the quark $Q$, which
simultaneously justifies keeping the Goldstone $G$ in the loop (the 125GeV object is assumed to be the dilaton). The existence of a large Yukawa coupling is used as input, without a theory for $\lambda_Q$ itself.
There is no attempt at UV completion.
It is important then to investigate whether one could find a solution to such a gap equation. If so, we would have demonstrated the case for dynamical symmetry breaking (DSB) and the potential riches
that could follow. This paper offers a comprehensive elucidation of the line of thought from the aforementioned, through the demonstration of DSB, and linking the need of 2-3TeV quarks to the
possible new phenomenon of $Q\bar{Q} \to nV_L$ production, the search of which could be aided by Yukawa resonances.
In Section 2, we trace the arguments that set up the bootstrap gap equation, including the postulate of the Goldstone boson as a "collapsed state." In Section 3, we formulate this gap equation more
clearly and demonstrate that a numerical solution does exist [8]. A similar gap equation was formulated by Hung and Xiong [25], where the Goldstone boson $G$ in Figure 1 is replaced by a massless Higgs doublet field. We will
compare and offer a critique. Our numerical solution [8] suggests $m_Q$ to be in the 2-3TeV range, corresponding to $\lambda_Q$ of order $4\pi$. Even for $m_Q$ lower than this range, one may ask whether the usual assumption of $Q\bar{Q}$ production followed
by free quark decay would hold, as assumed so far in direct searches [14–20]. The large Yukawa coupling now resembles the $\pi NN$ coupling in strength; to our surprise, we find the two numerically comparable. We tap into the known $N\bar{N}$ annihilation phenomena
and conjecture that $Q\bar{Q} \to nV_L$, with multiplicity $n \sim$ 6–12, and with a characteristic temperature of electroweak scale that is to be measured. This is discussed in Section 4. We end with further discussions and offer a
conclusion in Section 5.
2. A Gap Equation without Higgs
My own interest in the 4th generation was revitalized by a hint of possible New Physics in electroweak penguin contributions to direct CPV in $B$ decay (e.g., see the account given in [12]). Between
2007 and 2008, my interest turned to direct search for $t'$ and $b'$ at the LHC with the CMS experiment. Although initially I wished for a relatively low $m_Q$ for the sake of the rich phenomenology [26, 27], I found the link [22]
of strong Yukawa coupling with EWSB rather intriguing. As the direct search limits on $t'$ and $b'$ rose, I began to find the usual Higgs mechanism via an elementary Higgs doublet more and more problematic.
Having just an elementary Higgs doublet in the Lagrangian lacked dynamics (it is only a description), while Nature seems to be saying something through the absence of elementary scalars in QED and QCD, where the dynamics are
better understood than EWSB. Furthermore, most problems such as the hierarchy problem arose by treating the Higgs doublet field as elementary.
The following is how curiosity led the way from a flavor/CPV entry into dynamical EWSB, without invoking the existence of an elementary Higgs doublet field.
2.1. Yukawa Coupling from Gauge Coupling
Holding the SM-Higgs interpretation of the 125GeV particle observed [1, 2] at LHC as still suspect, let us recall the firm facts from experiment.
First, we know [28] that all observed quarks and charged leptons are pointlike down to extremely short distances, and they are governed by the gauge dynamics. Chromodynamics would not be our concern, but it is
important to emphasize that, unlike the 1970s and early 1980s, the chiral gauge dynamics is now experimentally established. We know that quarks and leptons come in left-handed weak doublets and
right-handed singlets, and for each given electric charge, they carry different hypercharge $Y$.
Second, the weak bosons are found [28] to be massive, $M_W = \frac{1}{2}gv$, where $g$ is the measured weak coupling and $v$ the vacuum expectation value. Hence, spontaneous breaking of the electroweak symmetry (SSB) is also experimentally established.
Third, all fermions are observed [28] to be massive. These masses also manifest EWSB, since they link left- and right-handed fermions of the same electric charge, but different $SU(2) \times U(1)$ charges. We shall
not invoke the elementary Higgs boson for mass generation, as this is the question we explore, and because the dilaton possibility has yet to be ruled out by experiment.
At this point, we need to acknowledge the important theoretical achievement of renormalizability [29] of non-Abelian gauge theories, which allowed theory-experiment correspondence down to per mille
level precision, especially for the extensive work [28] done at the LEP, SLC, as well as Tevatron colliders. The proof of renormalizability is based on Ward identities, hence [30] unaffected by SSB;
that is, the underlying symmetry properties are not affected. From this, we now demonstrate [9, 31] the existence of Yukawa couplings as experimental fact.
With proof of renormalizability, we choose the physical unitary gauge; hence, there are no would-be Goldstone bosons (or unphysical scalars), only massive gauge bosons (strictly speaking, only the
on-shell $S$-matrix is finite in the unitary gauge; we thank T. Kugo for the comment and take it as a limiting case of the $R_\xi$ gauge for the sake of illustration). Longitudinal $W$ boson (the would-be Goldstone boson that
got "eaten") propagation is via the $k^\mu k^\nu/M_W^2$ part of the $W$ boson propagator. If we take a $k^\mu$ factor and contract with a charged current, as illustrated in Figure 2, simple manipulations give (dropping the CKM factor for
convenience) the effective Goldstone boson coupling to quarks, (2) and (3), which is nothing but the familiar Yukawa coupling. We have used the equation of motion, that is, the
Dirac equation, on the quarks in the second step of (2), but this is justified since we exist in the broken phase for the real world, and we know experimentally that all quarks are massive.
From Figure 2 and equations (2) and (3), we see that, from the experimentally well-established left-handed gauge coupling, the Goldstone boson couples via the usual Yukawa coupling. The Goldstone
bosons of EWSB pair with the transverse gauge boson modes to constitute massive gauge bosons, the Meissner effect, but the important point is that we have not introduced a physical Higgs boson in
any step. Unlike the Higgs boson, SSB of electroweak symmetry is an experimentally established fact. The Goldstone bosons couple with Yukawa couplings proportional to fermion mass, independent of
whether these arise from an elementary Higgs doublet.
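The manipulation behind (2) and (3) can be sketched as follows (a standard exercise; conventions with $M_W = \frac{1}{2}gv$ are assumed). Contracting the $k^\mu/M_W$ factor with the left-handed charged current and using the Dirac equations for massive quarks,
\[
\frac{g}{\sqrt{2}}\,\frac{k_\mu}{M_W}\,\bar{u}\,\gamma^\mu P_L\, d
= \frac{g}{\sqrt{2}\,M_W}\,\bar{u}\left(\slashed{p}_u P_L - P_L\,\slashed{p}_d\right) d
= \frac{\sqrt{2}}{v}\,\bar{u}\left(m_u P_L - m_d P_R\right) d,
\]
so the Goldstone boson couples to each quark with strength $\lambda_q = \sqrt{2}\,m_q/v$, the usual Yukawa coupling, involving both chiralities.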
We have kept a CKM factor in Figure 2. Recall that the Kobayashi-Maskawa (KM) formalism [13] for quark mixing deals with massive quarks, or equivalently the existence of Yukawa matrices, and the argument
remains exactly the same. A vast amount of flavor and $CP$ violation (CPV) data overwhelmingly supports [28] the 3-generation KM picture. For example, the unique CPV phase with 3 quark generations can so
far explain all observed CPV phenomena. These facts further attest to the existence of Yukawa couplings from their dynamical effects but again do not provide any evidence for the existence of the SM
Higgs boson.
2.2. A Postulate: Collapse of Yukawa Bound State
Based on experimental facts and the renormalizability of electroweak theory, we have "derived" Yukawa couplings from purely left-handed gauge couplings in the previous section, without invoking an
explicit Higgs sector, at least not at the empirical, heuristic level. We turn now to a more hypothetical situation: could there be more chiral generations? Since we already have three, the
possibility that there exists a fourth generation of quarks should not be dropped in a cavalier way. We note again that, increasing to 4 generations, one may have a sufficient amount of CPV for
baryogenesis [12] from the KM mechanism. There has also been some resurgent recent interest [10, 14–20]. As argued in the Introduction, although the spirit may have been dampened by the 125GeV
Higgs-like object, one should press on with the direct search.
What we do know is that the $t'$ and $b'$ quarks should be suitably degenerate to satisfy electroweak constraints on the oblique $S$ and $T$ variables [32]. A "heavy isospin" is in accordance with the custodial SU(2) symmetry.
2.2.1. Relativistic Expansion versus Bethe-Salpeter Equation
With the mass bounds of [14–20], $\lambda_Q$ is already 4 times stronger than the top quark Yukawa coupling, hence stronger than all gauge couplings. There have been two complementary studies of strong Yukawa bound states. The first
approach is along traditional lines of relativistic expansion [33]. Ignoring all gauge couplings except QCD, and taking the heavy isospin limit, with $Q$ representing the 4th generation quark doublet
and $G$ the triplet of Goldstone bosons, the $t$- and $s$-channel Goldstone exchange diagrams are depicted in Figure 3, with corresponding diagrams for $\gamma/Z$ as well as gluon exchange ([33] did not put in $s$-channel gluon exchange).
The heavy $Q\bar{Q}$ mesons form isosinglets and isotriplets and can be color singlet or octet. We borrow the notation from hadrons and call these states by their hadronic analogs: pseudoscalar and vector, isotriplet and isosinglet, in color-singlet and color-octet versions, respectively. Reference [33] used a
variational approach, with the radius as parameter. It was found that, for color-singlet states, the radius stays of QCD (Coulombic) size for masses below 400 (540)GeV, depending on the state, but above this it suddenly precipitates towards tiny values. For the other states, the radius mildly
decreases (increases) from the Coulombic value, with the trend reversed for binding energy; hence, they remain QCD-bound.
To understand this, note that the $t$-channel Goldstone exchange is repulsive for some channels, while the $s$-channel Goldstone exchange, contributing only to the channel with Goldstone quantum numbers, is also repulsive. However, the sudden drop in the radii
is due to the trial wave function suddenly sensing a lower energy at tiny radius due to $t$-channel Goldstone exchange; the strong Yukawa coupling has wrested control of binding from the Coulombic QCD
potential. QCD binding energy is only a couple of GeV, but the sudden drop in radius leads to a sharp rise in binding energy, hence a kink. One finds that the relativistic expansion fails just when
it starts to get interesting. For color-octet states, QCD is repulsive; so QCD alone does not bind them. In [33], two of the states are degenerate, with sudden shrinking of radius occurring around 530GeV, but the
$s$-channel QCD effect, left out in [33], should push the color-octet state upwards; that state does not shrink until later.
Given that the relativistic expansion breaks down, a truly relativistic approach is needed. Such a study, based on a Bethe-Salpeter (BS) equation [23], was pursued around the time of the demise of the
SSC, for very heavy chiral quark doublets. The BS equation is a ladder sum of $t$- and $s$-channel diagrams of Figure 3, where the $Q\bar{Q}$ pair forms a heavy meson bound state. While the ladder sum of $t$-channel
diagrams is intuitive, a problem emerges for the $s$-channel, which contributes only to the channels carrying the quantum numbers of $G$, $\gamma/Z$, and $g$, respectively. Rather than a triangle loop, the $s$-channel loop appears like a
self-energy, hence potentially divergent, while the momentum carried by the exchanged boson is the bound state mass itself. One could not formally turn the integral equation into an eigenvalue
problem. This was resolved in [23] by a subtraction at fixed external momentum, which in effect eliminates all $s$-channel diagrams. Reference [23] then solved the BS equation numerically using several
different approximations, which, in addition to the approximate nature of the BS equation itself, illustrates the uncertainties. Still, unlike [33], the bound state masses drop smoothly below $2m_Q$ as
$\lambda_Q$ increases, showing no kink, which is an improvement over the relativistic expansion. However, a generic feature is collapse; bound state masses tend to drop sharply to zero at some high $\lambda_Q$ and would
naively turn tachyonic.
2.2.2. Postulate for Leading Collapsed State
Here, we do not pursue the more conservative phenomenology for precollapse Yukawa couplings as in [24] but wish to address more fundamental issues. Although the de facto $s$-channel subtraction made by
[23] appeared reasonable on formal grounds, the contrast with the relativistic expansion is striking; the Goldstone exchange in the $s$-channel led to a specific repulsion [33] for the heavy meson with Goldstone quantum numbers, disallowing
it to shrink suddenly like otherwise analogous states. But after subtracting the $s$-channel, this channel becomes [23] the most attractive channel (MAC). Together with the tendency towards collapse
for large enough $\lambda_Q$ (equivalently $m_Q$), this means that this meson would be the first to drop to zero and turn tachyonic. That this occurs for the very channel that experiences repulsion when $\lambda_Q$ is far lower than
collapse values (à la states which have no $s$-channel effect) seems paradoxical. Does this falsify the whole approach, or else what light does this shed? And how is it related to the $s$-channel subtraction?
With experimental bounds [14–20] for 4th generation quarks entering the region of deep(er) binding, we offer a self-consistent view that may seem radical. Clearly, around and below 500GeV mass, there could still be some repulsion due to $s$-channel exchange. But since we did not introduce any elementary Higgs doublet, the Goldstone boson should perhaps be viewed as a bound state.
Hence, we Postulate: Collapse is a precursor to dynamical EWSB, and the first mode to collapse becomes the Goldstone mode.
Although the full validity of the BS equation may be questioned, it is known [34] that "the appearance of a tachyonic bound state leads to instability of the vacuum," which is "resolved by
condensation into the tachyonic mode." Our postulate removes the $s$-channel equation self-consistently (there is another aspect to the self-consistency of this subtraction/postulate: if one did not
invoke this subtraction, the leading collapse state of strong Yukawa coupling would be a vector, and condensation in this channel would break Lorentz invariance) and provides some
understanding of the $s$-channel subtraction; a boson carrying the bound state momentum would no longer be a bound Goldstone boson in the $s$-channel. Without an elementary Higgs boson, there is no corresponding channel subtraction, while for heavy
enough $m_Q$ (so that the pseudoscalar has turned Goldstone), one can treat QCD effects as a correction, after solving the bound state problem, without need of subtracting the $s$-channel exchange. The self-consistent MAC behavior of
the Goldstone channel seems like a reasonable outcome of the Goldstone dynamics, as implied by the gauge dynamics.
It may now appear that EWSB is some kind of a "bootstrap" from "massive chiral quarks" with large Yukawa coupling, as seen in the broken phase.
2.3. A Bootstrap Gap Equation without Higgs
Motivated by the previous heuristic discussion, we construct a gap equation for the dynamical generation of heavy quark mass without invoking the Higgs boson.
For a long time, as the mass limit rose, I focused on the breaching of the UB, which the experimental pursuit would not be concerned about. Although UB violation (UBV) occurs at far higher energy, once the
experimental limits breach the bound, some form of strong interactions would take over. I asked myself how UBV might be amended by Nature. The study of Yukawa bound states [24] arose from this
pursuit but did not shed sufficient light on the issue, except approaching the abyss of state "collapse," as described earlier.
A mindset change occurred when one connected two of the $Q$ or $\bar{Q}$ lines in Figure 3. One gets the self-energy for $Q$ by Goldstone exchange and readily arrives at the gap equation [9] as depicted in Figure 1. If the quark
mass $m_Q$, represented by the cross, could be nonzero, then one has dynamical chiral symmetry breaking, which is equivalent to EWSB! Both $Q$ and $G$ are treated as massless at the diagrammatic level, since the
gap equation is summing over all possible momenta carried by $G$ as it mediates—dominates—$Q\bar{Q}$ scattering. But if the momentum reaches $2m_Q$, there would no longer be a Goldstone boson (assuming $G$ is a bound state), or it would
be resolved in the $t$-channel. Thus, the summation over loop momentum should not exceed $2m_Q$ at the heuristic level. The whole picture is heuristic, but realistic in the experimental sense, since the gap equation
integrates over a large momentum range, where we have no other New Physics that enters, as indicated by LHC data.
Such a gap equation was constructed recently from a different, and in our view more ad hoc, theoretical argument. In [25], an elementary Higgs doublet is assumed together with a 4th generation.
Motivated by their earlier study [35, 36], where some UV fixed point (UVFP) behavior was conjectured, these authors pursued dynamical EWSB via a Schwinger-Dyson equation that is rather similar to our
Figure 1. However, perhaps in anticipation of the UVFP that might develop at high energy [35, 36], they put in by hand a massless Higgs doublet, hence a scale-invariant theory to boot. It is the
massless Higgs doublet that runs in the loop, replacing our Goldstone boson $G$. The massless nature of the Higgs doublet appears ad hoc, and the paper defers the discussion of the physical Higgs
spectrum to a future work.
In contrast, our Goldstone boson $G$, identified as the collapsed state as it turns tachyonic, is strictly massless in the broken phase. In the gap equation of Figure 1, we speculate that the loop
momentum should be cut off around $2m_Q$, rather than at some "cut-off" scale $\Lambda \to \infty$. In so doing, we bypass all issues of triviality that arise from having the cutoff approach infinity. What happens at scales above $2m_Q$ is to be
studied by experiment.
Here, we remark that the first, elementary Higgs doublet of [25] corresponds to our bound-state Goldstone bosons $G$, and indeed we should have a $\sigma$-like massive broad bound state [24] that could mimic the heavy Higgs
boson. Their second Higgs doublet, in the form of heavy-quark bound states, would be excitations above the $G$ and the $\sigma$-like state for us, likely rather broad. We think that their claimed third doublet, that of bound heavy leptons, may
not be bound at all, as the leptonic Yukawa couplings may not be large enough.
The gap equation illustrated in Figure 1 actually links to a vast literature on strongly coupled, scale-invariant QED. It is known that such a theory could have spontaneous chiral symmetry breaking
when couplings are strong enough. We turn to our numerical study [8] in the next section, where we also offer our critique of [25].
Although our line of thought may seem constructed, we have developed a self-consistent picture where EWSB from large Yukawa coupling may be realized with some confidence—all without assuming an
elementary Higgs boson. We have not yet really touched on the $\sigma$-like meson, which would play the role of the heavy Higgs boson. However, our scenario is to have Goldstone bosons as strongly (and tightly) bound "Cooper
pairs" of very heavy quarks, which may please Nambu. For the 125GeV Higgs-like object, the inherent scale-invariant nature of our gap equation allows the possibility of a dilaton interpretation, as
we will discuss later.
3. Solving Bootstrap Gap Equation
The question now is whether the symbolic gap equation of Figure 1 affords actual solutions. Towards finding a solution, in this section, we briefly review the Nambu-Jona-Lasinio model, where one sets
up a gap equation with its well-known solution. We then turn to the so-called strongly coupled scale-invariant QED, which is closer to our gap equation. By recounting some major steps, we also set up
our notation for later usage. We find a coupled set of integral equations, which is more complicated than strong QED in Landau gauge. But a numerical solution is found [8], with $m_Q$ at the several-TeV scale, and we compare
and contrast with [25].
3.1. NJL Model: Momentum-Independent Self-Energy
The Nambu-Jona-Lasinio model [37] is the earliest explicit model of DSB, where the breaking of global chiral symmetry leads to the generation of nucleon mass, and the pion as a (pseudo-)Goldstone boson
[38–40]. (Strictly speaking, the Goldstone boson should really be called the Nambu-Goldstone (NG) boson, since [38] predated [39]. But for the sake of notation, and because of more common usage, we shall
use "Goldstone boson" throughout our paper.)
The model is depicted in Figure 4, where a four-fermion interaction is introduced (the blob on the right-hand side). The nucleon mass, represented by a cross, is self-consistently generated. One
easily finds the gap equation, where $G$ here is the four-fermion coupling and $\Lambda$ is the cutoff. Since the mass $m$ factors out on both sides, one obtains a condition which admits a nontrivial solution for $G$ above a critical strength.
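For orientation, in one common convention (normalizations of the four-fermion coupling vary in the literature, and degeneracy factors are absorbed into $G$), the gap equation and the condition for a nontrivial solution read
\[
m = \frac{G\,m}{4\pi^2}\left[\Lambda^2 - m^2\,\ln\!\left(1+\frac{\Lambda^2}{m^2}\right)\right],
\qquad
m \neq 0 \;\Longleftrightarrow\; G > G_c \equiv \frac{4\pi^2}{\Lambda^2},
\]
since the bracket approaches $\Lambda^2$ as $m \to 0$.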
To understand what is happening, one can iterate the cross of the left-hand side of Figure 4 on the right-hand side and see that it constitutes an infinite number of diagrams. This effectively puts
the original self-energy diagram into the denominator. In the end, one trades the parameters $G$ and $\Lambda$ for the physical nucleon mass and the pion-nucleon coupling. At the more refined level and using the quark
language, one can show further that the emergent Goldstone boson, the pion, is in fact a ladder sum of the quark-level four-fermion interaction.
We will return at the end to discuss the similarity and differences of the NJL model with our gap equation.
3.2. Strong QED: Momentum-Dependent Self-Energy
We note that the self-energy bubble of Figure 4 does not depend on the external momentum $p$, and so, at the superficial level, Figure 4 is quite different from Figure 1. We now turn to QED, where there is
much closer similarity.
The general gap equation for QED [41] can be written in the form of the Schwinger-Dyson (SD) equation, where $\Sigma(p)$ is the electron self-energy, $S(p)$ the (full) electron propagator, $D_{\mu\nu}(q)$ the (full) photon
propagator, and $\Gamma^\mu$ the full vertex.
3.2.1. Ladder Approximation
Truncating the exact, full vertex and photon propagator by the approximation called the ladder (or rainbow) approximation, but retaining the electron self-energy, the gap equation becomes (10), with the photon propagator now
given in (9), and we have set the bare mass $m_0 = 0$, that is, massless QED at the Lagrangian level. Pictorially, this is represented as in Figure 5, where we note that, compared with the four-fermion coupling of the NJL
model depicted in Figure 4, the external momentum now flows into the self-energy loop. The question now is whether chiral symmetry can be dynamically broken, that is, whether a nontrivial solution to the
self-energy could be generated at some strong coupling $\alpha$.
We define the electron propagator as in [41], in terms of a wave function factor $A$ and a mass function $B$, where the $A$ term corresponds to wave function (w.f.) renormalization related to the usual $Z$ factor. A finite pole of the propagator would give the dynamical
effective mass. Our aim is therefore solving for $A$ and $B$ from the gap equation of (10). Inserting (9), and after some algebra, one finds coupled equations for $A$ and $B$.
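In the standard parametrization presumably intended here,
\[
S^{-1}(p) = A(-p^2)\,\slashed{p} - B(-p^2), \qquad M(x) \equiv \frac{B(x)}{A(x)}, \qquad x \equiv -p^2,
\]
with the dynamical mass defined by the pole condition $x = M^2(x)$.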
Simplification can be achieved in the Landau gauge, $\xi = 0$, where one finds $A \equiv 1$. Since $A$ relates to w.f. renormalization, $A \equiv 1$ satisfies the Ward-Takahashi identity even under the ladder approximation. The gap
equation is simplified to a single equation for $\Sigma \equiv B$. After Wick rotation, and performing the angular integration, one obtains an integral equation in $x$, the Euclidean momentum squared, with $\Lambda$ and $\kappa$ the ultraviolet (UV) and infrared (IR) cutoffs, respectively.
The integral equation can be changed to a differential equation by differentiating twice with respect to $x$. One obtains the differential equation together with the IR and UV boundary conditions (B.C.). If the
IR cutoff is 0, the IR B.C. should be replaced by regularity at $x = 0$.
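For reference, the standard Fukuda-Kugo forms (a sketch in our notation; $\Sigma \equiv B$, $x$ the Euclidean momentum squared) are
\[
\Sigma(x) = \frac{3\alpha}{4\pi}\int_{\kappa^2}^{\Lambda^2} dy\;\frac{y\,\Sigma(y)}{y+\Sigma^2(y)}\left[\frac{\theta(x-y)}{x}+\frac{\theta(y-x)}{y}\right],
\qquad
\frac{d}{dx}\!\left[x^2\,\frac{d\Sigma}{dx}\right] = -\frac{3\alpha}{4\pi}\,\frac{x\,\Sigma(x)}{x+\Sigma^2(x)},
\]
with IR and UV boundary conditions $x^2\,\Sigma'(x)\big|_{x=\kappa^2} = 0$ and $\big[x\,\Sigma'(x)+\Sigma(x)\big]\big|_{x=\Lambda^2} = 0$.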
3.2.2. Solution in Landau Gauge
The coupled integral equations are simplified and put into a differential equation with B.C. In order to study qualitative features, let us first find an approximate solution. For a special range of
the IR cutoff, the denominator can be linearized; then (17) simplifies to a linear equation. Inserting a power ansatz $\Sigma \propto x^{-a}$, the characteristic equation is quadratic in $a$, with a discriminant that changes sign at a
critical coupling; hence, the behavior is different for $\alpha < \alpha_c$ and $\alpha > \alpha_c$. The analytical solution under the approximation is a superposition of the two power (or, above $\alpha_c$, oscillatory) solutions, and the boundary conditions
can be written as two homogeneous conditions on the integration constants. The determinant must vanish for a nontrivial solution; hence, a quantization condition follows. To satisfy this condition, $\alpha > \alpha_c$ is needed, and for a given $\alpha$, the ratio of scales takes on
discontinuous values. One sees that $\alpha \to \alpha_c$ in the nominal "continuum limit." For $\alpha < \alpha_c$, the only solution that satisfies the B.C. is the trivial $\Sigma \equiv 0$.
The approximate solution for small IR cutoff can now be obtained by replacing $\Sigma^2(x) \to \Sigma^2(0)$ (constant) for small $x$. The differential equation then becomes of hypergeometric type; namely, the solution is expressed through a
hypergeometric function. Checking the asymptotic behavior for large $x$, the power behavior for $\alpha < \alpha_c$ cannot satisfy the UV boundary condition.
The gap equation thus has a nontrivial, oscillatory solution for $\alpha > \alpha_c$, where the critical coupling is $\alpha_c = \pi/3$ for (quenched, ladder) QED. The dynamical effective mass is obtained by solving the pole condition. Note that the pole of the propagator
should be given in the time-like region; so one has to make an analytical continuation in order to obtain a physical mass, which can be done smoothly.
An issue arises in that the dynamical mass is proportional to the UV cutoff $\Lambda$. For the mass to be physical, however, it should not depend on $\Lambda$. Miransky suggested [42] that $\alpha \to \alpha_c$ as $\Lambda \to \infty$; that is, $\alpha_c$ is a nontrivial UV
fixed point. A related issue, which we would not go into, is whether there would be a dilaton associated with the breaking of scale invariance [43, 44]. (These two are also the two best references for
the gauged NJL model.)
3.3. Bootstrap Gap Equation for EWSB
Our recapitulation of strong QED is for the purpose of setting up our approach for solving the bootstrap gap equation [9] with large empirical Yukawa coupling, where we will continue to follow the
Fukuda-Kugo approach.
In Landau gauge ($\xi = 0$), the propagations of (would-be) Goldstone modes and gauge bosons are properly separated, and the gap equation becomes equivalent to our discussion if one takes the vanishing limit for the
gauge coupling. The Goldstone bosons $G^\pm$ and $G^0$ couple to fermions with the familiar Yukawa couplings, which we have argued [9] are experimentally established. They are also unaltered by this limit, as seen
in (2). The main assumption is the addition of a new (heavy) chiral quark doublet $Q$, where the heavy isospin symmetry implies equality of the Yukawa couplings, $\lambda_{t'} = \lambda_{b'} \equiv \lambda_Q$. We ask whether large
$\lambda_Q$ could be the source of EWSB, through the conceptual gap equation of Figure 1.
3.3.1. Ladder Approximation
In any case, we do not know the full propagator and vertex functions. We approximate the Goldstone-fermion vertex as undressed, and the Goldstone propagator remains massless, analogous to the QED
treatment in the previous section. This is in part aided by the insight that $G$ is a very tight bound state, which would be massless as long as the symmetry remains spontaneously broken. Similar to Figure 5,
the gap equation for large Yukawa (vanishing gauge) coupling is depicted in Figure 6. Again, the external momentum flows through the loop.
The change from the symbolic or conceptual equation of Figure 1 is that, even with a bare mass forbidden by gauge invariance ($m_0 = 0$), there is a $B$ part of the propagation, and we aim at solving for the quark
propagator in terms of $A$ and $B$, which is of course of the same form as (11) for QED. The Goldstone boson is colorless, and, unlike the massless photon from electromagnetic gauge invariance in the QED case, its
masslessness is "bootstrapped" into the gap equation itself. Following similar steps as before, and assuming degeneracy, we obtain the coupled equations (30) and (31), where the placement of factors anticipates the Wick rotation, and $A$ stands
as shorthand for $A(-p^2)$, and likewise for $B$. We have already used $m_0 = 0$; so, compared to massless QED, we now have to consider $A \neq 1$, or wave function renormalization effects; hence, we have to face a coupled set of
equations for $A$ and $B$. Note that we have kept a "Higgs" term for calculational purposes, applying Standard Model Higgs boson, $h$, couplings. This is for the purpose of later comparison with the work of Hung
and Xiong [25], as well as for discussion of the dilaton case.
For our aim of bootstrap DSB, we simply drop the second term (no physical Higgs), and so only Goldstone modes propagate in both equations. After angular integration and Wick rotation, one
gets (32) with the kernels of (33). Taking instead the massless-Higgs limit in (30) and (31), to mimic the Hung-Xiong approach of a massless scalar doublet, gives the kernels of (34). We keep the notation of (33) and (34) to cover these two cases.
Analogous to Section 3.2, (32) can be put into differential form, (35) and (36), with the boundary conditions, where the prime stands for the $x$-derivative.
At this point, we note that, if one ignores wave function renormalization, that is, (36), while forcefully setting $A \equiv 1$ in (35), then one has the same solution as in QED, with the critical
coupling changed—twice as high as for QED—and hence, superficially, a "critical mass" above current LHC limits [14–20]. But it should be clear that the wave function renormalization
term cannot be neglected.
3.3.2. Numerical Solution
Redefining variables, our coupled differential equations with B.C. take a dimensionless form, where the dot represents the derivative with respect to the rescaled momentum variable.
Asymptotic Properties and Critical Coupling. Due to scale invariance, the differential equations are invariant under a simultaneous rescaling of the momentum variable and the mass function. As a result, the solutions of the differential equations depend only on ratios of scales for
given couplings and cutoffs. Thus, the overall scale is a kind of integration constant. We will see that the most important feature of the solutions is that only special (discontinuous) values of the coupling are allowed for given
boundary conditions.
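Explicitly, the scale invariance at work should be of the form (our sketch, with $x$ the Euclidean momentum squared and $\mu$ an arbitrary scale)
\[
A(x) \to A(x/\mu^2), \qquad B(x) \to \mu\,B(x/\mu^2), \qquad (\kappa^2,\Lambda^2) \to (\mu^2\kappa^2,\,\mu^2\Lambda^2),
\]
so that only dimensionless ratios such as $x/\Lambda^2$, $\kappa^2/\Lambda^2$, and $B/\Lambda$ enter.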
If we linearize the denominators, the equations can be solved analytically, and the solution can be described as in the case of strong-coupling QED in Landau gauge, (21)–(23). The property illustrated there should hold
also here: depending on the boundary conditions, the coupling should take on special discontinuous values to satisfy the gap equation for the cases of (33) or (34).
To see this, we note that the $B^2$ term in the denominators is irrelevant. This is because, for large $x$, the $B^2$ dependence is negligible due to the boundary conditions. Therefore, to understand the behavior
of the solution, one can drop $B^2$ and consider the differential equations in that limit, where the equation for $A$ becomes independent of $B$, but its solution affects $B$.
The solution for $A$ is obtained analytically, with an integration constant that can be fixed by the IR B.C. Using the analytical solution, one can show that only discontinuous values
of the coupling can satisfy the B.C.s. The "critical" value of the coupling can be easily obtained numerically, even without neglecting the $B^2$ term in the denominators. Of course, since the solutions are found numerically, this is not
a proof that one really has a "critical" value. The upshot is that only special values of the coupling can satisfy the coupled equations for given boundary conditions, for case (33) (or (34)), and given cutoffs.
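The quantization of the coupling by the boundary conditions can be illustrated numerically. The following is a minimal sketch in Python (not the authors' code; for simplicity it uses the single, analogous strong-QED ladder equation of Section 3.2 rather than the full coupled $A$, $B$ system): shooting from the IR with a regular solution and testing the UV boundary condition, one finds that the residual first changes sign near $\alpha_c = \pi/3$.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Ladder gap equation in Landau gauge (Euclidean, x = p^2, units of Sigma(0)):
#   d/dx [ x^2 Sigma'(x) ] = -(3 alpha / 4 pi) x Sigma / (x + Sigma^2),
# with UV boundary condition  x Sigma'(x) + Sigma(x) = 0  at  x = Lambda^2.

LAM2 = 1.0e8   # UV cutoff squared
X0 = 1.0e-6    # start near the IR with the regular solution

def uv_residual(alpha):
    k = 3.0 * alpha / (4.0 * np.pi)
    def rhs(x, y):
        sig, dsig = y
        return [dsig, -(2.0 / x) * dsig - k * sig / (x * (x + sig * sig))]
    # regular IR behavior: Sigma(0) = 1, Sigma'(0) ~ -k / (2 Sigma(0))
    sol = solve_ivp(rhs, [X0, LAM2], [1.0, -k / 2.0], method="LSODA",
                    rtol=1e-10, atol=1e-12)
    sig, dsig = sol.y[0][-1], sol.y[1][-1]
    return LAM2 * dsig + sig   # vanishes only for quantized couplings

# Scan: the residual keeps one sign below alpha_c = pi/3 ~ 1.047 (no
# nontrivial solution), and starts crossing zero at discrete couplings above.
for alpha in np.linspace(0.9, 1.3, 9):
    print(f"alpha = {alpha:.3f}   UV residual = {uv_residual(alpha):+.4e}")
```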
Our numerical solution gives the critical couplings of (46) and (47) for the two cases, respectively, where the latter case is much higher. Here the subscript $c$ stands for "critical," and our numerical values are extracted in the large $\Lambda$ (and small IR cutoff) limit. Note that for the
artificial case of ignoring wave function renormalization (i.e., setting $A \equiv 1$), the critical value was that of (40); hence, the effect of considering $A$, or wave function renormalization, is quite nontrivial.
The values in (46) and (47) correspond to effectively taking $\Lambda \to \infty$, which is certainly not the range of validity for (46) as a descendent of Figure 1. That is, the conceptual foundation for Figure 1 is
that, for momentum roughly up to somewhere below $2m_Q$, corrections to the Goldstone boson propagator and vertex have been ignored. Nevertheless, at face value, if we naively apply the physical $v \simeq 246$GeV, then
(46) and (47) imply mass values of about 2 and 8.1TeV, respectively, which are rather high. We drop the latter, "massless (or light) Higgs" case, not only because it is clearly out of reach for the LHC,
but for theoretical reasons, as we explain later. The lower-bound nature of (48) will be explained subsequently.
3.4. Discussion
The large critical coupling of (46) is somewhat surprising, resulting in the rather high critical mass of (48). We note that this is without the contribution of a light SM Higgs boson, which would
push $m_Q$ much higher. Before we touch the phenomenology of such high chiral quark masses, and in so doing find an independent justification for such a high critical Yukawa coupling, we offer some further remarks.
3.4.1. Comparison with Hung and Xiong
We can now make some comparison with, and offer a critique of, the approach of Hung and Xiong [25] (HX). We have mimicked the concept of HX by keeping a "Higgs" scalar contribution in (30) and (31),
which resulted in the second case, (34), in the massless-Higgs limit, as compared with our case of interest, (33), where we drop the Higgs-dependent term.
HX assumed the existence of a massless Higgs doublet, for which our (35) and (36) with (34) should be a faithful representation. However, not only is the massless doublet assumed, HX also ignored wave
function renormalization, that is, the $A$ term, in the treatment of their gap equation. Furthermore, the sign of our second integral in (30) disagrees with HX, suppressing the coefficient in
comparison. In the earlier work [9] that set up the gap equation of Figure 1, taking the numerics of HX, the estimate (compare (40)) gave a critical mass not too far above the current LHC bound.
But with our sign for the second integral in (30), we would arrive at a critical coupling, hence critical mass, that is a sizable factor higher. With our sign, the Higgs boson effect would cancel out part of the Goldstone effect,
hence requiring stronger Yukawa coupling.
But we have argued that it is not justified to ignore the wave function renormalization effect. After all, the boson loop has momentum dependence, and so it would necessarily affect the $A$ factor.
Thus, the previous simple numerics are incorrect. Keeping $A$ in our numerical study, hence the coupled equations, a considerably higher critical coupling is found. For the case of (34), where we mimic
HX's massless Higgs doublet effect, the critical coupling of (47) is almost 4 times as high as that for (46). In fact, we obtain an $A$ function that is quite different from 1.
Our criticism goes far deeper. Taking a scalar doublet as massless, such that superficially one has "scale invariance" as in strong QED, is totally ad hoc. Effectively, one has to hold the parameters
of the Higgs potential such that the Higgs field always remains massless. However, there is no principle by which this scale invariance or masslessness of the Higgs field can be maintained. After
all, one is invoking large Yukawa couplings, which feed the notorious divergent quadratic corrections to the Higgs boson mass. The one-loop two-point and four-point functions with the quark $Q$ in the loop
would generate effective mass and self-coupling terms for the Higgs field. With no explicit dynamical principle (such as gauge invariance for the case of QED), the assumption of a massless Higgs doublet
as the agent of DSB is not only ad hoc, but clearly unsustainable.
In contrast, we see the merit, as well as the meaning, of our "bootstrap" gap equation. As long as we are in the broken phase, there is a massless Goldstone boson that couples with the Yukawa
coupling $\lambda_Q$. Treating $\lambda_Q$ as large, if a nontrivial solution to the gap equation is found (as we have illustrated in the previous subsection), it in turn justifies the use of a massless Goldstone
boson in the gap equation, which is why we affix the name "bootstrap." In fact, the physical argument [9] was to view the Goldstone boson as an extremely tight ultrarelativistic bound state of heavy
$Q$ and $\bar{Q}$ from the broken phase, while it enters the "bootstrap" gap equation to dynamically generate $m_Q$, hence break the symmetry, and in the same stroke justify its own existence.
We also see that this argument for our bootstrap gap equation puts the existence of a light Higgs boson in doubt. That is, it seems rather difficult to keep it light, and one cannot ignore
corrections to the Higgs propagator. At the foundational level [9], unlike the Goldstone boson, the experimental basis for the Higgs boson is rather recent. Even with the newly discovered [1, 2]
125GeV boson, one has to distinguish between an SM Higgs boson and a dilaton. Our numerical study also shows that keeping the Higgs term tends to raise the critical Yukawa coupling considerably,
implying an $m_Q$ higher than the LHC collision energy. Thus, the light Higgs case of (34) is rather problematic in bootstrap DSB. We will return to discuss the dilaton possibility later.
3.4.2. Physical Cutoff
The previous comparison with [25], and in particular the bootstrap nature of the Goldstone boson in the gap equation, illustrates that the $\Lambda \to \infty$ limit would take us outside the range of validity of the gap
equation itself. It is clear that, for timelike momenta above the $Q\bar{Q}$ threshold, there would no longer be a Goldstone boson. Thus, for the cutoff, one should not use the traditional language of $\Lambda \to \infty$. One probably should not
contemplate "UV-completion" at the current stage. The bootstrap gap equation is rather heuristically, or physically, argued. But it does not provide a theory of the heavy quark Yukawa coupling $\lambda_Q$,
employing it for DSB instead. Thus, we suggest a cutoff of order $2m_Q$, below some true New Physics scale where the origin of Yukawa couplings may be contemplated but which is out of reach of the bootstrap DSB approach.
Viewed from a slightly different angle, this cutoff is related to the restoration of symmetry. The Goldstone boson couples to the broken currents, but the wave function factor of the Goldstone boson may vanish at some
scale related to where the Goldstone boson becomes unbound and symmetry is restored. Rather than a true cutoff that in principle extends to infinity, there exists a physical cutoff of the gap equation.
We are in effect summing over the Goldstone boson correction to the self-energy of the quark $Q$ in the range where the Goldstone boson is still defined. As noted in [9], this picture does receive experimental
support, in that no New Physics seems to be there below the ~TeV scale. So, one sums up only the effect of the Goldstone boson, and nothing else.
The gap equation sums up the effect, from low to high momentum, of the correction by the Goldstone boson to the quark self-energy. Equation (46) reflects taking this sum to infinity. Since the summation is
accumulative, if one now sums only to some finite cutoff, as we have argued, then the cumulative effect is less than summing to very large momentum. We note that the cutoff is a physical scale parameter that is
external to the scale-invariant gap equation. Nevertheless, we can plot the dependence of the critical coupling on the cutoff $\Lambda$. From Figure 7, one can see that, for a lower cutoff, the critical coupling has to be higher than in (46). This is
because, as the integration range is smaller, a larger coupling value is needed to compensate. Thus, (46) gives a lower bound on the critical coupling. If we take $\Lambda = 2m_Q$, then the critical coupling rises, and the implied mass, (49), is higher than (48). We will return to discuss
whether this could be an overestimate later.
3.4.3. Future Work
Up to now, we have been cavalier about the relation between $\lambda_Q$ and $m_Q$, treating it as $\lambda_Q = \sqrt{2}\,m_Q/v$. But $v$ is a physically measured quantity, while $m_Q$ is yet to be experimentally measured. If electroweak symmetry is indeed
dynamically broken by the large Yukawa coupling of a new heavy chiral quark $Q$, when $Q$ is discovered in the future, $\lambda_Q$ will likely differ from the critical value. We then see that the actual v.e.v. value, $v \simeq 246$GeV, may not correspond to the critical
coupling. This brings about the question of how scale invariance is actually broken in our gap equations, and whether there might be a dilaton [5–7, 45–49]. (When the 125GeV hint emerged from 2011 LHC
data, [45–47] already suggested that it might be a dilaton rather than the SM Higgs boson, while [48, 49] are from the (walking) technicolor perspective.)
Rather than approach this deeper problem, we try to obtain the decay constant of the Goldstone boson, which should be the same as the vacuum expectation value, $f_\pi = v$. Following the Pagels-Stokar
formula [50] naively, we obtain (50). More generally, we write (51), where we have scaled out $m_Q$ (and redefined the self-energy function accordingly), treating it as physical.
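For reference, the Pagels-Stokar formula in one common form (a sketch; overall color/doublet counting factors depend on conventions) reads
\[
f_\pi^2 \;=\; \frac{N_c}{4\pi^2}\int_0^{\Lambda^2} dx\; x\,
\frac{\Sigma(x)\left[\Sigma(x)-\tfrac{x}{2}\,\Sigma'(x)\right]}{\left[x+\Sigma^2(x)\right]^2},
\]
with $\Sigma(x)$ the dynamical mass function of the heavy quark and $f_\pi$ identified with $v$.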
We can get back the "Yukawa" coupling (it should really be denoted as the effective Goldstone coupling, and the question is whether it coincides with $\sqrt{2}\,m_Q/v$) as the input to the gap equation, (52). If the system is really scale-invariant, the r.h.s. of (52)
is a function of the coupling and the ratio of scales. In order to satisfy the gap equation, the coupling is obtained as a function of that ratio; namely, it should not depend on the overall scale explicitly. Taking the cutoff at $2m_Q$, the equation becomes iterative for $m_Q$.
Therefore, solving the gap equation, we obtain a prediction for the heavy quark mass. But technical issues remain. Is this a physical mass? What about the infrared cutoff? Can our assumption of $f_\pi = v$ be
maintained self-consistently? Our work is far from complete, and we leave these theoretical questions to the future.
4. New Phenomena: $Q\bar{Q} \to nV_L$
The stringent mass limit [14–20] is already above the perturbative, tree-level partial wave unitarity bound (UB) that is nominally around 550GeV/c^2 [21]. With TeV-scale heavy quark masses,
the actual UB violation (UBV) in the high energy limit for $Q\bar{Q}$ scattering may be out of reach. Instead, the question to ask is: should the current search strategy for ultraheavy quarks at the LHC be
modified? Of course, with the advent of the 125GeV boson at the LHC, the search for fourth generation quarks has been blunted, and the search effort is being reshuffled towards vector-like
quarks, where flavor-changing neutral current (FCNC) decays to $Z$ and Higgs bosons need to be studied. We maintain, however, that SM has a built-in absence of FCNC, both for the $Z$ (only chiral fermions) and the
Higgs (single Higgs doublet). There is some merit to sticking to this, while our bootstrap equation has even disposed of the Higgs doublet altogether.
So, what if there are heavy chiral quarks in Nature at the 2-3TeV level? Can they be discovered at the LHC? We draw on the analogy with the proton to argue [52] that $Q\bar{Q} \to nV_L$ ($V_L$ is the longitudinal component
of the vector boson $V = W, Z$) may be the new signature at the LHC.
The $\pi NN$ coupling [53, 54], $g_{\pi NN} \simeq 13$, is very large and quite close to the nucleon "Yukawa coupling" $\sqrt{2}\,m_N/f_\pi$, which is slightly larger than $4\pi$. Although $\lambda_Q = \sqrt{2}\,m_Q/v$ ($v \simeq 246$GeV is the electroweak symmetry breaking scale) from the
current bound is not yet as large, drawing an analogy with $N\bar{N}$ annihilation, we expect that $Q\bar{Q} \to nV_L$ may be the dominant process for sufficiently heavy $m_Q$. It is also by realizing that such
strong couplings already exist in Nature—$g_{\pi NN}$ is of similar strength to our critical $\lambda_Q$—that we gain some confidence in our very large critical Yukawa coupling strength.
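As an illustrative numerical check of the analogy (our arithmetic, using $f_\pi \simeq 92$MeV and $m_N \simeq 939$MeV):
\[
g_{\pi NN} \simeq 13.4 \;\approx\; \frac{\sqrt{2}\,m_N}{f_\pi} \simeq 14.4, \qquad 4\pi \simeq 12.6;
\]
an electroweak Yukawa coupling of the same strength, $\lambda_Q = \sqrt{2}\,m_Q/v \sim 13$-$14$, corresponds to $m_Q \simeq 2.3$-$2.4$TeV.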
4.1. Phenomenology of $N\bar{N}$ Annihilation
The phenomenon of $N\bar{N}$ annihilation is unfamiliar to the average particle physicist. One tends to think of annihilation into photons (much suppressed by the $\alpha$ of QED) or gluons. But even for the latter, it is
not practical with a total energy of ~1.88GeV and in consideration of detection of final states. Certainly, $N\bar{N}$ would annihilate into mesonic resonances. However, all mesonic resonances basically end up
as pions. Thus, one can safely conclude that $N\bar{N} \to n\pi$ in general.
Indeed, it is observed that $N\bar{N}$ annihilation [51, 55] goes mainly into pions. However, the features are rather surprising, even for the nuclear and hadron physicists that worked on the subject from the
1950s to the 1990s. It is found that $N\bar{N} \to n\pi$ goes via a "fireball." The salient features of the annihilation "fireball" are (see Figure 8): (i) size of order $1/m_\pi$; (ii) temperature $T \sim 120$MeV; (iii) average number of emitted
pions $\langle n_\pi\rangle \simeq 5$; (iv) a soft-pion factor modulates the Maxwell-Boltzmann distribution for the pions.
It is worthwhile to elucidate these features a little further. The size means that, when the $N$ and $\bar{N}$ meet to annihilate, destined to shed the $N$ and $\bar{N}$ content, the region of annihilation extends over a region
$\sim 1/m_\pi$. The system seems to thermalize to a temperature of order 120MeV, hence "loses memory" of its origins, and the emitted pions carry momenta that satisfy a thermal distribution. This rapid
thermalization probably takes place due to the rather large $\pi NN$ (as well as $\pi\pi$) coupling, while the soft-pion suppression [56] (satisfied rather well by data; see Figure 12 of [51]) for low pion momentum relative to the
thermal distribution reflects nothing but the Goldstone nature of pion couplings. That is, the pion, as a Goldstone boson, couples derivatively, hence cannot get emitted at zero momentum. This seems to
explain the enhancement factor of 1.3 for the mean kinetic energy beyond the equipartition expectation. Thus, the relatively high $T \sim 120$MeV gives rise to $\langle n_\pi\rangle \simeq 5$, as compared to the maximal allowed number of
pions, $2m_N/m_\pi \approx 13$.
At a more refined level, the individual charged and neutral multiplicities are measured, and somewhat more neutral pions are emitted than charged ones of each sign. Remarkably, the pion multiplicity distribution appears Gaussian. More
specifically [57], a Gaussian form is argued from statistical models [58]. Thus, a Gaussian centered near $n = 5$ gives a good fit to data [51], which is given in the first row of Table 1. Note the minute 2-body fraction, while the pion
multiplicity cuts off above 8.
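For illustration, a Gaussian multiplicity distribution of this kind is trivial to tabulate. The following minimal Python sketch (the mean and width are assumed values chosen to reproduce the gross features described above, not the fitted parameters of [51]) prints the normalized fractions:

```python
import numpy as np

# Fractions of an n-pion Gaussian multiplicity distribution,
#   P(n) ~ exp( -(n - nbar)^2 / (2 s^2) ),
# truncated to the kinematically allowed range and normalized.  For p pbar
# annihilation at rest, nbar ~ 5 with width s ~ 1 reproduces the gross
# features quoted in the text: a minute 2-body fraction and negligible
# weight above n = 8.

def multiplicity_fractions(nbar, s, n_min=2, n_max=13):
    n = np.arange(n_min, n_max + 1)
    w = np.exp(-((n - nbar) ** 2) / (2.0 * s ** 2))
    return n, w / w.sum()

for n, f in zip(*multiplicity_fractions(nbar=5.0, s=1.0)):
    if f > 1e-4:
        print(f"n = {n:2d}   fraction = {f:6.3f}")
```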
This successful "statistical model," which accounts for the gross features of $N\bar{N}$ annihilation, goes back to Fermi [59], who considered a system of noninteracting pions. It has been refined through the years,
and the strong interactions of the pions do play a major role. One final aspect is a focusing of incoming waves by an attractive potential that leads to strong absorption in a smaller region than
originally suggested.
4.2. The $Q\bar{Q}$ Analog
We mean by $Q$ a left-handed chiral doublet (with corresponding right-handed weak singlets) that is degenerate in mass, thereby possessing a heavy isospin symmetry, much like the nucleon $N$. To draw a true
analogy with the $N\bar{N}$ case, the Yukawa coupling $\lambda_Q$ should be of order 13-14, that is, $m_Q$ above 2TeV, in accordance with the findings of the bootstrap gap equation. However, for the sake of phenomenology, we will assume that the
analogous phenomenon already appears for $m_Q \sim 1$TeV; hence, we consider $m_Q = 1$-2TeV.
With the Higgs mechanism already established, the Goldstone boson carries with it an effective length scale $\sim 1/M_W$ (assuming custodial symmetry, we ignore the mild $W$-$Z$ mass difference and the weak gauge couplings compared with $\lambda_Q$), which should
define the size of the annihilation fireball. Compared with $1/m_\pi$ for the $N\bar{N}$ case, here the scale is set by $M_W = \frac{1}{2}gv$, where $g$ is the weak gauge coupling. Thus, the size of the fireball is controlled by parameters such as $g$ or $M_W$ that are unrelated to
the underlying annihilation dynamics.
The fireball temperature is harder to assess. Noting the hadronic analogy, likely $T$ is of order $T_{\rm EW}$, where $T_{\rm EW}$ is the electroweak transition temperature. By this analogy, however, one notes that the hadronic fireball temperature arises from the detailed underlying
theory for hadron phenomena (which includes the pions), QCD. Even if EWSB arises from strong Yukawa coupling [8, 9], we do not yet have an underlying theory for $\lambda_Q$ itself. Thus, we do not have a good handle on
the fireball temperature, except that it is at the 100GeV scale, of order $T_{\rm EW}$.
The Goldstone factor should still modulate the thermal distribution. But because of the smallness of $M_V$ compared with the temperature scale, the modulation is considerably milder than in the $N\bar{N}$ case, and so the mean energy should be closer to equipartition.
We shall therefore take a nominal temperature, (55), which can be interpreted as either a lower temperature with the 1.3 enhancement factor (here 1.3 corresponds to the $N\bar{N}$ case) or a higher temperature without it. The latter would give 170GeV, which is not so different from (55). We stress, however, that the
fireball temperature could be 1.5, or even twice, as high and should be determined eventually by experiment.
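As an orientation for the numbers used below (the thermal-average form is our assumption), the mean multiplicity follows from energy conservation:
\[
\langle n_V \rangle \;\simeq\; \frac{2\,m_Q}{\langle E_V\rangle}, \qquad \langle E_V\rangle \;\sim\; \frac{3}{2}\,T + M_V,
\]
so that a temperature of order 150GeV gives $\langle E_V\rangle \sim 300$GeV and $\langle n_V\rangle \approx 7$ (13) for $m_Q = 1$ (2)TeV.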
Assuming (55) but without applying the 1.3 enhancement factor over equipartition (as was done for the $N\bar{N}$ case), we take a mean emitted energy $\langle E_V\rangle \simeq 300$GeV; hence, $\langle n_V\rangle \simeq 2m_Q/\langle E_V\rangle$. For $m_Q = 1$ (2)TeV, this corresponds to $\langle n_V\rangle \simeq 6.67$ (13.3), where
we artificially keep three digits of significance for generating a "realistic" multiplicity distribution. Assuming (53) and (54), we obtain the multiplicity distributions of Table 1 for $m_Q = 1$ and 2TeV. We note that
a higher fireball temperature would result in lower average multiplicity and a narrower distribution (controlled by the Gaussian width).
We illustrate the process in Figure 8 (gluon emission is discussed later) and tabulate the multiplicity distributions in Table 1. For $m_Q = 1$TeV (second row of Table 1), about 90% of annihilations
go into 5–8 prongs of $V_L$. Several $V_L$s should be considerably above 300GeV momentum, while 4-prong events (at 6%) are in general composed of $V_L$s with momentum ~500GeV. Therefore, $V$-tagged "fat" jets
should become a useful tool for identifying these multi-$V$ events. For $m_Q = 2$TeV (third row of Table 1), again over 90% of annihilations go into 10–15 prong $V_L$s, which is a rather large number. For
9–12 prong events (at ~50%), a significant number of $V_L$s would have momentum above 400GeV, while for higher multiplicity, many should still carry momentum higher than the mean, (56). These high-
multiplicity events would be a possible hallmark of heavy $Q\bar{Q}$ production.
4.3. LHC Prospects
At the moment, conventional wisdom has turned against searching for fourth generation quarks per se, because of the SM-like Higgs signal, even though a dilaton interpretation is still possible. Our
discussion, however, points towards a possible new type of signature.
If our analogy with $N\bar{N}$ annihilation is already realized for $m_Q \sim 1$TeV, then even at the 8TeV running of the LHC, where of order 20fb$^{-1}$ of data is expected in 2012 for both ATLAS and CMS, one could already get a hint. The
cross-section is of order a couple of fb; so, one might observe some number of events with 4 or more $V$-tagged jets, with additional jet multiplicity that is less well $V$-tagged. The competing modes would be
regular $Q\bar{Q}$ production, followed by "free quark decay," for example (assuming the heavy-isospin mass ordering), the charged-current decays to the third generation [60]; we shall assume CKM hierarchy for simplicity; hence, transitions to the lighter generations are ignored. We see that, for free quark decay, the $V$-jet
multiplicity is lower, associated with isolated high-momentum $b$-jets, and practically no $Z$-jets. In contrast, $Q\bar{Q} \to nV_L$ does not have isolated $b$-jets ($b$-jets would come in pairs at a lower fraction, to form a $Z$), and the
$V$-jet multiplicity is higher and tends to include $Z$-jets (the analogy with $N\bar{N}$ suggests a slight excess of $Z$s over (half the) $W$s; however, the $Z$ mass is heavier than the $W$, and so this point is to be determined by
experiment). We expect that the fireball process would dominate over the free quark decay process, as we will argue shortly.
We have demonstrated in Section 3 that if the heavy chiral quarks themselves are responsible [9] for dynamical EWSB, then $m_Q$ of 2TeV or more is required [8]. Our earlier analogy with the $\pi NN$ "Yukawa" coupling suggests
$m_Q \sim 2.3$TeV. If so, the prospects for the 2012 LHC run at 8TeV are not good, and one would have to wait for the 13-14TeV run, expected by late 2014. Running the HATHOR code [61] for $Q\bar{Q}$ production, we
estimate the 14TeV cross sections to be of order 50–60fb for $m_Q = 1$TeV, dropping to ~3fb for 2TeV, and 0.2-0.3fb for 3TeV. Beyond 3 to 4TeV, one quickly runs out of parton luminosity. Note that
$q\bar{q}$-initiated production dominates over $gg$-initiated production, as the valence quark supplies the needed large parton momentum fraction.
From the cross section and expected LHC luminosities, for $m_Q$ up to ~2TeV, again we do not foresee a problem for discovery. Note that, assuming heavy isospin symmetry, that is, near degeneracy of $t'$ and $b'$: (i) the
doublet-internal decay is suppressed by both phase space and small Goldstone momentum; (ii) decays to the third generation are suppressed by the relevant CKM element, and with no sign of New Physics in $b \to s$ transitions and $B$ meson mixing, one expects [62, 63] such CKM elements to be less
than 0.1.
In contrast, once the hard collision pulls the heavy quark pair out of the vacuum, the pair "sees" an annihilation cross section of geometric size, which is large at the relevant scale. With $Q\bar{Q}$ production, there is Yukawa attraction [24] between $Q$ and $\bar{Q}$ that mimics
the focusing attraction for $N\bar{N}$. Thus, there is a good likelihood that annihilation, that is, $Q\bar{Q} \to nV_L$, would dominate over free quark decay.
We comment that the produced $Q\bar{Q}$ pair is likely in a color-octet state; hence, it would need to shed color. However, gluons have no way to sense the 100GeV (or higher) temperature of the fireball, which is of electroweak
nature. Instead, the heaviness of $Q$ means gluon radiation is suppressed (heavy quark symmetry). We illustrate gluon radiation in Figure 8 but expect the associated gluon-jet to be soft, so that it does not
provide a discriminant.
In case $m_Q$ sits at the high end of the range, one quickly runs out of parton luminosities (higher collision energy would be preferred!); hence, one would need high luminosity running of the LHC at 14TeV. However, the situation need not be so
pessimistic; the very large Yukawa coupling suggests the existence of bound states below $2m_Q$. For example, as discussed in [24], there is likely an isosinglet, color-octet vector resonance that can be produced
via $q\bar{q}$ annihilation. How it decays would depend on more details of the bound state spectrum and properties. The beauty of our analogy with $N\bar{N}$ annihilation is precisely the thermal nature of this fireball process [51, 55],
with little "remembrance," either of the initial state or of detailed resonances in the spectrum. Thus, we make no assertion on decay properties here, except that it offers hope for an
enhanced production cross section.
If the decay of this resonance is analogous to the fireball picture, then by its resonant production nature, there is good hope for earlier discovery. If it decays through similar chains as discussed in [24],
then it might lead to the discovery of several resonances. The study of [24] was done with 500GeV < $m_Q$ < 700GeV in mind, to avoid issues of bound-state collapse [9, 23]. But since this region is now
close to being ruled out, a numerical update, in particular also on obtaining the spectrum, is certainly called for. This, however, requires nonperturbative solutions for strong Yukawa coupling.
An offshoot study of [24] provides an interesting contrast. If free quark decay is suppressed by very small CKM elements, and some kinematic selections are operative, it is argued that the vector resonance decays to an isotriplet,
color-octet Yukawa-bound "meson" plus a $V_L$, followed by the latter's decay to an energetic gluon plus a transverse vector boson. The upshot is a signature of two vector bosons plus a hard gluon. This is an exception to our fireball discussion, in that it is effectively 2-body in vector
bosons, one of which is transverse; the gluon is very energetic. If reconstructed [64], one could find two resonances simultaneously. These signatures arise from special conditions that are
unlikely to hold in general. Furthermore, [24] did not consider fireball-like decays of $Q\bar{Q}$ annihilation by multi-Goldstone radiation.
Multiple weak boson production has been considered below the $Q\bar{Q}$ threshold [65]. There is a rise of high multiplicities as one starts to approach the threshold for high $m_Q$. But that would be out of the range of
validity for the amplitude, computed via a virtual quark loop. We remark that our multi-$V_L$ signature is in principle quite distinct from micro-blackhole production [66]. Micro-blackholes in essence emit all
types of particles democratically. In contrast, our fireball is heated in the electroweak sense and prefers emitting by far the strongly coupled weak Goldstone bosons $V_L$. However, since searches so far
are based only on the simplified signature of high jet multiplicities, a refined search is needed to separate micro-blackholes from fireballs.
We give a final remark on scattering. If a very heavy chiral quark doublet exists above the TeV scale, it would not be easily compatible with a light Higgs boson, because of the very large quadratic corrections to the light Higgs mass, as we have already remarked. One would have to check whether the 125 GeV boson is more consistent with a SM Higgs boson, or with something like a dilaton. If the latter emerges, a corollary of our argument would then suggest that the traditional scattering study for the heavy or strongly coupled Higgs case may be the wrong place to search for New Physics enhancement. Instead, one should again watch out for scattering into high(er) multiplicities of longitudinal weak bosons. In general, one should treat the unitarity bound violation (UBV) in boson-boson and heavy-quark scattering as one single problem.
5. Discussion and Conclusion
Although the numbers arose from dynamical considerations (for the sake of EWSB) rather than from constraints, from a phenomenological standpoint, our numerical value in (49), and even the 2 TeV value in (48), seems depressingly high. At the same time, these masses are for the “no Higgs” case, that is, the case where the Higgs boson is so heavy that its contribution in the loop is subdominant, and we assert that the role of the Higgs mechanism can be taken over dynamically by effective condensation. But a 125 GeV boson has appeared at the LHC. Can a heavy chiral quark doublet still be viable? We offer a few remarks, first on how the mass might be lowered, and then on the 125 GeV Higgs boson issue.
In the spirit of [9] and [24], the Goldstone boson is the lowest, or most tightly bound, state formed through the Yukawa coupling itself. There should exist Yukawa-bound resonances above this isotriplet, color-singlet, pseudoscalar state. The leading ones should be the isotriplet pseudoscalar and the isosinglet vector, both of which are color-octet, and the isosinglet, color-singlet vector “mesons.” We have not yet solved the strongly coupled bound-state problem; so we do not yet know the actual spectrum (i.e., how tightly they are bound below threshold), nor the “decay constants,” that is, how they couple to the heavy quarks. But the couplings should be rather strong. The point is, as one integrates the Goldstone loop up to the cutoff, at some point these heavy mesons should also enter and contribute in the same spirit to the quark self-energy.
There is thus some hope that extra, attractive contributions could lower the mass, coming as additional effects of the large Yukawa coupling. But this also illustrates the limits of our bootstrap approach. The momentum integration for these extra contributions starts from the meson mass and runs up to the cutoff, but clearly the meson propagators and the meson-quark vertices would be much more sensitive to the bound-state structure, compared to the Goldstone boson. Even for the Goldstone boson itself, as one approaches the cutoff, its bound-state nature would lead to modifications of its propagator (even if the symmetry remains broken, hence it remains massless) and vertex, which we have ignored in the ladder (or rainbow) approximation.
We have already offered our critique of the work of Hung and Xiong [25] and also showed numerically that the needed mass becomes exorbitantly high if one retains the light Higgs scalar in the loop. A different question is whether our gap equation is actually equivalent to the NJL model. We have already commented that, for the NJL model, the self-energy does not depend on momentum, and one simply cuts the loop momentum off at some scale. In contrast, the wave-function part of our gap equation carries momentum dependence; that is, the Yukawa loop always modifies this factor. We already saw this in scale-invariant QED. For NJL, the cutoff is traded, together with the associated dimensionful coupling constant, for the physical mass and decay constant, although, depending on the cutoff, there is a critical coupling (see (6)). For our case, one cannot take arbitrary values for the cutoff. Instead, we argued that, because the Goldstone boson would become unbound at some scale, the loop momentum has to be cut off below this heuristic, finite value. Further similarities and differences are noted in [9].
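For orientation, the NJL gap equation alluded to above can be sketched as follows (a textbook result quoted here only for comparison, not an equation of this paper; the overall combinatorial factor $\alpha$ depends on conventions, e.g., $8N_c$ in a common normalization):
\[ m \;=\; \alpha\, G\, m \int^{\Lambda}\!\frac{d^4 p_E}{(2\pi)^4}\,\frac{1}{p_E^2+m^2} \;=\; \frac{\alpha\, G\, m}{16\pi^2}\left[\Lambda^2 - m^2\ln\!\left(1+\frac{\Lambda^2}{m^2}\right)\right], \]
which admits a nontrivial $m \neq 0$ solution only above a critical coupling. The key contrast is that the mass on the left is a momentum-independent constant, whereas in the Yukawa-loop gap equation of the text both the mass function and the wave-function factor carry momentum dependence.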
A fundamental difference from the NJL model may be the implicit postulate that the dimension-zero Yukawa coupling of the Goldstone boson is the experimentally verified one, related to the left-handed vector gauge coupling of massive quarks. If the Goldstone boson is an ultratight bound state, it has turned the effective dimensionality to 1. In this sense, our gap equation may resemble the gauged NJL model [43, 44], in which the dimensionality of the bound state tends to 1 near the critical gauge coupling.
Our approach is also conceptually different from those descended from the top condensation model [67, 68], in which the gauged NJL model is applied to EWSB. The self-energy in the top condensation model incorporates both our Figure 4 (loop with a four-quark operator) and Figure 5 (but with a vector boson in the loop). The gap equation with the four-quark operator is equivalent to the minimization of the linear sigma model with a compositeness condition, such that the Yukawa and Higgs quartic couplings blow up at the composite scale. Naively speaking, our gap equation of Figure 6 corresponds to the linear sigma model with large Yukawa coupling. One can therefore read off the schematic correspondence between top condensation and our approach by replacing the gauge coupling with the Yukawa coupling, and the four-quark operator loop by the possible heavy bound-state loop mentioned earlier. In the top condensation model, the four-quark interaction generates the large Yukawa coupling; hence, it is clearly rather different from our approach.
We were intrigued to find that the physically measured pion-nucleon coupling (extracted via one-pion exchange in Born approximation) is consistent with, and rather similar in value to, our finding in (46). This offers a totally separate argument that the quark mass is in fact above 2 TeV. Having made this analogy between the pion-nucleon and Goldstone-quark couplings, it is then interesting to ask whether the pion could really have been a nucleon-antinucleon bound state, as originally conjectured by Fermi and Yang [69], through our gap equation. However, developments in hadron physics subsequent to the Fermi-Yang conjecture relatively quickly gave rise to meson states in the 500 to 800 MeV range (and corresponding baryon resonances), eventually exploding in the 1 to 2 GeV range, that is, below twice the nucleon mass. Thus, our gap equation does not apply to the pion and nucleon system. Put simply, in such a gap equation, one cannot integrate the pion loop up to twice the nucleon mass; new phenomena emerged when the loop momentum reached the meson resonance region, which is quite below that scale. We know that the pion and the nucleon are QCD bound states, with mesons formed by string breaking, and so the pion is not an ultratight bound state of a nucleon and an antinucleon. If the ultratight bound-state picture for the Goldstone could be realized according to our bootstrap gap equation, it would strengthen our reasoning [9] that the underlying theory for Yukawa couplings cannot be a simple mock-up of QCD.
We do not have new insight into bound-state phenomena, other than what is already discussed in [24]. Unfortunately, this reference was very conservative and did not discuss masses above the range quoted earlier. The reason to keep this bound is that, above this value, the Bethe-Salpeter (BS) equation approach tends to have collapsed states. But this was in turn the foundation for the postulate in [9] that the leading collapsed state is precisely the Goldstone boson, which in turn motivated the bootstrap gap equation study. We do not know at present whether our gap equation, with its numerical solution [8], could shed light on the bound-state spectra, but the SD equation itself is “higher level” than the BS equation. If the BS equation Yukawa bound-state approach can be a guide for masses as high as 2 TeV, the noteworthy point is that the leading bound states are rather distinct from the corresponding states of technicolor (TC), which after all is an extension of QCD. Note that these bound states emerge from strong Yukawa coupling, even though we have not offered a theory of Yukawa couplings (nor did we touch on the flavor aspect of Yukawa couplings). We mention in passing that our gap equation can be easily extended to finite temperature, allowing one to potentially explore issues related to the electroweak phase transition, a direction that we would take up in subsequent work.
We turn now to the difficult question of how to deal with the 125 GeV boson that has emerged [1, 2] at the LHC. It is rather likely that this is the long-awaited SM Higgs boson. There are two issues against the existence of a 4th generation chiral doublet if one has a 125 GeV Higgs boson, as compared with having the Higgs boson above 600 GeV or so (the original premise, or working assumption, in the formulation of the bootstrap gap equation). The first issue is the expected enhancement of gluon-fusion production by roughly an order of magnitude, due to the 4th generation quarks seen by the gluons in the loop [70]; but the data [1, 2] seem not inconsistent with SM expectations. A second issue: if the quark mass [14–20] is 4 times the top mass, what keeps the Higgs boson light? The top quark contribution to the Higgs mass quadratic correction is already a major concern. An effect from the fourth generation quarks that is an order of magnitude stronger seems rather difficult to tame. From a theoretical standpoint, the second problem is more serious.
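To put a rough number on the second issue, we quote a standard one-loop estimate (for orientation only, not a result of this paper; O(1) factors depend on conventions and on how the cutoff is imposed): a heavy quark with Yukawa coupling $y$ contributes
\[ \delta m_H^2 \;\sim\; -\,\frac{N_c\, y^2}{8\pi^2}\,\Lambda^2 \]
to the Higgs mass-squared. With $y \gtrsim 4\pi$, as contemplated here, this exceeds the top quark contribution ($y_t \simeq 1$) by a factor of order $10^2$ per quark for the same cutoff $\Lambda$, which is why keeping a 125 GeV scalar light is so difficult.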
It is possible to extend the Higgs sector, for example, to two Higgs doublets, where one neutral Higgs boson could be light [71–73], and thereby accommodate the Higgs data. Such phenomenological accommodation (where the Higgs fields could be composite), however, does not help our cause. Having two Higgs doublets introduces several more parameters, which we did not consider in the formulation of our gap equation. Furthermore, if these Higgs scalars couple with some modulated Yukawa coupling, our gap equations suggest that they raise the 4th generation quark mass (though the pseudoscalars may help). Conversely, these models would still have to face the question of how to keep a neutral Higgs scalar light in the presence of strong Yukawa couplings. Quintessentially, however, our bootstrap gap equation is against the concept of an elementary Higgs field. In any case, having more than one Higgs doublet invalidates our gap equation formulation and would take us out of scope.
There has been discussion that a light Higgs boson could be a pseudo-Goldstone Higgs (PGH) of some underlying strong dynamics [74]. In these models, the PGH still has SM-like Higgs boson couplings. This would not suffice for our gap equation, since we have shown that the presence of a light Higgs, whether a PGH or not, would drive the quark mass too high for comfort. Such a large mass, as implied by (47), would create its own “hierarchy” problem. We are therefore left with the dilaton option.
We find it remarkable that, within the strong dynamical EWSB setting, the dilaton is a recurring issue, and, if not fortuitous, that (as mentioned in the Introduction) the enhanced mode for the Higgs-like signal at the LHC permits a dilaton interpretation [5–7]. In our case, the gap equation is scale-invariant, which we in fact used in (42) towards finding our numerical solution. Although we made no attempt at the UV theory, it seems that our bootstrap gap equation permits a connection with an underlying scale-invariant or conformal theory (the AdS-CFT or holographic link; see [6] and references therein). It should be noted that our goal was dynamical EWSB, that is, generating the vacuum expectation value. Since the Yukawa relation between mass and v.e.v. was tacitly used throughout, mass generation is equivalent to v.e.v. generation, hence to breaking the scale invariance of the gap equation. But since we did not attempt any theory of Yukawa couplings, the actual source of scale-invariance violation is left to the theory of Yukawa couplings. It should also be noted that if the 125 GeV object is a dilaton, its couplings are modified by $v/f$, where $f$ is the dilaton decay constant (hence, VBF and VH production processes would be suppressed). Taking the bound that $v/f$ is below unity (probably considerably smaller), the dilaton contribution in our gap equation would be self-consistently subdominant.
In conclusion, the ever-increasing mass bound on heavy sequential chiral quarks stimulates the question whether strong Yukawa coupling itself can be the source of electroweak symmetry breaking. A dynamical gap equation is argued for, treating the Goldstone boson as massless inside the loop, coupling to the chiral quarks with the usual Yukawa couplings. By some analogy with scale-invariant QED, numerical solutions are found, such that dynamical EWSB is demonstrated. The resulting quark mass seems to be in the 2–3 TeV range. We favor the heavy or no-Higgs scenario, since the quark mass would become exorbitantly high if a light Higgs boson is kept in the loop. Despite this rather high value, the LHC might still shed light on it. By analogy with observed fireball-like nucleon-antinucleon annihilation, we conjecture that such very heavy chiral quarks, which induce EWSB by their very large (≳4π) Yukawa coupling, may annihilate into weak Goldstone bosons with high multiplicity. Even with a 2–3 TeV quark mass, discovery may be aided by Yukawa-bound resonances, such as the color-octet (heavy) isosinglet vector meson, if it undergoes similar annihilation. With a 125 GeV boson having emerged at the LHC, to reconcile it with a very heavy chiral quark doublet, that is, a 4th generation, it must be a dilaton [5–7] of scale-invariance breaking.
Acknowledgments
The author thanks J. Alwall, J.-W. Chen, K.-F. Chen, H.-C. Cheng, T.-W. Chiu, T. Enkhbat, P. Q. Hung, K. Jensen, Y. Kikukawa, E. Klempt, H. Kohyama, T. Kugo, C. N. Leung, C.-J. D. Lin, F. J. Llanes-Estrada, M. Piai, and H. Yokoya for discussions, and Y. Mimura and H. Kohyama for collaboration. This work is supported by NSC 100-2745-M-002-002-ASP and various NTU grants under the MOE Excellence program.
References
1. G. Aad, T. Abajyan, B. Abbott, et al., “Observation of a new particle in the search for the standard model Higgs boson with the ATLAS detector at the LHC,” Physics Letters B, vol. 716, no. 1, pp. 1–29, 2012.
2. S. Chatrchyan, V. Khachatryan, A. M. Sirunyan, et al., “Observation of a new boson at a mass of 125 GeV with the CMS experiment at the LHC,” Physics Letters B, vol. 716, no. 1, pp. 30–61, 2012.
3. S. Chatrchyan, V. Khachatryan, A. M. Sirunyan, et al., “Combined results of searches for the standard model Higgs boson in pp collisions at $\sqrt{s}=7$ TeV,” Physics Letters B, vol. 710, no. 1, pp. 26–48, 2012.
4. G. Aad, B. Abbott, J. Abdallah, et al., “Combined search for the Standard Model Higgs boson using up to 4.9 fb$^{-1}$ of pp collision data at $\sqrt{s}=7$ TeV with the ATLAS detector at the LHC,” Physics Letters B, vol. 710, no. 1, pp. 49–66, 2012.
5. S. Matsuzaki and K. Yamawaki, “Is 125 GeV techni-dilaton found at LHC?” Physics Letters B, vol. 719, no. 4-5, pp. 378–382, 2013.
6. D. Elander and M. Piai, “The decay constant of the holographic techni-dilaton and the 125 GeV boson,” http://arxiv.org/abs/1208.0546.
7. Z. Chacko, R. Franceschini, and R. K. Mishra, “Resonance at 125 GeV: Higgs or dilaton/radion?” http://arxiv.org/abs/1209.3259.
8. Y. Mimura, W. S. Hou, and H. Kohyama, “Bootstrap dynamical symmetry breaking with new heavy chiral quarks,” http://arxiv.org/abs/1206.6063.
9. W. S. Hou, “Some unfinished thoughts on strong Yukawa couplings,” Chinese Journal of Physics, vol. 50, p. 375, 2012, http://arxiv.org/abs/1201.6029.
10. B. Holdom, W. S. Hou, T. Hurth, M. L. Mangano, S. Sultansoy, and G. Ünel, “Four statements about the fourth generation,” PMC Physics A, vol. 3, article 4, 2009.
11. S. Chatrchyan, V. Khachatryan, A. M. Sirunyan, et al., “Search for a Higgs boson decaying into a b-quark pair and produced in association with b quarks in proton-proton collisions at 7 TeV,” http:
12. W. S. Hou, “Source of CP violation for Baryon asymmetry of the universe,” Chinese Journal of Physics, vol. 47, p. 134, 2009, http://arxiv.org/abs/0803.1234.
13. M. Kobayashi and T. Maskawa, “CP-violation in the renormalizable theory of weak interaction,” Progress of Theoretical Physics, vol. 49, no. 2, pp. 652–657, 1973.
14. G. Aad, T. Abajyan, B. Abbott, et al., “Search for pair production of heavy top-like quarks decaying to a high-$p_T$ $W$ boson and a b quark in the lepton plus jets final state at $\sqrt{s}=7$ TeV with the ATLAS detector,” Physics Letters B, vol. 718, no. 4-5, pp. 1284–1302, 2013.
15. ATLAS Collaboration conference note, ATLAS-CONF-2012-130.
16. S. Chatrchyan, V. Khachatryan, A. M. Sirunyan, et al., “Search for pair produced fourth-generation up-type quarks in pp collisions at $\sqrt{s}=7$ TeV with a lepton in the final state,” Physics Letters B, vol. 718, no. 2, pp. 307–328, 2012.
17. S. Chatrchyan, V. Khachatryan, A. M. Sirunyan, et al., “Search for heavy bottom-like quarks in 4.9 inverse femtobarns of pp collisions at $\sqrt{s}=7$ TeV,” Journal of High Energy Physics, vol. 2012, article 123, 2012.
18. S. Chatrchyan, V. Khachatryan, A. M. Sirunyan, et al., “Combined search for the quarks of a sequential fourth generation,” Physical Review D, vol. 86, no. 11, Article ID 112003, 20 pages, 2012.
19. S. Chatrchyan, V. Khachatryan, A. M. Sirunyan, et al., “Search for heavy quarks decaying into a top quark and a W or Z boson using lepton+jets events in pp collisions at $\sqrt{s}=7$ TeV,” Journal of High Energy Physics, vol. 2013, article 154, 2013.
20. “Search for a heavy partner of the top quark with charge 5/3,” CMS-PAS-B2G-12-003.
21. M. S. Chanowitz, M. A. Furman, and I. Hinchliffe, “Weak interactions of ultra heavy fermions,” Physics Letters B, vol. 78, no. 2-3, pp. 285–289, 1978.
22. B. Holdom, “The discovery of the fourth family at the LHC: what if?” Journal of High Energy Physics, vol. 2006, article 76, 2006.
23. P. Jain, D. W. McKay, A. J. Sommerer, J. R. Spence, J. P. Vary, and B. L. Young, “Isospin multiplet structure in ultraheavy fermion bound states,” Physical Review D, vol. 49, no. 5, pp. 2514–2524, 1994.
24. T. Enkhbat, W. S. Hou, and H. Yokoya, “Early LHC phenomenology of Yukawa-bound heavy $Q\overline{Q}$ mesons,” Physical Review D, vol. 84, no. 9, Article ID 094013, 14 pages, 2011.
25. P. Q. Hung and C. Xiong, “Dynamical electroweak symmetry breaking with a heavy fourth generation,” Nuclear Physics B, vol. 848, no. 2, pp. 288–302, 2011.
26. W. S. Hou and R. G. Stuart, “Possibility of discovering the next charge −1/3 quark through its flavor-changing neutral-current decays,” Physical Review Letters, vol. 62, no. 6, pp. 617–620, 1989.
27. W. S. Hou and R. G. Stuart, “Flavor changing neutral currents involving heavy fermions: a general survey,” Nuclear Physics B, vol. 320, no. 2, pp. 277–309, 1989.
28. J. Beringer, J. F. Arguin, R. M. Barnett, et al., “Review of particle physics,” Physical Review D, vol. 86, no. 1, Article ID 010001, 1528 pages, 2012.
29. G. 't Hooft and M. J. G. Veltman, “Regularization and renormalization of gauge fields,” Nuclear Physics B, vol. 44, no. 1, pp. 189–213, 1972.
30. G. 't Hooft, “Renormalizable Lagrangians for massive Yang-Mills fields,” Nuclear Physics B, vol. 35, no. 1, pp. 167–188, 1971.
31. G. W. S. Hou, “A brief (p)review on a possible fourth generation world to come,” in Proceedings of the 35th International Conference of High Energy Physics (PoS ICHEP '10), vol. 244, Paris, France, July 2010, http://pos.sissa.it//archive/conferences/120/244/ICHEP%202010_244.pdf.
32. G. D. Kribs, T. Plehn, M. Spannowsky, and T. M. P. Tait, “Four generations and Higgs physics,” Physical Review D, vol. 76, no. 7, Article ID 075016, 11 pages, 2007.
33. K. Ishiwata and M. B. Wise, “Fourth generation bound states,” Physical Review D, vol. 83, no. 7, Article ID 074015, 8 pages, 2011.
34. T. Kugo, “Dynamical instability of the vacuum in the Lagrangian formalism of the Bethe-Salpeter bound states,” Physics Letters B, vol. 76, no. 5, pp. 625–630, 1978.
35. P. Q. Hung and C. Xiong, “Implication of a quasi fixed point with a heavy fourth generation: the emergence of a TeV-scale physical cutoff,” Physics Letters B, vol. 694, no. 4-5, pp. 430–434, 2011.
36. P. Q. Hung and C. Xiong, “Renormalization group fixed point with a fourth generation: Higgs-induced bound states and condensates,” Nuclear Physics B, vol. 847, no. 1, pp. 160–178, 2011.
37. Y. Nambu and G. Jona-Lasinio, “Dynamical model of elementary particles based on an analogy with superconductivity. I,” Physical Review, vol. 122, no. 1, pp. 345–358, 1961.
38. Y. Nambu, “Quasi-particles and gauge invariance in the theory of superconductivity,” Physical Review, vol. 117, no. 3, pp. 648–663, 1960.
39. J. Goldstone, “Field theories with «superconductor» solutions,” Il Nuovo Cimento, vol. 19, no. 1, pp. 154–164, 1961.
40. J. Goldstone, A. Salam, and S. Weinberg, “Broken symmetries,” Physical Review, vol. 127, no. 3, pp. 965–970, 1962.
41. R. Fukuda and T. Kugo, “Schwinger-Dyson equation for massless vector theory and the absence of a fermion pole,” Nuclear Physics B, vol. 117, no. 1, pp. 250–264, 1976.
42. V. A. Miransky, “Dynamics of spontaneous chiral symmetry breaking and the continuum limit in quantum electrodynamics,” Il Nuovo Cimento A, vol. 90, no. 2, pp. 149–170, 1985.
43. W. A. Bardeen, C. N. Leung, and S. T. Love, “Dilaton and chiral-symmetry breaking,” Physical Review Letters, vol. 56, no. 12, pp. 1230–1233, 1986.
44. K. I. Kondo, H. Mino, and K. Yamawaki, “Critical line and dilaton in scale-invariant QED,” Physical Review D, vol. 39, no. 8, pp. 2430–2433, 1989.
45. V. Barger, M. Ishida, and W. Y. Keung, “Differentiating the Higgs boson from the dilaton and radion at hadron colliders,” Physical Review Letters, vol. 108, no. 10, Article ID 101802, 5 pages, 2012.
46. V. Barger, M. Ishida, and W. Y. Keung, “Dilaton at the LHC,” Physical Review D, vol. 85, no. 1, Article ID 015024, 4 pages, 2012.
47. B. Coleppa, T. Gregoire, and H. E. Logan, “Dilaton constraints and LHC prospects,” Physical Review D, vol. 85, no. 5, Article ID 055001, 12 pages, 2012.
48. S. Matsuzaki and K. Yamawaki, “Techni-dilaton signatures at LHC,” Progress of Theoretical Physics, vol. 127, no. 2, pp. 209–228, 2012.
49. S. Matsuzaki and K. Yamawaki, “Techni-dilaton at 125 GeV,” Physical Review D, vol. 85, no. 9, Article ID 095020, 5 pages, 2012.
50. H. Pagels and S. Stokar, “Pion decay constant, electromagnetic form factor, and quark electromagnetic self-energy in quantum chromodynamics,” Physical Review D, vol. 20, no. 11, pp. 2947–2952, 1979.
51. E. Klempt, C. Batty, and J. M. Richard, “The antinucleon-nucleon interaction at low energy: annihilation dynamics,” Physics Reports, vol. 413, no. 4-5, pp. 197–317, 2005.
52. W. S. Hou, “Searching for new heavy chiral quark pairs via their annihilation to multiple vector bosons,” Physical Review D, vol. 86, no. 3, Article ID 037701, 5 pages, 2012.
53. M. M. Pavan, R. A. Arndt, I. I. Strakovsky, and R. L. Workman, “Determination of the πNN coupling constant in the VPI/GWU $\pi N \to \pi N$ partial-wave and dispersion relation analysis,” Physica Scripta, vol. 2000, article 65, 2000.
54. T. E. O. Ericson, B. Loiseau, and A. W. Thomas, “Determination of the pion-nucleon coupling constant and scattering lengths,” Physical Review C, vol. 66, no. 1, Article ID 014005, 19 pages, 2002.
55. C. B. Dover, T. Gutsche, M. Maruyama, and A. Faessler, “The physics of nucleon-antinucleon annihilation,” Progress in Particle and Nuclear Physics, vol. 29, pp. 87–173, 1992.
56. S. J. Orfanidis and V. Rittenberg, “Soft-pion limits of the inclusive pion distributions in $\overline{N}N \to \pi^{\pm}$ + anything at rest,” Nuclear Physics B, vol. 56, no. 2, pp. 561–564, 1973.
57. S. J. Orfanidis and V. Rittenberg, “Nucleon-antinucleon annihilation into pions,” Nuclear Physics B, vol. 59, no. 2, pp. 570–582, 1973.
58. A. Jabs, “Lorentz-invariant and non-invariant momentum space and thermodynamics,” Nuclear Physics B, vol. 34, no. 1, pp. 177–188, 1971.
59. E. Fermi, “High energy nuclear events,” Progress of Theoretical Physics, vol. 5, no. 4, pp. 570–583, 1950.
60. A. Arhrib and W. S. Hou, “Flavor changing neutral currents involving heavy quarks with four generations,” Journal of High Energy Physics, vol. 2006, article 9, 2006.
61. M. Aliev, H. Lacker, U. Langenfeld, S. Moch, P. Uwer, and M. Wiedermann, “HATHOR—HAdronic Top and Heavy quarks crOss section calculatoR,” Computer Physics Communications, vol. 182, no. 4, pp. 1034–1046, 2011.
62. W. S. Hou, M. Kohda, and F. Xu, “Measuring the fourth-generation $b\to s$ quadrangle at the LHC,” Physical Review D, vol. 84, no. 9, Article ID 094027, 7 pages, 2011.
63. W. S. Hou, M. Kohda, and F. Xu, “Hints for a low $B_s \to \mu^+ \mu^-$ rate and the fourth generation,” Physical Review D, vol. 85, no. 9, Article ID 097502, 5 pages, 2012.
64. J. Alwall, T. Enkhbat, W. S. Hou, and H. Yokoya, “Doubly resonant WW plus jet signatures at the LHC,” Physical Review D, vol. 86, no. 7, Article ID 074029, 8 pages, 2012.
65. K. Hagiwara and H. Murayama, “Multiple weak-boson production via gluon fusion,” Physical Review D, vol. 41, no. 3, pp. 1001–1004, 1990.
66. S. Chatrchyan, V. Khachatryan, A. M. Sirunyan, et al., “Search for microscopic black holes in pp collisions at $\sqrt{s}=7$ TeV,” Journal of High Energy Physics, vol. 2012, article 61, 2012.
67. V. A. Miransky, M. Tanabashi, and K. Yamawaki, “Dynamical electroweak symmetry breaking with large anomalous dimension and t quark condensate,” Physics Letters B, vol. 221, no. 2, pp. 177–183, 1989.
68. W. A. Bardeen, C. T. Hill, and M. Lindner, “Minimal dynamical symmetry breaking of the standard model,” Physical Review D, vol. 41, no. 5, pp. 1647–1660, 1990.
69. E. Fermi and C. N. Yang, “Are mesons elementary particles?” Physical Review, vol. 76, no. 12, pp. 1739–1743, 1949.
70. A. Djouadi and A. Lenz, “Sealing the fate of a fourth generation of fermions,” Physics Letters B, vol. 715, no. 4-5, pp. 310–314, 2012.
71. S. Bar-Shalom, S. Nandi, and A. Soni, “Two Higgs doublets with fourth-generation fermions: models for TeV-scale compositeness,” Physical Review D, vol. 84, no. 5, Article ID 053009, 24 pages, 2011.
72. X. G. He and G. Valencia, “An extended scalar sector to address the tension between a fourth generation and Higgs searches at the LHC,” Physics Letters B, vol. 707, no. 3-4, pp. 381–384, 2012.
73. S. Bar-Shalom, M. Geller, S. Nandi, and A. Soni, “Two Higgs doublets, a 4th generation and a 125 GeV Higgs: a review,” http://arxiv.org/abs/1208.3195.
74. G. F. Giudice, C. Grojean, A. Pomarol, and R. Rattazzi, “The strongly-interacting light Higgs,” Journal of High Energy Physics, vol. 2007, article 45, 2007.
|
{"url":"http://www.hindawi.com/journals/ahep/2013/650617/","timestamp":"2014-04-17T17:32:09Z","content_type":null,"content_length":"928469","record_id":"<urn:uuid:6af9c3e5-59d0-45c8-abe6-71e179fff5c1>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00159-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Question on Logs
November 29th 2009, 08:30 AM
Question on Logs
When I try and solve the following I get the answer as log (x^-7/12)
1/3log(x^2) + log x - 3/4 log (x^3)
In the book it says log (x^1/12)
November 29th 2009, 08:33 AM
Note that, by the property $n \log X = \log X^n$, you have $\log x^{2/3} + \log x + \log x^{-9/4}$. Now use the property $\log X + \log Y = \log XY$
November 29th 2009, 08:38 AM
I agree with you ...
$\frac{1}{3}\log(x^2) + \log{x} - \frac{3}{4}\log(x^3)$
$\log(x^{\frac{2}{3}}) + \log{x} - \log(x^{\frac{9}{4}})$
$\log\left(\frac{x^{\frac{2}{3}} \cdot x}{x^{\frac{9}{4}}}\right)$
$\log\left(\frac{x^{\frac{5}{3}}}{x^{\frac{9}{4}}}\right)$
$\log\left(\frac{x^{\frac{20}{12}}}{x^{\frac{27}{12}}}\right)$
$\log\left(x^{-\frac{7}{12}}\right)$
... books have been wrong before.
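(A quick check added here, not part of the original thread: collecting the exponents directly gives $\frac{1}{3}\cdot 2 + 1 - \frac{3}{4}\cdot 3 = \frac{2}{3} + 1 - \frac{9}{4} = \frac{8 + 12 - 27}{12} = -\frac{7}{12}$, which confirms $\log\left(x^{-\frac{7}{12}}\right)$.)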
December 1st 2009, 03:10 PM
Hi all
another method
|
{"url":"http://mathhelpforum.com/algebra/117368-question-logs-print.html","timestamp":"2014-04-19T00:56:40Z","content_type":null,"content_length":"9094","record_id":"<urn:uuid:cd5d6783-9637-4f59-b3e0-80ffba37ef2c>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00438-ip-10-147-4-33.ec2.internal.warc.gz"}
|
It seems the concept of Time as a dimension, analogous to space, is a
limiting, even misleading, factor in physics today. Time is a conceptual
frame of reference convenient for comparing the rate of change of two
objects, trivially:
Rate of delta_A delta_A/delta_t
--------------- = ---------------
Rate of delta_B delta_B/delta_t
That's all Time is, really -- a frame of reference, another way of
stating change or cycles. Look at your wrist watch. It's much easier to
talk in terms of "seconds" than in terms of number of quartz vibrations.
"Hey, Fred, meet me for supper after 2,098,234,555 vibrations."
Prior to 1956, one second was defined as the fraction 1/86,400 of the
mean solar day. From 1956 to 1967, it was the ephemeris second, defined
as the fraction 1/31556925.9747 of the tropical year at 00h 00m 00s 31
December 1899. The second is currently defined as the duration of
9,192,631,770 periods of the radiation corresponding to the transition
between the two hyperfine levels of the ground state of the cesium-133 atom.
The above relation can be very *inconveniently* written as:
Rate of delta_A delta_A/(9,192,631,770 periods of ... cesium-133)
--------------- = -------------------------------------------------
Rate of delta_B delta_B/(9,192,631,770 periods of ... cesium-133)
It should be obvious that *any* constant frame of reference will do:
Rate of delta_A delta_A/delta_C
--------------- = ---------------
Rate of delta_B delta_B/delta_C
By now you're probably screaming at me to make the obvious
Rate of delta_A delta_A
--------------- = -------
Rate of delta_B delta_B
And this works great, locally, but it's darned inconvenient to work
with mathematically on a larger scale.
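To make the cancellation concrete, here is a tiny added illustration (not part of the original post): whatever constant reference delta_C you pick, it drops out of the ratio.

    # the ratio of two rates is independent of the reference "clock" delta_C
    delta_A, delta_B = 3.0, 12.0
    for delta_C in (1.0, 86400.0, 9_192_631_770.0):       # any constant works
        print((delta_A / delta_C) / (delta_B / delta_C))  # always 0.25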
So what's the problem with treating Time as a dimension? No problem, so
long as it's recognized for the invention of convenience that it is. But
this paradigm carries with it some heavy baggage:
- Time travel, which must be possible, given the dimensionality of Time.
The Past is "out there," with all the paradoxes this poses.
- Contradiction of basic Thermodynamics.
- Creation of many an ill-conceived Star Trek episode.
Well, I'll get off my soapbox now, before the tomatoes start flying...
-=Lord Sludge=-
|
{"url":"http://webpages.charter.net/stephenk1/Outlaw/time1.html","timestamp":"2014-04-19T20:07:05Z","content_type":null,"content_length":"3426","record_id":"<urn:uuid:aa2d5536-9a02-4aa2-80f0-52f5c98b9a52>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00074-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How Long Is the Cantor Set?
In the problem The Cantor Set, we met the Cantor set, which is the limit of $C_n$ as $n$ tends to infinity.
We can talk about the length of one of our sets $C_n$.
The set $C_1$ has length 1.
The set $C_2$ has length $\frac{2}{3}$, as this is the total length of the line segments in $C_2$.
What are the lengths of $C_3$, $C_4$ and $C_5$?
Can you find a general expression for the length of $C_n$?
By considering what happens as $n$ tends to infinity, can you find the length of the Cantor set?
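(A solution sketch, added here; the original page leaves these as exercises. Each step keeps $\frac{2}{3}$ of the previous total length, so with the indexing above, $C_3$, $C_4$, $C_5$ have lengths $\frac{4}{9}$, $\frac{8}{27}$, $\frac{16}{81}$; in general the length of $C_n$ is $\left(\frac{2}{3}\right)^{n-1}$. Since $\left(\frac{2}{3}\right)^{n-1} \to 0$ as $n \to \infty$, the Cantor set itself has length 0.)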
|
{"url":"http://nrich.maths.org/5335/index?nomenu=1","timestamp":"2014-04-16T07:57:41Z","content_type":null,"content_length":"3783","record_id":"<urn:uuid:3c1d7d98-70ad-45a3-ab70-8f6b965b40cc>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00013-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Download 91256 Apply Co-ordinate Geometry Methods In Solving Nzqa Classroom Activities Idea
Mathematics Lesson Plans for Geometry Grade 1
Subject Class Math Unit Geometry Grade Teachers Note
Published by Milly on December 12, 2013 at 8:33 am – under Geometry Grade 1 category
Lesson Title :Subject Class Math Unit Geometry Grade Teachers NoteSubject Unit :Geometry Grade 1Grade :Geometry Grade 2
This mathematics lesson plan is tagged with Geometry Lesson Plans, geometry lesson plans 4th grade
52 Out of 100 by 141 users
Math 250–analytic Geometry And Calculus I Lesson Plan
Published by on September 12, 2013 at 1:05 pm – under Counting and number patterns Grade 1 category
Lesson Title :Math 250--analytic Geometry And Calculus I Lesson PlanSubject Unit :Counting and number patterns Grade 1Grade :Geometry Grade 1
This mathematics lesson plan is tagged with rates of change calculus tutorial, rates of change calculus
81 Out of 100 by 540 users
Semester2 Final Examination Review For Geometry Cp1 Lesson Plan
Published by Milly on September 10, 2013 at 8:53 pm – under Geometry Grade 1 category
Lesson Title :Semester2 Final Examination Review For Geometry Cp1 Lesson PlanSubject Unit :Geometry Grade 1Grade :Geometry Grade 2
This mathematics lesson plan is tagged with geometry shapes, geometry rectangle diagonal
78 Out of 100 by 234 users
Download Honors Geometry Teachers Plan
Published by Gwen on September 10, 2013 at 8:53 pm – under Geometry Grade 1 category
Lesson Title :Download Honors Geometry Teachers PlanSubject Unit :Geometry Grade 1Grade :Geometry Grade 2
This mathematics lesson plan is tagged with geometry equilateral triangles, geometry worksheets congruent triangles
66 Out of 100 by 198 users
Download Geometry Unit 1 Introducing Line, Angle, Triangle And Parallelogram Lesson Plan
Published by Gwen on September 10, 2013 at 8:52 pm – under Geometry Grade 1 category
Lesson Title :Download Geometry Unit 1 Introducing Line, Angle, Triangle And Parallelogram Lesson PlanSubject Unit :Geometry Grade 1Grade :Geometry Grade 2
This mathematics lesson plan is tagged with geometry worksheets congruent triangles, geometry congruent triangles problems
58 Out of 100 by 174 users
download New Grade 7 To 9 Geometry Outcomes Ca Classroom Activities Idea
Published by on September 10, 2013 at 8:52 pm – under Geometry Grade 1 category
Lesson Title :Download New Grade 7 To 9 Geometry Outcomes Ca Classroom Activities IdeaSubject Unit :Geometry Grade 1Grade :Geometry Grade 2
This mathematics lesson plan is tagged with geometry congruent triangles, geometry proofs congruent triangles
54 Out of 100 by 162 users
Geometry Congruent Triangles Worksheet Lesson Plan
Published by Charlotez on September 10, 2013 at 8:51 pm – under Geometry Grade 1 category
Lesson Title :Geometry Congruent Triangles Worksheet Lesson PlanSubject Unit :Geometry Grade 1Grade :Geometry Grade 2
This mathematics lesson plan is tagged with geometry corresponding parts, geometry congruent triangles
53 Out of 100 by 138 users
Points, Lines, And Triangles In Hyperbolic Geometry Teachers Note
Published by Victoria on September 10, 2013 at 8:50 pm – under Geometry Grade 1 category
Lesson Title :Points, Lines, And Triangles In Hyperbolic Geometry Teachers NoteSubject Unit :Geometry Grade 1Grade :Geometry Grade 2
This mathematics lesson plan is tagged with geometry equilateral triangles, geometry worksheets congruent triangles
57 Out of 100 by 126 users
Free download 1 Molecule S Symmetry Groups Karlstads Universitet Teachers Note
Published by Victoria on September 10, 2013 at 6:51 pm – under Geometry Grade 1 category
Lesson Title :Free Download 1 Molecule S Symmetry Groups Karlstads Universitet Teachers NoteSubject Unit :Geometry Grade 1Grade :Geometry Grade 2
This mathematics lesson plan is tagged with order of symmetry of a shape, order of symmetry of a square
61 Out of 100 by 114 users
Download Proportional Reasoning With Geometry Arizona Department Of Teachers Note
Published by Milly on September 10, 2013 at 5:18 pm – under Geometry Grade 1 category
Lesson Title :Download Proportional Reasoning With Geometry Arizona Department Of Teachers NoteSubject Unit :Geometry Grade 1Grade :Geometry Grade 2
This mathematics lesson plan is tagged with math pages 7th graders, ratios and proportions worksheets
66 Out of 100 by 99 users
Geometry Unit 1 Livingston County School District Teachers Plan
Published by Charlotez on September 4, 2013 at 4:15 am – under Counting Pre-Kindergarten category
Lesson Title :Geometry Unit 1 Livingston County School District Teachers PlanSubject Unit :Counting Pre-KindergartenGrade :Geometry Grade 1
This mathematics lesson plan is tagged with parallel lines cut by a transversal worksheet pdf, parallel lines cut by a transversal ppt
58 Out of 100 by 123 users
Download Geometry Teaching Guide Teachers Plan
Published by Charlotez on September 4, 2013 at 4:15 am – under Geometry Grade 1 category
Lesson Title :Download Geometry Teaching Guide Teachers PlanSubject Unit :Geometry Grade 1Grade :Geometry Grade 2
This mathematics lesson plan is tagged with parallel lines cut by a transversal interactive, parallel lines in art
62 Out of 100 by 111 users
Three Dimensional Co-ordinate Geometry Lesson Plan
Published by Victoria on September 4, 2013 at 3:30 am – under Geometry Grade 1 category
Lesson Title :Three Dimensional Co-ordinate Geometry Lesson PlanSubject Unit :Geometry Grade 1Grade :Geometry Grade 2
This mathematics lesson plan is tagged with vectors in calculus,
77 Out of 100 by 231 users
Symmetry And Mathematics In Folk Dancing Teachers Plan
Published by on September 4, 2013 at 2:45 am – under Geometry Grade 1 category
Lesson Title :Symmetry And Mathematics In Folk Dancing Teachers PlanSubject Unit :Geometry Grade 1Grade :Geometry Grade 2
This mathematics lesson plan is tagged with rotational and line symmetry, mathematical symmetry definition
85 Out of 100 by 255 users
Free download Summer Math Packet For Rising 8th Grade Geometry Students Teachers Plan
Published by Milly on September 4, 2013 at 2:06 am – under Geometry Grade 1 category
Lesson Title :Free Download Summer Math Packet For Rising 8th Grade Geometry Students Teachers PlanSubject Unit :Geometry Grade 1Grade :Geometry Grade 2
This mathematics lesson plan is tagged with 8th grade math help, 8th grade math games
53 Out of 100 by 159 users
download Developing Mathematical Thinking And Attitudes Through Geometry Classroom Activities Idea
Published by on September 4, 2013 at 1:59 am – under Geometry Grade 1 category
Lesson Title :Download Developing Mathematical Thinking And Attitudes Through Geometry Classroom Activities IdeaSubject Unit :Geometry Grade 1Grade :Geometry Grade 2
This mathematics lesson plan is tagged with geometry math terms, geometry math formulas
86 Out of 100 by 390 users
download Geometry Tiered Lesson Doc Dare To Differentiate Classroom Activities Idea
Published by Gwen on September 4, 2013 at 1:58 am – under Geometry Grade 1 category
Lesson Title :Download Geometry Tiered Lesson Doc Dare To Differentiate Classroom Activities IdeaSubject Unit :Geometry Grade 1Grade :Geometry Grade 2
This mathematics lesson plan is tagged with opposite rays geometry definition, geometry math formulas
94 Out of 100 by 150 users
Download Geometry Curriculum Fairchild Wheeler Multi-magnet Teachers Note
Published by Charlotez on September 4, 2013 at 1:58 am – under Geometry Grade 1 category
Lesson Title :Download Geometry Curriculum Fairchild Wheeler Multi-magnet Teachers NoteSubject Unit :Geometry Grade 1Grade :Geometry Grade 2
This mathematics lesson plan is tagged with consecutive geometry, geometry math terms
93 Out of 100 by 279 users
download 91256 Apply Co-ordinate Geometry Methods In Solving Nzqa Classroom Activities Idea
Published by Charlotez on September 4, 2013 at 12:36 am – under Geometry Grade 1 category
Lesson Title :Download 91256 Apply Co-ordinate Geometry Methods In Solving Nzqa Classroom Activities IdeaSubject Unit :Geometry Grade 1Grade :Geometry Grade 2
This mathematics lesson plan is tagged with ordinate definition, ordinate
89 Out of 100 by 267 users
Download Three Dimensional Co-ordinate Geometry Teachers Plan
Published by Victoria on September 4, 2013 at 12:36 am – under Geometry Grade 1 category
Lesson Title :Download Three Dimensional Co-ordinate Geometry Teachers PlanSubject Unit :Geometry Grade 1Grade :Geometry Grade 2
This mathematics lesson plan is tagged with ordinate scale, mid ordinate formula
73 Out of 100 by 219 users
|
{"url":"http://www.padjane.com/category/grade-1/geometry-grade-1/","timestamp":"2014-04-19T09:57:02Z","content_type":null,"content_length":"53242","record_id":"<urn:uuid:50d86df8-1621-4653-a014-d9d2831f1063>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00109-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Powder Springs, GA Algebra 1 Tutor
Find a Powder Springs, GA Algebra 1 Tutor
...I hold a Master's degree in Mathematics Education from Georgia State University and a Bachelors degree in Physics from Spelman College. I have experience tutoring all math topics at the middle
and high school levels. I will provide the needed support and encouragement to help you succeed in your mathematics class and/or standardized tests.
7 Subjects: including algebra 1, geometry, algebra 2, SAT math
...I think that the elementary age is important to developing the skills for junior high/high school. I have helped many children in elementary school understand the basics behind spelling, math,
English, grammar, history, science, reading and writing; each are crucial. Because I also tutored in high school, the main age group I focused on was K-8th.
38 Subjects: including algebra 1, English, reading, algebra 2
...I also have a Professional Teaching Certificate from the State of Georgia to teach Special Education General Curriculum(P-12). I am presently a registered substitute teacher which includes
teaching students with different disabilities. My experiences also include working with students of varying academic levels. I have a Master of Science degree in General Education and Special
24 Subjects: including algebra 1, reading, writing, GED
I was a National Merit Scholar and graduated magna cum laude from Georgia Tech in chemical engineering. I can tutor in precalculus, advanced high school mathematics, trigonometry, geometry,
algebra, prealgebra, chemistry, grammar, phonics, SAT math, reading, and writing. I have been tutoring profe...
20 Subjects: including algebra 1, reading, chemistry, geometry
I am a certified math teacher with 13 years experience. I have taught all levels of math, from pre-algebra to AP calculus BC, including IB HL and SL Math, and AP Physics - Mechanics. I also have
extensive experience in test preparation, including PSAT, SAT, GRE and GHSGT.
10 Subjects: including algebra 1, calculus, algebra 2, GRE
|
{"url":"http://www.purplemath.com/powder_springs_ga_algebra_1_tutors.php","timestamp":"2014-04-20T20:59:24Z","content_type":null,"content_length":"24533","record_id":"<urn:uuid:f59b7127-e357-4349-acf1-c94e19c9c030>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00554-ip-10-147-4-33.ec2.internal.warc.gz"}
|
What is .375 written as a fraction?
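The direct answer, worked out here since the page itself never states it: $0.375 = \frac{375}{1000} = \frac{3}{8}$, after dividing numerator and denominator by their greatest common divisor, 125.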
Elementary arithmetic is the simplified portion of arithmetic which includes the operations of addition, subtraction, multiplication, and division.
Elementary arithmetic starts with the natural numbers and the written symbols (digits) which represent them. The process for combining a pair of these numbers with the four basic operations
traditionally relies on memorized results for small values of numbers, including the contents of a multiplication table to assist with multiplication and division.
In mathematics, a continued fraction is an expression obtained through an iterative process of representing a number as the sum of its integer part and the reciprocal of another number, then writing
this other number as the sum of its integer part and another reciprocal, and so on. In a finite continued fraction (or terminated continued fraction), the iteration/recursion is terminated after
finitely many steps by using an integer in lieu of another continued fraction. In contrast, an infinite continued fraction is an infinite expression. In either case, all integers in the sequence,
other than the first, must be positive. The integers a[i] are called the coefficients or terms of the continued fraction.
Continued fractions have a number of remarkable properties related to the Euclidean algorithm for integers or real numbers. Every rational number p/q has two closely related expressions as a finite
continued fraction, whose coefficients a[i] can be determined by applying the Euclidean algorithm to (p,q). The numerical value of an infinite continued fraction will be irrational; it is defined
from its infinite sequence of integers as the limit of a sequence of values for finite continued fractions. Each finite continued fraction of the sequence is obtained by using a finite prefix of the
infinite continued fraction's defining sequence of integers. Moreover, every irrational number α is the value of a unique infinite continued fraction, whose coefficients can be found using the
non-terminating version of the Euclidean algorithm applied to the incommensurable values α and 1. This way of expressing real numbers (rational and irrational) is called their continued fraction representation.
An irreducible fraction (or fraction in lowest terms or reduced fraction) is a fraction in which the numerator and denominator are integers that have no other common divisors than 1 (and -1, when
negative numbers are considered). In other words, a fraction a/b is irreducible if and only if a and b are coprime, that is, if a and b have a greatest common divisor of 1. In higher mathematics, "irreducible fraction" may also refer to irreducible rational fractions.
An equivalent definition is sometimes useful: if a, b are integers, then the fraction a/b is irreducible if and only if there is no other equal fraction c/d such that |c| < |a| or |d| < |b|, where |a| means the absolute value of a. (Let us recall that two fractions a/b and c/d are equal or equivalent if and only if ad = bc.)
|
{"url":"http://answerparty.com/question/answer/what-is-375-written-as-a-fraction","timestamp":"2014-04-18T10:35:43Z","content_type":null,"content_length":"28033","record_id":"<urn:uuid:7ffdd5ad-f012-4d87-a2e8-8f91182c73d8>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00584-ip-10-147-4-33.ec2.internal.warc.gz"}
|
SPSSX-L archives -- July 2003 (#450), LISTSERV at the University of Georgia
Date: Fri, 25 Jul 2003 14:31:58 +1000
Reply-To: paulandpen@optusnet.com.au
Sender: "SPSSX(r) Discussion" <SPSSX-L@LISTSERV.UGA.EDU>
From: Paul Dickson <paulandpen@optusnet.com.au>
Subject: Re: What does "over-fitting" mean?
Comments: To: Christina Cutshaw <ccutsha1@jhem.jhmi.edu>
Content-Type: text/plain
Hi there Christina,
I agree with the comments made (and yes, they were a little harsh in my opinion, but unfortunately true for stats-oriented journals and their readers). Readers with enough stats knowledge could hammer you for using this type of analysis with this sample size, provided they have time and could be bothered (I would not, but some others out there might). I would therefore recommend deferring to descriptives, revising the journal publication possibilities to aim for practitioner/non-stats-based journals, and going from there.
I would also look at other previously published research in the same area that has used any of the same variables that you have, and compare your findings on a variable-by-variable basis (you don't have the sample size to develop a multivariate model from your data), using tables (means, medians, proportions, etc.) of yours and other people's results to comment on the findings (similarities/differences, possible reasons why they are different or similar, and how this could contribute to the field as a whole; recommendations for follow-up and future funding should also be factored in here). This broad information could help you provide useful clinical/theoretical information and make comparisons across studies that, while not statistically meaningful, are still incredibly meaningful to practitioners and to the theory in a broad sense. Also see if there are norms etc. for your data, and look to national data (incidence/prevalence) that you could use to contextualise your findings a little more.
Finally, I am convinced there is plenty of crap published out there that makes little contribution to any real theory or clinical/practical meaning (no effect size reported), simply because the sample size used in the analysis was so big that even tiny differences were picked up statistically (if only we all had data sets like this). All that significant stats really do at the end of the day is substantiate the analysis statistically (according to commonly agreed and valid rules of thumb, which vary so much that sometimes they are confusing); they say nothing broader about the meaning of the results from a theoretical and clinical point of view. That final comment may also be a bit harsh, but I wonder how "true" it is!!!!
Finally, I hope one day we find ways to model our important groupings of variables on very, very small sample sizes and still produce meaningful results!!!!!
Cheers Paul
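[Added editorial note, not part of the archived messages: Steve's events-per-variable point in the exchange quoted below can be seen numerically with a small simulation. This Python sketch uses a two-sample t-test as a cheap stand-in for the univariate logistic fits; the qualitative point is the same.]

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n, k, n_events, n_sims = 73, 14, 22, 500
    survivors = []
    for _ in range(n_sims):
        X = rng.normal(size=(n, k))          # 14 predictors of pure noise
        y = np.zeros(n)
        y[:n_events] = 1
        rng.shuffle(y)                       # outcome independent of X
        p = [stats.ttest_ind(X[y == 1, j], X[y == 0, j]).pvalue
             for j in range(k)]
        survivors.append(sum(pv < 0.25 for pv in p))
    print(np.mean(survivors))                # about 14 * 0.25 = 3.5 per run

[On average, roughly 3.5 of the 14 pure-noise variables pass the p < 0.25 screen, so with only 22 events the "survivors" of such a screen are often just noise.]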
> Christina Cutshaw <ccutsha1@jhem.jhmi.edu> wrote: > > Steve, > > Thank you for your comments. I will evaluate my options in light of > your > suggestions. > > Best, > > Chris > > "Simon, Steve,
PhD" wrote: > > > Chris Cutshaw writes: > > > > > I am conducting binary logistic regression analyses with a > > > sample size of 73 of which 22 have the outcome of interest (e.g. > are > > > "very
successful" versus somewhat/not very successful). I have > > > fourteen variables of interest which I examined in a univariate > > > logistic regression with the dependent variable. Of these > > >
fourteen, six have a liklihood-ratio chi-square of p<0.25. Hosmer > & > > > Lemeshow suggest that all variables with a p<0.25 be examined in > the > > > multivariable modeling. I have heard that
there should be about > 10 > > cases > > > with the outcome of intertest per independent variable to avoid > > > "overfitting." > > > > > > 1) Does this mean my final model should contain no more
>> than 2 variables? 2) Can I look at all six variables using a forward
>> stepwise procedure, for example, as long as the final model has only
>> two or three variables? Or should I create several different two or
>> three-variable models and see which combinations yield significant
>> results and compare them in some way? What does "overfitting"
>> actually mean?
>
> I apologize if some of the comments here appear harsh. You are going to
> have to seriously lower your expectations. That may be disheartening,
> but better to face the bad news now rather than later.
>
> Overfitting means that some of the relationships that appear
> statistically significant are actually just noise. You will find that a
> model with overfitting does not replicate well and does a lousy job of
> predicting future responses.
>
> The rule of 10 observations per variable (I've also heard 15) refers to
> the number of variables screened, not the number in the final model.
> Since you looked at 14 variables, you really needed 140 to 210 events
> of interest (equivalent to 464 to 697 total observations) to be sure
> that your model is not overfitting the data.
>
> What to do, what to do?
>
> If you are trying to publish these results, you have to hope that the
> reviewers are all asleep at the switch. Instead of a ratio of 10 or 15
> to one, your ratio is 1.6 to one. All 14 variables are part of the
> initial screen, so you can't say that you only looked at six variables.
>
> Of course, you were unfortunate enough to have the IRB asleep at the
> switch, because they should never have approved such an ambitious data
> analysis on such a skimpy data set. So maybe the reviewers will be the
> same way.
>
> I wouldn't count on it, though. If you want to improve your chances of
> publishing the results, there are several things you can do.
>
> First, I realize that the answer is almost always "NO" but I still have
> to ask--is there any possibility that you could collect more data? In
> theory, collecting more data after the study has ended is a protocol
> deviation (be sure to tell your IRB). And there is some possibility of
> temporal trends that might interfere with your logistic model. But both
> of these "sins" are less serious than overfitting your data.
>
> Second, you could slap the "exploratory" label on your research. Put in
> a lot of qualifiers like "Although these results are intriguing, the
> small sample size means that these results may not replicate well with
> a larger data set." This is a cop-out in my opinion. I've fallen back
> on this when I've seen ratios of four to one or three to one, but you
> don't even come close to those ratios.
>
> Third, ask a colleague who has not looked at the data to help. Show
> him/her the list of 14 independent variables and ask which two should
> be the highest priority, based on biological mechanisms, knowledge of
> previous research, intuition, etc., BUT NOT LOOKING AT THE EXISTING
> DATA. Then do a serious logistic regression model with those two
> variables, and treat the other twelve variables in a purely exploratory
> mode.
>
> Fourth, admit to yourself that you are trying to squeeze blood from a
> turnip. A sample of 73 with only 22 events of interest is just not big
> enough to allow for a decent multivariable logistic regression model.
> You can't look for the effect of A, adjusted for B, C, and D, so don't
> even try. Report each individual univariate logistic regression model
> and leave it at that.
>
> Fifth (and most radical of all), give up all thoughts of logistic
> regression and p-values altogether. Who made a rule that says that
> every research publication has to have p-values? Submit a publication
> with a graphical summary of your data. Boxplots and/or bar charts would
> work very nicely here. Explain that your data set is too small to
> entertain any serious logistic regression models. If you're unlucky,
> then the reviewers may ask you to put in some p-values anyway. Then you
> could switch to the previous option.
>
> Sixth, there are some newer approaches to statistical modeling that are
> less prone to overfitting. Perhaps the one you are most likely to see
> is CART (Classification and Regression Trees). These models can't make
> a silk purse out of a sow's ear, but they do have some cross-validation
> checks that make them slightly better than stepwise approaches.
>
> If you asked people on this list how many of them have published
> results when they knew that the sample size was way too small, almost
> every hand would go up, I suspect. I've done it more times than I want
> to admit. Just be sure to scale back your expectations, limit the
> complexity of any models, and be honest about the limitations of your
> sample size.
>
> Good luck!
>
> Steve Simon, ssimon@cmh.edu, Standard Disclaimer.
> The STATS web page has moved to http://www.childrens-mercy.org/stats.
>
> P.S. I've adapted this question for one of my web pages. Take a look at
> http://www.childrens-mercy.org/stats/
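As a quick back-of-the-envelope check of the events-per-variable rule quoted above (a minimal Python sketch; only the numbers 22 events, 14 variables, and the 10-to-15 guideline come from the post, the rest is illustrative):

def events_per_variable(n_events, n_variables_screened):
    # EPV counts events of interest per variable SCREENED, not per variable kept
    return n_events / n_variables_screened

epv = events_per_variable(n_events=22, n_variables_screened=14)
print(f"EPV = {epv:.1f}")                        # ~1.6, far below the 10-15 guideline
print(f"events needed: {10 * 14} to {15 * 14}")  # 140 to 210, as stated above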
|
{"url":"http://listserv.uga.edu/cgi-bin/wa?A2=ind0307&L=spssx-l&D=0&F=P&P=50495","timestamp":"2014-04-18T13:18:26Z","content_type":null,"content_length":"19205","record_id":"<urn:uuid:bd919757-c4f4-4f75-8c61-206a1a7d8705>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00050-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Understanding Molecular Simulation: From Algorithms to Applications
Understanding Molecular Simulation: From Algorithms to Applications explains the physics behind the "recipes" of molecular simulation for materials science. Computer simulators are continuously
confronted with questions concerning the choice of a particular technique for a given application. A wide variety of tools exist, so the choice of technique requires a good understanding of the basic
principles. More importantly, such understanding may greatly improve the efficiency of a simulation program. The implementation of simulation methods is illustrated in pseudocodes and their practical
use in the case studies used in the text.
Since the first edition only five years ago, the simulation world has changed significantly -- current techniques have matured and new ones have appeared. This new edition deals with these new
developments; in particular, there are sections on:
· Transition path sampling and diffusive barrier crossing to simulate rare events
· Dissipative particle dynamics as a coarse-grained simulation technique
· Novel schemes to compute the long-ranged forces
· Hamiltonian and non-Hamiltonian dynamics in the context of constant-temperature and constant-pressure molecular dynamics simulations
· Multiple-time step algorithms as an alternative for constraints
· Defects in solids
· The pruned-enriched Rosenbluth sampling, recoil-growth, and concerted rotations for complex molecules
· Parallel tempering for glassy Hamiltonians
Examples are included that highlight current applications and the codes of case studies are available on the World Wide Web. Several new examples have been added since the first edition to illustrate
recent applications. Questions are included in this new edition. No prior knowledge of computer simulation is assumed.
Reader review (yapete, LibraryThing): "Probably the best book on molecular computer simulations. Gives the how-to and the physical reasoning behind it. If you are a grad student or researcher trying to get into this stuff, this is a great book to start with."
Contents
Chapter 1: Introduction
Basics
Ensembles
Free Energies and Phase Equilibria
Advanced Techniques
Appendices
Bibliography
Author Index
Index
|
{"url":"http://books.google.com/books?id=5qTzldS9ROIC","timestamp":"2014-04-19T09:39:13Z","content_type":null,"content_length":"121898","record_id":"<urn:uuid:b0c4a448-85fb-45dd-9dec-22dc2197cd69>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00510-ip-10-147-4-33.ec2.internal.warc.gz"}
|
What's new
Van Vu and I have just uploaded to the arXiv our paper “Random matrices: Universality of local spectral statistics of non-Hermitian matrices“. The main result of this paper is a “Four Moment Theorem”
that establishes universality for local spectral statistics of non-Hermitian matrices with independent entries, under the additional hypotheses that the entries of the matrix decay exponentially, and
match moments with either the real or complex gaussian ensemble to fourth order. This is the non-Hermitian analogue of a long string of recent results establishing universality of local statistics in
the Hermitian case (as discussed for instance in this recent survey of Van and myself, and also in several other places).
The complex case is somewhat easier to describe. Given a (non-Hermitian) random matrix ensemble ${M_n}$ of ${n \times n}$ matrices, one can arbitrarily enumerate the (geometric) eigenvalues as ${\
lambda_1(M_n),\ldots,\lambda_n(M_n) \in {\bf C}}$, and one can then define the ${k}$-point correlation functions ${\rho^{(k)}_n: {\bf C}^k \rightarrow {\bf R}^+}$ to be the symmetric functions such that
$\displaystyle \int_{{\bf C}^k} F(z_1,\ldots,z_k) \rho^{(k)}_n(z_1,\ldots,z_k)\ dz_1 \ldots dz_k$
$\displaystyle = {\bf E} \sum_{1 \leq i_1 < \ldots < i_k \leq n} F(\lambda_{i_1}(M_n),\ldots,\lambda_{i_k}(M_n)).$
In the case when ${M_n}$ is drawn from the complex gaussian ensemble, so that all the entries are independent complex gaussians of mean zero and variance one, it is a classical result of Ginibre that
the asymptotics of ${\rho^{(k)}_n}$ near some point ${z \sqrt{n}}$ as ${n \rightarrow \infty}$ and ${z \in {\bf C}}$ is fixed are given by the determinantal rule
$\displaystyle \rho^{(k)}_n(z\sqrt{n} + w_1,\ldots,z\sqrt{n}+w_k) \rightarrow \hbox{det}( K(w_i,w_j) )_{1 \leq i,j \leq k} \ \ \ \ \ (1)$
for ${|z| < 1}$ and
$\displaystyle \rho^{(k)}_n(z\sqrt{n} + w_1,\ldots,z\sqrt{n}+w_k) \rightarrow 0$
for ${|z| > 1}$, where ${K}$ is the reproducing kernel
$\displaystyle K(z,w) := \frac{1}{\pi} e^{-|z|^2/2 - |w|^2/2 + z \overline{w}}.$
(There is also an asymptotic for the boundary case ${|z|=1}$, but it is more complicated to state.) In particular, we see that ${\rho^{(1)}_n(z \sqrt{n}) \rightarrow \frac{1}{\pi} 1_{|z| \leq 1}}$
for almost every ${z}$, which is a manifestation of the well-known circular law for these matrices; but the circular law only captures the macroscopic structure of the spectrum, whereas the
asymptotic (1) describes the microscopic structure.
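(As a numerical sanity check of the circular law; a minimal sketch assuming NumPy, with arbitrary size and seed:)

import numpy as np

rng = np.random.default_rng(0)
n = 1000
# complex Ginibre: iid entries with mean zero and variance one
M = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
lam = np.linalg.eigvals(M) / np.sqrt(n)
print("fraction inside unit disk:", np.mean(np.abs(lam) <= 1))  # tends to 1 as n grows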
Our first main result is that the asymptotic (1) for ${|z|<1}$ also holds (in the sense of vague convergence) when ${M_n}$ is a matrix whose entries are independent with mean zero, variance one,
exponentially decaying tails, and which all match moments with the complex gaussian to fourth order. (Actually we prove a stronger result than this which is valid for all bounded ${z}$ and has more
uniform bounds, but is a bit more technical to state.) An analogous result is also established for real gaussians (but now one has to separate the correlation function into components depending on
how many eigenvalues are real and how many are strictly complex; also, the limiting distribution is more complicated, being described by Pfaffians rather than determinants). Among other things, this
allows us to partially extend some known results on complex or real gaussian ensembles to more general ensembles. For instance, there is a central limit theorem of Rider which establishes a central
limit theorem for the number of eigenvalues of a complex gaussian matrix in a mesoscopic disk; from our results, we can extend this central limit theorem to matrices that match the complex gaussian
ensemble to fourth order, provided that the disk is small enough (for technical reasons, our error bounds are not strong enough to handle large disks). Similarly, extending some results of
Edelman-Kostlan-Shub and of Forrester-Nagao, we can show that for a matrix matching the real gaussian ensemble to fourth order, the number of real eigenvalues is ${\sqrt{\frac{2n}{\pi}} + O(n^{1/
2-c})}$ with probability ${1-O(n^{-c})}$ for some absolute constant ${c>0}$.
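(The ${\sqrt{2n/\pi}}$ count is also easy to probe numerically; a sketch with arbitrary parameters:)

import numpy as np

rng = np.random.default_rng(1)
n, trials = 200, 50
counts = []
for _ in range(trials):
    lam = np.linalg.eigvals(rng.standard_normal((n, n)))
    counts.append(np.sum(np.abs(lam.imag) < 1e-9))  # count the real eigenvalues
print(np.mean(counts), np.sqrt(2 * n / np.pi))      # both ~ 11.3 for n = 200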
There are several steps involved in the proof. The first step is to apply the Girko Hermitisation trick to replace the problem of understanding the spectrum of a non-Hermitian matrix, with that of
understanding the spectrum of various Hermitian matrices. The two identities that realise this trick are, firstly, Jensen’s formula
$\displaystyle \log |\det(M_n-z_0)| = - \sum_{1 \leq i \leq n: \lambda_i(M_n) \in B(z_0,r)} \log \frac{r}{|\lambda_i(M_n)-z_0|}$
$\displaystyle + \frac{1}{2\pi} \int_0^{2\pi} \log |\det(M_n-z_0-re^{i\theta})|\ d\theta$
that relates the local distribution of eigenvalues to the log-determinants ${\log |\det(M_n-z_0)|}$, and secondly the elementary identity
$\displaystyle \log |\det(M_n - z)| = \frac{1}{2} \log|\det W_{n,z}| + \frac{1}{2} n \log n$
that relates the log-determinants of ${M_n-z}$ to the log-determinants of the Hermitian matrices
$\displaystyle W_{n,z} := \frac{1}{\sqrt{n}} \begin{pmatrix} 0 & M_n -z \\ (M_n-z)^* & 0 \end{pmatrix}.$
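Both identities are elementary to verify; here is a minimal numerical check of the second one (the size, seed, and test point ${z}$ are arbitrary choices of the sketch):

import numpy as np

rng = np.random.default_rng(2)
n, z = 50, 0.3 + 0.1j
M = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
A = M - z * np.eye(n)
W = np.block([[np.zeros((n, n)), A], [A.conj().T, np.zeros((n, n))]]) / np.sqrt(n)

lhs = np.linalg.slogdet(A)[1]                        # log|det(M_n - z)|
rhs = 0.5 * np.linalg.slogdet(W)[1] + 0.5 * n * np.log(n)
print(lhs, rhs)                                      # agree up to rounding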
The main difficulty is then to obtain concentration and universality results for the Hermitian log-determinants ${\log|\det W_{n,z}|}$. This turns out to be a task that is analogous to the task of
obtaining concentration for Wigner matrices (as we did in this recent paper), as well as central limit theorems for log-determinants of Wigner matrices (as we did in this other recent paper). In both
of these papers, the main idea was to use the Four Moment Theorem for Wigner matrices (which can now be proven relatively easily by a combination of the local semi-circular law and resolvent swapping
methods), combined with (in the latter paper) a central limit theorem for the gaussian unitary ensemble (GUE). This latter task was achieved by using the convenient Trotter normal form to
tridiagonalise a GUE matrix, which has the effect of revealing the determinant of that matrix as the solution to a certain linear stochastic difference equation, and one can analyse the distribution
of that solution via such tools as the martingale central limit theorem.
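(For a tridiagonal matrix the difference equation in question is the classical three-term recurrence for the principal minors; a minimal sketch, with generic random entries standing in for the actual Trotter-form distributions:)

import numpy as np

rng = np.random.default_rng(3)
n = 8
a = rng.standard_normal(n)        # diagonal entries
b = rng.standard_normal(n - 1)    # off-diagonal entries

# D_k = a_k * D_{k-1} - b_{k-1}^2 * D_{k-2}, with D_0 = 1, D_1 = a_1
D_prev, D = 1.0, a[0]
for k in range(1, n):
    D_prev, D = D, a[k] * D - b[k - 1] ** 2 * D_prev

T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
print(D, np.linalg.det(T))        # the recurrence reproduces the determinant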
The matrices ${W_{n,z}}$ are somewhat more complicated than Wigner matrices (for instance, the semi-circular law must be replaced by a distorted Marchenko-Pastur law), but the same general strategy
works to obtain concentration and universality for their log-determinants. The main new difficulty that arises is that the analogue of the Trotter normal form for gaussian random matrices is not
tridiagonal, but rather Hessenberg (i.e. upper-triangular except for the lower diagonal). This ultimately has the effect of expressing the relevant determinant as the solution to a nonlinear
stochastic difference equation, which is a bit trickier to solve for. Fortunately, it turns out that one only needs good lower bounds on the solution, as one can use the second moment method to upper
bound the determinant and hence the log-determinant (following a classical computation of Turan). This simplifies the analysis of the equation somewhat.
While this result is the first local universality result in the category of random matrices with independent entries, there are still two limitations to the result which one would like to remove. The
first is the moment matching hypotheses on the matrix. Very recently, one of the ingredients of our paper, namely the local circular law, was proved without moment matching hypotheses by Bourgade,
Yau, and Yin (provided one stays away from the edge of the spectrum); however, as of this time of writing the other main ingredient – the universality of the log-determinant – still requires moment
matching. (The standard tool for obtaining universality without moment matching hypotheses is the heat flow method (and more specifically, the local relaxation flow method), but the analogue of Dyson
Brownian motion in the non-Hermitian setting appears to be somewhat intractable, being a coupled flow on both the eigenvalues and eigenvectors rather than just on the eigenvalues alone.)
Van Vu and I have just uploaded to the arXiv our paper “Random matrices: The Universality phenomenon for Wigner ensembles“. This survey is a longer version (58 pages) of a previous short survey we
wrote up a few months ago. The survey focuses on recent progress in understanding the universality phenomenon for Hermitian Wigner ensembles, of which the Gaussian Unitary Ensemble (GUE) is the most
well known. The one-sentence summary of this progress is that many of the asymptotic spectral statistics (e.g. correlation functions, eigenvalue gaps, determinants, etc.) that were previously known
for GUE matrices, are now known for very large classes of Wigner ensembles as well. There are however a wide variety of results of this type, due to the large number of interesting spectral
statistics, the varying hypotheses placed on the ensemble, and the different modes of convergence studied, and it is difficult to isolate a single such result currently as the definitive universality
result. (In particular, there is at present a tradeoff between generality of ensemble and strength of convergence; the universality results that are available for the most general classes of ensemble
are only presently able to demonstrate a rather weak sense of convergence to the universal distribution (involving an additional averaging in the energy parameter), which limits the applicability of
such results to a number of interesting questions in which energy averaging is not permissible, such as the study of the least singular value of a Wigner matrix, or of related quantities such as the
condition number or determinant. But it is conceivable that this tradeoff is a temporary phenomenon and may be eliminated by future work in this area; in the case of Hermitian matrices whose entries
have the same second moments as that of the GUE ensemble, for instance, the need for energy averaging has already been removed.)
Nevertheless, throughout the family of results that have been obtained recently, there are two main methods which have been fundamental to almost all of the recent progress in extending from special
ensembles such as GUE to general ensembles. The first method, developed extensively by Erdos, Schlein, Yau, Yin, and others (and building on an initial breakthrough by Johansson), is the heat flow
method, which exploits the rapid convergence to equilibrium of the spectral statistics of matrices undergoing Dyson-type flows towards GUE. (An important aspect to this method is the ability to
accelerate the convergence to equilibrium by localising the Hamiltonian, in order to eliminate the slowest modes of the flow; this refinement of the method is known as the “local relaxation flow”
method. Unfortunately, the translation mode is not accelerated by this process, which is the principal reason why results obtained by pure heat flow methods still require an energy averaging in the
final conclusion; it would be of interest to find a way around this difficulty.) The other method, which goes all the way back to Lindeberg in his classical proof of the central limit theorem, and which
was introduced to random matrix theory by Chatterjee and then developed for the universality problem by Van Vu and myself, is the swapping method, which is based on the observation that spectral
statistics of Wigner matrices tend to be stable if one replaces just one or two entries of the matrix with another distribution, with the stability of the swapping process becoming stronger if one
assumes that the old and new entries have many matching moments. The main formalisations of this observation are known as four moment theorems, because they require four matching moments between the
entries, although there are some variant three moment theorems and two moment theorems in the literature as well. Our initial four moment theorems were focused on individual eigenvalues (and later
also to eigenvectors), but it was later observed by Erdos, Yau, and Yin that simpler four moment theorems could also be established for aggregate spectral statistics, such as the coefficients of the
Greens function, and Knowles and Yin also subsequently observed that these latter theorems could be used to recover a four moment theorem for eigenvalues and eigenvectors, giving an alternate
approach to proving such theorems.
Interestingly, it seems that the heat flow and swapping methods are complementary to each other; the heat flow methods are good at removing moment hypotheses on the coefficients, while the swapping
methods are good at removing regularity hypotheses. To handle general ensembles with minimal moment or regularity hypotheses, it is thus necessary to combine the two methods (though perhaps in the
future a third method, or a unification of the two existing methods, might emerge).
Besides the heat flow and swapping methods, there are also a number of other basic tools that are also needed in these results, such as local semicircle laws and eigenvalue rigidity, which are also
discussed in the survey. We also survey how universality has been established for wide variety of spectral statistics; the ${k}$-point correlation functions are the most well known of these
statistics, but they do not tell the whole story (particularly if one can only control these functions after an averaging in the energy), and there are a number of other statistics, such as
eigenvalue counting functions, determinants, or spectral gaps, for which the above methods can be applied.
In order to prevent the survey from becoming too enormous, we decided to restrict attention to Hermitian matrix ensembles, whose entries off the diagonal are identically distributed, as this is the
case in which the strongest results are available. There are several results that are applicable to more general ensembles than these which are briefly mentioned in the survey, but they are not
covered in detail.
We plan to submit this survey eventually to the proceedings of a workshop on random matrix theory, and will continue to update the references on the arXiv version until the time comes to actually
submit the paper.
Finally, in the survey we issue some errata for previous papers of Van and myself in this area, mostly centering around the three moment theorem (a variant of the more widely used four moment
theorem), for which the original proof of Van and myself was incomplete. (Fortunately, as the three moment theorem had many fewer applications than the four moment theorem, and most of the
applications that it did have ended up being superseded by subsequent papers, the actual impact of this issue was limited, but still an erratum is in order.)
Van Vu and I have just uploaded to the arXiv our short survey article, “Random matrices: The Four Moment Theorem for Wigner ensembles“, submitted to the MSRI book series, as part of the proceedings
on the MSRI semester program on random matrix theory from last year. This is a highly condensed version (at 17 pages) of a much longer survey (currently at about 48 pages, though not completely
finished) that we are currently working on, devoted to the recent advances in understanding the universality phenomenon for spectral statistics of Wigner matrices. In this abridged version of the
survey, we focus on a key tool in the subject, namely the Four Moment Theorem which roughly speaking asserts that the statistics of a Wigner matrix depend only on the first four moments of the
entries. We give a sketch of proof of this theorem, and two sample applications: a central limit theorem for individual eigenvalues of a Wigner matrix (extending a result of Gustavsson in the case
of GUE), and the verification of a conjecture of Wigner, Dyson, and Mehta on the universality of the asymptotic k-point correlation functions even for discrete ensembles (provided that we interpret
convergence in the vague topology sense).
For reasons of space, this paper is very far from an exhaustive survey even of the narrow topic of universality for Wigner matrices, but should hopefully be an accessible entry point into the subject
Van Vu and I have just uploaded to the arXiv our paper “Random matrices: Universality of eigenvectors“, submitted to Random Matrices: Theory and Applications. This paper concerns an extension of our
four moment theorem for eigenvalues. Roughly speaking, that four moment theorem asserts (under mild decay conditions on the coefficients of the random matrix) that the fine-scale structure of
individual eigenvalues of a Wigner random matrix depends only on the first four moments of each of the entries.
In this paper, we extend this result from eigenvalues to eigenvectors, and specifically to the coefficients of, say, the ${i^{th}}$ eigenvector ${u_i(M_n)}$ of a Wigner random matrix ${M_n}$. Roughly
speaking, the main result is that the distribution of these coefficients also only depends on the first four moments of each of the entries. In particular, as the distribution of coefficients
of eigenvectors of invariant ensembles such as GOE or GUE is known to be asymptotically gaussian real (in the GOE case) or gaussian complex (in the GUE case), the same asymptotic automatically holds
for Wigner matrices whose coefficients match GOE or GUE to fourth order.
(A technical point here: strictly speaking, the eigenvectors ${u_i(M_n)}$ are only determined up to a phase, even when the eigenvalues are simple. So, to phrase the question properly, one has to
perform some sort of normalisation, for instance by working with the coefficients of the spectral projection operators ${P_i(M_n) := u_i(M_n) u_i(M_n)^*}$ instead of the eigenvectors, or rotating
each eigenvector by a random phase, or by fixing the first component of each eigenvector to be positive real. This is a fairly minor technical issue here, though, and will not be discussed further.)
This theorem strengthens a four moment theorem for eigenvectors recently established by Knowles and Yin (by a somewhat different method), in that the hypotheses are weaker (no level repulsion
assumption is required, and the matrix entries only need to obey a finite moment condition rather than an exponential decay condition), and a slightly stronger conclusion (less regularity is needed
on the test function, and one can handle the joint distribution of polynomially many coefficients, rather than boundedly many coefficients). On the other hand, the Knowles-Yin paper can also handle
generalised Wigner ensembles in which the variances of the entries are allowed to fluctuate somewhat.
The method used here is a variation of that in our original paper (incorporating the subsequent improvements to extend the four moment theorem from the bulk to the edge, and to replace exponential
decay by a finite moment condition). That method was ultimately based on the observation that if one swapped a single entry (and its adjoint) in a Wigner random matrix, then an individual eigenvalue
${\lambda_i(M_n)}$ would not fluctuate much as a consequence (as long as one had already truncated away the event of an unexpectedly small eigenvalue gap). The same analysis shows that the projection
matrices ${P_i(M_n)}$ obeys the same stability property.
As an application of the eigenvalue four moment theorem, we establish a four moment theorem for the coefficients of resolvent matrices ${(\frac{1}{\sqrt{n}} M_n - zI)^{-1}}$, even when ${z}$ is on
the real axis (though in that case we need to make a level repulsion hypothesis, which has been already verified in many important special cases and is likely to be true in general). This improves on
an earlier four moment theorem for resolvents of Erdos, Yau, and Yin, which required ${z}$ to stay some distance away from the real axis (specifically, that ${\hbox{Im}(z) \geq n^{-1-c}}$ for some
small ${c>0}$).
Van Vu and I have just uploaded to the arXiv our paper “Random matrices: Localization of the eigenvalues and the necessity of four moments“, submitted to Probability Theory and Related Fields. This
paper concerns the distribution of the eigenvalues
$\displaystyle \lambda_1(M_n) \leq \ldots \leq \lambda_n(M_n)$
of a Wigner random matrix ${M_n}$. More specifically, we consider ${n \times n}$ Hermitian random matrices whose entries have mean zero and variance one, with the upper-triangular portion of the
matrix independent, with the diagonal elements iid, and the real and imaginary parts of the strictly upper-triangular portion of the matrix iid. For technical reasons we also assume that the
distribution of the coefficients decays exponentially or better. Examples of Wigner matrices include the Gaussian Unitary Ensemble (GUE) and random symmetric complex Bernoulli matrices (which equal $
{\pm 1}$ on the diagonal, and ${\pm \frac{1}{\sqrt{2}} \pm \frac{i}{\sqrt{2}}}$ off the diagonal). The Gaussian Orthogonal Ensemble (GOE) is also an example once one makes the minor change of setting the diagonal
entries to have variance two instead of one.
The most fundamental theorem about the distribution of these eigenvalues is the Wigner semi-circular law, which asserts that (almost surely) one has
$\displaystyle \frac{1}{n} \sum_{i=1}^n \delta_{\lambda_i(M_n)/\sqrt{n}} \rightarrow \rho_{sc}(x)\ dx$
(in the vague topology) where ${\rho_{sc}(x) := \frac{1}{2\pi} (4-x^2)_+^{1/2}}$ is the semicircular distribution. (See these lecture notes on this blog for more discussion of this law.)
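(As a quick numerical illustration of the law; this is a sketch only, with arbitrary size and seed:)

import numpy as np

rng = np.random.default_rng(4)
n = 2000
H = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = (H + H.conj().T) / 2                  # Hermitian; off-diagonal entries have variance one
lam = np.linalg.eigvalsh(H) / np.sqrt(n)
# fraction of normalised eigenvalues in [-1, 1] versus the integral of rho_sc there
print(np.mean(np.abs(lam) <= 1), 1 / 3 + np.sqrt(3) / (2 * np.pi))  # both ~ 0.609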
One can phrase this law in a number of equivalent ways. For instance, in the bulk region ${\epsilon n \leq i \leq (1-\epsilon) n}$, one almost surely has
$\displaystyle \lambda_i(M_n) = \gamma_i \sqrt{n} + o(\sqrt{n}) \ \ \ \ \ (1)$
uniformly in ${i}$, where the classical location ${\gamma_i \in [-2,2]}$ of the (normalised) ${i^{th}}$ eigenvalue ${\frac{1}{\sqrt{n}} \lambda_i}$ is defined by the formula
$\displaystyle \int_{-2}^{\gamma_i} \rho_{sc}(x)\ dx := \frac{i}{n}.$
The bound (1) also holds in the edge case (by using the operator norm bound ${\| M_n\|_{op} = (2+o(1)) \sqrt{n}}$, due to Bai and Yin), but for sake of exposition we shall restrict attention here
only to the bulk case.
From (1) we see that the semicircular law controls the eigenvalues at the coarse scale of ${\sqrt{n}}$. There has been a significant amount of work in the literature in obtaining control at finer
scales, and in particular at the scale of the average eigenvalue spacing, which is of the order of ${\sqrt{n}/n = n^{-1/2}}$. For instance, we now have a universal limit theorem for the normalised
eigenvalue spacing ${\sqrt{n}(\lambda_{i+1}(M_n) - \lambda_i(M_n))}$ in the bulk for all Wigner matrices, a result of Erdos, Ramirez, Schlein, Vu, Yau, and myself. One tool for this is the four
moment theorem of Van and myself, which roughly speaking shows that the behaviour of the eigenvalues at the scale ${n^{-1/2}}$ (and even at the slightly finer scale of ${n^{-1/2-c}}$ for some
absolute constant ${c>0}$) depends only on the first four moments of the matrix entries. There is also a slight variant, the three moment theorem, which asserts that the behaviour of the eigenvalues
at the slightly coarser scale of ${n^{-1/2+\epsilon}}$ depends only on the first three moments of the matrix entries.
It is natural to ask whether these moment conditions are necessary. From the result of Erdos, Ramirez, Schlein, Vu, Yau, and myself, it is known that to control the eigenvalue spacing ${\lambda_{i+1}
(M_n) - \lambda_i(M_n)}$ at the critical scale ${n^{-1/2}}$, no knowledge of any moments beyond the second (i.e. beyond the mean and variance) are needed. So it is natural to conjecture that the same
is true for the eigenvalues themselves.
The main result of this paper is to show that this is not the case; that at the critical scale ${n^{-1/2}}$, the distribution of eigenvalues ${\lambda_i(M_n)}$ is sensitive to the fourth moment, and
so the hypothesis of the four moment theorem cannot be relaxed.
Heuristically, the reason for this is easy to explain. One begins with an inspection of the expected fourth moment
$\displaystyle \sum_{i=1}^n {\bf E}(\lambda_i(M_n)^4) = {\bf E} \hbox{tr} M_n^4.$
A standard moment method computation shows that the right hand side is equal to
$\displaystyle 2n^3 + 2a n^2 + \ldots$
where ${a}$ is the fourth moment of the real part of the off-diagonal coefficients of ${M_n}$. In particular, a change in the fourth moment ${a}$ by ${O(1)}$ leads to a change in the expression ${\
sum_{i=1}^n {\bf E}(\lambda_i(M_n)^4)}$ by ${O(n^2)}$. Thus, for a typical ${i}$, one expects ${{\bf E}(\lambda_i(M_n)^4)}$ to shift by ${O(n)}$; since ${\lambda_i(M_n) = O(\sqrt{n})}$ on the
average, we thus expect ${\lambda_i(M_n)}$ itself to shift by about ${O(n^{-1/2})}$ by the mean-value theorem.
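To spell out the mean-value step (a restatement of the heuristic just given, not an additional claim): since ${\lambda_i(M_n) = O(\sqrt{n})}$ and ${{\bf E}(\lambda_i(M_n)^4)}$ shifts by ${O(n)}$,

$\displaystyle \delta \lambda_i \approx \frac{\delta(\lambda_i^4)}{4\lambda_i^3} = \frac{O(n)}{O(n^{3/2})} = O(n^{-1/2}).$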
To make this rigorous, one needs a sufficiently strong concentration of measure result for ${\lambda_i(M_n)}$ that keeps it close to its mean value. There are already a number of such results in the
literature. For instance, Guionnet and Zeitouni showed that ${\lambda_i(M_n)}$ was sharply concentrated around an interval of size ${O(n^\epsilon)}$ around ${\sqrt{n} \gamma_i}$ for any ${\epsilon >
0}$ (in the sense that the probability that one was outside this interval was exponentially small). In one of my papers with Van, we showed that ${\lambda_i(M_n)}$ was also weakly concentrated around
an interval of size ${O(n^{-1/2+\epsilon})}$ around ${\sqrt{n} \gamma_i}$, in the sense that the probability that one was outside this interval was ${O(n^{-c})}$ for some absolute constant ${c>0}$.
Finally, if one made an additional log-Sobolev hypothesis on the entries, it was shown by Erdos, Yau, and Yin that the average variance of ${\lambda_i(M_n)}$ as ${i}$ varied from ${1}$ to ${n}$
was of the size of ${O(n^{-c})}$ for some absolute ${c>0}$.
As it turns out, the first two concentration results are not sufficient to justify the previous heuristic argument. The Erdos-Yau-Yin argument suffices, but requires a log-Sobolev hypothesis. In our
paper, we argue differently, using the three moment theorem (together with the theory of the eigenvalues of GUE, which is extremely well developed) to show that the variance of each individual ${\
lambda_i(M_n)}$ is ${O(n^{-c})}$ (without averaging in ${i}$). No log-Sobolev hypothesis is required, but instead we need to assume that the third moment of the coefficients vanishes (because we want
to use the three moment theorem to compare the Wigner matrix to GUE, and the coefficients of the latter have a vanishing third moment). From this we are able to make the previous arguments rigorous,
and show that the mean ${{\bf E} \lambda_i(M_n)}$ is indeed sensitive to the fourth moment of the entries at the critical scale ${n^{-1/2}}$.
One curious feature of the analysis is how differently the median and the mean of the eigenvalue ${\lambda_i(M_n)}$ react to the available technology. To control the global behaviour of the
eigenvalues (after averaging in ${i}$), it is much more convenient to use the mean, and we have very precise control on global averages of these means thanks to the moment method. But to control
local behaviour, it is the median which is much better controlled. For instance, we can localise the median of ${\lambda_i(M_n)}$ to an interval of size ${O(n^{-1/2+\epsilon})}$, but can only
localise the mean to a much larger interval of size ${O(n^{-c})}$. Ultimately, this is because with our current technology there is a possible exceptional event of probability as large as ${O(n^
{-c})}$ for which all eigenvalues could deviate as far as ${O(n^\epsilon)}$ from their expected location, instead of their typical deviation of ${O(n^{-1/2})}$. The reason for this is technical,
coming from the fact that the four moment theorem method breaks down when two eigenvalues are very close together (less than ${n^{-c}}$ times the average eigenvalue spacing), and so one has to cut
out this event, which occurs with a probability of the shape ${O(n^{-c})}$. It may be possible to improve the four moment theorem proof to be less sensitive to eigenvalue near-collisions, in which
case the above bounds are likely to improve.
Van Vu and I have just uploaded to the arXiv our paper “Random covariance matrices: Universality of local statistics of eigenvalues“, to be submitted shortly. This paper draws heavily on the
technology of our previous paper, in which we established a Four Moment Theorem for the local spacing statistics of eigenvalues of Wigner matrices. This theorem says, roughly speaking, that these
statistics are completely determined by the first four moments of the coefficients of such matrices, at least in the bulk of the spectrum. (In a subsequent paper we extended the Four Moment Theorem
to the edge of the spectrum.)
In this paper, we establish the analogous result for the singular values of rectangular iid matrices ${M = M_{n,p}}$, or (equivalently) the eigenvalues of the associated covariance matrix ${\frac{1}
{n} M M^*}$. As is well-known, there is a parallel theory between the spectral theory of random Wigner matrices and those of covariance matrices; for instance, just as the former has asymptotic
spectral distribution governed by the semi-circular law, the latter has asymptotic spectral distribution governed by the Marcenko-Pastur law. One reason for the connection can be seen by noting that
the singular values of a rectangular matrix ${M}$ are essentially the same thing as the eigenvalues of the augmented matrix
$\displaystyle \begin{pmatrix} 0 & M \\ M^* & 0\end{pmatrix}$
after eliminating sign ambiguities and degeneracies. So one can view singular values of a rectangular iid matrix as the eigenvalues of a matrix which resembles a Wigner matrix, except that two
diagonal blocks of that matrix have been zeroed out.
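(This correspondence is easy to check numerically; a sketch with arbitrary dimensions:)

import numpy as np

rng = np.random.default_rng(5)
n, p = 6, 4
M = rng.standard_normal((n, p))
aug = np.block([[np.zeros((n, n)), M], [M.T, np.zeros((p, p))]])

sv = np.linalg.svd(M, compute_uv=False)
ev = np.linalg.eigvalsh(aug)
print(np.sort(sv))
print(np.sort(ev[ev > 1e-12]))   # the positive eigenvalues match the singular values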
The zeroing out of these elements prevents one from applying the entire Wigner universality theory directly to the covariance matrix setting (in particular, the crucial Talagrand concentration
inequality for the magnitude of a projection of a random vector to a subspace does not work perfectly once there are many zero coefficients). Nevertheless, a large part of the theory (particularly
the deterministic components of the theory, such as eigenvalue variation formulae) carry through without much difficulty. The one place where one has to spend a bit of time to check details is to
ensure that the Erdos-Schlein-Yau delocalisation result (that asserts, roughly speaking, that the eigenvectors of a Wigner matrix are about as small in ${\ell^\infty}$ norm as one could hope to get)
is also true in the covariance matrix setting, but this is a straightforward (though somewhat tedious) adaptation of the method (which is based on the Stieltjes transform).
As an application, we extend the sine kernel distribution of local covariance matrix statistics, first established in the case of Wishart ensembles (when the underlying variables are gaussian) by
Nagao and Wadati, and later extended to gaussian-divisible matrices by Ben Arous and Peche, to any distribution which matches one of these distributions up to four moments, which covers virtually
all complex distributions with independent iid real and imaginary parts, with basically the lone exception of the complex Bernoulli ensemble.
Recently, Erdos, Schlein, Yau, and Yin generalised their local relaxation flow method to also obtain similar universality results for distributions which have a large amount of smoothness, but
without any matching moment conditions. By combining their techniques with ours as in our joint paper, one should probably be able to remove both smoothness and moment conditions, in particular now
covering the complex Bernoulli ensemble.
In this paper we also record a new observation that the exponential decay hypothesis in our earlier paper can be relaxed to a finite moment condition, for a sufficiently high (but fixed) moment. This
is done by rearranging the order of steps of the original argument carefully.
|
{"url":"http://terrytao.wordpress.com/tag/four-moment-theorem/","timestamp":"2014-04-18T03:12:45Z","content_type":null,"content_length":"170244","record_id":"<urn:uuid:a45b1a0b-9477-4cb0-8e88-f03cacfab194>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00652-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Venice, CA Precalculus Tutor
...Attitude is everything.) Also, I am an actress, a yogi and a pianist; I've traveled all over and I make a mean spaghetti sauce (thanks, Mom!). If we don't make a good match, WyzAnt won't charge
you for the first session. Let's meet! I completed a 200 hour Yoga Teacher Training program at Rising...
26 Subjects: including precalculus, English, Spanish, reading
...I can teach all levels and ages; I have taught students from the 3rd grade all the way up to college. I love teaching math because math is all about understanding rules. Once you understand the
rules and follow the steps, it really becomes quite easy!
18 Subjects: including precalculus, reading, statistics, geometry
...These courses were in Linear Systems, Adaptive processing, Radar, and communications. I have been a mathematica user for more than 15 years. I've used it primarily to back up analysis that I've
done as an EE using other tools such as matlab or mathcad.
33 Subjects: including precalculus, chemistry, calculus, physics
I have my Bachelor and Master of Science in Mechanical Engineering from University of Southern California. I have 8+ years of experience tutoring students from the Beverly Hills school district in
most subject areas of math and science (pre-alegbra, algebra, geometry, trigonometry, precalculus, calculus, and chemistry). I'm passionate of making sure my students succeed in school.
21 Subjects: including precalculus, reading, chemistry, calculus
Hello, my name is David Angeles and I am currently attending California State University, Northridge to pursue a Major in Applied Mathematics. I want to be a math professor one day and help out
many students the way my teachers have helped me throughout the years. I have been tutoring for this website for almost one year and had the pleasure of meeting all types of people.
10 Subjects: including precalculus, calculus, geometry, algebra 1
|
{"url":"http://www.purplemath.com/venice_ca_precalculus_tutors.php","timestamp":"2014-04-18T03:52:21Z","content_type":null,"content_length":"24354","record_id":"<urn:uuid:fb9263d5-28d4-4ee2-97e3-63347ff18a77>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00295-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Intuition behind Thom class
The Thom class and Thom isomorphism theorem for oriented vector bundles are proven (at least to my knowledge) by induction on open covers and some manipulation with Mayer-Vietoris sequences.
What is the "actual reason" behind the existence of the Thom class? It seems strange that such an interesting class would exist just because some Mayer-Vietoris sequences routinely produce it.
There are many fine answers already so I'll just add this: if you want to understand the "actual reason" for the existence of something (like the Thom class), you probably ought to try to understand the way it can fail to exist. In this case, why do non-orientable bundles fail to have a Thom class (in ordinary cohomology with integer coefficients)? – Charles Rezk Sep 22 '12 at 15:43
It is easy to understand the existence of a Thom class by considering cellular cohomology. Let the given vector bundle be $E\to B$ with fibers of dimension $n$. One can assume without significant loss of generality that $B$ is a CW complex with a single 0-cell. The Thom space $T(E)$ is the quotient $D(E)/S(E)$ of the unit disk bundle of $E$ by the unit sphere bundle. One can give $T(E)$ a CW structure with $S(E)/S(E)$ as the only 0-cell and with an $(n+k)$-cell for each $k$-cell of $B$. These cells in $T(E)$ arise from pulling back the bundle $D(E)\to B$ via characteristic maps $D^k\to B$ for the $k$-cells of $B$. These pullbacks are products since $D^k$ is contractible.

In particular, $T(E)$ has a single $n$-cell and an $(n+1)$-cell for each 1-cell of $B$. There are no cells in $T(E)$ between dimension $0$ and $n$. The cellular boundary of an $(n+1)$-cell is 0 if $E$ is orientable over the corresponding 1-cell of $B$, and it is twice the $n$-cell in the opposite case. Thus $H^n(T(E);{\mathbb Z})$ is $\mathbb Z$ if $E$ is orientable and $0$ if $E$ is non-orientable. In the orientable case a generator of $H^n(T(E);{\mathbb Z})$ restricts to a generator of $H^n(S^n;{\mathbb Z})$ in the "fiber" $S^n$ of $T(E)$ over the 0-cell of $B$, hence the same is true for all the "fibers" $S^n$ and so one has a Thom class.
Thank you Allen, your answer clarifies things, especially for me, a non-expert in topology. I think following your simple $CW$-reasoning it should be more or less easy to see why cupping the cohomology of $B$ with the Thom class gives the Thom isomorphism. – Axel Sep 23 '12 at 8:48
One not-very-technical way to think of the Thom Isomorphism Theorem: if you have a vector bundle $p : E \to B$ and you remove the $0$-section $Z$ of the vector bundle from the Thom space $Th(p)$, you get a contractible space. So given a homology class in $H_* Th(p)$, the obstruction to trivializing it can be thought of as its intersection with $Z$. If there's no intersection, you're in the contractible space $Th(p) \setminus Z$. So the intersection of a homology class with $Z$ is tautologically the thing that keeps track of the homology class itself.

That's how I like to think of the Thom Isomorphism Theorem. So why is there a Thom class? Because you can intersect with $Z$. In cohomology this is cupping with the Thom class, since that's what intersections translate to in cohomology.
You are thinking in terms of ordinary cohomology, where Mayer-Vietoris patches together the always-present local orientations to produce a global one when you have it. It is more advanced, but maybe more illuminating, to note that the definition in general is intrinsically global. An $n$-plane bundle $p$ over a space $B$ has an associated sphere bundle $Sph(p)$ (by fiberwise one point compactification) with based fibers and thus a section. The quotient $Sph(p)/B$ is the Thom space $T$ of $p$. For a multiplicative cohomology theory $E$, a Thom class $\mu$ is an element of $\tilde{E}^n(T)$ whose restriction to $\tilde{E}^n(S^n_b)\cong \tilde{E}^0(S^0)$ is a unit in this ring for any $b\in B$, where $S^n_b$ is the fiber over $b$ in $Sph(p)$. This definition is admittedly mysterious. It suffices to give a Thom isomorphism and it is important geometrically, but the real explanation is more advanced and still not very well known. One should think of $E^*$ as represented by a ring spectrum $E$. Bundle theory naturally concerns spaces over $B$, or parametrized spaces. One can make sense of parametrized spectra over $B$, and one can even take the smash product of a parametrized space and a spectrum to obtain a parametrized spectrum. Thus one can make sense of $Sph(p)\wedge E$ as a spectrum over $B$. Of course, there is also a trivial spherical bundle $B\times S^n$ over $B$. It turns out that a Thom class as I defined it cohomologically is the same thing as a trivialization: an equivalence of parametrized spectra between $Sph(p)\wedge E$ and $(B\times S^n)\wedge E$. That is the geometric meaning. This is proven in the book Parametrized Homotopy Theory, by Sigurdsson and myself (available on my web page).
Tom, your comment is incomplete and needs editing. The theorem that is unstated needs its hypotheses (compatibility on intersections) as well as its statement. But of course the point that
local implies global fails for generalized cohomology is part of what I had in mind. (While the question implicitly refers to ordinary cohomology, that is not explicit, so it seemed not
unreasonable to give a general answer). – Peter May Sep 22 '12 at 17:11
Yes, it's hard that comments can't be edited. I carelessly lost one set of words in dividing the comment, and I also carelessly forgot to mention the local triviality hypothesis. – Tom
Goodwillie Sep 23 '12 at 15:45
I've deleted it now. Of course I like your advanced viewpoint; I just couldn't see it as an answer to the question. Your answer could be fleshed out to make the point that the (reduced)
ordinary cohomology of $S^n$ vanishes in degrees less than $n$ (i.e. that the coefficient groups of ordinary cohomology vanish in positive degrees), which is what makes the existence of a
Thom class follow from the existence of an orientation. The same point could be made in a less advanced way using a spectral sequence rather than parametrized spectra. – Tom Goodwillie Sep
23 '12 at 15:47
Even the case of an oriented vector bundle over a point, which is where the story begins, is nontrivial. In this case the Thom isomorphism is the Poincare duality for the cohomology with compact supports on an oriented vector space. Ultimately, the Thom isomorphism theorem is a special form of the Poincare-Verdier duality. The fact that the Mayer-Vietoris technique is used in the proof is an indication that the Thom isomorphism deals with the cohomologies of some sheaves.

If the base of the vector bundle is compact and oriented, then the Thom isomorphism is equivalent to the Poincare-Lefschetz duality for an oriented manifold with boundary, namely the unit disk bundle determined by the vector bundle.
The Thom class gives an orientation covector in every fiber $F\cong\mathbb R^n$ (of an oriented vector bundle), which can be thought of as a generator in $H^n(F-0)$. Using local trivializations such covectors are defined locally. One needs to prove that these covectors glue to a cohomology class on the total space (with the zero section deleted), and this is where Mayer-Vietoris becomes relevant. How else would you glue? Read the exposition in Milnor-Stasheff or Bott-Tu.
The idea behind the Thom isomorphism $\beta:H^iX \rightarrow H^{n+i}(DE,SE)$ is implicit in the formula $$\int_{\sigma_{n+i}} \beta(\alpha_i) = \int_{X\cap \sigma_{n+i}} \alpha_i$$ Here $\sigma_{n+i}$ is a singular simplex in $DE$ and we have written integration for the evaluation of a cochain on a sum of simplices. Also $X\subset DE$ is identified with the zero-section.

The problem with this formula is that it doesn't make sense in full generality: after all, $X\cap\sigma_{n+i}$ will not in general be a simplex again. And even if it is, it might be a simplex in many different ways (different parametrizations), so some choices must be made. These problems can be overcome, and this is the "miracle" of the Thom isomorphism.

Note that the right hand side also requires an "orientation" of $X\cap\sigma_{n+i}$. This is why you also require an orientation on $E$.

For the Thom class $\tau = \beta(1)$ itself this gives the characterization $$\langle \tau, \sigma_n\rangle = \sharp ( X \cap \sigma_n )$$ where the intersection points are counted with appropriate signs. (In $DE$ a generic $n$-simplex has a zero-dimensional intersection with the zero section.)

You might find it helpful to learn something about Thom classes in other (generalized) cohomology theories: in de Rham cohomology and K-theory there are pretty explicit representatives for the respective Thom classes. And nothing beats the elegance of Thom classes in cobordism theories, where you've got a "tautological" Thom class.
|
{"url":"http://mathoverflow.net/questions/107825/intuition-behind-thom-class/107864","timestamp":"2014-04-16T13:40:07Z","content_type":null,"content_length":"77636","record_id":"<urn:uuid:eca62d5d-f7b3-4e2e-a407-3114c3bc20cf>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00392-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Estimating population structure
The calculation of a genetic distance between two populations gives a relative estimate of the time that has passed since the populations existed as a single cohesive unit. Small estimations of
distance may indicate population substructure (i.e. subpopulations in which there is random mating, but between which there is a reduced amount of gene flow). However, small estimations of distance
may also be present because the populations are completely isolated but have only been separated for a short period of time. When two populations are genetically isolated, the two processes of
mutation and genetic drift lead to differentiation in the allele frequencies at selectively neutral loci. As the amount of time that two populations are separated increases, the difference in allele
frequencies should also increase until each population is completely fixed for separate alleles. A number of methods have been developed which estimate genetic distance from these allele frequency
differences. Estimations of genetic distance determined from differences in microsatellite allele frequencies can be summarised into the two categories, IAM and SMM/TPM based estimators.
Some of the more useful measures of population subdivision are the F-statistics developed by Wright (1965). F-statistics can be thought of as a measure of the correlation of alleles within individuals and are related to inbreeding coefficients. An inbreeding coefficient is really a measure of the nonrandom association of alleles within an individual. As such, F-statistics describe the amount of inbreeding-like effects within subpopulations, and within the entire population.

IAM and SMM/TPM based estimators of F-statistics
If there is no migration occurring between two subpopulations, eventually alternate alleles will become fixed. When Nm > 1 (where N is the effective population size and m is the fraction of migrants per generation), the allele frequencies in the subpopulations will remain homogenised (Wright 1931). If, however, migration is present but Nm < 1, an equilibrium based on the rates of mutation, migration, and genetic drift will be established.

Estimating Nm from microsatellite data
|
{"url":"http://helix.biology.mcmaster.ca/brent/node7.html","timestamp":"2014-04-20T00:57:09Z","content_type":null,"content_length":"5059","record_id":"<urn:uuid:36b7f259-fb40-4926-a04f-f22fee22cad1>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00109-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Tabular Solution Of The Cubic Equation
Edwin P. Russo, P.E.
Professor Emeritus
National Center for Advanced Mfg.
Carsie A. Hall
Associate Professor
Duane J. Jardine
Adjunct Professor
Mechanical Engineering Dept.
University of New Orleans
New Orleans, La.
William W. St. Cyr, P.E.
NASA Stennis Space Center
Stennis, Miss.
The general cubic equation given by:

x^3 + ax^2 + bx + c = 0 (1)
has roots that are solvable by classical techniques involving the computation of inverse cosines, cosines of multiple angles, and so forth. But equation 1 may be transformed into a simpler equation
with only one coefficient. Such a transformation resembles those given elsewhere, though the other transformations only reduce the number of coefficients from three to two, rather than to one.
Jahnke and Emde in 1945 derived a formulation giving a single coefficient with a short table of roots. But it needed a highly complicated graphical procedure to find the roots. The transformation
presented here as well reduces the coefficient count to one but gives a convenient means by which to tabularize the roots of the cubic equation, eliminating various approximate or tedious methods of
finding the roots.
Equation 1 may be transformed into:

X^3 + X + P = 0 (2)

where

D = (b - a^2/3)^(1/2) (3)

x = DX - a/3 (4)

P = (c - ab/3 + 2a^3/27)/D^3 (5)
The above transformation first defines a new variable, X, by the relation:

x = DX - K (6)
where D and K are arbitrary constants to be determined later. Substituting equation 6 into equation 1 gives:

D^3 X^3 + D^2 (a - 3K) X^2 + D(3K^2 - 2aK + b) X + (c - bK + aK^2 - K^3) = 0 (7)
The X^2 term is eliminated by requiring that K = a/3. D is now defined by requiring that the coefficients of X^3 and X be the same. Therefore:

D^3 = D(3K^2 - 2aK + b) (8)
or, after substituting K:

D^2 = b - a^2/3 (9)
Dividing equation 7 by D^3 and substituting for K gives an equation with only one coefficient, that is, equation 2. This technique also works to reduce the number of coefficients for higher-order
equations (quartic, for example).
An abbreviated Table of roots contains real values of P in rather large increments. Interested readers may wish to expand the Table to include finer increments of P. Note that the real part of the complex roots, X2 and X3, is simply -X1/2, and that X2 and X3 are complex conjugates. Examination of equation 5 shows that if a, b, and c are real, then the coefficient P can only be real (positive or negative) or purely imaginary; P is imaginary when b < a^2/3.
When P is imaginary (b < a^2/3) the cubic equation 2 may be rewritten in a more convenient form, namely:

Y^3 - Y - Q = 0 (10)
where P has been replaced by iQ and X replaced by iY (Q is real and i = √–1). That is:

Q = -iP and Y = -iX (11)

The roots of equation 1 are then recovered from

x = -(a^2/3 - b)^(1/2) Y - a/3 (12)
Table 2 lists the roots of equation 10 for various values of the coefficient Q. Only when Q is in the interval –√(4/27) < Q < √(4/27) will the roots of equation 10 all be real. These roots are listed in
Table 3.
After selecting the roots of equation 2 or 10 from the appropriate Table, it is a simple matter to obtain the roots for the general cubic equation 1 from equation 4 or 12.
Note when P is large:

X1 ≈ -P^(1/3) (13)
And when Q is large:

Y1 ≈ Q^(1/3) (14)
The Tables may be used to solve principal stresses, pump curves, eigenvalues, hydraulic jumps, control systems, spillway flow, moving wave/bores, setting initial conditions for Newton iterations, and
so on.
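A short numerical sketch of the reduction (an assumed implementation using NumPy's root finder in place of the Tables; it presumes b ≠ a^2/3 so that D ≠ 0):

import numpy as np

def solve_cubic(a, b, c):
    # x^3 + a x^2 + b x + c = 0  ->  X^3 + X + P = 0
    D = np.sqrt(complex(b - a**2 / 3.0))               # equation 9
    P = (c - a * b / 3.0 + 2.0 * a**3 / 27.0) / D**3   # equation 5
    X = np.roots([1.0, 0.0, 1.0, P])                   # roots of X^3 + X + P = 0
    return D * X - a / 3.0                             # equation 4

print(np.sort_complex(solve_cubic(a=-6.0, b=11.0, c=-6.0)))  # (x-1)(x-2)(x-3): ~ [1, 2, 3]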
|
{"url":"http://machinedesign.com/print/archive/tabular-solution-cubic-equation","timestamp":"2014-04-17T10:00:41Z","content_type":null,"content_length":"20186","record_id":"<urn:uuid:31cd182a-9d97-45d4-a519-57f78cda8bf2>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00580-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Question About Rindler Wedges - Physics and Mathematics
I have a question concerning this paper:
Would it make sense to consider that a quantum superposition of two Minkowski spacetime segments can occur within a "moment" at the junction of right and left Rindler Wedges ? That is, can we view
the concept of "moment in time" as the transformation of left to right Rindler Wedges ? All comments appreciated.
|
{"url":"http://www.scienceforums.com/topic/26419-question-about-rindler-wedges/","timestamp":"2014-04-18T08:01:50Z","content_type":null,"content_length":"81817","record_id":"<urn:uuid:5c63a4cf-b854-4ac3-9913-6c126c23f059>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00523-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Design flaw in Canadian Maple Reactor
Homer Simpson
I heard somewhere that to save on design effort they essentially just scaled up a previous smaller fully engineered design. I'm not too sure how true that is.
Of course that's what the Soviets did when they built the RBMK reactors like those at Chernobyl.
The Soviet RBMK reactor is a smaller Soviet nuclear weapons material production reactor scaled
up by a factor of 2. The mistake the Soviets made was not redesigning the fuel to go along with
the reduced neutron leakage afforded by the larger RBMK.
The Soviet RBMK thus had the wrong feedback characteristics; which led to the infamous accident.
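As a rough, back-of-the-envelope aside (my own numbers, not from the thread, and not actual RBMK parameters): in one-group diffusion theory the non-leakage probability is P_NL = 1/(1 + L^2 B^2), and the geometric buckling of a bare core scales as B^2 ~ 1/R^2, so scaling a core up by a factor of 2 cuts the leakage fraction several-fold. That reduced leakage is exactly the effect the fuel design has to be adjusted for.

# Illustrative only: one-group, bare spherical core. Shows how the
# non-leakage probability P_NL = 1/(1 + L2 * B2) rises as the core
# is scaled up (B2 = (pi/R)^2 for a bare sphere). The diffusion
# area L2 is a made-up placeholder, not an RBMK value.
import math

L2 = 300.0  # cm^2, assumed migration area (placeholder)

for R in (100.0, 200.0):          # core radius before and after a 2x scale-up
    B2 = (math.pi / R) ** 2       # geometric buckling of a bare sphere
    P_nl = 1.0 / (1.0 + L2 * B2)
    print(f"R = {R:5.0f} cm   leakage fraction = {1.0 - P_nl:.3f}")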
If they have a reactor with the wrong feedback characteristics, it's best not to pursue startup unless the problems are corrected. Perhaps the mistakes are so integral to the design that they can't be fixed without essentially scrapping the original design.
This means that they really don't have a good model for their design simulation. Somebody made a BIG MISTAKE somewhere - be it in processing the nuclear data, in the transport simulation software, or in the design calculations... who knows where the error is - but evidently the error is large.
I hope someone follows up on this; I'd be curious as to what part of the design process was faulty.
Dr. Gregory Greenman
|
{"url":"http://www.physicsforums.com/showthread.php?p=1951049","timestamp":"2014-04-17T09:48:35Z","content_type":null,"content_length":"56244","record_id":"<urn:uuid:021ab6a8-c693-46df-a8ec-ebbb148fa4b3>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00281-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Kernel of an \'etale isogeny of prime degree $\ell$ between elliptic curves
I have recently been trying to read Vatsal's paper "Multiplicative subgroups of $J_0(N)$ and applications to elliptic curves." He seems to use the following fact freely:

Let $E$ be a semistable elliptic curve defined over $\mathbb{Q}$, and suppose that $E$ admits a cyclic $\ell$-isogeny $\phi : E \to E'$. Then $\phi$ is étale if and only if the kernel of $\phi$ is isomorphic to $\mathbb{Z}/\ell\mathbb{Z}$ as a $\mathrm{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})$-module.
Because of my limited knowledge of étale isogenies, I cannot work out a proof of the above fact; in fact, I am not sure the statement is even true. If anyone can show me a proof of the above statement, it would be very helpful for understanding Vatsal's paper.

Moreover, I have failed to find reasonable references on étale isogenies. Any suggestions of good references would be appreciated.
elliptic-curves nt.number-theory ag.algebraic-geometry
He must be using some non-standard definition of etale. – Felipe Voloch Nov 4 '12 at 13:47
"We say that φ is étale if the extension [...] to Néron models is étale." – René Nov 4 '12 at 13:55
Proof of "only if": It suffices that ker($\phi$) is unramified at all primes, so we may work over the fraction field $K$ of the completed max. unramified extension $W$ of each $\mathbf{Z}_p$. For
4 good reduction $N(E_K)$ and $N(E'_K)$ are elliptic curves, so the etale ker($N(\phi_K)$) is finite, thus ker($\phi_K$) unramified (like $K$-fiber of any finite etale $W$-scheme). For mult.
reduction, Tate handles $\ell\ne p$, so assume $\ell=p$. Now $N(E_K)[p]$ contains $\mu = \mu_p$ and $\phi_K(\mu_K) \ne 0$ (or else $\mu\subset\ker(N(\phi_K))$, violating $W$-etaleness), so again
we win by Tate. – user27056 Nov 4 '12 at 16:42
Proof of "if": False if $\ell = 2$. Assume $\ell > 2$. May again work over $K$. Assume good reduction, so $\ker(N(\phi_K))$ is finite flat of order $\ell$ and $N(\phi_K)$ is etale iff its kernel
3 is etale. The case $\ell \ne p$ is clear, and $\ell = p$ follows by Raynaud's theorem (since $\ell > 2$). Assume mult. reduction. By translations, it's equivalent to show the isogeny between
formal groups over $W$ is etale. That is an endomorphism of the formal mult. group of degree 1 or $\ell$, and degree $\ell = p$ puts $\mu_p$ inside ker($\phi_K$), contradicting constancy (as $\ell
>2$). QED – user27056 Nov 4 '12 at 17:09
@Will: Choose a semistable $E$ with split multiplicative reduction at 2 and split 2-torsion over $\mathbf{Q}$. Check (by descent from $\mathbf{Q}_2$) that for a suitable point $P$ of order 2, the schematic closure of $\langle P \rangle$ in $N(E)_{\mathbf{Z}_2} = N(E_{\mathbf{Q}_2})$ is $\mu_2$, so the isogeny $E \rightarrow E' := E/\langle P \rangle$ is a counterexample (since Grothendieck's inertial criterion ensures that $E'$ is semistable at all primes). – user27056 Nov 4 '12 at 18:00
|
{"url":"http://mathoverflow.net/questions/111457/kernel-of-an-etale-isogeny-of-prime-degree-ell-between-elliptic-curves","timestamp":"2014-04-21T10:15:53Z","content_type":null,"content_length":"53778","record_id":"<urn:uuid:be2b32ce-6b07-43df-b51c-f0e5b7b03307>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00317-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Returns the k-th largest value in a data set. You can use this function to select a value based on its relative standing. For example, you can use LARGE to return the highest, runner-up, or
third-place score.
Array is the array or range of data for which you want to determine the k-th largest value.
K is the position (from the largest) in the array or cell range of data to return.
● If array is empty, LARGE returns the #NUM! error value.
● If k ≤ 0 or if k is greater than the number of data points, LARGE returns the #NUM! error value.
If n is the number of data points in a range, then LARGE(array,1) returns the largest value, and LARGE(array,n) returns the smallest value.
The example may be easier to understand if you copy it to a blank worksheet.
● Create a blank workbook or worksheet.
● Select the example in the Help topic.
Note Do not select the row or column headers.
Selecting an example from Help
● Press CTRL+C.
● In the worksheet, select cell A1, and press CTRL+V.
● To switch between viewing the results and viewing the formulas that return the results, press CTRL+` (grave accent), or on the Formulas tab, in the Formula Auditing group, click the Show Formulas button.
A    B
Data    Data
3    4
5    2
3    4
5    6
4    7
Formula    Description (Result)
=LARGE(A2:B6,3) 3rd largest number in the numbers above (5)
=LARGE(A2:B6,7) 7th largest number in the numbers above (4)
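Outside Excel, the same selection is easy to express in code. The following Python sketch (my own, approximating rather than exactly reproducing Excel's type-coercion rules) mirrors LARGE, including the #NUM!-style error cases.

# Minimal sketch of Excel's LARGE: return the k-th largest value,
# raising an error where Excel would return #NUM!.
import heapq

def large(values, k):
    # Keep only numbers; Excel ignores text and logical values in ranges.
    data = [v for v in values if isinstance(v, (int, float)) and not isinstance(v, bool)]
    if not data or k <= 0 or k > len(data):
        raise ValueError("#NUM!")            # mirrors Excel's error value
    return heapq.nlargest(k, data)[-1]       # k-th largest = last of the top k

cells = [3, 4, 5, 2, 3, 4, 5, 6, 4, 7]       # the A2:B6 range from the example
print(large(cells, 3))   # 5
print(large(cells, 7))   # 4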
|
{"url":"http://office.microsoft.com/en-us/excel-help/large-HP005209151.aspx?CTT=5&origin=HP005204211","timestamp":"2014-04-21T07:26:31Z","content_type":null,"content_length":"23931","record_id":"<urn:uuid:94659600-a91d-44f0-8639-2388f0c2d605>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00445-ip-10-147-4-33.ec2.internal.warc.gz"}
|