Mini Cube, the 2×2×2 Rubik's Cube
Links to other useful pages:
Meffert's sells the Mickey Mouse puzzle head, Pyramorphix, 2x2x2 cubes by Eastsheen, and many other puzzles.
Rubik's sells the original 2x2x2 cubes, Darth Maul and Homer puzzle heads.
Denny's Puzzle Pages
A very nice graphical solution.
Matthew Monroe's Page
Although a solution for the 3x3x3 cube, it is corners first, and thus applies to the pocket cube as well.
Philip Marshall's page
A relatively short solution.
This puzzle is a simpler version of the Rubik's Cube. It goes under various names, such as Mini Cube and Pocket Cube. The puzzle is built from 8 smaller cubes, i.e. a 2×2×2 cube. Each face can
rotate, which rearranges the 4 small cubes at that face. The six sides of the puzzle are coloured, so every small cube shows three colours.
This puzzle is equivalent to just the corners of the normal Rubik's cube. It is a little confusing though, because there are no face centres to use as a reference point.
This puzzle has many guises, some of which are pictured above. There are the puzzle heads such as the Mickey Mouse puzzle head by Meffert, and the Darth Maul or Simpson heads by Rubik. Also shown
above is a version in the shape of a Nissan Micra, used for advertising. These puzzles change shape when they are mixed, sometimes in quite amusing ways. There are various ball versions, such as the
Dreamball, the K8-ball, and the Octo. These balls have unique internal mechanisms. All these puzzles can be solved in the same way as the normal mini cube.
Another related puzzle is the pyramorphix, which is the tetrahedral puzzle shown above. It changes shape when mixed. It is like the cube but where the orientation of four of the pieces does not
matter. Because of this, it has a slightly easier solution than the other versions. The Pyramorphix and the equivalent Stern puzzle are covered in more depth on a separate Pyramorphix page.
The pocket cube was patented by Ernö Rubik on 29 March 1983, US 4,378,117.
The Eastsheen 2x2x2 cube was patented by Chen Sen Li on 27 October 1998, US 5,826,871.
The Mickey Mouse puzzle head was patented by Francisco Josa Patermann on 22 May 1996, EP 712,649.
The Darth Maul puzzle head was patented by Thomas Kremer (Seven Towns Ltd.) on 17 April 2001, US 6,217,023.
The K-ball was patented by Saleh Khoudary on 11 May 2000, WO 00/25874.
The number of positions:
There are 8 pieces, with 3 orientations each, giving a maximum of 8!·3^8 positions. This limit is not reached because:
• The total twist of the cubes is fixed (3)
• The orientation of the puzzle does not matter (24)
This leaves 7!·3^6 = 3,674,160 positions.
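The arithmetic is easy to verify; here is a quick Python check (my own sketch, not from the page):

```python
import math

raw = math.factorial(8) * 3**8     # 8 pieces permuted, 3 orientations each
print(raw // (3 * 24))             # divide out fixed twist (3) and orientation (24): 3674160
print(math.factorial(7) * 3**6)    # the 7!·3^6 form quoted above: 3674160
```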
Every position can be solved in at most 11 moves (or 14 if a half turn is considered to be two moves). Many people have used a computer search to find God's Algorithm, i.e. the shortest solution for
each position, as far back as 1981. The result for both metrics is shown in the following table:
Quarter turn metric down the rows, face turn metric across the columns ("-" marks an empty cell):

QTM | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | Total
0 | 1 | - | - | - | - | - | - | - | - | - | - | - | 1
1 | - | 6 | - | - | - | - | - | - | - | - | - | - | 6
2 | - | 3 | 24 | - | - | - | - | - | - | - | - | - | 27
3 | - | - | 24 | 96 | - | - | - | - | - | - | - | - | 120
4 | - | - | 6 | 144 | 384 | - | - | - | - | - | - | - | 534
5 | - | - | - | 72 | 768 | 1,416 | - | - | - | - | - | - | 2,256
6 | - | - | - | 9 | 564 | 3,608 | 4,788 | - | - | - | - | - | 8,969
7 | - | - | - | - | 126 | 3,600 | 15,072 | 14,260 | - | - | - | - | 33,058
8 | - | - | - | - | 5 | 1,248 | 18,996 | 57,120 | 36,780 | - | - | - | 114,149
9 | - | - | - | - | - | 120 | 9,480 | 85,548 | 195,400 | 69,960 | - | - | 360,508
10 | - | - | - | - | - | - | 1,728 | 55,548 | 341,604 | 487,724 | 43,984 | - | 930,588
11 | - | - | - | - | - | - | 72 | 14,016 | 238,920 | 841,500 | 256,248 | 96 | 1,350,852
12 | - | - | - | - | - | - | - | 1,032 | 56,440 | 455,588 | 268,532 | 944 | 782,536
13 | - | - | - | - | - | - | - | 12 | 928 | 32,968 | 54,876 | 1,496 | 90,280
14 | - | - | - | - | - | - | - | - | - | 8 | 160 | 108 | 276
Total | 1 | 9 | 54 | 321 | 1,847 | 9,992 | 50,136 | 227,536 | 870,072 | 1,887,748 | 623,800 | 2,644 | 3,674,160
In Sloane's On-Line Encyclopedia of Integer Sequences these are included as sequences A079761 and A079762.
This number is relatively small, so a computer can very quickly search through all positions to find the shortest possible solution for any given position. You can play a Javascript version which
includes such a solver by clicking below.
Links to other useful pages:
Rubik's, the original manufacturer of this puzzle.
Let the faces be denoted by the letters L, R, F, B, U and D (Left, Right, Front, Back, Up and Down). Clockwise quarter turns of a face are denoted by the appropriate letter, anti-clockwise quarter
turns by the letter with an apostrophe (i.e. L', R', F', B', U' or D'). Half turns are denoted by the letter followed by a 2 (i.e. L2, R2, F2, B2, U2 or D2).
Note that on this cube the move sequence LR' is simply a rotation of the whole cube. Similarly FB' or UD'. Therefore moving only three adjacent faces is enough to solve the cube. If we only use the F, R and
D faces to solve it, then the UBL corner is never moved from its place, and the other pieces are simply placed so they line up with it. The first solution below will use all the faces because it is
based on a solution for the normal Rubik's Cube.
Solution 1:
Phase 1:
Solve the top layer.
a. Decide which colour you want to place on the top face. Then hold the cube so that at least one piece already shows that colour on the top face. The remaining pieces on the top face will be placed
relative to that piece.
b. Find a piece in the D layer that belongs in the U layer. Rotate D so that the piece is below its destination and rotate the whole cube to get it to the front right. If all the U layer pieces are already in the top layer, though some are wrongly placed, then choose any D piece to displace any wrong piece from the U layer.
c. Depending on its orientation, do one of the following:
1. To move FRD->URF do FDF'
2. To move RDF->URF do R'D'R
3. To move DFR->URF do FD'F'R'D2R
d. Repeat b-c until the top layer is done.
Phase 2: Place the bottom layer pieces, ignoring orientation.
a. Rotate the D layer to get at least two pieces in the correct position, though they may be twisted.
b. If necessary swap two pieces by using the following:
1. To swap DFR, DBL do F'R'D'RDFD'
2. To swap DFR, DFL do FDF'D'R'D'R
Phase 3: Twist the bottom layer pieces correctly.
a. Do one of the following sequences to solve the remaining pieces. Clockwise twists are denoted by a +, anti-clockwise ones by a -.
1. To twist DFL-, DBL+ do R'D'R F' DR'DR D2F2
2. To twist DFL+, DBL- do F2D2 R'D'RD' F R'DR
3. To twist DFL-, DBR+ do R2D'R D2R'D2R D'R2D
4. To twist DFR-, DRB-, DBL- do R'D'RD' R'D2RD2
5. To twist DFR+, DRB+, DBL+ do D2R'D2R DR'DR
6. To twist DFR-, DRB+, DBL-, DLF+ do R2D2 R D2R2 D
7. To twist DFR+, DRB-, DBL-, DLF+ do RDF R2D2F2 DF'DR2
This solution takes at most 24+8+10=42 moves.
Solution 2:
This solution is loosely based on the Thistlethwaite algorithm adapted for the 2×2×2 cube. It uses fewer moves than solution 1, but is far more complicated since it uses a large table of sequences.
Phase 1: Orient all pieces correctly.
a. Choose the colours which will go on the top and bottom faces of the cube.
b. Hold the cube so that at least three of the pieces show one of those two colours on the top and bottom faces.
c. Examine which pieces need to be twisted and in which direction to make the top/bottom faces a mixture of the two chosen colours.
d. The following table lists all possible sequences needed to fix the orientations. The left column shows the twist the pieces need, in the order ULB, UBR, URF, UFL, DBR, DRF, DFL, DLB, where a +
means it needs a clockwise twist, a - an anticlockwise twist.
-0+00000 FRU2R'F
-+000000 R2URF'UF
+-000000 RU2RU2R
0---0000 R2U2F'RF
--++0000 RF'R2F2UR
-+-+0000 F2U2F
+000-000 FU'RF
00+0-000 R'FR'F
--00-000 R'UR
0-0--000 FRF
0-++-000 R2U'F
0+-+-000 FR2F
+-0+-000 R2F2U'F
--+--000 R'F2UR
++++-000 RU'R'FR2F
0+0+-0-0 R2F
0++0-0-0 RUF2U'R
++00-0-0 R'UR2FR
-0-+-0-0 RFUR
+0---0-0 FUR2F
0+---0-0 R'F2R'F
0--+-0-0 FRUF
00++--00 R2U2F2UR
0+0+--00 R'URU2R
0++0--00 FR2F2RF
+0+0--00 RU2FU'R
++00--00 RUF2R2F
+0----00 FRFU2R
0+----00 R2FR2U'R
-+0---00 FR2F'UR
-+-+--00 R'F'U2F
0--+--00 RUF'U'F
0-+---00 FR2F'R
--0+--00 R'U'RU2R
+--0--00 RU2RU'R
--+0--00 F2R'U2F'R
0+-0-+00 F2R2F
-0+0-+00 R'UR2U'R
-+00-+00 R2F'U2F
+0-0-+00 FU'R2U'R
0----+00 FR2U'R
---0-+00 RF'U2F
0-+0+-00 R
+0-0+-00 R'F'U'F
+-00+-00 F2R'U2R
-0--+-00 RF2UR
00+--0+0 RF'RF2R
00-+-0+0 FRFR
-0+0-0+0 R2UF2R
+-00-0+0 FR'U2RF
-00+-0+0 R2U'R'F2R
--0--0+0 R'U2F'R
---0-0+0 F'RU2F
Phase 2: Put the pieces in position.
a. Find a tetrad of pieces, i.e. 4 pieces which in the solved cube are not adjacent (i.e. any two of them have exactly 1 colour in common).
b. Rotate U/D to get these pieces into one of the following patterns and do the sequence shown:
UFL, ULB, DLF, DRB: R2 U R2 U F2
UFL, ULB, UBR, DLF: R2 U F2
UFL, UBR, URF, DLF: F2 U' F2
UFL, UBR, DLF, DBR: R2 U2 F2
ULB, URB, DLF, DFR: F2
This leaves the tetrad in the U layer, and the other tetrad in the D layer.
c. Rotate U to get as many pieces from the U layer adjacent to matching pieces in the bottom layer, i.e. forming as many correct columns as possible.
d. If there are 4 columns, then do UF2U2R2U; if there are 2 adjacent columns then holding the cube with those columns in the B face do the sequence F2U'F2U2R2UF2U; if there are 2 non-adjacent
columns, then holding the cube with those columns at FR and BL do the sequence R2U2F2U.
e. The cube can now be solved by using only half turns, and at most 4 are needed. This is trivial.
This solution takes at most 24 moves (not including cancellations between the steps in phase 2).
Nice patterns:
The following patterns are symmetric, but don't look as nice.
Describe The Kinetic And Potential Energy At Each ... | Chegg.com
kinetic energy
1. Describe the kinetic and potential energy at each point of the roller coaster path:
1. Between which points is the force of gravity doing positive work on the coaster? Negative work?
2. What happens to the roller coaster’s kinetic energy between points B and C? What happens to its potential energy between these points?
3. Why is it important for A to be higher than C?
4. If the roller coaster starts at point A, can it ever go higher than this point? What causes the roller coaster train to lose energy over its trip?
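(These are not the posted answers, just the standard relation every part turns on.) Neglecting friction, the coaster's total mechanical energy is conserved along the track:

$$E = KE + PE = \tfrac{1}{2}mv^2 + mgh = \text{constant}$$

Kinetic energy therefore grows exactly as potential energy falls: gravity does positive work wherever the track descends and negative work wherever it climbs, and with no motor after the initial lift the coaster cannot rise above its starting height. Friction and air drag only lower that ceiling as the trip goes on.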
Māris Ozols
I am a mathematician and currently I work as a post-doctoral researcher at the University of Cambridge. Before this I spent a year as a post-doctoral researcher at the IBM T.J. Watson Research Center. My research is on quantum computing and quantum information. I have a PhD in mathematics from the University of Waterloo (supervised by Andrew Childs and Debbie Leung), where I was affiliated with the Institute for Quantum Computing. I was twice a summer intern at NEC Labs in Princeton. I did my undergraduate degree in computer science at the University of Latvia.
Selected publications (full list)
• Andrew M. Childs, Debbie Leung, Laura Mančinska, Maris Ozols, A framework for bounding nonlocality of state discrimination, Commun. Math. Phys. 323(3), pp. 1121–1153 (2013) [arXiv:1206.5822]
• Debbie Leung, Laura Mancinska, William Matthews, Maris Ozols, Aidan Roy, Entanglement can increase asymptotic rates of zero-error classical communication over classical channels, Commun. Math.
Phys. 311(1), pp. 97–111 (2012) [arXiv:1009.1195]
• Maris Ozols, Martin Roetteler, Jérémie Roland, Quantum rejection sampling, ACM Transactions on Computation Theory - Special issue on innovations in theoretical computer science 2012, Volume 5,
Issue 3, Article No. 11 (2013) [arXiv:1103.2774]
Pennsauken Geometry Tutors
...I believe that geometry is among the hardest subjects to teach and to tutor. But... I love it! This is the math course that drives high school students crazy, even more so than calculus.
23 Subjects: including geometry, English, calculus, algebra 1
...I look forward to helping you ace whatever math material you would like to master. In addition, I would be glad to help you with any computer questions you might have, so please reach out if
you would like any assistance with setting up your computer or network, or if you just want to make sure ...
21 Subjects: including geometry, reading, calculus, physics
...I missed one question on the reading and two on the math section. I desire to tutor people who have the desire to do well on the SAT and do not feel like dealing with the snobbery that may
occur with a professional tutor. I tutored many of my friends on the SAT and saw improvements in their score.
16 Subjects: including geometry, reading, chemistry, physics
Your search for an experienced and knowledgeable tutor ends here. I have been coaching school teams for math league contests and have coached school literature groups in preparation for Battle of the Books contests. I enjoy teaching math and language arts. Although my specialty lies in elementary m...
15 Subjects: including geometry, English, reading, writing
...I have a degree in mathematics and a masters in education, so I have the technical and instructional skills to help any student. I have been teaching math at a top rated high school for the
last 10 years and my students are always among the top performers in the school. My goal is to provide students with the skills, organization, and confidence to become independent mathematics
15 Subjects: including geometry, calculus, algebra 1, GRE
Doraville, GA SAT Math Tutor
Find a Doraville, GA SAT Math Tutor
...She is really my one weapon in my arsenal I couldn't do without. I wouldn't have passed calculus without her! I plan on using her for Calculus II and Calculus III as well and am not nearly as
anxiety ridden about it as I was before I met her." - Calculus I Student If the above sounds like somebody you want to learn from, just let her know!
22 Subjects: including SAT math, reading, calculus, writing
...I am currently a graduate student in a joint program between Emory and Georgia Tech pursuing a PhD in biomedical engineering. I got my bachelor's from Vanderbilt in Nashville, TN, but I went
to high school in Gwinnett County here in Atlanta. I was valedictorian of my high school, got a 35 on th...
17 Subjects: including SAT math, chemistry, physics, writing
...ADD/ADHD 5. Significant gaps in basic knowledge/skills, students who are more than 5 years behind current grade level in reading/writing 6. MLD I taught high school for five years and have
helped countless students as they navigate college writing.
22 Subjects: including SAT math, reading, English, GED
...Thanks.) My name is Anthon and I'm excited to have the opportunity to be your or your child's tutor. I have tutored for more than 10 years and have helped hundreds of students improve their
test scores and grades and I have been rewarded as a top 100 WyzAnt tutor nationwide (50,000+ tutors). ...
19 Subjects: including SAT math, physics, calculus, geometry
...I have taken and successfully passed Math up to Calculus 2. In the past, I have tutored Middle School Algebra and High School (9th grade Pre-Algebra). The students I taught, successfully
passed their class with an A. I have experience tutoring K-5th grades for 4 years at two different elementary schools.
29 Subjects: including SAT math, chemistry, reading, Spanish
Related Doraville, GA Tutors
Doraville, GA Accounting Tutors
Doraville, GA ACT Tutors
Doraville, GA Algebra Tutors
Doraville, GA Algebra 2 Tutors
Doraville, GA Calculus Tutors
Doraville, GA Geometry Tutors
Doraville, GA Math Tutors
Doraville, GA Prealgebra Tutors
Doraville, GA Precalculus Tutors
Doraville, GA SAT Tutors
Doraville, GA SAT Math Tutors
Doraville, GA Science Tutors
Doraville, GA Statistics Tutors
Doraville, GA Trigonometry Tutors
Nearby Cities With SAT math Tutor
Berkeley Lake, GA SAT math Tutors
Chamblee, GA SAT math Tutors
Clarkston, GA SAT math Tutors
Conyers SAT math Tutors
Duluth, GA SAT math Tutors
Dunwoody, GA SAT math Tutors
Embry Hls, GA SAT math Tutors
Lilburn SAT math Tutors
Norcross, GA SAT math Tutors
North Atlanta, GA SAT math Tutors
Roswell, GA SAT math Tutors
Sandy Springs, GA SAT math Tutors
Stone Mountain SAT math Tutors
Suwanee SAT math Tutors
Tucker, GA SAT math Tutors
Gödel Numbering: Encoding Logic as Numbers
The first step in Gödel's incompleteness proof was finding a way of taking logical statements and encoding them numerically. Looking at this today, it seems sort-of obvious. I mean, I'm writing this
stuff down in a text file - that text file is a stream of numbers, and it's trivial to convert that stream of numbers into a single number. But when Gödel was doing it, it wasn't so obvious. So he
created a really clever mechanism for numerical encoding. The advantage of Gödel's encoding is that it makes it much easier to express properties of the encoded statements numerically.
Before we can look at how Gödel encoded his logic into numbers, we need to look at the logic that he used. Gödel worked with the specific logic variant used by the Principia Mathematica. The
Principia logic is minimal and a bit cryptic, but it was built for a specific purpose: to have a minimal syntax, and a complete but minimal set of axioms.
The whole idea of the Principia logic is to be purely syntactic. The logic is expected to have a valid model, but you shouldn't need to know anything about the model to use the logic. Russell and
Whitehead were deliberately building a pure logic where you didn't need to know what anything meant to use it. I'd really like to use Gödel's exact syntax - I think it's an interestingly different
way of writing logic - but I'm working from a translation, and the translator updated the syntax. I'm afraid that switching between the older Gödel syntax, and the more modern syntax from the
translation would just lead to errors and confusion. So I'm going to stick with the translation's modernization of the syntax.
The basic building blocks of the logic are variables. Already this is a bit different from what you're probably used to in a logic. When we think of logic, we usually consider predicates to be a
fundamental thing. In this logic, they're not. A predicate is just a shorthand for a set, and a set is represented by a variable.
Variables are stratified. Again, it helps to remember where Russell and
Whitehead were coming from when they were writing the Principia. One of their basic motivations was avoiding self-referential statements like Russell's paradox. In order to prevent that, they thought
that they could create a stratified logic, where on each level, you could only reason about objects from the level below. A first-order predicate would be a second-level object which could only reason about first-level objects. A second-order predicate would be a third-level object which could reason about second-level objects. No predicate could ever reason about itself or anything on its own level. This leveling property is a fundamental property built into their logic. The way the levels work is:
• Type one variables, which range over simple atomic values, like
specific single natural numbers. Type-1 variables are written as
a[1], b[1].
• Type two variables, which range over sets of atomic values, like
sets of natural numbers. A predicate, like IsOdd, about specific natural numbers would be represented as a type-2 variable. Type-2 variables are
written a[2], b[2], ...
• Type three variables range over sets of sets of atomic values.
The mappings of a function could be represented as type-3 variables:
in set-theoretic terms, a function is a set of ordered pairs. Ordered pairs, in
turn, can be represented as sets of sets - for example, the
ordered pair (1, 4) would be represented by the set { {1}, {1, 4} }.
A function, in turn, would be represented by a type-4 variable - a set
of ordered pairs, which is a set of sets of sets of values.
Using variables, we can form simple atomic expressions, which in Gödel's terminology are called signs. As with variables, the signs are divided into stratified levels:
• Type-1 signs are variables, and successor expressions. Successor expressions
are just Peano numbers written with "succ": 0, succ(0), succ(succ(0)),
succ(a[1]), etc.
• Signs of any type greater than 1 are just variables of that type/level.
Now we can assemble the basic signs into formulae. Gödel explained how to build formulae in a classic logicians form, which I think is hard to follow, so I've converted it into a grammar:
elementary_formula → sign[n+1](sign[n])
formula → ¬(elementary_formula)
formula → (elementary_formula) ∨ (elementary_formula)
formula → ∀ sign[n] (elementary_formula)
That's all that's really included in Gödel's logic. It's enough: everything else can be defined in terms of combinations of these. For example, you can define logical and in terms of negation and
logical or: (a ∧ b) ⇔ ¬ (¬ a ∨ ¬ b).
Next, we need to define the axioms of the system. In terms of logic the way I think of it, these axioms include both "true" axioms, and the inference rules defining how the logic works. There are
five families of axioms.
1. First, there's the Peano axioms, which define the natural numbers.
1. ¬(succ(x[1]) = 0): 0 is a natural number, and it's not the successor of anything.
2. succ(x[1]) = succ(y[1]) ⇒ x[1] = y[1]: Successors are unique.
3. (x[2](0) ∧ ∀ x[1] (x[2](x[1]) ⇒ x[2](succ(x[1])))) ⇒ ∀ x[1](x[2](x[1])): induction works on the natural numbers.
2. Next, we've got a set of basic inference rules about simple propositions. These
are defined as axiom schemata, which can be instantiated for any set of formalae
p, q, and r.
1. p ∨ p ⇒ p
2. p ⇒ p ∨ q
3. p ∨ q ⇒ q ∨ p
4. (p ⇒ q) ⇒ (p ∨ r) ⇒ (q ∨ r)
3. Axioms that define inference over quantification. v is a variable,
a is any formula, b is any formula where v is not a free variable, and c is a sign of the same level as v, and
which doesn't have any free variables that would be bound if it were inserted as
a replacement for v.
1. ∀ v (a) ⇒ subst[v/c](a): if formula a is true for all values of v, then you can substitute any specific value c for v in a, and a must still be true.
2. (∀ v (b ∨ a)) ⇒ (b ∨ ∀ v (a))
4. The Principia's version of the set theory axiom of comprehension:
∃ u (∀ v ( u(v) ⇔ a )).
5. And last but not least, an axiom defining set equivalence:
∀ x[i] (x[i+1](x[i]) ⇔ y[i+1](y[i])) ⇒ x[i+1] = y[i+1]
So, now, finally, we can get to the numbering. This is quite clever. We're going to use the simplest encoding: for every possible string of symbols in the logic, we're going to define a
representation as a number. So in this representation, we are not going to get the property that every natural number is a valid formula: lots of natural numbers won't be. They'll be strings of
nonsense symbols. (If we wanted to, we could make every number be a valid formula, by using a parse-tree based numbering, but it's much easier to just let the numbers be strings of symbols, and
then define a predicate over the numbers to identify the ones that are valid formulae.)
We start off by assigning numbers to the constant symbols:
Symbol   Numeric representation
0        1
succ     3
¬        5
∨        7
∀        9
(        11
)        13
Variables will be represented by powers of prime numbers, for prime numbers greater than 13. For a prime number p, p will represent a type one variable, p^2 will represent a type two variable, p^3 will represent a type-3 variable, etc.
Using those symbol encodings, we can take a formula written as a sequence of symbols x[1]x[2]x[3]...x[n], and encode it numerically as the product 2^x[1] · 3^x[2] · 5^x[3] · ... · p[n]^x[n], where p[n] is the nth prime number.
For example, suppose I wanted to encode the formula: ∀ x[1] (y[2](x[1]) ∨ x[2](x[1])).
First, I'd need to encode each symbol:
1. "∀" would be 9.
2. "x[1]"" = 17
3. "(" = 11
4. "y[2]" = 19^2 = 361
5. "(" = 11
6. "x[1]" = 17
7. ")" = 13
8. "∨" = 7
9. "x[2]" = 17^2 = 289
10. "(" = 11
11. "x[1]" = 17
12. ")" = 13
13. ")" = 13
The formula would thus be turned into the sequence: [9, 17, 11, 361, 11, 17, 13, 7, 289, 11, 17, 13, 13]. That sequence would then get turned into a single number
2^9 3^17 5^11 7^361 11^11 13^17 17^13 19^7 23^289 29^11 31^17 37^13 41^13, which according to Hugs is the number (warning: you need to scroll to see it. a lot!):
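The number itself is far too long to print here, but it is easy to recompute. Here is a minimal Python sketch (my own illustration; the post itself used the Haskell interpreter Hugs) that builds the Gödel number of any sequence of symbol codes:

```python
def primes():
    """Yield 2, 3, 5, 7, ... by trial division (fine for short formulae)."""
    found = []
    c = 2
    while True:
        if all(c % p != 0 for p in found):
            found.append(c)
            yield c
        c += 1

def goedel_number(codes):
    """Encode a sequence of symbol codes as 2^c1 * 3^c2 * 5^c3 * ..."""
    n = 1
    for p, c in zip(primes(), codes):
        n *= p ** c
    return n

# The example formula:  ∀ x1 ( y2 ( x1 ) ∨ x2 ( x1 ) )
codes = [9, 17, 11, 361, 11, 17, 13, 7, 289, 11, 17, 13, 13]
print(goedel_number(codes))   # prints the enormous number alluded to above
```

Decoding works in reverse: factor the number, and the exponent of the nth prime is the code of the nth symbol.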
Next, we're going to look at how you can express interesting mathematical properties in terms of numbers. Gödel used a property called primitive recursion as an example, so we'll walk through a
definition of primitive recursion, and show how Gödel expressed primitive recursion numerically.
Thanks a lot for this series of posts.
In your grammar, the productions for formula should be recursive and a base case should be added:
formula → elementary_formula
I am looking forward to the next post in the series.
I think your grammar needs to be modified.
Rather than formula → ¬(elementary_formula), I think you want formula → ¬(formula). Similarly change the third and fourth productions, and add a production rule formula → elementary_formula.
Otherwise, you can't make a formula like ¬(¬(...)).
Very cool. I once computed a number that was an instance of Goedel's theorem VI. Hilarity ensued. (Seriously, if anyone cares, I'll share. I used a positional versus an exponential numbering technique to keep the size manageable.)
A random selection algorithm
2008-06-27 | Filed Under Math, Programming
Suppose you want to select k things randomly from a list of n items, with no duplicates. Of course, if k > n, then it’s hopeless, and if k = n it’s trivial, so the interesting case is when k < n. A
naive approach might pick an item randomly, and try again if it’s a duplicate.
var selected = new List();
while( len(selected) < k ) {
    var newIndex = floor(random() * n);
    if ( items[newIndex] not in selected ) {
        selected.append( items[newIndex] );
    }
}
This may work fine if k is much smaller than n, but when k is large and close to n, this can be disastrously slow. Near the end, most of the items are already selected, so we keep picking items and
having to “throw them back”. The alternative of selecting which items to omit (instead of selecting the ones to keep) fails in the same way for small values of k.
Instead, there’s a simple trick that is guaranteed to never require more than n random numbers. As we get to each item, we select it or not with the “correct” probability. Here is the pseudocode:
var selected = new List();
var needed = k;
var available = n;
var position = 0;
while( len(selected) < k ) {
    if( random() < needed / available ) {
        selected.append( items[position] );
        needed -= 1;
    }
    available -= 1;
}
Handy trick… I’ve needed this algorithm quite a few times.
2 Responses to “A random selection algorithm”
1. Shawn Poulson on 2008-06-28 12:15 pm
Hmm, it looks like you’re not incrementing position. This code will retrieve the first item in each iteration.
I’d recommend replacing items[position] with items[available-1] and ditch position altogether.
Very good code snippet. I think I’ll keep it.
2. Jeff on 2008-12-08 5:51 pm
I agree with Shawn. It looks like there is something wrong with the position variable. But items[available-1] does not fit either, because then you would pick the same element twice if your random condition is true twice in a row:
if( random() < needed / available )
So you would need something like this:
But this way of selection is not uniformly distributed: the probability would increase toward the end of the while-loop, so the elements at the end would always have a better chance to get into the array than the first element.
Also this approach has a very simple problem: the items are only picked randomly, but the order stays the same. I mean, if I use this to take 3 elements from: 1 2 3 4 5 6 7 8 9
I could get: 1 5 9
but never: 1 9 5
I would suggest generating a random number for every element from your list, and then sorting the array by this number. After that you can just take the first k elements from your list. If you use round(256*rand()) as your random number (or any other constant number * rand()) and radix sort as your sorting algorithm, you can arrange this in linear time, and it stays deterministic.
Of course, it is not as easy to implement as your snippet, so I would use my suggestion only if you really need a uniformly distributed random selection. (=you work for a lottery) ;)
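Folding in the fix from the first comment (the position must advance on every pass through the loop), a runnable version might look like this in Python; this is an illustrative sketch, not the original author's code:

```python
import random

def sample_in_order(items, k):
    """Pick k distinct items uniformly at random, preserving list order."""
    selected = []
    needed, available = k, len(items)
    for item in items:             # 'position' advances implicitly each pass
        if needed == 0:
            break
        if random.random() < needed / available:
            selected.append(item)  # keep with probability needed/available
            needed -= 1
        available -= 1
    return selected

print(sample_in_order(list(range(1, 10)), 3))   # e.g. [1, 5, 9], never [1, 9, 5]
```

Each item is kept with probability needed/available at the moment it is examined, which works out to k/n overall, so every k-element subset is equally likely; as the second comment points out, the output preserves the input order, so shuffle the result afterwards if a random order is also required.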
The n-Category Café
January 30, 2013
The King of France
Posted by David Corfield
You may have seen me pondering over the years how to convince my philosophical colleagues that they ought to invest some time learning category theory. Now, of course philosophers of mathematics
should do so, and there’s an excellent case to make that philosophers of physics should do so too, but what of someone working in what are taken to be areas more at the core?
In an as yet undetermined number of posts here, I want to see how we might motivate such attention. Let me start then with a classic from the philosophy of language, involving what are known as
‘definite descriptions’. The use of logic is very common here, so with our new freedom from logic, we ought to have something to say.
Posted at 11:45 AM UTC |
Followups (34)
January 29, 2013
7th Scottish Category Theory Seminar
Posted by Tom Leinster
Organized at lightning speed, the 7th Scottish Category Theory Seminar will take place at the end of next week: Friday 8 February, 12.30 to 3.15, at the International Centre for Mathematical Sciences
in central Edinburgh.
Our three star speakers are:
• Martín Escardó (Computer Science, Birmingham): Sheaves in type theory: a model of uniform continuity
• Danny Stevenson (Mathematics and Statistics, Glasgow): A generalized Eilenberg–Zilber theorem for simplicial sets
• The Café’s very own Simon Willerton (Mathematics and Statistics, Sheffield): A tale of two constructions by Isbell
As an added attraction, Don Zagier will be giving a colloquium shortly afterwards (not as part of our seminar!): Modular forms and black holes: from Ramanujan to Hawking.
For abstracts, see the web page. And thanks to the Glasgow Mathematical Journal Learning and Research Support Fund for funding us.
Posted at 1:58 AM UTC |
Followups (1)
January 26, 2013
This Week’s Finds at 20
Posted by Tom Leinster
January 20, 2013
Tight spans, Isbell completions and semi-tropical modules
Posted by Simon Willerton
Some time ago, in a response to a comment on a post (?!) I said that I wanted to understand if the tight span construction for metric spaces could be thought of in enriched category theory terms. Tom
suggested that it might have something to do with what he calls the Isbell completion construction. Elsewhere, in a response to a post on Equipments I was wondering about connections between metric
spaces as enriched categories and tropical mathematics. I have finally got round to writing up a paper on these things. The draft can be found in the following place.
The paper was written to be accessible to people (such as metric geometers or tropical algebraists) with essentially no background in category theory, but to show how categorical methods apply.
Consequently, parts of the paper on cocompletion might be seen as a bit pedestrian by some of the categorical cognoscenti. I would be interested to hear thoughts that people have. I will give an
overview below the fold.
Posted at 10:42 PM UTC |
Followups (13)
January 17, 2013
Carleson’s Theorem
Posted by Tom Leinster
I’ve just started teaching an advanced undergraduate course on Fourier analysis — my first lecturing duty in my new job at Edinburgh.
What I hadn’t realized until I started preparing was the extraordinary history of false beliefs about the pointwise convergence of Fourier series. This started with Fourier himself about 1800, and
was only fully resolved by Carleson in 1964.
The endlessly diverting index of Tom Körner’s book Fourier Analysis alludes to this:
Posted at 2:15 PM UTC |
Followups (30)
January 16, 2013
The Universal Property of Categories
Posted by Mike Shulman
Finally, the third paper I promised about generalized category theory is out:
• Richard Garner and Mike Shulman, Enriched categories as a free cocompletion, arXiv
This paper has two parts. The first part is about the theory of enriched bicategories. For any monoidal bicategory $\mathcal{V}$, we (or, rather, Richard, who did most of this part) define $\mathcal{V}$-enriched bicategories, assemble them into a tricategory, define $\mathcal{V}$-profunctors (modules) and weighted bilimits and bicolimits, and construct free cocompletions under such. It's all a
fairly straightforward categorification of ordinary enriched category theory, but although a number of people have defined and used enriched bicategories in the past, I think this is the most
comprehensive development so far of their basic theory.
The second part is an application, which uses the theory of enriched bicategories to describe a universal property satisfied by the construction of enriched categories. I’ll explain a bit further
below the fold, but the introduction of the paper gives an equivalently good (and more detailed) summary. You can also have a look at these slides from Octoberfest 2012.
Posted at 7:54 PM UTC |
Followups (4)
January 11, 2013
Two Dimensional Monadicity
Posted by Mike Shulman
(guest post by John Bourke)
This blog is about my recent preprint Two dimensional monadicity which is indeed about two dimensional monadicity. However, the monadicity theorems are an application, and the paper is really about
how the weaker kinds of homomorphism that arise in 2-dimensional universal algebra — like strong, lax or colax monoidal functors between monoidal categories — are unique in satisfying certain
properties. These properties relate the weak homomorphisms with the more easily understood strict homomorphisms and so are $\mathcal{F}$-categorical, rather than 2-categorical, in nature. If you want
to understand what I mean then read on.
Posted at 7:29 PM UTC |
Followups (6)
January 7, 2013
From Set Theory to Type Theory
Posted by Mike Shulman
The discussion on Tom’s recent post about ETCS, and the subsequent followup blog post of Francois, have convinced me that it’s time to write a new introductory blog post about type theory. So if
you’ve been listening to people talk about type theory all this time without ever really understanding it—or if you’re a newcomer and this is the first you’ve heard of type theory—this post is
especially for you.
I used to be a big proponent of structural set theory, but although I still like it, over the past few years I’ve been converted to type theory instead. There are lots of reasons for this, the most
important of which I won’t say much about until the end. Instead I mostly want to argue for type theory mainly from a purely philosophical point of view, following on from some of the discussion on
Tom’s and Francois’ posts.
I do want to emphasize again some things that sometimes get lost sight of (and which I sometimes lose sight of myself in the heat of a discussion). All foundational systems are based on valid
philosophical points of view, and have their uses. The real reason for preferring one foundation over another should be practical. Type theory, of course, has lots of practical advantages, including
a computational interpretation, direct categorical semantics, and especially the potential for a homotopical version; but other foundational choices have different practical advantages. But aside
from practical concerns, philosophy is interesting and fun to talk about, and can be helpful when getting used to a new way of thinking when trying to learn a new foundational system—as long as we
all try to keep an open mind, and focus on understanding the philosophy behind things that are unfamiliar to us, rather than insisting that they must be wrong because they’re different from what
we’re used to.
My goal for this post is to start from material set theories (like ZFC) and structural set theories (like ETCS), and show how type theory can arise naturally out of an attempt to take the best of
both worlds. At the end, I’ll argue that to really resolve all the problems, we need univalent type theory.
Posted at 6:21 AM UTC |
Followups (148)
Closed convex hull of set of measurable functions, Riemann measurable functions and measurability of translations, Ann. Inst. Fourier 32, 2001
"... Given a C ∗-algebra A with a semicontinuous semifinite trace τ acting on the Hilbert space H, we define the family A R of bounded Riemann measurable elements w.r.t. τ as a suitable closure, à la
Dedekind, of A, in analogy with one of the classical characterizations of Riemann measurable functions [2 ..."
Cited by 6 (3 self)
Given a C ∗-algebra A with a semicontinuous semifinite trace τ acting on the Hilbert space H, we define the family A R of bounded Riemann measurable elements w.r.t. τ as a suitable closure, à la
Dedekind, of A, in analogy with one of the classical characterizations of Riemann measurable functions [26], and show that A R is a C ∗-algebra, and τ extends to a semicontinuous semifinite trace on
A R. Then, unbounded Riemann measurable operators are defined as the closed operators on H which are affiliated to A ′′ and can be approximated in measure by operators in A R, in analogy with
unbounded Riemann integration. Unbounded Riemann measurable operators form a τ-a.e. bimodule on A R, denoted by A R, and such bimodule contains the functional calculi of selfadjoint elements of A R
under unbounded Riemann measurable functions. Besides, τ extends to a bimodule trace on A R.
"... Given a C ∗-algebra A with a semicontinuous semifinite trace τ acting on the Hilbert space H, we define the family A R of bounded Riemann measurable elements w.r.t. τ as a suitable closure, à la
Dedekind, of A, in analogy with one of the classical characterizations of Riemann measurable functions [1 ..."
Cited by 4 (4 self)
Given a C ∗-algebra A with a semicontinuous semifinite trace τ acting on the Hilbert space H, we define the family A R of bounded Riemann measurable elements w.r.t. τ as a suitable closure, à la
Dedekind, of A, in analogy with one of the classical characterizations of Riemann measurable functions [16], and show that A R is a C ∗-algebra, and τ extends to a semicontinuous semifinite trace on
A R. Then, unbounded Riemann measurable operators are defined as the closed operators on H which are affiliated to A ′′ and can be approximated in measure by operators in A R, in analogy with
improper Riemann integration. Unbounded Riemann measurable operators form a τ-a.e. bimodule on A R, denoted by AR, and such bimodule contains the functional calculi of selfadjoint elements of A R
under unbounded Riemann measurable functions. Besides, τ extends to a bimodule trace on AR. As type II1 singular traces for a semifinite von Neumann algebra M with a normal semifinite faithful
(non-atomic) trace τ have been defined as traces on M − M-bimodules of unbounded τ-measurable operators [5], type II1 singular traces for a C ∗-algebra A with a semicontinuous semifinite (non-atomic)
trace τ are defined here as traces on A − A-bimodules of unbounded Riemann measurable operators (in AR) for any faithful representation of A. An application of singular traces for C ∗-algebras is
contained in [6].
, 1993
"... Introduction A C 0 -semigroup of linear operators on a Banach space X is a family T = fT (t)g t0 of bounded linear operators on X which satisfies [(S1)] T (0) = I ; [(S2)] T (s)T (t) = T (s + t)
for all s; t 0: [(S3)] lim t#0 kT (t)x \Gamma xk = 0 for all x 2 X: The generator of T is the lin ..."
Introduction: A C_0-semigroup of linear operators on a Banach space X is a family T = {T(t)}_{t≥0} of bounded linear operators on X which satisfies (S1) T(0) = I; (S2) T(s)T(t) = T(s+t) for all s, t ≥ 0; (S3) lim_{t↓0} ‖T(t)x − x‖ = 0 for all x ∈ X. The generator of T is the linear operator A with domain D(A) defined by D(A) := {x ∈ X : lim_{t↓0} (T(t)x − x)/t exists}.
Arsenal v Coventry City
Emirates Stadium, England
Referee: R Madley | Attendance: 59451
• Lukas Podolski 15' •
• Lukas Podolski 27' •
• Olivier Giroud 84' •
• Santi Cazorla 89' •
• Cyrus Christie crosses the ball.
• Mesut Özil hits a good left footed shot, but it is off target. Outcome: hit post
• Joe Murphy takes a long goal kick
• Lukasz Fabianski makes the save (Catch)
• Leon Clarke hits a good right footed shot. Outcome: save
• Carl Jenkinson crosses the ball.
• Throw-in: Blair Adams takes it (Defending)
• Joe Murphy makes the save (Fumble)
• Santi Cazorla hits an impressive right footed shot. Outcome: goal
• Carl Jenkinson hits a good right footed shot. Outcome: save
• Jordon Clarke crosses the ball.
• Joe Murphy takes a long goal kick
• Coventry City makes a sub: Jordon Clarke enters for Franck Moussa. Reason: Tactical
• Olivier Giroud hits a good left footed shot, but it is off target. Outcome: miss right
• Serge Gnabry crosses the ball.
• Joe Murphy takes a long goal kick
• Santi Cazorla hits a good left footed shot, but it is off target. Outcome: over bar
• Daniel Seaborne clears the ball from danger.
• Carl Jenkinson crosses the ball.
• Lukasz Fabianski takes a long goal kick
• John Fleck hits a good left footed shot, but it is off target. Outcome: over bar
• Olivier Giroud hits an impressive left footed shot. Outcome: goal
• That last goal was assisted by Kieran Gibbs
• Daniel Seaborne clears the ball from danger.
• Lukasz Fabianski (Arsenal) takes a freekick. Outcome: Pass
• Daniel Seaborne commits a foul on Per Mertesacker resulting on a free kick for Arsenal
• Laurent Koscielny clears the ball from danger.
• Cyrus Christie crosses the ball.
• Throw-in: Carl Jenkinson takes it (Attacking)
• Andy Webster clears the ball from danger.
• Carl Jenkinson crosses the ball.
• Throw-in: Blair Adams takes it (Defending)
• Laurent Koscielny clears the ball from danger.
• Joe Murphy takes a long goal kick
• Arsenal makes a sub: Olivier Giroud enters for Lukas Podolski. Reason: Tactical
• Joe Murphy takes a long goal kick
• Joe Murphy takes a short goal kick
• Jack Wilshere (Arsenal) takes a freekick. Outcome: Open Play
• Carl Baker commits a foul on Santi Cazorla resulting on a free kick for Arsenal
• Throw-in: Blair Adams takes it (Defending)
• Throw-in: Kieran Gibbs takes it (Defending)
• Arsenal makes a sub: Gedion Zelalem enters for Alex Oxlade-Chamberlain. Reason: Tactical
• Arsenal makes a sub: Santi Cazorla enters for Nicklas Bendtner. Reason: Tactical
• Throw-in: Cyrus Christie takes it (Defending)
• Lukasz Fabianski takes a long goal kick
• Cyrus Christie hits a good right footed shot, but it is off target. Outcome: miss right
• Throw-in: Cyrus Christie takes it (Attacking)
• Laurent Koscielny clears the ball from danger.
• Lukasz Fabianski takes a long goal kick
• Blair Adams hits a good right footed shot, but it is off target. Outcome: miss right
• Throw-in: Cyrus Christie takes it (Defending)
• Laurent Koscielny clears the ball from danger.
• Carl Baker crosses the ball.
• Throw-in: Kieran Gibbs takes it (Attacking)
• Throw-in: Kieran Gibbs takes it (Attacking)
• John Fleck clears the ball from danger.
• Lukasz Fabianski takes a long goal kick
• Billy Daniels hits a good right footed shot, but it is off target. Outcome: miss right
• Per Mertesacker clears the ball from danger.
• Carl Baker crosses the ball.
• Throw-in: Kieran Gibbs takes it (Defending)
• Throw-in: Cyrus Christie takes it (Attacking)
• Joe Murphy takes a long goal kick
• Nicklas Bendtner hits a good left footed shot, but it is off target. Outcome: miss left
• Andy Webster clears the ball from danger.
• Throw-in: Kieran Gibbs takes it (Defending)
• Andy Webster clears the ball from danger.
• Throw-in: Carl Jenkinson takes it (Defending)
• Franck Moussa crosses the ball.
• Lukasz Fabianski takes a long goal kick
• Leon Clarke hits a good left footed shot, but it is off target. Outcome: hit post
• Laurent Koscielny clears the ball from danger.
• Cyrus Christie crosses the ball.
• Laurent Koscielny clears the ball from danger.
• Joe Murphy takes a long goal kick
• Per Mertesacker clears the ball from danger.
• Throw-in: Cyrus Christie takes it (Attacking)
• Jack Wilshere (Arsenal) takes a freekick. Outcome: Open Play
• Carl Baker commits a foul on Jack Wilshere resulting on a free kick for Arsenal
• Throw-in: Cyrus Christie takes it (Attacking)
• Kieran Gibbs clears the ball from danger.
• Cyrus Christie crosses the ball.
• Lukasz Fabianski makes the save (Parry)
• Leon Clarke hits a good right footed shot. Outcome: save
• Daniel Seaborne clears the ball from danger.
• Kieran Gibbs crosses the ball.
• Lukasz Fabianski (Arsenal) takes a freekick. Outcome: Pass
• Daniel Seaborne commits a foul on Lukasz Fabianski resulting on a free kick for Arsenal
• Carl Baker crosses the ball.
• Jack Wilshere clears the ball from danger.
• Throw-in: Carl Jenkinson takes it (Defending)
• Joe Murphy takes a short goal kick
• Per Mertesacker (Arsenal) takes a freekick. Outcome: Open Play
• Leon Clarke commits a foul on Per Mertesacker resulting on a free kick for Arsenal
• Joe Murphy takes a long goal kick
• Jack Wilshere hits a good right footed shot, but it is off target. Outcome: miss left
• Lukasz Fabianski takes a short goal kick
• Cyrus Christie crosses the ball.
• John Fleck (Coventry City) takes a freekick. Outcome: Open Play
• Jack Wilshere is awarded a yellow card. Reason: unsporting behaviour
• Jack Wilshere commits a foul on Franck Moussa resulting on a free kick for Coventry City
• Joe Murphy takes a long goal kick
• Lukas Podolski hits a good right footed shot, but it is off target. Outcome: over bar
• Carl Jenkinson crosses the ball.
• Joe Murphy takes a long goal kick
• Carl Jenkinson crosses the ball.
• Throw-in: Carl Jenkinson takes it (Attacking)
• Blair Adams clears the ball from danger.
• Throw-in: Carl Jenkinson takes it (Attacking)
• Joe Murphy takes a long goal kick
• Serge Gnabry hits a good right footed shot, but it is off target. Outcome: miss right
• Kieran Gibbs crosses the ball.
• Cyrus Christie clears the ball from danger.
• Carl Jenkinson crosses the ball.
• Joe Murphy (Coventry City) takes a freekick. Outcome: Pass
• Kieran Gibbs commits a foul on Carl Baker resulting on a free kick for Coventry City
• John Fleck clears the ball from danger.
• Carl Jenkinson crosses the ball.
• Alex Oxlade-Chamberlain hits a good left footed shot, but it is off target. Outcome: miss right
• Throw-in: Nicklas Bendtner takes it (Attacking)
• Lukas Podolski hits a good header. Outcome: goal
• That last goal was assisted by Per Mertesacker
• Cyrus Christie clears the ball from danger.
• Kieran Gibbs crosses the ball.
• Joe Murphy takes a long goal kick
• Serge Gnabry hits a good left footed shot, but it is off target. Outcome: miss left
• Daniel Seaborne clears the ball from danger.
• Andy Webster clears the ball from danger.
• Daniel Seaborne blocks the shot
• Nicklas Bendtner hits a good right footed shot, but it is off target. Outcome: blocked
• Andy Webster (Coventry City) takes a freekick. Outcome: Open Play
• Lukas Podolski commits a foul on Conor Thomas resulting on a free kick for Coventry City
• Kieran Gibbs crosses the ball.
• Throw-in: Blair Adams takes it (Defending)
• Per Mertesacker clears the ball from danger.
• Throw-in: Blair Adams takes it (Attacking)
• Lukasz Fabianski makes the save (Parry)
• Carl Baker hits a good left footed shot. Outcome: save
• Lukas Podolski hits a good left footed shot. Outcome: goal
• That last goal was assisted by Mesut Özil
• Joe Murphy takes a long goal kick
• Alex Oxlade-Chamberlain hits an impressive right footed shot, but it is off target. Outcome: over bar
• Andy Webster clears the ball from danger.
• Nicklas Bendtner crosses the ball.
• Cyrus Christie blocks the shot
• Lukas Podolski hits a good left footed shot, but it is off target. Outcome: blocked
• Franck Moussa clears the ball from danger.
• Cyrus Christie clears the ball from danger.
• Daniel Seaborne clears the ball from danger.
• Carl Jenkinson crosses the ball.
• Daniel Seaborne clears the ball from danger.
• Alex Oxlade-Chamberlain (Arsenal) takes a freekick. Outcome: Open Play
• Offside called on Franck Moussa
• Joe Murphy takes a long goal kick
• Lukas Podolski hits a good left footed shot, but it is off target. Outcome: over bar
• Carl Jenkinson crosses the ball.
• Throw-in: Cyrus Christie takes it (Defending)
• Throw-in: Cyrus Christie takes it (Defending)
• Throw-in: Kieran Gibbs takes it (Attacking)
• Cyrus Christie clears the ball from danger.
• Laurent Koscielny crosses the ball.
• Daniel Seaborne clears the ball from danger.
• Blair Adams clears the ball from danger.
• Kieran Gibbs crosses the ball.
• Carl Baker clears the ball from danger.
• Billy Daniels clears the ball from danger.
• Daniel Seaborne clears the ball from danger.
• Serge Gnabry crosses the ball.
• Per Mertesacker clears the ball from danger.
• Billy Daniels crosses the ball.
• Throw-in: Cyrus Christie takes it (Defending)
• Kieran Gibbs clears the ball from danger.
• Joe Murphy takes a long goal kick
• Jack Wilshere curls a good left footed shot, but it is off target. Outcome: miss left
• Carl Jenkinson crosses the ball.
• John Fleck clears the ball from danger.
• Throw-in: Alex Oxlade-Chamberlain takes it (Attacking)
• Daniel Seaborne clears the ball from danger.
• Per Mertesacker clears the ball from danger.
• Joe Murphy (Coventry City) takes a freekick. Outcome: Pass
• Lukas Podolski commits a foul on Cyrus Christie resulting on a free kick for Coventry City
Match Stats (Arsenal | Coventry City)
Shots (on goal): 19 (5) | 8 (3)
Fouls: 4 | 5
Corner kicks: 0 | 0
Offsides: 0 | 1
Time of possession: 0% | 0%
Yellow cards: 1 | 0
Red cards: 0 | 0
Saves: 3 | 1
by Daniel Hanson. Last time, we looked at the four-parameter Generalized Lambda Distribution as a method of incorporating skew and kurtosis into an estimated distribution of market returns, and
capturing the typical fat tails that the normal distribution cannot. Having said that, however, the Normal distribution can be useful in constructing Monte Carlo simulations, and it is still
Quality of Historical Stock Prices from Yahoo Finance
I recently looked at the strategy that invests in the components of the S&P/TSX 60 index, and discovered that there are some abnormal jumps/drops in historical data that I could not explain. To help me spot these points and remove them, I created a helper function data.clean() in data.r at github. Following is an example
R / Finance 2014 Open for Registration
The announcement below just went to the R-SIG-Finance list. More information is as usual at the R / Finance page: Now open for registrations: R / Finance 2014: Applied Finance with R, May 16 and 17, 2014, Chicago, IL, USA. The reg...
R/Finance 2014 Registration Open
As announced on the R-SIG-Finance mailing list, registration for R/Finance 2014 is now open! The conference will take place May 17 and 18 in Chicago. Building on the success of the previous conferences in 2009-2013, we expect more than 250 attendees fro...
Slopegraphs | rCharts –> maybe finance versions
Back in 2011, Charlie Park did two very thorough posts on Edward Tufte's table graphics or slopegraphs. http://charliepark.org/slopegraphs/ http://charliepark.org/a-slopegraph-update/ These types of graphics can provide very effective visualizations of ...
Introduction to R for Quantitative Finance (Book Review)
Last November 2013, PACKT Publishing launched Introduction to R for Quantitative Finance. The book, which is around 164 pages (including cover page and back pages), discusses the implementation of different quantitative methods used in financ...
Quantitative Finance Applications in R – 4: Using the Generalized Lambda Distribution to Simulate Market Returns
by Daniel Hanson, QA Data Scientist, Revolution Analytics Introduction As most readers are well aware, market return data tends to have heavier tails than that which can be captured by a normal
distribution; furthermore, skewness will not be captured either. For this reason, a four parameter distribution such as the Generalized Lambda Distribution (GLD) can give us a more...
Quantitative Finance Applications in R – 3: Plotting xts Time Series
by Daniel Hanson, QA Data Scientist, Revolution Analytics Introduction and Data Setup Last time, we included a couple of examples of plotting a single xts time series using the plot(.) function (i.e., the version of that function included in the xts package). Today, we'll look at some quick and easy methods for plotting overlays of multiple xts time series in a single...
Introduction to R for Quantitative Finance – Book Review
I used some spare time I had over the Christmas break to review a book I came across: Introduction to R for Quantitative Finance. An introduction to the book by the authors can be found here. The
book targets folks with some finance knowledge but no or little experience with R. Each chapter is organised around a
Quantitative Finance Applications in R – 2
by Daniel Hanson QA Data Scientist, Revolution Analytics Some Applications of the xts Time Series Package In our previous discussion, we looked at accessing financial data using the quantmod and
Quandl R packages. As noted there, the data series returned by quantmod comes in the form of an xts time series object, and Quandl provides a parameter that sets...
ASTR 1210 (O'Connell) Study Guide
8. GRAVITATIONAL ORBITS AND SPACE FLIGHT
Space Shuttle Discovery launches on a mission to the Space Station, 2001
"There will certainly be no lack of human pioneers when we have mastered the art of [space] flight....Let us create vessels and sails adjusted to the heavenly ether, and there will be plenty of
people unafraid of the empty wastes. In the meantime we shall prepare, for the brave sky-travelers, maps of the celestial bodies."
---- Johannes Kepler (1610)
Newton's theories of dynamics and gravity provided a complete understanding of the interaction between gravitating bodies and the resulting orbits for planets and satellites. This guide first
describes the nature of possible gravitational orbits and some implications of those. In the mid-twentieth century, Newton's work became the key conceptual element in space technology, which is
introduced in the second part of the guide. Space technology---rockets, the Space Shuttle, dozens of robot spacecraft, the human space program---has provided most of our present knowledge of the
Solar System and most of the material we will discuss in the rest of this course.
A. Newtonian Orbit Theory
Orbital Dynamics
Newton's theory can accurately predict gravitational orbits because it allows us to determine the acceleration of an object in a gravitational field. Acceleration is the rate of change of an object's velocity.
If we know the initial position and velocity of an object, knowing its acceleration at all later times is enough to completely determine its later path of motion. To predict the path, we simply
substitute Newton's expression for F[grav] for the force term in his Second Law and solve for acceleration.
But there is a major complication. The Second Law is not a simple algebraic expression. Both velocity and acceleration are rates of change (of position and velocity, respectively).
Mathematically, they are derivatives. The gravitational force changes with position. Finally, velocity, acceleration and the gravitational force all have a directionality as well as a
magnitude associated with them. That is, they are "vectors". So the Second Law is really a differential vector equation. To solve it, Newton had to invent calculus.
We don't need to know the mathematical details in order to understand the basic interaction that shapes Newtonian orbits. Take as an example the orbit of the Earth around the Sun.
Pick any location on the Earth's orbit. Represent its velocity at that location as an arrow (a vector) showing the direction and magnitude of its motion. An essential element of Newtonian
theory is that changes in the magnitude of the velocity vector (the speed) or in the direction of motion are both considered to be "accelerations." In the following drawings, the red arrows
represent the Earth's velocity vector and the blue arrows represent the applied gravitational force. According to Newton's Second Law, the change in the velocity vector (a speed-up in the
first case or a deflection of the direction of motion in the second) is in the direction of the applied force.
Starting from any location, the instantaneous velocity vector and the rate of change of that vector (the acceleration) combine to determine where the Earth will be at the next moment of time.
Adding up the motion from one moment to the next traces out the orbital path. In Newtonian gravity, the gravitational force acts radially --- i.e. along the line connecting the Earth and the
Sun. Accordingly, both the acceleration and the change in the Earth's velocity vector from one moment in time to the next will also always be in the radial direction. You might think that if
the acceleration is always toward the Sun, then the Earth should fall faster and faster on a radial trajectory until it crashes into the Sun. That's exactly what would happen if the Earth
were ever stationary in its orbit. In that case, the situation in the left hand drawing above (straight-line acceleration toward the Sun) would prevail. But if the Earth's velocity vector has
a component which is perpendicular to the radial direction, then in any interval in time, it will move "sideways" at the same time as it accelerates toward the Sun. If the combination of
sideways motion and distance from the Sun is correct, the Earth will avoid collision with the Sun, and it will stay in permanent orbit. The animation at the right shows the situation for the
Earth's (exaggerated) elliptical orbit around the Sun (the blue line is the velocity vector, the green line is the acceleration). Note that where the Earth is nearest the Sun, the
gravitational force and inward acceleration are greatest, but the sideways motion is also greatest, which prevents us from colliding with the Sun. That motion is in accordance with Kepler's
second law. Therefore, all permanently orbiting bodies are perpetually falling toward the source of gravity but have enough sideways motion to avoid a collision.
Kinds of Gravitational Orbits
In the case of two gravitating objects (for example, the Earth and the Moon, the Sun and a planet, or the Earth and an artificial satellite), Newton found that the full solutions of his equations
give the following results:
• The relative orbit is confined to a geometric plane.
• The shape of the orbit within the plane is a "conic section", of which there are only four types.
□ A circle
□ An ellipse
□ A parabola
□ A hyperbola (see the illustration at the right)
• The orbital type is determined by the initial distance, speed and direction of motion of the orbiting object, as follows:
□ Define the "escape velocity" at a given distance: V[esc](R) = √(2GM/R), where R is the separation between the two objects and M is the mass of the primary object.
V[esc] for the Earth at the Earth's surface is 25,000 mph (or 11 km/s). V[esc] for the Sun at the Earth's 1 AU distance from the Sun is 94,000 mph (42 km/s).
• If V < V[esc], the orbit is an ellipse or circle. It is said to be "closed" or "bound". The smaller object will permanently repeat its orbital motion.
• If V ≥ V[esc], the orbit is a parabola or hyperbola. It is said to be "open" or "unbound". The smaller object escapes and does not return.
• Only specific values of velocity will yield circular or parabolic orbits. An object moving exactly at escape velocity will move on a parabola. To achieve a circular orbit an object must move at
71% of the escape velocity, and its velocity must be exactly perpendicular to the radial direction.
• As noted earlier, shapes and motions within the "closed" orbits for the planets satisfy all three of Kepler's Laws of planetary motion.
You can interactively explore the relation between the orbit and the initial velocity vector using the Flash animation Gravity Chaos.
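As a quick numerical check of the velocity formulas above, here is a short Python sketch (the gravitational constant, masses, and distances are standard physical values; the factor 2.23694 converts m/s to mph):

    import math

    G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

    def v_escape(mass_kg, radius_m):
        # V_esc = sqrt(2GM/R), as defined above
        return math.sqrt(2 * G * mass_kg / radius_m)

    v_earth = v_escape(5.972e24, 6.371e6)     # Earth, at its surface
    v_sun = v_escape(1.989e30, 1.496e11)      # Sun, at 1 AU
    print(f"Earth surface: {v_earth/1000:.1f} km/s ({v_earth*2.23694:,.0f} mph)")
    print(f"Sun at 1 AU:   {v_sun/1000:.1f} km/s ({v_sun*2.23694:,.0f} mph)")
    print(f"Circular orbit at 1 AU: {v_sun/math.sqrt(2)/1000:.1f} km/s")

Running it reproduces the figures quoted above: about 11.2 km/s (25,000 mph) for the Earth, 42 km/s (94,000 mph) for the Sun at 1 AU, and a circular-orbit speed at 1 AU of about 29.8 km/s, which is indeed the Earth's actual orbital speed.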
Newton's Mountain
Newton illustrated orbital behavior for a simple idealized situation where a powerful cannon is placed on top of a high mountain on the Earth. Since both the distance from Earth's center and the direction of initial flight are fixed, the cannonball follows an orbit that depends only on the muzzle velocity of the cannon, as shown below. The gravitational force of a spherical body like the Earth acts as though it originates from the center of the sphere, so elliptical orbits have the center of the Earth at one focus.
"Newton's Mountain": orbit type depends on initial velocity.
From lower to higher velocities, orbit shapes are: ellipse, circle, ellipse, parabola, hyperbola.
"Escape velocity" (which is 25,000 mph at Earth's surface) produces a parabolic orbit.
B. Important Implications of Newtonian Orbits
"Free-Fall" Orbits
Free motion in response to gravity (in the absence of other forces) is called "free-fall" motion. Conic section orbits are all "free-fall orbits."
Remember that continued motion is normal in free-fall. For instance, engines do not have to be on in order for spacecraft to move through space on a free-fall orbit. They will "coast" forever on such
an orbit. Note also that free-fall orbits will depart from simple conic sections if an object is under the influence of more than one gravitating body. For instance, comets are often
deflected from their Sun-dominated simple conic orbits by Jupiter's gravity (see Guide 21).
Free-fall orbits are independent of the mass of the orbiting object.
The mass of the orbiting body always cancels out of the expression for acceleration under gravity. For instance, in the case of a planet orbiting the Sun, the gravitational force on the
planet is directly proportional to the planet's mass but, according to Newton's Second Law, the resulting acceleration is inversely proportional to its mass. Hence, mass drops out from the
expression for acceleration. This is true for all orbits under gravity. Hence, a tennis ball in space, if it were moving with the same speed and direction as the Earth at any point, would
follow exactly the same orbital path as the Earth. Kepler's Third Law (that the orbital period of a planet around the Sun depends only on orbital size, not on the mass of the planet) is
another manifestation of this fact.
The characteristics of free fall originate from the fact that the acceleration of all objects is the same in a given gravity field (e.g. at a given distance from the Sun or near the Earth's
surface), regardless of their masses. This was first demonstrated experimentally by Galileo and was the subject of a Puzzlah (see also Study Guide 7). A more familiar manifestation is the
phenomenon of "floating" astronauts on space missions. Even though the spacecraft is much more massive, both the astronauts and the spacecraft have identical accelerations under the external
gravitational fields. They are moving on parallel free-fall orbits, so the astronauts appear to be floating and stationary with respect to the spacecraft, even though (in near-Earth orbit) they
are actually moving at tens of thousands of miles per hour. Rocket engines are described under (C) below. You can think of a rocket engine in the abstract as a device for changing from one
free-fall orbit to another by applying a non-gravitational force.
With its engine turned off, the motion of any spacecraft is a free-fall orbit. If the engine is on, the craft is not in free fall. For instance, the orbit of the Space Shuttle launching from
the Earth will depart from a conic section until its engines turn off. An example of using a rocket engine to change from one free-fall orbit to another is shown here.
The Russian "Mir" space station (1986-2001) orbiting Earth at an altitude of 200 miles with a velocity of 17,000 mph
Geosynchronous Orbits
According to Kepler's Third Law, the orbital period of a satellite will increase as its orbital size increases. We have exploited that fact in developing one of the most important practical
applications of space technology: geosynchronous satellites.
□ Spacecraft in "low" Earth orbits (less than about 500 mi), like the Mir space station (seen above) or the Space Shuttle, all orbit Earth in about 90 minutes, at 17,000 miles per hour,
regardless of their mass.
□ The orbital period of a spacecraft in a larger orbit will be longer. For an orbit of radius about 26,000 mi, the period will be 24 hours---the same as the rotation period of the Earth.
Spacecraft here, if they are moving in the right direction, will appear to "hover" over a given point on the Earth's surface. These orbits are therefore called geosynchronous or
"geostationary." See the animation above. This is the ideal location for placing communications satellites. [The concept of geosynchronous communications satellites was first proposed by
science fiction writer Arthur C. Clarke. He deliberately did not patent his idea, which became the basis of a trillion-dollar industry.]
Applications of Kepler's Third Law
Newton's theory provided a physical interpretation of Kepler's Third Law. According to his formulation of gravitational force, he found that the value of the constant K in the formulation of
Kepler's Third Law in Guide 7 is K = 4π^2/GM, where M is the mass of the Sun. More generally, K is inversely proportional to the mass of the primary body (i.e. the Sun in the case of the
planetary orbits but the Earth in the case of orbiting spacecraft). The larger the mass of the primary, the shorter the period for a given orbital size.
□ The Third Law therefore has an invaluable astrophysical application: once the value of the "G" constant has been determined (in the laboratory), the motions of orbiting objects can be used to determine the mass of the primary. This is true no matter how far from us the objects are (as long as the orbital motion and size can be measured). (A numerical sketch of this application follows this list.)
□ In the Solar System, the Third Law allows us to determine the mass of the Sun from the size and periods of the planetary orbits. In the case of Jupiter, for example, the periods and sizes of
the orbits of the Galilean satellites can be used to determine Jupiter's mass (as in Optional Lab 3).
□ The Third Law has been critical to such diverse astronomical problems as measuring the masses of "exoplanets" around other stars (see Study Guide 11) and establishing the existence of "Dark
Matter" in distant galaxies.
Schematic diagram of a liquid-fueled rocket engine. The thrust of the engine is
proportional to the velocity of the exhaust gases (V[e]).
C. Space Flight
If the primary technology enabling space flight is Newtonian orbit theory, the second most important technology is the rocket engine.
• In a rocket engine such as that shown in the diagram above, fuel is burned rapidly in a combustion chamber and converted into a large quantity of hot gas. The gas creates high pressure, which
causes it to be expelled out a nozzle at very high velocity.
The exhaust pressure simultaneously forces the body of the rocket forward. You can think of the rocket as "pushing off" from the moving molecules of exhaust gas. The higher the exhaust
velocity, the higher the thrust.
Note: rockets do not "push off" against the air or against the Earth's surface. Rather, it is the "reaction force" between the expelled exhaust and the rocket that impels the rocket forward.
Designers work to achieve the highest possible exhaust velocity per gram of fuel. Newton's second law of motion and various elaborations of it are essential for understanding and designing
rocket motors.
• The main challenge to spaceflight is obtaining the power needed to reach escape velocity. For Earth, this is 11 km/sec or 25,000 mph.
"Standard" rocket engines are designed for launching commercial payloads to synchronous orbit or delivering intercontinental ballistic missles---neither of which involve reaching escape
velocity from Earth. Therefore, most scientific spacecraft for planetary missions are relatively small (i.e. low mass) in order that standard engines can propel them past Earth escape
velocity. This means that many clever strategies are needed to pack high performance into light packages.
Example: The New Horizons spacecraft, launched on a super-high velocity trajectory to Pluto in 2006, has a mass of only 1050 lbs; its launching rocket weighed 1,260,000 lbs, over 1000
times more! New Horizons is currently beyond the orbit of Uranus and is traveling at over 34,000 mph. A rocket launched at exactly escape velocity from a given parent body will, at very
large distances, slow to exactly zero velocity with respect to that body (ignoring the effect of other gravitating bodies).
The Apollo program used the extremely powerful Saturn V rockets to launch payloads with masses up to 100,000 pounds (including 3 crew members) to the Moon. This technology was, however,
retired in the mid-1970's because it was thought, erroneously, that the next generation, reusable Space Shuttle vehicles would be cheaper to operate. A big mistake. The Space Shuttle (shown
above) was fueled by high energy liquid oxygen and liquid hydrogen plus solid-rocket boosters. But it was so massive compared to the power of its engines that it could not reach escape
velocity from Earth. Its maximum altitude is only about 300 miles. That is why NASA is developing new "heavy lift" rocket technologies to replace the Shuttle.
D. Interplanetary Space Missions
Beginning in the early 1960's, NASA and foreign space agencies developed a series of ever-more sophisticated robot probes to study the Sun, Moon, planets, and the interplanetary medium. These
included flyby spacecraft, orbiters, landers, rovers, and sample-return vehicles.
The mid-20th century was the first time humans had ever sent machines beyond the Earth's atmosphere. Even such far-sighted thinkers as Galileo and Newton had not envisioned that to be possible in
the mere 350 years that elapsed between Kepler's Laws and the first landings on the Moon. This was an amazing accomplishment.
By 2012, we had flown at close range past every planet except Pluto; had placed robotic observatories into orbit around the Moon, Mercury, Venus, Mars, Jupiter, Saturn, and the asteroids Eros and
Vesta; had soft-landed on the Moon, Venus, Mars, and Saturn's moon Titan; had returned to Earth samples obtained from the coma of the comet Wild 2 and from a soft-landing on the asteroid Itokawa; and
had sent probes into a comet nucleus and the atmosphere of Jupiter. At right is an artist's concept painting of the Cassini mission in orbit around Saturn. We also put a number of highly capable
observatories for studying the distant universe (such as the Hubble Space Telescope and the Chandra X-Ray Observatory) into orbit around the Earth and the Sun. Of course, the Apollo program in the
1960's also sent human beings to the Moon. This was very fruitful in learning about lunar geology and surface history. But, by far, most of what we know about the denizens of the Solar System has
come from the powerful robot observatories.
For a list of these missions and additional links, click here.
Reading for this lecture:
Bennett textbook: Ch. 4.1-4.4 (Newtonian dynamics & gravitational orbits)
Last modified February 2014 by rwo Text copyright © 1998-2014 Robert W. O'Connell. All rights reserved. Orbital animation copyright © Jim Swift, Northern Arizona University. Conic section
drawings from ASTR 161, University of Tennessee at Knoxville. Newton's Mountain drawing copyright © Brooks/Cole-Thomson. These notes are intended for the private, noncommercial use of students
enrolled in Astronomy 1210 at the University of Virginia.
Filtering methods for symmetric cardinality constraint
Kocjan, Waldemar and Kreuger, Per (2003) Filtering methods for symmetric cardinality constraint. [SICS Report]
The symmetric cardinality constraint is described in terms of variables X = {x_1,...,x_k} which take their values as subsets of the values V = {v_1,...,v_n}. It constrains the number of values assigned to a variable x_i in X to lie in an interval [l_{x_i},c_{x_i}], and at the same time it restricts the number of variables to which a value v_j in V is assigned to lie in an interval [l_{v_j},c_{v_j}]. In this paper we introduce the symmetric cardinality constraint and define the set constraint satisfaction problem as a framework for dealing with this type of constraint. Moreover, we present effective filtering methods for the symmetric cardinality constraint.
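To make the definition concrete, here is a minimal Python checker for the constraint itself (a sketch only; it verifies a candidate assignment and is not one of the filtering methods from the report, and the dictionary representation is an assumption):

    def satisfies(assignment, var_bounds, val_bounds):
        # assignment maps each variable to the SET of values it takes;
        # *_bounds map each variable/value to its (lower, upper) interval
        for var, values in assignment.items():
            lo, hi = var_bounds[var]
            if not lo <= len(values) <= hi:        # values taken by the variable
                return False
        for val, (lo, hi) in val_bounds.items():
            used = sum(val in values for values in assignment.values())
            if not lo <= used <= hi:               # variables using the value
                return False
        return True

    A = {"x1": {"v1"}, "x2": {"v1", "v2"}}
    print(satisfies(A, {"x1": (1, 1), "x2": (1, 2)},
                       {"v1": (0, 2), "v2": (1, 1)}))   # True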
Item Type: SICS Report
Uncontrolled Keywords: constraint programming, global constraints, flow theory
RE: st: RE: GMM estimation: restricting parameter estimates
From "Bley N'Dede" <ndedecb@tigermail.auburn.edu>
To "statalist@hsphsun2.harvard.edu" <statalist@hsphsun2.harvard.edu>
Subject RE: st: RE: GMM estimation: restricting parameter estimates
Date Mon, 24 Jun 2013 17:41:01 +0000
Dear Statalist,
I am trying to estimate a growth model, also using GMM estimation. I would like to set the sum of three parameters equal to one. Could you please help me in this matter?
From: owner-statalist@hsphsun2.harvard.edu [owner-statalist@hsphsun2.harvard.edu] on behalf of Christopher Baum [kit.baum@bc.edu]
Sent: Thursday, June 06, 2013 7:43 AM
To: <statalist@hsphsun2.harvard.edu>
Subject: Re: st: RE: GMM estimation: restricting parameter estimates
On Jun 6, 2013, at 8:33 AM, Mark
> You could try a variation on a trick that is sometimes used in code to restrict the range of parameter values. Instead of estimating a and b you could estimate 1/a and 1/b. Or some other function that prevents Stata (not "STATA" btw) from deciding on exact zeros as solutions.
> For an example of how official Stata code does something similar, have a look at the manual entry for -heckman- and in particular the discussion of how rho is estimated.
There is also a useful FAQ that discusses this sort of trickery.
Kit Baum | Boston College Economics & DIW Berlin | http://ideas.repec.org/e/pba1.html
An Introduction to Stata Programming | http://www.stata-press.com/books/isp.html
An Introduction to Modern Econometrics Using Stata | http://www.stata-press.com/books/imeus.html
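To illustrate the substitution idea for the sum-to-one question above, here is a Python sketch (not Stata syntax; the data and the least-squares objective standing in for a GMM criterion are invented). Estimate two coefficients freely and define the third as one minus their sum, so the constraint holds by construction; in Stata's -gmm- the analogous move is to write (1-{a}-{b}) in place of the third parameter inside the moment expression:

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    x1, x2, x3 = rng.normal(size=(3, 200))            # made-up regressors
    y = 0.5*x1 + 0.3*x2 + 0.2*x3 + 0.05*rng.normal(size=200)

    def objective(theta):
        a, b = theta
        g = 1.0 - a - b            # third coefficient, fixed by the constraint
        resid = y - (a*x1 + b*x2 + g*x3)
        return resid @ resid

    res = minimize(objective, x0=[1/3, 1/3])
    a, b = res.x
    print(f"a = {a:.3f}, b = {b:.3f}, g = {1 - a - b:.3f}")  # sums to one exactly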
Besov Priors for Bayesian Inverse Problems
Seminar Room 1, Newton Institute
We consider the inverse problem of estimating a function $u$ from noisy measurements of a known, possibly nonlinear, function of $u$. We use a Bayesian approach to find a well-posed probabilistic
formulation of the solution to the above inverse problem. Motivated by the sparsity promoting features of the wavelet bases for many classes of functions appearing in applications, we study the use
of the Besov priors within the Bayesian formalism. This is joint work with Stephen Harris (Edinburgh) and Andrew Stuart (Warwick).
I've seen many problems like these here http://openstudy.com/study#/updates/5140bf27e4b0f08cbdc91cc0 but what if you were given that the third term is -96 and the 6th term is -6144? It's different because you are not given a as usual. How would you go about solving it? I am just thinking you'd have to do some kind of system of equations, right?
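One way to solve it (a quick sketch): for a geometric sequence the nth term is t_n = a·r^(n-1), so t_6/t_3 = r^3 = (-6144)/(-96) = 64, which gives r = 4, and then a = t_3/r^2 = -96/16 = -6. So yes, you start from two equations, but dividing one by the other eliminates a immediately.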
Why One Branch Size Dominates Radar Backscatter
Warning: technical post for geeky radar folks!
There is a common rule of thumb that I have often seen presented that states that the radar backscatter from a forest canopy mostly originates from scattering elements that are similar in size to the
wavelength being used. Sometimes this is explained through attenuation effects of full-cover forest canopies – i.e. the penetration depth is in proportion to the wavelength, so that longer
wavelengths can “see” down to the larger elements. Short wavelengths only see the smaller elements in the top of the canopy.
Other times I have seen this associated with “resonance” – i.e. that the resonant scattering in the Mie region results in higher than expected returns from elements similar in size to the wavelength.
There is now a third explanation that Matthew Brolly and I have been working on, published recently in the International Journal of Remote Sensing (the full paper is at http://www.tandfonline.com/doi/abs/10.1080/01431161.2012.715777 for subscribers, and here for a pre-publication proof). It does not rely on either resonance or attenuation, and so is particularly relevant in areas of sparse woodland, rather than full-cover canopies (since all the elements of a tree may be visible).
Our explanation is based on a combination of the trends in tree architecture and the transition from Rayleigh to optical scattering. The novelty of the approach is that we use the generic
macroecological structure model introduced by West, Brown and Enquist (WBE) to explore the variability of number density with size of the scatterers. There is a finite range of ways that a tree will
grow that maintains both its structural integrity and its biological function. This allows us to ask questions such as, “As branching elements get smaller, do they increase in number fast enough
to compensate for their decreasing individual radar cross-section?”
For example, what if we consider only optical scattering from forest elements. The total cross-section of a volume of wood is much larger if you chop it into smaller twig-sized chunks, rather than
clumping it all together in one large trunk. That is because more of the total wood volume is now exposed, rather than being obscured by the rest of the volume. Mathematically, if the length of
each branch or twig is proportional to its radius, then the optical scattering goes up with the radius squared.
So, in a forest canopy, if everything scatters in an optical-type scattering kind of way, then the smaller elements at the top scatter more than the big chunky elements near the bottom (per unit
volume). We can expect optical-type scattering to dominate when the scatters are large compared to the wavelength. This is encouraging – it tallies with our expectation, that shorter wavelength
radar will mostly scatter from the smallest elements, not because they are near the top, but because they are sufficiently numerous.
On the other hand, when the wavelength gets much longer than the scattering elements, we are into the Rayleigh scattering regime. Here the trend is different, because the cross-section of a cylinder
increases with the square of the volume of the cylinder. Using the same criteria for twigs and branches as above, this means the backscatter increases with the radius to the power of six! Clumping
material together therefore increases backscatter considerably – the backscatter of the whole is greater than the sum of its parts. In a forest canopy where all the elements are small enough to be
Rayleigh scatterers, the largest backscatter therefore comes from the largest elements. This also tallies with expectation: when you look at a forest with VHF radar, whose wavelengths are a few metres long, everything is a Rayleigh scatterer and the trunks are the largest contributor to the backscatter. This is exactly what is observed.
But what about the intermediate frequencies, such as L and P-band? In many circumstances we may have a forest canopy that tends to Rayleigh scattering at the top, but tends to optical scattering at
the bottom. That is, the scattering elements at the top are much smaller than the wavelength, whereas at the bottom they are larger. The WBE model allows us to look at the trends since it constrains
the size-to-number density relationship to within sensible limits.
Our results indicate that there is a peak response – that is, there is always one branching layer that dominates the scattering. In the extremes of very long or very short wavelengths, this
dominant layer is the trunk layer, or the smallest twig layer, respectively. This concurs with results from modelling and experiment. But note that this explanation is different from the
traditional explanations. It is entirely independent of attenuation through the canopy or of any resonant scattering.
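A toy calculation makes the peak easy to see. The Python sketch below is illustrative only: the wavelength, branching ratio, trunk dimensions and the crude two-regime cross-sections are hypothetical choices, not the calibration used in the paper. It builds a WBE-like tree (area-preserving radii, space-filling lengths) and totals each level's return, using a Rayleigh-like cross-section for elements much thinner than the wavelength and an optical-like one otherwise:

    import math

    wavelength = 0.24               # metres, roughly L-band (hypothetical)
    n = 3                           # daughter branches per parent (hypothetical)
    r, l, count = 0.25, 10.0, 1.0   # trunk radius/length in metres (hypothetical)

    for level in range(9):
        if 2 * math.pi * r < wavelength:                  # crude regime switch
            sigma = (r * r * l) ** 2 / wavelength ** 4    # Rayleigh-like trend
        else:
            sigma = r * l                                 # optical-like trend
        print(f"level {level}: r = {r:.4f} m, N = {count:7.0f}, "
              f"relative return = {count * sigma:.3e}")
        count *= n
        r *= n ** -0.5              # WBE: area-preserving branching
        l *= n ** (-1.0 / 3.0)      # WBE: space-filling lengths

With these numbers the per-level total climbs slowly through the optically scattering levels, peaks at the last level whose elements are still comparable to the wavelength, and then collapses once the elements become Rayleigh scatterers, so a single branching layer dominates without invoking attenuation or resonance.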
The full implications of these findings we are still working on, but it’s clear that this is another example of where applying macroecological principles of tree structure and growth allows new
insight into how microwaves interact with forest canopies.
Matthew Brolly & Iain H. Woodhouse (2013): Vertical backscatter profile of forests predicted by a macroecological plant model, International Journal of Remote Sensing, 34:4, 1026-1040
How would you use a number line to round 148 to the nearest ten?
Programmable calculators are calculators that can automatically carry out a sequence of operations under control of a stored program, much like a computer. The first programmable calculators such
as the IBM CPC used punched cards or other media for program storage. Hand-held electronic calculators store programs on magnetic strips, removable read-only memory cartridges, or in
battery-backed read/write memory.
Since the early 1990s, most of these flexible handheld units belong to the class of graphing calculators. Before the mass-manufacture of inexpensive dot-matrix LCD displays, however, programmable
calculators usually featured a one-line numeric or alphanumeric display.
In numerical analysis, a branch of mathematics, there are several square root algorithms or methods for calculating the principal square root of a nonnegative real number. For the square roots of a
negative or complex number, see below.
Finding $\sqrt{S}$ is the same as solving the equation $f(x) = x^2 - S = 0$. Therefore, any general numerical root-finding algorithm can be used. Newton's method, for example, reduces in this case to the so-called Babylonian method: $x_{k+1} = \tfrac{1}{2}\left(x_k + \tfrac{S}{x_k}\right)$.
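A minimal sketch of that iteration in Python (the starting guess and tolerance are arbitrary choices):

    def babylonian_sqrt(S, tolerance=1e-12):
        x = S if S > 1 else 1.0              # any positive starting guess works
        while abs(x * x - S) > tolerance * max(S, 1.0):
            x = 0.5 * (x + S / x)            # average x with S/x (one Newton step)
        return x

    print(babylonian_sqrt(2))                # 1.4142135623...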
An order of magnitude is a scale of numbers with a fixed ratio, usually a factor of ten: two quantities differ by an order of magnitude when one is roughly ten times the other.
For example: The United States has the world's highest incarceration rate. It has an order of magnitude more imprisoned human beings per 100,000 population than Norway.
Martingale Betting System- expected value
Hi, I have been learning/experimenting with the Martingale betting system recently. I have read a lot about how no "system" works for betting in casinos. However, I want to either prove or
disprove the validity of the system by looking at its expected value/payout. I will be using the game of roulette as an example, assuming I have a 50% chance of getting black or red. Here is how the
payout will work-
If the ball lands on black, I win $5
If the ball lands on red 7 times in a row, I lose $155
What is the expected payout of me playing roulette with this system?
winning payout = $5 * (1/2) ;have a 50% chance of getting black
losing payout = -$155 * (1/2)^7 ;have a 50% chance of getting red, 7 times
net payout = winning payout - losing payout
So according to this, if I use this system then there will be a positive payout. What do you guys think?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~
If you want to how i got the winning/losing payouts then here is the explanation for that-
The way the system works is that you first bet your initial, let's say it's $5. If you lose, you double your previous bet, and if you lose again you double up again, etc. So if I start at $5 and
keep losing, my bets will go like this - $5,$10,$20,$40,$80....
Using this system, you will earn $5 every time you win. To see this, assume I lose $5,$10,$20 but on the $40 bet I win. I will have lost $35 from before but gained $40, i.e. $5 profit.
***Now, according to this if I have enough money I can eventually earn $5 as long as I have a crap load of money. This is why tables have maximum bets, but many cheap casinos don't.
Moving on, if I lose 5 times in a row, I will lose 5+10+20+40+80 = $155. Let's say I "restart" after I lose 5 times. Also, because I am so clever, I will bet only after I see two reds. Therefore,
I can technically "lose" 7 times in a row and still lose only $155.
All in all, one can see that this ^ is where the losing payout comes from, and that the winning payout is me simply winning $5 on black.
Re: Martingale Betting System- expected value
My first reply here will be a story of how you should proceed with caution when actually using this system... I used this system at an online poker site before online gambling became illegal in
almost all of the United States. I invested only 50 dollars of my money initially and my first investment was doubled to one hundred dollars. I took a timid approach at first and started my first
5 dollar bet after 4 blacks in a row and was betting red. Not much time had passed and I broke 500 dollars. I then started betting my initial 5 dollars after a win, after two losing blacks like
you mention in your example. My original 100 dollar bank roll eventually turned in to over 1800...... then I was astonished when I lost over 9 bets in a row. Proceed with caution. I will reply
again in a while about some overlooked math. And go into some theory behind why this doesn't work. Unless as you say you have a crap ton of money.
Re: Martingale Betting System- expected value
I see that I have a few mistakes in my calculations that basically destroy the system. I was too deep inside the Gambler's Fallacy. First, the winning probability should be 1-.5^5. This is because I need to get one black in 5 rolls to win $5. Second, the probability of me losing should still be based on losing 5 times in a row, not 7. This is because even if you wait for 2 of the opposite color to come up, the next 5 are independent of the previous 2; it took me a long time to completely understand it. After you do both things one can see that the net payout is $0. However, most casinos have a 0 and 00, so the probability of winning decreases and the probability of losing increases, leading to a negative payout. So in the long run, you lose. =(
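For anyone curious, here is a quick Monte Carlo check of that conclusion in Python (the table rules are the ones described in this thread; the American-wheel win probability 18/38 is standard):

    import random

    def play_round(p_win=18/38, base=5, max_losses=5):
        # double after each loss; any win nets exactly 'base' (see above)
        for _ in range(max_losses):
            if random.random() < p_win:
                return base
        return -(base * (2 ** max_losses - 1))    # 5+10+20+40+80 = 155

    rounds = 1_000_000
    total = sum(play_round() for _ in range(rounds))
    print(f"average profit per round: ${total / rounds:.4f}")

The average comes out around -$1.46 per round; with a fair 50/50 wheel (p_win = 0.5) it hovers around $0, matching the corrected analysis above.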
Re: Martingale Betting System- expected value
Exactly, I was going to mention the green 0 and 00 on American tables. Also, each independent roll would yield a probability of winning equal to 18/38 = .4736842, so the probability that you will lose is always higher. I was going to mention some other things, but based on your latest reply I can tell that you get it. It is a really fascinating thing to think about. Another method of casino money making I looked into and did for a while was the same betting concept applied to games of blackjack, where I adhered to the same rules for hitting and staying as the house does. This has seemed to work better in many ways for making my 5 dollar wins come sooner, therefore turning into some degree of profit over time.
4*square root of 80 + square root of 20
Hello, :P
Could you explain that to me?
That's just what my question is asking the answer to, so I'm not sure. It just says 4 times the square root of 80 plus the square root of 20.
4 * square root of 80 + square root of 20: first take the square root of 80 and forget about the 20 for a while, so it is just (square root of 80) * 4 = (8.94427191) * 4 = 35.77708764. Now add the square root of 20, which is about 4.47213595: 35.77708764 + 4.47213595 = 40.24922359, and that is your answer.
ok thanks! =D
You're welcome.
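For what it's worth, you can also get an exact answer: 4√80 + √20 = 4·4√5 + 2√5 = 16√5 + 2√5 = 18√5 ≈ 40.2492, which matches the decimal above.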
Sugar Land Algebra 2 Tutor
Find a Sugar Land Algebra 2 Tutor
...I am also currently working on completing my Master's Degree in Educational Leadership thru Lamar University. One of the best parts of teaching was tutoring the students before and after
school because I could work one-on-one with them and really help them understand the material. If a student is willing to learn and work hard, then I can teach them.
15 Subjects: including algebra 2, Spanish, reading, geometry
...I graduated in 2011 from the University of Houston with a Bachelor's degree in Civil Engineering. In high school, I enrolled in AP courses, scoring a 5 on the AP Calculus BC exam. I believe
that every individual is created to be natural learners, and with clear instruction, any student is able to effectively learn any subject.
12 Subjects: including algebra 2, English, calculus, physics
...One of the motivations for me to be a tutor is to see my tutees to improve their academic performance and more importantly develop more effective study methodology/habits. To see my tutees
grow and be better about the subjects makes me feel very happy. I am very approachable and patient when students have questions or confusion.
3 Subjects: including algebra 2, accounting, algebra 1
...If not, I show him/her what was done wrong, and have the student work another example.Mathematics I have studied mathematics through calculus, differential equations, and partial differential
equations in obtaining my BA and BS degrees in chemical engineering from Rice University. I have assist...
11 Subjects: including algebra 2, English, chemistry, geometry
I have taught calculus and analytic geometry at the U. S. Air Force Academy for 7 years.
11 Subjects: including algebra 2, calculus, geometry, statistics
Nearby Cities With algebra 2 Tutor
Fresno, TX algebra 2 Tutors
Houston algebra 2 Tutors
Humble algebra 2 Tutors
Jersey Village, TX algebra 2 Tutors
Jersey Vlg, TX algebra 2 Tutors
Katy algebra 2 Tutors
Kingwood, TX algebra 2 Tutors
Meadows Place, TX algebra 2 Tutors
Missouri City, TX algebra 2 Tutors
Pasadena, TX algebra 2 Tutors
Pearland algebra 2 Tutors
Piney Point Village, TX algebra 2 Tutors
Spring algebra 2 Tutors
Stafford, TX algebra 2 Tutors
The Woodlands, TX algebra 2 Tutors
|
Lansdowne Calculus Tutor
...The math praxis covers the material that I teach in my classroom everyday (8th grade math), and I have several textbooks to help guide teachers-to-be through the studying process. I have also
held tutoring sessions with fellow teachers seeking certification in math in order to help them pass the...
21 Subjects: including calculus, reading, physics, geometry
...Throughout my years tutoring all levels of mathematics, I have developed the ability to readily explore several different viewpoints and methods to help students fully grasp the subject matter.
I can present the material in many different ways until we find an approach that works and he/she real...
19 Subjects: including calculus, geometry, trigonometry, statistics
...I have experience in tutoring all subject fields that are included on the ACT math test. As an undergraduate student at Jacksonville University, I studied both ordinary differential equations
and partial differential equations obtaining A's in both courses. I have also been tutoring these courses while a tutor at Jacksonville University.
13 Subjects: including calculus, geometry, GRE, algebra 1
...Some years ago I started to tutor one-on-one and have found that, more than classroom instruction, it allows me to tailor my teaching to students' individual needs. Their success becomes my
success. Every student brings a unique perspective and a unique set of expectations to his or her lesson,...
21 Subjects: including calculus, reading, writing, algebra 1
...I achieved within the top 15% nationwide score on my Praxis exam. I have taught all levels of math in high school, from Algebra 1 through Calculus. Geometry is a fun subject with
multidimensional thinking required.
15 Subjects: including calculus, physics, geometry, algebra 1
Linear Interpolation FP1 Formula
Re: Linear Interpolation FP1 Formula
Well, that is why he went the route he did. He got past listening to girls like her tell him how wonderful their bf's were. Fortunately for him, their bf's could not do science homework or repair cars as well as him.
What is really the difference between showering them with expensive gifts for intimacy and his approach? No one was really hurt by his antics; the girls considered it a good trade. After all, take my gf for instance: she was giving it away for nothing.
Re: Linear Interpolation FP1 Formula
Hmm, well his method does cut out all the dating formalities. I should have attempted to do this with adriana.
Re: Linear Interpolation FP1 Formula
Or you can just become adept at spotting the signs of someone who is not interested in you.
Re: Linear Interpolation FP1 Formula
How can someone sending 50 e-mails a day not be interested?
She was interested, I was just not the only one after her and got beaten to the game by the BF. I did correctly predict exactly this happening a month ago, however... but I don't think there was much
I could have done about it.
Re: Linear Interpolation FP1 Formula
That guy came before you. Because she was flirtatious that does not mean she was interested in you.
Re: Linear Interpolation FP1 Formula
50 e-mails a day for 6 weeks in a row is just flirting?
Re: Linear Interpolation FP1 Formula
Obviously, it was. She went all the way to Cyprus to visit the other guy, if he even exists. You just have to think that she is lying to you.
Re: Linear Interpolation FP1 Formula
I still don't know how to tell if someone is interested in me, then... or, how to tell the difference between that and flirting.
Re: Linear Interpolation FP1 Formula
It is difficult to tell without testing her reaction.
Re: Linear Interpolation FP1 Formula
How do I test them?
Re: Linear Interpolation FP1 Formula
In this case you were lucky again. She told you right where she stood. That is the first big clue. I told you then you should either drop her or continue as a friend, provided you never became so fixated on a dead end. If you were going to (and I think you knew that you would), then you should have dropped her, and you still can.
Re: Linear Interpolation FP1 Formula
She didn't say that explicitly; she told me she had a BF and was trying to understand her sexuality, and that she'd go out with me if things were different. (Again, a vague statement on her part.)
However, it is much clearer to me at this point in time that she is really not interested in me at all. I can still drop her, and I think I may already have to an extent...
Re: Linear Interpolation FP1 Formula
If you are lucky she has dropped you. How can things be different? Can a person change their preference? Not likely.
Fact is it is your reaction that you should alter. If she were really just a friend would you care whether she emails you every day? Of course not.
You are the one who has to dump her in your mind. She is not yours to keep.
Re: Linear Interpolation FP1 Formula
Well, I cannot dump her instantly. It is a long, slow process. And it is much easier if I find someone else to be interested in... that is how I normally move on.
Re: Linear Interpolation FP1 Formula
That is how everybody moves on.
Nothing gets you over the last one like the next one.
Find another one!
Re: Linear Interpolation FP1 Formula
Hopefully another opportunity presents itself soon...
Re: Linear Interpolation FP1 Formula
In the meantime push her down to the level of the weakest friend you have that is a guy.
Re: Linear Interpolation FP1 Formula
So in other words, never seeing her again?
Re: Linear Interpolation FP1 Formula
That is a strong possibility. That happens at the end of a friendship. It is like death, the older you get the more of it you are going to see.
Re: Linear Interpolation FP1 Formula
There was one advantage of seeing her though, good food and I got to go over some maths I'd learned in the past. And of course, the free condoms. But that is a far more material friendship.
Re: Linear Interpolation FP1 Formula
People split up. Married people, dating people, relatives and friends. They sometimes interact with us for just a brief moment then they go their separate ways.
Re: Linear Interpolation FP1 Formula
But me and adriana aren't going our separate ways until September.
Re: Linear Interpolation FP1 Formula
I think you already did go in separate directions.
Re: Linear Interpolation FP1 Formula
Well, it will be an interesting experiment to see how she reacts to it. I will grab some popcorn.
Re: Linear Interpolation FP1 Formula
What experiment? In my opinion she needs a brain transplant. They should put a human brain in there this time...
Wikijunior:How Things Work/Binary Numbers
What is Binary?
Binary is a new type of number system. You see it as 1s and 0s in movies or TV shows. Computers use this number system to add, subtract, multiply, divide and do every math operation done on
computers. Computers save data using binary. This book will teach you how binary works, why computers use it, and how they use it.
Why do we use Binary?
In normal math, we don't use binary. We were taught to use our normal number system. Binary is much easier to do math in than normal numbers because you only are using two symbols - 1 and 0 instead
of ten symbols - 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9.
Computers use binary because they can only read and store an on or off charge. So, using 0 as "off" and 1 as "on," we can use numbers in electrical wiring. Think of it as this - if you had one color
for every math symbol (0 to 9), you'd have ten colors. That's a lot of colors to memorize, but you have done it anyway. If you were limited to only black and white, you'd only have two colors. It
would be so much easier to memorize, but you would need to make a new way of writing down numbers. Binary is just that - a new way to record and use numbers.
Binary Notation
In first grade, you were taught that we have ones, tens, and hundreds columns, and so on (each column multiplies by 10). Binary also has columns, but they aren't ones and tens. The columns in binary are...
Binary ... 1,000,000 100,000 10,000 1,000 100 10 1
Base-10* version ... 64 32 16 8 4 2 1
* Normal numbers are called base-10, because there are 10 symbols that we use. Binary is called base-2, because it uses two symbols.
So what makes binary so easy? The answer lies in how we read the number. If we had the number 52, we have a 2 in the ones column, adding 2 times 1 to the total (2). We have a 5 in the 10s column,
multiply that together and get 50, adding that to the total. Our total number is 52, like we expect. In binary, though, this is way simpler if you know how to read it fast.
Binary numbers are read from right to left, unlike normal numbers which are read left to right.
We have been trained to read these base-10 numbers really quickly. Reading binary for humans is slower since we are used to base-10. You are now starting to learn how to read base-2, so it will be
slow. You will get faster over time.
Translating to Base-10
The binary number for 52 is 110100. How do you read a binary number?
1. You look at the ones column. Since it has a 0 in it, you don't add anything to the total.
2. Then you look at the twos column. Nothing, so we move on to the next column.
3. We have a 1 in the fours column, so we add 4 to the total (total is 4).
4. Skipping the eights column since it has a 0, we have come to a 1 in the sixteens column. We add 16 to the total (total is 20).
5. Last, we have a 1 in the thirty-twos column. We add this to our total (total is 52).
We're done! We now have the number 52 as our total. The basic rule for reading a base-2 number is: add each column's value to the total if there is a 1 in it. You don't have to multiply like you do in base-10 to get the total (like the 5 in the tens column from the above base-10 example), which can speed up your reading of base-2 numbers. Let's look at that in a table.
Binary digit   Column            Binary digit's value
0              ones (1)          0
0              twos (2)          0
1              fours (4)         4
0              eights (8)        0
1              sixteens (16)     16
1              thirty-twos (32)  32
Total                            52
Now let's look at another number.
Finding a Mystery Number
The binary number is 1011, but we don't know what it is. Let's go through the column-reading process to find out what the number is.
1. The ones column has a 1 in it, so we add 1 x 1 to the total (total is 1).
2. The twos column has a 1 in it, so we add 1 x 2 to the total (total is 3).
3. The fours column has a 0 in it, so we add 0 x 4 to the total (total is still 3).
4. The eights column has a 1 in it, so we add 1 x 8 to the total (total is 11).
We are done, so the total is the answer. The answer is 11! Here are some more numbers for you to work out.
• 101
• 1111
• 10001
• 10100
• 101000
Binary is how we represent numbers in electronics. All modern computers use binary to remember numbers. How computers physically store the numbers won't be taught in this book, but you will learn how binary numbers come into play when computers remember certain things.
Bits and Bytes
A bit is one symbol in a binary number, or one value in one column. (It's short for binary digit.) A byte is eight bits put together. (Why eight? It has to do with remembering letters, which you'll read about later.)
|
{"url":"http://en.wikibooks.org/wiki/Wikijunior:How_Things_Work/Binary_Numbers","timestamp":"2014-04-20T19:10:16Z","content_type":null,"content_length":"33517","record_id":"<urn:uuid:14b3422f-04c7-47bb-9625-175919b74418>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00364-ip-10-147-4-33.ec2.internal.warc.gz"}
|
2575 -- Jolly Jumpers
Jolly Jumpers
Time Limit: 1000MS Memory Limit: 65536K
Total Submissions: 15990 Accepted: 4857
A sequence of n > 0 integers is called a jolly jumper if the absolute values of the differences between successive elements take on all the values 1 through n-1. For instance, 1 4 2 3
is a jolly jumper, because the absolute differences are 3, 2, and 1, respectively. The definition implies that any sequence of a single integer is a jolly jumper. You are to write a program to
determine whether or not each of a number of sequences is a jolly jumper.
Each line of input contains an integer n < 3000 followed by n integers representing the sequence.
For each line of input, generate a line of output saying "Jolly" or "Not jolly".
Sample Input
5 1 4 2 -1 6
Sample Output
Not jolly
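The check follows straight from the definition: collect the absolute differences of successive elements and compare them with the set {1, ..., n-1}. A minimal Python sketch (Python is just for illustration; the judge accepts several languages):

import sys

for line in sys.stdin:
    parts = list(map(int, line.split()))
    if not parts:
        continue
    n, seq = parts[0], parts[1:]
    # The n-1 successive differences must be exactly the values 1..n-1.
    diffs = {abs(a - b) for a, b in zip(seq, seq[1:])}
    print("Jolly" if diffs == set(range(1, n)) else "Not jolly")

Note that a single-integer sequence passes automatically, since both sides of the comparison are then the empty set.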
|
{"url":"http://poj.org/problem?id=2575","timestamp":"2014-04-21T15:52:32Z","content_type":null,"content_length":"6163","record_id":"<urn:uuid:b1c88778-b5c8-48c1-beda-acab7ddc58b3>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00052-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Szigeti Jenő (Miskolc): The symmetric determinant and the 3x3 symmetric Newton formula
Abstract: One of the aims of this talk is to provide a short survey on the natural left, right and symmetric generalizations of the classical determinant theory for square matrices with entries in an arbitrary (possibly non-commutative) ring. Then we use the preadjoint matrix to exhibit a general trace expression for the symmetric determinant. The symmetric version of the classical Newton trace formula is presented in the 3x3 case.
|
{"url":"http://www.renyi.hu/~seminar/ABSTRACTS/szigeti_abs.html","timestamp":"2014-04-18T18:12:07Z","content_type":null,"content_length":"762","record_id":"<urn:uuid:e145c78e-7d15-41e2-a99e-3d4fec7da010>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00340-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A trust region algorithm for nonlinearly constrained optimization.
(English) Zbl 0631.65068
The authors describe a method using trust regions to solve the general nonlinear equality-constrained optimization problem $\min_{x\in \mathbb{R}^{n}} f(x)$ subject to $c(x)=0$. The method works by iteratively minimizing a quadratic model of the Lagrangian subject to a possibly relaxed linearization of the problem constraints and a trust region constraint. It is shown that this method is globally convergent even if singular or indefinite Hessian approximations are made.
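In standard notation (the notation here is mine, not the review's), the subproblem solved at each iterate $x_k$ has the form $\min_{d}\; g_k^{\top} d + \frac{1}{2} d^{\top} B_k d$ subject to $A_k d + \theta_k c(x_k)=0$ and $\|d\| \le \Delta_k$, where $g_k$ is the gradient, $B_k$ the (possibly singular or indefinite) Hessian approximation of the Lagrangian, $A_k$ the constraint Jacobian, $\theta_k$ a relaxation parameter for the linearized constraints, and $\Delta_k$ the trust-region radius.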
A second order correction step that brings the iterates closer to the feasible set is described. If sufficiently precise Hessian information is used, the correction step allows one to prove that the
method is also locally quadratically convergent, and that the limit satisfies the second order necessary conditions for constrained optimization. An example is given to show that, without this
correction, a situation similar to the Maratos effect may occur where the iteration is unable to move away from a saddle point.
65K05 Mathematical programming (numerical methods)
90C30 Nonlinear programming
|
{"url":"http://zbmath.org/?q=an:0631.65068","timestamp":"2014-04-19T04:41:28Z","content_type":null,"content_length":"22052","record_id":"<urn:uuid:d3fe1426-cae3-4544-b58b-0ce9d0d6b34e>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00595-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How to find the area of composite figures
OK, so I have a figure that is a semicircle and a triangle put together; it looks like an ice cream cone. The radius of the semicircle is 8 and the height of the triangle is 15. What is the answer to this problem?
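A hedged worked answer, assuming the triangle's base is the diameter of the semicircle (the question doesn't say so, but that is the usual "ice cream cone" setup): add the two areas.

Area = (1/2)πr^2 + (1/2)bh = (1/2)π(8)^2 + (1/2)(16)(15) = 32π + 120 ≈ 220.5 square units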
|
{"url":"http://www.ehelp.com/questions/10455601/how-to-find-the-area-of-composite-figures","timestamp":"2014-04-20T03:10:37Z","content_type":null,"content_length":"9920","record_id":"<urn:uuid:b0937596-2c16-4169-929d-3eddd8064b8f>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00643-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Wilson's Theorem
November 30th 2011, 07:34 AM
Wilson's Theorem
So I've been working on this problem for too long now and I need some advice as to how to proceed. I've tried using induction and I've also tried plugging in concrete values to try and discern a
pattern to no avail. I DO know that Wilson's Theorem applies.
Wilson's Theorem
If $p$ is a prime, then $(p-1)! \equiv -1 \pmod{p}$
Show that if $p$ is an odd prime and $a$ and $b$ are non-negative integers with $a+b=p-1$, then $a!\,b! + (-1)^a \equiv 0 \pmod{p}$
November 30th 2011, 09:26 AM
Re: Wilson's Theorem
Note that: $a! = 1\cdot \ldots \cdot a \equiv_p \left( (-1) \cdot (p-1) \right)\cdot \ldots \cdot \left( (-1) \cdot (p-a) \right)$ and that $p-a = b + 1$
November 30th 2011, 09:42 AM
Re: Wilson's Theorem
I tried factoring out the $(-1)$, getting $a!$ congruent to $(-1)^a\,(p-1)\cdots(p-a) \pmod p$. Then I tried to multiply by $b!$, considering that $p-a=b+1$, but I'm getting nowhere. Man, I feel stupid!
November 30th 2011, 09:53 AM
Re: Wilson's Theorem
I'm going to try to complete this.
$a!\equiv (-1)^a(p-1)\cdots(p-a)\pmod{p}$
$a!b!\equiv (-1)^a(p-1)\cdots(p-a)b!\pmod{p}$
$\equiv (-1)^a(p-1)!\pmod{p}$ (Note that $p-a = b + 1$.)
$\equiv -(-1)^a\pmod{p}$ (Using Wilson's theorem)
$a!b!+(-1)^a\equiv 0\pmod{p}$
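(A quick numerical sanity check of the identity, added here for the reader; it is not part of the original thread. A short Python sketch:)

from math import factorial

# Check a!*b! + (-1)^a ≡ 0 (mod p) for every split a + b = p - 1
# of a few small odd primes.
for p in [3, 5, 7, 11, 13]:
    for a in range(p):
        b = p - 1 - a
        assert (factorial(a) * factorial(b) + (-1) ** a) % p == 0
print("identity verified for p = 3, 5, 7, 11, 13")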
November 30th 2011, 10:26 AM
Re: Wilson's Theorem
Thank you!
|
{"url":"http://mathhelpforum.com/number-theory/193068-wilsons-theorem-print.html","timestamp":"2014-04-21T13:33:56Z","content_type":null,"content_length":"11776","record_id":"<urn:uuid:773891ad-815c-4f25-8b34-49e4ac63711c>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00650-ip-10-147-4-33.ec2.internal.warc.gz"}
|
aqueous solutions: dilution
Often it is necessary to take a concentrated solution and dilute it. However, we want to dilute it in a controlled way so that we know the concentration after dilution. The way this is done can be understood from the following figure of dilution:
The solute (denoted by red disks) is concentrated in the beaker on the left. Adding water dilutes the solution, as shown with the beaker on the right. However, note that although the concentration changes upon dilution, the number of solute molecules does not. In other words, the number of moles of solute is the same before and after dilution. Since moles = molarity x volume (i.e., moles = M x V), we end up with the following equation relating molarity and volume before and after dilution:
Mi x Vi = Mf x Vf
where i and f stand for initial and final. Suppose we need 150 mL of 0.25 M NaCl. On the shelf we find a bottle of 2 M NaCl. What do we do? The concentrated molarity is Mi = 2 M, and the desired final values are Mf = 0.25 M and Vf = 150 mL. We need to determine Vi, and can do so by rearranging the above equation and doing the resulting calculation:
Vi = (Mf x Vf) / Mi = (0.25 M x 150 mL) / (2 M) = 18.8 mL
Thus, we need 18.8 mL of the 2 M NaCl solution; put it in a beaker and add enough water to make 150 mL of solution. The resulting solution will have a molarity of 0.25 M NaCl.
Example 2: We take 25 mL of a 0.45 M AgNO3 solution and dilute it to 300 mL. What is the molarity of the resulting solution?
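(The page ends here; the following worked answer is my own addition, using the same dilution equation.) Rearranging for the final molarity: Mf = (Mi x Vi) / Vf = (0.45 M x 25 mL) / (300 mL) = 0.0375 M AgNO3.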
|
{"url":"http://www.iun.edu/~cpanhd/C101webnotes/aqueoussolns/dilution.html","timestamp":"2014-04-18T08:44:43Z","content_type":null,"content_length":"4426","record_id":"<urn:uuid:e4821030-1b5f-4428-b3e0-7f0db95a1613>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00075-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mechanics of Solids
1. A concrete pier has dimensions of 50 ft wide, 100 ft tall, and 5 ft deep. There are 2000 people standing on top, each weighing 200 pounds. Two holes are bored through the concrete: one centered 25 feet from the top, the other 25 feet from the bottom, each on the side that is 50 feet wide, right in the middle. If the density of the concrete is 156 lb/ft^3 and the ultimate compressive strength of concrete is 15,000 psi, what are the two largest holes that can be drilled without the structure collapsing?
I need help knowing how to approach this problem, as it was assigned the first day of class and I am a little intimidated by it.
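One possible way to set it up (a sketch under stated assumptions, not a definitive solution: it assumes circular holes of diameter d bored through the 5-ft depth, takes the load at each hole as the weight of all concrete above it plus the people, and checks only the average compressive stress on the reduced cross-section, ignoring stress concentrations):

# All quantities in feet and pounds.
WIDTH, DEPTH = 50.0, 5.0
DENSITY = 156.0                      # lb/ft^3
PEOPLE = 2000 * 200.0                # lb
SIGMA_ULT = 15000.0 * 144.0          # 15,000 psi converted to lb/ft^2

def max_hole_diameter(depth_from_top):
    # Weight above the section plus the live load...
    load = PEOPLE + DENSITY * WIDTH * DEPTH * depth_from_top
    # ...divided by the net area DEPTH*(WIDTH - d) must not exceed SIGMA_ULT.
    return WIDTH - load / (DEPTH * SIGMA_ULT)

for h in (25.0, 75.0):               # hole centers, measured from the top
    print(h, max_hole_diameter(h))

The diameters come out very close to the full 50-ft width, because concrete's compressive strength dwarfs these loads; the useful part of the sketch is only the setup (load above the section divided by the net area at the hole).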
|
{"url":"http://www.physicsforums.com/showthread.php?t=413267","timestamp":"2014-04-19T15:17:39Z","content_type":null,"content_length":"19245","record_id":"<urn:uuid:ea245536-2af8-49ab-9111-2700cc147dd4>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00273-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A simple question?!
I am a new member of this forum. I am from India and studying in grade 12.
I was stuck on a question and need help. Hope someone can solve it.
The question is to solve these simultaneous equations:
√x + y = a -------- (i)
x + √y = b ---------- (ii)
I have a few hints, such as making a change of variable by introducing
x = m^2 and y = n^2 and then doing some algebraic manipulations to get
m + n^2 = a and m^2 + n = b. But I don't know what to do next. Please help.
Those "algebraic manipulations" don't help because you are left with one equation in two variables. After you have m
+ n= b and m+ n
= a, you can solve the first for n: n= b- m
and then substitute in the second: m+ (b-m
= a. That gives a single, fourth degree, equation for m.
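If you want to see that quartic (and its roots for concrete values of a and b), here is a small SymPy sketch, added purely for illustration; the sample values a = 3, b = 5 are arbitrary:

import sympy as sp

m, a, b = sp.symbols('m a b', real=True)

# n = b - m**2 from equation (ii); substitute into m + n**2 = a.
quartic = sp.expand(m + (b - m**2)**2 - a)
print(quartic)   # m**4 - 2*b*m**2 + b**2 + m - a

# Numeric roots for the sample values a = 3, b = 5:
print(sp.nroots(sp.Poly(quartic.subs({a: 3, b: 5}), m)))

Remember that only roots with m >= 0 and n = b - m^2 >= 0 are valid, since m = √x and n = √y.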
|
{"url":"http://www.physicsforums.com/showthread.php?p=1029720","timestamp":"2014-04-19T09:35:34Z","content_type":null,"content_length":"27269","record_id":"<urn:uuid:52d39fce-3215-435c-9417-b32490b41a0c>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00079-ip-10-147-4-33.ec2.internal.warc.gz"}
|
20 Things You Didn't Know About Math
20 THINGS YOU DIDN'T KNOW ABOUT MATH
Matt Damon, below, in Good Will Hunting. Peter Coy is economics editor of Bloomberg Businessweek.
By Peter Coy
1. The median score for college-bound seniors on the math section of the SAT in 2011 is about 510 out of 800. So right there is proof that there are lots of unsolved
math problems.
2. The great 19th-century mathematician Carl Friedrich Gauss called his field "the queen of sciences."
3. If math is a queen, she's the White Queen from Alice in Wonderland, who bragged that she believed "as many as six impossible things before breakfast." (No surprise that Lewis Carroll also
wrote about plane algebraic geometry.)
4. For example, the Navier-Stokes equations are used all the time to approximate turbulent fluid flows around aircraft and in the bloodstream, but the math behind them still isn't understood.
5. And the oddest bits of math often turn out to be useful. Quaternions, which can describe the rotation of 3-D objects, were discovered in 1843. They were considered beautiful but useless
until 1985, when computer scientists applied them to rendering digital animation.
6. Some math problems are designed to be confounding, like British philosopher Bertrand Russell's paradoxical "set of all sets that are not members of themselves." If Russell's set is not a member of itself, then by definition it is a member of itself.
7. Russell was using a mathematical argument to test the outer limits of logic (and sanity).
8. Kurt Gödel, the renowned Austrian logician, made matters worse in 1931 with his first incompleteness theorem, which said that any sufficiently powerful math system must contain statements that are true but unprovable. Gödel starved himself to death in 1978.
9. Yet problem solvers soldier on. They struggled for 358 years with Fermat's last theorem, a notoriously unfinished note that 17th-century mathematician and politician Pierre de Fermat
scrawled into the margin of a book.
10. You know how 3^2 + 4^2 = 5^2? Fermat claimed that there are no numbers that fit the pattern (a^n + b^n = c^n) when they are raised to a power higher than 2.
11. Finally, in 1995, English mathematician Andrew Wiles proved Fermat was right, but to do it he had to use math Fermat never knew existed. The introduction to Wiles's 109-page proof also
cites dozens of colleagues, living and dead, on whose shoulders he stood.
12. At a conference in Paris in 1900, German mathematician David Hilbert determined to clear up some lingering math mysteries by setting out 23 key problems. By 2000 mathematicians had solved all of the well-formed Hilbert problems save one: a hypothesis posed in 1859 by Bernhard Riemann.
13. The Riemann hypothesis is now regarded as the most significant unsolved problem in mathematics. It claims there is a hidden pattern to the distribution of prime numbers, numbers that can't be factored, such as 5, 7, 41, and, oh, 1,000,033.
14. The hypothesis has been shown experimentally to hold for the first 100 billion cases, which would be proof enough for an accountant or even a physicist. But not for a mathematician.
15. In 2000 the Clay Mathematics Institute announced $1 million prizes for solutions to seven vexing "Millennium Prize Problems." Ten years later the institute made its first award, to Russian mathematician Grigori Perelman, for solving the Poincaré conjecture, a problem dating back to 1904.
16. Proving that mathematicians don't grasp seven-digit numbers, Perelman turned down the million bucks because he felt another mathematician was equally deserving. He currently lives in seclusion in Russia.
17. In his teens, Évariste Galois invented an entirely new branch of math, called group theory, to prove that "the quintic" (an equation with an x^5 term) was not solvable by any formula.
18. Galois died in Paris in 1832 at age 20, shot in a duel over a woman. Anticipating his loss, he spent his last night frantically making corrections and additions to his math papers.
19. Graduate student George Dantzig arrived late to statistics class at Berkeley one day in 1939 and copied two problems off the blackboard. He handed in the answers a few days later,
apologizing that they were harder than usual.
20. The "homework" was actually two well-known unproven theorems. Dantzig's story became famous and inspired a scene from Good Will Hunting.
Discover Magazine, March 2012, p. 80
|
{"url":"http://sgforums.com/forums/2297/topics/465050?page=1","timestamp":"2014-04-17T04:50:28Z","content_type":null,"content_length":"47578","record_id":"<urn:uuid:77a02fd5-1e51-4ed1-a1e1-1e2590f1f17d>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00304-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Andover, MA Algebra 1 Tutor
Find an Andover, MA Algebra 1 Tutor
...The student takes this home and brings back during tutoring. When tutoring is terminated, the student may choose to keep the reference notebook. Notebook and material to create reference
notebook is provided as part of the tutoring cost.
16 Subjects: including algebra 1, reading, writing, dyslexia
...Through my education and my lab/field experiences, I have attained a strong science content background and understand what working as a scientist is really like. Math: I have completed math courses through Calculus III. Writing: I love writing and my professors have always commented on my writ...
22 Subjects: including algebra 1, reading, writing, geometry
...I have had much success over the years with a lot of repeat and referral business. I have tutored middle school math, high school math, secondary H.S. entrance exam test prep, SAT, PSAT, ACT (math and English), and SAT I and SAT II Math. I have taught middle school as well as high school.
19 Subjects: including algebra 1, geometry, GRE, ASVAB
...I strive to help students understand the core concepts and building blocks necessary to succeed not only in their current class but in the future as well. I am a second year graduate student
at MIT, and bilingual in French and English. I earned my high school diploma from a French high school, as well as a bachelor of science in Computer Science from West Point.
16 Subjects: including algebra 1, French, elementary math, physics
...I continue to use undergraduate level linear algebra in my physics research. I use MATLAB routinely in my research. It was my primary simulation and computational physics tool throughout
graduate school, and I continue to use it on a daily basis.
16 Subjects: including algebra 1, calculus, physics, geometry
Related Andover, MA Tutors
Andover, MA Accounting Tutors
Andover, MA ACT Tutors
Andover, MA Algebra Tutors
Andover, MA Algebra 2 Tutors
Andover, MA Calculus Tutors
Andover, MA Geometry Tutors
Andover, MA Math Tutors
Andover, MA Prealgebra Tutors
Andover, MA Precalculus Tutors
Andover, MA SAT Tutors
Andover, MA SAT Math Tutors
Andover, MA Science Tutors
Andover, MA Statistics Tutors
Andover, MA Trigonometry Tutors
|
{"url":"http://www.purplemath.com/Andover_MA_algebra_1_tutors.php","timestamp":"2014-04-16T10:51:01Z","content_type":null,"content_length":"24029","record_id":"<urn:uuid:f78ce82a-303b-4474-9ea3-bf964d870d66>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00475-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Weiler-Atherton Algorithm
The Weiler-Atherton algorithm is capable of clipping a concave polygon with interior holes to the boundaries of another concave polygon, also with interior holes. The polygon to be clipped is called
the subject polygon (SP) and the clipping region is called the clip polygon (CP). The new boundaries created by clipping the SP against the CP are identical to portions of the CP. No new edges are
created. Hence, the number of resulting polygons is minimized.
The algorithm describes both the SP and the CP by a circular list of vertices. The exterior boundaries of the polygons are described clockwise, and the interior boundaries or holes are described
counter-clockwise. When traversing the vertex list, this convention ensures that the inside of the polygon is always to the right. The boundaries of the SP and the CP may or may not intersect. If
they intersect, the intersections occur in pairs. One of the intersections occurs when the SP edge enters the inside of the CP and one when it leaves. Fundamentally, the algorithm starts at an
entering intersection and follows the exterior boundary of the SP clockwise until an intersection with a CP is found. At the intersection a right turn is made, and the exterior of the CP is followed
clockwise until an intersection with the SP is found. Again, at the intersection, a right turn is made, with the SP now being followed. The process is continued until the starting point is reached.
Interior boundaries of the SP are followed counter-clockwise.
A more formal statement of the algorithm is [3]
• Determine the intersections of the subject and clip polygons - Add each intersection to the SP and CP vertex lists. Tag each intersection vertex and establish a bidirectional link between the SP
and CP lists for each intersection vertex.
• Process nonintersecting polygon borders - Establish two holding lists: one for boundaries which lie inside the CP and one for boundaries which lie outside. Ignore CP boundaries which are outside the SP. CP boundaries inside the SP form holes in the SP. Consequently, a copy of the CP boundary goes on both the inside and the outside holding lists. Place the boundaries on the appropriate holding list.
• Create two intersection vertex lists - One, the entering list, contains only the intersections for the SP edge entering the inside of the CP. The other, the leaving list, contains only the
intersections for the SP edge leaving the inside of the CP. The intersection type will alternate along the boundary. Thus, only one determination is required for each pair of intersections.
• Perform the actual clipping -
Polygons inside the CP are found using the following procedure.
□ Remove an intersection vertex from the entering list. If the list is empty, the process is complete.
□ Follow the SP vertex list until an intersection is found. Copy the SP list up to this point to the inside holding list.
□ Using the link, jump to the CP vertex list.
□ Follow the CP vertex list until an intersection is found. Copy the CP vertex list up to this point to the inside holding list.
□ Jump back to the SP vertex list.
□ Repeat until the starting point is again reached. At this point, the new inside polygon has been closed.
Polygons outside the CP are found using the same procedure, except that the initial intersection vertex is obtained from the leaving list and the CP vertex list is followed in the reverse
direction. The polygon lists are copied to the outside holding list.
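To make the traversal concrete, the following compact Python sketch traces the inside polygons. It is my own illustration, not code from [3] or [6], and it assumes the intersections have already been computed and spliced into both circular vertex lists, with each intersection node carrying a 'link' to its twin node in the other polygon's list; degeneracies and hole handling are omitted.

def clip_inside(entering):
    # entering: intersection nodes where an SP edge enters the CP.
    # Every node has 'point' and 'next'; intersection nodes also
    # carry 'link', the twin node in the other polygon's list.
    polygons = []
    for start in entering:
        if start.get("used"):
            continue                       # already part of a polygon
        node, poly = start, []
        while True:
            poly.append(node["point"])
            node["used"] = True
            node = node["next"]            # walk the current list
            if "link" in node:             # reached an intersection
                if node is start or node["link"] is start:
                    break                  # polygon closed
                node = node["link"]        # make the "right turn"
        polygons.append(poly)
    return polygons

The outside polygons are obtained the same way, starting from the leaving list and following the CP list in the reverse direction, as described above.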
A suitably detailed description of the algorithm can be found in [3] and [6].
Anirudh Modi
|
{"url":"http://www.anirudh.net/practical_training/main/node10.html","timestamp":"2014-04-18T00:40:22Z","content_type":null,"content_length":"6946","record_id":"<urn:uuid:8a58b2c2-19f6-49f8-811c-ba72c5b57f04>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00152-ip-10-147-4-33.ec2.internal.warc.gz"}
|
T. C. Son, E. Pontelli and P. H. Tu (2007) "Answer Sets for Logic Programs with Arbitrary Abstract Constraint Atoms", Volume 29, pages 353-389. doi:10.1613/jair.2171
In this paper, we present two alternative approaches to defining answer sets for logic programs with arbitrary types of abstract constraint atoms (c-atoms). These approaches generalize the
fixpoint-based and the level mapping based answer set semantics of normal logic programs to the case of logic programs with arbitrary types of c-atoms. The results are four different answer set
definitions which are equivalent when applied to normal logic programs.
The standard fixpoint-based semantics of logic programs is generalized in two directions, called answer set by reduct and answer set by complement. These definitions, which differ from each other in
the treatment of negation-as-failure (naf) atoms, make use of an immediate consequence operator to perform answer set checking, whose definition relies on the notion of conditional satisfaction of
c-atoms w.r.t. a pair of interpretations.
The other two definitions, called strongly and weakly well-supported models, are generalizations of the notion of well-supported models of normal logic programs to the case of programs with c-atoms.
As for the case of fixpoint-based semantics, the difference between these two definitions is rooted in the treatment of naf atoms.
We prove that answer sets by reduct (resp. by complement) are equivalent to weakly (resp. strongly) well-supported models of a program, thus generalizing the theorem on the correspondence between
stable models and well-supported models of a normal logic program to the class of programs with c-atoms.
We show that the newly defined semantics coincide with previously introduced semantics for logic programs with monotone c-atoms, and they extend the original answer set semantics of normal logic
programs. We also study some properties of answer sets of programs with c-atoms, and relate our definitions to several semantics for logic programs with aggregates presented in the literature.
|
{"url":"http://jair.org/papers/paper2171.html","timestamp":"2014-04-18T23:15:22Z","content_type":null,"content_length":"4499","record_id":"<urn:uuid:8b8507e5-7196-4322-a937-18b6686358ef>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00020-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Functions Tutors
Fort Lauderdale, FL 33330
Bilingual Biochemical Engineer with Masters in Education.
...I can accommodate the needs of the students and give options on how to prepare better. Depending on the student's time and consistency, students usually see an improvement in their score....
Offering 10+ subjects including algebra 1
|
{"url":"http://www.wyzant.com/Hialeah_functions_tutors.aspx","timestamp":"2014-04-18T03:50:13Z","content_type":null,"content_length":"60039","record_id":"<urn:uuid:770be047-0d91-4a8b-8ee6-677da2a75968>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00499-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Final Exam Review Information
Revised: Fall 2007
MATH 0300 STUDENT SYLLABUS
Course: MATH 0300 (Beginning Algebra)
Prerequisite: Students are placed in MATH 0300 based on placement test results, taken at UH-Downtown.
Textbook: Beginning and Intermediate Algebra: An Integrated Approach, 4th edition, for University of Houston-Downtown Math 0300 & Math 1300 by R. David Gustafson and Peter D. Frisk, Brooks/Cole
Publishing Company, Pacific Grove, California, 2004.
Why You Are in this Course: Like many students at UHD, your placement test results indicate that your arithmetic and algebra skills are not sufficiently developed for you to pass one of the core
college level mathematics courses required of all students at UHD (these core courses are MATH 1301 or MATH 1310). MATH 0300 is a developmental course intended to strengthen and build your
mathematical skills up to the college level. Upon completion of this course, you will also need to complete or test out of MATH 1300 before enrolling in a college level math course.
Where to Find Course Resources: The first place to seek assistance and resources is from your instructor, both inside and outside of class. Your instructor will provide the times and locations where
he or she is available for office hours to work with you outside of class. Next, students enrolled in MATH 0300 at UHD have access to the Math Lab in the Academic Support Center (925-N) where they
may obtain additional tutoring with understanding concepts or improving their skills. The Center is staffed with mathematics faculty and student assistants, and offers tutorial help, videotapes,
calculators, and computer access on a walk-in basis. The Math Lab maintains extensive hours which are published each semester. You are encouraged to visit the Math Lab throughout the semester
whenever you feel you have time to work there, no appointment required. It is also an excellent place to study the textbook and work on homework problems, so that you can receive immediate answers to
your questions as necessary. The CD that comes with text also contains video instruction corresponding to examples in the text, as well as practice quizzes and chapter tests. A copy of this CD is
available in the Math Lab for use in the lab or for check out.
Goals/Objectives: At the completion of this course, a student should be able to: (1) identify different types of real numbers, including integer, rational, and irrational; (2) identify the basic
properties of real numbers; (3) plot numbers on the real number line; (4) execute the correct order of operations to simplify real-valued expressions; (5) determine the absolute value of a real
number and interpret its geometric meaning; (6) simplify algebraic expressions (combine like terms, multiply with the distributive law, combine like powers for integer exponents); (7) factor the
greatest common factor from a polynomial, particularly in the context of reducing fractions or solving equations; (8) solve linear equations; (9) solve linear inequalities and graph their solutions
on the real number line; (10) plot points and determine the coordinates of points in the cartesian coordinate system; (11) translate a relationship between quantities stated in words to an algebraic
expression; (12) solve various meaningful application problems.
Department Grading Policy: The final exam for this course is comprehensive and counts 1/3 of your course average. Your instructor will provide complete information as to how your course average will
be computed. Your final course average will be used assign your final course grade according to the formula shown here. Since MATH 0300 is considered a pre-college course, this grade will appear on
your transcript but will not be calculated into your GPA.
│ 90-100 │ “A” │
│ 80-89 │ “B” │
│ 70-79 │ “C” │
│ 0-69 │ “IP” [not a passing grade] │
The following cases are exceptions:
1. If your final exam score is less than 50, you will receive an "F" or "IP" for the course regardless of your average.
2. If you violate the MATH 0300 Attendance Policy (see item below), you will receive an "F" for the course regardless of your average.
3. If you are attending class but are not making a genuine effort to pass (as evidenced by not handing in assignments, not participating in class, not seeking help outside of class, etc.), you will receive an "F" for the course regardless of your average.
You cannot receive the grade “I”-Incomplete unless you have a documented personal emergency that prevents you from completing the last fraction of the course, such as the last test and/or the final
exam. You must have a passing average based on the work you have already completed to receive an “I.”
Calculator Policy: Students are not required to purchase calculators and will not be allowed to use them on the final exam. Your instructor will give you more information about the use of
calculators in your class.
Excess Course Attempts: In accordance with state law, effective Fall 2004 the University of Houston-Downtown is charging an additional fee for each credit hour for enrollment in a developmental
course after 18 hours of developmental work has already been attempted. Once 18 attempted hours of developmental course work has been accumulated, registration in a developmental course will result
in the additional charge. An attempt is defined as an enrollment that results in a letter grade (including “S”, “U”, “IP”, and “W”). A developmental course is defined as MATH 0300, MATH 1300, ENG
1300, ENG 130A, and RDG 1300.
Statement on Reasonable Accommodations: UHD adheres to all applicable federal, state, and local laws, regulations, and guidelines with respect to providing reasonable accommodations for students
with disabilities. Students with disabilities should register with Disabled Student Services (409-S) and contact the instructor in a timely manner to arrange for appropriate accommodations.
MATH 0300 Attendance Policy: An attendance policy is enforced for this course. See the separate sheet MATH 0300 Attendance Policy for details.
Satisfactory Progress Policy: Students are required to demonstrate satisfactory progress toward completing their developmental course requirements. MATH 0300 is a developmental course. See the
separate sheet Satisfactory Progress Policy for Developmental Courses for details.
General University Policy: All students are subject to UHD Academic Honesty Policy and to all other university-wide policies and procedures as they are set forth in the UHD University Catalog and
Student Handbook.
Course Content: The course covers the following sections of the textbook. In some cases, not all pages from a section are covered.
│Chapters │ Sections │
│Chapter 1│1.1 Real numbers and their graphs │
│ │1.2 Fractions │
│ │1.3 Exponents and order of operations │
│ │1.4 Adding and subtracting real numbers │
│ │1.5 Multiplying and dividing real numbers │
│ │1.6 Algebraic expressions │
│ │1.7 Properties of real numbers │
│Chapter 2│2.1 Solving basic equations │
│ │2.2 Solving more equations │
│ │2.3 Simplifying expressions to solve equations │
│ │2.4 Introduction to problem solving │
│ │2.6 Formulas │
│ │2.7 Solving inequalities │
│Chapter 3│3.1 The rectangular coordinate system │
│ │3.2 Graphing linear equations │
│Chapter 4│4.1 Natural-number exponents │
│ │4.2 Zero and negative-integer exponents │
│ │4.3 Scientific notation │
│ │4.4 Polynomials │
│ │4.5 Adding and subtracting polynomials │
│ │4.6 Multiplying polynomials │
│ │4.7 Dividing polynomials by monomials │
│Chapter 5│5.1 Factoring out the greatest common factor │
Tips for Becoming a Successful College Student:
1. Come to class.
2. Read your book.
3. Do your homework.
4. Listen and ask questions.
5. Contribute to classroom discussions.
6. Interact with your teachers, either face to face or using the phone or email.
7. Form study groups with your classmates.
8. Meet with your advisor.
9. Get involved in campus activities.
10. Share new ideas with your friends and family.
VISIT THE UHD ALGEBRA STUDENT WEB PAGE FOR MORE INFORMATION:
|
{"url":"http://uhd.edu/academic/colleges/sciences/cms/QEP/math_0300_student_syllabus.html","timestamp":"2014-04-19T14:33:50Z","content_type":null,"content_length":"47234","record_id":"<urn:uuid:4839d802-9db7-403f-b297-f515efd2f40d>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00124-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Petaluma Algebra Tutors
...I have been an independent study teacher for over 10 years and have extensive experience working with elementary students in math. I use a variety of resources and teaching styles depending on
the individual needs of the child. I have a California Multiple Subjects Credential.
17 Subjects: including algebra 2, algebra 1, reading, English
...Often times the issue is with earlier concepts that were not mastered with confidence. I will help by explaining concepts in plain English. And I help students to learn to see concepts and
problems in a way that makes sense to them.
18 Subjects: including algebra 1, algebra 2, calculus, geometry
Hello, I'm Brandis. I hold Bachelor's and Master's degrees from the San Francisco Conservatory of Music. In the early part of my collegiate studies, I also studied math.
23 Subjects: including algebra 2, algebra 1, reading, SAT math
...Perimeters, areas, volumes, angles, ratios, probability--all these words indicate measurements and relationships. Math teaches us to use what we know to find out what we need to know. I like to
think of math as a language--with a vocabulary (terminology)that students can acquire and a grammar (formulas and rules of operation) that students can learn to manipulate successfully.
42 Subjects: including algebra 1, English, reading, writing
...While I do have a 24 hour cancellation policy, I offer makeup classes. I excel in helping students to gain the understanding of the material needed, while providing the appropriate level of
assistance and guidance, where I may vary the approach as needed. I offer analysis of the students needs during sessions and the development of a customized plan to achieve the best results.
30 Subjects: including algebra 2, calculus, Microsoft Excel, general computer
|
{"url":"http://www.algebrahelp.com/Petaluma_algebra_tutors.jsp","timestamp":"2014-04-17T00:50:56Z","content_type":null,"content_length":"24722","record_id":"<urn:uuid:c11b3d8f-0fcf-4486-a6f7-a24124a20c60>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00298-ip-10-147-4-33.ec2.internal.warc.gz"}
|
We have established that objects can be moved around by applying FORCES. The application of force means that WORK is performed. WORK is the scientific term used to refer to this application of force.
We say that 1 joule of work is required to move a force of 1 newton over a distance of 1 metre!
• Remember that effort actually indicates the use of energy. A joule therefore indicates the amount of energy that is required to move 1 N over a distance of 1 metre.
1. How much effort do you apply when you:
a) drag a 10-N brick over a distance of 3 metres?
b) drag a brick with a mass of 2,5 kg over a distance of 5 m?
2. The energy value of a bar of chocolate is 300 kJ. How many 2,5-kg bricks will you be able to drag for a distance of 5 metres with this amount of energy?
3. How far will the energy provided by another bar of chocolate enable you to drag one such brick?
You have made use of the formula :
W = F × s
W : WorkF : Force s : Distance
• The placement of the letters in the formula triangle (W on top, with F and s below it) will help you to remember how to use the formula. Your educator will explain this to you.
• Now calculate how much energy will be required to move yourself over a distance of 10 metres.
Your educator will provide you with further exercises dealing with movement/acceleration.
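For checking your answers to the exercises above, here is a small Python sketch. One assumption is mine: the worksheet treats a 1 kg mass as weighing about 10 N (g ≈ 10 m/s²).

G = 10.0                                  # N per kg (approximation)

def work(force_n, distance_m):
    return force_n * distance_m           # W = F x s, in joules

print(work(10, 3))                        # 1a) 30 J
brick = 2.5 * G                           # a 2.5 kg brick weighs ~25 N
print(work(brick, 5))                     # 1b) 125 J
print(300_000 / work(brick, 5))           # 2) 300 kJ allows 2400 drags of 5 m
print(300_000 / brick)                    # 3) one bar moves the brick 12,000 m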
Assessment of CALCULATION OF ENERGY USAGE (EFFORT)
[LO 2.3; LO 2.4]
|
{"url":"http://cnx.org/content/m20432/latest/","timestamp":"2014-04-23T06:51:42Z","content_type":null,"content_length":"38184","record_id":"<urn:uuid:73e60623-151c-48c1-8068-cde8054ff1d7>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00444-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Brief Complexity Theory
Complexity Theory is the study of how long a program will take to run, depending on the size of its input. There are many good introductory books to complexity theory and the basics are explained in
any good algorithms book. I'll keep the discussion here to a minimum.
The idea is to say how well a program scales with more data. If you have a program that runs quickly on very small amounts of data but chokes on huge amounts of data, it's not very useful (unless you
know you'll only be working with small amounts of data, of course). Consider the following Haskell function to return the sum of the elements in a list:
sum [] = 0              -- the empty list sums to 0
sum (x:xs) = x + sum xs -- add the head to the sum of the tail
How long does it take this function to complete? That's a very difficult question; it would depend on all sorts of things: your processor speed, your amount of memory, the exact way in which the
addition is carried out, the length of the list, how many other programs are running on your computer, and so on. This is far too much to deal with, so we need to invent a simpler model. The model we
use is sort of an arbitrary "machine step." So the question is "how many machine steps will it take for this program to complete?" In this case, it only depends on the length of the input list.
If the input list is of length $0$, the function will take either $0$ or $1$ or $2$ or some very small number of machine steps, depending exactly on how you count them (perhaps $1$ step to do the pattern matching and $1$ more to return the value $0$). What if the list is of length $1$? Well, it would take however much time the list of length $0$ would take, plus a few more steps for doing the first (and only) element.
If the input list is of length $n$, it will take however many steps an empty list would take (call this value $y$) and then, for each element it would take a certain number of steps to do the
addition and the recursive call (call this number $x$). Then, the total time this function will take is $nx+y$ since it needs to do those additions $n$ many times. These $x$ and $y$ values are called
constant values, since they are independent of $n$, and actually dependent only on exactly how we define a machine step, so we really don't want to consider them all that important. Therefore, we say
that the complexity of this sum function is $\mathcal{O}(n)$ (read "order $n$"). Basically saying something is $\mathcal{O}(n)$ means that for some constant factors $x$ and $y$, the function takes
$nx+y$ machine steps to complete.
Consider the following sorting algorithm for lists (commonly called "insertion sort"):
sort [] = []                   -- an empty list is already sorted
sort [x] = [x]                 -- so is a one-element list
sort (x:xs) = insert (sort xs) -- sort the tail, then insert x
  where insert [] = [x]
        insert (y:ys) | x <= y    = x : y : ys    -- x belongs here
                      | otherwise = y : insert ys -- keep scanning
The way this algorithm works is as follows: if we want to sort an empty list or a list of just one element, we return them as they are, as they are already sorted. Otherwise, we have a list of the form x:xs. In this case, we sort xs and then want to insert x in the appropriate location. That's what the insert function does. It traverses the now-sorted tail and inserts x wherever it naturally fits.
Let's analyze how long this function takes to complete. Suppose it takes $f(n)$ steps to sort a list of length $n$. Then, in order to sort a list of $n$-many elements, we first have to sort the tail of the list, which takes $f(n-1)$ time. Then, we have to insert x into this new list. If x has to go at the end, this will take $\mathcal{O}(n-1)=\mathcal{O}(n)$ steps. Putting all of this together, we see that we have to do $\mathcal{O}(n)$ amount of work $\mathcal{O}(n)$ many times, which means that the entire complexity of this sorting algorithm is $\mathcal{O}(n^2)$. Here, the squared term is not a constant value, so we cannot throw it out.
What does this mean? Simply that for really long lists, the sum function won't take very long, but the sort function will take quite some time. Of course there are algorithms that run much more slowly than simply $\mathcal{O}(n^2)$ and there are ones that run more quickly than $\mathcal{O}(n)$.
Consider the random access functions for lists and arrays. In the worst case, accessing an arbitrary element in a list of length $n$ will take $\mathcal{O}(n)$ time (think about accessing the last
element). However with arrays, you can access any element immediately, which is said to be in constant time, or $\mathcal{O}(1)$, which is basically as fast an any algorithm can go.
There's much more in complexity theory than this, but this should be enough to allow you to understand all the discussions in this tutorial. Just keep in mind that $\mathcal{O}(1)$ is faster than $\mathcal{O}(n)$, which is faster than $\mathcal{O}(n^2)$, etc.
|
{"url":"https://en.wikibooks.org/wiki/Haskell/YAHT/Complexity","timestamp":"2014-04-20T09:04:50Z","content_type":null,"content_length":"36155","record_id":"<urn:uuid:fdbc9bf3-075e-401c-9ba4-afef83afea71>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00475-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Miss Schmedlen's Blog
4/18 Math Update
Posted by: Mary Ann Schmedlen | April 18, 2014
This week in math, we learned how to classify polygons by the number of sides. We learned about all of the different types of triangles and quadrilaterals. We also learned how to find the sum of the interior angles of any polygon. On Friday, we made vocabulary flashcards and worked on the chapter 8 study guide. The chapter 8 test is next Tuesday, 4/22, and the math NWEA test will be on Wednesday, 4/23.
Geometric Design Contest Winners!
Posted by: Mary Ann Schmedlen | April 15, 2014
Congratulations to the winners of the Geometric Design Contest!
• 1st place: Avery G.
• 2nd place: Sophia
• 3rd place: Izzy
• Honorable Mention: Ne’Vaeh, R.J., Avery S., Ryan, Maxx, Abbott, Sloan
4/3 Math Update
Posted by: Mary Ann Schmedlen | April 3, 2014
This week in math, we completed a geometric design in class. Students will vote on the designs after spring break. The three people who earn the most votes will win a $5 Sugar Berry gift card! Have a
wonderful spring break.
3/28 Math Update
Posted by: Mary Ann Schmedlen | March 28, 2014
This week in math, we started chapter 8 by learning about the basics of geometry: points, lines, planes, line segments, rays, and angles. Students took an open-notes quiz over sections 8.1 - 8.3 on Friday. Next week, students will be participating in the Geometric Design Contest. This is an in-class project, so there will be no homework.
Friday Winners!
Posted by: Mary Ann Schmedlen | March 28, 2014
Congratulations to the following students for winning this week:
• Problem of the Week: Madeline, Matt F.
• Estimation Jar: Sloan
3/21 Math Update
Posted by: Mary Ann Schmedlen | March 20, 2014
This week in math, we completed a study guide over chapter 6 (percents). Students took a notebook quiz on Wednesday and then we cleaned out our binders on Thursday. The chapter 6 test was on Thursday. Due to my absence on Friday (I'm attending a conference), tests will be returned on Monday. On Friday, students watched a video with Mrs. Hedlund about how math is used in various careers. Next week, we move on to chapter 8 (geometry).
Posted by: Mary Ann Schmedlen | March 19, 2014
Congratulations to the winners this week!
• Problem of the Week: Megan, Avery S., Makhayla, Abbott, Madeline
• Estimation Jar: Madeline
3/14 Math Update
Posted by: Mary Ann Schmedlen | March 14, 2014
Happy Pi Day! This week in math, students took a quiz over sections 6.1 – 6.3. We then learned how to use a proportion or decimal equivalent to find the percent of a number. Due to the snow day, the
test and quiz dates for next week have changed. The notebook quiz will be on Wednesday, March 19. Students will have time in class on Tuesday to make sure they have all of their warm-ups. They can
stay in at recess if they are missing any. The chapter 6 test will be on Thursday, March 20. Students will receive a study guide in class on Tuesday. Part 1 of the test will cover sections 6.1 – 6.3
and calculators are not allowed. Students will be allowed to use a calculator on part 2 of the test, which covers sections 6.4 – 6.5. Students also had the chance to complete the optional “Mrs.
Vance’s Pi Day Challenge” to earn a free bag of popcorn.
3/7 Math Update
Posted by: Mary Ann Schmedlen | March 7, 2014
This week in math, we took the chapter 5 test. Then we started chapter 6 by learning about percents, how to convert percents to fractions and decimals (and vice versa), and how to estimate percents. There will be an open-notes quiz over sections 6.1 - 6.3 next Tuesday, March 11. The tentative date for the chapter 6 test is Wednesday, March 19. We will also have a notebook quiz on Thursday, March 20. Students will have time in class to make sure their notebooks are organized.
|
{"url":"http://cometmath.edublogs.org/","timestamp":"2014-04-20T03:11:03Z","content_type":null,"content_length":"29308","record_id":"<urn:uuid:054114e1-380f-44b8-b6b7-8a34c656890c>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00391-ip-10-147-4-33.ec2.internal.warc.gz"}
|
General Cost Functions for Support Vector Regression
Results 1 - 10 of 27
- Data Mining and Knowledge Discovery, 1998
Cited by 2272 (11 self)
The tutorial starts with an overview of the concepts of VC dimension and structural risk minimization. We then describe linear Support Vector Machines (SVMs) for separable and non-separable data,
working through a non-trivial example in detail. We describe a mechanical analogy, and discuss when SVM solutions are unique and when they are global. We describe how support vector training can be
practically implemented, and discuss in detail the kernel mapping technique which is used to construct SVM solutions which are nonlinear in the data. We show how Support Vector machines can have very
large (even infinite) VC dimension by computing the VC dimension for homogeneous polynomial and Gaussian radial basis function kernels. While very high VC dimension would normally bode ill for
generalization performance, and while at present there exists no theory which shows that good generalization performance is guaranteed for SVMs, there are several arguments which support the observed
high accuracy of SVMs, which we review. Results of some experiments which were inspired by these arguments are also presented. We give numerous examples and proofs of most of the key theorems. There
is new material, and I hope that the reader will find that even old material is cast in a fresh light.
, 2004
Cited by 473 (2 self)
In this tutorial we give an overview of the basic ideas underlying Support Vector (SV) machines for function estimation. Furthermore, we include a summary of currently used algorithms for training SV
machines, covering both the quadratic (or convex) programming part and advanced methods for dealing with large datasets. Finally, we mention some modifications and extensions that have been applied
to the standard SV algorithm, and discuss the aspect of regularization from a SV perspective.
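As a concrete illustration of the ε-insensitive cost at the heart of SV regression (a sketch of mine; the tutorial itself contains no code), errors inside the ε-tube cost nothing and larger errors grow linearly:

import numpy as np

def eps_insensitive(y_true, y_pred, eps=0.1):
    # Vapnik's epsilon-insensitive loss: max(|y - f(x)| - eps, 0).
    return np.maximum(np.abs(y_true - y_pred) - eps, 0.0)

y = np.array([1.0, 2.0, 3.0])
f = np.array([1.05, 2.5, 2.0])
print(eps_insensitive(y, f))   # [0.  0.4 0.9]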
, 1998
Cited by 146 (43 self)
In this paper a correspondence is derived between regularization operators used in Regularization Networks and Support Vector Kernels. We prove that the Green's Functions associated with
regularization operators are suitable Support Vector Kernels with equivalent regularization properties. Moreover the paper provides an analysis of currently used Support Vector Kernels in the view of
regularization theory and corresponding operators associated with the classes of both polynomial kernels and translation invariant kernels. The latter are also analyzed on periodical domains. As a
by-product we show that a large number of Radial Basis Functions, namely conditionally positive definite functions, may be used as Support Vector kernels.
- In Proceedings of the 1999 Conference on AI and Statistics, 1999
Cited by 105 (3 self)
We introduce a class of flexible conditional probability models and techniques for classification /regression problems. Many existing methods such as generalized linear models and support vector
machines are subsumed under this class. The flexibility of this class of techniques comes from the use of kernel functions as in support vector machines, and the generality from dual formulations of
standard regression models. 1 Introduction Support vector machines [10] are linear maximum margin classifiers exploiting the idea of a kernel function. A kernel function defines an embedding of
examples into (high or infinite dimensional) feature vectors and allows the classification to be carried out in the feature space without ever explicitly representing it. While support vector
machines are non-probabilistic classifiers they can be extended and formalized for probabilistic settings[12] (recently also [8]), which is the topic of this paper. We can also identify the new
formulations with other s...
- In IEEE International Conference on Automatic Face & Gesture Recognition, 2000
Cited by 57 (8 self)
A Support Vector Machine based multi-view face detection and recognition framework is described in this paper. Face detection is carried out by constructing several detectors, each of them in charge
of one specific view. The symmetrical property of face images is employed to simplify the complexity of the modelling. The estimation of head pose, which is achieved by using the Support Vector
Regression technique, provides crucial information for choosing the appropriate face detector. This helps to improve the accuracy and reduce the computation in multi-view face detection compared to
other methods. For video sequences, further computational reduction can be achieved by using Pose Change Smoothing strategy. When face detectors find a face in frontal view, a Support Vector Machine
based multi-class classifier is activated for face recognition. All the above issues are integrated under a Support Vector Machine framework. Test results on four video sequences are presented, among
them, detection rate is above 95%, recognition accuracy is above 90%, average pose estimation error is around 10°, and the full detection and recognition speed is up to 4 frames/second on a
PentiumII300 PC.
- In Proceedings of the 16th ACM-SIAM Symposium on Discrete Algorithms, 2002
Cited by 38 (0 self)
A widely acknowledged drawback of many statistical modelling techniques, commonly used in machine learning, is that the resulting model is extremely difficult to interpret. A number of new concepts
and algorithms have been introduced by researchers to address this problem. They focus primarily on determining which inputs are relevant in predicting the output. This work describes a transparent,
advanced non-linear modelling approach that enables the constructed predictive models to be visualised, allowing model validation and assisting in interpretation. The technique combines the
representational advantage of a sparse ANOVA decomposition, with the good generalisation ability of a kernel machine. It achieves this by employing two forms of regularisation: a 1-norm based
structural regulariser to enforce transparency, and a 2-norm based regulariser to control smoothness. The resulting model structure can be visualised showing the overall effects of different inputs,
their interactions, and the strength of the interactions. The robustness of the technique is illustrated using a range of both artifical and “real world ” datasets. The performance is compared to
other modelling techniques, and it is shown to exhibit competitive generalisation performance together with improved interpretability.
, 2002
Cited by 20 (0 self)
Support Vector Machines (SVMs) have become a popular tool for learning with large amounts of high dimensional data. However, it may sometimes be preferable to learn incrementally from previous SVM
results, as computing a SVM is very costly in terms of time and memory consumption or because the SVM may be used in an online learning setting. In this paper an approach for incremental learning
with Support Vector Machines is presented, that improves existing approaches. Empirical evidence is given to prove that this approach can effectively deal with changes in the target concept that are
results of the incremental learning setting.
- In Proceedings of the Fourth International Conference on Knowledge-Based Intelligent Engineering Systems & Allied Technologies, 2000
"... An approach to multi-view face detection based on head pose estimation is presented in this paper. Support Vector Regression is employed to solve the problem of pose estimation. Three methods,
the eigenface method, the Support Vector Machine (SVM) based method, and a combination of the two methods, ..."
Cited by 10 (2 self)
Add to MetaCart
An approach to multi-view face detection based on head pose estimation is presented in this paper. Support Vector Regression is employed to solve the problem of pose estimation. Three methods, the
eigenface method, the Support Vector Machine (SVM) based method, and a combination of the two methods, are investigated. The eigenface method, which seeks to estimate the overall probability
distribution of patterns to be recognised, is fast but less accurate because of the overlap of confidence distributions between face and non-face classes. On the other hand, the SVM method, which
tries to model the boundary of two classes to be classified, is more accurate but slower as the number of Support Vectors is normally large. The combined method can achieve an improved performance by
speeding up the computation and keeping the accuracy to a preset level. It can be used to automatically detect and track faces in face verification and identification systems.
, 1998
"... The last years have witnessed an increasing interest in Support Vector (SV) machines, which use Mercer kernels for efficiently performing computations in high-dimensional spaces. In pattern
recognition, the SV algorithm constructs nonlinear decision functions by training a classifier to perform a li ..."
Cited by 10 (1 self)
Add to MetaCart
The last years have witnessed an increasing interest in Support Vector (SV) machines, which use Mercer kernels for efficiently performing computations in high-dimensional spaces. In pattern
recognition, the SV algorithm constructs nonlinear decision functions by training a classifier to perform a linear separation in some high-dimensional space which is nonlinearly related to input
space. Recently, we have developed a technique for Nonlinear Principal Component Analysis (Kernel PCA) based on the same types of kernels. This way, we can for instance efficiently extract polynomial
features of arbitrary order by computing projections onto principal components in the space of all products of n pixels of images. We explain the idea of Mercer kernels and associated feature spaces,
and describe connections to the theory of reproducing kernels and to regularization theory, followed by an overview of the above algorithms employing these kernels.
Union City, GA Precalculus Tutor
Find a Union City, GA Precalculus Tutor
I am a Georgia-certified educator with 12+ years of experience teaching math. I have taught a wide range of comprehensive math for grades 6 through 12 and have experience prepping students for the EOCT, CRCT, SAT and ACT. Unlike many others who know the math content, I know how to employ effective instructional strategies to help students understand and achieve mastery.
13 Subjects: including precalculus, statistics, geometry, SAT math
...I am completing my degree in Information Sciences and Technology at Pennsylvania State University. During my time in high school and college, I did well in my Math (Calculus I-II), Chemistry,
and Physics courses and have tutored in all of these subjects. Currently, I co-teach Math 1 and GPS Algebra 1.
13 Subjects: including precalculus, chemistry, physics, calculus
...Once I'm done with that, I'll consider graduate degrees in pure or applied mathematics. I'm patient with students at all levels. I push students to not only get questions correct, but to also
build their confidence in mathematics.
10 Subjects: including precalculus, calculus, geometry, algebra 1
...If I need to cancel or reschedule a session, there is never a fee. I have just graduated from Georgia Tech with a degree in nuclear and radiological engineering, I have been tutoring people from all over the place in this topic since 2009, and I am well qualified. Try me and you will never regret it! I...
10 Subjects: including precalculus, calculus, physics, algebra 1
...Discrete math includes the study of set theory, Boolean algebra, matrices, probability, functions and number theory. These concepts were essential to my understanding of algorithms and computer
programming. Being a computer science major who is very good at computer programming, I had to have a good grasp of these theories and methods.
21 Subjects: including precalculus, calculus, algebra 1, algebra 2
Petaluma Algebra Tutors
...I have been an independent study teacher for over 10 years and have extensive experience working with elementary students in math. I use a variety of resources and teaching styles depending on
the individual needs of the child. I have a California Multiple Subjects Credential.
17 Subjects: including algebra 2, algebra 1, reading, English
...Oftentimes the issue is with earlier concepts that were not mastered with confidence. I will help by explaining concepts in plain English. And I help students to learn to see concepts and
problems in a way that makes sense to them.
18 Subjects: including algebra 1, algebra 2, calculus, geometry
Hello, I'm Brandis. I hold Bachelor's and Master's degrees from the San Francisco Conservatory of Music. In the early part of my collegiate studies, I also studied math.
23 Subjects: including algebra 2, algebra 1, reading, SAT math
...Perimeters, areas, volumes, angles, ratios, probability--all these words indicate measurements and relationships. Math teaches us to use what we know to find out what we need to know. I like to
think of math as a language--with a vocabulary (terminology)that students can acquire and a grammar (formulas and rules of operation) that students can learn to manipulate successfully.
42 Subjects: including algebra 1, English, reading, writing
...While I do have a 24-hour cancellation policy, I offer makeup classes. I excel in helping students to gain the understanding of the material needed, while providing the appropriate level of assistance and guidance, where I may vary the approach as needed. I offer analysis of the student's needs during sessions and the development of a customized plan to achieve the best results.
30 Subjects: including algebra 2, calculus, Microsoft Excel, general computer
Vertex List Graph Adaptor
template<typename Graph, typename GlobalIndexMap>
class vertex_list_adaptor
vertex_list_adaptor(const Graph& g,
const GlobalIndexMap& index_map = GlobalIndexMap());
template<typename Graph, typename GlobalIndexMap>
vertex_list_adaptor<Graph, GlobalIndexMap>
make_vertex_list_adaptor(const Graph& g, const GlobalIndexMap& index_map);
template<typename Graph>
vertex_list_adaptor<Graph, *unspecified*>
make_vertex_list_adaptor(const Graph& g);
The vertex list graph adaptor adapts any model of Distributed Vertex List Graph into a Vertex List Graph. In the former type of graph, the set of vertices is distributed across the process group, so no
process has access to all vertices. In the latter type of graph, however, every process has access to every vertex in the graph. This is required by some distributed algorithms, such as the
implementations of Minimum spanning tree algorithms.
The vertex_list_adaptor class template takes a Distributed Vertex List Graph and a mapping from vertex descriptors to global vertex indices, which must be in the range [0, n), where n is the number
of vertices in the entire graph. The mapping is a Readable Property Map whose key type is a vertex descriptor.
The vertex list adaptor stores only a reference to the underlying graph, forwarding all operations not related to the vertex list on to the underlying graph. For instance, if the underlying graph
models Adjacency Graph, then the adaptor will also model Adjacency Graph. Note, however, that no modifications to the underlying graph can occur through the vertex list adaptor. Modifications made to
the underlying graph directly will be reflected in the vertex list adaptor, but modifications that add or remove vertices invalidate the vertex list adaptor. Additionally, the vertex list adaptor
provides access to the global index map via the vertex_index property.
On construction, the vertex list adaptor performs an all-gather operation to create a list of all vertices in the graph within each process. It is this list that is accessed via vertices and the
length of this list that is accessed via num_vertices. Due to the all-gather operation, the creation of this adaptor is a collective operation.
These function templates construct a vertex list adaptor from a graph and, optionally, a property map that maps vertices to global index numbers.
IN: Graph& g
The graph type must be a model of Distributed Vertex List Graph.
IN: GlobalIndexMap index_map
A Distributed property map whose type must model Readable property map that maps from vertices to a global index. If provided, this map must be initialized prior to being passed to the vertex list adaptor.
Default: A property map of unspecified type constructed from a distributed iterator_property_map that uses the vertex_index property map of the underlying graph and a vector of vertices_size_type values.
These operations require O(n) time, where n is the number of vertices in the graph, and O(n) communication per node in the BSP model.
Copyright (C) 2004 The Trustees of Indiana University.
Authors: Douglas Gregor and Andrew Lumsdaine
Error Vector Spectrum: EVS
EVS computes the instantaneous Error Vector Spectrum (EVS) between two signals. The measurement works with the Vector Signal Analyzer meter block (VSA) or the Vector Network Analyzer block (VNA). The
EVS is the spectrum of the vector differences between the measured and source inputs to the VSA meter.
│ Name            │ Type                               │ Range  │
│ Block Diagram   │ System Diagram                     │ N/A    │
│ VNA/VSA         │ System VNA meter, System VSA meter │ N/A    │
│ Time Span       │ Real value                         │ Varies │
│ Time Span Units │ List of options                    │ N/A    │
│ Number Averages │ Integer value                      │ >0     │
* indicates a secondary parameter
Note: If the selected system diagram is configured for a swept simulation, the measurement will have additional parameters for specifying the plotting configuration for each swept parameter. These
parameters are dynamic and change based upon which data source is selected. See Section 1.3.3 for details.
The measurement returns complex values in units of voltage. The voltage can be displayed in dB by selecting the dB check box in the Add/Modify Measurement dialog box. The x-axis for this measurement
is in frequency units. The range of the frequency axis is f[c]-f[s]/2≤f≤f[c]+f[s]/2, where f[c] is the center frequency of the measured signal and f[s] is the sampling frequency.
Right-clicking the cursor on the measurement when it is displayed on a rectangular graph will display the settings used to compute the result at the selected data point.
The instantaneous error vector spectrum is computed based on the following:
V[f[k]] = ∑[i=0..N-1] w[i]·E[i]·e^(-j2πik/N)
where V[f[k]] is the amplitude of the f[k] frequency, f[s] is the sampling frequency of the measured signal, and N is the number of FFT bins. E[i] is the sequence of error vectors and w[i] is an
optional windowing function. For w[i]=1, i=0,1,...,N-1 (no windowing), the spectrum equation is the discrete Fourier transform.
The error vectors are computed as:
E[k] = Meas[k] - Src[k]
where Meas and Src are the two complex signal inputs to the VSA meter block.
The number of FFT bins is determined by the Time Span and Units settings, and is set to the equivalent number of samples in the data window.
The spectrum may optionally be computed from an average of several spectrum computations. This occurs when the number of averages is set to a value other than 1. When averaging is in effect,
individual spectrums are computed by computing spectrums for several windows of input data, each offset by 50% from the previous data window. A Taylor window function is applied to the data before
each FFT is performed. The average of all the spectrums is then used to compute the spectrum values.
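As a rough illustration of this averaging procedure, here is a minimal sketch (not the AWR implementation; it assumes NumPy and SciPy, whose taylor window in recent SciPy versions stands in for the Taylor window described above):

import numpy as np
from scipy.signal import windows

def error_vector_spectrum(meas, src, n_fft, n_avg=1):
    # E[k] = Meas[k] - Src[k]: the error vectors
    err = np.asarray(meas) - np.asarray(src)
    # Taylor window when averaging, no windowing otherwise
    w = windows.taylor(n_fft) if n_avg > 1 else np.ones(n_fft)
    hop = n_fft // 2  # successive data windows offset by 50%
    assert len(err) >= hop * (n_avg - 1) + n_fft, "not enough samples"
    spectra = [np.fft.fftshift(np.fft.fft(w * err[i * hop : i * hop + n_fft]))
               for i in range(n_avg)]
    # average of the individual spectra; complex values in volts
    return np.mean(spectra, axis=0)

The corresponding frequency axis can then be formed as f_c + np.fft.fftshift(np.fft.fftfreq(n_fft, d=1/f_s)), matching the range f[c]-f[s]/2 ≤ f ≤ f[c]+f[s]/2 given above.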
Full-rank linearly independent matrices
Can we find $n^2$ full-rank matrices in $\mathbb{F}^{n \times n}$ which are linearly independent (i.e. when vectorized are linearly independent)? If not, how many such matrices can be found?
Yes--the set of full-rank matrices is Zariski-open (and non-empty) in $\mathbb{F}^{n\times n}$, so if $k$ is less than $n^2$, the set of full-rank matrices not in the span of $k$ chosen matrices is also Zariski-open and non-empty. (Strictly speaking, this argument only works over infinite fields, but it can easily be fixed). This question is more appropriate for math.stackexchange.com. – Daniel Litt Feb 1 '13 at 5:18
1 Answer
If the characteristic of $\mathbb{F}$ is not two and $E_{i,j}$ $(1\le i,j\le n)$ is the "standard basis", then the matrices $I+E_{i,j}$ are invertible. They are linearly independent if $n+1\ne0$ in $\mathbb{F}$. If the characteristic of $\mathbb{F}$ is greater than 2, we can use $I-E_{i,j}$ instead and these are linearly independent if $n-1\ne0$. So we have explicit examples except in characteristic two.
If n > 1, any order n matrix is the sum of exactly two invertible matrices, even if the field has characteristic two. So I would expect an invertible basis for characteristic 2 also.
Gerhard "Ask Me About Binary Matrices" Paseman, 2013.02.01 – Gerhard Paseman Feb 1 '13 at 23:06
I agree that there should be an invertible basis in characteristic 2. (Exercise for the reader?) – Chris Godsil Feb 1 '13 at 23:38
Consider the cycle q=(1 2 3 ...n) on the set of columns of an order n matrix over F_2 with n > 1. Let Q=q applied to the order n identity matrix, so the main diagonal is shifted "out of
the way". In addition to the n(n-1) matrices I + E_ij for i distinct from j, take also the n matrices Q + E_ii. I think this or a slight modification to resolve parity issues should work
as a basis in characteristic 2. Gerhard "Ask Me About System Design" Paseman, 2013.02.03 – Gerhard Paseman Feb 4 '13 at 6:18
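A quick numerical sanity check of the construction in the answer (a sketch over the rationals, i.e. characteristic zero, assuming NumPy; floating point suffices here since every determinant involved is 1 or 2):

import numpy as np

n = 3
I = np.eye(n)
mats = []
for i in range(n):
    for j in range(n):
        E = np.zeros((n, n))
        E[i, j] = 1.0
        mats.append(I + E)  # I + E_{ij}; det is 2 if i == j, else 1
assert all(abs(np.linalg.det(M)) > 0.5 for M in mats)  # all full rank
V = np.stack([M.ravel() for M in mats])  # vectorize each matrix into a row
assert np.linalg.matrix_rank(V) == n * n  # n^2 linearly independent matrices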
Coined quantum walks lift the cospectrality of graphs and trees
David Emms, Simone Severini, Richard Wilson and Edwin Hancock
Pattern Recognition, Volume 42, Number 9, 2009. ISSN 0031-3203
In this paper we explore how a spectral technique suggested by coined quantum walks can be used to distinguish between graphs that are cospectral with respect to standard matrix representations. The
algorithm runs in polynomial time and, moreover, can distinguish many graphs for which there is no subexponential time algorithm that is proven to be able to distinguish between them. In the paper,
we give a description of the coined quantum walk from the field of quantum computing. The evolution of the walk is governed by a unitary matrix. We show how the spectrum of this matrix is related to
the spectrum of the transition matrix of the classical random walk. However, despite this relationship the behaviour of the quantum walk is vastly different from the classical walk. This leads us to
define a new matrix based on the amplitudes of paths of the walk whose spectrum we use to characterise graphs. We carry out three sets of experiments using this matrix representation. Firstly, we
test the ability of the spectrum to distinguish between sets of graphs that are cospectral with respect to standard matrix representation. These include strongly regular graphs, and incidence graphs
of balanced incomplete block designs (BIBDs). Secondly, we test our method on ALL regular graphs on up to 14 vertices and ALL trees on up to 24 vertices. This demonstrates that the problem of
cospectrality is often encountered with conventional algorithms and tests the ability of our method to resolve this problem. Thirdly, we use distances obtained from the spectra of S+(U3) to cluster
graphs derived from real-world image data and these are qualitatively better than those obtained with the spectra of the adjacency matrix. Thus, we provide a spectral representation of graphs that
can be used in place of standard spectral representations, far less prone to the problems of cospectrality.
spectral sequence
Homological algebra
Stable Homotopy theory
The notion of spectral sequence is an algorithm or computational tool in homological algebra and more generally in homotopy theory which allows one to compute chain homology groups/homotopy groups of bi-
graded objects from the homology/homotopy of the two graded components.
Notably there is a spectral sequence for computing the homology of the total complex of a double complex from the homology of its row and column complexes separately. This in turn allows one to compute
derived functors of composite functors $G\circ F$ from the double complex $\mathbb{R}^\bullet G (\mathbb{R}^\bullet F(-))$ obtained by non-totally deriving the two functors separately (called the
Grothendieck spectral sequence). By choosing various functors $F$ and $G$ here this gives rise to various important classes of examples of spectral sequences, see below.
More concretely, a homology spectral sequence is a sequence of graded chain complexes that provides the higher order corrections to the naïve idea of computing the homology of the total complex $Tot
(V)_\bullet$ of a double complex $V_{\bullet, \bullet}$: by first computing those of the vertical differential, then those of the horizontal differential induced on these vertical homology groups (or
the other way around). This simple idea in general does not produce the correct homology groups of $Tot(V)_\bullet$, but it does produce a “first-order approximation” to them, in a useful sense. The
spectral sequence is the sequence of higher-order corrections that make this naive idea actually work.
Being, therefore, an iterative perturbative approximation scheme of bigraded differential objects, fully-fledged spectral sequences can look a bit intricate. However, a standard experience in
mathematical practice is that for most problems of practical interest the relevant spectral sequence “perturbation series” yields the exact result already at the second stage. This reduces the
computational complexity immensely and makes spectral sequences a wide-spread useful computational tool.
Despite their name, there seems to be nothing specifically “spectral” about spectral sequences, for any of the technical meanings of the word spectrum. Together with the concept, this term was introduced by Jean Leray and has long become standard, but was never really motivated (see p. 5 of Chow). But then, by lucky coincidence it turns out in the refined context of stable (∞,1)-category theory/stable homotopy theory that spectral sequences frequently arise by considering the homotopy groups of sequences of spectra. This is discussed at spectral sequence of a filtered stable homotopy type.
While therefore spectral sequences are a notion considered in the context of homological algebra and more generally in stable homotopy theory, there is also an “unstable” or nonabelian variant of the
notion in plain homotopy theory, called homotopy spectral sequence.
We give the general definition of a (co)homology spectral sequence. For motivation see the example Spectral sequence of a filtered complex below.
Throughout, let $\mathcal{A}$ be an abelian category.
Spectral sequence
A cohomology spectral sequence in $\mathcal{A}$ is
• a family $(E^{p,q}_r)$ of objects in $\mathcal{A}$, for all integers $p,q,r$ with $r\geq 1$
(for a fixed $r$ these are said to form the $r$-th page of the spectral sequence)
• for each $p,q,r$ as above a morphism (called the differential)
$d^{p,q}_r:E^{p,q}_r\to E^{p+r,q-r+1}_r$
satisfying $d_r^2 = 0$ (more precisely, $d_r^{p+r,q-r+1}\circ d_r^{p,q} = 0$)
• isomorphisms $\alpha_r^{p,q}: H^{p,q}(E_r)\to E^{p,q}_{r+1}$ where the chain cohomology is given by
$H^{p,q}(E_r) = \mathrm{ker} d^{p,q}_r/ \mathrm{im} d^{p-r,q+r-1}_r \,.$
Analogously a homology spectral sequence is collection of objects $(E_{p,q}^r)$ with the differential $d_r$ of degree $(-r,r-1)$.
Let $\{E^r_{p,q}\}_{r,p,q}$ be a spectral sequence such that for each $p,q$ there is $r(p,q)$ such that for all $r \geq r(p,q)$ we have
$E^{r \geq r(p,q)}_{p,q} \simeq E^{r(p,q)}_{p,q} \,.$
Then one says equivalently that
1. the bigraded object
$E^\infty \coloneqq \{E^\infty_{p,q}\}_{p,q} \coloneqq \{ E^{r(p,q)}_{p,q} \}_{p,q}$
is the limit term of the spectral sequence;
2. the spectral sequence abuts to $E^\infty$.
If for a spectral sequence there is $r_s$ such that all differentials on pages after $r_s$ vanish, $\partial^{r \geq r_s} = 0$, then $\{E^{r_s}_{p,q}\}_{p,q}$ is a limit term for the spectral sequence. One says in this case that the spectral sequence degenerates at $r_s$.
By the defining relation
$E^{r+1}_{p,q} \simeq ker(\partial^r_{p,q})/im(\partial^r_{p+r,q-r+1})$
the spectral sequence becomes constant in $r$ from $r_s$ on if all the differentials vanish, so that $ker(\partial^r_{p,q}) = E^r_{p,q}$ and $im(\partial^r_{p+r,q-r+1}) = 0$ for all $p,q$.
If for a spectral sequence $\{E^r_{p,q}\}_{r,p,q}$ there is $r_s \geq 2$ such that the $r_s$th page is concentrated in a single row or a single column, then the spectral sequence degenerates on this page, example 1, hence this page is a limit term, def. 2. One says in this case that the spectral sequence collapses on this page.
For $r \geq 2$ the differentials of the spectral sequence
$\partial^r \colon E^r_{p,q} \to E^r_{p-r, q+r-1}$
have domain and codomain necessarily in different rows and columns (while for $r = 1$ both are in the same row and for $r = 0$ both coincide). Therefore if all but one row or column vanish, then all
these differentials vanish.
A spectral sequence $\{E^r_{p,q}\}_{r,p,q}$ is said to converge to a graded object $H_\bullet$ with filtering $F_\bullet H_\bullet$, traditionally denoted
$E^r_{p,q} \Rightarrow H_\bullet \,,$
if the associated graded complex $\{G_p H_{p+q}\}_{p,q} \coloneqq \{F_p H_{p+q} / F_{p-1} H_{p+q}\}$ of $H$ is the limit term of $E$, def. 2:
$E^\infty_{p,q} \simeq G_p H_{p+q} \;\;\;\;\;\;\; \forall_{p,q} \,.$
A spectral sequence $\{E^r_{p,q}\}$ is called a bounded spectral sequence if for all $n,r \in \mathbb{Z}$ the number of non-vanishing terms of the form $E^r_{k,n-k}$ is finite.
A spectral sequence $\{E^r_{p,q}\}$ is called
• a first quadrant spectral sequence if all terms except possibly for $p,q \geq 0$ vanish;
• a third quadrant spectral sequence if all terms except possibly for $p,q \leq 0$ vanish.
Such spectral sequences are bounded, def. 4.
A bounded spectral sequence, def. 4, has a limit term, def. 2.
First notice that if a spectral sequence has at most $N$ non-vanishing terms of total degree $n$ on page $r$, then all the following pages can have non-vanishing terms of total degree $n$ at most at these positions, too, since these are the homologies of the previous terms.
Therefore for a bounded spectral sequence for each $n$ there is $L(n) \in \mathbb{Z}$ such that $E^r_{p,n-p} = 0$ for all $p \leq L(n)$ and all $r$. Similarly there is $T(n) \in \mathbb{Z}$ such that $E^r_{n-q,q} = 0$ for all $q \leq T(n)$ and all $r$.
We claim then that the limit term of the bounded spectral sequence is in position $(p,q)$ given by the value $E^r_{p,q}$ for
$r \gt max( p-L(p+q-1), q + 1 - T(p+q+1) ) \,.$
This is because for such $r$ we have
1. $E^r_{p-r, q+r-1} = 0$ because $p-r \lt L(p+q-1)$, and hence the kernel $ker(\partial^r_{p,q}) = E^r_{p,q}$ is all of $E^r_{p,q}$;
2. $E^r_{p+r, q-r+1} = 0$ because $q-r+1 \lt T(p+q+1)$, and hence the image $im(\partial^r_{p+r,q-r+1}) = 0$ vanishes.
Therefore
$\begin{aligned} E^{r+1}_{p,q} &= ker(\partial^r_{p,q})/im(\partial^r_{p+r,q-r+1}) \\ & \simeq E^r_{p,q}/0 \\ & \simeq E^r_{p,q} \end{aligned} \,.$
The basic class of examples are
which compute the cohomology of a filtered complex from the cohomologies of its associated graded objects.
From this one obtains as a special case the class of
which compute the cohomology of the total complex of a double complex using the two canonical filtrations of this by row- and by column-degree.
From this in turn one obtains as a special case the class of
which compute the derived functor $\mathbb{R}^\bullet(G \circ F)(-)$ of the composite of two functors from the spectral sequence of the double complex $\mathbb{R}^\bullet G (\mathbb{R}^\bullet F(-))$.
Many special cases of this for various choices of $F$ and $G$ go by special names; these we tabulate at
Spectral sequence of a filtered complex
The fundamental example of a spectral sequence, from which essentially all the other examples arise as special cases, is the spectral sequence of a filtered complex. (See there for details). Or more
generally in stable homotopy theory: the spectral sequence of a filtered stable homotopy type.
If a cochain complex $C^\bullet$ is equipped with a filtration $F^\bullet C^\bullet$, there is an induced filtration $F^\bullet H(C)$ of its cohomology groups, according to which levels of the
filtration contain representatives for the various cohomology classes.
A filtration $F$ also gives rise to an associated graded object $Gr(F)$, whose grades are the successive level inclusion cokernels. Generically, the operations of grading and cohomology do not commute:
$Gr(F^\bullet H^\bullet(C)) \neq H^\bullet (Gr(F^\bullet) C) \,.$
But the spectral sequence associated to a filtered complex $F^\bullet C^\bullet$ passes through $H^\bullet (Gr(F^\bullet) C)$ in the page $E_{(1)}$ and in good cases converges to $Gr(F^\bullet H^\bullet(C))$.
Spectral sequence of a double complex
The total complex of a double complex is naturally filtered in two ways: by columns and by rows. By the above spectral sequence of a filtered complex this gives two different spectral sequences
associated computing the cohomology of a double complex from the cohomologies of its rows and columns. Many other classes of spectral sequences are special cases of this cases, notably the
Grothendieck spectral sequence and its special cases.
This is discussed at spectral sequence of a double complex.
Spectral sequences for hyper-derived functors
From the spectral sequence for a double complex one obtains as a special case a spectral sequence that computes hyper-derived functors.
Grothendieck spectral sequence
The Grothendieck spectral sequence computes the composite of two derived functors from the two derived functors separately.
Let $\mathcal{A} \stackrel{F}{\to} \mathcal{B} \stackrel{G}{\to} \mathcal{C}$ be two left exact functors between abelian categories.
Write $R^p F : \mathcal{D} \to Ab$ for the cochain cohomology of the derived functor of $F$ in degree $p$, etc.
If $F$ sends injective objects of $\mathcal{A}$ to $G$-acyclic objects in $\mathcal{B}$ then for each $A \in \mathcal{A}$ there is a first quadrant cohomology spectral sequence
$E_2^{p,q} := (R^p G \circ R^q F)(A)$
that converges to the right derived functor of the composite functor
$E_2^{p,q} \Rightarrow R^{p+q} (G \circ F)(A).$
1. the edge maps in this spectral sequence are the canonical morphisms
$R^p G (F A) \to R^p (G \circ F)(A)$
induced from applying $F$ to an injective resolution $A \to \hat A$ and the isomorphism
$R^q (G \circ F)(A) \to G(R^q F (A)) \,.$
2. the exact sequence of low degree terms is
$0 \to (R^1 G)(F(A)) \to R^1(G \circ F)(A) \to G(R^1 F(A)) \to (R^2 G)(F(A)) \to R^2(G \circ F)(A)$
This is called the Grothendieck spectral sequence.
Since for $A \to \hat A$ an injective resolution of $A$ the complex $F(\hat A)$ is a chain complex not concentrated in a single degree, we have that $R^p (G \circ F)(A)$ is equivalently the
hyper-derived functor evaluation $\mathbb{R}^p(G) (F(A))$.
Therefore the second spectral sequence discussed at hyper-derived functor spectral sequences converges as
$(R^p G)H^q(F(\hat A)) \Rightarrow R^{p+q} (G \circ F)(A) \,.$
Now since by construction $H^q(F(\hat A)) = R^q F(A)$ this is a spectral sequence
$(R^p G)(R^q F)(A) \Rightarrow R^{p+q} (G \circ F)(A) \,.$
This is the Grothendieck spectral sequence.
Special Grothendieck spectral sequences
Leray spectral sequence
The Leray spectral sequence is the special case of the Grothendieck spectral sequence for the case where the two functors being composed are a push-forward of sheaves of abelian groups along a
continuous map $f : X \to Y$ followed by the push-forward $X \to *$ to the point. This yields a spectral sequence that computes the abelian sheaf cohomology on $X$ in terms of the abelian sheaf
cohomology on $Y$.
Let $X, Y$ be suitable sites and $f : X \to Y$ be a morphism of sites. Let $\mathcal{C} = Ch_\bullet(Sh(X,Ab))$ and $\mathcal{D} = Ch_\bullet(Sh(Y,Ab))$ be the model categories of complexes of
sheaves of abelian groups. The direct image $f_*$ and global section functor $\Gamma_Y$ compose to $\Gamma_X$:
$\Gamma_X : \mathcal{C} \stackrel{f_*}{\to} \mathcal{D} \stackrel{\Gamma_Y}{\to} Ch_\bullet(Ab) \,.$
Then for $A \in Sh(X,Ab)$ a sheaf of abelian groups on $X$ there is a cohomology spectral sequence
$E_2^{p,q} := H^p(Y, R^q f_* A)$
that converges as
$E_2^{p,q} \Rightarrow H^{p+q}(X, A)$
and hence computes the cohomology of $X$ with coefficients in $A$ in terms of the cohomology of $Y$ with coefficients in the push-forward of $A$.
Base change spectral sequence for $Tor$ and $Ext$
For $R$ a ring write $R$Mod for its category of modules. Given a homomorphism of rings $f : R_1 \to R_2$ and an $R_2$-module $N$ there are composites of base change along $f$ with the hom-functor and
the tensor product functor
$R_1 Mod \stackrel{\otimes_{R_1} R_2}{\to} R_2 Mod \stackrel{\otimes_{R_2} N}{\to} Ab$
$R_1 Mod \stackrel{Hom_{R_1 Mod}(-,R_2)}{\to} R_2 Mod \stackrel{Hom_{R_2}(-,N)}{\to} Ab \,.$
The derived functors of $Hom_{R_2}(-,N)$ and $\otimes_{R_2} N$ are the Ext- and the Tor-functors, respectively, so the Grothendieck spectral sequence applied to these composites yields base change
spectral sequence for these.
Hochschild-Serre spectral sequence
Exact couples
The above examples are all built on the spectral sequence of a filtered complex. An alternatively universal construction builds spectral sequences from exact couples.
An exact couple is an exact sequence of three arrows among two objects
$E \overset{j}{\to} D \overset{\varphi}{\to} D \overset{k}{\to} E \overset{j}{\to}.$
These creatures construct spectral sequences by a two-step process:
• first, the composite $d \coloneqq k j \colon E\to E$ is nilpotent, in that $d^2=0$
• second, the homology $E'$ of $(E,d)$ supports a map $j':E'\to \varphi D$, and receives a map $k':\varphi D\to E'$. Setting $D'=\varphi D$, by general reasoning
$E' \overset{j'}{\to} D' \overset{\varphi}{\to} D' \overset{k'}{\to} E' \overset{j'}{\to} \,.$
is again an exact couple.
The sequence of complexes $(E,d),(E',d'),\dots$ is a spectral sequence, by construction.
Examples of exact couples can be constructed in a number of ways. Importantly, any short exact sequence involving two distinct chain complexes provides an exact couple among their total homology
complexes, via the Mayer-Vietoris long exact sequence; in particular, applying this procedure to the relative homology of a filtered complex gives precisely the spectral sequence of the filtered
complex described elsewhere on this page. For another example, choosing a chain complex of flat modules $(C^\bullet,d)$, tensoring with the short exact sequence
$\mathbb{Z}/p\mathbb{Z} \to \mathbb{Z}/p^2\mathbb{Z} \to \mathbb{Z}/p\mathbb{Z}$
gives the exact couple
$H^\bullet(d,\mathbb{Z}/p^2\mathbb{Z}) \overset{[\cdot]}{\to} H^\bullet(d,\mathbb{Z}/p\mathbb{Z}) \overset{\beta}{\to} H^\bullet(d,\mathbb{Z}/p\mathbb{Z}) \overset{p}{\to} H^\bullet(d,\mathbb{Z}/p^2\mathbb{Z})$
in which $\beta$ is the mod-$p$ Bockstein homomorphism.
The exact couple recipe for spectral sequences is notable in that it doesn’t mention any grading on the objects $D,E$; trivially, an exact couple can be specified by a short exact sequence $\mathrm{coker}\,\varphi\to E\to \ker\varphi$, although this obscures the focus usually given to $E$. In applications, a bi-grading is usually induced by the context, which also specifies bidegrees for the initial
maps $j,k,\varphi$, leading to the conventions mentioned earlier.
List of examples
Lurie spectral sequences
The following list of examples orders the various classes of spectral sequences by special cases: items further to the right are special cases of items further to the left.
Here is a more random list (using material from Wikipedia). Eventually to be merged with the above.
Basic lemmas
(mapping lemma)
If $f : (E_r^{p,q}) \to (F_r^{p,q})$ is a morphism of spectral sequences such that for some $r$ we have that $f_r : E_r^{p,q} \to F_r^{p,q}$ is an isomorphism, then also $f_s$ is an isomorphism for all $s \geq r$.
(classical convergence theorem)
This is recalled in (Weibel, theorem 5.51).
First quadrant spectral sequence
A first quadrant spectral sequence is one for which all pages are concentrated in the first quadrant of the $(p,q)$-plane, in that
$((p \lt 0) \text{ or } (q \lt 0)) \;\; \Rightarrow \;\; E_r^{p,q} = 0 \,.$
If the $r$th page is concentrated in the first quadrant, then so is the $(r+1)$st page. So if the first one is, then all are.
Every first quadrant spectral sequence converges at $(p,q)$ from $r \gt max(p,q+1)$ on
$E_{max(p,q+1)+1}^{p,q} = E_\infty^{p,q} \,.$
If a first quadrant spectral sequence converges
$E_r^{p,q} \Rightarrow H^{p+q}$
then each $H^n$ has a filtration of length $n+1$
$0 = F^{n+1}H^n \subset F^n H^n \subset \cdots \subset F^1 H^n \subset F^0 H^n = H^n$
and we have
• $F^n H^n \simeq E_\infty^{n,0}$
• $H^n/F^1 H^n \simeq E_\infty^{0,n}$.
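For instance (a small worked consequence of the two statements above, added for illustration): in total degree $n = 1$ the filtration has length $2$, so the two isomorphisms combine into the short exact sequence
$0 \to E_\infty^{1,0} \to H^1 \to E_\infty^{0,1} \to 0 \,.$
Since $E_\infty^{1,0} \simeq E_2^{1,0}$ and $E_\infty^{0,1} \subseteq E_2^{0,1}$ for a first quadrant spectral sequence, this recovers the beginning of the exact sequence of low degree terms appearing above for the Grothendieck spectral sequence.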
Abelian/stable theory
An elementary pedagogical introduction is in
• Timothy Chow, You could have invented spectral sequences, Notices of the AMS (2006) (pdf)
Standard textbook references are
• John McCleary, A User’s Guide to Spectral Sequences, Cambridge University Press
chapter 5 of
• Charles Weibel, An introduction to homological algebra Cambridge studies in advanced mathematics 38 (1994)
and section 14 of
• Raoul Bott, Loring Tu, Differential forms in algebraic topology, Graduate Texts in Mathematics 82, Springer 1982. xiv+331 pp.
A textbook with a focus on applications in algebraic topology is
The general discussion in the context of stable (∞,1)-category theory (the spectral sequence of a filtered stable homotopy type) is in section 1.2.2 of
• Jacob Lurie, Higher Algebra
A review Master thesis is
• Jennifer Orlich, Spectral sequences and an application (pdf)
Reviews of and lecture notes on standard definitions and facts about spectral sequences include
Original articles include
See also
• A. Romero, J. Rubio, F. Sergeraert, Computing spectral sequences (pdf)
Nonabelian / unstable theory
Homotopy spectral sequences in model categories are discussed in
Spectral sequences in general categories with zero morphisms are discussed in
• John McCleary, A history of spectral sequences: Origins to 1953, in History of Topology, edited by Ioan M. James, North Holland (1999) 631–663
Belmont, CA ACT Tutor
Find a Belmont, CA ACT Tutor
...I am experienced working with students in many areas, including math, science, history, geography, English/grammar and reading. My approach is often the same independent of the subject. I
explain the problem or concept carefully and then explain it a 2nd and 3rd way until the student says "I get it!" That is one of the best feelings I have as a tutor.
32 Subjects: including ACT Math, reading, English, ADD/ADHD
...With this credential I am allowed to teach any self contained K-6th grade class. In addition to this credential I have 4 years of teaching in elementary schools. As a California credentialed
teacher and a classroom teacher, it is imperative I teach my students study skills to be successful in my classroom and for the future.
26 Subjects: including ACT Math, reading, writing, geometry
...Now I am happy to become a professional tutor so I can help more students. My teaching method emphasizes conceptual learning rather than rote memorization. I would like to teach students
how to think and how to be engaged in a fun mathematics world.
22 Subjects: including ACT Math, calculus, algebra 1, algebra 2
...I am currently enrolled in a doctoral program at the University of Phoenix. My grades in writing there are always A level. I have a way of explaining things clearly, which helps when speaking
with a student with English as a second language.
35 Subjects: including ACT Math, chemistry, English, reading
...My emphasis was and still is on understanding the fundamental concepts as problem solving naturally follows. Teaching and drilling the “how” of problem solving mechanics are important, but
really understanding the “why” that those problems are based on generates the greatest returns, especially ...
24 Subjects: including ACT Math, reading, chemistry, calculus
Related Belmont, CA Tutors
Belmont, CA Accounting Tutors
Belmont, CA ACT Tutors
Belmont, CA Algebra Tutors
Belmont, CA Algebra 2 Tutors
Belmont, CA Calculus Tutors
Belmont, CA Geometry Tutors
Belmont, CA Math Tutors
Belmont, CA Prealgebra Tutors
Belmont, CA Precalculus Tutors
Belmont, CA SAT Tutors
Belmont, CA SAT Math Tutors
Belmont, CA Science Tutors
Belmont, CA Statistics Tutors
Belmont, CA Trigonometry Tutors
Nearby Cities With ACT Tutor
Atherton ACT Tutors
Burlingame, CA ACT Tutors
East Palo Alto, CA ACT Tutors
Foster City, CA ACT Tutors
Hillsborough, CA ACT Tutors
Los Altos Hills, CA ACT Tutors
Menlo Park ACT Tutors
Millbrae ACT Tutors
Redwood City ACT Tutors
San Bruno ACT Tutors
San Carlos, CA ACT Tutors
San Lorenzo, CA ACT Tutors
San Mateo, CA ACT Tutors
Stanford, CA ACT Tutors
Woodside, CA ACT Tutors
[Numpy-discussion] Simple question about scatter plot graph
Benjamin Root ben.root@ou....
Wed Oct 31 20:24:35 CDT 2012
On Wednesday, October 31, 2012, wrote:
> On Wed, Oct 31, 2012 at 8:59 PM, klo uo <klonuo@gmail.com> wrote:
> wrote:
> > Thanks for your reply
> >
> > I suppose, variable length signals are split on equal parts and dominant
> > harmonic is extracted. Then scatter plot shows this pattern, which has
> some
> > low correlation, but I can't abstract what could be concluded from grid
> > pattern, as I lack statistical knowledge.
> > Maybe it's saying that data is quantized, which can't be easily seen from
> > single sample bar chart, but perhaps scatter plot suggests that? That's
> only
> > my wild guess
> http://pandasplotting.blogspot.ca/2012/06/lag-plot.html
> In general you would see a lag autocorrelation structure in the plot.
> My guess is that even if there is a pattern in your data we might not
> see it because we don't see plots that are plotted on top of each
> other. We only see the support of the y_t, y_{t+1} transition (points
> that are at least once in the sample), but not the frequencies (or
> conditional distribution).
> If that's the case, then
> reduce alpha level so many points on top of each other are darker, or
> colorcode the histogram for each y_t: bincount for each y_t and
> normalize, or use np.histogram directly for each y_t, then assign to
> each point a colorscale depending on it's frequency.
> Did you calculate the correlation? (But maybe linear correlation won't
> show much.)
> Josef
The answer is hexbin() in matplotlib when you have many points lying on or near each other.
Ben Root
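[A minimal sketch of that suggestion, added for illustration (assumes matplotlib and NumPy; the quantized signal here is made up): plot y_t against y_{t+1} with hexbin so overlapping lag pairs show up as darker cells rather than indistinguishable stacked points.

import numpy as np
import matplotlib.pyplot as plt

y = np.random.randint(0, 20, size=5000).astype(float)  # stand-in quantized signal
plt.hexbin(y[:-1], y[1:], gridsize=20, cmap="Greys")   # lag-1 plot, binned counts
plt.colorbar(label="count per cell")
plt.xlabel("$y_t$")
plt.ylabel("$y_{t+1}$")
plt.show()
]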
Use the trial and improvement method to solve the equation 2x³ + 5 = 24. Find a solution that is a positive number; give your answer correct to one decimal place.
Please help and show working out, thank you!
1 Answer
Hi Kath;
Let's subtract 5 from both sides...
2x³ = 19
Let's divide both sides by 2...
x³ = 9.5
Now improve a trial value of x until x³ is as close to 9.5 as possible:
x   | x³     | result
1   | 1      | too small
2   | 8      | very close
2.1 | 9.261  | closest
2.2 | 10.648 | too high
So, correct to one decimal place, x ≈ 2.1.
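The same search is easy to automate; a small illustrative Python sketch, stepping in 0.1 increments as in the table above:

def f(x):
    return 2 * x**3 + 5

x = 1.0
while f(x + 0.1) <= 24:  # step up while the next value still undershoots 24
    x += 0.1
# pick whichever one-decimal-place endpoint is closer to 24
x_best = x + 0.1 if f(x + 0.1) - 24 < 24 - f(x) else x
print(round(x_best, 1), f(x_best))  # ~2.1, f ~ 23.522 (2.2 overshoots: 26.296)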
Decomposition of positive definite matrices.
It is known that an $n^2 \times n^2$ positive semidefinite matrix $A$ cannot always be written as a finite sum $$ A=\sum_{j} B_j \otimes C_j $$ with $B_j$ and $C_j$ positive semidefinite matrices (of size $n \times n$). For example, it can be seen that the matrix $$ \begin{pmatrix} 1 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 1 \end{pmatrix} $$ is not the finite sum of
Kronecker products of positive semidefinite $2 \times 2$ matrices.
Is the statement true if $A$ is positive definite? (i.e., $A$ is invertible?).
Edit: this question is a slight variant of a previous question.
1 Answer
The following is just a minor variation of Martin Argerami's proof of the old question. I am even copying his equations and some of his text. If you are +1ing this post, please also +1
his one (if not already done).
Here is a counterexample for $n=2$. Let $\varepsilon $ be a positive real $< \dfrac{1}{2}$. The matrix
$ a= \begin{bmatrix} 1 & 0 & 0 & 1-\varepsilon \\ 0 & \varepsilon & 0 & 0 \\ 0 & 0 & \varepsilon & 0 \\ 1-\varepsilon & 0 & 0 & 1 \end{bmatrix} \in \mathrm{M}_{4}\left( \mathbb{C}\right) $
is positive-definite, but it cannot be written as a sum of tensor products of nonnegative-semidefinite $2\times 2$-matrices. Here is why:
Assume the contrary. Thus, $a$ is written in the form
$a=\sum_{j} \left[ \begin{matrix} \alpha _{j} & \overline{\gamma _{j}} \\ \gamma _{j} & \beta _{j} \end{matrix} \right] \otimes \left[ \begin{matrix} \alpha _{j}^{\prime } & \overline{\
gamma _{j}^{\prime }} \\ \gamma _{j}^{\prime } & \beta _{j}^{\prime } \end{matrix} \right] =\left[ \begin{matrix} \sum_{j}\alpha _{j}^{\prime }\alpha _{j} & \sum_{j}\alpha _{j}^{\prime }
\overline{\gamma _{j}} & \sum_{j}\overline{\gamma _{j}^{\prime }}\alpha _{j} & \sum_{j}\overline{\gamma _{j}^{\prime }}\gamma _{j} \\ \sum_{j}\alpha _{j}^{\prime }\gamma _{j} & \sum_{j}\
alpha _{j}^{\prime }\beta _{j} & \ast & \ast \\ \ast & \ast & \sum_{j}\beta _{j}^{\prime }\alpha _{j} & \ast \\ \ast & \ast & \ast & \ast \end{matrix}\right] $,
where $j$ ranges from $1$ to some positive integer $N$. Since each $ \begin{bmatrix} \alpha _{j} & \overline{\gamma _{j}} \\ \gamma _{j} & \beta _{j} \end{bmatrix} $ is
nonnegative-semidefinite, we have $\alpha _{j}\geq 0$, $\beta _{j}\geq 0$, and $\alpha _{j}\beta _{j}\geq \left\vert \gamma _{j}\right\vert ^{2}$ for all $j$. Similarly, $\alpha _{j}^{\
prime }\geq 0$, $\beta _{j}^{\prime }\geq 0$, and $\alpha _{j}^{\prime }\beta _{j}^{\prime }\geq \left\vert \gamma _{j}^{\prime }\right\vert ^{2}$ for all $j$.
Now, comparing the entries of $a$ in this equation, we get $\varepsilon =\sum_{j}\alpha _{j}^{\prime }\beta _{j}$ (from the $\left( 2,2\right) $-th entry) and $\varepsilon =\sum_{j}\beta _{j}^{\prime }\alpha _{j}$ (from the $\left( 3,3\right) $-th entry). Taking the arithmetic mean of these two equations, we get
$\varepsilon =\dfrac{1}{2}\left( \sum_{j}\alpha _{j}^{\prime }\beta _{j}+\sum_{j}\beta _{j}^{\prime }\alpha _{j}\right) =\sum_{j}\dfrac{\alpha _{j}^{\prime }\beta _{j}+\beta _{j}^{\prime
}\alpha _{j}}{2}\geq \sum_{j}\sqrt{\alpha _{j}^{\prime }\beta _{j}\beta _{j}^{\prime }\alpha _{j}}$
(by AM-GM, since we are dealing with nonnegative reals). But for every $j$, we have
$\sqrt{\alpha _{j}^{\prime }\beta _{j}\beta _{j}^{\prime }\alpha _{j}}=\sqrt{\alpha _{j}\beta _{j}}\sqrt{\alpha _{j}^{\prime }\beta _{j}^{\prime }}\geq \left\vert \gamma _{j}\right\vert \
left\vert \gamma _{j}^{\prime }\right\vert $
(since $\alpha _{j}\beta_{j}\geq \left\vert \gamma _{j}\right\vert ^{2}$ and $\alpha _{j}^{\prime }\beta _{j}^{\prime }\geq \left\vert \gamma _{j}^{\prime }\right\vert ^{2}$), so this gives
$\varepsilon \geq \sum_{j}\underbrace{\left\vert \gamma _{j}\right\vert \left\vert \gamma _{j}^{\prime }\right\vert }_{=\left\vert \overline{\gamma _{j}^{\prime }}\gamma _{j}\right\vert }
= \sum_{j}\left\vert \overline{\gamma _{j}^{\prime }}\gamma _{j}\right\vert \geq \left\vert \sum_{j}\overline{\gamma _{j}^{\prime }}\gamma _{j}\right\vert $
(by the triangle inequality). But since $1-\varepsilon =\sum_{j}\overline{\gamma _{j}^{\prime }}\gamma _{j}$ (by comparing the $ \left( 1,4\right) $-th entry of the matrices in the above
equation), this becomes $\varepsilon \geq \left\vert 1-\varepsilon \right\vert $, what contradicts the definition of $\varepsilon $.
For an easier proof, observe that the partial transpose $\mathbf 1 \otimes T$ of $a$ is not positive semidefinite for small $\varepsilon$ (while it is obviously positive semidefinite for every matrix that is decomposable in the above sense). – Michael Jan 15 '12 at 9:37
Michael: Can you explain further? What is the "partial transpose $1 \otimes T$ of $a$ ? Thanks. – Ruben A. Martinez-Avendano Jan 16 '12 at 18:19
There is a canonical isomorphism $\mathrm M_4\left(\mathbb C\right) \to \mathrm M_2\left(\mathbb C\right) \otimes \mathrm M_2\left(\mathbb C\right)$. The partial transpose is the
endomorphism $1\otimes T$ of $ \mathrm M_2\left(\mathbb C\right) \otimes \mathrm M_2\left(\mathbb C\right)$, where $1$ denotes the identity endomorphism of $ \mathrm M_2\left(\mathbb C
\right)$ and $T$ denotes the transposition endomorphism of $ \mathrm M_2\left(\mathbb C\right)$. (Endomorphism means vector-space endomorphism.) – darij grinberg Jan 16 '12 at 19:17
(The only reason why I haven't +1ed Michael's comment is that I was too lazy to check how $1\otimes T$ acts on that matrix...) – darij grinberg Jan 16 '12 at 19:17
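Michael's criterion from the first comment is easy to check numerically; a minimal sketch (assuming NumPy; the reshape/transpose implements the blockwise transpose $\mathbf 1 \otimes T$):

import numpy as np

eps = 0.1
a = np.array([[1, 0, 0, 1 - eps],
              [0, eps, 0, 0],
              [0, 0, eps, 0],
              [1 - eps, 0, 0, 1]])
# partial transpose on the second tensor factor: transpose each 2x2 block
pt = a.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
print(np.linalg.eigvalsh(pt).min())  # equals 2*eps - 1 < 0 for eps < 1/2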
A mechanism for turbulent drag reduction by polymers
Seminar Room 1, Newton Institute
Minute quantities of long-chained polymers, of order 10 ppm, added to liquids like water or oil are capable of cutting the turbulent wall friction by half. This startling effect - the "Toms phenomenon" - has been known for more than 60 years, but a detailed explanation of how such small amounts of polymer alter the structure of turbulence so dramatically has been lacking. To explore this
question, direct numerical simulations have been performed based on a visco-elastic model of the fluid that uses a finite extensible non-linear elastic-Peterlin (FENE-P) constituitive equation. It is
found that the stresses due to the polymers circulating around turbulent vortices produce counter-torques that inherently oppose the rotation. Vortices creating the turbulent transport responsible
for drag are weakened and the creation of new vortices is inhibited. Thus, both coherent and incoherent turbulent Reynolds stresses are reduced. Interestingly, the viscoelastic stresses of the FENE-P
model rely upon the vortices being asymmetric and such deviations from axisymmetry occur where the vortices are strained by themselves or by adjacent vortices.
Kim, K., Li, C.-F., Sureshkumar, R., Balachandar, S. and Adrian, R. J., “Effects of polymer stresses on eddy structures in drag-reduced turbulent channel flow,” J. Fluid Mech. 584, 281 (2007).
Kim, K., Adrian, R. J., Balachandar, S. and Sureshkumar, R., “Dynamics of hairpin vortices and polymer-induced turbulent drag reduction,” Phys. Rev. Lett. 100 (2008).
st: questions on treatreg
From "Chen, Xiao" <jingy1@ucla.edu>
To <statalist@hsphsun2.harvard.edu>
Subject st: questions on treatreg
Date Fri, 9 Mar 2007 15:28:38 -0800
I have a couple of questions about treatreg. I don't know much about
econometrics in general. In particular, I don't know much about this
treatreg procedure.
Here are the two questions:
1) Does it make sense to include an interaction term of the endogenous
treatment variable with a predictor variable in treatreg?
For example, in the model below, my hypothesis is that the effect of lfp
depends on the status of wc.
webuse labbor, clear
gen wc=0
replace wc=1 if we>12
xi: treatreg ww i.wc*lfp , treat(wc= wmed wfed) twostep
I think conceptually, this model should be doing the following:
ww = beta0 + beta1*lfp + beta2*lfp*wc + beta3*wc + e (1)
wc = g0 + g1*wmed + g2*wfed + u (2)
and e and u are bivariate normal with variance sigma and 1 and
covariance rho.
For this example, I do have trouble running it using ml. But I have seen
models that actually run with ml estimator. But the fact that it runs
mechanically does not really tell much if the model makes sense or if
the estimates are biased or not.
It seems that if any substitution of equation (2) happens in (1), it can
only happen to beta3, not to beta2 since this will make lfp a random
effect and that is not what has been hypothesized.
2) Does fixed-effect model make sense for treatreg? Let's say we have
data over multiple years and we want to control the year effect. Would
it make sense to just include the dummy variables for year in the model
to account for the fixed effect of year?
I hope these questions make sense and I really appreciate any hints on
them from the experts.
Xiao Chen
Statistical Consulting Group
UCLA Academic Technology Services
pentadiagonal matrix
An $n\times n$ pentadiagonal matrix (with $n\geq 3$) is a matrix of the form
$\begin{pmatrix}c_{1}&d_{1}&e_{1}&0&\cdots&\cdots&0\\ b_{1}&c_{2}&d_{2}&e_{2}&\ddots&&\vdots\\ a_{1}&b_{2}&\ddots&\ddots&\ddots&\ddots&\vdots\\ 0&a_{2}&\ddots&\ddots&\ddots&e_{{n-3}}&0\\ \vdots&\
ddots&\ddots&\ddots&\ddots&d_{{n-2}}&e_{{n-2}}\\ \vdots&&\ddots&a_{{n-3}}&b_{{n-2}}&c_{{n-1}}&d_{{n-1}}\\ 0&\cdots&\cdots&0&a_{{n-2}}&b_{{n-1}}&c_{n}\end{pmatrix}.$
It follows that a pentadiagonal matrix is determined by five vectors: one $n$-vector $c=(c_{1},\ldots,c_{n})$, two $(n-1)$-vectors $b=(b_{1},\ldots,b_{{n-1}})$ and $d=(d_{1},\ldots,d_{{n-1}})$, and two $(n-2)$-vectors $a=(a_{1},\ldots,a_{{n-2}})$ and $e=(e_{1},\ldots,e_{{n-2}})$. Hence a pentadiagonal matrix is completely determined by $n+2(n-1)+2(n-2)=5n-6$ scalars.
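A minimal constructive sketch (assuming NumPy; np.diag places each vector on the indicated diagonal):

import numpy as np

def pentadiagonal(a, b, c, d, e):
    # c: main diagonal (length n); b, d: first sub-/super-diagonal (n-1);
    # a, e: second sub-/super-diagonal (n-2)
    return (np.diag(c) + np.diag(b, k=-1) + np.diag(d, k=1)
            + np.diag(a, k=-2) + np.diag(e, k=2))

# n = 4 example: 4 + 2*3 + 2*2 = 14 = 5*4 - 6 defining scalars
A = pentadiagonal(a=[1, 2], b=[3, 4, 5], c=[6, 7, 8, 9], d=[10, 11, 12], e=[13, 14])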
7.8 Bayesian Learning
Rather than choosing the most likely model or delineating the set of all models that are consistent with the training data, another approach is to compute the posterior probability of each model
given the training examples.
The idea of Bayesian learning is to compute the posterior probability distribution of the target features of a new example conditioned on its input features and all of the training examples.
Suppose a new case has inputs X=x and has target features, Y; the aim is to compute P(Y|X=x∧e), where e is the set of training examples. This is the probability distribution of the target variables
given the particular inputs and the examples. The role of a model is to be the assumed generator of the examples. If we let M be a set of disjoint and covering models, then reasoning by cases and the
chain rule give
P(Y|x∧e) = ∑[m∈M] P(Y ∧m |x∧e)
= ∑[m∈M] P(Y | m ∧x∧e) ×P(m|x∧e)
= ∑[m∈M] P(Y | m ∧x) ×P(m|e) .
The first two equalities are theorems from the definition of probability. The last equality makes two assumptions: the model includes all of the information about the examples that is necessary for a
particular prediction [i.e., P(Y | m ∧x∧e)= P(Y | m ∧x) ], and the model does not change depending on the inputs of the new example [i.e., P(m|x∧e)= P(m|e)]. This formula says that we average over
the prediction of all of the models, where each model is weighted by its posterior probability given the examples.
P(m|e) can be computed using Bayes' rule:
P(m|e) = (P(e|m)×P(m))/(P(e)) .
Thus, the weight of each model depends on how well it predicts the data (the likelihood) and its prior probability. The denominator, P(e), is a normalizing constant to make sure the posterior
probabilities of the models sum to 1. Computing P(e) can be very difficult when there are many models.
A set {e[1],...,e[k]} of examples is i.i.d. (independent and identically distributed), where the distribution is given by model m if, for all i and j, examples e[i] and e[j] are independent given m,
which means P(e[i]∧e[j]|m)=P(e[i]|m)×P(e[j]|m). We usually assume that the examples are i.i.d.
Suppose the set of training examples e is {e[1],...,e[k]}. That is, e is the conjunction of the e[i], because all of the examples have been observed to be true. The assumption that the examples are
i.i.d. implies
P(e|m) = ∏[i=1]^k P(e[i]|m) .
The set of models may include structurally different models in addition to models that differ in the values of the parameters. One of the techniques of Bayesian learning is to make the parameters of
the model explicit and to determine the distribution over the parameters.
Example 7.30: Consider the simplest learning task under uncertainty. Suppose there is a single Boolean random variable, Y. One of two outcomes, a and ¬a, occurs for each example. We want to learn the probability distribution of Y given some examples.
There is a single parameter, φ, that determines the set of all models. Suppose that φ represents the probability of Y=true. We treat this parameter as a real-valued random variable on the interval
[0,1]. Thus, by definition of φ, P(a|φ)=φ and P(¬a|φ)=1-φ.
Suppose an agent has no prior information about the probability of Boolean variable Y and no knowledge beyond the training examples. This ignorance can be modeled by having the prior probability
distribution of the variable φ as a uniform distribution over the interval [0,1]. This is the probability density function labeled n[0]=0, n[1]=0 in Figure 7.15.
We can update the probability distribution of φ given some examples. Assume that the examples, obtained by running a number of independent experiments, are a particular sequence of outcomes that
consists of n[0] cases where Y is false and n[1] cases where Y is true.
The posterior distribution for φ given the training examples can be derived by Bayes' rule. Let the examples e be the particular sequence of observation that resulted in n[1] occurrences of Y=true
and n[0] occurrences of Y=false. Bayes' rule gives us
P(φ|e)=(P(e|φ)×P(φ))/(P(e)) .
The denominator is a normalizing constant to make sure the area under the curve is 1.
Given that the examples are i.i.d.,
P(e|φ) = φ^(n[1])×(1-φ)^(n[0]),
because there are n[0] cases where Y=false, each with a probability of 1-φ, and n[1] cases where Y=true, each with a probability of φ.
One possible prior probability, P(φ), is a uniform distribution on the interval [0,1]. This would be reasonable when the agent has no prior information about the probability.
Figure 7.15 gives some posterior distributions of the variable φ based on different sample sizes, and given a uniform prior. The cases are (n[0]=1, n[1]=2), (n[0]=2, n[1]=4), and (n[0]=4, n[1]=8).
Each of these peaks at the same place, namely at (2)/(3). More training examples make the curve sharper.
The distribution of this example is known as the beta distribution; it is parametrized by two counts, α[0] and α[1], and a probability p. Traditionally, the α[i] parameters for the beta distribution
are one more than the counts; thus, α[i]=n[i]+1. The beta distribution is
Beta^α[0],α[1](p)=(1)/(K) p^(α[1]-1)×(1-p)^(α[0]-1)
where K is a normalizing constant that ensures the integral over all values is 1. Thus, the uniform distribution on [0,1] is the beta distribution Beta^1,1.
The generalization of the beta distribution to more than two parameters is known as the Dirichlet distribution. The Dirichlet distribution with two sorts of parameters, the "counts" α[1],...,α[k],
and the probability parameters p[1],...,p[k], is
Dirichlet^α[1],...,α[k](p[1],...,p[k]) = (1)/(K) ∏[j=1]^k p[j]^(α[j]-1)
where K is a normalizing constant that ensures the integral over all values is 1; p[i] is the probability of the ith outcome (and so 0 ≤ p[i] ≤ 1) and α[i] is one more than the count of the ith
outcome. That is, α[i]=n[i]+1. The Dirichlet distribution looks like Figure 7.15 along each dimension (i.e., as each p[j] varies between 0 and 1).
For many cases, summing over all models weighted by their posterior distribution is difficult, because the models may be complicated (e.g., if they are decision trees or even belief networks).
However, for the Dirichlet distribution, the expected value for outcome i (averaging over all p[j]'s) is
(α[i])/(∑[j] α[j]) .
The reason that the α[i] parameters are one more than the counts is to make this formula simple. This fraction is well defined only when the α[j] are all non-negative and not all are zero.
Example 7.31: Consider Example 7.30, which determines the value of φ based on a sequence of observations made up of n[0] cases where Y is false and n[1] cases where Y is true. Consider the posterior distributions shown in Figure 7.15. What is interesting about this is that, whereas the most likely posterior value of φ is (n[1])/(n[0]+n[1]), the expected value of this distribution is (n[1]+1)/(n[0]+n[1]+2).
Thus, the expected value of the n[0]=1, n[1]=2 curve is (3)/(5), for the n[0]=2, n[1]=4 case the expected value is (5)/(8), and for the n[0]=4, n[1]=8 case it is (9)/(14). As the learner gets more
training examples, this value approaches (n)/(m).
This estimate is better than (n)/(m) for a number of reasons. First, it tells us what to do if the learning agent has no examples: Use the uniform prior of (1)/(2). This is the expected value of the
n=0, m=0 case. Second, consider the case where n=0 and m=3. The agent should not use P(y)=0, because this says that Y is impossible, and it certainly does not have evidence for this! The expected
value of this curve with a uniform prior is (1)/(5).
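These expected values follow directly from the (α[i])/(∑[j] α[j]) formula above; a minimal Python sketch (ours, not from the book) reproduces them:

    def dirichlet_expected(counts):
        """Expected value of each outcome under a uniform (Laplace) prior:
        alpha[i] = n[i] + 1, so E[p_i] = (n_i + 1) / (sum_j n_j + k)."""
        alphas = [n + 1 for n in counts]
        total = sum(alphas)
        return [a / total for a in alphas]

    # counts are (n0, n1) = (#false, #true); expected P(Y=true) is the second entry
    for n0, n1 in [(1, 2), (2, 4), (4, 8), (0, 0), (3, 0)]:
        print((n0, n1), dirichlet_expected([n0, n1])[1])
    # -> 3/5, 5/8, 9/14, 1/2, 1/5, matching the values in the text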
An agent does not have to start with a uniform prior; it can start with any prior distribution. If the agent starts with a prior that is a Dirichlet distribution, its posterior will be a Dirichlet
distribution. The posterior distribution can be obtained by adding the observed counts to the α[i] parameters of the prior distribution.
The i.i.d. assumption can be represented as a belief network, where each of the e[i] are independent given model m. This independence assumption can be represented by the belief network shown on the
left side of Figure 7.16.
If m is made into a discrete variable, any of the inference methods of the previous chapter can be used for inference in this network. A standard reasoning technique in such a network is to condition
on all of the observed e[i] and to query the model variable or an unobserved e[i] variable.
The problem with specifying a belief network for a learning problem is that the model grows with the number of observations. Such a network can be specified before any observations have been received
by using a plate model. A plate model specifies what variables will be used in the model and what will be repeated in the observations. The right side of Figure 7.16 shows a plate model that
represents the same information as the left side. The plate is drawn as a rectangle that contains some nodes, and an index (drawn on the bottom right of the plate). The nodes in the plate are indexed
by the index. In the plate model, there are multiple copies of the variables in the plate, one for each value of the index. The intuition is that there is a pile of plates, one for each value of the
index. The number of plates can be varied depending on the number of observations and what is queried. In this figure, all of the nodes in the plate share a common parent. The probability of each
copy of a variable in a plate given the parents is the same for each index.
A plate model lets us specify more complex relationships between the variables. In a hierarchical Bayesian model, the parameters of the model can depend on other parameters. Such a model is
hierarchical in the sense that some parameters can depend on other parameters.
Example 7.32:
Suppose a diagnostic assistant agent wants to model the probability that a particular patient in a hospital is sick with the flu before symptoms have been observed for this patient. This prior
information about the patient can be combined with the observed symptoms of the patient. The agent wants to learn this probability, based on the statistics about other patients in the same hospital
and about patients at different hospitals. This problem can range from the cases where a lot of data exists about the current hospital (in which case, presumably, that data should be used) to the
case where there is no data about the particular hospital that the patient is in. A hierarchical Bayesian model can be used to combine the statistics about the particular hospital the patient is in
with the statistics about the other hospitals.
Suppose that for patient X in hospital H there is a random variable S[HX] that is true when the patient is sick with the flu. (Assume that the patient identification number and the hospital uniquely
determine the patient.) There is a value φ[H] for each hospital H that will be used for the prior probability of being sick with the flu for each patient in H. In a Bayesian model, φ[H] is treated as
a real-valued random variable with domain [0,1]. S[HX] depends on φ[H], with P(S[HX]|φ[H])=φ[H]. Assume that φ[H] is distributed according to a beta distribution. We don't assume that φ[h[1]] and φ[h[2]] are independent of each other, but depend on hyperparameters. The hyperparameters can be the prior counts α[0] and α[1]. The parameters depend on the hyperparameters in terms of the conditional probability P(φ[h[i]]|α[0],α[1])= Beta^α[0],α[1](φ[h[i]]); α[0] and α[1] are real-valued random variables, which require some prior distribution.
The plate model and the corresponding belief network are shown in Figure 7.17. Part (a) shows the plate model, where there is a copy of the outside plate for each hospital and a copy of the inside
plate for each patient in the hospital. Part of the resulting belief network is shown in part (b). Observing some of the S[HX] will affect the φ[H] and so α[0] and α[1], which will in turn affect the
other φ[H] variables and the unobserved S[HX] variables.
Sophisticated methods exist to evaluate such networks. However, if the variables are made discrete, any of the methods of the previous chapter can be used.
In addition to using the posterior distribution of φ to derive the expected value, we can use it to answer other questions such as: What is the probability that the posterior probability of φ is in
the range [a,b]? In other words, derive P((φ ≥ a ∧φ ≤ b) | e). This is the problem that the Reverend Thomas Bayes solved more than 200 years ago [Bayes(1763)]. The solution he gave - although in much
more cumbersome notation - was
(∫[a]^b p^n×(1-p)^(m-n) dp)/(∫[0]^1 p^n×(1-p)^(m-n) dp) .
This kind of knowledge is used in surveys when it may be reported that a survey is correct with an error of at most 5%, 19 times out of 20. It is also the same type of information that is used by
probably approximately correct (PAC) learning, which guarantees an error at most ε at least 1-δ of the time. If an agent chooses the midpoint of the range [a,b], namely (a+b)/(2), as its hypothesis,
it will have error less than or equal to (b-a)/(2), just when the hypothesis is in [a,b]. The value 1-δ corresponds to P(φ ≥ a ∧φ ≤ b | e). If ε=(b-a)/(2) and δ=1-P(φ ≥ a ∧φ ≤ b | e), choosing the
midpoint will result in an error at most ε in 1-δ of the time. PAC learning gives worst-case results, whereas Bayesian learning gives the expected number. Typically, the Bayesian estimate is more
accurate, but the PAC results give a guarantee of the error. The sample complexity (see Section 7.7.2) required for Bayesian learning is typically much less than that of PAC learning - many fewer
examples are required to expect to achieve the desired accuracy than are needed to guarantee the desired accuracy.
|
Fabulous Adventures In Coding
Guid guide, part three
Let's recap: a GUID is a 128 bit integer that is used as a globally unique identifier. GUIDs are not a security system; they do not guarantee uniqueness in a world where hostile parties are
deliberately attempting to cause collisions; rather, they provide a cheap and easy way for mutually benign parties to generate identifiers without collisions. One mechanism for ensuring global
uniqueness is to generate the GUID so that its bits describe a unique position in spacetime: a machine with a specific network card at a specific time. The downside of this mechanism is that code
artifacts with GUIDs embedded in them contain easily-decoded information about the machine used to generate the GUID. This naturally raises a privacy concern.
To address this concern, there is a second common method for generating GUIDs, and that is to choose the bits at random. Such GUIDs have a 4 as the first hex digit of the third section.
First off, what bits are we talking about when we say "the bits"? We already know that in a "random" GUID the first hex digit of the third section is always 4. Something I did not mention in the last
episode was that there is additional version information stored in the GUID in the bits in the fourth section as well; you'll note that a GUID almost always has 8, 9, a or b as the first hex digit of
the fourth section. So in total we have six bits reserved for version information, leaving 122 bits that can be chosen at random.
Second, why should we suppose that choosing a number at random produces uniqueness? Flipping a coin is random, but it certainly does not produce a unique result! What we rely on here is probabilistic
uniqueness. Flipping a single coin does not produce a unique result, but flipping the same coin 122 times in a row almost certainly produces a sequence of heads and tails that has never been seen
before and will never be seen again.
Let's talk a bit about those probabilities. Suppose you have a particular randomly-generated GUID in hand. What is the probability that a specific time that you randomly generate another GUID will
produce a collision with your particular GUID? If the bits are chosen randomly and uniformly, clearly the probability of collision is one in 2^122. Now, what is the probability that over n
generations of GUIDs, you produce a collision with your particular GUID? Those are independent rare events, so the probabilities add [1]; the probability of a collision is n in 2^122. This 2^122 is an
astonishingly large number.
There are on the order of 2^30 personal computers in the world (and of course lots of hand-held devices or non-PC computing devices that have more or less the same levels of computing power, but let's
ignore those). Let's assume that we put all those PCs in the world to the task of generating GUIDs; if each one can generate, say, 2^20 GUIDs per second then after only about 2^72 seconds -- one
hundred and fifty trillion years -- you'll have a very high chance of generating a collision with your specific GUID. And the odds of collision get pretty good after only thirty trillion years.
But that's looking for a collision with a specific GUID. Clearly the chances are a lot better of generating a collision somewhere else along the way. Recall that a couple of years ago I analyzed how
often you get any collision when generating random 32 bit numbers; it turns out that the probability of getting any collision gets extremely high when you get to around 2^16 numbers generated. This
generalizes; as a rule of thumb, the probability of getting a collision when generating a random n-bit number gets large when you've generated around 2^(n/2) numbers. So if we put those billion PCs to
work generating 122-bits-of-randomness GUIDs, the probability that two of them somewhere in there would collide gets really high after about 2^61 GUIDs are generated. Since we're assuming that about
2^30 machines are doing 2^20 GUIDs per second, we'd expect a collision after about 2^11 seconds, which is about an hour.
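As a sanity check on these figures, the standard birthday approximation, P(collision) ≈ 1 - exp(-k(k-1)/2N) for k values drawn uniformly from N possibilities, can be computed directly; a small Python sketch (ours, not from the post):

    import math

    def collision_probability(k, bits):
        """Birthday approximation: probability of any collision among
        k values drawn uniformly from 2**bits possibilities."""
        n = 2.0 ** bits
        return 1.0 - math.exp(-k * (k - 1) / (2.0 * n))

    # ~2^30 machines * 2^20 GUIDs/sec * 2^11 sec = 2^61 GUIDs
    print(collision_probability(2 ** 61, 122))        # roughly 0.39
    # one hour at a more realistic 2^20 GUIDs/sec worldwide
    print(collision_probability(2 ** 20 * 3600, 122))  # ~1e-18, i.e. negligible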
So clearly this system is not utterly foolproof; if we really, really wanted to, we could with high probability generate a GUID collision in only an hour, provided that we got every PC on the planet
to dedicate an hour of time to doing so.
But of course we are not going to do that. The number of GUIDs generated per second worldwide is not anywhere even close to 2^50! I would be surprised if it were more than 2^20 GUIDs generated per second, worldwide, and therefore we could expect to wait about 2^41 seconds for there to be a reasonable chance of collision, which is about seventy thousand years. And if we are looking for a collision with a specific GUID, then again, it will take about a billion times longer than our initial estimate if we assume that a relatively small number of GUIDs are being generated worldwide per second.
So, in short: you should expect that any particular random GUID will have a collision some time in the next thirty billion trillion years, and that there should be a collision between any two GUIDs
some time in the next seventy thousand years.
Those are pretty good odds.
Now, this is assuming that the GUIDs are chosen by a perfectly uniform random process. They are not. GUIDs are in practice generated by a high-quality pseudo-random number generator, not by a
crypto-strength random number generator. Here are some questions that I do not know the answers to:
• What source of entropy is used to seed that pseudo-random number generator?
• How many bits of entropy are used in the seed?
• If you have two virtual machines running on the same physical machine, do they share any of their entropy?
• Are any of those entropy bits from sources that would identify the machine (such as the MAC address) or the person who created the GUID?
• Given perfect knowledge of the GUID generation algorithm and a specific GUID, would it be possible to deduce facts about the entropy bits that were used as the seed?
• Given two GUIDs, is it possible to deduce the probability that they were both generated from a pseudo-random number generator seeded with the same entropy? (And therefore highly likely to be from
the same machine.)
I do not know the answers to any of these questions, and therefore it is wise for me to assume that the answers to the bottom four questions are "yes". Clearly it is far, far more difficult for someone to work out where and when a version-four GUID was created than a version-one GUID, which has that information directly in the GUID itself. But I do not know that it is impossible.
There are yet other techniques for generating GUIDs. If there is a 2 in the first hex digit of the third section then it is a version 1 GUID in which some of the timestamp bits have slightly different meanings. If there is a 3 or 5 then the bits are created by running a cryptographic hash function over a unique string; the uniqueness of the string is then derived from the fact that it is typically
a URL. But rather than go into the details of those more exotic GUIDs, I think I will leave off here.
Summing up:
• GUIDs are guaranteed to be unique but not guaranteed to be random. Do not use them as random numbers.
• GUIDs that are random numbers are not cryptographic strength random numbers.
• GUIDs are only unique when everyone cooperates; if someone wants to re-use a previously-generated GUID and thereby artificially create a collision, you cannot stop them. GUIDs are not a security system.
• GUIDs have an internal structure; at least six of the bits are reserved and have special meanings.
• GUIDs are allowed to be generated sequentially, and in practice often are.
• GUIDs are only unique when taken as a whole.
• GUIDs can be generated using a variety of algorithms.
• GUIDs that are generated randomly are statistically highly unlikely to collide in the foreseeable future.
• GUIDs could reveal information about the time and place they were created, either directly in the case of version one GUIDs, or via cryptanalysis in the case of version four GUIDs.
• GUIDs might be generated by some entirely different algorithm in the future.
1. This is an approximation that only holds if the probability is small and n is relatively small compared to the total number of possible outcomes. ↩
|
Integration, which one is right? (I'm confused)
January 2nd 2008, 01:49 PM #1
Integration, which one is right? (I'm confused)
I'm now little bit confused. I have textbook that says:
$\int \! x \left( {x}^{2}+1 \right) ^{2}{dx}\$
$=\frac{1}{6}\left(x ^{2}+1\right)^{3}\$
Ok, I'll do it my way and Integrate function by expand polynom first:
$\int \! x \left( {x}^{2}+1 \right) ^{2}{dx}\$
$=\int \left(x^{5}+2x^{3}+x\right)\, dx$
$=\frac{x^{6}}{6}+\frac{x^{4}}{2}+\frac{x^{2}}{2}$
I get a different integral function. Let x=3: the first gives 500/3 = 166.666..., but the second gives 333/2 = 166.5.
So, which one is right?
They differ by a constant C=1/6
$\frac{(x^{2}+1)^{3}}{6}=\frac{x^{6}}{6}+\frac{x^{4 }}{2}+\frac{x^{2}}{2}+\frac{1}{6}$
Oh, clever. Thanks guys!
Ok, I try to substitute u = x^2 Therefore I get:
u = x^2
x = u^(1/2)
du = 2x
dx = du/2
ok, then function looks like:
$\int \! x \left( u+1 \right) ^{2}\frac{du}{2}$
or should it be:
$\int \! u^{\frac{1}{2}} \left( u+1 \right) ^{2}\frac{du}{2}$
Could you help me little bit on this..what next or is it already wrong?
Let $u = x^2+1$
$\,du = 2x\,dx\Rightarrow \,dx = \frac{\,du}{2x}$
So the integral becomes
$\int x\cdot u^2 \cdot \frac{\,du}{2x}$
$=\int u^2 \cdot \frac{\,du}{2}$
$=\frac{1}{2}\int u^2 \,du$
$=\frac{1}{2} \left(\frac{u^3}{3}\right)+C$
I have one more question.
I want to integrate function x*(x+1)^3 with substitution u = (x+1)^3
How it continues?
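That last question goes unanswered in the thread; here is a sketch of one standard route (our completion, using the simpler substitution u = x+1 rather than the proposed u = (x+1)^3):

$u = x+1,\; x = u-1,\; dx = du$

$\int x(x+1)^3\,dx = \int (u-1)u^3\,du = \frac{u^5}{5} - \frac{u^4}{4} + C = \frac{(x+1)^5}{5} - \frac{(x+1)^4}{4} + C$

Differentiating confirms it: $(x+1)^4 - (x+1)^3 = x(x+1)^3$.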
|
Re: Selective Computation...
Patrick Volteau <Patrick.Volteau@st.com>
20 Jan 2003 23:55:27 -0500
From comp.compilers
From: Patrick Volteau <Patrick.Volteau@st.com>
Newsgroups: comp.compilers
Date: 20 Jan 2003 23:55:27 -0500
Organization: STMicroelectronics
References: 02-12-116 03-01-077
Keywords: performance, comment
Posted-Date: 20 Jan 2003 23:55:27 EST
> [On modern architectures, conditional branches can be slow. -John]
Yes but now some architecture provide predicated instructions where an
instruction is executed conditionally.
For example for the expression proposed here, one could use:
(k1, k2, k3, p, q, p', q' and X are symbolic names for registers
p0, p1, p2 are predicates (true or false)
I'm using an algebraic assembly syntax)
p0 = (k1 == 1)
p1 = (k2 == 1)
p2 = (k3 == 1)
p0 ? X = p + q
p1 ? X = p + q'
p2 ? X = p' + q
Nice, isn't it?
[Not new, either. I programmed a Varian mini in about 1969 that had
conditional execute instructions that we used for similar
things. -John]
|
A data-parallel implementation of the adaptive fast multipole algorithm
, 2000
"... This paper describes our experiences developing high-performance code for astrophysical N-body simulations. Recent N-body methods are based on an adaptive tree structure. The tree must be built
and maintained across physically distributed memory; moreover, the communication requirements are irregul ..."
Cited by 21 (0 self)
Add to MetaCart
This paper describes our experiences developing high-performance code for astrophysical N-body simulations. Recent N-body methods are based on an adaptive tree structure. The tree must be built and
maintained across physically distributed memory; moreover, the communication requirements are irregular and adaptive. Together with the need to balance the computational work-load among processors,
these issues pose interesting challenges and tradeoffs for high-performance implementation. Our implementation was guided by the need to keep solutions simple and general. We use a technique for
implicitly representing a dynamic global tree across multiple processors which substantially reduces the programming complexity as well as the performance overheads of distributed memory
architectures. The contributions include methods to vectorize the computation and minimize communication time which are theoretically and experimentally justified. The code has been tested by varying
the number and distribution of bodies on different configurations of the Connection Machine CM-5. The overall performance on instances with 10 million bodies is typically over 48 percent of the peak
machine rate, which compares favorably with other approaches.
- In SC'97 , 1997
"... We describe the design of several portable and efficient parallel implementations of adaptive N-body methods, including the adaptive Fast Multipole Method, the adaptive version of Anderson's
Method, and the Barnes-Hut algorithm. Our codes are based on a communication and work partitioning scheme tha ..."
Cited by 14 (2 self)
Add to MetaCart
We describe the design of several portable and efficient parallel implementations of adaptive N-body methods, including the adaptive Fast Multipole Method, the adaptive version of Anderson's Method,
and the Barnes-Hut algorithm. Our codes are based on a communication and work partitioning scheme that allows an efficient implementation of adaptive multipole methods even on high-latency systems.
Our test runs demonstrate high performance and speedup on several parallel architectures, including traditional MPPs, shared-memory machines, and networks of workstations connected by Ethernet. 1
Introduction The N-body problem is the problem of simulating the movement of a set of bodies (or particles) under the influence of gravitational, electrostatic, or other type of force. Algorithms for
N-body simulations have a number of important applications in fields such as astrophysics, molecular dynamics, fluid dynamics, and even computer graphics [12]. A large number of algorithms for N-body
, 1996
"... Scientific problems are often irregular, large and computationally intensive. Efficient parallel implementations of algorithms that are employed in finding solutions to these problems play an
important role in the development of science. This thesis studies the parallelization of a certain class of ..."
Cited by 13 (9 self)
Add to MetaCart
Scientific problems are often irregular, large and computationally intensive. Efficient parallel implementations of algorithms that are employed in finding solutions to these problems play an
important role in the development of science. This thesis studies the parallelization of a certain class of irregular scientific problems, the N-body problem, using a classical hierarchical
algorithm: the Fast Multipole Algorithm (FMA). Hierarchical N-body algorithms in general, and the FMA in particular, are amenable to parallel execution. However, performance gains are difficult to
obtain, due to load imbalances that are primarily caused by the irregular distribution of bodies and of computation domains. Understanding application characteristics is essential for obtaining high
performance implementations on parallel machines. After surveying the available parallelism in the FMA, we address the problem of exploiting this parallelism with partitioning and scheduling
techniques that optimally map i...
, 1994
"... Numerical studies of turbulent flows have always been prone to crude approximations due to the limitations in computing power. With the advent of supercomputers, new turbulence models and fast
particle algorithms, more highly resolved models can now be computed. Vortex Methods are grid-free and so a ..."
Cited by 12 (1 self)
Add to MetaCart
Numerical studies of turbulent flows have always been prone to crude approximations due to the limitations in computing power. With the advent of supercomputers, new turbulence models and fast
particle algorithms, more highly resolved models can now be computed. Vortex Methods are grid-free and so avoid a number of shortcomings of gridbased methods for solving turbulent fluid flow
equations; these include such problems as poor resolution and numerical diffusion. In these methods, the continuum vorticity field is discretised into a collection of Lagrangian elements, known as
vortex elements, which are free to move in the flow field they collectively induce. The vortex element interaction constitutes an N-body problem, which may be calculated by a direct pairwise
summation method, in a time proportional to N 2 . This time complexity may be reduced by use of fast particle algorithms. The most common algorithms are known as the N-body Treecodes and have a
hierarchical structure. An in-de...
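For reference, the O(N^2) direct pairwise summation that these treecodes accelerate is short to write down; a minimal NumPy sketch (ours, purely illustrative: G = 1 units, a softening length, and no integrator):

    import numpy as np

    def direct_forces(pos, mass, eps=1e-3):
        """O(N^2) direct-summation gravitational accelerations.
        pos: (N, 3) positions, mass: (N,) masses, eps: softening length."""
        diff = pos[None, :, :] - pos[:, None, :]      # r_j - r_i
        dist2 = (diff ** 2).sum(-1) + eps ** 2        # softened |r|^2
        inv_r3 = dist2 ** -1.5
        np.fill_diagonal(inv_r3, 0.0)                 # no self-force
        return (diff * (mass[None, :, None] * inv_r3[:, :, None])).sum(axis=1)

    rng = np.random.default_rng(0)
    acc = direct_forces(rng.standard_normal((100, 3)), np.ones(100))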
, 1994
"... This dissertation studies issues critical to efficient N-body simulations on parallel computers. The N-body problem poses several challenges for distributed-memory implementation: adaptive
distributed data structures, irregular data access patterns, and irregular and adaptive communication patterns. ..."
Cited by 11 (1 self)
Add to MetaCart
This dissertation studies issues critical to efficient N-body simulations on parallel computers. The N-body problem poses several challenges for distributed-memory implementation: adaptive
distributed data structures, irregular data access patterns, and irregular and adaptive communication patterns. We introduce new techniques to maintain dynamic irregular data structures, to vectorize
irregular computational structures, and for efficient communication. We report results from experiments on the Connection Machine CM-5. The results demonstrate the performance advantages of design
simplicity; the code provides generality of use on various message-passing architectures. Our methods have been used as the basis of a C++ library that provides abstractions for tree computations to
ease the development of different N-body codes. This dissertation also presents the atomic message model to capture the important factors of efficient communication in message-passing systems. The
atomic model was m...
- IN PROCEEDINGS OF SUPERCOMPUTING, THE SCXY CONFERENCE SERIES , 1994
"... The O(N) hierarchical N-body algorithms and Massively Parallel Processors allow particle systems of 100 million particles or more to be simulated in acceptable time. We describe a data parallel
implementation of Anderson's method and demonstrate both efficiency and scalability of the implementation ..."
Cited by 9 (1 self)
Add to MetaCart
The O(N) hierarchical N-body algorithms and Massively Parallel Processors allow particle systems of 100 million particles or more to be simulated in acceptable time. We describe a data parallel
implementation of Anderson's method and demonstrate both efficiency and scalability of the implementation on the Connection Machine CM-5/5E systems. The communication time for large particle systems
amounts to about 10-25%, and the overall efficiency is about 35%. On a CM-5E the overall performance is about 60 Mflop/s per node, independent of the number of nodes.
- COMPUTING, U. VISHKIN, ED.: ACM , 1994
"... We identify the following key problems faced by HPC software: (1) the large gap between HPC design and implementation models in application development, (2) achieving high performance for a
single application on different HPC platforms, and (3) accommodating constant changes in both problem spe ..."
Cited by 8 (5 self)
Add to MetaCart
We identify the following key problems faced by HPC software: (1) the large gap between HPC design and implementation models in application development, (2) achieving high performance for a single
application on different HPC platforms, and (3) accommodating constant changes in both problem specification and target architecture as computational methods and architectures evolve. To attack these
problems, we suggest an application development methodology in which high-level architecture-independent specifications are elaborated, through an iterative refinement process which introduces
architectural detail, into a form which can be translated to efficient low-level architecture-specific programming notations. A tree-structured development process permits multiple architectures to
be targeted with implementation strategies appropriate to each architecture, and also provides a systematic means to accommodate changes in specification and target architecture. We describe the
- Harvard University, Division of Applied Sciences , 1996
"... The optimization techniques for hierarchical O(N ) N--body algorithms described here focus on managing the data distribution and the data references, both between the memories of different
nodes, and within the memory hierarchy of each node. We show how the techniques can be expressed in data--paral ..."
Cited by 7 (5 self)
Add to MetaCart
The optimization techniques for hierarchical O(N ) N--body algorithms described here focus on managing the data distribution and the data references, both between the memories of different nodes, and
within the memory hierarchy of each node. We show how the techniques can be expressed in data--parallel languages, such as High Performance Fortran (HPF) and Connection Machine Fortran (CMF). The
effectiveness of our techniques is demonstrated on an implementation of Anderson's hierarchical O(N ) N --body method for the Connection Machine system CM--5/5E. Of the total execution time,
communication accounts for about 10--20% of the total time, with the average efficiency for arithmetic operations being about 40% and the total efficiency (including communication) being about 35%.
For the CM--5E, a performance in excess of 60 Mflop/s per node (peak 160 Mflop/s per node) has been measured. c fl1996 John Wiley & Sons, Inc. 1 INTRODUCTION Achieving high efficiency in hierarchical
methods on mas...
- Proceedings of 26 th International Conference on Parallel Processing , 1997
"... Abstract This paper describes an implementation of a platform-independent parallel C++ N-body framework that can support various scientific simulations that involve tree structures, such as
astrophysics, semiconductor device simulation, molecular dynamics, plasma physics, and fluid mechanics. Withi ..."
Add to MetaCart
Abstract This paper describes an implementation of a platform-independent parallel C++ N-body framework that can support various scientific simulations that involve tree structures, such as
astrophysics, semiconductor device simulation, molecular dynamics, plasma physics, and fluid mechanics. Within the framework the users will be able to concentrate on the computation kernels that
differentiate different N-body problems, and let the framework take care of the tedious and error-prone details that are common among N- body applications. This framework was developed based on the
techniques we learned from previous CM-5 C implementations, which have been rigorously justified both experimentally and mathematically. This gives us confidence that our framework will allow fast
prototyping of different N-body applications, to run on different parallel platforms, and to deliver good performance as well. 1 Introduction 1.1 N-body problem and tree codes Computational methods
to track the motio...
, 1994
"... This paper describes our experiences developing highperformance code for astrophysical N-body simulations. Recent N-body methods are based on an adaptive tree structure. The tree must be built
and maintained across physically distributed memory; moreover, the communication requirements are irregular ..."
Add to MetaCart
This paper describes our experiences developing highperformance code for astrophysical N-body simulations. Recent N-body methods are based on an adaptive tree structure. The tree must be built and
maintained across physically distributed memory; moreover, the communication requirements are irregular and adaptive. Together with the need to balance the computational work-load among processors,
these issues pose interesting challenges and tradeoffs for high-performance implementation. Our implementation was guided by the need to keep solutions simple and general. We use a technique for
implicitly representing a dynamic global tree across multiple processors which substantially reduces the programming complexity as well as the performance overheads of distributed memory
architectures. The contributions include methods to vectorize the computation and minimize communication time which are theoretically and experimentally justified. The code has been tested by varying
the number and distrib...
|
A certain sum was invested in a high-interest bond for which
A certain sum was invested in a high-interest bond for which [#permalink] 16 Mar 2011, 06:49
A certain sum was invested in a high-interest bond for which the interest is compounded monthly. The bond was sold x number of months later, where x is an integer. If the
value of the original investment doubled during this period, what was the approximate amount of the original investment in dollars?
(1) The interest rate during the period of investment was greater than 39 percent but less than 45 percent.
(2) If the period of investment had been one month longer, the final sale value of the bond would have been approximately $2,744.
Re: compound intrest [#permalink] 16 Mar 2011, 07:40
fluke (Math Forum Moderator) wrote:
Investment = $P
time: x months = x/12 years
Periods = n = 12
Return after application of the compound interest for x months: P(1+\frac{r}{12})^{12 \cdot x/12}
It is given that the investment doubles after x months;
P(1+\frac{r}{12})^x=2P --------------A
(1+\frac{r}{12})^x=2
r and x are unknown.
1. 0.4<=r<=0.44 (approximately):
(1+\frac{0.4}{12})^x=(1.033)^x=2 to (1+\frac{0.44}{12})^x=(1.036)^x=2
x can be found; but we don't know P.
Not Sufficient.
2. P(1+\frac{r}{12})^{x+1}=2744
Substituting from equation A: 2P(1+\frac{r}{12})=2744
We still have two unknowns.
Not Sufficient.
Combining both: r ≈ 0.4, so x is known, and we can get the approximate value of P.
Ans: "C"
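To make the combination step concrete, here is a small Python sketch (ours, not from the thread; it takes r ≈ 40% from statement 1 and the $2,744 figure from statement 2):

    import math

    r = 0.40                              # annual rate, approx., from statement (1)
    monthly = 1 + r / 12                  # monthly growth factor, ~1.0333
    x = math.log(2) / math.log(monthly)   # months needed for the value to double
    # Statement (2): P * monthly**(x+1) = 2744; since monthly**x = 2,
    # this reduces to 2 * P * monthly = 2744.
    P = 2744 / (2 * monthly)
    print(round(x), round(P))             # about 21 months and P around $1,328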
Re: compound intrest [#permalink] 16 Mar 2011, 08:24
thnx so much fluke
Re: compound intrest [#permalink] 16 Mar 2011, 21:19
IanStewart (GMAT Instructor): Where is this question from? It makes no sense, in DS, to ask for the 'approximate value' of something -- how could you possibly know what information would be sufficient?
If I ask the following question:
What is the approximate value of x?
1. 3 < x < 5
2. 4 < x < 4.5
Is Statement 1 sufficient? Statement 2? You can't possibly know. It's a nonsensical question to ask in DS, so I wouldn't use other questions from the same source.
Re: compound intrest [#permalink] 17 Mar 2011, 06:10
Good point Ian; based on (2) I could easily say "initial investment is between $0 and $1372, approximately".
Re: compound intrest [#permalink] 18 Mar 2011, 09:37
Fluke - how did u go from the 2nd stage to the 3rd?!?!
P(1+\frac{r}{12})^{x+1}=2744
2P(1+\frac{r}{12})=2744
We still have two unknowns.
Not Sufficient.
Re: compound intrest [#permalink] 18 Mar 2011, 10:00
144144 wrote: Fluke - how did u go from the 2nd stage to the 3rd?!?!
fluke: From the stem, please see the highlighted part.
fluke wrote:
It is given that the investment doubles after x months;
P(1+\frac{r}{12})^x=2P --------------A
...
2. P(1+\frac{r}{12})^{x+1}=2744
Substitute from equation A: 2P(1+\frac{r}{12})=2744
We still have two unknowns.
Not Sufficient.
Re: compound intrest [#permalink] 18 Mar 2011, 19:05
VeritasPrepKarishma (Veritas Prep GMAT Instructor), Expert's post: This question reminds me: sometimes it's useful to know that in compound interest the principal doubles approximately every 72/r years, where r is the rate of interest. I.e., if the rate of interest is 10, the principal doubles in approximately 72/10 = 7.2 years.
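A quick numeric check of this rule of 72 (our sketch, not part of the post):

    import math

    for r in (5, 10, 20, 40):                        # annual rate in percent
        exact = math.log(2) / math.log(1 + r / 100)  # exact doubling time in years
        print(r, round(exact, 2), round(72 / r, 2))  # exact vs. 72/r estimate
    # the 72/r estimate tracks the exact doubling time closely for modest rates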
Re: compound intrest [#permalink] 10 Sep 2013, 17:19
VeritasPrepKarishma wrote:
This question reminds me: Sometimes, it's useful to know that in compound interest, the principal doubles approximately every 72/r years where r is the rate of interest. i.e. if rate of interest is 10, the principal doubles in approximately 72/10 = 7.2 years.
divineacclivity: Couldn't the following give multiple values of x (maybe with a minor difference each)? So we wouldn't have a single answer to the question, and hence 1 & 2 aren't sufficient together. So the answer shd be E. Please correct me if I'm wrong. Thank you.
(1+\frac{0.4}{12})^x=(1.033)^x = 2 to (1+\frac{0.44}{12})^x=(1.036)^x=2
x can be found; but we don't know P.
Re: compound intrest [#permalink] 11 Sep 2013, 02:42
Bunuel (Math Expert): This is a poor quality question. Check here:
a-certain-sum-was-invested-in-a-high-interest-bond-for-which-110991.html#p893606
|
[SOLVED] Need guidance in getting to solve a PERT question
November 9th 2006, 02:07 PM
[SOLVED] Need guidance in getting to solve a PERT question
i am stuck on the following PERT PROBLEM
WHICH IS 60% SOLVED. SO FAR I HAVE:
DURATION IN WEEKS
(i) ACTIVITY A
10 LIKELY
5 OPTIMISTIC
21 PESSIMISTIC
(ii) ACTIVITY B
6 LIKELY
4 OPTIMISTIC
8 PESSIMISTIC
(iii) ACTIVITY C
14 LIKELY
6 OPTIMISTIC
16 PESSIMISTIC
(A) CALC THE EXPECTED DURATION PER ACTIVITY
(i) (5 + 4*10 + 21)/6 = 11
(ii) (4 + 4*6 + 8)/6 = 6
(iii) (6 + 4*14 + 16)/6 = 13
total 30 weeks EXPECTED PROJ DURATION
(B) DETERMINE THE STD DEVIATION
A (21-5)/6 = 16/6 = 2.67
B (8-4)/6 = 4/6 = 0.67
C (16-6)/6 = 10/6 = 1.67
((2.67)^2 + (0.67)^2 + (1.67)^2)^0.5
= 3.21
(C) ASSUMING A NORMAL DISTRIBUTION ESTIMATE THE PROB THAT THE
PROJ DURATION IS
(i) more than A days (ii) less than B days (iii) between C and D days
I am stuck on Part C. Note I have weeks and days. Do I convert days into weeks to make the whole thing equal? My problem is knowing (i) what figure used to subtract A, B and C&D days. Once I have
clarity here
I can go ahead and work out the confidence limits. Thanks
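For part C you would standardize against the mean of 30 weeks and standard deviation of 3.21 (converting any thresholds given in days into weeks first, so the units match). A sketch of the computation, with placeholder thresholds since A, B, C and D are not given in the post:

    from scipy.stats import norm

    mean, sd = 30.0, 3.21                    # expected duration and std dev, in weeks
    A, B, C, D = 33.0, 28.0, 27.0, 33.0      # placeholder thresholds, in weeks

    p_more = 1 - norm.cdf(A, loc=mean, scale=sd)   # P(duration > A)
    p_less = norm.cdf(B, loc=mean, scale=sd)       # P(duration < B)
    p_between = norm.cdf(D, loc=mean, scale=sd) - norm.cdf(C, loc=mean, scale=sd)
    print(p_more, p_less, p_between)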
|
Pythagorean theorem
The Pythagorean Theorem
I . The History of the Pythagorean Theorem
II . Some proofs of the Pythagorean Theorem
III . Evolution of the Pythagorean Theorem
IV . Metaphysics and the Pythagorean Theorem
V . How the Pythagorean Theorem Shows Up In Daily Life
"a² + b² = c², where a and b are the sides of a right triangle and c is the hypotenuse" is the answer most students will give when asked to define the Pythagorean Theorem. Ask them to prove it and most can't unless they are fairly advanced in mathematics. By and large, most anyone who has taken high school algebra has encountered this theorem without ever really understanding it.
In words, the Pythagorean Theorem states that the square on the hypotenuse of a right triangle has an area equal to the combined areas of the squares on the other two sides.
The Egyptians, Babylonians and Chinese knew of the right triangle long before Pythagoras came along, and the Egyptians had devised a right triangle of rope with twelve evenly spaced knots for measurements. They knew that 3, 4 and 5 make a 90 degree angle and used the triangle of rope as an instrument of measurement.
This paper will explore several aspects of the Pythagorean Theorem, from its history to how it is found in art, how it relates to music, and how it occurs in everyday life, whether people know it or not.
I. The History of the Pythagorean Theorem
It is not conclusively known that Pythagoras can be solely credited with his famous theorem , but it is known that he was the first to prove why a right triangle behaves the way it does
Born on Samos (a Greek island off the coast of modern Turkey) in about 582 B.C., Pythagoras came from a humble family. Various scenarios of his early life put him in Egypt, Babylonia, and Italy, where he founded his school that survived for a few hundred years.
Pythagoras was not interested in solving mathematical problems; he focused on pure mathematics and geometric relationships.
Pythagoras and his Brotherhood (the Pythagoreans) traveled fairly extensively, and it is known that they visited and studied in Egypt. The Pythagoreans were a mystical group, and Pythagoras himself identified numbers with mysticism.
It is possible that Pythagoras saw the knotted triangle in Egypt and went about assigning a numerical value to the geometry of the triangle. Nothing of his was written down, though, and while the theorem is attributed to him, it could well have been one of his brethren or students that invented it, since all in the Pythagorean school was community property; and since his name was the name of the school, we have come to know the famous theorem as it is today.
How do we know that the Pythagorean Theorem was known before Pythagoras? So far, nothing has been shown in terms of an equation in relation to a right triangle before Pythagoras, but the right triangle was certainly known for its special properties, as can be seen in...
|
What is primitive of function (cosx)^3/sin x?
To find the primitive of the given function, you need to evaluate its indefinite integral, such that:
`int (cosx)^3/sin x dx = int (cos^2 x)/sin x *cos x dx`
You need to use the Pythagorean trigonometric identity, such that:
`cos^2 x = 1 - sin^2 x`
`int (cos^2 x)/sin x *cos x dx = int (1 - sin^2 x)/sin x *cos x dx`
You may solve the indefinite integral using the substitution process, such that:
`sin x = y => cos x dx = dy`
Replacing the variable yields:
`int (1 - sin^2 x)/sin x *cos x dx = int (1 - y^2)/y *dy`
You need to split the integral using the linearity of the integral, such that:
`int (1 - y^2)/y *dy = int 1/y dy - int y dy`
`int (1 - y^2)/y *dy = ln|y| - y^2/2 + c`
Replacing back `sin x` for `y` yields:
`int (cosx)^3/sin x dx = ln|sin x| - (sin^2 x)/2 + c`
Hence, evaluating the requested primitive yields, under the given conditions, `int (cosx)^3/sin x dx = ln|sin x| - (sin^2 x)/2 + c.`
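As a quick check, the result can be differentiated symbolically; a small SymPy sketch (ours, not part of the answer):

    import sympy as sp

    x = sp.symbols('x')
    F = sp.log(sp.sin(x)) - sp.sin(x)**2 / 2   # proposed primitive (up to C)
    print(sp.simplify(sp.diff(F, x) - sp.cos(x)**3 / sp.sin(x)))  # -> 0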
|
Wood, John C. - Department of Pure Mathematics, University of Leeds
• Jacobi elds along harmonic maps John C. Wood
• Jacobi elds along harmonic 2-spheres in CP 2 are Luc Lemaire and John C. Wood
• University of Leeds School of Mathematics
• THE GEOMETRY OF HARMONIC MAPS AND J. C. Wood
• A NEW CONSTRUCTION OF EINSTEIN SELF-DUAL RADU PANTILIE AND JOHN C. WOOD
• Article for the `Handbook of Global Analysis', Elsevier (2007, to appear) HARMONIC MAPS
• MATH 1035 ANALYSIS: 2009 Summary of results
• MATH1035: Workbook One M. Daws, 2010 This course is MATH1035, "Analysis". The course is split in two, with two different lec-
• Some Recent Developments in the study of minimal 2-spheres in spheres
• THE GEOMETRY OF HARMONIC MAPS AND J. C. Wood *
• University of Leeds School of Mathematics
• School of maths -Homework/Attendance Coversheet You must fill in this section and hand in this workbook, even if you attempt
• On the construction of harmonic morphisms from Euclidean spaces *
• HARMONIC MORPHISMS BETWEEN RIEMANNIAN JOHN C. WOOD
• Elementary Differential and Integral Calculus FORMULA SHEET
• Jacobi fields along harmonic maps John C. Wood
• International Journal of Geometric Methods in Modern Physics Vol. 3, Nos. 5 & 6 (2006) 933956
• Jacobi fields along harmonic 2-spheres in CP2 Luc Lemaire and John C. Wood
• Harmonic morphisms and shear-free ray congruences
• HARMONIC MORPHISMS BETWEEN RIEMANNIAN JOHN C. WOOD
• List of contributions John C. Wood
• School of maths -Homework/Attendance Coversheet You must fill in this section and hand in this workbook, even if you attempt
• On the construction of harmonic morphisms from Euclidean spaces *
• THE GEOMETRY OF HARMONIC MAPS AND MORPHISMS
• University of Leeds, School of Mathematics MATH 1220 Introduction to Geometry
• JACOBI FIELDS ALONG HARMONIC 2-SPHERES IN 3-AND 4-SPHERES ARE NOT ALL INTEGRABLE.
• A NEW CONSTRUCTION OF EINSTEIN SELF-DUAL METRICS
• Jacobi fields along harmonic 2-spheres in CP 2 are integrable
• Jacobi fields along harmonic maps* John C. Wood
|
Hello everybody!
Re: Hello everybody!
Welcome to the forum.
I would love to hear why you are studying maths!
I think we do not choose math; it chooses us.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
|
Prove:(tanx/(1+tanx)) = (sinx/(sinx+cosx))
For this question the only answer I seem to get is sinx/cosx not (sinx/(sinx+cosx))
(tanx/(1+tanx)) = (sinx/(sinx+cosx))
((1-tanx)/(1-tanx)) times (tanx/(1+tanx))
((cosx/cosx) times (sinx/cosx) - (sin^2x/cos^2x)) / ((cos^2x/cos^2x) times 1- (sin^2x/cos^2x))
((cosxsinx - sin^2x)/cos^2x) / ((cos^2x-sin^2x)/cos^2x)
multiply the reciprocal and cancel i get
ugh.. what did i do wrong??
you're really making this too hard ...
$\displaystyle \frac{\tan{x}}{1+\tan{x}} =$
$\displaystyle\frac{\frac{\sin{x}}{\cos{x}}}{1 + \frac{\sin{x}}{\cos{x}}}$
finish it by multiplying numerator and denominator by $\cos{x}$ ...
ohh.. haha. thanks.. for some reason i never see the easiness of it .. ugh
One of the easiest ways to prove this is to take reciprocals of both sides: (1+tanx)/tanx = (sinx+cosx)/sinx. Because we change both sides in the same way this is fine, and both sides simplify to cotx + 1, so
this is proved. Thanks
You went wrong in the final step of your original working.
Simplest is to ask: how do we get from $\tan x$ to $\sin x$ in the numerator?
Whenever you are trying to prove a trig identity with tan x, cot x, sec x or csc x, a method that usually works pretty easily is to change these into sin x and cos x and then simplify
algebraically. So if you don't see a quicker way right away always try this.
ok ... but what does that have to do with the original identity to be proved in this thread?
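For anyone who wants a quick machine check of the identity, a minimal sympy sketch (my addition, assuming sympy is available):

from sympy import symbols, sin, cos, tan, simplify

x = symbols('x')
lhs = tan(x) / (1 + tan(x))
rhs = sin(x) / (sin(x) + cos(x))
print(simplify(lhs - rhs))   # prints 0, confirming the identity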
|
Scatchard Analysis Software
Peter Gegenheimer peterg at rnaworld.bio.ukans.edu
Thu Jan 5 19:18:43 EST 1995
In <3ehenh$fbr at lastactionhero.rs.itd.umich.edu>, Chris Beecher <cab at umich.edu> writes:
>I am looking for PC or Mac software that can do a Scatchard analysis for
>at least one-site ligand binding experiments.
>I am already familiar with Ligand and it barely gets by; can anyone
>recommend something better without spending an arm and a leg?
>chris beecher
>Univ of Mich
Ligand binding is not determined from a Scatchard plot but from least-squares
non-linear regression fitting of the hyperbolic binding curve to the raw data
(bound ligand vs total ligand). For a simple example, see Chen et al, FEBS
Lett. 298, 69-73 (1992). Any sort of linear transformation of the data
introduces statistical errors which make the results less reliable than direct
curvefitting to the unmanipulated data. This is especially true for a
Scatchard plot, in which calculation of [free ligand] is often unreliable at
very low [total ligand]. The *easiest* program for non-linear curvefitting is
the old PC program Enzfitter (BioSoft; avail from Sigma and other scientific
software distributors). Enzfitter is expensive ($250-350) but lets you enter
your own equations; it uses linear transforms, if possible, to obtain initial
estimates for the non-linear curvefitting. It also lets you adjust the type
of robust weighting. It has excellent graphing capabilities; they require
work and are not publication-quality, but you can transfer everything to a
simple plotting program for final figures if necessary. Slick (and $$)
versions of this program are Graft-It for Windows and Ultra-Fit for the Mac.
Probably much easier to use, but I haven't tried them.
The *cheapest* way is to use the freeware program Hyper (for Windows). It's
available from ftp sites; I don't remember which one (maybe IUBio?) Hyper will
fit to the Michaelis-Menten equation, which is a plot of [bound ligand] vs
[total ligand] for any ligand which is not a substrate. Hyper is excellent in
that it also plots all the common linear transforms (Lineweaver-Burk;
Eadie-Hofstee-Scatchard; Hanes-Woolf) to let you see how their answers
differ from the true answer.
A compromise is to use any one of the many scientific graphing programs which
have built-in non-linear curvefitting capabilities. SigmaPlot is one; PSI
Plot (only $50 academic) is my favorite. These curvefitters require a little
work to set up the equations, especially if you want to use robust weighting.
If you can get the $, get Enzfitter or a similar program. You'll use it often.
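As a minimal sketch of the direct non-linear fit described above (using Python's scipy rather than the programs named; the simple one-site hyperbola here takes free ligand as the x-variable, and the data points are invented for illustration):

import numpy as np
from scipy.optimize import curve_fit

def one_site(free, bmax, kd):
    # hyperbolic one-site binding curve: B = Bmax*F/(Kd + F)
    return bmax * free / (kd + free)

free = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])       # free ligand (invented)
bound = np.array([0.09, 0.23, 0.52, 0.78, 0.93, 0.98])  # bound ligand (invented)

(bmax, kd), pcov = curve_fit(one_site, free, bound, p0=[1.0, 1.0])
print(bmax, kd, np.sqrt(np.diag(pcov)))   # estimates and their standard errors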
| Peter Gegenheimer | pgegen at kuhub.cc.ukans.edu |
| Departments of Biochemistry | voice: 913-864-3939 |
| and of Botany | |
| University of Kansas | FAX : 913-864-5321 |
| 2045 Haworth Hall | "The sleep of reason produces |
| Lawrence KS 66045-2106 | monsters." Goya |
More information about the Bio-soft mailing list
|
Johnson and Zen-ichi Yosimura, Torsion in Brown-Peterson homology and Hurewicz homomorphisms
, 2003
"... Given a good homology theory E and a topological space X, E∗X is not just an E∗-module but also a comodule over the Hopf algebroid (E∗, E∗E). We establish a framework for studying the
homological algebra of comodules over a well-behaved Hopf algebroid (A, Γ). That is, we construct ..."
Cited by 13 (3 self)
Add to MetaCart
Given a good homology theory E and a topological space X, E∗X is not just an E∗-module but also a comodule over the Hopf algebroid (E∗, E∗E). We establish a framework for studying the homological
algebra of comodules over a well-behaved Hopf algebroid (A, Γ). That is, we construct
- Adv. Math
"... Abstract. We show that, if E is a commutative MU-algebra spectrum such ..."
, 2002
"... Abstract. We describe the author’s research with Neil Strickland on the global algebra and global homological algebra of the category of BP∗BP- ..."
"... Abstract. Given a spectrum X, we construct a spectral sequence of BP∗BP-comodules that converges to BP∗(LnX), where LnX is the Bousfield localization of X with respect to the Johnson-Wilson
theory E(n)∗. The E2-term of this spectral sequence consists of the derived functors of an algebraic version o ..."
Cited by 1 (1 self)
Add to MetaCart
Abstract. Given a spectrum X, we construct a spectral sequence of BP∗BP-comodules that converges to BP∗(LnX), where LnX is the Bousfield localization of X with respect to the Johnson-Wilson theory E
(n)∗. The E2-term of this spectral sequence consists of the derived functors of an algebraic version of Ln. We show how to calculate these derived functors, which are closely related to local
cohomology of BP∗-modules with respect to the ideal In+1.
|
SSRS – Why Do We Need InScope Function in Matrix Subtotals
Here is a pretty good article about advanced matrix reporting techniques. I'll show you an example where I need to manipulate the subtotals in a matrix to get a subtotal that is not the default sum of
all the data in the group. (I need to check out if this is still true in SSRS 2008.)
By default, the subtotal in the matrix is the sum of all numbers in the defined group. But in many cases, we need to have custom aggregates on a matrix report. Examples: you need an average instead
of a sum, or a growth percentage instead of a sum. In this example, I need to sum only the numbers in bold, and ignore the numbers that are not in bold.
matrix1_RowGroup2: this is the lowest level grouping Dates on the column.
matrix1_ColumnGroup2: this is the lowest level grouping on the row.
To get custom aggregates, enter this in the Expression editor of the measure cell. (To be continued…)
=IIF(InScope("matrix1_ColumnGroup2") and InScope("matrix1_RowGroup2"),
IIF(SUM(Fields!ACCT_COUNTER.Value) = 0, "", SUM(Fields!ACCT_COUNTER.Value)),
IIF(SUM(Fields!ACCT_COUNTER_SUBTOTAL.Value) = 0, "", SUM(Fields!ACCT_COUNTER_SUBTOTAL.Value)))
|
Natural Vibration of Beam - Algebraic Query
Not sure I understand this. For example, on a slight tangent: if we consider a 1-dimensional bar element whose interpolation function is a quadratic in x, where x is the distance along the bar, as in
##u_h^e(x)=c_1+c_2x+c_3x^2##, this is not dimensionless...
The difference with this example is that your coefficients ##c_i## each have different units to compensate for the units of the different powers of x.
In the quadratic for ##\lambda##, you assigned no numbers to the numerical coefficients, and since you were following a book that appears to have left all dimensionful quantities as variables, I
assumed those coefficients were dimensionless. Are the coefficients supposed to have units?
Well this problem stems from the Euler-Bernoulli Beam Theory and the book states that ##\rho## is "mass density per unit length", so isn't that (kg/m^3)/m = kg/m^4?...
The book says that word-for-word? That is a very confusing statement. One usually says "mass per unit length" or "mass per unit volume", but I have not heard "mass density
per unit length". I think it must be a typo or something. Even if we were to take that at face value, is the density a linear density or a volumetric density?
##\omega## is the angular frequency in radians/second, not an angle in radians.
Yes, I was not referring to the angular frequency as a whole, only the factor of radians that remained in bugatti79's calculation, so I mentioned that radians are actually dimensionless.
|
matching and extracting...
Cliff Wells LogiplexSoftware at earthlink.net
Wed Jan 15 01:10:09 CET 2003
On Tue, 2003-01-14 at 13:24, Shagshag wrote:
> Here is my problem : say i have items ix, some are "to discover" (must
> be extracted "-x"), some are "needed" (must be present "-p") and some
> are indifferent ("-?"). for example, i have sequence like :
> s = "a0 a1 a2 a3 a4 a5 a6 a7 a8"
> i would like to check if my sequence is matching sequence like :
> m = "a0-p i1-x i2-x i3-? a4-p i5-x i6-? i7-x i8-?"
> and get result like :
> m is matching s, i1 is a1, i2 is a3, i5 is a5, i7 is a7 (in python a
> "true" and a dict)
Is "i2 is a3" a typo, or am I missing something?
If it is a typo, then perhaps this will work for you:
import re

class matchseq:
    def __init__(self, sequence):
        self.items = {}
        exp = ""
        for item in sequence.split():
            if exp:
                exp += " "
            k, v = item.split('-')
            if v == 'p':
                # "needed" items must be present literally
                exp += "(?P<%s>%s)" % (k, k)
            else:
                # "to discover" (-x) and indifferent (-?) items match any token
                exp += r"(?P<%s>\D*\d*)" % k
                if v == 'x':
                    self.items[k] = v
        self.re = re.compile(exp)

    def match(self, sequence):
        match = self.re.match(sequence)
        if match:
            for k in self.items:
                self.items[k] = match.group(k)
            return self.items

m = matchseq("a0-p i1-x i2-x i3-? a4-p i5-x i6-? i7-x i8-?")
result = m.match("a0 a1 a2 a3 a4 a5 a6 a7 a8")
if result:
    print result
Cliff Wells, Software Engineer
Logiplex Corporation (www.logiplex.net)
(503) 978-6726 x308 (800) 735-0555 x308
More information about the Python-list mailing list
|
George William Hill
Born: 3 March 1838 in New York, USA
Died: 16 April 1914 in West Nyack, New York, USA
George Hill's mother was Catherine Smith and his father was John William Hill who was an artist specialising in engraving. In 1846, when Hill was eight years old, his family moved to a farm near West
Nyack, in New York, where George attended school. He received a somewhat basic education at secondary school, yet he shone in the mathematics classes. After graduating from high school he entered
Rutgers College in 1855. It had been an institution of higher education from 1766 and had been renamed Rutgers College thirty years before Hill studied there. Hill had graduated just before the
Morrill Act of 1862 led to Rutgers College becoming New Jersey's land-grant college in 1864.
Certainly Hill's undergraduate studies at Rutgers College were highly unusual for American students of this period. He was taught at the College by a very able mathematician named Thomas Strong who
was a friend of Bowditch. Strong had a fine library of classic mathematics texts which he was pleased to allow the talented undergraduate Hill to study. These texts included Lacroix' Traité du calcul
différentiel et intégral, Lagrange's Méchanique analytique, Laplace's Méchanique céleste and Legendre's Fonctions elliptiques. Hill graduated from Rutgers College in 1859 with his A.B. The following
year he began his study of the lunar theory of Delaunay and Hansen. He was to continue this study for twelve years before he produced any publications of his own. He was awarded his A.M. by Rutgers
in 1862.
In 1861 Hill joined the Nautical Almanac Office working in Cambridge, Massachusetts. In the same year he won first prize for his essay On the conformation of the Earth in a competition which was run
by Runkle's Mathematical Monthly. After two years in Cambridge, Massachusetts he returned to West Nyack where he worked from his home. This suited the reclusive Hill who preferred being on his own.
Except for a period of 10 years from 1882 to 1892 when he worked in Washington on the theory and tables for the orbits of Jupiter and Saturn, this was to be the working pattern for the rest of his life.
Writing in [4], E W Brown says:-
He was essentially of the type of scholar and investigator who seems to feel no need of personal contacts with others. While the few who knew him speak of the pleasure of his companionship in
frequent tramps over the country surrounding Washington, he was apparently quite happy alone, whether at work or taking recreation.
He began his studies of the transits of Venus soon after joining the Nautical Almanac Office and at this early stage he mastered the methods set out by Delaunay in his two volume treatise Théorie du
mouvement de la lune. Hill was the first to use infinite determinants to study the orbit of the Moon in On the part of the motion of the lunar perigee which is a function of the mean motion of the
sun and moon. He published this work at his own expense in 1877 (it was reprinted in Acta Mathematica in 1886). His Researches in Lunar Theory appeared in 1878 in the newly founded American Journal
of Mathematics. This publication contains important new ideas on the three-body problem. He also introduced infinite determinants and other methods to give increased accuracy to his results. Brown
wrote in 1915 that Hill's memoir Researches in Lunar Theory:-
... of but fifty quarto pages has become fundamental for the development of celestial mechanics in three different directions. ... Poincaré's remark that in it we may perceive the germ of all
progress which has been made in celestial mechanics since its publication is doubtless fully justified.
Newcomb persuaded Hill to develop a theory of the orbits of Jupiter and Saturn and Hill's work on this topic is another major contribution to mathematical astronomy. Hill's most important work dealt
with the gravitational effects of the planets on the Moon's orbit so in this work he was considering the 4-body problem. Examples of papers he published in the Annals of Mathematics include: On the
lunar inequalities produced by the motion of the ecliptic (1884), Coplanar motion of two planets, one having a zero mass (1887), On differential equations with periodic integrals (1887) (these
differential equations are now called Hill's differential equation), On the interior constitution of the earth as respects density (1888), The secular perturbations of two planets moving in the same
plane; with application to Jupiter and Saturn (1890), On intermediate orbits (1893), Literal expression for the motion of the Moon's perigee (1894) and Application of Chebyshev's principle in the
projection of maps (1908). He also published On the extension of Delaunay's method in the lunar theory to the general problem of planetary motion in the Transactions of the American Mathematical
Society in 1900.
Although he must be considered a mathematician, his mathematics was entirely based on that necessary to solve his orbits problems. His new idea on how to approach the solution to the three body
problem involved solving the restricted problem. In this he assumed that the three bodies lie in the same plane, that two bodies orbited their common centre of mass, and that the third body orbited
the other two but it was assumed to be of negligible mass so did not affect the orbits of the two (assumed massive) bodies. He had no interest in any modern developments in other areas of mathematics
which did not relate to solving problems about orbits. In fact Hill worked on very similar problems to Adams. It is no coincidence that Adams was also led to investigate infinite determinants and he
did this work quite independently of Hill.
From 1898 until 1901 Hill lectured at Columbia University, but [4]:-
... characteristically returned the salary, writing that he did not need the money and that it bothered him to look after it.
Zund writes in [12]:-
Hill never married and spent most of his life on the family farm at West Nyack. His income was small, but so were his needs, and several of his brothers lived nearby. Although a virtual recluse,
he was happiest there among his large library, where he was free to pursue his research. He died at his home in West Nyack.
Hill was elected to the National Academy of Sciences (United States) in 1874. He was elected a Fellow of the Royal Society (1902) receiving its Copley Medal in 1909. He was elected to the Royal
Astronomical Society and they awarded him their Gold Medal in 1887; it was presented by Glaisher who was president of the Society. Hill was president of the American Mathematical Society from 1894 to
1896 delivering his presidential address on Remarks on the progress of celestial mechanics since the middle of the century. He was elected to the French Academy of Sciences who awarded him their
Damoiseau Prize in 1898, and to the Russian Academy of Sciences who presented him with their Schubert Prize (1905). The Astronomical Society of the Pacific awarded him their Bruce Medal in 1908. He
was also elected to the Royal Society of Edinburgh in 1908, the Royal Belgium Academy of Science (1909), the Norwegian Academy of Science (1910), the Royal Swedish Academy of Sciences (1913), and the
Accademia dei Lincei in Rome (1913).
Article by: J J O'Connor and E F Robertson
Honours awarded to George Hill
American Maths Society President 1895 - 1896
Fellow of the Royal Society 1902
LMS Honorary Member 1907
Fellow of the Royal Society of Edinburgh 1907
Royal Society Copley Medal 1909
Lunar features Crater Hill
|
How Do I: Convert a list of Lat/Long|X/Y coordinates into points in an image?
If I have an Excel list of lat/long coordinates (which I can easily convert into X/Y image coordinates), what is a good way of placing points or drawing lines in an image using those coordinates?
Obviously, I can do it manually, but I'm trying to construct a new process, and manual doesn't work for that. I am not sure what programs might be best to use, probably something that has a scripting
language. I can convert coords into script commands pretty easily. Then I would just run the script and it would draw the stuff onto the image, which I could bring into Photoshop or Fractal Terrains
as an overlay. Thanks
I'd use ImageMagick and the scripting language of your choice. Specifically: http://www.imagemagick.org/Usage/draw/#specifics or http://www.imagemagick.org/script/ma...r-graphics.php -Rob A>
Those links look very good. Now all I need to do is remember all my basic geometry. Seems having a degree in the stuff doesn't do any good if you never use it. Repped. Thx for the assist!
If your data is already in Excel, have you considered just drawing an XY scatter plot in Excel itself?... It also has a built-in scripting language that lets you create the chart and save the image.
I had thought of that briefly, but didn't know I could export it to an image and still be precise with the coords. I can generally make Excel jump through hoops, so that's a good option if possible.
I'll look into that further, thanks.
The community members location map is done exactly like this, and I did that in ImageMagick. If you have a world map in a Mercator style such that it's stretched to linearize the lat/long coords, or a
local map small enough such that the lat/long range is approximately linear, then all you need is the bounds in lat/long coords of the map and the image pixel size, and then you just use the ratios to
go from one coordinate system to the other. If the map is in some other projection and is large enough that increasing the lat/longs produces curved point sets, then you need to do some math. It
can get a bit tricky then, but libraries such as GDAL can do that calculation for you. If you have the X/Y points and you just need to plot them, well, ImageMagick does make that very easy indeed. I
would suggest using a scripting language with it to make it faster, such as Perl, where you can get PerlMagick, which is the interface of ImageMagick with it. Then it's just a loop and a few calls to
get it to plot it out. There are other interfaces too, such as Python etc. as well. You're kinda spoiled for choice with IM.
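A minimal Python sketch of that ratio method (the function name and bounds convention are my own; it assumes the linearized, Mercator-like case just described):

def latlon_to_pixel(lat, lon, lat_top, lon_left, lat_bottom, lon_right, width, height):
    # linear interpolation between the map's lat/long bounds and its pixel grid
    x = (lon - lon_left) / (lon_right - lon_left) * (width - 1)
    y = (lat_top - lat) / (lat_top - lat_bottom) * (height - 1)
    return int(round(x)), int(round(y))

# e.g. a 1024x512 equirectangular world map:
print(latlon_to_pixel(52.5, 13.4, 90.0, -180.0, -90.0, 180.0, 1024, 512))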
I'd probably write a little program to generate SVG or WKT, which are both fairly straightforward structured text formats. SVG is more readily converted to graphics, but WKT can be fed into GIS
Here are the results in an Excel graph plot. I wrote a VBA routine to try and build believable random tectonic plate boundaries. It's very slow to run, a couple of hours, but then this laptop is not
necessarily the world's most powerful computer. There are ways I can speed it up, and I still want to add directional and speed arrows for each of the plates, so it's not quite finished yet. This
would have been a helluva lot easier if I knew spherical trigonometry. I assume I covered it during getting that BS in Math, but I've never used it since and didn't remember any of it. Fortunately I
was able to eventually find everything I needed online once I figured out what terms to search for. If anyone else wants to delve into the topic, I suggest looking for an aviation navigation page -
all the formulas you could ever want are listed out right there, already put into a form perfect for doing calculations based on a planetary surface. I've actually had the general process in mind for
a month or more, but didn't know how to implement it until I started finding the right equations. Any critiques or suggestions from the geologically-knowledgeable crowd out there?
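For reference, here is a minimal Python sketch of the standard "destination point" formula found on those aviation navigation pages (spherical model; all angles in radians, with d the angular distance, i.e. surface distance divided by the sphere's radius):

import math

def destination(lat1, lon1, bearing, d):
    # point reached after travelling angular distance d along the initial bearing
    lat2 = math.asin(math.sin(lat1) * math.cos(d) +
                     math.cos(lat1) * math.sin(d) * math.cos(bearing))
    lon2 = lon1 + math.atan2(math.sin(bearing) * math.sin(d) * math.cos(lat1),
                             math.cos(d) - math.sin(lat1) * math.sin(lat2))
    return lat2, lon2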
Here's another one, with the centerpoints added, and a slightly faster algorithm for the edges (a bit over 3 hrs), although that apparently comes with the drawback of the edges having a lot more
smooth curves to it. And I should probably mention that I know the centerpoints aren't at the center of the final plates.
Would you like to explain what it is that you're doing to get these diagrams? I don't see the connection between the lat/long points and this diagram. I can see that they are supposed to be tectonic plates
but I don't understand where the edges are derived from or why it's taking 3 hrs to calculate or plot them. What are these calculations referring to?
|
Well, as I'm typing this at school, you would think that I'm slacking. But, in fact, we are on study leave and I have to wait 2 hours for an English revision lesson.
Friends are angels who lift our feet when our own wings have trouble remembering how to fly
|
Perimeter and Area of a Rounded Rectangle
Rounded rectangles have corners that are smoothed out with quarter-circle arcs. Rectangles that have rounded corners instead of square corners are pleasing to the eye and frequently used in
construction and design. You can find examples in home décor, furniture, landscaping designs, and other industrial applications.
Oval-shaped racetracks (rectangles with semicircles appended to opposite sides) are related to rounded rectangles.
The area and perimeter of a rounded rectangle depends on the overall width (W) and length (L) of the shape as well as the radius (r) of curvature at the corners. If you know the three measurements,
you can find the perimeter and area using the formulas below or the calculator on the left.
The area formula in terms of W, L, and r is
Area = LW - 4r² + πr²
= LW - (4 - π)r²
The perimeter formula in terms of W, L, and r is
Perimeter = 2L + 2W - 8r + 2πr
= 2L + 2W - (8 - 2π)r
A rounded square has a total width and length of 12 inches. The radius of curvature at the rounded corners is 2.5 inches. What is the area and perimeter of the shape?
In this example we have W = L = 12 and r = 2.5. The area and perimeter are calculated by
Area = 12*12 - (4 - π)(2.5)²
= 138.635 square inches
Perimeter = 2*12 + 2*12 - (8 - 2π)(2.5)
= 43.708 inches.
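The same computation as a small Python function (a sketch; the function name is mine):

import math

def rounded_rectangle(length, width, r):
    # the four rounded corners replace four r-by-r squares with quarter circles
    area = length * width - (4 - math.pi) * r**2
    perimeter = 2 * length + 2 * width - (8 - 2 * math.pi) * r
    return area, perimeter

print(rounded_rectangle(12, 12, 2.5))   # (138.635..., 43.708...)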
© Had2Know 2010
|
[FOM] Formalization Thesis
Bill Taylor W.Taylor at math.canterbury.ac.nz
Wed Jan 16 21:52:12 EST 2008
> Let FT(*) denote the Formalization Thesis that every specific mathematical
> statement can be faithfully expressed in some formal system.
I tend to agree with the emerging PoV that it is a tautology.
By and large, we do not (yet) *call* a thing math, unless it has
some formalization, and the Constructivist examples (e.g. all functions
on intervals are continuous) show that FOL with ZFC are insufficient.
My lingering reluctance to leave it there consists in the thought that
if there ARE counterexamples, they must surely involve *diagrams* in
some essential way. Diagrams have never sat all that comfortably with
the paradigm of linear logical thought and formalisms.
An early example was the feeling, quite strong in Cartesian to Eulerian
times (I gather), that it ought to be possible to put on some sort of
proper footing, the idea of infinitesimal angles that appear within
cusps, (imagine e.g. two circles of different sizes, internally tangent).
This never got off the ground, and though we might ORDER such "zero angles"
by local proper inclusion, the feeling perhaps lingers that
there "ought" still to be a metric for them.
A more modern example is from knot theory. I recall reading
(I think in Scientific American, Gardner's column) that J H Conway,
as a boy, managed to prove (to his own satisfaction) that two knots
separately tied in a loop of string, could not cancel each other out,
and mutually undo by continuous non-intersecting deformations alone.
His "proof" consisted in imagining the loop of string stretched
between two planar walls, with the two knots tied in it. Then imagine
a cylinder of cellophane cooking paper, fixed in circles on the walls
at each end, and then going along the string "engulfing" one of the knots,
but "circumnavigating" the other. (i.e. it fits the first like a bag but
the second like a sleeve.)
Indeed, this picture alone might be enough for a proof! But anyway;
if the knot pair is somehow untied, the cellophane must continuously
deform, so that a line in it, originally from the north point of one circle
to the north point of the other, will still become a line in the vertical
plane joining them. But what was knotted before, has now become unknotted!
Impossible, QED.
I thought this proof was brilliant, especially for a schoolboy.
However, every time I've shown it to other colleagues, they hem and haw
and complain that it isn't sufficiently formal, and isn't really a proof,
or not yet, anyway. My feeling is that they have perhaps been a little
"hypnotised by formalism", and that it is a pretty good proof.
However, the matter of proofs (rather than concepts) not being
formalisable has already been raised, and perhaps dismissed.
So maybe this is not the sort of thing Tim (and others) originally
had in mind. But then again, it has become unclear just what they
*did* have in mind! So I think maybe this sort of example is close
to what was being asked for.
Bill Taylor.
More information about the FOM mailing list
|
Introductory String Theory Seminar
Posted by Urs Schreiber
I have been asked by students if I would like to talk a little about introductory string theory. Since it is currently semester break, we decided to make an experiment (which is unusual for string
theory) and try to do an informal and unofficial seminar.
The background of the people attending the seminar is very inhomogeneous and a basic knowledge of special relativity and quantum mechanics is maybe the greatest common divisor. Therefore we'll start
with elementary stuff and will try to acquaint ourselves with the deeper mysteries of the universe (such as QFT, YM, GR, CFT, SUSY) as we go along.
If I were in my right mind I’d feel overwhelmed with the task of conducting such a seminar, but maybe at least I can be of help as a guide who has seen the inside of the labyrinth before. Hence I’d
like to stress that
I can only show you the door. You’re the one that has to walk through it.
In this spirit, the very first thing I can and should do is prepare a commented list of introductory literature. Here it is:
Actually, the task of writing such a list has already been done:
D. Marolf, Resource Letter NSST-1: The Nature and Status of String Theory
and I won’t be able and won’t try to do better than that. But I can provide a couple of convenient hyperlinks and personal comments.
First of all, everybody must know that there are two canonical textbooks, the old and the new testament. The old one is
M. Green & J. Schwarz & E. Witten, Superstring Theory Vol.1 , Vol. 2, Cambridge University Press (1987)
and the new one is
J. Polchinski, String Theory Vol. 1, Vol. 2, Cambridge University Press (1998).
Both are to some degree complementary. Polchinski is more modern (no branes in GSW) and more concise. GSW is more more old-fashioned and more elementary.
Those who want to read textbooks should probably start with the first couple of chapters of GSW, first volume, and then begin reading volume 1 of Polchinski in parallel - and then see what happens to
your neurons and decide on that basis how to proceed further.
There are also some non-canonical textbooks:
B. Hatfield, Quantum Field Theory of Point Particles and Strings, Perseus Publishing (1992)
(This one is very pedagogical but only covers very little string theory.)
B. Zwiebach, A First Course in String Theory, Cambridge University Press (2004)
M. Kaku, Introduction to Superstrings and M-Theory, Springer (1998)
M. Kaku, Strings, Conformal Fields, and M-Theory, Springer (2000) .
(I haven’t read these last three books myself.)
More important for our purposes, there are a large number of very good lecture notes available online at the so called arXiv. This is a preprint server which is a way to make research papers
publicly available that have not yet gone through the full process of peer-reviewed publication in print journals.
Of interest for this seminar are mostly the sections hep-th (theoretical high energy physics) and maybe gr-qc (general relativity and quantum cosmology) of the arXiv archive.
Most notably in the fields covered by hep-th, there has been an ongoing process away from an emphasis of print journals towards an emphasis of online communication, and except for articles dating
from before 1992 most every publication in high energy physics that one will ever want to see can be found here, online and for free!
In this context one should also mention the SPIRES HEP Literature Database that reaches all the way back to 1974 - which is incidentally the year in which it was realized that string theory is a
theory of quantum gravity.
The most easily accessible introductory lecture on string theory that I know is
R. Szabo, BUSSTEPP Lectures on String Theory (2002)
In
J. Schwarz, Introduction to Superstring Theory (2000)
a brief elementary introduction to the basic ideas of string theory aimed at
experimentalists is given.
Another nice introduction is
T. Mohaupt, Introduction to String Theory (2002) .
The notes by E. Kiritsis
E. Kiritsis, Introduction to Superstring Theory (1998)
are a thorough introduction to the string with some emphasis on conformal field theory and a bit on branes and dualities.
I always find the lecture notes by M. Kreuzer extremely valuable as a second
reading, i.e. when I already understand the basics. See
M. Kreuzer, Einführung in die Superstring-Theorie (2001)
for the bosonic string and
M. Kreuzer, Einführung in die Superstring-Theorie II (2001)
for conformal field theory and a (tiny) little bit on the superstring. (The
text is in English, only the title is German.)
More advanced introductions are
E. Alvarez & P. Meessen, String Primer (2001)
L. Dolan TASI Lectures on Perturbative String Theory and Ramond-Ramond Flux (2002)
There is much more available, but this should give a first idea. The above list is basically taken from this post to the newsgroup sci.physics.research, which can be a very valuable resource and place
to ask and answer questions. Before participating please read this and this. Maybe there will be a similar newsgroup concerned exclusively with string theory soon. Of course, everybody is also
invited to post any questions and comments to the String Coffee Table. See here for some tips and tricks.
If I find the time I may expand the above list in the future. Suggestions are very welcome.
Last but not least, I cannot refrain from pointing to the fun little Java applet which visualizes the classical motion of a string.
This is by Igor Nikitin and the theory behind it is explained in
I. Nikitin, Introduction to String Theory.
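For those who would like to experiment themselves, here is a minimal numpy sketch of such classical string motion (my own illustration, not Nikitin's code): in conformal gauge the embedding fields obey the wave equation, which a simple leapfrog scheme integrates; the Virasoro constraints are ignored here.

import numpy as np

N, dt, steps = 100, 0.01, 1000
ds = 2.0 * np.pi / N
sigma = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
X = np.stack([np.cos(sigma), np.sin(sigma)], axis=1)  # initial circular loop
V = np.zeros_like(X)                                  # initially at rest

for _ in range(steps):
    # discrete second sigma-derivative on the closed loop: X_tt = X_ss
    Xss = (np.roll(X, 1, axis=0) - 2.0 * X + np.roll(X, -1, axis=0)) / ds**2
    V += dt * Xss
    X += dt * V
# X now holds a snapshot of the oscillating loop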
So much for now. Summaries, links and background information concerning our Seminar meetings will be given in the comments.
Posted at March 11, 2004 12:29 PM UTC
Meeting 1: Nambu-Goto, Polyakov and back
For the convenience of those who had to decipher my handwriting on the blackboard while keeping track of my signs (which tend to pick up a stochastic dynamics) as well as of the number of dimensions
I was talking about, here is a list of references where the material that I presented can be found in print.
(At the end there is also a little exercise. Please post proposed solutions here to the Coffee Table, so that everybody can benefit.)
First I made some historical remarks concerning the inception and development of what today is called ‘string theory’ or maybe ‘M-theory’. I didn’t even go to the level of detail found in R. Szabo’s
lectures pp. 4-9. More on this can be found in GSW I, section 1 and a much shorter equivalent is section 1.1 of Polchinski. Since giving a reasonable glimpse of the Big Picture is beyond what I
should try when standing with my back to the blackboard, I won’t say much more about this until maybe much later.
Instead there are some elementary but interesting calculations that one can get one’s hands on in order to get started:
First of all one should recall some basic facts about the relativistic point particle, like what its square-root form of the action looks like (Nambu-Goto-like action) and what the corresponding squared
form looks like (Polyakov-like action). This can be found for instance on pp. 293-295 of this text.
There is a (maybe surprisingly) obvious and straightforward generalization of this to the case where the object under consideration is not 0 but $p$-dimensional. One can write down the general
Nambu-Goto-like action for $p$-branes and find the associated Polyakov-like action. For instance by varying the latter with respect to the auxiliary metric on the world-volume one can check that both
are classically equivalent.
This is demonstrated in detail on pp. 171-179 of the above mentioned text.
Anyone who feels like he wants to read a more pedagogical discussion of these issues is invited to have a look at this.
We have also talked a lot about the basics of gauge theory after the seminar. I hope to come to that later, but if anybody feels like reading more on this he or she might want to have a look at
chapter 20 of the very recommendable book
T. Frankel, The Geometry of Physics Cambridge (1997)
or of course pick up a book on field theory, like
M. Peskin & D. Schroeder, An Introduction to Quantum Field Theory,
where it is chapter 15.
That wouldn’t hurt, because my evil plan is to eventually discuss the IIB Matrix Model in the seminar, which is a surprisingly elementary way to have a look into the
{thinmathspace}{0ex}}\phantom{\rule{thinmathspace}{0ex}}\phantom{\rule{thinmathspace}{0ex}}\phantom{\rule{thinmathspace}{0ex}}\phantom{\rule{thinmathspace}{0ex}}$Total Perspective Vortex.
But, as I said, we’ll come to that later.
Finally here is a little exercise concerning the material discussed in the first meeting:
I had demonstrated how the mass shell constraint
(1)$p_\mu p^\mu = -m^2$
follows from the Nambu-Goto-like action of the point particle.
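For reference, the corresponding one-line computation (a sketch in the above conventions): the canonical momentum following from $S=-m\int\sqrt{-\dot{x}^\mu\dot{x}_\mu}\,d\tau$ is
$p_\mu = \frac{\partial L}{\partial \dot{x}^\mu} = \frac{m\,\dot{x}_\mu}{\sqrt{-\dot{x}^\nu\dot{x}_\nu}}\,,$
so that contracting it with itself immediately gives $p_\mu p^\mu = -m^2$, independently of the parameterization.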
1) Derive the analogous constraint for the Nambu-Goto action of the string. Interpret it physically.
2) The action of the point particle coupled to an electromagnetic field with vector potential $A_\mu$ is
(2)$S=-m\int\left(\sqrt{-\dot{x}^\mu \dot{x}_\mu}+A_\mu(x)\,\dot{x}^\mu\right)d\tau\,.$
How does the mass-shell constraint look now?
3) The generalization of the above action to the string is obviously
(3)$S=-T\int d^2\sigma\left(\sqrt{-h}+B_{\mu\nu}\,\epsilon^{\alpha\beta}\,(\partial_\alpha x^\mu)(\partial_\beta x^\nu)\right)$
where $\alpha,\beta\in\{0,1\}$ are the indices on the worldsheet, $h_{\alpha\beta}=(\partial_\alpha x^\mu)(\partial_\beta x^\nu)\,g_{\mu\nu}$ is the induced metric on the worldsheet and $h=\det h_{\alpha\beta}$ is its determinant. $\epsilon^{\alpha\beta}$ is the antisymmetric symbol and $B_{\mu\nu}=-B_{\nu\mu}$ is an antisymmetric tensor (i.e. a 2-form) on spacetime.
Derive the mass-shell constraint for the string for non-vanishing $B_{\mu\nu}$. Interpret the result by comparison with the point particle case.
The next meeting will be
Friday, 19 Mar 2004, 15:00, in S05 V07 E04.
(We cannot meet next Wednesday because I’ll be in Ulm).
Posted by: Urs Schreiber on March 11, 2004 5:16 PM
Re: Meeting 1: Nambu-Goto, Polyakov and back
Hi Urs,
When I saw you stating you were giving a “String Seminar”, my first thought went to Baez’s “Quantum Gravity Seminar” where he kept us updated on what was discussed in the seminar. It was almost as if
we were there ourselves :) Now, reality has sunk in and I see you meant a usual seminar. Bummer! :) Do you think you might be able to get one of your students to volunteer to write up an informal
expository overview of the lecture. It would be great if they could mimic something like what Toby Bartels and Baez did for the QG Seminar. Just a thought! :)
On a different note…
One big deterrant for me getting very far in the string literature is the insane notation. I feel like the index-ridden notation is something best left in the 20th century :)
If you look at Maxwell’s original expression for the equations, they extend over several pages. With the use of vector calculus notation they were reduced to
(1)$\nabla\times E+\frac{\partial B}{\partial t}=0,\quad \nabla\cdot B=0,$
(2)$\nabla\times H-\frac{\partial D}{\partial t}=J,\quad \nabla\cdot D=\varrho$
with constitutive relations
(3)$D=\epsilon E,\quad B=\mu H,$
but we soon learned that this was STILL coordinate dependent because of the appearance of $t$ and the realization that spacetime was a 4d manifold. Finally we end up with
(4)$dF=0,\quad d\star F=J.$
Voila! When you express the equations in their natural coordinate independent form, then a great simplification occurs. Not only that, we get some really nice geometrical pictures arising. The first
equation says simply that the flux of F through any closed surface in spacetime is zero. This manifests itself as the first line of equations using the vector calculus notation. This beautiful
interpretation is hardly obvious just by staring at the vector calculus versions (and hopeless looking at the originals of Maxwell :)).
Now, when I look at the string literature I can’t help but think that things are in as sad a shape as the original form of Maxwell’s equations. It is a mess of indices and different fields. Even
worse, the indices are a contiuum :) My question is, in the list of literature you provided, is there anything that remotely resembles the string analog of
Could it be that the various fields in the string expressions (I don’t have a particular example in mind) are really just different components of a single field in analogy to how E and B are just
different components of F? I guess I am looking for an index free formulation of string theory. Does such a thing exist?
Best of luck with your seminar!
Posted by: Eric on March 12, 2004 2:43 PM
Re: Meeting 1: Nambu-Goto, Polyakov and back
Hi Eric -
you wrote:
my first thought went to Baez’s ‘Quantum Gravity Seminar’
Gulp. My aim is orders of magnitude more humble. I can only run this with a relatively low task priority and really have to learn many things myself.
Do you think you might be able to get one of your students to volunteer to write up an informal expository overview of the lecture.
Yes, I have thought about that, too. For our first meeting I have tried to provide the equivalent of a write up of what we did in my previous comment by pointing to the page numbers of texts from
which I took the material. (For instance all the material necessary to solve the exercise is given… :-)
I would like to make the content of our meetings available here in general, not the least because not all of the local participants will be able to attend every week. But there is a limit to the
amount of energy and time that I have, so it would be really good if one of the participants would volunteer to supply his notes to the Coffee Table. I’ll see if I find somebody.
One big deterrent for me getting very far in the string literature is the insane notation.
That’s too bad. Once you are used to it it does not seem that insane anymore.
Of course you can rewrite most anything that you encounter in a coordinate independent way. That usually involves making further definitions. For instance, as you know, you can define the worldline
volume form
(1)$\mathrm{vol}=\sqrt{-h}\,d\tau$
and the pull-back of $A_\mu\,dx^\mu$ to the worldline
(2)${}^*\!A := A_\mu\,\frac{dx^\mu}{d\tau}\,d\tau$
and then rewrite the first equation in the above exercise as
(3)$S=-m\int\left(\mathrm{vol}+{}^*\!A\right)\,.$
You can then invent notations that allow you to vary this action without ever writing down an index. But I think this will not make things easier, necessarily. But if you find a nice way, let me
know! :-)
Now, when I look at the string literature I can’t help but think that things are in as sad a shape as the original form of Maxwell’s equations.
I don’t think that that’s fair. The notation usually used is more like the analogue of
(4)$F_{[\mu\nu,\lambda]}=0$
(5)$F^{\mu\nu}{}_{;\nu}=0\,,$
which isn’t too bad. But maybe you disagree.
Here is a suggestion: You are assigned the special task of providing us with index-free notation of all the formulas that show up in the seminar! ;-)
But seriously, many thanks for your interest and I would very much enjoy if you find the time to follow the seminar here at the Coffee Table and provide us with comments and feedback.
Posted by: Urs Schreiber on March 12, 2004 3:36 PM
Re: Meeting 1: Nambu-Goto, Polyakov and back
Hi Urs,
Here is a suggestion: You are assigned the special task of providing us with index-free notation of all the formulas that show up in the seminar! ;-)
Not that I have a lot of free time on my hands, but I don’t think it is a bad idea to really try this :)
I agree with your suggestion for $A^*$, but I'm not sure about $\mathrm{vol}=\sqrt{-h}\,d\tau$. What you really have is a metric $g$ on $M$ and you pull this metric back to the string, $g^*$, and use this metric to construct a volume form on the string. I'll have to think about a nice coordinate-free/index-free notation for doing this.
If you could give me a more specific section to try to convert, I could work on it. For example, where is a nice exposition of the Nambu-Goto and Polyakov actions? I could start there.
Best regards,
Posted by: Eric on March 12, 2004 5:03 PM
Re: Meeting 1: Nambu-Goto, Polyakov and back
Thanks for correcting me on that volume form! Yes, for those trying to understand this stuff for the first time now it is of utter importance to understand that in my last comment I have been too
cavalier with the notation.
As Eric rightly points out, in the Nambu-Goto action the metric that appears is that induced from the background to the worldvolume, i.e. the pull-back of the background metric. It is the Polyakov
action where the metric on the world-volume is a priori independent of the background metric.
For example, where is a nice exposition of the Nambu-Goto and Polyakov actions? I could start there.
See for instance pp. 12-15 of Szabo’s lectures (which I’ll probably stick to most of the time). More details are given on pp. 173 of my sqm.pdf.
Posted by: Urs Schreiber on March 12, 2004 5:51 PM
Re: Meeting 1: Nambu-Goto, Polyakov and back
Hi Urs,
It is the Polyakov action where the metric on the world-volume is a priori independent of the background metric.
I’m guessing that this is probably an important point, but I don’t yet fully appreciate it. The metrics must be related somehow, right? Sorry, I know this is probably basic (and I probably just read
it in one of your spr conversations. Senility *sigh* :))
Posted by: Eric on March 12, 2004 6:50 PM
Re: Meeting 1: Nambu-Goto, Polyakov and back
Yes, that’s an important point. The general (p-dimensional) Polyakov and Nambu-Goto action are classically equivalent. You can prove this for instance by varying the Polyakov action with respect to
the auxiliary worldsheet metric and demanding that the result must vanish. The equation that you get says that the auxiliary metric of the Polyakov action must be equal to (for $p=1$ it must only be
conformally related to) the induced metric. Inserting this result in the Polyakov action yields the Nambu-Goto action.
In order for this to work for arbitrary $p$-branes one has to take care to include the right ‘cosmological’ term on the brane in the Polyakov action. See page 173 of sqm.pdf.
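Schematically (a sketch in the conventions used above), the variation sets the worldsheet energy-momentum tensor to zero,
$T_{\alpha\beta} = \partial_\alpha X\cdot\partial_\beta X - \tfrac{1}{2}\,h_{\alpha\beta}\,h^{\gamma\delta}\,\partial_\gamma X\cdot\partial_\delta X = 0\,,$
which says precisely that $h_{\alpha\beta}$ is conformally proportional to the induced metric $\partial_\alpha X\cdot\partial_\beta X$.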
Posted by: Urs Schreiber on March 13, 2004 12:42 PM
Re: Meeting 1: Nambu-Goto, Polyakov and back
Hi Urs,
It seems you are having a good time in Ulm. I wish I was there too. An Einstein fest sounds like fun :)
I have to admit I haven’t spent more than 30 minutes thinking about this yet, but it is not immediately obvious how to write the Polyakov action in a coordinate-free/ index-free manner. Come to think
of it, I haven’t even made much progress with the Nambu-Goto action, but since that is just the volume form on the brane, I’m not too worried about it. Maybe I should be.
I think this is a nice exercise. Any hints? :)
Anyone else care to take a stab at it? I’m basically looking for the analog of the EM action
(1)$S=\int_M F\wedge\star F$
for the Nambu-Goto and Polyakov actions that does not involve any explicit coordinate indices.
I may be going out on a limb, but the difference between Nambu-Goto and Polyakov reminds me a little bit about the difference between Yang-Mills and BF-theory. In BF theory (I’m sure to get this
wrong, but it is something like this), you have an action
(2)$S=\int_M F\wedge B,$
where we want some other input whose equations of motion give $B=\star F$. Does that make any sense? :)
In the Polyakov action, the induced metric seems to be analogous to $B$ (if you know what I mean).
Posted by: Eric on March 15, 2004 5:39 PM
Re: Meeting 1: Nambu-Goto, Polyakov and back
Hi Eric -
currently I cannot see the connection that you make concerning gauge theory and BF theory.
I’d rather suggest that you look at the Polyakov action as the action of 1+1 dimensional gravity coupled to scalar fields.
I think the index free version of the derivative terms can’t do better than
(1)$g^{\alpha\beta}\,\partial_\alpha X\,\partial_\beta X=(\nabla X)\cdot(\nabla X)\,.$
For a start, you can ignore the spacetime index $\mu$ on ${X}^{\mu }$ completely and only consider a single scalar field on the worldsheet. I think then it is obvious what to write down.
(BTW, I have found some people who are very interested in our work on discrete differential geometry. See here for the details :-)
Posted by: Urs Schreiber on March 15, 2004 9:45 PM | Permalink | PGP Sig | Reply to this
Re: Meeting 1: Nambu-Goto, Polyakov and back
Hi Urs,
currently I cannot see the connection that you make concerning gauge theory and BF theory.
I’m still not sure it is relevant, but I was thinking of something along the lines of this:
To paraphrase, you begin with
(1)$S={\int }_{M}\mathrm{tr}\left(B\wedge F+{g}^{2}B\wedge \star B\right),$
which is just the usual BF action when $g\to 0$. Varying with respect to $B$ gives
(2)$F=-2g\star B.$
Varying with respect to $A$ gives
(3)$d_{A}B=0\,,$
which combined with the first equation gives
(4)${d}_{A}\star F=0.$
This is what you get from varying the Yang-Mills action
(5)$S={\int }_{M}\mathrm{tr}\left(F\wedge \star F\right)$
with respect to $A$. Quoting Baez,
Sometimes this trick is called “BF-YM theory” - it’s a way of thinking of Yang-Mills as a perturbation of BF theory, which reduces back to BF theory as g -> 0. (It’s hard to see this happening at
the classical level, where the Yang-Mills equations don’t depend on g. It’s better to go to the quantum theory - see Witten’s paper on 2d gauge theories for that.)
About the Polyakov action…
I think the index-free version of the derivative terms can't do better than
(6)$g^{\alpha\beta}\,(\partial_{\alpha}X)\cdot(\partial_{\beta}X)=(\nabla X)\cdot(\nabla X)\,.$
Hmm… wouldn't $\nabla X$ be a vector-valued 1-form, or something? If so, it is not obvious you wouldn't get extra terms
(7)$\nabla X=dx^{\mu}\otimes\nabla_{\mu}X\,,$
which expands into a mess of connection coefficients. Then again, it is conceivable that metric compatibility will save the day, but I’m too lazy to check right now :)
My inability to write the Polyakov action in a nice coordinate-free/index-free notation is beginning to trouble me. Either there is something wrong with it, or we are lacking the language to express
it more clearly. I’m not yet able to see the “meaning” of the expression.
Of course, that is just my own ignorance speaking :)
I’ll keep thinking about this. There must be a better way to see it.
Best regards,
Posted by: Eric on March 16, 2004 3:29 AM | Permalink | Reply to this
Re: Meeting 1: Nambu-Goto, Polyakov and back
Hi Eric -
maybe it is possible to express the worldsheet action using BF theory. I’d have to think about that a little more.
But let me say how I think we should write the Polyakov action in index-free and coordinate-free notation:
First assume there is a single scalar field $X$ on the worldsheet. Let $d$ and $\star$ be the exterior derivative and the Hodge star on the worldsheet with respect to the auxiliary worldsheet metric.
Then the Polyakov action is simply proportional to
(1)$S\sim \int \left(dX\right)\wedge \star \left(dX\right)\phantom{\rule{thinmathspace}{0ex}}.$
But really there are $D$ such scalars, which carry spacetime indices $\mu$. So really the string is described by
(2)$S\sim \int g_{\mu\nu}(X)\,(dX^{\mu})\wedge\star(dX^{\nu})\,,$
where $g_{\mu\nu}$ is the metric on target space, which is a priori independent of the worldsheet metric.
Of course now there are indices again. To remove these we need to introduce new notation. Maybe we should set
(3)$\langle v,w\rangle := v^{\mu}w^{\nu}g_{\mu\nu}\,.$
Then the full Polyakov action, by a slight abuse of notation, could be written as
(4)$S\sim \int \langle (dX),\wedge\star(dX)\rangle\,,$
Hm, no, this comma followed by a wedge looks a little awkward. Can you think of something better?
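One candidate fix (offered tentatively, not something settled in the thread): let the bracket pair only the target-space indices while the form parts get wedged, i.e. for vector-valued forms $\alpha=\alpha^{\mu}\otimes e_{\mu}$ and $\beta=\beta^{\nu}\otimes e_{\nu}$ set
$$\langle\alpha\wedge\beta\rangle := g_{\mu\nu}\,\alpha^{\mu}\wedge\beta^{\nu}\,,\qquad S\sim\int\langle dX\wedge\star\,dX\rangle\,,$$
which carries the same information without the stray comma.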
Posted by: Urs Schreiber on March 16, 2004 9:28 AM | Permalink | PGP Sig | Reply to this
Re: Meeting 1: Nambu-Goto, Polyakov and back
Hi Urs,
maybe it is possible to express the worldsheet action using BF theory. I’d have to think about that a little more.
I don’t mean to imply that Polyakov can be derived from BF theory (although that would be neat and a part of me thinks there is some truth to it). Rather, my gut (which has been wrong before)
suggests that there is an analogy: Yang-Mills is to Nambu-Goto as BF-YM is to Polyakov :) After this post, I’m thinking the analogy is actually there.
Let $A^{p}$ and $A^{q}$ be $p$- and $q$-forms, respectively. Same with $B^{p}$ and $B^{q}$. Define an inner product on $\Omega^{p}\otimes\Omega^{q}$ via
(1)$\left({A}^{p}\otimes {A}^{q},{B}^{p}\otimes {B}^{q}\right):=\left({A}^{p},{B}^{p}\right)\left({A}^{q},{B}^{q}\right)$
where $\left({A}^{p},{B}^{p}\right)$ is the usual inner product of forms.
Now let
(2)$h={h}_{\kappa \lambda }{\mathrm{dx}}^{\kappa }\otimes {\mathrm{dx}}^{\lambda }$
be the worldsheet metric and
(3)$g=g_{\mu\nu}\,dx^{\mu}\otimes dx^{\nu}$
be the target space metric pulled back to the worldsheet. Then (on the worldsheet) we have
(4)$(g,h)=g_{\kappa\lambda}h_{\mu\nu}\,(dx^{\kappa},dx^{\mu})(dx^{\lambda},dx^{\nu})=g_{\kappa\lambda}h_{\mu\nu}\,h^{\kappa\mu}h^{\lambda\nu}=h_{\mu\nu}g^{\mu\nu}.$
Unless I’m mistaken, the Polyakov action may then be written in the suggestive form
(5)${S}_{P}={\int }_{M}\left(g,h\right)vol.$
Note that you have $\left(h,h\right)\sim 1$ so that the Nambu-Goto action is
(6)${S}_{\mathrm{NG}}\sim {\int }_{M}\left(h,h\right)vol.$
In this form, the Nambu-Goto action looks very much like a Yang-Mills action
(7)${S}_{\mathrm{YM}}={\int }_{M}\left(F,F\right)vol.$
I can massage the BF action to look something like the Polyakov action. Let $B=\star G$ for some 2-form $G$, then the BF action looks like
(8)${S}_{\mathrm{BF}}={\int }_{M}F\wedge B={\int }_{M}F\wedge \star G={\int }_{M}\left(F,G\right)vol,$
which looks a lot like the Polyakov action above.
How does this look?
Hmm… just before hitting “Post” I had a thought motivated by Baez’ spr post I referenced above. What if we take what I suggested as the “index-free” version of the Polyakov action above and modify it
(9)$S{\prime }_{P}={\int }_{M}\left[\left(g,h\right)-\frac{1}{2}\left(g,g\right)\right]vol$
then varying ${S}_{P}^{\prime }$ with respect to $g$ results in
(10)$\delta S{\prime }_{P}={\int }_{M}\left[\left(\delta g,h\right)-\left(\delta g,g\right)\right]\mathrm{vol}={\int }_{M}\left(\delta g,h-g\right)vol$
so we get $g=h$. Neat! Maybe THIS is what I really want for the index-free Polyakov action.
Now what do you think? :)
PS: In general, the term $\frac{1}{2}\left(g,g\right)$ looks like a cosmological constant.
Posted by: Eric on March 16, 2004 3:22 PM | Permalink | Reply to this
Re: Meeting 1: Nambu-Goto, Polyakov and back
I think that previous post (aside from errors) is pretty neat so maybe I’ll make the analogy between the index-free Polyakov action and the BF-YM action more explicit. If I begin with the BF-YM
action, but with the replacement $B=\star G$ (so maybe I should call it FG theory :)), we get
(1)${S}_{\mathrm{FG}-\mathrm{YM}}={\int }_{M}\left[\left(F,G\right)-\frac{1}{2}\left(G,G\right)\right]vol.$
If we vary with respect to $G$, in complete analogy with the index-free Polyakov action, we get
(2)$\delta {S}_{\mathrm{FG}-\mathrm{YM}}={\int }_{M}\left[\left(F,\delta G\right)-\left(G,\delta G\right)\right]vol={\int }_{M}\left(F-G,\delta G\right)vol$
so that we have $F=G$. Varying with respect to $A$, we get
(3)$d_{A}\star G=0\,,$
which combined with the first equation gives
(4)$d_{A}\star F=0\,.$
Voila! Yang-Mills! :)
I think the formal analogy between the way Polyakov relates to Nambu-Goto and the way FG-YM relates to YM is clear. In fact, I think the relation between the FG-YM action and the Polyakov action is pretty clear (at least formally). Not to mention the relation between the Nambu-Goto and the Yang-Mills actions. Note the key words "I think" :)
Posted by: Eric on March 16, 2004 4:23 PM | Permalink | Reply to this
Re: Meeting 1: Nambu-Goto, Polyakov and back
Hi Eric!
(1)${S}_{\mathrm{P}}={\int }_{M}\left(g,h\right)\mathrm{vol}$
(2)${S}_{\mathrm{NG}}={\int }_{M}\left(h,h\right)\mathrm{vol}$
is much better than what I had proposed, and I agree that it does look suggestive.
However, we should carefully distinguish the roles played by $h$ in these expressions. In $S_{\mathrm{P}}$ the term $h$ is the auxiliary worldsheet metric, and it is understood that $(\cdot,\cdot)$ contracts indices using this auxiliary metric and that $\mathrm{vol}$ is constructed from this auxiliary metric.
But in the expression for $S_{\mathrm{NG}}$ as defined above, if we assume $(h,h)\sim 1$, then $\mathrm{vol}$ must be the volume form of the induced metric, which in the previous equation carried the name $g$! If we still want $\mathrm{vol}$ to be associated with the letter $h$, then now $h$ must be identified with the induced metric, in contrast to what we did above.
So while I think that each of the expressions given above is a valid notation for the respective action, the symbols do not mean the same thing in these expressions. Agreed?
You write:
In general, the term $\frac{1}{2}\left(g,g\right)$ looks like a cosmological constant.
Hm, if this term is supposed to be the contraction of two copies of the induced metric with two copies of the auxiliary metric then I don’t see how it looks like a cosmological constant. A
cosmological constant rather gives a term $\sim {\int }_{M}\mathrm{vol}$. But maybe I didn’t fully understand what you have in mind.
Posted by: Urs Schreiber on March 16, 2004 5:12 PM | Permalink | PGP Sig | Reply to this
Re: Meeting 1: Nambu-Goto, Polyakov and back
Hi Urs! :)
So while I think that each of the expressions given above is a valid notation for the respective action, the symbols do not mean the same thing in these expressions. Agreed?
I dunno. To me, $vol$ MUST be the volume form corresponding to the metric on the (sub)manifold. In the case of the Polyakov action, this metric is apparently called the auxiliary metric $h$ and is
assumed to be distinct from the induced metric $g$ pulled back from the target space. I hope I got that right :) However, don't we get as an equation of motion
(1)$h=g$
? So I don't think I really understand the difference that you point out. In the end, we get $h=g$, right? Then the $vol$ would also be the same. I'm just confused as usual. Please bear with me :)
Hm, if this term is supposed to be the contraction of two copies of the induced metric with two copies of the auxiliary metric then I don’t see how it looks like a cosmological constant. A
cosmological constant rather gives a term $\sim {\int }_{M}vol$. But maybe I didn’t fully understand what you have in mind.
It is more likely that I am confused :) I thought that $(g,g)\sim 1$, which would mean that
(2)${\int }_{M}\frac{1}{2}\left(g,g\right)vol\sim {\int }_{M}vol.$
I’m obviously still grappling with the difference between the auxiliary and induced metrics.
Posted by: Eric on March 16, 2004 5:32 PM | Permalink | Reply to this
Re: Meeting 1: Nambu-Goto, Polyakov and back
Hi Eric -
if we set $h=g$ throughout then there would be no difference between Polyakov and Nambu-Goto and we could just stick to writing either one letter or the other.
The important point is that one of the equations of motions for the Polyakov action says that $h=g$. But in order to have a well-defined action it must be defined even off-shell, i.e. when the
equations of motion do not hold.
Note that $h_{\alpha\beta}$ is an autonomous metric on the Polyakov worldsheet which is treated exactly like the spacetime metric of GR is. But $g_{\alpha\beta}$ is a compound object which involves the scalar fields $X$ on the worldsheet
(1)$g_{\alpha\beta}=(\partial_{\alpha}X^{\mu})(\partial_{\beta}X^{\nu})\,g_{\mu\nu}(X)\,,$
where $\alpha,\beta\in\{0,1\}$ vary on the worldsheet while $\mu,\nu\in\{0,1,\cdots,D-1\}$ vary on spacetime, and $g_{\mu\nu}(X)$ is the metric on spacetime evaluated at the point $\vec{X}=[X^{0},X^{1},\cdots,X^{D-1}]$.
So a priori ${h}_{\alpha \beta }$ and ${g}_{\alpha \beta }$ are completely independent. When we write down actions (before considering their equations of motion) these actions have to be well defined
expressions in terms of these objects.
See, when you write
I thought that $\left(g,g\right)\sim 1$
you are apparently simply setting $h=g$ and using the relation $(h,h)\sim 1$ which was postulated before and essentially defines what $(\cdot,\cdot)$ is supposed to mean.
But if we assume $h=g$ everywhere then we are really only dealing with the Nambu-Goto action, because for $h=g$ Polyakov reduces to Nambu-Goto. So there would be no point in having a separate
Polyakov form of the action.
But actually there is good reason to have an a priori independent worldsheet metric. So we may not set $h=g$ throughout.
Posted by: Urs Schreiber on March 16, 2004 7:08 PM | Permalink | PGP Sig | Reply to this
Re: Meeting 1: Nambu-Goto, Polyakov and back
Hi Urs,
See, when you write
I thought that $\left(g,g\right)\sim 1$
you are apparently simply setting $h=g$ and using the relation $(h,h)\sim 1$ which was postulated before and essentially defines what $(\cdot,\cdot)$ is supposed to mean.
Actually, I am even more confused than that :) I didn’t really mean to set $h=g$ in the beginning. I understand that would simply reduce to Nambu-Goto. I was making the mistake that I thought
(1)$(g,g)=g_{\mu\nu}g^{\mu\nu}=\delta^{\mu}_{\mu}\,,$
but actually this is not true (I guess). By habit, I was thinking of $g^{\mu\nu}$ as the inverse of $g_{\mu\nu}$, but actually
(2)$g^{\mu\nu}=h^{\mu\kappa}h^{\nu\lambda}g_{\kappa\lambda}\,.$
Oops! :)
I am making progress. Thanks!
Also, I enjoy very much your reports of the conference :)
Gotta run!
Posted by: Eric on March 17, 2004 12:12 AM | Permalink | Reply to this
Re: Meeting 1: Nambu-Goto, Polyakov and back
Hi Urs,
I’ve been sneaking out and reading Polchinski to try to get a better feel for this stuff. As usual, I can’t seem to content myself with what is written and have to explore alternative ideas :)
In Polchinski, for the point particle he has the action
(1)${S}_{\mathrm{pp}}=-m{\int }_{M}vol,$
where
(2)$vol=\sqrt{-\mathrm{det}(\gamma)}\,d\tau=\sqrt{-\gamma_{\tau\tau}}\,d\tau$
and
(3)$\gamma=\gamma_{\tau\tau}\,d\tau\otimes d\tau$
is the auxiliary metric intrinsic to the particle's worldline. For future reference, I might as well write down
(4)$\delta vol=\left(\frac{{\gamma }_{\tau \tau }^{-1}}{2}vol\right)\delta {\gamma }_{\tau \tau }.$
Then he writes down (what is probably supposed to relate to Polyakov later on)
(5)$S{\prime }_{\mathrm{pp}}=\frac{1}{2}{\int }_{M}\left[\left(\gamma ,h\right)-m\right]vol=\frac{1}{2}{\int }_{M}\left({\gamma }_{\tau \tau }^{-1}{h}_{\tau \tau }-m\right)vol.$
Varying this with respect to ${\gamma }_{\tau \tau }$ results in the equation of motion
(6)${h}_{\tau \tau }=-m{\gamma }_{\tau \tau }.$
Then, plugging this back into the action, we get
(7)${S{\prime }_{\mathrm{pp}}\mid }_{\text{"on shell"}}={S}_{\mathrm{pp}}.$
This is nice except when I compare this $S{\prime }_{\mathrm{pp}}$ with what I was discussing above about BF-YM-type actions, this doesn’t seem to fit. So of course I tried the obvious alternative
(8)$S{″}_{\mathrm{pp}}=-{\int }_{M}\left[\left(\gamma ,h\right)-\frac{m}{2}\left(h,h\right)\right]vol,$
where unlike Polyakov, I treat $\gamma$ AND $h$ as degrees of freedom that generate equations of motion. Of course, varying with respect to $h$ gives directly
(9)$\gamma =mh.$
This is not surprising because this is essentially how $S''_{\mathrm{pp}}$ was defined, i.e. so that we would have $\gamma=mh$. The somewhat surprising thing to me was that if we vary $S''_{\mathrm{pp}}$ with respect to $\gamma$ WITHOUT plugging in $\gamma=mh$, then I get (with a little more work) the equation of motion
(10)$\gamma =mh$
AGAIN! :) In other words, varying $S{″}_{\mathrm{pp}}$ with respect to $h$ while holding $\gamma$ fixed gives the SAME equation of motion as varying $S{″}_{\mathrm{pp}}$ with respect to $\gamma$
while holding $h$ fixed. It has been so long since I studied Lagrangians that this is probably obvious, but I found it to be pretty neat :)
Of course, plugging in the equation of motion back into the action we get
(11)${S{″}_{\mathrm{pp}}\mid }_{\text{"on shell"}}=\frac{1}{2}{S}_{\mathrm{pp}}.$
My question is, is there anything new or interesting here? I probably just reinvented some ancient wheel, but it was fun :)
PS: In the above, I was following Polchinski's convention, which seems to be the opposite of what we had been using. In the above, $\gamma$ is the auxiliary metric intrinsic to the worldsheet/line
and $h$ is the induced metric pulled back from target space.
Posted by: Eric on March 18, 2004 5:21 AM | Permalink | Reply to this
Re: Meeting 1: Nambu-Goto, Polyakov and back
Hi Eric -
I would call your $h$ here auxiliary metric and $\gamma$ the induced metric. But never mind :-) Yes, possibly Polchinski uses other letters than we have, I don’t think there is a generally accepted
convention for this stuff.
I am still in Ulm, but I have to hurry now to get to the train station so that I won't miss the second meeting of our seminar tomorrow! :-) Meanwhile, since you are now thinking so much about strings,
I bet you would enjoy this.
Posted by: Urs Schreiber on March 18, 2004 1:35 PM | Permalink | PGP Sig | Reply to this
Re: Meeting 1: Nambu-Goto, Polyakov and back
Good morning! :)
The danger of writing up something right before you sleep is that it tends to preoccupy your thoughts all night :)
Anyway, I’m not 100% confident that I performed my calculation correctly and I think the result is pretty neat so I thought I would spell it out here to make it easier to spot any holes. I’m
basically proposing what seems to be an alternative (BF-YM-type) action for a point particle (which is supposed to generalize to $p$-branes)
(1)${S}_{0}=-{\int }_{M}\left[\left(\gamma ,h\right)-\frac{m}{2}\left(h,h\right)\right]vol,$
(2)$\gamma ={\gamma }_{\tau \tau }d\tau \otimes d\tau$
is the intrinsic metric on the worldline and
(3)$h={h}_{\tau \tau }d\tau \otimes d\tau$
is the pullback metric (I'm giving up on the ambiguous terminology 'auxiliary' and 'induced' :)).
I haven’t (yet) figured out how to do the variations in a coordinate-free/index-free manner so for the moment I’ll just follow more standard procedures and compute
(4)$\delta \left(\gamma ,h\right)=\delta \left({\gamma }_{\tau \tau }^{-1}{h}_{\tau \tau }\right)=-{\gamma }_{\tau \tau }^{-2}{h}_{\tau \tau }\delta {\gamma }_{\tau \tau }$
(5)$\delta(h,h)=\delta\!\left(\gamma_{\tau\tau}^{-2}h_{\tau\tau}^{2}\right)=-2\gamma_{\tau\tau}^{-3}h_{\tau\tau}^{2}\,\delta\gamma_{\tau\tau}$
(6)$\delta vol=\frac{{\gamma }_{\tau \tau }^{-1}}{2}vol\delta {\gamma }_{\tau \tau }$
Plugging all this in, I get
(7)$\delta {S}_{0}=-{\int }_{M}\delta {\gamma }_{\tau \tau }\left[-1+m{\gamma }_{\tau \tau }^{-1}{h}_{\tau \tau }\right]\frac{1}{2}{\gamma }_{\tau \tau }^{-2}{h}_{\tau \tau }vol,$
which barring ${\gamma }_{\tau \tau }^{-2}$, ${h}_{\tau \tau }$ or $vol$ being zero gives the equation of motion
(8)${\gamma }_{\tau \tau }=m{h}_{\tau \tau }$
(9)$\gamma =mh.$
Plugging this back into the action gives
(10)${{S}_{0}\mid }_{\text{"on shell"}}=-\frac{m}{2}{\int }_{M}vol.$
Ok. Having done this again, I think the chance of an algebra mistake has decreased significantly :)
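As a further guard against algebra slips, the variation can be cross-checked with a computer algebra system. Here is a minimal sympy sketch, assuming the 1D conventions used above ($(\gamma,h)=\gamma_{\tau\tau}^{-1}h_{\tau\tau}$, $(h,h)=\gamma_{\tau\tau}^{-2}h_{\tau\tau}^{2}$, $vol=\sqrt{-\gamma_{\tau\tau}}\,d\tau$ with $\gamma_{\tau\tau}<0$); since no $\tau$-derivatives of $\gamma_{\tau\tau}$ appear, stationarity is just $\partial L/\partial\gamma_{\tau\tau}=0$, and the printed factor can be compared term by term with the hand computation:

```python
# Symbolic cross-check (a sketch under the stated assumptions, not a definitive
# derivation): gamma stands for gamma_{tau tau} (< 0), h for h_{tau tau}.
import sympy as sp

gamma, h, m = sp.symbols('gamma h m')

vol = sp.sqrt(-gamma)                                           # worldline volume density
L = -((h/gamma) - sp.Rational(1, 2)*m*(h**2/gamma**2)) * vol    # integrand of S_0

# gamma enters algebraically (no tau-derivatives), so the equation of motion
# is simply dL/dgamma = 0; read the critical gamma off the simplified factor.
eom = sp.simplify(sp.diff(L, gamma))
print(eom)
```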
What appears to be different here than what I see in Polchinski and in Szabo is that I am allowing variation of both the intrinsic and the pullback metrics. Then again, I guess that fact is not too
important because we get the same equation of motion even if we think of $h$ as not variable. The thing this makes me think about is the common complaint that string theory is somehow background
dependent. Could allowing the pullback metric, i.e. the one pulled back from the target space, to also vary somehow be relevant to that question?
You’ve got to tell me if I’m way off base because this seems interesting to me :)
Posted by: Eric on March 18, 2004 2:48 PM | Permalink | Reply to this
Re: Meeting 1: Nambu-Goto, Polyakov and back
Oops! I actually get
(1)${{S}_{0}\mid }_{\text{"on shell"}}=-\frac{m}{2}{\int }_{M}\left(h,h\right)vol$
and I stuck in $\left(h,h\right)=1$ but actually we have $\left(\gamma ,\gamma \right)=1$ so that should be $\left(h,h\right)=1/{m}^{2}$ or
(2)${{S}_{0}\mid }_{\text{"on shell"}}=-\frac{1}{2m}{\int }_{M}vol$
so maybe I should replace $m$ with $\alpha =1/m$ in the action. I don’t know if a constant multiple is important or not. In any case, whatever constant you put in there we end up with
(3)${{S}_{0}\mid }_{\text{"on shell"}}\sim {\int }_{M}vol.$
which is the important thing I think.
Posted by: Eric on March 18, 2004 3:06 PM | Permalink | Reply to this
Re: Meeting 1: Nambu-Goto, Polyakov and back
Sorry for all the posts, but I just noticed something else that seems interesting. Once I correct for that constant my action is more like
(1)${S}_{0}={\int }_{M}\left[\left(\gamma ,h\right)-\frac{\alpha }{2}\left(h,h\right)\right]vol$
whose equations of motion are
(2)$\gamma =\alpha h⇒h=m\gamma .$
Now when I look back at the equations of motion that Polchinski gets, I see he has
(3)$h=-m\gamma .$
Doesn’t that seem a little odd that the sign is different? I am growing more and more fond of my BF-YM-inspired action so please (anyone) shoot me down quick to ease my inevitable pain :)
I guess the fact that the two actions agree “on shell” means that classically they are equivalent, right? I wonder what happens quantuminally :)
Posted by: Eric on March 18, 2004 4:08 PM | Permalink | Reply to this
Meeting 2: Free and yet constrained
This time I started by introducing Nambu-Brackets, which are defined by
(1)$\{X^{\mu_{0}},X^{\mu_{1}},\cdots,X^{\mu_{p}}\}:=\epsilon^{\alpha_{0}\alpha_{1}\cdots\alpha_{p}}\,(\partial_{\alpha_{0}}X^{\mu_{0}})\cdots(\partial_{\alpha_{p}}X^{\mu_{p}})\,.$
(Here ${ϵ}^{{\alpha }_{0}{\alpha }_{1}\cdots {\alpha }_{p}}$ is the completely antisymmetric symbol with ${ϵ}^{012\cdots p}=1$.)
These are in a sense a generalization of Poisson brackets, to which they reduce for $p=1$. Using these brackets the determinant of an induced metric can conveniently be rewritten as
(2)$\mathrm{det}(h_{\alpha\beta})=\mathrm{det}\!\left((\partial_{\alpha}X^{\mu})(\partial_{\beta}X^{\nu})\,g_{\mu\nu}\right)=\frac{1}{(p+1)!}\,\{X^{\mu_{0}},\cdots,X^{\mu_{p}}\}\,\{X_{\mu_{0}},\cdots,X_{\mu_{p}}\}\,.$
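A quick way to convince oneself of identity (2) is to check it symbolically for $p=1$; the following sympy sketch (an added illustration, with a flat 3-dimensional Minkowski target assumed for concreteness) verifies that the determinant of the induced metric equals $\tfrac{1}{2}\{X^{\mu},X^{\nu}\}\{X_{\mu},X_{\nu}\}$:

```python
# Symbolic check of identity (2) for p = 1, with a flat 3D Minkowski target.
import sympy as sp

D = 3
eta = sp.diag(-1, 1, 1)                       # target metric g_{mu nu} = eta_{mu nu}

# dX[a][m] stands for partial_a X^m, a in {0, 1} the worldsheet directions
dX = [[sp.Symbol(f'dX{a}{m}') for m in range(D)] for a in range(2)]

# induced metric h_{ab} = (partial_a X^m)(partial_b X^n) eta_{mn}
h = sp.Matrix(2, 2, lambda a, b: sum(dX[a][m]*eta[m, n]*dX[b][n]
                                     for m in range(D) for n in range(D)))

# Nambu bracket {X^m, X^n} = eps^{ab} (partial_a X^m)(partial_b X^n)
br = lambda m, n: dX[0][m]*dX[1][n] - dX[1][m]*dX[0][n]

rhs = sp.Rational(1, 2)*sum(br(m, n)*eta[m, r]*eta[n, s]*br(r, s)
                            for m in range(D) for n in range(D)
                            for r in range(D) for s in range(D))

print(sp.expand(h.det() - rhs))               # prints 0
```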
Using this notation it is relatively easy to show that the general bosonic Nambu-Goto action for the $p$-brane
(3)${S}_{\mathrm{NG}}=-T\int \sqrt{h}\phantom{\rule{thinmathspace}{0ex}}{d}^{p+1}\sigma$
gives rise to the constraints
(4)${P}_{\mu }{\partial }_{i}{X}^{\mu }=0$
(5)${P}^{\mu }{P}_{\mu }+{T}^{2}\stackrel{˜}{g}=0\phantom{\rule{thinmathspace}{0ex}}.$
Here ${P}_{\mu }$ is the canonical momentum conjugate to ${X}^{\mu }$, ${\partial }_{i}$ is a spatial derivative along the brane and $\stackrel{˜}{g}$ is the determinant of the spatial part of the
induced metric on the brane.
The first set of constraints are the spatial reparameterization constraints and the last one is known as the Hamiltonian constraint.
This clearly generalizes the constraint $P^{\mu}P_{\mu}=-m^{2}$ of the point particle: every piece of membrane moves like a point particle with mass proportional to its volume ($\sqrt{\tilde{g}}$).
For the string with $p=1$ this reduces to the two constraints
(6)${P}_{\mu }{X}^{\prime \mu }=0$
(7)${P}_{\mu }{P}^{\mu }+{T}^{2}{X}^{\prime \mu }{X}_{\mu }^{\prime }=0\phantom{\rule{thinmathspace}{0ex}},$
where ${X}^{\prime }\left(\sigma \right)={\partial }_{\sigma }X\left(\sigma \right)$ is the spatial derivative of $X$ along the string.
The string ($p=1$-brane) is special in many respects. Here the special property is that the above constraints can be reassembled for $p=1$ in the symmetric form
(8)$\Leftrightarrow\quad (P\pm TX')^{2}=0\,.$
These are known as the Virasoro constraints.
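Indeed, expanding the square makes the equivalence with (6) and (7) manifest:
$$(P\pm TX')^{2}=P_{\mu}P^{\mu}\pm 2T\,P_{\mu}X'^{\mu}+T^{2}X'_{\mu}X'^{\mu}\,,$$
so demanding that both sign choices vanish is the same as demanding that the sum vanishes (the Hamiltonian constraint (7)) and that the difference vanishes (the reparameterization constraint (6)).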
In order to better understand them it is very helpful to make a Fourier decomposition.
To that end assume for simplicity that we are considering a flat Minkowski background spacetime ($g_{\mu\nu}=\eta_{\mu\nu}$) and define the objects
(9)${𝒫}_{±}^{\mu }\left(\sigma \right)=\frac{1}{\sqrt{2T}}\left({P}^{\mu }±T{X}^{\prime \mu }\right)\phantom{\rule{thinmathspace}{0ex}}.$
Using canonical quantization with commutator
(10)$[X^{\mu}(\sigma),P_{\nu}(\kappa)]=i\,\delta^{\mu}_{\nu}\,\delta(\sigma-\kappa)$
these have the commutators
(11)$[\mathcal{P}^{\mu}_{\pm}(\sigma),\mathcal{P}^{\nu}_{\pm}(\kappa)]=\pm\,\eta^{\mu\nu}\,\delta'(\sigma-\kappa)\,,$
(12)$[\mathcal{P}^{\mu}_{\pm}(\sigma),\mathcal{P}^{\nu}_{\mp}(\kappa)]=0\,.$
Using this one can check that the Fourier modes defined by
(13)$\alpha^{\mu}_{m}:=\frac{1}{\sqrt{2\pi}}\int d\sigma\,\mathcal{P}^{\mu}_{-}(\sigma)\,e^{-im\sigma}$
(14)$\tilde{\alpha}^{\mu}_{m}:=\frac{1}{\sqrt{2\pi}}\int d\sigma\,\mathcal{P}^{\mu}_{+}(\sigma)\,e^{+im\sigma}$
satisfy the oscillator algebra
(15)$[\alpha^{\mu}_{m},\alpha^{\nu}_{n}]=m\,\delta_{m,-n}\,\eta^{\mu\nu}$
(16)$[\tilde{\alpha}^{\mu}_{m},\tilde{\alpha}^{\nu}_{n}]=m\,\delta_{m,-n}\,\eta^{\mu\nu}$
(17)$[\alpha^{\mu}_{m},\tilde{\alpha}^{\nu}_{n}]=0\,.$
Up to an inessential factor these are many copies of the well-known relation between the creator $a^{\dagger}$ and annihilator $a$ of the harmonic oscillator,
(18)$[a,a^{\dagger}]=1\,.$
This suggests that we construct the Hilbert space of string states from a Fock vacuum $\mid 0\rangle$, which by definition is annihilated by all the $\alpha^{\mu}_{m>0}$ and $\tilde{\alpha}^{\mu}_{m>0}$
(19)${\alpha }_{m>0}^{\mu }\mid 0⟩=0$
(20)${\stackrel{˜}{\alpha }}_{m>0}^{\mu }\mid 0⟩=0\phantom{\rule{thinmathspace}{0ex}}.$
An arbitrary state in the Fock Hilbert space is then constructed by acting with creators ${\alpha }_{m<0}$ and ${\stackrel{˜}{\alpha }}_{m<0}$ on this vacuum state.
But we have to be careful because the 0-modes $\alpha_{0}$ and $\tilde{\alpha}_{0}$ are not oscillators but proportional to the 'center of mass' momentum of the string (in the given background):
(21)$\alpha^{\mu}_{0}=\tilde{\alpha}^{\mu}_{0}=\frac{1}{\sqrt{4\pi T}}\int d\sigma\,P^{\mu}(\sigma)\,.$
As usual the (generalized) ‘eigenstates’ of the momentum operator are plane waves and hence the Hilbert space of the string is really the direct sum of the above oscillator excitations for a given
center of mass momentum $p$. For the Fock vacuum at com-momentum $p$ we write $\mid p,0⟩$.
And this is where the content of this second meeting rather abruptly ended. To be continued on Wednesday, 24th of March.
Posted by: Urs Schreiber on March 22, 2004 5:09 PM | Permalink | PGP Sig | Reply to this
Re: Meeting 2: Free and yet constrained
Hi Urs,
Just another note on notation…
Given ANY p 0-forms (i.e. scalar functions on a manifold) f,g,…,h, you can define the Nambu brackets free of indices via
(1)$\left\{f,g,...,h\right\}=\epsilon \left(\mathrm{df},\mathrm{dg},...,\mathrm{dh}\right),$
where $\epsilon$ is a $p$-vector, i.e. the “Nambu tensor”.
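For instance, on a 2-dimensional worldsheet with coordinates $(\sigma^{0},\sigma^{1})$ and $\epsilon=\partial_{0}\wedge\partial_{1}$, this reproduces the $p=1$ bracket used in Meeting 2:
$$\{f,g\}=\epsilon(df,dg)=(\partial_{0}f)(\partial_{1}g)-(\partial_{1}f)(\partial_{0}g)\,.$$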
Posted by: Eric on March 22, 2004 6:48 PM | Permalink | Reply to this
Meeting 3: The importance of being invariant
Today we talked a bit about the meaning of the classical Virasoro constraints and their Poisson algebra. Then we set out to completely ‘solve’ the classical closed bosonic string by constructing a
complete set of classical invariants, i.e. of observables that Poisson-commute with all the constraints - namely the classical DDF observables. Essentially all of what I wrote on the blackboard is
what I previously typed into section 2.3.1 of this pdf. Please see there for more details.
In the process of these derivations a couple of important concepts came up, such as reparameterizations, the notion of conformal weight and the idea of reparameterization invariant observables.
Having understood the DDF invariants classically should allow us next time to understand the massless spectrum of the closed bosonic quantum string as well as the need for it to propagate in the
critical number of exactly 26 spacetime dimensions.
But maybe I won’t be able to refrain from first showing how from the DDF invariants one can construct the so-called Pohlmeyer invariants. This is the content of section 2.3.2 of the above mentioned
pdf. Besides being very simple and instructive (and still related to current research) this would give a nice first opportunity to say something about Wilson lines and in particular strings as Wilson
lines, which I plan to say more about in the near future.
We will meet next time on
Tuesday, 30. March, 15:00 c.t.
After that the schedule may become problematic: From April 4th-8th most of us will be in Bad Honnef, after that we have Easter (phew, what a horrible website… ;-), the week after that I’ll be at the
AEI in Potsdam, and from Apr 16-19 I’ll be in New York, visiting Ioannis Giannakis at Rockefeller University. After that the semester starts again and we’ll have to figure out how to proceed anyway.
The most recent information will always be found here in the latest comment.
Finally, here is a little exercise:
We have seen that the classical DDF observables $A^{\mu}_{m}$, $\tilde{A}^{\mu}_{m}$ are morally similar to the ordinary worldsheet oscillators $\alpha^{\mu}_{m}$ and $\tilde{\alpha}^{\mu}_{m}$, but with appropriate corrections inserted to make them invariant under one copy of the Virasoro algebra.
Compute the Poisson algebra of the DDF observables ${A}_{m}^{\mu }$, i.e. compute
(1)$[A^{\mu}_{m},A^{\nu}_{n}]_{\mathrm{PB}}=\cdots\,.$
Hint: Consider first the case where $\mu=i$ and $\nu=j$ are indices transversal to the lightlike vector $k$ (which enters the definition of the $A^{\mu}_{m}$), i.e. compute
(2)$[A^{i}_{m},A^{j}_{n}]_{\mathrm{PB}}=\cdots\,.$
Then consider another lightlike vector $l$ with $l\cdot k\neq 0$ and compute
(3)$\left[l\cdot {A}_{m},{A}_{n}^{i}{\right]}_{\mathrm{PB}}=\cdots$
(4)$\left[l\cdot {A}_{m},l\cdot {A}_{n}\right]=\cdots$
Do you recognize the algebra of the $l\cdot {A}_{m}$?
Why don’t we consider the algebra of the $k\cdot {A}_{m}$?
Posted by: Urs Schreiber on March 24, 2004 5:45 PM | Permalink | PGP Sig | Reply to this
Meeting 4: Subtle is the spectrum..
Today we finally did OCQ (old covariant quantization) and determined the spectrum of open and closed bosonic strings as well as the conditions for the Hilbert space of physical states to have
non-negative metric and a maximum of null states.
What I said was a mixture of the following sources:
- - Polchinski, pp. 123-125
- - Szabo’s lecture pp. 19-27
- - Green, Schwarz & Witten, pp. 112-113.
For reasons mentioned before we won’t meet again before at least 20th of April. In any case I will announce the new date here in the comment section.
Posted by: Urs Schreiber on March 30, 2004 5:46 PM | Permalink | PGP Sig | Reply to this
Cosmological Conformality
Hello string people :)
I had a chance to spend a few minutes here and there during a mini-vacation to study some strings and I had a couple of questions.
First is just a statement, I’ll probably start bugging you at sci.physics.strings as soon as I can get our news server to make it available for subscription :)
Second, I have seen it emphasized how the Polyakov action is conformally invariant so that string theory is a conformal field theory. I’m sure I’m glossing over important details, but Urs has
emphasized that in order for Polyakov to really be equivalent to Nambu-Goto, then you must add a cosmological constant term. However, unless I’m mistaken, adding a cosmological constant makes the
action no longer conformally invariant so that Polyakov + cosmological constant doesn’t seem to be a conformal field theory to me. Is that correct? How important is it for string theory that it be a
conformal field theory?
I guess a related question is that I’ve seen it also emphasized that string theory contains general relativity. Is general relativity a conformal field theory? If not, how is that supposed to work?
That will probably do it for now.
Best regards,
Posted by: Eric on April 1, 2004 2:40 PM | Permalink | Reply to this
Re: Cosmological Conformality
Hi Eric -
recall that the 'cosmological constant' type of term in the Polyakov-like action for a $p$-brane was proportional to $p-1$. So it vanishes precisely for the string!
Also note that when you do the computation which demonstrates the classical equivalence of the Polyakov-like action of a $p$-brane with the NG-like action of the same $p$-brane, you'll find that for
$p\neq 1$ the induced metric and the auxiliary metric must be precisely the same. But for $p=1$ you find that they may differ by a conformal factor!
(I did emphasize this in the seminar, but maybe not so much here at the SCT.)
How important is it for string theory that it be a conformal field theory?
Conformal on the worldsheet, by the way, not in spacetime. GR is not conformal, but the strings which are in graviton states are conformal on their worldsheet.
Worldsheet conformal invariance is of paramount importance for the whole theory. For one, it dictates the form of the target space theory. (One can in principle do non-critical string theory which
gives up conformal invariance on the worldsheet, though.)
Posted by: Urs Schreiber on April 2, 2004 12:17 PM | Permalink | PGP Sig | Reply to this
Next meeting: Cosmological Billiards
The next meeting of our string theory seminar will be
Friday, 23. April 2004
10:00 c.t.
in room S05 V06 E22 .
I originally intended to give an introduction to two-dimensional conformal field theory. But due to recent developments I now want to give an introduction to cosmological billiards instead. The CFT discussion will be made up later.
In case anyone wants to have a look at some literature, here is a brief list of relevant papers (but my talk will be fully introductory; no familiarity with either cosmology or billiards or in fact
much else will be assumed):
T. Damour, M. Henneaux & H. Nicolai Cosmological Billiards (2002)
T. Damour, M. Henneaux & H. Nicolai ${E}_{10}$ and the 'small tension expansion' of M Theory (2002)
J. Brown, O. Ganor & C. Helfgott M-theory and ${E}_{10}$: Billiards, Branes, and Imaginary Roots (2004)
R. Gebert & H. Nicolai ${E}_{10}$ for beginners (1994)
Anyone interested might also want to have a look at (and maybe participate in) a discussion thread on sci.physics.strings called Supergravity Cosmological Billiards and the BIG group.
Posted by: Urs Schreiber on April 21, 2004 5:05 PM | Permalink | PGP Sig | Reply to this
Next Meeting: 2D conformal field theory
Our next meeting will be
Friday, 14th May, 10:15 S05 V06 E22 .
I’ll introduce some basic concepts of conformal field theory (CFT), discuss the Polyakov action from that point of view and show how the oscillator computations that we have discussed before are
performed in terms of the more sophisticated CFT language.
The meeting after that will be
Friday, 4th June, 10:15 S05 V06 E22 .
Posted by: Urs Schreiber on April 28, 2004 4:15 PM | Permalink | PGP Sig | Reply to this
Re: Next Meeting: 2D conformal field theory
Tomorrow morning I’ll have 60-90 minutes to explain from scratch some notions of CFT, what they are good for in string theory and in particular how to derive the Virasoro anomaly using CFT
techniques. Some students in the audience won’t have heard much of QFT, so I’ll need to account for that. Pretty difficult task, eh? :-)
I thought it would be best to hand out some notes and then roughly go through the main ideas in these notes at the blackboard, with lots of specific pointers to the literature. I am under pretty
tight time pressure, so the best I could come up with before going to bed tonight is this:
Elementary notes on elements of elementary CFT.
I know that there is lots of room for improvement and additions, but it’s a first step that should serve the purpose of showing how the Nambu-Goto/oscillator derivations that I have used so far have
their complement in terms of Polyakov/CFT language.
Next time, on 4th of June, I’ll talk about BRST quantization.
Posted by: Urs on May 13, 2004 7:13 PM | Permalink | PGP Sig | Reply to this
Meeting 6: BRST quantization
The next meeting will be
Friday, 4th of June, 10:15 in S05 V06 E22 .
I’ll try to explain the BRST quantization of the bosonic string in Minkowski background.
Those interested might enjoy looking at some of the references listed by Eric Forgy in the SFT thread and the discussion which can be found there, especially maybe the sketch of the main idea, which
I had given here.
I seem to recall some nice discussion of BRST formalism applied to some simple systems by Warren Siegel somewhere, probably in FIELDS (nothing under the sun which is not mentioned in this book), but
currently I can’t find it.
I’ll probably pretty much stick to section 4.2 of Polchinski’s book.
Posted by: Urs Schreiber on May 14, 2004 1:52 PM | Permalink | PGP Sig | Reply to this
Re: Meeting 6: BRST quantization
Today I have talked about BRST techniques in general and how they are used in string theory and string field theory, in particular. I mostly followed appendix A in my Notes on OSFT which contains a
simple description of exterior derivatives on gauge groups, on gauge fixing of path integrals and how both are related via the BRST operator.
I then motivated the construction of the cubic OSFT action as described for instance on the first couple of pages of hep-th/0102085 and briefly talked about the effective action obtained from level
truncation and the meaning of tachyon condensation in OSFT.
The next meeting will be on 16th of July and I hope then to be able to say more interesting things about string field theory and in particular how deformations of worldsheet CFTs can be understood
from the SFT point of view.
Posted by: Urs Schreiber on June 4, 2004 11:44 AM | Permalink | PGP Sig | Reply to this
st: Re: RE: Re: RE: questions about stationarity test results using IPSHIN
From Jeannette Wicks-Lim <janetlim@econs.umass.edu>
To statalist@hsphsun2.harvard.edu
Subject st: Re: RE: Re: RE: questions about stationarity test results using IPSHIN
Date Fri, 19 Nov 2004 09:28:59 -0500
Hi Nick,
One other follow-up question. Why do you say that the original paper is needed to judge the significance of the t-tests? I assumed that the critical values provided are enough and the paper was only
needed if you wanted the exact p-values for the t-statistics estimated.
I did follow your suggestion and just graphed a couple of states' series. The plot of -lngap- does look to be stationary with a deterministic trend. And -lnmin1- appears to be nonstationary because
it is basically a series with a bunch of structural breaks (each break being a legislated raise in the minimum wage), not the typical erratic movement that I think of when I think of nonstationary
series. I guess because of that the ratio between -lnq5- and -lnmin1- is stationary...that's my guess for now!
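A toy illustration of that structural-breaks intuition (in Python with numpy/statsmodels rather than Stata, and with a single-series ADF test standing in for the panel -ipshin- test; the simulated series and parameters are made up, and the printed p-values will vary with the draw):

```python
# Simulate: a series that moves only through occasional level shifts (like
# legislated minimum-wage raises), a trend-stationary series, and their log gap,
# then run an ADF test with constant & trend on each, mirroring the -ipshin- runs.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(1)
T = 240
breaks = np.repeat(rng.normal(0.10, 0.03, T // 40).cumsum(), 40)  # step function
lnmin = breaks + rng.normal(0, 0.005, T)                 # break-driven "lnmin1"
lnq5 = 1.0 + 0.002*np.arange(T) + rng.normal(0, 0.05, T)  # trend-stationary
lngap = lnq5 - lnmin

for name, x in [('lnmin1', lnmin), ('lnq5', lnq5), ('lngap', lngap)]:
    stat, pval = adfuller(x, regression='ct')[:2]        # constant & trend
    print(f'{name}: ADF = {stat:.2f}, p = {pval:.3f}')
```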
Thanks again,
----- Original Message ----- From: "Nick Cox" <n.j.cox@durham.ac.uk>
To: <statalist@hsphsun2.harvard.edu>
Sent: Thursday, November 18, 2004 5:59 PM
Subject: st: RE: Re: RE: questions about stationarity test results using IPSHIN
My comment is very simple: put another way,
on the t-bar statistics -lnq5- and -lngap-
are behaving relatively similarly, whereas -lnmin1-
differs from both. No more than that, and no less.
The help for -ipshin- makes it clear that
to judge significance you need to get
results from the original paper.
Jeannette Wicks-Lim
Maybe I'm misunderstanding the results ( I am a real novice
re: time series
issues). I thought that the t-bar being further from zero for lngap
indicates that it is stationary (more negative than the cv1). Is that right?
Nick Cox
> Your commentary seems at odds with your results
> in that t-bar is further from zero for -lngap-
> than for the other variables.
Jeannette Wicks-Lim
>> I've conducted the IPSHIN test on two variables, one of which
>> appears to be
>> nonstationary (log of the minimum wage, or "lnmin1") and the
>> other appears
>> to be stationary (log of the 5th wage percentile, or "lnq5").
>> When I create
>> a third variable (log of 5th wage percentile - log of minimum wage,
>> or"lngap"), the IPSHIN test indicates that it is stationary.
>> How can it be
>> that the ratio of a stationary and nonstationary variable is
>> stationary?
>> (Some background info: the panels in this dataset are US
>> states -- all 50,
>> the time points are 6 month intervals over 20 years).
>> Here are my results:
>> . ipshin lnmin1 if gestcen~=53, lags(17) trend
>> Im-Pesaran-Shin test for cross-sectionally demeaned lnmin1
>> Deterministics chosen: constant & trend
>> t-bar test, N,T = (50,40) Obs = 1593
>> Augmented by 17 lags (average)
>> t-bar cv10 cv5 cv1 W[t-bar] P-value
>> -1.456 -2.320 -2.360 -2.440 . .
>> . ipshin lnq5 if gestcen~=53, lags(17) trend
>> Im-Pesaran-Shin test for cross-sectionally demeaned lnq5
>> Deterministics chosen: constant & trend
>> t-bar test, N,T = (50,40) Obs = 1593
>> Augmented by 17 lags (average)
>> t-bar cv10 cv5 cv1 W[t-bar] P-value
>> -3.087 -2.320 -2.360 -2.440 . .
>> . ipshin lngap if gestcen~=53, lags(17) trend
>> Im-Pesaran-Shin test for cross-sectionally demeaned lngap
>> Deterministics chosen: constant & trend
>> t-bar test, N,T = (50,40) Obs = 1593
>> Augmented by 17 lags (average)
>> t-bar cv10 cv5 cv1 W[t-bar] P-value
>> -3.585 -2.320 -2.360 -2.440 . .
New York City Algebra 2 Tutor
Find a New York City Algebra 2 Tutor
...As a young guy myself, I work very well with high school students and aim to go above and beyond by giving advice on scholarship and college applications. Whenever working with students, I give
the first session free so that students can get accustomed to my teaching style. I also prefer tutoring online especially with the technology and resources available.
9 Subjects: including algebra 2, geometry, algebra 1, SAT math
...If you're like most people, it probably seemed really hard. But years later, is it hard now? After elementary school, these are routine math problems most people can easily accomplish.
12 Subjects: including algebra 2, physics, MCAT, trigonometry
...In college, I received an A+ in Calculus I and A- in Calculus III. My teaching philosophy rests on first identifying a student's strengths and weaknesses and then finding the root problem of
his/her weaknesses and making his/her strengths even stronger. I do assign homework and review tests to challenge a student further, but mainly to offer feedback on his/her performance.
6 Subjects: including algebra 2, biology, algebra 1, prealgebra
...I have also taken the AP Calculus BC and also received a 5. I hope these credentials are what you are looking for. I really look forward to working with you!
15 Subjects: including algebra 2, chemistry, physics, calculus
...I provide highly personalized one-on-one lessons in all areas of chemistry, physics and math. Throughout my career, tutoring and teaching has been one of my passions. I have tutored over a
hundred students spanning a broad range of ages and backgrounds, from elementary school through post-graduate continuing education.
12 Subjects: including algebra 2, chemistry, physics, calculus
Let △ABC have vertices A(-2,5), B(3,2), C(5,-7). 1. What is the length of the median drawn from vertex C? 2. What is the height relative to side AB?
Almost every Geometry problem solution begins with a diagram.
The median from vertex C is the segment drawn from C to the midpoint of the opposite side. So, you need to get the midpoints of segment AB.
@Nathalia.agui Post your coordinates for the midpoint of segment AB and I will check them.
>is the answer (3/2, -1)? For the midpoint of segment AB, I got [(-2+3)/2, (5+2)/2] = (1/2, 7/2). To get the midpoint of a segment when you know the endpoints of the segment, take the average of the x-coordinates of the endpoints and the average of the y-coordinates of the endpoints.
To get the length of segment CM find the distance between points C and M using the Distance Formula. The segment CM itself is the median from vertex C of the triangle to the midpoint of side AB.
The distance formula is attached. Crank out the distance using the formula and the two points C and M. Recall that C is (5, -7) and that M is (1/2, 7/2). Post what you get and we can compare answers. @Nathalia.agui
CM = (√522) / 2 = (3√58) / 2. Check that result. Part B is for you to try first. :) @Nathalia.agui
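A compact way to check both parts numerically (an added sketch; the height from C is computed here as twice the triangle's area divided by |AB|):

```python
# Median from C and the height relative to side AB, for A(-2,5), B(3,2), C(5,-7).
from math import hypot

A, B, C = (-2, 5), (3, 2), (5, -7)

M = ((A[0] + B[0]) / 2, (A[1] + B[1]) / 2)      # midpoint of AB
median_C = hypot(C[0] - M[0], C[1] - M[1])      # length of the median from C

# height from C onto AB = 2 * area(ABC) / |AB|, via the cross-product area
twice_area = abs((B[0]-A[0])*(C[1]-A[1]) - (B[1]-A[1])*(C[0]-A[0]))
height = twice_area / hypot(B[0]-A[0], B[1]-A[1])

print(M, round(median_C, 3), round(height, 3))  # (0.5, 3.5) 11.424 6.689
```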
Re: st: RE: AW: Sample selection models under zero-truncated negative binomial models
From John Ataguba <johnataguba@yahoo.co.uk>
To statalist@hsphsun2.harvard.edu
Subject Re: st: RE: AW: Sample selection models under zero-truncated negative binomial models
Date Fri, 5 Jun 2009 14:03:06 +0000 (GMT)
Hi Austin,
Specifically, I am not looking at the time dimension of the visits. The data set is such that I have the total number of visits to a GP (General Practitioner) in the past month, collected from a national survey of individuals. Given that this is a household survey, there are zero visits for some individuals.
One of my objectives is to determine the factors that predict positive utilization of GPs. This is easily implemented using a standard logit/probit model. The other part is the factors that affect the number of visits to a GP. Given that the dependent variable is a count variable, the likely candidates are count regression models. My fear is with how to deal with unobserved heterogeneity and sample selection issues if I limit my analysis to the non-zero visits. If I use the standard two-part or hurdle model, I do not know if this will account for sample selection in the fashion of the Heckman procedure.
I think the class of finite mixture models (fmm) is an alternative that I want to explore. I don't know much about them but will be happy to have some brighter ideas.
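To make the two-part idea concrete, here is a minimal sketch in Python (an illustration on simulated data only, not the survey data discussed here, and with a hand-coded zero-truncated Poisson standing in for -ztnb-'s negative binomial). Because the likelihood separates, the logit part for use vs. non-use is standard and the positive-count part can be estimated on users alone:

```python
# Two-part (hurdle) sketch: simulate use/non-use plus counts, then fit a
# zero-truncated Poisson by maximum likelihood on the positive counts only.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=(n, 2))
use = rng.random(n) < 1 / (1 + np.exp(-(0.5 + x @ [1.0, -0.5])))
lam = np.exp(0.2 + x @ [0.3, 0.4])
y = np.where(use, rng.poisson(lam), 0)
pos = y > 0                                   # users with at least one visit

def zt_poisson_nll(beta):
    """Negative log-likelihood of a zero-truncated Poisson on y > 0."""
    mu = np.exp(np.c_[np.ones(pos.sum()), x[pos]] @ beta)
    ll = y[pos]*np.log(mu) - mu - gammaln(y[pos] + 1) - np.log1p(-np.exp(-mu))
    return -ll.sum()

res = minimize(zt_poisson_nll, np.zeros(3), method="BFGS")
print(res.x)   # intercept and slopes of the positive-count part (~0.2, 0.3, 0.4)
```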
----- Original Message ----
From: Austin Nichols <austinnichols@gmail.com>
To: statalist@hsphsun2.harvard.edu
Sent: Friday, 5 June, 2009 14:27:20
Subject: Re: st: RE: AW: Sample selection models under zero-truncated negative binomial models
Steven--I like this approach in general, but from the original post,
it's not clear that data on the timing of first visit or even time at
risk is on the data--perhaps the poster can clarify? Also, would you
propose using the predicted hazard in the period of first visit as
some kind of selection correction? The outcome is visits divided by
time at risk for subsequent visits in your setup, so represents a
fractional outcome (constrained to lie between zero and one) in
theory, though only the zero limit is likely to bind, which makes it
tricky to implement, I would guess--if you are worried about the
nonnormal error distribution and the selection b
Ignoring the possibility of detailed data on times of utilization, why
can't you just run a standard count model on number of visits and use
that to predict probability of at least one visit? One visit in 10
years is not that different from no visits in 10 years, yeah? It
makes no sense to me to predict utilization only for those who have
positive utilization and worry about selection etc. instead of just
using the whole sample, including the zeros. I.e. run a -poisson- to
start with. If you have a lot of zeros, that can just arise from the
fact that a lot of people have predicted number of visits in the .01
range and number of visits has to be an integer. Zero inflation or
overdispersion also can arise often from not having the right
specification for the explanatory variables... but you can also move
to another model in the -glm- or -nbreg- family.
On Tue, Jun 2, 2009 at 1:21 PM, <sjsamuels@gmail.com> wrote:
> A potential problem with Jon's original approach is that the use of
> services is an event with a time dimension--time to first use of
> services. People might not use services until they need them.
> Instead of a logit model (my preference also), a survival model for
> the first part might be appropriate.
> With later first-use, the time available for later visits is reduced,
> and number of visits might be associated with the time from first use
> to the end of observation. Moreover, people with later first-visits
> (or none) might differ in their degree of need for subsequent visits.
> To account for unequal follow-up times, I suggest a supplementary
> analysis in which the outcome for the second part of the hurdle model
> is not the number of visits, but the rate of visits (per unit time at
> risk).
> -Steve.
> On Tue, Jun 2, 2009 at 12:22 PM, Lachenbruch, Peter
> <Peter..Lachenbruch@oregonstate.edu> wrote:
>> This could also be handled by a two-part or hurdle model. The 0 vs. non-zero model is given by a probit or logit (my preference) model. The non-zeros are modeled by the count data or OLS or what have you. The results can be combined since the likelihood separates (the zero values are identifiable - no visits vs number of visits).
>> -----Original Message-----
>> From: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Martin Weiss
>> Sent: Tuesday, June 02, 2009 7:02 AM
>> To: statalist@hsphsun2.harvard.edu
>> Subject: st: AW: Sample selection models under zero-truncated negative binomial models
>> *************
>> ssc d cmp
>> *************
>> -----Ursprüngliche Nachricht-----
>> Von: owner-statalist@hsphsun2.harvard.edu
>> [mailto:owner-statalist@hsphsun2.harvard.edu] Im Auftrag von John Ataguba
>> Gesendet: Dienstag, 2. Juni 2009 16:00
>> An: Statalist statalist mailing
>> Betreff: st: Sample selection models under zero-truncated negative binomial
>> models
>> Dear colleagues,
>> I want to enquire if it is possible to perform a ztnb (zero-truncated
>> negative binomial) model on a dataset that has the zeros observed in a
>> fashion similar to the heckman sample selection model.
>> Specifically, I have a binary variable on use/non use of outpatient health
>> services and I fitted a standard probit/logit model to observe the factors
>> that predict the probability of use. Subsequently, I want to explain the
>> factors that influence the number of visits to the health facilities. Since
>> this is a count data, I cannot fit the standard Heckman model using the
>> standard two-part procedure in stata command -heckman-.
>> My fear now is that my sample of users will be biased if I fit a ztnb model
>> on only the users given that i have information on the non-users which I
>> used to run the initial probit/logit estimation.
>> Is it possible to generate the inverse of mills' ratio from the probit model
>> and include this in the ztnb model? will this be consistent? etc...
>> Are there any smarter suggestions? Any reference that has used the similar
>> sample selection form will be appreciated.
>> Regards
>> Jon
maths problem !
May 14th 2008, 11:47 PM #1
( i ) The circumference of a circle ( C = 2 x 3.14 x r ) is related to the radius of the circle in a direct way. Draw the circles as indicated in the table below, then measure the circumference.
( ii ) Calculate the ratio circumference divided by radius.
Circle Radius r (mm) Circumference C (mm) Circumference ÷ radius
A 6
B 10
C 15
D 19
E 29
( iii) explain how these results help to determine the formula for the circumference. That is, explain how the ratio circumference ÷ radius is related to the formula ?
When you construct these circles and divide the circumference by the radius you should note that the answer is just about the same for all five circles. So if C/r is a constant then the formula
for the circumference must be C = kr, where k is the constant you found when you measured it.
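Purely as a numerical check of this conclusion (not part of the original exercise), tabulating C ÷ r for the radii in the question shows the constant k emerging as 2π ≈ 6.283; a short Python sketch:

import math

# Radii from the table, in mm; C = 2*pi*r stands in for the measured value.
for label, r in zip("ABCDE", [6, 10, 15, 19, 29]):
    c = 2 * math.pi * r
    print(f"{label}: r = {r:2d} mm, C = {c:6.1f} mm, C/r = {c / r:.4f}")
# Every ratio is 6.2832... = 2*pi, so C = kr with k = 2*pi.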
Is there an interesting definition of a category of test categories?
Given a pair of test categories $C_1$ and $C_2$ (in the sense of Grothendieck - weak or strict or otherwise), has anyone defined an interesting notion of morphism between them? Or are ordinary
functors enough? Given that, has anyone studied the resulting category? In particular, what effect do properties of the functors have on the resulting presheaf categories, $Set^{C_1^{op}}$ and $Set^{C_2^{op}}$? Is there a characterisation of functors between test categories which induce a Quillen equivalence of presheaf categories, using the model structure as in Cisinski's Asterisque volume?
ct.category-theory homotopy-theory
2 Answers
I guess that it might be slightly more interesting to look for a notion of morphism of local test categories. A first natural candidate is given by the notion of locally constant functor: a functor $u:A\to B$ is locally constant if it satisfies the assumptions of Quillen's theorem B, namely that, for any map $b\to b'$ in $B$, the induced functor on the comma categories $A/b\to A/b'$ is a weak equivalence. If $A$ and $B$ are local test categories, then a functor $u:A\to B$ is locally constant if and only if the inverse image functor $u^\star : Set^{B^{op}}\to Set^{A^{op}}$ is a left Quillen functor (and such a $u^\star$ is moreover a Quillen equivalence if and only if $u$ is a weak equivalence); see prop 6.4.29 in Astérisque 308.
If we want to look at something which looks like some theory of bimodules, in order to produce a bicategory of local test categories, we might start with spans of shape
$$A\overset{w}{\leftarrow} C \overset{u}{\to} B$$
where $w$ is aspherical (i.e. satisfies the assumptions of Quillen's theorem A), while $u$ is locally constant (with $C$ a local test category as well). From such data, we obtain a left Quillen functor
$$Set^{B^{op}}\overset{u^\star}{\longrightarrow} Set^{C^{op}}$$
as well as a left Quillen equivalence
$$Set^{C^{op}}\overset{w^\star}{\longleftarrow} Set^{A^{op}}.$$
The trouble is that such spans may not compose (because neither aspherical functors nor locally constant functors are stable under pullbacks in general). We may correct this defect by asking furthermore that $w$ is smooth (e.g. that $C$ is fibred over $A$): a reformulation of Quillen's theorem B is that the pullback of a locally constant functor by a smooth map is always homotopy cartesian (see prop 6.3.39 and thm 6.4.15 in Astérisque 308); furthermore, for any smooth functor $p:X\to Y$, if $Y$ is a local test category, so is $X$ (in fact, for such a property, we need only $p$ to be locally aspherical); see thm 7.2.1 in loc. cit. In other words, we obtain a bicategory of local test categories, for which the $1$-cells are the couples $(w,u)$, with $w$ an aspherical smooth functor, while $u$ is locally constant (e.g. any smooth and proper functor is locally constant). This bicategory is in fact another model for homotopy types: if we denote by $S(A,B)$ the category of spans $(w,u)$ from $A$ to $B$ as above, then the classifying space of $S(A,B)$ has the homotopy type of the mapping space from the classifying space of $A$ to the classifying space of $B$ (and any homotopy type is the classifying space of some local test category).
Here is some explanation of the meaning of this bicategory: this has to do with the description of homotopy types as $\infty$-groupoids. Let us start with the toposic description of
$1$-groupoids. There is a $2$-functor from the $2$-category of $1$-groupoids to the $2$-category of (Grothendieck) topoi which associates to a $1$-groupoid $G$ the category of presheaves of
sets over $G$. This functor is fully faithful (i.e. induces equivalences of categories at the level of Hom's). Its essential image may be characterized as the $2$-category of topoi which are
generated (as cocomplete categories) by their locally constant objects. This embedding of $1$-groupoids into topoi is the starting point of the Grothendieck version of Galois theory (unifying
classical Galois theory of fields and topological Galois theory). If we consider homotopy types as $\infty$-groupoids (in the spirit of Toën, Lurie, and others), then one can consider the
classifying space of a small category $A$ as the $\infty$-groupoid obtained from $A$ by inverting all the arrows in $A$. The $\infty$-topos associated to this $\infty$-groupoid may be described
as the one associated to the model category obtained from the (injective) model category of simplicial presheaves over $A$, by considering the left Bousfield localization by the arrows between
representable presheaves. The latter model category is precisely the model structure corresponding to the local test category $A\times \Delta$ (recall that the product of any small category
with a (local) test category is a local test category). There is a higher version of higher Galois theory whose first fundamental statement is that the $(\infty,2)$-category of $\
infty$-groupoids is embedded fully faithfully in the $(\infty,2)$-category of $\infty$-topoi; see Toën's "Vers une interprétation Galoisienne de la théorie de l'homotopie", Cahiers de Top. et de Geom. Diff. Cat. 43, No. 4 (2002), 257-312. The bicategory of local test categories as described above is a convenient presentation to see this canonical embedding of homotopy types into $\infty$-topoi explicitly: one associates to a span $(w,u)$ as above the map induced by the left Quillen functors ${w^{\star}}$ and ${u^{\star}}$ (inverting ${w^{\star}}$); see the last
paragraph of my paper Locally constant functors, Math. Proc. Camb. Phil. Soc. 147 (2009), 593-614, for a similar description of this embedding. In other words, given a local test category $A$,
the corresponding model category on $Set^{A^{op}}$ defines an $\infty$-topos, which is Galois (in the sense that it is generated, as a cocomplete $(\infty,1)$-category, by its locally constant
objects), and the corresponding $\infty$-groupoid is simply the one associated to $A$ by inverting all the arrows in $A$. This is how I understand the meaning of these model category structures
associated to local test categories.
Since small categories are models of homotopy types (from Thomason's model structure on Cat), what more does one get by restricting to local test categories? – David Roberts May 22 '10 at
On a different note, can this bicategory of local test categories be obtained as a bicategorical localisation (a la Pronk, or any other way, for that matter) of any 2-category of test
categories? – David Roberts May 22 '10 at 9:06
I added some comments on the meaning of this bicategory of local test categories which (hopefully) answers to your first question. I don't really understand the second question. However, it
is clear that the localization à la Pronk of this bicategory of local test categories is biequivalent to the $2$-category whose objects are spaces, and whose categories of morphisms are the
groupoids $\pi_1(Map(X,Y))$. – Denis-Charles Cisinski May 22 '10 at 22:06
My second question was more along the lines of: is this bicategory $S$ with spans $(w,u)$ as 1-arrows the localisation of the full sub-2-category $loc.test.Cat \subset Cat$ at the aspherical
(resp. aspherical smooth) functors? What if we don't necessarily take the full sub-2-category, say by restricting attention to smooth functors? (for example) – David Roberts May 23 '10 at
I'm not sure what “interesting” means in this context. It's probably too much to demand that morphisms between $\widehat{C_1} = \mathbf{Set}^{{C_1}^{\mathrm{Op}}}$ and $\widehat{C_2} = \mathbf{Set}^{{C_2}^{\mathrm{Op}}}$ arise only from functors $C_2 \to C_1$. This would eliminate functors such as the simplicial realization of a cubical set. Better, we should take the
morphisms among left adjoints $\widehat{C_1} \to \widehat{C_2}$, i.e., diagrams $C_1 \times {C_2}^\mathrm{Op} \to \mathbf{Set}$. Not all of these give Quillen adjunctions, but those
corresponding to suitably cofibrant resolutions of the terminal object in ${\widehat{C_2}}^{C_1}$ probably do.
In the special case that a left adjoint $\widehat{C_1}\to\widehat{C_2}$ is given by restriction $f^\ast$ along an aspherical functor $f:C_2 \to C_1$, the corresponding functor $C_2 \downarrow f^\ast X \to C_1 \downarrow X$ is a weak equivalence for all $X\in\widehat{C_1}$. This is one implication of Proposition 1.2.9 in Maltsiniotis' Asterisque volume. Now, if $f^\ast$ is to be a left Quillen equivalence, you need it to preserve all weak equivalences (since everything in $\widehat{C_1}$ is cofibrant). This forces your hand: since the representables in $\widehat{C_1}$ are weakly contractible you need $f^\ast C_1({-},x)$ to be weakly contractible, i.e., the nerve of $f\downarrow x$ should be weakly equivalent to a point.
7.2.1 Modeling the Situation
Modeling the Situation (applet): an interactive environment used to become familiar with the parameters involved and the range of results that can be obtained.
1. A student strained her knee in an intramural volleyball game, and her doctor has prescribed an anti-inflammatory drug to reduce the swelling. She is to take two 220-mg tablets every 8 hours for
10 days. Her kidneys eliminate 60% of this drug from her body every 8 hours. Assume she faithfully takes the correct dosage at the prescribed regular intervals. The interactive figure below
contains the initial dose (440), the elimination rate (0.60), and the recurring dose (440). Click on Calculate to generate values for the amount of medicine in her body just after taking each
dose of medicine.
The interactive figure calculates the amount of drug in the system just after taking a dose of medicine. You could also ask how much drug is in the body just before taking each dose. These values
would be exactly 440 mg less than the values calculated just after taking each dose.
1. How much of the drug is in her system after 10 days, just after she takes her last dose of medicine? If she continued to take the drug for a year, how much of the drug would be in her system
just after she took her last dose?
2. Does the amount of medicine in the body change faster around the fifth interval (about 40 hours after the initial dose) or around the twenty-fifth interval? How can you tell? What happens to
the change in the amount of medicine in the body as time progresses?
3. Explain, in mathematical terms and in terms of body metabolism, why the long-term amount of medicine in the body is reasonable.
2. Vary the initial dose, the elimination rate, and the recurring dose. What do you notice?
How to Use the Interactive Figure
The interactive figure calculates the amount of medicine in a person's body immediately after taking a dose. In this scenario, the individual takes an initial dose of medicine followed by recurring
doses, taken faithfully at fixed intervals of time. The interactive figure allows the initial dose to be different from the recurring doses.
The simulation requires three inputs:
• The initial dose—the amount of medicine given for the initial dose
• The elimination rate—the percent of medicine (given as a decimal) that the kidneys remove from the system between doses
• The recurring dose—the amount of the medicine to be given at fixed intervals
Click Calculate.
A(n) represents the amount of medicine in the body immediately after taking the nth recurring dose of medicine. So A(0) = initial dose, A(1) = amount of medicine in body after the first recurring dose, A(2) = amount of medicine in body after the second recurring dose, and so on.
This interactive figure calculates the amount of drug in the system just after taking a dose of medicine. If the amount of medicine in the body just before taking the nth dose is desired, subtract
the amount of the recurring dose (or initial dose for n = 0) from the value calculated for A(n).
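For readers without access to the applet, the same calculation is easy to reproduce; here is a minimal Python sketch of the recurrence with the scenario's values (initial dose 440 mg, elimination rate 0.60, recurring dose 440 mg), showing A(n) stabilizing near 733 1/3 mg:

# Between doses the kidneys remove 60% of the drug; then 440 mg is taken.
initial_dose, elimination_rate, recurring_dose = 440.0, 0.60, 440.0

amount = initial_dose  # A(0)
for n in range(1, 31):
    amount = amount * (1 - elimination_rate) + recurring_dose
    if n in (1, 2, 5, 17, 30):
        print(f"A({n}) = {amount:.2f} mg")
# The values approach 440 / 0.6 = 733.33... mg, the level at which the body
# eliminates exactly as much medicine between doses as each dose supplies.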
The interactive figure in this example illustrates calculation features that can be implemented in spreadsheets or graphing calculators. Spreadsheets or calculators with iterative capabilities can be
very useful for investigating and understanding change—whether it is due to growth or to decay. In computer and calculator spreadsheet programs, students have a powerful tool that permits them to
calculate the results of multiple dynamic events quickly and accurately. The ease of calculation frees students to focus on the effect of changing one or more of the problem parameters. In this
example, an athlete takes a constant dose of medicine at regular intervals. Using a calculator or a spreadsheet, students can determine the effect when changes are made in the initial dose, the
recurring dose, or the percent of medicine eliminated from the body.
Obtaining explicit formulas that capture such effects is often quite difficult and in some cases, impossible. In order to have had the experience that will lead them to an appropriate closed-form
equation with which to model such situations, students generally must be at a fairly high level of mathematics. A recursive approach, especially when supported by a calculator or an electronic
spreadsheet, gives students access to interesting problems such as this earlier in their schooling. It also informally introduces them to an important mathematical concept—limit.
In this initial phase of the investigation, students should recognize that the level of medicine in the body initially rises rapidly but with time increases less rapidly. Although one might question
whether the accuracy of the recorded answer affects this observation, it appears that the level eventually stabilizes, so that after about seventeen periods, the value seems no longer to change. In
other words, the athlete's body is eliminating the same amount of medication as she is taking. This observation can be mathematically verified by showing that (733 1/3) × 0.6 = 440.
As students play with the various parameters in the problem, they might make a number of observations. For example, the initial dose has no long-term effect; the level of medication still stabilizes
at the same value, regardless of the value of the initial dose. Changing the recurring dose does change the level at which it stabilizes. This line of discussion is extended in the next part of the investigation.
Take Time to Reflect
• What are the advantages and disadvantages of defining the relationship recursively? How might a recursive definition link with other experiences that students have had?
• What particular problems does the concept of limit pose for students? How might this context help them begin to approach this important topic?
• In what ways does technology enhance this investigation? In what ways does it detract from it?
National Research Council. High School Mathematics at Work: Essays and Examples for the Education of All Students. Washington, D.C.: National Academy Press, 1998.
Also see:
• 7.2 Using Graphs, Equations, and Tables to Investigate the Elimination of Medicine from the Body
□ 7.2.2 Long Term Effect
□ 7.2.3 Graphing the Solution
Using the determinant to find real roots
February 26th 2008, 11:58 AM #1
The question asks
Find the range of values of k that give this equation two distinct real roots. 3x^2 +kx +2=0.
I have used the determinant and get the answer k > ±2√6. The answer book says k < -2√6 and k > 2√6. Can someone explain why this is? I am trying to teach myself A level maths (1st
First of all I think that you are referring to the discriminant:
$\begin{gathered} \Delta = k^2 - 24 > 0 \hfill \\ \Leftrightarrow k^2 > 24 \hfill \\ \Leftrightarrow k > 2\sqrt 6 \quad or \quad k < -2\sqrt 6 \hfill \\ \end{gathered}$
Discriminant, not determinant
Sounds like you need to use the discriminant, not the determinant. Determinants are usually associated with matrices, and discriminants with quadratic equations and the quadratic formula.
The discriminant for a quadratic equations is given by b^2-4ac. Note that this is the part of the quadratic equation that is under the square root. There are three cases for the discriminant:
1) If it is positive, you will get two distinct, real roots (one from the positive value, one from the negative value of the square root of the discriminant).
2) If it is zero, you will get one real root (since -b +/- 0 is just -b; the +/- doesn't really do anything, since you're adding/subtracting zero)
3) If it is less than zero, you will get no real roots (since you'll be taking the square root of a negative number).
So, you need to find all values for k that makes the discriminant positive. Your discriminant is k^2 - (4)(3)(2), or k^2 - 24. Make an inequality with this greater than zero.
This means k^2 > 24, which means either k > sqrt(24) or k < -sqrt(24) (since squaring it will remove the negative sign). Simplify these inequalities and you should be in good shape!
Todd Werner
Director, Mathnasium West LA
Math Tutoring at Mathnasium Math Learning Centers
Hello, newbold!
Find the range of values of $k$ that give this equation
two distinct real roots: . $3x^2 +kx +2\:=\:0$
Your work is correct ... up to a point.
The discriminant is: . $k^2 - 24$ .which must be positive.
. . So we have: . $k^2 - 24 \:>\:0\quad\Rightarrow\quad k^2 \:>\:24$
Now, be very very careful . . .
The next step gives us: . $|k| \:>\:\sqrt{24}$
. . which means: . $k < -\sqrt{24}\:\text{ or }\:k \:> \:\sqrt{24}$
If this is not evident, test some values on the number line.
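To follow that suggestion concretely, here is a small Python sketch (an illustration, not from the original thread) that counts the real roots of 3x^2 + kx + 2 for values of k on either side of ±2√6 ≈ ±4.899:

import math

def num_real_roots(a, b, c):
    # Count distinct real roots of ax^2 + bx + c via the discriminant b^2 - 4ac.
    d = b * b - 4 * a * c
    if d > 0:
        return 2
    return 1 if d == 0 else 0

# Two distinct real roots require k < -2*sqrt(6) or k > 2*sqrt(6) ~ 4.899,
# matching the answer book.
for k in [-6, -5, -4.5, 0, 4.5, 5, 6]:
    print(f"k = {k:+.1f}: {num_real_roots(3, k, 2)} distinct real root(s)")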
Mass along a wire?
I have no idea how I should solve this problem: http://snag.gy/L6hWl.jpg Please help. I seriously have no idea.
From the definition of $dr$: $dr = dx\,i + dy\,j + dz\,k$. Take the magnitude of $dr$: $|dr|^2 = (dx)^2 + (dy)^2 + (dz)^2$, so $(\frac{|dr|}{dt})^2 = (\frac{dx}{dt})^2 + (\frac{dy}{dt})^2 + (\frac{dz}{dt})^2$. You can find the values of $\frac{dx}{dt}$, $\frac{dy}{dt}$ and $\frac{dz}{dt}$ in terms of $t$ easily. Then get $|dr|$ on its own and integrate it between $t=0$ and $t=1$.
Edit: I didn't see that the density wasn't constant. After you get an expression for $dr$ in the form $|dr| = g(t)\,dt$, since the density is $1+t$, find the integral of $(1+t)g(t)\,dt$ between 0 and 1.
Last edited by Shakarri; March 11th 2013 at 08:41 AM.
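The linked image is no longer available, so the actual curve is unknown; purely for illustration, here is a Python/SymPy sketch of Shakarri's recipe applied to an assumed parametrisation r(t) = (t, t^2, t^3), 0 ≤ t ≤ 1, with density 1 + t:

import sympy as sp

t = sp.symbols('t')

# Assumed curve, for illustration only (the problem's actual wire is unknown).
x, y, z = t, t**2, t**3

# g(t) = |dr/dt|, the speed factor from the derivation above.
g = sp.sqrt(sp.diff(x, t)**2 + sp.diff(y, t)**2 + sp.diff(z, t)**2)

# mass = integral from 0 to 1 of (1 + t) * g(t) dt: density times arc length.
mass = sp.Integral((1 + t) * g, (t, 0, 1))
print(mass.evalf())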
"After you get an expression for dr in the form |dr| = g(t)dt"? What do you mean by the numeric form of it, and by g(t)?
Help a dad - Triangles
My kid brought some homework back and I sat down to help...... And got stuck *blush*
Well, his homework was easy: find the missing length or angle of a triangle etc. We finished that with no problems and I felt quite happy I seemed to have helped, but then he asked me about angles of a
triangle -
Why are the angles always an exact figure, like 45 degrees or 61 degrees.
I should have just said, that's what the book says, but no, I went into explaining that the angles can have decimals and tried to explain about the angle being built with degrees and minutes / secs ...
This sank in and then he said, -
What tangent should be used for a 32.47 degree angle ?
I grinned and opened the Trigonometric table page and found 32 degrees = 0.625
and 33 degrees = 0.649. And said it lies within these two tangents.
But... I couldn't explain what the exact tangent to use would be. I told him we'd revisit the problem and so here I am... I've searched the net and haven't found any such precise tangent tables or anything which will help me to understand and explain the solution better.
Sorry it's a long one... Any help on this would be appreciated.
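One possible answer for revisiting the problem: between table entries you can linearly interpolate, or just compute the tangent directly; a short Python sketch comparing the two for 32.47 degrees:

import math

angle = 32.47  # degrees

# Direct computation (what a calculator's tan button does).
exact = math.tan(math.radians(angle))

# Linear interpolation between the table values for 32 and 33 degrees.
t32, t33 = 0.625, 0.649
interp = t32 + (angle - 32) * (t33 - t32)

print(f"tan({angle} deg) ~ {exact:.4f} direct, {interp:.4f} from the table")
# Both give about 0.6363, so interpolating the table is a fine approximation.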
MathGroup Archive: June 2004 [00394]
RE: Overlay graphs
• To: mathgroup at smc.vnet.net
• Subject: [mg48852] RE: [mg48829] Overlay graphs
• From: "David Park" <djmp at earthlink.net>
• Date: Sat, 19 Jun 2004 04:30:55 -0400 (EDT)
• Sender: owner-wri-mathgroup at wolfram.com
First, you have to use the various brackets correctly. Function arguments
are always enclosed in [].
There are several ways to combine your plots.
Method 1
plot1 = Plot[Sin[x], {x, 0, Pi/4}, DisplayFunction -> Identity];
plot2 = Plot[Cos[x], {x, Pi/4, Pi/2}, DisplayFunction -> Identity];
Show[plot1, plot2, DisplayFunction -> $DisplayFunction];
Method 2
Block[{$DisplayFunction = Identity},
plot1 = Plot[Sin[x], {x, 0, Pi/4}];
plot2 = Plot[Cos[x], {x, Pi/4, Pi/2}];]
Show[plot1, plot2];
Method 3
Show[
 Plot[Sin[x], {x, 0, Pi/4}],
 Plot[Cos[x], {x, Pi/4, Pi/2}]];
Method 4 (Using the DrawGraphics package from my web site.)
Draw2D[ (* presumably Draw2D, the display wrapper from DrawGraphics; the opening line was lost in the archive *)
 {Draw[Sin[x], {x, 0, Pi/4}],
  Draw[Cos[x], {x, Pi/4, Pi/2}]},
 Axes -> True];
The only advantage of the last method would be if you wanted to combine the
curves with other Graphics primitives such as Points, Rectangles, Lines and
using various colors, say, and you had to draw and overlay them in a
specific order.
David Park
djmp at earthlink.net
From: JosefG [mailto:J_o_s_e_f at hotmail.com]
To: mathgroup at smc.vnet.net
I was wondering how to overlay graphs in Mathmatica. I need to
overlay: Plot[Sin{(x)},{x,0,Pi/4}] and
Plot[{Cos(x)},{x,Pi/4,Pi/2}]. I don't know how to do it. A phase shift
doesn't work because I need the data from the original points.
Thanks alot,
It was just a matter of time...
December 10, 2003 11:54 PM
26 year old student finds largest known prime number.
The number is 6,320,430 digits long and would need 1,400 to 1,500 pages to write out. It is more than 2 million digits larger than the
previous largest known prime number
Why? What use is it? How can knowing the next highest prime number be of any benefit?
One word:
Prime numbers are essential in producing keys for cryptography.
posted by DailyBread (14 comments total)
While large primes are indeed used in crypto, those aren't quite this big. This discovery mainly has curiosity value (not that there's anything wrong with that).
posted by fvw at 12:00 AM on December 11, 2003
yeah what fvw said... it's all about bragging rights and is rather worthless. not to mention this is old news, but hey... whatever
posted by banished at 12:04 AM on December 11, 2003
Anyways, if anyone used the largest prime for their encryption needs, then it would be pretty shitty encryption.
The NSA, CIA, and various telephone companies employ whole teams of people to come up with 'secret primes,' under lock and key, available only to those that know about them.
posted by kaibutsu at 12:41 AM on December 11, 2003
"it's all about bragging rights and is rather worthless."
Hey Pal My Prime Number Is Bigger Then Yours...
Yep I Love That Sweet Poontang Pi ;)
(If Math Geeks Acted Like Circa 1980's Films Frat Boys)
posted by Dreamghost at 1:05 AM on December 11, 2003
(If Math Geeks Acted Like Circa 1980's Films Frat Boys)
favorite math pickup lines:
"wanna get drunk and multiply?"
"i'd love to see the sum of your parts."
posted by kaibutsu at 1:26 AM on December 11, 2003
Metafilter: Like being back at school.
posted by nthdegx at 4:44 AM on December 11, 2003
hehehe. great link Space Coyote.
posted by dabitch at 6:00 AM on December 11, 2003
Is TubGirl the Japanese Metamucil ad?
Has Mr Shafer been contacted by the SETI people?
posted by RubberHen at 6:29 AM on December 11, 2003
This is the same kid that has a hobby of searching for palindromic prime numbers. I think the guy's gotta get a girlfriend and get laid.
Now, if he created a program to figure out celebrity's home phone numbers, THAT would be worthwhile!
posted by fenriq at 8:11 AM on December 11, 2003
Hey, now. Some of us have girlfriends, get laid, and still do inexplicably dorky stuff like that.
/needless defensiveness
posted by cortex at 3:10 PM on December 11, 2003
The earlier one has links to goatse in the comments, so I vote to keep this one.
Well here's the link again (courtesy homunculus' post). Can we delete that post now?
posted by mikhail at 6:37 PM on December 11, 2003
I let the bear shit for 16 hours and got about 44,000 prime numbers. Here are my stats:
Prime now:5309071
Prime count:43,908
Prime density:8.3%
Uptime: 16:12
Extra credit: calculate how long the bear would have to shit to get to the largest known prime number.
posted by dgaicun at 10:49 PM on December 11, 2003
Comparing cartesian closed categories of core compactly generated spaces, Topology and its Applications
Results 1 - 10 of 16
- In Proceedings of the 22nd Annual IEEE Symposium on Logic In Computer Science , 2007
"... Abstract. Perhaps surprisingly, there are infinite sets that admit mechanical exhaustive search in finite time. We investigate three related questions: What kinds of infinite sets admit
mechanical exhaustive search in finite time? How do we systematically build such sets? How fast can exhaustive sea ..."
Cited by 14 (8 self)
Abstract. Perhaps surprisingly, there are infinite sets that admit mechanical exhaustive search in finite time. We investigate three related questions: What kinds of infinite sets admit mechanical
exhaustive search in finite time? How do we systematically build such sets? How fast can exhaustive search over infinite sets be performed? Keywords. Higher-type computability and complexity,
Kleene–Kreisel functionals, PCF, Haskell, topology. 1.
- Logical Methods in Computer Science
"... Abstract. We say that a set is exhaustible if it admits algorithmic universal quantification for continuous predicates in finite time, and searchable if there is an algorithm that, given any
continuous predicate, either selects an element for which the predicate holds or else tells there is no examp ..."
Cited by 13 (12 self)
Abstract. We say that a set is exhaustible if it admits algorithmic universal quantification for continuous predicates in finite time, and searchable if there is an algorithm that, given any
continuous predicate, either selects an element for which the predicate holds or else tells there is no example. The Cantor space of infinite sequences of binary digits is known to be searchable.
Searchable sets are exhaustible, and we show that the converse also holds for sets of hereditarily total elements in the hierarchy of continuous functionals; moreover, a selection functional can be
constructed uniformly from a quantification functional. We prove that searchable sets are closed under intersections with decidable sets, and under the formation of computable images and of finite
and countably infinite products. This is related to the fact, established here, that exhaustible sets are topologically compact. We obtain a complete description of exhaustible total sets by
developing a computational version of a topological Arzela–Ascoli type characterization of compact subsets of function spaces. We also show that, in the non-empty case, they are precisely the
computable images of the Cantor space. The emphasis of this paper is on the theory of exhaustible and searchable sets, but we also briefly sketch applications. 1.
- GDP FESTSCHRIFT ENTCS, TO APPEAR
"... We motivate and define a category of topological domains, whose objects are certain topological spaces, generalising the usual ω-continuous dcppos of domain theory. Our category supports all the
standard constructions of domain theory, including the solution of recursive domain equations. It also su ..."
Cited by 13 (3 self)
We motivate and define a category of topological domains, whose objects are certain topological spaces, generalising the usual ω-continuous dcppos of domain theory. Our category supports all the
standard constructions of domain theory, including the solution of recursive domain equations. It also supports the construction of free algebras for (in)equational theories, can be used as the basis
for a theory of computability, and provides a model of parametric polymorphism.
"... A topological space is Noetherian iff every open is compact. Our starting point is that this notion generalizes that of well-quasi order, in the sense that an Alexandroff-discrete space is
Noetherian iff its specialization quasi-ordering is well. For more general spaces, this opens the way to verify ..."
Cited by 10 (5 self)
A topological space is Noetherian iff every open is compact. Our starting point is that this notion generalizes that of well-quasi order, in the sense that an Alexandroff-discrete space is Noetherian
iff its specialization quasi-ordering is well. For more general spaces, this opens the way to verifying infinite transition systems based on non-well quasi ordered sets, but where the preimage
operator satisfies an additional continuity assumption. The technical development rests heavily on techniques arising from topology and domain theory, including sobriety and the de Groot dual of a
stably compact space. We show that the category Nthr of Noetherian spaces is finitely complete and finitely cocomplete. Finally, we note that if X is a Noetherian space, then the set of all (even
infinite) subsets of X is again Noetherian, a result that fails for well-quasi orders. 1.
- UNDER CONSIDERATION FOR PUBLICATION IN MATH. STRUCT. IN COMP. SCIENCE , 2007
"... It is a fact of experience from the study of higher type computability that a wide range of approaches to defining a class of (hereditarily) total functionals over N leads in practice to a
relatively small handful of distinct type structures. Among these are the type structure C of Kleene-Kreisel co ..."
Cited by 4 (2 self)
It is a fact of experience from the study of higher type computability that a wide range of approaches to defining a class of (hereditarily) total functionals over N leads in practice to a relatively
small handful of distinct type structures. Among these are the type structure C of Kleene-Kreisel continuous functionals, its effective substructure C eff, and the type structure HEO of the
hereditarily effective operations. However, the proofs of the relevant equivalences are often non-trivial, and it is not immediately clear why these particular type structures should arise so
ubiquitously. In this paper we present some new results which go some way towards explaining this phenomenon. Our results show that a large class of extensional collapse constructions always give
rise to C, C eff or HEO (as appropriate). We obtain versions of our results for both the “standard” and “modified” extensional collapse constructions. The proofs make essential use of a technique due
to Normann. Many new results, as well as some previously known ones, can be obtained as instances of our theorems, but more importantly, the proofs apply uniformly to a whole family of constructions,
and provide strong evidence that the above three type structures are highly canonical mathematical objects.
, 2009
"... Given a continuous functional f: X → Y and y ∈ Y, we wish to compute x ∈ X such that f(x) = y, if such an x exists. We show that if x is unique and X and Y are subspaces of Kleene–Kreisel spaces
of continuous functionals with X exhaustible, then x is computable uniformly in f, y and the exhaustion ..."
Cited by 1 (1 self)
Given a continuous functional f: X → Y and y ∈ Y, we wish to compute x ∈ X such that f(x) = y, if such an x exists. We show that if x is unique and X and Y are subspaces of Kleene–Kreisel spaces of
continuous functionals with X exhaustible, then x is computable uniformly in f, y and the exhaustion functional ∀X: 2 X → 2. We also establish a version of the above for computational metric spaces X
and Y, where X is computationally complete and has an exhaustible set of Kleene–Kreisel representatives. Examples of interest include functionals defined on compact spaces X of analytic functions.
Our development includes a discussion of the generality of our constructions, bringing QCB spaces into the picture, in addition to general topological considerations. Keywords and phrases.
Higher-type computability, Kleene–Kreisel spaces of continuous functionals, exhaustible set, searchable set, QCB space, admissible representation, topology in the theory of computation with infinite
objects. 1
"... Abstract. We construct a continuous model of Gödel’s system T and its logic HA ω in which all functions from the Cantor space 2 N to the natural numbers are uniformly continuous. Our development
is constructive, and has been carried out in intensional type theory in Agda notation, so that, in partic ..."
Cited by 1 (1 self)
Abstract. We construct a continuous model of Gödel’s system T and its logic HA ω in which all functions from the Cantor space 2 N to the natural numbers are uniformly continuous. Our development is
constructive, and has been carried out in intensional type theory in Agda notation, so that, in particular, we can compute moduli of uniform continuity of T-definable functions 2 N → N. Moreover, the
model has a continuous Fan functional of type (2 N → N) → N that calculates moduli of uniform continuity. We work with sheaves, and with a full subcategory of concrete sheaves that can be presented
as sets with structure, which can be regarded as spaces, and whose natural transformations can be regarded as continuous maps.
"... We present two probabilistic powerdomain constructions in topological domain theory. The first is given by a free ”convex space” construction, fitting into the theory of modelling computational
effects via free algebras for equational theories, as proposed by Plotkin and Power. The second is given b ..."
Cited by 1 (0 self)
We present two probabilistic powerdomain constructions in topological domain theory. The first is given by a free ”convex space” construction, fitting into the theory of modelling computational
effects via free algebras for equational theories, as proposed by Plotkin and Power. The second is given by an observationally induced approach, following Schröder and Simpson. We show the two
constructions coincide when restricted to ω-continuous dcppos, in which case they yield the space of (continuous) probability valuations equipped with the Scott topology. Thus either construction
generalises the classical domain-theoretic probabilistic powerdomain. On more general spaces, the constructions differ, and the second seems preferable. Indeed, for countably-based spaces, we
characterise the observationally induced powerdomain as the space of probability valuations with weak topology. However, we show that such a characterisation does not extend to non countably-based
, 2010
"... We present a method for constructing from a given domain representation of a space X with underlying domain D, a domain representation of a subspace of compact subsets of X where the underlying
domain is the Plotkin powerdomain of D. We show that this operation is functorial over a category of domai ..."
Add to MetaCart
We present a method for constructing from a given domain representation of a space X with underlying domain D, a domain representation of a subspace of compact subsets of X where the underlying
domain is the Plotkin powerdomain of D. We show that this operation is functorial over a category of domain representations with a natural choice of morphisms. We study the topological properties of
the space of representable compact sets and isolate conditions under which all compact subsets of X are representable. Special attention is paid to admissible representations and representations of
metric spaces.
, 2011
"... In recent work we developed the notion of exhaustible set as a higher-type computational counter-part of the topological notion of compact set. In this paper we give applications to the
computation of solutions of higher-type equations. Given a continuous functional f: X → Y and y ∈ Y, we wish to co ..."
Add to MetaCart
In recent work we developed the notion of exhaustible set as a higher-type computational counter-part of the topological notion of compact set. In this paper we give applications to the computation
of solutions of higher-type equations. Given a continuous functional f: X → Y and y ∈ Y, we wish to compute x ∈ X such that f(x) = y, if such an x exists. We show that if x is unique and X and Y are
subspaces of Kleene–Kreisel spaces of continuous functionals with X exhaustible, then x is computable uniformly in f, y and the exhaustibility condition. We also establish a version of this for computational metric spaces X and Y, where X is computationally complete and has an exhaustible set of Kleene–Kreisel representatives. Examples of interest include evaluation functionals defined on
compact spaces X of bounded sequences of Taylor coefficients with values on spaces Y of real analytic functions defined on a compact set. A corollary is that it is semi-decidable whether a function
defined on such a compact set fails to be analytic, and that the Taylor coefficients of an analytic function can be computed extensionally from the function. Keywords and phrases. Higher-type
computability, Kleene–Kreisel spaces of continuous functionals, exhaustible set, searchable set, computationally compact set, QCB space, admissible representation, topology in the theory of
computation. 1
Visual Physics Wiki
• This is NOT an educational site. The views expressed here are not those of mainstream physics.
• Due to spam attempts, user registration has been disabled. If you want to contribute to the wiki, email me at the address given in the Contact page.
• sim : Article with simulation -- stb : Article that needs development (stub).
Main Page
From Visual Physics Wiki
Welcome to Visual Physics 2.0 and the Visual Physics Project
Visual Physics 2.0 is a theory development portal based on a number of integrated components (see About the Site for details). It is the home of the Visual Physics Project, whose purpose is the
exploration of what could be termed "Metaclassical Physics". Specifically we will work on the following broad areas of physics, roughly in the order given:
• Special Relativity
• General Relativity and Gravitation
• Electrostatic Field and Electromagnetic Waves
• Quanta, Wave-Particle Duality and the Uncertainty Principle
To state the obvious, these fields have been exhaustively explored for more than a century by great minds, great physicists and mathematicians. This site will not presume to find fault in their
formulations. Our premise is simply that perhaps not all avenues and byways of theory development have been explored, and so our aim is to follow different pathways hoping to discover some neglected
pieces of the puzzle.
Project Outline
This project started off with a simple idea about the physical meaning of what we term "spacetime curvature". (You can read more about the original idea in the History of the Visual Physics Project.)
The development of this concept led to the Proper Time Adjusted Special Relativity theory. This physical approach can also be applied usefully in other areas of metaclassical physics, and this is the
general aim of the Visual Physics Project. Now, let's see the specific aims of the project for each of its main areas of interest. An important first step has been made in Special Relativity, while
its aims for the other areas still remain in the Roadmap and will hopefully be accomplished.
Special Relativity
This is the starting point of the whole project. By giving the Proper Time of the Moving Observer its "proper place" in the graph of motion, we find that the Moving Body is always situated on the
circumference of a circle of increasing radius t, and we are led to the conclusion that the phenomena of Special Relativity are produced by the curvature of spacetime, specifically by the phenomena
of the curved expanding universe. In the main article, you will find the full treatment of the Proper Time Adjusted Special Relativity theory in the form of an interactive presentation powered by a
Java simulation, and also its implications and the next steps in the Roadmap of the theory, mainly the introduction of gravitation in the Proper Time Adjusted Special Relativity theory.
Gravitation: Extrinsic Relativity
Our main aim here is an extrinsic formulation of General Relativity. The intrinsic formulation of the theory on the basis of Tensor Calculus has been exhaustively developed, so here we will look at
its extrinsic formulation. We want to see if it will agree with what we will have come up with in the Proper Time Adjusted Special Relativity with Gravitation. This will allow us to describe much
more clearly the way that spacetime curvature produces the acceleration of gravity.
Electrostatic Field and Electromagnetic Waves
Here we will take the mechanism of acceleration production that we will have come up with for gravitation, and we will try to apply it to the electrostatic field. This is reasonable since
gravitational and electrostatic forces (accelerations) have exactly the same form, and this suggests that the latter are also produced by some kind of spacetime curvature. Again, the main article
gives a fuller description of the needed formulation.
When we have such a formulation, hopefully we will have a much clearer picture of what electromagnetic waves are. Most probably they will prove to be fluctuations of the curvature produced in
spacetime by the existence of electric charge. So if this proves correct, what "waves" in electromagnetic waves is spacetime itself –-spacetime is the "ether."
Quanta, Wave-Particle Duality and the Uncertainty Principle
When we have this clearer picture of EM waves, hopefully it will be easier to see how these fluctuations can be quantized as to the energy they carry. Maybe this happens through the existence of a
minimal length interval, or a minimal time interval, or most probably both. At such scales, we are looking at the "pixels" of spacetime (and a pixel has both minimal length and minimal width). Or
there could also be "gaps" between successive discrete intervals.
Based on that, perhaps it will be easier to see what the uncertainty principle really means, what exactly wave-particle duality is, and if the wavefunction describes any real entity or it is a
statistical description of phenomena that are deterministic to a greater or lesser extent (and if such a thing as its notorious collapse really exists).
Finding a series representation for pi
This article finds an infinite series representation for pi.
We should note that arctan(1) = π/4. It is the main idea of the proof. We will find a Taylor series representation for the inverse tangent and the proof will be complete.
Finding the series representation
Observe these:
$\displaystyle \arctan^{(0)}(0)=0$
$\displaystyle \arctan^{(1)}(0) = 1$
$\displaystyle \arctan^{(2)}(0) = 0$
$\displaystyle \arctan^{(3)}(0) = -2$
From simple observation or mathematical induction, we obtain that the nth derivative is zero if n is even, and it is this when n is odd:
$\arctan^{(n)}(0) = (-1)^{\frac{n-1}{2}}(n-1)!$
So we can strike out the even derivative terms from the Taylor series. Doing that, we obtain the following series for arctan:
$\displaystyle \arctan(x) = \sum_{k=0}^{\infty}\frac{(-1)^{k}}{2k+1}x^{2k+1}$
Evaluation at x=1 yields
$\frac{\pi}{4} = \sum_{k=0}^{\infty}\frac{(-1)^{k}}{2k+1}$
Hence, it immediately follows that
$\pi = 4\sum_{k=0}^{\infty}\frac{(-1)^{k}}{2k+1} = 4\left(\frac{1}{1}-\frac{1}{3}+\frac{1}{5}-\frac{1}{7}+\frac{1}{9}...\right)$
(Strictly speaking, x = 1 lies on the boundary of the interval of convergence; the alternating series test gives convergence there, and Abel's theorem justifies evaluating the Taylor series at that point.)
Proof complete.
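As a numerical illustration (an addition here, not part of the original derivation), a few partial sums in Python show how slowly the series converges:

# Partial sums of pi = 4 * sum((-1)^k / (2k + 1)).
for terms in (10, 100, 10_000, 1_000_000):
    approx = 4 * sum((-1) ** k / (2 * k + 1) for k in range(terms))
    print(f"{terms:>9} terms: {approx:.10f}")
# The error shrinks like 1/terms: roughly one extra correct digit
# per tenfold increase in the number of terms.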
Product representation for pi
From the Basel problem, it follows that the infinite product representation Euler found for sin(x)/x is, in fact, true, despite it relying on the factoring of an infinite polynomial. This formula is
$\displaystyle \frac{\sin(x)}{x} = \prod_{n=1}^{\infty}\left(1-\frac{x^{2}}{n^{2}\pi^{2}}\right)$
Since sin(π/2) is equal to 1, it immediately follows that
$\displaystyle \frac{2}{\pi} = \prod_{n=1}^{\infty}\left(1-\frac{1}{4n^{2}}\right)$
From here, with a bit of rearranging, we obtain
$\displaystyle \frac{\pi}{2} = \prod_{n=1}^{\infty}\frac{4n^{2}}{4n^{2}-1} = \prod_{n=1}^{\infty}\frac{2n\cdot 2n}{(2n-1)(2n+1)}$
which gives, when expanded,
$\displaystyle \frac{\pi}{2}=\frac{2 \cdot 2}{1 \cdot 3}\cdot\frac{4 \cdot 4}{3 \cdot 5}\cdot\frac{6 \cdot 6}{5 \cdot 7}\cdot\frac{8 \cdot 8}{7 \cdot 9}...$
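A quick numerical check of the partial products (again an illustration added here, under the same convention):

# Partial products of pi/2 = prod(4n^2 / (4n^2 - 1)).
prod = 1.0
for n in range(1, 100_001):
    prod *= 4 * n * n / (4 * n * n - 1)
    if n in (10, 1_000, 100_000):
        print(f"n = {n:>6}: 2*prod = {2 * prod:.8f}")
# Approaches pi = 3.14159265..., converging even more slowly than the series.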
Other representations
Pi is also equal to the values of some definite integrals:
$\displaystyle \pi = \int_{-\infty}^{\infty}\frac{dx}{1+x^{2}}$
$\displaystyle \pi = 2\int_{-1}^{1}\sqrt{1-x^{2}}\;dx$
Also these hold:
$\displaystyle \pi = \Gamma\left(\frac{1}{2}\right)^{2}$
$\displaystyle \pi = \sqrt{6\sum_{n=1}^{\infty}\frac{1}{n^{2}}}$
The first two can be verified via integration, the third one follows from the Weierstrass product for the Gamma function and the fourth one is the result of the Basel problem.
Kentfield Math Tutor
Find a Kentfield Math Tutor
...I have acted as a tutor for MBA students in every course they took in their graduate school curriculum. I have a strong background in statistics and econometrics. I have an undergraduate degree
in biology and math and have worked many years as a data analyst in a medical environment.
49 Subjects: including statistics, finance, actuarial science, calculus
...I am a certified EMT via the San Francisco Paramedics Association; thus, I am CPR, First Aid, and AED certified. This course taught me how to remain calm in any emergency situation as well as provide proper care to an injured person. In addition, this includes proper administration of common prescription medications such as inhalers, vasodilators, vasoconstrictors, etc.
30 Subjects: including algebra 1, reading, algebra 2, English
...My hours are flexible. For regular students, I am available for help by phone for "emergency" help.Linear Algebra is key to all post-calculus mathematics and includes the study of vectors in
the plane and space, systems of linear equations, matrices, vectors and vector spaces. Applications include differential equations, least squares approximations, and models in economics.
7 Subjects: including algebra 1, algebra 2, calculus, precalculus
...And I believe every student has the potential to be successful with Math. I can help by explaining Math concepts in simple ways the student can understand and remember. And I do so in a
positive, patient and encouraging manner.
18 Subjects: including statistics, algebra 1, algebra 2, American history
...I have used an Apple computer since 1984. I use the Mac for all of my computer needs (email, internet use, writing (Word), spreadsheets (Excel), accounting (Quicken and Quickbooks), music
(iTunes) etc. on a daily basis. I also own an iPad.
12 Subjects: including prealgebra, reading, writing, English
Southeastern Calculus Tutor
Find a Southeastern Calculus Tutor
...Aside from that, I occasionally tutored high school mathematics and other more advanced college courses, such as Advanced Calculus, Logic and Set Theory, Foundations of Math, and Abstract
Algebra. Many of these subjects I also tutored privately. In addition to this, I've done substantial work i...
26 Subjects: including calculus, English, reading, writing
...I live in Plymouth Meeting, PA. I like to write in my free time: I write comedy sketches and scripts. Also, during the day, I stay at home with my two young daughters.I took a differential
equations course in fall of 2007 at Rensselaer Polytechnic Institute.
25 Subjects: including calculus, chemistry, physics, writing
...I was very effective in critiques and frequently assisted the other students in my classes. I've been painting with acrylic paint for over 10 years and it is one of my primary media. As a
trained scientific illustrator, acrylic paint is a major medium for me.
19 Subjects: including calculus, geometry, algebra 2, trigonometry
...I received a degree in mathematics from Cornell University, and have extensive experience tutoring in Calculus. I have taken far too many math classes related to calculus, from an introductory
high school class to intensive real analysis where we rigorously proved the standard formulas of calcul...
35 Subjects: including calculus, Spanish, chemistry, reading
...During my coursework, I took Calc I, Calc II, Calc III and Differential equations. My knowledge of calculus was also applied in my structural engineering coursework during senior year. All
through college I used Excel to present my data, and in the workplace I have done the same.
21 Subjects: including calculus, reading, physics, geometry
Third grade math introduces children to some of the most frustrating concepts – multiplication and division. Kids also learn the properties of operations and begin to build the skills necessary for
solving more complex equations. At this level, practice makes perfect and offering children a variety of different ways to practice these new skills is the key to helping them develop a comprehensive
With Math Game Time, third graders will find many different ways to learn and practice their new skills. They can access a variety of free and fun games focused on multiplication, division, and
solving equations, practicing their new skills by racing, drilling, and using them to solve complicated puzzles and other challenges. For children who need a little more help or structured practice,
free videos feature teachers who guide them through the skills step by step. Print out the free worksheets for practice time away from the computer too.
ABCD x E Puzzle - Solution ?
Hi this is my first post here.
On the puzzle ABCD × E = DCBA
Should it not be, (A)(B)(C)(D) x E = (D)(C)(B)(A)
Therefore A, B, C, D can be anything as long as E = 1 or 0?
If this has been discussed before, sorry.
Thanks all!
Last edited by dford (2013-12-07 16:04:33)
|
{"url":"http://www.mathisfunforum.com/viewtopic.php?id=20264","timestamp":"2014-04-18T23:35:53Z","content_type":null,"content_length":"10492","record_id":"<urn:uuid:789c9dff-4472-4e26-9224-3e8b39fefa3e>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00659-ip-10-147-4-33.ec2.internal.warc.gz"}
|