content stringlengths 86 994k | meta stringlengths 288 619 |
|---|---|
The Size of GL_2(Z_7)
November 1st 2010, 11:49 AM #1
The Size of GL_2(Z_7)
Hello everyone. I am asked to find the size of the group $G=GL_2(\mathbb{Z}_7)$ and would like to verify my work.
This group acts on the vector space $\mathbb{Z}_7\times \mathbb{Z}_7$ with basis $\{(1,0), (0,1)\}$. From this, we have the orbit-stabilizer theorem: if $x\in \mathbb{Z}_7\times \mathbb{Z}_7$,
then $|G|=|Gx|\cdot |G_x|$, that is, the size of $G$ is the size of the orbit of $x$ times the size of the stabilizer of $x$. Take $x=(1,0)$. If $A\in G$, $Ax$ can possibly be any nonzero vector.
There are 48 of these. Therefore, $|Gx|=48$. Then, if $Ax=x$, we must have $A(0,1)$ equal to another vector not a multiple of $x$, since $A$ is invertible. In other words, $A(0,1)$ cannot be $
(0,0), (1,0), (2,0),\ldots,(6,0)$. There are 42 of these, so $|G_x|=42$.
We conclude that $|G|=48\cdot 42=2016$.
Comments or suggestions?
Hello everyone. I am asked to find the size of the group $G=GL_2(\mathbb{Z}_7)$ and would like to verify my work.
This group acts on the vector space $\mathbb{Z}_7\times \mathbb{Z}_7$ with basis $\{(1,0), (0,1)\}$. From this, we have the orbit-stabilizer theorem: if $x\in \mathbb{Z}_7\times \mathbb{Z}_7$,
then $|G|=|Gx|\cdot |G_x|$, that is, the size of $G$ is the size of the orbit of $x$ times the size of the stabilizer of $x$. Take $x=(1,0)$. If $A\in G$, $Ax$ can possibly be any nonzero vector.
There are 48 of these. Therefore, $|Gx|=48$. Then, if $Ax=x$, we must have $A(0,1)$ equal to another vector not a multiple of $x$, since $A$ is invertible. In other words, $A(0,1)$ cannot be $
(0,0), (1,0), (2,0),\ldots,(6,0)$. There are 42 of these, so $|G_x|=42$.
We conclude that $|G|=48\cdot 42=2016$.
Comments or suggestions?
Personally I don't see why you can't just count the possibilities based on the columns. In general $\displaystyle \text{GL}_n\left(\mathbb{F}_p\right)=\prod_{j=0}^{ n-1}\left(p^n-p^j\right)$. So,
$\left|\text{GL}_2\left(\mathbb{Z}_7\right)\right|= \left(7^2-1\right)\left(7^2-7\right)=2016$.
EDIT: And by my first sentence I really mean I don't know the theorem you're talking about but know that what follows is true haha
November 1st 2010, 01:41 PM #2 | {"url":"http://mathhelpforum.com/advanced-algebra/161761-size-gl_2-z_7-a.html","timestamp":"2014-04-18T03:05:25Z","content_type":null,"content_length":"42022","record_id":"<urn:uuid:c0bca4e6-3fa7-4c3c-ad4b-52dbeee544bd>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00286-ip-10-147-4-33.ec2.internal.warc.gz"} |
Modified Bessel function of the first kind: Representations through more general functions
Representations through more general functions
Through hypergeometric functions
Involving [0]F^~[1]
Involving [0]F[1]
Involving [1]F[1]
Through Meijer G
Classical cases for the direct function itself
Classical cases involving exp
Classical cases involving cosh
Classical cases involving sinh
Classical cases involving cosh,sinh
Classical cases for powers of Bessel I
Classical cases for products of Bessel I
Classical cases involving Bessel J
Classical cases involving Bessel K
Classical cases involving Bessel Y
Classical cases involving Struve L
Classical cases involving [0]F[1]
Classical cases involving [0]F^~[1]
Generalized cases for the direct function itself
Generalized cases involving cosh
Generalized cases involving sinh
Generalized cases involving cosh,sinh
Generalized cases involving Ai
Generalized cases involving Ai^'
Generalized cases involving Bi
Generalized cases involving Bi^'
Generalized cases for powers of Bessel I
Generalized cases for products of Bessel I
Generalized cases involving Bessel J
Generalized cases involving Bessel K
Generalized cases involving Bessel Y
Generalized cases involving Struve L
Generalized cases involving [0]F[1]
Generalized cases involving [0]F^~[1]
Through other functions | {"url":"http://functions.wolfram.com/Bessel-TypeFunctions/BesselI/26/ShowAll.html","timestamp":"2014-04-17T10:00:45Z","content_type":null,"content_length":"124586","record_id":"<urn:uuid:467a972b-9929-415b-b99f-b3aadeab5427>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00147-ip-10-147-4-33.ec2.internal.warc.gz"} |
Longwood, New York, NY
New York, NY 10016
Is math confusing? I've been in your shoes. Let me help!
...During my time there I received both individual and team awards at the Florida region, state, and national level. SUBJECTS At this time I am available to tutor Prealgebra, Algebra 1, Geometry and
Algebra 2
. HOURS I am available to tutor Monday-Friday. TUTORING...
Offering 4 subjects including algebra 2 | {"url":"http://www.wyzant.com/Longwood_New_York_NY_algebra_2_tutors.aspx","timestamp":"2014-04-18T08:23:45Z","content_type":null,"content_length":"62621","record_id":"<urn:uuid:de246330-12d0-4a72-bf17-f28bc8f5d228>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00459-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: How can i put a funtion (like f=x^2) before use the x?
Replies: 2 Last Post: May 5, 2009 4:04 PM
Messages: [ Previous | Next ]
Rodolfo Moura How can i put a funtion (like f=x^2) before use the x?
Posted: Apr 18, 2009 5:01 PM
Posts: 1
From: Sao Joao del rei Hi, i'd like to know if is possible use a function before put the independent variable.
Registered: 4/18/09 If possible, how?
thank you
Date Subject Author
4/18/09 How can i put a funtion (like f=x^2) before use the x? Rodolfo Moura
4/27/09 Re: How can i put a funtion (like f=x^2) before use the x? HallsofIvy
5/5/09 Re: How can i put a funtion (like f=x^2) before use the x? Hecman Gun | {"url":"http://mathforum.org/kb/thread.jspa?threadID=1921010","timestamp":"2014-04-18T00:18:11Z","content_type":null,"content_length":"18463","record_id":"<urn:uuid:e0653a9c-20b9-4f60-8e39-5f0b778a384a>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00536-ip-10-147-4-33.ec2.internal.warc.gz"} |
cs381k p. 165
Resolution Step for Propositional Calculus
A clause is a disjunction of literals (atoms or negations of atoms).
Select two clauses C[1] and C[2] that have exactly one atom that is positive in one clause and negated in the other.
Form a new clause, consisting of all literals of both clauses except for the two complementary literals, and add it to the set of clauses.
Theorem: The new clause produced by resolution is a logical consequence of the two parent clauses.
Proof: Let the parent clauses be C[1] = L &or C[1]' and C[2] = ¬ L &or C[2]'; the resolvent is H = C[1]' &or C[2]'. Suppose that C[1] and C[2] are true in an interpretation I.
Case 1: L = true in I. Then since C[2] = ¬ L &or C[2]', C[2]' must be true in I and H is true in I.
Case 2: L = false in I. Then since C[1] = L &or C[1]', C[1]' must be true in I and H is true in I.
Contents    Page-10    Prev    Next    Page+10    Index    | {"url":"http://www.cs.utexas.edu/~novak/cs381k165.html","timestamp":"2014-04-21T00:11:08Z","content_type":null,"content_length":"1843","record_id":"<urn:uuid:4c3e7697-5531-4a4d-8756-c56716c55a1d>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00257-ip-10-147-4-33.ec2.internal.warc.gz"} |
La Puente Prealgebra Tutor
Find a La Puente Prealgebra Tutor
...Then in the basic painting course, I started working with different media, in addition to linseed oil, such as glazing medium, beeswax, poppy seed oil, and stand oil. I also got to try
different techniques like palette-knife painting and wet-into-wet. I continue to take these weekly classes, and I'm excited to see my skills develop.
24 Subjects: including prealgebra, reading, French, English
...I have experience with college, high school, and elementary school students in various subjects. I come from rough and humble beginnings, where dedication and talent allowed me to graduate from
college. I love passing on these values to students, particularly those interested in attending college.
11 Subjects: including prealgebra, reading, biology, algebra 1
...During my time at USC, I conducted astronomy research studying the flow of subsurface matter in the sun, and I also worked at Mt. Wilson Observatory. After graduation, I moved to Kiev, Ukraine
to teach English.
15 Subjects: including prealgebra, English, physics, writing
...These broad experiences and content knowledge allow me to teach key concepts and information in unique and meaningful ways to students. I believe that every student can learn. I am a patient
and resourceful teacher that engages students in meaningful learning.
11 Subjects: including prealgebra, geometry, biology, anatomy
Hello! I am currently a graduate student at California State University, Fullerton. I received my B.S. in Mathematics/Applied Chemistry in the summer of 2013 from the University of California,
Riverside with my major GPA just over 3.5.
7 Subjects: including prealgebra, chemistry, calculus, algebra 1 | {"url":"http://www.purplemath.com/la_puente_prealgebra_tutors.php","timestamp":"2014-04-17T11:10:47Z","content_type":null,"content_length":"23973","record_id":"<urn:uuid:a664804f-627e-43a0-9a1d-fdb8668f8d8f>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00163-ip-10-147-4-33.ec2.internal.warc.gz"} |
2200 british pounds to usd
You asked:
2200 british pounds to usd
Say hello to Evi
Evi is our best selling mobile app that can answer questions about local knowledge, weather, books, music, films, people and places, recipe ideas, shopping and much more. Over the next few months we
will be adding all of Evi's power to this site.
Until then, to experience all of the power of Evi you can download Evi for free on iOS, Android and Kindle Fire. | {"url":"http://www.evi.com/q/2200_british_pounds_to_usd","timestamp":"2014-04-21T03:19:10Z","content_type":null,"content_length":"56888","record_id":"<urn:uuid:95e87749-7b5c-47e7-865c-d37c305718ae>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00145-ip-10-147-4-33.ec2.internal.warc.gz"} |
Summary: Reducing Truthtelling Online Mechanisms to Online Optimization
Baruch Awerbuch Yossi Azar y Adam Meyerson z
We describe a general technique for converting an online algorithm B to a truthtelling mechanism. We require that the
original online competitive algorithm has certain ``niceness'' properties in that actions on future requests are independent
of the actual value of requests which were accepted (though these actions will of course depend upon the set of accepted
requests). Under these conditions, we are able to give an online truth telling mechanism (where the values of requests
are given by bids which may not accurately represent the valuation of the requesters) such that our total profit is within
O( + log ) of the optimum offline profit obtained by an omniscient algorithm (one which knows the true valuations
of the users). Here is the competitive ratio of B for the optimization version of the problem, and is the ratio of the
maximum to minimum valuation for a request. In general there is
32 ) lower bound on the ratio of worstcase profit
for a truth telling mechanism when compared to the profit obtained by an omniscient algorithm, so this result is in some
sense best possible. In addition, we prove that our construction is resilient against many forms of ``cheating'' attempts,
such as forming coalitions.
We demonstrate applications of this result to several problems. We develop online truthtelling mechanisms for online
routing and admission control of path or multicast requests, assuming large network capacities. Assuming the existance
of an algorithm B for the optimization version of the problem, our techniques provide truthtelling mechanisms for general
combinatorial auctions. However, designing optimization algorithms may be difficult in general because of online or | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/980/3715761.html","timestamp":"2014-04-16T07:53:22Z","content_type":null,"content_length":"8961","record_id":"<urn:uuid:ca656a68-4bcd-4411-9426-0c980d455ba3>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00244-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: Fast exponent and logarithm, given initial estimate
Replies: 29 Last Post: Nov 8, 2004 2:31 AM
Messages: [ Previous | Next ]
Re: Fast exponent and logarithm, given initial estimate
Posted: Oct 19, 2004 5:32 AM
Glen Low wrote (snipped):
> 1. Get a good exponentiation algorithm going. Probably split integer
> and fraction parts, then apply a squared Taylor expansion on the
> fractional part. For 24 bits of accuracy, probably end up with about 7
> multiplies. (I read some stuff about binary splitting, binary
> reduction and other esoteric algorithms, but I fail to see how they
> would reduce the number of multiplies -- anyone care to explain?)
Once you have take the first few terms of a power series to make a polynomial,
there are a number of tricks you can use to evaluate that polynomial with
fewer multiplications, which are outlined in Knuth's Art of Computer Programming,
section 4.6.4. For example a polynomial over the reals of degree 4 can be
evaluated with 3 multiplications and 5 additions, of degree 5, with 4 multiplications
and 5 additions, of degree 6, with 4 multiplications and 7 additions. There is a general
method for evaluating a real polynomial of degree n>=3 with (floor (n/2) + 2 )
multiplications and n additions.
These methods require some precomputation to generate the coefficients to be used,
but since the coefficients are constant in your case, that should not be a problem.
Date Subject Author
10/18/04 Fast exponent and logarithm, given initial estimate Glen Low
10/18/04 Re: Fast exponent and logarithm, given initial estimate Jeremy Watts
10/19/04 Re: Fast exponent and logarithm, given initial estimate Peter Spellucci
10/19/04 Re: Fast exponent and logarithm, given initial estimate Glen Low
10/18/04 Re: Fast exponent and logarithm, given initial estimate bv
10/19/04 Re: Fast exponent and logarithm, given initial estimate Glen Low
10/19/04 Re: Fast exponent and logarithm, given initial estimate George Russell
10/19/04 Re: Fast exponent and logarithm, given initial estimate Glen Low
10/20/04 Re: Fast exponent and logarithm, given initial estimate George Russell
10/20/04 Re: Fast exponent and logarithm, given initial estimate Glen Low
10/21/04 Re: Fast exponent and logarithm, given initial estimate Christer Ericson
10/21/04 Re: Fast exponent and logarithm, given initial estimate Glen Low
10/22/04 Re: Fast exponent and logarithm, given initial estimate Christer Ericson
10/19/04 Re: Fast exponent and logarithm, given initial estimate Martin Brown
10/19/04 Re: Fast exponent and logarithm, given initial estimate Glen Low
10/19/04 Re: Fast exponent and logarithm, given initial estimate Richard Mathar
10/19/04 Re: Fast exponent and logarithm, given initial estimate Glen Low
10/20/04 Re: Fast exponent and logarithm, given initial estimate Gert Van den Eynde
10/20/04 Re: Fast exponent and logarithm, given initial estimate Glen Low
10/20/04 Re: Fast exponent and logarithm, given initial estimate Richard Mathar
10/21/04 Re: Fast exponent and logarithm, given initial estimate Gert Van den Eynde
10/21/04 Re: Fast exponent and logarithm, given initial estimate bv
10/22/04 Re: Fast exponent and logarithm, given initial estimate Glen Low
10/22/04 Re: Fast exponent and logarithm, given initial estimate Peter Spellucci
10/22/04 Re: Fast exponent and logarithm, given initial estimate Glen Low
10/23/04 Re: Fast exponent and logarithm, given initial estimate bv
10/24/04 Re: Fast exponent and logarithm, given initial estimate Gert Van den Eynde
10/25/04 Re: Fast exponent and logarithm, given initial estimate Peter Spellucci
10/20/04 Re: Fast exponent and logarithm, given initial estimate Gert Van den Eynde
11/8/04 Re: Fast exponent and logarithm, given initial estimate Glen Low | {"url":"http://mathforum.org/kb/message.jspa?messageID=3412484","timestamp":"2014-04-17T05:45:16Z","content_type":null,"content_length":"52179","record_id":"<urn:uuid:2afbcdc9-d8a0-437d-ae55-fcd019352992>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00292-ip-10-147-4-33.ec2.internal.warc.gz"} |
Contents of /sb-simd/timing.lisp
1 #|
2 Copyright (c) 2005 Risto Laakso
3 All rights reserved.
5 Redistribution and use in source and binary forms, with or without
6 modification, are permitted provided that the following conditions
7 are met:
8 1. Redistributions of source code must retain the above copyright
9 notice, this list of conditions and the following disclaimer.
10 2. Redistributions in binary form must reproduce the above copyright
11 notice, this list of conditions and the following disclaimer in the
12 documentation and/or other materials provided with the distribution.
13 3. The name of the author may not be used to endorse or promote products
14 derived from this software without specific prior written permission.
16 THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
19 IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
21 NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
22 DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
23 THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
24 (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
25 THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
26 |#
27 (in-package :cl-user)
29 (defun time-sample-form (form &optional &key (samples 10))
30 (let (start end times)
31 (dotimes (i samples)
32 (setq start (get-internal-real-time))
33 (funcall form)
34 (setq end (get-internal-real-time))
35 (push (- end start) times))
37 (flet ((calc-avg (list) (float (/ (apply #'+ list) (length list))))
38 (sq (x) (* x x)))
40 (let* ((avg (calc-avg times))
41 (sq-times (mapcar #'sq times))
42 (stddev (sqrt (- (calc-avg sq-times) (sq (calc-avg times)))))
43 )
44 ;; (format t "; times ~S, sqtimes ~S~%" times sq-times)
45 (format t "; ~D samples, avg ~5F sec, stddev ~5F sec~%"
46 samples
47 (/ avg internal-time-units-per-second)
48 (/ stddev internal-time-units-per-second)))))) | {"url":"http://common-lisp.net/viewvc/sb-simd/sb-simd/timing.lisp?revision=1.1&view=markup","timestamp":"2014-04-17T13:25:17Z","content_type":null,"content_length":"18612","record_id":"<urn:uuid:eb9db8bb-3d1c-4a1f-8b49-ae2cb4d821bb>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00109-ip-10-147-4-33.ec2.internal.warc.gz"} |
Park School Math
The Perfect Combinatorics Problem
March 3, 2012 – 5:53 pm
In this post, I’m going to extol the virtues of my favorite combinatorics problem. You’ve probably heard it, or some version of it, before:
A pizza parlor offers ten different toppings on their pizza. How many different types of pizza are possible to make, given that a pizza can have any number of toppings, or no toppings at all?
Just in case you aren’t familiar with this problem and want to work it out for yourself first, I’m putting most of this post after the jump. First, a shout out: I remember doing this problem with
Michigan State Professor Bruce Mitchell, who used to teach Saturday-morning math enrichment classes at my middle school, and whose enthusiasm and humor kept me coming back. Second, some pizza:
You may prefer to pretend you never saw that.
By Mimi | Posted in Problems | Tagged combinatorics, pizza | Comments (5)
Let’s Ban the “Distance Formula”!
February 23, 2012 – 2:01 am
One of the things that can be hardest for kids learning algebra is to be able to understand the value of abstraction and of using symbols to help one analyze and think about a problem. I myself
remember learning algebra (way back in 1978) from Dolciani/Wooton, a textbook that valued formal manipulation above all else; there was almost no motivation given for where the rules came from, just
lots of practice in learning how to manipulate symbols correctly. Indeed, for a number of years afterward I thought that formal manipulation was all there was to algebra.
The idea that there are ideas to be discovered in algebra was completely foreign to me. I knew in 9th grade that perpendicular lines had slopes that were negative reciprocals, but if you had asked
me why, I would never have known, or even thought I should have known. Dolciani gives a proof, but I would bet a lot of money that very few students read it, as it is, to be frank, inappropriately
abstract for a high schooler learning the subject for the first time.
Our curriculum ( http://parkmath.org/curriculum/ ) approaches the topic by having students first draw a line on graph paper with a slope of 2, and then try to figure out experimentally what slope a
line perpendicular to it should have. Working out concrete examples using lines of different slopes can lead to a much deeper understanding for a 9^th grader than an algebraic proof that they have
little chance of following, never mind retaining. When students are older and have more experience with algebra, of course, a formal proof can make good pedagogical sense. But for a freshman?
Which is why I think we should ban(!) teaching the “Distance Formula”, at least for the large majority of 9th and 10th graders. Why, you might ask, if I am trying to teach them the value of
abstraction? Continue reading →
By Tony | Posted in Uncategorized | Comments (26)
Our department is hiring!
February 14, 2012 – 7:00 pm
Our department is hiring! Would you like to teach intellectually curious students using our problems-based curriculum and collaborate with thoughtful colleagues? Click here for details.
By Mimi | Posted in Uncategorized | Comments (0)
Park School Math in the “Mathematics Teacher”
February 13, 2012 – 7:45 am
We’ve been fortunate enough to have an article published in the February 2012 issue of the Mathematics Teacher, “Geometry in Medias Res”. We teach Geometry in a fairly unusual way, we think, so we
decided to write about it and see what people thought about our approach.
One of our main ideas is that we want students to encounter interesting problems on the first day. So we ask them non-trivial questions right away (e.g. can every triangle be circumscribed?), and in
the process of them discussing/arguing with each other, we start to develop with the kids the necessity for a standard of proof other than “it really seems like it to me!”.
There’s a lot more, but the idea is to have a more natural and intuitive introduction to the axiomatic nature of Geometry than one usually finds. We are big believers that proof is completely
accessible to all levels of students, but that it has to be introduced gradually, as a way of resolving questions students have, not as a forced superstructure like the way it was often taught in the
If you have a chance to take a look at the article, we’d love your feedback and thoughts.
By Tony | Posted in Uncategorized | Comments (0)
Geometry Follow-Up: Proof in a Bag
January 22, 2012 – 3:43 pm
The concept of proof-in-a-bag is simple. Write out a two-column proof and then cut it up so that each statement or reason is by itself on a scrap of paper. Then put all the scraps in a bag (a small
sandwich bag works well, though an opaque paper bag might have more of a dramatic effect) and have kids work on rearranging the scraps so that they form a coherent proof. You can decide whether you
want students to know ahead of time what it is they’re proving, or if you want them to figure it out by putting statements with “given:…”, “prove…” and a diagram in the bag as well.
Credit where credit is due: I got the idea for this from Laura Chihara while a student in her Algebraic coding class at the Carleton-St. Olaf Summer Math Program.
It’s nice to have any activity where kids are physically doing something in a math class, of course, but I really like what kids get out of this activity. It emphasizes the idea that you have to
have enough evidence before you can conclude that triangles are congruent (otherwise, what are those “extra” statements doing in the bag?) And it is very good for helping students understand what
can be a statement vs. what can be a reason. I often find that students want to use triangle congruence theorems like SAS when using properties of triangle congruence; the structure of this activity
leads them to realize that they’ve already used SAS to justify the triangle congruence statement; they now need to use something else (CPCTC or the equivalent) to start using the congruence.
There are some times when I would definitely not use this activity. If the proof is a particularly exciting one for kids to work out on their own, I wouldn’t rob them of the opportunity.
Proof-in-a-bag works best for simple, straightforward proofs, where the two-column proof format can be used without having to do a lot of extra explaining. I generally use it for one day only, at a
time when the class has had some practice writing proofs but has not yet reached a level of comfort with them.
Does anybody else have activities or techniques that they use to teach writing proofs? I’d be especially interested in what people do who don’t insist on a strict two-column format all of the time.
By Mimi | Posted in Uncategorized | Tagged geometry, pedagogy | Comments (5)
On Algebra and Logic
December 8, 2011 – 10:28 am
By acohen77 | Posted in Philosophy | Comments (1)
Postgame Analysis: the Towers of Hanoi
December 6, 2011 – 12:02 pm
I recently gave my juniors the classic Towers of Hanoi puzzle to play with in small groups. It went something like this:
You have three plates, and plate #1 has a stack of 5 pancakes, in order from the largest one on the bottom to the smallest on top. The puzzle is to get the stack onto plate #2 using as few moves as
Two rules: (i) you can only move the top pancake on a stack, and (ii) at no time can any larger pancake be on top of a smaller pancake.
They spent a couple minutes getting familiar with the mechanics of it, and then settled into working together, shifting pancakes and keeping a count of their moves.
By Anand | Posted in Pedagogy, Problems | Comments (6)
A first post on geometry and proof
November 21, 2011 – 2:08 pm
Euclidean geometry is for many students the first time they get a taste of what math is really about. The problems don’t all fit the same pattern; it’s natural and expected that students will come
up with their own arguments to prove something, rather than following a set of rules. Ideally, geometry class also involves experimentation and conjecture.
I don’t think Park students are encountering these things for the first time in geometry. Our students are used to investigating and asking their own questions. And they are used to making careful
arguments to support their claims. Still, for Park students as much as students anywhere, geometry tends to be the first time that they are asked to write formal proofs. Anyone who has taught
geometry knows that writing proofs can feel to students like wearing a straightjacket. For the first time, arguments that are correct but either vague or not axiomatic are inadmissible:
• Opposite sides of a parallelogram have to be congruent because lines with the same slope stay the same distance apart.
• Opposite sides of a parallelogram have to be congruent because there is no way to extend one of those sides without changing the angle of the side coming to meet it.
• The base angles of an isosceles triangle have to be congruent because the triangle is symmetric.
I’ve stopped telling students that these arguments are not convincing. Anyone who understands the terms they’re using would be convinced. And I’ve even stopped telling students that they are
incorrect. They’re not incorrect; they’re just not arguments from first principles. They appeal to intuition and common sense, as most arguments we’d make in daily life do.
Acknowledging those things, we still need to make rigorous arguments that appeal to specific principles we’ve studied in class, such as theorems about parallel lines, and theorems about congruent
triangles. For this reason, I stick to the “statement/reason” model of proofs taught in most geometry classes. I find that if students don’t write proofs this way it is too easy for them to fall
into arguments that are merely intuitive. It’s also easy for them to fool themselves into thinking that they have enough evidence to conclude that triangles are congruent when, say, they’ve really
only found two pairs of congruent sides.
I don’t, however, insist on the degree of rigor that most geometry books do. Students in my classes do not write proofs that contain the sequence, “If angles form a linear pair, then they are
supplementary. If angles are supplementary, then their measures add up to 180 degrees.” They can go right from linear pair to adding up to 180 degrees. I don’t think that this level of following
tiny steps in a chain serves the purpose of helping students to build new theorems out of the knowledge they already have.
Generally, my rule is that if students are using congruent triangles to prove something, they need to
• Name the three pairs of sides/angles that they need to justify the congruence, providing a reason for each.
• Name the pair of congruent triangles and say which theorem (SSS, ASA, etc) they are using.
• Only after they’ve done all that, name the pair of sides or angles that they can now say are congruent. To justify this, they will sometimes use the infamous “CPCTC,” or, since many students
have trouble remembering what the acronym stands for, just say that they are using triangle congruence.
I think that, among Park faculty, I am one of the teachers who insists the most on some kind of standard template for proofs, even though I allow much more leeway in what can be used for a reason
than most textbooks do. I’d be interested in what other teachers ask of their students when writing geometry proofs.
By Mimi | Posted in Uncategorized | Tagged geometry, proof | Comments (0)
Puzzles for 11.11.11
November 11, 2011 – 10:05 am
Some questions we are asking our classes today:
Anand: How many times this millenium will the date consist of a single digit?
Bill: Today, the day, month, and year are the same. In how many days will this happen again?
Angela: How can you get an answer of 0.0909090909… using only one number, but as many times as you want, and basic arithmetic?
What are you all doing?
By Angela | Posted in Uncategorized | Comments (0)
The Cruel Irony of Algebra
November 1, 2011 – 10:25 pm
You would think it would have occurred to me sooner, but it wasn’t until a few years ago that it really hit me what I think is the biggest problem most students have with Algebra: they don’t
actually think the letters represent numbers. Here’s the kind of question I’ve asked that illustrates what I mean:
Chuck says that (xy)(wz) is always equal to (xyw)(xyz). Is he right?
Over the years, I have found that if a student is unsure of whether or not they are supposed to “distribute the xy”, they often just guess. When asked why, they say that they were unsure of the rule
they were supposed to use, so they just took their best shot. For many years, I tried to show them which rule to use in various situations and the principles involved, hoping that over time they
would catch on to the logic of algebra.
But invariably, for many of my students, even a slightly changed question presented what seemed like a freshly baffling challenge. After all, does the question below really seem all that different
(other than to a math teacher)?:
Chloe says that (xy)(w+z) is always equal to (xyw) + (xyz). Is she right?
So what to do? While it may seem like taking two steps backwards in the march towards abstraction and generalization, these days I ask my students how they could possibly figure out for themselves
if the two expressions are always equal, and the ensuing discussions leads us to the question of what the heck those x’s and y’s and z’s and w’s represent—numbers! So why not try out these equations
with numbers? The cruel irony of algebra is that what is intended to make generalization easier actually becomes so abstract for many kids that numbers are the last thing on their minds. They end up
seeing algebra as a bunch of arbitrary rules that are hard to predict.
Of course, just because two expressions are equal with a given set of numbers doesn’t mean they always will be—and we discuss that eventually as well. But as an entry point into algebraic
identities, and as a gut check to see if my students “get” what algebra is about, I find this works because students believe what they can test and see for themselves. Having them practice
distributing multiplication over addition (and also practicing NOT distributing multiplication over multiplication), while it has its place, I find isn’t a good substitute for the intuition that
develops by playing with the raw numbers.
Next, semi-related entry: The joys and sorrows of “flip and multiply”!
By Tony | Posted in Uncategorized | Comments (10) | {"url":"http://parkmath.org/page/2/","timestamp":"2014-04-16T19:36:58Z","content_type":null,"content_length":"52364","record_id":"<urn:uuid:cb8d71ac-1b19-410b-855c-48ad76265875>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00223-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fast Matrix Multiplication
up vote 21 down vote favorite
Suppose we have two $n$ by $n$ matrices over particular ring. We want to multiply them as fast as possible. According to wikipedia there is an algorithm of Coppersmith and Winograd that can do it in
$O(n^{2.376})$ time. I have tried to look at the original paper and it scares me. It seems that it is impossible to understand current state of the art.
So, the question is the following. Is there any 'gentle' introduction or survey for beginners in this particular field? I took only introductory course in algebra, so it would be nice to know what
parts of algebra do these techniques rely on.
algorithms linear-algebra
Fascinating. Can any expert quickly comment on why that exponent and what is conjectured optimal? – Piero D'Ancona Sep 15 '10 at 19:57
I'm not sure it's possible to quickly comment on why this exponent arises... the exponent is really $2.375\cdots$, and it′s what you get when you find the minimum solution to some funky system of
3 equations. Experts differ on what the optimal exponent should be. Strassen has conjectured that the bound should be $\Theta(n^{2+\delta})$ for some $\delta > 0$. Others believe that $n^{2+\delta}$
may be possible for every $\delta >0$. – Ryan Williams Sep 15 '10 at 22:16
add comment
8 Answers
active oldest votes
You may also look at the alternative approach to Coppersmith-Winograd proposed by Cohn-Umans and Cohn-Kleinberg-Szegedy-Umans. Their papers are very readable, and the latter gets close to
up vote 16 the Coppersmith-Winograd exponent 2.376. It is said that the methods in their paper can also achieve 2.376, but I don't think this is published.
down vote
I'll try to read them. But firstly, I need to improve my knowledge about representations. – ilyaraz Aug 2 '10 at 0:43
3 Anyway: 1) matrix multiplication $\mathbb F^{m\times n}\times \mathbb F^{n\times p}\to \mathbb F^{m\times p}$ is a bilinear map - if you choose the canonical bases for the three
spaces, you get the structural tensor. 2) The tensor rank is the minimum number $r$ of "triads" $a \otimes b \otimes c$ so that you can write your tensor $T$ as $$T=\sum_{i=1}^r a_i \
otimes b_i \otimes c_i$$ – Federico Poloni Aug 2 '10 at 23:05
1 No. These papers is definitely what I was looking for. They are actually readable. :) – ilyaraz Aug 3 '10 at 4:33
3 If you haven't done so already, you might start by reading up on the algorithms for fast integer multiplication and fast Fourier transform. You can think of the matrix multiplication
problem as a much harder analogue of these problems, where analogous solutions would rely on complicated group-theoretic constructions. en.wikipedia.org/wiki/… en.wikipedia.org/wiki/
Fast_Fourier_transform – Gene S. Kopp Aug 13 '10 at 17:32
1 I don't understand how can one try to learn fast matrix multiplication without knowledge of FFT. :) – ilyaraz Aug 13 '10 at 17:39
show 3 more comments
Here are some resources I found useful while learning about this stuff.
• Victor Pan. How to Multiply Matrices Faster. Springer LNCS, 1984. A paperback edition was available on Amazon at some point, but no longer it seems. This monograph and Pan's 1980
journal paper (which improves on Strassen) are very readable:
Victor Y. Pan: New Fast Algorithms for Matrix Operations. SIAM J. Comput. 9(2): 321-342 (1980)
• Knuth, The Art of Computer Programming Vol 2 contains a series of exercises that lead you through an $o(n^{2.5})$ matrix multiplication algorithm. Unfortunately I don't have a
copy in front of me, so I can't tell you the specific exercises.
up vote 13 down
vote • A nice exposition by Andrew Stothers: http://www.maths.ed.ac.uk/~s0237198/report1styr.pdf
• EDIT: [DEL:There are some other lecture notes out there that I can't seem to dig up at the moment.:DEL] Here they are: http://www-cc.cs.uni-saarland.de/teaching/SS09/
If you search for "strassen laser method" you will find more nice hits. In principle, "schoenhage tau theorem" should also yield results, but it doesn't seem to. (These are the two
prior results that Coppersmith-Winograd build on.)
add comment
Sara Robinson's survey Toward an Optimal Algorithm for Matrix Multiplication, SIAM News 38 (9), 2005, might be suitable.
up vote 10 down
1 This survey is a great read, though it is without many technical details. It seems that this field is based on representation theory and rather advanced group theory. – ilyaraz
Aug 2 '10 at 0:32
add comment
Instead of going for state-of-the art immediately, you might read a little bit on the history of the problem. Karatsuba multiplication and the Strassen algorithm should give the core
idea. If you look at the Coppersmith-Winograd algorithm closely, you might find an implementation that will make it practical for small n, or a series of examples that will show why it
won't be practical.
up vote 2
down vote Gerhard "Ask Me About System Design" Paseman, 2010.08.01
I know Karatsuba and Strassen algorithms. And they doesn't seem to help in understanding more advanced techniques. :) – ilyaraz Aug 2 '10 at 0:30
add comment
There's a chapter on fast matrix multiplication on: Algebraic complexity theory - Peter Bürgisser, Michael Clausen, Mohammad Amin Shokrollahi.
up vote 1 down vote
1 This book does not seem to be readable at all. :( – ilyaraz Aug 3 '10 at 0:24
add comment
An other reference, in French:
up vote 1 down vote J. Abdeljaoued, H. Lombardi. Méthodes Matricielles. Introduction à la Complexité Algébrique. SMAI series ``Mathématiques et Applications''. Springer-Verlag (2003).
add comment
http://www-cc.cs.uni-saarland.de/teaching/SS09/ComplexityofBilinearProblems/script.pdf It has some typos but except this it is really good.
up vote 1 down vote
It seems Ryan Williams gave this link in his answer of 2 August 2010. – Gerry Myerson Mar 4 '11 at 4:47
Yes, you are right. I did not saw it – Klim Efremenko Mar 5 '11 at 17:15
add comment
"Geometry and the complexity of matrix multiplication", by J. Landsberg from the AMS bulletin is a very nice article. It describes an approach to this problem based on algebraic geometry,
that of bounding the "border rank" of the sequence of bilinear maps defining matrix multiplication. I don't think it reproduces the state of the art yet (but I'm not an expert so maybe), but
up vote it is a well-defined mathematical program that should in principle be able to uncover the optimal exponent. I think at least the basics of the approach should be pretty understandable with a
1 down minimum of background, but the whole theory does go pretty deep and technical. I believe this is, however, the nature of the beast - it is a shockingly deep question.
add comment
Not the answer you're looking for? Browse other questions tagged algorithms linear-algebra or ask your own question. | {"url":"http://mathoverflow.net/questions/34173/fast-matrix-multiplication?sort=newest","timestamp":"2014-04-19T12:16:18Z","content_type":null,"content_length":"90498","record_id":"<urn:uuid:d1343f66-cdc2-4a8c-ac37-25a2e832cb2e>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00625-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts about Heegaard splittings on Low Dimensional Topology
Mark your calendars now: in June 2014, Cornell University will host “What’s Next? The mathematical legacy of Bill Thurston”. It looks like it will be a very exciting event, see the (lightly edited)
announcement from the organizers below the fold.
Smooth proof of Reidemeister-Singer
Every construction I know of 3-manifold invariants from Heegaard splittings factors through the Reidemeister-Singer Theorem:
Reidemeister-Singer Theorem: For any two Heegaard splittings $H_1$ and $H_2$ of a 3-manifold $M$, there exists a third Heegaard splitting $H$ which is a stabilization of both.
This theorem is definitely part of the big story in 3-manifold topology, and is usually proven in the PL category, as for example in Nikolai Saveliev’s Lectures on the Topology of 3-manifolds. There
is another nice PL proof due to Craggs, Proc. Amer. Math. Soc. 57, n 1 (1976), 143-147.
I think of a Heegaard splitting as being intrinsically a smooth topology construction (a level set of a Morse function), and so I would really like the proof of Reidemeister-Singer to live in the
smooth category. I think that there should be consistent smooth and PL stories of 3-manifold topology living side by side. In the 1970′s, Bonahon wrote a smooth proof of Reidemeister-Singer, which
uses Cerf Theory (naturally, because we’re investigating paths between Morse functions). Unfortunately, Bonahon’s proof was never published, and it is lost.
A year ago (but I only saw it this morning), François Laudenbach posted a smooth proof of Reidemeister-Singer to arXiv: http://arxiv.org/abs/1202.1130. I think that this is wonderful! There are too
few papers like this- there is insufficient incentive to streamline the storylines of foundations. I am very happy to have found this proof, and I want such a proof to be a part of my smooth
3-manifold topology foundations.
Edit: Thanks to George Mossessian and to Ryan Budney, who point out in the comments that Jesse Johnson proved Reidemeister-Singer using Rubinstein and Scharlemann’s sweep-outs, which involves
singularity theory which is much less sophisticated that Cerf Theory: http://front.math.ucdavis.edu/0705.3712
Perhaps that should be the “smooth proof from The Book” (or the “proof from The Smooth Book”)!
Lots and lots of Heegaard splittings
The main problem that I’ve been thinking about since graduate school (so around a decade now) is the following: How does the topology of a three-dimensional manifold determine its isotopy classes of
Heegaard splittings? Up until about a year ago, I would have predicted that most three-manifolds probably don’t have many distinct Heegaard splittings, maybe even just a single minimal genus Heegaard
splitting and then all of its stabilizations. Sure, plenty of examples have been constructed of three-manifolds with multiple distinct (unstabilized) splittings, but these all seemed a bit contrived,
like they should be the exceptions rather than the rule. I even wrote a blog post a couple years back stating what I called the generalized Scharlamenn-Tomova conjecture, which would imply that a
“generic” three-manifold has only one unstabilized splitting. However, since writing this post, my view has changed. Partially, this was the result of discovering a class of examples that disprove
this conjecture. (I’m hoping to post a preprint about this on the arXiv in the near future.) But it turns out there is an even simpler class of examples in which there appear to be lots and lots of
distinct Heegaard splitting. I can’t quite prove that they’re distinct, so in this post I’m going to replace my generalized Scharlemann-Tomova conjecture with a conjecture in quite the opposite
direction, which I will describe below.
Update on subadditivity of tunnel number
A few months ago, I wrote a blog post about the interesting phenomenon that the tunnel number of a connect sum of two knots may be anywhere from one more than the sum of the tunnel numbers to a
relatively small fraction of the sum of the tunnel numbers. Since then, a couple of related papers have been posted to the arXiv, so I thought that justifies another post on the subject. The first
preprint I’ll discuss, by João Miguel Nogueira [1], gives new examples of knots in which the tunnel number degenerates by a large amount. The second paper, by Trent Schirmer [2] (who is currently a
postdoc here at OSU), gives a new bound on the amount tunnel number and Heegaard genus can degenerate by under connect sum/torus gluing, respectively, in certain situations.
The Bridge Spectrum
A knot $K$ in a three-manifold $M$ is said to be in bridge position with respect to a Heegaard surface $\Sigma$ if the intersection of $K$ with each of the two handlebody components of the complement
of $\Sigma$ is a collection of boundary parallel arcs, or if $K$ is contained in $\Sigma$. The bridge number of a knot $K$ in bridge position is the number of arcs in each intersection (or zero if if
$K$ is contained in $\Sigma$) and the genus $g$bridge number of $K$ is the minimum bridge number of $K$ over all bridge positions relative to genus $g$ Heegaard surfaces for $M$. The classical notion
of bridge number is the genus-zero bridge number, i.e. bridge number with respect to a sphere in $S^3$, but a number of very interesting results in the last few years have examined the higher genus
bridge numbers. Yo’av Rieck defined the bridge spectrum of a knot $K$ as the sequence $(b_0,b_1,b_2,\ldots)$ where $b_i$ is the genus $i$ bridge number of $K$ and asked the question: What sequences
can appear as the bridge spectrum of a knot? (At least, I first heard this term from Yo’av at the AMS section meeting in Iowa City in 2011 – as far as I know, he was the first to formulate the
question like this.)
Topologically minimal surfaces – More common than you might think
Before I get back to train tracks (as I had promised in my last post), I wanted to point out some interesting recent work on topologically minimal surfaces. The definition of topologically minimal
surfaces was introduced by Dave Bachman [1] as a topological analogue of higher index geometrically minimal surfaces, suggested by work of Hyam Rubinstein. I discussed these in detail in my series of
posts on axiomatic thin position, but here’s the rough idea: An incompressible surface has topological index zero because there is no way to compress it, so it’s similar to a local minimum, i.e. an
index-zero critical point of a Morse function. A strongly irreducible Heegaard surface has topological index one because there are two distinct ways to compress it, similar to how there are two
distinct ways to descend from an index-one critical point (a saddle) in a Morse function. An index two surface will be weakly reducible, but there will be an essential loop of compressions, in the
sense that consecutive compressing disks will be disjoint, but the loop is homotopy non-trivial in the complex of compressing disks. This should remind you of an index-two critical point in a Morse
function, in which there is a loop of directions in which to descend. Then index-three surfaces have an essential sphere of compressions and so on. Initially, it was unclear how common higher index
surfaces would be. I would have guessed that they weren’t very common, and I think Dave felt the same. But a number of recent results indicate quite the opposite.
Morse-Novikov number and tunnel number
Someone recently pointed out to me a paper by A. J. Pajitnov [1] proving a very interesting connection between circular Morse functions and (linear) Morse functions on knot complements. (A similar
result is probably true in general three-manifolds as well.) Recall that a (linear) Morse function is a smooth function from a manifold to the line in which there are a finite number of critical
points (where the gradient of the function is zero), and each critical point has one of a number of possible forms. For a two-dimensional manifold the possible forms are the familiar local minimum,
saddle or local maximum. This post is about three-dimensional Morse functions, in which case the possible forms are slight generalizations of local minima, maxima and saddles. A circular Morse
function is a function with the same conditions on critical points, but whose range is the circle rather than the line. For a three-dimensional manifold, the minimal number of critical points in a
linear Morse function is twice the Heegaard genus plus two, and for knot complements it’s twice the tunnel number plus two. (In particular, one can construct a Heegaard splitting or unknotting tunnel
system directly from a Morse function, but that’s for another post.) The minimal number of critical points in a circular Morse function is called the Morse-Novikov number, and is equal to the minimal
number of handles in a circular thin position for the manifold (usually a knot complement). Pajitnov has a very clever argument to show that the (circular) Morse-Novikov number of a knot complement
is bounded above by twice its (linear) tunnel number. Below, I want to outline a slightly different formulation of this proof in terms of double sweep-outs, though I should stress that the underlying
idea is the same.
More than you probably wanted to know about Scharlemann’s no-nesting Lemma
This post is going to be a bit more technical than usual (though not necessarily any more coherent). As I’ve been working on porting thin position techniques to the analysis of large data sets and
other arenas, I’ve had to spend a lot of time trying to understand how the fundamental ideas fit together, and one in particular is Scharlemann’s no-nesting Lemma. This Lemma says the following:
Given a strongly irreducible Heegaard surface $\Sigma$ and an embedded disk $D$ with essential boundary in $\Sigma$, you can always make the interior of $D$ disjoint from $\Sigma$ by isotoping away
disks and annuli in $D$ that are parallel into $\Sigma$. As I’ll describe below, it turns out that this Lemma in many ways encapsulates the fundamental properties of thin position.
Bill Thurston is dead at age 65.
Bill Thurston passed away yesterday at 8pm, succumbing to the cancer that he had been battling for the past two years. I don’t think it’s possible to overstate the revolutionary impact that he had
on the study of geometry and topology. Almost everything we blog about here has the imprint of his amazing mathematics. Bill was always very generous with his ideas, and his presence in the
community will be horribly missed. Perhaps I will have something more coherent to say later, but for now here are some links to remember him by:
• 2010 lecture on The mystery of 3-manifolds.
The minimal genus Heegaard splitting conjecture
Today, I will continue on my quest to find the most interesting conjectures about Heegaard splittings. (Most of these conjectures, including this one, fail criteria one and two in Daniel’s recent
post, but strive to satisfy criteria three.) Here’s the latest:
The minimal genus Heegaard splitting conjecture: For every positive integer $g$, there is a constant $K_g$ such that if $M$ is a hyperbolic 3-manifold with Heegaard genus $g$ then $M$ has at most
$K_g$ isotopy classes of (minimal) genus $g$ Heegaard splittings. | {"url":"http://ldtopology.wordpress.com/category/heegaard-splittings/","timestamp":"2014-04-19T02:05:30Z","content_type":null,"content_length":"66116","record_id":"<urn:uuid:c3600ec8-64c4-492d-af3c-118a605313f1>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00368-ip-10-147-4-33.ec2.internal.warc.gz"} |
Tewksbury Algebra 2 Tutor
Find a Tewksbury Algebra 2 Tutor
...Computer Science: I've taught young students programming from the ground up. I have a course of instruction designed for C++, but I also teach Java, and other languages such as MATLAB,
Mathematica, and LABview. High school: Physics, Mathematics, Chemistry, Biology, and Computer Programming and related laboratory courses.
47 Subjects: including algebra 2, chemistry, reading, calculus
...I have worked mostly with college level introductory courses and high school students. I have worked with many students of different academic levels from elementary to college students. Whether
you want to solidify your knowledge and get ahead or get a fresh perspective if your are struggling, I am confident I can help you.
19 Subjects: including algebra 2, chemistry, calculus, Spanish
...I have extensive experience in tutoring high school math (algebra, trigonometry, pre-calculus, calculus) and science (biology, chemistry, physics) as well as undergraduate pre-medical courses
such as biology, chemistry, physics, biochemistry, physiology, and organic chemistry. I love tutoring in...
10 Subjects: including algebra 2, chemistry, geometry, biology
...Topics may include, but are not limited to: solving equations, proportional reasoning, rules with exponents, factoring equations and quadratic equations. Geometry! - This is a subject that I
learned in the classroom AND actually used in my job as a civil engineer. From proofs, to theorems, to angles, to properties of shapes, to combining with algebra - these are all of the things I
8 Subjects: including algebra 2, reading, algebra 1, GED
...This is where tutoring shines; while students in a classroom tend to have to adapt to the teacher's teaching style, tutors in a one-on-one or small group setting can adapt to the individual
learning needs of the students. I prefer my lessons to be as hands on as possible and I tend to blend tech...
23 Subjects: including algebra 2, chemistry, reading, Spanish | {"url":"http://www.purplemath.com/Tewksbury_Algebra_2_tutors.php","timestamp":"2014-04-18T11:32:09Z","content_type":null,"content_length":"24131","record_id":"<urn:uuid:3997e928-a192-4de5-a7e7-98808b495d1f>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00170-ip-10-147-4-33.ec2.internal.warc.gz"} |
Southlake Statistics Tutor
Find a Southlake Statistics Tutor
...I gained considerable OTJ experience writing complex SQL queries for a large mainframe relational database. I do not have experience with Microsoft ACCESS. I can help with query strategies and
with syntax of most query statements.
15 Subjects: including statistics, chemistry, physics, calculus
...This dynamic energy allows me to establish rapport with students, as well as facilitate a more efficient learning environment free from distractions. In all, I take great joy in seeing my
students grasp the concepts and become successful in their endeavors. I look forward to helping and serving you or your child's learning needs and to assist them on their road to success.
40 Subjects: including statistics, reading, algebra 1, English
...Many of my students also struggled with learning disabilities. This gave me considerable experience with assisting those with ADD/ADHD, dyslexia and other challenges. In addition, I worked
extensively with students to improve their testing strategies to help prepare them for the TAKS test, the ACT and the SAT.
82 Subjects: including statistics, chemistry, English, algebra 1
...I know the typical problems students run into with these exams so I know what to it takes to correct them. For the GMAT and GRE, timing is always crucial, so what I try to do is show my
students the quickest way(s) to solve (or answer) problems, as well as techniques and strategies they can use ...
41 Subjects: including statistics, chemistry, ASVAB, logic
...I have gone through rigorous mathematics at college, grad school and during my Ph.D. I love to teach mathematics, not to just solve problems, but also to make sure concepts and fundamentals are
clear. I have experience in teaching Pre-Algebra, Algebra 1, Geometry and Algebra 2.
12 Subjects: including statistics, calculus, geometry, algebra 1 | {"url":"http://www.purplemath.com/Southlake_Statistics_tutors.php","timestamp":"2014-04-18T08:20:52Z","content_type":null,"content_length":"24061","record_id":"<urn:uuid:08aae14b-087a-467a-83d9-887c70348f37>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00128-ip-10-147-4-33.ec2.internal.warc.gz"} |
Custom Topsoil
Custom Topsoil - Frequently Asked Questions
Frequently Asked Questions
1- Pick up and Delivery Questions
2- Payment Questions
3- General Questions
Answers to Frequently Asked Questions
1- Pick up and Delivery Questions
When will my order be delivered?
You have a choice between morning and afternoon delivery windows. Deliveries can be scheduled as early as a day in advance. We deliver Monday through Saturday, weather permitting.
Please contact us for questions or assistance.
Do I have to be home when my topsoil, mulch, compost or fill is delivered?
No, you do not have to be at the delivery location. We will drop your order on the driveway or similar suitable surface if you are not home or at the job site.
Can you deliver my order off the road, such as right into my back yard, job site, etc?
Custom Topsoil's responsibility ends at the curb.
Our drivers are only permitted to drive off the roads if the following considerations are met:
□ The property owner or agent must certify that the owner or agent will accept full responsibility for any damage that may result to their property or to adjoining properties.
□ The owner must also certify that the property being driven over is theirs.
□ Any damage to Custom Topsoil trucks resulting from driving off the roads is the responsibility of, and must be paid for by the owner (customer) or agent.
□ Please also note that any and all costs associated with the detention of CTS vehicles as well as a fee of $60 per hour will be assessed to owner or agent (e.g. contractor).
2- Payment Questions
What payment methods do you accept?
Custom Topsoil accepts all major credit cards (e.g. MasterCard, Visa, Discover and American Express), cash, as well as company and personal checks.
General Obligations Law. Section 11-104
WARNING: bad checks mean stiff penalties. You can now be sued for the face value of the bad check plus two (2) times the amount of the check up to $750.00.
3- General Questions
How do I determine how many cubic yards of topsoil, mulch, compost or fill I need?
You can determine how many cubic yards of topsoil, mulch, compost or fill that you need by using our convenient online calculators.
If you prefer manual calculation, the formula for calculating cubic yardage is as follows:
□ First, calculate the area, in feet, to be covered. For example, a rectangle 12' wide x 14' long has an area of 168 square feet (12 multiplied by 14).
☆ To calculate the area of a circle, multiply pi (3.1416) by the square of the radius of the circle. For example, a circle with a 10' radius has an area of 314.16 square feet (3.1416
multiplied by 10^2, or 3.1416 multiplied by (10 multiplied by 10)).
☆ To calculate the area of a triangle, multiply 1/2 by the length of the base by the length of the height. For example, a triangle 12' wide by 14' long has an area of 84 square feet (0.5
multiplied by 12 multiplied by 14).
□ Second, calculate the depth, in inches, to be covered, then convert inches to feet. For example, a 6" fill depth equals 0.5 feet (6 divided by 12).
□ Third, calculate the cubic feet by multiplying the length, in feet, by the width, in feet, by the depth, in feet. For example, a rectangle 12' wide x 14' long at 6" deep (or 0.5 feet) equals
84 cubic feet (12 multiplied by 14 multiplied by 0.5).
□ Fourth, convert cubic feet into yards. There are 27 feet in one cubic yard (3' x 3' x 3'), so divide the number of cubic feet by 27. Using our example above of 84 cubic feet, dividing by 27
equals 3.11 cubic yards.
□ Save some work, use our online calculators!
Remember to account for settling. For example 1/3 of the amount of topsoil will settle. | {"url":"http://www.customtopsoil.com/faqs.html","timestamp":"2014-04-20T18:23:14Z","content_type":null,"content_length":"19366","record_id":"<urn:uuid:4aef58f7-338d-41f9-b7ba-4589ec1f5fbb>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00555-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
What is the primary factor that determines the color of a pixel in the program that draws Mandelbrot’s function? A) The colors of its four neighbors B) The escape-time algorithm C) The rotational
acceleration D) The rotational velocity E) Symmetry and why?
• one year ago
• one year ago
Best Response
You've already chosen the best response.
anyone help
Best Response
You've already chosen the best response.
In a Mandelbrot function, the black area is the set belonging to the function. The colored area those points that are not part of the set and the color change is attributed to the number of
iterations it took to surpass the number 2. While I am not certain, I would most equate this to B, escape time algorithm.
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/4fa7e36be4b059b524f3a388","timestamp":"2014-04-17T07:01:12Z","content_type":null,"content_length":"30360","record_id":"<urn:uuid:292830fd-be74-4215-b9a9-310dc6f0e26b>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00418-ip-10-147-4-33.ec2.internal.warc.gz"} |
Determining Properties Of A Relation
November 2nd 2012, 05:28 AM
Determining Properties Of A Relation
The problem is: "Determine whether the relation R on the set of all Web pages is reflexive, symmetric, antisymmetric, and/or transitive, where(a, b)∈R if and only if
a) everyone who has visited Web page a has also visited
Web page b.
I can see how it is transitive--it is always true that someone who visited webpage a also visited webpage a, or in ordered pair notation, (a, a).
What I am having difficulty seeing is how it is not symmetric. Wouldn't it be true that, if you visited webpage a, then you visited webpage b, could be stated as, if you visited webpage b, then
you visited webpage a? Meaning that (a, b) and (b, a) are elements of R?
November 2nd 2012, 07:06 AM
Re: Determining Properties Of A Relation
The problem is: "Determine whether the relation R on the set of all Web pages is reflexive, symmetric, antisymmetric, and/or transitive, where(a, b)∈R if and only if
a) everyone who has visited Web page a has also visited Web page b.
I can see how it is transitive--it is always true that someone who visited webpage a also visited webpage a, or in ordered pair notation, (a, a).
What I am having difficulty seeing is how it is not symmetric.
The difficulty is the everyone.
Is it necessarily true that everyone who visits B also visits A?
November 2nd 2012, 07:29 AM
Re: Determining Properties Of A Relation
Presumably, all Internet users visited Google. So all people who visited my home page also visited Google. But what about the converse?
November 2nd 2012, 09:17 AM
Re: Determining Properties Of A Relation
everyone who has visited this page (http://mathhelpforum.com/discrete-ma...-relation.html) henceforth know as "a", has presumably also visited the parent page: (Discrete Math) henceforth know as
"b" (technically, it's possible to navigate directly to this page, but let's not quibble. you can conceive of a website where you HAVE to go to the main site (the "index page"), to get to a
particular page on that site (one that might be password-protected, for example, so you have to go through the "log-in page" to get to any other page).
therefore we have aRb.
however, someone may have visited the parent forum of Discrete Math, without ever having looked at this particular topic, so we do not have bRa.
in other words, sure, we know that the people who looked at a, also looked at b. but surely there can exist people who looked at just b, but never at a. and of course, these people count as part
of "everyone".
in fact, any kind of "gated access" leads to this sort of thing: if you have to go through checkpoint A to get to checkpoint B, then anyone who got as far as B made it through A, but we cannot
say anyone who made it through A, will also make it past B. any kind of travel (including web-page clicking) might be "one-way" and is not, therefore, symmetric (which implies a kind of
the grand-daddy of all non-symmetric (in fact ANTI-symmetric) relations is "≤", which arises in hierarchical structures of any kind (like sets, or web-pages, or organizations such as the
military). there is a hidden appeal to such a kind of structure any time one uses something like "A implies B" (saying John is a man is pretty much saying the set of all men includes John). it is
rarely the case that we can do the REVERSE thing, and conclude from the fact that someone is a man, that it must be John.
in older language, if someone is John, he is necessarily a man (let's ignore all the weird exceptions of parents who chose to name their daughters "John", ok?), but it is not sufficient to say
that if someone is a man, he is then John (that is identifying someone as "a man" is not sufficient information to deduce the man is "John").
with equivalences, we are essentially saying: it makes no difference (for the purpose of "whatever") if we use A or B. we're relaxing the "strictness" of equality, but keeping its "rules"
(equivalence extends equality (this is what reflexive means), equivalence is bi-directional (this is what symmetry means), and equivalent to equivalent is equivalent (this is what transitive | {"url":"http://mathhelpforum.com/discrete-math/206607-determining-properties-relation-print.html","timestamp":"2014-04-19T05:07:39Z","content_type":null,"content_length":"9060","record_id":"<urn:uuid:52704791-8ac1-452c-bb3c-c38d9eede52c>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00024-ip-10-147-4-33.ec2.internal.warc.gz"} |
Example of line plot for third grade
Definition of Line Plot . A line plot shows data on a number line with x or other marks to show frequency. Examples of Line Plot . The line plot below shows the test. Fun math practice! Improve your
skills with free problems in 'Interpret line plots ' and thousands of other practice lessons. Line Plot For 3rd Grade . What worksheet are you looking for ? Recent Searches. Math Worksheets. Addition
Worksheets; Algebra Workshets; Decimal Worksheets; Division. Line plot graph worksheets third grade . They all explain how you could change the graph to show the new test scores. The score and.
Create a graph; third grade math. Plot Line Graph with Word Problem Examples. Plot Line Graph for each word problem with the give data. Plot 3rd Grade Unit.. Lesson 2 Analyzing Character Conflict
Caused by the Plot Identify and describe how the problem in a story causes a conflict between the. Review All 3 rd Grade GLE’s. Smartboard Activities.. Line Plot Raisin Activity. Line Plot Example .
Bug Bar Graphs. Handling Data . Patterns: Number Cracker.
Prohibition and held him signs and omens in a long time. Addition of the last drop is a for third reddish brown which. The ground floor containing athera notus.
They shot back to the valley of the. Sheriff is absent and buildings one delicate and. Then where is the place to oppose it not be satisfac. 1 sample resume for hair salon receptionist 2 5. for third
it as if his chariot wheels He literature.
Line Plot For 3rd Grade . What worksheet are you looking for ? Recent Searches. Math Worksheets. Addition Worksheets; Algebra Workshets; Decimal Worksheets; Division. Plot Line Graph with Word
Problem Examples. Plot Line Graph for each word problem with the give data. Third graders will actively engage in collecting and analizing data. They will construct and analyze bar graph and line
plot .
Subscribe to the newsletter
Follow us (international) | {"url":"http://h4y.6r.nxc.in.net/","timestamp":"2014-04-17T03:48:07Z","content_type":null,"content_length":"9648","record_id":"<urn:uuid:0d146e03-3e3d-4c2c-b01e-f30f483a60e5>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00510-ip-10-147-4-33.ec2.internal.warc.gz"} |
These days, everybody is talking about entropy. In fact, there is so much talk about entropy I am waiting for a Hollywood starlet to name her daughter after it. To help that case, today a
contribution about the entropy of black holes.
To begin with let us recall what entropy is. It's a measure for the number of micro-states compatible with a given macro-state. The macro-state could for example be given by one billion particles
with a total energy
in a bag of size
. You then have plenty of possibilities to place the particles in the bag and to assign a velocity to them. Each of these possibilities is a micro-state. The entropy then is the logarithm of that
number. Don't worry if you don't know what a logarithm is, it's not so relevant for the following. The one thing you should know about the total entropy of a system is that it can't decrease in time.
That's the
second law of thermodynamics
It is generally believed that black holes carry entropy. The need for that isn't hard to understand: if you throw something into a black hole, its entropy shouldn't just vanish since this would
violate the second law. So an entropy must be assigned to the black hole. More precisely, the entropy is proportional to the surface area of the black holes, since this can be shown to be a quantity
which only increases if black holes join, and this is also in agreement with the entropy one derives for a black hole from Hawking radiation. So, black holes have an entropy. But what does that mean?
What are the microstates of the black hole? Or where are they? And why doesn't the entropy depend on what was thrown into the black hole?
While virtually nobody in his right mind doubts black hole have an entropy, the interpretation of that entropy is less clear. There are two camps: On the one side those who believe the black hole
entropy counts indeed the number of micro-states inside the black hole. I guess you will find most string theorists on this side, since this point of view is supported by their approach. On the other
side are those who believe the black hole entropy counts the number of states that can interact with the surrounding. And since the defining feature of black holes is that the interior is causally
disconnected from the exterior, these are thus the states that are assigned to the horizon itself. These both interpretations of the black hole entropy are known as the volume- and
surface-interpretations respectively. You find a discussion of these both points of view in Ted Jacobson's paper
"On the nature of black hole entropy"
] and in the trialogue
"Black hole entropy: inside or out?" [hep-th/0501103
A recent contribution to this issue comes from Steve Hsu and David Reeb in their paper
Steve is a neighbor here on blogspot over at
Information Processing
. In their paper Steve and David examine the question how much matter one can stuff into a volume bounded by a given surface, and how much entropy this matter can carry. In flat space-time the
relation between the volume of an area and its surface is trivial, it's just Euclidean geometry. But not so if space-time is strongly curved!
To see this, consider the often made analogy of a curved space to a rubber sheet. Draw a circle on it. That's your surface. But it's a rubber sheet, meaning you can deform the sheet inside the circle
arbitrarily. You could for example form it to a bag and stuff a lot of gold into it.
This pictorial terminology is sadly not my invention: these kind of solutions have been known to be possible in General Relativity for a long time, and have been dubbed “bags of gold” by
already in the early 70s. Their defining property is that they have a potentially arbitrarily large interior volume, but a small surface area.
Steve and David in their paper now construct a weird kind of solution they dub “monsters,” which exemplifies what one can do with these bags. To understand what a monster is, consider some stuff (eg
coins of gold) dispersed in space-time, such that the background is to good approximation flat. Now pick up these coins and put them closely together - so close that they almost, but not entirely,
form a black hole. What you achieve in this way is that you get a strong gravitational field and a deviation of the volume-surface relation from flat space. That process of picking up and
redistributing the coins should not be thought of as a process that is actually dynamically happening, but just as a way to create the initial conditions*. If you create these initial conditions
carefully you can achieve most importantly two things:
1. You can get the asymptotic mass a far away observer would measure (ADM mass) to be arbitrarily small, no matter how many coins you have had. The reason for this is that the strong gravitational
field contributes with a negative binding energy.
2. You can similarly get an arbitrarily large entropy inside a sphere with fixed surface area, think of the coins as the particles forming a particular micro-state. The reason is that the volume can
get arbitrarily large, and you can stuff all the coins in, even though the surface area and the asymptotic mass might remain small.
The authors also show in their paper that if you create the monster state and let it evolve in time, it inevitably forms a black hole. Since it can have been arbitrarily close to being a black hole,
it is plausible to expect that almost all of this entropy goes into the black hole. If the volume interpretation of the black hole entropy was correct, this would be in conflict with it. Weirder than
that, the monster solution must have come out of a white hole in the past. This solution is thus very similar to an expanding and re-collapsing closed FRW universe embedded in empty space.
Despite these monster solutions existing in GR, there remains the question however whether they do exist in reality, since they are somewhat pathological and constructed. Though it might be possible
to argue these states will never be formed from any sensible initial condition, in a quantum theory the situation is more tricky since everything that can happen does happen - even though it might be
very improbable. That means the monsters could be spontaneously formed through tunneling processes. That might however in practice not happen even once during the lifetime of the universe.
Steve was visiting PI in November and gave a very clear talk about the monsters, that is recommendable if you want to know more details. You can find it at
PIRSA 08110026
the slides are here
* You shouldn't take the picture too literally though, much like in the often used example with the marble on the rubber-sheet it is slightly misleading as there isn't actually something "on" the
spacetime (the sheet) that extends into an additional dimension.
34 comments:
This is so well explained, very interesting post. Bravo and thanks a lot!
Thanks for the post Bee - dumb question: what is FRW?
The monsters idea is weird, yet more to feed my fascination and appreciation of how weird our universe is. That definition of entropy sounds rigorous, but it is unambiguous even in a case of
including the state of collections of matter which include radiative nuclei etc? Also, considering continua of possible values of velocity (the example given) or etc, how can there be a "number
of possibilities"?
Another problem about entropy is "the arrow of time." The AoT is often believed to be relative, or defined only because of entropy and not inherent absolute character of time flow. But consider
the issue of interfering in a “time-reversed world” and how the changes in such a world undermine the credibility of arguing that the AoT is just relative. As I proposed earlier, what if I e.g.
deflect a backwards-happening bullet so it “then” misses the barrel it “came out of” (but not yet in the view of the intruding other world relative to which the intrusion is benchmarked.) If that
interference happens, the bullet maybe goes past the gun it should have come out of, runs into a tree etc. and then we have a ridiculous “past” that continues to get more wrecked as time (?) goes
In principle, our world could be such a time-reversed world if there’s no true physical distinction (or one that matters to showing “legitimacy”, versus some distinctions about nuclear decay etc.
that have equal “standing” regarding genuineness.) Yet now many of us can believe that an intervention from another world etc., regardless of what time flow they were in relative to us, could
actually change our own past? Such questions are part of the foundational framing and can’t be brushed off from not being more directly operational expressions of what we already know.
Hi Changcho:
FRW = Friedmann-Robertson-Walker, sorry about that. It's one of the standard textbook examples for spherical collapse (see eg MTW = Misner, Thorne, Wheeler). Best,
I find entropy to be a bit of a false lead, after all successive observations from a distribution should statistically pick out the most likely values.
The real question is why are successive observations always time ordered?
Perhaps the forward time shift operator acts on a left (right) bounded measurable set (that is time has a definite beginning) so that in the forward direction it is unitary (preserves inner
products of states) but the inverse operator (reverse time shift) has a non-trivial kernel, and is thus not unitary. You can only pull this sort of magic off in infinite Hilbert spaces.
Actually that is a pretty easy theorem to prove: Shift operators away from a finite left (right) boundary can be symmetries of a Hermitian operator, while the inverse shift operator, towards a
finite left (right) boundary can never be a symmetry of a Hermitian operator; precisely because the kernel is not trivial. So traveling back in time towards a finite temporal boundary can never
be a symmetry of a real observable. Thus the arrow of time always points away from a finite boundary in the past.
Should I write that up and publish it?
Hi Bee,
"On the one side those who believe the black hole entropy counts indeed the number of micro-states inside the black hole. I guess you will find most string theorists on this side, since this
point of view is supported by their approach."
I don't think that terms like inside or outside of black holes apply for the microscopic description of the black hole in string theory. There the microscopic degrees of freedom of the black hole
are basically open strings in D-branes configurations. So there is no notion of the inside of a black hole. The low energy description of these D-branes configurations though correspond to
solutions of the supergravity theory (p-branes). If then we compactify the derived metric down to our 4 dimensions we'll get eventually the metric of the black hole, where notions as the horizon
for example have a meaning. From there we can calculate the entropy from the area. This matches of course the entropy calculated by counting states in the microscopic picture.
Unless you meant something else.
Hi Giotis:
Thanks for the clarification. I wasn't referring to inside vs outside but to surface versus volume. I think the corresponding question would be after you compactify down, would the degrees of
freedom be found merely on the horizon? Best,
Well, I meant "radioactive nuclei" and am still interested in how entropy applies to such entities, but oddly enough "radiative nuclei" turns up quite a few apparently legitimate non-typo
references - I never heard of that, perhaps nuclei that emit gamma rays in response to excitation etc. But like I was saying, look at a muon which before it decays is structureless and just like
a stable particle in terms of properties, but when it decays there is then more complexity in the universe than before etc - how can entropy be coherently defined for such entities, absent a
mechanism we can describe inside the way we can for ordinary "heat engines" etc?
In string theory, things with large volumes and small areas tend to be very unstable, see eg
Having said that, I do think that it's a great pity that Prof Hsu's work has not received the attention it deserves. So your posting is very welcome. This is what physics blogs should be.
I think from outside of a black hole (Earth) if you can measure its entropy you're not going to get information from inside it because you're causally disconnected from that region of spacetime.
And to get an estimation of the entropy from inside the black hole I suppose you should perform a change of reference system and see what happens with the entropy after that. But I suppose that,
being causally disconnected and one spacelike dimension inside interchanged with the timelike one, the change of reference would not be mathematically sound. Is that correct?
Hi Bee,
Have you physicists no mercy, for I haven’t had reason to believe in monsters since childhood and now I’m presented with this. Seriously though, it’s a great post which serves to remind that the
second law is more of what’s the monster as far as physics is concerned. It will be interesting to see that in future if this means you must further refine the models of black holes or rather
find reason to repeal the law. Anyway, I don’t see this gives me reason enough to once again sleep with the light on:-)
Black hole has maximal entropy.
Do monsters have a geometrical structure, or are these things contrived by mortal men whose fear has overtaken them?:) I' think such abstraction are mental capacities that allow wo/men to advance
into the realm of the mental constructs. Interesting to see such real "allotrope geometrical values in the real world, from such abstractions.
You might think the loss of geometry like the loss of, say, Latin would pass virtually unnoticed. This is the thing about geometry: we no more notice it than we notice the curve of the earth. To
most people, geometry is a grade school memory of fumbling with protractors and memorizing the Pythagorean theorem. Yet geometry is everywhere. Coxeter sees it in honeycombs, sunflowers, froth
and sponges. It's in the molecules of our food (the spearmint molecule is the exact geometric reaction of the caraway molecule), and in the computer-designed curves of a Mercedes-Benz. Its loss
would be immeasurable, especially to the cognoscenti at the Budapest conference, who forfeit the summer sun for the somnolent glow of an overhead projector. They credit Coxeter with rescuing an
art form as important as poetry or opera. Without Coxeter's geometry as without Mozart's symphonies or Shakespeare's plays our culture, our understanding of the universe,would be incomplete. This
quote taken from article and posted in this Blog entry. I give a more in depth explanation at Moshe's article on Maldacena.
Will have to see if comment takes there or not.
Faster then light in a medium Neil. Ice or the earth serve as the backdrop for evidence of particle collisions. We know they happen in the cosmos, and, we know they happen at the LHC with Gran
How is this translated....microstate blackholes?
Thanks for this well explained posting!
When talking about entropy in relation to the physics of Black Holes (BH), it is typically conjectured that statistical thermodynamics and QFT apply in a way that is familiar to our laboratory
settings. But since BH are non-trivial space-time structures emerging near a gravitational singularity, what grounds does one have to make this tacit assumption? For example, is there
experimental confirmation of the Beckenstein area law? Can one precisely define what a "surface" is near the singularity, where space-time is likely to evolve into an unfamilar topology?
Hi Bee,
I wonder if it’s been considered what’s required to conquer a monster is a demon? In this case perhaps one inspired by Maxwell :-)
I don't think that string theorists would uniformly say that the degrees of freedom of a black hole must be "inside". They are equally well associated with the horizon. The very question "where
are they?" is largely unphysical, analogously to the question Where does the gauge theory live? that was answered by Moshe Rozali.
What matters is whether one can predict physics. The details of the microstates influence the evolution of the exterior microscopically - it is imprinted in Hawking radiation - but it doesn't
influence the exterior macroscopically - the radiation is thermal if we only care about the macro description.
Does it mean the information is inside or the surface? The question has no privileged answer. The people inside will surely think that at least a part of the entropy is carried by them who are
inside. The people outside may find the whole interior unphysical because they can't ever see it, so they will associate the entropy with the surface that simplifies calculations.
There is no contradiction here. In fact, black hole complementarity implies something much stronger: the degrees of freedom inside are not independent from those outside, despite the spacelike
Hi Ervin,
BH are non-trivial space-time structures emerging near a gravitational singularity,
You should try to find out exactly what you mean with that. First, what do you mean with a black hole? Presumably the presence of a horizon, since that is what makes the hole black. Then, in what
sense is that 'emerging near the gravitational singularity'? Well, it isn't. The horizon can be arbitrarily far away from where classically the singularity would be. The horizon can be in a
regime with arbitrarily small curvature. I will repeat that once again because it is a very common confusion: the horizon, which is what makes the black hole black, is formed in a region with a
background that can be arbitrarily close to being flat. It is to excellent precision described by semi-classical physics unless the black hole has reached Planckian size.
Besides this however you seem to assume that the entropy of the black hole has something to do with a quantum field in its inside. That is most definitely not the case. Best,
PS: You don't have to post as 'Anonymous'. Check option Name/URL under the comment window, a box will open where you can enter a name. You don't have to provide an URL.
Ervin might referring to Sean Carroll's post?:)
I may of been a bit incorrect in my explanation at Sean Carroll's. I'm trying. Of course his comment might not have to be correlated at all in this context and one can take Sean Carroll as his
blog post is.
See Blackhole Wars Ervin. And then, Moshe's newest post.
Somewhat related to this topic, an anomaly in the German GEO600 gravity waves detection experiment could be the first demonstration of the most amazing discovery in fundamental physics in decades
- the discreteness of space-time and possible confirmation of the holographic principle:
The article name is:
Our world may be a giant hologram
and the article link is:
Hi Tkk,
Thanks that was indeed an interesting article you offered. I must admit however I’ve never been able to get my head around that all we refer to as reality is nothing more then a holographic
projection. I have no problem with the horizon constituting to being what contains the hologram, it’s just that I can’t imagine what serves as the projector.
If any of you out there ever talk to Susskind or 't Hooft you might ask them if you don't know already.
Dear Bee,
Thank you for your explanations and clarifications in terminology. I am not fully familiar with the physics of BH and this is the reason why I ask. I must confess, however, that your reply did
not really answer my questions:
a) what observational grounds do we have to state that the concept of "entropy" from statistical physics is fully applicable to BH?
b)is there confirmation of Beckenstein law from astrophysical data?
c)for BH that are reaching Planckian size, what motivates the assumption that the horizon has a conventional topology?
Best regards,
Hi Tkk,
I've read this paper already some months ago and tried to figure out what the guy is talking about. I couldn't really make sense of it. Best,
Hi Ervin,
a) I don't even know what you mean with the question. Who applied where and when what concept to black holes that you doubt? Could you be somewhat more specific? What I wrote was: one identifies
the black hole area with an entropy for the reason that the area has the right properties and it matches with Hawkings results according to which black holes have a temperature (the inverse of
which you can integrate to get the entropy). If you read my post you will figure out that there is some debate about what this entropy de facto physically means.
b) There is nothing known in the universe that violates the bound. That is of course not a confirmation. What would you consider a 'confirmation'?
c) I don't even know whether it still has a horizon or whether that is a meaningful question then. Why do you think so?
Hi Bee,
1)Indeed, Hawking entropy as an integrated inverse temperature makes sense. The physical origin of BH entropy is not universally accepted and this is what prompted my question: can astrophysical
data be used to settle the debate on where the BH entropy is coming from?
2) By confirmation I mean data that reasonably matches the area law. Is there such confirmation?
2)I don't really know if it is a sensible question to ask but it seems to me that concepts such as "surface" and "area" loose their conventional meaning near the Planck scale.
Hi Ervin,
1) no.
2) see my above reply. the entropy bounds are so large, there is nothing known that breaks them. I wouldn't call that a confirmation though.
3) that is correct
"The one thing you should know about the total entropy of a system is that it can't decrease in time."
Please don't say this!!! I know it's a small quibble, but I think it's important to remember that entropy does not have to increase over time... it's only likely to do so! In small systems, it's
not even very likely to do so. The consequences of this are measurable.
Hi Aaron,
cool, thank you very much for the pointer ro the paper by Wang et al., "Experimental Demonstration of Violations of the Second Law of Thermodynamics for Small Systems and Short Time Scales".
Actually, I've started reading Feyerabend's "Against Method" over our vacation, and stumbled about his bold statement (there are many bold statements in the book, though ;-) that It is well known
[...] that statistical thermodynamics is inconsistent with the second law of the phenomenological theory (page 24 in the 1993 Verso edition at Google Books), which he later specifies (page 27) as
It is now known that the Brownian particle is a perpetual motion machine of the second kind and that its existence refutes the phenomenological second law.
I was/I am quite sure that this is not true without further qualifiers. Interestingly, Feyerabend didn't give any references to bolster this assertion, though usually his footnotes can exceed the
main text on a page.
As such, a Brownian particle doesn't do net work, as explained by Feynman's ratchet, and Brownian motors and other more complicated contrivances had not yet been known when Feyerabend wrote his
book, I guess.
So, the paper seems just to address this point - I will have a closer look.
Interestingly, Feyerabend refers an old German paper by Reinhold Fürth, which is said to show that there is a kind of "uncertainty principle" which forbids to establish entropy decrease via the
measurement of temperature/heat exchange. And, indeed, in the paper by Wang, they evaluate the work done by/done at the Brownian particle by really measuring mechanical work via integration of
dv·F, where velocity and force are accessible by the experimental setup...
Very interesting stuff!
Maybe you all like to eat more metaphors?:)
The experiment apparatus continues to exhibit tiny amount of vibrational noises no matter how 'perfect' they tried to eliminate all sources of noise. More detailed analysis of the noise shows it
has characteristics curve closely matching 'noise' of space-time as we go near Planck's length. But with a difference - the noise shows itself many orders of magnitude at lengths larger then
Plank. I.e. quantum fluctuation space-time could be showing itself at lengths much larger than Planck. How to explain?
One possibility is the holographic principle - information (thus entropy) of surface = that of volume. Since volume >> surface, the only way this can happen is Planck size inside the volume is >>
than surface. Two Plank sizes! When they calculate out the necessary Planck sizes according to HP, the Planck size in volume (i.e. within the universe) has a much larger length, which yields a
quantum flux character approximately matching that of the said apparatus 'noise'.
If this noise is truly irreducible and the characteristic pattern confirmed, then it is a strong confirmation of 1) large Planck length inside the universe, and the accepted Planck length at the
surface of the universe, 2) discreteness of space-time 3) HP.
I personally is intrigued by implications of discreteness on the time side. I.e. 2 Planck times!
Hi Tkk,
Yes, very intriguing indeed. Did you actually read the paper? I tried, and I also looked up the papers he is referring to (mostly his own) because I thought it would make for an interesting blog
post. I couldn't make sense of it, and then I lost interest. Your equation volume >> surface is obviously nonsense, sorry to be so blunt. Best,
This comment has been removed by the author.
Hi Arun,
This temporary violation of the second law is not a new idea in fact James Clerk Maxwell in 1878 wrote in a review in Nature the following (sourced from nature):
“The truth of the second law is ... a statistical, not a mathematical, truth, for it depends on the fact that the bodies we deal with consist of millions of molecules...
Hence the second law of thermodynamics is continually being violated, and that to a considerable extent, in any sufficiently small group of molecules belonging to a real body. “
It doesn’t appear that Wang et al has extended thinking much past Maxwell’s. What I do like however is that experiments like these as far as I tell suggest we separate ourselves from notions that
entropy might be responsible for the arrow of time. That is to say are we to suppose that for the brief few seconds where the experiment showed a decrease in entropy time ran backwards.
It also appears that Penrose doesn’t consider the second law as being a sacred cow either, for in his “Road to Reality” page 692 he states:
“In my view entropy has the status of a ‘convenience’, in present day theory rather then being ‘fundamental’---though there are indications that, in the deeper context where quantum-gravitational
considerations become important (especially in relation to black hole entropy) there may be a more fundamental status for this kind of notion.”
What Penrose considers significant about the black hole entropy question is that it seems to indicate that the lion’s share of entropy is manifest in and confined to these objects. One way to
look at this would be to say that without them the background radiation temperature would be significantly higher. On the other hand if you are one of those that equate entropy with information
would be to say that the overwhelming portion of the universes information is confined to them. Which ever way you slice it the current makeup and character of reality is reliant on their
existence in being largely responsible for assuring that equilibrium if looked at from a potential standpoint is something that is not being reached in the straight forward manner which
thermodynamics normally suggests. | {"url":"http://backreaction.blogspot.com/2009/01/monsters.html?showComment=1232107620000","timestamp":"2014-04-17T09:34:02Z","content_type":null,"content_length":"180071","record_id":"<urn:uuid:2556e65b-ee5c-411c-a5a7-70b7a9fb5fdd>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00217-ip-10-147-4-33.ec2.internal.warc.gz"} |
Matches for:
Contemporary Mathematics
2012; 202 pp; softcover
Volume: 569
ISBN-10: 0-8218-5359-7
ISBN-13: 978-0-8218-5359-7
List Price: US$74
Member Price: US$59.20
Order Code: CONM/569
This volume is a collection of papers presented at the 11th International Workshop on Real and Complex Singularities, held July 26-30, 2010, in São Carlos, Brazil, in honor of David Mond's 60th
birthday. This volume reflects the high level of the conference discussing the most recent results and applications of singularity theory. Articles in the first part cover pure singularity theory:
invariants, classification theory, and Milnor fibres. Articles in the second part cover singularities in topology and differential geometry, as well as algebraic geometry and bifurcation theory:
Artin-Greenberg function of a plane curve singularity, metric theory of singularities, symplectic singularities, cobordisms of fold maps, Goursat distributions, sections of analytic varieties,
Vassiliev invariants, projections of hypersurfaces, and linearity of the Jacobian ideal.
Graduate students and research mathematicians interested in singularities in geometry and topology. | {"url":"http://ams.org/bookstore?fn=20&arg1=conmseries&ikey=CONM-569","timestamp":"2014-04-17T02:18:40Z","content_type":null,"content_length":"15109","record_id":"<urn:uuid:33d838f1-db9f-474e-a15e-fcd5fef7f514>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00182-ip-10-147-4-33.ec2.internal.warc.gz"} |
%0 Report %D 2000 %T Finding Missing Proofs with Automated Reasoning %A B. Fitelson %A L. Wos %X
This article features long-sought proofs with intriguing properties (such as the absence of double negation and the avoidance of lemmas that appeared to be indispensable), and it features the
automated methods for finding them. The theorems of concern are taken from various areas of logic that include two-valued sentential (or propositional) calculus and infinite-valued sentential
calculus. Many of the proofs (in effect) answer questions that had remained open for decades, questions focusing on axiomatic proofs. The approaches we take are of added interest in that all rely
heavily on the use of a single program that offers logical reasoning, William McCune\'s automated reasoning program OTTER. The nature of the successes and approaches suggests that this program offers
researchers a valuable automated assistant. This article has three main components. First, in view of the interdisciplinary nature of the audience, we discuss the means for using the program in
question (OTTER), which flags, parameters, and lists have which effects, and how the proofs it finds are easily read. Second, because of the variety of proofs that we have found and their
significance, we discuss them in a manner that permits comparison with the literature. Among those proofs, we offer a proof shorter than that given by Meredith and Prior in their treatment of
Lukasiewicz\'s shortest single axiom for the implicational fragment of two-valued sentential calculus, and we offer a proof for the Lukasiewicz 23-letter single axiom for the full calculus. Third,
with the intent of producing a fruitful dialogue, we pose questions concerning the properties of proofs and, even more pressing, invite questions similar to those this article answers.
%B Studia Logica %P 329-356 %8 07/2000 %G eng %1 http://www.mcs.anl.gov/papers/P836.ps.Z | {"url":"http://www.anl.gov/publications/export/tagged/4918","timestamp":"2014-04-21T08:15:31Z","content_type":null,"content_length":"2606","record_id":"<urn:uuid:e1b69526-e075-4312-8aac-191b1bd74a54>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00548-ip-10-147-4-33.ec2.internal.warc.gz"} |
the encyclopedic entry of Gravity set
The Mitchell-Green gravity set (MGGS) is a fractal set that was created by Fred Mitchell and Chris Green in 1992.
The gravity set is a set of points in the plane. Like the Mandelbrot set, the gravity set is defined as follows: a certain mapping f from the plane into itself is considered. If x[0] is any point, if
the sequence defined recursively by x[n+1] = f(x[n]) does not diverge to infinity, then x[0] is a member of the gravity set; otherwise it is not. The difference between the gravity set and the
Mandelbrot set is that the mapping f is different.
In computations, one assigns colors to points according to the rate at which the aforementioned recursively defined sequence appears to converge. The production of the gravity set follows along the
same lines of color assignment to points based on the number of iterations it takes for the points to escape the system. One uses the basic formula for gravitation and simple Eulerian integration as
described below. Unlike the Mandelbrot set, the plane uses the cartesian coordinate system and thus it is not complex.
The conditions that are considered "escape" is arbitrarily defined, and in most of the maps presented here the escape condition is taken to be a chosen distance from the center of the coordinate
system (or center of mass).
The points are iterated through a gravitational system of fixed masses placed in an arbitrary arrangement, and given arbitrary masses. The pattern of the arrangement of the fixed masses is found
repeated in distorted fashion at all scales of magnification for many of the maps.
Negative masses are also allowed to be specified, and some of the maps generated have combinations of negative and positive masses.
Chris Green originated the idea in 1992, in an attempt to come up with something "fractal-like" which could allow the user to customize the basic shape of the set. Green and Mitchell were both
employees at Commodore Amiga at the time. Green initially implemented the gravity set in Forth. Those initial images were a bit crude, but showed great promise, which spurred Fred Mitchell on to
re-implement it with a dedicated program -- written in C on the Amiga, with a GUI front end -- so that he could explore this phenomenon more closely.
Mitchell discovered the incredible richness in the gravity set by manipulating the many parameters involved. Some of these maps can be seen at Fred's Fractal Laboratory | {"url":"http://www.reference.com/browse/Gravity+set","timestamp":"2014-04-18T02:59:54Z","content_type":null,"content_length":"87529","record_id":"<urn:uuid:5d46a438-4fb7-4fc0-befa-a83f734784d0>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00241-ip-10-147-4-33.ec2.internal.warc.gz"} |
Results 1 - 10 of 18
- SIAM Journal on Computing , 1995
"... Acyclic digraphs, such as the covering digraphs of ordered sets, are usually drawn upward, i.e., with the edges monotonically increasing in the vertical direction. A digraph is upward planar if
it admits an upward planar drawing. In this survey paper, we overview the literature on the problem of upw ..."
Cited by 81 (15 self)
Add to MetaCart
Acyclic digraphs, such as the covering digraphs of ordered sets, are usually drawn upward, i.e., with the edges monotonically increasing in the vertical direction. A digraph is upward planar if it
admits an upward planar drawing. In this survey paper, we overview the literature on the problem of upward planarity testing. We present several characterizations of upward planarity and describe
upward planarity testing algorithms for special classes of digraphs, such as embedded digraphs and single-source digraphs. We also sketch the proof of NP-completeness of upward planarity testing.
- SIAM J. Discrete Math , 1999
"... Ljubljana, February 2, 2009A simpler linear time algorithm for embedding graphs into an arbitrary surface and the genus of graphs of bounded tree-width ..."
Cited by 56 (10 self)
Add to MetaCart
Ljubljana, February 2, 2009A simpler linear time algorithm for embedding graphs into an arbitrary surface and the genus of graphs of bounded tree-width
- ALGORITHMICA , 1994
"... We give a detailed description of the embedding phase of the Hopcroft and Tarjan planarity testing algorithm. The embedding phase runs in linear time. An implementation based on this paper can
be found in [MMN93]. ..."
Cited by 35 (6 self)
Add to MetaCart
We give a detailed description of the embedding phase of the Hopcroft and Tarjan planarity testing algorithm. The embedding phase runs in linear time. An implementation based on this paper can be
found in [MMN93].
, 1997
"... In the mid 1980s, graphics workstations became the main platforms for software and information engineers. Since then, visualization of relational information has become an essential element of
software systems. Graphs are commonly used to model relational information. They are depicted on a graphics ..."
Cited by 25 (2 self)
Add to MetaCart
In the mid 1980s, graphics workstations became the main platforms for software and information engineers. Since then, visualization of relational information has become an essential element of
software systems. Graphs are commonly used to model relational information. They are depicted on a graphics workstation as graph drawings. The usefulness of the relational model depends on whether
the graph drawings effectively convey the relational information to the users. This thesis is concerned with finding good drawings of graphs. As the amount of information that we want to visualize
becomes larger and the relations become more complex, the classical graph model tends to be inadequate. Many extended models use a node hierarchy to help cope with the complexity. This thesis
introduces a new graph model called the clustered graph. The central theme of the thesis is an investigation of efficient algorithms to produce good drawings for clustered graphs. Although the
criteria for judging the qua...
- Lecture Notes in Computer Science , 1997
"... INTRODUCTION Graph drawing addresses the problem of constructing geometric representations of graphs, and has important applications to key computer technologies such as software engineering,
database systems, visual interfaces, and computer-aided-design. Research on graph drawing has been conducte ..."
Cited by 14 (3 self)
Add to MetaCart
INTRODUCTION Graph drawing addresses the problem of constructing geometric representations of graphs, and has important applications to key computer technologies such as software engineering,
database systems, visual interfaces, and computer-aided-design. Research on graph drawing has been conducted within several diverse areas, including discrete mathematics (topological graph theory,
geometric graph theory, order theory), algorithmics (graph algorithms, data structures, computational geometry, vlsi), and human-computer interaction (visual languages, graphical user interfaces,
software visualization). This chapter overviews aspects of graph drawing that are especially relevant to computational geometry. Basic definitions on drawings and their properties are given in
Section 1.1. Bounds on geometric and topological properties of drawings (e.g., area and crossings) are presented in Section 1.2. Section 1.3 deals with the time complexity of fundamental graph drawin
- Math. Slovaca , 1997
"... Let K be a subgraph of G. It is shown that if G is 3–connected modulo K then it is possible to replace branches of K by other branches joining same pairs of main vertices of K such that G has no
local bridges with respect to the new subgraph K. A linear time algorithm is presented that either perfor ..."
Cited by 8 (8 self)
Add to MetaCart
Let K be a subgraph of G. It is shown that if G is 3–connected modulo K then it is possible to replace branches of K by other branches joining same pairs of main vertices of K such that G has no
local bridges with respect to the new subgraph K. A linear time algorithm is presented that either performs such a task, or finds a Kuratowski subgraph K5 or K3,3 in a subgraph of G formed by a
branch e and local bridges on e. This result is needed in linear time algorithms for embedding graphs in surfaces.
"... Let K be an induced non-separating subgraph of a graph G, andletB be the bridge of K in G. Obstructions for extending a given 2-cell embedding of K to an embedding of G in the same surface are
considered. It is shown that it is possible to find a nice obstruction which means that it has bounded bran ..."
Cited by 7 (6 self)
Add to MetaCart
Let K be an induced non-separating subgraph of a graph G, andletB be the bridge of K in G. Obstructions for extending a given 2-cell embedding of K to an embedding of G in the same surface are
considered. It is shown that it is possible to find a nice obstruction which means that it has bounded branch size up to a bounded number of “almost disjoint ” millipedes. Moreover, B contains a nice
subgraph ˜ B with the following properties. If K is 2-cell embedded in some surface and F is a face of K, then ˜ B admits exactly the same types of embeddings in F as B. A linear time algorithm to
construct such a universal obstruction ˜ B is presented. At the same time, for every type of embeddings of ˜ B, an embedding of B ofthesametypeis determined.
- In 3rd Annual European Symposium on Algorithms (ESA’95), LNCS 979 , 1995
"... In this paper, we introduce a new graph model known as clustered graphs, i.e. graphs with recursive clustering structures. This graph model has many applications in informational and
mathematical sciences. In particular, we study C-planarity of clustered graphs. Given a clustered graph, the C-planar ..."
Cited by 5 (2 self)
Add to MetaCart
In this paper, we introduce a new graph model known as clustered graphs, i.e. graphs with recursive clustering structures. This graph model has many applications in informational and mathematical
sciences. In particular, we study C-planarity of clustered graphs. Given a clustered graph, the C-planarity testing problem is to determine whether the clustered graph can be drawn without edge
crossings, or edge-region crossings. In this paper, we present efficient algorithms for testing C-planarity and finding C-planar embeddings of clustered graphs. 1 Introduction Representing
information visually, or by drawing graphs can greatly improve the effectiveness of user interfaces in many relational information systems [12, 17, 18, 5]. Developing algorithms for drawing graphs
automatically and efficiently has become the interest of research for many computer scientists. Research in this area has been very active for the last decade. A recent survey citelabel13new of
literature in this area inclu...
, 1994
"... A linear time algorithm is presented that, for a given graph G, finds an embedding of G in the torus whenever such an embedding exists, or exhibits a subgraph\Omega of G of small branch size
that cannot be embedded in the torus. 1 Introduction Let K be a subgraph of G, and suppose that we are ..."
Cited by 4 (0 self)
Add to MetaCart
A linear time algorithm is presented that, for a given graph G, finds an embedding of G in the torus whenever such an embedding exists, or exhibits a subgraph\Omega of G of small branch size that
cannot be embedded in the torus. 1 Introduction Let K be a subgraph of G, and suppose that we are given an embedding of K in some surface. The embedding extension problem asks whether it is embedding
extension problem possible to extend the embedding of K to an embedding of G in the same surface, and any such embedding is an embedding extension of K to G. An embedding extension obstruction for
embedding extensions is a subgraph\Omega of G \Gamma E(K) such that obstruction the embedding of K cannot be extended to K [ \Omega\Gamma The obstruction is small small if K [\Omega is homeomorphic
to a graph with a small number of edges. If\Omega is small, then it is easy to verify (for example, by checking all the possibilities Supported in part by the Ministry of Science and Technolo...
, 2000
"... A projective plane is equivalent to a disk with antipodal points identified. A graph is projective planar if it can be drawn on the projective plane with no crossing edges. A linear time
algorithm for projective planar embedding has been described by Mohar. We provide a new approach that takes O(n ..."
Cited by 3 (0 self)
Add to MetaCart
A projective plane is equivalent to a disk with antipodal points identified. A graph is projective planar if it can be drawn on the projective plane with no crossing edges. A linear time algorithm
for projective planar embedding has been described by Mohar. We provide a new approach that takes O(n 2 ) time but is much easier to implement. We programmed a variant of this algorithm and used it
to computationally verify the known list of all the projective plane obstructions. Key words: graph algorithms, surface embedding, graph embedding, projective plane, forbidden minor, obstruction 1
Background A graph G consists of a set V of vertices and a set E of edges, each of which is associated with an unordered pair of vertices from V . Throughout this paper, n denotes the number of
vertices of a graph, and m is the number of edges. A graph is embeddable on a surface M if it can be drawn on M without crossing edges. Archdeacon's survey [2] provides an excellent introduction to | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=126345","timestamp":"2014-04-20T10:27:39Z","content_type":null,"content_length":"37012","record_id":"<urn:uuid:79c46874-a501-481a-a75e-7f3c27c2b1c1>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00110-ip-10-147-4-33.ec2.internal.warc.gz"} |
Implementation of interior point methods for large scale linear programming
Results 1 - 10 of 55
- Optimization Methods and Software , 1996
"... In this paper, we describe our implementation of a primal-dual infeasible-interior-point algorithm for large-scale linear programming under the MATLAB 1 environment. The resulting software is
called LIPSOL -- Linear-programming Interior-Point SOLvers. LIPSOL is designed to take the advantages of M ..."
Cited by 60 (2 self)
Add to MetaCart
In this paper, we describe our implementation of a primal-dual infeasible-interior-point algorithm for large-scale linear programming under the MATLAB 1 environment. The resulting software is called
LIPSOL -- Linear-programming Interior-Point SOLvers. LIPSOL is designed to take the advantages of MATLAB's sparse-matrix functions and external interface facilities, and of existing Fortran sparse
Cholesky codes. Under the MATLAB environment, LIPSOL inherits a high degree of simplicity and versatility in comparison to its counterparts in Fortran or C language. More importantly, our extensive
computational results demonstrate that LIPSOL also attains an impressive performance comparable with that of efficient Fortran or C codes in solving large-scale problems. In addition, we discuss in
detail a technique for overcoming numerical instability in Cholesky factorization at the end-stage of iterations in interior-point algorithms. Keywords: Linear programming, Primal-Dual
- Computational Optimization and Applications , 2004
"... Abstract. Every Newton step in an interior-point method for optimization requires a solution of a symmetric indefinite system of linear equations. Most of today’s codes apply direct solution
methods to perform this task. The use of logarithmic barriers in interior point methods causes unavoidable il ..."
Cited by 44 (13 self)
Add to MetaCart
Abstract. Every Newton step in an interior-point method for optimization requires a solution of a symmetric indefinite system of linear equations. Most of today’s codes apply direct solution methods
to perform this task. The use of logarithmic barriers in interior point methods causes unavoidable ill-conditioning of linear systems and, hence, iterative methods fail to provide sufficient accuracy
unless appropriately preconditioned. Two types of preconditioners which use some form of incomplete Cholesky factorization for indefinite systems are proposed in this paper. Although they involve
significantly sparser factorizations than those used in direct approaches they still capture most of the numerical properties of the preconditioned system. The spectral analysis of the preconditioned
matrix is performed: for convex optimization problems all the eigenvalues of this matrix are strictly positive. Numerical results are given for a set of public domain large linearly constrained
convex quadratic programming problems with sizes reaching tens of thousands of variables. The analysis of these results reveals that the solution times for such problems on a modern PC are measured
in minutes when direct methods are used and drop to seconds when iterative methods with appropriate preconditioners are used. Keywords: interior-point methods, iterative solvers, preconditioners 1.
, 2003
"... Many practical large-scale optimization problems are not only sparse, but also display some form of block-structure such as primal or dual block angular structure. Often these structures are
nested: each block of the coarse top level structure is block-structured itself. Problems with these charact ..."
Cited by 41 (20 self)
Add to MetaCart
Many practical large-scale optimization problems are not only sparse, but also display some form of block-structure such as primal or dual block angular structure. Often these structures are nested:
each block of the coarse top level structure is block-structured itself. Problems with these characteristics appear frequently in stochastic programming but also in other areas such as
telecommunication network modelling. We present a linear algebra library tailored for problems with such structure that is used inside an interior point solver for convex quadratic programming
problems. Due to its object-oriented design it can be used to exploit virtually any nested block structure arising in practical problems, eliminating the need for highly specialised linear algebra
modules needing to be written for every type of problem separately. Through a careful implementation we achieve almost automatic parallelisation of the linear algebra. The efficiency of the approach
is illustrated on several problems arising in the financial planning, namely in the asset and liability management. The problems are modelled as
- Mathematical Programming 107 , 2006
"... An interior-point method for nonlinear programming is presented. It enjoys the flexibility of switching between a line search method that computes steps by factoring the primal-dual equations
and a trust region method that uses a conjugate gradient iteration. Steps computed by direct factorization a ..."
Cited by 31 (11 self)
Add to MetaCart
An interior-point method for nonlinear programming is presented. It enjoys the flexibility of switching between a line search method that computes steps by factoring the primal-dual equations and a
trust region method that uses a conjugate gradient iteration. Steps computed by direct factorization are always tried first, but if they are deemed ineffective, a trust region iteration that
guarantees progress toward stationarity is invoked. To demonstrate its effectiveness, the algorithm is implemented in the Knitro [6, 28] software package and is extensively tested on a wide selection
of test problems. 1
- SIAM J. on Optimization , 1996
"... Abstract. Despite the efficiency shown by interior-point methods in large-scale linear programming, they usually perform poorly when applied to multicommodity flow problems. The new specialized
interior-point algorithm presented here overcomes this drawback. This specialization uses both a precondit ..."
Cited by 30 (6 self)
Add to MetaCart
Abstract. Despite the efficiency shown by interior-point methods in large-scale linear programming, they usually perform poorly when applied to multicommodity flow problems. The new specialized
interior-point algorithm presented here overcomes this drawback. This specialization uses both a preconditioned conjugate gradient solver and a sparse Cholesky factorization to solve a linear system
of equations at each iteration of the algorithm. The ad hoc preconditioner developed by exploiting the structure of the problem is instrumental in ensuring the efficiency of the method. An
implementation of the algorithm is compared to state-of-the-art packages for multicommodity flows. The computational experiments were carried out using an extensive set of test problems, with sizes
of up to 700,000 variables and 150,000 constraints. The results show the effectiveness of the algorithm.
- in the Cutting Plane Scheme, Mathematical Programming , 1997
"... A practical warm-start procedure is described for the infeasible primal-dual interior-point method employed to solve the restricted master problem within the cutting-plane method. In contrast to
the theoretical developments in this field, the approach presented in this paper does not make the unreal ..."
Cited by 23 (2 self)
Add to MetaCart
A practical warm-start procedure is described for the infeasible primal-dual interior-point method employed to solve the restricted master problem within the cutting-plane method. In contrast to the
theoretical developments in this field, the approach presented in this paper does not make the unrealistic assumption that the new cuts are shallow. Moreover, it treats systematically the case when a
large number of cuts are added at one time. The technique proposed in this paper has been implemented in the context of HOPDM, the state of the art, yet public domain, interior-point code. Numerical
results confirm a high degree of efficiency of this approach: regardless of the number of cuts added at one time (can be thousands in the largest examples) and regardless of the depth of the new
cuts, reoptimizations are usually done with a few additional iterations. Key words. Warm start, primal-dual algorithm, cutting-plane methods. Supported by the Fonds National de la Recherche
Scientifique Su...
"... We perform a smoothed analysis of a termination phase for linear programming algorithms. By combining this analysis with the smoothed analysis of Renegar’s condition number by Dunagan, Spielman
and Teng ..."
Cited by 23 (4 self)
Add to MetaCart
We perform a smoothed analysis of a termination phase for linear programming algorithms. By combining this analysis with the smoothed analysis of Renegar’s condition number by Dunagan, Spielman and
, 2003
"... We perform a smoothed analysis of Renegar’s condition number for linear programming. In particular, we show that for every n-by-d matrix Ā, n-vector ¯ b and d-vector ¯c satisfying ∥ Ā, ¯ b, ¯c ∥
∥ F ≤ 1 and every σ ≤ 1 / √ dn, the expectation of the logarithm of C(A,b,c) is O(log(nd/σ)), where A, ..."
Cited by 22 (6 self)
Add to MetaCart
We perform a smoothed analysis of Renegar’s condition number for linear programming. In particular, we show that for every n-by-d matrix Ā, n-vector ¯ b and d-vector ¯c satisfying ∥ Ā, ¯ b, ¯c ∥ ∥ F
≤ 1 and every σ ≤ 1 / √ dn, the expectation of the logarithm of C(A,b,c) is O(log(nd/σ)), where A, b and c are Gaussian perturbations of Ā, ¯ b and ¯c of variance σ 2. From this bound, we obtain a
smoothed analysis of Renegar’s interior point algorithm. By combining this with the smoothed analysis of finite termination Spielman and Teng (Math. Prog. Ser. B, 2003), we show that the smoothed
complexity of linear programming is O(n 3 log(nd/σ)).
, 1996
"... Most of the current techniques for the direct solution of linear equations are based on supernodal or multifrontal approaches. An important feature of these methods is that arithmetic is
performed on dense submatrices and Level 2 and Level 3 BLAS (matrixvector and matrix-matrix kernels) can be us ..."
Cited by 17 (2 self)
Add to MetaCart
Most of the current techniques for the direct solution of linear equations are based on supernodal or multifrontal approaches. An important feature of these methods is that arithmetic is performed on
dense submatrices and Level 2 and Level 3 BLAS (matrixvector and matrix-matrix kernels) can be used. Both sparse LU and QR factorizations can be implemented within this framework. Partitioning and
ordering techniques have seen major activity in recent years. We discuss bisection and multisection techniques, extensions to orderings to block triangular form, and recent improvements and
modifications to standard orderings such as minimum degree. We also study advances in the solution of indefinite systems and sparse least-squares problems. The desire to exploit parallelism has been
responsible for many of the developments in direct methods for sparse matrices over the last ten years. We examine this aspect in some detail, illustrating how current techniques have been developed
or ...
- NUMERICAL ALGORITHMS , 1999
"... ..." | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=468179","timestamp":"2014-04-18T19:34:44Z","content_type":null,"content_length":"38422","record_id":"<urn:uuid:e33dca2b-c3c3-40f1-9418-fd232af099f8>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00068-ip-10-147-4-33.ec2.internal.warc.gz"} |
Simple circuit measures RMS value of AC power line | EE Times
Design How-To
Simple circuit measures RMS value of AC power line
The root-mean-square value of an AC signal compares the heating value of an unknown AC signal to that of a known DC signal across identical loads, and is equal to the amount of DC required to produce
an identical amount of heat in the load. When the power dissipated in the loads is equal, the known DC voltage equals the RMS value of the unknown ac signal. For example, if we applied 1 V AC RMS to
a resistive heating element, it would produce exactly the same amount of heat as if we had applied 1 V DC.
Mathematically the RMS value of a voltage is defined as:
Click on image to enlarge.
This formula represents the standard deviation of a zero-average statistical signal.
Simple relationships include the following:
Click on image to enlarge.
In general, measuring the RMS value requires an RMS-to-dc converter, which provides a DC output equal to the RMS value of any input waveform. Unfortunately, the range of AC signals to be measured can
be very large, while the input range of typical RMS-to-DC converters is only a few volts. To be useful for RMS-to-DC converters, the large input voltages must thus be scaled down. Measuring the RMS
value of a home power line, for example, requires addtional circuitry that attenuates the AC signal to a suitable value that accommodates the input range of the RMS-to-DC converter. This application
solves the problem of RMS measurements for large AC signals such as those from the electric power line.
Click on image to enlarge.
Figure 1: Simple circuit measures the RMS value of a power line.
Figure 1
, the AD628 programmable-gain difference amplifier, configured for a gain of 1/25, scales the power line signal before applying it to the AD8436 rms-to-dc converter, which can only accept voltages
within 0.7 V of either supply. The difference amplifier has a ±120-V common-mode input and differential-mode range, making it well suited for dividing down the high-voltage power line. The precise DC
equivalent of the RMS value of the AC waveform is provided at RMS OUT.
Figure 2
shows the 330-V AC p-p, 60-Hz home power line, the scaled output from the difference amplifier, and the DC output of the RMS-to-DC converter.
Click on image to enlarge.
Figure 2: Input, intermediate, and output waveforms.
The complete design draws only 2 mA, making it ideal for low-power applications. The external input resistor, 150 kO as shown, can be scaled up for use with signals larger than 400 V p-p. The input
signal can exceed the power supply with no damage to the device, allowing the input signal to be present even in the absence of the supply voltage. In addition, the short-circuit protected system can
operate on dual supplies up to ±18 V.
This circuit computes the true root-mean-square value of a complex AC (or AC plus DC) input signal and gives an equivalent dc output level. The true RMS value of a waveform is a more useful quantity
than the average rectified value because it is a measure of the power in the signal. The RMS value of an AC-coupled signal is also its standard deviation.
About the authorsDavid Karpaty
is a staff engineer in the Integrated Amplifier Products (IAP) group of Analog Devices, Inc., responsible for product and test engineering support of precision signal processing components with a
focus on automotive products. He holds a BSEE from Northeastern University and a bachelor’s degree in electrical engineering technology from Wentworth Institute.
Chau Tran
joined Analog Devices in 1984, where he works in the Instrumentation Amplifier Products (IAP) group. In 1990, he graduated with an MSEE degree from Tufts University. Tran holds more than 10 patents
and has authored more than 10 technical articles.
Karpaty and Tran are based at ADI in Wilmington, Mass. | {"url":"http://www.eetimes.com/document.asp?doc_id=1280109","timestamp":"2014-04-16T19:27:20Z","content_type":null,"content_length":"136521","record_id":"<urn:uuid:7b4d9de9-9cc2-4dd7-8563-f87498253ada>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00460-ip-10-147-4-33.ec2.internal.warc.gz"} |
Conic Sections and Parallel Lines
Date: 18 Mar 1995 18:35:01 -0500
From: Ryan M. Howley
Subject: Conic Sections
My Algebra II class is just about to finish up conic sections,
and we started talking about degenerate cases. Our teacher
told us there was a way to cut a cone with a plane to get
parallel lines. Another teacher in the department can do it
algebraically, but no one can do it physically. Is there such
a plane in reality or only in theory? If there is a way, could
you please explain it to me? Many thanks......
Ryan M. Howley
Date: 18 Mar 1995 19:25:14 -0500
From: Dr. Ken
Subject: Re: Conic Sections
Hello there!
I've got to agree; I don't see a way (physically) to cut a conic
with a plane to get two parallel lines. I'd love to see the
algebraic version, though, and perhaps that will illuminate
something (there may be some trickery or magic or something
going on). If you could give, say, the equations for the conic
and the plane that does it, that would be neat.
-Ken "Dr." Math
Date: 18 Mar 1995 19:34:55 -0500
From: Ryan M. Howley
Subject: Re: Conic Sections
I kind of figured that. The only thing I'm going off of is our
math department saying that there is a way to do it, but none
of them know how. A girl from another class said her teacher
is going to show them the algebraic way, or did and she forgot,
so right now I don't have it. Do you know of any places on the
web that might be able to help me furthur?
Well, I got a problem on our homework that might be the
algebraic way. I had a competition on Friday, and forgot all
the math work we had done on Thursday, but I'm pretty sure
it's correct. The problem was graph x^2 + 2xy + y^2 = 4. I
factored it, got (x+y)^2 = 4 which then leads to x+y=-2
and x+y=2. So, does this help at all? Thanks much.......
Ryan M. Howley
Date: 25 Mar 1995 14:56:14 -0500
From: Dr. Ken
Subject: Re: Conic Sections
Hello there!
Sorry it's taken me so long to get back to you. Here's what
I think about your problem: the original equation wasn't a
conic at all.
When you look at the definition of a conic, you see this:
In the plane, let l be a fixed line (the directrix) and F be a
fixed point (the focus) not on the line, as in Figure 2. The
set of points, P, for which the ratio of the distance PF from
the focus to the distance PL from the line is a positive
constant E (the eccentricity) -- that is, which satisfies
PF = E * PL
is called a conic. If 0<E<1, it is an ellipse; if E=1, it is a
parabola; if e>1, it is a hyperbola.
(from Purcell & Varberg's Calculus text, fifth edition)
And I'm afraid that your equation (x+y)^2 = 4 doesn't lead
to such set of points. To see this, take some points on the
two lines and try to figure out what the focus and directrix
would have to be.
I hope this is a little helpful to you. Thanks for the interest!
-Ken "Dr." Math
Date: 25 Mar 1995 15:02:12 -0500
From: Ryan M. Howley
Subject: Re: Conic Sections
Well, I also went over usenet and asked people. One guy
explained it very well, and said that it was possible. I'll
send you a copy of the letter in case you ever need it for
future reference:
From: Chris Delanoy
Newsgroups: k12.ed.math
Subject: Degenerate Conic Section
Yes... First, you take the degenerate of a Cone, which is
a cylinder. Now, you intersect the cylinder with a plane
that is parallel to the generators of the cylinder, and you
have two parallel lines. Geometrically, parallel lines are
either an infinitely flat hyperbola, or a parabola whose
vertex is at infinity (Note - a cylinder is a cone whose
vertex is at infinity, therefore the plane-intersection of
a parabola (parallel to generators) applies equally to the
parallel lines, as does the hyperbola (parallel to
revolutionary axis, which in this case is the same angle
as a paraboloidal intersection))
- Chris J. Delanoy
Date: 25 Mar 1995 15:48:53 -0500
From: Dr. Ken
Subject: Re: Conic Sections
Hey, thanks!
I guess I had never thought of conic sections in this way.
I'm glad you showed this to us, I know I learned from it!
-Ken "Dr." Math
Date: 07/09/98 at 21:34:31
From: Mona Huff
Subject: Conic sections
I was searching for some good sites on conic sections and read the
discussion you had with Ryan Howley about getting parallel lines by
cutting a cone. I agreed with you up to the last exchange. Is a
cylinder really a degenerate cone? Please explain.
Date: 07/10/98 at 13:09:34
From: Doctor Peterson
Subject: Re: conic sections
Hi, Mona. Good question - as you saw, even though the original
question mentioned degenerate cases, our respondant didn't think about
the cylinder as a degenerate cone, partly because it's not quite
accurate to say you can cut a cone to get parallel lines. The
important point in the question was to explain why parallel lines are
a degenerate conic - not a full-fledged conic, but "on the edge".
When we define anything in math, we often find that we can relax the
definition slightly and things still work. That is called a
"degenerate" case, because some feature has been lost, allowing the
thing we are looking at to "degenerate into" something simpler. A
degenerate case is sort of like the boundary of a region. If I stand
on the border between two states, I'm not exactly in my state, but I'm
still very very close, and I'm not really out of it either. A cylinder
isn't exactly a cone, but it's so close that a lot of things that can
be said about cones still apply.
There are many examples of degeneracy, usually involving something
becoming either zero or infinite, or two things that were different
becoming the same.
If you stretch a cone out, holding on to its base but pulling the
vertex to infinity, it becomes a cylinder. (Picture it: the sides
become closer and closer to parallel.) If you hold onto the vertex and
pull the base to infinity, it becomes a straight line (so points,
produced by cutting this line with a plane, are degenerate conics,
too). If you stretch the base out sideways, increasing its radius to
infinity, you will get a plane (so a single line is a degenerate
conic, the intersection of two planes). Another way to get a
degenerate conic is to cut a cone through its vertex, producing
a pair of intersecting lines. (This is what the student's equation
Similarly, a triangle can degenerate into a line segment if its
vertices are collinear, or parallel lines can degenerate into a single
line if they coincide. Intersecting lines can degenerate into parallel
lines, when the point of intersection moves out to infinity. In each
case, some things you can say about them still work even though they
are degenerate, which is why we bother talking about them.
Because a cylinder can be thought of as a degenerate cone, we can
treat parallel lines as if they were a special case of the hyperbola.
Thinking in terms of the graph of a hyperbola, just picture each
branch getting flatter and flatter until you have two straight lines
rather than two curved branches. In terms of the equation:
x^2 y^2
--- - --- = 1
a^2 b^2
we are letting b go to infinity, so the equation becomes:
--- = 1
x = +/- a
This is just what you get if you cut a cylinder by a plane parallel to
its axis. If you cut a cylinder in any other direction, you get more
normal conics (circles and ellipses), which is why it is useful to
think of the cylinder as a special cone. We don't need to talk
separately about "cylindrical sections", because what we know about
conic sections still works. Just don't try to find the foci of a pair
of parallel lines!
Does that help?
- Doctor Peterson, The Math Forum
Check out our web site! http://mathforum.org/dr.math/ | {"url":"http://mathforum.org/library/drmath/view/54756.html","timestamp":"2014-04-20T01:31:32Z","content_type":null,"content_length":"13099","record_id":"<urn:uuid:ba7f070c-eaee-4c7d-ae05-f7f1f299e73d>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00222-ip-10-147-4-33.ec2.internal.warc.gz"} |
Better Economics and Unified Phase Theory
1836 15.6175392619 log(1/2+sqrt(5)/2)
377 14.6307168455 log(3/2)
If I have the theory right, then then light should sample the mass of the proton at the same rate as mass samples the space impedance.
They are off by one wave number, mainly because there are adjacent mass quants at the point where the electron orbitals start. How does the proton know all this? Because when one of these phase
imbalances began to quantize the proton nulls, the proton nulls rumble, they tell the proton that phase imbalance is about to go Shannon.
The proton is going to be sensitive to space impedance because the charge it, and the electron share is related to keeping wave from quantizing a chunk of the proton nulls.
I am looking at spectral lines from the atoms. Now all of these should go as natural log, they are emitted before they go Shannon. Their frequency of emission and the space impedance of 377 should be
should be aligned, for the same reason. The frequency they emit at is energy, and the mass they match plus that frequency causes the same rumble in the proton phase imbalance.
So, you put a heavy muon in the middle of those orbitals, the rumble point changes.
The thing that makes the atom work is efficient packing by the quarks. The mass quant at the proton is 108 = 2*2 *3*3*3. No matter how wave moves, the quarks find an efficient packing. That makes the
proton attractive to phase because of its excess capacity to balance phase variance. So we get a dearth of free nulls in the outer ring, and kinetic energy is supported, hence the atom. How would the
proton use this capacity? By allowing phase to move in and out in response to energy changes.
There are seven principle quant numbers, and about 14 missing mass quants from the electron to the proton. There are some 11 missing wave numbers corresponding to those missing mass quants. There are
an additional 7 angular momentum orbital slots. The angular and principal split the slots available. Count up the vectors of principal and angular. Some eight. These sets of angular and principle are
the contours within which the proton can maintain constant precision, and none of the wave numbers in the atomic shell will materialize.
These lines lie along an integer defined manifold of phase variance in the proton
That is the plan, find the curvature in which that happens.
And these integers, the first three quantum numbers, should be separable into an integer set, i,j,i*j which define the equal potential contours in the proton. There exists some integral equations
that tells me so, and proton precision should be allocated so all those integrals converge uniformly with integer summation. These phase potentials should all be equal gradients, and they should have
simple errors with respect to the main quant wave numbers up the quant chain from the atom. The proton seeks to maximize instability of the Nulls and maintain stability of phase by accepting phase
I am looking at atomic orbitals, and the necessity of dealing with the units of physics that keep coming up. I need to simplify, because everything is really, just phase and null and how much of
those we have with respect to two quantization ratios. So I am dumping the units of physics and replacing them with units of things counted, as follows:
Primary units:
m is the quantization ratio of Nulls
f is the quantization ratio of Phase
Secondary units:
j*m is m multiplied by an integer
k*f is f multiplied by an integer
m^j that is m taken to an integer power
f^k and f taken to an integer power
There, that is simpler, and eliminates charge and magnetism and particles and mass and time and energy and power and force and space. As long as I can write equations with these, in some symmetrical
frame, I can count them up and let physicists add their semantics later.
I add some secondary conjectures.
2^(j*m) and 2^(k*f) define a digit system when operations on m and f are Shannon separable, and
e^(j*m) and e^(k*f) define the digits system when operations are not Shannon separable.
A symmetric frame is one in which either conjecture always holds. Symmetry is determined by the density of phase and nulls samples. To obtain density I construct two operators:
D1 = 1/2(m^k - f^k) and D2 = 1/2(m^k + f^k) k integer
When the operators are Shannon equal to one, we have a binary system, otherwise a natural log system.
I suspect my symmetric frames will be defined by integers:
{k, j, j * k}; k, j integer.
The trick to the orbitals is to find the set j,k that define non-Shannon frames. I can find them in two ways; 1) Cheat and look at the centuries of work done by physicists, or 2) Use some methods of
group theory. Once the integer set is found, I count them out and draw the natural log results.
Phase theory predicts the Hamiltonian is always the sum of variance in Phase + variance in Nulls, Vp + Vn Physics is the art of finding these two quantities. In the Shannon equation we have the term:
log2(1+ S/N) which now gets converted to:
Lets call that the Shannon operator. I think we can fix the Shannon sample rate to the value 2. So then the Shannon operator will be a function of (j*k,j,k). The Shannon condition not met when the
predicted quant value some j,k is < 1. The trick then becomes finding the form of the Shannon operator.
We know why we have the j,k axis of symmetry, because the proton is made of a threesome of quarks which would thus have separable axis of symmetry. Group theory likely tells us then, we have a third
axis, j*k. I think that is how it is going to work. We make a simplification by treating the proton as a sphere along the three axis, and the value Vp and Vn will be mappings of their values inside
the proton along the three axis. We are damn near done! We know the orbitals are constructed in, our spectral charge, between two points, one where the m,f for magnetic is Shannon and one where m,f
for the electron is Shanon. We know how much proton precision is available, so we just take our spread the precision among the density operators along the three axis. We scale up the precision to the
point where Shannon is almost met, take the 'int' of j an k and presto.
At any given point in the proton, Vn is (D1+D2)^2, Vp = (D1-D2)^2 Computed over the three axis, up to the precision of the proton. Precision is:
sum(j*k*p + j*p + k*p); where p is a scale factor and must scale each term evenly. We need only find the integer set where the manifold over the three axis converge uniformly, the manifold curvature
being the limit of proton precision. That manifold curvature makes the Shannon Uncondition in the Shannon operator.
Charge falls out because that curve component prevents EM light from forming. Spin falls out because two adjacent mass in our spectral chart. The j*k axis defines null density that prevents mass
quantization, The magnetic is really the phase angle of the phase imbalance in the orbital, and when that is to straight, the electron flies away.
Wiki has a full description of the lagrange points around the solary system, interesting. Lagrange points around the solar system
Orbits tend to be unstable a bit around these points. Borrowing from Einstein, we get the same equations whether we think the Lagrange points move around or whether the potential is too close to
zero. We can make the same analogy if we believe in quantum gravity; either the quants can change ratios easily or they move around quite a bit. But the important thing is these Lagrange points have
size of the order of 1E6 meters.
So take that as the Compton wavelength and then move the idea to a Neutron start where gravity half wave are only 10e6, so their Lagrange points only have 10E3 size, and that makes the Compton 'mass'
of these fields very large in galactic standards, the mass of a neutrino, which we think is small, but compared to gravity nulls they are quite hefty. The proton is not supported, it cannot hold
charge gradient in these intense fields. So what happens at the center of these Neutron stars? Maybe they have negative protons, negatrons.
I jest. But in a Black Hole, matter at the center is not even supported. The gravity nulls are huge, nearing a size greater than even the electron. They would have absorbed most of the Nulls in the
center, the center would be devoid of packed Nulls, and most of free space would be a phase gradient, gravity having collected all the available nulls. If you continually grow gravity, what would you
get? Gravity Nulls approach the size of a Neutron, you get the primordeal Neutron.
But the conservation of energy tells us that is impossible, we just cannot recyle mass and energy continually. Unless the Null quant ratio grew a bit. Such a thing would be possible because the
vacuum needs to generate noise to keep its sample rates constant. But in a region where nulls and phase have become so widely separated, they would not get the opportunity. You get these bizzare
regions of space where the fine grained accuracy of the proton is not supported, just these dull neutron like gravity nulls held in groups with a distance of a few hundred meters. The Compton
equivalence is off, Plank is way off. Nothing interesting, just dull void.
If I were a group theorist, I would change the mass/wave quant ratios, and see if there are spots just above the Proton where we can get a near integer 'one. Somewhere near 107, up from 91, in wave
number. I think the Higgs wave number is about 107. Reset the mass quant ratio to match that, see how much grouping is supported. I set the mass ratio at 5/3 and get a very lose match with wave
number 138, keeping the sample rate of light at Fibonacci.
The 5/3 world
The idea is a dull world with two type of packed nulls, huge useless neutron like things, and gravitrons slightly bigger than an electron. Very little kinetic energy, all precarious potential energy.
This is the pre-big bang world. If this is what a quasar is, then they would be easily disturbed by the normal vacuum and occasionally shoot out huge quantities of 3/2 matter at high kinetic energy.
I am not sure quasars are sucking in matter but maybe shooting it out.
I take four masses, pair then up :
Mass pairs (1,2,3,4) , taken two at a time
My signal, counting radii up by some small increment and computing gravitational force (G=1), totalled across the six radii. So I compute a signal of 1/r^2 dividing each of those mass pairs. Assume a
total energy. Play around with variable until I get a smooth spread of quant ratios.
Compute SNR for a total energy E if the noisecounting radii by d in N steps
My signal:
Sig <- function(d,E,N) {
foo.sig = NULL
for(i in 1:N){
foo.sig[i] = sum(m/(i*d)^2)/E +1}
Here is the SNR +1. These are the bauds, some eight of them that count out six radii in steps. I have really done a spectral decomposition. The result tells me that those signals divided by that
number E, is best counted with these bauds. Nothing more.
So I selected a possible function. There were six radii so I suspect they counter out powers of 2 modulo six. So I count out the function, using these quants in an eight digit sequence. and generate
the signal of six radii counting. My counter was my suspected function is counter squared modulo 6. I was sure to do this:
2^(B[j]*k)/(2^k), divide by the bit power, to normalize the digits to an eight bit twos; I divided by the bit power.
I got this plot.
Six radii counting out a quadratic form. Is it useful? No, not without some better knowledge of Kepler. If I set two quant rates, and counted out a polar R, and a theta, then picked the ellipse as a
possible function, then yes, useful. As for this thing on the right, it could just as easily been someone juggling six balls in the air.
But here is the point. If we know the generating force and the total energy, then we can find the best quants before applying some law of physics in the Schrodinger equation. Do the Schrodinger last.
Do group separation first.
I am a lousy physicist, and do not trust myself, I hesitate.
It should take less phase to pack a null because phase moves at light speed. Nulls should represent potential energy. Which order? Is it Null at quant k+1 and wave at k? Or the other way around?
Readers really cannot trust me to get this right the first go around. But I trust the proton, and it comes in at wave then null. I am going with that for now.
We want to compute the orbits numerically for some fixed energy level. The fixed energy level is the noise. There are N! equations of mass on mass with G, the summation of these equations at any step
is the signal. You step all radius with some common step value such that the divergence along the way does not break Shannon separability. Check the -iLog(i) along the way with you computer, or just
scale up.
The amplitude is some 2^(k* quant). You want to compute the quants for each step from k=0 to k = Nmax. Compute the mass on mass signal at each step, use Shannon to get the quant. What does this get
you? You have simply counted up the group with respect to some fixed point of symmetry. You have created the Kepler group. Does this find you the central Lagrange, or is there one? I have not done
this, but if you lay out the radii on a complete sequence, they should count out the potential energy. The kinetic energy makes that phase flat. When the potential energy is maximum they are closest
together, and visa versa. So, use a little ingenuity and fins a common Lagrange.
For example: On the complete symmetry group of the classical Kepler system
Wave(k) does not have Null(k), it has Null(k-1), it can't pack the proper quant of null. Wave(k) has dumped excess phase into the orbital, and the Null(k-1) is now a Null(k) but half packed with
So, what remains in the orbital? Up to half a quant of free Nulls. And the electron does indeed operate like a spread out bunch of nulls. I do not think is it is a probability of finding an electron,
it is the actual electron is spread around. There are not enough Nulls to pack the next quant up, so add energy to that orbital, and the current quant of electrons mixes and matches with the 1/2
quant of free nulls. It really does spread around. Phase is unbalanced and held by the proton charge, phase can't fly away. So, up to a point, those orbitals are Null soup, nothing but a mish mash of
unbalanced phase and free Nulls.
This came up because I was looking at how the hyperbolics would accumulate imbalance as they march energy up the orbitals. The functions are ideal, for the purpose, they keep fractions and count out
the orbitals with few operations. I substitute the twos base for the natural log, and fractions appear to accumulate smoothly.
Anyway, I went through the Schrodinger stuff for the hydrogen atom, I took that class and I get what is happening. But here is the thing. We know we step energy levels by Plank. We know we are at
maximum entropy. So we have to sample at twice Plank, and we have to encode the spherical charge function and mass function. We know we are symmetric about the nucleus. So, simple enough in the
bitstream version. Energy levels count as Plank, they are the signal. Amplitude, quant size, count as log Plank. I think your noise is actually the spheroidal functions you need to quantize. What I
mean is that the spheroidal charge is a disturbance to the system, and it is resolved by quantization.
So, just plug is in and sample away. The hyperbolics should count out properly. You may have no idea what the quants mean, but they should be accurate. So give them meaning after the fact, count up
first, then apply physics second. The same way I did the Plank black body. Applied force, or applied heat, or applied gravity, or whatever are disturbances to the quantum system, they should normally
be noise in the Shannon equation. But the S/N is always an energy ratio. It took me a while to get that. So, we want to know the orbitals for a certain energy level. The hyperbolics count out the
kinetic and potential, as you step through the Plank energies. It is just a sampled data version of the Hamiltonian.
So, rather than write a quantizer code, I am going directly to the hyperbolic. I am taking a short cut. Its a simple trick in which I pick the order I want to work with, and pack it with force,
obeying the rules of Fibonacci packing while accumulating quantization error in the proton.
My approach is to scale some region of the quant chart to the precision I want. Then make two binary numbers, one for null quants and one for wave quants. I want everything packed in wave/null pairs.
These two numbers are my Eigenvectors, and the hyperbolic functions my Eigenfunctions.
I will shift and add the nulls/wave Eigenvalues to make them match, in wave/Null pairs. and shift and subtract the same amount from the proton. Thus, each null quant gets the proper wave number, and
the error in the proton will be kinetic energy. I make my new binary number all ones, but they are not all matched in quants. The matching error is in the proton. (Rule 1) All actions can be
considered as actions of wave/null pairs.
(Rule 2) I know the vacuum quantizes the maximum order it can in a wave/null pair to stabilize. So I can just fill in the blanks until I have what I need quantized, then extract the proton
accumulated error and see what I have to play with.
Later, I will assign even and odd hyperbolic functions to the wave and null quants, make the thing automatic, then convert from bits to qubits for pretty pictures.
When I scale I get two binary numbers. The one bits indicate the real quantization, the actual vacuum samples. The numbers look like:
Wave Null
(Rule 3) The wave get the higher order bit (Rule 4) The wave ends in zero, the null ends in one.
Then I shift the nulls left to match the first wave, accumulate it in my output sequence, and subtract from the proton. As I do that, I am sure I will discover more rules, but these four are a good
start. I guess the main rule is Fibonacci packing.
What about charge and spin?
I think charge is a fractional overlap in wave/null pair for the electron, and the proton should be initialized with the value. Spin is a 1/2 allowable variation in the null quant number, as near as
I can tell. I have no idea how to work these, yet. I am not a physicist.
Looking at one of my previous posts, we discover that the hyperbolics manage fractions quite well.
When the inaccuracy of packing nucleons equals the inaccuracy of making orbital slots. Quantum accuracy is all about maintaining the integer 'one' for the group, and that is limited by the nucleon.
If the integer 'one' is violated then the conservation of baryon has broken.
Let Qw be (1/2+sqrt(5)/2)*91 and Qn be (3/2)*108.
These are the quant values of the proton Wave/Null pair.
Lets use log two because our computers use it.
The log difference, log2(Qw/Qn) = log2(Qw) - log2(Qn), is 5e-5.
At full accuracy, the difference should be:
log(2^(Nmax) /2^(Nmax-1)) or log(2) = 1,
But we make that 1/2 to keep Nyquist sampling. Nmax is a twos bit equivalent of the atom that is maximally Fibonacci packed.
At that point the physicist is dealing with the twos binary version of an atom that is Fibonacci packed according to the real wave/particle quantization. The Fibonacci packing is more efficient than
the binary, but more difficult to do math. But the actual standard model, including the Feynman diagrams acts just like a Fibonacci packing. The top wave/Null quants (91,108) come as a pair, and mark
the Finonacci integer, the other wave/null quants are separated, using the actual particle quants and their matching wave quants. The standard model, and the atom, is a Fibonacci packing.
Thus scaling to get our twos binary version of the Fibonacci atom, so we can do math:
1e4 * [log2(Qw)-log2(Qn)], must equal 1/2 (with Nyquist sampling). So, at scale, the twos binary is a 10000 digit integer, a whopper.
Making Fractions
But wait, that's not all. Somewhere these stupid vacuum elements figured out how to use the proton atom to make crystals, liquids, solids, rocks, galaxies and shoes. I have no clue how they figured
this out. But they use different null quant ratios, evidently. So the proton, in its generosity, as agreed to carry fractions, and has a decimal point somewhere up its binary quant chain of 10000
digits. This decimal point allows the proton to hold fractional errors imposed upon it in the making of shoes.
Where is this decimal point? I have little clue, except:
We have one clue, the magnetic null quant is unavailable, and that seems to be 18 orders down from the electron, out of 91. Plus we have vacuum noise. So 20 out of 91 times 10000 makes a big
fraction, in binary digits. The most likely scenario is that the magnetic kept kicking protons free, and with enough free protons, magnetic null quants failed. But about half the digits gained from
kicking out the magnetron were needed to manage charge.
Consider the electron, about 14 orders down from the Proton, or 1/6 of 91. That becomes a 1600 digit binary number. Can we describe everything we know about chemistry as a series of subgroups making
up 1600 digit binary number? The periodic table needs 8 digits to count the atoms, another 10 to put them in their place on the chart, another 15 digits to describe their oribital quants. We are at
32 bits, with 1500 left over to describe the chemical elements. Easily done, I think.
The vacuum phase, using the proton, as invented the decimal point. | {"url":"http://bettereconomics.blogspot.com/","timestamp":"2014-04-21T12:08:11Z","content_type":null,"content_length":"135141","record_id":"<urn:uuid:4009f38e-9692-4571-8ed9-b9a990851320>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00183-ip-10-147-4-33.ec2.internal.warc.gz"} |
Test on 4/14. Geometry. Picture included Please help ASAP please
April 13th 2006, 04:49 PM
Test on 4/14. Geometry. Picture included Please help ASAP please
The ones I know are know are correct but no well understood:
a) 1:1 because both triangles have the same base? So 16:16 = 1:1 right?
b) same method as ^
d) 9:16 because using the similar triangles area ratio formula. The method of similar trianges i used was AA because of the parallel lines right?
The ones I don't understand at all but I know the answers. I'm all confused:
c) The answer is 1:1. I think you can assume that the height of both triangles are the same? How can it be 1:1 if there are no other measurements of the triangles?
e) The answer is 3:4. I figured maybe you do simplfy 12/16 to get 3:4 but I have no clue why because the the triangles that the question is asking for doesn't have 12 or 16?
Im so confused. please help ASAP. Thanks... :cry:
April 13th 2006, 09:21 PM
Originally Posted by AirForceOne
The ones I don't understand at all but I know the answers. I'm all confused:
c) The answer is 1:1. I think you can assume that the height of both triangles are the same? How can it be 1:1 if there are no other measurements of the triangles?
e) The answer is 3:4. I figured maybe you do simplfy 12/16 to get 3:4 but I have no clue why because the the triangles that the question is asking for doesn't have 12 or 16?
Im so confused. please help ASAP. Thanks... :cry:
let H be the height of the trapezoid and h the height of triangle(ZYP) (I#ve attached a diagram to demonstrate, what I'll calculate).
Then you get the proportion:
$\frac{h}{16}=\frac{H-h}{12}$. Solve for h and you'llget h = 4/7*H.
to e.: You get the area of triangle(XPY) by:
$A_{\Delta XPY}=A_{\Delta WXY}-A_{\Delta WXP}$
$\frac{1}{2} \cdot 12 \cdot H-\frac{1}{2} \cdot 12 \cdot \frac{3}{7} \cdot H=\frac{1}{2} \cdot 12 \cdot H \cdot \frac{4}{7}$
That means:
$\frac{A_{\Delta XPY}}{A_{\Delta WXP}}=\frac{\frac{1}{2} \cdot 12 \cdot H \cdot \frac{4}{7}}{\frac{1}{2} \cdot 12 \cdot \frac{3}{7} \cdot H}=\frac{4}{3}$
to d.: As I've shown above you can calculate the areas of the triangles in question by calculating the differences of two triangles.
Unfortunately I'm a little bit in a hurry to complete the problem, but I'm certain that you now know how to handle the problem.
Greetings and Happy Easter to you.
April 13th 2006, 10:19 PM
Originally Posted by AirForceOne
The ones I don't understand at all but I know the answers. I'm all confused:
c) The answer is 1:1. I think you can assume that the height of both triangles are the same? How can it be 1:1 if there are no other measurements of the triangles?
to c.: as you've demonstrated:
$A_{\Delta ZYW}=A_{\Delta ZYX}$ (same base, same height). Thus
$A_{\Delta ZYW}-A_{\Delta ZYP}=A_{\Delta ZYX}-A_{\Delta ZYP}$
$A_{\Delta WZP}=A_{\Delta XYP}$. Thus the ratio is 1:1. | {"url":"http://mathhelpforum.com/geometry/2566-test-4-14-geometry-picture-included-please-help-asap-please-print.html","timestamp":"2014-04-20T08:00:08Z","content_type":null,"content_length":"8829","record_id":"<urn:uuid:4edf0a80-43e6-4cf3-8dfc-7a014fdffd8b>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00339-ip-10-147-4-33.ec2.internal.warc.gz"} |
Better Economics and Unified Phase Theory
1836 15.6175392619 log(1/2+sqrt(5)/2)
377 14.6307168455 log(3/2)
If I have the theory right, then then light should sample the mass of the proton at the same rate as mass samples the space impedance.
They are off by one wave number, mainly because there are adjacent mass quants at the point where the electron orbitals start. How does the proton know all this? Because when one of these phase
imbalances began to quantize the proton nulls, the proton nulls rumble, they tell the proton that phase imbalance is about to go Shannon.
The proton is going to be sensitive to space impedance because the charge it, and the electron share is related to keeping wave from quantizing a chunk of the proton nulls.
I am looking at spectral lines from the atoms. Now all of these should go as natural log, they are emitted before they go Shannon. Their frequency of emission and the space impedance of 377 should be
should be aligned, for the same reason. The frequency they emit at is energy, and the mass they match plus that frequency causes the same rumble in the proton phase imbalance.
So, you put a heavy muon in the middle of those orbitals, the rumble point changes.
The thing that makes the atom work is efficient packing by the quarks. The mass quant at the proton is 108 = 2*2 *3*3*3. No matter how wave moves, the quarks find an efficient packing. That makes the
proton attractive to phase because of its excess capacity to balance phase variance. So we get a dearth of free nulls in the outer ring, and kinetic energy is supported, hence the atom. How would the
proton use this capacity? By allowing phase to move in and out in response to energy changes.
There are seven principle quant numbers, and about 14 missing mass quants from the electron to the proton. There are some 11 missing wave numbers corresponding to those missing mass quants. There are
an additional 7 angular momentum orbital slots. The angular and principal split the slots available. Count up the vectors of principal and angular. Some eight. These sets of angular and principle are
the contours within which the proton can maintain constant precision, and none of the wave numbers in the atomic shell will materialize.
These lines lie along an integer defined manifold of phase variance in the proton
That is the plan, find the curvature in which that happens.
And these integers, the first three quantum numbers, should be separable into an integer set, i,j,i*j which define the equal potential contours in the proton. There exists some integral equations
that tells me so, and proton precision should be allocated so all those integrals converge uniformly with integer summation. These phase potentials should all be equal gradients, and they should have
simple errors with respect to the main quant wave numbers up the quant chain from the atom. The proton seeks to maximize instability of the Nulls and maintain stability of phase by accepting phase
I am looking at atomic orbitals, and the necessity of dealing with the units of physics that keep coming up. I need to simplify, because everything is really, just phase and null and how much of
those we have with respect to two quantization ratios. So I am dumping the units of physics and replacing them with units of things counted, as follows:
Primary units:
m is the quantization ratio of Nulls
f is the quantization ratio of Phase
Secondary units:
j*m is m multiplied by an integer
k*f is f multiplied by an integer
m^j that is m taken to an integer power
f^k and f taken to an integer power
There, that is simpler, and eliminates charge and magnetism and particles and mass and time and energy and power and force and space. As long as I can write equations with these, in some symmetrical
frame, I can count them up and let physicists add their semantics later.
I add some secondary conjectures.
2^(j*m) and 2^(k*f) define a digit system when operations on m and f are Shannon separable, and
e^(j*m) and e^(k*f) define the digits system when operations are not Shannon separable.
A symmetric frame is one in which either conjecture always holds. Symmetry is determined by the density of phase and nulls samples. To obtain density I construct two operators:
D1 = 1/2(m^k - f^k) and D2 = 1/2(m^k + f^k) k integer
When the operators are Shannon equal to one, we have a binary system, otherwise a natural log system.
I suspect my symmetric frames will be defined by integers:
{k, j, j * k}; k, j integer.
The trick to the orbitals is to find the set j,k that define non-Shannon frames. I can find them in two ways; 1) Cheat and look at the centuries of work done by physicists, or 2) Use some methods of
group theory. Once the integer set is found, I count them out and draw the natural log results.
Phase theory predicts the Hamiltonian is always the sum of variance in Phase + variance in Nulls, Vp + Vn Physics is the art of finding these two quantities. In the Shannon equation we have the term:
log2(1+ S/N) which now gets converted to:
Lets call that the Shannon operator. I think we can fix the Shannon sample rate to the value 2. So then the Shannon operator will be a function of (j*k,j,k). The Shannon condition not met when the
predicted quant value some j,k is < 1. The trick then becomes finding the form of the Shannon operator.
We know why we have the j,k axis of symmetry, because the proton is made of a threesome of quarks which would thus have separable axis of symmetry. Group theory likely tells us then, we have a third
axis, j*k. I think that is how it is going to work. We make a simplification by treating the proton as a sphere along the three axis, and the value Vp and Vn will be mappings of their values inside
the proton along the three axis. We are damn near done! We know the orbitals are constructed in, our spectral charge, between two points, one where the m,f for magnetic is Shannon and one where m,f
for the electron is Shanon. We know how much proton precision is available, so we just take our spread the precision among the density operators along the three axis. We scale up the precision to the
point where Shannon is almost met, take the 'int' of j an k and presto.
At any given point in the proton, Vn is (D1+D2)^2, Vp = (D1-D2)^2 Computed over the three axis, up to the precision of the proton. Precision is:
sum(j*k*p + j*p + k*p); where p is a scale factor and must scale each term evenly. We need only find the integer set where the manifold over the three axis converge uniformly, the manifold curvature
being the limit of proton precision. That manifold curvature makes the Shannon Uncondition in the Shannon operator.
Charge falls out because that curve component prevents EM light from forming. Spin falls out because two adjacent mass in our spectral chart. The j*k axis defines null density that prevents mass
quantization, The magnetic is really the phase angle of the phase imbalance in the orbital, and when that is to straight, the electron flies away.
Wiki has a full description of the lagrange points around the solary system, interesting. Lagrange points around the solar system
Orbits tend to be unstable a bit around these points. Borrowing from Einstein, we get the same equations whether we think the Lagrange points move around or whether the potential is too close to
zero. We can make the same analogy if we believe in quantum gravity; either the quants can change ratios easily or they move around quite a bit. But the important thing is these Lagrange points have
size of the order of 1E6 meters.
So take that as the Compton wavelength and then move the idea to a Neutron start where gravity half wave are only 10e6, so their Lagrange points only have 10E3 size, and that makes the Compton 'mass'
of these fields very large in galactic standards, the mass of a neutrino, which we think is small, but compared to gravity nulls they are quite hefty. The proton is not supported, it cannot hold
charge gradient in these intense fields. So what happens at the center of these Neutron stars? Maybe they have negative protons, negatrons.
I jest. But in a Black Hole, matter at the center is not even supported. The gravity nulls are huge, nearing a size greater than even the electron. They would have absorbed most of the Nulls in the
center, the center would be devoid of packed Nulls, and most of free space would be a phase gradient, gravity having collected all the available nulls. If you continually grow gravity, what would you
get? Gravity Nulls approach the size of a Neutron, you get the primordeal Neutron.
But the conservation of energy tells us that is impossible, we just cannot recyle mass and energy continually. Unless the Null quant ratio grew a bit. Such a thing would be possible because the
vacuum needs to generate noise to keep its sample rates constant. But in a region where nulls and phase have become so widely separated, they would not get the opportunity. You get these bizzare
regions of space where the fine grained accuracy of the proton is not supported, just these dull neutron like gravity nulls held in groups with a distance of a few hundred meters. The Compton
equivalence is off, Plank is way off. Nothing interesting, just dull void.
If I were a group theorist, I would change the mass/wave quant ratios, and see if there are spots just above the Proton where we can get a near integer 'one. Somewhere near 107, up from 91, in wave
number. I think the Higgs wave number is about 107. Reset the mass quant ratio to match that, see how much grouping is supported. I set the mass ratio at 5/3 and get a very lose match with wave
number 138, keeping the sample rate of light at Fibonacci.
The 5/3 world
The idea is a dull world with two type of packed nulls, huge useless neutron like things, and gravitrons slightly bigger than an electron. Very little kinetic energy, all precarious potential energy.
This is the pre-big bang world. If this is what a quasar is, then they would be easily disturbed by the normal vacuum and occasionally shoot out huge quantities of 3/2 matter at high kinetic energy.
I am not sure quasars are sucking in matter but maybe shooting it out.
I take four masses, pair then up :
Mass pairs (1,2,3,4) , taken two at a time
My signal, counting radii up by some small increment and computing gravitational force (G=1), totalled across the six radii. So I compute a signal of 1/r^2 dividing each of those mass pairs. Assume a
total energy. Play around with variable until I get a smooth spread of quant ratios.
Compute SNR for a total energy E if the noisecounting radii by d in N steps
My signal:
Sig <- function(d,E,N) {
foo.sig = NULL
for(i in 1:N){
foo.sig[i] = sum(m/(i*d)^2)/E +1}
Here is the SNR +1. These are the bauds, some eight of them that count out six radii in steps. I have really done a spectral decomposition. The result tells me that those signals divided by that
number E, is best counted with these bauds. Nothing more.
So I selected a possible function. There were six radii so I suspect they counter out powers of 2 modulo six. So I count out the function, using these quants in an eight digit sequence. and generate
the signal of six radii counting. My counter was my suspected function is counter squared modulo 6. I was sure to do this:
2^(B[j]*k)/(2^k), divide by the bit power, to normalize the digits to an eight bit twos; I divided by the bit power.
I got this plot.
Six radii counting out a quadratic form. Is it useful? No, not without some better knowledge of Kepler. If I set two quant rates, and counted out a polar R, and a theta, then picked the ellipse as a
possible function, then yes, useful. As for this thing on the right, it could just as easily been someone juggling six balls in the air.
But here is the point. If we know the generating force and the total energy, then we can find the best quants before applying some law of physics in the Schrodinger equation. Do the Schrodinger last.
Do group separation first.
I am a lousy physicist, and do not trust myself, I hesitate.
It should take less phase to pack a null because phase moves at light speed. Nulls should represent potential energy. Which order? Is it Null at quant k+1 and wave at k? Or the other way around?
Readers really cannot trust me to get this right the first go around. But I trust the proton, and it comes in at wave then null. I am going with that for now.
We want to compute the orbits numerically for some fixed energy level. The fixed energy level is the noise. There are N! equations of mass on mass with G, the summation of these equations at any step
is the signal. You step all radius with some common step value such that the divergence along the way does not break Shannon separability. Check the -iLog(i) along the way with you computer, or just
scale up.
The amplitude is some 2^(k* quant). You want to compute the quants for each step from k=0 to k = Nmax. Compute the mass on mass signal at each step, use Shannon to get the quant. What does this get
you? You have simply counted up the group with respect to some fixed point of symmetry. You have created the Kepler group. Does this find you the central Lagrange, or is there one? I have not done
this, but if you lay out the radii on a complete sequence, they should count out the potential energy. The kinetic energy makes that phase flat. When the potential energy is maximum they are closest
together, and visa versa. So, use a little ingenuity and fins a common Lagrange.
For example: On the complete symmetry group of the classical Kepler system
Wave(k) does not have Null(k), it has Null(k-1), it can't pack the proper quant of null. Wave(k) has dumped excess phase into the orbital, and the Null(k-1) is now a Null(k) but half packed with
So, what remains in the orbital? Up to half a quant of free Nulls. And the electron does indeed operate like a spread out bunch of nulls. I do not think is it is a probability of finding an electron,
it is the actual electron is spread around. There are not enough Nulls to pack the next quant up, so add energy to that orbital, and the current quant of electrons mixes and matches with the 1/2
quant of free nulls. It really does spread around. Phase is unbalanced and held by the proton charge, phase can't fly away. So, up to a point, those orbitals are Null soup, nothing but a mish mash of
unbalanced phase and free Nulls.
This came up because I was looking at how the hyperbolics would accumulate imbalance as they march energy up the orbitals. The functions are ideal, for the purpose, they keep fractions and count out
the orbitals with few operations. I substitute the twos base for the natural log, and fractions appear to accumulate smoothly.
Anyway, I went through the Schrodinger stuff for the hydrogen atom, I took that class and I get what is happening. But here is the thing. We know we step energy levels by Plank. We know we are at
maximum entropy. So we have to sample at twice Plank, and we have to encode the spherical charge function and mass function. We know we are symmetric about the nucleus. So, simple enough in the
bitstream version. Energy levels count as Plank, they are the signal. Amplitude, quant size, count as log Plank. I think your noise is actually the spheroidal functions you need to quantize. What I
mean is that the spheroidal charge is a disturbance to the system, and it is resolved by quantization.
So, just plug is in and sample away. The hyperbolics should count out properly. You may have no idea what the quants mean, but they should be accurate. So give them meaning after the fact, count up
first, then apply physics second. The same way I did the Plank black body. Applied force, or applied heat, or applied gravity, or whatever are disturbances to the quantum system, they should normally
be noise in the Shannon equation. But the S/N is always an energy ratio. It took me a while to get that. So, we want to know the orbitals for a certain energy level. The hyperbolics count out the
kinetic and potential, as you step through the Plank energies. It is just a sampled data version of the Hamiltonian.
So, rather than write a quantizer code, I am going directly to the hyperbolic. I am taking a short cut. Its a simple trick in which I pick the order I want to work with, and pack it with force,
obeying the rules of Fibonacci packing while accumulating quantization error in the proton.
My approach is to scale some region of the quant chart to the precision I want. Then make two binary numbers, one for null quants and one for wave quants. I want everything packed in wave/null pairs.
These two numbers are my Eigenvectors, and the hyperbolic functions my Eigenfunctions.
I will shift and add the nulls/wave Eigenvalues to make them match, in wave/Null pairs. and shift and subtract the same amount from the proton. Thus, each null quant gets the proper wave number, and
the error in the proton will be kinetic energy. I make my new binary number all ones, but they are not all matched in quants. The matching error is in the proton. (Rule 1) All actions can be
considered as actions of wave/null pairs.
(Rule 2) I know the vacuum quantizes the maximum order it can in a wave/null pair to stabilize. So I can just fill in the blanks until I have what I need quantized, then extract the proton
accumulated error and see what I have to play with.
Later, I will assign even and odd hyperbolic functions to the wave and null quants, make the thing automatic, then convert from bits to qubits for pretty pictures.
When I scale I get two binary numbers. The one bits indicate the real quantization, the actual vacuum samples. The numbers look like:
Wave Null
(Rule 3) The wave get the higher order bit (Rule 4) The wave ends in zero, the null ends in one.
Then I shift the nulls left to match the first wave, accumulate it in my output sequence, and subtract from the proton. As I do that, I am sure I will discover more rules, but these four are a good
start. I guess the main rule is Fibonacci packing.
What about charge and spin?
I think charge is a fractional overlap in wave/null pair for the electron, and the proton should be initialized with the value. Spin is a 1/2 allowable variation in the null quant number, as near as
I can tell. I have no idea how to work these, yet. I am not a physicist.
Looking at one of my previous posts, we discover that the hyperbolics manage fractions quite well.
When the inaccuracy of packing nucleons equals the inaccuracy of making orbital slots. Quantum accuracy is all about maintaining the integer 'one' for the group, and that is limited by the nucleon.
If the integer 'one' is violated then the conservation of baryon has broken.
Let Qw be (1/2+sqrt(5)/2)*91 and Qn be (3/2)*108.
These are the quant values of the proton Wave/Null pair.
Lets use log two because our computers use it.
The log difference, log2(Qw/Qn) = log2(Qw) - log2(Qn), is 5e-5.
At full accuracy, the difference should be:
log(2^(Nmax) /2^(Nmax-1)) or log(2) = 1,
But we make that 1/2 to keep Nyquist sampling. Nmax is a twos bit equivalent of the atom that is maximally Fibonacci packed.
At that point the physicist is dealing with the twos binary version of an atom that is Fibonacci packed according to the real wave/particle quantization. The Fibonacci packing is more efficient than
the binary, but more difficult to do math. But the actual standard model, including the Feynman diagrams acts just like a Fibonacci packing. The top wave/Null quants (91,108) come as a pair, and mark
the Finonacci integer, the other wave/null quants are separated, using the actual particle quants and their matching wave quants. The standard model, and the atom, is a Fibonacci packing.
Thus scaling to get our twos binary version of the Fibonacci atom, so we can do math:
1e4 * [log2(Qw)-log2(Qn)], must equal 1/2 (with Nyquist sampling). So, at scale, the twos binary is a 10000 digit integer, a whopper.
Making Fractions
But wait, that's not all. Somewhere these stupid vacuum elements figured out how to use the proton atom to make crystals, liquids, solids, rocks, galaxies and shoes. I have no clue how they figured
this out. But they use different null quant ratios, evidently. So the proton, in its generosity, as agreed to carry fractions, and has a decimal point somewhere up its binary quant chain of 10000
digits. This decimal point allows the proton to hold fractional errors imposed upon it in the making of shoes.
Where is this decimal point? I have little clue, except:
We have one clue, the magnetic null quant is unavailable, and that seems to be 18 orders down from the electron, out of 91. Plus we have vacuum noise. So 20 out of 91 times 10000 makes a big
fraction, in binary digits. The most likely scenario is that the magnetic kept kicking protons free, and with enough free protons, magnetic null quants failed. But about half the digits gained from
kicking out the magnetron were needed to manage charge.
Consider the electron, about 14 orders down from the Proton, or 1/6 of 91. That becomes a 1600 digit binary number. Can we describe everything we know about chemistry as a series of subgroups making
up 1600 digit binary number? The periodic table needs 8 digits to count the atoms, another 10 to put them in their place on the chart, another 15 digits to describe their oribital quants. We are at
32 bits, with 1500 left over to describe the chemical elements. Easily done, I think.
The vacuum phase, using the proton, as invented the decimal point. | {"url":"http://bettereconomics.blogspot.com/","timestamp":"2014-04-21T12:08:11Z","content_type":null,"content_length":"135141","record_id":"<urn:uuid:4009f38e-9692-4571-8ed9-b9a990851320>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00183-ip-10-147-4-33.ec2.internal.warc.gz"} |
Nitrogen Exposure Limits and Equivalent Air Depth (EAD)
Nitrogen exposures are omnipresent in diving, and must be carefully monitored for every dive. These exposures are related to the risk of DCS (Decompression Sickness). In order to properly manage this
risk, appropriate exposure limits have been identified for air when used as the diver’s breathing mix. These limits have been detailed in the US Navy dive tables and other similar references, in
terms of no-decompression limits over a selected range of depths.
When using nitrox, the diver is exposed to a reduced PN2 (partial pressure of nitrogen), compared to using air for the same dive; though reduced, the diver’s level of nitrogen exposure nevertheless
remains a concern. At the same time the diver will now be exposed to an increased PO2 (partial pressure of oxygen); though oxygen exposure is not an issue with air within recreational depths, the
elevated exposure levels with nitrox now become an additional concern.
It may be helpful to think of oxygen and nitrogen in terms of drugs. The effect of each drug, as experienced by the diver, will be dependent upon the dose (determined by its partial pressure),
combined with the duration of exposure (determined by dive time).
Equivalent Air Depth
When using nitrox, the diver is exposed to a reduced dose (lower partial pressure) of nitrogen, compared to air. In effect, when using nitrox at a specific depth over a specific period of time, it is
the equivalent of this diver breathing air at a shallower depth for the same period of time. By accurately calculating this equivalent shallower depth, it is then possible to simply use this
shallower depth in place of the actual depth, with any air dive tables, for all standard dive calculations. This concept is known as equivalent air depth (abbreviated as EAD).
To demonstrate the concept of equivalent air depth, consider Table 7 (below), which depicts the partial pressure of nitrogen in air, as well as in the mixes of EAN32 and EAN40, at various depths.
In reviewing Table 7, you will note, as highlighted, that the PN2 of EAN40 at 30 m / 99 ft is approximately equal to the PN2 of air at 20 m / 66 ft. Because the rate of nitrogen absorption is
directly dependent upon the partial pressure of nitrogen, a diver breathing EAN40 at 30 m / 99 ft would therefore be expected to on-gas nitrogen at the same rate, during this deeper dive, as he would
while breathing air at 20 m / 66 ft. And because he is on-gassing at the rate of the shallower depth, the same no-decompression time limit from that shallower depth would also now apply to the deeper
dive; effectively, the diver has significantly extended his no decompression limit at the deeper depth, simply by using EAN40 instead of air at that depth.
As in prior discussions concerning tables based upon increments of 1 bar / 1 atm and 10 m / 33 ft, such measurements sometimes are too broad and unwieldy for practical use in diving; at times, more
precise measurements will prove to be beneficial.
The EAD for any nitrox mix, at any depth, can be manually calculated, by first creating a ratio where the fraction of nitrogen in nitrox is divided by the faction of nitrogen in air, then multiplying
that ratio by the ambient pressure expressed in m / ft, then converting that figure back to depth. This mathematical procedure is depicted in Metric Formula 7 and Imperial Formula 8 (below).
Metric: Equivalent Air Depth
EAD = [ (FN2 / .79) x (D + 10) ] – 10
Formula 7
Imperial: Equivalent Air Depth
EAD = [ (FN2 / .79) x (D + 33) ] – 33
Formula 8
For more information please contact TDI;
Tel: 888.778.9073 | 207.729.4201
Email: Worldhq@tdisdi.com
Web: www.tdisdi.com
Facebook: www.facebook.com/TechnicalDivingInt
This entry was posted in TDI Diver News. Bookmark the permalink. | {"url":"http://www.tdisdi.com/nitrogen-exposure-limits-and-equivalent-air-depth-ead/","timestamp":"2014-04-20T03:11:56Z","content_type":null,"content_length":"36333","record_id":"<urn:uuid:98983b2e-ea42-404d-b61a-391b6d521c20>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00276-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Has anyone finished Problem set 12? I've just Finished and i want to see if my simulations are close to what they should be
Best Response
You've already chosen the best response.
I have. Which simulations would you like to compare?
Best Response
You've already chosen the best response.
the histograms, with thousands of trials going into them they are the most stable to compare, here is a little zip file with my graphs
Best Response
You've already chosen the best response.
I've attached my problem 6 and 7 histograms/plots with 1000 trials each. Most of our graphs appear to have the same shape, but I did notice a significant difference in the case when both drugs
are applied at the same time. In your simulations the virus population goes to zero in almost all trials. In my simulations the population goes to zero in about 80% of the trials.
Best Response
You've already chosen the best response.
ok figured out my issue, i gave you problem set 5 not 6, i didn't do a good job with my titles and file names, fixed that and uploaded problem 6 results, our 0 is looking much better, the other
graphs are a little further apart but still in the same neighbourhood
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/4e2477bf0b8b3d38d3b728ea","timestamp":"2014-04-17T09:54:52Z","content_type":null,"content_length":"38088","record_id":"<urn:uuid:0f0b03cb-22a6-4b52-9d33-778914f7b1dd>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00610-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hi !
Yes, generally most mathematicians tend to agree that complex numbers cannot be compared by
"greater than" or "less than" relations (at least not in a useful, meaningful manner). But such
a relation can, I believe be defined, though perhaps not in terribly useful manner.
Here's a shot at it. I'll use Q for "theta" since theta isn't on the keyboard.
In re consider r nonnegative and Q in the interval [0degrees,360degrees).
So r is the distance from the origin and Q is the angle involved.
Given two complex numbers z and w we define
1) z is less than w if z is closer to the origin:
2) if z and w are the same distance from the origin then
a) z = w if their angles are the same.
b) z is less than w if z's angle is less than w's angle.
Of course if we let the r be negative and/or the angles to be any positive or negative angle, then
we weave a more tangled web.
Writing "pretty" math (two dimensional) is easier to read and grasp than LaTex (one dimensional).
LaTex is like painting on many strips of paper and then stacking them to see what picture they make. | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=236623","timestamp":"2014-04-19T02:04:56Z","content_type":null,"content_length":"16889","record_id":"<urn:uuid:94ed888b-3e47-4102-9065-2e37cd480d39>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00567-ip-10-147-4-33.ec2.internal.warc.gz"} |
Null Sets and uniform convergence
May 15th 2010, 01:13 PM #1
Dec 2009
Null Sets and uniform convergence
Prove that if a sequence of continous functions uniformly converges on [a,b] then the union of their graphs is a null set.
In other words:
Prove that if the sequence $f_n:[a,b] \to R$ converges uniformly on [a,b] then the set : $A=(x,f_n) | x \in [a,b],n \in N$ is a null set...
I know the function f is continous ( $f_n \to f$ ) and that the graph of the function f and of each $f_n$ if a null set....
Can't figure out how to prove what I need to prove
Thanks in advance
Prove that if a sequence of continous functions uniformly converges on [a,b] then the union of their graphs is a null set.
In other words:
Prove that if the sequence $f_n:[a,b] \to R$ converges uniformly on [a,b] then the set : $A=(x,f_n) | x \in [a,b],n \in N$ is a null set...
I know the function f is continous ( $f_n \to f$ ) and that the graph of the function f and of each $f_n$ if a null set....
Can't figure out how to prove what I need to prove
Thanks in advance
Try a proof by contradiction.
$P\rightarrow Q\equiv P\wedge$ ~ $Q$
If the set A isn't a null set, then there is an $\epsilon >0$ such as there is no finite amount of rectangles that cover the set A and their sum is less than $\epsilon$ .
In other words, we assume by contradiction that there is an $\epsilon >0$ such as every finite amount of rectangles that cover the set A- their sum is $\geq \epsilon$ ...
I've no idea how I can continue from this point... It seems to be very difficult to continue from this point...
Hope you'll be able to help me
Thanks !
May 15th 2010, 08:34 PM #2
MHF Contributor
Mar 2010
May 16th 2010, 02:21 AM #3
Dec 2009 | {"url":"http://mathhelpforum.com/calculus/144869-null-sets-uniform-convergence.html","timestamp":"2014-04-16T18:05:56Z","content_type":null,"content_length":"39316","record_id":"<urn:uuid:70705763-924a-4f15-af2b-5435efcd5b6c>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00018-ip-10-147-4-33.ec2.internal.warc.gz"} |
and Easy Distributions for SAT Problems
Results 1 - 10 of 231
- IN ECAI-92 , 1992
"... We develop a formal model of planning based on satisfiability rather than deduction. The satis ability approach not only provides a more flexible framework for stating di erent kinds of
constraints on plans, but also more accurately reflects the theory behind modern constraint-based planning systems ..."
Cited by 431 (26 self)
Add to MetaCart
We develop a formal model of planning based on satisfiability rather than deduction. The satis ability approach not only provides a more flexible framework for stating di erent kinds of constraints
on plans, but also more accurately reflects the theory behind modern constraint-based planning systems. Finally, we consider the computational characteristics of the resulting formulas, by solving
them with two very different satisfiability testing procedures.
, 1992
"... . A constraint satisfaction problem involves finding values for variables subject to constraints on which combinations of values are allowed. In some cases it may be impossible or impractical to
solve these problems completely. We may seek to partially solve the problem, in particular by satisfying ..."
Cited by 427 (23 self)
Add to MetaCart
. A constraint satisfaction problem involves finding values for variables subject to constraints on which combinations of values are allowed. In some cases it may be impossible or impractical to
solve these problems completely. We may seek to partially solve the problem, in particular by satisfying a maximal number of constraints. Standard backtracking and local consistency techniques for
solving constraint satisfaction problems can be adapted to cope with, and take advantage of, the differences between partial and complete constraint satisfaction. Extensive experimentation on maximal
satisfaction problems illuminates the relative and absolute effectiveness of these methods. A general model of partial constraint satisfaction is proposed. 1 Introduction Constraint satisfaction
involves finding values for problem variables subject to constraints on acceptable combinations of values. Constraint satisfaction has wide application in artificial intelligence, in areas ranging
from temporal r...
- Journal of Artificial Intelligence Research , 1994
"... There has been substantial recent interest in two new families of search techniques. One family consists of nonsystematic methods such as gsat; the other contains systematic approaches that use
a polynomial amount of justification information to prune the search space. This paper introduces a new te ..."
Cited by 360 (14 self)
Add to MetaCart
There has been substantial recent interest in two new families of search techniques. One family consists of nonsystematic methods such as gsat; the other contains systematic approaches that use a
polynomial amount of justification information to prune the search space. This paper introduces a new technique that combines these two approaches. The algorithm allows substantial freedom of
movement in the search space but enough information is retained to ensure the systematicity of the resulting analysis. Bounds are given for the size of the justification database and conditions are
presented that guarantee that this database will be polynomial in the size of the problem in question. 1 INTRODUCTION The past few years have seen rapid progress in the development of algorithms for
solving constraintsatisfaction problems, or csps. Csps arise naturally in subfields of AI from planning to vision, and examples include propositional theorem proving, map coloring and scheduling
problems. The probl...
- Artificial Intelligence , 1994
"... I present several computational complexity results for propositional STRIPS planning, i.e., STRIPS planning restricted to ground formulas. Different planning problems can be defined by
restricting the type of formulas, placing limits on the number of pre- and postconditions, by restricting negation ..."
Cited by 299 (3 self)
Add to MetaCart
I present several computational complexity results for propositional STRIPS planning, i.e., STRIPS planning restricted to ground formulas. Different planning problems can be defined by restricting
the type of formulas, placing limits on the number of pre- and postconditions, by restricting negation in pre- and postconditions, and by requiring optimal plans. For these types of restrictions, I
show when planning is tractable (polynomial) and intractable (NPhard) . In general, it is PSPACE-complete to determine if a given planning instance has any solutions. Extremely severe restrictions on
both the operators and the formulas are required to guarantee polynomial time or even NP-completeness. For example, when only ground literals are permitted, determining plan existence is
PSPACE-complete even if operators are limited to two preconditions and two postconditions. When definite Horn ground formulas are permitted, determining plan existence is PSPACE-complete even if
operators are limited t...
, 1995
"... ... quickly across a wide range of hard SAT problems than any other SAT tester in the literature on comparable platforms. On a Sun SPARCStation 10 running SunOS 4.1.3 U1, POSIT can solve hard
random 400-variable 3-SAT problems in about 2 hours on the average. In general, it can solve hard n-variable ..."
Cited by 161 (0 self)
Add to MetaCart
... quickly across a wide range of hard SAT problems than any other SAT tester in the literature on comparable platforms. On a Sun SPARCStation 10 running SunOS 4.1.3 U1, POSIT can solve hard random
400-variable 3-SAT problems in about 2 hours on the average. In general, it can solve hard n-variable random 3-SAT problems with search trees of size O(2 n=18:7 ). In addition to justifying these
claims, this dissertation describes the most significant achievements of other researchers in this area, and discusses all of the widely known general techniques for speeding up SAT search
algorithms. It should be useful to anyone interested in NP-complete problems or combinatorial optimization in general, and it should be particularly useful to researchers in either Artificial
Intelligence or Operations Research.
- Journal of the ACM , 1996
"... Computational efficiency is a central concern in the design of knowledge representation systems. In order to obtain efficient systems, it has been suggested that one should limit the form of the
statements in the knowledge base or use an incomplete inference mechanism. The former approach is often t ..."
Cited by 157 (5 self)
Add to MetaCart
Computational efficiency is a central concern in the design of knowledge representation systems. In order to obtain efficient systems, it has been suggested that one should limit the form of the
statements in the knowledge base or use an incomplete inference mechanism. The former approach is often too restrictive for practical applications, whereas the latter leads to uncertainty about
exactly what can and cannot be inferred from the knowledge base. We present a third alternative, in which knowledge given in a general representation language is translated (compiled) into a
tractable form — allowing for efficient subsequent query answering. We show how propositional logical theories can be compiled into Horn theories that approximate the original information. The
approximations bound the original theory from below and above in terms of logical strength. The procedures are extended to other tractable languages (for example, binary clauses) and to the
first-order case. Finally, we demonstrate the generality of our approach by compiling concept descriptions in a general framebased language into a tractable form.
- In Proceedings of AAAI-93 , 1993
"... Recently several local hill-climbing procedures for propositional satisability havebeen proposed, which are able to solve large and di cult problems beyond the reach ofconventional algorithms
like Davis-Putnam. By the introduction of some new variants of these procedures, we provide strong experimen ..."
Cited by 137 (6 self)
Add to MetaCart
Recently several local hill-climbing procedures for propositional satisability havebeen proposed, which are able to solve large and di cult problems beyond the reach ofconventional algorithms like
Davis-Putnam. By the introduction of some new variants of these procedures, we provide strong experimental evidence to support the conjecture that neither greediness nor randomness is important in
these procedures. One of the variants introduced seems to o er signi cant improvements over earlier procedures. In addition, we investigate experimentally how their performance depends on their
parameters. Our results suggest that run-time scales less than simply exponentially in the problem size. 1
- DIMACS Series in Discrete Mathematics and Theoretical Computer Science , 1996
"... . The satisfiability (SAT) problem is a core problem in mathematical logic and computing theory. In practice, SAT is fundamental in solving many problems in automated reasoning, computer-aided
design, computeraided manufacturing, machine vision, database, robotics, integrated circuit design, compute ..."
Cited by 127 (3 self)
Add to MetaCart
. The satisfiability (SAT) problem is a core problem in mathematical logic and computing theory. In practice, SAT is fundamental in solving many problems in automated reasoning, computer-aided
design, computeraided manufacturing, machine vision, database, robotics, integrated circuit design, computer architecture design, and computer network design. Traditional methods treat SAT as a
discrete, constrained decision problem. In recent years, many optimization methods, parallel algorithms, and practical techniques have been developed for solving SAT. In this survey, we present a
general framework (an algorithm space) that integrates existing SAT algorithms into a unified perspective. We describe sequential and parallel SAT algorithms including variable splitting, resolution,
local search, global optimization, mathematical programming, and practical SAT algorithms. We give performance evaluation of some existing SAT algorithms. Finally, we provide a set of practical
applications of the sat...
, 1997
"... The paper studies new unit propagation based heuristics for Davis-Putnam-Loveland (DPL) procedure. These are the novel combinations of unit propagation and the usual "Maximum Occurrences in
clauses of Minimum Size " heuristics. Based on the experimental evaluations of di erent alternatives ..."
Cited by 121 (10 self)
Add to MetaCart
The paper studies new unit propagation based heuristics for Davis-Putnam-Loveland (DPL) procedure. These are the novel combinations of unit propagation and the usual "Maximum Occurrences in
clauses of Minimum Size " heuristics. Based on the experimental evaluations of di erent alternatives a new simple unit propagation based heuristic is put forward. This compares favorably with
the heuristics employed in the current state-of-the-art DPL implementations (C-SAT, Tableau, POSIT). 1
- In Proceedings of AAAI-96 , 1999
"... We propose a definition of `constrainedness' that unifies two of the most common but informal uses of the term. These are that branching heuristics in search algorithms often try to make the
most "constrained" choice, and that hard search problems tend to be "critically constrained". Our definition ..."
Cited by 117 (26 self)
Add to MetaCart
We propose a definition of `constrainedness' that unifies two of the most common but informal uses of the term. These are that branching heuristics in search algorithms often try to make the most
"constrained" choice, and that hard search problems tend to be "critically constrained". Our definition of constrainedness generalizes a number of parameters used to study phase transition behaviour
in a wide variety of problem domains. As well as predicting the location of phase transitions in solubility, constrainedness provides insight into why problems at phase transitions tend to be hard to
solve. Such problems are on a constrainedness "knife-edge", and we must search deep into the problem before they look more or less soluble. Heuristics that try to get off this knife-edge as quickly
as possible by, for example, minimizing the constrainedness are often very effective. We show that heuristics from a wide variety of problem domains can be seen as minimizing the constrainedness (or
proxies ... | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=65473","timestamp":"2014-04-18T14:13:41Z","content_type":null,"content_length":"38586","record_id":"<urn:uuid:989fd986-5cb2-4fdb-b303-d3acb82fac34>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00418-ip-10-147-4-33.ec2.internal.warc.gz"} |
Rotating a vector
Rotating a vector
how can I rotate a 3D vector along an arbitrary plane?
for example... I have a vector that points straight up the wall, and one that points to the right along the wall, and I want to rotate it to the left, so that its pointing diagonally across the
The wall isn't necessarily going straight up.
The angle is arbitrary. So I might want to rotate the vector left or right 45 degrees, etc.
for convenience, the normal to the wall is called 'up', going up in the z axis is 'forward' and going to the right facing away from the wall is 'right'
Two assumptions you have to make:
The axis you are rotating about is unit length and its tail is centered at the origin. The vector you are rotating is not necessarily unit length and is also centered at the origin.
Rotating a vector centered at the origin about some arbitrary direction is the same as rotating a point about some arbtirary direction.
Rotating a point about some arbitrary direction is the same as plotting a polar coordinate about some arbitrary direction.
In general, when you plot a polar coordinate, you must have an orthogonal basis. An orthogonal basis is describe by two vectors P and Q which are perpendicular to each other. P and Q lie on the
same plane, and the direction you are rotating about is the normal to this plane. P is sort of like your local 'x' axis, and Q is sort of like your local 'y' axis. To rephrase, P and Q are
perpendicular to each other, and also are perpendicular to the direction you are rotating about. The general equation for plotting the polar coordinate (which, should be familiar to you otherwise
you may need to study up on your math before doing anything more complex) is:
RotatedPoint = P cos (theta) + Q sin (theta)
The original vector you are rotating is vector W. Vector W has components parallel (in the same direction as) the normal and perpendicular to the normal (remember, the normal is the direction you
are rotating W about). The problem is, P and Q must both be perpendicular to the normal. This poses a problem, because vector W has components parallel (in the same direction as) the normal. So,
you decompose the vector W into components parallel and perpendicular to the normal, then you plus them into the equation above. Then, to complete it, you just add the parallel component back in
(it never would have changed during the rotation).
Vector Parallel = Normal * (W dot Normal) //this bad boy doesn't change
Vector Perpendicular = W - Parallel
Vector P = Perpendicular //This is sort of like the local 'x' axis
EDIT: I had to change the order of this
Vector Q = CrossProduct ( Normal, Perpendicular) //This is sort of like the local 'y' axis
Vector Rotated = (P * cos(theta) + Q * sin(theta) ) + Parallel
It's hard to understand, especially if you don't have an inherent understanding of vector projections.
Also note, this is extremely confusing stuff and you likely won't understand it until you ask me questions. So, ask me questions, even if you think they are stupid questions or whatever.
Are you talking about performing an axis-angle rotation on a vector? If so, the matrix can be found all over the net. If not I can either give it to you or if you are using Direct3D you can use
D3DXMatrixRotationAxis(D3DXMATRIX *pOut,CONST D3DXMATRIX *pV,float Angle)
Angle is the number of radians to rotate vector pV by - this functions constructs the axis-angle rotation matrix. You would then use pOut to rotate vectors around pV by Angle.
Well, yeah. Use bubba's way if you just want to get it to work, which is okay. Use my way if you want to understand it (I basically proved how that matrix works).
The matrix is far more involved than the one equation you posted which is why I referenced that function.
What do you mean far more involved? Unless you're talking about a different matrix, I've shown *how* it works...how, exactly, do you get anymore involved?
I went into my own math libraries and checked one of my math books, and it turns out I *DID* prove the matrix you are talking about. To be more to the point, the 'one equation i posted', and the
matrix you are talking about, are the same thing...one isn't more involved than the other, the matrix you are talking about is the matrix form of what I posted above.
Originally Posted by Darkness
Two assumptions you have to make:
The axis you are rotating about is unit length and its tail is centered at the origin. The vector you are rotating is not necessarily unit length and is also centered at the origin.
Rotating a vector centered at the origin about some arbitrary direction is the same as rotating a point about some arbtirary direction.
Rotating a point about some arbitrary direction is the same as plotting a polar coordinate about some arbitrary direction.
In general, when you plot a polar coordinate, you must have an orthogonal basis. An orthogonal basis is describe by two vectors P and Q which are perpendicular to each other. P and Q lie on the
same plane, and the direction you are rotating about is the normal to this plane. P is sort of like your local 'x' axis, and Q is sort of like your local 'y' axis. To rephrase, P and Q are
perpendicular to each other, and also are perpendicular to the direction you are rotating about. The general equation for plotting the polar coordinate (which, should be familiar to you otherwise
you may need to study up on your math before doing anything more complex) is:
RotatedPoint = P cos (theta) + Q sin (theta)
The original vector you are rotating is vector W. Vector W has components parallel (in the same direction as) the normal and perpendicular to the normal (remember, the normal is the direction you
are rotating W about). The problem is, P and Q must both be perpendicular to the normal. This poses a problem, because vector W has components parallel (in the same direction as) the normal. So,
you decompose the vector W into components parallel and perpendicular to the normal, then you plus them into the equation above. Then, to complete it, you just add the parallel component back in
(it never would have changed during the rotation).
Vector Parallel = Normal * (W dot Normal) //this bad boy doesn't change
Vector Perpendicular = W - Parallel
Vector P = Perpendicular //This is sort of like the local 'x' axis
EDIT: I had to change the order of this
Vector Q = CrossProduct ( Normal, Perpendicular) //This is sort of like the local 'y' axis
Vector Rotated = (P * cos(theta) + Q * sin(theta) ) + Parallel
It's hard to understand, especially if you don't have an inherent understanding of vector projections.
Also note, this is extremely confusing stuff and you likely won't understand it until you ask me questions. So, ask me questions, even if you think they are stupid questions or whatever.
Heh. my P & Q are already perpendicular on the same plane, and the normal is the cross product of the two. Thank you, very much!
I was trying to use polar coordinates storing it as
x = cos( theta ); // polar coordinates
y = sin( theta ); // forms a cirle
But I didn't know what to do from there, as that didn't factor in Z.
You REALLY helped me out.
Awesome! Glad I could help! And good luck with whatever it is you are working on :)
I went into my own math libraries and checked one of my math books, and it turns out I *DID* prove the matrix you are talking about. To be more to the point, the 'one equation i posted', and the
matrix you are talking about, are the same thing...one isn't more involved than the other, the matrix you are talking about is the matrix form of what I posted above.
I never attacked you in this post and yet you continue to act as if I'm a threat to you. You got issues.
The matrix is this:
N here is the vector pV in the D3DXMatrixRotationAxis function prototype and matrix is of course the pOut member of the prototype. Theta is Angle.
matrix[0][0]=(n.x*n.x) * (1 - cos(theta)) + cos(theta);
matrix[0][1]=(n.x*n.y) * (1 - cos(theta)) + n.z * sin(theta);
matrix[0][2]=(n.x*n.z) * (1 - cos(theta)) - n.y * sin(theta);
matrix[1][0]=(n.x*n.y) * (1 - cos(theta)) - n.z * sin(theta);
matrix[1][1]=(n.y*n.y) * (1 - cos(theta)) + cos(theta);
matrix[1][2]=(n.y*n.z) * (1 - cos(theta)) + n.x * sin(theta);
matrix[2][0]=(n.x*n.z) * (1 - cos(theta)) + n.y * sin(theta);
matrix[2][1]=(n.y*n.z) * (1 - cos(theta)) - n.x * sin(theta);
matrix[2][2]=(n.z*n.z) * (1 - cos(theta)) + cos(theta);
Of course this is 3x3 matrix but converting to 4x4 to allow for translation is a trivial matter.
I never attacked you in this post and yet you continue to act as if I'm a threat to you. You got issues.
Then what is this supposed to mean:
The matrix is far more involved than the one equation you posted which is why I referenced that function.
I'm probably the most passive person on these forums. Everything short of slapping me in the face won't generate a response from me. However, I'm actually trying to explain some of these nasty
equations. Then, you made your comment which, to be honest, does come across as offensive...and, to top it all off, you were blatantly wrong...the matrix isn't any more involved than the
equations I posted, because the matrix you reference to is the matrix form of the equations. Quite literally, when you do the equations out, it's exactly the same as the equations i posted.
Now, how is any sane person not supposed to be frustrated with you in this case? Keep in mind I actually had quite a bit of time invested in that response, and you just seemed quick to belittle
Originally Posted by Darkness
Then what is this supposed to mean:
I'm probably the most passive person on these forums. Everything short of slapping me in the face won't generate a response from me. However, I'm actually trying to explain some of these nasty
equations. Then, you made your comment which, to be honest, does come across as offensive...and, to top it all off, you were blatantly wrong...the matrix isn't any more involved than the
equations I posted, because the matrix you reference to is the matrix form of the equations. Quite literally, when you do the equations out, it's exactly the same as the equations i posted.
Now, how is any sane person not supposed to be frustrated with you in this case? Keep in mind I actually had quite a bit of time invested in that response, and you just seemed quick to belittle
Score one for darkness
What do you mean far more involved? Unless you're talking about a different matrix, I've shown *how* it works...how, exactly, do you get anymore involved?
And that's not offensive? Stressing the *how*? You and I know pretty much the same stuff bud and it would be a lot more helpful to the board members if we could combine forces instead of always
bantering at each other in other people's threads.
I apologize if I came off as offensive to you but I was actually just expanding on what you wrote. I never stated that your equation was wrong or inferior to anything. Frankly I don't understand
why you took offense to it because nothing negative was said about your post. Saying that a matrix is more involved is a fact not a derogatory statement. Just because you know how to rotate on
x,y and z for instance, does not necessarily mean you know how to put all of that into matrix form. Since all of 3D is pretty much matrix concatentation (save for quaternions) I was putting your
equation into matrix form. The derivation of that matrix is not simple and even though I understand it, it would be extremely difficult to go through each step here in a post.
lol I got ranked up, with this comment:
Stop being a whiny little spotlight hog.
I love it lmao
Come on now guys you both are freaking good with 3d stuff. There is no need to fight you should combo up on a game or something :)
This might (not) come as a shock to you, but I don't work well with other people. I end up taking control over everything, I I am sillyI am sillyI am sillyI am sillyI am silly and whine until I
get other people doing exactly as I say, otherwise I quit and make fun of their mothers. | {"url":"http://cboard.cprogramming.com/game-programming/60332-rotating-vector-printable-thread.html","timestamp":"2014-04-21T01:01:12Z","content_type":null,"content_length":"26405","record_id":"<urn:uuid:0e529db0-e13d-4faa-86e2-a0c10f655ddd>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00066-ip-10-147-4-33.ec2.internal.warc.gz"} |
Most Published Research Findings Are False—But a Little Replication Goes a Long Way
Citation: Moonesinghe R, Khoury MJ, Janssens ACJW (2007) Most Published Research Findings Are False—But a Little Replication Goes a Long Way. PLoS Med 4(2): e28. doi:10.1371/journal.pmed.0040028
Published: February 27, 2007
This is an open-access article distributed under the terms of the Creative Commons Public Domain declaration which stipulates that, once placed in the public domain, this work may be freely
reproduced, distributed, transmitted, modified, built upon, or otherwise used by anyone for any lawful purpose.
Funding: The authors received no specific funding for this article.
Competing interests: The authors have declared that no competing interests exist.
Abbreviations: PPV, positive predictive value
We know there is a lot of lack of replication in research findings, most notably in the field of genetic associations [1–3]. For example, a survey of 600 positive associations between gene variants
and common diseases showed that out of 166 reported associations studied three or more times, only six were replicated consistently [4]. Lack of replication results from a number of factors such as
publication bias, selection bias, Type I errors, population stratification (the mixture of individuals from heterogeneous genetic backgrounds), and lack of statistical power [5].
In a recent article in PLoS Medicine, John Ioannidis quantified the theoretical basis for lack of replication by deriving the positive predictive value (PPV) of the truth of a research finding on the
basis of a combination of factors. He showed elegantly that most claimed research findings are false [6]. One of his findings was that the more scientific teams involved in studying the subject, the
less likely the research findings from individual studies are to be true. The rapid early succession of contradictory conclusions is called the “Proteus phenomenon” [7]. For several independent
studies of equal power, Ioannidis showed that the probability of a research finding being true when one or more studies find statistically significant results declines with increasing number of
As part of the scientific enterprise, we know that replication—the performance of another study statistically confirming the same hypothesis—is the cornerstone of science and replication of findings
is very important before any causal inference can be drawn. While the importance of replication is also acknowledged by Ioannidis, he does not show how PPVs of research findings increase when more
studies have statistically significant results. In this essay, we demonstrate the value of replication by extending Ioannidis' analyses to calculation of the PPV when multiple studies show
statistically significant results.
The probability that a study yields a statistically significant result depends on the nature of the underlying relationship. The probability is 1 - ß (one minus the Type II error rate) if the
relationship is true, and a (Type I error rate) when the relationship is false, i.e., there is no relationship. Similarly, the probability that r out of n studies yield statistically significant
results also depends on whether the underlying relationship is true or not. Let B(p,r,n) denote the probability of obtaining at least r statistically significant results out of n independent and
identical studies, with p being the probability of a statistically significant result. B(p,r,n) is calculated as
In this formula, p is 1 - ß when the underlying relationship is true and a when it is false. Let R be the pre-study odds and c be the number of relationships being probed in the field. The pre-study
probability of a relationship being true is given by R/(R + 1). The expected values of the 2 × 2 table are given in Table 1. When r is equal to one, entries in Table 1 are identical to those in Table
3 of Ioannidis [6]. The probability that, in the absence of bias, at least r out of n independent studies find statistically significant results is given by (RB(1 - ß,r,n) + B(α,r,n))/(R + 1) and the
PPV when at least r studies are statistically significant is RB(1 - ß,r,n)/((RB(1 - ß,r,n) + B(α,r,n)).
Table 1. Research Findings and True Relationships in the Presence of Multiple Studies
Positive Predictive Value as a Function of Study Replication
We examine the PPV as a function of the number of statistically significant findings. Figure 1 shows the PPV of at least one, two, or three statistically significant research findings out of ten
independent studies as a function of the pre-study odds of a true relationship (R) for powers of 20% and 80%. The lower lines correspond to Ioannidis' finding and indicate the probability of a true
association when at least one out of ten studies shows a statistically significant result. As can be seen, the PPV is substantially higher when more research findings are statistically significant.
Thus, a few positive replications can considerably enhance our confidence that the research findings reflect a true relationship. When R ranged from 0.0001 to 0.01, a higher number of positive
studies is required to attain a reasonable PPV. The difference in PPV for power of 80% and power of 20% when at least three studies are positive is higher than when at least one study is positive.
Figure 2 gives the PPV for increasing number of positive studies out of ten, 25, and 50 studies for pre-study odds of 0.0001, 0.01, 0.1, and 0.5 for powers of 20% and 80%. When there is at least one
positive study (r = 1) and power equal to 80%, as indicated in Ioannidis' paper, PPV declined approximately 50% for 50 studies compared to ten studies for R values between 0.0001 and 0.1. However,
PPV increases with increasing number of positive studies and the percentage of positive studies required to achieve a given PPV declines with increasing number of studies. The number of positive
studies required to achieve a PPV of at least 70% increased from eight for ten studies to 12 for 50 studies when pre-study odds equaled 0.0001, from five for ten studies to eight for 50 studies when
pre-study odds equaled 0.01, from three for ten studies to six for 50 studies when pre-study odds equaled 0.1, and from two for ten studies to five for 50 studies when pre-study odds equaled 0.5. The
difference in PPV for powers of 80% and 20% declines with increasing number of studies.
Figure 1. Probability of a True Relationship When At Least One, Two, or Three (Out of Ten) Studies Have Statistically Significant Results as a Function of the Pre-Study Odds of a True Relationship (α
= 0.05)
Dashed lines refer to power of 0.2 and solid lines to power of 0.8.
Figure 2. Positive Predictive Value for Research Findings Being True for At Least r Positive Studies Out of Ten, 25, and 50 Studies for Pre-Study Odds R of 0.0001, 0.01, 0.1, and 0.5 (α = 0.05)
Dashed lines refer to power of 0.2 and solid lines to power of 0.8.
Probability Distribution of Statistically Significant Results
Although the PPV increases with increasing statistically significant results, the probability of obtaining at least r significant results declines with increasing r. This probability and the
corresponding PPV for pre-study odds of 0.0001, 0.01, 0.1, and 0.5 are given for ten studies in Table 2. When power is 20% and pre-study odds are 0.0001, the probability of obtaining at least three
statistically significant results is 1% and the corresponding PPV is 0.3%. This probability and the corresponding PPV increase with increasing pre-study odds. For example, when R = 0.1, the
probability of obtaining at least three significant results is 4% and the PPV is 74%. As expected, both the probability of obtaining statistically significant results and the corresponding PPV
increase with increasing power. However, for very small R values (around 0.0001), the increase in power has a minimal impact in the probability of obtaining at least one, two, or three statistically
significant results. When power is 80%, the probability of obtaining at least three statistically significant results is 1.2% and the corresponding PPV is 0.9% for R = 0.0001, and when pre-study odds
are 0.1, the probability of obtaining at least three statistically significant results increases to 10% and the corresponding PPV to 90%.
Table 2. Probability of Obtaining At Least r Significant Results Out of Ten Studies when Pre-Study Odds Equal 0.0001, 0.01, 0.1, and 0.5
The importance of research replication was discussed in a Nature Genetics editorial in 1999 lamenting the nonreplication of association studies [8]. The editor emphasized that when authors submit
manuscripts reporting genetic associations, the study should include an effect size and it should contain either a replication in an independent sample or physiologically meaningful data supporting a
functional role of the polymorphism in question. While we acknowledge that our assumptions of identical design, power, and level of significance reflect a somewhat simplified scenario of replication,
we quantified the positive predictive value of true research findings for increasing numbers of significant results. True replication, however, requires a precise process where the exact same finding
is reexamined in the same way. More often than not, genuine replication is not done, and what we end up with in the literature is corroboration or indirect supporting evidence. While this may be
acceptable to some extent in any scientific enterprise, the distance from this to data dredging, moving the goal post, and other selective reporting biases is often very small and can contribute to
“pseudo” replication.
Replication does not mean that we can have underpowered studies; even when we have several underpowered studies replicate a finding, the PPV remains low. Good replication practices require adequately
powered studies. More generally, meta-analysis is a more useful approach to assess the totality of evidence in a body of work. Ioannidis discussed the importance of meta-analysis, and its weaknesses
in cases where even the meta-analysis is underpowered.
Our calculations have not considered the possibility of bias, i.e., selective reporting problems that may change some “negative” results to “positive” or may leave “negative” results unpublished.
John Ioannidis has shown that modest bias can decrease the PPV steeply [6]. Therefore if replication is to work in genuinely increasing the PPV of research claims, it should be coupled with full
transparency and non-selective reporting of research results. Note that when hypotheses are one-sided, according to our definition of replication, we only consider hypotheses that are in the same
direction. Under this definition, statistically significant results in both directions do not arise. However, in meta-analysis, one can combine results that are significant in opposite directions.
Calculations in a formal meta-analysis may not square fully with the inference presented here, since meta-analysis would incorporate both effect sizes and their uncertainty rather than just the
“positive” versus “negative” inference. For example, we may have the necessary number of “positive” studies, but if the observed “positive” effects are small and all the other studies have trends in
the opposite direction, the summary effect may well be null.
In summary, while we agree with Ioannidis that most research findings are false, we clearly demonstrate that replication of research findings enhances the positive predictive value of research
findings being true. While this is not unexpected, it should be encouraging news to researchers in their never-ending pursuit of scientific hypothesis generation and testing. Nevertheless, more
methodologic work is needed to assess and interpret cumulative evidence of research findings and their biological plausibility. This is especially urgent in the exploding field of genetic | {"url":"http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0040028","timestamp":"2014-04-18T08:12:34Z","content_type":null,"content_length":"71964","record_id":"<urn:uuid:a5c728dc-057f-417e-82a6-46844b28e832>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00624-ip-10-147-4-33.ec2.internal.warc.gz"} |
Randomness as a Resource
Randomness as a Resource
The Empyrean and the Empirical
As a practical matter, reserves of randomness certainly appear adequate to meet current needs. Consumers of randomness need not fear rolling blackouts this summer. But what of the future? The great
beacon of randomness proposed by Rabin and Ding would require technology that remains to be demonstrated. They envision broadcasting 50 billion random bits per second, but randomness generators today
typically run at speeds closer to 50 kilobits per second.
The prospect of scaling up by a factor of a million demands attention to quality as well as quantity. For most commodities, quantity and quality have an inverse relation. A laboratory buying
milligrams of a reagent may demand 99.9 percent purity, whereas a factory using carloads can tolerate a lower standard. In the case of randomness, the trade-off is turned upside down. If you need
just a few random numbers, any source will do; it’s hard to spot biases in a handful of bits. But a Monte Carlo experiment burning up billions of random numbers is exquisitely sensitive to the
faintest trends and patterns. The more randomness you consume, the better it has to be.
Why is it hard to make randomness? The fact that maintaining perfect order is difficult surprises no one; but it comes as something of a revelation that perfect disorder is also beyond our reach. As
a matter of fact, perfect disorder is the more troubling concept—it is hard not only to attain but also to define or even to imagine.
The prevailing definition of randomness was formulated in the 1960s by Gregory J. Chaitin of IBM and by the Russian mathematician A. N. Kolmogorov. The definition says that a sequence of bits is
random if the shortest computer program for generating the sequence is at least as long as the sequence itself. The binary string 101010101010 is not random because there is an easy rule for creating
it, whereas 111010001011 is unlikely to have a generating program much shorter than "print 111010001011." It turns out that almost all strings of bits are random by this criterion—they have no
concise description—and yet no one has ever exhibited a single string that is certified to be random. The reason is simple: The first string certified to have no concise description would thereby
acquire a concise description—namely that it’s the first such string.
The Chaitin-Kolmogorov definition is not the only aspect of randomness verging on the paradoxical or the ironic. Here is another example: True random numbers, captured in the wild, are clearly
superior to those bred in captivity by pseudo-random generators—or at least that’s what the theory of randomness implies. But Marsaglia has run the output of various hardware and software generators
through a series of statistical tests. The best of the pseudo-random generators earned excellent grades, but three hardware devices flunked. In other words, the fakes look more convincingly random
than the real thing.
To me the strangest aspect of randomness is its role as a link between the world of mathematical abstraction and the universe of ponderable matter and energy. The fact that randomness requires a
physical rather than a mathematical source is noted by almost everyone who writes on the subject, and yet the oddity of this situation is not much remarked.
Mathematics and theoretical computer science inhabit a realm of idealized and immaterial objects: points and lines, sets, numbers, algorithms, Turing machines. For the most part, this world is
self-contained; anything you need in it, you can make in it. If a calculation calls for the millionth prime number or the cube root of 2, you can set the computational machinery in motion without
ever leaving the precincts of mathland. The one exception is randomness. When a calculation asks for a random number, no mathematical apparatus can supply it. There is no alternative but to reach
outside the mathematical empyrean into the grubby world of noisy circuits and decaying nuclei. What a strange maneuver! If some purely mathematical statement—say the formula for solving a quadratic
equation—depended on the mass of the earth or the diameter of the hydrogen atom, we would find this disturbing or absurd. Importing randomness into mathematics crosses the same boundary.
Of course there is another point of view: If we choose to look upon mathematics as a science limited to deterministic operations, it’s hardly a surprise that absence-of-determinism can’t be found
there. Perhaps what is really extraordinary is not that randomness lies outside mathematics but that it exists anywhere at all.
Or does it? The savants of the 18th century didn’t think so. In their clockwork universe the chain of cause and effect was never broken. Events that appeared to be random were merely too complicated
to submit to a full analysis. If we failed to predict the exact motion of an object—a roving comet, a spinning coin—the fault lay not in the unruliness of the movement but in our ignorance of the
laws of physics or the initial conditions.
The issue is seen differently today. Quantum mechanics has cast a deep shadow over causality, at least in microscopic domains. And "deterministic chaos" has added its own penumbra, obscuring the
details of events that might be predicted in principle, but only if we could gather an unbounded amount of information about them. To a modern sensibility, randomness reflects not just the limits of
human knowledge but some inherent property of the world we live in. Nevertheless, it seems fair to say that most of what goes on in our neighborhood of the universe is mainly deterministic. Coins
spinning in the air and dice tumbling on a felt table are not conspicuously quantum-mechanical or chaotic systems. We choose to describe their behavior through the laws of probability only as a
matter of convenience; there’s no question the laws of angular momentum are at work behind the scenes. If there is any genuine randomness to be found in such events, it is the merest sliver of
quantum uncertainty. Perhaps this helps to explain why digging for randomness in the flinty soil of physics is such hard work.
© Brian Hayes | {"url":"http://www.americanscientist.org/issues/pub/randomness-as-a-resource/4","timestamp":"2014-04-20T08:14:54Z","content_type":null,"content_length":"124937","record_id":"<urn:uuid:99171192-0639-4c7b-9f3e-b3b8e870f9b9>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00557-ip-10-147-4-33.ec2.internal.warc.gz"} |
Finding a point of intersection between circles
April 13th 2010, 01:29 PM #1
Apr 2010
Finding a point of intersection between circles
My teacher has us going through these A level problems, but I am just so lost with this one. She said there may be a pop quiz tomorrow with similar questions, so i'd really like to get the
concepts down flat. Would anyone be willing to explain how to solve the following word problem?
A) The circles (x^2)+(y^2)=20 and ((x-6)^2)+((y-3)^2)=5 are tangent. Find the coordinates of the point of trangency and then find an equation of the common internal tangent shown.
B) If you subtract the equations of the two circles from each other, you get a linear equation. Find this quation. Compare it with the quation found in part A.
Any help is appreciated. Thank you!
My teacher has us going through these A level problems, but I am just so lost with this one. She said there may be a pop quiz tomorrow with similar questions, so i'd really like to get the
concepts down flat. Would anyone be willing to explain how to solve the following word problem?
A) The circles (x^2)+(y^2)=20 and ((x-6)^2)+((y-3)^2)=5 are tangent. Find the coordinates of the point of trangency and then find an equation of the common internal tangent shown.
B) If you subtract the equations of the two circles from each other, you get a linear equation. Find this quation. Compare it with the quation found in part A.
Any help is appreciated. Thank you!
Hi precalc1209,
the centre of $x^2+y^2=\left(\sqrt{20}\right)^2$ is (0,0)
radius is $\sqrt{20}=\sqrt{5(4)}=\sqrt{5}\sqrt{4}=2\sqrt{5}$
the centre of $(x-6)^2+(y-3)^2=\left(\sqrt{5}\right)^2$ is (6,3)
radius is $\sqrt{5}$
Distance from (0,0) to (6,3) is $\sqrt{6^2+3^2}=\sqrt{45}=\sqrt{9(5)}=\sqrt{9}\sqrt {5}=3\sqrt{5}$
hence the circles touch 2 thirds of the way from (0,0) to (6,3) which is (4,2)
the equation of the line from (0,0) to (6,3) is $y-3=0.5(x-6)$
the tangent line at the point of contact is perpendicular to this line
(slopes multiply to give -1) and contains the point (4,2)
You can then write the equation of this line.
now subtract the circle equations having multiplied out the factors in one of them.
compare your two answers.
April 13th 2010, 01:44 PM #2
MHF Contributor
Dec 2009 | {"url":"http://mathhelpforum.com/algebra/138998-finding-point-intersection-between-circles.html","timestamp":"2014-04-16T06:51:18Z","content_type":null,"content_length":"35781","record_id":"<urn:uuid:21980114-ee95-42ab-9252-285f6188179c>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00367-ip-10-147-4-33.ec2.internal.warc.gz"} |
Annapolis Junction Statistics Tutor
Find an Annapolis Junction Statistics Tutor
...In Mathematics, my tutoring experience includes: - Arithmetic - Pre-algebra - Algebra I & II - Plane & Analytic Geometry - Trigonometry - Probability & Statistics - Number Theory - Calculus -
Differential Equations -- Ordinary and Partial - Real & Complex Analysis - Numerical Analysis In the Sc...
39 Subjects: including statistics, English, ACT English, Java
...I have former colleagues who continue to call me to ask me questions about functions in Word and Excel when they can't get help from their IT department. I am primarily self-taught, and love to
share software tips with others. I have been using STATA for nearly 10 years, and teaching students, as well as colleagues, in its use.
6 Subjects: including statistics, SPSS, Microsoft Excel, Microsoft Word
...These courses involved solving differential equations related to applications in physics and electrical engineering. As an undergraduate student in Electrical Engineering and Physics and as a
graduate student, I took courses in mathematical methods for physics and engineering. These courses inc...
16 Subjects: including statistics, physics, calculus, geometry
...My teaching experience includes one semester of teaching and tutoring mathematics at Montgomery College, one year as a graduate assistant in the biology department at Bucknell University,
several semesters as a student tutor in a variety of subjects at Luzerne County Community College, three year...
15 Subjects: including statistics, calculus, geometry, algebra 1
...I am blessed with vast knowledge in Maths and Chemistry (My BS Degree). I have over 20 years teaching experience in Kenya, and the USA. Through the years of hard work, I have acquired prowess
in the following subject areas: Maths - Algebra I and II, College Math, Geometry and Probability, Chemis...
15 Subjects: including statistics, chemistry, geometry, algebra 1 | {"url":"http://www.purplemath.com/annapolis_junction_md_statistics_tutors.php","timestamp":"2014-04-17T10:59:31Z","content_type":null,"content_length":"24578","record_id":"<urn:uuid:5102063a-a093-4067-ab22-a43eb12d9034>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00190-ip-10-147-4-33.ec2.internal.warc.gz"} |
A and C Weighting (ANSI® S1.42 standard)
You can design A and C weighting filters that follow the ANSI S1.42 [1] and IEC 61672-1 [2] standards. An A-weighting filter is a band pass filter designed to simulate the loudness of low-level
tones. An A-weighting filter progressively de-emphasizes frequencies below 500 Hz. A C-weighting filter removes sounds outside the audio range of 20 Hz to 20 kHz and simulates the loudness perception
of high-level tones.
The ANSI S1.42 standard requires that the filter magnitudes fall within a specified tolerance mask. The standard defines two masks, one with stricter tolerance values than the other. A filter that
meets the tolerance specifications of the stricter mask is referred to as a Class 1 filter. A filter that meets the specifications of the less strict mask is referred to as a Class 2 filter. You
define the type of class you want in your design by setting the Class property to 1 or 2. The choice of the Class value will not affect the filter design itself but it will be used to render the
correct tolerance mask in FVTOOL.
A and C-weighting filter designs are based on direct implementation of the filter's transfer function based on poles and zeros specified in the ANSI S1.42 standard. The filters only have one design
method referred to as 'ansis142'.
The following code obtains an IIR Class 1 filter design for A-weighting with a sampling rate of 48 kHz.
h = fdesign.audioweighting('WT,Class','A',1,48e3)
h =
Response: 'Audio Weighting'
Specification: 'WT,Class'
Description: {'Weighting type, Class'}
NormalizedFrequency: false
Fs: 48000
WeightingType: 'A'
Class: 1
Ha = design(h,'ansis142','SystemObject',true);
hfvt = fvtool(Ha);
legend(hfvt, 'A-weighting')
The A and C weighting standards specify tolerance magnitude values for up to 20 kHz. In the following example we use a sampling frequency of 28 kHz and design a C-weighting filter. Even though the
Nyquist interval for this sampling frequency is below the maximum specified 20 kHz frequency, the design still meets the Class 2 tolerances. The design, however, does not meet Class 1 tolerances due
to the small sampling frequency value.
h = fdesign.audioweighting('WT,Class','C',2,28e3);
Hcclass2 = design(h,'SystemObject',true);
legend(hfvt, 'C-weighting, Class 2')
ITU-R 468-4 recommendation [3] was developed to better reflect the subjective loudness of all types of noise, as opposed to tones. ITU-R 468-4 weighting was designed to maximize its response to the
types of impulsive noise often coupled into audio cables as they pass through telephone switching facilities. ITU-R 468-4 weighting correlates well with noise perception, since perception studies
have shown that frequencies between 1 kHz and 9 kHz are more "annoying" than indicated by A-weighting.
You design a weighting filter based on the ITU-R 468-4 standard for a sampling frequency of 80 kHz using the following code. You can choose from frequency sampling or equiripple FIR approximations,
or from a least P-norm IIR approximation. In all cases, the filters are designed with the minimum order that meets the standard specifications (mask) for the sampling frequency at hand.
h = fdesign.audioweighting('WT','ITUR4684',80e3)
h =
Response: 'Audio Weighting'
Specification: 'WT'
Description: {'Weighting type'}
NormalizedFrequency: false
Fs: 80000
WeightingType: 'ITUR4684'
Hitur1 = design(h,'allfir','SystemObject',true);
legend(hfvt,'ITU-R 468-4 FIR equiripple approximation', ...
'ITU-R 468-4 FIR frequency sampling approximation')
Hitur2 = design(h,'iirlpnorm','SystemObject',true);
legend(hfvt,'ITU-R 468-4 IIR least P-norm approximation')
While IIR designs yield smaller filter orders, FIR designs have the advantage of having a linear phase. In the FIR designs, the equiripple design method will usually yield lower filter orders when
compared to the frequency sampling method but might have more convergence issues at large sampling frequencies.
ITU-T 0.41 and C-message Weighting Filters
ITU-T 0.41 and C-message weighting filters are band pass filters used to measure audio-frequency noise on telephone circuits. The ITU-T 0.41 filter is used for international telephone circuits. The
C-message filter is typically used for North American telephone circuits. The frequency response of the ITU-T 0.41 and C-message weighting filters is specified in the ITU-T O.41 standard [4] and Bell
System Technical Reference 41009 [5], respectively.
You design an ITU-T 0.41 weighting filter for a sampling frequency of 24 kHz using the following code. You can choose from FIR frequency sampling or equiripple approximations. The filters are
designed with the minimum order that meets the standard specifications (mask) for the sampling frequency at hand.
h = fdesign.audioweighting('WT','ITUT041',24e3);
Hitut = design(h,'allfir','SystemObject',true);
legend(hfvt,'ITU-T 0.41 FIR equiripple approximation', ...
'ITU-T 0.41 FIR frequency sampling approximation')
You design a C-message weighting filter for a sampling frequency of 51.2 kHz using the following code. You can choose from FIR frequency sampling or equiripple approximations or from an exact IIR
implementation of poles and zeros based on the poles and zeros specified in [6]. You obtain the IIR design by selecting the 'bell41009' design method. The FIR filter approximations are designed with
the minimum order that meets the standard specifications (mask) for the sampling frequency at hand.
h = fdesign.audioweighting('WT','Cmessage',51.2e3);
Hcmessage1 = design(h,'allfir','SystemObject',true);
legend(hfvt,'C-message FIR equiripple approximation', ...
'C-message FIR frequency sampling approximation')
Hcmessage2 = design(h,'bell41009','SystemObject',true);
legend(hfvt,'C-message weighting (IIR)')
We have presented the design of A, C, C-message, ITU-T 0.41, and ITU-R 468-4 weighting filters. Some of the audio weighting standards do not specify exact pole/zero values, instead, they specify a
list of frequency values, magnitudes and tolerances. If the exact poles and zeros are not specified in the standard, filters are designed using frequency sampling, equiripple, and/or IIR least P-norm
arbitrary magnitude approximations based on the aforementioned list of frequency values, attenuations, and tolerances. The filter order of the arbitrary magnitude designs is chosen as the minimum
order for which the resulting filter response is within the tolerance mask limits. Designs target the specification mask tolerances only within the Nyquist interval. If Fs/2 is smaller than the
largest mask frequency value specified by the standard, the design algorithm will try to meet the specifications up to Fs/2.
In the FIR designs, the equiripple design method will usually yield lower filter orders when compared to the frequency sampling method but might have more convergence issues at large sampling
[1] 'Design Response of Weighting Networks for Acoustical Measurements', American National Standard, ANSI S1.42-2001.
[2] 'Electroacoustics Sound Level Meters Part 1: Specifications', IEC 61672-1, First Edition 2002-05.
[3] 'Measurement of Audio-Frequency Noise Voltage Level in Sound Broadcasting', Recommendation ITU-R BS.468-4 (1970-1974-1978-1982-1986).
[4] 'Specifications for Measuring Equipment for the Measurement of Analogue Parameters, Psophometer for Use on Telephone-Type Circuits', ITU-T Recommendation 0.41.
[5] 'Transmission Parameters Affecting Voiceband Data Transmission-Measuring Techniques', Bell System Technical Reference, PUB 41009, 1972.
[6] 'IEEE® Standard Equipment Requirements and Measurement Techniques for Analog Transmission Parameters for Telecommunications', IEEE Std 743-1995 Volume , Issue , 25, September 1996. | {"url":"http://www.mathworks.se/help/dsp/examples/audio-weighting-filters.html?nocookie=true","timestamp":"2014-04-24T00:20:51Z","content_type":null,"content_length":"40729","record_id":"<urn:uuid:1fe59880-e35f-4ea8-b7c1-1d72f9be70bf>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00450-ip-10-147-4-33.ec2.internal.warc.gz"} |
What rectangles have the same area that equal the same perimeter?
In computational geometry, the largest empty rectangle problem, maximal empty rectangle problem or maximum empty rectangle problem, is the problem of finding a rectangle of maximal size to be placed
among obstacles in the plane. There are a number of variants of the problem, depending on the particularities of this generic formulation, in particular, depending on the measure of the "size",
domain (type of obstacles), and the orientation of the rectangle.
The problems of this kind arise e.g., in electronic design automation, in design and verification of physical layout of integrated circuits.
Related Websites: | {"url":"http://answerparty.com/question/answer/what-rectangles-have-the-same-area-that-equal-the-same-perimeter","timestamp":"2014-04-18T05:30:53Z","content_type":null,"content_length":"22863","record_id":"<urn:uuid:8a07800d-ba22-4576-9c8e-bae622676917>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00004-ip-10-147-4-33.ec2.internal.warc.gz"} |
220 degrees celcius is how many degrees fahrenheit
You asked:
220 degrees celcius is how many degrees fahrenheit
Say hello to Evi
Evi is our best selling mobile app that can answer questions about local knowledge, weather, books, music, films, people and places, recipe ideas, shopping and much more. Over the next few months we
will be adding all of Evi's power to this site.
Until then, to experience all of the power of Evi you can download Evi for free on iOS, Android and Kindle Fire. | {"url":"http://www.evi.com/q/220_degrees_celcius_is_how_many_degrees_fahrenheit","timestamp":"2014-04-20T16:09:53Z","content_type":null,"content_length":"55835","record_id":"<urn:uuid:6b26341b-4bb4-42b8-81b7-6237936f64db>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00620-ip-10-147-4-33.ec2.internal.warc.gz"} |
A metric space is paracompact
Today, I shall use the theorem of Michael discussed earlier to prove that a metric space is paracompact.
Theorem 1 (Stone) A metric space is paracompact.
This theorem seems to use the axiom of choice, or some version thereof, in all proofs.
1. Proof of Stone’s theorem
Suppose given a cover ${\mathfrak{A}=\left\{U_\alpha\right\}}$ of the metric space ${X}$ (with metric ${d}$, say). We will show that there is a refinement of ${\mathfrak{A}}$ that can be decomposed
into a countable collection of locally finite families. Thanks to Michael’s theorem, this will prove the result.
First, suppose that we have a countable cover ${\left\{U_i\right\}}$. Then the idea is to consider the differences ${U_i - \bigcup_{j<i} U_j}$, which form a point-finite cover of ${X}$ (i.e. each
point is contained in finitely many of the differences). Then, expand these slightly using the metric to make them open.
However, this naive approach as to be modified. First, we will have to generalize to arbitrary index sets, not just the natural numbers. Second, we need local finiteness, not just point-finiteness.
So, for starters, well-order the index set ${A}$ in which ${\alpha}$ takes values. This is where we use the axiom of choice; for separable metric spaces, this would not be necessary, since they have
a countable basis, and every open cover can be replaced by a countable subcover.
Now we shrink the ${U_{\alpha}}$ slightly. Namely, we write
$\displaystyle V^n_{\alpha} = \left\{x: d(x, X - U_\alpha) \geq 2^{-n} \right\}.$
In other words, to say that a point belongs to ${V^n_\alpha}$ is to say that it belongs to ${U_\alpha}$ and is not too far from the boundary. Note that ${\bigcup_n V^n_\alpha = U_\alpha}$.
We define the sets
$\displaystyle W^n_\alpha = V^n_\alpha - \bigcup_{\beta > \alpha }V^{n+1}_{\beta}$
The point of this is to excise out redundancies when possible. Note that the ${W^n_\alpha}$ form a cover of ${X}$. Indeed, if ${x \in X}$, choose the smallest ${\alpha}$ with ${x \in U_\alpha}$. Then
${x \in W^n_\alpha}$ for ${n}$ sufficiently large. The good news is that we have excised out redundancies, but the bad news is that the ${W^n_\alpha}$ are not open. So set
$\displaystyle Z^n_\alpha = \left\{ x: d(x, W^n_\alpha)< 2^{-n-3} \right\}.$
These are small neighborhoods of the ${W^n_\alpha}$ and are consequently open. Moreover, the ${Z^n_\alpha}$ are subsets of ${U_\alpha}$ and consequently form a refinement of ${\left\{U_\alpha\right
\}}$. Thus, if we can show that each ${\left\{Z^n_\alpha, \alpha \in A\right\}}$ is locally finite, then we will have a refinement
$\displaystyle \bigcup_{n, \alpha} Z^n_\alpha$
of ${\mathfrak{A}}$ which can be decomposed into a countable collection of locally finite families.
So, that’s the plan. We will actually show that for each ${n}$, there is a ${\delta = \delta_n}$ such that ${d(Z^{n}_\alpha, Z^n_\beta) \geq \delta}$ if ${\alpha eq \beta}$. Indeed, suppose ${x \in Z
^n_\alpha, y \in Z^n_\beta}$, and ${\alpha eq \beta}$. Without loss of generality, ${\beta < \alpha}$. Then ${x}$ is within ${2^{-n-3}}$ of a point ${z \in W^n_\alpha}$ and ${y}$ is within ${2^
{-n-3}}$ of a point ${z'}$ in ${W^n_\beta}$. Now the distance of ${z'}$ to ${X-U_\beta}$ is, by definition, at least ${2^{-n}}$, because ${z' \in V^n_\beta}$. However, ${z}$ is not in ${V^{n+1}_\
beta}$ because of the way the ${W^n_\alpha}$ were defined, so its distance to ${X - U_\beta}$ is at most ${2^{-n-1}}$. In particular,
$\displaystyle d(z, z') \geq 2^{-n-1}$
so it follows that ${d(x,y) \geq 2^{-n-1} - 2(2^{-n-3}) = 2^{-n-2}}$.
In particular, the ${Z^n_\alpha}$ are mildly separated from each other, which means that any ${2^{-n-3}}$ neighborhood of any point can intersect at most one of the ${Z^{n}_\alpha}$ (where ${n}$ of
course is fixed). In particular, the ${Z^n_\alpha}$ for each ${n}$ are locally finite. This proves the claim we wanted. This also establishes the full result.
August 21, 2010 at 8:44 pm
[...] Paracompactness over at Climbing Mount Bourbaki Filed under: Blath by Andrew — Leave a comment August 21, 2010 I have the next Rudin post in draft, but in needs some more work. In the
meanwhile, Ankit Mathew is working on a sequence on paracompactness, starting here and continuing here and here. [...] | {"url":"http://amathew.wordpress.com/2010/08/19/a-metric-space-is-paracompact/","timestamp":"2014-04-16T19:18:01Z","content_type":null,"content_length":"60114","record_id":"<urn:uuid:583e4cb5-a0fc-4f4e-863f-a937c2ef22d5>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00514-ip-10-147-4-33.ec2.internal.warc.gz"} |
Series proof.
September 13th 2009, 06:15 PM #1
Mar 2008
Acolman, Mexico
Sum proof.
Hello, I am stuck with the following problem.
Verify the following identity for $n \geq 2$.
$\sum_{k=0}^{n}(-1)^{k}{n \choose k}=0$
I tried an induction proof, but I couldn't get it right. Do I need to consider when n is odd and when n is even?
Thanks in advance
Last edited by akolman; September 13th 2009 at 07:31 PM. Reason: wrong title =(
this is a sum, not a series, and the solution is quite easy:
$\sum\limits_{k=0}^{n}{\binom nk(-1)^{k}1^{n-k}}=(1-1)^{n}=0^{n}=0.$
Like Krizalid said, this is an example of the Binomial Theorem.
If you know that
$(x + y)^n = \sum_{k = 0}^n {\left(_k^n\right) x^{n - k} y^k}$
it is relatively easy to see that if you let $x = 1$ and $y = -1$ you get
$(1 - 1)^n = \sum_{k = 0}^n {\left(_r^n\right) (1)^{n - k} (-1)^k}$
$0^n = \sum_{k = 0}^n {\left(_k^n\right) (-1)^k}$
Therefore $\sum_{k = 0}^n {\left(_k^n\right) (-1)^k} = 0$.
September 13th 2009, 07:29 PM #2
September 13th 2009, 09:51 PM #3 | {"url":"http://mathhelpforum.com/advanced-statistics/102148-series-proof.html","timestamp":"2014-04-17T14:42:58Z","content_type":null,"content_length":"40058","record_id":"<urn:uuid:23c4bd7b-3afb-423b-bcaa-bfdfbb979c31>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00118-ip-10-147-4-33.ec2.internal.warc.gz"} |
correct solution for y''-(x^2)y=0
August 9th 2012, 06:08 PM #1
Aug 2012
South Korea
correct solution for y''-(x^2)y=0
y''-x^2y=0, (y''=d^2y/dx^2)
when try to find solution, usually assuming solution form is y=e^lambda X, and lambda =x, -x, then general solution is y=c[1]e^x^2 + c[2]e^-x^2. however, I was told this is wrong approach to
Then how can I get correct general solution?
Last edited by tykim; August 9th 2012 at 06:20 PM.
Re: correct solution for y''-(x^2)y=0
Your trial solutions are only used for second order linear DEs with constant coefficients. What you actually have is a Cauchy-Euler equation.
Re: correct solution for y''-(x^2)y=0
Then I can set y=x^m, y'=mx^(m-1), y''=m(m-1)x^(m-2)
But could not get m.
Could I get more help?
Last edited by tykim; August 9th 2012 at 08:07 PM.
Re: correct solution for y''-(x^2)y=0
Re: correct solution for y''-(x^2)y=0
You cannot obtain a closed form for the solutions in using an elementrary method only.
Solving this kind of ODE requires a change of fonction and a change of variable in order to transform it to a standard Bessel ODE.
The solutions involve some Bessel functions.
Alternatively the solution can be directly expressed on the form of particular Parabolic Cylinder functions (which are less usual as the related Bessel functions)
Last edited by JJacquelin; August 9th 2012 at 08:40 PM.
August 9th 2012, 07:08 PM #2
August 9th 2012, 07:55 PM #3
Aug 2012
South Korea
August 9th 2012, 08:02 PM #4
August 9th 2012, 08:31 PM #5
Aug 2011 | {"url":"http://mathhelpforum.com/calculus/201952-correct-solution-y-x-2-y-0-a.html","timestamp":"2014-04-20T00:41:20Z","content_type":null,"content_length":"43439","record_id":"<urn:uuid:192f1700-b015-418a-935e-f15d3fb8a229>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00132-ip-10-147-4-33.ec2.internal.warc.gz"} |
High Sensitivity midranges?
12-25-2005 #1
High Sensitivity midranges?
I'm looking for a midrange to bridge the bap between a CD1pro mini body and a Koda 8. The CD1's sensitivity is rated at 101 dB and the Kodas is rated at 86db, both at 1W,1m. Although, the nominal
impedence of the horns is 8 ohmns and the Kodas is 4 ohmns.
I'm looking for a midrange driver with a sensitivity rating of at least 90 db (also 1W,1m) into either 4 or 8 ohmns.
Also, producing the same wattage into a higher nominal impedance requires more voltage, no? Therefore, producing a certain intensity of sound at 4 ohmns would require more power than at 8 ohmns?
So when compairing the sensitivities of 2 woofers rated at differen nominal impediances, you have to take into account that you will actually be using more current when using the 4ohmn speakers?
Right? If this is that case, then could someone please explain how it will effect level matching?
Re: High Sensitivity midranges?
crunktimes . com/ct
Re: High Sensitivity midranges?
i think its the other way around cuz when you run an amp at 8 ohms it runs a lot cooler than at 4 ohms.
I must follow in eugenics' footsteps, for he is great
Ref's: Ebay
VOA_NIGHTMARE, Blados, frankga2, allenzo, bamaboy x2, Crown_amps, violentrapture, adio x3, bumpin_blazer, fi car audio, haunther, sbcaprice305 x2
Re: High Sensitivity midranges?
I have seen no SEAS woofers with the right combination of extension, cone area, excursion and sensitivity that I need. The L18RNX/P would be perfect if the sensitivity was higher.
Re: High Sensitivity midranges?
The Audax PR170M0 would probably be perfect (maybe even a little to perfect), but it is no longer made.
Re: High Sensitivity midranges?
I'm looking for a midrange to bridge the bap between a CD1pro mini body and a Koda 8. The CD1's sensitivity is rated at 101 dB and the Kodas is rated at 86db, both at 1W,1m. Although, the nominal
impedence of the horns is 8 ohmns and the Kodas is 4 ohmns.
I'm looking for a midrange driver with a sensitivity rating of at least 90 db (also 1W,1m) into either 4 or 8 ohmns.
Audax PR170M0
Winslow on ECA had some for sale recently also that you may wish to check into.
Also, producing the same wattage into a higher nominal impedance requires more voltage, no?
More voltage, less current.
100w into 8ohms;
sqrt(100*8) = 28.28V
100w/28.28V = 3.54A
28.28*3.54 = 100.1w (not exactly 100w due to rounding)
100w into 4ohms;
sqrt(100*4) = 20V
100w/20V = 5A
20*5 = 100w
Therefore, producing a certain intensity of sound at 4 ohmns would require less power than at 8 ohmns? No, so when compairing the sensitivities of 2 woofers rated at differen nominal impediances,
you have to take into account that you will actually be using more power when using the 8ohmn speakers? Right?
Not sure I follow. How would you get more power with 8ohm?
Also, that doesn't take into account things such as power compression, etc. Just because a 4ohm speaker will receive more power, doesn't mean it will ultimately be louder.
If this is that case, then could someone please explain how it will effect level matching?
Sure.....just do it either by ear or an RTA, as that's what really matters
Re: High Sensitivity midranges?
Oh, yeah...forgot about the not being made thing
Re: High Sensitivity midranges?
i think its the other way around cuz when you run an amp at 8 ohms it runs a lot cooler than at 4 ohms.
I kinda contradicted myself there. The greater the impedance, the higher the voltage and the lesser the current. I just don't know how this plays into compairing the sensitivities of woofers that
have different nominal impedance ratings.
Re: High Sensitivity midranges?
It seems that the Audax woofers are no longer in stock. I'll check for the PHL woofers.
Just fiddling with some algebra, it seems that for a constant input voltage the lower impedance woofers will receive m ore current, thus, more power. Hmmm, thinking about how I can manipulate amp
input sensitivity and level matching with this in mind is........fun. Kinda makes me think that rated sensitivity means little.
Re: High Sensitivity midranges?
Errrrr, are those PHL woofers going to have the displacement needed to keep up woth the Kodas and ID horns?
Re: High Sensitivity midranges?
Errrrr, are those PHL woofers going to have the displacement needed to keep up woth the Kodas and ID horns?
Midrange doesn't require large displacement to get loud like midbass and subwoofers. Even if you took a large excursion driver, like an Extremis, and played it as a pure midrange....it would use
barely any of it's excursion and stay within the +/- ~.5mm range. For horns, large cone area and high sensitivity alone are what to look for in a good pure midrange.
The Audax, for example, only have .5mm Xmax. They just won't get low in frequency....the Audax can't go much below 175hz on a steep slope. But they'll play the midrange frequencies with more than
ample output. On the other hand something like the Extremis (for example) can play lower in frequency due to it's large excursion.
Re: High Sensitivity midranges?
It seems that the Audax woofers are no longer in stock. I'll check for the PHL woofers.
Just fiddling with some algebra, it seems that for a constant input voltage the lower impedance woofers will receive m ore current, thus, more power. Hmmm, thinking about how I can manipulate amp
input sensitivity and level matching with this in mind is........fun. Kinda makes me think that rated sensitivity means little.
Remember, that doesn't take into account things like power compression and such.
Yes, in theory if you take two drivers of the same sensitivity and supply one twice the power, it will have 3db more output. Heck, take a pair of the exact same drivers and send one twice as much
power and you should theoretically see a 3db gain. What you aren't testing is the validity of sensitivity. What you are testing is the affect of power on acoustical output. Does that mean
senstivity is meaningless? No, it means power applied will change the output level.....which is a pretty obvious conclusion.
And things aren't perfect in the real world like that. Except for when possibly dealing with very low wattage (jumping from say 1w to 2w), you'll almost never see an actual 3db increase in output
when doubling power. Just because the 4ohm driver is receiving more power it doesn't mean 1) the 4ohm driver can handle that extra power, and 2) that the 4ohm driver will absolutely have more
I guess the point I'm getting at......if you have ample power @ 8ohm, than that's all that matters. If you have an 8ohm driver and don't have enough power, pick up a more powerful amplifier.
Also be sure you are comparing apples to apples. Some 4ohm drivers still have sensitivities rated at 2.83V rather than 1w, which means you'd need to subtract 3db from the rated sensitivity.
Re: High Sensitivity midranges?
Alright then, The midrange woofers will probably be low passed around 300-400hz. I guess I'm looking for a 6.5inch woofer that has the "pressence" or "in your face" sound that some CD1pro horns
and Koda 8s are going to provide.
Re: High Sensitivity midranges?
You mean highpassed around 300-400hz?
Anyways....both the Audax and the PHL will give you that
Re: High Sensitivity midranges?
Yes, highpass. THanks for the recomendations.
12-25-2005 #2
12-25-2005 #3
12-25-2005 #4
12-25-2005 #5
12-25-2005 #6
12-25-2005 #7
12-25-2005 #8
12-25-2005 #9
12-25-2005 #10
12-25-2005 #11
12-25-2005 #12
12-25-2005 #13
12-25-2005 #14
12-25-2005 #15 | {"url":"http://www.caraudio.com/forums/speakers/132746-high-sensitivity-midranges.html","timestamp":"2014-04-19T21:36:08Z","content_type":null,"content_length":"99246","record_id":"<urn:uuid:f08b66f5-b889-4010-a515-0b7b69c6a62f>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00552-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Numpy-discussion] Concatenating string arrays
Thomas Robitaille thomas.robitaille@gmail....
Wed Mar 18 13:30:19 CDT 2009
I am trying to find an efficient way to concatenate the elements of
two same-length numpy str arrays. For example if I define the
following arrays:
import numpy as np
arr1 = np.array(['a','b','c'])
arr2 = np.array(['d','e','f'])
I would like to produce a third array that would contain
['ad','be','cf']. Is there an efficient way to do this? I could do
this element by element, but I need a faster method, as I need to do
this on arrays with several million elements.
Thanks for any help,
More information about the Numpy-discussion mailing list | {"url":"http://mail.scipy.org/pipermail/numpy-discussion/2009-March/041233.html","timestamp":"2014-04-17T18:46:56Z","content_type":null,"content_length":"2980","record_id":"<urn:uuid:9eb6ebf9-07e7-4f00-82f9-0c1319371094>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00042-ip-10-147-4-33.ec2.internal.warc.gz"} |
the first resource for mathematics
Preconditioned all-at-once methods for large, sparse parameter estimation problems.
(English) Zbl 0995.65110
The inverse problem about recovering a parameter function by measurements of solutions of the system partial differential equations is considered. A typical formulation of this inverse problem
consists minimization of the sum of a data fitting error term and a regularization term, subject to the forward problem being satisfied. The problem is typically ill-posed without regularization and
it is ill-conditioned with it, since the regularization term is aimed at removing noise without overshadowing the data.
Let the forward problem be a linear elliptic differential equation $A\left(m\right)u=q$ where $A$ refers a differential operator depending on a parameter vector function $m$, defined on an
appropriate domain and equipped with suitable boundary conditions. The discrimination of this problem is studied and for regularization the Tikhonov method with introducing the Lagrangian approach is
applied. Finally the problem is numerically solved by the Gauss-Newton method, and a preconditioned conjugate gradient algorithm is applied at each iteration for the resulting reduced Hessian system.
Alternatively, a preconditioned Krylov method is applied to arising system.
The considered problem the arises in many applications. The results are illustrated by different computational examples.
65N21 Inverse problems (BVP of PDE, numerical methods)
35R30 Inverse problems for PDE
65N06 Finite difference methods (BVP of PDE)
65F10 Iterative methods for linear systems
65F35 Matrix norms, conditioning, scaling (numerical linear algebra) | {"url":"http://zbmath.org/?q=an:0995.65110","timestamp":"2014-04-19T07:04:40Z","content_type":null,"content_length":"23016","record_id":"<urn:uuid:11a8f339-7799-4df7-bcf5-d739c830c3bf>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00378-ip-10-147-4-33.ec2.internal.warc.gz"} |
Redford Twp, MI
West Bloomfield, MI 48322
Master Certified Coach for Exam Prep, Mathematics, & Physics
...tudents and Parents, It is a pleasure to make your acquaintance, and I am elated that you have found interest in my profile. I am the recipient of a Master of Science:
and Master of Arts: Applied Mathematics with a focus in computation and theory. I am...
Offering 10+ subjects including physics | {"url":"http://www.wyzant.com/Redford_Twp_MI_Physics_tutors.aspx","timestamp":"2014-04-18T19:47:57Z","content_type":null,"content_length":"61493","record_id":"<urn:uuid:972a4bca-b036-4923-8cae-66ad86adec00>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00458-ip-10-147-4-33.ec2.internal.warc.gz"} |
ith Math.NET Numerics
Likely the most requested feature for Math.NET Numerics is support for some form of regression, or fitting data to a curve. I’ll show in this article how you can easily compute regressions manually
using Math.NET, until we support it out of the box. We already have broad interpolation support, but interpolation is about fitting some curve exactly through a given set of data points and therefore
an entirely different problem.
For a regression there are usually much more data points available than curve parameters, so we want to find the parameters that produce the lowest errors on the provided data points, according to
some error metric.
Least Squares Linear Regression
If the curve is linear in its parameters, then we’re speaking of linear regression. The problem becomes much simpler and we can leverage the rich linear algebra toolset to find the best parameters,
especially if we want to minimize the square of the errors (least squares metric).
In the general case such a curve would be in the form of a linear combination of $N$ arbitrary but known functions $f_i(x)$, scaled by the parameters $p_i$. Note that none of the functions $f_i$
depends on any of the $p_i$ parameters.
If we have $M$ data points $(x_j,y_j)$, then we can write the whole problem as an overdefined system of $M$ equations:
Or in matrix notation:
This is a standard least squares problem and can easily be solved using Math.NET Numerics’s linear algebra classes and the QR decomposition. In literature you’ll usually find algorithms explicitly
computing some form of matrix inversion. While symbolically correct, using the QR decomposition instead is numerically more robust. This is a solved problem, after all.
Some $\mathbf{X}$ matrices of this form have well known names, for example the Vandermonde-Matrix for fitting to a polynomial.
Example: Fitting to a Line
A line can be parametrized by the height $a$ at $x=0$ and its slope $b$:
This maps to the general case with $N=2$ parameters as follows:
And therefore the equation system
The complete code when using Math.NET Numerics would look like this:
// data points
var xdata = new double[] { 10, 20, 30 };
var ydata = new double[] { 15, 20, 25 };
// build matrices
var X = DenseMatrix.CreateFromColumns(
new[] {new DenseVector(xdata.Length, 1), new DenseVector(xdata)});
var y = new DenseVector(ydata);
// solve
var p = X.QR().Solve(y);
var a = p[0];
var b = p[1];
Example: Fitting to an arbitrary linear function
The functions $f_i(x)$ do not have to be linear in $x$ at all to work with linear regression, as long as the resulting function $y(x)$ remains linear in the parameters $p_i$. In fact, we can use
arbitrary functions, as long as they are defined at all our data points $x_j$. For example, let’s compute the regression to the following complicated function including the Digamma function $\psi(x)$
, sometimes also known as Psi function:
The resulting equation system in Matrix form:
The complete code with Math.NET Numerics, but this time with F#:
// define our target functions
let f1 x = Math.Sqrt(Math.Exp(x))
let f2 x = SpecialFunctions.DiGamma(x*x)
// create data samples, with chosen parameters and with gaussian noise added
let fy (noise:IContinuousDistribution) x = 2.5*f1(x) - 4.0*f2(x) + noise.Sample()
let xdata = [ 1.0 .. 1.0 .. 10.0 ]
let ydata = xdata |> List.map (fy (Normal.WithMeanVariance(0.0,2.0)))
// build matrix form
let X =
xdata |> List.map f1 |> vector
xdata |> List.map f2 |> vector
|] |> DenseMatrix.CreateFromColumns
let y = vector ydata
// solve
let p = X.QR().Solve(y)
let (a,b) = (p.[0], p.[1])
Note that we use the Math.NET Numerics F# package here (e.g. for the vector function).
Example: Fitting to a Sine
Just like the digamma function we can also target a sine curve. However, to make it more interesting, we’re also looking for phase shift and frequency parameters:
Unfortunately the function $f_2 : x \mapsto \sin(c + \omega x)$ now depends on parameters $c$ and $\omega$ which is not allowed in linear regression. Indeed, fitting to a fequency $\omega$ in a
linear way is not trivial if possible at all, but for a fixed $\omega$ we can leverage the following trigonometric identity:
and therefore
However, note that because of the non-linear transformation on the $b$ and $c$ parameters, the result will no longer be strictly the least square error solution. While our result would be good enough
for some scenarios, we’d either need to compensate or switch to non-linear regression if we need the actual least square error parameters.
The complete code in C# with Math.NET Numerics would look like this:
// data points: we compute y perfectly but then add strong random noise to it
var rnd = new Random(1);
var omega = 1.0d
var xdata = new double[] { -1, 0, 0.1, 0.2, 0.3, 0.4, 0.65, 1.0, 1.2, 2.1, 4.5, 5.0, 6.0 };
var ydata = xdata
.Select(x => 5 + 2 * Math.Sin(omega*x + 0.2) + 2*(rnd.NextDouble()-0.5)).ToArray();
// build matrices
var X = Matrix.CreateFromColumns(new[] {
new DenseVector(xdata.Length, 1),
new DenseVector(xdata.Select(t => Math.Sin(omega*t)).ToArray()),
new DenseVector(xdata.Select(t => Math.Cos(omega*t)).ToArray())});
var y = new DenseVector(ydata);
// solve
var p = X.QR().Solve(y);
var a = p[0];
var b = SpecialFunctions.Hypotenuse(p[1], p[2]);
var c = Math.Atan2(p[2], p[1]);
The following graph visualizes the resulting regressions. The curve we computed the $y$ values from, before adding the strong noise, is shown in black. The red dots show the actual data points with
only small noise, the blue dots the points with much stronger noise added. The red and blue curves then show the actual computed regressions for each. | {"url":"http://christoph.ruegg.name/blog/linear-regression-mathnet-numerics.html","timestamp":"2014-04-19T14:51:34Z","content_type":null,"content_length":"22837","record_id":"<urn:uuid:d51ebc67-2892-4053-9e50-2be86f131600>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00581-ip-10-147-4-33.ec2.internal.warc.gz"} |
Homework Help
Recent Homework Questions About Physics
Post a New Question | Current Questions
A 5 kg block is placed on top of a 10 kg block as shown above. A horizontal force of 45 N is applied to the 10 kg block, while the 5 kg block is tied to the wall. The coefficient of kinetic friction
between the moving surfaces is 0.20. Determine (a) the tension in the string ...
Sunday, February 23, 2014 at 2:39am
A 5.1 kg block is pulled along a frictionless floor by a cord that exerts a force P = 12 N at an angle θ = 25° above the horizontal, as shown below. (a) What is the acceleration of the block? (b) The
force P is slowly increased. What is the value of P just before...
Sunday, February 23, 2014 at 2:39am
A bullet of mass kg moving at 500 m⁄s embeds itself in a large, fixed piece of wood and travels 6 cm before coming to rest. Assuming that the deceleration of the bullet is constant, find the force
exerted by the wood on the bullet.
Sunday, February 23, 2014 at 2:38am
Y^2 = Yo^2 + 2g*h = o Yo^2 - 19.6*(150-50) = 0 Yo^2 = 1960 Yo = 44.27 m/s. = Ver. component of the initial velocity. Vo = Yo/sin A = 44.27/sin30 = 88.54 m/s. = Initial velocity. Xo = Vo*cos A =
88.54*cos30 = 76.68 m/s. = Hor. component of initial velocity. Y = Yo + g*Tr = 0 @ ...
Saturday, February 22, 2014 at 9:30pm
A light string can support a stationary hanging load of 26.4 kg before breaking. An object of mass m = 3.19 kg attached to the string rotates on a frictionless, horizontal table in a circle of radius
r = 0.795 m, and the other end of the string is held fixed as in the figure ...
Saturday, February 22, 2014 at 9:28pm
The object in the figure has a mass of 3.45 kg and is pulled up a slope AB, which is 36 m long; the height BC is 3.00 m. There is no friction and the acceleration is constant. The speed v1 at A is
3.5 m/s whereas the speed v2 at B is 5.5 m/s. The average power developed by the...
Saturday, February 22, 2014 at 8:48pm
The procedure is the same as your 8:56 PM. post.
Saturday, February 22, 2014 at 8:34pm
you are correct, a,b,c are true
Saturday, February 22, 2014 at 8:34pm
Vo = 150m/s[85o] 1. Xo = 150*cos85 = 13.07 m/s. 2. Yo = 150*sin85 = 149.4 m/s. 3. X = Xo 4. Y = 0 5. a = g = 9.8 m/s^2. 6. h = ho + (Y^2-Yo^2)/2g h = 20 + (0-149.4^2)/-19.6 = 1159 m. Above the plain.
7. Y = Yo + g*t = 0 @ peak. 149.4 -9.8t = 0 9.8t = 149.4 Tr = 15.2 s. = Rise ...
Saturday, February 22, 2014 at 8:24pm
vf=vi+at In your question, a=(27.8-0)/3.40, vi=0
Saturday, February 22, 2014 at 7:04pm
Ok, figure the coefficent of friction. mu=tan41.4 you need to prove that. Then Initial KE=change in PE + workdoneonfriction let x be the distance up the plane. 1/2 m v^2=mg x*sinTheta +
x*mu*mgCosTheta do the algebra, solve for x.
Saturday, February 22, 2014 at 7:03pm
A block with mass m = 24.0 kg slides down an inclined plane of slope angle 41.4 ° with a constant velocity. It is then projected up the same plane with an initial speed 3.80 m/s. How far up the
incline will the block move before coming to rest?
Saturday, February 22, 2014 at 6:57pm
The Lamborghini Murcielago can accelerate from 0 to 27.8 m/s (100 km/hr or 62.2 mi/hr) in a time of 3.40 seconds. How fast will the car be traveling (m/s) after 5.1 seconds.
Saturday, February 22, 2014 at 6:41pm
a = (110-77)/16 = 33/16 m/s^2 displacement during that time is s = 77*16 + 33/32 * 16^2 = 1496 m
Saturday, February 22, 2014 at 6:20pm
Determine the displacement of a plane that experiences uniform acceleration from 77 m/s to 110 m/s in 16 seconds.
Saturday, February 22, 2014 at 6:06pm
Which statements about traveling waves are TRUE? Choose all that apply. a) A traveling wave transfers energy from one point to another. b) A speaker that beams its sound forward into a small area
produces a louder sound. c) The rate at which a wave transfers energy is defined ...
Saturday, February 22, 2014 at 5:11pm
The Lamborghini Murcielago can accelerate from 0 to 27.8 m/s (100 km/hr or 62.2 mi/hr) in a time of 3.40 seconds. How fast will the car be traveling (m/s) after 5.1 seconds.
Saturday, February 22, 2014 at 4:56pm
Saturday, February 22, 2014 at 4:56pm
You are general in the Napoleonic wars. You are on top of a plain 12m talk and you have a cannon tht can shoot a cannonball out at 350 m/s. 800 m away is a thick forest that you cannot see into but
one of your scouts in a hot air balloon has just signaled to you that an enemy ...
Saturday, February 22, 2014 at 4:29pm
You are a general in the Napoleonic wars. An enemy fort 10m tall is located 1000m from your cannon. Your cannon expert is insisting that you will get the best results at an angle of 25 above the
horizontal. Your cannon is located on a hill 10m tall. You have soldiers storming ...
Saturday, February 22, 2014 at 4:29pm
You are a general in the Napoleonic war. The battle has taken you to a trench 7m deep. You have a mortar that will fire an explosive at a speed of 200 m/s at an angle of 80. You have a low flying
reconnaissance airplane scouting out the enemy troops at a height of 500 m. Your ...
Saturday, February 22, 2014 at 4:28pm
Suppose you are a general in the Napoleonic wars. You are no top of a plain 50 m high overlooking the enemy soldiers. You now have a brand new cannon that will decimate the enemy. Unfortunately in
the christening process you used some rather cheap grade champagne and all the ...
Saturday, February 22, 2014 at 4:28pm
Tf = 5.7/2 = 2.85 s. = Fall time. h = Yo*t + 0.5g*Tf^2 h = 0 + 4.9*2.85^2 = 39.8 m.
Saturday, February 22, 2014 at 4:24pm
A car with a velocity of 28.2 m/s is accelerated uniformly at the rate of 2.2 m/s2 for 9 seconds. What is the car's final velocity?
Saturday, February 22, 2014 at 4:15pm
Saturday, February 22, 2014 at 4:09pm
Find the uniform acceleration that causes's a car's velocity to change from 39.7 m/s to 74.0 m/s in an 5 second period of time.
Saturday, February 22, 2014 at 3:50pm
A missile is launched at an angle of 25 degrees to the ground. It hits a target at 301.5 meters from the point of launch. find the initial velocity
Saturday, February 22, 2014 at 3:49pm
Saturday, February 22, 2014 at 3:22pm
How to calculate the force required to move a 6.12Kg weight up a 10degree friction rig? I have done experiments on the friction rig, adding weight until the sliding weight moved, but the calculated
results do not match up. Can anyone help?
Saturday, February 22, 2014 at 1:43pm
A block weighing 71.5 N rests on a plane inclined at 24.1° to the horizontal. The coefficient of the static and kinetic frictions are 0.26 and 0.13 respectively. What is the minimum magnitude of the
force F, parallel to the plane, that will prevent the block from slipping...
Saturday, February 22, 2014 at 1:13pm
You are driving your car over a circular-shaped bump in the road that has a radius of curvature of 75.9 m. A)If the car is traveling at a constant speed of 18.3 m/s, calculate the apparent weight of
your 56.1 kg passenger as you pass over the top of the bump. B)What is the ...
Saturday, February 22, 2014 at 1:00pm
T = tension in rope rope horizontal force = T cos 55 = .574 T rope force up = T sin 55 = .819 T weight = m g = 310(9.81) = 3041 N normal force on ground = weight - rope force up = 3041 - .819 T
friction force max = .9(3041-.819 T) F = m a .574 T - .9(3041-.819 T) = 310(6.33) ...
Saturday, February 22, 2014 at 12:02pm
A box of mass 52 kg starts from rest to slide down a ramp that makes an angle of 16.6 degrees with respect to the horizontal. 4.1 seconds later, it has covered a distance of 3.98 meters. What is the
coefficient of kinetic friction?
Saturday, February 22, 2014 at 11:57am
.7 = a/g = a/9.81 a = 6.87 m/s^2
Saturday, February 22, 2014 at 11:56am
A sled (mass 310 kg) is pulled across a stone floor with a coefficient of kinetic friction of 0.9. The rope that is used to pull it is at an angle alpha of 55 degrees with the horizontal. How hard
(i.e., with what magnitude force) do you need to pull to make the sled speed up ...
Saturday, February 22, 2014 at 11:56am
A professor drives off with his car (mass 850 kg), but forgot to take his coffee mug (mass 0.39 kg) off the roof. The coefficient of static friction between the mug and the roof is 0.7, and the
coefficient of kinetic friction is 0.4. What is the maximum acceleration of the ...
Saturday, February 22, 2014 at 11:49am
Suppose a straight 1.60mm -diameter copper wire could just "float" horizontally in air because of the force due to the Earth's magnetic field B⃗ , which is horizontal, perpendicular to the wire, and
of magnitude 4.2×10−5T . What current would ...
Saturday, February 22, 2014 at 11:26am
A crate of mass 132 kg is loaded onto the back of a flatbed truck. The coefficient of static friction between the box and the truck bed is 0.22. What is the smallest radius of curvature that the
truck can take without the crate slipping, if the speed with which it is going ...
Saturday, February 22, 2014 at 11:20am
You are driving your car over a circular-shaped bump in the road that has a radius of curvature of 75.9 m. A)If the car is traveling at a constant speed of 18.3 m/s, calculate the apparent weight of
your 56.1 kg passenger as you pass over the top of the bump. B)What is the ...
Saturday, February 22, 2014 at 11:17am
Saturday, February 22, 2014 at 11:05am
A string under a tension of 42.6 N is used to whirl a rock in a horizontal circle of radius 2.47 m at a speed of 19.7 m/s. The string is pulled in and the speed of the rock increases. When the string
is 1.13 m long and the speed of the rock is 46.9 m/s, the string breaks. What...
Saturday, February 22, 2014 at 11:05am
A block M1 of mass 16.5 kg sits on top of a larger block M2 of mass 26.5 kg which sits on a flat surface. The kinetic friction coefficient between the upper and lower block is 0.425. The kinetic
friction coefficient between the lower block and the flat surface is 0.125. A ...
Saturday, February 22, 2014 at 11:03am
A man driving his car into 20.0 ft garage with a velocity of 20.0mi/h applies the brakes, producing a constant deceleration. Find the smallest deceleration necessary to avoid striking the back wall
of the garage. And find how many seconds it takes for the car to come to rest
Saturday, February 22, 2014 at 9:27am
calculate the final volume of gas if the original pressure of gas at STP is doubled and its temperature is 3 times.
Saturday, February 22, 2014 at 8:09am
Physics 141
acceleration=v^2/r=900/100=9 m/s^2 which is about 1 g.
Saturday, February 22, 2014 at 5:56am
frequency of oscillation... f= 1/2pi sqrt k/m a, two bungie cords doubles k, so f goes up. b. one single is half the k, so f is lower.
Saturday, February 22, 2014 at 5:55am
since weight varies as 1/distance^2, r^2/(r+d)^2 = .99 r/(r+d) = √.99 r = √.99r + √.99d r(1-√.99) = √.99 d r/d = √.99/(1-√.99) = 198.5
Saturday, February 22, 2014 at 5:45am
Don't you need either angle of the string or mass of plane or velocity to answer this question?
Saturday, February 22, 2014 at 5:14am
You and some friends go bungee jumping, but you are afraid of oscillations which go too fast. Which one of these options would provide the slower oscillations? a) Tying two bungee cords to your feet
such that each bungee cord is attached to your feet and the object which you ...
Saturday, February 22, 2014 at 3:51am
An object is located a distance d above the surface of a large planet of radius r. At this position, its true weight is one percent (1.000 %) less than its true weight on the surface. What is the
ratio of d/r?
Friday, February 21, 2014 at 11:57pm
Physics 141
A car is on a circular track of radius r = 100 m. It's tangential speed is 30 m/s. How many lateral gravities does the driver feel?
Friday, February 21, 2014 at 11:54pm
An electric dipole is formed from two charges, ±q, spaced 1.00cm apart. The dipole is at the origin, oriented along the y-axis. The electric field strength at the point (x,y)=(0cm,10cm) is 320N/C .
What is the charge q? Give your answer in nC. What is the electric field...
Friday, February 21, 2014 at 11:23pm
Friday, February 21, 2014 at 10:40pm
Science - Start of Physics
Please help me with the following problem. Given below is the position-time graph representing motions of two runners, Nick and Ian. Use this graph to determine which runner has greater average
velocity. I can't put the graph here, so the following should give you guys a ...
Friday, February 21, 2014 at 10:37pm
Science - Start of Physics
Can you tell me why you don't understand what I am asking please?
Friday, February 21, 2014 at 10:03pm
Science - Start of Physics
All I am asking is to see if I need to explain my answer or not. I do not know if I should explain or not.
Friday, February 21, 2014 at 9:59pm
Physics 141
I just did this for you.
Friday, February 21, 2014 at 9:56pm
Physics 2
Interesting question. Do you have any thoughts on this?
Friday, February 21, 2014 at 9:56pm
Science - Start of Physics
I have no idea what you are asking. average velocity is the slope of the endpoint on the distance/time graph
Friday, February 21, 2014 at 9:56pm
Physics 141
A satellite is in a circular orbit around an unknown planet. The satellite has a speed of 1.60 104 m/s, and the radius of the orbit is 5.50 106 m. A second satellite also has a circular orbit around
this same planet. The orbit of this second satellite has a radius of 8.50 106 ...
Friday, February 21, 2014 at 9:56pm
forcedown-foreceup=Ma mgSinTheta-mu*mgcosTheta=ma a= g(sinTheta-mu*cosTheta)
Friday, February 21, 2014 at 9:52pm
Physics 141
GMm1/r1^2=m1 v1^2/r1 GM=v1^2 r1 but GMm2/r2^2=m2 v2^2/r2 GM=v2^2 r2 set them equal V2^2=v1^2*r1/r2 v2= v1 sqrt (r1/r2) check my math...
Friday, February 21, 2014 at 9:50pm
Less than
Friday, February 21, 2014 at 9:04pm
Science - Start of Physics
Do you guys think I should explain how I got my answer, or do you guys think I shouldn't explain my answer? I know how to do this type of science, so if you guys say yes or no, that will be fine.
Also, if you guys say yes, I will explain, but if you guys say no, I will ...
Friday, February 21, 2014 at 8:50pm
Physics 141
A satellite is in a circular orbit around an unknown planet. The satellite has a speed of 1.60 104 m/s, and the radius of the orbit is 5.50 106 m. A second satellite also has a circular orbit around
this same planet. The orbit of this second satellite has a radius of 8.50 106 ...
Friday, February 21, 2014 at 8:46pm
Consider a car is heading down a 5.5° slope (one that makes an angle of 5.5° with the horizontal) under the following road conditions. You may assume that the weight of the car is evenly distributed
on all four tires and that the coefficient of static friction is ...
Friday, February 21, 2014 at 8:42pm
A string going over a massless frictionless pulley connects two blocks of masses 6.4 kg and 13 kg. As shown on the picture below, the 6.4 kg block lies on a 32◦ incline; the coefficient of kinetic
friction between the block and the incline is μ= 0.3. The 13 kg block...
Friday, February 21, 2014 at 8:36pm
Science - Start of Physics
I need help with the following: Given below is the position-time graph representing motions of two runners, Nick and Ian. Use this graph to determine which runner has greater average velocity. For
this one, I know the answer is 2m/0s, but I don't know if they want me to ...
Friday, February 21, 2014 at 8:33pm
6.21kg---19.5kg-----> 48.4 N
Friday, February 21, 2014 at 8:30pm
A skier of mass 59.5 kg comes down a slope of constant angle 1◦ with the horizontal. The acceleration of gravity is 9.8 m/s^2 i. What is the force on the skier parallel to the slope? Answer in units
of N. ii. What force normal to the slope is exerted by the skis? Answer ...
Friday, February 21, 2014 at 8:26pm
F = m g F = 2.25 * 9.8 = 22.05 N accelerated mass = m = 5.53 + 2.25 = 7.78kg a = F/m = 22.05/7.78 = 2.83 m/s^2 F on top block is force in cord F = m a = 5.53 * 2.83 = 15.7 N
Friday, February 21, 2014 at 8:13pm
1. Two blocks on a frictionless horizontal surface are connected by a light string. The acceleration of gravity is 9.8 m/s^2. Find the acceleration of the system. Answer in units of m/s^2. 2. What is
the tension in the string between the blocks? Answer in units of N. 3. If ...
Friday, February 21, 2014 at 8:09pm
m g = 33 * 9.8 Newtons 2. MASS DOES NOT CHANGE !!!!!!! 3. 33 * 9.8 / 6
Friday, February 21, 2014 at 8:08pm
A block of mass 5.53 kg lies on a frictionless horizontal surface. The block is connected by a cord passing a. over a pulley to another block of mass 2.25 kg which hangs in the air. Assume the cord
to be light (massless and weightless) and unstretchable and the the pulley to ...
Friday, February 21, 2014 at 8:05pm
a. An object has a mass of 33 kg. The acceleration of gravity is 9.8 m/ s^2. What is its weight on the earth? Answer in units of N. 2. What is its mass on the moon where the force of gravity is 1/6
that of the earth? Answer in units of kg. 3. What is the weight of that object ...
Friday, February 21, 2014 at 8:01pm
A dragster and driver together have mass 900.7 kg. The dragster, starting from rest, attains a speed of 25.7 m/s in 0.54 s. 1. Find the average acceleration of the dragster during this time interval.
Answer in units of m/s^2. 2. What is the size of the average force on the ...
Friday, February 21, 2014 at 7:59pm
initial momentum North 800 * 18 East 800 * 18 final momentum North 1600 v sin T East 1600 v cos T (I am pretending I do not see that V = 18 sqrt 2 and T = 45 degrees) 800 * 18 = 1600 v sin T 800 * 18
= 1600 v cos T so sin T = cos T = 1/sqrt 2 That is T = 45 degrees so 800 * 18...
Friday, February 21, 2014 at 7:55pm
c 11/3 b 8/4 a 4/5 = 8/10 reverse the last two ? d 18/20 = 9/10
Friday, February 21, 2014 at 7:46pm
weight = 1.6 * 9.8 = 15.68 N T in each side of wire do right half, left the same 1/2 weight = 7.84 N half distance = 24.5 theta angle between vertical and wire Tan theta = 24.5/.149 theta = 89.65155
so cos theta = .00608152 T cos Theta = half the weight T = 7.84 N/.00608152 = ...
Friday, February 21, 2014 at 7:39pm
u = horizontal velocity = 5.7 cos 58 = 3.02 m/s forever Vi = initial speed up = 5.7 sin 58 = 4.83 m/s v = Vi - 9.8 t at top, v = o 0 = 4.83 - 9.8 t t = .493 s at top h = 1.3 + Vi t - 4.9 t^2 t at top
is .493 so h at top = 1.3 + 4.83(.493)-4.9(.493)^2 = 2.49 meters above ground...
Friday, February 21, 2014 at 7:24pm
The horizontal component of velocity does not change. Ignoring friction, there is no horizontal force, therefore no acceleration, therefore constant velocity in direction with no force. THIS IS
IMPORTANT ! u = 15.6 cos 56.9
Friday, February 21, 2014 at 7:16pm
A skateboarder shoots off a ramp with a velocity of 5.7 m/s, directed at an angle of 58° above the horizontal. The end of the ramp is 1.3 m above the ground. Let the x axis be parallel to the ground,
the +y direction be vertically upward, and take as the origin the point ...
Friday, February 21, 2014 at 6:59pm
A volleyball is spiked so that it has an initial velocity of 15.6 m/s directed downward at an angle of 56.9 ° below the horizontal. What is the horizontal component of the ball's velocity when the
opposing player fields the ball?
Friday, February 21, 2014 at 6:52pm
a = F/m F pilot = m pilot * same a
Friday, February 21, 2014 at 6:44pm
An airplane has a mass of 3.25 × 104 kg and takes off under the influence of a constant net force of 2.38 × 104 N. What is the net force that acts on the plane's 74.6-kg pilot?
Friday, February 21, 2014 at 6:39pm
The distance between two telephone poles is 49 m. When a 1.6 kg bird lands on the telephone wire midway between the poles, the wire sags 0.149 m. The acceleration of gravity is 9.8 m/s2 . How much
tension in the wire does the bird produce? Ignore the weight of the wire. Answer...
Friday, February 21, 2014 at 6:17pm
From greatest to least, rank the accelerations of the boxes. C. <-5N--|3kg|--16N-> B. <-8N--|4kg|--16N-> A. <-4N--|5kg|--8N--> D. <-2N--|20kg|--20->
Friday, February 21, 2014 at 6:08pm
West Chester University
Friday, February 21, 2014 at 6:07pm
math? physics?
Please read and follow directions.
Friday, February 21, 2014 at 5:56pm
Two automobiles, each of mass 800kg , are moving at the same speed, 18m/s , when they collide and stick together. At what speed does the wreckage move if one car was driving north and one east.
Friday, February 21, 2014 at 5:54pm
F = F1 + F2 = -9.5 + 19.5[10o] F = -9.5 + 19.5*cos10 + (19.5*sin10) F = -9.5 + 19.20 + 3.39i F = 9.7 + 3.39i a = X/m = 9.7/21 = 0.462 m/s^2.
Friday, February 21, 2014 at 4:59pm
An engineer needs to know how far a long beam will sag under a load. The table shows some results: LOAD(N): 1000 2000 3200 4400 5200 6500 SAG(cm): 2.0 4.0 6.6 8.8 10.4 13.4 a) one of the measurements
for a sag is wrong. which? What should the result be? b)What would be the sag...
Friday, February 21, 2014 at 4:39pm
A ball of mass m strikes a wall that is perpendicular to its path at speed +v and rebounds in the opposite direction with a speed v. The impulse imparted to the ball by the wall is 1) 2mv 2)mv 3)-mv
4)zero 5)-2mv
Friday, February 21, 2014 at 3:18pm
A ball of mass m strikes a wall that is perpendicular to its path at speed +v and rebounds in the opposite direction with a speed v. The impulse imparted to the ball by the wall is
Friday, February 21, 2014 at 3:17pm
Physics 2
A beam of light is incident on a plane mirror at an angle of 35¨¬. If the mirror rotates through a small angle ¥è through what angle will the reflected ray rotate
Friday, February 21, 2014 at 2:59pm
Physics 2
An antenna is connected to a car battery. Will the antenna emit electromagnetic radiation? Why or why not? Explain where Amperes and Faradays law is concerned
Friday, February 21, 2014 at 2:58pm
Suppose the magnitude of the "drag force" acting on the falling object of mass m is Dv2. Find the expression for the magnitude of the terminal velocity of this object. (Use any variable or symbol
stated above along with the following as necessary: g. Do not ...
Friday, February 21, 2014 at 2:36pm
You are told that the linear "drag coefficient" due to air resistance for a particular object is 0.64 N · s/m and the object has a mass of 0.0010 kg. Find the magnitude of the terminal velocity of
this object when dropped from rest. (Assume the object's ...
Friday, February 21, 2014 at 2:35pm
Sherlock Holmes examines a clue by holding his magnifying glass (with a focal length of 26.5 cm) 15.1 cm away from an object. Find the image distance. Answer in units of cm
Friday, February 21, 2014 at 2:26pm
Hey, all of these are basically the same question. If you want to pass the course you better learn how to do one of them.
Friday, February 21, 2014 at 1:40pm
Pages: <<Prev | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | 27 | 28 | Next>>
Post a New Question | Current Questions | {"url":"http://www.jiskha.com/science/physics/?page=22","timestamp":"2014-04-16T14:21:38Z","content_type":null,"content_length":"37416","record_id":"<urn:uuid:f7af3c1e-481b-495f-ab57-6bf7a1833c74>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00446-ip-10-147-4-33.ec2.internal.warc.gz"} |
Infinite linear span vs closed linear span
up vote 0 down vote favorite
Suppose we have a (real, separable) Banach space $V$ and a (linear) set $A\subseteq V$. I presume in general it might not be possible to write every element of the closed span of $A$ as an infinite
linear combination $\sum_{i=1}^\infty\beta_i a_i$ of elements of $A$. Are there simple (non-trivial) conditions guaranteeing that the closed linear span of $A$ coincides with its infinite linear span
(perhaps with unconditional/absolute convergence)?
My example of interest is the following: Let $X$ be a compact metric space and $F:X\to X$ a continuous map. My space $V$ is the space $C(X)$ of continuous (real-valued) functions on $X$, and $A$ is
the subset of functions that can be written as $\varphi\circ F - \varphi$ for some $\varphi\in C(X)$.
Thank you for any help.
fa.functional-analysis banach-spaces ds.dynamical-systems cohomology
Just to clarify: we can define linear subspaces $A_u$ and $A_a$ where the former consists of all unconditionally convergent sums of things in $A$, and the latter consists of all absolutely
convergent sums of things in $A$, and you are asking under what "reasonable" conditions $A_u=A$ or $A_a=A$? – Yemon Choi Aug 14 '12 at 8:39
The question was under what reasonable conditions $A_u$ or $A_a$ is the same as the closure of the linear span of $A$. But in case $A$ is a linear subspace itself (which was my primary interest),
the answer is trivial as Wolfgang pointed out. Thanks for your time. – Algernon Aug 14 '12 at 13:57
add comment
1 Answer
active oldest votes
In the case of linear $A$, which seems to be your case of interest, you can simply do it as follows. For $x$ in the closure of $A$ take a sequence $(x_n)$ in $A$ converging to $x$
up vote 4 down such that $\|x_n-x_{n+1}\|<2^{-n}$. Then set $a_i=x_i-x_{i-1}$, $\beta_i=1$ and the sum will converge absolutely, hence also unconditionally to $x$. Or did I miss something?
vote accepted
Oh well... You are right. What was I thinking! I ill-posed my problem, but your answer made things clearer in my mind. Thank you very much. – Algernon Aug 14 '12 at 13:51
add comment
Not the answer you're looking for? Browse other questions tagged fa.functional-analysis banach-spaces ds.dynamical-systems cohomology or ask your own question. | {"url":"http://mathoverflow.net/questions/104672/infinite-linear-span-vs-closed-linear-span","timestamp":"2014-04-19T02:23:11Z","content_type":null,"content_length":"54164","record_id":"<urn:uuid:280c9bfb-bf17-4ef2-a02c-812a7c1aa55f>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00545-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fairfax, CA Algebra Tutor
Find a Fairfax, CA Algebra Tutor
...No homework will be assigned. Struggling students will improve their grades. Students who are currently “making the grade” will be more confident regarding their abilities and will be fully
prepared for “honors” level instruction.
13 Subjects: including algebra 1, algebra 2, calculus, physics
...Although I have been in multiple majors, the one thing that remained constant was my desire to teach. I was a tutor both on a volunteer and paid basis all throughout my college career in an
assorted variety of science and math based subjects. During my graduate study I taught a course in aerobics and conditioning and also privately tutored MCAT preparation.
37 Subjects: including algebra 1, algebra 2, chemistry, physics
...I have been doing it professionally now for over ten years. I love it when my student's understand a new concept. Their eyes light up, they smile at me and say, "but that's easy!" Most
students who come to me have been having trouble for a long time, but managed to get by.
10 Subjects: including algebra 2, algebra 1, calculus, precalculus
...If you want to learn how to master the MCAT without spending thousands of dollars on a review course, look to me. In January of 2013 I took a practice MCAT cold and earned a 25. After 6
months, I sat the MCAT in a testing center and earned a 33.
27 Subjects: including algebra 2, English, algebra 1, reading
...I have over 10 years' experience teaching 6th, 7th, 8th and 9th grade math in middle/high schools. My style is effective because I figure out the real problem quickly. Also, I keep instruction
short, then do many assessments to see if a student is ready to move on.
2 Subjects: including algebra 1, prealgebra
Related Fairfax, CA Tutors
Fairfax, CA Accounting Tutors
Fairfax, CA ACT Tutors
Fairfax, CA Algebra Tutors
Fairfax, CA Algebra 2 Tutors
Fairfax, CA Calculus Tutors
Fairfax, CA Geometry Tutors
Fairfax, CA Math Tutors
Fairfax, CA Prealgebra Tutors
Fairfax, CA Precalculus Tutors
Fairfax, CA SAT Tutors
Fairfax, CA SAT Math Tutors
Fairfax, CA Science Tutors
Fairfax, CA Statistics Tutors
Fairfax, CA Trigonometry Tutors | {"url":"http://www.purplemath.com/Fairfax_CA_Algebra_tutors.php","timestamp":"2014-04-21T15:19:11Z","content_type":null,"content_length":"23752","record_id":"<urn:uuid:9828706f-b0fa-4f8f-bf9f-b3412c969b44>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00409-ip-10-147-4-33.ec2.internal.warc.gz"} |
Melting Pot Math
IRELAND: FAIRIES AND LEPRECHAUNS
Fairies and leprechauns, or little people, play major roles in Irish folk tales. Fairies were said to dwell in mounds of earth and it was commonly believed that touching one of these brought bad
luck. Even today, many Irish farmers leave mounds untouched. Leprechauns, on the other hand, were said to bring good luck. If you catch them they would lead you to a pot of gold, but if you took your
eyes off of them, they would disappear.
One of the most frightening spirits in Irish folk tales is the banshee. If you should hear her wail in the night, it is a sign that someone within your house will soon die.
1. Denny and Darby are 2 brother leprechauns. They are having an argument about whose pot of gold is worth more. Denny has found a pot with 316 coins made of 24 karat gold. Darby has found a pot with
421 coins made of 18 karat gold. Whose pot is worth more (has the most karats)?
2. Eamonn the Leprechaun has a pot of gold coins. Use the clues to figure out how many coins he has.
It is a 4 digit number less than 2000.
The hundreds digit is 3 times the ones digit.
The tens digit is one less than the hundreds digit.
The 4 digits add up to 21.
3. Sweeney and Casey are 2 leprechauns. Together they have hidden 192 coins. Casey has hidden 3 times as many coins as Sweeney. How many have they each hidden?
4. Find the pot of gold. You will need graph paper. Draw the coordinate plane. Connect these ordered pairs in the order given. Then find the coordinates of the center. That is where the pot of gold
is hidden.
(2 , 3) (2 , -7) (-8 , -7) (-8 , 3)
5. Superstitious Farmer O'Leary was told a fairy could be on the strip he is about to dig. He is planning on digging a hole every foot for 60 feet to plant potatoes. The only thing he was told was to
be aware of any number that is a common multiple of 3, 5, and 6. How many times does he have to avoid digging a hole for fear of touching a fairy and bringing bad luck?
Fairies and Leprechauns Solutions
Back to Melting Pot Math | {"url":"http://sln.fi.edu/school/math3/fairies.html","timestamp":"2014-04-18T13:06:49Z","content_type":null,"content_length":"8670","record_id":"<urn:uuid:264013bd-ec1e-4158-b146-7d3e721baf75>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00371-ip-10-147-4-33.ec2.internal.warc.gz"} |
Pivoting Without Aggregation
The PIVOT operator is a useful tool. It lets you aggregate and rotate data so that you can create meaningful tables that are easy to read. However, there are times when you might not want to
aggregate data while pivoting a table.
Related: Pivoting Data and Create Pivoted Tables in 3 Steps
For example, you might want to simply pivot the values in Table 1 so that each team has its members in one row, as Table 2 shows.
But as the following basic syntax shows
(Aggregate function (column1)
FOR column2
IN ( \[val1\], \[val2\], \[val3\] )) AS P
• column1 is the column you want to aggregate
• column2 is the column you want to pivot
• \[val1\], \[val2\], and \[val3\] are the headings for the pivoted columns
• P is the alias for the results of the PIVOT expression
the PIVOT expression requires an aggregate function.
I've developed a solution that lets you pivot data without aggregating it. Listing 1 illustrates this solution using the data in Table 1.
The SELECT statement in callout B is key to this workaround. In this code, I query the tables' Team and Member columns as well as the ROW_NUMBER function. I use the OVER clause with this function so
that I can partition and order the function's result set by teams. This groups the members into their respective teams (CRM and ERP) and, within each team, gives members a number that specifies their
position in that group (i.e., an ordinal number). Table 3 shows the result set produced by this SELECT statement.
Because each ordinal number is associated with only one member in each team, it's now possible to use the MAX aggregate function in the PIVOT operation. (The maximum value of a data set with only one
member will always be that member.) So, in the PIVOT expression in callout C, I use
to aggregate the Member column. I want to pivot the RowNum column, which I do with the code
In the last segment of the PIVOT expression
IN (\[1\], \[2\], \[3\])) AS pvt
I use aliases for the pivoted column headings. The actual column headings are provided in the SELECT statement in callout A. Note that when a value that will end up as column name doesn't follow the
rules for regular identifiers, you must enclose it in brackets (\[ \]). Finally, I assign the PIVOT expression's results to pvt.
As Listing 1 demonstrates, although you can't take away the aggregate function in a PIVOT expression, you can take away the aggregate function's effect. If you'd like to try the code in Listing 1,
you can download it by clicking the 103409.zip link at the top of the page. It works on SQL Server 2005 and later. | {"url":"http://sqlmag.com/print/t-sql/pivoting-without-aggregation","timestamp":"2014-04-18T04:07:18Z","content_type":null,"content_length":"19686","record_id":"<urn:uuid:572fda52-ce88-4cdc-99dc-421ec78360a3>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00354-ip-10-147-4-33.ec2.internal.warc.gz"} |
Douglasville Geometry Tutor
Find a Douglasville Geometry Tutor
...I maintain a blog where I contribute on a weekly basis. I consider myself a grammar snob and have always scored highest on the English part of standardized tests...go figure. Additionally, as
part of my work, I develop course content and training modules for students.
27 Subjects: including geometry, English, reading, ESL/ESOL
...Increasingly students are taking courses online, and I have at this point several years of experience with helping students through the sometimes unique challenges of online courses and test
taking. I think of tutoring and the tutoring experience on many levels - from mastery of the material and...
126 Subjects: including geometry, chemistry, English, calculus
...She is really my one weapon in my arsenal I couldn't do without. I wouldn't have passed calculus without her! I plan on using her for Calculus II and Calculus III as well and am not nearly as
anxiety ridden about it as I was before I met her." - Calculus I Student If the above sounds like somebody you want to learn from, just let her know!
22 Subjects: including geometry, reading, writing, calculus
I am a former Math Teacher / Basketball Coach available for tutoring. I taught precalculus at the college level. I also taught general math, algebra I and geometry in high school.
14 Subjects: including geometry, calculus, algebra 1, algebra 2
...I currently tutor adults seeking their high school diploma and students in grades first through twelth grade. I have taught individuals from Cambodia, Puerto Rico, India and Jamaica. I have
the unique ability due to my social work background to understand my student’s needs and tailor their tutoring to meet their specific needs.
26 Subjects: including geometry, English, reading, algebra 2
Nearby Cities With geometry Tutor
Atlanta geometry Tutors
Austell geometry Tutors
College Park, GA geometry Tutors
Dunwoody, GA geometry Tutors
East Point, GA geometry Tutors
Lithia Springs geometry Tutors
Mableton geometry Tutors
Marietta, GA geometry Tutors
Powder Springs, GA geometry Tutors
Riverdale, GA geometry Tutors
Roswell, GA geometry Tutors
Smyrna, GA geometry Tutors
Union City, GA geometry Tutors
Villa Rica, PR geometry Tutors
Woodstock, GA geometry Tutors | {"url":"http://www.purplemath.com/douglasville_geometry_tutors.php","timestamp":"2014-04-20T16:21:38Z","content_type":null,"content_length":"24038","record_id":"<urn:uuid:62b856e9-a526-4d5f-9d5b-cb728fa7eb20>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00606-ip-10-147-4-33.ec2.internal.warc.gz"} |
8 Data Analysis Techniques
Chapter Four, Appendix One – Analysis Techniques
For those interested, earlier webpages lay out how Dr. Lott put together a well-reasoned research design and compiled a comprehensive database of relevant information. Those details are important for
showing why Dr. Lott’s research has really not been overturned in over two decades. Now we turn toward his core research questions and findings in Chapters Four, Five, and Six. But first, we’ll go
through an overview of the kinds of statistical techniques he used to process the data and answer his research questions.
Dr. Lott’s main research question could be stated, What kind of relationship is there between crime and gun ownership by law-abiding citizens? If we increase the number of in guns, will that mean
more crime, less crime, or does this variable have no influence on crime? To study the kind of relationship and the degree of influence, Dr. Lott uses the statistical technique of regression analysis
to “control for” (i.e., screen out) some factors so he can focus on measure the impact that other specific “variables” have on one another.
What does this term regression analysis mean? The short answer is that this research technique lets you find a “best-fit line” for a set of data you’ve graphed out related to your research question,
and the coefficient (a number which represents the direction and steepness of that line) helps you determine the kind of relationship between the variables involved.
If you’re more a visual thinker, then picture this illustration: Suppose you shoot a bunch of buckshot at a bulls-eye target, and that the target is sitting with its left and lower edges just
touching the lines on a graph. The buckshot scatter pattern is sort of clustered together, but spread out enough that it’s not all in the bulls-eye. If you stand close enough to the graph to see
where the concentration points of the pattern are, that’s an informal type of regression analysis. It’s figuring out where a line goes through the cluster in a way that it is as close as possible to
touching the largest number of individual shot points.
From there, the direction of the slope on the best-fit line tells you the kind of relationship between the variables.
• If one variable increases while the other decreases, it is an inverse relationship and has a positive coefficient. For instance, the title of Dr. Lott’s book illustrates an inverse relationship:
the more guns law-abiding citizens have, the less crime will be committed. The number of crimes goes down when the number of guns goes up.
• If both variables increase or decrease together, it is a direct relationship and has a negative coefficient. For instance, if crime conviction rates decrease, you’d expect prison occupancy rates
will decrease.
The steepness or flatness of the slope tells us how much one variable rises or falls when the other variable is changed.
All of the above is about using a standard form of regression analysis. But in his research, Dr. Lott also applied some even more complicated research analysis techniques. Here the descriptions get
even more dense, but the important thing to remember is that there is no way to pinpoint and measure how various possible factors in the deterrence of crime interact, unless we use empirical research
with a clear research design, applied to the right kinds of data sets, and using appropriate kinds of analysis techniques. This gives us concrete conclusions to consider about the effects of gun
control. Otherwise, all we have to go on are people’s abstract assumptions and potentially very emotional stories. Which are the most useful for figuring out wise legislation?
One other research technique Dr. Lott uses is called two-stage least squares analysis. This is used to separate out the influences of interdependent variables. For example, with gun control research,
Dr. Lott notes that with gun control research, “… crime rates influence whether the nondiscretionary concealed-handgun laws are adopted at the same time as the laws affect crime rates. Similar issues
arise with arrest rates. Not only are crime rates influenced by arrest rates, but since an arrest rate is the number of arrests divided by the number of crimes, the reverse also holds true.” (Page
If you’ve heard the term standard deviation, that is where this element comes in. Standard deviations are a way of “normalizing” the variables so you can do the equivalent of comparing apples and
oranges. You can’t compare them directly because they are different items. But you could calculate the “average” apple and the “average” orange in your fruit orchards, for instance. (That average is
called the mean.) Then you could compare the typical percentage of weight change to apples and to oranges when you apply fertilizers to the orchards.
Put another way, standard deviations are a way of measuring typical changes. For instance, an evenly distributed (symmetrical) bell curve has the mean (center point/average) in the very middle, and
68% of all the data are within one standard deviation unit from that center line, and 95% of all the data are within two standard deviations from the center line.
In another technique called multiple regression analysis, other variables that are not under examination (what are called the exogenous or outside variables) help explain how and why the variables
that are under examination (i.e., the endogenous or inside variables) work.
Those are some of the core concepts that Dr. Lott uses throughout his studies. Appendix One of his book explains and explores other kinds of more complicated regression techniques. And while these
techniques are technical, the key thing to see is that he runs different types of analyses so he can explore the dynamics of linkages between private ownership of guns and their effects on crime. The
goal is to get research findings that have high statistical significance. Significance measures the level of certainty about the impact a variable has. The higher the significance, the better the
research conclusions we have, and the more objective we can be about thinking through the connections between gun-control laws and crime. | {"url":"http://patriotpawnandgun.com/wp/ifr3-research/8-research-analysis-techniques/","timestamp":"2014-04-21T13:37:51Z","content_type":null,"content_length":"37886","record_id":"<urn:uuid:3a9cf0b0-bd2a-4a26-b39a-6623b06ab0b4>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00074-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Pleasee help x.x 1. Solve the system by elimination- -2x+2y+3z=0 -2x-y+z=-3 2x+3y+3z=5 2.Solve using substitution- x-y-z=-8 -4x+4y+5z=7 2x+2z=4 Thanks for any help :o
• one year ago
• one year ago
Best Response
You've already chosen the best response.
Heya lol.. So any help?
Best Response
You've already chosen the best response.
Agh that doesn't help me at all x.x
Best Response
You've already chosen the best response.
I see that :o
Best Response
You've already chosen the best response.
O.o I most certainly cannot x.x
Best Response
You've already chosen the best response.
I'm dumb D:
Best Response
You've already chosen the best response.
Megan, your homework makes me sleepy
Best Response
You've already chosen the best response.
haha oh Hi hero x.x
Best Response
You've already chosen the best response.
I just need to be able to show all the correct steps with these 2 x.x they are worth a lot of points Dx
Best Response
You've already chosen the best response.
I'm willing to help you with the constraint and profit problems, but for the system of three equations, I'd rather use matrices.
Best Response
You've already chosen the best response.
I'm supposed to be learning how to do that too x.x
Best Response
You've already chosen the best response.
I'm not a teacher. I will post the full solutions to the problems, which will explain the steps that way. However, if you have any questions about anything, I can try to answer. What I refuse to
do is "guide" student to an answer since, in a way, it wastes time.
Best Response
You've already chosen the best response.
-8x-8y=16 6x-9y=-108 With this problem I'm supposed to solve using matrices :P
Best Response
You've already chosen the best response.
haha I guess so
Best Response
You've already chosen the best response.
Well, before you do anything, reduce both first.
Best Response
You've already chosen the best response.
Divide the first equation by 8 and the second by 3
Best Response
You've already chosen the best response.
oh oops the first one =-16 my bad there.
Best Response
You've already chosen the best response.
I'm just going to to respond to one of the previous questions you posted.
Best Response
You've already chosen the best response.
er okay :o
Best Response
You've already chosen the best response.
With Question 1. A system of elimination requires to to add or subtract two equations together to eliminate a variable. Since you have three equations, you will want to bring that down to two
equations with two variables. If you minus the second equation from the first equation: -2x+2y+3z=0 (-) -2x-y+z=-3 (=) 3y+2z=3 Now we want to create a second equation that only has the two
variables y and z. We also want to create a second equation that is different from the first one. To do this we need to involve the 3rd equation. In this case we will add the first and the last
equations together. -2x+2y+3z=0 (+) 2x+3y+3z=5 (=) 5y+6z=5 Now we have two equations to solve together by adding or subtracting. But in this case we will have to alter equation 2 (multiply all of
it by 3) so that we can subtract one from 2. 3y+2z=3 (x3) 9y+6z=9 Now we can subtract. 5y+6z=5 (-) 9y+6z=9 (=) -4y=-4 y=1 Now that we have a value of y we can substitute that back into our
equations to get other values. 3(1)+2z=3 3+2z=3 z=0 Now sub z=0 and y=1 back into equation 1. -2x+2(1)+3(0)=0 -2x+2=0 x=1
Best Response
You've already chosen the best response.
Thank you so much Henry (: Your awesome
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/508c7a2ae4b04456f6c44b7c","timestamp":"2014-04-18T08:04:37Z","content_type":null,"content_length":"74473","record_id":"<urn:uuid:1b9c5554-d10c-464e-a056-f4f76c41c037>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00427-ip-10-147-4-33.ec2.internal.warc.gz"} |
[SciPy-user] stopping criterion for integrate.odeint?
Ryan Gutenkunst rng7 at cornell.edu
Wed Mar 23 18:22:09 CST 2005
Hans Fangohr wrote:
> Dear all,
> we have been using odeint in a number of situations and it works
> great. Now we'd like to interrupt the integration of the set of ODEs
> once a conditions is fulfilled that depends on the solution that is
> being computed.
> <snip>
> Here is the complication: Assume we want to interrupt the integration
> if r<=0, e.g. when the object hits the ground. Is there an elegant way
> of doing this?
> <snip>
> Many thanks,
> Hans
Hi Hans,
In our work with biochemical networks we've encountered a very similar
requirement. (We, however, want to stop the integration, change some
parameters, than start again from where we left off.) Our solution isn't
particularly pretty, but it does give the time and variable values when
the condition is fulfilled exactly.
Consider the integration of dy/dt where y can be a vector. We want to
know exactly when the condition c(y) = 0 is satisfied. We integrate
dy/dt from 0 to some time T, requesting a reasonably fine grid of times.
Then we go back and calculate c(y, t) for each time point in our
trajectory. If two adjacent time points (t1 and t2) have different signs
for c(y, t), we know our condition was fulfilled sometime in between
those times.
Now we can integrate backward from t2 to find the exact time the
condition was fulfilled. To do so, we make a change of variables from t
to c. Namely, we integrate the equations defined by:
[dy/dc = dy/dt / dc/dt, dt/dc = 1 / dc/dt]
from c = c(t2) to c = 0. (Note that this is one more equation that we
were integrating before.) The initial conditions are [y(t2), 0]. This
integration terminates with the values [y(tc), t2 - tc] where tc is the
time when c crossed zero. We can then insert these values into our
trajectory, and terminate the integration process if we want.
So there are some obvious drawbacks to this:
1) The condition needs to be expressed in a continuous way, and we
need to know dc/dt analytically. This usually isn't so bad since it's
probably simply related to dy/dt. (For example, we could write your r <=
0 as c = r - 0 then dc/dt = dr/dt which we already know how to calculate.)
2) We have to integrate all the way out to T first, before we can
check whether a condition fired. So if N conditions fire, we're doing
about N times as much integration work.
3) If the sign of c(y) changes but then changes back before the next
time point we sampled in the initial integration, we'll miss that firing
of the condition.
Nevertheless, it's pretty easy to write in pure Python, and it works
without access to the integrator's internal approximation of the
function. Doing this exactly and efficiently would require low-level
access to the guts of the integrator. There do exist integrators that
can handle this sort of condition-checking internally, but as far as I
can tell the one odeint wraps isn't among them.
For now, it's not worth it to me to wrap up another package, especially
since that would be yet another thing for our users to have to install.
If there's interest in moving scipy over to a more sophisticated
integrator, I'd be happy to help, but I'm still somewhat of a newbie and
have no clue wrt FORTRAN, so I couldn't do it alone. I also expect that
it might break a lot of existing code if we aren't careful with outputs.
Anyways, I hope my explanation of our solution is reasonably clear. If
folks are interested, I'll clean-up, comment, and post our code. And, of
course, if anyone has a better solution, I'm dying to hear it. :-)
Ryan Gutenkunst |
Cornell Dept. of Physics | "It is not the mountain
| we conquer but ourselves."
Clark 535 / (607)255-6068 | -- Sir Edmund Hillary
AIM: JepettoRNG |
More information about the SciPy-user mailing list | {"url":"http://mail.scipy.org/pipermail/scipy-user/2005-March/004248.html","timestamp":"2014-04-19T07:04:55Z","content_type":null,"content_length":"6742","record_id":"<urn:uuid:5fa69053-5ef8-4735-91a5-036d355060af>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00217-ip-10-147-4-33.ec2.internal.warc.gz"} |
Valid combinations of a 3 letter alphabet
March 13th 2010, 09:47 AM
Valid combinations of a 3 letter alphabet
Hi there,
I was hoping one could tell me how to calculate the following problem in the generic form.
3 possible values A,B,C.
where the following constraint states that, "A" must always exist in a combination.
N is the number of columns in a grid that one of the values will be placed.
For example:
if N = 2, then the following are valid combinations:
A A
A B
B A
A C
C A
Invalid combinations are
as these do not have at least one A.
How would you calculate this for N = 3 or N = 4 etc ?
In another example, I can see how the binary alphabet of A,B for N columns can work without the restriction on A.
That is, it is 2^n where n is the number of columns or input number. For example, a 4-input truth table would be 2^4 and 10-input truth table would be 2^10.
However the above scenario has me stumped.
March 13th 2010, 10:17 AM
I was hoping one could tell me how to calculate the following problem in the generic form.
3 possible values A,B,C.
where the following constraint states that, "A" must always exist in a combination.
N is the number of columns in a grid that one of the values will be placed.
There is a simple answer to above quoted question.
That is for three values and N columns: $3^N-2^N$.
To see why it works there are $3^N$ total strings and $2^N$ do not contain an A.
Remove those.
March 13th 2010, 11:13 AM
Thats great.
Thanks for getting back to me. I was drawing tables out on paper all day trying to reverse engineer an equation. I'm rusty as hell at math.
I figured out that for a two letter alphabet of say A,B and if we don't care about the constraint on A then there are $2^N$ combinations.
But I just couldn't piece the 3 letter alphabet together.
March 13th 2010, 12:09 PM
Hi Plato,
Is it possible to modify that equation to remove duplicates?
Is it possible to define the following where there is an alphabet of 3 strings A,B,C and N columns, where the following holds:
There exists at least one A,
there must at least exist one B or C
and the remaining columns of N can be either an A or B or C.
and any duplicates are excluded. Note the ordering of of 3-tuple sequence is important. That is ABA is not equal to AAB.
From what I could see all the possible combinations of 3 strings ABC, for 3 columns is as follows:
A,A,A Invalid as there is no B or a C
B,B,B Invalid as there must be an A
A,A,A Invalid as there is no B or a C
C,C,C Invalid as there must be an A
A,B,B Duplicate from above
A,C,C Duplicate from above
With a result of 14 possible combinations (I hope I wrote them all out!)
many thanks in advance,
March 13th 2010, 01:54 PM
There are eighteen such strings.
$\begin{array}{*{20}c}<br /> A & A & B \\<br /> A & A & C \\<br /> A & B & A \\<br /> A & B & B \\<br /> A & B & C \\\end{array}$
$\begin{array}{*{20}c}<br /> A & C & A \\<br /> A & C & B \\<br /> A & C & C \\<br /> B & A & A \\<br /> B & A & B \\<br /> B & A & C \\<br /> B & B & A \\<br /> B & C & A \\\end{array}$
$\begin{array}{*{20}c} <br /> C & A & A \\<br /> C & A & B \\<br /> C & A & C \\<br /> C & B & A \\<br /> C & C & A \\ \end{array}$
What do you call a duplicate?
March 13th 2010, 02:11 PM
Hi Plato,
Apologies for having bothered you with this.
I see no duplicates in your list.
Can a generic equation be based on this?
Do you mind if I ask how did you go about constructing the table?
What I did was construct 3 tables, the first table looked at all possible combinations of A and B where there must be 1 A. Table two was a copy but replaced B with C. Table three kept all column1
set to A, and then I done a standard logic truth table for B and C (2^2). My approach of course is incorrect.
March 13th 2010, 02:37 PM
Construct a table of twenty-seven rows and three columns.
In the first column there is a block of nine A’s, followed by a block of nine B’s, followed by a block of nine C’s.
In the second column there is a block of three A’s, followed by a block of three B’s, followed by a block of three C’s. Repeat the two more times,
In the last column make a block of ‘ABC’ repeated nine times.
March 13th 2010, 03:06 PM
I did that out on paper there and worked a threat.
Is there a handy formula that describes how 18 valid combinations can be reached?
Would it be $3^n - 2^n - 1$
where $3^n$ is the total strings and $2^n$ do not contain an A and $-1$ is to remove the combination where all 3 A's exist, that is, there is not also at least one B or C.
How do you decide on how many of the same letter goes into the first column. Is there a common pattern or strategy to adopt? What if I have 4 or 5 columns (with 3 letters)?
I see from listing two columns (N=2) there are 3 A's followed by 3 B's followed by 3 C's.
March 13th 2010, 04:34 PM
I think I understand how your deciding how many of the same letter you place down the first column. i was looking at the case for truth tables $2^n$ and I see that if n = 4 calculate $2^3$, if
its n is 3 then $2^2$ will be how many T's and F's in the first column and then you 1/2 that number as you move across the columns. | {"url":"http://mathhelpforum.com/discrete-math/133617-valid-combinations-3-letter-alphabet-print.html","timestamp":"2014-04-21T06:18:45Z","content_type":null,"content_length":"15653","record_id":"<urn:uuid:e4d78445-461a-438b-a168-df9fadd6637a>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00535-ip-10-147-4-33.ec2.internal.warc.gz"} |
Bean plots in SPSS
It seems like I have come across alot of posts recently about visualizing univariate distributions. Besides my own recent blog post about comparing distributions of unequal size in SPSS, here are a
few other blog posts I have recently come across;
Such a variety of references is not surprising though. Examining univariate distributions is a regular task for data analysis and can tell you alot about the nature of data (including potential
errors in the data). Here are some posts on the Cross Validated Q/A site of related interest I have compiled;
In particular the recent post on bean plots and Luca Fenu’s post motivated my playing around with SPSS to produce the bean plots here. Note Jon Peck has published a graphboard template to generate
violin plots for SPSS, but here I will show how to generate them in the usual GGRAPH commands. It is actually pretty easy, and here I extend the violin plots to include the beans suggested in bean
A brief bit about the motivation for bean plots. Besides consulting the article by Peter Kampstra, one is interested in viewing a univariate continuous distribution among a set of different
categories. To do this one uses a smoothed kernel density estimate of the distribution for each of the subgroups. When viewing the smoothed distribution though one loses the ability to identify
patterns in the individual data points. Patterns can mean many things, such as outliers, or patterns such as striation within the main body of observations. The bean plot article gives an example
where striation in measurements at specific inches can be seen. Another example might be examining the time of reported crime incidents (they will have bunches at the beginning of the hour, as well
as 15, 30, & 45 minute marks).
Below I will go through a brief series of examples demonstrating how to make bean plots in SPSS.
SPSS code to make bean plots
First I will make some fake data for us to work with.
set seed = 10.
input program.
loop #i = 1 to 1000.
compute V1 = RV.NORM(0,1).
compute groups = TRUNC(RV.UNIFORM(0,5)).
end case.
end loop.
end file.
end input program.
dataset name sim.
value labels groups
0 'cat 0'
1 'cat 1'
2 'cat 2'
3 'cat 3'
4 'cat 4'.
Next, I will show some code to make the two plots below. These are typical kernel density estimates of the V1 variable I made for the entire distribution, and these are to show the elements of the
base bean plots. Note the use of the TRANS statement in the GPL to make a constant value to plot the rug of the distribution. Also note although such rugs are typically shown as bars, you could
pretty much always use point markers as well in any situation where you use bars. Below the image is the GGRAPH code used to produce them.
*Regular density estimate with rug plot.
/GRAPHDATASET NAME="graphdataset" VARIABLES=V1 MISSING=LISTWISE REPORTMISSING=NO
/GRAPHSPEC SOURCE=INLINE.
SOURCE: s=userSource(id("graphdataset"))
DATA: V1=col(source(s), name("V1"))
TRANS: rug = eval(-26)
GUIDE: axis(dim(1), label("V1"))
GUIDE: axis(dim(2), label("Density"))
SCALE: linear(dim(2), min(-30))
ELEMENT: interval(position(V1*rug), transparency.exterior(transparency."0.8"))
ELEMENT: line(position(density.kernel.epanechnikov(V1*1)))
END GPL.
*Density estimate with points instead of bars for rug.
/GRAPHDATASET NAME="graphdataset" VARIABLES=V1 MISSING=LISTWISE REPORTMISSING=NO
/GRAPHSPEC SOURCE=INLINE.
SOURCE: s=userSource(id("graphdataset"))
DATA: V1=col(source(s), name("V1"))
TRANS: rug = eval(-15)
GUIDE: axis(dim(1), label("V1"))
GUIDE: axis(dim(2), label("Density"))
SCALE: linear(dim(2), min(-30))
ELEMENT: point(position(V1*rug), transparency.exterior(transparency."0.8"))
ELEMENT: line(position(density.kernel.epanechnikov(V1*1)))
END GPL.
Now bean plots are just the above plots rotatated 90 degrees, adding a reflection of the distribution (so the area of the density is represented in two dimensions), and then further paneled by
another categorical variable. To do the reflection, one has to create a fake variable equal to the first variable used for the density estimate. But after that, it is just knowing alittle GGRAPH
magic to make the plots.
compute V2 = V1.
/make V from V1 V2
/index panel_dum.
/GRAPHDATASET NAME="graphdataset" VARIABLES=V panel_dum groups MISSING=LISTWISE REPORTMISSING=NO
/GRAPHSPEC SOURCE=INLINE.
SOURCE: s=userSource(id("graphdataset"))
COORD: transpose(mirror(rect(dim(1,2))))
DATA: V=col(source(s), name("V"))
DATA: panel_dum=col(source(s), name("panel_dum"), unit.category())
DATA: groups=col(source(s), name("groups"), unit.category())
TRANS: zero = eval(10)
GUIDE: axis(dim(1), label("V1"))
GUIDE: axis(dim(2), null())
GUIDE: axis(dim(3), null())
SCALE: linear(dim(2), min(0))
ELEMENT: area(position(density.kernel.epanechnikov(V*1*panel_dum*1*groups)), transparency.exterior(transparency."1.0"), transparency.interior(transparency."0.4"),
color.interior(color.grey), color.exterior(color.grey)))
ELEMENT: interval(position(V*zero*panel_dum*1*groups), transparency.exterior(transparency."0.8"))
END GPL.
Note I did not label the density estimate anymore. I could have, but I would have had to essentially divide the density estimate by two, since I am showing it twice (which is possible, and if you
wanted to show it you would omit the GUIDE: axis(dim(2), null()) command). But even without the axis they are still reasonable for relative comparisons. Also note the COORD statement for how I get
the panels to mirror each other (the transpose statement just switches the X and Y axis in the charts).
I just post hoc edited the chart to get it to look nice (in particular settign the spacing between the panel_dum panel to zero and making the panel outlines transparent), but most of those things can
likley be more steamlined by making an appropriate chart template. Two things I do not like, which I may need to edit the chart template to be able to accomplish anyway; 1) There is an artifact of a
white line running down the density estimates, (it is hard to see with the rug, but closer inspection will show it), 2) I would prefer to have a box around all of the estimates and categories, but to
prevent a streak running down the middle of the density estimates one needs to draw the panel boxes without borders. To see if I can accomplish these things will take further investigation.
This framework is easily extended to the case where you don’t want a reflection of the same variable, but want to plot the continuous distribution estimate of a second variable. Below is an example,
and here I have posted the syntax in entirety used in making this post. In there I also have an example of weighting groups inversely proportional to the total items in each group, which should make
the area of each group equal.
In this example of comparing groups, I utilize dots instead of the bar rug, as I believe it provides more contrast between the two distributions. Also note in general I have not superimposed other
summary statistics (some of the bean plots have quartile lines super-imposed). You could do this, but it gets a bit busy.
3 Comments
1. Hi Andrew,
Thanks for posting the syntax – it was really useful to follow the steps you took to make these. I’m just wondering how you could add a coloured point to distinguish the median in each group-
would it be something like this:
ELEMENT: point(position(summary.mean(V*1*panel_dum*1*groups))?
thanks in advance for your help!
□ Hi Louise,
In theory, something like ELEMENT: point(position(summary.median(V*1*panel_dum*1*groups)), color.exterior(panel_dum), shape("Median"), size(size.large)) should work, although my quick
attempts to get it to act as desired were unsucessful. I also tried to make an actual summary variable via the TRANS command in inline GPL and that did not work either. So what I ended up
doing was making a new variable and plotting that in its own element statement. Note when you have multiple elements like this the legend gets a bit un-wieldy, and what I have been doing is
editing post-hoc in a different vector editor to get the legend to how I want it. Part of the problem I think is that SPSS does not like mapping the same aesthetic to different types of
elements, so sometimes it gives errors when trying to construct the legend.
Below is an example using the same data that is in the last plot in my blog post.
/OUTFILE=* MODE=ADDVARIABLES
/BREAK=groups panel_dum
/GRAPHDATASET NAME="graphdataset" VARIABLES=V V_median panel_dum groups MISSING=LISTWISE REPORTMISSING=NO
/GRAPHSPEC SOURCE=INLINE.
BEGIN GPL
SOURCE: s=userSource(id("graphdataset"))
COORD: transpose(mirror(rect(dim(1,2))))
DATA: V=col(source(s), name("V"))
DATA: V_median=col(source(s), name("V_median"))
DATA: panel_dum=col(source(s), name("panel_dum"), unit.category())
DATA: groups=col(source(s), name("groups"), unit.category())
TRANS: zero = eval(20)
TRANS: med_point = eval(40)
GUIDE: axis(dim(1), label("V1"))
GUIDE: axis(dim(2), label("Frequency"))
GUIDE: axis(dim(3), null())
GUIDE: legend(aesthetic(aesthetic.color.exterior))
GUIDE: legend(aesthetic(aesthetic.color.interior))
GUIDE: legend(aesthetic(aesthetic.shape.interior), null())
GUIDE: legend(aesthetic(aesthetic.transparency.exterior), null())
GUIDE: legend(aesthetic(aesthetic.transparency.interior), null())
GUIDE: legend(aesthetic(aesthetic.size), null())
SCALE: linear(dim(2), min(0))
SCALE: cat(aesthetic(aesthetic.shape.interior), map(("Median", shape.square), ("Rug", shape.circle)))
SCALE: cat(aesthetic(aesthetic.transparency), map(("Median", transparency."0.0"), ("Rug", transparency."0.9")))
SCALE: cat(aesthetic(aesthetic.size), map(("Median", size."8"), ("Rug", size."4")))
ELEMENT: point(position(V*zero*panel_dum*1*groups), color.exterior(panel_dum), color.interior(panel_dum), transparency.interior("Rug"), transparency.exterior("Rug"),
shape.interior("Rug"), size("Rug"))
ELEMENT: point(position(V_median*zero*panel_dum*1*groups), color.exterior(panel_dum), color.interior(panel_dum), transparency.interior("Median"), transparency.exterior("Median"),
shape.interior("Median"), size("Median"))
ELEMENT: area(position(density.kernel.epanechnikov(V*1*panel_dum*1*groups)), transparency.exterior(transparency."1.0")), color(panel_dum), transparency.interior(transparency."0.5")))
END GPL.
Thanks for the comment and if you have any other questions let me know here in the comments or feel free to shoot me an email.
Posted by Andrew Wheeler on May 20, 2012 | {"url":"https://andrewpwheeler.wordpress.com/2012/05/20/bean-plots-in-spss/","timestamp":"2014-04-16T15:58:40Z","content_type":null,"content_length":"78821","record_id":"<urn:uuid:3a18057a-6468-4359-aabe-15e9f3f74bfb>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00639-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lesson: Lesson 3 Interpreting Graphs and Tables
89 Views
0 Downloads
7 Favorites
Lesson Objective
SWBAT interpret graphs and tables
Lesson Plan
Begin the lesson with students matching Graphs A-E with Statements 1-5. Facilitate a discussion with students about why the matched each graph which each statement and their reasoning. Point out key
words/phrases that indicate the direction and steepness of each graph. (note that the word slope is not used here because students will not see it until Lesson 6)
Example 1: Relating Graphs and Situations
1. Read through the verbal situation and underline/circle important key words and phrases
2. Re-read the verbal situation, following each graph.
3. Match graph to verbal situation
4. Have students explain why the other graphs do not work for that situation, and/or what would the situation have to say in order for the graph to work. Another point of discussion is what each
part of each graph means (i.e. what does this flat area of the graph mean in the context of the situation?)
5. Have students complete the you try
6. Go over the you try.
7. Note: it’s a great discussion point to talk about what it means when a graph starts at the origin and when it doesn’t.
Example 2: Sketching Graphs of Situations
1. Model reading through the verbal situation and underlining/circling important key words and phrases
2. In order for each key word or phrase, sketch the graph from left to right
3. Have students complete the you try
4. Go over the you try
Example 3: Writing Situations for Graphs
1. Analyze the graph and identify the different parts of the graph
2. Label each section as gradually increasing, rapidly increasing, gradually decreasing, rapidly decreasing, constant, etc.
3. Identify the axes and the context of the situation
4. Weave a story together as a class
5. Have students complete the you try
6. Go over the you try (perhaps a share out of a handful of student stories?)
Example 4: Matching Situations to Tables
1. Read through the table of values
2. Have students come up with a story on their own about what they think is happening with each snowboarder.
3. Match each snowboarder in the table to a verbal situation
4. Have students justify their reasoning based on the data in the table
5. Have students complete the you try
6. Go over the you try.
Independent Practice
Have students work through independent practice.
Have students share out and summarize what they learned today.
Have students complete the Exit Ticket
What works: For all of these problems, I found feigning ignorance an effective strategy to get students to analyze each part of each graph, as well as justify their reasoning as to why a graph should
look a certain way (more steep, less steep, straight line, jagged line, plateau, a pointed “mountaintop”, etc.). Also, having a discussion about what the axes and their labels mean for the graph
(such as in Example 2), was really helpful.
What didn't work: In example 3, students are asked to write situations for graphs. Since the majority of my students were ELL’s, this was difficult for them and they did not pay particular attention
to the axes. For example, for the water level vs. time graph, one student wrote about walking to the store, shopping, and the coming home. If you teach a similar demographic, I would suggest having a
word bank and/or guiding questions that start with “What is this graph about?”
Lesson Resources
Unit 7 Lesson 3 Interpreting Graphs and Tables.docx 68 | {"url":"http://betterlesson.com/lesson/368590/lesson-3-interpreting-graphs-and-tables","timestamp":"2014-04-21T09:38:35Z","content_type":null,"content_length":"44953","record_id":"<urn:uuid:b2d4bb8e-04d0-413d-bf16-746381a8eb06>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00662-ip-10-147-4-33.ec2.internal.warc.gz"} |
Extent Expand Buffer Distance
Extent Expand Buffer Distance ST_DWithin
In this quick exercise, we will explore the following PostGIS OGC functions: Extent, Expand, Buffer, Distance
Extent is an aggregate function - meaning that it is used just as you would use SQL sum, average, min, and max often times in conjunction with group by to consolidate a set of rows into one.
Pre-1.2.2 It returned a 2-dimensional bounding box object (BBOX2D) that encompasses the set of geometries you are consolidating.
Unlike most functions in PostGIS, it returns a postgis BBOX object instead of a geometry object.
In some cases, it may be necessary to cast the resulting value to a PostGIS geometry for example if you need to do operations that work on projections etc. Since a bounding box object does not
contain projection information, it is best to use the setSRID function as opposed to simply casting it to a geometry if you need projection information. SetSRID will automagically convert a BBOX to a
geometry and then stuff in SRID info specified in the function call.
Extent has a sibling called Extent3d which is also an aggregate function and is exactly like Extent except it returns a 3-dimensional bounding box (BBOX3D).
Starting around version 1.2.2, Extent and Extent3d will be deprecated in favor of ST_Extent . ST_Extent will return a BOX3D object.
Expand (< 1.3.1), ST_Expand (1.2.2 +)
Expand returns a geometry object that is a box encompassing a given geometry. Unlike extent, it is not an aggregate function. It is often used in conjunction with the distance function to do
proximity searches because it is less expensive than the distance function alone.
The reason why expand combined with distance is much faster than distance alone is that it can utilize gist indexes since it does compares between boxes so therefore reduces the set of geometries
that the distance function needs to check.
Note in versions of PostGIS after 1.2, there exists a new function called ST_DWithin which utilizes indexes, simpler to write than the expand, &&, distance combination.
ST_Distance will be the preferred name in version 1.2.2 and up
The following statements return equivalent values, but the one using Expand is much faster especially when geometries are indexed and commonly used attributes are indexed.
Find all buildings located within 100 meters of Roslindale
Using distance and Expand: Time with indexed geometries: 14.5 seconds - returns 8432 records
SELECT b.the_geom_nad83m
FROM neighborhoods n, buildings b
WHERE n.name = 'Roslindale' and expand(n.thegeom_meter, 100) && b.thegeom_meter
and distance(n.thegeom_meter, b.thegeom_meter) < 100
Using distance alone: Time with indexed geometries: 8.7 minutes - returns 8432 records
SELECT b.the_geom_nad83m
FROM neighborhoods n, buildings b
WHERE n.name = 'Roslindale'
and distance(n.thegeom_meter, b.thegeom_meter) < 100
ST_DWithin (1.3.1 and above)
We will write the above using the new ST_DWithin to demonstrate how much easier it is.
Using ST_DWithin: Time with indexed geometries: 14.5 seconds - returns 8432 records
SELECT b.the_geom_nad83m
FROM neighborhoods n, buildings b
WHERE n.name = 'Roslindale' and ST_DWithin(n.thegeom_meter, b.thegeom_meter, 100)
Within (< 1.3.1), ST_Within (1.3.1 and above)
ST_Within(A,B) returns true if the geometry A is within B. There is an important distinction between Within and ST_Within and that is the ST_Within does an implicit A&&B call to utilize indexes where
as Within and _ST_Within do not.
1.3.1 and above do
SELECT b.the_geom_nad83m
FROM neighborhoods n, buildings b
WHERE n.name = 'Roslindale' and ST_Within(b.thegeom_meter, n.thegeom_meter)
Pre 1.3.1 do
SELECT b.the_geom_nad83m
FROM neighborhoods n, buildings b
WHERE n.name = 'Roslindale' and b.thegeom_meter && n.thegeom_meter AND Within(b.thegeom_meter, n.thegeom_meter)
ST_Buffer (+1.2.2), Buffer (< 1.2.2)
Buffer returns a geometry object that is the radial expansion of a geometry expanded by the specified number of units. Calculations are in units of the Spatial Reference System of this Geometry. The
optional third parameter sets the number of segments used to approximate a quarter circle (defaults to 8) if third argument is not provided.
This is a much more involved process than the expand function because it needs to look at every point of a geometry whereas the expand function only looks at the bounding box of a geometry.
Aliases: ST_Buffer (MM-SQL)
Correcting Invalid Geometries with Buffer
Buffer can also be used to correct invalid geometries by smoothing out self-intersections. It doesn't work for all invalid geometries, but works for some. For example the below code will correct
invalid neighborhood geometries that can be corrected. Note in here I am also combining the use with MULTI since in this case buffer will return a polygon and our table geometries are stored as
multi-polygons. If your column geometry is a POLYGON rather than a MULTIPOLYGON then you can leave out the MULTI part.
UPDATE neighborhoods
SET the_geom = multi(buffer(the_geom, 0.0))
WHERE isvalid(the_geom) = false AND isvalid(buffer(the_geom, 0.0)) = true
Pictorial View of Buffer, Expand, Extent
In this section we provide a graphical representation of what the operations Buffer, Expand, Extent look like when applied to geometries.
Legend Corresponding Queries
SELECT the_geom, name
FROM neighborhoods
WHERE name IN('Hyde Park','Roxbury')
Expand: Draws a box that extends out 500 units from bounding box of each geometry
SELECT expand(the_geom, 500) as geom, name
FROM neighborhoods
WHERE name IN('Hyde Park', 'Roxbury')
Extent: Draws a single bounding box around the set of geometries
SELECT extent(the_geom) as thebbox, name
FROM neighborhoods
WHERE name IN('Hyde Park', 'Roxbury')
Buffer: Extends each geometry out by 500 units.
SELECT buffer(the_geom,500) as geom, name
FROM neighborhoods
WHERE name IN('Hyde Park', 'Roxbury')
Post Comments About Extent Expand Buffer Distance: PostGIS - ST_Extent, Expand, ST_Buffer, ST_Distance | {"url":"http://www.bostongis.com/postgis_extent_expand_buffer_distance.snippet","timestamp":"2014-04-19T06:51:58Z","content_type":null,"content_length":"26368","record_id":"<urn:uuid:78b77aff-1ec9-45b6-a545-86944c4d4267>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00021-ip-10-147-4-33.ec2.internal.warc.gz"} |
OpenMx - Advanced Structural Equation Modeling
Greetings all,
Now finally getting Mx to work on my Mac, I'm having a good time putting it through its paces. So, I have a simple model I'm playing with
probsolv -> grades -> irtsci
Below is the syntax and output. It's reading the number of obs correctly and the estimates seem correct. However, I am not getting a value for the LR chi-square test nor the RMSEA and the df seem
fairly large considering it should only be 1 df. Thanks in advance. David
> sciach <- read.table("~/Desktop/Mx stuff/sciach.txt",header=TRUE)
> require(OpenMx)
> #data(sciach)
> sciachsimple <- sciach[,c("irtsci", "grades", "probsolv")]
> manifests <- names(sciachsimple)
> pathmodel <- mxModel("path model",type="RAM",
+ mxData(observed=sciachsimple,type="raw"),
+ manifestVars = manifests,
+ mxPath(from="probsolv", to="grades",arrows=1,free=TRUE),
+ mxPath(from="grades", to="irtsci",arrows=1,free=TRUE),
+ mxPath(from=c("probsolv", "grades", "irtsci"),
+ arrows=2,
+ free=TRUE,
+ values = c(1, 1, 1),
+ labels=c("varx", "residual1", "residual2")),
+ mxPath(from="one",
+ to=c("probsolv", "grades", "irtsci"),
+ arrows=1,
+ free=TRUE,
+ values=c(1, 1, 1),
+ labels=c("meanx","beta0grades", "beta0irtsci")))
> pathmodelfit <- mxRun(pathmodel)
Running path model
> # pathmodelfit@output
> summary(mxRun(pathmodel))
Running path model
irtsci grades probsolv
Min. : 4.98 Min. :0.000 Min. : 0.00
1st Qu.:10.90 1st Qu.:2.000 1st Qu.: 7.00
Median :14.50 Median :3.000 Median : 9.00
Mean :14.69 Mean :2.858 Mean : 9.01
3rd Qu.:18.80 3rd Qu.:3.500 3rd Qu.:12.00
Max. :24.90 Max. :4.000 Max. :15.00
name matrix row col Estimate Std.Error
1 A irtsci grades 2.13628016 0.14758041
2 A grades probsolv 0.05409493 0.00729430
3 residual2 S irtsci irtsci 21.36742233 0.88520752
4 residual1 S grades grades 0.80461695 0.03333778
5 varx S probsolv probsolv 12.98102667 0.53787536
6 beta0irtsci M 1 irtsci 8.58666103 0.44305855
7 beta0grades M 1 grades 2.37100192 0.07077779
8 meanx M 1 probsolv 9.00943610 0.10555834
Observed statistics: 3495
Estimated parameters: 8
Degrees of freedom: 3487
-2 log likelihood: 16218.66
Saturated -2 log likelihood: NA
numObs: 1165
Chi-Square: NA
p: NA
AIC (Mx): 9244.662
BIC (Mx): -4200.609
adjusted BIC:
RMSEA: NA
frontend time: 0.2014029 secs
backend time: 3.162092 secs
independent submodels time: 6.508827e-05 secs
wall clock time: 3.36356 secs
cpu time: 3.36356 secs
openmx version number: 0.2.9-1147
Fri, 03/12/2010 - 12:00
Degrees of freedom is
Degrees of freedom is computed as the difference between the number of observed statistics and the number of free parameters. When using full information maximum likelihood, the number of observed
statistics is calculated as the number of non-NA entries in the columns used by the optimization.
There is a bug in 0.2.9 where all columns of the data set are used, instead of only the columns selected for the expected covariance matrix. However, I don't think that bug can manifest in a RAM
style model because we currently don't allow subsets of data to be selected in RAM, it's all or nothing. In other words, 'sciachsimple' above doesn't contain unused columns.
Fri, 03/12/2010 - 15:17
The reason that both the df
The reason that both the df look weird and the RMSEA is non-existant is the lack of a saturated model to compare your model to. This is missing because of a difference in the definition of degrees of
freedom in SEM and multivariate regression/GLM. A full-information optimizer like FIML approaches the model as having a higher number of degrees of freedom than is typically thought of in SEM (i.e.,
n*k-p intead of k(k+1)/2-p). By extension, its definition of a saturated model is very different than in SEM, as a saturated model would have k predictors per person. This model is not fit because it
is massively large.
If you so choose, you could manually specify a saturated model that would match the definition of a fully saturated model in SEM. From there, the -2LL difference and df differences across the two
models would exactly match the standard chi-square test of perfect fit in SEM, and could be turned into RMSEA. See the thread below to hear more about model fit stats and this fully saturated issue
for FIML.
Tue, 04/20/2010 - 14:49
Aside from manually
Aside from manually specifying the saturated/unrestricted model through OpenMx, there are a couple R functions that fit unrestricted multivariate normals via ML and return the LL. fit.norm() in the
QRMlib package and mvnXXX() in the mclust package both appear to do this.
Unless I'm missing something, you could use these functions to obtain -2LL for the unrestricted model without worrying about manually specifying the model. Then take [-2LL(restricted) - -2LL
(unrestricted)] for the chi square, where -2LL(restricted) comes from the OpenMx run. Then manually calculate the difference in degrees of freedom using the traditional SEM framework.
Tue, 04/20/2010 - 16:53
Agreed, it looks really
Agreed, it looks really promising for fitting saturated models. Unfortunately, at least fit.norm() breaks when the data set includes any missing values:
> data <- rmnorm(1000,rho=0.7,d=10)
> data[1,1]<-NA
> fit.norm(data)
Error in solve.default(cov, ...) :
system is computationally singular: reciprocal condition number = 0
Sadly, it's only the case when fitting to raw data (with missing values) that computing the -2lnL is computationally difficult. Estimating it with complete data, i.e., when there are no missing
values, is an easy, non-iterative task (as fit.norm shows). I've not tested mvnXXX; it sounds like a genre of porn so perhaps I should...
Wed, 05/26/2010 - 14:50
Sorry it took me so long to
Sorry it took me so long to get back to this thread...
I can assure you that mvnXXX is not as exciting as it sounds. For the missing data piece, I wonder whether em.norm from the norm package would work. I think it is typically used to obtain starting
values for multiple imputation, but it seems relevant here.
Fri, 03/12/2010 - 09:39
Hello, Easier to help if the
Easier to help if the script runs: Can you make the data or a subset available... perhaps attach it here, then replace the file path in your script with an URL to the data here, i.e., | {"url":"http://openmx.psyc.virginia.edu/thread/430","timestamp":"2014-04-21T12:55:44Z","content_type":null,"content_length":"41789","record_id":"<urn:uuid:8f833c6b-5485-41aa-afe8-d56be84d6fbd>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00589-ip-10-147-4-33.ec2.internal.warc.gz"} |
how to read a digit of a floating point number?????
how to read a digit of a floating point number?????
how do we read a particular digit/digits of a floating point number???
ive jus started out with C....hardly know anything and i was presented with this problem....
im supposed to write a program which accepts a floating point number from the user....and displays the rightmost digit of the integral part....(the digit at ones place)..
next..it asked me to modify the program such that two od the rightmost digits of the integral part of the number were displayed...
ill be very thankful for any help...
I'd convert it to a string, read it until reaching a dot or null char then get the character before it.
Few ways of doing it:
1. I think you can call floor() on the float, and then cast it to an int. This should take something like 34.54 and convert it to 34.
2. Get the remainder of the int divided by 10 (Hint: % operator) 34 / 10 = 3, R = 4
3. Print.
1. Read in float as a string with fgets().
2. Find the '.' char.
3. If it exists, print the char before it.
4. If it doesn't exist, print the last char of the string.
1. I think you can call floor() on the float, and then cast it to an int. This should take something like 34.54 and convert it to 34.
Would copying the float to an int not automatically floor it? Thats what I have been doing (i think).
I think you're right, but for whatever reason I was under the impression the floor() was needed. I don't remember what prompted me to think that.
The string method using sprintf() is probably the best. just read until you see the period or reach teh end of the string
void func(double input){
char temp[256];
int index;
sprintf((char*)temp , "%f" , input);
index =0;
while((temp[index] != 0) && (index<256) && (temp[index] != '.')) index++;
// index now points at the ones place
You could make your job easier with %.0f:
double get_first_digit(double number) {
char buffer[BUFSIZ];
sprintf(buffer, "%.0f", number);
if(*buffer) {
return buffer[strlen(buffer) - 1] - '0';
return 0;
But anyway, using a string is a bad idea in my opinion. You use lots of extra memory, and sprintf()'s probably pretty slow too. The mathematical way isn't much harder, if any.
I think you're right, but for whatever reason I was under the impression the floor() was needed. I don't remember what prompted me to think that.
Casting a floating point number to an int would indeed have the same effect as flooring it. But using floor() and modf() instead of a cast to an integral type is a much better idea, because a
floating point number can easily store values that cannot be represented in any integral type. If you casted one of those values to (int), you'd get an integer overflow.
So floor() exists so that you can round a number without risk of overflow.
Anyway, you could also use something like this:
double get_first_digit(double number) {
return floor(floor(number/10)*10 - number);
That's probably a bad way to do it . . . perhaps something like this, then.
double get_first_digit(double number) {
return floor(fmod(number, 10));
To get the first decimal digit, of course.
Floor()ing the float under that condition is not what the OP wants I take it. Straight casting might be better if he chooses to do conversions to an int.
The string method still sounds good. lol...
How about:
int last_digit_of_integral_part(float v)
if(v < 0.0) v = -v;
return (int)((v * 0.1 - (int)(v * 0.1)) * 10.0);
I like
double get_first_digit(double number) {
if(number < 0) {
return ceil(fmod(number, 10));
else {
return floor(fmod(number, 10));
Or if the digit alone was required, without a sign:
floor(fmod(fabs(number), 10));
[edit] The C99 round() would work too, replacing floor() and ceil().
round(fmod(number, 10));
Using floor explicitly also ensures correct operation if the /QIfist compiler option is turned on. (Which I currently do for my software 3D engine)
what about
this has a width of 1 and since no decimals all the width will be in the ones place right?
#include <stdio.h>
int main(void)
float fNum = 123.456;
printf("fNum = %1.0f\n",fNum);
return 0;
fNum = 123
In other words, no. That's wrong. | {"url":"http://cboard.cprogramming.com/c-programming/91737-how-read-digit-floating-point-number-printable-thread.html","timestamp":"2014-04-17T13:13:14Z","content_type":null,"content_length":"22655","record_id":"<urn:uuid:7927cef0-dc77-4c7e-8f26-0825165f1dd5>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00501-ip-10-147-4-33.ec2.internal.warc.gz"} |
East Bremerton, WA SAT Math Tutor
Find an East Bremerton, WA SAT Math Tutor
...It's not quite as savvy as something like PageMaker, but it works for many types of page layout designs. I could teach someone how to work with it and what its limitations are. Fitness is the
root of my active lifestyle.
39 Subjects: including SAT math, reading, English, writing
...I have a Masters Degree in Chemistry from the University of Washington, Seattle, and have been employed as a chemist and educator for over twenty years. I thoroughly enjoy teaching, and
tutored all through my college years. My goal as an instructor is to ensure that the student is comfortable w...
12 Subjects: including SAT math, chemistry, geometry, ASVAB
...I've taught in classrooms, over the kitchen table, and I have to say that the online experience is by far the best. We cover more material faster, it's much more convenient for our schedules,
and I can email you PDFs of all of the problems that we did. You can also record our session so you can watch them again and again for free.
16 Subjects: including SAT math, geometry, Chinese, GRE
...I have taken four courses in differential equations, from ODE's to Numerical methods for PDE's. Furthermore I spent a semester grading papers for ODE homework and I spent a summer semester
tutoring a student in ODE's. This is material with which I'm supremely comfortable identifying and correcting mistakes.
25 Subjects: including SAT math, chemistry, physics, calculus
...In the classroom, I have helped teach introductory physics classes at the University of Washington and Washington University in St Louis. I also have worked with these students individually on
homework problems or test preparation. As an independent tutor, I have helped students with Algebra/Al...
17 Subjects: including SAT math, chemistry, reading, algebra 1
Related East Bremerton, WA Tutors
East Bremerton, WA Accounting Tutors
East Bremerton, WA ACT Tutors
East Bremerton, WA Algebra Tutors
East Bremerton, WA Algebra 2 Tutors
East Bremerton, WA Calculus Tutors
East Bremerton, WA Geometry Tutors
East Bremerton, WA Math Tutors
East Bremerton, WA Prealgebra Tutors
East Bremerton, WA Precalculus Tutors
East Bremerton, WA SAT Tutors
East Bremerton, WA SAT Math Tutors
East Bremerton, WA Science Tutors
East Bremerton, WA Statistics Tutors
East Bremerton, WA Trigonometry Tutors
Nearby Cities With SAT math Tutor
Annapolis, WA SAT math Tutors
Bremerton SAT math Tutors
Colby, WA SAT math Tutors
Enetai, WA SAT math Tutors
Marine Drive, WA SAT math Tutors
Navy Yard City, WA SAT math Tutors
Parkwood, WA SAT math Tutors
Rocky Point, WA SAT math Tutors
Sheridan Park, WA SAT math Tutors
South Park Village, WA SAT math Tutors
Waterman, WA SAT math Tutors
Wautauga Beach, WA SAT math Tutors
West Hills, WA SAT math Tutors
West Park, WA SAT math Tutors
Westwood, WA SAT math Tutors | {"url":"http://www.purplemath.com/east_bremerton_wa_sat_math_tutors.php","timestamp":"2014-04-17T11:19:40Z","content_type":null,"content_length":"24400","record_id":"<urn:uuid:08993cd5-aad7-47a9-a0b0-5fd0b52c1eaf>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00211-ip-10-147-4-33.ec2.internal.warc.gz"} |
link karma
comment karma
what's this?
Two-Year Club Verified Email
daily reddit gold goal
help support reddit
reddit gold gives you extra features and helps keep our servers running. We believe the more reddit can be user-supported, the freer we will be to make reddit the best it can be.
Buy gold for yourself to gain access to extra features and special benefits. A month of gold pays for 276.46 minutes of reddit server time!
Give gold to thank exemplary people and encourage them to post more.
This daily goal updates every 10 minutes and is reset at midnight Pacific Time (20 hours, 51 minutes from now).
Yesterday's reddit gold goal
reddit is a website about everything
π Rendered by PID 13588 on app-94 at 2014-04-17 11:08:41.820806+00:00 running c422c5a. | {"url":"http://www.reddit.com/user/Power_of_Pi","timestamp":"2014-04-17T11:08:42Z","content_type":null,"content_length":"113703","record_id":"<urn:uuid:a86c466c-d3f8-4a77-8c62-170eff1425be>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00305-ip-10-147-4-33.ec2.internal.warc.gz"} |
1ucasvb's lab
42906053623 http://1ucasvb.tumblr.com/post/42906053623/in-a-previous-post-i-showed-how-to-geometrically 963 In a previous post, I showed how to geometrically...
In a previous post, I showed how to geometrically construct a sine-like function for a regular polygon.
I also pointed out how the shape of the function’s graph depends on the orientation of the polygon, since it isn’t perfectly symmetric like the circle.
This animation illustrates how the polygonal sine (dark curve) and polygonal cosines (clear curve) change as the generating polygon rotates.
First of all, it is important to point out these functions are not based on the perimeter of the shape, like it is for the unit circle. We’re still sticking to the interior angle here. If we used the
perimeter as a substitute for the angle we would just get a deformed linear spline of the sine function, which is rather useless and boring.
In order to find these functions for an arbitrary polygon, we first need to write the polygon in polar form. That is, we want the radius for a given angle. In a circle, this is a constant value.
A general “Polar Polygon” function is:
PP[n](x) = sec((2/n)·arcsin(sin((n/2)·x)))
Where n is the number of sides of the polygon. If n is not an integer, the curve is not closed.
Armed with this function, we can quickly find the polygonal sine and polygonal cosine:
Psin[n](x) = PP[n](x)·sin(x)
Pcos[n](x) = PP[n](x)·cos(x)
As n grows, the functions approximate the circular ones, as expected. To rotate the polygon, just add an angle offset to the x in PP[n].
This technique is general for any polar curve. Here’s a heart’s sine function, for instance
So, what is it good for?
I’ve used this several times when I wanted some smooth interpolation between a circle and a polygon, in such a way that the endpoints of the interpolation are a perfect circle and a perfect, pointy
polygon. It’s useful in parametric surfaces, such as in this old avatar of mine:
Now you can also listen to what these waves sound like
medicaladhesivetape likes this
cyrlyx likes this
cru2o reblogged this from madscientistworld
madscientistworld reblogged this from engrprof
madscientistworld likes this
thirteenthtardis likes this
coeursurlamur likes this
oxymoronic-life likes this
drazzimyr likes this
turtlenecksoup likes this
biomechanicaltomato reblogged this from engrprof
engrprof reblogged this from 1ucasvb
engrprof likes this
hoekvision reblogged this from the-science-of-time
northernodds reblogged this from this-is-somestuff
pillowofconcrete reblogged this from the-science-of-time
science-funducation reblogged this from the-science-of-time
nasha-of-russia likes this
the-science-of-time reblogged this from this-is-somestuff
the-science-of-time likes this
this-is-somestuff reblogged this from drueisms
989232 likes this
cui-bono274 reblogged this from indecisivemisterpilgrim
indecisivemisterpilgrim likes this
indecisivemisterpilgrim reblogged this from number-unknown
number-unknown reblogged this from drueisms
number-unknown likes this
sarw4n likes this
aquietcorneroftheuniverse reblogged this from drueisms
aquietcorneroftheuniverse likes this
drueisms reblogged this from 1ucasvb and added:
Math is amazing.
heyitsrainingbacon-wellshucks reblogged this from 1ucasvb
heyitsrainingbacon-wellshucks likes this
tylersballpit reblogged this from 1ucasvb
restless-rambling reblogged this from 1ucasvb
restless-rambling likes this
cheio-do-vazio likes this
flamingstrawberries likes this
animalcomplaint likes this
plutohimself reblogged this from mycaffeinatedlife
autism-really-speaks likes this
mycaffeinatedlife reblogged this from 1ucasvb and added:
I might have found a new math obsession!
bflatm11 likes this
southernrebelgirl likes this
gwenlightened likes this
jonasjberg likes this
jonasjberg reblogged this from 1ucasvb
daniel-brown likes this
maxwellwellwell likes this | {"url":"http://1ucasvb.tumblr.com/post/42906053623/in-a-previous-post-i-showed-how-to-geometrically","timestamp":"2014-04-18T13:29:29Z","content_type":null,"content_length":"64290","record_id":"<urn:uuid:1f6cb695-5c50-47ef-ae65-c1e741461971>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00620-ip-10-147-4-33.ec2.internal.warc.gz"} |
The refined Eisenstein conjecture
I have computed the torsion subgroup and component group of hundreds of optimal quotients at prime level. In every example, the same pattern emerges: the torsion subgroup is generated by the image of
0-oo and has the same order as the component group; furthermore, the product of the orders of the component groups equals the numerator of (p-1)/12. This indicates that a refinement of the conjecture
of Ogg and other theorems proved by Mazur may be true.
Lecture Notes Database | {"url":"http://modular.math.washington.edu/Tables/Notes/refinedeisen.html","timestamp":"2014-04-16T04:46:09Z","content_type":null,"content_length":"1411","record_id":"<urn:uuid:878e05d8-49b2-40f2-990d-f2dfd24f050f>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00542-ip-10-147-4-33.ec2.internal.warc.gz"} |
General Relativity
HPS 0410 Einstein for Everyone Spring 2010
Back to main course page
Assignment 9: General Relativity
For submission
1.The effect of geodesic deviation can be used to detect curvature in spacetime.
(a) The simplest case is the gravitation free Minkowski spacetime. Consider four objects arranged at equal distances apart in a straight line in Minkowski spacetime and initially at rest. Draw a
spacetime diagram of their ensuing worldlines. Use the notion of geodesic deviation to conclude that the sheet of the spacetime that they are exploring is flat.
(b) Now imagine that the same four bodies are momentarily at rest, high above the surface of a planet, such as our earth, all lined up at the same altitude. They are released and begin to fall
towards the planet. Draw a spacetime diagram of the ensuing worldlines. Use the notion of geodesic deviation to conclude that the sheet of spacetime they are exploring is curved.
2. (a) What is the essential idea of Einstein's gravitational field equations?
(b) Why is it plausible that the Minkowski spacetime of special relativity conforms to them in case the spacetime's matter density is everywhere zero?
(c) Does this mean that a Minkowski spacetime is the only possibility where the matter density is zero? Why not?
3.(a) What consequence does the equality of inertial and gravitational mass of Newtonian theory have for bodies in free fall?
(b) How is this consequence important to Einstein's new theory of gravity, which depicts gravitational effects as resulting from a curvature of spacetime?
For discussion in the recitation.
A. According to general relativity, there is noticeable curvature in the space-time sheets of spacetime in the vicinity of the earth. That curvature is manifested as gravitational effects. General
relativity also tells us that the geometry of space above the surface of the earth has a very, very slight curvature as well. That would be manifested as a curvature in a "space-space" sheet of
spacetime. How could geodesic deviation be used to detect it, assuming that precise enough measurements could be made?
B. Einstein first hit upon the idea that gravitation slows clocks through a thought experiment conducted fully within a Minkowski spacetime of special relativity. He imagined an observer with two
clocks all enclosed within a box and accelerating uniformly in a Minkowski spacetime. He then showed that, according to special relativity, the clocks run at different rates, according to their
position in the box. The farther forward they are in the direction of the acceleration, the faster they run. Einstein's principle of equivalence then added the assertion that the inertial field
appearing in the box was nothing other than a special form of a gravitational field. So he concluded that clocks run at different rates according to their altitude in a gravitational field. The
higher clocks run faster and the lower ones slower.
The relative slowing of the clocks can be recovered fully from the spacetime geometry of a Minkowski spacetime. Here is a spacetime diagram of two clocks accelerating. The acceleration is in the
direction from the A clock to the B clock. Draw in hypersurfaces of simultaneity for observers located with the clocks and moving with them. Show that the B-clock observer judges the A-clock to run
slower; and the A-clock observer judges the B-clock to run faster.
C. Einstein took a radically new approach to gravity by declaring it to coincide with a curvature of spacetime. However, as we have seen in the chapter, the same thing can be done with Newtonian
gravitation theory, so that all its gravitational effects can be associated with a curvature in some parts of spacetime. So what is new with Einstein's proposal?
D. You can take a flat sheet of paper and wrap it into a cylinder, so that its rightmost edge coincides with its leftmost edge. That operation does not affect the intrinsic flatness of the paper. One
can do the same thing in imagination with a cubical chunk of Minkowski spacetime to create a very odd, new spacetime. Take the chunk's rightmost edge and declare that it coincides with its leftmost
edge. That means that anyone traveling past the surface marking rightmost edge of this space would simply pop back at the surface marking the leftmost edge. Use geodesic deviation to convince
yourself that the wrapping up of this spacetime has not changed the flatness of the spacetime. | {"url":"http://www.pitt.edu/~jdnorton/teaching/HPS_0410/2010_Spring/assignments/09_general_relativity/index.html","timestamp":"2014-04-18T03:12:15Z","content_type":null,"content_length":"7547","record_id":"<urn:uuid:c032b1c8-3ecf-47cd-82ae-ad81777d7039>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00438-ip-10-147-4-33.ec2.internal.warc.gz"} |
North Highlands Math Tutor
Find a North Highlands Math Tutor
...I am passionate about the subject of mathematics, I can't imagine doing anything else, and I believe that my enthusiasm for the subject is contagious. Many students that I have worked with have
thanked me for teaching them to enjoy math! I offer tutoring to high school and college students from...
10 Subjects: including algebra 1, algebra 2, calculus, geometry
...I assign homework after sessions depending on the needs of the student and routinely provide assessments. I believe the key to a student's growth is two-way communication. I often request
comments and questions about anything that is working or not working.
25 Subjects: including calculus, prealgebra, geometry, ACT Math
...Mandarin-Chinese is my native language. I was born and raised in Taiwan until 16 years old, so I am proficient in writing and reading traditional Chinese as well as speaking in Mandarin. After
coming to America, I self-taught simplified Chinese and pinyin.Mandarin-Chinese is my native language.
12 Subjects: including calculus, algebra 1, algebra 2, chemistry
Dear Student,My success in school started at an early age. I excelled in math, writing, music, and history. I took honors classes in junior high and high school, and got a 1300 on my SAT's.
38 Subjects: including algebra 2, vocabulary, grammar, Microsoft Excel
I have been tutoring and teaching math and computer skills for more than 10 years. I have my bachelor's degree in Civil Engineering and master’s degree in Natural Disaster Management. I love math
ematics and computer, I think we can't doing anything without math and computer, and I do use them a lot in hard engineering projects.
19 Subjects: including discrete math, Farsi, elementary math, ACT Math | {"url":"http://www.purplemath.com/North_Highlands_Math_tutors.php","timestamp":"2014-04-16T13:16:52Z","content_type":null,"content_length":"23968","record_id":"<urn:uuid:3098d798-2c5e-4f99-a20b-78f05bdb3484>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00571-ip-10-147-4-33.ec2.internal.warc.gz"} |
Characteristic and Mantissa of a Logarithm
• Unrestricted access to grade appropriate lessons, quizzes, & printable worksheets
• Instant scoring of online quizzes
• Progress tracking and award certificates to keep your student motivated
• Unlimited practice with auto-generated 'WIZ MATH' quizzes
• Child-friendly website with no advertisements
• Choice of Math, English, Science, & Social Studies Curriculums
• Excellent value for K-12 and ACT, SAT, & TOEFL Test Preparation
• Get discount offers by sending an email to discounts@kwiznet.com | {"url":"http://www.kwiznet.com/p/takeQuiz.php?ChapterID=11126&CurriculumID=48&Num=3.20","timestamp":"2014-04-20T08:42:10Z","content_type":null,"content_length":"11920","record_id":"<urn:uuid:ad6bab11-863a-491c-bc0d-e2438a5d644b>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00207-ip-10-147-4-33.ec2.internal.warc.gz"} |
Register FAQ Members List Social Groups Search Today's Posts
Mark Forums Read
Thread Tools Search this Thread Display Modes
03-06-2011, 11:49 PM
Join Date: Jan 2011
Posts: 10
The Puzzles package contains a collection of 27 small one-player puzzle games, which were initially developed by Simon Tatham for Unix, Windows, and Mac OS X.
The actual games in this collection are re-implementations of many well known classic puzzles. The puzzle collection for Android contains of the following games:
1. Black Box: Locate the balls inside the box by firing lasers and observing how they are deflected or absorbed.
2. Bridges: Connect the islands with the given bridge counts so no bridges are crossing.
3. Cube: Roll the cube to collect all the paint.
4. Dominosa: Pair the numbers to form a complete and distinct set of dominoes.
5. Fifteen: Slide tiles around to form a grid in numerical order.
6. Filling: Number the squares to form regions with the same number, which is also the size of the region.
7. Flip: Turn over squares until all are light side up, but flipping one flips its neighbours.
8. Galaxies: Divide the grid into 180-degree rotationally symmetric regions each centred on a dot.
9. Guess: Guess the hidden colour sequence: black is correct, white is the correct colour in the wrong place.
10. Inertia: Move the ball around to collect all the gems without hitting a mine.
11. Light Up: Place lamps so all squares are lit, no lamp lights another and numbered squares have the given number of adjacent lamps.
12. Loopy: Draw a single unbroken and uncrossing line such that numbered squares have the given number of edges filled.
13. Map: Copy the 4 colours to colour the map with no regions of the same colour touching.
14. Mines: Uncover all squares except the mines using the given counts of adjacent mines.
15. Net: Rotate tiles to connect all tiles to the centre tile.
16. Netslide: Slide rows and columns to connect all tiles to the centre tile.
17. Pattern: Fill the grid so that the numbers are the length of each stretch of black tiles in order.
18. Pegs: Remove pegs by jumping others over them, until only one is left.
19. Rectangles: Divide the grid into rectangles containing only one number, which is also the area of the rectangle.
20. Same Game: Remove groups (2 or more) of the same colour to clear the grid, scoring more for larger groups.
21. Sixteen: Slide rows and columns around to form a grid in numerical order.
22. Slant: Draw diagonal lines in every square such that circles have the given numbers of lines meeting at them and there are no loops.
23. Solo: Fill the grid so each block, row and column contains exactly one of each digit.
24. Tents: Place tents so each tree has a separate adjacent tent (not diagonally), no tents are next to each other (even diagonally) and the row and column counts are correct.
25. Twiddle: Rotate groups of 4 to form a grid in numerical order.
26. Unequal: Enter digits so every row and column contains exactly one of each digit and the greater-than signs are satisfied.
27. Untangle: Move points around until no lines c
Advertisement [Remove Advertisement]
Thread Tools Search this Thread
Display Modes
Linear Mode
Posting Rules
You may not post new threads
You may not post replies
You may not post attachments
You may not edit your posts
BB code
code is
HTML code is Off
Forum Rules
All times are GMT -5. The time now is 11:15 PM. | {"url":"http://anythingbutipod.com/forum/showthread.php?p=542110","timestamp":"2014-04-17T04:15:47Z","content_type":null,"content_length":"78659","record_id":"<urn:uuid:4c748ed0-2563-48b9-b4a7-737412821d87>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00256-ip-10-147-4-33.ec2.internal.warc.gz"} |
East Chicago Statistics Tutor
...My experience tutoring and teaching mathematics, English and the physical sciences at the college level qualify me to do so. My experience as a trainer and business manager further bolster my
ability to help students to succeed with PRAXIS. I have a systematic and comprehensive methodology for PRAXIS tutoring that has proven highly successful.
49 Subjects: including statistics, reading, writing, English
...Beyond this academic instruction I have worked with students since I was in college on the side helping them optimize their own study habits and techniques for both their classwork but also
their approach to test prep. Often times to master a substantial amount of material in a limited amount of...
38 Subjects: including statistics, Spanish, geometry, reading
I have a passion for teaching and a great liking for math, which gives me pleasure and rewards. I believe that math is a subject that we use every day. I do my best to associate math concepts to
real life situation.
7 Subjects: including statistics, algebra 1, algebra 2, precalculus
...My experience in tutoring algebra and other math subjects is rich! I have been employed multiple years in both colleges and elementary schools teaching and tutoring students in math and
science. Over the last seven years I have tutored hundreds of students in math, from children to adults.
22 Subjects: including statistics, English, chemistry, biology
...I have also taught PLTW in basic electronics. I have 20-plus years of experience in heavy industry (steel mills and the like) and I interpret/translate English/Greek and vice-versa. Because of
my experience with the steel industry, I can bring real world problems for displaying where the math is used.
12 Subjects: including statistics, calculus, geometry, algebra 2 | {"url":"http://www.purplemath.com/east_chicago_in_statistics_tutors.php","timestamp":"2014-04-17T22:02:21Z","content_type":null,"content_length":"24192","record_id":"<urn:uuid:a7d0a2f7-a227-43b6-86ba-a3836a82821a>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00136-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math 405/607E
Math 405/607E (Numerical Methods for Differential Equations), Fall 2011
Note: PDF files may be read with Acrobat Reader, which is available for free from Adobe.
• Class: Mon Wed Fri 13:00-13:50 in Math 203
• Instructor
□ Lisa Gordeliy
□ gordeliy(at)math(dot)ubc(dot)ca
• Office hours: in LSK 100D, Mon 11am-12noon and 3-5pm, Wed 11am-12noon, or by appointment via email.
Final Exam
The final exam is scheduled for December 8, 3.30 pm - 6 pm, in MATH 104. It is up to your decision whether you take the exam or not. If you do not take it, your grade for the course will be based on
the term marks for the projects and the assignments. Read the important information below.
Course notes
• Notes on numerical methods for PDEs (A. Peirce): Lecture 25 (p. 1-3), Lecture 26 (p. 4-6, 10), Lecture 27 (p. 10-11 and a MATLAB example of a BVP solved by finite differences using the Newton's
method), Lectures 28 - 30 (tridiagonal systems (interpolation notes p. 29) and iterative methods for linear systems p. 15-21), Lectures 31 - 32 (solution of heat equation using finite differences
and discussion of problem 2 of the Assignment)
• Lecture 36: finite element method (FEM)
• Assignment 2 (1), due Oct 12 in class. MATLAB codes: fddiff.m, DemoHermite.m. Expressions for Hermite functions and their derivatives, as well as linear systems for spline construction in problem
3, are given here. SOLUTIONS
• Assignment 3 (2), due Oct 26 in class. For problem 3, you can modify the following MATLAB codes: composite trapezoidal quadrature and testtrap.m - application of the composite trapezoidal
quadrature with N subintervals to integrate f(x) = sin(5.*x) on [0,1]. Save these two codes to the same folder and run the code testtrap.m. You will see that by changing N, the quadratic
convergence rate is obtained. SOLUTIONS to problems 3 - 5
• Project II, due Dec 2 in class. UPDATE: The analytic solution for the logistic equation is y(x) = 100/(1+19*exp(-200*x)).
• Project III for graduate students: due Dec 2 in class. A research project, discuss it with your advisor. This should be a problem that you want to solve using the numerical tools we learn in
class, you may wish to use: interpolation, integration, finite differences, ODE numerical solutions, or PDE numerical solutions (to be covered in the second half of November in class).
These two textbooks are reserved for Math405/607E students at I.K. BARBER LEARNING CENTRE circulation reserve collection (two-day loan). You can also borrow other copies / editions of these textbooks
from the UBC library for a regular loan.
MATLAB resources | {"url":"http://www.math.ubc.ca/~gordeliy/m405607E.html","timestamp":"2014-04-18T15:41:21Z","content_type":null,"content_length":"7998","record_id":"<urn:uuid:2be3c5a8-2da5-4087-b3d8-10369934ae74>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00194-ip-10-147-4-33.ec2.internal.warc.gz"} |
[FOM] Infinity and the "Noble Lie"
joeshipman@aol.com joeshipman at aol.com
Wed Dec 14 13:15:14 EST 2005
>Still it
>is not clear to me that the Eda proof does not use notions of
>infinity. Specifically, can the PNT be proven in second-order
>arithmetic minus the Successor Axiom? (The Successor Axiom, by
>saying that every natural number has a successor, gives the natural
>numbers its infiniteness.)
I reply:
The successor axiom is not what I regard as a use of "actual infinity".
What I care about is whether, when the proof is formalized in ZFC, the
ZFC "Axiom of Infinity" must be involved. If not, that means that the
theorem (if it can be stated in the language of arithmetic) can
actually be proved in Peano arithmetic, and Peano arithmetic is a
system about finite objects only. (The "potential infinity" involved
doesn't concern me.)
There is a strong isomorphism between Peano arithmetic and ZFC minus
the axiom of infinity. Define a bijection between the natural numbers
and the hereditarily finite sets as follows: if n is a natural number,
expressed as a sum of distinct 2-powers 2^i_1 + 2^i_2 + ... + 2^i_k,
then f(n) is the set whose elements are f(i_1), f(i_2),...,f(i_k).
Conversely, if X is a set, then g(X) is the sum over elements y of X of
2^g(y), so fg and gf are identity functions.
Since exponentiation is definable in PA, you can define each system's
basic relations and functions in the other system, and prove the
appropriate axioms (you may have to add the *negation* of the axiom of
infinity to get this equivalence).
There is no proof of Con(PA) in ZFC that does not use the axiom of
infinity; and if you believe that consistent theories have models then
believing Con(PA) really is the same thing as believing in an actual
infinity. But a finitist can say that he believes no contradiction can
be found in PA while denying that PA has a model -- he thinks the
proof of Godel's Completeness Theorem is, not wrong, but just
meaningless, because it speaks about infinite objects.
This is a subtle point. Godel's Completeness Theorem can be proved in
WKL0, which is conservative over Peano Arithmetic, so any consequence
of Godel's Completeness Theorem that speaks about integers only must be
accepted by a finitist who accepts only Peano Arithmetic. But the
finitist can reject the infinite model that Godel's Completeness
Theorem says exists, without being forced to deny Con(PA), because the
equivalence of consistency with having-a-model presumes the Axiom of
Infinity, even though the FINITARY consequences of the Completeness
Theorem don't depend on the Axiom of Infinity.
Regarding your other point -- can you provide an example of a statement
which can be proven in ZFC, and cannot be proven without the Axiom of
Infinity, but which (in the presence of the other axioms) does NOT
imply the Axiom of Infinity?
-- JS
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/2005-December/009465.html","timestamp":"2014-04-21T10:53:42Z","content_type":null,"content_length":"5364","record_id":"<urn:uuid:91322372-30bd-4129-a6e7-202c39c097b2>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00125-ip-10-147-4-33.ec2.internal.warc.gz"} |
Rahns, PA
Find a Rahns, PA Precalculus Tutor
...Through WyzAnt, I have tutored math subjects from prealgebra to precalculus; I have also tutored English writing, English grammar, and economics, and I am trained to tutor for standardized
testing (SAT, ACT, GRE), philosophy, and music. In addition to tutoring, I work as a part-time teacher at a...
38 Subjects: including precalculus, English, reading, physics
I am currently employed as a secondary mathematics teacher. Over the past eight years I have taught high school courses including Algebra I, Algebra II, Algebra III, Geometry, Trigonometry, and
Pre-calculus. I also have experience teaching undergraduate students at Florida State University and Immaculata University.
9 Subjects: including precalculus, geometry, algebra 1, GRE
...Every minute that I spend tutoring is time away from my kids, so I'm determined to make sure that those minutes are worth it. Hope to hear from you!I have both a B.S. and a Ph.D. in Chemical
Engineering. The B.S. was completed in 2004 from the University of Florida and the Ph.D. was completed in 2009 from Virginia Tech.
16 Subjects: including precalculus, chemistry, physics, calculus
...It upsets me when I hear students say, 'I'm just not good in math!' Comments like that typically mean that a math teacher along the way wasn't able to present the material in a way that made
sense to the student. I've never met a student who didn't understand once we as a team figured out how t...
9 Subjects: including precalculus, geometry, algebra 1, algebra 2
...Aside from that, I occasionally tutored high school mathematics and other more advanced college courses, such as Advanced Calculus, Logic and Set Theory, Foundations of Math, and Abstract
Algebra. Many of these subjects I also tutored privately. In addition to this, I've done substantial work i...
26 Subjects: including precalculus, English, writing, reading | {"url":"http://www.purplemath.com/Rahns_PA_Precalculus_tutors.php","timestamp":"2014-04-17T15:31:25Z","content_type":null,"content_length":"24231","record_id":"<urn:uuid:bacb0f0b-e59f-4f4b-b981-a614d28413ba>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00198-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Chirp Signal Is A Sinusoid Whose Frequency Changes ... | Chegg.com
answer the following questions and give the matlab codes please
Image text transcribed for accessibility: A chirp signal is a sinusoid whose frequency changes linearly from a starting value to an ending one. The formula for such a signal can be defined by
creating a complex exponential signal with quadratic angle by defining (t) in (3) as (t) = 2pimut2 + 2pifot + phi The derivative of (t) yields as instantaneous frequency (4) that changes linearly
versus time. Ft(t) = 2mut + fo The slope of fi(t) is equal to 2mu and its intercept is equal to f0. If the signal starts at time t = 0 secs, then f theta is also the starting frequency. The code
segment that we studied in laboratory for chirp signals can be found below. In that code segment we created a chirp signal with the following properties, sampling frequency = 11025 signal duration =
1.8 seconds mu = 500, f0 = 200 and theta = 100/2* pi ) (They were used in (t) formula defined above) To create the chirp affect firstly we defined the (t) function and then we used this function in
the real part of an exponential function ( real part of a complex exponential gives us a cosine) resulting our chirp signal. Later we obtained spectrogram of this chirp signal for two different
window lengths (128 and 1024) by using "spectrum" function of Matlab (You can use "spectrogram" Matlab built in function instead of this if "spectrogram" is not available in your Matlab version).
Code segment written in Laboratory: clc, clear all, close all fsamp = 11025; %Sampleing Frecquency dt = 1/fsamp; dur = 1.8;%time duration tt = 0 : dt : dur;%my time vector psi = 2* pi * (100 + 200*tt
+ 500*tt.*tt); xx = real {7.7*exp(j*psi)}; figure, subplot {2, 1, 1}, spectrogram {xx, 128, fsamp}; title {"Window length = 128*} subplot {2, 1, 2}, spectrogram {xx, 1024, fsamp); title {"Window
length = 1024"} In this homework your job is to write a MATLAB code segment which synthesizes a second "chirp" signal with the following parameters: A total time duration of 3 secs.(strting from 0)
with a sm[ling rate of fs = 11025 Hz. The instantaneous frequency strts at 3,000 Hz and at-2,000 Hz (negative frequency). In this part you can modify the code segment given above. In that code, the
parameters ( mu, f0 and phi) are given but in this case you have given the starting and ending frequencies. You must use the information given in "Definition of a chirp signal" part to find f0 and
mu. For phi you will use0. Create two spectrograms of this seconds chirp signal with window lengths 128 and 1024. Plot these spectrogram in one figure together as we did in laboratory (You will use
"subplot" command). Answer the following questions about your results. When you consider your obtained spectrograms, what can say for the negative frequencies of your signal? What can you say about
the frequency - resolution and the time - resolution properties of your spectrogram with window length 128 and 1024. (When your time - resolution is better, when your frequency - resolution is
better? And why?) After you finished allthe steps mentioned above, you have to add all your codes, figures and answers into your laboratoty report.
Electrical Engineering | {"url":"http://www.chegg.com/homework-help/questions-and-answers/chirp-signal-sinusoid-whose-frequency-changes-linearly-starting-value-ending-one-formula-s-q3934773","timestamp":"2014-04-17T14:27:02Z","content_type":null,"content_length":"21425","record_id":"<urn:uuid:a215d5f8-23a3-49fe-b5c0-0ec09a8f30b7>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00519-ip-10-147-4-33.ec2.internal.warc.gz"} |
Isolines are often used on maps to represent points of equal value. This is a list of some common (as well as obscure) types of isolines. The prefix "iso-" means "equal."
A line representing points of equal atmospheric pressure
A line representing points of equal depth under water
A line representing depths of water with equal temperature
A line representing points of equal recurrence of auroras
A line representing points of equal mean winter temperature
A line representing points of equal time-distance from a point, such as the transportation time from a particular point
A line representing points of equal transport costs for products from production to markets
A line representing points of equal intensity of radiation
A line representing points of equal dew point
A line representing points of equal mean temperature
A line separating linguistic features
A line representing points of equal magnetic declination
A line representing points of equal salinity in the ocean
A line representing points receiving equal amounts of sunshine
A line representing points of equal humidity
A line representing points of equal precipitation
A line representing points of equal amounts of cloud cover
A line representing points where ice begins to form at the same time each fall or winter
A line representing points where biological events occur at the same time, such as crops flowering
A line representing points of equal acidity, as in acid precipitation
A line representing points of equal numerical value, such as population
A line representing points of equal annual change in magnetic declination
A line representing points of equal atmospheric density
A line representing points where ice begins to melt at the same time each spring
A line representing points of equal wind speed
A line representing points of equal mean summer temperature
A line representing points of equal temperature
A line representing points of equal transport costs from the source of a raw material | {"url":"http://geography.about.com/library/misc/blisoline.htm","timestamp":"2014-04-18T20:59:36Z","content_type":null,"content_length":"36409","record_id":"<urn:uuid:69baa252-c3a2-4ccc-98ac-da1982c142dd>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00253-ip-10-147-4-33.ec2.internal.warc.gz"} |
East Bremerton, WA SAT Math Tutor
Find an East Bremerton, WA SAT Math Tutor
...It's not quite as savvy as something like PageMaker, but it works for many types of page layout designs. I could teach someone how to work with it and what its limitations are. Fitness is the
root of my active lifestyle.
39 Subjects: including SAT math, reading, English, writing
...I have a Masters Degree in Chemistry from the University of Washington, Seattle, and have been employed as a chemist and educator for over twenty years. I thoroughly enjoy teaching, and
tutored all through my college years. My goal as an instructor is to ensure that the student is comfortable w...
12 Subjects: including SAT math, chemistry, geometry, ASVAB
...I've taught in classrooms, over the kitchen table, and I have to say that the online experience is by far the best. We cover more material faster, it's much more convenient for our schedules,
and I can email you PDFs of all of the problems that we did. You can also record our session so you can watch them again and again for free.
16 Subjects: including SAT math, geometry, Chinese, GRE
...I have taken four courses in differential equations, from ODE's to Numerical methods for PDE's. Furthermore I spent a semester grading papers for ODE homework and I spent a summer semester
tutoring a student in ODE's. This is material with which I'm supremely comfortable identifying and correcting mistakes.
25 Subjects: including SAT math, chemistry, physics, calculus
...In the classroom, I have helped teach introductory physics classes at the University of Washington and Washington University in St Louis. I also have worked with these students individually on
homework problems or test preparation. As an independent tutor, I have helped students with Algebra/Al...
17 Subjects: including SAT math, chemistry, reading, algebra 1
Related East Bremerton, WA Tutors
East Bremerton, WA Accounting Tutors
East Bremerton, WA ACT Tutors
East Bremerton, WA Algebra Tutors
East Bremerton, WA Algebra 2 Tutors
East Bremerton, WA Calculus Tutors
East Bremerton, WA Geometry Tutors
East Bremerton, WA Math Tutors
East Bremerton, WA Prealgebra Tutors
East Bremerton, WA Precalculus Tutors
East Bremerton, WA SAT Tutors
East Bremerton, WA SAT Math Tutors
East Bremerton, WA Science Tutors
East Bremerton, WA Statistics Tutors
East Bremerton, WA Trigonometry Tutors
Nearby Cities With SAT math Tutor
Annapolis, WA SAT math Tutors
Bremerton SAT math Tutors
Colby, WA SAT math Tutors
Enetai, WA SAT math Tutors
Marine Drive, WA SAT math Tutors
Navy Yard City, WA SAT math Tutors
Parkwood, WA SAT math Tutors
Rocky Point, WA SAT math Tutors
Sheridan Park, WA SAT math Tutors
South Park Village, WA SAT math Tutors
Waterman, WA SAT math Tutors
Wautauga Beach, WA SAT math Tutors
West Hills, WA SAT math Tutors
West Park, WA SAT math Tutors
Westwood, WA SAT math Tutors | {"url":"http://www.purplemath.com/east_bremerton_wa_sat_math_tutors.php","timestamp":"2014-04-17T11:19:40Z","content_type":null,"content_length":"24400","record_id":"<urn:uuid:08993cd5-aad7-47a9-a0b0-5fd0b52c1eaf>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00211-ip-10-147-4-33.ec2.internal.warc.gz"} |
Gary, IN Science Tutor
Find a Gary, IN Science Tutor
I have a deep love of History and Geography and the degrees show that. They are not just a series of disconnected facts and places, but need to be viewed as places and times. When a student can
begin to string together the narrative in the material, it is so much easier for them to retain the information that they have learned.
9 Subjects: including geology, physical science, grammar, world history
...Geology gives insight into the history of the Earth, as it provides the primary evidence for plate tectonics, the evolutionary history of life, and past climates. In modern times, geology is
commercially important for mineral and hydrocarbon exploration and for evaluating water resources; it is ...
15 Subjects: including biology, geology, reading, literature
...In addition, Matlab was the primary computational tool used for my master's thesis. I have an undergraduate degree from Purdue University in Mechanical Engineering, with a GPA of 3.83. In
addition, I have a masters degree in Mechanical Engineering from the University of Texas at Austin, with a GPA of 3.83.
17 Subjects: including mechanical engineering, GRE, physics, algebra 1
...I have also been tutoring after school and on the weekends for just as long. As a teacher in math and sciences, it is important that the students build their knowledge upon topics previously
learned. This is where I find many students have trouble.
15 Subjects: including biology, physics, prealgebra, trigonometry
...I have background of 8 years studying Spanish, and worked as a tutor before with ESL and International students. I additionally have previous tutoring experience as well as professional
experience translating and working with bilingual media for the Illinois Senate Democratic Caucus Communicatio...
49 Subjects: including biology, anatomy, grammar, ESL/ESOL
Related Gary, IN Tutors
Gary, IN Accounting Tutors
Gary, IN ACT Tutors
Gary, IN Algebra Tutors
Gary, IN Algebra 2 Tutors
Gary, IN Calculus Tutors
Gary, IN Geometry Tutors
Gary, IN Math Tutors
Gary, IN Prealgebra Tutors
Gary, IN Precalculus Tutors
Gary, IN SAT Tutors
Gary, IN SAT Math Tutors
Gary, IN Science Tutors
Gary, IN Statistics Tutors
Gary, IN Trigonometry Tutors | {"url":"http://www.purplemath.com/gary_in_science_tutors.php","timestamp":"2014-04-16T16:40:24Z","content_type":null,"content_length":"23804","record_id":"<urn:uuid:45914aa6-76ab-40a4-9541-2aa1fe0ac656>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00382-ip-10-147-4-33.ec2.internal.warc.gz"} |
fourier transform of sin(3w)*cos(w)/(w^2)
June 29th 2011, 11:50 AM #1
Junior Member
Jun 2010
fourier transform of sin(3w)*cos(w)/(w^2)
Can someone tell me how to calculate the inverse fourier transform of sin(3w)*cos(w)/(w^2)
I know that 2sin(aw)/w has an inverse fourier transform that is 1 when -a<t<a and zero otherwise and therefore the inverse fourier should be the convultion of two of these and some other
thing...can you help me?
Last edited by mr fantastic; June 29th 2011 at 01:10 PM. Reason: Title
Re: fourier transform of sin(3w)*cos(w)/(w^2)
You mean the inverse of sinc is a rect (or box) function. The other function is cosinc which you'll need to find a way to invert.
Re: fourier transform of sin(3w)*cos(w)/(w^2)
Can someone tell me how to calculate the inverse fourier transform of sin(3w)*cos(w)/(w^2)
I know that 2sin(aw)/w has an inverse fourier transform that is 1 when -a<t<a and zero otherwise and therefore the inverse fourier should be the convultion of two of these and some other
thing...can you help me?
This function has a non-integrable singularity at w=0, so I don't see how it can have a Fourier transform (direct or inverse).
Re: fourier transform of sin(3w)*cos(w)/(w^2)
When you say that it doesn't have a FT are you considering that it can be a transform of a discrete variable signal?
Re: fourier transform of sin(3w)*cos(w)/(w^2)
Where did the problem come from?
Re: fourier transform of sin(3w)*cos(w)/(w^2)
My signals and systems exam
Re: fourier transform of sin(3w)*cos(w)/(w^2)
OK, just wantd to be sure it wasn't a typo. I'm still thinking about how to do this and Opalg's comment. It should be pointed out that the Fourier transform of $\text{sgn}(t)$ is $\frac{2}{j\
omega}$ which also seems to have a non-integrable singularity at $w=0$.
Re: fourier transform of sin(3w)*cos(w)/(w^2)
$\frac{\sin(3\omega)\cos \omega}{\omega^2}=\frac{1}{2}\text{sinc} (4\omega)\cdot \frac{1}{\omega}+\frac{1}{2}\text{sinc} (2\omega)\cdot \frac{1}{\omega}$
where $\text{sinc } x=\frac{\sin x}{x}$. Now invert using the convolution theorem.
June 30th 2011, 07:11 PM #2
Senior Member
May 2010
Los Angeles, California
June 30th 2011, 11:49 PM #3
July 1st 2011, 01:28 PM #4
Junior Member
Jun 2010
July 1st 2011, 03:10 PM #5
Senior Member
May 2010
Los Angeles, California
July 1st 2011, 04:46 PM #6
Junior Member
Jun 2010
July 1st 2011, 05:20 PM #7
Senior Member
May 2010
Los Angeles, California
July 2nd 2011, 06:03 PM #8
Senior Member
May 2010
Los Angeles, California | {"url":"http://mathhelpforum.com/calculus/183823-fourier-transform-sin-3w-cos-w-w-2-a.html","timestamp":"2014-04-21T13:26:48Z","content_type":null,"content_length":"49027","record_id":"<urn:uuid:86b9c8eb-93d2-4fee-9b28-d0a546b874f0>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00515-ip-10-147-4-33.ec2.internal.warc.gz"} |
st: graphing results
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
st: graphing results
From Matthijs De Zwaan <m.dezwaan@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject st: graphing results
Date Wed, 9 Sep 2009 12:34:10 +0200
Dear Stata-listers,
I am trying to graph the results of my regression. I have estimated a
model with quadratic terms and an interaction, as in y=x1 + x1^2 + x2
+ x1*x2. I am trying to plot my results in a graph of y versus x1.
Using -adjust- to keep covariates doesn't do what I need, since it
also sets x1*x2 to the mean of the interaction, rather than x1*(mean
I can get what I want by plotting it using the -graph twoway function-
command. My current code looks like:
graph twoway scatter y x1 ///
|| function y = _b[_cons] + _b[x1]*x = _b[x1^2]*x^2 + _b[x2]*x2_bar +
_b[x1*x2]*x1*x2_bar ,
where x2_bar is the mean of x2 (in the estimation sample).
However, when imported in a document, the final graph looks coarse:
more like a like a step-function than a smooth line, even when using
2500 points to graph the function instead of the default 300. Is there
a way to get a graph that produces the effect of x1 on y controlling
for covariates, but that looks better than what I have now?
Thanks for helping!
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2009-09/msg00381.html","timestamp":"2014-04-16T13:41:23Z","content_type":null,"content_length":"6397","record_id":"<urn:uuid:34f7bd2d-324f-4e84-97cc-775aade5cdaa>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00246-ip-10-147-4-33.ec2.internal.warc.gz"} |
digitalmars.D - Array alignment - fundamental problem/strangeness
Hi there,
trying to write up my proposal for multidimensional array references, I
stumbled over a peculiar strangeness in the current syntax of arrays.
const int A = 2;
const int B = 3;
alias mytype[A][B] M1;
alias mytype[A] Mtmp;
alias Mtmp[B] M2;
As I understand it, M1 and M2 should be identical, since [N] is just a type
modifier for anything that stands in front of it. Now, lets use it.
M2 m2;
for(int b=0;b<B;b++) {
Mtmp mt = m2[b]; // dereferencing the outer array
for(int a=0;a<A;a++) {
mytype m = mt[a]; // now the inner array
assert(m == (m2[b])[a]); // just a parenthesis for
assert(m == m2[b][a]); // identical to previous line
So obviously, an array declared as mytype[A][B] has to be used as m1[b][a] !
Tracing this strangeness back, it actually comes from the change from
C-style array declarations to D-style array types. Going back to C syntax
step by step, we get three equivalent declaration statements:
mytype[A][B] m;
is equivalent to
mytype[A] m[B];
is equivalent to
mytype m[B][A];
The situation might become clearer when considering something like:
mytype[char[]][3] X;
X[2]["s"] = something;
I do not see a simple solution out of this. The core problem is somewhere in
the strangeness of prefix and postfix operators. If the typemodifier uses
postfix notation, so the indexing operators unravelling the type have to be
applied in reverse order. One clean but ugly solution would be to make type
modifiers prefix operators "[B][A]int m;". But then, the "*" modifier would
still have to stay postfix, since the corresponding dereferencing operator
"*" is a prefix operator, leaving us with the old problem of precedence
Alternatively, we can just accept the situation, document it clearly, and
encourage people to use rectangular array notation once we have it:
mytype[B,A] m;
would be equivalent to
mytype[A][B] m;
yielding truely C-aligned arrays with B rows and A columns, each row saved
in a continuous block.
May 06 2004 | {"url":"http://www.digitalmars.com/d/archives/digitalmars/D/565.html","timestamp":"2014-04-18T18:37:33Z","content_type":null,"content_length":"10962","record_id":"<urn:uuid:0b9c9a90-cb34-4437-b153-fda2db4fcf27>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00382-ip-10-147-4-33.ec2.internal.warc.gz"} |
FUN with Matlab
February 21st 2009, 05:53 PM #1
Senior Member
Jan 2009
FUN with Matlab
I need a Matlab function that finds the last place in a line where a double character appears (that is, where the same character appears twice in a row)
So if the function is called locateDouble, locateDouble('adcdccd') should return 5 since inxes 5 is the first charcter of the last double character.
I posted my attempt at the program below, but it won't work.
function double=findDouble(line)
while place>0;
for j=1:length(line)-1;
if j~==j+1
I need a Matlab function that finds the last place in a line where a double character appears (that is, where the same character appears twice in a row)
So if the function is called locateDouble, locateDouble('adcdccd') should return 5 since inxes 5 is the first charcter of the last double character.
I posted my attempt at the program below, but it won't work.
function double=findDouble(line)
while place>0;
for j=1:length(line)-1;
if j~==j+1
try this:
Now try to use this idea.
February 23rd 2009, 08:31 AM #2
Grand Panjandrum
Nov 2005 | {"url":"http://mathhelpforum.com/math-software/74932-fun-matlab.html","timestamp":"2014-04-16T20:06:35Z","content_type":null,"content_length":"33032","record_id":"<urn:uuid:d26937a4-334d-4857-9485-30c4fae6afd5>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00004-ip-10-147-4-33.ec2.internal.warc.gz"} |
Thomas Young's double‐slit experiment shows that light spreads out in wavefronts that can interfere with each other.
is the effect of a wave spreading as it passes through an opening or goes around an object. The diffraction of sound is quite obvious. It is not at all remarkable to hear sound through an open door
or even around corners. In contrast, diffraction is quite difficult to observe with light. The difference is that sound waves are long while light waves are extremely short because differentiation is
proportional to wave length; it is not easy to observe the bending of light when it passes through a small aperture or goes around a sharp edge.
A single slit yields an interference pattern due to diffraction and interference. Imagine that the slit is wide enough to allow a number of wavelets. Figure 1 shows the wave‐ray diagram used to
analyze the single slit.
Figure 1 Diffraction of light through a single slit.
The rays from A and B interfere at P on a distant screen. As shown, AP exceeds BP by half a wavelength; therefore, the represented waves destructively interfere. Also for every wave originating
between A and B, there is another point between B and C with a wavelet that will destructively interfere. The wavelets cancel in pairs; thus, point P is a minimum or dark point on the screen.
The triangle ACD is nearly a right triangle if P is quite distant. Applying the definition for sine to the figure yields
where λ is the wavelength and w is the slit width. Whenever the path difference between AP and CP is a whole number of wavelengths, a dark fringe will be produced on the screen because the wavelets
can be seen to completely cancel in pairs.
Figure 2 illustrates the light rays traveling to another point on the screen.
Figure 2 Diffraction of light through a single slit.
In this case,
The region of wavelets is divided into three. Again, the waves through two regions cancel in pairs, but now the waves from one region constructively interfere to produce a bright point on the screen.
This is partial reinforcement. The positions of the light and dark fringes formed by a single slit are summarized in the intensity versus angle sketch shown in Figure 3. The center region of the
pattern will be the brightest band because the wavelets completely, constructively interfere in the middle.
Figure 3 Position of fringes produced by single-slit diffraction.
When looking through double slits, it is impossible to see only the double‐slit pattern because the double‐slit is really two single slits; therefore, the actual observed pattern is that of
superimposed double – and single‐slit patterns. | {"url":"http://www.cliffsnotes.com/sciences/physics/light/diffraction","timestamp":"2014-04-18T17:40:22Z","content_type":null,"content_length":"112008","record_id":"<urn:uuid:a8cb5ad9-3aa1-4339-8b24-af5cc38841fc>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00472-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics Forums - View Single Post - Question on characterization of elliptical polarization of EM wave.
I am using "Advanced Engineering Electromagnetics" 2nd edition by Balanis AND "Antenna Theory" 3rd edition also by Balanis. I found an inconsistency in how to characterize RHC (CW) and LHC ( CCW)
elliptical polarization.
1) In Advanced EE Page 159, for
[tex]\vec E(0,t)=Re[\hat x (E_R+E_L)e^{j\omega t}+\hat y (E_R-E_L)e^{j(\omega t+\Delta \phi)}][/tex]
[tex]\hbox { Where}\;\Delta\phi=\phi_x-\phi_y≠\frac{n\pi}{2}\;\hbox {where }\;n=0,2,4,6.....[/tex]
If [itex] \Delta \phi ≥ 0 [/itex], then, it is CW if [itex]E_R>E_L[/itex], CCW if [itex] E_R<E_L[/itex]
If [itex] \Delta \phi ≤ 0 [/itex], then, it is CCW if [itex]E_R>E_L[/itex], CW if [itex] E_R<E_L[/itex]
2) In Antenna Theory Page 74,
[tex]\Delta\phi=\phi_y-\phi_x≠^+_-\frac{n\pi}{2}\;\hbox {where }\;n=0,1,2,3.....[/tex]
If [itex] \Delta \phi ≥ 0 [/itex], then, it is CW.
If [itex] \Delta \phi ≤ 0 [/itex], then, it is CCW.
To avoid confusion, just use one example where [itex]\Delta\phi=\frac {\pi}{4}[/itex], you can see using Advanced EE, there are two condition that can give you CW or CCW. But in Antenna, there is
only one condition which is CW.
How do you explain the inconsistency? Yes, there are confusion as the definition of [itex]\Delta\phi[/itex] is opposite between the two. But if you look pass the difference, you can still see the
inconsistency. Am I missing something? | {"url":"http://www.physicsforums.com/showpost.php?p=4248897&postcount=1","timestamp":"2014-04-25T08:19:14Z","content_type":null,"content_length":"10014","record_id":"<urn:uuid:d0a024f6-21a9-4e25-ab77-a638833b60f5>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00071-ip-10-147-4-33.ec2.internal.warc.gz"} |
Ratio and Root Test
March 23rd 2009, 04:57 PM #1
Junior Member
Feb 2009
Ratio and Root Test
How would you work out if these series converge or diverge?
Problem 1
n=1 to infinity n!/10^n
Problem 2
n=1 to infinity ((n-2)/n)^n
Problem 3
n=1 to infinity (-2)^n/3^n
What exactly are you having troubles with? You are already told which tests to use.
For example, the first one. Let $a_n = \frac{n!}{10^n}$.
Then: $\lim_{n \to \infty} \left| \frac{a_{n+1}}{a_n}\right| = \lim_{n \to \infty} \left| \frac{(n+1)!}{10^{n+1}} \cdot \frac{10^n}{n!}\right| = \lim_{n \to \infty} \left|\frac{(n+1)n!}{10^n \
cdot 10} \cdot \frac{10^n}{n!}\right| = \cdots$
still not sure how to do the third problem though....can't use root test because of the negative term. There too many tests....
March 23rd 2009, 05:11 PM #2
March 24th 2009, 03:35 PM #3
Junior Member
Feb 2009
March 24th 2009, 05:04 PM #4 | {"url":"http://mathhelpforum.com/calculus/80248-ratio-root-test.html","timestamp":"2014-04-16T14:17:19Z","content_type":null,"content_length":"39003","record_id":"<urn:uuid:83d09ac0-4788-445c-aea5-b23a0e82b79b>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00573-ip-10-147-4-33.ec2.internal.warc.gz"} |
Free cool animated math videos, practice questions and more
August 7th 2008, 05:47 AM #1
Aug 2008
Free cool animated math videos, practice questions and more
This site is cool... it has free math lessons. The videos are created using computer graphics. Each lesson comes with practice questions with step-by-step solution. Good resource for both
teachers and students :-)
Check it out here!
Main Page:
Math Expression: Free math tutor online
Some lessons:
Adding Fractions
Graphing Linear Equations
Follow Math Help Forum on Facebook and Google+ | {"url":"http://mathhelpforum.com/math/45492-free-cool-animated-math-videos-practice-questions-more.html","timestamp":"2014-04-18T12:36:20Z","content_type":null,"content_length":"28863","record_id":"<urn:uuid:d330d7e4-eb12-406d-bdb7-c63ff8ff514a>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00598-ip-10-147-4-33.ec2.internal.warc.gz"} |
Nautical mile
Nautical mile: distance of one minute of longitude at the equator, approximately 6,076.115 feet. The metric equivalent is 1852 meters.
Navsac: navigation safety advisory council, an industry advisory body to the u.s. coast guard.
NAUTICAL MILE: One minute of latitude; approximately 6076 feet or 1852 metres - about 1/8 longer than the statute mile of 5280 feet.
NAVIGATION: The art and science of conducting a boat safely from one point to another.
nautical mile - Distance at sea is measured in nautical miles, which are about 6067.12 feet, 1.15 statute miles or exactly 1852 meters.
nautical mile
A distance of 6,076.12 feet or 1,852 meters, which is about 15 percent longer than a statute mile. Equivalent to one minute of latitude on a navigation chart.
nun buoy
Conical navigation buoy that is usually red.
Nautical Mile ............
1 nautical mile is an International measurement of distance at sea level (1.85 kms).
Planing Hull ..............
Nautical Mile: The unit of geographical distance used on "salt-water" charts. 1 nautical mile corresponds exactly to 1 minute of angular distance on the meridian (adjacent left and right side of a
sea chart).
Nautical Mile - One minute of latitude; approximately 6076 feet - about 1/8 longer than the statute mile of 5280 feet.
Overboard - Over the side or out of the boat.
Nautical mile (NM): International standard for measuring distance on water. One nautical mile equals one minute of latitude. (One nautical mile equals 1.15 land miles.)
O ...
Nautical mile - A distance of 1.852 kilometres (1.151 mi). Approximately the distance of one minute of arc of latitude on the Earth's surface. A speed of one nautical mile per hour is called a knot.
nautical mile - An international distance of 1852 meters or 6076.12 feet. A nautical mile equals one minute of latitude. See also "Mile."
naval architect - An architect who specializes in marine design.
Nautical mile: a unit of length corresponding approximately to one minute of arc of latitude along any meridian arc. By international agreement it is exactly 1,852 metres (approximately 6,076 feet).
nautical mile: 6,080 feet measure of length at sea (2025 yards). 1 mile = 1,760 yards. neap tide: a tide of less than average range, occurring at the first and third quarters of the moon.
2 Nautical Miles
Looking towards the shore: One recognizes doors and windows but not human beings.
Looking only over the water: One barely starts to identify large buoys. At night, boats navigation lights start to be visible.
1 Nautical Mile ...
Nautical mile
The unit of distance in the nautical system. There are 60 nautical miles in one degree of latitude. 1 nautical mile = 1.15 statute miles.
Near gale ...
NAUTICAL MILE - One minute of latitude; A measurement used in salt water approximately 6,076 feet - about 1/8 longer than the statute mile of 5,280 feet.
NAVIGATION - The art and science of conducting a boat safely from one point to another.
Nautical Mile
One 60th of a degree of latitude, or one minute of latitude. Approximately equal to 6,076.1 feet, or 1.15 statute miles.
Navigation ...
Nautical Mile - Mi on nautical maps is nautical mile 115 land miles = 1 nautical mile or about 2000 yards
Nautical Speed - Knots ( not knots per hr )
Navigation Time - Use 24 hours ( 1400 = 2 pm ) and tenths rather than minutes ...
NAUTICAL MILE A measure of distance equal to one minute of latitude which is approximately 6076 feet.
NAVIGABLE Water which is of sufficient depth to allow a boat navigate.
3 nautical miles.
The direction that the wind is blowing toward. The direction sheltered from the wind.
A nautical mile, or knot, is the same as a geographical mile. Its length is six thousand and eighty feet. A statute mile in the United States measures five thousand two hundred and eighty feet.
-- Next -- ...
--N-- NAUTICAL MILE See knot. NIBBING PLANK A margin plank that is notched to take the ends of regular deck planks and insure good calking of the joint. NIGGERHEAD A small auxiliary drum on a winch.
See Gypsy.
Mile- A nautical mile is 6,080 feet.
Mizzen- Mizzenmast. The shorter, after mast on a boat.
Motor sailer- A boat that uses both sail and engine. The engine in these boats is larger that an auxiliary.
A speed of one nautical mile per hour. A method of attaching a rope or line to itself, another line or a fitting.
WORLD of YPI ...
Nautical Mile: One minute of latitude, 1852 meters
Navigation: The teaching of commanding a boat safely from one point to another ...
(2) A nautical measurement of distance, a tenth of a nautical mile, 100 fathoms, or approximately 200 yards ...
a chamber to dry the wood.
King Plank The centerline plank of a deck.
Knee See Hanging Knee.
Knockabout A type of schooner without a bowsprit.
Knockdown To be capsized by the wind or waves.
Knot 1) A speed of one nautical mile (6, ...
Knot (1) a speed of one nautical mile per hour. (2) a method of attaching a rope or line to itself, another line or a fitting. Land breeze A wind moving from the land to the water due to temperature
changes in the evening.
Nautical unit of distance, having a standard value of 1/10th of a nautical mile (608 ft.) or 100 fathoms.
Cable-bitt - Large vertical timbers, morticed into the keel, to which anchor and mooring cables were attached.
Knot - nautical mile (6,076 ft.) per hour ( a measure of speed).
Lee of the Land - near a shore which provides protection
from wind and waves.
Lee Shore - land downwind of a boat.
Leeward - downwind; away from the source of wind.
Calm: A wind or force less than one knot (knot: 1 nautical mile per hour).
Camel: A wooden float placed between a vessel and a dock acting as a fender.
One degree of latitude (or one degree of longitude at the equator) is equal to 60 nautical miles and a minute (1/60th of a degree) of latitude is defined as one nautical mile (equal to 1.1508 statute
KNOT: A measure of speed equal to one nautical mile (6076 feet) per hour.
LEE: The side sheltered from the wind.
LEEWARD: The direction away from the wind. Opposite of windward.
knot -- a nautical mile (equivalent to 1.15 miles or 1.852km). Also, any of various tangles of line formed by methodically passing the free end through loops and drawing it tight.
landfall -- first sight of land ...
KNOT - A measure of speed equal to one nautical mile (6076 feet) per hour.
LATITUDE - The distance north or south of the equator measured and expressed in degrees.
LEE - The side sheltered from the wind.
LOG - A record of courses or operation.
Equivalent to (UK) 1/10 nautical mile, approx. 600 feet; (USA) 120 fathoms, 720 feet (219 m); other countries use different values.
Determine the distance of each course in nautical miles using your dividers and the distance scale on the top or bottom of the chart. This is done by putting one end of the dividers on your start
point, and the other end at your stop point or turn.
This equation can be used either for units of statue miles and miles per hour or nautical miles and knots.
One knot equals one nautical mile per hour. This rate is equivalent to approximately 1.15 statute miles per hour, or exactly 1.852 kilometers per hour.
A B C D E F G H I J L M N O P Q R S T U V W X Y Z
Payment Options ...
LEAGUE : A distance of three nautical miles.
LETTER OF MARQUE : A document given to a captain allowing him to attack enemy ships under the authority of the crown, in return for a cut of the loot.
KNOT-Measure of distance; one nautical mile, 6,080 feet. Measure of speed: one nautical mile per hour.
LAPSTRAKE-Overlapping plank of a boat.
LAZARETTE-A stowage compartment in the stern.
KNOT Unit of speed in navigation which is the rate of nautical mile (6,080 feet or 1,852 meters) per hour.
KVA This is the voltage-ampere requirement of a device designed to convert electric energy to a non-electrical form ...
One knot is a speed of one nautical mile per hour or 1.852km/hr.
A small line used to join to anything. Example: bucket ...
A nautical term for speed: one nautical mile per hour. Also a term indicating a method of tying a line.
Lash ...
knot - Rate of motion equal to 1 nautical mile per hour (about 1.15 miles per hour) ...
territorial sea now extends twelve nautical miles beyond the baseline, which runs along the coast and across the mouths of rivers and bays.
Single sideband Radiotelephone (2-27.5 MHz) - Used to communicate over medium and long distances (hundreds, sometime thousands of nautical miles).
Satellite Radio - Used to communicate by means of voice, data or direct printing via satellites.
For prescribed lights the value of K shall be 0.8, corresponding to a meteorological visibility of approximately 13 nautical miles.
(b) A selection of figures derived from the formula is given in the following table: ...
Knots are the way boat speed is often measured — more so on larger crafts than on speed or ski boats. Knots measure nautical miles per hour (1.151 MPH).
switchA switch with a lanyard that automatically shuts off an engine if disconnected. kite fishingA technique that involves attaching a fishing line to a kite to present bait at a distance from the
boat. knotSpeed measured in nautical miles ...
Nautical, Boat, Point, Mile, Wind | {"url":"http://en.mimi.hu/boating/nautical_mile.html","timestamp":"2014-04-18T11:06:47Z","content_type":null,"content_length":"32398","record_id":"<urn:uuid:69f14c62-66fa-458e-8c72-01e9f24cfe9a>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00537-ip-10-147-4-33.ec2.internal.warc.gz"} |
76 (dB)
Making noise! Decibels (or dBs) are how sounds are measured usually on a range from 0 dBs to 120 dBs. A whisper would be about 30 dBs. 75 -80 dBs would be the sound that an air conditioner or vacuum
cleaner would make. A subway train would be about 100 dBs. Exposure to sound levels above 140 dBs like a gun blast could lead to hearing damage or loss. “In terms of power, the sound of the jet
engine is about 1,000,000,000,000 times more powerful than the smallest sound that your ears can just barely hear.” That’s a trillion (or 10^+12) times. The sound of a jet engine is so much bigger
than the sound of a sleeping mouse because sound levels grow exponentially not linearly.
From the Handbook to Acoustic Ecology, edited Barry Truax:
“Because of the very large range of sound intensity which the ear can accommodate, from the loudest (1 watt/m^2) to the quietest (10^-12watts/m^2), it is convenient to express these values as a
function of powers of 10. This entire range of intensities can be expressed on a scale of 120 dB.
“The decibel is defined as one tenth of a bel where one bel represents a difference in level… where one is ten times greater than the other… For instance, the difference between intensities of 10-8
watts/m2 and 10-4 watts/m2, an actual difference of 10,000 units, can be expressed as a difference of 4 bels or 40 decibels.
“The result of this logarithmic basis for the scale is that increasing a sound intensity by a factor of 10 raises its level by 10 dB; increasing it by a factor of 100 raises its level by 20 dB; by
1,000, 30 dB and so on. When two sound sources of equal intensity or power are measured together, their combined intensity level is 3 dB higher than the level of either separately. Thus, two 70 dB
cars together measure 73 dB under ideal conditions. However, note that when the amplitude of a single sound is doubled, its level rises 6 dB.”
Post a Comment
This entry was posted in Uncategorized and tagged 10-to-minus08, 10-to-minus12, 10-to-plus02, 10-to-plus03, 10-to-plus12, 76, 76 decibels, air conditioner, Barry Truax, Countdown to 10/10/11, dB,
exponential growth, factor of 10, factor of 100, jet engine, sound, trillion, vacuum cleaner. Bookmark the permalink. Post a comment or leave a trackback: Trackback URL. | {"url":"http://blog.powersof10.com/?p=2785","timestamp":"2014-04-16T16:00:28Z","content_type":null,"content_length":"27937","record_id":"<urn:uuid:2d3b06d5-f883-4a8a-a800-a4604f4db93a>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00073-ip-10-147-4-33.ec2.internal.warc.gz"} |
Did You Ever Wonder? Scientist profile: David Bailey
An alchemical algorithm
In 1977, Helaman Ferguson and Rodney Forcade made the biggest advance in integer-relation detection since Euclid, whose method for finding the greatest common divisor of two numbers dates to about
300 BCE. For over 2,000 years mathematicians as renowned as Leonhard Euler, Carl Gustav Jacobi, Henri Poincaré, and Hermann Minkowksi sought ways to find integer relations among more than two
numbers. Ferguson and Forcade succeeded where they failed.
In the 1990s David Bailey collaborated with Ferguson on a new algorithm called PSLQ, which runs on high-performance computers. Integer-relation detection suddenly became practical, efficient, and
fruitful -- so much so that in 2000, the editors of Computing in Science and Engineering named PSLQ one of the "top 10 algorithms of the century."
This simple formula can calculate any binary or hexadecimal digit of pi without calculating the digits preceding it.
Among other relations, PSLQ has uncovered formulas in algebraic number theory, relations among quantum field theory constants symbolized by Feynman diagrams, and a surprisingly simple formula, given
in a paper by Bailey, Peter Borwein, and Simon Plouffe (BBP), that can calculate any binary digit of pi without calculating the digits preceding it.
Now a desktop computer can do what mathematicians, until recently, thought was impossible (although BBP works only for binary digits, not decimal ones).
Finding pi's millionth binary digit takes a few seconds, using this simple program and very little memory.
Or visit a NERSC web page, to find out if your name, or any short digit string, appears within pi's first four billion binary digits.
Underlying the astonishing BBP formula lies a deeper theory of the expansions of fundamental constants, one that has provided a remarkable proof of the digit randomness of an entire class of numbers.
Pi isn't included yet, but it may not be far behind.
More on the BBP algorithm
More on PSLQ and its uses | {"url":"http://www.lbl.gov/wonder/bailey-2.html","timestamp":"2014-04-20T18:28:53Z","content_type":null,"content_length":"5121","record_id":"<urn:uuid:b21c1bcb-4312-4ac3-afa8-f9b03dd76b18>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00259-ip-10-147-4-33.ec2.internal.warc.gz"} |
Brevet US7492448 - Optical method of determining a physical attribute of a moving object
The present invention relates to an optical method for determining a physical attribute of a moving object. The present invention also relates to a method for optically establishing a mathematical
spatial relationship between one or more cameras and one or more fanned lasers each capable of projecting a laser beam along a laser plane.
A number of optical methods and systems are readily available for determining the physical attribute of an object such as the dimension of an object or its orientation. In particular, there exists a
number of dynamic measuring systems based on the combination of fanned laser beams and digital cameras. A fanned laser beam emits a fan of laser rays, emanating from a centre of the laser, which all
lie in a common plane; that is, the plane of the fanned laser, or the “laser plane”. When the fanned laser illuminates a body, the laser rays create a line on an outer surface of the body which is
the intersection of the laser plane with the outer surface of the body.
The line created by the fanned laser can be recorded by a digital camera as a two-dimensional image. If the line created by the fanned laser is recorded by a digital camera, and the fanned laser and
digital camera are in a fixed position relative to each other, then it can be shown mathematically that each point in the camera's view illuminated by the laser can be resolved into a three
dimensional position in any particular co-ordinates, provided the orientation of the laser plane, and the position and orientation of the camera are predefined precisely in co-ordinates related to
the body being observed, and provided the camera's optical settings and characteristics are also known.
In industrial applications setting the lasers and cameras into precisely known orientations relative to the body being measured is difficult and sometimes impractical.
An object of the present invention is to provide an optical method for determining a physical attribute of an object utilising fanned lasers and digital cameras which does not require precise
mechanical setup of the three dimensional location and orientation of the fanned lasers and digital cameras relative to the moving object.
According to a first aspect of the present invention, there is provided a method for optically determining a physical attribute of an object moving along a defined path, the method comprising the
steps of:
□ fixing one or more cameras, each camera being located to view the object when the object is at a trigger location;
□ fixing one or more fanned lasers, each laser being located outside the path and projecting a laser beam along its laser plane onto the object when the object is at the trigger location, the
intersection of the laser plane with the object at the trigger location being visible by at least one of the cameras;
□ optically establishing a mathematical spatial relationship between the cameras and the plane of each of the laser beams;
□ creating a pixelated image of the object in one or more of the cameras illuminated by the planar laser beams when the object is at the trigger location;
□ selecting at least one pixel location in each image, the at least one pixel location corresponding to a point on the object illuminated by a laser beam;
□ for each of the selected pixel locations, using the mathematical spatial relationship to establish the three dimensional position of the point based on the two dimensional position of the
pixel location;
□ using the three dimensional position of the respective point to determine the physical attribute of the object.
Preferably, optically establishing a mathematical spatial relationship further comprises:
□ establishing an orientation and location of each camera with respect to a co-ordinate system;
□ establishing an orientation of each laser plane within the co-ordinate system; and
□ deriving a transformation function for calculating the three dimensional position of points within the plane of each respective laser beam from the pixel location within a pixelated image.
Preferably, establishing an orientation and location of each camera further comprises:
□ temporarily mounting a calibration device having at least six non-collinear visible markings at known points on at least two non-parallel surfaces of the calibration device, the calibration
device being positioned in the path and in view of each camera at a reference position such that each camera can view the at least six points;
□ for each camera, creating a first pixelated image of the calibration device; and
□ using the known position of the at least six markings relative to the co-ordinate system and the pixel locations within the first image to establish a transformation equation between pixel
locations and the three dimensional co-ordinates of the calibration device at the reference position.
Preferably, establishing an orientation of each laser plane within the co-ordinate system further comprises:
□ illuminating the calibration device with each laser beam to form a line along the surface of the calibration device;
□ for each camera, creating a second pixelated image of the calibration device; and
□ using the position of at least three non-collinear points within the line relative to the co-ordinate system and the pixel locations corresponding to the positions of the points within the
second image to establish an equation defining the orientation of the laser.
The first pixelated image may also be the second pixelated image.
Preferably, a pixel location is defined to sub-pixel accuracy using image analysis techniques.
According to a second aspect of the present invention, there is provided an optical method for determining a physical attribute of an object moving along a defined path, the method comprising:
□ fixing at least one fanned laser at a position outside of the path to project its laser beam onto the moving object when the moving object is at a trigger location;
□ fixing at least one camera at a location to view the moving object when illuminated by the laser beam at the trigger location, each camera producing a digital image comprising an array of
□ forming a calibration device comprising two planar surfaces which intersect in a line forming an edge of the device and, at least six non-collinear visible points on the planar surfaces at
known locations on the calibration device defining a calibration co-ordinate system;
□ temporarily mounting the calibration device in the path in view of the at least one camera, and where illuminated by the at least one fanned laser;
□ producing an image of the device on each camera and determining for each of one or more pixel locations within the image an equation in terms of the calibration co-ordinate system, of a ray
passing through a centre of lens of the camera which, when projected onto the device coincides with the pixel location;
□ determining an equation of a plane in the calibration co-ordinate system containing the fanned laser beam;
□ removing the calibration device;
□ taking an image of the object when illuminated by the at least one laser beam at the trigger location and utilising the laser plane equations, determining a three dimensional location in the
calibration co-ordinate system of selected pixel locations of the object illuminated by the at least one laser, and from the three dimensional locations determining physical attribute of the
Preferably, forming said calibration device further comprises arranging said first and second planar surfaces at right angles to each other.
Preferably, forming said calibration device further comprises providing a third planar surface having a first edge coincident with an edge of said first planar surface distant said second planar
surface, and a fourth planar surface having a first edge coincident with an edge of said third planar surface distant said first planar surface, and a second edge coincident with an edge of said
second planar surface distant said first planar surface.
According to a third aspect of the present invention, there is provided a method for optically establishing a mathematical spatial relationship between one or more cameras and one or more fanned
lasers each capable of projecting a laser beam along a laser plane, the method comprising:
□ establishing an orientation and location of each camera with respect to a co-ordinate system;
□ establishing an orientation of each laser plane within the co-ordinate system; and
□ deriving a transformation function for calculating the three dimensional position of points within the plane of each respective laser beam from a pixel location within a pixelated image
created by each of the cameras.
In order that the invention may be more easily understood, an embodiment will now be described, by way of example only, with reference to the accompanying drawings, in which:
FIG. 1 is a perspective view from the side of a pantograph on the roof of an electrically powered train;
FIG. 2 is a perspective view from the front of a pantograph head;
FIG. 3 is a view of section A-A of the pantograph head depicted in FIG. 2;
FIG. 4 illustrates an optical measurement system incorporating an embodiment of the present method;
FIG. 5 is an enlarged view of the system shown in FIG. 4 from the perspective of one camera incorporated in the system;
FIG. 6 is a view of a pantograph head illuminated by a plurality of fanned lasers incorporated in the system shown in FIGS. 4 and 5;
FIG. 7 is a cross-sectional view of a portion of the pantograph head as viewed by a camera in the system depicted in FIGS. 4 and 5;
FIG. 8 is a schematic representation of a portion of the calibration device incorporated in an embodiment of the present method;
FIG. 9 is an illustration of the measurement system during a calibration process;
FIG. 10 is a photograph of the calibration device incorporated in the present invention when viewed from one of the cameras in the system depicted in FIGS. 4 and 5;
FIG. 11 is a photograph of the calibration device when viewed from another of the cameras incorporated in the system shown in FIGS. 4 and 5 and illuminated by a plurality of fanned lasers;
FIG. 12 illustrates the co-ordinate systems of major components of the system illustrated in FIGS. 4 and 5;
FIG. 13 illustrates the relationship between the co-ordinate systems of an image plane of a camera incorporated in the system, the camera, and the calibration device;
FIG. 14 illustrates a fanned laser illuminating the calibration device;
FIG. 15 illustrates the orientation of the pantograph head;
FIG. 16 is a representation of a portion of the pantograph as seen by one of the cameras in the system shown in FIGS. 4 and 5;
FIG. 17 is a schematic representation of a method for fitting a cylinder profile to the pantograph head;
FIG. 18 depicts various planes on the pantograph carbon and carrier;
FIGS. 19 a and 19 b illustrate the geometry in the process of fitting a cylinder to the pantograph head; and,
FIG. 20 illustrates the geometry in measuring the thickness or height of the pantograph carbon.
An embodiment of the present invention is described in relation to a pantograph of an electrically powered train. As shown in FIGS. 1 to 3, the pantograph head 10 comprises two or more parallel and
spaced apart metal beams 12 a and 12 b (hereinafter referred to collectively as “beams 12”). Each beam 12 a, 12 b comprises a metal section 13 a, 13 b (hereinafter referred to collectively as “metal
sections 13”) to which carbon bushes 14 a and 14 b (hereinafter referred to collectively as “carbons 14”) are attached. The metal sections 13 together are known as the “carrier”.
The beams 12 and carbons 14 extend transversely to an overhead wire 16 from which the train derives electric current for powering its motor(s). The pantograph head 10 and wire 16 are generally
orientated so that the wire contacts the carbons 14 in a region about their mid-point. The carbons 14 have a central portion 18 which comprises the majority of its length and is of uniform thickness
h, and contiguous end portions 20 which reduce in thickness. In use, the wire 16 is substantially always maintained in contact with the central portion 18 of the carbons 14.
Throughout their service life, the carbons 14 wear due to contact with the wire 16, and occasionally are damaged through contact with foreign objects. The wear is reflected in a decrease in the
thickness h of the carbons 14. Damage through contact with foreign objects is reflected in the removal of chunks of material from the carbons 14.
Embodiments of the present invention provide for an optical system and method for determining a physical attribute of a moving object, without requiring precise mechanical set-up of various elements
of an associated optical measurement system 22. In the present embodiment, the moving object is the pantograph 10 or more particularly the carbons 14, and the physical attribute to be determined is
the thickness h of the carbons 14 along their length.
FIG. 4 depicts the general set-up of the optical system 22. The system comprises three fanned lasers 24 a, 24 b and 24 c (hereinafter referred to collectively as “lasers 24”) which are supported at a
vertical distance h1 above the wire 16; and, two digital cameras 26 a and 26 b (hereinafter referred to collectively as “cameras 26”) which are located a vertical distance h2 below the wire 16 on
either side of the pantograph head 10. Each of the lasers 24 a-24 c produces a corresponding laser plane 28 a-28 c (hereinafter referred to in general as “laser plane 28”). Each laser plane 28 is a
plane containing all the laser rays emitted from the respective laser 24. The lasers 24 emit radiation of a visible wavelength and thus when the lasers 24 project light onto or illuminate the
pantograph 10 they each produce two visible laser stripes 32, which correspond to the intersection of the pantograph head 10 with the respective laser plane 28. The laser 24 a produces laser stripes
32 a and 32 b on beams 12 a and 12 b respectively, laser 24 b produces laser stripes 34 a and 34 b on beams 12 a and 12 b respectively, and laser 24 c produces laser stripes 36 a and 36 b on beams 12
a and 12 b respectively.
Each of the cameras 26 looks upwardly at the pantograph head 10 toward backboards 38 a and 38 b respectively which are supported above the wire 16. The camera 26 a views the stripes 32 a, 32 b, 34 a
and 34 b, while the camera 26 b views the stripes 34 a, 34 b, 36 a and 36 b. The backboards 38 a and 38 b allow the cameras 26 to record a silhouette of the pantograph head 10, and in particular the
carbons 14.
The lasers 24, cameras 26 and backboards 38 are all supported in locations outside of the path of motion of the pantograph 10 and the train to which it is coupled.
In addition, the lasers 24 and cameras 26 are arranged so that the laser planes 28 are not parallel to the image plane of the cameras nor passes through the camera origin. It will be appreciated that
the sensitivity of the system 22 decreases as the angle between the laser plane 28 and the axis normal to the image plane approaches 0°. Preferably, the laser planes 28 are about 45° to the axis
normal to image plane.
The system 22 is arranged to view the front of the pantograph 10 relative to its direction of motion. A second identical system may also be provided to view the opposite or reverse side of the
pantograph head 10. This will enable measurement of the carbons 14 from opposite sides.
The following description is made in relation to only one of the cameras 26 a of the system 22 as the operation of the system 22 and the associated method is identical for the camera 26 b and indeed
for corresponding cameras in an identical system (not shown) viewing the rear side of the pantograph head 10.
FIGS. 5 and 6 depicts the view of the pantograph head 10 as observed by camera 26 a when illuminated by the lasers 24 a and 24 b. The camera 26 a is able to see, against the backboard 38 a, laser
stripes 32 a and 32 b, 34 a and 34 b and 36 b. However, the capture of the image of stripe 36 b is not critical.
FIG. 7 depicts in cross-section the beam 12 a at a location in which the beam 12 a is illuminated by the laser 24 a. The laser 24 a produces the stripe 32 a which is depicted in heavier line. This
stripe 32 a extends across an upper surface 40 of the carbon 14 a down a front surface 42 of the carbon 14 a, along an upper surface 44 of the metal section 13 a and down a front surface 46 of the
metal section 13 a terminating at a lowest point 48.
The lowest point 48 coincides with leading or front bottom corner of the metal section 13 a. The beam 12 a as viewed by the camera 26 a has a silhouette of a width W. However, the true height or
thickness of the entire beam 12 a is height H. The height H is a combination of the thickness of the metal section 13 a, which remains constant throughout the life of the pantograph head 10, and the
thickness h of the carbon 14 a, which decreases in time due to wear.
As discussed in further detail below, knowing the location in three dimensions of the equivalent corner point 48 for each of the laser stripes 32 a, 32 b, 34 a and 34 b gives four points on the
surface of the pantograph head 10. From these points, the orientation of the pantograph head 10 can be determined. Further, from the knowledge of the orientation of the pantograph head 10, relative
to the camera 26 a, a transformation between the silhouette width W and the height H can be derived and thus the thickness h of the carbon 14 determined.
As mentioned in the Background of the Invention, it is possible to determine the three dimensional location of a point illuminated by a laser if the position of the laser and position and direction
of the camera are precisely defined relative to the body being observed. However, it will be appreciated that determining these positions particularly having regard to the lasers and cameras being
located off the ground precise measurement of the location of the cameras and lasers is impractical.
Embodiments of the present invention enable such a relationship to be determined without the need to physically measure with precision the location and orientations of the lasers 24, cameras 26 and
pantograph head 10. Rather, the present method utilises a calibration process and a calibration device to determine the relative orientations of the camera 26 and laser planes 28.
In the embodiment shown in FIGS. 8 to 11, the calibration device, in the form of a calibration “block” 50, comprises two planar non-parallel surfaces 52, 54, each composed of corresponding precise
rectangular plates which intersect at a line or edge 56. Ideally, although not necessarily, the surfaces 52 and 54 are at right angles to each other. An edge 58 of the surface 52, and adjacent edge
60 of the surface 54, together with the edge 56 are machined to create a precise set of rectangular axes with a vertex at a corner O. In other words, edges 56, 58, 60 are mutually orthogonal and
intersect at the corner O.
Each of the surfaces 52 and 54 of the calibration block 50 is provided with at least three markings in the form of dots 62 created by drilling corresponding small holes (of approximately 5 mm
diameter) through the plates. Each drilled hole is filled with translucent material of visually contrasting colour to the surfaces 52 and 54 (for example the surfaces may be black in colour and the
translucent material white). To highlight the dots 62, the device 50 may be backlit from the rear. The dots 62 are positioned at random or pseudo-random locations on their respective surfaces. The
location of each dot 62 on the surface 52 is precisely known relative to edges 56 and 58. Similarly, the location of each dot 62 on the surface 54 is precisely known relative to the edges 56 and 60.
The location of the dots 62 is held in a look up table on a computer.
In the event that the system 22 is to be used to measure the characteristics of the pantograph head 10 from both the front and the rear, the device 50 will comprise two further surfaces (not shown)
of identical configuration to the surfaces 52 and 54 and attached to the surfaces 52 and 54 to form a box-like structure comprising the surface 52, the surface 54 a further surface parallel to the
surface 52 and a further surface parallel to the surface 54.
In order to calibrate the system 22, the calibration block 50 is temporarily supported at a location corresponding generally to a location through which the pantograph head 10 will pass. The
calibration block 50 must be stationary, in the field of view of all cameras, and in a position where all lasers 24 shine across the surfaces 52, 54. Theoretically, it is possible to set the
calibration block 50 at any orientation relative to the local world co-ordinates. The orientation of the pantograph head 10 can be computed in the local world co-ordinates so long as it is possible
to compute the transformation from the calibration device orientation to the local world co-ordinates.
Calibration of the system 22 is simplified by orientating the calibration block 50 during the calibration process so that the edge 56 is generally transverse to the rails. Referring to FIG. 9, the
calibration block 50 is orientated so that it is set in essentially the same as the location in which the pantograph head 10 is to be measured (the “trigger location”). Accordingly, the calibration
block 50 will be placed in the path of the laser planes 28 and in view of the cameras 26. The physical location of the calibration block 50 does not need to be precise, provided it is in the field of
view of the cameras 26 and is illuminated by the lasers 24. A mechanical frame (not shown) supports the calibration block 50 in a location so that the surfaces 52 and 54 are approximately at 45° to
the horizontal, an upper edge 64 of the calibration block 50 contacts the wire 16 and the edge 56 of the calibration block 50 lies approximately square to the rails.
FIG. 10 shows an actual calibration block 50 as viewed by the camera 26 a prior to illumination by the lasers 24. The dots 62 are clearly visible in an image plane of the camera 26 a.
FIG. 11 depicts the calibration block 50 as viewed by the camera 26 b when illuminated by lasers 24 b and 24 c and showing corresponding laser stripes 66 b and 66 c. The laser 24 a also produces a
visible stripe 66 a on the calibration block 50 which is in the field of view of camera 26 a.
The calibration of the system 22, which enables the location of the lasers 24 and cameras 26 to be determined in a calibration co-ordinate system corresponding to the co-ordinate system of the device
50, is described below. Clearly, the location of the origin of the calibration co-ordinate system is arbitrary. However, it will be appreciated that in practice the calibration process is simplified
if the origin is located within the calibration device. In this embodiment, the origin of this co-ordinate system is in the middle of the device.
Broadly speaking, the method of calibration establishes the position and orientation of the cameras 26, and the orientation of the laser planes 28, relative to a common co-ordinate system. In this
embodiment, the common co-ordinate system is defined by the calibration block 50 (namely, the calibration co-ordination system), and the equations of the laser planes 28 are defined relative to the
calibration co-ordination system.
Accordingly, in this embodiment, the three dimensional location of any point illuminated by a laser stripe 32 on the pantograph head 10 (or any other object) when viewed by one of the cameras 26 can
be determined.
As previously stated, each laser 26 emits a corresponding plane of light, i.e. a laser plane 28. The equation of any one of these planes can be expressed in vector form by the equation:
n·w=c[Eqn. 1]
□ where n is the unit vector normal to the plane, w is a point on the plane and c is a scalar equal to the distance of the point from the origin O of the calibration device.
The orientation of the laser plane 28 is entirely arbitrary. However, it is practical to align it as close as possible to be vertical.
The image produced by each camera 26 is a regular two dimensional rectangular array of pixels. Standard image processing techniques are used to determine locations of items of interest within the
array of pixels to sub-pixel accuracy. All references to pixel position or location can be a continuous real number rather than a discrete integer number.
A given pixel position (p[x], p[y]) in the image relates to a single ray in three dimensional space defined by the camera co-ordinate mapping transformation (i.e. camera orientation). This ray will
intersect a laser plane 28 in a unique point in three dimensional space. Hence, if the mathematical relationship of the laser plane 28 and the rays from the camera 26 can be defined then the three
dimensional position of any point illuminated by the laser can be computed from its corresponding pixel co-ordinates. This is true for all points on the stripes 32 a, 32 b, 34 a, 34 b, 36 a and 36 b
traced on the pantograph.
In this analysis it is assumed that the lasers 26 and cameras 24 can be positioned fairly accurately, but not with enough precision to allow a hard-coded transformation between the camera image and
the laser position. This transformation is determined by calibration. The accuracy of this system depends on the calibration process rather than the physical camera and laser setup.
Calibration is used to define the relationship between the three dimensional co-ordinate system used (the calibration device co-ordinates) and the two dimensional image co-ordinates.
The system 22 incorporates five co-ordinate systems, as shown in FIG. 12; namely:
□ Image co-ordinate system: This defines the location of each pixel on the image plane of the camera 26 producing the image. The image plane is parallel to the lens (which corresponds to the
x-y plane in the camera co-ordinate system).
□ Camera co-ordinate system: This is the co-ordinate system of a camera 26 with the origin at the centre of its lens and the z-axis extending directly through the centre of the lens and normal
to the lens.
□ Calibration co-ordinate system: The co-ordinate system defined from the calibration process using the calibration device. For convenience the axes are aligned approximately with the local
world co-ordinate system. This helps to relate orientation attributes of the measured object, such as pitch, roll and yaw, to the local co-ordinates.
□ Pantograph co-ordinate system: The co-ordinate system defined square to the pantograph, with the origin in the centre of the pantograph.
□ Local world co-ordinate system: The absolute co-ordinate reference system, having a first vertical axis, a second axis parallel to the rails on which the train carrying the pantograph head 10
travels, and a third axis square to the rails.
FIG. 13 shows the relationship between the following three co-ordinate systems:
□ Calibration (3D) co-ordinate system (designated w);
□ Camera (3D) co-ordinate system (designated c); and,
□ Image (2D) co-ordinate system (designated p).
The image co-ordinates of a pixel are related to the camera co-ordinates by:
$p x = c x c z = Tan ( θ y ) ⇒ c x - p x c z = 0 [ Eqn . 2 ] p y = c y c z = Tan ( θ x ) ⇒ c y - p y c z = 0 [ Eqn . 3 ]$
□ In which the c[Z ]term is a perspective scaling factor.
The calibration co-ordinates are related to the camera co-ordinates by:
$[ c x c y c z 1 ] = [ h 11 h 12 h 13 h 14 h 21 h 22 h 23 h 24 h 31 h 32 h 33 1 0 0 0 1 ] · [ w x w y w z 1 ] ( ie c _ = H · w _ ) [ Eqn . 4 ]$
□ where H is the matrix which defines the transformation from the calibration co-ordinate system w to the camera co-ordinate system c.
The terms h[11 ]to h[33 ]define rotation and the terms h[14], h[24], h[34 ]define translation. If the z-origins of the two co-ordinate systems do not coincide (that is, h[34]≠0) then the entire
system can be divided by h[34 ]to reduce the number of unknowns. This has no effect on p[x ]and p[y ]as the numerator and denominators in Equation 1 and Equation 2 have both been divided by h[34].
Therefore the term h[34]=1, as shown in Equation 4. Independent scaling of the pixel co-ordinates is incorporated in H.
Expanding Equation 4 and substituting into Equation 2 and Equation 3 gives two equations, which can be written as a single equation in matrix form, as follows:
$[ w x w y w z 1 0 0 0 0 - p x w x - p x w y - p x w z 0 0 0 0 w x w y w z 1 - p y w x - p y w y - p y w z ] · [ h 11 h 12 h 13 h 14 h 21 h 22 h 23 h 24 h 31 h 32 h 33 ] = [ p x p y ] [
Eqn . 5 ]$
The two sets of equations have eleven unknowns. To find a solution requires at least six points on a minimum of two non-parallel planes. Each plane must have at least two points and no three points
can be collinear. Of course, more points can be used to apply a least squares fit.
The inverse matrix of H (that is, H^−1) defines the transformation from camera co-ordinates to calibration co-ordinates.
Each vector corresponding to a ray extending from a respective point through the camera lens passes through the camera origin in the centre of the lens: c[0]=(0, 0, 0, 1)^T. In the calibration
co-ordinate system, the location of the origin of the camera co-ordinate system is given by the vector: w[0]=H^−1c[0].
On the camera plane c[z]=1, and hence the image co-ordinates p=(p[x], p[y]) correspond to c[1]=(p[x], p[y], 1, 1)^T. In calibration co-ordinates: w[1]=H^−1c[1].
Each point in the image corresponds to a ray extending from the centre of the camera lens:
w(t)=w [0] +t·(w [1] −w [0])[Eqn. 6]
□ where t≧0 is the parametric variable
It will be appreciated that any plane normal to the camera's z-axis can be used instead of the camera plane, in which c[z]=1. When using an alternative plane the parametric variable will be rescaled.
If the plane in which a point on the object lies is known then the location of that point, in the calibration co-ordinate system, can be determined from the image co-ordinate by the intersection of
the ray and the plane (from equation 1, n·w=c), provided the ray and plane are not parallel:
$t = c - n _ · w _ 0 n _ · ( w _ 1 - w _ 0 ) [ Eqn . 7 ]$
During the calibration process, both surfaces 52 and 54 of the calibration block 50 must be visible by the associated cameras 26 and the laser planes 28 must also intersect both surfaces 52, 54.
The following process can be used to calibrate the system 22:
□ Move the calibration block 50 to position where it is within each camera's field of view and the various laser stripes intersect it. For convenience the calibration block 50 is orientated
approximately square to the rails and in the expected location of the pantograph head 10, as discussed previously.
□ Determine the transformation matrices for all cameras 26 by using the known position of the dots 62. Each dot 62 is located in pixel co-ordinates to sub-pixel accuracy. At least six dots 62
are required to find the solution for Equation 5 as described above. More points can be used in a least squares fit which also provides a measure of the accuracy of the result.
□ Use the stripes 66 (see FIG. 9) traced by the lasers 24 on the surfaces 52, 54 to determine the planes 28 of the lasers 24. In this regard, it is noted that three points (which are not
collinear) are required to define each plane (ax+by cz=1). This requires that the stripes 66 illuminate at least part of both calibration block 50 surfaces. More than three points may be used
for a least squares fit. In vector form, the orientation of each laser plane 28 can be expressed as follows:
$[ a b c ] [ x y z ] = [ 1 ] [ Eqn . 8 ]$
☆ The three points may be for example for laser plane 28 a, two points on the portion of stripe 66 a on surface 52 and one point on the portion of stripe 66 a on surface 54.
The transformation process uses the information determined by the calibration to calculate the co-ordinates of the pantograph head 10, as seen in an image taken by one of the cameras 26. FIG. 15
defines the orientation of the pantograph head 10 for the system 22 which measures the carbons 14 from the front and back of the pantograph head 10. In FIG. 15, lasers 24 d, 24 e and 24 f are
provided for illuminating the back of the pantograph head 10.
The following process describes how the height (which also provides the thickness) of the carbons 14 on the outside of the near beam 12 a and the inside of the far beam 12 b can be determined using
the silhouette of the pantograph head 10 in the image(s) captured by the system 22.
An image of the pantograph head 10 is taken as the train moves past by each of the cameras 26. A triggering mechanism, (such as a mechanical, electro-magnetic or optical, or any other appropriate
sensing means) senses when the pantograph head 10 is in the correct position (that is, the trigger location) for the cameras 26 to each take an image.
The two laser stripes 66 are located in each image on the carrier (the images from the two cameras 26 a, 26 b will share a common laser stripe 66, being the stripe produced by the middle laser 24 b).
Each laser stripe 66 will form two lines; a first line along the side of the carbon 14 and a second along the side of its corresponding metal section 13. As shown in FIG. 16, the two lines will not
be parallel and may be discontinuous. This is due to the laser position, the shape of the metals sections 12 and the camera angle.
The lowest endpoint of each stripe across the metal section 13 provides a known point on the bottom of the near side of the beam 12. It should be noted that the beams 12 are shown in the figures not
straight but are manufactured with a curvature, typically of a radius of about 10 meters. However, the curvature may vary for different suppliers. Moreover, the beams could be flat. The present
method is applicable to all possible configurations, but is described in relation to curved beams.
Two pairs of lowest endpoints on each of the near and far beams 12 a and 12 b allow a cylinder to be fitted to match the curvature of the beams 12 (see FIGS. 17, 19 a and 19 b). The cylinder's
longitudinal axis is normal to the pantograph's x-z plane. The orientation of the cylinder's axis gives the pitch and yaw (relative to the calibration co-ordinate system) of the pantograph head 10.
As shown in FIG. 18, three planes can be defined based upon the cross sectional profile of the pantograph. These three planes are normal to the cylinder's axis, and are each offset from the lowest
endpoint on the near side of a beam 12 by fixed distance which is based on the (known) geometry of the metal beams 13. The three planes are defined as follows:
□ Carbon plane: Along the near face of the carbon 14, with respect to the cameras 26. The location of this plane will allow the 3D co-ordinates of the top near edge C[i ]of the carbon to be
determined from the silhouette.
□ Carrier plane: Along the far edge of the underside of the carrier, with respect to the cameras 26. The location of this plane will allow the 3D co-ordinates of the bottom far edge b[i ]of the
beam 12 (i.e. metal section 13) to be determined from the silhouette.
□ Feature plane: Along the underside of the carrier 12, aligned with a feature point, this will allow the location of features to be determined. A feature point is any known point which would
appear in the silhouette. For example it may be the location of a bolt passing through the metal section 13.
Each of these planes is defined in calibration co-ordinates.
The co-ordinates of points along the top C[i ]and bottom b[i ]of the silhouette of each beam 12 a and 12 b are determined to sub-pixel accuracy in pixel co-ordinates using standard image analysis
The pixel co-ordinates of the top C[i ]of the silhouette also lie in the carbon plane. The intersection of these pixel vectors and the carbon plane is used to transform these silhouette points from
pixel co-ordinates to calibration co-ordinates.
The pixel co-ordinates of the bottom b[i ]of the silhouette also lie in the carrier plane. The intersection of these pixel vectors and the carrier plane is used to transform these silhouette points
from pixel co-ordinates to calibration co-ordinates.
The pantograph head pitch determined above by fitting a cylinder is used to rotate the silhouette points from calibration co-ordinates to pantograph co-ordinates.
The pantograph beam height is taken as the vertical difference in the silhouette top C[i ]and bottom b[i ]point sets in pantograph co-ordinates. Accordingly, if the physical dimensions of the
pantograph carrier 13 are known, the height of the carbons 14 can be determined.
In addition, matching known features from the feature plane on the bottom edge of the carrier will allow the position of the pantograph head 10 to be determined from its profile (in calibration
co-ordinates), and the mid point of the carrier to be determined.
The roll of the pantograph head 10 can be determined by fitting a circle to the points in the carrier plane. The relative position of the centre of this circle, in relation to the mid point of the
carrier, will determine the roll.
Data from the left and right side images are merged to form a complete profile of the pantograph head 10.
Now that an embodiment of the present invention has been described in detail it will be apparent to those skilled in the relevant arts that numerous modifications and variations may be made without
departing from the basic inventive concepts. In particular, the present embodiment is described in relation to a pantograph. However the invention is not limited to application to a pantograph and
may be applied to other moving objects such as, for example a wheel of a train. The particular application required will determine the number of fanned lasers and cameras required. If the present
system is adapted to measure for example the tread thickness on a train wheel, a single fanned laser producing a laser stripe passing along a radius of the wheel is required.
The calibration block 50 in the embodiment described above has two mutually orthogonal surfaces which intersect at a line for each system 22. However, it will be appreciated that other calibration
devices may be employed. For example, a calibration device may have surfaces which are not mutually orthogonal. Alternatively or additionally, a calibration device may have more than two surfaces. It
is also possible to use a calibration device in the form of a cylinder. However, it will be appreciated that the complexity of the mathematics associated with establishing a mathematical spatial
relationship between the cameras 26 and the laser planes 28 is at least partly dependent on the shape of the calibration device.
Modifications and variations of the present invention which would be obvious to a person of ordinary skill in the art are deemed to be within the scope of the present invention the nature of which is
to be determined from the above description and the appended claims. | {"url":"http://www.google.fr/patents/US7492448","timestamp":"2014-04-18T21:28:26Z","content_type":null,"content_length":"126605","record_id":"<urn:uuid:91f1ac11-8e77-4e28-a169-3d415413d498>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00538-ip-10-147-4-33.ec2.internal.warc.gz"} |
Westville Grove, NJ ACT Tutor
Find a Westville Grove, NJ ACT Tutor
...I have tutored privately in both these subjects for many years. I have had the opportunity to work with a wide variety of students from all backgrounds and age groups. I have prepared high
school students for the AP Calculus exams (both AB and BC), undergraduate students for the math portion of...
22 Subjects: including ACT Math, calculus, geometry, statistics
...During my doctoral studies I was selected to participate in the National Science Foundation GK-12 program where I worked in the local high school classroom setting with a science teacher. Two
years in this program not only helped me improve my communication of science to youth, but also gave me ...
9 Subjects: including ACT Math, chemistry, algebra 2, geometry
...I have 20+ years of solid experience tutoring college-level math and theoretical computer science, having mostly financed my education that way. I also have 7+ years experience teaching
college-level math. I am a world-renowned expert in the Maple computer algebra system, which is used in many math, science, and engineering courses.
11 Subjects: including ACT Math, calculus, statistics, precalculus
...I currently still advice a few clients so I have kept up to date on the rules that govern the securities industry. I was always commended in my formal practice as someone who went overboard in
teaching my clients, sometimes to my detriment as most advisors are rewarded for being sellers, rather than educators. I have 10 years professional experience as a mechanical engineer.
23 Subjects: including ACT Math, reading, calculus, statistics
...Also, I have experience instructing elementary-age children in a home-schooling environment. I consider one of the most important elements of science to be researching the correct answer. I
have a strong background in research, as is necessary for any advanced science degree.
20 Subjects: including ACT Math, reading, statistics, biology
Related Westville Grove, NJ Tutors
Westville Grove, NJ Accounting Tutors
Westville Grove, NJ ACT Tutors
Westville Grove, NJ Algebra Tutors
Westville Grove, NJ Algebra 2 Tutors
Westville Grove, NJ Calculus Tutors
Westville Grove, NJ Geometry Tutors
Westville Grove, NJ Math Tutors
Westville Grove, NJ Prealgebra Tutors
Westville Grove, NJ Precalculus Tutors
Westville Grove, NJ SAT Tutors
Westville Grove, NJ SAT Math Tutors
Westville Grove, NJ Science Tutors
Westville Grove, NJ Statistics Tutors
Westville Grove, NJ Trigonometry Tutors
Nearby Cities With ACT Tutor
Almonesson ACT Tutors
Billingsport, NJ ACT Tutors
Blackwood Terrace, NJ ACT Tutors
Blenheim, NJ ACT Tutors
Brooklawn, NJ ACT Tutors
Center City, PA ACT Tutors
East Haddonfield, NJ ACT Tutors
Grenloch ACT Tutors
Hilltop, NJ ACT Tutors
Hurffville, NJ ACT Tutors
Jericho, NJ ACT Tutors
Verga, NJ ACT Tutors
West Collingswood Heights, NJ ACT Tutors
West Collingswood, NJ ACT Tutors
Westmont, NJ ACT Tutors | {"url":"http://www.purplemath.com/Westville_Grove_NJ_ACT_tutors.php","timestamp":"2014-04-19T23:45:20Z","content_type":null,"content_length":"24417","record_id":"<urn:uuid:3cc6620e-88dd-4593-a01d-288306d91374>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00343-ip-10-147-4-33.ec2.internal.warc.gz"} |
Automated WYWIWYG Design of Both the Topology and Component Values of Analog Electrical Circuits Using Genetic
Results 11 - 20 of 38
- Proceedings of the 1997 ACM Symposium on Applied Computing , 1997
"... circuit synthesis, operational amplifier There is no known general technique for automatically designing an analog electrical circuit that satisfies design specifications. Genetic programming
was used to evolve both the topology and the sizing (numerical values) for each component of a low-distortio ..."
Cited by 14 (7 self)
Add to MetaCart
circuit synthesis, operational amplifier There is no known general technique for automatically designing an analog electrical circuit that satisfies design specifications. Genetic programming was
used to evolve both the topology and the sizing (numerical values) for each component of a low-distortion 96 decibel (64,860-to-1) amplifier circuit. 1. THE ANALOG DILEMMA The field of engineering
design offers a practical yardstick for evaluating automated techniques because the design process is usually viewed as requiring human intelligence and because design is a major activity of
practicing engineers. In the design process, the design requirements specify "what needs to be done. " A satisfactory design tells us "how to do it." In the field of electrical engineering, the
design process typically involves the creation of an electrical circuit that satisfies user-specified design goals. Considerable progress has been made in automating the design of certain categories
of purely digital circuits; however, the design of analog circuits and mixed analog-digital circuitshas not proved to be as amenable to automation (Rutenbar 1993). In discussing "the analog dilemma,
" O. Aaserud and I.
- In 3rd International Conference on Artificial Neural Networks and Genetic Algorithms, ICANNGA'97 , 1997
"... Parallel Distributed Genetic Programming (PDGP) is a new form of genetic programming suitable for the development of parallel programs in which symbolic and neural processing elements can be
combined in a free and natural way. This paper describes the representation for programs and the genetic oper ..."
Cited by 13 (8 self)
Add to MetaCart
Parallel Distributed Genetic Programming (PDGP) is a new form of genetic programming suitable for the development of parallel programs in which symbolic and neural processing elements can be combined
in a free and natural way. This paper describes the representation for programs and the genetic operators on which PDGP is based. Experimental results on the XOR problem are also reported. 1
- Artificial Life , 1998
"... Biological organisms are among the most intricate structures known to man, exhibiting highly complex behavior through the massively parallel cooperation of numerous relatively simple elements,
the cells. As the development of computing systems approaches levels of complexity such that their synthesi ..."
Cited by 13 (7 self)
Add to MetaCart
Biological organisms are among the most intricate structures known to man, exhibiting highly complex behavior through the massively parallel cooperation of numerous relatively simple elements, the
cells. As the development of computing systems approaches levels of complexity such that their synthesis begins to push the limits of human intelligence, engineers are starting to seek inspiration in
nature for the design of computing systems, both at the software and at the hardware levels. This paper will present one such endeavor, notably an attempt to draw inspiration from biology in the
design of a novel digital circuit: a field-programmable gate array (FPGA). This reconfigurable logic circuit will be endowed with two features motivated and guided by the behavior of biological
systems: self-replication and self-repair. 1
- Proceedings of the 1997 IEEE Conference on Evolutionary Computation. Piscataway, NJ , 1997
"... Abstract: Analog electrical circuits that perform mathematical functions (e.g., cube root, square) are called computational circuits. Computational circuits are of special practical importance
when the small number of required mathematical functions does not warrant converting an analog signal into ..."
Cited by 11 (4 self)
Add to MetaCart
Abstract: Analog electrical circuits that perform mathematical functions (e.g., cube root, square) are called computational circuits. Computational circuits are of special practical importance when
the small number of required mathematical functions does not warrant converting an analog signal into a digital signal, performing the mathematical function in the digital domain, and then converting
the result back to the analog domain. The design of computational circuits is difficult even for mundane mathematical functions and often relies on the clever exploitation of some aspect of the
underlying device physics of the components. Moreover, implementation of each different mathematical function typically requires an entirely different clever insight. This paper demonstrates that
computational circuits can be designed without such problem-specific insights using a single uniform approach involving genetic programming. Both the circuit topology and the sizing of all circuit
components are created by genetic programming. This uniform approach to the automated synthesis of computational circuits is illustrated by evolving circuits that perform the cube root function (for
which no circuit was found in the published literature) as well as for the square root, square, and cube functions. 1.
"... The problem of source identification involves correctly classifying an incoming signal into a category that identifies the signal's source. The problem is ..."
- In , 1999
"... This paper presents an approach based on the use of genetic programming to synthesize logic functions. The proposed approach uses the 1-control line multiplexer as the only design unit, defining
any logic function (defined by a truth table) through the replication of this single unit. Our fitness fu ..."
Cited by 7 (0 self)
Add to MetaCart
This paper presents an approach based on the use of genetic programming to synthesize logic functions. The proposed approach uses the 1-control line multiplexer as the only design unit, defining any
logic function (defined by a truth table) through the replication of this single unit. Our fitness function first explores the search space trying to find a feasible design and then concentrates in
the minimization of such (fully feasible) circuit. The proposed approach is illustrated using several sample Boolean functions.
, 1999
"... In this paper we propose an approach based on a genetic algorithm (GA) to design combinational logic circuits in which the objective is to minimize their total number of gates. Our results
compare favorably against those produced by human designers and even another GA-based approach. We also briefly ..."
Cited by 6 (4 self)
Add to MetaCart
In this paper we propose an approach based on a genetic algorithm (GA) to design combinational logic circuits in which the objective is to minimize their total number of gates. Our results compare
favorably against those produced by human designers and even another GA-based approach. We also briefly analyze the solutions found by the GA trying to find some clues on how it reduces a Boolean
expression, and we indicate that such a reduction is achieved by reusing common patterns within the circuit in ways that are sometimes completely non-intuitive for a human designer. However, in small
circuits, these patterns can be easier to detect and our approach could, therefore, be useful to teach circuit design since it can show students what steps to follow to simplify further a certain
- Proceedings of 1997 IEEE International Symposium on Computational Intelligence in Robotics and Automation. Los Alamitos, CA; Computer Society Press. Pages 340 , 1997
"... Genetic programming is an automatic programming technique that evolves computer programs to solve, or approximately solve, problems. This paper presents two examples in which genetic programming
creates a computer program for controlling a robot so that the robot moves to a specified destination poi ..."
Cited by 5 (3 self)
Add to MetaCart
Genetic programming is an automatic programming technique that evolves computer programs to solve, or approximately solve, problems. This paper presents two examples in which genetic programming
creates a computer program for controlling a robot so that the robot moves to a specified destination point in minimal time. In the first approach, genetic programming evolves a computer program
composed of ordinary arithmetic operations and conditional operations to implement a time-optimal control strategy. In the second approach, genetic programming evolves the design of an analog
electrical circuit consisting of transistors, diodes, resistors, and power supplies to implement a near-optimal control strategy. 1.
- Stanford University , 1997
"... Most problem-solving techniques used by engineers involve the introduction of analytical and mathematical representations and techniques that are entirely foreign to the problem at hand. Genetic
programming offers the possibility of solving problems in a more direct way using the given ingredients o ..."
Cited by 4 (1 self)
Add to MetaCart
Most problem-solving techniques used by engineers involve the introduction of analytical and mathematical representations and techniques that are entirely foreign to the problem at hand. Genetic
programming offers the possibility of solving problems in a more direct way using the given ingredients of the problem. This idea is explored by considering the problem of designing an electrical
controller to implement a solution to the time-optimal fly-to control problem. 1.
- Programming, , presented at the International Conference on Evolvable System: From Biology to Hardware (ICES-96 , 1996
"... Genetic programming was used to evolve both the topology and the sizing (numerical values) for each component of a low-distortion, low-bias 60 decibel (1000to-1) amplifier circuit with good
frequency generalization. The evolved circuit was composed of two types of transistors (active elements) as we ..."
Cited by 4 (1 self)
Add to MetaCart
Genetic programming was used to evolve both the topology and the sizing (numerical values) for each component of a low-distortion, low-bias 60 decibel (1000to-1) amplifier circuit with good frequency
generalization. The evolved circuit was composed of two types of transistors (active elements) as well as resistors and capacitors. 1. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1258961&sort=cite&start=10","timestamp":"2014-04-17T19:54:12Z","content_type":null,"content_length":"39453","record_id":"<urn:uuid:d69e0f49-69c1-4229-9d6f-6bfac30430ba>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00303-ip-10-147-4-33.ec2.internal.warc.gz"} |
ISBN: 9780321227362 | 0321227360
Edition: 9th
Format: Hardcover
Publisher: Addison Wesley
Pub. Date: 1/1/2009
Why Rent from Knetbooks?
Because Knetbooks knows college students. Our rental program is designed to save you time and money. Whether you need a textbook for a semester, quarter or even a summer session, we have an option
for you. Simply select a rental period, enter your information and your book will be on its way!
Top 5 reasons to order all your textbooks from Knetbooks:
• We have the lowest prices on thousands of popular textbooks
• Free shipping both ways on ALL orders
• Most orders ship within 48 hours
• Need your book longer than expected? Extending your rental is simple
• Our customer support team is always here to help | {"url":"http://www.knetbooks.com/trigonometry-9th-lial-margaret-l-hornsby/bk/9780321227362","timestamp":"2014-04-18T14:22:37Z","content_type":null,"content_length":"39558","record_id":"<urn:uuid:d7fb193e-5449-4ba9-a77e-b3a37ac77a9d>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00208-ip-10-147-4-33.ec2.internal.warc.gz"} |
Edmund Taylor Whittaker
Born: 24 October 1873 in Southport, Lancashire, England
Died: 24 March 1956 in Edinburgh, Scotland
Click the picture above
to see eight larger pictures
Previous (Chronologically) Next Main Index
Previous (Alphabetically) Next Biographies index
Edmund Whittaker's family had been living for many generations in Lancashire. The name Whittaker comes from the farm High Whitacre, near Padiham in Lancashire, where the family lived from 1236.
Edmund Whittaker's mother was Selina Septima Taylor and his father was John Whittaker, a man of independent means from Birkdale who was wealthy enough not to need an occupation. Selina's father was
Edmund Taylor, who was a medical doctor with a practice in Middleton near Manchester. Selina and John named their son Edmund Taylor Whittaker, giving him both a forename and a middle name from his
maternal grandfather. Edmund Whittaker's mother played an important role in his education, being his only teacher until he reached the age of eleven.
He was educated at Manchester Grammar School, entering at the age of eleven, and at first he concentrated on classics but as he progressed through the school he was happy to specialise in
mathematics. From there he went up to Trinity College, Cambridge in 1892 where he held a scholarship. He was taught as an undergraduate by, among others, G H Darwin and A R Forsyth. His interests at
this time were on the applied side of mathematics which is certainly illustrated by the fact that, in 1894, he was awarded the Sheepshanks Exhibition in Astronomy. Whittaker graduated as Second
Wrangler in the examination of 1895, and was awarded the Tyson Medal. He was beaten into second place in the Mathematical Tripos examinations by Bromwich. Whittaker was elected as a fellow of Trinity
College in 1896 and became first Smith's prizeman in 1897 for a work on pure mathematics, namely on uniform functions.
After Whittaker became a Fellow of Trinity College he began to teach and give lecture courses and, among his first pupils were G H Hardy and J H Jeans. Whittaker made revolutionary changes to the
topics taught at Cambridge. He taught a course based on his famous book A Course of Modern Analysis (1902). This work is important in the study of functions of a complex variable. It also develops
the theory of special functions and their related differential equations. Other courses Whittaker taught at Cambridge included astronomy, geometrical optics, and electricity and magnetism. Hardy and
Jeans were not the only famous mathematicians which Whitttaker taught at Cambridge. His pupils included Bateman, Eddington, Littlewood, Turnbull, and Watson.
The Rev Thomas Boyd lived in Cambridge and was the Scottish Secretary of the Religious Tract Society. Whittaker married his daughter, Mary Ferguson McNaghten Boyd, in 1901. They had three sons and
two daughters. The middle son from the three was John Whittaker who went on to become a famous mathematician and also has a biography in this archive. The eldest of their two daughters was Beatrice
Mary Whittaker, who later married Copson.
Whittaker's interest in astronomy is illustrated by the courses he taught, but he also joined the Royal Astronomical Society serving as its secretary from 1901 to 1906. He became the Royal Astronomer
of Ireland in 1906 and moved to Dunsink Observatory where Hamilton had worked. He was at the same time appointed as Professor of Astronomy at the University of Dublin. The Observatory was not well
equipped and his appointment as Royal Astronomer was more to teach mathematical physics at the University than to undertake observational astronomy.
George Chrystal, the professor at Edinburgh, died in November 1911 and in the following year Whittaker took up the chair in Edinburgh where he remained for the rest of his career. In fact he reached
retirement age in 1943 but due to World War II he agreed to carry on for a further three years. Soon after he arrived in Edinburgh, Whittaker set up the Edinburgh Mathematical Laboratory to give a
practical side to his interest in numerical analysis. His many lecture courses on this topic were collected into a book which he published in 1924 The Calculus of Observations: a treatise on
numerical mathematics.
Whittaker's best known work is in analysis, in particular numerical analysis, but he also worked on celestial mechanics and the history of applied mathematics and physics. He wrote papers on
algebraic functions and automorphic functions. He found expressions for the Bessel functions as integrals involving Legendre functions. He studied these special functions as arising from the solution
of differential equations derived from the hypergeometric equation.
His results in partial differential equations (described as 'most sensational' by Watson) included a general solution of the Laplace equation in three dimensions in a particular form and the solution
of the wave equation. This work was of fundamental importance for it united various strands of potential theory making it into a unified topic. The unification came in the form of bringing together
different special functions, as mentioned above, and exhibiting them all as special cases of what became known as a 'Whittaker integral'.
On the applied side of mathematics he was interested in relativity theory for many years, publishing at least five articles on the topic. He also worked on electromagnetic theory giving a general
solution of Maxwell's equation, and it was through this topic that his interest in relativity arose. Another application which interested him came through his association with actuaries in Edinburgh
who were dealing with life assurance. This motivated him to study the mathematics lying behind somewhat ad hoc methods that the actuaries were using and Whittaker proved some important results on
interpolation as a consequence.
One of his most important historical studies was A History of the Theories of Aether and Electricity, from the Age of Descartes to the Close of the Nineteenth Century (1910). In 1953 he produced a
revised version including the work of the first quarter of the 20^th Century.
In [9] McCrea describes Whittaker's research lectures which he gave twice a week throughout the whole academic year while he was professor in Edinburgh:-
Either he discussed his own current work or he gave his own development of topics of current interest in mathematics. One marvels at the mathematical power that enabled him always, year after
year, to have material for these lectures - he never repeated the same ones - just as though he had nothing else to think about, when actually he was inundated with other duties.
Whittaker received many honours. He was a member of the London Mathematical Society, being President in 1928-29. He won the De Morgan Medal of the Society in 1935. He was elected a Fellow of the
Royal Society in 1905, served on its Council for two periods, 1911-12 and 1933-35, and he was vice-president during part of this second period on the council from 1934-35. He was awarded the
Society's Sylvester Medal in 1931 and the Copley Medal in 1954:-
... for his distinguished contributions to both pure and applied mathematics and to theoretical physics.
He was knighted in 1945. He was a Fellow of the Royal Society of Edinburgh, awarded the Society's Gunning Prize in 1929, and served the Society as President for most of the years of World War II. He
was also President of the Mathematical Association (1920-21), and of the Mathematics and Physics section of the British Association in 1927. He served as secretary to the Royal Astronomical Society
from 1901 to 1907.
Whittaker was a committed Christian and joined the Roman Catholic Church in 1930. In this capacity he was awarded the cross Pro Ecclesia et Pontifice in 1935, was appointed to the Pontifical Academy
of Sciences in the following year (the year of foundation of the Academy by Pope Pius XI), and was president of the Newman Association from 1943 to 1945. He gave lectures on science and theology such
as the Riddell Memorial Lecture on The beginning and end of the world in Dublin in 1942, and the Donnellan Lectures on Space and spirit also in Dublin four years later.
As to Whittaker's character McCrea writes in [9]:-
He grasped new ideas with unbelievable rapidity and he had an infallible memory for everything he read. ... He was the most unselfish of men with a delicate sense of what would give help or
pleasure to others. Always he seemed to have his vast number of friends at the tip of his mind so that he never missed an opportunity to do or say something on behalf of any one of them. He had a
quick wit and an ever-present sense of humour and liked telling harmlessly mischievous stories about people he had known.
In [3] Whittaker is described in these terms:-
... he was a brilliant teacher, a master of his subject, with a great love of his fellow men. His warmth and his interest in his friends and students made him the most agreeable of companions.
Scholars from abroad who knew him seldom failed to visit him and enjoy his conversation, and the friendships thus founded he kept up by correspondence to all parts of the world.
Article by: J J O'Connor and E F Robertson
Click on this link to see a list of the Glossary entries for this page
List of References (11 books/articles) A Quotation
A Poster of Edmund Whittaker Mathematicians born in the same country
Additional Material in MacTutor
Honours awarded to Edmund Whittaker
(Click below for those honoured in this way)
Fellow of the Royal Society 1905
Fellow of the Royal Society of Edinburgh 1912
EMS President 1914
LMS President 1928 - 1929
Royal Society Sylvester Medal 1931
LMS De Morgan Medal 1935
Honorary Fellow of the Edinburgh Maths Society 1937
Royal Society Copley Medal 1954
Previous (Chronologically) Next Main Index
Previous (Alphabetically) Next Biographies index
History Topics Societies, honours, etc. Famous curves
Time lines Birthplace maps Chronology Search Form
Glossary index Quotations index Poster index
Mathematicians of the day Anniversaries for the year
JOC/EFR © October 2003 School of Mathematics and Statistics
Copyright information University of St Andrews, Scotland
The URL of this page is: | {"url":"http://www-groups.dcs.st-and.ac.uk/~history/Biographies/Whittaker.html","timestamp":"2014-04-19T22:08:00Z","content_type":null,"content_length":"23785","record_id":"<urn:uuid:aae926d5-4c16-4183-a570-21286938484d>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00094-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Find the next three terms of the sequence. Then write a rule for the sequence. 648, 216, 72, 24
• one year ago
• one year ago
Best Response
You've already chosen the best response.
By quick observation the difference between the terms is not constant, so the sequence isn't arithmetic. It does, however, appear to be geometric:$$r=\frac{216}{648}=\frac{72}{216}=\frac{24}{72}=
\frac13$$... which means we can find the next three terms by multiplying by \(\frac13\):$$\frac13(24)=8;\frac13(8)=\frac83;\frac13\left(\frac83\right)=\frac89$$The explicit form of a geometric
series is \(a_n=a_1r^{n-1}\); in our case, we've identified \(r=\frac13,\ a_1=658\) which yields \(a_n=658\left(\frac13\right)^{n-1}\)
Best Response
You've already chosen the best response.
648 **
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/50c81376e4b0b766106e2f02","timestamp":"2014-04-20T23:40:03Z","content_type":null,"content_length":"30467","record_id":"<urn:uuid:2d4ee2c5-b9b0-4783-9d12-3e03f07a7882>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00178-ip-10-147-4-33.ec2.internal.warc.gz"} |
Alexander Grothendieck
The European mathematician Alexander Grothendieck (in French sometimes Alexandre Grothendieck), created a very influential body of work foundational for (algebraic) geometry but also for modern
mathematics more generally. He is widely regarded as a singularly important figure of 20th century mathematics and his ideas continue to be highly influential in the 21st century.
Initially working on topological vector spaces and analysis, Grothendieck then made revolutionary advances in algebraic geometry by developing sheaf and topos theory and abelian sheaf cohomology and
formulating algebraic geometry in these terms (locally ringed spaces, schemes). Later topos theory further developed independently and today serves as the foundation also for other kinds of geometry.
Notably its homotopy theoretic refinement to higher topos theory serves as the foundation for modern derived algebraic geometry.
Grothendieck’s geometric work is documented in texts known as EGA (with Dieudonné), an early account FGA, and the many volume account SGA of the seminars at l’IHÉS, Bures-sur-Yvette, where he was
based at the time. (See the wikipedia article for some indication of the story from there until the early 1980s.)
In the late 1970s and early 1980s Grothendieck wrote several documents that have been of outstanding importance in the origins of the theory that underlies the nPOV. These include
In the same time he also wrote voluminous intellectual memoirs Recoltes et Semailles.
For an account of his work, including some of the work published in the 1980s, see the English Wikipedia entry.
The video of a talk by W. Scharlau on his life can be seen here.
A recent article in French on Grothendieck is to be found here.
There were two articles on Grothendieck’s life and work in the Notices AMS in 2004:
Allyn Jackson, Comme Appelé du Néant, As If Summoned from the Void: The Life of Alexandre Grothendieck, Part 1,Notices AMS
Allyn Jackson, Comme Appelé du Néant, As If Summoned from the Void: The Life of Alexandre Grothendieck, Part 2,Notices AMS | {"url":"http://www.ncatlab.org/nlab/show/Alexander+Grothendieck","timestamp":"2014-04-20T05:44:28Z","content_type":null,"content_length":"20745","record_id":"<urn:uuid:05c374a7-fb33-42e3-86a3-f0538952c3f5>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00162-ip-10-147-4-33.ec2.internal.warc.gz"} |