Euler's Formula for Complex Numbers
(There is another "Euler's Formula" about Geometry,
this page is about the one used in Complex Numbers)
First, you may have seen this famous equation:
e^iπ + 1 = 0
It seems absolutely magical that such a neat equation combines:
• e (Euler's number)
• i (the unit imaginary number)
• π (the famous number pi)
• 1 (the first counting number)
• 0 (zero)
But if you want to take an interesting trip through mathematics, then read on to find out why it is true.
Euler's Formula
It actually comes from Euler's Formula:
e^ix = cos x + i sin x
When we calculate that for x = π we get:
e^iπ = cos π + i sin π
e^iπ = −1 + i × 0 (because cos π = −1 and sin π = 0)
e^iπ = −1
e^iπ + 1 = 0
So e^iπ + 1 = 0 is just a special case of a much more useful formula that Euler discovered.
It was around 1740, and mathematicians were interested in imaginary numbers.
An imaginary number, when squared gives a negative result
This would normally be impossible (try squaring any number, remembering that multiplying negatives gives a positive), but just imagine that you can do it, call it i for imaginary, and see where it
carries you:
i^2 = -1
Euler was enjoying himself one day, playing with imaginary numbers (or so I imagine!), and he took this Taylor Series (which was already known):
e^x = 1 + x + x^2/2! + x^3/3! + x^4/4! + ...
(You can use the Sigma Calculator to play with this.)
And he put i in it:
e^ix = 1 + ix + (ix)^2/2! + (ix)^3/3! + (ix)^4/4! + ...
And because i^2 = -1, it simplifies to:
e^ix = 1 + ix - x^2/2! - ix^3/3! + x^4/4! + ix^5/5! - ...
Now, moving the i terms to the end gets:
e^ix = (1 - x^2/2! + x^4/4! - ...) + i(x - x^3/3! + x^5/5! - ...)
And here is the miracle ...
• the first group is the Taylor Series for cos
• the second group is the Taylor Series for sin
So we get:
e^ix = cos x + i sin x
Example: when x = 3
e^3i = cos 3 + i sin 3
e^3i = −0.990 + 0.141 i (to 3 decimals)
Note: we are using radians, not degrees.
The answer is a combination of a Real and an Imaginary Number, which together is called a Complex Number.
We can even plot such a number on the complex plane (the real numbers go left-right, and the imaginary numbers go up-down):
Here we show the number −0.990 + 0.141 i
Which is the same as e^3i
A Circle!
In fact, putting Euler's Formula on that graph produces a circle:
e^ix produces a circle of radius 1
And we can turn any point (such as 3 + 4i) into re^ix form (by finding the correct value of x and the radius, r, of the circle)
Example: the number 3 + 4i
To turn into re^ix form we do a Cartesian to Polar conversion:
• r = √(3^2 + 4^2) = √(9+16) = √25 = 5
• x = tan^-1 ( 4 / 3 ) = 0.927 (to 3 decimals)
So 3 + 4i can also be 5e^0.927 i
There are many cases (such as multiplication) where it is easier to use re^ix than a+bi
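These conversions are easy to check numerically. Here is a quick Python sketch (mine, not from the original page) using the standard cmath module:

    import cmath

    # Euler's Formula with x = 3 (radians): e^(3i) = cos 3 + i sin 3
    print(cmath.exp(3j))           # (-0.9899924966004454+0.1411200080598672j)

    # Cartesian to polar: 3 + 4i  ->  r e^(ix)
    r, x = cmath.polar(3 + 4j)
    print(r, x)                    # 5.0 0.9272952180016122
    print(r * cmath.exp(1j * x))   # back to (3+4j)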
Lastly, here is the point created by e^iπ (where our discussion began):
e^iπ = −1
|
{"url":"http://www.mathsisfun.com/algebra/eulers-formula.html","timestamp":"2014-04-19T04:20:41Z","content_type":null,"content_length":"11307","record_id":"<urn:uuid:8a1f5c17-5bac-4d66-aa07-4cedef40aea4>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00365-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MathGroup Archive: October 1990 [00015]
[Date Index] [Thread Index] [Author Index]
Re: math coprocessors
• To: mathgroup at yoda.ncsa.uiuc.edu
• Subject: Re: math coprocessors
• From: gabriel at athens.ees.anl.gov (John Gabriel)
• Date: Tue, 23 Oct 90 11:00:02 CDT
I may be speaking from a position of ignorance here, because my knowledge is
several years out of date, but the computation of y=sin(x) contains
deep pitfalls even for values of x as small as 62.83. The point is
that for x outside [0..pi/4), multiples of pi/4 must be subtracted from x.
For x values close to n*(pi/4) this must be done with consummate skill if
work is being done in double precision, or a good deal of significance will
be lost. Cody & Waite (A Handbook for the Elementary Functions) discuss the
issue at length. So computation of sin(x) for large arguments is always
fraught with danger. For a given machine precision, there is typically a value
of x beyond which computation of sin(x) will inevitably have significant
error, and for some algorithms even x in the region of a few hundreds gives
trouble (the worst I ever saw gave noticeable error round about x=30).
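To see the effect Gabriel describes, here is a small Python sketch (an editorial illustration, not part of the original post). It reduces the argument naively, using π held only to double precision, and compares against a high-precision reference computed with the mpmath library (an assumed dependency; any extended-precision package would do):

    import math
    from mpmath import mp, sin as mpsin

    mp.dps = 50                    # 50-digit reference
    x = 2.0 ** 50
    # naive reduction: subtract multiples of 2*pi, with pi only a double
    naive = math.sin(math.fmod(x, 2.0 * math.pi))
    exact = float(mpsin(x))        # argument reduction done in high precision
    print(naive, exact)            # agree to only about one decimal place

The error comes entirely from the reduction step: x/(2π) is around 1.8e14, so the tiny error in the stored value of 2π is multiplied by that factor before sin is ever evaluated.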
More recent cases involving functions other than sin(x), investigated by
a colleague in the context of FORTRAN exonerated an INTEL floating point
co-processor, and laid the blame squarely on a compiler writer in one case,
and perhaps on the operating system in another (I don't really know about
the second case).
So, I suppose the point of all this is that you should not necessarily
blame the floating point chip. But perhaps Mathematica takes all the
appropriate precautions about argument reduction and has the proper
approximation to pi to allow the usual 2 step argument reduction to work
OK. On the other hand, if pi is simply held to working precision (no
matter how long that may be) argument reduction holds traps for the
unwary. But I would be surprised if any IEEE standard chip gave good
answers for 2^100. Good answers for 2^70 however, are not obviously ruled
out except by a much more careful examination than I have given to the problem.
John Gabriel (gabriel at ees.anl.gov)
|
{"url":"http://forums.wolfram.com/mathgroup/archive/1990/Oct/msg00015.html","timestamp":"2014-04-17T18:27:59Z","content_type":null,"content_length":"35715","record_id":"<urn:uuid:859e280f-48ac-43bc-bf82-3e656cdabff8>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00263-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Apache Mahout - should I use it to build a custom recommender?
I am iteratively building a custom recommender system based on a frequently changing probabilistic latent factor model. I have already written some Java code that implements the model. It factorises
the user-item rating matrix into two matrices UxK (user feature vectors) and IxK (item feature vectors) to estimate the missing ratings.
I am looking for the simplest way to plug (perhaps by rewriting) my code into a framework to build a recommender system, a baseline, and be able to compare these against each other in a standard way
- e.g. cross validation to calculate precision, recall, RMSE... As my system still lacks this, the framework should provide methods to calculate and make recommendations based on the estimated
user-item rating matrix.
It looks like Mahout should do the job. However, its documentation says "It does not currently support model-based recommenders." Can anybody tell me whether what I am trying to achieve is possible
with Mahout and whether it is worth spending the time to learn how to use it. If Mahout is not suitable, can you suggest any alternatives?
Many thanks!
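For readers unfamiliar with this kind of model, here is a minimal Python sketch of the idea being described (illustrative only; the rank, learning rate and regularisation are arbitrary, and this is not the poster's code):

    import numpy as np

    def factorise(R, K=2, steps=2000, lr=0.01, reg=0.05):
        """Approximate R ~ U @ I.T by SGD; zeros in R mark missing ratings."""
        rng = np.random.default_rng(0)
        U = rng.normal(scale=0.1, size=(R.shape[0], K))   # user feature vectors
        I = rng.normal(scale=0.1, size=(R.shape[1], K))   # item feature vectors
        for _ in range(steps):
            for u, i in np.argwhere(R > 0):
                err = R[u, i] - U[u] @ I[i]
                uu = U[u].copy()
                U[u] += lr * (err * I[i] - reg * uu)
                I[i] += lr * (err * uu - reg * I[i])
        return U, I

    R = np.array([[5, 3, 0], [4, 0, 1], [0, 1, 5]], dtype=float)
    U, I = factorise(R)
    print(np.round(U @ I.T, 1))    # estimated (dense) user-item rating matrix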
java mahout recommendation-engine
1 Answer
I'd say you are better off asking the nice fellows on the Mahout mailing list.
That said, Mahout provides SVD-based recommenders that use different factorizers for the matrix calculations. For instance, there's the ALSWRFactorizer that supports 2 modes:
1. Factorizing an explicit feedback rating matrix. See paper.
2. Factorizing an implicit feedback variant. See paper.
It should be easy to extend functionality by implementing your own recommender (extend AbstractRecommender) or by implementing your own factorizer (extend AbstractFactorizer).
Nonetheless, without knowing more about your approach or your implementation I cannot really say more.
|
{"url":"http://stackoverflow.com/questions/14668561/apache-mahout-should-i-use-it-to-build-a-custom-recommender/14775724","timestamp":"2014-04-19T08:42:23Z","content_type":null,"content_length":"63948","record_id":"<urn:uuid:a03f2631-7c10-4b62-98da-d84b22c67a9e>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00117-ip-10-147-4-33.ec2.internal.warc.gz"}
|
(in the sense "conformable"): from late Latin conformalis, from con- "together"...
(Source: Oxford Dictionary) [more]
Definition references
Collins Dictionary:
conformal | orthomorphic [synonym, sense-specific]
[mathematics] (of a transformation) preserving the angles of the depicted surface | (of a parameter) relating to such a transformation | (of a map ... (22 of 290 words, 3 definitions, pronunciation)
Conformal [disambiguation]
may refer to: Conformal map, in mathematics | Conformal geometry, in mathematics | Conformal map projection, in cartography | Conformal film on a surface | Conformal fuel tanks on military aircraft |
Conformal coating in electronics | Conformal ... (32 of 117 words, 13 definitions)
Oxford Dictionary:
conformal | conformally [derived]
(of a map or a mathematical mapping) preserving the correct angles between directions within small areas (though distorting distances) (19 of 63 words, pronunciation)
conformal | orthomorphic [cartography]
Describing something that conforms, especially that matches the shape of something. | [cartography] Describing a map projection which has the property of preserving relative angles over small scales
(except at a limited number of distinct points). On... (36 of 56 words, 2 definitions)
leaving the size of the angle between corresponding curves unchanged | [map] representing small areas in their true shape (18 of 55 words, 2 definitions, 1 usage example, pronunciation)
American Heritage Dictionary:
[math] Designating or specifying a mapping of a surface or region upon ... | Of or relating to a map projection in which small areas are rendered with ... (27 of 54 words, 2 definitions,
New World Dictionary:
[math] of a transformation in which corresponding angles are equal | designating or of a map projection in which shapes at any point are true, but ... (25 of 41 words, 2 definitions, pronunciation)
Random House Dictionary:
of, pertaining to, or noting a map or transformation in which angles and scale are preserved. (16 of 21 words, pronunciation)
Encarta Dictionary:
conformal | orthomorphic [geography, sense-specific]
describes a mathematical transformation that leaves the angles between intersecting curves unchanged | describes a map that shows the correct shape ... (20 of 44 words, 2 definitions, pronunciation)
encarta.msn.com/dictionary 1861599527/definition.html [offline]
Etymology references
Oxford Dictionary:
First use: mid 17th century
Origin: (in the sense "conformable"): from late Latin conformalis, from con- "together" + formalis "formal". The current sense was coined in German
Collins Dictionary:
First use: 17th century
Origin: from Late Latin conformālis having the same shape, from Latin com- same + forma shape
First use: 1893
Origin: Late Latin conformalis having the same shape, from Latin com- + formalis formal, from forma
New World Dictionary:
Origin: ecclesiastical Late Latin conformalis, conformable, similar from Latin conformare: see "conform"
American Heritage Dictionary:
Origin: Late Latin cōnfōrmālis, similar: Latin com-, com- + Latin fōrma, shape.
Audio references
Collins Dictionary:
Audio: British English pronunciation of "conformal"
the Free Dictionary:
Audio 1: North American English pronunciation of "conformal"
Audio 2: British English pronunciation of "conformal"
Audio 3: North American English pronunciation of "conformal" by speech synthesizer
Google Dictionary:
Audio: English pronunciation of "conformal"
Merriam-Webster Pronunciation:
YourDictionary Audio:
Audio: North American English pronunciation of "conformal" by speech synthesizer
Page last updated: 2013-06-26
|
{"url":"http://www.memidex.com/conformal","timestamp":"2014-04-19T03:11:18Z","content_type":null,"content_length":"12221","record_id":"<urn:uuid:c001fe1d-821e-499c-9e7d-a320665d7fc5>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00352-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Solving an infinite series that doesn't seem to be harmonic or geometric
November 7th 2010, 03:28 PM
Solving an infinite series that doesn't seem to be harmonic or geometric
I'm solving a visual problem, and sparing the boring details, I've found the sequence to be:
${a_n} = \frac{1}{2^{2n-1}}$
I need to solve for the sum of the series from 1 to infinity. Wolfram easily solves this as 2/3. However, I've pored over 2 calculus books and I've enlisted help but we can't solve it by hand.
Any help at simplifying this into a series which I can solve would be greatly appreciated.
November 7th 2010, 04:06 PM
I'm solving a visual problem, and sparing the boring details, I've found the sequence to be:
${a_n} = \frac{1}{2^{2n-1}}$
I need to solve for the sum of the series from 1 to infinity. Wolfram easily solves this as 2/3. However, I've pored over 2 calculus books and I've enlisted help but we can't solve it by hand.
Any help at simplifying this into a series which I can solve would be greatly appreciated.
$\displaystyle \frac{1}{2^{2n-1}} = 2^{1-2n} = \frac{2}{2^{2n}} = \frac{2}{4^n}$
$\displaystyle 2 \sum_{n=1}^{\infty} \frac{1}{4^n} = 2 \cdot \frac{\frac{1}{4}}{1 - \frac{1}{4}} = 2 \cdot \frac{1}{3} = \frac{2}{3}$
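A quick numerical check in Python (not part of the original thread) confirms the value:

    # partial sums of sum_{n>=1} 1/2^(2n-1) = 2 * sum_{n>=1} 1/4^n -> 2/3
    total = 0.0
    for n in range(1, 30):
        total += 1.0 / 2 ** (2 * n - 1)
    print(total)   # 0.6666666666666666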
|
{"url":"http://mathhelpforum.com/calculus/162480-solving-infinite-series-doesnt-seem-harmonic-geometric-print.html","timestamp":"2014-04-17T04:01:39Z","content_type":null,"content_length":"6434","record_id":"<urn:uuid:b663b265-1353-4a4b-a7ad-98145ba4c1ec>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00159-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Some New Designs of 2-D Array for Matrix Multiplication and Transitive Closure
April 1995 (vol. 6 no. 4)
pp. 351-362
Abstract—In this paper, we present some new regular iterative algorithms for matrix multiplication and transitive closure. With these algorithms, spacetime mapping yields 2-D arrays with $2N - 1$ and $\lceil \left(3N - 1\right)/2\rceil$ execution times for matrix multiplication. Meanwhile, we can derive a 2-D array with $4N - 2$ execution time for transitive closure based on the sequential Warshall-Floyd algorithm. All these new 2-D arrays for matrix multiplication and transitive closure have the advantage of being faster and more regular than previous designs.
Index Terms—Algorithm mapping, matrix multiplication, mesh array, systolic array, spherical array, transitive closure, VLSI architecture.
Jong-Chuang Tsay, Pen-Yuang Chang, "Some New Designs of 2-D Array for Matrix Multiplication and Transitive Closure," IEEE Transactions on Parallel and Distributed Systems, vol. 6, no. 4, pp. 351-362,
April 1995, doi:10.1109/71.372789
|
{"url":"http://www.computer.org/csdl/trans/td/1995/04/l0351-abs.html","timestamp":"2014-04-18T11:13:16Z","content_type":null,"content_length":"54153","record_id":"<urn:uuid:b64ed2e0-e73b-40b1-84d6-5212a27dce99>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00416-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Sandy Springs, GA Algebra Tutor
Find a Sandy Springs, GA Algebra Tutor
...Tutored on Geometry topics during high school, college, and as a GMAT instructor for three years. Scored in the 99th percentile on the GMAT. Can help you understand the basics of the Microsoft
Word toolbar.
28 Subjects: including algebra 2, algebra 1, physics, calculus
...I have been a private tutor since my freshman year of college in 2009 and have tutored more than 90 students in the last five years. Because I am motivated by my own acquisition of knowledge,
I have been able to tutor in many different subjects. I specialize in standardized test preparation including all sections of the PSAT, SAT, ACT, and ASVAB.
26 Subjects: including algebra 2, algebra 1, reading, English
...I have been doing private math tutoring since I was a sophomore in high school. I believe in guiding students to the answers through prompt questions. This makes sure that when the student
leaves he or she is equipped to answer the problems on their own for tests and quizzes.
9 Subjects: including algebra 1, algebra 2, geometry, precalculus
...In my professional career I carried out training in a larger group environment. My passion is teaching and helping young kids acquire the skills to understand and master math and science. I am
very patient and know how to make complex study questions look simple.
15 Subjects: including algebra 2, algebra 1, trigonometry, geometry
I have a decade of teaching experience as a philosophy instructor, and before that as a tutor when I was completing my undergraduate degree at Grand Valley State University. I have a taught in a
variety of settings, including online, at the community college level, and at a major university. I hav...
9 Subjects: including algebra 1, algebra 2, English, reading
Related Sandy Springs, GA Tutors
Sandy Springs, GA Accounting Tutors
Sandy Springs, GA ACT Tutors
Sandy Springs, GA Algebra Tutors
Sandy Springs, GA Algebra 2 Tutors
Sandy Springs, GA Calculus Tutors
Sandy Springs, GA Geometry Tutors
Sandy Springs, GA Math Tutors
Sandy Springs, GA Prealgebra Tutors
Sandy Springs, GA Precalculus Tutors
Sandy Springs, GA SAT Tutors
Sandy Springs, GA SAT Math Tutors
Sandy Springs, GA Science Tutors
Sandy Springs, GA Statistics Tutors
Sandy Springs, GA Trigonometry Tutors
Nearby Cities With algebra Tutor
Alpharetta algebra Tutors
Atlanta algebra Tutors
Chamblee, GA algebra Tutors
Decatur, GA algebra Tutors
Doraville, GA algebra Tutors
Dunwoody, GA algebra Tutors
Johns Creek, GA algebra Tutors
Lawrenceville, GA algebra Tutors
Mableton algebra Tutors
Marietta, GA algebra Tutors
Norcross, GA algebra Tutors
Roswell, GA algebra Tutors
Smyrna, GA algebra Tutors
Tucker, GA algebra Tutors
Woodstock, GA algebra Tutors
|
{"url":"http://www.purplemath.com/Sandy_Springs_GA_Algebra_tutors.php","timestamp":"2014-04-17T21:52:34Z","content_type":null,"content_length":"24124","record_id":"<urn:uuid:fac19a6b-c138-469d-a6d0-13725f3f93df>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00352-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MathGroup Archive: December 2006 [00398]
[Date Index] [Thread Index] [Author Index]
Re: A problem in mathematical logic
• To: mathgroup at smc.vnet.net
• Subject: [mg72263] Re: [mg72242] A problem in mathematical logic
• From: Andrzej Kozlowski <akoz at mimuw.edu.pl>
• Date: Sun, 17 Dec 2006 06:20:07 -0500 (EST)
• References: <200612161017.FAA12165@smc.vnet.net> <70451048-2722-481D-AF0F-0899F375E840@mimuw.edu.pl>
On 16 Dec 2006, at 21:21, Andrzej Kozlowski wrote:
> On 16 Dec 2006, at 19:17, Bonny Banerjee wrote:
>> I am working on a problem that requires eliminating quantifiers
>> from a
>> logical combination of polynomial equations and inequatlities over
>> the
>> domain of real numbers, i.e. given a quantified logical expression
>> E, I want
>> a quantifier-free expression F, such that E and F are equivalent.
>> It has
>> been shown that the computational complexity of quantifier
>> elimination is
>> inherently doubly exponential in the number of variables.
> Not quite true. The Basu, Pollack, Roy algorithm is double
> exponential in the number of blocks of variables, where the blocks
> are delimited by alternations of the existential and universal
> quantifiers. This means you need to have at least two blocks and at
> least one of them must have more than one variable, for this to make
> a difference.
>> I want to know
>> whether quantifier elimination can be done in lesser time if we
>> take the
>> help of examples.
>> Suppose I have a quantified logical expression E1 and its equivalent
>> quantifier-free expression F1. Now I am given another quantified
>> logical
>> expression E2 and the task is to compute its equivalent quantifier-
>> free
>> expression, say F2. Instead of spending a lot of time computing F2
>> from E2
>> using the quatifier elimination algorithm, I want to know whether
>> there
>> exists a mapping between E2 and E1, so that F2 can be computed
>> from F1 by
>> reverse-mapping. This idea will be beneficial only if there exists
>> such a
>> mapping that can be computed in time less than doubly exponential.
>> Example of a mapping: If I can show that E1 and E2 are equivalent,
>> then F1
>> and F2 have to be equivalent. So "equivalence" is an example of
>> such a
>> mapping. Unfortunately, the general problem of equivalence
>> checking is
>> NP-hard.
>> I suspect, there might exist a mapping weaker than equivalence (and
>> computable in polynomial time) that will suffice for my purposes.
>> Please let
>> me know if any of you are already aware of any such mapping. Any
>> suggestion
>> regarding which book/paper to look at would also help.
> Each of your two expressions E1 and E2 is equivalent to specifying
> some semi-algebraic set. I have not given much thought to your
> question, but at this time the only way I can imagine of
> constructing (in the general case) your mapping is by using
> Cylindrical Decomposition, which is indeed inherently double
> exponential (although there are approximate versions which are only
> single exponential - e.g.
> Experimental`GenericCylindricalDecomposition` ). The improvements
> in complexity of a number of algorithms (such as computing the
> connected components of a semi-algebraic set, quantifier
> elimination etc.) achieved by Basu, Pollack and Roy is due to their
> being able to avoid the need to use Cylindrical Decomposition. The
> relevant book is
> "Algorithms in Real Algebraic Geometry" by Basu, Pollack and Roy
> (Springer, Algorithms and Computation in mathematics, Volume 10).
> (However, this book has 580 pages and the relevant algorithms are
> all near the end).
> Andrzej Kozlowski
I forgot to add that the decision problem for the existential theory
of the reals (in other words, quantifier elimination over the reals
where we only have to deal with the existential quantifier; this is
equivalent to deciding whether or not a given semi-algebraic set is
empty) can be solved with singly exponential complexity in the number
of variables.
It seems to me that singly exponential complexity is the best one can
expect of general algorithms of this kind. I do not know of any
algorithms with polynomial complexity in this whole field. ( Maybe
one can do better using numeric-symbolic methods, about which I know
very little, but I am pretty certain that the complexity is still
singly exponential).
Of course a particular problem with some special structure may be
solvable much more efficiently using special tricks rather than
general algorithms, but that is already mathematics rather than
"computer science" ;-)
Andrzej Kozlowski
|
{"url":"http://forums.wolfram.com/mathgroup/archive/2006/Dec/msg00398.html","timestamp":"2014-04-19T19:50:10Z","content_type":null,"content_length":"39492","record_id":"<urn:uuid:187e1592-db3b-455d-a452-5dbdd560e8de>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00032-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Free Rainbow Tables | Forum
The lowered max sptl definitely allows better space saving.
458 + 2mb compared to 512 + 200 gotta love it.
The lm_lm-frt-cp437-850#1-7 set has a keyspace of almost 2^48 but has a max sptl of 38.
The ntlm_loweralpha-numeric-space#1-8 set has a keyspace of 2^41.7152 and has a max sptl of 42.
blazerx wrote:
(-sptl=42 -eptl=14)
during the transition from file 9 -> 0 the converter crashed.
Tested various files from the LM CP437-850 set with different indexes; so far looks good. Works with rcracki (-sptl=37 -eptl=19 used)
Small problem found with the converter though
(-sptl=42 -eptl=14)
during the transition from file 9 -> 0 the converter crashed.
However if the files are placed separately in directories the conversion works fine.
Edit: Forgot to add index 0 and 2 works fine with no crash
Downloaded the wrong index files for the new MD5 tables (accidentally downloaded 1 instead of 0), so I haven't had a chance to test it yet.
blazerx wrote:
oh ok
The NTLM loweralpha 9 doesn't seem to want to pack with 43 13, but needs 43 21, so it seems like only the index is saved on space, going down to 300kb index files.
I shall collect the index files for CP437 and the indxes for the new MD5 lcase 10 tables tonight and see how things go tomorrow since i have most of those table files.
PowerBlade wrote:
Can we have some volunteers to test out this build of rcracki_mt for the RTI2 issues listed above?
The issue was the following:
pReader->ReadChains(nDataRead, pChain);
nDataRead *= 8; // Convert from chains read to bytes
A few lines down:
int nRainbowChainCountRead = nDataRead / 16;
So all of the chains are read into memory, but only half of them are used :)
nDataToRead = nAllocatedSize / 16;
nDataRead = nDataToRead;
pReader->ReadChains(nDataRead, pChain);
nDataRead *= 8; // Convert from chains read to bytes
blazerx wrote:
I think I accidentally deleted the indexes for the CP437 tables, so I'll grab them later, but what I did was convert the LM_ALph_num 7 set, and when I tried to rcracki them I got the "A solution needs
to be found for this problem" message. So I think maybe rcracki doesn't have an LM RTI2 implementation or something
I've finally gotten to merging changes and pushed out rcracki_mt_0.6.5.2 (rcracki.sourceforge.net). Mainly this is rti2 fixes.
Also, PowerBlade's converti2 changes so you can go rti -> rti2 without the rto step are in gitorious (
http://gitorious.org/freerainbowtables- ... nbowtables
) and *nix users can just grab it and run make. Windows binary builds of any tools besides rcracki_mt harass PB for :P
The first set that I picked from the list of sequentially generated tables proved interesting: mysqlsha1_numeric#1-12_*
I did the usual converti2 -d and found my start point for the conversion of 33 bits from mysqlsha1_numeric#1-12_0 and just used that for the 5 tables in the set. Apparently this set is even weirder
than I remembered when it was created. The first tip off was seeing "this file is not sorted." When I ran -d across all the table chunks here is what I came up with:
-sptl=33 -eptl=23 - at rcracki_mt run FATAL: m_indexrowsizebytes > 1: 2
-sptl=33 -eptl=15
-sptl=31 -eptl=25 - at rcracki_mt run FATAL: m_indexrowsizebytes > 1: 2
-sptl=31 -eptl=17
-sptl=33 -eptl=15 - for every file in the table "this file is not sorted"
-sptl=35 -eptl=21 - at rcracki_mt run FATAL: m_indexrowsizebytes > 1: 2
-sptl=35 -eptl=13
This last table is where it got very weird even tho I get 35 from -d for all the tables it didn't seem to work for _0 and _1 at crack time:
536870912 bytes read, disk access time: 2.99 s
verifying the file...
this file is not sorted
536870912 bytes read, disk access time: 2.87 s
verifying the file...
this file is not sorted
For those 2 table chunks I used: -sptl 36 -eptl 12
The end result is that both my rti and rti2 runs produced the same stats except minor time variance.
./rcracki_mt -t 4 -h 1604DD3A95B8AE90462F4BCEE373FC5697582B65 /mnt/rainbow_tables/freerainbowtables/mysqlsha1/mysqlsha1_numeric#1-12_?/*.rti
./rcracki_mt -t 4 -h 1604DD3A95B8AE90462F4BCEE373FC5697582B65 /mnt/rainbow_tables/freerainbowtables/mysqlsha1/mysqlsha1_numeric#1-12_?/*.rti2
plaintext found: 0 of 1 (0.00%)
total disk access time: 90.56 s
total cryptanalysis time: 6.48 s
total pre-calculation time: 13.49 s
total chain walk step: 89955005
total false alarm: 19014
total chain walk step due to false alarm: 42227888
I think this is the complete set of start points for the sequential tables PB listed (and some newer ones):
33 mysqlsha1_numeric#1-12_[012]
31 mysqlsha1_numeric#1-12_3
35 mysqlsha1_numeric#1-12_4
34 halflmchall_all-space#1-7_0
33 halflmchall_all-space#1-7_[123]
38 lm_lm-frt-cp437-850#1-7_0
37 lm_lm-frt-cp437-850#1-7_[123]
unique chain min (for 99.9% for the set): 6,338,391,198
md5_loweralpha#1-10_3: 67108864*95+1479350 = 6,376,821,430
36 md5_loweralpha#1-10_[0123] with 1 exception:
38 - 1
39 - 3
40 - 1
41 - 1
42 - 1
43 - 3
34 mysqlsha1_loweralpha-numeric-space#1-8_0
33 mysqlsha1_loweralpha-numeric-space#1-8_[123]
37 ntlm_mixalpha-numeric#1-8_[0123]
unique chain min (for 99.9% for the set): 3,047,005,299
ntlm_mixalpha-numeric-all-space#1-7_1: 67108864*45+39127566 = 3,059,026,446
ntlm_mixalpha-numeric-all-space#1-7_2: 67108864*45+39141984 = 3,059,040,864
ntlm_mixalpha-numeric-all-space#1-7_3: 67108864*45+38980523 = 3,058,879,403
35 ntlm_mixalpha-numeric-all-space#1-7_[0123] with 3 exceptions
40 - 62
45 - 6
46 - 13
36 - 900
37 - 151
47 - 35
40 - 4
48 - 10
36 - 971
37 - 295
46 - 33
35 md5_mixalpha-numeric-all-space#1-7_[0123]
The small number of start points that exceed the highest number of bits of the rest of the table and fall in the last file are something we'll need to deal with. We purposefully over-generate the
number of chains required for our expectedSuccessRate so that in cases like this, after the math is checked, it is likely safe to discard some chains. This sort of computation of if it is safe to
drop some chains as well as the code to do so will have to be something we integrate into converti2. In theory the client, validator, assimilator, perfecter, etc. should all be checking and
discarding these in the first place but that's also TBD.
Hi quel,
thanks for all the work around rcracki_mt!
I tried to make converti2 on my Debian Version Lenny amd64, but I get an error:
g++ -Wall -m32 -ansi -I../../Common/rt\ api -O3 -c ../../Common/rt\ api/MemoryPool.cpp
In file included from /usr/include/features.h:354,
from /usr/include/stdio.h:28,
from ../../Common/rt api/Public.h:27,
from ../../Common/rt api/MemoryPool.cpp:28:
/usr/include/gnu/stubs.h:7:27: error: gnu/stubs-32.h: No such file or directory
make: *** [MemoryPool.o] Error 1
Can't find any "stubs"-Library in the standard Debianrepositorys. So my question is: "Where to find this Library?" Could you give some hints to all the Debian users out there?
LordAlien wrote:
I tried to make converti2 on my Debian Version Lenny amd64, but I get an error:
A sound choice of OS as that's what I'm using ;)
[quel@paranoia ~] dlocate /usr/include/gnu/stubs-32.h
libc6-dev-i386: /usr/include/gnu/stubs-32.h
You can always search package contents at:
http://www.debian.org/distrib/packages#search_contents
http://packages.debian.org/search?searc ... e&arch=any
Right now it forces 32-bit compilation (the -m32) as there are bugs in the 64-bit version. I cleaned up most of them and the 64-bit version does at least produce rti2 tables that match but the
.rti2.index files are completely broken.
Thanks for those useful links and the explanation.
Hopefully you will fix those 64Bit problems.
I want to store the new MD5-Table on my 2TB RAID1 next to all the other tables, but it will only fit if I convert some tables from rti to rti2.
Damn it, another error:
g++ -Wall -m32 -ansi -I../../Common/rt\ api -O3 MemoryPool.o Public.o RTI2Reader.o RTIReader.o RTReader.o converti2.cpp -o converti2
/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-linux-gnu/4.3.2/libstdc++.so when searching for -lstdc++
/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-linux-gnu/4.3.2/libstdc++.a when searching for -lstdc++
/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-linux-gnu/4.3.2/libstdc++.so when searching for -lstdc++
/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-linux-gnu/4.3.2/libstdc++.a when searching for -lstdc++
/usr/bin/ld: cannot find -lstdc++
collect2: ld returned 1 exit status
make: *** [converti2] Error 1
I have installed following packets matching libstdc++:
ii libstdc++6 4.3.2-1.1 The GNU Standard C++ Library v3
ii libstdc++6-4.3-dev 4.3.2-1.1 The GNU Standard C++ Library v3 (development
Never had such an error, so I have no idea what to do against that. Any hints?
LordAlien wrote:
g++ -Wall -m32 -ansi -I../../Common/rt\ api -O3 MemoryPool.o Public.o RTI2Reader.o RTIReader.o RTReader.o converti2.cpp -o converti2
/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-linux-gnu/4.3.2/libstdc++.so when searching for -lstdc++
/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-linux-gnu/4.3.2/libstdc++.a when searching for -lstdc++
/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-linux-gnu/4.3.2/libstdc++.so when searching for -lstdc++
/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-linux-gnu/4.3.2/libstdc++.a when searching for -lstdc++
/usr/bin/ld: cannot find -lstdc++
collect2: ld returned 1 exit status
make: *** [converti2] Error 1
I have installed following packets matching libstdc++:
ii libstdc++6 4.3.2-1.1 The GNU Standard C++ Library v3
ii libstdc++6-4.3-dev 4.3.2-1.1 The GNU Standard C++ Library v3 (development
Never had such an error, so I have no idea what to do against that. Any hints?
You had me scratching my head as well with this one until I noted /usr/lib/gcc/x86_64-linux-gnu and wondered if it was yet another 32-bit compat package you lacked. I believe it's this one:
Just to be safe here is the list of '*32*' packages I have installed:
Ok just this time I've created a static build of converti2 for linux x86/x86_64 that also has a static libgcc. This is the same method we use to created distrrtgen clients. If all else fails anyone
wishing on linux to try converti2 and running into compilation difficulties can try this binary. Though, documenting the required libraries is probably a good idea too :P
|
{"url":"https://www.freerainbowtables.com/phpBB3/viewtopic.php?p=15494","timestamp":"2014-04-21T01:59:22Z","content_type":null,"content_length":"73925","record_id":"<urn:uuid:33121e19-4e51-48c7-aa60-4849ecfdd487>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00499-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Campo, CA Math Tutor
Find a Campo, CA Math Tutor
...I look forward to bringing the skills which made me a successful student to the chalkboard, sharing them with my students while I continue to learn as well.As a senior in high school I earned
SAT scores of 740CR/690M, placing me in the top 3% of test-takers. As a tutor, I have led 87% of my SAT ...
54 Subjects: including calculus, chemistry, algebra 2, SAT math
...In tutoring, I focus on providing practical examples and background information that help make the subject matter come to life. B.S. in Political Science; J.D., Southern Illinois University;
Active Member of the Florida Bar. I am a licensed attorney who has argued many cases in court. I also have a law degree as well as a Bachelor's Degree in Political Science.
16 Subjects: including logic, English, writing, economics
...My favorite subjects are math and science, and I have been tutoring for over 15 years to students of all ages, K-12 and college. My tutoring sessions are very effective. Within the first
session, I am able to pinpoint exactly what is hindering a student from understanding the subject matter.
35 Subjects: including calculus, English, trigonometry, reading
...The GED, ACT, SAT and TOEFL all have different formats and specific measures of knowledge and communication skills.I mastered in Anthropology specializing in Online Communities and Religion.
My course of study covered classes in religion, folklore (often directly related to religious beliefs), a...
37 Subjects: including SAT math, ACT Math, ESL/ESOL, English
Hello, My name is Sarmad, and I am a math tutor. I have a Bachelor degree in mathematics and teaching credentials from San Diego State University. Also, I have worked for a long period of time as
a math tutor at various places.
8 Subjects: including trigonometry, geometry, precalculus, statistics
Related Campo, CA Tutors
Campo, CA Accounting Tutors
Campo, CA ACT Tutors
Campo, CA Algebra Tutors
Campo, CA Algebra 2 Tutors
Campo, CA Calculus Tutors
Campo, CA Geometry Tutors
Campo, CA Math Tutors
Campo, CA Prealgebra Tutors
Campo, CA Precalculus Tutors
Campo, CA SAT Tutors
Campo, CA SAT Math Tutors
Campo, CA Science Tutors
Campo, CA Statistics Tutors
Campo, CA Trigonometry Tutors
|
{"url":"http://www.purplemath.com/Campo_CA_Math_tutors.php","timestamp":"2014-04-16T13:54:58Z","content_type":null,"content_length":"23604","record_id":"<urn:uuid:259f0bfe-83d7-48e7-ad47-017febdd724d>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00374-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions
Topic: Construction of 17-sided regular polygon
Replies: 4 Last Post: Feb 8, 1999 2:23 PM
Messages: [ Previous | Next ]
Re: Construction of 17-sided regular polygon
Posted: Jan 31, 1999 9:05 AM
On Sun, 31 Jan 1999, Peter Hung wrote:
> I have been trying to find out the procedure for constructing a regular 17-sided
> polygon.
The neatest construction I know is due to Richmond - I call it the
"quadruple quadrisection constriction":
1) quadrisect the perimeter of the circle, by points N,S,E,W;
2) quadrisect the radius ON by the point A;
3) quadrisect the angle OAE by the line AB;
4) quadrisect the straight angle BAC by the line AD:
[Conway's ASCII diagram of the construction is garbled in the archive; only the labels I, J and C survive.]
5) draw the semicircle DFE, cutting ON in F;
6) draw the semicircle GFH, centred at B;
7) cut the semicircle WNE by the perpendiculars GI and HJ to WE.
Then I and J are points of the regular heptakaidecagon on the
circle ENWS that has one vertex at E.
John Conway
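Behind the construction lies Gauss's closed form for cos(2π/17) in nested square roots, which is what makes the heptakaidecagon constructible at all. A quick Python check (an editorial aside, not part of Conway's post):

    from math import cos, pi, sqrt

    s = sqrt(17)
    a = sqrt(34 - 2 * s)
    b = sqrt(34 + 2 * s)
    c = sqrt(17 + 3 * s - a - 2 * b)
    gauss = (-1 + s + a + 2 * c) / 16    # Gauss's radical expression

    print(gauss)             # 0.9324722294...
    print(cos(2 * pi / 17))  # 0.9324722294...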
Date Subject Author
1/31/99 Construction of 17-sided regular polygon Peter Hung
1/31/99 Re: Construction of 17-sided regular polygon John Conway
1/31/99 Re: Construction of 17-sided regular polygon Antreas P. Hatzipolakis
2/2/99 Re: Construction of 17-sided regular polygon Peter Hung
2/8/99 Re: Construction of 17-sided regular polygon Antreas P. Hatzipolakis
|
{"url":"http://mathforum.org/kb/message.jspa?messageID=1078422","timestamp":"2014-04-16T05:41:08Z","content_type":null,"content_length":"21888","record_id":"<urn:uuid:4fd42606-bd17-4775-94c0-5b63a4031b34>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00610-ip-10-147-4-33.ec2.internal.warc.gz"}
|
8. Partial differential equations
Consider the Laplace equation in two dimensions

∂²φ/∂x² + ∂²φ/∂y² = 0

in some rectangular domain described by x in [x_0,x_1], y in [y_0,y_1]. Suppose we discretise the solution onto an (m+1) by (n+1) rectangular grid (or mesh) given by x_i = x_0 + iΔx, y_j = y_0 + jΔy where i=0,...,m, j=0,...,n. The mesh spacing is Δx = (x_1-x_0)/m and Δy = (y_1-y_0)/n. Let φ_{ij} = φ(x_i,y_j) be the exact solution at the mesh point i,j, and F_{ij} ≈ φ_{ij} be the approximate solution at that mesh point.
By considering the Taylor Series expansion for φ about some mesh point i,j,

φ_{i±1,j} = φ_{ij} ± Δx (∂φ/∂x)_{ij} + (Δx²/2)(∂²φ/∂x²)_{ij} ± (Δx³/6)(∂³φ/∂x³)_{ij} + O(Δx⁴), (102a)

φ_{i,j±1} = φ_{ij} ± Δy (∂φ/∂y)_{ij} + (Δy²/2)(∂²φ/∂y²)_{ij} ± (Δy³/6)(∂³φ/∂y³)_{ij} + O(Δy⁴), (102b)

it is clear that we may approximate ∂²φ/∂x² and ∂²φ/∂y² to the first order using the four adjacent mesh points to obtain the finite difference approximation

(F_{i+1,j} - 2F_{ij} + F_{i-1,j})/Δx² + (F_{i,j+1} - 2F_{ij} + F_{i,j-1})/Δy² = 0 (103)
for the internal points 0<i<m, 0<j<n. In addition to this we will have either Dirichlet, Neumann or mixed boundary conditions to specify the boundary values of φ. The system of linear equations described by (103) in combination with the boundary conditions may be solved in a variety of ways.
Provided the boundary conditions are linear in φ, our finite difference approximation is itself linear and the resulting system of equations may be solved directly using Gauss Elimination as discussed in section 4.1. This approach may be feasible if the total number of mesh points (m+1)(n+1) required is relatively small, but as the matrix A used to represent the complete system will have [(m+1)(n+1)]² elements, the storage and computational cost of such a solution will become prohibitive even for relatively modest m and n.
The structure of the system ensures A is relatively sparse, consisting of a tridiagonal core with one nonzero diagonal above and another below this. These nonzero diagonals are offset by either m or n from the leading diagonal. Provided pivoting (if required) is conducted in such a way that it does not place any nonzero elements outside this band, then solution by Gauss Elimination or LU Decomposition will only produce nonzero elements inside this band, substantially reducing the storage and computational requirements (see section 4.4). Careful choice of the order of the matrix elements (i.e. by x or by y) may help reduce the size of this matrix so that it need contain only O(m³) elements for a square domain.
Because of the widespread need to solve Laplace's and related equations, specialised solvers have been developed for this problem. One of the best of these is Hockney's method for solving Ax = b which may be used to reduce a block tridiagonal matrix (and the corresponding right-hand side) of the form

A = [ T  I            ]
    [ I  T  I         ]
    [    I  T  .      ]
    [       .  .   I  ]
    [          I   T  ]

into a block diagonal matrix of the form

A' = [ T'            ]
     [    T'         ]
     [       .       ]
     [          T'   ]

where I is an identity matrix. This process may be performed iteratively to reduce an n dimensional finite difference approximation to Laplace's equation to a tridiagonal system of equations with n-1 applications. The computational cost is O(p log p), where p is the total number of mesh points. The main drawback of this method is that the boundary conditions must be able to be cast into the block tridiagonal format.
An alternative to direct solution of the finite difference equations is an iterative numerical solution. These iterative methods are often referred to as relaxation methods as an initial guess at the
solution is allowed to slowly relax towards the true solution, reducing the errors as it does so. There are a variety of approaches with differing complexity and speed. We shall introduce these
methods before looking at the basic mathematics behind them.
The Jacobi Iteration is the simplest approach. For clarity we consider the special case when Δx = Δy. To find the solution for a two-dimensional Laplace equation simply:
1. Initialise F_{ij} to some initial guess.
2. Apply the boundary conditions.
3. For each internal mesh point set
F*_{ij} = (F_{i+1,j} + F_{i-1,j} + F_{i,j+1} + F_{i,j-1})/4. (106)
4. Replace old solution F with new estimate F*.
5. If solution does not satisfy tolerance, repeat from step 2.
The coefficients in the expression (here all 1/4) used to calculate the refined estimate are often referred to as the stencil or template. Higher order approximations may be obtained by simply employing a stencil which utilises more points. Other equations (e.g. the bi-harmonic equation, ∇⁴ψ = 0) may be solved by introducing a stencil appropriate to that equation.
While very simple and cheap per iteration, the Jacobi Iteration is very slow to converge, especially for larger grids. Corrections to errors in the estimate F_{ij} diffuse only slowly from the boundaries, taking O(max(m,n)) iterations to diffuse across the entire mesh.
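A minimal sketch of the scheme in Python/NumPy (an illustration, not part of the original notes; the grid size, boundary values and tolerance are arbitrary):

    import numpy as np

    m = n = 32
    F = np.zeros((m + 1, n + 1))
    F[:, -1] = 1.0                    # a simple Dirichlet condition on one edge

    for sweep in range(100000):
        # stencil (106): average of the four neighbours at every interior point
        Fnew = F.copy()
        Fnew[1:-1, 1:-1] = 0.25 * (F[2:, 1:-1] + F[:-2, 1:-1]
                                   + F[1:-1, 2:] + F[1:-1, :-2])
        if np.max(np.abs(Fnew - F)) < 1e-6:
            break
        F = Fnew

The slow convergence is visible directly: doubling the mesh resolution roughly quadruples the number of sweeps needed to reach the same tolerance.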
The Gauss-Seidel Iteration is very similar to the Jacobi Iteration, the only difference being that the new estimate F*_{ij} is returned to the solution F_{ij} as soon as it is completed, allowing it to be used immediately rather than deferring its use to the next iteration. The advantages of this are:
• Less memory required (there is no need to store F*).
• Faster convergence (although still relatively slow).
On the other hand, the method is less amenable to vectorisation as, for a given iteration, the new estimate of one mesh point is dependent on the new estimates for those already scanned.
A variant on the Gauss-Seidel Iteration is obtained by updating the solution F_{ij} in two passes rather than one. If we consider the mesh points as a chess board, then the white squares would be updated on the first pass and the black squares on the second pass. The advantages are:
• No interdependence of the solution updates within a single pass aids vectorisation.
• Faster convergence at low wave numbers.
It has been found that the errors in the solution obtained by any of the three preceding methods decrease only slowly and often decrease in a monotonic manner. Hence, rather than setting
F*_{ij} = (F_{i+1,j} + F_{i-1,j} + F_{i,j+1} + F_{i,j-1})/4,
for each internal mesh point, we use
F*_{ij} = (1-s)F_{ij} + s(F_{i+1,j} + F_{i-1,j} + F_{i,j+1} + F_{i,j-1})/4, (107)
for some value s. The optimal value of s will depend on the problem being solved and may vary as the iteration process converges. Typically, however, a value of around 1.2 to 1.4 produces good results. In some special cases it is possible to determine an optimal value analytically.
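In the Jacobi sketch above, over-relaxation is a small change (again illustrative; s = 1.3 is just a typical choice). Sweeping in place also gives the Gauss-Seidel behaviour of using new values immediately:

    # SOR update (107); in-place sweep so new values are used at once
    s = 1.3
    for i in range(1, m):
        for j in range(1, n):
            F[i, j] = (1 - s) * F[i, j] + s * 0.25 * (F[i + 1, j] + F[i - 1, j]
                                                      + F[i, j + 1] + F[i, j - 1])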
The big problem with relaxation methods is their slow convergence. If s = 1 then application of the stencil removes all the error in the solution at the wave length of the mesh for that point, but has little impact on larger wave lengths. This may be seen if we consider the one-dimensional equation d²φ/dx² = 0 subject to φ(x=0) = 0 and φ(x=1) = 1. Suppose our initial guess for the iterative solution is that F_i = 0 for all internal mesh points. With the Jacobi Iteration the correction to the internal points diffuses only slowly along from x = 1.
Multigrid methods try to improve the rate of convergence by considering the problem on a hierarchy of grids. The larger wave length errors in the solution are dissipated on a coarser grid while the shorter wave length errors are dissipated on a finer grid. For the example considered above, the solution would converge in one complete Jacobi multigrid iteration, compared with the slow asymptotic convergence above.
For linear problems, the basic multigrid algorithm for one complete iteration may be described as
1. Select the initial finest grid resolution p=P[0] and set b^(p) = 0 and make some initial guess at the solution F^(p)
2. If at coarsest resolution (p=0) then solve A^(p)F^(p)=b^(p) exactly and jump to step 7
3. Relax the solution at the current grid resolution, applying boundary conditions
4. Calculate the error r = AF^(p)-b^(p)
5. Coarsen the error, b^(p-1) <- r, to the next coarser grid and decrement p
6. Repeat from step 2
7. Refine the correction to the next finest grid F^(p+1) = F^(p+1)+aF^(p) and increment p
8. Relax the solution at the current grid resolution, applying boundary conditions
9. If not at current finest grid (P[0]), repeat from step 7
10. If not at final desired grid, increment P[0] and repeat from step 7
11. If not converged, repeat from step 2.
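A one-dimensional sketch of such a V-cycle in Python/NumPy (illustrative only: the damping factor, sweep counts and grid sizes are arbitrary, damped Jacobi is used as the relaxation step rather than the SOR mentioned below, and coarsening is by simple sub-sampling):

    import numpy as np

    def relax(u, b, h, sweeps=3, w=2.0 / 3.0):
        for _ in range(sweeps):                 # damped Jacobi on d2u/dx2 = b
            u[1:-1] = ((1 - w) * u[1:-1]
                       + w * 0.5 * (u[:-2] + u[2:] - h * h * b[1:-1]))
        return u

    def residual(u, b, h):
        r = np.zeros_like(u)
        r[1:-1] = b[1:-1] - (u[:-2] - 2 * u[1:-1] + u[2:]) / (h * h)
        return r

    def refine(v):                              # linear interpolation to fine grid
        fine = np.zeros(2 * len(v) - 1)
        fine[::2] = v
        fine[1::2] = 0.5 * (v[:-1] + v[1:])
        return fine

    def vcycle(u, b, h, alpha=0.8):
        if len(u) <= 3:                         # coarsest grid: just relax hard
            return relax(u, b, h, sweeps=50)
        u = relax(u, b, h)                      # pre-smooth
        r = residual(u, b, h)
        e = vcycle(np.zeros(len(u) // 2 + 1), r[::2], 2 * h)  # coarse-grid error
        u += alpha * refine(e)                  # apply damped correction
        return relax(u, b, h)                   # post-smooth

    m = 64
    u = np.zeros(m + 1); u[-1] = 1.0            # phi(0) = 0, phi(1) = 1
    b = np.zeros(m + 1)
    for _ in range(10):
        u = vcycle(u, b, h=1.0 / m)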
Typically the relaxation steps will be performed using Successive Over Relaxtion with Red-Black ordering and some relaxation coefficient s. The hierarchy of grids is normally chosen to differ in
dimensions by a factor of 2 in each direction. The factor a is typically less than unity and effectively damps possible instabilities in the convergence. The refining of the correction to a finer
grid will be achieved by (bi-)linear or higher order interpolation, and the coarsening may simply be by sub-sampling or averaging the error vector r.
It has been found that the number of iterations required to reach a given level of convergence is more or less independent of the number of mesh points. As the number of operations per complete iteration for n mesh points is O(n) + O(n/2^d) + O(n/2^{2d}) + ..., where d is the number of dimensions in the problem, it can be seen that the Multigrid method may often be faster than a direct solution (which will require O(n^3), O(n^2) or O(n log n) operations, depending on the method used). This is particularly true if n is large or there are a large number of dimensions in the problem. For small problems, the coefficient in front of the n for the Multigrid solution may be relatively large so that direct solution may be faster.
A further advantage of Multigrid and other iterative methods when compared with direct solution, is that irregular shaped domains or complex boundary conditions are implemented more easily. The
difficulty with this for the Multigrid method is that care must be taken in order to ensure consistent boundary conditions in the embedded problems.
In principle, relaxation methods which are the basis of the Jacobi, Gauss-Seidel, Successive Over Relaxation and Multigrid methods may be applied to any system of linear equations to iteratively improve an approximation to the exact solution. The basis for this is identical to the Direct Iteration method described in section 3.6. We start by writing the vector function
f(x) = Ax - b, (109)
and search for the vector of roots to f(x) = 0 by writing
g(x) = D^{-1}{[A+D]x - b}, (110)
with D a diagonal matrix (zero for all off-diagonal elements) which may be chosen arbitrarily. We may analyse this system by following our earlier analysis for the Direct Iteration method (section 3.6). Let us assume the exact solution is x* = g(x*), then
e_{n+1} = x_{n+1} - x*
= D^{-1}{[A+D]x_n - b} - D^{-1}{[A+D]x* - b}
= D^{-1}[A+D](x_n - x*)
= D^{-1}[A+D]e_n
= {D^{-1}[A+D]}^{n+1} e_0.
From this it is clear that convergence will be linear and requires
||e_{n+1}|| = ||Be_n|| < ||e_n||, (111)
where B = D^{-1}[A+D], for some suitable norm. As any error vector e_n may be written as a linear combination of the eigen vectors of our matrix B, it is sufficient for us to consider the eigen values λ of B and require max(|λ|) to be less than unity. In the asymptotic limit, the smaller the magnitude of this maximum eigen value the more rapid the convergence. The convergence remains, however, linear.
Since we have the ability to choose the diagonal matrix D, and since it is the eigen values of B = D^{-1}[A+D] rather than A itself which are important, careful choice of D can aid the speed at which the method converges. Typically this means selecting D so that the diagonal of B is small.
The structure of the finite difference approximation to Laplace's equation lends itself to these relaxation methods. In one dimension,

A = [ -2   1                ]
    [  1  -2   1            ]
    [      1  -2   1        ]
    [          .   .   .    ]
    [              1  -2    ]

and both Jacobi and Gauss-Seidel iterations take D as 2I (I is the identity matrix) on the diagonal to give B = D^{-1}[A+D] as

B = (1/2) [ 0   1                ]
          [ 1   0   1            ]
          [     1   0   1        ]
          [         .   .   .    ]
          [             1   0    ]

The eigen values λ of this matrix are given by the roots of
det(B - λI) = 0. (115)
In this case the determinant may be obtained using the recurrence relation
det(B - λI)_{(n)} = -λ det(B - λI)_{(n-1)} - (1/4) det(B - λI)_{(n-2)}, (116)
where the subscript gives the size of the matrix B. From this we may see
det(B - λI)_{(1)} = -λ ,
det(B - λI)_{(2)} = λ² - 1/4 ,
det(B - λI)_{(3)} = -λ³ + (1/2)λ ,
det(B - λI)_{(4)} = λ⁴ - (3/4)λ² + 1/16 ,
det(B - λI)_{(5)} = -λ⁵ + λ³ - (3/16)λ ,
det(B - λI)_{(6)} = λ⁶ - (5/4)λ⁴ + (3/8)λ² - 1/64 ,
which may be solved to give the eigen values
λ_{(1)} = 0 ,
λ²_{(2)} = 1/4 ,
λ²_{(3)} = 0, 1/2 ,
λ²_{(4)} = (3 ± √5)/8 ,
λ²_{(5)} = 0, 1/4, 3/4 .
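These eigen values are easy to check numerically (an aside, not in the original notes); they are cos(kπ/(n+1)), k = 1, ..., n, so their squares reproduce the table above:

    import numpy as np

    n = 5
    A = -2 * np.eye(n) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    B = (A + 2 * np.eye(n)) / 2          # B = D^-1 [A + D] with D = 2I
    print(np.sort(np.linalg.eigvalsh(B) ** 2))   # [0, 0.25, 0.25, 0.75, 0.75]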
It can be shown that for a system of any size following this general form, all the eigen values satisfy |λ| < 1, thus proving the relaxation method will always converge. As we increase the number of mesh points, the number of eigen values increases and gradually fills up the range |λ| < 1, with the numerically largest eigen values becoming closer to unity. As a result of λ → 1, the convergence of the relaxation method slows considerably for large problems. A similar analysis may be applied to Laplace's equation in two or more dimensions, although the expressions for the determinant and eigen values are correspondingly more complex.
The large eigen values are responsible for decreasing the error over large distances (many mesh points). The multigrid approach enables the solution to converge using a much smaller system of
equations and hence smaller eigen values for the larger distances, bypassing the slow convergence of the basic relaxation method.
The analysis of the Jacobi and Gauss-Seidel iterations may be applied equally well to Successive Over Relaxation. The main difference is that D = (2/s)I so that

B = [ 1-s  s/2                ]
    [ s/2  1-s  s/2           ]
    [      s/2  1-s  s/2      ]
    [           .    .    .   ]
    [                s/2  1-s ]

and the corresponding eigen values μ are related to the values λ² tabulated above by (μ - 1 + s)² = s²λ². Thus if s is chosen inappropriately, the eigen values of B will exceed unity and the relaxation method will diverge. On the other hand, careful choice of s will allow the eigen values of B to be less than those for Jacobi and Gauss-Seidel, thus increasing the rate of convergence.
Relaxation methods may be applied to other differential equations or more general systems of linear equations in a similar manner. As a rule of thumb, the solution will converge if the A matrix is
diagonally dominant, i.e. the numerically largest values occur on the diagonal. If this is not the case, SOR can still be used, but it may be necessary to choose s < 1 whereas for Laplace's equation
s >= 1 produces a better rate of convergence.
One of the most common ways of solving Laplace's equation is to take the Fourier transform of the equation to convert it into wave number space and there solve the resulting algebraic equations. This
conversion process can be very efficient if the Fast Fourier Transform algorithm is used, allowing a solution to be evaluated with O(n log n) operations.
In its simplest form the FFT algorithm requires there to be n = 2^p mesh points in the direction(s) to be transformed. The efficiency of the algorithm is achieved by first calculating the transform
of pairs of points, then of pairs of transforms, then of pairs of pairs and so on up to the full resolution. The idea is to divide and conquer! Details of the FFT algorithm may be found in any
standard text.
The Poisson equation ∇²u = f(x) may be treated using the same techniques as Laplace's equation. It is simply necessary to set the right-hand side to f, scaled suitably to reflect any scaling in A.
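As a concrete illustration of the Fourier approach (my own sketch, not from the notes), the one-dimensional periodic problem u'' = f can be solved by transforming, dividing by the symbol of the discrete Laplacian, and transforming back; the grid size and right-hand side below are arbitrary.

import numpy as np

# Solve u'' = f on a periodic grid of n points with the FFT.  The discrete
# Laplacian (u[i-1] - 2u[i] + u[i+1]) / dx^2 multiplies the Fourier mode k
# by (2 cos(2 pi k / n) - 2) / dx^2, so we divide by that symbol.
n = 64
dx = 1.0 / n
x = np.arange(n) * dx
f = np.sin(2 * np.pi * x)               # right-hand side (zero mean)

fhat = np.fft.fft(f)
sym = (2.0 * np.cos(2 * np.pi * np.arange(n) / n) - 2.0) / dx**2
sym[0] = 1.0                            # k = 0 mode: fix the free constant
fhat[0] = 0.0
u = np.real(np.fft.ifft(fhat / sym))

# Exact periodic solution of u'' = sin(2 pi x) is -sin(2 pi x) / (2 pi)^2;
# the difference is just the O(dx^2) truncation error of the stencil.
print(np.abs(u + np.sin(2 * np.pi * x) / (2 * np.pi) ** 2).max())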
Consider the two-dimensional diffusion equation,

∂u/∂t = D ∇²u = D (∂²u/∂x² + ∂²u/∂y²) ,

subject to u(x,y,t) = 0 on the boundaries x=0,1 and y=0,1. Suppose the initial conditions are u(x,y,t=0) = u₀(x,y) and we wish to evaluate the solution for t > 0. We shall explore some of the options for achieving this in the following sections.
One of the simplest and most useful approaches is to discretise the equation in space and then solve a system of (coupled) ordinary differential equations in time in order to calculate the solution. Using a square mesh of step size Δx = Δy = 1/m, and taking the diffusivity D = 1, we may utilise our earlier approximation for the Laplacian operator (equation (103)) to obtain

du_{ij}/dt = [u_{i+1,j} + u_{i-1,j} + u_{i,j+1} + u_{i,j-1} - 4u_{ij}] / Δx²

for the internal points i = 1,m-1 and j = 1,m-1. On the boundaries (i=0,j), (i=m,j), (i,j=0) and (i,j=m) we simply have u_{ij} = 0. If U_{ij} represents our approximation of u at the mesh points x_{ij}, then we must simply solve the (m-1)² coupled ordinary differential equations

dU_{ij}/dt = [U_{i+1,j} + U_{i-1,j} + U_{i,j+1} + U_{i,j-1} - 4U_{ij}] / Δx² .
In principle we may utilise any of the time stepping algorithms discussed in earlier lectures to solve this system. As we shall see, however, care needs to be taken to ensure the method chosen
produces a stable solution.
Applying the Euler method Y_{n+1} = Y_n + Δt f(Y_n, t_n) to our spatially discretised diffusion equation gives

U^(n+1)_{i,j} = U^(n)_{i,j} + μ [U^(n)_{i+1,j} + U^(n)_{i-1,j} + U^(n)_{i,j+1} + U^(n)_{i,j-1} - 4U^(n)_{i,j}] ,    (123)

where the Courant number

μ = Δt / Δx²

describes the size of the time step relative to the spatial discretisation. As we shall see, stability of the solution depends on μ, in contrast to an ordinary differential equation where it is a function of the time step Δt only.
Stability of the Euler method solving the diffusion equation may be analysed in a similar way to that for ordinary differential equations. We start by asking the question "does the Euler method converge as t → ∞?" The exact solution will have u → 0, and the numerical solution must also do this if it is to be stable.

We choose

U^(0)_{i,j} = sin(αi) sin(βj) ,    (125)

for some α and β chosen as multiples of π/m to satisfy u = 0 on the boundaries. Substituting this into (123) gives
U^(1)_{i,j} = sin(αi)sin(βj) + μ{sin[α(i+1)]sin(βj) + sin[α(i-1)]sin(βj)
            + sin(αi)sin[β(j+1)] + sin(αi)sin[β(j-1)] - 4 sin(αi)sin(βj)}
  = sin(αi)sin(βj) + μ{[sin(αi)cos(α) + cos(αi)sin(α)]sin(βj) + [sin(αi)cos(α) - cos(αi)sin(α)]sin(βj)
            + sin(αi)[sin(βj)cos(β) + cos(βj)sin(β)] + sin(αi)[sin(βj)cos(β) - cos(βj)sin(β)] - 4 sin(αi)sin(βj)}
  = sin(αi)sin(βj) + 2μ{sin(αi)cos(α)sin(βj) + sin(αi)sin(βj)cos(β) - 2 sin(αi)sin(βj)}
  = sin(αi)sin(βj){1 + 2μ[cos(α) + cos(β) - 2]}
  = sin(αi)sin(βj){1 - 4μ[sin²(α/2) + sin²(β/2)]} .    (126)
Applying this at consecutive times shows the solution at time t_n is

U^(n)_{i,j} = sin(αi)sin(βj) {1 - 4μ[sin²(α/2) + sin²(β/2)]}^n ,    (127)

which then requires |1 - 4μ[sin²(α/2) + sin²(β/2)]| < 1 for this to converge as n → ∞. For this to be satisfied for arbitrary α and β we require μ < 1/4. Thus we must ensure

Δt < Δx²/4 .    (128)
A doubling of the spatial resolution therefore requires a factor of four more time steps so overall the expense of the computation increases sixteen-fold.
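The threshold is easy to verify numerically; the sketch below (mine, not from the notes) time-steps the scheme (123) from the highest resolvable Fourier mode, for which the amplification factor is most negative, once just below and once just above μ = 1/4.

import numpy as np

# Euler stepping of the 2D diffusion equation on the unit square with
# u = 0 on the boundary, starting from the most oscillatory mode.
def evolve(mu, m=20, steps=400):
    i = np.arange(m + 1)
    mode = np.sin(np.pi * (m - 1) * i / m)   # highest resolvable mode
    U = np.outer(mode, mode)
    for _ in range(steps):
        lap = (U[2:, 1:-1] + U[:-2, 1:-1] + U[1:-1, 2:] + U[1:-1, :-2]
               - 4.0 * U[1:-1, 1:-1])
        U[1:-1, 1:-1] += mu * lap            # boundaries stay at zero
    return np.abs(U).max()

print(evolve(mu=0.24))   # mu < 1/4: the mode decays towards zero
print(evolve(mu=0.26))   # mu > 1/4: the mode grows without bound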
The analysis for the diffusion equation in one or three dimensions may be computed in a similar manner.
Our analysis of the Euler method for solving the diffusion equation in section 8.3.3 assumed initial conditions of the form sin(kπx/L_x) sin(lπy/L_y), where k, l are integers and L_x, L_y are the dimensions of the domain. In addition to satisfying the boundary conditions, these initial conditions represent a set of orthogonal functions which may be used to construct any arbitrary initial conditions as a Fourier series. Now, since the diffusion equation is linear, and as our stability analysis of the previous section shows the conditions under which the solution for each Fourier mode is stable, we can see that equation (128) applies equally for arbitrary initial conditions.
The implicit Crank-Nicolson method is significantly better in terms of stability than the Euler method for ordinary differential equations. For partial differential equations such as the diffusion equation we may analyse this in the same manner as the Euler method of section 8.3.3.
For simplicity, consider the one-dimensional diffusion equation

∂u/∂t = ∂²u/∂x² ,

with u(x=0,t) = u(x=1,t) = 0, and apply the standard spatial discretisation for the curvature term to obtain

U^(n+1)_i - (μ/2)[U^(n+1)_{i+1} - 2U^(n+1)_i + U^(n+1)_{i-1}] = U^(n)_i + (μ/2)[U^(n)_{i+1} - 2U^(n)_i + U^(n)_{i-1}] ,    (131)

for the i = 1,m-1 internal points. Solution of this expression will involve the solution of a tridiagonal system for this one-dimensional problem.
To test the stability we again choose a Fourier mode. Here we have only one spatial dimension, so we use U^(0)_i = sin(qi), which satisfies the boundary conditions if q is a multiple of π/m. Substituting this into (131) we find

U^(1)_i = {[1 - 2μ sin²(q/2)] / [1 + 2μ sin²(q/2)]} sin(qi) .

Since |1 - 2μ sin²(q/2)| / [1 + 2μ sin²(q/2)] < 1 for all μ > 0, the Crank-Nicolson method is unconditionally stable. The step size Δt may be chosen on the grounds of truncation error independently of Δx.
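A compact sketch (my own, not from the notes) makes the unconditional stability tangible: each step solves (I - (μ/2)L) U^(n+1) = (I + (μ/2)L) U^(n) with L the second-difference matrix, and a Courant number far above the explicit limit still gives a bounded, decaying solution. A dense solve is used here for brevity; a real implementation would use a tridiagonal solver.

import numpy as np

# Crank-Nicolson for u_t = u_xx on [0,1] with u(0) = u(1) = 0.
m, mu, steps = 50, 5.0, 100          # mu far above the explicit limit
x = np.linspace(0.0, 1.0, m + 1)
U = np.sin(np.pi * x)                # one Fourier mode as initial data

L = (np.diag(-2.0 * np.ones(m - 1))
     + np.diag(np.ones(m - 2), 1)
     + np.diag(np.ones(m - 2), -1))
A = np.eye(m - 1) - 0.5 * mu * L     # implicit side
B = np.eye(m - 1) + 0.5 * mu * L     # explicit side

for _ in range(steps):
    U[1:-1] = np.linalg.solve(A, B @ U[1:-1])

print(np.abs(U).max())               # bounded: the mode decays, no blow-up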
Stuart Dalziel, last page update: 17 February 1998
|
{"url":"http://www.damtp.cam.ac.uk/lab/people/sd/lectures/nummeth98/pdes.htm","timestamp":"2014-04-19T05:13:47Z","content_type":null,"content_length":"79769","record_id":"<urn:uuid:b1bfc073-4450-42f9-a341-97c43cdf51f4>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00111-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Increasing/Decreasing intervals for Trig
Q: Let f(x)=20x-10sin(4x) for 0<=x<=pi/2
Find the largest interval(s) over which the function f is increasing or decreasing.
I tried to solve this:
On setting f'(x) = 0, I get x = pi/12.
I just cannot get the correct intervals to work with my online math homework system!
Please help me!
Thank you SO much!
Re: Increasing/Decreasing intervals for Trig
ronny3050 wrote:Q: Let f(x)=20x-10sin(4x) for 0<=x<=pi/2
Find the largest interval(s) over which the function f is increasing or decreasing.
I tried to solve this:
I think you forgot to finish: you still have to do the 4x part inside. You can see an explanation here from the Math Forum site. The Purplemath writer did something in alt groups back in 1999 here.
Where PM talks about "the rest", that's the same as where MF talks about "BLOB".
Re: Increasing/Decreasing intervals for Trig
I did differentiate that! The problem is with the intervals!
Could you please solve it?
Thank you!
Re: Increasing/Decreasing intervals for Trig
ronny3050 wrote:I did differentiate that!
Where? I only see the derivative for the sine, not for the 4x inside it.
Re: Increasing/Decreasing intervals for Trig
Sorry to bug you again but my function was f(x)=20x-10sin(4x)
After taking the derivative I get f'(x)=20-10*4*cos(4x) = 20-40cos(4x)
So, I did take the derivate of 4x!
Re: Increasing/Decreasing intervals for Trig
$f(x) = 20x - 10\sin(4x)$

$f'(x) = 20 - 40\cos(4x) = 0 \Rightarrow \cos(4x) = \frac{1}{2}$

$4x=\pm\frac{\pi}{3}+2\pi n$

f is increasing ($f' > 0$, i.e. $\cos(4x) < \frac{1}{2}$) on intervals of the form:

$\left(\frac{\pi}{12}+\frac{n\pi}{2},\ \frac{5\pi}{12}+\frac{n\pi}{2}\right)$

f is decreasing on intervals of the form:

$\left(-\frac{\pi}{12}+\frac{n\pi}{2},\ \frac{\pi}{12}+\frac{n\pi}{2}\right)$

On your domain $0\le x\le\pi/2$ that gives: decreasing on $[0,\ \pi/12]$, increasing on $[\pi/12,\ 5\pi/12]$, decreasing on $[5\pi/12,\ \pi/2]$.
Here's the graph of f'(x) so you can see where the derivative changes sign:
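(A quick numerical confirmation - mine, not the original poster's - of where f' changes sign:)

import numpy as np

# Sample f'(x) = 20 - 40 cos(4x) on [0, pi/2] and locate sign changes.
x = np.linspace(0.0, np.pi / 2, 100001)
fp = 20.0 - 40.0 * np.cos(4.0 * x)
print(x[1:][np.diff(np.sign(fp)) != 0])  # ~0.2618 and ~1.3090
print(np.pi / 12, 5 * np.pi / 12)        # pi/12 ~ 0.2618, 5pi/12 ~ 1.3090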
Re: Increasing/Decreasing intervals for Trig
Matt, thanks a zillion!
|
{"url":"http://www.purplemath.com/learning/viewtopic.php?p=7810","timestamp":"2014-04-18T06:10:40Z","content_type":null,"content_length":"26817","record_id":"<urn:uuid:b3243ee2-fea9-4e59-bb76-fd251d0a7ce5>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00241-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How to handle multiplication of numbers close to 1
I have a bunch of floating point numbers (Java doubles), most of which are very close to 1, and I need to multiply them together as part of a larger calculation. I need to do this a lot.
The problem is that while Java doubles have no problem with a number like:
0.0000000000000000000000000000000001 (1.0E-34)
they can't represent something like:
1.0000000000000000000000000000000001 (1 + 1.0E-34)
As a consequence, I lose precision rapidly (the limit seems to be around 1.000000000000001 for Java's doubles).
I've considered just storing the numbers with 1 subtracted, so for example 1.0001 would be stored as 0.0001 - but the problem is that to multiply them together again I have to add 1 and at this point
I lose precision.
To address this I could use BigDecimals to perform the calculation (convert to BigDecimal, add 1.0, then multiply), and then convert back to doubles afterwards, but I have serious concerns about the
performance implications of this.
Can anyone see a way to do this that avoids using BigDecimal?
Edit for clarity: This is for a large-scale collaborative filter, which employs a gradient descent optimization algorithm. Accuracy is an issue because often the collaborative filter is dealing with
very small numbers (such as the probability of a person clicking on an ad for a product, which may be 1 in 1000, or 1 in 10000).
Speed is an issue because the collaborative filter must be trained on tens of millions of data points, if not more.
The performance will not be an issue with what you have suggested. – Kevin Crowell Apr 4 '09 at 23:02
Why do you need such accuracy and performance? Perhaps with a better context of the problem, we could offer a more appropriate solution? – Alex Spurling Apr 4 '09 at 23:18
Kevin, can you elaborate? Alex, I've tried to explain more about the context. – sanity Apr 4 '09 at 23:28
8 Answers
Yep: because
(1 + x) * (1 + y) = 1 + x + y + x*y
In your case, x and y are very small, so x*y is going to be far smaller - way too small to influence the results of your computation. So as far as you're concerned,
(1 + x) * (1 + y) = 1 + x + y
This means you can store the numbers with 1 subtracted, and instead of multiplying, just add them up. As long as the results are always much less than 1, they'll be close enough to the mathematically precise results that you won't care about the difference.
EDIT: Just noticed: you say most of them are very close to 1. Obviously this technique won't work for numbers that are not close to 1 - that is, if x and y are large. But if one is large and one is small, it might still work; you only care about the magnitude of the product x*y. (And if both numbers are not close to 1, you can just use regular Java double multiplication.)
Thanks David, this has certainly provided food for thought - it may be the answer but I'll leave it a bit longer to see what others suggest. – sanity Apr 4 '09 at 23:31
+1 - faster than logs :D – v3. Apr 5 '09 at 1:25
It would be better to simply use the first equation in any case. If x*y is close 0 or not it still works... – Pool Apr 5 '09 at 16:32
Dropping the x*y saves you a multiplication, which makes the whole computation significantly faster. And since doubles only store about 15 digits of precision (IIRC), if (x*y)/(x +
y) is smaller than 10^-15 it'd get truncated off anyway. – David Z Apr 5 '09 at 22:49
Perhaps you could use logarithms?
Logarithms conveniently reduce multiplication to addition.
Also, to take care of the initial precision loss, there is the function log1p (at least, it exists in C/C++), which returns log(1+x) without any precision loss. (e.g. log1p(1e-30) returns 1e-30 for me)
Then you can use expm1 to get the decimal part of the actual result.
Kind of the same idea as my answer, since log(1+x) = x for very small x... anyway +1 for using the math to optimize ;-) – David Z Apr 4 '09 at 23:13
ouch my head hurts – ojblass Apr 5 '09 at 0:00
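To make the log1p idea concrete, here is a small sketch (mine, not from the thread) in Python, whose math.log1p and math.expm1 behave like the C functions mentioned above; the sample offsets are arbitrary.

import math

# Multiply many factors (1 + x_i) with tiny x_i by summing log1p(x_i),
# then recover (product - 1) with expm1, never forming 1 + x directly.
offsets = [1e-8, 2e-6, 3e-7, 4e-5]

log_sum = sum(math.log1p(x) for x in offsets)
print(math.expm1(log_sum))   # ~4.23100930230249e-05: no precision lost near 1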
Isn't this sort of situation exactly what BigDecimal is for?
Edited to add:
"Per the second-last paragraph, I would prefer to avoid BigDecimals if possible for performance reasons." – sanity
up vote 3 "Premature optimization is the root of all evil" - Knuth
down vote
There is a simple solution practically made to order for your problem. You are concerned it might not be fast enough, so you want to do something complicated that you think will be faster.
The Knuth quote gets overused sometimes, but this is exactly the situation he was warning against. Write it the simple way. Test it. Profile it. See if it's too slow. If it is then start
thinking about ways to make it faster. Don't add all this additional complex, bug-prone code until you know it's necessary.
Per the second-last paragraph, I would prefer to avoid BigDecimals if possible for performance reasons. – sanity Apr 4 '09 at 23:14
This isn't a premature optimization. double is already very slow, I've done some benchmarking and BigDecimal seems several orders of magnitude slower. It may be the solution I go with,
but I want to consider alternatives. – sanity Apr 4 '09 at 23:29
2 Hmm :/ you didn't cite the complete Knuth quote: "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil" en.wikipedia.org/
wiki/Optimization_%28computer_science%29 – Jason S Apr 6 '09 at 13:29
Depending on where the numbers are coming from and how you are using them, you may want to use rationals instead of floats. Not the right answer for all cases, but when it is the
right answer there's really no other.
If rationals don't fit, I'd endorse the logarithms answer.
Edit in response to your edit:
If you are dealing with numbers representing low response rates, do what scientists do:
• Represent them as the excess / deficit (normalize out the 1.0 part)
• Scale them. Think in terms of "parts per million" or whatever is appropriate.
This will leave you dealing with reasonable numbers for calculations.
It's worth noting that you are testing the limits of your hardware rather than Java. Java uses the 64-bit floating point in your CPU.
I suggest you test the performance of BigDecimal before you assume it won't be fast enough for you. You can still do tens of thousands of calculations per second with BigDecimal.
As David points out, you can just add the offsets up.
(1+x) * (1+y) = 1 + x + y + x*y
However, it seems risky to choose to drop out the last term. Don't. For example, try this:
x = 1e-8 y = 2e-6 z = 3e-7 w = 4e-5
What is (1+x)(1+y)(1+z)*(1+w)? In double precision, I get:
ans =
1.00004231009302
However, see what happens if we just do the simple additive approximation.
1 + (x+y+z+w)
ans =
1.00004231
We lost the low order bits that may have been important. This is only an issue if some of the differences from 1 in the product are at least sqrt(eps), where eps is the precision
you are working in.
Try this instead:
f = @(u,v) u + v + u*v;
result = f(x,y);
result = f(result,z);
result = f(result,w);
1 + result
ans =
1.00004231009302
As you can see, this gets us back to the double precision result. In fact, it is a bit more accurate, since the internal value of result is 4.23100930230249e-05.
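The same pairwise-offset accumulation translates directly into other languages; a Python rendering (mine, not woodchips') of the f(u,v) = u + v + u*v trick:

# Track p - 1, where p is the running product of the (1 + x_i) factors:
# if a = p - 1 and we multiply by (1 + b), the new offset is a + b + a*b.
offsets = [1e-8, 2e-6, 3e-7, 4e-5]

acc = 0.0
for b in offsets:
    acc = acc + b + acc * b
print(acc)   # 4.23100930230249e-05, matching the double-precision product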
If you really need the precision, you will have to use something like BigDecimal, even if it's slower than Double.
If you don't really need the precision, you could perhaps go with David's answer. But even if you use multiplications a lot, it might be some premature optimization, so BigDecimal might be the way to go anyway.
When you say "most of which are very close to 1", how many, exactly?
Maybe you could have an implicit offset of 1 in all your numbers and just work with the fractions.
Not the answer you're looking for? Browse other questions tagged java math floating-point bigdecimal rounding-error or ask your own question.
|
{"url":"http://stackoverflow.com/questions/717994/how-to-handle-multiplication-of-numbers-close-to-1","timestamp":"2014-04-18T03:54:37Z","content_type":null,"content_length":"104837","record_id":"<urn:uuid:3713c4be-e717-4761-a795-9aaef2ff3288>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00465-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Finding the cosine of 72° the hard way
May 20th 2008, 08:25 PM #1
May 2008
Finding the cosine of 72° the hard way
Alright, so let's say that w = cis(72°)
Using De Moivre's theorem, w^5 = 1
What I'm hung up on, now, is showing that w^4+w^3+w^2+w+1=0, I'm supposed to use something I already know about polynomials (synthetic division?), and knowing that both w and 1 are roots of f(x)
= x^5 -1, but I'm drawing a total blank.
All help is very much appreciated.
Alright, so let's say that w = cis(72°)
Using De Moivre's theorem, w^5 = 1
What I'm hung up on, now, is showing that w^4+w^3+w^2+w+1=0, I'm supposed to use something I already know about polynomials (synthetic division?), and knowing that both w and 1 are roots of f(x)
= x^5 -1, but I'm drawing a total blank.
All help is very much appreciated.
Okay I get that, now I have to turn that into w^2 + w^3 = (w + w^4)^2 - 2 somehow. I think I'm supposed to expand (w+w^4)^2 into w^8 + 2w^5 + w^2, but after that I'm lost again.
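(Filling in the step that seems to be the sticking point - my working, not a quote from the thread: since $w^5 = 1$, we have $2w^5 = 2$ and $w^8 = w^5 \cdot w^3 = w^3$, so $(w+w^4)^2 = w^2 + 2w^5 + w^8 = w^2 + 2 + w^3$, and hence $w^2 + w^3 = (w+w^4)^2 - 2$.)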
Alright, so let's say that w = cis(72°)
Using De Moivre's theorem, w^5 = 1
What I'm hung up on, now, is showing that w^4+w^3+w^2+w+1=0, I'm supposed to use something I already know about polynomials (synthetic division?), and knowing that both w and 1 are roots of f(x)
= x^5 -1, but I'm drawing a total blank.
All help is very much appreciated.
Read the attachment I have (quickly) prepared for you and work your way through it.
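(The attachment itself has not survived; the standard argument it presumably contains runs as follows. Factor $x^5 - 1 = (x-1)(x^4+x^3+x^2+x+1)$; since $w^5 = 1$ but $w \neq 1$, the second factor must vanish at $w$, giving $w^4+w^3+w^2+w+1 = 0$. Writing $s = w + w^4 = 2\cos 72°$ and using $w^2+w^3 = s^2 - 2$ turns this into $s^2 + s - 1 = 0$, so $s = \frac{\sqrt{5}-1}{2}$ and $\cos 72° = \frac{\sqrt{5}-1}{4}$.)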
|
{"url":"http://mathhelpforum.com/trigonometry/39101-finding-cosine-72-hard-way.html","timestamp":"2014-04-18T14:42:37Z","content_type":null,"content_length":"40960","record_id":"<urn:uuid:18622b6c-a983-409e-9065-7feff87e2f96>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00030-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Moving under the influence of a vector field
I have a continuously varying vector field $v(p)$ on $\mathbb{R}^2$, and a particle at point $p$ in the plane that can move in a direction $u(p)$ as long as $u(p)$ is turned at most $\pi/2$ left of
$v(p)$. So at any point $p$, the particle can move in a quarter-circle of directions: from $v(p)$ to $v(p)$ rotated $90^\circ$ counterclockwise.
I would like to identify the points in $\mathbb{R}^2$ reachable from a given start point $p_0$ under this constraint. For example, suppose the vector field is determined by a rotation about a fixed
center $c$. Then the reachable points are just those in the disk centered on $c$ with radius $|p_0 - c|$:
I can write down equations, in terms of dot- and cross-product, but they are not revealing to me.
Q. Is there some clean formulation of this problem that suggests a computationally feasible identification of the reachable points?
Thanks for any insights/ideas!
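Not an answer to the question itself, but for building intuition one can crudely approximate the reachable set by forward stepping with directions sampled from the allowed quarter-circle. The sketch below is entirely my own construction (using the rotational field from the example above); it is a Monte-Carlo frontier expansion, not a certified computation.

import numpy as np

def v(p):                       # example field: rotation about the origin
    return np.array([-p[1], p[0]])

rng = np.random.default_rng(0)
pts = [np.array([1.0, 0.0])]    # start point p0
h = 0.05                        # step length
for _ in range(2000):
    p = pts[rng.integers(len(pts))]
    vp = v(p)
    norm = np.linalg.norm(vp)
    if norm < 1e-12:
        continue                # zero of the field: the particle is stuck
    t = rng.uniform(0.0, np.pi / 2)   # rotate v(p) left by up to 90 degrees
    R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    pts.append(p + h * (R @ (vp / norm)))

pts = np.array(pts)
print(np.linalg.norm(pts, axis=1).max())  # ~1 (up to discretisation error):
                                          # points remain in the unit disk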
mg.metric-geometry differential-equations
What should your constraint mean at a zero of the vector field? If you don't allow zeros, I'd say you have a Lorentz structure on the plane, and you're asking about the causal relationships between
points. – Ryan Budney Apr 26 '12 at 21:21
@Ryan: Good question! I must allow zeros, for rotations about a point are among my vector fields. I guess then $u(p)$ must also be zero: the particle stops and stays there. – Joseph O'Rourke Apr 26
'12 at 21:31
@Ryan: Thanks for the "Lorentz structure" hint; that connection did not occur to me. – Joseph O'Rourke Apr 26 '12 at 21:34
Okay, then you're looking at the causal structure on the plane minus the zeros of the vector field / Lorentz structure. – Ryan Budney Apr 27 '12 at 6:45
add comment
1 Answer
Sounds like a distribution except that instead of having linear subspaces you have cones. There's this paper: Langerock, "Conic Distributions and Accessible Sets," but it sounds an awful lot like your question (and I wonder if that's where you're starting from in the first place!). It also doesn't say anything about the computability of the accessible set, though they do provide some characterization.
@fuzzytron: I am starting from a completely different place, but this paper and its references to the literature on accessible sets is just what I need. Thanks! – Joseph O'Rourke Apr 27
'12 at 11:24
|
{"url":"http://mathoverflow.net/questions/95304/moving-under-the-influence-of-a-vector-field?sort=votes","timestamp":"2014-04-21T08:12:32Z","content_type":null,"content_length":"55844","record_id":"<urn:uuid:8377c7e8-da30-43e5-b5b8-87cac3980175>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00308-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Converting Improper Fractions to Mixed Fractions Video
Converting Improper Fractions to Mixed Fractions Video Tutorial
arithmetic operations video, fractions video, improper fractions video, mixed fractions video, mixed numbers video, number sense video, numbers video, operations video.
Converting Improper Fractions to Mixed Fractions
This tutorial shows you how to convert an improper fraction to a mixed number. You will need to rewrite the numerator inside the division house and divide by the denominator, which now becomes your divisor. You will also need to convert your quotient and your remainder into a mixed fraction.
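The same steps can be expressed with integer division; here is a small sketch of my own (not part of the video):

# Convert an improper fraction to a mixed number: the quotient becomes the
# whole part, and the remainder stays over the original denominator.
def to_mixed(numerator, denominator):
    whole, remainder = divmod(numerator, denominator)
    return whole, remainder, denominator

print(to_mixed(7, 3))   # (2, 1, 3), i.e. 7/3 = 2 1/3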
Converting improper fractions to mixed fractions video involves arithmetic operations, fractions, improper fractions, mixed fractions, mixed numbers, number sense, numbers, operations. The video
tutorial is recommended for 1st Grade, 2nd Grade, 3rd Grade, 4th Grade, 5th Grade, 6th Grade, 7th Grade, and/or 8th Grade Math students studying Algebra, Geometry, Trigonometry, Probability and
Statistics, Arithmetic, Basic Math, Pre-Algebra, Pre-Calculus, and/or Advanced Algebra.
In mathematics, a fraction is a concept of a proportional relation between an object part and the object whole. Each fraction consists of a denominator (bottom) and a numerator (top), representing
(respectively) the number of equal parts that an object is divided into, and the number of those parts indicated for the particular fraction.
Improper Fractions
An improper fraction is a fraction whose numerator is larger than or equal to its denominator.
Mixed Numbers
A mixed number is the sum of a whole number and a proper fraction.
|
{"url":"http://www.tulyn.com/4th-grade-math/arithmetic-operations/videotutorials/converting-improper-fractions-to-mixed-fractions_by_polly.html","timestamp":"2014-04-18T13:27:39Z","content_type":null,"content_length":"19656","record_id":"<urn:uuid:aabe4b39-d9e3-4eb6-922a-e43fe437be8d>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00649-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Carroll Park, PA Trigonometry Tutor
Find a Carroll Park, PA Trigonometry Tutor
...I have planned and executed numerous lessons for classes of high school students, as well as tutored many independently. I have a bachelor's degree in secondary math education. During my time
in college, I took one 3-credit course in Discrete Math.
11 Subjects: including trigonometry, calculus, geometry, algebra 1
...I am a professional physicist with over 20 years math experience, including calculus. Geometry has a lot of terms to remember but once you are past that, it can be engaging and a lot of fun --
like doing puzzles. I use a variety of different teaching techniques to help students master geometry.
10 Subjects: including trigonometry, calculus, physics, geometry
...I believe that I have a unique ability to present and demonstrate various topics in mathematics in a fun and effective way. I have worked three semesters as a computer science lab TA at North
Carolina State University, as well as three semesters as a general math tutor for the tutoring center at...
22 Subjects: including trigonometry, calculus, geometry, statistics
...Having industrial experience, I can easily explain the relevance of science that will inspire my students to learn and pursue career in the sciences if possible. This inspiration is what will
motivate my students and consequently improve their grades in a tremendous fashion.I have a background i...
17 Subjects: including trigonometry, chemistry, physics, calculus
...This is especially useful for students who don't like math or are having trouble in one particular kind of math (such as geometry).I have a B.S. in Mathematics and I tend toward more
rule-based and algebraic math. I can tutor for any middle to lower level college math classes and any high school...
19 Subjects: including trigonometry, calculus, geometry, algebra 2
|
{"url":"http://www.purplemath.com/Carroll_Park_PA_Trigonometry_tutors.php","timestamp":"2014-04-18T04:06:08Z","content_type":null,"content_length":"24567","record_id":"<urn:uuid:a4a02695-4d94-411f-9b2e-a5de14027d4f>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00544-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Estimating Adaptive AutoRegressive-Moving-Average-and-mean model (includes mean term)
function [z,e,REV,ESU,V,Z,SPUR] = aarmam(y, Mode, MOP, UC, z0, Z0, V0, W);
Estimating Adaptive AutoRegressive-Moving-Average-and-mean model (includes mean term)
~~ This function is obsolete and is replaced by AMARMA
[z,E,REV,ESU,V,Z,SPUR] = aarmam(y, mode, MOP, UC, z0, Z0, V0, W);
Estimates AAR parameters with Kalman filter algorithm
y(t) = sum_i(a_i(t)*y(t-i)) + m(t) + e(t) + sum_i(b_i(t)*e(t-i))
State space model
z(t) = G*z(t-1) + w(t) w(t)=N(0,W)
y(t) = H*z(t) + v(t) v(t)=N(0,V)
G = I,
z = [m(t),a_1(t-1),..,a_p(t-p),b_1(t-1),...,b_q(t-q)];
H = [1,y(t-1),..,y(t-p),e(t-1),...,e(t-q)];
W = E{(z(t)-G*z(t-1))*(z(t)-G*z(t-1))'}
V = E{(y(t)-H*z(t-1))*(y(t)-H*z(t-1))'}
y Signal (AR-Process)
Mode determines the type of algorithm
MOP Model order [m,p,q], default [0,10,0]
m=1 includes the mean term, m=0 does not.
p and q must be positive integers
it is recommended to set q=0.
UC Update Coefficient, default 0
z0 Initial state vector
Z0 Initial Covariance matrix
z AR-Parameter
E error process (Adaptively filtered process)
REV relative error variance MSE/MSY
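For readers unfamiliar with the recursion, the following is a bare-bones sketch of my own (not the toolbox code) of the Kalman update implied by the state-space model above, restricted to the pure AR case (q = 0, no mean term) and ignoring the Mode/MOP options:

import numpy as np

# z(t) = z(t-1) + w(t),  y(t) = H(t) z(t) + v(t),  H(t) = past samples.
def aar(y, p, W, V):
    n = len(y)
    z = np.zeros(p)                    # state: AR coefficients
    Z = np.eye(p)                      # state covariance
    zs, es = np.zeros((n, p)), np.zeros(n)
    for t in range(p, n):
        H = y[t - p:t][::-1]           # [y(t-1), ..., y(t-p)]
        e = y[t] - H @ z               # one-step prediction error
        Zp = Z + W                     # predicted covariance
        S = H @ Zp @ H + V             # innovation variance
        K = Zp @ H / S                 # Kalman gain
        z = z + K * e                  # update AR coefficients
        Z = Zp - np.outer(K, H) @ Zp   # update covariance
        zs[t], es[t] = z, e
    return zs, es

# Illustrative call with arbitrary update/noise settings:
y = np.random.default_rng(1).standard_normal(500)
zs, es = aar(y, p=2, W=1e-4 * np.eye(2), V=1.0)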
[1] A. Schloegl (2000), The electroencephalogram and the adaptive autoregressive model: theory and applications.
ISBN 3-8265-7640-3 Shaker Verlag, Aachen, Germany.
More references can be found at
|
{"url":"http://biosig-consulting.com/matlab/help/freetb4matlab/tsa/aarmam.html","timestamp":"2014-04-20T08:14:58Z","content_type":null,"content_length":"3906","record_id":"<urn:uuid:913b429a-9a18-4663-92b2-4dcc78e44dd7>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00286-ip-10-147-4-33.ec2.internal.warc.gz"}
|
1. The definition of an ellipse is the path traced or the shape that results when a flat plane intersects a cone at an angle less steep than the side of the cone.
The rings of Saturn are an example of an ellipse.
The dark circle within the cone is an ellipse.
pl. ellipses
Geom. the path of a point that moves so that the sum of its distances from two fixed points, the foci, is constant; closed curve formed by the section of a cone cut by a plane less steeply inclined
than the side of the cone
Origin of ellipse
Modern Latin ellipsis; from Classical Greek elleipsis, a defect, ellipse; from elleipein, to fall short; from en-, in + leipein, to leave (see loan): so named from falling short of a perfect circle
1. A plane curve, especially:
a. A conic section whose plane is not parallel to the axis, base, or generatrix of the intersected cone.
b. The locus of points for which the sum of the distances from each point to two fixed points is equal.
2. Ellipsis.
Origin of ellipse
French, from Latin ellipsis, from Greek elleipsis, a falling short, ellipse, from elleipein, to fall short (from the relationship between the line joining the vertices of a conic and the line through the focus and parallel to the directrix of a conic): en-, in + leipein, to leave; see leikw- in Indo-European roots.
(plural ellipses)
1. (geometry) A closed curve, the locus of a point such that the sum of the distances from that point to two other fixed points (called the foci of the ellipse) is constant; equivalently, the conic
section that is the intersection of a cone with a plane that does not intersect the base of the cone.
(third-person singular simple present ellipses, present participle ellipsing, simple past and past participle ellipsed)
1. (grammar) To remove from a phrase a word which is grammatically needed, but which is clearly understood without having to be stated.
In B's response to A's question:- (A: Would you like to go out?, B: I'd love to), the ellipsed words are go out.
From French ellipse.
|
{"url":"http://www.yourdictionary.com/ellipse","timestamp":"2014-04-17T07:52:40Z","content_type":null,"content_length":"49310","record_id":"<urn:uuid:167c2ffa-2e2b-4d56-add8-a0d842544f2f>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00035-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Wilson's Theorem
Here's an interesting characterization of primes:
Wilson's Theorem. A number P > 1 is prime if and only if
(P-1)! + 1 is divisible by P.
Let's check:
(2-1)!+1 = 2, which is divisible by 2.
(5-1)!+1 = 25, which is divisible by 5.
(9-1)!+1 = 40321, which is not divisible by 9 (cast out nines to see this).
Pretty cool!
The Math Behind the Fact:
However it is not really practical to use this to test if a number is prime, especially if P is large: just try P=101, and you'll see what I mean! There are better primality tests available; you can
learn about some of them in a number theory class. See also Fermat's Little Theorem.
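A direct translation into code (mine, not from the page) shows both the theorem at work and why it is impractical: even reducing mod P at every step, the loop needs P - 2 multiplications.

# Wilson primality test: P > 1 is prime iff (P-1)! + 1 is divisible by P.
def is_prime_wilson(p):
    if p < 2:
        return False
    fact = 1
    for k in range(2, p):
        fact = (fact * k) % p   # keep the factorial reduced mod p
    return (fact + 1) % p == 0

print([n for n in range(2, 30) if is_prime_wilson(n)])
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]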
How to Cite this Page:
Su, Francis E., et al. "Wilson's Theorem." Math Fun Facts. <http://www.math.hmc.edu/funfacts>.
|
{"url":"http://www.math.hmc.edu/funfacts/ffiles/10006.5.shtml","timestamp":"2014-04-16T21:55:02Z","content_type":null,"content_length":"19622","record_id":"<urn:uuid:6e129936-5727-41d0-937d-722a809a9037>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00480-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Linear Search
Linear search (aka Sequential Search) is the most fundamental and important of all algorithms. It is simple to understand and implement, yet there are more subtleties to it than most programmers realize. The input to linear search is a sequence (e.g. an array, a collection, a string, an iterator, etc.) plus a target item. The output is true if the target item is in the sequence, and false otherwise. If the sequence has n items, then, in the worst case, all n items in the sequence must be checked against the target for equality. Under reasonable assumptions, linear search does O(n) comparisons on average. In practice, this is often too slow, and so, for example, BinarySearch-ing or hashing or TreeSearch-ing are speedier alternatives. Here's an implementation of linear search on an array (in Java):
// returns true iff there is an integer i, where arr[i] == target and 0 <= i < arr.length
boolean linearSearch(int[] arr, int target)
{
  int i = 0;
  while (i < arr.length) {
    if (arr[i] == target) {
      return true;
    } // if
    i = i + 1;
  } // while
  return false;
}
Linear search can be sped up through the use of a sentinel value: assuming there is a free cell at the end of arr to use, then copy target into that free cell. This lets you move the test ``i <
arr.length'' out of the loop. For example:
// PRE: arr[arr.length - 1] == target
// POST: returns true iff there is an integer i, where arr[i] == target and 0 <= i < arr.length - 1
boolean _fastLinearSearch(int[] arr, int target)
{
  int i = 0;
  while (arr[i] != target) { // only one comparison per iteration
    i = i + 1;
  } // while
  return i < arr.length - 1; // i == arr.length - 1 means we only hit the sentinel
}

// returns true iff there is an integer i, where arr[i] == target and 0 <= i < arr.length
boolean fastLinearSearch(int[] arr, int target)
{
  int n = arr.length;
  if (arr[n - 1] == target) { // is target at the end?
    return true;
  } else {
    int last = arr[n - 1]; // remember the final value of the array
    arr[n - 1] = target;   // plant the sentinel
    boolean result = _fastLinearSearch(arr, target);
    arr[n - 1] = last;     // restore the array
    return result;
  } // if
}
In one of his interviews, Alexander Stepanov (developer of the C++ STL) says that linear search is one of the basic algorithms he used to test languages as candidates for implementing his view of algorithms and data structures. He claims that only C++ is able to properly implement linear search on all sensible sequences; other languages always require you to make separate linear search algorithms for different data types. For instance, in Scheme, you must have a different linear search for lists, arrays, and strings.
URL to the relevant interview? That sounds like quite an extreme position to take, and I doubt it can be supported. Are you sure that's not a misquote?
I can infer the topic. This would have been on the general subject of iterators, and the above is confusingly phrased, but not outright wrong. Iterators are very heavily used, not so much for LinearSearch per se, but to traverse the entire collection for some reason. LinearSearch is a subset of that larger topic. So the above should be paraphrased to talk about complete traversal of collections, rather than its current reference to "linear search", and then I'm sure that, yes, Stepanov has said similar things any number of times. The "only C++" part is both dated and a bit of an exaggeration; he's talking about generics in C++. The comment about Scheme is outright incorrect, though, as is obvious if one considers that Common Lisp supports generics, and therefore there's nothing preventing one writing a generics module in Scheme (and which has in fact been done, here and there). -- Doug

Rather than do a sentinel, just do it length times, and break on a match.
That misses the point. If you do that, as the code shown above does, then you have to do two tests each time around the loop: one to see whether you've reached the end, and one to see whether you've
found the item you were looking for. With a sentinel, you only have to do one of those tests.
You misunderstand me. You do the loop a maximum of
times, with a break on a match. No sentinel required. Single test only e.g.:
int target = ...;
boolean found = false;
int i = arr.length-1;
while (i >= 0) { // comparison 1
if (arr[i] == target) { // comparison 2
found = true;
This code does two comparisons per iteration - it checks "i >= 0" and also "arr[i] == target". The fastLinearSearch code does only 1 comparison each time around the loop, since using a sentinel moves
one comparison outside of the loop, hence speeding it up. Count the number of comparisons done in fastLinearSearch and you'll see.
I don't understand the advantage of searching the entire array. If the array values have no order, then on average only half the array will need to be searched to find the target value. On average,
the simple linear search will do two tests on half the array values, for a total of N tests. The fast linear search given above will do one test per array value, again requiring N tests. They appear
to do equivalent amounts of work, on average. Moreover, if the test for equivalence to the target value is more expensive than the index test (e.g., if the array values are strings or a composite
data type), the simple linear search seems to be the clear winner. Doing many expensive tests to avoid doing simple tests seems counterproductive, particularly when the code to do so is less clear. I
must not understand part of your argument. Perhaps an answer to this question will clarify things: how is always performing N (possibly expensive) tests more efficient than on average performing N/2
(possibly expensive) + N/2 (cheap) tests?
I've re-written the sample code above in a way that hopefully clarifies things; the original version of linearSearch was implemented so that it always searched the whole array, which is both
inefficient and, for this example, misleading because fastLinearSearch does not search the whole array if the target is in it. Note that in linearSearch, two comparisons are done on each iteration of
the loop, while in _fastLinearSearch only one comparison is done per iteration.
This sort of trick was much more common in the old days of assembly language. For example, even on a simple machine like the Imlac, the search loop would code up in two instructions ...
cmp i index
bne .-1
|
{"url":"http://c2.com/cgi/wiki?LinearSearch","timestamp":"2014-04-18T15:32:53Z","content_type":null,"content_length":"8268","record_id":"<urn:uuid:605d172b-e634-4c64-b46f-0e033da99cf7>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00534-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Noun: mathematics teacher
1. Someone who teaches mathematics
- math teacher
Derived forms: mathematics teachers
Type of: educator, instructor, teacher
{"url":"http://www.wordwebonline.com/en/MATHEMATICSTEACHER","timestamp":"2014-04-18T16:29:03Z","content_type":null,"content_length":"7100","record_id":"<urn:uuid:06cbae66-51e3-4b53-a860-1e2a77eb4cce>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00491-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Heat transfre for a domestic radiator
Any suggestions anyone??? Really struggling with this one !!
1. The problem statement, all variables and given/known data
A domestic radiator is 2.5 m long and 0.6m high and is sited in a room whose temperature is 14C. Hot water is circulating through the radiator at a temperature of 90c. The radiator is convecting heat
from both sides but only radiating from one side. Given that the surface emissivity is 0.7, calculate the total heat transfer from radiator to the room.
2. Relevant equations
Stefan-Boltzmann Constant = 5.67x10^-8
Nu = 0.59(Gr.Pr)^0.25 for laminar flow (GR.Pr)<10^9
Nu = 0.129(Gr.Pr)^0.33 for turbulent flow (GR.Pr)>10^9
Grashof no. Gr= ([tex]\rho[/tex]^2[tex]\beta(\theta[/tex]1-[tex]\theta[/tex]2)l^3)/[tex]\mu[/tex]^3
Nusselt No. Nu= hl/k
Prandtl No. Pr=Cp[tex]\mu[/tex]/k
3. The attempt at a solution
Q = σεA(T₁⁴ - T₂⁴)
  = 5.67×10⁻⁸ × 0.7 × 1.5 × (1.736×10¹⁰ - 6.785×10⁹)
  = 629.8 W
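For the convection side, one way to organise the calculation (my own sketch; the air properties at the film temperature (90 + 14)/2 = 52 °C are approximate textbook values - check them against your course data):

# Natural convection from both sides of the radiator, treated as a
# vertical plate of height L = 0.6 m, using the correlations above.
g, L, A = 9.81, 0.6, 2.5 * 0.6          # gravity, plate height, area per side
dT = 90.0 - 14.0                        # surface minus room temperature, K

# Assumed air properties at the ~52 C film temperature:
rho, mu, k, cp = 1.09, 1.96e-5, 0.028, 1007.0
beta = 1.0 / (52.0 + 273.0)             # ideal gas: beta = 1 / T_film

Gr = rho**2 * g * beta * dT * L**3 / mu**2
Pr = cp * mu / k
Ra = Gr * Pr
Nu = 0.129 * Ra**0.33 if Ra > 1e9 else 0.59 * Ra**0.25
h = Nu * k / L                          # convective coefficient, W/m^2 K

Q_conv = 2.0 * h * A * dT               # both sides convect
Q_rad = 5.67e-8 * 0.7 * A * ((90 + 273.15)**4 - (14 + 273.15)**4)
print(Q_conv, Q_rad, Q_conv + Q_rad)    # total comes out of order 2 kW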
|
{"url":"http://www.physicsforums.com/showthread.php?t=179393","timestamp":"2014-04-17T01:03:02Z","content_type":null,"content_length":"23874","record_id":"<urn:uuid:f51c56c2-bc44-4021-b577-763384ac28e4>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00454-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Advanced XP
One of the things I’ve found the most tedious about calculating experience in AD&D is the per hit point part of the creature’s value. I understand why a creature with more hit points is worth more
than one of its fellows with fewer, but what a pain. A huge deal? No. But I don’t think it adds enough to the game to be worth it.
So I’ve decided I’m going to shortcut it. Rather than doing the calculation for every individual monster, I’m going to calculate out the value of one of each with average (4.5 per HD) hit points and
use that for every copy encountered. I roll out HD normally, so the results should be perfectly fine.
For example, three gnolls I just rolled have 11, 6, and 11 hit points. The listed XP value is 28 +2/hp, so they should be worth 50, 40, and 50 XP respectively, a total of 140. Using my method we have
2 XP per hit point TIMES 4.5 hit points per HD TIMES 2 HD PLUS 28 EQUALS 46 XP per gnoll for a total of 138.
Another example: Gray ooze (3+3 HD) is 200 + 5/hp by the book. The gray ooze I just rolled has 16 hit points, so it is worth 280 XP. My method is 5 XP per hit point TIMES 4.5 hit points per HD TIMES
3 HD PLUS 15 (for the three extra hp at 5 per) PLUS 200 EQUALS 282.5 or 283 XP.
I’m making a list of monsters as I use them and adding the value to Appendix E in the DMG, though I might write them into the Monster Manual as well. This is an example of what I mean when I say
we’re trying to play “mostly-by-the-book”: We don’t want to change anything if we can possibly help it, and when we change or houserule something it’s going to be with as little distance from BTB as
we can manage.
Note: Sometimes, such as with the gray ooze example above, you end up with a half XP. I always round this UP in favor of the players. But then I’m a softie pushover DM like that.
Tags: AD&D
3 Comments to “Advanced XP”
1. For ease and speed, I’d just revert to the B/X model – base value plus a set amount for each special ability. You get a static value for each monster, and you can calculate it ahead of time.
□ Well, that’s what AD&D does (with two levels of special abilities) plus adds the per hit point value.
I’m taking the average per hit point value so I can can calculate a static value for each monster ahead of time.
2. I do exactly the same thing and have for years. It works like a charm.
|
{"url":"http://www.lordkilgore.com/advanced-xp","timestamp":"2014-04-17T18:24:19Z","content_type":null,"content_length":"61813","record_id":"<urn:uuid:8c2069c8-5a75-4d97-99e3-af8eec2afc6d>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00131-ip-10-147-4-33.ec2.internal.warc.gz"}
|
CONDENSED MATTER PHYSICS, 2008, vol. 11, No. 2(54)
Infinite Particle Systems: Complex Systems III (June 2007, Kazimierz Dolny, Poland)
In the years 2002-2005, a group of German and Polish mathematicians worked under a DFG research project No 436 POL 113/98/0-1 entitled "Methods of stochastic analysis in the theory of collective
phenomena: Gibbs states and statistical hydrodynamics". The results of their study were summarized at the German-Polish conference, which took place in Poland in October 2005. The venue of the
conference was Kazimierz Dolny upon Vistula - a lovely town and a popular place for various cultural, scientific, and even political events of an international significance. The conference was
also attended by scientists from France, Italy, Portugal, UK, Ukraine, and USA, which predetermined its international character. Since that time, the conference, entitled "Infinite Particle
Systems: Complex Systems" has become an annual international event, attended by leading scientists from Germany, Poland and many other countries. The present volume of the "Condensed Matter
Physics" contains proceedings of the conference "Infinite Particle Systems: Complex Systems III", which took place in June 2007.
Title: Continuous unitary transformation approach to pairing interactions in statistical physics
  T.Domański (Institute of Physics, M. Curie Skłodowska University, 20-031 Lublin, Poland)
We apply the flow equation method to the study of the fermion systems with pairing interactions which lead to the BCS instability signalled by the appearance of the off-diagonal order parameter.
For this purpose we rederive the continuous Bogoliubov transformation in a fashion of renormalization group procedure where the low and high energy sectors are treated subsequently. We further
generalize this procedure to the case of fermions interacting with the discrete boson mode. Andreev-type interactions are responsible for developing a gap in the excitation spectrum. However, the
long-range coherence is destroyed due to strong quantum fluctuations.
Condensed Matter Physics, 2008, vol. 11, No. 2(54), p. 195, English
Title: Random walks in random environment with Markov dependence on time
& C.Boldrighini (Dipartimento di Matematica, Università di Roma "La Sapienza", Piazzale Aldo Moro 2, 00185 Roma, Italy. Partially supported by INdAM (G.N.F.M.) and M.U.R.S.T. research founds) ,
& R.A.Minlos (Institute for Problems of Information Transmission, Russian Academy of Sciences, B. Karetnyi Per. 19, 127994, GSP-4, Moscow, Russia. Partially supported by RFBR grants 99-01-024,
nbsp 97-01-00714 and CRDF research funds N RM1-2085) ,
& A.Pellegrinotti (Dipartimento di Matematica, Università di Roma Tre, Largo S. Leonardo Murialdo 1, 00146 Roma, Italy. Partially supported by INdAM (G.N.F.M.) and M.U.R.S.T. research founds)
We consider a simple model of discrete-time random walk on Ζ^ν, ν=1,2,... in a random environment independent in space and with Markov evolution in time. We focus on the application of methods
based on the properties of the transfer matrix and on spectral analysis. In section 2 we give a new simple proof of the existence of invariant subspaces, with an explicit condition on the
parameters. The remaining part is devoted to a review of the results obtained so far for the quenched random walk and the environment from the point of view of the random walk, with a brief
discussion of the methods.
Condensed Matter Physics, 2008, vol. 11, No. 2(54), p. 209, English
Title: On convergence of generators of equilibrium dynamics of hopping particles to generator of a birth-and-death process in continuum
  E.Lytvynov (Department of Mathematics, Swansea University, Singleton Park, Swansea,
SA2 8PP, U.K.) ,
  P.T.Polara (Department of Mathematics, Swansea University, Singleton Park, Swansea,
SA2 8PP, U.K.)
We deal with the two following classes of equilibrium stochastic dynamics of infinite particle systems in continuum: hopping particles (also called Kawasaki dynamics), i.e., a dynamics where each
particle randomly hops over the space, and birth-and-death process in continuum (or Glauber dynamics), i.e., a dynamics where there is no motion of particles, but rather particles die, or are
born at random. We prove that a wide class of Glauber dynamics can be derived as a scaling limit of Kawasaki dynamics. More precisely, we prove the convergence of respective generators on a set
of cylinder functions, in the L^2-norm with respect to the invariant measure of the processes. The latter measure is supposed to be a Gibbs measure corresponding to a potential of pair
interaction, in the low activity-high temperature regime. Our result generalizes that of [Random. Oper. Stoch. Equa., 2007, 15, 105], which was proved for a special Glauber (Kawasaki,
respectively) dynamics.
Condensed Matter Physics, 2008, vol. 11, No. 2(54), p. 223, English
Title: Extension of explicit formulas in Poissonian white noise analysis using harmonic analysis on configuration spaces
& Yu.G.Kondratiev (Fakultät für Mathematik, Universität Bielefeld, D 33615 Bielefeld, Germany; Fakultät für Mathematik, Universität Bielefeld, D 33615 Bielefeld, Germany 1; National University
nbsp "Kyiv-Mohyla Academy", Kiev, Ukraine) ,
& T.Kuna (Fakultät für Mathematik, Universität Bielefeld, D 33615 Bielefeld, Germany; Fakultät für Mathematik, Universität Bielefeld, D 33615 Bielefeld, Germany 1),
& M.J.Oliveira (Fakultät für Mathematik, Universität Bielefeld, D 33615 Bielefeld, Germany 1; Universidade Aberta, P 1269-001 Lisbon, Portugal; Universidade Aberta,
nbsp P 1269-001 Lisbon, Portugal 1)
Harmonic analysis on configuration spaces is used in order to extend explicit expressions for the images of creation, annihilation, and second quantization operators in L^2-spaces with respect to
Poisson point processes to a set of functions larger than the space obtained by directly using chaos expansion. This permits, in particular, to derive an explicit expression for the generator of
the second quantization of a sub-Markovian contraction semigroup on a set of functions which forms a core of the generator.
Condensed Matter Physics, 2008, vol. 11, No. 2(54), p. 237, English
Title: Yamada-Watanabe theorem for stochastic evolution equations in infinite dimensions
  M.Röckner (Department of Mathematics and BiBoS, Bielefeld University, Bielefeld, Germany),
  B.Schmuland (Department of Mathematical and Statistical Sciences, University of Alberta, Edmonton, Canada) ,
  X.Zhang (Department of Statistics, School of Mathematics and Statistics, University of New South Wales, Sydney, Australia)
The purpose of this note is to give a complete and detailed proof of the fundamental Yamada-Watanabe Theorem on infinite dimensional spaces, more precisely in the framework of the variational
approach to stochastic partial differential equations.
Condensed Matter Physics, 2008, vol. 11, No. 2(54), p. 247, English
Title: Equilibrium stochastic dynamics of Poisson cluster ensembles
  L.Bogachev (Department of Statistics, University of Leeds, Leeds LS2 9JT, UK) ,
  A.Daletskii (Department of Mathematics, University of York, York YO10 5DD, UK)
The distribution μ of a Poisson cluster process in Χ=R^d (with n-point clusters) is studied via the projection of an auxiliary Poisson measure in the space of configurations in Χ^n, with the
intensity measure being the convolution of the background intensity (of cluster centres) with the probability distribution of a generic cluster. We show that μ is quasi-invariant with respect to
the group of compactly supported diffeomorphisms of Χ, and prove an integration by parts formula for μ. The corresponding equilibrium stochastic dynamics is then constructed using the method of
Dirichlet forms.
Condensed Matter Physics, 2008, vol. 11, No. 2(54), p. 261, English
Title: Invariance principle for diffusions in random environment
  S.Struckmeier (Department of Mathematics, Universität Bielefeld, Universitätsstr. 25, 33615 Bielefeld, Germany)
We will show an invariance principle for the diffusive motion of a particle interacting with a random frozen configuration of infinitely many other particles in R^d. The interaction is described
by a symmetric, translation invariant pair potential with repulsion at zero distance and proper decay at infinity.
Condensed Matter Physics, 2008, vol. 11, No. 2(54), p. 275, English
Title: Selection-mutation balance models with epistatic selection
  Yu.G.Kondratiev (Universität Bielefeld, Postfach 10 01 31, D-33501 Bielefeld, Germany; BiBoS, Univ. Bielefeld, Germany) ,
  T.Kuna (Universität Bielefeld, Postfach 10 01 31, D-33501 Bielefeld, Germany; BiBoS, Univ. Bielefeld, Germany; University of Reading, Department of Mathematics, Whiteknights, PO Box 220,
Reading RG6 6AX, UK) ,
  N.Ohlerich (Universität Bielefeld, Postfach 10 01 31, D-33501 Bielefeld, Germany; BiBoS, Univ. Bielefeld, Germany)
We present an application of birth-and-death processes on configuration spaces to a generalized mutation-selection balance model. The model describes the aging of population as a process of
accumulation of mutations in a genotype. A rigorous treatment demands that mutations correspond to points in abstract spaces. Our model describes an infinite-population, infinite-sites model in
continuum. The dynamical equation which describes the system is of Kimura-Maruyama type. The problem can be posed in terms of evolution of states (differential equation) or, equivalently,
represented in terms of Feynman-Kac formula. The questions of interest are the existence of a solution, its asymptotic behavior, and properties of the limiting state. In the non-epistatic case
the problem was posed and solved in [Steinsaltz D., Evans S.N., Wachter K.W., Adv. Appl. Math., 2005, 35(1)]. In our model we consider a topological space X as the space of positions of mutations
and the influence of an epistatic potential on these mutations.
Condensed Matter Physics, 2008, vol. 11, No. 2(54), p. 283, English
Title: The Gibbs fields approach and related dynamics in image processing
  X.Descombes (Ariana, Joint group, CNRS/INRIA/UNSA, INRIA, 2004, route des Lucioles, BP93, 06902, Sophia-Antipolis Cedex, France) ,
  E.Zhizhina (Institute for Information Transmission Problems, Bolshoy Karetny per. 19, 127994 GPS-4, Moscow, Russia)
We give in the paper a brief overview of how the Gibbs fields and related dynamics approaches are applied in image processing. We discuss classical pixel-wise models as well as more recent
spatial point process models in the framework of the Gibbs fields approach. We present a new multi-object adapted algorithm for object detection based on a spatial birth-and-death process and a
discrete time approximation of this process.
Condensed Matter Physics, 2008, vol. 11, No. 2(54), p. 293, English
Title: Bassalygo-Dobrushin uniqueness for continuous spin systems on irregular graphs
  D.Kępa (Instytut Matematyki, Uniwersytet Marii Curie-Skłodowskiej, 20-031 Lublin, Poland) ,
  Yu.Kozitsky (Instytut Matematyki, Uniwersytet Marii Curie-Skłodowskiej, 20-031 Lublin, Poland)
An extension of the Bassalygo-Dobrushin technique of proving uniqueness of Gibbs fields on irregular graphs, developed in [Theory of Probab. Appl., 1986, 31, 572-589], to the case of continuous
spins has been presented.
Condensed Matter Physics, 2008, vol. 11, No. 2(54), p. 313, English
Title: Analysis of urban complex networks
  D.Volchenkov (Bielefeld Bonn Stochastic Research Center (BiBoS), University of Bielefeld, Postfach 100131, D-33501, Bielefeld, Germany)
We analyze the dual graph representation of urban textures by the methods of complex network theory and spectral graph theory. We present the empirical diagrams of distributions of the nearest
and far-away neighbors in the several European compact urban patterns and the spectra of normalized Laplace operator defined on their dual graphs.
Condensed Matter Physics, 2008, vol. 11, No. 2(54), p. 331, English
Title: Modelling complex networks by random hierarchical graphs
  M.Wróbel (Institute of Mathematics, Maria Curie-Skłodowska University, Lublin, Poland)
Numerous complex networks contain special patterns, called network motifs. These are specific subgraphs which occur more often than in randomized networks of Erdős-Rényi type. We choose one of
them, the triangle, and build a family of random hierarchical graphs, being Sierpiński gasket-based graphs with random "decorations". We calculate the important characteristics of these graphs -
average degree, average shortest path length, small-world graph family characteristics. They depend on probability of decorations. We analyze the Ising model on our graphs and describe its
critical properties using a renormalization-group technique.
Condensed Matter Physics, 2008, vol. 11, No. 2(54), p. 341, English
Title: On the implementation of cryptoalgorithms based on algebraic graphs over some commutative rings
  J.S.Kotorowicz (University of Maria Curie-Skłodowska, Plac M.C. Skłodowkiej 1, 20-031 Lublin, Poland) ,
  V.A.Ustimenko (University of Maria Curie-Skłodowska, Plac M.C. Skłodowkiej 1, 20-031 Lublin, Poland)
The paper is devoted to computer implementation of some graph based stream ciphers. We compare the time performance of this new algorithm with fast, but no very secure RC4, and with DES. It turns
out that some of new algorithms are faster than RC4. They satisfy the Madryga requirements, which is unusual for stream ciphers (like RC4). The software package with new encryption algorithms is
ready for the demonstration.
Condensed Matter Physics, 2008, vol. 11, No. 2(54), p. 347, English
Title: Differential functional von Foerster equations with renewal
  H.Leszczyński (Univ. Gdańsk, Wita Stwosza 57, 80-952 Gdańsk, Poland)
Natural iterative methods converge to the exact solution of a differential-functional von Foerster-type equation which describes a single population dependent on its past time and state densities
as well as on its total size. On the lateral boundary we impose a renewal condition.
Condensed Matter Physics, 2008, vol. 11, No. 2(54), p. 361, English
Title: Almost sure functional central limit theorems for multiparameter stochastic processes
  E.B.Czerebak-Morozowicz (Department of Mathematics, Technical University of Rzeszów,
ul. Wincentego Pola 2, 35-959 Rzeszów, Poland) ,
  Z.Rychlik (Institute of Mathematics, Maria Curie-Skłodowska University,
pl. Marii Curie-Skłodowskiej 1, 20-031 Lublin, Poland) ,
  M.Urbanek (Institute of Mathematics, Maria Curie-Skłodowska University,
pl. Marii Curie-Skłodowskiej 1, 20-031 Lublin, Poland)
We present almost sure central limit theorems for stochastic processes whose time parameter ranges over the d-dimensional unit cube. Our purpose here is to generalize the classic functional
central limit theorem of Prokhorov (1956) for such processes. We prove multidimensional analogues of Glivenko-Cantelli type theorems.
Condensed Matter Physics, 2008, vol. 11, No. 2(54), p. 371, English
Title: Quantum codes from algebraic curves with automorphisms
  T.Shaska (Department of Mathematics and Statistics, Oakland University, Rochester, MI, 48309; University of Maria Curie Sklodovska, Lublin, Poland)
Let Χ be an algebraic curve of genus g ≥ 2 defined over a field F_q of characteristic p > 0. From Χ, under certain conditions, we can construct an algebraic geometry code C. If the code C is
self-orthogonal under the symplectic product then we can construct a quantum code Q, called a QAG-code. In this paper we study the construction of such codes from curves with automorphisms and
the relation between the automorphism group of the curve Χ and the codes C and Q.
Condensed Matter Physics, 2008, vol. 11, No. 2(54), p. 383, English
Erratum: Inelastic neutron scattering applied to the investigation of collective excitations in topologically disordered matter [Condens. Matter Phys., 2008, vol. 11, 1(53), 7]
In the above mentioned publication the denominator of the prefactor of equation (31) was unfortunately wrongly given.
Condensed Matter Physics, 2008, vol. 11, No. 2(54), p. 397, English
|
{"url":"http://www.icmp.lviv.ua/journal/zbirnyk.54/index.html","timestamp":"2014-04-21T09:58:49Z","content_type":null,"content_length":"31497","record_id":"<urn:uuid:d4a319d0-58ea-4ba4-be58-cac179ce74b2>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00235-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Generalized approaches to constant division :: Tulane University Theses and Dissertations Archive
Generalized approaches to constant division
Disclaimer Access requires a license to the Dissertations and Theses (ProQuest) database.
Link to http://libproxy.tulane.edu:2048/login?url=http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=
File xri:pqdiss:9008735
Title Generalized approaches to constant division
Author Raghuram, Padmini Srinivasan
School Tulane University
Academic Computer Science
Abstract The division process is not only the most complex but also the most time-consuming arithmetic operation in a digital computer. There exist many types of special-purpose systems which require rapid and repeated division by a set of known constant divisors. Even in general-purpose machines, since integer division takes significantly longer than addition or subtraction, if many divisions are needed, this disparity in execution time can result in a bottleneck. It is therefore beneficial to seek ways to do specific division cases faster, in order to improve the average performance of division. Numerous solutions have been proposed in response to the deficiencies of the conventional division algorithms, for applications which involve repeated divisions by known constants. The approaches in the literature are outlined and characterized with respect to timing, generality, implied redundancy, and the possibility of shared computation, parallelism, and pipelined implementation. The application-dependent development of the constant division approaches has left a gap in the theoretical foundations of the algorithms. Here the various methods are mathematically explained and unified through the establishment of their common theoretical basis. A major intended contribution of this research is the definition of approaches to division by constants belonging to the set of integers of the form 2^n ± 1. Generalized algorithms for division by integers of the form 2^n ± 1, for an arbitrary value or values of n, are developed and proved correct. Implementation issues are explored in the development of design suggestions for the constant division method. The algorithms presented are a good solution to the problem of division by constants of the specific form 2^n ± 1, for n ∈ N, n > 0. The consideration of such a subset of divisor values (of the form 2^n ± 1) is not, however, a satisfactory solution to the general problem of constant division. Dividers designed for constant divisors in this set can trivially be extended to cover divisors of the form 2^m(2^n ± 1), m ∈ N. The Euler-Fermat theorem is used to show that, with one additional multiplication, division by any integer can be converted to a division by an integer of the form 2^n - 1. The multiplier to be determined is established to be the value in one period of the reciprocal of the divisor. Approaches to reciprocal determination are therefore presented, specifically methods that take advantage of the special characteristics of reciprocals of integers.
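To make the flavor of these algorithms concrete, here is a minimal sketch (an illustration added here, not code from the dissertation) of division by a constant of the form 2^n - 1 using only shifts and adds; the method name and the restriction to nonnegative operands are assumptions:

// Sketch: divide x by d = 2^n - 1 with shifts and adds, for x >= 0.
// It uses the identity x = (x >> n)*2^n + (x & d)
//                        = (x >> n)*d + (x >> n) + (x & d),
// so each pass moves (x >> n) into the quotient and folds the rest down.
static int[] divByPow2Minus1(int x, int n) {
    int d = (1 << n) - 1;
    int q = 0, r = x;
    while (r > d) {
        q += r >> n;              // each high chunk contributes whole d's
        r = (r >> n) + (r & d);   // fold the high bits back into the low bits
    }
    if (r == d) { q++; r = 0; }   // an exact multiple of d remains
    return new int[]{q, r};       // quotient and remainder
}
// divByPow2Minus1(100, 3) returns {14, 2}, since 100 = 14*7 + 2.

The Euler-Fermat reduction mentioned in the abstract works in the same spirit: since 2^4 - 1 = 15 = 3 * 5, for example, a division by 5 can be carried out as one multiplication by 3 followed by a division by 2^4 - 1.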
Language eng
Advisor(s) Petry, Frederick E
Degree Date 1989
Degree Ph.D
Publisher Tulane University
Publication 1989
Source 103 p., Dissertation Abstracts International, Volume: 50-11, Section: B,
Identifier See 'reference url' on the navigation bar.
Rights Copyright is in accordance with U.S. Copyright law
Contact digitallibrary@tulane.edu
|
{"url":"http://louisdl.louislibraries.org/cdm/singleitem/collection/p16313coll12/id/2325/rec/11","timestamp":"2014-04-23T11:22:59Z","content_type":null,"content_length":"242162","record_id":"<urn:uuid:4d88b3cb-c6ed-42ca-b5c3-a792d1ec287d>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00063-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Newport Beach Algebra 2 Tutor
Find a Newport Beach Algebra 2 Tutor
...This process also builds a firm foundation that increases recall, eases testing panic and supports future learning (7-step process can be provided upon request!). A little bit about me: I am a
23-year-old mathematics major, still in college, with a 3.8 GPA. I have been tutoring math for over fo...
7 Subjects: including algebra 2, geometry, algebra 1, trigonometry
...Received A grade in Precalculus Honors Long time proficiency in all Math. Long time interest and proficiency in Sciences. I played three years of high school tennis.
19 Subjects: including algebra 2, Spanish, geometry, biology
...As an undergraduate at UCSD, I worked as a teaching assistant for lower division biology classes and also tutored different math subjects. I have tutored students in math and science at all
grade levels: elementary, middle school, high school and college. I have over eight years of tutoring experience and I am passionate about sharing my knowledge with students.
15 Subjects: including algebra 2, chemistry, geometry, biology
...That is what I look forward to doing with you.Algebra 2 is exciting. It is one of those subjects that is important for engineering, business, and other quantitative fields but at the same time
it is hard to understand for the young kids. With patience, mastery of the subject and a passion to learn and teach Algebra 2, I can help.
48 Subjects: including algebra 2, physics, geometry, statistics
...I want to be graded on the content of my work rather than docked for sloppy mistakes, which is why proofreading is so important. Math is my strongest subject and while I know some people may
struggle with elementary math, I am able to help them learn by providing a basic, logical approach. Some...
33 Subjects: including algebra 2, reading, English, geometry
|
{"url":"http://www.purplemath.com/Newport_Beach_algebra_2_tutors.php","timestamp":"2014-04-16T21:56:56Z","content_type":null,"content_length":"24306","record_id":"<urn:uuid:374aa4f6-337e-45f4-a443-ac21b90013ff>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00389-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Derivation for formula of area of a cone
I take the weekend off, okay?
Area=(2/3)pi(H)(R)^2 (which is starting to look similar to that of the volume of the cone).
But you don't want the volume, you want the area. And the formula you give can't possibly be an area, it has the wrong units. The formula for lateral surface area of a cone (not including the base) is [itex]\pi r\sqrt{r^2+h^2}[/itex].
Okay, we agree that z= (h/R)r. Looking at the cone from the side we see a right triangle with legs of length h and R so that the "lateral length", the hypotenuse of the triangle, is given by [itex]\sqrt{R^2+ h^2}[/itex]. Similarly, a tiny piece, with dr instead of R and dz instead of h, has "lateral length" [itex]\sqrt{dr^2+ dz^2}= \sqrt{dr^2+ (h/R)^2dr^2}= \sqrt{1+ (h/R)^2}dr= \sqrt{h^2+ R^2}/R dr[/itex]. Rotating that around the z-axis gives a thin ribbon with that width and length the circumference of the circle: [itex]2\pi r[/itex]. The area of that "ribbon", and the differential of area for the cone, is the product of those: [itex]2\pi\sqrt{h^2+ R^2}/R rdr[/itex]. To find the area of the cone, integrate that with respect to r. Of course, r goes from 0 to R.
A more advanced way to do this is to write the cone in parametric equations. Here, polar coordinates, r and [itex]\theta[/itex], work fine: [itex]x= rcos(\theta)[/itex], [itex]y= rsin(\theta)[/itex], z= (h/R)r. A "position vector" for a point on the cone is [itex]\vec{r}= rcos(\theta)\vec{i}+ rsin(\theta)\vec{j}+ (h/R)r\vec{k}[/itex]. Differentiating with respect to r, we get [itex]\vec{r}_r= cos(\theta)\vec{i}+ sin(\theta)\vec{j}+ (h/R)\vec{k}[/itex]. Differentiating with respect to [itex]\theta[/itex], we get [itex]\vec{r}_\theta= -rsin(\theta)\vec{i}+ rcos(\theta)\vec{j}[/itex]. The "fundamental vector product" is the cross product of those vectors: [itex]-(h/R)rcos(\theta)\vec{i}-(h/R)rsin(\theta)\vec{j}+ r\vec{k}[/itex]. The differential area is given by the length of that
[tex]\sqrt{(h/R)^2 r^2+ r^2}= \frac{\sqrt{h^2+ R^2}}{R}r[/tex]
times [itex]drd\theta[/itex]. That is, of course, just what we got before.
Finally, an elementary method- no calculus at all! Imagine cutting the cone in a straight line from its base to its vertex and flattening it. You can do that: through any point on a cone there exist
a straight line through that point so it is a "developable surface"- any such surface can by "flattened" (unlike a sphere). We get a part of a circle. Not an entire circle because our circle has
radius equal to the slant height of the cone: [itex]\sqrt{R^2+ h^2}[/itex] and circumference equal to the circuference of the base of the cone: [itex]2\pi R[/itex]. The "portion" of the circle that
we have is the ratio of that circumference to the circumference of a full circle of radius [itex]\sqrt{R^2+ h^2}[/itex]: [itex](2\pi R)/(2\pi\sqrt{R^2+ h^2})= R/\sqrt{R^2+ h^2}[/itex] and so the areas are in the same ratio. The area of an entire circle of radius [itex]\sqrt{R^2+ h^2}[/itex] would be [itex]\pi(R^2+ h^2)[/itex] so your cone has area [itex][R/\sqrt{R^2+ h^2}](\pi(R^2+ h^2))= \pi R\sqrt{R^2+ h^2}[/itex] again.
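As a quick numerical sanity check of the first derivation (an addition here, not part of the thread), one can integrate the differential area [itex]2\pi\sqrt{h^2+ R^2}/R rdr[/itex] from 0 to R and compare it with the closed form; the class name and the sample dimensions R = 3, h = 4 are arbitrary choices:

public class ConeAreaCheck {
    public static void main(String[] args) {
        double R = 3.0, h = 4.0;             // sample cone, slant height 5
        int steps = 100000;
        double dr = R / steps, sum = 0;
        for (int i = 0; i < steps; i++) {
            double r = (i + 0.5) * dr;       // midpoint rule
            sum += 2 * Math.PI * Math.sqrt(h * h + R * R) / R * r * dr;
        }
        double exact = Math.PI * R * Math.sqrt(R * R + h * h);
        System.out.println(sum + " vs " + exact);  // both print ~47.1239
    }
}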
|
{"url":"http://www.physicsforums.com/showthread.php?t=166085","timestamp":"2014-04-17T04:03:46Z","content_type":null,"content_length":"56637","record_id":"<urn:uuid:75076090-c484-4318-ba67-2973d8396c3f>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00568-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Are the $L$-functions of $X_0(N)$ automorphic?
This question, like all of my previous questions regarding Langlands, is very naive.
All $g\geq 1$ curves come from quotients of the upper half plane. The curves $X_0(N)$ come from quotients of special subgroups of the group of automorphisms of the upper half plane. This might imply
that they are easier to work with.
$Gal(\mathbb{Q})$ acts on the Tate module of $X_0(N)$, which leads to a motivic $L$-function. Can one prove that $L$-functions arising from these $X_0(N)$'s are $L$-functions coming from automorphic representations?
Furthermore, is this the motivation for these curves to begin with? If this is true, is this the reason that the modularity theorem (Taniyama-Shimura) is often phrased in terms of parametrizing elliptic curves via $X_0(N)$'s? If not, then why do these curves come up in the formulation of Taniyama-Shimura?
nt.number-theory langlands-conjectures modular-forms
The study of modular curves arose from the theory of elliptic integrals and elliptic functions, and the resulting theory of modular equations. The connections with arithmetic came later (and are an outgrowth of the work of Ramanujan and Hecke, among others). – Emerton Sep 4 '11 at 3:47
2 Answers
Langlands, in his Antwerp II article, was the first to show that the zeta function of a modular curve is exactly the product (well, some of the $L$-functions are in the denominator) of
$L$-functions of modular forms (previous results of Eichler, Shimura, Kuga, Sato, Ihara showed the equality up to finitely many factors). He used a comparison of the Lefschetz trace formula
and the Arthur–Selberg trace formula to accomplish this. This set up a basic approach to proving that zeta functions of Shimura varieties are products of automorphic $L$-functions which
Langlands spent a few papers developing (check out the section on Shimura varieties of his "complete works" website here). This approach involves knowing something specific about the
structure of the points mod $p$ of a Shimura variety. The paper of Langlands and Rapoport at the above link is where the Langlands–Rapoport conjecture on the points mod $p$ is first spelled
out carefully, but there are other places to read about it (in English! and improved/simplified) such as several of Milne's papers such as his article in Motives II or his article in the
Montréal proceedings (which, incidentally, are the proceedings of a conference pretty much whose sole purpose was to prove the zeta function of the Shimura variety associated to a unitary
group in three variables (a Picard modular surface) is a product of automorphic $L$-functions) (the book is called The zeta functions of Picard modular surfaces, edited by Langlands and
Ramakrishnan), and Kottwitz's JAMS article which begins with a historical overview.
The modularity theorem, as first suggested by Taniyama, was in terms of $L$-functions. Basically, he said that if Hasse was correct and the $L$-function of an elliptic curve had analytic continuation and satisfied a functional equation then the inverse Mellin transform of the $L$-function of an elliptic curve could very well be a weight 2 modular form (see Shimura's article
on Taniyama). The formulation in terms of a modular parametrization came from Shimura's work in the late 50s and 60s on constructing quotients of Jacobians of modular curves attached to
modular forms, since some of those quotients were indeed elliptic curves over Q (whose $L$-functions matched up as they should). So, perhaps one could say that modular curves come up in
Shimura–Taniyama because, if Hasse's conjecture that the $L$-function of an elliptic curve has analytic continuation and functional equation is true, then the inverse Mellin transform of it
is a differential form on a modular curve.
Modular curves/forms were interesting to mathematicians way before the 1950s. Poincaré, for one, studied them, but that's a bit far back in time to be my area of expertise.
The zeta function of the modular curve $X_0(N)$ is the product of the $L$-functions of a basis of cusp forms of weight 2 for $\Gamma_0(N)$ (the basis taken to be normalized eigenforms for
the Hecke operators prime to $N$), up to a finite number of factors. See, e.g., Milne's notes on modular forms, Theorem 11.14 (p. 108).
Modular curves are the (or at least one of the) simplest examples of Shimura varieties (See Milne's notes on Shimura varieties). One of the main motivations for the study of Shimura
varieties is showing that their Hasse-Weil zeta functions are products (allowing positive and negative powers) of automorphic $L$-functions (as part of a broader program to prove the same
thing for general algebraic varieties, i.e. that motivic $L$-functions are automorphic). There are plenty of other reasons to study Shimura varieties, though (e.g. they are the most powerful
tool for proving results about special values of automorphic $L$-functions, more advanced versions of $\zeta(2n)\in(2\pi)^{2n}{\mathbb Q}$)
The original version of Taniyama-Shimura-Weil is "for any elliptic curve $E$, there exists a non-constant map from some $X_0(N)$ to $E$ (defined over $\mathbb Q$)". So, there are historical
reasons for phrasing it that way.
|
{"url":"http://mathoverflow.net/questions/74454/are-the-l-functions-of-x-0n-automorphic?sort=oldest","timestamp":"2014-04-19T15:23:41Z","content_type":null,"content_length":"62163","record_id":"<urn:uuid:6f3a0075-d76b-4605-b149-e7c0bfd341c4>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00046-ip-10-147-4-33.ec2.internal.warc.gz"}
|
BellKor Algorithm: Pearson Correlation Coefficient
In their paper on the k-nearest neighbor (knn) algorithm, the BellKor team mention three choices that they considered for a similarity function: Pearson's Correlation Coefficient, the Cosine Coefficient, and the
Root Mean Square Difference.
In today's blog, I will focus solely on the Pearson Correlation Coefficient. It is clear from the paper that they did a lot of work with Pearson. There is also a good explanation about it in the book Collective Intelligence.
In line with the BellKor article, I will focus on the item-item correlation. Newman! and Tillberg have reported that they can generate this statistic in 20 minutes on a single processor and others
have reported generating it in 5 minutes on a quad-core processor. The algorithm that I describe in today's blog took 2 hours to run and required 1GB RAM on my machine. It took 45 minutes to load up
into mysql. I wrote the algorithm in java. I suspect that a C/C++ algorithm can go faster and the code is not well optimized. My goal here is to present the algorithm in the clearest possible way.
MySQL Tables
It is assumed that a Rating table already exists in mysql that includes all the Netflix training data. In a previous blog, I went into detail on how I loaded this table from the Netflix data and removed the probe data. Here is the table:
Rating Table
• movieid: int(2)
• userid: int(3)
• rating: int(1)
• date: date
• first: date
• support: int(3)
• avg: double
• avg2: double
• residual: double
In addition to this table, I created a table which I called Pearson. Here is the sql for creating this table:
CREATE TABLE pearson (movie1 int(2), movie2 int(2), num int(3), pearson float,
index index_pearson_movie1(movie1),
index index_pearson_pearson(pearson));
So, we end up with the following table:
Pearson Table
• movie1: int(2)
• movie2: int(2)
• num: int(3)
• pearson: float
Now, the tricky part is generating the pearson correlation coefficient for all 17,770 movies. That's a set of 17,770 x 17,770 = over 315 million records. On a very powerful machine, that may not be a problem but on a 1GB machine, if you do not code the algorithm correctly, the processing can easily take more than 2 hours.
But before jumping into the algorithm, let's take a look at the Pearson formula. One of the important optimizations comes down to simplifying the standard formula.
Pearson Formula
Here's the formula for computing the coefficient for two items x and y:

r = ( sum(xy) - sum(x)sum(y)/n ) / sqrt( ( sum(x^2) - sum(x)^2/n ) * ( sum(y^2) - sum(y)^2/n ) )

where the sums run over the n users who rated both items. Following the cue from the book Collective Intelligence by O'Reilly, it gets much simpler if we decompose it into the following 5 variables:

sum1 = sum(x), sum2 = sum(y), sumsq1 = sum(x^2), sumsq2 = sum(y^2), sumpr = sum(xy), with num = n

Then, the formula becomes:

r = ( sumpr - sum1*sum2/num ) / sqrt( ( sumsq1 - sum1^2/num ) * ( sumsq2 - sum2^2/num ) )
The result will be between -1 and 1 where the closer it is to 1, the closer the match. A movie compared to itself will score a 1.
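As a toy illustration of these five sums (a sketch added here, not code from the BellKor paper; the method name is made up), here is the same computation applied to two small arrays of ratings from the same users:

// Toy Pearson on two rating arrays, using the five sums defined above.
static double pearson(int[] x, int[] y) {
    double sum1 = 0, sum2 = 0, sumsq1 = 0, sumsq2 = 0, sumpr = 0;
    int num = x.length;                       // ratings come in pairs
    for (int i = 0; i < num; i++) {
        sum1 += x[i];  sum2 += y[i];
        sumsq1 += x[i] * x[i];  sumsq2 += y[i] * y[i];
        sumpr += x[i] * y[i];
    }
    double top = sumpr - sum1 * sum2 / num;
    double bottom = Math.sqrt((sumsq1 - sum1 * sum1 / num)
                            * (sumsq2 - sum2 * sum2 / num));
    return bottom == 0 ? 0 : top / bottom;
}
// pearson(new int[]{5,3,1}, new int[]{5,3,1}) returns 1.0, as expected.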
Loading up the Ratings table into memory
In order to calculate the pearson coefficient quickly, we need to load up the Rating table into memory and we need to have it sorted by both movieid and userid.
Here is what
Dan Tillberg writes
in the Netflix Prize forum:
...the trick is to have trainer ratings sorted by both movie and by user. Then, to find movie-movie similarities for, say, movie m1, you look at all of the *users* that rated movie m1, and then
you go through each of those users' *movies* that they've rated to fill in the entire m1'th row of the movie-movie similarity matrix (actually, you only need to bother with movies m2 <= m1, since
the other half is symmetrical, as noted further below). I do this in three minutes on an quad-core processor. If you calculate it cell by cell instead of row by row it takes a lot longer.
I'll talk about the computation of the coefficient in the next section. On my 1GB machine, I needed to run java with the following options:
java -Xms950m -Xmx950m pearson > pearson.txt
So, the goal is to output the result to a file pearson.txt which I will later load up into MySQL.
To speed things up, I don't fully sort the ratings by userid and the movieid. Instead, I group all the ratings by userid and then I group all the ratings by movieid. This makes the load up into
memory go significantly faster.
Here are the arrays that I used for the load up:
private static final int NUM_RECORDS=99072112;
byte[] ratingByUser = new byte[NUM_RECORDS];
short[] movieByUser = new short[NUM_RECORDS];
byte[] ratingByMovie = new byte[NUM_RECORDS];
int[] userByMovie = new int[NUM_RECORDS];
To group each rating, I use the following arrays:
private static final int NUM_USERS=480189;
private static final int NUM_MOVIES=17770;
int[] userIndex = new int[NUM_USERS+1];
int[] userNextPlace = new int[NUM_USERS];
int[] movieIndex = new int[NUM_MOVIES+1];
int[] movieNextPlace = new int[NUM_MOVIES];
Some Help from MySQL
To speed up the processing, I create two tables in MySQL that will provide me with the number of movies by user and the number of users by movie. These will be relatively small tables and that will
help in the grouping process. I will use the term "support" in the same way as BellKor, where "movie support" is the number of users who rated each movie and "user support" is the number of movies rated by each user.
Here are the two tables:
Movie_Support Table
• movieid: int(2)
• support: int(3)
User_Support Table
• userid: int(3)
• support: int(3)
Here's the sql for creating and loading up these tables:
CREATE TABLE movie_support (movieid INT(2), support INT(3));
CREATE TABLE user_support (userid INT(3), support INT(3));
INSERT INTO movie_support (movieid,support) SELECT movieid, COUNT(rating) FROM rating GROUP BY movieid;
INSERT INTO user_support (userid,support) SELECT userid, COUNT(rating) FROM rating GROUP BY userid;
We don't need any indexes for either of these tables.
Handling UserId
UserId presents an opportunity for reducing storage space. Now, the userid data presented as part of training varies from 6 to 2,649,429. But, there are only 480,189 distinct users.
To address this point, I use a simple hash table to map a relative userid (1 .. 480,189) to the absolute userid (6 .. 2,649,429). Here's the code to do this:
private static int nextRelId = 0;
private static HashMap<Integer,Integer> userMap = new HashMap<Integer,Integer>();
private static int getRelUserId(int userId) {
    Integer relUserId = userMap.get(userId);
    if (relUserId == null) {
        relUserId = nextRelId++;
        userMap.put(userId, relUserId);  // remember the mapping
    }
    return relUserId;
}
Before Grouping Ratings by Movie and User
To prepare to load up all the ratings and group them by userid and movieid, I used the following java code:
userIndex[NUM_USERS] = NUM_RECORDS;
int i=0;
ResultSet rs = stmt.executeQuery("SELECT userid,support FROM user_support");
while (rs.next()) {
    int relUserId = getRelUserId(rs.getInt("userid"));
    userIndex[relUserId] = i;       // where this user's ratings start
    userNextPlace[relUserId] = i;   // next free slot for this user
    i += rs.getInt("support");
}
movieIndex[NUM_MOVIES] = NUM_RECORDS;
i = 0;
rs = stmt.executeQuery("SELECT movieid,support FROM movie_support");
while (rs.next()) {
    int relMovieId = rs.getInt("movieid")-1;
    movieIndex[relMovieId] = i;     // where this movie's ratings start
    movieNextPlace[relMovieId] = i; // next free slot for this movie
    i += rs.getInt("support");
}
Once I have these indices set, I am ready to start loading up. The advantage of these arrays will now become apparent: we do a very fast single pass through the entire Rating table.
Grouping the Ratings by User and Movie
Here's the java code for loading up the ratings:
rs = stmt.executeQuery("SELECT userid,movieid,rating FROM rating");
while (rs.next()) {
int userId = rs.getInt("userid");
int relUserId = getRelUserId(userid);
short movieId = rs.getShort("movieid");
int relMovieId = movieId-1;
byte rating = rs.getByte("rating");
movieByUser[userNextPlace[relUserId]] = movieId;
ratingByUser[userNextPlace[relUserId]] = rating;
userByMovie[movieNextPlace[relMovieId]] = userId;
ratingByMovie[movieNextPlace[relMovieId]] = rating;
Quickly Tallying the Ratings
Once we've loaded up ratings into memory, we are ready to process them. To do this efficiently, I use a technique that Newman! posted on the Netflix Prize forum. Here's Newman's description:
If you're using raw score values (1 to 5), you can speed things up significantly by replacing the PearsonIntermediate structure with:
unsigned int32 viewerCnt[5][5];
where viewerCnt[m][n] is the number of viewers who gave movie X a score of m+1 and movie Y a score of n+1. Furthermore, based on the total number of viewers of a movie, you can replace int32 with
int16 or int8, speeding it up even further.
On my 1.5G laptop, it took 20 minutes to calculate 17770x17770 correlations and write out a big data file. Of course, you really need to calculate only 17770x17770/2, if you have enough memory to
hold all correlation and related values (like average rating from common viewers) in memory, otherwise disk seeking will kill your running time and hard disk when you read the data file later. On
a multi-core processor, you can easily parallelize the calculations. So I think on a fast multi-core system, you can calculate all movie correlations in less than 5 minutes.
So, here's my code:
int[][][] values = new int[NUM_MOVIES][5][5];
for (int i=0; i < NUM_MOVIES-1; i++) {
    // reset the 5x5 tallies for every movie paired with movie i
    for (int j=i+1; j < NUM_MOVIES; j++)
        for (int k=0; k < 5; k++)
            for (int l=0; l < 5; l++)
                values[j][k][l] = 0;
    // for each user who rated movie i, walk that user's other ratings
    for (int j=movieIndex[i]; j < movieIndex[i+1]; j++) {
        int relUserId = getRelUserId(userByMovie[j]);
        for (int k=userIndex[relUserId]; k < userIndex[relUserId+1]; k++) {
            if (movieByUser[k]-1 > i) {
                values[movieByUser[k]-1][ratingByUser[k]-1][ratingByMovie[j]-1]++;
            }
        }
    }
    // ... then compute the Pearson for each pair (i, j) as in the next section
}
Shrinking the Pearson Coefficient in relation to data available
BellKor state that they shrink the pearson coefficient based on the count of data available. They use the following formula in the article (it matches the num/(num+10) factor in the code below):

shrunk pearson = pearson * |U(i,j)| / ( |U(i,j)| + α )

Where |U(i,j)| is "the set of users who rated both items j and k".
In the original article, they state that they used "some small α." In the NetPrize Forum, Yehuda Koren thinks that they "used alpha values around 5-10." In my code, I use α = 10.
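For instance, with α = 10 a movie pair rated by 40 common users keeps 40/(40+10) = 80% of its raw coefficient, while a pair with only 5 common users keeps just 5/15 ≈ 33%, so thinly supported similarities are damped the most.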
Computing the Pearson
With this tally, I am now able to compute the Pearson coefficient for each movie in comparison to movie i.
Here's the code:
for (int i=0; i < NUM_MOVIES-1; i++) {
    // Insert the code from above for tallying the totals...
    for (int j=i+1; j < NUM_MOVIES; j++) {
        float sum1=0;
        float sum2=0;
        float sumsq1=0;
        float sumsq2=0;
        float sumpr=0;
        float num=0;
        for (int k=1; k <= 5; k++) {
            for (int l=1; l <= 5; l++) {
                int val=values[j][k-1][l-1];
                sum1 += l*val;
                sum2 += k*val;
                sumsq1 += l*l*val;
                sumsq2 += k*k*val;
                sumpr += k*l*val;
                num += val;
            }
        }
        if (num == 0) continue;   // no common raters for this pair
        float top = sumpr - (sum1*sum2/num);
        float bottom = (float)Math.sqrt((sumsq1 - (sum1*sum1)/num)*(sumsq2 - (sum2*sum2)/num));
        if (bottom != 0) {
            float pearson = (top/bottom)*(num/(num+10));
            System.out.println((i+1) + "," + (j+1) + "," + (int)num + "," + pearson);
        } else {
            // no variance in the common ratings: record a zero
            System.out.println((i+1) + "," + (j+1) + "," + (int)num + ",0");
        }
    }
}
Loading Pearson Coefficients into MySQL
Here's the sql that I used to load back up into MySQL
use netflix;
ALTER TABLE pearson DISABLE KEYS;
LOAD DATA LOCAL INFILE 'pearson.txt' INTO TABLE pearson
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n';
ALTER TABLE pearson ENABLE KEYS;
That's pretty much it.
Using Your Pearson Table
For example, to compute the 10 movies that are most like The Adventures of Robin Hood (movieid 16840), we do the following:
SELECT movie1,pearson FROM pearson WHERE movie2=7064 ORDER BY pearson DESC LIMIT 10;
Now, I ran my pearson against the entire training data so your results may be different.
Here are my results:
• The Adventures of Robin Hood: Bonus Material (movieid: 3032, pearson: 0.64)
• Fairly Odd Parents: School's Out! The Musical (movieid: 15618, pearson: 0.6234)
• Imelda (movieid: 15948, pearson: 0.6232)
• Tiger Bay (movieid: 815, pearson: 0.61592)
• The Adventures of Ociee Nash (movieid: 12979, pearson: 0.61591)
• Counsellor at Law (movieid: 9122, pearson: 0.61)
• Empires: Peter and Paul and the Christian Revolution (movieid: 9574, pearson: 0.60)
• Hillary & Tenzing: Climbing to the Roof of World (movieid: 735, pearson: 0.599)
• Faerie Tale Theater: Snow White and the Seven Dwarfs (movieid: 3358, pearson: 0.596)
• Stephen Sondheim's Putting It Together (movieid: 5868, pearson: 0.595)
29 comments:
Impressive write-up once again.
One slightly off topic question. In their progress prize paper they mention:
"Another kind of similarity is based solely on who-rated-what (we refer to this as "support-based" or "binary-based" similarity). The full details are given in an Appendix 1."
"Interestingly, when post-processing residuals of factorization or RBMs, these seemingly inferior support-based similarities led to more accurate results."
I have found pearson not to be a good similarity measure when post processing residuals of other methods. I wonder if you do understand what they do in "Appendix 1". I'm also asking about it in the Netflix Prize forum.
I'm glad you liked my write up.
As for your question, I have not thought in enough detail about Appendix 1 to give you a good answer.
It will be interesting to see how people respond on the Netflix Prize Forum.
When I have a good answer, I will post it here or respond to your question in the Netflix Prize Forum.
Hi, Newman! here. I didn't mention that replacing int32 with int16 or int8 gave me the most performance boost, no doubt because absolute majority of viewers have less than 65536 ratings, and a
large portion of them (more than 40% if I remember correctly) have less than 256 ratings, so the amount of memory you access is significantly less. In my C++ implementation, I just wrote a
template function and have one copy each for int32, int16, and int8.
Also, the optimal data order for values[][][] is not [NUM_MOVIES][5][5], but [5][NUM_MOVIES][5], this is a cache miss/memory access pattern thing, but it doesn't help as much as int32/int16/int8
Hi Newman!
Thanks very much for the clarifications on your algorithm!
This comment has been removed by the author.
I've been following your blog and loved the level of detail you covered on this topic. I tried to reproduce your results, but am getting OutOfMemoryErrors before my query can get all the data
from the ratings table. I tried doubling your recommended heap size and that didn't do it. I even tried using stmt.setFetchSize(num_rows) which surprisingly didn't fix my problem either.
Any ideas?
How much memory does your machine have? Also, are you running any other applications at the same time? For example, I found that I needed to close down Firefox to get it working.
I was able to run my algorithm on a machine with 1GB memory.
Thanks for the quick response. I actually just figured out the problem was indeed that JDBC was trying to load the entire ratings query result into memory. Statement.setFetchSize(1) didn't solve
my problem, but I learned that strangely Statement.setFetchSize(Integer.MIN_VALUE) did! Unless this is some strange behavior by my version of the JDBC drivers, I'm not sure why this works.
Yes, that's a well known problem with MySQL's version of JDBC. I'm glad that you figured it out.
Here's the bug report.
hi, thank you very much for your writeups!
FYI... if you run the knn on residuals from the global effects, adventures of robin hood suddenly has very reasonable continuous-like neighbors. that is--bonus mt'l, pirates of caribbean, indep
day, lor 123, american beauty, and so on. I tried to email you, but can't find your email anywhere.
Hi Larry,
Many thanks for this code - I also used your sql setup and it worked great. I don't know much about java and I'm having trouble compiling your code. Is there any chance you could post it as a
single file?
- Andy
Hi Andy,
Thanks for your feedback.
I'll try to put all the java code in a single place in the next week or so. I'll post the link here when I do.
That would be great, Larry. Thank you again.
It is not clear to me how global effects is used in the pearson calculation, i.e. no use is made of 'residual'. I'm about to do #4 of your previous blog and came to a grinding halt when I
realized that this is not used in the pearson calculation at all described in this blog. Am I missing something?
MikeM, I might be wrong, I haven't read Mike's other page. But the idea I got from reading the papers was that you'd run the pearson on the data set after you've applied the global effects - they
are not predictive models in and of themselves. The GE iron out anomolies and bumps in the data, the pearson ties them together.
Just realized that Larry answered my question in his previous blog. I.e. you do rating minus all the effects, then predict using your knn algorithm and then add the effects back in to produce the
qualifying result. This looks reasonable.
I managed to get the code up and compiled, after correcting several apparent mistakes with the iterators. But I'm getting some kind of exception at the 8,17770 mark, and the pearson results don't
look at all right. Is there any chance of this code being updated to something usable? The whole tally part mystifies me and I just can't work it out :/
Sorry for the problems with the code. I have not figured out an easy way to get code into Blogger. It insists on reformatting it.
Here's the essence of the code:
(1) int[] usersOrderedByMovie
Assume this is an array of users ordered by movie.
(2) int[] movieIndex
Assume that this is an index that maps movies to usersPerMovie.
(3) int[] numUsersPerMovie
This is a number of users who rated a movie.
In this way, we can iterate through all users for a movie 10 by using:
int base=movieIndex[10];
int n = numUsersPerMovie[10];
for (int i=base; i < base+n; i++) {
int userId = usersOrderedByMovie[i];
We can likewise do the same for users so that we have:
(4) int[] moviesOrderedByUser
(5) int[] userIndex
(6) int[] numMoviesPerUser
Now here's the algorithm for calculating the Pearson for all movies:
double[] similarity = new double[NMOVIES];
short[][][] values = new short[NMOVIES][5][5];
for (int movieid=1; movieid < 17771; movieid++) {
    for each user (u) who rated this movie {
        for each other movie (m != movieid) rated by this user {
            let r = rating for u,m
            tally it: values[m-1][r-1][(rating for u,movieid)-1]++
        }
    }
    compute the pearson for each pair (movieid, m) from the values tallies
}
That's the essence of the algorithm. The tally is just a count to make it easier to order the usersOrderedByMovie and the moviesOrderedByUser.
Hi Larry,
Thank you for your further explanation, I think I see what I'm supposed to do; but I'm still not sure why your code is stopping at 8,17770. I've posted a copy here with all the sql access stuff
The only changes I've made are to correct some typos, and these lines:
"for (int k=userIndex[relUserId]; k < userIndex[relUserId+1]; j++)" - where I changed j++ to k++. And:
for (int l=1; k <= 5; l++) where I changed k to l.
Are these corrections correct?
The exception comes at 8,17770 when trying to update the "values[movieByUser[k]-1][ratingByUser[k]-1][ratingByMovie[j]-1]++;"
err, anyone else trying to use the code I pasted should probably remove the 3 "System.out.println(+new Date());" lines if you're running from the command line. I've also limited the final sql
rating query to 0,1000000 to make the loadup quicker for debugging, so remove that. This has no apparent effect on the exception.
Hi Larry,
Ok, I finally got this up and running for me, so I thought I'd post a link to the complete job:
This includes all the server access garb, some time outputs to let you see the progress, and FileWriter outputs. I ran this from Run in Eclipse as, not knowing anything about Java, I couldn't get
it to compile from the command line ;)
As I said before, I had to change a couple of the k & l iterators. In addition, in order to get sane results I had to change sumsq1, sumsq2 and sumpr to += and deleted the +10 from "float pearson
= (top/bottom)*(num/(num+10))".
Many thanks for all your work - I'd still be nowhere without it.
Oh! I see - the +10 is the alpha value for the shrinking of the pearson. Sorry, missed that bit. I couldn't work out why comparing a movie against itself wasn't coming out with 1.0 with that in
there - but I saw after that without this, you get lots of 1.0 values for movies with low support.
Sorry for spamming your blog with my noobishness, Larry.
Larry, what's the probe and qualifying RMSE of your KNN algorithm ?
Hi Newman!,
I've haven't completed the KNN algorithm yet so I don't have an answer for you.
Recently, I've been working with SVD, SVD++, and the Brism.
I am planning to finish my work with KNN after I get SVD++ working.
Hi Newman!,
My standard version of KNN improved my quiz RMSE by 5.
KNN combined with SVD++, etc. gave me a quiz score of .8848.
Hi Larry,
There's a typo here: "My standard version of KNN improved my quiz RMSE by 5." ?
I just finish a blog on my optimizations of KNN calculations. I'd appreciate it if you take a quick look and tell me if some part is not clear ?
Hi Newman!,
Yes, a typo. I meant, I improved by .0005.
I've been thinking about RMSE with a multiple of 1000.
I haven't submitted my implementation of the BellKor knn yet. I am hoping to do that tonight.
I'll be very glad to review your blog entry.
Hi Larry,
Thank you for this blog !
T have a question about the pearson correlation formula:
When I compute correlation between 2 movies evaluated by 2 users
user 1 gives following ratings : 1, 1, 1 to movie : M1, M2, M3.
user 2 gives following ratings : 1,1,1 to movies: M1, M2, M3.
the result of the pearson correlation formula is 0.
So how could I interpret this result?
Thank for your helps.
Hi Amel,
Remember a result is between 1 and -1. 0 means that the two results are not as similar as they could be (1 is closest) and they are not as dissimilar as they could be (-1 is the farthest).
|
{"url":"http://algorithmsanalyzed.blogspot.com/2008/07/bellkor-algorithm-pearson-correlation.html","timestamp":"2014-04-18T16:12:39Z","content_type":null,"content_length":"107069","record_id":"<urn:uuid:c24add35-24c0-47ef-8066-03c7adf02e29>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00094-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Lambda the Ultimate
Brute Force is a relative term
Took a few seconds...
My solver (which is not much evolved beyond brute force) took a few seconds to get there. I would have taken a lot longer to do it manually...
Dominic Fox
at Mon, 2005-06-20 15:34
Limits of brute-force
The following puzzle only has 17 given numbers which, according to Wikipedia, is the minimal known.
This presents the pure brute-force solvers with a rather large space to search. You'll probably solve it long before the prolog code in the original post could.
1 - - - - - - - -
- - 2 7 4 - - - -
- - - 5 - - - - 4
- 3 - - - - - - -
7 5 - - - - - - -
- - - - - 9 6 - -
- 4 - - - 6 - - -
- - - - - - - 7 1
- - - - - 1 - 3 -
(I found this puzzle here.)
Peter McArthur
at Mon, 2005-06-20 13:51
Maybe it's then a nice case to [derive by] brute-force minimal one-sat instances (or otherwise smallest unsat/hitting set instances), there was some interest for minimal one-sat on wikipedia ;-);
then again, there are a lot of symmetries in the puzzle so preferably one would need to solve that first.
[Oh, forgot my manners; thanks :-)]
at Tue, 2005-06-14 22:36
Too easy....
It is not really an interesting example because it is too simple; most puzzles are solved using only propagation, given a domain-consistent propagation algorithm for all-different. The really hard instances require a few choice-points.
Mikael 'Zayenz' Lagerkvist
at Tue, 2005-06-14 11:57
Maybe I shouldn't type LtU messages in between meetings ;-)
[But you are right BTW, looks like a pretty normal CSP problem to me]
[And also, I guess the intent of the original statement was: are Sudoku problems easily solved by trivial brute force? So no need for CSP solving?]
at Mon, 2005-06-13 20:00
Is that what you were wondering about too?
Not really ;-)
Ehud Lamm
at Mon, 2005-06-13 16:36
CSP solutions?
I would expect some DPLL kind of solution given the size of the statespace. Or is the solution [of a Sudoku problem] normally that far off the phase transition point/easy? (Not that I actually believe
in phase transition)
Is that what you were wondering about too?
at Mon, 2005-06-13 14:07
GUI + solver
David Easton has created a Tcl/Tk Sudoku GUI which includes code for solving puzzles. If you download the starkit, you can use SDX to unwrap it, and then the solver code is in sudoku.vfs/lib/
app-sudoku/sudoku-solve.tcl, although the code is not as clear as the Prolog solver.
Neil M
at Sun, 2005-06-12 12:37
Simple solver
I wrote a fairly stupid solver in Haskell, before I'd even tried solving a Sudoku puzzle manually. Turns out there are smarter ways to do it than they way I tried (and doing them manually is more
fun). But it's an interesting exercise, all the same.
The interest in the puzzle lies in the fact that there are several different ways of deducing a "necessary" move (I tried only one, and used trial-and-error where that failed), and the best puzzles
are solvable without the use of trial-and-error.
Dominic Fox
at Sun, 2005-06-12 11:39
Sure, searching all 9^64 possible ways of filling in the 64 blanks with the digits 1..9 is a (very!) slow way of solving the puzzle. But I wouldn't call this brute force, I would call it
There are easy and obvious ways to improve this algorithm. Consider this simple backtracking algorithm:
To check if a position is solvable:
1. Choose a blank square
2. Place an untried number in that square.
3. Did that placement violate the rules? Then go to 2.
(No numbers left to try? Then this position isn't solvable.)
4. Otherwise, check if this new position is solvable.
(No blank squares left? Then you have a solution!)
This recursive algorithm can be thrown at many different problems. It is easily implemented in almost any modern language. It is a brute-force approach that is a great place to start when you don't
understand the problem very well.
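For concreteness, here is a minimal Java sketch of that backtracking scheme (an illustration added here, not Leon's code); it assumes a 9x9 int grid with 0 marking blank squares:

static boolean solve(int[][] g) {
    for (int r = 0; r < 9; r++)
        for (int c = 0; c < 9; c++)
            if (g[r][c] == 0) {                  // step 1: a blank square
                for (int v = 1; v <= 9; v++) {   // step 2: try each number
                    if (legal(g, r, c, v)) {     // step 3: rules check
                        g[r][c] = v;
                        if (solve(g)) return true;  // step 4: recurse
                        g[r][c] = 0;             // undo and try the next value
                    }
                }
                return false;                    // nothing fits: backtrack
            }
    return true;                                 // no blanks left: solved
}

static boolean legal(int[][] g, int r, int c, int v) {
    for (int i = 0; i < 9; i++)
        if (g[r][i] == v || g[i][c] == v) return false;
    int br = r - r % 3, bc = c - c % 3;          // top-left of the 3x3 box
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            if (g[br + i][bc + j] == v) return false;
    return true;
}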
There are non-deterministic choices in the first and second steps, but they don't affect correctness of the algorithm. They do affect the performance (rather profoundly!). Simply choosing a blank square with a minimal number of possibilities is a great heuristic for step 1. This heuristic is essentially what Dominic's Solver does.
Leon P Smith
at Wed, 2005-06-22 03:12
What I meant was ...
When I said "brute force" I actually meant the "guess and check" depth-first search strategy that Leon Smith described. I should have qualified my statement by pointing out how slow my hardware and
run-time are. Re-implementing in C yields a 100-fold speed up.
What I meant by "avoiding brute force" was "using the rules of inference that human solvers use". This becomes essential as we scale up to, say, 25x25 grids.
We can still keep things elegant. For example, if we represent the grid as a 4-dimensional 3x3x3x3 grid, then the different kinds of region (rows, columns and squares) become constraints on different
pairs of co-ordinates of the grid. We can also combine all of the inference rules into one as follows:
"Consider each cell to be the set of possible values that we know, or have inferred, that it might contain. Thus, a cell known to contain a 1 is {1} and a cell about which we know nothing is {1, 2,
... , 9}. A region is one of the 1x9 rows, 9x1 columns or 3x3 squares that are constrained to contain each of the digits 1--9. For any region, R, and for any subset of R, r, let V be the union of the
possible values of all the cells in r. If the size of r is the same as the size of V, then no cell in R - r can contain any of the values in V."
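(A small illustration of the rule: if two cells of a row both have possible values {1, 2}, then r has size 2, V = {1, 2}, and 1 and 2 can be eliminated from every other cell in that row.)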
Are we having fun yet?
Peter McArthur
at Thu, 2005-12-29 22:04
20 ms
on a lowly K6 (or thereabouts, it's below timer resolution). This solver just keeps a bitmap of legal symbols per cell, updated whenever a symbol is entered into a cell. It repeatedly tries a symbol
in a cell with the minimum number of options left, which most of the time avoids any need for backtracking.
The program started out in Haskell, but I rewrote it in C when I noticed that startup of the runtime dominated actual processing time. Now I don't consider Sudoku a puzzle for clever minds anymore...
unless the grid is made a lot bigger.
This particular puzzle required finding 47 "hidden singles" and not a single guess, everything else was stupidly filling in the "obvious singles". That was easy ;-)
Udo Stenzel
at Fri, 2006-06-30 12:50
Not by hand
For your program, it was easy perhaps, but I did it by hand, and while I didn't have to guess, it wasn't easy..
at Sat, 2006-07-01 14:45
Short Solution in JavaScript
Here's a short solution in JavaScript.
Kevin Greer
at Mon, 2005-12-26 15:43
Peter Norvig
A Python solution, with detailed explanations.
Ehud Lamm
at Fri, 2006-06-30 09:36
Generating Sudoku is more interesting than solving
Designing a program to generate Sudoku puzzles was one of the challenges in the Extravagaria workshop at OOPSLA last year.
at Fri, 2006-06-30 15:51
Quite ... ;-)
Quite ... ;-)
Ehud Lamm
at Fri, 2006-06-30 19:45
Sudoku as homework problem in Oz
Russ Abbott's AI course has several Sudoku solutions as constraint programs in Oz.
Peter Van Roy
at Fri, 2006-06-30 19:51
Haskell Solutions
I thought there might as well be a link to the Sudoku page on HaskellWiki.
Bryan Burgers
at Sat, 2006-07-01 00:08
forth solver
I remember seeing one here.
I like semantics of forth, but I dislike its readability..
David Medlock
at Sat, 2006-07-01 02:43
Sudoku in Haskell
As part of a new Advanced Functional Programming course in Nottingham, I presented a Haskell approach to solving Sudoku puzzles, based upon notes from Richard Bird. The approach is classic Bird:
start with a simple but impractical solver, whose efficiency is then improved in a series of steps. The end result is an elegant program that is able to solve any Sudoku puzzle in an instant. It’s
also an excellent example of what has been termed “wholemeal programming” — focusing on entire data structures rather than their elements.
A literate Haskell script is available here.
Graham Hutton
at Sat, 2006-07-01 07:58
Ehud Lamm
at Sat, 2006-07-01 10:24
|
{"url":"http://lambda-the-ultimate.org/node/772","timestamp":"2014-04-18T10:35:31Z","content_type":null,"content_length":"29898","record_id":"<urn:uuid:b30323a9-6f99-43cc-b9cb-5c59a0dfb92d>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00556-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Exponential Decay | Algebra
The population of a city was 10,000 in 2012 and is declining at a rate of 5% each year. If this decay rate continues, what will the city's population be in 2017?
In the last concept, we only addressed functions where $|b|>1$. So, what happens when $b$ is less than one? Let's analyze $y=\left(\frac{1}{2}\right)^x$.
Example A
Graph $y=\left(\frac{1}{2}\right)^x$ and compare it to $y=2^x$.
Solution: Let’s make a table of both functions and then graph.
$x$ $\left(\frac{1}{2}\right)^x$ $2^x$
3 $\left(\frac{1}{2}\right)^3 = \frac{1}{8}$ $2^3=8$
2 $\left(\frac{1}{2}\right)^2 = \frac{1}{4}$ $2^2=4$
1 $\left(\frac{1}{2}\right)^1 = \frac{1}{2}$ $2^1=2$
0 $\left(\frac{1}{2}\right)^0 = 1$ $2^0=1$
-1 $\left(\frac{1}{2}\right)^{-1} = 2$ $2^{-1}=\frac{1}{2}$
-2 $\left(\frac{1}{2}\right)^{-2} = 4$ $2^{-2}=\frac{1}{4}$
-3 $\left(\frac{1}{2}\right)^{-3} = 8$ $2^{-3}=\frac{1}{8}$
Notice that $y=\left(\frac{1}{2}\right)^x$ is a reflection over the $y$-axis of $y=2^x$. Unlike exponential growth, the function $y=\left(\frac{1}{2}\right)^x$ decreases exponentially, or exponentially decays. Anytime $b$ is a fraction or decimal between zero and one, the exponential function decreases. Like the exponential growth function, an exponential decay function has the form $y=ab^x$, but now with $a>0$ and $0<b<1$, and it has the same horizontal asymptote, $y=0$.
Example B
Determine which of the following functions are exponential decay functions, exponential growth functions, or neither. Briefly explain your answer.
a) $y=4(1.3)^x$
b) $f(x)=3 \left(\frac{6}{5}\right)^x$
c) $y = \left(\frac{3}{10}\right)^x$
d) $g(x)= -2(0.65)^x$
Solution: a) and b) are exponential growth functions because $b>1$. c) is an exponential decay function because $0<b<1$. d) is neither because $a<0$.
Example C
Graph $g(x)=-2 \left(\frac{2}{3}\right)^{x-1}+1$ and find the $y$-intercept, asymptote, domain, and range.
Solution: To graph this function, you can either plug it into your calculator (entered Y= -2(2/3)^(X-1)+1) or graph $y=-2 \left(\frac{2}{3}\right)^x$ and shift it to the right one unit and up one unit.
The $y$-intercept is:
$y=-2 \left(\frac{2}{3}\right)^{0-1}+1=-2 \cdot \frac{3}{2}+1=-3+1=-2$
The horizontal asymptote is $y=1$, the domain is all real numbers, and the range is $y < 1$.
Intro Problem Revisit This is an example of exponential decay, so we can once again use the exponential form $f(x)=a \cdot b^{x-h}+k$, where a = 10,000 (the starting population), x - h = 5 (the number of years), and k = 0; b is a bit trickier. If the population is decreasing by 5%, each year the population is (1 - 5%) or (1 - 0.05) = 0.95 times what it was the previous year. This is our b.
$P = 10,000 \cdot 0.95^5\\= 10,000 \cdot 0.7738 = 7738$
Therefore, the city's population in 2017 is 7,738.
Guided Practice
Graph the following exponential functions. Find the $y$-intercept, asymptote, domain, and range of each.
1. $f(x)=4 \left(\frac{1}{3}\right)^x$
2. $y=-2 \left(\frac{2}{3}\right)^{x+3}$
3. $g(x)= \left(\frac{3}{5}\right)^x-6$
4. Determine if the following functions are exponential growth, exponential decay, or neither.
a) $y=2.3^x$
b) $y=2 \left(\frac{4}{3}\right)^{-x}$
c) $y=3\cdot 0.9^x$
d) $y=\frac{1}{2} \left(\frac{4}{5}\right)^{x}$
1. $y$-intercept: $(0, 4)$; asymptote: $y=0$; domain: all real numbers; range: $y > 0$
2. $y$-intercept: $\left(0, -\frac{16}{27}\right)$; asymptote: $y=0$; domain: all real numbers; range: $y<0$
3. $y$-intercept: $(0, -5)$; asymptote: $y=-6$; domain: all real numbers; range: $y>-6$
4. a) exponential growth
b) exponential decay; recall that a negative exponent flips whatever is in the base. $y=2 \left(\frac{4}{3}\right)^{-x}$ is the same as $y=2 \left(\frac{3}{4} \right)^{x}$
c) exponential decay
d) neither; $a < 0$
Exponential Decay Function
An exponential function that has the form $y=ab^x$ where $a>0$ and $0<b<1$.
Determine which of the following functions are exponential growth, exponential decay or neither.
1. $y= -\left(\frac{2}{3}\right)^x$
2. $y= \left(\frac{4}{3}\right)^x$
3. $y=5^x$
4. $y= \left(\frac{1}{4}\right)^x$
5. $y= 1.6^x$
6. $y= -\left(\frac{6}{5}\right)^x$
7. $y= 0.99^x$
Graph the following exponential functions. Find the $y$-intercept, asymptote, and the domain and range for each function.
8. $y= \left(\frac{1}{2}\right)^x$
9. $y=(0.8)^{x+2}$
10. $y=4 \left(\frac{2}{3}\right)^{x-1}-5$
11. $y= -\left(\frac{5}{7}\right)^x +3$
12. $y= \left(\frac{8}{9}\right)^{x+5} -2$
13. $y=(0.75)^{x-2}+4$
14. Is the domain of an exponential function always all real numbers? Why or why not?
15. A discount retailer advertises that items will be marked down at a rate of 10% per week until sold. The initial price of one item is $50.
1. Write an exponential decay function to model the price of the item after $x$ weeks.
2. What will the price be after the item has been on display for 5 weeks?
3. After how many weeks will the item be half its original price?
|
{"url":"http://www.ck12.org/algebra/Exponential-Decay/lesson/Exponential-Decay-Function/","timestamp":"2014-04-17T01:34:03Z","content_type":null,"content_length":"127857","record_id":"<urn:uuid:ff5d2afd-9dab-41bc-868a-2dc2a1e901fa>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00553-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Ars Mathematica
Grete Hermann
This thread about famous women mathematicians on Cocktail Party Physics reminded me of an interesting figure in history that I came across while doing research for a Wikipedia article: Grete Hermann. (The Wikipedia article is a skeleton that I created; it could use a lot of work.)
Hermann was a student of Emmy Noether. Noether was one of the iconic figures of twentieth-century mathematics, a key figure in the century’s trend toward abstraction. A typical example is her proof
of the Lasker-Noether theorem. The theorem, that every ideal has a primary decomposition, was originally proven for polynomial rings by Emanuel Lasker, using a difficult computational argument.
Noether identified the key abstract condition behind the result — the ascending chain condition on ideals — and used it to give a shorter proof of a much more general theorem. Rings that satisfy the
ascending chain condition on ideals are now known as Noetherian rings in her honor.
While Hermann was Noether’s student, her thesis was a throwback to the nineteenth century’s computational approach. Hermann showed that Lasker’s approach could be turned into an effective procedure
for computing primary decompositions. Hermann did this before the invention of the computer, or even before the notion of an effective procedure had been formalized. (As her definition, Hermann used
the existence of an explicit upper bound on time complexity, and gave such a bound for primary decomposition, and other questions in commutative ring theory.)
Hermann went on to work in philosophy and the foundations of physics. John Von Neumann had proposed a proof that a hidden variable theory of quantum mechanics could not exist. (A hidden variable
theory is one that explains the random behavior of quantum mechanical systems in terms of unobserved deterministic variables.) Hermann discovered and published the flaw in Von Neumann’s proof back in
1935, a result that had no impact until it was rediscovered by John Bell some thirty years later.
(The thread on Cocktail Party Physics is instructive for just how unfamous mathematicians really are. For physicists, Karl Weierstrass is an obscure historical figure. For mathematicians of course, Weierstrass is five times as famous as Madonna and Britney Spears combined. It was interesting to learn that Sofia Kovalevskaya is not particularly well-known among physicists, even though part of her research was in classical mechanics.)
|
{"url":"http://www.arsmathematica.net/2006/08/09/grete-hermann/","timestamp":"2014-04-21T02:00:23Z","content_type":null,"content_length":"11967","record_id":"<urn:uuid:7956b38d-8887-4df6-a065-2e14fc7e33b1>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00060-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Vernon, CA SAT Math Tutor
Find a Vernon, CA SAT Math Tutor
...My concentration was to work on written and oral communication skills, time management, and other executive function skills. In all I worked with over one hundred students, some for several
years in a row. Here in the L.A. area I have several students in the elementary age group.
72 Subjects: including SAT math, English, writing, reading
...High and High School Volleyball. I was on the Varsity team in High School. I also lived on the beach in Newport Beach and often spent summer playing beach volleyball with friends once or twice
a week.
47 Subjects: including SAT math, reading, English, writing
...In Precalculus, the student typically needs to everything about trigonometry (I love teaching trigonometry). Trig can be hard for some students, but I explain the concepts in a simple way and
demonstrate the Unit Circle. Many teachers don't explain in detail how the Unit Circle works with trig ...
20 Subjects: including SAT math, reading, Spanish, geometry
...I was responsible for proofreading legal manuscripts written by attorneys. These manuscripts were subsequently published. I have completed coursework in mathematics from arithmetic to
multivariable calculus.
40 Subjects: including SAT math, reading, English, writing
...I believe that building a good foundation is the key to success in any subject. I have come up with many different tricks for helping students remember key ideas in math and the biggest
compliment I received was when one client told me she was showing her class my "wedding cake" trick (one that ...
11 Subjects: including SAT math, physics, geometry, algebra 1
Related Vernon, CA Tutors
Vernon, CA Accounting Tutors
Vernon, CA ACT Tutors
Vernon, CA Algebra Tutors
Vernon, CA Algebra 2 Tutors
Vernon, CA Calculus Tutors
Vernon, CA Geometry Tutors
Vernon, CA Math Tutors
Vernon, CA Prealgebra Tutors
Vernon, CA Precalculus Tutors
Vernon, CA SAT Tutors
Vernon, CA SAT Math Tutors
Vernon, CA Science Tutors
Vernon, CA Statistics Tutors
Vernon, CA Trigonometry Tutors
Nearby Cities With SAT math Tutor
Bell Gardens SAT math Tutors
Bell, CA SAT math Tutors
Bradbury, CA SAT math Tutors
Commerce, CA SAT math Tutors
Cudahy, CA SAT math Tutors
Dockweiler, CA SAT math Tutors
Hazard, CA SAT math Tutors
Huntington Park SAT math Tutors
Los Angeles SAT math Tutors
Maywood, CA SAT math Tutors
Rossmoor, CA SAT math Tutors
San Marin, CA SAT math Tutors
South Gate SAT math Tutors
Sunland SAT math Tutors
Universal City, CA SAT math Tutors
|
{"url":"http://www.purplemath.com/Vernon_CA_SAT_Math_tutors.php","timestamp":"2014-04-16T13:19:38Z","content_type":null,"content_length":"23809","record_id":"<urn:uuid:17678a55-08fe-4038-b99d-4a8da8a1e008>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00073-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Fast Fourier Transform (FFT) (Part 1) - Arduinoos
Fast Fourier Transform (FFT) (Part 1)
Part 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11
Important notice:
Due to the huge popularity of the PlainFFT library along with the complementary PlainADC (data acquisition) library, I merged both libraries into the PlainDSP library for the sake of efficiency
and comfort of use. All the principles described in early posts relating to the Fast Fourier Transform, and to data acquisition, remain valid.
Conceptually speaking, FFT is pretty simple. Practically, FFT is tough! So I will not go into all the details of this algorithm; I will stay focused on its use and applications. This series of
posts will end up with a fully featured library that you will be able to use for all sorts of applications, without bothering about the maths.
The plot on an oscilloscope shows a wave made of samples, each characterized by its intensity (ordinate) and its sampling time (abscissa). We are in the time domain.
The simpler the signal (e.g. a single sine wave), the simpler the characterization. The Fourier Transform proposes to decompose any signal into a sum of sines and cosines. To each data point of the FFT
power spectrum corresponds a magnitude (ordinate) and a frequency (abscissa). We are now in the frequency domain.
The more complex the signal (e.g. signal + harmonics + noise), the more complex the characterization.
And guess what? The FFT algorithm can be executed in reverse mode, so that starting from an FFT you can rebuild a real-life signal. We may study later how this property can be used to denoise signals.
Check the following links for a global explanation, and more detailed information related to the algorithm itself. There are many papers dedicated to FFT, and you may also like to learn by doing,
using this applet, or this very nice one.
FFT applies to vectors containing n samples, where n must be a power of 2. Executing the algorithm on raw data may lead to unexpected results, because the vector truncates the signal
and may contain incompletely described waves (e.g. low-frequency waves). This side effect may be corrected by weighting the signal, giving less importance to the leading and trailing data.
This is the windowing function (a minimal code sketch follows the list below). The weighting function used in the windowing depends upon the type of signal to be analysed:
• Transients whose duration is shorter than the length of the window : Rectangular (Box car)
• Transients whose duration is longer than the length of the window : Exponential, Hann
• General-purpose applications : Hann
• Spectral analysis (frequency-response measurements) : Hann (for random excitation), Rectangular (for pseudorandom excitation)
• Separation of two tones with frequencies very close to each other but with widely differing amplitudes : Kaiser-Bessel
• Separation of two tones with frequencies very close to each other but with almost equal amplitudes : Rectangular
• Accurate single-tone amplitude measurements : Flat top
• Sine wave or combination of sine waves : Hann
• Sine wave and amplitude accuracy is important : Flat top
• Narrowband random signal (vibration data) : Hann
• Broadband random (white noise) : Uniform
• Closely spaced sine waves : Uniform, Hamming
• Excitation signals (hammer blow) : Force
• Response signals : Exponential
• Unknown content : Hann
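Here is a minimal sketch of how one of these windows (Hann) could be applied to a sample buffer before running the FFT. This is illustrative C/C++, not the PlainFFT API: the function name and buffer layout are my own assumptions.

#include <math.h>

// Hann weighting: zero at both ends of the record, one in the middle,
// which tapers the truncation effect at the window boundaries.
void applyHannWindow(double *data, int n) {
  for (int i = 0; i < n; i++) {
    double w = 0.5 * (1.0 - cos(2.0 * M_PI * i / (n - 1)));
    data[i] *= w;
  }
}

The other window types differ only in the weighting expression inside the loop.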
Once the windowing is executed, you can run the FFT. The result of this algorithm lies in two vectors containing the real and imaginary computed values. You then need to apply some math to convert
them into a vector of intensities.
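That math is just the complex modulus, computed bin by bin. A sketch (again with illustrative names, not necessarily the library's own):

#include <math.h>

// Collapse the real/imaginary FFT output into magnitudes,
// storing the result back into vReal.
void complexToMagnitude(double *vReal, double *vImag, int n) {
  for (int i = 0; i < n; i++) {
    vReal[i] = sqrt(vReal[i] * vReal[i] + vImag[i] * vImag[i]);
  }
}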
[Plots omitted: window type Rectangle (box car), window type Hamming, window type Flat top]
All plots exported from my DSP tool box “Panorama”
Links of interest
20 Comments
1. Howdy
Thanks for that clear and simple explanation, I was never entirely sure about FFT until reading this. Thank you very much
□ You are welcome. I hope you will like next posts too!
2. Thank you for sharing your knowledge and for clear article.
May I ask a dumb question? What may be use of FFT except noise filtering?
May it be used, for example, for recognition of sound, like cellular telephone ring?
□ Sound analysis is probably one of the most exciting applications for FFT: from filtering up to denoising. Detecting a ring bell in a noisy environment is definitely a good example, as is
catching one frequency spectrum (fundamentals and harmonics) from an accelerometer signal.
Any other ideas?
3. I am still wondering if recognition of a particular cell phone ring is possible within the memory limitations of the Arduino? Can you suggest any grounding theoretical article about recognizing a
3-4 second harmony sound via FFT?
Thank you in advance.
4. Picking one specific frequency is not so hard. Recognizing a harmony (so to say, a sequence of frequencies) may not be harder, except that, as you mentioned, we may lack some memory space.
This is definitely a nice challenging project!
5. Thank you.
6. Great article!
In your opinion, is it possible to make a good estimation of distance based on sound? My idea was to make a base station transmit a certain frequency that’s inaudible to the human ear, have
sensors in different places filter out this frequency and get its amplitude, and then send this "Received Sound Strength" back to a computer to calculate their position (in the same way RSS is
used for localization with WiFi). The more speaker stations there are, the better the estimation of course.
7. Wind and noise (not to mention air density changes) may affect the strength and cleanness of the audio signal in between the emitter and receptor. On the other hand, if you plan to use your
system in a closed and large enough room, this may work. Emitting at least three different frequencies may also help with positioning the receiver… And filtering them may prevent sound mix (you may
try FIR filtering instead of an FFT).
Why not finally…
Does it help?
8. Hello, man i wanna the library PlainFFT to work with your code, can you send it to my mail. Thanks
□ Hi,
Nope, I won’t! Please check this thread http://goo.gl/qIMQW, no exceptions.
9. Hi Didier,
Really nice work appreciation is implicit :).
My field is computer science so not good in electronics and electrical concepts. I understood FFT algorithm.
We only require N (no. of points) to execute this algorithm, but my doubt is: why are we passing samplingFrequency to the functions?
10. Thanks for your comment,
So far, these are the functions which require the sampling frequency argument:
void MajorPeak(double *vData, uint16_t samples, double samplingFrequency, struct strPeakProperties *result);
void MajorPeak(double *vData, uint16_t samples, double samplingFrequency, double loFrequency, double upFrequency, struct strPeakProperties *result);
void Normalize(double *vData, uint16_t samples, double normalizingValue);
void TargetPeak(double *vData, uint16_t samples, double samplingFrequency, double targetPosition, double tolerance, struct strPeakProperties *result);
Each of them returns time-domain information, thus the need for the sampling frequency value.
None of the other functions requires this argument. Your understanding is fine, but your question is ambiguous…
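The underlying relation is the standard DFT one, independent of any particular library: bin i of an N-point FFT over data sampled at Fs sits at frequency i * Fs / N (for i up to the Nyquist bin N/2). As a sketch in the style of the signatures above (the function name is my own):

#include <stdint.h>

// Convert an FFT bin index into a frequency in Hz.
double binFrequency(uint16_t i, double samplingFrequency, uint16_t samples) {
  return (i * samplingFrequency) / samples;
}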
11. Thanks Didier for your frequent reply,
Let me put it in right manner. Suppose you are getting some voice analog sample. You are discretizing it using sampling frequency am i right? Then out of those discrete samples you are
considering N points to apply FFT on it.
Considering your microLS project. You are considering sampling frequency = 16000Hz and no. of samples 64. That means it is considering 64 points for applying FFT right? My question is out of
large no. of sampled points which points it gonna use to apply FFT? My understanding is correct?
12. Hi,
I am quite new to Adruino and I would like to know how to get out from approx 1024 samples (Voltage levels), Sampled at Fs=44.1 kHz = 20 ms of sample time a Spectrum.
I have used before the Soundcard of my PC , Mic-In (16 bit,44100 Hz) and did the postprocess in MATLAB.
Do you have any sketch ready? I would like to see it in the Monitor
Many Thanks in advance
□ 1024 samples
If you are using an Arduino UNO, like most users, you will have to save one-byte data (-127 to 127 counts) in order to fit in the 2k RAM. You may also decide to compress your data as described
in Arduinoos posts. I did not maintain this option as the result depends very much on the signal shape.
No big deal with PlainADC, which can be pushed to 130 kHz when recording 2-byte integers and to 80 kHz when recording 32-bit floats.
PlainADC does the job; on the other hand, as you are new to Arduino, you may follow the latest series of posts dedicated to PlainDSP, which is a combo of the PlainADC and PlainFFT libraries.
13. Hi. I’m planning on creating a target tracking device based on a beacon attached to the target and measurement of signal strength on 4 different microphones (tdoa is thought to involve a higher
cost in terms of components both in the beacon and in the receiver, and I don’t need great accuracy). Is the arduino capable of performing FFT on 4 different channels simultaneously?
□ Not simultaneously, unless you have 4 Arduinos! Running 4 FFTs sequentially may take time, and depending upon the speed of your target, you may lose it in the blue sky.
I suggest that you look at the FIR, which is way faster and should fit your needs as far as I understand them.
14. Hello Didier
First, thank you for all the information and code you put up on this website. I read them and it was helpful learning that I can implement FFT algorithm with Arduino board.
I’m an engineering student who is trying to use Arduino board for my degree project. I’ve been using MATLAB to use FFT algorithm and filter the sound I have. Now, implementing the data to real
product is the problem. I was planning to use Arduino board, but I don’t know which board I should buy. I’m trying to receive sound as an input source through microphone sensors and filter it so
data will have certain frequencies, and send the output signal to vibration motors and LED light panel. Do you think I’ll be able to program these with Arduino board? There is no one who can help
me with this in my school!!!!
Please help me!
□ Keep in close touch with arduinoos, something awesome for you is coming soon
You must be logged in to post a comment.
|
{"url":"http://www.arduinoos.com/2010/10/fast-fourier-transform-fft/","timestamp":"2014-04-19T19:33:54Z","content_type":null,"content_length":"62597","record_id":"<urn:uuid:a3eeedd6-09e8-4478-ab08-bf428f69b022>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00316-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Number of results: 667
micro economics
Define and then derive the expression for the MRTS. How do you derive this? I thought derive meant to receive or take something. Thanks.
Tuesday, June 23, 2009 at 11:58am by anonymous
physics, error calculation repost
Hi Bob, I have to derive the error in the wavelength. I only know how to derive error equations from multiplication or addition/subtraction. For example, if I needed to derive an arror equation for N
=MT it would be deltaN=(deltaM+deltaT)N So how do I do it for a division ...
Tuesday, March 8, 2011 at 6:56pm by Trace
university physics
Derive absolute error in equations: 1)Derive absolute error in (e/m) e/m= 2V/B^2*R^2 2)Derive absolute error in B B=8uN(Inet)/square root 125a
Wednesday, November 21, 2012 at 2:45pm by Ella
To maintain the effectiveness of a buffer, relative concentrations of the acid and the conjugate base should not differ by a factor of 10. Based on this information, derive pH range within which the
buffer can work effectively. I know that I have to use the HH equation, but I ...
Tuesday, April 16, 2013 at 11:52am by Jamie
Derive the empirical formula of a hydrocarbon that on analysis gave the following percentage by mass composition: C = 85.63 % and H = 14.37 %. If the molecular mass of this compound is 56 g, derive
the molecular formula.
Thursday, May 30, 2013 at 2:07am by Hanin
Derive the identity cot^2 A + 1 = csc^2 A. To derive this identity, does it mean to change the cot^2 A + 1 into csc^2 A ??
Sunday, January 30, 2011 at 3:09pm by Anonymous
Micro econ
1: Suppose John had a utility function of U=X^2/3Y^1/3 . Derive Johns demand function from his utility function showing all the necessary steps. i know that the MUx=MUy and first i derive the
equation to 2/3X^-1/3 1/3Y^-2/3 then im stuck i need help with the simplification
Tuesday, September 18, 2007 at 11:39pm by Brett
Change OF 75 cents or change from a dollar (25 cents)? In either case, I suggest you make a list instead of trying to derive a formula, or plugging into one we derive for you. When you start making
the list, perhaps a formula to use will become obvious.
Monday, January 25, 2010 at 1:02pm by drwls
Pretty silly question. The official mass of a major league baseball is 0.145 kg. No pitcher has ever thrown a baseball 200 km/h. Not even Sandy Koufax. You can derive a baseballmass from the numbers
you were given by equating the kinetic energy of the baseball to the work done...
Friday, November 27, 2009 at 10:33pm by drwls
No ... and there's no such word as "in-bedded" either. http://www.answers.com/topic/derive
Tuesday, October 30, 2012 at 10:18pm by Writeacher
Physics, error derivation
I need to derive an error equation for Bohr's model to use in my physics lab this week. I am really bad at calculus, so if anyone can help me that would be really great. the equation is (1/lambda)=R
[(1/n^2final)-(1/n^2initial)] I've never had to derive anything like this ...
Tuesday, March 8, 2011 at 12:24am by trace
physics, error calculation repost
I need to derive an error equation for Bohr's model to use in my physics lab this week. I am really bad at calculus, so if anyone can help me that would be really great. the equation is (1/lambda)=R
[(1/n^2final)-(1/n^2initial)] I've never had to derive anything like this ...
Tuesday, March 8, 2011 at 6:56pm by Trace
Google is a wonderful place to start! http://www.google.com/search?q=how+to+derive+marginal+utility+of+income&oq=how+to+derive+marginal+utility+of+income&aqs=chrome..69i57j0.3163j0j7&sourceid=chrome&
Wednesday, April 2, 2014 at 8:05am by Writeacher
A ball is thrown vertically upwards, from ground level, with an initial speed vo. Assume that air resistance is negligible so that the acceleration of the ball is due solely to gravity. a) Derive an
algebraic formula for the time t that it takes for the ball to reach its ...
Wednesday, September 1, 2010 at 3:52pm by Ashleyy
Ive been stuck on this forever now if someone could please walk me through it i was appreciate it: 1: Suppose John had a utility function of U=X^2/3Y^1/3 . Derive Johns demand function from his
utility function showing all the necessary steps. i know that the MUx=MUy and first...
Tuesday, September 18, 2007 at 7:58pm by Brett
The kinetic theory of gases is based on a number of postulates from which the equation P = (1/3)(N/V) m <c^2> is derived (P is the pressure of the gas, N the number of molecules in the container, m the mass of
each molecule and <c^2> is the mean square speed). State all the postulates and...
Wednesday, January 1, 2014 at 9:20pm by Taylor
The determination of 4.000 mmol of ZnCl2 is performed gravimetrically by the addition of NaCN solution in a Total volume of 200.00ml. Ksp = 3.0X10^-16 for Zn(CN)2. State and verify all assumption.
Assume cyanide does not hydrolyze. a. Derive the COC and COM equations for the ...
Saturday, December 22, 2012 at 5:01pm by WILL
AP Physics
I'm trying to derive the formula v^2 = v0^2 + 2a(x-x0) were zeros are subscripts my book tells me to derive it this way use the definition of average velocity to derive a formula for x use the
formula for average velocity when constant acceleration is assumed to derive a ...
Thursday, June 18, 2009 at 6:44pm by AP Physics
Suppose that the following equations describe an economy (C, I, G, T, and Y are measured in billions of dollars and r is measured in percent; for example, r = 10 means r = 10%): C=170+0.6(Y-T), T=200, I=100-4r, G=350
(M/P)d=L=0.75Y-6r, (M/P)s=735 a. Derive the equation for the IS curve (Hint: ...
Monday, August 16, 2010 at 11:07pm by yaw
Physical Science
The study of heat and its relation to fluid properties, and the performance of work upon and by fluids, is called thermodynamics. The subject can be taught using two or three laws of thermodynamics,
without dealing with kinetic theory. The kinetic theory of matter can be used ...
Tuesday, April 27, 2010 at 1:30pm by drwls
Physics PLEASE
I'm trying to derive the formula v^2 = v0^2 + 2a(x-x0) were zeros are subscripts my book tells me to derive it this way use the definition of average velocity to derive a formula for x use the
formula for average velocity when constant acceleration is assumed to derive a ...
Friday, June 19, 2009 at 10:39am by Physics PLEASE
Posted by Aletha on Sunday, March 18, 2007 at 6:51pm. After t hours of an 8-hour trip the distance a car travels is modeled by: D(t)= 10t + (5)/(1+t) - 5 where D(t) is measured in meters. a) derive a
formula for the velocity of the car. b) how fast is the car moving at 6 hours...
Sunday, March 18, 2007 at 7:54pm by Aletha
PHYSiCS --
Where did you derive 0.2387 ?
Sunday, November 21, 2010 at 8:01pm by Reema
Derive(its a verb)
Wednesday, December 15, 2010 at 9:36pm by alley
how do you derive deltaG
Tuesday, May 28, 2013 at 8:51am by Anonymous
help plz
There is a difference between cognate languages and cognate words. When cognate words (that are spelled the same or similarly) mean the same thing in different languages, it can be because they
derive from a common parent language (e.g. Latin or old German) OR because one ...
Saturday, January 30, 2010 at 1:15am by drwls
college chem
Thank you but I am not sure what you mean by derive it
Friday, February 1, 2013 at 12:09am by k
can you explain how you derive that equation?
Saturday, March 9, 2013 at 9:32pm by Eduardo
Derive: F(x)= sqrt(3, 1 + tan(x))
Wednesday, May 1, 2013 at 10:47am by Mason
After t hours of an 8-hour trip the distance a car travels is modeled by: D(t)= 10t + (5)/(1+t) - 5 where D(t) is measured in meters. a) derive a formula for the velocity of the car. b) how fast is
the car moving at 6 hours? c) derive a formula for the car's acceleration. I'm ...
Sunday, March 18, 2007 at 8:40pm by Aletha
Physics question - Damon please help!!
I am doing the monkey and hunter question, and I need to find out what the minimum speed the bullet has to be.. The variables I have are g, Vo, theta, Dx(horizontal distance from monkey), and Dy
(vertical distance from monkey). I need to derive Vo in the form of (Dx/cos0)(sqrt...
Monday, November 11, 2013 at 12:44am by Jennifer
(2 points) Suppose that the following equations describe an economy (C, I, G, T, and Y are measured in billions of dollars and r is measured in percent; for example, r = 10 means r = 10%)
C=170+0.6(Y-T), T=200, I=100-4r, G=350...
Monday, August 16, 2010 at 10:59pm by kojo
Derive the identity cot^2 A + 1 = csc^2 A
Sunday, January 30, 2011 at 2:44am by Anonymous
Derive Gauss law in integral form.
Sunday, February 10, 2013 at 1:10am by mania
do monocytes derive from killer T cells?
Monday, May 6, 2013 at 10:29pm by Jess
how to derive marginal utility of income?
Wednesday, April 2, 2014 at 8:05am by geet
Economics Help pls
Suppose you are given the following production function, where y is output and K is capital: y = 60K + 20.3K^2 - K^3. 1.1 What is a production function, what is the real world application of such, and
where would you source the data to develop a production of the type given above...
Friday, July 16, 2010 at 3:16pm by Sarah
a) There is a velocity addition formula: (v1 + v2)/[1+v1 v2/c^2] that you can use. However, the point of learning relativity is understanding how to derive such formulas from first principles, not to
learn a list of formulas. So, I suggest you study the Lorentz transformations...
Sunday, May 24, 2009 at 6:24am by Count Iblis
Is this how you derive the formula for arc length?
Yes; that is one way.
Tuesday, December 4, 2007 at 2:44am by drwls
Can someone please tell me how to derive V=4/3pi(r)^3 to get x^2+y^2=r^2?
Sunday, March 23, 2008 at 1:31am by Jess
Science, Chemistry
I would like to know how to derive it, please?
Wednesday, December 3, 2008 at 2:33am by Tricia
Households derive income from ______.
Sunday, July 19, 2009 at 4:17am by steven Sanchez
What are the ways of how to derive words into noun?
Wednesday, November 9, 2011 at 6:30am by laniie
You need to derive or find the "rocket" formula for the velocity of an object that has a fixed force T applied while its mass is decreasing. Calculus is needed to derive it. You can find it at http:/
/www.ebtx.com/mars/rocketeq.htm You will need the rocket exhaust velocity, Ve...
Tuesday, October 7, 2008 at 10:40pm by drwls
derive all equation of motion mathametically & graphically.
Wednesday, June 25, 2008 at 11:59am by sangam
a bit challenging for me
where did u derive ur w w2 eqn frm?
Sunday, February 10, 2008 at 3:39pm by Physics_freak
British Lit
What are the 3 languages that derive from the original Celtic language?
Thursday, October 4, 2012 at 5:39pm by Monica
Calculus HELP!
Use the fact that L{f'}=sF-f(0) to derive the formula for L{cosh at}
Saturday, July 27, 2013 at 9:33pm by Sara
Define and then derive the expression for the marginal rate of technical substitution
Monday, April 6, 2009 at 11:16pm by Anonymous
hoe can i derive this equation. s(v)=(4pi)^1/3(3v)^2/3 and please explain.
Monday, September 20, 2010 at 9:30am by Joe
derive the formula v0 = sqrt(((deltaX)^2 * g) / (2 * deltaY))
Thursday, October 11, 2012 at 8:02pm by carlton
Thank you SO much sir. I feel a lot better knowing that I did this right
Thursday, March 7, 2013 at 11:38am by Mary
college math
Drwls, how did you derive the first step so that I can continue with the remainder of the steps?
Sunday, February 15, 2009 at 6:17pm by Ann
Please show me how to derive the relativist mass formula: m = m0 / sqrt(1 - v^2/c^2)?
Thursday, October 22, 2009 at 4:08pm by Marinho
7. If the subject is plural, the verb will be "derive." The rest are all possible. Sra
Sunday, February 20, 2011 at 12:28pm by SraJMcGin
How would you derive the formula {-Ff+Fgx=ma} to show that mu_s = tan(theta)?
Tuesday, March 19, 2013 at 8:30pm by po
University, Physics
Derive the error in equation: T1/2 = 1n(2)/lambda solve for T
Monday, March 25, 2013 at 2:10pm by Kenna
heat transfer
Derive 3D general conduction equation for homogeneous material
Wednesday, July 24, 2013 at 8:54am by jigs
Property of logarithms
Using property of logarithms, how do I prove derivative of ln(kx) is 1/x First observe that ln(kx) = ln(k) + ln(x) then take derivatives. The ln(k) is simply a constant so it goes away. You could
also derive it as d/dx ln(kx) = 1/kx * k by the chain rule. To see that the ...
Friday, October 20, 2006 at 9:51pm by Jen
How do I derive the secant reduction formula? Am I asking this question wrong? Integrate: (sec x)^n dx
Thursday, November 8, 2007 at 4:08pm by mathstudent
AP Calculus
find f'(2) given g(2)=3, h(2)=-1, h'(2)=4, and g'(2)=-2 f(x)=g(x)h(x) I'm just not understanding how to start this at all and what it wants me to derive when I don't know the function
Monday, October 5, 2009 at 12:47am by Bri
calculus high school
isn't that the power rule? anyway, ow do you derive teh second term?
Thursday, April 8, 2010 at 7:11pm by k
use the principle of an incline plane and derive the coeffecient of stationary friction
Tuesday, June 28, 2011 at 6:43pm by merry light
college chem
The slope will be nRT. You will have to use the ideal gas law to derive this.
Friday, February 1, 2013 at 12:09am by Devron
heat transfer
what is utility of extended surface? derive governing differential equation for fins
Wednesday, July 24, 2013 at 8:32am by jigs
Consider the linear model yi = xiB + e = B1 + B2xi2 + ... + Bkxik + ei, i = 1, ..., n, or in matrix notation Y =XB + e. Consider the linear model X = Z pie + u where Z is a matrix n * m, X is a
matrix n * k and pie is a matrix m * k. Assume that 1. E (x'ixi) has full column ...
Saturday, August 31, 2013 at 4:24pm by Joy
Is this how you derive the formula for arc length?
in the third step, how did you integrate the right side with no delta-variable?
Tuesday, December 4, 2007 at 2:44am by Matt
Derive the expression for the difference in the free energy between a metal in its normal and superconductivity states
Thursday, July 31, 2008 at 12:34pm by vijaya
derive expression to show that how drag force is dependent on velocity of moving object in fluid
Friday, February 17, 2012 at 8:14am by momi
You don't need to derive the HH equation, only derive the pH range needed. pH = pKa + log(base)/(acid) The problem tells you that acid and conjugate base should not differ by more than a factor of
10. Therefore, use acid = 10*base as the lower end and 10*acid = base as the top...
Tuesday, April 16, 2013 at 11:53am by DrBob222
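(Worked out in LaTeX, following the reasoning above: \mathrm{pH} = \mathrm{p}K_a + \log\frac{[\text{base}]}{[\text{acid}]} with \frac{1}{10} \le \frac{[\text{base}]}{[\text{acid}]} \le 10, so the log term runs from -1 to +1 and the effective range is \mathrm{p}K_a - 1 \le \mathrm{pH} \le \mathrm{p}K_a + 1.)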
P*molar mass = density*RT The equation comes from PV = nRT. I can show you how to derive it if you are interested.
Tuesday, July 6, 2010 at 7:58pm by DrBob222
so for (a) you used Y = Yo + Vot+ 1/2at^2 so when you derive it, you get time for the other ones, got it. THanks a lot Reiny
Monday, August 6, 2012 at 9:14pm by Robert
12th grade
The Inuit of Nunavik would be one example, if they derive all of their food and clothing by hunting and fishing.
Saturday, September 4, 2010 at 11:01am by drwls
derive the equation of the parabolla with its vertex on the line 7x+3y-4=0 and containing ponts (3,-5) and (3/2,1) the axis being horizontal.
Tuesday, May 1, 2012 at 10:16am by intan
You will get ideas here: http://www.google.com/search?q=Households+derive+income+from+&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a Sra
Sunday, July 19, 2009 at 4:17am by SraJMcGin
You can derive the formula for the volume of a sphere by rotating the circle x^2 + y^2 = r^2 about the x axis by taking the integral of pi(r^2 - x^2)dx from -r to +r Is this what you meant?
Sunday, March 23, 2008 at 1:31am by Reiny
Well, theoretically speaking all other methods of solving this equation to derive r positive would come from this formula
Wednesday, August 6, 2008 at 2:23pm by Jacob Sudes
chem 101
using the unit conversion method derive an equation to calculate heat produced from the HCl-Na OH reaction
Tuesday, October 26, 2010 at 7:48pm by andrew
Take the simplest approach first. O + O must be 2 or 4, or 2O = T, without any carryover, c/o, from N + N. Therefore:
O E T N W
2 1 4 3 6
4 2 8 3 6
Now consider carryovers from E + E and/or N + N and see what you can derive.
Thursday, February 10, 2011 at 12:48am by tchrwill
Chem 101
Using the "Unit conversion method", derive an equation to calculate the heat required to raise the calorimeter to the maximum temperature.
Wednesday, October 26, 2011 at 9:57pm by John
Calculus. I need help!
im guessing you meant sin(x)^5. in which case yeah you did it wrong. you derive (x)^k , k being a constant, like this k*(x)^(k-1)*x' so f'(x)=5sin(x)^4*cos(x)
Tuesday, February 28, 2012 at 2:00am by Bryant
Derive the identity 1 + cot ^2 theta = csc^2 theta by dividing x^2 + y^2 = r^2 by y^2
Thursday, September 1, 2011 at 10:27pm by Anonymous
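(Worked out, this is exactly the division the question describes: dividing x^2 + y^2 = r^2 by y^2 gives \frac{x^2}{y^2} + 1 = \frac{r^2}{y^2}, i.e. \cot^2\theta + 1 = \csc^2\theta, using \cot\theta = x/y and \csc\theta = r/y.)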
I am sorry i still dont know how to start. and i am sorry for the problem earlier..it wasnt my fault :( Note that the way students like you are asked to solve this and similar problems is usually not
the way people like me solve such problems. It is a good exercise to plug in ...
Sunday, May 6, 2007 at 9:19pm by URGENT PLZ
MATH !!
Given that f:x -->3-x g:x --> x+2/x-5 derive the expression for gf(x)
Wednesday, September 7, 2011 at 9:02pm by Nelly
Given that f:x -->3-x g:x --> x+2/x-5 derive the expression for gf(x)
Wednesday, September 7, 2011 at 8:49pm by Nelly
Q-80= -10P Q-10P = 0 Derive the slop of both?
Saturday, October 5, 2013 at 10:10am by dino
You can derive this by setting centripetal force=magnetic force mv^2/r=qBv
Wednesday, April 9, 2008 at 4:34pm by bobpursley
Could you please clarify how did you derive the answer for (a) supply schedule = 500+ 50P? Thanks, SB
Wednesday, November 14, 2007 at 1:53am by SB
algebra 116
It is an equation with two variables. What are you supposed to do with it? Graph it? Derive an equation for x? Is the x in the denominator?
Saturday, December 5, 2009 at 2:05am by drwls
Can you please show me the steps: Derive the identity for sin 3x in terms of sin x.
Thursday, February 24, 2011 at 2:14am by anon
What is the oxidation number of carbon, and how do you derive this number?
Thursday, August 28, 2008 at 9:14pm by hunter
If you are trying to derive the equation for y, y(2 + 9x) = 4 y = 4/(2 + 9x)
Wednesday, January 7, 2009 at 1:06am by drwls
Derive the equation of the ecllipse with center (h,k)
Tuesday, March 5, 2013 at 2:17pm by anoynomous
There are 10 identical consumers whose demand is D: p = 20 - 10q. There are 10 identical firms, each firm's marginal cost is MC(q)= 5 + 5q. The market is competitive. a) derive the market demand
function b) derive the market supply function c) what is the market equilibrium ...
Monday, February 4, 2013 at 11:24am by Andrea
Most are phototrophs, but many are heterotrophs, meaning they derive energy from both sunlight (during the day) and consuming other organic matter (at night).
Tuesday, April 8, 2008 at 11:35pm by drwls
Slippage happens when the tangent of the slope angle exceeds the static coefficient of friction. Use (or derive) that fact to figure out the answer
Saturday, December 13, 2008 at 2:39pm by drwls
bosnian, i already used wolfram alpha to get the solutions I listed. I want to derive the solution analytically. anon, I believe you just did algebraic rearrangement.
Thursday, June 16, 2011 at 2:50pm by Sean
Square Root Help!!! Urgent:)
Thanks SO much!! :) Helped a lot, from the answers, I could understand how to derive them! Thanks again! -Angellove:)
Monday, November 3, 2008 at 7:11pm by Angellove
|
{"url":"http://www.jiskha.com/search/index.cgi?query=Derive","timestamp":"2014-04-19T04:56:22Z","content_type":null,"content_length":"33767","record_id":"<urn:uuid:a84ec2dc-61e4-4bce-a957-3b4f590919bd>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00493-ip-10-147-4-33.ec2.internal.warc.gz"}
|
want to find pi... almost there
Yes, I see that if the user enters 1000, it works. Does that suggest a change to the code? I don't see the relevance.
Iterative approximation algorithms are expected to be relatively inaccurate when they are only allowed to iterate a small number of times. I don't know what kind of answer you're expecting. "Use a
different algorithm"?
I know what you mean. It could be that I misunderstood the goal of this assignment (it's for school). "Use a different algorithm" is probably good advice, since understanding the problem is the first
step to good programming. I suppose. Thank you.
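For reference, here is a minimal sketch of the kind of slow convergence under discussion, assuming the assignment uses a Leibniz-style series (the original poster's code is not shown here):

#include <iostream>
#include <iomanip>

int main() {
    long n = 0;
    std::cout << "Number of terms: ";
    std::cin >> n;

    // Leibniz series: pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
    double sum = 0.0;
    for (long k = 0; k < n; ++k) {
        double term = 1.0 / (2 * k + 1);
        sum += (k % 2 == 0) ? term : -term;
    }
    std::cout << std::setprecision(12) << 4.0 * sum << '\n';
    // n = 1000 gives about 3.1406: the error shrinks only roughly as 1/n,
    // so small iteration counts necessarily look inaccurate.
    return 0;
}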
Topic archived. No new replies allowed.
|
{"url":"http://www.cplusplus.com/forum/beginner/83330/","timestamp":"2014-04-17T22:21:39Z","content_type":null,"content_length":"11895","record_id":"<urn:uuid:81bf69f8-6816-4cb3-a886-11c9876cbb47>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00108-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Bainbridge Island Precalculus Tutor
Find a Bainbridge Island Precalculus Tutor
...With this method, I've helped many students improve their math grades. Trigonometry could be challenging for some people, so I always try to make it easy to understand and fun to learn. I've
helped many students improve their grades in trigonometry.
13 Subjects: including precalculus, geometry, Chinese, algebra 1
...I have significant experience helping students write and revise papers. I have taught all subjects of the SAT for over 10 years. For the math section, I focus on test strategy interspersed with
content review.
32 Subjects: including precalculus, English, reading, writing
...I can help students understand the concepts behind specific problems and how those concepts fit into the big picture. And perhaps most importantly of all, I love mathematics, and I have always
enjoyed helping others learn to love it, too! I did quite well in all my middle and high school math classes, including pre-algebra.
35 Subjects: including precalculus, English, reading, writing
...Personally scored 800 on both SAT Math & SAT Math II & 787 in Chemistry prior to attending CalTech. Have extensive IT industry experience and have been actively tutoring for 2 years. I excel in
helping people learn to compute fast with or without calculators, and prepare for standardized tests.
43 Subjects: including precalculus, chemistry, physics, calculus
...I enjoy the field of chemistry as well as teaching, and have been tutoring for some time. While the vagaries of chemistry can be vexing, I find that once the basic patterns of chemical
behavior have been acquired, it is more comprehensible. I strive to assist students in understanding the 'big picture' as well as the specifics of the material at hand.
12 Subjects: including precalculus, chemistry, geometry, ASVAB
|
{"url":"http://www.purplemath.com/bainbridge_island_precalculus_tutors.php","timestamp":"2014-04-18T19:07:29Z","content_type":null,"content_length":"24512","record_id":"<urn:uuid:81ac4851-efe1-4f71-8b0e-16169a792869>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00281-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Let's take another look at the example of the array section in figure 4.2. We can capture this section in a named variable as follows
Now, what are the ranges of c--the objects returned by the rng() inquiry applied to c?
In fact they are a different sort of range from any considered so far--they are subranges. For completeness the HPJava language provides a special syntax for constructing subranges directly. Ranges
equivalent to those of c can be created by
This syntax should look quite natural. It is modelled on the syntax for multiarray sections themselves.
The global indices associated with the subrange are not inherited from the parent.
A non-trivial subrange is one for which the lower bound is not equal to zero, or the upper bound is not equal to the extent of the parent range minus one.
A non-trivial subrange is never considered to have ghost extensions, even if its parent range does. This avoids various ambiguities that might otherwise crop up.
That covers the distributed ranges of sections. What about the distribution groups of sections? Now triplet subscripts don't cause problems--the distribution group of c above can be defined to be the
same as the distribution group of the parent distributed array a. But the example of figure 4.1 is problematic. This was constructed using a scalar subscript, effectively as follows:
The single range of b is clearly y, but identifying the distribution group of b with that of a doesn't seem to be right. If a one dimensional array is newly constructed with range y and distribution
group p, like this:
it is understood to be replicated over the first dimension of p. The section b clearly isn't replicated in this way. Where does the information that b is localized to the top row of processes go?
Bryan Carpenter 2003-04-15
|
{"url":"http://www.hpjava.org/papers/HPJava/HPJava/node35.html","timestamp":"2014-04-19T17:01:29Z","content_type":null,"content_length":"8951","record_id":"<urn:uuid:c1db06da-23ac-4a23-ad99-8938f369218c>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00164-ip-10-147-4-33.ec2.internal.warc.gz"}
|
LAPACK block factorization algorithms on the Intel iPSC/860
Results 1 - 10 of 21
- Society for Industrial and Applied Mathematics , 1997
"... We survey general techniques and open problems in numerical linear algebra on parallel architectures. We rst discuss basic principles of parallel processing, describing the costs of basic
operations on parallel machines, including general principles for constructing e cient algorithms. We illustrate ..."
Cited by 532 (26 self)
We survey general techniques and open problems in numerical linear algebra on parallel architectures. We first discuss basic principles of parallel processing, describing the costs of basic operations
on parallel machines, including general principles for constructing efficient algorithms. We illustrate these principles using current architectures and software systems, and by showing how one would
implement matrix multiplication. Then, we present direct and iterative algorithms for solving linear systems of equations, linear least squares problems, the symmetric eigenvalue problem, the
nonsymmetric eigenvalue problem, and the singular value decomposition. We consider dense, band and sparse matrices.
, 1992
"... This paper describes ScaLAPACK, a distributed memory version of the LAPACK software package for dense and banded matrix computations. Key design features are the use of distributed versions of
the Level LAS as building blocks, and an ob ect-based interface to the library routines. The square block s ..."
Cited by 161 (33 self)
This paper describes ScaLAPACK, a distributed memory version of the LAPACK software package for dense and banded matrix computations. Key design features are the use of distributed versions of the
Level 3 BLAS as building blocks, and an object-based interface to the library routines. The square block scattered decomposition is described. The implementation of a distributed memory version of the
right-looking LU factorization algorithm on the Intel Delta multicomputer is discussed, and performance results are presented that demonstrated the scalability of the algorithm.
- SIAM REVIEW , 1995
"... This paper discusses the design of linear algebra libraries for high performance computers. Particular emphasis is placed on the development of scalable algorithms for MIMD distributed memory
concurrent computers. A brief description of the EISPACK, LINPACK, and LAPACK libraries is given, followed b ..."
Cited by 68 (17 self)
This paper discusses the design of linear algebra libraries for high performance computers. Particular emphasis is placed on the development of scalable algorithms for MIMD distributed memory
concurrent computers. A brief description of the EISPACK, LINPACK, and LAPACK libraries is given, followed by an outline of ScaLAPACK, which is a distributed memory version of LAPACK currently under
development. The importance of block-partitioned algorithms in reducing the frequency of data movement between different levels of hierarchical memory is stressed. The use of such algorithms helps
reduce the message startup costs on distributed memory concurrent computers. Other key ideas in our approach are the use of distributed versions of the Level 3 Basic Linear Algebra Subprograms (BLAS)
as computational building blocks, and the use of Basic Linear Algebra Communication Subprograms (BLACS) as communication building blocks. Together the distributed BLAS and the BLACS can be used to
construct highe...
, 1995
"... This paper discusses issues in the design of ScaLAPACK, a software library for performing dense linear algebra computations on distributed memory concurrent computers. These issues are
illustrated using the ScaLAPACK routines for reducing matrices to Hessenberg, tridiagonal, and bidiagonal forms. ..."
Cited by 34 (5 self)
This paper discusses issues in the design of ScaLAPACK, a software library for performing dense linear algebra computations on distributed memory concurrent computers. These issues are illustrated
using the ScaLAPACK routines for reducing matrices to Hessenberg, tridiagonal, and bidiagonal forms. These routines are important in the solution of eigenproblems. The paper focuses on how building
blocks are used to create higher-level library routines. Results are presented that demonstrate the scalability of the reduction routines. The most commonly-used building blocks used in ScaLAPACK are
the sequential BLAS, the Parallel BLAS (PBLAS) and the Basic Linear Algebra Communication Subprograms (BLACS). Each of the matrix reduction algorithms consists of a series of steps in each of which
one block column (or panel), and/or block row, of the matrix is reduced, followed by an update of the portion of the matrix that has not been factorized so far. This latter phase is performed usin...
, 1993
"... We describe the design of ScaLAPACK++, an object oriented C++ library for implementing linear algebra computations on distributed memory multicomputers. This package, when complete, will support
distributed matrix operations for symmetric, positive-definite, and non-symmetric cases. In ScaLAPACK++ w ..."
Cited by 26 (10 self)
We describe the design of ScaLAPACK++, an object oriented C++ library for implementing linear algebra computations on distributed memory multicomputers. This package, when complete, will support
distributed matrix operations for symmetric, positive-definite, and non-symmetric cases. In ScaLAPACK++ we have employed object oriented design methods to enhance scalability, portability,
flexibility, and ease-of-use. We illustrate some of these points by describing the implementation of basic algorithms and comment on tradeoffs between elegance, generality, and performance.
, 1994
"... This paper discusses the core factorization routines included in the ScaLAPACK library. These routines allow the factorization and solution of a dense system of linear equations via LU, QR, and
Cholesky. They are implemented using a block cyclic data distribution, and are built using de facto standa ..."
Cited by 24 (11 self)
This paper discusses the core factorization routines included in the ScaLAPACK library. These routines allow the factorization and solution of a dense system of linear equations via LU, QR, and
Cholesky. They are implemented using a block cyclic data distribution, and are built using de facto standard kernels for matrix and vector operations (BLAS and its parallel counterpart PBLAS) and
message passing communication (BLACS). In implementing the ScaLAPACK routines, a major objective was to parallelize the corresponding sequential LAPACK using the BLAS, BLACS, and PBLAS as building
blocks, leading to straightforward parallel implementations without a significant loss in performance. We present the details of the implementation of the ScaLAPACK factorization routines, as well as
performance and scalability results on the Intel iPSC/860, Intel Touchstone Delta, and Intel Paragon systems.
- JOURNAL OF PARALLEL AND DISTRIBUTED COMPUTING , 1994
"... This paper discusses the scalability of Cholesky, LU, and QR factorization routines on MIMD distributed memory concurrent computers. These routines form part of the ScaLAPACK mathematical
software library that extends the widely-used LAPACK library to run efficiently on scalable concurrent computers ..."
Cited by 23 (12 self)
This paper discusses the scalability of Cholesky, LU, and QR factorization routines on MIMD distributed memory concurrent computers. These routines form part of the ScaLAPACK mathematical software
library that extends the widely-used LAPACK library to run efficiently on scalable concurrent computers. To ensure good scalability and performance, the ScaLAPACK routines are based on
block-partitioned algorithms that reduce the frequency of data movement between different levels of the memory hierarchy, and particularly between processors. The block cyclic data distribution, that
is used in all three factorization algorithms, is described. An outline of the sequential and parallel block-partitioned algorithms is given. Approximate models of algorithms' performance are
presented to indicate which factors in the design of the algorithm have an impact upon scalability. These models are compared with timings results on a 128-node Intel iPSC/860 hypercube. It is shown
that the routines are highl...
- Parallel Comput , 1995
"... In this paper, we present an algorithm for the reduction to block upper-Hessenberg form which can be used to solve the nonsymmetric eigenvalue problem on message-passing multicomputers. On such
multicomputers, a nonsymmetric matrix can be distributed across processing nodes con gured into a network ..."
Cited by 17 (5 self)
In this paper, we present an algorithm for the reduction to block upper-Hessenberg form which can be used to solve the nonsymmetric eigenvalue problem on message-passing multicomputers. On such
multicomputers, a nonsymmetric matrix can be distributed across processing nodes configured into a network of two-dimensional mesh processor array using a block-scattered decomposition. Based on the
matrix partitioning and mapping, the algorithm employs both Householder reflectors and Givens rotations within each reduction step. We analyze the arithmetic and communication complexities and
describe the implementation details of the algorithm on message-passing multicomputers. We discuss two different implementations (synchronous and asynchronous) and present performance results on the
Intel iPSC/860 and DELTA. We conclude with an evaluation of the algorithm's communication cost, and suggest areas for further improvement.
, 1991
"... this paper, we describe extensions to a proposed set of linear algebra communication routines for communicating and manipulating data structures that are distributed among the memories of a
distributed memory MIMD computer. In particular, recent experience shows that higher performance can be attain ..."
Cited by 16 (6 self)
this paper, we describe extensions to a proposed set of linear algebra communication routines for communicating and manipulating data structures that are distributed among the memories of a
distributed memory MIMD computer. In particular, recent experience shows that higher performance can be attained on such architectures when parallel dense matrix algorithms utilize a data
distribution that views the computational nodes as a logical two dimensional mesh. The motivation for the BLACS continues to be to increase portability, efficiency and modularity at a high level. The
audience of the BLACS are mathematical software experts and people with large scale scientific computation to perform. A systematic effort must be made to achieve a de facto standard for the BLACS.
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1653563","timestamp":"2014-04-18T12:14:37Z","content_type":null,"content_length":"37821","record_id":"<urn:uuid:7efa535d-795b-48f4-b490-410208666be4>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00088-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Currying versus partial application (with JavaScript code)
Currying and partial application are two ways of transforming a function into another function with a generally smaller arity. While they are often confused with each other, they work differently.
This post explains the details.
Currying takes a function
f: X × Y → R
and turns it into a function
f': X → (Y → R)
Instead of calling f with two arguments, we invoke f' with the first argument. The result is a function that we then call with the second argument to produce the result. Thus, if the uncurried f is
invoked as
f(3, 5)
then the curried f' is invoked as
f'(3)(5)
JavaScript example.
The following is the uncurried binary function add:
function add(x, y) {
    return x + y;
}
Calling it:
> add(3, 5)
8
The curried version of add looks as follows.
function addC(x) {
    return function (y) {
        return x + y;
    };
}
Calling it:
> addC(3)(5)
8
The algorithm for currying.
The function curry curries a given binary function. It has the signature
curry: (X × Y → R) → (X → (Y → R))
curry takes a binary function and returns a unary function that returns a unary function. Its JavaScript code is:
function curry(f) {
    return function(x) {
        return function(y) {
            return f(x, y);
        };
    };
}
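Calling it (a usage sketch in the same style as the earlier examples):

> curry(add)(3)(5)
8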
Partial application
Partial application takes a function
f: X × Y → R
and a fixed value x for the first argument to produce a new function
f': Y → R
f' does the same as f, but only has to fill in the second parameter which is why its arity is one less than the arity of f. One says that the first argument is bound to x.
JavaScript example. Binding the first argument of function add to 5 produces the function plus5. Compare their definitions to see that we have simply filled in the first argument.
function plus5(y) {
    return 5 + y;
}
The algorithm for partial application. The function partApply partially applies binary functions. It has the signature
partApply : ((X × Y → R) × X) → (Y → R)
partApply takes a binary function and a value and produces a unary function. Its JavaScript code is:
function partApply(f, x) {
    return function(y) {
        return f(x, y);
    };
}
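Calling it (a usage sketch consistent with the definition above):

> var plus5 = partApply(add, 5)
> plus5(10)
15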
General partial application in JavaScript.
JavaScript has the built-in method bind that works on functions with any arity and can bind an arbitrary amount of parameters. Its invocation has the following syntax.
func.bind(thisValue, [arg1], [arg2], ...)
It turns func into a new function whose implicit this parameter is thisValue and whose initial arguments are always as given. When one invokes the new function, the arguments of such an invocation are appended to what has already been provided via bind. MDN has more details.
> var plus5 = add.bind(null, 5)
> plus5(10)
15
Note that this does not matter for the (non-method) function add, which is why it is null in the example above.
Currying versus partial application
The difference between the two is:
• Currying always produces nested unary (1-ary) functions. The transformed function is still largely the same as the original.
• Partial application produces functions of arbitrary arity. The transformed function is different from the original – it needs less arguments.
Interestingly, with a curried function and curried invocation, it is easy to achieve the same effect as binding one argument (performing this operation several times yields the general case): To bind
the first argument to a value, you simply apply the outer-most of the nested functions to that value, which returns the desired new function.
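In code, using the addC from above (my example, not from the original text):

> var plus5ViaCurrying = addC(5)
> plus5ViaCurrying(10)
15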
Related reading
1. Partial application on Wikipedia [partial source of this post]
|
{"url":"http://www.2ality.com/2011/09/currying-vs-part-eval.html","timestamp":"2014-04-20T19:40:51Z","content_type":null,"content_length":"124086","record_id":"<urn:uuid:e619aa61-a829-4f3a-ad19-58b49793c4c5>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00569-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Mathematics of . . . Shuffling
A seemingly random arrangement of cards in a deck is sometimes nothing but an illusion. Persi Diaconis, a Stanford mathematician and practiced magician, can restore a deck of cards to its original
order with a series of perfect shuffles. The sleight of hand: Each time Diaconis cuts the cards, he interleaves exactly one card from the top half of the deck between each pair of cards from the
bottom half.
Photographs by Sian Kennedy
Persi Diaconis picks up an ordinary deck of cards, fresh from the box, and writes a word in Magic Marker on one side: RANDOM. He shuffles the deck once. The letters have re-formed themselves into six
bizarre runes that still look vaguely like the letters R, A, and so on. Diaconis shuffles again, and the markings on the side become undecipherable. After two more shuffles, you can't even tell that
there used to be six letters. The side of the pack looks just like the static on a television set. It didn't look random before, but it sure looks random now.
Keep watching. After three more shuffles, the word RANDOM miraculously reappears on the side of the deck only it is written twice, in letters half the original size. After one more shuffle, the
original letters materialize at the original size. Diaconis turns the cards over and spreads them out with a magician's flourish, and there they are in their exact original sequence, from the ace of
spades to the king of diamonds.
Diaconis has just performed eight perfect shuffles in a row. There's no hocus-pocus, just skill perfected in his youth: Diaconis ran away from home at 14 to become a magician's assistant and later
became a professional magician and blackjack player. Even now, at 57, he is one of a couple of dozen people on the planet who can do eight perfect shuffles in less than a minute.
Diaconis's work these days involves much more than nimbleness of hand. He is a professor of mathematics and statistics at Stanford University. But he is also the world's leading expert on shuffling.
He knows that what seems to be random often isn't, and he has devoted much of his career to exploring the difference. His work has applications to filing systems for computers and the reshuffling of
the genome during evolution. And it has led him back to Las Vegas, where, instead of trying to beat the casinos, he now works for them.
A card counter in blackjack memorizes the cards that have already been played to get better odds by making bets based on his knowledge of what has yet to turn up. If the deck has a lot of face cards
and 10s left in it, for instance, and he needs a 10 for a good hand, he will bet more because he's more likely to get it. A good card counter, Diaconis estimates, has a 1 to 2 percent advantage over
the casino. On a bad day, a good card counter can still lose $10,000 in a hurry. And on a good day, he may get a tap on the shoulder by a large person who will say, "You can call it a day now." By
his mid-twenties, Diaconis had figured out that doing mathematics was an easier way to make a living.
Two years ago, Diaconis himself got a tap on the shoulder. A letter arrived from a manufacturer of casino equipment, asking him to figure out whether its card-shuffling machines produced random
shuffles. To Diaconis's surprise, the company gave him and his Stanford colleague, Susan Holmes, carte blanche to study the inner workings of the machine. It was like taking a Russian spy on a tour
of the CIA and asking him to find the leaks.
When shuffling machines first came out, Diaconis says, they were transparent, so gamblers could actually see the cutting and riffling inside. But gamblers stopped caring after a while, and the
shuffling machines turned into closed boxes. They also stopped shuffling cards the way humans do. In the machine that Diaconis and Holmes looked at, each card gets randomly directed, one at a time,
to one of 10 shelves. The shuffling machine can put each new card either on the top of the cards already on that shelf or on the bottom, but not between them.
"Already I could see there was something wrong," says Holmes. If you start out with all the red cards at the top of the deck and all the black cards at the bottom, after one pass through the
shuffling machine you will find that each shelf contains a red-black sandwich. The red cards, which got placed on the shelves first, form the middle of each sandwich. The black cards, which came
later, form the outside. Since there are only 10 shelves, there are at most 20 places where a red card is followed by a black one or vice versa, fewer than the average number of color changes (26)
that one would expect from a random shuffle.
The nonrandomness can be seen more vividly if the cards are numbered from 1 to 52. After they have passed through the shuffling machine, the numbers on the cards form a zigzag pattern. The top card
on the top shelf is usually a high number. Then the numbers decrease until they hit the middle of the first red-black sandwich; then they increase and decrease again, and so on, at most 10 times.
Diaconis and Holmes figured out the exact probability that any given card would end up in any given location after one pass through the machine. But that didn't indicate whether a gambler could use
this information to beat the house.
So Holmes worked out a demonstration. It was based on a simple game: You take cards from a deck one by one and each time try to predict what you've selected before you look at it. If you keep track
of all the cards, you'll always get the last one right. You'll guess the second-to-last card right half the time, the third-to-last card a third of the time, and so on. On average, you will guess
about 4.5 cards correctly out of 52.
By exploiting the zigzag pattern in the cards that pass through the shuffling machine, Holmes found a way to double the success rate. She started by predicting that the highest possible card (52) would be on top. If it turned out to be 49, then she predicted 48, the next highest number, for the second card. She kept going this way until her prediction was too low, predicting, say, 15 when the card was actually 18. That meant the shuffling machine had reached the bottom of a zigzag and the numbers would start climbing again. So she would predict 19 for the next card. Over the long run,
Holmes (or, more precisely, her computer) could guess nine out of every 52 cards correctly.
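A quick simulation makes the advantage concrete. The sketch below is my own simplification of the setup described above (ten shelves, each card placed on a random shelf, on top or bottom at random) together with a greedy zigzag guesser in the spirit of Holmes's strategy; the exact average depends on the details, so treat the output as illustrative rather than as her precise figure:

#include <cstdio>
#include <deque>
#include <iterator>
#include <random>
#include <set>
#include <vector>

int main() {
    std::mt19937 rng(12345);
    std::uniform_int_distribution<int> shelfPick(0, 9);
    std::bernoulli_distribution onTop(0.5);
    const int TRIALS = 20000;
    double total = 0;
    for (int t = 0; t < TRIALS; ++t) {
        // Feed cards 1..52, one at a time, onto a random shelf, top or bottom.
        std::vector<std::deque<int>> shelves(10);
        for (int c = 1; c <= 52; ++c) {
            std::deque<int>& s = shelves[shelfPick(rng)];
            if (onTop(rng)) s.push_front(c); else s.push_back(c);
        }
        std::vector<int> out;
        for (const std::deque<int>& s : shelves)
            out.insert(out.end(), s.begin(), s.end());

        // Greedy guesser: predict the highest unseen card, walk downward
        // through the unseen values, and flip direction whenever the actual
        // card jumps past the prediction (a valley or peak of the zigzag).
        std::set<int> unseen;
        for (int c = 1; c <= 52; ++c) unseen.insert(c);
        int correct = 0, pred = 52;
        bool down = true;
        for (int v : out) {
            if (v == pred) ++correct;
            unseen.erase(v);
            if (unseen.empty()) break;
            if (down && v > pred) down = false;
            if (!down && v < pred) down = true;
            // Largest unseen value below v, or smallest unseen value above it.
            auto below = [&]() {
                auto it = unseen.lower_bound(v);
                return it == unseen.begin() ? -1 : *std::prev(it);
            };
            auto above = [&]() {
                auto it = unseen.upper_bound(v);
                return it == unseen.end() ? -1 : *it;
            };
            pred = down ? below() : above();
            if (pred == -1) {            // ran off one end: reverse direction
                down = !down;
                pred = down ? below() : above();
            }
        }
        total += correct;
    }
    std::printf("average correct guesses: %.2f (pure memory alone gives ~4.5)\n",
                total / TRIALS);
    return 0;
}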
To a gambler, the implications are staggering. Imagine playing blackjack and knowing one-sixth of the cards before they are turned over! In reality, a blackjack player would not have such a big
advantage, because some cards are hidden and six full decks are used. Still, Diaconis says, "I'm sure it would double or triple the advantage of the ordinary card counter."
Diaconis and Holmes offered the equipment manufacturer some advice: Feed the cards through the machine twice. The alternative would be more expensive: Build a 52-shelf machine.
A small victory for shuffling theory, one might say. But randomization applies to more than just cards. Evolution randomizes the order of genes on a chromosome in several ways. One of the most common
mutations is called a "chromosome inversion," in which the arm of a chromosome gets cut in two random places, flipped over end-to-end, and reattached, with the genes in reverse order. In fruit flies,
inversions happen at a rate of roughly one per every million years. This is very similar to a shuffling method called transposition that Diaconis studied 20 years ago. Using his methods, mathematical
biologists have estimated how many inversions it takes to get from one species of fruit fly to another, or to a completely random genome. That, Diaconis suggests, is the real magic he ran away from
home to find. "I find it amazing," he says, "that mathematics developed for purely aesthetic reasons would mesh perfectly with what engineers or chromosomes do when they want to make a mess."
Control Seminar
Hybrid Linear Quadratic Optimal Control for Aerospace Systems with Continuous and Impulsive Inputs
Chris Damaren
University of Toronto Institute for Aerospace Studies
Friday, February 21, 2014
3:30pm - 4:30pm
1500 EECS
About the Event
We consider linear systems described by hybrid dynamics; that is systems described by linear (potentially time-varying) models with continuous control inputs and "jump dynamics" at discrete time
instants where impulsive control inputs are applied. A quadratic performance index is formulated and the necessary conditions for optimality are determined. The solution to the control problem is
obtained using two types of Riccati equations: the usual continuous-time Riccati differential equation and a discrete Riccati equation which yields discontinuous jumps in the continuous-time
solution. Necessary conditions for optimal timing of the impulsive inputs are also presented. The hybrid LQR solution is applied to the problem of spacecraft formation flying in low Earth orbit where
the deputy spacecraft mitigates the effects of the J2 perturbation using a combination of the geomagnetic Lorentz force and impulsive thrusting for actuation. It is shown that the required amount of
thruster actuation for formation keeping can be significantly reduced when used in concert with Lorentz force actuation.
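To give a flavor of the structure described in the abstract (this is my own toy scalar illustration with made-up numbers, not Damaren's formulation), the value function P(t) can be swept backward through a continuous Riccati equation between impulse instants, with a discrete Riccati jump applied at each instant:

#include <cstdio>
#include <vector>

int main() {
    // Toy plant: dx/dt = a*x + b*u between impulses; x jumps by g*v at times tk.
    // Cost: integral of (q*x^2 + r*u^2) dt plus s*v^2 per impulse.
    const double a = 0.5, b = 1.0, g = 1.0;
    const double q = 1.0, r = 1.0, s = 0.1;
    const double T = 10.0, dt = 1e-3;
    const std::vector<double> tk = {2.5, 5.0, 7.5};   // impulse instants
    const int N = static_cast<int>(T / dt);

    // Backward sweep: -dP/dt = 2aP - (b^2/r)P^2 + q with P(T) = 0, and the
    // discrete-Riccati jump P- = P+ - g^2 P+^2 / (s + g^2 P+) at each tk.
    std::vector<double> P(N + 1, 0.0);
    for (int i = N; i > 0; --i) {
        double t = i * dt, Pp = P[i];
        for (double tj : tk)             // coarse: jump applied within the step
            if (t - dt < tj && tj <= t)
                Pp -= g * g * Pp * Pp / (s + g * g * Pp);
        P[i - 1] = Pp + dt * (2 * a * Pp - (b * b / r) * Pp * Pp + q);
    }

    // Forward pass: u = -(b/r) P x continuously, v = -g P x / (s + g^2 P) at tk.
    double x = 1.0;
    for (int i = 0; i < N; ++i) {
        double t = i * dt;
        for (double tj : tk)
            if (t <= tj && tj < t + dt)
                x += g * (-g * P[i] * x / (s + g * g * P[i]));
        double u = -(b / r) * P[i] * x;
        x += dt * (a * x + b * u);
        if (i % 2000 == 0) std::printf("t = %4.1f   x = %8.5f\n", t, x);
    }
    return 0;
}

Between impulses this is ordinary continuous-time LQR; the jump term is where the impulsive channel enters the optimality conditions.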
Chris Damaren received the BASc, MASc, and PhD degrees in aerospace engineering from the University of Toronto in 1985, 1987, and 1990 respectively. His graduate research was in the area of dynamics
and control of flexible spacecraft. From 1990 to 1995 he was an Assistant Professor in the Department of Engineering at Royal Roads Military College in Victoria, BC, Canada. From 1995 to 1999 he was a
Senior Lecturer in the Department of Mechanical Engineering at the University of Canterbury in Christchurch, New Zealand. From 1999-2010, he held the position of Associate Professor at the University
of Toronto Institute for Aerospace Studies and was promoted to the rank of Professor in 2011. From 2008 to 2013, he was the Vice-Dean Graduate Studies for the Faculty of Applied Science and
Engineering at the University of Toronto. His research interests are mainly in the areas of spacecraft dynamics and control. He has also published several articles on the dynamics and control of
structurally flexible robotic manipulators and the transient hydrodynamics of floating structures.
Additional Information
Contact: Ann Pace
Phone: 763-5022
Email: ampace@umich.edu
Sponsor: Bosch, Eaton, Ford, MathWorks, Toyota, and Whirlpool
Open to: Public
Category Theory
This page lists freely downloadable books on category theory, available for online viewing and/or download.
Category Theory for Scientists
by David I. Spivak - arXiv , 2013
We attempt to show that category theory can be applied throughout the sciences as a framework for modeling phenomena and communicating results. In order to target the scientific audience, this book
is example-based rather than proof-based.
(1054 views)
Categories and Homological Algebra
by Pierre Schapira - UPMC , 2011
These notes introduce the language of categories and present the basic notions of homological algebra, first from an elementary point of view, next with a more sophisticated approach, with the
introduction of triangulated and derived categories.
(1255 views)
Category Theory for Computing Science
by Michael Barr, Charles Wells - Prentice Hall , 1998
This book is a textbook in basic category theory, written specifically to be read by researchers and students in computing science. We expound the constructions basic to category theory in the
context of applications to computing science.
(1655 views)
Banach Modules and Functors on Categories of Banach Spaces
by J. Cigler, V. Losert, P.W. Michor - Marcel Dekker Inc , 1979
This book is the final outgrowth of a sequence of seminars about functors on categories of Banach spaces (held 1971 - 1975) and several doctoral dissertations. It has been written for readers with a
general background in functional analysis.
(2236 views)
Functors and Categories of Banach Spaces
by Peter W. Michor - Springer , 1978
The aim of this book is to develop the theory of Banach operator ideals and metric tensor products along categorical lines: these two classes of mathematical objects are endofunctors on the category
Ban of all Banach spaces in a natural way.
(2403 views)
Category Theory and Functional Programming
by Mikael Vejdemo-Johansson - University of St. Andrews , 2012
An introduction to category theory that ties into Haskell and functional programming as a source of applications. Topics: definition of categories, special objects and morphisms, functors, natural
transformation, (co-)limits and special cases, etc.
(3030 views)
Higher Topos Theory
by Jacob Lurie - Princeton University Press , 2009
Jacob Lurie presents the foundations of higher category theory, using the language of weak Kan complexes, and shows how existing theorems in algebraic topology can be reformulated and generalized in
the theory's new language.
(3209 views)
Higher Algebra
by Jacob Lurie - Harvard University , 2011
Contents: Stable ∞-Categories; ∞-Operads; Algebras and Modules over ∞-Operads; Associative Algebras and Their Modules; Little Cubes and Factorizable Sheaves; Algebraic Structures on ∞-Categories; and more.
(3894 views)
Introduction to Categories and Categorical Logic
by Samson Abramsky, Nikos Tzevelekos - arXiv , 2011
These notes provide a succinct, accessible introduction to some of the basic ideas of category theory and categorical logic. The main prerequisite is a basic familiarity with the elements of discrete
mathematics: sets, relations and functions.
(3338 views)
Category Theory Lecture Notes
by Daniele Turi - University of Edinburgh , 2001
These notes were written for a course in category theory. The course was designed to be self-contained, drawing most of the examples from category theory itself. It was intended for post-graduate
students in theoretical computer science.
(3112 views)
An Introduction to Category Theory in Four Easy Movements
by A. Schalk, H. Simmons - Manchester University , 2005
Notes for a course offered as part of the MSc. in Mathematical Logic. From the table of contents: Development and exercises; Functors and natural transformations; Limits and colimits, a universal
solution; Cartesian closed categories.
(3574 views)
Category Theory Lecture Notes
by Michael Barr, Charles Wells , 1999
Categories originally arose in mathematics out of the need of a formalism to describe the passage from one type of mathematical structure to another. These notes form a short summary of some major
topics in category theory.
(3125 views)
Category Theory
- Wikibooks , 2010
This book is an introduction to category theory, written for those who have some understanding of one or more branches of abstract mathematics, such as group theory, analysis or topology. It contains
examples drawn from various branches of math.
(3510 views)
Basic Category Theory
by Jaap van Oosten - University of Utrecht , 2007
Contents: Categories and Functors; Natural transformations; (Co)cones and (Co)limits; A little piece of categorical logic; Adjunctions; Monads and Algebras; Cartesian closed categories and the
lambda-calculus; Recursive Domain Equations.
(3544 views)
Abelian Categories: an Introduction to the Theory of Functors
by Peter Freyd - Harper and Row , 1964
From the table of contents: Fundamentals (Contravariant functors and dual categories); Fundamentals of Abelian categories; Special functors and subcategories; Metatheorems; Functor categories;
Injective envelopes; Embedding theorems.
(3535 views)
Model Categories and Simplicial Methods
by Paul Goerss, Kristen Schemmerhorn - Northwestern University , 2004
There are many ways to present model categories, each with a different point of view. Here we would like to treat model categories as a way to build and control resolutions. We are going to emphasize
the analog of projective resolutions.
(2521 views)
Notes on Categories and Groupoids
by P. J. Higgins - Van Nostrand Reinhold , 1971
A self-contained account of the elementary theory of groupoids and some of its uses in group theory and topology. Category theory appears as a secondary topic whenever it is relevant to the main
issue, and its treatment is by no means systematic.
(6693 views)
Seminar on Triples and Categorical Homology Theory
by B. Eckmann - Springer , 1969
This volume concentrates a) on the concept of 'triple' or standard construction with special reference to the associated 'algebras', and b) on homology theories in general categories, based upon
triples and simplicial methods.
(4162 views)
Higher Operads, Higher Categories
by Tom Leinster - arXiv , 2003
Higher-dimensional category theory is the study of n-categories, operads, braided monoidal categories, and other such exotic structures. It draws its inspiration from topology, quantum algebra,
mathematical physics, logic, and computer science.
(4244 views)
Higher-Dimensional Categories: an illustrated guide book
by Eugenia Cheng, Aaron Lauda - University of Sheffield , 2004
This work gives an explanatory introduction to various definitions of higher-dimensional category. The emphasis is on ideas rather than formalities; the aim is to shed light on the formalities by
emphasizing the intuitions that lead there.
(4817 views)
Mixed Motives
by Marc Levine - American Mathematical Society , 1998
This book combines foundational constructions in the theory of motives and results relating motivic cohomology to more explicit constructions. Prerequisite for understanding the work is a basic
background in algebraic geometry.
(6448 views)
A Gentle Introduction to Category Theory: the calculational approach
by Maarten M. Fokkinga , 1994
These notes present the important notions from category theory. The intention is to provide a fairly good skill in manipulating with those concepts formally. This text introduces category theory in
the calculational style of the proofs.
(8666 views)
Computational Category Theory
by D.E. Rydeheard, R.M. Burstall , 2001
The book is a bridge-building exercise between computer programming and category theory. Basic constructions of category theory are expressed as computer programs. It is a first attempt at connecting
the abstract mathematics with concrete programs.
(9603 views)
Categories, Types, and Structures
by Andrea Asperti, Giuseppe Longo - MIT Press , 1991
Here is an introduction to category theory for the working computer scientist. It is a self-contained introduction to general category theory and the mathematical structures that constitute the
theoretical background.
(7784 views)
Abstract and Concrete Categories: The Joy of Cats
by Jiri Adamek, Horst Herrlich, George Strecker - John Wiley & Sons , 1990
A modern introduction to the theory of structures via the language of category theory, the emphasis is on concrete categories. The first five chapters present the basic theory, while the last two
contain more recent research results.
(6706 views)
Basic Concepts of Enriched Category Theory
by Max Kelly - Cambridge University Press , 2005
The book presents a selfcontained account of basic category theory, assuming as prior knowledge only the most elementary categorical concepts. It is designed to supply a connected account of the
theory, or at least of a substantial part of it.
(5224 views)
Toposes, Triples and Theories
by Michael Barr, Charles Wells - Springer-Verlag , 2005
Introduction to toposes, triples and theories and the connections between them. The book starts with an introduction to category theory, then introduces each of the three topics of the title.
Exercises provide examples or develop the theory further.
(5973 views)
Interpretation of analysis by means of constructive functionals of finite types
Results 1 - 10 of 59
, 2003
"... Introduction to the constructive point of view in the foundations of mathematics, in particular intuitionism due to L.E.J. Brouwer, constructive recursive mathematics due to A.A. Markov, and
Bishop’s constructive mathematics. The constructive interpretation and formalization of logic is described. F ..."
Cited by 162 (4 self)
Add to MetaCart
Introduction to the constructive point of view in the foundations of mathematics, in particular intuitionism due to L.E.J. Brouwer, constructive recursive mathematics due to A.A. Markov, and Bishop’s
constructive mathematics. The constructive interpretation and formalization of logic is described. For constructive (intuitionistic) arithmetic, Kleene’s realizability interpretation is given; this
provides an example of the possibility of a constructive mathematical practice which diverges from classical mathematics. The crucial notion in intuitionistic analysis, choice sequence, is briefly
described and some principles which are valid for choice sequences are discussed. The second half of the article deals with some aspects of proof theory, i.e., the study of formal proofs as
combinatorial objects. Gentzen’s fundamental contributions are outlined: his introduction of the so-called Gentzen systems which use sequents instead of formulas and his result on first-order
arithmetic showing that (suitably formalized) transfinite induction up to the ordinal ε₀ cannot be proved in first-order arithmetic.
- Typed Lambda Calculi and Applications, number 664 in Lecture Notes in Computer Science , 1993
"... This paper describes formalizations of Tait’s normalization proof for the simply typed λ-calculus in the proof assistants Minlog, Coq and Isabelle/HOL. From the formal proofs programs are
machine-extracted that implement variants of the well-known normalization-by-evaluation algorithm. The case stud ..."
Cited by 60 (5 self)
Add to MetaCart
This paper describes formalizations of Tait’s normalization proof for the simply typed λ-calculus in the proof assistants Minlog, Coq and Isabelle/HOL. From the formal proofs programs are
machine-extracted that implement variants of the well-known normalization-by-evaluation algorithm. The case study is used to test and compare the program extraction machineries of the three proof
assistants in a non-trivial setting. 1
- Transactions of the American Mathematical Society
"... We consider the extent to which one can compute bounds on the rate of convergence of a sequence of ergodic averages. It is not difficult to construct an example of a computable Lebesgue-measure
preserving transformation of [0, 1] and a characteristic function f = χA such that the ergodic averages An ..."
Cited by 27 (4 self)
Add to MetaCart
We consider the extent to which one can compute bounds on the rate of convergence of a sequence of ergodic averages. It is not difficult to construct an example of a computable Lebesgue-measure
preserving transformation of [0, 1] and a characteristic function f = χA such that the ergodic averages Anf do not converge to a computable element of L2([0,1]). In particular, there is no computable
bound on the rate of convergence for that sequence. On the other hand, we show that, for any nonexpansive linear operator T on a separable Hilbert space, and any element f, it is possible to compute
a bound on the rate of convergence of (Anf) from T, f, and the norm ‖f ∗ ‖ of the limit. In particular, if T is the Koopman operator arising from a computable ergodic measure preserving
transformation of a probability space X and f is any computable element of L2(X), then there is a computable bound on the rate of convergence of the sequence (Anf). The mean ergodic theorem is
equivalent to the assertion that for every function K(n) and every ε> 0, there is an n with the property that the ergodic averages Amf are stable to within ε on the interval [n, K(n)]. Even in
situations where the sequence (Anf) does not have a computable limit, one can give explicit bounds on such n in terms of K and ‖f‖/ε. This tells us how far one has to search to find an n so that the
ergodic averages are “locally stable ” on a large interval. We use these bounds to obtain a similarly explicit version of the pointwise ergodic theorem, and show that our bounds are qualitatively
different from ones that can be obtained using upcrossing inequalities due to Bishop and Ivanov. Finally, we explain how our positive results can be viewed as an application of a body of general
proof-theoretic methods falling under the heading of “proof mining.” 1
- Typed Lambda Calculi and Applications, LNCS 664 , 1993
"... ) 1 J. M. E. Hyland 2 C.-H. L. Ong 3 University of Cambridge, England Abstract This paper is motivated by the discovery that an appropriate quotient SN 3 of the strongly normalising untyped
3-terms (where 3 is just a formal constant) forms a partial applicative structure with the inherent appl ..."
Cited by 14 (1 self)
Add to MetaCart
J. M. E. Hyland and C.-H. L. Ong, University of Cambridge, England. Abstract: This paper is motivated by the discovery that an appropriate quotient SN_Ω of the strongly normalising untyped Ω-terms (where Ω is just a formal constant) forms a partial applicative structure with the inherent application operation. The quotient structure satisfies all but one of the axioms of a partial combinatory algebra (pca). We call such partial applicative structures conditionally partial combinatory algebras (c-pca). Remarkably, an arbitrary right-absorptive c-pca gives rise to a tripos provided the underlying intuitionistic predicate logic is given an interpretation in the style of Kreisel's modified realizability, as opposed to the standard Kleene-style realizability. Starting from an arbitrary right-absorptive c-pca U, the tripos-to-topos construction due to Hyland et al. can then be carried out to build a modified realizability topos TOP_m(U) of non-standard sets equipped with an
- In LICS’06 , 2006
"... U. Berger, [11] significantly simplified Tait’s normalisation proof for bar recursion [27], see also [9], replacing Tait’s introduction of infinite terms by the construction of a domain having
the property that a term is strongly normalizing if its semantics is. The goal of this paper is to show tha ..."
Cited by 13 (1 self)
Add to MetaCart
U. Berger, [11] significantly simplified Tait’s normalisation proof for bar recursion [27], see also [9], replacing Tait’s introduction of infinite terms by the construction of a domain having the
property that a term is strongly normalizing if its semantics is. The goal of this paper is to show that, using ideas from the theory of intersection types [2, 6, 7, 21] and Martin-Löf’s domain
interpretation of type theory [18], we can in turn simplify U. Berger’s argument in the construction of such a domain model. We think that our domain model can be used to give modular proofs of
strong normalization for various type theories. As an example, we show in some detail how it can be used to prove strong normalization for Martin-Löf dependent type theory extended with bar recursion,
and with some form of proof-irrelevance. 1
, 1995
"... this paper we study some extensions of the Kleene-Kreisel continuous functionals [7, 8] and show that most of the constructions and results, in particular the crucial density theorem, carry over
from nite to dependent and transnite types. Following an approach of Ershov we dene the continuous functi ..."
Cited by 10 (2 self)
Add to MetaCart
this paper we study some extensions of the Kleene-Kreisel continuous functionals [7, 8] and show that most of the constructions and results, in particular the crucial density theorem, carry over from
finite to dependent and transfinite types. Following an approach of Ershov we define the continuous functionals as the total elements in a hierarchy of Ershov-Scott domains of partial continuous functionals. In this setting the density theorem says that the total functionals are topologically dense in the partial ones, i.e. every finite (compact) functional has a total extension. We will extend this theorem from function spaces to dependent products and sums and universes. The key to the proof is the introduction of a suitable notion of density and associated with it a notion of co-density for dependent domains with totality. We show that the universe obtained by closing a given family of basic domains with totality under some quantifiers has a dense and co-dense totality provided the totalities on the basic domains are dense and co-dense and the quantifiers preserve density and co-density. In particular we can show that the quantifiers Π and Σ have this preservation property and hence, for example, the closure of the integers and the booleans (which are dense and co-dense) under Π and Σ has a dense and co-dense totality. We also discuss extensions of the density theorem to iterated universes, i.e. universes closed under universe operators. From our results we derive a dependent continuous choice principle and a simple order-theoretic characterization of extensional equality for total objects. Finally we survey two further applications of density: Waagbø's extension of the Kreisel-Lacombe-Shoenfield theorem showing the coincidence of the hereditarily effectively
continuous hierarchy...
, 2004
"... In [12], the second author obtained metatheorems for the extraction of effective (uniform) bounds from classical, prima facie nonconstructive proofs in functional analysis. These metatheorems
for the first time cover general classes of structures like arbitrary metric, hyperbolic, CAT(0) and nor ..."
Cited by 10 (6 self)
Add to MetaCart
In [12], the second author obtained metatheorems for the extraction of effective (uniform) bounds from classical, prima facie nonconstructive proofs in functional analysis. These metatheorems for the
first time cover general classes of structures like arbitrary metric, hyperbolic, CAT(0) and normed linear spaces and guarantee the independence of the bounds from parameters ranging over metrically
bounded (not necessarily compact!) spaces. The use of classical logic imposes some severe restrictions on the formulas and proofs for which the extraction can be carried out. In this paper we
consider similar metatheorems for semi-intuitionistic proofs, i.e. proofs in an intuitionistic setting enriched with certain non-constructive principles. Contrary to
- Applied Categorical Structures , 2000
"... . We study a semantics of dependent types and universe operators based on parametrized domains with totality. The main results are generalizations of the Kleene/Kreisel density theorem for the
continuous functionals. This continues work of E. Palmgren and V. Stoltenberg{Hansen on the domain interpre ..."
Cited by 10 (0 self)
Add to MetaCart
We study a semantics of dependent types and universe operators based on parametrized domains with totality. The main results are generalizations of the Kleene/Kreisel density theorem for the continuous functionals. This continues work of E. Palmgren and V. Stoltenberg-Hansen on the domain interpretation of dependent types, and of D. Normann on universes of wellfounded types with density. Key words: Continuous functionals, Domains, Totality, Dependent types, Universes 1. Introduction In Mathematical Logic and Computer Science there is growing interest in constructive type theories as developed by Martin-Löf [8]. This paper is concerned with a semantics of such theories within the realm of Ershov-Scott domains [5] with totality [10]. Erik Palmgren and Viggo Stoltenberg-Hansen
[15], [17] developed a semantics for a partial type theory (modelling partial functions and functionals) based on the notion of a parametrization, i.e. a domain depending on parameters. Since this
semantics wa...
- Department of Computer Science, University of Aarhus , 2000
"... A definition of a typed language is said to be "intrinsic" if it assigns meanings to typings rather than arbitrary phrases, so that ill-typed phrases are meaningless. In contrast, a definition
is said to be "extrinsic " if all phrases have meanings that are independent of their typings, while typing ..."
Cited by 10 (1 self)
Add to MetaCart
A definition of a typed language is said to be "intrinsic" if it assigns meanings to typings rather than arbitrary phrases, so that ill-typed phrases are meaningless. In contrast, a definition is
said to be "extrinsic " if all phrases have meanings that are independent of their typings, while typings represent properties of these meanings. For a simply typed lambda calculus, extended with
recursion, subtypes, and named products, we give an intrinsic denotational semantics and a denotational semantics of the underlying untyped language. We then establish a logical relations theorem
between these two semantics, and show that the logical relations can be "bracketed" by retractions between the domains of the two semantics. From these results, we derive an extrinsic semantics that
uses partial equivalence relations.
Borland C++ Builder Math Functions: Introduction
The controls on your applications will receive strings of various kinds either supplied by the user or gotten from other controls. Some of the values on these controls will be involved in
mathematical operations. The C++ language provides a rich set of functions to help you quickly perform different types of calculations. The functions range from arithmetic to geometry, from
trigonometry to algebra, etc. To compensate for the areas where C++ does not expand, instead of writing your own functions, The Visual Component Library (VCL) is equipped with various functions that,
besides geometry and algebra, deal with finance, statistics, random number generation, etc. Because there are so many of these functions and they get added with each new release of the library, we
will review only the most commonly used.
By default, the content of a text control, such as an edit box, is a string, which is an array of characters. If you want the value or content of such a control to participate in a mathematical
operation, you must first convert such a value to a mathematically compatible value.
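For instance, an event handler along the following lines converts an edit box's text to a number before squaring it (a sketch: Edit1, Label1, and the handler name are assumed control names here, not prescribed ones; StrToFloat raises an EConvertError exception when the text is not a valid number):

#include <vcl.h>
#pragma hdrstop

// Assumes a form with a TEdit named Edit1, a TLabel named Label1, and a button.
void __fastcall TForm1::Button1Click(TObject *Sender)
{
    try
    {
        // Convert the string content of the edit box to a double.
        double side = StrToFloat(Edit1->Text);
        double area = side * side;
        // Convert the numeric result back to a string for display.
        Label1->Caption = FloatToStr(area);
    }
    catch (EConvertError&)
    {
        ShowMessage("Please enter a valid number.");
    }
}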
Past exam questions
So I just passed my first year, but I failed calc 2... badly
I'll have to retake it next summer, but until then I would like to ask some questions that I saw on the 2009 exam.
The past exams can all be found at this website
Skule Courses - SKULE
the course is mat197
If you would like to, you can read it there for more clarity.
1. Evaluate the integral:
erhm... could anyone send me a link of the math functions?
Anyways, the integral of
x^2 dx / (x^2+64)^(3/2)
I first thought, maybe I should try some substitution, but that didn't work. The bottom looked eerily familiar to an inverse trig substitution style, that didn't work either. Then I thought,
maybe I need to pull something out, like that 64=2^6 and (2^6)^3/2 = 2^9... and we could use that to some strange effect like completing the square.
The entire exam is attached to this post. Although you are welcome to do every single problem, it is not required, as I would like to tackle them a little more in depth than I had previously,
before posting them. Many thanks to anyone who answers, to help this student in need.
$\int \frac{ x^2 } { (x^2+64)^{ \frac{3}{2} } } dx = \int \frac{ x^2 } { (x^2+8^2)^{ \frac{3}{2} } } dx$
Let $x = 8tan f$ and $dx = 8sec^2 f df$
$\int \frac{ x^2 } { (x^2+8^2)^{ \frac{3}{2} } } dx$
$\int \frac{ 64 tan^2 f (8sec^2 f) }{ ( 8^2[tan^2 f + 1] )^{ \frac{3}{2} } } df$
$\int \frac{ 64 tan^2 f (8sec^2 f) }{ ( 8^2 sec^2 f )^{ \frac{3}{2} } } df$
$\int \frac{ 64\tan^2 f (8 \sec^2 f) }{ 8^3 \sec^3 f } df$
$\int \frac{ tan^2 f }{ sec f } df$
$\int \frac{ tan^2 f }{ \frac{1}{cosf} } df$
$\int \frac{ \frac{sin^2 f}{cos^2 f} }{ \frac{1}{cosf} } df$
$\int \frac{sin^2f}{cosf} df$
Can you solve from here?
Hello, obesechicken13!
$[1]\;\;\int \frac{x^2}{(x^2+64)^{\frac{3}{2}}}\,dx$
Let: . $x \:=\:8\tan\theta \quad\Rightarrow\quad dx \:=\:8\sec^2\!\theta\,d\theta$
Substitute: . $\int\frac{(8\tan\theta)^2}{(64\sec^2\!\theta)^{\frac{3}{2}}}(8\sec^2\!\theta\,d\theta) \;=\;\int\frac{512\tan^2\!\theta\sec^2\!\theta}{512\sec^3\!\theta}\,d\theta$
. . . $=\;\;\int\frac{\tan^2\!\theta}{\sec\theta}\,d\theta \;\;=\;\;\int\frac{\sec^2\!\theta-1}{\sec\theta}\,d\theta$
. . . $=\;\;\int\left(\sec\theta - \cos\theta\right)\,d\theta \;\;=\;\;\ln|\sec\theta + \tan\theta| - \sin\theta + C$
Back-substitute: . $\tan\theta \:=\:\frac{x}{8} \quad\Rightarrow\quad \sec\theta \:=\:\frac{\sqrt{x^2+64}}{8}$
We have: . $\ln\left|\frac{\sqrt{x^2+64}}{8} + \frac{x}{8}\right| - \frac{x}{\sqrt{x^2+64}} + C \;\;=\;\;\boxed{\ln\left|x + \sqrt{x^2+64}\right| + \frac{x}{\sqrt{x^2+64}} + C}$
Firstly, thanks to both of you.
Can you solve from here?
Yes, I can by substituting 1-cos^2(x) in for sin^2(x) and then splitting the integral. Then I draw a triangle to back substitute for x.
I arrive at the same answer as Soroban's second-to-last step.
I don't understand how you went from the second to last step to the last step. How did you get rid of the 8's and how did the subtraction turn into addition?
PS: I no longer wish to learn the math code, it seems not worth the trouble
In addition, I would like to know exactly how I should go about an integration problem, but I suppose that will come with practice. So never mind about that.
One thing this problem has taught me is how little I know my trigonometric identities and their integrals. I'm gonna make some flashcards.
Perhaps the second to last step is a good enough solution that it doesn't need to go the last step. Thanks!
Observe that
\begin{aligned}\ln\left|\frac{\sqrt{x^2+64}}{8} + \frac{x}{8}\right| - \frac{x}{\sqrt{x^2+64}} + C &= \ln\left|\frac{x+\sqrt{x^2+64}}{8}\right|-\frac{x}{\sqrt{x^2+64}}+C\\ &= \ln\left|x+\sqrt{x^2+64}\right|-\ln 8-\frac{x}{\sqrt{x^2+64}}+C\end{aligned}
But $C-\ln 8$ is just another constant!! We can rename it $C$ and then get the simplified result Soroban got (except I just noticed a small typo on his part.. :/ ).
Does this make sense?
Yes it does, that typo threw me off
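One way to double-check an antiderivative like this, by the way, is numerically: Simpson's rule over [0, 5] should match F(5) - F(0) for the corrected result F(x) = ln(x + sqrt(x^2+64)) - x/sqrt(x^2+64). A small C++ sketch (my own check, not from the original posts):

#include <cmath>
#include <cstdio>

double f(double x) { return x * x / std::pow(x * x + 64.0, 1.5); }

// Antiderivative from the thread, with the sign typo corrected.
double F(double x) {
    return std::log(x + std::sqrt(x * x + 64.0)) - x / std::sqrt(x * x + 64.0);
}

int main() {
    const int n = 1000;                       // even number of subintervals
    const double a = 0.0, b = 5.0, h = (b - a) / n;
    double sum = f(a) + f(b);
    for (int i = 1; i < n; ++i) sum += f(a + i * h) * (i % 2 ? 4.0 : 2.0);
    sum *= h / 3.0;
    std::printf("Simpson's rule: %.10f   F(5) - F(0): %.10f\n", sum, F(b) - F(a));
    return 0;
}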
A controversial concept in climate science is what fraction of incremental GHG absorption affects the surface temperature. HITRAN-based simulations confirm that the IPCC-stated radiative forcing of 3.7 W/m² is the incremental absorption from doubling CO[2], but climate alarmism assumes that all of this affects the surface, while physics requires that only half does.
The physical proof considers the atmosphere as a black box (not to be confused with a black body) with inter-related boundary conditions. At the top boundary, 239 W/m² of post albedo solar power
enters and 239 W/m² of radiated power leaves. This power flux corresponds to the 255°K equivalent temperature of the Earth and is known to be strictly radiative. At the bottom boundary, the surface
at 287°K radiates 385 W/m², which must be replaced with the same amount of power entering the surface from the bottom of the atmosphere.
The picture at right shows the relationships between all of the radiative fluxes relevant to the radiative balance. The average surface temperature, average incident solar power, average albedo and
size of the transparent region of the atmosphere are inputs and everything else is deterministically derived as described below. Of these, only the size of the atmospheric window is not a measured
value. This is calculated from 3-d, HITRAN driven, atmospheric simulations based on nominal GHG concentrations and nominal cloud coverage, and at 24.1%, is not that far away from the 18% estimated by
Trenberth et all, 2009.
You should notice that only radiation components are included and non radiative components, like latent heat and thermals, are not. Relative to the radiative balance, the only thing that matters is
EM radiation. The non radiative components act primarily to redistribute and reorganize the energy stored in the Earth's thermal mass, specifically, as fluxes in and out and within the thermal mass
spanning the oceans and atmosphere, including atmospheric and oceanic circulation currents. A condition on the altitude of the lower atmospheric boundary is that it must separate the thermal mass
from the atmosphere. Setting this boundary to be coincident with the surface is only approximately true, but none the less, the total heat capacity of the atmosphere is a tiny fraction of the heat
capacity of the planet making this a reasonable approximation. A further constraint of Conservation Of Energy is that the global net non radiative flux between the atmosphere and the surface must be
zero in the steady state if the radiative flux is also zero. While radiative flux to and from the surface can be traded off against non radiative flux, it makes no difference to the overall radiative balance.
Since the atmosphere creates no energy of its own, these boundary conditions can be expressed as,
239 = T*385 + (1-F)*A
385 = 239 + F*A
A = 385*(1-T)
Here T is the transmittance between the surface and space through the transparent window in the atmosphere, A is the surface power absorbed by the atmosphere and F is the fraction of this power returned to
surface. The first equation sets the radiated power of the planet to be the surface power passing through the transparent window plus the power radiated by the atmosphere into space. The second
equation sets the power entering the surface as the post albedo power from the Sun plus the fraction of the power entering the atmosphere and returned to the surface. The third equation sets the
power entering the atmosphere as the surface power that doesn't pass through the transparent window. Implicit in this formulation is that the power entering the atmosphere is equal to the power
leaving and that most of the planets thermal mass is contained within the surface radiator and not within the atmosphere. There is also an explicit albedo, R, as shown in the diagram.
This is a redundant set of equations, thus there is no unique solution despite 3 equations and 3 unknowns, however, given values of T between 0 and 239/385 (0.622), a unique value of F always exists.
If we solve for F and consider 4 values of T, 0.18, 0.22, 0.24 and 0.26, the behavior of the solution space becomes evident. The 0.18 value is shown to match the Trenberth estimate of atmospheric absorption:
A = 385*(1-T)
F = 146/A
T = 0.18 -> A = 316, F = .462
T = 0.22 -> A = 300, F = .486
T = 0.24 -> A = 293, F = .499
T = 0.26 -> A = 285, F = .512
Note that based on Trenberth's atmospheric absorption model, less than half of the absorbed power must be sent to the surface, not more! This is a consequence of more radiation required by the
atmosphere to make up the difference between power passing through the transparent window and the required emitted power.
From the physics of black bodies, the atmosphere should behave as an isotropic radiator with half of its emitted power going up and half down, thus we can say that the transparent window of the
atmosphere must be about 24% since physics dictates that F must be one half. While this is a valid approach, another is to arrive at this from the other direction and calculate the size of the
transparent window from HITRAN based simulations to see what F should be based on calculated absorption.
Simulations say that the clear sky absorbs about 62% of the surface power, for a transparent window spanning 38% of the emitted power spectrum. The cloudy sky absorbs between 62% and 100% of the surface power, for an average of 83%, corresponding to an average transparent window of 17%. From ISCCP data, the average cloud coverage is 66%, so the cloud-fraction-weighted size of the transparent
atmospheric window is, 0.66*0.17 + (1-0.66)*0.38 = 0.241, whose corresponding value of F is 0.500, as expected.
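The arithmetic above is easy to reproduce; this short sketch (an illustration using only the numbers already quoted) prints the T-to-F table and the weighted window:

#include <cstdio>

int main() {
    // A = 385*(1-T) is the surface power absorbed by the atmosphere;
    // F = (385 - 239)/A = 146/A is the fraction returned to the surface.
    const double Ts[] = {0.18, 0.22, 0.24, 0.26};
    for (double T : Ts) {
        double A = 385.0 * (1.0 - T);
        double F = 146.0 / A;
        std::printf("T = %.2f -> A = %5.1f, F = %.3f\n", T, A, F);
    }
    // Cloud-fraction-weighted transparent window and its F value.
    double Tw = 0.66 * 0.17 + (1.0 - 0.66) * 0.38;
    std::printf("weighted window T = %.3f -> F = %.3f\n",
                Tw, 146.0 / (385.0 * (1.0 - Tw)));
    return 0;
}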
This confirms that the physics of black body radiators predicts the average transparency of the atmosphere and that both measurements and atmospheric simulations confirm that within 1%, only half of
the power absorbed by the atmosphere affects surface temperatures.
As a falsification test, consider the implication of a net atmospheric opacity limited to 50%. Even if all of the surface power is absorbed, half will still escape into space. This can be tested by
measuring the ratio of the power emitted by the coldest cloud tops and the surface power beneath them. Again, the ISCCP data tells us that for 100's of thousands of gridded measurements spanning
decades across the entire globe, cloud power asymptotically approaches half of the surface power and on average is about 2/3 of the surface power, as shown in the diagram to the right.
Why is the counter example of Venus so different? Because the thermal mass of Venus is primarily dense, energized CO[2] in the atmosphere above the surface, while on Earth, the primary thermal mass
is ground state water in the oceans below the surface and it's this thermal mass which is the focus of energy entering and leaving the system upon which surface temperature and the emitted surface
power depends.
In conclusion, there can be no question that only half of all absorption by the clear and cloudy atmosphere, GHG or otherwise, affects the surface, moreover; quantifying the energy balance in terms
of atmospheric opacity provides the precise mechanism for how incremental absorption affects surface temperatures. The consequence of this is that everything claimed by the IPCC must be reexamined.
Summary: On-Line Routing of Virtual Circuits with Applications to Load Balancing and Machine Scheduling
James Aspnes, Yossi Azar, Amos Fiat, Serge Plotkin, Orli Waarts
In this paper we study the problem of online allocation of routes to virtual circuits (both point-to-point and multicast) where the goal is to route all requests while minimizing the required bandwidth. We concentrate on the case of permanent virtual circuits (i.e., once a circuit is established, it exists forever), and describe an algorithm that achieves an O(log n) competitive ratio with respect to maximum congestion, where n is the number of nodes in the network. Informally, our results show that instead of knowing all of the future requests, it is sufficient to increase the bandwidth of the communication links by an O(log n) factor. We also show that this result is tight, i.e. for any online algorithm there exists a scenario in which an Ω(log n) increase in bandwidth is necessary in directed networks.
We view virtual circuit routing as a generalization of an online load balancing problem, defined as follows: jobs arrive online and each job must be assigned to one of the machines immediately upon arrival. Assigning a job to a machine increases this machine's load by an amount that depends both on the job and on the machine. The goal is to minimize the maximum load over the machines.
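To sketch the flavor of the greedy algorithm behind these results (my own illustration, which assumes a bound L on the optimal maximum load is known in advance; the paper removes that assumption with a guess-and-double scheme), each job goes to the machine that least increases an exponential potential over the normalized loads:

#include <cmath>
#include <cstdio>
#include <limits>
#include <vector>

// Assign one job (p[j] = its load on machine j) to minimize the increase in
// sum_j a^(load_j / L), where L is an assumed bound on the optimal max load.
int assign(std::vector<double>& load, const std::vector<double>& p, double L) {
    const double a = 2.0;                   // base of the exponential potential
    int best = -1;
    double bestInc = std::numeric_limits<double>::infinity();
    for (std::size_t j = 0; j < load.size(); ++j) {
        double inc = std::pow(a, (load[j] + p[j]) / L) - std::pow(a, load[j] / L);
        if (inc < bestInc) { bestInc = inc; best = static_cast<int>(j); }
    }
    load[best] += p[best];
    return best;
}

int main() {
    std::vector<double> load(4, 0.0);
    const double L = 3.0;                   // assumed optimum bound
    std::vector<std::vector<double>> jobs = {
        {1, 2, 2, 9}, {2, 1, 9, 9}, {1, 1, 1, 1}, {9, 9, 1, 2}};
    for (const std::vector<double>& p : jobs)
        std::printf("job assigned to machine %d\n", assign(load, p, L));
    for (double l : load) std::printf("final load: %.1f\n", l);
    return 0;
}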
Kensington, NY Prealgebra Tutor
Find a Kensington, NY Prealgebra Tutor
...Despite over ten years of experience tutoring, I've always kept tutoring as a part-time thing with a limited number of students. I prefer it this way so I can get to know my pupils better. In
addition, this arrangement means that I usually have enough free time to be flexible in case the student needs an extra session for a test or midterms, or has to reschedule because of conflicts.
26 Subjects: including prealgebra, chemistry, calculus, physics
...I have a bachelor's degree in physics. I have experience tutoring pre-algebra and have a bachelor's degree in physics. I have tutored pre-calculus both privately and for the Princeton Review.
20 Subjects: including prealgebra, English, algebra 2, grammar
...I am offering lessons to a wide range of student profile from elementary school to college level. My tutoring methods depend on students profile mostly. We can do practices, exercises, and/or
homework together, or we can study a topic from the beginning and do exercises to reinforce our understanding.
25 Subjects: including prealgebra, calculus, statistics, logic
...I have experience teaching at risk youth critical thinking, problem solving, and I am experienced to administer personality and interest inventories. I have three years of experience advising
students about what courses and educational programs they need for particular careers.I've helped stude...
24 Subjects: including prealgebra, English, reading, grammar
...I stress the development of astronomy from ancient times up to modern times in what I call historical astronomy. After this, I go into the stars, planets, moons, galaxies and universe itself
in what I call physical astronomy. I have a BA in Geology and have been teaching Regents Earth Science since 2003.
6 Subjects: including prealgebra, biology, algebra 1, astronomy
Columbia Theory Seminar, Fall 2012
For Fall 2012, the usual time for the meetings will be Friday at 11:30 - 13:00 in the CS conference room, CSB 453. Abstracts for talks are given below the talk schedule.
Talk Abstracts
Friday, September 14:
A Near Optimal Sublinear-Time Algorithm for Approximating the Minimum Vertex Cover Size
Dana Ron
Tel-Aviv University
Abstract: We give a nearly optimal sublinear-time algorithm for approximating the size of a minimum vertex cover in a graph G. The algorithm may query the degree [;\deg(v);] of any vertex v of its
choice, and for each [;1 \leq i \leq \deg(v);], it may ask for the i-th neighbor of v. Letting [;VCopt(G);] denote the minimum size of vertex cover in G, the algorithm outputs, with high constant
success probability, an estimate [;wVC(G);] such that [;VCopt(G) \leq wVC(G) \leq 2 VCopt(G) + \epsilon n;], where [;\epsilon;] is a given additive approximation parameter. We refer to such an
estimate as a [;(2,\epsilon);]-estimate. The query complexity and running time of our algorithm are [;\tilde{O}(d \cdot {\rm poly}(1/\epsilon));], where d denotes the average vertex degree in the
The best previously known sublinear algorithm, of Yoshida et al. (STOC 2009), has query complexity and running time [;O(d^4/\epsilon^2);], where d is the maximum degree in the graph. Given the lower
bound of [;\Omega(d);] (for constant [;\epsilon;]) for obtaining such an estimate (with any constant multiplicative factor) due to Parnas and Ron (TCS 2007), our result is nearly optimal.
In the case that the graph is dense, that is, the number of edges is [;\Theta(n^2);], we consider another model, in which the algorithm may ask, for any pair of vertices u and v, whether there is an
edge between u and v. We show how to adapt the algorithm that uses neighbor queries to this model and obtain an algorithm that outputs a [;(2,\epsilon);]-estimate of the size of a minimum vertex
cover whose query complexity and running time are [;\tilde{O}(n) \cdot {\rm poly}(1/\epsilon);].
Joint work with Krzysztof Onak, Michal Rosen, and Ronitt Rubinfeld
Friday, October 5:
The Complexity of Finding a Market Equilibrium for CES and other types of Markets
Dimitris Paparas
Columbia University
Abstract: We introduce the notion of non-monotone utilities, which covers a wide variety of utility functions in economic theory, and prove that it is PPAD-hard to find an approximate Arrow-Debreu
Market Equilibrium for Markets with linear and non-monotone utilities. Building on this result, we settle the complexity of finding an approximate Arrow-Debreu Market Equilibrium in a market with CES
utilities by proving that it is PPAD-complete when the Constant Elasticity of Substitution parameter, [;\rho;], is any constant less than -1.
Joint work with Xi Chen and Mihalis Yannakakis
Friday, October 12:
A Manipulability Dichotomy Theorem for Generalized Scoring Rules
Lirong Xia
Harvard University
Abstract: Social choice studies ordinal preference aggregation with applications ranging from high-stakes political elections to low-stakes movie rating websites. One recurring concern is that of the
robustness of a social choice (voting) rule to manipulation, bribery and other kinds of strategic behavior. A number of results have identified ways in which computational complexity can provide a
new barrier to strategic behavior, but most of previous work focused on case-by-case analyses for specific social choice rules and specific types of strategic behavior. In this talk, I present a
dichotomy theorem for the manipulability of a broad class of generalized scoring rules and a broad class of strategic behavior called vote operations. When the votes are i.i.d., then with high
probability the number of vote operations that are needed to achieve the strategic individual's goal is 0, [;\Theta(\sqrt{n});], [;\Theta(n);], or infinity. This theorem significantly strengthens
previous results and implies that most social choice situations are more vulnerable to many types of strategic behavior than previously believed.
Thursday, October 18:
The Theory of Crowdsourcing Contests
Jason Hartline
Northwestern University
Abstract: Crowdsourcing contests have been popularized by the Netflix challenge and websites like TopCoder and 99designs. What is a crowdsourcing contest? Imagine you are designing a new web service,
you have it all coded up, but the site looks bad because you haven't got any graphic design skills. You could hire an artist to design your logo, or you could post the design task as a competition to
crowdsourcing website 99designs with a monetary reward of $100. Contestants on 99designs would then compete to produce the best logo. You then select your favorite logo and award that contestant the
$100 prize.
In this talk, I discuss the theory of crowdsourcing contests. First, I will show how to model crowdsourcing contests using auction theory. Second, I will discuss how to solve for contestant
strategies. I.e., suppose you were entering such a programming contest on TopCoder, how much work should you do on your entry to optimize your gains from winning less the cost of doing the work?
Third, I will discuss inefficiency from the fact that the effort of losing contestants is wasted (e.g., every contestant has to do work to design a logo, but you only value your favorite logo). I
will show that this wasted effort is at most half of the total amount of effort. A consequence is that crowdsourcing is approximately as efficient a means of procurement as conventional methods
(e.g., auctions or negotiations). Finally, I will give a structural characterization of optimal crowdsourcing contests (in terms of procuring the highest quality work).
Friday, October 19:
Lower bounds on information complexity via zero-communication protocols and applications
Virginie Lerays
Université Paris-Sud 11
Abstract: The information complexity of a protocol is a lower bound on its communication complexity. One open question is whether these quantities are equal or not. We show that almost all known
lower bound methods for communication complexity are also lower bounds for information complexity. To do that, we define a relaxed version of the partition bound of Jain and Klauck, which subsumes
all rectangle and norm-based techniques, and we show that it lower bounds the information complexity.
Our result uses a recent connection between rectangle techniques and zero-communication protocols where players can abort (Laplante, Lerays, Roland 2012). More precisely, the maximum achievable
probability that the protocol doesn't abort, which is called efficiency, gives a lower bound on communication complexity which corresponds to the relaxed partition bound. We use compression
techniques to relate IC to efficiency.
In this talk, I will first make the link between zero communication protocols, communication complexity and rectangle techniques and then present the compression technique which is similar to those
of Braverman and Weinstein 2012 and Braverman 2012. Finally I will present some applications of our theorem.
Friday, October 19:
Non-commutative extensions of Grothendieck's inequality
Oded Regev
Courant Institute, NYU
Abstract: The classical Grothendieck inequality has applications to the design of approximation algorithms for NP-hard optimization problems. Here we show that a similar algorithmic interpretation may be given to a noncommutative generalization of the Grothendieck inequality due to Pisier and Haagerup. Our main result, an efficient rounding procedure for this inequality, leads to a constant-factor polynomial time approximation algorithm for an optimization problem which generalizes the Cut Norm problem of Frieze and Kannan, and is shown here to have additional applications to robust principal component analysis and the orthogonal Procrustes problem.
Time permitting, we will mention the so-called operator space Grothendieck inequality, its applications to quantum information theory, and a new proof based on ideas from quantum information.
Based on several joint papers with Assaf Naor and Thomas Vidick.
Thursday, October 25:
Sparsest Cut on Bounded Treewidth Graphs
David Witmer
Abstract: We consider the non-uniform sparsest cut problem. Given an underlying capacitated graph G and demands between the vertices of G (forming a demand graph H), the goal is to find a bipartition
of the vertices that minimizes the ratio of the capacity of edges separated to the total demand separated by this partition. This is a generalization of the uniform sparsest cut problem, obtained by
placing unit demand between every pair of vertices. These problems have been well-studied over the past 25 years for the purpose of developing approximation algorithms.
In this talk, we give a 2-approximation algorithm for the non-uniform sparsest cut problem with running time [;n^{O(k)};], where k is the treewidth of the underlying graph G. We complement this
result by showing a hardness-of-approximation (even for treewidth-2 graphs) of 1/c, where c is the hardness of the max-cut problem on general graphs. This implies a hardness of 17/16 assuming P!=NP
and 1/0.878 assuming the Unique Games Conjecture for treewidth-2 graphs G.
Our algorithm rounds a Sherali-Adams LP relaxation. We also show that the integrality gap of this LP remains at least 2-epsilon, even after polynomially many rounds of Sherali-Adams and even for
treewidth-2 graphs G.
This is joint work with Anupam Gupta (CMU) and Kunal Talwar (Microsoft Research SVC).
Friday, October 26:
Learning and Testing Submodular Functions
Grigory Yaroslavtsev
Penn State
Abstract: Submodular functions capture the law of diminishing returns and can be viewed as a generalization of convexity to functions over the Boolean cube. Such functions arise in different areas,
such as combinatorial optimization, machine learning and economics. In this talk we will focus on positive results about learning such functions from examples and testing whether a given function is
submodular with a small number of queries.
For the class of submodular functions taking values in a discrete integral range of size R, we show a structural result giving a concise representation for this class. The representation can be described as a maximum over a collection of threshold functions, each expressed by an R-DNF formula. This leads to efficient PAC-learning algorithms for this class, as well as testing algorithms with running time independent of the size of the domain.
Friday, November 2:
List-Decoding Multiplicity Codes
Swastik Kopparty
Abstract: Multiplicity Codes allow one to encode data with just an epsilon fraction redundancy, so that even if a constant fraction of the encoded bits are corrupted, any one bit of the original data
can be recovered in sublinear time with high probability. These codes were introduced and studied recently in joint work with Shubhangi Saraf and Sergey Yekhanin.
I will talk about a new result showing that multiplicity codes also tolerate a large fraction of errors:
1. They can achieve "list-decoding capacity".
2. They can be locally list-decoded beyond half their minimum distance. In particular, we give the first polynomial time algorithms for decoding multiplicity codes up to half their minimum distance.
In simple terms, these are algorithms for interpolating a polynomial given evaluations of it and its derivatives, even if many of the given evaluations are wrong.
The first of these results is based on solving some kinds of algebraic differential equations. The second is based on a family of algebraically repelling "space-filling" curves.
Thursday, November 15:
Inverse Problems in Approximate Uniform Generation
Ilias Diakonikolas
University of Edinburgh
Abstract: We initiate the study of inverse problems in approximate uniform generation, focusing on uniform generation of satisfying assignments of various types of Boolean functions. In such an
inverse problem, the algorithm is given uniform random satisfying assignments of an unknown function f belonging to a class C of Boolean functions, and the goal is to output a probability
distribution D which is [;\epsilon;]-close, in total variation distance, to the uniform distribution over [;f^{-1}(1);].
-- Positive results: We prove a general positive result establishing sufficient conditions for efficient inverse approximate uniform generation for a class C. We define a new type of algorithm called
a densifier for C, and show (roughly speaking) how to combine (i) a densifier, (ii) an approximate counting / uniform generation algorithm, and (iii) a Statistical Query learning algorithm, to obtain
an inverse approximate uniform generation algorithm. We apply this general result to obtain a poly(n,1/[;\epsilon;])-time algorithm for the class of halfspaces; and a quasipoly(n,1/[;\epsilon;])-time
algorithm for the class of poly(n)-size DNF formulas.
-- Negative results: We prove a general negative result establishing that the existence of certain types of signature schemes in cryptography implies the hardness of certain inverse approximate
uniform generation problems. This implies that there are no subexponential-time inverse approximate uniform generation algorithms for 3-CNF formulas; for intersections of two halfspaces; for
degree-2 polynomial threshold functions; and for monotone 2-CNF formulas.
Finally, we show that there is no general relationship between the complexity of the "forward" approximate uniform generation problem and the complexity of the inverse problem for a class C -- it is
possible for either one to be easy while the other is hard.
The talk will be based on joint work with Anindya De (Berkeley) and Rocco Servedio (Columbia).
Wednesday, November 28:
Matching: A New Proof for an Ancient Algorithm
Vijay V. Vazirani
Georgia Tech
Abstract: For all practical purposes, the Micali-Vazirani algorithm, discovered in 1980, is still the most efficient known maximum matching algorithm (for very dense graphs, slight asymptotic
improvement can be obtained using fast matrix multiplication). However, this has remained a "black box" result for the last 32 years. We hope to change this with the help of a recent paper giving a
simpler proof and exposition of the algorithm.
In the interest of covering all the ideas, we will assume that the audience is familiar with basic notions such as augmenting paths and bipartite matching algorithm.
Wednesday, December 5:
Discrete proof of Majority is Stablest and applications to Semidefinite programming
Anindya De
Abstract: We give a new and simple induction based proof of the well-known 'Majority is Stablest' theorem. Unlike the previous proof, the new proof completely avoids use of sophisticated tools from
Gaussian analysis. As the main application, we show that a constant number of rounds of the Lasserre hierarchy can refute the Khot-Vishnoi instance of MAX-CUT.
Joint work with Elchanan Mossel and Joe Neeman.
Thursday, December 6:
Truth, Justice, and Cake Cutting
John Lai
Harvard University
Abstract: Cake cutting is a common metaphor for the division of a heterogeneous divisible good. There are numerous papers that study the problem of fairly dividing a cake; a small number of them also
take into account self-interested agents and consequent strategic issues, but these papers focus on fairness and consider a strikingly weak notion of truthfulness. In this paper we investigate the
problem of cutting a cake in a way that is truthful, Pareto-efficient, and fair, where for the first time our notion of dominant strategy truthfulness is the ubiquitous one in social choice and
computer science. We design both deterministic and randomized cake cutting mechanisms that are truthful and fair under different assumptions with respect to the valuation functions of the agents.
Friday, December 7:
Constructive Discrepancy Minimization by Walking on the Edges
Raghu Meka
Institute for Advanced Study
Abstract: Minimizing the discrepancy of a set system is a fundamental problem in combinatorics. One of the cornerstones in this area is the celebrated six standard deviations result of Spencer (AMS
1985): In any system of n sets in a universe of size n, there always exists a coloring which achieves discrepancy [;6\sqrt{n};]. The original proof of Spencer was existential in nature, and did not
give an efficient algorithm to find such a coloring. Recently, a breakthrough work of Bansal (FOCS 2010) gave an efficient algorithm which finds such a coloring. In this work we give a new randomized
algorithm to find a coloring as in Spencer's result based on a restricted random walk we call "Edge-Walk". Our algorithm and its analysis use only basic linear algebra and are "truly" constructive in that they do not appeal to the existential arguments, giving a new proof of Spencer's theorem and the partial coloring lemma.
Joint work with Shachar Lovett.
Friday, December 14:
Characterizing the Sample Complexity of Private Learners
Kobbi Nissim
Ben-Gurion University
Abstract: The notion of private learning [Kasiviswanathan et al. 08] is a combination of PAC (probably approximately correct) learning [Valiant 84] and differential privacy [Dwork et al. 06]. Kasiviswanathan et al. presented a generic construction of a private learner for finite concept classes, where the sample complexity depends logarithmically on the size of the concept class. For concept classes of small VC dimension, this sample complexity is significantly larger than what is sufficient for non-private learning.
In this talk I will present some of the known bounds on the sample complexity of private learners, and a recent characterization of the sample complexity as a combinatorial measure of the learned
concept class.
Joint work with Amos Beimel and Uri Stemmer, ITCS 2013.
Wednesday, December 19:
Isoperimetric and hypercontractive inequalities via the entropy method
Li-Yang Tan
Columbia University
Abstract: In the past few decades ideas and techniques from information theory have been successfully applied to solve problems in numerous areas of theoretical computer science: data structures,
communication complexity, gap amplification, compression, PRGs, extractors, etc. In this talk I will describe on-going work exploring connections between information theory and the analysis of
Boolean functions, and give a few examples of how information-theoretic methods can be used to prove (and sometimes sharpen) classical isoperimetric and hypercontractive inequalities.
Joint works with Eric Blais (MIT) and Andrew Wan (Harvard).
Contact xichen-at-cs-dot-columbia-dot-edu if you want to volunteer to give a talk (especially for students!). The talk can be about your or others' work. It can be anything from a polished
presentation of a completed result, to an informal black-board presentation of an interesting topic where you are stuck on some open problem. It should be accessible to a general theory audience. I
will be happy to help students choose papers to talk about. There is a mailing list for the reading group. General information about the mailing list (including how to subscribe yourself to it) is
available here. If you want to unsubscribe or change your options, send email to theoryread-request@lists.cs.columbia.edu with the word `help' in the subject or body (don't include the quotes), and
you will get back a message with instructions.
Comments on this page are welcome; please send them to xichen-at-cs.columbia.edu
|
{"url":"http://www.cs.columbia.edu/theory/f12-theoryread.html","timestamp":"2014-04-19T01:49:52Z","content_type":null,"content_length":"25653","record_id":"<urn:uuid:ce38c247-3c9a-41e3-94b6-49ed86c8bf26>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00565-ip-10-147-4-33.ec2.internal.warc.gz"}
|
René Descartes (1596 - 1650)
Descartes had a habit of staying in bed until 11am, yet he became a critical figure in the intellectual development of the world and is regarded as the modern founder of both mathematics and philosophy.
He wrote the first important book in the French language (rather than Latin) and enabled mathematicians to understand each other - even if no one else could. Mathematics is a language, and like any
language it has different dialects and alphabets.
Choosing the right notation is one way of making it universal, and indeed it is possible, for example, for a Chinese mathematician with no English to write mathematics that an English speaking
mathematician can read and understand.
If you read a pre-Descartes mathematics book you would have difficulty understanding the mathematics because the notation would be totally unfamiliar. For example, x^3 would be written as xxx.
Descartes was the first to start using modern style notation. He didn’t use our equals sign, but he did use plus and minus and represented known entities with a, b, c and unknowns with x, y, z.
He was born in La Haye en Touraine, France and, after completing a law degree, he lived the life of a gentleman, travelling to Paris and through Europe before settling in Holland. This was a haven in a continent ripped apart by religious intolerance, and Descartes settled down to write a book on physics.
Just as he was about to publish, the religious troubles caught up with him. He heard of Galileo’s arrest and as his book was also based on the Copernican view he stopped publication.
Instead he turned to write the Discours de la méthode, a treatise on science. This dismissed the Aristotelian logic on which most European thought was based. Mathematics, he felt, was the only certain thing and all thought should be based on this.
An appendix to this work was on geometry. It led to what we now know as Cartesian geometry and brought all the algebraic tools to the geometric arena - this developed into a subject which was rich in
results and techniques. Oughtred and al-Khwarizmi had also attempted to do this, but Descartes was more thorough than his predecessors.
It is said that the idea for coordinates came to Descartes in his bed. Lying in bed he saw a spider crawling on his ceiling, and realised that its position could always be determined by its distances
from the edges.
But mathematical discovery cannot always be done in bed. In 1649 Descartes broke the habit of a lifetime. Queen Christina of Sweden persuaded him to come to Stockholm and be her tutor. The winter was bitter that year but the young Queen insisted that her lessons commence at 5am. Poor Descartes, used to his leisurely, meditative mornings, succumbed to pneumonia and did not last the winter.
Descartes’ mathematics
Descartes basic idea was to use axes to define all the points in a plane. Today, we denote the vertical axis as the y axis and the horizontal one as the x axis. Each point then has coordinates (x,y)
where x and y denote the distance from these two axes.
Using this it is easy to find the algebraic form of a curve or locus. The circle is the locus of points that are at a fixed distance from its centre. Consider any point P on the circle with
coordinates (x,y). If the radius of the circle is r, then by Pythagoras' Theorem we have

x^2 + y^2 = r^2
We call this the equation of the circle. Using this we can discover algebraically the many geometric properties that it has. Let us deal with the unit circle whose equation is

x^2 + y^2 = 1
For example, let us introduce a line whose slope is 1. Its equation will be of the form

y = x + c
where c is the intercept on the y axis. Does the line intersect the circle - and if so where? Simple, we need to solve the equations simultaneously - when they meet they share the same values for x and y. We'll concentrate on the x coordinate. The equation of the circle can be written

y^2 = 1 - x^2

and substituting the equation of the line into this gives

(x + c)^2 = 1 - x^2
this equation tells us the x coordinate of the points where the line and the circle meet - and it is points, because the equation is a quadratic with two roots. Big deal you say - you’ve done all
that algebra just to prove that a line intersects a circle in two points! Not so fast. It gives a great deal more. Let's first write the equation in the usual form

2x^2 + 2cx + (c^2 - 1) = 0
The roots of this equation are determined by the value of a quantity called the discriminant.
For the equation ax^2+bx+c=0 this is b^2-4ac. In our case it is

(2c)^2 - 8(c^2 - 1)
which may be simplified to

4(2 - c^2)
Can the line be a tangent? In this case, the two roots of the equation must be equal and the discriminant zero. So the quantity inside the bracket must be zero, that is

2 - c^2 = 0
This equation for c has the two roots c=\pm\sqrt{2}; the line is a tangent to the circle for these values of c.
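For readers who like to check algebra with a computer, here is a minimal sketch (mine, not part of the original page) that evaluates the discriminant for a few intercepts c; the tangent case c = √2 gives a zero discriminant and a repeated root.

```python
import numpy as np

def line_meets_circle(c):
    # substituting y = x + c into x^2 + y^2 = 1 gives 2x^2 + 2cx + (c^2 - 1) = 0
    disc = (2 * c) ** 2 - 8 * (c ** 2 - 1)    # = 4(2 - c^2)
    roots = np.roots([2, 2 * c, c ** 2 - 1])  # x coordinates of any meeting points
    return disc, roots

for c in (0.0, 1.0, np.sqrt(2), 2.0):
    disc, roots = line_meets_circle(c)
    print(f"c = {c:.3f}: discriminant = {disc:+.3f}, roots = {np.round(roots, 3)}")
```

For c = 2 the discriminant is negative and the roots come out complex, confirming that the line misses the circle entirely.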
The method works just as well in three dimensions - though now there will be 3 coordinates. It may be extended to any number of dimensions, and the properties of curves and surfaces may be explored
through their equations. Geometry was never the same again following Descartes' breakthrough.
|
{"url":"http://www.counton.org/timeline/test-mathinfo.php?m=ren-descartes","timestamp":"2014-04-16T10:36:21Z","content_type":null,"content_length":"8543","record_id":"<urn:uuid:e9a6b0ac-7dd9-40ec-96fd-719c733c0937>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00327-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Errors in tomographic imaging of scalar and vector fields from underdetermined data
ASA 124th Meeting New Orleans 1992 October
2pAO13. Errors in tomographic imaging of scalar and vector fields from underdetermined data.
D. Keith Wilson
Dept. of Appl. Ocean Phys. and Eng., Woods Hole Oceanogr. Inst., Woods Hole, MA 02543
The usual error maps computed for underdetermined tomographic images can be very misleading for two reasons. First, the spatial correlation function of the actual field usually differs from the
correlation function used to solve the inverse problem. Second, the effect of smoothing by the projections is neglected. Numerical examples are used to demonstrate these points, and methods for
improving the inverse reconstructions are suggested. Tomographic imaging of flow fields is also discussed. Although it can be proven that the irrotational part of the flow that is interior to the
array is invisible to the measurements, this is shown not to be a fundamental impediment for flow tomography using the methods usually applied by geophysical tomographers, largely because they assume
a priori statistics of the spatial structure.
|
{"url":"http://www.auditory.org/asamtgs/asa92nwo/2pAO/2pAO13.html","timestamp":"2014-04-19T10:47:53Z","content_type":null,"content_length":"1516","record_id":"<urn:uuid:0f386e1e-1a05-4056-aa29-87650bf6f507>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00190-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Posts about Bayesian inference on Xi'an's Og
“We now think the Bayesian Programming methodology and tools are reaching maturity. The goal of this book is to present them so that anyone is able to use them. We will, of course, continue to
improve tools and develop new models. However, pursuing the idea that probability is an alternative to Boolean logic, we now have a new important research objective, which is to design specific
hardware, inspired from biology, to build a Bayesian computer." (p.xviii)
On the plane to and from Montpellier, I took an extended look at Bayesian Programming, a CRC Press book recently written by Pierre Bessière, Emmanuel Mazer, Juan-Manuel Ahuactzin, and Kamel Mekhnacha. (Very nice picture of a fishing net on the cover, by the way!) Despite the initial excitement at seeing a book whose final goal was to achieve a Bayesian computer, as demonstrated by the above quote, I soon found the book too arid to read due to its highly formalised presentation… The contents are clear indications that the approach is useful, as they illustrate the use of Bayesian programming in different decision-making settings, including a collection of Python codes, so it brings an answer to the what; but it somehow misses the how, in that the construction of the priors and the derivation of the posteriors is not explained in a way one could replicate.
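To make that criticism concrete, here is the sort of minimal prior-to-posterior computation I would have liked the book to spell out; the coin-bias setting, the grid, and the data below are entirely my own illustration, not taken from the book.

```python
import numpy as np

# Hypothetical example: infer a coin's bias theta from a few tosses,
# with a uniform prior over a discretised grid of theta values.
thetas = np.linspace(0.01, 0.99, 99)
prior = np.full_like(thetas, 1.0 / len(thetas))   # uniform prior

tosses = [1, 0, 1, 1, 1, 0, 1]                    # 5 heads, 2 tails
heads = sum(tosses)
tails = len(tosses) - heads

likelihood = thetas**heads * (1.0 - thetas)**tails
posterior = prior * likelihood
posterior /= posterior.sum()                      # normalise (Bayes' theorem)

print("posterior mean of theta:", (thetas * posterior).sum())
```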
“A modeling methodology is not sufficient to run Bayesian programs. We also require an efficient Bayesian inference engine to automate the probabilistic calculus. This assumes we have a
collection of inference algorithms adapted and tuned to more or less specific models and a software architecture to combine them in a coherent and unique tool.” (p.9)
For instance, all models therein are described via the curly brace formalism, summarised by a nested specification (rendered as an image in the original post), which quickly turns into an unpalatable object, as in this example taken from the online PhD thesis of Gabriel Synnaeve (where he applied Bayesian programming principles to the real-time strategy game StarCraft and developed an AI (or bot), BroodwarBotQ), a thesis that I found most interesting!
“Consequently, we have 21 × 16 = 336 bell-shaped distributions and we have 2 × 21 × 16 = 672 free parameters: 336 means and 336 standard deviations." (p.51)
Now, getting back to the topic of the book, I can see connections with statistical problems and models, and not only via the application of Bayes' theorem, when the purpose (or Question) is to take a decision, for instance in a robotic action. I still remain puzzled by the purpose of the book, since it starts with very low expectations of the reader, but hurries past notions like Kalman filters and Metropolis-Hastings algorithms in a few paragraphs. I do not get some of the details, like this notion of a discretised Gaussian distribution (I eventually found the place where the 672 prior parameters are "learned" in a phase called "identification".)
“Thanks to conditional independence the curse of dimensionality has been broken! What has been shown to be true here for the required memory space is also true for the complexity of inferences.
Conditional independence is the principal tool to keep the calculation tractable. Tractability of Bayesian inference computation is of course a major concern as it has been proved NP-hard
(Cooper, 1990).”(p.74)
The final chapters (Chap. 14 on “Bayesian inference algorithms revisited”, Chap. 15 on “Bayesian learning revisited” and Chap. 16 on “Frequently asked questions and frequently argued matters” [!])
are definitely those I found easiest to read and relate to. With mentions made of conjugate priors and of the EM algorithm as a (Bayes) classifier. The final chapter mentions BUGS, Hugin and… Stan!
Plus a sequence of 23 PhD theses defended on Bayesian programming for robotics in the past 20 years. And explains the authors’ views on the difference between Bayesian programming and Bayesian
networks (“any Bayesian network can be represented in the Bayesian programming formalism, but the opposite is not true”, p.316), between Bayesian programming and probabilistic programming (“we do not
search to extend classical languages but rather to replace them by a new programming approach based on probability”, p.319), between Bayesian programming and Bayesian modelling (“Bayesian programming
goes one step further", p.317), with a further (self-)justification of why the book sticks to discrete variables, and further, more philosophical sections referring to Jaynes and the principle of maximum entropy.
“The "objectivity" of the subjectivist approach then lies in the fact that two different subjects with same preliminary knowledge and same observations will inevitably reach the same conclusions."
Bayesian Programming thus provides a good snapshot of (or window on) what one can achieve in uncertain-environment decision-making with Bayesian techniques. It shows a long-term reflection on those notions by Pierre Bessière, his colleagues and students. The topic is most likely too remote from my own interests for the above review to be complete. Therefore, if anyone is interested in reviewing this book any further for CHANCE, before I send the above to the journal, please contact me. (Usual provisions apply.)
|
{"url":"http://xianblog.wordpress.com/tag/bayesian-inference/","timestamp":"2014-04-21T09:36:51Z","content_type":null,"content_length":"95399","record_id":"<urn:uuid:f216a7b0-e9ea-4f76-b961-8b2d51731c13>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00064-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Optimisation Problem
May 1st 2013, 11:55 PM #1
May 2013
Optimisation Problem
Hey guys, I'm having some problems with a math question for my uni homework and would greatly appreciate any assistance
A company wishes to construct a pipeline connecting an oil refinery to an offshore drilling platform. The platform is situated 12km down the shore from the refinery and 11km out to sea.
Constructing a pipeline in the ocean is more expensive than the one on land, so the company wants to run part of the pipeline along the shore and part of it through the ocean. The problem is to
find the length of the pipeline along the shore which minimises the cost.
1) Find expressions for the length of the pipeline along the shore and in the water in terms of the angle the pipeline makes with the shore, theta
2) What is the appropriate range of values that theta can have in this problem?
3) It costs $60K per km to construct a pipeline on land and $120K per km to construct a pipeline in water. Write down an expression for the total cost (in thousands) to build the pipeline in
terms of theta
4) Find the value of theta that gives the minimum cost.
5) Find the length of the land and water that minimise the cost and the cost itself.
Cheers, any help would be greatly appreciated.
Re: Optimisation Problem
First, let me advise you that posting questions in the appropriate sub-forum will get you quicker help. This question would belong in the Calculus sub-forum, as it requires differential calculus
to solve. I suggest reporting this post, using the Report Post feature, to the staff and request that your topic be moved appropriately.
Now, on to the problem...
Have you drawn a diagram? This is usually a very helpful step so that you may see what quantities require representation by a variable, and what the relationships between the various quantities are. What do you find?
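Once you have drawn the diagram and set up parts (1)-(3), a quick numerical scan can serve as a reality check on your calculus. The sketch below encodes my reading of the geometry (land leg 12 - 11/tan(theta), water leg 11/sin(theta)), so verify those expressions against your own diagram before trusting the numbers.

```python
import numpy as np

# theta = angle the ocean leg makes with the shore,
# valid from arctan(11/12) (pipe runs straight to the platform) up to pi/2
theta = np.linspace(np.arctan(11 / 12) + 1e-6, np.pi / 2, 200000)

land = 12 - 11 / np.tan(theta)     # km along the shore
water = 11 / np.sin(theta)         # km through the ocean
cost = 60 * land + 120 * water     # in thousands of dollars

i = np.argmin(cost)
print(np.degrees(theta[i]), land[i], water[i], cost[i])
# roughly 60 degrees, 5.65 km on land, 12.70 km in water, ~1863 (about $1.863M)
```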
|
{"url":"http://mathhelpforum.com/new-users/218464-optimisation-problem.html","timestamp":"2014-04-17T15:44:49Z","content_type":null,"content_length":"32677","record_id":"<urn:uuid:aa166b68-465f-43d0-ba1d-d263689620d1>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00640-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mixture Problems - Concept
Some word problems using systems of equations involve mixing two quantities with different prices. To solve mixture problems, knowledge of solving systems of equations is necessary. Most often,
these problems will have two variables, but more advanced problems have systems of equations with three variables. Other types of word problems using systems of equations include rate word problems
and work word problems.
Alright guys word problems can be some of the most intimidating problems you'll come across in your Math homework and I'm a Math teacher so I know that a lot of students tend to just skip them.
Please please don't skip them I promise if you try you guys you can do them. There's a certain kind of word problem we're going to look at today and that's where you're looking at the amount of cost
and the amount of quantities that go into a mixture. It's really relevant for anyone who goes into any kind of selling of products whether it be like a food item, like a mixed coffee brand where
you're combining like Colombian with Brazilian or something and you have to figure out how much to sell. Or maybe if you're making like makeup and you have like one product that's really expensive
that you use as half of your ingredients, the other ingredients are really cheap, and you want to figure out what the price of your selling item should be.
That's where you're going to use this kind of a problem. Here is a kind of a formula that might help you when you go through this. What you're going to do is have 2 quantities or ingredients that
are being mixed together to give you your mix. So you have the amount or the quantity of your first item times its price plus the amount or quantity of your second item times its price, that's going
to be equal to the amount of the mixture times the mixture price. It makes sense when you're looking at it now but I bet when you start seeing some problems you might get a little confused. So please
write this down somewhere where you can refer to it when you're going through your homework or watching the upcoming Brightstorm videos.
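To see the formula in action, here is a minimal sketch with made-up numbers (not from the video): blending a coffee that costs $8 per pound with one that costs $4 per pound to make 20 pounds of a $5-per-pound mix.

```python
import numpy as np

# quantity_1 + quantity_2 = 20            (total pounds of the blend)
# 8*quantity_1 + 4*quantity_2 = 5 * 20    (amount times price on each side)
A = np.array([[1.0, 1.0],
              [8.0, 4.0]])
b = np.array([20.0, 5.0 * 20.0])

q1, q2 = np.linalg.solve(A, b)
print(q1, q2)   # 5.0 pounds of the $8 coffee, 15.0 pounds of the $4 coffee
```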
amount price mixture word problem
|
{"url":"https://www.brightstorm.com/math/algebra/word-problems-using-systems-of-equations/mixture-problems/","timestamp":"2014-04-20T14:04:56Z","content_type":null,"content_length":"56407","record_id":"<urn:uuid:0856d3a7-715b-44b2-9ae2-4c9dfdb2fcb3>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00514-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Calculating Flange Capacity of Beams
elpoblano (Structural) 2 Oct 04 11:43
Thinking further on your problem: your beam is being used. To evaluate the max. wheel load, you need to ensure that the beam does not have undercuts or gouges made by the wheels on the upper side of the lower flanges. These are noticeable in a visual inspection.
In my experience, every time I encounter a gouge, I request a magnetic particle test of the gouge itself. However, if the gouge reduces the nominal thickness of the flange by 30% or more, I order a beam replacement.
Yes there are ways to repair the flange, but around my location the price of repair vs. replace is about the same.
One way is to weld a reinforcement plate under the bottom flange; this is restricted by the clearance left by your hoisting device. Depending on the monorail use frequency, you may have to weld the plate continuously along the flange. And that generates another problem: stress concentrations.
{"url":"http://www.eng-tips.com/viewthread.cfm?qid=103146","timestamp":"2014-04-20T15:52:59Z","content_type":null,"content_length":"31252","record_id":"<urn:uuid:460635d8-3dba-4a82-a65c-3824e99aca0c>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00111-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Integration using U-Substitution
May 27th 2011, 10:19 AM #1
May 2011
Integration using U-Substitution
I am revising for an exam and am stuck on one question.
$\int \sin^2(4t)\cos(4t)\,dt$
I have done the following:
$u^2 = \sin^2(4t)$
Now I am stuck and not sure where to go. Can anyone explain how to do this so I can see how they got the answer?
Why not just u = sin(4t)?
What is the derivative of $\frac{\sin^3(4t)}{12}~?$
Why do students always have to rely on u-substitution?
I would tell them that I know derivatives.
I can recognize a derivative when I see one.
I was trained that way.
I graduated with a degree in mathematics never having seen U-substitution.
We were required to call it what it is: anti-differentiation.
In fact, to get full credit, we had to identify the 'derivative form'.
Quote (Plato's post above).
This way to evaluate an integral is too hard for the mid-level student.
When you give him a function and ask him THIS IS A DERIVATIVE OF WHAT? imo you will confuse him.
But I believe, the student who leaerned it this way will be better than the student who learned it using u-substitutions.
But how on earth did you not see a u-substitution?
Surely you've learned trigonometric substitution, which is a substitution itself, so you should have learned the substitution technique before the trigonometric substitution.
That is a fair point. I do not remember. I guess by the time we needed
a trigonometric substitution, we could see why it made sense to use one.
Anyway, I think all of this is moot. I think that within twenty years calculus textbooks will not have a chapter on Techniques of Integration. Student users are already demanding to be allowed to use laptops connected to the internet. Here is why.
Students do get what they demand today.
basically "using a u-substitition" is THE SAME THING as recognizing that f(x) dx = g'(u) du.
one person sees adding a+b+c as a 1-step operation, another sees it as a 2-step operation. who is right?
in this particular problem, we have (something squared), along with something else that is very close to the derivative of (something).
typically, one uses "u" for "something", and hopefully, what else is left is close enough to "du" that we can fiddle with it, and make it work.
specifically, if we let u = sin(4t), then du = 4cos(4t) dt. well, we don't have 4cos(4t), we just have cos(4t).
but we can write cos(4t) = (1/4)(4cos(4t)) and take the factor of 1/4 outside the integral. so we have (1/4)∫ u^2 du
= u^3/12 + C = sin^3(4t)/12 + C.
now, let's pretend we never learned a thing about u-substitution. we're looking for some function f, such that:
f'(t) = sin^2(4t)cos(4t). it appears that f will have to be some combination of trig functions. so let's try:
f(t) = sin^a(mt) + cos^b(nt), and differentiate (does this seem like a reasonable guess?).
f'(t) = (am)sin^(a-1)(mt)cos(mt) - (bn)cos^(b-1)(nt)sin(nt).
compare this to sin^2(4t)cos(4t). well, the second term doesn't have a high enough power of sin,
so we might suppose that b = 0. if, in the first term, we let a = 3, m = 4, we get:
f'(t) = 12sin^2(4t)cos(4t). oh, so close! but it appears we are off by a factor of 12. but 12 is a constant, that's no trouble:
simply consider g(t) = f(t)/12, which will fix everything: g'(t) = (1/12)f'(t) = sin^2(4t)cos(4t), which is what we want to integrate.
so ONE anti-derivative of g'(t) is g(t), and any anti-derivative of g'(t) is of the form g(t) + C, that is:
sin^3(4t)/12 + C.
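Either route can also be checked mechanically with a computer algebra system; here is a minimal SymPy sketch (my addition, not part of the thread) that both integrates the original expression and confirms the answer by differentiation:

```python
import sympy as sp

t = sp.symbols('t')
integrand = sp.sin(4 * t) ** 2 * sp.cos(4 * t)

antiderivative = sp.integrate(integrand, t)
print(antiderivative)   # sin(4*t)**3/12 (or an equivalent form), up to a constant

# reality check: differentiating the claimed answer recovers the integrand
check = sp.simplify(sp.diff(sp.sin(4 * t) ** 3 / 12, t) - integrand)
print(check)            # 0
```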
Just in case a picture helps guide the mental logic on which Plato is curiously unwilling to introspect (glad he attacks the automatic habit of a u-sub, though!)...
We spot that the integrand looks like the result of a chain-rule differentiation, i.e. it might fit the bottom level of a chain-rule shape (drawn as a balloon diagram in the original post, with a key given in a spoiler). Next, anti-differentiate with respect to the dashed balloon, just like a u-sub, and that's why the solution is sin^3(4t)/12 + C. The general pattern, and the slightly different pattern for a 'trig sub', were also pictured in the original post.
Don't integrate - balloontegrate!
Balloon Calculus; standard integrals, derivatives and methods
Balloon Calculus Drawing with LaTeX and Asymptote!
basically "using a u-substitition" is THE SAME THING as recognizing that f(x) dx = g'(u) du.
one person sees adding a+b+c as a 1-step operation, another sees it as a 2-step operation. who is right?
in this particular problem, we have (something squared), along with something else that is very close to the derivative of (something).
typically, one uses "u" for "something", and hopefully, what else is left is close enough to "du" that we can fiddle with it, and make it work.
specifically, if we let u = sin(4t), then du = 4cos(4t). well, we don't have 4cos(4t), we just have cos(4t).
but we can write cos(4t) = (1/4)(4cos(4t)) and take the factor of 1/4 outside the integral. so we have (1/4)∫ u^2 du
= u^3/12 + C = sin^3(4t)/12 + C.
now, let's pretend we never learned a thing about u-substitution. we're looking for some function f, such that:
f'(t) = sin^2(4t)cos(4t). it appears that f will have to be some combination of trig functions. so let's try:
f(t) = sin^a(mt) + cos^b(nt), and differentiate (does this seem like a reasonable guess?).
f'(t) = (am)sin^(a-1)(mt)cos(mt) - (bn)cos^(b-1)(nt)sin(nt).
compare this to sin^2(4t)cos(4t). well, the second term doesn't have a high enough power of sin,
so we might suppose that b = 0. if, in the first term, we let a = 3, m = 4, we get:
f'(t) = 12sin^2(4t)cos(4t). oh, so close! but it appears we are off by a factor of 12. but 12 is a constant, that's no trouble:
simply consider g(t) = f(t)/12, which will fix everything: g'(t) = (1/12)f'(t) = sin^2(4t)cos(4t), which is what we want to integrate.
so ONE anti-derivative of g'(t) is g(t), and any anti-derivative of g'(t) is of the form g(t) + C, that is:
sin^3(4t)/12 + C.
The difference is that between understanding what is going on and following a set of rules learned by rote. Today the latter is a useless waste of time when it comes to calculus, since it is more reliable, when money and lives depend on the result, to use machine assistance. Now when using a machine in this way it is vitally important that you understand what is going on in principle, so that you can apply a reality check to the results (after all, it is still you who are legally liable for wrong results, not your computer or Stephen Wolfram).
I fear this thread is getting slightly derailed, but I would just summarise by saying that I learned the following rule, which is essentially the same thing as everyone else is applying, albeit
in a more general sense:
$\int f'(x)f(x)^n dx = \frac{[f(x)]^{n+1}}{n+1} + C$
It needs some slight adaptation here because the constants don't quite match, but you can easily take that into account when applying the rule.
Quote (the previous post).
Whatever works for you... I just find that mappings can be a touch more intuitive (for me) than substitutions.
Anyway, I'm really only posting again to point out that CB probably meant to quote 'Plato' and not Deveno. (Hence some sense of 'derailment'?) Tried the 'report' button instead but found a severe
warning about usage...
$\frac{du}{dt}=4\cos(4t)\Rightarrow\frac{1}{4}\,du=\cos(4t)\,dt$
Now you will be able to use the "du" term at the end of your integral in place of the cosine
to get an integral only in terms of "u".
You can compare your result using Plato's early statement.
Quote (CaptainBlack's post above).
yes. as a matter of practice, applying a u-substitution is sort of a (self-)test on how well you understand WHY they work. because if you do not, you'll have trouble picking the right "u".
as tom@ballooncalculus pointed out, it involves recognizing when you are seeing the results of derivative in which the chain rule has been used. this isn't a 100% hard-and-fast rule, but if the
integrand is of the form (f(t))(g(t)), it's a good thing to check out first.
often, people feel as if they know how to integrate, if they can evaluate most integrals they see. this is a bit misleading, as often, the problems posed in a class "already have answers", so
they aren't a perfect test. and remembering various forms for integrals, can be taxing on the memory (no doubt explaining the continuing popularity of lists of common integrals in print, and on
the internet). one is in much better shape (in terms of "knowing" how to integrate), if one knows that the various rote "rules" are actually theorems. i feel rather strongly, that no one, not an
undergraduate, not a college professor, nor a professional using mathematics for a living, should ever take a theorem on faith. mathematics is not, after all, a religion, but a way of expressing
knowledge. if you can't prove something, you have no right to claim you know it is true (although you may suspect it is). i use the word "prove" loosely, in the sense that you could (if given
enough time, and reference materials) prove it, at least to your own satisfaction (perhaps not to the satisfaction of the Inquisitors of the Grand Council of Rigorous Standards).
yes, i am afraid we ARE a bit off-topic. i suspect OP has already taken his exam, and is on to other things. and we are beating a dead horse, and preaching to the choir. CaptainBlack, Plato,
tom@ballooncalculus, et alius, don't need to read this rant. in a perfect world, i wish the original poster would. the reason one goes to school, is NOT to pass the courses (or shouldn't be).
when one is a NASA engineer, calculating a re-entry trajectory for a space-craft, no one cares what grade you got on your final, they want (correct) results.
This is a slow time so why not continue this?
Here is a quote from Keith Devlin.
I mean using properly -- calculators and computers does not represent a reduction in skill or the need for accuracy. On the contrary, successful use of today's computational aids requires far
greater mathematical skill, and much more mathematical insight, than we old timers had to master to get our sums right. In addition to ensuring that our students can get the right answer using
modern technology, we should also try to interest them in mathematics as a human creation, developed over the centuries to improve the quality of our lives. To do that, we need to show them some
of the many different ways that mathematics plays a major role in today's society, including some of the mathematics developed during our own lifetime. In my view, those who cry "Back to basics"
have got it wrong. The call should be "Forward to (the new) basics."
Who knows, if we answer that call, we might even produce a generation that is not math phobic or paralyzed by math anxiety.
That is exactly what I meant by my post.
In other words: We must adjust to the world that technology has thrust upon us.
We adjust or die.
|
{"url":"http://mathhelpforum.com/calculus/181814-integration-using-u-substutution.html","timestamp":"2014-04-17T00:53:50Z","content_type":null,"content_length":"96357","record_id":"<urn:uuid:e9afe937-84c0-4033-8196-06d5743deae4>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00482-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Positive Real Matrices
June 12th 2009, 07:20 AM #1
Jun 2009
In Michael Artin's "Alegbra" textbook, Chapter 4 section 3, he discussed positive real matrices. "They occur in applications and one of their most important properties is that they always have an
eigenvector whose coordinates are positive. Instead of proving this, let us illustrate it in the case of two variables by examining the effect of multiplying by a positive 2 X 2 matrix A on R^2."
He goes on to say that since the entries of this matrix A are positive, left multiplication by A carries the first quadrant S to itself [since e1 is carried to the first column of A and e2 to the
second column], i.e. S > AS > A^2S and so on. He continues "Now the intersection of a nested set of sectors is either a sector or a half line. In our case, the intersection Z = Intersection over
A^r S for all r>=0 turns out to be a half line. This is intuitively plausible, and it can be shown in various ways."
Is there a straightforward / direct way to prove this? I am struggling to think of something other than my geometric intuition that starting with the first quadrant, the sector keeps getting
smaller if you repeatedly multiply by a positive matrix...
In Michael Artin's "Alegbra" textbook, Chapter 4 section 3, he discussed positive real matrices. "They occur in applications and one of their most important properties is that they always have an
eigenvector whose coordinates are positive. Instead of proving this, let us illustrate it in the case of two variables by examining the effect of multiplying by a positive 2 X 2 matrix A on R^2."
He goes on to say that since the entries of this matrix A are positive, left multiplication by A carries the first quadrant S to itself [since e1 is carried to the first column of A and e2 to the
second column], i.e. S > AS > A^2S and so on. He continues "Now the intersection of a nested set of sectors is either a sector or a half line. In our case, the intersection Z = Intersection over
A^r S for all r>=0 turns out to be a half line. This is intuitively plausible, and it can be shown in various ways."
Is there a straightforward / direct way to prove this? I am struggling to think of something other than my geometric intuition that starting with the first quadrant, the sector keeps getting
smaller if you repeatedly multiply by a positive matrix...
here's a proof, instead of just a geometric illustration for $2 \times 2$ matrices:
let $A$ be a $k \times k$ positive real matrix and $\lambda$ the eigenvalue of $A$ such that $|\lambda|$ is as large as possible. clearly $|\lambda| > 0$ because otherwise all the eigenvalues of
$A$ would be 0 and so $A$ would be
nilpotent, which is impossible since $A$ is positive. let $\bold{a}=\begin{bmatrix}a_1 & a_2 & \cdots & a_k \end{bmatrix}^T$ be an eigenvector of $A$ corresponding to $\lambda.$ define $|\bold{a}
|=\begin{bmatrix}|a_1| & |a_2| & \cdots & |a_k| \end{bmatrix}^T.$ the claim is that $|\bold{a}|$ is also an
eigenvector of $A,$ which will prove the problem because $|a_j| > 0$ for all $j$ (why?). from now on, we'll use this notation: for any matrices $X,Y$ with the same dimension we write $X > Y \ (X \geq Y)$
if all the entries of $X-Y$ are positive (non-negative). back to our problem: let $A|\bold{a}|=\bold{c}$ and $\bold{c}-|\lambda||\bold{a}|=\bold{b}.$ we only need to prove that $\bold{b}=\bold
{0}.$ it's easy to see that $\bold{b} \geq \bold{0}.$ so if $\bold{b} \neq \bold{0},$ then
$A \bold{b} > \bold{0}.$ therefore there exists a real number $r > 0$ such that $A \bold{b} > r \bold{c},$ because clearly $\bold{c} > \bold{0}.$ using the definition of $\bold{b}$ we get: $\frac
{1}{r + |\lambda|} A\bold{c} > \bold{c},$ and hence $\left(\frac{1}{r+|\lambda|}A \right)^n \bold{c} > \bold{c},$ for all positive
integers $n.$ call this (1). now if $\mu$ is an eigenvalue of $\frac{1}{r+|\lambda|}A,$ then $\mu(r+|\lambda|)$ will be an eigenvalue of $A$ and thus we must have $|\mu|(r+|\lambda|) \leq |\
lambda|.$ therefore $|\mu| < 1.$ so, by what we proved
in an earlier thread (linked in the original post), we must have $\lim_{n\to\infty}\left(\frac{1}{r+|\lambda|}A \right)^n=\bold{0}.$ but then (1) will give us the contradiction $\bold{0} \geq \bold{c}. \ \ \Box$
Thanks! This seems like a good proof of the existence of an eigenvector in the first quadrant, but I'm not quite seeing how it proves that if you left multiply the first quadrant by a positive real matrix an arbitrary number of times and intersect over all such sectors, you end up with a half line instead of a sector.
Also, if one were to try to offer a geometric demonstration (say in the 2x2 acting on R^2 case), e1 and e2 keep moving further "into" the first quadrant as you transform them repeatedly ( S > AS
> A^2S etc), but how do you argue that they don't each converge to a separate half line (leaving a sector), rather than to the same half line?
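A quick numerical experiment may help with that intuition: repeatedly applying A to the two edge vectors e1 and e2 (and renormalising) drives both to the same direction, the Perron eigenvector. This is just power iteration, and the matrix below is a made-up positive example.

```python
import numpy as np

A = np.array([[2.0, 1.0],    # any matrix with all entries positive will do
              [1.0, 3.0]])

e1 = np.array([1.0, 0.0])    # edges of the first-quadrant sector S
e2 = np.array([0.0, 1.0])

for _ in range(50):
    e1 = A @ e1; e1 /= np.linalg.norm(e1)   # normalise so the vectors don't blow up
    e2 = A @ e2; e2 /= np.linalg.norm(e2)

print(e1, e2)   # both converge to the same unit vector, so the sector A^r S
                # collapses onto a single half line through the Perron eigenvector
```

Of course this only illustrates the claim for one matrix; it is the strict dominance of the largest eigenvalue that forces the two edges together, as in NonCommAlg's argument above.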
|
{"url":"http://mathhelpforum.com/advanced-algebra/92630-positive-real-matrices.html","timestamp":"2014-04-16T14:11:41Z","content_type":null,"content_length":"47231","record_id":"<urn:uuid:3781eaf5-a144-426e-9232-c910f7390939>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00244-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Bausman Math Tutor
Hello there! My name is Leah F. Although I am new to Pennsylvania, I am not new to teaching!
24 Subjects: including algebra 2, algebra 1, grammar, geometry
...I received a certificate of graduation from Fairwood Bible Institute, a three-year Bible school in Dublin NH. Five years after graduation I participated in a one-year program offered to
graduates. This program was divided between classroom and fieldwork.
13 Subjects: including algebra 1, algebra 2, calculus, geometry
...If you need to improve your math skills for a math class or a science class, I can help you. I have also found that students who can see the connections between scientific concepts and the real
life, are more interested in the science topics and perform better overall in science class. I draw o...
29 Subjects: including logic, algebra 1, algebra 2, ACT Math
...I have taught middle and high school for 3 years. Additionally, I have worked as a TSS for the past six months. I am able to teach to any different learning style, and would ask specific
questions to learn in what way each individual learns the best.
11 Subjects: including algebra 1, prealgebra, biology, GED
...I find that Dyslexia has the biggest impact when studying for Math and Reading, but it can hinder any subject. This can be very frustrating. I believe the best way to get through a lesson is
work at a slower pace, or whatever the student is comfortable with.
11 Subjects: including SAT math, ACT Math, algebra 1, biology
|
{"url":"http://www.purplemath.com/Bausman_Math_tutors.php","timestamp":"2014-04-18T11:19:55Z","content_type":null,"content_length":"23304","record_id":"<urn:uuid:eafba499-0551-4c32-901e-fee8e86412b7>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00356-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The re-evaluation of Calculus at Macalester College began about six years ago with the realization that the traditional sequence of Calculus I and II was not meeting the needs of our students.
From questionnaires distributed to the students and from transcript records obtained from our registrar, we learned that about 75% of the students taking Calculus I had studied calculus in high
school. For 70% of those who took the course, it would be their last calculus class. With a pass rate of C or higher near 90%, the attrition was primarily because most of these students were in a
major, usually either Biology or Economics, that only required a single semester of calculus. It made no sense to teach Calculus I as if these students had never seen calculus before, or as if
they were taking it as the first half of a year-long introduction to calculus.
Calculus I also was not serving as a gateway into the major. Of the 80–90 students who took Calculus I each year, at most 3 or 4 would eventually take a junior- or senior-level math course. Of
the roughly 25 math majors who graduated each year, we seldom had more than one who had studied Calculus I at Macalester.
So we needed a course that would be fresh and interesting to students who had studied calculus in high school, a course that would stand on its own and be relevant to students for whom this would
be their terminal calculus course.
Another important factor in our re-evaluation of calculus was the report of the Biology group in the Curriculum Foundations Workshop that had been sponsored by the MAA’s Committee on Curricular
Renewal Across the First Two Years (CRAFTY), held at Macalester in November, 2000. We learned from the assembled biologists that Macalester was typical in that it required two semesters of
mathematics for its biology majors. For us, this translated into a semester of calculus and a semester of statistics. But what we were doing did not match the needs of these majors. In the words
of the report, “Statistics, modeling and graphical representation should take priority over calculus.”[1, p. 15] The way we were teaching calculus made it almost useless to the biology majors,
and the statistics, while useful, was limited to univariate statistics. What biologists really need and use is multivariate statistics.
Inspired by programs such as the one at the US Military Academy at West Point [2] and with the leadership of our mathematical biologist, Danny Kaplan, we reconstructed the calculus class from the
ground up, creating a course that we now call Applied Calculus. There were several absolutes that were established at the beginning. First, the emphasis had to be on differential equations and
systems of differential equations. By the end of semester, students needed to be comfortable reading and interpreting as well as constructing such equations. While some attention would be paid to
finding exact solutions, numerical techniques and qualitative analysis of solutions would dominate their study. Second, functions of several variables would be included as early and as often as
was practical. In particular, graphic representations of functions of two variables would be introduced early in the semester, and the introduction of partial derivatives would immediately follow
the discussion of the derivative of a function of a single variable. Third, the last two or three weeks of the semester would be reserved for a geometric introduction to linear algebra, setting
the stage for a subsequent statistics class that could spend much of its time on multivariate statistics.
Other key aspects of the course emerged as it was field tested over the first two years. The course begins with a review of functions as models of types of behavior. Thus, for example, the sine
function is a useful model of periodicity. Time is spent ensuring that students know how to modify the sine (adjusting amplitude, period and location of extrema) to get it to fit a given set of
periodic data. The derivative and partial derivatives are introduced at the same time. The point of emphasis is their use as models of rates of growth (or decay).
Nothing conveys what has been stressed within a course as effectively as the final examination. I have posted the nine final exam questions from Fall, 2006 (the last time I taught this course) at
[3]. Several members of my department are now working on a description of our Applied Calculus course that will appear in a forthcoming MAA Notes volume being edited by Glenn Ledder and
tentatively titled Undergraduate Mathematics for the Life Sciences: Processes, Models, Assessment, and Directions.
Macalester is too small an institution to be able to offer both a traditional Calculus I and our revamped Applied Calculus. Our traditional sequence has disappeared. Applied Calculus serves both as
a terminal course and as an introduction to the ideas of calculus. It is also taken by students who arrive with credit with first semester calculus but who are uncertain whether or not they want
to continue toward advanced mathematics. For those who do seek preparation for more advanced courses, we offer a single semester of Single Variable Calculus that is appropriate for both the
students who have come out of Applied Calculus and for those who arrive at Macalester with credit for a semester’s worth of Calculus.
We have not yet done a transcript analysis to parallel the one done at the start of this process, but the informal reports from students and the departments whose majors are served by this course
suggest that our change has been very positive. The students feel that they are learning skills and understandings that are truly useful. And we find that there are students whose interest in
mathematics is re-kindled by this course. It is feeding students into our Single- and Multivariable Calculus courses.
Macalester’s solution will not work at every college or university, but our basic premise is one that I wish everyone would adopt: We must take a long and honest look at our calculus sequence.
What are the backgrounds and needs of the students it is serving? How well does it serve those needs? How can we re-imagine these courses so that they better serve the students we have?
[1] Ganter, Susan, and William Barker, eds., Curriculum Foundations Project: Voices of the Partner Disciplines, 2004, Washington, DC: The Mathematical Association of America. Available at
[2] US Military Academy’s Applied Calculus is described on the course website at www.dean.usma.edu/departments/math/courses/ma103/
[3] Bressoud, David M., Report on Calculus at Macalester College, presentation at Joint Math Meetings, January 8, 2007. Available at www.macalester.edu/~bressoud/talks/NewOrleans/
What are two symbolic techniques used to solve linear equations? Which do you feel is better? Explain why.
Elimination and substitution are two methods. If you readily see how to do the elimination, it is quicker. Substitution, however, is much more general and completely independent of immediate perception. So each technique has its own peculiar merit. Consequently, I doubt that there is a single criterion that permits ranking. Just my opinion. Others may disagree; see what they say.
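For a concrete illustration, here is a small sympy sketch of both techniques on an assumed example system (2x + y = 5 and x - y = 1; the system and numbers are illustrative, not from the question):

    # Two symbolic techniques on the example system 2x + y = 5, x - y = 1.
    from sympy import symbols, Eq, solve

    x, y = symbols('x y')

    # Substitution: solve one equation for y, substitute into the other.
    y_expr = solve(Eq(x - y, 1), y)[0]          # y = x - 1
    x_val = solve(Eq(2*x + y_expr, 5), x)[0]    # 2x + (x - 1) = 5  =>  x = 2
    print(x_val, y_expr.subs(x, x_val))         # 2 1

    # Elimination: adding the two equations cancels y (3x = 6, so x = 2);
    # handing solve() the whole system confirms the same answer.
    print(solve([Eq(2*x + y, 5), Eq(x - y, 1)], [x, y]))   # {x: 2, y: 1}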
Theoretical phase diagram for a system of particles interacting through the potential (2.2) with φ(x) = x^{-6} and z_1 = 12. T_c is the critical temperature and β_c = (k_B T_c)^{-1}. The critical-point coordinates, ρ_c and T_c, follow from requiring that the first- and second-order density derivatives of the fluid pressure be simultaneously zero. One thus finds ρ_c = ρ_0/3 and k_B T_c = (8/27) a ρ_0, with a = (2π/3) ε σ^3. Top: phase diagram on the density-temperature plane, showing the extent of the coexistence regions; the triple temperature is between 0.6 and 0.65 of T_c. Bottom: phase diagram on the temperature-pressure plane, reporting as blue crosses also the (T, P) points characterizing the solid-liquid coexistence states borne out of the decay of the metastable-liquid states at various T_in values, for x_g = 0.001 (see Sec. III B).
Final equilibrium state after the adiabatic decay of the metastable liquid under constant-volume conditions, for T_m = 0.8 T_c. Top: temperature; bottom: pressure.
Final equilibrium state after the adiabatic decay of the metastable liquid under constant-volume conditions, for T_m = 0.8 T_c and for two different amounts of foreign gas in the vessel (crosses, x_g = 0.001; squares, x_g = 0.1). Top: temperature; bottom: pressure.
Top: solid fraction in the equilibrium state resulting from the adiabatic decay of the metastable liquid under constant-volume conditions, for T_m = 0.8 T_c and for two different amounts of foreign gas in the vessel (crosses, x_g = 0.001; squares, x_g = 0.1). Bottom: entropy of the solid-liquid mixture at T_fin (solid lines) vs. entropy of the supercooled liquid at T_in (dotted lines).
Final equilibrium state after the adiabatic decay of the metastable liquid under constant-volume conditions, for T_m = 0.8 T_c and for two different amounts of foreign gas in the vessel (top panel, x_g = 0.001; bottom panel, x_g = 0.1). Volume of the solid-liquid mixture (solid lines) vs. volume of the supercooled liquid at T_in (dotted lines).
Final equilibrium state after the adiabatic decay of the metastable liquid at constant pressure, for T_m = 0.8 T_c. Top: volume of the solid-liquid mixture at T_m (solid line) vs. volume of the liquid at T_in (dotted line); bottom: solid fraction in the mixture.
Difference in specific entropy between the droplet-liquid mixture at T_fin and the original metastable liquid at T_in, as a function of the droplet "radius". Two values of N are considered, 1000 (red curves, left) and 10 000 (blue curves, right), for T_m = 0.8 T_c. For each N, various T_in/T_c values were considered: from top to bottom, 0.57, 0.60, 0.63 for N = 10^3; and 0.60, 0.65, 0.70 for N = 10^4.
Scitation: A maximum-entropy approach to the adiabatic freezing of a supercooled liquid
Finding PI using MPI collective operations
This exercise presents a simple program to determine the value of pi. The algorithm suggested here is chosen for its simplicity. The method evaluates the integral of 4/(1+x*x) between 0 and 1, which equals pi. The method is simple: the integral is approximated by a sum of n intervals; the approximation to the integral in each interval is (1/n)*4/(1+x*x), with x taken at the midpoint of the interval. The master process (rank 0) asks the user for the number of intervals; the master should then broadcast this number to all of the other processes. Each process then adds up every size'th interval (the intervals i = rank, rank+size, rank+2*size, ..., with midpoints x = (i+0.5)/n). Finally, the sums computed by each process are added together using a reduction.
Complete the missing arguments in the following pseudo code. Choose either the Fortran or the C example.
MPI routines: MPI_Bcast, MPI_Reduce
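A possible C solution along those lines (a sketch only, not the course's official answer; n is read on rank 0, shared with MPI_Bcast, and the partial sums are combined with MPI_Reduce):

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[]) {
        int rank, size, n = 0, i;
        double h, x, local = 0.0, pi = 0.0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0) {                 /* master asks for the interval count */
            printf("Number of intervals: ");
            scanf("%d", &n);
        }
        MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);

        h = 1.0 / n;
        for (i = rank; i < n; i += size) {   /* every size'th interval */
            x = h * (i + 0.5);               /* midpoint of interval i */
            local += h * 4.0 / (1.0 + x * x);
        }
        MPI_Reduce(&local, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0) printf("pi ~= %.12f\n", pi);
        MPI_Finalize();
        return 0;
    }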
commutative triangle
Commutative triangles
Let $C$ be a category. A triangle of morphisms of $C$ consists of objects $X,Y,Z$ of $C$ and morphisms $f\colon X \to Y$, $g\colon Y \to Z$, and $h\colon X \to Z$. This is often pictured as a diagram:
$\array { X & \overset{f}\rightarrow & Y \\ & \searrow^{h} & \downarrow^{g} \\ & & Z }$
The triangle is commutative if $h = g \circ f$.
A commutative triangle is determined entirely by $f$ and $g$; therefore, a commutative triangle is equivalent to a composable pair of morphisms.
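For example, take $X = Y = Z = \mathbb{R}$ (in Set) with $f(x) = x + 1$ and $g(y) = 2 y$; the triangle built from $f$, $g$ and a third morphism $h$ commutes precisely when $h(x) = g(f(x)) = 2 x + 2$.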
Accordingly, one rarely hears of commutative triangles on their own; instead, the concept only comes up when one already has a triangle and asks whether it commutes. (This is different from the
situation with commutative squares.)
Summary: Physics 408 -- Exam 3 Name
You are graded on your work, with partial credit where it is deserved.
Please give clear, well-organized, understandable solutions.
h = 6.63 × 10^-34 J s [Planck's constant]    k = 1.38 × 10^-23 J/K [Boltzmann constant]
c = 3.00 × 10^8 m/s [speed of light]    G = 6.67 × 10^-11 N m^2/kg^2 [gravitational constant]
m = 1.67 × 10^-27 kg [mass of neutron]    M_⊙ = 1.99 × 10^30 kg [solar mass, i.e. mass of Sun]
The variables have their usual meanings: E = energy, S = entropy, V = volume, N = number of particles, T = temperature, P = pressure, μ = chemical potential, B = applied magnetic field, C = heat capacity at constant volume, F = Helmholtz free energy, k = Boltzmann constant. Also, ⟨...⟩ represents an average.
1. The Gibbs free energy G is defined by G = ⟨E⟩ − TS + PV.
(a) (5) Using the standard expression for d⟨E⟩, obtain dG in terms of dT, dP, and dN.
dG = d⟨E⟩ − d(TS) + d(PV) = (T dS − P dV + μ dN) − (T dS + S dT) + (P dV + V dP) = −S dT + V dP + μ dN
(b) (5) Then obtain S, V, and μ as partial derivatives of G.
From dG = −S dT + V dP + μ dN: S = −(∂G/∂T)_{P,N}, V = (∂G/∂P)_{T,N}, μ = (∂G/∂N)_{T,P}
Solve: x/x-1 = x/2 - (x+1)/(x+2)
Simplify: x^2+x-20/ (5x-20)
Multiply: (x^2-x-6)/(x^2+4x+3) * (x^2-x-12)/(x^2-2x-8)
Please show step-by-step answers for all. Thank you.
I don't understand these and I would like to, please help!!
`A) x/x - 1 = x/2 - (x+1)/(x+2)`
`B) (x^2+x-20)/(5x-20) = ((x+5)(x-4))/(5(x-4)) = (x+5)/5`
`C) (x^2-x-6)/(x^2+4x+3) xx (x^2-x-12)/(x^2-2x-8) = ((x-3)(x+2))/((x+1)(x+3)) xx ((x-4)(x+3))/((x+2)(x-4)) = (x-3)/(x+1)`
For A), go by the priority rules of operations (PEMDAS): x/x - 1 = 0 for x ≠ 0. On the RHS, convert the fractions to a common denominator:
0 = x/2 - (x+1)/(x+2) = (x(x+2) - 2(x+1))/(2(x+2)) = (x^2 - 2)/(2(x+2))
Multiply both sides by the denominator 2(x+2): x^2 - 2 = 0, so
x = sqrt(2) or x = -sqrt(2)
To simplify x^2+x-20/(5x-20) exactly as written: x^2+x-20/(5x-20) = x^2+x-20/{5(x-4)} = x^2+x-4/(x-4). There is no further simplification.
But if you intend x^2+x-20 to be divided by (5x-20), then you should write it as (x^2+x-20)/(5x-20).
Then x^2+x-20 = (x+5)(x-4) is the dividend
and (5x-20) = 5(x-4) is the divisor. Therefore x^2+x-20 divided by 5x-20
is (x^2+x-20)/(5x-20) = ((x+5)(x-4))/{5(x-4)} = (x+5)/5, or x/5 + 1
((x-3)(x+2))/((x+3)(x+1)) * ((x-4)(x+3))/((x+2)(x-4))
= ((x-3)(x+2)(x-4)(x+3)) / ((x+3)(x+1)(x+2)(x-4)) = (x-3)/(x+1)
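A quick sympy check of all three results (reading the first equation as x/x - 1, as above):

    from sympy import symbols, solve, cancel

    x = symbols('x')
    # A) x/x - 1 = x/2 - (x+1)/(x+2); sympy reduces x/x to 1, so the LHS is 0.
    print(solve(0 - (x/2 - (x + 1)/(x + 2)), x))       # [-sqrt(2), sqrt(2)]
    # B)
    print(cancel((x**2 + x - 20) / (5*x - 20)))        # (x + 5)/5
    # C)
    print(cancel((x**2 - x - 6)/(x**2 + 4*x + 3)
                 * (x**2 - x - 12)/(x**2 - 2*x - 8)))  # (x - 3)/(x + 1)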
1. BMI is an acronym for Body Mass Index, a calculation used to estimate body fat and to determine whether or not a subject is at a healthy weight.
Defining BMI
□ BMI is a number calculated from a person's height and weight.
BMI does not measure body fat directly, but it is a fairly reliable indicator of body fatness.
□ People with a BMI over 30 are considered obese since they are typically about 30 pounds over their ideal weight for their height.
□ Those with a BMI of 40 or more are considered morbidly obese, although some doctors apply this label to people with a BMI of 35 or above who also have obesity-related medical conditions that
substantially affect their quality of life.
□ BMI is not a diagnostic tool to determine health risk. Other assessments would be needed such as skinfold thickness measurement, diet evaluation, physical activity review and family history.
□ BMI calculations alone can be somewhat misleading. Race, sex, age, and ethnicity are not taken into account when making the basic calculations.
□ Statistics may be somewhat inflated when dealing with athletes and others who have a high muscle mass, since muscle simply weighs more than an equivalent amount of fat.
□ BMI calculations are often artificially low when working with the elderly and those who have lost body mass.
An example of BMI is a BMI of 30, which is considered obese.
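The calculation behind these numbers is weight in kilograms divided by the square of height in metres. A small sketch (the 30 and 40 cutoffs follow the definition above; the 18.5 and 25 cutoffs are the standard complementary ones, not stated here):

    def bmi(weight_kg, height_m):
        # Body Mass Index: weight (kg) divided by height (m) squared.
        return weight_kg / height_m ** 2

    def bmi_category(b):
        if b >= 40:
            return "morbidly obese"
        if b >= 30:
            return "obese"
        if b >= 25:            # standard cutoff, assumed
            return "overweight"
        if b >= 18.5:          # standard cutoff, assumed
            return "healthy weight"
        return "underweight"

    b = bmi(98, 1.78)                      # 98 kg, 1.78 m
    print(round(b, 1), bmi_category(b))    # 30.9 obese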
Poisson brackets and angular momentum
1. The problem statement, all variables and given/known data
Let f(q, p), g(q, p) and h(q, p) be three functions in phase space. Let Lk =
ε[lmk]q[l]p[m] be the kth component of the angular momentum.
(i) Define the Poisson bracket [f, g].
(ii) Show [fg, h] = f[g, h] + [f, h]g.
(iii) Find [q[j] , L[k]], expressing your answer in terms of the permutation symbol.
(iv) Show [L[j], L[k]] = q[j]p[k]−q[k]p[j]. Show also that the RHS satisfies q[j]p[k]−q[k]p[j] = ε[ijk]L[i]. Deduce [L[i], |L|^2] = 0.
[Hint: the identity ε[ijk]ε[klm] = δ[il]δ[jm] − δ[im]δ[jl] may be useful in (iv)]
2. Relevant equations n/a
3. The attempt at a solution
i) [f,g]=[itex]\frac{\partial f}{\partial q_i}\frac{\partial g}{\partial p_i}-\frac{\partial f}{\partial p_i}\frac{\partial g}{\partial q_i}[/itex]
ii) easy to show from the definition in i)
iii) after a bit of working, I get ε[ljk]q[l]
iv) my working is quite long, but I get [L[j],L[k]]=q[j]p[k]-q[k]p[j]=ε[ijk]L[i] as required.
The bit I'm having trouble with is the very last bit of the question, to deduce [L[i], |L|^2] = 0.
Since it's only a small part of the question, it seems as though this part should be fairly simple so maybe I'm overlooking something, but I don't get 0. This is my working:
[L[i], |L|^2]=[L[i], L[j]L[j]]=L[j][L[i], L[j]]+[L[i], L[j]]L[j]=2L[j][L[i], L[j]]
I'm not entirely sure where to go from here so any help (or pointing out of any glaring errors) would be great.
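One way to finish from that last line, using the result of (iv): relabelling [L[j], L[k]] = ε[ijk]L[i] gives [L[i], L[j]] = ε[kij]L[k], so

[itex][L_i, |L|^2] = 2L_j[L_i, L_j] = 2\epsilon_{kij}L_j L_k = 0[/itex]

since ε[kij] is antisymmetric under the exchange of j and k while the product L[j]L[k] of phase-space functions is symmetric, so the contraction vanishes.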
Medford, NJ Math Tutor
Find a Medford, NJ Math Tutor
(( HIGHEST RATINGS!!! )) PARENTS: Bring the full weight of a PhD, as tutor, and student advocate. Hello Students! If you need help with mathematics, physics, or engineering, I'd be glad to help
14 Subjects: including algebra 1, algebra 2, calculus, geometry
I went to school for computer engineering at Carnegie Mellon University, changed my major to chemical engineering and transferred to the University of Delaware where I completed my degree. I
started a business with a friend of mine, which I ran successfully for about 12 years. I then changed my career and became a teacher.
15 Subjects: including differential equations, linear algebra, algebra 1, algebra 2
...I currently teach life science, which is the NJ state 7th grade curriculum. Students who need science tutoring generally need help with recalling vocabulary and reviewing general concepts. As a special education teacher I have been required to adapt curricula, helping students recall and understand concepts.
10 Subjects: including prealgebra, reading, writing, biology
...I am happy to help school students with their homework assignments or project work. Also, I can do relaxed lessons individually or tailored exactly to what the pupil/student requires. I am
able to help my students to maximize their potential using targeted revision and exam techniques.
43 Subjects: including precalculus, statistics, SAT math, SPSS
...I believe the student will learn through reinforcement of basic skills and using that knowledge can build stronger skills. It also important that the student is confident in what they are
learning. If they are having a difficulty in one area, it is imperative to stop and not move on until that skill is accomplished.
19 Subjects: including prealgebra, reading, English, writing
Related Medford, NJ Tutors
Medford, NJ Accounting Tutors
Medford, NJ ACT Tutors
Medford, NJ Algebra Tutors
Medford, NJ Algebra 2 Tutors
Medford, NJ Calculus Tutors
Medford, NJ Geometry Tutors
Medford, NJ Math Tutors
Medford, NJ Prealgebra Tutors
Medford, NJ Precalculus Tutors
Medford, NJ SAT Tutors
Medford, NJ SAT Math Tutors
Medford, NJ Science Tutors
Medford, NJ Statistics Tutors
Medford, NJ Trigonometry Tutors
Nearby Cities With Math Tutor
Delran Township, NJ Math Tutors
Evesham Twp, NJ Math Tutors
Lindenwold, NJ Math Tutors
Lumberton Township, NJ Math Tutors
Lumberton, NJ Math Tutors
Maple Shade Math Tutors
Marlton Math Tutors
Medford Lakes, NJ Math Tutors
Medford Township, NJ Math Tutors
Merchantville Math Tutors
Moorestown Math Tutors
Mount Laurel Math Tutors
Mount Laurel Township, NJ Math Tutors
North Marlton, NJ Math Tutors
Pine Hill, NJ Math Tutors
Math Forum Discussions
Topic: calculators
Replies: 0
Posted: Aug 2, 1995 3:33 PM
Lenny VerMaas wrote about calculator use in the elementary school. I can
verify that we do use calculators in 5th grade but with some controls.
Calculators are used - as a matter of fact I supply T.I. Explorers whenever
we are working on problem solving where the method is more important than
the actual computation.
I also use calculators in teaching mult. and div. Once kids have learned
the process of multiplication or division, then I think it's also
important to show them that a division problem with a 4-digit divisor and
an 8-digit dividend is more efficiently done with a calculator. How can we
be sure that kids aren't using calculators on their homework? Easy -
require them to "show their work" OR don't make them take the work home.
G. Chelidze
On a Characterisation of Inner Product Spaces
It is well known that for the Hilbert space $H$ the minimum value of the functional $F_\mu(f)=\int_H\|f-g\|^2\,d\mu(g)$, $f\in H$, is achieved at the mean of $\mu$ for any probability measure $\mu$ with strong second moment on $H$. We show that the validity of this property for measures on a normed space having support at three points with norm 1 and arbitrarily fixed positive weights implies the existence of an inner product that generates the norm.
Re: st: Use extended functions outside of macro assignment?
Re: st: Use extended functions outside of macro assignment?
From Nick Cox <njcoxstata@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: Use extended functions outside of macro assignment?
Date Wed, 7 Sep 2011 08:41:51 +0100
The second example problem looks like one of selecting observations. I
think all ways of doing that reduce to a condition on the values of
some variable(s) to be given explicitly.
The first mentioned -regexm()-. Note that neither -ds- or -findname-
offers a direct way of using -regexm()-. I think you would need to
loop over variables, selecting those satisfying your -regexm()- call
and build up a macro containing a varlist that way. That's the small
trick used repeatedly inside -ds- and -findname-. As far as -findname-
is concerned, I drew short of building in that functionality because I
haven't needed it yet for myself and the syntax is already rather
complicated. And I knew of the way to do it just mentioned. (-ds-, I
think, predates -regexm()-.)
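An illustrative sketch of that loop (the pattern "myregex" is a placeholder, and here the match is against variable names; matching against value labels works the same way with the appropriate extended macro function):

    local matched
    foreach v of varlist _all {
        if regexm("`v'", "myregex") {
            local matched `matched' `v'
        }
    }
    list `matched'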
On Wed, Sep 7, 2011 at 7:06 AM, Nick Cox <njcoxstata@gmail.com> wrote:
> The logic of -list- with -if- is like any other command. Consider
> list if 2 == 2
> 2 == 2 is (vacuously) true when considered for observation 1, for
> observation 2, and so on, so every variable and every observation will
> be -list-ed. The same will be true of your syntax. If your condition
> is true, i.e. there is a match, then everything will be listed.
> What I think you want is a two-step operation to Stata. First, you
> produce a varlist, then you supply that varlist to some command. In
> official Stata, check out -ds- and in user-written Stata check out
> -findname-. -search findame- in an up-to-date Stata will find an
> article and an update in the Stata Journal; use the update to get the
> software and read the article!
> Nick
> On Tue, Sep 6, 2011 at 9:06 PM, James Sams <sams.james@gmail.com> wrote:
>> I keep running into situations where I would like to use an extended function
>> as a part of a function call, e.g. in a conditional statement. I thought I had
>> seen syntax to do this before, but I cannot find this now.
>> For example, let's say I wanted to operate on a set of observations dependent
>> upon some characteristic of a value label, say that the value label matches
>> some regular expression. How can I do this? I specifically cannot pull the value
>> label into a string variable with decode due to the size of the dataset:
>> Something like this:
>> list if regexm("`: label (varname)'", "my_regex")
>> or say I want to operate on a set of ids I have stored in a local macro "ids":
>> list if `: list posof id in ids'
Writing A Quadratic Equation In Vertex Form
What our customers say...
Thousands of users are using our software to conquer their algebra homework. Here are some of their experiences:
I am a parent of an 8th grader:The software itself works amazingly well - just enter an algebraic equation and it will show you step by step how to solve and offer clear, brief explanations,
invaluable for checking homework or reviewing a poorly understood concept. The practice test with printable answer key is a great self check with a seemingly endless supply of non-repeating
questions. Just keep taking the tests until you get them all right = an A+ in math.
Kara Lyssa, WI
Just when I thought I couldn't find the program to do the job, I found Algebrator and my algebra problems were gone! Thank you.
John Kattz, WA
I was confused initially whether to buy this software or not. But in five days I am more than satisfied with the Algebrator. I was struggling with quadratic equations and inequalities. The logical and step-by-step approach to problem solving has been a boon to me and now I love to solve these equations.
Joseph K., MN
Search phrases used on 2009-01-30 :
Students struggling with all kinds of algebra problems find out that our software is a life-saver. Here are the search phrases that today's searchers used to find our site. Can you find yours among them?
• statistic free woorksheet
• beginners algebra worksheets free
• algebraic equations to project salary
• ALGEBRA PROBLEMS WITH AN ANSWER KEY
• solving equations on a ti-86
• algebra 1 transforming formulas worksheets
• fraction common denominator calculator
• algebra for begginers
• online rational expression calculator
• free maths work sheets
• holts workbook for middle school variables and expressions
• matlab simplify square root
• algebra 2 answer key online
• cube root in fraction form
• solving for slope
• how do we do substitution problem in algebra
• online cubed root calculator
• solving math ratio problems for dummies
• glencoe mcGraw-hill math chapter 5 test for 2c
• glencoe algebra 1 worksheet answers
• matlab solving linear equations symbolic capability
• test "factor theorem" "inequalities
• solved apttitude questions
• linear programing percentage examples
• adding mixed fractions do it yourself
• 9th grade physics formulas and examples
• Accounting Books, PDF
• adding subtracting signed numbers worksheets
• liner equation
• Learning Basic Algebra
• free college math solvers
• TI-83 plus solving for imaginary numbers
• trigonometry quiz with answers
• how to find roots of square equation in maths
• free worksheets + substitutions
• prentice hall course 2 mathmatics work book
• solving equations by multiplying
• FREE ONLINE MATH HELP 3RD GRADE
• When simplifying how do you determine like factors and common factors?
• math ebooks free tussy and gustafson
• where can i find answers to holt algebra 2 with trigonometry
• multiplying and dividing integers game
• saxon math algebra 1/2 lesson 85
• rules for rational exponent simplification
• scientific calculator online percents
• LCM story problems
• Samples of Math Trivia
• graphing ordered pairs powerpoint
• "vocabulary from classical roots"
• English Aptitude Paper
• technics on intermediate algebra
• algebra 1 holt answers
• compound inequalities and square roots
• year 7 maths questions to print for free
• Exponents worksheet with Multiplication
• a free calculator that will determine the square roots of x
• year 9 online maths tests
• converting hexidecimal numbers to decimal numbers using the ti-89 calculator
• mechanics mcqs
• TI-84 plus programs- factor
• printable algebra games
• how to calculate scale factors for 8th grade
• algebra tutor
• show me step by step on how to figure out and work lowest common multiplyer in fraction
• free7 grade pre algrebra math printouts
• equation helper how to Express and slove fractional exponents.
• physics holt challenge problems
• free accounting MCQ
• pre primary homework sheet
• multiple symbol simultaneous equations Calculator
• Worksheets Highest Common Factor
• prentice hall advanced algebra workbook
• Grade 10th math algebra textbooks =
• pearson prentice hall algebra-help with graphing lines
• sample aptitude test pie charts
• calculate common denominator
• solving Equation using Lagrange's method
• California Math Test CAT/5 form B Level 16
• free online help algebra for dummies
• factoring exponential expression
• printable worksheets on finding common denominators
• gcse question paper for grade 9
• real life example of an ellipse
• 598478#post598478
• CPM algebra tile
• convolution for ti-89
• t189 calculator download
• "least common denominator"
• convert a mixed number to a simplest form
• math help with mcdougal littell
• vocabulary anwsers
• negative fractions printable worksheets
• solving radical equations with the princples of powers
• math free printouts exponent
• hardest math equation ever
• 5th grade math problem solving
• subtraction worded problems lesson plan
• Free 8th Grade Math Worksheets
• graphing calculator ti-83 online
• Math adding integer worksheet
[Numpy-discussion] Toward release 1.0 of NumPy
Charles R Harris charlesr.harris at gmail.com
Thu Apr 13 13:33:08 CDT 2006
On 4/13/06, Tim Hochberg <tim.hochberg at cox.net> wrote:
> Alan G Isaac wrote:
> >On Thu, 13 Apr 2006, Charles R Harris apparently wrote:
> >
> >
> >>The Kronecker product (aka Tensor product) of two
> >>matrices isn't a matrix.
> >>
> >>
> >
> >That is an unusual way to describe things in
> >the world of econometrics. Here is a more
> >common way:
> >http://planetmath.org/encyclopedia/KroneckerProduct.html
> >I share Sven's expectation.
> >
> >
> mathworld also agrees with you. As does the documentation (as best as I
> can tell) and the actual output of kron. I think Charles must be
> thinking of the tensor product instead.
It *is* the tensor product, A \tensor B, but it is not the most general
tensor with four indices just as a bivector is not the most general tensor
with two indices. Numerically, kron chooses to represent the tensor product
of two vector spaces a, b with dimensions n,m respectively as the direct sum
of n copies of b, and the tensor product of two operators takes the given
form. More generally, the B matrix in each spot could be replaced with an
arbitrary matrix of the correct dimensions and you would recover the general
tensor with four indices.
Anyway, it sounds like you are proposing that the tensor (outer) product of
two matrices be reshaped to run over two indices. It seems that likewise the
tensor (outer) product of two vectors should be reshaped to run over one
index (i.e. flat). That would do the trick.
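For what it's worth, the relationship under discussion is easy to check numerically: np.kron is exactly the rank-4 outer product with its index pairs interleaved and flattened (a small sketch with arbitrary shapes):

    import numpy as np

    A = np.arange(6).reshape(2, 3)
    B = np.arange(8).reshape(4, 2)

    K = np.kron(A, B)   # Kronecker product: block matrix of A[i, j] * B

    # The same matrix built from the rank-4 tensor product A[i, j] * B[k, l],
    # reshaped so rows run over (i, k) and columns over (j, l).
    T = np.einsum('ij,kl->ikjl', A, B).reshape(2 * 4, 3 * 2)

    assert np.array_equal(K, T)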
Weekly Problem 40 - 2013
The length of each side of a quadrilateral $ABCD$ is a whole number of centimetres. Given that $AB=4 \; \text{cm}$, $BC = 5 \; \text{cm}$ and $CD = 6 \; \text{cm}$, what is the maximum possible
length of the fourth side $DA$?
If you liked this problem, here is an NRICH task which challenges you to use similar mathematical ideas.
This problem is taken from the UKMT Mathematical Challenges.
[amsat-bb] Re: requesting help on a RF link solution (imaginary ka-bandlink!)
i8cvs domenico.i8cvs at tin.it
Wed Oct 28 10:13:43 PDT 2009
Hi Bob, W7LRD
I need to answer your question as well via AMSAT-BB because my emails sent to w7lrd at comcast.net are always rejected by your provider.
From the point of view of Amateur Radio the best I can suggest to you is the book " The Satellite Experimenters Handbook " by Martin Davidoff K2UBC 2nd Edition ARRL Order No 3185 ISBN 0-87259-318-5 and also the ARRL " UHF MICROWAVE Experimenters's Manual" ARRL Order No 3126 ISBN 0-87259-312-6
Those books are full of easy calculations that you can follow using a small scientific hand-held calculator. Very importantly, every chapter of the UHF/Microwave Experimenter's Manual is full of "References and Bibliography" that you can find and read/study to go deeply into the details of the above matter, covering circuits and antennas which are described there in hardware but also with related, easy-to-follow calculations.
At the beginning you must go slowly with the above two books, but after a few months you will improve and the above matter will become very familiar to you, provided that you extend your knowledge following the recommended References and Bibliography.
In AMSAT-BB I follow your experimental activity, particularly on the S band... congrats!
Best 73" de
i8CVS Domenico
----- Original Message -----
From: Bob- W7LRD
To: i8cvs
Sent: Wednesday, October 28, 2009 3:23 AM
Subject: Re: [amsat-bb] Re: requesting help on a RF link solution (imaginary ka-bandlink!)
Hello Domenico
I enjoy your posts, even though many are "out of my pay grade". Would you aim me towards a good tutorial place you may know of where I could learn some of the basics. I would like to gain a better understanding of this concept.
Thanks & 73
Bob W7LRD
----- Original Message -----
From: "i8cvs" <domenico.i8cvs at tin.it>
To: "Samudra Haque" <samudra.haque at gmail.com>, "Amsat-bb" <amsat-bb at amsat.org>
Sent: Tuesday, October 27, 2009 5:44:14 PM GMT -08:00 US/Canada Pacific
Subject: [amsat-bb] Re: requesting help on a RF link solution (imaginary ka-bandlink!)
----- Original Message -----
From: "Samudra Haque" <samudra.haque at gmail.com>
To: "Amsat-bb" <amsat-bb at amsat.org>
Sent: Tuesday, October 27, 2009 11:03 AM
Subject: [amsat-bb] requesting help on a RF link solution (imaginary
> Hi, amsat-bb
> CQ any satellite link budget expert !
> I'm trying to do a calculation on my own based upon published specs
> for the NASA MRO Ka-band experiment, but am getting some unexpected
> results for a Ka-band simplex link with Temp=3000K (hypothetical),
> operating with a Signal to Noise ratio (unitless) figure of 1.171
> (representing 4.5 dB eb/no with a data rate of 1 Gbps and a bandwidth
> of 2.4x10^9 Hz)
> Question : is 1 gbps not 1x10^9 bps ?
> Question : if both antennas are 3m parabolic (both are the same type)
> with 56.4 dBi boresight gain, what would you think the furthest
> distance the link can perform with SNR of 1.171. I have actually used
> a padding of 3 dB Eb/No in my link budget, so am not worried about any
> further signal loss at first (ok, I should be ..) For the exercise, I
> am choosing a 10 Watt estimated output on an arbitrary basis.
> So:
> P_t = 10W
> G_t = 56.4 dBi = G_r , can we assume the same gain for TX and RX on a
> parabolic dish ?
> T = 3000K at receiver
> SNR = 1.171 required
> f=32.2 GHz
> B = 2.4E9 Hz, (bpsk, ldpc code 0.5)
> DR = 1E9 bps
> So, I am puzzled why this link budget says the range with these
> parameters is equal to 4.644 x 10^9 Km -- that seems to be a long
> distance ! What am I not able to conceptualize.
> BTW, I know if I send this out, the answer will come to me soon
> thereafter, but for education, I would like to know where the problem
> in my understanding lies !
> Samudra N3RDX
Hi Samudra, N3RDX
If I well understand your question is to know what is the maximum
free space distance at which you can get a S/N ratio of 4.5 dB using
two identical transmitting and receiving systems having the following
1) Antenna gain for TX and RX = 56.4 dBi at 32.2 GHz
2) Frequency = 32.2 GHz
3) Overall receiving system noise temperature: T = 3000 kelvin
4) Bandwidth of receiving system = 2.4 x 10^9 Hz
5) TX power 10 W
6) Required Signal to Noise ratio S/N at the unknown distance = 4.5 dB
With the above data we first calculate the receiver noise floor Pn = KTB
K = Boltzmann constant = 1.38 x 10^ -23 (Joule/kelvin)
T = Overall System Noise Temperature = 3000 kelvin
B = Bandwidth of receiving system = 2.4 x 10^9 Hz
Working out the numbers we get the following RX noise floor
Pn = (1.38 x 10^ -23) x (3000) x (2.4 x 10^9) = 1 x 10^-10 watt
and 10 x [ log (1 x 10^-10)] = - 100 dBW or - 70 dBm
Link budget calculation
TX power = 10 W =..................+ 40 dBm
TX antenna gain ........................+ 56.4 dBi
Transmitted EIRP......................+ 96.4 dBm
Free space attenuation for
61,000 km at 32.2 GHz............- 218.3 dB
Received power over isotropic
ant. at 61,000 km distance........- 121.9 dBm
RX antenna gain.......................+56.4 dB
Received power at RX input... - 65.5 dBm
Receiver noise floor.................- 70.0 dBm
Received S/N Ratio................. + 4.5 dB
Using two identical boresight-aligned parabolic dishes, each with a gain of 56.4 dBi at 32.2 GHz, one transmitting 10 watts and the other receiving with a system noise temperature of 3000 kelvin in a bandwidth of 2.4 x 10^9 Hz, the free-space distance at which the signal is received with a S/N ratio of +4.5 dB is only 61,000 km. Your hypothetical system is therefore not suitable for the NASA MRO Ka-band experiment, because the distance from Earth to Mars is about 1 AU, i.e. 1 Astronomical Unit, corresponding to 149 million km.
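The arithmetic above is easy to reproduce; a small Python sketch with the same assumed parameters gives the same -70 dBm noise floor, 218.3 dB path loss and +4.5 dB S/N:

    import math

    k, T, B = 1.38e-23, 3000.0, 2.4e9    # Boltzmann const, noise temp (K), bandwidth (Hz)
    f, d = 32.2e9, 61_000e3              # frequency (Hz), path length (m)
    P_tx_dBm, G_dBi = 40.0, 56.4         # 10 W transmitter, gain of each dish

    Pn_dBm = 10 * math.log10(k * T * B / 1e-3)             # receiver noise floor
    fspl_dB = 20 * math.log10(4 * math.pi * d * f / 3e8)   # free-space path loss

    snr_dB = P_tx_dBm + G_dBi - fspl_dB + G_dBi - Pn_dBm
    print(f"noise floor {Pn_dBm:.1f} dBm, path loss {fspl_dB:.1f} dB, S/N {snr_dB:.1f} dB")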
73" de
i8CVS Domenico
Results 1 - 10 of 12
- Artificial Intelligence , 1994
"... Constraint networks are known as a useful way to formulate problems such as design, scene labeling, temporal reasoning, and more recently natural language parsing. The problem of the existence
of solutions in a constraint network is NP-complete. Hence, consistency techniques have been widely studied ..."
Cited by 136 (11 self)
Constraint networks are known as a useful way to formulate problems such as design, scene labeling, temporal reasoning, and more recently natural language parsing. The problem of the existence of
solutions in a constraint network is NP-complete. Hence, consistency techniques have been widely studied to simplify constraint networks before or during the search of solutions. Arc-consistency is
the most used of them. Mohr and Henderson [Moh&Hen86] have proposed AC-4, an algorithm having an optimal worst-case time complexity. But it has two drawbacks: its space complexity and its average
time complexity. In problems with many solutions, where the size of the constraints is large, these drawbacks become so important that users often replace AC-4 by AC-3 [Mac&Fre85], a nonoptimal
algorithm. In this paper, we propose a new algorithm, AC-6, which keeps the optimal worst-case time complexity of AC-4 while working out the drawback of space complexity. Moreover, the average time complexity of AC-6 is optimal for constraint networks where nothing is known about the semantics of the constraints. At the end of the paper, experimental results show how much AC-6 outperforms AC-3 and AC-4.
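For readers unfamiliar with the baseline, AC-3 (the simpler, non-optimal algorithm that AC-6 is compared against) fits in a few lines; a minimal Python sketch:

    from collections import deque

    def revise(domains, constraint, x, y):
        # Delete values of x that have no supporting value in y.
        removed = False
        for vx in list(domains[x]):
            if not any(constraint(vx, vy) for vy in domains[y]):
                domains[x].remove(vx)
                removed = True
        return removed

    def ac3(domains, constraints):
        # constraints maps each directed arc (x, y) to a binary predicate.
        queue = deque(constraints)
        while queue:
            x, y = queue.popleft()
            if revise(domains, constraints[(x, y)], x, y):
                if not domains[x]:
                    return False                    # a domain was wiped out
                queue.extend(arc for arc in constraints
                             if arc[1] == x and arc[0] != y)
        return True

    domains = {'a': {1, 2, 3}, 'b': {1, 2, 3}}
    constraints = {('a', 'b'): lambda u, v: u < v,
                   ('b', 'a'): lambda u, v: u > v}
    ac3(domains, constraints)
    print(domains)   # {'a': {1, 2}, 'b': {2, 3}}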
- Handbook of Constraint Programming , 2006
"... Constraint propagation is a form of inference, not search, and as such is more ”satisfying”, both technically and aesthetically. —E.C. Freuder, 2005. Constraint reasoning involves various types
of techniques to tackle the inherent ..."
Cited by 51 (3 self)
Constraint propagation is a form of inference, not search, and as such is more ”satisfying”, both technically and aesthetically. —E.C. Freuder, 2005. Constraint reasoning involves various types of
techniques to tackle the inherent
- In Proceedings of the Second International Conference on Principles and Practice of Constraint Programming , 1996
"... . In the last twenty years, many algorithms and heuristics were developed to find solutions in constraint networks. Their number increased to such an extent that it quickly became necessary to
compare their performances in order to propose a small number of "good" methods. These comparisons often le ..."
Cited by 40 (3 self)
. In the last twenty years, many algorithms and heuristics were developed to find solutions in constraint networks. Their number increased to such an extent that it quickly became necessary to
compare their performances in order to propose a small number of "good" methods. These comparisons often led us to consider FC or FC-CBJ associated with a "minimum domain" variable ordering heuristic
as the best techniques to solve a wide variety of constraint networks. In this paper, we first try to convince once and for all the CSP community that MAC is not only more efficient than FC to solve
large practical problems, but it is also really more efficient than FC on hard and large random problems. Afterwards, we introduce an original and efficient way to combine variable ordering
heuristics. Finally, we conjecture that when a good variable ordering heuristic is used, CBJ becomes an expensive gadget which almost always slows down the search, even if it saves a few constraint
checks.
- CONSTRAINTS , 2002
"... There are two main solving schemas for constraint satisfaction and optimization problems: i) search, whose basic step is branching over the values of a variables, and ii) dynamic programming,
whose basic step is variable elimination. Variable elimination is time and space exponential in a graph para ..."
Cited by 22 (6 self)
There are two main solving schemas for constraint satisfaction and optimization problems: i) search, whose basic step is branching over the values of a variables, and ii) dynamic programming, whose
basic step is variable elimination. Variable elimination is time and space exponential in a graph parameter called induced width, which renders the approach infeasible for many problem classes.
However, by restricting variable elimination so that only low arity constraints are processed and recorded, it can be e#ectively combined with search, because the elimination of variables may reduce
drastically the search tree size. In this
, 2000
"... Variable elimination is the basic step of Adaptive Consistency [4]. It transforms the problem into an equivalent one, having one less variable. Unfortunately, there are many classes of problems
for which it is infeasible, due to its exponential space and time complexity. However, by restricting ..."
Cited by 16 (1 self)
Variable elimination is the basic step of Adaptive Consistency [4]. It transforms the problem into an equivalent one, having one less variable. Unfortunately, there are many classes of problems for
which it is infeasible, due to its exponential space and time complexity. However, by restricting variable elimination so that only low arity constraints are processed and recorded, it can be
effectively combined with search, because the elimination of variables, reduces the search tree size. In this paper
- In Proceedings of CP-2006 , 2006
"... Abstract. Thanks to its extended expressiveness, the quantified constraint satisfaction problem (QCSP) can be used to model problems that are difficult to express in the standard CSP formalism.
This is only recently that the constraint community got interested in QCSP and proposed algorithms to solv ..."
Cited by 8 (0 self)
Abstract. Thanks to its extended expressiveness, the quantified constraint satisfaction problem (QCSP) can be used to model problems that are difficult to express in the standard CSP formalism. This
is only recently that the constraint community got interested in QCSP and proposed algorithms to solve it. In this paper we propose BlockSolve, an algorithm for solving QCSPs that factorizes
computations made in branches of the search tree. Instead of following the order of the variables in the quantification sequence, our technique searches for combinations of values for existential
variables at the bottom of the tree that will work for (several) values of universal variables earlier in the sequence. An experimental study shows the good performance of BlockSolve compared to a
state-of-the-art QCSP solver.
- Proceedings ECAI’04 Workshop on Modelling and Solving Problems with Constraints , 2004
"... The SAT and CSP communities make a great use of search effort comparisons to assess the validity of an algorithm or a heuristic. There exist different ways... ..."
Cited by 6 (0 self)
The SAT and CSP communities make a great use of search effort comparisons to assess the validity of an algorithm or a heuristic. There exist different ways...
, 1996
"... Reasoning about qualitative temporal information is essential in many artificial intelligence problems. In particular, many tasks can be solved using the interval-based temporal algebra
introduced by Allen (All83). In this framework, one of the main tasks is to compute the transitive closure of a ne ..."
Cited by 4 (0 self)
Reasoning about qualitative temporal information is essential in many artificial intelligence problems. In particular, many tasks can be solved using the interval-based temporal algebra introduced by
Allen (All83). In this framework, one of the main tasks is to compute the transitive closure of a network of relations between intervals (also called path consistency in a CSP-like terminology).
Almost all previous path consistency algorithms proposed in the temporal reasoning literature were based on the constraint reasoning algorithms PC-1 and PC-2 (Mac77). In this paper, we first show
that the most efficient of these algorithms is the one which stays the closest to PC-2. Afterwards, we propose a new algorithm, using the idea "one support is sufficient" (as AC-3 (Mac77) does for
arc consistency in constraint networks). Actually, to apply this idea, we simply changed the way composition-intersection of relations was achieved during the path consistency process in previous algorithms.
"... Variable elimination is the basic step of Adaptive Consistency [8]. It transforms the problem into an equivalent one, having one less variable. Unfortunately, there are many classes of problems
for which it is infeasible, due to its exponential space and time complexity. However, by restricting va ..."
Cited by 1 (1 self)
Variable elimination is the basic step of Adaptive Consistency [8]. It transforms the problem into an equivalent one, having one less variable. Unfortunately, there are many classes of problems for
which it is infeasible, due to its exponential space and time complexity. However, by restricting variable elimination so that only low arity constraints are processed and recorded, it can be
effectively combined with search, because the elimination of variables, reduces the search tree size. In this paper we introduce VarElimSearch(S;k), a hybrid meta-algorithm that combines search and
variable elimination. The parameter S names the particular search procedure and k controls the tradeoff between the two strategies. The algorithm is space exponential in k. Regarding time, we show
that its complexity is bounded by k and a structural parameter from the constraint graph. We also provide experimental evidence that the hybrid algorithm can outperform state-of-the-art algorithms in
binary sparse problems. Experiments cover the tasks of finding one solution
"... In [1, 2], Bessière and Cordier said that the AC-6 arc-consistency algorithm is optimal in time on constraint networks where nothing is known about the constraint semantics. However, in
constraint networks, it is always assumed that constraints are symmetric. None of the previous algorithms achievin ..."
In [1, 2], Bessière and Cordier said that the AC-6 arc-consistency algorithm is optimal in time on constraint networks where nothing is known about the constraint semantics. However, in constraint
networks, it is always assumed that constraints are symmetric. None of the previous algorithms achieving arc-consistency (AC-3 [5, 6], AC-4 [7], AC-6) use this property. We propose here an improved
version of AC-6 (the best algorithm for arc-consistency) which uses this property. Then, we claim that our new algorithm is optimal in the number of constraint checks performed. 1. Introduction. In the
last five years, the number of applications using constraint networks has dramatically increased. It appears that the more constraint networks are used, the simpler the constraint satisfaction
techniques involved in the applications are. In fact, a great part of real-life applications using constraint networks are limited to a forward-checking search procedure [4], or use an
arc-consistency filtering a...
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=79368","timestamp":"2014-04-18T07:21:23Z","content_type":null,"content_length":"37684","record_id":"<urn:uuid:59d42b45-8e04-4c98-a686-21f7e0d645de>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00300-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Does a conditional expectation from a von Neumann algebra to its center exist?
In a finite von Neumann algebra, the unique tracial state serves as one, then for a general von Neumann algebra, does it exist?
in line with Dmitri's answer below: when you say that there is a unique tracial state, I think you mean to say that on a finite factor there is a unique faithful normal tracial state. Otherwise,
consider $\ell^\infty$. – Yemon Choi Nov 24 '10 at 17:52
I think you need to reword your question to be a little more precise. Presumably you are really interested in the case of $\sigma$-finite, properly infinite von Neumann algebras? eom.springer.de/v/
v096900.htm – Yemon Choi Nov 24 '10 at 17:54
On the topic of precision, I assume you want your expectation to be faithful (otherwise any state would do) and normal. – Martin Argerami Nov 25 '10 at 13:53
2 Answers
Using direct integral decomposition, also known as reduction theory, one can reduce the problem to the case of a factor. A conditional expectation in this case is a state. Every factor admits a state, but only σ-finite factors admit faithful states. Thus if you require the conditional expectation to be faithful, all factors in the direct integral decomposition must be σ-finite; otherwise no additional conditions are needed to ensure the existence of a conditional expectation.
The answer is yes, provided that $M$ has a faithful normal semifinite weight (this always exists) that is also semifinite when restricted to the centre (this I'm not so sure how easily can be arranged). When $M$ has a faithful normal semifinite weight $\varphi$, with $\varphi|_{Z(M)}$ semifinite, consider the modular group $\sigma_t^\varphi$ associated with $\varphi$. For each $t\in\mathbb{R}$, $\sigma_t^\varphi$ is an automorphism of $M$, and in particular it preserves its centre. This means that \[ \sigma_t^\varphi(Z(M))=Z(M), \ \ t\in\mathbb{R} \] This condition, by Takesaki's Theorem (IX.4.2 in Takesaki 2, or JFA 1972), is equivalent to the existence of a conditional expectation $E:M\to Z(M)$, with $\varphi\circ E=\varphi$. This last condition forces $E$ to be faithful and normal.
You certainly do not need any modular theory to see this. – Andreas Thom Nov 25 '10 at 15:51
That wouldn't surprise me, but off the top of my head I wouldn't know how to do it in another way. – Martin Argerami Nov 25 '10 at 17:41
I think that the main point of Takesaki's theorem is that it characterizes the subalgebras for which a conditional expectation exists. However, if you just want to have a conditional
expectation onto the center, then why don't you proceed as Dmitri Pavlov does in his answer. Of course there are some technicalities hidden in the direct integral decomposition and making
measurable choices etc, but that is something you are facing anyway. I think that Takesaki's theorem is difficult (relies on modular theory) in the factor case; but then a direct integral
approach has to be followed anyway. – Andreas Thom Nov 25 '10 at 20:44
I agree that Takesaki's theorem is difficult, but it is a proven result and it doesn't use direct integrals (I try to avoid them precisely because of the technicalities). In Dmitri's
argument, I'm not sure exactly which vN algebras can be decomposed into a direct integral of $\sigma$-finite factors, and I don't immediately see if the expectation obtained is faithful
and normal. – Martin Argerami Nov 25 '10 at 23:54
Woodacre Math Tutor
...I enjoy great music and would love to introduce new musicians to the beautiful sounds of the violin, a magnificent instrument! The AFOQT (Air Force Officer Qualifying Test) is an aptitude test
given to prospective Air Force officer candidates. It is similar to the SAT and ACT tests, with an emphasis in technical/aviation related subjects.
14 Subjects: including algebra 2, algebra 1, prealgebra, reading
...In addition, I was a leading tutor at the University of Arizona both privately and through the Math and Science Tutoring Resource center. I take the time to understand the learning style of my
student and can quickly adapt lessons and examples to meet the needs of each and every style. I will be starting my J.D. program in August of 2014 in San Francisco and currently reside in Marin
4 Subjects: including algebra 1, algebra 2, calculus, precalculus
...Microbiology was my major in college at the University of California, Davis. I received top grades in all of my courses and I was awarded the Department Citation of Excellence for Microbiology.
I also worked in a Medical Microbiology and Immunology laboratory for 3 years while in college.
29 Subjects: including calculus, physics, statistics, algebra 1
...And finally, as a PhD student at Cal, I have been a TA for an undergrad level intro to discrete math and probability class required for all computer science undergraduate students at Cal. In
addition to my experience teaching recitation sections of up to 40 students, I am very comfortable (and e...
27 Subjects: including calculus, chemistry, physics, discrete math
Greetings! I have over 10 years' experience teaching 6th, 7th, 8th and 9th grade math in middle/high schools. My style is effective because I figure out the real problem quickly.
2 Subjects: including algebra 1, prealgebra
A Neural Network Model for Driver’s Lane-Changing Trajectory Prediction in Urban Traffic Flow
Mathematical Problems in Engineering
Volume 2013 (2013), Article ID 967358, 8 pages
Research Article
A Neural Network Model for Driver’s Lane-Changing Trajectory Prediction in Urban Traffic Flow
^1Department of Transportation Engineering, Beijing Institute of Technology, Beijing 100081, China
^2Institut für Verkehrssystemtechnik, Deutsche Zentrum für Luft-und Raumfahrt, Lilienthalplatz 7, 38108 Braunschweig, Germany
Received 13 September 2012; Accepted 10 November 2012
Academic Editor: Huimin Niu
Copyright © 2013 Chenxi Ding et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
The neural network may learn and incorporate the uncertainties to predict the driver’s lane-changing behavior more accurately. In this paper, we will discuss in detail the effectiveness of
Back-Propagation (BP) neural network for prediction of lane-changing trajectory based on the past vehicle data and compare the results between BP neural network model and Elman Network model in terms
of the training time and accuracy. Driving simulator data and NGSIM data were processed by a smoothing method and then used to validate the model. The test results indicate that the BP
neural network can accurately predict a driver’s lane-changing behavior in urban traffic flow. The objective of this paper is to show the usefulness of the BP neural network in predicting the
lane-changing process and to confirm that the predicted vehicle trajectory is strongly influenced by the previously collected data.
1. Introduction
The most crucial road traffic problems requiring solutions have been the reduction of traffic accidents and traffic congestion [1]. Increasing the safety level of driving in traffic, especially the
safety of maneuvers such as lane-changing and overtaking, is one of the key technologies for the Intelligent Transportation System (ITS) to achieve congestion-free and accident-free traffic
situations [2]. Every year, traffic accidents result in approximately 1.2 million fatalities worldwide; without new prevention measures, this number could increase by 65% over the next two decades [3
]. Researchers estimate that lane-changing crashes account for 4% to 10% of all vehicle crashes in the USA [4]. Although this share is not very high, the delay these crashes cause accounts for 10% of the total delay caused by all traffic accidents.
Due to the importance of driving behavior to vehicle safety, many researchers have attempted to model driving behavior. A general trend in the study of modeling driving behavior is the greater
application of computational artificial intelligence. Because the driver’s mental and physical behavior is nondeterministic and highly nonlinear, it is difficult for traditional methods to embody
this kind of uncertain relationship. The Artificial Neural Networks, fuzzy logic theory, and dynamic Bayesian networks, which include well-known hidden Markov models, have attracted many researchers
to do related research. Kumagai et al. focused on the prediction of drivers’ intentions to stop the car at an intersection from their current and historical maneuvers, based on a simple dynamic
Bayesian Network [5]. Tezuka et al. developed a method to infer driver behavior with a driving simulator to evaluate continuous time-series steering angle data at the time of lane-changing. The
proposed method used a static type conditional Gaussian model on Bayesian Networks [6]. Kuge et al. proposed hidden Markov models (HMMs) using observations of vehicle parameters and lane positions to
model trajectories [7]. Sathyanarayana et al. proposed a method to model driver behavior signals using hidden Markov models. The hierarchical framework and initial results can encourage more
investigations into driver behavior signal analysis and related safety systems employing a partitioned submodule strategy [8]. Pentland and Andrew proposed that many human behaviors can be accurately
described as a set of dynamic models sequenced together by a Markov chain. They considered the human as a device with a large number of internal mental states and used the dynamic Markov models to
recognize human behaviors from sensory data and to predict human behaviors a few seconds into the future [9]. Macadam and Johnson demonstrated the use of elementary neural networks (a two-layer back
propagation) to represent the driver’s steering behavior in double lane-changing maneuvers and S-curve maneuvers. Due to the limited data source for neural networks, it was concluded that the
adaptive nature of neural networks should be used for modeling driver steering behavior under a variety of operation scenarios [10]. Cheng et al. used a Back-Propagation neural network as a
controller for an automated vehicle system. Camera images were used as inputs to the neural network [11]. Tomar et al. proposed a method to give the future lane-changing trajectories accurately for
discrete patches using a multilayer perceptron (MLP) [12]. The proposed multilayer perceptron network is a simple single-input, single-output network with a single hidden layer and is used for training,
testing, and prediction of the vehicle trajectories.
Most of the prior research based on Neural Networks or DBNs (Dynamic Bayesian Networks) has recognized temporal information, inferred current states, or simply detected an action after it has begun, but has not predicted future states. Most studies until now have analyzed driver behavior by offline processing of data, due to limited datasets. Besides, some studies can only train on and predict the future positions of a lane-changing vehicle in certain discrete sections of the path, not over the complete lane-change path. Therefore, our research is intended to
predict the lane-changing trajectory in real time based on a time-delay Back-Propagation Neural Network (BPNN). We propose a two-layer Tansig-and-Linear BP neural network to deal with n-input, single-output problems. The position, velocity, acceleration, and time headway of the vehicle were used as the inputs of the model. The future lane-changing trajectory was considered the desired output of the trained network. We mixed the data of the different path sections as inputs to train the network in order to widen its applicability. To validate the model, we employed NGSIM trajectory data
and a smoothing method. Simulation results demonstrate the effectiveness of the proposed model.
In this research, we attempted to answer the following questions based on the proposed model. Is it possible to infer future driving states in different path sections using one neural network? How long a prediction horizon remains stable? Is it necessary to combine various input data for the prediction, and which one (or which combination) is the more informative indicator? How should the raw data be post-processed effectively? Which network is suitable for online analysis, the BP Neural Network or the Elman Network?
2. Traditional Lane-Changing Model Development
A major problem in developing a driver model lies on the requirement of suitable quantitative formulations by means of mathematical and statistical theories [13–15].
Lane-changing behavior has been widely studied from the viewpoint of traffic flow theory. A typical lane-changing algorithm which is used for microscopic traffic simulation is described in this
section [16].
Figure 1 illustrates the lane-changing situation for vehicle M. The vehicles L[d], F[d], and M represent the leading vehicle in the destination lane, the following vehicle in the destination lane,
and the subject vehicle, respectively. The trajectories of subject vehicle M can be constructed by a two-dimensional coordinate system.
The objective of this section is to use a simple lane-changing model to define the lateral acceleration, the lateral velocity, and the lateral distance of subject vehicle M during a certain time-interval in Figure 2. The time-intervals for the lane-changing maneuver are shown in Figure 2, and the times $t_0,\dots,t_4$ are defined as follows.
(i) At time $t_0$, the subject vehicle M starts the lane-changing maneuver; we set $t_0 = 0$.
(ii) At time $t_1$, the subject vehicle M adjusts successfully for lateral acceleration.
(iii) At time $t_2$, the subject vehicle M arrives at the marginal collision point.
(iv) At time $t_3$, the subject vehicle M finishes the lateral acceleration.
(v) At time $t_4$, the subject vehicle M finishes the lane-changing maneuver.
Note that, with the exception of the subject vehicle, the lateral and longitudinal accelerations of the other vehicles are assumed to be 0. Figure 3 illustrates the motion of vehicle M during a lane-changing or merge maneuver. $\theta$ is the angle between the tangent of the lane-changing trajectory at time $t$ and the X-axis, $H$ is the lane width, and $D$ is the total lateral displacement for vehicle M. Memar et al. developed a sinusoidal pattern of lateral acceleration for the subject vehicle [17]. The instantaneous lateral acceleration is given by (1), where $D$ is the total lateral displacement for the subject vehicle M, $t$ is the elapsed time, and $t_0$ and $t_4$ are defined as above. According to (1), the lateral acceleration is positive within the first half of the lateral displacement, that is, for $t < (t_4 - t_0)/2$, and negative in the second half.
On the basis of the lateral acceleration, the lateral velocity and the lateral distance traveled by the front-left corner P during a lane change can be derived by successive integration, given in (2) and (3).
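Equations (1)–(3) did not survive extraction. A plausible reconstruction of the sinusoidal model just described — an assumption, writing $T = t_4 - t_0$ for the total maneuver time and taking $t_0 = 0$ — is

$$a_y(t) = \frac{2\pi D}{T^2}\,\sin\frac{2\pi t}{T}, \qquad (1)$$

$$v_y(t) = \int_0^t a_y(\tau)\,d\tau = \frac{D}{T}\left(1 - \cos\frac{2\pi t}{T}\right), \qquad (2)$$

$$y(t) = \int_0^t v_y(\tau)\,d\tau = \frac{D}{T}\,t - \frac{D}{2\pi}\,\sin\frac{2\pi t}{T}. \qquad (3)$$

One can check that $y(T) = D$ and that $a_y(t) > 0$ exactly on the first half of the displacement, consistent with the text.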
As indicated above, traditional lane-changing models do not consider the uncertainties and perceptions in human behavior that are involved in modeling lane-changing. Therefore, research that develops lane-changing models from the artificial intelligence viewpoint is very important, such as the artificial neural network method introduced in the next section. The random term in such a network is attributable to individual-specific effects and to unimportant variables omitted from the model.
3. Neural Network Model for Lane-Changing Trajectory Prediction
It is necessary that Advanced Driver Assistance Systems (ADAS) are provided with concepts and techniques that enable prediction of future situations. In order to optimize the warning and control strategy of driver assistance systems and to trigger assistance more precisely, an analysis of driving behavior prediction under real-time traffic conditions is essential for carrying out driver-error analysis at the microscopic level. As an example, early recognition of lane-changing behavior would help to adapt the warning and control strategy of Forward Collision Avoidance Assistance Systems and Lane Departure Warning (LDW) Systems.
Therefore, our research objective was the development of a neural network-based model for collision avoidance systems in the case of lateral lane-changing and longitudinal car following. Consistent
with this primary goal, different advanced neural network models were employed and compared for the same problem.
3.1. Artificial Neural Network
Artificial Neural Networks (ANNs) are massively parallel adaptive networks of simple nonlinear computing elements. These elements are called neurons and are intended to model some functionalities of
the human nervous system in order to take advantage of its computational strength [18]. As a commonly used nonlinear function approximation tool, artificial neural network has shown great advantages
in forecasting, pattern identification, optimization techniques, and signal processing for its nonlinear, flexible, and valid self-organization properties. A variety of problem areas are modeled
using ANN [19–21] and in many instances, ANN has provided superior results compared to the conventional modeling techniques.
The basic model of ANN consists of computational units, which are a highly simplified model of the structure of the biological neural network [22]. Conceptual operation of ANN is shown in Figure 4.
ANN is regarded as a black box that takes a weighted sum of all inputs and computes an output value using a transformation or output function. The output value is propagated to many other units via
connections between units.
In general, the output function is either a threshold function, in which a unit becomes active only when its net input exceeds the threshold of the unit, or a sigmoid function, which is a nondecreasing and differentiable function of the input. Computational units in an ANN model are hierarchically structured in layers and, depending upon the layer in which a unit resides, are called input,
hidden, or output units. There are many input units, and some are dependent on the others. The output units are dependent on all input units. A hidden unit is used to augment the input data in order
to support the required function from input to output. The inputs and outputs can be discrete or continuous data values. They could also be stochastic, deterministic, or fuzzy.
In order to store a pattern in a network, it is necessary to adjust the weights of the connections in the network. The set of all weights on all connections in a network form a weight vector. The
process of computing appropriate weights is called a learning law or learning algorithm. The learning process of ANN can be thought of as a reward and punishment mechanism [23], whereby when the
system reacts appropriately to an input, the related weights are strengthened. In this case, it is possible to generate outputs, which are similar to those corresponding to the previously encountered
inputs. On the contrary, when undesirable outputs are produced, the related weights are reduced. The model learns to give a different reaction when similar inputs occur, thus updating the system
towards producing desirable results, whilst the undesirable ones are “punished.”
Back-Propagation (BP) neural network, a typical case of neural networks, is used most widely and is more mature than other networks. BP neural network models consist of an input layer, one or several
hidden layers, and an output layer. The typical BP neuron model is shown in Figure 5. When a set of input values and corresponding desired output values are supplied to the network, the transferred
value is propagated from the input layer through hidden layers to the output layer. The neural network tries to learn the input-output parameter relationship process by adapting its free parameters.
The mathematical expression of a BP neuron is defined as $y_j = f(\mathbf{w}_j \mathbf{x} - \theta_j)$, where the column vector $\mathbf{x}$ is the input vector, the row vector $\mathbf{w}_j$ is the connection weight vector for neuron $j$, $\theta_j$ is the threshold of the output, $\mathbf{w}_j \mathbf{x}$ represents the input of the neuron, and the function $f$ is the transfer function.
3.2. BP Neural Network Model for Lane-Changing Trajectory Prediction
For safe driving, it is necessary that the drivers perceive the relevant objects of a situation, comprehend the meaning of these objects to form a holistic understanding of the current situation, and
predict the future development of the situation [24]. However, it is difficult for traditional lane-changing behavior models to embody the uncertainty in this series of cognitive behaviors of drivers. Unlike classic mathematical methods, BP networks can approximate a specific input–output relationship without an explicit model.
Our research has attempted to set up a BP neural network model for lane-changing trajectory prediction and approximate the simulation of vehicle-to-vehicle interactions during the lane-changing
process. Figure 6 shows a simple lane-changing maneuver.
In this study, an n-input, single-output time-delay BP neural network with two hidden layers is used for training, testing, and prediction of the vehicle trajectories. There are 40 sets of training samples consisting of four variables: the prior position, velocity, and acceleration of the subject vehicle, and the time headway. Each input variable consists of a 1-second contiguous time-history set (or 10 frames). The desired output is the state of the lane-changing process 1 second ahead, obtained by time delay. The network was trained with the Levenberg-Marquardt algorithm, and the mean-squared error was examined. Network weight values are iteratively adjusted until output errors are minimized. Once appropriate training data were collected, the learning procedure could be implemented to reach similar performance with neural networks. Note that we mixed the training data of the different path sections as inputs to widen the applicability of the model.
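The paper does not reproduce its implementation; purely as a hedged illustration (a single hidden layer for brevity, all names invented), the assembly of the 40-element time-delay input vector and the tansig/linear forward pass could look like this in Java:

```java
// Hedged sketch only -- NOT the authors' code. Shows how 10 frames x 4
// variables form the input and how a tansig-hidden / linear-output network
// maps it to a prediction; trained weights are assumed to be given.
public final class LaneChangePredictor {

    private final double[][] w1; // hidden-layer weights [nHidden][40]
    private final double[] b1;   // hidden-layer biases  [nHidden]
    private final double[] w2;   // linear output weights [nHidden]
    private final double b2;     // output bias

    public LaneChangePredictor(double[][] w1, double[] b1, double[] w2, double b2) {
        this.w1 = w1; this.b1 = b1; this.w2 = w2; this.b2 = b2;
    }

    /** Flattens 10 frames of (position, velocity, acceleration, headway). */
    public static double[] inputVector(double[][] lastTenFrames) {
        double[] x = new double[40];
        for (int f = 0; f < 10; f++) {
            System.arraycopy(lastTenFrames[f], 0, x, f * 4, 4);
        }
        return x;
    }

    /** Predicted lateral position one second ahead of the last frame. */
    public double predict(double[] x) {
        double y = b2;
        for (int j = 0; j < w1.length; j++) {
            double net = b1[j];
            for (int i = 0; i < x.length; i++) {
                net += w1[j][i] * x[i];
            }
            y += w2[j] * Math.tanh(net); // tansig is tanh
        }
        return y;
    }
}
```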
4. Simulation Results and Discussion
4.1. Data Collection Based on Driving Simulation Test
In the simulation test, a driving simulator was used to collect driving behavior data including vehicle movements and maneuver operations. Separate computers were used to generate vehicle motion
calculations and the front view displays. Subjects were instructed to use the driving simulator to execute lane-changing and overtaking maneuvers on a two-lane road. There were 32 male drivers and 8 female drivers, whose ages ranged from 24 to 50 years and whose driving experience ranged from 1 to 23 years. All the training data for the BP neural network were obtained from the driving simulator.
4.2. Results and Discussion of BP Neural Network Model
The BP neural network model used to predict lane-changing trajectory is shown in Figure 7. The network model consists of an input layer, two hidden layers, and an output layer. A nonlinear sigmoid
defines each neuron’s activation function in the first layer. The second-layer neurons are linear. The biases are attributable to individual-specific effects, uncertainties, and unimportant variables.
In this paper, we only show test results for lane-changing lateral trajectories as an example. Figure 8 shows the measured lateral trajectory from the driving simulator and the predicted lateral trajectory from the BP neural network model. The prediction of the vehicle trajectory has a large error in the initial phase because of the small number of previous samples.
The performance curve of the BPNN is shown in Figure 9. As indicated in Figure 9, increasing the number of iterations improves the performance of the network. But when the number of iterations is large enough, a further increase will no longer reduce the error rate. Besides, we also found that the vehicle lateral trajectory is influenced most noticeably by the time headway and the prior lateral trajectory. This is reasonable, because most lane-changing maneuvers are caused by a slow leading vehicle.
4.3. Model Validation Based on NGSIM Data
The Next Generation Simulation (NGSIM) data is used to test and verify our model. NGSIM data provides detailed vehicle trajectory data, wide-area detector data, and supporting data needed for
behavioral algorithm research. Because observations in real traffic are always affected by measurement errors, we smooth the raw data used to test the model.
Taking the lateral position and vehicle velocity as examples, the smoothed results are shown in Figures 10 and 11. The test results for different prediction times are shown in Figure 12. The green curve
is lateral position with 2s prediction time (MSE = 0.2273). The blue curve is lateral position with 1s prediction time (MSE = 0.0184), which is more accurate.
The validation results demonstrate the effectiveness of the BPNN model around a prediction time of 1 s. Validations using a new NGSIM sample also show that our model can predict the future positions of a lane-changing vehicle over different path sections.
4.4. Comparison between BPNN and Elman Network
For the purpose of comparison, training time and accuracy are considered between the BPNN model and the Elman Network model. The training time is the time needed to train a neural network. The
accuracy is measured by calculating the error for the testing data points. Elman Network differs from BPNN in that the first layer has a recurrent connection, shown in Figure 13. We employed the same
transfer function, training algorithm, neural nodes, and training samples.
In Figures 14, 15, and 16, the comparison results with a 1 s prediction time show that the convergence rate is higher and the training time is shorter with the BPNN model, but the accuracy is higher with the Elman Network: the mean-squared error of the BPNN model is 0.0371, whereas that of the Elman Network model is 0.0279.
Simulation results demonstrate that, in this application, the BPNN offers a better trade-off between training time and accuracy because of its simpler network structure. For future research on real-time online prediction, both training time and accuracy are important.
5. Conclusion
The main motivation for using a neural network is its ability to learn and incorporate the uncertainties from real driving data. This means that, after learning from the driving behavior data, the neural network can generate vehicle states to reproduce any style of driving. In this study, prediction was done by sequential inference through the BPNN and Elman models using the collected driving data
from a driving simulator and NGSIM. The BP neural network model did a better job predicting lane-changing trajectories under different path sections and generated reliable simulation results. Note that, among the various inputs, the vehicle trajectory is influenced most noticeably by the time headway and the prior position.
We can use this network model as a basic model or an initial point to create more complex lane-changing models. For future research, perhaps the results of the simulation could be improved by using
more inputs, such as the current states of the vehicles and the driver’s reaction time. Reaction time could be increased when the driver is under stress or distracted. Therefore, future research
should also consider how to incorporate mental workload in the driver behavior model and how to increase the length of the prediction window. One possible method to incorporate the mental workload is
to measure the mental workload and define it as a part of the input to the neural network model. Meanwhile, for the future research, we attempt to develop a real-time, on-road lane-change detector
that can anticipate the future driving state. This detector extracts signals from vehicle sensors and preprocesses them into feature vectors, which are then used for offline training and online
inference. Finally, the simulation results could be used to assess the safety of lane-changing maneuvers and lay a necessary foundation for further development of the autonomous lane-changing and
overtaking assistance systems.
This research was supported in part by the National Natural Science Foundation of China under Grants 51010305084, 51110305060, and 51210305046, and by the Program of Introducing Talents of Discipline to
Universities under Grant B12022. The authors also gratefully acknowledge the helpful comments and suggestions of the colleagues from Transportation Research Center of Beijing University of
Technology, which have improved the paper.
1. W. H. Wang, Q. Cao, K. Ikeuchi, and H. Bubb, “Reliability and safety analysis methodology for identification of drivers' erroneous actions,” International Journal of Automotive Technology, vol. 11, no. 6, pp. 873–881, 2010.
2. P. Varaiya, “Smart cars on smart roads: problems of control,” IEEE Transactions on Automatic Control, vol. 38, no. 2, pp. 195–207, 1993.
3. M. Peden, R. Scurfield, D. Sleet et al., World Report on Road Traffic Injury Prevention, World Health Organization, Geneva, Switzerland, 2004.
4. S. E. Lee, E. C. B. Olsen, and W. W. Wierwille, A Comprehensive Examination of Naturalistic Lane-Changes, Virginia Tech Transportation Institute / National Highway Transportation Safety Administration, Blacksburg, Va, USA, 2004.
5. T. Kumagai, Y. Sakaguchi, M. Okuwa, et al., “Prediction of driving behavior through probabilistic inference,” in Proceedings of the International Conference on Engineering Applications of Neural Networks (EANN '03), pp. 117–123, Malaga, Spain.
6. S. Tezuka, H. Soma, and K. Tanifuji, “A study of driver behavior inference model at time of lane change using Bayesian Networks,” in Proceedings of the IEEE International Conference on Industrial Technology (ICIT '06), pp. 2308–2313, Mumbai, India, December 2006.
7. N. Kuge, T. Yamamura, and O. Shimoyama, A Driver Behavior Recognition Method Based on a Driver Model Framework, SAE, Warrendale, Pa, USA, 1998.
8. A. Sathyanarayana, P. Boyraz, and J. H. L. Hansen, “Driver behavior analysis and route recognition by hidden Markov models,” in Proceedings of the IEEE International Conference on Vehicular Electronics and Safety (ICVES '08), pp. 276–281, Columbus, Ohio, USA, September 2008.
9. A. Pentland and L. Andrew, “Modeling and prediction of human behavior,” Neural Computation, vol. 11, no. 1, pp. 229–242, 1999.
10. C. C. Macadam and G. E. Johnson, “Application of elementary neural networks and preview sensors for representing driver steering control behaviour,” Vehicle System Dynamics, vol. 25, no. 1, pp. 3–30, 1996.
11. R. M. H. Cheng, J. W. Xiao, and S. LeQuoc, “Neuromorphic controller for AGV steering,” in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 2057–2062, May 1992.
12. R. S. Tomar, S. Verma, and G. S. Tomar, “Prediction of lane change trajectories through neural network,” in Proceedings of the International Conference on Computational Intelligence and Communication Networks (CICN '10), pp. 249–253, Bhopal, India, November 2010.
13. W. Wang, H. Guo, H. Bubb, and K. Ikeuchi, “Numerical simulation and analysis procedure for model-based digital driving dependability in intelligent transport system,” KSCE Journal of Civil Engineering, vol. 15, no. 5, pp. 891–898, 2011.
14. W. Wang, X. Jiang, S. Xia, and Q. Cao, “Incident tree model and incident tree analysis method for quantified risk assessment: an in-depth accident study in traffic operation,” Safety Science, vol. 48, no. 10, pp. 1248–1262, 2010.
15. W. Wang, W. Zhang, H. Guo, H. Bubb, and K. Ikeuchi, “A safety-based approaching behavioural model with various driving characteristics,” Transportation Research Part C, vol. 19, no. 6, pp. 1202–1214, 2011.
16. W. H. Wang, C. X. Ding, G. D. Feng, and X. B. Jiang, “Simulation modelling of longitudinal safety spacing in inter-vehicle dynamics interactions,” Journal of Beijing Institute of Technology, vol. 19, supplement 2, pp. 55–60, 2010.
17. A. H. H. A. Memar, P. Z. H. Bagher, and M. Keshmiri, “Mechanical design and motion planning of a modular reconfigurable robot,” in Proceedings of the 11th International Conference on Climbing and Walking Robots and the Support Technologies for Mobile Machines, pp. 1090–1097, 2008.
18. T. Kohonen, “An introduction to neural computing,” Neural Networks, vol. 1, no. 1, pp. 3–16, 1988.
19. E. G. Tsionas, P. G. Michaelides, and A. T. Vouldis, “Global approximations to cost and production functions using artificial neural networks,” International Journal of Computational Intelligence Systems, vol. 2, no. 2, pp. 132–139, 2009.
20. E. Zio, “Neural networks simulation of the transport of contaminants in groundwater,” International Journal of Computational Intelligence Systems, vol. 2, no. 3, pp. 267–276, 2009.
21. Gowrishankar and P. S. Satyanarayana, “Neural network based traffic prediction for wireless data networks,” International Journal of Computational Intelligence Systems, vol. 1, no. 4, pp. 379–389, 2008.
22. B. Yegnanarayana, Artificial Neural Networks, PHI, New Delhi, India, 2005.
23. T. Kaya, E. Aktaş, I. Topçu, and B. Ülengin, “Modeling toothpaste brand choice: an empirical comparison of artificial neural networks and multinomial probit model,” International Journal of Computational Intelligence Systems, vol. 3, no. 5, pp. 674–687, 2010.
24. M. R. K. Baumann, D. Rosier, and J. F. Krems, “Situation awareness and secondary task performance while driving,” in Proceedings of the 7th International Conference on Engineering Psychology and Cognitive Ergonomics (EPCE '07), vol. 4562 of Lecture Notes in Computer Science, pp. 256–263, Beijing, China, 2007.
|
{"url":"http://www.hindawi.com/journals/mpe/2013/967358/","timestamp":"2014-04-21T00:29:45Z","content_type":null,"content_length":"114293","record_id":"<urn:uuid:1b2e16a8-2e28-4026-a847-b1c526a49f90>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00020-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Datamoil blog. Undocumented solutions.
This function here is the right one. It returns pretty much the same results as the Google Maps distance, which leads me to believe that it's kind of correct.
I've seen a lot of ready solutions online for the same input/output/platform combination (i.e. WGS84 degrees input/output on MySQL), but all of them used some other formula (at the end multiplying by 1.1515) that produced results that never worked for me.
On the other hand, the following formula returns results identical to what Google Maps returns.
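(The actual SQL snippet did not survive extraction. The description — a great-circle distance agreeing with Google Maps, rather than the old degrees-to-miles rule multiplying by 1.1515 — fits the standard haversine formula, given here as a stand-in:)

$$d = 2R\,\arcsin\sqrt{\sin^2\!\frac{\varphi_2-\varphi_1}{2} + \cos\varphi_1\,\cos\varphi_2\,\sin^2\!\frac{\lambda_2-\lambda_1}{2}}$$

with latitudes $\varphi_{1,2}$ and longitudes $\lambda_{1,2}$ in radians and Earth radius $R \approx 6371$ km.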
Cross check
Apparently there's some issue with the bearing and that's why the result is still not precisely accurate, but it's much closer than the other methods' outputs I've tried before.
On the official ArcGIS mobile blog they have described how to create an offline tiled layer for iOS, but not a word about Android (probably because it's still in beta, things might change and it doesn't gain any publicity anyway). The idea was taken from this blog: if not for that, I wouldn't even have attempted to look in this direction. So thank God for a custom hack!
My first attempt at offline tiles was to create a local tile server and, as the URL for the ArcGISTiledMapServiceLayer, supply something with localhost in it. It totally worked, but there were two issues
that forced me to look for another solution. Number one was technical: as soon as the connection was down (e.g. Airplane mode turned on) the tiles would just stop to download (even though their
physical location was on the same SD card the application was installed on!); the second issue was mental: just for knowing that the files stored locally had to take a trip to the moon before being
rendered (let alone maintaining a whole process of a local map server), caused an allergy and made me invest some time into research.
So here we go with offline tiled layer. Easier than easy, with a very little code, but totally impossible to find out how to make it! At least at the current version of ArcGIS for Android all the
interesting stuff regarding custom tiled layers is undocumented.
Here's the implementation of the custom tiled layer:
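(The original code block was stripped during extraction. Below is a hedged sketch of the heart of such a class — resolving and reading a tile file from the directory layout described in the next paragraphs. The ArcGIS-specific base class and overrides of the real OfflineTiledLayer are deliberately omitted, since that beta API was undocumented; only the plain-JDK file handling is shown.)

```java
import java.io.DataInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;

public class OfflineTiles {
    private final File root; // e.g. /mnt/sdcard/services/RoadMapsWebMercator101010

    public OfflineTiles(File root) {
        this.root = root;
    }

    /** Returns the raw image bytes for a tile, or null if it is not cached. */
    public byte[] getTile(int level, int row, int column) {
        File f = new File(root, "MapServer/tile/" + level + "/" + row + "/" + column);
        if (!f.isFile()) {
            return null;
        }
        byte[] buf = new byte[(int) f.length()];
        try (DataInputStream in = new DataInputStream(new FileInputStream(f))) {
            in.readFully(buf); // tiles are small; read them in one go
            return buf;
        } catch (IOException e) {
            return null; // treat unreadable tiles as missing
        }
    }
}
```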
Here's a usage example:
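(Also stripped; a minimal hypothetical call against the sketch above — note the real OfflineTiledLayer constructor additionally took the service description file, "index.html", as explained next:)

```java
OfflineTiles layer = new OfflineTiles(
        new java.io.File("/mnt/sdcard/services/RoadMapsWebMercator101010"));
byte[] png = layer.getTile(3, 44, 65); // zoom level 3 / row 44 / column 65
```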
Now about the filesystem. In the usage example I have set some values to the OfflineTiledLayer constructor and these represent the directory structure. First off, I didn't use the cache created by
ArcGIS. I mentioned in the beginning of the post that earlier I had implemented a local map server, that's why the file paths reflect the ones used by the online map servers. For instance, the
absolute path of the tile at zoom level 3/row 44/column 65 looks like /mnt/sdcard/services/RoadMapsWebMercator101010/MapServer/tile/3/44/65. I guess it's not hard at all to modify my class to use the
HEX paths of the cache created by ArcGIS.
Finally, a couple of words about the map server specification, which is what the parameter index.html of the OfflineTiledLayer constructor represents. Again, I named it index.html to support the local map server
implemented earlier, but in fact it contains what an online map server outputs by clicking on the link "REST" on the bottom of the MapServer description page of the ArcGIS server (the address looks
something like /MapServer?f=json&pretty=true).
And that's totally it. Hope it helps!
Another mystery without much relevant results on Google search that turned out to be quite an easy task.
Cloudmade seems to make a lot of effort to provide developers with a lot of mapping features, and as I haven't found anywhere on the site how to pay them, I take it that their service is free. It
seems also that they use OpenStreetMaps as a back-end, therefore (here goes the disclaimer) ROUTING IS NOT VERY ACCURATE. At least not yet, but it's being improved a lot; I've added a Starbucks shop
about a month ago, and now it's visible on the map!
Among other services Cloudmade offers Routing HTTP API, which seems to be a breeze for use within HTML code (they support JSONP-style callback for script injection to go around cross-origin resource
sharing restrictions). This is the one to be used in the open map view with open map controller, because Google Maps doesn't allow routing anywhere outside their domain (Google even removed routing
API from Android right after the very first release!).
My implementation of the BlueLine does everything from requesting directions, all the way to drawing them on the map.
Create a class extending PathOverlay with the following code:
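(The class body was lost; what follows is a hedged sketch of its routing half — requesting the Cloudmade route and collecting the polyline. The URL scheme and the route_geometry field name are assumptions from the era's API docs, and in the full BlueLine each returned point would be fed to PathOverlay.addPoint(...).)

```java
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;
import java.util.Scanner;
import org.json.JSONArray;
import org.json.JSONObject;

public final class CloudmadeRouter {
    /** Returns the route as a list of {lat, lon} pairs. */
    public static List<double[]> route(String apiKey,
                                       double fromLat, double fromLon,
                                       double toLat, double toLon) throws Exception {
        // Assumed URL scheme of the Cloudmade Routing HTTP API, v0.3:
        URL url = new URL("http://routes.cloudmade.com/" + apiKey + "/api/0.3/"
                + fromLat + "," + fromLon + "," + toLat + "," + toLon + "/car.js");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try (InputStream in = conn.getInputStream();
             Scanner s = new Scanner(in, "UTF-8").useDelimiter("\\A")) {
            JSONObject json = new JSONObject(s.hasNext() ? s.next() : "{}");
            JSONArray geometry = json.getJSONArray("route_geometry"); // assumed field
            List<double[]> points = new ArrayList<double[]>();
            for (int i = 0; i < geometry.length(); i++) {
                JSONArray p = geometry.getJSONArray(i); // [lat, lon]
                points.add(new double[] { p.getDouble(0), p.getDouble(1) });
            }
            return points;
        } finally {
            conn.disconnect();
        }
    }
}
```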
The Cloudmade API key goes into the manifest file within the application element like here
And that's pretty much it. It's very easy to make it work now with the application:
That's really it! Hope it helps.
This took me a while to figure out as I couldn't find any tutorial, but it's fairly easy after all.
The map container of my choice is OSMDroid, which is a great (and open source!) replacement for Google Maps container.
Resolve the MapView and set the tile provider as follows (the original snippet was lost to extraction; a hedged reconstruction is given after the list below).
It basically defines a new OnlineTileSourceBase with:
• name "Google Maps" (this is an important bit, as it will be used to lookup the directory with offline tiles)
• resource id "unknown" (I also downloaded OSMDroid source code, added a value "google" to the ResourceProxy.string enum and used that instead)
• minimum zoom level 1
• maximum zoom level 20
• tile size 256 pixels
• tile file extension ".png"
• tile base url
• finally, inside the overridden method getTileURLString, it describes how to build a URL to fetch a tile for a specific location and zoom level
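(A hedged reconstruction matching the bullet points above. Constructor-argument order follows the osmdroid 3.x OnlineTileSourceBase of the time, and the Google tile URL pattern is an assumption — adjust both to your versions:)

```java
import org.osmdroid.ResourceProxy;
import org.osmdroid.tileprovider.MapTile;
import org.osmdroid.tileprovider.tilesource.OnlineTileSourceBase;
import org.osmdroid.views.MapView;

// e.g. a helper in your Activity, called from onCreate():
private void installGoogleTileSource(MapView mapView) {
    OnlineTileSourceBase googleMaps = new OnlineTileSourceBase(
            "Google Maps",                // must match /sdcard/osmdroid/Google Maps
            ResourceProxy.string.unknown, // or a custom "google" enum value
            1,                            // minimum zoom level
            20,                           // maximum zoom level
            256,                          // tile size in pixels
            ".png",                       // tile file extension
            "http://mt1.google.com/vt/") {
        @Override
        public String getTileURLString(MapTile tile) {
            return getBaseUrl() + "lyrs=m&x=" + tile.getX()
                    + "&y=" + tile.getY() + "&z=" + tile.getZoomLevel();
        }
    };
    mapView.setTileSource(googleMaps);
    mapView.setUseDataConnection(false); // offline: serve only the stored tiles
}
```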
OK, now the controller supports Google Maps, and if setUseDataConnection was set to true it would already show everything and would work fine with Google Maps. But the mission is to make it work offline.
The best tool to export the tiles for an area is Mobile Atlas Creator (version up to 1.8, they removed Google Maps from 1.9). Export an area with map source set to Google Maps in desired zoom levels
in Osmdroid ZIP format (it's going to take a while). Put the output ZIP file into the /sdcard/osmdroid/Google Maps directory (if it doesn't exist, create it; the name has to be the same as the first
parameter in the OnlineTileSourceBase constructor). Again, as I downloaded the source code for OSMDroid, I changed some values in the
org.osmdroid.tileprovider.constants.OpenStreetMapTileProviderConstants class (such as OSMDROID_PATH to put the tiles in my directory instead of /sdcard/osmdroid).
Start up the application, now it should show offline tiles. Notice that now, if the SD card is unmounted, the controller will appear empty.
But the story doesn't end here. As the Google Maps terms of use (10.1.3.b) state:
No Pre-Fetching, Caching, or Storage of Content. You must not pre-fetch, cache, or store any Content, except that you may store: (i) limited amounts of Content for the purpose of improving the
performance of your Maps API Implementation if you do so temporarily, securely, and in a manner that does not permit use of the Content outside of the Service; and (ii) any content identifier or
key that the Maps APIs Documentation specifically permits you to store. For example, you must not use the Content to create an independent database of “places.”
So it sounds like an application is quite limited to use offline Google Maps tiles.
Nonetheless, it seems they don't disallow to temporarily cache the tiles and work in online mode (of course then one also needs to store the tiles securely). To achieve this, unzip the ZIP file with
extracted map area into the location /sdcard/osmdroid/Google Maps/tiles (or whatever is the location specified in OpenStreetMapTileProviderConstants.OSMDROID_PATH), then set
mapView.setUseDataConnection(true). The default cache expiry period is not very long, so I also altered it in the source code by setting the values of
OpenStreetMapTileProviderConstants.TILE_EXPIRY_TIME_MILLISECONDS and OpenStreetMapTileProviderConstants.DEFAULT_MAXIMUM_CACHED_FILE_AGE to (1000 * 60 * 60 * 24 * 365 * 10) (that's 10 years) . This
will make OSMDroid to use pre-fetched tiles for areas where available, but for the rest of the world it will download new tiles.
That's it. Hope it helps.
UPDATE As stated in the comments, you need the Mobile Atlas Creator version 1.8 (I see they removed all versions prior to 1.9 from sourceforge). The other tool capable of fetching tiles is "OsmAnd
Map Creator" (I see they deprecated it too, still it's available for download), but I'm not sure what the output directories look like, so one would have to adjust it manually to the structure
OSMDroid controller expects.
The mission is what the title says. It was quite hard to define the problem in a searchable way. I ended up asking for help at StackOverflow and, as was suggested, I implemented a way to list
some known apps and let the user select a preferred nav app.
So, let's say, somewhere on Activity there's a button, which on click is supposed to call the Navigation application. The way I did it, on tap, the activity shows a custom dialog letting the user
select one of the known applications or prompt Android to find suitable activities implicitly.
Here's the source code of the payload of the main method invoked from the OnClickListener - showNavigation():
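(A hedged sketch of that payload — not the original, which was lost: launch a known app directly by package name when the user picked one, or fall back to an implicit geo: intent. The package name and URI schemes are illustrative assumptions.)

```java
import android.content.ActivityNotFoundException;
import android.content.Intent;
import android.net.Uri;
import android.widget.Toast;

// a helper inside the Activity:
private void startNavigationTo(double lat, double lon, String packageName) {
    Intent intent;
    if (packageName != null) {
        // e.g. "com.google.android.apps.maps" handles google.navigation: URIs
        intent = new Intent(Intent.ACTION_VIEW,
                Uri.parse("google.navigation:q=" + lat + "," + lon));
        intent.setPackage(packageName);
    } else {
        // implicit: let Android offer any app registered for geo: coordinates
        intent = new Intent(Intent.ACTION_VIEW,
                Uri.parse("geo:" + lat + "," + lon));
    }
    try {
        startActivity(intent);
    } catch (ActivityNotFoundException e) {
        Toast.makeText(this, "No navigation app found", Toast.LENGTH_SHORT).show();
    }
}
```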
Here are a few utils needed only to follow up the last call in the showNavigation() method - showTableDialog():
These are the resource files:
* this goes to /res/values/themes.xml
* this goes to /res/values/styles.xml
* this goes to /res/layout/table_dialog.xml
* this goes to /res/drawable as menu_gps.png
And here is the result dialog you are supposed to see:
On click on each row it will start up the respective activity.
That's it, I hope it helps!
Just what the title says.
In an application there were a few tables filled with results of some queries. I haven’t yet figured out why, but it was of high importance to the client that all the tables had accumulative sorting (like the ORDER BY clause in SQL). This requirement took me a while to figure out how to implement, whilst the solution turned out quite elegant and small enough to write up here.
So, there’s a table with results of a search (the query really doesn't matter as the entire sorting solution will be implemented in the view layer)
The whole trick is to set sortListener=”#{CustomersBean.onSort}” for the table. The backing code for this sortListener is found in the backing bean named CustomerBean.
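(The bean code was stripped; the trick it implemented can be sketched as below — merge the event's criteria into whatever the table already sorts by, so clicks accumulate like columns in an ORDER BY. Class names follow the ADF/Trinidad API, but treat the exact signatures as assumptions.)

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import oracle.adf.view.rich.component.rich.data.RichTable;
import org.apache.myfaces.trinidad.event.SortEvent;
import org.apache.myfaces.trinidad.model.SortCriterion;

public void onSort(SortEvent sortEvent) {
    RichTable table = (RichTable) sortEvent.getComponent();
    // start from the criteria already applied to the table ...
    List<SortCriterion> accumulated =
            new ArrayList<SortCriterion>(table.getSortCriteria());
    for (SortCriterion added : sortEvent.getSortCriteria()) {
        // ... drop any older criterion on the same column ...
        for (Iterator<SortCriterion> it = accumulated.iterator(); it.hasNext();) {
            if (it.next().getProperty().equals(added.getProperty())) {
                it.remove();
            }
        }
        accumulated.add(added); // ... and append the new one last
    }
    table.setSortCriteria(accumulated);
}
```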
And that’s pretty much it. The table is now sortable accumulatively: you can hit customerid first, then name, then surname etc. the resulting table will look similarly to what produces SQL clause
order by customerid, name, surname, etc.
Hope this helps.
There’s a mess with the selectManyCheckbox bindings: for both initial list items and selected items. ADF itself doesn’t support direct binding to a ViewObject, and that’s why, as Frank Nimphius said,
…you would need to use a managed bean to dispatch between the ADF binding layer and the multi select component.
Maybe it’s just me, but it took me a while to figure out how exactly to do it.
First off, the managed bean. It should contain fields with accessors for two lists: the initial options and the selected items.
The initialListItems will be bound to the multi-selection items element, whilst the selectedItems list will contain the selected items from the initial list. The important thing is the datatype for the latter – a plain java.util.List! In fact, using any other type is what provokes the “Unsupported Model Type” exception. This datatype hassle was actually the part that took me a real while to get: I tried a primitive array, ArrayLists of Strings, Objects, and what not!
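A hedged sketch of such a bean (the names are mine):

```java
import java.util.ArrayList;
import java.util.List;
import javax.faces.model.SelectItem;

public class MultiSelectBean {
    private List<SelectItem> initialListItems = new ArrayList<SelectItem>();
    // deliberately a raw java.util.List -- typed alternatives trigger
    // the "Unsupported Model Type" exception described above
    private List selectedItems = new ArrayList();

    public List<SelectItem> getInitialListItems() { return initialListItems; }
    public void setInitialListItems(List<SelectItem> v) { initialListItems = v; }

    public List getSelectedItems() { return selectedItems; }
    public void setSelectedItems(List v) { selectedItems = v; }
}
```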
Now the only thing left is to bind both lists to the multi-selection component.
No more exceptions. That's it, hope it helps!
|
{"url":"http://datamoil.blogspot.com/","timestamp":"2014-04-16T05:18:06Z","content_type":null,"content_length":"93519","record_id":"<urn:uuid:cf0e2991-8f20-4978-9ba0-9b48fdf1ff78>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00378-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Weakly nonlinear analysis of a hyperbolic model for animal group formation
Weakly nonlinear analysis of a hyperbolic model for animal group formation. / Eftimie, R.; de Vries, G.; Lewis, M. A.
Journal of Mathematical Biology
, Vol. 59, No. 1, 07.2009, p. 37-74.
Research output: Contribution to journal › Article
Eftimie, R, de Vries, G & Lewis, MA 2009, 'Weakly nonlinear analysis of a hyperbolic model for animal group formation' Journal of Mathematical Biology, vol 59, no. 1, pp. 37-74.
Eftimie, R., de Vries, G., & Lewis, M. A. (2009). Weakly nonlinear analysis of a hyperbolic model for animal group formation. Journal of Mathematical Biology, 59(1), 37-74. doi: 10.1007/s00285-008-0209-8
Eftimie R, de Vries G, Lewis MA. Weakly nonlinear analysis of a hyperbolic model for animal group formation. Journal of Mathematical Biology. 2009 Jul;59(1):37-74.
title = "Weakly nonlinear analysis of a hyperbolic model for animal group formation",
author = "R. Eftimie and {de Vries}, G. and Lewis, {M. A.}",
year = "2009",
volume = "59",
number = "1",
pages = "37--74",
journal = "Journal of Mathematical Biology",
issn = "0303-6812",
RIS (suitable for import to EndNote)
TY - JOUR
T1 - Weakly nonlinear analysis of a hyperbolic model for animal group formation
A1 - Eftimie,R.
A1 - de Vries,G.
A1 - Lewis,M. A.
AU - Eftimie,R.
AU - de Vries,G.
AU - Lewis,M. A.
PY - 2009/7
Y1 - 2009/7
N2 - <p>We consider a one-dimensional nonlocal hyperbolic model for group formation with application to self-organizing collectives of animals in homogeneous environments. Previous studies have
shown that this model displays at least four complex spatial and spatiotemporal group patterns. Here, we use weakly nonlinear analysis to better understand the mechanisms involved in the formation of
two of these patterns, namely stationary pulses and traveling trains. We show that both patterns arise through subcritical bifurcations from spatially homogeneous steady states. We then use these
results to investigate the effect of two social interactions (attraction and alignment) on the structure of stationary and moving animal groups. While attraction makes the groups more compact,
alignment has a dual effect, depending on whether the groups are stationary or moving. More precisely, increasing alignment makes the stationary groups compact, and the moving groups more elongated.
Also, the results show the existence of a threshold for the total group density, above which, coordinated behaviors described by stationary and moving groups persist for a long time.</p>
AB - <p>We consider a one-dimensional nonlocal hyperbolic model for group formation with application to self-organizing collectives of animals in homogeneous environments. Previous studies have
shown that this model displays at least four complex spatial and spatiotemporal group patterns. Here, we use weakly nonlinear analysis to better understand the mechanisms involved in the formation of
two of these patterns, namely stationary pulses and traveling trains. We show that both patterns arise through subcritical bifurcations from spatially homogeneous steady states. We then use these
results to investigate the effect of two social interactions (attraction and alignment) on the structure of stationary and moving animal groups. While attraction makes the groups more compact,
alignment has a dual effect, depending on whether the groups are stationary or moving. More precisely, increasing alignment makes the stationary groups compact, and the moving groups more elongated.
Also, the results show the existence of a threshold for the total group density, above which, coordinated behaviors described by stationary and moving groups persist for a long time.</p>
U2 - 10.1007/s00285-008-0209-8
DO - 10.1007/s00285-008-0209-8
M1 - Article
JO - Journal of Mathematical Biology
JF - Journal of Mathematical Biology
SN - 0303-6812
IS - 1
VL - 59
SP - 37
EP - 74
ER -
|
{"url":"http://discovery.dundee.ac.uk/portal/en/research/weakly-nonlinear-analysis-of-a-hyperbolic-model-for-animal-group-formation(c8c6d080-0672-426a-b37f-7f2eb884ec1b)/export.html","timestamp":"2014-04-20T03:45:27Z","content_type":null,"content_length":"30481","record_id":"<urn:uuid:f58b00c3-38b3-4a10-8ce7-ccb73d3c17ce>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00266-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Proof of Circle Theorem by Vectors
Date: 05/03/2001 at 06:09:29
From: Marko Stojovic
Subject: Proof of circle theorem by vectors
A question requires me to prove, using the vector scalar product, that
the angle in a semicircle is always 90 degrees (the hypotenuse being
the diameter, and the sides meeting on the perimeter).
I have called the sides leading away from the angle vectors a and b
(both directed away from the angle), and have tried to prove that
a.b = 0.
I have tried this by means of drawing a radius from the centre to the
angle (calling this vector d, leading away from the angle), and have
expressed the diameter of the circle as two opposite vectors, equal
in magnitude, leading away from the centre (e and f).
I then tried to express a.b as (d+e).(d+f) = d.(e+f) = d.(e-e) = d.0
As I understand it, both vectors have to be positive for the scalar
product to work, so I can't proceed beyond this point. Please help...
Date: 05/04/2001 at 14:35:32
From: Doctor Floor
Subject: Re: Proof of circle theorem by vectors
Dear Marko,
Thanks for your interesting question.
Let X be a point on the circle, let M be its center, and let TU be a diameter.
Let vector XM be given by (t,u) and MT by (v,w). Then we know that
t^2+u^2 = v^2+w^2 = r^2
where r is the radius of the circle.
Now we have:
vector XT = (t,u)+(v,w) = (t+v,u+w)
vector XU = (t,u)-(v,w) = (t-v,u-w).
Their inner product is equal to
(t+v)(t-v) + (u+w)(u-w) = t^2 - v^2 + u^2 - w^2
= t^2 + u^2 - v^2 - w^2
= r^2 - r^2
= 0
and we see that the two vectors must be perpendicular, as desired.
If you need more help, just write back.
Best regards,
- Doctor Floor, The Math Forum
|
{"url":"http://mathforum.org/library/drmath/view/51810.html","timestamp":"2014-04-17T01:12:47Z","content_type":null,"content_length":"6825","record_id":"<urn:uuid:13217cba-1709-484f-bcb8-3fe2e5f823cd>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00420-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Regular Expressions
A regular expression is a character string where some characters are given special meaning, so that the pattern as a whole denotes a possibly infinite class of alternative strings to match.
J uses the gnu.regexp package.
Supported Syntax
Within a regular expression, the following characters have special meaning:
• Positional Operators
^ matches the beginning of a line
$ matches the end of a line
• One-Character Operators
. matches any single character
\d matches any decimal digit
\D matches any non-digit
\n matches a newline character
\r matches a return character
\s matches any whitespace character
\S matches any non-whitespace character
\t matches a tab character
\w matches any word (alphanumeric) character
\W matches any non-word (alphanumeric) character
Otherwise, \c matches the character c.
• Character Classes
[abc] matches any character in the set a, b or c
[^abc] matches any character not in the set a, b or c
[a-z] matches any character in the range a to z (inclusive)
A leading or trailing dash is interpreted literally.
• Subexpressions and Backreferences
(abc) matches whatever the expression abc would match, and saves it as a subexpression
\n where 1 <= n <= 9, matches the same thing the nth subexpression matched
Parentheses can also be used for grouping.
Parentheses used for grouping or to record matched subexpressions should not be escaped.
Backreferences may also be used in replacement strings; see replace.
• Branching (Alternation) Operator
a|b matches whatever the expression a would match, or whatever the expression b would match.
• Repeating Operators
? matches zero or one occurrence of the preceding expression or the null string
* matches zero or more occurrences of the preceding expression
+ matches one or more occurrences of the preceding expression
{m} matches exactly m occurrences of the preceding expression
{m,n} matches between m and n occurrences of the preceding expression (inclusive)
{m,} matches m or more occurrences of the preceding expression
The repeating operators operate on the preceding atomic expression.
• Stingy (Minimal) Matching
If a repeating operator is immediately followed by a ?, the repeating operator will stop at the smallest number of repetitions that can complete the rest of the match.
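For example, stingy matching changes which of several possible matches is returned. The following illustration uses the standard java.util.regex classes — whose syntax covers the subset above — rather than gnu.regexp, purely so it runs anywhere:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class StingyDemo {
    public static void main(String[] args) {
        String input = "<a><b>";
        Matcher greedy = Pattern.compile("<.*>").matcher(input);  // greedy *
        Matcher stingy = Pattern.compile("<.*?>").matcher(input); // stingy *?
        if (greedy.find()) System.out.println(greedy.group()); // "<a><b>"
        if (stingy.find()) System.out.println(stingy.group()); // "<a>"
    }
}
```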
|
{"url":"http://armedbear-j.sourceforge.net/doc/regexp.html","timestamp":"2014-04-17T16:38:47Z","content_type":null,"content_length":"3874","record_id":"<urn:uuid:fee20308-518d-441f-a74b-cbb7dd241292>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00138-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Physics Forums - View Single Post - Visual Prime Pattern identified
Right; your parabolas do not pass through the origin, instead they have been shifted so that the parabola representing the multiples of n passes through the point in the first parabola that
represents the integer n. (This way, the horizontal lines will only intersect true multiples of n, clearing up other instances of n itself.)
A similar thing can be done by shifting the lines I mentioned before; the line with slope n would pass not through the origin, but through the point (n,n) on the first line. Attached is a drawing.
In fact, graphs of any monotonic curve (x^2, x^3, exp x, ln x, ...) would also produce the primes in the same manner (namely, in the manner of Eratosthenes' sieve).
Edit: My bad, x^2 is not, overall, monotonic. I was referring to curves that are monotonically increasing on the first quadrant; that is, for x>0, whenever y>x you have f(y)>f(x), so that the vertical ordering of the points is preserved.
|
{"url":"http://www.physicsforums.com/showpost.php?p=3204440&postcount=9","timestamp":"2014-04-17T07:29:18Z","content_type":null,"content_length":"8397","record_id":"<urn:uuid:9e67126c-e6d5-4584-b40f-8cda91743c39>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00216-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Department of Mathematics Home Page
The Department of Mathematics, together with the Departments of Computer Science and Statistics, forms the School of Computing and Mathematical Sciences at the University of Waikato. To contact the
Department of Mathematics or to obtain more information, write to:
Department of Mathematics,
The University of Waikato,
Private Bag 3105,
New Zealand.
Alternatively, the Department may be contacted by phone on +64-7-838 4713, fax on +64-7-838 4666, or by email to maths@waikato.ac.nz.
[Figure: current arising from reconnection of magnetic field lines in 2D periodic magnetic plasmas.]
Last updated 11 September 2002
|
{"url":"http://www.math.waikato.ac.nz/index.html","timestamp":"2014-04-17T04:20:35Z","content_type":null,"content_length":"15075","record_id":"<urn:uuid:346a936c-49a7-4c8b-ad6a-d49c609703ad>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00610-ip-10-147-4-33.ec2.internal.warc.gz"}
|
February 24th 2006, 11:47 AM
I am trying to understand the calculation of log odds. I'm provided with the following example: 4 out of 5 are A or a .80 probability and 1 out of 5 are T or .20. The possible values are A, T, C or G.
My example indicates that the log odds for A is +1.16 and T is -0.22. I am not able to duplicate this result. Can you explain the steps calculate the log odds from these probabilities?
Thank you.
The funny part is that I am good at individual subjects like Probability by itself or logs by itself So who was the genius who came up with log odds?
February 25th 2006, 12:15 AM
Originally Posted by askmemath
I am trying to understand the calculation of log odds. I'm provided with the following example: 4 out of 5 are A or a .80 probability and 1 out of 5 are T or .20. The possible values are A, T, C or G.
My example indicates that the log odds for A is +1.16 and T is -0.22. I am not able to duplicate this result. Can you explain the steps calculate the log odds from these probabilities?
Thank you.
The funny part is that I am good at individual subjects like Probability by itself or logs by itself So who was the genius who came up with log odds?
The log-odds of an event of probability $p$ is the value of
$\mathrm{logit}(p)$, defined as:
$\mathrm{logit}(p)=\log\left(\frac{p}{1-p}\right)=\log(p)-\log(1-p)$,
where the $\log$ is usually the natural logarithm.
So the log-odds of A are:
$\mbox{log-odds}(A)=\mathrm{logit}(0.8)=\log(4)=1.386$
$\mbox{log-odds}(T)=\mathrm{logit}(0.2)=\log(0.25)=-1.386$.
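As a side note on the figures quoted in the question: $+1.16$ and $-0.22$ are not logits at all; they appear to be log-likelihood ratios against the uniform background frequency $0.25$ for the four letters A, T, C, G (the usual "log odds" of bioinformatics scoring matrices), since

$\ln(0.8/0.25)=1.163\approx +1.16$ and $\ln(0.2/0.25)=-0.223\approx -0.22$.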
February 25th 2006, 08:05 AM
<Salutes the Cap'n>
This is the atleast the 2nd time that you have come to my rescue. Thank you so very much!
|
{"url":"http://mathhelpforum.com/pre-calculus/1999-log-print.html","timestamp":"2014-04-20T20:29:04Z","content_type":null,"content_length":"6632","record_id":"<urn:uuid:eae7c6cb-c268-48d0-9b8b-0988f9b18580>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00532-ip-10-147-4-33.ec2.internal.warc.gz"}
|
zplane - don't the zeros and poles need to be complex?
1 Answer
Accepted answer
zplane is a little subtle. When B and A are rows, they represent transfer functions. However, if B and A are columns, they are zeros and poles.
[b,a] = ellip(4,.5,20,.6);  % rows b, a are transfer-function coefficients
zplane(b,a)                 % zeros and poles are computed from b and a
You can do A(:), but A' should work if all elements in the row vector are real
great thanks :)
zplane - don't the zeros and poles need to be complex?
I'm just getting my head around the zplane function. I'm not sure how the values of z and p are mapped onto the pole-zero plot. I thought zeros and poles were supposed to have both real and imaginary parts - as these are the axes of the pole-zero plot. But in the examples of zplane code given in the help documentation, the z and p values are just single real values. Also, for a 4 pole/zero filter there are 5 values for z and also for p. Shouldn't this be four?
1 Comment
Hi Tom, in your code, b and a are coefficients. So if you want to use zplane, you need to keep them in the row form and zplane will automatically calculate zeros and poles for you and display in the
On the other hand, if you want to pass in column vectors, then zplane treats them as just zeros and poles. In this case, you need to compute the zeros and poles yourself before passing them into the function. So you need to do zplane(roots(b),roots(a)).
|
{"url":"http://www.mathworks.nl/matlabcentral/answers/39096-zplane-don-t-the-zeros-and-poles-need-to-be-complex","timestamp":"2014-04-19T14:31:49Z","content_type":null,"content_length":"32071","record_id":"<urn:uuid:7a331e01-23eb-41b1-967a-3dff91e45306>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00185-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Quantum Foundations
This series consists of talks in the area of Foundations of Quantum Theory. Seminar and group meetings will alternate.
Quantum mechanics is a non-classical probability theory, but hardly the most general one imaginable: any compact convex set can serve as the state space for an abstract probabilistic model (classical
models corresponding to simplices). From this altitude, one sees that many phenomena commonly regarded as “characteristically quantum” are in fact generically “non-classical”. In this talk, I'll
show that almost any non-classical probabilistic theory shares with quantum mechanics a notion of entanglement and, with this, a version of the so-called measurement problem.
A standard canonical quantization of general relativity yields a time-independent Schroedinger equation whose solutions are static wavefunctions on configuration space. Naively this is in contradiction with the real world, where things do change. Broadly speaking, the problem of how to reconcile a theory which contains no concept of time with a changing world is called 'the problem of time'.
translate fairly well into physical operations (preparation, measurement etc.) in a non-relativistic world. This correspondence weakens in quantum field theory, where the direct operational meaning
of the observable algebra structure (encoded usually through commutators) is lost.
Both classical probability theory and quantum theory lend themselves to a Bayesian interpretation where probabilities represent degrees of belief, and where the various rules for combining and
updating probabilities are but algorithms for plausible reasoning in the face of uncertainty. I elucidate the differences and commonalities of these two theories, and argue that they are in fact the
only two algorithms to satisfy certain basic consistency requirements.
Lee Smolin has argued that one of the barriers to understanding time in a quantum world is our tendency to spatialize time. The question is whether there is anything in physics that could lead us to
mathematically characterize time so that it is not just another funny spatial dimension. I will explore the possibility(already considered by Smolin and others) that time may be distinguished from
space by what I will call a measure of Booleanity.
Quantum entanglement has two remarkable properties. First, according to Bell's theorem, the statistical correlations between entangled quantum systems are inconsistent with any theory of local
hidden variables. Second, entanglement is monogamous -- that is, to the degree that A and B are entangled with each other, they cannot be entangled with any other systems. It turns out that these
properties are intimately related.
We all know that the EPR argument fails, and we can all provide proofs of one sort or another that it can't work. But in spite of this, there's something curiously tempting about the reasoning, and
the temptation sometimes leads to needless perplexity about other issues. This paper will do two things. It will offer a diagnosis of where the EPR argument goes wrong that shows why we should be
suspicious long before we get to Bell-type results, and then use the thought behind this diagnosis to suggest an orientation toward thinking about quantum states.
This paper critically examines the view of quantum mechanics that emerged shortly after the introduction of quantum mechanics and that has been widespread ever since. Although N. Bohr, P. A. M. Dirac, and W. Heisenberg advanced this view earlier, it is best exemplified by J. von Neumann's argument in Mathematical Foundations of Quantum Mechanics (1932) that the transformation of 'a [quantum] state ... under the action of an energy operator ... is purely causal,' while, 'on the other hand, the state ... which may measure a [given] quantity ...'
Conventional quantum mechanics answers this question by specifying the required mathematical properties of wavefunctions and invoking the Born postulate. The ontological question remains unanswered.
There is one exception to this. A variation of the Feynman chessboard model allows a classical stochastic process to assemble a wavefunction, based solely on the geometry of spacetime paths. A direct
comparison of how a related process assembles a Probability Density Function reveals both how and why PDFs and wavefunctions differ from the perspective of an underlying kinetic theory.
The Higgs Field
You simply can't travel to the outside. No matter what you do, you're going down the r-coordinate with the same certainty as you climb up the t-coordinate outside the EH.
Are those two statements the same? "You can't travel to the outside" (which I think I already understood, but maybe not) and " ... you're going down the r-coordinate with same certainty ..."
If I were falling into a really large BH, I could cross the event horizon while hardly noticing, or so I thought. If that were the case, then once inside I imagined the gradients would be small
enough that I could putter around in my little rocket as much as I wanted, I just could never get outside (as you say). But would I have to move inexorably toward the center? If I am outside the EH,
I can slow the march along the t axis by moving faster along r. If I had infinite energy, I could stop my motion completely along t. Are the conditions inside EH just the reverse (substitute r for t)
or is this whole thing just an absurd conjecture based on unknown (to me anyway) physics?
Between the Screws - A Logic Puzzle
I originally wrote this puzzle for the magazine of a local Role-Playing Games’ club, back in high school. It was originally written in Hebrew, but is presented here in its English translation.
The Puzzle Itself
When Nom the gnome went to build his new aeroplane “Little Wing”, he ran into a funny situation when he wanted to attach the wing to the body of the plane.
To do that, he had to use four screws in a particular order, using four screwdrivers of four different colours - one of them green. Furthermore, each screw had a different shape and a different
Your mission is to find out the order in which the screws were attached, their lengths, shapes, and the colour of the screwdriver used for each screw.
1. The screw that the gnome used last was longer than the screw that needed the blue screwdriver, but was less than twice its length. (Albeit there is a screw that is twice as long as the one needing the blue screwdriver.)
2. The diamond-shaped screw was screwed after the screw with the blue screwdriver, but before the 6cm-long screw.
3. Only once was a shorter screw used right after a longer one.
4. The screw needing the red screwdriver, which isn’t the hexagonal one, was the last. The diamond-shaped screw did not come immediately before it.
5. The triangular screw came before the 2cm-long screw.
6. The screw needing the purple screwdriver is 8 cm long, and did not follow the 2cm-long screw.
7. The 4cm long screw was not the square-shaped one.
Position in Time    Screw Length    Screwdriver Colour    Screw Shape
1st
2nd
3rd
4th (Last)
The solution can be found below.
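Before the deduction, note that the hints can also be checked mechanically. Below is a brute-force sketch in Python (the encoding of hints 1-7 is my own, not part of the original puzzle); it enumerates every assignment and prints whatever survives - the unique solution:
from itertools import permutations
LENGTHS = (2, 4, 6, 8)
COLOURS = ("green", "blue", "red", "purple")
SHAPES = ("square", "diamond", "hexagonal", "triangular")
def satisfies_hints(lens, cols, shps):
    blue = cols.index("blue")
    # Hint 1: last screw longer than the blue-screwdriver screw but less
    # than twice as long; some screw is exactly twice its length.
    if not (lens[blue] < lens[3] < 2 * lens[blue]): return False
    if 2 * lens[blue] not in lens: return False
    dia, six = shps.index("diamond"), lens.index(6)
    # Hint 2: diamond after the blue screw, before the 6cm screw.
    if not (blue < dia < six): return False
    # Hint 3: only once is a shorter screw used right after a longer one.
    if sum(lens[i + 1] < lens[i] for i in range(3)) != 1: return False
    # Hint 4: red screwdriver last, not hexagonal; diamond not 3rd.
    if cols[3] != "red" or shps[3] == "hexagonal" or dia == 2: return False
    # Hint 5: triangular screw before the 2cm screw.
    if shps.index("triangular") >= lens.index(2): return False
    # Hint 6: purple screwdriver on the 8cm screw, not right after the 2cm.
    pur = cols.index("purple")
    if lens[pur] != 8 or pur == lens.index(2) + 1: return False
    # Hint 7: the 4cm screw is not square.
    return shps[lens.index(4)] != "square"
for lens in permutations(LENGTHS):
    for cols in permutations(COLOURS):
        for shps in permutations(SHAPES):
            if satisfies_hints(lens, cols, shps):
                print(list(zip(lens, cols, shps)))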
From the puzzle the following data can be directly gathered:
• Screw lengths: 2cm, 4cm, 6cm and 8cm.
• Screwdriver colours: green, blue, red and purple.
• Screw shapes: square, diamond, hexagonal, and triangular.
Based on hint #1, we can tell that the screw needing the blue screwdriver and the final screw were either 4cm and 6cm long respectively or 6cm and 8cm respectively, as no other allocations fulfil the hint. According to hint #2 we know that the 6cm-long screw was either 3rd or 4th. If the 6cm screw was 3rd, then the 4th screw must be 8cm (in accordance with hint #1). However, then the 3rd screw must be the one needing the blue screwdriver, which is impossible according to hint #2, which means it must have been 1st or 2nd. Therefore, the 6cm screw was the 4th screw and the blue screwdriver was used for the 4cm screw. So far we have:
• 1st or 2nd - 4cm - Blue
• 2nd or 3rd - Diamond
• 4th - 6cm
All mutually exclusive.
According to hint #4 we know that the 4th screw needed the red screwdriver and that it wasn't the hexagonal one. Therefore, we know that «4th - 6cm - red - Not Hex». Hint #4 also tells us that the
diamond shaped screw did not come immediately before it. Thus the diamond shaped screw must have been the 2nd one, and the 4cm screw must have been the 1st. So what we have so far is this:
• 1st - 4cm - Blue
• 2nd - Diamond
• 4th - 6cm - Red - Not Hexagonal
Now according to hint #3 we know that the order cannot be 4cm→2cm→8cm→6cm, because then a shorter screw would follow a longer one twice. So we have:
• 1st - 4cm - Blue
• 2nd - 8cm - Diamond
• 3rd - 2cm
• 4th - 6cm - Red - Not Hexagonal
According to hint #5 the triangular screw preceded the 2cm screw, so it must have been the 1st screw. Since the 4th screw is not hexagonal screw, the hexagonal one must therefore be the 3rd screw,
and the 4th screw must be the square screw. Thus, we have:
• 1st - 4cm - Blue - Triangle
• 2nd - 8cm - Diamond
• 3rd - 2cm - Hexagonal
• 4th - 6cm - Red - Square
Now all we have left to determine are the colours of the 2nd and 3rd screwdrivers. According to hint #6 we know that the 8cm screw was used with the purple screwdriver, so the 2nd screwdriver must
have been purple and the 3rd screwdriver was green.
Our final results are:
Position in Time    Screw Length    Screwdriver Colour    Screw Shape
1st                 4 cm            Blue                  Triangle
2nd                 8 cm            Purple                Diamond
3rd                 2 cm            Green                 Hexagon
4th (Last)          6 cm            Red                   Square
Advanced Calculus
Advanced Calculus. Author: D. V. Widder. Publisher: D. Van Nostrand Company Ltd. DjVu format | 448 pages | 17.8 MB | English | ASIN B00005VAZ0
Classic text leads from elementary calculus into more theoretic problems. Precise approach with definitions, theorems, proofs, examples and exercises. Topics include partial differentiation, vectors,
differential geometry, Stieltjes integral, infinite series, gamma function, Fourier series, Laplace transform, much more. Numerous graded exercises with selected answers. 1961 edition.
I bought this textbook as a supplementary resource book for an advanced calculus class I once took, although I ended up using it for a Differential Equations II class instead (in particular the partial differential equation and Fourier series sections). This book does not present proofs as one might expect from many of today's Advanced Calculus classes. It does not present abstract theorems
but rather applied Calculus and Differential Equations. You will not find logical connectives, quantifiers, techniques of proofs, set operations, induction, or completeness axioms in this book. What
you will find is partial differentiation, line and surface integrals, definite integrals, fourier series, infinite series, etc. Electrical and Computer Engineers will find that they may benefit from
the Vector, Fourier Series, and Laplace Transform chapters of this book. Physics majors are more likely to profit from the chapters on Partial Differentiation and Fourier Series.
Here's the textbook's chapter titles: 1) Partial Differentiation, 2) Vectors, 3) Differential Geometry, 4) Applications of Partial Differentiation, 5) Stieltjes Integral, 6) Multiple Integrals, 7)
Line and Surface Integrals, 8) Limits and Indeterminate Forms, 9) Infinite Series, 10) Convergence of Improper Integrals, 11) The Gamma Function. Evaluation of Definite Integrals, 12) Fourier Series,
13) The Laplace Transform, 14) Applications of the Laplace Transform.
The book may be considered as being written in the old-school style. It was written by a former Professor of Mathematics at Harvard and was first printed in 1947. The relatively low cost of the textbook may be attributed to it not having been 'updated' for a while, being devoid of any color, and being softbound. It has some worked out examples but focuses more on established theorems and
lemmas to solve problems. The book is fairly well organized and is overall a good reference book.
I really believe that this book does an excellent job at teaching such a difficult topic. "Advanced Calculus" is just packed with proofs and stimulating problems. This should be the text used to
teach the subject. If you intend to tutor yourself on the topic or you are actually taking the class, this book is a must. I am currently using this as a secondary text to an advanced calculus class
I am taking, and, as far as I'm concerned, this is the only text I need. This book does, in such a small package, more than you'll ever need. I recommend one purchase this book at the multivariable calculus level and use it through your time in analysis courses. This is a must-have for all math majors.
Higher category theory
An $n$-poset is any of several concepts that generalize posets in higher category theory. In fact, $n$-posets are the same as $(n-1,n)$-categories.
Fix a meaning of $\infty$-category, however weak or strict you wish. Then an $n$-poset is an $\infty$-category such that all parallel pairs of $j$-morphisms are equivalent for $j \geq n$. Thus, up to
equivalence, there is no point in mentioning anything beyond $n$-morphisms, not even whether two given parallel $n$-morphisms are equivalent. This definition makes sense as low as $n = -1$; the
statement that parallel $(-1)$-morphisms are equivalent simply means that there exists an object (a $0$-morphism).
Special cases
• The concept of $(-1)$-poset is trivial.
• A $0$-poset is a truth value.
• A $1$-poset or (0,1)-category is simply a poset.
Because, by the definition of $(0,1)$-category, we have that any two $1$-morphisms with the same source and target are equivalent. Hence there is, up to equivalence, at most one morphism for
every ordered pair of objects. The rest of the axioms say that this is all the information there is in a $(0,1)$-category. Therefore, by the discussion at poset – As a category with extra
properties, a $(0,1)$-category is a poset. (See also thin category.)
• A $2$-poset or (1,2)-category is a locally posetal 2-category.
• In general, an $n$-poset is an $n$-category in which all parallel pairs of $n$-morphisms are equal.
• An $\infty$-poset is the same thing as an $\infty$-category.
In the light of the general definition, one must interpret ‘is’ up to equivalence of categories. The last statement also depends on how strict your definition of $\infty$-category or $n$-category is;
it is actually simpler to define $n$-posets from scratch as given above than to define them in terms of $n$-categories.
Basic theorems
The $\infty$-category of (small) $n$-posets, as a full sub-∞-category of the $\infty$-category of $\infty$-categories, is an $(n+1)$-poset. That is, $n$-posets form an $(n+1)$-poset. This is well
known for small values of $n$.
Review of Sorting
Consider the following consecutive configurations of a list while it is being sorted:
• (4, 5, 3, 1)
• (4, 5, 3, 1)
• (4, 3, 5, 1)
• (4, 3, 1, 5)
What sorting algorithm is being used?
What is the best case running time of Bubble Sort?
Why is log(n) often a term in the efficiency expressions for divide and conquer algorithms?
What sort might you use if you know that your data will be pretty much in order to begin with and why would you use that sort?
Imagine that we run quick sort on an already ordered list, picking the pivot by taking the first element. What problem do we run into?
Why are merge sort and quick sort known as "divide and conquer" algorithms?
Why might it seem counterintuitive that heap sort can run so efficiently?
Merge sort is O(n log(n)). Where does the n term come from?
Which of the following is a proper heap?
For merge sort to merge the following two arrays: (1, 4, 5, 8) and (3, 7, 9, 13), what comparisons have to take place?
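For questions like the one above, the comparisons can be counted mechanically; here is a small sketch (the helper name is mine, not from the quiz) of a merge that tallies element comparisons:
def merge_count(left, right):
    # Merge two sorted lists, counting element comparisons.
    out, comparisons, i, j = [], 0, 0, 0
    while i < len(left) and j < len(right):
        comparisons += 1
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out += left[i:] + right[j:]  # leftovers need no comparisons
    return out, comparisons
print(merge_count([1, 4, 5, 8], [3, 7, 9, 13]))       # 6 comparisons
print(merge_count([1, 2, 3, 4, 5], [6, 7, 8, 9, 10]))  # 5 comparisons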
Consider a sorting algorithm that checks all possible configurations of a list until it finds one that is in order. This algorithm will sort a list correctly, but is very inefficient. What is its
big-O notation?
In what case is the algorithm described above as efficient as bubble sort?
In what case is the algorithm described above more efficient than selection sort?
Consider the intermediate configurations of an array being sorted below. What sort is being used?
• (4, 5, 2, 1, 7)
• (1, 5, 2, 4, 7)
• (1, 2, 5, 4, 7)
What sorting algorithm might you choose for the following list? Why? (1, 2, 3, 6, 5, 9)
True of false: merge sort and quick sort can only be used on lists whose length is a power of 2.
How many comparisons would it take merge sort to merge the following lists: (1, 2, 3, 4, 5) and (6, 7, 8, 9, 10)?
True or false: selection sort can sometimes run as fast as O(n).
The intermediate configurations below are characteristic of which sorting algorithm?
• (5, 1, 4, 8, 2)
• (1, 5, 4, 8, 2)
• (1, 4, 5, 8, 2)
• (1, 4, 5, 2, 8)
Why would it be a bad idea to implement heap sort using a heap data structure that didn't support random access?
Imagine the following strategy for picking a pivot in quick sort: scan through half the data set, and use the median value as the pivot. Why is this a bad strategy?
Imagine the following specification of a comparison function. If the two numbers passed in have an equal number of digits, they are equal. Otherwise, the one with the larger number of digits is
greater. Which of the following lists are sorted with respect to this comparison function?
Why are we so concerned with the efficiencies of sorting algorithms?
Imagine a comparison function for complicated objects. Why might efficiency calculations for a sort using this comparison function be misleading?
Consider the following intermediate configurations of a list being sorted. What sorting algorithm is being used?
• (5, 2, 8, 1, 9)
• (1, 5, 2, 8, 9)
• (1, 2, 5, 8, 9)
What is the first swap insertion sort would make on the following list? (5, 3, 4, 9, 1)
What is the first swap selection sort would make on the following list? (5, 3, 4, 9, 1)
What kind of data structure would make insertion sort particularly inefficient?
Imagine a situation where most of the data to be sorted starts in roughly reverse order. Why would this not be a good situation to use bubble sort?
What simple modification could be made to bubble sort to make it efficient in the situation described above?
Why would bubble sort be more efficient on the list (1, 2, 3, 4, 5, 6, 7) than selection sort?
When using quick sort, why is it common to switch to another sort when the lists being sorted are small?
In what situation might heap sort be useful?
What makes heap sort attractive as opposed to quick sort?
Why is quick sort's name misleading?
True or false: bubble sort gains efficiency by splitting the data in half.
Why is sorting important to the process of searching?
True or false: for some data sets, quick sort will be slower than bubble sort.
How long does re-heaping take?
True or false: selection sort's efficiency is independent of the data being sorted.
Re: GP: ffinit() bug
Michael Somos on Tue, 30 Apr 2002 11:23:33 -0400 (EDT)
[Date Prev] [Date Next] [Thread Prev] [Thread Next] [Date Index] [Thread Index]
Thank you for the additional information, but this is a bit difficult
to accept. I tried other values. For example :
? ffinit(6,2)
%2 = Mod(1, 6)*x^2 + Mod(1, 6)*x + Mod(2, 6)
and even though there is no field 'F_6' it returned a result. The result
may be nonsense, but that may be up to the user to decide. What is hard
to accept is the infinite loop. However, if that is the desireable way
to go, a clear warning about this seems to be in order. There is no way
to always protect the user from himself, but we can try. Shalom, Michael
On Tue, 30 Apr 2002, Bill Allombert wrote:
> On Mon, Apr 29, 2002 at 10:48:55PM -0400, Michael Somos wrote:
> > Pari Developers,
> > Using the up-to-the-minute CVS I find :
> > (readline v2.2 enabled, extended help not available)
> > ? ffinit(4,2)
> > ^C *** user interrupt after 23,010 ms.
> >
> > which appears to be an infinite loop. Shalom, Michael
> 4 is not a prime number, so the behaviour of ffinit is undefined in this case.
> Maybe the doc is misleading: it is written F_p, and F_4 makes sense, but
> it is universally agreed that p is prime, or else we write F_q, I think.
> Testing the primality of p is not an option, since the new ffinit
> is much faster than ispseudoprime or even BSW_psp() for large p.
> Fixing the infinite loop is a little difficult. The algorithm looks for an odd prime
> number l so that 4 is not a square modulo l. Of course it will never find it,
> but that gives no information on the fact that 4 is not a prime, unless we use
> Lagarias-Odlyzko bound but this is not really practical.
> Cheers,
> Bill.
> PS: there is a CVS branch for the stable version named 'release-2-1-patches'.
> You can also check it out if you are interested.
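[Editorial illustration, not part of the original message: the search Bill describes needs an odd prime l for which 4 is a quadratic non-residue, but 4 = 2^2 is a square modulo every prime, so for p = 4 the loop can never exit. A quick check in plain Python, not GP:]
def is_square_mod(a, l):
    # True when a is a quadratic residue modulo l
    return any((x * x) % l == a % l for x in range(l))
for l in (3, 5, 7, 11, 13, 17, 19, 23):
    assert is_square_mod(4, l)  # always holds, since 2 * 2 == 4
print("4 is a square mod every odd prime tested")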
How to Calculate an Accurate and Profitable Job Estimate
As we explored in Part 1 of this series, making money starts with knowing your annual overhead costs and break-even sales required to hit your goals. Next we will explore how to make a profit and
calculate an accurate and profitable job estimate.
Determine your profit
The profit you want to earn is just that. It is the amount of money you want to make at the end of the year based on the risk you take and the return you want for being a business owner. I recommend
contractors have an annual minimum net profit target return of 20 percent on their annual overhead (ROOH). Determine your annual overhead expenses and then multiply by 20 percent to determine your
annual minimum net profit goal (pre-tax). Then for the hard part. Try your best to again estimate your annual sales you’ll generate over the next year as shown in Example 1.
Minimum Profit (Example 1)
│Estimated Annual Sales │$1,000,000│$2,000,000│$3,000,000│
│Annual Overhead │ $500,000│ $500,000│ $500,000│
│Annual Profit Target 20% ROOH │ $100,000│ $100,000│ $100,000│
│Total Overhead & Profit │ $600,000│ $600,000│ $600,000│
│Overhead & Profit Margin │ 60%│ 30%│ 20%│
│Annual Job Costs │ $400,000│$1,400,000│$2,400,000│
│Margin Conversion Rate │ │ │ │
│MCR= 1.0 - Margin% │ 0.40│ 0.70│ 0.80│
In Example 1, to calculate your final selling price on jobs to earn a minimum of $100,000 for the year, divide your estimated job costs by the MCR to determine your final selling prices.
Job Bid - Overhead Plus Minimum Profit (Example 2)
│Direct Job Cost              │$1,000│$1,000│$1,000│
│Margin Conversion Rate       │      │      │      │
│MCR = 1.0 - Margin%          │  0.40│  0.70│  0.80│
│Job Sales Price (Cost / MCR) │$2,500│$1,428│$1,250│
Set higher profit goals
An annual net profit return on overhead goal (ROOH) of 20 percent is too low for the risk most contractors take. I recommend you consider a higher profit target of at least 40 percent return on your
annual overhead. Again, first determine your annual overhead expenses and then estimate your annual sales projected. Next multiply your annual overhead by 40 percent to determine a higher net profit
goal for the year as shown in Example 3.
Higher Profit (Example 3)
│Estimated Annual Sales        │$1,000,000│$2,000,000│$3,000,000│
│Annual Overhead               │  $500,000│  $500,000│  $500,000│
│Annual Profit Target 40% ROOH │  $200,000│  $200,000│  $200,000│
│Total Overhead & Profit       │  $700,000│  $700,000│  $700,000│
│Overhead & Profit Margin      │       70%│       35%│       23%│
│Annual Job Costs              │  $300,000│$1,300,000│$2,300,000│
│Margin Conversion Rate        │          │          │          │
│MCR = 1.0 - Margin%           │      0.30│      0.65│      0.77│
In the example above, to calculate your final selling price so you will earn a minimum of $200,000 overhead and profit for the year, divide your total estimated job costs by the MCR to determine your
final selling prices as shown in Example 4.
Job Bid - Overhead Plus Higher Profit (Example 4)
│Direct Job Cost          │$1,000│$1,000│$1,000│
│Margin Conversion Rate   │      │      │      │
│MCR = 1.0 - Margin%      │  0.30│  0.65│  0.77│
│Sales Price (Cost / MCR) │$3,333│$1,538│$1,299│
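The bid arithmetic in Examples 2 and 4 is easy to script; here is a minimal sketch (the function and variable names are mine, not from the article):
def selling_price(job_cost, annual_sales, annual_overhead, profit_target):
    # margin = (overhead + profit) / sales; MCR = 1 - margin;
    # selling price = job cost / MCR
    margin = (annual_overhead + profit_target) / annual_sales
    mcr = 1.0 - margin
    return job_cost / mcr
# Example 4, middle column: $2M sales, $500k overhead, $200k profit
print(round(selling_price(1000, 2_000_000, 500_000, 200_000)))  # 1538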
Estimating jobs to make a profit
To determine your final selling price on jobs you bid, use a job estimating template to determine your break-even sales price, your minimum profit sales price and your higher sales price.
Job Estimating Template (Example 5)
Projected Annual Budget
Annual Estimated Sales $2,000,000
Annual Company Overhead $ 500,000
Break-Even MCR .75
Minimum Profit MCR .70
Higher Profit MCR .65
Bid RECAP 1,000 square feet
Labor $ 2,000
Equipment $ 400
Materials $ 2,000
Subcontractors $ 200
General Conditions $ 400
Total Job Cost $ 5,000
Final Sales Price MCR Sales Price Cost/sf
At Break-Even 0.75 $6,666 $6.66/sf
At Minimum Profit 0.70 $7,142 $7.14/sf
At Higher Profit 0.65 $7,692 $7.69/sf
Converting annual targets to weekly goals
Next, it would be great to know how much work you need to perform every week to hit your annual goals. Using Example 5, you need to cover at least $500,000 of annual overhead to break even. If you
can work productively for 50 weeks per year, you need to make at least $10,000 more than your job costs a week to pay for your annual overhead. In most parts of the country, 40 productive weeks per
year is the average for contractors. If you work 40 weeks a year, you need to make at least $12,500 more than your job costs a week to pay for your annual overhead.
Convert Targets To Weekly & Daily Goals (Example 6)
Break-Even Overhead = $500,000/year
Productive Weeks x 40 weeks
Overhead Recovery Needed = $ 12,500/week
Break-Even Overhead = $ 2,500/day
Minimum Profit Goal = $100,000/year
Annual Overhead & Profit = $600,000/year
Productive Weeks x 40 weeks
Overhead & Profit Needed = $ 15,000/week
Minimum OH & P = $ 3,000/day
Higher Profit Goal = $200,000/year
Annual Overhead & Profit = $700,000/year
Productive Weeks x 40 weeks
Overhead & Profit Needed = $ 17,500/week
Higher OH & P = $ 3,500/day
Taking overhead and profit to the crew level
Let’s say your company has three regular crews each comprised of five men with trucks. Your crew cost might look like this:
Typical Crew Cost – 40 Weeks / Year (Example 7)
Labor – 5 Men @ $30/hour $ 150/hour
Down Time @ 10% $ 15/hour
Truck $ 15/hour
Small Tools & Equipment $ 10/hour
Miscellaneous Supplies $ 10/hour
Total Crew Cost $ 200/hour
3 Crews x 3
Total 3 Crews Cost $ 600/hour
Total 3 Crews Cost $4,800/day
To determine how much you need to bill each day, 40 weeks per year, add the following costs to your crew daily rates shown above in Example 7:
Break-Even Overhead $2,500/day ($104/hour/crew)
Minimum Overhead & Profit $3,000/day ($125/hour/crew)
Higher Overhead & Profit $3,500/day ($145/hour/crew)
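Those hourly add-ons are just the daily targets spread over all crew hours; a quick sketch (assuming, as above, 3 crews working 8-hour days):
def overhead_per_crew_hour(daily_target, crews=3, hours_per_day=8):
    # Spread a daily overhead/profit target across every crew hour.
    return daily_target / (crews * hours_per_day)
for target in (2500, 3000, 3500):
    print(target, round(overhead_per_crew_hour(target), 2))
# prints 104.17, 125.0, 145.83, matching the $104/$125/$145 figures above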
To break even in the example above, each of the three crews will have to be billed out $200 per hour to cover their cost plus $104 per hour to cover your company overhead = $304 per hour, plus what
you want to earn for profit. If you want to make the higher profit amount, your crew billing rate is $200 + $145 = $345 per hour.
Understanding what it takes to make the money you want is not a simple task. It takes time and concentration to figure out your numbers. And then it takes discipline to actually ask and get the
proper amounts you need to make a profit at the end of the year. Take the time to get to know how to make a profit, and then you might actually make it become a reality!
Scavenger Hunt Assignment
Answer each question using a ruler, calculator, or the internet.
Write all of your answers onto a Google Doc and send to me at svazquez1@archimedesacademy.net.
Remember to include the names of your group members.
1. Write an equation where the sum of two numbers equals a 3 digit palindrome.
2. What teacher uses room # (the square root of 58,564)?
3. Pick up a ruler from Mr. V and find the area of the top of the blue table in class. (round to the nearest inch)
4. Write Pi up to 20 decimal places.
5. Find the first nine numbers of the Fibonnaci sequence.
6. You go to this restaurant (http://www.adamsribsprincefrederick.com/AdamsRestaurantMenu.pdf) and order the following: 1 Mambo Nachos, 1 Rib Platter Half Rack, 1 Stuffed Baked Potato, 1 Mountain Dew.
Forgetting about tax/tip, how much change should you receive if you pay with a $50 bill?
7. Find a picture of a real object that is in the shape of a hexagon.
8. What famous Yankee wore #7?
9. How many points did the Dallas Mavericks score in total in this year’s finals?
10. June 14 is known as Flag Day. What year was the first American flag adopted? Who created it and how many years ago was that?
Student Definition
Noticing number patterns.
Repeating number series, Number order, Number patterning
Why Teach
The process of pattern recognition facilitates learning and significantly enhances academic performance by enabling students to predict outcomes, to organize the world around them, and to
establish relationships for meaning.
□ Analyzing math problems to introduce operations
□ Constructing a house
□ Designing art work
□ Determining numeric probability
□ Observing nature
□ Planning for retirement
□ Planning to purchase a home
□ Predicting a recession
□ Predicting g.p.a.
□ Projecting weight loss or gain
□ Solving math puzzles
□ Understanding the national debt
Students will be able to:
□ Name and give examples of different types of recurring numeric patterns (see "Background Information on Numeric Patterns").
□ Recognize numeric patterns in their environment.
Metacognitive Objective
Students will be able to:
□ Reflect upon their thinking processes when using this skill and examine its effectiveness.
Skill Steps
1. Analyze the relationship among adjacent numbers. Look for a recurring pattern.
2. Hypothesize a pattern structure.
3. Test your hypothesis (see "Background Information on Numeric Patterns").
4. If a pattern does not appear, look for a different pattern (see "Background Information on Numeric Patterns").
5. Repeat steps 1-4 as necessary.
Metacognitive Step
6. Reflect upon the thinking process used when performing this skill and examine its effectiveness:
☆ What worked?
☆ What did not work?
☆ How might you do it differently next time?
□ Debrief - review and evaluate process, using both cognitive and affective domains to achieve closure of the thinking activity.
□ Metacognition - the act of consciously considering one's own thought processes by planning, monitoring, and evaluating them (thinking about your thinking).
□ Pattern - an organizational arrangement
□ Fibonacci series - see "Background Information on Numeric Patterns".
Possible Procedure for Teaching the Skill
General Strategy
1. Define the skill and discuss its importance.
2. Introduce and model a repetitive numeric pattern from "Background Information."
3. Practice the pattern.
4. Repeat steps 2 & 3 with each numeric pattern from "Background Information on Numeric Patterns," providing activities that allow discrimination among patterns learned.
5. Debrief: Discuss techniques students used to arrive at conclusions. Include both triumphs and tragedies (facilitations and roadblocks).
Note: A general strategy can be given to students that will help them identify which type of series is used (these are explained in detail in "Background Information on Numeric Patterns"); a short code sketch illustrating these checks follows the list.
☆ If the series increases or decreases rapidly, look for a multiplication, division or exponential series (the V technique).
☆ If the series does not increase or decrease rapidly, apply the next level of the V technique, looking for a solution.
☆ If no solution can be found, look for a combined addition and multiplication series.
☆ If no solution can be found, look for a Fibonacci series.
☆ If no solution can be found, look for another pattern.
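A minimal sketch of these checks in Python (the encoding is mine, offered as an illustration rather than part of the original guide):
def classify(series):
    # Try each candidate pattern in turn: arithmetic (+k),
    # geometric (*k), then Fibonacci-like (term = sum of previous two).
    diffs = [b - a for a, b in zip(series, series[1:])]
    if len(set(diffs)) == 1:
        return "arithmetic, common difference %s" % diffs[0]
    if all(x != 0 for x in series):
        ratios = [b / a for a, b in zip(series, series[1:])]
        if len(set(ratios)) == 1:
            return "geometric, common ratio %s" % ratios[0]
    if len(series) >= 3 and all(series[i] == series[i - 1] + series[i - 2]
                                for i in range(2, len(series))):
        return "Fibonacci-like"
    return "no simple pattern found; look for another structure"
print(classify([3, 5, 7, 9]))        # arithmetic, common difference 2
print(classify([2, 6, 18, 54]))      # geometric, common ratio 3.0
print(classify([1, 1, 2, 3, 5, 8]))  # Fibonacci-like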
Primary Procedure
1. Define repetitive numeric patterning and discuss its purpose.
2. Place ten piles of beans in a line; the first pile having one bean, the second having two beans, the third having three beans... and so on up to ten beans in the last pile.
3. Ask students to count the beans in the first and second pile to compare the numbers.
4. Repeat #3 with the second and third pile, the third and fourth pile, etc.
5. Explain to students that adding one bean each time creates a repetitive numeric pattern.
6. Tell students that being able to identify patterns can help them learn math.
7. Go through skill steps with students and show them how skill steps apply to figuring out the bean pattern.
8. Repeat bean procedure with a +2 pattern. Help students apply the skill steps to figure out the pattern.
9. Debrief students on the process, the definition, and the importance of this skill.
Integrating the Skill into the Curriculum
Understanding of non-linguistic recurring numeric patterns can be facilitated by using the overhead projector and charts showing the numbers 1 to 100 in boxes on the chart. Involve the class in
finding and describing numerical patterns on the chart. Begin by covering the chart with a blank transparency and then marking all the even numbers with a blue dot. Ask the class to explain how
you arrived at the resulting pattern. Depending on their experiences, they may explain it as "plus two" or "even numbers." Remove the marked transparency. On a clean transparency, mark in yellow
the numbers in intervals of three. After discussion, overlay the two sheets and note that +2 and +3 addition patterns emerge. Mark these points with a red square, and ask students if they can
discover a new pattern. This can be continued with the class, and then with clean, duplicated copies of the 1-to-100 chart allow students to construct their own patterns. Have them compare and
check identified patterns with fellow students.
Introduce and teach recurring numeric pattern recognition both as an interesting intellectual skill and as a test-taking device. Teach students the numeric problem-solving processes and the
common types of problems they are likely to encounter. Ask students to create problems using two or three of these common types plus one or two types that they make up. Have students exchange
their "made-up" problems and try to solve each other's.
Using a transparency or ditto of a 1-to-100 chart, blank out a pattern of numbers. Ask students to fill in missing numbers.
Have a student describe the weather (weather chart, calendar).
Background Information
Patterns can be found in all forms of non-linguistic information. The more students can "see" patterns in non-linguistic information, the more they can organize their world. This skill can assist
the brain with its natural tendency to organize information into patterns.
The ability to perceive repeating patterns in various settings is important for the learner. Aptitude tests frequently include problems involving numeric patterns. Everyday problems can often be
solved by recognizing the recurring patterns within them. Understanding that the environment contains countless examples of obvious and subtle repeating patterns introduces the learner to the
elegant nature of our universe.
There are many types of non-linguistic patterns. In this unit we will consider a few of those patterns--those commonly found on aptitude tests and other forms of tests. However, if students are
not presented with anything more than this procedure, all they are learning is a relatively sophisticated test-taking technique. The ultimate goal of having students solve number series problems
is to have them identify different types of numeric patterns. Students should be encouraged to find other types of patterns in numeric series. We will consider numeric patterns, especially those
identified by David Lewis and James Greene (Thinking Better, New York: Holt, Rinehart and Winston, 1982) as important for aptitude tests.
More Background Information about Numeric Patterns
Additional Resources
Jacobs, Harold R. Mathematics: A Human Endeavor. New York: W. H. Freeman and Co., 1982. Chapter 2, pp. 57-118.
Lewis, David and James Greene. Thinking Better. New York: Holt, Rinehart, and Winston, 1982.
Marzano, Robert J. and Daisy E. Arredondo. Tactics for Thinking. Colorado: Mid-Continent Regional Educational Laboratory, 1986.