content
stringlengths 86
994k
| meta
stringlengths 288
619
|
|---|---|
Fairfax Station Geometry Tutor
I recently graduated with a master's degree in chemistry, all the while tutoring extensively in math and science courses throughout my studies. I am well versed in efficient studying techniques,
and am confident that I will be able to make the most use of both your time and mine! I have taken the ...
17 Subjects: including geometry, chemistry, physics, calculus
...I graduated from University of Virginia with a degree in economics and mathematics. While in college, I tutored calculus to many students. Since graduating I have begun to tutor again and being
new to the area I am currently working to expand my number of students.
22 Subjects: including geometry, calculus, algebra 1, GRE
...My teaching focuses on understanding concepts, connecting different concepts into a coherent whole and competency in problem solving. Every student has different needs so my approach is fluid.
Some need homework help, others a plan to backtrack and review algebra.
9 Subjects: including geometry, calculus, physics, algebra 1
...I was less good at the kind of material in the verbal section, but since then my training in linguistics and wide reading have improved my skills in this area. The SAT reading test has multiple
choice questions. Tutoring a student often involves getting him or her to see why one answer is better than the others.
22 Subjects: including geometry, reading, Spanish, English
...As a member of the school organization, I tutored fellow students with Algebra and Geometry after school. I've also helped ESL friend so I am open for that. I am not perfect (I scored a 780 in
the SAT) but I have passion for facilitating and teaching others.
11 Subjects: including geometry, piano, algebra 1, algebra 2
|
{"url":"http://www.purplemath.com/Fairfax_Station_geometry_tutors.php","timestamp":"2014-04-17T21:31:13Z","content_type":null,"content_length":"24106","record_id":"<urn:uuid:a482af63-0efd-432d-88a9-0f28d2a66cc3>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00051-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Sum of n-th roots is rarely rational
up vote 2 down vote favorite
Let $m,n$ be positive integers, and $\displaystyle \Phi_{m,n}~:~ {\mathbb{R}_+^*}^m \to \mathbb{R}_+^*, \ \ \ (x_1,x_2, \ldots , x_m) \mapsto \sum_{k=1}^m \sqrt[n]{x_k}$.
Clearly for $m=1$ if for all positive integer $n$, we have $\Phi_{1,n}(x) \in \mathbb Q$, then $x=1$.
It seems that the same conclusion holds for $m>1$ (or at least the subset of ${\mathbb{R}_+^*}^m$ for which $\Phi_{m,n}(x) \in \mathbb Q$ is finite).
Is it true (or even obvious and I missed it)?
fields algebraic-number-theory
How does your definition of Phi depend on n? – Kevin Ventullo May 8 '10 at 21:16
@Kevin: Those are n-th roots. – JBL May 8 '10 at 21:58
add comment
1 Answer
active oldest votes
The following conclusion is true: If $\Phi_{m,n}(x)\in\mathbb{Q}$ for all positive integers n, then x[1]=x[2]=...=x[n]=1.
It follows in what I believe is a fairly routine, or at least not too difficult, manner from the following Claim:
up vote 3 down vote Let K be the extension field of ℚ generated by all n-th roots of all x[i]. Then K is a finite extension of ℚ.
Proof. Let y[i] be the N!-th root of x[i]. Then the power sum symmetric functions of the y[i] are all rational, hence the elementary symmetric functions are all rational, so the
y's lie in a field extension of ℚ of degree at most m. Take N as large as you like. Voila!
add comment
Not the answer you're looking for? Browse other questions tagged fields algebraic-number-theory or ask your own question.
|
{"url":"http://mathoverflow.net/questions/23957/sum-of-n-th-roots-is-rarely-rational","timestamp":"2014-04-19T07:53:08Z","content_type":null,"content_length":"53029","record_id":"<urn:uuid:b7cdffa2-08b8-4e44-a622-0fc90c8b895a>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00215-ip-10-147-4-33.ec2.internal.warc.gz"}
|
What time does it get dark in austin texas?
You asked:
What time does it get dark in austin texas?
Assuming you meant
• Austin, the place in Travis County, Texas, USA
Did you mean?
Say hello to Evi
Evi is our best selling mobile app that can answer questions about local knowledge, weather, books, music, films, people and places, recipe ideas, shopping and much more. Over the next few months we
will be adding all of Evi's power to this site.
Until then, to experience all of the power of Evi you can download Evi for free on iOS, Android and Kindle Fire.
|
{"url":"http://www.evi.com/q/what_time_does_it_get_dark_in_austin_texas","timestamp":"2014-04-18T19:33:50Z","content_type":null,"content_length":"60899","record_id":"<urn:uuid:2dc75dd5-5b5b-4f13-ab22-9ce39b9ec2a7>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00053-ip-10-147-4-33.ec2.internal.warc.gz"}
|
College Hill, PA Calculus Tutor
Find a College Hill, PA Calculus Tutor
...I favor a dual approach, focused on both understanding concepts and going through practice problems. Let me know what concepts you're struggling with before our session, so I can streamline
the session as much as possible! In my free time, I like to play with my pet chickens, play Minecraft, code up websites, and write sci-fi creative stories.
26 Subjects: including calculus, English, physics, writing
...We can meet at your house, a library, diner, or any of the Barnes & Nobles. If money is a concern, I can meet with small study groups at a more affordable rate per student. I can provide
11 Subjects: including calculus, Spanish, geometry, algebra 1
...Sci. USA, and Phys. Biol.
13 Subjects: including calculus, reading, writing, physics
...In 1982 I set up a subsidiary company specializing in aerospace products, with the position of VP/General Manager. Subsequently I became Engineering Vice President for the entire company. Over
the last 20 years I have given technical presentations and workshops throughout Europe and North Ameri...
10 Subjects: including calculus, GRE, algebra 1, GED
Hi! My name is Lyle and I am a senior chemical engineering student at Lehigh University. I enjoy teaching math and science, and am trying to make some money on the side since I am only a
part-time student this semester.
10 Subjects: including calculus, chemistry, physics, precalculus
Related College Hill, PA Tutors
College Hill, PA Accounting Tutors
College Hill, PA ACT Tutors
College Hill, PA Algebra Tutors
College Hill, PA Algebra 2 Tutors
College Hill, PA Calculus Tutors
College Hill, PA Geometry Tutors
College Hill, PA Math Tutors
College Hill, PA Prealgebra Tutors
College Hill, PA Precalculus Tutors
College Hill, PA SAT Tutors
College Hill, PA SAT Math Tutors
College Hill, PA Science Tutors
College Hill, PA Statistics Tutors
College Hill, PA Trigonometry Tutors
Nearby Cities With calculus Tutor
Alpha, NJ calculus Tutors
Butztown, PA calculus Tutors
Chapmans, PA calculus Tutors
Delaware Park, NJ calculus Tutors
Easton, PA calculus Tutors
Forks Township, PA calculus Tutors
Hokendauqua, PA calculus Tutors
Lehigh Valley calculus Tutors
Longswamp, PA calculus Tutors
Lopatcong, NJ calculus Tutors
Phillipsburg, NJ calculus Tutors
Stockertown calculus Tutors
Tatamy calculus Tutors
West Easton, PA calculus Tutors
Willow Grove, NJ calculus Tutors
|
{"url":"http://www.purplemath.com/College_Hill_PA_Calculus_tutors.php","timestamp":"2014-04-16T16:52:32Z","content_type":null,"content_length":"23919","record_id":"<urn:uuid:6a80145f-7522-4d5d-b911-e921c0ed98b2>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00140-ip-10-147-4-33.ec2.internal.warc.gz"}
|
the first resource for mathematics
In this paper the authors prove that continuously indexed frames in separable Hilbert spaces with no redundancy (or excess) are equivalent to discretely indexed sets. The result applies to general
not necessarily separated Hilbert spaces, by restricting the analysis to closed countably generated subspaces. More specific, let $H$ be a Hilbert space and $\left(M,S,\mu \right)$ a measure space.
Then a generalized frame in $H$ indexed by $M$ is a family $h=\left\{{h}_{m}\in H$; $m\in M\right\}$ such that: (a) $\forall f\in H$, $m↦Tf\left(m\right):=〈{h}_{m},f〉$ is measurable; (b) there are
$0<A,B<\infty$ such that $\forall f\in H$, $A\parallel f{\parallel }_{H}^{2}\le \parallel Tf{\parallel }_{{L}^{2}\left(M;\mu \right)}^{2}\le B\parallel f{\parallel }_{H}^{2}$. Recall also a
measurable subset $E$ of $H$ is called an atom if $0<\mu \left(E\right)<\infty$ and $E$ contains no measurable subset $F$ such that $0<\mu \left(F\right)<\mu \left(E\right)$. The main result reads
(the reviewer takes the liberty to fix a typo):
Theorem 2.2: Let $h$ be a generalized frame in $H$ indexed by $\left(M,S,\mu \right)$ and assume $ImT={L}^{2}\left(M,d\mu \right)$. Then for every countable subset $L$ of $H$, there exists a
countable collection $\left\{{E}_{i};i\in {\Lambda }\right\}$ of disjoint measurable sets such that $\stackrel{˜}{f}=\sum {c}_{fi}{\chi }_{i}$ for all $f$ in the closed linear span of $L$, where $\
left\{{c}_{fi};i\in {\Lambda }\right\}$ is a set of complex numbers depending on $f$, and ${\chi }_{i}$ denotes the characteristic function of ${E}_{i}$. In particular, if $H$ is an infinite
dimensional separable space, then ${L}^{2}\left(M;\mu \right)$ is isometrically isomorphic to the weighted space ${l}_{w}^{2}$ consisting of all sequences $\left\{{c}_{i}\right\}$ with $\parallel \
left\{{c}_{i}\right\}{\parallel }^{2}={\sum }_{i}{|{c}_{i}|}^{2}{w}_{i}<\infty$, where ${w}_{i}=\mu \left({E}_{i}\right)$ for a fixed collection of disjoint $\mu$-atoms $\left\{{E}_{1},{E}_{2},...\
42C40 Wavelets and other special systems
|
{"url":"http://zbmath.org/?q=an:0976.42022","timestamp":"2014-04-17T21:28:56Z","content_type":null,"content_length":"27571","record_id":"<urn:uuid:2c843525-b3bb-4796-9ce5-70bacbbece10>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00142-ip-10-147-4-33.ec2.internal.warc.gz"}
|
What I dont understand is
March 31st 2008, 06:30 PM #1
What I dont understand is
After deriving "erf(x)" with Maclaurin series...I am still oblivous as to two things...where to 2/pi^(1/2) came from and as to say if I was integrating e^(-x²) from 4 to 2 how I would plug in
numbers into erf(x) to acheive the numerical answer? Could someone help?
2/pi^(1/2) is a constant that is introduced so that erf(+oo) = 1. There are good reasons for doing this.
$\int_a^b e^{-x^2}\, dx = \frac{\sqrt{\pi}}{2} (\text{erf} (b) - \text{erf} (a))$.
That is what I thought obviously since F(b)-F(a) where F'(x)=f(x) but I swore I did it and got the wrong values...
March 31st 2008, 06:51 PM #2
March 31st 2008, 06:57 PM #3
|
{"url":"http://mathhelpforum.com/calculus/32763-what-i-dont-understand.html","timestamp":"2014-04-17T01:45:52Z","content_type":null,"content_length":"38020","record_id":"<urn:uuid:f525282c-7386-4b47-86f9-701d350fa0cc>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00612-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Three Trend Models Help for Trend Forecasting - Transtutors
Three Trend Model
Time trend models assume that there is some permanent deterministic pattern across time. These models are best suited to data that are not dominated by random fluctuations.
Examining a graphical plot of the time series you want to forecast is often very useful in choosing an appropriate model. The simplest case of a time trend model is one in which you assume the series
is a constant plus purely random fluctuations that are independent from one time period to the next. Figure 15.8 shows how such a time series might look.
Figure 15.8 Time series without Trend
where b
Suppose that the series exhibits growth over time, as shown in Figure 15.9.
Figure 15.9 Time series with Linear Trend
A linear model is appropriate for this data. For the linear model, assume the x
The linear model has two parameters. The predicted values for the future are the points on the estimated line. The extension of the polynomial model to three parameters is the quadratic (which forms
a parabola). This allows for a constantly changing slope, where the x
PROC FORECAST can fit three types of time trend models: constant, linear, and quadratic. For other kinds of trend models, other SAS procedures can be used.
Exponential smoothing fits a time trend model by using a smoothing scheme in which the weights decline geometrically as you go backward in time. The forecasts from exponential smoothing are a time
trend, but the trend is based mostly on the recent observations instead of on all the observations equally. How well exponential smoothing works as a forecasting method depends on choosing a good
smoothing weight for the series.
To specify the exponential smoothing method, use the METHOD=EXPO option. Single exponential smoothing produces forecasts with a constant trend (that is, no trend). Double exponential smoothing
produces forecasts with a linear trend, and triple exponential smoothing produces a quadratic trend. Use the TREND= option with the METHOD=EXPO option to select single, double, or triple exponential
The time trend model can be modified to account for regular seasonal fluctuations of the series about the trend. To capture seasonality, the trend model includes a seasonal parameter for each season.
Seasonal models can be additive or multiplicative.
where s(t) is the seasonal parameter for the season that corresponds to time t. Get latest updates on the related subject in Statistic homework help and assignment help at transtutors.com.
The Winters method is similar to exponential smoothing, but it includes seasonal factors. The Winters method can use either additive or multiplicative seasonal factors. Like exponential smoothing,
good results with the Winters method depend on choosing good smoothing weights for the series to be forecast.
To specify the multiplicative or additive versions of the Winters method, use the METHOD=WINTERS or METHOD=ADDWINTERS options, respectively. To specify seasonal factors to include in the model, use
the SEASONS= option.
Many observed time series do not behave like constant, linear, or quadratic time trends. However, you can partially compensate for the inadequacies of the trend models by fitting time series models
to the departures from the time trend, as described in the following sections.
Time series models assume the future value of a variable to be a linear function of past values. If the model is a function of past values for a finite number of periods, it is anautoregressive model
and is written as follows:
The coefficients a are autoregressive parameters. One of the simplest cases of this model is the random walk, where the series dances around in purely random jumps. This is illustrated in Figure
Figure 15.10 Random Walk Series
In this type of model, the best forecast of a future value is the present value. However, with other autoregressive models, the best forecast is a weighted sum of recent values. Pure autoregressive
forecasts always damp down to a constant (assuming the process is stationary).
Autoregressive time series models can also be used to predict seasonal fluctuations. These latest developments are all covered in theStatistichomework helpand assignment help at transtutors.com.
Our email-based homework help support provides best and intelligent insight and recreation which help make the subject practical and pertinent for any assignment help.
Transtutors.com present timely homework help at logical charges with detailed answers to your Statistic questions so that you get to understand your assignments or homework better apart from having
the answers. Our tutors are remarkably qualified and have years of experience providing Three trend model homework help or assignment help.
Related Questions
• Let a random process be given as X (t) = Acos(2p fct) + Bsin(2p fct) where... 16 hrs ago
Let a random process be given as X (t) = Acos(2p fct) + Bsin(2p fct) where A and B are independent, zero-mean, unit variance random variables. Is the process X (t) (wide-sense) stationary?
Tags : Statistics, Descriptive Statistics, Others, College ask similar question
• Administrators at a university are planning to offer a summer seminar 18 hrs ago
Administrators at a university are planning to offer a summer seminar
Tags : Statistics, Basics of Statistics, Theory of probability, University ask similar question
• Suppose you have a box with a very large number of orange and blue beads.... 19 hrs ago
Suppose you have a box with a very large number of orange and blue beads. You want to estimate the proportion p of orange beads in the box and you want to be 92% confident that your point
estimate, which is the sample...
Tags : Statistics, Basics of Statistics, Theory of probability, University ask similar question
• One measure of the risk or volatility of an individual stock is the... 1 day ago
One measure of the risk or volatility of an individual stock is the standard deviation of thetotal return (capital appreciation plus dividends) over several periods of time. Althoughthe standard
deviation is easy to compute,...
Tags : Statistics, Regression, Correlation, Regression, University ask similar question
• selecting a test 1 day ago
selecting a test
Tags : Statistics, Hypothesis Testing, Others, University ask similar question
• Suppose the probability density function of the length of computer cables... 1 day ago
Suppose the probability density function of the length of computer cables is <span style="color: rgb(0, 0, 0); font
Tags : Statistics, Basics of Statistics, Theory of probability, University ask similar question
• Hypothesis testing What do we need to consider when we try to select a... 1 day ago
Hypothesis testing<span style='color: rgb(68, 68, 68); line-height: 200%; font-family: "Arial","sans-serif"; font-size: 12pt; mso-fareast-font-family: "Times New Roman"; mso-font-kerning:...
Tags : Statistics, Hypothesis Testing, Others, University ask similar question
• The following data represent the running times of films produced 1 day ago
The following data represent the running times of films produced by 2 motion-picture companies. Test the hypothesis that the average running time of films produced by company 2 exceeds the
average running time of films...
Tags : Statistics, Hypothesis Testing, t,F,Z distibutions, College ask similar question
• Use Excel to compute the descriptive statistics for the following data set:... 1 day ago
Use Excel to compute the descriptive statistics for the following data set:25, 45, 73, 16, 34, 98, 34, 45, 26, 2, 56, 97, 12, 445, 23, 63, 110, 12, 17, and 41.
Tags : Statistics, Descriptive Statistics, Standard Deviation, College ask similar question
• The time it takes a ski patroller to respond to an accident call has an... 2 days ago
The time it takes a ski patroller to respond to an accident call has an exponential distribution with an average response time of 5 minutes. As part of ski patrollers’ commitment to prevent an
accident related fatalities, the...
Tags : Statistics, Basics of Statistics, Theory of probability, University ask similar question
more assignments »
|
{"url":"http://www.transtutors.com/homework-help/statistics/time-series-analysis/trend-forecasting/models/","timestamp":"2014-04-17T21:25:05Z","content_type":null,"content_length":"80882","record_id":"<urn:uuid:95a7afd6-19fa-4f57-bd56-49791764f858>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00372-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Time's arrow and Boltzmann's entropy
From Scholarpedia
The arrow of time expresses the fact that in the world about us the past is distinctly different from the future. Milk spills but doesn't unspill; eggs splatter but do not unsplatter; waves break but
do not unbreak; we always grow older, never younger. These processes all move in one direction in time - they are called "time-irreversible" and define the arrow of time. It is therefore very
surprising that the relevant fundamental laws of nature make no such distinction between the past and the future. This in turn leads to a great puzzle - if the laws of nature permit all processes to
be run backwards in time, why don't we observe them doing so? Why does a video of an egg splattering run backwards look ridiculous? Put another way: how can time-reversible motions of atoms and
molecules, the microscopic components of material systems, give rise to the observed time-irreversible behavior of our everyday world? The resolution of this apparent paradox is due to Maxwell,
Thomson and (particularly) Boltzmann. These ideas also explain most other arrows of time - in particular; why do we remember the past but not the future?
What is time
Time is arguably among the most primitive concepts we have—there can be no action or movement, no memory or thought, except in time. Of course this does not mean that we understand, whatever is meant
by that loaded word "understand", what time is. As put by Saint Augustine.
"What is time? If nobody asks me, I know; but if I were desirous to explain it to one that should ask me, plainly I know not."
In a book entitled Time's Arrow and Archimedes' Point the Australian philosopher Huw Price describes well the ``stock philosophical debates about time. These have not changed much since the time of
Saint Augustine or even earlier.
"... Philosophers tend to be divided into two camps. On one side there are those who regard the passage of time as an objective feature of reality, and interpret the present moment as the marker or
leading edge of this advance. Some members of this camp give the present ontological priority, as well, sharing Augustine's view that the past and the future are unreal. Others take the view that the
past is real in a way that the future is not, so that the present consists in something like the coming into being of determinate reality. .... Philosophers in the opposing camp regard the present as
a subjective notion, often claiming that now is dependent on one's viewpoint in much the same way that here is. Just as "here" means roughly "this place", so "now" means roughly "this time", and in
either case what is picked out depends where the speaker stands. In this view there is no more an objective division of the world into the past, the present, and the future than there is an objective
division of a region of space into here and there.
Often this is called the block universe view, the point being that it regards reality as a single entity of which time is an ingredient, rather than as a changeable entity set in time."
A very good description of the block universe point of view is given by Kurt Vonnegut in his novel Slaughterhouse-Five. The coexistence of past, present and future forms one of the themes of the
book. The hero, Billy Pilgrim, speaks of the inhabitants of Tralfamadore a planet in a distant galaxy: "The Tralfamadorians can look at all different moments just the way we can look at a stretch of
the Rocky Mountains, for instance. They can see how permanent all the moments are, and they can look at any moment that interests them. It is just an illusion we have here on earth that one moment
follows another like beads on a string, and that once a moment is gone it is gone forever."
This view (with relativity properly taken into account) is certainly the one held by most physicists—at least when they think as physicists. It is well expressed in the often quoted passage from
Einstein's letter of condolences upon the death of his youthful best friend Michele Besso: "Michele has left this strange world just before me. This is of no importance. For us convinced physicists
the distinction between past, present and future is an illusion, although a persistent one."
There are however also more radical views about time among physicists. At a conference on the Physical Origins of Time Asymmetry which took place in Mazagon, Spain, in 1991, the physicist Julian
Barbour conducted an informal poll about whether time is fundamental. Here is Barbour's account of that from his book The End of Time.
"During the Workshop, I conducted a very informal straw-poll, putting the following question to each of the 42 participants: Do you believe time is a truly basic concept that must appear in the
foundations of any theory of the world, or is it an effective concept that can be derived from more primitive notions in the same way that a notion of temperature can be recovered in statistical
The results were as follows: 20 said there was no time at a fundamental level, 12 declared themselves to be undecided or wished to abstain, and 10 believed time did exist at the most basic level.
However, among the 12 in the undecided/abstain column, 5 were sympathetic to or inclined to the belief that time should not appear at the most basic level of theory."
Matter in space-time
In this article, the intuitive notion of space-time as a primitive undefined concept is taken as a working hypothesis. This space-time continuum is the arena in which matter, radiation and all kinds
of other fields exist and change.
Many of these changes have a uni-directional order "in time", or display an arrow of time. One might therefore expect, as Feynman puts it, that there is some fundamental law which says, that "uxels
only make wuxels and not vice versa." But we have not found such a law.... "so this manifest fact of our experience is not part of the fundamental laws of physics." The fundamental microscopic laws
(with some, presumably irrelevant, exceptions) all turn out to be time symmetric. Newton's laws, the Schrödinger equation, the special and general theory of relativity, etc., make no distinction
between the past and the future—they are "time-symmetric". As put by Brian Greene in his book "The Fabric of the Cosmos: Space, Time and the Structure of Reality", "no one has ever discovered any
fundamental law which might be called the Law of the Spilled Milk or the Law of the Splattered Egg."
It is only secondary laws, which describe the behavior of macroscopic objects containing many, many atoms, such as the second law of thermodynamics, (discussed below), which explicitly contain this
time asymmetry. The obvious question then is; how does one go from a time symmetric description of the dynamics of atoms to a time asymmetric description of the evolution of macroscopic systems made
up of atoms.
In answering that question, one may mostly ignore relativity and quantum mechanics. These theories, while essential for understanding both the very large scale and the very small scale structure of
the universe, have a "classical limit" which is adequate for a basic understanding of time's arrow. One may also for simplicity ignore waves, made up of photons, and any entities smaller than atoms
and talk about these atoms as if they were point particles interacting with each other via some pair potential, and evolving according to Newtonian laws.
In the context of Newtonian theory, the "theory of everything" at the time of Thomson, Maxwell and Boltzmann, the problem can be formally presented as follows: The complete microscopic (or micro)
state of a classical system of \(N\) particles, is represented by a point \(X\) in its phase space \(\Gamma\ ,\) \( X =(r_1, p_1, r_2, p_2, ..., r_N, p_N), r_i\) and \(p_i\) being three dimensional
vectors representing the position and momentum (or velocity) of the \(i\)th particle. When the system is isolated, say in a box \(V\) with reflecting walls, its evolution is governed by Hamiltonian
dynamics with some specified Hamiltonian \(H(X)\) which we will assume for simplicity to be an even function of the momenta: no magnetic fields. Given \(H(X)\ ,\) the microstate \(X(t_0)\ ,\) at time
\(t_0\ ,\) determines the microstate \(X(t)\) at all future and past times \(t\) during which the system will be or was isolated. Let \(X(t_0)\) and \(X(t_0+\tau)\ ,\) with \(\tau\) positive, be two
such microstates. Reversing (physically or mathematically) all velocities at time \(t_0+\tau\ ,\) we obtain a new microstate, \(RX\ .\) \[ RX = (r_1,-p_1, r_2,-p_2, ...,r_N,-p_N). \] If we now follow
the evolution for another interval \(\tau\) we find that the new microstate at time \(t_0 + 2\tau\) is just \(RX(t_0)\ ,\) the microstate \(X(t_0)\) with all velocities reversed: Hence if there is an
evolution, i.e. a trajectory \(X(t)\ ,\) in which some property of the system, specified by a function \(f(X(t))\ ,\) behaves in a certain way as \(t\) increases, then if \(f(X) = f(RX)\) there is
also a trajectory in which the property evolves in the time reversed direction. So why is one type of evolution, the one consistent with an entropy increase in accord with the "second law" of
thermodynamics, common and the other never seen?
An example of the entropy increasing evolution is the approach to a uniform temperature of systems initially kept isolated at different temperatures, as exemplified by putting a glass of hot tea and
a glass of cold water into an insulated container. It is common experience that after a while the two glasses and their contents will come to the same temperature.
This is one of the "laws" of thermodynamics, a subject developed in the eighteenth and nineteenth century, purely on the basis of macroscopic observations—primarily the workings of steam engines—so
central to the industrial revolution then taking place. Thermodynamics makes no reference to atoms and molecules, and its validity remains independent of their existence and nature—classical or
quantum. The high point in the development of thermodynamics came in 1865 when Rudolf Clausius pronounced his famous two fundamental theorems: 1. The energy of the universe is constant. 2. The
entropy of the universe tends to a maximum.
The "second law" says that there is a quantity called entropy associated with macroscopic systems which can only increase, never decrease, in an isolated system. In Clausius' poetic language, the
paradigm of such an isolated system is the universe itself. But even leaving aside the universe as a whole and just considering our more modest example of two glasses of water in an insulated
container, this is clearly a law which is asymmetric in time. Entropy increase is identified with heat flowing from hot to cold regions leading to a uniformization of the temperature. But, if we look
at the microscopic dynamics of the atoms making up the systems then, as noted earlier, if the energy density or temperature inside a box \(V\) gets more uniform as time increases, then, since the
energy density profile is the same for \(X\) and \(RX\ ,\) there is also an evolution in which the temperature gets more nonuniform.
There is thus clearly a difficulty in deriving or showing the compatibility of the second law with the microscopic dynamics. This is illustrated by the impossibility of time ordering of the snapshots
in {Fig. 1} using solely the microscopic dynamical laws: the time symmetry of the microscopic dynamics implies that if (a, b, c, d) is a possible ordering so is (d, c, b, a).
The explanation of this apparent paradox, due to Thomson, Maxwell and Boltzmann, shows that not only is there no conflict between reversible microscopic laws and irreversible macroscopic behavior,
but, as clearly pointed out by Boltzmann in his later writings, there are extremely strong reasons to expect the latter from the former. (Boltzmann's early writings on the subject are sometimes
unclear, wrong, and even contradictory. His later writings, however, are generally very clear). These reasons involve several interrelated ingredients which together provide the required distinction
between microscopic and macroscopic variables and explain the emergence of definite time asymmetric behavior in the evolution of the latter despite the total absence of such asymmetry in the dynamics
of the former.
To describe the macroscopic state of a system of \(N\) atoms in a box \(V\ ,\) say \( N {}^>_\sim 10^{20}\ ,\) we make use of a much cruder description than that provided by the microstate \(X\ .\)
We shall denote by \(M\) such a macroscopic description or macrostate. As an example we may take \(M\) to consist of the specification, to within a given accuracy, of the energy and number of
particles in each half of the box \(V\ .\) A more refined macroscopic description would divide \(V\) into \(K\) cells, where \(K\) is large but still \(K << N\ ,\) and specify the number of
particles, the momentum, and the amount of energy in each cell, again with some tolerance.
Clearly \(M\) is determined by \(X\) but there are many \(X\)'s (in fact a continuum) which correspond to the same \(M\ .\) Let \(\Gamma_M\) be the region in \(\Gamma\) consisting of all microstates
\(X\) corresponding to a given macrostate \(M\) and denote by \(|\Gamma_M|=(N! h^{3N})^{-1} \int_{\Gamma_M}\prod_{i=1}^Ndr_{i}p_i\ ,\) its symmetrized \(6N\) dimensional Liouville volume in units of
\(h^{3N}\ .\) At this point this is simply an arbitrary choice of units. It is however a very convenient one for dealing with the classical limit of quantum systems. ath.nyu.edu/faculty/varadhan/
Time evolution of macrostates: An example
Consider a situation in which a gas of \(N\) atoms with energy \(E\) (with some tolerance) is initially confined by a partition to the left half of the box \(V\ ,\) and suppose that this constraint
is removed at time \(t_a\ ,\) see Fig. 1. The phase space volume available to the system for times \(t>t_a\) is then fantastically enlarged compared to what it was initially, roughly by a factor of \
(2^N\ .\) If the system contains 1 mole of gas then the volume ratio of the unconstrained phase space region to the constrained one is far larger than the ratio of the volume of the known universe to
the volume of one atom.
Let us now consider the macrostate of this gas as given by \(M=\left({N_L \over N} , {E_L \over E}\right)\ ,\) the fraction of particles and energy in the left half of \(V\) (within some small
tolerance). The macrostate at time \(t_a, M=(1, 1)\ ,\) will be denoted by \(M_a\ .\) The phase-space region \(|\Gamma| = \Sigma_E\ ,\) available to the system for \(t> t_a\ ,\) i.e., the region in
which \(H(X) \in (E, E + \delta E), \delta E << E\ ,\) will contain new macrostates, corresponding to various fractions of particles and energy in the left half of the box, with phase space volumes
very large compared to the initial phase space volume available to the system. We can then expect (in the absence of any obstruction, such as a hidden conservation law) that as the phase point \(X\)
evolves under the unconstrained dynamics and explores the newly available regions of phase space, it will with very high probability enter a succession of new macrostates \(M\) for which \(|\Gamma_
{M}|\) is increasing. The set of all the phase points \(X_t\ ,\) which at time \(t_a\) were in \(\Gamma_{M_a}\ ,\) forms a region \(T_t \Gamma_{M_a}\) whose volume is, by Liouville's Theorem, equal
to \(|\Gamma_{M_a}|\ .\) The shape of \(T_t\Gamma_{M_a}\) will however change with \(t\) and as \(t\) increases \(T_t\Gamma_{M_a}\) will increasingly be contained in regions \(\Gamma_M\)
corresponding to macrostates with larger and larger phase space volumes \(|\Gamma_M|\ .\) This will continue until almost all the phase points initially in \(\Gamma_{M_a}\) are contained in \(\Gamma_
{M_{eq}}\ ,\) with \(M_{eq}\) the system's unconstrained macroscopic equilibrium state. This is the state in which approximately half the particles and half the energy will be located in the left
half of the box, \(M_{eq} = ({1\over 2}, {1 \over 2})\) i.e. \(N_L /N\) and \(E_L/ E\) will each be in an interval \(\left({1 \over 2} - \epsilon, {1 \over 2} + \epsilon\right)\ ,\) \(N^{-1/2} << \
epsilon << 1\ .\)
\(M_{eq}\) is characterized, in fact defined, by the fact that it is the unique macrostate, among all the \(M_\alpha\ ,\) for which \(|\Gamma_{M_{eq}}| / |\Sigma_E| \simeq 1\ ,\) where \(|\Sigma_E|\)
is the total phase space volume available under the energy constraint \(H(X) \in (E, E + \delta E)\ .\) (Here the symbol \(\simeq\) means equality when \(N \to \infty\ .\)) That there exists a
macrostate containing almost all of the microstates in \(\Sigma_E\) is a consequence of the law of large numbers. The fact that \(N\) is enormously large for macroscopic systems is absolutely
critical for the existence of thermodynamic equilibrium states for any reasonable definition of macrostates, in the above example e.g. for any \(\epsilon\ ,\) such that, \(N^{-1/2} << \epsilon << 1\
.\) Indeed thermodynamics does not apply (is even meaningless) for isolated systems containing just a few particles. Nanosystems are interesting and important intermediate cases: Note however that in
many cases an \(N\) of about 1,000 will already behave like a macroscopic system: see related discussion about computer simulations below.
After reaching \(M_{eq}\) we will (mostly) see only small fluctuations in \(N_L(t) / N\) and \(E_L(t) / E\ ,\) about the value \({1 \over 2}\ :\) typical fluctuations in \(N_L\) and \(E_L\) being of
the order of the square root of the number of particles involved. (Of course if the system remains isolated long enough we will occasionally also see a return to the initial macrostate—the expected
time for such a Poincaré recurrence is however much longer than the age of the universe and so is of no practical relevance when discussing the approach to equilibrium of a macroscopic system.)
As already noted earlier, the scenario in which \(|\Gamma_{M(X(t))}|\) increase with time for the \(M_a\) shown in Fig.1 cannot be true for all microstates \(X\subset \Gamma_{M_a}\ .\) There will of
necessity be \(X\)'s in \(\Gamma_{M_a}\) which will evolve for a certain amount of time into microstates \(X(t)\equiv X_t\) such that \(|\Gamma_{M(X_t)}|<|\Gamma_{M_a}|\ ,\) e.g. microstates \(X\in \
Gamma_{M_a}\) which have all velocities directed away from the barrier which was lifted at \(t_a\ .\) What is true however is that the subset \(B\) of such "bad" initial states has a phase space
volume which is very very small compared to that of \(\Gamma_{M_a}\ .\) This is what is meant by the statement that entropy increasing behavior is typical; a more extensive discussion of typicality
is given later.
Boltzmann's entropy
The end result of the time evolution in the above example, that of the fraction of particles and energy becoming and remaining essentially equal in the two halves of the container when \(N\) is large
enough (and `exactly equal' when \(N \to\infty\)), is of course what is predicted by the second law of thermodynamics.
It was Boltzmann's great insight to connect the second law with the above phase space volume considerations by making the observation that for a dilute gas \(\log |\Gamma_{M_{eq}}|\) is proportional,
up to terms negligible in the size of the system, to the thermodynamic entropy of Clausius. Boltzmann then extended his insight about the relation between thermodynamic entropy and \(\log |\Gamma_{M_
{eq}}|\) to all macroscopic systems; be they gas, liquid or solid. This provided for the first time a microscopic definition of the operationally measurable entropy of macroscopic systems in
Having made this connection Boltzmann then generalized it to define an entropy also for macroscopic systems not in equilibrium. That is, he associated with each microscopic state \(X\) of a
macroscopic system a number \(S_B\) which depends only on \(M(X)\) given, up to multiplicative and additive constants (which can depend on \(N\)), by \[\tag{1} S_B(X) = S_B (M(X)) \]
with \[\tag{2} S_B(M) = k \log|\Gamma_{M}|, \]
This is the Boltzmann entropy of a classical system, Penrose (1970) N. B. This definition uses two equations to emphasize their logical independence which is important for the discussion of quantum
Boltzmann then used phase space arguments, like those given above, to explain (in agreement with the ideas of Maxwell and Thomson) the observation, embodied in the second law of thermodynamics, that
when a constraint is lifted, an isolated macroscopic system will evolve toward a state with greater entropy. In effect Boltzmann argued that due to the large differences in the sizes of \(\Gamma_M\ ,
\) \(S_B(X_t) = k \log |\Gamma_{M(X_t)}|\) will typically increase in a way which explains and describes qualitatively the evolution towards equilibrium of macroscopic systems.
These very large differences in the values of \(|\Gamma_M|\) for different \(M\) come from the very large number of particles (or degrees of freedom) which contribute, in an (approximately) additive
way, to the specification of macrostates. This is also what gives rise to typical or almost sure behavior. Typical, as used here, means that the set of microstates corresponding to a given macrostate
\(M\) for which the evolution leads to a macroscopic increase (or non-decrease) in the Boltzmann entropy during some fixed macroscopic time period \(\tau\) occupies a subset of \(\Gamma_M\) whose
Liouville volume is a fraction of \(|\Gamma_M|\) which goes very rapidly (exponentially) to one as the number of atoms in the system increases. The fraction of "bad" microstates, which lead to an
entropy decrease, thus goes to zero as \(N\to \infty\ .\)
Typicality is what distinguishes macroscopic irreversibility from the weak approach to equilibrium of probability distributions (ensembles) of systems with good ergodic properties having only a few
degrees of freedom, e.g. two hard spheres in a cubical box. While the former is manifested in a typical evolution of a single macroscopic system the latter does not correspond to any appearance of
time asymmetry in the evolution of an individual system. Maxwell makes clear the importance of the separation between microscopic and macroscopic scales when he writes: "the second law is drawn from
our experience of bodies consisting of an immense number of molecules. ... it is continually being violated, ..., in any sufficiently small group of molecules ... . As the number ... is increased ...
the probability of a measurable variation ... may be regarded as practically an impossibility."
On the other hand, because of the exponential increase of the phase space volume with particle number, even a system with only a few hundred particles, such as is commonly used in molecular dynamics
computer simulations, will, when started in a nonequilibrium `macrostate' \(M\ ,\) with `random' \(X \in \Gamma_M\ ,\) appear to behave like a macroscopic system. After all, the likelihood of
hitting, in the course of say one thousand tries, something which has probability of order \(2^{-N}\) is, for all practical purposes, the same, whether \(N\) is a hundred or \(10^{23}\ .\) Of course
the fluctuation in \(S_B\) both along the path towards equilibrium and in equilibrium will be larger when \(N\) is small, c.f. [2b]. This will be so even when integer arithmetic is used in the
simulations so that the system behaves as a truly isolated one; when its velocities are reversed the system retraces its steps until it comes back to the initial state (with reversed velocities),
after which it again proceeds (up to very long Poincare recurrence times) in the typical way.
We might take as a summary of such insights in the late part of the nineteenth century the statement by Gibbs and quoted by Boltzmann (in a German translation) on the cover of his book Lectures on
Gas Theory II:
``In other words, the impossibility of an uncompensated decrease of entropy seems to be reduced to an improbability.
Initial conditions
Once we accept the statistical explanation of why macroscopic systems evolve in a manner that makes \(S_B\) increase with time, there remains the nagging problem (of which Boltzmann was well aware)
of what we mean by "with time": since the microscopic dynamical laws are symmetric, the two directions of the time variable are a priori equivalent and thus must remain so a posteriori.
In terms of Fig. 1 this question may be put as follows: why can one use phase space arguments to predict the macrostate at time \(t\) of an isolated system whose macrostate at time \(t_b\) is \(M_b\
,\) in the future, i.e. for \(t > t_b\ ,\) but not in the past, i.e. for \(t < t_b\ ?\) After all, if the macrostate \(M\) is invariant under velocity reversal of all the atoms, then the same
argument should apply equally to \(t_b + \tau\) and \(t_b -\tau\ .\) A plausible answer to this question is to assume that the nonequilibrium macrostate \(M_b\) had its origin in an even more
nonuniform macrostate \(M_a\ ,\) prepared by some experimentalist at some earlier time \(t_a < t_b\) (as is indeed the case in Figure 1) and that for states thus prepared we can apply our
(approximately) equal a priori probability of microstates argument, i.e. we can assume its validity at time \(t_a\ .\) But what about events on the sun or in a supernova explosion where there are no
experimentalists? And what, for that matter, is so special about the status of the experimentalist? Isn't he or she part of the physical universe?
Put differently, where ultimately do initial conditions, such as those assumed at \(t_a\ ,\) come from? In thinking about this we are led more or less inevitably to introduce cosmological
considerations by postulating an initial "macrostate of the universe" having a very small Boltzmann entropy. To again quote Boltzmann: "That in nature the transition from a probable to an improbable
state does not take place as often as the converse, can be explained by assuming a very improbable [small \(S_B\)] initial state of the entire universe surrounding us. This is a reasonable assumption
to make, since it enables us to explain the facts of experience, and one should not expect to be able to deduce it from anything more fundamental". While this requires that the initial macrostate of
the universe, call it \(M_0\ ,\) be very far from equilibrium with \(|\Gamma_{M_0}|<< |\Gamma_{M_{eq}}|\ ,\) it does not require that we choose a special microstate in \(\Gamma_{M_0}\ .\) As also
noted by Boltzmann elsewhere "We do not have to assume a special type [read microstate] of initial condition in order to give a mechanical proof of the second law, if we are willing to accept a
statistical viewpoint...if the initial state is chosen at random...entropy is almost certain to increase." This is a very important aspect of Boltzmann's insight: it is sufficient to assume that this
microstate is typical of an initial macrostate \(M_0\) which is far from equilibrium.
This going back to the initial conditions, i.e. the existence of an early state of the universe (presumably close to the big bang) with a much lower value of \(S_B\) than the present universe, as an
ingredient in the explanation of the observed time asymmetric behavior, bothers some scientists. A common question is: how does the mixing of the two colors after removing the partitions in Fig. 1
depend on the initial conditions of the universe? The answer is that once you accept that the microstate of the system in 1a is typical of its macrostate the future evolution of the macrostates of
this isolated system will indeed look like those depicted in Fig 1. It is the existence of inks of different colors separated in different compartments by an experimentalist, indeed the very
existence of the solar system, etc. which depends on the initial conditions. In a "typical" universe everything would be in equilibrium.
It is the initial state of the universe plus the dynamics which determines what is happening at present. Conversely, we can deduce information about the initial state from what we observe now. As put
by Feynman, Feynman, et al. (1967) "It is necessary to add to the physical laws the hypothesis that in the past the universe was more ordered, in the technical sense, [i.e. low \(S_B\)] than it is
today...to make an understanding of the irreversibility."
A very clear discussion of initial conditions is given by Roger Penrose in connection with the "big bang" cosmology, Penrose, (1990 and 2005). He takes for the initial macrostate of the universe the
smooth energy density state prevalent soon after the big bang: an equilibrium state (at a very high temperature) except for the gravitational degrees of freedom which were totally out of equilibrium,
as evidenced by the fact that the matter-energy density was spatially very uniform. That such a uniform density corresponds to a nonequilibrium state may seem at first surprising, but gravity, being
purely attractive and long range, is unlike any of the other fundamental forces. When there is enough matter/energy around, it completely overcomes the tendency towards uniformization observed in
ordinary objects at high energy densities or temperatures. Hence, in a universe dominated, like ours, by gravity, a uniform density corresponds to a state of very low entropy, or phase space volume,
for a given total energy, see Fig. 2.
The local `order' or low entropy we see around us (and elsewhere)—from complex molecules to trees to the brains of experimentalists preparing macrostates—is perfectly consistent with (and possibly
even a necessary consequence of, i.e. typical of) this initial macrostate of the universe. The value of \(S_B\) at the present time, \(t_p\ ,\) corresponding to \(S_B (M_{t_p})\) of our current
clumpy macrostate describing a universe of planets, stars, galaxies, and black holes, is much much larger than \(S_B(M_0)\ ,\) the Boltzmann entropy of the "initial state", but still quite far away
from \(S_B(M_{eq})\) its equilibrium value. The `natural' or `equilibrium' state of the universe, \(M_{eq}\ ,\) is, according to Roger Penrose, Penrose (1990 and 2005), one with all matter and energy
collapsed into one big black hole. Penrose gives an estimate \(S_B(M_0) / S_B(M_{t_p}) / S_{eq}\sim 10^{88} / 10^{101} / 10^{123}\) in natural (Planck) units, see Fig. 3.
It is this fact that we are still in a state of low entropy that permits the existence of relatively stable neural connections, of marks of ink on paper, which retain over relatively long periods of
time shapes related to their formation. Such nonequilibrium states are required for memories- in fact for the existence of living beings and of the earth itself.
We have no such records of the future and the best we can do is use statistical reasoning which leaves much room for uncertainty. Equilibrium systems, in which the entropy has its maximal value, do
not distinguish between past and future.
Penrose's consideration about the very far from equilibrium uniform density "initial state" of the universe is quite plausible, but it is obviously far from proven. In any case it is, as Feynman
says, both necessary and sufficient to assume a far from equilibrium initial state of the universe, and this is in accord with all cosmological evidence. The "true" equilibrium state of the universe
may also be different from what Penrose proposes. There are alternate scenarios in which the black holes evaporate and leave behind mostly empty space, c.f. Chen and Carroll.
The question as to why the universe started out in such a very unusual low entropy initial state worries Penrose quite a lot (since it is not explained by any current theory) but such a state is just
accepted as a given by Boltzmann. Clearly, it would be nice to have a theory which would explain the "cosmological initial state", but such a theory is not available at present. The "anthropic
principle" in which there are many universes and ours just happens to be right, or we would not be here, is too speculative for an encyclopedic article.
• R. P. Feynman, The Character of Physical Law, MIT Press, Cambridge, Mass. (1967), ch. 5.
• S. Goldstein and J. L. Lebowitz, On the Boltzmann Entropy of Nonequilibrium Systems, Physica D, 193, 53-66, (2004); {b)} P. Garrido, S. Goldstein and J. L. Lebowitz, The Boltzmann Entropy of
Dense Fluids Not in Local Equilibrium, Phys. Rev. Lett. 92, 050602, (2003).
• J. L. Lebowitz, {a)} Macroscopic Laws and Microscopic Dynamics, Time's Arrow and Boltzmann's Entropy, Physica A 194, 1–97(1993);
• J.L. Lebowitz, {b}}Boltzmann's Entropy and Time's Arrow, Physics Today, 46, 32–38(1993); see also letters to the editor and response in "Physics Today", 47, 113-116 (1994);{c)} Microscopic
Origins of Irreversible Macroscopic Behavior, Physica A, 263, 516–527, (1999);
• J.L. Lebowitz, {d)} A Century of Statistical Mechanics: A Selective Review of Two Central Issues, Reviews of Modern Physics, 71, 346–357, 1999; {e)} From Time-symmetric Microscopic Dynamics to
Time-asymmetric Macroscopic Behavior: An Overview, to appear in European Mathematical Publishing House, ESI Lecture Notes in Mathematics and Physics.
• O. Penrose, Foundations of Statistical Mechanics, Pergamon, Elmsford, N.Y. (1970): reprinted by Dorer (2005).
• R. Penrose, The Emperor's New Mind, Oxford U.P., New York(1990), ch. 7: The Road to Reality, A. E. Knopf, New York(2005), ch. 27–29.
• S.M. Carroll and J. Chen, Spontaneous Inflation and the Origin of the Arrow of Time, arXiv:hep-th/0410270v1
Internal references
• Valentino Braitenberg (2007) Brain. Scholarpedia, 2(11):2918.
• Tomasz Downarowicz (2007) Entropy. Scholarpedia, 2(11):3901.
• Eugene M. Izhikevich (2007) Equilibrium. Scholarpedia, 2(10):2014.
• Mark Aronoff (2007) Language. Scholarpedia, 2(5):3175.
• Howard Eichenbaum (2008) Memory. Scholarpedia, 3(3):1747.
• Philip Holmes and Eric T. Shea-Brown (2006) Stability. Scholarpedia, 1(10):1838.
• David H. Terman and Eugene M. Izhikevich (2008) State space. Scholarpedia, 3(3):1924.
Recommended reading
• For a general history of the subject and references to the original literature see S.G. Brush, The Kind of Motion We Call Heat, Studies in Statistical Mechanics, vol. VI, E.W. Montroll and J.L.
Lebowitz, eds. North-Holland, Amsterdam, (1976).
• For a historical discussion of Boltzmann and his ideas see articles by M. Klein, E. Broda, L. Flamn in The Boltzmann Equation, Theory and Application, E.G.D. Cohen and W. Thirring, eds.,
Springer-Verlag, 1973.
• For interesting biographies of Boltzmann, which also contain many quotes and references, see E. Broda, Ludwig Boltzmann, Man—Physicist—Philosopher, Ox Bow Press, Woodbridge, Conn (1983); C.
Cercignani, Ludwig Boltzmann; The Man Who Treated Atoms, Oxford University Press (1998); D. Lindley, Boltzmann's Atom: The Great Debate that Launched a Revolution in Physics, Simon & Shuster
See also
|
{"url":"http://www.scholarpedia.org/article/Time's_arrow_and_Boltzmann's_entropy","timestamp":"2014-04-20T20:56:10Z","content_type":null,"content_length":"69314","record_id":"<urn:uuid:9869928e-234b-4468-9681-04fc5c7ee2cf>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00268-ip-10-147-4-33.ec2.internal.warc.gz"}
|
What to expect when i simulate a recurrence relation?
September 28th 2010, 10:21 AM
What to expect when i simulate a recurrence relation?
I am working on an assignment, and I am having trouble with this recurrence relation:
x(n+2) - 3x(n+1) + x(n) = 0
x(0) = 1
x(1) = (3-sqrt(5))/2
What can you expect to happen when you simulate this equation numerically?
The next task is to simulate the equation in python, but for now, they want me to point out what kind of problems I might run into.
I figured that x(1) will not be represented correctly when converted to a 64-bit float, and that the misrepresentation will lead to large errors for large values of n, but I don't know how to
elaborate and explain this sufficiently, or if i might encounter more problems.
Can anyone help me?
Sorry if my english is unclear!
September 28th 2010, 12:03 PM
The solution of the difference equation...
$x_{n+2} -3\ x_{n+1} + x_{n} =0$ , $x_{0}=1$ , $x_{1}= \frac{3-\sqrt{5}}{2}$ (1)
... is of the form...
$x_{n}= c_{1}\ r_{1}^{n} + c_{2}\ r_{2}^{n}$ (2)
... where $r_{1}$ and $r_{2}$ are the solution of the second order algebraic equation...
$r^{2} - 3\ r +1=0$ (3)
... that are...
$r_{1}= \frac{3-\sqrt{5}}{2}$
$r_{2}= \frac{3+\sqrt{5}}{2}$ (4)
The 'initial conditions' give $c_{1}=1$$c_{2}=0$ , so that the solution is...
$x_{n} = (\frac{3-\sqrt{5}}{2})^{n}$ (5)
Kind regards
|
{"url":"http://mathhelpforum.com/calculus/157719-what-expect-when-i-simulate-recurrence-relation-print.html","timestamp":"2014-04-19T22:33:37Z","content_type":null,"content_length":"7247","record_id":"<urn:uuid:8ea76e32-0045-4f56-a30d-644adc993abb>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00518-ip-10-147-4-33.ec2.internal.warc.gz"}
|
View Tubes: Teacher Notes
Conceptual Understanding
Data Collection
Line of Best Fit
Interpret Data
Procedural Knowledge
Organize Data
Graph Data
Make Predictions
Problem Solving
Does the length of a viewing tube, its diameter, or the distance from an object affect the type of data collected and the resulting graph?
Work in groups of four or five.
Each group needs:
□ Measuring Tape placed vertically on the wall from the floor upward.
□ Assorted tubes with varying lengths and diameters to correspond to the investigation assigned to each group
□ Tape to mark standing positions on the floor in front of the measuring tape
□ Grid paper for each group
1. Students will work in groups at designated "viewing stations."
2. Each viewing station has a measuring tape on the wall and standing positions marked on the floor in front of the wall.
3. Each student standing at indicated distances from the wall will use a tube to view the measuring tape and then describe to other group members what portion of the poster is visible.
4. Other group members will measure the height of the described viewable portion to the nearest inch.
5. Data is recorded as collected for each member of the group.
6. Graphs of best-fit lines or curves are drawn and interpretations made.
7. Investigating the relationship between the dimensions of the tubes, slopes, and intercepts should reveal:
A. When tube length and diameter are constant, the viewable height is a linear function of the distance from the wall. The intercept is the diameter of the tube.
B. When distance from the wall and tube length are constant, the viewable height is a linear function of the diameter of the tube.
C. When the distance from the wall and tube diameter are constant, the viewable height is a nonlinear function of the length of the tube.
Investigation A
1. Collect the Data: Each member of the group will view the poster with the given tubes at varying distances from the wall. Others will mark the top and bottom of the described portion to
determine the measurement in inches of the viewing height. Calculate the average visible height for each of the distances.
2. Graph the Data: Choose appropriate labels and scales for the horizontal and vertical axes. Plot the data as ordered pairs, (x, y).
3. Read the Results: Looking at your points, do they seem to lie along a line or a curve? Draw the line that best fits your data.
4. Describe in words how to determine the height of the visible portion if you know the distance from the wall.
5. Describe by equation how to determine the height of the visible portion (y) if you know the distance from the wall (x).
y =
6. Predict the height of the visible portion, if you were standing 10 feet
from the wall.
7. Predict the distance from the wall you would have to stand in order to see a 25-inch portion of the poster.
Viewing Height:
Same Tube Length & Tube Diameter
Varying Distances from Wall
│ │ Distance from Poster │
│Student name ├──────┬──────┬──────┬──────┬──────┬──────┬──────┬──────┤
│ │1 foot│2 feet│3 feet│4 feet│5 feet│6 feet│7 feet│8 feet│
│Total │ │ │ │ │ │ │ │ │
Investigation B
1. Collect the Data: Each member of the group will view the poster with the given tubes at varying distances from the wall. Others will mark the top and bottom of the described portion to
determine the measurement in inches of the viewing height. Calculate the average visible height for each of the distances.
2. Graph the Data: Choose appropriate labels and scales for the horizontal and vertical axes. Plot the data as ordered pairs, (x, y).
3. Read the Results: Looking at your points, do they seem to lie along a line or a curve? Draw the line that best fits your data.
4. Describe in words how to determine the height of the visible portion if you know the diameter of the tube.
5. Describe by equation how to determine the height of the visible portion (y) if you know the diameter of the tube (x).
y =
6. Predict the height of the visible portion for a tube with a diameter of five inches.
7. Predict the diameter of a tube that would allow you to see a 10-inch portion of the poster.
Viewing Height:
Same Distance from Wall & Tube Length
Varying Diameters
│ │Diameter of Tube │
│Student Name ├──┬──┬──┬──┬──┬──┤
│ │1"│2"│3"│4"│5"│6"│
│Average │ │ │ │ │ │ │
Investigation C
1. Collect the Data: Each member of the group will view the poster with the given tubes varying in length, a designated distance from the wall. Others will mark the top and bottom of the
described portion to determine the measurement in inches of the viewing height. Calculate the average visible height for each of the lengths.
2. Graph the Data: Choose appropriate labels and scales for the horizontal and vertical axes. Plot the data as ordered pairs, (x, y).
3. Read the Results: Looking at your points, as the length of the tube increases, what is happening to the height of the visible portion of the picture?
4. Would the height of the portion ever decrease to zero? Why?
5. What height does the line seem to be approaching as a minimum?
Viewing Height
Same Tube Diameter & Distance from Wall
Varying Length
│ │ Length of Tube │
│Student Name ├────────┬────────┬─────────┬─────────┤
│ │4 inches│7 inches│22 inches│33 inches│
│Average │ │ │ │ │
As a result of this activity, students learn to analyze data that they have collected, look for relationships, and make predictions.
Have students answer the following questions:
1. Explain how you knew the graphs were or were not linear.
2. Predict the height of the visible portion of the measuring tape if you were standing 15 feet from the wall.
3. Predict the distance you would have to stand from the wall in order to see a 30-inch portion of the measuring tape.
4. Predict the height of the visible portion for the tube with a diameter of 7.5 inches.
5. Predict the diameter of a tube that would allow you to see a 20-inch portion of the measuring tape.
|
{"url":"http://fcit.usf.edu/fcat8m/resource/activity/viewtubt.htm","timestamp":"2014-04-18T08:57:25Z","content_type":null,"content_length":"27107","record_id":"<urn:uuid:8531e96e-cac5-4dd2-a8bf-85991a7c5244>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00632-ip-10-147-4-33.ec2.internal.warc.gz"}
|
what is the mean of factorial(-1/2)??
How did quantum field theory or analytically contination get into this? It seemed as if you meant it was (bad) lazy to make the definition x! = gamma(x+1). If you only meant the definition should be
clearly stated, that is fine. Though it is tedious to clearly define everything at all times. There is really not a competing definition of x!, x! = gamma(x+1) + A(x)*sin(Pi*x) having not caught on.
No confusion results. As I already stated it is desirable that x! be log convex. Speaking of being (bad) lazy one does not normally speak of sets like N, Q, and R as being continuous or not. When one
speaks of a function being continuous, one specifies a topology or equivelently type of limit that defines the continuity.
If we define f(x)=sin(x*pi) when x is rational (that is Q is the domain of f)
Theorem 1: f(x) is (rational) continuous for all x
because sin(x+h)-sin(x) is defined and small for all x and small h (where x+h and x are in the domain of f)
Theorem 2: f(x) is an algebraic number for all x
So Theorem 1 will remain true, but theorem 2 will be ruined by the generalization to real numbers.
As you have written: "If you only meant the definition should be clearly stated, that is fine." which is exactly what I meant. And the reason that I meant it, is because the usual definition is not
always clearly understood - otherwise we would not be having this discussion because the question would not have been asked.
As for QFT - this same issue crops up almost everywhere in QFT. Typically, after a long calculation, an integral appears that is infinite, and the question becomes what to do. The usual answer is to
change one of the (integral) parameters of the problem from an integer to something continuous; the integral becomes tractable, and the final answer is obtained after the parameter is reset to its
initial integral value. Books and careers have been devoted to methods of doing this - it is called "regularization" when the integer is generalized, and "analytic continuation" when the general
value is returned to an integer. There are an infinite number os ways of doing this.
Example: a four dimensional integral is infinite. So change the dimensionality from "4", an integer, to something continuous (never mind how - as long as the two coincide when the dimensionality is
4, its OK), the "regularized" integral is no longer infinite and the calculation proceeds. Finally, reset the dimensionality back to 4. This is exactly the same issue as generalizing n!=GAMMA(n+1) to
x!=GAMMA(x+1), so it is important to understand this simple case.
I don't understand your example: You write "f(x) is (rational) continuous for all x".
"Rational" means that it is the ratio of integers, and it is well-known that the rationals are not continuous - there are in infinity of irrational numbers between every rational one- and that is
prior to considering transcendentals.
|
{"url":"http://www.physicsforums.com/showthread.php?p=2399554","timestamp":"2014-04-21T14:49:38Z","content_type":null,"content_length":"62633","record_id":"<urn:uuid:47ae6122-8bbb-4a31-9294-adff746c4b84>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00443-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Transform a pde into rotating frame
I have an equation of the form;
\frac{d}{dt}(W) = \omega \left(x \frac{\partial}{\partial y} - y \frac{\partial}{\partial x} \right) W + g \frac{\partial}{\partial y} W + k x \frac{\partial^2}{\partial y^2} W
I want to change it into the rotating frame using the transform;
x = x' cos(wt) - y' sin(wt)
y = x' sin(wt) + y' cos(wt)
I have calculated the derivatives of these transforms to be;
\frac{\partial}{\partial x} = -cos(\omega t) \frac{\partial}{\partial x'} - sin(\omega t) \frac{\partial}{\partial y'}
\frac{\partial}{\partial y} = -cos(\omega t) \frac{\partial}{\partial y'} + sin(\omega t) \frac{\partial}{\partial x'}
\frac{\partial^2}{\partial x^2} = -cos^(2)(\omega t) \frac{\partial^2}{\partial x'^2} - sin^(2)(\omega t) \frac{\partial^2}{\partial y'^2}
I am assuming I can just substitute these transforms for x, y and their derivatives into the original equation and this will give me the original equation in the rotating frame...but do I have to do
something with the time derivative on the L.H.S of the original equation??
Thank you.
|
{"url":"http://www.physicsforums.com/showthread.php?s=541d9710fd27878579cb4f2c898675e0&p=3988601","timestamp":"2014-04-20T18:32:17Z","content_type":null,"content_length":"20572","record_id":"<urn:uuid:2e3710de-fde8-4bd7-bbf6-c3a59988eef5>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00133-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How to sort my paws?
up vote 98 down vote favorite
In my previous question I got an excellent answer that helped me detect where a paw hit a pressure plate, but now I'm struggling to link these results to their corresponding paws:
I manually annotated the paws (RF=right front, RH= right hind, LF=left front, LH=left hind).
As you can see there's clearly a repeating pattern and it comes back in almost every measurement. Here's a link to a presentation of 6 trials that were manually annotated.
My initial thought was to use heuristics to do the sorting, like:
• There's a ~60-40% ratio in weight bearing between the front and hind paws;
• The hind paws are generally smaller in surface;
• The paws are (often) spatially divided in left and right.
However, I’m a bit skeptical about my heuristics, as they would fail on me as soon as I encounter a variation I hadn’t thought off. They also won’t be able to cope with measurements from lame dogs,
whom probably have rules of their own.
Furthermore, the annotation suggested by Joe sometimes get's messed up and doesn't take into account what the paw actually looks like.
Based on the answers I received on my question about peak detection within the paw, I’m hoping there are more advanced solutions to sort the paws. Especially because the pressure distribution and the
progression thereof are different for each separate paw, almost like a fingerprint. I hope there's a method that can use this to cluster my paws, rather than just sorting them in order of occurrence.
So I'm looking for a better way to sort the results with their corresponding paw.
For anyone up to the challenge, I have pickled a dictionary with all the sliced arrays that contain the pressure data of each paw (bundled by measurement) and the slice that describes their location
(location on the plate and in time).
To clarfiy: walk_sliced_data is a dictionary that contains ['ser_3', 'ser_2', 'sel_1', 'sel_2', 'ser_1', 'sel_3'], which are the names of the measurements. Each measurement contains another
dictionary, [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10] (example from 'sel_1') which represent the impacts that were extracted.
Also note that 'false' impacts, such as where the paw is partially measured (in space or time) can be ignored. They are only useful because they can help recognizing a pattern, but won't be analyzed.
And for anyone interested, I’m keeping a blog with all the updates regarding the project!
python image-processing
2 It was fascinating reading the responses to your previous question. Hopefully this will also generate much interest. – neil Dec 21 '10 at 18:59
1 Yeah, the approach I was using doesn't quite work. Just to elaborate, the approach I was using is to just order the impacts, and assume that the first paw to touch is the same as the 5th paw to
touch, and so on. (i.e. order the impacts and use a modulo 4). The problem with this is that sometimes the rear paws impact off the sensor pad after the first paw touches down. In that case, the
first paw to impact matches the 4th or 3rd paw to impact. Hopefully this makes some sense. – Joe Kington Dec 21 '10 at 19:40
1 Yeah I noted that when I started to manually annotate the impacts with the paws @Joe, but still your method was fabulous for extracting the paws in a manageable fashion. Now I'm hoping someone can
come up with something just as awesome for sorting them :-) – Ivo Flipse Dec 21 '10 at 19:45
1 Would I be interpreting the images correctly in that one toe of each hind foot exerts significantly less pressure than the rest? It also appears that toe is always towards the 'inside' i.e.
towards the dog's center of mass. Could you incorporate that as a heuristic? – Thomas Langston Dec 21 '10 at 21:31
1 I'll admit my limited image processing skills are somewhat rusty, but is it easily possible to take the least steep gradient of the large middle pad of each paw? It seems the angle of least
steepness would help immensely (a hand-drawn example for the paws posted: imgur.com/y2wBC imgur.com/yVqVU imgur.com/yehOc imgur.com/q0tcD) – user470379 Dec 22 '10 at 3:51
show 3 more comments
3 Answers
active oldest votes
Alright! I've finally managed to get something working consistently! This problem pulled me in for several days... Fun stuff! Sorry for the length of this answer, but I need to
elaborate a bit on some things... (Though I may set a record for the longest non-spam stackoverflow answer ever!)
As a side note, I'm using the full dataset that Ivo provided a link to in his original question. It's a series of rar files (one-per-dog) each containing several different experiment
runs stored as ascii arrays. Rather than try to copy-paste stand-alone code examples into this question, here's a bitbucket mercurial repository with full, stand-alone code. You can
clone it with
hg clone https://joferkington@bitbucket.org/joferkington/paw-analysis
There are essentially two ways to approach the problem, as you noted in your question. I'm actually going to use both in different ways.
1. Use the (temporal and spatial) order of the paw impacts to determine which paw is which.
2. Try to identify the "pawprint" based purely on its shape.
Basically, the first method works with the dog's paws follow the trapezoidal-like pattern shown in Ivo's question above, but fails whenever the paws don't follow that pattern. It's
fairly easy to programatically detect when it doesn't work.
Therefore, we can use the measurements where it did work to build up a training dataset (of ~2000 paw impacts from ~30 different dogs) to recognize which paw is which, and the problem
reduces to a supervised classification (With some additional wrinkles... Image recognition is a bit harder than a "normal" supervised classification problem).
Pattern Analysis
To elaborate on the first method, when a dog is walking (not running!) normally (which some of these dogs may not be), we expect paws to impact in the order of: Front Left, Hind Right,
Front Right, Hind Left, Front Left, etc. The pattern may start with either the front left or front right paw.
If this were always the case, we could simply sort the impacts by initial contact time and use a modulo 4 to group them by paw.
However, even when everything is "normal", this doesn't work. This is due to the trapezoid-like shape of the pattern. A hind paw spatially falls behind the previous front paw.
Therefore, the hind paw impact after the initial front paw impact often falls off the sensor plate, and isn't recorded. Similarly, the last paw impact is often not the next paw in the
sequence, as the paw impact before it occured off the sensor plate and wasn't recorded.
Nonetheless, we can use the shape of the paw impact pattern to determine when this has happened, and whether we've started with a left or right front paw. (I'm actually ignoring
problems with the last impact here. It's not too hard to add it, though.)
def group_paws(data_slices, time):
# Sort slices by initial contact time
data_slices.sort(key=lambda s: s[-1].start)
# Get the centroid for each paw impact...
paw_coords = []
for x,y,z in data_slices:
paw_coords.append([(item.stop + item.start) / 2.0 for item in (x,y)])
paw_coords = np.array(paw_coords)
# Make a vector between each sucessive impact...
dx, dy = np.diff(paw_coords, axis=0).T
#-- Group paws -------------------------------------------
paw_code = {0:'LF', 1:'RH', 2:'RF', 3:'LH'}
paw_number = np.arange(len(paw_coords))
# Did we miss the hind paw impact after the first
# front paw impact? If so, first dx will be positive...
if dx[0] > 0:
paw_number[1:] += 1
# Are we starting with the left or right front paw...
# We assume we're starting with the left, and check dy[0].
# If dy[0] > 0 (i.e. the next paw impacts to the left), then
# it's actually the right front paw, instead of the left.
if dy[0] > 0: # Right front paw impact...
paw_number += 2
# Now we can determine the paw with a simple modulo 4..
paw_codes = paw_number % 4
paw_labels = [paw_code[code] for code in paw_codes]
return paw_labels
In spite of all of this, it frequently doesn't work correctly. Many of the dogs in the full dataset appear to be running, and the paw impacts don't follow the same temporal order as
when the dog is walking. (Or perhaps the dog just has severe hip problems...)
Fortunately, we can still programatically detect whether or not the paw impacts follow our expected spatial pattern:
def paw_pattern_problems(paw_labels, dx, dy):
"""Check whether or not the label sequence "paw_labels" conforms to our
expected spatial pattern of paw impacts. "paw_labels" should be a sequence
of the strings: "LH", "RH", "LF", "RF" corresponding to the different paws"""
# Check for problems... (This could be written a _lot_ more cleanly...)
problems = False
last = paw_labels[0]
for paw, dy, dx in zip(paw_labels[1:], dy, dx):
# Going from a left paw to a right, dy should be negative
if last.startswith('L') and paw.startswith('R') and (dy > 0):
problems = True
# Going from a right paw to a left, dy should be positive
if last.startswith('R') and paw.startswith('L') and (dy < 0):
problems = True
# Going from a front paw to a hind paw, dx should be negative
if last.endswith('F') and paw.endswith('H') and (dx > 0):
problems = True
# Going from a hind paw to a front paw, dx should be positive
if last.endswith('H') and paw.endswith('F') and (dx < 0):
problems = True
last = paw
return problems
Therefore, even though the simple spatial classification doesn't work all of the time, we can determine when it does work with reasonable confidence.
Training Dataset
From the pattern-based classifications where it worked correctly, we can build up a very large training dataset of correctly classified paws (~2400 paw impacts from 32 different dogs!).
We can now start to look at what an "average" front left, etc, paw looks like.
To do this, we need some sort of "paw metric" that is the same dimensionality for any dog. (In the full dataset, there are both very large and very small dogs!) A paw print from an
up vote 110 Irish elkhound will be both much wider and much "heavier" than a paw print from a toy poodle. We need to rescale each paw print so that a) they have the same number of pixels, and b)
down vote the pressure values are standardized. To do this, I resampled each paw print onto a 20x20 grid and rescaled the pressure values based on the maximum, mininum, and mean pressure value
accepted for the paw impact.
def paw_image(paw):
from scipy.ndimage import map_coordinates
ny, nx = paw.shape
# Trim off any "blank" edges around the paw...
mask = paw > 0.01 * paw.max()
y, x = np.mgrid[:ny, :nx]
ymin, ymax = y[mask].min(), y[mask].max()
xmin, xmax = x[mask].min(), x[mask].max()
# Make a 20x20 grid to resample the paw pressure values onto
numx, numy = 20, 20
xi = np.linspace(xmin, xmax, numx)
yi = np.linspace(ymin, ymax, numy)
xi, yi = np.meshgrid(xi, yi)
# Resample the values onto the 20x20 grid
coords = np.vstack([yi.flatten(), xi.flatten()])
zi = map_coordinates(paw, coords)
zi = zi.reshape((numy, numx))
# Rescale the pressure values
zi -= zi.min()
zi /= zi.max()
zi -= zi.mean() #<- Helps distinguish front from hind paws...
return zi
After all of this, we can finally take a look at what an average left front, hind right, etc paw looks like. Note that this is averaged across >30 dogs of greatly different sizes, and
we seem to be getting consistent results!
However, before we do any analysis on these, we need to subtract the mean (the average paw for all legs of all dogs).
Now we can analyize the differences from the mean, which are a bit easier to recognize:
Image-based Paw Recognition
Ok... We finally have a set of patterns that we can begin to try to match the paws against. Each paw can be treated as a 400-dimensional vector (returned by the paw_image function) that
can be compared to these four 400-dimensional vectors.
Unfortunately, if we just use a "normal" supervised classification algorithm (i.e. find which of the 4 patterns is closest to a particular paw print using a simple distance), it doesn't
work consistently. In fact, it doesn't do much better than random chance on the training dataset.
This is a common problem in image recognition. Due to the high dimensionality of the input data, and the somewhat "fuzzy" nature of images (i.e. adjacent pixels have a high covariance),
simply looking at the difference of an image from a template image does not give a very good measure of the similarity of their shapes.
To get around this we need to build a set of "eigenpaws" (just like "eigenfaces" in facial recognition), and describe each paw print as a combination of these eigenpaws. This is
identical to principal components analysis, and basically provides a way to reduce the dimensionality of our data, so that distance is a good measure of shape.
Because we have more training images than dimensions (2400 vs 400), there's no need to do "fancy" linear algebra for speed. We can work directly with the covariance matrix of the
training data set:
def make_eigenpaws(paw_data):
"""Creates a set of eigenpaws based on paw_data.
paw_data is a numdata by numdimensions matrix of all of the observations."""
average_paw = paw_data.mean(axis=0)
paw_data -= average_paw
# Determine the eigenvectors of the covariance matrix of the data
cov = np.cov(paw_data.T)
eigvals, eigvecs = np.linalg.eig(cov)
# Sort the eigenvectors by ascending eigenvalue (largest is last)
eig_idx = np.argsort(eigvals)
sorted_eigvecs = eigvecs[:,eig_idx]
sorted_eigvals = eigvals[:,eig_idx]
# Now choose a cutoff number of eigenvectors to use
# (50 seems to work well, but it's arbirtrary...
num_basis_vecs = 50
basis_vecs = sorted_eigvecs[:,-num_basis_vecs:]
return basis_vecs
These basis_vecs are the "eigenpaws".
To use these, we simply dot (i.e. matrix multiplication) each paw image (as a 400-dimensional vector, rather than a 20x20 image) with the basis vectors. This gives us a 50-dimensional
vector (one element per basis vector) that we can use to classify the image. Instead of comparing a 20x20 image to the 20x20 image of each "template" paw, we compare the 50-dimensional,
transformed image to each 50-dimensional transformed template paw. This is much less sensitive to small variations in exactly how each toe is positioned, etc, and basically reduces the
dimensionality of the problem to just the relevant dimensions.
Eigenpaw-based Paw Classification
Now we can simply use the distance between the 50-dimensional vectors and the "template" vectors for each leg to classify which paw is which:
codebook = np.load('codebook.npy') # Template vectors for each paw
average_paw = np.load('average_paw.npy')
basis_stds = np.load('basis_stds.npy') # Needed to "whiten" the dataset...
basis_vecs = np.load('basis_vecs.npy')
paw_code = {0:'LF', 1:'RH', 2:'RF', 3:'LH'}
def classify(paw):
paw = paw.flatten()
paw -= average_paw
scores = paw.dot(basis_vecs) / basis_stds
diff = codebook - scores
diff *= diff
diff = np.sqrt(diff.sum(axis=1))
return paw_code[diff.argmin()]
Here are some of the results:
Remaining Problems
There are still some problems, particularly with dogs too small to make a clear pawprint... (It works best with large dogs, as the toes are more clearly seperated at the sensor's
resolution.) Also, partial pawprints aren't recognized with this system, while they can be with the trapezoidal-pattern-based system.
However, because the eigenpaw analysis inherently uses a distance metric, we can classify the paws both ways, and fall back to the trapezoidal-pattern-based system when the eigenpaw
analysis's smallest distance from the "codebook" is over some threshold. I haven't implemented this yet, though.
Phew... That was long! My hat is off to Ivo for having such a fun question!
7 @Joe, your answer is once again amazing! Can't wait to try it out myself! – Ivo Flipse Dec 28 '10 at 9:32
1 Could you add a summary, conclusions, titles to the sections. The answer is not scannable and it is hard to understand at a glance. – J.F. Sebastian Dec 28 '10 at 12:21
7 Is it just me or would "Eigenpaw" be a great name for a band? – Malvolio Dec 30 '10 at 21:21
2 Great answer. I attempted eigenpaw method too, but was not as perseverant as you. One problem I see is paw registration, i.e., as facial registration is to face recognition. Did you
encounter any problems in normalizing the location and rotation of each paw? If so, then perhaps the paw can be preprocessed into some translation-rotation invariant feature before
doing PCA. – Steve Tjoa Jan 17 '11 at 23:58
@Steve, I haven't tried rotating them though I had some discussions with Joe on how to improve it any further. However, to finish my project for now, I manually annotated all the
2 paws so I can wrap it up. Luckily this also allows us to create different training sets to make the recognition more sensitive. For rotating the paws, I was planning on using the
toes, but as you can read on my blog, that's not as easy as my first question made it look like... – Ivo Flipse Jan 18 '11 at 0:42
show 2 more comments
Using the information purely based on duration, I think you could apply techniques from modeling kinematics; namely Inverse Kinematics. Combined with orientation, length, duration, and
total weight it gives some level of periodicity which, I would hope could be the first step trying to solve your "sorting of paws" problem.
up vote 4
down vote All that data could be used to create a list of bounded polygons (or tuples), which you could use to sort by step size then by paw-ness [index].
add comment
Can you have the technician running the test manually enter the first paw (or first two)? The process might be:
up vote 2 • Show tech the order of steps image and require them to annotate the first paw.
down vote • Label the other paws based on the first paw and allow the tech to make corrections or re-run the test. This allows for lame or 3-legged dogs.
I actually have annotations of the first paws, though they are not flawless. However, the first paw is always a front paw and wouldn't help me separate the hind paws. Furthermore, the
ordering isn't perfect as Joe mentioned, because that requires both front to touch the plate at the start. – Ivo Flipse Dec 23 '10 at 14:48
The annotations would be useful when using image recognition, because of the 24 measurements I have, at least 24 paws would already be annotated. If they would then be clustered into 4
groups, two of those should contain a reasonable amount of either front paw enough to make an algorithm fairly certain of the clustering. – Ivo Flipse Dec 23 '10 at 14:51
Unless I'm reading them incorrectly, the linked annotated trials show the hind paw touching first in 4 of 6 trials. – Jamie Ide Dec 23 '10 at 14:52
Ah, I meant time-wise. If you loop through the file, the front paw should always be the first to contact the plate. – Ivo Flipse Dec 23 '10 at 14:56
add comment
Not the answer you're looking for? Browse other questions tagged python image-processing or ask your own question.
|
{"url":"http://stackoverflow.com/questions/4502656/how-to-sort-my-paws","timestamp":"2014-04-18T20:25:22Z","content_type":null,"content_length":"108507","record_id":"<urn:uuid:035e7a51-381a-4fee-b03d-49f6948b017f>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00164-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Venn Diagrams
Venn diagrams are a great way to visualize the structure of set relationships. They’re also an example of a technique that works very well for a particular purpose, but that entirely fails outside
its well-defined scope or when the number of sets gets too large.
The idea of the Venn diagram is simple: sets are shown as regions, typically circles. The inside of the circle represents elements of a particular set, the outside anything that is not in that set. A
set might contain all dogs: anything inside the circle is a dog, anything outside is not a dog.
It gets more interesting when more sets are involved. The typical schoolbook example is of two sets and their potential interactions. Let’s say the left set in these images contains dogs, the right
one black animals.
The left image shows set intersection: all A that are also B, i.e., all dogs that are also black. The right image shows set union: all things that are in at least one of the sets, i.e., all dogs and
all black animals (including black dogs). Even without being familiar with set theory, it’s still easy to understand where the criteria overlap and where they don’t.
Slightly more complex relationships are set difference and set complement. The left image shows A subtracted from B, i.e., black animals that are not dogs. The right image includes all elements that
are in either A or B (but not both), i.e., dogs or black animals, but not black dogs.
There are more set operations, and they are all easily explained using Venn diagrams. I imagine that many people think of Venn diagrams when they think of sets. That is not a bad thing as long as the
limitations of the technique are understood. Many typical set problems are simple enough to be solved using Venn diagrams.
Limitation: Number of Sets
While Venn diagrams are great for two or even three sets, they very quickly break down when the number of sets goes beyond three. It’s not like people haven’t tried, though, with results ranging from
pointless to downright silly.
Four sets are doable, though they show the challenge as more sets are added. The shapes of the intersections are very different, and it becomes easier to miss configurations. The simplicity and
regular layout that made the two- and three-set diagram useful is nowhere to be found.
The image below shows a version of the Venn diagram for six sets. Not only are most people unable to think in terms of all the 64 possible combinations of six sets, the diagram does not provide much
If it’s not possible in 2D, then maybe in three dimensions? This image is supposed to show some of the possible intersections of four sets. While it’s nice to look at, it should be obvious that it is
futile to figure out which sets are included and which ones are not.
All visualization techniques break down at some point. In most cases, it is fairly obvious when it happens, but there is no hard number that clearly defines that point. There are also many criteria
like screen resolution, etc., that have an impact. But in the case of Venn diagrams, that point is very clearly defined: two or three sets work perfectly well, anything above three sets is pointless.
Limitation: Sizes of Sets
Another piece of information Venn diagrams do not convey is the size of a set. While it is possible to imagine doing that, it typically does not work without serious distortions of the diagram. If
the shape has to be altered significantly to correctly represent size, it is likely that different parts of the diagram will be very different shapes, thus being tough to compare. The Venn diagram
simply isn’t able to perform this function in a reasonable way.
In the medical and bioinformatics literature, Venn diagrams are a popular way of showing different study conditions, sometimes with the intention of directly reflecting set sizes, sometimes with
annotations. Rather than insist on Venn diagrams, it would be a better idea to use better alternatives, like I have shown in the past.
Conclusions: Venn to Use, Venn Not to Use
Venn diagrams have their uses. They’re great for teaching basic set theory and they can help illustrate combinations of criteria, as long as there are no more than three. But it is equally important
to be aware of the limitations, and to know when to look for alternatives.
All images from the Wikipedia page on Venn diagrams.
1. Jim Vallandingham says
Thanks for creating this nice review of an important topic.
I’ve seen the 4-set version more than a few times, and I’ve always been struck by how unhelpful it is. Glad to see my opinion matches yours.
An important lesson in the fact that just because something can be done, doesn’t mean it should be done.
2. derek says
Sometimes you can extend it to a fourth set, provided that the data you have completely exclude some relationships. When that happens, the four set diagram shows that fact stunningly well, but
you have to be on the lookout for the data that give you the opportunity to use the diagram.
Like the calculus I was taught at school, information visualisation is often a matter of being able to recognise problems as looking like other problems you’ve encountered before and know the
solution to.
3. Rob Shell says
Good post. I enjoyed reading this as well as your G+ post on Euler diagrams.
4. Julien Delvat says
For some strange reason I usually find Venn Diagramm in jokes:
Enjoy …
5. Jon Peltier says
The Venn diagram for six sets “does not provide much help.”
Sure is pretty, though.
Like so many other techniques, Venn diagrams work well within a narrow realm, and poorly outside, where they are used most often.
6. derek says
I see that with my naive talk of four or more sets being feasible, provided some of the combinations are empty sets, I’m describing Euler diagrams, which I hadn’t heard of before :-)
7. T J Bate says
Visokio Omniscope has an interactive Venn View that will go up to 5 subsets plus ‘outside’ records. Several innovative interactive business solutions have been implemented with this Venn View at
the heart of the user filtering/query interaction. Omniscope is free to try:
8. Matt says
I created a Venn Diagram at work to show how we’re cleaning data. It was a single large circle, representing the population of data, then proportionately smaller circles representing the % of
records needing cleaning, which then had a classic Venn Diagram inside it which showed how we were cleaning those records.
From my perspective it was something very simple, but people love it because it’s familiar and easy-to-read. I love using Venn Diagrams because of those reasons, even when you’re dealing with a
single set of data.
□ Julien Delvat says
Would you mind sharing that diagramm?
I’m not sure I have the right picture of your description and it’ll probably help others, too
9. Raphael says
One thing I wonder about Venn (and related) diagrams:
How effective are we at judging areas of complex shapes? Do they really help us evaluate the relative size of each of their components?
I’m not entirely convinced they are better than a table of numbers, or a simple proportional area plot.
10. Jesse Paquette says
My sentiments exactly – see:
11. John says
Here’s an even more useless Venn, but pretty to look at: a seven set Venn. http://www.phillydesignblog.com/2012/09/seven-way-venn-colored/
12. VD says
can any one help me with venn diagram generator with 6 sets? does anybody have any idea about such tool online?
Leave a Reply Cancel reply
|
{"url":"http://eagereyes.org/techniques/venn-diagrams","timestamp":"2014-04-16T10:55:14Z","content_type":null,"content_length":"50716","record_id":"<urn:uuid:54f9665b-92ab-42da-ae37-12fd8d898bb7>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00634-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Post a reply
Hi all;
Or as the Russian do?!
To change 530 to binary.
Make this grid:
The first column was created by dividing 530 by 2 and if there was a remainder ignoring it. The second column was created by the rule if the number next to it is even put a 0, if odd, put a 1.
Now start from the bottom
530 decimal = 1000010010 binary.
Lets do another one.
11571 decimal = 10110100110011 binary
|
{"url":"http://www.mathisfunforum.com/post.php?tid=19937&qid=282286","timestamp":"2014-04-19T02:13:25Z","content_type":null,"content_length":"21814","record_id":"<urn:uuid:b9fd667d-d329-4195-a7cf-9074d7b3bfb5>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00617-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Finding the flux of an electrix field
1. The problem statement, all variables and given/known data
A charge q sits in the back corner of a cube, as shown in the attachment. What is the flux of E through the shaded side?
2. Relevant equations
3. The attempt at a solution
I know that I need the surface integral of E over the shaded area, but the problem is with choosing the proper coordinates and the origin.
Please help out this poor guy!
|
{"url":"http://www.physicsforums.com/showthread.php?p=3866391","timestamp":"2014-04-19T22:51:44Z","content_type":null,"content_length":"23601","record_id":"<urn:uuid:74cf6699-55bb-4ebc-8206-f7b2c9c69713>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00470-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Intermediate Algebra
If you need help in intermediate algebra, you have come to the right place. Note that you do not have to be a student at WTAMU to use any of these online tutorials. They were created as a service to
anyone who needs help in these areas of math.
If this is your first time using this Intermediate Algebra Online Tutorial please read the Guide to the WTAMU Intermediate Algebra Online Tutorial Website to learn how our tutorials are set up and
the disclaimer. Come back to this page to make your tutorial selection.
Please click on the name of the tutorial of your choice:
Tutorial 1: How to Succeed in a Math Class
Tutorial 2: Algebraic Expressions
Tutorial 3: Sets of Numbers
Tutorial 4: Operations on Real Numbers
Tutorial 5: Properties of Real Numbers
Tutorial 6: Practice Test on Tutorials 2 - 5
Tutorial 7: Linear Equations in One Variable
Tutorial 8: An Introduction to Problem Solving
Tutorial 9: Formulas and Problem Solving
Tutorial 10: Linear Inequalities
Tutorial 11: Practice Test on Tutorials 7 - 10
Tutorial 12: Graphing Equations
Tutorial 13: Introduction to Functions
Tutorial 14: Graphing Linear Equations
Tutorial 15: The Slope of a Line
Tutorial 16: Equations of Lines
Tutorial 17: Graphing Linear Inequalities
Tutorial 18: Practice Test on Tutorials 12 - 17
Tutorial 19: Solving Systems of Linear Equations in Two Variables
Tutorial 20: Solving Systems of Linear Equations in Three Variables
Tutorial 21: Systems of Linear Equations and Problem Solving
Tutorial 22: Practice Test on Tutorials 19 - 21
Tutorial 23: Exponents and Scientific Notation, Part I
Tutorial 24: Exponents and Scientific Notation, Part II
Tutorial 25: Polynomials and Polynomial Functions
Tutorial 26: Multiplying Polynomials
Tutorial 27: The Greatest Common Factor and Factoring by Grouping
Tutorial 28: Factoring Trinomials
Tutorial 29: Factoring by Special Products
Tutorial 30: Solving by Factoring
Tutorial 31: Practice Test on Tutorials 23 - 30
Tutorial 32: Multiplying and Dividing Rational Expressions
Tutorial 33: Adding and Subtracting Rational Expressions
Tutorial 34: Complex Fractions
Tutorial 35: Dividing Polynomials
Tutorial 36: Practice Test on Tutorials 32 - 35
Tutorial 37: Radicals
Tutorial 38: Rational Exponents
Tutorial 39: Simplifying Radical Expressions
Tutorial 40: Adding, Subtracting and Multiplying Radicals
Tutorial 41: Rationalizing Denominators and Numerators of Radical Expressions
Tutorial 42: Practice Test on Tutorials 37 - 41
[Copyright] [Fair Use] [Intellectual Property] [Resource Guide]
If you have any comments about this website email Kim Seward at kseward@mail.wtamu.edu
This site is brought to you by West Texas A&M University (WTAMU). It was created by Kim Seward with the assistance of Jennifer Puckett. It is currently being maintained by Kim Seward.
WTAMU and Kim Seward are not responsible for how a student does on any test or any class for any reason including not being able to access the website due to any technology problems. We cannot
guarantee that you will pass your math class after you go through this website. However, it will definitely help you to better understand the topics covered.
Throughout this website, we link to various outside sources. WTAMU and Kim Seward do not have any ownership to any of these outside websites and cannot give you permission to make any kind of copies
of anything found at any of these websites that we link to. It is purely for you to link to for information or fun as you go through the study session. Each of these websites have a copy right clause
that you need to read carefully if you are wanting to do anything other than go to the website and read it. We discourage any illegal use of the webpages found at these sites.
All contents copyright (C) 2001 - 2011, WTAMU, Kim Seward. All rights reserved. Last revised on April 6, 2011 by Kim Seward.
|
{"url":"http://www.wtamu.edu/academic/anns/mps/math/mathlab/int_algebra/index.htm","timestamp":"2014-04-19T07:41:43Z","content_type":null,"content_length":"23670","record_id":"<urn:uuid:b41fdb15-1ee4-4da4-9e4f-9c43a95d2e4b>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00012-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Download Mathematica Notebooks for Economics and Econometrics
All the Mathematica code available from this page was written by Luci Ellis, and is freely available. Please do not redistribute it for profit.
Business Cycle Analysis (56kb)
There are two main functions in this notebook. The first, findPeaks[a,b], finds peaks in data, defined as being a point that completes a rises and is followed by b falls. The second function,
assymetricDetrending, uses the SplineFit package to generate estimates of trends in macroeconomic data. The reference for this procedure is Julian Allwood and David Shepherd (1999), “Alternative
Detrending Procedures for Macroeconomic Time Series”, University of Melbourne Department of Economics Research Paper Number 698, June 1999. Dr Shepherd has kindly given me permission to post my
implementation of their procedure on this website. I have not yet implemented their recommendations regarding the endpoint data, so this procedure does not yet deal with the case of incomplete
□ simpleLinRegress[]: quick-and-dirty OLS function for when you only need estimated coefficients, fitted values and residuals. I mainly use it as an input into other functions.
□ ARLagOrderSelectionTable[ ] for the Akaike, Schwarz and Hannan-Quinn selection criteria (updated September 1999 based on helpful suggestions by Virgil Stokes).
□ LjungBoxStatistic[data,k] test statistic and critical value for a vector of real numbers. Used for establishing serial correlation.
□ CochraneOrcutt[data]] for estimating linear models in the presence of AR(1) serial correlation in the errors.
MarkovThings (20kb)
□ MarkovQ[matrix] tests if a (square) matrix is an approprate matrix of transition probabilities for a Markov chain.
□ MarkovErgodicProbabilities[matrix]finds the stationary (ergodic) probabilities implied by a given transition matrix.
□ MakeMarkovChain[n_Integer,p_?MatrixQ] creates a time series of state numbers, ie a Markov chain, of length n, given transition matrix p. The starting point is taken from the ergodic
The Johansen Procedure (8kb)
An implementation of the Johansen procedure for testing for cointegration in multivariate systems. I translated the algorithm from Gauss. Untested. Use at your own risk!
Filtering (52kb)
Implements the Hodrick-Prescott filter and the Henderson-weighted moving average (for all standard numbers of terms – 5, 7, 9, 13, 15, 17, and 23) and including the endpoints, as used by the
Australian Bureau of Statistics. See also A Tutorial on Multivariate Hodrick-Prescot Filtering
This implements the Two-Piece Normal Distribution (another use of Upvalues), and draws a fan chart similar to that used by the Bank of England for presenting its forecasts of inflation and output
growth. The code is based on a working paper by Professor Ken Wallis of Warwick University in the UK. My version is a bit more sophisticated than the BoE version in that you can choose as many
bands, evenly spaced in quantile terms, as you like.
|
{"url":"http://www.verbeia.com/mathematica/mathecon/econ_code.html","timestamp":"2014-04-18T05:31:46Z","content_type":null,"content_length":"7235","record_id":"<urn:uuid:f8ca315b-46b2-453b-9288-83e745ec99e4>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00427-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Triangulations of polyhedra
up vote 13 down vote favorite
A topologist came to me with this question, but everything I think should work doesn't.
How many triangulations are there of a polyhedron with n vertices?
By a "triangulation" of a polyhedron P we mean a decomposition of P into 3-simplices whose interiors are disjoint, whose vertices are vertices of P, and whose union is P. Since this obviously depends
on the polyhedron, let's say that P is the convex hull of n points on the curve (t, t^2, t^3). (I think this is general, but a proof of that would be nice too.) In particular, this means that all of
the faces are triangles, since no four vertices are coplanar.
Since triangulations of a polygon are counted by the Catalan numbers, a reasonable first guess is that these are counted by the generalized Catalan numbers $C_{n,k} = \frac{1}{(k-1)(n+1)} {kn \choose
n}$, which count k-ary trees (among other things). But just at n=5 we run into trouble: there are 2 (not 3) such triangulations, and they don't even contain a fixed number of pieces: one of them
triangulates P into two tetrahedra, and one breaks it into three.
This seems obvious enough that someone would have asked it before, but I'm not finding anything. Of course, answers to the obvious generalization (triangulations of k-polytopes whose vertices lie on
(t, t^2, ..., t^k)) are welcome as well.
co.combinatorics simplicial-stuff convex-polytopes
add comment
3 Answers
active oldest votes
You should read Section 6.1 in the excellent monograph "Triangulations" by De Loera, Rambau and Santos (here is a slightly dated version). It deals with triangulations of cyclic
up vote 19 down polytopes - exactly the subject of your question. Not only it answers your question, it is also the state of art for the rest of the subject.
vote accepted
A slightly newer version is here math.ucdavis.edu/~deloera/BOOK/final.pdf – j.c. Nov 12 '10 at 20:09
Wonderful. Thanks! – Jonah Ostroff Nov 12 '10 at 20:10
add comment
Although the answer is provided by Igor's pointers to the Triangulations book, it might be useful to supplement those pointers with the explicit bounds. The lower bound is due to Gil Kalai,
and the upper bound to Tamal Dey. For fixed dimension $d$, the cyclic polytope has at least $\Omega( 2^{n^{ \lfloor d/2 \rfloor }})$ triangulations, and for $d$ odd, at most $2^{ O( n^ {\
up vote 3 lceil d/2 \rceil} ) }$ triangulations. So, for $d=3$, the case posed in the question, the bounds are between $c^n$ and $c^{n^2}$. See Section 8.4 (pp. 396-398) of Triangulations.
down vote
add comment
Not an answer, but more questions!
What are the natural orderings on the set of triangulations for fixed $n$? Do any of these posets map naturally to the poset of triangulations of the $n$-gon?
up vote 1 down I'm asking because of the well-known relationship between triangulations of n-gons and the Stasheff Associahedron, the later of which relates to far too much to detail here. But
vote relations between this and the 3-d version you ask about would be very interesting.
A related MO question on the Associahedrian and Catalan numbers is here: Combinatorics of the Stasheff polytopes
2 You might want to ask new questions as separate questions (see FAQ). Also, I think you might like to read the section I mentioned above, before asking your question. – Igor Pak Nov
13 '10 at 1:13
1 See Edelman and Reiner, The higher Stasheff-Tamari posets. Mathematika 43 (1996), no. 1, 127–154. – Hugh Thomas Nov 22 '10 at 0:24
@ H Thomas: Very interesting paper, thanks, that's exactly the kind of thing was asking about! – Dr Shello Dec 6 '10 at 3:25
add comment
Not the answer you're looking for? Browse other questions tagged co.combinatorics simplicial-stuff convex-polytopes or ask your own question.
|
{"url":"http://mathoverflow.net/questions/45863/triangulations-of-polyhedra/46265","timestamp":"2014-04-18T21:29:46Z","content_type":null,"content_length":"67700","record_id":"<urn:uuid:20cb1fb6-aaf1-47b7-b093-b1483e6a79d4>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00499-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[SciPy-user] KDE question
David Cournapeau david@ar.media.kyoto-u.ac...
Thu Nov 15 21:44:17 CST 2007
Stefan van der Walt wrote:
> On Thu, Nov 15, 2007 at 11:46:53AM +0200, Stefan van der Walt wrote:
>>>> Sounds like the kind of problem that can be solved using marching
>>>> squares:
>>>> http://www.polytech.unice.fr/~lingrand/MarchingCubes/algo.html
>>> This solves the already-matplotlib-solved problem of drawing the contours given
>>> a level. That still leaves finding the correct level. Or am I underestimating
>>> the potential to reformulate marching squares to solve the
>>> integration problem, too?
>> No, I don't think you are. As for the line-search, since the
>> different components of the mixture are available, can't we evaluate
>> the integral (over each component) directly, rather than working with
>> a grid?
> No, since the relevant area depends on the *sum* of components, not
> the value of the component itself.
Yes, that's what I would have thought, too. As Robert said, I don't
think you can find an analytical version for the inverse cumulative
"function" of a mixture of Gaussian on an area of interest, and that's
what we need (matlab does not implement it, and I guess that if it was
possible, they would have put it with the pdf and cdf abilities of their
mixture object). For a component, the contour shape is easy to see
(ellipsoids), for mixtures, not so easy.
For kernel estimators, you assume each component is the same 'shape', (I
mean same covariance matrix) right ? Maybe this make the computation
feasible (find all the points such as sum_i{a_i f(x - \mu_i)} = cst) ?
More information about the SciPy-user mailing list
|
{"url":"http://mail.scipy.org/pipermail/scipy-user/2007-November/014542.html","timestamp":"2014-04-16T07:14:44Z","content_type":null,"content_length":"4183","record_id":"<urn:uuid:4a338637-fcab-47a5-9ad8-59afc03644d6>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00612-ip-10-147-4-33.ec2.internal.warc.gz"}
|
North Waltham Prealgebra Tutors
...Even though I was a math major in college I will only tutor math from 5th grade up to Geometry or Algebra 2 in high school. Although I know I could do more complicated math, middle school math
and algebra are what I love to tutor in. I am very responsible and am always on time for things.
6 Subjects: including prealgebra, geometry, algebra 1, elementary math
...Currently I work as a college adjunct professor and teach college algebra and statistics. I enjoy tutoring and have tutored a wide range of students - from middle school to college level. I
know the programs of high and middle school math, as well as the preparation for the SAT process.
14 Subjects: including prealgebra, geometry, algebra 1, statistics
I am a senior chemistry major and math minor at Boston College. In addition to my coursework, I conduct research in a physical chemistry nanomaterials lab on campus. I am qualified to tutor
elementary, middle school, high school, and college level chemistry and math, as well as SAT prep for chemistry and math.I am a chemistry major at Boston College.
13 Subjects: including prealgebra, chemistry, calculus, geometry
...I have taught math for an SAT prep company. I teach the necessary concepts in order to obtain the answers and also how to use more efficient and quicker methods. SAT preparation requires lots
of practice and I offer a study schedule based on the amount of time which remains until the exam and my assessment of the student's level.
24 Subjects: including prealgebra, chemistry, calculus, physics
...I enjoy helping my students to understand and realize that they can not only do the work - they can do it well and they can understand what they're doing. My references will gladly provide
details about their own experiences. I have a master's degree in computer engineering and run my own data analysis company.
11 Subjects: including prealgebra, geometry, algebra 1, precalculus
|
{"url":"http://www.algebrahelp.com/North_Waltham_prealgebra_tutors.jsp","timestamp":"2014-04-18T03:04:49Z","content_type":null,"content_length":"25532","record_id":"<urn:uuid:41594f39-2507-4e40-889b-d25858efad9d>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00166-ip-10-147-4-33.ec2.internal.warc.gz"}
|
reddit is a website about everything
Would it make sense to allow where in types? by ninereeds314in haskell
[–]camccann2 points3 points4 points ago
One idea I've semi-seriously suggested before is that where should be a namespace mechanism and always introduce a named scope (that may or may not export any bindings, depending on how it's used).
So a type synonym A attached to the declaration of a data type Bar in a module Foo would be displayed as Foo.Bar.A just like if Foo.Bar was a separate module.
Make lllegal State *Transitions* Unrepresentable? by chebertappsin haskell
[–]camccann1 point2 points3 points ago
I don't remember, it's been a while since I messed around with that. I suppose it may not even do that anymore for all I know. But to get rid of the warning you'd just need a catch-all pattern for
the GADT, _ or a plain identifier or something, I think.
Mostly I just recall being annoyed at getting inexhaustive pattern warnings when all the reasonable patterns were covered, not the details, heh. Sorry.
Where's GHC 7.8.1? by RedLambdain haskell
[–]camccann3 points4 points5 points ago
The whole situation continues to confuse me. It's a compiler, your users are by definition developers, in the general sense. Likewise, anyone working on GHC is also a user. How is it that one
platform vs. others has tons of users but not a single one of them is interested in GHC development?
I suppose it's likely that, say, students taking a course using Haskell but no long-term interest in the language would be disproportionately Windows users, but I can't imagine that would account for
all of it.
Is it just an issue of awareness, that there are potential contributors who don't know how badly you could use their help? Or is there something deeper?
[ANN] Simple parallel genetic algorithm implementation in pure Haskell by afiskonin haskell
[–]camccann0 points1 point2 points ago
Regarding zeroGeneration: you're giving it a function rnd that generates a random value and a population size ps that says how many values to generate, correct? You wrote it as a fold over a list [1
.. ps] but you ignore the counter value anyway, so consider that you could instead write it starting with replicate ps rnd and then folding that list by plugging the rng seed output of one function
to the input of the next, while collecting a list of the generated values.
Threading a state value like an rng seed through multiple functions is the purpose of the State monad, so what you'd basically be doing there is turning a list [State g a] into a value State g [a],
which is precisely what the function sequence is for.
replicateM itself is just a handy function that uses replicate followed by sequence for you.
Arafura Design: Reclaiming NetBSD kernel in Haskell, little by little by plumenatorin haskell
[–]camccann0 points1 point2 points ago
Well, Japanese is sort of special in that regard, if memory serves me--unlike most or all other widely-spoken languages, it has no clear relationship to any others. (Yes, the writing system borrows
from the Chinese writing system and there are many loanwords, but spoken Japanese is very different from the various spoken languages collectively known as "Chinese")
Relative to English, however, anything outside the Indo-European family is probably sufficiently "far away" that relative comparison isn't meaningful.
Why this Haskell program runs considerably slower than its JavaScript equivalent? (Frustrated) by SrPeixinhoin haskell
[–]camccann4 points5 points6 points ago
I can't think of any reasonable situation (that isn't highly artificial and contrived for the specific purpose) where plain foldl is preferable.
In some cases (i.e., ones where it gets optimized into something equivalent to foldl' because the compiler is clever) it might be as good as the alternative, but that's all.
Using foldl' or foldr as appropriate is a good rule of thumb in any case.
EDIT: I may have misread, if you were asking whether to prefer foldl' or foldr that depends entirely on what you're doing. foldr lets you consume the list lazily if the result value is also consumed
lazily, while foldl' consumes the whole list strictly to avoid nesting thunks. e.g., you'd write map with foldr and sum with foldl'. if using the result in any way requires the entire fold, use
foldl', if you expect it to work with infinite lists use foldr, &c.
Compared to foldr and foldl', foldl essentially has the intersection of their benefits and the union of their drawbacks, and there's not much left (ha, ha) after everything that rules out.
Recursion Schemes and Functors by Platzin haskell
[–]camccann0 points1 point2 points ago
What language features are supported by Template Haskell is a function of what language features its users want it to support, plus an unknown time delay (a "constant of implementation", if you
Here's the trac ticket, if you're wondering. Obviously it's not going to be in the upcoming release, but if they know users are actively interested it's less likely to be deferred indefinitely like
some TH features have been.
What is the `canonical' Haskell solution to the "Raytracer renderer" hierarchy problem? by TunaOfDoomin haskell
[–]camccann1 point2 points3 points ago
That phrase is presumably in reference to the open world assumption, which is how type class instance selection works in Haskell.
In this context I assume it means allowing for the possibility of adding new Shapes, while a closed world would be a fixed set of possible shapes, i.e. make Shape a type instead of a class and make
each instance a constructor.
Is dependently typing generally superior to Haskell's type system? by SrPeixinhoin haskell
[–]camccann0 points1 point2 points ago
Of course it's not actual non-termination, since the whole point is to satisfy a termination checker. :]
On the other hand, any potentially non-terminating expression can be transformed into a similar thunky corecursion. What this really amounts to is a sort of cooperative multithreading, where each
layer of the delay type represents the "thread" yielding.
This obviously only serves to punt on the issue of termination, presuming that the "thread" will be run periodically by some other code. that other code may itself be pseudo-partial and you can keep
recursively delaying (ha, ha) a resolution to the issue until you reach a layer that's either truly corecursive (and can interleave evaluating the pseudo-partial term(s)) or recursive by way of an
iteration limit.
An example in Haskell would be trying to find the first element of a list satisfying some predicate; if no such element exists this won't terminate. So instead you can map over the list with a
function that replaces non-matching elements with Nothing and wraps matching elements in Just. If the function that wants the value is suitably corecursive, it can keep pulling elements from the
"filtered" list until it finds a match; otherwise it can take a finite prefix of the "filtered" list, use catMaybes, then pattern match on the (possibly empty) result list.
This should work for any possibly-non-terminating expression, but in most cases there are probably better solutions than that sort of mechanical translation.
Snap framework apps over Clever-Cloud by mightybytein haskell
[–]camccann[M] 4 points5 points6 points ago
That's pretty much how I see it as well.
Posts that draw attention to commercial ventures because their use of Haskell is interesting are A-OK by me, even if it basically is an ad. If there's nothing terribly interesting about their use of
Haskell and/or it seems like just a pretense for posting an ad, that will be downvoted and possibly removed.
Hopefully people will realize that if a post gets a positive response at all, there's absolutely nothing to be gained by hiding their association with it.
I wonder if I should add something to the sidebar about this since I've seen similar questions raised in maybe a third or half of the posts that are commercial announcements/job ads/&c.
[ghc-devs] Proposal: Partial Type Signatures by ehambergin haskell
[–]camccann0 points1 point2 points ago
Given how things currently work, there really aren't many places where it would be sensible or useful to have _ in a type variable binding (i.e., corresponding to the use in patterns).
Probably the most reasonable would be in an instance declaration involving unconstrained type variables. For example, consider this instance:
instance Monoid [a] where
mempty = []
mappend = (++)
The a type is completely irrelevant: it's only mentioned once in the instance head, it's not used in the instance's (empty) context, and without ScopedTypeVariables, a couldn't even be used in type
signatures in the function definitions.
Therefore, much like you'd use _ for unused function arguments, we can imagine writing the instance as instance Monoid [_] where ....
If ScopedTypeVariables were enabled by default this might actually be helpful; as it is the scope in which type variables are bound is generally very small and thus it's simple to tell by inspection
whether a bound variable is used.
|
{"url":"http://www.reddit.com/user/camccann","timestamp":"2014-04-19T12:14:44Z","content_type":null,"content_length":"124985","record_id":"<urn:uuid:db7d1fbf-aa95-4417-a972-416ccedf5e53>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00192-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Profit Maximization Methods in Managerial Economics - MBA Knowledge Base
Profit Maximization Methods in Managerial Economics
The profit maximization theory states that firms (companies or corporations) will establish factories where they see the potential to achieve the highest total profit. The company will select a
location based upon comparative advantage (where the product can be produced the cheapest). The theory draws from the characteristics of the location site, land price, labor costs, transportation
costs and access, environmental restrictions, worker unions, population etc. The company will then elect the best location for the factory to maximize profits. This is anathema to the idea of social
responsibility because firms will place their factory to achieve profit maximization. They are nonchalant to environment conservation, fair wage policies and exploit the country. The only objective
is to earn more profits. In economics, profit maximization is the process by which a firm determines the price and output level that returns the greatest profit. There are several approaches to
profit maximization.
1. Total Cost-Total Revenue Method
To obtain the profit maximizing output quantity, we start by recognizing that profit is equal to total revenue (TR) minus total cost (TC). Given a table of costs and revenues at each quantity, we can
either compute equations or plot the data directly on a graph. Finding the profit-maximizing output is as simple as finding the output at which profit reaches its maximum. That is represented by
output Q in the diagram.
There are two graphical ways of determining that Q is optimal. Firstly, we see that the profit curve is at its maximum at this point (A). Secondly, we see that at the point (B) that the tangent on
the total cost curve (TC) is parallel to the total revenue curve (TR), the surplus of revenue net of costs (BC) is the greatest. Because total revenue minus total costs is equal to profit, the line
segment CB is equal in length to the line segment AQ.
Computing the price, at which the product should be sold, requires knowledge of the firm’s demand curve. Optimum price to sell the product is the price at which quantity demanded equals
profit-maximizing output.
2. Marginal Cost-Marginal Revenue Method
An alternative argument says that for each unit sold, marginal profit (Mπ) equals marginal revenue (MR) minus marginal cost (MC). Then, if marginal revenue is greater than marginal cost, marginal
profit is positive, and if marginal revenue is less than marginal cost, marginal profit is negative. When marginal revenue equals marginal cost, marginal profit is zero. Since total profit increases
when marginal profit is positive and total profit decreases when marginal profit is negative, it must reach a maximum where marginal profit is zero or marginal cost equals marginal revenue. If there
are two points where this occurs, maximum profit is achieved where the producer was collected positive profit up until the intersection of MR and MC (where zero profit is collected), but would not
continue to after, as opposed to vice versa, which represents a profit minimum. In calculus terms, the correct intersection of MC and MR will occur when:
dMR/dQ < dMC/dQ
The intersection of MR and MC is shown in the next diagram as point A. If the industry is perfectly competitive (as is assumed in the diagram), the firm faces a demand curve (D) that is identical to
its Marginal revenue curve (MR), and this is a horizontal line at a price determined by industry supply and demand. Average total costs are represented by curve ATC. Total economic profit are
represented by area P,A,B,C. The optimum quantity (Q) is the same as the optimum quantity (Q) in the first diagram.
If the firm is operating in a non-competitive market, minor changes would have to be made to the diagrams. For example, the Marginal Revenue would have a negative gradient, due to the overall market
demand curve. In a non-competitive environment, more complicated profit maximization solutions involve the use of game theory.
3. Maximizing Revenue Method
In some cases, a firm’s demand and cost conditions are such that marginal profits are greater than zero for all levels of production. In this case, the Mπ = 0 rule has to be modified and the firm
should maximize revenue. In other words, the profit maximizing quantity and price can be determined by setting marginal revenue equal to zero. Marginal revenue equals zero when the marginal revenue
curve has reached its maximum value. An example would be a scheduled airline flight. The marginal costs of flying the route are negligible. The airline would maximize profits by filling all the
seats. The airline would determine the profit maximum conditions by maximizing revenues.
4. Changes in Fixed Costs Method
A firm maximizes profit by operating where marginal revenue equals marginal costs. A change in fixed costs has no effect on the profit maximizing output or price. The firm merely treats short term
fixed costs as sunk costs and continues to operate as before. This can be confirmed graphically. Using the diagram, illustrating the total cost-total revenue method, the firm maximizes profits at the
point where the slope of the total cost line and total revenue line are equal. A change in total cost would cause the total cost curve to shift up by the amount of the change. There would be no
effect on the total revenue curve or the shape of the total cost curve. Consequently, the profit maximizing point would remain the same. This point can also be illustrated using the diagram for the
marginal revenue-marginal cost method. A change in fixed cost would have no effect on the position or shape of these curves.
5. Markup Pricing Method
In addition to using the above methods to determine a firm’s optimal level of output, a firm can also set price to maximize profit. The optimal markup rules are:
(P – MC)/P = 1/ -Ep
P = (Ep/(1 + Ep)) MC
Where MC equals marginal costs and Ep equals price elasticity of demand. Ep is a negative number. Therefore, -Ep is a positive number.
The rule here is that the size of the markup is inversely related to the price elasticity of demand for a good.
6. Marginal Revenue Product of Labor (MRPL) Method
The general rule is that firm maximizes profit by producing that quantity of output where marginal revenue equals marginal costs. The profit maximization issue can also be approached from the input
side. That is, what is the profit maximizing usage of the variable input? To maximize profits, the firm should increase usage “up to the point where the input’s marginal revenue product equals its
marginal costs”. So mathematically the profit maximizing rule is MRPL = MCL. The marginal revenue product is the change in total revenue per unit change in the variable input- assuming input
as labor. That is, MRPL = ΔTR/ΔL. MRPL is the product of marginal revenue and the marginal product of labour or MRPL = MR x MPL.
Leave a Reply Cancel reply
Recent Discussions
• Chaitanya Jha on Definition of Management by Eminent Authors
• Sipho on Conflict in Organizations
• iqbal on Case Study: General Electric’s Two-Decade Transformation Under the Leadership of Jack Welch
• ETIDO ESSIEN on Theories of Profit in Economics
• Gurpreet Kaur on Problems Faced by Trade Unions in India
• rohit gautam on Meaning of Sampling and Steps in Sampling Process
About Abey Francis
Abey Francis is the founder of MBAKnol - A Blog about Management Theories and Practices - and he's always happy to share his passion for innovative management practices. You can found him on Google+
and Facebook. If you’d like to reach him, send him an email to: [email protected]
Tagged Economics Principles.
Bookmark the permalink.
|
{"url":"http://www.mbaknol.com/managerial-economics/profit-maximization-methods-in-managerial-economics/","timestamp":"2014-04-21T04:32:28Z","content_type":null,"content_length":"74478","record_id":"<urn:uuid:4f75001f-3733-46e6-88cb-87d696cb39b9>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00177-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Database Management Systems 3rd Edition Chapter 19 Solutions | Chegg.com
Let R be a relational schema and let X and Y be two subsets of the set of all attributes of R. We say Y is functionally dependent on X, written X → Y, if the Y-values are determined by the X-values.
More precisely, for any two tuples r[1] and r[2] in (any instance of) R
π[X](r[1])=π[X](r[2]) ⇒ π[Y] (r[1])=π[Y] (r[2])
|
{"url":"http://www.chegg.com/homework-help/database-management-systems-3rd-edition-chapter-19-solutions-9780072465631","timestamp":"2014-04-20T05:09:31Z","content_type":null,"content_length":"44167","record_id":"<urn:uuid:e4afcf08-ce92-4beb-bb81-49c5de03bfa6>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00259-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Physics Forums - View Single Post - How/where to start to teach someone MATH from SCRATCH?
I'm in college, and my roomy is a very close friend who does Classical Philology and Philosophy and I, on the other hand, am completely passionate about physics and mathematics and we've agreed that
I'd teach him about mathematics this year (for fun). He knows close to nothing, he's had the minimum of hours of math in high school, so nothing much to build on (except some intuition, hopefully).
He is, however, certainly very interested and a very smart guy.
My question is: what do I focus on? And more particularly: where do I start? (he has expressed interest in logic and probability; of course this is to be taken with a grain of salt: he has little
idea of what is out there)
Some options: really fundamentally with set theory and the foundations of logic; I know close to nothing about this (in my last year of bachelor in math), but I'm very interested so I think I could
look up and grasp the basics, anyway it seems like the most "genuine" place to start, especially since he has a deep interest in philosophy (albeit continental philosophy, but I'll cure that)
Another option: the way they start in a real analysis course: defining the real numbers, concepts like order, completeness, those things; this would be interesting to see the gap between intuition
and rigour.
Or: algebra. Don't talk about numbers, but groups and rings and fields and algebras and matrices. This would be of interest to show how mathematics succeeds in talking about structures themselves and
not just concrete realizations, a jump into the abstract.
Or, well, maybe geometry, although that seems like a weird place to start nowadays, it is after all the way math began and comes with a load of intuition, intuition that can be shattered by the
interesting non-euclidean spaces or projective spaces.
And one other way I can think of: to not spend too much on the basics, but just jump in with calculus, to get to complex analysis quickly: a piece of beauty I don't want to deny him!
Any suggestions or comments?
|
{"url":"http://www.physicsforums.com/showpost.php?p=3541405&postcount=1","timestamp":"2014-04-17T18:33:43Z","content_type":null,"content_length":"10513","record_id":"<urn:uuid:e165f61b-07b1-4d08-a0d1-6a03acb68aea>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00245-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Clairvoyance is but fatal
One of my
asks in his blog:
"If you could foresee the next two minutes in your life, would you do things differently than what you otherwise do?"
Foreseeing something is, IMO, helpful iff the outcome is alterable. But is it even possible?
"..foresee the next two minutes.."
is ambiguous to say the least. It could mean 2 different things: 1) what I'm foreseeing is one possible outcome, OR, 2) it is _the_ outcome. In case of (2), unless you're in a gambling business or in
a life-threatening situation, foreseeing is mostly useless (and even harmful!). We'II come back to this later.
As for the case (1), seeing a possible outcome assumes a sequence of, say 'n', events e_0(0), e_0(1), ..., e_0(n-1), within those 2 minutes, where e_i(x) is causally related to e_j(x-1); i < j <= x
<= infinity, and 'n' belongs to [0, infinity]. e_0(n) happens after e_0(n-1) and is foreseen. Now this is just one possible sequence of events and there could be infinite sequences and for any value
of 'n'. E.g., there could be another event sequence e_0(0), e_3(1), e_2(2), ..., e_999(n-1), e_7(n); in this case outcome is e_7(n). To simplify the calculations lets assume 'n' tends to infinity (in
other words, substitute 'n' for infinity) and at most there could be 'n' events in any sequence. This also means that there are at most 'n' possible outcomes, e_j(n). Also this means that there are
'n!' events in all. All the possible outcomes are equally probable. Now the question is which outcome would you foresee? For the outcome you see depends on penultimate event and a series of events
before that, which are causally related and directly related to the event that is going to happen next.
For example, consider a set of colored water guns. I'm to pick any one and shoot on a wall in front of me. The outcome is the color on the wall. If we apply the aforesaid theory to this, I can
foresee a color on the wall. Now, in case (2), no matter what gun I pick, I'II end up spraying the color I forsaw. In case (1), chances are (since I did foresee) I choose the gun with the color that
I forsaw. Now consider a case that the guns are correctly marked with the color they have. Now if I see a color, and I'm to change it, I'II cleverly pick the right gun and get the color I want on the
wall. BUT, but the moot point is what I should've foresaw? The color that I cleverly didn't allow to appear on the wall, or the one that I did?
With this could we conclude that being able to change the future you saw kind of defeats the purpose of your clairvoyance?
Even if we assume we foresee an outcome that would happen if we don't try to alter it, our ability of altering the output greatly depends on the rate of change of events, time between e_i(x) and e_j
(x+1); i < j <= x <= infinity. For example, such clairvoyance would help me if I'm a cricketer or a stock broker, but it'd hardly affect me if I'm a tea-leaf picker or a carpenter unless it is
life-threatening. Same is true in case of (1), with an added danger that since we foresee an outcome, now no matter what we do, nothings going to change it. This has some obvious consequences, and
more so if the clairvoyance vision is 2 years and not 2 minutes. We could think that all we can do is work towards the next outcome. But, wait..think. I'm a film maker and I foresee that my movie is
going to be trashed, I can't completely stop working on this movie and take up next, it has to be done until trashed, but all I can do is...is nothing! I wouldn't find time to work on my next film,
for the time has to be spent on things I already foresaw sometime back! And the output is unalterable! Things have to be done; and by me!
And with this could we conclude that foreseeing an unalterable future is but useless and prenotion of disappointment could do more harm than cheer of an oncoming success?
Chaitali 8:50 PM, September 17, 2008
Excellent! Isaac Mendez, Sylar and Peter Petrelli ahould all read it :) (I just finished seasons 1 and 2 of Heroes so this :P )
|
{"url":"http://amitgud.blogspot.com/2008/05/clairvoyance-is-but-fatal.html","timestamp":"2014-04-20T18:24:11Z","content_type":null,"content_length":"28725","record_id":"<urn:uuid:cc3fcb8b-f036-4a7b-b74f-e66bc54668d2>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00318-ip-10-147-4-33.ec2.internal.warc.gz"}
|
M408K Syllabus
Syllabus: M408K
Text: Stewart, Calculus, Early Transcendentals, Seventh Edition
(at the Campus Store, it is called ACP Single Variable Calculus: Early Transcendentals)
Responsible Parties: Jane Arledge, Kathy Davis, Ray Heitmann, Diane Radin June 2011
Core curriculum
This course may be used to fulfill the mathematics component of the university core curriculum and addresses core objectives established by the Texas Higher Education Coordinating Board:
communication skills, critical thinking skills, and empirical and quantitative skills.
Calculus is the theory of things that change, and so is essential for understanding a changing world. Students are expected to use calculus to compute optimal strategies in a variety of settings
(Chapter 3, max/min), as well as to apply derivatives to understand changing quantities in physics, economics and biology.
Students improve their number sense through qualitative reasoning and by comparing the results of formulas to those guiding principles.
Student activities include creating logically ordered, clearly written solutions to problems, and communicating with the instructor and their peers during lecture by asking and responding to
questions and discussion in lecture.
Prerequisite and degree relevance:The minimum required score on the ALEKS placement exam. Only one of the following may be counted: M403K, M408C, M408K, M408N.
Calculus is offered in two equivalent sequences: a two-semester sequence, M 408C/408D, which is recommended only for students who score at least 600 on the mathematics Level I or IC Test, and a
three-semester sequence, M 408K/408L/408M.
For some degrees, the two-semester sequence M 408K/408L satisfies the calculus requirement . This sequence is also a valid prerequisite for some upper-division mathematics courses, including M325K,
427K, 340L, and 362K.
M408C and M408D (or the equivalent sequence M408K, M408L, M408M) are required for mathematics majors, and mathematics majors are required to make grades of C- or better in these courses.
Course description: M408K is one of two first-year calculus courses. It is directed at students in the natural and social sciences and at engineering students. In comparison with M408C, it covers
fewer chapters of the text. However, some material is covered in greater depth, and extra time is devoted the development of skills in algebra and problem solving. This is not a course in the theory
of calculus.
The syllabus for M 408K includes most of the basic topics in the theory of functions of a real variable: algebraic, trigonometric, logarithmic and exponential functions and their limits, continuity,
derivatives, maxima and minima, as well as definite integrals and the Fundamental Theorem of Calculus.
Overview and Course Goals
The following pages comprise the syllabus for M 408K, and advice on teaching it. Calculus is a service course, and the material in it was chosen after interdepartmental discussions. Please do not
make drastic changes (for example, skipping techniques of integration). You will do your students a disservice and leave them ill equipped for subsequent courses.
This is not a course in the theory of calculus; the majority of the proofs in the text should not be covered in class. At the other extreme, some of our brightest math majors found their first
passion in calculus; one ought not to bore them. Remember that 408K/L/M is the sequence designed for students who may not have taken calculus previously. Students who have seen calculus and have done
well might be better placed in the faster M 408C/408D sequence.
Resources for Students
Many students find the study skills from high school are not sufficient for UT. The Sanger Learning Center (http://lifelearning.utexas.edu/) in Jester has a wide variety of material ( drills,
video-taped lectures, computer programs, counseling, math anxiety workshops, algebra and trig review, calculus review) as well as tutoring options, all designed to help students through calculus. On
request they will come to your classroom and explain their services.
You can help your students by informing them of these services.
Timing and Optional Sections
A typical fall semester has 42 hours of lecture, 42 MWF and 28 TTh days, while the spring has 45 hours, 45 MWF and 30 TTh days (here, by one hour we mean 50 minutes -- thus in both cases there are
three "hours" of lecture time per week). The syllabus contains suggestions as to timing, and includes approximately 35 hours. Even after including time for exams, etc., there will be some time for
the optional topics, reviews, and/or additional depth in some areas.
Forty Class Days As:
• 1 Functions and Models (3 hours)
□ 1.5 Exponential Functions
□ 1.6 Inverse Functions and Logarithms
• 2 Limits and Derivatives (9 hours)
□ 2.1 The Tangent and Velocity Problems
□ 2.2 The Limit of a Function
□ 2.3 Calculating Limits Using the Limit Laws
□ 2.4 The Precise Definition of a Limit (optional)
□ 2.5 Continuity
□ 2.6 Limits at Infinity; Horizontal Asymptotes
□ 2.7 Derivatives and Rates of Change
□ 2.8 The Derivative of a Function
• 3 Differentiation Rules (10 hours)
□ 3.1 Derivatives of Polynomials and Exonential Functions
□ 3.2 The Product and Quotient Rules
□ 3.3 Derivatives of Trigonometric Functions
□ 3.4 The Chain Rule
□ 3.5 Implicit Differentiation
□ 3.6 Derivatives of Logarithmic Functions
□ 3.7 Rates of Change in the Natural and Social Sciences
□ 3.8 Exponential Growth and Decay (optional)
□ 3.9 Related Rates
□ 3.10 Linear Approximations and Differentials
□ 3.11 Hyperbolic Functions (optional)
• 4 Applications of Differentiation (9 hours)
□ 4.1 Maximum and Minimum Values
□ 4.2 The Mean Value Theorem
□ 4.3 How Derivatives Affect the Shape of a Graph
□ 4.4 Indeterminate Forms and L'Hospital's Rule
□ 4.5 Summary of Curve Sketching
□ 4.7 Optimization Problems
□ 4.9 Antiderivatives
• 5 Integrals (4 hours)
□ 5.1 Areas and Distances
□ 5.2 The Definite Integral
□ 5.3 The Fundamental Theorem of Calculus
|
{"url":"http://www.ma.utexas.edu/academics/courses/syllabi/M408K.php","timestamp":"2014-04-17T01:05:56Z","content_type":null,"content_length":"25302","record_id":"<urn:uuid:36acffa9-3539-494c-a696-8c637b8a9429>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00222-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Group homomorphism
May 4th 2010, 11:50 PM #1
Junior Member
Mar 2009
Madison, WI
Group homomorphism
Describe a group homomorphism from $U_5$ to $S_4$.
$U_5$ is the group of units $Z/5Z$ under multiplication
$S_4$ is the set of permutations on 4 elements.
The hint for the exercise said to use Cayley's Theorem.
I think a group homomorphism is a mapping from group $U_5$ [with say operation $*$] into $S_4$ [with operation $*'$].
And $f(a*b)=f(a)*'f(b)$$\forall a,b \in S_4$.
Well, what is Cayley's theorem?
The hint is telling you, basically, to work through the proof of Cayley's theorem using an example. There are a number of different homomorphisms from $U_5$ to $S_4$, but the point of this
question is not to conjure up these, but to find a specific one using this theorem.
So, do you have any problems understanding the theorem? It basically says you can find a `copy' of $U_5$ in $S_4$.
May 5th 2010, 12:34 AM #2
Apr 2008
May 5th 2010, 01:24 AM #3
|
{"url":"http://mathhelpforum.com/advanced-algebra/143156-group-homomorphism.html","timestamp":"2014-04-19T12:38:18Z","content_type":null,"content_length":"39875","record_id":"<urn:uuid:ab0c5fae-c59f-4766-9a89-e0cb5cd4c4f2>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00105-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Pursuit and Evasion Game under Uncertainty
Bankole Abiola^1, R.K. Ojikutu^1,
^1Department of Actuarial Science and Insurance Faculty of Business Administration University of Lagos Akoka, Lagos
This paper examined a class of multidimensional differential games. In particular, it considered a situation in which the pursuer and evader are affected by uncertain disturbances. A necessary and
sufficient condition for the existence of saddle point for this class of games was developed.
Keywords: uncertain disturbances, pursuer, evader, differential games
American Journal of Applied Mathematics and Statistics, 2013 1 (2), pp 21-26.
DOI: 10.12691/ajams-1-2-1
Received January 18, 2013; Revised March 02, 2013; Accepted April 15, 2013
© 2013 Science and Education Publishing. All Rights Reserved.
Cite this article:
• Abiola, Bankole, and R.K. Ojikutu. "Pursuit and Evasion Game under Uncertainty." American Journal of Applied Mathematics and Statistics 1.2 (2013): 21-26.
• Abiola, B. , & Ojikutu, R. (2013). Pursuit and Evasion Game under Uncertainty. American Journal of Applied Mathematics and Statistics, 1(2), 21-26.
• Abiola, Bankole, and R.K. Ojikutu. "Pursuit and Evasion Game under Uncertainty." American Journal of Applied Mathematics and Statistics 1, no. 2 (2013): 21-26.
Import into BibTeX Import into EndNote Import into RefMan Import into RefWorks
1. Introduction and System Description
The subscripts
2. The Problem
The pursuer uses control
3. The Game
The final miss is defined as a weighted quadratic form:
To make the game meaningful, we shall impose the following limitations:
T is the final time. Joining (6) and (7) to (5) , are the following pay-off functional defined as:
the controller
Subsequently, we make the following assumptions:
Assumption A1: Assuming that the uncertainties
Assumption A2: There exist non-singular matrices
Also given any
4. Problem Formulation
Defining a new state variable
then, from equations (1) – (4) we have
We shall write equation (13) in a compact form as:
We also impose the condition that
On the basis of (15), the following problems arise:
Subject to
This problem would be solved under the assumption that the pay-off functional defined by
Based on the aforementioned assumption and noting that
we arrived at the formulation of the following two optimal control problems define by Problems (2) and (3)
Problem (2):
Subject to
Problem (3):
Subject to
5. Solution
Necessary Condition for a Saddle Point
For problems (2) and (3) we introduce the following assumptions:
i. The matrix functions
ii. Control functions
Now consider problem (2) and define the Hamiltonian for the problem as
The adjoint equation satisfies
6. Deduction
From (23) we deduce the following three cases, namely:
then any admissible
We knock off case (b) since
Case (c) is a trivial solution. Assume that the solution is not trivial, we consider case (a).
Therefore on the basis of (12) we have
Substituting (14) in (17) we get
For (18) to hold, it is necessary and sufficient that
For problem (3) the Hamiltonian is defined as
Following the same procedure as explained in problem (2), the adjoint vector satisfies
The following three cases can be deduced from (25)
then any admissible
From (22)
On substituting (37) into (41) we have the following equation
For (42) to hold
7. Value of the Game
We now employ the results in (20) and (34) to compute the value of the objective functional defined in problems (2) and (3)
We recall that
Multiply (45) by
adding equations (48) and (49) together to get
Integrating (50) we get the following:
We know from (47) that
and from
Similarly, we consider
From (52) and (53) we have
Combining (55) and (56) we have
Now, given that
From (37) substitute for
Combining (51) and (61) ,the value of the game is given as
8. Sufficient Condition for a Saddle Point
We shall employ the sufficiency theorem given by [Gutman, S. (1975)] to show that the solution obtained for each of the cases is indeed a saddle point.
We assume that
By virtue of (23) and (24) we have
using (29), (66) is reduced to
In (66) we substitute for
On the basis of (69) and (70), (30) is indeed a saddle point.
9. Conclusion
The idea of saddle point (min, max) controllers arises in engineering problems where extreme conditions are to be overcome (Gutman,1975) A natural example is the “boosted period” of missiles when
high thrust acts on the body so that every small deviation from the designed specifications cause unpredictable (input) disturbances in three nodes.
In this work we have considered situations where the disturbances affect the motions of the pursuer and the evader respectively. In our subsequent paper we hope to apply the results in this work to
problems arising from pricing of general insurance policies, particularly in a competitive and non-cooperative market.
[1] Abiola, B.(2012) “On Generalized Saddle Point Solution for a Class of Differential Games” International Journal of Science and Advanced Technology, 2(8):27-31.
[2] Abiola, B.(2009) “Control of Dynamical Systems in The Presence of Bounded Uncertainties” Unpublished PhD Thesis , Department of Mathematics University of Agriculture Abeokuta.
[3] Arika, I. (1976). “Linear Quadratic Differential Games in Hilbert Space” SIAM Journal of Control and Optimization, 1(1).
[4] Gutman,S, (1975). “Differential Games and Asymptotic Behaviour of Linear Dynamical Systems in the Presence of Bounded Uncertainty” PhD Dissertation, University of California,
[5] Leitmann, G, (2004) “A Direct Optimization Method and its Application to a Class of Differential Games” Journal of Dynamics of Continuous and Intensive Systems, 11, 191-204.
|
{"url":"http://pubs.sciepub.com/ajams/1/2/1/index.html","timestamp":"2014-04-18T16:17:36Z","content_type":null,"content_length":"64016","record_id":"<urn:uuid:45fc9978-c646-4d69-a5c0-7d6195c71384>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00379-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The topic degree is discussed in the following articles:
Lagrange’s analysis of roots
• TITLE: mathematicsSECTION:
Theory of equations
Lagrange presented a detailed analysis of the solution by radicals of second-, third-, and fourth-degree equations and investigated why these solutions failed when the degree was greater than or
equal to five. He introduced the novel idea of considering functions of the roots and examining the values they assumed as the roots were permuted. He was able to show that the solution of an
|
{"url":"http://www.britannica.com/print/topic/155995","timestamp":"2014-04-21T04:17:42Z","content_type":null,"content_length":"6581","record_id":"<urn:uuid:4ebbd0bd-45ce-4f71-b414-fcaa1fb9a680>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00633-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Implementation of the Spatial and the Temporal Cross-Ambiguity Function for Waveguide Fields and Optical Pulses
On the basis of space–time duality, we propose experimental setups to implement the cross-ambiguity function optically in space and time in one and two dimensions. In space the cross-ambiguity is
shown to be related to the coupling efficiency between butt-joined optical waveguides. In time it is related to the spectrogram or the frequency-resolved optical gating techniques for the
characterization of optical pulses.
© 1999 Optical Society of America
OCIS Codes
(060.2310) Fiber optics and optical communications : Fiber optics
(070.1060) Fourier optics and signal processing : Acousto-optical signal processing
(070.6020) Fourier optics and signal processing : Continuous optical signal processing
(120.4820) Instrumentation, measurement, and metrology : Optical systems
Daniela Dragoman, Mircea Dragoman, and Jean-Pierre Meunier, "Implementation of the Spatial and the Temporal Cross-Ambiguity Function for Waveguide Fields and Optical Pulses," Appl. Opt. 38, 822-827
Sort: Year | Journal | Reset
1. P. M. Woodward, Probability and Information Theory with Application to Radar (Pergamon, London, 1953).
2. J. Tu and S. Tamura, “Wave field determination using tomography of the ambiguity function,” Phys. Rev. E 55, 1946–1949 (1997).
3. H. H. Szu and J. A. Blodgett, “Wigner distribution and ambiguity function,” in Optics in Four Dimensions—1980 (American Institute of Physics, New York, 1981), pp. 355–381.
4. L. Cohen, “Time–frequency distributions—a review,” Proc. IEEE 77, 941–981 (1989).
5. B. H. Kolner, “Space–time duality and the theory of temporal imaging,” IEEE J. Quantum Electron. 30, 1951–1963 (1994).
6. D. Dragoman and M. Dragoman, “Wigner transform implementation in the time–frequency domain,” Appl. Opt. 35, 7025–7030 (1996).
7. K. W. DeLong, D. N. Fittinghoff, and R. Trebino, “Practical issues in ultrashort-pulse measurements using frequency-resolved optical gating,” IEEE J. Quantum Electron. 32, 1253–1264 (1996).
8. S. I. Hosain, J.-P. Meunier, and Z. H. Wang, “Coupling efficiency of butt-joined planar waveguides with simultaneous tilt and transverse offset,” J. Lightwave Technol. 14, 901–906 (1996).
9. R. K. Luneburg, Mathematical Theory of Optics (University of California at Berkeley, Berkeley Calif., 1966).
10. D. Onciul, “Efficiency of light launching into waveguides: a phase space approach,” Optik (Stuttgart) 96, 20–24 (1994).
11. D. Dragoman, “Wigner distribution function representation of the coupling coefficient,” Appl. Opt. 34, 6758–6763 (1995).
12. D. Dragoman and M. Dragoman, “Integrated-optic devices characterization with the Wigner transform,” IEEE J. Select. Topics Quantum Electron. 2, 181–186 (1996).
13. K. H. Brenner and K. Wodkiewicz, “The time-dependent physical spectrum of light and the Wigner distribution function,” Opt. Commun. 43, 103–106 (1982).
14. H. O. Bartelt, K. H. Brenner, and A. W. Lohmann, “The Wigner distribution function and its optical production,” Opt. Commun. 32, 32–38 (1980).
15. M. J. Bastiaans, “Wigner distribution function display: a supplement to ambiguity function display using a single 1-D input,” Appl. Opt. 19, 192–195 (1980).
16. R. Bamler and H. Glünder, “The Wigner distribution function of two-dimensional signals coherent-optical generation and display,” Opt. Acta 30, 1789–1803 (1983).
17. E. B. Treacy, “Measurement and interpretation of dynamic spectrogram of picosecond light pulses,” J. Appl. Phys. 42, 3848–3858 (1971).
18. A. A. Godil, B. A. Auld, and D. M. Bloom, “Picosecond time-lenses,” IEEE J. Quantum Electron. 30, 827–837 (1994).
19. N. Agrawal and M. Wegener, “Ultrafast graded-gap electron transfer optical modulator structure,” Appl. Phys. Lett. 65, 685–687 (1994).
OSA is able to provide readers links to articles that cite this paper by participating in CrossRef's Cited-By Linking service. CrossRef includes content from more than 3000 publishers and societies.
In addition to listing OSA journal articles that cite this paper, citing articles from other participating publishers will also be listed.
« Previous Article | Next Article »
|
{"url":"http://www.opticsinfobase.org/ao/abstract.cfm?uri=ao-38-5-822","timestamp":"2014-04-20T19:40:48Z","content_type":null,"content_length":"104273","record_id":"<urn:uuid:aa163fd8-dc9b-4451-83f7-542bdbb2e848>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00010-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Enriched stratified systems for the foundations of category theory
, 2006
"... By “alternative set theories ” we mean systems of set theory differing significantly from the dominant ZF (Zermelo-Frankel set theory) and its close relatives (though we will review these
systems in the article). Among the systems we will review are typed theories of sets, Zermelo set theory and its ..."
Add to MetaCart
By “alternative set theories ” we mean systems of set theory differing significantly from the dominant ZF (Zermelo-Frankel set theory) and its close relatives (though we will review these systems in
the article). Among the systems we will review are typed theories of sets, Zermelo set theory and its variations, New Foundations and related systems, positive set theories, and constructive set
theories. An interest in the range of alternative set theories does not presuppose an interest in replacing the dominant set theory with one of the alternatives; acquainting ourselves with
foundations of mathematics formulated in terms of an alternative system can be instructive as showing us what any set theory (including the usual one) is supposed to do for us. The study of
alternative set theories can dispel a facile identification of “set theory ” with “Zermelo-Fraenkel set theory”; they are not the same thing. Contents 1 Why set theory? 2 1.1 The Dedekind
construction of the reals............... 3 1.2 The Frege-Russell definition of the natural numbers....... 4
"... Abstract. Following a discussion of various forms of set-theoretical foundations of category theory and the controversial question of whether category theory does or can provide an autonomous
foundation of mathematics, this article concentrates on the question whether there is a foundation for “unli ..."
Add to MetaCart
Abstract. Following a discussion of various forms of set-theoretical foundations of category theory and the controversial question of whether category theory does or can provide an autonomous
foundation of mathematics, this article concentrates on the question whether there is a foundation for “unlimited ” or “naive ” category theory. The author proposed four criteria for such some years
ago. The article describes how much had previously been accomplished on one approach to meeting those criteria, then takes care of one important obstacle that had been met in that approach, and
finally explains what remains to be done if one is to have a fully satisfactory solution. From the very beginnings of the subject of category theory as introduced by Eilenberg & Mac Lane (1945) it
was recognized that the notion of category lends itself naturally to
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=5956846","timestamp":"2014-04-19T18:39:09Z","content_type":null,"content_length":"14827","record_id":"<urn:uuid:03112306-1fd5-41c4-b06b-d156cc57659c>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00102-ip-10-147-4-33.ec2.internal.warc.gz"}
|
: A Derivation Of The Bose Einstein Distribution
Monday, Apr 21, 2014
Museum of Learning
Explore a Virtual Museum of Knowledge
Bose-Einstein Statistics: A Derivation Of The Bose Einstein Distribution
Image Gallery :: Bose-Einstein Statistics A Derivation Of ...
** images are derived based on close guess matching, may not be exact. - Thumbnail images link to source website and full size image and additional content.
Related Resources :: Bose-Einstein Statistics A Derivation Of ...
|
{"url":"http://www.museumstuff.com/learn/topics/Bose-Einstein_statistics::sub::A_Derivation_Of_The_Bose_Einstein_Distribution","timestamp":"2014-04-21T07:31:17Z","content_type":null,"content_length":"13987","record_id":"<urn:uuid:8e30a48b-0dea-415e-8ca1-71a15fb4e1b4>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00389-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Winning Number
Smart Luck Glossary of Lottery Terms
Winning Number
A winning number is a lotto number that is in the set of numbers officially drawn for a lotto game is a winning number for that drawing. For example, the winning Florida Lotto drawing for January 2,
2008 is 07-16-27-39-47-53. Number 7 (as well as all of the other 5 lotto numbers in the drawing) is an example of a winning number for that drawing. Numbers that were not in the drawing, are losing
|
{"url":"https://www.smartluck.com/lotteryterms/winning-number.htm","timestamp":"2014-04-17T21:23:55Z","content_type":null,"content_length":"4266","record_id":"<urn:uuid:6cba4649-8b01-44fe-a64a-3379f79a6d89>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00521-ip-10-147-4-33.ec2.internal.warc.gz"}
|
t-designs on hypergraphs
Results 1 - 10 of 11
- J. Combinatorial Designs , 1998
"... Lattice basis reduction in combination with an efficient backtracking algorithm is used to find all (4 996 426) simple 7-(33,8,10) designs with automorphism group P\GammaL(2,32). 1 Introduction
Let X be a v-set (i.e. a set with v elements) whose elements are called points. A t-(v; k; ) design is a ..."
Cited by 12 (7 self)
Add to MetaCart
Lattice basis reduction in combination with an efficient backtracking algorithm is used to find all (4 996 426) simple 7-(33,8,10) designs with automorphism group P\GammaL(2,32). 1 Introduction Let X
be a v-set (i.e. a set with v elements) whose elements are called points. A t-(v; k; ) design is a collection of k-subsets (called blocks) of X with the property that any t-subset of X is contained
in exactly blocks. A t-(v; k; ) design is called simple if no blocks are repeated, and trivial if every k-subset of X is a block and occurs the same number of times in the design. A straightforward
approach to the construction of t-(v; k; ) designs is to consider the matrix M v t;k := (m i;j ); i = 1; : : : ; ` v t ' ; j = 1; : : : ; ` v k ' : The rows of M v t;k are indexed by the t-subsets of
X and the columns by the k-subsets of X. We set m i;j := 1 if the i-th t-subset is contained in the j-th k-subset, otherwise m i;j := 0. Simple t-(v; k; ) designs therefore correspond to ...
, 1997
"... We present a new iterative algorithm for solving large sparse systems of linear Diophantine equations which is fast, provably exploits sparsity, and allows an efficient parallel implementation.
This is accomplished by reducing the problem of finding an integer solution to that of finding a very smal ..."
Cited by 11 (4 self)
Add to MetaCart
We present a new iterative algorithm for solving large sparse systems of linear Diophantine equations which is fast, provably exploits sparsity, and allows an efficient parallel implementation. This
is accomplished by reducing the problem of finding an integer solution to that of finding a very small number of rational solutions of random Toeplitz preconditionings of the original system. We then
employ the Block-Wiedemann algorithm to solve these preconditioned systems efficiently in parallel. Solutions produced are small and space required is essentially linear in the output size.
, 1995
"... A computer package is being developed at Bayreuth for the generation and investigation of discrete structures. The package is a C and C++ class library of powerful algorithms endowed with
graphical interface modules. Standard applications can be run automatically whereas research projects mostly ..."
Cited by 10 (7 self)
Add to MetaCart
A computer package is being developed at Bayreuth for the generation and investigation of discrete structures. The package is a C and C++ class library of powerful algorithms endowed with graphical
interface modules. Standard applications can be run automatically whereas research projects mostly require small C or C++ programs. The basic philosophy behind the system is to transform problems
into standard problems of e.g. group theory, graph theory, linear algebra, graphics, or databases and then to use highly specialized routines from that field to tackle the problems. The
transformations required often follow the same principles especially in the case of generation and isomorphism testing.
"... Abstract In this paper we construct constant dimension space codes with prescribed minimum distance. There is an increased interest in space codes since a paper [13] by Kötter and Kschischang
were they gave an application in network coding. There is also a connection to the theory of designs over fi ..."
Cited by 9 (1 self)
Add to MetaCart
Abstract In this paper we construct constant dimension space codes with prescribed minimum distance. There is an increased interest in space codes since a paper [13] by Kötter and Kschischang were
they gave an application in network coding. There is also a connection to the theory of designs over finite fields. We will modify a method of Braun, Kerber and Laue [7] which they used for the
construction of designs over finite fields to do the construction of space codes. Using this approach we found many new constant dimension spaces codes with a larger number of codewords than
previously known codes. We will finally give a table of the best found constant dimension space codes. network coding, q-analogue of Steiner systems, subspace codes 1
- Austral. J. Combin , 1990
"... [u this paper, we show how the basis reduction algorithm of Kreher and Radziszowski can be used to construct large sets of disjoint designs with specified automorphisms. In particular, we
construct a (3,4,23;4)large set which rise to an infinite family of large sets of 4-desiglls via a result of Tei ..."
Cited by 5 (0 self)
Add to MetaCart
[u this paper, we show how the basis reduction algorithm of Kreher and Radziszowski can be used to construct large sets of disjoint designs with specified automorphisms. In particular, we construct a
(3,4,23;4)large set which rise to an infinite family of large sets of 4-desiglls via a result of Teirlinck [6]. 1
- Combinatorial Designs and Related Structures, Proceedings of the First Pythagorean Conference, volume 245 of London Mathematical Society Lecture Notes
"... Some simple 7-designs with small parameters are constructed with the aid of a computer. The smallest parameter set found is 7-(24; 8; 4): An automorphism group is prescribed for finding the
designs and used for determining the isomorphism types. Further designs are derived from these designs by ..."
Cited by 1 (1 self)
Add to MetaCart
Some simple 7-designs with small parameters are constructed with the aid of a computer. The smallest parameter set found is 7-(24; 8; 4): An automorphism group is prescribed for finding the designs
and used for determining the isomorphism types. Further designs are derived from these designs by known construction processes.
"... In this paper, we develop a computational method for constructing transverse t-designs. An algorithm is presented that computes the G-orbits of k-element subsets transverse to a partition given
that an automorphism group G is provided. We then use this method to investigate transverse Steiner quad ..."
Cited by 1 (1 self)
Add to MetaCart
In this paper, we develop a computational method for constructing transverse t-designs. An algorithm is presented that computes the G-orbits of k-element subsets transverse to a partition given that
an automorphism group G is provided. We then use this method to investigate transverse Steiner quadruple systems. We also develop recursive constructions for transverse Steiner quadruple systems, and
we provide a table of existence results for these designs when the number of points v 24. Finally, some results on transverse t-designs with t > 3 are also presented. 1
, 1995
"... Isomorphism problems often can be solved by determining orbits of a group acting on the set of all objects to be classified. The paper centers around algorithms for this topic and shows how to
base them on the same idea, the homomorphism principle. Especially it is shown that forming Sims chains, u ..."
Cited by 1 (1 self)
Add to MetaCart
Isomorphism problems often can be solved by determining orbits of a group acting on the set of all objects to be classified. The paper centers around algorithms for this topic and shows how to base
them on the same idea, the homomorphism principle. Especially it is shown that forming Sims chains, using an algorithmic version of Burnside's table of marks, computing double coset representatives,
and computing Sylow subgroups of automorphism groups can be explained in this way. The exposition is based on graph theoretic concepts to give an easy explanation of data structures for group
"... . We show the existence of simple 8-(31,10,93) and 8-(31,10,100) designs. For each value of we show 3 designs in full detail. The designs are constructed with a prescribed group of automorphisms
PSL(3; 5) using the method of Kramer and Mesner [8]. They are the first 8designs with small parameters w ..."
Cited by 1 (1 self)
Add to MetaCart
. We show the existence of simple 8-(31,10,93) and 8-(31,10,100) designs. For each value of we show 3 designs in full detail. The designs are constructed with a prescribed group of automorphisms PSL
(3; 5) using the method of Kramer and Mesner [8]. They are the first 8designs with small parameters which are known explicitly. We do not yet know if PSL(3; 5) is the full group of automorphisms of
the given designs. There are altogether 138 designs with = 93 and 1658 designs with = 100 and PSL(3; 5) as a group of automorphisms. We prove that they are all pairwise non-isomorphic. For this
purpose, a brief account on the intersection numbers of these designs is given. The proof is done in two different ways. At first, a quite general group theoretic observation shows that there are no
isomorphisms. In a second approach we use the block intersection types as invariants, they classify the designs completely. Keywords: t-design, Kramer-Mesner method, intersection number, isomorphism
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1301180","timestamp":"2014-04-24T20:13:27Z","content_type":null,"content_length":"35191","record_id":"<urn:uuid:736a6732-c025-40bb-a9cd-ff6fcbb9686a>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00599-ip-10-147-4-33.ec2.internal.warc.gz"}
|
FastMap: Fast eQTL mapping in homozygous populations
• We are sorry, but NCBI web applications do not support your browser and may not function properly.
More information
Bioinformatics. Feb 15, 2009; 25(4): 482–489.
FastMap: Fast eQTL mapping in homozygous populations
Daniel M. Gatti
^1,^† Andrey A. Shabalin
^2,^† Tieu-Chong Lam
^1 Fred A. Wright
^3 Ivan Rusyn
Andrew B. Nobel^2,^3,^*
Motivation: Gene expression Quantitative Trait Locus (eQTL) mapping measures the association between transcript expression and genotype in order to find genomic locations likely to regulate
transcript expression. The availability of both gene expression and high-density genotype data has improved our ability to perform eQTL mapping in inbred mouse and other homozygous populations.
However, existing eQTL mapping software does not scale well when the number of transcripts and markers are on the order of 10^5 and 10^5–10^6, respectively.
Results: We propose a new method, FastMap, for fast and efficient eQTL mapping in homozygous inbred populations with binary allele calls. FastMap exploits the discrete nature and structure of the
measured single nucleotide polymorphisms (SNPs). In particular, SNPs are organized into a Hamming distance-based tree that minimizes the number of arithmetic operations required to calculate the
association of a SNP by making use of the association of its parent SNP in the tree. FastMap's tree can be used to perform both single marker mapping and haplotype association mapping over an m-SNP
window. These performance enhancements also permit permutation-based significance testing.
Availability: The FastMap program and source code are available at the website: http://cebc.unc.edu/fastmap86.html
Contact: iir/at/unc.edu; nobel/at/email.unc.edu
Supplementary information: Supplementary data are available at Bioinformatics online.
Quantitative Trait Locus (QTL) mapping is a set of techniques that locates genomic loci associated with phenotypic variation in a genetically segregating population. QTL mapping has been highly
successful in determining causative loci underlying several disease phenotypes (Cervino et al., 2005; Hillebrandt et al., 2005; Wang et al., 2004) and can broadly be subdivided into two classes:
linkage mapping and association mapping. For standard linkage mapping in experimental crosses, likelihood or regression approaches are used to map QTL, with flanking markers used to infer genotypes
in the intervals between widely spaced markers (i.e. >1 cM) (Haley and Knott, 1992; Lander and Botstein, 1989). As marker density increases, linkage statistics may be computed at individual marker
loci, with minimal loss in precision or power (Kong and Wright, 1994). In contrast, simple association mapping does not attempt to explicitly consider the linkage disequilibrium structure between
marker loci, and thus typically considers association statistics computed only at the marker loci. In either case, the statistics computed at the markers in experimental cross-linkage designs, and in
association studies, are often identical, e.g. t-statistics to detect differences in phenotype means as a function of genotype. Here, we consider the case of markers collected at sufficient density
so that association statistics may be calculated only at the observed markers.
Recent advances in gene expression and single nucleotide polymorphism (SNP) microarray technology have lowered the cost of collecting gene expression and high-density genotype data on the same
population. These technologies have been used to produce high-density SNP datasets with thousands of transcripts and millions of allele calls in both mice (Frazer et al., 2007b; Szatkiewicz et al.,
2008) and humans (Frazer et al., 2007a). eQTL mapping has been successfully carried out in several inbred mouse populations (Bystrykh et al., 2005; Chesler et al., 2005; Gatti et al., 2007; McClurg
et al., 2007; Pletcher et al., 2004; Schadt et al., 2003). These studies have provided a revealing genome-wide view of the genetic basis of transcriptional regulation in multiple tissues, and form a
necessary foundation for systems genetics (Kadarmideen et al., 2006; Mehrabian et al., 2005).
The calculation of associations between tens of thousands of transcripts and thousands to millions of SNPs creates a computational challenge that can stretch or overwhelm existing tools. These
challenges are further compounded by multiple comparison issues arising from the large number of available SNPs and transcripts. Various methods have been used to address these issues. A resampling
approach (Carlborg et al., 2005; Churchill and Doerge, 1994; Peirce et al., 2006) is one common way of addressing multiple comparisons among markers, and it is used by several available QTL mapping
tools (Broman et al., 2003; Manly et al., 2001; Wang et al., 2003). Multiple comparisons among transcripts has been previously addressed by thresholding transcripts using q-values (Storey and
Tibshirani, 2003) obtained from transcript-specific testing of association with SNPs using Likelihood Ratio Statistic (LRS) (Chesler et al., 2005) or the mixture over markers method (Kendziorski et
al., 2006).
While parallel computation has been suggested as a potential solution to the computational challenges associated with eQTL analysis (Carlborg et al., 2005), many researchers have neither the
expertise nor the resources required to administer and maintain a computing cluster. To address the growing need for eQTL mapping in high-density SNP datasets, and the poor scalability of the
existing computational tools, we developed the FastMap algorithm and implemented it as a Java-based, desktop software package that performs eQTL analysis using association mapping. We achieve
computational efficiency through the use of a data structure called a Subset Summation Tree, which is described in Section 2 below. FastMap performs either single marker mapping (SMM) or haplotype
association mapping (HAM) by sliding an m-SNP window across the genome (Pletcher et al., 2004). FastMap is currently intended for the use with inbred mouse strains. Significance thresholds and p
-values are calculated for each transcript using multiple permutations of transcript expression values. In order to address multiple comparisons across transcripts, FastMap assigns a q-value (Storey
and Tibshirani, 2003) assessing FDR, to each transcript. We apply our software tool to two publicly available datasets consisting of gene expression measurements in panels of inbred mice and compare
our results to other software tools.
2 METHODS
This section first describes the calculations of test statistics (correlations) for SMM in a 1-SNP sliding window. First we introduce the concept of a subset sum M[g](s) and a Subset Summation Tree.
Subset sums are quantities that can be efficiently calculated using the Subset Summation Tree, and are used in the calculation of correlations. We then show how the subset sums and Subset Summation
Tree can be adapted to the fast calculation of ANOVA test statistics for m-SNP sliding windows (m>1).
In association mapping for homozygous inbred strains, the input data consists of two matrices: the first contains real-valued transcript expression measurements and the second contains SNP allele
calls, coded as 0 for the major allele and 1 for minor allele. Each matrix has the same number of samples (strains) n. Let S be the number of SNPs and let G be the number of transcripts.
Homozygous SNPs: 1-SNP window: we use the Pearson correlation as an association statistic in the case of a 1-SNP window. For a given transcript g and SNP s the correlation between g and s is
To simplify the formula, we assume without loss of generality that each transcript expression vector g is centered and standardized such that
In this case, the correlation expression reduces to
The denominator can be calculated once for each SNP, because it depends only upon the Hamming weight of s. In contrast, the numerator must be calculated for every SNP–transcript pair (S × G
computations). Our goal is to speed up calculation of the numerator. Denote the numerator by M[g](s):
As the SNPs are binary, M[g](s) is simply the sum of transcript expression values over a subset of samples defined by the minor allele of the SNP.
To illustrate how the calculation of the M[g](s) can be simplified, consider two SNPs s and s′ that differ only at the i-th position (thus s and s′ have Hamming distance of 1):
In this case, the quantity M[g](s′) can be calculated quickly (in one arithmetic operation) from M[g](s) as follows:
For any given transcript, the association statistic is the same for SNPs with the same strain distribution pattern (SDP). Hence, we calculate the association statistic once for each unique SDP. The
McClurg mouse data used in this article contains 156 525 SNPs, but has only 64 157 unique SDPs.
Additional improvements are based on Formula (2). To take full advantage of this relationship between correlations, we construct a tree, which we call a Subset Summation Tree. The vertices of the
tree correspond to unique subsets of samples. Each SDP defines a subset of samples associated with its minor allele. The tree contains all SDPs appearing in the SNP matrix. By construction, the edges
of tree connect SDPs which differ in one position (i.e. Hamming distance 1). The process of tree construction is described later in this section. It ensures that the tree is at least as efficient (in
terms of weight based on the Hamming distance) as the minimum spanning tree connecting all SDPs from the SNP matrix. An illustration of a subset summation tree is given in Figure 1.
The Subset Summation Tree is used to calculate the covariance sums in Pearson's correlation statistic. The table shows one gene expression vector and six corresponding SNP vectors for seven strains.
At each node, the covariance of the gene expression ...
Traversing the tree we can calculate the covariance M[g](s) for all SDPs in the tree with one arithmetic operation per SDP. One additional arithmetic operation is required to calculate the
correlation from M[g](s).
Homozygous SNPs: m-SNP sliding window: The use of a consecutive 3-SNP sliding window has been shown to improve the associations that can be detected in mouse studies (Pletcher et al., 2004). FastMap
is capable of employing any m-SNP window specified by the user. Within each m-SNP window, the strains form haplotypes that partition strains into ANOVA groups. A one way ANOVA test statistic is then
used to assess the relationship between a gene g and an m-SNP window.
Consider a 3-SNP window that contains k unique haplotype (ANOVA) groups across the n stains. Let A[i] denote the set of samples in the i-th ANOVA group, and let the transcript expression values in
the i-th ANOVA group be g[ij],j=1,…,n[i]. The associated ANOVA test statistic is calculated as
where the between group sum of squares SSB, and within group sum of squares SSW are calculated as follows:
The sums of squares are related by SST=SSB+SSW.
For a given transcript, the total sum of squares (SST) remains constant across all SNPs. As in the 1-SNP window case, the gene expression values are standardized to satisfy the conditions in Equation
(1). For standardized expression measurements, the SST and SSB calculations simplify as follows:
where M[g](A[i]) is sum of the transcript expression values for the i-th ANOVA group. As before, M[g](A[i]) can be calculated efficiently using the Subset Summation Tree. The difference is that the
tree for these calculations connects subsets of samples defining the m-SNP ANOVA groups, as opposed to SDPs defined by single SNPs. Once the SSB is calculated, the F-statistic is calculated as:
Tree construction: The Subset Summation Tree is used for fast calculation of M[g](A[i])—sums of transcript expression values over subsets of samples {A[i]}. Tree construction is initiated by
obtaining the family of sample subsets of interest {A[i]} from the set of SNPs. The tree is grown starting from single root element (empty subset) by sequential addition of the nearest element from {
A[i]} to the tree.
All the subsets {A[i]} are put in a hash table (HT) that stores the subsets that are not yet members of the tree. The tree is grown by connecting subsets that are at the minimum distance from the
tree. Node selection and connection to the tree can be optimized by taking advantage of two facts. First, the Hamming distances are positive integers. Thus, once we find a subset in the HT within
distance 1 of a particular tree vertex, we connect them, adding the subset to the tree and removing it from the HT. To find such an SDP in the HT we use the second fact: for any subset, there are
only n possible subsets that are within hamming distance 1 from it. Thus, instead of calculating distances from a certain tree vertex to all subsets in the HT we can check if the HT contains any of
the n possible neighbor subsets. This approach reduces the complexity of the search for close (within distance 1) neighbors of a given tree vertex from O(nS) to O(n).
The procedure above is applicable as long as there are SDPs in the HT within distance 1 from the tree. Once there are no SDPs in the HT within distance 1 from tree vertices, the search continues for
SDPs within distance 2. The same optimizations are applicable here—once an SDP within distance 2 is found, it should be connected to the tree and there are n(n−1)/2 possible SDPs within distance 2
from a given tree vertex. The same technique is applied even for the search for subsets within distance 3. When the remaining vertices are at Hamming distance 4 or greater, an exhaustive search is
performed to find a node in HT that is a minimum distance from the tree. This process is repeated until all SNPs have been inserted into the tree.
Permutation-based significance thresholds: for a single transcript, the association statistic is calculated between the observed values of that transcript and all SNPs. The transcript data are then
permuted while the SNP data are held fixed. Association statistics are calculated between the permuted transcript values and all SNPs and the maximum association statistic is stored. The distribution
of the maximum association statistics obtained from 1000 permutations of the transcript's values is used to define significance thresholds for individual (transcript, SNP) pairs, and to assign a
percentile-based p-value to the observed maximum association of the transcript across SNPs.
Significance across multiple transcripts: the procedure above assigns a p-value to each transcript that accounts for multiple comparisons across SNPs through the use of the maximum association
statistic. In order to correct for multiple comparisons across transcripts, we calculate q-values (Storey and Tibshirani, 2003) for each transcript, using the p-values obtained from the
permutation-based maximum association test.
2.1 Data
2.1.1 BXD gene expression data
The BXD Liver dataset is available from genome.unc.edu, and is described in Gatti et al. (2007). Briefly, it consists of microarray-derived expression measurements for 20 868 transcripts in 39 BXD
recombinant inbred strains and the C57BL/6J and DBA/2J parentals. The data were normalized using the UNC Microarray database and QTL analysis was performed on all transcripts.
2.1.2 BXD marker data
The BXD marker data consist of 3795 informative markers taken from a larger set of 13 377 markers. Briefly, consecutive markers with the same SDP were removed and only the flanking markers of such
regions were included. The data were downloaded from http://www.genenetwork.org/genotypes/BXD.geno; further information is available at http://www.genenetwork.org/dbdoc/BXDGeno.html.
2.1.3 Hypothalamus gene expression data
The mouse hypothalamus dataset {"type":"entrez-geo","attrs":{"text":"GSE5961","term_id":"5961"}}GSE5961 was downloaded from the NCBI Gene Expression Omnibus website. These data are described in
McClurg et al. (2007). The 58 CEL files were normalized using the gcrma package from Bioconductor (version 1.9.9) in R (version 2.4.1). The data were subset to include only the 31 male samples, and
removing the NZB data because the entire array appeared as an outlier in hierarchical clustering of the arrays. There were 36 182 probes on the array; of these a subset of 3672 transcripts having an
expression value >200 and at least a 3-fold difference in expression in one strain were selected. Transcripts containing a single outlier strain with expression values >4 SDs from the mean were
removed from the dataset. There were 402 such transcripts, leaving 3270 transcripts for analysis in FastMap.
2.1.4 Hypothalamus SNP data
The SNP data were obtained from McClurg et al. (2007) and originally contained 71 inbred strains. Missing genotype data were imputed using the algorithm of Roberts et al. (2007a). There were 156 525
SNPs, of which 99 were monomorphic across the 32 strains. These SNPs were removed from the analysis, leaving 156 426 SNPs. There were 64 790 unique SDPs in this final dataset.
2.2 Settings
In Section 3.2, we compare FastMap performance with two other publicly available tools: SNPster (McClurg et al., 2006) and R/qtl (Broman et al., 2003). The setting used to run them are detailed
2.2.1 Snpster settings
SNPster runs were performed using the tool available at snpster.gnf.org. The following settings were selected and are listed in the order in which they appear on the website. (i) Log transform data:
No. (ii) Test statistic: F-test. (iii) Method of calculating significance: parametric. (iv) Compute gFWER: No. The default settings were used for the remaining options on the web site.
2.2.2 R/qtl settings
R/qtl version 1.08-56 for R 2.7 was used to perform eQTL analysis on the BXD Liver dataset. R/qtl was configured to perform Haley–Knott regression only at the observed markers. eQTL significance was
determined by performing 1000 permutations for each transcript and selecting only those eQTLs above the 95% LOD threshold.
2.2.3 Computer for performance testing
A Pentium 4 with a clock speed of 3.4 GHz and 4 GB of RAM running Microsoft Windows XP Professional(r), SP2 was used for all timing runs. No other applications were open during the runs.
3.1 FastMap application
FastMap is written in the Java programming language and is driven by a simple graphical user interface (GUI, Fig. 2a). The required input files are (i) a transcript expression file with mean
expression values for each mouse strain and (ii) a SNP file containing allele calls for all strains, with the major and minor alleles coded as 0 and 1, respectively. Once the SNP file has been
loaded, FastMap constructs a Subset Summation Tree (see Section 2) for the SNP data, a computational task that is performed only once for a given set of strains. FastMap allows the user to perform
either SMM by calculating the Pearson correlation of each transcript expression measurement with each SNP, or HAM by sliding an m-SNP window across the genome and calculating the ANOVA F-statistic
for the phenotype versus the distinct haplotypes observed in the window (Pletcher et al., 2004). The association statistic at each SNP is displayed in a zoomable panel that links to the University of
California at Santa Cruz Genome Browser (Kent et al., 2002; Pontius et al., 2007) (Fig. 2b and c). Association plots may be exported as text files or as images.
FastMap application GUI. (a) FastMap with a list of probes on the left and the QTL plot on the right. (b) A zoomed in view of the significant QTL on Chr 1. (c) The same region in the UCSC Genome
browser, to which FastMap can connect.
QTL mapping with sparsely distributed markers has traditionally used maximum likelihood methods and has employed the LRS or the related Log of the Odds ratio (LOD) as a measure of the association
between genotype and phenotype [LRS=2ln(10)×LOD]. When marker density is high, regression techniques applied only at the observed markers will produce results which are numerically equivalent to the
LRS or LOD (Kong and Wright, 1994). In fact, the LRS, Student t-statistic, Pearson correlation and the standard F-statistic, can be shown to be equivalent when they are applied at the marker
locations (Supplementary Material). While previous literature has shown that regression methods produce estimates with a higher mean square error and have less power (Kao, 2000), these results apply
primarily to the case of interval mapping when the spacing between markers is wide (>1cM). For these reasons, FastMap employs the Pearson correlation for SMM and the F-statistic for HAM when
employing high-density SNP datasets.
The significance of eQTLs for a single transcript may be determined using a permutation-based approach (Churchill and Doerge, 1994). The expression values of each transcript are permuted, the
association statistics of each transcript with all SNPs are calculated and the maximum transcript-specific association statistic is retained. This process is repeated 1000 times, and a significance
threshold is taken as the 1−α percentile of the empirical distribution of the maxima. Both the number of permutations and the significance thresholds may be specified by the user. Since the various
association statistics are equivalent when applied at the markers, the significant marker locations will be the same for any choice of these statistics. Once a QTL peak that exceeds a user selected
threshold has been identified, the width of the QTL must be defined in order to identify potential candidate genes for further study. Given a local maximum d, a confidence region can be defined as
all markers q in an interval around d such that 2ln(LR(q))≥max[d] 2ln(LR(d))−x and this interval is referred to as an (x/2ln10)-LOD support interval (Dupuis and Siegmund, 1999). The choice of x=4.6
yields a 1-LOD confidence interval, which has been widely used in linkage analysis. A more conservative choice of x=6.9 (a 1.5-LOD interval) is more appropriate to situations with dense markers,
yielding approximate 95% coverage under dense marker scenarios. Intervals for non-LR association statistics can be calculated from the relationships between statistics provided in the Supplementary
Material. In practice, eQTL peak regions are limited by the effective resolution determined by breeding and recombination history.
FastMap assigns a p-value to each transcript that indicates the significance of the maximum association of that transcript across all the available markers. In situations where it is necessary or of
interest to simultaneously consider multiple transcripts, additional steps must be taken to account for the resulting multiple comparison problem. We address this by calculating the q-value (Storey
and Tibshirani, 2003) of every transcript. The q-value of a transcript is related to the false discovery rate. In particular, the q-value of a transcript is an estimate of the fraction of false
discoveries among transcripts that are equally or more significant than it is. For example, if we create a list of transcripts consisting of a transcript with q-value equal to 10%, and all those
transcripts having smaller permutation-based values, then we expect 10% or less of the transcripts on the list to have a significant association with at least on SNP or haplotype.
Permutation-based significance testing is frequently used in eQTL analysis (Doerge and Churchill, 1996; Peirce et al., 2006), and typically forms the bulk of the computational burden in eQTL mapping.
It is natural to ask whether a parametric approach, based on Gaussian p-values, would be just as effective and save a significant amount of time. We note that permutation-based testing offers several
advantages over parametric approaches. Permutation testing deals cleanly with the problem of multiple comparisons, and induces a null distribution under which there is no association between
transcript expression and genotype, regardless of the underlying distributions from which the data are drawn, and the correlations between SNPs. In addition, the normality assumptions underlying
parametric tests are often violated in practice.
3.2 Performance and speed
In order to gauge the performance improvement provided by FastMap over existing software, we compared computation times using two microarray datasets. The first consists of 20 868 transcripts and
3795 markers in 41 strains of mice [BXD dataset; (Gatti et al., 2007)]. This dataset was selected because, unlike the following larger dataset, it can be loaded into the widely used R/qtl package
without exhausting computer memory. The second is a hypothalamus dataset (McClurg et al., 2006) that consists of 3672 transcripts, 156 525 markers in 32 strains of laboratory inbred mice. This
dataset was selected for its dense genotype information, which is on the scale of the expected high-density SNP data for which we designed FastMap.
The amount of time required to perform eQTL mapping in these datasets is summarized in Table 1. In the BXD dataset, FastMap performs SMM for the entire set of 20 868 transcripts in about half an
hour, which is the same time required for R/qtl to analyze 100 transcripts. The hypothalamus data were previously analyzed with an association mapping tool called SNPster (McClurg et al., 2006),
which is available as a web application hosted by the Genomic Institute of the Novartis Research Foundation (GNF). A single transcript typically requires <5 min to analyze, depending on the load on
SNPster's web server. However, obtaining results for thousands of transcripts from submissions to an external website is impractical in most cases. Another version of SNPster runs at GNF in parallel
on a 200 node cluster, which is not publicly available, in batches of 10 transcripts per node. It requires 18 min to process these 10 transcripts using 1 000 000 bootstrap resamplings for each
transcript, and a −log(P-value) threshold of 2.5, which implies ~1.8 CPU-minutes per transcript (T.Wiltshire, personal communication). If these 3672 transcripts were analyzed serially rather than in
parallel, this would require 110.2 h. In contrast, FastMap runs on a standard desktop computer and can perform eQTL mapping for these same 3672 transcripts with 156 K SNPs in 32 strains in 12.3 h.
Large computing clusters, and the expertise required to administer them, are not available to all laboratories. FastMap offers the convenience of running on a single, local computer in a reasonable
amount of time (overnight, or over a weekend for more than 10 000 transcripts).
FastMap eQTL mapping times
We evaluated the scalability of FastMap with increasing numbers of transcripts and SNPs using the hypothalamus dataset. Since we are aware of no stand-alone software that can perform eQTL mapping
with hundreds of thousands of SNPs, we compared FastMap's performance in these plots to a brute force approach in which all calculations are performed without any optimizations. In the case of both
SMM and HAM, computation time for FastMap scales linearly with increasing numbers of transcripts (Fig. 3a). FastMap also scales linearly with increasing number of SNPs (Fig. 3b).
FastMap scales linearly with increasing numbers of genes and SNPs. (a and b) The time required to compute the association of increasing numbers of transcripts with 156K SNPs. (c and d) The time
required to compute the association of one transcript with ...
In order to examine the scalability of our algorithm with increasing numbers of strains, we determined tree construction times for various sets of inbred strains genotyped at approximately 156 525
SNPs (Table 2). The amount of time required to construct the tree is a function of both the number of strains as well as their ancestral relationships. Strains that are closely related (i.e. all
derived from Mus musculus domesticus (M.m.domesticus)) will produce nodes in the tree that are close to each other. As more distantly related strains are added (i.e. M.m.domesticus-derived strains
combined with Mus musculus musculus (M.m.musculus)-derived strains), the distance between SDPs becomes larger and tree construction times increase. Most existing eQTL studies in panels of inbred
strains have used less than 40 strains (Bystrykh et al., 2005; Chesler et al., 2005; McClurg et al., 2007). Tree construction required 5.3 min for the 32 strains of the hypothalamus dataset. In
contrast, for a panel of 71 inbred strains derived from both M.m.domesticus and non-M.m.domesticus strains, tree construction requires ~10 h using a 1-SNP window and ~24 h using a 3-SNP window. Tree
construction is carried out only once, and the resulting calculations still require less time than a brute force approach. Faster algorithms for tree construction that improve scalability with
increasing numbers of strains are currently under investigation.
FastMap tree construction and association mapping times with increasing numbers of strains (in seconds)
3.3 Population stratification
As noted by McClurg et al. (2007), considerable population stratification is present when panels of laboratory inbred strains are used. Common laboratory inbred strains are a mixture of
M.m.domesticus, M.m.musculus, M.m.castaneus, M.m.molossinus and M. spretus, which arose during the creation of the laboratory inbred strains (Beck et al., 2000; Yang et al., 2007). Figure 4 shows a
SNP similarity matrix for the 32 inbred strains in the hypothalamus dataset, where each cell represents the proportion of SNPs that have the same allele between two strains (normalized Hamming
distance) across all 156K SNPs. The non-M.m.domesticus-derived strains cluster tightly in the lower left hand corner, indicating that they are more genotypicly similar to each other than to the
M.m.domesticus-derived strains. Numerous transcripts and SNPs exhibit systematic differences across these two strata. Consequently, each such transcript will show a significant association with every
such marker. In eQTL mapping, this produces numerous markers that show significant associations with the expression of a single transcript, leading to horizontal banding in the transcriptome map (
Fig. 5a and b). When such differences exist, most permutations of the transcript will yield a lower association statistic than the observed one, this leads to inappropriately low significance
thresholds (Fig. 5c). In order to remove this strata effect, we median center the values of each transcript within M.m.domesticus and non-M.m.domesticus strata. As shown in Figure 5d, the resulting
transcriptome map becomes interpretable with cis-eQTLs along the diagonal. The few horizontal bands that remain are due to a subset of the M.m.musculus-derived strains with transcript expression
levels that differ from the other strains; this prevents the median subtraction method from removing the strata effect completely. We recommend removing those few transcripts that demonstrate this
SNP similarity matrix demonstrates population stratification among laboratory inbred strains. In one row, each cell represents the proportion of SNPs (in the 156K dataset) with the same allele in the
other strains. The similarity matrix has been hierarchically ...
Strata median correction dramatically improves transcriptome map. (a) The transcriptome map for 3270 transcripts without correcting for the population structure for all eQTL above a
transcript-specific 5% significance threshold. The horizontal bands dominate ...
FastMap allows the user to select strata by genotype a priori, and subtracts strata means or medians from the transcript values in each stratum (Pritchard et al., 2000). While there are more
sophisticated methods for addressing population stratification (Kang et al., 2008), FastMap is not primarily designed to address this problem. While laboratory inbred strains have been useful in
mapping Mendelian traits, eQTL mapping with FastMap will have greater utility in well-segregated populations like the Collaborative Cross (Churchill et al., 2004; Roberts et al., 2007b), due to
increased genetic diversity, as well as the finer recombination block structure. In such well-mixed populations, mean/median subtraction within strata or the non-uniform resampling technique used by
SNPster should not be required.
3.4 Comparison of FastMap to other QTL software
We compared the eQTL results produced by FastMap to those produced by R/qtl. R/qtl was configured to use Haley–Knott regression (Haley and Knott, 1992) and 1000 permutations to determine significance
thresholds. While R/qtl is designed to perform linkage mapping, we note that when linkage mapping is performed exclusively at the markers, the calculations are identical to those performed in eQTL (
Supplementary Material). eQTLs may be broadly separated into two categories; eQTLs located within 1 Mb of the transcript location (cis-eQTLs) and eQTLs located further than 1 Mb from the transcript
location (trans-eQTLs). Both FastMap and R/qtl found similar numbers of total eQTLs, cis-eQTLs and trans-eQTLs (Fig. 6a). Figure 6b shows that the eQTL locations found by each software package are
essentially identical; 98% of the eQTLs found by each method are within 5 Mb of each other, a margin of resolution consistent with the resolution of the BXD marker set. Since permutation-based
testing involves randomization, it should not be expected that 100% of the eQTLs would match between the two methods. Furthermore, the eQTL histograms produced by each method (Fig. 6c and d) are
similar, with differences being due to histogram binning effects (see insets).
FastMap eQTL mapping results almost equivalent to those obtained with R/qtl. (a) The BXD dataset and the number of matching eQTLs between FastMap and R/qtl at varying distances. (b) The high degree
of concordance between FastMap and R/qtl. (c and d) eQTL ...
eQTL mapping in the hypothalamus dataset was performed to evaluate computational performance, rather than to compare the results with SNPster. However, it is natural to ask how the results of the two
methods compare when we employ median centering in FastMap to correct for population stratification. We correct for population stratification by median centering transcript values within
M.m.domesticus-and non-M.m.domesticus-derived strains.
Since SNPster does not provide a fixed threshold for significance, we selected 2413 transcripts which had SNPster p-values <10^−4. Of these, 105 were cis-eQTLs and 2308 were trans-eQTLs. FastMap
produced eQTLs for 382 transcripts at or above a 0.05 significance threshold, of which 29 were cis-eQTLs and 353 were trans-eQTLs. The locations of 55 eQTLS were common between the two methods and
all of these were cis-eQTLs, which have been reported to be more reproducible than trans-eQTLs (Peirce et al., 2006).
It should be noted that FastMap and SNPster differ in several important respects. SNPster uses a heuristic weighted F-statistic who's null distribution is not known, it employs a resampling approach
that selects strains in a random manner with a non-uniform distribution. FastMap uses the standard F-statistic and conventional permutation-based significance thresholds. For these reasons, it is
unclear whether the results of the two methods should be concordant, and biological validation of both eQTL mapping approaches may be necessary to address the differences.
We have introduced new software for fast association mapping that uses a new method to speed the time required to calculate summations involved in QTL mapping. These improvements are particularly
advantageous in the context of eQTL mapping when thousands of transcripts are analyzed, and permutation-based significance thresholds are calculated for each transcript. The utility of the Subset
Summation Tree extends beyond eQTL mapping: the idea can be applied to any situation where sums must be calculated over groups whose membership can be specified with a binary string. FastMap does not
require the use of computer clusters and can be run on a standard desktop computer. FastMap performs both SMM and HAM over m-SNP windows, and calculates permutation-based p- and q-values. These
performance enhancements make FastMap suitable for eQTL mapping in high-density SNP datasets.
Supplementary Material
[Supplementary Data]
We thank Andrew Su of the Genomic Institute of the Novartis Research Foundation for providing bulk SNPster output of the McClurg hypothalamus data and to Tim Wiltshire of UNC for vital discussions in
understanding the SNPster algorithm and providing SNPster timing results.
Funding: National Institutes of Health (grant numbers P42 ES005948 and R01 AA016258); National Science Foundation (grant number DMS 0406361). Although the research described in this article has been
funded in part by the United States Environmental Protection Agency through (grant numbers RD832720 and RD833825), it has not been subjected to the Agency's required peer and policy review and
therefore does not necessarily reflect the views of the Agency and no official endorsement should be inferred.
Conflict of Interest: none declared.
Articles from Bioinformatics are provided here courtesy of Oxford University Press
|
{"url":"http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2642639/?tool=pubmed","timestamp":"2014-04-18T17:05:49Z","content_type":null,"content_length":"108267","record_id":"<urn:uuid:0128f27d-05d8-4206-8455-6f7025e6a0ac>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00474-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Methods of Analysis - Football
By Bob Stoll - Updated July, 2008
I have been a successful professional handicapper for 21 years and my methods have evolved from my early years as mostly a technical handicapper. While the research I have done over the years
indicates that technical analysis does work, I felt as if there were some games in which I was giving up line value on the team that my situational analysis dictated that I bet. In an effort to gauge
line value, I developed a model that is mathematically sound and that math model has produced very good results over the 7 years since its inception. With 7 years of good results, my mathematical
predictions have become a major part of my handicapping process. My research of fundamental indicators has also proven to be profitable and I will continue to explore the statistical profiles of
opposing teams to find value in a particular match-up. Below is an explanation of the methods that I currently employ in my handicapping and a summary of how I combine those tools to accurately
measure a team's chance of covering the pointspread at any given line based on the situation, fundamental indicators, my math model prediction and the line.
Situational Analysis
Situational Analysis is the study of performance patterns, either on a league-wide basis or on a team specific basis.
I tend to shy away from most team oriented trends unless the head coach or core of star players has been intact over the term of the trend. I certainly wouldn't pay much attention to a Carolina
Panthers trend that included games prior to the arrival of head coach John Cox – who changed the personality of that team. On the other hand, longer term trends of the Pittsburgh Steelers do have
some validity due to the long tenure of head coach Bill Cowher – even though the personnel have changed over the years.
Most of the situational analysis that I employ are league wide trends rather than team specific patterns. For instance, NFL home underdogs have been pretty good bets over the years after an upset win
(173-128-9 ATS since 1980) since such teams tend to play with more confidence in that situation. That is a very simple trend with very few parameters and I expect that situation to continue to be
profitable in the future. Many handicappers tend to back-fit past data by adding more and more factors (parameters) to a situation until they have a very high percentage angle (but also a much
smaller sample size). However, my research has shown that a situation's predictability is sacrificed with each parameter added to derive that situation. For instance, a situation with a record of
40-20 (67%) that is derived using 10 factors isn't as predictive as the 57.4% home underdog situation that I presented above, which has just 4 parameters (this game home, this game dog, won last
game, dog last game) and a much larger sample size. It's easy to find a very high-percentage situation if you use an unlimited number of parameters to get to that situation, but all that will result
in is a situation that explains what has happened rather than something that helps predict what will happen. My research, and the theories of statistics, shows that the more predictive angles have
fewer factors and a larger sample size, rather than a smaller sample situation with a high winning percentage that was derived by using too many parameters. Further research I did in the Summer of
2004 enabled me to accurately assess a situation's future performance based on the win percentage, sample size, number of parameters and more recent performance (i.e. record of the angle over the
past 3 seasons). That research led to a more realistic use of situational analysis than I've employed in the past. For instance, I can now tell you that a situation with a record of 140-60-5 ATS that
uses 6 parameters has a 56.8% chance of winning the next time it applies if the line is fair. Having a realistic expectation of a situation's value has helped my overall analysis immensely the last 2
seasons and I will continue to devote time each summer to update the research on the predictability of my situational analysis.
Fundamental Indicators
Fundamental indicators are based on the statistical profiles of the teams involved in an upcoming game, using historical spread results of match-ups that involved teams with similar profiles. For
instance, I have learned that good running teams tend to perform better at home while good passing teams have a tendency to perform relatively better on the road. You might find it surprising to find
out that teams that have turned the ball over a lot are actually pretty good bets – which is due to the fact that their previous turnover problems have likely led to disappointing past performances
that will be already reflected in the current pointspread, while the likelihood of turning the ball over at the same high rate is not likely. That scenario creates line value in favor of the turnover
prone team. I also use fundamental indicators that are based a team's statistical profile (i.e. good rushing team, average passing team, good run defense, etc) when matched against the profile of
their opponent. For instance, I can query how teams with a good run offense have performed, against the pointspread, when facing teams with a poor run defense. I have found those sort of match-up
indicators to be very insightful and my recent research shows that indicators based on season-to-date statistics are very predictive and weighing my fundamental indictors more heavily last season has
helped produce better overall results.
Math Model
Most handicappers that try to come up with a formula to predict future games tend to make the same mistake. That mistake is using regression analysis to find the correlation between different
statistics and point differential. While that exercise is very useful for explaining which statistics impact a game's result, regression is not necessarily useful in using past statistical averages
to predict future results since some important statistics simply don't correlate very highly to the future. For example, turnovers are the number one factor in point differential in football, but
turnovers are also the least predictable statistic. A model that is based on regression analysis will weigh turnovers very highly, but since past turnovers do not correlate highly with future
turnovers such models will over-weigh the affect of past turnovers – creating a model that is good at explaining what has happened but not very good at predicting what will happen.
My math model incorporates the predictability of past statistics to future games and uses each team's compensated statistics rather than their raw stats, which adds to the accuracy of my prediction.
Compensated statistics are derived by comparing a team's statistics to the statistics of the opponents that they have faced. For instance, a team averaging only 3.6 yards per rush on offense is
actually a better than average running team if they have faced a schedule of opponents that combine to allow an average of just 3.4 ypr on defense. Using compensated statistics in combination with
the predictive nature of each statistic used in my model produces an accurate measure of the true differences between two teams future performances – not the difference between their past
performances. I also adjust my projected numbers based on current personnel for each team and those extra hours of statistical work have paid off handsomely over the years (and I get better each year
at making those adjustments). I also take out meaningless plays such as kneel downs at the end of a half or game and quarterback spikes, so the game statistics that I use are more representative of a
team's performance than the statistics used by other handicappers.
I've been using my current math NFL model for 8 years and the record is a very good 199-140-8 ATS (59%) when my math model prediction is 5 points or more away from the actual pointspread and my
College math model had produced 57% winners from 2001 to 2004 using differences of 7 points or higher. I made improvements to my College Math Model prior to the 2005 season and the results have been
even better than I anticipated. My College math model kicks in week 5 and it has been 55% picking every single College game from week 5 on since 2005 and an incredible 60% ATS in games where the
difference between my prediction and the line is 6 points or more (5.8 or more, actually) as long as both teams had played 3 or more games.
Combined Analysis
The key to the 2004 research on my methods was finding a way to combine my situational analysis, fundamental indicators and my math model to give me an overall chance of a team covering at any given
number. My performance on my Best Bets the last 4 seasons is an indication that I succeeded in that endeavor and I will continue to refine the accuracy of my methods each year. An example of combined
analysis is a game in which Team A applies to a 140-60-5 ATS situation that uses 6 parameters. Team B applies to a statistical profile indicator with a record of 86-28-4 ATS and my NFL math model
favors Team A by 12.4 points when Team A is a 7 point favorite in reality. As discussed above, a situation with a record of 140-60-5 and 6 parameters has a 56.8% chance of winning if the line is
fair. The fundamental indicator favoring Team B has a 58.2% chance of winning given a fair line and my math model would give Team A a 56.9% chance of covering at a line of -7 points. The trick is
assigning a point value to the situation and the fundamental indicator based on their chance of covering at a fair line. I simply put everything in terms of points based on the relationship between
point differentials and the chance of covering of my math model. Each point difference in my math model is worth about 1.3% in chance of covering, so each percentage point is worth about 0.8 points
(1/1.3). In this case, the situation favoring Team A is worth 5.3 points while the fundamental indicator favoring Team B is worth 6.4 points. My math model favors Team A by 10.4 points, so adding the
value of the situation and the indicator would result in an overall prediction of Team A by 9.3 points (+5.3 – 6.4 + 12.4 = 11.3), which would give Team A a 55.5% chance of covering at the line of -7
points. Obviously, things can become a lot more complicated when there are multiple situations and indicators applying to a particular game - which is most often the case, but my years of studying
probability theory at Berkeley have given me the tools to sort through it all and come up with an accurate measure of the overall affect of the situations and indicators.
A lot of handicappers use situational analysis and math models in their handicapping but few, if any, of them have studied the predictability of their methods, as I have, or found a realistic way of
combining their methods for an overall measure of predicted success on every game.
Sports Betting as an Investment
My analysis has also shown me that there is no game that has higher than a 70% chance of covering - so ignore those handicappers that advertise "Locks" and "Guaranteed Winners". You'll also notice
plenty of handicappers that claim to win well over 60% of their games. I have had plenty of seasons in which I've won over 60% of my Best Bets, but there is no handicapper that is better than 60%
over the long run. I feel that I'm on my way to that sort of long term success in football given the research I've done the past 3 summers on my handicapping methods. While 60% may not seem that
impressive to those of you that are new to sports betting, you should consider that a bet with a 60% chance of covering is a 16% investment at -110 odds (.60 - .40 - .040 = .016), and that investment
is compounded since you can use your winnings immediately for future sports investments. I consider any season in which I win 56% or more to be a successful season, as 56% is a solid 7.6% investment
per bet while winning 56% of 150 Best Bets would lead to a record of 84-66, which is a profit of 11.4 units. If you have a bankroll of $10,000 and expect to win 56% on 150 bets then you can safely
wager an average of $400 per game (or $160 per Star using my Star ratings) with less than 1% chance of losing your bankroll. Winning 56% of your 150 bets at $400 a game would result in a profit of
$4560, which is actually a 45.6% investment on your $10,000. So, you can see why I consider 56% to be a successful season and a very good investment. For more information on money management you
should consult my Sports Betting as an Investment section on this site.
More Essays
|
{"url":"http://www.drbobsports.com/football.cfm?p=8","timestamp":"2014-04-18T03:15:02Z","content_type":null,"content_length":"27126","record_id":"<urn:uuid:5a0d6ebb-0b15-4c44-af08-15c16eee2dd5>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00619-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Physics Forums - View Single Post - Grid Curves
To find the tangent vector in the "u" direction, differentiate r(u,v)= <vcos u, vsin(u), v^2> with respect to u. To find the tangent vector in the "v" direction, differentiate r(u,v)= <vcos u, vsin
(u), v^2> with respect to v. To find them at the given point, substitute [itex]u= \pi/4[/itex] and [itex]v= \sqrt{2}[/itex].
So, when r(u,v)=<vcos(u), vsin(u), v
>, the derivative with respect to
is <-vsin(u), vcos(u), 0>. And the derivative with respect to
is <cos(u), sin(u), 2v>. Now, I plug in [tex]\sqrt{2}[/tex] for v and [tex]\pi/4[/tex] for u. So I get, <-1, 1, 0> and <[tex]\sqrt{2}/2[/tex], [tex]\sqrt{2}/2[/tex], 2[tex]\sqrt{2}[/tex]>. Is this
correct so far?
|
{"url":"http://www.physicsforums.com/showpost.php?p=1815325&postcount=5","timestamp":"2014-04-17T15:35:12Z","content_type":null,"content_length":"8141","record_id":"<urn:uuid:34cf79cc-ee88-4b7f-ba08-d321dcc437e3>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00370-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Joseph: My encyclopedia says that the mathematician Pierre
Question Stats:
0%0% (00:00)based on 0 sessions
1.Joseph: My encyclopedia says that the mathematician Pierre de Fermat died in 1665 without leaving behind any written proof for a theorem that he claimed nonetheless to have proved. Probably this
alleged theorem simply cannot be proved, since---as the article points out---no one else has been able to prove it. Therefore it is likely that Fermat was either lying or else mistaken when he made
his claim.
Laura: Your encyclopedia is out of date. Recently someone has in fact proved Fermat’s theorem. And since the theorem is provable, your claim---that Fermat was lying or mistaken---clearly is wrong.
Which one of the following most accurately describes a reasoning error in Laura’s argument?
(A) It purports to establish its conclusion by making a claim that, if true, would actually contradict that conclusion.
(B) It mistakenly assumes that the quality of a person’s character can legitimately be taken to guarantee the accuracy of the claims that person has made.
(C) It mistakes something that is necessary for its conclusion to follow for something that ensures that the conclusion follows.
(D) It uses the term “provableâ€
|
{"url":"http://gmatclub.com/forum/joseph-my-encyclopedia-says-that-the-mathematician-pierre-43341.html","timestamp":"2014-04-16T04:57:34Z","content_type":null,"content_length":"164538","record_id":"<urn:uuid:e36535e9-2422-45c6-ab7a-6db95c1a1d8f>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00211-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: polar contourf plot: how to avoid the edge effects?
Replies: 7 Last Post: Jul 20, 2012 3:10 PM
Messages: [ Previous | Next ]
Re: polar contourf plot: how to avoid the edge effects?
Posted: Jul 20, 2012 6:14 AM
TideMan <mulgor@gmail.com> wrote in message <4bd368b3-cf32-4fd6-a762-c4161ca68199@googlegroups.com>...
> On Friday, July 20, 2012 7:13:13 PM UTC+12, Kristoffer wrote:
> > I expect you aren't interpreting my "edge effect" correctly. Here is a pic showing the problem. There is a vertical line from the origin to "90 deg" label in the contourf plot. Ignore the angle
labels, as they are meaningless. That "90 deg" label corresponds to the 0/2pi wrap boundary. I just shifted it to do north for plotting purposes.
> >
> > http://sail.ucsd.edu/~walker/contourf/out.jpg
> >
> > I was using version 2010b. I just tried it on 2011b. Same problem. I also just installed version 2012a--same problem. Is the problem really not there in 2012b?
> >
> > Kris
> >
> > "Bruno Luong" <b.luong@fogale.findmycountry> wrote in message <juatvf$7c8$1@newscl01ah.mathworks.com>...
> > > "Kristoffer " <kwalker@ucsd.edu> wrote in message <juasrp$3kk$1@newscl01ah.mathworks.com>...
> > >
> > > >
> > > > Running test shows the edge effect that I'm referring to. It is occurring where the angle vector wraps from 2pi back to 0.
> > >
> > > Run fine to me (v. 2012B). May be you run on a buggy older MATLAB version.
> > >
> > > Bruno
> You have come upon a problem that has stumped many of us for a decade or more. It happens all the time in cotidal charts, where the tide amplitude is plotted using contourf and the phase lines are
over-plotted using contour. The problem is that there is a vertical cliff of phase lines between 2pi and zero.
> So far, in more than a decade using Matlab for this, I have not found a solution to the problem anywhere.
Maybe not the most elegant of solutions:
shading flat takes away the edges of the patches - then if you guys want it back you might do it with contour(....,'k').
Seems to work for this case for me.
Date Subject Author
7/20/12 polar contourf plot: how to avoid the edge effects? Kristoffer
7/20/12 Re: polar contourf plot: how to avoid the edge effects? Bruno Luong
7/20/12 Re: polar contourf plot: how to avoid the edge effects? Kristoffer
7/20/12 Re: polar contourf plot: how to avoid the edge effects? Bruno Luong
7/20/12 Re: polar contourf plot: how to avoid the edge effects? Kristoffer
7/20/12 Re: polar contourf plot: how to avoid the edge effects? Derek Goring
7/20/12 Re: polar contourf plot: how to avoid the edge effects? Bjorn Gustavsson
7/20/12 Re: polar contourf plot: how to avoid the edge effects? Kristoffer
|
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2391147&messageID=7851554","timestamp":"2014-04-16T04:27:04Z","content_type":null,"content_length":"26692","record_id":"<urn:uuid:7f36f099-a458-4d8f-be50-489b439800f2>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00121-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Homework Help
Post a New Question | Current Questions
Algebra 2, not "12th grade"
Assistance needed. Please type your subject in the School Subject box. Any other words are likely to delay responses from a teacher who knows that subject well.
Sunday, November 22, 2009 at 1:21pm
12th grade math an
6b=30 b=30/6 b=5
Wednesday, November 18, 2009 at 9:02pm
12th grade English
I need to write a one page response essay to the quote "you can no more win a war than you can an earthquake"..........please help! Not good at writing. Thank you
Tuesday, November 17, 2009 at 6:17am
12th grade math an
Multiply the left side of the equation. 7b - 42 = - 12 + b Subtract b and add 42 to both sides. 6b = 30 Can you do the rest?
Tuesday, November 17, 2009 at 12:48am
Math, not "12th grade"
Assistance needed. Please type your subject in the School Subject box. Any other words are likely to delay responses from a teacher who knows that subject well.
Sunday, November 15, 2009 at 8:19am
12th grade Physics
Hi, actually Bob, your answer is not 30 degrees, but 60 degrees. 30 degrees does not refer to the angle inside the triangle formed by the x and y components.
Tuesday, November 10, 2009 at 12:59am
12th grade Trig
secTheta=2 means cosTheta=1/2, which means that Theta is 60 degrees.
Monday, November 9, 2009 at 5:07pm
12th grade Trig
sec theta = 2, sin theta is less than 0. I'm not sure what to do? Please explain. Thank you.
Monday, November 9, 2009 at 4:44pm
12th grade Politics
United States is the only sizable industrialized country that does not have a socialist party as a major source of power. Why is this??? Can someone please give me some points or reasons??? Thank
Wednesday, November 4, 2009 at 10:47am
12th grade Physics
Determine the x and y components if a dog walks to the park that is 24m, 30 degrees northwest. Please help i don't know how to go about solving this problem
Wednesday, November 4, 2009 at 9:57am
12th grade Advanced Functions
A generator produces electrical power, P, in watts, according to the function: P(R)= 120/ (0.4 + R)^2 where R is the resistance, in ohms. Determine the intervals on which the power is increasing.
Sunday, October 25, 2009 at 7:50pm
12th grade Physics
Force required to accelerate the car vertically = m * (v^2) = 1200 * (0.8^2) = 1200 * 0.64 This force will be equal to the tension on the rope.
Saturday, October 24, 2009 at 1:13pm
12th grade Physics
how much tension must a rope withstand if it is used to accelerate a 1200-kg car vertically upward at .80 m/s^2?
Saturday, October 24, 2009 at 1:09pm
12th grade English
Your subject is English.
Monday, October 12, 2009 at 8:18pm
12th grade chemistry
Adding an O would make a superoxide and I don't know of any instances in organic of RCOOOH molecules although they are known in inorganic (KO2, for example).
Sunday, October 4, 2009 at 7:37pm
12th grade chemistry
thanks DrBob ..I appreciate the help. I have tried many searches for the answer on the internet and felt the answer was no as I found nothing.I understand the idea of going to CO2 and H2O. I also
don't think that just an O molecule can be added. Thanks again.
Sunday, October 4, 2009 at 5:50pm
11th grade u.s. history
can i ask one more question srry um do you know a political cartoon that goes with the 12th amendment and what does he mean how does one go about amending the constitution lol srry thats 2
Monday, September 28, 2009 at 9:52pm
11th grade u.s. history
So basically the 12th amendment is necessary cause it helps select both president and vice president So does it protect the 20th amendment or am i like way off
Monday, September 28, 2009 at 9:23pm
11th grade u.s. history
Have you seen this site? http://kids.yahoo.com/directory/Around-the-World/Countries/United-States/Government/U.S.-Constitution/Amendments/12th-Amendment
Monday, September 28, 2009 at 8:49pm
12th grade
This is an opinion question. Your teacher is looking for YOU to be able to express your opinion and then back it up with facts, details, etc. Let us know what you think.
Saturday, September 26, 2009 at 11:10am
12th grade
According to the social contract theory, the contract is
Wednesday, September 16, 2009 at 12:22pm
12th grade A.P. Economics
which of the following is the most essential for a market economy? 1.) functioning labor unions 2.) good government regulation 3.) active competition in the marketplace. 4.) responsible action by the
business leaders. I think its choice no. 3, am i right?
Sunday, September 13, 2009 at 6:10pm
12th grade government/economics
can you desccribe the definition of economics in these letters as examples J,K,O,Q,X,Y
Friday, September 11, 2009 at 7:31pm
12th grade LAW
i can think of the examples for that one point, but i'm not sure what other points i can add in order to answer my question. i'm not exactly sure how laws can actually increase someones freedom.
Thursday, September 10, 2009 at 10:36pm
12th grade A.P. Economics
Compare the mixed economies of various nations along a continuum between centrally planned and free market systems.
Saturday, September 5, 2009 at 9:40am
12th grade English
how to compare the characters of gilgamesh and enkidu. who was the more heroic? why? begin with an explanations of what you consider heroic and see if it is similar to what is considered heroic in
the story.
Saturday, September 5, 2009 at 12:45am
3rd grade
His birthday must be: 10, 12, 14, 16, 18, 20, or 22 It can't be the 10th because 1 + 0 = 1 It can't be the 12th because 1 + 2 = 3 Let's see if she can figure this problem out from here.
Wednesday, August 26, 2009 at 7:57pm
12th grade expository writing
Can somebody help me with writing?
Thursday, August 20, 2009 at 1:13pm
12th grade history
Which groups of people were not afforded all the rights stated in the bill of rights?
Thursday, August 13, 2009 at 6:12pm
12th grade AP Economics
Why might an economist look at the hundreds of cars moving along an assembly line an say, There is an example of scarcity ?
Thursday, August 6, 2009 at 2:15pm
12th grade
"I am interested" IN WHAT? What is your education? What does the job require? What is your experience? Each of these topics require a paragraph. Post your letter, and we'll be glad to comment.
Sunday, July 26, 2009 at 6:54pm
12th grade
the voting right act of 1965 specially removed from voting in the U.S.
Wednesday, July 15, 2009 at 5:10pm
12th grade
which does suzuki in "hidden lessons" explore more thoroughly, the causes of children's negative or positive attitudes toward nature or the effects of these attitudes?
Tuesday, July 14, 2009 at 12:06am
12th grade English
How does Saki in "The image of the lost soul" use descriptions of places to reach desired effect in story? Can anyone tell me how? an example form the text would be helpful
Thursday, July 2, 2009 at 11:50am
12th grade AP Economics
I'd spend more money ($50? $100?) per trip if I knew the air traffic controllers were competent and not tired.
Wednesday, July 1, 2009 at 6:26pm
12th grade AP Economics
Ms. Sue, Thank you so much for the response. But I still don't get what I should be writing. Can you please mention one example? Sorry for the trouble! Thank you
Wednesday, July 1, 2009 at 6:18pm
12th grade AP Economics
Making a list of what you would consider the most important trade-offs of spending more money on air-travel safety. Can someone please help me!!!
Wednesday, July 1, 2009 at 4:10pm
12th grade
Penn Foster Examination #93051 Cooking Appliances I've gotten 16 out of 25 completed but I'll just post the whole exam to make sure I have THOSE right. Thanks for your help guys!
Saturday, May 23, 2009 at 4:13pm
12th grade- Trigonometry
Find the exact value of each expression Cos^-1(0) Tan^-1 sqrt(3)/3 Sin^-1(1) Since the expressions are capitalized, I'm not sure if I have to do anything different. Any help is appreciated.
Thursday, May 14, 2009 at 9:04pm
12th grade chemistry
Also, you are not adding up the molarity of the compound. Molarity is numer of moles of solute per liter of solution.
Friday, May 1, 2009 at 1:22am
12th grade
Are you sure you wrote the question correctly? I do not understand how g which is a function of x is written as a function of t, and especially with the dt.
Thursday, April 23, 2009 at 10:34pm
Math, not "12th grade"
assistance needed Please type your subject in the School Subject box. Any other words are likely to delay responses from a teacher who knows that subject well.
Thursday, April 23, 2009 at 7:43am
12th grade
Please put the class in the subject area and please complete your question.
Monday, April 20, 2009 at 10:31am
12th grade, Economics
After you have done some reading and answered the question, please repost and we will be happy to make any corrections/suggestions if needed.
Thursday, April 16, 2009 at 4:11pm
12th grade math
This seems to me a poor way to teach math.
Saturday, April 11, 2009 at 7:57am
12th grade chemistry
Calculate the number of grams in the following :[149.096 g/mole] 8.55 moles of (NH4)3PO4 IT WOULD BE NICE IF I CAN GET STEP BY STEPS TO GET ANSWER !!!
Monday, March 30, 2009 at 6:32pm
12th grade
Balance the reaction first. Then, figure the moles of Al in 100 g of Al. Now, use the mole relationships to figure themoles of Cu. We will be happy to critique your thinking.
Monday, February 23, 2009 at 2:37pm
12th grade Data
In this sequence, tk is a factorial number, often written k!. Show that tk=k!=k(k-1)(k-2)...(3)(2)(1)
Monday, February 23, 2009 at 12:43pm
12th grade (chemistry)
I think one of your reactants is wrong. It would have to be potassium hydroxide H2SO4 + 2 KOH -> K2SO4 + 2 H2O
Tuesday, February 17, 2009 at 12:14am
12th grade
Write the chemical equation for Sulfiric Acid and potassisum sulfate the product is potassium sulfate and water.
Monday, February 16, 2009 at 10:55pm
12th grade
List 3 problems of Decentralized power that existed under the Articles of Confederation. For each problem, identify one solution that the Constitution provided to address the problem.
Thursday, February 5, 2009 at 9:54pm
12th grade bio
The transition of the lips where the outer skinner and inner mucous membrane meet is called the _________ I looked it up and from what I read I believe it is the gingivae. Would that be correct.
Thank you
Thursday, February 5, 2009 at 2:10pm
12th grade Chemistry
The total volume of the mixture is: 5 + 3 + 3 = 11 mls = 0.011 L I used 0.010 L by mistake. It should be replaced by 0.011 L. This will case a small change in the concentrations of Fe+3 and SCN-
Sunday, January 11, 2009 at 8:40pm
12th grade
not basically 20, it is 20. Units were not specified.
Tuesday, January 6, 2009 at 9:36pm
12th grade
I assume you are in calculus. Postition=INTEGRAL v(t) dt=INT (2t+1)dt = t^2 + t Put in t=4 and compute.
Tuesday, January 6, 2009 at 9:31pm
12th grade Subject??
What is your subject?
Tuesday, January 6, 2009 at 9:30pm
12th grade
A particle starts at x=0 and moves along the x-axis with velocity v(t)=2t+1 for time t is less than or equal to 0. Where is the particle at t=4?
Tuesday, January 6, 2009 at 9:28pm
12th grade history
Some of the clergy was corrupt and illiterate. By improving the clergy, the Church hoped to improve the whole organization.
Wednesday, December 17, 2008 at 10:02pm
12th grade history
Why did the Roman Catholic reform leaders believe that the fundamental aspect of improving the Church was to enhance the performance of the clergy?
Wednesday, December 17, 2008 at 10:00pm
12th grade (Law)
Could you tell me some information about NATO? I have to do a project for my law course and need to know how it impacts the world.
Monday, December 15, 2008 at 9:18pm
12th grade chem
In this case, they are different ways of writing the same thing. The more coventional way is H3PO3 (phosphoruus acid)
Friday, December 12, 2008 at 12:29pm
12th grade chemistry!!
Metal ions lose electrons to form cations, nonmetal ions gain electron to form anions.
Sunday, December 7, 2008 at 12:05pm
12th grade IPT
i need to gather information about E-commerce and there are questions i don't get. such as, how is the communication system is used and discuss some situations in which the system is used?
Friday, December 5, 2008 at 7:03pm
12th grade, English
We do not do your homework for you. After you have finished reading and writing, please repost and we will be happy to give you further corrections or suggestions.
Wednesday, December 3, 2008 at 3:14pm
12th grade science (food & nutrition)
You can start by Googling vegetarianism. Study these sites, and follow any links and hints that might help you more.
Sunday, November 23, 2008 at 7:52pm
12th grade science (food & nutrition)
Were doing a class debate on vegetarianism and i'm supporting vegetarianism (pro side) and i need 10 question to ask the con side (against vegetarianism). Don't really know how to start.
Sunday, November 23, 2008 at 7:48pm
12th grade government/economics
should the u.s move to an electonic currency and remove paper currency
Thursday, November 20, 2008 at 4:18pm
12th grade calculus
a marathoner ran the 26.2-mi New York City marathon in 2.2h. show that at least twice, the marthoner was running at exactly 11 mph
Thursday, November 6, 2008 at 8:06pm
12th grade calculus
I will be happy to critique your thinking. Remember, you need to work on either side of x=3 here, and on either side of x=-3.
Wednesday, November 5, 2008 at 12:08am
12th grade calculus
Let f(x)=[x^3-9x] [] --- absolute value a. does f'(x) exist? b. does f'(3) exist? c. does f'(-3) exist? d. determine all extrema of f.
Tuesday, November 4, 2008 at 11:22pm
12th grade Physics
a car travels in a straight line for 3 h at a constant speed of 53 km/h. what is the acceraleration? answer in units of m/s2.
Saturday, November 1, 2008 at 9:58pm
12th grade calculus
find the lines that are tangent and normal to th ecurve at the point given 1. 2xy+piesiny=2pie (1, pie/2) 2. x^2cos^2y-siny=0 (0, pie)
Sunday, October 19, 2008 at 9:06pm
12th grade calculus
You did not state the given point. Using implicit derivative I found it to be y' = (2x+y)/(2y-x) sub in the given point, that gives you the slope of the tangent. Now that you have the slope (m) and a
given point, use the grade 9 method of finding the equation of the ...
Sunday, October 19, 2008 at 7:58pm
12th grade government/economics
Amendments must be ratified by 3/4 of the states. Article IV has given Congress the option of requiring ratification by state legislatures or by special conventions assembled in the states.
Sunday, October 19, 2008 at 7:19pm
12th grade Physics
"If a skier coasts down a slope at an angle of 24 degrees below the horizontal, what is her acceleration if the force of friction is negligible?" I must have done this question 15 times, and still
can't get the right anwser. Can anyone help me?
Monday, October 13, 2008 at 4:30pm
12th grade Math?
How would you factor 3x^3-4x^2+4x-1? P.S. Factor theorem does not work here.
Sunday, October 12, 2008 at 11:48pm
12th grade Physics
Please do not just "drop off" problems. Show your work. Do you know what "the normal force" means? It is the component of the applied force that is perpendicular to the ground. Start by calculating
Sunday, October 12, 2008 at 3:06pm
12th grade Physics
it changed velocity from 88West to zero, in a given time. acceleration= (finalveloicty-initialveloicty)/time I assume you realize what negative West means.
Tuesday, October 7, 2008 at 6:52pm
12th grade Physics
In an experiment, a car taveling at 88m/s,west slams into a wall and comes to a halt in 0.75 seconds. What is the car's acceleration vector?
Tuesday, October 7, 2008 at 6:45pm
12th grade Physics
who was the father of physics
Saturday, October 4, 2008 at 12:37am
12th grade
Think of a business in your local area. Describe its operation in terms of factor markets and product markets.
Sunday, September 21, 2008 at 2:34pm
12th grade
If f(x)= 1/1-x Find the composition of f(f(x)). So I get 1/1-(1/1-x) I'm not sure if I am putting it in my calculator correctly because I keep getting a line. Is that right? If I put it in with the
parentheses differently, I get the graph of f(x)=1/x. Does anyone know ...
Thursday, September 18, 2008 at 5:18pm
12th grade Science
A bud covers the bloom but when it erupts it holds up the flower from underneath find the name it s divided into together and alone? -thanks-
Wednesday, September 10, 2008 at 4:03pm
12th grade
Given that x is an inetger, state the relation representing each equation by making a table of values. y=3x+5 and -4<x<4
Wednesday, September 3, 2008 at 9:25pm
12th grade
Why is it difficult to recognize the worth and dignity of all individuals at all times?
Wednesday, September 3, 2008 at 5:33pm
12th grade math help!!!!
The height of the right triangle is 9 units, and the area is 54 sq. units. how long is the base of the triangle?
Tuesday, August 26, 2008 at 2:46pm
12th grade math help!!!!
if (x,y) are the coordinates of a point p in the xy plane then x is called _______ of p and the _______ of p
Tuesday, August 26, 2008 at 2:00am
12th grade government/economics
If no candiate for the presidency wins a simple majority of the total number of electoral votes, what body has the power to choose the president?
Sunday, August 24, 2008 at 10:23pm
10th grade math
1.548937075 x 10 to the 12th. round the number to 3 significat figures. I thought I would respost this because the other one was getting confusing
Tuesday, August 19, 2008 at 8:56pm
12th grade math help!!!!
I fail to see how that relates to math
Friday, March 28, 2008 at 1:08pm
Math - PreCalc (12th Grade)
Which rectangular equation corresponds to these parametric equations? x = 0.2sec t and y = -0.25tan t A) 25x^2 − 16y^2 = 1 B) 25x^2 + 16y^2 = 1 C) x^2 − 16y^2 = 25 D) 25y^2 − 16x^2 = 1 E) 4x^2 − 16x^
2 = 1
Monday, March 24, 2014 at 10:24am
Math - PreCalc (12th Grade)
If the distance covered by an object in time t is given by s(t) = 2t2 + 3t, where s(t) is in meters and t is in seconds, what is the average velocity over the interval from 2 seconds to 4 seconds? A)
15 meters/second B) 14 meters/second C) 13 meters/second D) 12 meters/second ...
Friday, March 21, 2014 at 12:01pm
Math - PreCalc (12th Grade)
spread = 5-2 = 3 if cut into n pieces, each base = 3/n endpoint of 1st rectangle = 2 + 3/n endpoint of 2nd rectangle = 2 + 2(3/n) ... endpoint of kth rectangle = 2 + k(3/n) = 2 + 3k/n f(2+3k/n) = 2(2
+ 3k/n) + 1 = 5 +6k/2 I see you have two other questions above that follow ...
Friday, March 21, 2014 at 11:40am
Math - PreCalc (12th Grade)
Which statement about a uniform probability distribution defined on a given interval is true? A) The mean is always 0. B) The mean is always 1. C) The standard deviation is always 1. D) The mean is
the midpoint of the interval. E) The standard deviation is equal to the width ...
Friday, March 21, 2014 at 11:05am
Math - PreCalc (12th Grade)
The function f(x) = 2x + 1 is defined over the interval [2, 5]. If the interval is divided into n equal parts, what is the value of the function at the right endpoint of the kth rectangle? A) 2+3k/n
B) 4+3k/n C) 4+6k/n D) 5+6k/n E) 5+3k/n
Friday, March 21, 2014 at 9:57am
Math - PreCalc (12th Grade)
ok so far, though they already told you that f(2)=3 because the point (2,3) is on the graph. As you know, the slope at any point (x,y) on the graph is 2x+2 So, the slope at x=2 is 6 Though, if this
is pre-calc, how do you know the slope of the tangent to a curve? That's ...
Thursday, March 20, 2014 at 1:21pm
Math - PreCalc (12th Grade)
as with any arithmetic progression, Sk = k/2(a1 + ak) = k/2(1+5k-4) = k/2(5k-3) Sk+ak+1 = k(5k-3)/2 + 5k-4+1 (D) Don't forget your algebra I just because you're in pre-calc now.
Thursday, March 20, 2014 at 1:19pm
Math - PreCalc (12th Grade)
If Sn = 1 + 6 + 11 + 16 + 21 + 26...... where an = (5n − 4), what would be the mathematical expression represented by Sk + ak + 1? A) (k(5k − 3)/2)+ 5k + 1 B) (k(5k − 3)/2)+ 5k − 4 C) (k(5k − 3)/2)+
5k + 2 D) (k(5k − 3)/2)+ 5k − 3 E) (...
Thursday, March 20, 2014 at 1:05pm
Math - PreCalc (12th Grade)
What value must be defined for f(4) to remove the discontinuity of this function at x=4? f(x)=(x^2−16)/(x−4) A) 0 B) 4 C) -4 D) 8 E) -8 f(4)=(4^2−16)/(4−4) f(4)=0/0 you can't divide by zero. I don't
understand the question.
Thursday, March 20, 2014 at 12:38pm
Math - PreCalc (12th Grade)
If Sn represents the sum of the squares of the first n natural numbers, use proof by induction to find which of the following expressions for Sn is true? A) Sn = n(n − 1)/3 B) Sn = n(2n − 1)/3 C) Sn
= n(n + 1)/3 D) Sn = n(n + 1)(2n + 1)/3
Thursday, March 20, 2014 at 12:23pm
Pages: <<Prev | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | Next>>
|
{"url":"http://www.jiskha.com/12th_grade/?page=8","timestamp":"2014-04-17T20:26:09Z","content_type":null,"content_length":"34474","record_id":"<urn:uuid:d2d90071-be1d-4c19-89e8-ffc766dec0ec>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00507-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Painleve systems arising from integrable hierarchies
Seminar Room 1, Newton Institute
I will give an overview of a class of Lax representations for Painlev¥'e equations and their generalization in terms of Lie algebras. In that context discrete symmetries of Painlev¥'e systems are
described by means of birational Weyl group actions. I will also discuss how they are related to integrable hierarchies associated with affine Lie algebras.
|
{"url":"http://www.newton.ac.uk/programmes/PEM/seminars/2006091811301.html","timestamp":"2014-04-16T16:05:50Z","content_type":null,"content_length":"4470","record_id":"<urn:uuid:97b0d923-ab8e-45e8-b046-a5a2583fb2fa>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00087-ip-10-147-4-33.ec2.internal.warc.gz"}
|
My Talks
I’m giving an introductory seminar talk this afternoon to let new graduate students know about the noncommutative geometry research group at Penn State and what it is we do. My plan is to begin with
a short “elevator speech” about NCG (a few minutes) and then follow it up with four ten-minute vignettes of “Things we talk about a lot”
• Hilbert space
• K-theory
• Curvature
• Expanders
and at least to indicate the existence of all of the \( (4 \times 3)/2 = 6 \) connections among these concepts as well. Here is a link to a scanned version of my notes for the talk.
My talk at BIRS
I gave a talk yesterday (August 8th, 2013) on Ghostbusting and property A. Thanks to the technology system at BIRS you can watch the talk on video here.
Or, you can download the slides for the talk here.
The paper has now been accepted for the Journal of Functional Analysis.
Lectures on Coarse Index Theory at Fudan
Here are the slides from a lecture series on “Coarse Index Theory” which I gave at Fudan University, in Shanghai, in May 2006
|
{"url":"http://sites.psu.edu/johnroe/category/research/my-talks/","timestamp":"2014-04-21T10:02:04Z","content_type":null,"content_length":"34798","record_id":"<urn:uuid:44629343-5432-4d34-9a46-844c8ca0e48f>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00564-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Northlake, TX Geometry Tutor
Find a Northlake, TX Geometry Tutor
...This knowledge gap can cause problems when trying to learn algebra and geometry. I tutor pure Prealgebra sessions but also work Prealgebra topics into tutoring sessions for ACT prep, SAT prep,
Algebra 1, Algebra 2 and Geometry when needed. I’ve tutored over 100 hours of SAT and ACT prep, including over 30 hours of SAT and ACT Math.
15 Subjects: including geometry, reading, writing, algebra 1
I am a recent graduate of Trinity University in San Antonio, Texas. Before transferring to Trinity, I attended the United States Naval Academy in Annapolis, Maryland for three years. I was an
applied math major at the academy, and finished my math degree at Trinity University.
14 Subjects: including geometry, chemistry, ASVAB, SAT math
...I have also taught Year 11 and Year 12 Physics, which includes topics in electricity and electronics. I have taught Year 11 for 4 years and Year 12 Physics for 5 years. I have completed a
degree in Mechanical Engineering at The University of Melbourne in 2001.
56 Subjects: including geometry, chemistry, calculus, physics
...My area of expertise is Biology and Chemistry, but I can easily teach any of the other math or science subjects. I also taught AP English Literature for three years as an undergraduate at MIT,
so I am very comfortable with teaching English as well. So whether you want to emphasize a specific su...
30 Subjects: including geometry, chemistry, reading, English
...Further, I started tutoring high school students in 2008 in mathematics, sciences (chemistry and physics) and SATs.I have an M.S. from University of Texas at Arlington majoring in Organic and
Inorganic Chemistry. I am currently tutoring organic chemistry II to a Pre-Med student at Univeristy of ...
22 Subjects: including geometry, chemistry, calculus, physics
Related Northlake, TX Tutors
Northlake, TX Accounting Tutors
Northlake, TX ACT Tutors
Northlake, TX Algebra Tutors
Northlake, TX Algebra 2 Tutors
Northlake, TX Calculus Tutors
Northlake, TX Geometry Tutors
Northlake, TX Math Tutors
Northlake, TX Prealgebra Tutors
Northlake, TX Precalculus Tutors
Northlake, TX SAT Tutors
Northlake, TX SAT Math Tutors
Northlake, TX Science Tutors
Northlake, TX Statistics Tutors
Northlake, TX Trigonometry Tutors
Nearby Cities With geometry Tutor
Argyle, TX geometry Tutors
Bartonville, TX geometry Tutors
Colleyville geometry Tutors
Copper Canyon, TX geometry Tutors
Corinth, TX geometry Tutors
Corral City, TX geometry Tutors
Denton, TX geometry Tutors
Highland Village, TX geometry Tutors
Justin geometry Tutors
Oak Point, TX geometry Tutors
Roanoke, TX geometry Tutors
Saginaw, TX geometry Tutors
Shady Shores, TX geometry Tutors
Southlake geometry Tutors
University Park, TX geometry Tutors
|
{"url":"http://www.purplemath.com/northlake_tx_geometry_tutors.php","timestamp":"2014-04-18T16:13:05Z","content_type":null,"content_length":"24146","record_id":"<urn:uuid:239d134d-be0c-4328-9055-809e83ef8241>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00212-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
A.) 2 logbq + 8 logbt B.) 4 log x – 6 log (x + 2)
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/50e46568e4b0e36e35146bdc","timestamp":"2014-04-21T10:16:11Z","content_type":null,"content_length":"46472","record_id":"<urn:uuid:d73b18db-d2df-4f86-887c-521c0ca8db01>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00304-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Performance-Driven Symbol Mapping for Downlink and Point-to-Point MIMO Systems
An adaptive symbol mapping scheme is proposed for single-user point-to-point and multiuser downlink multiple-input multiple output (MIMO) systems aiming at the minimization of the overall system bit
error rate. The proposed scheme introduces a disorder to the symbols to be transmitted within a MIMO subframe by means of dynamic mapping, with the objective to optimise the interference between them
and enhance the received symbols' power. This is done by either changing the allocation order of the symbols to the antennas or by applying a scrambling process that alters the symbols sign. This
procedure is targeted to optimizing, rather than strictly minimizing the interference between the symbols such that constructive instantaneous interference is utilized in enhancing the decision
variables at the receiver on a symbol-by-symbol basis so that detection is made more reliable. In this way, the overall system performance is improved without the need to raise the transmitted power.
The proposed scheme can be used in conjunction with various conventional MIMO precoding and detection techniques. The presented results show that for a given transmit power budget this scheme
provides significant benefits to the corresponding conventional system's error rate performance.
1. Introduction
The recent advances in multiple-input multiple-output (MIMO) processing [1] are making the application of multiantenna transmitters and receivers increasingly popular in modern wireless
communications due to the enhanced capacity and space diversity they offer. MIMO schemes have recently been incorporated in communication standards such as WiMAX and 3GPP-LTE to satisfy the growing
demand for higher data rates and quality of service for multimedia applications. Despite the increased information capacity offered by the MIMO channel, the spatial correlation of the multiple
subchannels introduces an additional source of interference which corrupts the data symbols and in effect degrades the achievable error rate performance of such systems. In the MIMO uplink, space
diversity detection techniques [2–5] can counteract this impediment to a satisfactory extent. In [2, 3], the sphere decoder is presented for an arbitrary lattice code and a lattice code resulting
from applying algebraic space-time coding on a MIMO system, respectively. Regardless of the technique's near-optimal performance, the decoding complexity is quite significant, which makes it
impractical for use in mobile units at downlink and point-to-point reception. Suboptimal solutions with reduced complexity are introduced in [4, 5] where diagonal- and vertical-layered architectures
of the (Bell Laboratories Layered Space Time) BLAST receiver are presented, respectively. While complexity is drastically reduced the performance of these techniques is comparable to the sphere
decoder in most practical scenarios. An alternative to MIMO detection is to shift the signal enhancement processing to the transmitter by use of precoding. This is particularly popular in MIMO
downlink communications and point-to-point systems, which is the focus of this work. Channel inversion (CI) [6] entails the least complexity of the precoding techniques available. However, the
disadvantages of the CI technique include a poor symbol error rate (SER) performance and the fact that the transmission rate and throughput delivered are limited and do not improve by increasing the
number of antennas, as demonstrated in [7]. The solution proposed in [7], which is a minimum mean square error (MMSE) form of channel inversion, provides some performance and capacity gains with
respect to the conventional CI, without a considerable complexity increase. Nevertheless, the transmission rates offered by both these schemes are far from reaching the theoretical channel capacity.
Dirty paper coding (DPC) techniques as, for example, in [8–11] based on the initial information theoretical analysis in [12], can further increase transmission rates and achieve significant capacity
benefits. However, the majority of the DPC methods developed so far are impractical in many scenarios as they require sophisticated signal processing at the transmitter with complexity similar to the
one of sphere decoding. A promising alternative is the joint transmit-receive beamforming scheme as presented in [13] amongst others in the literature. Despite being less complex than DPC, the most
robust beamforming schemes require iterative communication between the transmitter and receiver for the optimization of the joint processing and the system configuration. This needs to be done every
time the channel characteristics change and hence, in fast fading environments introduces considerable latency to the MIMO downlink system. Owing to their favourable performance-to-complexity
tradeoff amongst the techniques mentioned above, this paper focuses on the application of the proposed scheme to the more practical V-BLAST detection and MMSE precoding.
Complementary to the aforementioned signal enhancement processing MIMO schemes, a number of resource allocation schemes [14–19] have emerged for MIMO communications mainly involving antenna selection
[14–16] and power allocation [17, 18] for multielement transceivers as well as frequency (subcarrier) allocation [19] for MIMO-orthogonal frequency division multiplexing (OFDM) communications. All
the relevant resource allocation methods focus on the reduction of interference between the spatial streams of the MIMO channel. This clearly differentiates them to the proposed scheme where the aim
is not strictly to minimise the correlation of the spatial streams but rather to optimise it and accommodate for constructive interchannel interference (ICI). Moreover, resource allocation schemes
such as antenna selection can be used in addition to the proposed technique to further improve the performance. The focus of this paper, however, is on signal enhancement schemes and for reasons of
coherence, antenna selection and power allocation are not considered here.
In more detail, the proposed scheme which parallels the ones in [20, 21] proposed for code division multiple access (CDMA) is based on the fact that ICI is separated into constructive and destructive
as discussed in detail in [22]. The characterisation of the instantaneous ICI depends on the channel characteristics and the correlation between the spatial streams, and, equally importantly, on the
instantaneous values of the transmitted symbols. By perturbing the data symbols to be transmitted by means of reordering or scrambling, the proposed scheme influences the ICI between the MIMO
subchannels. It then chooses a symbol mapping such that the interference is optimised and the decision variables at the receiver are maximised. Subsequently, conventional precoding or detection can
be applied with enhanced performance due to the optimisation of interference achieved by the proposed symbol mapping.
It is clear that the proposed symbol mapping scheme can be combined with various conventional MIMO detection (linear detection, V-BLAST, sphere decoding, etc.) and precoding schemes (linear
precoding, dirty paper coding etc.) to improve the respective performance. For reasons of simplicity and to maintain the focus of the present paper, as mentioned above, only two of the most practical
and popular MIMO techniques are considered here, MMSE precoding and V-BLAST detection.
It should be noted that the proposed data allocation method entails the transmission of control signalling (CS) to inform the receiver about the mapping process used so as to attain the correct
initial order or appropriately descramble the received data after detection. It will be shown that the CS increases logarithmically with the number of candidate mapping patterns and for this reason
the number of possible reordered or scrambled versions of the data to select from should be limited. In the simulations presented here this number is limited to values such that the overhead imposed
by the CS transmission is restricted to less than 6% of the transmitted information.
2. System Model and Conventional MIMO Processing
This paper considers transmission in a MIMO system with a limited number of
2.1. Linear Minimum Mean Square Error (MMSE) Precoding
The MMSE precoding shown in [7] applies a regularized inversion of the channel matrix at the transmitted symbols such that the signal to interference plus noise ratio (SINR) at the receiver is
maximized. The transmitted symbol vector is given as
which derives average normalization of the transmitted power. It can be seen that in this case the channel is not entirely orthogonalized and a certain amount of interference remains. The received
symbol vector is given as
is the equivalent crosscorrelation matrix of the symbols as seen at the receiver. The estimated symbols are retrieved by directly quantizing the received signal
where 9] that the value of
2.2. Vertical Bell Laboratories Layered Space Time (V-BLAST) Detector and Proposed Modification
The V-BLAST detector proposed in [5] involves iterative detection and cancellation of the interfering symbols at each antenna in order to attain an interference-free detection of the desired signal.
No precoding is applied at the transmitter 5] and for reasons of completeness we present the compact recursive procedure of the technique:
Here 23] is more appropriate. Therefore in the simulations shown below the received signal is multiplied with the entire equalization matrix at each recursion and the symbol with the highest norm
(most reliable for detection) is selected for cancellation at each iteration of the algorithm. Hence, while the conventional V-BLAST is simulated in the graphs below using the procedure in (7a)–(7f)
for the combined V-BLAST and symbol mapping the BLAST algorithm is modified to:
As regards the equalized symbols to be detected in (8b) assuming perfect cancellation the expression can be transformed using (8f) to
3. Proposed Optimized Symbol Allocation (SA)
In both (4) and (9) it can be seen that when the transmitted data symbols are reordered they are paired with different crosscorrelation elements in the crosscorrelation matrix and the interference
between them changes so that the values of the resulting decision variables are different. Hence, instead of transmitting the symbols 1. The proposed algorithm involves the following steps.
(1)From an initial reference symbol-to-antenna allocation pattern a limited number of
Figure 1. Block diagram of the proposed symbol allocation (SA) scheme.
(2)For each candidate the expected decision variables are preestimated according to the signal enhancement mechanism employed (precoding or detection). For the MMSE and V-BLAST techniques considered
here the preestimated symbols are given by (6) and (8d), respectively using the channel estimates. The vectors containing the decision variables for each candidate allocation are stacked to form the
(3)A symbol allocation
(4)The transmitter subsequently allocates the symbols to the antennas based on the selected allocation and, if applicable, precodes the data using some form of conventional precoding.
(5)Additional to the data symbols the transmitter sends the CS bits that inform the receiver which of the candidate allocation patterns was used.
(6)The receiver applies the conventional signal enhancement processing which can be V-BLAST (assumed in this paper) or any other conventional detection scheme to acquire the enhanced decision
(7)The CS is detected to determine the allocation
(8)Using the knowledge of all possible allocation patterns, the receiver then removes the perturbation introduced at the transmitter by inverting the process of
For reasons of clarity the separation between the notations
It is evident that for each allocation pattern used, a number of
As regards the mapping mechanism used to create the
3.1. Mapping Method 1: Symbol Reordering
The symbols within the MIMO subframe are randomly shuffled to produce a reordered version of the data subframe as shown in Figure 2(a). This can be expressed by the mapping operation
for the
different reordered versions of the subframe. Nevertheless as mentioned in the previous section the number of candidate allocation patterns needs to be limited
Figure 2. Mapping methods: (a) symbol reordering, (b) symbol scrambling.
3.2. Mapping Method 2: Symbol Scrambling
The symbols within the MIMO subframe are randomly scrambled as shown in Figure 2(b), so that the sign (but not the absolute value) of the real and imaginary parts of the symbols existing in the
subframe change. This can be expressed by the element wise multiplication of the data symbols with a scrambling sequence
different possible scrambled versions. It will be shown however that the performance of the proposed scheme depends on the number of actual candidate perturbed versions of the frame rather than the
theoretical achievable diversity. Therefore for a practical number of candidate allocations
4. Selection of the Symbol Allocation
4.1. Selection Criterion
From (4) and (9), a number of criteria can be formulated for the selection of the symbol allocation to be used for transmission based on the resulting interference and decision variables for each
candidate allocation pattern. Since the average error rate performance of a point-to-point MIMO system is governed by the performance of the instantaneously "worst" symbols we propose to select the
allocation pattern that derives the decision variable distribution with the most reliable worst symbol. The obvious choice would be to select the allocation according to a Euclidean distance
that is, select the allocation that minimizes the maximum Euclidean distance to the data symbols (worst symbol) in the preestimated symbol distribution. However, this does not allow for constructive
interference which pushes the received symbols further away from the nominal constellation points, towards the direction opposite from the decision thresholds. This is shown graphically in Figure 3
for the example of constellation point
When the projection of the preestimated decision variable on the actual symbol to be transmitted is negative it signifies that due to ICI the decision variable is corrupted and would indicate a
different constellation point than the one transmitted which would lead to erroneous detection. When the projection is positive the ICI does not push the decision variable to a different
constellation point and in the absence of noise detection is expected to be successful. The higher the value of the projection the more reliable the decision variables are expected to be. Hence the
minimum of the projection for each candidate
Figure 3. Euclidean distance versus projection criterion, QPSK
To verify the superiority of the proposed criterion over the conventional approach, the two criteria (14) and (15) are compared in the results that follow. It should be noted that since the search
for the best candidate is not exhaustive amongst all possible perturbed symbol allocations but rather between
4.2. Selection Implementation
As regards the selection of
5. Control Signalling (CS) and Resulting Transmission-Reception Efficiency
5.1. CS Transmission
It is evident from the aforementioned analysis that the performance of the proposed scheme relies on the transmission of CS to update the receiver on the allocation pattern used at each symbol period
in order to correctly remove the perturbation introduced at the transmitter and obtain the initial data. It is possible to attach the CS at the end of the corresponding subframe but for reasons of
efficiency it is preferable to adopt a frame-based approach as the one shown in Figure 4. The MIMO frame consists of
5.2. CS Transmission-Reception Efficiency
As mentioned above a number of
Likewise, at the receiver a number of 7 (and the relevant discussion in the following) which plots the transmission efficiency with increasing
6. Complexity Analysis
In order to investigate the complexity repercussions of the above methodology, the relevant comparison of the conventional and proposed techniques is illustrated in Table 1. The table illustrates the
complexity of conventional MMSE precoding with MMSE using symbol allocation (MMSE-SA). The complexity count is shown in terms of principal factors
For the case of fast fading where channel estimation and precoding matrix calculation (steps 1, 2, 3, 5 in Table 1) need to be done more frequently, the weight of the factor
Table 1. Complexity in numbers of operations for MMSE and MMSE-SA.
7. Performance Analysis for Nonideal CS Transmission
Another important aspect of the proposed SA scheme is the dependency of its performance on the correct reception of the CS. This issue is treated in this section where a performance analysis is
presented for the case of imperfect CS detection. Assume that
Also, if
while the resulting probability of data error per bit for imperfect CS detection is
For 24] expressed as
In (24)
8. Numerical Results
This section presents the results of Monte Carlo simulations carried out for conventional MIMO precoding and detection schemes with and without the proposed SA for various numbers of antennas on
frequency flat fading MIMO channels in order to illustrate the relevant performance comparison. While it is intuitive that the benefits of the proposed scheme extend to a variety of MIMO techniques,
the simulations below focus on MMSE precoding and V-BLAST detection, as these schemes offer a practical performance-to-complexity tradeoff. For the simulations shown QPSK modulation has been employed
and unless stated otherwise perfect channel estimates are assumed. For the transmission of CS an increased transmission power by a factor of two compared to the data transmission is assumed, which is
a common method in practical systems to achieve reliable CS and eliminate the effect on data detection. To avoid confusion it should be clarified that to ease comparison to the results of [5, 9] the
total transmitted SNR is used in the graphs for MMSE precoding while the values of transmitted SNR per receive
8.1. Reference Achievable Performance Gain
As an initial point and to quantify the absolute performance benefit achievable by SA on MMSE precoding Figure 5 depicts the performance of MMSE-SA on a MIMO symbol rather than on a MIMO subframe
basis (
Figure 5. SER versus SNR for MMSE, MMSE-SA for increasing
8.2. Selection of Optimum
A profound insight of the performance to transmission efficiency tradeoff can be attained by Figures 5 and 6 where the symbol error rate (SER) performance and transmission efficiency are shown for
increasing values of 6 the performance of MMSE precoding is shown for a total transmitted SNR of 20 and 25dB and the performance of V-BLAST is included for transmitted SNR per antenna of 20dB. It
can be seen that for low values of 7 and especially the curve for QPSK modulation of the CS bits it can be seen that the reduction in efficiency is considerable between 7 that the transmission
efficiency can be increased by using 16QAM modulation which for this value of
Figure 7. Transmission-reception efficiency for SA for increasing
8.3. Further Performance Investigation
The SER versus transmitted SNR performance for MMSE is shown in Figure 8 for the same system of
Figure 8. SER versus SNR for MMSE, MMSE-SA with reordering or scrambling, projection-based optimisation and MMSE -SA with Euclidean distance (ED) optimisation
The performance of V-BLAST is investigated in Figure 9 where the bit error rate (BER) versus SNR per rx antenna is shown. The same MIMO system of
Figure 9. BER versus SNR for V-BLAST, V-BLAST-SA with reordering or scrambling, projection-based optimisation and V-BLAST-SA with Euclidean distance (ED) optimisation
Figure 10 shows the BER performance for increasing number of antennas for the symmetric (
In all simulations above the CSI is assumed perfectly known at the transmitter. However the processing of the proposed scheme as shown in Section 3 suggests that SA could be sensitive to CSI errors.
To validate the usefulness of the proposed scheme in scenarios with erroneous channel estimates, Figure 11 depicts the BER performance of V-BLAST and V-BLAST-SA for increasing CSI errors. In order to
maintain a generic performance comparison irrespective to any channel estimation technique or type of CSI errors, these errors are simulated by adding a complex random deviation to the channel
coefficients available at the transmitter to derive an error in the estimated coefficients of
Figure 11. BER versus SNR for V-BLAST, V-BLAST-SA for CSI errors
9. Conclusions and Future Work
The use of static data-to-antenna allocation leads to waste of useful energy inherent in the communication channel and makes conventional MIMO schemes suboptimal. By applying adaptive mapping on the
data to be transmitted and introducing diversity in the interference between the transmitted symbols of the MIMO channel this work has shown that significant performance benefits are gleaned for MIMO
systems. The tradeoff to this improvement is the need for control signaling for the correct data detection. Further work can be carried out towards reducing the CS overhead and applying the proposed
scheme to further and more advanced MIMO techniques including resource allocation.
This work has been jointly funded by EPSRC and Philips Research Labs, UK. The authors would like to thank Dr. Tim Moulsley for the helpful discussions throughout this research contribution.
Sign up to receive new article alerts from EURASIP Journal on Wireless Communications and Networking
|
{"url":"http://jwcn.eurasipjournals.com/content/2011/1/376394","timestamp":"2014-04-16T21:55:22Z","content_type":null,"content_length":"146503","record_id":"<urn:uuid:0eec533a-0146-4512-9f89-b7e8d4d299ba>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00403-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions - Re: interopolation with method log, ln, or exp(-x)
Date: Mar 20, 2013 2:06 PM
Author: Curious
Subject: Re: interopolation with method log, ln, or exp(-x)
"Jonathan W Smith" wrote in message <kicmao$ek3$1@newscl01ah.mathworks.com>...
> Hello
> I have 3D gas constituent data from a model. The gas constituent data is on pressure levels. The pressure is on the 3D grid as well. Pressure falls exponentially with height. However I want to interpolate based on a different set of 3D pressure levels from satellite data.
> Is it best to use interp2 or interp3? Just as there is method 'linear' or 'cubic', is there one for ln (natural log) or exponential decay? If not, what function or set of functions can I use to substitute for this?
If I understand your question, I don't believe there is a "best" answer.
It really depends on your data and what you want to do with it.
If x is a vector, y = exp(x), and you want to interpolate y (by any method) then you could do either:
z = interp(y)
z = exp(interp(x))
Its really YOUR choice.
> Thanks
> Jonathan
|
{"url":"http://mathforum.org/kb/plaintext.jspa?messageID=8697746","timestamp":"2014-04-16T22:22:37Z","content_type":null,"content_length":"2055","record_id":"<urn:uuid:a425fc93-baad-48e0-92a6-f9d82cb55dc2>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00267-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Christine’s butter cookies sells large tins of butter cookies and small tins of butter cookies. The factory can prepare at most 200 tins of cookies a day. Each large tin of cookies requires 2 pounds
of butter, and each small tin requires 1 pound of butter, with a maximum of 300 pounds of butter available each day. The profit from each day’s cookie production can be estimated by the function f
(x,y) = $6.00x+$4.80y, where x represents the number of large tins sold and y the number of small tins sold. Find the maximum profit that can be expected in a day.
Best Response
You've already chosen the best response.
here are the choices $1,080 $600 $480 $920
Best Response
You've already chosen the best response.
any ideas?
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/4f8860f0e4b0505bf0877674","timestamp":"2014-04-16T19:01:11Z","content_type":null,"content_length":"30368","record_id":"<urn:uuid:2619aad0-cb57-4483-b170-01ac933a662e>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00475-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Portability portable
Stability experimental
Maintainer bos@serpentine.com
Commonly used sample statistics, also known as descriptive statistics.
Statistics of location
mean :: Sample -> DoubleSource
Arithmetic mean. This uses Welford's algorithm to provide numerical stability, using a single pass over the sample data.
Statistics of dispersion
The variance—and hence the standard deviation—of a sample of fewer than two elements are both defined to be zero.
Two-pass functions (numerically robust)
These functions use the compensated summation algorithm of Chan et al. for numerical robustness, but require two passes over the sample data as a result.
Because of the need for two passes, these functions are not subject to stream fusion.
stdDev :: Sample -> DoubleSource
Standard deviation. This is simply the square root of the maximum likelihood estimate of the variance.
Single-pass functions (faster, less safe)
The functions prefixed with the name fast below perform a single pass over the sample data using Knuth's algorithm. They usually work well, but see below for caveats. These functions are subject to
array fusion.
Note: in cases where most sample data is close to the sample's mean, Knuth's algorithm gives inaccurate results due to catastrophic cancellation.
• Chan, T. F.; Golub, G.H.; LeVeque, R.J. (1979) Updating formulae and a pairwise algorithm for computing sample variances. Technical Report STAN-CS-79-773, Department of Computer Science, Stanford
University. ftp://reports.stanford.edu/pub/cstr/reports/cs/tr/79/773/CS-TR-79-773.pdf
• Knuth, D.E. (1998) The art of computer programming, volume 2: seminumerical algorithms, 3rd ed., p. 232.
• Welford, B.P. (1962) Note on a method for calculating corrected sums of squares and products. Technometrics 4(3):419–420. http://www.jstor.org/stable/1266577
• West, D.H.D. (1979) Updating mean and variance estimates: an improved method. Communications of the ACM 22(9):532–535. http://doi.acm.org/10.1145/359146.359153
|
{"url":"http://hackage.haskell.org/package/statistics-0.2/docs/Statistics-Sample.html","timestamp":"2014-04-17T04:08:27Z","content_type":null,"content_length":"10749","record_id":"<urn:uuid:af5a18cc-0ca2-4706-bfe4-d55b5e1d54ba>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00035-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Fractint] FOTD 01-05-12 (Testing 1-2-3 [7])
FOTD -- May 01, 2012 (Rating 7)
Fractal visionaries and enthusiasts:
It's a new month and time for a new theme. To escape the rut of
mathematical interest only, I have decided to make May the month
of images with artistic value. And there is no formula I know
of that creates images more artistic than those created by the
workhorse MandAutoCritInZ.
To lead off the month, we check the parent fractal that results
when Z^(2.01) is subtracted from Z^2 and straight C is added on
each iteration. Intuition says that this fractal will be little
more than an oversized Mandelbrot set. Well, the fractal is
oversized sure enough. A bailout radius of 1100 was neded to
fit it all in, but the fractal is far from an everyday
Mandelbrot set.
This parent fractal actually resembles a Z^(2.5)+C Mandeloid far
more than it resembles the classic M-set. In addition, it is
rotated 180 degrees so that the side normally on the east lies
on the west. Its East Valley has split into two well-defined
valleys, with the area between filled with many discontinuities
that call for exploration. Its main period-2 bud has sprouted
two large sub-buds, each growing its own infinitely divided main
spike. Today's image is located in the area of discontinuities
between the two large sub-buds.
The name "Testing 1-2-3" is what a sound man traditionally says
to test the levels of his sound system. It came about when I
saw myself setting up for an actual entire month of worthwhile
fractals. The rating of a 7 that I gave today's image is not
particularly outstanding, but I'm still not totally back into
artistic mode. The ratings will (hopefully) increase as the
month progresses.
The calculation time of 1-2/3 minutes is a fair price for a
fractal with a rating of a 7. And the web sites are always
there for those who would rather miss the fun of calculation.
The finished image is online at:
A high-definition rendering is at:
All the past images are at:
Today brought a mix of clouds and sun to Fractal Central. The
pleasant temperature of 77F 25C made the periods of clouds seem
much more pleasant than they might have been. The fractal cat
duo was very active, chasing each other up and down the hallway
until Nicholas finally ran out of energy.
The humans, who never chase each other up and down the hallway,
had a more restful day doing routine things. The next FOTD will
be posted soon. (I'll no longer mention 24 hours, since the
time between postings is irregular, and has rarely if ever been
exactly 24 hours.) Until next time, take care, and it looks
like it might be fractals all the way down.
Jim Muth
START PARAMETER FILE=======================================
Testing_1-2-3 { ; time=0:01:40.00 SF5 at 2000MHZ
reset=2004 type=formula formulafile=basicer.frm
formulaname=MandAutoCritInZ function=ident float=y
3.11306e+009/1/82.5/0 params=1/2/-1/2.01/1/1000/0/0
maxiter=1250 inside=0 logmap=114 periodicity=6
zzCzzBzzAzz9zz8zz_zzXzzVz }
frm:MandAutoCritInZ {; Jim Muth
a=real(p1), b=imag(p1), d=real(p2), f=imag(p2),
g=1/f, h=1/d, j=1/(f-b), z=(((-a*b*g*h)^j)+(p4)),
k=real(p3)+1, l=imag(p3)+100, c=fn1(pixel):
|z| < l }
END PARAMETER FILE=========================================
|
{"url":"http://mailman.xmission.com/lurker/message/20120501.220833.f8bc0cce.en.html","timestamp":"2014-04-18T21:37:56Z","content_type":null,"content_length":"17460","record_id":"<urn:uuid:4b4703c9-89b1-4394-afb0-c5a5f8e9a703>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00651-ip-10-147-4-33.ec2.internal.warc.gz"}
|
CMSC 100
Course Overview
Professor Marie desJardins
Thursday, August 28, 2008
Thu 8/28/08 CMSC 100 -- Overview 2
What is Computer Science?
Course Logistics
First Assignments
UPC Example
Thu 8/28/08 CMSC 100 -- Overview 3
What is Computer
The Computer Revolution
How fast did this happen?
[ http://www.blinkenlights.com/pc.shtml ]
1950: “Simon” (plans published in Radio Electronics)
1973: HP 65 (programmable calculator)
1975: Altair 8800 (first widely used programmable computer kit)
1977: Apple II (a huge breakthrough, the first mass-produced,
inexpensive personal computer)
1981: IBM 5150 PC (now we’re really taking off)
1984: Apple Macintosh 128K (my first computer!!)
2008: MacBook Air
(my newest computer!)
Thu 8/28/08 CMSC 100 -- Overview 5
Moore’s Law
Computer memory (and processing speed, resolution, and
just about everything else) increases exponentially
(roughly: doubles every 18-24 months)
Thu 8/28/08 CMSC 100 -- Overview 6
Measuring Memory
One yes/no “bit” is the basic unit of memory
Eight (23) bits = one byte
1,024 (210) bytes = one kilobyte (1K)*
1,024K (220 bytes) = one megabyte (1M)
1,024K (230 bytes) = one gigabyte (1G)
1,024 (240 bytes) = one terabyte (1T)
1,024 (250 bytes) = one petabyte (1P)
... 280 bytes = one yottabyte (1Y?)
* Note that external storage is usually measured in decimal rather than binary (1000 bytes = 1K, and so on)
Thu 8/28/08 CMSC 100 -- Overview 7
What Was It Like Then?
The PDP-11/70s we used in college had 64K of RAM, with
hard disks that held less than 1M of external storage
... and we had to walk five miles, uphill, in the snow, every
day! And we had to live in a cardboard box in the middle of
the road!
Thu 8/28/08 CMSC 100 -- Overview 8
What Is It Like Now?
The PDP-11/70s we used in college had 64K of RAM, with
hard disks that held less than 1M of memory
The cheapest Dell Inspiron laptop has 2G of RAM and up to
80G of hard drive storage....
...a factor of 1018 more RAM and 1012 more disk space
...and your iPod nano has 8G of blindingly fast storage
...so don’t come whining to me about how slow your
computer is!
Thu 8/28/08 CMSC 100 -- Overview 9
It’s Not Just Speed, It’s Quantity
So just how big a revolution are we talking about?
How many computers do you think were in the room when I
took my first programming class?
Answer: ZERO(*).
How many computers are in this room?
(* First we need to decide what is a computer… not so easy!)
Answer: I’m going to guess around 100.
Thu 8/28/08 CMSC 100 -- Overview 10
Grand Challenges for CS
Ubiquitous Computing and
Information Search Situation Awareness
Human-Level Intelligence
http://www.cs.cmu.edu/~claytronics/software/ thebrain.mcgill.ca
How Does a Computer Work?
“The work performed by the computer is specified by a program, which
is written in a programming language. This language is converted to
sequences of machine-language instructions by interpreters or
compilers, via a predefined set of subroutines called the operating
system. The instructions, which are stored in the memory of the
computer, define the operations to be performed on data, which are
also stored in the computer's memory. A finite-state machine fetches
and executes these instructions. The instructions as well as the data
are represented by patterns of bits. Both the finite-state machine and
the memory are built of storage registers and Boolean logic blocks, and
the latter are based on simple logical functions, such as And, Or, and
Invert. These logical functions are implemented by switches, which are
set up either in series or in parallel, and these switches control a
physical substance, such as water or electricity, which is used to send
one of two possible signals from one switch to another: 1 or 0. This is
the hierarchy of abstraction that makes computers work.”
-- W. Daniel Hillis, The Pattern on the Stone
Thu 8/28/08 CMSC 100 -- Overview 12
How Does a Computer Work?
“The work performed by the computer is specified by a program, which
is written in a programming language. This language is converted to
sequences of machine-language instructions by interpreters or
compilers, via a predefined set of subroutines called the operating
system. The instructions, which are stored in the memory of the
computer, define the operations to be performed on data, which are
also stored in the computer's memory. A finite-state machine fetches
and executes these instructions. The instructions as well as the data
are represented by patterns of bits. Both the finite-state machine and
the memory are built of storage registers and Boolean logic blocks,
and the latter are based on simple logical functions, such as And, Or,
and Invert. These logical functions are implemented by switches, which
are set up either in series or in parallel, and these switches control a
physical substance, such as water or electricity, which is used to send
one of two possible signals from one switch to another: 1 or 0. This is
the hierarchy of abstraction that makes computers work.”
-- W. Daniel Hillis, The Pattern on the Stone
Thu 8/28/08 CMSC 100 -- Overview 13
Abstraction: The Key Idea!
Computers are very complex
Most interesting programs are very complex
What makes it possible to design and maintain these
complex systems??
Which just means:
Once we’ve solved a “low-level detail,” we can treat that solution as
a “black box” with known inputs and outputs, and not worry about
how it works.
The way we get there is called problem reduction (or
decomposition or divide-and-conquer)
Thu 8/28/08 CMSC 100 -- Overview 14
Patterns of bits
Memory / storage registers
Machine-language instructions
Switches and Boolean logic blocks
Thu 8/28/08 CMSC 100 -- Overview 15
Operating systems
Thu 8/28/08 CMSC 100 -- Overview 16
Programming languages
Thu 8/28/08 CMSC 100 -- Overview 17
What this class is about
How computers are built, programmed, and used to solve
Hardware: Digital logic and system architecture
Systems: Operating systems and networks
Software: Basic programming/algorithms, databases
Theory: Algorithms, computation, complexity
Applications: AI, graphics, …
Social issues: Ethics, privacy, environmental impact
Other skills emphasized:
Effective writing and presentation skills
Basic programming (in Alice)
Foundational mathematics for computer science
Thu 8/28/08 CMSC 100 -- Overview 18
What this class is NOT about
How to install Windows or Linux
How to use Excel and PowerPoint
What kind of computer you should buy
Advanced programming techniques
Thu 8/28/08 CMSC 100 -- Overview 19
Course Logistics
Instructor: Prof. Marie desJardins, mariedj@cs.umbc.edu
Office hours: Mon 11-12, Thurs 3:30-4:30, ITE 337
TA: Ms. Chaitra Sathyanarayana, chaitra1@umbc.edu
Office hours: Tues 11-12, Wed 2:30-3:30, ITE 334
Course website/syllabus:
Thu 8/28/08 CMSC 100 -- Overview 20
Brookshear, Introduction to
Computer Science
Hillis, The Pattern on the Stone
Dann et al., Learning to Program with
Alice (regular or brief edition)
100H only:
Stork, Hal’s Legacy
Thu 8/28/08 CMSC 100 -- Overview 21
My Expectations
Students will…
Attend class regularly
Be prompt, and not engage in distracting or disruptive behaviors
NO LAPTOPS OR CELLPHONES DURING CLASS
Take responsibility for knowing what work is due, and turning the
coursework in promptly
Follow the course’s academic honesty policy, and not present
another’s work as your own
Be engaged in the learning process, respectful of the course staff,
and supportive of your fellow students
Express concerns and ask questions
Understand that the course staff has other obligations outside of this
Thu 8/28/08 CMSC 100 -- Overview 22
Your Expectations
The instructor will…
Tell students what is expected in terms of coursework and behavior
Be fair in giving assignments, grading assignments, and returning
coursework in a timely fashion
Answer questions and concerns promptly
Be open to feedback and suggestions
Be respectful of students
Try to make the course useful, interesting, and enjoyable
Understand that students have other obligations outside of this class
Thu 8/28/08 CMSC 100 -- Overview 23
Academic Honesty Policy
See handout…
Thu 8/28/08 CMSC 100 -- Overview 24
Course Communications
Requests for extensions, questions about course policies Prof. dJ
Grading inquiries, requests for help with assignments
Still having trouble? Talk to Prof. dJ
Office hours
One point of EXTRA CREDIT if you come to my office hours
before 9/12 to introduce yourself!
Instructor postings
Discussion board
Assignment submission
Thu 8/28/08 CMSC 100 -- Overview 25
First Assignments
First Assignments
Academic Honesty Policy and Survey
Due Tuesday 9/2
Submit in class
HW 1
Due Tuesday 9/9; NOTE CHANGE!
Submit via Blackboard
Late policy
Thu 8/28/08 CMSC 100 -- Overview 26
EXAMPLE: Universal Product Codes
Slides for the UPC example courtesy of
Prof. Michael Littman (Rutgers University)
• First scanned product: Wrigley’s gum
• Method of identifying products at point of
sale by 11-digit numbers.
• Method of encoding digit sequences so
they can be read quickly and easily by
Thu 8/28/08 CMSC 100 -- Overview 27
Reduction Idea
• Each level uses an encoding to translate to the next
level (i.e., the next higher abstraction)
• Patterns of ink.
• Sequence of 95 zeros and ones (“bits”).
• Sequence of 12 digits.
• Sequence of 11 digits.
• Name/type/manufacturer of product.
Thu 8/28/08 CMSC 100 -- Overview 28
Product Name
• Ponds Dry Skin Cream
• 3.9 oz (110g)
• Unilever Home and Personal Care USA
• Name Badge Labels (Size 2 3/16" x 3 3/8")
• 100 Labels
• Avery Dennison/Avery Division
Thu 8/28/08 CMSC 100 -- Overview 29
11-Digit Number
• Digit = {0,1,2,3,4,5,6,7,8,9}
• Sequence of 11 digits
• QUESTION: How many different items can be encoded?
Thu 8/28/08 CMSC 100 -- Overview 30
Encode Name By 11 Digits
• First 6 digits: Manufacturer
• First digit, product category:
0, 1, 6, 7, 8, or 9: most products
2: store’s use, for variable-weight items
3: drugs by National Drug Code number
• Last 5 digits: Manufacturer-assigned ID
Thu 8/28/08 CMSC 100 -- Overview 31
• Labels: 0-72782-051440
• 0=general product
• 72782= Avery
• 051440=Avery’s code for this product
• Ponds: 3-05210-04300
• 3=drug code
• 05210= Unilever
• 04300=National Drug Code for this product
Thu 8/28/08 CMSC 100 -- Overview 32
12-Digit Number
• The UPC folks decided to include another digit for error
checking. Example:
• 01660000070 Rose’s Lime Juice (12 oz)
• 04660000070 Eckrich Franks, Jumbo (16 oz)
• 05660000070 Reese PB/Choc Egg (34 g)
• 08660000070 Bumble Bee Salmon (14.75 OZ)
• Misread digit #2 and you turn sweet to sour.
Thu 8/28/08 CMSC 100 -- Overview 33
Check Digit
1. Add the digits in the odd-numbered positions (first, third,
fifth, etc.) together and multiply by three.
2. Add the digits in the even-numbered positions (second,
fourth, sixth, etc.) to the result.
3. Subtract the result from the next-higher multiple of ten.
The result is the check digit.
Thu 8/28/08 CMSC 100 -- Overview 34
Code and Example
set evensum to d2+d4+d6+d8+d10
set oddsum to d1+d3+d5+d7+d9+d11
set checkdigit to (0-(3*oddsum+oddsum)) mod 10
odd-digit sum: 0+6+0+0+0+0=6
even-digit sum: 1+6+0+0+7=14
odd*3+even = 6*3+14=32
subtract from mult of 10=40-32=8
• Lime juice: 01660000070→016600000708
all are two
• Franks: 04660000070→046600000705
digits different
• Choc Egg: 05660000070→056600000704
• Salmon: 08660000070→086600000701
Thu 8/28/08 CMSC 100 -- Overview 35
Some (Mod) Math
• 3 x Sodd + Seven = 0 mod 10
• The sum of the odd-position digits (times 3) plus the sum
of the even position digits (including the check digit) is 0
mod 10.
• Modulo math is just like regular math, except things wrap
around (like an odometer). Mod 10 means we only pay
attention to the last digit in the number.
• Divide by 10 and only keep the remainder.
Thu 8/28/08 CMSC 100 -- Overview 36
More Modulo Math
• What’s the check digit for the code 0-000000-000000?
• What happens to the check digit if you add one to an
odd-position digit?
• What happens to the check digit if you add one to an
even-position digit?
Thu 8/28/08 CMSC 100 -- Overview 37
• We’ve gone from a product name to an 11-digit
number to a 12-digit number.
• A 0 will appear in the UPC as a white bar (space)
and a 1 as a black bar.
• So we need to turn each digit (base 10) into a series of
bits (base 2).
• Also, we want to be sure we alternate 0s and 1s often
enough (e.g., don’t want 20 black bars (1s) in a row).
• Finally, we want to have a code that we can scan in
either direction (i.e., we need to be able to tell which
direction we’re reading it in).
Thu 8/28/08 CMSC 100 -- Overview 38
Digits are encoded as 7-bit 0: 0001101 5: 0110001
patterns that all:
1: 0011001 6: 0101111
•start with 0, end with 1
2: 0010011 7: 0111011
•switch from 0 to 1 twice
•include no reverse 3: 0111101 8: 0110111
complements 4: 0100011 9: 0001011
• Encode d1 d2 d3 d4 d5 d6 d7 d8 d9 d10 d11 d12 as:
101 d1 d2 d3 d4 d5 d6 01010 d7 d8 d9 d10 d11 d12 101
Last 6 digits have 0s and 1s reversed.
(No reverse complements can tell what
direction we’re scanning in!)
Thu 8/28/08 CMSC 100 -- Overview 39
How Many Bits?
• How many bits (zeros and ones) long is the code for the
original 12-digit sequence?
Thu 8/28/08 CMSC 100 -- Overview 40
Finally, Ink!
• Given the long pattern of bits, we write a 1 as a bar
and a zero as a space.
• Two 1s in a row become a double-wide bar.
• Two 0s in a row become a double-wide space.
• No UPC has more than four 0s or 1s in a row.
• All digits have equal width.
• All UPCs start and end with bars (actually with black-
white-black pattern).
• UPCs can be read upside down.
• UPCs can be read at an angle or variable speed via
Thu 8/28/08 CMSC 100 -- Overview 41
Example .......
• Barcode for skin cream:
• 3-05210-04300-8 (8 is the check digit)
start: 101; 3: 0111101
05210: 0001101-0110001-0010011-0011001-0001101
middle: 01010
04300: 1110010-1011100-1000010-1110010-1110010 (rev)
8: 1001000 (rev); end: 101
• The digits underneath are for our benefit.
Thu 8/28/08 CMSC 100 -- Overview 42
The UPC example illustrates:
Binary numbers and modulo math
Encoding (error correction, readability constraints)
Thu 8/28/08 CMSC 100 -- Overview 43
|
{"url":"http://www.docstoc.com/docs/75197500/Overview","timestamp":"2014-04-16T12:30:21Z","content_type":null,"content_length":"73499","record_id":"<urn:uuid:b5660092-00ce-46ae-9c27-90145d8b93c1>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00167-ip-10-147-4-33.ec2.internal.warc.gz"}
|
3. WHAT THE CMB DATA ALONE TELL US
As a good first approximation, one should think of a map of the CMB anisotropy as a picture of the universe at a redshift of z[dec] = 1089, when the CMB decoupled from the primordial plasma. Thus,
the CMB tells us about the universe when it was less than t[dec] = 379 kyrs old and a much simpler place. In this epoch, the early universe acts as though it is spatially flat, independent of the
values of the dark energy and dark matter today.
The variation in temperature from spot to spot across the sky arises from the primordial plasma responding to spatial variations in the gravitational potential. In turn, the gravitational landscape
is the manifestation of quantum fluctuations in some primordial field. In the inflationary model, one imagines these fluctuations stretched by at least 10^28 so that they are super-horizon size, and
then expanded with the expansion of the universe.
Observing the CMB is like looking at a distant surface ^(2) at the edge of the observable universe. As the universe expands, the pattern in the anisotropy will shift as new regions of the
gravitational landscape are sampled. For example, one may imagine that the quadrupole (l = 2) may rotate 90° in one Hubble time (30 mas/century), with higher multipoles changing faster. In a similar
vein, the light from the clusters of galaxies that formed in the potential wells that gave rise to cold regions on the decoupling surface has not has enough time to reach us.
The processes of the formation of stars, galaxies, and clusters of galaxies takes place between us and the decoupling surface. As a first approximation, photons from the decoupling surface come to us
unimpeded. The lower redshift properties do, though, affect the light from the decoupling surface but in characteristic and definable ways as discussed below.
A full analysis of the CMB involves accurately comparing the measured power spectrum, Figure 2, to models. The simplest model that describes the CMB data is flat and lambda-dominated. The results for
this parametrization derived from WMAP alone (Spergel et al. 2003) and the independent GUS analysis are shown in Table 1.
Table 1. Cosmic Parameters from CMB measurements
Description Parameter WMAP GUS w/2dF
Baryon density [b] h^2 0.024 ± 0.001 0.023 ± 0.002 0.023 ± 0.001
Matter density [m] h^2 0.14 ± 0.02 0.14 ± 0.01 0.134 ± 0.006
Hubble parameter h 0.72 ± 0.05 0.71 ± 0.05 0.73 ± 0.03
Amplitude A 0.9 ± 0.1 0.85 ± 0.06 0.8 ± 0.1
Spectral index n[s] 0.99 ± 0.04 0.967 ± 0.029 0.97 ± 0.03
Optical depth 0.166^+0.076[-0.071] ^... 0.148 ± 0.072
We can get at the essence of what the CMB is telling us from the following. Let us focus on the decoupling surface. There is a natural length scale in the early universe that is smaller than the
horizon size. It corresponds to the distance over which a density perturbation (sound wave) in the primordial plasma can propagate in the age of the universe at the time of decoupling (t[dec] = 379
kyr). It is called the acoustic horizon. Once we know the contents of the universe from the overall characteristics of the power spectrum, we can compute the size of the acoustic horizon. It is
roughly r[s] c[s] t[dec] z[dec] where c[s] is the sound speed in the plasma. In the full expression (Hu & Sugiyama 1995), r[s] depends on only the physical densities of matter and radiation and not
on the Hubble parameter, h. We may think of r[s] as a standard yard stick embedded in the decoupling surface. From a map of the anisotropy, we measure the angular size, [A], of the feature
corresponding to r[s]. From WMAP, [A] = 0.598° ± 0.002. By definition then,
where d[A] is the angular size distance to the decoupling surface. In d[A] we can trade off the geometry, [k] = 1 - [r] - [m] - [], with h. Thus to determine the geometry without recourse to
appealing to the simplest model, we must make a prior assumption on h. The dependence is not strong. If one assumes h > 0.5 then one finds 0.98 < [tot] < 1.08 (95% cl), where again we have used the
WMAP data for illustration. The progress in our knowledge of [tot] as determined by all available data roughly between the past two IAU symposia (starting with Figure 1, Bond et al. 2003) is:
Table 2. Total Cosmic Density, [tot] (1
January 2000 [tot] = 1.06^+0.16[-0.10]
January 2002 [tot] = 1.035^+0.043[-0.046]
January 2003 [tot] = 1.034^+0.040[-0.042]
March 2003 (+WMAP) [tot] = 1.015^+0.063[-0.015]
One way to see what the CMB alone can tell us is to plot the data in the [m] - [] plane for a pure cosmological constant, or equation of state w = -1. This is shown in Figure 4 for the WMAP data. All
simple open, flat, and closed cosmological models satisfying the Friedmann equation can be plotted here. One picks a point in the space, a single source of the fluctuations (e.g., adiabatic
fluctuations in the metric from an inflationary epoch), w = -1, and marginalizes over the other parameters (n[s], [b], A) with uniform priors. The possibilities are labeled by the Hubble parameter
that goes with them.
Figure 4. Models consistent with the WMAP CMB data in the []- [m] plane. The flat models correspond to the line with []+ [m] = 1. This plot assumes that the dark energy has w = -1. The code at the
top gives the values of the Hubble constant as one moves along the geometric degeneracy. It is striking that the value picked out by the CMB for a flat universe, h = 0.71, is in such agreement with
the value from the HST key project. The observations behind these two probes are completely different and correspond to times separated by a good fraction the age of the universe. The 1Tonry et al.
2003). Constraints from large scale structure would correspond to roughly a vertical swath centered on [m] = 0.3. This plot is courtesy of Ned Wright.
There are a number of things the plot pulls together. First, there is a large degeneracy in the CMB data along the line that runs above the line for flat universe. This is called the "geometric
degeneracy" and is essentially the observation noted above that one must pick h to determine d[A] to complete the equation [A] = r[s] / d[A]. The degeneracy line clearly misses a model in which the
universe is flat with [m] = 1 ([] = 0), the Einstein-deSitter case. If one stretches the data slightly, it is possible to have a model with [m] [] = 0) but the price one pays is a Hubble parameter
near 0.3. This value is in conflict with a host of other non-CMB observations. In addition, when one considers the Integrated Sachs-Wolfe (ISW) induced cross-correlation between cosmic structure, as
measured by radio sources, and the CMB anisotropy, this solution is disfavored at the 3Nolta et al. 2003). Thus, in this minimal picture, there are no models with [] = 0 that fit the data.
Once one moves off the x axis, the intersection of the flat universe line, [] + [m] = 1, and the geometric degeneracy is the next least baroque point, at least by today's standards of baroqueness. It
is very satisfying that h for the intersection is very close to the value obtained from the Hubble Key Project (h = 0.72 ± 0.03(stat) ± 0.07(sys), Freedman et al. 2001). Additionally, the values
agree with probes of the large scale structure and the supernovae data. From the plot, it is easy to see why such a weak prior on h (or [m]) picks out a flat universe. A number have noted that all
determinations of [tot] are greater than unity. The plot shows that with the priors we have chosen, there are more solutions with [tot] > 1. This may bias the solution somewhat.
^2 This is the "surface" at which the CMB decoupled from the primordial electrons and baryons. It is sometimes called the last scattering surface, but since z = 20, we prefer decoupling. Back.
|
{"url":"http://ned.ipac.caltech.edu/level5/March04/Page/Page3.html","timestamp":"2014-04-20T13:22:39Z","content_type":null,"content_length":"15760","record_id":"<urn:uuid:b7299275-b75a-446a-9b8f-5ea6085282dc>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00541-ip-10-147-4-33.ec2.internal.warc.gz"}
|
East Boston Trigonometry Tutor
...I read abundantly on my own, and enjoy helping students to develop as writers. I do well (99th percentile) on standardized tests in both math and English. I enjoy helping others to master the
different types of questions on the SAT and the PSAT.
29 Subjects: including trigonometry, English, reading, writing
...I make these foundations "feel" real so that you can really understand them and apply them on tests and exams. I have an extensive vocabulary, and a strong background in reading and writing all
types of documents. I can help you improve your understanding of written and conversational English.
18 Subjects: including trigonometry, English, writing, statistics
...I understand how science and math are used in industry. I like to help students understand the importance of trying to determine if answers make sense. I am a parent of two high school students
so I understand the stress involved in trying to equip them for college.
10 Subjects: including trigonometry, calculus, physics, algebra 2
I am a senior chemistry major and math minor at Boston College. In addition to my coursework, I conduct research in a physical chemistry nanomaterials lab on campus. I am qualified to tutor
elementary, middle school, high school, and college level chemistry and math, as well as SAT prep for chemistry and math.I am a chemistry major at Boston College.
13 Subjects: including trigonometry, chemistry, calculus, geometry
...Would you like to ace that entrance exam? Do you want feel comfortable asking questions and get answers that you understand? I can help you tackle these issues by providing excellent one-to-one
tutoring designed specifically to assist you, the individual student.
34 Subjects: including trigonometry, reading, calculus, English
|
{"url":"http://www.purplemath.com/east_boston_ma_trigonometry_tutors.php","timestamp":"2014-04-19T12:46:40Z","content_type":null,"content_length":"24206","record_id":"<urn:uuid:fe60e81b-85ef-4405-a34c-7316b860d6a6>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00004-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Prev] Thread [Next] | [Prev] Date [Next]
[sage-support] Re: Show command question Jason Grout Tue Feb 21 14:01:31 2012
On 2/21/12 3:42 PM, mhfrey wrote:
I recently upgraded to sage 4.8 and was working on some older
notebooks. Something has changed in Sage, I can no longer get
show("$P_{integral}$") to work properly. This should type set to P
with subscript of "integral". What am I doing wrong?
It works if you do:
What version of Sage did the show() above work in?
To post to this group, send email to [EMAIL PROTECTED]
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at
URL: http://www.sagemath.org
|
{"url":"http://www.mailinglistarchive.com/html/sage-support@googlegroups.com/2012-02/msg00605.html","timestamp":"2014-04-17T21:52:29Z","content_type":null,"content_length":"7299","record_id":"<urn:uuid:861646d8-7008-46af-9c8e-99bdb5292d70>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00130-ip-10-147-4-33.ec2.internal.warc.gz"}
|
search results
Expand all Collapse all Results 1 - 4 of 4
1. CJM 2013 (vol 66 pp. 284)
Random Harmonic Functions in Growth Spaces and Bloch-type Spaces
Let $h^\infty_v(\mathbf D)$ and $h^\infty_v(\mathbf B)$ be the spaces of harmonic functions in the unit disk and multi-dimensional unit ball which admit a two-sided radial majorant $v(r)$. We
consider functions $v $ that fulfill a doubling condition. In the two-dimensional case let $u (re^{i\theta},\xi) = \sum_{j=0}^\infty (a_{j0} \xi_{j0} r^j \cos j\theta +a_{j1} \xi_{j1} r^j \sin j\
theta)$ where $\xi =\{\xi_{ji}\}$ is a sequence of random subnormal variables and $a_{ji}$ are real; in higher dimensions we consider series of spherical harmonics. We will obtain conditions on the
coefficients $a_{ji} $ which imply that $u$ is in $h^\infty_v(\mathbf B)$ almost surely. Our estimate improves previous results by Bennett, Stegenga and Timoney, and we prove that the estimate is
sharp. The results for growth spaces can easily be applied to Bloch-type spaces, and we obtain a similar characterization for these spaces, which generalizes results by Anderson, Clunie and
Pommerenke and by Guo and Liu.
Keywords:harmonic functions, random series, growth space, Bloch-type space
Categories:30B20, 31B05, 30H30, 42B05
2. CJM 2012 (vol 65 pp. 600)
Christoffel Functions and Universality in the Bulk for Multivariate Orthogonal Polynomials
We establish asymptotics for Christoffel functions associated with multivariate orthogonal polynomials. The underlying measures are assumed to be regular on a suitable domain - in particular this
is true if they are positive a.e. on a compact set that admits analytic parametrization. As a consequence, we obtain asymptotics for Christoffel functions for measures on the ball and simplex,
under far more general conditions than previously known. As another consequence, we establish universality type limits in the bulk in a variety of settings.
Keywords:orthogonal polynomials, random matrices, unitary ensembles, correlation functions, Christoffel functions
Categories:42C05, 42C99, 42B05, 60B20
3. CJM 2011 (vol 64 pp. 1036)
Harmonic Analysis Related to Homogeneous Varieties in Three Dimensional Vector Spaces over Finite Fields
In this paper we study the extension problem, the averaging problem, and the generalized ErdÅ s-Falconer distance problem associated with arbitrary homogeneous varieties in three dimensional vector
spaces over finite fields. In the case when the varieties do not contain any plane passing through the origin, we obtain the best possible results on the aforementioned three problems. In
particular, our result on the extension problem modestly generalizes the result by Mockenhaupt and Tao who studied the particular conical extension problem. In addition, investigating the Fourier
decay on homogeneous varieties enables us to give complete mapping properties of averaging operators. Moreover, we improve the size condition on a set such that the cardinality of its distance set
is nontrivial.
Keywords:extension problems, averaging operator, finite fields, ErdÅ s-Falconer distance problems, homogeneous polynomial
Categories:42B05, 11T24, 52C17
4. CJM 2008 (vol 60 pp. 685)
Closed and Exact Functions in the Context of Ginzburg--Landau Models
For a general vector field we exhibit two Hilbert spaces, namely the space of so called \emph{closed functions} and the space of \emph{exact functions} and we calculate the codimension of the space
of exact functions inside the larger space of closed functions. In particular we provide a new approach for the known cases: the Glauber field and the second-order Ginzburg--Landau field and for
the case of the fourth-order Ginzburg--Landau field.
Keywords:Hermite polynomials, Fock space, Fourier coefficients, Fourier transform, group of symmetries
Categories:42B05, 81Q50, 42A16
|
{"url":"http://cms.math.ca/cjm/msc/42B05","timestamp":"2014-04-20T13:20:48Z","content_type":null,"content_length":"32211","record_id":"<urn:uuid:8f5871b6-6817-471e-8f21-d3af4924b374>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00405-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Homework Help
Post a New Question | Current Questions
Algebra 2, not "12th grade"
Assistance needed. Please type your subject in the School Subject box. Any other words are likely to delay responses from a teacher who knows that subject well.
Sunday, November 22, 2009 at 1:21pm
12th grade math an
6b=30 b=30/6 b=5
Wednesday, November 18, 2009 at 9:02pm
12th grade English
I need to write a one page response essay to the quote "you can no more win a war than you can an earthquake"..........please help! Not good at writing. Thank you
Tuesday, November 17, 2009 at 6:17am
12th grade math an
Multiply the left side of the equation. 7b - 42 = - 12 + b Subtract b and add 42 to both sides. 6b = 30 Can you do the rest?
Tuesday, November 17, 2009 at 12:48am
Math, not "12th grade"
Assistance needed. Please type your subject in the School Subject box. Any other words are likely to delay responses from a teacher who knows that subject well.
Sunday, November 15, 2009 at 8:19am
12th grade Physics
Hi, actually Bob, your answer is not 30 degrees, but 60 degrees. 30 degrees does not refer to the angle inside the triangle formed by the x and y components.
Tuesday, November 10, 2009 at 12:59am
12th grade Trig
secTheta=2 means cosTheta=1/2, which means that Theta is 60 degrees.
Monday, November 9, 2009 at 5:07pm
12th grade Trig
sec theta = 2, sin theta is less than 0. I'm not sure what to do? Please explain. Thank you.
Monday, November 9, 2009 at 4:44pm
12th grade Politics
United States is the only sizable industrialized country that does not have a socialist party as a major source of power. Why is this??? Can someone please give me some points or reasons??? Thank
Wednesday, November 4, 2009 at 10:47am
12th grade Physics
Determine the x and y components if a dog walks to the park that is 24m, 30 degrees northwest. Please help i don't know how to go about solving this problem
Wednesday, November 4, 2009 at 9:57am
12th grade Advanced Functions
A generator produces electrical power, P, in watts, according to the function: P(R)= 120/ (0.4 + R)^2 where R is the resistance, in ohms. Determine the intervals on which the power is increasing.
Sunday, October 25, 2009 at 7:50pm
12th grade Physics
Force required to accelerate the car vertically = m * (v^2) = 1200 * (0.8^2) = 1200 * 0.64 This force will be equal to the tension on the rope.
Saturday, October 24, 2009 at 1:13pm
12th grade Physics
how much tension must a rope withstand if it is used to accelerate a 1200-kg car vertically upward at .80 m/s^2?
Saturday, October 24, 2009 at 1:09pm
12th grade English
Your subject is English.
Monday, October 12, 2009 at 8:18pm
12th grade chemistry
Adding an O would make a superoxide and I don't know of any instances in organic of RCOOOH molecules although they are known in inorganic (KO2, for example).
Sunday, October 4, 2009 at 7:37pm
12th grade chemistry
thanks DrBob ..I appreciate the help. I have tried many searches for the answer on the internet and felt the answer was no as I found nothing.I understand the idea of going to CO2 and H2O. I also
don't think that just an O molecule can be added. Thanks again.
Sunday, October 4, 2009 at 5:50pm
11th grade u.s. history
can i ask one more question srry um do you know a political cartoon that goes with the 12th amendment and what does he mean how does one go about amending the constitution lol srry thats 2
Monday, September 28, 2009 at 9:52pm
11th grade u.s. history
So basically the 12th amendment is necessary cause it helps select both president and vice president So does it protect the 20th amendment or am i like way off
Monday, September 28, 2009 at 9:23pm
11th grade u.s. history
Have you seen this site? http://kids.yahoo.com/directory/Around-the-World/Countries/United-States/Government/U.S.-Constitution/Amendments/12th-Amendment
Monday, September 28, 2009 at 8:49pm
12th grade
This is an opinion question. Your teacher is looking for YOU to be able to express your opinion and then back it up with facts, details, etc. Let us know what you think.
Saturday, September 26, 2009 at 11:10am
12th grade
According to the social contract theory, the contract is
Wednesday, September 16, 2009 at 12:22pm
12th grade A.P. Economics
which of the following is the most essential for a market economy? 1.) functioning labor unions 2.) good government regulation 3.) active competition in the marketplace. 4.) responsible action by the
business leaders. I think its choice no. 3, am i right?
Sunday, September 13, 2009 at 6:10pm
12th grade government/economics
can you desccribe the definition of economics in these letters as examples J,K,O,Q,X,Y
Friday, September 11, 2009 at 7:31pm
12th grade LAW
i can think of the examples for that one point, but i'm not sure what other points i can add in order to answer my question. i'm not exactly sure how laws can actually increase someones freedom.
Thursday, September 10, 2009 at 10:36pm
12th grade A.P. Economics
Compare the mixed economies of various nations along a continuum between centrally planned and free market systems.
Saturday, September 5, 2009 at 9:40am
12th grade English
how to compare the characters of gilgamesh and enkidu. who was the more heroic? why? begin with an explanations of what you consider heroic and see if it is similar to what is considered heroic in
the story.
Saturday, September 5, 2009 at 12:45am
3rd grade
His birthday must be: 10, 12, 14, 16, 18, 20, or 22 It can't be the 10th because 1 + 0 = 1 It can't be the 12th because 1 + 2 = 3 Let's see if she can figure this problem out from here.
Wednesday, August 26, 2009 at 7:57pm
12th grade expository writing
Can somebody help me with writing?
Thursday, August 20, 2009 at 1:13pm
12th grade history
Which groups of people were not afforded all the rights stated in the bill of rights?
Thursday, August 13, 2009 at 6:12pm
12th grade AP Economics
Why might an economist look at the hundreds of cars moving along an assembly line an say, There is an example of scarcity ?
Thursday, August 6, 2009 at 2:15pm
12th grade
"I am interested" IN WHAT? What is your education? What does the job require? What is your experience? Each of these topics require a paragraph. Post your letter, and we'll be glad to comment.
Sunday, July 26, 2009 at 6:54pm
12th grade
the voting right act of 1965 specially removed from voting in the U.S.
Wednesday, July 15, 2009 at 5:10pm
12th grade
which does suzuki in "hidden lessons" explore more thoroughly, the causes of children's negative or positive attitudes toward nature or the effects of these attitudes?
Tuesday, July 14, 2009 at 12:06am
12th grade English
How does Saki in "The image of the lost soul" use descriptions of places to reach desired effect in story? Can anyone tell me how? an example form the text would be helpful
Thursday, July 2, 2009 at 11:50am
12th grade AP Economics
I'd spend more money ($50? $100?) per trip if I knew the air traffic controllers were competent and not tired.
Wednesday, July 1, 2009 at 6:26pm
12th grade AP Economics
Ms. Sue, Thank you so much for the response. But I still don't get what I should be writing. Can you please mention one example? Sorry for the trouble! Thank you
Wednesday, July 1, 2009 at 6:18pm
12th grade AP Economics
Making a list of what you would consider the most important trade-offs of spending more money on air-travel safety. Can someone please help me!!!
Wednesday, July 1, 2009 at 4:10pm
12th grade
Penn Foster Examination #93051 Cooking Appliances I've gotten 16 out of 25 completed but I'll just post the whole exam to make sure I have THOSE right. Thanks for your help guys!
Saturday, May 23, 2009 at 4:13pm
12th grade- Trigonometry
Find the exact value of each expression Cos^-1(0) Tan^-1 sqrt(3)/3 Sin^-1(1) Since the expressions are capitalized, I'm not sure if I have to do anything different. Any help is appreciated.
Thursday, May 14, 2009 at 9:04pm
12th grade chemistry
Also, you are not adding up the molarity of the compound. Molarity is numer of moles of solute per liter of solution.
Friday, May 1, 2009 at 1:22am
12th grade
Are you sure you wrote the question correctly? I do not understand how g which is a function of x is written as a function of t, and especially with the dt.
Thursday, April 23, 2009 at 10:34pm
Math, not "12th grade"
assistance needed Please type your subject in the School Subject box. Any other words are likely to delay responses from a teacher who knows that subject well.
Thursday, April 23, 2009 at 7:43am
12th grade
Please put the class in the subject area and please complete your question.
Monday, April 20, 2009 at 10:31am
12th grade, Economics
After you have done some reading and answered the question, please repost and we will be happy to make any corrections/suggestions if needed.
Thursday, April 16, 2009 at 4:11pm
12th grade math
This seems to me a poor way to teach math.
Saturday, April 11, 2009 at 7:57am
12th grade chemistry
Calculate the number of grams in the following :[149.096 g/mole] 8.55 moles of (NH4)3PO4 IT WOULD BE NICE IF I CAN GET STEP BY STEPS TO GET ANSWER !!!
Monday, March 30, 2009 at 6:32pm
12th grade
Balance the reaction first. Then, figure the moles of Al in 100 g of Al. Now, use the mole relationships to figure themoles of Cu. We will be happy to critique your thinking.
Monday, February 23, 2009 at 2:37pm
12th grade Data
In this sequence, tk is a factorial number, often written k!. Show that tk=k!=k(k-1)(k-2)...(3)(2)(1)
Monday, February 23, 2009 at 12:43pm
12th grade (chemistry)
I think one of your reactants is wrong. It would have to be potassium hydroxide H2SO4 + 2 KOH -> K2SO4 + 2 H2O
Tuesday, February 17, 2009 at 12:14am
12th grade
Write the chemical equation for Sulfiric Acid and potassisum sulfate the product is potassium sulfate and water.
Monday, February 16, 2009 at 10:55pm
12th grade
List 3 problems of Decentralized power that existed under the Articles of Confederation. For each problem, identify one solution that the Constitution provided to address the problem.
Thursday, February 5, 2009 at 9:54pm
12th grade bio
The transition of the lips where the outer skinner and inner mucous membrane meet is called the _________ I looked it up and from what I read I believe it is the gingivae. Would that be correct.
Thank you
Thursday, February 5, 2009 at 2:10pm
12th grade Chemistry
The total volume of the mixture is: 5 + 3 + 3 = 11 mls = 0.011 L I used 0.010 L by mistake. It should be replaced by 0.011 L. This will case a small change in the concentrations of Fe+3 and SCN-
Sunday, January 11, 2009 at 8:40pm
12th grade
not basically 20, it is 20. Units were not specified.
Tuesday, January 6, 2009 at 9:36pm
12th grade
I assume you are in calculus. Postition=INTEGRAL v(t) dt=INT (2t+1)dt = t^2 + t Put in t=4 and compute.
Tuesday, January 6, 2009 at 9:31pm
12th grade Subject??
What is your subject?
Tuesday, January 6, 2009 at 9:30pm
12th grade
A particle starts at x=0 and moves along the x-axis with velocity v(t)=2t+1 for time t is less than or equal to 0. Where is the particle at t=4?
Tuesday, January 6, 2009 at 9:28pm
12th grade history
Some of the clergy was corrupt and illiterate. By improving the clergy, the Church hoped to improve the whole organization.
Wednesday, December 17, 2008 at 10:02pm
12th grade history
Why did the Roman Catholic reform leaders believe that the fundamental aspect of improving the Church was to enhance the performance of the clergy?
Wednesday, December 17, 2008 at 10:00pm
12th grade (Law)
Could you tell me some information about NATO? I have to do a project for my law course and need to know how it impacts the world.
Monday, December 15, 2008 at 9:18pm
12th grade chem
In this case, they are different ways of writing the same thing. The more coventional way is H3PO3 (phosphoruus acid)
Friday, December 12, 2008 at 12:29pm
12th grade chemistry!!
Metal ions lose electrons to form cations, nonmetal ions gain electron to form anions.
Sunday, December 7, 2008 at 12:05pm
12th grade IPT
i need to gather information about E-commerce and there are questions i don't get. such as, how is the communication system is used and discuss some situations in which the system is used?
Friday, December 5, 2008 at 7:03pm
12th grade, English
We do not do your homework for you. After you have finished reading and writing, please repost and we will be happy to give you further corrections or suggestions.
Wednesday, December 3, 2008 at 3:14pm
12th grade science (food & nutrition)
You can start by Googling vegetarianism. Study these sites, and follow any links and hints that might help you more.
Sunday, November 23, 2008 at 7:52pm
12th grade science (food & nutrition)
Were doing a class debate on vegetarianism and i'm supporting vegetarianism (pro side) and i need 10 question to ask the con side (against vegetarianism). Don't really know how to start.
Sunday, November 23, 2008 at 7:48pm
12th grade government/economics
should the u.s move to an electonic currency and remove paper currency
Thursday, November 20, 2008 at 4:18pm
12th grade calculus
a marathoner ran the 26.2-mi New York City marathon in 2.2h. show that at least twice, the marthoner was running at exactly 11 mph
Thursday, November 6, 2008 at 8:06pm
12th grade calculus
I will be happy to critique your thinking. Remember, you need to work on either side of x=3 here, and on either side of x=-3.
Wednesday, November 5, 2008 at 12:08am
12th grade calculus
Let f(x)=[x^3-9x] [] --- absolute value a. does f'(x) exist? b. does f'(3) exist? c. does f'(-3) exist? d. determine all extrema of f.
Tuesday, November 4, 2008 at 11:22pm
12th grade Physics
a car travels in a straight line for 3 h at a constant speed of 53 km/h. what is the acceraleration? answer in units of m/s2.
Saturday, November 1, 2008 at 9:58pm
12th grade calculus
find the lines that are tangent and normal to th ecurve at the point given 1. 2xy+piesiny=2pie (1, pie/2) 2. x^2cos^2y-siny=0 (0, pie)
Sunday, October 19, 2008 at 9:06pm
12th grade calculus
You did not state the given point. Using implicit derivative I found it to be y' = (2x+y)/(2y-x) sub in the given point, that gives you the slope of the tangent. Now that you have the slope (m) and a
given point, use the grade 9 method of finding the equation of the ...
Sunday, October 19, 2008 at 7:58pm
12th grade government/economics
Amendments must be ratified by 3/4 of the states. Article IV has given Congress the option of requiring ratification by state legislatures or by special conventions assembled in the states.
Sunday, October 19, 2008 at 7:19pm
12th grade Physics
"If a skier coasts down a slope at an angle of 24 degrees below the horizontal, what is her acceleration if the force of friction is negligible?" I must have done this question 15 times, and still
can't get the right anwser. Can anyone help me?
Monday, October 13, 2008 at 4:30pm
12th grade Math?
How would you factor 3x^3-4x^2+4x-1? P.S. Factor theorem does not work here.
Sunday, October 12, 2008 at 11:48pm
12th grade Physics
Please do not just "drop off" problems. Show your work. Do you know what "the normal force" means? It is the component of the applied force that is perpendicular to the ground. Start by calculating
Sunday, October 12, 2008 at 3:06pm
12th grade Physics
it changed velocity from 88West to zero, in a given time. acceleration= (finalveloicty-initialveloicty)/time I assume you realize what negative West means.
Tuesday, October 7, 2008 at 6:52pm
12th grade Physics
In an experiment, a car taveling at 88m/s,west slams into a wall and comes to a halt in 0.75 seconds. What is the car's acceleration vector?
Tuesday, October 7, 2008 at 6:45pm
12th grade Physics
who was the father of physics
Saturday, October 4, 2008 at 12:37am
12th grade
Think of a business in your local area. Describe its operation in terms of factor markets and product markets.
Sunday, September 21, 2008 at 2:34pm
12th grade
If f(x)= 1/1-x Find the composition of f(f(x)). So I get 1/1-(1/1-x) I'm not sure if I am putting it in my calculator correctly because I keep getting a line. Is that right? If I put it in with the
parentheses differently, I get the graph of f(x)=1/x. Does anyone know ...
Thursday, September 18, 2008 at 5:18pm
12th grade Science
A bud covers the bloom but when it erupts it holds up the flower from underneath find the name it s divided into together and alone? -thanks-
Wednesday, September 10, 2008 at 4:03pm
12th grade
Given that x is an inetger, state the relation representing each equation by making a table of values. y=3x+5 and -4<x<4
Wednesday, September 3, 2008 at 9:25pm
12th grade
Why is it difficult to recognize the worth and dignity of all individuals at all times?
Wednesday, September 3, 2008 at 5:33pm
12th grade math help!!!!
The height of the right triangle is 9 units, and the area is 54 sq. units. how long is the base of the triangle?
Tuesday, August 26, 2008 at 2:46pm
12th grade math help!!!!
if (x,y) are the coordinates of a point p in the xy plane then x is called _______ of p and the _______ of p
Tuesday, August 26, 2008 at 2:00am
12th grade government/economics
If no candiate for the presidency wins a simple majority of the total number of electoral votes, what body has the power to choose the president?
Sunday, August 24, 2008 at 10:23pm
10th grade math
1.548937075 x 10 to the 12th. round the number to 3 significat figures. I thought I would respost this because the other one was getting confusing
Tuesday, August 19, 2008 at 8:56pm
12th grade math help!!!!
I fail to see how that relates to math
Friday, March 28, 2008 at 1:08pm
Math - PreCalc (12th Grade)
Which rectangular equation corresponds to these parametric equations? x = 0.2sec t and y = -0.25tan t A) 25x^2 − 16y^2 = 1 B) 25x^2 + 16y^2 = 1 C) x^2 − 16y^2 = 25 D) 25y^2 − 16x^2 = 1 E) 4x^2 − 16x^
2 = 1
Monday, March 24, 2014 at 10:24am
Math - PreCalc (12th Grade)
If the distance covered by an object in time t is given by s(t) = 2t2 + 3t, where s(t) is in meters and t is in seconds, what is the average velocity over the interval from 2 seconds to 4 seconds? A)
15 meters/second B) 14 meters/second C) 13 meters/second D) 12 meters/second ...
Friday, March 21, 2014 at 12:01pm
Math - PreCalc (12th Grade)
spread = 5-2 = 3 if cut into n pieces, each base = 3/n endpoint of 1st rectangle = 2 + 3/n endpoint of 2nd rectangle = 2 + 2(3/n) ... endpoint of kth rectangle = 2 + k(3/n) = 2 + 3k/n f(2+3k/n) = 2(2
+ 3k/n) + 1 = 5 +6k/2 I see you have two other questions above that follow ...
Friday, March 21, 2014 at 11:40am
Math - PreCalc (12th Grade)
Which statement about a uniform probability distribution defined on a given interval is true? A) The mean is always 0. B) The mean is always 1. C) The standard deviation is always 1. D) The mean is
the midpoint of the interval. E) The standard deviation is equal to the width ...
Friday, March 21, 2014 at 11:05am
Math - PreCalc (12th Grade)
The function f(x) = 2x + 1 is defined over the interval [2, 5]. If the interval is divided into n equal parts, what is the value of the function at the right endpoint of the kth rectangle? A) 2+3k/n
B) 4+3k/n C) 4+6k/n D) 5+6k/n E) 5+3k/n
Friday, March 21, 2014 at 9:57am
Math - PreCalc (12th Grade)
ok so far, though they already told you that f(2)=3 because the point (2,3) is on the graph. As you know, the slope at any point (x,y) on the graph is 2x+2 So, the slope at x=2 is 6 Though, if this
is pre-calc, how do you know the slope of the tangent to a curve? That's ...
Thursday, March 20, 2014 at 1:21pm
Math - PreCalc (12th Grade)
as with any arithmetic progression, Sk = k/2(a1 + ak) = k/2(1+5k-4) = k/2(5k-3) Sk+ak+1 = k(5k-3)/2 + 5k-4+1 (D) Don't forget your algebra I just because you're in pre-calc now.
Thursday, March 20, 2014 at 1:19pm
Math - PreCalc (12th Grade)
If Sn = 1 + 6 + 11 + 16 + 21 + 26...... where an = (5n − 4), what would be the mathematical expression represented by Sk + ak + 1? A) (k(5k − 3)/2)+ 5k + 1 B) (k(5k − 3)/2)+ 5k − 4 C) (k(5k − 3)/2)+
5k + 2 D) (k(5k − 3)/2)+ 5k − 3 E) (...
Thursday, March 20, 2014 at 1:05pm
Math - PreCalc (12th Grade)
What value must be defined for f(4) to remove the discontinuity of this function at x=4? f(x)=(x^2−16)/(x−4) A) 0 B) 4 C) -4 D) 8 E) -8 f(4)=(4^2−16)/(4−4) f(4)=0/0 you can't divide by zero. I don't
understand the question.
Thursday, March 20, 2014 at 12:38pm
Math - PreCalc (12th Grade)
If Sn represents the sum of the squares of the first n natural numbers, use proof by induction to find which of the following expressions for Sn is true? A) Sn = n(n − 1)/3 B) Sn = n(2n − 1)/3 C) Sn
= n(n + 1)/3 D) Sn = n(n + 1)(2n + 1)/3
Thursday, March 20, 2014 at 12:23pm
Pages: <<Prev | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | Next>>
|
{"url":"http://www.jiskha.com/12th_grade/?page=8","timestamp":"2014-04-17T20:26:09Z","content_type":null,"content_length":"34474","record_id":"<urn:uuid:d2d90071-be1d-4c19-89e8-ffc766dec0ec>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00507-ip-10-147-4-33.ec2.internal.warc.gz"}
|
subdepth – Unify maths subscript height
This package is based on code (posted long ago to comp.text.tex by Donald Arseneau) to equalise the height of subscripts in maths. The default behaviour is to place subscripts slightly lower
when there is a superscript as well, but this can look odd in some situations.
Sources /macros/latex/contrib/subdepth
Documentation Readme
Version 0.1
License The LaTeX Project Public License
Copyright 2007 Will Robertson
Maintainer Will Robertson
Contained in TeXLive as subdepth
MiKTeX as subdepth
Topics support for typesetting mathematics
position sub- and superscripts (left and right sides)
Download the contents of this package in one zip archive (98.2k).
|
{"url":"http://www.ctan.org/pkg/subdepth","timestamp":"2014-04-20T20:56:57Z","content_type":null,"content_length":"5276","record_id":"<urn:uuid:ddf4097a-78a0-40a5-9f37-e7485b2018bb>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00417-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The NNSYSID toolbox is a set of MATLAB tools for neural network based identification of nonlinear dynamic systems. The toolbox contains a number of m and MEX-files for training and evaluation of
multilayer perceptron type neural networks within the MATLAB environment. There are functions for training of ordinary feedforward networks as well as for identification of nonlinear dynamic systems
and for time-series analysis. Version 2 requires MATLAB 5.3 or higher. For MATLAB 4.2-MATLAB 5.2 it is possible to use the old Version 1.1. In this case the Signal Processing Toolbox must be
available. The toolbox is completely independent of the Neural Network Toolbox and the System Identification Toolbox.
The toolbox contains:
• Fast, robust, and easy-to-use training algorithms.
• A number of different model structures for modelling of dynamic systems.
• Validation of trained network models.
• Estimation of the models's generalization ability.
• Demonstration programs.
HOW CAN I LEARN THE THEORY?
The book
Neural Networks for Modelling and Control of Dynamic Systems
by Magnus Nørgaard, O. Ravn, N. K. Poulsen, and L. K. Hansen
is available on Springer-Verlag, London, in the series Advanced Textbooks in Control and Signal Processing.
Version 2
The toolbox will work under Matlab 5.3 and Matlab 6. No "official" toolboxes are required. Version 2 is not backward compatible with Version 1.1. The toolbox has been zipped into a file of
approximately 1.5 Mbytes. This file contains the manual in Postscript and PDF-formats.
Download matrix
General version (zip) From Windows: Use Winzip
From DOS : pkunzip nnsysid20.zip
From UNIX: unzip -a nnsysid20.zip
Alternative unix version (gzip+tar) Use "gunzip nnsysid.tar.gz" followed by "tar -xvf nnsysid.tar" to unpack
Compiled MEX files (zip) Compiled Mex files for Windows, Intel/Linux, and HPUX
Version 1.1
The toolbox has been compressed and packed into a "zip" file of approximately 0.53 Mbytes. From the matrix below you can download different versions of the toolbox.
Download matrix
Matlab 4.2 General version (zip) From DOS : pkunzip nnsysid.zip
From UNIX: unzip -a nnsysid.zip
Matlab 4.2 Alternative unix version (gzip+tar) Use "gunzip nnsysid.tar.gz" followed by "tar -xvf nnsysid.tar" to unpack
Matlab 4.2 Alternative PC version (zip) use pkunzip sysidpc.zip to unzip
Matlab 5 General version (zip) From DOS : pkunzip nnsysid5.zip
From UNIX: unzip -a nnsysid5.zip
Matlab 5 Alternative unix version (gzip+tar) Use "gunzip nnsysid5.tar.gz" followed by "tar -xvf nnsysid5.tar" to unpack
NOTICE that there is a special PC-version for MATLAB 4.2 As explained in the release notes the "printf" statements works differently under Unix and Windows 3.1. The PC version contains the toolbox
with the suggested modification for PCs. Under MATLAB5/Windows 95 this problem has been eliminated.
It appears that problems occur when trying to print the manuals on certain printers. I have therefore used the unix-command 'ps2pdf' to convert the manuals to pdf-format. View tutorial section or
reference section. The manuals are included in postscript format in the zip-files above.
MEX files for version 1.1
All functions in the toolbox have been implemented as M-functions. However, to speed up some of the most time consuming functions, a few dublets have been implemented in C and can be compiled to
MEX-files. For users that do not have access to a compiler or can't figure out how to use their compiler I have precompiled the MEX-files for a few platforms
Several things have changed in Version 2. This means that hardly any of the functions will be compatible with Version 1.1. However, only minor changes in the function calls must be made. Some of the
major new features are:
• The toolbox is no longer dependent on the Signal Processing Toolbox.
• The training is more automatic (better stopping criteria have been introduced).
• Easier call of training algorithms.
• Options to training algorithms changed to an object oriented like fashion.
• Bug fixes and fine-tuning.
Please bear with me. This is not a commercial product and thus I cannot spare the time for supporting it. BUT, if you should find a major bug do let me know and hopefully I can correct it in a future
We encourage all users of the NNSYSID toolbox to write us about their successes (and failures?). We are very interested in hearing where the toolbox is used and for what type of applications. Since
your comments very well may influence future releases of the toolbox this is also in your own interest! You can e-mail your experiences to the address listed at the bottom of this page.
If you are interested in neural networks for control we recommend that you download our NNCTRL toolkit. See our NNCTRL toolkit page for supplementary information.
The toolbox functions grouped by subject
│ │
│ FUNCTIONS FOR TRAINING │
│batbp │Batch version of the back-propagation algorithm │
│incbp │Recursive (/incremental) version of back-propagation │
│igls │Iterated Generalized Least Squares training of multi-output nets │
│marq │Levenberg-Marquardt method │
│marqlm│Memory-saving implementation of the Levenberg-Marquardt method │
│rpe │Recursive prediction error method │
│ │
│ FUNCTIONS FOR PREPARATION OF DATA │
│dscale│Scale data to zero mean and variance one │
│ │
│ FUNCTIONS FOR TRAINING MODELS OF DYNAMIC SYSTEMS │
│lipschit│Determine the lag space │
│nnarmax1│Identify a Neural Network ARMAX (or ARMA) model (Linear MA filter) │
│nnarmax2│Identify a Neural Network ARMAX (or ARMA) model │
│nnarx │Identify a Neural Network ARX (or AR) model │
│nnarxm │Identify a multi output Neural Network ARX (or AR) model. │
│nnigls │Iterated Generalized LS training of multi-output NNARX models. │
│nniol │Identify a Neural Network model suited for I-O linearization control │
│nnoe │Identify a Neural Network Output Error model │
│nnrarmx1│Recursive counterpart to NNARMAX1 │
│nnrarmx2│Recursive counterpart to NNARMAX2 │
│nnrarx │Recursive counterpart to NNARX │
│nnssif │Identify a NN State Space Innovations form model │
│ │
│ FUNCTIONS FOR PRUNING NETWORKS │
│netstruc│Extract weight matrices from matrix of parameter vectors │
│nnprune │Prune models of dynamic systems with Optimal Brain Surgeon (OBS) │
│obdprune│Prune feed-forward networks with Optimal Brain Damage (OBD) │
│obsprune│Prune feed-forward networks with Optimal Brain Surgeon (OBS) │
│ │
│ FUNCTIONS FOR EVALUATING TRAINED NETWORKS │
│fpe │FPE estimate of the generalization error for feed-forward nets │
│ifvalid │Validation of models generated by NNSSIF │
│ioleval │Validation of models generated by NNIOL │
│kpredict│k-step ahead prediction of dynamic systems. │
│loo │Leave-One-Out estimate of generalization error for feed-forward nets │
│nneval │Validation of feed-forward networks (trained by marq,rpe,bp) │
│nnfpe │FPE for I/O models of dynamic systems │
│nnloo │Leave-One-Out estimate for NNARX models. │
│nnsimul │Simulate model of dynamic system from sequence of inputs │
│nnvalid │Validation of I/O models of dynamic systems │
│wrescale│Rescale weights of trained network │
│xcorrel │Calculates high-order cross-correlation functions │
│ │
│ MISCELLANOUS FUNCTIONS │
│crossco │Calculate correlation coefficients. │
│drawnet │Draws a two layer neural network │
│getgrad │Derivative of network outputs w.r.t. the weights │
│pmntanh │Fast tanh function │
│settrain│Set parameters for training algorithms. │
│ │
│ DEMOS │
│test1│Demonstrates different training methods on a curve fitting example │
│test2│Demonstrates the NNARX function │
│test3│Demonstrates the NNARMAX2 function │
│test4│Demonstrates the NNSSIF function │
│test5│Demonstrates the NNOE function │
│test6│Demonstrates the effect of regularization by weight decay │
│test7│Demonstrates pruning by OBS on the sunspot benchmark problem │
For more information, please contact
|
{"url":"http://www.iau.dtu.dk/research/control/nnsysid.html","timestamp":"2014-04-20T13:49:20Z","content_type":null,"content_length":"23024","record_id":"<urn:uuid:0ab3d874-4da6-4ce6-a261-9ab42d88e8ff>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00473-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Nonstandard Mathematics and a Doctrine of God
return to religion-online
Nonstandard Mathematics and a Doctrine of God
by Granville C. Henry, Jr.
Granville C. Henry, Jr. is Associate Professor of Mathematics and Philosophy at Claremont Men’s College, Claremont, California. The following article appeared in Process Studies, pp. 3-14, Vol. 3,
Number 1, Spring, 1973. Process Studies is published quarterly by the Center for Process Studies, 1325 N. College Ave., Claremont, CA 91711. Used by permission. This material was prepared for
Religion Online by Ted and Winnie Brock.
In Jerusalem, 1964, at the International Congress for Logic, Methodology and Philosophy of Science, Abraham Robinson said: "As far as I know, only a small minority of mathematicians, even of those
with Platonist views, accept the idea that there may be mathematical facts which are true but unknowable."^1 In a 1971 expository article "New Models of the Real-Number Line," Lynn Steen commented:
"It seems unlikely, however, that within the next few generations mathematicians will be able to agree on whether every mathematical statement that is true is also knowable."^2 We can see in these
two statements the first stages of a new philosophical question concerning the nature of mathematics: whether there are true but unknowable mathematical structures. The question itself is a result of
new foundation shaking developments of the last few decades in mathematics.
True but unknowable? How can we talk about the content of mathematics, of all things, as true but unknowable? Robinson and Steen are not questioning whether there are mathematical relationships,
theorems, or facts that are presently true but presently unknown. This would mean something that neither accepts, namely, that we presently know all true mathematics. What they are asking is whether
there is some mathematical content that is true but which in principle can never be known. We can understand Robinson’s incredulity and Steen’s more cautious skepticism about the existence of such
structures, for traditional Western mathematics has operated in almost the opposite direction. The determination of the truth of a mathematical structure, theorem, or fact has been primarily a
function of its knowability. We know it to be true because it is known in a certain determinate way.
In this paper I want to examine how this new question of the possibility of true but unknowable mathematics may engage contemporary discussions of a doctrine of God. But first, let us look at how old
mathematics, with familiar procedures and assumptions, has affected traditional doctrines of God.
God as Unchanging
Concepts of God change even within religious communities that maintain close continuity with their past. Changes in a Christian doctrine of God have often paralleled changes in an understanding of
the nature of the soul. The understanding of both underwent significant change between the end of the New Testament period and the culmination of theology of the early church in Augustine. The change
was basically towards emphasizing an understanding of the soul as eternal and of God as unchanging, as contrasted with an understanding of the soul (or spirit) that decays or dissolves at death to be
resurrected by the power of God and of a God who is active and involved in the affairs of men and, hence, who changes. Both of these latter positions are nearer Biblical emphases than the former. The
God of the Bible is never presented as absolutely immutable and static ontologically. He loves, wills, acts in history, becomes incarnate, changes his mind, and knows particular changing and quite
mutable men. If one accepts a real knowledge by God of a changing world, then such understanding would indicate that there is some change, perhaps minor, in God himself.
This movement within Christian theology is generally recognized to have resulted from a contribution of Greek philosophy and religion. The Greek contribution, however, was not seen by the church to
be incompatible with scripture or orthodox doctrine. For centuries, theologians have seen the traditional scriptural accounts of creation, of covenant, of historical deliverance, of incarnation and
atonement as confirming a doctrine of God who is best understood as absolute and unchanging, while not realizing the anomaly of this doctrine with God’s activity witnessed in each scriptural account.
To see a clear example of the use of mathematics for an argument for an eternal soul, and thereby for an immutable God, we need look no further than Augustine’s treatise On the Immortality of the
Soul. Augustine’s argument proceeds from the unchangeable nature of mathematics to the eternal nature of the soul. We may redesign his argument in chapter one as follows: Anything which contains
something eternal, i.e., unchanging, cannot itself be non-eternal. The soul or mind contains something eternal, namely science, which in turn contains the unchanging mathematical structure that a
line drawn through the midpoint of a circle is greater than any line not drawn through the midpoint, and hence is eternal itself. The critical relation on which the argument hangs is inclusion. The
soul A includes (contains) science B which contains an eternal truth of mathematics C. A cannot be totally destroyed or eliminated without eliminating both the subsets B and C. Thus, the guaranteed
or eternal existence of C entails the conclusion that the soul can never be ultimately or completely destroyed.
Tertullian put his finger on the source of the doctrine of an incorporeal soul when he pointed out that it is the philosophers, those "patriarchs of heretics," and the chief among them Plato himself,
who have led us astray to suppose the soul is not bodily. The fault lies squarely, according to Tertullian, on Plato’s doctrine of forms, which in separating intellectual faculties from bodily
functions, make claim to a kind of truth "whose realities are not palpable, nor open to the senses."^3 Tertullian believed in an eternal soul, but after careful examination of the scriptures
concluded that it was quite corporeal.
I think Tertullian’s historical analysis is sound. It was Plato’s doctrine of form which, if not the clear source, was the primary philosophical justification, of an affirmation of an eternal soul
and an unchanging God -- at least in the theological period dominated by Augustine. We see a clear argument in the Phaedo for eternal soul and unchanging divinity based on forms -- although, of
course, Plato was radically inconsistent about the immutability of God.^4 If one assumes that God’s Wisdom is Platonic in form, i.e., utterly unchanging and eternal, it is a simple matter to go from
there to understand Cod in totality as unchanging. This is the primary approach of Augustine and of all Christian Platonists. I could document this in innumerable ways.
There is excellent evidence to believe that Plato’s doctrine of forms was precipitated by his mathematical involvement and that he modeled his understanding of forms after an understanding of
mathematical figures that was then held by the burgeoning mathematical community. Aristotle, for example, claimed that Plato’s ideas had ontologically the same status as Pythagorean numbers and were
used by Plato the way the mathematical society used their numbers.^5 Plato himself frequently used examples from geometry to show the nature of form. A. E. Taylor has made a comprehensive study of
the usage of idea and eidos in Greek literature prior to and contemporary with the Platonic dialogues.^6 He actually lists each such use of the word. His conclusion from the study was that the term
idea itself came from a technical Pythagorean use which meant geometrical pattern or figure. Since the publication of his study, the main thrust of argument against it has been, not that there is
close ontological identification between the true nature of mathematical existence and the other ideas, but that the concept of Platonic form could have arisen from sources historically independent
of mathematics.^7 Even if the concept of idea did arise independently, it was quickly welded to mathematical examples, identified with them, and influenced thereby.
It was Aristotle who in revising a Platonic understanding of form championed an exclusively immutable God. He understood Platonic forms to be immanent within physical things and not in a realm
transcendent over them. In ordinary substances each sensible thing was seen to be a composite of matter and form possessing a combination of fixity, the form, and potentiality for change, the matter.
All such substances, according to Aristotle, are in motion, that is, changing from one form to another. The reason for change from potential to actual in the individual substances is ultimately pure
form itself, i.e., God (as much as Aristotle has a God), who attracts, as it were, by virtue of being a final cause or goal, all things unto himself. The unmoved mover or pure form, is absolutely
immutable. He possesses only actuality and has no trace of potentiality. Although Aristotle differs markedly from Plato in the use of forms, the forms themselves, which constitute that which is most
real in any particular substance and is reality itself in God, are essentially Platonic in nature, possessing Platonic characteristics of eternality, fixity, abstractness and logical relatedness.
In. addition to Platonic form, there is another presupposition attendant to arguments for both an eternal soul and an unchanging God, namely the idea that unity is not divided. Augustine’s argument
above rests upon an assumed unicity of the soul, i.e., that it is one and indivisible. For if the soul is composed of parts, the eternal existence of a mathematical truth C may only guarantee the
existence of that part of the soul which is C. This would entail that that which truly continues to live is mathematically pure form, which would be too much of a Platonic realism even for Augustine.
Greek mathematicians consistently insisted that the true mathematical one could not be divided. Seldom was one listed as a number. The first number was two. The unit was understood to be the standard
of measure, the means by which number was determined, and hence in itself of a different order of reality than other numbers. One, according to Aristotle, who attempted to present essentially the
mathematical tradition, is to the other numbers as is the measure to the measureable.^8 By virtue of its use as a standard it both transcends and gives meaning to other measurements. It is the number
one, however, as contrasted with the geometrical measure of one, that is, according to Aristotle, in every way indivisible. It is interesting to consider the qualities given to Aristotle’s prime
mover in the Physics, which is "not divisible, has no parts, and is not dimensional (i.e., has no magnitude)."^9 These are the characteristics of the true number one which is not divisible, has no
parts and as the standard for magnitude does not itself possess magnitude.
It seems strange to us that the number one should not be divided -- that there should be no fractions. Although Greek mathematicians knew of the embryonic development of fractional numbers in both
Egyptian and Babylonian mathematics, they did not include within the body of pure mathematics these numbers but relegated them to practical matters where they languished without benefit of
theoretical consideration. I think that it was a strange turn of mathematical competence rather than naiveté that prevented Greek mathematicians from objectifying fractions. They solved the enigma of
the discovery of incommensurable magnitudes by the brilliant extension of the concept of ratio which itself compares magnitudes by whole numbers, that in turn depend on an indivisible unit. They
could have objectified fractions and thereby declared their existence. Indeed, they had all the theoretical development in terms of ratio to do so -- provided these fractions did not include
incommensurables. The Greeks knew that if one had rational numbers, he must also have irrational ones, and they chose the theory that allowed them to have both in a consistent setting. Remember, it
was not until the nineteenth century that theory was developed which allows irrational fractional numbers to exist alongside rational ones in a consistent axiomatic framework.
In the two basic philosophical positions the church had available to it, the Platonic and Aristotelian, the choice of an Aristotelian emphasis led quickly and easily to an immutable God. God as pure
form did not change. In a Platonic emphasis adapted to Jewish and Christian monotheism, if one considers the realm of Platonic ideas to be part of the thought of God, as did Philo, who more than any
other is the founder of classical theism, we can understand how God who possesses these ideas is necessarily, at least in part, immutable. We can understand Augustine’s statement: "For He does not
pass from this to that by transition of thought, but beholds all things with absolute unchangeableness; . . . these are by Him comprehended by Him in His stable and eternal presence."^10 Possession
of these ideas by God, however, is no true authority for the claim that he is altogether immutable; something else is required, namely that God is One, and not just the one and only God, but One in
the then understood mathematical and metaphysical sense that unity is not divided. As Aquinas states, "one means undivided being," and this is authority for affirming "one is convertible with being."
^11 But neither Aquinas, nor the other theologians, nor the mathematicians assumed they were talking about God whenever they used one in mathematics. They distinguished between the mathematical one
and the metaphysical one. I am maintaining that the understanding of the metaphysical one was influenced by an understanding of the mathematical one out of which it was derived.
Parmenides was the first philosopher to associate a metaphysical one with the strict immutability of Being itself. He may have been indebted to the Milesians who affirmed a generalized divine
substance, the arche, as the foundation of all things, or to Xenophanes, who in reaction to anthropomorphic Homeric polytheism described God as motionless though not strictly immutable. But it was
Parmenides’ mathematical background with the Pythagoreans that seems decisive. He made Pythagoreanism consistent by reducing its dualism to a monism -- of a very special sort.
Pythagorean dualism is expressed in terms of ten contrarieties: limit and unlimited, odd and even, one and plurality, right and left, male and female, resting and moving, straight and curved, light
and darkness, good and bad, square and oblong. Each of these represents manifestations of the two primary opposites leading the list, the limited and unlimited. The limit is that which can be known
clearly, objectified, made finite and bounded in some way. The unlimited is that which cannot. The limited is associated with good as opposed to bad, light as opposed to darkness. A resting object is
more self contained, visually distinct and exactly characterized than a moving one. Something moving may move we know not where, and accordingly will change we know not how. The straight is
characterized by exactness of concept, the curving has unlimited varieties of possibilities and cannot be contained as a precise and fixed structure. The contrarieties odd and even, square and
oblong, are related to the Pythagorean representation of numbers as patterns of dots. The odd always has the same shape, a square; the even varies.
The limit is characterized best by precise mathematical structure. This grasping of the mathematical bounded and known, allowed an ecstatic experience that transcended the round of birth and rebirth,
and, hence, effected salvation. It was the number one, identified by the Pythagoreans with the unit point, that was the epitome of exact objectification. From the unit point came all the numbers, and
from numbers came the whole universe. The unit point for them, as for us, was indivisible. Their identification of the number one, however, with the unit point also made it indivisible.
Parmenides, though trained as a Pythagorean, rebelled against Pythagorean thought by intensifying the importance of the objectified known, by identifying the properties of the left-hand column of the
Pythagorean dichotomies as the true and only properties of Being, and by rejecting altogether the properties of the right-hand column as having no existential import whatsoever. As such, his project
may be viewed as an attempt to make Pythagoreanism consistent by really taking seriously the Pythagorean identification of being with that which is known and objectified precisely. Parmenides,
however, in his movement towards objectifying the whole, culminating in his ecstatic and religious revelation, came to view that upon grasping Being, that which is as a unified whole, all internal
structure: time, space, multiplicity, sense experience, etc., must be denied as truly real. Essentially, Parmenides saw everything, the whole, as the one, indeed, as the Pythagorean unit point made
cosmic, but having the then understood qualities of the number one -- namely, no internal divisions whatsoever.
I have tried to show how mathematics’ influence on the philosophical notions of Platonic forms and metaphysical One had an effect on the traditional Christian doctrine of the immutability of God. I
have limited myself to one aspect of the influence of mathematics on a doctrine of God. We could have further considered the shift that occurred when God was referred to as the Infinite as well as
the One. Or we could have examined in some detail the rationalization of Christian Logos that occurred from mathematical sources.^12 Logos was first, remember, a mathematical word and was influenced
considerably by mathematical developments.
Mathematics As Changing
One of the primary characteristics discovered by the Greeks about mathematical structures is that once proved they do not change. A theorem accurately proved in the Elements is valid today, although
we will probably view it from a different perspective in the light of subsequent mathematical developments. Geometrical figures became the primary examples, if not the source of the position itself,
of Platonic forms. These figures, though individual in themselves, were seen to be linked together by a logical and mathematical connection which itself was seen as unchanging. As the discipline of
mathematics progressed, the realm of mathematicals, the domain of Platonic mathematical relationships, was seen to be a structured whole, unchanging, eternal and primordial.
Mathematicians and philosophers have not always seen, nor always maintained, that mathematics is best understood in a Platonic way -- although this has been by far the dominant position. By Platonic
here, I do not mean that mathematicians accept as a matter of course the whole body of Plato’s philosophy, but I do mean that they view in a minimal way that mathematical structures and relationships
exist independently of man’s construction of them and are there existing in some way or some form to be discovered. In regard to what is now an accepted convention we call this general position
platonic (with a small p). The mere mention of the philosophers Locke, Wittgenstein, the whole logical positivist movement, shows that not all mathematicians and philosophers are platonic. Also, it
is the case that these non-platonic mathematicians and philosophers have influenced theology and sometimes a doctrine of God. But this influence has been primarily negative in the sense that it has
denied the existence or attributes of the traditionally understood and metaphysically presented Christian God. Any traditionally conceived understanding of God has as a consequence, by and large, a
platonic understanding of mathematics, if nothing else than because of the assumption that God knows and understands mathematical relations, thereby giving them some kind of existence independent of
man’s creation.
The primary sources of influence of mathematics on Christian theology have been the result of changes in the understanding of the nature of platonic mathematical structures as a result of the
changing discipline of mathematics itself. I have indicated how the platonic understanding of mathematics influenced and confirmed the doctrine of the immutability of God. This was because of the
strict immutability of an assumed existing realm of rigidly connected mathematical structures. Not all, however, who believe in such a rigid realm of mathematics have affirmed a strictly immutable
God. Whitehead, for example, and process theologians following him maintain a doctrine of a changing God, especially in his response to the world. This God does possess, however, an unchanging
essential nature, called by Whitehead God’s Primordial Nature (the realm of eternal objects), that itself contains the rigidly connected realm of mathematical relationships. The nature of eternal
objects, and hence, God’s primordial nature, was modeled by Whitehead after his understanding of the nature of mathematical existence.^13
What if we could understand the realm of mathematical structures to be itself evolving? Would this not modify both an Augustinian and a contemporary process view of God? The chief authority for the
stability of a platonic realm would be challenged, and hence one of the primary means for arguing God’s immutable essential nature questioned. There are developments in mathematics that might lead us
to come to that opinion.
The primary mathematical developments that appear to me to be relevant for contemporary theology are the creations of multiple models for the real numbers. There are apparently a number of different
real number systems, all of which characterize real numbers in that they satisfy all the accepted axioms of real numbers but differ among themselves in specific and exact details. This is like
telling the number theorist that there is no one arithmetic but a number of different arithmetics possessing different properties. And we can say that also! The mathematical authority that allows us
to claim the existence of a multiplicity of different real number models also provides for a multiplicity of different arithmetics, and vice versa. In fact, if we accept any axiomatization for the
real numbers or for arithmetic, there are an infinite number of different real number systems and an infinite number of different arithmetics that satisfy the respective axiomatizations.
The discovery of nonstandard models for arithmetic and real numbers differ in degree and kind from the discovery of non-Euclidean geometries. Non-Euclidean geometries were formulated by changing the
axioms of Euclidean geometry, and in particular the parallel postulate axiom of Euclid. Each of the resulting different geometries had its own axiom system that was understood to characterize its own
specific properties. Once one had the axioms, he had, presumably, the system "wrapped up" provided he had the skill and the means to deduce the theorems from it. The different geometries were clearly
distinguishable from each other in terms of clearly discernible sets of different axioms. This is not the case for non-standard models, for within any one given family of models, they all have the
same axioms.
The possibility of the existence of non-standard models has been evident since the announcement by Gödel to the Vienna Academy of Sciences in 1930 of his now famous Incompleteness Theorem. This
theorem is an effective proof by metamathematical considerations that arithmetic is essentially incomplete: that not only are there true theorems in arithmetic that cannot be proved from the axioms
of arithmetic but also that no matter how many axioms are added there always remain true theorems that cannot be proved. If one finds some true theorem that cannot be proved from the axioms, then
neither can its negation be proved. What if one adds to the set of axioms not the unproved true theorem but its negation? We know that in any consistent first order theory, if some theorem A is not
provable from the axioms, then the theory with the negation of A affixed to the axioms is itself consistent. Obviously this new system, or more accurately an interpretation or model of this system,
is different from the previously accepted one. It differs exactly in that the accepted unprovable but true theorem in the original system is false in the newly constructed one. Yet both theories
conform to the previously accepted axiom system.
We can see in terms of these developments why the question arose that I mentioned in the first part of the paper, namely, whether there are true but unknowable mathematical structures. In terms of
the adequately, or perhaps "vividly," known, we have shown that there are structures that conform to an axiomatic system that cannot be proved from an axiomatic system. We know that there are models
of the real number axioms which may never be explicitly formulated. Traditionally we have encompassed and understood an infinitude of structures by an axiomatic system. It has been the authority for
our declaring that we know all of a certain type of structure. We cannot now make any comprehensive claim to know all structures for any complicated mathematical axiomatic system. Could there be
compatible interpretations of a system that are somehow in principle impossible to know? Could there be true but unknowable mathematical facts?
I share Steen’s and Robinson’s skepticism about the existence of platonic mathematical structures that are true but unknowable. I find there is a certain presumption about affirming the existence of
a platonic mathematical form that cannot be known -- either within a Platonic perspective or outside of it. In principle, one could never have any evidence of the form’s positive existence. Also, I
find the affirmation that there is a platonic realm of mathematical structures that are eternally fixed in their relationship to each other but never growing or diminishing in totality, to be also
somewhat presumptuous. Our evidence historically, certainly in terms of what we know, is almost exclusively of a changing domain of mathematical structures, a domain that changes primarily by
addition to itself. Of course, it may be claimed that this is simply a growth of our knowledge of a fixed domain, and I would certainly acknowledge the explosion of mathematical discoveries in this
century. But it may be the case that there is an actual ontological addition to mathematical structures.
If one believes in any platonically understood realm of mathematical structures, it seems to me best to understand it as a loosely known multiplicity which is incapable of unification axiomatically
and to which new relationships may be added. The addition of any new relationship would, of course, be compatible with some structures and logically incompatible with others. Instead of "true but
unknowable" we might say "unknowable because not yet true."
In assuming that mathematical relationships have a kind of platonic reality at least in terms of being potentials for matters of fact as known by God, we recognize that these relationships may be
structures of that which is known -- or part of the structures of knowing itself. The structures of knowing, at least the means by which one can know mathematics, have traditionally been known as
logic. It is a well-known fact that these structures have been objectified and made epistemological objects whose nature can be examined mathematically as structures of the known. Gödel’s theorem
points out that the structures of knowing cannot all be formalized mathematically.
The new developments in mathematics seem to me to allow a better understanding of what it might mean for God to have the freedom to change the totality of potentials -- both in terms of the
structures of knowing among human consciousness and in terms of the objects known. This would mean that not only could man’s consciousness, as well as other structures of the world, evolve in ways
hitherto unknown, and in ways impossible to know, but in ways that might be even a surprise to God -- a surprise in the sense that the potential mathematical structure that could characterize (in
part) such consciousness might not even be at present. My viewpoint here is a departure from a strictly Whiteheadian process theology that could understand God’s surprise at the way Beethoven’s Fifth
Symphony turned out, but a surprise because it turned out this way and not that way, or some other way, all ways being known as strict potentials. God may not be surprised, however, at new
mathematical potentials that are added, for he may create and add them all himself. But we need not, in our knowledge, limit new potentials solely to God; they may come from God’s interaction with
the world or from the world itself, i.e., by the creative power given to the world by God.
Almost all traditional and contemporary theologies that maintain a platonic reality for mathematical potentials insist both that the mathematical structures do not change and that they are complete
in their totality as understood or envisioned by God. This doctrine is found in Augustine as well as in contemporary process theology. God, though changing in his actual consequent nature in process
theology, does not change in his essential nature, that aspect of him called the primordial nature. The eternal objects that comprise the primordial nature are fixed, they are pure potentials and as
such have a rigid logical structure. God may establish possibilities for actual entities by selective envisionment of, or ordering of, the realm of eternal objects, and in this role he acts as
destiny or providence for actual entities. From the perspective of the actual entity, there are multiple routes to the future in terms of different potentials for actualization, but each of these
routes as in the completion of Beethoven’s symphony is a choice of this route or that one, each of which is known to God and thereby potentially knowable to man. God is the ground of an individual’s
possibility in traditional process theology. He provides the options. But he may not create new pure possibilities, i.e., eternal objects, or destroy old ones. This is a fixed aspect of his own
I would like to maintain the emphasis that platonic mathematical structures do not change, as affirmed by Whitehead and Augustine, but relax the requirement that no new potentials or structures be
added to the realm of eternal objects. This relaxation is based on the simple observation that it has been primarily the axiomatic method that has given mathematicians and philosophers the authority
for stabilizing the mathematical realm, for claiming it to be complete as related logically to a few unquestionable assumptions. What we have learned about mathematics since the advent of Whitehead’s
philosophy is that the axiomatic method cannot adequately characterize the nature of mathematical structures that are presently known. It is true that we may know some aspects of these structures
apart from the axiomatic method. This is essential. But we still know the unity of mathematics, or the unity of mathematical systems, primarily through axiomatic investigation. It may be that what
unity we know we know through axiomatic systems, but that this unity is not complete.
The claim that individual mathematical structures are unchanging but that new ones may be formed, new potentials added to the realm of eternal objects, entails some kind of evolution in the realm of
eternal objects. Under the principle that actuality determines (at least) potentiality, we would maintain that all actual relationships in the past are now potential. The realm of eternal objects is
comprised at least of those relationships that were (or are) actual -- of course understood now as potential. In addition, the realm of eternal objects is comprised of all known potential
relationships and especially that vast welter of mathematical relationships created by the imagination and consciousness of man. For as known by man, these relationships do have a tie to the actual
world, even though in their objective status they do not characterize or have never characterized any particular complex of events. I am sure that the realm of potentials, i.e., eternal objects, is
greatly enlarged by God’s knowledge of potentials. He knows the mathematical structures that we could know but now in fact do not know. In addition, I think that his activity is the primary source of
new structured relationships in the realm of eternal objects and that those relationships coming from the world comprise probably only a small portion of the total.
My proposal concerning the nature and evolution of eternal objects tips the balance towards a Hartshornian rather than a strictly Whiteheadian process theology. Whitehead did model his understanding
of eternal objects after his understanding of mathematical existence. Eternal objects, therefore, according to him, are exact, discrete, individual, objective and existing in themselves apart from
any relationship to particular actual entities. Whitehead’s God, though not fully developed in Process and Reality, is described characteristically as a nontemporal actual entity whose primordial
nature, the realm of eternal objects, is given primacy over his consequent nature. For Hartshorne, however, who emphasizes that the concrete contains the abstract, the temporal includes the
atemporal; eternal objects are given less emphasis than actual entities. It is the becoming of actual entities that determines their being and especially that being characterized by (mathematical)
eternal objects. Consequently Hartshorne’s God is much more temporal than Whitehead’s God; God’s consequent nature is understood to embody concretely his primordial nature as abstract essence. My
position is Whiteheadian as it views the nature of eternal objects presently existing and Hartshornian in that God’s consequent nature is the ground and source of (most) new eternal objects. Here I
am trying to maintain the emphasis of Hartshorne that actuality ontologically precedes potentiality; that of the two, actual entities and eternal objects, precedence must go to actual entities. In my
estimation, contemporary mathematics makes this position easier to maintain.
God’s freedom in this revision of process theology not only extends to his influence on actualities but also on the limitations of that which is possible, not just in the sense of choosing those
possibilities that may be most relevant in a particular set, but in creating the possibilities themselves. The realm of eternal objects grows as history progresses. Things in their true possibility
literally become more complex. Not only can God point out possibilities that we do not know of, he can create them. Thus in a genuinely new sense, at least in process theology, the future is his.
There is another aspect of the theory under consideration that I find attractive, because it conforms to my ideas of the relativity of metaphysics. There is a similarity of nature and function
between overarching metaphysical principles and mathematical ones. In process theology both turn out to be varieties of eternal objects. Our difficulty in finding an adequate metaphysics may be due
to the fact that there is no (mathematical) structure existing presently that can characterize adequately the cosmos. The irony may be, and indeed it would be an irony appropriate to God, that the
true metaphysics is a structure yet to be evolved. Metaphysical truth may genuinely come from the hands of God in the future.
In the first part of this paper I tried to show how an understanding of standard mathematics conditioned the doctrine of God’s immutability. Obviously, our interpretation of contemporary nonstandard
mathematics relaxes any restrictions, at least from mathematics itself, of requiring God to be strictly immutable. In the second part I have tried to show how contemporary developments in mathematics
might affect a contemporary doctrine of God. In particular, I chose process theology to work with. I challenge those who may have an allegiance to another set of theological doctrines of God to work
out what might be the consequence if the traditional understanding that mathematical structures are complete, unified and eternal in nature were relaxed.
Abraham Robinson, "Formalism 64," Logic, Methodology and Philosophy of Science; Proceedings of the 1964 International Congress, ed. Yehoshua Bar-Hillel (Amsterdam: North-Holland Publishing Co.,
1965). p. 232. ^
Lynn Arthur Steen, "New Models [0]f the Real Number Line," Scientific American 225/2 (August. 1971), 99. ^
Treatise on the Saul, Ch. III, from The Ante-Nicene Fathers, eds. Roberts and Donaldson (Grand Rapids: Wm. B. Eerdmans) III, 183. ^
1n the earlier works of Phaedo, Republic and Parmenides the deity and at times the soul are supreme examples of fixity and immutability, whereas in the Phaedrus and the Laws the deity is freely
mobile. In the Timaeus the eternal God is immutable and the world soul is self moving. ^
Metaphysics, 987b 9-13. ^
A. E. Taylor, Varia Socratica (Oxford: James Parker & Co., 1911), pp. 187 ff. ^
See, for example, Sir David Ross, Plato’s Theory of Ideas (Oxford: Clarendon Press, 1951), p. 13. ^
Metaphysics, 1052b16--1053a5. ^
Physics, 267b25, ^
The City of God. Ch. XXI, Basic Writings of Saint Augustine. tr. M. Dods (New York: Random House. 1948), II, 162. ^
The Summa Theologica Question XI, First Article. ^
"Mathematics and Theology." Bucknell Review, 20/2 (Fall, 1972), 113-26. ^
My "Whitehead’s Philosophical Response to the New Mathematics," The Southern Journal^ of Philosophy, 7/4 (Winter, 1969-70). 341-49.
|
{"url":"http://www.religion-online.org/showarticle.asp?title=2408","timestamp":"2014-04-19T19:40:00Z","content_type":null,"content_length":"40812","record_id":"<urn:uuid:676b25f6-c102-424d-8fa2-8b7c00985726>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00639-ip-10-147-4-33.ec2.internal.warc.gz"}
|
SparkNotes: SAT Subject Test: Math Level 2: Explanations
10.1 Characteristics of a Function 10.6 Graphing Functions
10.2 Evaluating Functions 10.7 Identifying the Graphs of Polynomial Functions
10.3 Compound Functions 10.8 Review Questions
10.4 Inverse Functions 10.9 Explanations
10.5 Domain and Range
1. B
This question tests your understanding of compound functions. If f(x) = ax and g(x) = bx, f(g(x)) = f(bx) = abx. From this point, the question is all algebra to solve the equation abx = 5x^2 + 2. By
factoring an x out of the right side, you get (ab)x = x(5x + ^2/[x]). Divide both sides of this equation by x and you see that ab = 5x + ^2/[x].
2. C
To evaluate this compound function, you must first find the inverses of f and g. The inverse is found by interchanging the places of x and y and solving for y: f^–1(x) = ^x/ [2], and g^–1(x) = .
Thus, f^–1(g^–1(x)) = ^/ [2]. Substituting 2 for x into this equation, the result is ^/[2] = ^2/[2] = 1.
3. C
In order to solve this problem, it is simpler to rearrange the problem so it looks like h(x) = (g(f(3)) – 1. Then just substitute the expression for h(x) into the equation:
Remember that you were given h(x) = 3x, and now you know that h(x) = 3. Equate the two and you get 3x = 3 and x = 1.
4. C
By graphing the function on your calculator, you should be able to see that statements I and II must be true and statement III is false, making C the correct answer choice. Even without a calculator,
though, you can calculate the asymptotes of this graph.
A vertical asymptote occurs in this graph where the function is undefined. The function is undefined at x = 2, so this is a vertical asymptote. By plugging in a series of numbers, you find that
values of the function approach 2 but are never equal to 2. So there is a horizontal asymptote, the line y = 2.
One easy way to check your asymptotes is to plug the corresponding x and y values into the function. For example, when you plug in x = 2, the function should be undefined. To be sure that x = 2 is
indeed an asymptote and not a hole, plug in values on both sides of x = 2, like x = ± 2.01, and make sure that one is very large and one is very small. This indicates that on one side of x = 2, the
function approaches infinity, and on the other side the function approaches negative infinity.
5. B
All of the answer choices are polynomials. You can analyze the polynomial’s end behavior to narrow the choices. By observing the end behavior of f, you can determine whether the leading coefficient
is positive or negative, and you can tell whether the degree of f is odd or even. From the graph, you can see that f(x) increases without bound as x increases and as x decreases. This means that the
degree of f is even, and the leading coefficient is positive. This eliminates C and D as possible answers.
To choose between the remaining three possible answers requires closer analysis. First, note from the figure that x = 0 is not a root of the function, but x = 0 is a root of the function in A.
Therefore, this choice can be eliminated. Second, in the graph, f takes on positive and negative values. The function in E is positive for all values of x, because an even power of x is always
positive, and c^2 is always positive. So, E can be eliminated from the answer choices, making B the only remaining possibility.
This is a difficult question, and the answer probably didn’t jump out at you right away. Remember that when you have to analyze the graph of a function or a function itself, you can use end behavior,
roots, and which portions of the graph have positive and negative values of f as tools for your analysis.
6. E
The condition that f(x) = –f(–x) means that f is an odd function, which means that it is symmetrical with respect to the origin. The easiest way to answer this question is to choose which of the
functions graphed are symmetrical with respect to the origin. The only graphs that satisfy this condition are the graphs in D and E. Note, however, that the graph in D is actually not even the graph
of a function, because it doesn’t pass the vertical line test. The correct answer must be E.
7. E
To find the maximum value of the function, first multiply it out.
The graph of the function f(x) = –2x^2 + 28 is a parabola opening downward, since the coefficient of x^2 is negative. This means the maximum value of the function will be the y value of the vertex.
To find the vertex of the parabola, set x = 0 and solve for f(x). For this function, f(0) = –2(0)^2 + 28 = 0 + 28 = 28, so the vertex of the parabola is the point (0, 28), and the maximum value of f
is 28.
8. D
In order for x must equal x – y and y must equal x + y. These equations can be solved simultaneously:
When x = y, the requirement that is fulfilled for this function.
|
{"url":"http://www.sparknotes.com/testprep/books/sat2/math2c/chapter10section9.rhtml","timestamp":"2014-04-17T15:44:07Z","content_type":null,"content_length":"51241","record_id":"<urn:uuid:061d3aaf-6155-48e1-926c-1ebc6106eb1b>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00063-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Tools
Calculate the Area of a Trapezoid
Reviewer: Andy B, Aug 4 2004 10:52:19:563AM
Classroom use in Courses and Topics:
Math 7
Geometry: Triangles, Circles, Perimeter, area, and volume
Duration of classroom use:
One Week
What did students learn?
A nice review for students on their own of the area formulas discovered in class.
What did students do with the resource?
Reviewed area formuals in an interactive way. I appreciated the "Why" feature shown in many of the formula explanations.
How hard was it for students to use?
Very Easy
Other classroom comments:
The quiz taken at the end is a bit abrupt. When students get a problem incorrect, that is that, and the quiz moves on after giving the right answer. Right then and there I would like to see some
explanation to the problem or a hint as to how better calculate the area.
Appropriate for:
practice of skills and understandings
Other Comments:
A bit straight too forward with some of the definitions as an introduction lesson. A nice remedial support though. I appreciate a discovery approach with the definitionsl.
What math does one need to know to use the resource?
General principles of area.
What hardware expertise does one need to learn to use the resource?
What extra things must be done to make it work?
I encourage, since the program has no means of recording the incorrect problems, the students to record mistakes made on the quiz and reflect later.
How hard was it for you to learn?
Very Easy
Fairly straightforward.
Ability to meet my goals:
Recommended for:
Geometry: Triangles, Polygons, Circles
|
{"url":"http://mathforum.org/mathtools/all_reviews/751/","timestamp":"2014-04-18T21:50:52Z","content_type":null,"content_length":"14074","record_id":"<urn:uuid:154a8ad7-8dab-4b0e-80d2-dda2e23cbdf2>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00571-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Corte Madera
...If this is not possible, the student and I can meet at the library. MY BACKGROUND I have a B.S. degree in Mathematics from UC Davis specializing in probability theory and did graduate work in
Computer Science at California State University Chico. I retired after working 30 years as a computer p...
12 Subjects: including calculus, geometry, statistics, algebra 2
...Different students will respond better to one style of teaching versus another. My job as a tutor is to find out what teaching style or presentation will make my student comprehend the subject
at hand better. First, I try to find out what sparks the interest in the student and then try to relate the subject matter that he or she has difficult with to that interesting topic.
24 Subjects: including calculus, chemistry, physics, geometry
...I emphasize the understanding because it will not only help in getting better grades but also serves structural learning. "Excellent Tutor" - Alexandra K. San Francisco, CA Andreas was more
than willing to help and he not only answered my questions but clued me in on other connections and hints between problems. He helped me to thoroughly understand concepts and catch up in Pre
41 Subjects: including calculus, geometry, statistics, algebra 1
When I retired from the United States Air Force I swore I would never get up early again. But I still wanted to do something to continue making the world a better place. So I turned to something I
had done for my friends in High School, my troops in the field, and my neighborhood kids, TUTORING!
10 Subjects: including calculus, geometry, precalculus, algebra 1
...I despise simply memorizing dates and figures, and I love discussing history within a memorable context. I also attempt to make learning about history enjoyable (there are some very
entertaining figures and events in our nation's history!). Most recently I worked with a high school senior enroll...
29 Subjects: including calculus, English, physics, French
|
{"url":"http://www.purplemath.com/Corte_Madera_calculus_tutors.php","timestamp":"2014-04-20T19:27:55Z","content_type":null,"content_length":"24299","record_id":"<urn:uuid:b24e787b-6e9a-4d84-9db2-354d34cb7155>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00377-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Spring 2001 LACC Math Contest - Problem 1
Problem 1.
Suppose a clock is perfectly accurate, but has only an hour hand, no minute or second hand. What is the exact time when the hour hand is pointing at the 22 minute mark?
[Problem submitted by Iris Magee, LACC Instructor of Mathematics. Source: Mathematics Leagues Inc.]
|
{"url":"http://lacitycollege.edu/academic/departments/mathdept/samplequestions/2001solution1.html","timestamp":"2014-04-16T04:38:50Z","content_type":null,"content_length":"2698","record_id":"<urn:uuid:fcf5023c-dae8-42c4-b7a1-f8bc226fe671>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00641-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Bitshift and two's complement in the documentation - Arduino Forum
Well, I am new here and this is my first post, please be not to harsh if I missed some forum rules
While reading the documentation for the
operation I struggled over:
In that case, the sign bit is copied into lower bits, for esoteric historical reasons:
I'm unhappy with that since
two's complement
is neither historic nor esoteric - nearly every processor implements negative integers this way. Injecting the one for negative numbers keeps them negative and makes sure that division by powers of
two for negative numbers works as expected. One could argue that this might be to deep into number representation theory, but a few lines later exactly this division is shown:
If you are careful to avoid sign extension, you can use the right-shift operator >> as a way to divide by powers of 2. For example:
Here it gets completely unhelpful: you are told to avoid the sign but in reality everything just works, due to the injected one.
My proposal: drop "for esoteric historical reasons", and mention that the injected ones guarantee proper results for negative numbers in the bottom.
P.S. (Maybe it would be nice to have a page on number representation anyway where it would be possible to get deeper into the details)
|
{"url":"http://forum.arduino.cc/index.php?topic=83166.0;prev_next=next","timestamp":"2014-04-16T22:18:37Z","content_type":null,"content_length":"43223","record_id":"<urn:uuid:a0d9cb72-5e77-4b63-bee0-eb8a13855f07>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00504-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Cryptography : Asymmetric Encryption by using Asymmetric Algorithm Classes.
In previous blog –
Symmetric Encryption by Symmetric Algorithm Classes–Part 1 and Part 2
we have learned about basic introduction of Cryptography based on Symmetric Encryption. so, now In addition to previous blog here we will learn about basics of Asymmetric Encryption.
Asymmetric Encryption :
Asymmetric encryption is also referred to as public key encryption because it uses public key as well as private key. means Its having a secret key that must be kept from unauthorized or anonymous
users and a public key that can make public to any one. Hence we can say that Asymmetric encryption is designed so that the private key remains shield and secret, whereas the publick key is widely
distributed. The private key is used to lock information, whereas the public key is used to unlock it.
The main benefit of opting asymmetric encryption is that you can share encrypted data without having access to the private key. In standard mode asymmetric encryption is used more commonly than
symmetric encryption, and it is proved that its a standard used to help secure communication over the Internet.
Have a look at animated pic to view its working
we came to know in earlier blog about symmetric encryption, the same key is used for both encryption and decryption, however this approach is simpler but less secure since the key must be
communicated to and known at both sender and receiver locations.
Example For Better Understand :
Lets assume about conversion of Plain text between A and B. If ‘A’ send message to ‘B’, ‘A’ can find out public key (but not private key of ‘B’) of ‘B’ from a central administrator and encrypt a
message to ‘B’ using ‘B’ public key. When ‘B’ receive it, ‘B’ can decrypt it with ‘B’ private key. In addition to encrypting messages (which can ensures privacy), ‘B’ can authenticate itself to ‘A’
(so ‘A’ know that it is really ‘B’ who is the sender of message to ‘A’) by using ‘B’ private key to encrypt a digital certificate. When ‘A’ receive it, ‘A’ can use ‘B’ public key to decrypt it.
Asymmetric Algorithm Classes
namespace provides encryption classes that provide most popular Asymmetric algorithms like
• RSA and RSACryptoServiceProvider
• DSA and DSACryptoServiceProvider
RSACryptoServiceProvider Class
RSA stands for Ron Rivest, Adi Shamir and Leonard Adleman, who first publicly described it in 1977-[ From Wikipedia ]
The RSA class is an abstract class that extends the Asymmetric Algorithm class and provides support for the RSA algorithm. The .NET Framework RSA algorithm support an encryption key size ranging from
384 bits to 16,384 bits in increments of 8 bits by using the Microsoft Enhanced Cryptographic Provider and an encryption key size ranging from 384 bits to 512 bits in increments of 8 bits by using
the Microsoft Base Crystallographic Provider.
The RSACryptoServiceProvider class exnteds the RSA class and is the concrete RSA algorithm class.
Implementation of RSACryptoServiceProvider Class
To perform Encryption and Decryption. you must add
using System.Security.Cryptography; // Namespace
Now take a look at encryption function
static public byte[] RSAEncrypt(byte[] byteEncrypt, RSAParameters RSAInfo, bool isOAEP)
byte[] encryptedData;
//Create a new instance of RSACryptoServiceProvider.
using (RSACryptoServiceProvider RSA = new RSACryptoServiceProvider())
//Import the RSA Key information. This only needs
//toinclude the public key information.
//Encrypt the passed byte array and specify OAEP padding.
encryptedData = RSA.Encrypt(byteEncrypt, isOAEP);
return encryptedData;
//Catch and display a CryptographicException
//to the console.
catch (CryptographicException e)
return null;
In above code Encrypt Function is used to encrypt plain text to cipher text. Encrypt Function Need two parameter first one is byte array of plain text and second one specify OAEP padding(True or
Now in same way we need to create function for Decrypt the PlainText(Encrypted Text)
have a look at given function which is responsible to decrypt encrypted text.
static public byte[] RSADecrypt(byte[] byteDecrypt, RSAParameters RSAInfo, bool isOAEP)
byte[] decryptedData;
//Create a new instance of RSACryptoServiceProvider.
using (RSACryptoServiceProvider RSA = new RSACryptoServiceProvider())
//Import the RSA Key information. This needs
//to include the private key information.
//Decrypt the passed byte array and specify OAEP padding.
decryptedData = RSA.Decrypt(byteDecrypt, isOAEP);
return decryptedData;
//Catch and display a CryptographicException
//to the console.
catch (CryptographicException e)
return null;
we can see in above code Decrypt Function is used in same manner to decrypt cipher text to plain text. Decrypt Function Need two parameter first one is byte array of encrypted text and second one
specify OAEP padding(True or False)
Note :- OAEP padding is only available on Microsoft Windows XP or later.
Now we have created function, so we can use both function to appropriate manner to accomplishment of Encryption Decryption task.
Note : we need to access RSACryptoServiceProvider class here.
RSACryptoServiceProvider RSA = new RSACryptoServiceProvider();
How to use Encrypt and Decryption Function
Note : below code is tested in Windows Application. You can Download Source Code for better understand.
UnicodeEncoding ByteConverter = new UnicodeEncoding();
RSACryptoServiceProvider RSA = new RSACryptoServiceProvider();
byte[] plaintext;
byte[] encryptedtext;
For Encrypt Text
plaintext = ByteConverter.GetBytes(txtplain.Text);
encryptedtext = RSAEncrypt(plaintext, RSA.ExportParameters(false), false);
txtencrypt.Text = ByteConverter.GetString(encryptedtext);
For Decrypt Text or Back to Plain Text
//While Decryption set True, the private key information (using RSACryptoServiceProvider.ExportParameters(true),
byte[] decryptedtex = RSADecrypt(encryptedtext, RSA.ExportParameters(true), false);
txtdecrypt.Text = ByteConverter.GetString(decryptedtex);
What we’ve seen
• Create Function of Encryption and Decryption
• Create Encoder
• Create RSACryptoServiceProvider Instance.
• Create byte array to illustrate the encrypted and decrypted data.
• encrypted text and Display Cipher text
• Decrypt cipher text and Display back to plain text.
Output by using RSAEncryption Program.
Download Source Code :
Want to Download Source Code : Click Me!
Further Reading
RSA Algorithm-[Wikipedia]
RSACryptoServiceProvider Class-[MSDN]
Data Confidentiality-[MSDN]
Coming Next
Please Stay in touch for extend part of this article. Topic will move around “Introduction and Implementation of DSACryptoServiceProvider for Beginners”.
Filed under:
|
{"url":"http://www.codeproject.com/Articles/448719/Cryptography-Asymmetric-Encryption-by-using-Asymme","timestamp":"2014-04-21T03:29:26Z","content_type":null,"content_length":"88880","record_id":"<urn:uuid:18cb21b3-568f-4fd2-98c0-4e109f649b48>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00015-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Manhattan, NY Math Tutor
Find a Manhattan, NY Math Tutor
...As well as working as a teacher I have had years of experience tutoring students in areas of English such as reading, writing, and grammar. I have tutored elementary and intermediate French
students during high school and college as well. I am passionate about reading and an enthusiastic motivator!
21 Subjects: including SAT math, reading, English, ESL/ESOL
...I have a Ph.D in Immunology and I am a current postdoctoral fellow in New York doing research work in cancer immunology. I taught high school students and privately tutored all levels, various
subjects. I was brought up in Paris (France) and Toronto (Canada), thus I am perfectly bilingual.
18 Subjects: including calculus, elementary (k-6th), French, physics
I have been a certified teacher for the last 10 years. I love teaching, especially to students who love learning. I'm flexible, patient, and my main interest is in helping the students.
40 Subjects: including ACT Math, prealgebra, English, reading
Hello, My name is Matthew and I thoroughly enjoy teaching. My career has always been around teaching and the excitement of discovery. I enjoy the thrill of new discovery and understanding.
26 Subjects: including trigonometry, discrete math, ACT Math, SAT math
...I want to go into research and enjoy tutoring/working in these subject areas. In my junior and senior years of high school I tutored two middle school students, primarily in math and biology,
but in other subject areas as well. I have an advanced Regents diploma so I would be willing to help students with Regents exams.
23 Subjects: including probability, statistics, trigonometry, precalculus
Related Manhattan, NY Tutors
Manhattan, NY Accounting Tutors
Manhattan, NY ACT Tutors
Manhattan, NY Algebra Tutors
Manhattan, NY Algebra 2 Tutors
Manhattan, NY Calculus Tutors
Manhattan, NY Geometry Tutors
Manhattan, NY Math Tutors
Manhattan, NY Prealgebra Tutors
Manhattan, NY Precalculus Tutors
Manhattan, NY SAT Tutors
Manhattan, NY SAT Math Tutors
Manhattan, NY Science Tutors
Manhattan, NY Statistics Tutors
Manhattan, NY Trigonometry Tutors
|
{"url":"http://www.purplemath.com/manhattan_ny_math_tutors.php","timestamp":"2014-04-18T03:41:36Z","content_type":null,"content_length":"23721","record_id":"<urn:uuid:c9c48497-c8e6-497f-bd31-97e47972690c>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00425-ip-10-147-4-33.ec2.internal.warc.gz"}
|
UC Berkeley Mathematician Edward Frenkel on the Transcendent World of Math
Congratulations to UC Berkeley mathematician Edward Frenkel whose book Love and Math: The Heart of Hidden Reality is in the top five science books for the year at Amazon! I wrote about Frenkel in a
different context recently when he participated in the expression of some dangerous reservations about Darwinian theory in, of all places, the New York Times Book Review ("Someone at the New York
Times Wasn't Being Sufficiently Vigilant About Stealth 'Creationism' When This One Got Through").
The philosophical issues raised by Dr. Frenkel in his book are not only fascinating but very relevant to subjects we touch on often here. Math, he argues, is not only beautiful and worthy of our
love. It also gives access to another, ultimate reality that transcends our own.
He says it briefly and eloquently in an interview in The Economist.
Does maths exist without human beings to observe it, like gravity? Or have we made it up in order to understand the physical world?
I argue, as others have done before me, that mathematical concepts and ideas exist objectively, outside of the physical world and outside of the world of consciousness. We mathematicians discover
them and are able to connect to this hidden reality through our consciousness. If Leo Tolstoy had not lived we would never have known Anna Karenina. There is no reason to believe that another
author would have written that same novel. However, if Pythagoras had not lived, someone else would have discovered exactly the same Pythagoras theorem. Moreover, that theorem means the same to
us today as it meant to Pythagoras 2,500 years ago.
So it's not subject to culture?
This is the special quality of mathematics. It means the same today as it will a thousand years from now. Our perception of the physical world can be distorted. We can disagree on many different
things, but mathematics is something we all agree on.
The only reason the theory means the same is that it describes the reality of the physical world, so mathematics must need the physical world.
Not always. Euclidian geometry deals with flat spaces, such as the three-dimensional flat space. For millennia people thought we inhabited a flat, three-dimensional world. It was only after
Einstein that we realised we lived in a curved space and that light doesn't travel in a straight line but bends around a star. Pythagoras' theorem is about geometric shapes in an idealised space,
a flat Euclidian plane which, in fact, is not found in the real world. The real world is curved. When Pythagoras discovered his theorem there were, of course, inferences from physical reality,
and a lot of mathematics is drawn from our experience in the physical world, but our imagination is limited and a lot of mathematics is actually discovered within the narrative of a hidden
mathematical world. If you look at recent discoveries, they have no a priori bearing in physical reality at all.
The naïve interpretation that mathematics comes from physical reality just doesn't work. The other interpretation that mathematics is a product of the human mind also has serious issues, because
it seems clear that some of these concepts transcend any specific individual.
Math isn't something we imagine or make for ourselves, it's something we discover. It points to a realm of objective reality beyond ours. Our reality is also objective but it is distorted, in our
perception, by subjectivity. Not so with math.
I love the point he makes about Tolstoy versus Pythagoras. Tolstoy had he never lived or had he died young would never have revealed Anna Karenina. What if Pythagoras never lived? Pythagoras'
theorem, just differently named, would have been revealed in any event.
It would be interesting to apply the same test to Darwin. (Michael Flannery has considered the question here.)
The Russian-born Frenkel is not just a brilliant mathematician -- he's also an infectiously effervescent personality. Go here and look for the charming interview he did with our friend Dennis Prager.
Photo source: Edward Frenkel, UC Berkeley/Elizabeth Lippman.
|
{"url":"http://www.evolutionnews.org/2013/12/berkeley_mathem080361.html","timestamp":"2014-04-23T09:50:51Z","content_type":null,"content_length":"29480","record_id":"<urn:uuid:e410a694-e476-43cc-b842-264df57f43b4>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00397-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Octonion Algebra. The algebra and its connection to physics.
Octonion Algebra
A presentation of the algebra and its connection to physics
This website is a repository for papers I have written describing my research into the
application of Octonion Algebra to Physics. Understanding the application requires an
understanding of Octonion Algebra itself. There are perspectives on the algebra that
anyone interested in Octonions for any reason may find enlightening, and will not be able
to find anywhere else for now.
Highlights of original work
There are only two truly different multiplication tables for Octonion Algebra, not 480.
They can be classified as Left Octonion or Right Octonion. For a given set of 7
permutation triplets, there are only 8 Right and 8 Left Octonion Algebras, not 128.
Creation methods for all possible valid Octonion Algebras and mappings between them.
The Law of Octonion Algebraic Invariance, and the Octonion Variance Sieve Process.
The Octonion Ensemble Derivative Form as the foundation for Octonion Calculus.
The Invariant 8-current and the Invariant Action Function (work-force Octonion form).
Applying the Law of Algebraic Invariance to convert the Action Function to an equivalent
integrable form. Equating the Action Function to its integrated form to produce the
Octonion Conservation of Energy and Momentum Equations. Electrodynamics is fully
covered as a subset of the presentation.
Copyright 2008-2010 Richard Lockyer
email: rick@octospace.com
Somewhat new
Somewhat new
Somewhat New
|
{"url":"http://www.octospace.com/","timestamp":"2014-04-23T22:31:39Z","content_type":null,"content_length":"6449","record_id":"<urn:uuid:ee45fca5-8f15-4094-be56-ecd81202e8f3>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00029-ip-10-147-4-33.ec2.internal.warc.gz"}
|
What is the answer to the square root of 96 over the square root of 6?
In mathematics, a square root of a number a is a number y such that y2 = a, in other words, a number y whose square (the result of multiplying the number by itself, or y × y) is a. For example, 4 and
−4 are square roots of 16 because 42 = (−4)2 = 16.
Every non-negative real number a has a unique non-negative square root, called the principal square root, which is denoted by √a, where √ is called the radical sign or radix. For example, the
principal square root of 9 is 3, denoted √9 = 3, because 32 = 3 × 3 = 9 and 3 is non-negative. The term whose root is being considered is known as the radicand. The radicand is the number or
expression underneath the radical sign, in this example 9.
In numerical analysis, a branch of mathematics, there are several square root algorithms or methods for calculating the principal square root of a nonnegative real number. For the square roots of
a negative or complex number, see below.
Finding $\sqrt{S}$ is the same as solving the equation $f(x) = x^2 - S = 0\,\!$. Therefore, any general numerical root-finding algorithm can be used. Newton's method, for example, reduces in this
case to the so-called Babylonian method:
In mathematics, a half iterate (sometimes called a functional square root) is a square root of a function with respect to the operation of function composition. In other words, a functional
square root of a function g is a function f satisfying f(f(x)) = g(x) for all x. For example, f(x) = 2x2 is a functional square root of g(x) = 8x4. Similarly, the functional square root of the
Chebyshev polynomials g(x) = T[n](x) is f(x) = cos (√n arccos(x)), in general not a polynomial.
Notations expressing that f is a functional square root of g are f = g[[½]] and f = g[½].
Hospitality is the relationship between the guest and the host, or the act or practice of being hospitable. This includes the reception and entertainment of guests, visitors, or strangers.
The word hospitality derives from the Latin hospes, meaning "host", "guest", or "stranger". Hospes is formed from hostis, which means "stranger" or "enemy" (the latter being where terms like
"hostile" derive).
Related Websites:
|
{"url":"http://answerparty.com/question/answer/what-is-the-answer-to-the-square-root-of-96-over-the-square-root-of-6","timestamp":"2014-04-18T05:32:06Z","content_type":null,"content_length":"26395","record_id":"<urn:uuid:34801c35-cf13-423f-814a-7b9e0c92ee81>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00313-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Dividing Radicals and Rationalizing the Denominator - Concept
Long division can be used to divide a polynomial by another polynomial, in this case a binomial of lower degree. When dividing polynomials, we set up the problem the same way as any long division
problem, but are careful of terms with zero coefficients. For example, in the polynomial x^3 + 3x + 1, x^2 has a coefficient of zero and needs to be included as x^3+ 0x^2+3x+1in the division problem.
One thing to remember about simplifying radical expressions is thou shall not have a radical in the denominator. What I'm talking about is you don't want to have any square roots in the bottom of the
fraction. In order to get it out of the bottom of the fraction, you're going to have to use a bunch of techniques.
First thing, if you're given a fraction that has a square root in the bottom, if you don't want to reduce the fraction first that's a possibility. Another thing you might want to try doing is looking
for the perfect square factors and reducing it like you guys have been doing with radical expressions all along.
A couple of things to keep in mind also when you see fractions. The square root of 3 plus square root of 7 is not the same thing as the square root of 3+7. That's a really important distinction. That
would be true for multiplying square root of 3 times square root of 7 is equal to the square root of 3 times 7. Don't get that stuff confused in your head. So when we're looking at these sums or
differences of radical expressions that have different radicands, we're going to be coming across what we call conjugates.
Conjugates look like this. There are two different sums in differences that have the same two terms like I have root 3 plus root 8 and root 3 minus root 8. These are called conjugates and there are
some really cool properties that come out when you're multiplying conjugates. If you multiply two conjugates, your result is always an integer or a whole or a whole number. That's a good thing when
you're trying to get square roots out of the bottom of a fraction.
So putting it all together, we have a process called rationalising the denominator. If you're given a fraction that has a square root in the denominator, you rationalise the denominator by
multiplying the numerator and denominator by the conjugate of the denominator. That'll make a lot more sense when you start looking at examples but again, most important thing to remember is that you
never want to leave a radical; expression or that means a square root in the bottom of a fraction. Always always always rationalise by multiplying by the conjugate of the denominator.
denominator numerator rationalizing the denominator conjugates
|
{"url":"https://www.brightstorm.com/math/algebra/radical-expressions-and-equations/dividing-radicals-and-rationalizing-the-denominator/?collection=col10624","timestamp":"2014-04-17T13:08:56Z","content_type":null,"content_length":"69304","record_id":"<urn:uuid:24564468-bbac-44c6-9825-3ef3c179a2d0>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00464-ip-10-147-4-33.ec2.internal.warc.gz"}
|
June 14th 2012, 06:11 AM
Can anyone please help me with these problems?
1. S((cosx(^3))sinx dx
2. S(x+1)sin((x^2)+2x dx
3. S(x^2)cos(x^3) dx
Can anyone explain me how I can apply the Gini index? Or use the Gini index when it comes down to calculus?
Thanks I appreciate it in advance.
June 14th 2012, 06:25 AM
Re: problems
I guess 'S' is being used as an integral sign? All of these require only simple substitution:
Let $u = \cos x\Rightarrow du = -\sin x\,dx.$
Let $u = x^2 + 2x\Rightarrow du = 2x + 2\,dx.$
Let $u = x^3\Rightarrow du = 3x^2\,dx.$
June 14th 2012, 06:10 PM
Re: problems
Yes the S is the integral. But how can I integrate the cos's and the sin's?
June 14th 2012, 06:17 PM
Re: problems
$\int\sin u\,du = -\cos u + C$
$\int\cos u\,du = \sin u + C$
The integration should be fairly straightforward after you make the substitutions. For example, take number 3. Using the substitution I stated earlier,
$\int x^2\cos x^3\,dx$
$=\frac13\int3x^2\cos x^3\,dx$
$=\frac13\int\cos u\,du$
$= \frac13\sin u + C$
$= \frac13\sin x^3 + C.$
|
{"url":"http://mathhelpforum.com/calculus/200015-problems-print.html","timestamp":"2014-04-16T14:37:49Z","content_type":null,"content_length":"9176","record_id":"<urn:uuid:3105d11f-8ca2-457b-8e50-7bf2097536db>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00114-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Among the Novice Hams - Inductors
[Table of Contents]People old and young enjoy waxing nostalgic about and learning some of the history of early electronics. Popular Electronics was published from October 1954 through April 1985. As
time permits, I will be glad to scan articles for you. All copyrights (if any) are hereby acknowledged.
Pliers of the amateur radio hobby have since the beginning put forth a lot of effort training fledgling entrants in the realm of electronics and communications. Up until the latter part of the last
century, there were a number of magazines - Popular Electronics among them - that would regularly print articles covering the basics of electronics. Other than the ARRL's
magazine, it seems maybe
Nuts and Volts
is the only monthly still in print that you can go to for such information. I suppose it was inevitable with the emergence and now domination of the Internet as a source for most knowledge. The
Among the Novice Hams
column in Popular Electronics often included short primers on subjects like the basics of capacitors and inductors. Here is one on inductors from March 1958. The basics still apply.
See all articles from
Popular Electronics
Among the Novice Hams
By Herb S. Brier, 9EGQ Current and Magnetism.
Imagine that a source of direct current, such as a battery, is connected across the ends of a length of wire or other conductor. An electric current, which consists of electrons in motion, will flow
through the wire. If we bring a magnetic compass near the wire, the compass needle will be deflected from its normal position. The greater the current flowing through the wire, the more the needle
will be deflected. If we reverse the battery terminals, it will be deflected in the opposite direction.
We have shown that electrons in motion (electric current) in a conductor generate a rotating magnetic field around the conductor. We have also found that the direction in which the electrons are
moving determines the direction of rotation of the magnetic lines of force. According to the "left-hand rule," when a conductor carrying current is grasped in the left hand with the thumb pointing in
the direction in which the current is flowing (towards the positive terminal), the fingers point in the direction of rotation of the magnetic field.
If we substitute a sensitive microammeter for the battery across the ends of the conductor and rapidly move a powerful permanent magnet across the conductor, the meter pointer is momentarily
deflected. The direction in which the magnet is moved determines the direction in which the meter pointer is deflected. The speed of the magnet determines how much the pointer is deflected.
Thus, a magnetic field moving across a conductor induces (causes to flow) a current in the conductor. A current will also be induced in the conductor if the magnet is held still and the conductor is
swept across its poles.
The magnitude of the effect is small in a straight length of wire. If the wire is wound into a coil like thread on a spool, the effect is greatly increased. Then the magnetic lines of force around
the wire act upon each turn and on adjacent turns as well. Figure 1 illustrates this action two-dimensionally.
If we insert a soft iron core inside the coil, even more current flow takes place, because the magnetic lines will travel through the iron much easier than through air. Consequently, the iron core
concentrates the magnetism around the turns of the coil.
Fig. 1. How magnetic lines of force around a conductor carrying current (A)
are concentrated by winding the conductor into a coil (B);
total magnetic flux path around the tightly wound coil is shown in (C).
Self Inductance.
Suppose we connect a coil containing thousands of turns of wire wound around an iron core, a source of direct current, a voltmeter, an ammeter, and a switch, as shown in Fig. 2. When the switch is
closed, the voltmeter immediately indicates the full battery voltage across the coil terminals. But the ammeter pointer moves slowly up to a position determined by the resistance of the wire in the
coil and the applied voltage.
When the switch is opened, however, the ammeter pointer immediately drops back to its zero position, but the voltmeter pointer flips up far beyond its previous position before it drops back to zero.
There will also probably be quite a large spark across the opening switch contacts.
What happened? When the switch is first closed and current starts to flow into the coil, a strong magnetic field starts to build up around the coil. This expanding magnetic field is moving;
therefore, it builds up an electromotive force of its own in the coil. This induced electromotive force is exactly opposite to the applied electromotive force. Consequently, it opposes the flow of
current into the coil-but it cannot cut off the current completely. If it did, there would be nothing to generate the magnetic field. So the current slowly increases to its steady value and supports
a steady magnetic field around the coil, but the process does take time.
When the switch is opened, the incoming current drops instantly to zero and kicks the props out from under the magnetic field, which is thus forced to collapse instantaneously. While it collapses,
the energy it contains is instantly converted back into an electromotive force in the coil, which builds up in voltage until it is sufficient to arc across the open switch contacts.
These effects are due to the inductance of the coil, which is measured in henrys. By definition, a change of one ampere per second in the amount of current flouring through an inductance of one henry
generates an electromotive force of one volt in it. The technical name for a coil containing inductance is an inductor. In radio work, the terms millihenry (0.001 henry), abbreviated mh., and
microhenry (0.000001 henry), abbreviated μh., are also used.
Applying A.C.
Figure 3 shows what happens to the current and voltage in an inductor if an a.c. generator is substituted for a d.c. generator.
For simplicity, let us assume that the a.c. generator voltage is maximum (point A) when we close the switch. Immediately, this voltage tries to force current through the inductor. But zip! The
resulting magnetic field immediately generates a counter voltage in the inductor, which sharply limits the amount of current that can flow into it. However, as time passes, the generator gradually
forces more current into the inductor, even though the generator voltage is decreasing at the same time, until 1/4 cycle or 90° later (point B), the current reaches its maximum value, just as the
generator voltage has decreased to zero.
Immediately, the a.c. generator voltage starts increasing in the opposite (negative) direction and tries to force a current through the inductor in that direction. But, as soon as the current tries
to reverse direction, the magnetic field generated by the current flowing in the original direction starts to collapse, and its energy is converted back into an electromotive force that tends to keep
the current flowing in the original direction.
Fig. 2. Theoretical circuit used to illustrate the meaning of inductance as discussed in the text.
Fig. 3. Current and voltage relationships in an inductive circuit when alternating current is applied.
At first, the electromotive force from the collapsing field is strong; so the current is high. As the cycle continues, however, this energy is used up, while the generator voltage is increasing.
Thus, at the end of 1/2 cycle or 180° (point C), the current has decreased to zero, just as the generator voltage reaches its maximum negative value.
At this point, current starts flowing into the inductor in the opposite direction, and the action of the current and voltage is like that of the previous half cycle. At the end of a complete cycle
(point E), the current and voltage relations are exactly as they were when the switch was closed. These series of actions continue as long as alternating current is fed into the inductor.
Inductive Reactance.
Obviously, inductance opposes the flow of alternating current through it. This opposition is called inductive reactance and is measured in ohms. The formula for calculating it is:
= 2 π FL; where π (pi) 3.14, F is the frequency in cycles per second, and L is the inductance in henrys. The formula is also correct if the frequency is expressed in kilocycles and the inductance in
millihenrys, or the frequency in megacycles and the inductance in microhenrys.
Don Jensen, KN6VXM, worked the 48 states and Europe with a home-brew 6146 transmitter running 50 watts. Now he uses a new Johnson Ranger transmitter.
An example will show that there is nothing mysterious about the formula. Question: What is the inductive reactance of a 10-henry choke (inductor) at a frequency of 60 cps? Answer: X
= 2 X 3.14 X 60 X 10 = 3768 ohms. At 600 cycles, its reactance is 37,680 ohms. Inductive reactance is directly proportional to frequency and inductance.
This is just the opposite of capacitive reactance, where the reactance is inversely proportional to frequency and capacitance. Another difference between inductive and capacitive reactance is that,
in a purely capacitive circuit, the current leads the voltage by 90°, while in a purely inductive circuit the current lags the voltage by 90°.
Posted 8/13/2012
|
{"url":"http://www.rfcafe.com/references/popular-electronics/among-the-novice-hams-mar-1958-popular-electronics.htm","timestamp":"2014-04-16T13:07:05Z","content_type":null,"content_length":"25984","record_id":"<urn:uuid:9be3cdf1-1230-4135-b50a-bef7ccb6d299>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00151-ip-10-147-4-33.ec2.internal.warc.gz"}
|
VHDL coding method for Cyclic Reduntancy Check(CRC)
Most of the modern communication protocols use some error detection algorithms. Cyclic Redundancy Check, or CRC, is the most popular one among these. CRC properties are defined by the generator
polynomial length and coefficients. The protocol specification usually defines CRC in hex or polynomial notation. For example, CRC-8 used in ATM HEC field is represented as 0x07 in hex notation or as
G(X)=X^8 + X^2 + X^1 +1. in the polynomial notation.The code given below is capable of computing ,CRC-8 for 32 bit input.The module need 32 clock cycles for the computation.
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
use IEEE.STD_LOGIC_ARITH.ALL;
use IEEE.STD_LOGIC_UNSIGNED.ALL;
entity crc32_8 is
port ( clk : in std_logic;
data_in : in std_logic_vector(31 downto 0);
crcout : out std_logic_vector(7 downto 0)
end crc32_8;
architecture Behavioral of crc32_8 is
signal crc_temp : std_logic_vector(7 downto 0) := "00000000";
signal counter1 : std_logic_vector(5 downto 0):="000000";
signal dtemp : std_logic_vector(31 downto 0):=(others => '0');
dtemp <= data_in;
if(data_in /= "00000000000000000000000000000000") then
if(clk'event and clk='1') then
--CRC calculation. Function used is : X^8 + X^2 + X^1 +1.
--Edit the next 8 lines to compute a different CRC function.
crc_temp(0) <= data_in(31-conv_integer(counter1(4 downto 0))) xor crc_temp(7);
crc_temp(1) <= data_in(31-conv_integer(counter1(4 downto 0))) xor crc_temp(7) xor crc_temp(0);
crc_temp(2) <= data_in(31-conv_integer(counter1(4 downto 0))) xor crc_temp(7) xor crc_temp(1);
crc_temp(3) <= crc_temp(2);
crc_temp(4) <= crc_temp(3);
crc_temp(5) <= crc_temp(4);
crc_temp(6) <= crc_temp(5);
crc_temp(7) <= crc_temp(6);
--CRC calculation is finished here.
--counter increment.
counter1 <= counter1 + '1';
end if;
if(counter1 ="100000") then --counter for doing the CRC operation for 8 times.
crcout <= crc_temp;
crc_temp <="00000000";
counter1 <= "000000";
crcout <= "00000000"; --CRC output is zero during idle time.
end if;
--CRC output is zero when input is not given or input is zero
crcout <= "00000000";
crc_temp <="00000000";
counter1 <= "000000";
end if;
end process;
end Behavioral;
When the input is zero or not given the output stays at zero.This module can be edited to calculate other CRC functions and for different input lengths.If you require program for a different CRC
function leave a comment here.I will try to post the code as soon as possible.
15 comments:
1. why the output is zero even i already put in the input?
2. I have tested this code in Xilinx ISE 10.1. It worked successfully. Please check your simulation and testbench code.
3. sorry, i am new in vhdl. i ran this code in quartus 9.0. i set the all the input to HIGH (1). however, the output is all zero.
4. If you use all the inputs as '1' then I think the output is supposed to be '0'.
First calculate the output manually and then check it with the code.
5. I will give it a try. Because initially, I set the input randomly but the output is "0". So i thought of setting all the input to "1"
6. Hey dude, i need to generate CRC-16 using X^16 + X^12 + X^5 +1 as my generator polynomial. Initial value of FFFFh and residue of F0B8h. This is according to ISO13239 standard.
7. hi everyone , thanks for the code, but my input is a std_logic_vector(14023 downto 0) how can adapt this code ?? thanks for answer :)
8. hi,
I need the testbench n codin 4 crc32.. plz help me
9. can anyone post a 64bit CRC for 64 bit input data
10. can anyone post crc 16 bit code indivisually for transmitter and receiver.
11. how can we know that the output has no error ?? i mean how can we show whether there is any error or not
12. what should be the clk value in test bench code for crc32...i m still gettin output 0 even after some random input.
13. please post the code for 4 bit and 8 bit data input
14. if I have to design CRC, 18- bit data in, and the CRC value output is 6-bit. if i have the remainder is 111001 AS X^5 + X^4 + X^3 + 1
and the data stream as a test input (LSB on left)
could you please try to help post the change.
thank you
15. can u give me the code for QPSK modulation
|
{"url":"http://vhdlguru.blogspot.com/2010/03/vhdl-coding-method-for-cyclic.html","timestamp":"2014-04-19T22:29:10Z","content_type":null,"content_length":"145634","record_id":"<urn:uuid:8e77b43f-0152-4e22-885d-81b7c701e18d>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00168-ip-10-147-4-33.ec2.internal.warc.gz"}
|
ALEX Lesson Plan: I Want My Half- An Interactive Lesson Introducing Fractions
Step 1: Explain to the students that you have one sandwich, but you want to share it with a friend. Ask the students if they can figure out a way to share your sandwich with your friend and both of
you have equal parts. Have students turn and talk with a partner to help you solve your problem. Give students time to discuss. Select a pair of students to share with the rest of the class.
Step 2: Explain to students that during this lesson they will learn to identify parts of a whole with two, three, or four equal parts.
Step 3: Read Eating Fractions, by Bruce McMillan. Stop on each page and discuss with students what the picture shows.
Step 1: Hand out the bags of clay and plastic knife. Have students locate the bag with the yellow modeling clay. They will work with their partner to discover a way to cut the yellow modeling clay
role into two equal parts. The teacher will walk around the room and observe the students working cooperatively. The teacher will also observe each group dividing the clay shape into half (see
assessment rubric, Identifying Fractions). Once the teacher has observed the students successfully dividing the shape into half, the teacher will have the students journal their answer by drawing a
picture of their answer on the fraction recording sheet. Then the teacher will challenge the students to find another way to cut the shape into two equal parts. Give students time to explore the
different ways they can divide the shape into 2 equal parts.
**Have a copy of the book Eating Fractions for students to refer to if they are having a problem.**
Step 2. Repeat the same procedure with the brown modeling clay. This time the students are dividing the shape into three equal parts. Remind the students to journal their answer. Once again,
challenge the students to figure out a different way to divide the shape into 3 equal parts.
Step 3: Repeat the same procedure with the red modeling clay. This time the students are dividing the shape into four equal parts. Remind the students to journal their answer. Once again, challenge
the students to figure out a different way to divide the shape into 4 equal parts.
Step 1: Have students meet back as a whole group. Ask students if the pictures they drew reminded them of any they saw in the book Eating Fractions. Show the students the pictures from the book and
discuss how each picture is divided into equal parts. Have selective students demonstrate how they divided their modeling clay into halves, thirds, and fourths. Encourage students to use the fraction
vocabulary in their descriptions.
Step 2: Watch I Want My Half. Choose different students to complete the interactive activities within the lesson.
Extend As an extension activity, the students will make a fraction poster. They will label the large sheet of construction paper with 1/2, 1/3, 1/4. They will take the construction paper shapes and
cut them into halves, thirds, and fourths and glue them under the correct label.
As an extension activity, the students will complete the interactive game, Cross The River.
As the students are completing the activities, teacher will use the Identifying Fractions rubric to assess student understanding of fraction identification.
|
{"url":"http://alex.state.al.us/lesson_view.php?id=26357","timestamp":"2014-04-18T03:04:56Z","content_type":null,"content_length":"62739","record_id":"<urn:uuid:f62ec9a4-723d-45aa-9bed-084bf647ba47>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00074-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Analytic function
From Wikipedia, the free encyclopedia
This article is about both real and complex analytic functions. For analytic functions in complex analysis specifically, see
holomorphic function
In mathematics, an analytic function is a function that is locally given by a convergent power series. There exist both real analytic functions and complex analytic functions, categories that are
similar in some ways, but different in others. Functions of each type are infinitely differentiable, but complex analytic functions exhibit properties that do not hold generally for real analytic
functions. A function is analytic if and only if its Taylor series about x[0] converges to the function in some neighborhood for every x[0] in its domain.
Formally, a function ƒ is real analytic on an open set D in the real line if for any x[0] in D one can write
$f(x) = \sum_{n=0}^\infty a_{n} \left( x-x_0 \right)^{n} = a_0 + a_1 (x-x_0) + a_2 (x-x_0)^2 + a_3 (x-x_0)^3 + \cdots$
in which the coefficients a[0], a[1], ... are real numbers and the series is convergent to ƒ(x) for x in a neighborhood of x[0].
Alternatively, an analytic function is an infinitely differentiable function such that the Taylor series at any point x[0] in its domain
$T(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(x_0)}{n!} (x-x_0)^{n}$
converges to f(x) for x in a neighborhood of x[0] pointwise (and locally uniformly). The set of all real analytic functions on a given set D is often denoted by C^ω(D).
A function ƒ defined on some subset of the real line is said to be real analytic at a point x if there is a neighborhood D of x on which ƒ is real analytic.
The definition of a complex analytic function is obtained by replacing, in the definitions above, "real" with "complex" and "real line" with "complex plane". A function is complex analytic if and
only if it is holomorphic i.e. it is complex differentiable. For this reason the terms "holomorphic" and "analytic" are often used interchangeably for such functions.^1
Most special functions are analytic (at least in some range of the complex plane). Typical examples of analytic functions are:
• Any polynomial (real or complex) is an analytic function. This is because if a polynomial has degree n, any terms of degree larger than n in its Taylor series expansion must immediately vanish to
0, and so this series will be trivially convergent. Furthermore, every polynomial is its own Maclaurin series.
• The exponential function is analytic. Any Taylor series for this function converges not only for x close enough to x[0] (as in the definition) but for all values of x (real or complex).
Typical examples of functions that are not analytic are:
• The absolute value function when defined on the set of real numbers or complex numbers is not everywhere analytic because it is not differentiable at 0. Piecewise defined functions (functions
given by different formulas in different regions) are typically not analytic where the pieces meet.
• The complex conjugate function z → z* is not complex analytic, although its restriction to the real line is the identity function and therefore real analytic, and it is real analytic as a
function from R² to R².
Alternative characterizations
If ƒ is an infinitely differentiable function defined on an open set D ⊂ R, then the following conditions are equivalent.
1) ƒ is real analytic.
2) There is a complex analytic extension of ƒ to an open set G ⊂ C which contains D.
3) For every compact set K ⊂ D there exists a constant C such that for every x ∈ K and every non-negative integer k the following bound holds^citation needed
$\left | \frac{d^k f}{dx^k}(x) \right | \leq C^{k+1} k!$
The real analyticity of a function ƒ at a given point x can be characterized using the FBI transform.
Complex analytic functions are exactly equivalent to holomorphic functions, and are thus much more easily characterized.
Properties of analytic functions
• The sums, products, and compositions of analytic functions are analytic.
• The reciprocal of an analytic function that is nowhere zero is analytic, as is the inverse of an invertible analytic function whose derivative is nowhere zero. (See also the Lagrange inversion
• Any analytic function is smooth, that is, infinitely differentiable. The converse is not true for real functions; in fact, in a certain sense, the real analytic functions are sparse compared to
all real infinitely differentiable functions. For the complex numbers, the converse does hold, and in fact any function differentiable once on an open set is analytic on that set (see
"analyticity and differentiability" below).
• For any open set Ω ⊆ C, the set A(Ω) of all analytic functions u : Ω → C is a Fréchet space with respect to the uniform convergence on compact sets. The fact that uniform limits on compact sets
of analytic functions are analytic is an easy consequence of Morera's theorem. The set $\scriptstyle A_\infty(\Omega)$ of all bounded analytic functions with the supremum norm is a Banach space.
A polynomial cannot be zero at too many points unless it is the zero polynomial (more precisely, the number of zeros is at most the degree of the polynomial). A similar but weaker statement holds for
analytic functions. If the set of zeros of an analytic function ƒ has an accumulation point inside its domain, then ƒ is zero everywhere on the connected component containing the accumulation point.
In other words, if (r[n]) is a sequence of distinct numbers such that ƒ(r[n]) = 0 for all n and this sequence converges to a point r in the domain of D, then ƒ is identically zero on the connected
component of D containing r. This is known as the Principle of Permanence.
Also, if all the derivatives of an analytic function at a point are zero, the function is constant on the corresponding connected component.
These statements imply that while analytic functions do have more degrees of freedom than polynomials, they are still quite rigid.
Analyticity and differentiability
As noted above, any analytic function (real or complex) is infinitely differentiable (also known as smooth, or C^∞). (Note that this differentiability is in the sense of real variables; compare
complex derivatives below.) There exist smooth real functions that are not analytic: see non-analytic smooth function. In fact there are many such functions.
The situation is quite different when one considers complex analytic functions and complex derivatives. It can be proved that any complex function differentiable (in the complex sense) in an open set
is analytic. Consequently, in complex analysis, the term analytic function is synonymous with holomorphic function.
Real versus complex analytic functions
Real and complex analytic functions have important differences (one could notice that even from their different relationship with differentiability). Analyticity of complex functions is a more
restrictive property, as it has more restrictive necessary conditions and complex analytic functions have more structure than their real-line counterparts.^2
According to Liouville's theorem, any bounded complex analytic function defined on the whole complex plane is constant. The corresponding statement for real analytic functions, with the complex plane
replaced by the real line, is clearly false; this is illustrated by
Also, if a complex analytic function is defined in an open ball around a point x[0], its power series expansion at x[0] is convergent in the whole ball (analyticity of holomorphic functions). This
statement for real analytic functions (with open ball meaning an open interval of the real line rather than an open disk of the complex plane) is not true in general; the function of the example
above gives an example for x[0] = 0 and a ball of radius exceeding 1, since the power series 1 − x^2 + x^4 − x^6... diverges for |x| > 1.
Any real analytic function on some open set on the real line can be extended to a complex analytic function on some open set of the complex plane. However, not every real analytic function defined on
the whole real line can be extended to a complex function defined on the whole complex plane. The function ƒ(x) defined in the paragraph above is a counterexample, as it is not defined for x = ±i.
This explains why the Taylor series of ƒ(x) diverges for |x| > 1, i.e., the radius of convergence is 1 because the complexified function has a pole at distance 1 from the evaluation point 0 and no
further poles within the open disc of radius 1 around the evaluation point.
Analytic functions of several variables
One can define analytic functions in several variables by means of power series in those variables (see power series). Analytic functions of several variables have some of the same properties as
analytic functions of one variable. However, especially for complex analytic functions, new and interesting phenomena show up when working in 2 or more dimensions. For instance, zero sets of complex
analytic functions in more than one variable are never discrete.
See also
1. ^ "A function f of the complex variable z is analytic at at point z[0] if its derivative exists not only at z but at each point z in some neighborhood of z[0]. It is analytic in a region R if it
is analytic as every point in R. The term holomorphic is also used in the literature do denote analyticity." Churchill, Brown, and Verhey Complex Variables and Applications McGraw-Hill 1948 ISBN
0-07-010855-2 pg 46
External links
|
{"url":"http://www.territorioscuola.com/wikipedia/en.wikipedia.php?title=Analytic_function","timestamp":"2014-04-19T23:06:36Z","content_type":null,"content_length":"87173","record_id":"<urn:uuid:06da588f-11bf-419a-8105-b535e20892d9>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00250-ip-10-147-4-33.ec2.internal.warc.gz"}
|
River Oaks, TX Precalculus Tutor
Find a River Oaks, TX Precalculus Tutor
...I also taught a graduate level pharmacology class at Oakland University for several years. In addition, I have published book chapters and peer-reviewed papers on the subject. These are
available upon request.
55 Subjects: including precalculus, chemistry, statistics, writing
...I can meet anywhere (typically a local library, Starbucks, or in the comfort of your home). About me: I am a senior at Texas Wesleyan, completing my bachelor's degree in Mathematics and
Secondary Education. I have been tutoring for the last year, with an emphasis in Algebra, Geometry, and Pre-C...
13 Subjects: including precalculus, physics, calculus, geometry
...I have a master's degree in Mathematics. I have also passed two of the actuarial exams. Logic is an integral part of all levels of math and first encountered before high school.
15 Subjects: including precalculus, chemistry, statistics, calculus
I am a recently retired (2013) high school math teacher with 30 years of classroom experience. I have taught all maths from 7th grade through AP Calculus. I like to focus on a constructivist style
of teaching/learning which gets the student to a conceptual understanding of mathematical topics.
12 Subjects: including precalculus, calculus, geometry, statistics
...Topics usually covered in Geometry include points, lines, angles, properties of angle formed by parallel lines with intersecting lines, triangles, the Pythagorean theorem, quadrilaterals,
polygons, solving equations and inequalities involving geometric figures, properties of irrational numbers. ...
82 Subjects: including precalculus, English, chemistry, calculus
|
{"url":"http://www.purplemath.com/River_Oaks_TX_Precalculus_tutors.php","timestamp":"2014-04-18T00:49:11Z","content_type":null,"content_length":"24251","record_id":"<urn:uuid:34b3c666-e0f8-4c5e-a64a-239cb0c752bf>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00250-ip-10-147-4-33.ec2.internal.warc.gz"}
|
An equation for eternity
History of algebra
An equation for eternity
THE roots of algebra, as John Derbyshire tells us, go back to the ancient world: the Babylonians left cuneiform tablets showing simple algebraic problems. Its actual birth is usually credited to
Diophantus of Alexandria, who wrote “Arithmetica” in Greek during the third century. Progress was slow. Negative numbers, or even the number zero, had not yet been invented, and the notation was
cumbersome (try doing multiplication with Roman numerals).
Medieval Islamic scholars such as Muhammad al-Khwarizmi and Omar Khayyam also worked on algebraic problems (and gave us words such as “algorithm” and indeed “algebra” itself). But it is in
16th-century Italy that the story gets exciting. Italian mathematicians engaged in bitter feuds, challenging each other to solve ever more complicated equations. The crucial step came in a book by a
physician called Girolamo Cardano which presented formulas for both cubic and quartic equations.
At this point, the future of algebra looked rosy. In the 17th century René Descartes did everyone a favour by introducing modern algebraic notation, including the use of the letter x for unknowns
(some say this was the choice of a printer who was running low on y's and z's).
But the subject had actually stalled. Mathematicians across Europe worked feverishly on quintic equations without success. It was only in the 19th century that Niels Henrik Abel discovered why, by
showing that it is an impossible problem: there is no general formula that solves every quintic equation. What led to this breakthrough? In part, it was that mathematicians began to ask different
questions. After centuries spent working on individual equations, they began to concentrate more on the patterns and symmetries to be found in different types of equations and their solutions.
Specific problems were replaced by general theories, with spectacular success. As the 19th century went on, the scope of algebra also expanded, as new algebraic objects were studied: not just
numbers, but matrices (arrays of numbers), and new inventions such as groups, rings and fields.
Mr Derbyshire, whose book has just been published in Britain after appearing in America last year, gives an intriguing account of these developments, and of the mathematicians involved, such as
George Boole, who “married algebra to logic” by inventing algebraic ways to express logical arguments: he led an almost saintly life, but died after his wife treated him for a chill by dousing him in
buckets of icy water. The most tragic and romantic figure is the French mathematician, Evariste Galois, who died aged 20 in a pistol duel. He became famous a decade after his death, when his work on
the algebraic structure of solutions was at last published.
In earlier centuries, mathematics had been a pursuit of amateurs. But it was now a matter for professionals, and increasingly difficult for anybody else to understand. Indeed, in 1870 a Norwegian
mathematician called Sophus Lie became a media sensation after being arrested outside Paris: found with a backpack filled with indecipherable mathematical notes, he was thought to be a spy.
At times, even mathematicians have been suspicious of the “abstractions of abstractions of abstractions” of modern algebra. But, as Mr Derbyshire shows, algebra today is an essential part of the
wider mathematical landscape, with a huge range of applications from encrypting communications to the construction of computer chips. His book is a demanding read, with its fair share of mathematical
diagrams and equations, but the fascination of the subject does come across.
|
{"url":"http://www.economist.com/node/9142442/print","timestamp":"2014-04-21T05:11:03Z","content_type":null,"content_length":"61384","record_id":"<urn:uuid:a84e99da-2d79-4eea-b373-9af743278bcf>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00042-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Has anyone seen this sort of graph property used before?
up vote 9 down vote favorite
Consider the following property of a graph $G$:
The graph $G$ has no independent cutset of vertices, $S$, such that the number of components of $G-S$ is more than $|S|$ (the size of $S$).
(That is, cannot delete 1 vertex and leave 2+ components, cannot delete 2 independent vertices and leave 3+ components etc.)
For some as-yet-unexplained reason, this property has arisen in a couple of questions relating to chromatic roots; needing a name we called this property $\alpha$-1-tough, which uses the notation
from graph toughness plus the adjective $\alpha$ to indicate "independent".
Basically we believe that $\alpha$-1-tough graphs are well-behaved with respect to chromatic polynomials; the evidence is that various small graphs that violate certain reasonably well-founded and
natural conjectures are very clearly NOT $\alpha$-1-tough.
Having failed miserably at all attempts to prove anything sensible using this property, I wondered if anyone anywhere has seen this, or a similar, graph property appear anywhere.
(I have posted a longer article about this on my (shared) blog, but am not sure of the policy about posting links to your own stuff so I won't do so just in case.)
Edit: The blog entry is http://symomega.wordpress.com/2012/01/06/chromatic-roots-the-multiplicity-of-2/
graph-theory chromatic-polynomial
1 I think you should link to the relevant blog entry. Anyone who wants to investigate this would appreciate knowing more details. – Joseph O'Rourke Jan 11 '12 at 13:26
Ok, now added... just didn't want anyone to think that I'm trying to drive traffic to my blog (not that there would be any point). – Gordon Royle Jan 11 '12 at 22:50
gordon.royle@uwa.edu.au – Gordon Royle Jan 23 '12 at 23:10
Dear Gordon, thank you for your Email address. – Shahrooz Jan 24 '12 at 11:23
add comment
1 Answer
active oldest votes
A more relaxed notion of independent (or stable) cutsets -- in which the number of remaining components is not relevant -- was studied in relation to the chromatic number in a 1983
paper by Tucker, see http://dx.doi.org/10.1016/0095-8956(83)90039-4
up vote 3 down More recently, Brandstädt et al. proved that it is NP-complete to recognize whether a graph has a stable cutset even for restricted graph classes, see http://dx.doi.org/10.1016/
vote S0166-218X(00)00197-9
add comment
Not the answer you're looking for? Browse other questions tagged graph-theory chromatic-polynomial or ask your own question.
|
{"url":"http://mathoverflow.net/questions/85404/has-anyone-seen-this-sort-of-graph-property-used-before/86475","timestamp":"2014-04-18T03:29:33Z","content_type":null,"content_length":"55984","record_id":"<urn:uuid:535288d9-4046-409b-a806-0eff9d881976>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00076-ip-10-147-4-33.ec2.internal.warc.gz"}
|
DragonFly On-Line Manual Pages
IEEE(3) DragonFly Library Functions Manual IEEE(3)
ieee -- IEEE standard 754 for floating-point arithmetic
The IEEE Standard 754 for Binary Floating-Point Arithmetic defines repre-
sentations of floating-point numbers and abstract properties of arith-
metic operations relating to precision, rounding, and exceptional cases,
as described below.
IEEE STANDARD 754 Floating-Point Arithmetic
Radix: Binary.
Overflow and underflow:
Overflow goes by default to a signed infinity. Underflow is
Zero is represented ambiguously as +0 or -0.
Its sign transforms correctly through multiplication or division,
and is preserved by addition of zeros with like signs; but x-x
yields +0 for every finite x. The only operations that reveal
zero's sign are division by zero and copysign(x, +-0). In particu-
lar, comparison (x > y, x >= y, etc.) cannot be affected by the
sign of zero; but if finite x = y then infinity = 1/(x-y) !=
-1/(y-x) = -infinity.
Infinity is signed.
It persists when added to itself or to any finite number. Its sign
transforms correctly through multiplication and division, and
(finite)/+-infinity = +-0 (nonzero)/0 = +-infinity. But infin-
ity-infinity, infinity*0 and infinity/infinity are, like 0/0 and
sqrt(-3), invalid operations that produce NaN. ...
Reserved operands (NaNs):
An NaN is (Not a Number). Some NaNs, called Signaling NaNs, trap
any floating-point operation performed upon them; they are used to
mark missing or uninitialized values, or nonexistent elements of
arrays. The rest are Quiet NaNs; they are the default results of
Invalid Operations, and propagate through subsequent arithmetic
operations. If x != x then x is NaN; every other predicate (x > y,
x = y, x < y, ...) is FALSE if NaN is involved.
Every algebraic operation (+, -, *, /, \/) is rounded by default to
within half an ulp, and when the rounding error is exactly half an
ulp then the rounded value's least significant bit is zero. (An
ulp is one Unit in the Last Place.) This kind of rounding is usu-
ally the best kind, sometimes provably so; for instance, for every
x = 1.0, 2.0, 3.0, 4.0, ..., 2.0**52, we find (x/3.0)*3.0 == x and
(x/10.0)*10.0 == x and ... despite that both the quotients and the
products have been rounded. Only rounding like IEEE 754 can do
that. But no single kind of rounding can be proved best for every
circumstance, so IEEE 754 provides rounding towards zero or towards
+infinity or towards -infinity at the programmer's option.
IEEE 754 recognizes five kinds of floating-point exceptions, listed
below in declining order of probable importance.
Exception Default Result
Invalid Operation NaN, or FALSE
Overflow +-infinity
Divide by Zero +-infinity
Underflow Gradual Underflow
Inexact Rounded value
NOTE: An Exception is not an Error unless handled badly. What
makes a class of exceptions exceptional is that no single default
response can be satisfactory in every instance. On the other hand,
if a default response will serve most instances satisfactorily, the
unsatisfactory instances cannot justify aborting computation every
time the exception occurs.
Data Formats
Type name: float
Wordsize: 32 bits.
Precision: 24 significant bits, roughly like 7 significant deci-
If x and x' are consecutive positive single-precision numbers (they
differ by 1 ulp), then
5.9e-08 < 0.5**24 < (x'-x)/x <= 0.5**23 < 1.2e-07.
Range: Overflow threshold = 2.0**128 = 3.4e38
Underflow threshold = 0.5**126 = 1.2e-38
Underflowed results round to the nearest integer multiple of
0.5**149 = 1.4e-45.
Type name: double (On some architectures, long double is the same
as double)
Wordsize: 64 bits.
Precision: 53 significant bits, roughly like 16 significant deci-
If x and x' are consecutive positive double-precision numbers (they
differ by 1 ulp), then
1.1e-16 < 0.5**53 < (x'-x)/x <= 0.5**52 < 2.3e-16.
Range: Overflow threshold = 2.0**1024 = 1.8e308
Underflow threshold = 0.5**1022 = 2.2e-308
Underflowed results round to the nearest integer multiple of
0.5**1074 = 4.9e-324.
Type name: long double (when supported by the hardware)
Wordsize: 96 bits.
Precision: 64 significant bits, roughly like 19 significant deci-
If x and x' are consecutive positive extended-precision numbers
(they differ by 1 ulp), then
1.0e-19 < 0.5**63 < (x'-x)/x <= 0.5**62 < 2.2e-19.
Range: Overflow threshold = 2.0**16384 = 1.2e4932
Underflow threshold = 0.5**16382 = 3.4e-4932
Underflowed results round to the nearest integer multiple of
0.5**16445 = 5.7e-4953.
Type name: long double (when supported by the hardware)
Wordsize: 128 bits.
Precision: 113 significant bits, roughly like 34 significant deci-
If x and x' are consecutive positive quad-extended-precision num-
bers (they differ by 1 ulp), then
9.6e-35 < 0.5**113 < (x'-x)/x <= 0.5**112 < 2.0e-34.
Range: Overflow threshold = 2.0**16384 = 1.2e4932
Underflow threshold = 0.5**16382 = 3.4e-4932
Underflowed results round to the nearest integer multiple of
0.5**16494 = 6.5e-4966.
Additional Information Regarding Exceptions
For each kind of floating-point exception, IEEE 754 provides a Flag that
is raised each time its exception is signaled, and stays raised until the
program resets it. Programs may also test, save and restore a flag.
Thus, IEEE 754 provides three ways by which programs may cope with excep-
tions for which the default result might be unsatisfactory:
1. Test for a condition that might cause an exception later, and branch
to avoid the exception.
2. Test a flag to see whether an exception has occurred since the pro-
gram last reset its flag.
3. Test a result to see whether it is a value that only an exception
could have produced.
CAUTION: The only reliable ways to discover whether Underflow has
occurred are to test whether products or quotients lie closer to
zero than the underflow threshold, or to test the Underflow flag.
(Sums and differences cannot underflow in IEEE 754; if x != y then
x-y is correct to full precision and certainly nonzero regardless of
how tiny it may be.) Products and quotients that underflow gradu-
ally can lose accuracy gradually without vanishing, so comparing
them with zero (as one might on a VAX) will not reveal the loss.
Fortunately, if a gradually underflowed value is destined to be
added to something bigger than the underflow threshold, as is almost
always the case, digits lost to gradual underflow will not be missed
because they would have been rounded off anyway. So gradual under-
flows are usually provably ignorable. The same cannot be said of
underflows flushed to 0.
At the option of an implementor conforming to IEEE 754, other ways to
cope with exceptions may be provided:
1. ABORT. This mechanism classifies an exception in advance as an
incident to be handled by means traditionally associated with error-
handling statements like "ON ERROR GO TO ...". Different languages
offer different forms of this statement, but most share the follow-
ing characteristics:
- No means is provided to substitute a value for the offending
operation's result and resume computation from what may be the
middle of an expression. An exceptional result is abandoned.
- In a subprogram that lacks an error-handling statement, an
exception causes the subprogram to abort within whatever program
called it, and so on back up the chain of calling subprograms
until an error-handling statement is encountered or the whole
task is aborted and memory is dumped.
2. STOP. This mechanism, requiring an interactive debugging environ-
ment, is more for the programmer than the program. It classifies an
exception in advance as a symptom of a programmer's error; the
exception suspends execution as near as it can to the offending
operation so that the programmer can look around to see how it hap-
pened. Quite often the first several exceptions turn out to be
quite unexceptionable, so the programmer ought ideally to be able to
resume execution after each one as if execution had not been
3. ... Other ways lie beyond the scope of this document.
Ideally, each elementary function should act as if it were indivisible,
or atomic, in the sense that ...
1. No exception should be signaled that is not deserved by the data
supplied to that function.
2. Any exception signaled should be identified with that function
rather than with one of its subroutines.
3. The internal behavior of an atomic function should not be disrupted
when a calling program changes from one to another of the five or so
ways of handling exceptions listed above, although the definition of
the function may be correlated intentionally with exception han-
The functions in libm are only approximately atomic. They signal no
inappropriate exception except possibly ...
when a result, if properly computed, might have lain barely
within range, and
Inexact in cabs(), cbrt(), hypot(), log10() and pow()
when it happens to be exact, thanks to fortuitous cancella-
tion of errors.
Otherwise, ...
Invalid Operation is signaled only when
any result but NaN would probably be misleading.
Overflow is signaled only when
the exact result would be finite but beyond the overflow
Divide-by-Zero is signaled only when
a function takes exactly infinite values at finite oper-
Underflow is signaled only when
the exact result would be nonzero but tinier than the
underflow threshold.
Inexact is signaled only when
greater range or precision would be needed to represent the
exact result.
fenv(3), ieee_test(3), math(3)
An explanation of IEEE 754 and its proposed extension p854 was published
in the IEEE magazine MICRO in August 1984 under the title "A Proposed
Radix- and Word-length-independent Standard for Floating-point Arith-
metic" by W. J. Cody et al. The manuals for Pascal, C and BASIC on the
Apple Macintosh document the features of IEEE 754 pretty well. Articles
in the IEEE magazine COMPUTER vol. 14 no. 3 (Mar. 1981), and in the ACM
SIGNUM Newsletter Special Issue of Oct. 1979, may be helpful although
they pertain to superseded drafts of the standard.
IEEE Std 754-1985
DragonFly 3.7 January 26, 2005 DragonFly 3.7
|
{"url":"http://leaf.dragonflybsd.org/cgi/web-man?command=ieee§ion=3","timestamp":"2014-04-16T04:27:59Z","content_type":null,"content_length":"12504","record_id":"<urn:uuid:f032c439-6cd9-40a1-815f-4ac8c3108a89>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00586-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Versatile algorithms for nanoscale designs
The latest news from academia, regulators research labs and other things of interest
Posted: Nov 04, 2010
Versatile algorithms for nanoscale designs
(Nanowerk News) Today's RFIC (integrated circuits for radio frequency) design is integrated with digital and analog modules on the same die, posing severe challenges to existing simulation tools. The
ambitious objective of the EU-funded ICESTARS research project has been to overcome the barriers in both existing and future radio frequency design flows by developing and deploying integrated
simulation algorithms and prototype tools.
Driven by the market demand for higher bandwidth and more end-product capability, RF designs are moving into higher frequency ranges and growing in complexity. The processes to develop both
electronic design automation (EDA) and computer aided design (CAD) – indispensable to design integrated circuits for RF design – and their underlying mathematics are themselves complex. It
necessitates new modelling approaches, new mathematical solution procedures and numerical simulations with mixed analog and digital signals. That is where the ICESTARS (Integrated Circuit/EM
Simulation and Design Technologies for Advanced Radio Systems-on-chip) research focus is situated. The consortium comprises five leading mathematical institutes (Universities of Cologne, Wuppertal,
Upper Austria [Hagenberg], Oulu and Aalto[Espoo]), two semiconductor companies (NXP Semiconductors [Eindhoven] and Infineon Technologies [Munich]) and two software providers (AWR-APLAC [Espoo] and
Magwel [Leuven]).
"Advancing RF design in super high and extremely high frequencies (SHF and EHF, i.e., beyond 3 GHz) necessitates new transceiver architectures and CAD tools as today's EDA tools are functionally not
adequately addressing the simulation challenges of high-frequency designs. The project's research areas have been the efficient connection between the frequency domain, where the RF part of wireless
transceiver systems is usually designed, and the time domain, where the digital signal processing and control logic are developed", Jan ter Maten, NXP Semiconductors, outlines what ICESTARS is about.
"Then, in electromagnetic (EM) analysis and coupled EM circuit analysis we deal with the 'communication' of the physical layer (such as mapping of devices) and the mathematical one."
A sound mathematical basis is the starting point of all ICESTARS research. Mathematical equations such as ordinary differential equations (ODEs), differential-algebraic equations (DAEs) and partial
differential-algebraic equations (PDAEs) are basis of time- and frequency-domain analyses, whose purpose is to predict the behaviour of the designed ICs, before the expensive manufacturing process
starts. In ICESTARS these algorithms have been modified to cover extended functionalities and entire new algorithms have been developed to meet the simulation demands of circuits operating in
frequency beyond 3 GHz.
An entirely new mathematical undertaking
When it comes to mutual simulation of digital and analog RF parts, standard time-domain techniques alone are far from sufficient. Therefore, in ICESTARS, a prototype of adaptive wavelet-based
analysis, an entirely new circuit simulation algorithm has been developed and successfully tested at Infineon. In circuit-envelope simulation, input waveforms are represented as RF carriers with
modulation envelopes. By embedding the system of DAEs into partial DAEs the project succeeded in formulating a general mathematical framework that can be adapted to different classes of RF circuits.
An optimal dynamic time splitting allows efficient simulation of frequency or amplitude modulated signals.
Adaptivity was core to the frequency-domain research in the project. Adaptivity denotes the dynamic simulator adjustment to the frequency response of amplifiers, filters, mixers etc. in terms of
network parameters or frequency-dependent noise. The project aimed at achieving reasonable estimates for the initial conditions for distortion analysis of free-running oscillators, and for the first
time, in ICESTARS, a truly generic multi-device, a so-called VoHB algorithm, was coded and tested for circuits that are larger than plain single-transistor power amplifiers. New robust and efficient
nonlinear solution methods have been developed in close cooperation between academia and industries.
The ever increasing miniaturisation of future circuits realised in physical models necessitates the simulation of simulation of circuits that take electromagnetic field effects into account..
Simulations are used to extract and verify the compact models for both active and passive devices by computation of how they interact locally. Conventional equations for circuits neglect such
physical effects and only try to re-build complex building blocks using single parameters – a procedure lacking efficiency as there might be up to 800 parameters. In an entirely new mathematical
undertaking these problem was tackled by modeling the building blocks using PDAEs to better project the physical complexity of the models.
As a proof of concept the academically developed mathematical analysis methods have been implemented by the industrial partners and Upper Austria University in real-life simulation and/or industrial
use cases. The simulation results of the tools and algorithms were then compared against the results obtained with commercial or public domain tools – the ICESTARS validation has successfully covered
the complete functionality of the tools that have been developed.
But that is the just the beginning. The advanced algorithms developed within ICESTARS have the potential to substantially reduce the simulation overhead within the RF design process, thereby
improving the RF designer's ability to deal with chip development for the generations ahead.
Subscribe to a free copy of one of our daily
Nanowerk Newsletter Email Digests
with a compilation of all of the day's news.
|
{"url":"http://www.nanowerk.com/news/newsid=18832.php","timestamp":"2014-04-21T05:12:49Z","content_type":null,"content_length":"39683","record_id":"<urn:uuid:7bc3eec3-ee40-4652-a533-b621253d0c74>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00326-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How to cannonicate numbers in international form
Joachim Breitner wrote: > uff, this is bad. Yeah ;-) > So can we still come up with code that tells us, for > numbers not in the GSM format (+4917212345), that they are _very likely_ > the same? BTW,
the numbering plan I mentioned is used with GSM. > For example, a very simple algorithm might be: > If the last 7 digits are the same, the numbers probably refer to the > same person? How about
making it configurable, with rules depending on prefix and "local zone(s)". E.g., * -> last six digits must match +4121* -> last seven digits must match (i.e., landline, Vaud, Switzerland) +5411* ->
last eight digits must match (i.e., landline, Buenos Aires) AR && 011* -> as above BUE && 15* -> last eight digits must match (i.e., cellular, Buenos Aires) (In fact, the last rule could just be
"BUE", since all numbers here are unique in their last eight digits.) A number you know is only unique if the last N digits match would only match any other number, no matter what you know about that
one, if at least N digits match. In most cases, just a simple "n last digits match" test would be sufficient. That's what all the usual phones do anyway, and false matches are very rare. But hey, we
can be more sophisticated :-) By the way, I don't think requiring people to enter canonical numbers is a good idea. When you travel, you often enough get numbers that work if you dial them locally
"as is", but you may not necessarily have a good enough understanding of the local numbering plan to turn them into fully qualified numbers. - Werner
|
{"url":"http://lists.openmoko.org/pipermail/devel/2008-July/000088.html","timestamp":"2014-04-19T07:02:14Z","content_type":null,"content_length":"5046","record_id":"<urn:uuid:8db2f8e0-fafd-47df-968e-2d99a7c50ca2>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00563-ip-10-147-4-33.ec2.internal.warc.gz"}
|
This ebook provided by RISPs constitutes a collection of forty open-ended investigative activities for use in the A Level pure mathematics classroom. The book consists of two parts: the first part of
the ebook lists all activities indexed by topic together with the teachers' notes. The second part advises on how to make the…
• Publication year: 2000 - 2009
• Activity sheet
The first of two RISP activities, Venn Diagrams explores lines in co-odinate geometry and quadratic functions although the idea can be extended to cover other topics. Students are given a Venn
diagram with the sets defined. Students are required to find pairs of lines which fit into each of the eight regions. The task is then repeated…
• Publication year: 2000 - 2009
• Activity sheet
Odd One Out, provided by RISPs, helps students focus on mathematical properties to determine which of three numbers or expressions is the odd one out. For example, given the triplet 2, 3, 9: 2 could
be the odd one out because it is even and the others are odd. 3 could be the odd one out because it is the only triangle number. 9…
• Not yet rated
• Publication year: 2000 - 2009
• Activity sheet
The first of two RISP activities, Modelling the Spread of a Disease requires students to carry out a simulation of a disease spreading. Students carry out an experiment then pool their results to
estimate the proportion of the population that will have caught the disease. The experiment is then modelled mathematically using differential…
• Not yet rated
• Publication year: 2000 - 2009
• Activity sheet
This RISP activity is in the form of a puzzle in which students are given a general parametric equation with missing coefficients. A number of clues are given such as a point through which the curve
passes. Students have to use their understanding of parametric equations to find the missing numbers. There are a variety of different…
• Not yet rated
• Publication year: 2000 - 2009
• Activity sheet
The first of two RISP activities, Radians and Degrees, students are set the task of finding an angle whose sine value is the same whether measured in radians or degrees. Solving the problem leads to
further discussion about the relationship between radians and degrees and general trigonometric equations. The second, Generating the…
• Publication year: 2000 - 2009
• Activity sheet
The first of four RISP activities, Exploring Pascal's Triangle, explores Pascal's triangle, combinations and binomial coefficients. Doing and Undoing the Binomial Theorem and Extending the Binomial
Theorem use different approaches to investigate the binomial theorem using negative and fractional indices. Advanced Arithmagons…
• Not yet rated
• Publication year: 2000 - 2009
• Activity sheet
Two Repeats covers the revision of algebraic topics including changing the subject of a formula, graphical solution of equations, solving simultaneous and quadratic equations and manipulating surds.
Given a simple starting premise, students have to solve the puzzle by solving a number of different kinds of equations, working logically…
• Not yet rated
• Publication year: 2000 - 2009
• Activity sheet
This RISP activity, The answer's 1: what's the question? gives students graphs containing shaded areas enclosed by two functions. Examples of a straight line and a quadratic graph, a cubic graph and
an exponential graph are used. Given that the enclosed area has a value of one, students are asked to find the functions.…
• Not yet rated
• Publication year: 2000 - 2009
• Activity sheet
Two Special Cubes is a RISP activity designed to introduce the idea of implicit differentiation. Students are presented with two cubes of length x and y and are told that volume, the surface area and
the edge length form an arithmetic progression. Students are asked to find the maximum value for y and hence the corresponding value…
• Not yet rated
• Publication year: 2000 - 2009
• Activity sheet
This RISP activity Polynomial Equations with Unit Coefficients sets students the task of finding the roots of polynomials with a large number of terms. Students are required to use a graph plotter to
compare the graphs of several polynomials looking for common points and differences. The task leads to a numeriacal method, iteration,…
• Not yet rated
• Publication year: 2000 - 2009
• Activity sheet
The first of two RISP activities, Periodic Functions, asks students to write down as many periodic functions as they can. The activity progresses to look at what happens to the period when graphs
with different periods are combined. Topics covered are periodic functions, odd functions, even functions, composite functions and transformation…
• Not yet rated
• Publication year: 2000 - 2009
• Activity sheet
RISP activity Building Log Equations requires students to form equations given a set of cards and to determine, with examples, whether the equation is always, sometimes or never true and to attempt
to say why. Students must include at least one log card in their equation. Students need to be familiar with logs in different number…
• Not yet rated
• Publication year: 2000 - 2009
• Activity sheet
This RISP activity gives students four properties - one side is 3cm, one angle is 90 degrees, one side is 4cm and one angle is thirty degrees. Students are required to find as many triangles as they
can which contain any three of these four properties. Once the triangles have been found, students are asked to find the area and perimeter…
• Not yet rated
• Publication year: 2000 - 2009
• Activity sheet
Two RISP activities designed for students to explore or consolidate ideas about integration. Introducing e requires students to use a graphing package to explore a variety of functions of the form y
equals x to the power of n and attempt to find the value for k for which the area under the graph between 0 and k is exactly one.…
• Not yet rated
• Publication year: 2000 - 2009
• Activity sheet
This RISP activity introduces the subject of differentiation. Rather than start from first principles or learning a rule, the activity suggests using a graphing package to generate data. Starting
with a quadratic graph, students find the gradient of the curve using a straight line graph and are encouraged arrive at a rule for differentiating…
• Not yet rated
• Publication year: 2000 - 2009
• Activity sheet
Sequence Tiles requires students to define a position to term rule for a sequence and is extended to iterative sequences, using the set of cards given. Students have to decide the nature of their
sequence: convergent and divergent increasing, decreasing, oscillating or periodic. In Geoarithmetic sequences students are required…
• Not yet rated
• Publication year: 2000 - 2009
• Activity sheet
This RISP activity is ideal for introducing, consolidating or revising the idea of proof using a mathematical argument and appropriate use of logiocal deduction. Students are asked to choose two
triangular numbers and find when the difference is a prime number. Students should then be encouraged to attempt to prove their conjecture.…
• Not yet rated
• Publication year: 2000 - 2009
• Activity sheet
This RISP activity from can be used when either consolidating or revising ideas of curve-sketching and indices. The numbers phi, e and pi are used in this investigation where students are asked to
estimate the size numbers generated when raising these numbers to different powers. It is suggested that a graphing package would prove…
• Not yet rated
• Publication year: 2000 - 2009
• Activity sheet
This RISP, Almost Identical, is an activity designed to consolidate work on hyperbolics, exponentials, percentage error and curve sketching. Students are told that the shape of the curve formed by a
chain suspended from two points is called a catenary and are asked to attempt to fit a parabola to the curve as best they can, then…
• Not yet rated
• Publication year: 2000 - 2009
• Activity sheet
Five RISP starters revise ideas of polynomials and curve-sketching, cover expanding brackets, solving equations graphically, and knowing how to sketch the graphs of curves. Gold and Silver Cuboid
requires the students to use a graph plotter to explore the effects of changing coefficients of a cubic equation. The investigation…
• Not yet rated
• Publication year: 2000 - 2009
• Activity sheet
The first of three RISP activities exporing polynomials, The Gold and Silver Cuboid requires students to find a connection between the volume of a cuboid, the surface area and the edge length. The
second part of the activity asks students to find the maximum and minimum volume of a cuboid gives certain constraints. Venn Diagrams…
• Not yet rated
• Publication year: 2000 - 2009
• Activity sheet
Seven RISP activities covering a range of topics, each one having some activity which explores coordinate geometry. Circle Property: Students generate two coordinates. The coordinates form the end
points of the diameter of a circle. Students have to find the equation of the circle formed, compare their results with colleagues…
• Not yet rated
• Publication year: 2000 - 2009
• Activity sheet
Three RISP activities designed to introduce or consolidate basic algebraic skills. Brackets Out, Brackets In: students are asked to insert integers into a statement containing brackets in order to
obtain as many different results as they can. This activity can be used to introduce, consolidate or revise simple expanding brackets…
• Publication year: 2000 - 2009
• Activity sheet
Filter results by...
|
{"url":"http://www.nationalstemcentre.org.uk/elibrary/?filter=R&facet%5B0%5D=publisher%3A%22RISPs%22","timestamp":"2014-04-18T00:28:37Z","content_type":null,"content_length":"44169","record_id":"<urn:uuid:adba3125-eeb7-4f07-9b61-02dd54ea1491>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00194-ip-10-147-4-33.ec2.internal.warc.gz"}
|
In applications where pseudo-random numbers are not appropriate, one must resort to using a physical random number generator. When using such a generator, it is essential to consider the physical
process used as the randomness source. This source can be either based on a process described by classical physics or by quantum physics. Classical physics is the set of theories developed by
physicists before the beginning of the XXth century and which describes macroscopic systems like falling coins. Quantum physics is a set of theories elaborated by physicists during the first half of
the XXth century and which describes microscopic systems like atoms or elementary particles. Some examples of generators based on each of these theories, along with their advantages, are presented
below, after a brief discussion of biased random number sequences.
A problem encountered with physical random number generators is their bias. A binary generator is said to be biased when the probability of one outcome is not equal to the probability of the other
outcome. Bias arises because of the difficulty to devise precisely balanced physical processes. It is however less of a problem than one might expect at first sight. There exists some post-processing
algorithm that can be used to remove bias from a sequence or random numbers.
The simplest of these unbiasing procedures was first proposed by Von Neumann [1]. The random bits of a sequence are grouped in subsequences of two bits. Whenever the two bits of a subsequence are
equal, it is discarded. When the two bits are different and the subsequence starts with a 1, the subsequence is replaced by a 1. When it starts with a 0, it is replaced by a 0. After this procedure,
the bias is removed from the sequence.
The cost of applying an unbiasing procedure to a sequence is that it is shortened. In the case of the Von Neumann procedure, the length of the unbiased sequence will be at most 25% of the length of
the raw sequence. It was mentioned above that randomness tests basically all amount to verifying whether the sequence can be compressed. An unbiasing procedure can be seen as a compression procedure.
After its application, the bias is removed and no further compression is possible, guaranteeing that the sequence will pass the tests. Other unbiasing procedures exist. The one proposed by Peres [2]
for example is significantly more efficient than the Von Neumann procedure.
Macroscopic processes described by classical physics can be used to generate random numbers. The most famous random number generator – coin tossing – indeed belongs to this class. However, it is very
important to realize that classical physics is fundamentally deterministic.
Processes described by quantum physics
randomness revealed by simplicity
Contrary to classical physics, quantum physics is fundamentally random. It is the only theory within the fabric of modern physics that integrates randomness. This fact was very disturbing to
physicists like Einstein who invented quantum physics. However, its intrinsic randomness has been confirmed over and over again by theoretical and experimental research conducted since the first
decades of the XXth century.
When designing a random number generator, it is thus a natural choice to take advantage of this intrinsic randomness and to resort to the use of a quantum process as source of randomness. Formally,
quantum random number generators are the only true random number generators. Although this observation may be important in certain cases, quantum random number generators have other advantages. This
intrinsic randomness of quantum physics allows selecting a very simple process as source of randomness. This implies that such a generator is easy to model and its functioning can be monitored in
order to confirm that it operating properly and is actually producing random numbers. Contrary to the case where classical physics is used as the source of randomness and where determinism is hidden
behind complexity, one can say that with quantum physics randomness is revealed by simplicity.
Until recently the only quantum random number generator that existed were based on the observation of the radioactive decay of some element. Although they produce numbers of excellent quality, these
generators are quite bulky and the use of radioactive materials may cause health concerns. The fact that a simple and low cost quantum random number generators did not exist prevented quantum physics
to become the dominant source of randomness.
Optical quantum random number generator
Optics is the science of light. From a quantum physics point of view, light consists of elementary "particles" called photons. Photons exhibit in certain situations a random behavior. One such
situation, which is very well suited to the generation of binary random numbers, is the transmission upon a semi-transparent mirror. The fact that a photon incident on such a component be reflected
or transmitted is intrinsically random and cannot be influenced by any external parameters. The figure below schematically shows this optical system.
Figure 1: Optical system used to generate random numbers.
[1] Von Neumann, J., "Various techniques used in connection with random digits", Applied Mathematics Serires, no. 12, 36-38 (1951).
[2] Peres, Y., Ann. Stat., 20, 590 (1992).
|
{"url":"http://www.randomnumbers.info/content/Generating.htm","timestamp":"2014-04-17T16:24:03Z","content_type":null,"content_length":"11206","record_id":"<urn:uuid:51138a2f-7277-410f-8af2-96eb3e328a64>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00253-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Times Tables
• It's the only way to access our downloadable files;
• You can use our search box tool;
• Registered users see fewer Adverts;
• You will receive our 'irregular' newsletters;
• It's free.
Unless specified otherwise in the individual descriptions MathSticks resources are licenced under a Creative Commons Licence.
You are free to use; share; copy; distribute and transmit the work. Provided that you give mathsticks.com credit for the work and logos remain intact. You may not alter, transfrom, or build upon the
work, nor may you use it in any form for commercial purposes.
|
{"url":"http://mathsticks.com/category/tags/times-tables","timestamp":"2014-04-16T22:12:48Z","content_type":null,"content_length":"69238","record_id":"<urn:uuid:16a98622-b0e8-4099-b8f7-1d246c3fcee1>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00601-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How many bobbyms does it take to change a lightbulb?
I do not know but someone might
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
Re: How many bobbyms does it take to change a lightbulb?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: How many bobbyms does it take to change a lightbulb?
Answer: None. All the bobbyms will wait for Wolfram to create a better CAS
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
Re: How many bobbyms does it take to change a lightbulb?
Nope, M is like a 1911 Colt 45. Some people love it and some people say it is overkill, but everyone has to agree that it will knock your socks off.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: How many bobbyms does it take to change a lightbulb?
Can you change a lightbulb with it? Yes?
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
Re: How many bobbyms does it take to change a lightbulb?
You do not need to! An M user's mind is so bright, so sharp, so piercing, so innovative that it will cast aside the darkness.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: How many bobbyms does it take to change a lightbulb?
Okay, good answer
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
Re: How many bobbyms does it take to change a lightbulb?
Perhaps a better question is, how many bobbyms does it take to change a bobbym?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Real Member
Re: How many bobbyms does it take to change a lightbulb?
None. He changes himself all the time.
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: How many bobbyms does it take to change a lightbulb?
I am forced to disagree, he has remained the same for a very long time.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: How many bobbyms does it take to change a lightbulb?
How long?
You could not tie your shoes 92 years ago
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
Re: How many bobbyms does it take to change a lightbulb?
For about 81 years.
I did not wear shoes 92 years ago. I wore little slippers.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: How many bobbyms does it take to change a lightbulb?
Hmm, you have changed in a lot in all these years
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
Re: How many bobbyms does it take to change a lightbulb?
Yes, I grew bigger.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
|
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=290140","timestamp":"2014-04-19T17:11:13Z","content_type":null,"content_length":"26074","record_id":"<urn:uuid:9cfbe5cb-7ba2-4faf-9eeb-9440f4fbd57f>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00385-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A Course in Point Set Topology
A year ago this month I reviewed John Conway’s A Course in Abstract Analysis and now a year later I find myself reviewing his A Course in Point Set Topology. Perhaps, assuming the author can keep up
the grueling pace, reviewing one of his books will become a New Year’s tradition for me. I certainly hope so.
Conway’s books all seem to have certain features in common which to my mind make them very attractive reading. The writing is clear, but, more than that, it is chatty and conversational; Conway has
that rare and valuable ability to write reasonably informally without sacrificing precision. In addition, his books also generally seem to have, for want of a better word, personality. Conway has
been a mathematician for a good while now; he has obviously formed some definite opinions about analysis, and is not shy about letting his books reflect those opinions. He also puts more of an
emphasis in his books on historical background (including short biographies of many of the mathematicians whose names adorn theorems in the subject) and etymology of mathematical terms, than one
typically finds in the textbook literature. Consider, for example, his thoughts about the name of the Baire Category Theorem:
Why is the word category used in the name of this theorem? Please excuse the author while he takes a little time to rant. Mathematics loves tradition; largely this is good and has my support.
However, occasionally it adopts a word, makes it a definition, and promulgates it far beyond its usefulness. Such is the case here. There is a concept in topology called a set of the first
category. Sets that are not of the first category are said to be of the second category. What are the definitions? I won’t tell you. If you are curious, you can look them up, but it will not be
helpful. I will say, however, that using this terminology the Baire Category Theorem says that a complete metric space is of the second category. My objection stems from the fact that this
“category” terminology does not convey any sense of what the concept is. There is another pair of terms that is used: meager or thin and comeager or thick. These at least convey some sense of
what the terms mean. But I see no reason to learn additional terminology. The theorem as stated previously says what it says, end of story—and end of rant.
I like reading things like this in a mathematics text, and so do, I suspect, the students. Comments like this enliven a book and also educate a beginning student.
The book under review is, as the title makes clear, an introduction to point set topology, and it maintains the high quality that the author has set with his previous books. The prerequisites are
modest: technically, just a good calculus background should suffice, but some prior exposure to “theoretical calculus”, as, for example, in a real analysis course (or perhaps even a really good
honors calculus course), would be a definite plus.
One very distinctive feature of the book is that it is quite short (or perhaps, given the subject matter, I should say “compact”); the main text is only about 120 pages long, not counting an appendix
(roughly fifteen pages long) on set theory, which starts from scratch with the definitions of “union” and “intersection” and proceeds through a discussion of Zorn’s Lemma and basic cardinal
arithmetic. The brevity of the text reflects what the author refers to as a mid-career “epiphany”: “I realized I didn’t have to teach my students everything I had learned about the subject at hand. I
learned mathematics in school that I never used again, and not just because those things were in areas in which I never did research. At least part of this, I suspect, was because some of my teachers
hadn’t had this insight.” So, Conway wrote this book to give students “a set of tools”, discussing “material [that] is used in almost every part of mathematics.”
There are only three chapters. The first is on metric spaces and covers the basic topological concepts (continuity, convergence, compactness, connectedness, etc.) in that context. I fully agree with
the author that this is the best way to introduce the subject of topology: it builds on the reader’s Euclidean intuition of distance and also provides an easy way to motivate the definition of a
general topological space, which in this book is the subject of chapter 2. The topics mentioned above are revisited in this chapter in this broader context. Nets now replace sequences, which don’t
work as well in topological spaces as they do in metric spaces.
There is some mild redundancy between this chapter and the first, with some results stated that are a direct generalization of the metric space result, but I certainly don’t view that as a problem;
in fact, to the contrary, it seems to me to be a pedagogically sound approach to learning the material, and there’s no real waste of time, since the author, in cases of proofs that generalize almost
verbatim, generally just asks the student to fill in the details. And of course there are a number of topics that are introduced in this chapter for the first time: for example, path-connectedness is
defined in this chapter, and Tikhonov’s (that’s the author’s spelling; I learned it as Tychonoff) theorem is proved, using Alexander’s theorem that a space is compact if (and of course only if) any
subbasic open cover has a finite subcover. (It should be noted that there is a standing convention that all topological spaces encountered in the text are assumed Hausdorff, and therefore that
hypothesis is not generally repeated in statements of theorems.)
The selection of topics in chapter 3 (“Continuous Real-Valued Functions”) is based on the author’s belief that “the continuous functions on a space are more important than the underlying space”.
Although many of the topics included here (regular, completely regular, and normal spaces; the Stone-Cech compactification) are typically found in other point set topology books in chapters with the
word “separation” in the title, the unifying thread of the topics in this chapter is that they involve real-valued functions defined on a topological space, perhaps considered as themselves elements
of a topological space. The chapter also discusses paracompactness, but stops short of proving results like the Nagata-Smirnov metrization theorem.
Slim books like this have some advantages: they don’t overwhelm and intimidate the student and they are also affordable; this one is currently selling on amazon for less than thirty dollars. (With
textbook prices getting ridiculously high, I have on more than one occasion found myself considering the price of a book as a factor in the selection process.) Of course, they also have some
potential disadvantages, including a certain lack of flexibility — the professor no longer gets to pick and choose from among a broad selection of things to teach from, and one’s favorite topic may
well wind up missing. It’s certainly reasonable to believe that you can get a higher quality meal at a restaurant with a limited menu than at one with a mammoth buffet, but you do need to check the
menu first to make sure there are items to your liking on it. So, for example, if you adamantly believe that any course in introductory topology simply must contain at least some discussion of
surfaces, or an introduction to the fundamental group, or of the topological aspects of matrix groups, then you’ll probably want to look elsewhere for a course text; these topics don’t appear here.
(Texts which offer a point of view different from this one include McCleary’s A First Course in Topology: Continuity and Dimension and Topology Now! by Messer and Straffin; in addition, the recent
Elements of Topology by Singh, which contains enough material for two semesters, covers the topics discussed in this book and also includes some discussion of the fundamental group, covering spaces,
and matrix groups, and other topics not included here.)
Another potential disadvantage of narrowly focused books like this one is that, depending on taste, one might believe that students do not end the semester with any kind of “big payoff”. The “set of
tools” for analysis is provided to the student, but (particularly with respect to the topics in the latter part of the course, where the material is somewhat technical and not such a direct
generalization of the analysis studied previously) he or she will have to wait for future courses to see these tools applied seriously. It’s easy to motivate the definition of a group in an abstract
algebra class just by showing the students how groups pop up all over the place — geometry, number theory, analysis, and elsewhere. But it’s a little harder to get a student excited about the
definition of a completely regular topological space, especially when the examples strike many of them as kind of strange and artificial.
This is all a matter of taste, of course. There are plenty of people who think this material is beautiful in its own right, and non-specialists in the area (like me) can certainly benefit from
reading an expert’s opinion of what is, or is not, important. By and large, based on my own (admittedly non-expert) opinion, I thought the author did an excellent job selecting topics, with one
exception — although there is a section on quotient spaces in chapter 2, there is no reference at all to either Moebius strips, the Klein bottle, or the projective plane. How can you have an
introductory text on topology that doesn’t even have a picture of a Moebius strip in it? Fortunately this is hardly a deal-breaker; an instructor who is of a mind to do so can easily supplement the
book on this one point. However, having already defined quotient spaces, it does seem like a missed opportunity to not give these neat examples.
Each of the three chapters is divided into sections, and each section ends with a fairly generous selection of exercises. Based on a quick survey of them, it seems that very few are of the trivial
make-work variety, and most seemed to be of average or perhaps somewhat above-average difficulty. No solutions are provided in the text, which I view as another pedagogical plus.
To summarize: this is a well-written book that I enjoyed reading. Assuming that your idea of what to teach in a first-semester course in topology is in line with the author’s, this book would make an
excellent text for such a course.
Mark Hunacek (mhunacek@iastate.edu) teaches mathematics at Iowa State University.
|
{"url":"http://www.maa.org/publications/maa-reviews/a-course-in-point-set-topology","timestamp":"2014-04-16T11:29:14Z","content_type":null,"content_length":"105622","record_id":"<urn:uuid:3a1413fb-12c6-400a-9acb-0697a4d7450a>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00401-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Separation of Axis
Separation of Axis is an algorithm used for collision detection in some physics engines. The algorithm is very straightforward, and easy to understand geometrically. It also incurs a high
computational cost
The Basics
If you can draw a line between two convex polygons in 2-dimensions without that line intersecting one of the polygons, then they don't intersect. If no such line exists, then they do intersect. The
image below shows an example of a non-intersecting polygon pair. The dotted line is the separating line and the green line is the separating axis
The Algorithm
For two convex polygons, first choose a face. In the diagram below each separating axis, separating axis, we project each vertex of both polygons onto that separating axis. Notice that since
separating axis that we only need to choose one of those vertices to project. The bounds of each shape's projections form what are called projection intervals shown in red. The idea is that if these
intervals do not overlap, then we know that these two polygons do not intersect. If, on the other hand, we iterate through each face of each polygon and perform this test and find that each and every
set of projection interval overlap, then we know that these two polygons intersect. So back to the algorithm. For each vertex compute the length of the projection along the separating axis
One thing I really like about Separation of Axis is that it provides a cheat to calculate the point of contact given that two polygons are intersecting. All you have to do is find the axis with the
least interval overlap, shown in white in the figure below. Then just project the vertex of the shape that does not own that separating axis, onto the face that created that separating axis. So, in
the figure below, the point of contact becomes
Interactive Demo
Try playing around with the demo below to get a feel for the geometry. The color coding is the same as in the figures above. Just follow the direction in the right hand column.
Login or register to post comments
|
{"url":"http://physics2d.com/content/separation-axis","timestamp":"2014-04-18T08:13:57Z","content_type":null,"content_length":"17528","record_id":"<urn:uuid:696618ec-2fe1-47f2-8001-2602708af165>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00262-ip-10-147-4-33.ec2.internal.warc.gz"}
|
What is the longest known sequence of consecutive zeros in Pi?
up vote 1 down vote favorite
Inspired by this question, I would like to know what is the longest known sequence of consecutive zeros in Pi (in base 10).
So far the longest I have found is the sequence of 8 zero's occurring in position 172,330,850 after the decimal point.
If we expand the question to longest sequence of identical digits, 6 takes a lead with 9 digits occurring at position 45,681,781. All other digits have 8 digit maximum sequences occurring within the
first 200,000,000 digits.
In general what is known about the distribution of k-length b-sequences in Pi, where b is any of the base digits? Can something be learned about the normalcy of Pi from these distributions? NB, by
distribution I mean the set of (k,b,f) triples, for a given base, where f is the first position of occurrence.
nt.number-theory computational-number-theo
Results from first 200,000,000 digits were found using: angio.net/pi/piquery.html. – Halfdan Faber Apr 24 '11 at 22:30
10 This is not an interesting question. An interesting question is one that has the property that other people will learn something from the answer. What have I learned from the fact that the
sequence "00000000" occurs somewhere between the $10^8$-th and $10^9$-th digit of $\pi$?... – André Henriques Apr 24 '11 at 22:32
Well, if the position of first occurence for a k-length sentence grows at the same rate for all base digits, something can be learned from that. If 6-sequences of k length actually always occur
first, then Pi would not be normal (I realize this is more than exceedingly unlikely to be the case, but would like to see some references). – Halfdan Faber Apr 24 '11 at 22:38
@Halfdan: I completely agree with you. But there's only a finite amount of information that one can explore by computer. And, after that, one is still infinitely far away from infinity... – André
Henriques Apr 24 '11 at 22:53
1 Ok. I have learned something from the answer: I've learned about the existence of Fabrice Belard's web page. – André Henriques Apr 24 '11 at 22:56
show 2 more comments
closed as off topic by Gjergji Zaimi, Bruce Westbury, Steve Huntsman, Dmitri Pavlov, Simon Thomas Apr 25 '11 at 2:37
Questions on MathOverflow are expected to relate to research level mathematics within the scope defined by the community. Consider editing the question or leaving comments for improvement if you
believe the question can be reworded to fit within the scope. Read more about reopening questions here.If this question can be reworded to fit the rules in the help center, please edit the question.
1 Answer
active oldest votes
There is a sequence of 12 zeroes starting at position 1755524129973; There is a sequence of 13 eights starting at position 2164164669332. You can see more statistics in Fabrice
up vote 4 down Belard's web pages.
vote accepted
@Julian: Thx, much. This is excellent! Here is another link:ja0hxv.calico.jp/pai/estatistics5t.html. Thanks much to Alex Yee for providing this. They found 13 zeros in positions
3,186,699,229,890 and 3,675,091,769,442. Since this is from the longest Pi calculation done so far, this is likely the longest known zero sequence. See also: numberworld.org/
misc_runs/pi-5t/announce_en.html. – Halfdan Faber Apr 25 '11 at 1:56
add comment
Not the answer you're looking for? Browse other questions tagged nt.number-theory computational-number-theo or ask your own question.
|
{"url":"http://mathoverflow.net/questions/62868/what-is-the-longest-known-sequence-of-consecutive-zeros-in-pi?sort=oldest","timestamp":"2014-04-19T09:56:07Z","content_type":null,"content_length":"53822","record_id":"<urn:uuid:f9885266-8a83-4f66-9429-16079473f66b>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00302-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Simple question regarding Time derivatives
February 27th 2010, 10:08 AM #1
Feb 2010
Simple question regarding Time derivatives
I have a system of equations that I dont know how to solve. I want to know what is the value of X1 and X2 at a specific point. I have a system of equations that follow this:
if t is between 0 and 1:
X1 varies according to the function Xtot-(kp X8 A-k4 X3[t])
X2 varies according to the function kp X8 A-k2 X2[t]
if t is between 1 and 60:
X1 varies according to the fucntion -k4 X3[t]
X2 varies according to the function -k2 X2[t]
Constants through time: Xtot, kp, X8, A, k2, k4.
through all time we have X3[t]=Xtot-X1-X2.
my problem now is, i want to compute the values of X1 and X2 at t=60, but these values depend on the values that they have at t=1. I know this is a simple question, but I dont know how to do
it... This is all the information that I have. Could you please give me some tips on how to solve this?
Follow Math Help Forum on Facebook and Google+
|
{"url":"http://mathhelpforum.com/calculus/131034-simple-question-regarding-time-derivatives.html","timestamp":"2014-04-20T08:36:27Z","content_type":null,"content_length":"29998","record_id":"<urn:uuid:b3e5f94b-b197-42a3-955a-e65621ba7c23>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00508-ip-10-147-4-33.ec2.internal.warc.gz"}
|
NCSM 2010 — Day Two
April 26th, 2010 by Dan Meyer
Sessions Reviewed:
• A Lesson Study Project: Connecting Theory and Practice Through the Development of an Exemplar Video for Algebra 1 Teachers and Students. Anne Papakonstantinou, et al.
• Intriguing Lessons About How Math Is Taught and Assessed in High Performing Asian Countries. Steven Leinwand.
• Problem Solving and Technology implementation in an Inclusion Classroom. Annie Fetter.
• Western Caucus. California, Hawaii, Oregon, Washington.
Better Teaching Through Video
Houston Independent School District is huge. It won a huge grant and has a huge video production department. It created an impressive DVD of impressive practices in Algebra I and the teachers who
participated in that reflection program improved impressively.
I have nothing but good things to say about the presenters, their content, and their presentation of their content. But if you believe as I do that classroom transparency is our mandate as teachers,
that when you share what you do it will inevitably change what I do (even if I'm only learning from your counter-example), your questions should be the same as mine:
• How can we scale this?
• How can we distribute the work of editing and mastering these videos?
• What is the least cost we can get away with for a hardware / software package for a classroom?
• Specifically, what is the sweet spot where quality meets affordability?
• How do we communicate this transparency mandate to parents?
• Specifically, how do we streamline the legal release process?
Those questions weren't addressed. Which was fine. It wasn't my session. Nonetheless.
Problematic Problem Solving
The presenter, Annie Fetter, has been creating Problems of the Week for the Math Forum since 1993. She's reviewed thousands of student responses in that time and she now models problem-solving
techniques for other teachers in their classrooms. There was one mortifying moment in the session and a whole lot of great ones.
First, she dismissed the status quo model for teaching problem solving:
• Underline the key parts.
• Circle the numbers.
• Match words to operations.
Those commands are meaningless if the student doesn't understand the problem. (ie. the word "more" can indicate addition or subtraction, depending on the context.) So Fetter asks students instead to
read a problem and complete the statements a) "I notice …" and b) "I wonder …."
She workshopped the process with this problem:
Greta has a vegetable garden. She sells her extra produce at the local Farmer's Market. One Saturday she sold $200 worth of vegetables — peppers, squash, tomatoes, and corn.
□ Greta received the same amount of money for the peppers as she did for the squash.
□ The tomatoes brought in twice as much as the peppers and squash together.
□ The money she made from corn was $8 more than she made from the other three kinds of vegetables combined.
You know where this went for me.
Honest to god I tried to stay cool. I asked my table buddies, "Is this problem contrived? Is there a more natural way to teach systems of equations?" The guy across from me just glared back like I
had said something derogatory about his mother.
When it came time to share our noticings and wonderings, I said, "I'm wondering how Greta knows how much she made in total but she doesn't know how much she made on peppers, squash, tomatoes, or
corn. Like, what went wrong with her bookkeeping there."
Fetter said in all good humor, "Well, sure. I mean, this is a math problem."
That sort of response triggers my instinct for self-preservation. I start looking for a desk to duck beneath. If I pitched that response to my students, they'd already have their knives out,
sharpened, and thrown.
Is there a situation where it makes sense to use systems? Did mathematicians develop systems because they makes life easier, more fun, or more meaningful? Or are they just arbitrary symbology we use
to limit access to universities?
I'm glad I stayed. Her strategies for drawing students into conversation, for appealing to and strengthening their intuition, for encouraging patient problem solving were excellent. A few highlights:
• "Students think math isn't about them. They think math is about learning the ways to do what the dead white guys figured out how to do a long time ago. Some kids master those ways but can't solve
problems. Others can't master those ways but have amazing problem solving skills."
• "Don't listen for things, listen to students." If you're looking for a specific answer, your students will equate "problem solving" with "reading the teacher's mind."
• A particularly sweet grace note on noticing / wondering. Give the student a mathematically rich situation but don't yet give them a question. If you give your class a specific line of inquiry and
even one student chases it all the way to the end, the moment has passed for the novice problem solvers. What moment? The moment where "we get students out of the gate who can't usually get out
of the gate." ¶ So we take that moment away. "Find all the math here." Then the classroom superhero will expand in every direction not just along the one.
• "Guess and check gets a bad rap because guessing doesn't seem like math."
• "Do math in pen. Learn from your mistakes, don't erase them."
All of which is great. But all of that requires rich mathematics. You can't throw a piece of lumpy charcoal (see Greta's garden above) on a student's desk and ask her, "What do you notice? What do
you wonder?"
Steve Leinwand
Steve Leinwand is a young up-and-comer, a researcher at AIR who spits hot fire on the track, and deserves his own section heading. If you get a chance to hear him speak on anything, take it. I can
put the dy/dan stamp on three guarantees:
• He will call you "gang."
• You take offense at least once in his talk — either by his tone or by a rhetorical overreach.
• He will be the most well-researched speaker you see that day.
In particular, he has an encyclopedic knowledge of the content standards and assessment questions of countries in the East and the West. He noted right away the irony of totalitarian eastern
countries assessing their students more constructively than their democratic counterparts in the West.
For instance, a sample first-grade assessment in China:
What is the approximate thickness of 1,200 sheets of paper? What is the approximate number of classes that may be formed by 1,200 students? What is the approximate length of 1,200 footsteps?
And one from the third-grade:
Estimate the number of characters contained in one whole page of a newspaper.
The most interesting aspects of his presentation, then, compared assessments of similar standards between the different countries. The only people who would dispute the inferiority of ours to theirs,
Steve suggested, were "tired, old, white men who still told time with analog watches." (One got the sense here that Steve was doing his damnedest not to name names.)
Decide for yourself.
• "The National Math Panel report was a complete and total disaster." So there you go.
• "I do this [present at conferences] to entertain myself and I'm surprised whenever anyone comes." Naturally the hall is packed.
• "The Common Core math standards are what we've been praying for for a generation." He anticipates improvements in the next draft.
• "By 2014, we'll have a national test in the 4th, 8th, and 11th grade. It will be on a computer and feature both constructed response and multiple choice. Students will have a four-week window to
take it, two weeks after which they'll release 75% of the questions and data to all stakeholders." By "constructed response" he referred to a problem where a student saw two egg rolls on a screen
and had to drag a knife around with the mouse, clicking to create cut lines to show how to feed three people equally.
He noted the inequities in models of teacher development between the East and West that made adopting the eastern model of student instruction difficult. In Korea, for instance, the ratio of teacher
salary after fifteen years to GDP per capita is 2.5.
The Chinese, meanwhile, apply a model to teacher training that looks suspiciously like medical residency.
Final Steve-ism: "We do the same thing [at these conferences] we don't want our kids to do: we sit, we get, and we forget."
Western Caucus
Math supervisors from California, Washington, Oregon, and Hawaii gathered together in a room to update each other on old and new business. We stood one-at-a-time and introduced ourselves.
Aw screwit, I thought, and went there: "I'm Dan Meyer. I'm a high school math teacher and I blog."
"You what?"
Caucuses strike me as a relic of an age when you couldn't e-mail or otherwise connect instantaneously with your long-distance colleagues. I'm not saying they aren't useful or even fun but it will be
interesting to watch these stodgy national organizations try to persuade young members of that usefulness.
Gratuitous iPad Review
• Typing is getting easier.
• Auto correct is still confounding.
• It's an attention-getter. Leinwand accused me of buying it just to make friends.
• I love the sound of typing, or rather the lack of sound. There's always some guy in a session clattering away noisily and unselfconsciously on his laptop. The iPad "keyboard," meanwhile, is
whisper quiet.
14 Responses to “NCSM 2010 — Day Two”
1. I haven’t seen a good pro-Common Core math argument yet. You?
2. I like Leinwand and Leinwand likes Common Core math, so there’s that. I realize that’s pretty weak argumentation but it’s like that with so many things in my life where I just draft off the ideas
of smarter people in fields I know nothing about.
Of course, I know something about this field so there’s really no excuse not to dig in myself. Jason Dyer’s done some of that yeoman work and, while he’s seems to have resisted a categorical
thumbs up or down, his overall tone has been positive.
3. I give a thumbs up on the math, but a long spiel why is going to have to wait.
The short version is the crew working on this has put great pains to both compare with existing international standards and justify their decisions with research. While I quibble with parts, at
least the standards are backed by *some* sort of reasoning.
4. Two things.
First: I agree with Jason. I like that the Common Core are coherent. They are really thought out from beginning to end. Like Jason, I don’t think they are perfect, but they are way better than
many states’ standards. That said, my favorite thing about the Common Core is that they exist. I’d like to see states start supporting the Common Core as the core of their standards. This would
help to get states in synch and help to reduce the mile-wide inch-deep thing we got going on.
Second: My favorite quote from the post was “‘Don’t listen for things, listen to students.’ If you’re looking for a specific answer, your students will equate “problem solving” with ‘reading the
teacher’s mind.’” Bravo. Your job as a teacher is NOT to pave the road and make sure the student is on it. Your job is to define the destination and make sure the students are taking good paths
to get there.
5. on 27 Apr 2010 at 8:09 pm5
Sorry I urged you to come to the caucus. Thought we would actually get to talk to one another. (Usually the purpose of introductions.)
Great meeting you
6. Nah nah. It was fun. I suppose I would’ve enjoyed it more if I were a full-flight member of NCSM instead of a tourist.
7. I find the info on China interesting. We were there in May–visited a school and spent the afternoon in a 5th grade class. The first thing I noted was the LARGE class size…more than 50 students in
the class. That appeared to be typical. I didn’t see much in the way of supplies. No computers or any electronics in the classroom. If there were calculators, I didn’t see them. Granted, this was
only one school, but from what I’ve been able to gather, schools are pretty similar, at least in cities. Also, from what I understand, in China, parents must pay to send their children to public
Enjoying your blog and just watched your 03/06/10 YouTube video. Thanks!
8. on 28 Apr 2010 at 8:58 pm8
Annie Fetter
I’m glad you’re glad you stayed for the rest of my talk :-) You’re exactly right that Greta’s Garden is not the best problem choice to use in modeling our “Noticing and Wondering” strategy. All
the problems mentioned in the talk were picked with the goal of helping the kids at this particular school get better at the Guess and Check strategy. Since we wanted kids practicing guess and
check, we didn’t want them practicing a lot of noticing and deciphering and whatnot, or else they may have never gotten to the guess and check.
Here’s a “scenario” (our name for problems without questions) we used in our booth in the exhibit hall as a starter for Noticing and Wondering: A regular hexagon and an equilateral triangle have
the same perimeter.
Paul, I like your addition of the destination to my idea about listening to kids, not for answers. I would add that a teacher who is really listening to their kids will also be willing to
occasionally change the destination if it becomes clear that the kids are really interested in something the teacher didn’t anticipate!
9. on 01 Jul 2010 at 12:52 pm9 Mimi
In the Measurement Unit that I drafted up and taught to my 9th-graders a few months back, I had included discussing with kids (and subsequently, quizzing them) on what were some reasonable
estimates for various lengths of real-world objects. But, obviously, assessing them with worthwhile questions is not the problem — how do you TEACH reasonable approximations? I found, even after
the discussions (and quite a few hands-on measurement activities), that the only kids consistently choosing the correct approximations were the ones who already had great number/measurement
Any thoughts??
how do you TEACH reasonable approximations?
Great question. My sense is that one calibrates reasonable approximations through trial and error, that every time you find out you’ve over- or undershot an approximation, you’ll approximate the
same measurement better the next time. That’s just a guess, though.
11. Mathsemantics, by Edward MacNeal, has a great chapter on this. He says you need to have a web of basic numbers in your head, and then estimate often, do something to commit to your estimate (say
it out loud), and then confirm the true value afterward. Doing that often improves your estimation ability.
The students have to take responsibility for doing that, of course. So a first step would be a very cool problem.
12. Hello Dan,
I read your blog from time to time. Thanks for making us laugh, think, and for your passion for math.
I’m recovering from my traumatized math childhood. Now I teach two daughters at home, and they don’t have math shivers or preconceptions. They are fresh, eager to learn, and a joy to get
reacquainted with math at almost forty.
My oldest thinks that the last number is something around 40 40 40 40 40 40 40 …anyway, God knows that number for sure.
My comment is about the famous Greta problem…that gives me shivers. My daughter after we do mental math, comes with the same nonsensical problems on her own. Her point it’s to make it difficult.
She senses those problems are totally ad hoc. A different thing is when we asked her about her sister age when she is X age, or vice-versa. That she knows very well, it’s very valuable to
children to know they will always be older and you can’t catch up with age. And it’s fun (for them) to know how older or younger others and themselves will be and at what years. (You were talking
about a real scenario to use or how mathematicians started to investigate this area of knowledge. History and estimating ages in the future and past as well as scales for maps and drawings may
have been one area to practice this.
Thanks for your blog and your spirit. I enjoyed specially your worksheet post, as a former public school teacher and homeschooler mom now I related to it. And your humor is very appreciated.
13. Now I’ve read the comments and they pose very good questions, it’s not about a formula of finding real scenarios for math, but more about being able to see the math in the world, which is very
difficult and in the meantime there are the tests and district requirements teachers need to meet.
One person said that you get better at teaching things with time. That’s true. Another (or you) said that at times you just work the problem straight and that’s fine.
I’ll give you a different example with reading. There is a new and deplorable collection that intends to use difficult vocabulary words in stories that supposedly will make the children laugh.
It’s so fictitious, and they still believe that’s how children gain and increase vocabulary…WRONG. You just have to talk using proper words, read the best without watering it down to them, but
using age appropriate books, which are many more than many think. And if you have to face a spelling test, or a test that will ask you for definitions, look, just cram it, do it, and then one day
you’ll hear the word in something you read, or a conversation, and you’ll learn it for real.
14. [...] yeah, I can't freaking believe they counter-programmed me against Steve Leinwand, who has never disappointed whenever he's turned up on my conference schedule. If you're flipping a coin
between the two of us, [...]
|
{"url":"http://blog.mrmeyer.com/2010/ncsm-2010-%e2%80%94-day-two/","timestamp":"2014-04-18T15:39:14Z","content_type":null,"content_length":"57147","record_id":"<urn:uuid:f65d50b6-9e08-413e-aa11-95f98955fd6c>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00122-ip-10-147-4-33.ec2.internal.warc.gz"}
|
help me with differential
October 17th 2009, 03:33 PM #1
Sep 2009
help me with differential
Here is the given:
Use differentials to estimate the amount of material in a closed cylindrical can that is 20 cm high and 8 cm in diameter if the metal in the top and bottom is 0.1 cm thick, and the metal in the
sides is 0.1 cm thick. Note, you are approximating the volume of metal which makes up the can (i.e. melt the can into a blob and measure its volume), not the volume it encloses.
The differential for the volume is
The approximate volume of material is_______
How do i get started?
dr is 0.1 but what is dh?
October 17th 2009, 09:36 PM #2
Sep 2009
|
{"url":"http://mathhelpforum.com/calculus/108625-help-me-differential.html","timestamp":"2014-04-20T07:22:42Z","content_type":null,"content_length":"31942","record_id":"<urn:uuid:90b08f10-3998-49d6-823d-8b1d93292430>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00248-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Wolfram Demonstrations Project
Bell Number Diagrams
The Bell number is the number of ways to partition a set of elements into disjoint, nonempty subsets. In this Demonstration, points are at the corners of a regular -gon. Members of the same subset
are connected with line segments (singletons appear as dots).
|
{"url":"http://demonstrations.wolfram.com/BellNumberDiagrams/","timestamp":"2014-04-19T09:26:47Z","content_type":null,"content_length":"41545","record_id":"<urn:uuid:5e6205a7-0ef5-4932-a7ed-7f6ca7067bc9>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00466-ip-10-147-4-33.ec2.internal.warc.gz"}
|
188 helpers are online right now
75% of questions are answered within 5 minutes.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/users/thatsmeh/answered","timestamp":"2014-04-17T12:48:54Z","content_type":null,"content_length":"87210","record_id":"<urn:uuid:3acaf1f6-4b30-4490-bb5e-82762599a04c>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00619-ip-10-147-4-33.ec2.internal.warc.gz"}
|
HP Fortran for OpenVMS
User Manual
Chapter 15
Using the Compaq Extended Math Library (CXML) (Alpha Only)
This chapter describes:
This entire chapter pertains only to HP Fortran on OpenVMS Alpha systems only.
15.1 What Is CXML?
The Compaq Extended Math Library (CXML) provides a comprehensive set of mathematical library routines callable from Fortran and other languages. CXML contains a set of over 1500 high-performance
mathematical subprograms designed for use in many different types of scientific and engineering applications. It significantly improves the run-time performance of certain HP Fortran programs.
CXML is included with HP Fortran for OpenVMS Alpha Systems and can be installed using the instructions in the HP Fortran Installation Guide for OpenVMS Alpha Systems.
The CXML reference guide is available in both online and hardcopy formats. You can obtain this documentation by accessing the following files:
• SYS$HELP:CXMLREF-VMS.PDF (online)---view with the Adobe Acrobat Reader
• SYS$HELP:CXMLREF-VMS.PS (hardcopy)---print to a PostScript printer
Example programs are also provided with CXML. These programs are located in the following directory:
15.2 CXML Routine Groups
CXML provides a comprehensive set of highly efficient mathematical subroutines designed for use in many different types of scientific and engineering applications. CXML includes the following
functional groups of subroutines:
• Basic linear algebra
• Linear system and Eigenproblem solvers
• Sparse linear system solvers
• Signal processing
• Utility subprograms
The routines are described in Table 15-1.
Table 15-1 CXML Routine Groups
┃ Name │ Description ┃
┃Basic Linear│The Basic Linear Algebra Subprograms (BLAS) library includes the industry-standard Basic Linear Algebra Subprograms for Level 1 (vector-vector, BLAS1), Level 2 (matrix-vector, BLAS2), ┃
┃Algebra │and Level 3 (matrix-matrix, BLAS3). Also included are subprograms for BLAS Level 1 Extensions, and Sparse BLAS Level 1. ┃
┃Signal │The Signal Processing library provides a basic set of signal processing functions. Included are one-, two-, and three-dimensional Fast Fourier Transforms (FFT), group FFTs, Cosine/Sine┃
┃Processing │Transforms (FCT/FST), Convolution, Correlation, and Digital Filters. ┃
┃Sparse │The Sparse Linear System library provides both direct and iterative sparse linear system solvers. The direct solver package supports both symmetric and symmetrically shaped sparse ┃
┃Linear │matrices stored using the compressed row storage scheme. The iterative solver package supports a basic set of storage schemes, preconditioners, and iterative solvers. ┃
┃System │ ┃
┃LAPACK │LAPACK is an industry-standard subprogram package offering an extensive set of linear system and eigenproblem solvers. LAPACK uses blocked algorithms that are better suited to most ┃
┃ │modern architectures, particularly ones with memory hierarchies. ┃
┃Utility │Utility subprograms include random number generation, vector math functions, and sorting subprograms. ┃
┃subprograms │ ┃
Where appropriate, each subprogram has a version to support each combination of real or complex and single- or double-precision arithmetic. In addition, selected key CXML routines are available in
parallel form as well as serial form on HP OpenVMS Alpha systems.
15.3 Using CXML from Fortran
To use CXML, you need to make the CXML routines and their interfaces available to your program and specify the appropriate libraries when linking.
The CXML routines can be called explicitly by your program. There are separate CXML libraries for the IEEE and the VAX floating-point formats. You must compile your program for one of these float
formats and then link to the matching CXML library (either IEEE or VAX), depending upon how you compiled the program.
Either the IEEE or VAX CXML library can be established as the systemwide default by the system startup procedure. Individual users can select between the VAX and IEEE version by executing the
SYS$LIBRARY:CXML$SET_LIB command procedure. For example, the following command alters the default CXML link library for the current user to the VAX format library:
$ @SYS$LIBRARY:CXML$SET_LIB VAX
For more details, see the section about CXML post-installation startup options in the HP Fortran Installation Guide for OpenVMS Alpha Systems.
If needed, you can instead specify the appropriate CXML library or libraries on the LINK command line (use the /LIBRARY qualifier after each library file name). You must compile your program and then
link to the appropriate CXML library (either IEEE or VAX), depending upon how you compiled the program. The following examples show the corresponding CXML commands for compiling and linking for cases
where the CXML default library is not used:
$ FORTRAN /FLOAT=IEEE_FLOAT MYPROG.F90
The link command uses the name of the CXML library for IEEE floating-point data, cxml$imagelib_ts . To use VAX floating-point data, specify the CXML library name as cxml$imagelib_gs .
If you are using an older version of CXML, use dxml$xxxxx instead of cxml$xxxxx as the library name. For more information on using CXML and specifying the correct object libraries on the LINK
command, see the Compaq Extended Mathematical Library Reference Manual.
15.4 CXML Program Example
The free-form Fortran 90 example program below invokes the function SAXPY from the BLAS portion of the CXML libraries. The SAXPY function computes a*x+y .
PROGRAM example
! This free-form example demonstrates how to call
! CXML routines from Fortran.
REAL(KIND=4) :: a(10)
REAL(KIND=4) :: x(10)
REAL(KIND=4) :: alpha
INTEGER(KIND=4) :: n
INTEGER(KIND=4) :: incx
INTEGER(KIND=4) :: incy
n = 5 ; incx = 1 ; incy = 1 ; alpha = 3.0
DO i = 1,n
a(i) = FLOAT(i)
x(i) = FLOAT(2*i)
PRINT 98, (a(i),i=1,n)
PRINT 98, (x(i),i=1,n)
98 FORMAT(' Input = ',10F7.3)
CALL saxpy( n, alpha, a, incx, x, incy )
PRINT 99, (x(i),I=1,n)
99 FORMAT(/,' Result = ',10F7.3)
END PROGRAM example
Appendix A
Differences Between HP Fortran on OpenVMS I64 and OpenVMS Alpha Systems
This appendix describes:
A.1 HP Fortran Commands on OpenVMS I64 That Are Not Available on OpenVMS Alpha
• /CHECK=ARG_INFO
• /CHECK=FP_MODE
A.2 HP Fortran Commands on OpenVMS Alpha That Are Not Available on OpenVMS I64
• /ARCHITECTURE
• /MATH_LIBRARY
• /OLD_F77
• /OPTIMIZE=TUNE
• /SYNCHRONOUS_EXCEPTIONS
A.3 Differences in Default Values
The differences are:
• Because the Itanium architecture supports IEEE directly, the default floating-point format on OpenVMS I64 systems is /FLOAT=IEEE_FLOAT. On OpenVMS Alpha systems, the default is /FLOAT=G_FLOAT.
See Section 2.3.22.
• The default floating-point exception handling mode on OpenVMS I64 systems is /IEEE_MODE=DENORM_RESULTS). On OpenVMS Alpha systems, the default is /IEEE_MODE=FAST. See Section 2.3.24.
A.4 Support for VAX-Format Floating-Point
Because there is no direct hardware support for VAX-format floating-point on OpenVMS I64 systems, the VAX-format floating-point formats (F, D, and G) are supported indirectly by a three-step process:
1. Conversion from VAX-format variable to IEEE floating-point temporary (using the denormalized number range)
2. Computation in IEEE floating-point
3. Conversion back to VAX-format floating-point and storage into the target variable
There are a number of implications for this approach:
• Exceptions might move, appear, or disappear, because the calculation is done in a different format with a different range of representable values.
• Because there are very small numbers representable in VAX format that can only be represented in IEEE format using the IEEE denorm range, there is a potential loss of accuracy if an intermediate
result of the calculation is one of those numbers.
At worst, the two bits with the least significance will be set to zero. Note that further calculations might result in magnifying the magnitude of this loss of accuracy.
Note that this small loss of accuracy does not raise signal FOR$_SIGLOSMAT (or FOR$IOS_SIGLOSMAT).
• Expressions that are used to drive control flow but are not stored back into a variable will not be converted back into VAX format. This can cause exceptions to disappear.
• There can be a significant performance cost for the use of VAX-format floating-point.
Note that floating-point query built-ins (such as TINY and HUGE) will return values appropriate to the floating-point format that you select, despite the fact that all formats are supported by IEEE.
A.5 Changes in Exception Numbers and Places
There will be changes in the number of exceptions raised and in the code location at which they are raised.
This is particularly true for VAX-format floating-point calculations, because many exceptions will only be raised at the point where a result is converted from IEEE format to VAX format. Some valid
IEEE-format numbers will be too large or too small to convert and will thus raise underflow or overflow. IEEE exceptional values (such as Infinity and NaN) produced during the evaluation of an
expression will not generate exceptions until the final conversion to VAX format is done.
If a VAX-format floating-point calculation has intermediate results (such as the X * Y in the expression (X * Y)/ Z ), and the calculation of that intermediate result raised an exception on OpenVMS
Alpha systems, it is not guaranteed that an exception will be raised on OpenVMS I64 systems. An exception will only be raised if the IEEE calculation produces an exception.
A.5.1 Ranges of Representable Values
In general, the range of VAX-format floating-point numbers is the same as the range for IEEE-format. However, the smallest F- or G-format value is one quarter the size of the smallest normal IEEE
number, while the largest F- or G-format number is about half that of the largest IEEE number. There are therefore nonexceptional IEEE values that would raise overflows in F- or G-format. There are
also nonexceptional F- or G-format values that would raise underflow in IEEE-format in those modes in which denormalized numbers are not supported.
A.5.2 Underflow in VAX Format with /CHECK=UNDERFLOW
OpenVMS Alpha and VAX Fortran applications do not report underflows for VAX-format floating-point operations unless you specifically enable underflow traps by compiling with the /CHECK=UNDERFLOW
qualifier (see Section 2.3.11).
The same is true on OpenVMS I64 systems, but with an important caveat: Since all I64 floating-point operations are implemented by means of IEEE-format operations, enabling underflow traps with /CHECK
=UNDERFLOW causes exceptions to be raised when values underflow the IEEE-format representation, not the VAX-format one.
This can result in an increased number of underflow exceptions seen with /CHECK=UNDERFLOW when compared with equivalent Alpha or VAX programs, as the computed values may be in the valid VAX-format
range, but in the denormalized IEEE-format range.
If your application requires it, a user-written exception handler could catch the IEEE-format underflow exception, inspect the actual value, and determine whether it represented a VAX-format
underflow or not.
See Section 8.4 for exact ranges of VAX-format and IEEE-format floating point.
A.6 Changes in Exception-Mode Selection
On OpenVMS Alpha systems, the exception-handling mode and the rounding mode can be chosen on a per-routine basis. This lets you set a desired exception mode and rounding mode using compiler
qualifiers. Thus, a single application can have different modes during the execution of different routines.
This is not as easy to do on OpenVMS I64 systems. While the modes can be changed during the execution of a program, there is a significant performance penalty for doing so.
As a result, the HP Fortran compiler and the OpenVMS linker implement a "whole-program" rule for exception handling and rounding modes. The rule says that the whole program is expected to run in the
same mode, and that all compilations will have been done using the same mode qualifiers. To assist in enforcing this rule, the compiler, linker and loader work together:
• The compiler tags each compiled object file with a code specifying the modes selected by the user (directly or using the default) for that compilation.
• The linker tags the generated executable file with a code specifying the mode selected by the user.
• The loader initializes the floating-point status register of the process based on the linker code.
A.6.1 How to Change Exception-Handling or Rounding Mode
If you are using an OpenVMS I64 system and want to change the exception-handling or rounding mode during the execution of a program, use a call to either of the following:
• Fortran routines DFOR$GET_FPE and DFOR$SET_FPE
• System routines SYS$IEEE_SET_FP_CONTROL, SYS$IEEE_SET_PRECISION_MODE, and SYS$IEEE_SET_ROUNDING_MODE
HP does not encourage users to change the exception-handling or rounding mode within a program. This practice is particularly discouraged for an application using VAX-format floating-point.
A.6.1.1 Calling DFOR$GET_FPE and DFOR$SET_FPE
If you call DFOR$GET_FPE and DFOR$SET_FPE, you need to construct a mask using the literals in SYS$LIBRARY:FORDEF.FOR, module FOR_FPE_FLAGS.
The calling format is:
INTEGER*4 OLD_FLAGS, NEW_FLAGS
INTEGER*4 DFOR$GET_FPE, DFOR$GET_FPE
EXTERNAL DFOR$GET_FPE, DFOR$GET_FPE
! Set the new mask, return old mask.
OLD_FLAGS = DFOR$GET_FPE( NEW_FLAGS )
! Return the current FPE mask.
OLD_FLAGS = DFOR$GET_FPE ()
An example (which does no actual computations) follows. For a more complete example, see Example A-1.
subroutine via_fortran
include 'sys$library:fordef.for'
include '($IEEEDEF)'
integer new_flags, old_flags
old_flags = dfor$get_fpe()
new_flags = FOR_K_FPE_CNT_UNDERFLOW + FPE_M_TRAP_UND
call dfor$set_fpe( new_flags )
! Code here uses new flags
call dfor$set_fpe( old_flags )
end subroutine
A.6.1.2 Calling SYS$IEEE_SET_FP_CONTROL, SYS$IEEE_SET_PRECISION_MODE, and SYS$IEEE_SET_ROUNDING_MODE
If you call SYS$IEEE_SET_FP_CONTROL, SYS$IEEE_SET_PRECISION_MODE, and SYS$IEEE_SET_ROUNDING_MODE, you need to construct a set of masks using the literals in SYS$LIBRARY:FORSYSDEF.TLB, defined in
module IEEEDEF.H (in STARLET). For information about the signature of these routines, see the HP OpenVMS System Services Reference Manual.
An example (which does no actual computations) follows. For a more complete example, see Example A-1.
subroutine via_system( rr )
real rr
include '($IEEEDEF)'
integer*8 clear_flags, set_flags, old_flags, new_flags
clear_flags = IEEE$M_MAP_DNZ + IEEE$M_MAP_UMZ
set_flags = IEEE$M_TRAP_ENABLE_UNF + IEEE$M_TRAP_ENABLE_DNOE
call sys$ieee_set_fp_control(%ref(clear_flags),
! Code here uses new flags
clear_flags = set_flags
call sys$ieee_set_fp_control(%ref(clear_flags),%ref(old_flags),%ref(new_flags))
A.6.1.3 Additional Rules That You Should Follow
If you decide to change the exception-handling or rounding mode, be careful to observe the following rules to maintain the "whole-program" rule. Failure to do so might cause unexpected errors in
other parts of your program:
• The preexisting mode must be restored at the end of the execution of the section in which the new mode is used. This includes both normal endings, such as leaving a code block, and exits by means
of exception handlers.
• It is a good idea to establish a handler to restore the old mode on unwinds, because called code can cause exceptions to be raised (including exceptions not related to floating point).
• The code should be compiled with the same mode qualifiers as the other, "normal" parts of the application, not with the mode that will be set by the call to the special function.
• Be aware that VAX-format expressions are actually calculated in IEEE format, and any change to the modes will impact a calculation in IEEE format, not a calculation in VAX format.
• Consider adding the VOLATILE attribute to the declaration of all the variables used in the calculation in the different mode. This will prevent optimizations that might move all or part of the
calculation out of the region in which the different mode is in effect.
A.6.1.4 Whole-Program Mode and Library Calls
System libraries that need to use an alternate mode (for example, the math library) accomplish this by using an architectural mechanism not available to user code: the .sf1 flags of the
floating-point status register (user code uses the .sf0 flags).
Therefore, a user's choice of exception-handling or rounding mode will not have an impact on any system library used by the program.
A.6.2 Example of Changing Floating-Point Exception-Handling Mode
Example A-1 shows both methods of changing the floating-point exception-handling mode. However, for a real program, you should pick just one of the two methods.
Example A-1 Changing Floating-Point Exception Mode
! SET_FPE.F90: Change floating-point exception handling mode,
! and check that it worked.
! Compile and link like this:
! $ f90 set_fpe
! $ lin set_fpe,SYS$LIBRARY:VMS$VOLATILE_PRIVATE_INTERFACES.OLB/lib
! $ run set_fpe
! The library is needed to bring in the code for LIB$I64_INS_DECODE,
! which we call for its side-effect of incrementing the PC in the
! correct manner for I64.
! This is a place to save the old FPE flags.
module saved_flags
integer saved_old_flags
end module
! Turn on underflow detection for one routine
! Using the FORTRAN library function FOR_SET_FPE.
subroutine via_fortran( rr )
real rr
include 'sys$library:fordef.for'
include '($IEEEDEF)'
integer new_flags, old_flags
old_flags = dfor$get_fpe()
new_flags = FPE_M_TRAP_UND
call dfor$set_fpe( new_flags )
! Code here uses new flags
rr = tiny(rr)
type *,' Expect a catch #1'
rr = rr / huge(rr)
call dfor$set_fpe( old_flags )
end subroutine
! Alternatively, do the same using the system routine.
subroutine via_system( rr )
real rr
include '($IEEEDEF)'
integer*8 clear_flags, set_flags, old_flags, new_flags
clear_flags = IEEE$M_MAP_DNZ + IEEE$M_MAP_UMZ
set_flags = IEEE$M_TRAP_ENABLE_UNF + IEEE$M_TRAP_ENABLE_DNOE
call sys$ieee_set_fp_control(%ref(clear_flags), %ref(set_flags), %ref(old_flags))
! Code here uses new flags
rr = tiny(rr)
type *,' Expect a catch #2'
rr = rr / huge(rr)
clear_flags = set_flags
call sys$ieee_set_fp_control(%ref(clear_flags),%ref(old_flags),%ref(new_flags))
end subroutine
! Main program
program tester
use saved_flags
real, volatile :: r
! Establish an exception handler.
external handler
call lib$establish( handler )
! Save the initial setting of the exception mode flags.
saved_old_flags = dfor$get_fpe()
! This expression underflows, but because this program has
! been compiled with /IEEE=DENORM (by default) underflows
! do not raise exceptions.
write (6,100)
100 format(1x,' No catch expected')
r = tiny(r);
r = r / huge(r)
! Call a routine to turn on underflow and try that expression
! again. After the call, verify that underflow detection has
! been turned off.
call via_fortran( r )
write (6,100)
r = tiny(r)
r = r / huge(r)
! Ditto for the other approach
call via_system( r )
write (6,100)
r = tiny(r)
r = r / huge(r)
end program
! A handler is needed to catch any exceptions.
integer (kind=4) function handler( sigargs, mechargs )
use saved_flags
include '($CHFDEF)'
include '($SSDEF)'
integer sigargs(100)
record /CHFDEF2/ mechargs
integer lib$match_cond
integer LIB$I64_INS_DECODE
integer index
integer status
integer no_loop / 20 /
logical int_over
logical int_div
logical float_over
logical float_div
logical float_under
logical float_inval
logical float_denorm
logical HP_arith
logical do_PC_advance
integer pc_index
integer*8 pc_value
! Don't loop forever between handler and exception
! (in case something goes wrong).
no_loop = no_loop - 1
if( no_loop .le. 0 ) then
handler = ss$_resignal
! We'll need the PC value of the instruction if
! this turns out to have been a fault rather than
! a trap.
pc_index = sigargs(1)
pc_value = sigargs(pc_index)
! Figure out what kind of exception we have, and
! whether it is a fault and we need to advance the
! PC before continuing.
do_PC_advance = .false.
int_over = .false.
int_div = .false.
float_over = .false.
float_div = .false.
float_under = .false.
float_inval = .false.
float_denorm = .false.
HP_arith = .false.
index = lib$match_cond(sigargs(2), SS$_INTOVF)
if( index .eq. 0 ) then
int_over = .true.
index = lib$match_cond(sigargs(2), SS$_INTDIV)
if( index .eq. 0 ) then
int_div = .true.
index = lib$match_cond(sigargs(2), SS$_FLTOVF)
if( index .eq. 0 ) then
float_over = .true.
index = lib$match_cond(sigargs(2), SS$_FLTDIV)
if( index .eq. 0 ) then
float_div = .true.
index = lib$match_cond(sigargs(2), SS$_FLTUND)
if( index .eq. 0 ) then
float_under = .true.
index = lib$match_cond(sigargs(2), SS$_FLTOVF_F)
if( index .eq. 0 ) then
float_over = .true.
do_PC_advance = .true.
index = lib$match_cond(sigargs(2), SS$_FLTDIV_F)
if( index .eq. 0 ) then
float_div = .true.
do_PC_advance = .true.
index = lib$match_cond(sigargs(2), SS$_FLTUND_F)
if( index .eq. 0 ) then
float_under = .true.
do_PC_advance = .true.
index = lib$match_cond(sigargs(2), SS$_FLTINV)
if( index .eq. 0 ) then
float_inval = .true.
do_PC_advance = .true.
index = lib$match_cond(sigargs(2), SS$_INTOVF_F)
if( index .eq. 0 ) then
int_over = .true.
do_PC_advance = .true.
index = lib$match_cond(sigargs(2), SS$_FLTDENORMAL)
if( index .eq. 0 ) then
float_denorm = .true.
index = lib$match_cond(sigargs(2), SS$_HPARITH)
if( index .eq. 0 ) then
HP_arith = .true.
! Tell the user what kind of exception this is.
handler = ss$_continue
if( float_over) then
write(6,*) ' - Caught Floating overflow'
else if ( int_over ) then
write(6,*) ' - Caught Integer overflow'
else if ( int_div ) then
write(6,*) ' - Caught Integer divide by zero'
else if ( float_div ) then
write(6,*) ' - Caught Floating divide by zero'
else if ( float_under ) then
write(6,*) ' - Caught Floating underflow'
else if ( float_inval ) then
write(6,*) ' - Caught Floating invalid'
else if ( HP_arith ) then
write(6,*) ' - Caught HP arith error'
write(6,*) ' - Caught something else: resignal '
! Here we have to restore the initial floating-point
! exception processing mode in case the exception
! happened during one of the times we'd changed it.
call dfor$set_fpe( saved_old_flags )
handler = ss$_resignal
! If this was a fault, and we want to continue, then
! the PC has to be advanced over the faulting instruction.
if( do_PC_advance .and. (handler .eq. ss$_continue)) then
status = lib$i64_ins_decode (pc_value)
sigargs(pc_index) = pc_value
end function handler
┃Previous │Next│Contents │Index┃
|
{"url":"http://h71000.www7.hp.com/doc/82final/6443/6443pro_043.html","timestamp":"2014-04-19T06:54:14Z","content_type":null,"content_length":"36273","record_id":"<urn:uuid:ff8b7b9e-4254-4983-b670-1eabb3df9e47>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00076-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Material Results
Search Materials
Return to What's new in MERLOT
Get more information on the MERLOT Editors' Choice Award in a new window.
Get more information on the MERLOT Classics Award in a new window.
Get more information on the JOLT Award in a new window.
Go to Search Page
View material results for all categories
Click here to go to your profile
Click to expand login or register menu
Select to go to your workspace
Click here to go to your Dashboard Report
Click here to go to your Content Builder
Click here to log out
Search Terms
Enter username
Enter password
Please give at least one keyword of at least three characters for the search to work with. The more keywords you give, the better the search will work for you.
select OK to launch help window
cancel help
You are now going to MERLOT Help. It will open a new window.
This is a free, online textbook that provides information on dimensions, from longtitude and latitude to the proof of a...
see more
Material Type:
Open Textbook
Multiple others in credits http://www.dimensions-math.org/Dim_merci_E.htm, Étienne Ghys, Jos Leys, Aurélien Alvarez
Date Added:
Nov 17, 2008
Date Modified:
Sep 21, 2013
The focus of this website is to help in the transition from a paper oriented environment to one using OER materials with an...
see more
Material Type:
Open Textbook
Jim Kelly
Date Added:
Feb 14, 2012
Date Modified:
Oct 06, 2013
This textbook presents introductory economics (״principles״) material using standard mathematical tools, including calculus....
see more
Material Type:
Open Textbook
R. Preston McAfee
Date Added:
Mar 15, 2008
Date Modified:
Sep 03, 2013
This introductory probability book, published by the American Mathematical Society, is available from AMS bookshop. We are...
see more
Material Type:
Open Textbook
Charles Grinstead, J. Laurie Snell
Date Added:
Apr 12, 2008
Date Modified:
Sep 03, 2013
Online open textbook with interactive labs but none of the labs are tab-through so not ADA compliant, inaccessible to...
see more
Material Type:
Open Textbook
Multiple Authors
Date Added:
Dec 01, 2008
Date Modified:
Mar 31, 2014
Area of applied mathematics concerned with the data collection, analysis, interpretation and presentation.
Material Type:
Open Textbook
Date Added:
Aug 30, 2008
Date Modified:
Sep 03, 2013
A Problem Course in Mathematical Logic is intended to serve as the text for an introduction to mathematical logic for...
see more
Material Type:
Open Textbook
Stefan Bilaniuk
Date Added:
Apr 12, 2008
Date Modified:
Oct 17, 2012
This is the Conceptual Explanations part of Kenny Felder's course in Advanced Algebra II. It is intended for students to...
see more
Material Type:
Open Textbook
Kenny M. Felder
Date Added:
Feb 22, 2010
Date Modified:
Nov 17, 2011
This is a free textbook by Boundless that is offered by Amazon for reading on a Kindle. Anybody can read Kindle books—even...
see more
Material Type:
Open Textbook
Date Added:
Nov 20, 2013
Date Modified:
Nov 21, 2013
This is a virtual edition of a developmental algebra textbook. It follows the Suffolk County Community College Mathematics...
see more
Material Type:
Open Textbook
Leslie Buck
Date Added:
Jan 23, 2013
Date Modified:
Dec 08, 2013
|
{"url":"http://www.merlot.org/merlot/materials.htm?materialType=Open%20Textbook&nosearchlanguage=&keywords=mathematics","timestamp":"2014-04-21T01:13:10Z","content_type":null,"content_length":"190404","record_id":"<urn:uuid:6aa1e8d8-70fd-41b2-9c9c-061565814998>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00178-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MathFiction: The Rule of Four (Ian Caldwell / Dustin Thomason)
Contributed by Vijay Fafat
There is an enigmatic book from the late 15th century called Hypnerotomachia Poliphili, written by an Italian monk, Francesco Colonna (available at gutenberg.org for download). The book chronicles
the dream-within-a-dream love adventure of Poliphilo and his object of affection, Polia. The enigma of the book arises from the author’s use of vocabulary from multiple languages and neologisms,
which give it an inscrutable, labyrinth-like quality and a suspicion that it contains far more than just the richly illustrated love story.
“The Rule of Four” takes off on this premise that there is a hidden code in the text of Hypnerotomachia; in fact, the entire book is a cipher pointing to a very well-guarded secret. Set on the
grounds of Princeton University, “The Rule of Four” shows how two students unlock the mystery. Naturally, the book is full of allusions to cryptography, mathematical patterns, breathless chases,
historical and current murders, lost manuscripts, etc. For example,
• an expert at the mathematical analysis of the Torah plays a role,
• the sequence 3, 4, 6, 9 is found to unlock one part of the book since “it is the smallest sequence which produces all three harmonies (arithmetic, geometric and harmonic)” [I don’t know what this
• Eratosthenes and his measurement of Earth’s circumference based on the geometry of shadows is discussed when one of the clues requires the students to calculate “the distance between you and the
horizon” (the sub-puzzle at this point of the story is about art and perspective drawing).
• Quote from book: “The most complicated concept he taught me was how to decode a book based on algorithms or ciphers from the text itself. In those cases, the key is built right on. You solve for
the cipher, like an equation or a set of instructions, the you use the cipher to unlock the text. The book acually interprets itself.”
The novel is a lot of fun to read and savor.
|
{"url":"http://kasmana.people.cofc.edu/MATHFICT/mfview.php?callnumber=mf1035","timestamp":"2014-04-18T20:44:50Z","content_type":null,"content_length":"9905","record_id":"<urn:uuid:472a639f-327f-40dc-ac10-b6b5e95106c1>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00410-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Book I, Proposition 1
Back to Proposition 1.
1. Draw a straight line, and on it construct an equilateral triangle.
2. Name the six formal divisions of a proposition.
To see the answer, pass your mouse over the colored area.
To cover the answer again, click "Refresh" ("Reload").
The enunciation.
The setting-out.
The specification.
The construction (if there is one).
The proof.
The closing.
3. a) In Proposition 1, what is given?
A straight line.
3. b) What are we required to do?
To draw an equilateral triangle on it.
3. c) Practice writing out all six parts of Proposition 1.
4. Draw three straight lines at random. Now try to construct a triangle
4. by using each line as a side.
4. If you are unable to draw a triangle, can you a least draw some
4. conclusions?
Any two sides together must be greater than the third side.
Next proposition
Previous proposition
Table of Contents | Introduction | Home
Please make a donation to keep TheMathPage online.
Even $1 will help.
Copyright © 2012 Lawrence Spector
Questions or comments?
E-mail: themathpage@nyc.rr.com
|
{"url":"http://www.themathpage.com/aBookI/geoProblems/I-1Prob.htm","timestamp":"2014-04-20T08:26:25Z","content_type":null,"content_length":"5681","record_id":"<urn:uuid:aca88bfd-9e8a-4517-a4a0-969e49939c64>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00039-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Keasbey Prealgebra Tutor
...After completing my student teaching assignment, I graduated from Kean University earning a Bachelor of Arts degree in Education and started to work as a substitute teacher at Woodbine Avenue
School #23. During this time I was working with many different grade levels ranging from kindergarten to...
12 Subjects: including prealgebra, reading, geometry, grammar
...I am able to also help prepare students for SAT Math. I offer flexible tutoring hours with advance requests and have a 24-hour cancellation policy. My approach to tutoring involves getting to
know each student.
6 Subjects: including prealgebra, geometry, algebra 1, algebra 2
...I have tutored elementary students in math through my work with Mathnasium. I have instructed gifted students from first grade up to sixth grade in math topics through my work with Spirit of
Math. Finally, I have classroom experience teaching math both at the elementary level and the middle school level.
18 Subjects: including prealgebra, reading, calculus, geometry
I have worked with a wide range of students from preschool age up to high school. I have a background in Special Ed and have worked with many students with varying degrees of physical and learning
disabilities. I am currently a college student pursuing a degree in Special Ed and Theatre.
18 Subjects: including prealgebra, Spanish, reading, English
...I have mentored young attorneys in the art of written advocacy and have tutored my daughter in various subjects. I am passionate about teaching students to read and think critically and to
write incisively. Whether we are discussing the Christian imagery running through King Lear or editing a p...
16 Subjects: including prealgebra, reading, English, algebra 1
|
{"url":"http://www.purplemath.com/keasbey_nj_prealgebra_tutors.php","timestamp":"2014-04-21T12:43:40Z","content_type":null,"content_length":"23912","record_id":"<urn:uuid:de92a277-d0bc-45f7-8fdd-9fc5d373a9a6>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00547-ip-10-147-4-33.ec2.internal.warc.gz"}
|
North Hills, NY Algebra Tutors
These private tutors in North Hills, NY are brought to you by WyzAnt.com, the best place to find local tutors on the web. When you use WyzAnt, you can search for North Hills, NY tutors, review
profiles and qualifications, run background checks, and arrange for home lessons. Click on any of the results below to see the tutor's full profile. Your first hour with any tutor is protected by our
Good Fit Guarantee: You don't pay for tutoring unless you find a good fit.
...I am very hard-working, personable, intelligent, and committed to your success. I am currently teaching science to elementary through college level students, including gifted and talented
students as well as tutoring students at all academic levels. I believe strongly in the ability of all stud...
12 Subjects: including algebra 1, biology, GRE, SAT math
...Lastly, I am an avid fan of the various hotkeys and tricks available to users running OS X, and would love to help students on wyzant.com improve their Macintosh skills. In 2012 I graduated
with a B.S. in mechanical engineering from Columbia University. During 2011, 2012, and 2013 I've worked at a joint research group between Mt.
32 Subjects: including algebra 2, algebra 1, physics, reading
...I try to guide students to understanding the material by trying to ground problems in real life situations: you can see whether an answer makes sense based on some sort of intuition, rather
than just going through the algorithm and hoping you don't mess up. I'm a big fan of unit analysis, where ...
18 Subjects: including algebra 2, algebra 1, calculus, trigonometry
I hold a bachelors degree in microbiology, a masters degree in biochemistry and molecular biology, and have five years of Laboratory experience in medical centers like UT Southwestern and Mount
Sinai medical center. These experiences have helped me to understand the subject matters in depth. As a ...
16 Subjects: including algebra 1, algebra 2, chemistry, geometry
...For example: "ALTHOUGH Daniel is an excellent math student, his recent test scores in Calculus were ________." with Although being the important word instructing the student to choose an
adjective which describe not being an excellent math student. The reading comprehension section is difficult ...
15 Subjects: including algebra 1, algebra 2, chemistry, calculus
Related North Hills, NY Tutors
North Hills, NY act tutors
North Hills, NY act math tutors
North Hills, NY algebra tutors
North Hills, NY calculus tutors
North Hills, NY chemistry tutors
North Hills, NY excel tutors
North Hills, NY geometry tutors
North Hills, NY math tutors
North Hills, NY physics tutors
North Hills, NY prealgebra tutors
North Hills, NY precalculus tutors
North Hills, NY sat tutors
North Hills, NY sat math tutors
North Hills, NY statistics tutors
North Hills, NY trigonometry tutors
|
{"url":"http://www.algebrahelp.com/North_Hills_NY_algebra_tutors.jsp","timestamp":"2014-04-20T03:22:38Z","content_type":null,"content_length":"25316","record_id":"<urn:uuid:39b9d66e-4e93-49ad-87af-080f9f291218>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00519-ip-10-147-4-33.ec2.internal.warc.gz"}
|
etd AT Indian Institute of Science: Generalizations Of The Quantum Search Algorithm
& Collections
Thesis Guide
Submitted Date
Sign on to:
Receive email
Login / Register
authorized users
Edit Profile
About DSpace
etd AT Indian Institute of Science >
Division of Physical and Mathematical Sciences >
Physics (physics) >
Please use this identifier to cite or link to this item: http://hdl.handle.net/2005/951
Title: Generalizations Of The Quantum Search Algorithm
Authors: Tulsi, Tathagat Avatar
Advisors: Patel, Apoorva
Quantum Theory
Quantum Search Algorithm
Grover's Search Algorithm
Keywords: Quantum Computation
Adiabatic Quantum Search
Robust Quantum Search Algorithm
Kato's Algorithm
Fixed-point Quantum Search
Quantum Algorithms
Submitted 27-Apr-2009
Series/ G23050
Report no.:
Quantum computation has attracted a great deal of attention from the scientific community in recent years. By using the quantum mechanical phenomena of superposition and entanglement, a
quantum computer can solve certain problems much faster than classical computers. Several quantum algorithms have been developed to demonstrate this quantum speedup. Two important
examples are Shor’s algorithm for the factorization problem, and Grover’s algorithm for the search problem. Significant efforts are on to build a large scale quantum computer for
implementing these quantum algorithms. This thesis deals with Grover’s search algorithm, and presents its several generalizations that perform better in specific contexts. While writing
the thesis, we have assumed the familiarity of readers with the basics of quantum mechanics and computer science. For a general introduction to the subject of quantum computation, see
[1]. In Chapter 1, we formally define the search problem as well as present Grover’s search algorithm [2]. This algorithm, or more generally the quantum amplitude amplification algorithm
[3, 4], drives a quantum system from a prepared initial state (s) to a desired target state (t). It uses O(α-1 = | (t−|s)| -1) iterations of the operator g = IsIt on |s), where { IsIt}
are selective phase inversions selective phase inversions of the corresponding states. That is a quadratic speedup over the simple scheme of O(α−2) preparations of |s) and subsequent
projective measurements. Several generalizations of Grover’s algorithm exist. In Chapter 2, we study further generalizations of Grover’s algorithm. We analyse the iteration of the search
operator S = DsI t on |s) where Ds is a more general transformation than Is, and I t is a selective phase rotation of |t) by angle . We find sufficient conditions for S to produce a
successful quantum search algorithm. In Chapter 3, we demonstrate that our general framework encapsulates several previous generalizations of Grover’s algorithm. For example, the
phase-matching condition for the search operator requires the angles and and to be almost equal for a successful quantum search. In Kato’s algorithm, the search operator is where Ks
consists of only single-qubit gates, which are easier to implement physically than multi-qubit gates. The spatial search algorithms consider the search operator where is a spatially
local operator and provides implementation advantages over Is. The analysis of Chapter 2 provides a simpler understanding of all these special cases. In Chapter 4, we present schemes to
Abstract: improve our general quantum search algorithm, by controlling the operators through an ancilla qubit. For the case of two dimensional spatial search problem, these schemes yield an
algorithm with time complexity . Earlier algorithms solved this problem in time steps, and it was an open question to design a faster algorithm. The schemes can also be used to find, for
a given unitary operator, an eigenstate corresponding to a specified eigenvalue. In Chapter 5, we extend the analysis of Chapter 2 to general adiabatic quantum search. It starts with the
ground state |s) of an initial Hamiltonian Hs and evolves adiabatically to the target state |t) that is the ground state of the final Hamiltonian The evolution uses a time dependent
Hamiltonian HT that varies linearly with time . We show that the minimum excitation gap of HT is proportional to α. Also, the ground state of HT changes significantly only within a very
narrow interval of width around the transition point, where the excitation gap has its minimum. This feature can be used to reach the target state (t) using adiabatic evolution for time
In Chapter 6, we present a robust quantum search algorithm that iterates the operator on |s) to successfully reach |t), whereas Grover’s algorithm fails if as per the phase-matching
condition. The robust algorithm also works when is generalized to multiple target states. Moreover, the algorithm provides a new search Hamiltonian that is robust against certain
systematic perturbations. In Chapter 7, we look beyond the widely studied scenario of iterative quantum search algorithms, and present a recursive quantum search algorithm that succeeds
with transformations {Vs,Vt} sufficiently close to {Is,It.} Grover’s algorithm generally fails if while the recursive algorithm is nearly optimal as long as , improving the error
tolerance of the transformations. The algorithms of Chapters 6-7 have applications in quantum error-correction, when systematic errors affect the transformations The algorithms are
robust as long as the errors are small, reproducible and reversible. This type of errors arise often from imperfections in apparatus setup, and so the algorithms increase the flexibility
in physical implementation of quantum search. In Chapter 8, we present a fixed-point quantum search algorithm. Its state evolution monotonically converges towards |t), unlike Grover’s
algorithm where the evolution passes through |t) under iterations of the operator . In q steps, our algorithm monotonically reduces the failure probability, i.e. the probability of not
getting |t), from . That is asymptotically optimal for monotonic convergence. Though the fixed-point algorithm is of not much use for , it is useful when and each oracle query is highly
expensive. In Chapter 9, we conclude the thesis and present an overall outlook.
URI: http://hdl.handle.net/2005/951
Appears in Physics (physics)
Items in etd@IISc are protected by copyright, with all rights reserved, unless otherwise indicated.
|
{"url":"http://etd.ncsi.iisc.ernet.in/handle/2005/951","timestamp":"2014-04-16T22:25:01Z","content_type":null,"content_length":"24962","record_id":"<urn:uuid:0b381400-da0f-4630-9923-4ce0b69efb79>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00081-ip-10-147-4-33.ec2.internal.warc.gz"}
|