Linear transformation
October 10th 2009, 10:30 AM #1
Hello! I have some problems with this one. Maybe someone can help me?
Find the standard matrix for the linear transformation.
T: R^2 --> R^2 dilates a vector by a factor of 3, then reflects that vector about the line y=x, and then projects that vector orthogonally onto the y-axis.
I am supposed to use this theorem while solving:
"Let T: R^n --> R^m be a linear transformation, and suppose that vectors are expressed in column form.
If e1, e2,...,en are the standard vectors in R^n, and if x is any vector in R^n, then T(x) can be expressed as
T(x)=Ax where A = [T(e1) T(e2) ... T(en)]"
First it dilates by $3$: $D = \begin{pmatrix} 3 & 0 \\ 0 & 3 \end{pmatrix}$
Then it reflects about $y=x$: $R = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$
Then it projects orthogonally onto the y-axis: $P = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}$
Putting it all together gives us: $A = PRD = \begin{pmatrix} 0 & 0 \\ 3 & 0 \end{pmatrix}$
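The composition can also be sanity-checked numerically. A minimal sketch in plain Python, with the matrices read off from what each step does to the standard vectors (per the quoted theorem, the columns of the result are $T(e_1)$ and $T(e_2)$):

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

D = [[3, 0], [0, 3]]   # dilation by a factor of 3
R = [[0, 1], [1, 0]]   # reflection about the line y = x
P = [[0, 0], [0, 1]]   # orthogonal projection onto the y-axis

# D acts first, then R, then P, so the standard matrix is P * R * D
A = matmul(P, matmul(R, D))
print(A)   # [[0, 0], [3, 0]]: columns are T(e1) = (0, 3) and T(e2) = (0, 0)
```

This agrees with tracing the unit vectors by hand: e1 goes (1,0) -> (3,0) -> (0,3) -> (0,3), and e2 goes (0,1) -> (0,3) -> (3,0) -> (0,0).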
Thank you very much! =)
Is the boundary of the orientable manifold orientable?
For an orientable compact manifold $M$ with boundary, is its boundary orientable as well?
If $M$ is smooth, the conclusion obviously holds. For a general manifold, my plan is to use the long exact sequence $H_n(M,\partial M)\rightarrow H_{n-1}(\partial M)$ to show the latter homology group is $\mathbb Z$, but I am stuck there.
My guess is that whatever argument you have in mind which makes it obvious in the smooth case will have a counterpart in the nonsmooth case. – Paul Siegel Jul 31 '12 at 5:08
The tools in the topological category are a little heavier though. You basically need to know the boundary has a collar - a product neighbourhood. – Ryan Budney Jul 31 '12 at 5:34
I can recommend several sources, G. Bredon Topology and Geometry, or Dold's lecture on algebraic topology. – Liviu Nicolaescu Jul 31 '12 at 8:48
Let $N^{n-1}$ be a boundary component of oriented $n$-manifold $M$, and let $x\in N$. Complete a basis $B$ for $T_x N$ to a basis $B_M$ for $T_x M$ by appending a vector $v$ (possible because the
boundary has a collar, as Ryan Budney pointed out). Vector $v$ points either into or out of $M$ (again, because the boundary has a collar). Say $B$ is positive if $B\cup v$ is positive whenever
$v$ points inwards, with respect to the orientation of $M$. And voilà, you've oriented $N$. – Daniel Moskovich Jul 31 '12 at 11:33
I'm pretty sure you don't need global collars in either the smooth or topological cases. All you need is a covering of $\partial M$ by open subsets of $M$ which are $n$-dimensional ``half-balls''
and which intersect $\partial M$ in $(n-1)$-dimensional balls. – Lee Mosher Jul 31 '12 at 13:56
closed as off topic by Ryan Budney, Mark Grant, Chris Gerig, Anton Petrunin, Fernando Muro Jul 31 '12 at 11:30
[FOM] Correction to Posting on Kaplan's Sentence
hdeutsch@ilstu.edu hdeutsch at ilstu.edu
Mon May 19 11:21:11 EDT 2008
In a recent posting I mentioned that Kaplan has argued in "A Problem
for Possible Worlds Semantics" that possible worlds semantics is
wanting in that it excludes certain "possibilities." Kaplan works in
an extension of sentential modal logic which allows for quantification
over propositions (sets of worlds). He gives an example of a sentence
that is not satisfiable in this system, but that Kaplan thinks should
be satisfiable in a correct semantics of modality.
In both Kaplan's paper and in my posting, the relevant sentence is
misstated. (There is a typo in Kaplan's formulation that I did not
correct in my posting.) The correct version is as follows:
For all p, it is possible that, for all q (Qq <-> p = q), where Q is
a sentential operator, <-> is the material biconditional, and p and q range
over sets of worlds.
The sentence seems to require that there be a one-to-one
correspondence between worlds and propositions (sets of worlds), and
hence is not satisfiable in Kaplan's extension of modal logic.
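A sketch of the cardinality argument behind this claim (my reconstruction, not part of the original posting):

```latex
% Suppose the sentence held over a set of worlds W. For each
% proposition p \subseteq W, pick a world w_p witnessing the
% possibility claim, i.e. at w_p we have
%     \forall q\,(Qq \leftrightarrow p = q).
% At w_p the extension of the operator Q is exactly \{p\}, so
% distinct propositions p \neq p' force w_p \neq w_{p'}.
% Hence p \mapsto w_p is an injection
%     \mathcal{P}(W) \hookrightarrow W,
% contradicting Cantor's theorem |\mathcal{P}(W)| > |W|.
```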
I asked in my posting whether the existence of this sentence had any
wider significance. It seems interesting that this idea (that there is
a one-to-one correspondence, etc.) could be formulated in the language
of modal logic. hd
Mohammad Wajeeh Alomari
Department of Mathematics, Faculty of Science, Jerash University, Jerash, Jordan
Abstract: Several inequalities of Grüss type for the Stieltjes integral with various types of integrand and integrator are introduced. Some improved inequalities are proved. Applications to the
approximation problem of the Riemann–Stieltjes integral are also pointed out.
Keywords: Ostrowski's inequality, bounded variation, Riemann-Stieltjes integral
Classification (MSC2000): 26D15; 26D20; 41A55
Electronic fulltext finalized on: 8 Nov 2012. This page was last modified: 19 Nov 2012.
© 2012 Mathematical Institute of the Serbian Academy of Science and Arts
© 2012 FIZ Karlsruhe / Zentralblatt MATH for the EMIS Electronic Edition
Topological Groups
November 8th 2010, 09:32 AM
Topological Groups
Can anybody help me with this question
Given topological groups X and Y and homomorphism f:X->Y, verify the following are equivalent:
(a) f is an open map
(b) for each neighborhood N of e(x), f(N) is a neighborhood of e(y)
(c) for each open neighborhood N of e(x), f(N) is a neighborhood of e(y)
(d) for each open neighborhood N of e(x), f(N) is an open neighborhood of e(y)
I already know that f is continuous, which implies f is continuous at the identity e(x) of X
November 8th 2010, 10:00 AM
I helped you with the other one. Any ideas for this one?
November 30th 2010, 12:57 AM
(a) Well I know that to show f is an open map I need something like:
For every open subset G of X, f(G) is open in Y.
(b) The identity e(x) is an element of an open G which is a subset of N. So f(e(x)) = e(y), which is an element of the open set f(G). This is a subset of f(N), so f(N) is a neighborhood of e(y).
(c) I think this is implied by the previous answer, but I am not sure how I would do it; maybe by finding an open neighborhood of e(x)?
(d) To solve this I'd have to prove that f(N) is an open neighborhood, which would be implied by the previous answer.
Am I going about the right way to solve these 4 questions?
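A hedged sketch of the one non-trivial step. The chain (a) => (d) => (c) => (b) consists of straightforward weakenings, so it suffices to close the cycle with (b) => (a):

```latex
% (b) \Rightarrow (a): let U \subseteq X be open and x \in U.
% Then x^{-1}U is a neighbourhood of e_X, so by (b) f(x^{-1}U)
% is a neighbourhood of e_Y. Since f is a homomorphism and left
% translation by f(x) is a homeomorphism of Y,
%     f(x)\,f(x^{-1}U) \subseteq f(U)
% is a neighbourhood of f(x). Thus f(U) is a neighbourhood of
% each of its points, hence open, and f is an open map.
% ((a) \Rightarrow (d): an open N \ni e_X maps to the open set
% f(N) \ni e_Y; (d) \Rightarrow (c) \Rightarrow (b) only weaken
% the conclusion or hypothesis.)
```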
Copyright © University of Cambridge. All rights reserved.
'Flexi Quads' printed from http://nrich.maths.org/
Consider a convex quadrilateral $Q$ made from four rigid rods with flexible joints at the vertices so that the shape of $Q$ can be changed while keeping the lengths of the sides constant. Let ${\bf
a}_1$, ${\bf a}_2$, ${\bf a}_3$ and ${\bf a}_4$ be vectors representing the sides (in this order) of an arbitrary quadrilateral $Q$, so that ${\bf a}_1+{\bf a}_2+{\bf a}_3+{\bf a}_4 = {\bf 0}$ (the
zero vector). Now let ${\bf d}_1$ and ${\bf d}_2$ be the vectors representing the diagonals of $Q$. We may choose these so that ${\bf d}_1={\bf a}_4+{\bf a}_1$ and ${\bf d}_2={\bf a}_3+{\bf a}_4$.
Prove that
$${\bf a}_2^2+{\bf a}_4^2-{\bf a}_1^2-{\bf a}_3^2= 2({\bf a}_1{\cdot}{\bf a}_3-{\bf a}_2{\cdot}{\bf a}_4).\quad (1)$$
and that the scalar product of the diagonals is constant and given by:
$$2{\bf d}_1{\cdot}{\bf d}_2 = {\bf a}_2^2+{\bf a}_4^2-{\bf a}_1^2-{\bf a}_3^2.\quad (2)$$
Use these results to show that, as the shape of the quadrilateral is changed, if the diagonals of $Q$ are perpendicular in one position of $Q$, then they are perpendicular in all variations of $Q$.
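Both identities are pure vector algebra and can be spot-checked numerically. A quick sketch with random planar vectors (not necessarily a convex quadrilateral, which the algebra does not require; only the closure condition ${\bf a}_1+{\bf a}_2+{\bf a}_3+{\bf a}_4={\bf 0}$ is used):

```python
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

random.seed(0)
a1, a2, a3 = ([random.uniform(-1, 1) for _ in range(2)] for _ in range(3))
a4 = [-(x + y + z) for x, y, z in zip(a1, a2, a3)]   # closes the quadrilateral

d1 = [p + q for p, q in zip(a4, a1)]   # diagonal d1 = a4 + a1
d2 = [p + q for p, q in zip(a3, a4)]   # diagonal d2 = a3 + a4

lhs = dot(a2, a2) + dot(a4, a4) - dot(a1, a1) - dot(a3, a3)
print(abs(lhs - 2 * (dot(a1, a3) - dot(a2, a4))) < 1e-12)   # identity (1)
print(abs(2 * dot(d1, d2) - lhs) < 1e-12)                   # identity (2)
```

Since the right-hand side of (2) depends only on the (fixed) side lengths, the scalar product of the diagonals is constant as the joints flex, which is exactly what the perpendicularity conclusion needs.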
Quotations by Nikolai Ivanovich Lobachevsky
There is no branch of mathematics, however abstract, which may not some day be applied to phenomena of the real world.
Quoted in N Rose Mathematical Maxims and Minims (Raleigh N C 1988).
In geometry I find certain imperfections which I hold to be the reason why this science, apart from transition into analytics, can as yet make no advance from that state in which it came to us from Euclid.
As belonging to these imperfections, I consider the obscurity in the fundamental concepts of the geometrical magnitudes and in the manner and method of representing the measuring of these magnitudes,
and finally the momentous gap in the theory of parallels, to fill which all efforts of mathematicians have so far been in vain.
Geometric researches on the theory of parallels (1840)
JOC/EFR February 2006
math module for Decimals
Steven D'Aprano steve at REMOVE-THIS-cybersource.com.au
Sun Dec 28 07:58:18 CET 2008
On Sat, 27 Dec 2008 16:02:51 -0800, jerry.carl.mi wrote:
> Hi,
> I have been looking for a Python module with math functions that would
> both eat and spit Decimals. The standard math module eats Decimals
> all right but spits floats... therefore exp(sin(Decimal())) produces
> exp() of a float :-(
You can write your own wrappers. Assuming you always want to return
decimals instead of floats:
from decimal import Decimal
import math
def exp(x):
    if isinstance(x, Decimal):
        return x.exp()
    f = math.exp(x)
    return make_decimal(f)

def sin(x):
    f = math.sin(x)
    return make_decimal(f)
You can get fancier if you like:
def cos(x):
    try:
        return x.cos()
    except AttributeError:
        f = math.cos(x)
        return make_decimal(f)
That version will automatically use Decimal.cos() in the future, assuming
Decimals grow a cos() method. If they don't, it will continue to work
(although a tad slower than the sin() above).
Converting floats to decimals is tricky, because you have to decide how
much precision you care about. You can keep all the precision of floats.
Depending on your needs, this is either a feature or a gotcha:
>>> (1.0/3).as_integer_ratio() # expecting (1, 3)
(6004799503160661L, 18014398509481984L)
Anyway, here are at least three ways to write make_decimal:
# Python 2.6 only
def make_decimal(f):
    # exact, no loss of precision
    # however, will fail for f = NAN or INF
    a, b = f.as_integer_ratio()
    return Decimal(a) / Decimal(b)

def make_decimal(f):
    return Decimal("%r" % f)

def make_decimal(f, precision=16):
    # choose how many decimal places you want to keep
    # (a from_float accepting a precision is assumed to be a recipe/helper;
    # it is not part of the decimal module's standard API)
    return Decimal.from_float(f, precision)
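A quick illustration of the trade-off between the first two converters (run on a modern Python with the default 28-digit context; the function names here are mine, not from the post):

```python
from decimal import Decimal

def make_decimal_exact(f):
    # exact integer-ratio conversion; the division is then rounded
    # to the current context precision (28 digits by default)
    a, b = f.as_integer_ratio()
    return Decimal(a) / Decimal(b)

def make_decimal_repr(f):
    # round-trips through repr(), keeping only the shortest string
    # that reproduces the float
    return Decimal("%r" % f)

print(make_decimal_repr(0.1))    # Decimal('0.1')
print(make_decimal_exact(0.1))   # Decimal('0.1000000000000000055511151231')
```

The second result exposes the binary representation error of 0.1; whether that is a feature or a gotcha depends, as above, on what you need.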
You choose which suits you best.
> So far, i found:
> -AJDecimalMathAdditions (http://www.ajgs.com/programming/
> PythonForDownload.html)
> -decimalfuncs (http://pypi.python.org/pypi/decimalfuncs/1.4) -and dmath
> (http://pypi.python.org/pypi/dmath/0.9)
> I tried using the AJDecimalMathAdditions, but ran into issues like dSin
> (1E4) would take forever to calculate and would result in sin() > 1 ...
> If i understand it correctly, the program is using maclaurin series to
> calculate the value and since it does not chop off all the multiples of
> 2*pi, the maclaurin approximation becomes useless when its too far from
> x=0.
You might try something like this:
pi = Decimal("%r" % math.pi)
twopi = 2*pi
pi2 = pi/2

def sin(x):
    # *** UNTESTED ***
    if x < 0:
        return -sin(-x)
    elif x > twopi:
        x = x % twopi   # reduce modulo 2*pi (Decimal supports %)
        return sin(x)
    elif x > pi:
        return -sin(x - pi)
    elif x > pi2:
        return sin(pi - x)
    return AJDecimalMathAdditions.dSin(x)
You can treat cos and tan similarly but not identically.
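For completeness, here is a self-contained sketch of the same range-reduction idea with a Maclaurin series as the core, so no third-party module is needed at all. The constants and tolerances are my choices, and this is illustrative rather than production numerics:

```python
from decimal import Decimal, getcontext

getcontext().prec = 28
PI = Decimal("3.141592653589793238462643383")   # pi to 28 significant digits
TWO_PI = 2 * PI
HALF_PI = PI / 2

def dec_sin(x):
    """sin for Decimal via range reduction to [0, pi/2] plus Maclaurin series."""
    if x < 0:
        return -dec_sin(-x)
    x = x % TWO_PI            # Decimal supports %, so no float round-trip
    if x > PI:
        return -dec_sin(x - PI)
    if x > HALF_PI:
        x = PI - x            # sin(pi - x) = sin(x)
    term = total = x
    n = 1
    tiny = Decimal(10) ** -(getcontext().prec + 2)
    while abs(term) > tiny:
        term *= -x * x / ((2 * n) * (2 * n + 1))   # next Maclaurin term
        total += term
        n += 1
    return +total             # unary + rounds to context precision

print(dec_sin(Decimal(1)))       # ~0.8414709848078965066525023216
print(dec_sin(Decimal(10000)))   # stays in [-1, 1], unlike the naive series
```

Because of the reduction, dec_sin(1E4) converges quickly and no longer blows up, which was exactly the complaint about the unreduced Maclaurin approach.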
However, you might find that it is quicker and simpler to just do your maths
calculations as floats, then convert to decimals, as in my earlier examples.
> I also ran into some issues with the decimalfuncs, but i have never
> tried the dmath thing.
> Now, I can go and start modifying these modules to behave more like the
> standard math module but since I am neither a mathematician nor a programmer,
> I really hesitate.
And so you should. Getting numerical functions correct is damn hard.
However, if all you need is "close enough", then don't be scared -- the
sort of tricks I've shown are probably "close enough", although you may
want a disclaimer about not using them for running nuclear reactors or
flying aircraft. The basic tools Python supplies are pretty good, and you
can probably get a 95% solution with them without being an expert in
computational mathematics.
> So my questions are:
> (1) What do folks use when they need to calculate something like
> exp(sin(Decimal())) or even more complex things? Any recommendations? Or am
> I completely missing something?
> (2) Is there any plan to provide a standard python module that would do
> that? (Python 4.0? ;-)
Guido is very reluctant to put numeric libraries in the standard library,
precisely because it is so hard to get it right. Don't expect Python to
become Mathematica any time soon!
Criteria for a map of schemes to be an isomorphism
Let $ X, Y $ be separated finite type schemes over an algebraically closed field $ k $. Assume that $ Y $ is reduced. Let $ \phi : X \rightarrow Y $ be a morphism of schemes. Suppose that $ \phi $
gives a bijection on $ k $-points and an injection on $ S$-points for all $k$-schemes $ S$. Prove or disprove that $ \phi $ is an isomorphism (or add some extra hypotheses to ensure that $ \phi $ is
an isomorphism).
When $ k = \mathbb{C} $, $ X $ is normal, an $ Y $ is normal and irreducible, then I have a proof which uses the following crazy fact: If $ X $ and $ Y $ are irreducible varieties over $\mathbb{C} $
and $ Y $ is normal, then a morphism $ \phi $ inducing a bijection on $ \mathbb{C} $-points is an isomorphism. My original question follows from this fact via a small tweaking of the usual Yoneda
If anyone can give me a proof or reference for this last fact, I would be grateful too. I read it in the appendix of Kumar's book on Kac-Moody groups.
Edit: In light of some counterexamples, let me assume that X is irreducible and Y is normal and irreducible.
2 Why can't $X$ be the disjoint union of the affine line and a point, and $Y$ be the projective line (and the map be the obvious thing) for a counterexample? – Kevin Buzzard Sep 6 '10 at 14:14
<TeX-pedant>It is usually much better to write `\$X\$, \$Y\$' than '\$X, Y\$', because of the resulting spacing and, in some cases, the fact that the math comma may be different from the text
comma.</TeX-pedant> – Mariano Suárez-Alvarez♦ Sep 6 '10 at 14:24
1 See mathoverflow.net/questions/12767 for a related question. – VA. Sep 6 '10 at 14:37
And also mathoverflow.net/questions/16786 – Frank Sep 6 '10 at 14:40
add comment
2 Answers
active oldest votes
Trivial counterexample when $X$ is not connected: let $F$ be closed and nonempty in $Y$, $U:=Y\setminus F$ (assumed nonempty), and $X$ the disjoint sum of $U$ and $F$.
A bit less trivial, with $X$ and $Y$ irreducible: $Y$ = an irreducible curve with a node $y$, $X'$ := its normalization, $X$ = $X'$ with one of the two points over $y$ removed.
The property holds, indeed, if $X$ is irreducible and reduced and $Y$ normal, assuming only that $X(\mathbb{C})\to Y(\mathbb{C})$ is bijective. In fact, in this case $f$ must be quasifinite and birational, hence an isomorphism if $Y$ is normal.
[This is a minor comment on Laurent Moret-Bailly's answer; I'm too new to leave comments]
That the map $f$ in Laurent Moret-Bailly's answer is birational follows from Proposition 7.16 in [J. Harris, Algebraic Geometry, A First Course, GTM 133, 1992]; that it is an isomorphism from Zariski's Main Theorem.
Patent US6807513 - Method for determining a spatial quality index of regionalized data
1. Field of the Invention
The present invention concerns a method for determining a spatial quality index of regionalised data.
It can be applied more particularly, but not exclusively, to geophysical data, image data obtained by physical methods such as medical imaging or sonar, the non-destructive testing of materials, or
even to any type of sampling of natural phenomena, such as mining exploration campaigns, geochemical inventories, pollution sensors, satellite data, oceanographic data, water analysis, etc.
2. Description of the Prior Art
The regionalised data is data marked by coordinates inside a space with N dimensions, that is most currently in a one, two or three dimensional geographical space. This data can be either mono or
multivariable, that is that one or several variables are measured or calculated at the data points.
The theory of regionalised variables is known under the name of geostatistics. Geostatistics involves applying probabilities to natural phenomena which develop in space over a period of time, hence
the prefix ‘geo’. This theory is shown in detail in the work entitled “The theory of regionalised variables” by G. MATHERON (Publisher MASSON).
This theory provides the appropriate language and tools for estimating any quantity, a priori unknown, and able to be marked in a given space on the basis of a forcefully fragmentary sampling of this
same quantity.
So as to estimate this unknown quantity, geostatistics makes it possible to suitably select the most appropriate probabilistic model for the situation, the geostatistical estimator being known under
the name of “krigeage”.
More than the estimation, the probabilistic model also gives an indicator of the accuracy of the estimate. This indicator, known as estimate variance, is a vital tool as it opens the way for a
possible control of uncertainties (translated in variance terms).
Within the context of stationary probabilistic models, which assumes the invariance by translating in space the average of the modelised variable, the covariance tool or variogramme is used to
quantify the spatial variability of the data.
For a non-stationary model, generalised covariance is used.
The geostatistical models also make it possible to validly anticipate concerning a future state, for example the exploitation of natural resources, when the data available shall be more numerous and
the operator needs to deal with estimating problems.
Irrespective of the context for exploiting natural resources, the question here still concerns whether there is sufficient data available to resolve the operational problem.
Added to the intrinsic quality of each data item is the quality of the spatial integration of this data element inside the whole set of data. This is why it is advantageous to complete the
experimental reading by a geostatistical control associated with geographical, time or other types of coordinates.
The usual methods for controlling the quality or coherence of sets of regionalised data are either visual or morphological (studies of shapes) or statistical (without taking into account the spatial
coordinates). When they are used, filtering methods (frequency or spatial) generally work on monovariable data and on regular grids. As a result, they are ill-adapted to the breaking up of
multivariable data irregularly situated in space into anomalistic and coherent components;
Similarly, the definition of the criteria used to define the anomalies is often arbitrary and ill-adapted to experimental verification.
So as to eliminate these drawbacks, the invention proposes quantifying the spatial quality of a set of regionalised data by virtue of determining a geostatistical index known as a “Spatial Quality
Index” (SQI) being used to localise a priori anomalistic data and thus judging the quality of the measurements or of the digital processing which have generated the set of data.
The determination of the SQI resolves both the problem of interpretation of the spatial variations of the mono or multivariable data in general terms of anomalies and coherent component and an
estimate of the degree of anomalistic or spatial incoherence present in each data element taken individually. The determination of the SQI does not assume any particular arrangement of the data in
space and also fully works on data irregularly distributed in space and also on data regularly situated at the nodes of a grid with N dimensions, for example a three-dimensional acquisition grid for
acquiring irregularly distributed geophysical data is defined along two data acquisition transversal/longitudinal axes and a third time vertical axis.
Advantageously, this index is determined by means of the method of the invention which includes the following operational stages:
a first phase for identifying the statistical anomalies of a first order on the basis of a set of raw regionalised data, this identification including a stationarisation of the data via a preliminary
extraction of the spatial drifts of said data and the determination of the associated stationary residue of first order so that the value of the average of the residual data is reasonably constant in
space, the anomalies being identified and examined on the first order residue so as to provide a first order anomalistic criterion,
a second phase for identifying a second order statistical anomalies with extraction of the components of first order residue considered as anomalies and the components of first order residue
considered as coherent in space,
the establishment of a quantified relation (SQI) between any combination of the estimated values of the anomalistic components of the first and/or second order and any combination of the estimated
values of the coherent components of the first and/or second order,
the localisation of space anomalies on the basis of the values of the SQI of each regionalised data element.
Advantageously, said drawing up of a quantified relation constitutes the determination for each regionalised data element taken individually of the ratio of a spatial quality index (SQI).
Of course, said identification stages could be carried out by a geostatistical estimation (krigeage) in a non-stationary model for the first phase and in a stationary model for the second phase.
In the first phase, the non-stationary estimation of the spatial drift makes it possible to obtain first order stationary residue on which it is possible to validly calculate and modelise a
The interpretation of this variogramme in terms of coherent and anomalistic components results in the estimation per stationary model of the second order anomalistic component.
More particularly, the stationary and non-stationary geostatistical models could use:
the estimation by factorial krigeage of the anomalistic and coherent components of the residue,
the definition of the krigeage surrounding area adapted to the estimation of each of said anomalistic and coherent components.
Of course, in each of said stages, the analysis could be facilitated by a 3D visual control carried out firstly by an interpolation on a “grid” file of any irregularly sampled variable originating
from a “point” file, and secondly with by means of a colour code associated with the value of the inserted variable.
One embodiment of the invention appears hereafter and is given by way of non-restrictive example with reference to the accompanying drawings on which:
FIG. 1 is a diagrammatic representation illustrating the main phases for determining the spatial quality index (coefficient of spatial anomalies) inside a geophysical data processing context;
FIGS. 2a, 2 b and 2 c represent various file formats used in the method of the invention;
FIGS. 3a and 3 b respectively represent a target diagram (FIG. 3a) and a speed/time diagram (FIG. 3b);
FIG. 4 represents displays allowing a visual control of the stages of the method;
FIG. 5 represents the interpolation made to allow the displays of FIG. 4;
FIG. 6 represents an experimental residue variogram;
FIGS. 7a and 7 b represent spatial anomalistic quantification and localisation modes.
This example more particularly concerns the acquisition and processing of 3D seismic data for characterising petrol tanks and more. particularly the quality control of measurements of geophysical
speeds or “stack” speeds.
The problem is as follows: the seismic contractor offers the petrol operator a set of speeds set up manually for the “stack” operation which conditions the quality of the final data. The petrol
operator is responsible for monitoring the work of the contractor and shall give his opinion concerning the quality of the speed set-up.
To this effect, he can examine the cubes of “stack” speeds set-ups with the aid of statistical and geophysical tools so as to identify the spatial incoherences due to erroneous set-ups.
The determination of the SQI in accordance with the method of the invention contributes in defining the set-ups considered as anomalistic set-ups fit for resetting-up so as to guarantee a spatial
homogeneity which shall be quantified by the SQI index value.
Each “stack” speed set-up is defined by its spatial coordinates—geographic and temporal—and a geophysical speed value. Inside the spatial field described by all the set-ups, a probabilistic
modelisation makes it possible to separate a spatial noise, possibly organised, of a coherent signal (spatially). The spatial noise is supposed to correspond to processing and acquisition artifacts.
Quantified, it makes it possible to quickly identify the problematic set-ups according to a tolerance threshold deduced from geophysical quality or other (for example in a “stack” case, preservation
of the amplitudes of the seismic signal) requirements.
In the process for determining the SQI index shown on FIG. 1, the system of coordinates retained is the seismic longitudinal axis (inline)—transversal axis (crossline)—time system (FIG. 2a). In fact,
the entire geophysical chain, from acquisition to seismic processing, favours these three main directions: vertical (time) and the horizontal directions (inline, crossline) defined by the acquisition
device. As a result, most of the acquisition and processing artifacts are generated along these directions and the analysis is orientated along these directions. This process thus makes it possible
to reduce the determination time.
More specifically, the process introduces three types of files:
a 3D “point” file (cube of “stack” speed set-ups) (FIG. 2a) defined by all the positions of speed set-ups in a system of coordinates longitudinal axis (inline)—transversal axis (crossline)—time, is
generally at regular step in the horizontal plane (longitudinal axis (inline)—transversal axis (crossline)) and irregular in time,
a 2D “grid” file (FIG. 2b): with regular step and defined according to the longitudinal (inline)—transversal (crossline) axes and used to produce directional statistics along the vertical (time axis
of the 3D point file of FIG. 2a); statistics according to the longitudinal direction or transversal direction are possible,
a 3D “grid” file (FIG. 2c): with regular mesh in a system of coordinates (longitudinal axis (inline)—transversal axis (crossline)—time) and used for various displays, as explained subsequently (FIG.
In accordance with the methodology shown on FIG. 1, once the cube of “stack” speed set-ups is loaded, it is subjected to a geostatistical quality control (calculation of SQI). The raw speeds 1 are
broken down into the first order by factorial “krigeage” into a spatial drift 2 (“low frequency” component) and stationary residue 3 (QC1 phase). The spatial coherence of the first order residue is
modalised (with the aid of a variogram) (QC2 phase) for embodying a discriminating filtering via the factorial “krigeage” between a spatial noise 4, that is a second order residue, and a second order
coherent portion 5 considered to be “cleaned” from the processing and acquisition artefacts (QC3 phase). This second order coherent residual portion is added to the drift 2, that is the first order
coherent portion, so as to generate a cube of spatially coherent speeds set-ups 6 (QC4). In this example, the SQI is constituted by the ratio between the second order residue noise and the coherent
portion of the data element, that is the sum of the coherent components of the first and/or second order and result in obtaining a spatial anomalistic cube 7 (QC5 phase): each set-up is thus
characterised by its spatial quality index (SQI) which expresses the spatial noise percentage with respect to the spatially coherent portion.
The experimental statistics calculated during the process are broken down into:
basic statistics,
directional statistics,
experimental variogram.
All the “stack” speed set-ups and the first-order residues form distributions which can be quickly analysed through a few summary parameters: the number of samples, the extreme points, the arithmetic mean, the standard deviation and the variance:
its number of samples N, which characterises a distribution $V_i$ (raw speeds, residues, drifts, anomalies, filtered residues, filtered speeds . . . )
its extreme points:
minimum=min (V[i])
maximum=max (V[i])
its arithmetical mean: $m = \frac{1}{N}\sum_{i=1}^{N} V_i$
its standard deviation: $\sigma = \sqrt{\frac{1}{N}\sum_{i=1}^{N}(V_i - m)^2}$
its variance: $\sigma^2 = \frac{1}{N}\sum_{i=1}^{N}(V_i - m)^2$
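The basic statistics above can be sketched in a few lines; a minimal Python illustration (not part of the patented process), using the population definitions given in the formulas:

```python
import math

def basic_stats(values):
    """N, extreme points, arithmetic mean, variance and standard deviation
    of a distribution (raw speeds, residues, drifts, ...)."""
    n = len(values)
    m = sum(values) / n                              # arithmetic mean
    var = sum((v - m) ** 2 for v in values) / n      # population variance
    return {"N": n, "min": min(values), "max": max(values),
            "mean": m, "variance": var, "std": math.sqrt(var)}
```

These are exactly the quantities fed to the target and time/speed diagrams of FIG. 3.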
This analysis can be completed by using:
a target diagram in which the values V[i ]of the variable are grouped into categories, the target diagram representing frequencies corresponding to these categories (FIG. 3a),
a time/speed diagram (“cross-plot”) (FIG. 3b)
It is known that the acquisition and processing artefacts have as main axes the longitudinal, transversal and time axes. The calculation of a statistical magnitude along one of these three directions
can result in identifying one or several reinforced artefacts. As a result, the three statistical magnitudes calculated along these directions could be limited to the number of samples, the
arithmetical mean and the variance (or standard deviation).
Added to the previously mentioned analysis tools is the experimental variogram of the first-order residues. The variogram quantifies the spatial correlation of a regionalised variable $V(\vec{r})$, $\vec{r}$ being the position vector defined in the longitudinal-transversal-time system of coordinates. Its formula is deduced from that of the theoretical variogram, which concerns the random function $V(\vec{r})$ for which there is only one realisation: the regionalised variable.
Theoretical variogram: $\gamma(\vec{h}) = \frac{1}{2}\,\mathrm{Var}\left[V(\vec{r}+\vec{h}) - V(\vec{r})\right]$
where $\vec{h}$ is the vector characterising a set of pairs of set-ups.
Experimental variogram (after stationarity and ergodicity hypotheses): $\Gamma(\vec{h}) = \frac{1}{2N(\vec{h})}\sum_{n=1}^{N(\vec{h})}\left[V(\vec{r}_n+\vec{h}) - V(\vec{r}_n)\right]^2$
where $N(\vec{h})$ is the number of pairs of set-ups separated by $\vec{h}$.
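The experimental variogram along one of the main directions can be sketched as follows for a regularly sampled 1-D profile; a minimal NumPy illustration under the stationarity hypothesis, with the lag expressed in sample steps (the function name and interface are illustrative, not the patented code):

```python
import numpy as np

def experimental_variogram_1d(v, max_lag):
    """Experimental variogram of a regularly sampled profile:
    Gamma(h) = (1 / (2 N(h))) * sum_n (v[n + h] - v[n])**2, h = 1..max_lag."""
    gammas = []
    for h in range(1, max_lag + 1):
        diffs = v[h:] - v[:-h]            # all N(h) pairs separated by lag h
        gammas.append(0.5 * np.mean(diffs ** 2))
    return np.array(gammas)
```

In practice one such profile would be extracted along the longitudinal, transversal or time axis of the cube.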
Moreover, the analysis could be facilitated by using the display mode shown on FIG. 4.
According to this display mode, interpolation on grid (with regular steps) of any irregularly sampled variable (“point” file) allows, by means of a colour code associated with the value of the
interpolated variable, a quick 3D visual control.
The interpolation retained for display is defined as follows (FIG. 5): at a grid node $P_j$, the estimated value corresponds to the linear interpolation of the two set-ups $P_1$, $P_2$, respectively defined by the nearest coordinates $(t_1, V_1)$ and $(t_2, V_2)$, situated on either side of the node $P_j$ and on the same vertical line as the latter: $V_j = V_1 + \frac{t_j - t_1}{t_2 - t_1}\,(V_2 - V_1)$
Other, more elaborate types of interpolation, such as kriging, could be used for display if required.
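The display interpolation above reduces to ordinary linear interpolation in time between the two bracketing set-ups; a one-line sketch (illustrative names):

```python
def interp_node(t_j, t1, v1, t2, v2):
    """Value at a grid node at time t_j, linearly interpolated between the
    two nearest set-ups (t1, v1) and (t2, v2) on the same vertical line."""
    return v1 + (t_j - t1) / (t2 - t1) * (v2 - v1)
```

Applied at every node of the regular grid, this produces the 3D "grid" file used for the colour-coded display of FIG. 4.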
The stages for filtering and determining the anomalies used in the methodology shown on FIG. 1 are described hereafter:
a) Drift/residue filtering stage
A field of speeds generally has a vertical drift due to compaction, that is, the increase of speed with depth (a horizontal drift may also exist if, for example, the sea bottom is taken into account when acquiring marine seismic data). The speeds are therefore non-stationary in this direction, which can be handled by the theory of generalised covariances, a non-stationary geostatistical model. But a generalised covariance cannot be interpreted directly in terms of anomalistic and coherent spatial components, which makes it impossible to adjust a model. It is therefore necessary to extract this drift and to work on the associated stationary residue.
The extraction of a drift ensuring the stationarity of the residue is carried out by least-squares polynomial adjustment, which is a particular case of factorial kriging: the value of the drift at a point in space is the value of the polynomial that best fits (in the least-squares sense) the (independent) points belonging to a surrounding region centred on the point to be estimated. The basis of the polynomial (1, z; 1, z, z^2; 1, x, z, x^2, z^2, xz; etc.) is chosen according to the type of drift it is desired to extract, z being the time and x a geographical coordinate. The dimensions of the kriging extraction region must guarantee the stationarity of the first-order residue.
Example: extraction of a drift of type 1, z, z^2 at the point $\vec{r}_0$:
$V_{\text{drift}}(\vec{r}_0) = a + b\,z_0 + c\,z_0^2$ with $\vec{r}_0 = (x_0, y_0, z_0)$
The coefficients of the polynomial (a, b and c) are obtained by minimising: $\sum_{i=1}^{N_V}\left[(a + b\,z_i + c\,z_i^2) - V(\vec{r}_i)\right]^2$
where $N_V$ is the number of samples contained in the surrounding region centred on the point to be estimated.
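The least-squares extraction of a 1, z, z² drift over a neighbourhood can be sketched with an ordinary least-squares fit; a minimal NumPy illustration of the minimisation above (an illustration only, not the patented implementation):

```python
import numpy as np

def drift_1_z_z2(z, v, z0):
    """Least-squares fit of V ~ a + b z + c z**2 over the neighbourhood
    samples (z, v); returns the drift value evaluated at z0."""
    A = np.column_stack([np.ones_like(z), z, z ** 2])   # basis 1, z, z^2
    (a, b, c), *_ = np.linalg.lstsq(A, v, rcond=None)
    return a + b * z0 + c * z0 ** 2
```

A richer basis (1, x, z, x², z², xz) is handled the same way by adding columns to the design matrix.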
b) Coherent residue/anomalies filtering
The drift, the first-order coherent component estimated previously, is subtracted from the raw “stack” speeds; the first-order residues must be stationary:
$V_{\text{residue}}(\vec{r}_0) = V(\vec{r}_0) - V_{\text{drift}}(\vec{r}_0)$
The variographic analysis of the first order residues is the crucial phase of spatial quality control.
The experimental variogram of the residues $\Gamma(\vec{h})$ is calculated in several directions. For clarity of the figures, only the components in the three main directions (longitudinal, transversal, time) are shown on FIG. 6.
The modelisation of the variogram is subordinate to an interpretation of the experimental variogram in terms of coherent and anomalistic spatial components. The combined skills of the geostatistician, the geophysicist and possibly the geologist are required for this interpretation phase. The geologist provides information concerning the known or assumed geological structures, the geophysicist specifies the nature of the major geophysical artefacts likely to affect the data, and the geostatistician constructs the variogram model by taking these two types of information into account.
Ideally, one would separate a spatial noise (anomalistic spatial component) from a coherent signal (coherent spatial component) solely on the basis of the variographic interpretation. In practice, the proposed modelisation depends on the application for which the set of speeds is produced (“stack”, depth conversion, Dix speeds . . . ). A given spatial component can be considered as noise for one application and as a coherent signal for another. The terms “coherent” and “anomalistic” are not intrinsic properties of the set of speeds but properties of the set of speeds within the context of the adopted geostatistical model.
The adjustment of the model $\Gamma^M(\vec{h})$ can only be carried out with conditionally negative definite functions. Initially, variogram models such as the nugget-effect model and the exponential and spherical models are sufficient to construct an extendable model (with several components). The definitions of the models are given in the isotropic case:
Nugget effect model: $\Gamma(\vec{h}) = \begin{cases} 0 & \text{if } \|\vec{h}\| = 0 \\ b & \text{if } \|\vec{h}\| > 0 \end{cases}$
Spherical model: $\Gamma(\vec{h}) = \begin{cases} b\left[\frac{3}{2}\,\frac{\|\vec{h}\|}{a} - \frac{1}{2}\left(\frac{\|\vec{h}\|}{a}\right)^{3}\right] & \text{if } 0 \le \|\vec{h}\| \le a \\ b & \text{if } \|\vec{h}\| > a \end{cases}$
Exponential model: $\Gamma(\vec{h}) = b\left[1 - \exp\left(-\frac{\|\vec{h}\|}{a}\right)\right]$
The parameters a and b are respectively termed the range and the sill of the variogram; both are positive.
The retained variogram model $\Gamma^M$ is a linear combination of various elementary components selected according to their coherent or anomalistic interpretation:
$\Gamma^M(\vec{h}) = \Gamma_A(\vec{h}) + \Gamma_C(\vec{h})$
with $\Gamma_A$ the component of the variogram associated with the anomalistic portion,
$\Gamma_C$ the component of the variogram associated with the coherent portion.
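The three elementary models and their combination into a composite model can be sketched as follows; a minimal NumPy illustration in the isotropic case, here taking the nugget effect as the anomalistic component and a spherical structure as the coherent one (that pairing is an assumption for illustration only):

```python
import numpy as np

def nugget(h, b):
    """Nugget effect: 0 at h = 0, sill b for h > 0."""
    return np.where(h > 0, b, 0.0)

def spherical(h, b, a):
    """Spherical model with sill b and range a (sill reached at h = a)."""
    inside = b * (1.5 * h / a - 0.5 * (h / a) ** 3)
    return np.where(h <= a, inside, b)

def exponential(h, b, a):
    """Exponential model with sill b and scale a."""
    return b * (1.0 - np.exp(-h / a))

def gamma_model(h, b_nugget, b_spherical, a_spherical):
    """Composite model: anomalistic (nugget) + coherent (spherical) parts."""
    return nugget(h, b_nugget) + spherical(h, b_spherical, a_spherical)
```

The sills and ranges would be adjusted against the experimental variogram of the first-order residues.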
Surrounding Area
The surrounding area combines all the points taking part in estimating the anomalistic component situated at the point {right arrow over (r)}[0].
A sliding surrounding area is essential for any filtering operation. A single surrounding area including all the samples of the field is extremely penalising in terms of calculation time. The dimensions of the sliding surrounding area thus need to optimise the calculation time without deteriorating the quality of the estimate.
Factorial Krigeage
The modelisation of the second order variogram of the residue corresponds to an interpretation in terms of anomalistic and coherent components. The factorial krigeage allows an estimate of each of
the two components.
The estimate of the anomalistic component at the point $\vec{r}_0$ is carried out by calculating: $V_{\text{anomalistic residue}}(\vec{r}_0) = \sum_{\alpha=1}^{N_V} \lambda_\alpha\, V_{\text{residue}}(\vec{r}_\alpha)$
where all the $\vec{r}_\alpha$ constitute the sliding kriging surrounding area and where the kriging weights are determined by solving the system: $\begin{cases} \sum_{\beta=1}^{N_V} \lambda_\beta\, \Gamma^M(\vec{r}_\alpha - \vec{r}_\beta) + \mu_A = \Gamma_A(\vec{r}_\alpha - \vec{r}_0), & \alpha = 1, \ldots, N_V \\ \sum_{\beta=1}^{N_V} \lambda_\beta = 1 \end{cases}$
The estimate of the coherent component at the point $\vec{r}_0$ can be obtained similarly by factorial kriging: to find the corresponding kriging weights, it suffices to replace $\Gamma_A$ by $\Gamma_C$ in the kriging system.
However, a single filtering suffices since, by construction of the factorial kriging:
$V_{\text{coherent residue}}(\vec{r}_0) = V_{\text{residue}}(\vec{r}_0) - V_{\text{anomalistic residue}}(\vec{r}_0)$
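The kriging system above is a linear system in the weights $\lambda_\beta$ plus the Lagrange multiplier $\mu_A$; a minimal NumPy sketch of its assembly and solution (illustrative interface, not the patented code):

```python
import numpy as np

def kriging_weights(gamma_M, gamma_A0):
    """Solve the factorial-kriging system: gamma_M is the N x N matrix of
    Gamma^M(r_alpha - r_beta), gamma_A0 the vector Gamma_A(r_alpha - r_0).
    Returns the weights lambda and the Lagrange multiplier mu_A."""
    n = len(gamma_A0)
    K = np.zeros((n + 1, n + 1))
    K[:n, :n] = gamma_M
    K[:n, n] = 1.0          # column multiplying mu_A
    K[n, :n] = 1.0          # constraint row: sum of weights = 1
    rhs = np.append(gamma_A0, 1.0)
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n]
```

The same routine serves for the coherent component by passing the $\Gamma_C$ values as the right-hand side.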
c) Quantification of spatial anomalies
Calculation of the Spatial Anomaly Coefficient
The spatially coherent portion of the second-order residue, added to the drift (the first-order coherent component), makes it possible to generate a spatially coherent field of speeds:
$V_{\text{coherent}}(\vec{r}_0) = V_{\text{drift}}(\vec{r}_0) + V_{\text{coherent residue}}(\vec{r}_0)$
The ratio between an estimate of the anomalistic component and an estimate of the coherent component of the data (expressed in %) constitutes a ratio known as the spatial anomaly coefficient, or spatial quality index (SQI), which is attached to each speed set-up.
SQI attached to the stack speed set-ups: $V_{\text{spatial anomaly coefficient}}(\vec{r}_0) = \frac{V_{\text{anomalistic residue}}(\vec{r}_0)}{V_{\text{coherent}}(\vec{r}_0)} \times 100$
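The SQI of a single set-up then follows directly from the last few formulas; a trivial sketch (illustrative names):

```python
def spatial_quality_index(v_anomalistic, v_drift, v_coherent_residue):
    """SQI (%) of one set-up: anomalistic residue over the spatially
    coherent part (drift + coherent residue)."""
    return v_anomalistic / (v_drift + v_coherent_residue) * 100.0
```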
d) Localisation of the anomalistic points via interpretation of the spatial anomaly coefficient
The localisation of the spatial anomalies is made on the basis of the spatial anomaly coefficient attached to each set-up of the speed cube. Two options are possible:
in the first case, two categories of colours (or symbols) are associated with the spatial anomaly coefficient SQI. Sections through the cube are displayed. It is also possible to interpolate the spatial anomaly coefficient onto the grid (FIG. 7a).
in the second case, the colour (or symbolic) coding relates to the definition of time intervals possibly containing spatial anomaly coefficients greater than a threshold value (FIG. 7b). | {"url":"http://www.google.ca/patents/US6807513","timestamp":"2014-04-17T07:24:44Z","content_type":null,"content_length":"92536","record_id":"<urn:uuid:532e0a22-d77a-4823-bc7e-76eb215086e7>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00000-ip-10-147-4-33.ec2.internal.warc.gz"} |
Stat 310: Computational Math and Optimization II: Simulation and Optimization
• First Lecture: Feb 9, 2010. Last lecture: Mar 11, 2010.
• Office Hours: Tue-Thu 3-5 pm, Eckhart 120A, or by appointment.
Course Outline
• Special outline page.
1. Homework 1, due 02/18/2010.
□ The code I used. Functions: fenton.m, fenton_wrap.m, newtonLikeIteration.m, newtonLikeMethod.m. newtonLikeMethod is the driver. You need intval for derivative information to run it (we will talk about it in lecture 4).
2. Homework 2, due 02/25/2010.
3. Homework 3, due 03/04/2010.
□ To install intval: download it into a directory where you have read/write access. Unpack the archive (for example, with gzip -d filename.tar.gz followed by tar -xvf filename.tar). Add the directory and all its subdirectories to your Matlab path. The simplest way, from the Matlab interface, is: File -> Set Path -> Add with Subfolders, then browse to the directory where intval was unpacked. I would restart Matlab, though that is not necessary. The first time around, intval runs an installation/demo routine which takes a while, but subsequent Matlab starts should be fast, while also allowing you to see the intval demo movie.
4. Homework 4, due 03/11/2010.
5. Homework 5, due 03/17/2010
What I assume students already know.
1. Multivariate calculus.
2. First-order necessary optimality conditions for unconstrained optimization.
3. Linear algebra. Positive definite and semidefinite symmetric matrices, eigenvalues.
4. Beginner Matlab. | {"url":"http://www.mcs.anl.gov/~anitescu/CLASSES/2009/compmath09.html","timestamp":"2014-04-17T08:50:39Z","content_type":null,"content_length":"9584","record_id":"<urn:uuid:de7104cd-2bea-49b1-801f-81a0d6c95627>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00572-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Inverse Fundamental Operator Marching Method for Cauchy Problems in Range-Dependent Stratified Waveguides
Mathematical Problems in Engineering
Volume 2011 (2011), Article ID 259479, 15 pages
Research Article
^1Department of Mathematics, North China University of Water Conservancy and Electric Power, Zhengzhou 450011, China
^2School of Economics and Finance, Xi'an Jiaotong University, Xi'an 710061, China
Received 28 February 2011; Accepted 28 May 2011
Academic Editor: Ming Li
Copyright © 2011 Peng Li and Weizhou Zhong. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction
in any medium, provided the original work is properly cited.
The inverse fundamental operator marching method (IFOMM) is suggested for solving Cauchy problems associated with the Helmholtz equation in stratified waveguides. It is observed that, for large-scale Cauchy problems, the method is computationally efficient, highly accurate, and stable with respect to noise in the data for the propagating part of a starting field. Further, the scope of application of the IFOMM is discussed by providing an error estimate for the method. The estimate indicates that the IFOMM is particularly well suited to computing wave propagation in long-range and slowly varying stratified waveguides.
1. Introduction
In many engineering applications, efficient mathematical methods are often required for the computing of propagation phenomena and transitions in complex systems. Recently, many interesting works on
this issue are proposed to improve efficiencies in many different scientific areas, for example, the wavelet-related method for the integrodifferential and integral equations [1, 2], the exact
solution of impulse response to a class of fractional oscillators [3], the representation of a stochastic bound in stochastic modeling [4], the mathematical transform of traveling-wave equations [5]
and the dynamics generated by coherent functions [6], the numerical transform for the Helmholtz equation [7]. As one of propagation phenomena and transitions, wave propagation problems associated
with the Helmholtz equation are very common and important in many areas, for example, ocean acoustics, wave propagation and scattering, and electromagnetic field. Many works have been done to improve
the computing efficiency in this area.
Problems in which the boundary conditions are not completely known, owing to technical difficulties, often arise in scientific and engineering areas described by the Helmholtz equation, such as ocean acoustics, wave propagation and scattering, and electromagnetics. Determining, with the assistance of additionally supplied data, the boundary conditions on the inaccessible part of the boundary or the source condition constitutes an inverse boundary value problem, or Cauchy problem, which is ill posed in the sense that small perturbations in the data may result in an enormous deviation in the solution. The purpose of this study is to improve the efficiency of computing such Cauchy problems.
Some numerical methods for medium-scale problems have been proposed for the Cauchy problem [7–22]. However they are not practical for large-scale wave propagation problems, for example, wave
propagation in optics and ocean acoustics [23]. Large-scale wave propagation problems in acoustics, electromagnetism, seismic migration, and other applications often require solving the Helmholtz
equation in a very large-scale domain with curved interfaces or boundaries. For example, waves are allowed to travel large distances in the horizontal direction in ocean acoustics. For large-scale
wave propagation problems, indirect methods, for example, parabolic equation (PE) method [24–26], are widely used, since direct methods like finite element method (FEM) result in very large
indefinite linear systems which are hard to be solved. The PE method gives useful approximations to the outgoing component of the wave-field in the case of weak range dependence. However, its
accuracy in weakly range-dependent waveguides over large range distances has not been rigorously established [27]. The exact one-way reformulation method [28, 29], which reformulates the Helmholtz equation in terms of the DtN (Dirichlet-to-Neumann) map and the fundamental solution operator, was then proposed to solve the wave propagation problem exactly. When the range length scale is much larger than the transverse length scale of the waveguide, such an exact reformulation is useful, and numerical methods based on it feature range-independent memory requirements and linear scaling of computing time.
Based on the exact one-way reformulation, the “inverse fundamental operator marching method” (IFOMM) [23] is developed to reconstruct the propagating parts of incident waves exactly from the received
waves in one-layered medium waveguides with curved bottom. While, in many practical applications, the computed waveguides are often multilayered and composed by different mediums with varied
densities, and, thus, this study tries to solve the Cauchy problems associated with the Helmholtz equation in multilayered waveguides. Firstly, whether the IFOMM is applicable in multilayered
waveguides is the primary question need to be answered. As numerical examples demonstrate, the illnesses caused by the discontinuities between different layers, such as the density and the
wavenumber, can also be remedied by the IFOMM successfully and the linear systems arising in the marching process can be treated by the truncated singular value decomposition. Secondly, an error
estimation for the numerical marching scheme is provided to discuss the proper scope of the IFOMM's application. The IFOMM is important for inverse problems in many applications in stratified
waveguides, for example, source localization and remote sensing in ocean acoustics [30]. For instance, the location of a point source can be determined from the reconstructed propagating part of the
incident field.
This paper is arranged as follows. The basic mathematical formulations are described in Section 2. In Section 3, we reorganize the inverse operator marching scheme firstly, then give the error
estimation for the IFOMM and discuss its application scope; interface conditions and matrix approximation are briefly introduced in the end of this section. Section 4 presents some numerical results
obtained by IFOMM. We conclude our work with some discussions in Section 5.
2. Mathematical Formulation
Consider the two-dimensional Helmholtz equation in a typical ocean and seabed environment with two curved interfaces where the first layer with density is located in , the second layer with density
is located in , the third layer with density is located in ; the interfaces are two curves and , with , , represents the Fourier transform of acoustic pressure, and is called wavenumber. We also
assume that the problem is range independent (i.e., wavenumber and interfaces are independent of ) for and , that is, The boundary conditions on the top and the bottom are supposed as and . The
interface conditions mean that where is a normal vector of the interface or (Figure 1). If there is no wave coming from , the exact boundary condition (radiation condition) at is , where and the
square root operator is defined in [28].
We will concentrate on solving the equation for since the Helmholtz equation can be easily solved by separable variable method for or . If there are no waves coming from , the exact boundary
condition (radiation condition) at is , where and the square root operator is defined in [28]. The simplest boundary condition at is for a given function . Here, the dividable assumption is also
imposed although it is not necessary [7], there exists one horizontal straight line between the interfaces and . We suppose that there is an unique solution for (2.1) with the boundary conditions and
interface conditions.
The forward problem for (2.1) is to find the received wave at from the incident wave at . In the inverse boundary value problem, or Cauchy problem, considered here, the incident wave at must instead be computed from the received wave at . Here, besides the radiation condition, a Dirichlet-type boundary condition is imposed. Together with the top, bottom, and interface conditions, we seek the solution at from the boundary conditions imposed at .
Since the Helmholtz equation under consideration is posed in a range-dependent stratified waveguide with curved interfaces, we flatten the curved interfaces of the waveguide by performing an analytic local orthogonal transform [31–35] in this study; the numerical transform of [7] is another possible option.
By flattening the stratified waveguide with two curved interfaces through the coordinate transformation, the Helmholtz equation is transformed into (2.6); details can be found in [34].
3. Numerical Algorithm
This section first reorganizes the operator form of the IFOMM for inverse boundary value problems in multilayered waveguides. An error estimate for the IFOMM is then provided to discuss its properties. Finally, the forward fundamental operator marching method (FFOMM) [28, 29, 31–33] for forward problems is briefly reviewed for comparison with the IFOMM.
Let be a partition of the interval , that is, , where the solutions are required. Consider also the refined partition , with . Let and be the approximations of and at , respectively, and denote the range step size by . The forward problem computes the solutions required in by marching from the incident wave at to . Conversely, the Cauchy problem computes the solutions at the required places by marching from the received wave at to the incident wave at .
3.1. IFOMM
The DtN operator and the fundamental operator are defined by By substituting (3.1) into (2.6) and differentiating (3.2) with respect to , the equations for and are obtained as
The DtN operator in range-independent region, that is, , satisfies the analytic “initial” conditions if there are no waves coming from , and satisfies .
Equation (2.6) on the interval is approximated by its midpoint values at . We have the following operator marching scheme in For derivation details, we refer to [29, 31]. For simplicity, the
shorthand notation is used to represent the operator marching process (3.6).
The inverse fundamental operator marching method can be described as follows.
Algorithm 1 (IFOMM). Step 1. , where .
Step 2. If , solve by TSVD, .
Step 3. Let .
Step 4. If =, end the program, else let and repeat Step 1.
Generally, the L-curve method [36] is used to determine the regularization parameter with which the TSVD can work smoothly. In fact, the L-curve criterion is not really needed in the marching computation, since after long-range propagation only the propagating modes determine the waves, which suggests choosing the number of propagating modes as the regularization parameter for the TSVD. It has been verified that the parameter determined by the L-curve method is exactly the number of propagating modes in the waveguide [23].
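Step 2 of Algorithm 1 solves a linear system by TSVD, with the number of propagating modes as the truncation parameter. A minimal NumPy sketch of such a truncated-SVD solve (an illustration of the regularization idea, not the authors' implementation; A and b stand for the discretized fundamental operator and the marched field):

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Solve A x ~ b keeping only the k largest singular values; here k
    would be the number of propagating modes in the marching scheme."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    x = np.zeros(Vt.shape[1])
    for i in range(k):                       # project onto the retained modes
        x += (U[:, i] @ b / s[i]) * Vt[i]
    return x
```

Discarding the small singular values suppresses the evanescent components whose inversion would amplify the data noise.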
3.2. Error Estimation
The error estimation for the IFOMM is provided firstly. Based on the estimation, the utilization scope of the method in stratified waveguides is analyzed.
From the operator marching process (3.6), we have where , and so forth represent the corresponding operators in interval , .
Then, recursing formula (3.8) and noticing that , we obtain In the same way, there is also
According to (3.2), there are By taking norms in (3.12) and using (3.10),
Suppose that is polluted and that is obtained by (3.12); then we have from (3.12) and from (3.10). If , , we have the following theorem.
Theorem 3.1. Let be the initial measurement error, then, the resulting errors of the IFOMM solution at , will satisfy
Corollary 3.2. For weakly range-dependent stratified waveguides, if there exist a constant , subject to , with , the IFOMM is stable.
Proof. In weakly range-dependent waveguides, the reflected waves are very weak, which leads to and . For a grid which approximates the computed domain and the PDE model accurately enough, there exists a positive constant such that , with . For an -point discretization along the range direction, we have from (3.16).
The operators are truncated to the number of propagating modes; the eigenvalues left in the truncated system are positive and real, which gives . Thus, when the reflection of every mode is weak in the waveguide, the constant will be small for a sufficiently fine grid. Thus, the IFOMM is stable.
A finer grid approximates the original problem more accurately, but at the same time it may also amplify the initial errors if the number of discrete points in the range direction tends to infinity. Since a large-range-step method is used to discretize the computed domain for slowly varying waveguides, and an overly dense grid is not needed to obtain the required accuracy, the IFOMM is stable in weakly range-dependent waveguides without much reflection, according to the corollary.
The theorem highlights two major factors which greatly affect the IFOMM solutions of (2.6). One is choosing the correct truncation number for the linear systems; the other is that the reflection in the waveguide cannot be very strong. Strong reflection may lead to a very large in (3.18), since every can be very large.
3.3. Interface Conditions and Matrix Approximation
The formulas in (3.6) have to be further discretized, with matrices replacing the operators [29, 32]. The interface and boundary conditions are important in this discretization. Corresponding to the interface and boundary conditions of (2.1), we need to consider the interface and boundary conditions for the transformed Helmholtz equation (2.6); details can be found in [34].
Furthermore, we approximate the operator by a tridiagonal matrix ; details can be found in [32].
3.4. FFOMM
Let the DtN operator and the fundamental operator satisfy By substituting (3.20) into (2.6), formulas for and can be derived analogously. Using the operator marching scheme (3.6) for the FFOMM from
to , its fundamental operator can be determined. For more details, we refer to [28]. The FFOMM can be written as the following algorithm.
Algorithm 2 (FFOMM). Step 1. , where .
Step 2. If Save and reset .
Step 3. If , repeat Step 1, else go to Step 4.
Step 4. Load , .
Step 5. Load .
Step 6. If =, end the program, else let repeat Step 1.
If an error analysis similar to that of the IFOMM is applied to the FFOMM, a similar result is obtained. The conditions under which the Helmholtz equation can be solved exactly by the FFOMM likewise demand that the reflection of the propagating modes be small in the waveguide.
4. Numerical Example
A typical numerical example in ocean acoustics is provided here to examine the IFOMM for solving Cauchy problems in stratified waveguides.
Example 4.1. Let with , where is the number of points to discretize the variable, is the number to truncate the matrices that approximate the operators arising in the marching process [32].
Suppose the incident wave at is (Case 1: the tenth propagating mode at ) or (Case 2: the first propagating mode at ), respectively; reconstruct the incident wave.
The numerical example takes the incident wave at as the reference solution for the IFOMM solution. Although a range step size varied according to the character of the range-dependent waveguide would be more reasonable, we choose a constant range step size for simplicity. The solutions are computed with a large range step size (). As the numerical test shows, reasonably accurate reconstructed solutions are already obtained for .
In practice, the available data are usually contaminated by measurement errors, and the stability of the numerical method is of vital importance for obtaining physically meaningful results. To this end, simulated noisy data generated by are imposed on the received wave , where are the eigenvectors corresponding to the propagating modes of the operator (3.19) at , is the number of propagating modes, , is an -dimensional vector, and are normally distributed random variables with zero mean and unit standard deviation, and dictates the level of data noise, that is, the ratio of noise energy to data energy in the 2-norm. The random variables and are realized using the Fortran function DRNNOF() of the IMSL library.
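The noisy-data construction above (Gaussian noise scaled so that the ratio of noise energy to data energy in the 2-norm equals the prescribed level) can be sketched as follows; this uses NumPy's random generator in place of the IMSL routine DRNNOF(), so it illustrates the construction rather than reproducing the authors' Fortran code:

```python
import numpy as np

def add_noise(u, level, seed=0):
    """Perturb u with Gaussian noise scaled so that
    ||noise||_2 / ||u||_2 equals `level`."""
    rng = np.random.default_rng(seed)
    e = rng.standard_normal(u.shape)
    e *= level * np.linalg.norm(u) / np.linalg.norm(e)   # rescale noise energy
    return u + e
```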
Figures 2 and 5 show the received waves for Cases 1 and 2, respectively. Figures 3 and 6 give the reconstructed incident waves computed by the IFOMM with TSVD, where the regularization parameter is chosen as 10 (the number of propagating modes in the waveguide). As Figures 3 and 6 indicate, the regularized solution computed by the IFOMM is in good agreement with the exact incident wave . As shown by the solutions obtained with noise level in Figures 4 and 7, the reconstruction remains very stable despite the high noise level. The performance of the IFOMM for the two different incident waves of Cases 1 and 2 verifies that the proposed algorithm is efficient, accurate, and stable for these incident waves.
5. Conclusion
The IFOMM, developed for one-layered waveguides, is applied to solve inverse boundary value problems associated with the Helmholtz equation in stratified waveguides. A numerical example demonstrates that the IFOMM can successfully compute inverse boundary value problems in stratified waveguides. The scope of the IFOMM's application is then discussed on the basis of an error estimate for the marching scheme. The estimate gives a quantitative account of error propagation and shows that the IFOMM can only be applied in waveguides where reflection is not strong. Furthermore, the theorem reveals that errors may accumulate greatly in some circumstances when the grid becomes excessively fine. Numerical examples indicate that the IFOMM is computationally efficient, highly accurate, and stable with respect to noise in the data for the propagating part of a starting field when the computed domain is long and complex.
There are several further studies related to the IFOMM for solving Cauchy problems. Firstly, although the IFOMM only considers two-dimensional problems in its current form, the scheme is easily extended to problems in three-dimensional space under cylindrical coordinates, and it can also be extended to wave propagation in a three-dimensional Cartesian coordinate system. Secondly, when the number of propagating modes varies frequently along the range direction (corresponding to some propagating modes being totally reflected), it remains open whether the IFOMM or FFOMM can be applied in such waveguides with suitable improvements, and how the methods should be improved; at the very least, the parameters for truncating the systems are more difficult to determine. Thirdly, the estimate of error propagation may be improved through more detailed analysis.
This research is supported by the NCET-08-0450 and the 985 II of Xi'an Jiaotong University.
Newton's second law (by Radurod)
Given below is a strobe picture of a ball rolling across a table. Strobe pictures reveal the position of the object at regular intervals of time, in this case, once each 0.1 seconds.
Notice that the ball covers an equal distance between flashes. Let's assume this distance equals 20 cm and display the ball's behavior on a graph plotting its x-position versus time.
The slope of the position versus time graph shown above would equal 20 cm divided by 0.1 sec or 200 cm/sec. The following graph displays this exact same information in a new format, a velocity versus
time graph.
This graph very clearly communicates that the ball's velocity never changes since the slope of the line equals zero. Note that during the interval of time being graphed, the ball maintained a
constant velocity of 200 cm/sec. We can also infer that it is moving in a positive direction since the graph is in quadrant I where velocities are positive.
To determine how far the ball travels on this type of graph we must calculate the area bounded by the "curve" and the x- or time axis.
As you can see, the area between 0.1 and 0.3 seconds confirms that the ball experienced a displacement of 40 cm while moving in a positive direction.
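The slope and area computations above can be reproduced numerically, using the values from the example (positions in cm, one flash every 0.1 s):

```python
# Positions of the ball at each strobe flash (from the example above):
# 20 cm between flashes, one flash every 0.1 s.
dt = 0.1                                  # seconds between flashes
positions = [0, 20, 40, 60, 80]           # cm
velocities = [(positions[i + 1] - positions[i]) / dt
              for i in range(len(positions) - 1)]
print(round(velocities[0]))               # 200, the slope of x vs t in cm/s

# Displacement between t = 0.1 s and t = 0.3 s equals the area under
# the (constant) velocity-time graph: velocity * elapsed time.
displacement = velocities[0] * (0.3 - 0.1)
print(round(displacement))                # 40 cm
```

Every interval gives the same 200 cm/s, which is exactly why the velocity-time graph is a horizontal line.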
Physics Lab
Motion of a Motorized Cart
* To study the motion (position, displacement, velocity, and acceleration) of a motorized cart.
* To practice constructing position vs. time and velocity vs. time graphs for a motion.
Materials:
* constant velocity motorized cart
* meter stick or metric tape
* about 2 meters of ticker tape
* graph paper
* masking tape
* stopwatch or watch with a seconds hand
* data table
* grading rubric
In this lab you will observe and measure the motion of a motorized cart by marking its position along a strip of tape at regular time intervals. Note that you can adjust the speed of the cart using
the small dial - find a speed that works well for you and then do not... | {"url":"http://www.studymode.com/essays/Newton's-Second-Law-421874.html","timestamp":"2014-04-16T04:24:58Z","content_type":null,"content_length":"32597","record_id":"<urn:uuid:6926978d-7ba2-4bac-954c-eaddbe0a6239>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00008-ip-10-147-4-33.ec2.internal.warc.gz"} |
classical ring of quotients
A ring $Q\supset R$ is a left classical ring of quotients for $R$ (resp. right classical ring of quotients for $R$) if it satisfies:
• every regular element of $R$ is invertible in $Q$;
• every element of $Q$ can be written in the form $x^{-1}y$ (resp. $yx^{-1}$) with $x,y\in R$ and $x$ regular.
If a ring $R$ has a left or right classical ring of quotients, then it is unique up to isomorphism.
Note that the goal here is to construct a ring which is not too different from $R$, but in which more elements are invertible. The first condition says which elements we want to be invertible. The
second condition says that $Q$ should contain just enough extra elements to make the regular elements invertible.
Such rings are called classical rings of quotients, because there are other rings of quotients. These all attempt to enlarge $R$ somehow to make more elements invertible (or sometimes to make ideals
Finally, note that a ring of quotients is not the same as a quotient ring.
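As a concrete illustration (a standard example, not part of the original entry), take $R=\mathbb{Z}$:

```latex
% For R = \mathbb{Z}, the regular elements are exactly the nonzero
% integers, and Q = \mathbb{Q} is the classical ring of quotients:
% every nonzero integer becomes invertible in \mathbb{Q}, and
\[
  \mathbb{Q} = \{\, x^{-1}y : x, y \in \mathbb{Z},\; x \neq 0 \,\}.
\]
% Since \mathbb{Z} is commutative, the left and right classical
% rings of quotients coincide.
```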
See also: OreCondition, ExtensionByLocalization, FiniteRingHasNoProperOverrings
Multiplication Worksheets
Multiplication Worksheets and Games (Basic Facts)
We have multiplication sheets for timed tests or extra practice, as well as flashcards and games. Includes Multiplication Flashcards, Multiplication Bingo, Multiplication Tables, Multiplication I
Have - Who has, and lots more!
For two and three-digit multiplication, see: Multiplication (Advanced).
Multiplication Game: I Have / Who Has Free
A super-fun chain reaction game that teaches times tables!
Multiplication Board Game: To the Moon Member
Printable multiplication board game with a space theme.
Multiplication Game: Memory Match (up to 9s) Free
This fun memory card game will help students learn their multiplication facts up to 9 x 9.
Multiplication Game: Memory Match (up to 12s) Member
This version of Multiplication memory match includes 10s, 11s, and 12s.
Multiplication Roll 'Em Member
Play this multiplication dice game to make your math lesson more fun!
Multiplication Bingo Member
30 different multiplication bingo boards and calling cards.
Multiplication Flashcards - Large Member
Copy these flashcards (large) on card stock and let your students cut them out and practice with them.
Multiplication Flashcards - Small Member
Copy these flashcards (small) on card stock and let your students cut them out and practice with them.
Fact Family Flashcards Member
Practice basic multiplication facts with these triangular fact family cards.
Multiplication Arrays
Multiplying With Arrays Member
Write a multiplication fact for each array shown.
Multiplication Array Table Member
Use the L-shaped tool to view an array for any multiplication fact up to 10.
Introduction to Arrays Member
Teach kids to solve multiplication facts by making arrays with neat rows and columns of symbols.
Draw Arrays Member
Students draw their own arrays to solve these multiplication facts.
Array Fact Families Member
Write a fact family for each array shown.
Worksheets (0-9s)
Multiplication: Groups Member
Write a multiplication and addition problem for each picture shown.
Multiplication Wheel (0-9) Free
Multiply the number inside the wheel times the number in the middle to find the outside product.
Multiplication Nines Trick Member
A nifty finger trick for multiplying any number by nine.
Multiplication Picture Word Problems Member
Use the picture clues to help you solve these basic multiplication word problems.
Correct or Incorrect? (0-9) Member
Determine whether each product is correct or incorrect.
Domino Multiplication Member
Count the dots on each side of the dominoes and multiply the numbers together.
Scrambled Facts (0-9) Member
Unscramble each set of digits to create basic multiplication facts.
Multiplication Squares Puzzle (Basic Facts 0-9) Member
Fill in the empty squares with factors that will complete the puzzle.
Multiplication Animal Parts Member
Multiply to find out how many ears are on six rabbits, how many legs are on seven spiders, etc.
Multiplication Boxes (Basic Facts 0-9) Member
Solve the multiplication facts, then color the boxes according to the key. Students will find a surprise picture.
Multiplication Rhymes Member
Students write their own rhymes to help learn basic facts.
Multiplication Word Problems (Basic) Member
These word problems require knowledge of 0 - 9 basic facts.
Missing Factors Member
Determine the missing factor for each multiplication fact.
Multiplication Skip Count (0 - 5) Member
Use skip counting to learn and practice basic multiplication facts. Includes facts up to 5 x 12.
Multiplication Skip Count (6-10) Member
More practice with multiplication facts, using skip counting. Includes basic facts facts up to 10 x 12.
Times Table Crossword Member
A fun crossword puzzle where kids fill in their basic facts.
Multiplication Search Puzzle Member
Search for the hidden 0 - 9 multiplication facts in this puzzle.
Worksheets (0-12s)
Multiplication Wheel (0-12) Member
Multiply the numbers on the wheel-shaped puzzle.
Multiplication Tic-Tac-Toe Member
Write the answers to the multiplication facts, then write X or O over the corresponding numbers on the tic-tac-toe board.
Scrambled Facts (0-12) Member
Unscramble the digits in each box to create a basic fact.
Multiplication Word Problems (Dozens) Member
Word problems to practice multiplying by 12s only.
Multiplication Squares Puzzle (Facts 0-12) Member
Insert factors that will complete the puzzle grid to make the given products.
Mystery Pictures
Multiplication Mystery Picture - Parrot (0-6) Member
Write the answers to the multiplication facts (0-6) and color according to the key.
Rooster Multiplication Mystery Picture (0-9) Member
Solve the multiplication facts and color to reveal a picture of a rooster.
Multiplication Mystery Picture: Hot Air Balloon (0-9) Member
Hot air balloon picture covers all basic multiplication facts 0-9.
Fish Multiplication Mystery Picture (0-9) Member
Solve the multiplication facts and color to reveal a picture of a clownfish.
Dragon Multiplication Mystery Picture (0-9) Member
Solve the multiplication facts and color according to the key to reveal a mystery dragon picture.
Rocket Mystery Picture (3s Only) Member
Create a space ship zooming across the sky after solving multiplication facts. 3s only.
Sailboat Mystery Picture (3s Only) Member
Solve and color according to the key to reveal a sailboat picture.
Airplane Mystery Picture (4s Only) Member
Reveal a colorful airplane illustration when you solve the math facts and color according to the key. All problems include 4 as a factor.
Whale Mystery Picture (5s Only) Member
Solve these multiplication problems with factors of five. Then color to see the blue whale picture.
Fruit Mystery Picture (6s Only) Member
Create a picture of fruit by solving the multiplication problems. 6s only.
Castle Mystery Picture (7s Only) Member
Write the products and color the numbers to make a royal castle picture.
Beach Ball Mystery Picture (8s Only) Member
Solve the multiplication problems and color the beach ball, pail, and shovel.
Dog Mystery Picture (9s Only) Member
Find the answers to the multiplication problems, then color according to the code at the bottom.
Basic Multiplication
Worksheet Generator
Make your own basic multiplication worksheets. You choose the range for the first and second factor. This generator allows you to create worksheets with 25 or 50 problems.
Timed Quizzes
Multiplication Timed Quiz 0 - 2 Free
Test students' multiplication skills with this timed quiz with 0s, 1s, and 2s. Recommended time: 5 minutes.
Multiplication Timed Quiz 0 - 3 Free
Here's another timed multiplication test with 0s, 1s, 2s, and 3s.
Multiplication Timed Quiz 0 - 4 Member
Time students to see if they can finish this quiz in less than 5 minutes. Facts include 0s, 1s, 2s, 3s, and 4s.
Multiplication Timed Quiz 0 - 5 Member
Try this quick multiplication timed quiz. Facts include 0s, 1s, 2s, 3s, 4s, and 5s.
Multiplication Timed Quiz 0 - 6 Member
This is a five-minute, fifty-question multiplication test. Facts include 0s, 1s, 2s, 3s, 4s, 5s, and 6s.
Multiplication Timed Quiz 0 - 7 Member
Build speed and accuracy with this multiplication quiz. Facts include 0s, 1s, 2s, 3s, 4s, 5s, 6s, and 7s.
Multiplication Timed Quiz 0 - 8 Member
Here's another quiz for assessing students' knowledge of basic facts. Facts include 0s, 1s, 2s, 3s, 4s, 5s, 6s, 7s, and, of course, 8s.
Multiplication Timed Quiz 0 - 9 Member
When students have learned their facts up to 9x9, try this timed test. Facts include 0s, 1s, 2s, 3s, 4s, 5s, 6s, 7s, 8s, and 9s.
Multiplication Timed Quiz 0 - 10 Member
Yet another timed test for building accuracy and speed. Facts include 0s, 1s, 2s, 3s, 4s, 5s, 6s, 7s, 8s, 9s, and 10s.
Multiplication Timed Quiz 0 - 11 Member
Use this time drill to test students' ability to recall basic facts. Facts include 0s, 1s, 2s, 3s, 4s, 5s, 6s, 7s, 8s, 9s, 10s, and 11s.
Multiplication Timed Quiz 0 - 12 Member
Here's the last multiplication quiz in the set. Facts include 0s, 1s, 2s, 3s, 4s, 5s, 6s, 7s, 8s, 9s, 10s, 11s, and 12s.
Multiplication Drills
Multiplication Basic Facts 0 - 3 Free
Basic Multiplication Facts 0 - 3.
Multiplication Basic Facts 0 - 4 Member
Basic Multiplication Facts 0 - 4.
Multiplication Basic Facts 0 - 5 Member
Basic Multiplication Facts 0 - 5.
Multiplication Basic Facts 0 - 6 Member
Basic Multiplication Facts 0 - 6.
Multiplication Basic Facts 0 - 7 Member
Basic Multiplication Facts 0 - 7.
Multiplication Basic Facts 0 - 8 Member
Basic Multiplication Facts 0 - 8.
Multiplication Basic Facts 0 - 9 Member
Basic Multiplication Facts 0 - 9.
Multiplication Basic Facts 0 - 10 Member
Basic Multiplication Facts 0 - 10.
Multiplication Basic Facts 0 - 11 Member
Basic Multiplication Facts 0 - 11.
Multiplication Basic Facts 0 - 12 Member
Basic Multiplication Facts 0 - 12.
Associative Property of Multiplication Member
Worksheet to help you teach students about the associative property of multiplication. Requires knowledge of basic facts 0-12.
Squares and Square Roots Member
Determine the square and root of each.
Origami Fortune Tellers
(aka Cootie Catchers)
Cootie Catcher
Hardest Facts 0 through 9 Member
This origami fortune teller includes the hardest 0-9 multiplication facts. Basic facts include 6x7, 8x7, 6x3, 7x4, 7x3, 3x8, 9x7, and 4x8. (Does NOT include 10s, 11s, or 12s.)
Cootie Catcher
Multiply by TWO Free
Fold the origami fortune teller and use it to practice multiplying by twos. Includes eight basic facts. (Up to 2 x 12)
Cootie Catcher
Multiply by THREE Member
Use this origami fortune teller to review multiplication by threes. (Up to 3 x 12)
Cootie Catcher
Multiply by FOUR Member
Review your fours times tables with this multiplication cootie catcher craft project. (Up to 4 x 12)
Cootie Catcher
Multiply by FIVE Member
Cut and fold the paper to make a fortune teller that allows students to practice multiplying by fives. (Up to 5 x 12)
Cootie Catcher
Multiply by SIX Member
Multiply by six with this creative origami cootie catcher game. (Up to 6 x 12)
Cootie Catcher
Multiply by SEVEN Member
Your students will become pros at multiplying by 7s when they practice with this fortune teller. (Up to 7 x 12)
Cootie Catcher
Multiply by EIGHT Member
Want to master the eights? Cut, color, fold, and use this review toy. (Up to 8 x 12)
Cootie Catcher
Multiply by NINE Member
Kids will quickly learn to multiply numbers by nine with this fortune teller game. (Up to 9 x 12)
Cootie Catcher
Multiply by TEN Member
Use this cootie catcher to practice basic multiplication facts with tens. (Up to 10 x 12)
Cootie Catcher
Multiply by ELEVEN Member
Color, then cut, then fold the fortune teller. Includes eight basic facts with elevens. (Up to 11x12)
Cootie Catcher
Multiply by TWELVE Member
Learn those tricky twelves with this paper-folding game. (Up to 12x12)
See Also:
Multiplication Worksheets (Advanced)
Practice 2 and 3-digit multiplication problems.
Fact Families (Basic)
Print basic multiplication and division fact families and number bonds.
Skip Counting Worksheets
Practicing skip counting skills can help students master their multiplication facts.
Properties of Multiplication
Practice using the distributive, associative, commutative, and identity properties of multiplication.
Encyclopedic entry: spanner-wrench (torque)
$\frac{d\mathbf{L}}{dt} = \mathbf{\tau}_{tot} = \sum_{i} \mathbf{\tau}_i$
Torque has dimensions of force times distance and the SI unit of torque is the "newton meter" (N m). Even though the order of "newton" and "meter" are mathematically interchangeable, the BIPM (
Bureau International des Poids et Mesures) specifies that the order should be N m not m N. N·m is also acceptable.
The joule, which is the SI unit for energy or work, is also defined as 1 N m, but this unit is not used for torque. Since energy can be thought of as the result of "force times distance", energy
is always a scalar whereas torque is "force cross distance" and so is a (pseudo) vector-valued quantity. The dimensional equivalence of these units, of course, is not simply a coincidence: a
torque of 1 N m applied through a full revolution will require an energy of exactly 2π joules. Mathematically,
$E = \tau \theta$
E is the energy
τ is torque
θ is the angle moved, in radians.
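The statement above that a torque of 1 N·m applied through a full revolution requires 2π joules follows directly from E = τθ; a minimal check (the helper name is illustrative):

```python
import math

def work_done(torque, angle_rad):
    """Work done by a constant torque over a rotation angle: E = tau * theta."""
    return torque * angle_rad

E = work_done(torque=1.0, angle_rad=2 * math.pi)  # one full revolution
print(round(E, 6))  # 6.283185 J, i.e. 2*pi joules
```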
Other non-SI units of torque include "pound-force-feet" or "foot-pounds-force" or "inch-pounds-force" or "ounce-force-inches" or "meter-kilograms-force" or "kilogrammeter" (kgm).
Extended units in relation with rotation angles
As a consequence of the previous equation, if you introduce the radian (rad) as part of the dimensional units in the SI units system, the torque could be measured using "newton meters per radian"
(N m/rad), or "joules per radian" (J/rad), while the energy needed and spent to perform the rotation would be measured simply in "newton meters" or "joules".
In the strict SI system, angles are not given any dimensional unit, because they do not designate physical quantities, even though they are measurable indirectly simply by dividing two distances (the arc length and the radius). One way to reconcile the two systems would be to say that arc lengths are not measures of distances (given that they are not measured along a straight line, and a full rotation returns to the same position, i.e. a null distance). Arc lengths would then be measured in "radian meters" (rad·m), unlike straight segment lengths in "meters" (m). In such an extended SI system, the perimeter of a circle whose radius is one meter would be 2π rad·m, not just 2π meters.
If you apply this measure to a rotating wheel in contact with a plane surface, the center of the wheel will move across a distance in meters of the same value only if the contact is efficient and the wheel does not slide. This does not happen in practice unless the contact surface is constrained and hence not perfectly plane (so that it can resist the horizontal linear forces applied to the irregularities of the pseudo-plane surface and of the pseudo-circular rotating wheel); but then the system generates friction, which wastes some of the energy spent by the engine. This lost energy does not change the measured torque or the total energy spent in the system, but it does change the effective distance traveled by the center of the wheel.
The difference between the effective energy spent by the engine and the energy produced in the linear movement is lost to friction and sliding. This explains why, when the same non-null torque is applied constantly to the wheel so that it moves at constant speed relative to the contact surface, the center of the wheel may not accelerate: in that case, the energy spent is directly proportional to the distance traveled by the center of the wheel, and equal to the energy lost in the system through friction and sliding.
For this reason, when measuring the effective power produced by a rotating engine and the energy spent in the system to generate a movement, the angle of rotation often needs to be taken into account. Adding the radian to the unit system, and distinguishing the measurement of arcs (in radian meters) from the measurement of straight-line distances (in meters), then provides a way to compute the efficiency of the mobile system and the capacity of a motor engine to convert between rotational power (in radian watts) and linear power (in watts). In a friction-free ideal system the two measurements would be equal, but this does not happen in practice: each conversion loses energy to friction (losses caused by sliding are easier to limit by introducing mechanical constraints of form on the contact surfaces).
Depending on the context, these extended units, with the radian as a fundamental dimension, may or may not be used.
Special cases and other facts
Moment arm formula
A very useful special case, often given as the definition of torque in fields other than physics, is as follows:
$|\tau| = (\text{moment arm}) \cdot \text{force}$
The construction of the "moment arm" is shown in the figure below, along with the vectors r and F mentioned above. The problem with this definition is that it does not give the direction of the
torque but only the magnitude, and hence it is difficult to use in three-dimensional cases. If the force is perpendicular to the displacement vector r, the moment arm will be equal to the
distance to the centre, and torque will be a maximum for the given force. The equation for the magnitude of a torque arising from a perpendicular force:
$|\tau| = (\text{distance to center}) \cdot \text{force}$
For example, if a person places a force of 10 N on a spanner (wrench) which is 0.5 m long, the torque will be 5 N m, assuming that the person pulls the spanner by applying force perpendicular to
the spanner.
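The spanner calculation above can be reproduced in a couple of lines (Python is used here purely for illustration; the numbers are the ones from the example in the text):

```python
# Torque magnitude for a force applied perpendicular to the moment arm:
#   |tau| = (moment arm) * force
force = 10.0   # N, applied perpendicular to the spanner
arm = 0.5      # m, length of the spanner (the moment arm)

torque = arm * force
print(torque)  # 5.0 N·m, as in the example above
```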
Force at an angle
If a force of magnitude F is at an angle θ from the displacement arm of length r (and within the plane perpendicular to the rotation axis), then from the definition of cross product, the
magnitude of the torque arising is:
$\tau = rF\sin\theta$
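As a quick sketch (Python, standard library only; the vectors are arbitrary illustrative values), the cross-product definition and the magnitude formula $rF\sin\theta$ agree:

```python
import math

# Displacement arm r and force F lying in the plane (z-components zero).
r = (0.5, 0.0, 0.0)   # m
F = (3.0, 4.0, 0.0)   # N, at an angle to r

# Cross product tau = r x F; only the z-component is nonzero here.
tau_z = r[0] * F[1] - r[1] * F[0]

# Magnitude via |tau| = r F sin(theta).
r_mag = math.hypot(r[0], r[1])
F_mag = math.hypot(F[0], F[1])
theta = math.acos((r[0] * F[0] + r[1] * F[1]) / (r_mag * F_mag))

print(tau_z, r_mag * F_mag * math.sin(theta))   # both ≈ 2.0 N·m
```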
Static equilibrium
For an object to be in static equilibrium, not only must the sum of the forces be zero, but also the sum of the torques (moments) about any point. For a two-dimensional situation with horizontal
and vertical forces, the sum of the forces requirement is two equations: ΣH = 0 and ΣV = 0, and the torque a third equation: Στ = 0. That is, to solve statically determinate equilibrium problems
in two-dimensions, we use three equations.
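A minimal worked example of those three equations (Python; the beam and load values are hypothetical): a 4 m beam rests on supports at both ends and carries a 100 N load 1 m from the left support. Taking moments about the left support gives the right reaction, and ΣV = 0 then gives the left one:

```python
beam_length = 4.0   # m
load = 100.0        # N
load_pos = 1.0      # m, measured from the left support

# Sum of torques about the left support = 0:
#   R_right * beam_length - load * load_pos = 0
R_right = load * load_pos / beam_length

# Sum of vertical forces = 0:
#   R_left + R_right - load = 0
R_left = load - R_right

print(R_left, R_right)   # 75.0 N and 25.0 N
```

(ΣH = 0 is trivially satisfied here because there are no horizontal forces.)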
Torque as a function of time
Torque is the time-derivative of angular momentum, just as force is the time derivative of linear momentum:
$\boldsymbol{\tau} = \frac{\mathrm{d}\mathbf{L}}{\mathrm{d}t}$
where $\mathbf{L}$ is angular momentum.
Angular momentum of a rigid body can be written in terms of its moment of inertia $I$ and its angular velocity $\boldsymbol{\omega}$:
$\mathbf{L} = I\,\boldsymbol{\omega}$
so if $I$ is constant,
$\boldsymbol{\tau} = I\,\frac{\mathrm{d}\boldsymbol{\omega}}{\mathrm{d}t} = I\boldsymbol{\alpha}$
where α is angular acceleration, a quantity usually measured in radians per second squared.
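For constant $I$, the relation $\tau = I\alpha$ gives the angular acceleration directly (Python; the disk and torque values are illustrative only):

```python
# Solid disk about its axis: I = (1/2) m r^2
m = 2.0                # kg
r = 0.5                # m
I = 0.5 * m * r**2     # 0.25 kg·m^2

tau = 1.0              # N·m, applied torque
alpha = tau / I        # angular acceleration, rad/s^2
print(alpha)           # 4.0 rad/s^2
```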
Machine torque
Torque is part of the basic specification of an engine: the power output of an engine is expressed as its torque multiplied by the rotational speed of its axis. Internal-combustion engines
produce useful torque only over a limited range of rotational speeds (typically from around 1,000–6,000 rpm for a small car). The varying torque output over that range can be measured with a
dynamometer, and shown as a torque curve. The peak of that torque curve usually occurs somewhat below the overall power peak. The torque peak cannot, by definition, appear at higher rpm than the
power peak.
Understanding the relationship between torque, power and engine speed is vital in automotive engineering, concerned as it is with transmitting power from the engine through the drive train to the
wheels. Power is typically a function of torque and engine speed. The gearing of the drive train must be chosen appropriately to make the most of the motor's torque characteristics.
Steam engines and electric motors tend to produce maximum torque close to zero rpm, with the torque diminishing as rotational speed rises (due to increasing friction and other constraints).
Therefore, these types of engines usually have quite different types of drivetrains from internal combustion engines.
Torque is also the easiest way to explain mechanical advantage in just about every simple machine.
Relationship between torque, power and energy
If a force is allowed to act through a distance, it is doing mechanical work. Similarly, if torque is allowed to act through a rotational distance, it is doing work. Power is the work per unit
time. However, time and rotational distance are related by the angular speed where each revolution results in the circumference of the circle being travelled by the force that is generating the
torque. The power injected by the applied torque may be calculated as:
$\text{Power} = \text{torque} \cdot \text{angular speed}$
On the right hand side, this is a scalar product of two vectors, giving a scalar on the left hand side of the equation. Mathematically, the equation may be rearranged to compute torque for a
given power output. Note that the power injected by the torque depends only on the instantaneous angular speed - not on whether the angular speed increases, decreases, or remains constant while
the torque is being applied (this is equivalent to the linear case where the power injected by a force depends only on the instantaneous speed - not on the resulting acceleration, if any).
In practice, this relationship can be observed in power stations which are connected to a large electrical power grid. In such an arrangement, the generator's angular speed is fixed by the grid's
frequency, and the power output of the plant is determined by the torque applied to the generator's axis of rotation.
Consistent units must be used. For metric SI units power is watts, torque is newton meters and angular speed is radians per second (not rpm and not revolutions per second).
Also, the unit newton meter is dimensionally equivalent to the joule, which is the unit of energy. However, in the case of torque, the unit is assigned to a vector, whereas for energy, it is
assigned to a scalar.
Conversion to other units
For different units of power, torque, or angular speed, a conversion factor must be inserted into the equation. Also, if rotational speed (revolutions per time) is used in place of angular speed
(radians per time), a conversion factor of $2\pi$ must be added because there are $2\pi$ radians in a revolution:
$\text{Power} = \text{torque} \times 2\pi \times \text{rotational speed}$
where rotational speed is in revolutions per unit time.
Useful formula in SI units:
$\text{Power (kW)} = \frac{\text{torque (N·m)} \times 2\pi \times \text{rotational speed (rpm)}}{60000}$
where 60,000 comes from 60 seconds per minute times 1000 watts per kilowatt.
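A quick check of this formula and its 60,000 factor (Python; the torque and speed values are arbitrary):

```python
import math

torque_nm = 250.0    # N·m
speed_rpm = 3000.0   # rpm

# Power (kW) = torque (N·m) * 2*pi * rotational speed (rpm) / 60000
power_kw = torque_nm * 2 * math.pi * speed_rpm / 60000

# Same result from first principles: P = tau * omega, with omega in rad/s.
omega = speed_rpm * 2 * math.pi / 60        # rad/s
print(power_kw, torque_nm * omega / 1000)   # both ≈ 78.54 kW
```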
Some people (e.g. American automotive engineers) use horsepower (imperial mechanical) for power, foot-pounds (lbf·ft) for torque and rpm (revolutions per minute) for angular speed. This results
in the formula changing to:
$\text{Power (hp)} = \frac{\text{torque (lbf·ft)} \times 2\pi \times \text{rotational speed (rpm)}}{33000}.$
The constant (in ft·lbf/min) changes with the definition of the horsepower; for example, using metric horsepower, it becomes approximately 32,550.
Use of other units (e.g. BTU/h for power) would require a different custom conversion factor.
For a rotating object, the linear speed at the circumference is the product of the radius and the angular speed. That is: linear speed = radius × angular speed. By definition, linear distance = linear speed × time = radius × angular speed × time.
By the definition of torque: torque = force × radius. We can rearrange this to determine force = torque/radius. These two values can be substituted into the definition of power:
$\text{power} = \frac{\text{force} \times \text{linear distance}}{\text{time}} = \frac{\left(\frac{\text{torque}}{r}\right) \times (r \times \text{angular speed} \times t)}{t} = \text{torque} \times \text{angular speed}$
The radius r and time t have dropped out of the equation. However angular speed must be in radians, by the assumed direct relationship between linear speed and angular speed at the beginning of
the derivation. If the rotational speed is measured in revolutions per unit of time, the linear speed and distance are increased proportionately by $2\pi$ in the above derivation to give:
$\text{power} = \text{torque} \times 2\pi \times \text{rotational speed}$
If torque is in lbf·ft and rotational speed in revolutions per minute, the above equation gives power in ft·lbf/min. The horsepower form of the equation is then derived by applying the conversion
factor 33,000 ft·lbf/min per horsepower:
$\text{power} = \text{torque} \times 2\pi \times \text{rotational speed} \cdot \frac{\text{ft}\cdot\text{lbf}}{\text{min}} \times \frac{\text{horsepower}}{33000 \cdot \frac{\text{ft}\cdot\text{lbf}}{\text{min}}} \approx \frac{\text{torque} \times \text{RPM}}{5252}$
because $5252.113122\ldots = \frac{33000}{2\pi}$.
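The 5252 constant can be verified directly (Python; the torque value is an arbitrary illustration):

```python
import math

# hp = torque (lbf·ft) * RPM / 5252, where 5252 ≈ 33000 / (2*pi)
const = 33000 / (2 * math.pi)
print(const)               # ≈ 5252.113

# At exactly this rpm, horsepower and torque are numerically equal,
# which is why dyno torque and power curves cross near 5252 rpm.
torque_lbft = 300.0        # lbf·ft
rpm = const
hp = torque_lbft * rpm / const
print(hp)                  # 300.0
```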
Computational Engineering
AeroAstro Magazine Highlight
The following article appears in the 2005–2006 issue of AeroAstro, the annual report/magazine of the MIT Aeronautics and Astronautics Department. © 2006 Massachusetts Institute of Technology.
Computational engineering is growing, and AeroAstro is in the thick of it
By David Darmofal, Jaume Peraire, Raul Radovitzky, and Karen Willcox
Richard Whitcomb, one of the most influential aerodynamicists of the 20th century, was well known to design airfoil sections by hand. He would take a file into the wind tunnel at NASA Langley and
modify an airfoil’s shape based on his understanding of aerodynamics and on the data he had just collected. Some 50 years later, airfoil design is now dominated by computational methods leading to
the replacement of the wind tunnel with a laptop computer for this problem. In fact, the application of computational methods to engineering problems, that is, computational engineering, is a
discipline with a much wider scope than airfoil design and sees application throughout all fields. There are some particularly interesting research and educational activities happening in the
Department of Aeronautics and Astronautics in this rapidly growing discipline.
Intensive computation for simulation and optimization has become an essential activity in the design and operation of complex systems in engineering. Thus, while computational engineering is a
discipline in itself, it advances all engineering. The recent National Research Council report “Research Directions in Computational Mechanics” points out that revenues from simulation and
optimization software products are in the billions of dollars, and the overall economic impact of these products is in the trillions of dollars. Despite this considerable development, the same report
predicts that the next decade will experience an explosive growth in the demand for accurate and reliable numerical simulation and optimization of complex systems.
Since the early days of computational mechanics, the aerospace community has been at the forefront of computational engineering. Not surprisingly, NASA has played a major role in the development of
the early finite element codes for structural mechanics (e.g., NASTRAN) as well in the development of computational fluid dynamics. Traditionally, the aerospace industry has pioneered the use of the
latest computational methods. In many cases aerospace companies have developed highly sophisticated in-house capabilities through alliances with universities and research institutions. The origin of
the more recent paradigms on multidisciplinary design and optimization can also be traced to the same community. Within our department, computational methods are used in almost all research efforts.
However, the advancement of computational methods for aerospace design is the focus of the MIT Aero-Astro Aerospace Computational Design Laboratory.
Computational engineering challenges
When airfoils are operating at, or near, their intended design conditions, computational fluid dynamics models can accurately approximate the aerodynamic flows in just a few seconds on a laptop
computer. Thus, in much the same way that Whitcomb used his file and a wind tunnel to design airfoils, a modern aerodynamicist can use CFD methods to quickly try new airfoil design concepts.
Aero-Astro Professor Mark Drela’s MSES software takes this modeling capability a step farther, and can automatically optimize airfoil shapes to achieve desired performance characteristics.
However, for problems that are more complex than airfoil design, computational methods are hindered by a combination of effects including:
• Uncertainty: Computational simulations begin with a set of model equations. These model equations (even if solved exactly) are only an approximate representation of reality. Furthermore, a
computational model is typically constructed by discretizing these model equations. For example, to simulate the flow around an aircraft using a finite element method, the region around the body will
be discretized into many small elements and the solution in each element will be assumed to be a polynomial. As the number of elements and/or the order of the polynomials in each element are
increased, a well-constructed simulation will converge to the solution of the model equations. However, given that computational resources are finite, discretization errors will be present. Thus,
modeling and discretization errors combine to create uncertainty in the validity of the computational simulation. As problems increase in complexity, these uncertainties typically increase as well.
• Automation: For complex, three-dimensional problems, in particular those with complicated geometry, the process of discretizing the problem can require significant human interaction. For example,
the generation of meshes appropriate for simulation of flow around aircraft or in jet engines can require weeks of engineering effort. The main cause for this time requirement is that the process of
generating meshes for complex geometries is far from robust and requires human intervention to circumvent problems. In fluid dynamic applications, the problem of meshing robustness is especially
acute for flows with boundary layers that require thin elements near all surfaces. For comparison, once a mesh is generated, the simulation on these meshes typically can be completed in a day or two
on a state-of-the-art computer. This lack of robustness leads to a lack of automation. Currently, it is not possible to go from engineering concept to computational simulation in a timely manner for
complex problems.
• Computational cost: The cost of performing computational simulations is driven by the available computer speed and memory, and the choice of computational algorithm. As raw computer speed has
continued to increase, so has the complexity of feasible simulations. In addition to speed gains for a single computer, state-of-the-art computational simulations are almost exclusively performed on
clusters of interconnected computers. For example, the world’s fastest supercomputer is the BlueGene/L at the Department of Energy’s Lawrence Livermore National Laboratory. Manufactured by IBM, this
computer has more than 100,000 processors, and is the only computer to achieve more than 100 teraflops on a standard linear algebra test case. (Flops is the number of FLoating point OPerations per
Second, so a teraflop is a trillion floating point operations per second.) To use these clusters effectively, new computational algorithms are required that attempt to reduce the time spent
communicating between each processor while increasing the time spent operating on the data within each processor. Interestingly, over the past 30 years, improvements in computational algorithms have
contributed equally with improvements in computer hardware towards advances in simulation complexity.
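The role of discretization error described in the "Uncertainty" bullet above can be seen in miniature (Python; a centered finite difference for d/dx sin x at x = 1 stands in for a real discretized model): refining the discretization shrinks the error, but with finite resources it never reaches zero.

```python
import math

# Approximate d/dx sin(x) at x = 1 with a centered difference and
# watch the discretization error shrink as the step h is refined.
x = 1.0
exact = math.cos(x)
errors = []
for h in (0.1, 0.05, 0.025, 0.0125):
    approx = (math.sin(x + h) - math.sin(x - h)) / (2 * h)
    errors.append(abs(approx - exact))
    print(h, errors[-1])   # error falls roughly 4x per halving (2nd-order scheme)
```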
Computational engineering research
• Multiscale modeling for material design: Material design has been largely based on empiricism. The main reason for this situation has been a lack of systematic strategies to design materials from a
set of functional requirements: the connection between microstructure and performance is a priori unknown and has seldom been established. However, a critical societal need exists to develop new
materials for a wide range of applications.
Multiscale materials modeling combined with high-performance simulation provides a rational approach for material design. Led by Professor Raul Radovitzky, the department’s computational solid
modeling group is successfully applying this modeling paradigm to a variety of problems including the description of anomalous plastic deformation in nanocrystalline metals, dynamic response of
polycrystalline metals, nanoscale plasticity in biomimetic materials with extreme fracture toughness, and high-rate response of soft biological tissue and human organs to blast loads. By way of
example, the following figure shows our efforts to describe blast effects on the human brain using our coupled blast-structure interaction capability, tissue models and realistic geometries from 3D
magnetic resonance imaging. The figure shows two snapshots at t = 1.13ms and t = 1.74ms of a simulation of the interaction of a blast wave produced by the explosion of 1.5Kg of TNT at a stand-off
distance of 1.5 m on a human cranium. The blast delivers a pressure wave with an overpressure of approximately 5 atm at the point of impact with the cranium. Stress wave propagation and multiple
reverberations inside the skull lead to peak strain energy densities in excess of 750 J·m⁻³.
• Model reduction for real-time simulation and optimization: While the use of high fidelity computational models such as CFD is widespread for analysis and design, an emerging challenge is real-time
simulation and optimization. The need for real-time simulation is critical for many applications, including emergency response to natural disasters, industrial accidents, and terrorist attacks,
control of dynamical processes and adaptive systems, and interactive design of complex systems. The challenge is to develop models of sufficient accuracy that can be used in real-time decision making.
One approach to developing an accurate, real-time modeling capability is known as model reduction. In general terms, model reduction entails the systematic generation of computationally-efficient
representations of large-scale systems that result, for example, from high fidelity discretization of partial differential equations. More specifically, a reduced-order model can be obtained by using
the structure of the governing equations and mathematical techniques to identify key elements of the system input/output behavior. In the past decade, reduction methodology has been developed and
applied for many different disciplines, including controls, fluid dynamics, structural dynamics, circuit design, and weather prediction. Model reduction research in ACDL, led by Professor Karen
Willcox, has focused on the development and application of model reduction methodology for aerospace problems that include compressor blade aeroelasticity, supersonic inlet flow dynamics, and active
flow controller design. Our current projects include development of a new methodology that creates reduced-order models for inverse problems. We are applying this methodology to a data-driven
framework for real-time reconstruction of hazardous events from sparse measurements.
• Next-generation CFD algorithms: A major research area within the Aerospace Computational Design Laboratory is the development of next-generation algorithms for computational fluid dynamics. Led by Professor Dave Darmofal, Bob Haimes, and Professor Jaume Peraire, and supported by the Air Force, NASA, Boeing, and Ford, the goal of this research is to improve the aerothermal design process
for complex configurations by significantly reducing the time from geometry-to-solution at engineering-required accuracy. A key ingredient in our approach is the use of adaptive methods to
automatically control the discretization error. The basic premise of an adaptive CFD method is to simulate the flow on an initial mesh, estimate the error contributed by each element within the mesh,
and refine the mesh in elements that most contribute to the estimated error. A novel aspect of our adaptive method is that it seeks to control the impact of discretization error on outputs of
engineering importance such as lift, drag, or moments. Thus, engineering decisions can be made with increased confidence that key outputs have been accurately estimated. For certain classes of
problems, we have moved beyond error estimates and can bound the output error.
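The model-reduction idea described above can be illustrated generically (Python, standard library only; the snapshot data is synthetic, and this is a toy stand-in for the methods discussed, not ACDL's actual methodology): given snapshots of a high-dimensional state dominated by one spatial mode, power iteration — a simple substitute for the SVD used in proper orthogonal decomposition — extracts that mode, after which a single coefficient per snapshot serves as the reduced model.

```python
import math, random

random.seed(0)
n, m = 50, 20   # state dimension, number of snapshots

# Synthetic snapshot data: one dominant spatial mode with a time-varying
# amplitude, plus small noise (a stand-in for high-fidelity simulation output).
phi = [math.sin(math.pi * (i + 1) / (n + 1)) for i in range(n)]
snapshots = [[math.cos(0.3 * k) * phi[i] + 1e-3 * random.gauss(0, 1)
              for i in range(n)] for k in range(m)]

# Power iteration on X^T X finds the dominant POD mode (pure-Python SVD stand-in).
v = [1.0] * n
for _ in range(100):
    Xv = [sum(snap[i] * v[i] for i in range(n)) for snap in snapshots]      # X v
    w = [sum(snapshots[k][i] * Xv[k] for k in range(m)) for i in range(n)]  # X^T (X v)
    norm = math.sqrt(sum(x * x for x in w))
    v = [x / norm for x in w]

# The reduced model tracks each snapshot with one coefficient: u_k ≈ c_k * v.
phi_norm = math.sqrt(sum(x * x for x in phi))
alignment = abs(sum(v[i] * phi[i] for i in range(n))) / phi_norm
print(alignment)   # ≈ 1.0: a single mode captures nearly all of the data
```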
Teaching Computational Engineering
Computational engineering is a multidisciplinary field requiring knowledge of mathematics, computer science, and engineering. At the undergraduate level, we have developed 16.901 Computational
Methods in Aerospace Engineering. The learning objectives for this subject are for the students to attain:
• a conceptual understanding of computational methods commonly used for analysis and design of aerospace systems
• a working knowledge of computational methods including experience implementing them for model problems drawn from aerospace engineering applications
• a basic foundation in theoretical techniques to analyze the behavior of computational methods
This subject’s enrollment has grown in size each year. In the spring of 2006, more than 40 students were enrolled. During the semester, students gain hands-on experience with computational methods by
implementing them to solve various aerospace-derived problems; for example, in 2005, students developed a finite volume method to approximate the supersonic flow over a circular cylinder.
At the graduate education level, a new interdepartmental Master of Science program in Computation for Design and Optimization has been created. The CDO interdisciplinary program provides a strong
foundation in computational approaches to the design and operation of complex engineering and scientific systems. Furthermore, the program provides a focal point for the large computational
engineering research community at MIT. The current program has more than 20 affiliated faculty members from the schools of Engineering, Science, and Sloan. Aero-Astro is well represented: the program
co-director is Jaume Peraire; and Dave Darmofal, Olivier de Weck, Raul Radovitzky, and Karen Willcox are affiliates. The department offers several graduate subjects in computational engineering, all
of which are also a part of the CDO program. These include:
• 16.225J: Computational Mechanics of Materials
• 16.888J: Multidisciplinary System Design Optimization
• 16.910J: Introduction to Numerical Simulation
• 16.920J: Numerical Methods for Partial Differential Equations
• 16.930: Advanced Numerical Methods for Partial Differential Equations
Computational engineering has changed the way aerospace design is conducted. As the use of computational engineering spreads, new challenges will continue to arise. We look forward to addressing
these challenges.
David Darmofal is an Associate Professor of Aeronautics and Astronautics and a MacVicar Fellow. Jaume Peraire is a Professor of Aeronautics and Astronautics, director of the Aerospace Computational
Design Laboratory, and co-director of the Program in Computation for Design and Optimization. Raul Radovitzky is the Charles Stark Draper Associate Professor of Aeronautics and Astronautics. Karen
Willcox is an Associate Professor of Aeronautics and Astronautics. Lead author Darmofal may be reached at darmofal@mit.edu. | {"url":"http://web.mit.edu/aeroastro/news/magazine/aeroastro-no3/2006computationalengineering.html","timestamp":"2014-04-17T04:11:45Z","content_type":null,"content_length":"25343","record_id":"<urn:uuid:98d14d07-66ff-4176-9eca-594cb8e42129>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00582-ip-10-147-4-33.ec2.internal.warc.gz"} |
Accumulation points, dense set
March 18th 2010, 08:36 PM
Accumulation points, dense set
Let S be contained in $\mathbb{R}$ and let A be the set of accumulation points of S. Prove S is dense in $\mathbb{R}$ iff $A = \mathbb{R}$.
March 19th 2010, 06:22 AM
What is your definition of dense? Is it that for each ball in the reals, it contains a point of S?
If so then suppose that S is dense. Let $x \in \mathbb{R}$, then for each open set around x, it contains an element of S. Hence $x \in A$. As x was arbitrary, we have that $A=\mathbb{R}$.
Similarly the other way round. | {"url":"http://mathhelpforum.com/differential-geometry/134529-accumulation-points-dense-set-print.html","timestamp":"2014-04-20T03:22:57Z","content_type":null,"content_length":"5927","record_id":"<urn:uuid:437e7d62-4f04-45fe-8b21-55984e5c8cea>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00330-ip-10-147-4-33.ec2.internal.warc.gz"} |
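A sketch of that converse direction, for completeness (assuming the same ball-based definition of density as above):

```latex
% Converse: A = \mathbb{R} implies S is dense in \mathbb{R}.
Suppose $A = \mathbb{R}$, and let $B(x,\varepsilon)$ be any open ball in
$\mathbb{R}$. Since $x \in A$, the point $x$ is an accumulation point of
$S$, so every open set containing $x$ --- in particular
$B(x,\varepsilon)$ --- contains a point of $S$. As the ball was
arbitrary, every ball meets $S$, i.e.\ $S$ is dense in $\mathbb{R}$.
```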
Vector Components
A vector is a quantity that has both magnitude and direction. Displacement, velocity, acceleration, and force are the vector quantities that we have discussed thus far in the Physics Classroom
Tutorial. In the first couple of units, all vectors that we discussed were simply directed up, down, left or right. When there was a free-body diagram depicting the forces acting upon an object, each
individual force was directed in one dimension - either up or down or left or right. When an object had an acceleration and we described its direction, it was directed in one dimension - either up or
down or left or right. Now in this unit, we begin to see examples of vectors that are directed in two dimensions - upward and rightward, northward and westward, eastward and southward, etc.
In situations in which vectors are directed at angles to the customary coordinate axes, a useful mathematical trick will be employed to transform the vector into two parts with each part being
directed along the coordinate axes. For example, a vector that is directed northwest can be thought of as having two parts - a northward part and a westward part. A vector that is directed upward and
rightward can be thought of as having two parts - an upward part and a rightward part.
Any vector directed in two dimensions can be thought of as having an influence in two different directions. That is, it can be thought of as having two parts. Each part of a two-dimensional vector is
known as a component. The components of a vector depict the influence of that vector in a given direction. The combined influence of the two components is equivalent to the influence of the single
two-dimensional vector. The single two-dimensional vector could be replaced by the two components.
If Fido's dog chain is stretched upward and rightward and pulled tight by his master, then the tension force in the chain has two components - an upward component and a rightward component. To Fido,
the influence of the chain on his body is equivalent to the influence of two chains on his body - one pulling upward and the other pulling rightward. If the single chain were replaced by two chains, with each chain having the magnitude and direction of the components, then Fido would not know the difference. This is not because Fido is dumb (a quick glance at his picture reveals that he is
certainly not that), but rather because the combined influence of the two components is equivalent to the influence of the single two-dimensional vector.
Consider a picture that is hung to a wall by means of two wires that are stretched vertically and horizontally. Each wire exerts a tension force upon the picture to support its weight. Since each
wire is stretched in two dimensions (both vertically and horizontally), the tension force of each wire has two components - a vertical component and a horizontal component. Focusing on the wire on
the left, we could say that the wire has a leftward and an upward component. This is to say that the wire on the left could be replaced by two wires, one pulling leftward and the other pulling
upward. If the single wire were replaced by two wires (each one having the magnitude and direction of the components), then there would be no effect upon the stability of the picture. The combined
influence of the two components is equivalent to the influence of the single two-dimensional vector.
Consider an airplane that is flying from Chicago's O'Hare International Airport to a destination in Canada. Suppose that the plane is flying in such a manner that its resulting displacement vector is
northwest. If this is the case, then the displacement of the plane has two components - a component in the northward direction and a component in the westward direction. This is to say that the plane
would have the same displacement if it were to take the trip into Canada in two segments - one directed due North and the other directed due West. If the single displacement vector were replaced by
these two individual displacement vectors, then the passengers in the plane would end up in the same final position. The combined influence of the two components is equivalent to the influence of the
single two-dimensional displacement.
Any vector directed in two dimensions can be thought of as having two different components. The component of a single vector describes the influence of that vector in a given direction. In the next
part of this lesson, we will investigate two methods for determining the magnitude of the components. That is, we will investigate how much influence a vector exerts in a given direction. | {"url":"http://www.physicsclassroom.com/Class/vectors/U3l1d.cfm","timestamp":"2014-04-18T10:48:09Z","content_type":null,"content_length":"58407","record_id":"<urn:uuid:66132951-c8b0-414a-8818-6c52e5c9b8b4>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00623-ip-10-147-4-33.ec2.internal.warc.gz"} |
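As a preview of that computation (Python; the numbers are illustrative only), the components of a two-dimensional vector follow from trigonometry — here a 100 km displacement directed 30° west of due north, split into its northward and westward parts:

```python
import math

magnitude = 100.0              # km, total displacement
angle = math.radians(30.0)     # measured from due north toward the west

north = magnitude * math.cos(angle)   # northward component
west = magnitude * math.sin(angle)    # westward component
print(north, west)                    # ≈ 86.60 km north, 50.0 km west

# The two components recombine into the original magnitude:
print(math.hypot(north, west))        # ≈ 100.0
```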
[FOM] 303: PA Completeness (restatement)
A.P. Hazen a.hazen at philosophy.unimelb.edu.au
Mon Oct 30 23:09:02 EST 2006
Harvey Friedman has made the interesting conjecture that PA is
complete for arithmetic sentences containing no more than two
quantifiers. If true, I think this would qualify as a SURPRISING
result: most of the classical work on PA has regarded "bounded"
quantifiers as free (which is equivalent to allowing the basic
non-logical vocabulary of the language to be enriched with symbols
for arbitrary primitive recursive functions and relations). And of
course PA is very far from being complete for sentences with two
UNBOUNDED quantifiers! (So, my guess is that many of us have
"intuitions" that are not properly tutored for thinking about
Harvey's conjecture. And that if it turns out to be true we will
therefore be surprised by it.)
On the more general topic of whether First-Orlder Logic (FOL) is the
"Appropriate" logical framework for research in the Foundations of
Mathematics (FoM), Harvey a few posts back suggested that it is clear
that nothing much weaker than (full) FOL will do. One obvious
weakening would be to FOL with only a limited number of distinct
variables allowed: 3-variable FOL being, for example, equivalent in
expressive power to the "Algebra of Relations." Interestingly, a
moderately strong THEORY can compensate for the weakening of the
LOGIC: Tarski and Givant's "Set Theory without Variables" shows that
ZFC (and any of a wide variety of other theories of FoM-interest in
which pairing functions are definable) can be formulated in
3-variable FOL.
Allen Hazen
Philosophy Department
University of Melbourne
New Function: genFunction
The function genFunction generates a stand-alone MATLAB^® function for simulating any trained neural network and preparing it for deployment in many scenarios:
● Document the input-output transforms of a neural network used as a calculation template for manual reimplementations of the network
● Create a Simulink^® block using the MATLAB Function block
● Generate C/C++ code with MATLAB Coder™ codegen
● Generate efficient MEX-functions with MATLAB Coder codegen
● Generate stand-alone C executables with MATLAB Compiler™ mcc
● Generate C/C++ libraries with MATLAB Compiler mcc
● Generate Excel^® and .COM components with MATLAB Builder™ EX mcc options
● Generate Java components with MATLAB Builder JA mcc options
● Generate .NET components with MATLAB Builder NE mcc options
genFunction(net,'path/name') takes a neural network and file path and produces a standalone MATLAB function file 'name.m'.
genFunction(_____,'MatrixOnly','yes') overrides the default cell/matrix notation and instead generates a function that uses only matrix arguments compatible with MATLAB Coder tools. For static
networks the matrix columns are interpreted as independent samples. For dynamic networks the matrix columns are interpreted as a series of time steps. The default value is 'no'.
genFunction(_____,'ShowLinks','no') disables the default behavior of displaying links to generated help and source code. The default is 'yes'.
Here a static network is trained and its outputs calculated.
[x,t] = house_dataset;
houseNet = feedforwardnet(10);
houseNet = train(houseNet,x,t);
y = houseNet(x);
A MATLAB function with the same interface as the neural network object is generated and tested, and viewed.
genFunction(houseNet,'houseFcn');
y2 = houseFcn(x);
accuracy2 = max(abs(y-y2))
edit houseFcn
The new function can be compiled with the MATLAB Compiler tools (license required) to a shared/dynamically linked library with mcc.
mcc -W lib:libHouse -T link:lib houseFcn
Next, another version of the MATLAB function is generated which supports only matrix arguments (no cell arrays). This function is tested. Then it is used to generate a MEX-function with the MATLAB
Coder tool codegen (license required) which is also tested.
genFunction(houseNet,'houseFcn','MatrixOnly','yes');
y3 = houseFcn(x);
accuracy3 = max(abs(y-y3))
x1Type = coder.typeof(double(0),[13 Inf]); % Coder type of input 1
codegen houseFcn.m -config:mex -o houseCodeGen -args {x1Type}
y4 = houseCodeGen(x);
accuracy4 = max(abs(y-y4))
Here, a dynamic network is trained and its outputs calculated.
[x,t] = maglev_dataset;
maglevNet = narxnet(1:2,1:2,10);
[X,Xi,Ai,T] = preparets(maglevNet,x,{},t);
maglevNet = train(maglevNet,X,T,Xi,Ai);
[y,xf,af] = maglevNet(X,Xi,Ai);
Next, a MATLAB function is generated and tested. The function is then used to create a shared/dynamically linked library with mcc.
genFunction(maglevNet,'maglevFcn');
[y2,xf,af] = maglevFcn(X,Xi,Ai);
accuracy2 = max(abs(cell2mat(y)-cell2mat(y2)))
mcc -W lib:libMaglev -T link:lib maglevFcn
Next, another version of the MATLAB function is generated which supports only matrix arguments (no cell arrays). This function is tested. Then it is used to generate a MEX-function with the MATLAB
Coder tool codegen, and the result is also tested.
genFunction(maglevNet,'maglevFcn','MatrixOnly','yes');
x1 = cell2mat(X(1,:)); % Convert each input to matrix
x2 = cell2mat(X(2,:));
xi1 = cell2mat(Xi(1,:)); % Convert each input state to matrix
xi2 = cell2mat(Xi(2,:));
[y3,xf1,xf2] = maglevFcn(x1,x2,xi1,xi2);
accuracy3 = max(abs(cell2mat(y)-y3))
x1Type = coder.typeof(double(0),[1 Inf]); % Coder type of input 1
x2Type = coder.typeof(double(0),[1 Inf]); % Coder type of input 2
xi1Type = coder.typeof(double(0),[1 2]); % Coder type of input 1 states
xi2Type = coder.typeof(double(0),[1 2]); % Coder type of input 2 states
codegen maglevFcn.m -config:mex -o maglevNetCodeGen -args {x1Type x2Type xi1Type xi2Type}
[y4,xf1,xf2] = maglevNetCodeGen(x1,x2,xi1,xi2);
dynamic_codegen_accuracy = max(abs(cell2mat(y)-y4)) | {"url":"http://www.mathworks.in/help/nnet/release-notes.html?nocookie=true","timestamp":"2014-04-24T18:55:20Z","content_type":null,"content_length":"155028","record_id":"<urn:uuid:f40ee33a-d1f8-457f-aa19-17a1f2aaeae7>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00341-ip-10-147-4-33.ec2.internal.warc.gz"} |
Version: 1.0
Type: Full Script
Category: Math Functions
License: GNU General Public License
Description: Like the title says. This script solves problems in Ax + B = C form where you're solving for x. Hope you like it.
# This is a stupid little script I wrote in a few minutes just to see if I could figure it out. #
#You're welcome to use it or modify it however you would like. This is my first script I've written#
#to give to people. Hope you like it! Email me at mattsparksrulesall@hotmail.com with any feedback.#
<?php
function algProblem() {
    print("Now you don't have to do your homework! Solve simple Algebra problems with this script. The problem has to be in Ax + B = C form, where you're solving for x. Have fun!<p>");
    print("<center><form method=\"post\" action=\"algebra.php\">");
    print("<input type=\"text\" size=\"5\" name=\"val1\" value=\"first number\"><b>x</b> + <input type=\"text\" size=\"5\" name=\"val2\" value=\"2nd number\"><b> = </b><input type=\"text\" size=\"5\" name=\"val3\" value=\"3rd number\">");
    print("<input type=\"submit\" name=\"submit\" value=\"Solve!\">");
    print("</form></center>");
}

if (isset($_POST['submit'])) {
    #Algebra problem solver: solve Ax + B = C for x#
    $a = (float) $_POST['val1'];
    $b = (float) $_POST['val2'];
    $c = (float) $_POST['val3'];
    if ($a == 0.0) {
        print("A must not be zero.");
    } else {
        #1st step: subtract B from C and get D#
        $d = $c - $b;
        #2nd step: divide D by A and get E#
        $e = $d / $a;
        print("The Answer is:<br> x = $e");
    }
} else {
    algProblem();
}
?>
st: RE: constraints in reg3
From "Neumayer,E" <E.Neumayer@lse.ac.uk>
To <statalist@hsphsun2.harvard.edu>
Subject st: RE: constraints in reg3
Date Wed, 29 Sep 2004 11:33:08 +0100
Hi, this takes up a thread from about 4 weeks ago. Kit Baum clarified that if you do
1. reg y x1 x2
2. reg y1 x1 x2
3. reg y2 x1 x2
and y = y1 + y2, then the coefficient vector of model 1 is the sum of the coefficient vectors on models 2 and 3.
I did this with xtpcse for data that have both cross-sectional and time-series dimension and find this to be true, of course. However, if I specify xtpcse, corr(ar1) then the coefficient vectors of models 2 and 3 no longer add up to the coefficient vector of model 1. I presume this is because xtpcse, corr(ar1) uses Prais-Winsten, whereas xtpcse without the corr(ar1) option uses OLS. I have two questions:
a. Can anyone explain why the coefficient vectors of models 2 and 3 no longer add up to the coefficient vector of model 1 if xtpcse, corr(ar1) is used?
b. Should I accept that the coefficient vectors no longer add up or should I artificially restrict them to be the same?
Any help greatly appreciated.
Eric Neumayer
-----Original Message-----
From: Kit Baum [mailto:kitbaum@mac.com]
Sent: 25 August 2004 21:45
To: statalist@hsphsun2.harvard.edu
Subject: st: constraints in reg3
Eric said
I have a dependent variable y, which is the sum of 2 components, such
that y = y1 + y2.
I want to regress y on x1 and x2. I also want to regress each one of y1
and y2 on the same explanatory variables x1 and x2 to assess their
effect on each sub-component of y. If I typed
1. reg y x1 x2
2. reg y1 x1 x2
3. reg y2 x1 x2
then I would have no guarantee that the estimated coefficients in
regressions 2 and 3 on the individual components of y (i.e. y1 and y2)
are compatible with the estimated coefficient on the aggregate
dependent variable y. As far as I understand I can ensure compatibility
by estimating the model: (with constraints)
When there are linear relations between the dependent variables, one
cannot estimate a full system. That is well-known when, e.g., the dep
vars add up to 1, as in the case of budget or portfolio shares. But it
is true here too, and if you think about it it must be so. This is not
reg3, in the sense of 3SLS, since there are no instruments; it is
really SURE. But you can't do SURE on this problem (which is why I'm
capturing the result; otherwise it would stop). On the other hand, you
don't need to use a systems estimator and constraints (and we know that
SURE with identical regressors is the same as equation-by-equation OLS). Check out the degree to
which the coefficient vector on model 1 compares with the sum of the
coefficient vectors on models 2 and 3. That is not by chance.
webuse klein,clear
capt noi sureg ( wagepriv consump yr) ( wagegovt consump yr) ( wagetot consump yr)
reg wagepriv consump yr
mat p = e(b)
reg wagegovt consump yr
mat g = e(b)
reg wagetot consump yr
mat t = e(b)
mat check = t - p - g
mat list p
mat list g
mat list t
mat list check
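The reason the additivity holds is worth spelling out: the OLS coefficient vector b = (X'X)^(-1)X'y is linear in y, so regressing y = y1 + y2 on the same regressors gives exactly b(y1) + b(y2). Prais-Winsten breaks this because each regression estimates its own rho from its own residuals, so the three regressions transform the data differently and the estimator is no longer linear in y -- which is, informally, an answer to Eric's question (a). Here is a sketch of the OLS case in Python/NumPy (synthetic data, since the Klein dataset is not assumed available):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
y1 = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(size=n)
y2 = X @ np.array([0.5, -1.0, 3.0]) + rng.normal(size=n)
y = y1 + y2  # the aggregate dependent variable

def ols(X, y):
    # OLS coefficient vector: the least-squares solution of X b = y
    return np.linalg.lstsq(X, y, rcond=None)[0]

check = ols(X, y) - ols(X, y1) - ols(X, y2)
print(check)  # numerically zero, because OLS is linear in y
```

The same check fails for any estimator whose weighting of the data depends on the particular dependent variable, which is exactly the Prais-Winsten situation.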
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
Users' Note
NAG Fortran Library, Mark 19
Cray T3E Single Precision
This document is essential reading for every user of the NAG Fortran Library Implementation specified in the title. It provides implementation-specific detail that augments the information provided
in the NAG Fortran Library Manual and Introductory Guide. Wherever those manuals refer to the "Users' Note for your implementation", you should consult this note.
NAG recommends that you read the following minimum reference material before calling any library routine:
(a) Essential Introduction
(b) Chapter Introduction
(c) Routine Document
(d) Implementation-specific Users' Note
Items (a), (b) and (c) are included in the NAG Fortran Library Manual; items (a) and (b) are also included in the NAG Fortran Library Introductory Guide; item (d) is this document which is provided
in HTML form. Item (a) is also supplied in plain text form.
All routines listed in the chapter contents documents of the NAG Fortran Library Manual, Mark 19 are available in this implementation. At Mark 19, 68 new primary ("user-callable") routines have been
introduced, and 15 deleted. Please consult the file doc/news.html (see Section 3.5) for lists of these routines and for a list of routines scheduled for withdrawal at Mark 20 or later. Your
suggestions for new algorithms for future releases of the Library are welcomed (see Section 7).
Assuming that libnag.a has been installed in a directory in the search path of the linker, such as /usr/lib, then you may link to the NAG Fortran Library in the following manner:
f90 driver.f -lnag
where driver.f is your application program. The example programs are most easily accessed by the command nagexample, which will provide you with a copy of an example program (and its data, if any),
compile the program and link it with the library (showing you the compile command so that you can recompile your own version of the program). Finally, the executable program will be run, presenting
its output to stdout. The example program concerned is specified by the argument to nagexample, e.g.
nagexample c06eae
will copy the example program and its data into the files c06eaee.f and c06eaee.d in the current directory and process them to produce the example program results.
In the NAG Fortran Library Manual, routine documents that have been typeset since Mark 12 present the example programs in a generalised form, using bold italicised terms as described in Section 3.3.
In other routine documents, the example programs are in single precision. All printed example programs show routine names ending in F not E (see Section 3.6).
The example programs supplied to a site in machine-readable form have been modified as necessary so that they are suitable for immediate execution. Note that all the distributed example programs have
been revised and do not correspond exactly with the programs published in the manual, unless the documents have been recently typeset. The distributed example programs should be used in preference
wherever possible.
For this single precision implementation, the bold italicised terms used in the NAG Fortran Library Manual should be interpreted as:
real - REAL (REAL*8)
basic precision - single precision
complex - COMPLEX (COMPLEX*16)
additional precision - double precision (REAL*16)
machine precision - the machine precision, see the value
returned by X02AJE in Section 4
Thus a parameter described as real should be declared as REAL in your program. If a routine accumulates an inner product in additional precision, it is using double precision.
In routine documents that have been newly typeset since Mark 12 additional bold italicised terms are used in the published example programs and they must be interpreted as follows:
real as an intrinsic function name - REAL
imag - AIMAG
cmplx - CMPLX
conjg - CONJG
e in constants, e.g. 1.0e-4 - E, e.g. 1.0E-4
e in formats, e.g. e12.4 - E, e.g. E12.4
All references to routines in Chapter F07 - Linear Equations (LAPACK) and Chapter F08 - Least-squares and Eigenvalue Problems (LAPACK) use the LAPACK name, not the NAG F07/F08 name. The LAPACK name
is precision dependent, and hence the name appears in a bold italicised typeface.
For example:
sgetrf refers to the LAPACK routine name - SGETRF
cpotrs - CPOTRS
Certain routines produce explicit error messages and advisory messages via output units which either have default values or can be reset by using X04AAE for error messages and X04ABE for advisory
messages. (The default values are given in Section 4). The maximum record lengths of error messages and advisory messages (including carriage control characters) are 80 characters, except where
otherwise specified. The following machine-readable information files are provided in the doc directory:
• un.html - Users' Note (this document)
• essint - the Essential Introduction to the NAG Fortran Library
• summary - a brief summary of the routines
• news - an outline of the new and enhanced routines available at Mark 19
• replaced - a list of routines available at earlier Marks of the Library but since withdrawn, together with recommended replacements
• calls - a list of routines called directly or indirectly by each routine in the library, and by each example program
• called - for each routine in the library (including auxiliaries), a list of routines and example programs which call it directly or indirectly
• blas_lapack_to_nag - BLAS/F06, LAPACK/F07 and LAPACK/F08 listing
• nag_to_blas_lapack - F06/BLAS, F07/LAPACK and F08/LAPACK listing
See Section 5 for additional documentation available from NAG.
To ensure a single precision implementation is completely distinct from any double precision version also available, all single precision routine names have been modified by changing the sixth
character from F to E. Thus, for example:
A02AAF denotes the double precision version
A02AAE denotes the single precision version
The names of auxiliary routines have also been modified by interchanging the first three and the last three characters, e.g. C02AFZ has been changed to AFZC02.
In the NAG Fortran Library Manual all library routine names end in F. Therefore, when using the manual in conjunction with this single precision implementation, all such names must be understood to
refer to the single precision versions with names ending in E. Some routines in the Library require users to specify particular auxiliary routines. Again, when using this implementation it is
necessary to specify the amended names, as given in Section 4.
The names of COMMON blocks have also been modified, e.g. AC02AF is renamed AFC02A. This is unlikely to affect the user.
The NAG Fortran Library Interface Blocks define the type and arguments of each user callable NAG Fortran Library routine. These are not essential to calling the NAG Fortran Library from Fortran 90
programs. Their purpose is to allow the Fortran 90 compiler to check that NAG Fortran Library routines are called correctly. The interface blocks enable the compiler to check that:
(a) Subroutines are called as such
(b) Functions are declared with the right type
(c) The correct number of arguments are passed
(d) All arguments match in type and structure
These interface blocks have been generated automatically by analysing the source code for the NAG Fortran Library. As a consequence, and because these files have been thoroughly tested, they are more
reliable than writing your own declarations.
The NAG Fortran Library Interface Block files are organised by Library chapter; the module names take the form NAG_F77_x_CHAPTER, where x is the chapter letter (for example, NAG_F77_S_CHAPTER for Chapter S, as used in the conversion example below).
These are supplied in pre-compiled form (.o files) and they can be accessed by specifying the -p"pathname" option on each f90 invocation, where "pathname" is the path of the directory containing the
.o files.
In order to make use of these modules from existing Fortran 77 code the following changes need to be made:
• Add a USE statement for each of the module files for the chapters of the NAG Fortran Library that your program calls directly. Often only one USE statement will be required.
• Delete all EXTERNAL statements for NAG Fortran Library routines. These are now declared in the module(s).
• Delete the type declarations for any NAG Fortran Library functions. These are now declared in the module(s).
These changes are illustrated by showing the conversion of the Fortran 77 version of the example program for NAG Fortran Library routine S18DEE. Please note that this is not exactly the same as the
example program that is distributed with this implementation. Each change is surrounded by comments boxed with asterisks.
* S18DEE Example Program Text
* Mark 14 Revised. NAG Copyright 1989.
* Add USE statement for relevant chapters *
USE NAG_F77_S_CHAPTER
* *
* .. Parameters ..
INTEGER NIN, NOUT
PARAMETER (NIN=5,NOUT=6)
INTEGER N
PARAMETER (N=2)
* .. Local Scalars ..
COMPLEX Z
REAL FNU
INTEGER IFAIL, NZ
CHARACTER*1 SCALE
* .. Local Arrays ..
COMPLEX CY(N)
* .. External Subroutines ..
* EXTERNAL declarations need to be removed (and type declarations *
* for functions). *
C EXTERNAL S18DEE
* *
* .. Executable Statements ..
WRITE (NOUT,*) 'S18DEE Example Program Results'
* Skip heading in data file
READ (NIN,*)
WRITE (NOUT,*)
WRITE (NOUT,99999) 'Calling with N =', N
WRITE (NOUT,*)
WRITE (NOUT,*)
+' FNU Z SCALE CY(1) CY(2)
+ NZ IFAIL'
WRITE (NOUT,*)
20 READ (NIN,*,END=40) FNU, Z, SCALE
IFAIL = 0
CALL S18DEE(FNU,Z,N,SCALE,CY,NZ,IFAIL)
WRITE (NOUT,99998) FNU, Z, SCALE, CY(1), CY(2), NZ, IFAIL
GO TO 20
40 STOP
99999 FORMAT (1X,A,I2)
99998 FORMAT (1X,F7.4,' (',F7.3,',',F7.3,') ',A,
+ 2(' (',F7.3,',',F7.3,')'),I4,I4)
Any further information which applies to one or more routines in this implementation is listed below, chapter by chapter.
(a) D01
D01BAE auxiliaries D01BAW, D01BAX, D01BAY and D01BAZ have been renamed as
BAWD01, BAXD01, BAYD01 and BAZD01 respectively
D01BBE auxiliaries D01BAW, D01BAX, D01BAY and D01BAZ have been renamed as
BAWD01, BAXD01, BAYD01 and BAZD01 respectively
D01FDE auxiliary D01FDV has been renamed as FDVD01
(b) D02
D02BJE auxiliaries D02BJW and D02BJX have been renamed as BJWD02 and BJXD02
D02EJE auxiliaries D02EJW, D02EJX and D02EJY have been renamed as EJWD02,
EJXD02 and EJYD02 respectively
D02NBE auxiliaries D02NBY and D02NBZ have been renamed as NBYD02 and NBZD02
D02NCE auxiliaries D02NBY and D02NCZ have been renamed as NBYD02 and NCZD02
D02NDE auxiliaries D02NBY and D02NDZ have been renamed as NBYD02 and NDZD02
D02NGE auxiliaries D02NBY and D02NGZ have been renamed as NBYD02 and NGZD02
D02NHE auxiliaries D02NBY and D02NHZ have been renamed as NBYD02 and NHZD02
D02NJE auxiliaries D02NBY and D02NJZ have been renamed as NBYD02 and NJZD02
D02RAE auxiliaries D02GAX and D02GAZ have been renamed as GAXD02 and GAZD02
D02SAE auxiliaries D02HBW, D02HBX, D02HBY and D02HBZ have been renamed as
HBWD02, HBXD02, HBYD02 and HBZD02 respectively
(c) D03
The example programs for D03RAE and D03RBE take much longer to run than other examples.
D03PFE auxiliary D03PFP has been renamed as PFPD03
D03PHE auxiliary D03PCK has been renamed as PCKD03
D03PJE auxiliary D03PCK has been renamed as PCKD03
D03PKE auxiliary D03PEK has been renamed as PEKD03
D03PLE auxiliaries D03PEK and D03PLP have been renamed as PEKD03 and PLPD03
D03PPE auxiliaries D03PCK and D03PCL have been renamed as PCKD03 and PCLD03
D03PRE auxiliaries D03PEK and D03PEL have been renamed as PEKD03 and PELD03
D03PSE auxiliaries D03PEK, D03PEL and D03PLP have been renamed as PEKD03,
PELD03 and PLPD03 respectively
D03PWE auxiliaries D03PEK and D03PLP have been renamed as PEKD03 and PLPD03
D03PXE auxiliaries D03PEK and D03PLP have been renamed as PEKD03 and PLPD03
(d) E04
E04GBE auxiliaries E04FCV and E04HEV have been renamed as FCVE04 and HEVE04
E04NFE auxiliary E04NFU has been renamed as NFUE04
E04NKE auxiliary E04NKU has been renamed as NKUE04
E04UCE auxiliary E04UDM has been renamed as UDME04
E04UGE auxiliaries E04UGM and E04UGN have been renamed as UGME04 and UGNE04
E04UNE auxiliary E04UDM has been renamed as UDME04
E04ZCE auxiliary E04VDM has been renamed as VDME04
(e) F02
F02FJE auxiliary F02FJZ has been renamed as FJZF02
(f) F06, F07 and F08
In this implementation calls to the Basic Linear Algebra Subprograms (BLAS) and linear algebra routines (LAPACK) are implemented by calls to the Cray Research libsci.a Subroutine Library. All calls
to BLAS and LAPACK routines (except the sparse BLAS) use the code in libsci.a except the following, which use NAG versions:
SGETRI CGETRI CSTEIN SGEBAK CGEBAK
(g) G02
The value of ACC, the machine-dependent constant mentioned in several documents in the chapter, is 1.0E-13.
(h) H02
H02CBE auxiliaries E04NFU and H02CBU have been renamed as NFUE04 and CBUH02
H02CEE auxiliaries E04NKU and H02CEY have been renamed as NKUE04 and CEYH02
(i) P01
On hard failure, P01ABE writes the error message to the error message unit specified by X04AAE and then stops.
(j) S07 - S21
The constants referred to in the NAG Fortran Library Manual have the following values in this implementation:
S07AAE F(1) = 1.0E+13
F(2) = 1.0E-14
S10AAE E(1) = 18.50
S10ABE E(1) = 708.0
S10ACE E(1) = 708.0
S13AAE x(hi) = 708.3
S13ACE x(hi) = 3.3E+7
S13ADE x(hi) = 3.3E+7
S14AAE IFAIL = 1 if X > 170.0
IFAIL = 2 if X < -170.0
IFAIL = 3 if abs(X) < 2.23E-308
S14ABE IFAIL = 2 if X > 2.55E+305
S15ADE x(hi) = 26.6
x(low) = -6.25
S15AEE x(hi) = 6.25
S17ACE IFAIL = 1 if X > 3.3E+7
S17ADE IFAIL = 1 if X > 3.3E+7
IFAIL = 3 if 0.0 < X <= 2.23E-308
S17AEE IFAIL = 1 if abs(X) > 3.3E+7
S17AFE IFAIL = 1 if abs(X) > 3.3E+7
S17AGE IFAIL = 1 if X > 103.8
IFAIL = 2 if X < -1.3E+5
S17AHE IFAIL = 1 if X > 104.1
IFAIL = 2 if X < -1.3E+5
S17AJE IFAIL = 1 if X > 104.1
IFAIL = 2 if X < -1.3E+5
S17AKE IFAIL = 1 if X > 104.1
IFAIL = 2 if X < -1.3E+5
S17DCE IFAIL = 2 if abs (Z) < 5.97E-154
IFAIL = 4 if abs (Z) or FNU+N-1 > 6.71E+7
IFAIL = 5 if abs (Z) or FNU+N-1 > 4.50E+15
S17DEE IFAIL = 2 if imag (Z) > 700.0
IFAIL = 3 if abs (Z) or FNU+N-1 > 6.71E+7
IFAIL = 4 if abs (Z) or FNU+N-1 > 4.50E+15
S17DGE IFAIL = 3 if abs (Z) > 1.65E+5
IFAIL = 4 if abs (Z) > 2.72E+10
S17DHE IFAIL = 3 if abs (Z) > 1.65E+5
IFAIL = 4 if abs (Z) > 2.72E+10
S17DLE IFAIL = 2 if abs (Z) < 5.97E-154
IFAIL = 4 if abs (Z) or FNU+N-1 > 6.71E+7
IFAIL = 5 if abs (Z) or FNU+N-1 > 4.50E+15
S18ADE IFAIL = 2 if 0.0 < X <= 2.23E-308
S18AEE IFAIL = 1 if abs(X) > 711.6
S18AFE IFAIL = 1 if abs(X) > 711.6
S18CDE IFAIL = 2 if 0.0 < X <= 2.23E-308
S18DCE IFAIL = 2 if abs (Z) < 5.97E-154
IFAIL = 4 if abs (Z) or FNU+N-1 > 6.71E+7
IFAIL = 5 if abs (Z) or FNU+N-1 > 4.50E+15
S18DEE IFAIL = 2 if real (Z) > 700.0
IFAIL = 3 if abs (Z) or FNU+N-1 > 6.71E+7
IFAIL = 4 if abs (Z) or FNU+N-1 > 4.50E+15
S19AAE IFAIL = 1 if abs(x) >= 49.50
S19ABE IFAIL = 1 if abs(x) >= 49.50
S19ACE IFAIL = 1 if X > 997.26
S19ADE IFAIL = 1 if X > 997.26
S21BCE IFAIL = 3 if an argument < 1.579E-205
IFAIL = 4 if an argument >= 3.774E+202
S21BDE IFAIL = 3 if an argument < 2.820E-103
IFAIL = 4 if an argument >= 1.404E+102
(k) X01
The values of the mathematical constants are:
X01AAE (PI) = 3.1415926535897932
X01ABE (GAMMA) = 0.5772156649015329
(l) X02
The values of the machine constants are:
The basic parameters of the model
X02BHE = 2
X02BJE = 53
X02BKE = -1021
X02BLE = 1024
X02DJE = .TRUE.
Derived parameters of the floating-point arithmetic
X02AJE = Z'3CA0000000000001' ( 1.11022302462516E-16 )
X02AKE = Z'0010000000000000' ( 2.22507385850720E-308 )
X02ALE = Z'7FEFFFFFFFFFFFFF' ( 1.79769313486232E+308 )
X02AME = Z'0010000000000000' ( 2.22507385850720E-308 )
X02ANE = Z'2010000000000043' ( 2.98333629248013E-154 )
Parameters of other aspects of the computing environment
X02AHE = Z'4180000000000000' ( 3.35544320000000E+7 )
X02BBE = 9223372036854775807
X02BEE = 15
X02DAE = .FALSE.
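As a cross-check (mine, not part of the NAG document), the derived parameters listed above follow from the basic model parameters b = 2, p = 53, emin = -1021, emax = 1024 with rounding, and they appear to match IEEE 754 double precision, which is what the 64-bit Cray "single precision" REAL corresponds to numerically:

```python
import sys

# basic parameters of the model, from Section 4(l) above
b, p, emin, emax = 2, 53, -1021, 1024

x02aje = b ** (1 - p) / 2                        # machine precision (rounds: X02DJE = .TRUE.)
x02ake = 2.0 ** (emin - 1)                       # smallest positive model number
x02ale = (2 - b ** (1 - p)) * 2.0 ** (emax - 1)  # largest model number

print(x02aje, x02ake, x02ale)
# 1.1102230246251565e-16 2.2250738585072014e-308 1.7976931348623157e+308
```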
(m) X04
The default output units for error and advisory messages for those routines which can produce explicit output are both Fortran Unit 6.
Each supported NAG Fortran Library site is currently provided with a printed copy of the NAG Fortran Library Manual (or Update) and Introductory Guide. Additional copies are available for purchase;
please refer to the NAG documentation order form (available on the NAG websites, see Section 6 (c)) for details of current prices.
On-line documentation is bundled with this implementation. Please see the Readme file on the distribution medium for further information.
(a) Contact with NAG
Queries concerning this document or the implementation generally should be directed initially to your local Advisory Service. If you have difficulty in making contact locally, you can contact NAG
directly at one of the addresses given in the Appendix. Users subscribing to the support service are encouraged to contact one of the NAG Response Centres (see below).
(b) NAG Response Centres
The NAG Response Centres are available for general enquiries from all users and also for technical queries from sites with an annually licensed product or support service.
The Response Centres are open during office hours, but contact is possible by fax, email and phone (answering machine) at all times.
When contacting a Response Centre please quote your NAG site reference and NAG product code (in this case FLCRE19SE).
The NAG websites are an information service providing items of interest to users and prospective users of NAG products and services. The information is reviewed and updated regularly and includes
implementation availability, descriptions of products, downloadable software, product documentation and technical reports. The NAG websites can be accessed at
http://www.nag.com/ (in North America)
http://www.nag-j.co.jp/ (in Japan)
(d) NAG Electronic Newsletter
If you would like to be kept up to date with news from NAG you may want to register to receive our electronic newsletter, which will alert you to special offers, announcements about new products or product/service enhancements, case studies and NAG's event diary. To register simply visit one of our websites or contact us at nagnews@nag.co.uk.

Many factors influence the way NAG's products and services evolve and your ideas are invaluable in helping us to ensure that we meet your needs. If you would like to contribute to this process we would be delighted to receive your comments. We have provided a short survey on our website at www.nag.co.uk/local/feedback to enable you to provide this feedback. Alternatively feel free to contact the appropriate NAG Response Centre who will be happy either to record your comments or to send you a printed copy of the survey.
NAG Ltd
Wilkinson House
Jordan Hill Road
OXFORD OX2 8DR
United Kingdom
Tel: +44 (0)1865 511245
Fax: +44 (0)1865 310139

NAG Ltd Response Centre
email: infodesk@nag.co.uk
Tel: +44 (0)1865 311744
Fax: +44 (0)1865 311755

NAG Inc
1400 Opus Place, Suite 200
Downers Grove
IL 60515-5702
USA
Tel: +1 630 971 2337
Fax: +1 630 971 2706

NAG Inc Response Center
email: infodesk@nag.com
Tel: +1 630 971 2345
Fax: +1 630 971 2346
NAG GmbH
Schleissheimerstrasse 5
85748 Garching
email: info@naggmbh.de
Tel: +49 (0)89 320 7395
Fax: +49 (0)89 320 7396
Nihon NAG KK
Yaesu Nagaoka Building No. 6
1-9-8 Minato
email: help@nag-j.co.jp
Tel: +81 (0)3 5542 6311
Fax: +81 (0)3 5542 6312 | {"url":"http://www.nsc.liu.se/news/un.html","timestamp":"2014-04-20T08:38:52Z","content_type":null,"content_length":"26016","record_id":"<urn:uuid:e947936c-81c5-49ee-9421-72e460342e19>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00011-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Numpy-discussion] numpy reference array
Matt U mpuecker@mit....
Wed Mar 13 08:56:07 CDT 2013
Is it possible to create a numpy array which points to the same data in a
different numpy array (but in different order etc)?
For example:
import numpy as np
a = np.arange(10)
ids = np.array([0,0,5,5,9,9,1,1])
b = a[ids]
a[0] = -1
b[0] #should be -1 if b[0] referenced the same data as a[0]
ctypes almost does it for me, but the access is inconvenient. I would like to
access b as a regular numpy array:
import numpy as np
import ctypes
a = np.arange(10)
ids = np.array([0,0,5,5,9,9,1,1])
b = [a[id:id+1].ctypes.data_as(ctypes.POINTER(ctypes.c_long)) for id in ids]
a[0] = -1
b[0][0] #access is inconvenient
Some more information: I've written a finite-element code, and I'm working on
optimizing the python implementation. Profiling shows the slowest operation is
the re-creation of an array that extracts edge degrees of freedom from the
volume of the element (similar to b above). So, I'm trying to avoid copying the
data every time, and just setting up 'b' once. The ctypes solution is
sub-optimal since my code is mostly vectorized; that is, later I'd like to do
something like
c[ids] = b[ids] + d[ids]
where c, and d are the same shape as b but contain different data.
Any thoughts? If it's not possible that will save me time searching.
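One piece of standard NumPy behaviour worth adding here (it is not stated in the post): basic slices return views that share the original array's memory, while integer-array ("fancy") indexing such as a[ids] always copies -- a view requires a regular stride pattern, so an arbitrary index list like ids cannot be expressed as one. A minimal illustration:

```python
import numpy as np

a = np.arange(10)
v = a[2:6]          # basic slice -> a view, shares a's memory
f = a[[0, 5, 9]]    # integer-array ("fancy") indexing -> a copy

a[2] = -1
print(v[0])         # -1: the view sees the change
a[0] = -7
print(f[0])         # 0: the copy does not
print(np.shares_memory(a, v), np.shares_memory(a, f))  # True False
```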
4.1 Isotropy

Isotropy reduces the phase space of general relativity to two dimensions since, up to SU(2)-gauge freedom, there is only one independent component in an isotropic connection and triad, which is not already determined by the symmetry. This is analogous to metric variables, where the scale factor $a(t)$ is the only free function in an isotropic metric

$$ds^2 = -N(t)^2 dt^2 + a(t)^2 \left(\frac{dr^2}{1-kr^2} + r^2 d\Omega^2\right). \qquad (16)$$

The lapse function $N(t)$ does not play a dynamical role.

One can also understand these different roles of metric components from a Hamiltonian analysis of the Einstein–Hilbert action

$$S_{\rm EH} = \frac{1}{16\pi G}\int dt\, d^3x\, \sqrt{-g}\, R$$

specialized to isotropic metrics (16), whose Ricci scalar is

$$R = 6\left(\frac{\ddot a}{N^2 a} + \frac{\dot a^2}{N^2 a^2} + \frac{k}{a^2} - \frac{\dot a \dot N}{a N^3}\right).$$

The action then becomes

$$S = \frac{V_0}{16\pi G}\int dt\, N a^3 R = \frac{3 V_0}{8\pi G}\int dt\, N\left(-\frac{a \dot a^2}{N^2} + k a\right)$$

(with the spatial coordinate volume $V_0 = \int_\Sigma d^3x$) after integrating by parts, illustrating the different roles of $a$ and $N$: the Lagrangian depends on $\dot a$ but not on $\dot N$, such that variation with respect to $N$ yields a constraint rather than an evolution equation.
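The displayed equations in this section did not survive extraction. Assuming the standard isotropic forms, the Ricci scalar R = 6(a''/(N^2 a) + a'^2/(N^2 a^2) + k/a^2 - a'N'/(a N^3)) and a reduced Lagrangian proportional to N(-a a'^2/N^2 + k a) (both standard FRW expressions supplied here as assumptions, not taken from the extracted text), their consistency can be checked symbolically with SymPy:

```python
import sympy as sp

t, k = sp.symbols('t k')
a, N = sp.Function('a')(t), sp.Function('N')(t)
ad, Nd = sp.diff(a, t), sp.diff(N, t)

# Ricci scalar of the isotropic metric (standard FRW form, with lapse N)
R = 6*(sp.diff(a, t, 2)/(N**2*a) + ad**2/(N**2*a**2) + k/a**2 - ad*Nd/(a*N**3))

# N a^3 R equals a total time derivative plus 6*N*(-a*adot^2/N^2 + k*a);
# with the prefactor V0/(16 pi G) this yields the 3 V0/(8 pi G) reduced action.
boundary = sp.diff(6*a**2*ad/N, t)
reduced = 6*(-a*ad**2/N + k*N*a)
assert sp.simplify(N*a**3*R - boundary - reduced) == 0

# The reduced Lagrangian contains adot but no Ndot, so varying with respect
# to N produces a constraint (the vacuum Friedmann constraint), not an
# evolution equation: dL/dN = a*adot^2/N^2 + k*a.
A, Ad, Ns = sp.symbols('A Ad Ns')
L = Ns*(-A*Ad**2/Ns**2 + k*A)
assert sp.simplify(sp.diff(L, Ns) - (A*Ad**2/Ns**2 + k*A)) == 0
```

The absence of the lapse's time derivative from the reduced Lagrangian is exactly what makes the variation with respect to N a constraint rather than a dynamical equation.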
abstract algebra
There would only be 2 cyclic subgroups of order 10, because no two of the subgroups can share an element of order 10: if they did share such an element, it would generate both subgroups, so the two subgroups would be identical. This means that no two distinct cyclic subgroups of order 10 can share an element of order 10.
Thanks a lot for the help, this made sooooooo much more sense now. | {"url":"http://www.physicsforums.com/showthread.php?t=390967","timestamp":"2014-04-19T04:41:07Z","content_type":null,"content_length":"30756","record_id":"<urn:uuid:7fb08ddc-dda8-43a6-a634-5e87592f7719>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00340-ip-10-147-4-33.ec2.internal.warc.gz"} |
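To see where the count of 2 comes from (an illustrative aside, not part of the original thread): each cyclic subgroup of order 10 contains exactly phi(10) = 4 elements of order 10, and by the argument above distinct such subgroups share none of them, so 8 elements of order 10 give exactly 8/4 = 2 subgroups. A quick check that phi(10) = 4:

```python
from math import gcd

# In a cyclic group of order 10 (modeled as Z/10Z), the elements of
# order 10 are exactly the generators: the residues coprime to 10.
generators = [x for x in range(1, 10) if gcd(x, 10) == 1]
print(generators)        # [1, 3, 7, 9] -- phi(10) = 4 of them
assert len(generators) == 4
```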
What is a good example of a complete but not model-complete theory, and why?
The standard examples of complete but not model-complete theories seem to be:
- Dense linear orders with endpoints.
- The full theory $\mathrm{Th}(\mathcal{M})$ of $\mathcal{M}$, where $\mathcal{M} = (\mathbb{N}, >)$ is the structure of natural numbers equipped with the relation $>$ (and nothing else, i.e. no
addition etc).
Can anyone explain or give a reference to show why any of these two theories are not model-complete, or give another example altogether of a complete but not model complete theory (with explanation)?
For the second example, let $M$ be the natural numbers and let $N$ be the integers greater than or equal to $-1$. Then $M$ is a substructure of $N$, but $M\models$ ``0 is the least element", while this is false in $N$. Thus the theory is not model complete.

The first example is similar: let $M$ be $[0,1]$ and let $N$ be $[-1,1]$. Again $M\models$ ``0 is the least element" but the extension $N$ does not.

One equivalent of model completeness is that every formula is equivalent to an existential formula. So theories like true arithmetic, the theory of the natural numbers in the language {$+,\cdot,0,1$}, are far from model complete.
How do you convert 14 1/8 into an improper fraction?
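For reference (an illustrative addition; the extracted thread contains no answer): multiply the whole part by the denominator and add the numerator, so 14 1/8 = (14*8 + 1)/8 = 113/8. Python's `fractions` module confirms this:

```python
from fractions import Fraction

mixed = Fraction(14) + Fraction(1, 8)   # 14 1/8 as a single fraction
print(mixed)                            # 113/8
assert mixed == Fraction(113, 8)
```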
Guttenberg, NJ Algebra 2 Tutor
Find a Guttenberg, NJ Algebra 2 Tutor
...I have had the opportunity to work with several children with ADD/ADHD over the years as well as other learning and emotional disorders. My strongest attribute in educating children is my
patience. With my 11 years of assistant teaching with elementary students specifically with literacy, langu...
16 Subjects: including algebra 2, reading, algebra 1, GRE
...We then do several additional, similar examples, at first working together and then with the student proceeding alone, to ensure mastery of the topic. I assign follow-up work to see if the
student has indeed understood what has been taught. My hallmark is complete patience, and I will continue to go over the material until the student gains complete comprehension.
8 Subjects: including algebra 2, geometry, algebra 1, ACT Math
...I have been teaching at the college level for the past 8 years. I have tutored subjects including algebra, physics, trigonometry, and calculus. I plan to show students how to do well as a student in the Science, Technology, Engineering and Mathematics (STEM) fields.
10 Subjects: including algebra 2, calculus, physics, geometry
...I also ran a coaching session while living in England. I started playing softball at 5 years old. I played until the age of 19, which is 14 years.
16 Subjects: including algebra 2, statistics, geometry, precalculus
...My school taught all religious subjects as well as modern Hebrew itself in Hebrew. Besides the sacred texts, we studied classical literary works by authors such as Agnon, Tchernikhovsky and
Bialik. I wrote compositions in Hebrew on a wide range of topics, and also wrote letters in Hebrew to my Israeli relatives.
25 Subjects: including algebra 2, chemistry, writing, biology
Average Position is a Really Perverse Metric
Posted on Thursday, June 25th, 2009 by Bradd Libby
Average position is a really perverse metric. Let’s say that I have only 2 keywords in an account, each with one ad: On Day 1, ad #1 is in position 2 and gets 100 impressions per day, while ad #2 is
in position 9 and gets 10 impressions per day. The account’s average position on Day 1 (100×2 + 10×9, divided by 110) is thus 2.64.
Now let’s say that on Day 2 both ads move up one position. Ad #2 is now in position 8. An increase in position tends to result in more impressions, so let’s say that ad #2 now gets 40 more
impressions, for a total of 50 impressions on Day 2. Ad #1 is now in position 1 and let’s say it also gets 40 more impressions, for a total of 140 impressions for that ad on Day 2. The account’s
average position on Day 2 is thus 2.84.
That is, the average position has dropped (from 2.64 to 2.84) even though both ads in the account moved up one position.
What makes average position even more perverse is that this relationship is only true sometimes. For instance, in the example above, if ad #2 had been in position 6 on Day 1 and moved to position 5
on Day 2 (instead of from position 9 to 8), then the account’s average position on Day 1 would have been 2.36 and on Day 2 would have been 2.05. That is, the average position would not have dropped
as both ads moved up one position.
In case that hasn’t frustrated you enough, the average position of a group of ads/keywords can change even if all the ads stay in the same position. If ad #1 had been in position 2 on both days and
ad #2 had been in position 9 on both days, but the number of impressions had still been 100 and 140, and 10 and 50, as described above, then the average position on Day 1 still would have been 2.64,
but the average position on Day 2 would have been 3.84. That is, the average position would have dropped by more than 1 full point even though neither ad changed position at all!
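The impression-weighted arithmetic behind these three scenarios is easy to verify (a quick illustrative script, not from the original article):

```python
def avg_position(ads):
    """Impression-weighted average position over (impressions, position) pairs."""
    total = sum(impr for impr, _ in ads)
    return sum(impr * pos for impr, pos in ads) / total

# Scenario 1: both ads move up a position, yet the "average position" drops
day1 = avg_position([(100, 2), (10, 9)])   # 2.64
day2 = avg_position([(140, 1), (50, 8)])   # 2.84
print(round(day1, 2), round(day2, 2))

# Scenario 2: the same move starting from position 6 -- now the average improves
print(round(avg_position([(100, 2), (10, 6)]), 2))   # 2.36
print(round(avg_position([(140, 1), (50, 5)]), 2))   # 2.05

# Scenario 3: nobody moves, but the impression mix shifts -- the average drops anyway
print(round(avg_position([(140, 2), (50, 9)]), 2))   # 3.84
```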
To make matters even worse, the average position of an individual ad/keyword isn’t necessarily the position at which all, or even most, of its impressions occurred. Let’s say that a search engine
tells us that one of our ads got 4 impressions yesterday and had an average position of 2.0. Looking at the figure below, we see that there are only 5 possible ways to show 4 impressions such that
their average position is 2.0. (If you’re not convinced these are the only solutions, try for yourself to find others.)
The most obvious is solution 1. If all 4 impressions were shown in position 2, then their average position will be 2.0. Slightly less obvious is solution 2: if 2 of the impressions were in position
2 and one each in positions 1 and 3, then their average position will still be 2.0. Even less obvious is solution 3: if two impressions happened in both position 1 and position 3, then the average
position will still be 2.0 even though no impressions actually occurred in position 2!
There are two other possible configurations of impressions, solutions 4 and 5, which you can check for yourself have average positions of 2.0.
That’s it. Those are the only 5 possible configurations of impressions with an average position of 2.0. Unfortunately, we have no way from the data the search engines provide to determine which of
these 5 cases actually occurred. What’s strange is that 3 out of these 5 possibilities have more impressions in position 1 than in position 2! If that ad got 1 click yesterday, did that click come
from an impression that was actually in position 2, or was from an impression in position 1 (or position 3? or position 4 or 5)? When it comes to determining which position performs best for this
ad, I’d like to know!
If the search engines told us not only the average position at which our impressions occurred, but also the standard deviation of that average position, then we could figure out which configuration
of impressions actually occurred. For example, if they told the average position was 2.0 and the standard deviation was 0.0 (that is, no impressions happened outside position 2), then we’d know that
solution 1 was the case that actually happened. If they told us the standard deviation was 1.0 (that is, the ad was shown on average 1 position away from position 2), then we’d know that only
solutions 3 or 4 could have been the actual configuration of impressions.
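The disambiguating power of a standard deviation can be illustrated by computing the population standard deviation of each configuration (illustrative; which tuple matches which of the article's numbered solutions depends on the figure, which is not reproduced in this text):

```python
from statistics import pstdev

configs = [(2, 2, 2, 2), (1, 2, 2, 3), (1, 1, 3, 3), (1, 1, 2, 4), (1, 1, 1, 5)]
for c in configs:
    print(c, round(pstdev(c), 2))
# std 0.0 -> all four impressions sat in position 2
# std 1.0 -> two impressions each in positions 1 and 3 (none in position 2!)
assert pstdev((2, 2, 2, 2)) == 0.0 and pstdev((1, 1, 3, 3)) == 1.0
```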
Part of the problem here, I think, is terminology. The metric we commonly call the ‘average position’ is really the ‘impression-weighted position’. And just as there’s an impression-weighted
position, there’s also a click-weighted position. So, if the search engines told us that our ad got 4 impressions in average position 2.0 with a standard deviation of 1.0, and also 1 click in
average position 3.0, then we’d be able to determine immediately that configuration 3 was the one that actually occurred.
The reporting burden for the search engines would only be marginally increased, since they’d have to report 4 metrics for every ad instead of one (impression-weighted position, impression-weighted
standard deviation, click-weighted position and click-weighted standard deviation, rather than just ‘average position’), but the benefit to search marketers would be enormous. (Perhaps that’s why
they don’t do it…)
In the meantime, we’ll just have to take our average position with measure of skepticism by remembering how perverse the average position metric can be.
6 Responses to “Average Position is a Really Perverse Metric”
1. Reminds me of a similar scenario I learned back in the day:
Someone with a fairly high IQ (let’s call him Bradd;), lives in a town full of super-smart people who on average have a higher IQ than him.
Bradd then moves to another town with a lower average IQ than his is.
By moving, he raises the average IQ of both towns!
Graph Coloring for Computing Derivatives
This page contains a brief discussion of the background for, the functionalities available in, and the organization of our serial coloring software package ColPack. For a detailed discussion of the
algorithms on which the package relies, consult the Papers page.
Sparse derivative computation and coloring
Four-step procedure
The computation of an m-by-n sparse derivative matrix A using automatic differentiation (AD) can be made efficient by using the following four-step procedure.
1. Determine the sparsity structure of A.
2. Using a specialized vertex coloring on an appropriate graph representation of the matrix A, obtain an n-by-p seed matrix S that defines a partitioning of the columns of A into p groups with p as
small as possible.
3. Compute the compressed matrix B = AS using AD.
4. Recover the numerical values of the entries of A from B.
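For the column-wise, direct case, the four steps can be sketched in a few lines of NumPy (an illustrative toy, not ColPack's actual C++ implementation; the tridiagonal test matrix is invented for the example):

```python
import numpy as np

def color_columns(S):
    """Step 2: greedy partial distance-2 coloring of the columns of the
    boolean sparsity pattern S; columns sharing a nonzero row get distinct colors."""
    m, n = S.shape
    colors = np.full(n, -1)
    for j in range(n):
        # colors already used by columns that share a row with column j
        forbidden = {colors[j2]
                     for i in np.nonzero(S[:, j])[0]
                     for j2 in np.nonzero(S[i, :])[0]
                     if colors[j2] >= 0}
        c = 0
        while c in forbidden:
            c += 1
        colors[j] = c
    return colors

# Step 1: a toy sparsity structure -- a 5x5 tridiagonal Jacobian
S = np.eye(5, dtype=bool) | np.eye(5, k=1, dtype=bool) | np.eye(5, k=-1, dtype=bool)
colors = color_columns(S)
p = colors.max() + 1                      # 3 column groups for a tridiagonal pattern
seed = np.zeros((5, p))
seed[np.arange(5), colors] = 1.0          # n-by-p seed matrix

A = np.where(S, np.arange(1.0, 26.0).reshape(5, 5), 0.0)  # "true" Jacobian values
B = A @ seed                              # Step 3: compressed matrix (AD would form this)

# Step 4: direct recovery -- A[i, j] appears unmixed in column colors[j] of B,
# since same-colored columns never share a row
rows, cols = np.nonzero(S)
recovered = np.zeros_like(A)
recovered[rows, cols] = B[rows, colors[cols]]
assert np.allclose(recovered, A)
```

Here the compressed matrix has 3 columns instead of 5; for banded Jacobians of any size, the number of AD passes stays bounded by the bandwidth rather than growing with n.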
The set of criteria used to define the seed matrix S in the second step, the partitioning problem on the matrix A, depends on three mutually orthogonal factors:
● Whether the derivative matrix being computed is a Jacobian (nonsymmetric) or a Hessian (symmetric),
● Whether the numerical values of the entries of the original matrix A are obtained from the compressed representation B directly (without any further arithmetic) or indirectly (for example, by
solving for unknowns via successive substitution), and
● Whether the matrix partitioning is unidirectional (involving only columns or only rows) or bidirectional (involving both columns and rows). The four-step procedure above is described assuming a
column-wise unidirectional partitioning. In a row-wise unidirectional partitioning (which is a better approach for Jacobian matrices with a few dense rows), the compressed matrix would correspond
to the seed-matrix-Jacobian product S^TA. Similarly, in a bidirectional partitioning (which might be the best approach for Jacobian matrices with both a few dense rows and a few dense columns),
the Jacobian entries are recovered from two compressed matrices S[1]^TA and AS[2].
Coloring models
Table 1 below provides a summary of the most accurate coloring models for the various computational scenarios. In each case, the structure of a Jacobian matrix A is represented by the bipartite graph
G[b](A) = (V[1], V[2], E), where the vertex sets V[1] and V[2] represent the rows and columns of A, respectively, and each nonzero matrix entry A[ij] is represented by the edge (r[i ], c[j]) in E.
Analogously, the structure of a Hessian matrix A is represented by the adjacency graph G(A) = (V, E), where the vertex set V represents the columns (or, by symmetry, the rows) of A and each
off-diagonal nonzero matrix entry A[ij] (and its symmetric counterpart A[ji]) is represented by the single edge (c[i], c[j]) in E.
│ Matrix   │ 1d partition                │ 2d partition       │ Recovery     │
│ Jacobian │ Partial distance-2 coloring │ Star bicoloring    │ Direct       │
│ Hessian  │ Star coloring               │ NA                 │ Direct       │
│ Jacobian │ NA                          │ Acyclic bicoloring │ Substitution │
│ Hessian  │ Acyclic coloring            │ NA                 │ Substitution │
Table 1: Overview of coloring models in derivative computation. NA stands for not applicable.
In a graph G = (V, E), two distinct vertices are distance-k neighbors if a shortest path connecting them consists of at most k edges. A distance-k coloring of the graph is an assignment of positive
integers (called colors) to the vertices such that every two distance-k neighboring vertices get different colors. A star coloring is a distance-1 coloring where, in addition, every path on four
vertices uses at least three colors. An acyclic coloring is a distance-1 coloring in which every cycle uses at least three colors. The names star and acyclic coloring are due to the structures of
two-colored induced subgraphs: a collection of stars in the case of star coloring and a collection of trees in the case of acyclic coloring.
In a bipartite graph G[b] = (V[1], V[2], E), a partial distance-2 coloring on the vertex set V[i], i = 1,2, is an assignment of colors to the vertices in V[i] such that any two vertices connected by
a path of length exactly two edges receive different colors. Star and acyclic bicoloring in a bipartite graph are defined in a manner analogous to star and acyclic coloring in a general graph, but
with the additional stipulation that the set of colors assigned to row vertices (V[1]) is disjoint from the set of colors used for column vertices (V[2]).
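To make the star-coloring condition concrete: on the 4-cycle, a proper 2-coloring is a valid distance-1 coloring, yet the whole cycle is a path on four vertices using only two colors, so a star coloring needs a third color. A small brute-force checker (illustrative, not ColPack code):

```python
def is_distance1(adj, col):
    """Proper (distance-1) coloring: adjacent vertices get different colors."""
    return all(col[u] != col[v] for u in adj for v in adj[u])

def is_star(adj, col):
    """Star coloring: distance-1, plus every path on four vertices uses >= 3 colors."""
    if not is_distance1(adj, col):
        return False
    for a in adj:
        for b in adj[a]:
            for c in adj[b]:
                for d in adj[c]:
                    if len({a, b, c, d}) == 4 and len({col[a], col[b], col[c], col[d]}) < 3:
                        return False
    return True

C4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
two = {0: 0, 1: 1, 2: 0, 3: 1}     # proper 2-coloring of the 4-cycle
assert is_distance1(C4, two) and not is_star(C4, two)
three = {0: 0, 1: 1, 2: 0, 3: 2}   # a third color makes it a star coloring
assert is_star(C4, three)
```

Two-colored subgraphs of a star coloring are collections of stars, which is what lets Hessian entries be read off directly from the compressed representation.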
ColPack : functionalities
ColPack is a package comprising of implementations of algorithms for the specialized vertex coloring problems discussed in the previous section as well as algorithms for a variety of related
supporting tasks in derivative computation.
Coloring capabilities
Table 2 below gives a quick summary of all the coloring problems (on general and bipartite graphs) supported by ColPack.
General Graph G = (V, E):
· Distance-1 coloring: O(|V|·d[1]) = O(|E|)
· Distance-2 coloring: O(|V|·d[2])
· Star coloring^†: O(|V|·d[2])
· Acyclic coloring: O(|V|·d[2]·α)
· Restricted star coloring: O(|V|·d[2])
· Triangular coloring^†: O(|V|·d[2])

Bipartite Graph G[b] = (V[1], V[2], E), one-sided coloring:
· Partial distance-2 coloring on V[2]: O(|V[2]|·d(V[2])·Δ(V[1])) = O(|E|·Δ(V[1]))
· Partial distance-2 coloring on V[1]: O(|V[1]|·d(V[1])·Δ(V[2])) = O(|E|·Δ(V[2]))

Bipartite Graph G[b] = (V[1], V[2], E), bicoloring:
· Star bicoloring^†: O((|V[1]| + |V[2]|)·d[2])
Table 2: List of coloring problems for which implementations of algorithms are available in ColPack. Problems with the superscript ^† have more than one algorithm implemented in ColPack; the
complexity listed in each case is that of the fastest algorithm.
All of the coloring problems listed in Table 2 are NP-hard. Their corresponding algorithms in ColPack are greedy heuristics in the sense that the algorithms progressively extend a partial coloring by
processing one vertex at a time, in some order, in each step assigning a vertex the smallest allowable color. Listed beneath each coloring problem in Table 2 is the complexity of the corresponding
algorithm in ColPack. In the cases where ColPack has multiple algorithms for a problem (these are designated by the superscript ^†), the complexity expression corresponds to that of the fastest
algorithm. In the complexity expressions,
● d[k] denotes the average degree-k, the number of distinct paths of length at most k edges leaving a vertex. Thus d[1](v) corresponds to the usual degree of the vertex v, the number of edges
incident on v, and d[2](v) corresponds to the sum of the degree-1 values of the vertices adjacent to v.
● α denotes the inverse of Ackermann’s function.
● d(V[i]) and Δ(V[i]) denote the average and maximum, respectively, vertex degree-1 in the set V[i], i=1,2, of the bipartite graph G[b] = (V[1], V[2] , E).
Ordering techniques
The order in which vertices are processed in a greedy coloring algorithm determines the number of colors used by the algorithm. ColPack has implementations of various effective ordering techniques
for each of the supported coloring problems. These are summarized in Table 3 below.
│General Graph │Bipartite Graph: One-sided Coloring │Bipartite Graph: Bicoloring │
│· Natural │· Column Natural │· Natural │
│· Largest First │· Column Largest First │· Largest First │
│· Smallest Last │· Column Smallest Last │· Smallest Last │
│· Incidence Degree │· Column Incidence Degree │· Incidence Degree │
│· Dynamic Largest First │· Row Natural │· Dynamic Largest First │
│· Distance-2 Largest First │· Row Largest First │· Selective Largest First │
│· Distance-2 Smallest Last │· Row Smallest Last │· Selective Smallest Last │
│· Distance-2 Incidence Degree │· Row Incidence Degree │· Selective Incidence Degree│
│· Distance-2 Dynamic Largest First│ │ │
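For instance, the Smallest Last ordering, listed above for both the general-graph and bipartite variants, can be sketched as follows (an illustrative Python version, not the package's implementation):

```python
def smallest_last_order(adj):
    """Repeatedly remove a vertex of minimum degree in the remaining graph;
    the coloring order is the reverse of the removal order."""
    deg = {v: len(adj[v]) for v in adj}
    remaining, removal = set(adj), []
    while remaining:
        v = min(remaining, key=deg.get)
        remaining.remove(v)
        removal.append(v)
        for u in adj[v]:
            if u in remaining:
                deg[u] -= 1
    return removal[::-1]

def greedy_color(adj, order):
    """Greedy distance-1 coloring following the given vertex order."""
    col = {}
    for v in order:
        used = {col[u] for u in adj[v] if u in col}
        c = 0
        while c in used:
            c += 1
        col[v] = c
    return col

# a wheel graph: hub 0 joined to the 5-cycle 1..5
adj = {0: [1, 2, 3, 4, 5],
       1: [0, 2, 5], 2: [0, 1, 3], 3: [0, 2, 4], 4: [0, 3, 5], 5: [0, 4, 1]}
col = greedy_color(adj, smallest_last_order(adj))
assert all(col[u] != col[v] for u in adj for v in adj[u])   # proper coloring
```

Smallest Last guarantees at most degeneracy + 1 colors under greedy coloring; for this wheel the degeneracy is 3, so at most 4 colors are used.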
Recovery routines
Besides coloring and ordering capabilities, ColPack also has routines for recovering the numerical values of the entries of a derivative matrix from a compressed representation. In particular the
following reconstruction routines are currently available:
● Recovery routines for direct (via star coloring ) and substitution-based (via acyclic coloring) Hessian computation
● Recovery routines for unidirectional, direct Jacobian computation (via column-wise or row-wise distance-2 coloring)
● Recovery routines for bidirectional, direct Jacobian computation via star bicoloring
Graph construction routines
Finally, as a supporting functionality, ColPack has routines for constructing bipartite graphs (for Jacobians) and adjacency graphs (for Hessians) from files specifying matrix sparsity structures in
various formats, including Matrix Market, Harwell-Boeing and MeTis.
ColPack : organization
ColPack is written in an object-oriented fashion in C++ heavily using the Standard Template Library (STL). It is designed to be simple, modular, extensible and efficient. Figure 1 below gives an
overview of the structure of the major classes of ColPack.
Figure 1: Overview of the structure of the major classes in ColPack. A solid arrow indicates an inheritance-relationship, and a broken arrow indicates a uses-relationship.
ColPack functions that a user needs to call directly are made available via the appropriate Interface classes.
Sample Codes
The following sample codes illustrate how ColPack functions are called in the context of sparse derivative computation via the Four-step Procedure. In each sample code, the de-compressed sparse
derivative matrix is returned in the Coordinate Format (zero-based indexing). Recovery routines that return the de-compressed matrix in Direct Sparse Solver and ADOL-C formats are also available in ColPack.
Column-wise Jacobian Computation (via partial distance-2 coloring)
Row-wise Jacobian Computation (via partial distance-2 coloring)
Direct Hessian Computation (via star coloring)
Indirect Hessian Computation (via acyclic coloring)
Bidirectional, direct Jacobian Computation (via star bicoloring)
Here is the source code of ColPack. It is being distributed under the GNU Lesser General Public License.
And here are a few test graphs for experiments.
Graph Collection in MeTis format
Graph Collection in Matrix Market format
Complete Doxygen documentation of ColPack | {"url":"http://cscapes.cs.purdue.edu/coloringpage/software.htm","timestamp":"2014-04-21T07:04:10Z","content_type":null,"content_length":"135005","record_id":"<urn:uuid:b7caee0d-2148-4ec9-975a-2a8a3c75dbd0>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00330-ip-10-147-4-33.ec2.internal.warc.gz"} |
Integration by parts
September 19th 2008, 10:11 AM #1
I'm trying to learn integration by parts and I feel as though I have a grasp on the formula, however the back of the book's answers are disagreeing with mine. I searched online for something to
help teach me how to do this kind of integration and came upon this: http://www.math.umn.edu/~rosenfie/c/...n_by_parts.pdf
It seemed it would be helpful, but I'm stumped on how they get some of their answers. When they integrate (x^2)(e^x), on page 2 of 3, they somehow end up with (x^2)(e^x) minus the integral of 2xe^2 dx. Where is the e^2 coming from?
Thanks in advance.
September 19th 2008, 10:13 AM #2

It is a typo. You can see in the following equality that they go back to the correct thing:

$\dots -2 \int xe^x ~dx$

September 19th 2008, 10:19 AM #3

Thanks, I swear many forces are conspiring against me to prohibit my learning of calc 2.
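The full antiderivative the thread is working toward, integral of x^2 e^x dx = (x^2 - 2x + 2) e^x + C, can be verified symbolically (an illustrative check, not part of the original thread):

```python
import sympy as sp

x = sp.symbols('x')
# by parts twice: int x^2 e^x dx = x^2 e^x - 2 int x e^x dx = (x^2 - 2x + 2) e^x + C
result = sp.integrate(x**2 * sp.exp(x), x)
assert sp.simplify(result - (x**2 - 2*x + 2) * sp.exp(x)) == 0
```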
How to Celebrate Pi Day
Edited by Josh C., Jakeob Vigil, Ben Rubenstein, Jack Herrick and 111 others
Pi is a mathematical constant that is the ratio of a circle's circumference to its diameter, and it is also one of the most revered mathematical constants in the known world.^[1] Pi Day was first
officially celebrated on a large scale in 1988 at the San Francisco Exploratorium.^[2] Since then, Pi Day has been celebrated by millions of students and math-lovers. The holiday is celebrated on
14th March, since 3, 1, and 4 are the three most significant digits in the decimal form of pi. If you'd like to learn how to celebrate pi in due fashion, read on and it will be as easy as pi.
1. 1
Eat pi foods. Eating pi foods may be the easiest and the most fun way to celebrate pi. If you're in school, everyone can bring in a pi-themed food for a pi-pot luck, and if you're just
celebrating with friends, you can all enjoy a pi-themed meal together. Here are some creative ideas for pi-related foods:
□ Eat any type of pie. Try key lime pie, pumpkin pie, pecan pie, or apple pie.
□ Put the pi symbol on a variety of cookies, pies, and cupcakes. You can even make the foods in advance and write pi on them in icing in a big group.
□ Make a pie day pie in honor of the special day.
□ Take the punny approach. Eat pineapple, pizza, or pine nuts, and drink piña coladas or pineapple juice.
□ Use the shape method. Make cookies, cakes, loaves of bread, or pancakes that are shaped like pi.
□ Pi foods don't have to be limited to desserts. Eat shepherd's pie or chicken pot pie.
2. 2
Create some pi ambiance. Just as people display a tree and mistletoe, wear Christmas gear, and sing Christmas songs around Christmas, there is a lot of room for making the environment around you
reminiscent of pi. Here are some Pi Day ideas:
□ Wear a pi T-shirt.
□ Wear pi accessories. This idea can be taken further to include pi jewelry, such as a necklace whose beads represent the numbers in pi, carrying around a pi mug or clock, or other pi
□ Put on a pi temporary tattoo.
□ Put pi stickers all over your stuff.
□ Give away pencils with pi symbols.
□ Make your computer or phone background into something related to pi.
□ Change your Internet browser into something pi-related.
3. 3
Don't forget to celebrate pi when it's 1:59 PM on Pi Day. Take a minute to acknowledge pi in whatever way you see fit. During this minute, you can cheer wildly, or even have a countdown leading
up to "pi minute" the minute before.
□ For added effect for a countdown, have a "pi drop" where you drop a big pie off a balcony or another elevated structure. You can even add a lot of sprinkles to the pie to make it look like a
disco ball.
□ You can also be more serious and have a minute of silence. Every person can think about what pi means to him, and consider where the world would be without pi. If you're in school, someone
can even announce the minute of pi over the loud speaker.
□ If you've written a pi song or made a pi dance, this would be the perfect minute to share your art.
☆ There is some debate regarding the exact time that Pi Day should be celebrated. Though 1:59 PM is probably the most common, some believe that the 24-hour clock should be used instead, which would mean that Pi Day should be celebrated at 1:59 AM or at 15:09.
4. 4
Convert things into pi. This step is absolutely necessary for two reasons: first, to utterly confuse people who have no idea what you're talking about, and secondly, to have fun seeing how many
things can be referenced with pi. This will help you reach an even higher appreciation for the amazing number that is pi. Consider two approaches:
□ Use pi to tell the time. Convert naturally circular things into radians, like the hours on the clock. Instead of it being 3 o'clock, now it's 1/2 pi o'clock. Or, instead of it being 3
o'clock, convert the inclination of the sun into radians and describe that as the time.
□ Simply use 3.14 as a unit of measure. Instead of being 31 years old, you are about 9.87π years old (31 divided by 3.14159...). With this same approach, you can find out your next pi birthday, at 10π, roughly 31.4 years (don't forget to celebrate it when it comes!).
5. 5
Play pi games. Pi games will not only be fun, but they will improve your understanding of pi and will make everyone around you have a deeper appreciation of pi.
□ There are plenty of traditional games that are appropriate on Pi Day, like a piñata, a pie-eating contest, or a pie-in-the-face fundraiser.
□ Answer math questions. Come to Pi Day with at least ten math questions you can spring on people. They should be related to geometry, trigonometry, or other fields where pi is particularly
□ Play Pi Day versions of "Are You Smarter Than a 5th Grader?" or "Jeopardy."
□ Conduct a Pi Day Scavenger Hunt.
□ Remember that Pi Day also happens to be the birthday of Albert Einstein. Play an Einstein-themed trivia game, or have an Einstein impersonator contest.
□ Have a pi memorization or recitation contest. As soon as someone loses, you can hit him in the face with a pie. If you want to really show your dedication on Pi Day, learn to memorize pi in
advance by studying as many of the digits of pi as you can.
□ Discuss the different ways to derive pi.
□ Write as many digits of pi on a blackboard as you can, and then try to find your name, birthday, ATM pin, or pi, in pi.
6. 6
Use your artistic side to celebrate pi. You don't have to be a left-brained thinker to fully celebrate pi. You can use your creative side show how much you love and appreciate pi. And even if
you're not the world's most talented poet or writer, you can still have fun while being silly. You don't have to create art to celebrate pi; you can also just appreciate art that already
celebrates it. Here are a few ways to celebrate pi artistically:
□ Write poetry. Write either a pi-ku (haiku) or a regular pi-em (poem) to show how much you love pi.
□ Write a pi-themed song.
□ Write a short pi-themed play and act it out.
□ Paint a picture of pi.
□ Watch the film π.^[3] It's a dark movie about a mathematician who goes crazy. It's very interesting, but intended for an adult audience only.
□ Listen to Kate Bush. Progressive rock musician Kate Bush performed a song titled π on her 2005 album Aerial.
☆ Bush sings pi to its 137th decimal place, but omits the 79th through 100th decimal places of pi for unknown reasons.
□ Watch the film Life of Pi.^[4] Technically "Pi" is only the protagonist's name, but it has gotten people thinking about pi.
7. Get physical with pi. You can also use your physical prowess, or even your car, to show your love for pi. Here are a few things you can do to celebrate pi:
□ Do a pi mile run. Run 3.14 miles, which is just a tiny bit longer than a 5K. You can take this a step further by organizing a pi mile run with friends or colleagues.
□ Lay down in pi formation and take a picture. If you're bold, have two people standing up while holding up a third person who is laying sideways in between them. Make sure the lightest person
is on top.
□ Drive exactly 3.14 miles.
□ March in a circle to show your love for pi.
8. Help the tradition continue. Don't let this be a one-time thing — you owe it to pi to celebrate again and again. Set the date for next year and create a pi club or website in the process.
□ Talk about your plans for Pi Day the following year. This will help generate enthusiasm.
□ Take notes after your Pi Day celebration. What can you do next year to make your celebration even more incredible?
□ Next year, talk about the day months in advance so your skeptical friends can be convinced to join in. You can even advertise for the event by emailing your closer friends or even setting up
a Pi Day Facebook page.
• Pi day is also Einstein's Birthday.
• Show your love for pi by getting married on Pi Day. There's nothing more romantic than being married to the one you love at 1:59:26 PM on March 14th to show that, like pi, your love will continue indefinitely.
• Note that Pi Approximation Day is held on July 22, because when you use the DD/MM format, it is shown as 22/7, the fraction for pi.
• Pi continues indefinitely, and so far it has been tracked out to 2,576,980,377,524 (over 2 trillion) digits after the decimal place using a computer. ^[5]
Trying to figure this out?
December 4th 2010, 11:07 AM #1
Jan 2009
Ontario Canada
Trying to figure this out?
We were given 1 example to do math questions. But I don't get how I got the answer.
With Log and Ln, but in this one it's Ln.
I have to solve 2.6 = 1.4e^(0.3t) for t.
So you have to divide both sides by 1.4 and add Ln as well.
Ln(2.6/1.4) = Ln e^(0.3t)
Now my example in my book does not make sense. I did not make a note next to it.
What are the next examples to get to the answer of t=2.06.
If someone would kindly help me out that would be appreciated.
You don't "add" log or ln to both sides: you apply it to both sides, and that only if you're sure they are positive, so:
$\displaystyle{\ln\left(\frac{2.6}{1.4}\right)=\ln( e^{0.3t})=0.3t\Longrightarrow t=\frac{\ln\left(\frac{2.6}{1.4}\right)}{0.3}}$ , as
$\ln e^x=x$ since $e^x\,,\ln x$ are inverse function to each other.
The functions $\ln x$ and $e^x$ are inverse, i.e., $\ln(e^x)=x$ for all x and $e^{\ln x}=x$ for all $x>0$. Therefore, $\ln(2.6/1.4) = \ln e^{0.3t}=0.3t$, from which t can be found.
You are on the right track,
$2.6 = 1.4 e^{0.3t}$
$\frac{2.6}{1.4}= e^{0.3t}$
$\ln \frac{2.6}{1.4}= 0.3t$
$\displaystyle\frac{\ln \frac{2.6}{1.4}}{0.3}= t$
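For anyone wanting to check the arithmetic, the last line evaluates numerically (a quick sketch in Python; the language choice is mine):

```python
import math

# Solve 2.6 = 1.4 * e**(0.3 * t) for t, as derived above:
# t = ln(2.6 / 1.4) / 0.3
t = math.log(2.6 / 1.4) / 0.3
print(t)  # about 2.06, matching the expected answer

# Sanity check: plug t back into the original equation
assert abs(1.4 * math.exp(0.3 * t) - 2.6) < 1e-12
```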
OK I get that now. But what happens to the e on the 2nd line?? Is that the e that is applied to the left side as Ln??? Do I have that correct?
Thank you all for the help to clear this up.
OK I called a friend of mine and the Ln is e with a base of e, so the answer is 1. So it's not needed in a way, correct?
Last edited by bradycat; December 4th 2010 at 11:47 AM.
They are the inverse of each other.
I.e. $x+3-3 = x-3+3 = x$
$\displaystyle \ln e^x = e^{\ln x} = x$
and also,
$\displaystyle \log_{a} a^x = a^{\log_{a} x} = x$
The VELMOD analysis can tell us which velocity field models - which values of β_I, σ_v, w_LG, and quadrupole parameters - are "better" than others. However, as with maximum likelihood approaches generally, by itself it cannot tell us which, if any, of these models is an acceptable fit to the data. This is because we do not have precise, a priori knowledge of the two sources of variance, the velocity noise σ_v and the TF scatter σ_TF. Instead, we have treated these quantities as free parameters and determined their values by maximizing likelihood. As a result, a standard χ^2 statistic will be ~ 1 per degree of freedom (dof), even if the fit is poor.
Of course, we can ask whether or not the values of σ_v and σ_TF obtained from VELMOD agree with independent estimates. It is reassuring that they do. We find that σ_TF for the A82 and MAT samples lies within the range estimated by Willick et al. (1996) by methods independent of peculiar velocity models. However, this agreement is of limited significance. TF scatter is very sensitive to non-Gaussian outliers (Section 4.1), and thus to precisely which objects have been excluded. Furthermore, the MAT subsample used here is only about half as large as the MAT subsample used by Willick et al. (1996) to estimate its scatter. The VELMOD result for the velocity noise σ_v is remarkably small and appears consistent with recent studies of the velocity field outside of clusters based on independent methods (e.g., Miller, Davis, & White 1996 and Strauss et al. 1997). Indeed, because ~ 90 km s^-1 may be attributed to IRAS velocity prediction errors (Section 3.2), our value of σ_v suggests an even smaller true one-dimensional velocity noise. Still, the small σ_v is not necessarily diagnostic; for demonstrably poor models (e.g., unrealistic values of β_I), one can still obtain a small σ_v. Thus, an alternative approach is required for identifying a poor fit.
Let us consider fitting a straight line y = ax + b by least squares to data (x[i], y[i]) whose errors are unknown. One obtains a, b, and also the rms scatter about the fit. Because the scatter is
derived from the fit, the χ^2 statistic is ~ 1 per dof by construction. However, if the straight line is a bad fit - if, say, the relation between y and x is actually quadratic - then the residuals
from the fit will exhibit coherence. Coherent residuals in excess of what is expected from the observed scatter would signify that a model is a poor fit. In this section, we will make such an
assessment for the VELMOD residuals. First, we will define a suitable residual and plot it on the sky. We will demonstrate coherence and incoherence of the residuals for "poor" and "good" models,
respectively, by plotting residual autocorrelation functions. Motivated by these considerations, we will define and compute a statistic that measures goodness of fit. | {"url":"http://ned.ipac.caltech.edu/level5/March02/Willick/Will5.html","timestamp":"2014-04-16T15:59:34Z","content_type":null,"content_length":"5864","record_id":"<urn:uuid:dfa9baec-a028-4583-91dd-182738b140e2>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00440-ip-10-147-4-33.ec2.internal.warc.gz"} |
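The straight-line analogy can be made concrete with a toy example (a sketch of my own, not the paper's actual statistic): fit a line to data that is secretly quadratic; the scatter estimated from the fit forces χ^2 per dof to be 1 by construction, yet the lag-1 autocorrelation of the residuals exposes their coherence.

```python
import random

random.seed(0)
n = 200
xs = [-1.0 + 2.0 * i / (n - 1) for i in range(n)]
ys = [2.0 * x * x + random.gauss(0.0, 0.05) for x in xs]  # quadratic truth + noise

# Least-squares straight-line fit y = a*x + b
mx = sum(xs) / n
my = sum(ys) / n
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - a * mx
resid = [y - (a * x + b) for x, y in zip(xs, ys)]

# Scatter estimated from the fit itself => chi^2 per dof = 1 by construction
sigma2 = sum(r * r for r in resid) / (n - 2)
chi2_per_dof = sum(r * r / sigma2 for r in resid) / (n - 2)

# Lag-1 autocorrelation of residuals: ~0 if incoherent, large and positive here
mr = sum(resid) / n
lag1 = (sum((resid[i] - mr) * (resid[i + 1] - mr) for i in range(n - 1))
        / sum((r - mr) ** 2 for r in resid))
print(round(chi2_per_dof, 3), round(lag1, 2))  # chi^2/dof is 1; lag1 is near 1
```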
Inner Peace (Through Superior Firepower) – Episode 024
drachefly wrote:
Housellama wrote:Quantum mechanics is just booping weird. It contains contradictions
No, it doesn't. It's a consistent system. If it weren't consistent, it wouldn't even be worth considering.
Well, it depends on what you call "quantum mechanics."
You have to understand - the reason why so many physicists are
mathematicians is because the math needed to describe physics at the scale we can reach now doesn't exist yet. So what physicists do with the math they have is say "&!@# it, I'm pretty damn sure the
theory's right, so we'll just quietly ignore this problem and pretend it actually works out." Later physicists wave their hands and try to claim that what they're doing is perfectly fine. But that's
post-hoc rationalization - they bull$*!+ their way through it, and they knew it. Seriously, some of the math blows up in 4 dimensions, so one of the tricks is to say "well... let's *pretend* we don't
live in 4 dimensions... let's pretend we live in "4+a little" dimensions, and then we'll just take 'a little' to go to zero and pretend that's the right answer." The mathematicians around might say
that's okay (it's called 'analytic continuation') but seriously, if the math doesn't work in the number of dimensions you're actually working in, it ain't all that great math.
So if you're talking about the *ideas* of quantum mechanics... the fact that things are quantized, the fact that certain observables are complementary, the fact that you have certain fundamental symmetries of the universe - those guys are fine. But if you're talking about the actual *math* - it's *riddled* with contradictions. Physicists just say "OK... so the answer isn't *really* infinity... so let's just replace it and pretend we never saw the infinity." Again, you might think "well, OK..." but the problem is that sometimes when you cheat and do that, you end up making your nice, elegant theory that you base all your other math on *not actually work anymore*. It just really depends on whether you think the math is all that important or not - and most physicists tend to side on the "if I can calculate a number and that number ends up being right, I don't care if the math sucks."
As a note, this is par for the course for physicists. Dirac needed a function that picked out a specific value when you integrated over it. So he just said "OK... there is one. Now we're gonna use it
and play around with it." Mathematicians looked at it and said "WTF! How the heck does this work?" and Dirac basically responded "Who the hell knows, you go find out and I'm gonna do physics." Thirty
years later they managed to actually work out a consistent description of Dirac's delta function. (Meanwhile Dirac had laid down most of the basics for the next century of physics).
Dirac's attitude was that the math that's used isn't intended to be descriptive, but just to figure out some way to calculate things. His opinion would be that describing string theory to the public
is a waste of time, because it's just math, and has nothing to do with what's actually going on in the universe. Similarly he would be railing against Kaku's goofy "space pushes" description. History
has proven Dirac to be right many many times - the mathematics of a theory usually long outlast its silly examples - so I tend to side with him in that. Waving your hands around and saying "look,
strings!" is just not helpful to actually explain physics.
I mention this because your statement of "if it wasn't consistent, it wouldn't be worth considering" is unfortunately very naive. In fact, most modern physics theories start out life hopelessly
inconsistent, but physicists, like Dirac, basically said "whatever, I can use it to calculate stuff. Good enough for me" and just hope the mathematicians figure it out eventually.
Re: Inner Peace (Through Superior Firepower) – Episode 024
barawn wrote:Seriously, some of the math blows up in 4 dimensions, so one of the tricks is to say "well... let's *pretend* we don't live in 4 dimensions... let's pretend we live in "4+a little"
dimensions, and then we'll just take 'a little' to go to zero and pretend that's the right answer." The mathematicians around might say that's okay (it's called 'analytic continuation') but
seriously, if the math doesn't work in the number of dimensions you're actually working in, it ain't all that great math.
Ah-hem. NOBODY shall banish us from the paradise that Euler has created for us.
On the larger point (that physicists are/should be first keen on explaining material phenomena, then worry about mathematical rigour), this is true and the Dirac delta function is a nice example.
But I resent the implication that the Dirac delta function is not proper math. It turns out that the usual concept of function is not enough to make sense of Dirac's idea, but distributions work
fine - legitimate mathematical objects defined based on Dirac's intuition, not by their value at a point but by their properties as operators on functions. And I'm not seeing where contradictions
arise from the Dirac delta anyway.
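The operator view described here is easy to see numerically (a sketch of my own, not a formal proof): replace the delta with a narrow Gaussian delta_eps and watch the integral of delta_eps(x)·f(x) home in on f(0) as eps shrinks - the sifting property that defines the distribution.

```python
import math

def sift(f, eps, n=20000, lim=1.0):
    """Midpoint-rule approximation of the integral of delta_eps(x) * f(x)
    over [-lim, lim], where delta_eps is a unit-area Gaussian of width eps."""
    h = 2.0 * lim / n
    total = 0.0
    for i in range(n):
        x = -lim + (i + 0.5) * h
        g = math.exp(-x * x / (2.0 * eps * eps)) / (eps * math.sqrt(2.0 * math.pi))
        total += g * f(x) * h
    return total

# As eps -> 0 the integral approaches f(0) = cos(0) = 1: the sifting property.
print(sift(math.cos, 0.1), sift(math.cos, 0.01))
```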
Mathematics itself made progress by first formulating theories in an intuitive fashion: calculus and Euclid's (!!) geometry stand out here. It was later work that teased out assumptions made in such
ideas and put them on rigorous footing, but with one worrying exception, the later rigorous systems validated the old intuitions. Which turned out to be, except that one thing I didn't mention, consistent.
So where are inconsistencies in the newer theories? I'm honestly curious, this stuff interests me.
The whole point of this is lost if you keep it a secret.
Re: Inner Peace (Through Superior Firepower) – Episode 024
Guyyyyysssss, this amount of off-topic and occasionally strongly-opinionated debate is gonna attract Oberon, he senses potential flame targets the same way arctic wolves smell fear! He's been
gone...like months, but suddenly came out of hibernation to tear into that lolita chick...but that was a few days ago so I bet he's hungry again lol.
"I'm afraid I don't understand. And also afraid that I do."
GJC wrote:Two guys with basically the same name in a discussion about a character getting cloned.
There's gotta be a good joke in here somewhere.
Re: Inner Peace (Through Superior Firepower) – Episode 024
barawn wrote:
drachefly wrote:
Housellama wrote:Quantum mechanics is just booping weird. It contains contradictions
No, it doesn't. It's a consistent system. If it weren't consistent, it wouldn't even be worth considering.
Well, it depends on what you call "quantum mechanics."
I mean the stuff you said was the fundamentals - the part that's left of QM before you begin making claims about what sorts of interactions there are. The rest - the renormalization group and all
that infinity-zapping stuff - I take that to be a consequence of their being approximations for some reason or another (e.g. space is discrete, or a near-field correction on the interactions of point particles is needed).
In that case, we just need to partition off the 'don't go here' areas as 'outside the bounds of our approximation' and no inconsistency arises. But the fundamental core of QM? The notion that we're
in a Hilbert space with parameters describing a space of at least 3+1 dimensions? That has to be consistent to be worth considering.
I did not mean to make a claim that ANY rule which if applied everywhere would lead to a contradiction is useless in all cases, only those that actually need to be applied everywhere if they're to
mean anything at all.
Re: Inner Peace (Through Superior Firepower) – Episode 024
BLANDCorporatio wrote:
barawn wrote:Seriously, some of the math blows up in 4 dimensions, so one of the tricks is to say "well... let's *pretend* we don't live in 4 dimensions... let's pretend we live in "4+a
little" dimensions, and then we'll just take 'a little' to go to zero and pretend that's the right answer." The mathematicians around might say that's okay (it's called 'analytic
continuation') but seriously, if the math doesn't work in the number of dimensions you're actually working in, it ain't all that great math.
Ah-hem. NOBODY shall banish us from the paradise that Euler has created for us.
On the larger point (that physicists are/should be first keen on explaining material phenomena, then worry about mathematical rigour), this is true and the Dirac delta function is a nice example.
But I resent the implication that the Dirac delta function is not proper math. It turns out that the usual concept of function is not enough to make sense of Dirac's idea, but distributions work
fine- legitimate mathematical objects defined based on Dirac's intuition, not by their value at a point but by their properties as operators on functions. And I'm not seeing where contradictions
arise from the Dirac delta anyway.
Mathematics itself made progress by first formulating theories in an intuitive fashion: calculus and Euclid's (!!) geometry stand out here. It was later work that teased out assumptions made in
such ideas and put them on rigorous footing, but with one worrying exception, the later rigorous systems validated the old intuitions. Which turned out to be, except that one thing I didn't
mention, consistent.
So where are inconsistencies in the newer theories? I'm honestly curious, this stuff interests me.
As far as inconsistencies and contradictions in the theory? 5 words. "Spooky action at a distance". Quantum entanglement can't be explained within QM. It's inconsistent with the theory of locality,
but it works. So they just kinda wrote in an exception going "Booped if we know why, but this happens." There's a lot of things in QM where they can't explain why it happens, but they know it does,
and they can make a model that predicts it, but it goes against other rules in QM. "You can't do anything nonlocally... except for this and that's okay."
I'm not a physicist or math guy, but I know a bit more about QM than the average layperson. They can model the behavior that they see and use those models to accurately predict what happens, but as
to the why? They don't know. So you get theory after theory. And all of it is just stuff from the imagination to try to explain in math the behaviors that they see in reality.
"All warfare is based on deception" - Sun Tzu, Chapter 1, Line 18, The Art of War
"The principle of strategy is to know ten thousand things by having one thing." - Miyamoto Musashi, The Book of Earth, Go Rin No Sho
Re: Inner Peace (Through Superior Firepower) – Episode 024
Housellama wrote:As far as inconsistencies and contradictions in the theory? 5 words. "Spooky action at a distance". Quantum entanglement can't be explained within QM. It's inconsistent with the
theory of locality, but it works.
Urh, not sure I follow. Entanglement appears naturally in the mathematical theory and is indeed confirmed by experiment so far. If anything this points to intuitions of locality being off-base. The
mathematical theory itself works fine.
I guess the thing that's referred to as inconsistent is the habit of "infinities" (aka, divergent sums) to show up when doing calculations. This has some analog in classical physics, as a
differential equation may not have solutions that are valid "for all time" but instead blow up at singularities (though the reasons stuff like this happens are surely different; and while an equation
that blows up simply means the assumptions used in the model are only valid in certain ranges of parameters, it may still be possible to sum divergent series and get a reasonable value. Again, none
shall banish us from the paradise that Euler hath wrought).
The whole point of this is lost if you keep it a secret.
Re: Inner Peace (Through Superior Firepower) – Episode 024
Can we move the physics discussion to another thread, please? As far as I can tell, the relevancy-connection of QM and GR to Wanda's Fate and/or raiment is, at this point, tenuous at best.
Rob should release a "new one" to put this thread out of my misery.
Re: Inner Peace (Through Superior Firepower) – Episode 024
BLANDCorporatio wrote:
Housellama wrote:As far as inconsistencies and contradictions in the theory? 5 words. "Spooky action at a distance". Quantum entanglement can't be explained within QM. It's inconsistent with
the theory of locality, but it works.
Urh, not sure I follow. Entanglement appears naturally in the mathematical theory and is indeed confirmed by experiment so far. If anything this points to intuitions of locality being off-base.
The mathematical theory itself works fine.
I guess the thing that's referred to as inconsistent is the habit of "infinities" (aka, divergent sums) to show up when doing calculations. This has some analog in classical physics, as a
differential equation may not have solutions that are valid "for all time" but instead blow up at singularities (though the reasons stuff like this happens are surely different; and while an
equation that blows up simply means the assumptions used in the model are only valid in certain ranges of parameters, it may still be possible to sum divergent series and get a reasonable value.
Again, none shall banish us from the paradise that Euler hath wrought).
That's kinda what I mean actually. Entanglement works in the math, but the math was made up to fit the situation. They can make math that describes what they are seeing, but that's what they are
doing. Making up math. Why it works, what's actually happening? Nobody has a clue. Newtonian physics is consistent and explicable in the large scale. "This is what happens, here's the math and here's
what's actually happening". Yeah, Newtonian has had time to be sorted out, but that's kinda the point that barawn and I are making. QM is really new (relatively speaking). Yeah, it works, but only
because they made the math to fit. Why it works? Who knows. That's why we are wary of theorists. Until someone comes along and makes all the specifics fit together, it's anyone's guess as to what's
actually going on.
Kinda like what Wanda's going through. There's a theory saying "Fate exists and this is yours". But what's actually going on or going to happen? Nobody knows. Delphie made her best guess but her math
didn't fit. Now Wanda's dealing with a big mess.
...okay, that didn't work as well as I'd hoped.
"All warfare is based on deception" - Sun Tzu, Chapter 1, Line 18, The Art of War
"The principle of strategy is to know ten thousand things by having one thing." - Miyamoto Musashi, The Book of Earth, Go Rin No Sho
Re: Inner Peace (Through Superior Firepower) – Episode 024
Amado wrote:Baaaaaaah!
Can we move the physics discussion to another thread, please? As far as I can tell, the relevancy-connection of QM and GR to Wanda's Fate and/or raiment is, at this point, tenuous at best.
Rob should release a "new one" to put this thread out of my misery.
Meh; update threads are generally dead at this point anyway, and this is a substantial and interesting discussion. I'd say that by day three, hijacking these things is fair game.
I mean, did you have anything left to say about fate and raiment regarding this update? 'Cause I didn't.
Re: Inner Peace (Through Superior Firepower) – Episode 024
On the Wanda as a PC thing, the Predictions going Cloudy at Faq's fall could be explained by Wanda having a choice between the Gobwin Knob route, the Magick Kingdom route, the Goodminton's rebirth
route, the Teleport herself to another World with a gem formed from Faq's entire treasury route...
On Wanda's price, if Clay was right with his thing about evading luckmancy backlash by affecting roles Goodminton rarely makes, it seems likely that Wanda's pop debt would have to be paid in one of
two ways, by the popping of sub par Units around her (why hello there Stanley) and/or by Sides expending resources to claim her as their own. If Olive had succeeded acquiring a Slave-Wanda, whatever
price she paid would have cleared some of Wanda's unpaid bill. The failed attempt by the soon to be Un-Larry may have counted. The butcher's bill Stanley paid to capture her at Faq almost certainly counted.
On Wanda and suicide, given her 'life is pain, Fate the only painkiller' attitude in the Future Era, I wouldn't be surprised if she does try it at some point.
As for Fate, if Wanda were intended, by Fate, to fall into Olive's hands, why was Larry defeated in his capture attempt? More likely that the good ship OlivexWanda was intended by Fate to help Wanda
deal with the horrible life that's heading her way, too bad Delphie torpedoed it.
A few random thoughts: I see Wanda's been converted to the way of Chocolate despite her unsureness. Hmmmm... Ditch Witch sounds like a term for Dirtamancer. Do I spy a flower on Wanda's side?
Mrtyuh wrote:While I agree Delphie shares responsibility for the dire circumstances in which Goodminton finds itself, I don't think all the blame can be laid on her. Delphie was mistaken when she
lied, since it ruined her credibility. Delphie was mistaken to inform Tommy about the details of the Prediction. Delphie was mistaken in her fatalism towards Overlord Firebaugh. She has made poor
choices. Wanda was mistaken in her resolute determination to fight Fate, instead of working with it to reach a better outcome. Tommy was mistaken informing Larry of the Prediction, which probably
led to Haffaton deciding not to continue diplomacy. Overlord Firebaugh was mistaken when he chose inaction. There were many mistakes made by many people which compounded on each other. Delphie
did not have all the answers. She should have admitted that from the start. She did, however, act in her side's best interests, with what information she did have, to the best of her ability. The
fact that she failed just makes her human. She failed, and she should have gone about it differently, but I can't fault her for trying.
This is what we know. It has been five turns since the air battle. We haven't heard anything about Goodfinger or Goodminton's other city, although, at the time of the air battle, Goodfinger was
likely to fall soon, and Overlord Firebaugh was considering razing it. There has been no action in the capital. Goodminton has sought no action in the field. There are still captured units from
the air battle that haven't turned. Wanda has not come up with any brilliant new ideas. Wanda ordered Clay to boost three scouts. Overlord Firebaugh gave Wanda a boon after the air battle. Wanda
told her father she would be the field commander Goodminton needed. Fritz is practical but not imaginative, and he feels it is not his place to make strategic decisions. So, what are Wanda's duties?
Mind you, this is all speculation. First, it may be Wanda's duty to interrogate the prisoners and to attempt to turn them. We know Wanda had a talent for interrogation and torture at Gobwin Knob.
This may be where she gets her start. It may also be where she discovers her sadism kink. We know Wanda is responsible for prisoners at Gobwin Knob. We also know Vanna is responsible for
prisoners at Faq. Now, it can be argued these are special cases, Wanda's personal interest in Jillian and Vanna being a Turnamancer, but it is also plausible that prisoners falling under the auspices of casters is the norm in Erfworld. The second possibility is that Wanda has become the de facto Chief Warlord of Goodminton. Her father may still hope she will be the commander that
will save their side. It was her brilliant plan that saved the capital. Unfortunately, she can't come up with another. Instead of trying to wrestle the initiative from their enemies, which is
really their only hope, they are surrendering it. Patton once said that a good plan today is better than a perfect plan next week. Wanda doesn't realize this, so she is doing nothing while she
tries to come up with a brilliant plan. Goodminton is scouting, though, so they are trying to keep tabs on enemy movements. Just because Goodminton has sought no engagements in the field, it does
not mean there have not been any. Goodminton's enemies are certainly moving, even if Goodminton is not. It may be that Wanda is looking for a weak, isolated enemy force she feels she can
overwhelm. Goodminton seems to expect another air attack, so they are just sitting there waiting for it. So, Wanda may need to meet with Fritz and Overlord Firebaugh to discuss strategy, even if
she hasn't come up with any good ideas yet.
Gobwin Knob paid her upkeep for hundreds of turns. They gave her free rein to do what she wanted, indulge in her hobbies. While Wanda gave them Faq's three cities, which probably gave their
treasury a nice boost, it wasn't her intention to do so. She may have been behind the death of Saline IV. She certainly inflated Stanley's ego, encouraging him to attack his neighbors and claim a
Titanic Mandate. She was an enabler to Stanley. She was also in control of the relationship with him. She dictated the terms. Any position in which she found herself, she put herself there. Wanda
was probably more responsible for the dire situation Gobwin Knob found itself in than Delphie is for the one Goodminton finds itself in. Anything Wanda has done for Gobwin Knob has been solely in
her own interest. Any benefit Gobwin Knob has reaped is wholely an unintended consequence. Of course, I'm splitting hairs here. My intention was to point out how Wanda once seemed irate over the
idea of someone being loyal to Fate, when that is where her own loyalty will eventually lie.
I also find the difference between the Chief Predictamancer of Goodminton and the Chief Croakamancer of Goodminton interesting. Delphie was manipulative, haughty and autocratic, but she was never
thoughtless or cruel. While Wanda is not yet manipulative, we know she will be by the time she gets to Gobwin Knob. Currently, all the other adjectives describe her. She is haughty in her new
raiment, enjoying the power it gives her. She is autocratic, forcing Clay and Delphie to change quarters and dictating most of their actions. She is thoughtless and cruel. She only cares about
herself, her father and her side. She cared about her brother. She gives no thought to what others want. She tramples all over the feelings of others in her pursuit of her own desires. She does
not care how much she hurts Delphie; she's upset because things don't work the way she wants them to. Her orders may end up killing Delphie and Clay. She has forbidden them from entering the
Magic Kingdom without orders. If they find themselves in a situation where they need to flee or perish, they'll perish because they can't flee. While I didn't like Lady Temple as Chief Caster, I
think Lady Firebaugh is much worse. Maybe the job just brings out the worst in her. Maybe she will be more sympathetic in service of Haffaton and Faq. Anyway, enough pointless musing..
Everything that happened with regards to Olive, happened as a result of her lies and her plotting. So yes, the blame is entirely hers. Grand failure and arch treachery rolled into one package. The
Ditch Witch's hubris and indeed, her bullying are what led to Wanda's reaction, her bleatings sent Atomic to his end. Her plan alerted the wolves to tasty meat, Goodminton is more of a target than it
was before, thanks to her.
I believe it's six turns at this point, as a new turn started just before she left the Magic Kingdom.
We know nothing of the sort. She's beaten a girl who likes it, anything more than that is speculation on your part. And while the thought of having Olive in her dungeon made her feel funny... she
hasn't given the idea of torture a thought on camera. Never mind that she'd have mentioned it when thinking of the value of those prisoners if she were happily spanking them. Remember what was said of
her dreams? I wouldn't be surprised at all if her first experiences with such matters occur when being cruelly treated at Faq, either as a direct response to her capture or when serving as Jillian's
whipping girl. As for Warlord Wanda, it seems likely she's involved in whatever long (or short) term planning they're making based on what she said to Clay. Recollect, they lost their Siege group
while withdrawing from Kiloton, until they replace it, they can't strike back. Worse, they urgently need to raze higher level Cities and the only Haffaton Cities reachable by Road are dinky ones. But
I'm really concerned with the uses for her juices. She has two underlings to spell up the Tower for her and a desire to learn new magics, so... what experiments could she be performing? She managed
to make a Hat Golem, first try, in the field, with the improved Tower bonus... A randomish thought: Flesh Golems, Croakamancy or Weirdomancy?
What of it? The destruction of Faq did more for Gobwin Knob at a stroke than most of its Commanders could ever hope to accomplish. Her Upkeep was repaid in advance to the tune of hundreds, if not
thousands of turns thanks to that. Her motivations during that incident don't matter, money's money. Her Uncroaked made up between a third to a half of their forces by the time the RCC reached Gobwin
Knob. A Warlord she summoned saved the Side from the RCC. The Volcano wealth would have been impossible without her. If she did make Stanley Overlord, that just puts him further in her debt. And the
RCC's attack was an utterly unprovoked act of aggression courtesy of Jetstone, no fault of hers. That Side owes her, not the other way around. Besides, she's loyal to the Arkenhammer to all intents
and purposes, rather like the Dwagons.
Delphie is a bullying, bottom grabbing malcontent who forced Wanda to live in the Dungeon and leaked Side Secrets to the enemy. That is high treason in time of war. Tommy died in a trap Delphie
helped set. With the Tower's bonus having been improved the way it has, there is no longer any excuse to skulk in the best position to run away from and every reason to be in the Tower close to the
Casting Chamber (so yes, that's a charge of cowardice against Delphie as well). What, apart from her family and Side, is there for Wanda to care for? She can't do her job, that of Chief Caster,
without exercising control over her Side's magical resources. The position may be a joke at Gobwin Knob, but when in a Side where it means responsibility... And after all that leaked information, how
can she trust her underlings not to start blabbing to the enemy if allowed into the Magick Kingdom?
Re: Inner Peace (Through Superior Firepower) – Episode 024
Housellama wrote:As far as inconsistencies and contradictions in the theory? 5 words. "Spooky action at a distance". Quantum entanglement can't be explained within QM.
The only action at a distance in QM is if you think wavefunction collapse is a first-order ontologically real thing, rather than an emergent phenomenon from the action of decoherence.
In other words, that's not QM, it's Copenhagen sticking its ugly, pustulent face in and slobbering all over the carpets.
Housellama wrote:It's inconsistent with the theory of locality, but it works. So they just kinda wrote in an exception going "Booped if we know why, but this happens."
Nope. Not how it works. QM was designed without that in mind and it came out naturally.
Re: Inner Peace (Through Superior Firepower) – Episode 024
drachefly wrote:In other words, that's not QM, it's Copenhagen sticking its ugly, pustulent face in and slobbering all over the carpets.
Hah. Fair enough. I'm a layman and have absolutely no grasp on the actual math. Those two blend together sometimes for me.
drachefly wrote:
Housellama wrote:It's inconsistent with the theory of locality, but it works. So they just kinda wrote in an exception going "Booped if we know why, but this happens."
Nope. Not how it works. QM was designed without that in mind and it came out naturally.
Again, fair enough. Like I said I don't do the math. Thank you for expanding my knowledge. That's why I love this place. One of the best forum environments EVAR.
"All warfare is based on deception" - Sun Tzu, Chapter 1, Line 18, The Art of War
"The principle of strategy is to know ten thousand things by having one thing." - Miyamoto Musashi, The Book of Earth, Go Rin No Sho
Re: Inner Peace (Through Superior Firepower) – Episode 024
Speaking as an Engineer (my degree is in applied physics, and some universities put the Applied Physics department in the Engineering department, since Engineers merely apply the physics developed by the scientists) whose best marks were in Calculus... Housellama, you have no clue what you're talking about with regard to Quantum Mechanics.
They aren't developing new Mathematics. They are developing new perspectives on how to view reality that start the mathematics on different beliefs that will mathematically develop into the current
known equations. It's not new Math. It's just plain old calculus, applied to new starting positions.
I'm going to remind you why Calculus was invented. It was done by the greatest Scientist of his age -- Sir Isaac Newton. He developed it in order to describe the paths of objects in orbit. Calculus
was not invented and then applied to Physics... Calculus was developed by Physics and handed to mathematicians. Physics and Calculus are the same thing, and have been for over 200 years. All Physics
is described by equations derived from other equations, and without those Mathematical equations, Physics can solve nothing.
In short, Newton knew the classic geometrical descriptions of how planets moved, but developed calculus in order to develop a general explanation for all planetary motion. Calculus gave him the
equations that described the general theory of gravity. That's what Physics does -- it develops higher order equations to describe the universe, that gives us access to new knowledge and effects we
may not have observed without knowing we could find them.
If you need an example of how new physics is used to describe old, we can look at the derivation of E = 1/2*m*v^2 (the kinetic energy of a moving body) from the principles of relativity developed by Einstein. I won't bore you with the development from first principles; jumping straight to the result, the real equation is E = (gamma - 1)*m*c^2, where gamma = 1/sqrt(1 - v^2/c^2). Now, for our little Earth, everything moves much, much slower than c (the speed of light). For small v/c, gamma is approximately 1 + v^2/(2*c^2), so E ≈ m*c^2 * v^2/(2*c^2) = 1/2*m*v^2. Now, it's not exact, but since v for a car is about a millionth of the speed of light, the error is so small we could never detect it, so the Newtonian equation describes the observations admirably. See how we developed a known Newtonian equation
from a complex Einsteinian equation? THAT is what Physicists do. They try to find the equations that describe the universe at a more fundamental level that can derive the equations that we currently
know. They do this in all branches of Physics, but somehow you're singling out QM as different. It is not. So your following statement:
Housellama wrote:Entanglement works in the math, but the math was made up to fit the situation.
Is completely absurd.
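As a quick numerical sanity check of that low-speed limit, here is a pure-Python sketch comparing the exact relativistic kinetic energy, (gamma - 1)*m*c^2, with the Newtonian 1/2*m*v^2 (the masses and speeds are made-up example values):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def ke_relativistic(m, v):
    # (gamma - 1) * m * c^2, rewritten via gamma - 1 = beta^2 / (s * (1 + s))
    # with s = sqrt(1 - beta^2), to avoid cancellation at everyday speeds
    beta2 = (v / C) ** 2
    s = math.sqrt(1.0 - beta2)
    return m * C**2 * beta2 / (s * (1.0 + s))

def ke_newtonian(m, v):
    return 0.5 * m * v**2

# A 1000 kg car at 30 m/s: the two formulas agree to ~14 decimal places
car = ke_relativistic(1000.0, 30.0) / ke_newtonian(1000.0, 30.0)

# At half the speed of light, the relativistic value is roughly 24% higher
fast = ke_relativistic(1.0, 0.5 * C) / ke_newtonian(1.0, 0.5 * C)

print(car, fast)
```

The algebraic rewrite of gamma - 1 is only there so the tiny difference survives floating-point arithmetic.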
You can say that about all of Physics, if that's what you want to say. QM is NOT treated any differently than other Physics, much as the conspiracy theorists want them to be. Since you can reject all
of Physics with that statement, you actually accept no Physics as valid, because the same methods applied to QM are applied to all of Physics. Either you accept it all, or none. You cannot pick and
choose some Physics as "made up" and the rest as "convenient to my purposes, so I'll accept it." E=1/2*M*v*v is just as made up to fit the situation, since we have derived it from Relativity. Quantum
effects are observable and testable, and therefore equations can be developed to explain their effects, in exactly the same way observing a car moving is observable. You're playing favorites --
finding fault in QM for what is done in every other branch of physics, and it's your lack of knowledge of general physics (and the willful ignorance of your sources) that give you that impression. QM
is not different.
I have no problem with you saying what you are, but you need to say it about every single branch of Physics, or you're applying a double standard. If you want to reject physics in general for
describing observed effects with equations, go right ahead. But you have to do it for Physics in general, because that's what all of Physics is. Observe, Theorize, Derive, Test, rinse and repeat.
You're faulting QM for the Derivation step, when all Physics uses the same methodology.
Housellama wrote:Quantum entanglement can't be explained within QM. It's inconsistent with the theory of locality, but it works. So they just kinda wrote in an exception going "Booped if we know
why, but this happens."
False. Entanglement was derived by Einstein, Podolsky, and Rosen 70 years before it could be tested, in 1935. The way you state this is that it was detected and then added in as an extension.
Completely false. EPR had noticed that there was a loophole in QM, which suggested entanglement, and were trying to use this to discredit QM (Einstein did not like the uncertainty of QM). It was
forgotten for decades as QM developed apace. Complex systems can have inconsistencies in their description, but if you don't have anything else to explain what's happening, you don't abandon that
theory just because you don't have one that does the job better. You develop the one you have, and hope the inconsistency resolves itself. It's kind of like saying, "We're only providing water to 90%
of the city. We need to abandon the city because the water system is inadequate!" Nevermind that there's no water at all in the desert outside the walls. Of course, when we could finally detect
entanglement, we tested for it... and found EPR had actually found a quantum effect. The loophole they wanted to use to discredit QM was actually a real quantum effect, and no loophole at all.
Housellama wrote:I'm not a physicist or math guy, but I know a bit more about QM than the average layperson.
In other words, you're full of horse cookies. Sit down, do some basic research, and learn the history of what you're embarrassing yourself about. I am sick of useless pseudo-scientists like you that
get sucked into the Conspiracy bunk-science books written by self-proclaimed geniuses and sold over the Internet. I've heard these theories before. They're Bogus -- 100% flat out pure horse cookies.
Even cursory examination of the original history reveals massive flaws in their presentation of science and its historical development. THEY LIE TO YOU TO SELL BOOKS AND FEED THEIR EGOS. And you
didn't even do the basic study to check their sources. Did they give you sources to check? Or did they sound so good, so you believed them?
What you "know" is lies. All of it. The history they tell you is a lie. Their claims about what Physics and Calculus is are lies.
housellama wrote:They can model the behavior that they see and use those models to accurately predict what happens, but as to the why? They don't know.
Which is true of all of physics. We don't concern ourselves with "why". We may never know "why" -- there may be no "why". We learn HOW and leave the "why" to religion. We know how the sun comes up
every day -- it's the pull of all atoms on each other which we call gravity. We don't know how that force is transmitted, yet. We also see a magnet pull on an iron nail. We call this pull magnetism.
We do know how -- electromagnetic photons, themselves subatomic particles, transmit force to the magnetically receptive iron atoms. Why does that subatomic particle pull on the iron atom?
Doesn't matter. It does and that is useful to us. Pondering "why" is for philosophers and fools. Figuring out how the sub-atomic particle causes a force of attraction is all that matters. "Why"
implies intelligence, since it implies purpose. The universe, to a scientist, does not need to have a purpose to exist and be observable and explainable.
http://www.erfworld.com/wiki/index.php/TBFGK_1 Here you can find all comic pages written as text for convenient quoting.
http://www.erfworld.com/wiki/index.php/Erfworld_Mechanics The starting page for accessing all known Erfworld "rules".
Re: Inner Peace (Through Superior Firepower) – Episode 024
"Entanglement was derived by Einstein, Podolsky, and Rosen 70 years before it could be tested, in 1935."
The Aspect experiment was carried out in 1981, so that would be only 46 years.
Also, I wouldn't call it a 'loophole', but rather a 'major yet previously unexplored feature'.
Re: Inner Peace (Through Superior Firepower) – Episode 024
Angry Kreistor is angry.
Re: Inner Peace (Through Superior Firepower) – Episode 024
drachefly wrote:super-quibble: Also, I wouldn't call it a 'loophole', but rather a 'major yet previously unexplored feature'.
EPR viewed it as a loophole that invalidated QM. You'll note I corrected the view later by saying it wasn't a "loophole at all".
Aspect's experiments couldn't prove Entanglement. Valiant effort, but limited by the technology of the time.
Re: Inner Peace (Through Superior Firepower) – Episode 024
A physical theory that uses local hidden variables to spoof quantum entanglement in combination with detector systems is optimized for spoofing quantum entanglement in combination with detector
systems. It was grasping at straws even then.
Re: Inner Peace (Through Superior Firepower) – Episode 024
drachefly wrote:A physical theory that uses local hidden variables to spoof quantum entanglement in combination with detector systems is optimized for spoofing quantum entanglement in combination
with detector systems. It was grasping at straws even then.
Noooo, that's not how it was developed. Entanglement was postulated as the result of a thought experiment known as the EPR paradox (notably involving individual photons entering a prism angled such that some photons reflect and others refract), not "hidden variables". Mathematically including the theory inside QM may require complex mathematics, but that is only an effort to include the theory in the current model.
Re: Inner Peace (Through Superior Firepower) – Episode 024
Kreistor wrote:Speaking as an Engineer (my degree is in applied physics, and some universities put the Applied Physics department in the Engineering department, since Engineers merely apply the physics developed by the scientists) whose best marks were in Calculus... Housellama, you have no clue what you're talking about with regard to Quantum Mechanics.
Comparatively speaking, you're absolutely right. I know a little, but not much. More than the average person, but nowhere close to a professional. Then again, I never claimed to be a professional.
I'm outclassed here and I freely admit it. Having read your post and done some research, you're right. I knew about the EPR paradox, but since my knowledge is a patchwork of information from
different sources (some reliable, some questionable) I didn't make the connection. Sometimes I don't know what I know. Another problem is that I don't have a lot of the right language to communicate
what's in my head, for reasons I go into below.
Kreistor wrote:They aren't developing new Mathematics. They are developing new perspectives on how to view reality that start the mathematics on different beliefs that will mathematically develop
into the current known equations. It's not new Math. It's just plain old calculus, applied to new starting positions.
[History lesson clipped]
THAT is what Physicists do. They try to find the equations that describe the universe at a more fundamental level that can derive the equations that we currently know.
That's actually what I was trying (badly) to say. I'm not a math person. It's one of my blind spots. It took me 5 tries to pass Calc 1. It's not that I don't like it, it's just that I can't do it.
Calc doesn't fit in my brain, for whatever reason. It's actually a little frustrating to me because I wanted to be an engineer. Can't be an engineer if you can't do higher level math.
When I said "new math", what I was trying to say was new equations. I do know what physicists do. I (sorta) understand the behavior (what things do) but I couldn't do any math to describe it nor
could I explain the exact mechanics of it (because that takes math).
Kreistor wrote:They do this in all branches of Physics, but somehow you're singling out QM as different. It is not.
The process is the same, absolutely. The way math describes the world is the same in all physics. But there is something fundamentally different about QM, something that makes it distinct from
everything else: uncertainty. More later.
Kreistor wrote:So your following statement:
Housellama wrote:Entanglement works in the math, but the math was made up to fit the situation.
Is completely absurd.
Granted. My history was wrong and I got that screwed up.
Kreistor wrote:
Housellama wrote:I'm not a physicist or math guy, but I know a bit more about QM than the average layperson.
In other words, you're full of horse cookies. Sit down, do some basic research, and learn the history of what you're embarrassing yourself about. I am sick of useless pseudo-scientists like you
that get sucked into the Conspiracy bunk-science books written by self-proclaimed geniuses and sold over the Internet. I've heard these theories before. They're Bogus -- 100% flat out pure horse
cookies. Even cursory examination of the original history reveals massive flaws in their presentation of science and its historical development. THEY LIE TO YOU TO SELL BOOKS AND FEED THEIR EGOS.
And you didn't even do the basic study to check their sources. Did they give you sources to check? Or did they sound so good, so you believed them?
What you "know" is lies. All of it. The history they tell you is a lie. Their claims about what Physics and Calculus is are lies.
Woah... slow down there. I got some history wrong. Fair cop. But I'm not one of those that jumps on the bandwagons. I'm in the sciences myself, so I know something about self promoting crackpots. I
also know enough to tell the difference between generally accepted science and individuals pushing their pet theories. My problems earlier came from ignorance and miscommunication, not bad knowledge.
I'll freely admit to ignorance. Ignorance can be cured. I probably do have some bad knowledge somewhere in this patchwork of stuff in my head, but I know the difference between reputable and questionable sources.
I can understand and appreciate your vehemence about your chosen field. I get pissed off when I see lunatics out there selling bad psychology. And don't get me started on a lot of 'alternative'
therapists. You see me as one of those spreading bad knowledge, and this time you are right. But it was an honest mistake, not an attempt to spread bad knowledge. I own this bit of misinformation and
apologize for it.
Kreistor wrote:
housellama wrote:They can model the behavior that they see and use those models to accurately predict what happens, but as to the why? They don't know.
Which is true of all of physics. We don't concern ourselves with "why". We may never know "why" -- there may be no "why". We learn HOW and leave the "why" to religion. We know how the sun comes
up every day -- it's the pull of all atoms on each other which we call gravity. We don't know how that force is transmitted, yet. We also see a magnet pull on an iron nail. We call this pull
magnetism. We do know how -- electromagnetic photons, themselves subatomic particles, transmit force to the magnetically receptive iron atoms. Why does that subatomic particle pull on the iron atom?
Don't know. Doesn't matter. It does and that is useful to us. Pondering "why" is for philosophers and fools. Figuring out how the sub-atomic particle causes a force of attraction is all that
matters. "Why" implies intelligence, since it implies purpose. The universe, to a scientist, does not need to have a purpose to exist and be observable and explainable.
This is another case of me not communicating well. When I said why, I didn't mean in a philosophical sense, I meant it in a causation sense. In your example, the nail moves because the forces were
carried through the subatomic particles. That's a causation. But there are a lot of places in QM where causation has a big blank spot. They have the equations and they can make predictions about, for
example, waveform collapse, but the causation behind waveform collapse is still unknown. Yeah, it doesn't matter what causes it to collapse for the purposes of making predictions in the current
equations, but you can't tell me that there aren't physicists searching for the answer to that one.
This leads back to why I think QM is different. If you told scientists that you couldn't predict where a particular electron is going to be, they would think you were out of your head without the
math to back it up. And many people did. Einstein's "spooky action at a distance", for example. But QM worked, so crazy as it is, it was kept.
Perhaps I'm putting the 'blame' in the wrong place here. It would be more accurate to say that Nature is weird and therefore QM is weird. It accurately models what is there, but what is there is
freaking bizarre. Yeah, the basics of QM are scientifically accepted these days, but when you look at it from an outside point of view, it's pretty booped up. But that's Mother Nature's fault. You
guys are just going where the equations take you.
Re: Inner Peace (Through Superior Firepower) – Episode 024
Kreistor wrote:
drachefly wrote:A physical theory that uses local hidden variables to spoof quantum entanglement in combination with detector systems is optimized for spoofing quantum entanglement in
combination with detector systems. It was grasping at straws even then.
Noooo, that's not how it was developed. Entanglement was postulated as the result of a thought experiment known as the EPR paradox (notably involving individual photons entering a prism angled
such that some photons reflect and others reffract), not "hidden variables".
You misunderstand what I'm saying. I'm saying that any alternative explanation for the Aspect experiment besides QM has to be purpose-built to emulate QM yet not be it.
QM was not mathematically proven by the experiment. It was just proven beyond a reasonable doubt.
Housellama wrote:They have the equations and they can make predictions about, for example, waveform collapse, but the causation behind waveform collapse is still unknown. Yeah, it doesn't matter
what causes it to collapse for the purposes of making predictions in the current equations, but you can't tell me that there aren't physicists searching for the answer to that one.
Actually, they figured it out a while back. It's called decoherence, and though the line of reasoning is long, it is intuitively comprehensible. Yes, intuitively. It makes so much sense that it
undoes a lot of the roadblocks to comprehension that spawned such quotes as 'no one really understands quantum mechanics'. Now that's still true in a lot of senses - it's a really complicated theory
with a lot of unexplored consequences - but understanding how the basics fit together, which is what those quotes were talking about... that is now widely understood.
As for uncertainty, well, that's a minefield. 'Uncertainty' is a really lousy name for the referent. That reminds me - for a non-mathematical yet very accurate description of the behavior of quantum
mechanics, I recommend the Quantum Physics Sequence by Eliezer Yudkowsky. He may be a philosopher/computer scientist/Harry Potter fanfic author, and not a physicist, but I vouch for it. He could wear
Gilgamesh Wulfenbach's hat and the only inaccurate part would be the name. | {"url":"http://www.erfworld.com/forum/viewtopic.php?p=72617","timestamp":"2014-04-16T23:16:47Z","content_type":null,"content_length":"99992","record_id":"<urn:uuid:1d140ca5-83be-4bf1-81c2-e9ef353e1f93>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00280-ip-10-147-4-33.ec2.internal.warc.gz"} |
Searching in High-Dimensional Spaces – Index Structures for Improving the Performance of Multimedia Databases
- ACM Computing Surveys, 2008
Cited by 270 (8 self)
We have witnessed great interest and a wealth of promise in content-based image retrieval as an emerging technology. While the last decade laid foundation to such promise, it also paved the way for a
large number of new techniques and systems, got many new people involved, and triggered stronger association of weakly related fields. In this article, we survey almost 300 key theoretical and
empirical contributions in the current decade related to image retrieval and automatic image annotation, and in the process discuss the spawning of related subfields. We also discuss significant
challenges involved in the adaptation of existing image retrieval techniques to build systems that can be useful in the real world. In retrospect of what has been achieved so far, we also conjecture
what the future may hold for image retrieval research.
- ACM Transactions on Database Systems, 2003
Cited by 133 (6 self)
Similarity search is a very important operation in multimedia databases and other database applications involving complex objects, and involves finding objects in a data set S similar to a query
object q, based on some similarity measure. In this article, we focus on methods for similarity search that make the general assumption that similarity is represented with a distance metric d.
Existing methods for handling similarity search in this setting typically fall into one of two classes. The first directly indexes the objects based on distances (distance-based indexing), while the
second is based on mapping to a vector space (mapping-based approach). The main part of this article is dedicated to a survey of distance-based indexing methods, but we also briefly outline how
search occurs in mapping-based methods. We also present a general framework for performing search based on distances, and present algorithms for common types of queries that operate on an arbitrary
“search hierarchy.” These algorithms can be applied to each of the methods presented, provided a suitable search hierarchy is defined.
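As a toy illustration of the "distance as a black box" framework described above, here is a minimal pivot-based range search in pure Python (the class name, pivot choice, and data are invented for the example); it prunes candidates using the triangle-inequality lower bound |d(q, p) - d(p, x)| <= d(q, x):

```python
import math

class PivotIndex:
    """Toy distance-based index: precompute distances from one pivot,
    then use the triangle inequality to skip most distance computations."""

    def __init__(self, points, pivot, dist=math.dist):
        self.points = points
        self.pivot = pivot
        self.dist = dist
        self.pdists = [dist(pivot, x) for x in points]  # precomputed once

    def range_search(self, q, r):
        dq = self.dist(q, self.pivot)
        hits = []
        for x, dp in zip(self.points, self.pdists):
            if abs(dq - dp) > r:        # cheap pruning: no distance call needed
                continue
            if self.dist(q, x) <= r:    # verify survivors with a real distance
                hits.append(x)
        return hits

pts = [(0, 0), (1, 0), (5, 5), (0.5, 0.5)]
idx = PivotIndex(pts, pivot=(0, 0))
print(idx.range_search((0, 0.4), 1.0))  # [(0, 0), (0.5, 0.5)]
```

With a well-chosen pivot, most candidates are rejected by the cheap precomputed bound, so the expensive black-box distance function is called far less often.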
- ACM Transactions on Graphics, 2004
Cited by 111 (2 self)
Large motion data sets often contain many variants of the same kind of motion, but without appropriate tools it is difficult to fully exploit this fact. This paper provides automated methods for
identifying logically similar motions in a data set and using them to build a continuous and intuitively parameterized space of motions. To find logically similar motions that are numerically
dissimilar, our search method employs a novel distance metric to find “close” motions and then uses them as intermediaries to find more distant motions. Search queries are answered at interactive
speeds through a precomputation that compactly represents all possibly similar motion segments. Once a set of related motions has been extracted, we automatically register them and apply blending
techniques to create a continuous space of motions. Given a function that defines relevant motion parameters, we present a method for extracting motions from this space that accurately possess new
parameters requested by the user. Our algorithm extends previous work by explicitly constraining blend weights to reasonable values and having a run-time cost that is nearly independent of the number
of example motions. We present experimental results on a test data set of 37,000 frames, or about ten minutes of motion sampled at 60 Hz.
- In Nearest-Neighbor Methods for Learning and Vision: Theory and Practice, 2006
Cited by 87 (0 self)
Given a set S of n sites (points), and a distance measure d, the nearest neighbor searching problem is to build a data structure so that given a query point q, the site nearest to q can be found
quickly. This paper gives a data structure for this problem; the data structure is built using the distance function as a “black box”. The structure is able to speed up nearest neighbor searching in
a variety of settings, for example: points in low-dimensional or structured Euclidean space, strings under Hamming and edit distance, and bit vector data from an OCR application. The data structures
are observed to need linear space, with a modest constant factor. The preprocessing time needed per site is observed to match the query time. The data structure can be viewed as an application of a
“kd-tree” approach in the metric space setting, using Voronoi regions of a subset in place of axis-aligned boxes.
- In VLDB, 2005
Cited by 73 (12 self)
This paper addresses the efficient processing of top-k queries in wide-area distributed data repositories where the index lists for the attribute values (or text terms) of a query are distributed
across a number of data peers and the computational costs include network latency, bandwidth consumption, and local peer work. We present KLEE, a novel algorithmic framework for distributed top-k
queries, designed for high performance and flexibility. KLEE makes a strong case for approximate top-k algorithms over widely distributed data sources. It shows how great gains in efficiency can be
enjoyed at low result-quality penalties. Further, KLEE affords the query-initiating peer the flexibility to trade-off result quality and expected performance and to trade-off the number of
communication phases engaged during query execution versus network bandwidth performance. We have implemented KLEE and related algorithms and conducted a comprehensive performance evaluation. Our
evaluation employed real-world and synthetic large, web-data collections, and query benchmarks. Our experimental results show that KLEE can achieve major performance gains in terms of network
bandwidth, query response times, and much lighter peer loads, all with small errors in result precision and other result-quality measures.
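For orientation, the centralized starting point that distributed top-k engines like this refine is Fagin-style threshold pruning: scan each score list in sorted order, resolve the full score of every newly seen object, and stop once the best complete score beats the threshold formed by the scores at the current scan depth. A toy Java sketch of that generic idea (k = 1, two lists) — this is not KLEE's protocol, and all names are illustrative:

```java
import java.util.*;

// Fagin-style threshold algorithm (TA) over two score lists, k = 1.
// Generic top-k pruning shown for orientation only -- not the KLEE protocol.
class TopKSketch {
    static String topOne(Map<String, Double> listA, Map<String, Double> listB) {
        List<Map.Entry<String, Double>> a = sortedDesc(listA), b = sortedDesc(listB);
        String best = null;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (int depth = 0; depth < Math.max(a.size(), b.size()); depth++) {
            double threshold = 0;
            for (List<Map.Entry<String, Double>> list : List.of(a, b)) {
                if (depth < list.size()) {
                    Map.Entry<String, Double> e = list.get(depth);
                    threshold += e.getValue();
                    // "random access": total score of the object just seen
                    double total = listA.getOrDefault(e.getKey(), 0.0)
                                 + listB.getOrDefault(e.getKey(), 0.0);
                    if (total > bestScore) { bestScore = total; best = e.getKey(); }
                }
            }
            if (bestScore >= threshold) break; // no unseen object can do better
        }
        return best;
    }

    private static List<Map.Entry<String, Double>> sortedDesc(Map<String, Double> m) {
        List<Map.Entry<String, Double>> l = new ArrayList<>(m.entrySet());
        l.sort(Map.Entry.<String, Double>comparingByValue().reversed());
        return l;
    }

    public static void main(String[] args) {
        Map<String, Double> a = Map.of("x", 0.9, "y", 0.6, "z", 0.1);
        Map<String, Double> b = Map.of("x", 0.2, "y", 0.7, "z", 0.3);
        System.out.println(topOne(a, b)); // prints y (1.3 beats x's 1.1)
    }
}
```

In the wide-area setting of the abstract, each sorted access and random access costs network round trips, which is exactly the cost KLEE trades against result quality.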
, 2010
Cited by 71 (10 self)
This paper introduces a product quantization based approach for approximate nearest neighbor search. The idea is to decompose the space into a Cartesian product of low dimensional subspaces and to
quantize each subspace separately. A vector is represented by a short code composed of its subspace quantization indices. The Euclidean distance between two vectors can be efficiently estimated from
their codes. An asymmetric version increases precision, as it computes the approximate distance between a vector and a code. Experimental results show that our approach searches for nearest neighbors
efficiently, in particular in combination with an inverted file system. Results for SIFT and GIST image descriptors show excellent search accuracy outperforming three state-of-the-art approaches. The
scalability of our approach is validated on a dataset of two billion vectors.
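The decomposition idea in this abstract is compact enough to show directly: split each vector into m subvectors, keep a small codebook per subspace, represent a vector by its per-subspace centroid indices, and estimate distances from the code alone. A toy Java sketch with m = 2 subspaces — the codebooks here are hand-picked for illustration, whereas real systems learn them (e.g., with k-means):

```java
// Toy product quantization: 4-d vectors split into two 2-d subspaces,
// each quantized against its own tiny codebook. Codebooks are hand-picked
// for illustration; in practice they are learned (e.g., by k-means).
class PqSketch {
    // CODEBOOKS[s][c] = centroid c of subspace s (each centroid is 2-d)
    static final double[][][] CODEBOOKS = {
        { {0, 0}, {1, 1}, {2, 2} },   // subspace 0 (dims 0-1)
        { {0, 0}, {1, 0}, {0, 1} },   // subspace 1 (dims 2-3)
    };

    // Short code: the index of the nearest centroid in each subspace.
    static int[] encode(double[] v) {
        int[] code = new int[2];
        for (int s = 0; s < 2; s++) {
            double bestD = Double.MAX_VALUE;
            for (int c = 0; c < CODEBOOKS[s].length; c++) {
                double d = sq(v[2 * s] - CODEBOOKS[s][c][0]) + sq(v[2 * s + 1] - CODEBOOKS[s][c][1]);
                if (d < bestD) { bestD = d; code[s] = c; }
            }
        }
        return code;
    }

    // Asymmetric distance: exact query against a quantized database vector.
    static double asymmetricDist2(double[] query, int[] code) {
        double d = 0;
        for (int s = 0; s < 2; s++) {
            double[] cent = CODEBOOKS[s][code[s]];
            d += sq(query[2 * s] - cent[0]) + sq(query[2 * s + 1] - cent[1]);
        }
        return d;  // squared-distance estimate
    }

    static double sq(double x) { return x * x; }

    public static void main(String[] args) {
        int[] code = encode(new double[] {1.1, 0.9, 0.1, 1.2}); // -> centroids (1,1) and (0,1)
        System.out.println(asymmetricDist2(new double[] {1, 1, 0, 1}, code)); // prints 0.0
    }
}
```

The point of the construction is that per-subspace query-to-centroid distances can be tabulated once per query, after which each database code costs only m table lookups and additions.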
- ACM Computing Surveys, 2005
Cited by 66 (10 self)
The development of effective content-based multimedia search systems is an important research issue due to the growing amount of digital audio-visual information. In the case of images and video, the
growth of digital data has been observed since the introduction of 2D capture devices. A similar development is expected for 3D data as
, 2005
Cited by 44 (10 self)
Hierarchical image structures are abundant in computer vision and have been used to encode part structure, scale spaces, and a variety of multiresolution features. In this paper, we describe a
framework for indexing such representations that embeds the topological structure of a directed acyclic graph (DAG) into a low-dimensional vector space. Based on a novel spectral characterization of
a DAG, this topological signature allows us to efficiently retrieve a promising set of candidates from a database of models using a simple nearest-neighbor search. We establish the insensitivity of
the signature to minor perturbation of graph structure due to noise, occlusion, or node split/merge. To accommodate large-scale occlusion, the DAG rooted at each nonleaf node of the query "votes" for
model objects that share that "part," effectively accumulating local evidence in a model DAG's topological subspaces. We demonstrate the approach with a series of indexing experiments in the domain
of view-based 3D object recognition using shock graphs.
Cited by 38 (1 self)
In this paper, we present an efficient B+-tree based indexing method, called iDistance, for K-nearest neighbor (KNN) search in a high-dimensional metric space. iDistance partitions the data based on
a space- or data-partitioning strategy, and selects a reference point for each partition. The data points in each partition are transformed into a single dimensional value based on their similarity
with respect to the reference point. This allows the points to be indexed using a B +-tree structure and KNN search to be performed using one-dimensional range search. The choice of partition and
reference point adapt the index structure to the data distribution. We conducted extensive experiments to evaluate the iDistance technique, and report results demonstrating its effectiveness. We also
present a cost model for iDistance KNN search, which can be exploited in query optimization.
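The key mapping described above — each point becomes a single dimensional value built from its distance to its partition's reference point — can be sketched in a few lines. A TreeMap stands in for the B+-tree, and the partition stride constant is an assumption of this sketch, not a detail from the paper:

```java
import java.util.TreeMap;

// iDistance-style mapping sketch: a point's key is
//   partitionId * STRIDE + distance(point, reference[partitionId]),
// so a one-dimensional ordered index (TreeMap here, a B+-tree in the paper)
// can serve range and KNN queries. STRIDE keeps partitions from overlapping.
class IDistanceSketch {
    static final double STRIDE = 1000.0;                 // assumed separation constant
    static final double[][] REFS = { {0, 0}, {10, 10} }; // one reference point per partition

    static double key(double[] p) {
        int part = nearestRef(p);
        return part * STRIDE + dist(p, REFS[part]);
    }

    static int nearestRef(double[] p) {
        return dist(p, REFS[0]) <= dist(p, REFS[1]) ? 0 : 1;
    }

    static double dist(double[] a, double[] b) {
        return Math.hypot(a[0] - b[0], a[1] - b[1]);
    }

    public static void main(String[] args) {
        TreeMap<Double, double[]> index = new TreeMap<>();
        for (double[] p : new double[][] { {1, 0}, {0, 3}, {9, 10}, {12, 10} }) index.put(key(p), p);
        // Points near reference 0 sort into [0, STRIDE); points near reference 1 into [STRIDE, 2*STRIDE).
        System.out.println(index.firstKey()); // prints 1.0 (point (1,0), distance 1 from (0,0))
    }
}
```

A KNN query then becomes a set of one-dimensional range scans, one per partition whose annulus around the reference point can intersect the query sphere.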
- IEEE Trans. on Knowledge and Data Engineering, 2007
Cited by 33 (1 self)
Abstract—Nearest neighbor search and many other numerical data analysis tools most often rely on the use of the Euclidean distance. When data are high dimensional, however, the Euclidean distances
seem to concentrate; all distances between pairs of data elements seem to be very similar. Therefore, the relevance of the Euclidean distance has been questioned in the past, and fractional norms
(Minkowski-like norms with an exponent less than one) were introduced to fight the concentration phenomenon. This paper justifies the use of alternative distances to fight concentration by showing
that the concentration is indeed an intrinsic property of the distances and not an artifact from a finite sample. Furthermore, an estimation of the concentration as a function of the exponent of the
distance and of the distribution of the data is given. It leads to the conclusion that, contrary to what is generally admitted, fractional norms are not always less concentrated than the Euclidean
norm; a counterexample is given to prove this claim. Theoretical arguments are presented, which show that the concentration phenomenon can appear for real data that do not match the hypotheses of the
theorems, in particular, the assumption of independent and identically distributed variables. Finally, some insights about how to choose an optimal metric are given. Index Terms—Nearest neighbor
search, high-dimensional data, distance concentration, fractional distances. 1 | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=129058","timestamp":"2014-04-20T04:58:04Z","content_type":null,"content_length":"41113","record_id":"<urn:uuid:2de1a87b-5d06-4ab8-8be6-9792915df789>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00314-ip-10-147-4-33.ec2.internal.warc.gz"} |
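The fractional (p < 1) Minkowski-like distances discussed in this abstract are straightforward to compute; a Java sketch follows, together with a crude relative-contrast diagnostic of the concentration effect. The diagnostic is a common illustration of the phenomenon, not the paper's estimator, and all names are invented here:

```java
import java.util.Random;

// Minkowski-like distance with arbitrary exponent p (p < 1 gives the
// "fractional norms" of the abstract), plus a crude relative-contrast
// diagnostic for distance concentration.
class FractionalDist {
    static double minkowski(double[] a, double[] b, double p) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += Math.pow(Math.abs(a[i] - b[i]), p);
        return Math.pow(s, 1.0 / p);
    }

    // (max - min) / min over distances from the origin to random points:
    // values near 0 mean the distances have concentrated.
    static double relativeContrast(int dim, double p, Random rng) {
        double min = Double.MAX_VALUE, max = 0;
        double[] origin = new double[dim];
        for (int i = 0; i < 200; i++) {
            double[] x = new double[dim];
            for (int j = 0; j < dim; j++) x[j] = rng.nextDouble();
            double d = minkowski(origin, x, p);
            min = Math.min(min, d);
            max = Math.max(max, d);
        }
        return (max - min) / min;
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        System.out.printf("contrast, dim=2:    %.2f%n", relativeContrast(2, 2.0, rng));
        System.out.printf("contrast, dim=1000: %.2f%n", relativeContrast(1000, 2.0, rng)); // much smaller
    }
}
```

Running it shows the qualitative effect the abstract analyzes: for i.i.d. uniform data, the contrast shrinks sharply as the dimension grows, whatever the exponent.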
A Method For Creating A Human-Readable File Size
Brice McIver
August 13, 2012 7:56 am
Recently, I was working on a project in which the users needed to see a list of files available for download. While it wasn’t a specific requirement, I thought it might be helpful to have the file
size appear next to the file name. This is a common enough use case that I figured that there must be an open source library that would give a human-readable file size if I were to give it a file size.
A quick search later, I found the Apache Commons FileUtils class and the byteCountToDisplaySize method. Looking at the JavaDoc, we see that it returns a “human-readable version of the file size,
where the input represents a specific number of bytes. If the size is over 1GB, the size is returned as the number of whole GB, i.e. the size is rounded down to the nearest GB boundary. Similarly for
the 1MB and 1KB boundaries.”
This seemed to be what I was looking for, and for this particular use case it works fine. However, it could be misleading if you needed more accurate sizes. If I’m looking at a file that has a size
of 1.99 MB, it would display as 1 MB. Even worse, a 1.99 GB file would display as only being 1 GB in size. This is even pointed out in a JIRA ticket attached to the JavaDoc.
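The truncation is just integer division, which a two-line check makes visible (the byte count below is illustrative):

```java
// Why a 1.99 GB file reports as "1 GB": integer division drops the fraction.
class TruncationDemo {
    public static void main(String[] args) {
        long oneGb = 1024L * 1024L * 1024L;
        long almostTwoGb = 2136746229L;          // about 1.99 GB
        System.out.println(almostTwoGb / oneGb); // prints 1
    }
}
```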
I decided to implement an improved version. Surprisingly, I got my inspiration from Windows Explorer. When you look at the drive size and space free in Windows Explorer (in this case, the Windows 7
version), you’ll only see the three most significant digits of the number. Here I’ll implement a version of the method, based on the original byteCountToDisplaySize, to have the same behavior.
A Look At The Original
The original class divides the byte length by multiples of a byte from high (yottabyte) to low (kilobyte). When one of those divisions equals a number above zero (since it is an integer division), it
has reached the appropriate multiple and outputs the value followed by the appropriate symbol.
For this method, I’ll follow the same pattern to determine the appropriate symbol:
/**
 * The number of bytes in a kilobyte.
 */
public static final BigInteger ONE_KB = BigInteger.valueOf(1024);

/**
 * The number of bytes in a megabyte.
 */
public static final BigInteger ONE_MB = ONE_KB.multiply(ONE_KB);

/**
 * The number of bytes in a gigabyte.
 */
public static final BigInteger ONE_GB = ONE_KB.multiply(ONE_MB);

/**
 * The number of bytes in a terabyte.
 */
public static final BigInteger ONE_TB = ONE_KB.multiply(ONE_GB);

/**
 * The number of bytes in a petabyte.
 */
public static final BigInteger ONE_PB = ONE_KB.multiply(ONE_TB);

/**
 * The number of bytes in an exabyte.
 */
public static final BigInteger ONE_EB = ONE_KB.multiply(ONE_PB);

/**
 * The number of bytes in a zettabyte.
 */
public static final BigInteger ONE_ZB = ONE_KB.multiply(ONE_EB);

/**
 * The number of bytes in a yottabyte.
 */
public static final BigInteger ONE_YB = ONE_KB.multiply(ONE_ZB);
/**
 * Returns a human-readable version of the file size, where the input
 * represents a specific number of bytes.
 *
 * @param size
 *            the number of bytes
 * @return a human-readable display value (includes units - YB, ZB, EB, PB,
 *         TB, GB, MB, KB or bytes)
 */
public static String byteCountToDisplaySize(BigInteger size) {
    String displaySize;
    if (size.divide(ONE_YB).compareTo(BigInteger.ZERO) > 0) {
        displaySize = String.valueOf(size.divide(ONE_YB)) + " YB";
    } else if (size.divide(ONE_ZB).compareTo(BigInteger.ZERO) > 0) {
        displaySize = String.valueOf(size.divide(ONE_ZB)) + " ZB";
    } else if (size.divide(ONE_EB).compareTo(BigInteger.ZERO) > 0) {
        displaySize = String.valueOf(size.divide(ONE_EB)) + " EB";
    } else if (size.divide(ONE_PB).compareTo(BigInteger.ZERO) > 0) {
        displaySize = String.valueOf(size.divide(ONE_PB)) + " PB";
    } else if (size.divide(ONE_TB).compareTo(BigInteger.ZERO) > 0) {
        displaySize = String.valueOf(size.divide(ONE_TB)) + " TB";
    } else if (size.divide(ONE_GB).compareTo(BigInteger.ZERO) > 0) {
        displaySize = String.valueOf(size.divide(ONE_GB)) + " GB";
    } else if (size.divide(ONE_MB).compareTo(BigInteger.ZERO) > 0) {
        displaySize = String.valueOf(size.divide(ONE_MB)) + " MB";
    } else if (size.divide(ONE_KB).compareTo(BigInteger.ZERO) > 0) {
        displaySize = String.valueOf(size.divide(ONE_KB)) + " KB";
    } else {
        displaySize = String.valueOf(size) + " bytes";
    }
    return displaySize;
}
That code replicates the behavior of the original byteCountToDisplaySize method. The if/else if /else block structure will remain the same for our method, but the calculation of the displaySize must
change. A new method, getThreeSigFigs, will be created for this.
Our New Calculation
private static String getThreeSigFigs(BigDecimal displaySize) {
    // Takes BigDecimal to match the calls below; going through double keeps
    // a trailing ".0" on whole numbers (so 1024 bytes renders as "1.0 KB").
    String number = String.valueOf(displaySize.doubleValue());
    StringBuffer trimmedNumber = new StringBuffer();
    int cnt = 0;
    for (char digit : number.toCharArray()) {
        if (cnt < 3) {
            trimmedNumber.append(digit);
            if (digit != '.') {
                cnt++;
            }
        }
    }
    return trimmedNumber.toString();
}
The above method will grab the first three digits and the decimal, if it occurs before the third digit, and output it as a string. Now let's plug this into the method.

The Updated Method
public static String byteCountToDisplaySize(BigInteger size) {
    String displaySize;
    BigDecimal decimalSize = new BigDecimal(size);
    if (size.divide(ONE_YB).compareTo(BigInteger.ZERO) > 0) {
        displaySize = String.valueOf(size.divide(ONE_YB)) + " YB";
    } else if (size.divide(ONE_ZB).compareTo(BigInteger.ZERO) > 0) {
        displaySize = getThreeSigFigs(decimalSize.divide(new BigDecimal(ONE_ZB))) + " ZB";
    } else if (size.divide(ONE_EB).compareTo(BigInteger.ZERO) > 0) {
        displaySize = getThreeSigFigs(decimalSize.divide(new BigDecimal(ONE_EB))) + " EB";
    } else if (size.divide(ONE_PB).compareTo(BigInteger.ZERO) > 0) {
        displaySize = getThreeSigFigs(decimalSize.divide(new BigDecimal(ONE_PB))) + " PB";
    } else if (size.divide(ONE_TB).compareTo(BigInteger.ZERO) > 0) {
        displaySize = getThreeSigFigs(decimalSize.divide(new BigDecimal(ONE_TB))) + " TB";
    } else if (size.divide(ONE_GB).compareTo(BigInteger.ZERO) > 0) {
        displaySize = getThreeSigFigs(decimalSize.divide(new BigDecimal(ONE_GB))) + " GB";
    } else if (size.divide(ONE_MB).compareTo(BigInteger.ZERO) > 0) {
        displaySize = getThreeSigFigs(decimalSize.divide(new BigDecimal(ONE_MB))) + " MB";
    } else if (size.divide(ONE_KB).compareTo(BigInteger.ZERO) > 0) {
        displaySize = getThreeSigFigs(decimalSize.divide(new BigDecimal(ONE_KB))) + " KB";
    } else {
        displaySize = String.valueOf(size) + " bytes";
    }
    return displaySize;
}
We leave the method out for two of the branches. The first branch in the extremely rare case that we have a file over 999 YB and the last branch because we will always show all of the digits for
values under one kilobyte. But how do we know this code works?
Unit Test
package com.keyholesoftware;

import java.math.BigInteger;
import java.util.Arrays;
import java.util.Collection;

import junit.framework.Assert;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class KHSFileUtilsTest {

    private Long input;
    private String output;

    public KHSFileUtilsTest(Long input, String output) {
        this.input = input;
        this.output = output;
    }

    @Parameters
    public static Collection<Object[]> generateData() {
        return Arrays.asList(new Object[][] { { 0L, "0 bytes" },
                { 27L, "27 bytes" }, { 999L, "999 bytes" }, { 1000L, "1000 bytes" },
                { 1023L, "1023 bytes" }, { 1024L, "1.0 KB" }, { 1728L, "1.68 KB" }, { 110592L, "108 KB" },
                { 7077888L, "6.75 MB" }, { 452984832L, "432 MB" }, { 28991029248L, "27.0 GB" },
                { 1855425871872L, "1.68 TB" }, { 9223372036854775807L, "8.0 EB" } });
    }

    @Test
    public void testByteCountToDisplaySizeBigInteger() {
        Assert.assertEquals(output, KHSFileUtils.byteCountToDisplaySize(BigInteger.valueOf(input)));
    }
}
I use a parameterized JUnit test here so we can easily test multiple inputs against their expected output.
I hope you’ll find this improved version of the method byteCountToDisplaySize (with surprising inspiration from Windows Explorer) useful. Please let me know if you have any questions.
– Brice McIver, asktheteam@keyholesoftware.com
3 Responses to “A Method For Creating A Human-Readable File Size”
1. Has this patch been submitted to apache? I think they would find this useful…Here’s a link on how to submit? http://commons.apache.org/patches.html
□ I haven’t submitted it yet, but that’s on my todo list for the week. Thanks for the comment!
□ Patch is now submitted to Apache. You can see it as an attachment on https://issues.apache.org/jira/browse/IO-226 | {"url":"http://keyholesoftware.com/2012/08/13/a-method-for-creating-a-human-readable-file-size/","timestamp":"2014-04-17T12:49:08Z","content_type":null,"content_length":"66450","record_id":"<urn:uuid:8a34eaa5-ff68-437f-bb26-62b0ba3a7932>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00383-ip-10-147-4-33.ec2.internal.warc.gz"} |
On the Erdős-Sós conjecture
We investigate a tantalizing problem in extremal graph theory known as the Erdős-Sós conjecture. The Erdős-Sós conjecture states that every simple graph with average degree greater than k − 2 contains every tree on k vertices as a subgraph (k is a positive integer). We prove various special cases, each of which places certain restrictions on the class of graphs or the class of trees considered. In our descriptions below, G is a simple graph (that is, no loops and no multi-edges) on n vertices that has average degree greater than k − 2, and T is any tree on k vertices.

In 1989, Sidorenko proved the conjecture holds if T has a vertex v with at least k/2 − 1 leaf neighbors. In the first manuscript we improve upon this result by proving it is sufficient to assume that T has a vertex v with at least k/2 − 2 leaf neighbors. We use this to prove the conjecture holds if the graph has minimum degree k − 4. From this result, we obtain that the conjecture holds for all k ≤ 8.

A spider of degree d is a tree that can be thought of as the union of d edge-disjoint paths that share exactly one common end-vertex. In the second manuscript, we prove that G contains every spider of degree d where d = 3 or d > 3k/4 − 2.

In the third manuscript, we prove the conjecture holds if G has at most k + 3 vertices.

In the fourth manuscript, we prove the conjecture holds if G is P_{k+4}-free, that is, if G contains no path on k + 4 vertices.
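Stated symbolically — using that average degree greater than k − 2 is the same as having more than (k − 2)n/2 edges — the conjecture discussed in the abstract reads:

```latex
% Erdős–Sós conjecture: G a simple graph on n vertices, T any tree on k vertices.
\[
  e(G) \;>\; \frac{(k-2)\,n}{2}
  \quad\Longrightarrow\quad
  G \text{ contains } T \text{ as a subgraph.}
\]
```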
Recommended Citation
Gary F Tiner, "On the Erdős-Sós conjecture" (2007). Dissertations and Master's Theses (Campus Access). Paper AAI3277009.
Re: st: Plotting a Local Polynomial Regression with CIs Accounting for Clustering
From Maarten buis <maartenbuis@yahoo.co.uk>
To statalist@hsphsun2.harvard.edu
Subject Re: st: Plotting a Local Polynomial Regression with CIs Accounting for Clustering
Date Tue, 1 Dec 2009 08:09:11 +0000 (GMT)
--- On Tue, 1/12/09, L S wrote:
> I've been playing around with the fracpoly graphs for a
> couple days now. Compared to the local polynomial
> regression lines, they do not look quite right. The
> main thing is that the picture will often depend
> fairly strongly on the number of degrees for the
> fractional polynomial.
The local polynomial suffers from a similar problem, there
you will have to choose the bandwidth, which will similarly
influence the resulting curve. One tool you can use to help
you choose the degree is the -compare- option within
> Thus, though I said I was flexible with respect to which
> form of nonparametric regression is used, I was wondering
> if there might be a way to possibly return back to local
> polynomial regression or perhaps another form of
> nonparametric regression (besides fracpoly) that will
> allow me to plot 95% CIs accounting for clustering, e.g.
> something like
You can't do it with local polynomial curves. Another option
you could look at is -mvrs-, which appeared in the Stata
Journal, so you'll have to install it by typing -findit mvrs-
and following the instructions in the resulting window. The
basic philosophy is similar to -fracpoly-, but it uses
splines instead of fractional polynomials.
Patrick Royston and Willi Sauerbrei (2007) Multivariable
modeling with cubic regression splines: A principled approach.
The Stata Journal, 7(1):45--70.
Hope this helps,
Maarten L. Buis
Institut fuer Soziologie
Universitaet Tuebingen
Wilhelmstrasse 36
72074 Tuebingen
Jamaica Beach, TX ACT Tutor
Find a Jamaica Beach, TX ACT Tutor
...I can guarantee that I will help you get an A in your course or ace that big test you're preparing for. I am a Trinity University graduate and I have over 4 years of tutoring experience. I
really enjoy it and I always receive great feedback from my clients.
38 Subjects: including ACT Math, English, writing, reading
Hi, my name is Richard. I am a graduate student at the University of Houston and a mathematics tutor at San Jacinto College. I have 3 to 4 years experience in mathematics.
16 Subjects: including ACT Math, calculus, geometry, algebra 1
...I have taken, and scored high on, many major tests, like: the ACT, SAT, GRE, and the Praxis. I have also taken the math teaching certification exams. I have prepared for all these tests, and
know how to help you prepare for your test.
20 Subjects: including ACT Math, Spanish, geometry, algebra 1
...I help students develop the ability to see computational problems from a mathematical perspective. Discrete math is normally divided into six areas: sets, functions, and relations; basic
logic; proof techniques; counting basics; graphs and trees; and discrete probability. I show students how these topics are interwoven with computer science applications.
30 Subjects: including ACT Math, calculus, physics, geometry
...I have successfully conducted training sessions in Windows 7, SharePoint, Visual Basic, C++ before. I also have experience in tutoring C++ to college students at graduate and post graduate
level. I have done a Bachelor of Engineering and also have an MBA degree in Finance.
14 Subjects: including ACT Math, writing, differential equations, algebra 1
A composite likelihood approach to the analysis of longitudinal clonal data on multitype cellular systems under an age-dependent branching process
A recurrent statistical problem in cell biology is to draw inference about cell kinetics from observations collected at discrete time points. We investigate this problem when multiple cell clones are
observed longitudinally over time. The theory of age-dependent branching processes provides an appealing framework for the quantitative analysis of such data. Likelihood inference being difficult in
this context, we propose an alternative composite likelihood approach, where the estimation function is defined from the marginal or conditional distributions of the number of cells of each
observable cell type. These distributions have generally no closed-form expressions but they can be approximated using simulations. We construct a bias-corrected version of the estimating function,
which also offers computational advantages. Two algorithms are discussed to compute parameter estimates. Large sample properties of the estimator are presented. The performance of the proposed method
in finite samples is investigated in simulation studies. An application to the analysis of the generation of oligodendrocytes from oligodendrocyte type-2 astrocyte progenitor cells cultured in vitro
reveals the effect of neurothrophin-3 on these cells. Our work demonstrates also that the proposed approach outperforms the existing ones.
Keywords: Bias correction, Cell differentiation, Composite likelihood, Discrete data, Monte Carlo, Neurotrophin-3, Oligodendrocytes, Precursor cell, Stochastic model | {"url":"http://pubmedcentralcanada.ca/pmcc/articles/PMC3006127/?lang=en-ca","timestamp":"2014-04-18T17:31:22Z","content_type":null,"content_length":"166344","record_id":"<urn:uuid:e5a6545e-a9f1-45fc-b51f-457ddd43d644>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00510-ip-10-147-4-33.ec2.internal.warc.gz"} |
Energy transfer of highly vibrationally excited naphthalene. II. Vibrational energy dependence and isotope and mass effects
[Figure: Angular resolved energy transfer probability density functions (double differential cross sections with respect to solid angle and transferred energy) of naphthalene in collisions with Kr at various collision energies, comparing excitation at 266 nm with a second pump wavelength. Columns show up-collision energy transfer, down-collision energy transfer, and the region of maximum down-collision energy transfer; line styles distinguish near-forward, sideway, and backward density functions, normalized separately at each collision energy.]

[Figure: Angular resolved energy transfer probability density functions for two naphthalene isotopologues in collisions with Kr at two collision energies, with the same column layout and normalization as above.]

[Figure: Angular resolved energy transfer probability density functions of naphthalene in collisions with Xe at various collision energies, with the same column layout and normalization as above.]

[Figure: Energy transfer probability density functions in collisions with Xe at various collision energies; negative values represent down-collision and positive values up-collision energy transfer, with density functions normalized separately at each collision energy.]

[Table: Velocity uncertainties and speed ratios of the naphthalene molecular beam, including the full widths at half maximum of the velocity distribution, the naphthalene velocities in the laboratory and center-of-mass frames, and the collision-energy uncertainty.]
Highly efficient generation of entangled photon pair by spontaneous parametric downconversion in defective photonic crystals
We study the process of spontaneous parametric downconversion in a one-dimensional defective quadratic nonlinear photonic crystal. It is shown that the strong confinements of both the pump and signal waves around the defective layer result in a giant enhancement of the entangled photon-pair generation. An enhancement factor as high as $3.4 \times 10^{6}$ is obtained in our defective structure based on the dual-localized modes. Furthermore, the linewidth of the downconverted fields is only $0.1\ \mathrm{nm}$. Such a photonic crystal structure can be applied as a highly efficient source of entangled photon pairs for highly integrated all-optical circuits.
© 2007 Optical Society of America
OCIS Codes
(190.4410) Nonlinear optics : Nonlinear optics, parametric processes
(270.0270) Quantum optics : Quantum optics
ToC Category:
Quantum Optics
Original Manuscript: August 10, 2006
Revised Manuscript: February 5, 2007
Manuscript Accepted: March 6, 2007
Published: May 17, 2007
Yong Zeng, Ying Fu, Xiaoshuang Chen, Wei Lu, and Hans Ågren, "Highly efficient generation of entangled photon pair by spontaneous parametric downconversion in defective photonic crystals," J. Opt.
Soc. Am. B 24, 1365-1368 (2007)
Sort: Year | Journal | Reset
Freund, Y. (Y. Freund, M. Kearns, D. Ron, R. Rubinfeld, R. Schapire and L. Sellie) -- Efficient learning of typical finite automata from random walks - 1993
Freund, Y. (N. Cesa-Bianchi, Y. Freund, D. P. Helmbold, D. Haussler, R. E. Schapire and M. K. Warmuth) -- How to use expert advice - 1993
Freund, Y. (Y. Freund and D. Haussler) -- Unsupervised learning of distributions on binary vectors using two layer networks - 1991
Freund, Y. (Y. Freund) -- An improved boosting algorithm and its implications on learning complexity - 1992
Freund, Y. (Y. Freund) -- Boosting a Weak Learning Algorithm by Majority - September 1995 | {"url":"http://www.nzdl.org/gsdlmod?e=d-00000-00---off-0cltbibZz-e--00-1----0-10-0---0---0direct-10---4-------0-1l--11-en-50---20-home---01-3-1-00-0--4--0--0-0-11-10-0utfZz-8-00&a=d&c=cltbib-e&cl=CL2.6.105","timestamp":"2014-04-21T14:55:19Z","content_type":null,"content_length":"17606","record_id":"<urn:uuid:2d6e447f-2094-4dae-99b4-9fe684bc3c13>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00641-ip-10-147-4-33.ec2.internal.warc.gz"} |
Shift all bits to the left
<< has the effect of multiplying a number by a power of two, but is less expensive in computer time.
Name Type Opt Description
initial-value int opt Sets an initial value for the number of bits by which to shift leftward.
bang In left inlet: Performs the bit-shift with the numbers currently stored. If there is no argument, << initially holds 0 as the number of bits by which to shift.
int input [int] In left inlet: All bits of the number, in binary form, are shifted to the left by a certain number of bits. The resulting number is sent out the outlet.
(inlet1) amount-of-bitshift [int] In right inlet: The number is stored as the number of bits to left-shift the number in the left inlet.
float input [float] Converted to int.
set set-input [int] Sets input to the object without causing output (bang will output it).
list input [list] In left inlet: The first number is bit-shifted to the left by the number of bits specified by the second number.
int: The number in the left inlet is bit-shifted to the left by a certain number of bits. The number of bits by which to shift is specified by the number in the right inlet. The output is the
resulting bit-shifted number.
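Outside Max, the same left-shift operation exists in most general-purpose languages. A quick sketch of the multiply-by-a-power-of-two equivalence described above (Python shown for illustration; the Max object itself takes its operands from its two inlets):

```python
def shift_left(value, bits):
    """Shift all bits of `value` to the left by `bits` positions."""
    return value << bits

# Shifting left by n bits is the same as multiplying by 2**n,
# which mirrors what the << object does with its two inlets.
assert shift_left(5, 1) == 5 * 2      # -> 10
assert shift_left(5, 3) == 5 * 8      # -> 40
assert shift_left(5, 0) == 5          # default shift of 0 bits
```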
See Also
Name Description
* Multiply two numbers, output the result
>> Shift all bits to the right | {"url":"http://cycling74.com/docs/max5/refpages/max-ref/shiftleft.html","timestamp":"2014-04-17T04:30:23Z","content_type":null,"content_length":"3855","record_id":"<urn:uuid:1cd20c8e-456e-4796-b2c0-0c3338a44131>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00444-ip-10-147-4-33.ec2.internal.warc.gz"} |
Call for Papers: THedu'11 (Post-proceedings)
Posted: Sep 1, 2011 11:17 AM
[Apologies for possible multiple postings.]
Call for Papers Post-Proceedings
CTP components for educational software
(CTP -- Computer Theorem Proving)
Important Dates
* Call for papers: 1.Sep.2011
* Submission (full papers): 15.Nov.2011
* Notification of acceptance: 15.Dec.2011
* Revised papers due: 15.Jan.2012
THedu is a forum to gather the research communities for Computer
Theorem Proving (CTP), Automated Theorem Proving (ATP), and Interactive
Theorem Proving (ITP), as well as for Computer Algebra Systems (CAS)
and Dynamic Geometry Systems (DGS).
The goal of this union is to combine and focus systems of these areas
and to enhance existing educational software, as well as to study the
design of the next generation of mechanised mathematics assistants
(MMA). Elements for next-generation MMA's include:
* Declarative Languages for Problem Solution: education in applied
sciences and in engineering is mainly concerned with problems, which
are understood as operations on elementary objects to be transformed
to an object representing a problem solution. Preconditions and
postconditions of these operations can be used to describe the
possible steps in the problem space; thus, ATP-systems can be used to
check if an operation sequence given by the user does actually present
a problem solution. Such "Problem Solution Languages" encompass
declarative proof languages like Isabelle/Isar or Coq's Mathematical
Proof Language, but also more specialized forms such as, for example,
geometric problem solution languages that express a proof argument in
Euclidean Geometry or languages for graph theory.
* Consistent Mathematical Content Representation: libraries of existing
ITP-Systems, in particular those following the LCF-prover paradigm,
usually provide logically coherent and human readable knowledge. In
the leading provers, mathematical knowledge is covered to an extent
beyond most courses in applied sciences. However, the potential of
this mechanised knowledge for education is clearly not yet recognised
adequately: renewed pedagogy calls for enquiry-based learning from
concrete to abstract --- and the knowledge's logical coherence
supports such learning: for instance, the formula 2.Pi depends on the
definition of reals and of multiplication; close to these definitions
are the laws like commutativity etc. Clearly, the complexity of the
knowledge's traceable interrelations poses a challenge to usability.
* User-Guidance in Stepwise Problem Solving: Such guidance is
indispensable for independent learning, but costly to implement so
far, because so many special cases need to be coded by hand. However,
CTP technology makes automated generation of user-guidance reachable:
declarative languages as mentioned above, novel programming languages
combining computation and deduction, methods for automated
construction with ruler and compass from specifications, etc --- all
these methods 'know how to solve a problem'; so, using the methods'
knowledge to generate user-guidance mechanically is an appealing
challenge for ATP and ITP, and probably for compiler construction!
In principle, mathematical software can be conceived as models of
mathematics: The challenge addressed by this workshop is to provide
appealing models for MMAs which are interactive and which explain
themselves such that interested students can independently learn by
inquiry and experimentation.
Program Chairs
Ralph-Johan Back, Abo University, Turku, Finland
Pedro Quaresma, University of Coimbra, Portugal
Program Committee
Francisco Botana, University of Vigo at Pontevedra, Spain
Florian Haftman, Munich University of Technology, Germany
Predrag Janicic, University of Belgrade, Serbia
Cezary Kaliszyk, University of Tsukuba, Japan
Julien Narboux, University of Strasbourg, France
Walther Neuper, Graz University of Technology, Austria
Wolfgang Schreiner, Johannes Kepler University, Linz, Austria
Laurent Théry, Sophia Antipolis, INRIA, France
Makarius Wenzel, University Paris-Sud, France
Burkhart Wolff, University Paris-Sud, France
The post-proceedings of THedu'11 will be published in the Electronic
Proceedings in Theoretical Computer Science (EPTCS) series. You are
invited to submit original research papers (of 10-14 pages) for
possible publication in the proceedings. Your contributions have to be
within the scope of THedu, but their contents do not have to be related
to a past presentation at THedu'11. Submissions which do not have been
presented at THedu'11 are welcome.
All the submissions will be formally reviewed according to the usual
standard of international conferences. The proceedings will be edited
by the PC chairs.
THedu'11 seeks papers presenting original unpublished work which is
not been submitted for publication elsewhere.
Submission guidelines
The authors of papers should submit to easychair in PDF format
generated by EPTCS LaTeX style(*).
We will use the same submission page as for the workshop:
Do NOT UNDER ANY CIRCUMSTANCES replace your workshop submission by
your new post-proceedings paper (it won't be considered in that case),
but instead make sure to submit your post-proceedings contribution as
a NEW AND INDEPENDENT SUBMISSION.
Please feel free to contact us if you have any comments, suggestions,
and/or questions. We look forward to receiving your submissions.
With best wishes,
The Program Committee of THedu'11
(*) http://www.cse.unsw.edu.au/~rvg/EPTCS/eptcsstyle.zip
Até breve; À bientôt; See you later; Vidimo se;
Professor Auxiliar Pedro Quaresma
Departamento de Matemática, Faculdade de Ciências e Tecnologia
Universidade de Coimbra
P-3001-454 COIMBRA, PORTUGAL
e-mail: pedro@mat.uc.pt
page: http://www.mat.uc.pt/~pedro/
tel: +351 239 791 137; fax: +351 239 832 568
Write the quadratic function from the table of values
Can someone tell me how to find the vertex and create a "Quadratic function" in "Vertex form" from this table of values:
(-3, -0.5), (-2.5, -3), (-2, -4.5), (-1.5, -5), (-1, -4.5), (-0.5, -3), and (0, -0.5)
And after we have the vertex, determine whether it is a max or a min?
honest_denverco09 wrote:Can someone tell me how to find the vertex and create a "Quadratic function" in "Vertex form" from this table of values:
(-3, -0.5), (-2.5, -3), (-2, -4.5), (-1.5, -5), (-1, -4.5), (-0.5, -3), and (0, -0.5)
These points look to be symmetric (compare the y-values for x = -3 and x = 0, or x = -2 and x = -1), so I don't think you're supposed to do a regression to create a "best fit" equation; I believe
these points are exactly from an equation.
One method for finding the equation would be to plot the points and note, by symmetry, where the vertex must be. Since the vertex is the point (h, k), and since the vertex form of the equation is "y
= a(x - h)^2 + k", you can plug the values for "h" and "k" into the equation.
They've given you seven points, only one of which was the vertex. Pick any of the other points, and plug the x- and y-values into the above equation. Solve this for the value of "a".
Now that you have the values of "h", "k", and "a", you can fill in the vertex form of the equation to find your answer.
honest_denverco09 wrote:And after we have a vertex , determine whether it is a max or a min ?
If a quadratic is positive (that is, if its parabola opens upward), the vertex is the minimum; otherwise, the vertex is the maximum. (The lesson in the above link gives more information on this.)
Is finding the vertex of a quadratic the same as for linear functions?
Thank you for your answer!
I got the second part (whether the vertex is a max or a min), but I am still stuck on how to figure out the coordinates of the vertex from these seven points. Will the vertex be the middle point? If so, how can I find the vertex when the number of given points is even?
Also, can I find the vertex of a quadratic function using the method for finding the slope of a linear function, "slope = (y1 - y2)/(x1 - x2)"?
honest_denverco09 wrote:I am still stuck on how to figure out the coordinates of the vertex from these seven points. Will the vertex be the middle point? If so, how can I find the vertex when the number of given points is even?
It's not so much a matter of "even" or "odd", as much as symmetry. You could have four points for a quadratic (say, (-4, 0), (-3, -1), (-1, -1), and (0, 0)), and all you would need to do, for finding
the vertex, would be to note the symmetry (in this example, the fact that x = -4 and x = 0 have the same y-value, and x = -3 and x = -1 share another y-value); the vertex would be at the point
directly between (in this example, at x = -2).
honest_denverco09 wrote:Also, can I find the vertex of a quadratic function using the method for finding the slope of a linear function, "slope = (y1 - y2)/(x1 - x2)"?
Since a quadratic does not have a constant slope, no, the slope formula would not apply. Sorry!
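Putting the steps above together with the numbers from this thread: the vertex (-1.5, -5) is read off the table by symmetry, then "a" is solved for with another point. A quick check (shown in Python; the variable names are just for illustration):

```python
# Table of values from the original question.
points = [(-3, -0.5), (-2.5, -3), (-2, -4.5), (-1.5, -5),
          (-1, -4.5), (-0.5, -3), (0, -0.5)]

# Vertex (h, k) read off by symmetry: the y-values mirror around x = -1.5.
h, k = -1.5, -5.0

# Solve y = a(x - h)^2 + k for "a" using another point, here (0, -0.5).
x1, y1 = 0, -0.5
a = (y1 - k) / (x1 - h) ** 2          # (-0.5 + 5) / 2.25 = 2

# The fitted function should reproduce every tabulated point.
f = lambda x: a * (x - h) ** 2 + k
assert all(abs(f(x) - y) < 1e-9 for x, y in points)

# a > 0, so the parabola opens upward and the vertex is a minimum.
print(f"y = {a}(x - ({h}))^2 + ({k})")
```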
The vertex of a quadratic function!
Thank you for your answer!
Help with problem in c++ class
Our teacher gave us some problems to work for class, but I don't see anywhere in the book that explains these.
I'm sure it is something simple that I'm overlooking, because I'm good in math.
Could someone show me how to work these?
Thanks in advance.
Following the pseudocode below, what is the value of X if Y = 10?
If Y > 10 then
    X = 5
Else
    If Y < 20 then
        X = 7
    Else
        X = 9
Ok I think I see it already.
X= 7 correct?
Yes, You are correct. Also, the last branch will never trigger, no matter what the value of Y is.
As for finding the answer: if You can't figure out the end result by reading a description/pseudocode/whatever, but can see/understand the algorithm, just write a small test program and simply get the result.
Asylum says thanks.
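Taking that suggestion literally, here is a small test program for the branching discussed in this thread (Python shown for brevity; the assigned values 5, 7 and 9 follow the restatements later in the thread):

```python
def compute_x(y):
    """Translation of the nested if/else from the first post."""
    if y > 10:
        x = 5
    else:
        if y < 20:
            x = 7
        else:
            x = 9   # unreachable: y <= 10 already implies y < 20
    return x

assert compute_x(10) == 7    # the value the thread asks about
assert compute_x(15) == 5    # any y > 10
assert compute_x(8) == 7     # any y <= 10 also lands in the y < 20 branch
# No input reaches the x = 9 branch, confirming that the last branch
# never triggers for any value of y.
```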
MrSeanKon
X=5 for all Y values greater than 10.
For values less than 10 e.g. Y=8, Y=9, Y=9.999999999 (not exactly Y=10) X is always 9.
10 value for Y is a "critical" value where the second if/else conditional works.
I don't even get it why you asked?
He didn't understand it at first. I thought it was pretty easy. I am sure there was something very specific he wasn't understanding.
W1zzard
much more readable once you indent it

If Y > 10 then
    X = 5
Else
    If Y < 20 then
        X = 7
    Else
        X = 9

when using a language that uses closing blocks for if it will be even more readable

If Y > 10 then
    X = 5
Else
    If Y < 20 then
        X = 7
    Else
        X = 9
    End If
End If
It makes no sense at all, especially the first else.
Else means that Y<10 and "if Y<20" is redundant. Because if Y<10 then of course Y<20, that part is captain obvious.
No, it's y<10. They wrote:
It means that whatever comes after the "else" operator can happen only when y<10
No need to play word games here. Else means that y is less than or equal to 10. Period.
Okay, I won't be playing word games.
Graphical representation:
Edit: OK, now I see Your point. That thing is quite obvious when looking at that "code".
Yet, you wrote it very ambiguously. Ambiguously enough to confuse me into thinking You were trying to say a completely different thing. If You would be less ambiguous next time, I would be
grateful. Thank You!
Would look a lot easier if:
X = 0;
if (Y > 10) then X = 5;
elseif (Y < 20) then X = 7;
else X = 9;
end if
Sometimes pseudocode is just more confusing. | {"url":"http://www.techpowerup.com/forums/threads/help-with-problem-in-c-class.173451/","timestamp":"2014-04-19T15:25:58Z","content_type":null,"content_length":"81934","record_id":"<urn:uuid:4b085d22-98a8-4cb0-ba91-ecc694ee09e8>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00342-ip-10-147-4-33.ec2.internal.warc.gz"} |
Probability of detection

Probability of Detection: The likelihood, expressed as a percentage, that a test method will correctly identify a leaking tank.

Detectable Leak Rate: The smallest leak (from a storage tank), expressed in terms of gallons- or liters-per-hour, that a test can reliably discern with a certain probability of detection or false alarm.

Source: Terms of the Environment
David Krumm - Publications and Preprints
• Squarefree parts of polynomial values
In preparation.
• Computing points of bounded height in projective space over a number field
• Preperiodic points for quadratic polynomials over quadratic fields
With John Doyle and Xander Faber. Submitted.
• Computing algebraic numbers of bounded height
With John Doyle. To appear in Mathematics of Computation. An implementation of the algorithm is currently awaiting review on the Sage Trac. | {"url":"http://www.cmc.edu/pages/faculty/DKrumm/html/publications.html","timestamp":"2014-04-19T12:00:52Z","content_type":null,"content_length":"2408","record_id":"<urn:uuid:415026fa-2b0a-4bdd-aaa9-5f95e152d4ea>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00551-ip-10-147-4-33.ec2.internal.warc.gz"} |
Newbie combat question
12/27/2011 1:49:51 PM
jonpfl:
All,
Really enjoying the game, but I have some questions in regard to combat.
When I am looking at units before attacking, I can never figure out what is the best choice (rock-paper-scissors).
1) When attacking a tank with infantry, would I look at the infantry's attack value against a hard target and at the tank's ground defense to get an idea?
2) When is close defense figured into the equation? If a tank is in a town and I attack it with infantry, do I use the infantry's hard-target attack value and look at the close defense value of the tank?
3) How come there are various values for attacking (soft and hard) but only one value for defending (ground) for ground units?
Any idea if there is a thread anywhere that goes over all of this?
12/27/2011 2:02:47 PM
jonpfl:
All,
I found this screenshot in the forum; can someone go over it step by step and explain it to me?
Thx,
jonpfl
Attachment (1)
12/27/2011 4:03:15 PM
Rood:
I'll try to give an explanation for all your questions, which are all related.
First, if you go to the Library (F1) and then to Terrain, you will see a column 'Special properties' which lists which terrain types use the 'Close defense' value when a unit is attacking another unit (not just armor) in such a hex.
Depending on the 'Target Type', either the soft or hard attack value is used. And depending on the terrain in which the defending unit sits, either ground or close defense is used.
In regards to the screenshot.
- Every unit has an initiative; initiative can be capped by weather and/or terrain (see the library again). For example, a tank attacking an infantry unit inside a forest hex will not use the tank's initiative but that of the terrain (which for forest is 2).
The difference between the initiatives (the effective initiative, that is, after the dice roll) tells how many shots (as a percentage) the defending unit can return.
The formula (for ground combat; for air it's different) is that for every point of difference, 20% of the defending unit cannot shoot back.
So in this case the difference is 3, therefore 60% of the defending unit cannot shoot back. In this case the strength of the defending unit is two: 0.4 * 2 = 0.8, and the game rounds this up to 1. However, if the defending unit were 10 strength, then only 4 units could shoot back. This is shown on the lower right side where it says 'Unsuppressed strength: 1'.
- The defending unit is an infantry unit, so the soft attack value is used; also, the defending unit is in a forest hex, so the close defense is used.
- When the defending unit is shooting back, it also uses its soft attack value and the close defense value of the attackers.
Based on the attack and defense values a sort of combat results table (CRT) is made, and then for every point of strength of the attacking unit a die roll is made**.
The kill chance is 53% and therefore the attacker needs to roll 47 or higher for a kill. As you can see there are 10 rolls made, and 6 are higher than 47 so the
unit is totally destroyed. Then the defending unit shoots back and it needs to roll 51 or higher for a kill but it rolls 47 which results in a surpression result.
** For artillery there's another factor, the rate of fire (RoF): basically, the lower the RoF, the fewer dice are rolled.
I hope I have explained it correctly and you can understand what I wrote.
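The return-fire arithmetic above can be sketched as follows (Python shown for illustration; the round-up behavior is an assumption inferred from the 0.8 -> 1 example, not something confirmed from the game's code):

```python
import math

def unsuppressed_strength(defender_strength, initiative_diff):
    """Strength points able to return fire: each point of effective
    initiative difference silences 20% of the defender (ground combat)."""
    fraction = max(0.0, 1.0 - 0.2 * initiative_diff)
    return math.ceil(defender_strength * fraction)

# Screenshot case: difference 3 -> 60% suppressed; strength 2 -> 0.8,
# which the game rounds up to 1 shooter.
assert unsuppressed_strength(2, 3) == 1
# The second example above: a 10-strength defender returns 4 shots.
assert unsuppressed_strength(10, 3) == 4
```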
12/27/2011 4:31:12 PM
ORIGINAL: Rood
Posts: 27
Joined: 3/26/2005 I'll try to give an explanation for all your questions which are all related.
Status: offline
First if you go to the Library (F1) and go to Terrain you will see a column 'Special properties' which lists which terrain type uses the 'Close defense' value when a
unit is attacking another unit (not just armor) which is in such a hex.
Depending on the 'Target Type' Either the soft or hard attack value is used. And depending on the terrain in which the defending unit is either ground or close defense
is used.
In regards to the screenshot.
- every unit has a initiative, intitive can be capped by weather and/or terrain (see the library again). For example a tank attacking an infantry unit inside a forest
hex will not use the initiave of the tank but that of the terrain (which for forest is 2).
The difference between the initiative (the effective initiative that is so after the dice roll) will tell how many shots (percentage) the defending unit can return.
The formula is (for ground combat for air it's different) that for every point in difference 20% of the defending unit cannot shoot back.
So in this case the difference is 3, therefore 60% of the defending unit cannot shoot back. In this case the strength of the defending unit is two, 0.4 * 2 = 0.8 and
the game rounds this up to 1. However if the defending unit was 10 strenght then only 4 units could shoot back. This is shown on the lower right side where it says
'Unsurpressed strenght: 1'
- The defending unit is an infantry unit so the soft attack value is used, also the defending unit is in a forest hex so the close defense is used
- When the defending unit is shooting back it also uses his soft attack value and the close defense value of the attackers.
Based on the attack and defense values a sort of combat results table (CRT) is made and then for every point of strenght of the attacking unit a die roll is made**.
The kill change is 53% and therefore the attacker needs to roll 47 or higher for a kill. As you can see there are 10 rolls made, and 6 are higher than 47 so the
defending unit is totally destroyed. Then the defending unit shoots back and it needs to roll 51 or higher for a kill but it rolls 47 which results in a surpression
** for artillery there's another factor which is the rate of fire (RoF), basically the lower the RoF the less dice are rolled.
I hope I have explained it correctly and you can understand what I wrote.
Yes, thank you thank you thank you
Is there any way to see this info in game vs having to go to the library? I would like to be able to hover over some terrain and see these values
12/27/2011 6:51:39 PM
Rood:
Clear, Countryside, Airfield and Fortification use the ground defense values.
So do rivers but you don't want to attack (in most cases) from a river hex as the defending unit gets a bonus to his defense value.
The rest all use the close defense value.
Basically, if it costs more than 1 movement point to enter (except cities), then it uses close defense values.
The reason for this (and for initiative being capped) is that there's limited visibility, or at least only short-range visibility. So long-range guns don't work well at all, since you cannot see/target very far anyway because of all kinds of obstacles (houses, trees, hills, etc.).
12/28/2011 11:42:22 AM
VPaulus:
Thanks, Rood.
I've just added this thread to the Sticky Common Questions in the Slitherine forum.
12/30/2011 12:56:59 AM
quote: ORIGINAL: Rood
Clear, Countryside, Airfield and Fortification use the ground defense values.
So do rivers but you don't want to attack (in most cases) from a river hex as the defending unit gets a bonus to his defense value.
The rest all use the close defense value.
Basically if it costs you more than 1 movement point to enter (except cities) than it uses close defense values.
The reason for this - and that initiative is capped - is that there's limited visibility, at least there's only short range visibility. So long range guns don't work
well at all since you cannot see/target very far anyway because all kinds of obstacles (houses, trees, hills etc).
So I am assuming you want to defend in close defense, right? I looked at some of my infantry units and their close defense value is 2 which isn't very high. It seems that
my ground defense values are higher.
That seems backwards to me
Post #: 7
12/30/2011 8:26:33 AM
Rood Ground defense values are always higher than close defense values, but it's a bit more complicated than just that.
It also depends on what kinds of units are attacking each other.
If for example a tank is sitting in a forest hex (and forest is close terrain) and it's attacked by infantry, then the tank will use its close defense value. This is good since the ground defense value of a tank is usually much higher than its close defense value.
However if that same tank in the forest hex is attacked by another tank then it will use its ground defense value.
So with infantry you always want to attack 'Hard' targets when they are in close terrain so they will use their close defense values, making them much more vulnerable to your attack.
As a tank/recon you should prefer not to be in close terrain since you will be more vulnerable to attacks from infantry.
If another tank (or anti-tank, even towed AT) attacks a tank in close terrain then the ground defense value is used.
Now when a tank is attacking infantry in either flat or close terrain the ground defense of the infantry is used. However when infantry is in a close terrain hex it will usually be entrenched (every terrain has a base entrenchment value which is gained the next turn after entering that hex) and therefore there's a higher chance Rugged Defense will occur, and that gives bonuses (the initiative of the attacking unit is set at zero; the defending unit gets +4 to its defense value).
A good way to learn more about the combat mechanics is to use CTRL + mouse click to get the predicted combat results; then you can see all the factors involved in the calculation.
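The selection rules above can be condensed into a small lookup. To be clear, this is just my simplification of the post - the unit classes ("tank", "recon", "anti-tank", "infantry") and terrain names are illustrative, not the game's full lists:

```python
# Which defense value the defender uses, per the rules described above:
# open terrain -> ground defense; close terrain -> close defense, except
# for the two special cases Rood mentions.
OPEN_TERRAIN = {"clear", "countryside", "airfield", "fortification", "river"}
HARD = {"tank", "recon"}

def defense_value_used(defender_class, attacker_class, terrain):
    if terrain in OPEN_TERRAIN:
        return "ground"
    # Close terrain from here on:
    if defender_class in HARD:
        # A hard target uses close defense against infantry, but still
        # uses ground defense against tanks/recon/AT.
        return "close" if attacker_class == "infantry" else "ground"
    if attacker_class == "tank":
        # A tank attacking infantry uses the infantry's ground defense
        # even in close terrain (entrenchment helps the infantry instead).
        return "ground"
    return "close"

print(defense_value_used("tank", "infantry", "forest"))  # close
```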
Post #: 8
12/30/2011 1:57:32 PM
jonpfl Ok, this is making sense
If a unit with net initiative 8 is attacked by a unit with net initiative 2, would the tank go first?
Posts: 27 Is it possible for a unit with really low initiative to attack and not go first? Does net initiative determine who goes first?
Joined: 3/26/2005
Status: offline I know it sounds obvious this is the case but just making sure
Post #: 9
12/30/2011 1:59:57 PM
ORIGINAL: jonpfl
Posts: 27
Joined: 3/26/2005 All,
Status: offline
I found this screenshot in the forum, can someone go over it step by step and explain to me?
And what is the dice roll for? Why does it say +0 and +2?
Post #: 10
12/30/2011 7:51:43 PM
Rood quote:
edit 23 june 2013: below is the original post which may not be accurate anymore; below the quoted text is the information I've posted on the slitherine forum which
Posts: 67 should be up to date.
Joined: 7/16/2011
Status: offline As to who shoots first, yes, initiative does decide who shoots first.
After the effective initiative has been calculated either the attacker or defender will shoot first.
If the attacker has the highest initiative then it will shoot with his available strength; then it is calculated, using the initiative of both units and some other
factors, how many units (= dice rolls) of the defender shoot back.
If the defending unit has a higher initiative then it will shoot first and with its full available (unsuppressed) strength left. Then after the defender has fired the
attacker will shoot back using the same calculation (i.e. 20% doesn't shoot back for every point in initiative the defender has more than the attacker).
You usually want to avoid this: either reduce the defending unit's strength and/or suppress it with artillery and air attacks.
As to the dice roll that is shown: for every attack a dice roll from 0 to 3 is made. This is a random variable that is used to make the results, well, more random.
So even if you have all the factors there's still a matter of luck/randomness involved. So even if you attack and you have an initiative of 7 and the enemy has 2, then
you don't expect any losses since the enemy will not be able to return fire. However if you roll a 0 and the enemy rolls a 3 then it will defend with 60% of its
strength.
Note that this dice roll is always added last and after the other factors have been used, so for example when attacking a unit in a forest, which terrain has a cap for
initiative of 2, a dice roll is made and if that is 2 for example the effective initiative will be 4, so higher than the cap.
And btw, there's still more to be said about initiative since some attacks have special rules, for example when an AT unit is attacking a tank or recon it gets a bonus
to its initiative of 3.
And I'm not sure if I have told that for every unit that you have adjacent (and has not fired yet) to the unit you are going to attack the defending unit will get a -1
penalty to its initiative.
So when you have 3 units adjacent that have not fired yet and you attack with one of those units the defender will get a -2 penalty to its initiative calculation.
Edit: wrote wrong info, it's correct now (I hope).
This is from the slitherine forum, updated with some additional information:
The calculations for the Effective Initiative decide two things: the unit with the highest initiative will shoot first - regardless of whether it's the attacker or the defender -
and the difference between the initiative of the attacker and defender determines how many units can return fire.
The section below was changed - see the response below from Rudankort
It is not quite like this. Initiative advantage of 4 means that 80% of damage is caused to the enemy before he has a chance to react. In this particular case your
attack has caused 3 kills and 1 suppression. From these, 80% are applied upfront:
- 80% from 3 kills is 2.4 kills, rounded to 2.
- 80% from 1 suppression is 0.8 suppression, rounded to 1.
So, a total of 3 points of enemy's strength were disabled before he could shoot back. So he only responded to your attack with 7 points of strength out of 10.
For every point of difference in initiative a unit has over the other unit, 20% of the other unit's (unsuppressed) strength will have a chance of not being
able to shoot back.
Therefore if the difference is 5 or more, and if the attack causes enough kills and/or suppression, the other unit will not shoot back (5 * 20% = 100%).
Fractions are rounded up, so if a unit is at 4 strength and the difference in initiative is 3, then 60% of 4 will have a chance to not be able to return fire, so (1 - 0.6)
* 4 = 1.6 and this is rounded up to 2 points that can return fire.
Note that if the unit is already (partly) suppressed then only the unsuppressed strength of the unit is used to calculate the number of units that have a chance to return
fire.
This does not apply to artillery versus ground units, as defending units can never return fire when being attacked by artillery (even if the artillery unit has a range of 1).
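The 20%-per-point rule with its round-up can be written out as a small function. This is a deterministic sketch of the formula as described above (the post actually frames it as a *chance* per strength point), not the game's code:

```python
import math

def returning_strength(unsuppressed_strength, initiative_diff):
    """Strength points of the slower unit that get to shoot back.
    Each point of initiative difference removes 20% of the unsuppressed
    strength; the surviving fraction is rounded UP, per the post."""
    fraction = max(0.0, 1.0 - 0.2 * max(0, initiative_diff))
    return math.ceil(fraction * unsuppressed_strength)

# The worked example: strength 4, difference 3 -> (1 - 0.6) * 4 = 1.6 -> 2
print(returning_strength(4, 3))   # 2
print(returning_strength(10, 5))  # 0: difference of 5+ means no return fire
```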
Initiative elements and Effective Initiative
Apart from the attacking and defending unit's base initiative there are many other factors involved; some will add or subtract initiative for either or both the attacking
and defending unit. There can also be a maximum allowed initiative, a so-called cap, due to weather and/or terrain.
Weather can cap initiative - Rain or Snow caps the initiative at one
Terrain can cap initiative - see the Library.
Note that it's always the terrain of the defending unit that determines any factors regarding an initiative cap.
Hero bonus
Any bonus of a hero is added to the calculation.
Experience bonus
The experience of the unit is partially added to the unit's base initiative; how much depends on the types of units attacking.
Initiative bonus for each (full) experience star:
Tank vs Tank 10%
AT vs Tank 5%
Inf vs Tank 0%
Recon vs Tank ?
So if a tank with 4 stars and 9 base initiative attacks another tank it would get 4 * 10% * 9 = 3.6 -> rounded down to 3 added to its initiative.
Note that the Experience bonus is calculated at the end; the bonus is the shown percentage (per full star) of the initiative and is always rounded down. For example a unit with
10 initiative and 100 experience (equals one star) would usually gain a +1 bonus to initiative from experience; however if that unit is attacked by multiple units and
therefore gets a -1 penalty from Multiple Attack, its initiative would now be 9, and then 10% of 9 would be 0.9, which is rounded down, so effectively it's zero.
If a Tank or Recon attacks an AT unit the AT gets +3 added to its initiative.
Ambushed/Rugged Defense: initiative set to zero (for the ambushed unit, or for the attacker against a defender who gained rugged defense).
Multiple attacking units (not artillery - even if it has a range of 1): -1 for each unit attacking beyond the first.
No Fuel: -2 penalty.
A die roll 1-2 is made for both attacker and defender.
This roll is added after the other factors have been determined; the result is always added to the calculated total and is not affected by either the weather or terrain cap.
Therefore the Effective Initiative can be higher than the cap would allow.
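The factor list above can be put into one small function. This is my reading of the post, not the actual game formula - in particular the order of penalties, experience bonus, cap and die roll is an assumption:

```python
import random

def effective_initiative(base, hero_bonus=0, stars=0, star_pct=0.0,
                         at_vs_armour=False, extra_attackers=0,
                         no_fuel=False, cap=None, rng=random):
    value = base + hero_bonus
    if at_vs_armour:                  # AT attacked by tank/recon: +3
        value += 3
    value -= extra_attackers          # -1 per attacker beyond the first
    if no_fuel:
        value -= 2
    # Experience bonus is applied at the end and rounded down:
    value += int(stars * star_pct * value)
    if cap is not None:               # weather/terrain cap...
        value = min(value, cap)
    return value + rng.randint(1, 2)  # ...but the die roll ignores the cap

# The worked example: 4 stars at 10%, base 9 -> +3, so 12 plus a 1-2 die roll.
print(effective_initiative(9, stars=4, star_pct=0.10, rng=random.Random(0)))
```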
< Message edited by Rood -- 6/23/2013 10:49:51 AM >
Post #: 11
12/30/2011 8:03:54 PM
jonpfl Awesome stuff,
Thanks a lot, you have taught me a lot
Posts: 27 Happy New Year
Joined: 3/26/2005 jonpfl
Status: offline
Post #: 12
12/31/2011 8:10:46 AM
Rood Actually I was wrong:
The attacker does not always shoot first, initiative decides who shoots first and if the defender has a much higher initiative the attacker might not shoot at all!
Posts: 67
Joined: 7/16/2011 Obviously I never attack under these circumstances.
Status: offline
I'll rewrite what I posted above so as not to spread wrong information around!
Happy new year as well!
Post #: 13 | {"url":"http://www.matrixgames.com/forums/tm.asp?m=2994803","timestamp":"2014-04-19T09:27:09Z","content_type":null,"content_length":"128854","record_id":"<urn:uuid:c282a5a0-dd6d-4a12-ad24-14f923cb01f0>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00452-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fractals/Iterations in the complex plane/cremer
• there are Cremer Julia sets for quadratic polynomials,
• there are no images of such Julia sets
• a Cremer Julia set has no interior. It contains a single or double comb.^[1] "Objects such as combs (homeomorphs of the
product of a Cantor set and an interval) had been expected to be found within hedgehogs." (Kingshook Biswas)^[2]
• Cremer Julia sets are not locally connected ^[3]^[4]
It is hard to give values of Cremer parameters (c-points).
"The Cremer parameters are on the boundaries of hyperbolic components at specific internal angles (argument of the multiplier). If you know the angle, you can compute the parameter explicitly for
periods 1, 2, 3 and numerically for all periods. If I remember correctly, a simple angle is .01001000100001000001 ... times 2PI. But of course there are Siegel angles and parabolic angles which are
the same for the first 100 digits.
... Maybe that is one of the reasons, why you cannot draw them..." Wolf Jung "A Cremer internal angle is obtained as 0.10001000000000000010000... with fast growing 0-gaps, but any finite
approximation is parabolic." ( Wolf Jung )
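To make the quoted construction concrete, here is a small unofficial computation of an angle with fast-growing gaps of zeros in its binary expansion, together with the corresponding parameter on the boundary of the main cardioid (via the standard parametrisation c = μ/2 − μ²/4 with μ = e^{2πiθ}). The bit positions chosen are an arbitrary illustrative choice, and any finite truncation is rational - so this is only an approximation, exactly as Wolf Jung's remark warns:

```python
import cmath

def gap_angle(terms):
    """theta with 1-bits at positions k*(k+1)//2 + 1, so the gaps of
    zeros between consecutive bits grow (1, 2, 3, ... zeros)."""
    return sum(2.0 ** -(k * (k + 1) // 2 + 1) for k in range(1, terms + 1))

theta = gap_angle(8)
mu = cmath.exp(2j * cmath.pi * theta)   # multiplier e^{2 pi i theta}
c = mu / 2 - mu ** 2 / 4                # main-cardioid boundary point

# Sanity check: z = mu/2 is a fixed point of z^2 + c with multiplier mu.
z = mu / 2
print(theta, c, abs(z * z + c - z))
```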
External rays
Parameter ray
" For Cremer parameters, there is a unique parameter ray landing, but the dynamics is more complicated." ( Wolf Jung )
Last modified on 7 March 2014, at 17:51 | {"url":"http://en.m.wikibooks.org/wiki/Fractals/Iterations_in_the_complex_plane/cremer","timestamp":"2014-04-18T06:11:16Z","content_type":null,"content_length":"18904","record_id":"<urn:uuid:5cb41547-fef0-442c-be9f-316afbf5b66e>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00223-ip-10-147-4-33.ec2.internal.warc.gz"} |
Probability you need to know to understand the search engines properly | Distilled
A primer in everything you (n)ever wanted to know about (convergence in) probability
Don’t be scared of the maths in this post. I want to share one of the reasons I enjoyed studied probability at university. It’s an infuriating, beautiful, brilliant subject with myriad applications
to search marketing. It’s also (mainly) based in common sense if you are prepared to try to get past the maths. Don’t be scared of the terminology - I’m going to explain things from scratch. And then
I’m going to try to apply it to some practical advice.
If you have A-level maths (or equivalent), you should be able to follow. Here we go...
The bit of probability I want to show you is to do with a subject called convergence, but first, some basics.
What is probability?
The kind of probability you learn about at school is most easily thought about as:
The probability of an event occurring = (the number of ways that event can occur) / (the total number of equally likely outcomes)
So the probability of getting a head when you toss a (fair) coin is 1 / 2 (i.e. the number of heads = 1 and the total number of outcomes = 2, H or T).
This serves us pretty well for a lot of things. But pretty soon, you find outcomes that aren’t equally likely - at which point you start thinking of probability as the proportion of times you expect
to get a particular result in a set of trials (you can use this definition of probability to cope with click through rates for example - the probability of getting a click being 0.2 means over time,
1 in 5 people click on your link).
[Side note: when you study more probability, you realise this is actually a circular definition as it relies on expectation which is a consequence of the definition of probability, but tough, we’re
doing our best here.]
But what about when there are an infinite number of equally likely outcomes? What about if we are trying to pick a random integer? Then you start introducing (at school) probability distribution
functions etc. (which are relatively boring, in my opinion). As you learn more about the subject, you start talking about probability spaces, metrics and integration. Ugggh. We’re not going to go
there today (this is a blog post, not a textbook - I can talk about the interesting stuff...).
So let’s just carry on thinking about probability as the ‘chance of something happening’. For most practical purposes, you can think of it in terms of experiments. If I repeated this test a hundred
(a thousand, a million) times, how many times would I expect to get a particular result.
One pretty cool (geek!) thing about probability when you start talking in terms of probability spaces, is that it becomes clear that you can have events that have zero probability of happening that
could actually happen. The easiest way to think of this is to imagine you were to pick a random number (of any size) with equal likelihood of picking any number. Because there are infinitely many
possible equally likely outcomes, it is not too hard to come to the conclusion that the probability of picking any particular number is 0. Yet you did pick one...
However, events with probability zero don’t factor into any calculations really - for this reason we talk about almost sure or almost certain for events with probability 1. Having probability 1
doesn’t actually mean something is certain - it simply means the probability of it not happening is 0 (though this could still happen). Confused yet? Loving it yet?
What is convergence?
Suppose you have a sequence of numbers x1, x2, ... , xn, ...
To say that the sequence ‘converges’ is to say that there is a number x such that no matter how small a number E you choose, there is a point in the sequence past which all the numbers in the
sequence are closer to x than E.
An example would be the sequence 0.9, 0.99, 0.999, 0.9999, ... which converges to 1 - i.e. it gets arbitrarily close to 1 as you go along.
Note, however, that it never actually reaches 1. Nevertheless, we say that the ‘limit of the sequence as n tends to infinity’ is 1.
Not all infinite sequences converge. The simplest example of a non-converging sequence is one that alternates between two numbers: 0, 1, 0, 1, 0, ...
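Here's a quick numerical sanity check of both examples (just Python, nothing deep - with a finite prefix all we can check is that the tail stays within epsilon):

```python
def first_index_within(seq, limit, eps):
    """First index past which every remaining term of this (finite)
    sequence stays within eps of `limit`, or None if there isn't one."""
    for i in range(len(seq)):
        if all(abs(x - limit) < eps for x in seq[i:]):
            return i
    return None

nines = [1 - 10.0 ** -(k + 1) for k in range(20)]  # 0.9, 0.99, 0.999, ...
alternating = [k % 2 for k in range(20)]           # 0, 1, 0, 1, ...

print(first_index_within(nines, 1, 0.005))         # 2: from 0.999 onwards
print(first_index_within(alternating, 0, 0.5))     # None: never settles
```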
While explaining that, I have subtly slipped in the concept of infinity. This is easily a subject large enough not just to be the subject of another post, but to be a graduate-level course all on its
own. I am going to use infinity in a slightly sloppy way (in pure maths terms - so not many people will notice). I thought it was enough to try to explain convergence and probabilistic convergence in
one day. Never mind the many levels of infinity!
The reason you (might) care about convergence is that it:
• underpins some of the elements of the Google pagerank algorithm
• is very important if you are working with any kind of iterative algorithm (especially those involving randomness)
• involves infinity, which is always cool
• gives you something to talk about at dinner parties
OK. Maybe just the first two reasons. But still.
What is random convergence?
Suppose you have a random sequence X1, X2, ... , Xn, ...
What I mean by this is that each of the Xi is a number picked according to some probability distribution (this is called a ‘Random Variable’). For example, I might say Xi = 0 if a coin lands heads up
and Xi = 1 if a coin lands heads down for a sequence of coin tosses.
Then we define the probability space W as a set of points corresponding to “real-world outcomes”. For example, if x is a point in W, it might correspond to the outcome where we toss a coin and it
keeps landing heads-up (H, H, H, H, ...) - this would give us a sequence X1(x), X2(x), X3(x), ... = 0, 0, 0, ... (according to our rule above). Another point y might correspond to one tail, then
heads (T, H, H, H, ...) etc.
Now we can look at the probability of a particular outcome - let’s denote this P(x) - this is the probability of the outcome x occurring - in the example above, if our sequence is 10 coin tosses
long, P(x) = P(10 heads in a row) = 0.5^10.
Okey dokey. Now consider the behaviour as n grows larger and larger (mathematicians say “as n tends towards infinity”).
Kinda hard to imagine isn’t it?
For any given outcome, we are left with a sequence x1, x2, x3, ... (which may or may not converge as n tends to infinity).
Because there are a whole range of these sequences, each with its own probability of occurring, we can define a number of kinds of convergence when we are talking about convergence of a random
sequence. There are actually a few kinds, but I just wanted to quickly introduce two:
1. convergence in probability
2. convergence with probability 1
Convergence in probability
We say that a random sequence converges in probability if there exists a random variable X such that, for all e (no matter how small), the probability that the difference between Xn and X is greater
than e tends to 0 as n tends to infinity.
This is actually quite a weak form of convergence and there are a lot of sequences that don't look like they should converge that do converge in probability.
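None of this is a proof, but you can watch convergence in probability happen for the coin example: the running proportion of heads wanders early on and settles towards 1/2 (this is the weak law of large numbers in action):

```python
import random

def running_averages(flips):
    """Proportion of heads after each of the first n tosses."""
    heads, out = 0, []
    for n, flip in enumerate(flips, start=1):
        heads += flip
        out.append(heads / n)
    return out

rng = random.Random(2009)
flips = [rng.randint(0, 1) for _ in range(100_000)]  # 1 = heads
avgs = running_averages(flips)
print(avgs[9], avgs[999], avgs[99_999])  # wobbly early, tight later
```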
Convergence with probability 1
As mentioned above, something can have probability 1 without actually being certain - which is why we call it ‘almost sure’ or ‘almost certain’. Convergence with probability 1 is a common form of
convergence that is one of the strongest.
What it means is that almost all of the sequences that can possibly be generated converge in the regular sense (i.e. that the probability of getting a sequence that doesn’t converge is 0).
Practical uses
As I mentioned, the concept of convergence is important when you are dealing with iterative algorithms (and probabilistic convergence comes into play when you are dealing with iterative algorithms
that have an element of randomness). One example of this kind of algorithm is a class known as Monte Carlo algorithms. These are used to find the answer to questions of probability by running a large
number of theoretical trials and seeing what the result is.
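As a toy Monte Carlo example (mine, not from any search engine): estimate the probability that two dice total at least 10, where the exact answer is easy to check by counting outcomes.

```python
import random

def monte_carlo_probability(event, trials, rng):
    """Estimate P(event) as the proportion of simulated runs in which
    `event` happens - the estimate converges (almost surely, by the
    strong law of large numbers) to the true probability."""
    return sum(event(rng) for _ in range(trials)) / trials

def two_dice_at_least_ten(rng):
    return rng.randint(1, 6) + rng.randint(1, 6) >= 10

estimate = monte_carlo_probability(two_dice_at_least_ten, 200_000,
                                   random.Random(7))
print(estimate, "vs exact", 6 / 36)   # 6 of the 36 outcomes total >= 10
```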
The famous Google pagerank algorithm is an iterative algorithm with random elements and so the question of whether or not it will converge is potentially a highly complex one. I haven’t put too much
thought into this (or done much background reading), but I guess I imagine they have! Even so, I imagine it converges ‘almost surely’ at best. This means that there is a chance the Internet wouldn’t
converge... Wouldn’t that be bad? (Note - I know they use a limited number of iterations, so they’re not going to end up in an infinite loop, but still).
Having said all that, this post isn’t really about practical uses, but is more an adventure into some slightly advanced maths. I hope you enjoyed it.
Disclaimer, inspiration and further reading
This post was inspired by conversations with by Hamlet Batista on SEOmoz.
Disclaimer: it’s a long time since I studied this stuff so I might have any bit of it wrong. It’s still fun though.
If you enjoyed that bit of maths, you might like the following (though some of these need waaay more maths):
Will founded Distilled with Duncan in 2005. Since then, he has consulted with some of the world’s largest organisations and most famous websites, spoken at most major industry events and regularly
appeared in local and national press. Will is part... read more
8 Comments
Damn that brings back memories... Good post Will, I missed that post on SEOMoz, must check it out later.
Ah, Monte Carlo simulations... good times, good times.
Not long ago, I used some probability theory to come up with an almost foolproof way to win on online roulette. Unfortunately: (1) online roulette is illegal in my home state, and (2) the rate of
winning is so slow (risk/reward ratio and all that) that I'd be better off taking a part-time job.
Pete - does that system involve martingales perchance?! The problem with most 'almost foolproof' systems is that the one time you don't win you lose so much more than you win the other times!
Online roulette is easy to win at though if you use casino bonuses.
Tom - It does, indeed; honestly, I didn't know it had a name. Of course, the original doubling strategy was pretty dangerous (as you said, when you lose, you lose big) and also easily thwarted by
table limits. I started calculating other multipliers for win conditions with lower odds (3:1, 4:1, etc.). Unfortunately, you basically need a spreadsheet in front of you and have to bet such odd
amounts that you could never do it in a real casino. Plus, when you win, you win so incrementally, that it's almost not worth the bother.
Sorry, I'm not sure what this had to do with the original post, come to think of it :) Fun with probability, I guess...
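Since the thread got onto doubling systems: the martingale is easy to simulate, and the simulation shows exactly the failure mode Tom describes. The numbers here (bankroll, bet size, fair 50/50 even-money bets - already kinder than real roulette, which has a zero) are made up for illustration:

```python
import random

def martingale_session(bankroll, base_bet, rounds, rng):
    """Bet base_bet, double after each loss, reset after each win,
    until `rounds` bets are placed or the bankroll is gone."""
    stake = base_bet
    for _ in range(rounds):
        stake = min(stake, bankroll)   # bankroll/table limit, the killer
        if rng.random() < 0.5:
            bankroll += stake
            stake = base_bet
        else:
            bankroll -= stake
            stake *= 2
        if bankroll <= 0:
            return 0                   # busted
    return bankroll

rng = random.Random(1)
finals = [martingale_session(100, 1, 1_000, rng) for _ in range(2_000)]
busts = sum(1 for f in finals if f == 0)
print(busts, "of", len(finals), "sessions went bust")
```

Lots of small wins, and every so often one losing streak that wipes the lot.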
Hey Will!
Thanks for this excellent writeup. When you were talking about 'limit' you brought back memories of my first Calculus class. That is a very interesting concept.
The famous Google pagerank algorithm is an iterative algorithm with random elements and so the question of whether or not it will converge is potentially a highly complex one. I haven’t put
too much thought into this (or done much background reading), but I guess I imagine they have! Even so, I imagine it converges ‘almost surely’ at best. This means that there is a chance the
Internet wouldn’t converge… Wouldn’t that be bad? (Note - I know they use a limited number of iterations, so they’re not going to end up in an infinite loop, but still).
From my research, I think the main reason they made those adjustments I mentioned on the SEOmoz post is to make sure the Power method converges on relatively few iterations. I learned they
originally converged at 50. I assume they need more iterations now that the web is a lot bigger and so has to be the Google matrix.
I am taking a short vacation, but I'm following most things through my blackberry. Happy Holidays!
@Pete: I'm sure the casinos love "almost foolproof" ways of winning :) Martingale theory is fun - now you know what it's called, there is loads of stuff to read about it. It applies to a lot of
gambling theory - the fact that you can potentially go bust breaks a lot of otherwise sound schemes.
@Hamlet: thanks for reading on your blackberry! Your contribution is great - I think I need to look into the convergence of PR a little more.
Enjoy the season everyone :)
Somehow, I figured the average pit boss would probably put 2+2 together when my bets looked like:
$1, $1, $1, $2, $2, $3, $4, $7...
Plus, I'd always be asking for change :)
That reminds me of when I was still studying probability - I got a headache with that. Anyway, thanks for this post.
[R] Extreme AIC or BIC values in glm(), logistic regression
Gad Abraham gabraham at csse.unimelb.edu.au
Thu Mar 19 01:55:01 CET 2009
Maggie Wang wrote:
> Dear R-users,
> I use glm() to do logistic regression and use stepAIC() to do stepwise model
> selection.
> The common AIC value comes out is about 100, a good fit is as low as around
> 70. But for some model, the AIC went to extreme values like 1000. When I
> check the P-values, All the independent variables (about 30 of them)
> included in the equation are very significant, which is impossible, because
> we expect some would be dropped. This situation is not uncommon.
> A summary output like this:
> Coefficients:
> Estimate Std. Error z value Pr(>|z|)
> (Intercept) 4.883e+14 1.671e+07 29217415 <2e-16 ***
> g761 -5.383e+14 9.897e+07 -5438529 <2e-16 ***
> g2809 -1.945e+15 1.082e+08 -17977871 <2e-16 ***
> g3106 -2.803e+15 9.351e+07 -29976674 <2e-16 ***
> g4373 -9.272e+14 6.534e+07 -14190077 <2e-16 ***
> g4583 -2.279e+15 1.223e+08 -18640563 <2e-16 ***
> g761:g2809 -5.101e+14 4.693e+08 -1086931 <2e-16 ***
> g761:g3106 -3.399e+16 6.923e+08 -49093218 <2e-16 ***
> g2809:g3106 3.016e+15 6.860e+08 4397188 <2e-16 ***
> g761:g4373 3.180e+15 4.595e+08 6920270 <2e-16 ***
> g2809:g4373 -5.184e+15 4.436e+08 -11685382 <2e-16 ***
> g3106:g4373 1.589e+16 2.572e+08 61788148 <2e-16 ***
> g761:g4583 -1.419e+16 8.199e+08 -17303033 <2e-16 ***
> g2809:g4583 -2.540e+16 8.151e+08 -31156781 <2e-16 ***
I don't have an answer (and you haven't supplied the full code), but one
obvious thing is that the estimated coefficients are extremely large
(this is the linear predictor scale, so in the response scale it's even
worse since you exponentiate it). Perhaps this is due to very high
collinearity of your variables (however the standard error is low
relative to the estimate so maybe not), and/or issues of scaling (i.e.,
your variables are very small; use scale() to standardise them).
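For anyone outside R: scale() with its defaults just centres each column and divides by its (sample) standard deviation. A minimal Python equivalent, on a hypothetical tiny-valued predictor, purely to show the transform:

```python
import math

def scale(columns):
    """Centre each column to mean 0 and rescale to sample sd 1,
    like R's scale(x) with the default arguments."""
    out = []
    for col in columns:
        n = len(col)
        mean = sum(col) / n
        sd = math.sqrt(sum((x - mean) ** 2 for x in col) / (n - 1))
        out.append([(x - mean) / sd for x in col])
    return out

(scaled,) = scale([[1e-6, 2e-6, 3e-6, 4e-6]])   # tiny-valued predictor
print(scaled)   # mean ~0, sd ~1, whatever the original units were
```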
Gad Abraham
MEng Student, Dept. CSSE and NICTA
The University of Melbourne
Parkville 3010, Victoria, Australia
email: gabraham at csse.unimelb.edu.au
web: http://www.csse.unimelb.edu.au/~gabraham
More information about the R-help mailing list | {"url":"https://stat.ethz.ch/pipermail/r-help/2009-March/192136.html","timestamp":"2014-04-18T23:25:02Z","content_type":null,"content_length":"5593","record_id":"<urn:uuid:af3fabaa-35ac-4e3b-aa68-4935b8907b45>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00165-ip-10-147-4-33.ec2.internal.warc.gz"} |
Regression Trend Lines
Some of the forecast types you can choose are presented below. All of these forecasts are fit to your data using regression analysis.
Linear forecast with confidence limits
n is a user-supplied constant between 0 and 100 defining the confidence interval. For example, a value of 90 will cause confidence lines to be drawn such that the value being forecast is 90% likely
to be between the lines (5% chance of being above the upper line and 5% below the lower). To display lines at one standard deviation from the prediction line, use n = 68.3. To display lines at two
standard deviations, use n = 95.5.
In order for the confidence limits to properly reflect uncertainty in the forecast, the underlying data must conform to assumptions inherent in the linear regression model. Specifically, y must be a
linear function of x; and, the residuals must be normally distributed.
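The band described above can be sketched with ordinary least squares plus a t-based prediction interval. This is an illustration of the idea, not the product's actual implementation; the formula is the standard prediction interval for a new observation under the normal-residual assumption just stated:

```python
import numpy as np
from scipy import stats

def linear_forecast_band(x, y, conf=90.0):
    """Fit y = a + b*x and return (fit, lower, upper) such that a new
    observation at each x lies inside [lower, upper] with probability
    conf percent, assuming normally distributed residuals."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    b, a = np.polyfit(x, y, 1)
    yhat = a + b * x
    s = np.sqrt(np.sum((y - yhat) ** 2) / (n - 2))    # residual std. error
    t = stats.t.ppf(0.5 + conf / 200.0, df=n - 2)     # two-sided t quantile
    se = s * np.sqrt(1.0 + 1.0 / n
                     + (x - x.mean()) ** 2 / np.sum((x - x.mean()) ** 2))
    return yhat, yhat - t * se, yhat + t * se

rng = np.random.default_rng(0)
x = np.arange(30.0)
y = 2.0 + 0.5 * x + rng.normal(0.0, 1.0, 30)
fit, lo, hi = linear_forecast_band(x, y, conf=90.0)
print(float(np.mean((lo <= y) & (y <= hi))))   # most points fall inside the band
```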
Square Root
n is a user-supplied constant.
Seasonal trend
p1 and p2 are user-supplied season and cycle periods. All other constants are calculated automatically. | {"url":"http://www.vanguardsw.com/dphelp4/dph00108.htm","timestamp":"2014-04-18T13:12:26Z","content_type":null,"content_length":"18390","record_id":"<urn:uuid:be7a7822-65ab-4afb-a992-2a94c5cad9ac>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00206-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Numpy-discussion] Index array confusion...
Todd Miller jmiller at stsci.edu
Mon Dec 16 06:57:01 CST 2002
Magnus Lie Hetland wrote:
>Todd Miller <jmiller at stsci.edu>:
>>What I'm saying is that in the future it will work correctly. :)
>OK :)
>>Most likely as you suggest, but I haven't really looked at the code,
>>just the rather sickening results.
>Well, what I was asking about was really what you meant by "correct".
>My interpretation of "correct" here was that only tuples and slices
>would be allowed as indexes.
This actually worked as you expected in numarray-0.3.6. The current
behavior is a casualty of optimization to C.
>BTW: I found something else that I think is rather odd:
>>>>x = [0, 0, 0]
>>>>y = [1, 1, 1]
>>>>p = 1
>>>>x[p:], y[p:] = y[p:], x[p:]
>>>>x, y
>([0, 1, 1], [1, 0, 0])
>>>>x = array([0, 0, 0])
>>>>y = array([1, 1, 1])
>>>>x[p:], y[p:] = y[p:], x[p:]
>>>>x, y
>(array([0, 1, 1]), array([1, 1, 1]))
>This seems like a bug to me. The assignment ought to swap the tails of
Numeric does this as well, so I would not call it a bug, but I agree
that it is unfortunate.
>the sequences, as is the case with lists, but with numeric arrays,
>some weird form of overwriting occurs. I guess this may be an
>optimization (i.e. to avoid making copies), but it is rather
I think you're correct here. This behavior is a consequence of the
fact that array slices are views and not copies. To fix it, just say:
x[p:], y[p:] = y[p:].copy(), x[p:].copy()
>confusing. What do you think?
The manual should probably document this as a gotcha, and we might want
to consider adding an exchange ufunc which can do the swap without a
temporary. I doubt the cost of exchange is worth it though.
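A present-day sketch of the behaviour being discussed (modern NumPy rather than the Numeric module of the thread, so this is illustrative, not the code under discussion):

```python
import numpy as np

p = 1

# Tuple assignment with array slices: the right-hand sides are *views*,
# so writing into x[p:] changes y's source data before y[p:] is read.
x = np.array([0, 0, 0])
y = np.array([1, 1, 1])
x[p:], y[p:] = y[p:], x[p:]
broken = (x.tolist(), y.tolist())
print(broken)                 # ([0, 1, 1], [1, 1, 1]) -- tails not swapped

# Copying the right-hand sides first restores list-like swap semantics.
x = np.array([0, 0, 0])
y = np.array([1, 1, 1])
x[p:], y[p:] = y[p:].copy(), x[p:].copy()
fixed = (x.tolist(), y.tolist())
print(fixed)                  # ([0, 1, 1], [1, 0, 0])
```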
Thanks for the feedback,
Quantitative Finance
Quantitative Finance Defined
What is Quantitative Finance?
Quantitative Finance, also known as Mathematical Finance, is a complex field of applied mathematics that is primarily concerned with the financial markets. Quantitative finance possesses a close
relationship with financial economics, which is in essence, the underlying theory of Quantitative Finance.
In most forms, Quantitative Finance will derive and subsequently extend the numerical or mathematical model that is originally suggested by a financial economic theory. For example, a financial economist will study the structural relationships and reasons why a corporation may trade under a specific share price, while a Quantitative Financial economist will assume the share price as a given, and attempt to use stochastic calculus or other advanced forms of mathematics to elucidate the fair value of derivatives of the underlying stock. This method, which is known as the valuation of options, is one of the principal areas of focus within the field of Quantitative Finance.
The values of an option contract are dependent on a number of different variables, in addition to the share price of the underlying asset. The evaluation process is extremely complex; multiple
pricing models are used to distinguish the fair value of derivatives through an analysis of the following concepts: Moneyness, rational pricing, put-call parity, and option time value.
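As a concrete illustration of one of those concepts, put-call parity states that C - P = S - K*exp(-rT) for European options. A sketch (Python standard library only; the Black-Scholes formula and the sample parameters are illustrative assumptions, not part of the article):

```python
from math import exp, log, sqrt
from statistics import NormalDist

N = NormalDist().cdf  # standard normal CDF

def black_scholes(S, K, r, sigma, T):
    """European call and put prices; used here only to illustrate parity."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    call = S * N(d1) - K * exp(-r * T) * N(d2)
    put = K * exp(-r * T) * N(-d2) - S * N(-d1)
    return call, put

S, K, r, sigma, T = 100.0, 95.0, 0.05, 0.2, 1.0
c, p = black_scholes(S, K, r, sigma, T)

# Put-call parity: C - P = S - K * exp(-r*T), independent of volatility.
print(round(c - p, 6), round(S - K * exp(-r * T), 6))
```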
Relationship with Computational Finance:
In addition to sharing similar theories or models with financial economics, Quantitative Finance also overlaps with the field of computational finance (also referred to as financial engineering). In most areas of study, these two fields are largely synonymous; however, one glaring theoretical difference exists: computational finance focuses on application, whereas Quantitative Finance predominantly investigates derivation and modeling.
Theorem of Arbitrage-Free Pricing:
In a broad sense, the theorem of arbitrage-free pricing is an application which relates arbitrage opportunities with risk neutral devices or measures that are equivalent to the original probability measure. This theorem, in a finite state market, has two parts: 1.) There is no arbitrage if and only if a risk neutral measure is equivalent to the original probability measure. The second part states that, in a situation where arbitrage is absent, a market is viewed as complete if and only if there is a unique risk neutral measure that is equivalent to the original probability measure.
History of Quantitative Finance:
The field of Quantitative Finance grew from the thesis The Theory of Speculation, which was introduced in 1900 by Louis Bachelier. This introduction to Quantitative Finance discussed the use of Brownian motion to evaluate the earliest forms of stock options. Although the work sparked a revolution in the evaluation of option pricing, it hardly caught any attention outside of the academic population.
Another influential work of Quantitative Finance was the theory of portfolio optimization by Harry Markowitz. This publication used mean-variance analysis of portfolios to judge investment techniques; eventually this theorem sparked a shift away from trying to identify the best individual stock for a particular investment. The field of Quantitative Finance used a linear regression strategy to quantify the risk and return associated with an entire portfolio of stocks and fixed-income instruments, developing an optimized strategy for constructing a portfolio with the largest mean return subject to a given level of variance.
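The mean-variance idea described above can be sketched numerically. This is an illustrative toy (the returns and covariance matrix are made up), showing the closed-form minimum-variance portfolio w = C^{-1} 1 / (1^T C^{-1} 1) for the case with only the budget constraint sum(w) = 1:

```python
import numpy as np

# Made-up expected returns and covariance for three assets.
mu = np.array([0.08, 0.10, 0.12])
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])

# Minimum-variance weights subject only to sum(w) = 1.
ones = np.ones(len(mu))
w = np.linalg.solve(cov, ones)
w /= w @ ones

port_var = w @ cov @ w
print(w, port_var)   # diversified weights; variance no higher than any single asset's
```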
dailysudoku.com :: View topic - AICs vs. Forcing Chains
Discussion of Daily Sudoku puzzles
daj95376 Posted: Tue Jul 21, 2009 8:26 am Post subject: AICs vs. Forcing Chains
I've had something on my mind for awhile now, and decided the only way to get beyond it is to share it. I doubt if it's anything new, but I don't recall seeing anyone else express
this opinion.
Joined: 23 Aug First off, let me say that I use and am a strong supporter of AICs. I find them to be a convenient and effective way to present logical relationships. Second, there are those who
2008 support AICs but will bad-mouth Forcing Chains in the same breath. I'm writing this to put a damper on their position.
Posts: 3855
Note 1: AICs have a bidirectional property.
Note 2: Forcing Chains are based on the premise that at least one stream must be true.
Let's see where this can take us.
puzzle provided by Keith and PM+AIC provided by Norm
| 39 237 4 | 28 6 589 | 359 1 78 |
| 6 127 79 | 3 279 1589 | 59 4 78 |
| 5 137 8 | 179 4 19 | 39 2 6 |
| 2 5 6 | 14 39 14 | 7 8 39 |
| 7 9 3 | 5 8 2 | 1 6 4 |
| 4 8 1 | 79 379 6 | 2 5 39 |
| 8 37 5 | 49 1 349 | 6 79 2 |
| 1 4 29 | 6 29 7 | 8 3 5 |
| 39 6 279 | 28 5 389 | 4 79 1 |
(7=9)r2c3 - (9)r8c3 = (9)r9c13 - (9=7)r9c8 - (7)r7c8 = (7)r7c2; r123c2 <> 7
Step 1: Split the AIC at a strong inference.
1a) (7=9)r2c3 - (9)r8c3
1b) =
1c) (9)r9c13 - (9=7)r9c8 - (7)r7c8 = (7)r7c2
Step 2: Use Note 1 to rewrite right-to-left interpretation of (1a) as left-to-right.
2a) (9)r8c3 - (9=7)r2c3
2b) =
2c) (9)r9c13 - (9=7)r9c8 - (7)r7c8 = (7)r7c2
Step 3: Add implications based on each stream.
3a) (9)r8c3 - (9=7)r2c3 => r123c2 <> 7
3b) =
3c) (9)r9c13 - (9=7)r9c8 - (7)r7c8 = (7)r7c2 => r123c2 <> 7
Since the results of Step 3 are based on a strong inference (3b) between the two streams, we can apply Note 2 because one of the streams must be true. So, at least in this case,
the AIC can be viewed as the merging of two Forcing Chain streams by applying the above steps in reverse order.
Myth Jellies Posted: Thu Jul 23, 2009 7:35 am Post subject:
An AIC is a meta-pattern.
Joined: 27 Jun
2006 I can find an AIC by finding sub-patterns.
Posts: 64
These sub-patterns (bivalued cells, bilocated digits, ALS's, URs, etc) always possess the same weak and strong link characteristics no matter what puzzle I work on.
Thus, I can assemble these sub-patterns into an AIC meta-pattern and learn something new about the puzzle without ever making an assumption other than that the puzzle-maker was
following his rules.
This is very similar to the process most people use to find a naked pair. One usually sees the first ab-bivalue cell sub-pattern and then looks for an appropriately located second
ab-bivalue cell to complete the pattern. No assumptions are needed during the entire process. Not surprising really since a naked pair is an AIC loop.
A forcing chain relies on you making assumptions to start it off. Comparing an AIC to a forcing chain is the same as comparing someone who recognizes a naked pair and uses it to
make a deduction to someone who has to assume both cases of one of the naked pair bivalue cells and sees what happens to find the same result.
Thus, an AIC maker is looking for patterns that fit a pre-existing theory, while the forcing chain-ist is using ad hoc assumptions to prove things on the fly.
In addition, an AIC maker can learn to recognize and use new sub-patterns and gain the sense of satisfaction that that brings. The forcing chain-ist is limited improvement-wise to
picking a good start point, which is very similar to the dilemma faced by those who utilize Ariadne's Thread-type algorithms to solve puzzles.
I'm not meaning to bad-mouth forcing chains. Rather I am just pointing out that solving puzzles in that way leaves little room for improvement, and thus, hardly requires a
discussion forum.
daj95376 Posted: Thu Jul 23, 2009 3:53 pm Post subject:
I abhor getting into a discussion about semantics. An AIC says nothing about how the user found the final relationship. All it says is that starting at one end and proceeding to
the other end leads the reader through all of the sub-patterns used. If anything, a forcing chain at least gives the reader an idea of where the user started.
Joined: 23 Aug For example, an XY-Wing is a very common sub-pattern that's described (in this forum) as vertex, pinchers, and eliminations -- in that order. This is definitely a forcing chain
2008 description. However, this description can't be written as an AIC. Does this mean that everyone who starts by looking for the vertex in an XY-Wing is wrong ... or that they can't
Posts: 3855 transcribe it into an AIC later?
Moving along and using the AIC from above ...
(7=9)r2c3 - (9)r8c3 = (9)r9c13 - (9=7)r9c8 - (7)r7c8 = (7)r7c2; r123c2 <> 7
you wrote:
Alternating Inference Chain (AIC) is a chain which starts with an endpoint candidate which has a strong inference on the next candidate, which has a weak inference on the next
candidate, which has a strong inference on the next candidate, and so on alternating weak and strong inferences until it ends with a strong inference on the final candidate at the
other endpoint. The nodes of an AIC are really just the candidate premises themselves.
Quite simply, at least one or the other (possibly both) of the two endpoint candidates (or candidate premises) of an AIC is true. Any deductions that you can make based on that are
valid. This tends to produce the best results if the endpoints either share a group, or if the endpoints involve the same candidate. When your chain endpoints satisfy one of those
conditions, it is time to check for any deductions.
This sure sounds like the above AIC deduction is derived from ...
[r2c3]=7 => [r123c2] <> 7
[r2c3]<>7 via the chain => [r7c2]=7 => [r123c2] <> 7
In my discussion above, I did not say that the strong link used to split the AIC couldn't be the first!!!
BTW: How would you write the (ERI) situation in [b7]? I would write it as ...
(9)r89c3 = (9)r9c13
... if I were to try and make it work bidirectionally in an AIC. This issue doesn't arise in a forcing chain.
wapati Posted: Fri Jul 24, 2009 4:48 am Post subject:
daj95376 wrote:
For example, an XY-Wing is a very common sub-pattern that's described (in this forum) as vertex, pinchers, and eliminations -- in that order. This is definitely a forcing chain
Joined: 10 Jun description. However, this description can't be written as an AIC. Does this mean that everyone who starts by looking for the vertex in an XY-Wing is wrong ... or that they can't
2008 transcribe it into an AIC later?
Posts: 472
Brampton, Ontario,
Canada. I start at ends looking at any pattern. That is how I solve puzzles.
I might turn a finned-sword into a normal jelly, that is my transcribe.
From what I have seen, most methods I use can be WRITTEN as forcing chains, I don't see them that way, or find them that way.
Myth Jellies Posted: Fri Jul 24, 2009 6:36 am Post subject:
The issue is that when you describe an AIC construct via a forcing chain, you give the very natural impression that you plugged in one truth value and followed its implications
along, then you plugged in the other value and did the same thing; and where the two agreed you had your deduction.
Joined: 27 Jun The reality is you may have found it like I do, and made no such assumptions; but try explaining that to a bunch of newcomers. I have many times and it is just not easy when their
2006 primary exposure to the concept is "if you assume x then it leads to y, and if you assume not x it leads to y, therefore y"
Posts: 64
For example, an XY-Wing is a very common sub-pattern that's described (in this forum) as vertex, pinchers, and eliminations -- in that order. This is definitely a forcing chain
description. However, this description can't be written as an AIC. Does this mean that everyone who starts by looking for the vertex in an XY-Wing is wrong ... or that they can't
transcribe it into an AIC later?
Actually, I'd call your xy-wing a pattern and not a forcing chain description at all. Your vertex and pincers sub-patterns, and I'd say that they map quite readily into an AIC. You
can even use a vertical link to allow you to indicate which element you saw first in your discovery process. I do this all the time with my more complex AIC nets when I think such
information might be of interest.
(a)r1c1 - (a=c)r1c9 pincer1
|| => r1c23,r2c789 <> c
(b)r1c1 - (b=c)r2c2 pincer2
BTW: How would you write the (ERI) situation in [b7]? I would write it as ...
(9)r89c3 = (9)r9c13
For a strong-only hinge/ER subpattern, that is the best way. The reason being is that if you managed to form an AIC loop, then you can change every link in the loop to a conjugate
link (both strong and weak). When you change that to a conjugate link, you can eliminate (9)r9c3 at the vertex of the hinge. The fact that (9)r9c3 shows up on both sides of this
link makes this potential deduction more apparent.
storm_norm Posted: Wed Sep 02, 2009 8:23 am Post subject:
When you change that to a conjugate link, you can eliminate (9)r9c3 at the vertex of the hinge. The fact that (9)r9c3 shows up on both sides of this link makes this potential
Joined: 18 Oct deduction more apparent.
Posts: 1741
Myth Jellies,
do you have an example where the value in the ERI cell is eliminated in a AIC loop?
Myth Jellies Posted: Mon Sep 07, 2009 3:23 am Post subject:
storm_norm wrote:
...do you have an example where the value in the ERI cell is eliminated in a AIC loop?
Joined: 27 Jun
Posts: 64
Not for a normal sudoku. The reason for that is there is always a shorter and usually simpler discontinuous AIC that eliminates the intersection (using the same subpatterns/
candidates) that one tends to notice prior to seeing the loop.
I have used the concept in squiggly puzzles, though. There is a particular set of squiggly nonets that covers each corner and the outside edge cells with exactly 4 nonets. Thus you
have practically built-in AIC hinge/group/ER type loops around the outside edge. You can use this property to show that all the digits that occupy the cells in the four corners
must be the same digits that occupy the cells in those four nonets that do not lie on an outside edge.
Taylor's theorem
September 13th 2010, 02:19 AM #1
Senior Member
Feb 2008
Taylor's theorem
Suppose that $f(x)=\ln (1+x)$
1) Express $f(x)$ in the form $p_1(x)+R_2(x)$, where $p_1$ is the first Taylor polynomial for $f$ about 0 and $R_2$ is the Lagrange formula for the remainder.
$\Longrightarrow p_1(x)=x$
$f''(x)=-\frac{1}{(1+x)^2}\Rightarrow f''(c)=-\frac{1}{(1+c)^2}$
2) Suppose that $x\in [-0.1, 0.1]$ and consider the approximation $\ln (1+x) \approx x$. Use the answer in the first part to show that an upper bound for the absolute error in this approximation
is $\frac{1}{162}$.
How would I do this question?
Suppose that $f(x)=\ln (1+x)$
1) Express $f(x)$ in the form $p_1(x)+R_2(x)$, where $p_1$ is the first Taylor polynomial for $f$ about 0 and $R_2$ is the Lagrange formula for the remainder.
$\Longrightarrow p_1(x)=x$
$f''(x)=-\frac{1}{(1+x)^2}\Rightarrow f''(c)=-\frac{1}{(1+c)^2}$
2) Suppose that $x\in [-0.1, 0.1]$ and consider the approximation $\ln (1+x) \approx x$. Use the answer in the first part to show that an upper bound for the absolute error in this approximation
is $\frac{1}{162}$.
How would I do this question?
Well, you have already determined that $R_2(x)= -\frac{x^2}{2(1+ c)^2}$ for some c between -0.1 and 0.1, so that $|R_2(x)|= \frac{x^2}{2(1+ c)^2}$. Now what is the largest possible value for that?
Remember that a fraction achieves its maximum value when the numerator is maximum and the denominator is minimum.
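Spelling the bound out, and writing the Lagrange remainder with its factor of $2! = 2$ (from $R_2(x) = \frac{f''(c)}{2!}\,x^2$):

$$|R_2(x)| = \frac{x^2}{2(1+c)^2} \le \frac{(0.1)^2}{2(0.9)^2} = \frac{0.01}{1.62} = \frac{1}{162},$$

since $x^2 \le (0.1)^2$ and $(1+c)^2 \ge (0.9)^2$ for $x, c \in [-0.1,\, 0.1]$.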
September 13th 2010, 04:56 AM #2
MHF Contributor
Apr 2005 | {"url":"http://mathhelpforum.com/calculus/156005-taylor-s-theorem.html","timestamp":"2014-04-17T12:56:13Z","content_type":null,"content_length":"40430","record_id":"<urn:uuid:05f1ed04-8fe5-46a4-8ce5-faafd043cf2b>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00551-ip-10-147-4-33.ec2.internal.warc.gz"} |
Convex polygons
Convex polygons are polygons for which a line segment joining any two points in the interior lies completely within the figure.
The word interior is important. You cannot choose one point inside and one point outside the figure.
The following figure is convex:
No matter how I choose two points inside this polygon, the line segment joining these two points will always be inside the figure.
Notice that a triangle, whether isosceles, scalene, right, or obtuse, is always convex.
Rectangles, squares, and trapezoids are always convex too.
Finally, all regular polygons, such as a pentagon, hexagon, heptagon, octagon, and so forth, are always convex.
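The definition above suggests a simple computational test. A sketch (Python): a simple (non-self-intersecting) polygon given by its vertices in order is convex exactly when every turn between consecutive edges has the same orientation, i.e. the cross products of successive edge vectors never change sign.

```python
def is_convex(points):
    """Convexity test for a simple polygon given as (x, y) vertices in order.
    Convex iff all nonzero cross products of consecutive edges share a sign."""
    n = len(points)
    signs = set()
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        x3, y3 = points[(i + 2) % n]
        cross = (x2 - x1) * (y3 - y2) - (y2 - y1) * (x3 - x2)
        if cross != 0:
            signs.add(cross > 0)
    return len(signs) <= 1

# A square is convex; an arrow-head quadrilateral with a reflex vertex is not.
square = [(0, 0), (2, 0), (2, 2), (0, 2)]
arrow = [(0, 0), (2, 0), (1, 1), (2, 2)]   # reflex vertex at (1, 1)
print(is_convex(square))  # True
print(is_convex(arrow))   # False
```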
Accelerating, Expanding Universe Questions from a Novice
4. Are there any significant consequences between "stretching" and "new" space in terms of what that understanding tells us about the workings of the universe?
Well, first we would have to talk about what we mean by saying "stretching space" or "adding space". Space is generally thought of as a framework and not something "physical". Events happen and
objects occupy locations within space and time. When you stretch something out you are physically changing the locations of the particles that make up that object. That is, they are changing their
locations within space. But space is not an object. You cannot grab a hold of it and move it around, flex it, roll it up around you arm, etc. So does it make sense to say that more space is "added"
in between existing space? In this context, not really. Neither does stretching or expanding.
Instead, what we have is an effect that is SIMILAR to what you can do with real objects. IE the nature of spacetime is that it can cause distances between objects to increase based solely on the way
the geometry works. The reality is that there are several ways of thinking about it and visualizing it that work. Space expanding, more space being added, space flowing, etc. However to REALLY grasp
what is going on you would need to know how the math works. But, however you think about it, the results are always the same.
5. Is gravity counteracting the expansion for nearby objects?
We don't know. The model that tells us how the universe expands uses extremely complex math to do so. In order to make it even remotely possible to calculate, we have to "assume" that energy and mass
are distributed evenly throughout the universe. Which of course they are not, as otherwise we wouldn't have big clumps of dirt and gas in the form of planets, stars, galaxies, etc.
The two main possibilities I know of are that gravity completely counteracts the expansion and no expansion takes place around massive objects at all. The other is that gravity doesn't completely
counteract it, and the expansion acts like a very very tiny repulsive force. Gravity and the other forces of nature are still overwhelmingly more powerful than the expansion, so everything still
stays together just fine.
6. Do we expand? If the expansion of space stretches the wavelength of light, causing a redshift, then does the expansion of space stretch matter? That is to say, are you and I a little larger than
you and I from yesterday? If that is the case, how would we detect it, since 1 cm today is a little larger than 1 cm from yesterday?
That's all for now.
We do not, as per above. | {"url":"http://www.physicsforums.com/showthread.php?s=59d44b5f1e280168a809032a71406ef5&p=4519776","timestamp":"2014-04-24T17:43:35Z","content_type":null,"content_length":"42649","record_id":"<urn:uuid:cd86fa88-c5e6-478a-b2af-0b52caba5751>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00453-ip-10-147-4-33.ec2.internal.warc.gz"} |
Rectangle's Dimensions
A rectangle has a perimeter of 23 cm. Its area is 33 cm². Determine the dimensions of the rectangle. Can anyone help?
Homertyson, each rectangle has its opposite sides equal. In your case, let one side be x and the other y. Then what is the perimeter? It is 2x + 2y, and it is 23 cm; this is one equation. Then what is the area of this rectangle? It is xy, and it is equal to 33 cm². If you understand all this, then the only thing remaining is to solve the system of simultaneous equations 2x + 2y = 23 and xy = 33 to find the dimensions of the rectangle. Solve it to find x = 6 and y = 11/2.
MINOAS
1. Graph the function f(x) = (x + 3)^3 by hand and describe the end behavior.
Fundamentals of Heat and Mass Transfer 7th Edition Chapter 13 Solutions | Chegg.com
e) Sphere lying on infinite plate
For spherical surface, the view factor from the surface itself
By using the reciprocity rule, the view factor
Therefore, the view factors are,
f) Hemisphere of diameter D, over a disc of diameter D/2
For flat disc surface, the view factors from the surface itself
Similarly, using the summation rule
By using the reciprocity rule, the view factor | {"url":"http://www.chegg.com/homework-help/fundamentals-of-heat-and-mass-transfer-7th-edition-chapter-13-solutions-9780470501979","timestamp":"2014-04-21T13:29:21Z","content_type":null,"content_length":"81498","record_id":"<urn:uuid:30784029-17ba-496a-b7d0-49f1cde50941>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00039-ip-10-147-4-33.ec2.internal.warc.gz"} |
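The two rules used throughout these solutions are reciprocity, A_i F_ij = A_j F_ji, and summation, sum over j of F_ij = 1. A sketch of how they combine for case (e); the value F(sphere to plate) = 1/2 is an assumed standard tabulated result, and the variable names here are illustrative, not from the textbook:

```python
import math

# Case (e): sphere of diameter D resting on an effectively infinite plate.
D = 1.0
A_sphere = math.pi * D ** 2          # surface area of the sphere

F_sphere_sphere = 0.0                # a convex surface cannot see itself
F_sphere_plate = 0.5                 # assumed standard result for this geometry

# Summation rule: whatever is left goes to the surroundings.
F_sphere_surr = 1.0 - F_sphere_sphere - F_sphere_plate

# Reciprocity rule: A_plate * F_plate_sphere = A_sphere * F_sphere_plate.
# As the plate area grows, its view factor to the sphere vanishes.
for A_plate in (1e2, 1e4, 1e6):
    F_plate_sphere = A_sphere * F_sphere_plate / A_plate
    print(A_plate, F_plate_sphere)
```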
Washington, DC Prealgebra Tutor
Find a Washington, DC Prealgebra Tutor
...As a tutor, my main job isn't to talk, but to listen: I think the real centerpiece is getting my students to explain the material in their own words. Everyone learns differently, and the beauty
of tutoring is that we can adapt our approach on-the-fly to address just what it is that one specific ...
18 Subjects: including prealgebra, writing, calculus, geometry
...I am willing to travel, and my schedule is currently very open for tutoring. I prefer to tailor my approach to teaching a subject based on each student's preferred learning style. I look
forward to working with you!
11 Subjects: including prealgebra, chemistry, geometry, algebra 1
I am a Princeton educated tutor willing to go above and beyond for students. I have experience tutoring students at Huntington Learning Center for the SAT/ACT, reading, math, calculus, and science
(bio, chem, physics). I am a patient person and willing to work with students to achieve their best.
33 Subjects: including prealgebra, chemistry, biology, calculus
...I look forward to supporting your academic needs.Over 7 years of experience in information technology on a corporate, government, academic level. Previous experience in field as system support
technician (including work with servers, training, connectivity, web design, and more.) 3 Years as Des...
41 Subjects: including prealgebra, chemistry, English, reading
...I have Master's degrees in Atmospheric, Oceanic, and Space Science and in Electrical Engineering from the University of Michigan. My students and their parents will tell you that I'm committed
to their success and fun to work with. I will make myself as flexible as possible to work around your busy schedules.
20 Subjects: including prealgebra, physics, calculus, SAT math | {"url":"http://www.purplemath.com/Washington_DC_Prealgebra_tutors.php","timestamp":"2014-04-18T06:12:50Z","content_type":null,"content_length":"24286","record_id":"<urn:uuid:3a995611-f2e5-475b-a8ba-968c710963cb>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00456-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Complex Class
Entisoft Tools 2.0 Object Library
Version 2.1 Build 208
Declarations Section, DivideByZero Property, Absolute Function, Addit Function, ApproxComplex Function, Conj Function, CubicExpr Function, CubicRoot Function, CubicRootBenchmark Sub, CubicRootSample
Sub, Div Function, FibonacciComplex Function, ImagPart Function, Inv Function, IsComplex Function, Max2Complex Function, Max3Complex Function, Min2Complex Function, Min3Complex Function, Mult
Function, Neg Function, NthRoot Function, Power Function, PowerVerify Sub, QuadraticExpr Function, QuadraticRoot Function, QuadraticRootBenchmark Sub, QuadraticRootSample Sub, RealToComplex Function,
RealPart Function, Sign Function, Sqrt Function, Square Function, Subtract Function, ComplexNumbersBenchmark Sub
For differentiating between divide-by-zero and other types of errors when testing the trig. functions. Currently defined as the Null value.
"Absolute Value" Absolute value of a real number. Returns the norm of a complex number. Function always returns either Null or a real number.
"Addition" Add two real and/or complex numbers.
"Approximate" Return True if two expressions are approximately equal. Compare the real and imaginary parts of complex numbers separately.
"Conjugate" Conjugate of a real or complex number.
"Evaluate The Cubic Expression" Evaluates the expression A X^3 + B X^2 + C X + D where A, B, C, D, and X are real and/or complex numbers.
"Root Of Cubic Equation" Returns one of the three roots of the cubic equation. Solves the equation A X^3 + B X^2 + C X + D = 0 for X. Real solutions are returned as a numeric value and complex
solutions are returned as a complex number string.
Run a benchmark of the CubicRoot function.
Print some samples of the CubicRoot function.
"Divide" or "Division" Divide two real and/or complex numbers.
"Nth Fibonacci Number" Returns the Nth Fibonacci number where N can be a floating-point or complex number.
"Imaginary Part" Return the iamginery part of a complex (or real) number. Function returns 0 (zero) if the argument is a real number.
"Inverse" Reciprocal of a real or complex number.
Determine if the argument vX is a complex (or real) number, according to the way that complex numbers are represented within this library, which is as a string like "R|I" where R is the real part
and I is the imaginary part of the number.
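The library's convention of writing a complex number as the string "R|I" (real part, vertical bar, imaginary part) can be illustrated with a short sketch. This is my own Python approximation of the idea, not Entisoft's actual implementation, and the helper names are hypothetical:

```python
def parse_complex(s):
    """Parse an "R|I" string (or a plain real) into a Python complex."""
    if isinstance(s, str) and "|" in s:
        real, imag = s.split("|")
        return complex(float(real), float(imag))
    return complex(float(s))

def format_complex(z):
    """Format a Python complex back into the "R|I" string form."""
    return f"{z.real:g}|{z.imag:g}"

# Multiplication in the style of the library's Mult function:
print(format_complex(parse_complex("1|2") * parse_complex("3|-1")))  # "5|5"
```

A real number passed as a plain string (no vertical bar) simply parses to a complex with zero imaginary part, mirroring how the library treats reals and complexes interchangeably.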
"Maximum Of Two Numbers" Maximum of two real and/or complex numbers. Returns argument on left (vX) if the arguments are equal real numbers.
"Maximum Of Three Numbers" Maximum of three Real and/or Complex numbers. Returns the leftmost of the maximum arguments if all arguments are real numbers.
"Minimum Of Two Numbers" Minimum of two Real and/or Complex numbers. Returns argument on left (vX) if arguments are equal real numbers.
"Minimum Of Three Numbers" Minimum of three Real and/or Complex numbers. Returns the leftmost of the minimum arguments if all arguments are real numbers.
"Multiply" or "Multiplication" Multiply two real and/or complex numbers.
"Negate" or "Negative" Negate a real or complex number.
Return the Nth root given real and/or complex numbers.
"X To The Power Of Y" X and Y can be real and/or complex numbers.
Display some results of calling the Power function with some real number arguments to help verify that the function is working properly.
"Evaluate The Quadratic Expression" Evaluates the expression A X^2 + B X + C where A, B, C, and X are real and/or complex numbers.
"Root Of Quadratic Equation" Returns one of the three roots of the cubic equation. Solves the equation A X^2 + B X + C = 0 for X. Real solutions are returned as a numeric value and complex
solutions are returned as a complex number string.
Run a benchmark of the QuadraticRoot function.
Print some samples of the QuadraticRoot function.
"Real Numbers To Complex Number" Construct a complex number string from two real numbers.
"Real Part" Return the real part of a complex (or real) number. Function returns vX unchanged if vX is already a real number.
Sign of a real or complex number.
"Square Root" Square root of a real or complex number.
Square of a real or complex number.
"Subtraction" or "Minus" Real or Complex subtraction of two numbers.
Run a benchmark of many of the complex number routines.
This Windows-based ActiveX DLL provides many useful routines through its function-bearing and data structure classes. Consult the Help file for more information, or call us at 1-310-472-3736. For the
latest news and files, visit our home page on the World Wide Web: http://www.entisoft.com
Copyright 1996-1999 Entisoft
Entisoft Tools is a trademark of Entisoft. | {"url":"http://www.entisoft.com/ESTools/MathComplex.HTML","timestamp":"2014-04-18T21:28:51Z","content_type":null,"content_length":"11207","record_id":"<urn:uuid:ab51d739-28c8-4e6e-b430-d2dc6a9c1bcb>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00439-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: st: Problem with ML estimation
From "Stas Kolenikov" <skolenik@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: Problem with ML estimation
Date Fri, 18 Aug 2006 13:44:17 -0500
Oops, that was right, you had four dependent variables in your -ml-
statement, that's what I was thinking about. I still believe that you
have identification problems with your program and your data. As you
most likely know, +\infty for Stata is a missing value, and if the
maximum is achieved at infinity, then the likelihood is flat in the
neighborhood of it, and that is what Stata complains about: it cannot
compute the derivative with adequate precision, as the subtraction of
the numerator and/or denominator in the numerical derivative is
machine zero. So even though the model might be technically
identified, it is empirically underidentified (i.e., for a given
combination of parameters and the data), and it is not identified in
computer arithmetics precision.
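The failure mode described here (a likelihood so flat that numerical derivatives come out as machine zero) is easy to reproduce outside Stata. Below is a hedged sketch in Python using a logistic log-likelihood for a single "separated" observation; it mimics the arithmetic problem, not Stata's -ml- internals:

```python
import math

def loglik(b):
    # Log-likelihood of one "separated" observation under a logistic model:
    # log(1 / (1 + exp(-b))). As b grows, this approaches 0 from below.
    return -math.log(1.0 + math.exp(-b))

def central_diff(f, b, h=1e-4):
    return (f(b + h) - f(b - h)) / (2.0 * h)

# At a moderate parameter value the numerical derivative is fine...
print(central_diff(loglik, 1.0))   # roughly 1/(1+e) = 0.269
# ...but far out in the flat region, 1 + exp(-b) rounds to exactly 1.0,
# so the finite-difference numerator is machine zero and the derivative
# "cannot be computed" with any precision.
print(central_diff(loglik, 40.0))  # 0.0
```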
On 8/18/06, DEEPANKAR BASU <basu.15@osu.edu> wrote:
I am not sure I understand your comment. I am trying to estimate two parameters
(both constants) and not four. These constants are those contained in `theta3' and
`theta5'. Since I have three distinct observations, the model should work. Or am
I missing something?
I can of course analytically see that the result of the estimation should give me
`theta3' as +\infty (or a very large number). Would it be a problem for Stata if the
result is +\infty?
Stas Kolenikov
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2006-08/msg00481.html","timestamp":"2014-04-17T16:03:31Z","content_type":null,"content_length":"7158","record_id":"<urn:uuid:6d60a5c9-9a85-48b1-81bc-79aa8aff48b4>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00071-ip-10-147-4-33.ec2.internal.warc.gz"} |
A brief history of problems that ruined my world
Posted by Louis Brandy on 26 July 2009
A different (maybe better) title for this might be: the most frequently banned topics on the internet. Below I present a set of questions that are so counter-intuitive that their very mention can set
off the right audience into a complete flame war. Each of them has been known to paralyze various online communities resulting in monster threads and lots of insults (and in one case, the company
stepping in to solve the dispute). You can tell the intellectual average of a community based upon the sophistication of the brain-exploding questions that give it fits.
.9~ and 1
Is .9999~ = 1 or is .9999~ < 1?
When I was 12, this problem ruined my world. The answer is, of course, .9~ is exactly equal to 1. In elementary school, they tried to placate me with algebraic trickery, but I wasn't buying that.
Obviously, .9~ is the number right before 1, but not equal to 1. Obviously! It took me awhile to come to terms with the fact that any two distinct real numbers have an infinite amount of numbers
between. If .9~ and 1 were distinct, they'd have an infinite amount of numbers between them. Since they don't, they cannot be distinct. And so I moved on.
The monty hall problem
Suppose you’re on a game show and you’re given the choice of three doors. Behind one door is a car; behind the others, goats. The rules of the game show are as follows: After you have chosen a
door, the door remains closed for the time being. The game show host, Monty Hall, who knows what is behind the doors, now has to open one of the two remaining doors, and the door he opens must
have a goat behind it. After Monty Hall opens a door with a goat, he will ask you to decide whether you want to stay with your first choice or to switch to the last remaining door. Imagine that
you chose Door 1 and the host opens Door 3, which has a goat. He then asks you “Do you want to switch to Door Number 2?” Is it to your advantage to change your choice?
When I was in high school, this problem ruined my world. The answer is, of course, you should switch doors. At first, you might believe, that you are still choosing between two doors, and so it
doesn't matter, it's 50-50. After you think a little bit more carefully you realize that the rules that Monty plays by gives him no choice but to give you information, making the choices not so
equal. Intuition squares with this, eventually.
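One way to convince a stubborn intuition is brute force. Here is a minimal simulation sketch in Python (my own rendering of the standard rules stated above):

```python
import random

def play(switch, rng):
    doors = [0, 1, 2]
    car = rng.choice(doors)
    pick = rng.choice(doors)
    # Monty must open a goat door that is not the player's pick.
    opened = rng.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

rng = random.Random(0)  # fixed seed so the run is reproducible
trials = 100_000
stay = sum(play(False, rng) for _ in range(trials)) / trials
swap = sum(play(True, rng) for _ in range(trials)) / trials
print(f"stay: {stay:.3f}  switch: {swap:.3f}")  # roughly 1/3 vs 2/3
```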
Airplane on a treadmill
A plane is standing on a large treadmill or conveyor belt. The plane moves in one direction, while the conveyor moves in the opposite direction. This conveyor has a control system that tracks the
plane speed and tunes the speed of the conveyor to be exactly the same (but in the opposite direction). Can the plane take off?
When I was in college, this problem ruined my world. The answer is, of course, the plane will have no trouble taking off. Alright, that's not quite true. The problem with this problem is that it's
open to a bit of interpretation. A more precise answer is if the pilot wants to take off, he will, regardless of what the treadmill does. This is basically because the airplane wheels are not where
the power is, the engines are. And the engines are pushing against the stationary air, not the moving treadmill.
The second ace paradox
There are two games of bridge going on somewhere in the world, right now. In one of those games, Alice announces, "I have an ace in my hand." In the other of those games, Bob announces, "I have
the ace of spades in my hand." Whose hand is more likely to contain a second ace, Alice or Bob?
In grad school, this problem ruined my world. The answer is, of course, Bob is much more likely than Alice to have a second Ace. I've run the math on this and it's absolutely true. I still don't have
a good intuition for why.
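For the record, the math checks out exactly with binomial coefficients, no simulation needed. A sketch in Python (13-card hands from a standard 52-card deck; `math.comb` requires Python 3.8+):

```python
from math import comb

total = comb(52, 13)
no_ace = comb(48, 13)
one_ace = 4 * comb(48, 12)           # choose which ace, then 12 non-aces

# Alice announced "an ace": P(at least two aces | at least one ace)
at_least_one = total - no_ace
p_alice = (at_least_one - one_ace) / at_least_one

# Bob announced "the ace of spades": P(another ace | hand has the A of spades)
with_as = comb(51, 12)               # hands containing the ace of spades
as_alone = comb(48, 12)              # the ace of spades plus 12 non-aces
p_bob = (with_as - as_alone) / with_as

print(p_alice, p_bob)  # about 0.3696 vs 0.5612: Bob is more likely
```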
The two envelopes problem
You are given two envelopes, one of which contains twice as much money as the other. You are asked to choose one to keep. No matter which envelope you choose, the other one has a higher expected
value. So you should switch your choice. But now... the original has a higher expected value.. so you should switch back. But now...
This problem, and its variants, still ruin my world. The answer is, of course, well, uhm. I actually heard this problem in college and not only do I not have a good answer to what's going on,
apparently no one does. That makes me feel better.
In the simple version above, the flaw in the logic comes at the very beginning where it is presumed that the initial value is an unbounded and uniform random number. An unbounded, uniform
distribution is impossible, and so the number chosen cannot be from that distribution. Certain possible distributions have solutions. However, there are variants of this problem that have possible
distributions and still present the paradox. Good luck with that.
© louis brandy — theme: midnight by mattgraham — with help from jekyll bootstrap and github pages | {"url":"http://lbrandy.com/blog/2009/07/a-brief-history-of-problems-that-ruined-my-world/","timestamp":"2014-04-18T15:38:49Z","content_type":null,"content_length":"9636","record_id":"<urn:uuid:5838d475-1cd8-47ff-8061-3de55492cd42>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00253-ip-10-147-4-33.ec2.internal.warc.gz"} |
Weehawken Precalculus Tutor
Find a Weehawken Precalculus Tutor
...I have tutored high school geometry both privately and for the Princeton Review. I have a bachelor's degree in physics. I have experience tutoring pre-algebra and have a bachelor's degree in
20 Subjects: including precalculus, English, algebra 2, grammar
...I'm strongest at tutoring students in math and science (geometry, algebra, trigonometry, chemistry, physics, and biology). I also tutor in French language and grammar, having studied the
language for over 10 years. Lastly, I can help prepare for the math and science sections of the SAT and ACT, ...
37 Subjects: including precalculus, chemistry, physics, calculus
...I'd love to work with you! But, please keep in mind that I am working really hard to finish my PhD right now, so I am available for appointments after 7pm on weeknights and after 10am on most
weekends. I request 24 hours notice for cancellation.
17 Subjects: including precalculus, calculus, geometry, biology
...I am also proficient to teach math to children up to 7th grade because I have tutored my own nieces so far and they both have the best math scores in their class (level 1).I can teach Mandarin
at business level and Japanese at conversational level. I am easygoing and get along with children at all ages very well. Well, I am sure it should be even easier to communicate with adults.
6 Subjects: including precalculus, calculus, Japanese, Chinese
Hello,My goal in tutoring is to develop your skills and provide tools to achieve your goals. My teaching experience includes varied levels of students (high school, undergraduate and graduate
students).For students whose goal is to achieve high scores on standardized tests, I focus mostly on tips a...
15 Subjects: including precalculus, chemistry, calculus, geometry | {"url":"http://www.purplemath.com/weehawken_precalculus_tutors.php","timestamp":"2014-04-19T17:44:44Z","content_type":null,"content_length":"24049","record_id":"<urn:uuid:0689cce2-7bd6-4c5b-bffd-77bb67edd0c5>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00483-ip-10-147-4-33.ec2.internal.warc.gz"} |
books on math for grade school children
Mon Jan 21, 2013, 10:08 AM
phantom power (23,714 posts)
books on math for grade school children
I wasn't sure whether to post this question here, or in Parenting, or somewhere else. I decided here, since the audience seemed the most likely to have good leads.
Anyway: my daughter keeps telling us she "hates math" and "isn't good at math." This makes me want to gouge my eyes out, because (a) it isn't true (she seems to score well above average in math) and
(b) since she is in 3rd grade, most of the "math" she is learning is really arithmetic.
Arithmetic is obviously crucial, but it worries me because math doesn't really start to get interesting until you get to geometry and algebra. I'm worried that by that time she'll have solidified
this "I don't like math" or even worse "girls don't like math" horseshit. I'm her father, and if I told her the sky was blue, she'd argue that it was chartreuse. Or mauve. So me just trying to
explain these things to her doesn't work very well.
My real question: does anybody know of good books on math for grade schoolers. By that I mean, books that introduce cool patterns, somehow convey the beauty of numbers. Ideally, without needing to
know algebra. Does this limit it to geometry? Not sure I know how to really explain what I'm looking for, but hopefully you get the idea.
It may be that in the end, she just won't be especially interested in math. Not everybody is. But it would be damn sad if she reached that conclusion prematurely. I'd like to try and prevent that.
11 replies, 965 views
Replies to this discussion thread
Author Time Post
phantom power Jan 2013 OP
knitter4democracy Jan 2013 #1
Some Recommendations TM99 Jan 2013 #2
Glassunion Jan 2013 #3
nenagh Jan 2013 #4
phantom power Jan 2013 #6
Several pokerfan Jan 2013 #5
Vincardog Jan 2013 #7
tridim Jan 2013 #8
struggle4progress Jan 2013 #9
phantom power Feb 2013 #10
Have you tried littlemissmartypants Feb 2013 #11
Response to phantom power (Original post)
Mon Jan 21, 2013, 10:11 AM
knitter4democracy (14,208 posts)
1. The education group can help.
There are a few elementary and math teachers there.
Part of the problem might be the way she's being taught math. My daughter had the same reaction to the Chicago Method but is doing better now. My son thinks it's boring. Anyway, a popular method to
really help her nail her skills is the Saxon series, but for the beauty of numbers, that I would ask a math teacher about. They deal with that more.
Response to phantom power (Original post)
Mon Jan 21, 2013, 10:45 AM
TM99 (1,290 posts)
2. Some Recommendations
I love math. My sister does not. Her son is much like your daughter in that he is good at math but because he hears his mother constantly say she does not like it and is not good at it, he,
therefore, is not either.
I have recommended the following to her to augment his studies, and they seem to enjoy them.
The first is Chalkdust -
It is very structured and is broken down by both grade and type of maths being learned. The second is Math-U-See -
It is also very structured and complete, however, you will need to check the level to see which one would be appropriate for her to start with at her age and current level. It might be either Alpha
or Beta though you can certainly go back and start with the Primer.
If you can spark that interest now, once she is a bit older, I highly recommend Danica McKellar. Many know her only as the actress who played Winnie on the Wonder Years, but she also holds a math
degree from UCLA and is an advocate for math education for young women. I have had the pleasure of meeting her a few times, and she is a smart and passionate young woman. Here is her site:
Good luck.
Response to phantom power (Original post)
Mon Jan 21, 2013, 10:48 AM
Glassunion (6,174 posts)
3. Keep your eye out for "The Animation of Mathematics"
It is soon to be published. The author is Dr. Joseph Phillips.
I was a student of his many moons ago and he was the only professor who has ever been able to excite me about math. Ever.
From my understanding, this text book is aimed at the grade school level. I have been able to teach my 6 year old nephew bits and pieces of algebra by using the Dr.'s methods. He can break down the
most difficult of problems right back to basic math.
To this day I can reduce ridiculous fractions in a blink.
Response to phantom power (Original post)
Mon Jan 21, 2013, 11:36 AM
nenagh (1,536 posts)
4. OK, from a curious Mother...is she being taught something like fractions in grade 3?
My son learned the times tables etc about grade 3.. The teacher (Ontario) had the children stand beside their desks and recite the times tables starting with x1, x2 etc..
If the child could recite the x table within a certain period of time..they moved on to the next higher #.. Dire for kids who were shy. My son was stressed out..thought this math was just too
One night when he had been asleep for 20 min or so, I whispered in his sleeping ear...You will really like math the thing is, once you have learned the numbers, the answers never change..so it's
He flew along after that..but I probably, like my Dad did, I coached him at home..on the basics..in a non threatening way.
My Dad had a great manner when teaching..sitting down to help with my Physics homework, he always said the same thing: 'Lets keep this simple'. There was no recrimination..it was like an adventure
because he always related it to something concrete..that I could visualize.. ( the mechanical Engineer method).
He was very helpful, caring and practical and positive.. Helping with homework..
Lastly.. My son was slow to learn reading..which did not surprise me given the method: white sheets of paper with black printing.. Based on similar vowel sounds...an absolute hodge podge of
meaningless words with no connection but the vowel..
But, if a child now, with testing, is weak in reading..they will have trouble reading math 'problem' questions..
Re my son, he loved computer games and computer game magazines made that easier...so I bought them for him..and he taught himself to read faster in order to get hints about playing Nintendo games...
Wonder who your daughter is hearing she is bad at math from...may be school friends..
Anyway..you might talk to her teacher... For good ideas...
Lastly, my Dad never hassled over test results...but my Mother was a demon on the subject..very stressful..
I don't know if that helps at all... but my son did catch on to math, enjoyed it, learned calculus and graduated a computer engineer...
In grade three..he needed encouragement..plenty of it.. and support...
( I saw a Florida grade 3 math exam..which had fractions etc..and I thought it was very difficult... I'd learn as much as possible about what she is learning. .maybe your help for her can start or
end with a treat.. Sorry I'm quite an old Mother, just giving hints from long ago)
Response to nenagh (Reply #4)
Mon Jan 21, 2013, 12:07 PM
phantom power (23,714 posts)
6. She's learning fractions, multiplication tables, word problems
She does just fine with them, and I don't have any concerns that she isn't learning that stuff. They don't really keep track of grades at this point, but she generally scores 90-100% on the stuff she
brings home. I don't think I've ever seen her score lower than 80%.
It's just all the complaining and "I'm not good at math" stuff. I'm about 99% sure she gets it from other students. She sure as hell doesn't hear it from us
Part of my problem is, she's just very strong willed. Contrary. Bloody-minded, really. Makes it unbelievably hard to try and explain that she's wrong about something. I wear special mittens to keep
me from gouging out my eyes.
Reading and writing aren't a problem for her. She got classified as gifted in language arts. She reads at what I'd consider a high school level, at least in the mechanical aspects. It really, really
upsets her to be wrong. I think maybe part of what's up with math is, when you are wrong, you're wrong. There's a little more room for subjectivity in language arts.
"I'm gifted at language arts, but I wasn't gifted at math"
"That was just a test, kiddo, it doesn't mean you aren't good at math, or that you don't like it"
"Well maybe you should go back to school, because you don't know what I'm talking about"
Please kill me.
Response to phantom power (Original post)
Mon Jan 21, 2013, 05:44 PM
Vincardog (17,866 posts)
7. Try to find the math in what interests her. Instead of forcing the math is good meme
show her where math in beautiful in what interests her.
Don't reinforce her opinion about "not being good at math"
but show her how it is math is loves in her interests.
Response to phantom power (Original post)
Tue Jan 22, 2013, 09:17 AM
tridim (43,421 posts)
8. Not a book, but the most interesting teacher on the planet, Vi Hart...
I would have been a different person had Vi been around when I was in school.
Mandelbrot's books also got me excited about math, but I was already in college and not a math major.
Response to phantom power (Original post)
Mon Feb 4, 2013, 01:52 PM
phantom power (23,714 posts)
10. thanks to everybody for their suggestions
I did play a couple Vi Hart youtube shorts on apple TV (which is kind of a fun way to show them).
My daughter liked the circle game one, and then announced that next time she was bored in math class she would draw pictures like Vi. Not exactly the message I was aiming for
But, she liked it | {"url":"http://www.democraticunderground.com/122814704","timestamp":"2014-04-21T00:30:31Z","content_type":null,"content_length":"52645","record_id":"<urn:uuid:c8038e0a-9e09-474a-95b5-45f4ad6c476f>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00653-ip-10-147-4-33.ec2.internal.warc.gz"} |
March 14 - Celebrate Pi Day in Its San Francisco Birthplace
For all of you nerds, pie-in-the-sky dreamers and circular reasoners (and anyone who's been accused of being, or wants to be, any of those), your day has come: Pi Day, on March 14. In 2014, Pi Day
turns 26.
A quick refresher: Pi, the ratio of a circle's circumference to its diameter, is 3.14159.... It's an irrational number, so its decimal expansion runs forever without settling into a repeating pattern; it has been calculated to 10 trillion decimal places so far, and in the digits computed to date each digit appears about as often as any other. No pattern or repeating sequence has been found among the digits, so pi appears to be statistically random (a property that, strictly speaking, remains unproven).
San Francisco has a unique claim to Pi Day--it was born in our very own Exploratorium, the brainchild of physicist Larry Shaw. In 1988, Shaw got his Exploratorium colleagues to help build a pi
shrine, circle around it and eat pie. The U.S. House of Representatives in 2009 passed a non-binding resolution recognizing March 14 as National Pi Day. The reasoning was simple: "Whereas since 1995
the United States has shown only minimal improvement in math and science test scores" and "America needs to reinforce mathematics and science education...to compete in a 21^st-century economy," our
reps figured that designating and promoting Pi Day might nudge those scores up. It's a nice coincidence that March 14 is Albert Einstein's birthday.
Pi Day is now an international phenomenon. In San Francisco, here are some public celebrations in 2014 of the homegrown holiday. On your own, you can try the Exploratorium website's pi puzzlers, read
or watch Life of Pi, watch Pi, memorize as many pi digits as you can, walk, spin and think in circles, eat pie, and be irrational for the day.
26th Annual Pi Day
March 14, at 1-3:30 pm
Come to the birthplace of Pi Day. The homage includes a pi procession to the Pi Shrine, demos and talks about pi, tossing of pizza pie dough and servings of pie.
At the Exploratorium, Pier 15, San Francisco 94111. Free.
Pi Day Puzzle Party
March 14, at 7 pm
Exercise your gray matter in a fun and lively competition to solve math and logic puzzles. You can fly solo or with a crew of up to six people; come with crew mates or form a team once you arrive.
Bring pencils and scratch paper, and for brain power, chomp on food truck offerings. Hosted by math guy Wes Carroll and organized by Ask a Scientist.
At the SoMa StrEat Food Park, 428 11th St., San Francisco. Free. | {"url":"http://sanfrancisco.about.com/od/forkids/qt/Celebrate-Pi-Day-In-San-Francisco.htm","timestamp":"2014-04-16T19:10:24Z","content_type":null,"content_length":"39859","record_id":"<urn:uuid:0af3242f-742d-4919-a092-09acba8cfe88>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00560-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lewisville, PA Math Tutor
Find a Lewisville, PA Math Tutor
...I took courses in combinatorics, algorithms, introduction to discrete math, Ramsey theory, integer programming and other miscellaneous discrete math courses. I taught Intro to Discrete
Mathematics at Rochester Institute of Technology. I hold a PhD in Algorithms, Combinatorics and Optimization.
18 Subjects: including trigonometry, differential equations, algebra 1, algebra 2
...I have been playing the double bass and bass guitar for many years and have studied jazz bass and piano with seasoned performers. I have received several undergraduate poetry prizes, including
First Place in Christianity & Literature's Student Writing Contest. I was also a Research Assistant for Dr.
38 Subjects: including calculus, composition (music), ear training, elementary (k-6th)
...Toxicologists, Epidemiologists, Environmental Engineers, Health Promotion Officers, Occupational Docs). I truly do enjoy my job because I get to interact with scientists of all facets and my
projects vary from day to day. It keeps me on my toes and keeps my awareness and knowledge of statistica...
6 Subjects: including SAT math, basketball, statistics, GRE
I am currently employed as a secondary mathematics teacher. Over the past eight years I have taught high school courses including Algebra I, Algebra II, Algebra III, Geometry, Trigonometry, and
Pre-calculus. I also have experience teaching undergraduate students at Florida State University and Immaculata University.
9 Subjects: including algebra 1, algebra 2, geometry, prealgebra
I have been speaking Spanish for 17 years and am currently in the middle of getting my Master's in Spanish. I have helped many students of varied levels, middle school, high school and college
with their classes. Please contact me with any questions. Thanks!
8 Subjects: including prealgebra, algebra 1, Spanish, ESL/ESOL
An operation * has 2 possible identity element:which one's the real identity element?
August 24th 2012, 02:32 PM
An operation * has 2 possible identity element:which one's the real identity element?
The rule for finding an identity element is laid out in Pinter's Abstract Algebra book like this, on page 24:
"First solve the equation $x * e = x$ for $e$; if the equation cannot be solved there is no identity element. If it can be solved, it is
still necessary to check that $e * x = x * e = x$ for any $x \in \mathbb{R}$. If it checks, then $e$ is an identity element."
Suppose there is an operation $*$ on $\mathbb{R}$ such that $x * y = \left | x + y \right|$ and we need to find the identity element.
So according to the above rule and by the definition of the operation $*$:
$x * e = \left|x + e \right| = x$
so $x = x + e$ or $x = - (x + e)$
$e = 0$ or $e = 2x$
Now $x * e = x * 0 = \left| x + 0 \right| = \left| x \right| = x$
But $x * e = x * 2x = \left| x + 2x \right| = \left| 3x \right| \neq x$
So can I say that $e = 0$ is an identity element for the operation $*$ because it fulfills the property of identity element?
Can anyone kindly tell me if I'm right or wrong to say that $e = 0$ is an identity element for this operation?
August 24th 2012, 02:41 PM
Re: An operation * has 2 possible identity element:which one's the real identity elem
Sorry I got the solution.
You see, if $x = -1$ and $e = 0$ then $x * e = \left| -1 + 0 \right| = 1 \neq -1$
So no $e = 0$ is not an identity element for the operation $*$ on the set $\mathbb{R}$ in this case.
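As a sanity check on this conclusion, a brute-force search (my own sketch in Python) confirms that no candidate e satisfies |x + e| = x once negative values of x are included, since the left side is never negative:

```python
def is_identity(e, samples):
    # e is an identity for x * y = |x + y| only if |x + e| == x for every x
    return all(abs(x + e) == x for x in samples)

samples = [-3, -1, -0.5, 0, 0.5, 1, 3]
candidates = [0, 1, -1, 2, -2]  # includes the tempting e = 0
survivors = [e for e in candidates if is_identity(e, samples)]
print(survivors)  # [] : no identity element once negative x are allowed
```

Note that e = 0 does pass if every sample is non-negative, which is exactly why the earlier check looked convincing until a negative x was tried.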
August 24th 2012, 02:47 PM
Re: An operation * has 2 possible identity element:which one's the real identity elem
Neither can "2x". An "identity element" for an algebraic structure, a set X with operation *, is a member e of X such that e * x = x * e = x for any element x of X. That is, an identity is a single member of X.
There is not a different identity element for each element of X. That operation does not have two identities: it has none.
August 24th 2012, 03:17 PM
Re: An operation * has 2 possible identity element:which one's the real identity elem
Thanks HallsofIvy for clarifying the rule for being an identity element.
The potential ability of TC
A*(2)=A x A
examples; 2*(2)=4, 4*(1/2)=2, (4/9)*(1/2)=2/3, 3(4/9)*(1/2)=2, k4*(1/2)=2k
//About the correlativity of the energy-multiplying phenomenon of the Tesla Coil and the Hutchison effect
//Generally, we make the tank circuit oscillate with a cosine-curved electric vibration. The Tesla Coil is no exception when in use.
//But every tank circuit suffers from the energy consumption that resistance causes. Usually, we tend to eliminate resistance from the tank circuit to reduce the loss of heat.
//I solved the differential equation which describes the action of the series LCR circuit (a part of the Tesla Coil). The answer showed that the tank circuit shows strange behavior when the resistance of the tank circuit
is larger than a certain value. Under this condition, it is expected that the Tesla Coil will multiply energy, or absorb deficient energy in the capacitive space of the capacitor.
//I think this is the cause of the Hutchison effect, and am sure this effect can be reproduced by regulating the resistance of a general Tesla Coil (with spark gap).
// I . Behavior of reconstructed TC
//fig. 1 Reconstructed Tesla Coil
//This is the plan of the TC's new configuration. Resistance R is larger than 2 (L/C)*(1/2), and the secondary inductor is connected to the other resistor R', not to the metallic globe or ground. Under this
condition, all energy supplied from the battery V0 will be changed into thermal energy at each resistor.
/ 1. " Charge C "
//First, the switch is connected to the point of contact (1), and V0 charges the capacitor C. The electrical current " I " carries electric charge from V0 to C, but electric energy is also spent at
the resistor R1.
// The capacitive voltage of C ( VC ) and the electric current " I " can be expressed by V0, R1, C as following .
//VC= -V0+V0e*(-t/CR1)
//I =V0e*(-t/CR1) / R1
//The sum of accumulated electrostatic energy EC(suppose infinite time) is equal to -CV0*(2) / 2.
//And the sum of electric power ER1(t ; 0~infinite) which is consumed at the resistor R1 becomes following. Electric power WR1(= -R1I*(2) ) is the differential of ER1.
//ER1 was equal to EC. As a result, the total energy Esply which V0 supplies becomes twice as much as absolute quantity of EC.
//Esply= -EC -ER1=CV0*(2) ........[1]
//Esply is CV0*(2) .
//2. " Discharge C "
//Next, after full-charging of C, turn the connecting point of the switch from (1) to (2). Capacitor C will discharge its electrostatic energy, and another electric current I1 will flow into inductor
L and resistor R. Action of this part(tank circuit) is expressed as following differential equations. In these equations, VL is the inducted voltage of L, and VR is the resistive voltage of R.
//VC+VL+VR=VC - LI1' - RI1=0
//I1= - CVC'
//Answers are following ( using "w" instead of Greece "omega", "w" is angular frequency).
//VC= -V0e*(-wt)
//VL= -CLV0w*(2)e*(-wt)
//I1= -wCV0e*(-wt)
//w=(R/2L) +or- (R*(2)/4L*(2) - 1/CL)*(1/2)
//("w" is the answer of equation "CLw*(2) - CRw +1 = 0 ")
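As a small numerical cross-check of this step only (it verifies that the exponential ansatz satisfies the tank-circuit equation VC + LC·VC'' + RC·VC' = 0 whenever w is a root of CLw*(2) - CRw + 1 = 0; it says nothing about the energy bookkeeping that follows, and the component values are arbitrary illustrations):

```python
import math

# Arbitrary illustrative values with R > 2*sqrt(L/C), so that w is real.
C, L = 1e-6, 1e-3                 # farads, henries
R = 3.0 * math.sqrt(L / C)       # ohms, above the critical value 2*sqrt(L/C)
V0 = 100.0                       # volts

# One root w of the characteristic equation C*L*w^2 - C*R*w + 1 = 0:
w = R / (2.0 * L) + math.sqrt(R**2 / (4.0 * L**2) - 1.0 / (C * L))

def residual(t):
    # Substitute VC = -V0*exp(-w*t) into VC + L*C*VC'' + R*C*VC' = 0,
    # which follows from VC - L*I1' - R*I1 = 0 with I1 = -C*VC'.
    vc = -V0 * math.exp(-w * t)
    vc1 = -w * vc                # first derivative of VC
    vc2 = w * w * vc             # second derivative of VC
    return vc + L * C * vc2 + R * C * vc1

print(abs(C * L * w**2 - C * R * w + 1.0) < 1e-9)                 # True
print(max(abs(residual(t)) for t in (0.0, 1e-6, 1e-5)) < 1e-6)    # True
```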
//From these answers, each electric power (WC, WL, WR) and consumed (or discharged) energy (EC, EL, ER, ER') become the following, provided that EL is consumed at R' and EL is equal to -ER':
//WR= -R(wCV0)*(2)e*(-2wt)
//EL= L(wCV0)*(2)/2 (= -ER')
//ER= -Rw(CV0)*(2)/2
//Therefore, all generated thermal energy Egnrt becomes "|ER'|+|ER|+|ER1|(=|EL|+|ER|+|ER1|)".
//Egnrt=|ER'|+|ER|+|ER1|=L(wCV0)*(2)/2 + Rw(CV0)*(2)/2 + CV0*(2)/2
////////=C(CLw*(2) + CRw + 1)V0*(2)/2
//Then, from the results of [1] and [2] , evaluate "Egnrt - Esply". Define it as "E".
//E=Egnrt-Esply=Rw(CV0)*(2) - CV0*(2)=(CRw - 1)CV0*(2)
//According to this result, if the angular frequency "w" is at least a real number, E becomes larger than zero. When R is equal to or larger than 2 (L/C)*(1/2), this condition is satisfied. It
means that this circuit multiplies energy. E is the profit.
//Especially, when R=2 (L/C)*(1/2), w becomes (1/CL)*(1/2). Under this condition, E becomes CV0*(2), it is as much as supplied enrgy Esply.
//I named this equipment "The Twisty Energy Multiplier", because this circuit works like changing debt into profit. Essentially, EL is the energy of debt, but the secondary inductor and R' change its
attribution from "debt" into "profit".
//It sounds like "falsifying of accounts" to me. I think it is like a certain country's national bank buying its own national debt. And also, I feel it is "twisty" to change the attribution of EL from
"debt" into "profit", or to rewrite its mark from "minus(-)" into "plus(+)". So, I gave it the name "twisty".
//3. A plan to realize continuous motion
fig 2. "The Twisty Energy Multiplier"
/////Condition ;/R=2 (L/C)*(1/2)
/////Under this condition,
/////VC=VL= -V0e*(-wt), VR= -2VL= -2VC
/////(shown as the size and directions of each arrows at fig.1)
/////I1= -2CV0e*(-wt)/R
//|EC|=|EL|=|ER'|=|ER1|=CV0*(2)/2, |ER|= CV0*(2)
//Generated heat can be changed into electricity by using steam engine or Peltier effect(or thermocouple).
//This circuit (fig2) can "conduct" continuously with the help of the spark gap, which is used as an excellent substitute for the switch of fig.1. When C has been charged enough by the battery V0, it opens the spark
gap and discharges its electrostatic energy into L and R. And when the spark gap has closed for lack of electrostatic energy in C, the battery V0 begins to charge C again. This circuit works continuously
with the repetition of these two alternating actions.
//Frankly to say, this is nothing but a Tesla Coil. Just I found its potential ability.
//II. Correlativity with Hutchison effect
fig.3 A plan for the purpose of reappearing of Hutchison effect
//This is the "suspected" plan which may be able to reproduce the Hutchison effect in the capacitive space of capacitor C'. On this plan, L' is a secondary inductor which is connected to the primary
inductor L indirectly, and a capacitor C' and a resistor R' (basically, this is a large resistance) are connected to L' in series.
//Conditions for this circuit are the following (condition 1).
//R';voluntary (depends on proper charging speed of C)
//R=2 (L/C)*(1/2), R'=2 (L'/C')*(1/2)
//Provided that L' receives magnetic energy of L in its entirety.
//According to this provision, the energy of L' (EL') must be equal to -EL. And under this condition, EL', ER' and EC' become the following (this part, the series L'C'R' circuit, is the same as the series LCR part of fig.1).
//ER'= -CV0*(2)
//EL'R'=EL'+ER'= -CV0*(2)/2, EL'R'+EC'=0
//Supplied energy from the primary inductor L is only EL'. And as well as the series LCR circuit of fig1, absolute quantity of thermal energy ER' is twice as much as it.
//However, EL'R' is the excess of consumed thermal energy at R'. EC' must be the absorbed energy to offset the excess of EL'R'.
//From this result, I presupposed the cause of Hutchison effect as following.
//"Under condition 1, absorption of deficient electrostatic energy will happen in the capacitive space of capacitor C', and many kinds of materials which are put in the capacitive space of C' will
be melted, having their (some kind of) energy absorbed."
//To make sure of this presupposition, I made comparison fig3 with Dr. Hutchison's(I conjectured as it) Tesla Coil(part of series L'C'R' circuit).
fig.4 Comparison with fig3.
//The condition of the series L'C'R' circuit (fig3) is " R'=2(L'/C')*(1/2), w=(1/C'L')*(1/2) ".
//In the case of this setting (fig4), the metallic globe and a certain domain of ground constitute capacitor C', and the distance between the grounded point of L' and a certain domain of ground (colored faded
purple) becomes a large insulating resistance R'. There is a close resemblance between these two circuits.
//Then, what was the cause of generating strange waveform of voltage at the primary series LCR circuit?
//Dr. Hutchison constructed this equipment to make an experiment on wireless transmission of electricity. Under this condition, it is conjectured that the capacitor of the primary circuit " C " had to
have a large electrostatic capacity, because it is necessary to transmit large electric power. And the primary inductor L always has little inductance for the purpose of transforming voltage.
//These conditions make the value of "L/C" be little. When this value is too little, the resistance R which exists naturally in the LC(R) series circuit tends to become larger than 2(L/C)*(1/2)
against the will of the producer.
//In consequence, it is presupposed that an unexpected waveform of voltage "VC=V0e*(-wt)" ("w" a real number) was generated in the primary LC(R) series circuit, and absorption of electrostatic energy
happened at a certain place between the two secondary inductors of the Tesla Coils and their metallic globes, which caused some kinds of materials to melt.
//Apart from this phenomenon (the materials' melting), the cause of the melted materials' floating can be presupposed as follows.
//The vector of the electrostatic field between the metallic globe and a certain domain of ground points perpendicularly at the middle point of the two secondary capacitors.
//If E1( E(y+dy,t) ) and E2( E(y,t) ) have different sizes ( |E2| > |E1| ), dE/dy( ={E(y+dy,t) -E(y,t)}/dy ) does not become zero.
//An electric field which varies along time and axis and never changes its direction lifts dielectric substances or conductive materials. This is known as the Biefeld-Brown effect. I think this effect is the
cause of the materials' floating.
//I think that having to set up the insulating resistance R' properly, depending on unpredictable chance, was the cause of the difficulty of reproducing this phenomenon. In this sense, many accidental
coincidences caused the discovery of this phenomenon.
//But if "condition 1" is set on some TC which is constructed properly and given enough electric energy, it may be possible to reproduce the Hutchison effect in the capacitive space of its capacitor.
Advanced Math Is a Challenge. What's Wrong With That?
Dear Extra Credit Readers:
Jerry W. Miller's suggestion ["In the Real World, Advanced Math Doesn't Always Add Up," Oct. 30] that schools might be requiring too much math inspired so many letters that I needed several extra
fingers and toes to count them:
Dear Extra Credit:
I think that one reason we don't see how we use math, especially algebra, in our daily lives is that we use it so much and so automatically that we don't realize it, sort of like breathing. I have to
admit that I am a math geek, although I did not excel in math in school. My professional life, though, has been spent mostly as a researcher, and for a while I taught college statistics.
There is a lack of understanding of how math, and even arithmetic, factors in our daily life. I could give many examples of people's inability to calculate simple formulas for common things, such as
how to increase a recipe that serves four to one that serves six.
Or percents: I am constantly amazed at the lack of familiarity with something as necessary as percents. Once, I took a class that aimed to teach how to shop for a low-fat diet. The rule was that no
more than 30 percent of one's daily calories should come from fat. A student, over 40, asked what she could eat for supper if her lunch calories were 25 percent fat, and breakfast had 25 percent
calories from fat. She said, "I'm already up to 50 percent, and I have a whole meal to go!"
Martha Gilbert
Ellicott City
Dear Extra Credit:
I'm 44 years old and have received four associate degrees from Northern Virginia Community College in Alexandria. I'm working on my fifth associate of arts, in business administration, but have not
been able to get the degree because of the math requirement. I have been in the government, private and nonprofit sectors and never required more math than the basic equations in an Excel file
(ratios, sums, etc.).
Also, with English being a second language for me, math is harder. The wording for math problems is very difficult to translate and then figure out the math. I hate the idea that I have to go back
and take algebra and other basic math review courses so I can complete two courses in calculus that I won't use. Higher education and high school educators need to reflect that we are not all going
to college, and vocational skills such as plumbing pay just as much as other jobs.
Mike Beaty
Dear Extra Credit:
Miller's assertion that "advanced math is rarely used and so should not be required" is flawed in two distinct ways.
First, very little of what we are taught is going to be directly useful in the day-to-day activities for which we are compensated, but that is an insufficient reason to avoid teaching it. Miller's
argument is the equivalent of saying, "I shouldn't be asked to read John Donne, or to discourse on it, because I want to be an engineer."
Although the forced application of the 10th Holy Sonnet might not inspire a student to dip a quill the student might otherwise have left dry, a sound arts education polishes and tints the lens
through which we view the world -- at work and at play. A strong math background does the same.
Second, Miller seems to equate the manual computation -- and solution -- of complex calculus problems with "using advanced math."
Over the course of an advanced math education, certain recurring systems of problems emerge and are dealt with so frequently that nobody so educated thinks in terms of the underlying equations and
symbols anymore. In this sense, a practicing engineer might not "use" the experience with differential equation systems to hand-compute the optimum thickness of, say, a steel shaft. This engineer
simply knows how changing the diameter will affect the component and, by extension, the system.
This critical step, from rote calculation to real-world problem-solving and internalization, cannot occur without thinking in abstractions.
Virginia should immediately bar Miller's engineer friends from practice, as they have told him that they are incapable of gut-checking their CAD tools. And he should find a new physician immediately,
since the good doctor doesn't understand the decay models of the drugs he's prescribing.
Al Moore
Dear Extra Credit:
I would suggest that schools no longer give courses that might stretch students' minds, such as math, physics and chemistry, or any other intellectually challenging subject. Many of the older
generation took three to four years of Latin, with very few ever using this knowledge in their later life. However, certainly an appreciation of such subjects has enriched the lives of those who
chose to take such courses.
While all of us cannot be Nobel Prize winners, it was discouraging that the prizes awarded in physics, chemistry and medicine this year were given either to foreign scientists or to researchers who
came from non-American secondary schools, with only one having what could be termed an American upbringing. The United States' performance in international competition in math and sciences at the
secondary school level has been generally abysmal.
The statement has been made that we use less than 10 percent of what we have learned. Does this make the other 90 percent inconsequential?
Nelson Marans
Silver Spring
Thanks for the thoughtful responses. I still think advanced math in school helps us learn good study and thinking habits, particularly in problem-solving. But I haven't read all of your letters yet
and might run some more.
Please send your questions, along with your name, e-mail or postal address and telephone number, to Extra Credit, The Washington Post, 526 King St., Suite 515, Alexandria, Va. 22314. Or e-mail
By Washington Post Editors | December 18, 2008; 12:58 PM ET
[SciPy-user] integrate ODE to steady-state?
Rob Clewley rob.clewley@gmail....
Tue Jun 24 19:55:42 CDT 2008
It somewhat depends on your equations, but in principle you don't need
to actually integrate the ODEs at all. An equilibrium corresponds to
no motion, i.e. when the right-hand sides of all the ODEs are
zero-valued. So for a system dx/dt = f(x), where x is a vector, you
just need to solve f(x) = 0. This may have multiple solutions, of
course, if f is nonlinear. There's no scipy code that will do all this
for you but fsolve will do in most cases. It's best to have a good
initial guess as a starting condition for the search. So, integration
can help you find that. If f is strongly nonlinear there could be many
equilibria and your task will be to identify which initial conditions
lead to which equilibria. This is not a trivial task - you may need an
exhaustive search of initial conditions (e.g. based on sampling your
phase space of x) to get started. I have some naive code that
pre-samples the phase space and then uses fsolve, and works OK on some
nonlinear problems. It's the find_fixedpoints function in PyDSTool,
but it's extremely easy to remove the PyDSTool dependence :)
If this approach turns out to be too numerically problematic, you'll
just have to go back to integrating for long times...
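A concrete sketch of that recipe (the system below is an arbitrary illustration, not the original poster's model): integrate briefly for a rough starting point, then hand the right-hand side to fsolve.

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import fsolve

def f(x, t=0.0):
    # right-hand side dx/dt = f(x); odeint supplies t, fsolve does not
    u, v = x
    return [u * (1.0 - u) - u * v,   # e.g. a logistic prey ...
            u * v - 0.5 * v]         # ... coupled to a predator

t = np.linspace(0.0, 50.0, 500)
guess = odeint(f, [0.9, 0.1], t)[-1]   # short integration -> initial guess

eq = fsolve(f, guess)                  # solve f(x) = 0 directly
print(eq)                              # ~ [0.5, 0.5] for this system
```

For this system the interior equilibrium is (0.5, 0.5), and fsolve lands on it to machine precision once the short integration has produced a decent guess.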
On Tue, Jun 24, 2008 at 7:45 PM, Zachary Pincus <zachary.pincus@yale.edu> wrote:
> Hi all,
> So, after a brief bout of the stupids (thanks Robert), I have
> formulated my optimization problem as a physical system governed by an
> ODE, and I wish to learn the equilibrium configuration of the system.
> Any thoughts on what the easiest way to do this with scipy.integrate
> is? Ideally, I'd just want the solver to take as large steps as
> possible until things converge, and so I don't really care about the
> "time" values. One option would be to use odeint and just tell it to
> integrate to a distant time-point when I'm sure things will be in
> equilibrium, but that seems dorky, wasteful, and potentially incorrect.
> Alternately, I could use the ode class and keep asking it to integrate
> small time-steps until the RMS change drops below a threshold. There,
> still, I'd need to choose a reasonable time-step, and also the inner
> loop would be in python instead of fortran.
> Any recommendations? (Or a I again being daft? I never really took a
> class in numerical methods, so sorry for dim-bulb questions!)
> Zach
> _______________________________________________
> SciPy-user mailing list
> SciPy-user@scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
More information about the SciPy-user mailing list | {"url":"http://mail.scipy.org/pipermail/scipy-user/2008-June/017339.html","timestamp":"2014-04-18T18:21:57Z","content_type":null,"content_length":"5523","record_id":"<urn:uuid:75a6e218-39d6-4708-9f3b-f57675e89e7b>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00272-ip-10-147-4-33.ec2.internal.warc.gz"} |
Angle of Elevation
September 17th 2009, 08:49 AM #1
Junior Member
Aug 2009
Angle of Elevation
A driveway is built on a slant where it rises 3m over a distance of 20m, what is the angle of elevation with the ground?
I'm not sure if the 20m should be the adjacent leg or the hypotenuse.
I would say it's the adjacent side; it would be unusual to describe the hypotenuse when it would be much easier to describe the adjacent.
This is a Building Construction problem.
ALL dimensions are given as horizontal & vertical - or vertical & horizontal - NEVER diagonal.
Stair Step: tread & riser
Roof Pitch: Horizontal Distance & Vertical Distance
Sewer Flow: 1% (translates to 1 foot vertical drop per 100 feet horizontal travel).
These are the easiest methods to layout & build.
Most carpenters use the slope/hypotenuse ONLY as a check -- never to setup to build.
In your question you are given the vertical rise over a horizontal distance.
The 20m is the adjacent side.
"A driveway is built on a slant where it rises 3m over a distance of 20m, what is the angle of elevation with the ground?I'm not sure if the 20m should be the adjacent leg or the hypotenuse."
in a civil engineering arena, you do measure base and height
grade = y/x = height/base
height = 3 m
base = 20
not necessary: slant height = sqrt (20^2 + 3^2) = sqrt 409 = 20.22375 m
angle of elevation = arctan (grade) = arctan (y/x) = arctan(height/base) = ?
you may read this: Grades, Highway | Macmillan Mathematics Summary
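The arithmetic above can be checked in a few lines (rise and run taken from the question):

```python
import math

rise, run = 3.0, 20.0                  # vertical and horizontal, as given
grade = rise / run                     # 0.15, i.e. a 15% grade
angle = math.degrees(math.atan(grade)) # angle of elevation
slant = math.hypot(run, rise)          # only needed as a check

print(round(angle, 2))   # 8.53 degrees
print(round(slant, 3))   # 20.224 m, matching sqrt(409)
```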
September 17th 2009, 09:03 AM #2
September 18th 2009, 02:40 AM #3
Super Member
Jan 2009
September 19th 2009, 03:48 AM #4 | {"url":"http://mathhelpforum.com/trigonometry/102814-angle-elevation.html","timestamp":"2014-04-21T04:47:23Z","content_type":null,"content_length":"39054","record_id":"<urn:uuid:506872db-1811-482c-9cca-078a4ac7f680>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00208-ip-10-147-4-33.ec2.internal.warc.gz"} |
Electricity and its Effects TEST PAPER
CBSE TEST PAPER-01, Class - X Science (Electricity and its Effects)
1. A metallic conductor has loosely bound electrons called free electrons. The metallic conductor is (a) negatively charged
(b) Positively charged
(c) neutral
(d) Either positively charged or negatively charged
3. Which of the following expressions does not represent the electric power in the circuit?
4. Resistivity of a metallic wire depends on
(a) its length
(b) its shape
(c) its thickness
(d) nature of material
5. If the current I through a resistor is increased by 100%, the increase in power dissipation will be (assume temperature remains unchanged)
(b) 200%
(c) 300%
(d) 400%
6. How does the use of a fuse wire protect electrical appliances?
7. Calculate the resistance of an electric bulb which allows a 10A current when connected to a 220V power source?
8. (i) Identify the V-I graphs for ohmic and non-ohmic materials.
(ii) Give one example of each.
9. What do the following symbols represent in a circuit? Write the name and one function of each?
10. Two metallic wires A and B are connected in series. Wire A has length l and radius r, while wire B has length 2l and radius 2r. Find the ratio of the total resistance of the series combination to the resistance of wire A, if both wires are of the same material.
11. Should the heating element of an electric iron be made of iron, silver or nichrome wire? Justify giving three reasons?
12. (a) Define electric resistance of a conductor?
(b)A wire of length L and resistance R is stretched so that its length is double and the area of cross section is halved. How will its
(a) resistance change
(b) resistivity change?
13. Two resistors of resistance R and 2R are connected in parallel in an electric circuit. Calculate the ratio of the electric power consumed by R and 2R.
14. Two wires A and B are of equal length, different cross sectional areas and made of same metal.
(a) (i) Name the property which is same for both the wires,
(ii) Name the property which is different for both the wires.
(b) If the resistance of wire A is four times the resistance of wire B, Calculate
(i) the ratio of the cross sectional areas of the wires and
(ii) The ratio of the radii of the wire.
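For self-checking, a few of the purely numerical questions can be verified in a couple of lines (an added sketch, not part of the original paper; it assumes the standard relations V = IR, P = I^2 R and P = V^2/R):

```python
# Q7: bulb drawing 10 A from a 220 V source, so R = V / I
R_bulb = 220 / 10
print(R_bulb)            # 22.0 (ohms)

# Q5: P = I^2 R, so doubling I (a 100% increase) quadruples P
increase = (2**2 - 1) * 100
print(increase)          # 300 (percent)

# Q13: R and 2R in parallel share the same V, and P = V^2 / R,
# so the powers are in the ratio (1/R) : (1/2R) = 2 : 1
ratio = (1 / 1) / (1 / 2)
print(ratio)             # 2.0
```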
Wolfram Demonstrations Project
Numerical Flowers
This generalizes how seeds are distributed on plants such as sunflowers. Starting from the center, each successive seed is a fixed distance from the previous seed and is rotated from the line connecting the previous two seeds by a constant angle. Using the controls, you can explore how different angles produce varying patterns of seeds.
The angle is represented as a fraction of a full rotation, so 45 degrees would be 1/8. If you enter a number greater than one, you get the same result as you would with the fractional part of the number.
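The seed-placing rule described above can be sketched in a few lines (unit step length assumed; the angle is a fraction of a full turn, as in the Demonstration):

```python
import math

def flower(angle, n, step=1.0):
    """Place n seeds: each one step from the last, turning a fixed
    fraction (angle) of a full rotation at every seed."""
    x = y = heading = 0.0
    pts = []
    for _ in range(n):
        heading += 2.0 * math.pi * angle
        x += step * math.cos(heading)
        y += step * math.sin(heading)
        pts.append((x, y))
    return pts

# angle 1/4 turns 90 degrees each time, tracing a square back to the start
print(flower(0.25, 4)[-1])   # ~ (0.0, 0.0)
```

Because only the fractional part of the angle matters modulo a full turn, flower(1.25, n) produces the same points as flower(0.25, n), matching the statement above.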
M. Naylor, "Golden, √2, and π Flowers: A Spiral Story," Mathematics Magazine, 75(3), June 2002, pp. 163-172.
Mathematical Logic
Undergraduate students with no prior classroom instruction in mathematical logic will benefit from this evenhanded multipart text. It begins with an elementary but thorough overview of mathematical
logic of first order. The treatment extends beyond a single method of formulating logic to offer instruction in a variety of techniques: model theory (truth tables), Hilbert-type proof theory, and
proof theory handled through derived rules.
The second part supplements the previously discussed material and introduces some of the newer ideas and the more profound results of twentieth-century logical research. Subsequent chapters explore
the study of formal number theory, with surveys of the famous incompleteness and undecidability results of Gödel, Church, Turing, and others. The emphasis in the final chapter reverts to logic, with
examinations of Gödel's completeness theorem, Gentzen's theorem, Skolem's paradox and nonstandard models of arithmetic, and other theorems. The author, Stephen Cole Kleene, was Cyrus C. MacDuffee
Professor of Mathematics at the University of Wisconsin, Madison. Preface. Bibliography. Theorem and Lemma Numbers: Pages. List of Postulates. Symbols and Notations. Index.
Reprint of the John Wiley & Sons, Inc., New York, 1967 edition.
Availability Usually ships in 24 to 48 hours
ISBN 10 0486425339
ISBN 13 9780486425337
Author/Editor Stephen Cole Kleene
Format Book
Page Count 432
Dimensions 5 3/8 x 8 1/2 | {"url":"http://store.doverpublications.com/0486425339.html","timestamp":"2014-04-18T10:37:56Z","content_type":null,"content_length":"44244","record_id":"<urn:uuid:544ddde4-a9cc-4fd7-bc52-14c70e692e6a>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00322-ip-10-147-4-33.ec2.internal.warc.gz"} |
I need help with problem 4. I'm supposed to use the Poisson Distribution, but I'm confused because part A says 6 or MORE. (I obviously can't go to infinity, so what do I do? If it was 6 or LESS, I
would plug in 6, 5, 4, 3, 2, 1... in for P(x).)
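The usual trick for "6 or more" is the complement: P(X >= 6) = 1 - P(X <= 5), so only the six terms k = 0..5 are ever summed. A sketch with an assumed rate λ = 3 (the actual λ from problem 4 isn't shown here, so substitute your own):

```python
import math

def poisson_pmf(k, lam):
    # P(X = k) for a Poisson random variable with mean lam
    return math.exp(-lam) * lam**k / math.factorial(k)

lam = 3.0                              # assumed rate; use problem 4's value
p_at_most_5 = sum(poisson_pmf(k, lam) for k in range(6))
p_at_least_6 = 1.0 - p_at_most_5
print(round(p_at_least_6, 6))          # 0.083918 for lam = 3
```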
Math Help
January 19th 2014, 05:32 AM #1
Borel set
How does one show the following ( $\mathcal{R}$ denotes the one-dimensional Borel $\sigma$-algebra):
$\mathcal{R} = \sigma(\mathcal{A})$
with $\mathcal{A}=\{\,]a,b] \mid a,b \in \mathbb{Q},\ a < b\,\}\cup \{\emptyset\}$
I know that $\mathcal{R}= \sigma(\{\,]a,b] \mid -\infty<a\leq b<\infty\,\})$, so I think $\mathcal{A} \subseteq \mathcal{R}$, and if I can show that $\mathcal{A}$ is a $\pi$-system it follows by the $\pi$-$\lambda$ theorem that $\sigma(\mathcal{A})\subseteq \mathcal{R}$.
I could do the reverse inclusion the same way, but then I need to prove that $\{\,]a,b] \mid -\infty<a\leq b<\infty\,\} \subseteq \sigma(\mathcal{A})$, and I'm struggling with this implication.
Thanks in advance.
Re: Borel set
January 20th 2014, 11:07 AM #2
Super Member
Dec 2012
Athens, OH, USA | {"url":"http://mathhelpforum.com/differential-geometry/225524-borel-set.html","timestamp":"2014-04-16T19:44:52Z","content_type":null,"content_length":"34030","record_id":"<urn:uuid:cf99b594-4c90-42e2-9e9a-c5cb700ad5cc>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00643-ip-10-147-4-33.ec2.internal.warc.gz"} |
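One standard route for the missing inclusion (a sketch; the particular rational sequences are an arbitrary choice): for real $a<b$, pick rationals $p_n \downarrow a$ with $a<p_n<b$ and rationals $q_m \downarrow b$ with $q_m>b$. Then

```latex
]a,b] \;=\; \bigcup_{n\ge 1} \,]p_n,\, b]
      \;=\; \bigcup_{n\ge 1}\,\bigcap_{m\ge 1} \,]p_n,\, q_m]
      \;\in\; \sigma(\mathcal{A}),
```

since every $]p_n,q_m]$ lies in $\mathcal{A}$ and a $\sigma$-algebra is closed under countable unions and intersections. Hence every real-endpoint interval $]a,b]$ is in $\sigma(\mathcal{A})$, so $\mathcal{R}\subseteq\sigma(\mathcal{A})$.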
Wolfram Demonstrations Project
Two-Asset Markowitz Feasible Set
Set the "two-asset portfolios allowed" toggle to "yes" to visualize the relationship between risk and expected return for two-asset portfolios, for varying levels of expected return and risk for the
two individual assets, and of the correlation between their returns.
The locus of all possible two-asset portfolio risk and return combinations is called the two-asset feasible set. For two risky assets, the orange straight dashed line shows the two-asset feasible set
for the case of perfect positive correlation; the black dashed curve shows the two-asset feasible set for the case of zero correlation; and the two slate blue dashed line segments show the two-asset
feasible set for the case of perfect negative correlation. The thick blue line shows the actual two-asset feasible set given the actual selection for the (Pearson) correlation coefficient, while the
green ball shows the actual risk-return combination given the actual proportion invested in asset 1. The Demonstration does not allow for short selling of any asset. When the "two-asset portfolios
allowed" toggle is set to "no" a proportion invested in asset 1 exceeding (or falling short of) 0.50 is treated as 100% invested in asset 1 (or 2). | {"url":"http://demonstrations.wolfram.com/TwoAssetMarkowitzFeasibleSet/","timestamp":"2014-04-21T07:05:16Z","content_type":null,"content_length":"42552","record_id":"<urn:uuid:0928e4f6-5eeb-4c76-bdfd-0ec41b76c7c7>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00429-ip-10-147-4-33.ec2.internal.warc.gz"} |
Linear Interpolation FP1 Formula
Re: Linear Interpolation FP1 Formula
Then why is she going, why is she hanging around?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Linear Interpolation FP1 Formula
A friendly gesture.
Re: Linear Interpolation FP1 Formula
Women friendly? Hah! I believe that she is a spy, so be careful what you tell her.
Re: Linear Interpolation FP1 Formula
I don't think she's a spy...
Re: Linear Interpolation FP1 Formula
Have you asked her why she is going?
Re: Linear Interpolation FP1 Formula
No, I haven't...
Re: Linear Interpolation FP1 Formula
Maybe you should just wait and see who goes and who does not.
Re: Linear Interpolation FP1 Formula
Unfortunately a lot of people can't come because they have lessons to go to -- so there may only be 3-4 people coming anyway... my (male) friend, who hates PJ, probably won't like her being there
Re: Linear Interpolation FP1 Formula
With only 3 or 4 people can you really afford to turn her away?
Re: Linear Interpolation FP1 Formula
No, not really... there are max 6 people who are coming, but it'll likely end up being 3-4.
Re: Linear Interpolation FP1 Formula
3 or 4 will be better than 0.
Re: Linear Interpolation FP1 Formula
There will be random members of staff (teachers) there to watch if there are too few people.
Re: Linear Interpolation FP1 Formula
That is good. You want your ideas heard.
See you later, need some sleep.
Re: Linear Interpolation FP1 Formula
Okay, see you later.
Re: Linear Interpolation FP1 Formula
Slept like a baby. Dreamt about something that was so good I immediately forgot it.
Re: Linear Interpolation FP1 Formula
Have you ever tried keeping a dream diary? Supposedly you just need to remember the dream once, and you won't forget it for a long time.
I've been dreaming about women for the past week... I don't know what has got into me.
Re: Linear Interpolation FP1 Formula
Unfortunately, I do not usually remember them.
Dreams like that are pretty common. Too much mental activity and too little physical activity perhaps.
Re: Linear Interpolation FP1 Formula
A knock-on effect of the exam season.
Re: Linear Interpolation FP1 Formula
Hectic day, hopefully no more distractions.
Finished anymore STEP questions?
Re: Linear Interpolation FP1 Formula
Tax people?
No, but I am working through a nice easy probability one now though.
Re: Linear Interpolation FP1 Formula
No, bills! I have been arguing on the phone for 3 hours on and off. I am tired. It seems that to my ISP the title of "Preferred Customer," is synonymous with dummy who should be cheated.
Looks like I did not get my way...
Sounds like you have it solved.
Re: Linear Interpolation FP1 Formula
That sounds so stressful... I don't know if I want to be an adult yet, having to deal with that regularly.
Almost there -- STEP I 2007 Q13 if you want to have a look (generally STEP I probability is relatively straightforward, no horrible covariance questions like in STEP III or anything).
Re: Linear Interpolation FP1 Formula
Okay, let me look up the question.
Re: Linear Interpolation FP1 Formula
It is a good example of why STEP-takers should at least take a look at the mechanics and probability/stats section, because there can be gems in there sometimes.
Re: Linear Interpolation FP1 Formula
I only have iii onward and I am not getting those answers.
square root of 2
The square root of 2 has the approximate value 1.41421356 ...
It was the first number shown to be what is now known as an irrational number (a number that can't be written in the form a/b, where both a and b are integers.) This discovery was made by Pythagoras
or, at any rate, by the Pythagorean group that he founded.
The square root of 2 is the length of the hypotenuse (longest side) of a right triangle whose other two sides are each one unit long. A reductio ad absurdum proof that √2 is irrational is
straightforward. Suppose that √2 is rational, in other words that √2 = a/b, where a and b are coprime integers (that is, they have no common factors other than 1) and b > 0. It follows that a^2/b^2 =
2, so that a^2 = 2b^2. Since a^2 is even (because it has a factor of 2), a must be even, so that a = 2c, say. Therefore, (2c)^2 = 2b^2, or 2c^2 = b^2, so b must also be even. Thus, in a/b, both a and
b are even. But we started out by assuming that we'd reduced the fraction to its lowest terms. So there is a contradiction, and therefore √2 cannot be rational: it is irrational. This type of proof can be
generalized to show that any root of any natural number is either a natural number or irrational.
As a continued fraction, √2 can be written 1 + 1/(2 + 1/(2 + 1/(2 + ... ))), which yields the series of rational approximations: 1/1, 3/2, 7/5, 17/12, 41/29, 99/70, 239/169, ... Multiplying each
numerator (number on the top) by its denominator (number on the bottom) gives the series 1, 6, 35, 204, 1189, 6930, 40391, 235416, ... which follows the pattern: A[n] = 6A[n-1] - A[n-2]. Squaring
each of these numbers gives 1, 36, 1225, 41616, 1413721, 48024900, 1631432881, ..., each of which is also a triangular number. The numbers in this sequence are the only numbers that are both square
and triangular.
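The recurrences above are easy to verify numerically. A short sketch of my own, using exact rational arithmetic (each convergent p/q of √2 satisfies p′/q′ = (p + 2q)/(p + q)):

```python
import math
from fractions import Fraction

def sqrt2_convergents(k):
    """First k convergents of 1 + 1/(2 + 1/(2 + ...)): 1/1, 3/2, 7/5, 17/12, ..."""
    p, q = 1, 1
    out = [Fraction(p, q)]
    for _ in range(k - 1):
        p, q = p + 2 * q, p + q
        out.append(Fraction(p, q))
    return out

convergents = sqrt2_convergents(8)
A = [c.numerator * c.denominator for c in convergents]   # 1, 6, 35, 204, ...
```

Each A[n] satisfies A[n] = 6A[n−1] − A[n−2], and each A[n]² is triangular (a number t is triangular exactly when 8t + 1 is a perfect square).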
Asymptotics of the n-th prime using the gamma function
In the paper http://rgmia.org/papers/v8n2/eepnt.pdf, the author proves an explicit inequality on prime numbers using the gamma function and, as a corollary, shows that
$$ p_n = n \frac{\Gamma'(n)}{\Gamma(n)} + o(n \ln n). $$
I obtained a stronger form of this result namely
$$ p_n = n \ln \frac{\Gamma'(n)}{\Gamma(n-1)} + O\Big(\frac{n\ln\ln n}{\ln n}\Big). $$
The gamma function seems to beautifully approximate $p_n$. To get the same error term using the regular Cipolla's asymptotic expansion of the $p_n$ we would need three terms.
Can someone explain why the gamma function approximates the n-th prime so nicely? Is this a coincidence, or is there some underlying phenomenon governing this result that can shed some new light on the distribution of prime numbers?
Something is wrong here. $\Gamma'(n)/\Gamma(n) \approx \log n$ so you formula gives $p_n \approx n \log \log n$, where the truth is $p_n \approx n \log n$. – David Speyer Jun 1 '12 at 12:02
@ David. It was a typing error and I have corrected it. – Nilotpal Sinha Jun 1 '12 at 13:03
1 Answer
The asymptotic expansion of Cipolla starts $$p_n=n\log n+n\log\log n-n+n\frac{\log\log n}{\log n}+O(n(\log\log n/\log n)^2).$$ So the given approximations have errors $$p_n=n\frac{\Gamma'(n)}{\Gamma(n)}+\Theta(n\log\log n)$$ and $$p_n=n\log\frac{\Gamma'(n)}{\Gamma(n-1)}+\Theta(n).$$ I would not say these are good approximations with such big errors.

The inverse function of the logarithmic integral, $\text{li}^{-1}(x)$, has error $$p_n= \text{li}^{-1}(n) +O(n \exp(-c\sqrt{\log n})),$$ which, assuming the Riemann hypothesis, can be reduced to $$|p_n-\text{li}^{-1}(n)|\le \pi^{-1} \sqrt{n}(\log n)^{\frac52}\qquad n>11.$$ (See arXiv:1203.5413.)
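Numerically, the relative sizes of these error terms are easy to check. A sketch of my own (a plain sieve for $p_n$; $\psi(n)=\Gamma'(n)/\Gamma(n)$ from its asymptotic series, and using $\Gamma'(n)/\Gamma(n-1)=(n-1)\,\psi(n)$):

```python
import math

def nth_prime(n, limit=200_000):
    """n-th prime via a plain sieve of Eratosthenes (limit must exceed p_n)."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    primes = [i for i, v in enumerate(sieve) if v]
    return primes[n - 1]

def digamma(x):
    """psi(x) = Gamma'(x)/Gamma(x) from the asymptotic series (good for x >= 10)."""
    return math.log(x) - 1 / (2 * x) - 1 / (12 * x ** 2) + 1 / (120 * x ** 4)

n = 10_000
p = nth_prime(n)                          # p_n
a1 = n * digamma(n)                       # n * Gamma'(n)/Gamma(n)
a2 = n * math.log((n - 1) * digamma(n))   # n * log(Gamma'(n)/Gamma(n-1))
```

At n = 10⁴ the log-form approximation already lands noticeably closer to p_n than n·ψ(n), consistent with the Θ(n) versus Θ(n log log n) error terms above.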
Actuarial Diversity Scholarship
Scholarship Overview
This scholarship is for students who are pursuing a career in the actuarial profession. To qualify for this scholarship, applicants must have at least one birth parent who is a member of one of the
following minority groups: African American, Hispanic, or American Indian. Applicants may be high school seniors or current undergraduate students, be enrolled full-time, and have a 3.0 GPA with
emphasis on math or actuarial courses. Entering college freshmen must have a minimum ACT math score of 28 or SAT math score of 600.
How easy is it to apply?
Not too bad
This scholarship's application process may have items such as essays that could take a couple hours.
How much competition is there?
Not much
This scholarship won't have as many applicants as most.
Deadline: May 2
Award Range: $1,000 - $3,000
Awards Granted: 40
• High school senior
• College freshman
• College sophomore
• College junior
• College senior
• Actuarial Science
• Algebra and Number Theory
• Analysis and Functional Analysis
• Applied Mathematics
• Applied Mathematics, Other
• Computational Mathematics
• Geometry/Geometric Analysis
• Mathematical Statistics and Probability
• Mathematics and Statistics, Other
• Mathematics, General
• Mathematics, Other
• Statistics, General
• Statistics, Other
• Topology and Foundations
• Computational and Applied Mathematics
• Financial Mathematics
• Mathematical Biology
• Mathematics and Statistics
• African American
• American Indian or Native Alaskan
• Hispanic/Latino | {"url":"http://www.cappex.com/scholarship/listings/Actuarial-Diversity-Scholarship/-s-d-2901/","timestamp":"2014-04-19T02:23:48Z","content_type":null,"content_length":"26017","record_id":"<urn:uuid:36546f17-9923-49f8-8df6-09d370fe41a6>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00629-ip-10-147-4-33.ec2.internal.warc.gz"} |
EM algorithm for Bayesian estimation of genomic breeding values
In genomic selection, a model for prediction of genome-wide breeding value (GBV) is constructed by estimating a large number of SNP effects that are included in the model. Two MCMC-based Bayesian methods, the Bayesian shrinkage regression (BSR) method and the stochastic search variable selection (SSVS) method (called BayesA and BayesB, respectively, in some of the literature), have been so far proposed for the estimation of SNP effects. However, MCMC-based Bayesian methods impose a heavy computational burden. A method with both high computing efficiency and high prediction accuracy is needed for the practical use of genomic selection.
An EM algorithm applicable to BSR is described. Subsequently, we propose a new EM-based Bayesian method, called wBSR (weighted BSR), a modification of BSR that incorporates a weight for each SNP according to the strength of its association with a trait. Simulation experiments show that computational time is much reduced with EM-based wBSR and that its accuracy in predicting GBV is improved in comparison with MCMC-based BSR. However, the accuracy of GBV predicted with wBSR is inferior to that with MCMC-based SSVS, which is currently considered a method of choice for genomic selection.
The EM-based wBSR method proposed in this study has a large advantage over MCMC-based Bayesian methods in computational time and can predict GBV more accurately than MCMC-based BSR. Therefore, wBSR is
considered a practical method for genomic selection with a large number of SNP markers.
Genome-wide polymorphisms are increasingly elucidated in livestock and crops with the recent development of the sequencing technologies. Accordingly, high-throughput genotyping systems, such as
high-density SNP chips containing several tens of thousands of genome-wide SNP markers, have become available to efficiently identify genotypes of individuals for a large number of SNPs with low
cost. As a new breeding technology utilizing the information of genome-wide dense SNP markers, genomic selection was proposed by Meuwissen et al. (2001) [1]. In genomic selection, firstly a
well-fitted model for genomic breeding value (GBV) of a trait is constructed by estimating SNP effects included in the model as parameters using the individuals with data of both genotypes of SNPs
and phenotypes of a trait (training data set). Secondly, GBV is predicted for individuals to be selected based on only genotype data of SNPs (selection candidates) using the fitted model. For the
estimation of SNP effects, two Bayesian methods called BayesA and BayesB were proposed as well as a BLUP method and it was shown that BayesB could predict GBV most accurately of the methods using
simulation experiments [1].
BayesA method can be classified into a method of Bayesian shrinkage regression (BSR) [2] from a view point of statistical methodology, which can handle a large number of model effects requiring no
variable selection. In BSR, a model including of effects of all SNPs available are considered and the shrinkage estimation is applied for these SNP effects assuming the appropriate prior distribution
for the effects such as a normal distribution with a mean 0. On the other hand, BayesB method can be regarded as a modified version of stochastic search variable selection (SSVS) [3]. In the original
SSVS method, each SNP effect (regression coefficient) is assigned a mixture of two normal distributions both having means 0 but one with a large variance and the other with a tiny variance. If the
posterior probability of the effect to belong to the distribution with a large variance is high, this effect is considered as selected and included in the model. In the method of BayesB, a mixture of
a normal distribution with a mean 0 and a large variance and a distribution with point mass only at zero which might be regarded as a normal distribution with both of a mean and a variance set at
zero is assumed for each SNP effect. Meuwissen et al. [1] used block-updating for a SNP effect and a variance to prevent the estimate from being stuck at zero. In this simultaneous update, a variance
is assigned a zero or sampled from a prior inverted chi-square distribution following a prior mixture probability, which is a prior probability of each SNP to be included in the model, and then a SNP
effect is obtained from a conditional normal distribution given a variance. Taking these things into consideration, we use more general statistical terms BSR and SSVS for BayesA and BayesB,
respectively, hereafter in this paper to aid understanding by readers in a broad range of research fields. Although BayesB can be interpreted as a variant of the original SSVS, as noted above, we use the term 'SSVS' for BayesB, which should cause no confusion.
Usually, Markov chain Monte Carlo (MCMC) algorithms have been applied to model construction with BSR and SSVS in genomic selection. However, MCMC-based Bayesian methods are very time-consuming and may therefore become prohibitive as the sample size and/or the number of SNPs grows much larger. Accordingly, a fast non-MCMC algorithm for SSVS utilizing the analytical form of
posterior means of SNP effects was devised [4], where conditional posterior expectation of each SNP effect could be analytically calculated by assuming a mixture of a distribution with a discrete
probability mass of zero and a double exponential distribution for a prior distribution for SNP effects. It was shown that this analytical SSVS method was slightly inferior to MCMC-based SSVS but
much superior to BLUP in the accuracy of predicting GBV. It was also shown that this analytical SSVS predicted GBV in a very similar way as MCMC-based one with much reduced computing time [4].
Xu (2003) [2] proposed BSR in the context of mapping QTL effects on a whole genome to capture the polygenic effects. This shrinkage mapping method was improved and extended by some authors [5-7]. The
efficiency of QTL mapping using BSR was shown to be superior to that using SSVS in [5]. Recently, Yi and Benerjee (2009) [8] proposed an EM-based algorithm for the maximization of the posterior
distribution function in BSR.
In this study, we apply the EM algorithm described in [8] for the model construction including estimation of SNP effects in BSR from a view point of genomic selection. Although generalized linear
models were considered to deal with several types of phenotypes including categorical traits and continuous polygenic traits in [8], we confine ourselves to the case of continuous traits here for
simplicity. Moreover, we incorporate the weight for each SNP according to the strength of its association with a trait in the procedure of model construction with BSR to improve the prediction
accuracy. The weight of SNP can be regarded as an approximate posterior probability of SNP to be included in a model and obtained from a given prior probability of SNP inclusion with EM algorithm. We
call this model construction procedure as wBSR, which means a modified BSR incorporating the weights for SNPs.
Using the simulation experiments, we compare the accuracies of EM-based wBSR with BSR and SSVS using MCMC algorithm in the prediction of GBV for several values of the prior probability, p, of SNP
inclusion in the model. It is shown that the accuracy of wBSR can be improved in comparison with MCMC-based BSR although the accuracy of wBSR is inferior to SSVS and is influenced by the values of p
and the hyperparameters of the prior inverted chi-square distribution assumed for the variances of SNP effects. Moreover, the computational cost of wBSR is much less than the MCMC-based Bayesian
methods. Therefore, wBSR is considered a practical and useful method for genomic selection with a large number of SNP markers.
In this section, we describe the methods of BSR (BayesA) and SSVS (BayesB) for genomic selection and the EM algorithm for BSR that obtains the estimates of the parameters in the model maximizing the posterior distribution function. Subsequently, we modify the BSR method (wBSR) by assigning a weight to each SNP according to the strength of its association with a trait, to improve the prediction accuracy. The weight of each SNP is obtained with the EM algorithm, together with the estimate of the SNP effect, from a prior probability of each SNP to be included in the model, which is also considered in the SSVS procedure. For the evaluation of the accuracy of the predicted GBVs, we apply wBSR with variable prior probabilities of SNP inclusion to simulated data sets, as well as MCMC-based BSR and SSVS.
In the statistical model described below, we consider not haplotype effects but the effect of each single SNP. We assume that the number of genotyped SNPs is N and that a training data set including n individuals with records of both phenotypes and SNP genotypes is available for the estimation of the parameters in the model. We also assume that the selection candidates consist of individuals with only SNP
genotypes, for each of which GBV is predicted based on the model with SNP effects estimated with training data sets. We denote two alleles at each SNP by 0 and 1 and three genotypes by '0_0', '0_1',
and '1_1'.
Models for BSR and SSVS in genomic selection
In the BSR (BayesA) method [1,2], the following linear model is fitted to the phenotypes of a training data set:

y = Xb + Σ[l=1..N] u[l] g[l] + e,     (1)
where y = (y[1], y[2], ..., y[n])' is a vector of phenotypic values of a trait for n individuals of a training data set, u[l ]= (u[l1], u[l2], ..., u[ln])' is a vector of genotypes of n individuals
at the lth SNP with u[li ]taking a value of -1, 0, or 1 corresponding to the genotypes '0_0', '0_1', or '1_1', respectively, g[l ]is the effect of the lth SNP, b = (b[1], b[2], ..., b[f])' is a
vector of fixed non-genetic effects with dimension f including a general mean, X = (x[ij]) (i = 1, 2, ..., n; j = 1, 2, ..., f) is a design matrix relating b to y and e = (e[1], e[2], ..., e[n])' is
a vector of random deviates with e[i ]~N(0, σ[e]^2). It is assumed that the prior distribution of the SNP effect, g[l], is a normal distribution with a mean 0 and a variance σ[gl]^2, which differs
for every SNP. Moreover, the prior distribution of σ[gl]^2 is considered. In this study, we assume that it is a scaled inverted chi-squared distribution with a scale parameter S and a
degree-of-freedom ν, χ^-2(ν, S), following [1,2]. The posterior distributions of relevant parameters, b, g[l], σ[gl]^2 (l = 1, 2, ..., N) and σ[e]^2, can be obtained by Gibbs sampling [1,2]. For the
individuals of selection candidates, GBV are predicted by g[l]. In this study, we consider not haplotype effect but the single marker effect for g[l]. The use of marker haplotypes instead of the
single marker genotypes would cause slight modification of the model, but the procedure for estimation of effects and prediction of GBV is essentially the same.
In SSVS (BayesB) method, the model (1) is also adopted but a prior probability, p, of each SNP to be included in the model is considered. Usually, a small value is given for p based on the assumption
that many of SNPs have actually no effects for a trait. The prior distribution of g[l ]is assumed to be a normal distribution with a mean 0 and a variance σ[gl]^2 in SSVS as in BSR, whereas the prior
distribution of σ[gl]^2 is expressed as a mixture of two distributions corresponding to the inclusion and the exclusion of the SNP as follows:

σ[gl]^2 = 0 with probability 1 - p,  σ[gl]^2 ~ χ^-2(ν, S) with probability p,

assuming that the prior is χ^-2(ν, S) when the SNP is included. When the MCMC algorithm is applied for the estimation of the parameters in SSVS, g[l] and σ[gl]^2 are jointly updated with a Metropolis-Hastings chain [1]. The GBV of the ith selection candidate predicted by SSVS is given by

GBV[i] = Σ[l=1..N] u[li] ĝ[l],

where ĝ[l] is the estimate of the lth SNP effect.
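Concretely, once the SNP effects have been estimated, predicting GBV for a selection candidate is just a dot product of the −1/0/1 genotype codes with the effect estimates. A toy illustration (the numbers are mine, not from the paper):

```python
import numpy as np

# Hypothetical example: 3 selection candidates genotyped at 4 SNPs.
# Genotypes '0_0', '0_1', '1_1' are coded -1, 0, 1 as in the model above.
U = np.array([[ 1,  0, -1, 1],
              [ 0,  1,  1, 0],
              [-1, -1,  0, 1]], dtype=float)
g_hat = np.array([0.5, -0.2, 0.1, 0.3])    # estimated SNP effects

gbv = U @ g_hat                            # GBV_i = sum_l u_li * g_hat_l
```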
EM algorithm for BSR
In Bayesian estimation, the inferences about the parameters are made based on the posterior distributions. MCMC algorithms can be used for obtaining the posterior information of the parameters in BSR
method as described above. However, the posterior mode of each SNP effect which is a point estimate maximizing the density function of the posterior distribution can be calculated instead of a
posterior expectation by some other iteration algorithm including EM algorithm. In QTL mapping using BSR method, Yi and Banerjee [8] utilized an EM algorithm to search the posterior mode of the
marker effects included in the model. This EM algorithm can be applied for genomic selection with BSR method without any modification and we describe the estimation procedure for the EM algorithm in
this section. Although, in [8], phenotypic data was transformed to have a mean 0 and a standard deviation 0.5 following Gelman et al. (2008) [9] and the derivations of the posterior estimates of
parameters were illustrated in the framework of generalized linear model, original phenotypic data are subject to the EM algorithm here without any transformation and we derive the posterior
estimates of parameters under the normality in what follows assuming that the trait of concern is polygenic and normally distributed.
The posterior distribution is given by combining the likelihood of the data and the prior distributions of the parameters. We denote the parameters in the BSR method in a vector form θ,

θ = (b', g[1], ..., g[N], σ[g1]^2, ..., σ[gN]^2, σ[e]^2)'.

The posterior distribution of θ given the data of phenotypes, y, and genotypes of SNP data, U = (u[1], u[2], ..., u[N]), is denoted by g(θ | y, U) and written as

g(θ | y, U) = C f(y | b, g[1], ..., g[N], σ[e]^2) × Π[l=1..N] p(g[l] | σ[gl]^2) p(σ[gl]^2) × p(b) p(σ[e]^2),     (2)

where C means a constant and it should be noted that the likelihood f of y given the model parameters and genotypes is a normal distribution with a mean Xb + Σ[l=1..N] u[l]g[l] and a variance σ[e]^2 I, and the prior of g[l] is a normal distribution with a mean 0 and a variance σ[gl]^2, the prior of which is the scaled inverted chi-squared distribution χ^-2(ν, S) as described above. The priors of b and σ[e]^2 are written as p(b) and
p(σ[e]^2), respectively, which are assumed uniform distributions over suitable ranges of the values here.
Following [8], we regard the variances of SNP effects, σ[gl]^2 (l = 1, 2, ..., N), as missing data. As the E-step, σ[gl]^2 is replaced by its conditional posterior expectation given the current estimate ĝ[l], denoted by σ̃[gl]^2. Since the conditional posterior of σ[gl]^2 given g[l] is χ^-2(ν + 1, (νS + g[l]^2)/(ν + 1)), this expectation is

σ̃[gl]^2 = E(σ[gl]^2 | ĝ[l]) = (ĝ[l]^2 + νS)/(ν - 1).     (3)

As the M-step, we obtain the values of the parameters other than σ[gl]^2 (l = 1, 2, ..., N) maximizing the log-posterior distribution with σ[gl]^2 replaced by σ̃[gl]^2. The mode of each parameter which maximizes the log-posterior can be given by solving the equation derived by setting the partial derivative of the log-posterior with respect to the parameter equal to 0. Accordingly, the modes of g[l] (l = 1, 2, ..., N), b[j] (j = 1, 2, ..., f) and σ[e]^2, denoted as ĝ[l], b̂[j] and σ̂[e]^2, are

ĝ[l] = u[l]'(y - Xb - Σ[j≠l] u[j]g[j]) / (u[l]'u[l] + σ[e]^2/σ̃[gl]^2),     (4)

b̂ = (X'X)^-1 X'(y - Σ[l=1..N] u[l]g[l]),     (5)

σ̂[e]^2 = (y - Xb - Σ[l=1..N] u[l]g[l])'(y - Xb - Σ[l=1..N] u[l]g[l]) / n.     (6)

The EM algorithm for BSR is summarized as follows:

1. E-step: σ̃[gl]^2 is computed as (ĝ[l]^2 + νS)/(ν - 1) for l = 1, 2, ..., N.
2. M-step: the values of g[l] (l = 1, 2, ..., N), b[j] (j = 1, 2, ..., f) and σ[e]^2 maximizing the log-posterior distribution of the parameters are obtained from (4), (5) and (6).

The E-step and M-step are repeated until the values of the parameters converge. We stop the iteration when the change in the values of the parameters becomes small; for example, when Σ[j](θ̂[j]^(t+1) - θ̂[j]^(t))^2 < 10^-6, where θ̂[j]^(t) denotes the estimate of the jth element of θ at the tth iteration.
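To make the iteration concrete, here is a minimal sketch of EM-based BSR in code (my own illustration, not the authors' software; the hyperparameter values ν = 4 and S = 0.05 and all names are assumptions, and the E-step uses the conditional posterior expectation (ĝ[l]² + νS)/(ν − 1) that follows from the χ^-2(ν, S) prior):

```python
import numpy as np

def em_bsr(y, U, X, nu=4.0, S=0.05, max_iter=500, tol=1e-6):
    """Posterior-mode search for BSR by EM (sketch; nu and S are illustrative)."""
    n, N = U.shape
    g = np.zeros(N)
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    s2e = float(np.var(y - X @ b))
    for _ in range(max_iter):
        g_old = g.copy()
        # E-step: conditional posterior expectation of each SNP-effect variance
        s2g = (g ** 2 + nu * S) / (nu - 1.0)
        # M-step: coordinate-wise posterior mode of each SNP effect
        r = y - X @ b - U @ g
        for l in range(N):
            r += U[:, l] * g[l]                       # drop SNP l from the fit
            g[l] = (U[:, l] @ r) / (U[:, l] @ U[:, l] + s2e / s2g[l])
            r -= U[:, l] * g[l]                       # restore updated SNP l
        b = np.linalg.lstsq(X, y - U @ g, rcond=None)[0]
        resid = y - X @ b - U @ g
        s2e = float(resid @ resid / n)
        if np.sum((g - g_old) ** 2) < tol:            # stopping rule as above
            break
    return g, b, s2e
```

On simulated data with a few strongly associated SNPs, the large effects are retained while the remaining effects are shrunk toward zero, and the predicted GBV correlates well with the true GBV.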
Modification of BSR
In SSVS, SNP effects can shrink more strongly than in BSR due to the assumption that only a small number of SNPs can be linked to QTL causing only a small portion of SNPs to have significant effects
and many other SNPs to have negligible effects, which might result in the improvement of prediction accuracy for SSVS using a more parsimonious model. Although it was reported in [5] that BSR could
provide a more accurate result for QTL mapping with less than a hundred markers than SSVS developed by Yi et al. (2003) [10], SSVS that is capable of deleting many SNPs with ignorable effects might
perform as well or better than BSR in the case of a huge number of high-density SNPs involved in the prediction of GBV. However, the EM algorithm described above cannot be applied to SSVS because the
prior distribution of σ[gl]^2, a mixture distribution combining χ^-2(ν, S) and 0 with probability p and 1-p, respectively, cannot be well treated with EM algorithm. To devise a cost-effective and
EM-based method providing more accurate prediction for genomic selection with a higher degree of shrinkage, we develop a new modified BSR method incorporating a weight for each SNP depending on the
strength of its association with a trait. In this method, we modify the model (1) by incorporating the variable γ[l ]indicating the inclusion of the lth SNP in the model or exclusion of the lth SNP
from the model, where inclusion and exclusion of the SNP are indicated by γ[l ]= 1 and γ[l ]= 0, respectively. We assume that the prior probabilities of γ[l ]= 1 and γ[l ]= 0 are p and 1-p,
respectively, as in SSVS. The modified model is written as

y = Xb + Σ[l=1..N] γ[l] u[l] g[l] + e,     (7)

where X, b, u[l], g[l] and e are as described in the model (1). We assume that the priors of g[l] and σ[gl]^2 are not influenced by the inclusion (γ[l] = 1) or exclusion (γ[l] = 0) of the SNP in the model (7) and are as adopted in BSR. The method with the model (7), but utilizing these assumptions, is called wBSR, meaning a modified BSR incorporating SNP weights, in this study, since the same EM
procedure as used in BSR for searching the posterior mode of parameters can be applied for this method and it is equivalent to an EM-based BSR procedure proposed by [8] when p = 1. We denote the
variables indicating the inclusion of SNP effects in the model in a vector form as γ = (γ[1], γ[2], ..., γ[N]) which are treated as variables to be estimated in wBSR.
In wBSR, the posterior distribution g(θ, γ | y, U) is modified from (2) and written as

g(θ, γ | y, U) = C f(y | b, g[1], ..., g[N], γ, σ[e]^2) × Π[l=1..N] p(g[l] | σ[gl]^2) p(σ[gl]^2) × Π[l=1..N] p^γ[l] (1 - p)^(1 - γ[l]) × p(b) p(σ[e]^2),     (8)

where the priors p(b) and p(σ[e]^2) are assumed uniform distributions and f is the normal likelihood of y under the model (7). Applying the same argument as in the EM algorithm used for BSR, σ[gl]^2 is replaced by its conditional posterior expectation σ̃[gl]^2. In addition, γ[l] indicating the inclusion of the SNP in the model is unobserved; thus, γ[l] is also replaced by its conditional posterior expectation ξ[l], which can be written, from (8) and under the assumption that the priors of g[l] and σ[gl]^2 are independent of γ[l], as

ξ[l] = E(γ[l] | y, U, θ, γ[-l]) = p f(y | γ[l] = 1, γ[-l], θ) / [p f(y | γ[l] = 1, γ[-l], θ) + (1 - p) f(y | γ[l] = 0, γ[-l], θ)],

where γ[-l] denotes γ with the lth component γ[l] deleted and f(y | γ[l], γ[-l], θ) is the normal density of y with a mean Xb + γ[l]u[l]g[l] + Σ[j≠l] γ[j]u[j]g[j] and a variance σ[e]^2 I. In this expression, however, γ[j] (j ≠ l) is also unobserved. Therefore, we modify the expression for ξ[l] by substituting γ[j] with ξ[j] for j ≠ l. Accordingly, the conditional posterior expectation of γ[l], ξ[l], is approximately obtained in the E-step for l = 1, 2, ..., N following the formula:

ξ[l] = p f(y | γ[l] = 1, ξ[-l], θ) / [p f(y | γ[l] = 1, ξ[-l], θ) + (1 - p) f(y | γ[l] = 0, ξ[-l], θ)],     (9)

where ξ[-l] indicates that γ[j] is replaced by ξ[j] for all j ≠ l.
In the M-step, the values of b[j] (j = 1, 2, ..., f) and σ[e]^2 maximizing g(θ, γ | y, U) are obtained in the same way as in BSR, with Σ[l] γ[l]u[l]g[l] in place of Σ[l] u[l]g[l]. For g[l], the value maximizing the posterior (8) depends on γ[l]: the mode is ĝ[l] = 0 for γ[l] = 0 and

ĝ[l] = u[l]'(y - Xb - Σ[j≠l] γ[j]u[j]g[j]) / (u[l]'u[l] + σ[e]^2/σ̃[gl]^2)

for γ[l] = 1. As γ[l] is unobserved, we substitute ξ[j] for γ[j] (j ≠ l) in these expressions, and the expression for γ[l] = 1 is adopted for the iteration. In summary, the E-step computes σ̃[gl]^2 and ξ[l] for l = 1, 2, ..., N, the M-step updates ĝ[l], b̂ and σ̂[e]^2 with γ[l] replaced by ξ[l], and the two steps are repeated until convergence.
It should be noted that ξ_l given by (9) is an approximate posterior expectation of γ_l that might differ from the posterior probability of the SNP being included in the model. Therefore, ξ_l is referred to as the weight of the SNP, which is regarded as an indicator of the strength of the association of the SNP with a trait. A SNP assigned a large weight, with ξ_l taking values near one, is considered to contribute substantially to GBV, while the contribution of a SNP assigned a small weight, with ξ_l taking values near the given prior value p, is regarded as negligible. The degree of shrinkage can be affected by the value of the prior probability p as well as by the values of the hyperparameters ν and S in χ^-2(ν, S), the prior distribution for σ_gl². The predicted GBV of wBSR is expressed as
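The displayed formula is not reproduced in this extraction, but the SNP part of the weighted prediction it describes can be sketched as the ξ-weighted sum of genotype covariates times estimated effects. This is a minimal illustration, not the authors' implementation; the list-based representation and names are assumptions, and any fixed-effect terms are omitted:

```python
def predict_gbv(u, g, xi):
    """Weighted GBV contribution of the SNPs for one individual in wBSR.

    u  : SNP genotype covariates u_l for the individual
    g  : estimated SNP effects g_l
    xi : SNP weights xi_l (approximate posterior expectations of gamma_l)
    """
    # Each SNP contributes its effect scaled by its inclusion weight.
    return sum(w * cov * eff for w, cov, eff in zip(xi, u, g))

# Hypothetical three-SNP example.
gbv = predict_gbv(u=[1, 0, 2], g=[0.5, -0.2, 0.1], xi=[0.9, 0.05, 0.7])
```

A SNP whose weight stays near the prior p contributes almost nothing to the prediction, matching the shrinkage behaviour described above.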
Simulation experiments
We evaluated the accuracy of the prediction of GBV using wBSR with variable p based on simulated data sets. The population and genome were simulated following the approach of [11]. In brief, the
populations with an effective population size 100 were maintained by random mating for 1000 generations to attain mutation drift balance and linkage disequilibrium between SNPs and QTLs. The genome
was assumed to consist of 10 chromosomes, each of length 100 cM. Two scenarios were considered for the number of SNP markers available in the simulations, and the data sets under the two scenarios were denoted Data I and Data II. In Data I, 101 marker loci were located every 1 cM on each chromosome, with a total of 1010 markers on a genome. In Data II, 1010 equidistant marker loci were located on
each chromosome with a total of 10100 markers. We assumed that 100 equidistant QTLs were located on each chromosome, such that a QTL was in the middle of every marker bracket in Data I and in the middle of every 10th marker bracket in Data II. Therefore, there was a total of 1000 QTLs on the whole genome. The mutation rates assumed per locus per meiosis were 2.5 × 10^-3 and 2.5 × 10^-5 for
marker locus and QTL, respectively. At least one mutation occurred in most marker loci at such a high mutation rate during the simulated generations. In marker loci experiencing more than one mutation, the mutation remaining at the highest minor allele frequency (MAF) was regarded as visible, whereas the others were ignored, which caused the marker loci to have two alleles like
SNP markers. Only the polymorphic QTLs, at which a mutation had occurred, affected the trait; the effects of the QTL alleles were sampled from a gamma distribution with scale parameter 0.4 and shape parameter 1.66 and were assigned positive or negative signs with equal probability [1,11].
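Sampling one QTL allele effect as described (gamma-distributed magnitude, random sign) can be sketched as follows; the function name and the use of Python's standard generator are illustrative assumptions, not the authors' simulation code:

```python
import random

def sample_qtl_effect(shape=1.66, scale=0.4, rng=random):
    """Draw one QTL allele effect: gamma-distributed magnitude,
    assigned a positive or negative sign with equal probability."""
    magnitude = rng.gammavariate(shape, scale)  # mean magnitude = shape * scale
    return magnitude if rng.random() < 0.5 else -magnitude
```

With shape 1.66 and scale 0.4 the mean magnitude is about 0.66, and the symmetric sign assignment makes the expected signed effect zero.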
In generations 1001 and 1002, the population size was increased to 1000. The population in the 1001st generation was treated as training data, where the phenotypes of a trait and the SNP genotypes of the individuals were simulated and analyzed with the genomic selection methods to estimate the SNP effects in the model. The phenotype of each individual in the 1001st generation was given as the sum of QTL effects over the polymorphic QTLs and an environmental effect (residual) sampled from a normal distribution with mean 0 and variance 1, such that the heritability in the population was expected to be 0.5. The population in the 1002nd generation was used as selection candidates, where the individuals were only genotyped for 1010 and 10100 SNP markers in Data I and Data II, respectively, without phenotypic records, and the GBV of each individual was predicted using a model with SNP effects estimated from the population in the 1001st generation. The true breeding value (TBV) of each individual in the 1002nd generation was also simulated, as the sum of QTL effects corresponding to the QTL genotype, and was used to evaluate the accuracy of the predicted GBV, but it was regarded as unknown and unavailable in the estimation of SNP effects in the models. The accuracy was measured by the correlation between the predicted GBV and the TBV.
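The accuracy measure, the Pearson correlation between predicted GBV and TBV, can be computed with a small helper; this is a generic sketch, not the authors' code:

```python
import math

def accuracy(gbv, tbv):
    """Pearson correlation between predicted GBVs and true breeding values."""
    n = len(gbv)
    mg = sum(gbv) / n
    mt = sum(tbv) / n
    cov = sum((g - mg) * (t - mt) for g, t in zip(gbv, tbv))
    sg = math.sqrt(sum((g - mg) ** 2 for g in gbv))
    st = math.sqrt(sum((t - mt) ** 2 for t in tbv))
    return cov / (sg * st)
```

A correlation of 1 would mean the predicted ranking and scaling of candidates matches the true breeding values perfectly; the simulation results below report values of this statistic.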
For the evaluation of the accuracies of the predicted GBVs obtained by wBSR with p = 0.01, 0.05, 0.1, 0.2, 0.5 and 1.0, we simulated 100 and 20 data sets under the scenario of Data I and Data II,
respectively. The accuracies of the GBVs predicted by BSR and SSVS based on MCMC algorithm were also evaluated on the same data sets in comparison with wBSR. In MCMC iteration, we repeated 11000
cycles using a burn-in period of the first 1000 cycles. The values of parameters were sampled every 10 cycles for obtaining the posterior means. In SSVS, we investigated the accuracies of predicted
GBVs for p = 0.01, 0.05, 0.1, 0.2 and 0.5 in Data I but for p = 0.01, 0.05 and 0.1 in Data II due to large computational time required for MCMC algorithm. SNP markers with MAF less than 0.05, which
were less than 10% of all SNPs, were not used for the estimation of effects or the prediction of GBV. We set ν = 4.012 and S = 0.002 for MCMC-based BSR and for wBSR with p = 1.0 (which is equivalent to the EM-based BSR proposed by [8]), and ν = 4.234 and S = 0.0429 for SSVS and for wBSR with other values of p. These values of ν and S were determined following [1].
The accuracies of the predicted GBVs obtained by several methods for genomic selection were evaluated in 100 simulated data sets of Data I and in 20 data sets of Data II, where we assumed that 1010
SNP markers and 10100 SNP markers were available on a whole genome for Data I and Data II, respectively. The results of the simulations are summarized in Table 1, where the regression coefficients of the true GBV on the predicted GBV are also listed for reference as well as the correlation coefficients. Although we evaluated the accuracies of the prediction of GBV with the correlation coefficients, the regression coefficient can be used as an indicator of bias in the predicted GBV.
Accuracies of prediction of GBV in the methods of genomic selection
In Data I, SSVS based on the MCMC algorithm provided the most accurate prediction of GBV, with an accuracy of 0.772 when p = 0.5 in the given settings of ν and S (Table 1). The accuracy of wBSR was affected by the value of p and was reduced as the value of p decreased from 0.5. The accuracy of wBSR was 0.760 at p = 0.5 and fell to 0.699 at p = 0.01 in the same setting of ν and S. This was also the case for SSVS, where the accuracy ranged from 0.772 at p = 0.5 to 0.718 at p = 0.01. The prediction accuracies with MCMC-based BSR and EM-based BSR (wBSR with p = 1.0) were considerably different in Data I. MCMC-based BSR provided significantly better predicted GBV, with an accuracy of 0.748, than EM-based BSR, with an accuracy of 0.697, considering the standard errors based on 100 repetitions shown in Table 1. The accuracy was significantly improved with wBSR at p = 0.5 in comparison with MCMC-based BSR in Data I, although different values of ν and S were assumed. In Data II, SSVS with p = 0.01 predicted GBV most accurately, with an accuracy of 0.887. The accuracy of wBSR was influenced by the value of p in Data II as well: it was 0.843 at p = 0.01, reached 0.857 at p = 0.05, but was much reduced, to 0.665, at p = 0.5 (Table 1). The accuracy of SSVS was reduced to 0.874 and 0.846 with p = 0.05 and p = 0.1, respectively. MCMC-based and EM-based BSR provided similar accuracies in Data II, 0.838 and 0.840, respectively.
In the EM algorithm used for wBSR, the posterior modes of SNP effects maximizing the posterior distribution are obtained, whereas the posterior expectations of SNP effects are given by MCMC estimation. Therefore, some inconsistency might be anticipated between the estimates of SNP effects, which might create a difference between the accuracies of GBVs predicted by MCMC-based BSR and its EM-based version, wBSR with p = 1.0. In Data I, the difference between the accuracies with MCMC-based and EM-based BSR was significant, as shown in Table 1. In Data II, however, the accuracies with both types of BSR agreed well. We plotted the accuracy obtained by MCMC-based BSR in the analysis of each data set against that obtained by EM-based BSR for Data I and Data II in Figure 1 and Figure 2, respectively. As seen in Figure 1, the inconsistency between the accuracies with MCMC-based BSR and those with EM-based BSR appeared to be small in Data I, although they deviated significantly from each other. Good consistency of the accuracies with both BSR methods was visible in Data II, as shown in Figure 2. However, the goodness of the agreement between MCMC-based and EM-based BSR seemed to depend on the properties of the analyzed data.
Plot of the prediction accuracy for GBV with MCMC-based BSR against that with EM-based BSR in 100 repetitions of Data I.
Plot of the prediction accuracy for GBV with MCMC-based BSR against that with EM-based BSR in 20 repetitions of Data II.
In this study, an EM algorithm for the estimation of SNP effects in the BSR method for genomic selection was described, following the algorithm proposed for QTL mapping in [8]. Moreover, the BSR method was modified by incorporating a weight assigned to each SNP in the model, reflecting the strength of its association with a trait, to control the degree of shrinkage. The EM algorithm could also be applied to this wBSR method. The computational advantage of the wBSR method over MCMC-based Bayesian methods was obvious and would become more remarkable as the number of SNP markers increased. In the
simulations, wBSR took less than 30 seconds on average for the estimation of all SNP effects in each data set of Data I (1010 SNPs) and less than 2 minutes in each data set of Data II (10100 SNPs), whereas MCMC-based SSVS with p = 0.05 took on average more than 30 minutes and more than four hours per data set of Data I and Data II, respectively, on a dual-processor 2 GHz machine (Intel Xeon 2 GHz) without parallel computing. Although the computational time required by MCMC-based BSR was less than that of SSVS, it still took more than 25 minutes and
more than three hours on average in the analysis of a single data set of Data I and Data II, respectively. The number of iterations required by wBSR to reach convergence under the criterion adopted here ranged from 30 to 120, depending on the simulated data.
A fast non-MCMC algorithm for the SSVS method, called fBayesB, was proposed in [2]. In this method, the posterior expectation of each SNP effect g_l was evaluated analytically instead of by MCMC-based numerical calculation, where the prior of g_l was assumed to be a mixture of a distribution with a discrete probability mass at zero and a double exponential distribution. Although no comparison between this analytically integrated SSVS method and the wBSR proposed here was made in this study, the simulation experiments showed that wBSR was also efficient in computational time, since it is based on the EM algorithm, a simple algorithm without integral calculation, and performed better than MCMC-based BSR. Thus, wBSR can be regarded as a simple method for genomic selection with practical prediction accuracy and computing efficiency, like the SSVS method utilizing analytical integration (fBayesB).
As shown in Table 1, the accuracy of the predicted GBV was much influenced by the value of p, the prior probability of a SNP being included in the model. The accuracy is also expected to change with the values of the hyperparameters ν and S in χ^-2(ν, S), the prior distribution for σ_gl². These prior parameters, given a priori, determine the degree of shrinkage in the estimation of SNP effects and affect the accuracy of the prediction of GBV, as do the properties of the analyzed data. We adopted here the values ν = 4.234 and S = 0.0429 for SSVS and wBSR with p < 1.0, and ν = 4.012 and S =
0.002 for MCMC-based BSR and EM-based BSR (wBSR with p = 1.0) since we considered the same scenario in simulations as that used by [1] for the population size, mutation rates of markers and QTL and
the number of QTL, in which these values of ν and S were theoretically calculated as suitable values for SSVS and BSR. However, the suitability of these values of ν and S might be affected by the
structure of analyzed data such as the number of SNPs involved, especially for BSR including all of SNPs in the model. Therefore, we performed additional analyses with MCMC-based and EM-based BSR for
Data I and Data II using the different values of ν and S. We adopted the same setting of ν and S as used in SSVS (that is, ν = 4.234 and S = 0.0429), which should cause less shrinkage for the
estimates of SNP effects, in the additional analysis with both types of BSR in Data I. In Data II, the Jeffreys prior p(σ_gl²) ∝ 1/σ_gl², corresponding to ν = 0.0 and yielding strong shrinkage for very small SNP effects but weak shrinkage for large effects [8], was tested for the analysis with both types of BSR. In the additional analysis of 100 simulated data sets in Data I, with the same setting of
ν and S as used in SSVS, the accuracy of EM-based BSR (wBSR with p = 1.0) increased considerably, from 0.697 to 0.744 with a standard error (s.e.) of 0.006, while the increase in the accuracy of MCMC-based BSR was slight, from 0.748 to 0.754 with s.e. 0.006. In the other additional analysis, of 20 repetitions of Data II using the Jeffreys prior, the accuracies of both types of BSR decreased in comparison with the original prior setting of ν and S: we obtained an accuracy of 0.834 with s.e. 0.017 for MCMC-based BSR and an accuracy of 0.809 with s.e. 0.016 for EM-based BSR. Although there seems to be the possibility of further improving the accuracy by choosing priors that yield a more suitable degree of shrinkage for the estimates of SNP effects, it is generally difficult to construct such a desirable prior for σ_gl².
An actual strategy to determine the optimal values of p, ν and S would be to evaluate the accuracies obtained by varying the values of these hyperparameters in small steps over suitable ranges, for example, 0 < p < 1, 0 < ν < 5, 0 < S < 1. In genomic selection applied to actual data, cross-validation might be the method of choice for determining suitable values of these hyperparameters.
A number of replications of the estimation of a large number of SNP effects would necessarily be required for finding the optimal values. When replicated estimations are required, the advantage of the EM-based wBSR method over MCMC-based methods with respect to computational time would be much more remarkable.
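The cross-validation strategy suggested here amounts to a grid search over (p, ν, S). A schematic sketch follows; `fit_and_score` is a hypothetical callback (not part of the paper) that would fit wBSR on training folds and return the held-out prediction accuracy:

```python
import itertools

def grid_search(fit_and_score, p_grid, nu_grid, s_grid):
    """Return the (p, nu, S) combination with the highest
    cross-validated accuracy, together with that accuracy."""
    best, best_acc = None, float("-inf")
    for p, nu, s in itertools.product(p_grid, nu_grid, s_grid):
        acc = fit_and_score(p, nu, s)
        if acc > best_acc:
            best, best_acc = (p, nu, s), acc
    return best, best_acc
```

Because every grid point requires refitting all SNP effects, the speed advantage of the EM-based method over MCMC matters most in exactly this replicated setting.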
In [8], the EM algorithm was applied to the shrinkage regression model of QTL mapping in the framework of the generalized linear model, which includes logistic and probit models as well as the normal linear model described in this study, by choosing appropriate link functions following [9]. For the EM algorithm applied to the normal linear model described in [9], standardization of the outcome variable by rescaling it to have mean 0 and standard deviation 0.5 was recommended. The influence of data transformation on the accuracy of the prediction of GBVs seems as important as that of the prior settings for g_l and σ_gl². These investigations will be described elsewhere.
In large-scale genotyping data used for genomic selection, including tens of thousands of SNP genotypes for thousands of individuals, a large number of SNP genotypes may still be missing. The EM algorithm allows the missing SNP genotypes to be inferred via the posterior expectations of the indicator variables of the genotypes, given the information of the adjacent SNPs or pedigree information. A step for the
inference of missing genotypes can also be included in our EM-based method of genomic selection. Although the inference of missing genotypes with EM-algorithm has been shown to be effective for
an increase in the power of QTL detection, how prediction accuracy is affected by the inference of missing genotypes in genomic selection remains to be investigated. This topic should be addressed in further study.
We developed a program implementing the EM algorithm described here for estimating SNP effects in genomic selection, and applied the program in the simulation study. Information about this program is provided below (see Availability and requirements).
In this research, we described an EM algorithm for a Bayesian method, BSR, which includes the effects of all SNPs as covariates in a regression model for genomic selection and was so far based on the MCMC algorithm. Moreover, we devised a modified version of the BSR method, called wBSR, which incorporates a weight assigned to each SNP according to the strength of its association with a trait and to which the EM algorithm is also applicable. The simulation experiments showed that the accuracy of wBSR in predicting GBV was improved in comparison with MCMC-based BSR. Although the accuracy of wBSR was inferior to that of SSVS, wBSR can be regarded as a practical and cost-effective method, taking into account its great computational advantage over MCMC-based Bayesian methods.
Availability and requirements
The source code of the program used in the simulation study was written in Fortran 77, and a Windows version of the executable program is available on request from the first author (hayatk/at/affrc.go.jp). Sample input files and a brief manual for the program can also be provided.
Authors' contributions
TH devised EM algorithm for Bayesian methods in genomic selection, developed a program for simulations and drafted the manuscript. HI assisted in developing a program and drafted the final
manuscript. Both authors read and approved the final manuscript.
This research was supported by a grant from the Ministry of Agriculture, Forestry and Fisheries of Japan (Genomics for Agricultural Innovation, DD-4050).
• Meuwissen THE, Hayes B, Goddard ME. Prediction of total genetic value using genome-wide dense marker maps. Genetics. 2001;157:1819–1829.
• Xu S. Estimating polygenic effects using markers of the entire genome. Genetics. 2003;163:789–801.
• George EI, McCulloch RE. Variable selection via Gibbs sampling. J Am Stat Assoc. 1993;88:881–889.
• Meuwissen THE, Solberg TR, Shepherd R, Woolliams JA. A fast algorithm for BayesB type of prediction of genome-wide estimates of genetic value. Genet Sel Evol. 2009;41(2).
• Wang H, Zhang YM, Li X, Masinde GL, Mohan S, Baylink DJ, Xu S. Bayesian shrinkage estimation of quantitative trait loci parameters. Genetics. 2005;170:465–480. doi: 10.1534/genetics.104.039354.
• ter Braak CJF, Boer MP, Bink MCAM. Extending Xu's Bayesian model for estimating polygenic effects using markers of the entire genome. Genetics. 2005;170:1435–1438. doi: 10.1534/genetics.105.040469.
• Yang R, Xu S. Bayesian shrinkage analysis of quantitative trait loci for dynamic traits. Genetics. 2007;176:1169–1185. doi: 10.1534/genetics.106.064279.
• Yi N, Banerjee S. Hierarchical generalized linear models for multiple quantitative trait locus mapping. Genetics. 2009;181:1101–1113. doi: 10.1534/genetics.108.099556.
• Gelman A, Jakulin A, Pittau MG, Su YS. A weakly informative default prior distribution for logistic and other regression models. Ann Appl Stat. 2008;2:1360–1383. doi: 10.1214/08-AOAS191.
• Yi N, George V, Allison DB. Stochastic search variable selection for identifying multiple quantitative trait loci. Genetics. 2003;164:1129–1138.
• Solberg TR, Sonesson AK, Woolliams JA, Meuwissen THE. Genomic selection using different marker types and densities. J Anim Sci. 2008;86:2447–2454. doi: 10.2527/jas.2007-0010.
Articles from BMC Genetics are provided here courtesy of BioMed Central
Motion in a resisting medium
September 17th 2009, 04:47 AM #1
A body of mass m is projected vertically upwards with speed u. Air resistance is equal kv^2. Find the speed when next at the point of projection.
Can you set up the differential equation for the upwards motion?
Can you set up the differential equation for the downwards motion?
Please post what you've done so far.
I don't know what to do next?
$\frac{dv}{dt} = - g - \frac{k}{m} v^2$
$\Rightarrow \frac{dt}{dv} = - \frac{1}{g + \frac{k}{m} v^2} = - \frac{m}{mg + k v^2}$ subject to the boundary condition that v = u at t = 0.
Solve this differential for t as a function of v and then make v the subject. Get x from v and use it to find x when v = 0 (the maximum height).
Now set up and solve the differential equation for the downwards motion. Find v when x = distance found above.
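As a sanity check on the two integrations, both phases can also be solved numerically and compared with the closed form they lead to, $w = u\sqrt{\frac{mg}{mg + ku^2}}$ for the speed on return. The following sketch is illustrative only (the parameter values in the check are assumptions, not part of the original problem):

```python
import math

def return_speed_numeric(u, m, k, g=9.81, dt=1e-5):
    # Upward phase: gravity and drag both oppose the motion.
    x, v = 0.0, u
    while v > 0.0:
        v += (-g - (k / m) * v * v) * dt
        x += v * dt
    # Downward phase (downward speed taken positive): drag opposes the fall.
    v = 0.0
    while x > 0.0:
        v += (g - (k / m) * v * v) * dt
        x -= v * dt
    return v

def return_speed_closed(u, m, k, g=9.81):
    # Speed when the body next passes the point of projection.
    return u * math.sqrt(m * g / (m * g + k * u * u))
```

For, say, u = 20, m = 1, k = 0.1, the numerical integration agrees with the closed form to well under 1%, and the returned speed is less than u, as the resistance requires.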
Australian Bureau of Statistics
1 When interpreting the results of a survey it is important to take into account factors that may affect the reliability of the estimates. Estimates in this publication are subject to both
non-sampling and sampling errors.
2 Non-sampling errors may arise as a result of errors in the reporting, recording or processing of the data and can occur even if there is a complete enumeration of the population. These errors can
be introduced through inadequacies in the questionnaire, treatment of non-response, inaccurate reporting by respondents, errors in the application of survey procedures, incorrect recording of answers
and errors in data capture and processing.
3 The extent to which non-sampling error affects the results of the survey is difficult to measure. Every effort is made to reduce non-sampling error by careful design and testing of the
questionnaire, efficient operating procedures and systems, and the use of appropriate methodology.
4 Some of the items collected in the Business Characteristics Survey are dynamic in nature and the concepts measured are subject to evolution and refinement over time. This is most evident in the
items related to innovation statistics where substantial change has been made. As noted in the Explanatory Notes, changes have been made to the questions, survey scope and survey procedures, however,
it is not possible to measure the impact of all of these changes on data quality.
5 While all attempts are made to ensure that the questions are unambiguous and not subject to misinterpretation, some of the concepts measured in the BCS require the provider to make a subjective
judgement or assessment. It is not possible to accurately quantify the impact of these issues on data quality.
6 The 2006-07 Business Characteristics Survey had a response rate of 97%.
7 The difference between estimates obtained from a sample of businesses, and the estimates that would have been produced if the information had been obtained from all businesses, is called sampling
error. The expected magnitude of the sampling error associated with any estimate can be estimated from the sample results. One measure of sampling error is given by the standard error (SE), which
indicates the degree to which an estimate may vary from the value that would have been obtained from a full enumeration (the 'true' figure). There are about two chances in three that a sample
estimate differs from the true value by less than one standard error, and about nineteen chances in twenty that the difference will be less than two standard errors.
8 An example of the use of standard error on the total proportion of innovating businesses is as follows. In this release, the estimated proportion of total innovating businesses is 32.4%. The
standard error of this estimate was 0.95%. There would be about two chances in three that a full enumeration would have given a figure in the range 31.4% to 33.4%, and about nineteen chances in
twenty that it would be in the range 30.5% to 34.3%. Detailed standard errors are available on request.
9 In this publication (and associated data cubes), indications of sampling variability are measured by relative standard errors (RSEs). The relative standard error is a useful measure in that it
provides an immediate indication of the percentage errors likely to have occurred due to sampling, and thus avoids the need to refer also to the size of the estimate. Relative standard errors are
shown in the Relative Standard Error table in this section.
10 To avoid inconsistencies between the way very low and very high proportions are annotated, a fixed value of 50% (rather than the survey estimate x itself) has been used as the denominator in the calculation of RSE. Relative standard errors for estimates in this publication have therefore been calculated from the actual standard error of the estimate, SE(x), in the following manner: RSE(x) = (SE(x) / 50) × 100.
Using the previous example, the standard error for the estimated proportion of innovating businesses was 0.95%. Multiplied by 100 and then divided by 50, this gives an RSE calculated on this basis of 1.9%. It is these figures that appear in the table appended below.
For the tables in this publication (and associated data cubes), estimates with RSEs between 10% and 25% are annotated with the symbol '^'. These estimates should be used with caution as they are
subject to sampling variability too high for some purposes. Estimates with RSEs between 25% and 50% are annotated with the symbol '*', indicating that the estimates should be used with caution as
they are subject to sampling variability too high for most practical purposes. Estimates with an RSE greater than 50% are annotated with the symbol '**', indicating that the sampling variability
causes the estimates to be considered too unreliable for general use.
For estimates of proportion the symbol '^' means that the estimate from full enumeration could lie more than a decile away so the estimate should be used with caution. For example, a proportion
estimate of 30% annotated with '^' means the full enumeration value could lie beyond the range 20% to 40%. The symbol '*' means the estimate from full enumeration could lie more than a quartile away
and is subject to sampling variability too high for most practical purposes. A proportion estimate of 30% annotated with '*' means the full enumeration value could lie beyond the range 5% to 55%.
Proportion estimates annotated with the symbol '**' have a sampling error that causes the estimates to be considered too unreliable for general use.
Readers of this release should note that most of the data have an RSE of less than 10%.
Relative Standard Error - Summary indicators of Innovation - 2006-07

                                                            0-4       5-19      20-199    200 or more   Total
                                                            persons   persons   persons   persons
                                                            %         %         %         %             %
Estimated number of businesses as at 30 June 2007           1.8       3.2       3.6       5.0           0.6
Businesses with introduced or implemented innovation
  (innovating businesses)                                   2.7       3.4       4.5       5.0           1.9
Businesses with innovative activity which was:
  still in development                                      2.3       3.3       4.1       4.1           1.9
  abandoned                                                 1.4       1.9       2.2       1.9           1.0
Businesses with any innovative activity
  (innovation-active businesses)                            2.8       3.4       4.5       4.8           2.0
This page last updated 25 August 2010 | {"url":"http://www.abs.gov.au/ausstats/abs@.nsf/Previousproducts/8158.0Technical%20Note12006-07?opendocument&tabname=Notes&prodno=8158.0&issue=2006-07&num=&view=","timestamp":"2014-04-20T19:50:15Z","content_type":null,"content_length":"32858","record_id":"<urn:uuid:cf2c9438-6b06-483b-8402-8c15fcd8abd8>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00309-ip-10-147-4-33.ec2.internal.warc.gz"} |
Learning R for Researchers in Psychology
June 6, 2009
By Jeromy Anglim
R is a powerful environment for statistical computing. Here is a selective list of resources on R with an emphasis on resources useful for researchers in psychology.
Psychology-specific R resources

General Resources

Task Views
R includes many user contributed packages. In order to make these more accessible, many of them are listed under task views. The following Task Views are of particular relevance to researchers in
Books

The R Project site lists many of the increasing number of books on R that are being released. Some of the books on R that I have enjoyed reading include the following:
• Software for Data Analysis (2008): John Chambers: This gives a sense of the philosophy and style of programming in R. It is an intermediate to advanced text.
• Data Manipulation with R: Phil Spector: This book is short, concise, and very clear. The examples are well chosen.
• Data Analysis and Graphics Using R - An Example-Based Approach: John Maindonald and John Braun: This provides a good introduction to R. It also covers many techniques useful in psychology
introducing several interesting techniques that are not necessarily part of the standard psychology statistics curriculum.
• Books in the The Springer UseR Series tend be quite good.
• If you are coming from an SPSS or SAS background, as is often the case in psychology, R for SAS and SPSS Users may make transferring your knowledge easier. There's an early version of the book available for free online.
Getting Started
• The following Videos Part 1 and Part 2 provide a useful introduction. Here's another video
• Organise a user interface: My general advice: spend the first 10 to 20 hours using the basic R environment; then have a look at Tinn-R, JGR, and R Commander. If you decide you like it, I'd recommend using Eclipse with the StatET plug-in. See this tutorial for how to install and use it. Here's why I like it. New GUI options are changing all the time, so it's worth keeping an eye out for new developments.
Multiplying and Dividing Rationals - Problem 2
When I assign problems like this in my class, a lot of times students think that they are done in 10 seconds. They say oh! Okay, cancel out 5x² and 5x², boom, so the answer is x plus 3 plus 10x. You guys, that's like the bonehead way to do this. What you need to do when you're multiplying rational expressions is multiply across the top, multiply across the bottom, and reduce. Also you want to make sure you are factoring everything.
So what I’m going to do is write this binomial as a sum on top of 1. That will help me see that what I’m really doing is multiplying two fractions. I have this fraction where I have x plus 3 on top,
also on top of this guy I have 5x² plus 10x that’s all being multiplied, on the bottom I have 5x². So let me go through and try to factor everything that I can.
The top of the second fraction could be rewritten like this, 5x times x plus 2 because 5x is a common factor. So to rewrite this problem I’m going to have on top x plus 3 times 5x times x plus 2, all
that stuff is in the top of my fractions on the bottom I have 5x². Okay my next job is to cancel out any factors that are the same in top and bottom. Well there is no x plus 3 terms so that guy is
going to stay. This 5x term looks pretty close to that guy and then, I’ll come back to that, the x plus 2 term I can’t cancel anything so I know he’s going to stay. So I’m going to write him as part
of my answer also.
Okay, let’s go back to this 5x over 5x². You guys can probably do in your head that the fives are going to be eliminated, and this x is going to cancel out one of those x’s, so I’ll be left with just 1x in the denominator. That’s my most reduced form. Your teacher might want you to FOIL this out and make that a trinomial; it’s totally up to your teacher or maybe the textbook instructions, but this is the simplified version of this product.
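As a quick numeric spot-check of the cancellation described above (assuming, as the transcript suggests, that the product is (x + 3) times (5x² + 10x) over 5x²), the original and reduced forms agree wherever both are defined:

```python
# Spot-check: the original product and the reduced form (x+3)(x+2)/x
# agree at several sample points (x != 0).
def original(x):
    return (x + 3) * (5 * x**2 + 10 * x) / (5 * x**2)

def simplified(x):
    return (x + 3) * (x + 2) / x

for x in (1, 2, 3.5, -4, 10):
    assert abs(original(x) - simplified(x)) < 1e-9
```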
Again, the way students make errors in these kinds of problems is by trying to cancel out right away, by not recognizing that this could be written as a fraction, or by factoring incorrectly.
Be really careful with all those things, guys, especially if you have something where you have a rational expression multiplied by a polynomial that isn’t clearly a fraction yet. Just put it on top of 1; then it’s a fraction and you guys can do it.
Finding the velocity function
February 12th 2009, 03:39 PM
Finding the velocity function
I thought acceleration was the change in velocity, or in other words the derivative of the velocity function, and I am trying to solve the attached problem keeping this in mind, but it's not working. Any helpful hints?
February 12th 2009, 03:52 PM
you forgot your constant of integration when you integrated to get v(t) ... determine it from the given initial condition.
February 12th 2009, 03:53 PM
A good way is to integrate, knowing:
$v(t) = d'(t) \Rightarrow \int v(t)\,dt = d(t) + C$
February 12th 2009, 03:56 PM
Scott H
Two or more functions can have the same derivative. In our case, any function whose derivative is $t+8$ must be of the form
$v(t)=\frac{1}{2}t^2+8t+C.$
All we have to do now is figure out what $C$ must be. We are told that $v(0)=3$, so
$3=\frac{1}{2}(0)^2+8(0)+C=C.$
Now that we know $C=3$, we can rewrite our formula for $v(t)$:
$v(t)=\frac{1}{2}t^2+8t+3.$
You were close, though. :)
February 12th 2009, 03:56 PM
ahh gotcha, but then how would I go about the final part of the problem, calculating the total distance?
February 12th 2009, 04:05 PM
I can't even figure out how total distance can be contrived from what's given...
February 12th 2009, 04:28 PM
Just as acceleration is the derivative of velocity, so velocity is the derivative of distance. Just as velocity is the integral of acceleration, so distance is the integral of velocity.
Now that you know the velocity function is $\frac{1}{2}t^2+ 8t+ 3$, integrate that. Because you are asked for the distance covered during the time interval 0 to 10, use d(0)= 0 to find C, the constant of integration, and then find d(10). Equivalently, evaluate d(10)- d(0) so the constant cancels. That last is the same as finding the definite integral $\int_0^{10} \left(\frac{1}{2}t^2+ 8t+ 3\right)dt$.
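Carrying out the arithmetic for that distance, using the antiderivative d(t) = t³/6 + 4t² + 3t of v(t) = t²/2 + 8t + 3 (a quick numeric sketch):

```python
# Antiderivative of v(t) = t^2/2 + 8t + 3, chosen so that d(0) = 0.
def d(t):
    return t**3 / 6 + 4 * t**2 + 3 * t

total_distance = d(10) - d(0)
print(total_distance)  # 1000/6 + 400 + 30, about 596.67
```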
February 12th 2009, 04:36 PM
Thanks buddy
February 12th 2009, 05:25 PM
to get the terminology straight ...
position (a vector) is the antiderivative of velocity (also a vector). distance is a scalar quantity.
a definite integral of velocity over an interval of time yields the displacement, or change in position over that interval of time.
distance traveled over an interval of time is the definite integral of speed (a scalar equal to the absolute value of velocity) over that time interval. | {"url":"http://mathhelpforum.com/calculus/73368-finding-velocity-function-print.html","timestamp":"2014-04-16T07:44:10Z","content_type":null,"content_length":"10466","record_id":"<urn:uuid:3570c109-8d8a-427a-93f1-7b1fb66ac315>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00225-ip-10-147-4-33.ec2.internal.warc.gz"} |
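The displacement-versus-distance distinction above can be seen on a toy example (not the problem in this thread, where v stays positive): take v(t) = t − 1 on [0, 2]; the displacement is 0 while the distance traveled is 1. A minimal numeric sketch:

```python
# Midpoint-rule integration, accurate enough here to separate
# displacement (integral of v) from distance (integral of |v|).
def midpoint_integral(f, a, b, n=100_000):
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

v = lambda t: t - 1
displacement = midpoint_integral(v, 0, 2)                # signed area: 0
distance = midpoint_integral(lambda t: abs(v(t)), 0, 2)  # total path: 1
```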
Boolean ring, induction
January 19th 2009, 04:23 PM #1
Nov 2008
Boolean ring, induction
A commutative ring $A$ is Boolean if $x^2 = x$ for all $x \in A$.
In a Boolean ring $A$, show that every finitely generated ideal in $A$ is principal.
I am pretty sure this proof is by induction because: $(x,y)=(x-xy+y)$ The inclusion $(\supseteq)$ is clear, and the identities $x(x-xy+y)=x$ and $y(x-xy+y)=y$ give $(\subseteq)$. How do I prove
this by induction?
First of all, you don't need to add "commutativity" in the definition of a Boolean ring, because it's a consequence of the condition $x^2=x, \ \forall x \in A.$ To complete your induction, just note that $I=<x_1, \cdots , x_{n-1}, x_n>=Ax_1 + \cdots + Ax_{n-1}+Ax_n.$ Now if $J=<x_1, \cdots, x_{n-1}>=<x>,$ then $I=J+Ax_n=<x,x_n>=<x-xx_n+x_n>.$
January 19th 2009, 04:48 PM #2
MHF Contributor
May 2008 | {"url":"http://mathhelpforum.com/advanced-algebra/68929-boolean-ring-induction.html","timestamp":"2014-04-16T19:34:31Z","content_type":null,"content_length":"37831","record_id":"<urn:uuid:42fa3536-0a66-4fb1-a67b-e7e30154b9f8>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00468-ip-10-147-4-33.ec2.internal.warc.gz"} |
prove that [0,1] is compact?
April 5th 2011, 02:51 AM
prove that [0,1] is compact?
Using Tychonoff's theorem, prove that [0,1] is compact.
Should I present [0,1] as a product of compact spaces? Is that possible? If so, how can I do it?
any help is appreciated!
April 5th 2011, 06:15 AM
I haven't thought about this really hard, so it might not be correct, but you can try letting $X=\{0,1,2,\ldots, 9\}$ under the usual Euclidean metric. Then we can represent $[0,1]\cong \Pi_{n=1}
^\infty X$. In other words, each copy of $X$ allows you to write down a digit of a number from $[0,1]$. For example, $.14159\ldots\cong \{1\}\times \{4\}\times \{1\}\times \{5\}\times \{9\}\times \ldots$
At this point, it is necessary to check that the product topology for $\Pi_{n=1}^\infty X$ is the same as the Euclidean topology, which I have not done. Maybe you can think about it. If it is
true, then Tychonoff's theorem will finish the problem.
Good luck.
April 5th 2011, 06:19 AM
You can identify $\mathcal{F}(\mathbb{R},[0,1])$ with the product set $F=\prod \{[0,1]_x:x\in \mathbb{R}\}$, where $[0,1]_x$ denotes a copy of $[0,1]$ .
Edited: Sorry, I didn't see roninpro's post.
April 5th 2011, 08:21 AM
What is [0,1] in the context of Tychonoff's Theorem:
"If each of the subspaces $X_{a}$ is compact, the product space $X= \prod_{a \epsilon A} X_a$, with the cartesian product topology, is also compact." Taylor
Does the theorem say, for example, that (x,y,z) is a compact space if x, y, and z are? If so then (x) is compact if [0,1] is compact, which it is. But that's the opposite of what you have to
prove. Stuck
April 5th 2011, 08:54 AM
That construction ought to work. But in that product topology, is it true that the sequence 0.49, 0.499, 0.4999, 0.49999, ... converges to 0.5? As usual with constructions involving infinite
decimals, the ambiguity between .9 recurring and 1 causes serious headaches.
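The repeated-nines ambiguity raised here can be made concrete with exact arithmetic: the digit sequences 4,9,9,9,… and 5,0,0,0,… have the same value, so the digit map onto [0,1] is not injective (a small illustration of the headache, not a resolution of the topological question):

```python
from fractions import Fraction

def value(digits):
    # interpret a finite digit sequence d1 d2 ... as 0.d1d2...
    return sum(Fraction(d, 10 ** (i + 1)) for i, d in enumerate(digits))

# truncations of 0.4999... close in on 1/2 from below
for k in (1, 5, 20):
    assert value([4] + [9] * k) == Fraction(1, 2) - Fraction(1, 10 ** (k + 1))

assert value([5]) == Fraction(1, 2)
```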
April 5th 2011, 09:15 AM
(x) is compact because R is compact (product space of dim 1), by Tychonoff's Theorem.
If (x') is a closed subset of (x), then (x') is compact, and hence x' = [0,1] is compact.
April 5th 2011, 10:37 AM
A headache indeed.
I was hoping that the product topology would somehow be metrizable, maybe with a metric like $d(\{a_1\}\times \{a_2\}\times \{a_3\}\times \ldots, \{b_1\}\times \{b_2\}\times \{b_3\}\times \ldots)
=\sum_{n=0}^\infty (a_n-b_n) 10^{-n}$ (basically mimicking the Euclidean metric). It would take care of the repeated nines situation. It seems like a real hassle to deal with though!
Maybe we can simplify the situation by using the binary representation of real numbers?
April 6th 2011, 08:41 AM
Is Tychonoff's theorem a concise (abstract) way of dealing with continuity of functions of many variables (which can be quite messy)?
I would be satisfied with "yes," "no," or "irrelevant." Anything beyond that, such as its ultimate purpose (use), would be a great bonus.
April 6th 2011, 09:15 AM
What do you mean by 'dealing with continuity'? Proving that a given function is continuous? If so, then I don't really see how.
April 6th 2011, 09:40 AM
Defining functions in the neighborhood of a point and their limit at the point. Frankly, I looked at the discussion in Taylor and found it unintelligible, as I did previous posts, with no
interest in wading through it. I would be satisfied If I could salvage a basic insight out of it.
Taylor states Tychonoff's theorem as:
"If each of the subspaces $X_a$ is compact, the product space $\prod_{a \epsilon A} X_a$ , with the cartesian product topology, is also compact."
To me, as a special case, it says, for example, if X1, X2, and X3 are coordinate axes (sub-spaces), then the product space (X1,X2,X3) (3d space) is compact.*
EDIT: *And then it would follow that closed subsets of the 3d space are also compact.
EDIT AGAIN: It would also make sense to me that if X1 is (x1,x2) (a plane) and X2 is (x3) (a line), then X1 × X2 is (x1,x2,x3) (3d space), in the context of Tychonoff's theorem.
April 6th 2011, 10:04 AM
I assume by coordinate axes you mean copies of $\mathbb{R}$. In that case, why are you assuming $\mathbb{R}$ is compact? This is definitely not true under the usual topology.
April 6th 2011, 10:15 AM
All singletons (x) where x is a real number. Or in the case of a plane, (x1,x2), where x1 and x2 are real numbers.
If you want to engage me in a discussion of topology, rather than answer my question, which seems straight forward, I surrender in advance.
April 6th 2011, 10:25 AM
You are right. I admit I had trouble with the notion of points on a line being compact because of the theorem: a set is compact iff it is closed and bounded.
OK, so what does Taylor mean by "a compact subspace?" Only closed bounded spaces like [0,1]?
EDIT: I tried rereading the proof in Taylor. It's over my head. I'm gettin out while the gettins good. Sorry to bother you. | {"url":"http://mathhelpforum.com/differential-geometry/176855-prove-0-1-compact-print.html","timestamp":"2014-04-17T02:54:09Z","content_type":null,"content_length":"19333","record_id":"<urn:uuid:cddc46c8-ffe2-4c8f-ab8d-a3f700874060>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00061-ip-10-147-4-33.ec2.internal.warc.gz"} |
How to Stretch Random Functions: The Security of Protected Counter Sums
Results 1 - 10 of 13
, 2000
"... There is a well-known class of message authentication systems guaranteeing that attackers will have a negligible chance of successfully forging a message. This paper shows how one of these
systems can hash messages at extremely high speed -- much more quickly than previous systems at the same securi ..."
Cited by 28 (8 self)
There is a well-known class of message authentication systems guaranteeing that attackers will have a negligible chance of successfully forging a message. This paper shows how one of these systems
can hash messages at extremely high speed -- much more quickly than previous systems at the same security level -- using IEEE floating-point arithmetic. This paper also presents a survey of the
literature in a unified mathematical framework.
, 2001
"... This is a set of lecture notes on cryptography compiled for 6.87s, a one week long course on cryptography taught at MIT by Shafi Goldwasser and Mihir Bellare in the summers of 1996–2001. The
notes were formed by merging notes written for Shafi Goldwasser’s Cryptography and Cryptanalysis course at MI ..."
Cited by 17 (0 self)
This is a set of lecture notes on cryptography compiled for 6.87s, a one week long course on cryptography taught at MIT by Shafi Goldwasser and Mihir Bellare in the summers of 1996–2001. The notes
were formed by merging notes written for Shafi Goldwasser’s Cryptography and Cryptanalysis course at MIT with notes written for Mihir Bellare’s Cryptography and network security course at UCSD. In
addition, Rosario Gennaro (as Teaching Assistant for the course in 1996) contributed Section 9.6, Section 11.4, Section 11.5, and Appendix D to the notes, and also compiled, from various sources,
some of the problems in Appendix E. Cryptography is of course a vast subject. The thread followed by these notes is to develop and explain the notion of provable security and its usage for the design
of secure protocols. Much of the material in Chapters 2, 3 and 7 is a result of scribe notes, originally taken by MIT graduate students who attended Professor Goldwasser’s Cryptography and
Cryptanalysis course over the years, and later edited by Frank D’Ippolito who was a teaching assistant for the course in 1991. Frank also contributed much of the advanced number theoretic material in
the Appendix. Some of the material in Chapter 3 is from the chapter on Cryptography, by R. Rivest, in the Handbook of Theoretical Computer Science. Chapters 4, 5, 6, 8 and 10, and Sections 9.5 and
7.4.6, were written by Professor Bellare for his Cryptography and network security course at UCSD.
- Advances in Cryptology - CRYPTO 2000 , 2000
"... Abstract. The paradigms currently used to realize symmetric encryption schemes secure against adaptive chosen ciphertext attack (CCA) try to make it infeasible for an attacker to forge “valid ”
ciphertexts. This is achieved by either encoding the plaintext with some redundancy before encrypting or b ..."
Cited by 10 (0 self)
Abstract. The paradigms currently used to realize symmetric encryption schemes secure against adaptive chosen ciphertext attack (CCA) try to make it infeasible for an attacker to forge “valid ”
ciphertexts. This is achieved by either encoding the plaintext with some redundancy before encrypting or by appending a MAC to the ciphertext. We suggest schemes which are provably secure against
CCA, and yet every string is a “valid ” ciphertext. Consequently, our schemes have a smaller ciphertext expansion than any other scheme known to be secure against CCA. Our most efficient scheme is
based on a novel use of “variable-length ” pseudorandom functions and can be efficiently implemented using block ciphers. We relate the difficulty of breaking our schemes to that of breaking the
underlying primitives in a precise and quantitative way. 1
, 2004
"... We introduce a new concept of elastic block ciphers, symmetric-key encryption algorithms that for a variable size input do not expand the plaintext, (i.e., do not require plaintext padding),
while maintaining the diffusion property of traditional block ciphers and adjusting their computational loa ..."
Cited by 7 (7 self)
We introduce a new concept of elastic block ciphers, symmetric-key encryption algorithms that for a variable size input do not expand the plaintext, (i.e., do not require plaintext padding), while
maintaining the diffusion property of traditional block ciphers and adjusting their computational load proportionally to the size increase. Elastic block ciphers are ideal for applications where
length-preserving encryption is most beneficial, such as protecting variable-length database entries or network packets.
"... Abstract. This paper considers the construction and analysis of pseudo-random functions (PRFs) with specific reference to modes of operations of a block cipher. In the context of message
authentication codes (MACs), earlier independent work by Bernstein and Vaudenay show how to reduce the analysis o ..."
Cited by 6 (3 self)
Abstract. This paper considers the construction and analysis of pseudo-random functions (PRFs) with specific reference to modes of operations of a block cipher. In the context of message
authentication codes (MACs), earlier independent work by Bernstein and Vaudenay show how to reduce the analysis of relevant PRFs to some probability calculations. In the first part of the paper, we
revisit this result and use it to prove a general result on constructions which use a PRF with a “small ” domain to build a PRF with a “large ” domain. This result is used to analyse two new
parallelizable PRFs which are suitable for use as MAC schemes. The first scheme, called iPMAC, is based on a block cipher and improves upon the well-known PMAC algorithm. The improvements consist in
faster masking operations and the removal of a design stage discrete logarithm computation. The second scheme, called VPMAC, uses a keyed compression function rather than a block cipher. The only
previously known compression function based parallelizable PRF is called the protected counter sum (PCS) and is due to Bernstein. VPMAC improves upon PCS by requiring lesser number of calls to the
compression function. The second part of the paper takes a new look at the construction and analysis of modes of operations for authenticated encryption (AE) and for authenticated encryption with
associated data (AEAD). Usually, the most complicated part in the security analysis of such modes is the analysis of authentication
, 2001
"... accounting. PMAC uses djM j=ne block-cipher invocations for any nonempty message M . (The empty string takes one block-cipher invocation). We compare with the CBC MAC: The \basic" CBC MAC, which
assumes that the message is a nonzero multiple of the block length and which is only secure when all mes ..."
Cited by 3 (0 self)
accounting. PMAC uses ⌈|M|/n⌉ block-cipher invocations for any nonempty message M. (The empty string takes one block-cipher invocation.) We compare with the CBC MAC: The "basic" CBC MAC, which assumes that the message is a nonzero multiple of the block length and which is only secure when all messages to be MACed are of one fixed length, uses the same number of block-cipher calls: |M|/n.
"... . This paper presents surf k , a reasonably fast function that converts a 384-bit input into a 256-bit output, given a 1024-bit seed k. When k is secret and uniformly selected, surf k seems to
be indistinguishable from a uniformly selected 384-bit-to-256-bit function. 1. ..."
Cited by 1 (1 self)
. This paper presents surf k , a reasonably fast function that converts a 384-bit input into a 256-bit output, given a 1024-bit seed k. When k is secret and uniformly selected, surf k seems to be
indistinguishable from a uniformly selected 384-bit-to-256-bit function. 1.
, 2000
"... We describe a MAC (message authentication code) which is deterministic, parallelizable, and uses only ### #### block-cipher invocations to MAC a non-empty string # (where # is the blocksize of
the underlying block cipher). The MAC can be proven secure (work to appear) in the reduction-based approa ..."
Cited by 1 (1 self)
We describe a MAC (message authentication code) which is deterministic, parallelizable, and uses only ⌈|M|/n⌉ block-cipher invocations to MAC a non-empty string M (where n is the blocksize of the underlying block cipher). The MAC can be proven secure (work to appear) in the reduction-based approach of modern cryptography. The MAC is similar to one recently suggested by Gligor and Donescu [5]. 1 Introduction PMAC and its characteristics This note describes a new message authentication code, PMAC. Unlike customary modes for message authentication, the construction here is fully parallelizable. This will result in faster authentication in a variety of settings. The PMAC construction is stingy in its use of block-cipher calls, employing just ⌈|M|/n⌉ block-cipher invocations to MAC a nonempty string M using an n-bit block cipher. A MAC computed by PMAC can have any length up to n bits. Unlike the CBC MAC (in its basic form), PMAC can be applied to any message M; in
"... Abstract. This paper introduces the XSalsa20 stream cipher. XSalsa20 is based upon the Salsa20 stream cipher but has a much longer nonce: 192 bits instead of 64 bits. XSalsa20 has exactly the
same streaming speed as Salsa20, and its extra nonce-setup cost is slightly smaller than the cost of generat ..."
Abstract. This paper introduces the XSalsa20 stream cipher. XSalsa20 is based upon the Salsa20 stream cipher but has a much longer nonce: 192 bits instead of 64 bits. XSalsa20 has exactly the same
streaming speed as Salsa20, and its extra nonce-setup cost is slightly smaller than the cost of generating one block of Salsa20 output. This paper proves that XSalsa20 is secure if Salsa20 is secure:
any successful fast attack on XSalsa20 can be converted into a successful fast attack on Salsa20. | {"url":"http://citeseerx.ist.psu.edu/showciting?doi=10.1.1.20.9173","timestamp":"2014-04-18T19:14:47Z","content_type":null,"content_length":"35470","record_id":"<urn:uuid:81686c61-24ee-41aa-b7e6-d90b2676af2b>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00290-ip-10-147-4-33.ec2.internal.warc.gz"} |
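For orientation on the title paper of this listing: Bernstein's "protected counter sum" builds a MAC for long messages out of a short-input PRF f, roughly as tag = f(0, f(1, m₁) XOR … XOR f(ℓ, m_ℓ)). The sketch below uses keyed BLAKE2b as a stand-in PRF; the counter encoding and sizes are illustrative assumptions, not the paper's exact specification.

```python
import hashlib

TAG_BYTES = 16

def prf(key, counter, block):
    # stand-in PRF f(counter, block): keyed BLAKE2b over an 8-byte counter + block
    h = hashlib.blake2b(key=key, digest_size=TAG_BYTES)
    h.update(counter.to_bytes(8, "big") + block)
    return h.digest()

def protected_counter_sum(key, blocks):
    acc = bytes(TAG_BYTES)
    for i, block in enumerate(blocks, start=1):
        acc = bytes(a ^ b for a, b in zip(acc, prf(key, i, block)))
    return prf(key, 0, acc)  # the outer "protecting" PRF call

key = b"0" * 32
tag = protected_counter_sum(key, [b"hello", b"world"])
```

The counter fed to each PRF call binds every block to its position, and the outer call hides the XOR accumulator from the attacker.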
Closest approach of particle problem - Please help!!
1. The problem statement, all variables and given/known data
A proton (q = 1e, m = 1u) and an alpha particle (q = +2e, m = 4u) are fired directly toward each other from far away, each with a speed of 0.01c. What is their distance of closest approach, as
measured between their centers?
[tex]e = 1.6 * 10^{-19}\ \text{C}[/tex]
[tex]c = 3 * 10^8\ \text{m/s}[/tex]
[tex]u = 1.661 * 10^{-27}\ \text{kg}[/tex]
This should be a simple problem, but I wanted to know if anyone got the same answer as I did.
2. Relevant equations
Conservation of energy
[tex]K_i + U_i = K_f + U_f[/tex]
Conservation of momentum
[tex]m_1v_{1i} + m_2v_{2i} = m_1v_{1f} + m_2v_{2f}[/tex]
3. The attempt at a solution
After concluding that the proton will eventually turn around and momentarily reach zero velocity, because the more massive alpha particle makes this "collision" behave somewhat like an elastic one, I first have to find the final velocity of the alpha particle, v1.
[tex]m_1v_{1i} + m_2v_{2i} = m_1v_{1f} + m_2v_{2f}[/tex]
[tex](4u)(3 * 10^6 \frac{m}{s}) - (1u)(3 * 10^6 \frac{m}{s}) = (4u)v_{1f}[/tex]
[tex]9.0 * 10^6u\frac{m}{s} = (4u)v_{1f}[/tex]
[tex]v_{1f} = 2.25 * 10^6\frac{m}{s}[/tex]
Then I plugged that velocity into the energy-conservation equation.
After plugging in and solving for my R (which is at its minimum when the velocity of the proton is 0), I get my R to be [tex]2.24 * 10^{-14} m[/tex]
Did anyone get this same answer? Thanks! | {"url":"http://www.physicsforums.com/showthread.php?t=161767","timestamp":"2014-04-20T18:26:56Z","content_type":null,"content_length":"53911","record_id":"<urn:uuid:7f3c7a7b-bdc8-47ec-89c4-7efcb608e0d8>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00376-ip-10-147-4-33.ec2.internal.warc.gz"} |
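As a cross-check, here is the computation under the standard closest-approach condition (which differs from the zero-velocity assumption above): at minimum separation the two particles momentarily share the center-of-mass velocity, by momentum conservation, and the lost kinetic energy equals the Coulomb potential energy k q1 q2 / r.

```python
# Closest approach of a head-on proton/alpha pair, using the
# center-of-mass condition rather than "the proton stops".
k = 8.99e9        # Coulomb constant, N m^2 / C^2
e = 1.6e-19       # elementary charge, C
u = 1.661e-27     # atomic mass unit, kg
v = 0.01 * 3e8    # initial speed of each particle, m/s

m_alpha, m_p = 4 * u, 1 * u
v_cm = (m_alpha * v - m_p * v) / (m_alpha + m_p)   # common velocity at closest approach
ke_initial = 0.5 * m_alpha * v**2 + 0.5 * m_p * v**2
ke_closest = 0.5 * (m_alpha + m_p) * v_cm**2
r_min = k * (2 * e) * e / (ke_initial - ke_closest)
print(r_min)
```

This gives roughly 1.9 × 10⁻¹⁴ m, somewhat smaller than the value quoted above.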
Using factor analysis or principal components analysis or measurement-error models for biological measurements in archaeology? - Statistical Modeling, Causal Inference, and Social Science
Greg Campbell writes:
I am a Canadian archaeologist (BSc in Chemistry) researching the past human use of European Atlantic shellfish. After two decades of practice I am finally getting a MA in archaeology at Reading.
I am seeing if the habitat or size of harvested mussels (Mytilus edulis) can be reconstructed from measurements of the umbo (the pointy end, and the only bit that survives well in archaeological
deposits) using log-transformed measurements (or allometry; relationships between dimensions are more likely exponential than linear).
Of course multivariate regressions in most statistics packages (Minitab, SPSS, SAS) assume you are trying to predict one variable from all the others (a Model I regression), and use ordinary
least squares to fit the regression line. For organismal dimensions this makes little sense, since all the dimensions are (at least in theory) free to change their mutual proportions during
growth. So there is no predictor and no predicted variable; the mutual variation of all the dimensions is the response (a Model II regression), and the fitted regression line must give equal weight to all the dimensions. Common methods are major-axis (perpendicular distances between the line and all the points are minimised, in a principal-component-analysis way) and reduced major axis or standard major axis (perpendicular distances between the standardised points and the line are minimised, and the result is then unstandardised).
I see that you literally wrote the book on regression. Do you know if it is possible to carry out major-axis or reduced-major-axis fitting in multiple linear regressions in SPSS, SAS or Systat (I
know that it can’t be done in Minitab)?
Do you know if there are applications in R that carry out this type of analysis?
My reply: I’m a sucker for any email that begins, “I am a Canadian archaeologist.” I think there are various models out there that could work here, including factor analysis and measurement-error
models. I’m no expert on this particular set of models, but they get used in psychometrics when there are many variable measurements. Maybe some commenters could help?
12 Comments
1. I never used this (or tested this), but I did some exploring and found in a recent spssx discussion thread that you can use reduced major axis regression through the constrained nonlinear
regression command (CNLR). See here about halfway down: http://spssx-discussion.1045642.n5.nabble.com/Model-II-regression-analysis-td4286435.html
Another commenter brought up an interesting approach that I think (if I understand your problem correctly) translates well here–why not consider SEM? That would be my first approach.
2. And here are the commands for both major axis regression and reduced major axis regression, http://www.listserv.uga.edu/cgi-bin/wa?A2=ind9603&L=spssx-l&F=&S=&P=17532
Finally Jack Weiss mentions an algebraic alternative to OLS when you have ecological data http://www.unc.edu/courses/2007spring/biol/145/001/docs/lectures/Nov3.html#maxlikelihoodform. Not sure if
it’s useful, but the background in the lecture notes helped me understand this problem more fully anyway
3. Hi Andrew,
I’m a somewhat longtime lurker (via Deborah Mayo’s blog) but have never commented before. Anyway, it seems like a latent class model might be just the ticket, although I don’t think they are available in SAS, SPSS, or Minitab… I typically use MPlus to model these, but Stata (GLLAMM…mmmm I forget the acronym) might also work.
4. One can be Bayesian about this: instead of fitting a p(y|x), one just needs to fit a p(x,y). The main issue is that by also fitting a distribution for p(x|y), the predictive quality of p(y|x)
won’t be as good with the same number of parameters. But Lasserre’s paper “Principled Hybrids of Generative and Discriminative Models (2006)” talks about interpolating between both extremes.
All the improvements coming out of Bayesian priors and more complex models can be applied to this setting – rather than going to OLS.
5. The R package SMATR, (Standardised) Major Axis Estimation and Testing Routines.
I think you can analyze your data using HMLM (hierarchical multivariate linear models). You stack up all the data in the outcome column and include indicators on the right side of the equation
specifying the type of variable. The program calculates latent random variables for each of the variables, and the covariances of the random variables show the relationships among the variables.
7. This sounds like a problem in morphometrics. This website looks to have software and tutorials. http://life.bio.sunysb.edu/morph/index.html
8. Along the lines of Aleks’s earlier comment, factor-analysis type models have been used as a joint model to induce a reduced rank (uni- or multivariate) regression in a fully Bayesian way (see
e.g. http://ftp.isds.duke.edu/WorkingPapers/02-12.html). For R packages, MCMCpack has factor analysis functions that will give the necessary output (after some post-processing), as does the bfa
package. bfa includes shrinkage priors for factor loadings which may be of interest.
9. This feels like a community ecology problem where it’s called ‘direct gradient analysis’ and is a subset of ordination methods. (Indirect gradient analysis is when you don’t have covariates for
the measurements you’re treating as a dependent variable.) Actually, ordination is also used for dating in archaeology, where the unobserved ‘gradient’ that summarises the vector of measurements
is time, so you might have come across it there. One of the old-style, but still widely used methods for direct gradient analysis is canonical correspondence analysis. As for references, http://
ordination.okstate.edu/overview.htm is a reasonable orienting introduction, and ter Braak 1988 provides a quick review of the basic methods. Things have moved on on the estimation and modeling
front, but this should give you the basic toolset. The R package vegan implements most methods.
If he chooses to go the latent class analysis route, to me the best book remains Ronald Hayduk’s. The inner leaf of the book cover alone is worth looking at from your library, because it diagrams out all the various matrices and relationships in the Lisrel model.
11. The ‘psych’ package in R works well if you want to do exploratory factor analysis [function: fa()] or principal component analysis [function: principal()]. ‘GPArotation’ has a host of rotation
algorithms that may be of interest to improve interpretability.
12. Latent Class Analysis can be done in SAS…check the Methodology Center at Penn State. They have a great SAS add-on for LCA. | {"url":"http://andrewgelman.com/2011/12/31/using-factor-analysis-or-principal-components-analysis-or-measurement-error-models-for-biological-measurements-in-archaeology/","timestamp":"2014-04-17T04:13:00Z","content_type":null,"content_length":"38637","record_id":"<urn:uuid:6c41873d-800e-4344-bc5d-f3b357db2f3c>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00135-ip-10-147-4-33.ec2.internal.warc.gz"} |