Auction Algorithm for Bipartite Matching
July 13, 2009 by algorithmicgametheory
Undergraduate algorithms courses typically discuss the maximum matching problem in bipartite graphs and present algorithms that are based on the alternating paths (Hungarian) method. This is true in
the standard CLR book as well as in the newer KT book (and implicitly in the new DPV book that just gives the reduction to max-flow.) There is an alternative auction-like algorithm originally due to
Demange, Gale, and Sotomayor that is not well known in the CS community despite being even simpler. The algorithm naturally applies also to the weighted version, sometimes termed the assignment
problem, and this is how we will present it.
Input: A weighted bipartite graph, with non-negative integer weights. We will denote the vertices on one side of the graph by B (bidders) and on the other side by G (goods). The weight between a
bidder i and a good j is denoted by $w_{ij}$. We interpret $w_{ij}$ as quantifying the amount that bidder i values good j.
Output: A matching M with maximum total weight $\sum_{(i,j) \in M} w_{ij}$. A matching is a subset of $B \times G$ such that no bidder and no good appear more than once in it.
The special case where $w_{ij} \in \{0,1\}$ is the usual maximum matching problem.
1. For each good j, set $p_j \leftarrow 0$ and $owner_j \leftarrow null$.
2. Initialize a queue Q to contain all bidders i.
3. Fix $\delta = 1/(n_g+1)$, where $n_g$ is the number of goods.
While Q is not empty do:
1. $i \leftarrow Q.dequeue()$.
2. Find j that maximizes $w_{ij} - p_j$.
3. If $w_{ij} - p_j \ge 0$ then
1. Enqueue the current $owner_j$ (if not null) into Q.
2. $owner_j \leftarrow i$.
3. $p_j \leftarrow p_j + \delta$.
Output: the set of $(owner_j, j)$ for all j with $owner_j \ne null$.
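As a concrete illustration, here is a minimal Python sketch of the above loop; the function name, the dense weight-matrix input, and the tiny example are our own choices, not taken from the post.

```python
from collections import deque

def auction_matching(w):
    """Sketch of the auction algorithm for the assignment problem.

    w[i][j] is the non-negative integer value bidder i has for good j.
    Returns a dict mapping each sold good j to its owning bidder i.
    """
    n_bidders, n_goods = len(w), len(w[0])
    delta = 1.0 / (n_goods + 1)            # price increment (step 3)
    price = [0.0] * n_goods
    owner = [None] * n_goods
    Q = deque(range(n_bidders))            # all bidders start unmatched

    while Q:
        i = Q.popleft()
        # the good with maximum surplus w_ij - p_j for bidder i
        j = max(range(n_goods), key=lambda g: w[i][g] - price[g])
        if w[i][j] - price[j] >= 0:
            if owner[j] is not None:       # previous owner loses the good
                Q.append(owner[j])
            owner[j] = i
            price[j] += delta
    return {j: owner[j] for j in range(n_goods) if owner[j] is not None}

# tiny example: 3 bidders, 3 goods; the unique maximum-weight matching is the diagonal
w = [[4, 1, 0],
     [3, 3, 0],
     [0, 2, 2]]
print(auction_matching(w))                 # e.g. {0: 0, 1: 1, 2: 2}
```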
Correctness: The proof of correctness is based on showing that the algorithm gets into an “equilibrium”, a situation where all bidders “are happy”.
Definition: We say that bidder i is $\delta$-happy if one of the following is true:
1. For some good j, $owner_j=i$ and for all goods j’ we have that $\delta + w_{ij}-p_j \ge w_{ij'}-p_{j'}$.
2. For no good j does it hold that $owner_j=i$, and for all goods j we have that $w_{ij} \le p_{j}$.
The key loop invariant is that all bidders, except those that are in Q, are $\delta$-happy. This is true at the beginning since Q is initialized to all bidders. For the bidder i dequeued in an
iteration, the loop exactly chooses the j that makes him happy, if such j exists, and the $\delta$-error is due to the final increase in $p_j$. The main point is that this iteration cannot hurt the
invariant for any other i’: any increase in $p_j$ for j that is not owned by i’ does not hurt the inequality while an increase for the j that was owned by i’ immediately enqueues i’.
The running time analysis below implies that the algorithm terminates, at which point Q must be empty and thus all bidders must be $\delta$-happy.
Lemma: if all bidders are $\delta$-happy then for every matching M’ we have that $n\delta + \sum_{i=owner_j} w_{ij} \ge \sum_{(i,j) \in M'} w_{ij}$.
Before proving this lemma, we notice that this implies the correctness of the algorithm since by our choice of $\delta$, we have that $n\delta < 1$, and as all weights are integers, this implies that
our matching does in fact have maximum weight.
We now prove the lemma. Fix a bidder i, let j denote the good that he got from the algorithm and let j' be the good he gets in M' (possibly j=null or j'=null). Since i is happy we have that $\delta + w_{ij}-p_j \ge w_{ij'}-p_{j'}$ (with the notational convention that $w_{i,null}=0$ and $p_{null}=0$, which takes care also of case 2 in the definition of happy). Summing up over all i we get $\sum_{i=owner_j} (\delta + w_{ij}-p_j) \ge \sum_{(i,j') \in M'} (w_{ij'}-p_{j'})$. Now notice that since both the algorithm and M' give matchings, each j appears at most once on the left hand side and at most once on the right hand side. Moreover, if some j does not appear on the left hand side then it was never picked by the algorithm and thus $p_j=0$. Thus when we subtract $\sum_j p_j$ from both sides of the inequality, the LHS becomes the LHS of the inequality in the lemma and the RHS becomes at most the RHS of the inequality in the lemma. QED.
Running Time Analysis:
Each time the main loop is repeated, some $p_j$ is increased by $\delta$ or some bidder is removed from Q forever. No $p_j$ can ever increase once its value is above $C = max_{i,j} w_{ij}$. It
follows that the total number of iterations of the main loop is at most $Cn/\delta = O(Cn^2)$ where n is the total number of vertices (goods+bidders). Each loop can be trivially implemented in O(n)
time, giving total running time of $O(Cn^3)$, which for the unweighted case, C=1, matches the running time of the basic alternating paths algorithm on dense graphs.
For non-dense graphs, with only $m=o(n^2)$ edges (where an edge is a non-zero $w_{ij}$), we can improve the running time by using a better data structure. Each bidder maintains a priority queue of goods ordered according to the value of $w_{ij} - p_j$. Whenever some $p_j$ is increased, all bidders that have an edge to this j need to update the value in the priority queue. Thus an increase in $p_j$ requires $d_j$ priority queue operations, where $d_j$ is the degree of j. Since each $p_j$ is increased at most $C/\delta = O(Cn)$ times, and since $\sum_j d_j = m$, we get a total of O(Cmn) priority queue operations. Using a heap to implement the priority queue takes $O(\log n)$ per operation. However, for our usage, an implementation using an array of linked lists gives O(1) amortized time per operation: entry t of the array contains all j such that $w_{ij} - p_j = t\delta$, updating the value of j requires moving it down one place in the array, and finding the maximum $w_{ij} - p_j$ is done by marching down the array to find the next non-empty entry (this is the only amortized part). All in all, the running time for the unweighted case is O(mn).
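The array-of-buckets structure described above can be sketched as follows for a single bidder; the class and method names are ours, and surpluses are stored as integer multiples of $\delta$ (every $w_{ij}$ is such a multiple, since $\delta = 1/(n_g+1)$ and the weights are integers).

```python
class BucketQueue:
    """Array-of-buckets priority queue for one bidder i.

    bucket[t] holds the goods j whose current surplus w_ij - p_j equals
    t*delta.  A price bump moves a good down one bucket, and the maximum
    is found by marching a cursor down to the next non-empty bucket, which
    is O(1) amortized because the cursor and all surpluses only decrease.
    """

    def __init__(self, surplus_units):
        # surplus_units: dict good -> initial surplus as an integer multiple of delta
        self.top = max(surplus_units.values(), default=0)
        self.bucket = [set() for _ in range(self.top + 1)]
        self.level = {}
        for j, t in surplus_units.items():
            self.bucket[t].add(j)
            self.level[j] = t
        self.cursor = self.top

    def decrease(self, j):
        """Called when p_j rises by delta: move j down one bucket (or drop it)."""
        t = self.level.pop(j)
        self.bucket[t].discard(j)
        if t > 0:                      # surplus still non-negative
            self.bucket[t - 1].add(j)
            self.level[j] = t - 1

    def find_max(self):
        """Return a good with maximal surplus, or None if none is left desirable."""
        while self.cursor >= 0 and not self.bucket[self.cursor]:
            self.cursor -= 1
        return next(iter(self.bucket[self.cursor])) if self.cursor >= 0 else None
```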
Additional comments:
• As shown by DGS, a similar procedure terminates with close to VCG prices, which are also the point-wise minimum equilibrium prices.
• The algorithm was presented for the assignment problem where bidders never desire more than a single item. It does work more generally as long as bidders are “gross substitutes”.
• The algorithm, like many auctions, can be viewed as a primal-dual algorithm for the associated linear program.
• Choosing a small fixed value of, say, $\delta=0.01$ gives a linear time 1.01-approximation for the maximum matching.
• Choosing the value $\delta = 1/\sqrt{n}$ gives a matching that misses at most $\sqrt{n}$ edges, that can then be added using $\sqrt{n}$ alternating path computations, for a total running time of
$O(m \sqrt{n})$.
• Many algorithmic variants were studied by Dimitri Bertsekas.
• A wider economic context appears in this book.
Ali Sinop (July 13, 2009, 10:16 pm):
Isn’t this just maximum residual augmenting path algorithm for the associated flow network?
• I don’t think so… the algorithm does not find or use augmenting paths even implicitly.
Great point, I have to admit intellectual laziness in almost always reducing bipartite matchings to flows (without thinking deeply). You have probably mentioned this paper before, but a great
treatment of using online bipartite matching to design an auction strategy:
“AdWords and Generalized On-line Matching.” Aranyak Mehta, Amin Saberi, Umesh V Vazirani, Vijay V Vazirani. Journal of the ACM (2007) vol. 54 (5).
I do not know what you mean by “bidders are gross substitutes”. But the algorithm does NOT work when items are gross substitutes. You need more sophisticated algorithms when items are gross
substitutes – see Gul and Stacchetti (2000), who generalize the “exact auction” (based on the idea of overdemanded items) in DGS.
• Yes, some modification is certainly needed, as also can be seen in the reference given. My survey with Liad (http://www.cs.huji.ac.il/~noam/bn-ca.pdf) contains a description and analysis for the
GS case.
• BY “bidders are gross substitutes” I meant “bidders’ valuations satisfy the “(gross) substitutes” condition”.
sifta (February 23, 2010, 11:08 pm):
Actually, Bertsekas has a 1979 reference for the auction algorithm (LIDS Working Paper).
[Ber79] Bertsekas, D. P., 1979. “A Distributed Algorithm for the Assignment Problem,” Lab. for Information and Decision Systems Working Paper, M.I.T., Cambridge, MA.
The fundamental group
The basic idea of the fundamental group
An easy way to approach the concept of fundamental group is to start with a concrete example. Let us consider the 2-sphere $S^2$ and the surface of the torus.
Let us start thinking about two types of loops of the torus (paths starting and ending at the same point). It seems that a path around the "arm" of the torus is substantially different from a “local”
simple loop: one cannot be deformed into the other. On the other hand, in the sphere it seems that all the loops can be deformed into any other loop. The set of "types of loops" in the two spaces is
different: the torus seems to have a richer set of "types of loops" than the spherical surface. This type of approach constitutes the basis of the definition of the fundamental group and explains essential differences between different kinds of topological spaces. The fundamental group makes this idea mathematically rigorous.
Definition of fundamental groupEdit
Definition: Let $X$ be a topological space and let p and q be points in X. Two paths $f: [0,1] \rightarrow X$ and $g: [0,1] \rightarrow X$ from p to q are considered equivalent (homotopic) if there is a homotopy $H: [0,1]^2 \rightarrow X$ such that $H(x,0)=f(x)$, $H(x,1)=g(x)$, and $H(\cdot,a)$ is a path from p to q for every $a \in [0,1]$. It is easily verified that this is an equivalence relation.
Definition: Define the composition of paths $f_1$ from x to y and then $f_2$ from y to z to be simply the same adjunction of paths as we had in the section on path connectedness:
$f(x) = \left\{ \begin{array}{ll} f_1(2x) & \text{if } x \in [0,\frac{1}{2}]\\ f_2(2x-1) & \text{if } x \in [\frac{1}{2},1]\\ \end{array} \right.$
We shall denote the composition of two paths f(x) and g(x) as f(x)*g(x).
Definition: Let $f: [0,1] \rightarrow X$ to be a path. Define the inverse path (not to be confused with the inverse function) as $f^{-1}(x)=f(1-x)$, the path in the opposite direction.
Definition: Let X be a topological space, and let p be a point in X. Then define $C_p$ to be the constant path $f: [0,1] \rightarrow X$ where f(x)=p.
Now consider the set of equivalence classes of paths. Define the composition of two equivalence classes to be the equivalence class of the composition of any two representative paths. Define the inverse of an equivalence class to be the equivalence class of the inverse of any path within the class. Define $[C_p]$ to be the equivalence class containing $C_p$.
We can easily check that these operations are well-defined.
Now, in a fundamental group, we will work with loops. Therefore, we define the equivalence, composition and inverse of loops to be the same as the definition as that of paths, and the composition and
inverse of the equivalence classes also to be the same.
Definition: The set of equivalence classes of loops at the base point $x_0$, is a group under the operation of adjoining paths. This group is called fundamental group of $X$ at the base point $x_0$.
In order to demonstrate that this is a group, we need to prove:
1) associativity: $[\alpha]*([\beta]*[\gamma])=([\alpha]*[\beta])*[\gamma]$;
2) identity: $[\alpha]*[1]=[1]*[\alpha]=[\alpha]$;
3) inverse: $[\alpha]*[\overline{\alpha}]=[\overline{\alpha}]*[\alpha]=[1]$.
1) It is intuitively clear that if you have a path from a to b, then one from b to c, and finally one from c to d, then adjoining the path from a to b with the composite path from b to d gives, up to homotopy, the same result as adjoining the composite path from a to c with the path from c to d.
In fact, an explicit homotopy between $[\alpha]*([\beta]*[\gamma])$ and $([\alpha]*[\beta])*[\gamma]$ is given by the following formula:
$F(t,s) = \begin{cases} \alpha(\frac{4t}{s+1}), & \mbox{if } 0\le t\le\frac{s+1}{4}\\ \beta(4t-s-1), & \mbox{if } \frac{s+1}{4}\le t\le\frac{s+2}{4}\\ \gamma(\frac{4t-s-2}{2-s}), & \mbox{if } \frac{s+2}{4}\le t\le 1\end{cases}$
2) $[C_{x_0}]$ is the identity. One can easily verify that the product of this constant loop with another loop is homotopic to the original loop.
3) The inverse of the equivalence class as defined before serves as the inverse within the group. The fact that the composition of the two paths f(x) and $f^{-1}(x)$ is homotopic to the constant path can be verified, for instance, with the homotopy
$F(t,s) = \begin{cases} f(2t), & \mbox{if } 0\le t\le\frac{1-s}{2}\\ f(1-s), & \mbox{if } \frac{1-s}{2}\le t\le\frac{1+s}{2}\\ f(2-2t), & \mbox{if } \frac{1+s}{2}\le t\le 1\end{cases}$
Dependence on the Base Point
We now have our fundamental group, but it would be of interest to see how the fundamental group depends on the base point, since, as we have defined it, the fundamental group depends on the base
point. However, due to the very important theorem that in any path-connected topological space, all of its fundamental groups are isomorphic, we are able to speak of the fundamental group of the
topological space for any path-connected topological space.
Let's take $x_0,x_1\in X$ in the same path-component of $X$. In this case, it's possible to find a relation between $\pi_1(X,x_0)$ and $\pi_1(X,x_1)$. Let $h:\left [ 0,1 \right ]\to X$ be a path from $x_0$ to $x_1$ and let $\overline{h}(s)=h(1-s)$ be the reverse path from $x_1$ back to $x_0$. The map $\beta_h:\pi_1(X,x_1)\to \pi_1(X,x_0)$ defined by $\beta_h[f]=[h*f*\overline{h}]$ is an isomorphism. Thus if $X$ is path-connected, the group $\pi_1(X,x_0)$ is, up to isomorphism, independent of the choice of basepoint $x_0$.
When all fundamental groups of a topological space are isomorphic, the notation $\pi_1(X,x_0)$ is abbreviated to $\pi_1(X)$.
Definition: A topological space $X$ is called simply-connected if it is path-connected and has trivial fundamental group.
The fundamental group of $S^1$
This section is dedicated to the calculation of the fundamental group of $S^1$, which we regard as the unit circle in the complex plane. Once more we can start with a visual approach. It's easy to imagine that a loop around the circle is not homotopic to the trivial loop. It's also easy to imagine that a loop that makes two complete turns is not homotopic to one that makes only one. The simple intuition is that the fundamental group of $S^1$ is related to the number of turns. However, the rigorous calculation of $\pi_1(S^1)$ involves some difficulties.
We define $p:\mathbb{R}\to S^1\subset\mathbb{C}$, $p(t)=e^{2i\pi t}$. It's possible to demonstrate the following results:
Lemma 1: Let $f:\left [ 0,1 \right ]\to S^1\subset\mathbb{C}$ be a path with $f(0)=x_0$, and let $z_0\in p^{-1}(x_0)$. Then there exists a unique $\overline{f}:\left [ 0,1 \right ]\to \mathbb{R}$ with $\overline{f}(0)=z_0$ such that $p\circ\overline{f}=f$; it is called the lift of $f$ starting at $z_0$.
Lemma 2: Let $F:\left [ 0,1 \right ]\times\left [ 0,1 \right ]\to S^1\subset\mathbb{C}$ be a homotopy of paths with start point $x_0$, and let $z_0\in p^{-1}(x_0)$. Then there exists exactly one homotopy $\overline{F}:\left [ 0,1 \right ]\times\left [ 0,1 \right ]\to\mathbb{R}$ of paths with start point $z_0$ such that $p\circ\overline{F}=F$.
Note: These lemmas guarantee that homotopic loops have homotopic lifts.
For more information see Wikipedia.
Theorem: $\pi_1(S^1,(1,0))\cong\mathbb{Z}$.
Proof: Let $\alpha$ be a loop with base point $(1,0)$ and $[\alpha]\in\pi_1(S^1,(1,0))$. Let $z_0=0$ and define $v([\alpha])=\overline{\alpha}(1)$, where $\overline{\alpha}$ is the lift of $\alpha$ with $\overline{\alpha}(0)=0$.
The map $v$ is well defined because homotopic loops from $\left [ 0,1 \right ]$ to $S^1$ have homotopic lifts. We have $p(\overline{\alpha}(1))=\alpha(1)=x_0$, so $\overline{\alpha}(1)=k$ for some $k\in\mathbb{Z}$. Therefore $v([\alpha])\in\mathbb{Z}$.
1) $v$ is surjective. For $N\in\mathbb{Z}$ we define the loop $\alpha_N(t)=e^{2i\pi Nt}$. We then have $\overline\alpha_N(t)=Nt$ and $v([\alpha_N])=N$;
2) $v$ is injective. Let $v([\alpha])=v([\beta])$. Then $\overline{\alpha}(1)=\overline{\beta}(1)$ and $\overline{\alpha}(0)=\overline{\beta}(0)=0$. We then have that $F(t,s)=p((1-s)\overline{\alpha}(t)+s\overline{\beta}(t))$ is a homotopy between $\alpha$ and $\beta$, that is, $[\alpha]=[\beta]$;
3) $v$ is a homomorphism. We want to demonstrate that $v([\alpha]*[\beta])=v([\alpha])+v([\beta])$. Let's consider
$\gamma(t) = \begin{cases} \overline{\alpha}(2t), & \mbox{if }0\le t\le \frac{1}{2} \\ \overline{\beta}(2t-1)+\overline{\alpha}(1), & \mbox{if }\frac{1}{2}\le t\le 1 \end{cases}$.
Since $\overline{\alpha}(1)$ is an integer, $p\circ\gamma=\alpha*\beta$. We then have $\gamma=\overline{\alpha*\beta}$, and so $v([\alpha]*[\beta])=\gamma(1)=\overline{\beta}(1)+\overline{\alpha}(1)=v([\alpha])+v([\beta])$.
We can note that every loop is homotopic to $\alpha_N(t)=e^{2i\pi Nt}$ for some $N\in\mathbb{Z}$; that is, up to homotopy, every loop consists of going around the circle a certain number of times.
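To make the lifting idea concrete, here is a small numerical illustration of our own (not part of the proof): a loop in $S^1$ is sampled at many points, its lift to $\mathbb{R}$ is accumulated step by step, and the end point of the lift recovers the number of turns $N$.

```python
import cmath

def winding_number(loop, samples=2000):
    """Approximate the class of a loop [0,1] -> S^1 in pi_1(S^1) = Z.

    The lift is built step by step: each small step of the loop contributes
    the argument of the ratio of consecutive points, mimicking the
    path-lifting lemma for p(t) = exp(2*pi*i*t).
    """
    lift = 0.0
    prev = loop(0.0)
    for k in range(1, samples + 1):
        cur = loop(k / samples)
        lift += cmath.phase(cur / prev) / (2 * cmath.pi)   # small, unambiguous step
        prev = cur
    return round(lift)                                     # end point of the lift

alpha_3 = lambda t: cmath.exp(2j * cmath.pi * 3 * t)         # three turns
alpha_minus_2 = lambda t: cmath.exp(-2j * cmath.pi * 2 * t)  # two turns backwards
print(winding_number(alpha_3), winding_number(alpha_minus_2))  # 3 -2
```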
Covering spaces and the fundamental group
One of the most useful tools in studying fundamental groups is that of a covering space. Intuitively speaking, a covering space of a given space $X$ is one which 'looks like' a disjoint union of
copies of $X$ in a small enough neighborhood of any point of $X$, but not necessarily globally.
This section will define covering spaces formally, state important lifting theorems for covering spaces, and then show what the consequences are for fundamental groups.
Definition: Suppose $X$ is a topological space. A space $Z$ is called a covering space for $X$ if we are given a continuous map $p: Z \rightarrow X$ with the following property: for any $x \in X$, there exists an open neighborhood $U \ni x$ such that
(i) $p^{-1}(U)$ is a disjoint union $\coprod_{\alpha \in A}V_\alpha$ of open subsets of $Z$;
(ii) the restriction of $p$ to any of these open subsets $V_{\alpha}$ is a homeomorphism from $V_{\alpha}$ to $U$.
Unsurprisingly, we call $p$ a covering map.
Example: In fact, we've already seen an example of a covering space. In the calculation of $\pi_1(S^1)$ above, we implicitly made use of the fact that the real line $\mathbb{R}$ is a covering space for $S^1$. The map $p: \mathbb{R} \rightarrow S^1, t \mapsto e^{2\pi i t}$ is the covering map. How can we check this? Well, recall that $e^{2 \pi i t_1} = e^{2 \pi i t_2}$ iff the difference $t_1 - t_2$ is an integer. So, suppose we're given a point $x \in S^1$. Let $U_x = S^1 - \{-x\}$ --- that is, the set consisting of the whole circle except for the point antipodal to $x$. Then a little thought shows that if $-x = e^{2\pi i t_x}$, we have $p^{-1}(U_x) = \mathbb{R} - \{ t_x + n : n \in \mathbb{Z}\}$. In other words, the preimage of $U_x$ consists of the whole real line except for a 'hole' at each point $t_x + n, n \in \mathbb{Z}$.
It's clear (draw a picture!) that this set is a disjoint union of subintervals, and one can check that the exponential function maps each subinterval homeomorphically onto $U_x$. So we do have a
covering map. Neat! $\square$
Homotopy Lifting
Now we come to a theorem which looks a bit esoteric at first, but in fact allows us to do much with covering spaces.
Theorem (Homotopy Lifting): Suppose $p : Z \rightarrow X$ is a covering map for a space $X$. Let $f: I^n \rightarrow X$ be a map from the unit $n$-cube to $X$, and $F: I^{n+1} \rightarrow X$ be a homotopy of $f$ to another map $f': I^n \rightarrow X$. Suppose (for the last time!) that $\phi: I^n \rightarrow Z$ is a map satisfying $p \cdot \phi = f$. Then there exists a unique map $\Phi: I^{n+1} \rightarrow Z$ satisfying the following:
(i) $\Phi |_{I^n} = \phi$;
(ii) $p \cdot \Phi = F$. $\square$
The proof is quite technical, but straightforward, and so is omitted. Any introductory book on algebraic topology should give it --- see, for example, Armstrong, "Basic Topology" (Springer). At first
sight this is pretty daunting, so let's take a concrete case to make it easier to digest. Suppose $n=0$ --- then $I^n$ is just a point, and hence $f: I^n \rightarrow X$ is just a function selecting a
particular point of $X$. Hence $f$ can be identified with its image, a point $x_0 \in X$. Now a homotopy from $f$ to another map is (recalling the definition of homotopy) just a map $F: I \rightarrow
X$ such that $F(0) = x_0$; hence, nothing more than a path in $X$ starting at $x_0$. What does the theorem tell us about covering maps $p: Z \rightarrow X$?
It says (check it!) that if $z_0 \in Z$ is a point such that $p(z_0) = x_0$, and $\gamma$ is a path in $X$ starting at $x_0$, then there is a unique path $\gamma'$ in $Z$ starting at $z_0$ such that $p \cdot \gamma' = \gamma$. In fancier (and looser) terminology, we say that a path in $X$ has a unique lift to $Z$, once the starting point of the lift has been chosen.
On reflection, this result --- sometimes known as the path lifting theorem --- is not so surprising. Think about a covering space as a 'folded-over' version of the base space $X$, as in Fig XXXX. If
we look at a small open set $U \subset X$, its preimage in $Z$ is a disjoint union of open sets each homeomorphic to it. If we just concentrate on the portion of $\gamma$ lying inside $U$ for now,
it's clear that for each of the disjoint sets $V_\alpha$, there is a unique path in $V_\alpha$ which maps onto $\gamma$ via the covering map $p$. So to specify a lift, we simply need to choose which
of the sets $V_\alpha$ it lives in (and this is equivalent to choosing a point in the preimage of $x_0$ as above). Now the whole path $\gamma$ can be split up as a finite 'chain' of short paths
living inside 'small' open sets like $U$ (check this!), so finite induction shows that the whole lift is uniquely determined in this way.
Covering Spaces and $\pi_1$
Now we come to the connection between covering spaces and the fundamental group, which is of major significance.
Theorem: Given a covering space $p: Z \rightarrow X$, the map $p$ induces a map $p_* : \pi_1(Z) \rightarrow \pi_1(X)$ which is an injective (i.e. 1-1) group homomorphism.
Proof (Sketch): First, consider a path $\gamma$ in $Z$: it's a continuous map $\gamma : I \rightarrow Z$, and so we can compose it with the covering map $p$ to get a path $p \cdot \gamma$ in $X$. So
we have a map
$p':$ paths in $Z \rightarrow$ paths in $X$.
We want to show this can be used to define a map
$p*:$ homotopy classes of loops in $Z$ based at $z_0 \rightarrow$ homotopy classes of loops in $X$ based at $x_0$.
This sounds complicated, but in fact isn't at all: the idea is, given a homotopy $H: I \times I \rightarrow Z$ between two paths $\gamma_1$ and $\gamma_2$ in $Z$, the composition $p \cdot H$ is a
homotopy in $X$ between their images $p'(\gamma_1)$ and $p'(\gamma_2)$. (If this still seems opaque, be sure to check the details.) Also, loops based at $z_0$ clearly map to loops based at $p(z_0)$.
So, we have our map $p*$ as desired, mapping $\pi_1(Z,z_0)$ to $\pi_1(X,p(z_0)).$ We still need to show (a) that it's a group homomorphism, and (b) that it's injective.
(a) is pretty easy. To prove it, choose two elements of $\pi_1(Z,z_0)$. These are homotopy classes of based loops, so we can choose loops to represent them. What we need to see is that if we
concatenate these loops, and then look at the image of this concatenation in $X$, the result is homotopic to the loop we get if we first map each of the loops via $p'$ and then concatenate them.
Convince yourself that this is so.
(b) is more tricky. To prove it, we must show that the kernel of the homomorphism $p*$ described above consists just of the identity element of $\pi_1(Z,z_0)$. So, suppose we have a path $\gamma$ representing an element $[\gamma]$ in the kernel: so $p*([\gamma])$ is the identity of $\pi_1(X,p(z_0))$. By definition of $p*$, this means that $p'(\gamma)$ is homotopic in $X$ to the constant path
at $p(z_0)$. So suppose $F: I \times I \rightarrow X$ is such a homotopy: the trick is to use the homotopy lifting theorem (above) to 'lift' $F$ to $F'$, a homotopy in $Z$ from $\gamma$ to the
constant path at $z_0$. (Again, one should check the details of this!) Since such a homotopy $F'$ exists, this shows that the homotopy class $[\gamma]$ is the identity element of $\pi_1(Z,z_0)$. So
the only element in the kernel of $p*$ is the identity element of $\pi_1(Z,z_0)$, so $p*$ is injective, as required. $\square$
Let's think about the significance of this result for a moment. An injective homomorphism of groups $G \rightarrow H$ is essentially the same as a subgroup $G_H \subset H$, so the first thing the
theorem tells us is that there's a significant restriction on possible covering spaces for a given space $X$: $Z$ can't be a covering space for $X$ unless $\pi_1(Z)$ is a subgroup of $\pi_1(X)$.
Right off the bat, this rules out whole classes of maps from being covering maps. There's no covering map from the circle $S^1$ to the real line $\mathbb{R}$, for example, since the fundamental group
of the circle is isomorphic to the integers, as we saw above, while that of the real line is trivial (why?). Similarly, there's no covering map from the torus $S^1 \times S^1$ to the two-dimensional
sphere $S^2$ --- the latter has trivial fundamental group, while that of the former is $\mathbb{Z}^2$ (the direct sum of two copies of the integers). And so on, ad infinitum.
This latter example is particularly nice, I think, in that it shows how looking at algebraic invariants of spaces (in this case, fundamental groups), rather than our geometric 'mental picture' of the
spaces themselves, vastly simplifies arguments about the existence or form of particular kinds of maps between spaces. Can you give me a simple geometric argument to show that there's no possible way
to 'wrap' $S^2$ around the torus so that every point is covered an equal number of times? Can you do the same for the three-dimensional sphere $S^3$? For $S^n$? (If so, I humbly salute you.)
Example: Now let's look at a specific covering space, and see what the homomorphism $p*$ we talked about above actually is in a concrete case. Think about the circle $S^1$ as the unit circle in the complex plane: $S^1 = \{z \in \mathbb{C}: |z| = 1\}$. Then we can define a continuous map $p:S^1 \rightarrow S^1$ by $p(z) = z^2$.
I claim that $p$ is a covering map. To see this, imagine a point $z = e^{i\theta}$ of $S^1$ (with $0 \leq \theta < 2\pi$). It isn't hard to see that there are exactly two points $z' \in S^1$ such that $p(z') = z$; moreover, if we look at a 'small enough' circular arc around $z$, its preimage under $p$ will consist of two disjoint circular arcs, each containing one of the two preimages of $z$, and each mapped homeomorphically onto our original arc by $p$. (Check these details!)
So, $p$ is a covering map, and so the above theorem tells us that $p*$ is a group homomorphism from the fundamental group of the circle to itself. In symbols, we have $p*: \mathbb{Z} \rightarrow \mathbb{Z}$. But what is it? To answer this, consider a path $\gamma$ in $S^1$ that winds once around the origin. As we saw in the previous section, the equivalence class of such a path is mapped to the element $1 \in \mathbb{Z}$ under the isomorphism $\pi_1(S^1) \cong \mathbb{Z}$. Now, to work out $p*[\gamma]$, we look at the equivalence class of the path $p \cdot \gamma$ in $S^1$ (this is just the definition of $p*$). It's easy to check (do it!) that $p \cdot \gamma$ is a path in $S^1$ winding twice around the origin, and so its equivalence class is $[\gamma] * [\gamma]$. So we have $p*([\gamma]) = [\gamma] * [\gamma]$. Looking at $\pi_1(S^1)$ as the group of integers, we have $p*(1) = 2$. So $p*: \mathbb{Z} \rightarrow \mathbb{Z}$ is just the doubling map!
Of course, it's all well and good introducing new concepts like covering spaces, but this doesn't achieve a lot unless our new concepts prove useful in some way. Covering spaces have indeed proved
useful in many ways, but hopefully the following example will suffice to illustrate this point:
Theorem (Nielsen-Schreier): Any subgroup of a free group is free.
Proof (Sketch): Consult Wikipedia for a rigorous definition of 'free group': roughly speaking, it is a group in which no non-trivial combination of elements equals the identity. Now, the strategy of
proof is along the following lines:
1) Given a free group $F$, find a graph $X$ with $\pi_1(X) = F$. (Note that a graph is a topological space consisting of a discrete set of points to which are attached a family of line segments.
Again see Wikipedia for a rigorous definition.)
2) Show that for a space $X$ and a subgroup $H < \pi_1(X)$, there exists a covering space $Z$ for $X$ with $\pi_1(Z) = H$.
3) Show that any covering space for a graph is itself a graph.
4) Show that the fundamental group of a graph is a free group.
The Fundamental Theorem of Algebra
Theorem: Let f be a non-constant polynomial with coefficients in the complex numbers $\mathbb{C}$. Then f has a root in $\mathbb{C}$. Phrased in the language of algebra, the field of complex numbers $\mathbb{C}$ is algebraically closed.
(Under Construction)
Chapter 1: Philosophy, Numbers and Functions
We consider the basic context in which our efforts will be concentrated: the realms of numbers and functions. We describe "standard functions", which are those that will appear most often in your world, and inverse functions.
1.1 Philosophy
1.2 Numbers
1.3 Functions
1.5 Other Functions
Jason and Kyle both choose a number from 1 to 10 at random. What is the probability that both numbers are odd? A.1/3 B.1/2 C.1/4 D.1/8
How many odd numbers are between 1 and 10?
answer is B. using what romero stated
Both numbers need to be odd, not just one. The probability that Kyle picks an odd number is \(\frac{1}{2}\), and the probability that Jason picks an odd number is also \(\frac{1}{2}\), so the probability that they both pick odd numbers is \(\frac{1}{2}\times \frac{1}{2}=\frac{1}{4}\).
Towards a theory of types in Prolog
Results 1 - 10 of 63
- JOURNAL OF THE ACM , 1995
"... We propose a novel formalism, called Frame Logic (abbr., F-logic), that accounts in a clean and declarative fashion for most of the structural aspects of object-oriented and frame-based
languages. These features include object identity, complex objects, inheritance, polymorphic types, query methods, ..."
Cited by 763 (59 self)
Add to MetaCart
We propose a novel formalism, called Frame Logic (abbr., F-logic), that accounts in a clean and declarative fashion for most of the structural aspects of object-oriented and frame-based languages.
These features include object identity, complex objects, inheritance, polymorphic types, query methods, encapsulation, and others. In a sense, F-logic stands in the same relationship to the
objectoriented paradigm as classical predicate calculus stands to relational programming. F-logic has a model-theoretic semantics and a sound and complete resolution-based proof theory. A small
number of fundamental concepts that come from object-oriented programming have direct representation in F-logic; other, secondary aspects of this paradigm are easily modeled as well. The paper also
discusses semantic issues pertaining to programming with a deductive object-oriented language based on a subset of F-logic.
, 1993
"... A practical procedure for computing a regular approximation of a logic program is given. Regular approximations are useful in a variety of tasks in debugging, program specialisation and
compile-time optimisation. The algorithm shown here incorporates optimisations taken from deductive database fixpo ..."
Cited by 99 (19 self)
Add to MetaCart
A practical procedure for computing a regular approximation of a logic program is given. Regular approximations are useful in a variety of tasks in debugging, program specialisation and compile-time
optimisation. The algorithm shown here incorporates optimisations taken from deductive database fixpoint algorithms and efficient bottom-up abstract interpretation techniques. Frameworks for defining
regular approximations have been put forward in the past, but the emphasis has usually been on theoretical aspects. Our results contribute mainly to the development of effective analysis tools that
can be applied to large programs. Precision of the approximation can be greatly improved by applying query-answer transformations to a program and a goal, thus capturing some argument dependency
information. A novel technique is to use transformations based on computation rules other than left-to-right to improve precision further. We give performance results for our procedure on a range of
programs. 1
- In Seventeenth Annual ACM Symposium on Principles of Programming Languages , 1990
"... In program analysis, a key notion used to approximate the meaning of a program is that of ignoring inter-variable dependencies. We formalize this notion in logic programming in order to define
an approximation to the meaning of a program. The main result proves that this approximation is not only re ..."
Cited by 94 (15 self)
Add to MetaCart
In program analysis, a key notion used to approximate the meaning of a program is that of ignoring inter-variable dependencies. We formalize this notion in logic programming in order to define an
approximation to the meaning of a program. The main result proves that this approximation is not only recursive, but that it can be finitely represented in the form of a cyclic term graph. This
explicit representation can be used as a starting point for logic program analyzers. A preliminary version appears in the Proceedings, 17 th ACM Symposium on POPL. y School of Computer Science,
Carnegie Mellon University, Pittsburgh, PA 15213-3890 z IBM Thomas J. Watson Research Center, PO Box 218, Yorktown Heights, NY 10598 Section 1: Introduction 1 1 Introduction The problem at hand is:
given a logic program, obtain an approximation of its meaning, that is, obtain an approximation of its least model. The definition of the approximation should be declarative (so that results can be
proved ab...
, 1992
"... We investigate the relationship between set constraints and the monadic class of first-order formulas and show that set constraints are essentially equivalent to the monadic class. From this
equivalence we can infer that the satisfiability problem for set constraints is complete for NEXPTIME. Mor ..."
Cited by 71 (0 self)
Add to MetaCart
We investigate the relationship between set constraints and the monadic class of first-order formulas and show that set constraints are essentially equivalent to the monadic class. From this
equivalence we can infer that the satisfiability problem for set constraints is complete for NEXPTIME. More precisely, we prove that this problem has a lower bound of NTIME(c n= log n ). The
relationship between set constraints and the monadic class also gives us decidability and complexity results for certain practically useful extensions of set constraints, in particular "negative
projections" and subterm equality tests.
- In Second Workshop on the Principles and Practice of Constraint Programming
"... . Set constraints are a natural formalism for many problems that arise in program analysis. This paper provides a brief introduction to set constraints: what set constraints are, why they are
interesting, the current state of the art, open problems, applications and implementations. 1 Introduction ..."
Cited by 69 (3 self)
Add to MetaCart
. Set constraints are a natural formalism for many problems that arise in program analysis. This paper provides a brief introduction to set constraints: what set constraints are, why they are
interesting, the current state of the art, open problems, applications and implementations. 1 Introduction Set constraints are a natural formalism for describing relationships between sets of terms
of a free algebra. A set constraint has the form X ` Y , where X and Y are set expressions. Examples of set expressions are 0 (the empty set), ff (a set-valued variable), c(X; Y ) (a constructor
application), and the union, intersection, or complement of set expressions. Recently, there has been a great deal of interest in program analysis algorithms based on solving systems of set
constraints, including analyses for functional languages [AWL94, Hei94, AW93, AM91, JM79, MR85, Rey69], logic programming languages [AL94, HJ92, HJ90b, Mis84], and imperative languages [HJ91]. In
these algorithms, sets of...
, 1993
"... Abstract. Set constraints are relations between sets of terms. They have been used extensively in various applications in program analysis and type inference. We present several results on the
computational complexity of solving systems of set constraints. The systems we study form a natural complex ..."
Cited by 67 (11 self)
Add to MetaCart
Abstract. Set constraints are relations between sets of terms. They have been used extensively in various applications in program analysis and type inference. We present several results on the
computational complexity of solving systems of set constraints. The systems we study form a natural complexity hierarchy depending on the form of the constraint language. 1
- In Fifth Annual IEEE Symposium on Logic in Computer Science , 1991
"... A set constraint is of the form exp 1 ' exp 2 where exp 1 and exp 2 are set expressions constructed using variables, function symbols, projection symbols, and the set union, intersection and
complement symbols. While the satisfiability problem for such constraints is open, restricted classes have be ..."
Cited by 53 (0 self)
Add to MetaCart
A set constraint is of the form exp 1 ' exp 2 where exp 1 and exp 2 are set expressions constructed using variables, function symbols, projection symbols, and the set union, intersection and
complement symbols. While the satisfiability problem for such constraints is open, restricted classes have been useful in program analysis. The main result herein is a decision procedure for definite
set constraints which are of the restricted form a ' exp where a contains only constants, variables and function symbols, and exp is a positive set expression (that is, it does not contain the
complement symbol). A conjunction of such constraints, whenever satisfiable, has a least model and the algorithm will output an explicit representation of this model. 1 1 Introduction We consider a
formalism for elementary set algebra which is useful for describing properties of programs whose underlying domain of computation is a Herbrand universe. The domain of discourse for this formalism is
the powerset of...
- In Proceedings of the 1991 Conference on Functional Programming Languages and Computer Architecture , 1991
"... Regular tree expressions are a natural formalism for describing the sets of tree-structured values that commonly arise in programs; thus, they are well-suited to applications in program
analysis. We describe an implementation of regular tree expressions and our experience with that implementation in ..."
Cited by 52 (6 self)
Add to MetaCart
Regular tree expressions are a natural formalism for describing the sets of tree-structured values that commonly arise in programs; thus, they are well-suited to applications in program analysis. We
describe an implementation of regular tree expressions and our experience with that implementation in the context of the FL type system. A combination of algorithms, optimizations, and fast
heuristics for computationally difficult problems yields an implementation efficient enough for practical use. 1 Introduction Regular tree expressions are a natural formalism for describing the sets
of tree-structured values that commonly arise in programs. As such, several researchers have proposed using (variations on) regular tree expressions in type inference and program analysis algorithms
[JM79, Mis84, MR85, HJ90, HJ91, AM91]. We are not aware of any implementations based on regular tree expressions, however, except for our own work on type analysis for the functional language FL [B +
89]. A p...
- In Proceedings of the 1st International Static Analysis Symposium , 1994
"... We present an algorithm for automatic type checking of logic programs with respect to directional types that describe both the structure of terms and the directionality of predicates. The type
checking problem is reduced to a decidable problem on systems of inclusion constraints over set expressio ..."
Cited by 42 (1 self)
Add to MetaCart
We present an algorithm for automatic type checking of logic programs with respect to directional types that describe both the structure of terms and the directionality of predicates. The type
checking problem is reduced to a decidable problem on systems of inclusion constraints over set expressions. We discuss some properties of the reduction algorithm, complexity, and present a proof of
correctness. 1 1 Introduction Most logic programming languages are untyped. In Prolog, for example, it is considered meaningful to apply any n-ary predicate to any n-tuple of terms. However, it is
generally accepted that static type checking has great advantages in detecting programming errors early and for generating efficient executable code. Motivated at least in part by the success of type
systems for procedural and functional languages, there is currently considerable interest in finding appropriate definitions of type and welltyping for logic languages. This paper explores the type
The Birthday Problem
How many people do you need in a room before it is more than likely that at least two of them have the same birthday? This question is the original Birthday Problem, a common probability problem
that stumps many people. In 1970, Johnny Carson tried, and failed, to solve the birthday problem on The Tonight Show.
This page investigates (and solves!) the birthday problem, in addition to some similar puzzles.
Basic Description
When asked to solve the birthday problem, many people make initial guesses that are much higher than the actual solution. A common answer to the birthday problem is 183 people, which is simply 365
(the number of days in a year), divided by 2, rounded up to the nearest whole number. However, we will later show that the actual solution is a much smaller number.
When solving this problem we must make two assumptions:
1. There are always 365 days in a year (disregard leap years)
2. All birthdays are equally likely to occur
Wait! What if all birthdays are not equally likely to occur? Surely some birthday months must be more common than others.
Roy Murphy ran an analysis of the distribution of birthdays by collecting data from 480,040 insurance policy applications made between 1981 and 1994; his results show that real birthdays are not perfectly uniformly distributed over the year. ^[1]
Is this problem still solvable even though birthdays are not uniformly distributed?
In their study Majorization and the Birthday Inequality, M. Lawrence Clevenson and William Watkins state, "If the distribution of birthday is not uniform, then the probability of a match is at least as large as it is when the birthdays are uniformly distributed throughout the year. We call this fact the 'birthday inequality." Following Clevenson and Watkins, assuming that all birthdays are equally likely gives us, if anything, a slightly conservative estimate of the probability of a match. For more on the birthday inequality and to read the full article, click here!
Combinations v. Permutations
It is also important to understand the difference between combinations and permutations.
How many ways can you select $k$ items from a group of $n$ items? When selecting a combination of items, order does not matter. For example, if you pull three marbles out of a bag that contains 10 marbles, 3 blue, 3 red, and 4 green, the order in which you choose the marbles is not important. Pulling a red, red, green is the same as pulling a red, green, red.
The formula for a combination is as follows
$\binom nk = \frac{n!}{k!(n-k)!}$ whenever $k\leq n$, and which is zero when $k>n$.^[2]
When order matters, the selection becomes a permutation. For example, suppose 5 students run a race. How many ways can the 5 students finish 1st, 2nd and 3rd? In this case, the result Jenna, Mike, Emily is different from the result Mike, Emily, Jenna.
Because we no longer have to collapse identical results (such as red, blue, green versus red, green, blue in the marble example), the formula for a permutation is simpler:
$P(n,k) =\frac{n!}{(n-k)!}$
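As a quick check of the two formulas, using Python's standard library (the marble and race numbers are the ones from the examples above):

```python
from math import comb, perm

# combinations: choose 3 marbles out of 10, order irrelevant
print(comb(10, 3))   # 120 = 10!/(3! * 7!)

# permutations: 1st/2nd/3rd place among 5 runners, order matters
print(perm(5, 3))    # 60 = 5!/2!
```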
A More Mathematical Explanation
Note: understanding of this explanation requires: *Algebra, Probability
In probability, the term more than likely means that the probability of the event happening is greater than 1/2. So we can say,
$P_n \geq \frac{1}{2}$
If the initial question is, "In a room with n people, what is the probability that at least 2 people share the same birthday?", then we set up the probability like this:
First, we will make the initial problem simpler by choosing a set n value. Say $n=10$. This probability problem is actually quite easy to solve if we first solve the complementary probability. The
complementary probability is the probability that the original probability does not happen. For example, if the original probability is the chance that we get tails when we throw a coin the
complementary probability is the probability that we do not get tails. The complementary probability is always equal to 1 - original probability.
$P_{10}=$the probability that, in a room of 10 people, at least two people share the same birthday
The complementary probability to the birthday problem investigates the question, "In a room with 10 people, what is the probability that no two people share the same birthday?"We can set up this
probability like this:
$P'_{10}=$the probability that, in a room of 10 people, NO two people share the same birthday
To solve the complementary probability ($P'_{10}$), we will first assume that $n=10$
How many birthday outcomes are possible if there are 10 people in the room?
$C_{10} = 365\times 365 \times 365 \times 365 \times 365 \times 365 \times 365 \times 365 \times 365 \times 365 = 365^{10}$
Note: the general solution to this problem is just $C_{n} = 365^{n}$
How many birthday combinations are possible if there are 10 people in the room AND no one has the same birthday?
$C'_{10} = 365\times 364 \times 363 \times 362 \times 361 \times 360 \times 359 \times 358 \times 357 \times 356 = 3.706 \times 10^{25}$
So, The probability that no two people have the same birthday:
$P'_{10} =\frac{\text{outcomes where no one shares a birthday}}{\text{all possible outcomes}}=\frac{3.706 \times 10^{25}}{365^{10}}=0.883$
Now that we have calculated the complementary probability, it is quite easy to calculate the actual probability. Recall that:
$P_{10} = 1 - P'_{10}$, so
$P_{10} = 1 - 0.883 = 0.117$
With ten people in the room the probability that at least two people in the room share the same birthday is 0.117.
If we test out other values for n (for example with the short script below), we find that 23 is the first whole number where $P_n \geq \frac{1}{2}$.
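A short script of our own, following the complementary-probability argument above, computes $P_n$ for several values of n and confirms the threshold:

```python
def p_shared_birthday(n):
    """Probability that at least two of n people share a birthday."""
    p_no_match = 1.0
    for k in range(n):
        p_no_match *= (365 - k) / 365   # the k-th person must avoid the first k birthdays
    return 1 - p_no_match

for n in (10, 20, 22, 23, 30, 50):
    print(n, round(p_shared_birthday(n), 3))
# 10 0.117, 20 0.411, 22 0.476, 23 0.507, 30 0.706, 50 0.97
```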
Another Similar Problem
Many people who are shocked by the answer to the birthday problem were actually thinking about this question: In a room with n people, what is the probability that someone in the room shares my birthday?
This problem is also easy to solve if we think about the complementary probability. Say,
$P_n=$the probability that, in a room with n people, someone shares your birthday, and the complementary probability
$P'_n=$the probability that, in a room with n people, no one shares your birthday
Once again, we want to first calculate the complementary probability. say $n=23$
$P'_{23} =\frac{\text{possible outcomes where no one shares your birthday}}{\text{all possible birthday outcomes}}$
Since it is still possible for two other people in the room to share the same birthday, the numerator in this problem is constant (unlike the first example). In this case,
$P'_{23}= \frac{364}{365} \times \frac{364}{365} \times \frac{364}{365} \times \frac{364}{365} \text{...}= \left(\frac{364}{365}\right)^{23}= 0.9388$
Note: the general solution to this problem is just $P'_n=\left(\frac{364}{365}\right)^n$
So the probability that, in a room of 23, someone shares your birthday:
$P_{23} = 1 - P'_{23} = 1 - 0.9388 =0.0612$
In a room of 23, the probability that someone shares your birthday is quite low. We can solve for n to see how many people need to be in the room before it is more than likely that someone shares
your birthday.
Recall that in probability, the term more than likely means that the probability of the event happening is greater than 1/2. So we can say,
$P_n \geq\frac{1}{2}$
If $P_n = 1 - P'_n$, then $P'_n \leq \frac{1}{2}$
$P'_n= \frac{364}{365} \times \frac{364}{365} \times \frac{364}{365} \times \frac{364}{365} \text{...}= \left(\frac{364}{365}\right)^{n}$, so
$\left(\frac{364}{365}\right)^{n} \leq \frac{1}{2}$, and take the $\ln$ of both sides so
$n \times \ln \frac{364}{365} \leq \ln \frac{1}{2}$
Since $\ln \frac{364}{365}$ is negative, we must switch the sign when we divide:
$n \geq \frac{\ln \frac{1}{2}}{\ln \frac{364}{365}}$
$n = \frac{\ln \frac{1}{2}}{\ln \frac{364}{365}}= 252.651989 \approx 253$ people
There needs to be 253 people in the room before it is more than likely that someone in the room shares your birthday.
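The same derivation can be checked numerically (a small sketch using the closed form obtained above):

```python
from math import ceil, log

# smallest n with 1 - (364/365)**n >= 1/2, using the closed form from above
print(ceil(log(0.5) / log(364 / 365)))   # 253

# direct check of the threshold value
print(1 - (364 / 365) ** 253)            # about 0.5005, just over one half
print(1 - (364 / 365) ** 252)            # about 0.4991, still under one half
```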
The Almost Birthday Problem
What is the probability that, in a room with n people, there are two people who have birthdays within r days of each other?
The formula to find the probability that, in a room with n people, there are two people who have birthdays within r days of each other is
$P_n(r)= 1 - \frac{(365-1-nr)!}{365^{n-1}(365-(r+1)n)!}$
The proof of this formula is more difficult. In his book, Understanding Probability: Chance Rules in Everyday Life, author Henk Tijms directs his readers to P. Diaconis and F. Mosteller's Methods for Studying Coincidences for a detailed explanation. To learn more, click here!
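The formula is easy to evaluate numerically; in the sketch below (our own) the factorial ratio is expanded as a product of $n-1$ ratios to avoid huge intermediate numbers:

```python
def p_almost_birthday(n, r):
    """P(some two of n people have birthdays within r days of each other).

    Same as 1 - (365-1-n*r)! / (365**(n-1) * (365-(r+1)*n)!), with the
    factorial ratio expanded as a product of n-1 ratios.
    """
    if 365 - (r + 1) * n < 0:
        return 1.0                       # so many people that a near-match is forced
    p_no_near_match = 1.0
    for k in range(n - 1):
        p_no_near_match *= (365 - (r + 1) * n + 1 + k) / 365
    return 1 - p_no_near_match

print(round(p_almost_birthday(23, 0), 3))   # 0.507: r = 0 is the ordinary birthday problem
print(round(p_almost_birthday(14, 1), 3))   # about 0.54: for "within one day", 14 people suffice
```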
Cool Applications
Below are some links to interactive programs involving the birthday problem. You will need to have the Wolfram Alpha Demonstration Project installed on your computer to view these applications!
Why It's Interesting
The Birthday Problem is one of many problems in probability that demonstrates that certain "rare" events really are not all that rare. People are often fascinated by certain events in nature, casinos, and sporting events. When you can understand the math behind scenarios like the birthday problem and The Monty Hall Problem, you can better understand the events that occur around you on a daily basis.
1. ↑ "An Analysis of the Distribution of Birthdays in a Calendar Year", http://www.panix.com/~murphy/bday.html. Retrieved on 16 July 2012.
2. ↑ "Combination", http://en.wikipedia.org/wiki/Combination. Retrieved on 16 July 2012.
Future Directions for this Page
Finishing this Section:
More than 2 people
What is the probability that at least 3 people in the room are born on the same day?
What is the probability that, in a room of n people, at least three people share a birthday? We are going to call this S[n] where n is the number of people in the room.
Let's first assume that $n=3$. Let
$S_3=$Probability that, in a room with 3 people, at least 3 people share the same birthday. In this specific case, it is also the probability that everyone in the room shares the same birthday.
$S'_3=$probability that, in a room with 3 people, no three people share a birthday
To start, we must consider all possibilities.
1. That no two people are born on the same day. Let’s call this “1 1 1”
2. That two people share a birthday. Let’s call this “2 1”
3. That all three people share a birthday. Let’s call this “3”
For these probabilities we are going to use the notation P[(n,k)] where n is the number of people in the room and k is the number of pairs of people who share a birthday. Let’s calculate the
probability of possibility 3 (which is equal to P [(3,3)]). This is simply
$\frac{365}{365} \times \frac{1}{365} \times \frac{1}{365} = \frac{1}{365 \times 365} = 0.0000075$ So the probability that, in a room with 3 people, 3 people share the same birthday is
practically zero.
General Formula?
In number theory, a partition of n is a way of writing n as a sum of positive integers. Order is not important in partitions. For example: 2 1 1 is considered the same as 1 2 1. A composition is a
partition in which the order is important. ^[3]
When we calculated P[(3,3)] above, we were looking at partitions. To find any probability, we must sum the probabilities of the relevant partitions. To solve this problem (that, in a room with n
people, at least three people share the same birthday), any partition that involves integers greater than or equal to 3 would be summed to get the direct probability, $S_n$. All other partitions
(those only involving integers 1 and 2) are summed to get the complementary probability, $S'_n$, the probability that, in a room with n people, no 3 people share a birthday.
If we look at the partitions for n=3 in the chart above we see that there is 1 partition that corresponds to the direct probability (in red), and 2 partitions that correspond to the indirect
probability (blue). For n=3 it is easier to just find the probability of the 1 partition. However, the chart shows that, as n increases, the number of direct partitions (red) increases much faster
than the number of indirect partitions. This means that computing the direct probability will be much more time consuming for greater n values. For this reason, we will compute n values greater
than 3 by using the complementary probability.
Let’s find a recursive formula to find the probability of at least 3 birthdays that works for any n value! A recursive formula is based on previous calculations. In this case, the recursive formula
will be based on 2 previous probabilities.
Let’s start with complementary probabilities from the beginning: We will continue using the notation P [(n,k)] where n represents the number of people in the room and k represents the number of pairs
of people with the same birthday.
What if n=1?
• What is the probability that, in a room with 1 person, no pair of people share a birthday ? 1 is the only partition of 1.
The answer is obviously 1. So we can say P[(1,0)]=1
• What is the probability that, in a room with 1 person, one pair of people share a birthday?
The answer to this question is obviously 0. There is only one person in the room so how can we have a pair?
So we can say that P[(1,1)]=0.
Next, let’s look at n=2.
• What is the probability that, in a room with 2 people, no pair of people share a birthday? There are two partitions of 2: 1 1 and 2. Here we are calculating P[(2,0)]
This probability only concerns partition 1 1.
This is equal to
$\frac{365}{365} \times \frac{364}{365} = 0.997$
So P[(2,0)]= 0.997
• What is the probability that, in a room with 2 people, one pair of people share a birthday? Here we are calculating P[(2,1)]
This probability only concerns partition 2.
$\frac{365}{365} \times \frac{1}{365} = 0.0027$
So P[(2,1)]=0.0027
Note: P[(2,2)]=0, P[(2,3)]=0, etc. so we do not have to worry about these probabilities.
Now let's jump to n=5: Let's look at the partitions of 5.
(a) 1 1 1 1 1
(b) 2 1 1 1
(c) 2 2 1
(d) 3 1 1
(e) 3 2
(f) 4 1
(g) 5
• Here we have 3 partitions that involve integers less than 3 (a, b, and c), and 4 partitions that involve integers greater than or equal to 3 (d, e, f, and g).
Let’s compute the probabilities of partitions a, b, and c.
(a) 1 1 1 1 1
• In this case everyone has a different birthday. It can be represented P[(5,0)]. This is computed by
$\frac{365}{365} \times \frac{364}{365} \times \frac{363}{365} \times \frac{362}{365} \times \frac{361}{365} = 0.973$
(b) 2 1 1 1
• In this case there is one pair of birthdays. It can be represented P[(5,1)]. We can think of this as the sum of two probabilities:
1. The probability that, among the first 4 people there is already one pair of birthdays (and therefore the fifth person has a different birthday from everyone before them)
Which can be represented as $P_{(4,1)} \times C_1$
2. The probability that, among the first 4 people there are no pairs of birthdays (and therefore the fifth person must share a birthday with one of the first 4 people)
Which can be represented as $P_{(4,0)} \times C_2$. So we can represent P[(5,1)] as
$P_{(5,1)}= P_{(4,1)} \times C_1 + P_{(4,0)} \times C_2$
$C_1 = \frac{365-\text{the birthdays of the first 4 people}}{365} = \frac{365-3}{365} = 0.9917808$
$C_2 = \frac{\text{the birthdays of the first 4 people}}{365} = \frac{4}{365} = 0.0109589$
Once we get P[(4,1)] and P[(4,0)] from the chart we are going to make, we can write:
$P_{(5,1)}= P_{(4,1)} \times 0.9917808 + P_{(4,0)} \times 0.0109589$
$P_{(5,1)}= 0.01630349 \times 0.9917808 + 0.98364409 \times 0.0109589$
$P_{(5,1)}= 0.02694915$
(c) 2 2 1
• In this case there are two pairs of birthdays. It can be represented P[(5,2)]. We can think of this as the sum of two probabilities:
1. The probability that, among the first 4 people there are already two pairs of birthdays (and therefore the fifth person has a different birthday from everyone before them)
Which can be represented as $P_{(4,2)} \times C_a$
2. The probability that, among the first 4 people there is one pair of birthdays (and therefore the fifth person must pair up with one of the two people who are not already in a pair)
Which can be represented as $P_{(4,1)} \times C_b$. So we can represent P[(5,2)] as
$P_{(5,2)}= P_{(4,2)} \times C_a + P_{(4,1)} \times C_b$
$C_a = \frac{365-\text{the two birthday pairs in the first 4 people}}{365} = \frac{365-2}{365} = 0.9945205479$
$C_b = \frac{\text{the two birthdays in the first 4 that have not been paired}}{365} =\frac{2}{365} = 0.0054794521$
Once we get P[(4,2)] and P[(4,1)] from the chart we are going to make, we can write (here $P_{(4,2)} = \frac{3 \times 364}{365^3} = 0.0000225$, the probability of exactly two pairs among 4 people):
$P_{(5,2)}= P_{(4,2)} \times 0.9945205479 + P_{(4,1)} \times 0.0054794521$
$P_{(5,2)}= 0.0000224566 \times 0.9945205479 + 0.01630349 \times 0.0054794521$
$P_{(5,2)}= 0.0001117$
Now we have all parts of the complementary probability ($S'_5$) for n=5
$S'_5= P_{(5,0)} + P_{(5,1)} + P_{(5,2)}$
$S'_5= 0.9728644 + 0.0269491 + 0.0001117 = 0.9999252$
$S_5= 1 - S'_5$
$S_5 = 1 - 0.9999252 = 0.0000748$
So in a room of 5 people, the chance that at least three share a birthday is still less than one in ten thousand.
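The whole recursion is short enough to code directly. Here is a sketch (my own, following the recursion described above) that computes $S_n$ for any n:

from functools import lru_cache

DAYS = 365

@lru_cache(maxsize=None)
def P(n, k):
    # probability that n people produce exactly k shared-by-two birthdays and no triple
    if k < 0 or 2 * k > n:
        return 0.0
    if n == 0:
        return 1.0 if k == 0 else 0.0
    used = (n - 1) - k                                   # distinct days used when there are k pairs
    stay = P(n - 1, k) * (DAYS - used) / DAYS            # person n takes a brand-new day
    singles = (n - 1) - 2 * (k - 1)                      # unpaired people among the first n-1
    grow = P(n - 1, k - 1) * singles / DAYS              # person n pairs up with a single
    return stay + grow

def S(n):
    return 1 - sum(P(n, k) for k in range(n // 2 + 1))

print(round(S(5), 7))     # about 0.0000748, matching the value above
n = 1
while S(n) < 0.5:
    n += 1
print(n)                  # 88: the room size at which a shared triple becomes more likely than not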
Let F: (0,1] => R (all Real Numbers Be Uniformly ... | Chegg.com
Let f: (0,1] => R (R the set of all real numbers) be uniformly continuous on (0,1], and let L = lim {f(1/n)}. Prove that if {xn} is any sequence in (0,1] such that lim {xn} = 0, then lim {f(xn)} = L. Deduce that if f
is defined at 0 by f(0) = L, then f becomes continuous at 0. Show a complete, detailed proof.
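A sketch of one standard argument (added here for reference; it is not part of the original posting): Let $\epsilon > 0$. By uniform continuity there is $\delta > 0$ such that $|f(x)-f(y)| < \epsilon$ whenever $x, y \in (0,1]$ and $|x-y| < \delta$. Since $x_n \to 0$ and $1/m \to 0$, for all sufficiently large $n$ and $m$ we have $|x_n - 1/m| < \delta$, hence $|f(x_n) - f(1/m)| < \epsilon$. Letting $m \to \infty$ gives $|f(x_n) - L| \le \epsilon$ for all large $n$, so $\lim f(x_n) = L$. In particular the limit is the same for every sequence tending to $0$, so setting $f(0) = L$ makes $f$ continuous at $0$ by the sequential characterization of continuity.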
Advanced Math
What is the effect on the area of a triangle if the base is doubled and the height is cut in half
A = 1/2 * B * H formula for area of a triangle
question states that the base B is doubled and
H is cut in half
A = 1/2 (2B) (1/2H)
A = B (1/2H)
A = 1/2 * B * H
the area of the triangle stays the same; no change
This is a simple question from an online college course that someone feels the need to go to the internet to obtain an answer for. If they look up the formula, substituted numbers in the formula and
compared the results for themselves, they would have the answer.
The method of going to the internet to obtain an answer is both lazy and academically dishonest.
I completely agree. Math does not come easy to everyone. And I was looking for something to help me with the discussion part of the question not the actual math part. Why is everyone so hypocritical.
And you would not even know we were asking unless you came here to do the same thing! So why are you at this page?
Zentralblatt MATH
Publications of (and about) Paul Erdös
Zbl.No: 117.02901
Autor: Erdös, Pál
Title: Quelques problemes de theorie des nombres. (Some problems in number theory) (In French)
Source: Monographies Enseign. Math. 6, 81-135 (1963).
Review: In this important monograph the author has collected a great variety of problems (76 altogether) in the theory of numbers and has included useful comments and bibliographic references. The
problems are gathered under the following groupings: Divisibility problems in finite sets; Divisibility problems in infinite sets; Problems on the sums and differences of terms in one or more sets;
Problems on congruences, divisions, and arithmetic progressions; Prime number problems; Problems in diophantine analysis and similar questions. This monograph is an extremely valuable contribution to
the literature in the field.
Reviewer: W.E.Briggs
Classif.: * 11-02 Research monographs (number theory)
11Bxx Sequences and sets of numbers
11Axx Elementary number theory
00A07 Problem books
Index Words: number theory
© European Mathematical Society & FIZ Karlsruhe & Springer-Verlag
How to Use the VLOOKUP Function in Microsoft Excel
Hello World
If you are using Excel in a lot of your work, sooner or later you will definitely need to look up values in a table. One of the most useful functions in Excel, called VLOOKUP, does exactly that. This function
allows you to search for a value in a table whose entries are arranged in columns (the way most tables are arranged), using another given value that we will call the "key".
So, let's start with a very simple example of what is meant by VLOOKUP. Suppose you have a table like the following picture:
Then let's say you want to know what types of animals based on the given name, then write a list of names in other parts (in this case, column H):
The VLOOKUP function has the following format: =VLOOKUP(lookup value, table range where the values are, number of the column containing the value to return, false).
The first thing that goes into the VLOOKUP function is something you already know and that will be used to look up the other value. In this case, you have the names of the animals. In the example, they are
in column H, from cell H2 to H5. If you want to place the animal types in column I next to the animal names (I2 corresponding to the name in H2), you would start the VLOOKUP function as: =VLOOKUP(H2, ...).
Next, we need the location of the table where our values are. It runs from cell A1 through B5 in this example; you can highlight it with the mouse to enter it into the VLOOKUP function.
It is very important that you include all the cells in the table. So the function would look like: =VLOOKUP(H2, A1:B5, ...).
Next, we need the number of the column where the value is located. Always count the first column of the table (column A in this case) as #1 and count to the right. In this example, the animal types are listed in
column 2, so that is what we enter into the VLOOKUP function. Finally, the last argument VLOOKUP takes is either "true" or "false". Here we will use "false". If you use "true", you will need to
sort your data before using VLOOKUP. So your function would look like: =VLOOKUP(H2, A1:B5, 2, false). And the resulting values are:
You can copy the function to the rows below it; the lookup value changes from H3 to H5 accordingly.
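For readers who want to see the same exact-match behaviour outside Excel, here is a rough Python stand-in (the animal data is an invented example, not taken from the screenshots):

# column A = animal name, column B = animal type (hypothetical values)
table = {"Rex": "dog", "Whiskers": "cat", "Tweety": "bird", "Nemo": "fish"}

def vlookup(key, table):
    # behaves like =VLOOKUP(key, range, 2, FALSE): exact match or #N/A
    return table.get(key, "#N/A")

for name in ["Whiskers", "Nemo", "Dumbo"]:
    print(name, vlookup(name, table))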
Good luck!
Statistics Proves Same Drug Both Causes And Does Not Cause Same Cancer
Wall Street Journal article “Analytical Trend Troubles Scientists.”
Thanks to the astonishing fecundity of the p-value and our ridiculous practice of reporting on the parameters of models as if those parameters represented reality, we have stories like this:
In 2010, two research teams separately analyzed data from the same U.K. patient database to see if widely prescribed osteoporosis drugs [such as fosamax] increased the risk of esophageal cancer.
They came to surprisingly different conclusions.
One study, published in the Journal of the American Medical Association, found no increase in patients’ cancer risk. The second study, which ran three weeks later in the British Medical Journal,
found the risk for developing cancer to be low, but doubled.
How could this be!
Each analysis applied a different methodology and neither was based on original, proprietary data. Instead, both were so-called observational studies, in which scientists often use fast
computers, statistical software and large medical data sets to analyze information collected previously by others. From there, they look for correlations, such as whether a drug may trigger a
worrisome side effect.
And, surprise, both found “significance.” Meaning publishable p-values below the magic number, which is the unquestioned and unquestionable 0.05. But let’s not cast aspersions on frequentist
practices alone, as problematic as these are. The real problem is that the Love Of Theory Is The Root Of All Evil.
Yes, researchers love their statistical models too well. They cannot help thinking reality is their models. There is scarcely a researcher or statistician alive who does not hold up the parameters
from his model and say, to himself and us, “These show my hypothesis is true. The certainty I have in these equals the certainty I have in reality.” Before I explain, what do other people say?
The WSJ suggests that statistics can prove opposite results simultaneously when models are used on observational studies. This is so. But it is also true that statistics can prove a hypothesis true
and false with a “randomized” controlled trial, the kind of experiment we repeatedly hear is the “gold standard” of science. Randomization is a red herring: what really counts is control (see this,
this, and this).
Concept 1
There are three concepts here that, while known, are little appreciated. The first is that there is nothing in the world wrong with the statistical analysis of observational data (except that
different groups can use different models and come to different conclusions, as above; but this is a fixable problem). It is just that the analysis is relevant only to new data that is exactly like
that taken before. This follows from the truth that all probability, hence all probability models (i.e. statistics), is conditional. The results from an observational study are statements of
uncertainty conditional on the nature of the sample data used.
Suppose the database is one of human characteristics. Each of the human beings in the study have traits that are measured and a near infinite number of traits which are not measured. The collection
of people which make up the study is thus characterized by both the measured traits and the unmeasured ones (which include time and place etc.; see this). Whatever conclusions you make are thus only
relevant to this distribution of characteristics, and only relevant to new populations which share—exactly—this distribution of characteristics.
And what is the chance, given what we know of human behavior, that new populations will match—exactly—this distribution of characteristics? Low, baby. Which is why observational studies of humans are
so miserable. But it is why, say, observational astronomical studies are so fruitful. The data taken incidentally about hard physical objects, like distant cosmological ones, is very likely to be
like future data. This means that the same statistical procedures will seem to work well on some kinds of data but be utter failures on others.
Concept 2
Our second concept follows directly from the first. Even if an experiment with human beings can be controlled, it cannot be controlled exactly or precisely. There will be too many circumstances or
characteristics which will remain unknown to the researcher, or the known ones will not be subject to control. As good as you can design an experiment with human beings is just not good enough such
that your conclusions will be relevant to new people because again those new people will be unlike the old ones in some ways. And I mean, above and here, in ways that are probative of or relevant to
the outcome, whatever that happens to be. This explains what a sociologist once said of his field, that everything is correlated with everything.
Concept 3
If you follow textbook statistics, Bayesian or frequentist, your results will be statements about your certainty in the parameters of the model you use and not about reality itself. Click on the
Start Here tab and look to the articles on statistics to read about this more fully (and see this especially). And because you have a free choice in models, you can always find one which lets you be
as certain about those parameters as you’d like.
But that does not mean, and it is not true, that the certainty you have in those parameters translates into the certainty you should have about reality. The certainty you have in reality must always
necessarily be less, and in most cases a lot less.
The only way to tell whether the model you used is any good is to apply it to new data (i.e. never seen by you before). If it predicts that new data well, then you are allowed to be confident about
reality. If it does not predict well, or you do not bother to collect statistics about predictions (which is 99.99% of all studies outsides physics, chemistry, and the other hardest of hard
sciences), then you are not allowed to be confident.
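To make the point concrete, here is a tiny simulation (my own illustration, not from the article): screen a pile of pure-noise "predictors" against a pure-noise outcome, keep whichever looks best on the data you fit, then score it on data it has never seen.

import random
random.seed(1)

def corr(x, y):
    # plain Pearson correlation, standard library only
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

n_obs, n_predictors = 100, 100
y = [random.gauss(0, 1) for _ in range(n_obs)]
xs = [[random.gauss(0, 1) for _ in range(n_obs)] for _ in range(n_predictors)]

half = n_obs // 2
# pick the predictor that looks best on the first half of the data...
best = max(xs, key=lambda x: abs(corr(x[:half], y[:half])))
print(round(abs(corr(best[:half], y[:half])), 2))   # the best of 100 noise predictors looks impressive in-sample
# ...then check it on the half it was never selected against
print(round(abs(corr(best[half:], y[half:])), 2))   # back down to noise level

The in-sample number looks publishable; the out-of-sample number is what reality thinks of it.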
Why don't people take this attitude? It's too costly and time consuming to do statistics the right way. Just look how long it takes and how expensive it is to run any physics experiment (about
genuinely unknown areas)! If all of science did their work as physicists must do theirs, then we would see about a 99 percent drop in papers published. Sociology would slow to a crawl. Tenure
decisions would be held in semi-permanent abeyance. Grants would taper to a trickle. Assistant Deans, whose livelihoods depend on overhead, would have their jobs at risk. It would be pandemonium.
Brrr. The whole thing is too painful to consider.
Statistics Proves Same Drug Both Causes And Does Not Cause Same Cancer — 7 Comments
1. Amen, brother. Say it!
Frequentist or no, I have always been keenly aware that the results of any sample apply only to the population from which the sample was taken; meaning that the parent distribution is the same.
Most of my career was spent dealing with something halfway between the hard sciences and the social sciences: namely, manufacturing processes. There, our objectives quite often were a) to
maintain the process in a stable distribution with the mean on target and the variation comfortably within specs; and b) to get a signal when this was no longer true. Even so, there were
situations, like those dealing with raw materials, where a constant mean was unrealistic and unattainable in practice.
I wouldn’t blame the poor little p-value for the abuse it suffers at the hands of amateurs. Its conditionality should be quite clear: IF the mean and variance of the population has remained such
and such, THEN the probability of obtaining this sample result is p. But I have seen even physicists and the like take it to mean that “the probability that the hypothesis is true is p.” Eek.
2. Briggs – Like many/most people I have struggled with statistics for years. I always wondered about p-values, and your Bayesian discussions make hypothetical sense to me when I get into a
theoretical statistics-studying mode.
However, I still struggle – and I can see why people continue to use p-values – they just pick them off the shelf as statistical fast-food.
It would be useful if you could provide a simple scenario for some kind of (fairly realistic) “scientific” study where a Bayesian approach is used, with a direct comparison with the perhaps
erroneous p-value version.
As a statistical idiot, I would like to use the correct techniques, but I need them in a “take-away” format!
3. I love the little p value. It, along with confidence intervals, allow anyone to make their graphs exciting. A single line on its own is sleep inducing, but add some nice curves of t distribution,
a little < 0.05, maybe an * or two, and you've got yourself some sex appeal. Even a flat autocorrelation plot starts to look pretty hot once the CI has some sexy Gaussian displacement..
All kidding aside: Dr. Briggs, if folks actually tested their predictions then you wouldnt have anything left to write about.
4. According to the National Cancer Institute the cause of cancer is unknown. Since nobody knows what causes cancer, how could the researchers determine osteoporosis drugs increase the risk of
cancer? Read here. http://training.seer.cancer.gov/disease/war/
"Our slogan is: end the slavery of reification! (I'll speak more on this another day.)"
I’m eagerly waiting for Dr. Briggs to expound on this topic.
5. “fast computers”: oh for heaven’s sake.
6. Do you have the primary references for the two studies? This would make a wonderfully thought-provoking reading or analysis assignment for an undergrad statistics course.
7. “The only way to tell whether the model you used is any good is to apply it to new data (i.e. never seen by you before)”
I learned that the hard way. Years ago, when I was young and single, I read Andrew Beyer's book on handicapping horse races, and figured I could pick up a little extra spending cash, like Andrew
Beyer did in "My $50,000 Year at the Track".
I pored through Daily Racing Forms, finding significant statistical predictors almost guaranteed to make me money. My actual result was a loss of about 9% on my wagers- less than the track take
of 15%, but no better than blindly betting on favorites would have done.
I realized after the fact that looking for statistical predictors, recency of last X races, times of races run, recency of workouts, weight carried, average purse won, etc, I was BOUND to get
some results with p values less than 5% just based on the number of variables I was looking at. The only way to verify that my results were significant, rather than merely the result of data mining,
was to check the predictions on a series of future races before I wagered any more actual money.
Permutations containing and avoiding 123 and 132
Results 1 - 10 of 18
- Adv. Appl. Math
"... Abstract. We exhibit a bijection between 132-avoiding permutations and Dyck paths. Using this bijection, it is shown that all the recently discovered results on generating functions for
132-avoiding permutations with a given number of occurrences of the pattern 12... k follow directly from old resul ..."
Cited by 99 (3 self)
Add to MetaCart
Abstract. We exhibit a bijection between 132-avoiding permutations and Dyck paths. Using this bijection, it is shown that all the recently discovered results on generating functions for 132-avoiding
permutations with a given number of occurrences of the pattern 12... k follow directly from old results on the enumeration of Motzkin paths, among which is a continued fraction result due to
Flajolet. As a bonus, we use these observations to derive further results and a precise asymptotic estimate for the number of 132-avoiding permutations of {1, 2,..., n} with exactly r occurrences of
the pattern 12... k. Second, we exhibit a bijection between 123-avoiding permutations and Dyck paths. When combined with a result of Roblet and Viennot, this bijection allows us to express the
generating function for 123-avoiding permutations with a given number of occurrences of the pattern (k − 1)(k − 2)...1k in form of a continued fraction and to derive further results for these
- Adv. in Appl. Math
"... Abstract. We study generating functions for the number of permutations on n letters avoiding 132 and an arbitrary permutation τ on k letters, or containing τ exactly once. In several interesting
cases the generating function depends only on k and is expressed via Chebyshev polynomials of the second ..."
Cited by 41 (24 self)
Add to MetaCart
Abstract. We study generating functions for the number of permutations on n letters avoiding 132 and an arbitrary permutation τ on k letters, or containing τ exactly once. In several interesting
cases the generating function depends only on k and is expressed via Chebyshev polynomials of the second kind.
- SÉMINAIRE LOTHARINGIEN DE COMBINATOIRE 47 (2002) ARTICLE B47C , 2002
"... We study generating functions for the number of permutations in Sn subject to two restrictions. One of the restrictions belongs to S3, while the other to Sk. It turns out that in a large variety
of cases the answer can be expressed via Chebyshev polynomials of the second kind. ..."
Cited by 26 (14 self)
Add to MetaCart
We study generating functions for the number of permutations in Sn subject to two restrictions. One of the restrictions belongs to S3, while the other to Sk. It turns out that in a large variety of
cases the answer can be expressed via Chebyshev polynomials of the second kind.
- Discrete Math. Theor. Comput. Sci , 2000
"... A permutation is said to ¡ be –avoiding if it does not contain any subsequence having all the same pairwise comparisons ¡ as. This paper concerns the characterization and enumeration of
permutations which avoid a set ¢¤ £ of subsequences increasing both in number and in length at the same time. Le ..."
Cited by 23 (1 self)
Add to MetaCart
A permutation is said to be τ–avoiding if it does not contain any subsequence having all the same pairwise comparisons as τ. This paper concerns the characterization and enumeration of
permutations which avoid a set F_j of subsequences increasing both in number and in length at the same time. For the smallest such set the avoiding permutations are enumerated by the Catalan
numbers; for the next one they are enumerated by the Schröder numbers; for each larger value of j the set F_j contains j! subsequences, and the permutations avoiding them are enumerated by a
number sequence bounded in terms of the Catalan numbers. For each j we determine the generating function of permutations avoiding the subsequences in F_j, according to the length, to the number
of left minima and of non-inversions.
- Proc. 12th Conference on Formal Power Series and Algebraic Combinatorics , 2000
"... Let T m k = {σ ∈ Sk | σ1 = m}. We prove that the number of permutations which avoid all patterns in T m k equals (k − 2)!(k − 1)n+1−k for k ≤ n. We then prove that for any τ ∈ T 1 k (or any τ ∈
T k k), the number of permutations which avoid all patterns in T 1 k (or in T k k) except for τ and contai ..."
Cited by 20 (9 self)
Add to MetaCart
Let T^m_k = {σ ∈ S_k | σ_1 = m}. We prove that the number of permutations which avoid all patterns in T^m_k equals (k − 2)!(k − 1)^{n+1−k} for k ≤ n. We then prove that for any τ ∈ T^1_k (or any τ ∈ T^k_k),
the number of permutations which avoid all patterns in T^1_k (or in T^k_k) except for τ and contain τ exactly once equals (n + 1 − k)(k − 1)^{n−k} for k ≤ n. Finally, for any τ ∈ T^m_k, 2 ≤ m ≤ k −
1, this number equals (k − 1)^{n−k} for k ≤ n. These results generalize recent results due to Robertson concerning permutations avoiding 123-pattern and containing 132-pattern exactly once.
, 2001
"... Recently, Babson and Steingrimsson (see [BS]) introduced generalized permutations patterns that allow the requirement that two adjacent letters in a pattern must be adjacent in the permutation.
We study generating functions for the number of permutations on n letters avoiding 1-3-2 (or containing 1- ..."
Cited by 15 (7 self)
Add to MetaCart
Recently, Babson and Steingrimsson (see [BS]) introduced generalized permutations patterns that allow the requirement that two adjacent letters in a pattern must be adjacent in the permutation. We
study generating functions for the number of permutations on n letters avoiding 1-3-2 (or containing 1-3-2 exactly once) and an arbitrary generalized pattern τ on k letters, or containing τ exactly
once. In several cases the generating function depends only on k and is expressed via Chebyshev polynomials of the second kind, and generating function of Motzkin numbers. 1
, 2001
"... Recently, Babson and Steingrimsson (see [BS]) introduced generalized permutations patterns that allow the requirement that two adjacent letters in a pattern must be adjacent in the permutation.
Following [BCS], let ekπ (respectively; fkπ) be the number of the occurrences of the generalized pattern 1 ..."
Cited by 13 (8 self)
Add to MetaCart
Recently, Babson and Steingrimsson (see [BS]) introduced generalized permutations patterns that allow the requirement that two adjacent letters in a pattern must be adjacent in the permutation.
Following [BCS], let ekπ (respectively; fkπ) be the number of the occurrences of the generalized pattern 12-3-...-k (respectively; 21-3-...-k) in π. In the present note, we study the distribution of
the statistics ekπ and fkπ in a permutation avoiding the classical pattern 1-3-2. Also we present an applications, which relates the Narayana numbers, Catalan numbers, and increasing subsequences, to
permutations avoiding the classical pattern 1-3-2 according to a given statistics on ekπ, or on fkπ.
- Annals of Combinatorics
"... A permutation is said to be alternating if it starts with rise and then descents and rises come in turn. In this paper we study the generating function for the number of alternating permutations
on n letters that avoid or contain exactly once 132 and also avoid or contain exactly once an arbitrary p ..."
Cited by 12 (2 self)
Add to MetaCart
A permutation is said to be alternating if it starts with rise and then descents and rises come in turn. In this paper we study the generating function for the number of alternating permutations on n
letters that avoid or contain exactly once 132 and also avoid or contain exactly once an arbitrary pattern on k letters. In several interesting cases the generating function depends only on k and is
expressed via Chebyshev polynomials of the second kind.
- DISC. MATH , 2005
"... We say that a permutation π is a Motzkin permutation if it avoids 132 and there do not exist a < b such that πa < πb < πb+1. We study the distribution of several statistics in Motzkin
permutations, including the length of the longest increasing and decreasing subsequences and the number of rises and ..."
Cited by 8 (4 self)
Add to MetaCart
We say that a permutation π is a Motzkin permutation if it avoids 132 and there do not exist a < b such that πa < πb < πb+1. We study the distribution of several statistics in Motzkin permutations,
including the length of the longest increasing and decreasing subsequences and the number of rises and descents. We also enumerate Motzkin permutations with additional restrictions, and study the
distribution of occurrences of fairly general patterns in this class of permutations.
- Adv. Appl. Math
"... Define Sn(R; T) to be the number of permutations on n letters which avoid all patterns in the set R and contain each pattern in the multiset T exactly once. In this paper we enumerate Sn({α};
{β}) and Sn(∅; {α, β}) for all α ̸= β ∈ S3. We show that there are five Wilf-like classes associated with e ..."
Cited by 8 (2 self)
Add to MetaCart
Define Sn(R; T) to be the number of permutations on n letters which avoid all patterns in the set R and contain each pattern in the multiset T exactly once. In this paper we enumerate Sn({α}; {β})
and Sn(∅; {α, β}) for all α ̸= β ∈ S3. We show that there are five Wilf-like classes associated with each of Sn({α}; {β}) and Sn(∅; {α, β}) for all α ̸= β ∈ S3. 1.
Issaquah Calculus Tutor
Find a Issaquah Calculus Tutor
...I hold a Ph.D. in Aeronautical and Astronautical Engineering from the University of Washington, and I have nearly 40 years' experience working in the physical sciences in the private sector. I
have used my knowledge of physics and mathematics to model and build high power gasdynamic lasers, to m...
21 Subjects: including calculus, chemistry, English, physics
...I was a grader and tutor for a Physical Chemistry course for which I wrote segments of the solutions manual and errata in the first edition of the text. I've constructed, researched, and
presented biological fuel cells. I've presented a fuel-cell powered car at a national AIChE conference.
62 Subjects: including calculus, chemistry, English, physics
...Have extensive IT industry experience and have been actively tutoring for 2 years. I excel in helping people learn to compute fast without or without calculators, and prepare for standardized
tests. Handle all levels of math through undergraduate levels.
43 Subjects: including calculus, chemistry, physics, geometry
...I have worked as the Lead Math Tutor at the Math Resource Center of Highline Community College for one year, taking upon quite a load of responsibility while maintaining a high GPA in the
field of Physics. Currently I am transferring to the University of Washington Seattle campus on a full-ride ...
10 Subjects: including calculus, physics, algebra 1, algebra 2
...Since the history program at the University of Washington is so writing-intensive, I am practiced at writing analytic and argumentative papers comparing primary sources. This lends itself well
to students needing help preparing for AP tests, whether they be history or literature, as all are extr...
16 Subjects: including calculus, reading, chemistry, biology
The team investigate how becoming a cab driver makes you brainier
Teachers TV Created: Tue Jun 20 12:26:05 BST 2006
How Pythagoras' Theorem can be applied to everyday life
Teachers TV Created: Tue Jun 20 12:39:26 BST 2006
□ shows a square enlarged with one of its vertices as the centre of enlargement
□ enlarges a trapezium using a centre of enlargement that is external to the shape
□ in discussion explains 'the coordinates are doubled when you enlarge by a scale factor of two and (0, 0) is the centre for the enlargement, but that doesn't work if the centre is somewhere
else on the grid'.
http://www.directoryofchoice.co.uk/ Last update: 2011
□ enlarge shapes drawn in other quadrants, using (0, 0) as the centre of enlargement, and look at the effect on coordinates of vertices
□ enlarge shapes using different centres of enlargement and scale factors
http://www.directoryofchoice.co.uk/ Last update: 2011
Teaching ideas and example questions to help pupils construct lines and angles, understand the sum of angles in a triangle, classify quadrilaterals and enlarge 2-D shapes to a given scale. This
is part of the geometry section of the geometry and measures progression map for intervention in secondary mathematics.
http://www.directoryofchoice.co.uk/ Last update: 2011
Example mathematics questions focusing on enlarging a shape given the scale factor and centre of enlargement. Includes marks to be allocated.
The National Strategies Last update: 2011
We help a teaching assistant overcome her technophobia
Teachers TV Created: Sat Dec 17 10:29:01 GMT 2005
Issac Anoom presents a selection of quick-fire maths questions
Teachers TV Created: Mon Dec 21 01:05:10 GMT 2009
Teaching ideas and example questions to help pupils find, calculate and use the interior and exterior angles of polygons, and find the locus of a point that moves according to a given rule. This
is part of the geometry section of the geometry and measures progression map for intervention in secondary mathematics.
http://www.directoryofchoice.co.uk/ Last update: 2011
Lead and initiate classroom activities and build lesson plans that expand pupils’ understanding of enlargement and similarity. This is part of ‘Teaching mental mathematics from level 5:
http://www.directoryofchoice.co.uk/ Last update: 2011
Finding the Inverse of a Function
First make it a "Y=" equation: Y = x^2-5x+6. Switch the Y with the x's: X = Y^2-5Y+6. Now solve for Y: X - 6 = Y^2 - 5Y, so X - 6 + 25/4 = (Y - 5/2)^2, which gives Y = 5/2 + the square root of (X + 1/4). Kind of confusing, but I hope this helps!
You don't. That is not a "one to one" function and so does not have an inverse. If $f^{-1}(x)$ is the inverse of f(x), then we must have $f^{-1}(f(x))= x$, that is, $f^{-1}$ must 'undo' whatever f
does. But f(6)= 0 and f(-1)= 0. $f^{-1}(0)$ can't be both 6 and -1! Luke774, for some reason, chose to take only the "+" sign on the square root. That's valid but gives the inverse of a slightly
different function: $f(x)= x^2- 5x- 6$ with $x\ge 5/2$ and undefined for x< 5/2. Same formula but different domain so different function.
real numbers
Definition of Real Numbers
• Real Numbers include all the rational and irrational numbers.
More about Real Numbers
• Real numbers are denoted by the letter R.
• Real numbers consist of the natural numbers, whole numbers, integers, rational, and irrational numbers.
Examples of Real Numbers
• Natural numbers, whole numbers, integers, decimal numbers, rational numbers, and irrational numbers are the examples of real numbers.
• Natural Numbers = {1, 2, 3,...}
• Whole Numbers = {0, 1, 2, 3,...}
• Integers
Solved Example on Real Numbers
Name the subset(s) of the real numbers to which '- 25' belongs.
A. integers, rational numbers, real numbers
B. whole numbers, integers, rational numbers, real numbers
C. natural numbers, whole numbers, integer numbers, rational numbers, real numbers
D. irrational numbers, real numbers
Correct Answer: A
Step 1: - 25 is an integer.
Step 2: The set of integers is a subset of the rational numbers and the real numbers.
Step 3: Therefore, the subsets to which - 25 belongs to are: integers, rational numbers, and the real numbers.
Related Terms for Real Numbers
• Decimal
• Integer
• Irrational Number
• Natural Number
• Rational Number
• Whole Number
OP-SF WEB
Extract from OP-SF NET
Topic #10 ---------------- OP-SF NET ----------------- March 14, 1996
From: Martin Muldoon muldoon@mathstat.yorku.ca
Subject: New book on special functions
Nico M. Temme, Special Functions: an Introduction to the Classical Special Functions of Mathematical Physics, Wiley, New York-Chichester-Brisbane-Toronto-Singapore, 1996, xii + 374 pp, ISBN
1. Bernoulli, Euler and Stirling Numbers
2. Useful Methods and Techniques (Theorems from Analysis, Asymptotic Expansions of Integrals)
3. The Gamma Function
4. Differential Equations (Separating the Wave Equation, DE's in the Complex Plane, Sturm's Comparison Theorem, Integrals as Solutions, Liouville Transformation)
5. Hypergeometric Functions (includes very brief introduction to q-functions)
6. Orthogonal Polynomials
7. Confluent Hypergeometric Functions (includes many special cases)
8. Legendre Functions
9. Bessel Functions
10. Separating the Wave Equation
11. Special Statistical Distribution Functions (Error functions, Incomplete Gammma Functions, Incomplete Beta Functions, Non-Central Chi-Squared Distribution, Incomplete Bessel Function)
12. Elliptic Integrals and Elliptic Functions
13. Numerical Aspects of Special Functions (mainly recurrence relations)
Notations and Symbols
The author mentions that part of the material was collected from well-known books such as those by Hochstadt, Lebedev, Olver, Rainville, Szego, and Whittaker & Watson as well as lecture notes by
Lauwerier and Boersma. But there is much recent material, especially in the areas of asymptotic expansions and numerical aspects. About half of the approximately 200 references are dated 1975 and later.
The author states that the book "has been written with students of mathematics, physics and engineering in mind, and also researchers in these areas who meet special functions in their work, and for
whom the results are too scattered in the general literature." Complex analysis (especially contour integration) would appear to be the main prerequisite. The book is clearly written, with good
motivating examples and exercises. For example, the first chapter opens with a remarkable example of Borwein, Borwein and Dilcher (Amer. Math. Monthly 96 (1989), 681-687), which explains, using Euler
numbers and Boole's summation formula, why the sum of 50 000 terms of the very slowly converging alternating series for Pi/4, though it gives an answer correct to only six digits, yet has nearly all
its first 50 digits correct.
numerical problem: efficient euclid distance with very large numbers
Hi all,
I'm looking for an efficient solution for the following problem:
Let V be a set containing m vectors V={v1,v2,...,vm}. Given an input vector vi I need to find the vector from V which has the smallest euclid distance to vi (find best matching unit).
Normally not an issue. However, every element of V's vectors and the input vector vi always has a value from the open interval (0,1).
example 2D vector:
v1 = (0.0123123423423 , 0.00023567887)
Assume that every number has several thousand or even million digits. So practically spoken, if you implement euclid distance for this kind of numbers, you will need special high precision
libraries, where every mathematical operation (like add, mult or sqrt) will run many times slower than with primitives such as double.
Recently, I had the idea to tokenize a long number into smaller pieces and execute the euclid distance only on these pieces.
so for instance lets assume we are in 1d (for simplicity) and we have the following two vectors v1, v2 and the input vector vi
v1 = 0.001222345566
v2 = 0.123456649900
vi = 0.1923343233232
lets say we are tokenizing the numbers in peaces of length 5, then the first token (t1) for each vector would look like this:
v1_t1 = 00122
v2_t1 = 12345
vi_t1 = 19233
(here I am leaving away "0." since it is shared by all numbers)
here we see that the first 5 digits of v2 are closer to vi than v1's. Thus whatever comes after these 5 digits, v2 will always be closer to vi. And thus v2 is the best matching unit.
If the first tokens of v1 and v2 had the same (or very close) distance to vi, then the second token containing the next 5 digits would be compared.
The questions:
does this approach make sense ? (with approach I mean tokenizing the numbers and comparing them part by part)
Am I overlooking anything fundamental ?
How can this idea be improved ?
edit @ 18.12.07, 17:56 :
for new readers and future responses: Please accept the fact that I'm using huge numbers; this is simply a requirement and can't be changed.
thanks in advance,
Last edited by sirandreus; December 18th 2007 at 07:57 AM. Reason: some corrections again
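A rough sketch of the chunk-by-chunk comparison for the 1-D case above (my own illustration, not from the thread; it assumes the three digit strings have equal length and relies on Python's built-in arbitrary-precision integers):

def closer(v1, v2, vi, chunk=5):
    # Return 1 if v1 is strictly closer to vi, 2 if v2 is, 0 on a tie.
    # v1, v2, vi are equal-length strings of decimal digits (the part after "0.").
    p1 = p2 = pi = 0      # exact integer values of the prefixes read so far
    d1 = d2 = 0
    for start in range(0, len(vi), chunk):
        scale = 10 ** len(vi[start:start + chunk])
        p1 = p1 * scale + int(v1[start:start + chunk])
        p2 = p2 * scale + int(v2[start:start + chunk])
        pi = pi * scale + int(vi[start:start + chunk])
        d1, d2 = abs(p1 - pi), abs(p2 - pi)
        # the unread digits can shift each true distance by less than one unit of the
        # current prefix, so a gap of 2 or more already settles the comparison
        if d1 + 2 <= d2:
            return 1
        if d2 + 2 <= d1:
            return 2
    return 1 if d1 < d2 else 2 if d2 < d1 else 0   # all digits read: exact comparison

print(closer("001222345566", "123456649900", "192334323323"))   # 2, as in the example (vi trimmed to 12 digits)

In higher dimensions the same idea is messier, because squared coordinate differences have to be accumulated before prefixes can be compared.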
Hi all,
Iīm looking for an efficient solution for the following problem:
Let V be a set containing m vectors V={v1,v2,...,vm}. Given an input vector vi I need to find the vector from V which has the smallest euclid distance to vi (find best matching unit).
Normally not an issue. However, every element of Vīs vectors and the input vector vi always have a value from the open interval (0,1).
example 2D vector:
v1 = (0.0123123423423 , 0.00023567887)
Assume that every number has several thousand or even million digits. So practically spoken, if you implement euclid distance for this kind of numbers, you will need special high precision
libraries, where every mathematical operation (like add, mult or sqrt) will run many times slower than with primitives such as double.
Recently, I had the idea to tokenize a long number into smaller peaces and execute the euclid distance only on this peaces.
so for instance lets assume we are in 1d (for simplicity) and we have the following two vectors v1, v2 and the input vector vi
v1 = 0.001222345566
v2 = 0.123456649900
vi = 0.1923343233232
lets say we are tokenizing the numbers in peaces of length 5, then the first token (t1) for each vector would look like this:
v1_t1 = 00122
v2_t1 = 12345
vi_t1 = 19233
(here I am leaving away "0." since it is shared by all numbers)
here we see that the 5 first numbers of v2 are closer to vi than v1īs. Thus whatever comes after this 5 numbers, v2 will always be closer to vi. And thus vi is the best matching unit.
If the first tokens of v1 and v2 had the same (or very close) distance to vi, then the the second token containing the next 5 numbers would be compared.
The questions:
does this approach make sense ?
Am I overlooking anything fundamental ?
How can this idea be improved ?
Note that the solution to this problem does not need to be exact. Thus approximations are welcome as well
thanks in advance,
If it does not need to be "exact" then why not just truncate all the numbers
to 10 significant digits and work with those in standard double precision
floating point?
the representation of the numbers must be in a high precision, but practically spoken "not exact".
I can't just consider the first 10 digits, see the counterexample:
You cannot compare these two numbers if you cut everything but the first 10 positions.
In my original idea (see first post), you would first compare the first 10 digits, which are equal. Because they are equal you would compare the next 10 positions (assuming you are moving in
steps of 10).
There you would compare:
Last edited by sirandreus; December 18th 2007 at 03:52 AM. Reason: corrected constraints
Why do you need such high precision?
Working with say $13$ digit precision the probability of two distances being
indistinguishable is $\sim 10^{-13}$ (with a bit of scaling and to a hand waving
approximation), so you will need a lot of vectors before the chance
of encountering a problem becomes significant.
hi Captain,
why I need such high precision is described here: http://www.mathhelpforum.com/math-he...html#post90666
In short words: having an (0,1)^n as an inputspace, and k vectors v1,...,vk in that space, this k vectors are mapped uniquely into one vector in (0,1)^n.
This works by interleaving the decimal expansions of the vectors and thus a very huge vector can be created.
Sorry CaptainBlack, but I can't figure out how your statement about indistinguishable distances is related to my problem.
As I understand it, you are talking about the probability that 2 vectors have the same digits (with a length of 13). In fact, this probability is not relevant, because I'm looking for the best
matching unit (a vector from the set V which is the closest (in terms of euclid distance) to the input vector v_i).
kind regards,
Last edited by sirandreus; December 18th 2007 at 12:26 AM.
You are doomed to failure unless you recast your problem.
Almost all reals in (0,1) have non-terminating decimal expansions, which
leaves you with infinitely long calculations.
The interleaving of the decimal expansions is an "in-principle process" in
practice it is impossible to do exactly.
ok lets assume exactness is not a major requirement. For me its more important if the modified euclid distance computation works (see first post)
In practice you could use a max precision, such as 100000 digits. In practice nothing is exact
Correct, but not an issue for me. Because I already have numbers in a finite representation before I do the interleaving.
I don't care what these numbers should exactly be. I only see the numbers and assume that they are accurate. From a practical and numerical perspective there is nothing else you can do. I mean if
your sensor gives you a 0.33333333335, that's it; you can't ask yourself if it could mean 0.3333333333333.....33333....
1. Check your sensors real accuracy, 24 ADC's don't give 24 bit accuracy
may be 20 bits. You may not have as much real precision as you think.
2. Work with int64's (or if you have them int128)
3. Look at getting an arbitrary precision arithmetic package/class (google for
such for the programming language you intend to use).
captain thank you for your feedback.
However, the main question of this thread remains still unanswered. Does the modified euclid distance work ? Please read first post for that.
If no, is it possible to modify it so that it works ?
Or is the idea fundamentally wrong ? I mean the idea of tokenizing the numbers and comparing the number part by part
kind regards,
coadjoint orbit
coadjoint orbit
Let $G$ be a Lie group, and $\mathfrak{g}$ its Lie algebra. Then $G$ has a natural action on $\mathfrak{g}^{*}$ called the coadjoint action, since it is dual to the adjoint action of $G$ on $\mathfrak{g}$. The orbits of this action are submanifolds of $\mathfrak{g}^{*}$ which carry a natural symplectic structure, and are, in a certain sense, the minimal symplectic manifolds on which $G$ acts. The orbit through a point $\lambda\in\mathfrak{g}^{*}$ is typically denoted $\mathcal{O}_{\lambda}$.
The tangent space $T_{\lambda}\mathcal{O}_{\lambda}$ is naturally identified by the action with $\mathfrak{g}/\mathfrak{r}_{\lambda}$, where $\mathfrak{r}_{\lambda}$ is the Lie algebra of the stabilizer of $\lambda$. The symplectic form on $\mathcal{O}_{\lambda}$ is given by $\omega_{\lambda}(X,Y)=\lambda([X,Y])$. This is obviously anti-symmetric, and it is non-degenerate since $\lambda([X,Y])=0$ for all $Y\in\mathfrak{g}$ if and only if $X\in\mathfrak{r}_{\lambda}$. This also shows that the form is well-defined.
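A standard illustrative example (my addition, not part of the original entry): for $G=SU(2)$, an invariant inner product identifies $\mathfrak{g}^{*}\cong\mathbb{R}^{3}$ in such a way that the coadjoint action becomes the usual rotation action. The orbits are then the origin together with the spheres $\mathcal{O}_{\lambda}=\{x\in\mathbb{R}^{3}:|x|=|\lambda|\}$, and on each sphere $\omega_{\lambda}$ is a rotation-invariant multiple of the standard area form (the constant depends on the chosen identification).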
There is a close association between coadjoint orbits and the representation theory of $G$, with irreducible representations being realized as the space of sections of line bundles on coadjoint orbits. For example, if $\mathfrak{g}$ is compact, coadjoint orbits are partial flag manifolds, and this follows from the Borel-Bott-Weil theorem.
|
{"url":"http://planetmath.org/coadjointorbit","timestamp":"2014-04-17T19:01:31Z","content_type":null,"content_length":"46636","record_id":"<urn:uuid:3a7c15ab-cd05-4e93-bbb4-9f74cf2834fc>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00103-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mplus Discussion >> How to constrain the two factors to be equivalent
Yi Ou posted on Sunday, October 17, 2010 - 4:32 pm
I use CFA to test discriminant validity of a five factor measure. One easy way is to create two factor combination and say the five factor model is better than the four factor model. Another way (my
professor suggests) is to equate one factor with another, meaning constraining the covariance of the two factors to be equal to 1.
However, I tried and it didn't converge. What should I do?
Below is the syntax I used:
Title: CFA self 5 factor model
file is cfa data.dat;
format is free ;
type is individual ;
variable: names are
id hum1 - hum34 lgo1 - lgo5
cse1 - cse12 val1 - val21 mod1 - mod13 narc1 - narc14
sd1 - sd10
huma1 - huma34 humb1-humb34
missing = all (9) ;
usevariables are hum1 hum4 hum5 hum7 - hum12 hum16 - hum18
hum23 - hum25 hum27 - hum29 hum31 hum32 hum34;
type = general;
estimator = ML;
iterations = 10000;
hum_s by hum1 hum4 hum5 hum7 hum8;
hum_o by hum9-hum12;
hum_l by hum16-hum18;
hum_p by hum23 hum24 hum25 ;
hum_c by hum27 hum28 hum29 hum31 hum32 hum34;
hum_l with Hum_p@1;
output: sampstat modindices (0) standardized tech1 ;
Linda K. Muthen posted on Monday, October 18, 2010 - 9:32 am
Fixing the factor covariance to one most likely makes the model fit poorly resulting in convergence problems. Instead use MODEL TEST to test if the covariance is one. See the user's guide.
Jon Elhai posted on Monday, October 18, 2010 - 3:32 pm
Linda: When using MODEL TEST to test if the difference between two correlations (i.e., correlations between factors) is zero... I find some unusual results.
In one Model test, I find correlations (using STDYX) of .56 (between factor A and C) and .58 (between factor A and D), which Wald test results shows as significantly different (p = .002). But in
another Model test (using the same dataset), I find correlations (STDYX) of .57 (between factor B and C) and .66 (between B and E), which is not statistically significant (p = .93).
So the difference in correlations for the second Wald test appears to be greater than in the first case, yet without statistical significance. Could this be because of the use of STDYX instead of
using nonstandardized correlations?
Linda K. Muthen posted on Tuesday, October 19, 2010 - 9:48 am
It is not only the difference between the estimates but also the standard errors of the estimates that determine significance. If you have not standardized the coefficients in MODEL CONSTRAINT, you
may be testing covariances not correlations.
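To illustrate this point numerically (a minimal sketch of my own, not from the thread, and the numbers below are invented rather than taken from the posted models): a Wald z-test for the difference of two estimates depends on the standard error of that difference, so a larger difference with a larger standard error can easily be less significant than a smaller difference with a smaller standard error.

import math

def wald_z(diff, se_diff):
    # Wald z statistic and two-sided p-value for H0: difference = 0
    z = diff / se_diff
    p = math.erfc(abs(z) / math.sqrt(2.0))  # two-sided normal p-value
    return z, p

# Hypothetical numbers:
print(wald_z(0.02, 0.006))  # small difference, small SE: z ~ 3.3, p ~ 0.001
print(wald_z(0.09, 0.70))   # larger difference, large SE: z ~ 0.13, p ~ 0.90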
|
{"url":"http://www.statmodel.com/cgi-bin/discus/discus.cgi?pg=next&topic=9&page=6084","timestamp":"2014-04-17T12:53:25Z","content_type":null,"content_length":"21541","record_id":"<urn:uuid:cd911560-24f3-4905-831f-3580bf62016e>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00356-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Totowa Statistics Tutor
Find a Totowa Statistics Tutor
...After completing my undergraduate studies, I obtained a Masters degree in philosophy from Virginia Tech in 2009 and until recently was working on my PhD in philosophy at Brown University in
Providence, RI. If you have an interest in philosophy (either inside or outside of the classroom) or anyth...
40 Subjects: including statistics, chemistry, reading, physics
...My technique is always to break down complicated concepts into simple components, whether it's the steps of a complicated statistical technique, the components of a complex equation, or the
complexities of an action potential. I have 10 years of experience successfully teaching and tutoring statistics. I have a Bachelor's degree in Psychology and a Masters and PhD in Behavioral
5 Subjects: including statistics, algebra 1, psychology, Microsoft Excel
...The course normally has a 50% failure rate, but Jeff got me through it. He is very patient and provides great examples to help you understand the concepts." I was a computer programmer for
many years and am now a math student. I can probably solve most of your problems with Excel.
23 Subjects: including statistics, calculus, geometry, ASVAB
I just graduated college with a BA in Applied and Pure Mathematics from Rutgers University with a 3.9/4.0 GPA. I would be starting a Pure Mathematics PhD program at the University of Oklahoma
this fall. In short, I love mathematics.
12 Subjects: including statistics, calculus, geometry, algebra 2
...I am a confidence builder who tutors all levels of math, from integrated to honors. I also offer tutoring skills for the ACT, SATs, Private school entry exams and college prep. I have worked
with students from elementary school to college level classes and have made a huge difference in their lives.
13 Subjects: including statistics, geometry, algebra 1, trigonometry
Nearby Cities With statistics Tutor
Cedar Grove, NJ statistics Tutors
East Rutherford statistics Tutors
Fairfield, NJ statistics Tutors
Glen Rock, NJ statistics Tutors
Haledon statistics Tutors
Hawthorne, NJ statistics Tutors
Hillcrest, NJ statistics Tutors
Lincoln Park, NJ statistics Tutors
Little Falls, NJ statistics Tutors
North Caldwell, NJ statistics Tutors
North Haledon, NJ statistics Tutors
Paterson, NJ statistics Tutors
Verona, NJ statistics Tutors
Wayne, NJ statistics Tutors
Woodland Park, NJ statistics Tutors
|
{"url":"http://www.purplemath.com/Totowa_statistics_tutors.php","timestamp":"2014-04-20T08:52:08Z","content_type":null,"content_length":"24057","record_id":"<urn:uuid:8ea1bfcc-f625-45bc-a959-ae548f87741a>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00124-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Quick approximation to matrices and applications
Results 1 - 10 of 96
- Proc. of the 36 th ACM STOC , 2004
- J. Combin. Theory Ser. B
Cited by 97 (9 self)
We show that if a sequence of dense graphs Gn has the property that for every fixed graph F, the density of copies of F in Gn tends to a limit, then there is a natural “limit object”, namely a symmetric measurable function W: [0,1]^2 → [0,1]. This limit object determines all the limits of subgraph densities. Conversely, every such function arises as a limit object. Along the lines we introduce a rather general model of random graphs, which seems to be interesting on its own right.
- In IEEE Symposium on Foundations of Computer Science , 2000
Cited by 77 (16 self)
Given two distributions over an n element set, we wish to check whether these distributions are statistically close by only sampling. We give a sublinear algorithm which uses O(n^{2/3} ε^{-4} log n) independent samples from each distribution, runs in time linear in the sample size, makes no assumptions about the structure of the distributions, and distinguishes the cases when the distance between the distributions is small (less than max(ε^2/(32 n^{1/3}), ε/(4 n^{1/2}))) or large (more than ε) in L1-distance. We also give an Ω(n^{2/3} ε^{-2/3}) lower bound. Our algorithm has applications to the problem of checking whether a given Markov process is rapidly mixing. We develop sublinear algorithms for this problem as well.
- Handbook of Randomized Computing, Vol. II , 2000
Cited by 76 (10 self)
this technical aspect (as in the bounded-degree model the closest graph having the property must have at most dN edges and degree bound d as well).
- Bioinformatics , 2004
Cited by 70 (0 self)
Biological and engineered networks have recently been shown to display network motifs: a small set of characteristic patterns which occur much more frequently than in randomized networks with the
same degree sequence. Network motifs were demonstrated to play key information processing roles in biological regulation networks. Existing algorithms for detecting network motifs act by exhaustively
enumerating all subgraphs with a given number of nodes in the network. The runtime of such full enumeration algorithms increases strongly with network size. Here we present a novel algorithm that
allows estimation of subgraph concentrations and detection of network motifs at a run time that is asymptotically independent of the network size. This algorithm is based on random sampling of
subgraphs. Network motifs are detected with a surprisingly small number of samples in a wide variety of networks. Our method can be applied to estimate the concentrations of larger subgraphs in
larger networks than was previously possible with full enumeration algorithms. We present results for high-order motifs in several biological networks and discuss their possible functions.
Availability: A software tool for estimating subgraph concentrations and detecting network motifs (mfinder 2.0) and further information is available at:
Cited by 70 (12 self)
Abstract. Szemerédi’s Regularity Lemma proved to be a powerful tool in the area of extremal graph theory. Many of its applications are based on its accompanying Counting Lemma: If G is an ℓ-partite
graph with V (G) = V1 ∪ · · · ∪ Vℓ and |Vi | = n for all i ∈ [ℓ], and all pairs (Vi, Vj) are ε-regular of density d for ℓ 1 ≤ i < j ≤ ℓ, then G contains (1 ± fℓ(ε))d
- In Proc. 41th Annu. IEEE Sympos. Found. Comput. Sci , 2000
Cited by 60 (13 self)
A set X of points in R^d is (k, b)-clusterable if X can be partitioned into k subsets (clusters) so that the diameter (alternatively, the radius) of each cluster is at most b. We present algorithms that by sampling from a set X, distinguish between the case that X is (k, b)-clusterable and the case that X is ε-far from being (k, b′)-clusterable for any given 0 < ε ≤ 1 and for b′ ≥ b. By ε-far from being (k, b′)-clusterable we mean that more than ε·|X| points should be removed from X so that it becomes (k, b′)-clusterable. We give algorithms for a variety of cost measures that use a sample of size independent of |X|, and polynomial in k and 1/ε. Our algorithms can also be used to find approximately good clusterings. Namely, these are clusterings of all but an ε-fraction of the points in X that have optimal (or close to optimal) cost. The benefit of our algorithms is that they construct an implicit representation of such clusterings in time
- IN IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION , 2005
Cited by 44 (0 self)
The recent explosion of interest in graph cut methods in computer vision naturally spawns the question: what energy functions can be minimized via graph cuts? This question was first attacked by two
papers of Kolmogorov and Zabih [23, 24], in which they dealt with functions with pairwise and triplewise pixel interactions. In this work, we extend their results in two directions. First, we examine
the case of k-wise pixel interactions; the results are derived from a purely algebraic approach. Second, we discuss the applicability of provably approximate algorithms. Both of these developments
should help researchers best understand what can and cannot be achieved when designing graph cut based algorithms.
- In Proceedings of the 28th Annual International Colloquium on Automata, Languages and Programming (ICALP , 2001
Cited by 38 (6 self)
We present a probabilistic algorithm that, given a connected graph G (represented by adjacency lists) of average degree d, with edge weights in the set {1,...,w}, and given a parameter 0 < ε < 1/2, estimates in time O(dw ε^{-2} log(dw/ε)) the weight of the minimum spanning tree of G with a relative error of at most ε. Note that the running time does not depend on the number of vertices in G. We also prove a nearly matching lower bound of Ω(dw ε^{-2}) on the probe and time complexity of any approximation algorithm for MST weight. The essential component of our algorithm is a procedure for estimating in time O(d ε^{-2} log(d/ε)) the number of connected components of an unweighted graph to within an additive error of εn. (This becomes O(ε^{-2} log(1/ε)) for d = O(1).) The time bound is shown to be tight up to within the log(d/ε) factor. Our connected-components algorithm picks O(1/ε^2) vertices in the graph and then grows “local spanning trees” whose sizes are specified by a stochastic process. From the local information collected in this way, the algorithm is able to infer, with high confidence, an estimate of the number of connected components. We then show how estimates on the number of components in various subgraphs of G can be used to estimate the weight of its MST.
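As a toy illustration of the sampling idea in this abstract (my own sketch, not the authors' algorithm: it relies on the exact identity that the number of components equals the sum over vertices of 1/|component(v)|, and it omits the capped, stochastic search that makes the real algorithm sublinear):

import random
from collections import deque

def component_size(adj, v):
    # size of the connected component containing v (plain BFS)
    seen = {v}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen)

def estimate_components(adj, samples=200):
    # estimate  #components = sum_v 1/|C(v)|  from a uniform vertex sample
    vertices = list(adj)
    n = len(vertices)
    total = sum(1.0 / component_size(adj, random.choice(vertices))
                for _ in range(samples))
    return n * total / samples

# tiny example: two triangles and an isolated vertex (3 components)
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1],
       3: [4, 5], 4: [3, 5], 5: [3, 4],
       6: []}
print(estimate_components(adj))  # should be close to 3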
- In IEEE Symposium on Foundations of Computer Science (FOCS , 2008
Cited by 37 (5 self)
In this paper we derive tight bounds on the expected value of products of low influence functions defined on correlated probability spaces. The proofs are based on extending Fourier theory to an arbitrary number of correlated probability spaces, on a generalization of an invariance principle recently obtained with O’Donnell and Oleszkiewicz for multilinear polynomials with low influences and bounded degree, and on properties of multi-dimensional Gaussian distributions. We present two applications of the new bounds to the theory of social choice. We show that Majority is asymptotically the most predictable function among all low influence functions given a random sample of the voters. Moreover, we derive an almost tight bound in the context of Condorcet aggregation and low influence voting schemes on a large number of candidates. In particular, we show that for every low influence aggregation function, the probability that Condorcet voting on k candidates will result in a unique candidate that is preferable to all others is k^{-1+o(1)}. This matches the asymptotic behavior of the majority function for which the probability is k^{-1-o(1)}. A number of applications in hardness of approximation in theoretical computer science were
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=855610","timestamp":"2014-04-16T15:19:22Z","content_type":null,"content_length":"37952","record_id":"<urn:uuid:301b5ea6-1f58-457d-b452-6ae6c238a69e>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00164-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Higher Infinite
“Infinity!” “Infinity plus 1!” “Infinity plus a hundred!” “Infinity times ten!” “Infinity times infinity!” Remember playing this game as a child? Trying to compete to name the biggest infinity? What
happens when pure research mathematicians play this game?
Standing outside an auditorium before giving a calculus final exam, some of my students asked what I was reading. It was “The Higher Infinite”, a very advanced tome of mathematics by Akihiro
Kanamori. My students were understandably pretty interested to hear about the idea of more than one kind of infinity. In fact, there are infinitely many levels of infinity. And that's even pretty
well-known, at least among people who read about math. What Kanamori writes about, in rather difficult mathematical language (the book is meant for advanced graduate students or math PhDs), are some
lesser-known infinities, which are so big that they literally bend the basic facts of mathematics.
In formal set theory, there is no single entity called “infinity”. In mathematics, when we say that the answer to some question is “infinity”, we really mean that any finite answer would be too
small. The “entity” of infinity is just a kind of shorthand for expressing this idea. But there are infinite sets– that is, collections which are infinite. When we talk about multiple “levels of
infinity”, we’re really talking about collections of different, infinite, sizes.
Here’s an example. Take the collection of all natural numbers (the natural numbers are the numbers 0, 1, 2, 3, 4, and so on; the numbers you use to count). How big is this collection? Any finite
answer is too small. For example, if we guess that the collection of natural numbers has size one million, then that’s too small because there are more than a million counting numbers. Since any
finite answer is too small, we say the collection is infinite.
What about the set of all real numbers, with any number of decimal places, including infinitely many non-repeating decimal places, like in the number pi=3.1415…? If we look at this “continuum” of
real numbers, again it’s infinite, because any finite size would be too small. It turns out the set of all real numbers is actually larger than the set of all counting numbers. And that’s the
prototypical example to show that there are more than one size of infinity. But, what does it mean to say that one infinite collection is larger than another?
To understand how the sizes of infinite collections can be compared, it’s first necessary to understand how the sizes of finite collections are compared. One way to compare the sizes of finite
collections is to just count them and compare the numbers. But that doesn’t generalize to infinite collections, because we can’t count an infinite collection– if we could, it would not be infinite.
A better way– or at least, a more generalizable way– to compare the sizes of finite collections is to check whether we can marry their elements together in a nice one-to-one way. If two finite
collections have the same size, then we can think of one collection as being the collection of “males”, think of the other as being the collection of “females”, and marry them up so everyone has
exactly one partner. If the collections had different sizes, we couldn’t do this, someone would be left out.
This “marriage” idea is referred to in mathematics as a “bijection”. A bijection is just a fancy, smart-sounding way of saying, you assign each member of the first collection to a unique member of
the other, so that nothing is left out and nothing is matched up twice. If two collections have a bijection between them, then they have the same size, and if not, then they have different sizes.
(The marriage analogy breaks down a little bit when the two sets have some overlap. In that case, an element of one set is allowed to self-marry precisely if it’s an element of both sets.)
You can think of counting small (smaller than size eleven) sets as establishing a bijection between the set and between a certain subset of your fingers. You count “one, two, three, four,” holding up
a finger with each utterance, and you’re implicitly “marrying” a certain set of your fingers to elements of the set you’re counting.
The bijection idea naturally generalizes to infinite sets with no extra work. Two sets, whether they be finite or infinite, are said to have the same size if there’s some way to link their elements
together bijectively, so that each element of the first gets associated with exactly one element of the second.
Right away, some trippy examples come up. For example, the set of all even counting numbers (0,2,4,6,8,…) has the same size as the set of all counting numbers. That’s because if you take any even
counting number and divide it in half, you get a counting number, and the result is unique and no other even number gives the same result. The act of dividing-in-half is a bijection from the set of
even counting numbers to the set of all counting numbers.
The famous example which first shows that there are more than one size of infinity is the fact that there are more real numbers than there are counting numbers. A real number being any number,
positive or negative, with any number of decimal places, including numbers like pi or √2 where there are infinitely many non-repeating decimal places.
The fact that there are more real numbers than counting numbers was a big surprise to mathematicians; after all, both sets have infinitely many elements, so a mathematician back in the old times
might have said both sets have size infinity.
Saying that there are more reals than naturals comes down to saying that there aren’t enough naturals to associate with the reals; if we try to marry each real number to a unique natural number, with
no naturals getting married twice, then there won’t be enough natural numbers to go around and some (in fact, most) of the reals will be left out. There is no bijection from the naturals to the
reals. Since the naturals are a subset of the reals– every natural is a real– there are at least as many real numbers as there are natural numbers; so if there’s no bijection between them, then the
set of real numbers must be strictly bigger.
Georg Cantor, the mathematician who discovered a lot of this stuff about different infinities, published a famous proof that there’s no bijection between the naturals and the real numbers. You can
read his proof, the famous “diagonal argument”, at Wikipedia. I’ll give a different proof, one which is in my opinion more acid-trippy and fun.
My proof uses a certain infinite sum which is related to one of the Zeno’s paradoxes, if you’ve ever read about those. Basically, in order to walk across a distance of size 1 meter, first you have to
walk one half meter. Then, you have to walk one fourth meter. Then, you have to walk one eighth meter, then one sixteenth meter, and so on. Adding up all those partial walks, each one half as long as
the previous, you get the total distance of one meter. This gives the infinite sum, 1/2+1/4+1/8+…=1. What does this have to do with proving there are more reals than naturals? Hold onto your hat…
Suppose that there was a bijection from the naturals to the reals. Then we could “count” the reals (marry them to the counting numbers), saying, this real number is the first real; this real number
is the second real; this real number is the third real; and so on.
Now I’m gonna show a way that you can cover the entire real number line with a covering of length 1. That’s ridiculous, because the real number line is infinitely long. There’s no way to cover it
with just a covering of length one. For instance the real line contains the interval from -1 to 1, which has length 2 all by itself.
Take the “first” real number (i.e., the number which is associated to the counting number 0). Cover it with a cover of length 1/2. Next, take the “second” real number, and cover it with a cover of
length 1/4. Take the “third” real number, and cover it with a cover of length 1/8, then cover the “fourth” real number with a cover of length 1/16, and so on. Even if none of these covers overlapped,
the total area of the cover would be at most 1/2+1/4+1/8+…=1. In fact, the covers overlap a lot, so the total area of these covers ends up being less than 1. But every real number is the nth real
number for some n; that’s the assumption we made, that we had a bijection between counting numbers and reals. So, every real number gets covered. I’ve covered the whole real line, with a covering
scheme where the covers have a total length no longer than 1. Impossible, that’s not even enough to cover just the part of the real line from -1 to 1.
I started out assuming there was a bijection from the naturals to the reals. Then, I showed that the assumption allows me to do something ridiculous. So the assumption must be wrong, and there is no
bijection between the naturals and the reals. They have different sizes, and the reals contain the naturals, so there are more reals than naturals.
(Actually the proof is missing a little detail, since I’m assuming some common sense notions about how lengths work. The details are filled in in an advanced branch of math called “measure theory”,
but I figured it’d be worth the reduced details to post this alternate proof that there’s no bijection.)
I just established that there are more than one level of infinity, by showing that the set of real numbers is bigger than the set of natural numbers. But the infinitudes get much, much bigger, and there
are far more sizes of infinity than just the size of the set of naturals and the size of the set of reals.
If you take any collection of objects, it makes sense to talk about subsets of that collection. For example, you can ask about the set of all subsets of the natural numbers. This set-of-all-subsets
is called the power set. The power set of the natural numbers is the set of all sets of natural numbers.
Here’s the big breakthrough which leads to infinitely many levels of infinity. It’s a fundamental truth discovered by Georg Cantor, and it totally turned mathematics on its head. Cantor showed: if
you have any set whatsoever– empty, finite, or any level of infinite– then the power set is even bigger.
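The argument is short enough to include here (my addition, not in the original post). Suppose f is any way of assigning to each element a of a set A a subset f(a) of A. Consider the subset D = { a in A : a is not a member of f(a) }. If D were equal to f(a0) for some a0 in A, then a0 would belong to D exactly when a0 does not belong to f(a0) = D, which is impossible. So no assignment f can hit every subset, and the power set is strictly bigger than A.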
For example, the power set of the natural numbers– the set of all sets of natural numbers– is bigger than the set of natural numbers. So that gives another proof that there are more than one level
of infinity.
To get infinitely many levels of infinity, you can just repeat the process. You can take the power set of the power set of the set of naturals, and get something even bigger. And then you can take
the power set of that. The process never ends, and it provides infinitely many levels of infinity.
I’ve shown you how there are infinitely many different levels of infinity, and a natural question which you might ask is, what is the size of the set of all levels of infinity? So far, I’ve showed
how to get infinitely many different levels of infinity, but the infinities we can create using just power set, can be placed into a natural association with the natural numbers. I can say that the
0th infinity corresponds to the number of natural numbers. And then I can say the 1st infinity is the power set of the 0th. And the 2nd is the power set of the 1st, and so on. This hits all the
infinities we get with just repeated power setting of the naturals. If we just look at repeated power sets, we get as many levels of infinity as there are natural numbers.
But is that all of them?
No. There are infinities so big that no matter how many times I apply the power set operation, I’ll never reach them. Here’s an example. What if we take the set of naturals, and then the power set of
that, and combine them into one big set. And then, we throw in everything in the power set of the power set of the naturals. And then, throw in everything in the power set of that. And keep going,
forever. So, in other words, we get the set of all things which show up anywhere in any of the repeated power sets starting with the naturals. Since this set contains all those power sets as subsets,
it must be bigger than all of them. It’s one mind-bendingly, insanity-destroyingly huge set! (But, it’ll turn out it’s still “tiny” in the world of mathematical logic)
So just how many infinities are there? The answer is unsettling. It turns out, there are so many levels of infinity, that no level of infinity is enough to answer the question. No matter how hard
anyone tries to come up with some incomprehensibly large level of infinity, there are more levels of infinity than that.
Congratulations, if you’ve read this far, you know just about as much about building really-big-freaking-infinities as a lot of mathematicians. Now strap yourself in, I’m gonna talk about how to go
to a whole new level, making the infinities we’ve talked about so far look like tiny insects.
Logicians use the term “LARGE CARDINAL” to refer to some levels of infinity so big that, in a certain sense, they transcend math.
Almost all contemporary mathematics is done under a system of assumptions called ZFC (Zermelo-Fraenkel set theory with the axiom of Choice). Think of these assumptions like the postulates of Euclid,
except they concern abstract sets rather than geometric objects. ZFC is generally assumed to be “consistent”, by which we mean you can’t use it to prove 1=0. If someone did manage to prove 1=0 from
ZFC, that would rank among the biggest moments in all of mathematics.
Still, there is the faint, unsettling possibility that maybe ZFC can prove 1=0. Mathematicians would be extremely pleased to prove that it can’t. But a logician named Kurt Gödel crushed any hopes of
that. In a move that shocked mathematicians of the time, Gödel proved that any system strong enough to do the most basic arithmetic, is too strong to prove its own consistency– well, unless it’s
actually inconsistent (in which case it can prove anything whatsoever). In light of this “2nd Incompleteness Theorem” of Gödel, mathematicians actually dread a proof, in ZFC, of the consistency of
ZFC, because if any such proof exists– then ZFC is inconsistent.
However, if you assume additional axioms which go beyond ZFC, then it is sometimes possible to prove the consistency of ZFC in that new system. Such new axioms are generally quite complicated to even
state, but one of them is relatively simple: the axiom, “ZFC is consistent”. The system “ZFC+CON(ZFC)”, consisting of ZFC together with the statement that ZFC is consistent, trivially proves CON
(ZFC). Of course, by Gödel’s theorem, it can’t prove its own consistency…
Now, one way to prove the consistency of ZFC is to construct a model where ZFC holds, and show that ZFC holds in it. The ZFC axioms are statements saying that certain sets exist; so to prove CON
(ZFC), it suffices to construct a set which contains all the sets which ZFC says must exist. Such a set must be quite large; larger, in fact, than anything which ZFC says exists. Its size is a Large Cardinal.
All the levels of infinity which we can possibly conceive using normal math must exist within a large cardinal, because ZFC– and thus, all of “normal math”– exists within the large cardinal.
Consequently, the large cardinal is bigger than any level of infinity that we can construct using the axiom system of modern mathematics.
We can extend ZFC to a stronger axiom system, ZFC+, in which we add one new axiom: “There exists a set which is a model of ZFC.” The whole process can be repeated: we can ask about whether a model of
ZFC+ necessarily contains a set which, itself, is a model of ZFC+. Gödel’s theorem guarantees that ZFC+ does not prove that a ZFC+ model exists, and if there is a set which is a model of ZFC+, then
it is larger than anything which ZFC+ proves exists. Thus, its size is an even “Larger Cardinal”.
These LARGE CARDINALS I’ve talked about are just one type of large cardinal. The general process is: take ZFC (normal modern math) and extend it with some new assumption, strong enough to prove ZFC.
Above, I added the axiom “There is a set which models ZFC”, which is the heaviest-handed large cardinal axiom. In actual practice, large cardinal axioms are much more exotic, and in some cases they
may not even appear to be related to set theory at all. In fact, one of the goals of the mathematicians who study large cardinals is to troll mathematics by finding the most “harmless” looking large
cardinal axioms they can.
|
{"url":"http://www.xamuel.com/the-higher-infinite/","timestamp":"2014-04-18T15:38:46Z","content_type":null,"content_length":"39344","record_id":"<urn:uuid:f9809a52-08e2-40f3-8b4d-a75c90071fb6>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00245-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Is every virtual knot group an HNN extension?
A basic fact in knot theory is that a knot group $\pi(K)$ is an HNN extension of $\pi(F)$, the fundamental group of a Seifert surface complement. A nice discussion of this may be found in Chapter 11
of An introduction to the theory of groups by Rotman. This property means that a knot group is completely determined by the fundamental group of a Seifert surface complement, plus a choice of
meridian. I wonder whether this is somehow the real reason that Seifert surfaces are important in knot theory. Morally, the fact that a knot group is an HNN extension means that all the information
about a knot which you might care about is contained in any of its Seifert surfaces.
To remind you of the explicit group-theoretic statement, there is an isomorphism
$\phi\colon\thinspace \frac{\pi(F)\ast \langle m\rangle}{\mathcal{N}}\longrightarrow \pi(K),$
where $\langle m\rangle$ is the infinite cyclic group generated by the meridian, and $\mathcal{N}$ is the smallest normal subgroup of the free product $\pi(F)\ast \langle m\rangle$ containing the elements
$m^{-1}\mu^+(z)m(\mu^{-}(z))^{-1}\qquad z\in \pi(F),$
with $\mu^{\pm}$ denoting the pushoff maps.
There is a natural notion of a virtual knot group, by assigning a formal generator to each arc of a virtual knot diagram, and a Wirtinger relation to each real crossing (virtual crossings are
ignored). Any Wirtinger presentation of deficiency $0$ or $1$ can be realized as a virtual knot group by Theorem 3 of a paper by Se-Goo Kim.
Question: Is every virtual knot group an HNN extension? (edit: over a finitely generated group?) Can the base group be described in terms of a group generated by a commutator at each real
crossing? If not, is a virtual knot group "almost" an HNN extension in some useful sense?
I'm interested in this question because I wonder whether invariants coming from Seifert surfaces can be read off Gauss diagrams in any systematic way. Are Seifert surfaces an essential feature of
knots, as opposed to virtual knots; or are they a non-essential luxury?
knot-theory combinatorial-group-theory
It's an HNN extension in a trivial sense, since its abelianization is Z. Take the kernel of the homomorphism to Z, and take an extension of this by a meridian acting by conjugation. In general, the
kernel will be infinitely generated (unless e.g. it is a fibered real knot), so this won't correspond to a Seifert surface. Maybe you want to rephrase your question for an HNN extension over
finitely generated groups? – Ian Agol Feb 6 '11 at 21:47
Thanks! Indeed, I want the base group to be something weakly analogous to the fundamental group of a Seifert surface complement in some sense, so I definitely want it finitely generated. Question
edited. – Daniel Moskovich Feb 6 '11 at 22:09
2 Answers
I don't have a full answer to your question, but a hint. Any virtual knot group is the fundamental group of a knotted torus in 4-space. Constructing the torus from the virtual diagram is
easy. The construction is due to Shin Satoh. Such a knotted torus has a Seifert solid --- the torus is the boundary of a 3-manifold in 4-space. There is also a meridional class for the knotted torus. So I would imagine that the fundamental group of that knotted surface is an HNN extension. This is something that I should know, but don't know off the top of my head --- and I am heading out of the house in 5 minutes.
Thanks for the answer! What is the meridional class for a knotted torus? Also, isn't Shin Satoh's construction highly non-canonical? (not that that would make a difference for my question; but it would make the answer less of an analogue to the regular knot case) – Daniel Moskovich Feb 7 '11 at 15:42
According to a theorem of Kuperberg, a virtual knot corresponds canonically to an embedding of a knot in a thickened surface $K\subset \Sigma_g\times [0,1]$ of minimal genus $g$ (up to
homeomorphism). There is therefore another natural fundamental group associated to the knot, namely the fundamental group of the knot complement $\pi_1(\Sigma_g\times [0,1] - K)$. This group
certainly splits as an HNN extension (in many ways). The fundamental group of the virtual knot is obtained from this by killing the two peripheral subgroups corresponding to $\Sigma_g \times
\{0,1\}$. One may think of this as the fundamental group $\pi_1( S\Sigma -K)$, where $S\Sigma$ is the suspension. If $K$ is homologically trivial in $\Sigma\times [0,1]$, then one could take
an embedded minimal genus surface $F \subset \Sigma \times [0,1]$ spanning $K$, so $\partial F=K$. Unfortunately, though, there is no canonical choice of homology class for this surface. One
has a geometric splitting of $S\Sigma-K$ along $F$, however $F$ might not be $\pi_1$-injective in this space since Dehn's lemma is not available.
If $K$ is not homologically trivial in $\Sigma \times [0,1]$, it is still homologically trivial in $S\Sigma$, so one could take a surface bounding it (which must intersect a singular point of
$S\Sigma$). One could think of this as taking a minimal genus surface giving a cobordism between $K$ and an embedded curve in $\Sigma \times \{0,1\}$. Again, there is not a canonical homology
class ($H_2(S\Sigma)=\mathbb{Z}^{2g}$) and the surface may not be $\pi_1$-injective (in fact, there are virtual knots where the longitude is trivial in the virtual knot fundamental group).
Also, I don't think that linking numbers are well-defined (again, since $H_2(S\Sigma)$ is large), so it's not clear how to obtain an Alexander polynomial from such a surface.
|
{"url":"http://mathoverflow.net/questions/54549/is-every-virtual-knot-group-an-hnn-extension?sort=votes","timestamp":"2014-04-17T21:44:07Z","content_type":null,"content_length":"61231","record_id":"<urn:uuid:941471a0-44b0-4047-b6f3-e3d7c4b61e3a>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00448-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Complex Analysis: Laurent Series
December 6th 2008, 08:51 PM #1
Oct 2008
Complex Analysis: Laurent Series
I'm having some trouble with these two Laurent Series questions:
1) Consider the Laurent Series expansion of (pi^2-4z^2)/cos(z) which converges on the circle |z|=5. What is the principal part of this expression? What is the largest open set on which the series converges?
So I have no idea how to begin, especially with the series having to converge on the circle |z|=5. I know we can rewrite the expression as
(pi-2z)(pi+2z)/cos z. So we have zeros at +/- pi/2. Past that I am lost.
The second question is similar:
2) Find the Laurent Series expansion of the function
centered at 0 and convergent at z=2i. What is the largest open set for which this series converges.
For this one I am confused because I thought we used the Laurent series expansion to fix problems on an annulus.
Any help?
You do not want to find the zeroes; instead you want to find the function's singularities.
This is where cos z = 0.
So I spent some time today on these and wasn't getting anywhere. What really throws me off is the notion that these series converge on a circle instead of at a point or in some domain.
It doesn't mean it must converge 'only' on the circle; it is just that the domain of convergence includes the circle.
The second one is easier. It has singularities at i, -i, 3 and -3. It asks for convergence at 2i, but the series we want will converge on 1<|z|<3, which includes 2i.
So will I come up with different series that converge for |z|<1, 1<|z|<3, and |z|>3? For each of these domains, do I just exclude the factor of the denominator which would give me trouble?
Yes, a different solution for each region. To do the second one, split the function into partial fractions, then expand each as a geometric series (being careful of the radii of convergence).
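In case it helps to see the pattern (this is only an illustration of the method, since the actual function isn't reproduced above; I'm assuming a denominator with factors $z^2+1$ and $z^2-9$, matching the singularities at $\pm i$ and $\pm 3$): on the annulus $1<|z|<3$ you would expand
$\frac{1}{z^2+1} = \frac{1}{z^2}\cdot\frac{1}{1+z^{-2}} = \sum_{n=0}^{\infty} \frac{(-1)^n}{z^{2n+2}}$ (valid for $|z|>1$), and
$\frac{1}{z^2-9} = -\frac{1}{9}\cdot\frac{1}{1-z^2/9} = -\sum_{n=0}^{\infty} \frac{z^{2n}}{9^{n+1}}$ (valid for $|z|<3$).
Both series converge simultaneously exactly on $1<|z|<3$, which contains $z=2i$.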
So I got this expression writing the things out in partial fractions
z/20 * ((-1/(1-iz) + (1+(2-iz)+(2-iz)^2+...) + 1/(1-(4+3z)) - (1/9)*(1+(z/3)+(z/3)^2+...))
Where do I go from here? The two terms which I wrote out in geometric series form don't converge for 1<|z|<3
|
{"url":"http://mathhelpforum.com/calculus/63691-complex-analysis-laurent-series.html","timestamp":"2014-04-17T16:41:51Z","content_type":null,"content_length":"47601","record_id":"<urn:uuid:7cceab66-a0d7-436c-bb8c-1150bb63f792>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00262-ip-10-147-4-33.ec2.internal.warc.gz"}
|
can a particle entangle with itself?
The usual Stern-Gerlach experiment, often used as a very nice example to introduce the basic concepts of quantum theory, provides a state, where spin and position of a single particle are entangled.
The trick is to direct a beam of particles, described by a Schrödinger-wave packet, towards an inhomogeneous magnetic field. The force affecting the particle's trajectory is proportional to the spin
component in direction of the magnetic field and thus you can well separate particles in space with different spin components in this direction. In other words, the incoming beam splits into well
separated partial beams depending on its spin component. In this way you are (nearly) 100% sure that a particle in one of the partial beams has a certain value of this spin component. That means, the
particle's position is entangled with the spin component.
Then doesn't it resolve the measurement problem in the double-slit experiment?
Let's say we identify the slit that the particle passed through by measuring the electron's spin (or by measuring the polarization of light); then spin is entangled with position when we put the measuring device in front of the slits. Consequently, when we know the spin, we know the position, and this destroys the wave-like uncertainty in position (destroys the interference pattern), just as we destroy the information when measuring entangled particles.
I had heard that in the double-slit experiment the measurement device becomes entangled with the particle. Then that is not true. What's happening is that the measurement device creates entanglement between the spin and the position of the particle (like a BBO crystal creates entangled particles). Right?
|
{"url":"http://www.physicsforums.com/showthread.php?p=4216174","timestamp":"2014-04-17T21:41:19Z","content_type":null,"content_length":"47834","record_id":"<urn:uuid:1b4e7868-6957-4dec-b330-7b69762353e7>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00107-ip-10-147-4-33.ec2.internal.warc.gz"}
|
the first resource for mathematics
This is a well-written paper on the possibility of dividing the circle, lemniscate, and other curves into $n$ equal arcs using straightedge and compass and using origami. It is a nice blend of
geometry, number theory, abstract algebra, and the theory of functions.
In his Elements, Euclid showed that a regular $n$-gon can be constructed by straightedge and compass for $n=3,4,5$, and 6. In 1796, C. F. Gauss showed that a regular $n$-gon is constructible by straightedge and compass if $n$ is of the form $2^a p_1\cdots p_r$, where $a\ge 0$ and where the $p_i$ are distinct Fermat primes, i.e., odd primes of the form $2^n+1$, $n\ge 0$. In 1837, P. Wantzel proved that the converse is also true. In 1895, Pierpont proved that a regular $n$-gon is constructible using origami, i.e., paper-folding, if and only if $n$ is of the form $2^a 3^b p_1\cdots p_r$, where $a,b\ge 0$ and where the $p_i$ are distinct Pierpont primes, i.e., primes $>3$ having the form $2^n 3^m+1$, $n,m\ge 0$. The appearance of the number 3 reflects the fact that angles can be trisected by origami.
These theorems can be rephrased using the fact that a regular $n$-gon is constructible if and only if the circle can be divided into $n$ equal arcs. In view of this, Abel considered the lemniscate $r^2=\cos 2\theta$ and proved that it can be divided into $n$ equal arcs using straightedge and compass if and only if the circle can be so divided, i.e., $n=2^a p_1\cdots p_r$, where $a\ge 0$ and where the $p_i$ are distinct Fermat primes; see M. Rosen’s paper in [Am. Math. Mon. 88, 387–395 (1981; Zbl 0491.14023)].
The paper under review complements these results. Among other things, it considers division of the lemniscate by origami and proves that the lemniscate can be divided into $n$ equal arcs using origami if and only if $n=2^a 3^b p_1\cdots p_r$, where $a,b\ge 0$ and where the $p_i$ are distinct Pierpont primes such that $p_i=7$ or $p_i\equiv 1 \pmod 4$. It also puts these results in a more general context by investigating curves of the form $r^{m/2}=\cos(m\theta/2)$. For $m=1,2,$ and 4, these are the cardioid, the circle, and the lemniscate, and for $m=3$, the curve is referred to as the clover. The paper under review proves that the cardioid can be divided into $n$ equal arcs, for all $n$, by straightedge and compass (and hence by origami, since origami subsumes straightedge and compass). It also proves that the clover can be divided into $n$ equal arcs by origami if and only if $n=2^a 3^b p_1\cdots p_r$, where $a,b\ge 0$ and where the $p_i$ are distinct Pierpont primes such that $p_i=5$, $p_i=17$, or $p_i\equiv 1\pmod 3$. The problem of finding conditions on $n$ under which a clover can be divided into $n$ equal arcs by straightedge and compass is left open.
51M15 Geometric constructions
11G05 Elliptic curves over global fields
|
{"url":"http://zbmath.org/?q=an:1107.51007","timestamp":"2014-04-18T11:11:20Z","content_type":null,"content_length":"29090","record_id":"<urn:uuid:81527d12-d21f-4d0d-a0b9-ceebf6adcf76>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00662-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Examples 6.3.4(d):
This function is impossible to graph. The picture above is only a poor representation of the true graph. Nonetheless, take an arbitrary point $x_0$ on the real axis. We can find a sequence of rational points that converges to $x_0$ from the right; along that sequence the function values converge to one limit. But we can also find a sequence of irrational points converging to $x_0$ from the right; along that sequence the function values converge to a different limit. But that means that the limit of the function from the right at $x_0$ does not exist. The same argument, of course, works to show that the limit of the function from the left does not exist. Hence, $x_0$ is an essential discontinuity for the function.
{"url":"http://www.mathcs.org/analysis/reals/cont/answers/discwp4.html","timestamp":"2014-04-16T16:26:43Z","content_type":null,"content_length":"5778","record_id":"<urn:uuid:93ba5952-8af4-4278-8e0b-972889cac189>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00078-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Existence of Limit
December 25th 2005, 01:05 PM
Existence of Limit
Let $0 \leq f(x)$.
Then prove existence of infinity or a limit (but not both):
$\lim_{x \rightarrow \infty} f(x)=L$
$\lim_{x \rightarrow \infty} f(x)=\infty$
Logic caution:
In the problem you have to prove that exactly one of the conditions is satisfied.
December 25th 2005, 01:20 PM
Originally Posted by ThePerfectHacker
Let $0 \leq f(x)$.
Then prove existence of infinity or a limit (but not both):
$\lim_{x \rightarrow \infty} f(x)=L$
$\lim_{x \rightarrow \infty} f(x)=\infty$
Logic caution:
In the problem you have to prove that exactly one of the conditions is satisfied.
Let $f(x)=\sin(x)+1$, then what is (if anything):
$\lim_{x \rightarrow \infty}f(x)\ ?$
Or have I misunderstood your intention?
December 25th 2005, 03:59 PM
Yes, I made a mistake with what I said you are correct.
Show that if the limit is L then it cannot be infinite.
Show that if the limit is infinite then it cannot be L.
December 26th 2005, 12:46 PM
Originally Posted by ThePerfectHacker
Yes, I made a mistake with what I said you are correct.
Show that if the limit is L then it cannot be infinite.
Show that if the limit is infinite then it cannot be L.
In the case of infinity: for any given M there is an X such that for every x>X, f(x)>M.
Take M to be L+100, and then L is not the limit.
In the case of L: for every given g>0 there is an X such that for every x>X, |f(x)-L|<g.
Take X to be the one beyond which f(x) is bounded (don't remember the English expression) (and don't remember the proof that you have one); on this part f(x) is not infinite.
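For what it's worth (this is my addition, and it assumes the hypothesis that was presumably intended but left out, namely that $f$ is non-decreasing; CaptainBlack's $\sin(x)+1$ example shows some such assumption is needed): if $f$ is non-decreasing on $[a,\infty)$, then exactly one of the two statements holds. If $f$ is bounded above, set $L=\sup_{x\ge a} f(x)$; given $\epsilon>0$ there is an $X$ with $f(X)>L-\epsilon$, and monotonicity gives $L-\epsilon < f(X) \le f(x) \le L$ for all $x>X$, so $\lim_{x\to\infty} f(x)=L$. If $f$ is unbounded above, then for any $M$ there is an $X$ with $f(X)>M$, hence $f(x)>M$ for all $x>X$, so $\lim_{x\to\infty} f(x)=\infty$. The two cases are mutually exclusive, since a finite limit forces $f$ to be bounded above.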
|
{"url":"http://mathhelpforum.com/calculus/1506-existence-limit-print.html","timestamp":"2014-04-21T07:59:29Z","content_type":null,"content_length":"7270","record_id":"<urn:uuid:9bfbb6d9-2874-4d03-8e1f-56c2680d58a1>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00172-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Hapeville, GA ACT Tutor
Find a Hapeville, GA ACT Tutor
...I also ran cross country in high school and participated or led several service organizations in college and high school, so I can easily relate to a wide variety of interests and backgrounds.
I bring a professional, optimistic, and energetic attitude to every session and I think that everyone c...
17 Subjects: including ACT Math, chemistry, writing, physics
I am Georgia certified educator with 12+ years in teaching math. I have taught a wide range of comprehensive math for grades 6 through 12 and have experience prepping students for EOCT, CRCT, SAT
and ACT. Unlike many others who know the math content, I know how to employ effective instructional strategies to help students understand and achieve mastery.
13 Subjects: including ACT Math, geometry, statistics, probability
...I do enjoy tutoring or interacting one on one with students. I do strongly believe that each student is capable of succeeding, especially in Math. Working one on one with them does allow me to
find their strengths as well as their weaknesses, and build a strategy that will fit in to make him/her succeed.
13 Subjects: including ACT Math, calculus, discrete math, differential equations
...I earned my BS in mathematics from Bethune-Cookman college, my master's in educational administration and leadership from Florida A&M University, and I'm almost done with my PhD in mathematics
education at Georgia State University. Once I'm done with that, I'll consider graduate degrees in pure or applied mathematics. I'm patient with students at all levels.
10 Subjects: including ACT Math, calculus, geometry, algebra 1
...This work has included methods for study skills, including time management, ways to structure a large amount of reading, and organizational skills. Moreover, I have co-taught a course in
Research Practices for three semesters, which includes a significant component of study skills instruction. I hold a Ph.D. in religion with a special emphasis on Hebrew Bible.
33 Subjects: including ACT Math, English, reading, writing
Related Hapeville, GA Tutors
Hapeville, GA Accounting Tutors
Hapeville, GA ACT Tutors
Hapeville, GA Algebra Tutors
Hapeville, GA Algebra 2 Tutors
Hapeville, GA Calculus Tutors
Hapeville, GA Geometry Tutors
Hapeville, GA Math Tutors
Hapeville, GA Prealgebra Tutors
Hapeville, GA Precalculus Tutors
Hapeville, GA SAT Tutors
Hapeville, GA SAT Math Tutors
Hapeville, GA Science Tutors
Hapeville, GA Statistics Tutors
Hapeville, GA Trigonometry Tutors
Nearby Cities With ACT Tutor
Atlanta Ndc, GA ACT Tutors
Avondale Estates ACT Tutors
Clarkdale, GA ACT Tutors
College Park, GA ACT Tutors
Conley ACT Tutors
East Point, GA ACT Tutors
Ellenwood ACT Tutors
Fairburn, GA ACT Tutors
Forest Park, GA ACT Tutors
Lake City, GA ACT Tutors
Morrow, GA ACT Tutors
Pine Lake ACT Tutors
Red Oak, GA ACT Tutors
Rex, GA ACT Tutors
Scottdale, GA ACT Tutors
|
{"url":"http://www.purplemath.com/Hapeville_GA_ACT_tutors.php","timestamp":"2014-04-18T08:54:40Z","content_type":null,"content_length":"24078","record_id":"<urn:uuid:46056875-0669-46eb-be5e-28988496950a>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00638-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How to measure the height of a building with the help of a barometer
A small twist on an old tale... it's not really related to the Bayesian/frequentist debate, but rather...
It was the end of term, and the three top students (one each of engineering, physics, and maths) had one practical exam left to determine which one got the overall prize.
The tutor took them to a clock tower, and gave each one a barometer, paper and pencil. "Your task is to determine the height of the tower in the next 2 hours. You should assume
a priori
that it is a typical example of its type, which have heights uniformly distributed in the range 20-40m. We've examined the foundations and found that they are not safe for a building of more than 30m
height. Obviously we are very worried. Unless we can rule out the possibility that it exceeds 30m at the 99% level or better, we'll have to spend a million pounds reinforcing it. Can you tell us
whether this is necessary?" He then walked off.
The students sat down on the front step to think. The physicist and engineer started scribbling ideas. The mathematician thought for a few seconds, then went off to the pub for lunch.
2 hours later, they all assembled at the clock tower, and the tutor asked for their answers. The physicist says, "I observed the air pressure at the bottom of the tower, climbed to the top and observed the air pressure again. Based on the pressure difference, I estimate the height to be 25 ± 2.5m, which means there is a 2.3% chance of it exceeding 30m. You'd better do the reinforcements." (All uncertainties are assumed Gaussian and quoted as 1 standard deviation.) The engineer says, "I climbed to the top of the tower, and dropped the barometer, timing the fall with the clock. Based on the time it took, I estimate the height to be 27 ± 2m, which means there is a 7% chance of exceeding 30m. You definitely have to reinforce it."
The mathematician says, "I estimate the tower's height to be 26.2 ± 1.56m. The chance of it exceeding 30m is less than 1%. Let's spend the money on renovating the bar instead."
The physicist and engineer are aghast. "But we've both proved there is a substantial chance of disaster! Something Must Be Done!"
How did the mathematician calculate her answer, and was her decision the correct one? If she is right, what did the others do wrong?
Updated 6/02
Well, this wasn't really intended as a serious problem - I'm sure that you all realised that the mathematician combined the previous two estimates using Bayes' Theorem. Note that the question as posed specifically defines the tower as a sample from a uniform distribution - so it's a perfect well-posed frequentist problem and the Bayesian/frequentist rambling in the comments is completely irrelevant.
The vaguely amusing point I was making is that although the engineer and physicist did nothing wrong initially, and both concluded that there was a significant (>1%) danger, as soon as each of them hears the other one agree with their conclusion, their position immediately becomes untenable. Their only fault is to not realise this as quickly as the mathematician did :-)
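For readers who want the arithmetic behind the mathematician's figure, here is a sketch of the standard precision-weighted combination of the two independent Gaussian estimates (it assumes the two measurements are independent and treats the uniform 20-40m prior as broad enough to ignore for the point estimate):

\[
\hat{h} = \frac{25/2.5^2 + 27/2^2}{1/2.5^2 + 1/2^2} \approx 26.2\ \mathrm{m},
\qquad
\sigma = \left(\frac{1}{2.5^2} + \frac{1}{2^2}\right)^{-1/2} \approx 1.56\ \mathrm{m},
\]

and the probability of exceeding 30m is then \(1 - \Phi\bigl((30 - 26.2)/1.56\bigr) \approx 0.8\%\), just under the 1% threshold.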
22 comments:
Dear James,
here is my solution to your quiz:
The Bayesian mathematician "knows" that it is better to combine the information of two different independent measurements. She plugged the numbers into her standard statistical procedure.
Of course, the professor is smarter than that.
He knows that in order to estimate the height with the claimed precision by dropping a barometer one would need to measure the release and landing time to less than a fraction of a second.
This would require timing devices at the top and bottom of the building and could not be done within 2 hours of time.
And this assumes that there was no wind and no air resistance at all. A barometer
tumbling down a building is just not a good measurement device.
It is obvious that the second result was achieved by cheating or faked somehow.
The first result is somewhat suspicious too. Using only one commercial barometer would certainly introduce a systematic error.
In any case, the 2nd and 3rd student get an F and the 1st one a C.
He then explains to his students once more that without proper understanding of physics you should not estimate errors.
Then the professor decides to use good old trigonometry to determine the height himself.
Did I get it right ?
Dear Wolfgang,
your approach mimics the procedure by which the top quark was discovered at the Tevatron.
The two detector teams, CDF and D0, normally compete. But neither of them could have made a 5 sigma discovery. But when they combined their data, they could make it into 5 sigma. See Lisa's
Warped Passages for a description of this story.
Of course, by today, we have seen many top quarks and the discovery is much more solid than 5 sigma.
I find James' example unscientific. What he talks about is politics, not science. If there is a law that dictates whether a building must be reinforced, the law should exactly define the
procedure to find out whether the reinforcing is necessary. The definition he mentioned is not well-defined.
Claiming that there is an objective "probability" that the building is taller than 30 m is a stupidity, and we have explained why it is so many times. I assure you that more skillful engineers
and physicists could measure the height with the given equipment up to an error of 0.1 meters. And with better equipment, up to micrometers.
By giving them a stupid barometer and two hours only, you force them to act irrationally. This is just not how science should operate. In science, it is essential to have enough time and enough
available tools to do things right.
Incidentally, s=gt^2/2 gives you, for g=10 and s=30, 30=5*t^2, which means t=2.5 seconds or so, while the relative error of "s" is twice the relative error of "t".
> Incidentally, s=gt^2/2 gives you, for g=10 and s=30, 30=5*t^2, which means t=2.5 seconds or so, while the relative error of "s" is twice the relative error of "t".
Exactly my point. The 2nd student would have to measure the flight time at a fraction of a second to achieve the claimed accuracy, which is impossible with just a stop watch.
It would be equally foolish to assemble a panel of "height experts"
and ask for their opinions to improve the estimate.
Come on, Wolfgang, I can measure time exactly this precisely with stopwatch, especially if they're beeping.
Toss a piece of the barometer so that you hear a tick, press "start", stop it, press "stop". You will hear "tick beep ... tick beep". If you listen carefully, you can estimate the time delays
between the beeps and the ticks (tossing/falling on the ground), and if there were some discrepancies, you can estimate them better than with the 0.1s accuracy.
Don't forget that it takes 0.1 second for the sound to return back from the opposite end of the tower, if you rely on sound.
Best, LM
> Don't forget that it takes 0.1 second for the sound to return back from the opposite end of the tower, if you rely on sound.
Why did you have to give away this point? I wanted to keep the speed of sound argument for later 8-)
And what about the real experiment, as described at http://www.jamstec.go.jp/frcgc/research/d5/jdannan/#publications, where a panel of "height experts" had to place bets in order to determine the best estimate for the height of the building?
OK, this joke was popular in a different form when I was a grad student. I think the most accurate way to measure the building used the rulings on the barometer to measure the height of each step
on the way to the roof.
Another good way is to go to the architect, and offer her a fine barometer in exchange for her telling you the height of the building.
There is also a Syrian way to measure the height of the building: use the barometer as a fuse to burn the tower - surely it will be below 30 meters after 2 hours of work. ;-)
I'm glad my little story amused you.
Please note that as posed, it has an entirely straightforward frequentist interpretation as the prior is explicitly given. So no need for any anti-Bayesian rants here please.
If I were doing it, I would measure the height of the barometer, put the barometer next to the tower, then find out how many "barometers" high the tower is. Sorry, the statistics are beyond me.
Lumo's responses are just bizarre. The question does not say anything about a law. And if there were a law, I have no idea why it would specify how to measure height. Apparently anything he does
not like is "politics".
Not a bad idea for an accountancy exam or as an inter-disciplinary exam where accountancy students are excluded.
My answer would run along the lines of:
Assuming for the moment the hypothetical to be real, I would spend the two hours keeping the public away from the tower. I would also solicit someone to find an expert who could measure the
height of the building without undertaking the dangerous activity of climbing the tower. I anticipated that this would be done using trigonometry so I also solicited someone to find out the
quickest way of obtaining a sextant, theodolite or other appropriate surveying equipment. I did tell the physicist and engineer not to climb the tower. Fortunately for me they thought I was
trying to prevent them winning the prize. Had I not been so busy keeping the public away from the tower, I would have drafted a liability disclaimer to ensure that anyone who did climb the tower
did so at their own risk.
My reasoning for the above answer is to consider the controllable costs. No-one is going to order a million pounds of work without checking it is necessary with appropriate tools for the task.
Any attempt I made to measure the height of the tower will not make a difference to whether it falls down before strengthening work takes place or not. Therefore none of the costs of
strengthening of the building or the loss of the building and rebuilding costs if that is considered the best option are controllable costs. The most major cost I may be able to control is public
liability. The university is undoubtedly insured but I suspect there is a clause that the university should take appropriate steps if it knows of risks. This explains my suggested activities of
keeping the public away from the tower.
However, I noticed that the tutor walked away after setting the question. I consider that this is significant. Given this information, if the situation was real, I believe it is too much of a
risk to assume the prize students would take the appropriate action. Therefore it is clearly just a hypothetical question and the answer is that it is not necessary to measure the height of the tower.
Next I turn my attention to the actions of the competing students to see how I am doing compared to them. While climbing the tower is not dangerous, any attempt to measure the tower means they
are working on the assumption that the problem is real. As far as they are concerned they are undertaking a dangerous activity and acting inappropriately. The maths student gets a few marks for
spotting that using two independent measurements is better than one. However, he loses marks in several areas: 1. For not using this to conclude three measurements are better than two and ten measurements better than three. 2. For not doing some trigonometry on the problem and assuming the problem is real and giving an answer. 3. For effort to answer the problem.
Therefore I conclude that I think this answer should win. At this point I want to go to the pub but I will probably hang around in case still being on the scene when the tutor returns is
considered important.
The shorter answer to the question "Can you tell us whether this is necessary?" is No.
Referring to measuring the height of the building because it is clearly a test not real. The only reason for going through the long answer above is to avoid losing marks for effort.
Eventually I'll get to the answer:
In case it isn't clear, the mathematician calculated the answer the same way as I did deciding it was a test of deciding when not to do something.
The mathematician gets bonus points for producing the best answer with minimum effort! Being a mathematician, of course she couldn't dirty her hands with a real experiment.
Should I cry foul? Both the mathematician and I decided to put in some effort. The mathematician only doing so after hearing the other answers and she could not be sure she would hear two
answers. I put my effort in before that.
Sorry Chris,
Low cunning always beats effort in my book - and it's my puzzle, so I get to make the rules :-)
James thank you for compliment of telling me I don't use low cunning methods. :)
Stand the barometer up vertically.
Measure the length of its shadow.
Measure the length of the building's shadow.
You can get to the answer from there, right?
Jimmy - I can see one serious blunder in your problem formulation right off. That hypothetical chick was no mathematician - no mathematician would take the word of two miscellaneous louts for
anything. Mathematicians have to prove it for themselves.
She was definitely some lesser being - maybe a statistician - they will believe anything if they here it from enough people.
er, hear.
Answer to the Friday Puzzle!
Last night seven of my friends went to a local restaurant. They each handed over their coat as they entered. At the end of the evening the staff returned the coats, but they were totally incompetent
and randomly handed out the coats to my friends. What is the probability that exactly six of my friends received their own coat?
If you have not tried to solve it, have a go now. For everyone else the answer is after the break.
The answer is…..of course….. zero. If six of my friends have their coats back then all seven must have the right coat. Therefore it impossible for exactly six of them to have the correct coat.
Did you solve it?
I have produced an ebook containing 101 of the previous Friday Puzzles! It is called PUZZLED and is available for the Kindle (UK here and USA here) and on the iBookstore (UK here and in the USA here).
You can try 101 of the puzzles for free here.
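For the factorially minded, a quick brute-force tally settles both the puzzle and several of the calculations attempted in the comments below. This is only a sketch: it enumerates all 7! equally likely permutations of the seven coats (the one-coat-each model of the original puzzle) and counts how many friends end up with their own coat.

import Data.List (permutations)

-- Over all 7! equally likely permutations of the coats, count how many
-- permutations leave exactly k friends holding their own coat.
fixedPoints :: [Int] -> Int
fixedPoints p = length [() | (i, x) <- zip [0 ..] p, i == x]

main :: IO ()
main = do
    let counts = [ length [ p | p <- permutations [0 .. 6], fixedPoints p == k ]
                 | k <- [0 .. 7] ]
    print counts   -- [1854,1855,924,315,70,21,0,1]

The seventh entry is the point of the puzzle: out of 5040 permutations, none leaves exactly six friends with the right coat, while exactly one (the identity) gets all seven right.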
31 comments on “Answer to the Friday Puzzle!”
1. What happened to the 7th person?
□ Skipped out without paying.
2. I think he meant to say that if 6 have the right coats, the 7th must also.
3. I got the right answer, but I don’t understand the explanation with five and six. I would have explained it with six and seven…
□ Yes, Richard made an error there… And later on this day he’ll change it and nobody who reads it after that understands these comments. So for those, this is what it says now: “If five of my
friends have their coats back then all six must have the right coat. Therefore it impossible for exactly five of them to have the correct coat.”
4. I suspect that you are you one of the staff at the restaurant Richard
5. The staff are so disorganized they lost a guest halfway through the puzzle. That said, it took the time it took to read it to figure it out.
6. The 7th wasn’t a real friend :). I calculated the probability of 7 people getting their coats. Should have known it wasn’t maths…
7. “The answer is…..of course….. zero. If five of my friends have their coats back then all six must have the right coat. Therefore it impossible for exactly five of them to have the correct coat.”
Clearly Richard’s made a mistake here and meant
“”The answer is…..of course….. zero. If six of my friends have their coats back then all seven must have the right coat. Therefore it impossible for exactly six of them to have the correct coat.”
However, there is another answer. The wording of the question precludes the “totally incompetent” staff handing coats from a different set of customers to the famous seven. However, it does not
disallow the possibility that they handed more than one coat to any one member of the party. It is therefore possible that 6 of the friends have the right coat with one having a superfluous
extra. The possibility of that occurring is 6/(7x7x7x7x7x7x7) or rather less than one chance in 137,000. (That’s assuming any one coat is allocated randomly to any of the 7 friends and any one
might get any number of coats).
8. This is a real puzzle answer…. 😊. But is funny all people thinking…. I think is Zero in all cases 7-6, 6-5, 5-4… No?…😥
□ Yes, exactly. But sadly some people feel compelled to repeatedly point out the slightest error on the part of someone else. I think they hope it’ll distract from their own inadequacies.
□ It’s always worked for me Dave. What do you do to distract from your own inadequacies?
9. Zip, zero, obvious. Except this is Richard, who probably went with them and there could have been eight coats. Still he specifically says seven coats, so he probably didn’t wear one that night.
Of course we can bet he stuck one of his friends with the entire bill using some clever little magic trick. Sneaky devil!
10. Assumption: The 7 coats belonging to the 7 friends were randomly returned to the 7 friends.
Otherwise it is possible that 6/7 got correct coats. Probability, though, cannot be calculated as some of the variables are missing.
But I get the point about degrees of freedom.
11. One sneaky alternative explanation: the staff lost the seventh coat, or someone slipped into the wardrobe and stole it while the friends were eating dinner…
Of course, both would be unquantifiable in terms of probability unless you happened to know the frequency of staff mislaying coats or the local crime rate (particularly for coat thieves)…
12. Easy x 7, I think the real puzzle was inventing a new one around the scenario. What if all the coats were identical bar their odour, could they be sorted out in the miasma of cooking smells?
13. Simple rule of thumb: If any math problem can be solved by me in less than ten minutes, it’s too easy.
14. i remain confused. why is it not possible that the seventh recieved his own coat in error? that would fit the puzzle conditions.
□ Look, if there were 7 coats belonging to 7 people and 6 of those got their own coats, then the 7th coat had to go to the 7th person. Is this not obvious? If it isn’t, create 7 slips of paper
of different colors. Cut them all in half. Now mix them up. Then take 6 of the halves and match them to the 6 halves with the same colors. Well, what do you see for the remaining two slips?
□ i thank you niva for trying to explain to me. but i still do not see it. by your reasoning if there were six diners then it is impossible to get five coats right. surely therefore by
Mathematica Induction we will reason down to one coat and one diner. and that one coat must 100% be right, although Induction would say it was not.. so therefore with 7 it must be 1/6 =
approx 20%
□ Ha! My final answer was 4. Four of the 6 will get the correct coat. Make out of that what you will. #mathguess
□ If there are seven people and seven coats, and six of the people correctly receive their own coats, then whose coat will the seventh person receive? There’s only one coat left–his own–so it
is impossible for him to be the ONLY person to receive the wrong coat.
15. Oh, I thought it was 1:720. I didn’t think the 7th coat being correctly distributed would be a violation. But I see why it is.
16. So obvious - but I didn't get it right - your trixy language tripped me up… Good one though!
17. I got 1*2*3*4*5*6*7=5040th chance. My thought was also that also the seventh got automatically the right one, but i didnt read the “…exactly six…” in the question.
□ I got the same answer, assuming that the 7th person died before he could receive his coat :-)
18. The answer depends on the historical check-to-coat matching ratio of the clerk. If he or she gets it wrong every 7th time, then there’s a 100% chance that 6 have the correct coat, and the 7th guy
gets the coat of some poor slob who left without claiming his years ago back when patches on the elbow were still popular. Luckily the coat pockets contained $23.50 in small bills and coins, a
map to the New York subway system, a silver ring in the shape of a unicorn, ticket stubs from a performance of CATS in 1982, and a half-empty pack of still quite usable chewing gum.
□ That’s mine! unicorn ring, chewing gum, yipee
I wondered where I left it, but I’ve never been to NY or Cats, what a puzzle.
19. Richard states the problem above as ” At the end of the evening the staff returned the coats, but they were totally incompetent and randomly handed out the coats to my friends. What is the
probability that exactly six of my friends received their own coat?”
Clearly he has omitted some sentence here specifying that six received the right coat. But only Nick seems to have noticed this.
This one was absurdly easy. Yet it seems that some didn’t get it. The real puzzle is why they didn’t. What was the mental glitch? Hard to see.
Whatever it was, it is very unexpected. The simple logic of the question seemed to leave no room for imagining anything else but the right answer at once.
Judging from the rest of the comments here some people have the most disordered brains, constantly beset by imaginings both absurd and irrelevant to the point of the question.
If this is characteristic of the population at large, no wonder the results of voting in the US are such a poor validation of democracy.
In fact on this basis I predict Romney will displace the impeccable professorial pragmatist Obama.
□ I’m not sure what you’re talking about.
The puzzle *does* include a sentence specifying that six received the right coat. You quoted it in your post! “What is the probability that exactly six of my friends received their own coat?”
Nick’s post was in response to an earlier version of Richard’s solution, since replaced with a corrected one, in which he inadvertently answered the question as if five out of six friends
(instead of six out of seven) received their own coats.
20. Damit! I just read the trains puzzle thinking that was a trick and got it wrong because I had to use math (technically) so I worked this one out with math when it turned out to be a trick :(
Posts by
Total # Posts: 284
Through a given point on a line, there is one and only one line perpendicular to the given line. The hint that it gave was "Do not limit your thinking to just two dimensions". If it is just a regular line, there will only be one perpendicular line, right?
There is something wrong with this definition for a pair of vertical angles: If AB and CD intersect at point P,then APC and BPD are a pair of vertical angles. Sketch a counterexample to show why it
is not correct. Can you add a phrase to correct it? - Umm, for the ...
Algebra I
The pairs should be parallel. y-x=-8 in slope intercept form ( y=mx+b) is y=x - 8. So the slope for y-x=-8 is 1 too.
Math(Please help)
(I don't know if I'm 100% correct, could someone recheck if needed? Thanks) 1. 3sqrt2-8sqrt128 [ First you multiply 3 x 2, which will give you 6sqrt. Then you would multiply -8 and 128, which
will give you -1024. The final step would be to combine like terms, the...
Use your protractor to find the measure of the angle to the nearest degree. 1. Angle S. Does anyone know how I'm supposed to find the degree when it's only telling me the vertex (S)?
Could anyone give me any idea on how t o answer this? Thanks 1.A classmate tells you, Finding the coordinates of a midpoint is easy. You just find the averages. Is there any truth to it? Explain
what you think your classmate means. - I think what he meant by saying...
Could anyone give me an idea of what to do for this problem? ( Like how do I set it up) Draw and mark a figure in which M is the midpoint of Segment ST, SP=PT, and T is the midpoint of Segment PQ.
If I have a line that have 3 points ( A,R, and T) do I call it line ART? or only line AT(A being the first point and T being the last)? Thanks
Language Arts
Thanks again =)!
Language Arts
Does anyone know a word that mean the same as " a computer's user"? I try googling and all I got was "Hacker".Thanks
Language Arts
Thanks! That helped me a lot!
Language Arts
Use the following sentences and use vivid verbs and adjectives to rewrite the sentences. The sentence that I'm having a problem with is: I play video game. Could anyone give me an idea on how to improve that sentence? Thanks
Does anyone know how I can make a 3-D poster?Thanks
So for this problem 3y + 2x = 4.5 it would be 30y + 20x = 45( I multiply all the term by factor of 10 to get rid of the decimals) then I would simplify it into 4x + 6y=9. Is this correct?
Why did you multiply by 5?
Write each equation in standard form. 1.y-3=-2.4(x-5) -The answer I got was 2.4x + y = 15. Could someone please check my answer and tell me how to solve it if it wrong? Thanks
Ms.Sue could you help me again please?Thanks -Earlier you say: The best example is the last four lines: "And so, all the night-tide, I lie down by the side Of my darling- my darling- my life and my
bride, In the sepulchre there by the sea, In her tomb by the sounding sea....
Is the 4 line the best examples because he sleep/stay beside her tomb everyday or do you have another points of view? Thanks
Could you give me some examples from the poem that you think show Allan not being able to go on with his life because of his wife's death? If not then thank you very much anyway.
Read the poem "Annabel Lee" and write why you think Allan's grief for his wife has made him unable to go on with his own life. Use examples from the poems.-Thanks
Factor each trinomials. 1.m^2-mv-56v^2. Thanks
Kent invested $5,000 in a retirement plan. He allocated X dollars of the money to a bond account that earns 4% interest per year and the rest to a traditional account that earns 5% interest per year.
1.Write an expression that represents the amount of money invested in the tradi...
Sorry, it actually -2/3(n^2)
Find each products. -2/3n^2(-9n^2+3n+6) Thanks
State whether each expression is a polynomial. If the expression is a polynomials, identify it as a monomial,a binomial, or a trinomials. 1.7a^2b+3b^2-a^2b 2.1/5y^3+y^2-9 3.6g^2h^3k Are these the
right answers? 1.Yes-trinomials 2.No 3.Yes- monomials.
State whether each expression is a polynomial. If the expression is a polynomial, identify it as monomial,a binomial, or a trinomial. 1.1/5y^3+y^2-9 2.6g^2h^3k Are these the right answers? Thanks
1.No 2.Yes, monomials.
How should I begin my first paragraph when I'm trying to interpret a poem? The poem is Nothing Gold Can Stay by Robert Frost.Thanks
Simplify. 1.xy^2/xy 2.m^5np/m^4p 3.5c^2d^3/-4c^2d 4.-4c^2/24c^5 Are these the right answers?Thanks 1.xy 2.mnp 3.-5/4cd^2 4.-1/6c^3
Could someone help me interpret this poem?Thanks Nature's first green is gold, Her hardest hue to hold. Her early leafs a flower; But only so an hour. Then leaf subsides to leaf. So Eden sank to
grief, So dawn goes down to day. Nothing gold can stay. -Robert Frost All I ca...
Simplify. 1.(-5x^2y)(3x^4) Is the answer for this = -15x^6y^1? If not could you explain how to do it? Thanks.
Simplify. 1.(-15xy^4)(-1/3xy3) Is the answer for number 1 = -15x^6y^1?Thanks Express the area of each figure as a monomial. 1.A circle with the radius of 5x^3. how would I solve this problem?
Question Number Your Answer Answer Reference 1. D Correct 2. C Correct 3. B Correct 4. A Correct 5. B Correct 6. C Correct 7. A Correct 8. C Correct 9. D Correct 10. D Correct 11. D Correct 12. B
Correct 13. A Correct 14. C Correct 15. B Correct 16. D Correct 17. C Correct 18...
2 is A
Social Studies
Congressional committee must pass along every bill to be debated and voted on by the other members of their respective houses. Is this true or false?
Triangle ABC has vertices A(0,4),B(1,2),and C(4,6). Determine whether triangle ABC is a right triangle.Explain. How do I do this?
Write the slope-intercept form of an equation of the line that passes through the given point and is perpendicular to the graph of each equation. 1.(-2,-2),y=-1/3x+9 2.(-4,-3),4x+y=7 3.(-4,1),4x+7y=6
4.(-6,-5),4x+3y=-6 These are the answer I came up with, could you review them...
Could do use one of mine and show me how you check it? Thanks
Write the slope-intercept form of an equation of the line that passes through the given point and is parallel to the graph of each equation. 1. (3,2), y=x + 5 2.(4,-6),y=-3/4x + 1 3.(-8,2),5x-4y=1
Could you guy check if these are the answer? 1.y=x-1 2.y=-3/4x -3 3.y=5/4x + 12 ...
I'm writing a paragraph to compare and contrast a story character and me, how should I start the paragraph? Thanks
Social Studies
1. Do you think the special powers of the Senate or those of the House of Representatives are more important? What is this question asking me? Is it asking me whose power I think is more important between the Senate and the House of Representatives? Thanks
Social Studies
1.Why is " elastic clause' an appropriate name for Clause 18 of Section 8 of Article l? 2.Think about what it mean to "express" something. Which powers - delegated or implied - could also be referred
to as expressed powers? Why? 1.Elastic Clause is the appro...
Social Studies
What is the process of an amendment? Could someone sums up how can 1 get pass? Thanks
Strong winds called the prevailing westerlies blow from west to east in a belt from 40 degree to 60 degree latitude in both Northern and Southern Hemisphere. 1.Write an inequality to represent the
latitudes where the prevailing westerlies are not located. Could someone explain...
Did you divide the -n by itself and then 2 by -n in order to get the sign reverse? Also could you help me on this one? Thanks A number minus one is at most nine, or two times the number is at least
twenty-four. Is it n-1</= 9 or 2n =/> 24 which equal n>/= 12 or n <...
Solve the compound inequality. 1.-n<2 or 2n-3>5. 2.2c-4>-6 and 3c+1<13 Are these the answer? 1.n<2 or n>4. 2.-1<c<4.
Express each statement using an inequality involving absolute value. Do not solve. 1. The majority of grades in Sean's English class are within 4 points of 85. 2. A thermometer is guaranteed to give a temperature no more than 1.2F from the actual temperature. If the thermome...
Would the second one be |28 - t|</= 1.2?
Could you explain more on why you put | g-85 | </= 4 ? Thanks
Express each statement using an inequality involving absolute value. Do not solve. 1. The majority of grades in Sean's English class are within 4 points of 85. 2. A thermometer is guaranteed to give a temperature no more than 1.2F from the actual temperature. If the thermome...
Social Studies
Why is school desegregation an important piece in the struggle for civil rights? Also, how did this change people's point-of-view toward race relations in American? Thanks
Social Studies
What started the school segregation?
Social Studies
Where could I get information on school segregation?
Social Studies
Where could I find good information on school desegregation?
Social Studies
Was Brown vs. Board of Education the only events that have an effect on school segregation? Or was there other events that made school segregation unconstitutional?
Since happy,sad,upset,etc. are part of feeling/emotions, what is hope, insight, optimism, courage under?
What did Philip Emeagwali believed in?
Social Study
Thank you.
Social Study
Where could I find good video/information on school segregation and Brown vs. Board of Education? Two days ago it was on the History Channel website but now it's gone. Thanks
How many ways can you arrange 8 different crates on a shelf if they are placed from left to right?
How many ways can you arrange 8 different crates on a shelf if they are placed from left to right?
1.Four coins are tossed.What is the probability of tossing all heads? 2.A card is chosen at random from a deck of 52 cards.It is then replaced and a second card is chosen.What is the probability of
getting a jack and then an eight?
1.Four coins are tossed.What is the probability of tossing all heads? 2.A card is chosen at random from a deck of 52 cards.It is then replaced and a second card is chosen.What is the probability of
getting a jack and then an eight?
Determine the mole fraction of acetone in a solution prepared by adding 0.456 kg of (CH3)2CO to 0.899 L of C4H8O. Molar Mass (g/mol) (CH3)2CO 58.05 C4H8O 72.12 Density (g/ml): (CH3)2CO 0.7899 C4H8O
0.8892 Name/Formula: acetone (CH3)2CO tetrahydrofuran C4H8O I need help!!
Thanks for the help, Reiny.
I confused the PROBABILITY OF GETTING 2 ACES. Thank you very much. Could you review this one too please. A jar hold 15 red pencils and 10 blues pencils. What is the probability of drawing two red
pencil form the jar? This would be an dependent and it would be 15/25( first time...
Two cards are chosen at random from a standard deck of cards with replacement. What is the probability of getting 2 aces? Is is an independent events? If it is then would it be 2/52(chances of an
ace) x 2/52?
May I ask where did you get 4/52 and 1/2?
A standard deck of playing cards contains 52 cards in four suits of 13 cards each. Two suits are red and two suits are black. Find each probability.Assume the first card is replaced before the second
card is drawn. 1.P(black,queen) 2.P(jack,queen) How would I solve these type ...
Just to make sure I understand, could you please review these two too? Thanks. Most of the trees HAVE Xs marked on them. - The subject and verb agree. 2. Cassie and her brothers ATTACKS the lumbermen.
-Attacks become attack?
So is the correct form on SHARES in this sentences = share?
So is the correct form of SHARES
In each sentence, if the italicized verb does not agree with the subject, circle the verb and write its correct form. I will capitalize the verb since I don't know how to italicize. 1. Big Ma and Cassie SHARES the same bedroom. How would I solve this?
Use the fundamental counting principle to find the total number of outcomes in each situation. 1.Rolling two number cubes and tossing one coin. = Is the outcome = 4? 2.Choosing from 3 sizes of
distilled,filtered, or spring water. = Is the outcome 9? Thanks
What are some human instincts? Does breathing air count as an instinct? If not, what does it count as, since you can't stop breathing?
How do I improve my hooks when writing an essay? Also, how should my last paragraph be and what should it end with? Thanks
Write a friendly letter on protecting those you love. How should I start and end the letter? Thanks
I'm reading a story and I have to come up with a reason why this would help the character become a man. -Seeing himself in a mirror for the first time.
What does it mean to protect someone?
Thank you!
1. What impact did Thomas Paine's writing of Common Sense have on gaining support for the Americans' cause? 2. What is the significance of the signing of the Declaration of Independence by members
of the Continental Congress?
Automotive engine parts and operation
1. C Correct 2. B Correct 3. A Correct 4. A Correct 5. B Correct 6. D Correct 7. B Correct 8. A Correct 9. B Correct 10. B Correct 11. A Correct 12. C Correct 13. C Correct 14. D Correct 15. B
Correct 16. B Correct 17. D Correct 18. A Correct 19. B Correct 20. A Correct 21. A...
auto repair technician
Date Graded 10/18/11 Examination 00400400 Grade 100 Document ID 234465408 1. A Correct 2. C Correct 3. A Correct 4. A Correct 5. D Correct 6. B Correct 7. B Correct 8. B Correct 9. A Correct 10. C
Correct 11. A Correct 12. D Correct 13. A Correct 14. B Correct 15. B Correct 16...
1.b 2.c 3.c 4.d 5.d 6.c 7.b 8.b 9.a 10.b 11.a 12.d 13.a 14.b 15.d 16c 17.a 18.c 19.a 20.d just got done takin the exam and these are the correct answers. Totally not right...
Longitude Degrees at the Equator
Date: 09/09/97 at 12:03:08
From: Motts Parrinello
Subject: Distance in miles between longitude degrees at the equator
Dear Dr. Math,
What is the distance in miles between degrees of longitude at the
equator ?
Is it the circumference of the earth divided by 360 degrees?
If so, how do I find the circumference or diameter so I can
multiply by 3.14 ?
Thank you,
Motts Parrinello
Date: 09/09/97 at 12:57:36
From: Doctor Rob
Subject: Re: Distance in miles between longitude degrees at the equator
Yes, it is the circumference of the Earth divided by 360.
The answer is exactly 60 nautical miles, by definition.
Since one nautical mile is 1.150777 statute miles, your answer is
69.04663 miles. Multiply by 360 to get the circumference of the earth,
then divide by Pi to get the diameter, and divide again by 2 to get
the radius of the Earth.
-Doctor Rob, The Math Forum
Check out our web site! http://mathforum.org/dr.math/
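A quick worked version of that arithmetic (just a sketch: it takes the quoted conversion factor and the one-nautical-mile-per-minute-of-arc definition at face value):

main :: IO ()
main = do
    let nauticalPerDegree  = 60         -- 60 minutes of arc in one degree
        statutePerNautical = 1.150777   -- statute miles per nautical mile
        milesPerDegree     = nauticalPerDegree * statutePerNautical
        circumference      = milesPerDegree * 360
        diameter           = circumference / pi
        radius             = diameter / 2
    mapM_ print [milesPerDegree, circumference, diameter, radius]
    -- prints roughly 69.05, 24856.8, 7912.2 and 3956.1 (statute miles)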
How to Calculate the Percentage Return on Investment If You Bought Stock on Margin
Buying stock on margin gets you more bang for your buck on your investment. You put up only a portion of the purchase price and your broker lends you the rest. Because you acquire more stock without
paying the full cost, your gains and losses are magnified. You can calculate your return on investment to analyze the effects of using margin. ROI measures your total profit or loss as a percentage
of your initial investment. Using margin increases your ROI if your stock rises, but causes a lower negative ROI if your stock drops.
Step 1
Multiply the number of shares you bought by the price you paid per share to figure the total cost. For example, assume you bought 100 shares of a $10 stock. Multiply 100 by $10 to get a $1,000 cost.
Step 2
Multiply the percentage of the cost you paid for with your own money by the amount of the cost to determine your cash investment. In this example, assume you paid 50 percent toward the cost. Multiply
50 percent, or 0.5, by $1,000 to get a $500 cash investment.
Step 3
Multiply the number of shares by the price for which you sold the stock to determine the total sale amount. In this example, assume you sold the stock for $12 per share. Multiply $12 by 100 to get a
$1,200 sale amount.
Step 4
Subtract the total cost from the total sale amount. Continuing the example, subtract $1,000 from $1,200 to get $200.
Step 5
Subtract the interest and commissions you paid your broker from your result and add any dividends you received to calculate your profit or loss. A negative result represents a loss. In this example,
assume you paid $25 in interest on the borrowed money, paid $20 in commissions and received $3 in dividends. Subtract $25 and $20 from $200 to get $155. Add $3 to $155 for $158 in profit.
Step 6
Divide your profit or loss by your cash investment and multiply your result by 100 to calculate your return on investment as a percentage. Concluding the example, divide $158 by $500 and multiply by
100 to get a 31.6 percent ROI. This means you generated profit equal to 31.6 percent of your $500 cash investment. Without margin, your ROI would’ve been only 15.8 percent, or $158 divided by the
full $1,000 cost.
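If you would rather let a computer run the six steps, here is a small sketch of the same calculation. The numbers baked in are the example's (100 shares at $10 bought with 50 percent cash, sold at $12, with $25 interest, $20 commissions and $3 dividends); the Trade type and field names are invented purely for the illustration.

-- A margin trade, described by the inputs used in Steps 1-6 above.
data Trade = Trade
    { shares       :: Double
    , buyPrice     :: Double
    , cashFraction :: Double   -- portion of the cost paid with your own money
    , sellPrice    :: Double
    , interest     :: Double
    , commissions  :: Double
    , dividends    :: Double
    }

-- Return on investment, as a percentage of the cash actually invested.
roiPercent :: Trade -> Double
roiPercent t = profit / cashInvested * 100
  where
    cost         = shares t * buyPrice t        -- Step 1
    cashInvested = cashFraction t * cost        -- Step 2
    saleAmount   = shares t * sellPrice t       -- Step 3
    profit       = saleAmount - cost            -- Step 4
                   - interest t - commissions t
                   + dividends t                -- Step 5

main :: IO ()
main = print (roiPercent (Trade 100 10 0.5 12 25 20 3))   -- roughly 31.6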
• Rules for buying stock on margin vary among brokerage firms. Always read your margin agreement and the fine print before buying on margin.
• If stock purchased on margin drops, you might lose more money than you initially invested.
• If your stock drops to a certain price, your broker might require you to deposit more cash into your account, or it might automatically sell your stock without notice.
Growth of groups versus Schreier graphs
This question is motivated by this one What is the relation between the number syntactic congruence classes, and the number of Nerode relation classes? where it essentially asks to compare the growth
of the syntactic monoid with the growth of the minimal automaton. A special case is the following. Let G be an infinite finitely generated group and H a subgroup such that G acts faithfully on G/H.
How different can the growth of G and the Schreier graph of G/H be?
I know the Grigorchuk group of intermediate growth has faithful Schreier graphs of polynomial growth.
Are there groups of exponential growth with faithful Schreier graphs of polynomial growth?
Non-elementary hyperbolic groups have non-amenable Schreier graphs with respect to infinite index quasi-convex subgroups, so this case should be avoided.
gr.group-theory geometric-group-theory
2 Every regular graph of even degree is a Schreier graph of a free group. Basically you can make any growth you like. Faithfulness of a Schreier graph is not a problem usually. If you want a
particular example in the spirit of the Grigorchuk group, you can take the Basilica group: it has exponential growth, acts faithfully on its Schreier graphs corresponding to stabilizers of
infinite sequences, and these Schreier graphs have polynomial growth. – Ievgen Bondarenko Nov 19 '11 at 12:50
I believed a free group was no problem but most of the examples I tried to draw in my head were not faithful. I was sure somebody knew an example off the top of their head. Basilica is good because
the polynomial growth is from contracting like in Grigorchuk group. – Benjamin Steinberg Nov 19 '11 at 18:27
1 Answer
This holds true, for example, for free groups. Actually, take $G$ to be a free product of three copies of $Z/2Z$, which has an index two subgroup which is rank 2 free. The Cayley graph for this group (which has undirected edges) is just a trivalent tree, with edges colored 3 colors by the generators, so that every vertex has exactly 3 colors (this is known as a Tait coloring). Any cubic graph with a Tait coloring corresponds to a Schreier graph of a (torsion-free) subgroup $H$ of $G$, which is the quotient of the Cayley graph of $G$ by the subgroup $H$ (one may choose a root vertex to correspond to the trivial coset). Closed paths starting from the root vertex correspond to elements of the subgroup $H$.

Choose a cubic graph with a Tait coloring which has linear growth and corresponds to a subgroup $H$ satisfying your condition ($G$ acts faithfully on $G/H$). This is equivalent to $\cap_{g\in G} gHg^{-1}=\{1\}$. For example, take a bi-infinite ladder, labeling the two stringers with matching sequences of colors, which then determine the colors of the rungs. By making these stringer sequences aperiodic, you can guarantee that $\cap_{g\in G} gHg^{-1}=\{1\}$. Changing the root vertex corresponds to changing the conjugacy class. In fact, we may choose stringer sequences which contain any word in $G$. Then putting a root at the endpoint of such a word, we guarantee that it is not in the corresponding conjugate subgroup of $H$.
How do I prove that \(n^3-n\) is always divisible by 6? I sort of see the solution since \(n(n-1)(n+1)\) will always have a part that is a 2 and a part that is 3. But I can't formulate it.
3(2n) is always divisible by 6 ... not that it helps
the proof might be induction tho
Yes, induction is the best way.
Hmmm, give me a sec and I will try.
First \(f(n) = \frac{n^3-n}{6}\) \[f(1) = \frac{0}{6} = 0\] so valid, then \(f(k)\) and \(f(k+1)\) \[f(k+1) = \frac{(k+1)^3-(k+1)}{6}\] which boils down to \[f(k+1) = \frac{k(k+1)(k+2)}{6}\]
which basically is \[f(k+1) = \frac{l^3-l}{6} \] where \(k=l+1\), is that good enough?
I mean \(k=l-1\)
I'll prove it for \(n\ge 0\). Let \(P(n)\) be the statement that \(n^3-n\) is divisible by \(6\). \(P(0)\) is obviously true as you said. Now assume that \(P(k)\) is true, that is \(k^3-k\) is
divisible by \(6\) then for P(k+1): \[(k+1)^3-(k+1)=(k+1)(k^2+2k)=k^3+3k^2+2k=k^3-k+3k(k+1).\] By the induction hypothesis \(k^3-k\) is divisible by \(6\) and it's obvious that \(3k(k+1)\) is
also divisible by \(6\) since either \(k\) or \(k+1\) is divisible by \(2\). Therefore \(P(k)\) implies \(P(k+1)\) and thus \(P(n)\) is true \(\forall n\ge 0\).
You can easily show that it's also true for \(n<0\) since [call \(f(n)=n^3-n\) ] \(f(-n)=(-n)^3+n=-(n^3-n).\)
I'm not sure about the \((k+1)^3-(k+1) = (k+1)(k^2+2k)\) of your equations, but the rest is right and with less work :) but I hope you approve of my method as well.
\[(k+1)^3-(k+1)=(k+1)((k+1)^2-1)=(k+1)(k^2+2k).\] As for your method, you didn't state clearly what is the induction hypothesis and you didn't show that f(k) implies f(k+1). That's at least what
I think.
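As a brute-force sanity check of the claim (not a proof - it only tests a finite range - but a quick way to convince yourself before writing out the induction):

-- n^3 - n should be divisible by 6 for every integer n; spot-check a range.
divisibleBySix :: Integer -> Bool
divisibleBySix n = (n ^ 3 - n) `mod` 6 == 0

main :: IO ()
main = print (all divisibleBySix [-1000 .. 1000])   -- True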
Sep 20, 2013
This article assumes familiarity with monads and monad transformers. If you’ve never had an occasion to use lift yet, you may want to come back to it later. It is also available on the School of
Haskell: https://www.fpcomplete.com/user/jwiegley/monad-control.
The Problem
What is the problem that monad-control aims to solve? To answer that, let’s back up a bit. We know that a monad represents some kind of “computational context”. The question is, can we separate this
context from the monad, and reconstitute it later? If we know the monadic types involved, then for some monads we can. Consider the State monad: it’s essentially a function from an existing state, to
a pair of some new state and a value. It’s fairly easy then to extract its state and later use it to “resume” that monad:
import Control.Applicative
import Control.Monad.Trans.State

main = do
    let f = do { modify (+1); show <$> get } :: StateT Int IO String
    (x,y) <- runStateT f 0
    print $ "x = " ++ show x    -- x = "1"
    (x',y') <- runStateT f y
    print $ "x = " ++ show x'   -- x = "2"
In this way, we interleave between StateT Int IO and IO, by completing the StateT invocation, obtaining its state as a value, and starting a new StateT block from the prior state. We’ve effectively
resumed the earlier StateT block.
Nesting calls to the base monad
But what if we didn’t, or couldn’t, exit the StateT block to run our IO computation? In that case we’d need to use liftIO to enter IO and make a nested call to runStateT inside that IO block.
Further, we’d want to restore any changes made to the inner StateT within the outer StateT, after returning from the IO action:
import Control.Applicative
import Control.Monad.Trans.State
import Control.Monad.IO.Class

main = do
    let f = do { modify (+1); show <$> get } :: StateT Int IO String
    flip runStateT 0 $ do
        x <- f
        y <- get
        y' <- liftIO $ do
            print $ "x = " ++ show x     -- x = "1"
            (x',y') <- runStateT f y
            print $ "x = " ++ show x'    -- x = "2"
            return y'
        put y'
A generic solution
This works fine for StateT, but how can we write it so that it works for any monad tranformer over IO? We’d need a function that might look like this:
foo :: MonadIO m => m String -> m String
foo f = do
    x <- f
    y <- getTheState
    y' <- liftIO $ do
        print $ "x = " ++ show x
        (x',y') <- runTheMonad f y
        print $ "x = " ++ show x'
        return y'
    putTheState y'
But this is impossible, since we only know that m is a Monad. Even with a MonadState constraint, we would not know about a function like runTheMonad. This indicates we need a type class with at least
three capabilities: getting the current monad tranformer’s state, executing a new transformer within the base monad, and restoring the enclosing transformer’s state upon returning from the base
monad. This is exactly what MonadBaseControl provides, from monad-control:
class MonadBase b m => MonadBaseControl b m | m -> b where
    data StM m :: * -> *
    liftBaseWith :: (RunInBase m b -> b a) -> m a
    restoreM :: StM m a -> m a
Taking this definition apart piece by piece:
1. The MonadBase constraint exists so that MonadBaseControl can be used over multiple base monads: IO, ST, STM, etc.
2. liftBaseWith combines three things from our last example into one: it gets the current state from the monad transformer, wraps it in an StM type, lifts the given action into the base monad, and
provides that action with a function which can be used to resume the enclosing monad within the base monad. When such a function exits, it returns a new StM value.
3. restoreM takes the encapsulated tranformer state as an StM value, and applies it to the parent monad transformer so that any changes which may have occurred within the “inner” transformer are
propagated out. (This also has the effect that later, repeated calls to restoreM can “reset” the transformer state back to what it was previously.)
Using monad-control and liftBaseWith
With that said, here’s the same example from above, but now generic for any transformer supporting MonadBaseControl IO:
{-# LANGUAGE FlexibleContexts #-}

import Control.Applicative
import Control.Monad.Trans.State
import Control.Monad.Trans.Control

foo :: MonadBaseControl IO m => m String -> m String
foo f = do
    x <- f
    y' <- liftBaseWith $ \runInIO -> do
        print $ "x = " ++ show x     -- x = "1"
        x' <- runInIO f
        -- print $ "x = " ++ show x'
        return x'
    restoreM y'

main = do
    let f = do { modify (+1); show <$> get } :: StateT Int IO String
    (x',y') <- flip runStateT 0 $ foo f
    print $ "x = " ++ show x'        -- x = "2"
One notable difference in this example is that the second print statement in foo becomes impossible, since the “monadic value” returned from the inner call to f must be restored and executed within
the outer monad. That is, runInIO f is executed in IO, but its result is an StM m String rather than IO String, since the computation carries monadic context from the inner transformer. Converting
this to a plain IO computation would require calling a function like runStateT, which we cannot do without knowing which transformer is being used.
As a convenience, since calling restoreM after exiting liftBaseWith is so common, you can use control instead of restoreM =<< liftBaseWith:
y' <- restoreM =<< liftBaseWith (\runInIO -> runInIO f)
-- becomes...
y' <- control $ \runInIO -> runInIO f
Another common pattern is when you don’t need to restore the inner transformer’s state to the outer transformer, you just want to pass it down as an argument to some function in the base monad:
foo :: MonadBaseControl IO m => m String -> m String
foo f = do
    x <- f
    _ <- liftBaseDiscard forkIO $ f
    return x
In this example, the first call to f affects the state of m, while the inner call to f, though it inherits the state of m in the new thread, does not restore its effects to the parent monad transformer when it returns.
Now that we have this machinery, we can use it to make any function in IO directly usable from any supporting transformer. Take catch for example:
catch :: Exception e => IO a -> (e -> IO a) -> IO a
What we’d like is a function that works for any MonadBaseControl IO m, rather than just IO. With the control function this is easy:
-- Note: the catch on the right-hand side of the definition refers to the
-- original Control.Exception.catch in IO (import it qualified in practice).
catch :: (MonadBaseControl IO m, Exception e) => m a -> (e -> m a) -> m a
catch f h = control $ \runInIO -> catch (runInIO f) (runInIO . h)
You can find many functions which are generalized like this in the packages lifted-base and lifted-async.
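As a usage sketch (this assumes the lifted-base package, whose Control.Exception.Lifted module exports a catch generalized exactly like this), the lifted version can be called directly inside a transformer stack with no manual lifting. One caveat worth noting: monadic effects performed inside the protected action are discarded when an exception escapes, because the handler runs from the state that control captured on entry.

{-# LANGUAGE ScopedTypeVariables #-}

import Control.Exception (SomeException)
import Control.Exception.Lifted (catch)
import Control.Monad.IO.Class (liftIO)
import Control.Monad.Trans.State

safeStep :: StateT Int IO String
safeStep =
    (do modify (+1)                               -- this update is lost when the
        _ <- liftIO (readFile "/does/not/exist")  -- read throws, because the handler
        return "read succeeded")                  -- restarts from the captured state
    `catch` \(_ :: SomeException) -> do
        n <- get
        return ("recovered, count = " ++ show n)

main :: IO ()
main = print =<< runStateT safeStep 0   -- ("recovered, count = 0",0)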
Jul 16, 2013
Posted a very brief overview of the conduit library on the School of Haskell:
I hope it makes it clear just how easy and simple conduits are to use. If not, comments welcome!
Jun 29, 2013
I’ve decided after many months of active development to release version 1.0.1 of gitlib and its related libraries to Hackage. There is still more code review to done, and much documentation to be
written, but this gets the code out there, which has been working very nicely at FP Complete for about six months now.
The more exciting tool for users may be the git-monitor utility, which passively and efficiently makes one-minute snapshots of a single Git working tree while you work. I use it continually for the
repositories I work on during the day. Just run git-monitor -v in a terminal window, and start making changes. After about a minute you should see commit notifications appearing in the terminal.
Jun 19, 2013
Until the Comonad Reader comes back online, I have a temporary mirror setup at http://comonad.newartisans.com. It’s a bit old (Sep 2012), but has some classics like “Free Monads for Less”. It is
missing the “Algebra of Applicatives”, though, since I hadn’t run the mirror in a while.
Jun 18, 2013
Chatting with merijn on #haskell, I realized I have a file server running Ubuntu in a VM that’s idle most of the time, so I decided to set up a jenkins user there and make use of it as a build slave
in the evenings. This means that at http://ghc.newartisans.com, you’ll now find nightly builds of GHC HEAD for Ubuntu as well (64-bit). It also includes fulltest and nofib results for each build.
What's the application of a hyperchaotic system?
Can anyone tell me where hyperchaotic systems are used in the real world, and what their applications are? It'd be great if you could provide literature to back up any application as well.
and some more pointers:

• C. Stan, C.P. Cristescu, and D. Alexandroaei, Chaos and hyperchaos in a symmetrical discharge plasma: experiment and modelling, University Politehnica of Bucharest Scientific Bulletin, Series A: Applied Mathematics and Physics, vol. 70 (4), 25-30 (2008).
• R. Stoop et al., A p-Ge semiconductor experiment showing chaos and hyperchaos, Physica D, vol. 35 (3), 425-435 (1989).
• T. Matsumoto, L. Chua, and K. Kobayashi, Hyperchaos - Laboratory experiment and numerical confirmation, IEEE Transactions on Circuits and Systems, vol. 33 (11), 1143-1147 (1986).
It is well known that if two or more Lyapunov exponents of a dynamical system are positive throughout a range of parameter space, then the resulting attractors are hyperchaotic. The importance of these attractors is that they are less regular and are seemingly "almost full" in space, which explains their importance in fluid mixing [Scheizer & Hasler, 1996; Abel et al., 1997; Ottino, 1989; Ottino et al., 1992]. See:
Abel. A, Bauer. A, Kerber. K, Schwarz. W, [1997] "Chaotic codes for CDMA application," Proc. ECCTD '97, 1, 306.
Kapitaniak. T, Chua. L. O, Zhong. Guo-Qun, [1994] "Experimental hyperchaos in coupled Chua's circuits," Circuits Syst. I: Fund. Th. Appl. 41 (7), 499--503.
Ottino. J. M, [1989] "The kinematics of mixing: stretching, chaos, and transport," Cambridge: Cambridge University Press.
Ottino. J. M, Muzzion. F. J, Tjahjadi. M, Franjione. J. G, Jana. S. C, Kusch. H. A, [1992] "Chaos, symmetry, and self-similarity: exploring order and disorder in mixing processes," Science. 257, 754--760.
Scheizer. J, Hasler. M, [1996] "Multiple access communication using chaotic signals," Proc. IEEE ISCAS '96. Atlanta, USA, 3, 108.
Thamilmaran. K, Lakshmanan. M, Venkatesan. A, [2004] "Hyperchaos in a Modified Canonical Chua's Circuit," Int. J. Bifurcation and Chaos. 14 (1), 221--244.
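As a rough illustration of the kind of system these papers study, here is a minimal sketch (not taken from the answer or the references above) that integrates the four-variable Rossler hyperchaos flow, a standard textbook example with two positive Lyapunov exponents. The equations, parameter values, and initial condition are the commonly quoted ones and should be treated as assumptions; the sketch is in Python.

def rossler_hyperchaos(state, a=0.25, b=3.0, c=0.5, d=0.05):
    # dx/dt = -y - z, dy/dt = x + a*y + w, dz/dt = b + x*z, dw/dt = -c*z + d*w
    x, y, z, w = state
    return (-y - z, x + a * y + w, b + x * z, -c * z + d * w)

def rk4_step(f, state, dt):
    # one classical Runge-Kutta step for a tuple-valued vector field
    k1 = f(state)
    k2 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6.0 * (a1 + 2 * a2 + 2 * a3 + a4)
                 for s, a1, a2, a3, a4 in zip(state, k1, k2, k3, k4))

state = (-10.0, -6.0, 0.0, 10.0)
for _ in range(50000):
    state = rk4_step(rossler_hyperchaos, state, 0.01)
print(state)  # a point near the hyperchaotic attractor after t = 500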
|
{"url":"http://mathoverflow.net/questions/74289/whats-the-application-of-hyperchaotic-system?answertab=oldest","timestamp":"2014-04-21T16:02:33Z","content_type":null,"content_length":"54731","record_id":"<urn:uuid:fa9cc5eb-9ced-48ec-9f1f-87cef56abe2f>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00222-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Manitou Beach, WA Math Tutor
Find a Manitou Beach, WA Math Tutor
...I have installed networks, configured computers and set up security and many other types of software. Though my primary tutoring area is math, I worked as an electrical engineer at Tektronix in
Beaverton, Oregon for about a year and worked extensively with circuits as a teenager. I have a good g...
43 Subjects: including trigonometry, linear algebra, computer science, discrete math
...I received exemplary marks on all papers, including 4.0 on both junior and senior theses (samples available on request). My senior essay on Thucydides was published and used in a graduate-level
classical Greece research symposium. I consulted as an essay grader for the University of Washington's D...
16 Subjects: including algebra 1, algebra 2, biology, calculus
...If you are outside of my travel area, or if you prefer to learn online, I now offer online tutoring. Tutoring sessions take place through an online platform that allows me to transfer files,
assess homework and communicate directly with the student. I will help you with the technology aspect without charging for the time and I offer free e-mail support.
36 Subjects: including SAT math, ACT Math, geometry, prealgebra
...I also have taught 4-6th graders so I am familiar with the building approach for English / Grammar, Math, Sciences and Social Studies. Study skills are essential for any student not only for
their present academic career but also for college and work. I have been working with middle school and ...
46 Subjects: including ACT Math, trigonometry, SAT math, algebra 1
...I instill rigor and emphasize practice so moving ahead to new concepts is a breeze. I keep myself updated with teaching techniques by taking courses in math and computer science on Coursera
and Edx platforms. Math is a language, with its own phrases and terminology. When these are in one's tool belt, one can draw out the right tool for the right situation confidently.
16 Subjects: including algebra 1, algebra 2, Microsoft Excel, geometry
Related Manitou Beach, WA Tutors
Manitou Beach, WA Accounting Tutors
Manitou Beach, WA ACT Tutors
Manitou Beach, WA Algebra Tutors
Manitou Beach, WA Algebra 2 Tutors
Manitou Beach, WA Calculus Tutors
Manitou Beach, WA Geometry Tutors
Manitou Beach, WA Math Tutors
Manitou Beach, WA Prealgebra Tutors
Manitou Beach, WA Precalculus Tutors
Manitou Beach, WA SAT Tutors
Manitou Beach, WA SAT Math Tutors
Manitou Beach, WA Science Tutors
Manitou Beach, WA Statistics Tutors
Manitou Beach, WA Trigonometry Tutors
|
{"url":"http://www.purplemath.com/manitou_beach_wa_math_tutors.php","timestamp":"2014-04-19T17:19:14Z","content_type":null,"content_length":"24133","record_id":"<urn:uuid:9e34aa34-cbd8-49e1-81a3-8009d0e56186>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00385-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions
Topic: latex problem in textbox
Replies: 4 Last Post: Nov 3, 2009 3:49 PM
Messages: [ Previous | Next ]
Re: latex problem in textbox
Posted: Nov 3, 2009 6:02 AM
Any help? To summarize: I want to have a subscript and the plus-minus sign in a textbox. Where can I find the code for this?
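The later replies are not preserved in this capture, but for reference, standard LaTeX math syntax writes a subscript with _{...} and the plus-minus sign with \pm. A generic example (not taken from the thread) of the kind of expression the poster seems to want:

% subscript and plus-minus sign in LaTeX math mode
$x_{1} \pm y_{2}$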
Date Subject Author
11/2/09 ruud verschaeren
11/3/09 Re: latex problem in textbox ruud verschaeren
11/3/09 Re: latex problem in textbox Cristea Bogdan
11/3/09 Re: latex problem in textbox Jane
11/3/09 Re: latex problem in textbox ruud verschaeren
|
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2000947&messageID=6885438","timestamp":"2014-04-19T05:32:42Z","content_type":null,"content_length":"20622","record_id":"<urn:uuid:62af03af-75a4-4c88-b0ca-6b16140910ef>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00167-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Binomial in reverse?
March 18th 2013, 06:59 PM #1
Mar 2013
Binomial in reverse?
I essentially want to calculate the minimum population size required (number of trials) in order to have say, a 99% chance of achieving 1 success.
If P(y) = (nCy) p^y q^(n-y), P(y) in this case is equal to 0.99, y is equal to 1, and p and q are known. So I guess I need to solve for n, when n is not equal to y. Is this easily solvable?
Re: Binomial in reverse?
Remember nCy= n when y=1
You will get a non-algebraic equation for n which cannot be solved using normal methods. You may have to use an iterative method such as Newton's method to find n.
Re: Binomial in reverse?
Thanks Shakarri - I will look into solving this using some form of iteration. Of course, if anyone has any experience solving equations iteratively I would appreciate some tips on where to start,
as I've never done this sort of thing before.
Re: Binomial in reverse?
Shakarri & WChips Note that nC1 = nC(n-1) = n
March 19th 2013, 06:05 AM #2
Super Member
Oct 2012
March 19th 2013, 06:11 PM #3
Mar 2013
March 19th 2013, 09:11 PM #4
Super Member
Jul 2012
|
{"url":"http://mathhelpforum.com/advanced-statistics/215039-binomial-reverse.html","timestamp":"2014-04-19T22:19:30Z","content_type":null,"content_length":"35957","record_id":"<urn:uuid:a8c3c147-b953-412c-ac5d-ec52086e3e6a>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00438-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Bootstrap Tutorial
Distribution of Sample Mean
We begin by reviewing the most elementary problem in statistics, the distribution of the sample mean.
Since the observations are random variables, the sample mean is a random variable and we can, in principle, compute its distribution. Using the properties of expected value, it is a standard exercise
to show that
But what do we do if the distribution of the statistic cannot be worked out analytically?
Classical statistics was driven by analytic tractability, and the methods used in classical statistics only apply to certain well-behaved distributions and certain, mostly linear, computations. With
modern computers, analytic complexity is no barrier to computing estimates of the sampling distribution of almost any statistic, as we demonstrate next using Monte Carlo simulation.
Here we draw a list of 25 uniformly distributed random numbers, compute the mean, and repeat this 100 times. This will give us 100 different estimates of the mean of the underlying distribution.
Let us look at the distribution of these 100 calculated means; this frequency distribution can be viewed as an estimate of the true sampling distribution.
Since the underlying random variable is uniformly distributed on [0,1], the estimated mean should be close to 0.5. The variance of the uniform distribution is 1/12.
So the variance of the sample mean of 25 observations should be (1/12)/25 = 1/300, or about 0.0033.
The estimates we have computed should not be too far from these numbers.
We can do the same thing for 5000 repetitions, in which case the estimated results should be much closer to the theoretical predictions.
The Monte Carlo method can be used to compute an estimate of the sampling distribution for virtually any statistic, as long as we know the distribution from which the samples are drawn.
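The article's code is in Mathematica; the following minimal Python sketch (the names and the 5000-replicate choice are just illustrative) runs the same experiment and compares the result with the theoretical mean 0.5 and variance 1/300.

import random
import statistics

def sample_means(n_obs=25, n_reps=5000):
    # each replicate is the mean of n_obs draws from Uniform(0, 1)
    return [statistics.mean(random.random() for _ in range(n_obs))
            for _ in range(n_reps)]

means = sample_means()
print(statistics.mean(means))      # should be close to 0.5
print(statistics.variance(means))  # should be close to (1/12)/25 = 1/300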
|
{"url":"http://www.mathematica-journal.com/issue/v9i4/contents/BootstrapTutorial/BootstrapTutorial_3.html","timestamp":"2014-04-21T03:26:35Z","content_type":null,"content_length":"10010","record_id":"<urn:uuid:02b67458-642d-45d7-805e-4ebf81e90820>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00236-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Hutchison's Basic Math Skills with Geometry
ISBN: 9780077354749 | 0077354745
Edition: 8th
Format: Paperback
Publisher: McGraw-Hill Science/Engineering/Math
Pub. Date: 10/19/2009
Why Rent from Knetbooks?
Because Knetbooks knows college students. Our rental program is designed to save you time and money. Whether you need a textbook for a semester, quarter or even a summer session, we have an option
for you. Simply select a rental period, enter your information and your book will be on its way!
Top 5 reasons to order all your textbooks from Knetbooks:
• We have the lowest prices on thousands of popular textbooks
• Free shipping both ways on ALL orders
• Most orders ship within 48 hours
• Need your book longer than expected? Extending your rental is simple
• Our customer support team is always here to help
|
{"url":"http://www.knetbooks.com/hutchisons-basic-math-skills-geometry-8th/bk/9780077354749","timestamp":"2014-04-16T19:57:56Z","content_type":null,"content_length":"29860","record_id":"<urn:uuid:db61f3de-54d6-49e8-ad3c-e48c23bc0a14>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00052-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Category = Groupoid x Poset?
Is it possible to split a given category $C$ up into its groupoid of isomorphisms and a category that resembles a poset?
"Splitting up" should be that $C$ can be expressed as some kind of extension of a groupoid $G$ by a poset $P$ (or "directed category" $P$ the only epimorphisms in $P$ are the identities, all
isomorphisms in $P$ are identities).
ct.category-theory groupoids posets
I don't see why you should, without some kind of "acyclicity" condition on your category. Consider the monoid of natural numbers as a one-object category, for instance -- what should this
splitting be? – Dan Petersen Mar 24 '10 at 14:58
The requirement that the only epimorphisms in $P$ are identities is not satisfiable for general categories; consider the following counterexample: a category with two objects 1 and 2 and two parallel morphisms f, g from 1 to 2. The morphisms f, g are both epi, but not identities. If you replace 'epimorphisms' with isomorphisms, then the construction I outlined below should work. – Mikola Mar 24 '10 at 15:06
I'm no category theorist but -- consider the special case of the monoid $M = R - {0}$ under multiplication, where R is an integral domain. Then we have $U = R^{\times}$, the group of units of the
monoid, and $M/U$ has a natural partial ordering induced by the divisibility relation. I wonder whether there is a generalization of this to (some more, not all) categories? – Pete L. Clark Mar 24
'10 at 21:10
Perhaps this should be posted as a separate question, but will this splitting up work if we allow monoids? That is, can a category be split up into posets, groupoids and monoids? – Colin Tan Apr
20 '12 at 16:04
I am also looking forward to answers to your question. Meanwhile here is something pointing roughly into that direction:
One can study a category $C$ through its set-valued functor category $Set^C$. By the Yoneda lemma, $C$ sits as a full subcategory inside this functor category, and from it one can
reconstruct something close to $C$ (I think the idempotent completion of $C$). But non-equivalent categories can give rise to equivalent functor categories, e.g. category $C$ in which not
every idempotent splits and its idempotent completion, i.e. the category made from $C$ by adjoining objects such that each idempotent becomes a composition of projection to and inclusion of
a subobject and thus splits. One calls such categories Morita-equivalent.
Now $Set^C$ is a Grothendieck topos (:= category of sheaves on a site, in this case with trivial topology) and there is the following theorem about those:
A locale is a distributive lattice closed under finite meets and arbitrary joins, just like the lattice of open sets of a topological space, so it is a particular poset. The theorem of Joyal and
Tierney, from their monograph "An extension of the Galois theory of Grothendieck", states that every Grothendieck topos is equivalent to the category of $G$-equivariant sheaves on a groupoid
object in locales - see e.g. here.
Well at least it is a statement which separates a category into a groupoid and a poset part. So if you look from very far and take it with a boulder of salt you could read this as saying
that every category is "Morita-equivalent" (not really!) to a groupoid internal to posets (it makes some intuitive sense to see this as an extension).
I agree that considering idempotents is important. To focus the issue: take the category C with only one object and only one non-identity morphism, which is idempotent. The original
question seems to founder on this example; how do you "split it up"? Peter reaches a positive answer by blurring this issue out (which probably has to be done if you do want a positive
answer). – Tom Leinster Mar 25 '10 at 4:00
One type of category that factors nicely is called an EI category. The definition is that every Endomorphism is an Isomorphism. After taking the quotient by the groupoid, every
endomorphism is the identity. But it is still not a poset. It could be something like the category of two parallel arrows, where Mor(A,B) has two elements, Mor(B,A) none, and the
endomorphisms of each object are only the identity. This has a further poset quotient $A\to B$, but it isn't there yet.
So groupoid and poset are only two kinds of behavior in categories. Monoids are a third that have been mentioned before. In particular, idempotents, as in the monoid {0,1} under
multiplication, do not embed in any group. And the two parallel arrows category is yet a fourth.
this reminds me of a characterization of the graphs of dynamical systems – Joey Hirsh Jul 31 '12 at 5:23
Given any locally small category, $C$, the collection of all isomorphisms forms a subgroupoid, $G \subseteq C$, where $Ob(G) = Ob(C)$ and $Hom_G(A,B) = \{ f \in Hom_C(A,B) : \exists g, h \in Hom_C(B,A),\ g \circ f = id_A,\ f \circ h = id_B \}$.
Because $G$ is a groupoid, it determines an equivalence relation, $R$ on the objects and morphisms of $C$ such for $A, B \in Ob(C)$:
$A \equiv_R B \Longleftrightarrow Hom_G(A,B) \neq \emptyset$
And for $f, g \in Hom_C(A,B)$:
$f \equiv_{R_{A,B}} g \Longleftrightarrow \exists h_B \in Hom_G(B,B), h_A \in Hom_G(A,A) : h_B \circ f = g \circ h_A$
If I understand what you are asking, then the quotient $C/R$ should be the 'poset' you want.
*Subject to the substitution epi -> iso as clarified in the comments, as otherwise this is not possible. – Mikola Mar 24 '10 at 16:27
Unless I totally misunderstand the question, this doesn't even work for categories with one object, i.e. monoids (which are not groups), does it?
|
{"url":"http://mathoverflow.net/questions/19190/category-groupoid-x-poset/19198","timestamp":"2014-04-20T11:01:11Z","content_type":null,"content_length":"73375","record_id":"<urn:uuid:3f36f1be-90cc-4e4e-a7e0-e5f5f546c8de>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00062-ip-10-147-4-33.ec2.internal.warc.gz"}
|
January 8th 2010, 08:09 AM
Hi, I was just wondering how you draw the least squares regression line onto your scatter diagram; it seems to come up a lot but I'm not sure what to do. Thanks
January 8th 2010, 08:18 AM
if you have an equation for your regression line just plot it as you would normally graph a line
If you are just eyeballing it you want to try to put the line through the "center" of your points, so that half the data is above the line and half below, and even more so you want to try to make
the squared distances of the points to the line balance out above and below the line
January 8th 2010, 09:10 AM
Ohh thank you! I never thought about it like that: just like plotting a normal line. (Giggle) Thanks again (Rofl)
January 8th 2010, 11:18 AM
To plot scatter diagram, you simply put the dots on the x-y plane.
To plot the regression line, you must first set up the normal regression equations and solve for all constants. Then you will have a regression equation with which to plot the regression line.
See an example here:
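The linked example is not preserved in this capture. As a generic illustration (not the poster's example, and in Python rather than whatever the thread used), the least-squares line can be computed from the data and then plotted like any other line:

import numpy as np
import matplotlib.pyplot as plt

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

slope, intercept = np.polyfit(x, y, 1)  # least-squares fit of a degree-1 polynomial

plt.scatter(x, y)                       # the scatter diagram
plt.plot(x, slope * x + intercept)      # the regression line, plotted like any line
plt.show()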
|
{"url":"http://mathhelpforum.com/statistics/122905-regression-print.html","timestamp":"2014-04-23T11:27:59Z","content_type":null,"content_length":"6019","record_id":"<urn:uuid:b4ac0829-be2b-41c1-94c4-93b1c1a5eb84>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00207-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: st: Estimating the hazard function
Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org, is already up and running.
Re: st: Estimating the hazard function
From "Yusvita Triwiadhian S." <07.5544@stis.ac.id>
To statalist@hsphsun2.harvard.edu
Subject Re: st: Estimating the hazard function
Date Sun, 11 Sep 2011 15:54:29 +0700
Mr. Brendan, I have a problem like yours. I also want to know the h(t)
estimated at each time point, but I still don't know how to get it.
On Sun, Sep 11, 2011 at 7:04 AM, Brendan Corcoran
<brendanjohncorcoran@gmail.com> wrote:
> Thanks Steven.
> How do I actually incorporate the hazard contributions into -kdensity-?
> The analysis time variable is _t, and I've generated the hazard
> contributions through running -sts gen H=h-
> Just unsure how to apply -kdensity- to _t while also using the hazard
> contributions -H- to weight it somehow.
> On Sat, Sep 10, 2011 at 10:26 PM, Steven Samuels <sjsamuels@gmail.com> wrote:
>> You can use -kdensity- to estimate the hazard function and, as in -stcurve-, set the smoothing options yourself. The difference is that -kdensity- has an option to generate the smoothed estimates.
>> Steve
>> On Sep 1
>> 0, 2011, at 3:10 PM, Brendan Corcoran wrote:
>> I want to get a hold of the data produced by -sts graph, hazard- i.e.
>> the h(t) estimated at each time point. This is so I can produce
>> hazard graphs in other applications.
>> I understand that -sts gen dh=h- produces the estimated hazard
>> component deltaH_j = H(t_j) - H(t_(j-1)), and that to calculate h(t)
>> -sts graph, hazard- calculates a weighted kernel-density estimate
>> using deltaH_j.
>> But how could I get the actual h(t) values calculated here, or as a
>> second option how could I run the weighted kernel-density estimate
>> myself?
>> *
>> * For searches and help try:
>> * http://www.stata.com/help.cgi?search
>> * http://www.stata.com/support/statalist/faq
>> * http://www.ats.ucla.edu/stat/stata/
> *
> * For searches and help try:
> * http://www.stata.com/help.cgi?search
> * http://www.stata.com/support/statalist/faq
> * http://www.ats.ucla.edu/stat/stata/
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
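As a rough sketch of the idea being discussed (not from this thread, and in Python rather than Stata): the smoothed hazard is a kernel-weighted sum of the hazard contributions deltaH_j placed at the event times t_j. Stata's -sts graph, hazard- makes its own choices of kernel, bandwidth, and boundary handling, so treat this only as the general recipe, not as what Stata computes.

import numpy as np

def smoothed_hazard(event_times, dH, grid, bandwidth):
    # h(t) ~ (1/b) * sum_j K((t - t_j)/b) * deltaH_j, with an Epanechnikov kernel
    t = np.asarray(grid, dtype=float)[:, None]
    tj = np.asarray(event_times, dtype=float)[None, :]
    u = (t - tj) / bandwidth
    K = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)
    return (K * np.asarray(dH, dtype=float)[None, :]).sum(axis=1) / bandwidth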
|
{"url":"http://www.stata.com/statalist/archive/2011-09/msg00396.html","timestamp":"2014-04-21T15:04:21Z","content_type":null,"content_length":"10388","record_id":"<urn:uuid:7f199673-e065-4690-b0b4-5746bc0db528>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00467-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Zeno's arrow, Newtonian mechanics and velocity
Start with Zeno's paradox of the arrow. Zeno notes that over every instant of time t[0], an arrow occupies one and the same spatial location. But an object that occupies one and the same spatial
location over a time is not moving at that time. (One might want to refine this to handle a spinning sphere, but that's an exercise to the reader.) So the arrow is not moving at t[0]. But the same
argument applies to every time, so the arrow is not moving, indeed cannot move.
Here's a way to, ahem, sharpen The Arrow. Suppose in our world we have an arrow moving at t[0]. Imagine a world w* where the arrow comes into existence at time t[0], in exactly the same state as it
actually has at t[0], and ceases to exist right after t[0]. At w* the arrow only ever occupies one position—the one it has at t[0]. Something that only ever occupies one position never moves (subject
to refinements about spinning spheres and the like). So at w* the arrow never moves, and in particular doesn't move at t[0]. But in the actual world, the arrow is in the same state at t[0] as it is
at w* at that time. So in the actual world, the arrow doesn't move at t[0].
A pretty standard response to The Arrow is that movement is not a function of how an object is at any particular time, it is a function of how, and more precisely where, an object is at multiple
times. The velocity of an object at t[0] is the limit of (x(t[0]+h)−x(t[0]))/h as h goes to zero, where x(t) is the position at t, and hence the velocity at t[0] depends on both x(t[0]) and on x(t[0]+h)
for small h.
Now consider a problem involving Newtonian mechanics. Suppose, contrary to fact, that Newtonian physics is correct.
Then how an object will behave at times t>t[0] depends on both the object's position at t[0] and on the object's velocity at t[0]. This is basically because of inertia. The forces give rise to a
change in velocity, i.e., the acceleration, rather than directly to a change in position: F(t) = m dv(t)/dt.
Now here is the puzzle. Start with this plausible thought about how the past affects the future: it does so by means of the present as an intermediary. The Cold War continues to affect geopolitics
tomorrow. How? Not by reaching out from the past across a temporal gap, but simply by means of our present memories of the Cold War and the present effects of it. This is a version of the Markov
property: how a process will behave in the future depends solely on how it is now. Thus, it seems:
1. What happens at times after t[0] depends on what happens at time t[0], and only depends on what happens at times prior to t[0] by the mediation of what happens at time t[0].
But on Newtonian mechanics, how an object will move after time t[0] depends on its velocity at t[0]. This velocity is defined in terms of where the object is at t[0] and where it is at times close to t[0]. An initial problem is that it also depends on where the object is at times later than t[0]. This problem can be removed. We can define the velocity here solely in terms of times less than or equal to t[0], as lim[h→0−](x(t[0]+h)−x(t[0]))/h, i.e., where we take the limit only over negative values of h.[note 1] But it still remains the case that the velocity at t[0] is defined in terms of where the object is at times prior to t[0], and so how the object will behave at times after t[0] depends on what happens at times prior to t[0] and not just on what happens at t[0], contrary to (1).
Here's another way to put the puzzle. Imagine that God creates a Newtonian world that starts at t[0]. Then in order that the mechanics of the world get off the ground, the objects in the world must
have a velocity at t[0]. But any velocity they have at t[0] could only depend on how the world is after t[0], and that just won't do.
Here is a potential move. Take both position and velocity to be fundamental quantities. Then how an object behaves after time t[0] depends on the object's fundamental properties at t[0], including
its velocity then. The fact that v(t[0])=lim[h→0](x(t[0]+h)−x(t[0]))/h, at least at times t[0] not on the boundary of the time sequence, now becomes a law of nature rather than definitional.
But this reneges on our solution to The Arrow. The point of that solution was that velocity is not just a matter of how an object is at one time. Here's one way to make the problematic nature of the
present suggestion vivid, along the lines of my Sharpened Arrow. Suppose that the arrow is moving at t[0] with non-zero velocity. Imagine a world w* just like ours at t[0] but which does not have any times
other than t[0].[note 2] Then the arrow has a non-zero velocity at t[0] at w*, even though it is always at exactly the same position. And that sure seems absurd.
The more physically informed reader may have been tempted to scoff a bit as I talked of velocity as fundamental. Of course, there is a standard move in the close vicinity of the one I made, and that
is not to take velocity as fundamental, but to take momentum as fundamental. If we make that move, then we can take it to be a matter of physical law that m lim[h→0](x(t[0]+h)−x(t[0]))/h = p(t[0]),
where p(t) is the momentum at t.
We still need to embrace the conclusion that an object could fail to ever move and yet have a momentum (the conclusion comes from arguments like the Sharpened Arrow). But perhaps this conclusion
only seems absurd to us non-physicists because we were early on in our education told that momentum is mass times velocity as if that were a definition. But that is definitely not a definition in
quantum mechanics. On the suggestion that in Newtonian mechanics we take momentum as fundamental, a suggestion that some formalisms accept, we really should take the fact that momentum is the product
of mass and velocity (where velocity is defined in terms of position) to be a law of nature, or a consequence of a law of nature, rather than a definitional truth.
Still, the down-side of this way of proceeding is that we had to multiply fundamental quantities—instead of just position being fundamental, now position and momentum are—and add a new law of nature,
namely that momentum is the product of mass and velocity (i.e., of mass and the rate of change of position).
I think something is to be said for a different solution, and that is to reject (1). Then momentum can be a defined quantity—the product of mass and velocity. Granted, the dynamics now has
non-Markovian cross-time dependencies. But that's fine. (I have a feeling that this move is a little more friendly to eternalism than to presentism.) If we take this route, then we have another
reason to embrace Norton's conclusion that Newtonian mechanics is not always deterministic. For if a Newtonian world had a beginning time t[0], as in the example involving God creating a Newtonian
world, then how the world is at and prior to t[0] will not determine how the world will behave at later times. God would have to bring about the initial movements of the objects, and not just the
initial state as such.
Of course, this may all kind of seem to be a silly exercise, since Newtonian physics is false. But it is interesting to think what it would be like if Newtonian physics were true. Moreover, if there
are possible worlds where Newtonian physics is true, the above line of thought might be thought to give one some reason to think that (1) is not a necessary truth, and hence give one some reason to
think that there could be causation across temporal gaps, which is an interesting and substantive conclusion. Furthermore, the above line of thought also shows how even without thinking about
formalisms like Hamiltonian mechanics one might be motivated to take momentum to be a fundamental quantity.
And so Zeno's Arrow continues to be interesting.
4 comments:
I am out of my depth here, but that never stopped me before. :-)
It seems to me that velocity is change in position over time. Consequently, velocity requires change and change requires time. There is no such thing, properly speaking, as velocity without
duration; we can figure out velocity at an instant only in a larger context.
It is like (in fact, exactly like) noting that each point on a curve has a slope, and then asking what the slope of a single point, sans curve, would be. There is no answer to that question,
because slope-at-a-point presumes the existence of a larger curve.
I actually don't know how this maps on to your larger discussion.
I'm out of my depth too on this one. However, today I read a news article on a physicist who wrote a four-page article using some similar arguments to get out of a ticket for going through
a stop sign. Read here:
I wonder if Dmitri Krioukov, physicist from UCSD, is on to something. The judge seems to agree with him. Since Krioukov's paper is not available to us, I wonder if Zeno's arrow could be modified
to successfully contest moving violations in court.
Case I:
"Start with Zeno's paradox of the arrow. Zeno notes that over every instant of time t0, an arrow occupies one and the same spatial location. But an object that occupies one and the same spatial
location over a time is not moving at that time . . . So the arrow is not moving at t0. But the same argument applies to every time, so the arrow is not moving, indeed cannot move." No wonder I
was unable to harvest a deer during archery season. It also seems that switching from a vertical bow to a crossbow hasn't helped much either. However, I did get a nice buck during firearms season
back in '08 with my 20 gauge H&R.
Case II:
State Trooper - You know why I pulled you over?
Deer Hunter - Nope.
State Trooper - You just ran that red light.
Deer Hunter - Couldna done it.
State Trooper - Now how's that?
Deer Hunter (after he spits out his tobacco) - There's this paradox in my truck. It's like this every bit of time, my truck occupies one and the same spot. But an object that occupies one and the
same spot over a time ain't moving at that time. So my truck ain't goin' nowhere. But the same just happens every time, so my truck ain't movin'. It can't move. So it couldna gone through that
there light.
State Trooper - Sir, have you been drinking?
Deer Hunter - Nope.
State Trooper - Sir, get out of your vechicle. You're under arrest. You have the right to remain silent . . .
Deer Hunter - If I ain't been drinkin', what am I bein' busted for?
State Trooper - It's illegal to have a paradox in you vechicle.
There is something I would like to add to Zeno's paradox of the arrow while we are on the subject of arrows. I like shooting the vertical bow. I've had to switch to cross bow because of
tendonitis and that with the scoped cross bow a humane kill on a game animal is far more easy to accomplish. The thing missing in this article is the straightness of the arrow. The most important
thing an archer can have is an arrow that is perfectly straight. None of the physics arguments and reasoning in this post really mean anything if the arrow isn't straight. I was taught by
Hunter Safety instructor, that if an arrow is straight, then one can use a broomstick for a bow and still hit the mark; however, if an arrow isn't straight, then no matter how good, expensive or
sophisticated the bow is, the arrow will not hit its mark. Instead of thinking along the lines "Here's a way to, ahem, sharpen The Arrow." It is far more critical to think of straightening the
|
{"url":"http://alexanderpruss.blogspot.com/2012/04/zeno-arrow-newtonian-mechanics-and.html","timestamp":"2014-04-18T05:32:52Z","content_type":null,"content_length":"299554","record_id":"<urn:uuid:560ce07a-63e8-4ffc-8b23-99d6aed1922c>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00188-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Numpy-discussion] addition, multiplication of a polynomial and np.{float, int}
Charles R Harris charlesr.harris@gmail....
Wed Mar 7 11:00:09 CST 2012
On Wed, Mar 7, 2012 at 9:45 AM, Pierre Haessig <pierre.haessig@crans.org>wrote:
> Hi,
> Le 06/03/2012 22:19, Charles R Harris a écrit :
> > Use polynomial.Polynomial and you won't have this problem.
> I was not familiar with the "poly1d vs. Polynomial" choice.
> Now, I found in the doc some more or less explicit guidelines in:
> http://docs.scipy.org/doc/numpy/reference/routines.polynomials.html
> "The polynomial package is newer and more complete than poly1d and the
> convenience classes are better behaved in the numpy environment."
> However, poly1d, which is nicely documented doesn't mention the
> "competitor".
> Going further in this transition, do you feel it would make sense adding
> a "See Also" section in poly1d function ?
That's a good idea, I'll take care of it. Note the caveat about the
coefficients going in the opposite direction.
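A small illustration of that caveat (not part of the thread; the variable names are arbitrary): poly1d takes coefficients from the highest degree down, while numpy.polynomial.Polynomial takes them from the lowest degree up.

import numpy as np
from numpy.polynomial import Polynomial

p_old = np.poly1d([1, 2, 3])    # x**2 + 2*x + 3
p_new = Polynomial([3, 2, 1])   # 3 + 2*x + x**2, the same polynomial

print(p_old(2.0), p_new(2.0))   # both evaluate to 11.0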
|
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2012-March/061222.html","timestamp":"2014-04-17T06:53:28Z","content_type":null,"content_length":"4416","record_id":"<urn:uuid:62f95e97-f9dc-4700-9e5a-2b4b4a20f56f>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00216-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Haralson Math Tutor
Find a Haralson Math Tutor
I am a college senior in Huntsville, AL who is returning home for the summer. I have been a math tutor in the past for subjects such as Algebra I, II, and geometry. I have also assisted students
in studying for the SAT and ACT.
7 Subjects: including SAT math, PSAT, political science, algebra 1
...I gently lead students to figure things out for themselves after explaining thoroughly any new concepts. I home schooled all 3 of my children in elementary school through high school. I also
taught high school math. I have a BA from Emmanuel College.
17 Subjects: including prealgebra, ACT Math, trigonometry, linear algebra
...I also include a "learning is fun attitude" and incorporate activities and games into my sessions. I have adapted various subjects and curriculum to the preferences and needs of student with
learning disabilities or other challenges. I also enjoy the challenge of finding the "key" to unlock inspiration and a desire to learn.
25 Subjects: including algebra 1, biology, geometry, writing
...Understanding that there is not only one way to accomplish and solve a math problem is an important skill that I have taught my child, and could teach yours as well. I have recently acquired a
master's degree in K-8 science. Also, I have taught sixth and seventh grade science at a local middle school.
10 Subjects: including prealgebra, reading, grammar, elementary (k-6th)
I am a 20 year old junior at Albany State University studying Early Childhood Education. I love working with children and leading them into the right direction in life. Tutoring will give me the
opportunity to do this and encourage them when they do a good job on something.
12 Subjects: including prealgebra, algebra 1, reading, geometry
Nearby Cities With Math Tutor
Experiment Math Tutors
Grantville, GA Math Tutors
Greenville, GA Math Tutors
Lovejoy, GA Math Tutors
Luthersville Math Tutors
Meansville Math Tutors
Molena Math Tutors
Moreland, GA Math Tutors
Sargent, GA Math Tutors
Sunny Side Math Tutors
Turin, GA Math Tutors
Williamson, GA Math Tutors
Woodbury, GA Math Tutors
Woolsey, GA Math Tutors
Zebulon, GA Math Tutors
|
{"url":"http://www.purplemath.com/Haralson_Math_tutors.php","timestamp":"2014-04-18T18:50:46Z","content_type":null,"content_length":"23550","record_id":"<urn:uuid:36618615-597e-4b45-9d6e-0851d3e0eef2>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00125-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions
- User Profile for: niederberge_@_omcast.net
User Profile: niederberge_@_omcast.net
User Profile for: niederberge_@_omcast.net
UserID: 558152
Name: Joe Niederberger
Registered: 10/12/08
Total Posts: 2,635
Show all user messages
|
{"url":"http://mathforum.org/kb/profile.jspa?userID=558152","timestamp":"2014-04-19T20:33:58Z","content_type":null,"content_length":"12024","record_id":"<urn:uuid:1bca9169-c248-43d4-a296-96db156d09be>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00452-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Scripts and Functions
The MATLAB® product provides a powerful programming language, as well as an interactive computational environment. You can enter commands from the language one at a time at the MATLAB command line,
or you can write a series of commands to a file that you then execute as you would any MATLAB function. Use the MATLAB Editor or any other text editor to create your own function files. Call these
functions as you would any other MATLAB function or command.
There are two kinds of program files:
● Scripts, which do not accept input arguments or return output arguments. They operate on data in the workspace.
● Functions, which can accept input arguments and return output arguments. Internal variables are local to the function.
If you are a new MATLAB programmer, just create the program files that you want to try out in the current folder. As you develop more of your own files, you will want to organize them into other
folders and personal toolboxes that you can add to your MATLAB search path.
If you duplicate function names, MATLAB executes the one that occurs first in the search path.
To view the contents of a program file, for example, myfunction.m, use
type myfunction
When you invoke a script, MATLAB simply executes the commands found in the file. Scripts can operate on existing data in the workspace, or they can create new data on which to operate. Although
scripts do not return output arguments, any variables that they create remain in the workspace, to be used in subsequent computations. In addition, scripts can produce graphical output using
functions like plot.
For example, create a file called magicrank.m that contains these MATLAB commands:
% Investigate the rank of magic squares
r = zeros(1,32);
for n = 3:32
r(n) = rank(magic(n));
end
bar(r)
Typing the statement
magicrank
causes MATLAB to execute the commands, compute the rank of the first 30 magic squares, and plot a bar graph of the result. After execution of the file is complete, the variables n and r remain in the workspace.
Functions are files that can accept input arguments and return output arguments. The names of the file and of the function should be the same. Functions operate on variables within their own
workspace, separate from the workspace you access at the MATLAB command prompt.
A good example is provided by rank. The file rank.m is available in the folder
You can see the file with
type rank
Here is the file:
function r = rank(A,tol)
% RANK Matrix rank.
% RANK(A) provides an estimate of the number of linearly
% independent rows or columns of a matrix A.
% RANK(A,tol) is the number of singular values of A
% that are larger than tol.
% RANK(A) uses the default tol = max(size(A)) * norm(A) * eps.
s = svd(A);
if nargin==1
tol = max(size(A)') * max(s) * eps;
end
r = sum(s > tol);
The first line of a function starts with the keyword function. It gives the function name and order of arguments. In this case, there are up to two input arguments and one output argument.
The next several lines, up to the first blank or executable line, are comment lines that provide the help text. These lines are printed when you type
help rank
The first line of the help text is the H1 line, which MATLAB displays when you use the lookfor command or request help on a folder.
The rest of the file is the executable MATLAB code defining the function. The variable s introduced in the body of the function, as well as the variables on the first line, r, A and tol, are all
local to the function; they are separate from any variables in the MATLAB workspace.
This example illustrates one aspect of MATLAB functions that is not ordinarily found in other programming languages—a variable number of arguments. The rank function can be used in several different
r = rank(A)
r = rank(A,1.e-6)
Many functions work this way. If no output argument is supplied, the result is stored in ans. If the second input argument is not supplied, the function computes a default value. Within the body of
the function, two quantities named nargin and nargout are available that tell you the number of input and output arguments involved in each particular use of the function. The rank function uses
nargin, but does not need to use nargout.
Types of Functions
MATLAB offers several different types of functions to use in your programming.
Anonymous Functions
An anonymous function is a simple form of the MATLAB function that is defined within a single MATLAB statement. It consists of a single MATLAB expression and any number of input and output arguments.
You can define an anonymous function right at the MATLAB command line, or within a function or script. This gives you a quick means of creating simple functions without having to create a file for
them each time.
The syntax for creating an anonymous function from an expression is
f = @(arglist)expression
The statement below creates an anonymous function that finds the square of a number. When you call this function, MATLAB assigns the value you pass in to variable x, and then uses x in the equation
sqr = @(x) x.^2;
To execute the sqr function defined above, type
a = sqr(5)
a =
25
Primary and Subfunctions
Any function that is not anonymous must be defined within a file. Each such function file contains a required primary function that appears first, and any number of subfunctions that can follow the
primary. Primary functions have a wider scope than subfunctions. That is, primary functions can be called from outside of the file that defines them (for example, from the MATLAB command line or from
functions in other files) while subfunctions cannot. Subfunctions are visible only to the primary function and other subfunctions within their own file.
The rank function shown in the section on Functions is an example of a primary function.
Private Functions
A private function is a type of primary function. Its unique characteristic is that it is visible only to a limited group of other functions. This type of function can be useful if you want to limit
access to a function, or when you choose not to expose the implementation of a function.
Private functions reside in subfolders with the special name private. They are visible only to functions in the parent folder. For example, assume the folder newmath is on the MATLAB search path. A
subfolder of newmath called private can contain functions that only the functions in newmath can call.
Because private functions are invisible outside the parent folder, they can use the same names as functions in other folders. This is useful if you want to create your own version of a particular
function while retaining the original in another folder. Because MATLAB looks for private functions before standard functions, it will find a private function named test.m before a nonprivate file
named test.m.
Nested Functions
You can define functions within the body of another function. These are said to be nested within the outer function. A nested function contains any or all of the components of any other function. In
this example, function B is nested in function A:
function x = A(p1, p2)
   function y = B(p3)
   end
end
Like other functions, a nested function has its own workspace where variables used by the function are stored. But it also has access to the workspaces of all functions in which it is nested. So, for
example, a variable that has a value assigned to it by the primary function can be read or overwritten by a function nested at any level within the primary. Similarly, a variable that is assigned in
a nested function can be read or overwritten by any of the functions containing that function.
Global Variables
If you want more than one function to share a single copy of a variable, simply declare the variable as global in all the functions. Do the same thing at the command line if you want the base
workspace to access the variable. The global declaration must occur before the variable is actually used in a function. Although it is not required, using capital letters for the names of global
variables helps distinguish them from other variables. For example, create a new function in a file called falling.m:
function h = falling(t)
global GRAVITY
h = 1/2*GRAVITY*t.^2;
Then interactively enter the statements
global GRAVITY
GRAVITY = 32;
y = falling((0:.1:5)');
The two global statements make the value assigned to GRAVITY at the command prompt available inside the function. You can then modify GRAVITY interactively and obtain new solutions without editing
any files.
Command vs. Function Syntax
You can write MATLAB functions that accept string arguments without the parentheses and quotes. That is, MATLAB interprets
foo a b c
as foo('a','b','c').
However, when you use the unquoted command form, MATLAB cannot return output arguments. For example,
legend apples oranges
creates a legend on a plot using the strings apples and oranges as labels. If you want the legend command to return its output arguments, then you must use the quoted form:
[legh,objh] = legend('apples','oranges');
In addition, you must use the quoted form if any of the arguments is not a string.
│ Caution While the unquoted command syntax is convenient, in some cases it can be used incorrectly without causing MATLAB to generate an error. │
Constructing String Arguments in Code
The quoted function form enables you to construct string arguments within the code. The following example processes multiple data files, August1.dat, August2.dat, and so on. It uses the function
int2str, which converts an integer to a character, to build the file name:
for d = 1:31
s = ['August' int2str(d) '.dat'];
% Code to process the contents of the d-th file
end
|
{"url":"http://www.mathworks.com/help/matlab/learn_matlab/scripts-and-functions.html?nocookie=true&s_tid=doc_12b","timestamp":"2014-04-19T02:27:42Z","content_type":null,"content_length":"43726","record_id":"<urn:uuid:dca0aa05-4759-45a6-93a1-d4f1e7c59c88>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00275-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum - Ask Dr. Math Archives: Middle School Puzzles
Browse Middle School Puzzles
Stars indicate particularly interesting answers or good places to begin browsing.
Selected answers to frequently posed puzzles:
Letter+number puzzles.
Number sentences.
Remainder/divisibility puzzles.
1000 lockers.
Getting across the river.
Heads, legs: how many animals?
A hen and a half...
How many handshakes?
Last one at the table.
Measuring with two containers.
Monkeys dividing coconuts.
Squares in a checkerboard.
Weighing a counterfeit coin.
What color is my hat?
Place each number from 1 through 10 in a box. Each box must contain a number that is the difference of two boxes above it, if there are two above it.
You have a pyramid (1 circle on the top layer, 2 on the second, 3 on the third, 4 on the fourth) and you can only move three circles to turn it upside down...
Today is November 14, 2000, a Tuesday. What day of the week was November 14, 1901?
A rational number greater than one and its reciprocal have a sum of 2 1/6. What is this number? Express your answer as an improper fraction in lowest terms.
How many rectangles are there on a chessboard?
Finding pairs of two-digit numbers that yield the same product when you reverse their digits.
Strategies for winning at Russian Nim (the "20" game).
Find the digit that each letter represents in the equation SEND + MORE = MONEY.
Find the missing number in the sequence 11 > ? > 1045 > 10445.
What kind of a math project could I do with magic squares?
How can you form four triangles from six toothpicks?
How can I use 6 lines to make 12 triangles?
Four skilled workers do a job in 5 days, and five semi-skilled workers do the same job in 6 days. How many days will it take for two skilled and one semi-skilled worker to do that job?
There are mechanical methods to fill in Magic Squares, but here Dr. Wilko presents a nice way to reason out the solution of a 3 by 3 square.
Take five times which plus half of what, and make the square of what you've got...
To find 11 coins that total $1.37, Dr. Ian makes organized lists which reduce the problem to smaller and smaller problems until it can be solved. This general strategy is useful in many math
In a poll of 34 students, 16 felt confident solving quantitative comparison questions, 20 felt confident solving multiple choice questions.... How many students felt confident solving only
multiple choice questions and no others?
I have tried logical reasoning and can't get it.
Create a ten-digit number that meets some special conditions...
Break a clock into exactly five pieces such that the sums of all the numbers on each piece are 8, 10, 12, 14 and 16.
If you have a 50x50 square with small squares inside it, how many squares will there be altogether?
What is the equation for the number of squares in a rectangle (like the chessboard puzzle)?
How many squares are there on a checkerboard?
How many squares are there on a chessboard? How many rectangles?
Take the first digit, multiply it by the next consecutive number, and place it in front of 25. Can you prove this shortcut?
The 1st step is made with 4 matches, the 2nd with 10 matches, the 3rd with 18, the fourth with 28. How many matches would be needed to build 6, 10, and 50 steps?
The question is, given that 4 is 'IV' and 9 is 'IX' and 900 is 'CM', does the subtraction pattern follow for two numerals more than two 'levels' apart, and can numerals which represent numbers
starting with 5 be subtracted? For example, would 99 be 'IC', would 450 be 'LD', and would 995 be 'VM'?
What is the formula to find the sum of the numbers one to five hundred?
Why is the sum of a number with an even number of digits and that same number written in reverse always divisible by 11?
Find all sets of positive consecutive integers that sum to 100, and whose digits sum to greater than 30.
Given several sets of prime numbers, use each of the nine non-zero digits exactly once. What is the smallest possible sum such a set could have?
What number can you add to and subtract from 129 such that the sum is twice the difference?
John decides to swim a certain number of laps of the pool in five days. On the first day he covers one fifth of the total. The next day he swims one third of the remaining laps...
How do I find an equation?
A cafe sold tea at 30 cents a cup and cakes at 50 cents each. Everyone in a group had the same number of cups of tea and the same number of cakes. The bill came to $13.30. How many cups of tea
did each person have?
If you toss a number cube 20 times, could it land on six 20 times?
Arrange ten cards numbered 1-10 in a pile. Turn over the top card, then move the next card to the bottom of the pile. Turn over the new top card and move the next card to the bottom of the pile.
Continue like this until all ten cards have been turned over. The challenge is to arrange the pile so the cards are turned over in order from 1 to 10.
My daughter is in sixth grade and has been doing a pattern journal where she has two columns of numbers; the first column is the n and the second column is the term, and she has to find the rule.
How do I figure out the next 2 numbers in the pattern 1, 8, 27, 64, ____, ____?
You have 2000 meters of fencing. What is the largest area you can enclose with it using various shapes?
Page: [<prev] 1 2 3 4 5 6 7 8 9 10 [next>]
|
{"url":"http://mathforum.org/library/drmath/sets/mid_puzzles.html?start_at=281&num_to_see=40&s_keyid=38489793&f_keyid=38489795","timestamp":"2014-04-16T22:14:48Z","content_type":null,"content_length":"24156","record_id":"<urn:uuid:b77ff792-e58e-4f18-9b18-55cfead06142>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00226-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Tree Version of Hechler Forcing
In their recent (2009) paper Eventually Different Functions and Inaccessible Cardinals, Brendle and Löwe consider a 'tree version' of the Hechler forcing. This forcing $\mathbb{D}$ consists of
nonempty trees $T\subseteq\omega^{<\omega}$ with the property that there is a unique stem $s\in T$ so that for every $t\in T$ extending $s$, $t\frown n\in T$ for all but finitely many $n\in\omega$.
The forcing is ordered by inclusion. This forcing allows for a very elegant rank analysis, and many properties about Hechler forcing that are proved to hold using rank arguments can also be proved to
hold for $\mathbb{D}$ in a conceptually simpler way. I do not know if the two notions of forcing are equivalent, and based on what is said in the paper I suspect neither do the authors. This is not
my real question, though I would be happy to see an answer.
My question is simply whether this specific tree forcing has appeared anywhere in the literature previously. No reference is given in the paper but I ask because I do think I remember seeing it
somewhere and now I can't seem to find mention of it in likely places.
set-theory forcing
1 Answer
This forcing is a special case of forcing with trees that branch into a filter, the filter in this case being the co-finite sets. (This, in turn, can be viewed as a special case of
Shelah's creature forcing.) So the example has certainly implicitly appeared in the literature. An early reference to forcing with trees branching into filters is "Combinatorics on
ideals and forcing with trees" by Marcia Groszek in J. Symbolic Logic 52 (1987), no. 3, 582–593.
Thank you for the reference! – Justin Palumbo May 5 '11 at 18:43
Merrimack SAT Tutor
Find a Merrimack SAT Tutor
...Over the past 6 years, I have taught and tutored students in algebra 1, geometry, algebra 2 and chemistry (college prep to AP level). I have also designed and run my own Chemistry SAT 2 prep
classes with a high success rate; and have run math SAT/SSAT courses and individual sessions. I enjoy wo...
6 Subjects: including SAT math, chemistry, algebra 2, geometry
...I retired June 30, 2013. Too often high school math is presented as a bunch of topics whose relationships to each other are sometimes left tenuous or non-existent. But math is not a pile of
rocks; it is a tree of knowledge with strong roots and multiplying branches pushing leaves into the sky, all connected back to the roots.
9 Subjects: including SAT math, calculus, geometry, algebra 1
...I am currently in the process of obtaining my Massachusetts teaching license in general science and middle school math. I have submitted my application and am simply awaiting approval. I have
been teaching middle school and high school grades for about four years now.
23 Subjects: including SAT math, Spanish, SAT reading, chemistry
...For the past 5 years I have been teaching basic composition and argumentative writing as well as literature at the university level. I also have experience in TOEFL Prep. Finally, I have
taught in an intensive English program at the university level.
30 Subjects: including SAT writing, algebra 1, biology, SAT reading
...I have taught English to adults, high school, and middle school students for 25 years. I also have a Master's in Reading and Language and certification as a Reading/Writing specialist. The SAT
paragraph are about vocabulary and the nuanced understanding for words in context.
20 Subjects: including SAT writing, English, SAT reading, writing
Nearby Cities With SAT Tutor
Amherst, NH SAT Tutors
Bedford, NH SAT Tutors
Chelmsford SAT Tutors
Derry, NH SAT Tutors
Dracut SAT Tutors
Goffstown SAT Tutors
Hudson, NH SAT Tutors
Litchfield, NH SAT Tutors
Londonderry, NH SAT Tutors
Manchester, NH SAT Tutors
Milford, NH SAT Tutors
Nashua, NH SAT Tutors
Pelham, NH SAT Tutors
Salem, NH SAT Tutors
Windham, NH SAT Tutors
HowStuffWorks "What does Einstein's formula E=MC² really mean?"
Einstein's equation E=mc² pops up on everything from baseball caps to bumper stickers. It's even the title of a 2008 Mariah Carey album. But what does Albert Einstein's famous relativity equation
really mean?
For starters, the E stands for energy and the M stands for mass, a measurement of the quantity of matter. Energy and matter are interchangeable. Furthermore, it's essential to remember that there's a
set amount of energy/matter in the universe.
If you've ever read Dr. Seuss's children's book "The Sneetches," you probably remember how the yellow, birdlike characters in the story go through a machine to change back and forth between
"star-bellied sneetches" and "plain-bellied sneetches." The number of sneetches remains constant throughout the story, but the ratio between plain- and star-bellied ones changes. It's the same way
with energy and matter. The grand total remains constant, but energy regularly changes form into matter and matter into energy.
Now we're getting to the C² part of the equation, which serves the same purpose as the star-on and star-off machines in "The Sneetches." The C stands for the speed of light, so the whole equation
breaks down to this: Energy is equal to matter multiplied by the speed of light squared.
Why would you need to multiply matter by the speed of light to produce energy? The reason is that energy, be it light waves or radiation, travels at the speed of light. That breaks down to 186,000
miles per second (300,000 kilometers per second). When we split an atom inside a nuclear power plant or an atomic bomb, the resulting energy releases at the speed of light.
But why is the speed of light squared? The reason is that kinetic energy, or the energy of motion, is proportional to mass. When you accelerate an object, the kinetic energy increases to the tune of
the speed squared. You'll find an excellent example of this in any driver's education manual: If you double your speed, the braking distance is four times longer, so the braking distance is proportional to
the speed squared [source: UNSW Physics: Einsteinlight].
The speed of light squared is a colossal number, illustrating just how much energy there is in even tiny amounts of matter. A common example of this is that 1 gram of water -- if its whole mass were
converted into pure energy via E=mc² -- contains as much energy as 20,000 tons (18,143 metric tons) of TNT exploding. That's why such a small amount of uranium or plutonium can produce such a massive
atomic explosion.
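That figure is easy to check. Here is a quick Haskell calculation; the 4.184e9 joules-per-ton-of-TNT conversion is the usual convention, supplied by me rather than taken from the article:

-- Rough check of the "1 gram of matter is about 20,000 tons of TNT" claim.
main :: IO ()
main = do
  let c      = 2.998e8     -- speed of light, m/s
      m      = 0.001       -- one gram, in kilograms
      e      = m * c * c   -- E = m * c^2, in joules
      tntTon = 4.184e9     -- joules per ton of TNT (conventional value)
  putStrLn ("Energy: " ++ show e ++ " J")
  putStrLn ("TNT equivalent: " ++ show (e / tntTon) ++ " tons")

This comes out to roughly 21,000 tons, in line with the article's 20,000-ton figure.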
Einstein's equation opened the door for numerous technological advances, from nuclear power and nuclear medicine to the inner workings of the sun. It shows us that matter and energy are one.
Explore the links on the next page to learn even more about Einstein's theories.
SOC S371 3781 Statistics for Sociology
Sociology | Statistics for Sociology
S371 | 3781 | James
Can students win some of David James’ hard-earned money while
attending class and learn some probability stuff too? How can you
calculate your chances of survival (or Leonardo’s) had you been a
passenger on the Titanic? When is the best time to watch the geyser
OLD FAITHFUL erupt? How much can you learn from the typical CNN “man
in the street interview?” Who will win the next election? Do tall
men tend to marry tall women? What does a grade point average tell
you about a person?
If you would like to learn the answers to these and other interesting
questions, then this is the class for you! If you prefer to read a
rather standard description of this course, please read on.
S371 is a statistics course required for undergraduate majors in
Sociology that also satisfies the COAS Math requirement. It provides
an introduction to statistical theories and techniques appropriate
for answering sociological questions through the analysis of
quantitative data. No prior knowledge of statistics is assumed but
students must have a good understanding of algebra. If you have
never had a course in algebra at the high school level or above, you
should consider taking one before taking this course.
Descriptive and inferential statistics are covered in this course.
Descriptive statistics are used to describe or summarize sets of
numbers. Grade point average, for example, is a descriptive statistic.
Inferential statistics are designed to test sociological theories
based upon samples of data when it is too expensive or impossible to
obtain all of the information needed from a population of interest.
Using a sample to estimate the proportion of voters who will vote for
a political candidate is an example of inferential statistics. By
making good choices about who to interview, one can generalize to the
national level, for all 180 million adult Americans, from the
information obtained from only about 2500 people. Inferences are
educated guesses and students will learn how to distinguish good from
bad guesses. You will also learn the following: how to construct and
describe frequency distributions, how to calculate and interpret
measures of central tendency and dispersion, how to tabulate data,
how to measure the association between two variables and how to
control statistically for a third, the logic of statistical inference
and hypothesis testing, how to decide if two groups of people are
different on some characteristic such as income, education, wealth,
age, occupation, skill, birth rates, death rates, or voting behavior
and how to estimate and interpret a linear regression model.
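The course description does not give the formula behind that 2,500-person figure, but the usual back-of-the-envelope calculation is short enough to sketch (worst-case proportion p = 0.5 and 95% confidence; this is my illustration, not part of the syllabus):

-- Margin of error for estimating a proportion from a simple random sample.
marginOfError :: Double -> Double
marginOfError n = 1.96 * sqrt (0.5 * 0.5 / n)   -- 1.96 is the 95% normal quantile

main :: IO ()
main = print (marginOfError 2500)   -- about 0.0196, i.e. roughly +/- 2 percentage points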
The course will focus on doing statistics. Doing statistics will
require numerical computations, some by hand, some using hand
calculators or personal computers. Nevertheless, I will de-emphasize
calculations per se, and concentrate instead on concepts and the
information conveyed by the numbers.
LET ERDŐS_NUM(Ψ) := ERDŐS_NUM(Ψ) − 2
Paul Erdős was a Hungarian mathematician who lived from 1913 to 1996. He was, from a very early age, so utterly engaged in mathematics that he never bothered to learn basic personal and domestic
skills, such as how to tie his shoelaces, or how to use a toaster or fry an egg. He lived his life as an academic vagrant, travelling across the world from mathematical conference to conference with
all his worldly possessions tucked into a suitcase. He would show up, unannounced, at colleagues' homes, announcing, "My brain is now open!", and would stay just long enough to collaborate on a few
papers before moving on.
Erdős drank copious amounts of coffee, and it is this practice of his which may have been his colleague Alfréd Rényi's inspiration for his famous saying, "A mathematician is a machine for turning
coffee into theorems". In 1971 Erdős discovered an even stronger stimulant, amphetamine, and took it daily for the last 25 years of his life. His colleagues warned him that the drug was addictive,
but he dismissed their claims. One of them bet him $500 that he couldn't stop taking the drug for a month; Erdős won the bet, but complained that his abstinence had set back the progress of
mathematics by a month: "Before, when I looked at a piece of blank paper my mind was filled with ideas. Now all I see is a blank piece of paper." Erdős resumed taking the drug after winning the bet.
Aside from his eccentricities, Erdős is best known as one of the most prolific authors of mathematical papers ever—his total output is second only to the great Leonhard Euler. During his lifetime he
authored some 1500 articles with over 500 coauthors. Mathematicians today consider it a badge of pride to have coauthored a paper with Paul Erdős—so much so that they have invented a sort of game out
of it. It works like this: If you are Paul Erdős, you have a score, or "Erdős number", of 0. If you coauthored a paper with Erdős, your Erdős number is 1. If you coauthored a paper with someone who
coauthored a paper with Erdős, then your Erdős number is 2. And it goes on in this manner indefinitely; in the general case, if you coauthor a paper with someone whose Erdős number is n, then your
Erdős number is n + 1. If you cannot link yourself to Paul Erdős by some chain of coauthorship, then you have an Erdős number of infinity.
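In graph terms, your Erdős number is just your shortest-path distance from Erdős in the coauthorship graph, so a breadth-first search computes it. A small Haskell sketch over the first chain of coauthors listed below in this post (names abbreviated and accents dropped in the string literals; the toy graph is obviously incomplete):

import qualified Data.Map as M

-- A few coauthorship edges, taken from the chain listed below.
coauthors :: [(String, String)]
coauthors =
  [ ("me", "Yang Xiang"), ("Yang Xiang", "Michael Wong")
  , ("Michael Wong", "Francis Chin"), ("Francis Chin", "Joseph Leung")
  , ("Joseph Leung", "Daniel Kleitman"), ("Daniel Kleitman", "Paul Erdos") ]

neighbours :: String -> [String]
neighbours p = [b | (a, b) <- coauthors, a == p] ++ [a | (a, b) <- coauthors, b == p]

-- Breadth-first search outward from Erdos; the distance found is the Erdos number.
erdosNumbers :: M.Map String Int
erdosNumbers = go (M.singleton "Paul Erdos" 0) ["Paul Erdos"]
  where
    go seen []       = seen
    go seen (p:rest) =
      let new = [q | q <- neighbours p, not (q `M.member` seen)]
          d   = seen M.! p + 1
      in go (foldr (\q m -> M.insert q d m) seen new) (rest ++ new)

main :: IO ()
main = print (M.lookup "me" erdosNumbers)   -- Just 6, matching the chain below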
Recent studies have suggested that most publishing mathematicians have Erdős numbers less than or equal to 15; the median (middle) number is 5, which is slightly greater than the true mean of
4.65. It is considered prestigious to have a very low Erdős number, and not just among mathematicians. Due to the very high frequency of interdisciplinary collaboration in science today, many
computer scientists boast a low Erdős number. Erdős number–bearing authors are even found in the social and biological sciences; many linguists, for example, have finite Erdős numbers due to their
links with Noam Chomsky, whose number is 4.
Now, before I started my present job in industry, I held various jobs in research and academia, and naturally wrote a few papers. Unlike many scientists, I did most of my research and writing on my
own, so I didn't amass a particularly large number of coauthors. Nonetheless, as far back as 1999 I have been able to claim an Erdős number of 6, via my first published paper with Yang Xiang, then of
the University of Regina. The chain of coauthorship was as follows:
me → Yang Xiang → Michael Wong → Francis Y. L. Chin → Joseph Y.-T. Leung → Daniel Kleitman → Paul Erdős
Now, in 2007 I coauthored an article with Noam Chomsky, though I didn't know at the time he had an Erdős number of 4. (This I learned only recently when I stumbled across a blog post from Semantics
etc.) Through him I could have reduced my own Erdős number from 6 to 5:
me → Noam Chomsky → Marcel-Paul Schützenberger → Samuel Eilenberg → Ivan M. Niven → Paul Erdős
I particularly like this chain because it has lots of famous people (well, famous enough to have Wikipedia articles of their own). However, today, through the help of the AMS Collaboration Distance
tool, I have discovered an even shorter path to Erdős which nets me a value equal to Chomsky's 4:
me → Grigoris Antoniou → Cara MacNish → Kaipillil Vijayan → Paul Erdős
Unfortunately, the names on this list aren't nearly so prestigious, but I'm willing to overlook that given that I now have an Erdős number significantly less than average!
If you're interested in learing more about Erdős's strange and wonderful life, I can heartily recommend the very accessible and entertaining biography The Man Who Loved Only Numbers (ISBN
1-85702-829-5) by Paul Hoffman. You don't need to be a mathematician, or even particularly interested in mathematics, to enjoy this book.
Complexity of matching red and blue points in the plane.
I'm just asking because I'm curious. I was seeking references on the following problem, that a friend exposed to me last holidays :
Given $n$ red points and $n$ blue points in the plane in general position (no 3 of them are aligned), find a pairing of the red points with the blue points such that the segments it draws are all pairwise disjoint (non-crossing).
This problem is always solvable, and admits several proofs. A proof I know goes like this:
Start with an arbitrary pairing and look for intersections of the segments it defines; if there are none, you're done. If you find one, do the following operation:
r r r r
\ / | |
X => | |
/ \ | |
b b b b
(uncross the crossing you have found); you may create new crossings with this operation. If you repeat this operation, you cannot cycle, because the triangle inequality shows that the sum of the
lengths of the segments is strictly decreasing. So you will eventually get stuck at a configuration with no crossings.
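That uncrossing step is easy to turn into (slow but correct) code. A minimal Haskell sketch, relying on the general-position assumption so that a strict orientation test is enough:

type Pt = (Double, Double)

-- Twice the signed area of triangle abc; nonzero by the general-position assumption.
cross :: Pt -> Pt -> Pt -> Double
cross (ax,ay) (bx,by) (cx,cy) = (bx-ax)*(cy-ay) - (by-ay)*(cx-ax)

-- Do segments pq and rs properly cross?
crosses :: (Pt, Pt) -> (Pt, Pt) -> Bool
crosses (p,q) (r,s) =
  cross p q r * cross p q s < 0 && cross r s p * cross r s q < 0

-- Repeatedly uncross one crossing pair; this terminates because the total
-- length strictly decreases, exactly as in the proof sketched above.
uncross :: [(Pt, Pt)] -> [(Pt, Pt)]
uncross m =
  case [ (i, j) | (i, s1) <- zip [0..] m, (j, s2) <- zip [0..] m
                , i < j, crosses s1 s2 ] of
    []           -> m
    ((i, j) : _) ->
      let (r1, b1) = m !! i
          (r2, b2) = m !! j
          swap k seg | k == i    = (r1, b2)
                     | k == j    = (r2, b1)
                     | otherwise = seg
      in uncross (zipWith swap [0..] m)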
1. What is the complexity of the algorithm described in the proof?
2. What is the best known algorithm to solve this problem?
I wouldn't be surprised to learn that this problem is a classic in computational geometry, however googling didn't give me references. Since some computational geometers are active on MO, I thought I
could get interesting answers here.
You might gain insight by viewing each move as a transposition applied to a permutation, and looking at a directed graph of all permutations joined by edges when two permutations differ by a
transposition and the direction goes from larger sum of lengths to smaller. This problem of traversing such a graph has likely been studied in combinatorics; I would be surprised to find an example
that requires worse than O(n^2) moves. Gerhard "Ask Me About System Design" Paseman, 2012.01.28 – Gerhard Paseman Jan 28 '12 at 18:22
Try adding "ghostbusters" to your searches. – Zsbán Ambrus Jan 28 '12 at 19:50
Note that you could also solve a minimal weight matching problem on a weighted bipartite graph where the edge weights are the distances of red points from blue points. This has $ O(n^3) $ runtime,
still polynomial but asymptotically slower than the algorithms suggested in the answers. – Zsbán Ambrus Jan 28 '12 at 21:57
Thanks to all for your replies and comments. I knew how to prove the existence with the ham sandwich theorem but wasn't sure how efficient it was as an algorithm. – Thomas Richard Jan 29 '12 at
Could we tag this with [co.combinatorics]? I think the part of the question that's still unanswered, that is, whether if you naively uncross edges it may take more than polynomial time to arrive at
a non-crossing matching, is a combinatorial question. – Zsbán Ambrus Feb 15 '12 at 10:16
3 Answers
The Ghosts and Ghostbusters problem can be solved in $O(n\log n)$ time, which is considerably faster than the $O(n^2\log n)$-time algorithm suggested by CLRS.
The ham sandwich theorem implies that there is a line $L$ that splits both the ghosts and the ghostbusters exactly in half. (If the number of ghosts and ghostbusters is odd, the line
passes through one of each; if the number is even, the line passes through neither.) Lo, Matoušek, and Steiger [Discrete Comput. Geom. 1994] describe an algorithm to compute a
ham-sandwich line in $O(n)$ time; their algorithm is also sketched here. Now recursively solve the problem on both sides of $L$; the recursion stops when all subsets are empty. The total
running time obeys the mergesort recurrence $T(n) = O(n) + 2T(n/2)$ and thus is $O(n\log n)$.
This algorithm is optimal in the algebraic decision tree and algebraic computation tree models of computation, because you need $\Omega(n\log n)$ time in those models just to decide
whether two sets of $n$ points are equal.
Check out the classic Cormen, Leiserson, Rivest, Stein, ''Introduction to algorithms'', second edition. In chapter 33 (Computational Geometry). See the exercises at the end of the whole
chapter: exercise 33-3 is your problem. The full solution isn't described, but you will find a hint for a polynomial time algorithm. I have no idea whether that's the fastest algorithm
known.
In the paper Geometry Helps in Matching, by P. Vaidya, he shows that the minimum matching (which is what you are finding here) can be found in $O(n^2 \log^3(n))$ time.
The abstract of that paper claims that they can get $ O(n^2 log^3 n) $ time only in the case of $ L_1 $ and $ L_\infty $ metrics (those two are the same in 2 dimensions of course). They
give a somewhat slower algorithm for the $ L_2 $ case. Does a minimal total length in $ L_\infty $ metric guarantee no intersections between the segments? – Zsbán Ambrus Jan 29 '12 at
The triangle inequality in those metrics is Minkowski's inequality, so certainly $L_1$ should work... – Igor Rivin Jan 29 '12 at 16:11
No way. The triangle inequality proof only works with L_2 because you can break up the crossing segments to two smaller segments at their intersection points. In fact, consider the two
red points a = (0, 0), b = (1, 0), and the two black points c = (1, 2), d = (2, 2). Then the matching a-c, b-d has total $ L_\infty $ length 3 + 3 = 6; but the matching a-d, b-c which has
segments crossing each other has total $ L_\infty $ length 4 + 2 = 6, so a matching shortest in $ L_\infty $ need not be non-crossing. – Zsbán Ambrus Jan 30 '12 at 9:05
Stability experimental
Maintainer ekmett@gmail.com
A generalized State monad, parameterized by a Representable functor. The representation of that functor serves as the state.
type State g = StateT g Identity
A memoized state monad parameterized by a representable functor g, where the representation of g, Key g, is the state to carry.
The return function leaves the state unchanged, while >>= uses the final state of the first computation as the initial state of the second.
state :: Representable g
      => (Key g -> (a, Key g))   -- pure state transformer
      -> State g a               -- equivalent state-passing computation
Construct a state monad computation from a function. (The inverse of runState.)
runState :: Indexable g
         => State g a     -- state-passing computation to execute
         -> Key g         -- initial state
         -> (a, Key g)    -- return value and final state
Unwrap a state monad computation as a function. (The inverse of state.)
evalState :: Indexable g
          => State g a    -- state-passing computation to execute
          -> Key g        -- initial value
          -> a            -- return value of the state computation
Evaluate a state computation with the given initial state and return the final value, discarding the final state.
execState :: Indexable g
          => State g a    -- state-passing computation to execute
          -> Key g        -- initial value
          -> Key g        -- final state
Evaluate a state computation with the given initial state and return the final state, discarding the final value.
mapState :: Functor g => ((a, Key g) -> (b, Key g)) -> State g a -> State g b
Map both the return value and final state of a computation using the given function.
newtype StateT g m a
A state transformer monad parameterized by:
• g - A representable functor used to memoize results for a state Key g
• m - The inner monad.
The return function leaves the state unchanged, while >>= uses the final state of the first computation as the initial state of the second.
(Functor f, Representable g, MonadFree f m) => MonadFree f (StateT g m)
(Representable g, MonadReader e m) => MonadReader e (StateT g m)
(Key g ~ s, Representable g, Monad m) => MonadState s (StateT g m)
(Representable g, MonadWriter w m) => MonadWriter w (StateT g m)
Representable f => MonadTrans (StateT f)
Representable f => BindTrans (StateT f)
(Representable g, Monad m) => Monad (StateT g m)
(Functor g, Functor m) => Functor (StateT g m)
(Representable g, Functor m, Monad m) => Applicative (StateT g m)
(Representable g, MonadCont m) => MonadCont (StateT g m)
(Functor g, Indexable g, Bind m) => Apply (StateT g m)
(Functor g, Indexable g, Bind m) => Bind (StateT g m)
evalStateT :: (Indexable g, Monad m) => StateT g m a -> Key g -> m a
Evaluate a state computation with the given initial state and return the final value, discarding the final state.
execStateT :: (Indexable g, Monad m) => StateT g m a -> Key g -> m (Key g)
Evaluate a state computation with the given initial state and return the final state, discarding the final value.
liftCallCC :: Representable g => ((((a, Key g) -> m (b, Key g)) -> m (a, Key g)) -> m (a, Key g)) -> ((a -> StateT g m b) -> StateT g m a) -> StateT g m a
Uniform lifting of a callCC operation to the new monad. This version rolls back to the original state on entering the continuation.
liftCallCC' :: Representable g => ((((a, Key g) -> m (b, Key g)) -> m (a, Key g)) -> m (a, Key g)) -> ((a -> StateT g m b) -> StateT g m a) -> StateT g m a
In-situ lifting of a callCC operation to the new monad. This version uses the current state on entering the continuation. It does not satisfy the laws of a monad transformer.
get :: MonadState s m => m s
Return the state from the internals of the monad.
gets :: MonadState s m => (s -> a) -> m a
Gets specific component of the state, using a projection function supplied.
put :: MonadState s m => s -> m ()
Replace the state inside the monad.
modify :: MonadState s m => (s -> s) -> m ()
Monadic state transformer.
Maps an old state to a new state inside a state monad. The old state is thrown away.
Main> :t modify ((+1) :: Int -> Int)
modify (...) :: (MonadState Int a) => a ()
This says that modify (+1) acts over any Monad that is a member of the MonadState class, with an Int state.
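To make the get/put/modify interface above concrete, here is a tiny usage sketch. It deliberately uses the ordinary State monad from the mtl/transformers packages rather than the representable version documented on this page, purely to keep the example self-contained; the function name label is my own:

import Control.Monad.State

-- Label the elements of a list with consecutive integers, keeping the
-- counter in the state. Only get and put from the interface above are used.
label :: [a] -> State Int [(Int, a)]
label = mapM (\x -> do
  n <- get            -- read the current counter
  put (n + 1)         -- store the incremented counter
  return (n, x))

main :: IO ()
main = do
  print (runState (label "abc") 0)           -- ([(0,'a'),(1,'b'),(2,'c')],3)
  print (evalState (modify (+1) >> get) 41)  -- 42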
The ring of algebraic integers of the number field generated by torsion points on an elliptic curve
(Warning: a student asking) Let $E$ be an elliptic curve over $\mathbf Q$. Let $P(a,b)$ be a (nontrivial) torsion point on $E$. Is there an easy description of the ring of algebraic integers of $\mathbf Q(a,b)$? I'm curious about the answer for general elliptic curves, but I'm not sure whether such an answer is possible.
(This question is motivated by the nice description of the rings of integers of cyclotomic fields $\mathbf Q(\zeta_n)$)
algebraic-number-theory elliptic-curves
4 Answers
[Comment: what follows is not really an answer, but rather a focusing of the question.]
In general, there is not such a nice description even of the number field $\mathbb{Q}(a,b)$ -- typically it will be some non-normal number field whose normal closure has Galois group $
\operatorname{GL}_2(\mathbb{Z}/n\mathbb{Z})$, where $n$ is the order of the torsion point.
In order to maintain the analogy you mention above, you would do well to consider the special case of an elliptic curve with complex multiplication, say by the maximal order of an
imaginary quadratic field $K = \mathbb{Q}(\sqrt{-N})$, necessarily of class number one since you want the elliptic curve to be defined over $\mathbb{Q}$. In this case, the field $K(P)$
will be -- up to a multiquadratic extension -- the anticyclotomic part of the $n$-ray class field of $K$.
And now it is a great question exactly what the rings of integers of these very nice number fields are. One might even venture to hope that they will be integrally generated by the x
and y coordinates of these torsion points on CM elliptic curves (certainly there are well-known integrality properties for torsion points, although I'm afraid I'm blanking on an exact
statement at the moment; I fear there may be some problems at 2...).
I'm looking forward to a real answer to this one!
Thanks for your enlightening input. If you don't mind, could you paste your response to my question? or make it an entirely new question? so other people can respond to "the correct
question"? – Anonymous Mar 8 '10 at 21:21
I think people will probably see this answer and respond to it accordingly. Let's wait a few hours and see if that's actually the case. – Pete L. Clark Mar 8 '10 at 21:25
To expand a little on Pete's answer: imagine a 2-torsion point in the curve y^2=f(x). It's of the form (a,0) with a a root of f(x). Now f(x) can be any cubic with distinct roots, so
Q(a) can be (for example) any degree 3 field, and in this case a can be any element of it that isn't in Q. – Kevin Buzzard Mar 8 '10 at 21:26
@Kevin: I think that comment may be worthy of an answer in and of itself, possibly augmented by some remarks about rings of integers in cubic fields (they can already be complicated,
right?). – Pete L. Clark Mar 8 '10 at 21:34
@Kevin: thanks for your answer. Obviously I don't know anything, but maybe it will be better if I adjoin all the n-torsion points to Q and find the ring of integers in that field
instead? – Anonymous Mar 8 '10 at 21:46
Abelian extensions of complex quadratic number fields are generated by division points of certain elliptic functions (which I guess you can translate into the language of torsion points
on elliptic curves with complex multiplication - see Pete's answer). Their rings of integers were studied in
• Ph. Cassou-Noguès, M.J. Taylor, Elliptic functions and rings of integers, Progress in Mathematics, Birkhäuser 1987.
The fact that the answer requires a whole book already suggests that things are not as easy as for cyclotomic fields.
Franz's reference reminded me that there is an entire school (Universite Bordeaux I?) of people who study relations between elliptic curves, rings of integers and Galois module structure.
It happens that I have hung out a bit with some of these people, but so far they haven't passed on their deep knowledge of this subject (or even their Francophoneness) to me. Nevertheless
I found the following interesting paper of Cassou-Noguès and Taylor which came out soon after their book:
Cassou-Noguès, Ph.(F-BORD); Taylor, M. J.(4-UMIST) A note on elliptic curves and the monogeneity of rings of integers. J. London Math. Soc. (2) 37 (1988), no. 1, 63--72.
I recommend especially the very well written introduction to this paper. It contains the intriguing sentence:
"These results have led us to believe that the rings of integers of all ray class fields of K are monogenic over the ring of integers of the Hilbert class field of K."
@Pete: thank you. I couldn't access the link since I don't have the barcode password. But thanks anyway. – Anonymous Mar 9 '10 at 5:26
1 @Anon: try math.uga.edu/~pete/Cassou-Nogues--Taylor88.pdf – Pete L. Clark Mar 9 '10 at 6:30
1 Hey Pete you're stealing my journal! ;-) – Kevin Buzzard Mar 9 '10 at 10:39
Reason why I gave him +1. – Chandan Singh Dalawat Mar 9 '10 at 13:35
*grin * – Kevin Buzzard Mar 9 '10 at 15:05
For further work on monogeneity questions, you might want to have a look at some of the papers of Reinhard Schertz (I'm afraid that I don't have precise references right now). He
apparently also has a new book out entitled "Complex Multiplication" or something like this, which I've not seen, but which probably also discusses some of his work on this topic.
Curvature proof problem
Consider a curve given in parametric form x(t), y(t). Starting from the definition of the tangent angle (the top equation set in the picture),
prove the bottom half of the picture. (The "dots" refer to differentiation with respect to the parameter t.)
So far all I have for this problem is that since tanθ=sinθ/cosθ, that y'=sinθ and x'=cosθ, but I have no idea on how to proceed with this, and I am completely in the dark. Any help would be
greatly appreciated.
Note that $\tan\theta=\frac{\dot{y}}{\dot{x}}\implies\theta=\arctan\!\left(\frac{\dot{y}}{\dot{x}}\right)$.
So when you differentiate both sides with respect to t, we see that
$\frac{\,d\theta}{\,dt}=\frac{1}{1+\left(\frac{\dot{y}}{\dot{x}}\right)^2}\cdot\frac{\dot{x}\ddot{y}-\dot{y}\ddot{x}}{\dot{x}^2}=\frac{\ddot{y}\dot{x}-\ddot{x}\dot{y}}{\dot{x}^2+\dot{y}^2}$
Does this make sense?
Take your equation $\tan\theta=\frac{\dot{y}}{\dot{x}}$ and take the derivative of both sides with respect to t. By the chain rule, $\frac{d}{dt}(\tan\theta)=(\sec^2\theta)\left(\frac{d\theta}{dt}\right)=(1+\tan^2\theta)\frac{d\theta}{dt}$; and you can find $\frac{d}{dt}\left(\frac{\dot{y}}{\dot{x}}\right)$ via the quotient rule. Then replace $\tan\theta$ on the left hand side with $\frac{\dot{y}}{\dot{x}}$ (since those are equal), then solve the equation for $\frac{d\theta}{dt}$ in terms of $\dot{x}$, $\dot{y}$, $\ddot{x}$, and $\ddot{y}$.
--Kevin C.
Thank you for the help!
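A quick numerical sanity check of the result, not part of the original thread: for the unit circle x = cos t, y = sin t the tangent angle is θ = t + π/2, so dθ/dt should come out as exactly 1. A small Haskell sketch:

-- dtheta/dt = (y'' x' - x'' y') / (x'^2 + y'^2), evaluated for x = cos t, y = sin t.
dThetaDt :: Double -> Double
dThetaDt t = (ydd * xd - xdd * yd) / (xd * xd + yd * yd)
  where
    xd  = -sin t; yd  =  cos t    -- first derivatives
    xdd = -cos t; ydd = -sin t    -- second derivatives

main :: IO ()
main = print (map dThetaDt [0, 0.5, 1, 2])   -- [1.0,1.0,1.0,1.0]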
Atomic Rockets
What about Torchships?
The Torchship "Lewis & Clark" from Time For the Stars by Robert A. Heinlein, 1956
If one has a beefy, muscular spacecraft propulsion system, you can ignore all that Hohmann nonsense. But it better be really powerful. It will have to be some science-fictional torchship, capable of
accelerating at one g for days at a time, with obscenely high delta V ratings, far beyond foreseeable technology. A drive that will run full tilt into John's Law. What is the actual definition of a
torchship? Well, it is kind of vague. It more or less boils down to "unreasonably powerful." Personally as a rule of thumb I'd say it was a propulsion system with a thrust power of 150 gigawatts or
higher. Thrust power is thrust times exhaust velocity then divided by two (see equations). The entries in the Drive Table are sorted by thrust power for your convenience.
(The term "Torchship" was coined by Robert Heinlein, and is featured in his stories Farmer in the Sky, Time for the Stars, Double Star, and "Sky Lift". Sometimes it is referred to as "Ortega's
Torch". Nowadays it is implied that a Torchship is some kind of high thrust fusion drive, but Heinlein meant it to mean a total-conversion mass-into-energy drive.
But the presence of torchships doesn't mean mathematics no longer applies. You just need different equations.
First figure the distance between the two planets, say Mars and Terra. The "superior" planet is the one farthest from the Sun, and the "inferior" planet is nearest. The distance from the Sun and the
superior planet is D[s] and the distance between the Sun and the inferior is D[i]. No "church lady" jokes.
Obviously the maximum distance between the planets is when they are on the opposite sides of the Sun, the distance being D[s] + D[i]. And of course the minimum is when they are on the same side,
distance being D[s] - D[i]. Upon reflection you will discover that the average distance between the planets is D[s]. (when averaging, D[i] cancels out.)
Just choose a distance between the max and min. If you want to actually calculate the distance between two planets on a given date, be my guest but I'm not qualified to explain how. Do a web search
for a software "orrery".
A Hohmann orbit is the maximum transit time / minimum deltaV mission. A "Brachistochrone" is a minimum transit time / maximum deltaV mission, where you accelerate constantly to the midpoint, flip
over ("skew flip"), and decelerate to the destination (Weaker spacecraft will accelerate up to a certain velocity, coast for a while, then decelerate to rest.). Brachistochrone missions are not only
of shorter mission time, but they also are not constrained by launch windows the way Hohmann are. You can launch any time you like.
It is very important to note that it takes exactly the same amount of time to slow down from a speed X to speed zero as it took to accelerate from speed zero to speed X. People who played the ancient
boardgame Triplanetary or the new game Voidstriker discovered this the hard way. They would spend five turns accelerating to a blinding speed, find out to their horror that it would take five turns
to slow down to a stop, and end up either streaking off the edge of the map or smacking into Mars at a speed high enough to make a crater. This is why a Brachistochrone accelerates to the mid-way
point then decelerates the rest of the trip. The idea is to come to a complete stop at your destination.
If you know the desired acceleration of your spacecraft (generally one g or 9.81 m/s^2) and wish to calculate the transit time, the Brachistochrone equation is
T = 2 * sqrt[ D/A ]
• T = transit time (seconds)
• D = distance (meters)
• A = acceleration (m/s^2)
• sqrt[x] = square root of x
Remember that
• AU * 1.49e11 = meters
• 1 g of acceleration = 9.81 m/s^2
• one-tenth g of acceleration = 0.981 m/s^2
• one one-hundredth g of acceleration = 0.0981 m/s^2
Divide time in seconds by
• 3600 for hours
• 86400 for days
• 2592000 for (30 day) months
• 31536000 for years
Timothy Charters worked out the following equation. It is the above transit time equation for weaker spacecraft that have to coast during the midpoint
T = ((D - (A * t^2)) / (A * t)) + (2*t)
• T = transit time (seconds)
• D = distance (meters)
• A = acceleration (m/s^2)
• t = duration of acceleration phase (seconds), just the acceleration phase only, NOT the acceleration+deceleration phase.
Note that the coast duration time is of course = T - (2*t)
If you know the desired transit time and wish to calculate the required acceleration, the equation is
A = (4 * D) / T^2
Keep in mind that prolonged periods of acceleration a greater than one g is very bad for the crew's health.
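The transit-time and acceleration equations above are simple enough to play with in a few lines of code. A Haskell sketch; the sample distance and boost times below are my own illustrative choices, not figures from the text:

au, g :: Double
au = 1.49e11   -- metres per astronomical unit
g  = 9.81      -- one gravity of acceleration, m/s^2

-- Constant-boost brachistochrone: accelerate to the midpoint, skew flip, decelerate.
transitTime :: Double -> Double -> Double
transitTime dist accel = 2 * sqrt (dist / accel)

-- Weaker ship: boost for t seconds, coast, then decelerate for t seconds.
transitTimeCoast :: Double -> Double -> Double -> Double
transitTimeCoast dist accel t = (dist - accel * t * t) / (accel * t) + 2 * t

main :: IO ()
main = do
  let d = 0.52 * au   -- roughly the minimum Terra-Mars distance (1.52 AU - 1.00 AU)
  putStrLn ("Boosting at 1 g the whole way: " ++ show (transitTime d g / 86400) ++ " days")
  putStrLn ("Boosting half a day at each end, coasting between: "
            ++ show (transitTimeCoast d g 43200 / 86400) ++ " days")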
Deriving from the Kinematic Equation
"Colony Sphere" from a 1959 poster. Everything else in the poster has been "borrowed" from other sources, so one of suspicious mind would think this might have been "inspired" by the colony
torchship "Mayflower" in Heinlein's FARMER IN THE SKY.
Don't be confused. You might think that the Brachistochrone equation should be T = sqrt[ 2 * D/A ] instead of T = 2 * sqrt[ D/A ], since your physics textbook states that D = 0.5 * A * T^2. The
confusion is because the D in the physics book refers to the mid-way distance, not the total distance.
This changes the physics book equation from
D = 0.5 * A * t^2
into
D * 0.5 = 0.5 * A * t^2
Solving for t gives us t = sqrt(D/A) where t is the time to the mid-way distance. Since it takes an equal amount of time to slow down, the total trip time T is twice that or T = 2 * sqrt( D/A ).
Which is the Brachistochrone equation given above.
Torchship Mayflower from "Satellite Scout" (Farmer in the Sky) by Robert Heinlein. Artwork by Chesley Bonestell.
Now, just how brawny a rocket are we talking about? Take the distance and acceleration from above and plug it into the following equation:
transitDeltaV = 2 * sqrt[ D * A ]
• transitDeltaV = transit deltaV required (m/s)
The rocket will also have to match orbital velocity with the target planet. In Hohmann orbits, this was included in the total.
orbitalVelocity = sqrt[ (G * M) / R ]
• orbitalVelocity = planet's orbital velocity (m/s)
• G = 0.00000000006673 (Gravitational constant)
• M = mass of primary (kg), for the Sun: 1.989e30
• R = distance between planet and primary (meters) (semi-major axis or orbital radius)
If you are talking about missions between planets in the solar system, the equation becomes
orbitalVelocity = sqrt[1.33e20 / R ]
Figure the orbital velocity of the start planet and destination planet, subtract the smaller from the larger, and the result is the matchOrbitDeltaV
matchOrbitDeltaV = sqrt[1.33e20 / D[i] ] - sqrt[1.33e20 / D[s] ]
If the rocket lifts off and/or lands, that takes deltaV as well.
liftoffDeltaV = sqrt[ (G * P[m]) / P[r] ]
• liftoffDeltaV = deltaV to lift off or land on a planet (m/s)
• G = 0.00000000006673
• P[m] = planet's mass (kg)
• P[r] = planet's radius (m)
The total mission deltaV is therefore:
totalDeltaV = sqrt(liftoffDeltaV^2 + transitDeltaV^2) + sqrt(matchOrbitDeltaV^2 + landDeltaV^2)
Do a bit of calculation and you will see how such performance is outrageously beyond the capability of any drive system in the table I gave you.
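To see just how outrageous, here is the same delta-V budget worked through in Haskell for a one-g Terra-to-Mars brachistochrone. The planetary masses and radii are rough textbook values that I am supplying for the illustration, not numbers from this page:

gc, mSun, au :: Double
gc   = 6.673e-11   -- gravitational constant
mSun = 1.989e30    -- mass of the Sun, kg
au   = 1.49e11     -- metres per astronomical unit

transitDeltaV :: Double -> Double -> Double
transitDeltaV dist accel = 2 * sqrt (dist * accel)

orbitalVelocity :: Double -> Double
orbitalVelocity r = sqrt (gc * mSun / r)

liftoffDeltaV :: Double -> Double -> Double
liftoffDeltaV planetMass planetRadius = sqrt (gc * planetMass / planetRadius)

main :: IO ()
main = do
  let dist    = (1.52 - 1.00) * au              -- closest-approach transit distance
      transit = transitDeltaV dist 9.81         -- one-g brachistochrone
      match   = orbitalVelocity (1.00 * au) - orbitalVelocity (1.52 * au)
      lift    = liftoffDeltaV 5.97e24 6.37e6    -- Terra (rough values)
      land    = liftoffDeltaV 6.42e23 3.39e6    -- Mars (rough values)
      total   = sqrt (lift*lift + transit*transit) + sqrt (match*match + land*land)
  putStrLn ("Total mission deltaV: " ++ show (total / 1000) ++ " km/s")

The answer comes out around 1,700 km/s, dominated entirely by the brachistochrone leg.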
If you want to cheat, you can look up some of the missions in Jon Roger's Mission Table.
For some ballpark estimates, you can use my handy-dandy Transit Time Nomogram. A nomogram is an obsolete mathematical calculation device related to a slide rule. It is a set of scales printed on a
sheet of paper, and read with the help of a ruler or straight-edge. While obsolete, it does have some advantages when trying to visualize a range of solutions. Print out the nomogram, grab a ruler,
and follow my example. You can also purchase an 11" x 17" poster of this nomogram at . Standard disclaimer: I constructed this nomogram but I am not a rocket scientist. There may be errors. Use at
your own risk.
Let's say that our spacecraft is 1.5 ktons (1.5 kilo-tons or 1500 metric tons). It has a single Gas-Core Nuclear Thermal Rocket engine (NTR-GAS MAX) and has a (totally ridiculous) mass ratio of 20.
The equation for figuring a spacecraft's total DeltaV is Δ[v] = Ve * ln[R]. On your pocket calculator, 98,000 * ln[20] = 98,000 * 2.9957 = 300,000 m/s = 300 km/s. Ideally this should be on the
transit nomogram, but the blasted thing was getting crowded enough as it is. This calculation is on a separate nomogram found here.
The mission is to travel a distance of 0.4 AU (about the distance between the Sun and the planet Mercury). Using a constant boost brachistochrone trajectory, how long will the ship take to travel
that distance?
Examine the nomogram. On the Ship Mass scale, locate the 1.5 kton tick mark. On the Engine Type scale, locate the NTR-GAS MAX tick mark. Lay a straight-edge on the 1.5 kton and NTR-GAS MAX tick marks
and examine where the edge crosses the Acceleration scale. Congratulations, you've just calculated the ship's maximum acceleration: 2 meters per second per second (m/s^2).
For your convenience, the acceleration scale is also labeled with the minimum lift off values for various planets.
So we know our ship has a maximum acceleration of 2 m/s^2 and a maximum DeltaV of 300 km/s. As long as we stay under both of those limits we will be fine.
On the Acceleration scale, locate the 2 m/s^2 tick mark. On the Destination Distance scale, locate the 0.4 AU tick mark. Lay a straight-edge on the two tick marks and examine where it intersects the
Transit time scale. It says that the trip will take just a bit under four days.
But wait! Check where the edge crosses the Total DeltaV scale. Uh oh, it says almost 750 km/s, and our ship can only do 300 km/s before its propellant tanks run dry. Our ship cannot do this mission.
The key is to remember that 2 m/s^2 is the ship's maximum acceleration; nothing is preventing us from throttling down the engines a bit to lower the DeltaV cost. This is where a nomogram is superior
to a calculator, in that you can visualize a range of solutions.
Pivot the straight-edge on the 0.4 AU tick mark. Pivot it until it crosses the 300 km/s tick on the Total DeltaV scale. Now you can read the other mission values: 0.4 m/s^2 acceleration and a trip
time of a bit over a week. Since this mission has parameters that are under both the DeltaV and Acceleration limits of our ship, the ship can perform this mission (we will assume that the ship has
enough life-support to keep the crew alive for a week or so).
Of course, if you want to have some spare DeltaV left in your propellant tanks at the mission destination, you don't have to use it all just getting there. For instance, you can pivot around the 250
km/s DeltaV tick mark to find a good mission. You will arrive at the destination with 300 - 250 = 50 km/s still in your tanks.
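If you would rather check the nomogram reading numerically, the same worked example reduces to a few lines (a sketch; the exhaust velocity and mass ratio are the ones from the example above):

main :: IO ()
main = do
  let au     = 1.49e11
      d      = 0.4 * au
      ve     = 98000                    -- NTR-GAS MAX exhaust velocity, m/s
      dvMax  = ve * log 20              -- mass ratio 20, so roughly 300 km/s
      dvFull = 2 * sqrt (d * 2)         -- deltaV needed at the full 2 m/s^2
      aLtd   = dvMax * dvMax / (4 * d)  -- acceleration that spends exactly dvMax
      tLtd   = 2 * sqrt (d / aLtd)
  putStrLn ("Ship deltaV:        " ++ show (dvMax / 1000) ++ " km/s")
  putStrLn ("Needed at 2 m/s^2:  " ++ show (dvFull / 1000) ++ " km/s (too much)")
  putStrLn ("Throttled mission:  " ++ show aLtd ++ " m/s^2, "
            ++ show (tLtd / 86400) ++ " days")

The throttled plan comes out near 0.4 m/s^2 and a bit over nine days, matching the straight-edge reading.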
Which reminded me that I had not worked out how long it would take to get home on a one-gee boost, if it turned out that I could not arrange automatic piloting at eight gees. I was stymied on
getting out of the cell, I hadn't even nibbled at what I would do if I did get out (correction: when I got out), but I could work ballistics.
I didn't need books. I've met people, even in this day and age, who can't tell a star from a planet and who think of astronomical distances simply as "big." They remind me of those primitives who
have just four numbers: one, two, three, and "many." But any tenderfoot Scout knows the basic facts and a fellow bitten by the space bug (such as myself) usually knows a number of figures.
"Mother very thoughtfully made a jelly sandwich under no protest." Could you forget that after saying it a few times? Okay, lay it out so:
Mother MERCURY $.39
Very VENUS $.72
Thoughtfully TERRA $1.00
Made MARS $1.50
A ASTEROIDS (assorted prices, unimportant)
Jelly JUPITER $5.20
Sandwich SATURN $9.50
Under URANUS $19.00
No NEPTUNE $30.00
Protest PLUTO $39.50
The "prices" are distances from the Sun in astronomical units. An A.U. is the mean distance of Earth from Sun, 93,000,000 miles. It is easier to remember one figure that everybody knows and some
little figures than it is to remember figures in millions and billions. I use dollar signs because a figure has more flavor if I think of it as money - which Dad considers deplorable. Some way
you must remember them, or you don't know your own neighborhood.
Now we come to a joker. The list says that Pluto's distance is thirty-nine and a half times Earth's distance. But Pluto and Mercury have very eccentric orbits and Pluto's is a dilly; its distance
varies almost two billion miles, more than the distance from the Sun to Uranus. Pluto creeps to the orbit of Neptune and a hair inside, then swings way out and stays there a couple of centuries -
it makes only four round trips in a thousand years.
But I had seen that article about how Pluto was coming into its "summer." So I knew it was close to the orbit of Neptune now, and would be for the rest of my life-my life expectancy in
Centerville; I didn't look like a preferred risk here. That gave an easy figure - 30 astronomical units.
Acceleration problems are simple s=1/2 at^2; distance equals half the acceleration times the square of elapsed time. If astrogation were that simple any sophomore could pilot a rocket ship - the
complications come from gravitational fields and the fact that everything moves fourteen directions at once. But I could disregard gravitational fields and planetary motions; at the speeds a
wormface ship makes neither factor matters until you are very close. I wanted a rough answer.
I missed my slipstick. Dad says that anyone who can't use a slide rule is a cultural illiterate and should not be allowed to vote. Mine is a beauty- a K&E 20" Log-log Duplex Decitrig. Dad
surprised me with it after I mastered a ten-inch polyphase. We ate potato soup that week - but Dad says you should always budget luxuries first. I knew where it was. Home on my desk.
No matter. I had figures, formula, pencil and paper.
First a check problem. Fats had said "Pluto," "five days," and "eight gravities."
It's a two-piece problem; accelerate for half time (and half distance); do a skew-flip and decelerate the other half time (and distance). You can't use the whole distance in the equation, as
"time" appears as a square - it's a parabolic. Was Pluto in opposition? Or quadrature? Or conjunction? Nobody looks at Pluto - so why remember where it is on the ecliptic? Oh, well, the average
distance was 30 A.U.s - that would give a close-enough answer. Half that distance, in feet, is: 1/2 x 30 x 93,000,000 x 5280. Eight gravities is: 8 x 32.2 ft./sec./sec. - speed increases by 258
feet per second every second up to skew-flip and decreases just as fast thereafter.
So- 1/2 x 30 x 93,000,000 x 5280 = 1/2 x 8 x 32.2 x t^2 - and you wind up with the time for half the trip, in seconds. Double that for full trip. Divide by 3600 to get hours; divide by 24 and you
have days. On a slide rule such a problem takes forty seconds, most of it to get your decimal point correct. It's as easy as computing sales tax.
It took me at least an hour and almost as long to prove it, using a different sequence - and a third time, because the answers didn't match (I had forgotten to multiply by 5280, and had "miles"
on one side and "feet" on the other - a no-good way to do arithmetic) - then a fourth time because my confidence was shaken. I tell you, the slide rule is the greatest invention since girls.
But I got a proved answer. Five and a half days. I was on Pluto.
Ed note: I learned it as My Very Educated Mother Just Served Us Nine Pumpkins. In Slide Rule terminology: K&E is Keuffel & Esser, noted manufacturer of quality slide rules. 20 inches is twice the
size and accuracy of a standard slide rule. Log-log means the rule possesses expanded logarithmic scales. Duplex means there are scales on both sides of the rule and the cursor is double sided.
Decitrig means the rule possesses decimal trigonometric scales.
From Have Space Suit - Will Travel by Robert A. Heinlein, 1958
Thanks to Charles Martin for this analysis:
In Heinlein's short story "Sky Lift", the torchship on an emergency run to Pluto colony does 3.5 g for nine days and 15 hours. 3.5 g is approximately 35 m/s^2 and 9d15h is 831,600 seconds. 35 m/s
^2 * 831,600 s = 29,100,000 m/s total deltaV.
Assume a mass ratio of 4. Most of Heinlein's ships had a mass ratio of 3, 4 is reasonable for an emergency trip.
V[e] = Δ[v] / ln[R] so 29,100,000 / 1.39 = 21,000,000 m/s exhaust velocity or seven percent of the speed of light.
A glance at the engine table show that this is way up there, second only to the maximum possible Antimatter Beam-Core propulsion, and twice the maximum of Inertial Confinement Fusion. If
Heinlein's torchship can manage a V[e] of ten percent lightspeed it can get away with a mass ratio of 3.
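Both that analysis and the Pluto run in the excerpt above are easy to re-run; a quick Haskell check (using the same rounded 35 m/s^2 figure the analysis does):

main :: IO ()
main = do
  -- "Sky Lift": 3.5 g (rounded to 35 m/s^2) for 9 days 15 hours, mass ratio 4.
  let accel = 35
      burn  = 9 * 86400 + 15 * 3600
      dv    = accel * burn
      ve    = dv / log 4
  putStrLn ("Sky Lift deltaV: " ++ show (dv / 1000) ++ " km/s, Ve: "
            ++ show (100 * ve / 3.0e8) ++ " percent of c")
  -- "Have Space Suit": 30 AU at 8 gravities, constant-boost brachistochrone.
  let dist = 30 * 1.49e11
      t    = 2 * sqrt (dist / (8 * 9.81))
  putStrLn ("Pluto run: " ++ show (t / 86400) ++ " days")

The output agrees with the text: about 29,000 km/s of deltaV and an exhaust velocity of seven percent of lightspeed for "Sky Lift", and roughly five and a half days for the Pluto trip.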
Earthlight by Sir Arthur C. Clarke, 1955. Nice picture, but it does violate the "rockets point down" principle.
In "Sky Lift" and Double Star, the crew spent the days of high thrust in acceleration couches that were like advanced waterbeds (called "cider presses"). In The Mote in God's Eye by Larry Niven and
Jerry Pournelle, the captain's chair had a built-in "relief tube" (i.e., a rudimentary urinal) for use during prolonged periods of multi-g acceleration. There were also a few motorized acceleration
couches used by damage control parties who had to move around during high gs. Such mobile couches also appeared in Joe Haldeman's The Forever War.
He called Bury instead.
Bury was in the gee bath: a film of highly elastic mylar over liquid. Only his face and hands showed above the curved surface. His face looked old-it almost showed his true age.
. . ."Yes, of course, I didn't mean personally. I only want access to information on our progress. At my age I dare not move from this rubber bathtub for the duration of our voyage. How long will
we be under four gees?"
"One hundred and twenty-five hours. One twenty-four, now."
. . .He called Sally's cabin.
She looked as if she hadn't slept in a week or smiled in years. Blaine said, "Hello, Sally. Sorry you came?"
"I told you I can take anything you can take," Sally said calmly. She gripped the arms of her chair and stood up. She let go and spread her arms to show how capable she was.
"Be careful," Blaine said, trying to keep his voice steady. "No sudden moves. Keep your knees straight. You can break your back just sitting down. Now stay erect, but reach behind you. Get both
the chair arms in your hands before you try to bend at the waist-"
She didn't believe it was dangerous, not until she started to sit down. Then the muscles in her arms knotted, panic flared in her eyes, and she sat much too abruptly, as if MacArthur's gravity
had sucked her down.
"Are you hurt?"
"No," she said. "Only my pride."
"Then you stay in that chair, damn your eyes! Do you see me standing up? You do not. And you won't!"
"All right." She turned her head from side to side. She was obviously dizzy from the jolt.
From The Mote in God's Eye by Larry Niven and Jerry Pournelle
Torchship Lewis & Clark. Artwork by Jon Stopa.
A hand grabbed my arm, towed me along a narrow passage and into a compartment. Against one bulkhead and flat to it were two bunks, or "cider presses," the bathtub-shaped, hydraulic,
pressure-distribution tanks used for high acceleration in torchships. I had never seen one before but we had used quite convincing mock-ups in the space opus The Earth Raiders.
There was a stenciled sign on the bulkhead behind the bunks: WARNING!!! Do Not Take More than Three Gravities without a Gee Suit. By Order of-- I rotated slowly out of range of vision before I
could finish reading it and someone shoved me into one cider press. Dak and the other men were hurriedly strapping me against it when a horn somewhere near by broke into a horrid hooting. It
continued for several seconds, then a voice replaced it: "Red warning! Two gravities! Three minutes! Red warning! Two gravities! Three minutes!" Then the hooting started again.
I looked at him and said wonderingly, "How do you manage to stand up?" Part of my mind, the professional part that works independentiy, was noting how he stood and filing it in a new drawer
marked: "How a Man Stands under Two Gravities."
He grinned at me. "Nothing to it. I wear arch supports."
"You can stand up, if you want to. Ordinarily we discourage passengers from getting out of the boost tanks when we are torching at anything over one and a half gees - too much chance that some
idiot will fall over his own feet and break a leg. But I once saw a really tough weight-lifter type climb out of the press and walk at five gravities - but he was never good for much afterwards.
But two gees is okay - about like carrying another man piggyback."
She did not return. Instead the door was opened by a man who appeared to be inhabiting a giant kiddie stroller. "Howdy there, young fellow!" he boomed out. He was sixtyish, a bit too heavy, and
bland; I did not have to see his diploma to be aware that his was a "bedside" manner.
"How do you do, sir?"
"Well enough. Better at lower acceleration." He glanced down at the contrivance he was strapped into. "How do you like my corset-on-wheels? Not stylish, perhaps, but it takes some of the strain
off my heart.
At turnover we got that one-gravity rest that Dak had promised. We never were in free fall, not for an instant; instead of putting out the torch, which I gather they hate to do while under way,
the ship described what Dak called a 180-degree skew turn. It leaves the ship on boost the whole time and is done rather qulckly, but it has an oddly disturbing effect on the sense of balance.
The effect has a name something like Coriolanus. Coriolis?
All I know about spaceships is that the ones that operate from the surface of a planet are true rockets but the voyageurs call them "teakettles" because of the steam jet of water or hydrogen they
boost with. They aren't considered real atomic-power ships even though the jet is heated by an atomic pile. The long-jump ships such as the Tom Paine, torchships that is, are (so they tell me)
the real thing, making use of F equals MC squared, or is it M equals EC squared? You know - the thing Einstein invented.
Our Moon being an airless planet, a torchship can land on it. But the Tom Paine, being a torchship, was really intended to stay in space and be serviced only at space stations in orbit; she had
to be landed in a cradle. I wish I had been awake to see it, for they say that catching an egg on a plate is easy by comparison. Dak was one of the half dozen pilots who could do it.
From DOUBLE STAR by Robert Heinlein, 1956
|
{"url":"http://www.projectrho.com/public_html/rocket/torchships.php","timestamp":"2014-04-18T08:03:15Z","content_type":null,"content_length":"49562","record_id":"<urn:uuid:8f0fff16-7a2b-4260-bb61-eb78303f09e3>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00574-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Prompted by some recent work I’ve been doing on reasoning about monadic computations, I’ve been looking back at the work from the 1990s by Phil Trinder, Limsoon Wong, Leonidas Fegaras, Torsten Grust,
and others, on monad comprehensions as a framework for database queries.
The idea goes back to the adjunction between extension and intension in set theory—you can define a set by its extension, that is by listing its elements:
$\displaystyle \{ 1, 9, 25, 49, 81 \}$
or by its intension, that is by characterizing those elements:
$\displaystyle \{ n^2 \mid 0 < n < 10 \land n \equiv 1 (\mathop{mod} 2) \}$
Expressions in the latter form are called set comprehensions. They inspired a programming notation in the SETL language from NYU, and have become widely known through list comprehensions in languages
like Haskell. The structure needed of sets or of lists to make this work is roughly that of a monad, and Phil Wadler showed how to generalize comprehensions to arbitrary monads, which led to the “do”
notation in Haskell. Around the same time, Phil Trinder showed that comprehensions make a convenient database query language. The comprehension notation has been extended to cover other important
aspects of database queries, particularly aggregation and grouping. Monads and aggregations have very nice algebraic structure, which leads to a useful body of laws to support database query optimization.
List comprehensions
Just as a warm-up, here is a reminder about Haskell’s list comprehensions.
$\displaystyle [ 2 \times a + b \mid a \leftarrow [1,2,3] , b \leftarrow [4,5,6] , b \mathbin{\underline{\smash{\mathit{mod}}}} a == 0 ]$
This (rather concocted) example yields the list of all values of the expression ${2 \times a + b}$ as ${a}$ is drawn from ${[1,2,3]}$ and ${b}$ from ${[4,5,6]}$ and such that ${b}$ is divisible by $
{a}$, namely ${[6,7,8,8,10,12]}$.
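For concreteness, the same example can be pasted straight into GHCi; this is just a transcription of the comprehension above, nothing more:

-- The example comprehension from the text, as runnable Haskell.
example :: [Int]
example = [ 2 * a + b | a <- [1,2,3], b <- [4,5,6], b `mod` a == 0 ]
-- ghci> example
-- [6,7,8,8,10,12]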
To the left of the vertical bar is the term (an expression). To the right is a comma-separated sequence of qualifiers, each of which is either a generator (of the form ${a \leftarrow x}$, with a
variable ${a}$ and a list expression ${x}$) or a filter (a boolean expression). The scope of a variable introduced by a generator extends to all subsequent generators and to the term. Note that, in
contrast to the mathematical inspiration, bound variables need to be generated from some existing list.
The semantics of list comprehensions is defined by translation; see for example Phil Wadler’s Chapter 7 of The Implementation of Functional Programming Languages. It can be expressed equationally as
$\displaystyle \begin{array}{lcl} [ e \mid \epsilon ] &=& [e] \\ {} [ e \mid b ] &=& \mathbf{if}\;b\;\mathbf{then}\;[ e ]\;\mathbf{else}\;[\,] \\ {} [ e \mid a \leftarrow x ] &=& \mathit{map}\,(\lambda a \mathbin{.} e)\,x \\ {} [ e \mid q, q' ] &=& \mathit{concat}\,[ [ e \mid q' ] \mid q ] \end{array}$
(Here, ${\epsilon}$ denotes the empty sequence of qualifiers. It’s not allowed in Haskell, but it is helpful in simplifying the translation.)
Applying this translation to the example at the start of the section gives
$\displaystyle \begin{array}{ll} & [ 2 \times a + b \mid a \leftarrow [1,2,3] , b \leftarrow [4,5,6] , b \mathbin{\underline{\smash{\mathit{mod}}}} a == 0 ] \\ = & \mathit{concat}\,(\mathit{map}
\,(\lambda a \mathbin{.} \mathit{concat}\,(\mathit{map}\,(\lambda b \mathbin{.} \mathbf{if}\;b \mathbin{\underline{\smash{\mathit{mod}}}} a == 0\;\mathbf{then}\;[2 \times a + b]\;\mathbf{else}\;
[\,])\,[4,5,6]))\,[1,2,3]) \\ = & [6,7,8,8,10,12] \end{array}$
More generally, a generator may match against a pattern rather than just a variable. In that case, it may bind multiple (or indeed no) variables at once; moreover, the match may fail, in which case
it is discarded. This is handled by modifying the translation for generators to use a function defined by pattern-matching, rather than a straight lambda-abstraction:
$\displaystyle [ e \mid p \leftarrow x ] = \mathit{concat}\,(\mathit{map}\,(\lambda a \mathbin{.} \mathbf{case}\;a\;\mathbf{of}\;p \rightarrow [ e ] \;;\; \_ \rightarrow [\,])\,x)$
or, more perspicuously,
$\displaystyle [ e \mid p \leftarrow x ] = \mathbf{let}\;h\,p = [ e ] ; h\,\_ = [\,]\;\mathbf{in}\; \mathit{concat}\,(\mathit{map}\,h\,x)$
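A small hand-desugared illustration of this clause may help; the sketch below is mine and uses a hypothetical list of Maybe values as the source, so that the pattern Just a can genuinely fail:

-- Hand-desugared form of [ a * a | Just a <- xs ], following the translation above.
squaresOfJusts :: [Maybe Int] -> [Int]
squaresOfJusts xs = concat (map h xs)
  where
    h (Just a) = [a * a]   -- the pattern matches: keep the term
    h _        = []        -- the match fails: discard this element
-- squaresOfJusts [Just 2, Nothing, Just 3] == [4,9]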
Monad comprehensions
It is clear from the above translation that the necessary ingredients for list comprehensions are ${\mathit{map}}$, singletons, ${\mathit{concat}}$, and the empty list. The first three are the
operations arising from lists as a functor and a monad, which suggests that the same translation might be applicable to other monads too. But the fourth ingredient, the empty list, does not come from
the functor and monad structures; that requires an extra assumption:
$\displaystyle \begin{array}{ll} \mathbf{class}\;\mathit{Monad}\,m \Rightarrow \mathit{MonadZero}\,m\;\mathbf{where} \\ \quad \mathit{mzero} :: m\,a \end{array}$
Then the translation for list comprehensions can be generalized to other monads:
$\displaystyle \begin{array}{lcl} [ e \mid \epsilon ] &=& \mathit{return}\,e \\ {} [ e \mid b ] &=& \mathbf{if}\;b\;\mathbf{then}\;\mathit{return}\,e\;\mathbf{else}\;\mathit{mzero} \\ {} [ e \mid
p \leftarrow m ] &=& \mathbf{let}\;h\,p = \mathit{return}\,e ; h\,\_ = \mathit{mzero}\;\mathbf{in}\; \mathit{join}\,(\mathit{map}\,h\,m) \\ {} [ e \mid q, q' ] &=& \mathit{join}\,[ [ e \mid q' ]
\mid q ] \end{array}$
(so ${[ e \mid \epsilon ] = [ e \mid \mathit{True} ]}$). The actual monad to be used is implicit; if we want to be explicit, we could use a subscript, as in “${[ e \mid q ]_\mathsf{List}}$“.
This translation is different from the one used in the Haskell language specification, which to my mind is a little awkward: the empty list crops up in two different ways in the translation of list
comprehensions—for filters, and for generators with patterns—and these are generalized in two different ways to other monads (to the ${\mathit{mzero}}$ method of the ${\mathit{MonadPlus}}$ class in
the first case, and the ${\mathit{fail}}$ method of the ${\mathit{Monad}}$ class in the second). I think it is neater to have a monad subclass ${\mathit{MonadZero}}$ with a single method subsuming
both these operators. Of course, this does mean that the translation forces a monad comprehension with filters or possibly failing generators to be interpreted in a monad in the ${\mathit{MonadZero}}
$ subclass rather than just ${\mathit{Monad}}$—the type class constraints that are generated depend on the features used in the comprehension. (Perhaps this translation was tried in earlier versions
of the language specification, and found wanting?)
Taking this approach gives basically the monad comprehension notation from Wadler’s Comprehending Monads paper; it loosely corresponds to Haskell’s do notation, except that the term is to the left of
a vertical bar rather than at the end, and that filters are just boolean expressions rather than introduced using ${\mathit{guard}}$.
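As an aside (not part of the original discussion): GHC's MonadComprehensions extension implements a closely related notation, so the generalized translation can be tried interactively. A tiny sketch interpreted in Maybe, where mzero plays the role of Nothing:

{-# LANGUAGE MonadComprehensions #-}
-- A comprehension with a filter, interpreted in the Maybe monad.
halveIfEven :: Int -> Maybe Int
halveIfEven n = [ n `div` 2 | even n ]
-- halveIfEven 10 == Just 5
-- halveIfEven 7  == Nothing

(GHC desugars the filter using guard, so the details differ slightly from the translation given here.)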
We might impose the law that ${\mathit{mzero}}$ is a “left” zero of composition, in the sense
$\displaystyle \mathit{join}\,\mathit{mzero} = \mathit{mzero}$
or, in terms of comprehensions,
$\displaystyle [ e \mid a \leftarrow \mathit{mzero} ] = \mathit{mzero}$
Informally, this means that any failing steps of the computation cleanly cut off subsequent branches. Conversely, we do not require that ${\mathit{mzero}}$ is a “right” zero too:
$\displaystyle \mathit{join}\,(\mathit{map}\,(\lambda a \mathbin{.} \mathit{mzero})\,m) \neq \mathit{mzero} \quad\mbox{(in general)}$
This would have the consequence that a failing step also cleanly erases any effects from earlier parts of the computation, which is too strong a requirement for many monads—particularly those of the
“launch missiles now” variety. (The names “left-” and “right zero” make more sense when the equations are expressed in terms of the usual Haskell bind operator ${(\gg\!=)}$, which is a kind of
sequential composition.)
Ringads and collection classes
One more ingredient is needed in order to characterize monads that correspond to “collection classes” such as sets and lists, and that is an analogue of set union or list append. It’s not difficult
to see that this is inexpressible in terms of the operations introduced so far: given only collections ${m}$ of at most one element, any comprehension using generators of the form ${a \leftarrow m}$
will only yield another such collection, whereas the union of two one-element collections will in general have two elements.
To allow any finite collection to be expressed, it suffices to introduce a binary union operator ${\uplus}$:
$\displaystyle \begin{array}{ll} \mathbf{class}\;\mathit{Monad}\,m \Rightarrow \mathit{MonadPlus}\,m\;\mathbf{where} \\ \quad (\uplus) :: m\,a \times m\,a \rightarrow m\,a \end{array}$
We require composition to distribute over union, in the following sense:
$\displaystyle \mathit{join}\,(m \uplus n) = \mathit{join}\,m \uplus \mathit{join}\,n$
or, in terms of comprehensions,
$\displaystyle [ e \mid a \leftarrow m \uplus n, q ] = [ e \mid a \leftarrow m, q ] \uplus [ e \mid a \leftarrow n, q ]$
For the remainder of this post, we will assume a monad in both ${\mathit{MonadZero}}$ and ${\mathit{MonadPlus}}$. Moreover, we will assume that ${\mathit{mzero}}$ is the unit of ${\uplus}$, and is
both a left- and a right zero of composition. To stress the additional constraints, we will write “${\emptyset}$” for “${\mathit{mzero}}$” from now on. The intention is that such monads exactly
capture collection classes; Phil Wadler has called these structures ringads. (He seems to have done so in an unpublished note Notes on Monads and Ringads from 1990, which is cited by some papers from
the early 1990s. But Phil no longer has a copy of this note, and it’s not online anywhere… I’d love to see a copy, if anyone has one!)
$\displaystyle \begin{array}{ll} \mathbf{class}\;(\mathit{MonadZero}\,m, \mathit{MonadPlus}\,m) \Rightarrow \mathit{Ringad}\,m\;\mathbf{where} \end{array}$
(There are no additional methods; the class ${\mathit{Ringad}}$ is the intersection of the two parent classes ${\mathit{MonadZero}}$ and ${\mathit{MonadPlus}}$, with the union of the two interfaces,
together with the laws above.) I used roughly the same construction already in the post on Horner’s Rule.
As well as (finite) sets and lists, ringad instances include (finite) bags and a funny kind of binary tree (externally labelled, possibly empty, in which the empty tree is a unit of the binary tree
constructor). These are all members of the so-called Boom Hierarchy of types—a name coined by Richard Bird, for an idea due to Hendrik Boom, who by happy coincidence is named for one of these
structures in his native language. All members of the Boom Hierarchy are generated from the empty, singleton, and union operators, the difference being whether union is associative, commutative, and
idempotent. Another ringad instance, but not a member of the Boom Hierarchy, is the type of probability distributions—either normalized, with a weight-indexed family of union operators, or
unnormalized, with an additional scaling operator.
The well-behaved operations over monadic values are called the algebras for that monad—functions ${k}$ such that ${k \cdot \mathit{return} = \mathit{id}}$ and ${k \cdot \mathit{join} = k \cdot \mathit{map}\,k}$. In particular, ${\mathit{join}}$ is itself a monad algebra. When the monad is also a ringad, ${k}$ necessarily distributes also over ${\uplus}$—there is a binary operator ${\oplus}$ such that ${k\,(m \uplus n) = k\,m \oplus k\,n}$ (exercise!). Without loss of generality, we write ${\oplus/}$ for ${k}$; these are the “reductions” of the Bird–Meertens Formalism. In that case, ${\mathit{join} = \uplus/}$ is a ringad algebra.
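A concrete check of these claims for the list ringad, taking k = sum so that the induced binary operator is ordinary addition (my example, not the post's):

-- sum is an algebra for the list monad, and it distributes over (++) with oplus = (+).
algebraChecks :: Bool
algebraChecks =
     sum [7 :: Int] == 7                                           -- k . return = id
  && sum (concat [[1,2],[3 :: Int]]) == sum (map sum [[1,2],[3]])  -- k . join = k . map k
  && sum ([1,2] ++ [3 :: Int]) == sum [1,2] + sum [3]              -- k (m ++ n) = k m + k n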
The algebras for a ringad amount to aggregation functions for a collection: the sum of a bag of integers, the maximum of a set of naturals, and so on. We could extend the comprehension notation to
encompass aggregations too, for example by adding an optional annotation, writing say “${[ e \mid q ]^\oplus}$“; although this doesn’t add much, because we could just have written “${\oplus/\,[e \mid
q]}$” instead. We could generalize from reductions ${\oplus/}$ to collection homomorphisms ${\oplus/ \cdot \mathit{map}\,f}$; but this doesn’t add much either, because the map is easily combined with
the comprehension—it’s easy to show the “map over comprehension” property
$\displaystyle \mathit{map}\,f\,[e \mid q] = [f\,e \mid q]$
Leonidas Fegaras and David Maier develop a monoid comprehension calculus around such aggregations; but I think their name is inappropriate, because nothing forces the binary aggregating operator to
be associative.
Note that, for ${\oplus/}$ to be well-defined, ${\oplus}$ must satisfy all the laws that ${\uplus}$ does—${\oplus}$ must be associative if ${\uplus}$ is associative, and so on. It is not hard to
show, for instance, that there is no ${\oplus}$ on sets of numbers for which ${\mathit{sum}\,(x \cup y) = \mathit{sum}\,x \oplus \mathit{sum}\,y}$; such an ${\oplus}$ would have to be idempotent,
which is inconsistent with its relationship with ${\mathit{sum}}$. (So, although ${[a^2 \mid a \leftarrow x, \mathit{odd}\,a]_\mathsf{Bag}^{+}}$ denotes the sum of the squares of the odd elements of
bag ${x}$, the expression ${[a^2 \mid a \leftarrow x, \mathit{odd}\,a]_\mathsf{Set}^{+}}$ (with ${x}$ now a set) is not defined, because ${+}$ is not idempotent.) In particular, ${\oplus/\emptyset}$
must be the unit of ${\oplus}$, which we write ${1_\oplus}$.
We can derive translation rules for aggregations from the definition
$\displaystyle [ e \mid q ]^\oplus = \oplus/\,[e \mid q]$
For empty aggregations, we have:
$\displaystyle \begin{array}{ll} & [ e \mid \epsilon ]^\oplus \\ = & \qquad \{ \mbox{aggregation} \} \\ & \oplus/\,[ e \mid \epsilon ] \\ = & \qquad \{ \mbox{comprehension} \} \\ & \oplus/\,(\mathit{return}\,e) \\ = & \qquad \{ \mbox{monad algebra} \} \\ & e \end{array}$
For filters, we have:
$\displaystyle \begin{array}{ll} & [ e \mid b ]^\oplus \\ = & \qquad \{ \mbox{aggregation} \} \\ & \oplus/\,[ e \mid b ] \\ = & \qquad \{ \mbox{comprehension} \} \\ & \oplus/\,(\mathbf{if}\;b\;\mathbf{then}\;\mathit{return}\,e\;\mathbf{else}\;\emptyset) \\ = & \qquad \{ \mbox{lift out the conditional} \} \\ & \mathbf{if}\;b\;\mathbf{then}\;{\oplus/}\,(\mathit{return}\,e)\;\mathbf{else}\;{\oplus/}\,\emptyset \\ = & \qquad \{ \mbox{ringad algebra} \} \\ & \mathbf{if}\;b\;\mathbf{then}\;e\;\mathbf{else}\;1_\oplus \end{array}$
For generators, we have:
$\displaystyle \begin{array}{ll} & [ e \mid p \leftarrow m ]^\oplus \\ = & \qquad \{ \mbox{aggregation} \} \\ & \oplus/\,[ e \mid p \leftarrow m ] \\ = & \qquad \{ \mbox{comprehension} \} \\ & \oplus/\,(\mathbf{let}\;h\,p = \mathit{return}\,e ; h\,\_ = \emptyset\;\mathbf{in}\;\mathit{join}\,(\mathit{map}\,h\,m)) \\ = & \qquad \{ \mbox{lift out the \textbf{let}} \} \\ & \mathbf{let}\;h\,p = \mathit{return}\,e ; h\,\_ = \emptyset\;\mathbf{in}\;{\oplus/}\,(\mathit{join}\,(\mathit{map}\,h\,m)) \\ = & \qquad \{ \mbox{monad algebra} \} \\ & \mathbf{let}\;h\,p = \mathit{return}\,e ; h\,\_ = \emptyset\;\mathbf{in}\;{\oplus/}\,(\mathit{map}\,(\oplus/)\,(\mathit{map}\,h\,m)) \\ = & \qquad \{ \mbox{functors} \} \\ & \mathbf{let}\;h\,p = \mathit{return}\,e ; h\,\_ = \emptyset\;\mathbf{in}\;{\oplus/}\,(\mathit{map}\,(\oplus/ \cdot h)\,m) \\ = & \qquad \{ \mbox{let~} h' = \oplus/ \cdot h \} \\ & \mathbf{let}\;h'\,p = \oplus/\,(\mathit{return}\,e) ; h'\,\_ = \oplus/\,\emptyset\;\mathbf{in}\;{\oplus/}\,(\mathit{map}\,h'\,m) \\ = & \qquad \{ \mbox{ringad algebra} \} \\ & \mathbf{let}\;h'\,p = e ; h'\,\_ = 1_\oplus\;\mathbf{in}\;{\oplus/}\,(\mathit{map}\,h'\,m) \end{array}$
And for sequences of qualifiers, we have:
$\displaystyle \begin{array}{ll} & [ e \mid q, q' ]^\oplus \\ = & \qquad \{ \mbox{aggregation} \} \\ & \oplus/\,[ e \mid q, q' ] \\ = & \qquad \{ \mbox{comprehension} \} \\ & \oplus/\,(\mathit{join}\,[ [ e \mid q'] \mid q ]) \\ = & \qquad \{ \mbox{monad algebra} \} \\ & \oplus/\,(\mathit{map}\,(\oplus/)\,[ [ e \mid q'] \mid q ]) \\ = & \qquad \{ \mbox{map over comprehension} \} \\ & \oplus/\,[ \oplus/\,[ e \mid q'] \mid q ] \\ = & \qquad \{ \mbox{aggregation} \} \\ & [ [ e \mid q']^\oplus \mid q ]^\oplus \end{array}$
Putting all this together, we have:
$\displaystyle \begin{array}{lcl} [ e \mid \epsilon ]^\oplus &=& e \\ {} [ e \mid b ]^\oplus &=&\mathbf{if}\;b\;\mathbf{then}\;e\;\mathbf{else}\;1_\oplus \\ {} [ e \mid p \leftarrow m ]^\oplus &=
& \mathbf{let}\;h\,p = e ; h\,\_ = 1_\oplus\;\mathbf{in}\;{\oplus/}\,(\mathit{map}\,h\,m) \\ {} [ e \mid q, q' ]^\oplus &=& [ [ e \mid q']^\oplus \mid q ]^\oplus \end{array}$
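Specialised to the list ringad with the binary operator taken to be (+) and unit 0, these rules give, for the running "sum of the squares of the odd elements" example, roughly the following Haskell (a sketch of mine, not code from the post):

sumSqOdd :: [Int] -> Int
sumSqOdd x = sum (map h x)       -- the generator rule, with oplus/ = sum
  where
    h a | odd a     = a * a       -- the filter and term rules combined
        | otherwise = 0           -- 1_oplus
-- sumSqOdd [1,2,3,4] == 10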
Heterogeneous comprehensions
We have seen that comprehensions can be interpreted in an arbitrary ringad; for example, ${[a^2 \mid a \leftarrow x, \mathit{odd}\,a]_\mathsf{Set}}$ denotes (the set of) the squares of the odd
elements of (the set) ${x}$, whereas ${[a^2 \mid a \leftarrow x, \mathit{odd}\,a]_\mathsf{Bag}}$ denotes the bag of such elements, with ${x}$ a bag. Can we make sense of “heterogeneous
comprehensions”, involving several different ringads?
Let’s introduce the notion of a ringad morphism, extending the familiar analogue on monads. For monads ${\mathsf{M}}$ and ${\mathsf{N}}$, a monad morphism ${\phi : \mathsf{M} \mathbin{\stackrel{.}{\to}} \mathsf{N}}$ is a natural transformation ${\mathsf{M} \mathbin{\stackrel{.}{\to}} \mathsf{N}}$—that is, a family ${\phi_\alpha :: \mathsf{M}\,\alpha \rightarrow \mathsf{N}\,\alpha}$ of arrows, coherent in the sense that ${\phi_\beta \cdot \mathsf{M}\,f = \mathsf{N}\,f \cdot \phi_\alpha}$ for ${f :: \alpha \rightarrow \beta}$—that also preserves the monad structure:
$\displaystyle \begin{array}{lclcl} \phi \cdot \mathit{return}_\mathsf{M} &=& \mathit{return}_\mathsf{N} \\ \phi \cdot \mathit{join}_\mathsf{M} &=& \mathit{join}_\mathsf{N} \cdot \phi \cdot \mathsf{M}\,\phi &=& \mathit{join}_\mathsf{N} \cdot \mathsf{N}\,\phi \cdot \phi \end{array}$
A ringad morphism ${\phi : \mathsf{M} \mathbin{\stackrel{.}{\to}} \mathsf{N}}$ for ringads ${\mathsf{M},\mathsf{N}}$ is a monad morphism ${\phi : \mathsf{M} \mathbin{\stackrel{.}{\to}} \mathsf{N}}$
that also respects the ringad structure:
$\displaystyle \begin{array}{lcl} \phi\,\emptyset_\mathsf{M} &=& \emptyset_\mathsf{N} \\ \phi\,(x \uplus_\mathsf{M} y) &=& \phi\,x \uplus_\mathsf{N} \phi\,y \end{array}$
Then a ringad morphism behaves nicely with respect to ringad comprehensions—a comprehension interpreted in ringad ${\mathsf{M}}$, using existing collections of type ${\mathsf{M}}$, with the result
transformed via a ringad morphism ${\phi : \mathsf{M} \mathbin{\stackrel{.}{\to}} \mathsf{N}}$ to ringad ${\mathsf{N}}$, is equivalent to the comprehension interpreted in ringad ${\mathsf{N}}$ in the
first place, but with the initial collections transformed to type ${\mathsf{N}}$. Informally, there will be no surprises arising from when ringad coercions take place, because the results are the
same whenever this happens. This property is straightforward to show by induction over the structure of the comprehension. For the empty comprehension, we have:
$\displaystyle \begin{array}{ll} & \phi\,[ e \mid \epsilon ]_\mathsf{M} \\ = & \qquad \{ \mbox{comprehension} \} \\ & \phi\,(\mathit{return}_\mathsf{M}\,e) \\ = & \qquad \{ \mbox{ringad morphism}
\} \\ & \mathit{return}_\mathsf{N}\,e \\ = & \qquad \{ \mbox{comprehension} \} \\ & [e \mid \epsilon ]_\mathsf{N} \end{array}$
For filters, we have:
$\displaystyle \begin{array}{ll} & \phi\,[ e \mid b ]_\mathsf{M} \\ = & \qquad \{ \mbox{comprehension} \} \\ & \phi\,(\mathbf{if}\;b\;\mathbf{then}\;\mathit{return}_\mathsf{M}\,e\;\mathbf{else}\;
\emptyset_\mathsf{M}) \\ = & \qquad \{ \mbox{lift out the conditional} \} \\ & \mathbf{if}\;b\;\mathbf{then}\;\phi\,(\mathit{return}_\mathsf{M}\,e)\;\mathbf{else}\;\phi\,\emptyset_\mathsf{M} \\ =
& \qquad \{ \mbox{ringad morphism} \} \\ & \mathbf{if}\;b\;\mathbf{then}\;\mathit{return}_\mathsf{N}\,e\;\mathbf{else}\;\emptyset_\mathsf{N} \\ = & \qquad \{ \mbox{comprehension} \} \\ & [ e \mid
b ]_\mathsf{N} \end{array}$
For generators:
$\displaystyle \begin{array}{ll} & \phi\,[ e \mid p \leftarrow m ]_\mathsf{M} \\ = & \qquad \{ \mbox{comprehension} \} \\ & \phi\,(\mathbf{let}\;h\,p = \mathit{return}_\mathsf{M}\,e ; h\,\_ = \emptyset_\mathsf{M}\;\mathbf{in}\;\mathit{join}_\mathsf{M}\,(\mathit{map}_\mathsf{M}\,h\,m)) \\ = & \qquad \{ \mbox{lift out the \textbf{let}} \} \\ & \mathbf{let}\;h\,p = \mathit{return}_\mathsf{M}\,e ; h\,\_ = \emptyset_\mathsf{M}\;\mathbf{in}\;\phi\,(\mathit{join}_\mathsf{M}\,(\mathit{map}_\mathsf{M}\,h\,m)) \\ = & \qquad \{ \mbox{ringad morphism, functors} \} \\ & \mathbf{let}\;h\,p = \mathit{return}_\mathsf{M}\,e ; h\,\_ = \emptyset_\mathsf{M}\;\mathbf{in}\;\mathit{join}_\mathsf{N}\,(\phi\,(\mathit{map}_\mathsf{M}\,(\phi \cdot h)\,m)) \\ = & \qquad \{ \mbox{let~} h' = \phi \cdot h \} \\ & \mathbf{let}\;h'\,p = \phi\,(\mathit{return}_\mathsf{M}\,e) ; h'\,\_ = \phi\,\emptyset_\mathsf{M}\;\mathbf{in}\;\mathit{join}_\mathsf{N}\,(\phi\,(\mathit{map}_\mathsf{M}\,h'\,m)) \\ = & \qquad \{ \mbox{ringad morphism, induction} \} \\ & \mathbf{let}\;h'\,p = \mathit{return}_\mathsf{N}\,e ; h'\,\_ = \emptyset_\mathsf{N}\;\mathbf{in}\;\mathit{join}_\mathsf{N}\,(\phi\,(\mathit{map}_\mathsf{M}\,h'\,m)) \\ = & \qquad \{ \mbox{naturality of~} \phi \} \\ & \mathbf{let}\;h'\,p = \mathit{return}_\mathsf{N}\,e ; h'\,\_ = \emptyset_\mathsf{N}\;\mathbf{in}\;\mathit{join}_\mathsf{N}\,(\mathit{map}_\mathsf{N}\,h'\,(\phi\,m)) \\ = & \qquad \{ \mbox{comprehension} \} \\ & [ e \mid p \leftarrow \phi\,m ]_\mathsf{N} \end{array}$
And for sequences of qualifiers:
$\displaystyle \begin{array}{ll} & \phi\,[ e \mid q, q' ]_\mathsf{M} \\ = & \qquad \{ \mbox{comprehension} \} \\ & \phi\,(\mathit{join}_\mathsf{M}\,[ [ e \mid q' ]_\mathsf{M} \mid q ]_\mathsf{M}) \\ = & \qquad \{ \mbox{ringad morphism} \} \\ & \mathit{join}_\mathsf{N}\,(\phi\,(\mathit{map}_\mathsf{M}\,\phi\,[ [ e \mid q' ]_\mathsf{M} \mid q ]_\mathsf{M})) \\ = & \qquad \{ \mbox{map over comprehension} \} \\ & \mathit{join}_\mathsf{N}\,(\phi\,[ \phi\,[ e \mid q' ]_\mathsf{M} \mid q ]_\mathsf{M}) \\ = & \qquad \{ \mbox{induction} \} \\ & \mathit{join}_\mathsf{N}\,[ [ e \mid q' ]_\mathsf{N} \mid q ]_\mathsf{N} \\ = & \qquad \{ \mbox{comprehension} \} \\ & [ e \mid q, q' ]_\mathsf{N} \end{array}$
For example, if ${\mathit{bag2set} : \mathsf{Bag} \mathbin{\stackrel{.}{\to}} \mathsf{Set}}$ is the obvious ringad morphism from bags to sets, discarding information about the multiplicity of
repeated elements, and ${x}$ a bag of numbers, then
$\displaystyle \mathit{bag2set}\,[a^2 \mid a \leftarrow x, \mathit{odd}\,a]_\mathsf{Bag} = [a^2 \mid a \leftarrow \mathit{bag2set}\,x, \mathit{odd}\,a]_\mathsf{Set}$
and both yield the set of squares of the odd members of ${x}$. As a notational convenience, we might elide use of the ringad morphism when it is “obvious from context”—we might write just ${[a^2 \mid
a \leftarrow x, \mathit{odd}\,a]_\mathsf{Set}}$ even when ${x}$ is a bag, relying on the “obvious” morphism ${\mathit{bag2set}}$. This would allow us to write, for example,
$\displaystyle [ a+b \mid a \leftarrow [1,2,3], b \leftarrow \langle4,4,5\rangle ]_\mathsf{Set} = \{ 5,6,7,8 \}$
(writing ${\langle\ldots\rangle}$ for the extension of a bag), instead of the more pedantic
$\displaystyle [ a+b \mid a \leftarrow \mathit{list2set}\,[1,2,3], b \leftarrow \mathit{bag2set}\,\langle4,4,5\rangle ]_\mathsf{Set} = \{ 5,6,7,8 \}$
There is a forgetful function from any poorer member of the Boom hierarchy to a richer one, flattening some distinctions by imposing additional laws—for example, from bags to sets, flattening
distinctions concerning multiplicity—and I would class these forgetful functions as “obvious” morphisms. On the other hand, any morphisms in the opposite direction—such as sorting, from bags to
lists, and one-of-each, from sets to bags—are not “obvious”, and so should not be elided; and similarly, I’m not sure that I could justify as “obvious” any morphisms involving non-members of the Boom
Hierarchy, such as probability distributions.
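To make the bag2set property above concrete, here is a rough executable sketch, with bags modelled simply as lists (repetition standing for multiplicity) and sets taken from Data.Set; bag2set and morphismExample are my own names, not code from the post:

import qualified Data.Set as Set

bag2set :: Ord a => [a] -> Set.Set a
bag2set = Set.fromList            -- the forgetful map: discard multiplicities

morphismExample :: Bool
morphismExample =
  bag2set [ a * a | a <- xs, odd a ]                                -- bag comprehension, then coerce
    == Set.fromList [ a * a | a <- Set.toList (bag2set xs), odd a ] -- coerce first, then set comprehension
  where xs = [1, 2, 3, 3 :: Int]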
9 Responses to Comprehensions
1. Monad comprehensions in Haskell 1.4 used MonadZero for matches, and there was no fail in Monad. However, there was also an extra notion of an unfailable pattern, which allowed the following to
not use MonadZero:
[ x + y | (x, y) <- l ]
(x, y) is an unfailable pattern because it can only be refuted by bottom. For Haskell 98, unfailable patterns were removed, but it was deemed unacceptable for the above code to use MonadZero, so
fail was added instead, and all Monads were given the ability to have arbitrary matches in a weird compromise.
2. Shouldn’t those bag- and set-using comprehensions’ filter be “odd a,” vice “odd x”?
Or is that an established conventional meaning?
(I Am Not A Mathematician, I just play one on the Internet…)
□ Oops. Fixed – thanks.
3. I’m a little bit surprised that you don’t mention concatMap and use it in the translation of list comprehensions as it is the (>>=) of the list monad and makes the generalization from lists to
monads even clearer.
□ I think it’s generally a matter of taste whether to use concat and join or concatMap and bind; they’re equivalent. But imho, algebras for a monad come out much cleaner with join than with bind.
☆ Fair enough.
Considering just the translation of monad comprehensions I personally prefer the version using (>>=) over the one using join though.
4. Having just begun to study probability, I found your little facts concerning probability distributions quite intriguing: can you recommend any sources where one might explore further?
□ Quite a few people have written about probability distributions as a monad; there are some references in the “Just Do It” paper linked above (ie http://www.cs.ox.ac.uk/publications/
5. Reblogged this on Adil Akhter.
This entry was posted in Uncategorized. Bookmark the permalink.
|
{"url":"http://patternsinfp.wordpress.com/2012/01/19/comprehensions/","timestamp":"2014-04-19T09:28:41Z","content_type":null,"content_length":"133038","record_id":"<urn:uuid:446c138f-33ea-4272-8b3d-f666587bf5bf>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00277-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Ihara zeta function
Is there a natural connection between the Ihara zeta function of a graph, and (for instance) the Riemann zeta function of certain varieties over finite fields ? Thanks.
RH for the Ihara zeta function will correspond to the graph being Ramanujan (if the graph is (q+1)-regular).
The zeta function for varieties over finite fields is more related to Ruelle's zeta function, but you can see Ihara zeta function as a special instance of it, using symbolic dynamics representation
of a walk in your graph as a dynamical system.
A nice reference for this material is Audrey Terras' book - "Zeta Functions of Graphs: A Stroll through the Garden"
|
{"url":"http://mathoverflow.net/questions/86446/ihara-zeta-function","timestamp":"2014-04-20T13:52:34Z","content_type":null,"content_length":"49615","record_id":"<urn:uuid:1304ce17-2d99-4884-ab81-95775024a7d7>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00539-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Structural and Dynamic Changes in Concurrent Systems: Reconfigurable Petri Nets
September 2004 (vol. 53 no. 9)
pp. 1147-1158
Marisa Llorens, Javier Oliver, "Structural and Dynamic Changes in Concurrent Systems: Reconfigurable Petri Nets," IEEE Transactions on Computers, vol. 53, no. 9, pp. 1147-1158, September, 2004.
The aim of this work is the modeling and verification of concurrent systems subject to dynamic changes using extensions of Petri nets. We begin by introducing the notion of net rewriting system. In a
net rewriting system, a system configuration is described as a Petri net and a change in configuration is described as a graph rewriting rule. We show that net rewriting systems are Turing powerful,
that is, the basic decidable properties of Petri nets are lost and, thus, automatic verification is not possible for this class. A subclass of net rewriting systems is that of reconfigurable Petri nets. In
a reconfigurable Petri net, a change in configuration amounts to the modification of the flow relations of the places in the domain of the involved rule according to this rule, independently of the
context in which this rewriting applies. We show that reconfigurable Petri nets are formally equivalent to Petri nets. This equivalence ensures that all the fundamental properties of Petri nets are
still decidable for reconfigurable Petri nets and this model is thus amenable to automatic verification tools. Therefore, the expressiveness of both models is the same, but, with reconfigurable Petri
nets, we can easily and directly model systems that change their structure dynamically.
Index Terms:
Theory of computation, computation by abstract devices, models of computation, relations between models, modes of computation, parallelism and concurrency.
|
{"url":"http://www.computer.org/csdl/trans/tc/2004/09/t1147-abs.html","timestamp":"2014-04-19T00:13:17Z","content_type":null,"content_length":"56064","record_id":"<urn:uuid:84f6593f-d61e-4946-9d4a-0a99bca47a8a>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00302-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Roswell, GA Algebra 1 Tutor
Find a Roswell, GA Algebra 1 Tutor
...My past academic successes include passing several Advanced Placement exams, getting a perfect 800 score on the SAT Math exam, and maintaining a GPA in the top percentile. While my current
career has been rewarding, I find the fulfillment of working with middle and high school students and putti...
26 Subjects: including algebra 1, chemistry, calculus, physics
...Thank you for your time, and Thank you for the opportunity.I have taught 6th grade for 10 years. I am very versatile as a teacher and have a double major in Biology & Exercise Physiology which
included coursework that makes the elementary subject areas no problem for me, content wise. I have tutored elementary students before in Reading, Math, and Science.
36 Subjects: including algebra 1, reading, chemistry, writing
...Each student is different and I take pride in helping my students not only understand but comprehend the material rather than memorizing only to forget the next day. Each student has a
different point of view and the material needs to be presented to them differently. My patience and ability to...
29 Subjects: including algebra 1, chemistry, calculus, physics
...Booker’s love for mathematics goes back to his childhood, where he developed his interest in math while working beside his grandfather in a small grocery store. He is a Christian who loves
God, life, people and serving others. The focus of much of his teaching in recent years is to students who...
5 Subjects: including algebra 1, geometry, algebra 2, prealgebra
...I tutored over 100 students in intro and intermediate micro- and macroeconomics Economics as well as statistics through Emory's Academic Advising and Support Programs in the Office for
Undergraduate Education during my junior and senior years. After graduation, I traveled to Spain for a few mont...
14 Subjects: including algebra 1, Spanish, geometry, statistics
Related Roswell, GA Tutors
Roswell, GA Accounting Tutors
Roswell, GA ACT Tutors
Roswell, GA Algebra Tutors
Roswell, GA Algebra 2 Tutors
Roswell, GA Calculus Tutors
Roswell, GA Geometry Tutors
Roswell, GA Math Tutors
Roswell, GA Prealgebra Tutors
Roswell, GA Precalculus Tutors
Roswell, GA SAT Tutors
Roswell, GA SAT Math Tutors
Roswell, GA Science Tutors
Roswell, GA Statistics Tutors
Roswell, GA Trigonometry Tutors
Nearby Cities With algebra 1 Tutor
Alpharetta algebra 1 Tutors
College Park, GA algebra 1 Tutors
Decatur, GA algebra 1 Tutors
Doraville, GA algebra 1 Tutors
Duluth, GA algebra 1 Tutors
Dunwoody, GA algebra 1 Tutors
Johns Creek, GA algebra 1 Tutors
Mableton algebra 1 Tutors
Marietta, GA algebra 1 Tutors
Milton, GA algebra 1 Tutors
Norcross, GA algebra 1 Tutors
Sandy Springs, GA algebra 1 Tutors
Smyrna, GA algebra 1 Tutors
Snellville algebra 1 Tutors
Woodstock, GA algebra 1 Tutors
|
{"url":"http://www.purplemath.com/Roswell_GA_algebra_1_tutors.php","timestamp":"2014-04-16T19:37:46Z","content_type":null,"content_length":"24160","record_id":"<urn:uuid:59e0218e-27a4-4067-ae62-3f333318f961>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00454-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Area between curves
July 5th 2010, 10:47 AM #1
Oct 2009
Area between curves
Find the area between $\[f(x)=sin^5(x)\]$ and $\[g(x)=sin^3(x)\]$ both with domain $\[[0,\pi]\]$.
On $\[[0,\pi]\]$ the point of intersection is 0 and $\[\frac{\pi}{2}\]$. So,
$=\int_{0}^{\frac{\pi}{2}}[(sin^2x)sin(x)-(sin^2x)^2sin(x)]\ dx$
$=\int_{0}^{\frac{\pi}{2}}[(1-cos^2x)sin(x)-(1-cos^2x)^2sin(x)]\ dx$
$u=cos(x); \ du=-sin(x)\ dx$
$\int_{0}^{1}[(1-u^2)-(1-u^2)^2]\ du$
$= u - \frac{1}{3}u^3 - u + \frac{2}{3}u^3-\frac{1}{5}u^5\biggr|_{0}^{1}$
$= 1 - \frac{1}{3} - 1 +\frac{2}{3} - \frac{1}{5} = \frac{2}{15}$
However, the answer in the back of the book says the answer is $\frac{4}{15}$. Did I do anything wrong?
July 5th 2010, 10:58 AM #2
MHF Contributor
Aug 2007
Yes. You failed to observe that your point of intersection represents also a point of symmetry. When you changed the limits from $[0,\pi]$ to $[0,\pi/2]$, you should have multiplied the entire expression by 2.
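To spell the correction out (my own summary, using the thread's integrand): $sin^3(x) - sin^5(x)$ is symmetric about $x = \frac{\pi}{2}$ on $[0,\pi]$, so $\int_{0}^{\pi}[sin^3(x) - sin^5(x)]\ dx = 2\int_{0}^{\frac{\pi}{2}}[sin^3(x) - sin^5(x)]\ dx = 2\cdot\frac{2}{15} = \frac{4}{15}$, which matches the answer in the back of the book.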
|
{"url":"http://mathhelpforum.com/calculus/150152-area-between-curves.html","timestamp":"2014-04-20T22:28:51Z","content_type":null,"content_length":"34810","record_id":"<urn:uuid:096b61b6-9c98-4c28-af61-1249820f7cd9>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00027-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Lifting WALKSAT-Based Local Search Algorithms for MAP Inference
Last modified: 2013-06-29
In this short position paper, we consider MaxWalkSAT, a local search algorithm for MAP inference in probabilistic graphical models, and lift it to the first-order level, yielding a powerful algorithm
for MAP inference in Markov logic networks (MLNs). Lifted MaxWalkSAT is based on the observation that if the MLN is monadic, namely if each predicate is unary then MaxWalkSAT is completely liftable
in the sense that no grounding is required at inference time. We propose to utilize this observation in a straight-forward manner: convert the MLN to an equivalent monadic MLN by grounding a subset
of its logical variables and then apply lifted MaxWalkSAT on it. It turns out however that the problem of finding the smallest subset of logical variables which when grounded will yield a monadic MLN
is NP-hard in general and therefore we propose an approximation algorithm for solving it.
|
{"url":"http://www.aaai.org/ocs/index.php/WS/AAAIW13/paper/viewPaper/7165","timestamp":"2014-04-19T09:34:44Z","content_type":null,"content_length":"10458","record_id":"<urn:uuid:34c799ce-3c3b-4193-99ca-f101673b1448>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00185-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Artificial Variable
artificial variable
Example: Product Processing
We continue the Product Processing example, adding slack, surplus and artificial variables to get an initial solution of the model. The initial simplex tableau in this phase is shown in Table 1:
Table 1: initial tableau without artificial variable
We have 6 constraints and so should have 6 solution variables. We have only 5 solution variables, and the unit vector with 1 in the 6th row (all other coefficients 0) is missing. That is why we have to add an artificial variable with 1 in the 6th row, i.e. to the 6th constraint. The corrected initial simplex tableau is shown in Table 2:
The artificial variables have no real interpretation. If the optimal solution contained artificial variables, it would have no interpretation either. To ensure that the artificial variables are eliminated from the optimal solution, they are given prohibitive coefficients in the objective function: a large negative value for maximisation and a large positive value for minimisation.
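For instance (a generic sketch only; M and a6 are illustrative symbols rather than values taken from the tables above): if the artificial variable added to the 6th constraint is a6, a maximisation objective z = c1*x1 + ... + cn*xn is replaced by z = c1*x1 + ... + cn*xn - M*a6, where M is a very large positive constant. Any solution that keeps a6 > 0 is then heavily penalised, so the artificial variable is driven out of the optimal basis; for a minimisation problem the penalty term is +M*a6 instead.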
|
{"url":"http://orms.pef.czu.cz/text/ArtificalVariable.html","timestamp":"2014-04-20T18:25:31Z","content_type":null,"content_length":"2255","record_id":"<urn:uuid:977d80cd-aa95-44d4-8f44-7f9d6e468ad1>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00531-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Byung-jae Kwak
WINTER TERM 1999
Byung-jae Kwak
EECS Department
University of Michigan
Thursday, March 18
4:30 - 5:30 PM
Room 1001 EECS
Nonlinear System Identification of Hydraulic Actuator Friction Dynamics
Friction is a highly nonlinear process that depends on many physical parameters. In this research, we focus on identifying the mechanism of the friction process of lubricated lip seal, sliding
against steel shaft at low speed, where "low speed" means a sliding speed where the lubricant film is on the order of surface roughness dimensions or less.
Lubricated sliding lip seals are important components in many hydraulic devices. The requirements imposed by today's high precision machines motivate the precise simulation of friction between these
seals and sliding components. The objective of this research is to develop models which successfully simulate the friction process with the velocity data of the sliding shaft, and the lip seal
friction data as the input and output signals, respectively.
The complexity of the friction process makes it very difficult to develop physically based models. This motivates us to use empirical system identification techniques. In this research, we take a macroscopic point of view, and instead of incorporating (microscopic) physical parameters into the model, we assume the system consists of nonlinear and linear components whose characteristics can be
described by nonlinear functions or scalars. There are many advantages to this approach, as follows. The modeling process can be much simpler than that based on physics, and often gives a model with fewer parameters. To some extent, the macroscopic parameters can give a more intuitive interpretation of the friction process than the physically based models.
In this research, we present two different approaches to nonlinear system identification. As our first approach, we develop Hammerstein-type models. Because of the non-stationary nature of the friction process and the time-invariant assumption of the model, an adaptive algorithm cannot be used for the estimation of the model parameters. We use least squares gradient search algorithms to estimate the model parameters. As our second approach, we present a state space model. In this model, we estimate some internal signals which are not accessible for measurement. For the estimation of these internal signals, an Extended Kalman filter is used. Since both the internal signals and the model parameters are unknown, they must be estimated at the same time. The method used for the simultaneous estimation is called "Extended Least Squares". A recursive Extended Least Squares algorithm for our state space model is presented.
return to Previous CSPL Seminars
|
{"url":"http://www.eecs.umich.edu/systems/kwakWin99.html","timestamp":"2014-04-20T08:22:13Z","content_type":null,"content_length":"3405","record_id":"<urn:uuid:d51dc2f1-fa32-4ff0-9ae0-cb3a75340319>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00189-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Fourier Inversion
February 22nd 2010, 12:51 PM
Fourier Inversion
Given that f is a function of moderate decrease, $\hat{f}(\xi)$ is continuous and also satisfies,
$|\hat{f}(\xi)| \leq \frac{C}{|\xi|^{1+\alpha}}$.
Use the inversion theorem to derive an expression for $f(x+h) - f(x)$. It should be in the form of an integral.
How would I do this? Would I use a change of variable?
Just typed out long possible solution but then realized inversion theorem is $d \xi$ not $dx$...
February 22nd 2010, 01:43 PM
Given that f is a function of moderate decrease, $\hat{f}(\xi)$ is continuous and also satisfies,
$|\hat{f}(\xi)| \leq \frac{C}{|\xi|^{1+\alpha}}$.
Use the inversion theorem to derive an expression for $f(x+h) - f(x)$. It should be in the form of an integral.
How would I do this? Would I use a change of variable?
Just typed out long possible solution but then realized inversion theorem is $d \xi$ not $dx$...
Well, by the inversion formula, $f(x+h)-f(x)=\frac{1}{2\pi}\int (e^{-i\xi(x+h)}-e^{-i\xi x})\hat{f}(\xi)d\xi$ . You can factor by $e^{-i\xi x}$, but that's almost all you can do.
February 22nd 2010, 01:49 PM
Yeah that's what I had. Thought there might be more to it, oh well!
February 22nd 2010, 03:00 PM
Actually that's not what I got I never read it right!
I think I'm getting majorly confused here. We have the Fourier Inversion defined as;
$f(x) = \int_{-\infty}^{\infty} \hat{f}(\xi) e^{2 \pi i x \xi} d \xi$...
So wouldn't it be.
$f(x+h) - f(x) = \int_{-\infty}^{\infty} (e^{2 \pi i (x+h) \xi} - e^{2 \pi i x \xi}) \hat{f}(\xi) d \xi$.
Where does the $\frac{1}{2 \pi}$ come from?
February 23rd 2010, 09:17 AM
There are at least three commonly used definitions of the Fourier transform, and I simply used another one... (namely $\mathcal{F}(f)(\xi)=\int e^{it\xi}f(t)dt$, without $2\pi$ in the exponent)
You switch from one to another by a change of variable. Your formula is correct with your definition.
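For the record, that change of variable can be spelled out as follows (my reconciliation of the two conventions in this thread, not part of the original answer): if $\hat{f}_1(\xi) = \int_{-\infty}^{\infty} f(t) e^{-2 \pi i t \xi} dt$ and $\hat{f}_2(\xi) = \int_{-\infty}^{\infty} f(t) e^{i t \xi} dt$, then $\hat{f}_2(\xi) = \hat{f}_1(-\frac{\xi}{2\pi})$. Substituting $\xi = -2\pi \eta$ (so $d\xi = -2\pi \, d\eta$) in $f(x) = \frac{1}{2\pi}\int e^{-i \xi x} \hat{f}_2(\xi) \, d\xi$ gives back $f(x) = \int e^{2 \pi i x \eta} \hat{f}_1(\eta) \, d\eta$, which is the formula quoted in the first post.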
|
{"url":"http://mathhelpforum.com/calculus/130162-fourier-inversion-print.html","timestamp":"2014-04-19T09:32:45Z","content_type":null,"content_length":"10098","record_id":"<urn:uuid:069c76bf-79af-41d4-86d0-c6e221347af2>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00329-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Simple question about a holomorphic function
September 11th 2012, 01:19 PM #1
Nov 2010
Proof: if f holomorphic then f(z)=λz+c
Hello. I need a bit of help or a tip maybe..
I am to show that if f is a holomorphic function that is of the form f(x+iy) = u(x) + i*v(y) where u and v are real functions,
then f(z) = λz+c where λ is a real number and c is a complex one.
How would I begin to prove this?
Thanks to everyone in advance.
Last edited by seijo; September 11th 2012 at 02:39 PM.
Re: Proof: if f holomorphic then f(z)=λz+c
You can't prove it, because it isn't true (unless $\lambda = 0$).
Notice that, when considered as f(z) = f(x,y), the assumption is that f(x,y) is holomorphic and $\partial{f}/\partial{y} = 0$.
But your f(x+iy) = λ(x+iy)+c varies with y (unless $\lambda = 0$).
The thing to prove is that the function must be constant (i.e. it actually is true, but only with $\lambda = 0$).
You can prove that by a straighforward application of the C-R equations.
Re: Proof: if f holomorphic then f(z)=λz+c
I see your point. But I made a typo. It's meant to be f(x+iy) = u(x) + i*v(y) not f(x+iy) = u(x) + i*v(x).
I suppose in this case it actually does make sense? Would the C-R equations still be the way to go?
Re: Proof: if f holomorphic then f(z)=λz+c
Yes, it drops straight out:
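Presumably the derivation meant to follow is along these lines (my reconstruction of the omitted steps, not the poster's own): with $f(x+iy) = u(x) + i v(y)$, the Cauchy-Riemann equations give $u'(x) = v'(y)$ for all $x$ and $y$, while the other equation $u_y = -v_x$ holds automatically since $u$ does not depend on $y$ nor $v$ on $x$. A function of $x$ alone that equals a function of $y$ alone must be a real constant $\lambda$, so $u(x) = \lambda x + a$ and $v(y) = \lambda y + b$, and therefore $f(z) = \lambda z + c$ with $c = a + ib$.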
|
{"url":"http://mathhelpforum.com/differential-geometry/203292-simple-question-about-holomorphic-function.html","timestamp":"2014-04-18T06:31:29Z","content_type":null,"content_length":"41457","record_id":"<urn:uuid:26b9a5c4-a1e4-4739-a1d6-62987cbc7946>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00273-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Excel Formula Errors & How To Deal With Them
#NULL!, #DIV/0!, #VALUE!,#REF!,#NAME?, #NUM! and #N/A
See Also: DEBUG / EVALUATE FORMULA ERRORS
As soon as you have discovered how to use formulas in Excel, you will likely need to learn how to correct or hide formula errors. The first thing you should know is what each error type means. Once
you understand what each error value means, correcting the formula becomes a LOT easier. Also note that a Formula can return an error IF a range it references contains an error cell.
To mask errors and return an alternate value in its place, it is best to return zero rather than empty text (""). This is because zeros are generally more downstream formula friendly than text.
To hide zeros on the Workbook level go to Tools>Options>View - Zero Values.
Custom Formats
Excel sees a cell's format as having four sections. These are, from left to right: Positive numbers; Negative numbers; Zero values; Text.
To hide zeros cell-by-cell use a Custom Number Format like 0.00;-0.00; where 0.00 is the desired format for non-zeros. Note the use of -0.00 for negatives.
Often occurs when you specify an intersecting range which in fact does NOT intersect. The space is the Intersect Operator and should be used correctly like;
=A1:F1 B1:B10
OR with named ranges
=Range1 Range2
In both cases Excel will return the cell value that intersects A1:F1 and B1:B10. In this case, B2.
However, if we used =A1:F1 B2:B10 Excel would display the #NULL! error as it is NOT possible for a row 1 range to intersect a column range that starts at
row 2.
Simply means you cannot divide a number by zero. For example, a formula such as =A1/A2
would result in #DIV/0! IF A2 contains nothing or zero. To correct this one could use one of 2 methods.
Note the use of the ERROR.TYPE Function. It is important to identify the error type so you are NOT masking another error type you SHOULD know about.
That is, we could use a blanket error test along the lines of =IF(ISERROR(A1/A2),0,A1/A2),
BUT, it is NOT good practice as you will end up masking most error values
when you SHOULD be masking only the #DIV/0! error.
Error.Type Function
For specifying error types. #NULL! = 1 #DIV/0! = 2 #VALUE! = 3 #REF! = 4 #NAME? = 5 #NUM! = 6 #N/A = 7
Possibly the most frequent error type. Occurs when the wrong type of argument or operand (operand: Items on either side of an operator in a
formula. In Excel, operands can be values, cell references, names, labels, and functions.) is used. For example, you may have a simple formula such as =A1+A2,
and IF either cell had text and NOT numbers, the #VALUE! error would be displayed. This is why one should NOT change the default horizontal alignment of data cells. That is, text is always left
aligned while numbers are right aligned by default. If you allow this and then widen a Column, you can tell at a glance what Excel is seeing as text and numbers.
This means a non-valid reference in your formula. Often occurs as the result of deleting rows, columns, cells or Worksheets. This is why deleting rows, columns, cells or Worksheets is bad practice. Also check named ranges if used.
You DO NOT want to mask this error as you SHOULD be aware of it.
This error means a Function used is not being recognized by Excel. Check for
typos and always type Excel Functions in lower case. This way, when you
enter the formula Excel will automatically convert it to upper case, if it
is recognized.
Another common reason is if you are using a custom function without the code being present in the same Workbook. Or, you are using a function that requires a specific Excel add-in to be installed, e.g.
the Analysis Toolpak
On the Tools menu, click Add-Ins. In the Add-Ins available list, select the Analysis ToolPak box, and then click OK.
If necessary, follow the instructions in the setup program.
As with the #REF! error, you don't want to mask this error.
This error occurs if you supply a non-valid number to a function argument, e.g. using a negative number when a positive is needed, or using a $ or %
symbol with the number.
This error can be masked so long as you are aware of the reason why. Again, use the Error.Type function as shown in #DIV/0!
The most common reason for this error is any of the Lookup functions. It means Excel cannot find a match for the value it's being told to find. There
are many ways to correct or mask this error out there, BUT most are wrong in their approach and force a LOT of unneeded over-heads.
Consider placing the Lookup functions on the same Worksheet as the Table (if not already), then create a simple reference (e.g. =IV1) to the cell(s) to get the result into the needed Worksheet. Doing this also opens up another opportunity, described below.
See Stop #N/A Error
Another reason is when Array formulas are used AND the referenced ranges are not of equal size in each array.
See Also: DEBUG / EVALUATE FORMULA ERRORS
|
{"url":"http://www.ozgrid.com/Excel/formula-errors.htm","timestamp":"2014-04-18T08:06:05Z","content_type":null,"content_length":"14457","record_id":"<urn:uuid:36d5a3d9-0270-41df-b9a1-69a1f71c7887>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00608-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The POWERMUTT Project
Introduction to Research Methods in Political Science
The POWERMUTT* Project (for use with SPSS)
*Politically-Oriented Web-Enhanced Research Methods for Undergraduates
Resources for introductory research methods courses in political science and related disciplines
When data are prepared for analysis by computer, values of variables are usually entered as numbers. Sometimes such coding is natural — for example, the population of a country or the number of votes
received by a candidate. Sometimes, artificial numerical codes are created for convenience of processing. In a file containing data on members of the U.S. Congress, for example, Democrats might be
coded numerically as 1, Republicans as 2, and independents as 3. Numerical, however, is not the same thing as quantitative. In fact, whether data are coded numerically or not, there are different
levels of measurement.
The values of a nominal variable do not indicate the amount of the thing being measured, nor are they in any particular order. If coded numerically, the numbers chosen are arbitrary. For example, if
we list the regions of the United States as Northeast, South, Midwest, and West, we are not indicating the amount of "regionness" each possesses, nor listing them in order of "regionness." We may
code the regions as "1," "2," "3," and "4" respectively, but this is done merely for convenience and in no way quantifies what we are doing. Each value, numerical or otherwise, is merely a label or
name (hence the term "nominal").
Sometimes the values of a variable are listed in order. (Alternatively, we say that the values are "ranked" or "rank ordered.") For example, the army orders (ranks) military personnel from general to
private. At a college or university, class standing of undergraduates (freshman to senior) is another example of an ordinal variable. In both of these examples, the values of the variable in question
(military rank or class standing) are ranked from highest to lowest or vice versa. There are other kinds of ordering. For example, respondents in a survey may be asked to identify their political
philosophy as "very liberal," "liberal," "moderate," "conservative," or "very conservative," creating a scale rank ordered from most liberal to most conservative.
Sometimes, in addition to being ordered, the differences (or intervals) between any two adjacent values on a measurement scale are the same. For example, the difference in temperature between 80
degrees Fahrenheit and 81 degrees is the same as that between 90 degrees and 91 degrees. When each interval represents the same increment of the thing being measured, the measure is called an
interval variable.
Finally, in addition to having equal intervals, some measures also have an absolute zero point. That is, zero represents the absence of the thing being measured. Height and weight are obvious
examples. Physicists sometimes use the Kelvin temperature scale, in which zero means the complete absence of energy. The same is not true of the Fahrenheit or Celsius (Centigrade) scales. Zero
degrees Celsius, for example, represents the freezing point of water at sea level, but this does not mean that there is no temperature at this point. The choice to put zero degrees at this point on
the scale is arbitrary. There is no particular reason why scientists could not have chosen instead the freezing point of beer in Golden, Colorado (other than that water is a more common substance, at
least for most successful scientists). With an absolute zero point, you can calculate ratios (hence the name). For example, $20 is twice as much as $10, but 60 degrees Fahrenheit is not really twice
as hot as 30 degrees. Ratio data is fully quantitative: it tells us the amount of the variable being measured. The percentage of votes received by a candidate, Gross Domestic Product per Capita, and
felonies per 100,000 population are all ratio variables.
Dichotomous variables (those with only two values) are a special case, and may sometimes be treated as nominal, ordinal, or interval. Take, for example, political party affiliation in a two-party
legislature. Party is, on its face, a pure example of a nominal variable, with the values of the variable being simply the names of the parties (or arbitrary numbers used, for convenience, in place
of the names). On the other hand, we could treat party (and other dichotomous variables) as ordinal, since there are only two possible ways for the values to be ordered, and it makes no difference
which way is chosen. There is, therefore, no way that they can be listed out of order.
For certain purposes, we can even treat dichotomous variables as interval, since there is only one interval (the difference between Party A and Party B), which is obviously equal to itself.[1]
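As a concrete illustration of this last point — not part of the POWERMUTT materials, and with invented party labels and column names — the short Python/pandas sketch below recodes a dichotomous variable as a 0/1 "dummy" so that it can be used with techniques that assume interval-level data.

```python
# Illustration only (not from any POWERMUTT dataset): recoding a
# dichotomous variable as a 0/1 dummy variable.
import pandas as pd

legislators = pd.DataFrame({"party": ["A", "B", "B", "A", "B"]})

# get_dummies creates one 0/1 column per category; with two parties,
# dropping the first leaves the single dummy column described in the
# footnote on dummy variables below.
dummies = pd.get_dummies(legislators["party"], prefix="party", drop_first=True)
print(dummies)
```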
Level of measurement is important because the higher the level of measurement of a variable (note that "level of measurement" is itself an ordinal measure) the more powerful are the statistical
techniques that can be used to analyze it. With nominal data, you can count the frequency with which each value of a variable occurs. A person's party identification, for example, is a nominal
variable (with the values of the variable being "Democrat," "Republican," “Green," "Libertarian," etc.), and so you can take data from a public opinion poll and count the number of respondents in the
sample identifying with each party. You can also calculate each party's identifiers as a percentage of the sample total. You can calculate joint frequencies and percentages (how many and what percent
of Asian Americans are Republicans, for example). You can also use certain measures that tell you how strong the overall relationship is between party and ethnicity, and the likelihood that the
relationship occurred by chance.
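The operations just described for nominal data — frequencies, percentages, and joint (cross-tabulated) distributions — can be sketched in a few lines of Python with pandas. The survey values below are made up purely for illustration and are not drawn from any of the POWERMUTT datasets.

```python
# Made-up survey responses, used only to illustrate the operations above.
import pandas as pd

survey = pd.DataFrame({
    "party":     ["Democrat", "Republican", "Green", "Democrat", "Republican"],
    "ethnicity": ["Asian American", "White", "Asian American", "Latino", "White"],
})

print(survey["party"].value_counts())                     # frequency of each party
print(survey["party"].value_counts(normalize=True))       # each party's share of the sample
print(pd.crosstab(survey["ethnicity"], survey["party"]))  # joint frequencies
```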
On the other hand, there are other operations you cannot legitimately perform with nominal data. Even if you use numbers to label candidates (e.g., 1 = Democrat, 2 = Republican, 3 = Green, etc.), you
cannot very well say that Democrat plus Republican equals Green, or that Green divided by Republican is half way between Democrat and Republican. Unfortunately, there are many statistical techniques
that require higher levels of measurement.
With ordinal data, you can employ techniques that take into account the fact that the values of a variable are listed in a meaningful order. With interval data, you can go even further and use
powerful techniques that assume a measurement scale of equal intervals. As it happens, there are very few techniques in the social sciences that require ratio data, and so some textbooks ignore the
distinction between interval and ratio scales.
If you use a technique that assumes a higher level of measurement than is appropriate for your data, you risk getting a meaningless answer. On the other hand, if you use a technique that fails to
take advantage of a higher level of measurement, you may overlook important things about your data. (Note: in addition to level of measurement, many statistical techniques also require other
assumptions about your data. For example, even if a variable is interval, some otherwise appropriate techniques may yield misleading results if the variable includes some values that are extremely
high or low relative to the rest of the distribution.)
The distinctions between levels of measurement are not always hard and fast.
Sometimes it depends on the underlying concept being measured. This applies, for example, to the question of whether to treat a dichotomous variable as nominal or ordinal. Do our values indicate two
distinct categories (e.g., male and female), or do we think of them as two points along a spectrum (e.g., for or against capital punishment, since some people may favor or oppose capital punishment
more strongly than others)?
In survey research, independents are often thought of as being somewhere in between Democrats and Republicans, and so measures of party identification are usually treated as ordinal. On the other hand, if you were studying the U.S. Senate, you would find that the only independents currently serving (as of the 113th Congress, 2013-2015) are Bernie Sanders of Vermont and Angus King of Maine. While King might in some senses be considered "between" the Republican and Democratic parties, the same could hardly be said of Sanders, one of the most liberal members of the chamber. (Before coming to Congress, he had run as a Socialist in winning election as mayor of Burlington, Vermont.)
Sometimes the question of level of measurement hinges on the precise nature of the measure itself. For example, the American National Election Study has for many years been using "feeling
thermometers." Respondents are asked to locate a person (e.g., a presidential candidate) or a category of people (Democrats, Republicans, feminists, evangelical Christians, Latinos, etc.) on a scale
ranging from 0 to 100, with higher numbers representing warmer feelings toward the person or category of people in question. Most researchers using these variables have treated them as interval.
Some, however, have raised doubts about this practice. For example, does the difference between a rating of, say, 60 and 70 really mean the same thing as the difference between 90 and 100?
In designing research, there can be tradeoffs between having data that are at a higher level of measurement and other considerations. Aggregate data (data about groups of people) are generally
interval or ratio, but usually provide only indirect measures of how people think and act. Individual data get at these things more directly, but are usually only nominal or ordinal. Official
election returns, for example, can provide us with ratio level data about the distribution of votes in each precinct. These data, however, tell us little about why individual people vote the way they
do. Survey research (public opinion polling), which provides data that are for the most part only nominal or ordinal, allows us to explore such questions much more extensively and directly.
Sometimes you will find other terms used to describe the level of measurement of variables. SPSS, for example, distinguishes among nominal, ordinal, and scale (that is, interval or ratio) variables.
Some texts distinguish between nonparametric (nominal or ordinal) and parametric (interval or ratio) variables. In describing different statistical procedures, we will sometimes distinguish between
categorical and continuous variables. Categorical variables generally consist of a small number of values, or categories, and are usually nominal or ordinal. The values of continuous variables
represent a large or even infinite number of possible points along a scale, and are interval or ratio.
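As a rough sketch of how these distinctions might be carried into software other than SPSS, the Python/pandas fragment below tags one variable as nominal (unordered categories), one as ordinal (ordered categories), and one as scale (plain numbers). The variable names, categories, and values are invented for illustration.

```python
# A small sketch of the nominal / ordinal / scale distinction in pandas dtypes;
# the variables and codings below are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "region":   ["Northeast", "South", "Midwest", "West"],            # nominal
    "ideology": ["liberal", "moderate", "conservative", "moderate"],  # ordinal
    "gdp_pc":   [61000.0, 48000.0, 52000.0, 59000.0],                 # scale (ratio)
})

df["region"] = df["region"].astype("category")  # unordered categories
df["ideology"] = pd.Categorical(
    df["ideology"],
    categories=["liberal", "moderate", "conservative"],
    ordered=True,  # values are ranked, but equal intervals are not assumed
)
print(df.dtypes)
```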
categorical variable
continuous variable
dichotomous variable
interval variable
levels of measurement
nominal variable
nonparametric variable
ordinal variable
parametric variable
ratio variable
scale variable
1. Start SPSS, and open anes08s.sav. In Variable View, notice that SPSS uses three categories of measurement: nominal, ordinal, and scale (equivalent to interval and ratio). Notice also that almost
all of the variables are either nominal or ordinal. This is typical with data on individuals, such as survey data. Now open countries.sav and do the same. Notice that almost all of the variables are
scale, as is usually the case with aggregate data.
2. Open the codebooks for the other datasets included in this project. Classify each variable as nominal, ordinal, interval, or ratio. Check your answers by opening the dataset in SPSS and examining
the Variable View (noting again that SPSS uses the term “scale” for both interval and ratio data). (In some cases, there may be more than one correct answer, depending on what assumptions are made.)
With each of the datasets included in this project, care has been taken to correctly categorize the level of measurement of variables. Remember, however, that the level of measurement of some
variables may depend on how the variable is used. Also, when using datasets other than those provided with POWERMUTT, do not assume without checking that the author has bothered to verify each
variable's level of measurement.
Lane, David, et al. “Levels of Measurement,” Online Statistics: A Multimedia Course of Study. http://onlinestatbook.com/chapter1/levels_of_measurement.html.
University of Cambridge. "Levels of Measurement," Universities' Collaboration in eLearning. http://www.ucel.ac.uk/showroom/levels_of_measurement/Default.html.
[1] See the section on "dummy" variables under the regression analysis Topic.
Last updated April 28, 2013.
© 2003---2013 John L. Korey. Licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.
Walter W. Piegorsch
Professor: 14 August 2006 to present, Department of Mathematics, University of Arizona, Tucson, AZ.
Member: 21 May 2010 to present, Graduate Interdisciplinary Program in Applied Mathematics, University of Arizona, Tucson, AZ.
Joint Faculty: 20 April 2010 to present, School of Information: Science, Technology, and Arts (SISTA), University of Arizona, Tucson, AZ.
Professor: 31 December 2007 to present, Department of Agricultural & Biosystems Engineering, University of Arizona, Tucson, AZ.
Director, Statistical Research & Education: 8 October 2007 to present, BIO5 Institute, University of Arizona, Tucson, AZ.
Member: 11 January 2007 to present, Graduate Interdisciplinary Program in Statistics, University of Arizona, Tucson, AZ.
Professor: 1 January 2007 to present, College of Public Health, University of Arizona, Tucson, AZ.
Member, Research Faculty: 14 August 2006 to present, BIO5 Institute, University of Arizona, Tucson, AZ.
Chair: 9 April 2007 to 21 March 2012, Graduate Interdisciplinary Program in Statistics, University of Arizona, Tucson, AZ.
Affiliate Member: 4 May 2007 to 21 May 2010, Graduate Interdisciplinary Program in Applied Mathematics, University of Arizona, Tucson, AZ.
Professor: 1 July 1996 to 15 August 2006, Department of Statistics, University of South Carolina, Columbia, SC.
Research Affiliate: 1 July 2004 to 15 August 2006, Hazards Research Laboratory, University of South Carolina, Columbia, SC.
Member: 3 September 2003 to 30 June 2006, USC NanoCenter, Columbia, SC.
Associated Faculty: 17 October 1995 to 15 May 2006, School of the Environment, University of South Carolina, Columbia, SC.
Adjunct Professor: 1 July 1999 to 15 May 2006, Department of Epidemiology and Biostatistics, University of South Carolina, Columbia, SC.
Senior Scientific Member: 13 February 2004 to 15 May 2006, South Carolina Cancer Center, Columbia, SC.
Adjunct Professor: 1 January 1998 to 31 December 2002, Department of Biostatistics, University of North Carolina, Chapel Hill, NC.
Director of Undergraduate Studies: 16 August 1998 to 15 August 2002, Department of Statistics, University of South Carolina, Columbia, SC.
Adjunct Associate Professor: 29 August 1994 to 30 June 1999, Department of Epidemiology and Biostatistics, University of South Carolina, Columbia, SC.
Adjunct Associate Professor: 1 January 1993 to 31 December 1997, Departments of Statistics and Biostatistics, University of North Carolina, Chapel Hill, NC.
Associate Professor: 16 August 1993 to 30 June 1996, Department of Statistics, University of South Carolina, Columbia, SC.
Adjunct Associate Professor: 1 January 1988 to 31 December 1993, Department of Statistics, North Carolina State University, Raleigh, NC.
Mathematical Statistician: 6 August 1984 to 27 July 1993, Statistics and Biomathematics Branch, National Institute of Environmental Health Sciences, Research Triangle Park, NC.
Accredited Professional Statistician (PStat®), American Statistical Association, 11 October 2010 to 31 July 2016.
HONORS and OFFICES HELD:
Elected to Regional Committee/RECOM (2009-2011) of Western North American Region (WNAR), International Biometric Society.
Elected Chairman (2004) of American Statistical Association Section on Statistics and the Environment (Chair-Elect, 2003; Past-Chair, 2005).
Elected to Council (2002-2005) of the International Biometric Society.
Recipient, 2000 University of South Carolina Educational Foundation Research Award for Science, Mathematics, and Engineering.
Elected Vice-Chair (1997-1999) of American Statistical Association Council of Sections.
Fellow (1995), American Statistical Association.
Member (by election, 1995) of the International Statistical Institute.
Elected Secretary (1995-1996) and to Regional Committee/RECOM (1999-2001) of Eastern North American Region (ENAR), International Biometric Society.
Distinguished Achievement Medal (1993), American Statistical Association Section on Statistics and the Environment.
Elected (1983) to New York Alpha Chapter, Mu Sigma Rho, The National Honor Society in Statistics.
Elected (1979) to New York Eta Chapter, Phi Beta Kappa.
North Carolina State University
STAT 511, Experimental Statistics for the Biological Sciences I
University of South Carolina
SCCC 312A, Proseminar in Statistics
STAT 201, Elementary Statistics
STAT 205, Elementary Statistics for the Biological and Life Sciences*
STAT 511/MATH 511, Probability
STAT 512, Mathematical Statistics
STAT 513, Theory of Statistical Inference
STAT 519, Sampling
STAT 700, Applied Statistics I
STAT 701, Applied Statistics II
STAT 708/BIOS 808, Environmetrics*
STAT 712, Mathematical Statistics I
STAT 713, Mathematical Statistics II
STAT 714, Linear Statistical Models
STAT 715, Analysis of Variance
STAT 775/BIOS 815, Generalized Linear Models*
BIOS 794, Topics in Biostatistics
University of Arizona
ISTA 116, Statistical Foundations for the Information Age*
MATH 564/STAT 564, Theory of Probability*
MATH 566/STAT 566, Theory of Statistics*
MATH 571A/STAT 571A, Advanced Statistical Regression Analysis*
STAT 574E/MATH 574E/CPH 574E, Environmental Statistics*
* Developed or co-developed.
2009 - 2012: Public Health Service Extramural Research Project #R21-ES0016791-01, Model-independent benchmark dose estimation for quantitative risk assessment, with Rabi Bhattacharya.
2008 - 2013: Public Health Service Training Grant #T32-ES016652, Human disease and the interplay between genes and the environment, with Terrence J. Monks (P.I.), et al.
2007 - 2009: National Science Foundation Research Grant #CMMI-0623991: The Recovery Divide: Sociospatial disparities in disaster recovery from Hurricane Katrina along Mississippi's gulf coast, with
Susan L. Cutter (P.I.), Lynn Weber, Jerry Mitchell, and Mark Smith.
2005 - 2009: Environmental Protection Agency STAR Research Project #RD-832419, Model selection and multiplicity adjustment for benchmark analysis in quantitative risk assessment, with R. Webster West and Edsel A. Peña.
2005 - 2007: Department of Homeland Security National Center for the Study of Terrorism and Responses to Terrorism (START), with Gary La Free (Director), Susan L. Cutter (USC Project Director), et al.
2004 - 2007: National Science Foundation Research Grant #CMS-0433158: Place-based decision support for spatial and temporal transference of risk and hazards, with Susan L. Cutter (P.I.), Madelyn
Fletcher, Cary J. Mock, John R. Rose, and John M. Shafer.
2004: South Carolina Biomedical Research Infrastructure Network Award: Multiple comparisons for analysis of mutant spectra, with Don Edwards (P.I.).
2003 - 2007: National Science Foundation Research Grant #SES-0304448: From laboratory to society: Developing an informed approach to nanoscale science and technology, with Davis W. Baird (P.I.),
David M. Bérubé, Robert G. Best, Otávio Bueno, R.I.G. Hughes, George Khushf, Loren W. Knapp, Steven W. Lynn, Edward C. Munn, Catherine J. Murphy, Richard P. Ray, Chistopher T. Robinson, Lowndes F.
Stephens, and Robin F. Wilson.
2002 - 2003: National Science Foundation Research Grant #CTS-0210552: Philosophical and social dimensions of nanoscale research: Developing a rational approach to a newly emerging science and
technology, with Davis W. Baird (P.I.), Alfred Nordmann, David M. Bérubé, Robert G. Best, R.I.G. Hughes, George Khushf, Loren W. Knapp, Steven W. Lynn, Edward C. Munn, Richard P. Ray, Chistopher T.
Robinson, Lowndes F. Stephens, Robin F. Wilson, and Christine Schweickert.
2001: Univ. of South Carolina Office of Research Award: Environmental statistics, with Don Edwards (P.I.).
2000: South Carolina Commission on Higher Education Research Project: On the analysis and interpretation of biological sequence data, with Austin L. Hughes (P.I.), László A. Székely, and John R. Rose
1997 - 2008: Public Health Service Extramural Research Project #R01-CA076031-10, Low-dose risk bounds via simultaneous confidence bands, with R. Webster West and Ralph L. Kodell.
1987 - 1993: Public Health Service Intramural Research Project #Z01-ES048001-06: Statistical analysis of data from genotoxicological experiments.
Editor-in-Chief, Environmetrics (2010-present; Editor, 2009; Associate Editor, 1992-2008).
Joint Editor, Journal of the American Statistical Association Theory & Methods Section (2006-2008; Joint Editor-Elect, 2005).
Member, International Advisory Board, Sultan Qaboos University Journal for Science (2004-present).
Member, Editorial Board, Environmental and Ecological Statistics (2004-2011).
Guest Editor, Statistical Science Special Section on Statistics and the Environment, November 2003.
Co-Guest Editor, Environmental and Ecological Statistics Special Issue on Modern Benchmark Analysis for Environmental Risk Assessment, March 2009; Environmetrics Special Issue on Statistical
Challenges in Environmental Health, March 2003; Environmetrics Special Issue on Environmental Biometry, December 1993.
Member, Journal Management Committee, Journal of Agricultural, Biological, and Environmental Statistics (2000-2004).
Associate Editor, Biometrics (1997-2004).
Member, Editorial Board, Mutation Research (1994-2008).
Associate Editor, Environmental and Ecological Statistics (1994-2004).
Member, Editorial Board, Environmental and Molecular Mutagenesis (1994-2004).
Member, Editorial Review Board, Environmental Health Perspectives (1993-1996).
Associate Editor, Journal of the American Statistical Association Biopharmaceutical Special Section (1987-1989); Theory & Methods Section (1996-2004).
Refereed over 215 manuscripts for various statistics, mathematics, epidemiology, toxicology, and environmental science journals, and over 70 research proposals for government or other funding
• Books: 7
• Journal Articles: 119
• Book Chapters/Proceedings: 60
Piegorsch, W.W., Xiong, H., Bhattacharya, R.N., and Lin, L. Benchmark dose analysis via nonparametric regression modeling. Risk Analysis 34, 135-151 (2014).
Cutter, S.L., Emrich, C.T., Mitchell, J.T., Piegorsch, W.W., Smith, M.M., and Weber, L. (2014). Hurricane Katrina and the Forgotten Coast of Mississippi. Cambridge: Cambridge University Press.
Piegorsch, W.W., An, L., Wickens, A.A., West, R.W., Peña, E.A., and Wu, W. Information-theoretic model-averaged benchmark dose analysis in environmental risk assessment. Environmetrics 24, 143-157
Deutsch, R.C. and Piegorsch, W.W. Benchmark dose profiles for joint-action continuous data in quantitative risk assessment. Biometrical Journal 55, 741-54 (2013).
El-Shaarawi, A.H. and Piegorsch, W.W. (eds.) Encyclopedia of Environmetrics, 2nd edn., Vols. 1-6. Chichester: John Wiley & Sons (2012).
Deutsch, R.C. and Piegorsch, W.W. Benchmark dose profiles for joint-action quantal data in quantitative risk assessment. Biometrics 68, 1313–1322 (2012).
Piegorsch, W.W., Xiong, H., Bhattacharya, R.N., and Lin, L. Nonparametric estimation of benchmark doses in environmental risk assessment. Environmetrics 23, 717–728 (2012).
West, R.W., Piegorsch, W.W., Peña, E.A., An, L., Wu, W., Wickens, A.A., Xiong, H., and Chen, W. The impact of model uncertainty on benchmark dose estimation. Environmetrics 23, 706-716 (2012).
Shane, B.S., Zeiger, E., Piegorsch, W.W., Booth, E.D., Goodman, J.I., and Peffer, R.C. Re-evaluation of the big blue® mouse assay of propiconazole suggests lack of mutagenicity. Environmental and
Molecular Mutagenesis 53, 1-9 (2012).
Piegorsch, W.W. and Shaked, M. The impact of Lehmann's work in applied probability. Selected Works of E.L. Lehmann, J. Rojo, ed., New York: Springer-Verlag, pp. 807-813 (2012).
Piegorsch, W.W. and Padgett, W.J. Sequential probability ratio test. International Encyclopedia of Statistical Science, M. Lovric, ed., Heidelberg: Springer-Verlag, Part 19, pp. 1305-1308 (2011).
Piegorsch, W.W. Translational benchmark risk analysis. Journal of Risk Research 13, 653-667 (2010).
Deutch, R.C., Grego, J.M., Habing, B.T., and Piegorsch, W.W. Maximum likelihood estimation with binary-data regression models: small-sample and large-sample features. Advances and Applications in
Statistics 14, 101-116 (2010).
Piegorsch, W.W. and Bailer, A.J. Combining information. Wiley Interdisciplinary Reviews: Computational Statistics 1, 354-360 (2009).
Liu, W., Hayter, A.J., Piegorsch, W.W., and Al-Khine, P. Comparison of hyperbolic and constant width simultaneous confidence bands in multiple linear regression under MVCS criterion. Journal of
Multivariate Analysis 100, 1432-1439 (2009).
Buckley, B.E., Piegorsch, W.W., and West, R.W. Confidence limits on one-stage model parameters in benchmark risk assessment. Environmental and Ecological Statistics, 16, 53-62 (2009).
West, R.W., Nitcheva, D.K., and Piegorsch, W.W. Bootstrap methods for simultaneous benchmark analysis with quantal response data. Environmental and Ecological Statistics, 16, 63-73 (2009).
Piegorsch, W.W. and Schuler, E. Communicating the risks, and the benefits, of nanotechnology. International Journal of Risk Assessment and Management 10, 57-69 (2008).
Schmidtlein, M.C., Deutch, R.C., Piegorsch, W.W., and Cutter, S.L. A sensitivity analysis of the Social Vulnerability Index. Risk Analysis 28, 1099-1114 (2008).
Buckley, B.E. and Piegorsch, W.W. Simultaneous confidence bands for Abbott-adjusted quantal response models. Statistical Methodology 5, 209-219 (2008).
Liu, W., Lin, S., and Piegorsch, W.W. Construction of exact simultaneous confidence bands for a simple linear regression model. International Statistical Review 76, 39-57 (2008).
Piegorsch, W.W., Cutter, S.L., and Hardisty, F. Benchmark analysis for quantifying urban vulnerability to terrorist incidents. Risk Analysis 27, 1411-1425 (2007).
Borden, K.A., Schmidtlein, M.C., Emrich, C.T., Piegorsch, W.W. and Cutter, S.L. Vulnerability of U.S. cities to environmental hazards. Journal of Homeland Security and Emergency Management 4 (2),
Art. 5 (2007).
Nitcheva, D.K., Piegorsch, W.W., and West, R.W. On use of the multistage dose-response model for assessing laboratory animal carcinogenicity. Regulatory Toxicology and Pharmacology 48, 135-147
Piegorsch, K.M., Watkins, K.W., Piegorsch, W.W., Reininger, B.M., Corwin, S., and Valois, R.F. Ergonomic decision-making: A conceptual framework for experienced practitioners from backgrounds in
industrial engineering and physical therapy. Applied Ergonomics 37, 587-598 (2006).
Piegorsch, W.W., Nitcheva, D.K., and West, R.W. Excess risk estimation under multistage model misspecification. Journal of Statistical Computation and Simulation, 76, 423-430 (2006).
Wu, Y., Piegorsch, W.W., West, R.W., Tang, D., Petkewich, M.O., and Pan, W. Multiplicity-adjusted inferences in risk assessment: Benchmark analysis with continuous response data . Environmental and
Ecological Statistics, 13, 125-141 (2006).
Piegorsch, W.W. and West, R.W. Benchmark analysis: Shopping with proper confidence. Risk Analysis, 25, 913-920 (2005).
Piegorsch, W.W., West, R.W., Pan, W., and Kodell, R.L. Simultaneous confidence bounds for low-dose risk assessment with non-quantal data. Journal of Biopharmaceutical Statistics 15, 17-31 (2005).
Nitcheva, D.K., Piegorsch, W.W., West, R.W., and Kodell, R.L. Multiplicity-adjusted inferences in risk assessment: Benchmark analysis with quantal response data. Biometrics 61, 277-286 (2005).
Piegorsch, W.W. and Bailer, A.J. Analyzing Environmental Data. Chichester: John Wiley & Sons (2005).
Piegorsch, W.W., West, R.W., Pan, W., and Kodell, R.L. Low-dose risk estimation via simultaneous statistical inferences. Journal of the Royal Statistical Society, series C (Applied Statistics) 54,
245-258 (2005).
Piegorsch, W.W. Mutagenicity study, Encyclopedia of Biostatistics (2nd edn.), P. Armitage and T. Colton, eds. Chichester: John Wiley & Sons, 4, 3590-3595 (2005).
Piegorsch, W.W., Simmons, S.J., and Zeiger, E. Data mining potency estimators from toxicological databases. Bulletin of Informatics and Cybernetics 36, 51-62 (2004).
Piegorsch, W.W. Sample sizes for improved binomial confidence intervals. Computational Statistics & Data Analysis 46, 309-316 (2004).
Al-Saidy, O.M., Piegorsch, W.W., West, R.W., and Nitcheva, D.K. Confidence bands for low-dose risk estimation with quantal response data. Biometrics 59, 1058-1064 (2003).
Gielazyn, M.L., Ringwood, A.H., Piegorsch, W.W., and Stancyk, S.E. Detection of oxidative DNA damage in isolated marine bivalve hemocytes using the comet assay and formamidopyrimidine glycosylase
(Fpg). Mutation Research 542, 15-22 (2003).
Pan, W., Piegorsch, W.W., and West, R.W. Exact one-sided simultaneous confidence bands via Uusipaikka's method. Annals of the Institute of Statistical Mathematics 55, 243-250 (2003).
Simmons, S.J., Piegorsch, W.W., Nitcheva, D.K., and Zeiger, E. Combining environmental information via hierarchical modeling: An example using mutagenic potencies. Environmetrics 14, 159-168 (2003).
Tu, W. and Piegorsch, W.W. Empirical Bayes analysis for a hierarchical Poisson generalized linear model. Journal of Statistical Planning and Inference 111, 235-248 (2003).
Piegorsch, W.W. and Edwards, D. What shall we teach in environmental statistics? (with discussion). Environmental and Ecological Statistics 9, 125-150 (2002).
El-Shaarawi, A.H. and Piegorsch, W.W. (eds.) Encyclopedia of Environmetrics Vols. 1-4. Chichester: John Wiley & Sons (2002).
Piegorsch, W.W. and Richwine, K.A. Large-sample pairwise comparisons among multinomial proportions with an application to analysis of mutant spectra. Journal of Agricultural, Biological, and
Environmental Statistics 6, 305-325 (2001).
Turner, S.D., Tinwell, H., Piegorsch, W.W., Schmezer, P., and Ashby, J. The male rat carcinogens limonene and sodium saccharin are not mutagenic to male BigBlue(TM) rats. Mutagenesis 16, 329-332
Garren, S.T., Smith, R.L., and Piegorsch, W.W. Bootstrap goodness-of-fit test for the beta-binomial model. Journal of Applied Statistics 28, 561-571 (2001).
Garren, S.T., Smith, R.L., and Piegorsch, W.W. On a likelihood-based goodness-of-fit test of the beta-binomial model. Biometrics 56, 947-949 (2000).
Piegorsch, W.W., West, R.W., Al-Saidy, O.M., and Bradley, K.D. Asymmetric confidence bands for simple linear regression over bounded intervals. Computational Statistics & Data Analysis 34, 193-217
Bailer, A.J. and Piegorsch, W.W. From quantal counts to mechanisms and systems: The past, present and future of biometrics in environmental toxicology (Editors' Invited Paper). Biometrics 56, 327-336
Bailer, A.J. and Piegorsch, W.W. Quantitative potency estimation to measure risk with bio-environmental hazards, Handbook of Statistics Vol. 18: Bioenvironmental and Public Health Statistics, P.K.
Sen and C.R. Rao, eds., New York: North-Holland/Elsevier (2000), pp. 441-463.
Tu, W. and Piegorsch, W.W. Parametric Empirical Bayes estimation for a class of extended log-linear regression models. Environmetrics 11, 271-285 (2000).
Piegorsch, W.W., Simmons, S.J., Margolin, B.H., Zeiger, E., Gidrol, X.M., and Gee, P. Statistical modeling and analyses of a base-specific Salmonella mutagenicity assay. Mutation Research 467, 11-19
Slaton, T.L., Piegorsch, W.W., and Durham, S.D. Estimation and testing with overdispersed proportions using the beta-logistic regression model of Heckman and Willis. Biometrics 56, 125-132 (2000).
Gielazyn, M.L., Stancyk, S.E., and Piegorsch, W.W. Experimental evidence of subsurface feeding by the burrowing ophiuroid Microphiopholis gracillima (Stimpson) (Echinodermata). Marine Ecology
Progress Series 184, 129-138 (1999).
Piegorsch, W.W. Statistical aspects for combining information and meta-analysis in environmental toxicology. Journal of Environmental Science and Health, Part C - Environmental Carcinogenesis &
Ecotoxicology Reviews 16, 83-99 (1998).
Nychka, D.K., Cox, L.H., and Piegorsch, W.W., eds. Case Studies in Environmental Statistics, New York: Springer-Verlag (1998).
Piegorsch, W.W., Smith, E.P., Edwards, D., and Smith, R.L. Statistical Advances in Environmental Science. Statistical Science 13, 186-208 (1998).
Piegorsch, W.W. An introduction to binary response regression and associated trend analyses. Journal of Quality Technology 30, 269-281 (1998).
Piegorsch, W.W. and Bailer, A.J. Experimental design principles for animal studies in pharmaceutical development. Design and Analysis of Animal Studies in Pharmaceutical Development, S.-C. Chow and
J.-P. Liu, eds., New York: M. Dekker, pp. 23-42 (1998).
Piegorsch, W.W. and Bailer, A.J. Statistics for Environmental Biology and Toxicology, Boca Raton, FL: Chapman & Hall/CRC Press (1997). Also available: Solutions Manual for Statistics for
Environmental Biology and Toxicology, Boca Raton, FL: Chapman & Hall/CRC Press (1999).
Beatty, D.A. and Piegorsch, W.W. Optimal statistical design for toxicokinetic studies. Statistical Methods in Medical Research 6, 359-376 (1997).
Kohlmeier, L., DeMarini, D.M., and Piegorsch W.W. Gene-nutrient interactions in nutritional epidemiology, Design Concepts in Nutritional Epidemiology (2nd edn.), B.M. Margetts and Michael Nelson,
eds. Oxford: Oxford University Press, 312-337 (1997).
Piegorsch, W.W., Lockhart, A.C., Carr, G.J., Margolin, B.H., Brooks, T., Douglas, G.R., Liegibel, U.M., Suzuki, T., Thybaud, V., van Delft, J.H.M., and Gorelick, N.J. Sources of variability in data
from a positive selection lacZ transgenic mouse mutation assay: An interlaboratory study. Mutation Research 388, 249-289 (1997).
Piegorsch, W.W. and Casella, G. Empirical Bayes estimation for logistic regression and extended parametric regression models. Journal of Agricultural, Biological, and Environmental Statistics 1,
231-249 (1996).
Cariello, N.F. and Piegorsch, W.W. The Ames test: The two-fold rule revisited. Mutation Research 369, 23-31 (1996).
Green, A.S., Chandler, G.T., and Piegorsch, W.W. Stage-specific toxicity of sediment-associated chlorpyrifos to a marine, infaunal copepod. Environmental Toxicology and Chemistry 15, 1182-1231
Cox, L.H. and Piegorsch, W.W. Combining environmental information. I: Environmental monitoring, measurement and assessment. Environmetrics 7, 299-308 (1996).
Piegorsch, W.W. and Cox, L.H. Combining environmental information. II: Environmental epidemiology and toxicology. Environmetrics 7, 309-324 (1996).
Piegorsch, W.W. Statistical analysis of heritable mutagenesis data. Toxicology and Risk Assessment, A.M. Fan and L.W. Chang, eds., New York: M. Dekker, pp. 473-481 (1996).
Piegorsch, W.W., Margolin, B.H., Shelby, M.D., Johnson, A., French, J.E., Tennant, R.W., and Tindall, K.R. Study design and sample sizes for a lacI transgenic mouse mutation assay. Environmental and
Molecular Mutagenesis 25, 231-245 (1995).
Piegorsch, W.W. Empirical Bayes calculations of concordance between endpoints in environmental toxicity experiments (with discussion). Environmental and Ecological Statistics 1, 153-164 (1994).
Cariello, N.F., Piegorsch, W.W., Adams, W.T., and Skopek, T.R. Computer program for the analysis of mutational spectra: application to p53 mutations. Carcinogenesis 15, 2281-2285 (1994).
Haseman, J.K. and Piegorsch, W.W. Statistical analysis of developmental toxicity data. Developmental Toxicology (2nd edn.), C. Kimmel and J. Buelke-Sam, eds. New York: Raven Press, 349-361 (1994).
Margolin, B.H., Kim, B.S., Smith, M.G., Fetterman, B.A., Piegorsch, W.W., and Zeiger, E. Some comments on potency measures in mutagenicity research. Environmental Health Perspectives 102, Suppl. 1,
91-94 (1994).
Piegorsch, W.W. Statistical models for genetic susceptibility in toxicological and epidemiological investigations. Environmental Health Perspectives 102 Suppl. 1, 77-82 (1994).
Piegorsch, W.W., Weinberg, C.R., and Taylor, J.A. Non-Hierarchical logistic models and case-only designs for assessing susceptibility in population-based case-control studies. Statistics in Medicine
13, 153-162 (1994).
Piegorsch, W.W., Lockhart, A.-M.C., Margolin, B.H., Tindall, K.R., Gorelick, N.J., Short, J.M., Carr, G.J., Thompson, E.D., and Shelby, M.D. Sources of variability from a lacI transgenic mouse
mutation assay. Environmental and Molecular Mutagenesis 23, 17-31 (1994).
Piegorsch, W.W. and Bailer, A.J. Statistical approaches for analyzing mutational spectra: Some recommendations for categorical data. Genetics 136, 403-416 (1994).
Piegorsch, W.W. Environmental Biometry: Assessing impacts of environmental stimuli via animal and microbial laboratory studies. Handbook of Statistics Vol. 12: Environmental Statistics, G.P. Patil
and C.R. Rao, eds., New York: North-Holland/Elsevier, 535-559 (1994).
Piegorsch, W.W. Biometrical methods for testing dose effects of environmental stimuli in laboratory studies. Environmetrics 4, 483-505 (1993).
Thomas, D.C., Nguyen, D.C., Piegorsch, W.W., and Kunkel, T.A. Relative rates of mutagenic translesion synthesis on the leading and lagging strands during replication of UV-irradiated DNA in a human
cell extract. Biochemistry 32, 11476-11482 (1993).
Generoso, W.M. and Piegorsch, W.W. Dominant lethal tests in male and female mice. Male Reproductive Toxicology, R.E. Chapin and J.J. Heindel, eds. Methods in Toxicology, Vol. 3, New York: Academic
Press, 124-141 (1993).
Piegorsch, W.W. and Bailer, A.J. Minimum mean-square error quadrature. Journal of Statistical Computation and Simulation 46, 217-234 (1993).
Dinse, G.E., Boos, D.D., and Piegorsch, W.W. Confidence statements about the time range over which survival curves differ. Journal of the Royal Statistical Society, Series C (Applied Statistics) 42,
21-30 (1993).
Piegorsch, W.W. and Taylor, J.A. Statistical methods for assessing environmental effects on human genetic disorders. Environmetrics 3, 369-384 (1992).
Piegorsch, W.W. Non-parametric methods to assess non-monotone dose response: Applications to genetic toxicology. Order Statistics and Nonparametrics: Theory and Applications, P.K. Sen and I.A.
Salama, eds. Amsterdam: Elsevier/North-Holland, 419-430 (1992).
Lockhart, A.C., Piegorsch, W.W., and Bishop, J.B. Assessing overdispersion and dose response in the male dominant lethal assay. Mutation Research 272, 35-58 (1992).
Piegorsch, W.W. Complementary log regression for generalized linear models. American Statistician 46, 94-99 (1992).
Gutierrez-Espeleta, G.A., Hughes, L.A., Piegorsch, W.W., Shelby, M.D., and Generoso, W.M. Acrylamide: Dermal exposure produces genetic damage in male mouse germ cells. Fundamental and Applied
Toxicology 18, 189-192 (1992).
Piegorsch, W.W., Carr, G.J., Portier, C.J., and Hoel, D.G. Concordance of carcinogenic response between rodent species: Potency dependence and potential underestimation. Risk Analysis 12, 115-121
Generoso, W.M., Shourbaji, A.G., Piegorsch, W.W., and Bishop, J.B. Developmental responses of zygotes exposed to similar mutagens. Mutation Research 250, 439-446 (1991).
Piegorsch, W.W. and Zeiger, E. Measuring intra-assay agreement for the Ames Salmonella assay. Statistical Methods in Toxicology, L. Hothorn, ed. Lecture Notes in Medical Informatics, Vol. 43,
Heidelberg: Springer-Verlag, 35-41 (1991).
Piegorsch, W.W. Multiple comparisons for analyzing dichotomous response data. Biometrics 47, 45-52 (1991).
Bailer, A.J. and Piegorsch, W.W. Estimating integrals using quadrature methods with an application in pharmacokinetics. Biometrics 46, 1201-1211 (1990).
Piegorsch, W.W. Fisher's contributions to genetics and heredity, with special emphasis on the Gregor Mendel controversy. Biometrics 46, 915-924 (1990).
Whittaker, S.G., Moser, S.F., Maloney, D.H., Piegorsch, W.W., Resnick, M.A., and Fogel, S. The detection of mitotic and meiotic chromosome gain in the yeast Saccharomyces cerevisiae: Effects of
methylbenzimidazol-2-YL carbamate, methyl methanesulfonate, ethyl methane sulfonate, dimethyl sulfoxide, propionitrile and cyclophosphamide monohydrate. Mutation Research 242, 231-258 (1990).
Piegorsch, W.W. Maximum likelihood estimation for the negative binomial dispersion parameter. Biometrics 46, 863-867 (1990).
Whittaker, S.G., Zimmermann, F.K., Dicus, B., Piegorsch, W.W., Resnick, M.A., and Fogel, S. Detection of induced mitotic chromosome loss in Saccharomyces cerevisiae- An interlaboratory assessment of
12 chemicals. Mutation Research 241, 225-242 (1990).
Piegorsch, W.W. One-sided significance tests for generalized linear models under dichotomous response. Biometrics 46, 309-316 (1990).
Piegorsch, W.W. Durand's rules for approximate integration. Historia Mathematica 16, 324-333 (1989).
Piegorsch, W.W. and Bailer, A.J. Optimal design allocations for estimating area under curves for studies employing destructive sampling. Journal of Pharmacokinetics and Biopharmaceutics 17, 493-507
Piegorsch, W.W. and Casella, G. The early use of matrix diagonal increments in statistical problems. SIAM Review 31, 428-434 (1989). Erratum: Inverting a sum of matrices. SIAM Review 32, 470 (1990).
Whittaker, S.G., Zimmermann, F.K., Dicus, B., Piegorsch, W.W., Fogel, S., and Resnick, M.A. Detection of induced mitotic chromosome loss in Saccharomyces cerevisiae- An interlaboratory study.
Mutation Research 224, 31-78 (1989).
Piegorsch, W.W., Zimmermann, F.K., Fogel, S., Whittaker, S.G., and Resnick, M.A. Quantitative approaches for assessing chromosome loss in Saccharomyces cerevisiae: general methods for analyzing
downturns in dose response. Mutation Research 224, 11-29 (1989).
Rao, G.N., Piegorsch, W.W., Crawford, D.D., Edmondson, J. and Haseman, J.K. Influence of viral infections on body weight, survival and tumor prevalences of B6C3F1 (C7BL/6N x C3H/HeN) mice in
carcinogenicity studies. Fundamental and Applied Toxicology 13, 156-164 (1989).
Piegorsch, W.W. and Margolin, B.H. Quantitative methods for assessing a synergistic or potentiated genotoxic response. Mutation Research 216, 1-8 (1989).
Piegorsch, W.W. Quantification of toxic response and the development of the median effective dose (ED50) - An historical perspective. Toxicology and Industrial Health 5, 55-62 (1989).
Piegorsch, W.W. and Casella, G. Confidence bands for logistic regression with restricted predictor variables. Biometrics 44, 739-750 (1988).
Piegorsch, W.W. and Hoel, D.G. Exploring relationships between mutagenic and carcinogenic potencies. Mutation Research 196, 161-175 (1988).
Dunnick, J.K., Eustis, S.L., Piegorsch, W.W., and Miller, R.A. Respiratory tract lesions in F344/N rats and B6C3F1 mice after exposure to 1,2-Epoxybutane. Toxicology 50, 69-82 (1988).
Piegorsch, W.W., Weinberg, C.R., and Margolin, B.H. Exploring simple independent action in multifactor tables of proportions. Biometrics 44, 595-603 (1988).
Piegorsch, W.W. Model robustness for simultaneous confidence bands. Journal of the American Statistical Association 82, 879-885 (1987).
Kitamura, H., Inayama, I., Ito, T., Yanaba, M., Piegorsch, W.W., and Kanisawa, M. Morphologic alteration of mouse Clara cells induced by glycerol: ultrastructural and morphometric studies.
Experimental Lung Research 12, 281-302 (1987).
Piegorsch, W.W. Performance of likelihood-based interval estimates for two-parameter exponential samples subject to type I censoring. Technometrics 29, 41-49 (1987).
Rao, G.N, Piegorsch, W.W., and Haseman, J.K. Influence of body weight on the incidence of spontaneous tumors in rats and mice of long term studies. American Journal of Clinical Nutrition 45, 252-260
Piegorsch, W.W. and Weinberg, C.R. Testing for synergistic effects for simultaneous exposures with stratified dichotomous response. Journal of Statistical Computation and Simulation 26, 1-19 (1986).
Piegorsch, W.W., Weinberg, C.R., and Haseman, J.K. Testing for simple independent action between two factors for dichotomous response data. Biometrics 42, 413-419 (1986).
Piegorsch, W.W. Confidence bands for polynomial regression with fixed intercepts. Technometrics 28, 241-246 (1986).
Piegorsch, W.W. The Gregor Mendel controversy: Early issues of goodness-of-fit and recent issues of genetic linkage. History of Science 24, 173-182 (1986).
Piegorsch, W.W. Average width optimality for confidence bands in simple linear regression. Journal of the American Statistical Association 80, 692-697 (1985).
Piegorsch, W.W. Admissible and optimal confidence bands in simple linear regression. Annals of Statistics13, 801-810 (1985).
Piegorsch, W.W. and Casella, G. The existence of the first negative moment. American Statistician 39, 60-62 (1985).
Piegorsch, W.W. The questions of fit in the Gregor Mendel controversy. Communications in Statistics — Theory and Methods 24, 173-182 (1983).
This page was last updated September 2013.
Return to Piegorsch Home Page
Induced voltage in a coil
In the second equation, E must be complemented by dA/dt, possibly with some sign if you like.
This is all-important in induction. For instance, in a generator the copper wires should have low loss, meaning E~0, but you still get a voltage V at the terminals thanks to the induction term dA/dt summed over the conductor path (or d phi / dt if you prefer). Or even E=0 in a superconductor, which is being considered practically for generators and motors, in orientable pods for boats for instance. Though there it would presumably be a type II superconductor, which does have some resistance.
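One plausible way to spell out the relation being described — this is a reconstruction, not the poster's own equation, and the signs depend on convention — is: the total field is $\mathbf{E} = -\nabla V - \partial\mathbf{A}/\partial t$, and with $\mathbf{E}\approx 0$ inside the winding the terminal voltage reduces to the induction term, $V \approx \oint (\partial\mathbf{A}/\partial t)\cdot d\boldsymbol{\ell} = \frac{d}{dt}\oint \mathbf{A}\cdot d\boldsymbol{\ell} = d\Phi/dt$ for a fixed coil.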
Ever Uribe
Visually, Loop the Loop appears to be a simple paper-and-pencil game but once one is introduced to the gameplay, its complexity evolves quite quickly. The game consists of two players, both of which
agree on a number of small loops to draw within one large loop, which could be considered the game board. The arbitrarily declared first player must find a group of four or more loops inside a larger
loop and draw a loop around three or more of these loops but not all of them. For clarity, the group of internal loops will be called “micro-loops” and their mutually external loop will be called the
“macro-loop”. One of the key rules of the game states that micro-loops and their respective macro-loop are collectively considered one micro-loop within the macro-loop’s respective macro-loop. The
game is sequential and therefore the two players alternate turns. The game continues until one of the players makes the last legal move and is declared the winner.
The freedom that Loop the Loop provides to its players enables the existence of incredibly intricate game state paths. In order to better comprehend the game, one needs to map out some of the
possible gameplays and attempt to perceive some sort of pattern. This analysis begins with a game board in which the initial amount of micro-loops within the macro-loop is four since four is the
minimum number of pieces for a playable game of Loop the Loop. The first player would enact the only possible move, circling three of the micro-loops and converting them into one micro-loop within
the whole game board. No more moves are possible since there are only two micro-loops within the game board, the largest loop, and there are less than four micro-loops within the only macro-loop in
the board. Hence, for a game board of initially four micro-loops, the first player is the automatic winner. Clearly, Loop the Loop can be identified as a normal game since the first player performed
the last possible move and won.
Figure 1
The next game board is one initially beginning with five micro-loops. Now the first player has two moves to choose from: either circling three or four of the micro-loops. However, it would be in the
first player’s interest to circle only three of them, such that there are no more possible moves and the second player loses. If the first player were to circle four of the micro-loops, then the
second player could win the game by circling three of those micro-loops within the new macro-loop, with no more possible moves remaining. As one continues to add one micro-loop to the initial game
board, it seems that a pattern exists such that the number of possible moves the first player can make increases by one. Thus, the number of possible first moves of the game is the difference
between the number of total initial micro-loops and three, n – 3. Another noted pattern when adding one micro-loop to the initial game board is that the degree of maximum possible moves of the game
increases by one. For example, with a game board of six, it takes a maximum of three moves for the game to end. Up to four moves can be played sequentially on a game board of seven micro-loops. A
game with a board of eight micro-loops ends in a maximum of five moves.
Figure 2
Hence, the degree of maximum possible moves also has a relationship with the initial number of micro-loops such that the difference between the total initial micro-loops and three is equal to the
degree of maximum moves, n – 3. These patterns can be clearly demonstrated in a set of tree graphs, as shown below in Figures 1, 2, and 3, where blue arrows represent all the possible first player
moves, green arrows represent all the possible second player moves, the boxes represent the entire game board, a set of identical brackets represents a macro-loop, a set of identical brackets within
a set of identical brackets represents a macro-loop within a larger macro-loop, the numbers represent the initial micro-loops of the game board, and the red boxes represent terminal positions. Note
that the tree graphs only represent the game boards that hold the initial number of micro-loops four through eight. This is due to the continually growing intricacy of the game board past eight
initial micro-loops.
Figure 3
By now, other aspects of Loop the Loop can be classified. The game is determinate since there are clear paths of sequential game states that depend only on the moves a player can make, as
demonstrated by the tree graphs, not on random chance. The game is zero-sum since there is a winner and a loser for each time the game is played; one of the player's gains is equal to the other
player's losses. The game is also asymmetric since the pay-off of the game depends on which player employs a specific strategy. For example, once the first player creates the first loop, the
different loops that the second player can make changes. Loop the Loop is a game of perfect information as there are no unknown variables and both players can determine all the possible game states,
with time of course. Furthermore, the game is unfair. There are clear game-play paths that one of the two players, depending on the number of initial micro-loops, can take such that that player can play optimally and win. For example, for a game board of initially six loops, the second player will always win if the individual plays optimally. For all the other game boards demonstrated in the tree graphs, the first player, if playing optimally, always wins. Due to the continually developing intricacy of the game as one increases the number of initial micro-loops, and the continual branching out of the game's tree graphs, it is hypothesized that there will always be a path such that the first player will always win if playing optimally, no matter what the second player does. Finally, Loop
the Loop can be generally considered as an impartial, combinatorial game. Thomas Ferguson characterizes an impartial combinatorial game as a game with: two players, a set of possible positions of the
game, specific rules for both players and positions such that both players have the same options of moving from each position, alternating moves between players, terminal positions at which the game
ends, and the condition of ending in a finite number of moves (3). Loop the Loop satisfies all these conditions.
Figure 4
Observing the tree graphs for the first five possible game boards doesn't demonstrate a clear strategy for one of the players to play. It appears that the optimal strategy and who has the optimal
strategy depends on the initial number of micro-loops within the game board. One attempt at figuring out the dependences of the optimal strategy is labeling all the game states as either N-positions
or P-positions. Since this is a normal, impartial, and combinatorial game, N-positions and P-positions can be found using the following three conditions: all terminal positions are P-positions; from
every N-position, there is at least one move to a P-position, and from every P-position, every move is to an N-position (Ferguson 5). Figure 4 displays the first five tree graphs with labeled
N-positions and P-positions. These tree graphs demonstrate that all of the initial game states, except for the one containing an initial micro-loop number of 6, are N-positions. An N-position is a
game state such that the next player has the winning strategy. In this case, the next player is the first player since that individual performs the first move. The initial game state for the game
board containing an initial micro-loop number of 6 is a P-position. Thus, the previous player, or the second player, has the winning strategy. These results reflect previous findings when analyzing the fairness of the game using tree graphs. Looking closely at these tree graphs, there doesn't appear to be an obvious strategy for a player to win, because at each game state a completely different move is performed. It is possible that there is a pattern by induction as one increases the initial amount of micro-loops within the game board, such that taking that amount modulo some number tells us whether the first or second player will win. However, this would not benefit the general strategy analysis, and it would be a difficult task to find such a pattern, since it seems as if the complexity of the game increases exponentially as one adds another initial micro-loop to the board.
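To make the N-position/P-position bookkeeping concrete, here is a speculative Python sketch — it is not the author's original program, which is described below, and all function names and the encoding are my own. It represents a position as a nested tuple (the string "o" for a plain micro-loop, a tuple of children for a macro-loop), generates all legal moves, and labels positions recursively using the normal-play rule quoted above.

```python
# Speculative re-implementation for small boards; not the author's original code.
# A position is a nested tuple: "o" is a plain micro-loop, a tuple is a macro-loop.
from functools import lru_cache
from itertools import combinations

def canon(node):
    """Canonical (sorted, hashable) form of a loop structure."""
    if node == "o":
        return "o"
    return tuple(sorted((canon(child) for child in node), key=repr))

def moves(board):
    """All positions reachable from `board` in one legal move."""
    results = set()
    kids = list(board)
    n = len(kids)
    if n >= 4:  # circle three or more, but not all, of the micro-loops here
        for k in range(3, n):
            for chosen in combinations(range(n), k):
                grouped = canon(tuple(kids[i] for i in chosen))
                rest = [kids[i] for i in range(n) if i not in chosen]
                results.add(canon(tuple(rest + [grouped])))
    for i, child in enumerate(kids):  # or make a move inside a nested macro-loop
        if child != "o":
            for new_child in moves(child):
                results.add(canon(tuple(kids[:i] + [new_child] + kids[i + 1:])))
    return results

@lru_cache(maxsize=None)
def first_player_wins(board):
    """Normal play: a position is an N-position iff some move leads to a P-position."""
    return any(not first_player_wins(m) for m in moves(board))

for n in range(4, 9):
    start = canon(tuple(["o"] * n))
    print(n, "N-position (first player wins)" if first_player_wins(start)
          else "P-position (second player wins)")
```

Run over small boards, this kind of brute force should reproduce the pattern described above (the second player winning only at six initial micro-loops), though it will slow down quickly as the board grows.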
Another attempt in finding a strategy is to create an algorithm that simulates the game in order to better understand any possible structures of the game and exploit a winning strategy. In order to
efficiently simulate the game board, it would be ideal to use the Python (2.7) data structure of lists. Lists have special characteristics that enable them to be ideal for the task of representing
loops. For example, lists are mutable and they can be nested within each other, just like the loops of the game. The algorithm created for the game (viewable on the last page of this report), in this
attempt, reflected the complexity of the game and seemed quite abstract. There were difficulties dealing with mutating deeply nested lists, which were fixed by repeating a few sub-algorithms. Essentially, these sub-algorithms could have been converted to procedures, but the danger of variable substitution and of bringing values back from nested procedures kept this suggestion from consideration. Thus, no strategic value was gained from creating the algorithm.
This is a flow chart of the algorithm created for the strategy.
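One way the nested-list mutation problem described above might be avoided — offered here only as a suggestion, since the original program is not reproduced in this write-up — is to copy a state before changing it (or to use immutable tuples, as in the earlier sketch), so that previously explored game states are never altered in place.

```python
# Illustrative only: avoid in-place mutation of shared nested lists by deep-copying.
import copy

board = [["o", "o", "o"], "o", "o"]   # a nested-list game state (made-up example)
next_board = copy.deepcopy(board)     # fully independent copy
next_board[0].append("o")             # modifying the copy...
print(board)                          # ...leaves the original state untouched
```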
So far, the intricacy of the game seems to be an obstacle to finding the optimal strategy. Perhaps simplifying the rules of the game would provide more intuition about an optimal strategy. One possible way of narrowing the game down is to allow three micro-loops to be fused into one macro-loop only if those micro-loops lie within a group of four or more micro-loops. Figure 5 shows the first six possible tree graphs of this altered game. This altered version of Loop the Loop gives quicker insight into the complexity of the game, such as the possibility of counting macro-loops as micro-loops from certain perspectives. However, it does not give insight into an optimal strategy for the original Loop the Loop; the tree graphs indicate that the two versions differ greatly in complexity.
Figure 5
If you have any questions, feel free to email me.
|
{"url":"http://robotics.usc.edu/~euribe/loop.html","timestamp":"2014-04-20T14:22:03Z","content_type":null,"content_length":"13475","record_id":"<urn:uuid:92cb90f2-c18f-406c-b05f-774eaf6a04e1>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00078-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Understanding Cancer Incidence and Mortality Rates
All cancer registries use the International Classification of Diseases for Oncology, Third Edition (ICD-O-3) to code the anatomic site and morphology. Cancer incidence statistics include invasive
cancers only, with the exception of in situ cancer of the bladder. Mortality rates are based on the underlying cause of death coded using the International Classification of Diseases, Tenth Edition
(ICD-10). Cancer incidence rates in WISH represent the number of new cases of cancer per 100,000 population. Cancer mortality rates represent the number of cancer deaths per 100,000 population during
a specific time period. Cancer incidence and mortality rates can be adjusted for demographic variables such as race, age, and sex. The most commonly used adjustment for cancer rates is age.
Crude Rates
Crude rates are helpful in determining the cancer burden and specific needs for services for a given population, compared with another population, regardless of size. Crude rates are calculated as follows.
A crude incidence rate equals the total number of new cancer cases diagnosed in a specific year in the population category of interest, divided by the at-risk population for that category and
multiplied by 100,000. A crude death rate equals the total number of cancer deaths during a specific year in the population category of interest, divided by the at-risk population for that category
and multiplied by 100,000.
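As a rough illustration of the arithmetic described above, the short sketch below applies the crude-rate formula; the case counts and population figures are invented and do not come from WISH.

# Crude rate per 100,000: events divided by the at-risk population, times 100,000.
def crude_rate(events, at_risk_population, per=100000):
    return events * per / float(at_risk_population)

new_cases = 1250          # hypothetical new cancer cases in one year
population = 2400000      # hypothetical at-risk population
print(round(crude_rate(new_cases, population), 1))   # 52.1 per 100,000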
Crude Rates vs. Age-Adjusted Rates. Crude rates are influenced by the underlying age distribution of the state's (or other locality's) population. Even if two states have the same age-adjusted rates,
the state with the relatively older population generally will have higher crude rates because incidence or death rates for most cancers increase with age. The age distribution of a population (i.e.,
the proportion of people in particular age categories) can change over time and can be different in different geographic areas. Age-adjusting the rate ensures that differences in incidence or deaths
from one year to another, or between one geographic area and another, are not due to differences in the age distribution of the populations being compared.
Age-Adjusted Rates
Older age groups generally have higher cancer rates than younger age groups. To address this issue for purposes of analysis, most cancer incidence and mortality rates in major publications have been
age-adjusted. This removes the effect of different age distributions between populations and allows for direct comparison of those populations. Age-adjustment also allows for the comparison of rates
within a single population over time. The direct standardization method of age adjustment weights the age-specific rates for a given gender, race, or geographic area by the age distribution of the
standard 2000 U.S. population.
There are three major components used to calculate age-adjusted rates: the number of cases or deaths reported, the population, and a "standard" population. A rate (new cases or deaths per 100,000
population) is first computed for each age group, then each of these age-specific rates is weighted by multiplying it by the proportion of the 2000 U.S. standard population for that same age group.
The results from each age group are added to arrive at the age-adjusted rate for the total population.
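The direct age adjustment just described amounts to a weighted sum of age-specific rates. The sketch below uses invented age groups, counts, populations, and standard-population weights purely to show the arithmetic; real calculations use the 2000 U.S. standard population shares.

# Each tuple: (cases in the age group, population of the age group,
#              that age group's share of the standard population).
age_groups = [
    (20,  500000, 0.40),
    (150, 300000, 0.35),
    (400, 200000, 0.25),
]

age_adjusted = sum(cases * 100000.0 / pop * weight
                   for cases, pop, weight in age_groups)
print(round(age_adjusted, 1))   # age-adjusted rate per 100,000 (here 69.1)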
An age-adjusted rate should only be compared with another age-adjusted rate that was calculated by the same method, using the same U.S. standard population. Starting with all 1999 data, the National
Center for Health Statistics (NCHS) and the National Cancer Institute (NCI) began using the year 2000 U.S. standard million population age distribution reported by the Census Bureau. Cancer incidence
increases with age, and because the 2000 population was older than the 1970 population, the change to the 2000 U.S. standard population resulted in apparently higher rates for most cancers. Caution
should be used when comparing the data in this report with cancer incidence rates adjusted to standard populations other than the 2000 U.S. standard population.
The population estimates used in the WISH cancer modules are based on SEER population data (exit DHS) that also incorporate new intercensal bridged single-race estimates derived from the original
multiple race categories as specified by the Office of Management and Budget for the collection of data on race and ethnicity. The bridged single-race estimates and a description of the methodology
used to develop them are on the National Center for Health Statistics Web site (exit DHS).
Age-adjusted incidence and mortality rates are grouped by primary cancer site (site of origin) per 100,000 population. For cancers that occur only in one sex (prostate, uterine, cervical, female
breast), sex-specific population denominators are used to calculate incidence and mortality rates. Incidence rates are for invasive cancers; the only exception is the incidence rate for urinary
bladder, which includes both in situ and invasive cancers. Cancer incidence rates may include multiple primary cancers that occur in single patients; each cancer is counted as a separate case if a
patient has more than one primary cancer.
Confidence Intervals
Confidence intervals for the age-adjusted rates were calculated with a method based on the gamma distribution (modified by Tiwari, et al., 2006). This method produces valid confidence intervals even
when the number of cases is very small. When the number of cases is large, the confidence intervals produced with the gamma method are equivalent to those produced with more traditional methods. The
formulas for computing the confidence intervals can be found in the report, Tiwari RC, Clegg LX, Zou Z. Efficient interval estimation for age-adjusted cancer rates. Stat Methods Med Res 2006 Dec;15
The cancer incidence and mortality counts derived from cancer registries and vital records are complete enumerations of information rather than samples (used in most research studies) and are,
therefore, not subject to sampling error. The rates based on those counts are, however, subject to what is termed "random error," which arises from random fluctuations in the number of cases over
time or between different communities. The 95 percent confidence intervals are an easily understood way to convey the stability of the rates. A stable rate is one that would be close to the same
value if the measurement were repeated. An unstable rate is one that would vary from one year to the next due to chance alone. A wider confidence interval in relation to the rate itself indicates
instability. On the other hand, a narrow confidence interval in relation to the rate tells you that the rate is relatively stable, and you would not expect to see large fluctuations from year to
year. If differences are observed between stable rates (those with narrow confidence intervals), it is likely that the differences represent true variations rather than random fluctuations in the
number of cases.
Last Revised: February 12, 2014
|
{"url":"http://www.dhs.wisconsin.gov/wish/cancer/understanding.htm","timestamp":"2014-04-19T18:00:08Z","content_type":null,"content_length":"15826","record_id":"<urn:uuid:1b1510a9-6869-4794-8bb5-007ef48135b1>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00225-ip-10-147-4-33.ec2.internal.warc.gz"}
|
An engineer is designing a curve on a highway. The curve will be an arc of a circle with a radius of 1260 ft. the... - Homework Help - eNotes.com
An engineer is designing a curve on a highway. The curve will be an arc of a circle with a radius of 1260 ft. the central angle that intercepts the curve will measure (pi)/6 (π/6) radians. To the
nearest foot, what will be the length of the curve?
We use the formula `s=rtheta` where s is the arc length, r the radius of the circle, and `theta` the central angle(in radians):
Given r=1260 and `theta=pi/6` .
Then `s = 1260(pi/6) = 210pi ~~ 659.734`
So the length of the curve is approximately 660ft.
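A quick numerical check of the same computation, written as a small Python snippet (not part of the original answer):

# Arc length s = r * theta for r = 1260 ft and theta = pi/6 radians.
import math

r = 1260.0
theta = math.pi / 6
s = r * theta
print(s)                 # about 659.73
print(int(round(s)))     # 660 ft, to the nearest foot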
|
{"url":"http://www.enotes.com/homework-help/an-engineer-designing-curve-highway-curve-will-an-437772","timestamp":"2014-04-17T04:56:24Z","content_type":null,"content_length":"25696","record_id":"<urn:uuid:8434bdeb-2b3b-4719-b095-879f4ba4c6ef>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00571-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Big and Small Numbers in Biology
Copyright © University of Cambridge. All rights reserved.
'Big and Small Numbers in Biology' printed from http://nrich.maths.org/
Biology makes good use of numbers both small and large. Try these questions involving estimation.
You might need to use standard biological data not given in the question.
Of course, as these questions involve estimation there are no definitive 'correct' answers. Just try to make your answers to each part as accurate as seems appropriate in the context of the question.
1. Estimate how many of the smallest viruses would fit inside the largest bacterium. What assumptions do you make in the calculation?
2. Why might it be misleading to say that the size of a bacterium is 200 microns? How might you provide a more accurate description of the size?
3. It has been said that 1g of fertile soil may contain as many as 2500 million bacteria. Do you think that this is a high density of bacteria? Estimate the percentage, by weight, of the soil that
comprises bacteria. If the bacteria were evenly spread out, estimate the distance between the bacteria and compare this to the size of the bacteria. Does this surprise you?
4. The shape of the earth may be approximated closely by a sphere of radius 6 x10$^{6}$m. In June 2008, its human population, according to the US census bureau , was thought to be 6,673,031,923. If
everyone spread out evenly on the surface of the earth, what area of the planet would they each have?
5. Humans typically live on land. Readjust your answer to the previous question making use of the fact that about 70% of the surface of the earth is water.
6. Compare the previous three parts of the question. Are humans more densely packed than the bacteria in the soil? (given that humans live on the surface of the earth and bacteria live inside a
volume of soil, you might want to consider how best to measure the 'density')
7. Suppose that fertile land on earth extends, on average, down to about 10cm. Estimate how many cubic mm of fertile soil the earth contains. Estimate the number of bacteria living in the fertile
land on earth.
8. Question the assumption concerning the average depth of fertile land in the previous question. Would you say that it should be smaller, larger or is about right? You might want to use these
suggested data (from here ) that the percentages of earth's land surface can be divided into different types: 20% snow covered, 20% mountains, 20% dry/desert, 30% good land that can be farmed,
10% land with no topsoil. What other data might you need to make a more accurate assessment?
9. There are about 300000 platelets in a cubic mm of human blood. How many platelets might you expect to find in a healthy adult male?
10. There are about 4 - 6 million erythrocytes and 1000 - 4500 lymphocytes in a cubic mm of blood. A sample of blood on a slide is 2 microns thick. Would you expect many erythrocytes or lymphocytes
to overlap on the microscope image?
Extension: In mathematics, a bound for a measurement gives two numbers between which we know for certain that the real measurement must lie. For example, a (not very good) bound on the height of the
members of a class would be 1m < heights < 2m. In the previous questions can you find bounds on the quantities? First suggest a really rough bound which you would know to be true and then see if you
can sensibly improve on it.
|
{"url":"http://nrich.maths.org/6140/index?nomenu=1","timestamp":"2014-04-20T23:34:37Z","content_type":null,"content_length":"7479","record_id":"<urn:uuid:19c769f6-c363-46e2-b066-233667b95fa9>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00350-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Set Systems with Restricted Cross-Intersections and the Minimum Rank of Inclusion Matrices
Keevash, Peter and Sudakov, Benny (2005) Set Systems with Restricted Cross-Intersections and the Minimum Rank of Inclusion Matrices. SIAM Journal on Discrete Mathematics, 18 (4). pp. 713-727. ISSN
0895-4801. http://resolver.caltech.edu/CaltechAUTHORS:KEEsiamjdm05
Use this Persistent URL to link to this item: http://resolver.caltech.edu/CaltechAUTHORS:KEEsiamjdm05
A set system is L-intersecting if any pairwise intersection size lies in L, where L is some set of s nonnegative integers. The celebrated Frankl-Ray-Chaudhuri-Wilson theorems give tight bounds on the
size of an L-intersecting set system on a ground set of size n. Such a system contains at most $\binom{n}{s}$ sets if it is uniform and at most $\sum_{i=0}^s \binom{n}{i}$ sets if it is nonuniform.
They also prove modular versions of these results. We consider the following extension of these problems. Call the set systems $\mathcal{A}_1,\ldots,\mathcal{A}_k$ L-cross-intersecting if for every pair of distinct sets A, B with $A \in \mathcal{A}_i$ and $B \in \mathcal{A}_j$ for some $i \neq j$ the intersection size $|A \cap B|$ lies in $L$. For any k and for $n > n_0(s)$ we give tight bounds on the maximum of $\sum_{i=1}^k |\mathcal{A}_i|$. It is at most $\max\,\{k\binom{n}{s}, \binom{n}{\lfloor n/2 \rfloor}\}$ if the systems are uniform and at most $\max\,\{k \sum_{i=0}^s \binom{n}{i}, (k-1) \sum_{i=0}^{s-1} \binom{n}{i} + 2^n\}$ if they are nonuniform. We also obtain modular versions of these results. Our proofs use tools from linear algebra together with some combinatorial ideas. A key ingredient is a tight lower bound for the rank of the inclusion matrix of a set system. The s*-inclusion matrix of a set system $\mathcal{A}$ on [n] is a matrix M with rows indexed by $\mathcal{A}$ and columns by the subsets of [n] of size at most s, where if $A \in \mathcal{A}$ and $B \subset [n]$ with $|B| \leq s$, we define $M_{AB}$ to be 1 if $B \subset A$ and 0 otherwise. Our bound generalizes the well-known result that if $|\mathcal{A}| < 2^{s+1}$, then M has full rank $|\mathcal{A}|$. In a combinatorial setting this fact was proved by Frankl and Pach in the study of null t-designs; it can also be viewed as determining the minimum distance of the Reed-Muller codes.
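As a concrete illustration of the inclusion matrix defined in the abstract, here is a small hedged sketch; the example family is arbitrary and chosen only to show the construction, and the final comment merely restates the rank result quoted above rather than verifying it.

# Build the s-inclusion matrix of a set system A on [n]: rows indexed by the
# members of A, columns by the subsets of [n] of size at most s, entry 1 when
# the column set is contained in the row set.
from itertools import combinations

def inclusion_matrix(family, n, s):
    columns = [frozenset(c) for r in range(s + 1)
               for c in combinations(range(n), r)]
    return [[1 if col <= frozenset(row) else 0 for col in columns] for row in family]

family = [{0, 1}, {1, 2}, {0, 2}]        # a small set system on [3]
for row in inclusion_matrix(family, n=3, s=1):
    print(row)
# Here |A| = 3 < 2^{s+1} = 4, so by the result quoted above the matrix
# should have full rank |A| = 3 (not verified by this sketch).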
Item Type: Article
Additional Information: © 2005 Society for Industrial and Applied Mathematics. Received by the editors September 18, 2003; accepted for publication (in revised form) August 13, 2004; published electronically April 22, 2005. We would like to thank an anonymous referee for some useful remarks. This author's [BS] research was supported in part by NSF grants DMS-0355497 and DMS-0106589, and by an Alfred P. Sloan fellowship.
Subject Keywords: set systems, restricted intersections, inclusion matrices
Record Number: CaltechAUTHORS:KEEsiamjdm05
Persistent URL: http://resolver.caltech.edu/CaltechAUTHORS:KEEsiamjdm05
Alternative URL: http://dx.doi.org/10.1137/S0895480103434634
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 3925
Collection: CaltechAUTHORS
Deposited By: Archive Administrator
Deposited On: 19 Jul 2006
Last Modified: 26 Dec 2012 08:56
|
{"url":"http://authors.library.caltech.edu/3925/","timestamp":"2014-04-18T05:35:13Z","content_type":null,"content_length":"23513","record_id":"<urn:uuid:935557b9-a8cd-453a-b941-5f368c394ed8>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00249-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Rules that are and aren't functions [f(x) = (x-3)/(x-3)]
March 8th 2013, 12:28 AM #1
Mar 2013
Rules that are and aren't functions [f(x) = (x-3)/(x-3)]
I am reviewing my maths and came across the definition of a function. According to it, each input (x) needs to have a corresponding output (y). This one - f(x) = (x-3)/(x-3) - doesn't have a y for x = 3. The same goes for y = sqrt(x) or y = 5x^2 + x^1/4: the first one doesn't have a y for x < 0, and the second one has two values for x > 0.
Despite this, they (or similar expressions) are still called 'functions' in some books and videos.
My question is: does it matter whether they are formally called functions or not, if we can actually do with them all the things we do with functions?
I would appreciate it if someone could explain this to me.
Re: Rules that are and aren't functions [f(x) = (x-3)/(x-3)]
First, a "rule", by itself, is never a function. One way of defining a functions is, simply "a set of ordered pairs such that no pairs have the same first member with different second members".
If we are given both a domain (the set of first members) and a "rule" (so that, for each first member we can calculate the second member) then we have a function.
Though it is not good notation, we are sometimes given a "rule" along with the (unstated!) assumption that the domain is the largest set of values to which the "rule" can be applied. That is, if
we are given the rule "f(x)= (x-3)/(x- 3)", the default domain is "all x except 3". But it is still a function. Similarly, $f(x)= \sqrt{x}$, is a function with domain "all non-negative x".
The most important of your examples is "y = 5x^2 + x^1/4". If it were true that, as you say, "for x>0 [it] has two values", then it would NOT be a function. However, you are wrong that y, or x^1/4, "has two values". Yes, it is true that $y^4= x$ has two real roots. But $x^{1/4}$ is, by definition, the positive one of those roots. The equation $x^4= a$ has (real number) solutions $x= \pm a^{1/4}$. The reason we need to write "$\pm$" is that "$a^{1/4}$", by itself, means only one of those two solutions.
(In terms of complex numbers, there would be four solutions to $x^4= a$.)
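A tiny illustration (not from the original thread) of the "rule plus default domain" idea in the reply: evaluating the rule (x-3)/(x-3) simply fails at x = 3, so the default domain excludes that point.

# The rule below is defined wherever evaluation succeeds; its natural
# (default) domain is all x except 3.
def f(x):
    return (x - 3) / (x - 3)

for x in [2, 3, 4]:
    try:
        print(x, f(x))
    except ZeroDivisionError:
        print(x, "not in the domain")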
March 8th 2013, 04:40 AM #2
MHF Contributor
Apr 2005
|
{"url":"http://mathhelpforum.com/algebra/214420-rules-aren-t-functions-f-x-x-3-x-3-a.html","timestamp":"2014-04-18T03:43:14Z","content_type":null,"content_length":"35849","record_id":"<urn:uuid:51fb9248-2136-4304-a0cd-afc81bde4618>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00054-ip-10-147-4-33.ec2.internal.warc.gz"}
|
area of triangle using 3d vectors
April 25th 2009, 03:25 PM #1
Junior Member
Jan 2009
area of triangle using 3d vectors
can someone explain how to do this problem, i got the first part correct but i dont understand how to get the area exactly. THanks!
Consider the points below. P(1, 0, 0), Q(0, -2, 0), R(0, 0, -3)
(a) Find a nonzero vector orthogonal to the plane through the points P, Q, and R.
Enter a number.
6i + Enter a number.
-3j + Enter a number.
(b) Find the area of the triangle PQR.
Enter an exact number as an integer fraction or decimal.
NVM I FIGURED IT OUT THANKS FOR LOOKING AT IT!!
can someone explain how to do this problem, i got the first part correct but i dont understand how to get the area exactly. THanks!
Consider the points below. P(1, 0, 0), Q(0, -2, 0), R(0, 0, -3)
(a) Find a nonzero vector orthogonal to the plane through the points P, Q, and R.
Enter a number.
6i + Enter a number.
-3j + Enter a number.
(b) Find the area of the triangle PQR.
Enter an exact number as an integer fraction or decimal.
It will be half of the magnitude of the cross product of PQ and RQ
$PQ=\vec i +2 \vec j$ and $RQ =2 \vec j - 3\vec k$
$\begin{vmatrix} \vec i & \vec j & \vec k \\ 1 & 2 & 0 \\ 0 & 2 & -3 \end{vmatrix} = (-6-0)\vec i-(-3-0)\vec j+(2-0)\vec k=-6 \vec i +3 \vec j +2\vec k$
Now we take half of the magnitude to get
$\frac{1}{2}\sqrt{(-6)^2+(3)^2+(2)^2}=\frac{1}{2}\sqrt{49}=\frac{7}{2}$
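For readers who want to check the arithmetic, here is a short NumPy snippet (not part of the original reply) that recomputes a normal vector and the area from the three points; using PQ and PR gives a vector consistent with the 6i and -3j shown in part (a), and the same area of 7/2.

# Recompute a normal vector to the plane PQR and the triangle's area (requires NumPy).
import numpy as np

P = np.array([1.0, 0.0, 0.0])
Q = np.array([0.0, -2.0, 0.0])
R = np.array([0.0, 0.0, -3.0])

normal = np.cross(Q - P, R - P)        # orthogonal to the plane through P, Q, R
area = 0.5 * np.linalg.norm(normal)
print(normal)                          # [ 6. -3. -2.]
print(area)                            # 3.5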
April 25th 2009, 03:35 PM #2
|
{"url":"http://mathhelpforum.com/calculus/85632-area-triangle-using-3d-vectors.html","timestamp":"2014-04-20T23:45:01Z","content_type":null,"content_length":"36107","record_id":"<urn:uuid:eea8788d-0c4d-4807-b861-e1b447be9931>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00218-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Great Hills, Austin, TX
Round Rock, TX 78664
Making Math, Physics and Engineering Concepts Accessible
...'m patient, kind, and cheerful, and I can explain new and difficult concepts in a straightforward, understandable manner. At the university level, I tutor math classes up to
III, freshman and sophomore Physics (I, II and III), and freshman and sophomore...
Offering 10+ subjects including calculus
|
{"url":"http://www.wyzant.com/Great_Hills_Austin_TX_Calculus_tutors.aspx","timestamp":"2014-04-24T19:48:29Z","content_type":null,"content_length":"61829","record_id":"<urn:uuid:addfe3e7-1d8b-4a36-8165-29766f750b36>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00432-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Making Tables of Values
You can use lists as tables of values. You can generate the tables, for example, by evaluating an expression for a sequence of different parameter values.
All the examples so far have been of tables obtained by varying a single parameter. You can also make tables that involve several parameters. These multidimensional tables are specified using the
standard Mathematica iterator notation, discussed in "Sums and Products".
The table in this example is a list of lists. The elements of the outer list correspond to successive values of the outer iterator variable. The elements of each inner list correspond to successive values of the inner iterator variable, with the outer one held fixed.
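As a rough analogue in Python rather than Mathematica (purely illustrative, not from the original tutorial), a two-parameter table corresponds to a nested list comprehension:

# The outer list runs over i; each inner list runs over j with i held fixed.
table = [[10 * i + j for j in range(1, 4)] for i in range(1, 4)]
print(table)        # [[11, 12, 13], [21, 22, 23], [31, 32, 33]]
print(table[1][2])  # extract one element by its two indices -> 23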
Sometimes you may want to generate a table by evaluating a particular expression many times, without incrementing any variables.
You can use Table to generate arrays with any number of dimensions.
Functions for generating tables.
You can use the operations discussed in "Manipulating Elements of Lists" to extract elements of the table.
Ways to extract parts of tables.
As mentioned in "Manipulating Elements of Lists", you can think of lists in Mathematica as being analogous to "arrays". Lists of lists are then like two-dimensional arrays. When you lay them out in a
tabular form, the two indices of each element are like its x and y coordinates.
|
{"url":"http://reference.wolfram.com/mathematica/tutorial/MakingTablesOfValues.html","timestamp":"2014-04-19T12:32:42Z","content_type":null,"content_length":"54667","record_id":"<urn:uuid:551ba305-2792-4061-bb87-40676805a7cc>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00646-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Proof regarding functions
January 13th 2009, 08:14 PM #1
Apr 2008
Proof regarding functions
Prove by induction that if set A has n elements and set B has m elements, then there are $m^n$ many functions from A to B.
I see how this is intuitive, but can't figure out how to write a formal proof.
Fix $m$ and do the induction over $n.$ If n = 1, then A has only one element and it can be mapped to any of the m elements of B, so there are m functions in this case.
Now suppose the claim is true for n and let A be a set with n + 1 elements. Then $A = A' \cup \{x \},$ where $A'$ is a set with n elements. A map from A to B can send $x$ to any element of B, so there are $m$ possibilities for the image of $x.$ Also, by the induction hypothesis there are $m^n$ ways to map $A'$ to B. Thus there are $m \cdot m^n=m^{n+1}$ ways to map A to B.
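As a sanity check (not part of the original thread), the count can be verified by brute force for small sets:

# Enumerate all maps from an n-element set A to an m-element set B as tuples of
# images, and compare the count with m**n.
from itertools import product

def count_functions(n, m):
    return sum(1 for _ in product(range(m), repeat=n))

for n in range(1, 5):
    for m in range(1, 5):
        assert count_functions(n, m) == m ** n
print("count m**n verified for 1 <= n, m <= 4")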
January 13th 2009, 08:42 PM #2
MHF Contributor
May 2008
|
{"url":"http://mathhelpforum.com/discrete-math/68110-proof-regarding-functions.html","timestamp":"2014-04-16T16:17:45Z","content_type":null,"content_length":"35003","record_id":"<urn:uuid:4b2fed3c-9be6-4f6e-a9df-2dbf280d872e>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00152-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MATH 613E: Topics in Number Theory
• Please contact the instructor if you want access to recordings of the lectures from this course.
• The following lecture notes have been completed by students registered in the class:
• A LaTeX template has been posted for you to use (if you wish) to start your writing assignment; it can also serve as a primer to LaTeX if you are less familiar with it. Making the .tex file
compile is a first test of using LaTeX on your computer; the corresponding .pdf file has been posted as well just in case.
Lectures: Mondays, Wednesdays, and Fridays, 10:00-10:50 AM, room WMAX 216 (PIMS)
Office hours: by appointment (in person or by Skype)
Office: MATH 212 (Mathematics Building)
Email address:
Phone number: (604) 822-4371
Course description: This is a topics course in number theory, called “Analytic number theory II” or “Distribution of prime numbers and zeros of Dirichlet L-functions”. The twin themes of the course
are to understand as well as possible the distribution of the zeros of Dirichlet L-functions (including the Riemann zeta-function), and then to use this knowledge to derive results on the
distribution of prime numbers, with particular attention to their distribution within arithmetic progressions. The course will begin with a quick review of the prime number theorem and its analogue
for arithmetic progressions.
Advertisement: There will be a conference on L-functions and their applications to number theory at the University of Calgary from May 30–June 3, 2011. Students who take this course should be
well-prepared to get a lot out of that conference. Contact the instructor if you are interested in attending.
Prerequisites: Students should have had a previous course in analytic number theory (for example, MATH 539 here at UBC). The background of students should include the following elements, all of which
should be present in those who succeeded in MATH 539: a strong course in elementary number theory (for example, MATH 537), a graduate course in complex analysis (for example, MATH 508), and the usual
undergraduate training in analysis (for example, MATH 320).
Evaluation: Each student will deliver three lectures for the course, and write up (in LaTeX) lecture notes corresponding to another student's three lectures. The lectures will be chosen by the
student in consultation with the instructor from the list below; most students will choose to deliver consecutive lectures on the same topic.
The last day of classes is April 6, 2011; however, because there are no exams, the lectures will continue into the beginning of the final exams period to accommodate as many students and topics as possible.
│ Dates │ Speaker │ Topic │ Writer │ Draft due │ Article due │
│ Jan 10–12 │ Greg │ Organization and introduction │ │ │ │
│ Jan 17–19 │ Everyone │ Four-minute talks (all topics) │ │ │ │
│ Jan 21–28 │ Greg │ Review on L-functions and primes in arithmetic progressions: explicit formula, zero-free region, exceptional zeros │ │ │ │
│ Jan 29–Feb 4 │ Greg │ Primes in short intervals; irregularities of distribution (the Maier matrix method) │ │ │ │
│ Feb 7–11 │ Nick │ Linnik's Theorem on the least prime in an arithmetic progression │ Colin │ Feb 21 │ Feb 28 │
│ Feb 14–18 │ │ (no class) │ │ │ │
│ Feb 21–25 │ Justin, Greg │ Zeros on the critical line │ Eric │ Mar 28 │ Apr 4 │
│ Feb 28–Mar 4 │ Tatchai │ The large sieve and the Bombieri–Vinogradov Theorem │ Nick │ Mar 14 │ Mar 21 │
│ Mar 7–11 │ Daniel │ The least quadratic nonresidue and the least primitive root modulo primes (unconditional and conditional results) │ Carmen │ Mar 21 │ Mar 28 │
│ Mar 14–18 │ Eric │ Analytic number theory without zeros (current work of Granville/Soundararajan) │ │ │ │
│ Mar 21–25 │ Li │ Oscillations of error terms, Littlewood's results │ Tatchai │ Apr 4 │ Apr 11 │
│ Mar 28–30 │ Justin, Greg │ The nonvanishing of L-functions at the critical point and on the real axis │ │ │ │
│ Apr 4–8 │ Carmen │ Limiting distributions of explicit formulas and prime number races │ Daniel │ Apr 18 │ Apr 25 │
│ Apr 11–15 │ Colin │ The Selberg class of L-functions │ Li │ Apr 25 │ May 2 │
│ Apr 18 │ Greg │ Horizontal distribution of zeros of Dirichlet L-functions; zero-density theorems │ │ │ │
│ Apr 20 │ Greg │ Proofs of the prime number theorem that avoid the zeros of ζ(s) │ │ │ │
References for these topics:
• H. L. Montgomery and R. C. Vaughan, Multiplicative Number Theory: I. Classical theory (some errata have been posted online) Note: draft versions of some chapters from their future sequel
Multiplicative Number Theory: II. Modern developments can be found on the web.
• H. Iwaniec and E. Kowalski, Analytic Number Theory
• E. C. Titchmarsh (revised by D. R. Heath-Brown), The Theory of the Riemann Zeta-Function
• The primary research literature (you can find references in the above books or by speaking with the instructor), almost all of which is searchable at MathSciNet
Possible references for fundamental analytic number theory:
• H. Davenport, Multiplicative Number Theory
• A. E. Ingham, The Distribution of Prime Numbers
• T. M. Apostol, Introduction to Analytic Number Theory
• P. T. Bateman and H. G. Diamond, Analytic Number Theory: An introductory course
Possible references for elementary number theory:
• I. Niven, H. S. Zuckerman, and H. L. Montgomery, An Introduction to the Theory of Numbers
• G. H. Hardy and E. M. Wright, An Introduction to the Theory of Numbers
Use of the web: After the first day, no handouts will be distributed in class. All course materials will be posted on this course web page. All documents will be posted in PDF format and can be read
with the free Acrobat reader. You may download the free Acrobat reader at no cost. You may access the course web page on any public terminal at UBC or via your own internet connection.
|
{"url":"http://www.math.ubc.ca/~gerg/index.shtml?613-Winter2011","timestamp":"2014-04-19T09:24:42Z","content_type":null,"content_length":"20143","record_id":"<urn:uuid:46d616f6-41a6-40e0-bb09-65897ae95699>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00638-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Prove that n! does not divide n^n for n>2
April 11th 2013, 02:22 AM #1
Apr 2013
Prove that n! does not divide n^n for n>2
Dividing both sides by n, I get that (n-1)! | n^(n-1) => (n-1) | n^(n-1) but I'm stuck as to where to go next. Any help would be appreciated.
Re: Prove that n! does not divide n^n for n>2
Have you considered an induction proof? It's fairly simple.
Yes I did consider this but it doesn't seem simple. Suppose n! doesn't divide n^n , but that (n+1)! divides (n+1)^(n+1). Then for some k, k(n+1)! = (n+1)^(n+1). I need to now get a contradiction
but I can't see how.
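A quick numerical check of the claim for small n (just an illustration, not part of the thread):

# Check whether n! divides n^n for small n; within this range it should hold only for n = 2.
from math import factorial

for n in range(2, 12):
    print(n, (n ** n) % factorial(n) == 0)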
April 11th 2013, 04:51 AM #2
Apr 2013
|
{"url":"http://mathhelpforum.com/number-theory/217242-prove-n-does-not-divide-n-n-n-2-a.html","timestamp":"2014-04-16T14:22:58Z","content_type":null,"content_length":"31318","record_id":"<urn:uuid:1d2b25a8-f272-46e7-b99b-4acbe476e5a5>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00040-ip-10-147-4-33.ec2.internal.warc.gz"}
|
versión impresa ISSN 0044-5967
Acta Amaz. v.36 n.1 Manaus ene./mar. 2006
Genetic breeding on the bee Melipona scutellaris (Apidae, Meliponinae)
Melhoramento genético na abelha Melipona scutellaris (Apidae, Meliponinae)
José de Ribamar Silva Barros
Departamento de Biologia da Universidade Estadual do Maranhão – UEMA – 65054-970 São Luís, MA, Brasil
A selection of queens of Melipona scutellaris through the most productive colonies was carried out during eight months in an orange honeyflow. Each of the colonies was evaluated by its production, that is, the gross weight production (honey, pollen, brood, geopropolis and wax of each hive). With these data a coefficient of repeatability was estimated by the intraclass correlation method, obtaining r = 0.835 ± 0.071. This repeatability is very high, showing that the analysed trait (production) is repeatable. Selection was then carried out using the regression coefficient of each colony and the respective production gain. Using these data the colonies were divided into three groups according to the method of Vencovsky and Kerr (1982): a with the colonies of highest productivity, b with those of least productivity, and c with those of intermediary productivity. Colonies with the highest production (Group a) gave their queens to those of the lowest production (Group b) after their queens were taken out and
killed; while those of intermediate (Group c) stayed with the same queens during the entire experiment both before and after the selection. The modifications in weight, that is, the genetic response
was (R) = 7.98 g per day, which indicated a selection gain. The estimate of the realized heritability is twice the ratio of the response to selection (R) to the selection differential (S[2]); that is, h^2[R] = 2(R/S[2]) = 0.166.
KEY WORDS: Breeding, selection, Melipona, bee.
Foi feita uma seleção de rainhas durante oito meses com 10 colônias aproveitando uma florada de laranjeiras. Cada colônia teve sua produção total (mel, pólen, crias, geoprópolis e cera) avaliada.
Estimamos com estes dados o coeficiente de repetibilidade por meio de uma correlação intraclasse e obtivemos r = 0.835 ± 0,071. Esse valor é alto e mostra que a produção de cada colônia é repetível.
Executamos uma seleção usando o coeficiente de regressão de cada colônia e o respectivo ganho de produção . No uso desses dados, as colmeias foram divididas em três grupos, de acordo com o método de
Vencovsky e Kerr (1982): a contendo as colônias de maior produtividade; b com as colônias que apresentaram menor produtividade e c com as colônias de produção intermediárias. As colônias com maior
produção (grupo a) deram suas rainhas para aquelas com menor produção (grupo b) cujas rainhas foram retiradas e mortas. As colônias do grupo c permaneceram com suas próprias rainhas durante todo o
experimento tanto antes como depois da seleção. As modificações no peso, quer dizer, a resposta genética ocorrida foi (R) = 7.98 gramas por dia, indicando um ganho proveniente da seleção. A estimativa da herdabilidade é o dobro da resposta à seleção dividida pela seleção diferencial (S[2]), ou seja: h^2[R] = 2(R/S[2]), logo h^2[R] = 0,166.
PALAVRAS-CHAVE: Melhoramento, abelha, Melipona scutellaris,Apidae.
Many papers have been produced using populations of the honey bee Apis mellifera L. (Collins, 1986; Kulincevic, 1986; Laidlaw and Page, 1986; Rinderer, 1986; Vencovsky and Kerr, 1982). There are about 100 Brazilian species of stingless bees that produce excellent honey, but they have never been subjected to direct genetic improvement. Three of these species have been domesticated by pre-Columbian human
populations: Melipona beecheii,Bennett the Yucatanian cab; Melipona compressipes Fabricius, the tiuba of Maranhão; and Melipona scutellaris Latreille the Northeastern Brazilian bee urussú. The
present work is part of a larger study being carried out on the stingless bees (Meliponinae).
This work was conducted at the State University of São Paulo (UNESP), on the campus of Jaboticabal, State of São Paulo. Among the 21 colonies of Melipona scutellaris, only 10 colonies were used for
this specific experiment. No information on either resistance to migration or on the behavior of these bees in orange flowers was available. The evaluation period of the hives extended from August,
10, 1992 to March 3, 1993. All hives were in the same environment: in the absence of nectar flow, colonies were fed syrup (50% water plus 50% commercial sugar and a pill of Teragran - M) or 50%
commercially rejected honey of Apis mellifera L., because it was collected by the honey bees in sugar cane stacks produced after burning the straws.
In order to discover the production of a specific colony in different months, a coefficient of repeatability was estimated, using data obtained on the weight of all the hives with M. scutellaris bees. This coefficient sets the upper limit of heritability and includes additive effects of genes plus non-additive effects and the permanent environmental differences existing among colonies. The
method used was intraclass correlation (Fisher 1954). The following statistical model was used:
Y[ij] = u + a[i] + e[ij], where:
Y[ij] = production j within hive i (j = 1, 2, ..., 8; i = 1, 2, ..., 10)
u = general mean effect
a[i] = effect of colony i (i = 1, 2, ..., 10)
e[ij] = random error.
Table 1 shows the variance analysis model of the variance components used to estimate repeatability.
Production repeatability of colonies was estimated from the variance components obtained using the above model, as follows:
r = repeatability coefficient
The standard error of the repeatability coefficient (dr) was obtained by the formula (Fisher 1954):
r = repeatability coefficient
k = number of colonies
n = total number of observations
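The formulas themselves did not survive extraction, but the usual one-way-ANOVA (intraclass correlation) estimate of repeatability can be sketched as below. This is a generic illustration with invented hive weights and balanced groups, not the paper's own data or its exact formula.

# Repeatability (intraclass correlation) from a one-way ANOVA:
# r = var_between / (var_between + var_within), with var_between estimated
# as (MS_between - MS_within) / k for k records per colony.
def repeatability(groups):
    """groups: one list of repeated weight measurements per colony (balanced)."""
    k = len(groups[0])                               # records per colony
    n = sum(len(g) for g in groups)                  # total observations
    grand = sum(sum(g) for g in groups) / float(n)
    means = [sum(g) / float(k) for g in groups]
    ms_between = k * sum((m - grand) ** 2 for m in means) / (len(groups) - 1)
    ms_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g) / (n - len(groups))
    var_between = (ms_between - ms_within) / k
    return var_between / (var_between + ms_within)

colonies = [[5200, 5350, 5500], [4300, 4380, 4450], [6100, 6240, 6400]]
print(round(repeatability(colonies), 3))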
The directional selection was made based on the weight of the hives. A regression coefficient was estimated for the gross weight (in grams) of each hive, relative to time (x = 0, 30, 60, ..., 210 days).
Based on the preliminary data obtained from these estimates, colonies were divided into three groups:
a = Three colonies with the largest regression coefficients.
b = Three colonies with the smallest regression coefficients.
c = Control group made up of 4 colonies with intermediate regression coefficients.
The method of selection used, developed by Vencovsky and Kerr (1982), runs as follows:
About 25% of the best colonies (as in this present case the aim was total colony weight; the "best colonies" were the 3 heaviest) have their three queens (Group a) removed and introduced in the 25%
worst colonies (the 3 lightest ones ) Group b - that previously had their 3 queens removed and killed. The workers of the three best colonies each choose a virgin queen that takes the nuptial flight,
and usually begins egg laying within 14 days. This selection process (and queen supersedure) was carried out on the 3rd of April, 1993, after 210 days (7 months) migration of the bees to and from an
enormous orange honey flow.
The selection differential was estimated as the difference between the mean regression coefficient of Group a and the mean linear regression coefficient of the population.
The response to selection was quantified through the estimates of the linear regression coefficients, and was taken as the difference of the mean linear regression coefficients of the colonies of groups a and b after and before the selection, corrected by the mean linear regression coefficients of Group c (= control group) after and before selection.
Group a and b before selection.
Group c (control) before selection.
Group c (control) after selection.
Group c (control) before selection.
The realized heritability was estimated as twice the ratio of the response to selection (R) to the selection differential (S); both were estimated from the regression coefficients. The result was multiplied by 2 because the data reflect only the mother's contribution, since all matings were random.
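Put as a one-line computation, this estimate uses the selection differential and response reported later in the paper (S[2] = 96.22 and R = 7.98, both expressed as grams of hive-weight gain per day); the snippet below just restates that arithmetic.

# Realized heritability as described above: h2 = 2 * R / S.
S = 96.22          # selection differential S[2] (from the paper)
R = 7.98           # response to selection (from the paper)
h2_realized = 2.0 * R / S
print(round(h2_realized, 3))   # ~0.166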
Genetic Selection
The data on the gross weight production of the 10 colonies of Melipona scutellaris are in Tables 2 and 3.
The results of the variance analyses obtained from the data of the gross weight of the hives (table 2 and 3) allowed the estimation of the repeatability coefficient, as presented in table 4.
According to the analysis of variance (see Table 4), a significant effect is indicated (P < 0.01), which suggests that the colonies differ with respect to weight.
REPEATABILITY ( R )
The estimate of repeatability for production for the experimental colonies was r = 0.8346 ± 0.071. This is a very high value and means that the observations reflect the real production capacity of the colonies, allowing a selection program to be carried out.
Queens of meliponids are usually inseminated by one male. The data obtained recently by Carvalho (2001) for this species show that 8% of their queens are inseminated by two males. The selection of queens must be made through the performance of the colonies.
Table 5 shows the estimates of the regression coefficients of 10 colonies of Melipona scutellaris before and after selection of the queens.
As can be seen in Table 5, the constitution of the groups according to the regression coefficients is:
Group a: superior colonies, with the greatest regression coefficients: k 419, k 309, k 430.
Group b: inferior colonies with the smallest regression coefficients: k 426, k 427, k 431.
Group c: control colonies with intermediate regression coefficients : k 436, k 418, k295, k 428.
There are two selection differentials to consider, both obtained through the estimation of the mean linear regression coefficients.
a) The selection differential (S[1]), which takes into account all colonies involved in the bee reproduction before selection:
b) The selection differential (S[2]), used in the estimation of realized heritability, which took into account only nine colonies (since hive K430 died due to the production of diploid males):
The size of the two selection differentials shows the phenotypic superiority of the selected queens, even though one colony died (due to diploid drone production). Evidently, any selection event must take into account that some colonies will die or at least be weakened due to sex alleles.
Queens of Group a were introduced into colonies of Group b with no extra precautions, as mentioned by Monteiro and Kerr (1990), and all were accepted. The queenless colonies of Group a each had a new queen emerge and be accepted, and all carried out their respective nuptial flights in May 1992. The new queen of colony k 430 was inseminated by a male with a sex allele equal to one of the queen's two alleles, which led to a segregation not different from 1:1 (½ females to ½ diploid males). Usually two events of diploid male production caused the colony to die out.
RESULT OF THE SELECTION ( R )
The estimates of the selection results were obtained by the differences between the mean linear regression coefficients of the Groups a and b before and after the selection by the estimate of the
mean linear regression coefficient of the control Group c also after and before the selection for nine colonies of Melipona scutellaris
The queens of control Group c remained the same from the beginning of the experiment. Therefore, the difference between the mean linear regression coefficients of these four colonies, before and after
selection, provides the non-genetic variation that occured during this entire experiment.
The phenotypic variation that occurred in the colonies of Groups a and b is made of both genetic and non-genetic components, since there were changes of queens from Group a to Group b. The new queens of Group a are daughters of the queens that headed each Group a colony and, therefore, they carry half of the additive inheritance:
Another important component to be considered is the environmental effect (E) that was obtained through the difference of the estimates of the mean linear regression coefficients of the colonies of
the control group after and before selection.
The size of this value indicates a great environmental variation during the period of the experiment. At the beginning of the evaluation the ten colonies were taken from Jaboticabal to an orange
plantation (Monte Alto city), from August to October, 1992; in October they were returned to Jaboticabal where they remained until July 1993, and went to Bebedouro/SP to use the orange honey flow.
The estimate of the realized heritability is twice the ratio of the response to selection (R) to the selection differential (S[2]).
The value of the realized heritability was lower than those cited by Collins (1986) for Apis mellifera L. according to the following works:
a) Soller and Bar-Cohen (1967) found a value of h^2 = 0.36 for orange honey production in one generation of selection;
b) Rothenbuhler et al. (1979) found values of realized heritability for rapid-line syrup collection of
c) Hellmich et al. (1985) found values for pollen collection of h^2 = 0.556 ± 0.161 in the high line and h^2 = 0.063 ± 0.190 in the low line. The selection in the above items a and b was
disruptive, that is, in two directions.
Pirchener (1969) considers that disruptive selection allows a better control of the environment. It is important to notice that our experiment had a control group of colonies.
The repeatability value was r = 0.8343 ± 0.07, which is a high value and indicates that there is more variation between colonies than within them, suggesting that the method was robust.
The superiority of the selected colonies was shown through the selection differential with
S[1]= 87.21 grams and S[2] = 96.22 grams.
The response to selection (R) in the three groups involved in queen substitution was R = 7.98 grams per day and indicated that there was a genetic improvement in these groups. The three exceptional
colonies that had their queens taken out produced new laying queens quickly, but one produced diploid males which shows that 10 hives make up a very small group and this suggests that experiments of
this type should be made with 50 or more hives.
The value of the realized heritability for honey production was h^2[R] = 0.164, which is lower than those found in the literature for Apis mellifera.
Selection of queens from the more productive colonies was carried out in 10 hives of Melipona scutellaris that were evaluated for eight months through their gross weight. The repeatability coefficient (r) was estimated with these data using the intraclass correlation method. Production is a repeatable character, with r = 0.8346 ± 0.071.
Selection was made using the mean regression coefficient of each hive and its production gain. The colonies then constituted three groups: Group a with the colonies of greater growth, Group b with the colonies of less growth, and Group c with hives of intermediate growth. Hives with greater growth (Group a) had their queens taken out and introduced into those with lesser growth (Group b), whose queens had been removed and killed; the colonies of Group c remained with their respective queens throughout, both before and after selection.
The genetic response occurred in the groups involved in queen substitution and was R = 7.6 grams/day, which indicates that there was an improvement in these groups.
Realized heritability was estimated as twice the ratio of the response to selection (R) to the selection differential (S), giving h^2[R] = 0.164.
The author thanks: Prof. Dr. Warwick E. Kerr for being his tutor for the last 10 years. Prof. Dra. Regina Helena Nogueira-Couto, as Head of the Dept. provided all help possible. Prof. Dr. Roland
Vencovsky taught me quantitative genetics. FAPEMA - provided me with a studentship. CNPq - provided the funds for the research. Fac. C. Agr. e Vet. da Unesp - Jaboticabal gave the infrastructure
facilities. FAPEMIG gave the funds to assemble the Dept. of Genetics of the Federal University of Uberlândia.
Carvalho, G. A. 2001. The Number of Sex Alleles (CSD) in a Bee Population and its Practical Importance (Hymenoptera: Apidae). J. Hym. Res., 10(1): 10-15.
Collins, A. M. 1986. Quantitative Genetics. In: Rinderer, T. E. (ed), Bee Genetics and Breeding. Academic Press, Inc., New York. p. 283-304.
Fisher, R. A. 1954. Statistical Methods for Research Workers. 12 ed. Oliver and Boyd, Edinburgh.
Hellmich, R. L. II; Kulincevic, J. M.; Rothenbuhler, W. C. 1985. Selection for high and low pollen-hoarding honey bees. J. Hered. 76: 155-158.
Kulincevic, J. M. 1986. Breeding accomplishments with honey bees. In: Rinderer, T. E. (ed), Bee Genetics and Breeding. Academic Press, Inc., New York. p. 391-414.
Laidlaw, H. H., Jr. and Page, Robert E., Jr. 1986. Mating Designs. In: Rinderer, T. E. (ed), Bee Genetics and Breeding. Academic Press, Inc., New York. p. 323-344.
Monteiro de Andrade, C.; Kerr, Warwick E. 1990. Experimental exchange of queens between colonies of Melipona compressipes (Apidae, Meliponini). Rev. Bras. Biol. 50(4): 975-981.
Pirchener, F. 1969. Population Genetics in Animal Breeding. W. H. Freeman and Company. 274 p.
Rinderer, T. E. 1986. Selection. In: Rinderer, T. E. (ed), Bee Genetics and Breeding. Academic Press, Inc., New York. p. 305-322.
Rothenbuhler, W. C.; Kulincevic, J. M.; Thompson, V. C. 1979. Successful selection of honeybees for fast and slow hoarding of sugar syrup in the laboratory. J. Apic. Res. 18: 272-278.
Soller, M.; Bar-Cohen, R. 1967. Some observations on the heritability and genetic correlation between honey production and brood area in the honeybee. J. Apic. Res. 6: 37-43.
Vencovsky, R.; Kerr, Warwick E. 1982. Melhoramento genético em abelhas. II. Teoria e avaliação de alguns métodos de seleção. Brazilian Journal of Genetics, 5(3): 493-502.
Received on 05/02/2003
Accepted on 03/08/2005
|
{"url":"http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0044-59672006000100014&lng=es&nrm=iso","timestamp":"2014-04-18T22:21:58Z","content_type":null,"content_length":"50740","record_id":"<urn:uuid:fec6e94c-9964-44d5-b04f-1941fba5b14f>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00554-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Classics in the History of Psychology -- Thorndike & Woodworth (1901b)
Classics in the History of Psychology
An internet resource developed by
Christopher D. Green
York University, Toronto, Ontario
THE INFLUENCE OF IMPROVEMENT IN ONE MENTAL FUNCTION
UPON THE EFFICIENCY OF OTHER FUNCTIONS
II. THE ESTIMATION OF MAGNITUDES
Edward L. Thorndike & R. S. Woodworth (1901)
First published in Psychological Review, 8, 384-395.
In a previous paper we considered in detail a typical experiment on the influence of training in delicate estimation of magnitudes. The present paper will summarize all the experiments of that sort
which we have made with individuals.
Before and after training in judging the areas of rectangles 10-100 sq. cm. in size, four subjects were tested as to their ability to judge:
1. Triangles within the same limits of size.
2. Areas between 140-200 sq. cm. of similar shape to those of the training series.
3. Areas between 200-300 sq. cm. of similar shape to those of the training series.
4. Areas between 100-140 sq. cm. of various shapes; circles, trapezoids, etc.
5. Areas of 140-200 sq. cm. of various shapes; circles, trapezoids, etc.
6. Areas between 200-240 sq. cm. of various shapes; circles, trapezoids, etc.
7. Areas of 240 sq. cm. and over of various shapes; circles, trapezoids, etc.
Table IV. represents the results with these subjects. The figures after each name in part A represent the average errors for the kind of area stated at the head of the column, in the before- and
after-training tests. In part B are given percent- [p. 385]
[p. 386] ages showing the proportion of the late to the early errors. The percentages after 'total' represent the proportion of the sum of the average errors of all four in the after-training tests
to the sum of the average errors in the before-training tests. The figures beneath represent the number who make a smaller proportionate improvement, in the case of each category, than they did in
the case of areas exactly similar to those of the training series but estimated without the correction factor, i.e., in just the same way that they estimated the triangles, irregular areas, etc. It
has seemed unwise to attempt in detail the calculation of the reliability of each of these and of following results. The labor would be enormous and in many cases the laws of chance not easily
applicable. In these preliminary studies we have tried to discover only general tendencies, not their exact amount.
From the figures given for these subjects it seems clear (1) that the improvement in the estimation of rectangles 10 to 100 sq. cm. is not equalled in the other functions; (2) that change in size
without change in shape decreases the amount of improvement in proportion, in general, to the amount of the change, and (3) that the same tends to hold true when both size and shape change. The score
for areas 240 and over presents an exception to this which cannot, we think, be due to chance. (4) The different influence of the training on the different subjects is apparent from the last column.
It teaches, as was pointed out in a previous article, that there is no inner necessity for improvement of one function to improve others closely similar to it, due to a subtle transfer of practice
effect. Improvement in them seems due to definite factors, the operation of which the training may or may not secure.
Two subjects took the training in the same manner as did these four but were tested with only parts of the series. Their records were as in Table V.
Experiments similar in the general plan to these were carried on in the case of several other sorts of estimations of magnitude. A detailed account of their administration is out of the question. As
has been pointed out, an exact measure of the improvement in the case of the different training series has not [p. 387] been possible. In the following results whenever a measure of such improvement
is given, it means the change from the average of the first trial of the whole series to that of the last trial.
The influence of training in estimations of magnitude within certain limits of the ability to estimate similar magnitudes in case of objects qualitatively different.
1. The influence of training in estimating the areas of rectangles from 2 to 12 square inches on the ability to estimate triangles from 1 to 5.5 square inches.
The general method was the same as has been described. The series used for the training was a set of 60 rectangles of various shapes, ranging from 1.5 to 12 sq. inches. It was expected that the
number would be so great as to prevent any from being known by their shape, and the records are free from any proof that such was the case. It may have been, however, that the subjects were to some
extent unconsciously guided by other factors than the mere magnitudes.
Subject W. in the rectangles series, being allowed to note the real lengths after each judgment, made sum of deviations 30.1 square inches (approx.).[1]
After 20 trials, 5 with about two-thirds and 15 with the whole series, he made 11.5, being 28.3 per cent. of his first trial (average errors approximately .5 and .2). With the tri- [p. 388] angle
series W. was tested before and after this training, the results being sums of deviations 2.5 square inches and 5.75 square inches, the latter being 230 per cent. of the former (average errors .11
and .26 square inch). The average error of areas in the training series of corresponding sizes at the end of training was approximately .07 square inch.
Subject T. in a similar way made 30.0 (approximate) at the start and after approximately 20 trials made 39 per cent. of the former (average errors approximately .5 and .2). In tests with the triangle
series before and after this training T.'s sums of deviations were 15.0 and 6.5, 43.3 per cent. of the former. Average errors were .68 and .30. The average error for areas in the training series of
corresponding size was at the end of training .07 square inch approximately.
2. The influence of training in estimating the areas of rectangles and triangles from 0.5 to 12.0 square inches on the ability to estimate various shapes between the same limits.
The general method was the same that has been described. The series used for the training was the set of rectangles used in 1, plus 42 triangles of different shapes, ranging from 1.5 to 5.5 square
inches by .5 square inch steps.
The note on page 387 is equally applicable here. Before and after this training the subjects were tested with 17 areas of various irregular shape running from 3.1 square inches to 11.8,[2] and
averaging 6.4.
Subject W., starting from the point of ability given by experiment 2, and being allowed to note the real lengths after each judgment, made in the first trial sum of deviations 21 square inches, at
the end of 32 trials 8 square inches, 38 per cent. of the former (the training was at intervals of about a week during over a month, hence the slow progress). The average errors were approximately
.21 and .08 square inch. Before and after this training he was tested with the irregular shape series, the results being sums of deviations 17.17 and 16.83 square inches, or 98.0 per cent. of the
former. Average errors, [p. 389] 1.01 and .99. The average error for corresponding sizes in training series was at the end of training approximately .2.
Subject T. in a similar manner made in the first trial sum of deviations 26.5 square inches, at the end of 41 trials 9.0 square inches, 34 per cent. of the former (the training was over a similar
time to W.'s). The average errors were approximately .26 and .09. Before and after this training his results with the irregular shape series were sums of deviations 34.1 and 11.7, the latter being
31.3 per cent. of the former. Average errors 2 and .69. The average error for corresponding sizes in the training series was at the end approximately .2.
Subject N. was tested with the same series as W. and T., but estimated the areas in square centimeters. She was trained with a series of rectangles of 20 to 60 sq. cm. varying each from the next by
one sq. cm.,[3] there being two of each size.[4] With the 20-60 sq. cm. series, being told only that the limits were 20 and 60 sq. cm. and that 1 inch equalled 2.54 cm., N. made an average error of 4
sq. cm. Being then allowed to note the real area after each judgment, she made in her first trial with the series an average error of 2.2 sq. cm. At the end of 28 trials her average error was 0.55
sq. cm., 14 per cent. of the first error, 25 per cent. of the second.
Before any knowledge save that 1 inch equalled 2.54 cm., N. made with estimates of ten of the test series an average error of 63.0 sq. cm., the average real size being 122.3. Of these, four were
under 60 sq. cm., averaging 38.4. The average error for these four was 22.7. After two minutes' observation of a sq. cm., a 10 sq. cm., a 50 sq. cm. and a 100 sq. cm. area, N. made for these four
(when mixed in the total series) an average error of 8.8. For the series of varied shapes (12 being used) she made under similar circumstances average error 12.4, sum of deviations 148.6. After the
28 trials with the training series her average error was 3.6, sum of deviations 44.8, 30 per cent. of the former. For the four areas previously mentioned her average error was 6.0. In brief, her
improvement due to [p. 390] the slight chance to acquire a standard was nearly twice that due to the actual training, in so far as the four determinations were a fair test. For areas in the training
series of sizes corresponding to the varied shapes of the test series her average error at the end of training was approximately .6.
3. The influence of training in estimating weights of 40 to 120 grams on the ability to estimate the weights of miscellaneous objects of similar weights.
The test weights were eight in number, averaging 95.8 grams. The objects were a cup, umbrella handle, pack of cards, etc.
The training was of the general method described, a series of weights 40, 45, 50, 55, * * * 120 grams, differing in no wise save in weight, being used.
Subject W., being allowed from the start to note after each judgment the correct weight, made at his first trial with the series sum of deviations 245, after 50 trials with the series sum of
deviations 125. Average errors, 14 and 7.
In tests with the eight weights before and after this training he made sum of deviations 377 and 142, the average errors being 47 and 18. Six judgments improved, 2 were worse. With corresponding
weights of the training series the average error at the end of training was 9.
Subject T. in a similar experiment made at his first and last trials (T. took 100 trials) with the 40-120 series sums of deviations 135 and 80, 59 per cent. of the former.
In tests with the eight weights before and after this training T. made sums of deviations 182.5 and 159.5, 87 per cent. of the former, the average errors being 22.6 and 19.9. Three judgments
improved, 5 were worse. With corresponding weights of the training series the average error at the end of training was 3.
The influence of training in estimations of magnitude within certain limits on the ability to estimate magnitudes outside those limits.
1. The influence of training in estimating lengths from .5 to 1.5 inches on the ability to estimate lengths of 6.0 to 12.0 inches.
[p. 391] The training was of the general type described on page 250, the series used being a set of cards on each of which was drawn a line. The series contained 5 lines of 1/2 inch and 4 lines of
each of the following sizes, 5/8, 3/4, 7/8, etc., up to 1 1/2. The subject was permitted from the start to note after each judgment the real value. In W.'s case the sum of deviations for the first
trial was 9 (eighths of an inch). In the last of 40 trials it was 2. The inaccuracy in the last trial was thus 22 per cent. of that in the first.
Before the 1st and after the 40th trial, W. estimated the lengths of 28 lines from 6 to 12 inches long. His sums of deviations before and after the training were 7.5 inches and 11 inches
respectively, the number of errors being 13 and 19.
Subject T. in a similar experiment made with the training series in the first trial sum of deviations 8 (the average of the first three trials was 10 1/3). In the last of 24 trials the sum of
deviations was 2 (the average of the last three trials being 1 2/3). The inaccuracy of the last trial was thus 25 per cent. of that of the first. With the test series T. made sums of deviations 7.5
and 7.5, the number of errors being 15 and 14.
For four other subjects the records were as follows:
2. The influence of training in estimating lengths of 6.0 to 12.0 inches on ability to estimate lengths of 15 to 24 inches.
The method of the experiment was the same as in 1. The series used for the training was 20 lines from 6 to 12 inches long. The series used for the tests was 13 lines from 15 to 24 inches long.
Subject C. in the 6-12 inch series made sum of deviations 40 when estimating the lengths without aid, save the knowledge that they were between 6 and 12. In the next trial, being allowed to note the
real length after each judgment, she made [p. 392] sum of deviations 23. After 40 trials with the set (roughly 80 units of trial) her sum of deviations was 5.6, 24.3 per cent of her second trial, 14
per cent. of her first trial. With the 15-24 inch series she was tested before the 1st and after the 40th, the results being sums of deviations 31 (all minus) and 9 (8-, 1+), the latter being 29.0
per cent. of the former.
Subject N., being allowed to note the real lengths after each judgment, made sum of deviations 23. After 32 trials (roughly 64 units of trial) her sum of deviations was 1.0 (approximately), 4.0 per
cent. (approximately) of her first trial. With the 15-24 inch series she was tested after the 8th and 32d, the results being sums of deviations 16 and 13, the latter being 81.0 per cent. of the
former. During the period from trial 8 to trial 32 her improvement on the 6-12 inch lines was such as to reduce the sum of deviations from approximately 10 to approximately 1, that is, to 10.0 per cent. of the former.
The influence of training in estimations of magnitudes within limits on the ability to estimate magnitudes outside those limits, the objects being in addition qualitatively different.
1. The influence of training in estimations of areas of rectangles and triangles of 0.5 to 12.0 square inches on the ability to judge areas from 12 to 65 square inches of different sorts of shapes.
The training series has been described on page 388. The test series contained 10 areas 12 to 18 square inches, averaging 14.1, 6 areas 18 to 24 square inches, averaging 20.9, 6 areas 24 to 30 square inches,
averaging 28.5, 5 areas 30 to 36 square inches, averaging 34.1, 11 areas 36-65 square inches, averaging 44.5.
[p. 393] Subject W. was tested before and after the training described on page 388. The results were as shown on previous page.
The total error after training was thus 80 per cent. of that before training.
Subject T. was tested before and after the training described. The results were:
The total error after training was thus 83 per cent. of that before training.
Subject N. was tested with the large areas at the same times and in the same manner as described on page 389, before and after the training there described. Her estimates were made in square
centimeters. Dividing the areas used in the test into those between 60 and 100, 100 and 140, 140 and 200, 200 and 240, and 240 and over we get the following results:
The inaccuracy in the late tests was thus 61 per cent. of that in the former.
Before any knowledge save that 1 inch equals 2.54 cm., N. made with 10 of the test series an average error of 63 sq. cm. Of these, 6 were above 60 sq. cm., averaging 178 sq. cm. Her average error for
these was 90.1. After two minutes' observations of a sq. cm., a 10 sq. cm., a 50 sq. cm. and a 100 sq. cm. area, N. made for these six (when mixed in the total series) an average error of 55.5, 62
per cent. of the former.
[p. 394] 2. The influence of training in estimating weights of 40 to 120 grams on the ability to estimate the weights of miscellaneous objects of weights outside 40-120 grams.
The test weights were 12 in number, averaging 736 grams. The objects were books, a shoe, a bottle, etc.
The training was that described on page 390.
W. was tested before and after the training with the 40-120 series. The sums of deviations were 1438 and 958, the average errors being 120 and 80. Of the 12, six estimations were improved, two equal,
four worse. One case of improvement was from 390 to 90.
T., in a similar experiment with training as described on page 390, made before and after training sums of deviations 1128 and 1142, the average errors being 94 and 95. 6 judgments improved, three
were equal and three worse.
3. The influence of training in estimating lengths of lines from 0.5 to 1.5 inches on the ability to estimate the lengths of objects qualitatively different of 2.5 to 8.75 inches.
Subject W. before and after the training described on page 391 was tested with 12 objects, e.g., an envelope, a brush, a wrench, the average length being 5.8 inches. His sums of deviations were 5.0
and 5.0, being the same. The average error was 0.42 -- in both cases.
Subject T. in a similar experiment made with a series of ten such objects of nearly the same average length, sums of deviations 2.75 and 3.25, the average errors being 0.275 and 0.325.
When one undergoes training in estimating certain magnitudes he may improve in estimating others from various causes. Such training as was described in our previous paper gives one more accurate
mental standards and more delicacy in judging different magnitudes by them. In the case of estimations of magnitudes in terms of unfamiliar standards such as grams or centimeters, the acquisition of
the mere idea of what a gram or centimeter is, makes a tremendous difference in all judgments. This will be seen in the case of N.'s estimation of areas. She was told that an inch was 2.54
centimeters, and with that as practically the sum of her knowledge of the size of a centimeter [p. 395] made judgments of a certain inaccuracy. The mere examination for two minutes of areas 1, 10, 50
and 100 sq. cm. in size reduced this inaccuracy to 38 per cent. of what it had been. The acquisition of definite ideas is thus an important part of the influence of improvement in one function on the
efficiency of other functions. Even this, however, may not be operative. With some subjects in some cases the new ideas or the refinements of old ideas produced by the training seem impotent to
influence judgments with slightly different data.
It is hard to prove whether, or to what extent, the delicacy in judging by means of such ideas in the case of one set of data is operative with the different data used in the test series. Surely it sometimes is not.
The training might also give ideas of how to most successfully estimate, habits of making the judgments in better ways, of making allowance for constant errors, of avoiding certain prejudices. These
habits might often concern features in which the function trained and the functions tested were identical. For instance, the subjects who judged areas of various shapes made their judgments before
training by looking at the 10, 25 and 100 sq. cm. areas given them as guides; after training they never looked at these but used the mental standards acquired. This habit is a favorable one, for a
person can look at a 25 sq. cm. area in the shape of a square and still think various-shaped areas from 30 to 50 sq. cm. are under 30. The mental standard works better.
The training might give some mysterious discipline to mental powers which we could not analyze but could only speak of vaguely as training of discrimination or attention. If present, such an effect
should be widely and rather evenly present, since the training in every case followed the same plan. It was not.
For functions so similar and for cases so favourable for getting better standards and better habits of judging, the amount of improvement gotten by training in an allied function is small. Studies of the influence of the training of similar functions in school and in the ordinary course of life, so far as we have made such, show a similar failure to bring large increases of efficiency in allied functions.
[1] The areas from 8 to 9.5 were added after the 5th trial. The sums of deviations for the first five trials were 16.5, 9.5, 6.5, 8.5, 7. They then rose to 17.5, 25.0, 21, 19.5, etc. By calculating
what the sum of deviation would have been had the series been full from the start, we get 30.0 square inches.
[2] These areas were determined by careful weighing, but their accuracy is conditioned by such slight variations as there were in the thickness of the paper used.
[3] This series was intended to be made up of areas indistinguishable save by size, but their shapes did perhaps afford some opportunity for unconscious influence on the estimations.
[4] Save in the first few trials, where 25 per cent. were unduplicated.
[5] [Classics Ed.: for the two 1s in the table] The notable decrease in error here was due to a few very great improvements. Out of 12 judgments 4 were worse than before training, 1 was the same and
7 were better.
Number Line
The real number line is a very powerful model for representing relationships between real numbers. The experimental interactive tools below are designed to help students internalize the number line
Number Line Tools for Integer Operations
Adding Integers
Comparing Subtracting an Integer to Adding the Opposite
Comparing the Integer Tile Model to the Real Number Line Model
Number Line Tools for Operations Involving Fractions
Comparing the Fraction Strip Model to the Real Number Line Model
Comparing the Pie Chart Model to the Real Number Line Model (Version I)
Comparing the Pie Chart Model to the Real Number Line Model (Version II)
Comparing the Polygonal Area Model to the Real Number Line Model
Comparing All Three Area Models to the Real Number Line Model
The Volokh Conspiracy - More Multiplication:
More Multiplication:
A comment to the follow-up to my "A Little Multiplication Could Have Gone a Long Way" post says:
What? You mean not everyone memorizes useless conversions like that there are 1440 minutes (or 86400 seconds) per day? What is this country coming to?
I'd have let this slip, but given that the whole thread was about multiplication -- and that I'm a math geek -- I just couldn't resist. First, knowing how many minutes there are in a day, it turns
out, is not useless: Among other things, it would help journalists and press release authors avoid errors like the one I was blogging about.
But second, here's a secret -- you don't have to memorize the conversions. Even if you don't remember the conversion, you can still figure out how many minutes there are in a day, whenever you need
to (for instance, if you want to check whether the item you're about to publish is accurate). How, you might ask? What occult science will give me this magical power? Why . . . multiplication!
In fact, you don't even need to know how to do multiplication, since there are, I'm told, electronic devices that can do it for you. All you need to know is that such an operation exists, and that it can be deployed to solve immensely difficult problems like the "how many 5-minute increments in a day" one. (To be fair, it also helps knowing about multiplication's partner in crime, division.)
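For what it's worth, the whole calculation fits in a few lines of Python (just an illustration; the conversions are the standard ones, not anything from the post itself):

```python
minutes_per_day = 24 * 60               # 1440
seconds_per_day = minutes_per_day * 60  # 86400
five_minute_slots = minutes_per_day // 5
print(minutes_per_day, seconds_per_day, five_minute_slots)  # 1440 86400 288
```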
As it happens, I do remember a rough estimate of the number of seconds in a year, partly because one runs into these "every X seconds/minutes Y happens" -- 30 million, or (for a better approximation)
10 million pi for math geeks. I don't remember the number of minutes or seconds in a day. But I am so learned that the numbers are nonetheless available to me whenever I please. And you too can have
this fearsome power . . . .
Data Logging: Savings Beyond Equipment Scheduling
Uses related to economizers, cycling fans, simultaneous heating/cooling, boiler lockouts, and supply-air resets
Data logging is great for finding equipment-scheduling opportunities in buildings, but its use does not end there. This article discusses applications related to economizers, cycling fans, simultaneous heating and cooling, boiler lockouts, and supply-air resets.
In a field survey of 500 economizers, 64 percent were found to have failed.¹ How can you tell if an economizer has failed? With data loggers. You will need three: one to measure outside-air temperature, one to measure return-air temperature, and one to measure mixed-air temperature. Or, you could use a single four-channel data logger with three temperature sensors. Getting a good reading of mixed-air temperature can be difficult because of air stratification. Just before the inlet to a supply fan (Photo A) or just after the filters is a good spot to place a sensor.
The temperature of mixed air follows a simple equation, related to the percentage of outside air being used:
MAT = OAT × x + RAT × (1 − x)
MAT = mixed-air temperature
OAT = outside-air temperature
x = percentage of outside air
RAT = return-air temperature
You can solve this equation for x, yielding the relationship:
x = (MAT − RAT) ÷ (OAT − RAT)
Using data loggers, you can learn what the mixed-, return-, and outside-air temperatures are and determine if an economizer is working.
Unfortunately, the outside-air temperature may be rather close to the return-air temperature, making the equation blow up because the denominator is near zero. This can be overcome with some simple graphing: make a scatter plot of your logged data, with OAT − RAT on the x-axis and MAT − RAT on the y-axis. Discard the data recorded when the unit was turned off.
If the unit is not equipped with an economizer, you may see something resembling Figure 1. Figure 1 was created using measured outside-air and return-air temperature; for mixed-air temperature, a
theoretical model based on dampers letting in 20-percent outside air was used. Note that the slope of the line is 0.2. The economizer will be between these positions when outside-air temperature is
less than 55°F (left side of the graph). Using real-world data, you may get a graph more closely resembling Figure 2. Note that the slope is closer to 0.57, indicating the air handler is stuck at
57-percent outside air.
If the unit is equipped with an economizer, you may see something like Figure 3. Note the difference in slopes. On the left side of the graph, the slope is 1, indicating the unit is at 100-percent
outside air. On the right, the slope is 0.2, indicating the economizer dampers are at a minimum.
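As a rough illustration of that slope-fitting step (not from the article; the readings and the function name are made up), here is a short Python sketch that estimates the outside-air fraction from logged temperatures already filtered to hours when the unit was running:

```python
def outside_air_fraction(mat, rat, oat):
    """Estimate the outside-air fraction x by fitting MAT - RAT = x * (OAT - RAT)
    through the origin (least squares), rather than dividing hour by hour,
    which blows up whenever OAT is close to RAT."""
    xs = [o - r for o, r in zip(oat, rat)]   # OAT - RAT
    ys = [m - r for m, r in zip(mat, rat)]   # MAT - RAT
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Made-up logger readings (degrees F) taken while the unit was running:
oat = [40, 45, 50, 60, 70]
rat = [72, 72, 73, 72, 73]
mat = [53, 56, 59, 65, 71]
print(round(outside_air_fraction(mat, rat, oat), 2))  # ~0.6: dampers stuck well past minimum
```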
Now that you can tell whether or not your economizer is working, you can calculate savings.
A building’s cooling or heating system takes mixed air and cools or heats it to the desired supply-air temperature. The energy use is equal to:
1.08 × cfm × ∆T = energy
1.08 = constant related to the density and specific heat of air at sea level. (Note: A different constant is used for higher altitudes.) This makes the equation produce a result in British thermal units per hour.
cfm = flow rate across the fan, which can be found in building design documents
∆T = difference between mixed-air temperature and supply-air temperature
Knowing energy use, you can download typical-meteorological-year (TMY) data to get hourly weather patterns for your site and to calculate mixed-air temperature with and without an economizer for
every hour of the year. Using the airflow rate of your system, you then can calculate the energy savings from installing or repairing an economizer. (Note: The above calculation does not account for
latent cooling. In the San Francisco Bay Area, a factor of 0.25 to 0.3 is conservative as an adder to account for dehumidification.)
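A minimal Python sketch of that comparison, assuming the piecewise mixed-air behavior described later in the article (55°F supply setpoint, roughly a 70°F economizer high limit, 20-percent minimum outside air) and made-up hourly data; it is only meant to show the shape of the hourly TMY calculation, not a finished tool:

```python
def mixed_air_temp(oat, rat=72.0, min_oa=0.20, sat=55.0, high_limit=70.0, economizer=True):
    """Hourly mixed-air temperature: hold the supply setpoint when it is cold
    enough outside, use 100% outside air up to the high limit, otherwise fall
    back to the minimum outside-air fraction."""
    if economizer:
        if oat <= sat:
            return sat
        if oat <= high_limit:
            return oat
    return min_oa * oat + (1.0 - min_oa) * rat

def cooling_btu(tmy_oat, cfm, sat=55.0, economizer=True, latent_adder=0.0):
    """Sum 1.08 * cfm * (MAT - SAT) over hours that need cooling."""
    total = 0.0
    for oat in tmy_oat:
        mat = mixed_air_temp(oat, sat=sat, economizer=economizer)
        total += 1.08 * cfm * max(0.0, mat - sat)
    return total * (1.0 + latent_adder)

# tmy_oat would be 8,760 hourly dry-bulb values from a TMY file; a few made-up hours here.
tmy_oat = [48, 58, 65, 72, 85, 95]
saved = cooling_btu(tmy_oat, cfm=10000, economizer=False) - cooling_btu(tmy_oat, cfm=10000, economizer=True)
print(f"{saved:,.0f} Btu of cooling avoided over these hours")
```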
Cycling Fans
With data loggers, you can accurately forecast energy savings resulting from the installation of variable-frequency drives (VFDs) on pumps and fans.
VFDs take advantage of the affinity laws. In this case, you need only one:
(P1 ÷ P2) = (RPM1 ÷ RPM2)³
This can be rewritten as:
P1 = P2 × (RPM1 ÷ RPM2)³
P1 = fan or pump power at a slow (or fast) speed
P2 = fan or pump power at a fast (or slow) speed
RPM1 = the slow (or fast) speed
RPM2 = the fast (or slow) speed
(Note: The affinity laws are a bit idealized compared with results in the field. If you are installing a VFD on a pump or fan with high static head, you are better off using an exponent of 2.2, as
opposed to 3. If the pump or fan has relatively low static head, an exponent of 2.7 is recommended.)
The important thing to note about the above equation is that as VFD speed decreases, power decreases like the cube. So, if you cut the speed in half, you end up using only 12.5 percent of the power.
To illustrate, assume P2 is the power of the fan when operating at 60 hertz without a VFD. Then, we see:
P1 = 100% × (30 ÷ 60)³ = 12.5%
Imagine a 100-kw fan cycling half of the time. Over the course of an hour, the fan would consume 50 kwh of energy. If a VFD were installed and turned down to 50-percent speed, the fan would consume
only 12.5 kwh of energy over the course of an hour, a savings of 37.5 kwh (Figure 4).
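A quick Python check of that worked example (the power figures and the ideal exponent of 3 follow the article; the function name is ours):

```python
def vfd_power_fraction(speed_fraction, exponent=3.0):
    """Fan/pump power as a fraction of full-speed power. The ideal affinity law
    uses an exponent of 3; 2.2-2.7 is more realistic depending on static head."""
    return speed_fraction ** exponent

full_speed_kw = 100.0

# Base case: the fan cycles on for half of the hour at full speed.
base_kwh = full_speed_kw * 0.5                     # 50 kWh in that hour

# Retrofit: the VFD runs the whole hour at 50% speed.
vfd_kwh = full_speed_kw * vfd_power_fraction(0.5)  # 12.5 kWh with the ideal exponent

print(base_kwh - vfd_kwh)                          # 37.5 kWh saved
```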
To predict fan speed, which will change throughout the year based on weather conditions, we need to measure the cycle rate of the fan and regress it against weather.
Connect a current transformer to your data logger, and measure the current going into the fan motor. To calculate cycle rate accurately, use 1- or 2-min interval data. Use another data logger to
measure outside-air conditions, or download National Oceanic and Atmospheric Administration data. Now, calculate your cycle rate for each hour, and compare that to your outside-air-temperature data.
Keep in mind that cooling towers track wet-bulb temperature, while evaporator fans are likely to track dry-bulb temperature.
The result should be a graph similar to Figure 5. Note that the fan starts coming on at 49°F and is at 100 percent by 72°F. Between these two temperatures, the cycle rate follows the equation:
Cycle rate = 0.0438 × OAT − 2.1187
If the fan is cycling at 50 percent, the load your system is experiencing is 50 percent of the design capacity. Again, energy use follows the equation:
1.08 × cfm × ∆T = energy
If we reduce flow rate across the fan by half, we get half the energy transfer, the effect of the fan cycling off half of the time. VFD speed, then, is equal to fan cycle rate. (Note: There is a
built-in assumption that a reduction in fan speed will not change delta-T across the coil. In fact, a reduction in fan speed probably will lead to a slightly higher delta-T, meaning speed could be
reduced even further. Thus, savings estimates will be slightly conservative.)
Now, you are ready to calculate savings. Create a spreadsheet. In the first column, enter the annual weather data. In the next column, calculate cycle rate based on the weather (cap the cycle rate so
it does not go lower than zero or higher than 100 percent). The next column should be baseline energy use: the cycle rate times the fan power draw (in kilowatts, measured with a power meter to
account for power factor and voltage). In the final column, use the fan affinity law presented above to calculate the energy use of the fan with a VFD.
Subtract the last two columns, and you have calculated energy savings. (Make sure to account for scheduling.)
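The spreadsheet described above can also be sketched directly in Python. The regression coefficients below are the example ones from Figure 5; the hours, fan size, and middle-of-the-road exponent are hypothetical:

```python
def cycle_rate(oat):
    """Cycle rate regressed against outside-air temperature, clamped to 0-100%."""
    return min(1.0, max(0.0, 0.0438 * oat - 2.1187))

def savings_kwh(tmy_oat, fan_kw, exponent=2.5):
    """Baseline = cycling fan at full speed; retrofit = VFD tracking the same load."""
    base = sum(cycle_rate(t) * fan_kw for t in tmy_oat)
    vfd = sum(cycle_rate(t) ** exponent * fan_kw for t in tmy_oat)
    return base - vfd

# tmy_oat would normally be 8,760 hourly values; a few hypothetical hours here:
print(round(savings_kwh([45, 55, 65, 72, 80], fan_kw=100.0), 1))
```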
Simultaneous Heating/Cooling, Boiler Lockouts, Supply-Air Resets
Supply-air resets can be a great way to reduce energy use in a variable-air-volume (VAV) system. To calculate savings from supply-air resets, one must log:
• The temperature of the air coming out of the air handler.
• The temperature of the air outside (dry bulb).
• Some of the hottest zones in the building or the zones with the coolest air coming out of the registers (likely, on the south or west side of the building).
Unless you have been with the building since construction or you have a good controls system, the first thing you should do is check supply-air temperature across a range of outside-air temperatures
to make sure you are not performing a supply-air reset already. If supply-air temperature always is between 52°F and 58°F, then the compressor probably is cycling. Do a regression of supply-air
temperature based on outside-air temperature to check. If your line basically is flat (e.g., always 55°F or 58°F), then you do not have a supply-air reset. If your line slopes
significantly—supply-air temperature is 55°F when outside-air temperature is above 75°F and 65°F when outside-air temperature is below 40°F—then you probably do have a supply-air reset.
The simplest way to control supply-air reset is based on outside-air temperature. This is how we will calculate savings. The equation we will rely on is:
1.08 × cfm × ∆T = Btu
1.08 = constant based on altitude
To calculate flow rate, you need to know the maximum airflow, which occurs during the cooling design day. This information can be found in air-handler cut sheets. You also need to know the
outside-air temperature at which the VAV boxes close to minimum position and what that minimum position is.
Suppose you know the following about your building:
• The design day is 100°F.
• The maximum flow rate through the air handler is 50,000 cfm.
• The VAV boxes close to a minimum position when outside-air temperature is 45°F.
• The minimum position is 25 percent.
You now are prepared to calculate flow rate based on outside-air temperature. When the outside-air temperature is 100°F, the flow rate is 50,000 cfm; when the outside-air temperature is 45°F, the
flow rate is 12,500 cfm (Table 1).
y = mx + b
y = flow rate
m = slope
b = y-intercept
The slope is equal to the change in flow rate over the change in outside-air temperature:
m = (50,000 − 12,500) ÷ (100 − 45) = 681.82
The y-intercept can be found by solving for a known point—namely, 100°F outside-air temperature—and using the slope above:
50,000 = 681.82 × 100 + b → b = 50,000 − 68,182 = -18,182
To ensure the equation works, let’s plug in 45°F outside air:
cfm = 681.82 × 45 − 18,182 = 12,499.99 ≈ 12,500
Now, we can calculate flow rate based on outside-air temperature using the equation for flow rate we developed.
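A short Python sketch of that flow-rate model, using the two design points worked out above and clamping at both ends (function and parameter names are ours):

```python
def flow_cfm(oat, design_oat=100.0, design_cfm=50000.0, min_oat=45.0, min_fraction=0.25):
    """Linear interpolation between the minimum-position point and the design day."""
    min_cfm = min_fraction * design_cfm
    slope = (design_cfm - min_cfm) / (design_oat - min_oat)   # ~681.8 cfm per degree F
    intercept = design_cfm - slope * design_oat               # ~-18,182
    return max(min_cfm, min(design_cfm, slope * oat + intercept))

print(round(flow_cfm(45)), round(flow_cfm(72.5)), round(flow_cfm(100)))  # 12500 31250 50000
```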
Once we know the delta-T, we can finish calculating energy savings. In this case, there are two delta-Ts: from the mixed air to the supply air and from the supply air to the discharge air. The former
indicates cooling energy use, while the latter indicates reheat use.
To calculate cooling energy use, we need to know the mixed-air temperature across a range of outside-air temperatures. If you do not have an economizer and use a constant 20-percent outside air, this
is simple:
MAT = OAT × 0.2 + RAT × 0.8
Usually, you can assume return-air temperature is roughly equal to or one or two degrees higher than space temperature. If you assume a flat 72°F year-round, you will be fairly accurate.
For outside-air temperature, you can download a TMY file for every hour of the year for your building.
Even if you have an economizer, calculating mixed-air temperature is relatively easy. Assuming a supply-air-temperature setpoint of 55°F and an outside-air temperature below 55°F, mixed-air
temperature will be 55°F. If outside-air temperature is greater than 55°F, but less than, say, 70°F, mixed-air temperature will be the same as outside-air temperature. If outside-air temperature is
above 70°F, mixed-air temperature will follow the equation above.
For cooling energy:
1.08 × cfm × (MAT − SAT)
For reheat energy, log the discharge-air temperature at a few VAV boxes, as well as outside-air temperature. Next, create a graph with discharge-air temperature on the y-axis and outside-air
temperature on the x-axis. Add a trend line, and you can see what discharge-air temperature should be across a range of outside-air temperatures. (Do not forget to eliminate data for unoccupied hours.)
Now, you are ready to calculate energy use. In the base case, you have energy use with a static supply-air setpoint. In the retrofit case, you have a supply-air reset. For boiler resets, lock out
your boiler (and heating water pumps) when outside-air temperature exceeds 70°F (this may vary a bit, based on region and building, but is a good starting point). Calculating savings from a boiler
lockout is simple: Heating energy use above 70°F goes to zero, as does energy use for pumps.
You also can combine a supply-air reset with a boiler lockout. Table 2 shows savings at a range of temperatures. To calculate annual savings, download a TMY weather file, and perform an analysis for
each hour of the year using the methods outlined here. Heating energy savings should be divided by the efficiency of your boiler; cooling energy savings should be divided by your unit’s coefficient
of performance. Lastly, do not forget to take scheduling into account, and do not forget pump savings from your boiler lockout.
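Putting the pieces together, here is a hedged Python sketch of one hour of the base-case-versus-retrofit comparison, assuming the example reset schedule above (55°F supply air when it is 75°F or warmer outside, 65°F at 40°F and below), a 70°F boiler lockout, constant 20-percent outside air with no economizer, and made-up airflow and discharge-air numbers:

```python
def supply_air_setpoint(oat, reset=True):
    """Static 55F setpoint, or a reset sliding from 65F (cold out) to 55F (warm out)."""
    if not reset:
        return 55.0
    return max(55.0, min(65.0, 65.0 - (oat - 40.0) * 10.0 / 35.0))

def hour_kbtu(oat, cfm, dat, retrofit, lockout_f=70.0, rat=72.0, min_oa=0.2):
    """Cooling (mixed air down to supply) plus reheat (supply up to discharge-air
    temperature at the VAV box), with the boiler locked out above lockout_f in
    the retrofit case."""
    sat = supply_air_setpoint(oat, reset=retrofit)
    mat = min_oa * oat + (1.0 - min_oa) * rat
    cooling = 1.08 * cfm * max(0.0, mat - sat)
    boiler_off = retrofit and oat > lockout_f
    reheat = 0.0 if boiler_off else 1.08 * cfm * max(0.0, dat - sat)
    return (cooling + reheat) / 1000.0

# One mild hour: 60F outside, 20,000 cfm through the boxes, 75F discharge air.
base = hour_kbtu(60, cfm=20000, dat=75, retrofit=False)
new = hour_kbtu(60, cfm=20000, dat=75, retrofit=True)
print(round(base - new, 1), "kBtu saved this hour (apply COP / boiler efficiency as noted above)")
```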
1) Hart, R., Callahan, J., Anderson, K., & Johanning, P. (2011). Unitary HVAC premium ventilation upgrade. Paper presented at 2011 ASHRAE Winter Conference, Las Vegas. Available at http://
Brenden Millstein is co-founder and chief executive officer of Carbon Lighthouse, an engineering company dedicated to making carbon neutrality profitable for organizations. Previously, he worked for
the New York State Energy Research & Development Authority and, before that, as a research fellow at Lawrence Berkeley National Laboratory. To date, he has been involved in more than 400
energy-efficiency and demand-response projects in California, Oregon, and New York. He has a bachelor’s degree in physics from Harvard University and master’s degrees in renewable-energy engineering
and business administration from Stanford University.
College Trigonometry
ISBN: 9780618825073 | 061882507X
Edition: 6th
Format: Nonspecific Binding
Publisher: THOMPSON LEARNING
Pub. Date: 3/1/2007
Why Rent from Knetbooks?
Because Knetbooks knows college students. Our rental program is designed to save you time and money. Whether you need a textbook for a semester, quarter or even a summer session, we have an option
for you. Simply select a rental period, enter your information and your book will be on its way!
Top 5 reasons to order all your textbooks from Knetbooks:
• We have the lowest prices on thousands of popular textbooks
• Free shipping both ways on ALL orders
• Most orders ship within 48 hours
• Need your book longer than expected? Extending your rental is simple
• Our customer support team is always here to help
Re: st: Graph png format - differences in scaling between vers. 10 and 11
Re: st: Graph png format - differences in scaling between vers. 10 and 11
From cnguyen@stata.com
To statalist@hsphsun2.harvard.edu
Subject Re: st: Graph png format - differences in scaling between vers. 10 and 11
Date Wed, 17 Mar 2010 11:38:10 -0500
Johannes Geyer <JGeyer@diw.de> wrote:
> I just stumbled across an unexpected behaviour. I updated some graphs for
> a Beamer-Latex presentation. I used the "png" format. But I used Stata 11
> instead of Stata 10. The result was a file smaller than the previous
> version (by a factor of four) with a smaller resolution. I guess, png is
> not a vector format and not indifferent to scaling. So, there is a smaller
> picture with less pixels which looks less nice when its size is increased.
> Even if I use Stata version-control, there is no difference. These new
> graphs look bad in the presentation, so I recreated them using Stata 10.
As others have mentioned, you should always export Stata's graphs to a vector
format for publication quality output. The PNG format is not a vector format
but a bitmap format and does not scale well.
As to why Johannes is seeing different behavior in Stata 11 and Stata 10 when
exporting his graphs to the PNG format (or any other bitmap format such as
TIFF), I believe it's simply because his graph window in Stata 11 is smaller
than his graph window in Stata 10.
When Stata exports a graph to a bitmap format such as PNG or TIFF, Stata
exports the bitmap image using the dimensions of the image displayed in the
Graph window. The exported image is pretty much a snapshot of what's in the
Graph window. If you make the Graph window really large, you'll get a
bitmap with large dimensions. If you make the Graph window really small,
you'll get a bitmap with small dimensions. You can specify the pixel
dimensions of the exported bitmap by using the width() and/or height()
options. If you specify just the width() or height() option, Stata will
determine the appropriate pixel height or width based on the graph's aspect
ratio (which is determined by xsize/ysize).
When exporting graphs to a bitmap format, I recommend always using the width()
option to get consistent output.
-Chinh Nguyen
What's theta??
Stuck on what's next... Find the sector angle (where cone volume is maximised) when a sector is removed and the circle circumference is joined (point to point) to make a cone.
I came up with this equation:
Volume of cone − sector: V = (1/3)h(πr² − (θ/360)πr²)
Unknowns: V, theta, r, h.
I know I need a derivative, but I need to put the equation in terms of one variable.
Any tips on how to proceed??
Stuck on what's next......Find the sector angle (where cone volume is maximised)when a sector is removed and the circle circumference is joined (point to point) to make a cone.
I came up with this equation:
Volume of cone – sector=V=1/3h(pir^2-(theta/360xpir^2))
Unknowns…V, theta, r, h.
I know I need derivative but I need to put equation in terms one variable.
Any tips on how to proceed??
Area of the circle = π*R^2
Area of the sector = 1/2*R^2*θ
Therefore surface area of the cone = π*R^2 - 1/2*R^2*θ = πR*r,....(1) where r is the radius of the cone.
Now volume of the cone V = 1/3*π*r^2*h.
But r^2 = (R^2 - h^2)
So V = 1/3*π*h*(R^2 - h^2)
dV/dh = 0
Find R in terms of h and then in terms of r, and put it in eq. (1) to find θ.
Stuck on what's next......Find the sector angle (where cone volume is maximised)when a sector is removed and the circle circumference is joined (point to point) to make a cone.
I came up with this equation:
Volume of cone – sector=V=1/3h(pir^2-(theta/360xpir^2))
Unknowns…V, theta, r, h.
I know I need derivative but I need to put equation in terms one variable.
Any tips on how to proceed??
Hi Neverquit,
Use Pythagoras' theorem to obtain one variable instead of 2
$R^2=r^2+h^2\ \Rightarrow\ r^2=R^2-h^2$
differentiate wrt "h"
If we find "r", we can find the remaining arc length of the circle from which the sector was cut, since
$2{\pi}r=cone\ circular\ circumference=arc\ length\ from\ circle$
$\frac{2{\pi}r}{2{\pi}R}=\frac{\alpha}{360^o}=\frac {r}{R}=\sqrt{\frac{2}{3}}$
Hello, Neverquit!
This is an intricate (and badly worded) problem . . .
Find the sector angle (where cone volume is maximized)
when a sector is removed and the remainder is joined to form a cone.
[ASCII diagram: a circle with centre O and radius R; A and B are points on the circle, and the sector AOB has central angle θ.]
The sector angle is $\theta \,=\,\angle AOB.$
The radius of the circle is $R.$
The arc length of $AB$ is: . $R\theta.$
This is the circumference of the circular base of our cone.
. . $2\pi r \:=\:R\theta \quad\Rightarrow\quad r \:=\:\frac{R\theta}{2\pi}$ .[1]
A cross-section of our cone looks like this:
[ASCII diagram: cross-section of the cone, a triangle with slant height R, vertical height h, and base radius r.]
We have: . $h^2 \;=\;R^2 - r^2 \;=\;R^2 - \left(\frac{R\theta}{2\pi}\right)^2 \;=\;\frac{R^2(4\pi^2 - \theta^2)}{4\pi^2}$
. . Hence: . $h \;=\;\frac{R}{2\pi}\sqrt{4\pi^2 - \theta^2}$ .[2]
The volume of a cone is: . $V \;=\;\frac{\pi}{3}r^2h$
Substitute [1] and [2]: . $V \;=\;\frac{\pi}{3}\left(\frac{R\theta}{2\pi}\right )^2\cdot\frac{R}{2\pi}\sqrt{4\pi^2 - \theta^2}$
. . Therefore: . $V \;=\;\frac{R^3}{24\pi^2}\theta^2\sqrt{4\pi^2-\theta^2}$
And that is the function we must maximize.
Hi soroban,
"The arc length of $AB$ is: $R\theta$. This is the circumference of the circular base of our cone: $2\pi r \:=\:R\theta$ .[1]"
This is not correct. The cone is not formed from the sector removed from the circle. It is formed from the remaining part of the circle.
Hi Archie Meade,
Your calculation of r is correct. But θ is not correct. For that you have to equate the area of the circle to the sum of the area of the sector and the surface area of the cone.
Therefore surface area of the cone = π*R^2 - 1/2*R^2*θ = πR*r
Substitute the value of r and find θ.
My calculation is fine; I wasn't working with the cone surface area, I worked with the original circle.
A few more questions.....
I followed Archie Meade's methodology and I don't understand a few parts:
How do we get $r^2=\frac{2R^2}{3}$?
How do I get $\theta=360^o-\alpha$?
($\theta$ should come out to $66.1^o$.)
Hi Neverquit,
in solving for $h^2$ of the cone corresponding to max volume,
it is $\frac{R^2}{3}$
where $R$ is the radius of the original circle.
When the sector is cut from that, we are left with the "pac-man" shaped remainder of the circle,
that looks like a pizza with a slice removed.
The cone is formed from this.
I calculated the angle of that and subtracted the answer from 360 degrees,
since that then gives the angle of the sector that was removed from the original circle.
The outside arc length of the "pac-man" equals the circumference of the circular opening of the cone.
Also, using Pythagoras' theorem, we can express r in terms of R
using our result for the height of the cone corresponding to maximum volume,
since R is the slant length of the cone.
Evaluating that gives $r^2=R^2-\frac{R^2}{3}=\frac{2R^2}{3}$, so $\frac{r}{R}=\sqrt{\frac{2}{3}}$.
The cone circumference length is the "pac-man" arc length.
The ratio of this length to the original circle circumference
is the ratio of alpha to 360 degrees, where alpha is the angle of the "pac-man".
Theta is then alpha subtracted from 360 degrees.
Alpha is about 294 degrees or so, giving theta approximately 66 degrees.
Then $\theta=360^o-293.938^o=66.062^o$
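For anyone who wants to check the thread's answer numerically, here is a short Python sketch (not part of the original posts) that evaluates soroban's volume expression, where theta is the angle of the piece of the circle that is rolled into the cone:

```python
import math

# V(theta) = (R^3 / (24 pi^2)) * theta^2 * sqrt(4 pi^2 - theta^2); the constant
# factor does not affect where the maximum occurs.
theta_kept = 2 * math.pi * math.sqrt(2.0 / 3.0)     # closed form from dV/dtheta = 0
removed = 360.0 - math.degrees(theta_kept)

# Brute-force sanity check of the maximum over (0, 2*pi).
grid = [k / 1000.0 for k in range(1, int(2 * math.pi * 1000))]
best = max(grid, key=lambda t: t * t * math.sqrt(4 * math.pi**2 - t * t))

print(round(math.degrees(theta_kept), 2), round(removed, 2), round(math.degrees(best), 1))
# ~293.94 degrees kept, ~66.06 degrees removed; the brute-force search agrees
```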
Collaborative Number Theory Seminar
Spring 2014 Schedule:
February 7: Xin Wan (Columbia University)
Title: Iwasawa-Greenberg main conjectures for Rankin-Selberg
p-adic L-functions
Abstract: In this talk I will prove an Iwasawa-Greenberg main conjecture for Rankin-Selberg p-adic L-functions for a general modular form and a CM form such that the CM form has higher weight, using
Eisenstein series on U(3,1), under the assumption that the CM form is ordinary (no ordinary conditions on the general modular form). This has many arithmetic applications including proving an
anticylotomic main conjecture in the sign=-1 case (formulated by Perrin-Riou). In view of the Beilinson-Flach elements this gives one way of proving the Iwasawa main conjecture for supersingular
elliptic curves formulated in different ways by Kato, Kobayashi and Perrin-Riou.
February 21: Nathan Kaplan (Yale University)
Title: Rational point counts for curves and surfaces over finite fields via coding theory
Abstract: We explain an approach of Elkies to counting points on varieties over finite fields. A vector space of polynomials gives a linear subspace of (F_q)^N, a linear code, by the evaluation map.
Studying properties of this code and its dual gives information about the distribution of rational point counts for the family of varieties defined by these polynomials. We will describe how this
approach works for families of genus one curves and del Pezzo surfaces in projective space over F_q and will mention how class numbers and Fourier coefficients of modular forms appear in these point
counts. No previous familiarity with coding theory will be assumed.
March 7: Daniele Turchetti (Institut de mathématiques de Jussieu)
Title: Lifting Galois covers to characteristic zero with non-Archimedean analytic geometry
March 21: Maksym Radziwill (IAS)
Title: L-functions, sieves and the Tate Shafarevich group
Abstract: I will explain joint work with Kannan Soundararajan, where we find an "L-function analogue" of the Brun-Hooley sieve. Essentially, our method allows us to work analytically with long
truncated Euler products inside the critical strip. As a consequence we obtain several new results on the distribution of the central values of families of L-functions. In particular I'll focus on
consequences for the distribution of the Tate-Shafarevich group of (prime) twists of an elliptic curve.
April 4: Ian Whitehead (Columbia University)
Title: Axiomatic Multiple Dirichlet Series
Abstract: I will outline an axiomatic description of multiple Dirichlet series based upon work of Diaconu and Pasol. The axioms lead to a canonical construction of multiple Dirichlet series with
(infinite) affine Weyl groups of functional equations. This work is over function fields, and has applications to arithmetic problems including the distribution of point counts and L-functions in
families of curves over a fixed finite field.
May 9: Adriana Salerno (Bates College)
Seminar schedule in past semesters:
Fall 2013
Spring 2013
Fall 2012
Spring 2012
Fall 2011
Spring 2011
Fall 2009
Spring 2009
Spring 2007
Fall 2006
Spring 2006