content
stringlengths 86
994k
| meta
stringlengths 288
619
|
|---|---|
Math Forum Discussions - Numerical ODEs
Date: May 12, 2013 11:02 AM
Author: Jose Carlos Santos
Subject: Numerical ODEs
Hi all,
This question is perhaps too vague to have a meaningful answer, but
here it goes.
In what follows, I am only interested in functions defined in some
interval of the type [0,a], with a > 0.
Suppose that I want to solve numerically the ODE f'(x) = 2*sqrt(f(x)),
under the condition f(0) = 0. Of course, the null function is a
solution of this ODE. The problem is that I am not interested in that
solution; the solution that I am after is f(x) = x^2.
For my purposes, numerical solutions are enough, but if I try to solve
numerically an ODE of the type f'(x) = g(f(x)) (with g(0) = 0) and
f(0) = 0, what I get is the null function. So far, my way of dealing
with this has been to solve numerically the ODE f'(x) = g(f(x)) and
f(0) = k, where _k_ is positive but very small and to hope that the
solution that I get is very close to the solution of the ODE that I am
interested in (that is, the one with k = 0). Do you know a better way
of dealing with this problem?
Best regards,
Jose Carlos Santos
|
{"url":"http://mathforum.org/kb/plaintext.jspa?messageID=9122703","timestamp":"2014-04-17T19:07:07Z","content_type":null,"content_length":"2018","record_id":"<urn:uuid:f995ee68-b389-4245-ae19-4091b8fdc81f>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00624-ip-10-147-4-33.ec2.internal.warc.gz"}
|
help me!!! (partial derivative)
August 5th 2008, 08:17 PM
help me!!! (partial derivative)
1. Take the derivatives of the following functions with respect to x (∂/∂x):
2. Calculate the partial derivative with respect to x (∂/∂x) and y (∂/∂y ):
3. Let g = f (x, y) = ax2 + bxy + cy2 . Calculate the partial derivative with respect to x
(∂g/∂x) and y (∂g/∂y).
4. Let g = f (x, y, z) = xa yb zc . Calculate the partial derivative with respect to x (∂g/∂x),
y (∂g/∂y) and z (∂g/∂y).
5. Let g = f (x, y) = a ln x + bln y . Calculate the partial derivative with respect to x
(∂g/∂x) and y (∂g/∂y).
6. Let g = f (x, y) = eax+by . Calculate the partial derivative with respect to x (∂g/∂x) and y
i do not exactly understand how to solve partial derivative (Worried)
please help me thanks!
August 5th 2008, 10:57 PM
Serena's Girl
Partial derivatives
Let's say we are given g = f(x,y,z), and we wish to take the partial derivative of g with respect to x (i.e. ∂g/∂x).
This simply means that we will obtain the derivative of g with respect to x while treating the other variables (i.e. y and z) as though they were constants. We will solve #3 to demonstrate:
$<br /> g = f(x,y) = ax^2 + bxy + cy^2<br />$
Solve for ∂g/∂x. We treat the variable y as though it were a constant. So...
$<br /> \frac { \partial g} { \partial x} = a \frac {d} {dx} (x^2) + by \frac {d} {dx} (x)<br />$
The third term "disappeared" because, in this case, cy^2 is treated as a constant, and the derivative of a constant is 0.
Thus, we obtain:
$<br /> \frac { \partial g} { \partial x} = 2ax + by<br />$
Similarly, ∂g/∂y is:
$<br /> \frac { \partial g} { \partial y} = bx + 2cy<br />$
|
{"url":"http://mathhelpforum.com/calculus/45381-help-me-partial-derivative-print.html","timestamp":"2014-04-18T16:02:48Z","content_type":null,"content_length":"6099","record_id":"<urn:uuid:ef7c9051-52c6-41e5-9170-b08962e48d5e>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00467-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A Graph That Will Make You Smile
Hosted by The Math Forum
Problem of the Week 988
A Graph That Will Make You Smile
MacPoW Home || Forum PoWs || Teachers' Place || Student Center || Search MacPoW
Alice: I have just come across a graph that is the funniest thing I have ever seen. Just graph the set of points (x,y) such that
1/2 < floor[ mod(floor[y/17] 2^(-17 floor[x] - mod(floor[y],17)), 2) ]
Bob: Let me be sure I understand: the thing inside the outermost "mod" is not an integer. What does "mod" mean here?
Alice: It means the remainder when multiples of 2 are subtracted. So mod(15/4, 2) would be 7/4. The mod function in most computer programs works this way, so don't worry about it.
Bob (some minutes later). You're right. It is not hard to graph. The function on the right is either 0 or 1 so it is just a matter of computing where it is 1. I did it and didn't see any points at
all for small values, so I went up a bit and found the following. I fail to get the joke.
Alice: Oh, I forgot to say: You have to graph this for x between 0 and 110 and y between k and k+17, where k is the following 543-digit integer.
Bob (after a break for some more computation): Oh, now I get it! Very funny indeed.
What was the joke?
© Copyright 2003 Stan Wagon. Reproduced with permission.
|
{"url":"http://mathforum.org/wagon/fall03/p988.html","timestamp":"2014-04-19T10:32:17Z","content_type":null,"content_length":"4570","record_id":"<urn:uuid:03aa0ded-436e-4601-8fae-2c35500c80b7>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00603-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Algebra 2 Tutors
New York, NY 10014
Professional Math, SAT, and GRE Tutor
...As a student, I've scored extremely well on my SAT (Math - 800) and middle and high school subjects (A student). I truly believe I have the qualifications to help you or your child with math and
other subjects mentioned below: Algebra 1,
Algebra 2
, Geometry, Prealgebra,...
Offering 10+ subjects including algebra 2
|
{"url":"http://www.wyzant.com/Elizabeth_NJ_algebra_2_tutors.aspx","timestamp":"2014-04-21T03:36:50Z","content_type":null,"content_length":"61141","record_id":"<urn:uuid:c92cfbf6-4008-430f-9325-8a1bf4198a68>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00256-ip-10-147-4-33.ec2.internal.warc.gz"}
|
ProgettoCosmo - An automatic Comparison of the Human and Chimpanzee Genomes
Published human (Homo sapiens) and chimpanzee (Pan troglodytes) genome percentage similarities - 98 %,[1] 99.4 %,[2] 98.77 %,[3] 95 %,[4] and 96 %[5] - demonstrate that any perceived precision is
illusory. Some similarity between these genomes is unsurprising, but the important lesson these numbers show is that relatively small differences in DNA may produce large morphological and behavioral
differences. Summarizing the sequence differences in a single number is challenging if not impossible.
Among the important factors to take into consideration when comparing DNA sequences from a structural point of view are 1) the length of the sequences, 2) the way encoded information will be
expressed, 3) the completeness of the sequences (significant portions of H. sapiens and P. troglodytes heterochromatin remain unsequenced[6]), how differences are distributed, 4) locations at which
recombination occurs (these differ between H. sapiens and P. troglodytes chromosomes[7]) and 5) how DNA is segmented (i.e., the different chromosomes numbers and different gene distribution among
DNA sequences from a mere informatics point of view are simple strings of characters. Therefore it is also possible to develop tests comparing genomes as unstructured sequences of symbols, without
considering genes, pseudo-genes, coding and non coding regions, vertical and horizontal gene transfer, open reading frames (ORFs), or whatever structured concept. This is the goal of this article.
Comparing strings of characters
Strings of symbols are objects more complex than - for example - polygons. It's easy to define equality between polygons: they are equal when they have equal sides, angles and vertices. Polygons are
similar when have equal angles and vertices but different side lengths. Differently from polygons there are a lot of ways to compare strings of characters. An approach to the problem is considering
the set of all strings of characters as a metric space and defining a distance function on all the couples of strings. Many distance functions were developed by mathematicians for studying similarity
among strings (for a list of them see for example http://www.dcs.shef.ac.uk/~sam/stringmetrics.html ).
The simplest way of comparing two strings A and B is the pairwise comparison test or identity test. In this test the n-th character of A is compared to the n-th character of B. In other words the
order matters. The test starts from the first character and terminates at the last character. If two strings of total length n have m characters matching then we say the two strings are identical at
100*m/n per cent. Of course if two strings of identical length n have n characters matching then we say the two strings are identical at cent per cent (in a sense they are the same abstract string).
In the pairwise comparison test we can calculate a simple metric distance called "Hamming distance". At every comparison if the two characters don't match the Hamming distance increases of 1.
If the order doesn't matter and we can compare sub-strings inside the parent strings A and B also if they are at different positions in the two strings then many different tests are possible. We call
them pattern matching or similarity tests. While in principle there is only one identity test (the above one) there are many possible similarity tests, depending on the rules of pattern matching we
Any final result of a similarity test (especially if it is a unique number) has meaning only if: 1) the distance function is mathematically defined; 2) the rules of pattern matching and the formulas
for calculating that number are explained in details; 3) it is declared what parts of the inputs strings are considered; 4) whether computer programs were used to make the comparison, the source
codes and algorithms are freely exhibited. For example here is a test called 3SS-similarity: consider two strings A = xyz and B = yzx, where x,y,z are three sub-strings composed of i,j,k characters
respectively. If we establish that the rules of pattern matching and the formulas are: i) find in both entire strings identical sub-strings, independently form their positions, and eliminate them
from A and B; ii) after the deletions count the characters remaining in A (say them ra) and the characters remaining in B (say them rb), their sum being our 3SS metric distance; iii) the value of
3SS-similarity is obtained by the formula 100 - 50*(ra+rb)/(i+j+k). In this case being ra=0 and rb=0 the 3SS-similarity is cent per cent. Differently if A and B share no sub-strings then ra=(i+j+k)
and rb=(i+j+k) the 3SS-similarity is 0 per cent.
Comparing DNA sequences
The characters most commonly present in DNA sequences are A, C, G, T. There are other less important characters that are used basically to indicate ambiguity about the identity of certain bases in
the sequences.
Homo sapiens and Pan troglodytes genomes were freely downloaded from the bio-informatics public archives of UCSC Genome Bioinformatics: http://genome.ucsc.edu/. The downloaded DNA sequences are in
FASTA format. Before running the tests we have discarded all the symbols different from A, C, G, T. Mainly we had to discard the "N" symbols insofar they represent rare undefined situations (due
probably to difficulties of the scanning technology). It is very low, if any, the presence of other symbols. The deletions of the "N" symbols don't change much the overall results however. Here we
show the results of two methods of comparison we have applied to human and chimp genomes.
First method: pairwise comparison (equality test)
The first difficulty in applying the pairwise comparison test is that in general homologous chromosomes have DNA sequences of different lengths. In other words when we arrive at the end of the
shortest chromosome we must stop the comparison. These differences are often of millions bases. Just only this ascertainment should lead us to understand that the two strings cannot be so equal after
all. In particular human and chimp homologous chromosomes have always different lengths. In our pairwise comparison test we have discarded the unmatched tails of the longest chromosomes.
A second problem is that homo sapiens and chimpanzee have a different number of chromosomes. Meanwhile homo sapiens has an unique chromosome #2, chimpanzee had its chromosome #2 split in two parts,
namely chromosome #2a and chromosome #2b. Therefore we have compared human chromosome #2 with the concatenation of chimp chromosomes #2a and #2b (the longest one).
Consider symbols drawn from a common vocabulary of N symbols, each of them having the same 1/N probability to occur. If two sets composed of an equal number of such symbols are fully random, at the
"pairwise comparison" test they must result 1/N equal. In fact at each single symbol-by-symbol comparison there is the same probability of 1/N that the two twin symbols match. For example if the
symbols in the vocabulary are four (as in the case of DNA) this probability is ¼ and we can say that two random strings generated from such vocabulary are 25% equal.
In reality in DNA the average probabilities of A, T, G, C are not exactly 0.25 but the following: A=0.3, T=0.3, G=0.2, C=0.2. Hence in real DNA this formula applies for the probability of one single
match when probabilities differ:
(30*30 + 30*30 + 20*20 + 20*20)/(100*100) = (900+900+400+400)/10000 = 26%
We will see below how this is exactly what our identity test has outputted.
The following table and graph show the report of the pairwise comparison test:
Remember that 25% represents the equality percentage of two random four-equally-probable-symbols sequences and 26% represents the equality percentage of two random DNA sequences. All the percentage
values of pairwise identity (nucleotide by nucleotide starting from the same end along the entire chromosome) are very near 26%, which is the value that the theory predicts. In the same time this
value means that, if many local similarities exist, they are at different offsets; in other words, identical patterns (if any) are scrambled in homologous chromosomes. This issue is related to the
deletions, additions, inversions, translation, transfers and a number of other chromosomic events, that is the structural alterations that evolutionary theory hypothesizes chromosomes had. Anyway the
simple pairwise comparison per se hadn't to take into account such kind of problems. This ascertainment leads us directly to the second test.
Second method: 30-base pattern matching (30BPM-similarity test)
Before the 26% average value of the pairwise comparison of the previous test some critics say that such result has little sense because human and chimp genomes don't show global identity (as that
investigated by a simple pairwise comparison test) but have a lot of local similarities. Therefore we have tried to discover such alleged local similarities by mean of a new test. To limit someway
the running times we have chosen to use a Monte Carlo method approach. According to this method a pseudorandom number generator (PRNG) generates a set of uniformly distributed random numbers, which
will determine the places where the metric measures will be probed. In two words, in the Monte Carlo method only a portion of the metric space is investigated, but this portion reveals the
characteristics of the whole. As a consequence our measure of similarity will be statistical.
However, this second method is a real pattern matching test because it searches for identical patterns in the chromosome N of homo sapiens and chimpanzee. In other words, in this test patterns can
match independently from their offsets in the chromosomes. In fact the meaning of local similarities in homologous chromosomes is: identical patterns laying out in different positions in the two
chromosomes. This test allows a total scrambling of the patterns between the twin chromosomes. Of course it is very difficult to know what are the functional implications of this scrambling. As an
analogy we know for example that in software randomly scrambling parts of the binary code harms the functionality until halting the computer. May be the positions of genes can shift, but when non
gene-coding is scrambled is doubtful that the functionality is preserved.
Many technologies were developed to investigate genomes. One of them is the BLAST (Basic Local Alignment Search Tool) set of programs (see for example NCBI web site[8]). BLAST is able to find regions
of local similarity between sequences searching in a database of genomes. Alignment methods (as those BLAST and others techniques implement) allow geneticists to interactively search for common local
patterns in different positions. The global comparison of two genomes is a job that cannot be worked out interactively by humans, only fully automatic computer programs can afford such task.
From this point of view our test #2 can be considered a fully automatic program for searching local alignments between two different chromosomes. It is always possible to search local similarities
(even in a couple of fully random strings), but does a pattern matching local similarity always imply a genetic functional similarity? That isn't easy to prove. One can accept that searching local
similarities in homologous genes has some sense, but in the non gene-coding regions (about which molecular biology knows little) a BLAST research has less sense. There the offsets (or the starting
points of any patterns matching) are entirely arbitrary. After all if they call non gene-coding regions "junk DNA" is because these sequences are considered as random by them. Moreover, meanwhile in
the gene-coding region of DNA we know that the universal genetic code matching codons and amino acids is respected (except some very rare exceptions), in the non gene-coding regions nobody knows what
code or specification is used.
This additional test searches for shared 30-base-long patterns between two chromosomes. It might seem arbitrary to choose 30 base matches. It is arbitrary, as any other number would, but if the
genomes were really 9x% identical as they say also a 30-base patterns comparison (or any other n-base patterns comparison) should get 9x% results.
Our second test (30BPM-similarity) implements the following simple iterative algorithm. For each couple of chromosomes a PRNG generates 10000 uniformly distributed random numbers that will specify
the offsets of 10000 30-base sequences inside the chromosome A. Each of these 10000 30-base sequences is searched for in B. The absolute difference between 10000 and the number of patterns found in B
(minimum 0, maximum 10000) is our 30BPM-distance. To be precise this space is only pseudo-metric (or quasi-metric) inasmuch the axiom of identity ("the distance is zero if and only if A and B are
equal") defining a metric space is relaxed (the distance could be zero also if A and B are different) and the axiom of symmetry ("the distance between A and B is equal to the distance between B and
A") doesn't hold. The 30BPM-distance is zero if the two strings are identical. In the case of a test with two 70-million pseudorandom DNA strings the 30BPM-distance was 10000 (no patterns of A found
in B). We call this value 10000 as the "random-distance". The 30BPM-dissimilarity percentage is calculated as 100*30BPMdistance/randomdistance. The 30BPM-similarity percentage is
The following table and graph show the report of this pattern matching test:
The results are statistically meaningful. The same test was run on a sampling of 1000 random 30-base-long patterns and the percentages were almost identical.
The source files of the Perl programs used for the tests are freely downloadable at:
Different methods of genome comparison result in very different estimates of similarity. Assumptions driving the methods used also drive the results obtained and their interpretation. More objective
genome strategies, like the ones reported in this paper tend to give lower estimates of similarity than those commonly reported. It is worth noting that as more information comparing the genomes is
published, the differences appear to be more profound than originally thought. What one should conclude from similarities and differences between humans and chimpanzees remains the big question.
Commonly reported statistics that should inform answers to this question may actually obscure the true answer.
We have seen that in genomes comparison only "similarity" makes sense. Unfortunately the message going public speaks even of 99% "identity". In fact it is usual to find propositions like this:
"because the chimpanzee lies at such a short evolutionary distance with respect to human, nearly all of the bases are identical by descent and sequences can be readily aligned except in recently
derived, large repetitive regions"[9]. As a consequence the laymen truly believes that man and ape genomes are almost identical. We have shown that the scenario is not so simplistic: many
similarities measures are possible and to flag the comparison with a unique measure is like to think of describing well a very complex geometrical object by mean of a unique number. We hope our work
adds a bit to the truth about the 99%-identity myth.
[1] For example: Marks J. 2002. What It Means to Be 98% Chimpanzee: Apes, People, and Their Genes. University of California Press, Berkeley. 325 pages.
[2] Wildman DE, Uddin M, Liu G, Grossman LI, Goodman M. 2003. Implications of natural selection in shaping 99.4% nonsynonymous DNA identity between humans and chimpanzees: Enlarging genus Homo.
Proceedings of the National Academy of Sciences (USA) 100:7181-7188.
Fujiyama A, Watanabe A, Toyoda A, Taylor TD, Itoh T, Tsai S-F, Park H-S, Yaspo M-L, Lehrach H,
Chen Z, Fu G, Saitou N, Osoegawa K, de Jong PJ, SutoY, Hattori M, Sakaki1 Y. 2000. Construction and Analysis of a Human-Chimpanzee Comparative Clone Map. Science 295:313-134.
[4] Britten, R.J. 2002. Divergence between samples of chimpanzee and human DNA sequences is 5% counting indels. Proceedings of the National Academy of Sciences (USA) 99:13633-13635.
[5] The Chimpanzee Sequencing and Analysis Consortium. 2005. Initial sequence of the chimpanzee genome and comparison with the human genome. Nature 437:69-87.
[6] The human genome contains about 2.9 Gb of euchromatin, its total size is approximately 3.2 Gb, so approximately 10 % is heterochromatin, little of which has been sequenced. Green ED, Chakravarti
A. 2001. The Human Genome Sequence Expedition: Views from the "Base Camp." Genome Res. 11: 645-651.
[7] Winckler W, Myers SR, Richter DJ, Onofrio RC, McDonald GJ, Bontrop RE, McVean GAT, Gabriel SB, Reich D, Donnelly P, Altshuler D. 2005. Comparison of Fine-Scale Recombination Rates in Humans and
Chimpanzees. Science 308:107-111.
[9] The Chimpanzee Sequencing and Analysis Consortium, Initial sequence of the chimpanzee genome and comparison with the human genome, Vol 437/1 September 2005/doi:10.1038/nature04072.
|
{"url":"http://progettocosmo.altervista.org/index.php?option=content&task=view&id=130","timestamp":"2014-04-18T18:11:06Z","content_type":null,"content_length":"27504","record_id":"<urn:uuid:2ac452cd-0642-4b4f-a2c0-88fc1359259e>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00119-ip-10-147-4-33.ec2.internal.warc.gz"}
|
the first resource for mathematics
Symmetries of a class of nonlinear third-order partial differential equations.
(English) Zbl 0879.35005
Symmetry reductions of the following class of nonlinear third-order partial differential equations
${u}_{t}-\epsilon {u}_{xxt}+2\kappa {u}_{x}-u{u}_{xxx}-\alpha u{u}_{x}-\beta {u}_{x}{u}_{xx}=0$
with four arbitrary constants $\epsilon ,\kappa ,\alpha ,\beta$ are considered. This class has previously been studied by C. Gilson and A. Pickering [Phys. A, Math. Gen. 28, 2871-2888 (1995; Zbl
0830.35127)] using Painlevé theory. It contains as special cases the Fornberg-Whitham, the Rosenau-Hyman, and the Camassa-Holm equation. The authors apply besides the standard symmetry approach also
the non-classical method of G. W. Bluman and J. D. Cole [J. Math. Mech. 18, 1025-1042, (1969; Zbl 0187.03502)]. Using the so-called differential Gröbner bases developed by one of the authors they
obtain a symmetry classification of the parameters $\epsilon ,\kappa ,\alpha ,\beta$. The computations are done with the help of the Maple package.
35A25 Other special methods (PDE)
58J70 Invariance and symmetry properties
13P10 Gröbner bases; other bases for ideals and modules
35Q58 Other completely integrable PDE (MSC2000)
37J35 Completely integrable systems, topological structure of phase space, integration methods
37K10 Completely integrable systems, integrability tests, bi-Hamiltonian structures, hierarchies
68W30 Symbolic computation and algebraic computation
|
{"url":"http://zbmath.org/?q=an:0879.35005","timestamp":"2014-04-19T17:24:22Z","content_type":null,"content_length":"23434","record_id":"<urn:uuid:2f3763be-1608-429c-bcca-a80bfb781807>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00265-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Best Response
You've already chosen the best response.
if c=3, you cannot divide a number by (c-3)
Best Response
You've already chosen the best response.
it is zero.. so they have seperated two cases, one when c=3 and the other when c !=3
Best Response
You've already chosen the best response.
Oh. I see; it's all about the pivot being non-zero.
Best Response
You've already chosen the best response.
no no no it matters because you can't divide a number by 0
Best Response
You've already chosen the best response.
if c=3 a thing such as -4/(c-3) doesn't exist!!
Best Response
You've already chosen the best response.
so you can't make the arguments as those cases for c !=3
Best Response
You've already chosen the best response.
Ya, that too BUT if we do not divide by (c-3) = 0, there is nothing illegal and we get what would be a pivot to be a 0.
Best Response
You've already chosen the best response.
what i am saying is the only reason for seperating the cases
Best Response
You've already chosen the best response.
pivot to be 0 doesn't make any sense because the first appearing 1s are called pivots
Best Response
You've already chosen the best response.
you can do your elimination process because c!=3(the one you did previously) you can't argue something like 'Why does it matter what c is equal in problem #3a to if it disappears thanks to
elimination' since it's not the case! for c=3 you need to make a whole new different arguments
Best Response
You've already chosen the best response.
i mean look at the basis for C(A) they are different!! it matters!!
Best Response
You've already chosen the best response.
two different case, two different logics, two different consequences
Best Response
You've already chosen the best response.
if c = 3, then it's a simple plugging in of c = 3, and then you have 3 - 3 = 0 and then you have a matrix with only regular numbers. I think we're miscommunicating again, lol.
Best Response
You've already chosen the best response.
I know that the bases are different. That's because the pivot from c!=3 is no longer a pivot; it's now a 0.
Best Response
You've already chosen the best response.
what you just said is right well what is it then that you don't understand?
Best Response
You've already chosen the best response.
I now do understand this problem. :)
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/512e9380e4b02acc415edf73","timestamp":"2014-04-18T16:26:44Z","content_type":null,"content_length":"63933","record_id":"<urn:uuid:97157506-c3e5-4ab2-8100-576797213223>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00283-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Is this problem a trick question or something? [Archive] - Free Math Help Forum
08-21-2005, 01:59 PM
I have a summer calculus take-home test due the first day of school of 200+ problems. I got all of them but two-- this one and the one I posted in the "other math help" category. This one is geometry
related. I have asked everyone I know to help me out with this problem and no one can get it. A few people have suggested that it may be unsolvable, or that my teacher left out some information so it
is incomplete. If anyone can get an answer though, I would be forever in your debt. Here it goes:
A rectangle is inscribed in a circle of radius r. The horizontal dimension of the rectangle is x, the vertical dimension is y. Express the area of the rectangle as a function of the variable x only.
There is a diagram included as well, but the problem is pretty much self-explanatory. It is on a page of the test that is hand-written by my teacher, so it's possible that he messed it up, but if
there is a way to solve it, I'm hoping that someone here can find it. HELP PLEASE! Thanks in advance for any information you can offer!
|
{"url":"http://www.freemathhelp.com/forum/archive/index.php/t-36722.html","timestamp":"2014-04-20T21:37:26Z","content_type":null,"content_length":"5847","record_id":"<urn:uuid:c3ddead7-3e02-4052-b59c-2ac8c92bc968>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00163-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Calculate The DC Voltage In The Emitter And The ... | Chegg.com
Calculate the DC voltage in the emitter and the base of Q1(VDC) and the value of R2
Please Sketch the small-signal model and calculate the value of each component
Calculate the small-signal gain and output resistance
Electrical Engineering
|
{"url":"http://www.chegg.com/homework-help/questions-and-answers/calculate-dc-voltage-emitter-base-q1-vdc-value-r2-please-sketch-small-signal-model-calcula-q3168704","timestamp":"2014-04-18T20:10:47Z","content_type":null,"content_length":"21598","record_id":"<urn:uuid:b6d21647-7315-44a9-a0d1-605b9412ca0a>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00639-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Pattern block problem
November 29th 2010, 06:27 PM #1
Nov 2010
Pattern block problem
We are working with pattern blocks in class. Questions ask about ratios/fractions comparing different shapes. One question was to find an example of 2 shapes that have a ratio of 3/2. The answer
was trapezoid to rombus. I am stumped on the next question though and thought maybe someone here could help...
Find an example with the pattern blocks that represent a ratio of 7/2.
Any help is appreciated!!
Follow Math Help Forum on Facebook and Google+
|
{"url":"http://mathhelpforum.com/geometry/164799-pattern-block-problem.html","timestamp":"2014-04-18T07:51:59Z","content_type":null,"content_length":"29055","record_id":"<urn:uuid:3a947333-4187-4ecb-81c1-411a8f79d705>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00548-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Note on the End Game in Homotopy Zero Curve Tracking
Note on the End Game in Homotopy Zero Curve Tracking
(1995) Note on the End Game in Homotopy Zero Curve Tracking. Technical Report ncstrl.vatech_cs//TR-95-04, Computer Science, Virginia Polytechnic Institute and State University.
Full text available as:
Postscript - Requires a viewer, such as GhostView
TR-95-04.ps (114548)
Homotopy algorithms to solve a nonlinear system of equations f(x)=0 involve tracking the zero curve of a homotopy map p(a,theta,x) from theta=0 until theta=1. When the algorithm nears or crosses the
hyperplane theta=1, an "end game" phase is begun to compute the solution x(bar) satisfying p(a,theta,x(bar))=f(x(bar))=0. This note compares several end game strategies, including the one implemented
in the normal flow code FIXPNF in the homotopy software package HOMPACK.
|
{"url":"http://eprints.cs.vt.edu/archive/00000419/","timestamp":"2014-04-20T10:50:40Z","content_type":null,"content_length":"6167","record_id":"<urn:uuid:d0be6803-0511-4f05-a0d7-eba37d2f8dc9>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00071-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: [TowerTalk] RF Exposure Calculator
At 01:55 AM 4/26/2006, K8RI on Tower talk wrote:
> > At 10:45 AM 4/25/2006, Peter Sundberg wrote:
> >>You can download a program that does all calculations for you, be it
> >>yagis,
> >>dipoles, verticals or what..
> >
> > Very slick... and makes it easy to check across bands
> >
> > Does it take into account nearfield effects for low gain antennas?
> >
> > e.g. for a dipole or inverted V close to the ground, it's the ends that
> > present the RF safety problem, not the far field radiation
>Generally if you are talking inverted V you are taling 150, 75, or 40. Evne
>when running the full legal limit, key down on those bands you can get
>within a couple of feet of the ends of the antennas. If they are high
>enough off the ground that no one can reach up and touch it they will
>probably pass the test.
Hmm.. I ran a quick model of a 40 m inverted V, suspended 30 feet off the
ground, with the ends about 9 ft off the ground (i.e. high enough so you
can't touch it).
With just 100 Watts into the antenna, the peak field at 2 m off the ground
(head height) is about 88 V/m, about 9m from the center support. The peak
field at 3m off the ground is somewhat higher, about 1000V/m.
The ANSI C95.1 limit for controlled environments at 7 MHz is about 270 V/m
In practice, you're not going to be keydown for 6 minutes, so you get all
those averaging factors to work with, but certainly, one can bust the
limits pretty easily, particularly on 20m and higher, where the limit is
lower (61.4 V/m at 30MHz), with remarkably low power.
One might want to be careful if you're doing, for instance PSK31, RTTY, or
Pactor.. pretty constant envelope, long duration.
> >
> > Another problem I ran into was with a phased array, where there's a lot of
> > circulating power among the elements. I suspect that for any
>Even then, don't you have to get pretty close to any particular element even
>in the hear field as inside a phased array?
yes.. but "close" is a relative term. Think of someone putting up a multi
element Yagi for, say, 20m, on a temporary pole at field day, say 20ft off
the ground, and then putting 100W into it.
A Yagi with decent gain is going to have power circulating in the near
field that is 4 or 5 times the power being radiated. So, take numbers like
the ones for the inverted V, above, and multiply them by 2 or 3 to account
for the circulating power that's stored in the antenna, and the fields can
get pretty high, even at some distance (say a few meters).
Look at Emerson's article in the antenna compendium, or his graphics here:
Emerson gives an interesting way to think about the extent of the high
field area: It's comparable to the size of a non-superdirective antenna
with the same gain. A dipole has a gain of 2.15 dBi (G=1.64) To a first
order, antenna gain (as a number) is the number of
Directivity = 4*pi*Aperture (in square wavelengths)
for a dipole, the aperture is about 0.13 square wavelengths (a lot of
people assume it's close to a half wavelength by a quarter wavelength,
which is 0.125 square wavelengths.. close enough)
A 10 dBi antenna will have an effective aperture of about 0.8 square
wavelengths. If we say that it's like a circle the radius of that circle
would be about 0.5 wavelengths. So, for that 20m Yagi, with 10dBi gain,
you'd want to stay at least 10m away to stay out of the high field area.
TowerTalk mailing list
|
{"url":"http://lists.contesting.com/_towertalk/2006-04/msg00445.html?contestingsid=obngu4dkppf7semg2rm2fs0df5","timestamp":"2014-04-18T23:57:48Z","content_type":null,"content_length":"13018","record_id":"<urn:uuid:aade678e-f8af-45e3-8c5a-ff386aa16de5>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00170-ip-10-147-4-33.ec2.internal.warc.gz"}
|
PIMS/CSC Seminar: Deniz Sezer (Calgary)
• Date: 09/23/2011
• Time: 14:30
Simon Fraser University
The Martin boundary theory for the nonlinear p.d.e.1/2 \Delta u=2 u^2: A probabilistic approach
The classical Martin boundary is concerned with harmonic functions: the non-negative solutions of Laplace's equation 1/2\Delta u=0 in a given domain D of R^n. Harmonic functions are extensively
studied and well understood. A culminating result is the integral representation of harmonic functions in terms of the so-called Martin kernel. Can one develop a similar theory for the nonlinear
p.d.e. 1/2 \Delta u=2 u^2 as well? A key feature of the Laplace's equation is that it renders itself to probabilistic analysis, and as a result many important results on this p.d.e can be
re-formulated probabilistically in terms of Brownian motion. This not only brings new insights into the theory, but also inspires similar ideas to be used for other p.d.e.'s. A striking example of
this is the formulation of the solutions of the non-linear equation 1/2 \Delta u=2 u^2 in terms of Super-Brownian motion (SBM). Indeed most of the recent progress on understanding the solutions of
this p.d.e has been made using probabilistic methods. Martin boundary theory for Laplace's equation has an important probabilistic interpretation; it tells us about the exit behavior of Brownian
motion from a domain. A new research program, initiated by E.B. Dynkin, aims to build an analogous theory for SBM by employing the ideas used in the probabilistic construction of Martin boundary for
Brownian motion. In this talk I will discuss some of these key ideas and recent results that we have obtained as a part of this research program. 008, he held a Killam Research Fellowship awarded by
the Canada Council for the Arts. His research interests are in differential geometry, partial differential equations and mathematical physics.
|
{"url":"http://www.pims.math.ca/scientific-event/110923-psds","timestamp":"2014-04-16T13:10:15Z","content_type":null,"content_length":"18350","record_id":"<urn:uuid:c29fd515-c98f-450b-aab5-a47f89507dd9>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00374-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Permutations, Recursion
up vote 1 down vote favorite
I have an assignment: User enters a String, example ABCD and the program has to give out alll the permutations. I don't want the whole code just a tip. this is what i got so far in thery i got
nothing implemented.
Taking ABCD as an example:
get factorial of length of String in this case 4! = 24.
24/4 = 6 So first letter has to change after 6. so far so good.
than get factorial of remaining letters which are three 3! = 6.
6/3 =2 2 places for each letter. from here i don't know how it can continue to fill 24 places.
With this algorithm all i will have is
. (continues with 6 C's and 6 D's)
I think my problem is i do not have alot of experience with recursive problems so who can suggest some programs to program to help me get to know recursion better please do.
Thanks! If somethings aren't clear please point out.
java recursion permutation
Is this a homework assignment? If so, it should be tagged as such. – Miky Dinescu Mar 25 '11 at 22:55
Thanks! You meant like this right? – John Smith Mar 25 '11 at 22:59
Permutations.java – Paolo Falabella Mar 25 '11 at 23:16
Yes it can change a small example of a permutation is entering ABC - these are its permutations ABC,ACB,BAC,BCA,CAB,CBA. – John Smith Mar 25 '11 at 23:18
add comment
3 Answers
active oldest votes
You are right that recursion is the way to go. The example you worked thru w/ the little bit of math was all correct, but kind of indirect.
Heres some pseudocode
def permute(charSet, soFar):
if charSet is empty: print soFar //base case
for each element 'e' of charSet
permute(charSet without e, soFar + e) //recurse
example of partial recursion tree
permute({A,B,C}, '')
/ | \
permute({B,C}, 'A') permute({A,C}, 'B') permute({A,B}, 'C')
/ \
permute({A}, 'BC') permute({C}, 'BA')
permute({}, 'BCA')<---BASE CASE, print 'BCA'
Tp handle repeated characters without duplicating any permutations. Let unique be a function to remove any repeated elements from a collection (or string that is being treated
like an ordered character collection thru indexing)
up vote 4 down vote
accepted 1) Store results (including dupes), filter them out afterwards
def permuteRec(charSet, soFar):
if charSet is empty: tempResults.add(soFar) //base case
for each element 'e' of charSet
permute(charSet without e, soFar + e) //recurse
global tempResults[]
def permute(inString)
permuteRec(inString, '')
return unique(tempResults)
print permute(inString)
2) Avoid generating duplicates in the first place
def permute(charSet, soFar):
if charSet is empty: print soFar //base case
for each element 'e' of unique(charSet)
permute(charSet without e, soFar + e) //recurse
Can you explain a bit further cause i didn't fully understand it so the base case will be an empty string? my idea of the base case was both empty and just 1 character of a
string. – John Smith Mar 25 '11 at 23:11
My Base case: if length of string is less than 2 print 0 for an empty string, Same string for a 1 character string. – John Smith Mar 25 '11 at 23:14
on the first call charSet will be collection containing all n elements and soFar will be empty. on base cases charSet will be empty and soFar will contain all n elements. soFar
must be ordered, charSet can be ordered or unordered – jon_darkstar Mar 25 '11 at 23:15
i dont really follow your idea of base case. what string do you mean when you say 'length of string'? – jon_darkstar Mar 25 '11 at 23:21
1 What if the user enters a string that contains duplicate characters, like "AABC"? – Elian Ebbing Mar 26 '11 at 0:34
show 6 more comments
Speaking a little more generally, to solve a problem recursively you have to break it down into one or more smaller problems, solve them recursively, then combine those solutions somehow
to solve the overall problem. You need a way to handle the minimal case (where it gets too small to break down anymore); that's usually pretty simple.
Quicksort is a classic recursive algorithm: it splits the list into two pieces, hopefully of about the same size, such that all the items in the first piece come before all the items in
up vote 1 the second. It then calls itself on each piece. If a piece is of length one, it's sorted, so it just returns. When the pieces come back, together they constitute a sorted list.
down vote
So, how would you go about breaking this problem into smaller pieces? It doesn't have to be into two equal pieces as with Quicksort; sometimes, it's most appropriate to just reduce the
problem size by 1.
add comment
Make a method that takes a string.
Pick a letter out of the string and output it.
Create a new string with the input string minus the letter you picked.
up vote 0 down vote
call the above method with the new string if it has at least 1 character
do this picking of one letter for each possible letter.
add comment
Not the answer you're looking for? Browse other questions tagged java recursion permutation or ask your own question.
|
{"url":"http://stackoverflow.com/questions/5438960/permutations-recursion","timestamp":"2014-04-18T23:52:12Z","content_type":null,"content_length":"82611","record_id":"<urn:uuid:f453bc16-5af7-4b89-8467-1f866fe77555>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00639-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Statistics for Engineering and the Sciences 5th Edition Chapter 7.5 Solutions | Chegg.com
It is given that the measure the biological condition or health of an aquatic region. The IBI is the sum of metrics which measure the presence, abundance, and health of fish in the region. The study
collected IBI measurements for sites located in different Ohio river basins. Summary data for two river basins, Muskingum and Hocking, are given in the table.
We have to construct a 90% confidence interval to compare the mean IBI values of the two river basins.
Here, the sample sizes of river basins in Muskingum and Hocking are large (that is greater than 30), so two sample z -test approximation is good approach to construct confidence interval for this
A formula of
Here we have used the sample variances
For 90% confident,
A 90% confidence interval to compare the mean IBI values of the two river basins, Muskingum and Hocking is obtained below.
A 95% confidence interval estimate the difference between the mean IBI values of the two river basins Muskingum and Hocking is between –0.6287 and 0.0187.
The engineer can concluded that there is no difference for mean IBI values of the two river basins Muskingum and Hocking, since the interval include zero.
|
{"url":"http://www.chegg.com/homework-help/statistics-for-engineering-and-the-sciences-5th-edition-chapter-7.5-solutions-9780131877085","timestamp":"2014-04-17T19:24:42Z","content_type":null,"content_length":"30956","record_id":"<urn:uuid:cb93453b-5a99-4c9d-85a8-3ff14a082208>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00056-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Calculus Applets
Combinatorial Aspects of the Lascoux-Schützenberger Tree
A Bit of History
In a remarkable seminal paper (Europ. J. Comb. 5 (1984), 359-372) R. Stanley set himself the task of enumerating, for a given
These polynomials are known as the ``Gessel quasi-symmetric functions''. Stanley was led to the bold step of setting for each N large enough
Stanley proves that
with coeffcients vexillary '' permutations. These are the permutations that avoid the pattern ``2143''. He also shows that for these permutations, 63 (1987), 42-99) and Lascoux-Schützenberger (C. R.
Acad. Sci. Paris 295 (1982) 629-633). These two proofs used essentially the same basic idea.
In a course given at UCSD in winter 2001, Garsia noted that a new algorithm for the calculation of Littlewood-Richardson coefficients given by Lascoux-Schützenberger in (Letters in Math. Physics 10
(1985) 111-124) yielded what may certainly be viewed as the most fascinating approach to determining the coefficients in (1). Garsia's proof of the validity of this algorithm may be found in full
detail in the manuscript The Saga of Reduced Factorizations of Elements of the Symmetric Group, which gives a complete account of the material covered in the course. Since Garsia's proof relies on
the theory of Schubert polynomials it cannot be considered elementary. A paper giving a completely elementary, purely combinatorial proof of the validity of the algorithm will soon be available.
This proof is based on the construction, for each permutation Grassmanian'' permutation (i.e. a permutation with only one descent) and T(w) is a standard tableau of shape T(w) is the ``right tableau
'', on the other hand for the permutation
To be precise, the bijection proceeds along the branches of a tree associated to each permutation by Lascoux-Schützenberger. So a few words describing this tree should perhaps be included here. For
this we need some notation. To begin let r,s). Next for a permutation
For a given permutation Branching Algorithm''
1. Locate the last descent of r then let s be the largest index such that s>r and
2. Set
3. If
It can be shown that if we go down a tree constructed by this branching process, starting from a root SAGA that we have the following remarkable fact
Since Grassanian permutations are vexillary, it follows from Stanley's theorem that for each of these leaves we have
where SAGA this identity is derived from Monk's rule for Schubert polynomials.
Now to prove (3) (and therefore also (2)) combinatorially all we need to do is construct a descent preserving bijection between the following two sets of words
Moreover once this bijection is constructed, we can start with
The Applet carries out this bijection.
|
{"url":"http://www.personal.psu.edu/dpl14/java/combinatorics/linediagrams/index.html","timestamp":"2014-04-19T12:04:19Z","content_type":null,"content_length":"12395","record_id":"<urn:uuid:c773829a-ec0c-4d59-9004-06ef14d8b1b9>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00530-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Lake Station Math Tutor
Find a Lake Station Math Tutor
...I have taught Algebra and Algebra II at several high schools, and I'm constantly working with a lot of students with material in Algebra. Since I am actively teaching High School Mathematics
to students that may not have a proper foundation from their earlier Mathematics classes, I have experien...
11 Subjects: including prealgebra, geometry, SAT math, algebra 1
I've taught Algebra 1, Algebra 2, Geometry, and Pre-Calculus at the high school level for 6 years. In addition, I've completed a BS in Electrical Engineering and I am quite knowledgeable of
advance mathematical concepts. (Linear Algebra, Calculus, Differential Equations) I create an individualized...
12 Subjects: including calculus, general computer, precalculus, trigonometry
...Over the past few years I have worked with students ranging from elementary to college-age as well as non-traditional students. I am a recent alum of the prestigious Teach for America program,
where I worked in a Baltimore City School and was responsible for seeing math test results more than do...
20 Subjects: including ACT Math, trigonometry, SAT math, linear algebra
...I am very enthusiastic about transferring knowledge to my students and aiding their understanding. I have a very interesting and fun way of explaining things and can notice if a student
understands, or if I need to change the method of explanation. I guarantee better grades and success!I am an Accounting and Business instructor in college.
19 Subjects: including algebra 2, calculus, accounting, algebra 1
...I think that an appreciation for the classic works of literature is really essential to being an intelligent, informed adult - since these works are made reference to so often in conversation
and in writing. I believe that when students feel comfortable that they have the gist of these works, an...
20 Subjects: including algebra 1, algebra 2, vocabulary, grammar
Nearby Cities With Math Tutor
Crown Point, IN Math Tutors
Dyer, IN Math Tutors
Gary, IN Math Tutors
Griffith Math Tutors
Hobart, IN Math Tutors
Lynwood, IL Math Tutors
Merrillville Math Tutors
New Chicago, IN Math Tutors
Ogden Dunes, IN Math Tutors
Portage, IN Math Tutors
Porter, IN Math Tutors
Saint John, IN Math Tutors
Valparaiso, IN Math Tutors
Valpo, IN Math Tutors
Wheeler, IN Math Tutors
|
{"url":"http://www.purplemath.com/lake_station_math_tutors.php","timestamp":"2014-04-20T06:28:27Z","content_type":null,"content_length":"23952","record_id":"<urn:uuid:79652987-5473-45b9-b8d2-5b2d20541ae1>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00611-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Triangle Classification by Angles
9.7: Triangle Classification by Angles
Created by: CK-12
Practice Triangle Classification by Angles
Do you know about different types of triangles? Well, Cassie is learning all about them.
"As if protractors weren't bad enough," Cassie complained sitting at the kitchen table.
"What's the matter?" her brother Kyle asked.
"Well, look at this," Cassie said showing him the book. "I have to identify these triangles."
"That's not so bad if you know what to look for," Kyle explained.
What is Kyle talking about? Cassie is puzzled about this.
But Kyle is right. There are things to look for when classifying triangles. Pay attention to this Concept and you will know what Kyle is talking about by the end of it.
This next Concept is all about triangles; the prefix “tri” means three-triangle means three angles.
When we classify a triangle according to its angles, we look at the angles inside the triangle. We will be using the number of degrees in these angles to classify the triangle. Let’s look at a
picture of a triangle to explain.
Here is a triangle. We can look at the measure of each angle inside the triangle to figure out what kind of triangle it is. There are four types of triangles based on angle measures.
What are the four kinds of triangles?
The first type of triangle is a right triangle. A right triangle is a triangle that has one right angle and two acute angles. One of the angles in the triangle measures $90^\circ$and the other two
angles are less than 90. Here is a picture of a right triangle.
Can you figure out which angle is the $90^\circ$one just by looking at it?
Sure, you can see that the 90 degree angle is the one in the bottom left corner. You can even draw in the small box to identify it as a 90 degree angle. If you look at the other two angles you cans
see that those angles are less than 90 degrees and are acute.
Here we have one $90^\circ$angle and two $45^\circ$angles. We can find the sum of the three angles.
$90 + 45 + 45 = 180^\circ$
The sum of the three angles of a triangle is equal to $180^\circ$
The second type of triangle is an equiangular triangle. If you look at the word “equiangular” you will see that the word “equal” is right in the word. This means that all three of the angles in a
equiangular triangle are equal.
The three angles of this triangle are equal. This is an equiangular triangle.
In an equiangular triangle, all of the angle measures are the same. We know that the sum of the three angles is equal to $90^\circ$therefore, for all three angles to be equal, each angle must be
equal to $60^\circ$
$60 + 60 + 60 = 180^\circ$
The sum of the angles is equal to $180^\circ$
The next type of triangle is an acute triangle. The definition of an acute triangle is in the name “acute.” All three angles of the triangle are less than 90 degrees. Here is an example of an acute
All three of these angles measure less than 90 degrees.
$33 + 80 + 67 = 180^\circ$
The sum of the angles is equal to $180^\circ$
The last type of triangle that we are going to learn about is called an obtuse triangle. An obtuse triangle has one angle that is obtuse or greater than 90 and two angles that are less than 90 or are
$130 + 25 + 25 = 180^\circ$
The sum of the angles is equal to $180^\circ$
Now it is time to practice. Identify each type of triangle according to its angles.
Example A
A triangle with angles that are all 60 degrees is called _________________.
Solution: An Equiangular Triangle
Example B
A triangle with one angle that is 90 degrees is called _________________.
Solution: A Right Triangle
Example C
A triangle with one angle that is 120 degrees is called _______________.
Solution: An Obtuse Triangle
Now back to Cassie and the triangles. Here is the original problem once again.
"As if protractors weren't bad enough," Cassie complained sitting at the kitchen table.
"What's the matter?" her brother Kyle asked.
"Well, look at this," Cassie said showing him the book. "I have to identify these triangles."
"That's not so bad if you know what to look for," Kyle explained.
What is Kyle talking about? Cassie is puzzled about this.
To identify each triangle by angles, Kyle knows that Cassie needs to look at the interior angles of each triangle. Let's use the information that you just learned in this Concept to classify each
The first one has three angles less than 90, so this is an acute triangle.
The second one has one right angle, therefore it is a right triangle.
The third triangle has one angle greater than 90, so it is an obtuse triangle.
The last triangle has one angle greater than 90, so it is also an obtuse triangle.
Here are the vocabulary words in this Concept.
a three sided figure with three angles. The prefix “tri”means three.
Acute Triangle
all three angles are less than 90 degrees.
Right Triangle
One angle is equal to 90 degrees and the other two are acute angles.
Obtuse Triangle
One angle is greater than 90 degrees and the other two are acute angles.
Equiangular Triangle
all three angles are equal
Guided Practice
Here is one for you to try on your own.
True or false. An acute triangle can also be an equiangular triangle.
This is true. Because all of the angles in an acute triangle are less than 90 and all of the angles in an equiangular triangle are 60 degrees, an acute triangle can also be an equiangular triangle.
Video Review
James Sousa, Angle Relationships and Types of Triangles
Directions: Classify each triangle according to its angles.
Directions: Classify the following triangle by looking at the sum of the angle measures.
6. $40 + 55 + 45 = 180^\circ$
7. $20 + 135 + 25 = 180^\circ$
8. $30 + 90 + 60 = 180^\circ$
9. $60 + 60 + 60 = 180^\circ$
10. $110 + 15 + 55 = 180^\circ$
11. $105 + 65 + 10 = 180^\circ$
12. $80 + 55 + 45 = 180^\circ$
13. $70 + 45 + 65 = 180^\circ$
14. $145 + 20 + 15 = 180^\circ$
15. $60 + 80 + 40 = 180^\circ$
Files can only be attached to the latest version of Modality
|
{"url":"http://www.ck12.org/book/CK-12-Concept-Middle-School-Math---Grade-6/r4/section/9.7/anchor-content","timestamp":"2014-04-18T06:03:00Z","content_type":null,"content_length":"134093","record_id":"<urn:uuid:9046a573-2b80-43f5-8543-5014e7a2503f>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00116-ip-10-147-4-33.ec2.internal.warc.gz"}
|
L'hospital rule confusion
December 16th 2007, 07:28 PM #1
Sep 2007
L'hospital rule confusion
I have the following limit:
$lim$$t->0$$(e^t - 1) / t ^ 3$
I use L'hospital's rule to keep differentiating the numerator and denominator, and I end up with 1/6. But, according to my textbook, the correct answer is infinity. Does anyone see what I am
doing wrong here?
$\lim_{t\to 0}\frac{e^t - 1}{t}$ is the derivative of $e^t$ at $t=0$ by definition. Which is $e^0 = 1$.
That means,
$\lim_{t\to 0}\frac{e^t-1}{t^3} = \lim_{t\to 0}\frac{e^t - 1}{t}\cdot \frac{1}{t^2} = \infty$
What you're doing wrong
OK, first review under what circumstances l'H can be used. Done it? Right then ......
After the first differentiation you no longer have an indeterminant form of the type 0/0. You're mistake was ........
in the continued use of l'H beyond the first differentiation.
December 16th 2007, 07:43 PM #2
Global Moderator
Nov 2005
New York City
December 16th 2007, 10:36 PM #3
|
{"url":"http://mathhelpforum.com/calculus/24987-l-hospital-rule-confusion.html","timestamp":"2014-04-17T14:59:21Z","content_type":null,"content_length":"38887","record_id":"<urn:uuid:4c1dd603-7eba-47ed-ae84-999f2e984ab2>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00121-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Posts from June 2010 on My Brain is Open
NP is the set of languages that have short proofs. coNP is the set of languages that have short refutations. Note that coNP is not the complement of NP. NP $\cap$coNP is non-empty. It is easy to see
that all languages in P are in NP $\cap$coNP i.e., P $\subseteq$NP $\cap$coNP. It is conjectured that P $\subsetneq$NP $\cap$coNP. i.e., there are problems in NP $\cap$coNP that are not in P.
Following are some problems in NP $\cap$coNP that are not known to be in P.
Factoring : Given an integer what is the complexity of finding its factors. Every integer always has a unique factorization. Hence, Factoring is very different from the NP-complete problems. The
following exercise states that it is highly unlikely that Factoring is NP-complete. On the other hand if Factoring is in P then the world as we know today will be in chaos !! Factoring is conjectured
to be an intermediate problem.
Exercise : If Factoring is NP-complete then NP = coNP.
The first step to solve the above exercise is to show that Factoring is in NP $\cap$coNP. In fact it is also in UP $\cap$coUP. Perhaps this is the strongest evidence that P $\subsetneq$NP $\cap$coNP.
Parity Games : Deciding which of the two players has a winning strategy in parity games is in NP $\cap$coNP, as well as in UP $\cap$coUP.
Stochastic Games : The problem of deciding which player has the greatest chance of winning a stochastic game is in NP $\cap$coNP [Condon'92]
Lattice Problems : The problems of approximating the shortest and closest vector in a lattice to within a factor of $\sqrt{n}$ is in NP $\cap$coNP [AR'05].
All the above problems are not known to be in P.
Open Problems :
□ Are there other problems in NP $\cap$coNP that are not known to be in P.
□ PPAD, PLS have been defined to understand problems whose structure is different from NP-complete problems. Can we define new complexity classes to study the complexity of the above mentioned
problems (and related problems if any) ?
□ Graph Isomorphism (GI) is also conjectured to be an intermediate problem. It is known that GI is not NP-complete unless Polynomial Hierarchy collapses to its second level. Can we improve this
result by showing that GI is in coNP ? Whether GI is in coNP is an interesting open problem for a very different reason also. More on this in a future post.
References :
• [Condon'92] Anne Condon: The Complexity of Stochastic Games Inf. Comput. 96(2): 203-224 (1992)
• [AR'05] Dorit Aharonov, Oded Regev: Lattice problems in NP $\cap$ coNP. J. ACM 52(5): 749-765 (2005)
|
{"url":"http://kintali.wordpress.com/2010/06/","timestamp":"2014-04-16T22:18:35Z","content_type":null,"content_length":"43801","record_id":"<urn:uuid:920390a3-ad76-4072-ab13-17a9422393a5>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00159-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Is this the end of maths?
Written by Holger on November 12th, 2010
Here is a little maths teaser for you. Since I was a student, I always loved those “proofs” that zero equals one. Of course, most of the time, this was achieved by sneakily dividing by zero somewhere
along the way.
But yesterday I came across a proof that used a different, slightly more subtle trick and uses complex numbers. I apologise to any reader not familiar with complex numbers. Anyone interested can find
a quick introduction here.
Enough introduction, here is the “proof”:
Looks OK, but it can’t be right of course. So where is the error in this equation? Can you find out?
You can find the answer here.
It’s been a few years since A-level maths, but I think there are some brackets implied in the third stage, which then suffer some terrible fate in between the third and fourth stages…
This is an old one. The problem is that sqrt is not a function: every real and complex number has two square roots.
That depends on your definition of sqrt. If I define sqrt to be the principal square root then it is a function. But the problem above persists.
The subtle error in the proof is the assumption that the property “sqrt(x * y) = sqrt(x) * sqrt(y)” holds for all real numbers. It is only valid for all x, y >= 0 where x, y are real numbers. The
above “proof” can even be modified to yield an actual proof demonstrating that the property does not hold for ANY negative real numbers.
|
{"url":"http://www.notjustphysics.com/?p=43","timestamp":"2014-04-18T00:37:55Z","content_type":null,"content_length":"17860","record_id":"<urn:uuid:ce3aadfa-b515-40dc-b449-1c0a1733d726>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00031-ip-10-147-4-33.ec2.internal.warc.gz"}
|
An implementation of Approximate Policy Iteration (API) from the paper Lagoudakis et. al. 2003.
This is a reinforcement learning algorithm that exploits a classifier, in this case an svm, to select state and action pairs in a large state space.
This algorithm approximates a policy for approximating solutions to very large Markov Decision Processes (MDP) in a parallel fashion. I plan on using it for part of a larger project, however this
component itself is very reusable so I factored it out into a library.
Add the following dependency to your project.clj file.
[apprpoximate-policy-iterajion "0.4.4"]
All of the following code can be found in sample.clj
For our toy problem we will be defining a simple problem in which an agent attempts to reach the number 10 using addition. In this example a state will be a number, and an action will be a number as
well (which gets added to the state).
First we need a generative model that will take state and action pairs, and return the new state sprime and a reward r. To generate sprime we add s + a and to generate a reward we compute 1 / (|goal
- (s + a)\) + 0.01)
(defn m
"States and actions are added."
[s a]
(nil? a) s
:else (+ s a))
Now a reward function.
(defn reward
(/ 1 (+ 0.01 (Math/abs (- goal s)))))
Now we need a function to generate a bunch of starting states. For our problem we will start at every number from 0 to 20.
(defn dp
"0 to goal * 2 for starting states"
[states-1 pi]
(range 0 (* goal 2)))
Now we require a function sp that generates actions for a given state. In this example actions available are the same no matter the state, however in a real world problem actions will vary by state.
In this case we will allow the user to add any number between -(goal / 2) and (goal / 2)
(defn sp
"Can add or subtract up to half of the goal."
(= goal s) []
:else (range (* -1 (/ goal 2)) (/ goal 2))))
Lastly we require a feature extraction function in order to teach our learner. approximate-policy-iterajion uses svm-clj under the hood so our features are maps of increasing numbers 1..n to the
feature value.
(defn features
[s a]
{1 s
2 (- goal s)
3 (if (pos? s) 1 0)
4 (if (pos? (- goal s)) 1 0)
5 (if (> goal s) 1 0)
6 a
Now that we have defined m, dp, sp, and features we can run approximate policy iteration with 300 rollouts per state, and a trajectory length of 10 per rollout using a discount factor of 0.99.
(use 'approximate-policy-iterajion.core)
(def my-policy (api/api m reward dp sp 0.99 300 10 features "sample" 5 :kernel-type (:rbf api/kernel-types))))
; We get some output from the underlying svm implementation
; Now lets ask the policy for an action given our state s
(my-policy 0)
;=> 4
(my-policy 4)
;=> 4
(my-policy 8)
;=> 2
(my-policy 10)
;=> nil
All of this code is available in sample.clj and can be run simply by calling:
(use 'approximate-policy-iterajion.sample :reload-all)
(def my-policy (create-api-policy 300 10))
(my-policy 0)
;=> 4
(my-policy 4)
;=> 4
(my-policy 8)
;=> 2
(my-policy 10)
;=> nil
Now take this and build your own reinforcement learning solutions to problems. :D
Altered the generated policy so that operations are performed in parallel. The reasoning here is that should the policy encounter a state it has not seen and need to evaluate many actions the
performance gain is large, whereas if a small number of actions are up for evaluation the performance loss will be minimal.
Altered the code base so that situations in which no action exist can be handled. In this case the policy functions return nil. Therefore your state generator sp can return an empty list and your
generative model m should be able to handle nil in place of action.
Added a maximum iterations (mi) parameter to api. Allows the user to constrain the run time of the learner.
The deployment to Clojars failed for 0.4.0 so I needed to push a new version.
Simplified the API function signature, policy is no longer a parameter. Implements a proper greedy (on estimated value) returned policy.
Another attempt at fixing the divide by zero bug occuring in the t-test that determines significance.
16 agents are now used for parallelism in the application.
The bug supposedly fixed in 0.3.7 appears to still exist, and this is another attempt at fixing said bug.
Moved the default policy to one that is random during training and greedy on the estimated (rollout) values during testing.
Fixed a bug in which an exception was thrown if qpi contained only one state-action pair.
Added an id parameter to the api function. This allows the run to identify itself and persist and load its training set in case of interruption. Useful for EC2 spot instance computation.
Added a branching factor parameter to the api function. This allows you to chunk the dataset into the specified number of pieces for parallel processing. In experimentation the default pmap settings
did not work well. Setting the number to the number of processors in the machine proved much more useful.
Removed the pmap from the rollout function. It appears as though any attempt at using the svm model in parallel creates a resource deadlock. I'll need to explore classifiers in the future that will
work for this purpose.
The parameter function dp is now provided with states-1 the set of states used in the last iteration and pi the policy. The intention is to allow people to guide their state generation using the
policy in the event that the state space is very large.
• Unit Tests
• Explore alternate classifiers
Copyright © 2013 Cody Rioux Distributed under the Eclipse Public License, the same as Clojure.
|
{"url":"http://codyrioux.github.io/approximate-policy-iterajion/","timestamp":"2014-04-19T12:24:53Z","content_type":null,"content_length":"16743","record_id":"<urn:uuid:1836be43-ee2c-4b30-9f1d-4da8e11b1875>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00519-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
The formula for the kinetic energy of a moving object is e=1/2mv^2 , where E is the kinetic energy in joules, m is the mass in kilograms, and v is the veloci
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/506b0e5fe4b0e78f215dcbf9","timestamp":"2014-04-20T10:59:31Z","content_type":null,"content_length":"51094","record_id":"<urn:uuid:633da63e-be30-4005-8f32-e0b4a6855031>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00317-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
How do I isolate the x in y = 4x - x^2?
• one year ago
• one year ago
Best Response
You've already chosen the best response.
why do you need to isolate x from there ?
Best Response
You've already chosen the best response.
I need to express the function in terms of y
Best Response
You've already chosen the best response.
Maybe it's just really late....but isn't that already "y"?
Best Response
You've already chosen the best response.
...ok i meant in terms of x then whoops
Best Response
You've already chosen the best response.
Oh. Ha ha. Made me confused.
Best Response
You've already chosen the best response.
basically i need to turn it into a "x=" form instead of "y="
Best Response
You've already chosen the best response.
Herrhhh....What grade/course level is this?
Best Response
You've already chosen the best response.
I actually need this for a calculus proble, but I'm having a brain fart.
Best Response
You've already chosen the best response.
OH..so ln and e^x and all that?
Best Response
You've already chosen the best response.
i need to find the radii of the "outer circle" and "inner circle" of the cross section that looks like a donut, so in order to express the radius in terms of the equation, i need it in terms of
Best Response
You've already chosen the best response.
Or do you have to do derivatives too?
Best Response
You've already chosen the best response.
I'm using integrals because I need to find the volume of a graph rotated around the y axis
Best Response
You've already chosen the best response.
why dont you just complete a square
Best Response
You've already chosen the best response.
Wait..does that diagram incldue x,y, z planes, or is that just 2D?
Best Response
You've already chosen the best response.
It should be 3D. Okay, the reason why I'm asking this "x isolation" question is because I need to find the volume of the graph of y = 4x - x^2 rotated around the y-axis and bounded by y=0.
Best Response
You've already chosen the best response.
completing the square will do it., do you know how to ?
Best Response
You've already chosen the best response.
That's the whole problem I'm working on right now.
Best Response
You've already chosen the best response.
I tried using the quadrative formula, but I got a numerical answer. Am I using it wrong? I'm having a big brain fart right now T_T
Best Response
You've already chosen the best response.
if you ever need to solve a quadratic just complete a square
Best Response
You've already chosen the best response.
in this case you need to solve for x, so complete a square to do it o_o
Best Response
You've already chosen the best response.
I'm not trying to solve it though. I need to have the x isolated :( Solvng it gives me a numerical answer.
Best Response
You've already chosen the best response.
-I feel really stupid right now lol-
Best Response
You've already chosen the best response.
Hey, don't say that, or I'll feel stupid for not remembering anything.
Best Response
You've already chosen the best response.
let me give you first few steps.... y= 4x-x^2 = -(x^2-4x+4-4) -y = (x-2)^2-4 now can you isolate 'x' ?
Best Response
You've already chosen the best response.
they condition us to grow accustomed to f(x)! f(y) is just as important =[
Best Response
You've already chosen the best response.
GOD BLESS YOU HARTNN lol
Best Response
You've already chosen the best response.
Thank you everyone, I've got it now :) haha
Best Response
You've already chosen the best response.
lol! you too :)
Best Response
You've already chosen the best response.
welcome ^_^
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/50e93581e4b08e9377934ce4","timestamp":"2014-04-21T16:13:47Z","content_type":null,"content_length":"95745","record_id":"<urn:uuid:cd9df94a-aaac-43dd-bb4b-c1b9b803d0f4>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00657-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Blind PARAFAC receivers for DS-CDMA systems
Results 1 - 10 of 65
- SIAM REVIEW , 2009
"... This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or N -way array. Decompositions of higher-order
tensors (i.e., N -way arrays with N â ¥ 3) have applications in psychometrics, chemometrics, signal proce ..."
Cited by 228 (14 self)
Add to MetaCart
This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or N -way array. Decompositions of higher-order
tensors (i.e., N -way arrays with N â ¥ 3) have applications in psychometrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining,
neuroscience, graph analysis, etc. Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decompo- sition: CANDECOMP/PARAFAC (CP) decomposes
a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal components analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2,
CANDELINC, DEDICOM, and PARATUCK2 as well as nonnegative variants of all of the above. The N-way Toolbox and Tensor Toolbox, both for MATLAB, and the Multilinear Engine are examples of software
packages for working with tensors.
- SIAM J. Matrix Anal. Appl , 2004
"... Abstract. The canonical decomposition of higher-order tensors is a key tool in multilinear algebra. First we review the state of the art. Then we show that, under certain conditions, the problem
can be rephrased as the simultaneous diagonalization, by equivalence or congruence, of a set of matrices. ..."
Cited by 37 (7 self)
Add to MetaCart
Abstract. The canonical decomposition of higher-order tensors is a key tool in multilinear algebra. First we review the state of the art. Then we show that, under certain conditions, the problem can
be rephrased as the simultaneous diagonalization, by equivalence or congruence, of a set of matrices. Necessary and sufficient conditions for the uniqueness of these simultaneous matrix
decompositions are derived. In a next step, the problem can be translated into a simultaneous generalized Schur decomposition, with orthogonal unknowns [A.-J. van der Veen and A. Paulraj, IEEE Trans.
Signal Process., 44 (1996), pp. 1136–1155]. A first-order perturbation analysis of the simultaneous generalized Schur decomposition is carried out. We discuss some computational techniques (including
a new Jacobi algorithm) and illustrate their behavior by means of a number of numerical experiments.
- IEEE Trans. Inform. Theory , 2003
"... Orthogonal frequency-division multiplexing (OFDM) converts a time-dispersive channel into parallel subchannels, and thus facilitates equalization and (de)coding. But when the channel has nulls
close to or on the fast Fourier transform (FFT) grid, uncoded OFDM faces serious symbol recovery problems. ..."
Cited by 32 (1 self)
Add to MetaCart
Orthogonal frequency-division multiplexing (OFDM) converts a time-dispersive channel into parallel subchannels, and thus facilitates equalization and (de)coding. But when the channel has nulls close
to or on the fast Fourier transform (FFT) grid, uncoded OFDM faces serious symbol recovery problems. As an alternative to various error-control coding techniques that have been proposed to ameliorate
the problem, we perform complex-field coding (CFC) before the symbols are multiplexed. We quantify the maximum achievable diversity order for independent and identically distributed (i.i.d.) or
correlated Rayleigh-fading channels, and also provide design rules for achieving the maximum diversity order. The maximum coding gain is given, and the encoder enabling the maximum coding gain is
also found. Simulated performance comparisons of CFC-OFDM with existing block and convolutionally coded OFDM alternatives favor CFC-OFDM for the code rates used in a HiperLAN2 experiment.
- IEEE Trans. Signal Process
"... Abstract—CANDECOMP/PARAFAC (CP) analysis is an extension of low-rank matrix decomposition to higher-way arrays, which are also referred to as tensors. CP extends and unifies several array signal
processing tools and has found applications ranging from multidimensional harmonic retrieval and angle-ca ..."
Cited by 27 (5 self)
Add to MetaCart
Abstract—CANDECOMP/PARAFAC (CP) analysis is an extension of low-rank matrix decomposition to higher-way arrays, which are also referred to as tensors. CP extends and unifies several array signal
processing tools and has found applications ranging from multidimensional harmonic retrieval and angle-carrier estimation to blind multiuser detection. The uniqueness of CP decomposition is not fully
understood yet, despite its theoretical and practical significance. Toward this end, we first revisit Kruskal’s Permutation Lemma, which is a cornerstone result in the area, using an accessible basic
linear algebra and induction approach. The new proof highlights the nature and limits of the identification process. We then derive two equivalent necessary and sufficient uniqueness conditions for
the case where one of the component matrices involved in the decomposition is full column rank. These new conditions explain a curious example provided recently in a previous paper by Sidiropoulos,
who showed that Kruskal’s condition is in general sufficient but not necessary for uniqueness and that uniqueness depends on the particular joint pattern of zeros in the (possibly pretransformed)
component matrices. As another interesting application of the Permutation Lemma, we derive a similar necessary and sufficient condition for unique bilinear factorization under constant modulus (CM)
constraints, thus providing an interesting link to (and unification with) CP. Index Terms—CANDECOMP, constant modulus, identifiablity, PARAFAC, SVD, three-way array analysis, uniqueness. I.
- SIAM J. Matrix Anal. Appl
"... Abstract. In this paper we introduce a new class of tensor decompositions. Intuitively, we decompose a given tensor block into blocks of smaller size, where the size is characterized by a set of
mode-n ranks. We study different types of such decompositions. For each type we derive conditions under w ..."
Cited by 21 (3 self)
Add to MetaCart
Abstract. In this paper we introduce a new class of tensor decompositions. Intuitively, we decompose a given tensor block into blocks of smaller size, where the size is characterized by a set of
mode-n ranks. We study different types of such decompositions. For each type we derive conditions under which essential uniqueness is guaranteed. The parallel factor decomposition and Tucker’s
decomposition can be considered as special cases in the new framework. The paper sheds new light on fundamental aspects of tensor algebra.
- IEEE Trans. on Signal Processing , 2001
"... Unlike low-rank matrix decomposition, which is generically nonunique for rank greater than one, low-rank threeand higher dimensional array decomposition is unique, provided that the array rank
is lower than a certain bound, and the correct number of components (equal to array rank) is sought in the ..."
Cited by 16 (5 self)
Add to MetaCart
Unlike low-rank matrix decomposition, which is generically nonunique for rank greater than one, low-rank threeand higher dimensional array decomposition is unique, provided that the array rank is
lower than a certain bound, and the correct number of components (equal to array rank) is sought in the decomposition. Parallel factor (PARAFAC) analysis is a common name for low-rank decomposition
of higher dimensional arrays. This paper develops Cramr--Rao Bound (CRB) results for low-rank decomposition of three- and four-dimensional (3-D and 4-D) arrays, illustrates the behavior of the
resulting bounds, and compares alternating least squares algorithms that are commonly used to compute such decompositions with the respective CRBs. Simple-to-check necessary conditions for a unique
low-rank decomposition are also provided. Index Terms---Cramr--Rao bound, least squares method, matrix decomposition, multidimensional signal processing. I.
- IEEE Transactions on Signal Processing
"... Abstract—In this paper, we study simultaneous matrix diagonalization-based techniques for the estimation of the mixing matrix in underdetermined independent component analysis (ICA). This
includes a generalization to underdetermined mixtures of the well-known SOBI algorithm. The problem is reformula ..."
Cited by 12 (2 self)
Add to MetaCart
Abstract—In this paper, we study simultaneous matrix diagonalization-based techniques for the estimation of the mixing matrix in underdetermined independent component analysis (ICA). This includes a
generalization to underdetermined mixtures of the well-known SOBI algorithm. The problem is reformulated in terms of the parallel factor decomposition (PARAFAC) of a higher-order tensor. We present
conditions under which the mixing matrix is unique and discuss several algorithms for its computation. Index Terms—Canonical decomposition, higher order tensor, independent component analysis (ICA),
parallel factor (PARAFAC) analysis, simultaneous diagonalization, underdetermined mixture. I.
- IEEE Transactions on Signal Processing , 2005
"... Abstract—Parallel factor (PARAFAC) analysis is an extension of low-rank matrix decomposition to higher way arrays, also referred to as tensors. It decomposes a given array in a sum of
multilinear terms, analogous to the familiar bilinear vector outer products that appear in matrix decomposition. PAR ..."
Cited by 12 (0 self)
Add to MetaCart
Abstract—Parallel factor (PARAFAC) analysis is an extension of low-rank matrix decomposition to higher way arrays, also referred to as tensors. It decomposes a given array in a sum of multilinear
terms, analogous to the familiar bilinear vector outer products that appear in matrix decomposition. PARAFAC analysis generalizes and unifies common array processing models, like joint
diagonalization and ESPRIT; it has found numerous applications from blind multiuser detection and multidimensional harmonic retrieval, to clustering and nuclear magnetic resonance. The prevailing
fitting algorithm in all these applications is based on (alternating) least squares, which is optimal for Gaussian noise. In many cases, however, measurement errors are far from being Gaussian. In
this paper, we develop two iterative algorithms for the least absolute error fitting of general multilinear models. The first is based on efficient interior point methods for linear programming,
employed in an alternating fashion. The second is based on a weighted median filtering iteration, which is particularly appealing from a simplicity viewpoint. Both are guaranteed to converge in terms
of absolute error. Performance is illustrated by means of simulations, and compared to the pertinent Cramér–Rao bounds (CRBs). Index Terms—Array signal processing, non-Gaussian noise, parallel factor
analysis, robust model fitting. I.
- JOURNAL OF CHEMOMETRICS , 2009
"... This work was originally motivated by a classification of tensors proposed by Richard Harshman. In particular, we focus on simple and multiple “bottlenecks”, and on “swamps”. Existing
theoretical results are surveyed, some numerical algorithms are described in details, and their numerical complexity ..."
Cited by 10 (3 self)
Add to MetaCart
This work was originally motivated by a classification of tensors proposed by Richard Harshman. In particular, we focus on simple and multiple “bottlenecks”, and on “swamps”. Existing theoretical
results are surveyed, some numerical algorithms are described in details, and their numerical complexity is calculated. In particular, the interest in using the ELS enhancement in these algorithms is
discussed. Computer simulations feed this discussion.
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1421551","timestamp":"2014-04-19T12:20:21Z","content_type":null,"content_length":"39809","record_id":"<urn:uuid:33ff2982-6774-4ca0-8103-b069557a4d98>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00288-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Two types of incomplete block factorizations
Next: Blocking over systems Up: Block factorization methods Previous: The special case
One reason that block methods are of interest is that they are potentially more suitable for vector computers and parallel architectures. Consider the block factorization
We can turn this into an incomplete factorization by replacing the block diagonal matrix of pivots
For factorizations of this type (which covers all methods in Concus, Golub and Meurant [57] and Kolotilina and Yeremin [141]) solving a linear system means solving smaller systems with the
Alternatively, we can replace
For this second type (which was discussed by Meurant [155], Axelsson and Polman [21] and Axelsson and Eijkhout [15]) solving a system with multiplying by the [90].
Jack Dongarra
Mon Nov 20 08:52:54 EST 1995
|
{"url":"http://netlib.org/linalg/html_templates/node73.html","timestamp":"2014-04-16T18:56:38Z","content_type":null,"content_length":"4335","record_id":"<urn:uuid:4cf7f787-8cfc-4e8d-a594-7f5d9aa71d8f>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00021-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Relative Velocities
August 21st 2009, 07:03 AM
Relative Velocities
The question reads:
"A glider is moving with a velocity $v = (40, 30, 10)$ relative to the air and is blown by the wind which has velocity relative to the earth of $w = (5, -10, 0)$. Find the velocity of the glider
relative to the earth."
My argument goes that as the velocity of the wind relative to the earth increases, and the velocity of the glider relative to the air, increase, so does the velocity of the glider relative to
earth. So if we let $v_E$ represent the velocity of the glider relative to the earth,
$v_E = v + w$.
Therefore, in this case the velocity of the glider relative to the earth,
$v_E = (40, 30, 10) + (5, -10, 0) = (45, 20, 10)$
However, the answer booklet has the expression for $v_E$ as follows:
$v_E = v - w$
giving $v_E = (35, 40, 10)$ which I suppose must be the right answer. Could somebody explain this result to me? Thank you.
August 21st 2009, 02:45 PM
The question reads:
"A glider is moving with a velocity $v = (40, 30, 10)$ relative to the air and is blown by the wind which has velocity relative to the earth of $w = (5, -10, 0)$. Find the velocity of the glider
relative to the earth."
My argument goes that as the velocity of the wind relative to the earth increases, and the velocity of the glider relative to the air, increase, so does the velocity of the glider relative to
earth. So if we let $v_E$ represent the velocity of the glider relative to the earth,
$v_E = v + w$.
Therefore, in this case the velocity of the glider relative to the earth,
$v_E = (40, 30, 10) + (5, -10, 0) = (45, 20, 10)$
However, the answer booklet has the expression for $v_E$ as follows:
$v_E = v - w$
giving $v_E = (35, 40, 10)$ which I suppose must be the right answer. Could somebody explain this result to me? Thank you.
I agree with you ... (air vector) + (wind vector) = ground vector
the answer booklet is in error, imho.
August 22nd 2009, 06:46 AM
Hello Harry1W
The question reads:
"A glider is moving with a velocity $v = (40, 30, 10)$ relative to the air and is blown by the wind which has velocity relative to the earth of $w = (5, -10, 0)$. Find the velocity of the glider
relative to the earth."
My argument goes that as the velocity of the wind relative to the earth increases, and the velocity of the glider relative to the air, increase, so does the velocity of the glider relative to
earth. So if we let $v_E$ represent the velocity of the glider relative to the earth,
$v_E = v + w$.
Therefore, in this case the velocity of the glider relative to the earth,
$v_E = (40, 30, 10) + (5, -10, 0) = (45, 20, 10)$
However, the answer booklet has the expression for $v_E$ as follows:
$v_E = v - w$
giving $v_E = (35, 40, 10)$ which I suppose must be the right answer. Could somebody explain this result to me? Thank you.
You need to check on the definition of 'wind velocity'. Sometimes (perversely!) it's given as the direction from which the wind blows. For example, a north-easterly is a wind that blows from the
N-E; i.e towards the South-West.
This would indeed make the velocity of the glider relative to the earth $v - w$.
August 22nd 2009, 06:48 AM
Hello Harry1WYou need to check on the definition of 'wind velocity'. Sometimes (perversely!) it's given as the direction from which the wind blows. For example, a north-easterly is a wind that
blows from the N-E; i.e towards the South-West.
This would indeed make the velocity of the glider relative to the earth $v - w$.
I thought about that also, but dismissed it since the wind was given in proper vector notation.
|
{"url":"http://mathhelpforum.com/math-topics/98804-relative-velocities-print.html","timestamp":"2014-04-19T19:04:00Z","content_type":null,"content_length":"13585","record_id":"<urn:uuid:f03ca4a1-cf03-483b-8104-d973111ebb1f>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00040-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Navy Electricity and Electronics Training Series (NEETS)
NEETS Module 9 — Introduction to Wave- Generation and Wave-Shaping
i - ix
1-1 to 1-10
1-11 to 1-20
1-21 to 1-30
1-31 to 1-40
1-41 to 1-52
2-1 to 2-10
2-11 to 2-20
2-21 to 2-30
2-31 to 2-38
3-1 to 3-10
3-11 to 3-20
3-21 to 3-30
3-31 to 3-40
3-41 to 3-50
3-51 to 3-56
4-1 to 4-10
4-11 to 4-20
4-21 to 4-30
4-31- to 4-40
4-41 to 4-50
4-51 to 4-61
, Index
Q-16. What is the filter called in which the low frequencies do not produce a useful voltage?
Q-17. What is the filter called that passes low frequencies but rejects or attenuates high frequencies? Q-18. How does a capacitor and an inductor react to (a) low frequency and (b) high frequency?
Q-19. What term is used to describe the frequency at which the filter circuit changes from the point of rejecting the unwanted frequencies to the point of passing the desired frequencies?
Q-20. What type filter is used to allow a narrow band of frequencies to pass through a circuit and attenuate all other frequencies above or below the desired band?
Q-21. What type filter is used to block the passage of current for a narrow band of frequencies, while allowing current to flow at all frequencies above or below this band?
All of the various types of filters we have discussed so far have had only one section. In many cases, the use of such simple filter circuits does not provide sufficiently sharp cutoff points. But by
adding a capacitor, an inductor, or a resonant circuit in series or in parallel (depending upon the type of filter action required), the ideal effect is more nearly approached. When such additional
units are added to a filter circuit, the form of the resulting circuit will resemble the letter T, or the Greek letter p (pi). They are, therefore, called T- or p-type filters, depending upon which
symbol they resemble. Two or more T- or p-type filters may be connected together to produce a still sharper cutoff point.
Figure 1-23, (view A) (view B) and (view C), and figure 1-24, (view A) (view B) and (view C) depict some of the common configurations of the T- and p-type filters. Further discussion about the theory
of operation of these circuits is beyond the intended scope of this module. If you are interested in learning more about filters, a good source of information to study is the Electronics Installation
and Maintenance Handbook (EIMB), section 4 (Electronics Circuits), NAVSEA 0967-LP-000-0120.
Figure 1-23A.—Formation of a T-type filter.
Figure 1-23B.—Formation of a T-type filter.
Figure 1-23C.—Formation of a T-type filter.
Figure 1-24A.—Formation of a p-type filter.
Figure 1-24B.—Formation of a p-type filter.
Figure 1-24C.—Formation of a p-type filter.
When working with resonant circuits, or electrical circuits, you must be aware of the potentially high voltages. Look at figure 1-25. With the series circuit at resonance, the total impedance of the
circuit is 5 ohms.
Figure 1-25.—Series RLC circuit at resonance.
Remember, the impedance of a series-RLC circuit at resonance depends on the resistive element. At resonance, the impedance (Z) equals the resistance (R). Resistance is minimum and current is maximum.
Therefore, the current at resonance is:
The voltage drops around the circuit with 2 amperes of current flow are:
E[C] = I[T] x X[C]
E[C] = 2 x 20
E[C] = 40 volts AC
E[L] = I[T] x X[L]
E[L] = 2 x 20
E[L] = 40 volts AC
E[R] = I[T] x R
E[R] = 2 x 5
E[R] = 10 volts AC
You can see that there is a voltage gain across the reactive components at resonance.
If the frequency was such that X
and X
were equal to 1000 ohms at the resonant frequency, the reactance voltage across the inductor or capacitor would increase to 2000 volts AC with 10 volts AC applied. Be aware that potentially high
voltage can exist in series-resonant circuits.
This chapter introduced you to the principles of tuned circuits. The following is a summary of the major subjects of this chapter.
on an
is such that an increase in frequency will cause an increase in inductive reactance. Remember that X
= 2
fL; therefore, X
varies directly with frequency.
on a
is such that an increase in frequency will cause a decrease in capacitive reactance. Remember that
therefore, the relationship between X
and frequency is that X
varies inversely with frequency.
X = (X
- X
) or X = (X
- X
). X
is usually plotted above the reference line and X
below the reference line. Inductance and capacitance have opposite effects on the current in respect to the voltage in AC circuits. Below resonance, X
is larger than X
, and the series circuit appears capacitive. Above resonance, X
is larger than X
, and the series circuit appears inductive. At resonance, X
= X
, and the total impedance of the circuit is resistive.
is often called a
. It has the ability to take energy fed from a power source, store the energy alternately in the inductor and capacitor, and produce an output which is a continuous AC wave. The number of times this
set of events occurs per second is called the resonant frequency of the circuit. The actual frequency at which a tank circuit will oscillate is determined by the formula:
IN A
impedance is minimum and current is maximum. Voltage is the variable, and voltage across the inductor and capacitor will be equal but of opposite phases at resonance. Above resonance it acts
inductively, and below resonance it acts capacitively.
IN A
impedance is maximum and current is minimum. Current is the variable and at resonance the two currents are 180 degrees out of phase with each other. Above resonance the current acts capacitively, and
below resonance the current acts inductively.
of a circuit is the ratio of X
to R. Since the capacitor has negligible losses, the circuit Q becomes equivalent to the Q of the coil.
of a circuit is the range of frequencies between the half-power points. The limiting frequencies are those at either side of resonance at which the curve falls to .707 of the maximum value. If
circuit Q is low, you will have a wide bandpass. If circuit Q is high, you will have a narrow bandpass.
consists of a combination of capacitors, inductors, and resistors connected so that the filter will either permit or prevent passage of a certain band of frequencies.
passes low frequencies and attenuates high frequencies.
passes high frequencies and attenuates low frequencies.
will permit a certain band of frequencies to be passed.
will reject a certain band of frequencies and pass all others.
concerning series resonance: Very high reactive voltage can appear across L and C. Care must be taken against possible shock hazard.
ANSWERS TO QUESTIONS Q1. THROUGH Q21.
a. X
varies directly with frequency.
= 2
b. X
varies inversely with frequency.
c. Frequency has no affect on resistance.
A-2. Resultant reactance.
A-4. Decreases.
A-5. Impedance low Current high.
A-6. Nonresonant (circuit is either above or below resonance).
A-7. Inductor magnetic field.
A-8. Capacitor.
A-9. Natural frequency or resonant frequency (f
A-10. Maximum impedance, minimum current.
A-11. At the resonant frequency.
A-13. Bandwidth of the circuit.
A-14. A filter.
a. Low-pass.
b. High-pass
c. Bandpass.
d. Band-reject.
A-16. High-pass filter, low-frequency discriminator, or low-frequency attenuator.
A-17. Low-pass filter, high-frequency discriminator or high-frequency attenuator.
A-18. At low-frequency, a capacitor acts as an open and an inductor acts as a short. At high-frequency, a capacitor acts as a short and an inductor acts as an open.
A-19. Frequency cutoff (f
A-20. Bandpass.
A-21. Band-reject.
Introduction to Matter, Energy, and Direct Current,
Introduction to Alternating Current and Transformers,
Introduction to Circuit Protection, Control, and Measurement
Introduction to Electrical Conductors, Wiring Techniques, and Schematic Reading
Introduction to Generators and Motors
Introduction to Electronic Emission, Tubes, and Power Supplies, Introduction to Solid-State Devices and Power Supplies
Introduction to Amplifiers, Introduction to Wave-Generation and Wave-Shaping Circuits
Introduction to Wave Propagation, Transmission Lines, and Antennas
Microwave Principles, Modulation Principles
, Introduction to Number Systems and Logic Circuits, Introduction to Microelectronics,
Principles of Synchros, Servos, and Gyros
Introduction to Test Equipment
Radio-Frequency Communications Principles
Radar Principles,
The Technician's Handbook, Master Glossary,
Test Methods and Practices,
Introduction to Digital Computers, Magnetic Recording, Introduction to Fiber Optics
|
{"url":"http://www.rfcafe.com/references/electrical/NEETS-Modules/NEETS-Module-09-1-41-1-52.htm","timestamp":"2014-04-17T09:49:10Z","content_type":null,"content_length":"32082","record_id":"<urn:uuid:7772cf52-d86c-4a84-87e5-a4e6e8b2ad92>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00154-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: Re: binary file i/o
Replies: 0
Re: binary file i/o
Posted: Aug 16, 1996 2:59 AM
William Weglinski wrote:
> I am having difficulty reading and writing binary files correctly, With Mathematica 2.2
> for Windows on my PC. The first problem is that if a file contains the byte: 1A (i.e.
> hexadecimal 26), it is interpreted as EndOfFile, and I can't figure out any way to force
> Mathematica to read any more bytes after that. The second problem is trying to write
> the byte 0A (i.e. hexadecimal 10 -- a.k.a. linefeed ) to a file. Instead of writing
> just the byte: 0A, the bytes: 0D0A are outputted (carriage return+linefeed). Again, I
> can't find any way to override this behavior. The standard Mathematica package:
> Utilities`BinaryFiles` was of no help to me, nor could I make it work using the
> WriteString[] command directly. Setting the StringConversion option on the outputstream
> didn't work either. Has anyone else encountered this problem, and what can be done to
> work around it?
In order to work with binary files on DOS systems, you'll need to use
MathLink and Todd Gayley's package FastBinaryFiles.m (from
MathSource). It's the only way right now. I believe that Mma fo Win
2.2.3 finally supported MathLink. (Personally, I switched to the Mac
for just that reason: to read DOS binary files into Mma).
Harald Berndt University of California
Research Specialist Forest Products Laboratory
Phone: 510-215-4224 FAX:510-215-4299
|
{"url":"http://mathforum.org/kb/thread.jspa?threadID=223865","timestamp":"2014-04-20T04:38:08Z","content_type":null,"content_length":"15148","record_id":"<urn:uuid:40517bd2-168b-4241-b6f4-11ddb0eaa209>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00412-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Finite subgroup of $Gl(n,\mathbb Z)$ and congruences
up vote 6 down vote favorite
Suppose we have an invertible matrix q in a finite subgroup $Q$ of $Gl(n,\mathbb Z)$, the group of all invertible integer matrices. Now I want to find all $x\; mod\; \mathbb Z^n$ for which
$(q+q^2+q^3+...+q^m).x = 0\quad mod\; \mathbb Z^n$
where $m$ is the order of $q$ in the finite subgroup $Q$ of $Gl(n,\mathbb Z)$ so that $q^m=1$. I tried using the Smith normal form so that
$(q+q^2+q^3+...+q^m) = U.D.V$
where $U,V$ in $Gl(n,\mathbb Z)$ and $D$ the Smith normal form, so we have to solve
$D.V.x=0\quad mod\; \mathbb Z^n$
Since $D.V$ is diagonal, $x$ must have rational components unless the diagonal element is zero. Now my question is, what is the maximal denominator of the components in $x$ ? So what is the maximal
absolute value in $D.V$ ?I think this must be $m$, but I can't figure out why.
Edit: Let me clarify why I expect x to be rational with an upper bound on the denominator. Suppose G is a subgroup of the Euclidean Group with isometries (t,q) as elements (t: translational part, q:
linear part). The subgroup T which contains all isometries in G with trivial linear part is a normal subgroup of G. Suppose now that T can be identified with a $\mathbb Z$-lattice in $\mathbb R^n$,
then G/T is isomorph with a finite subgroup Q of $GL(n,\mathbb Z)$. Crystallographers call G a space group and Q a point group.
There are only finite many conjugacy classes of finite subgroups in $GL(n,\mathbb Z)$, so there are only finite many point groups up to conjugacy in $GL(n,\mathbb Z)$. Now I want to understand why
from this finite number of point groups, a finite number of (non-equivalent) space groups can be deduced. If we write G as the union of cosets of T
we see that (composition of two isometries and q belongs to exactly one coset)
$t_{q_1.q_2}=t_{q_1}+q_1.t_{q_2} \quad mod\ \mathbb Z^n$
So we know that $t_{q}$ is a real vector $0\leq t_{q}<1$. Using the previous property we also find that (m order of q)
$(t_{q},q)^{m}=(q^{1}\cdot t_{q}+\cdots+q^{m}\cdot t_{q},q^m)\in (0,id)T$
$\Leftrightarrow (q^{1}+\cdots+q^{m})\cdot t_{q}=0\quad mod\ \mathbb{Z}^{n}$
If an appropriate origin is chosen in Euclidean space, $t_{q}$ should be rational with maximal denominator $m$. Maybe investigating $(t_{q},q)^{m}$ is not the best way to find bounds on $t_{q}$?
gr.group-theory congruences linear-algebra
I don't get it. Isn't $q+q^2+q^3+...+q^m=1+q+q^2+...+q^{m-1}$ the inverse of $1-q$ (where $1$ means the identity matrix), and thus invertible over $\mathbb Q/\mathbb Z$ as well? – darij grinberg
Sep 2 '11 at 16:05
No, it isn't the inverse, since $(1+ q + \dots + q^{m-1})(1-q)=0$. But this shows that the columns of $1-q$ are in the "kernel" of $1+q+\dots +q^{m-1}$. – Ralph Sep 2 '11 at 16:41
Isn't the point that your matrix has the form $mE$ for some idempotent matrix $E$, and is also integral? – Geoff Robinson Sep 2 '11 at 18:22
Ah, right. Another of my $0$-$1$ mixups. – darij grinberg Sep 2 '11 at 18:33
@Ralph: you're right, fixed it. – Wox Sep 5 '11 at 8:12
show 2 more comments
3 Answers
active oldest votes
Edit: I couldn't resist my predilection for generalizations: Using darij grinberg's simplification, the proof below shows:
Let $k$ be a field, $q \in GL_n(k)$ a matrix of finite exponent $m$ with char$(k) \nmid m$ and $M \subseteq k^n$. Futhermore, let $E$ be the eigenspace of $q$ corresponding to the
eigenvalue $1$ and let $U \le k^n$ be the space spanned by the columns of $1-q$. Then the following is true for $A := 1+q+\dots + q^{m-1}$:
□ $\lbrace x \in k^n \mid Ax \in M \rbrace = U + \frac{1}{m}(E \cap M)$
□ $U$ and $(1/m)(E \cap M)$ intersect in $0$ iff $0 \in M$, otherwise the intersection is empty
□ $A$ is diagonizable with diagonal $(m,...,m,0,...,0)$ where the number of m's equals $\dim E$
(Older formulation)
Let $E \le \mathbb{C}^n$ be the eigenspace of $1$ of the matrix $q$ and let $U \le \mathbb{C}^n$ be the space spanned by the columns of $1-q$.
Set $A := 1+q+\dots + q^{m-1}$ and $X:= \lbrace x \in \mathbb{C}^n \mid A\cdot x \in \mathbb{Z}^n \rbrace$ and $L := E \cap \mathbb{Z}^n$.
up vote 5
down vote Then the following holds:
$X = U \oplus \frac{1}{m}L$.
Proof: Assume $\dim E = d$. Then $\dim U = \text{rank}(1-q) = n-d$.
Since each $x \in E$ satisfies $Ax = mx$, $E$ contains eigenvectors from $A$ of the eigenvalue $m$. From $A \cdot (1-q) = 0$ it follows that $U$ consists of eigenvectors of $A$ of the
eigenvalue $0$. Hence $E \cap U = 0$ and for dimensional reasons $$\mathbb{C}^n = U \oplus E.$$ Since $q$ has integral entries, it's possible to chosse a basis of $E$ in $\mathbb{Q}^n$ and
by multiplying with a suitable integer it's also possible to choose a basis in $\mathbb{Z}^n$. Therefore $L = E \cap \mathbb{Z}^n$ is a lattice of rank $d$. Let $\lbrace e_1, \dots, e_d \
rbrace$ be a basis of $L$. Let $x \in X$ and write $$x = u + \sum_i \alpha_i e_i \text{ with } \alpha_i \in \mathbb{C}.$$ Then $Ax = \sum_i m\alpha_i e_i \in \mathbb{Z}^n$ and $q(Ax) =
Ax$. It follows $Ax \in E \cap \mathbb{Z}^n = L = \oplus_i \mathbb{Z}e_i$ and therefore $m\alpha_i \in \mathbb{Z}$. This shows $X \subseteq U \oplus (1/m)L$. The converse inclusion is
obvious. qed.
Edit: Also note that the image of $A$ is given by $$ Y := \lbrace Ax \mid x \in X \rbrace = L.$$
1 Am I off track here or is it possible that your proof can be simplified as follows: Instead of writing $x = u+\sum\limits_i \alpha_i e_i$ with $\alpha_i\in \mathbb C$, let me write $x=
u+e$ for some $u\in U$ and $e\in E$. Then, $Ax=me\in\mathbb Z^n$ and $q\left(Ax\right)=Ax$. This yields $Ax\in E\cap \mathbb Z^n = L$, so that $me=Ax\in L$ and thus $e\in\frac{1}{m}L$,
so that $x = u + e \in U \oplus \frac{1}{m}L$. Nowhere we are using a basis of $E$ or any other property of lattices. Too simple to be true? – darij grinberg Sep 3 '11 at 9:49
Looks good. So the problem can be solved completely by means of linear algebra. In fact, it would be a nice exercise in a LA course. – Ralph Sep 3 '11 at 18:55
In your generalization, you should require $M$ to have some structure. – darij grinberg Sep 4 '11 at 11:06
Well, you are right, the direct sum isn't appropriate. But the result holds for every subset $M$, if one defines (see the change in the last edit) $A+B := \lbrace a+b \mid a \in A, b \
in B \rbrace$ for subsets $A, B \subseteq k^n$ (of course, $A+B = \emptyset$, if $A$ or $B$ empty). – Ralph Sep 4 '11 at 13:10
I forgot to mention that $x\in \mathbb R^n$ – Wox Sep 5 '11 at 15:45
show 4 more comments
That's an answer/comment to the secondary question.
I don't know, if the result can be derived from the finiteness of the matrix $q$ alone (it seems to me that you don't explore the fact that a space group consists of isometries). There is
another point that makes me wonder: At the begining you are considering an isometry $(x,q)$ with $q \in GL(n,\mathbb{Z})$. But then $q \in GL(n,\mathbb{Z}) \cap O_n =P$, the group of
permutation matrices with signed entries. Aren't there space groups, those rotational parts form larger groups than $P$ ?
That said, I was looking in the internet and found a paper (link), having a proof (section 5) that is somewhat related to your approach in the Edit-part of the original question.
The idea is roughly: Let $G$ be a space group with translation subgroup $T$ and let $L$ be the lattice correspnding to $T$. Choose a system of representatives $\lbrace q_1, ...,q_m \rbrace$
for $G/T$ and a basis $\lbrace b_1, ..., b_n \rbrace$ of $L$. Let $a_i$ be the translational part of $q_i$. By writing $a_i$ as linear combination of the $b_j$ it follows that $q_i$ can be
choosen such that $|a_i| \le |b_1| + ... + |b_n| =: \alpha$ (Euklid-Norm). If $x_0 \in \mathbb{R}^n$ let $[x_0]$ denote translation by $x_0$. Then the following product can be expressed as
$$q_i \circ q_j = [\sum_k l_{ijk}b_k] \circ q_{\eta(i,j)},\hspace{10pt} l_{ijk} \in \mathbb{Z},\quad \eta(i,j) \in \lbrace 1,...,m \rbrace \hspace{50pt} (\ast)$$
up vote 1 It's easy to see, that the group law of $G$ is uniquely determined by $(\ast)$.
down vote
If $G'$ is another space group with $(G':T') = (G:T)$, repeat the same procedure and define a mapping $G \to G'$ by $q_i \to q'_i$, $b_j \to b_j'$. If $l_{ijk}' = l_{ijk}$ and $\eta(i,j)' =
\eta(i,j)$ for all $i,j,k$, this is an isomorphism. Therefore there are only finitely many space groups $G$ for fixed $(G:T)$ (up to isomorphism), if it can be shown that there are only
finitely many possible values for the $l_{ijk}$.
If the rotational part of $q_i$ is the matrix $A_i \in O_n$, $(\ast)$ shows $$ | \sum_k l_{ijk}b_k | = |a_i + A_ia_j-A_iA_j(A_{\eta(i,j)})^{-1} a_{\eta(i,j)}| \le |a_i| + |a_j| + |a_{\eta
(i,j)}| \le 3\alpha$$
Suppose $\lbrace b_1, ..., b_n\rbrace$ is an orthonormal base of $\mathbb{R}^n$. Then $\alpha = n$ and each $|l_{ijk}| \le 3n$ is bounded. Thus the result is shown in this case. In general,
a similar estimate holds, but it's harder to establish (that's step 2 on page 144 that relies on lemmas 4.1, 4.2).
Remark: Using the theory of group extensions, the result follows easily from the finiteness of $H^2(Q; \mathbb{Z}^n)$ for finite groups $Q$.
Why is $q \in GL(n,\mathbb{Z}) \cap O_n$? The linear parts of isometries are orthogonal transformations, but don't necessarily correspond to orthogonal matrices (only for an orthonormal
basis). So either $q\in O(n,\mathbb{R})$ or $q \in GL(n,\mathbb{Z})$. So loosing orthogonality is the price we pay for wanting integer matrices. Or am I missing something? – Wox Sep 12
'11 at 8:32
The fact that $(q+q^2+q^3+...+q^m).x = 0\quad mod\; \mathbb Z^n$ results from considering isometries and the properties of space groups, so I do explore the fact that a space group
consists of isometries. – Wox Sep 12 '11 at 8:35
Anyway, you answered my original question and got me on the right track for the secondary problem. Thanks Ralph! – Wox Sep 14 '11 at 7:41
add comment
Edit: This is a secondary question on how Ralph's solution can be simplified by choosing an appropriate origin in Euclidean space.
Ralph's solution to my original question, in the context of space groups, states that an isometry $(x,q)$ in a space group $G$ with linear part $q\in Q< GL(Z^n)$, must have a
translational part x for which
$X_q=\lbrace x\in\mathbb R^n: (q^{1}+\cdots+q^{m})\cdot x\in \mathbb Z^n\rbrace =Col(q-1)+\frac{1}{m}(Null(q-1)\cap \mathbb Z^n)$
Note first that from the composition of isometries we find that
$(t_1,q_1)(t_2,q_2)=(t_1+q_1\cdot t_2,q_1\cdot q_2)$
$\Leftrightarrow X_{q_{1}\cdot q_{2}}=X_{q_{1}}+(q_{1}-1)\cdot X_{q_{2}}$
This means that we must only consider the $X_q$ for the generators of the finite group $Q< GL(Z^n)$ (i.e. the point group).
After a shift of origin in Euclidean space, i.e. an affine transformation $(v,1)$ with $v\in \mathbb R^n$, we can write that
up vote 0
down vote $X_q'=Col(q-1)+\frac{1}{m}(Null(q-1)\cap \mathbb Z^n)+(q-1)\cdot v$
Since $(q-1)\cdot v\in Col(q-1)$, we can find for every $u\in Col(q-1)$ a vector $v\in \mathbb R^n$ for which $(q-1)\cdot v=-u$. Thus for a proper choice of origin we can write for a
generator q
$X_q=\frac{1}{m}(Null(q-1)\cap \mathbb Z^n)$
so that $t_q = X_q\ mod\ \mathbb Z^n$ is a rational number with maximal denominator $|Q|$ (which is the maximal possible m). The question is now, can we find one $v\in \mathbb R^n$ so
that this simplification can be done for all $X_q$? For this, the column spaces $Col(q-1)$ for generators q of Q should be linear independent. If we call $S$ the generating set of Q, then
this can be expressed as
$\forall q,p\in S: Col(q-1)\cap Col(p-1)=\lbrace 0 \rbrace$
Is this true?
One can't always find generators q with $Col(q-1)$ intersection $\lbrace 0 \rbrace$ for a point group, so this is not the way to go. – Wox Sep 14 '11 at 7:39
add comment
Not the answer you're looking for? Browse other questions tagged gr.group-theory congruences linear-algebra or ask your own question.
|
{"url":"http://mathoverflow.net/questions/74370/finite-subgroup-of-gln-mathbb-z-and-congruences/75058","timestamp":"2014-04-18T21:00:54Z","content_type":null,"content_length":"81786","record_id":"<urn:uuid:a4a47b8e-57a5-45e0-90df-e1a834c4000a>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00579-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: st: Marginal effects after heckman
Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org is already up and
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
Re: st: Marginal effects after heckman
From Maarten buis <maartenbuis@yahoo.co.uk>
To statalist@hsphsun2.harvard.edu
Subject Re: st: Marginal effects after heckman
Date Mon, 27 Sep 2010 12:11:22 +0000 (GMT)
--- On Mon, 27/9/10, Fred Dzanku wrote:
> How can I get marginal effects of the (probit) selection
> equation after running a heckman selection model by maximum
> likelihood? I estimated a model in the form
> margins, dydx(*) atmeans predict(equation(select)) ///
> vce(unconditional)
> but what I get as marginal effects are exactly the same as
> the coefficients of the selection equation.
You asked for the marginal effects on the linear predictor (xb)
of the selection equation. This is ofcourse exactly the same as
the coefficients you already got. What you want are the marginal
effects on -normal(xb)-, i.e. the probability of being selected.
I don't see a prediction option that will directly give you that
(I looked in -help heckman postestimation-), but you can get it
by specifying the -expression()- option in -margins-, like so:
*---------------- begin example ------------------
webuse womenwk, clear
// the coefl option is only there so I know
// how the selection equation is called
// (in this case Stata chose a very sensible name)
heckman wage educ age, ///
select(married children educ age) coefl
margins, dydx(*) expression(normal(xb(select)))
*----------------- end example --------------------
(For more on examples I sent to the Statalist see:
http://www.maartenbuis.nl/example_faq )
Hope this helps,
Maarten L. Buis
Institut fuer Soziologie
Universitaet Tuebingen
Wilhelmstrasse 36
72074 Tuebingen
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
|
{"url":"http://www.stata.com/statalist/archive/2010-09/msg01266.html","timestamp":"2014-04-16T16:09:01Z","content_type":null,"content_length":"9240","record_id":"<urn:uuid:093afbd8-bb51-45be-83d2-06e72f81d29b>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00539-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Holy Quran: A Binary Symmetric Book
Bumping this because I found this extremely interesting and was not aware of it so I hope others can learn from it too.
Ignorance is of course the most prized possession of any cult.
Quran alone is too much for those who had too many years of corrupted Islam.
The concern is this. I thought submission.org was right. However after looking into that link above, I noticed that the number of surahs with odd numbered verses, are 27, and the number of surahs
with an even number of verses, are...oh god, nvm.*slaps head*Man I'm a dumb@$$. It doesn't matter! lol...because with those two verses taken out...it's still 127, an odd number. It doesn't change
anything! Glory be to Allah, the greatest mathematician.
2:260 And Abraham said: "My Lord, show me how you resurrect the dead." He said: "Do you not already believe?" He said: "I do, but it is so my heart can be relieved." He said: "Choose four birds, then
cut them, then place parts of the birds on each mountain, then call them to you; they will come racing towards you. And know that God is Noble, Wise."
I hope you haven't made the same mistake as the 19 code in producing this work. The quran is from ALLAH by virtue of its words alone but one would not be surprised if there is some physically
unbreakable cryptography in the mix somewhere. Those letters have to do something or other.
|
{"url":"http://free-minds.org/forum/index.php?topic=11852.0","timestamp":"2014-04-23T23:12:38Z","content_type":null,"content_length":"37745","record_id":"<urn:uuid:5583f311-a342-4867-8714-f04be830efe3>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00160-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Change Fractions to Decimals to Percents
1# Change Fractions to Decimals to Percents.
VDO of Change Fractions to Decimals to Percents
Change Fractions to Decimals to Percents Video Clips. Duration : 5.00 Mins.
We had a good read. For the benefit of yourself. Be sure to read to the end. I want you to get good knowledge from 4Th Grade Math Worksheets. How to change fraction numbers to decimal numbers, then
to percent numbers. Watch full versions of all 3 math videos free of charge for a limited time. Go to www.EssaysMadeEasy.com for details
View Related articles related to 4Th Grade Math Worksheets. I Roll below. I have recommended my friends to help share the Facebook Twitter Like Tweet. Can you share Change Fractions to Decimals to
|
{"url":"http://4th-grademath.blogspot.com/2012/07/change-fractions-to-decimals-to-percents.html","timestamp":"2014-04-19T19:33:28Z","content_type":null,"content_length":"101739","record_id":"<urn:uuid:ca365108-5641-4599-a122-384a23ae3686>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00424-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Measurement unit conversion: cubic metre/hour
Measurement unit conversion: cubic metre/hour
›› Measurement unit: cubic metre/hour
Full name: cubic metre/hour
Plural form: cubic meters/hour
Alternate spelling: cubic metres/hour
Category type: volume flow rate
Scale factor: 0.000277777777778
›› SI unit: cubic meter/second
The SI derived unit for volume flow rate is the cubic meter/second.
1 cubic meter/second is equal to 3600 cubic metre/hour.
›› Convert cubic metre/hour to another unit
Convert cubic metre/hour to [ ] [Go!]
Valid units must be of the volume flow rate type.
You can use this form to select from known units:
Convert cubic metre/hour to [ ] [Go!]
I'm feeling lucky, show me some random units
›› Sample conversions: cubic metre/hour
cubic metre/hour to barrel/day [US] cubic metre/hour to cubic millimetre/minute cubic metre/hour to million gallon/hour [UK] cubic metre/hour to acre inch/hour [survey] cubic metre/hour to cubic mile
/second cubic metre/hour to hectolitre/second cubic metre/hour to cubic metre/day cubic metre/hour to million gallon/day [US] cubic metre/hour to million gallon/hour [US] cubic metre/hour to barrel/
day [UK]
|
{"url":"http://www.convertunits.com/info/cubic+metre/hour","timestamp":"2014-04-17T04:11:03Z","content_type":null,"content_length":"28811","record_id":"<urn:uuid:b1089724-063e-4f49-95c0-ac0b17b708cc>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00126-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Fleetwood, NY Precalculus Tutor
Find a Fleetwood, NY Precalculus Tutor
...They will also learn how to construct and work with truth tables and truth trees. I try to teach not just the terms or the steps involved in introductory logic, but to help students gain a
conceptual understanding--a knowledge of both the "how" and the "why"--so they are better prepared for more...
34 Subjects: including precalculus, reading, English, GRE
...I am currently tutoring two other students in calculus, one in AP calculus, the other in university calculus. I have a doctorate in math. Such a degree requires much facility in geometry.
19 Subjects: including precalculus, reading, writing, calculus
I have tutored many kids ranging from first to eighth grades in mathematics and have helped improve their grades. One student I tutored in eighth grade math started off the year with a 64
average, but after I tutored him he ended up with a 92 average. I have been tutoring kids for years and am stu...
9 Subjects: including precalculus, geometry, algebra 1, algebra 2
I earned an MS in electrical engineering from Boston University and BS in mechanical engineering from Northeastern University. I was a teaching assistant and tutor at both universities. I worked
for many years as a software engineer and software manager for several major companies.
12 Subjects: including precalculus, chemistry, calculus, physics
...I also repair most of my friends clothes when needed using both patch reinforcement with a machine, or by hand. I took a Molecular and Mendelian Genetics course at Columbia and truly loved it.
This course inspired me to major in Biology and Chemistry, and I hope to pursue a career in some form of molecular manipulation of genes to treat disease and cancer.
25 Subjects: including precalculus, chemistry, geometry, calculus
Related Fleetwood, NY Tutors
Fleetwood, NY Accounting Tutors
Fleetwood, NY ACT Tutors
Fleetwood, NY Algebra Tutors
Fleetwood, NY Algebra 2 Tutors
Fleetwood, NY Calculus Tutors
Fleetwood, NY Geometry Tutors
Fleetwood, NY Math Tutors
Fleetwood, NY Prealgebra Tutors
Fleetwood, NY Precalculus Tutors
Fleetwood, NY SAT Tutors
Fleetwood, NY SAT Math Tutors
Fleetwood, NY Science Tutors
Fleetwood, NY Statistics Tutors
Fleetwood, NY Trigonometry Tutors
Nearby Cities With precalculus Tutor
Allerton, NY precalculus Tutors
Bardonia, NY precalculus Tutors
Bronxville precalculus Tutors
Heathcote, NY precalculus Tutors
Hillside, NY precalculus Tutors
Inwood Finance, NY precalculus Tutors
Manhattanville, NY precalculus Tutors
Maplewood, NY precalculus Tutors
Mount Vernon, NY precalculus Tutors
Mt Vernon, NY precalculus Tutors
River Vale, NJ precalculus Tutors
Scarsdale Park, NY precalculus Tutors
Throggs Neck, NY precalculus Tutors
Tuckahoe, NY precalculus Tutors
Wykagyl, NY precalculus Tutors
|
{"url":"http://www.purplemath.com/Fleetwood_NY_Precalculus_tutors.php","timestamp":"2014-04-16T10:34:32Z","content_type":null,"content_length":"24372","record_id":"<urn:uuid:82b39dc5-8a33-44f9-84ac-58431ff3fa54>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00281-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Why do we square numbers?
August 2nd 2011, 02:07 AM
Why do we square numbers?
I understand how to square a number but I'm a little lost in my understanding as to why we would do it in certain circumstances?
For example.
If I take a number, say 20% and take another number 10% and divide 10 into 20 I get the figure 2.
You could liken this to an investment where the investment returns 20% and you would still keep that investment until it was returning 10% but not below. This would make 10% your minimum return.
The resulting 2 would represent how many times the original value of the whole investment you would pay in order to achieve that 10% return. This is just hypothetical.
What would be achieved by squaring the 2 (making it 4). Is there some assumption to be made by squaring a number. If my question isn't clear then would anyone know if there is anything out there
which explains the theory behind the reasons for squaring a figure and what is achieved. I often see this in maths but aren't sure what the reasons are for it.
August 2nd 2011, 02:10 AM
mr fantastic
Re: Why do we square numbers?
I understand how to square a number but I'm a little lost in my understanding as to why we would do it in certain circumstances?
For example.
If I take a number, say 20% and take another number 10% and divide 10 into 20 I get the figure 2.
You could liken this to an investment where the investment returns 20% and you would still keep that investment until it was returning 10% but not below. This would make 10% your minimum return.
The resulting 2 would represent how many times the original value of the whole investment you would pay in order to achieve that 10% return. This is just hypothetical.
What would be achieved by squaring the 2 (making it 4). Is there some assumption to be made by squaring a number. If my question isn't clear then would anyone know if there is anything out there
which explains the theory behind the reasons for squaring a figure and what is achieved. I often see this in maths but aren't sure what the reasons are for it.
Well, for starters I would have thought the idea was pretty useful for finding the area of things like squares, for example.
August 2nd 2011, 02:21 AM
Re: Why do we square numbers?
Hi, thanks for the response.
It is indeed useful for that but I'm trying to work out if squaring a number (say, in the example I gave), would say something about the result of 4. For example, if I originally would pay 2
times as much for an investment because my minimum return was only half of what it is generating now then what would squaring the number achieve. I'm thinking of a specific example but it's far
to extensive for me to put on a forum. The general concept is there.
I've tried to think of reasons for say, Squaring a Standard Deviation to get the Variance. It not only changes a possibly negative number to a positive figure but it more than doubles that figure
as well. What meaning does squaring give the result?
August 2nd 2011, 04:57 AM
Re: Why do we square numbers?
Hi, thanks for the response.
It is indeed useful for that but I'm trying to work out if squaring a number (say, in the example I gave), would say something about the result of 4. For example, if I originally would pay 2
times as much for an investment because my minimum return was only half of what it is generating now then what would squaring the number achieve. I'm thinking of a specific example but it's far
to extensive for me to put on a forum. The general concept is there.
I've tried to think of reasons for say, Squaring a Standard Deviation to get the Variance. It not only changes a possibly negative number to a positive figure but it more than doubles that figure
as well. What meaning does squaring give the result?
It is the otherway around squaring per se has no significance, you square because it does something you need done.
Aside: we do not square the standard deviation to get the variance. It is the variance which is of fundamental, we square root it to get the SD because that gives us something in the same units
as the data and because a lot of things of statistical interest scale as the square root of the variance. Also, by definition SD is always non-negative.
August 3rd 2011, 04:41 AM
Re: Why do we square numbers?
Thanks again but I'll put it another way this time.
If I have an investment which grows at 20% and I only require 10%. If I pay twice as much for that investment then I'll receive the 10% on the whole amount I invested. (never mind why I would do
that). I say twice as much because 20% divided by 10% = 2.
If I take the original 20% return (the actual money I received) and re-invest it into the investment returning 20% (before I reinvested the returns) then that investment would grow. Either the
numerator or the denominator would increase or both? I'm not sure.
What I do know is, if I was to square the number 2 (originally calculated) to give me 4, that number 4 now would mean that I would be paying 4 times the value for the original investment as
opposed to 2. I squared the 2 because this is apparently the way to increase the original investment (by 4 in this case) when I reinvest the original 20% into that original investment earning
This is a financial fact but I don't understand why squaring works in this case. I was wondering if there's a way of analyzing this that I'm not seeing.
Thanks again. (Happy)
August 3rd 2011, 04:56 AM
Re: Why do we square numbers?
Part of the problem is that you are being very loose with operations and the meanings behind them.
Let's just try labeling some of these values.
20% = .2 is a grow rate of an investment.
10% = .1 is another growth rate (which you call "required", but that only distracts us from the issue).
.2÷.1 = 2 ...... but this is essentially meaningless, unless you are trying to compare growth rates.
We have to label the investment. Call it P (for "principal").
What do you mean by "If I pay twice as much for that investment"? That English is so ambiguous...
How about "I invest twice as much"? That way, you can just compare the original amount, P, with twice that... which is 2P.
Now what do you mean by "the original 20% return"? Is it the principal plus interest, or just the interest? Then "the original 20% return" is either 1.2P (120% of P) or just .2P
Can you see why this is confusing?
To make things worse, you then go on to say that you are re-investing the "20% return" BEFORE you "reinvested the returns".
So you're jamming a whole bunch of numbers and computations together, but you haven't been very strict with what you are talking about.
Also, when you ask about the numerator or denominator increasing, that is misguided.
|
{"url":"http://mathhelpforum.com/math-topics/185474-why-do-we-square-numbers-print.html","timestamp":"2014-04-18T03:56:32Z","content_type":null,"content_length":"12250","record_id":"<urn:uuid:8fd17a67-8eb4-4f38-8bc0-68b2805a518f>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00094-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions
- User Profile for: spREMOVEHITSjeffA_@_IGNoptonline.net
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Most relevant results by forum
Results: 47
Pages: 4 [ 1 2 3 4 | Next ]
Results from comp.soft-sys.matlab Show all results within this discussion.
1) Re: What is unbalanced here? 100%
Posted: Nov 20, 2013 4:09 PM, by: Jeff
Show all results within this topic.
2) Re: Who uses MATLAB? 100%
Posted: Oct 14, 2013 2:11 PM, by: Jeff
Show all results within this topic.
3) Re: Plot to a specific figure 100%
Posted: Jul 10, 2013 9:06 PM, by: Jeff
Show all results within this topic.
4) Re: Replacing `find` with logical indexing 100%
Posted: Jul 10, 2013 10:55 AM, by: Jeff
Show all results within this topic.
5) Find the location of the single smallest absolute nonzero entry [NO QUESTION HERE] 100%
Posted: May 13, 2013 9:04 PM, by: Jeff
Show all results within this topic.
6) How to make four nested loops more efficient? 100%
Posted: May 2, 2013 10:55 AM, by: Jeff
Show all results within this topic.
7) How can you use the Rotate3D tool during an animation? 100%
Posted: Mar 23, 2013 6:49 PM, by: Jeff
Show all results within this topic.
8) Re: Can my code be made more efficient using MATLAB's vectorization? 100%
Posted: Feb 16, 2013 5:51 PM, by: Jeff
Show all results within this topic.
9) Re: How do I pass a value to a function called by ode45? 100%
Posted: Feb 7, 2013 10:42 PM, by: Jeff
Show all results within this topic.
10) Re: How to import data without losing much precision 100%
Posted: Dec 3, 2012 4:44 PM, by: Jeff
Show all results within this topic.
11) Re: Export Data from C to MATLAB 100%
Posted: Oct 31, 2012 7:10 PM, by: Jeff
Show all results within this topic.
12) Can you make a circulant matrix of matrices 100%
Posted: Oct 27, 2011 9:01 PM, by: Jeff
Show all results within this topic.
13) Re: How to improve this with Matlab vectors 100%
Posted: Oct 14, 2011 5:26 AM, by: Jeff
Show all results within this topic.
14) Re: Output order of eig(matrix) 100%
Posted: Oct 10, 2011 1:17 AM, by: Jeff
Show all results within this topic.
15) Re: Is it OK to set almost zero values to zero? 100%
Posted: Sep 6, 2011 12:10 PM, by: Jeff
Show all results within this topic.
|
{"url":"http://mathforum.org/kb/search!execute.jspa?userID=611709&forceEmptySearch=true","timestamp":"2014-04-20T11:02:42Z","content_type":null,"content_length":"30481","record_id":"<urn:uuid:09a3c0c1-6ae2-4a7c-990f-4a117657297e>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00010-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions
- geometry.pre-college.independent
Discussion: geometry.pre-college.independent
A discussion of secondary school geometry curricula, teaching geometry classes, new software and texts, problems for students, and supplementary materials such as videos and manipulatives. It's also
the place for geometry students to talk about geometry with their peers.
To subscribe, send email to majordomo@mathforum.org with only the phrase subscribe geometry-pre-college in the body of the message.
To unsubscribe, send email to majordomo@mathforum.org with only the phrase unsubscribe geometry-pre-college in the body of the message.
Posts to this group from the Math Forum do not disseminate to usenet's geometry.pre-college newsgroup.
|
{"url":"http://mathforum.org/kb/forum.jspa?forumID=128&start=3540","timestamp":"2014-04-21T16:12:25Z","content_type":null,"content_length":"38909","record_id":"<urn:uuid:69ada50b-7bf8-446e-b224-540bcd4447b2>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00562-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Method for Obtaining a Structure Factor of an Amorphous Material, in Particular Amorphous Glass
Patent application title: Method for Obtaining a Structure Factor of an Amorphous Material, in Particular Amorphous Glass
Sign up to receive free email alerts when patent applications with chosen keywords are published SIGN UP
An incident X-ray is emitted in a wide angular sector toward an amorphous material specimen which backscatters the X-rays. The method comprises: a step of recording experimental photon intensity
measurements as a function of the angle of incidence; a step of correcting the experimental intensity, taking into account at least the absorption phenomena inside the specimen dependent on the
penetration length l of the incident wave inside the specimen before reflection; a normalization step referring the corrected intensity arising from the experimental intensity to an electron
intensity according to a normalization coefficient (α); a step of calculating a discretized function Q.i(Q), i being a reduced intensity arising from the measurements of the corrected and normalized
experimental intensity and Q being the modulus of the wave scattering vector proportional to the quantity (sin θ)/λ, 2θ being the scattering angle and λ being the length of the wave emitted, the
normalization constant (α) varying in a recursive manner so as to minimize the slope of the affine straight line obtained by linear regression over the values of the function Q.i(Q), during each
iteration the values of the reduced intensity being calculated for a penetration length l, the function Q.i(Q) sought corresponding to the minimum slope; a step of determining the structure factor on
the basis of the distribution of the radial atomic concentration ρ(r) dependent on Q.i(Q).
A method for obtaining the structure factor of an amorphous material on the basis of a spectrum of X-ray scattering inside a specimen of said material recorded experimentally, at least one X-ray
being emitted as an incident ray toward said specimen and reflected toward a detector, the incident X-ray scanning the surface of the specimen according to a given angle of incidence, comprising: a
step of recording experimental photon intensity measurements performed by the detector as a function of the angle of incidence; a step of correcting the experimental intensity, taking into account at
least the absorption phenomena inside the specimen, the amount of intensity absorbed at each measurement being dependent on the penetration length l of the incident wave inside the specimen before
reflection; a normalization step referring the corrected intensity arising from the experimental intensity to an electron intensity according to a normalization coefficient (a); a step of calculating
a discretized function Q.i (Q), i being a reduced intensity, which is the ratio (I
) of the reflected dependent coherent intensity over the reflected independent coherent intensity, arising from the measurements of the corrected and normalized experimental intensity and Q being the
modulus of the wave scattering vector proportional to the quantity (sin θ)/λ,
theta. being the scattering angle and λ being the length of the wave emitted, the normalization constant (α) varying in a recursive manner so as to minimize the slope of the affine straight line
obtained by linear regression over the values of the function Q.i(Q), during each iteration the values of the reduced intensity being calculated for a penetration length l, the function Q.i(Q) sought
corresponding to the minimum slope; and a step of determining the structure factor on the basis of the distribution of the radial atomic concentration ρ(r) dependent on Q.i(Q).
The method as claimed in claim 1, wherein the function Q.i(Q) sought corresponds to the zero slope.
The method as claimed in claim 1, wherein the reduced intensity is obtained on the basis of the experimental intensity corrected for the phenomena of absorption, polarization and residual gas I
corrected and the independent incoherent intensity I
and independent coherent intensity I
: i = I measured corrected - ( I ii + I ci ) I ci . ##EQU00015##
The method as claimed in claim 1, wherein the normalization coefficient α is given by the following relation: α = ∫ 0 ∞ Q 2 I exp Q - 2 π 2 ρ 0 ( j Zj ) 2 ∫ 0 ∞ Q 2 ( I elastic ind + I inelastic ind
) Q , ##EQU00016## ρ
being the mean atomic density corresponding to the inverse of the volume of the atoms present in a unit of composition of the specimen, I
being the experimental intensity, I
ind and I
ind being the elastic independent and inelastic independent reflected intensity and Zj corresponding to the atomic number of an atom j.
The method as claimed in claim 1, wherein the function Q.i(Q) obtained is related to the radial atomic concentration distribution function ρ(r) by the following relation: r [ ρ ( r ) - ρ ( 0 ) ] = 1
2 π 2 ∫ a ∞ Qi ( Q ) sin Qr Q , ##EQU00017## the radius r being the distance from a given atom, the scattering center, ρ(r) being the atomic concentration in a spherical shell of radius r and of
thickness dr.
The method as claimed in claim 1, wherein the X scattering method is the so-called large angle scattering method (WAXS).
The method as claimed in claim 1, wherein the material is amorphous glass.
The method as claimed in claim 7, wherein the glass may envelop radioactive elements.
The present invention relates to a method for obtaining the structure factor of an amorphous material. This material is for example amorphous glass. The invention is applied notably for determining
the structure factor of glasses for protecting radioactive elements so as to study the evolution of this factor under the effect of accumulated radioactivity.
The storage of radioactive elements must meet very severe ongoing safety and reliability criteria. In particular, protection in relation to the exterior environment must remain effective for several
tens of years, or indeed several centuries. Radioactive waste is ranked according to several levels. The most sensitive radioactive materials, that is to say those which exhibit the highest
radioactivity, are stored in amorphous glass which is a neutral material from the radioactive standpoint, thus forming a barrier to the propagation of radioactivity. In practice, radioactive waste is
embedded in glass by high-temperature fusion, whereby blocks of glasses are created. The radioactivity is then held captive in these blocks of glasses which are generally in the form of tubes to
facilitate storage.
On a scale of a few years, or indeed several tens of years, it is known that protection against radioactive leaks remains effective. However, beyond this observed duration, there is no certainty
about the absolute effectiveness of glass against leaks. In particular, the radioactive atoms held inside the glass could have a non-negligible impact over time, possibly eventually causing
radioactivity leaks.
A characterization of the structure of amorphous glasses is therefore necessary in order to anticipate possible long-term problems. In particular, it is necessary to characterize the influence of
radioactive elements on the structure of the glass, so as to ascertain notably whether radioactive radiation modifies this structure, how or according to what law, thus making it possible to
ascertain whether protection is maintained over the long term or whether it weakens, to what extent and how to remedy this.
Unlike crystalline matrices, amorphous matrices are devoid of any periodic structure. The characterization of such structures is therefore a problem of great complexity, where modeling plays a
significant role. Therefore, this characterization relies rather on obtaining information in the small interatomic distance region. Experimentally, a set of diagnostics may be implemented, which
include nuclear magnetic resonance (NMR) or Wide Angle X-ray Scattering (WAXS).
In order to study the disordered structure of an amorphous glass, it is possible to use the statistical approach consisting in obtaining, on the basis of spectra recorded experimentally by the WAXS
method, information about the atomic distribution, which is one of the most characteristic representations of an amorphous structure.
In this context, a significant quantity is the elastic scattering, coherent, dependent or interfering, inside the glass on the basis of an emitted X-ray and containing information about the
constructive interferences which occur when the electromagnetic wave passes in proximity to the atoms which are viewed as scattering centers. X-ray diffraction is a coherent and elastic scattering
phenomenon which occurs when X-rays interact with matter. The diffracted wave results from the interference of the waves scattered by each atom.
An experimental spectrum which is recorded by the WAXS method is recorded over the widest possible region of scattering angles. In this case, it is the resultant of elastic and inelastic scattering
phenomena, which are dependent for small scattering angles and quasi independent for large scattering angles. It is therefore necessary to extract just the fraction of dependent coherent signal by
correcting the initial spectrum for the various phenomena which alter it. This requires notably a knowledge of the scattering of the incident beam by the residual gas present around the specimen
studied, of the absorption by this specimen and of the various polarizations which occur when the X-ray beam is reflected at the surface of the specimen or of the crystal of the monochromator.
These various corrections are related to the specifics of the diffractometers used, in particular to the type of monochromator, to the nature of the residual gas surrounding the diffractometer used,
to the type of detector, to the presence of filters in the path of the X-rays and to the scattering of the beam by reflection or by transmission. The other corrections applied to the experimental
spectrum which may not be estimated experimentally like the independent coherent scattering or the independent incoherent scattering, are evaluated in a theoretical manner with the aid of tables
arising from ab-initio calculations.
The application of the various corrections makes it possible to construct the structure factor of the glass, and then the radial distribution function. It makes it possible essentially to quantify
the interatomic distances, as well as the coordinance numbers of the matrix studied.
All the operations described above, as well as the calculation of the radial distribution function, must be performed by successive steps:
on the one hand, the obtaining of an appropriate structure factor requires several iterations in the course of which corrective parameters may be adjusted;
on the other hand, the calculation of the radial distribution function by Fourier transform comes up against the effect of spectrum truncation in the region of the high values of the modulus of the
scattering vector, introducing mathematical artifacts that are difficult to discern subsequently.
An aim of the invention is notably to bring together into a single procedure all the calculations making it possible to obtain the radial distribution function on the basis of an experimental
spectrum obtained notably by the WAXS method.
For this purpose, the subject of the invention is a method such as described by the claims.
Other characteristics and advantages of the invention will become apparent with the aid of the description which follows offered in relation to appended drawings which represent:
[0017] FIG. 1
, an illustration of the principle of measuring a scattering spectrum by X-rays, used by the method according to the invention;
[0018] FIG. 2
, an illustration of the length of penetration of an incident ray inside an amorphous material specimen before reflection;
[0019] FIG. 3
, an exemplary scattering spectrum obtained on the basis of experimental measurements of intensities of reflected photons;
[0020] FIG. 4
, another spectral representation by a curve representing the variation of a quantity Q.i, the product of the modulus of the scattering vector and of the reduced intensity, as a function of Q;
[0021] FIG. 5
, an exemplary distribution of radial atomic distribution function.
[0022] FIG. 1
illustrates the X-ray scattering principle used by the method according to the invention. An incident beam of X-photons 1 emitted by a source 11 toward a glass specimen 10 is backscattered, or
reflected, by the latter.
The glass specimen 10 is placed on a diffractometer 3. The presence or otherwise of a rear monochromator may be taken into account in the configuration of the diffractometer.
The incident X-ray 1 is reflected by the glass.
FIG. 1
depicts a ray 2 reflected by the specimen 10. A detector 12 is placed in the direction of propagation of the reflected ray 2. This detector 12 makes it possible notably to measure the intensity of
the reflected photons.
In a method of the WAXS type, the angle of incidence of the emitted X-ray 1 is made to vary within a significant angular region, giving rise to the variation of the scattering angle θ within a
significant angular region. The intensity of the reflected photons then varies as a function of this scattering angle θ.
The scattering intensity does not change with direction, it is isotropic, and depends only on the modulus of the scattering wave vector
= 4 π sin θ λ , ##EQU00001##
λ being the length of the emitted wave 1.
In practice, the incident ray 11 passes through a certain thickness of glass before being scattered in the glass.
[0028] FIG. 2
illustrates this scattering phenomenon. This figure indeed shows that the incident ray 1 traverses a length/before being scattered, and notably before generating a reflected ray such as the ray 2
illustrated in
FIG. 1
, making an angle 2θ with the direction of the incident ray. Indeed, in the case of scattering by reflection, the beam of X-photons passes through a certain thickness of material in the glass
specimen 10 before and after the scattering phenomena.
[0029] FIG. 3
illustrates by a first curve 31 the shape of the experimental intensity I
, of reflected photons, measured by the detector 12 as a function of the scattering angle 2θ. This curve 31 is obtained on the basis of experimental measurement points 30.
As indicated previously, this representation of the experimental spectrum 31 is the resultant of the elastic and inelastic scattering phenomena, dependent for small scattering angles θ and quasi
independent for large scattering angles θ. It is therefore necessary to extract just the fraction of dependent coherent signal by correcting this experimental spectrum for the various phenomena which
alter it. The method according to the invention makes it possible to obtain the structure factor or the radial distribution function of the specimen 10 on the basis of this spectrum in a simplified
process, circumventing to the maximum a subjective intervention of a user in the establishing of the various quantities calculated.
A second curve 32 illustrates a simplified spectrum obtained by analytical calculation, corresponding to the corrected spectrum. To make the two spectra coincide, it is therefore necessary to correct
the experimental spectrum for the phenomena of absorption, polarization and effect of residual gases present around the glass specimen 10.
An intensity I
of photons is absorbed in the glass, this amount of absorbed photons is given by the following relation:
I a
= I incident [ 1 - exp ( - μ ρ2 l sin θ ) ] ( 1 ) ##EQU00002##
where I
is the intensity of photons of the incident beam, l is the aforementioned length of penetration into the specimen before the first scattering, 2θ is the scattering angle between the incident ray 1
and the reflected ray 2, μ is the mass absorption coefficient and ρ the density.
The beam from the source 11 is in general unpolarized. On the other hand, as soon as it is scattered by the glass specimen 10, part of the radiation is polarized at an angle 2θ. The presence of the
crystal of a monochromator in the diffractometer gives rise to the repetition of this phenomenon with an angle 2θ
, the normalized total intensity I
of the reflected beam may be written according to the following relation:
I N
= I 0 1 + cos 2 2 θ cos 2 2 θ m 1 + cos 2 2 θ m ( 2 ) ##EQU00003##
where I[N]
is the polarization factor P, I
corresponding to the intensity of the incident beam.
The experimental intensity, measured by the detector 12, corrected for the absorption and polarization phenomena described above, as well as for the effects of the residual gas can be written as the
sum of a dependent interfering contribution, of an independent coherent contribution and of an independent incoherent contribution, i.e.:
where I[ci]
, I
, I
represent respectively the independent coherent intensity, the dependent interfering coherent intensity, and the independent incoherent intensity, i.e. ultimately:
I measured corrected
= I ci ( 1 + I cd I ci ) + I ii ( 3 ) ' ##EQU00004##
The correction related to the presence of a residual gas can be achieved in a simple manner by subtracting the spectrum related to this gas.
The atomic scattering coefficients required for estimating the independent coherent intensity I
may be selected automatically on the basis of a known table, the Cromer-Mann table, or on the basis of another known table, the Klug table. In the case of using the Cromer-Mann table, two
possibilities exist:
the coefficients may be derived from the original Cromer-Mann reference, but in this case a condition is applied, this condition being that the amount Q is less than 18.9;
the coefficients arise from a "Lazy-Pulverix" numerical calculation by J.Quintana, with the condition in this case that Q is less than 25. In all cases, the coefficients a
and c of the atomic scattering f
as a function of the angle θ satisfy the following relation:
0 ( sin θ λ ) = c + i = 1 4 a i exp [ - b i ( sin θ λ ) 2 ] ( 4 ) ##EQU00005##
As regards the independent incoherent intensities I
, they may be selected automatically on the basis of a known table, the Balyusi table or tabulated manually on the basis of data from Smith, Thakkar and Chapman (see V. H. Smith Jr, A. Thakkar & D.
C. Chapman, Acta Cryst. A31, 1975). The Smith, Thakkar and Chapman expression exhibits correct asymptotic behavior for small and large values of Q, this being expressed according to the following
I ii I e
= N [ 1 - 1 + a S 2 + b S 4 1 + c S 2 + d S 4 ] ( 5 ) ##EQU00006##
where S
= sin θ λ , ##EQU00007##
and I[ii]
, represent respectively the inelastic independent incoherent intensity and the elastic intensity, N being the number of electrons for a neutral atom.
The invention uses another amount in combination with the quantity Q, this amount is the reduced intensity i defined as the ratio of the interfering coherent intensity to the independent coherent
I cd I ci
, ##EQU00008##
this reduced intensity may be given by the following relation
, arising from relation (3)' above:
= I measured corrected - ( I ii + I ci ) I ci ( 6 ) ##EQU00009##
The reduced intensity i can therefore be obtained on the basis of the intensity I
corrected itself obtained on the basis of the experimental intensity I
corrected for the effects of absorption, polarization and residual gas notably, and the intensities I
, independent incoherent, and I
, independent coherent, obtained for example by means of tables. These intensities are a function of
= 4 π sin θ λ , ##EQU00010##
it follows that the reduced intensity i is itself a function of Q
If S
denotes the static structure factor, the reduced intensity i can be identified with the latter according to the relation i(Q)=S
(Q)-1. S
is a quantity obtained on the basis of the experimental measurement of X-diffraction on the glass studied, and which contains information about its structure.
The intensity obtained after applying the above corrections must be expressed in electron units eV (electron-volts) so as to yield the radial distribution functions. A commonly used technique
consists in making the part of the experimental spectrum 31 for large values of the modulus of the scattering wave vector Q coincide with the intensity describing the scattering phenomena for the
scatterer centers M
considered to be mutually independent.
The expressions for the two independent contributions, elastic and inelastic (or Compton), are given respectively by the following relations:
I ind elastic
= M j c M j f M j 2 ( 7 ) I ind inelastic = M j c M j I j ( 8 ) ##EQU00011##
The coefficients f
represent the atomic scattering coefficients, the coefficients I
represent the elementary intensities and the coefficients c
represent the atomic elementary fractions.
A normalization constant α refers the experimental intensity I
to an electronic quantity I
expressed in eV (electron-volts), i.e. I
. This normalization constant is based on the method of Krogh-Moe, see J. Krogh-Moe, Acta Cryst. 9, 951 (1956) and N. Norman, Acta Cryst. 10, 370 (1957). It is obtained by integrating the spectrum 31
over the whole of the experimentally available Q region, for example between 0 and 17 Å
in the case of
FIG. 4
α = ∫ 0 ∞ Q 2 I exp Q - 2 π 2 ρ 0 ( j Zj ) 2 ∫ 0 ∞ Q 2 ( I ind elastic + I ind inelastic ) Q ( 9 ) ##EQU00012##
The mean atomic density ρ
corresponds to the inverse of the volume of the atoms present in a composition unit. It satisfies the following relation:
ρ 0 = N d A 10 24 ( 10 ) ##EQU00013##
where N is Avogadro
's number, d the density of the matrix of atoms and A the atomic mass, Zj corresponding to the atomic number of an atom j.
[0049] FIG. 4
illustrates by a curve 41 another spectral representation. This curve 41 represents the variation of the quantity Q.i as a function of Q, i being itself a function of Q.
The curve Q.i (Q) 41, may be obtained on the basis of experimental measurements, arising for example from the example of
FIG. 3
, the reduced intensity i being determined by the corrections described previously. The quantity Q.i is for example calculated for a region of Q varying between 0 and 17 Å
Curve 41 is therefore determinable on the basis of corrected and normalized experimental measurements.
The corrective quantities applied to obtain the reduced intensity are determined by calculation or by means of tables, in all cases by means of known parameters except as regards the absorption which
depends on the length of penetration l of the ray into the specimen before scattering. The reduced intensity i is therefore not known if this length l is not determined. It is therefore necessary to
solve/to obtain the values of i and consequently the values of Q.i(Q).
The invention advantageously uses a characteristic of the curve 41, namely that a linear regression over its values is a straight line 42 with zero slope as soon as Q is fairly large, for example as
soon as Q>10 Å
. According to the invention, the penetration length l is solved for this slope p(Q.i)=0.
For this purpose, the normalization constant α is therefore made to vary in a recursive manner so as to minimize the slope of the affine straight line obtained by linear regression over the values of
the reduced intensity i(Q) multiplied by the modulus of the scattering vector Q. During each iteration, the values of the reduced intensity are calculated as a function of Q for a length l. The
values of reduced intensity which are adopted for this linear regression correspond to the successive measurements of experimental intensity each of which is a function of a scattering angle 2θ, and
therefore of a given value of Q.
When the slope p(Q.i) is minimized, p(Q.i)=0 for example, the value l obtained is the value sought and the values Q.i (Q) determined correspond to the values sought. A discretized function Q.i (Q) is
thus obtained. It may be extrapolated as a continuous function.
[0056] FIG. 5
illustrates by a curve 51 an exemplary radial distribution, on the basis of which the form factor of an amorphous structure may be deduced in a known way, notably the form factor of the glass
specimen 10.
The function Q.i(Q) obtained according to the invention yields the radial distribution function ρ(r) defined by the following relation:
[ ρ ( r ) - ρ ( 0 ) ] = 1 2 π 2 ∫ a ∞ Qi ( Q ) sin Qr Q ( 11 ) ##EQU00014##
The radius r is the distance from a given atom, the scattering center. ρ(r) is the atomic concentration in a spherical shell of radius r and of thickness dr. Relation (11) gives the radial
distribution of this concentration.
The inverse Fourier transform yields the radial distribution function in a known manner. The inverse Fourier transform is for example calculated only at predetermined points which make it possible to
minimize the fluctuations related to the truncation of the spectrum, beyond the maximum value of Q, for example Q
=17 Å
, in accordance with the method of R. Lovell et al described in R. Lovell, G. R. Mitchell & A. H. Windle, Acta Cryst. A35, 598-603 (1979). The best precision is obtained for a maximum value of Q
corresponding to an extremum of Q.i. The last extremum is for example detected automatically on the basis of the position of the last zero of the function Q.i. The fineness of separation of very
close peaks 52, 53 in the radial distribution function will depend on the position of this extremum. The Fourier integral is then calculated on the basis of a continuous function whose expression
corresponds to the Fourier series expansion of Q.i.
This method does not make it possible to totally eliminate the spurious fluctuations for the very short interatomic distances. A criterion is then imposed which renders the distribution function
linear below a threshold value r
that can be determined arbitrarily or experimentally.
Patent applications by COMMISSARIAT A L'ENERGIE ATOMIQUE ET AUX ENERGIES ALTERNATIVES
Patent applications in class Scatter analysis
Patent applications in all subclasses Scatter analysis
User Contributions:
Comment about this patent or add new information about this topic:
|
{"url":"http://www.faqs.org/patents/app/20110286577","timestamp":"2014-04-17T08:47:07Z","content_type":null,"content_length":"54764","record_id":"<urn:uuid:f1622f9a-3125-449a-bf35-ead206c91525>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00505-ip-10-147-4-33.ec2.internal.warc.gz"}
|
• RackAndPinion2[cnum, point, rad, axis, C] models a rack and pinion gear set. The center of the pinion (point) is constrained to lie rad units to the left of the rack (axis) and the angular
orientation of the rack and pinion are related as per the pinion radius.
• The constant C sets the initial orientation of the rack and pinion.
• RackAndPinion2 constrains two degrees of freedom.
• The constant C is the distance, in the direction of axis, from the origin of axis to point when the x axis of the pinion body is parallel to the rack.
• If the Euler solution method is specified, RackAndPinion2 generates three constraints, and adds one extra variable to the model. RackAndPinion2[cnum, point, rad, {alpha, guess}, axis, C] can be
used to explicitly specify the name of the extra variable alpha, and its initial guess. Otherwise, a symbol of the form cnum is used.
• RackAndPinion2[cnum, point, rad, {sym1, guess}, axis, C] can be used to explicitly specify the name of the extra variable and its initial guess. Otherwise, a symbol of the form cnum is used.
• The first equation in RackAndPinion2 constrains point to lie rad units to the left of axis. The second, and optional third, equations relate the axial displacement of the rack to the rotation of
the pinion.
• See also: RackAndPinion1, SetConstraints, SetSymbols, SysCon, TwoGears2, TwoPulleys2.
|
{"url":"http://reference.wolfram.com/applications/mechsystems/FunctionIndex/RackAndPinion2.html","timestamp":"2014-04-18T19:18:29Z","content_type":null,"content_length":"31924","record_id":"<urn:uuid:8cfe60c9-4e79-4fe9-ad4f-23ac1675203c>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00165-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Revision:Ocr core 1 - investigating the shapes of graphs - The Student Room
• Revision:Ocr core 1 - investigating the shapes of graphs
TSR Wiki > Study Help > Subjects and Revision > Revision Notes > Mathematics > OCR Core 1 - Investigating the shapes of graphs
11. Investigating the shapes of graphs
It is a useful skill to be able to draw an accurate (representative) sketch graph of a function of a variable. It can aid in the understanding of a topic, and moreover, it can aid those who might
find the mental envisaging of some of the more complex functions very difficult.
Stationary points
Often one refers to the "vertex" of a quadratic function, but what is this?
The vertex is the point where the graph changes the direction, but with the new skills of differentiation this can be generalised (rather helpfully):
A stationary point is a point on a graph of:
such that:
This is simple to explain in words. One is basically finding all values of "x" (and hence the coordinates of the corresponding points, if it is required) of the places on the graph where the gradient
is equal to zero.
1. Calculate the coordinates of the vertex of the graph: .
First one must calculate the derivative, such that one is able to calculate the value of "x" for which this is zero, and hence the gradient is zero.
Hence, one now uses the original function to obtain the "y" value, and thence the coordinate of the point:
Hence there is a stationary point, or vertex at (-1, 2).
One can check this using the rules about the transformation of graphs, along with the completion of the square technique.
Maximum and minimum points
It is evident that there are different types of these stationary points. One can envisage simply that there are those point whose gradient is positive, and then becomes zero, after which they are
negative (maxima), and those points whose gradient change is from negative to zero to positive (minima)
One could perform an analysis upon these points to check whether they are maxima, or minima.
1. For the stationary point calculated in the previous example, deduce whether it is a point of local maximum, or local minimum.
One obtained the point (-1, 2) on the graph of:
One can therefore take an "x" value either side of this stationary point, and calculate the gradient.
Hence, one can now take an "x" value of , giving a gradient:
This is evidently negative.
Now take the value of "x" as , giving a gradient:
This is evidently positive.
Hence the gradient has gone from negative, to zero, to positive; and therefore the stationary point is a local minimum.
It is important that one understands that these "minima" and "maxima" are with reference to the local domain. This means that one can have several points of local minimum, or several points of local
maximum on the same graph (the maximum is not the single point whose value of "y" is greatest, and the minimum is not the single point whose value of "y" is least).
An application to roots of equations
It is evident, and has been shown previously, that one can obtain the roots of an equation through the analysis, and calculation of the points of intersection of two functions (when graphed). It is
evident why this is true; for example:
It is therefore simple to deduce that:
This is only true while the value of is equivalent to "y", and hence one might conclude that the intersections of the lines of:
Are the real roots to the equation.
This is correct, and is useful knowledge when conjoined with the knowledge of stationary points, and basic sketching skills.
Consider that one wishes to calculate the roots of the equation:
These roots (if they are real) are graphically described as the intersections of the lines:
Hence one would plot both graphs, and calculate the points of intersection.
However, it is often the case that one will merely want to know how many real roots there are to an equation, and hence the work on sketch graphs is relevant.
One does not need to know the accurate roots, merely the number of them, and hence it is useful to learn how to plot a good sketch graph.
First one would calculate the stationary points of one of the functions, and then one could deduce their type. This could then be sketched onto a pair of axes. Repetition of this with the second
function would lead to a clear idea of where the intersections may (or may not be), and therefore one can not only give the number of real roots to the equation, but also approximations as to the
answer (these are usually given as inequalities relating to the positioning of the x-coordinate of the intersection with those of the stationary points).
Second derivatives
There is a much better, and usually more powerful technique for calculating the type of stationary point one is dealing with than the method described earlier.
If one is to think of the derivative of a function to be the "rate of change" of the function, the second derivative is the "rate of change, of the rate of change" of a function.
This is a difficult sounding phrase, but it is a rather easy concept. In some functions, one will have noticed that the derivative involves a value of "x", and hence there is a change to the
gradient. A straight line has a constant value as the derivative, and hence it has a constant gradient. A constant gradient is not changing, there is no "rate of change of the gradient, other than
A derivative that has a term of a variable within it will have a second derivative other than zero. This is because at any given value of "x" on the curve, there is a different gradient, and hence
one can calculate how this changes.
1. What is the second derivative of the function of "x":
This is simple; being by calculating the first derivative:
Now, one would like to know what the rate of change, of this gradient is, hence one can calculate the derivative of the derivative (the second derivative):
One should be aware that the second derivative is notated in two ways:
The former is pronounced "f two-dashed x", and the latter, "d two y, d x squared".
The application of this to minima and maxima is useful.
If a graph is bending "upwards" (like a quadratic function whose coefficient of is positive) has a positive second derivative, and will have a local minimum.
If a graph is bending "downwards" (like a quadratic function whose coefficient of is negative) has a negative second derivative, and will have a local maximum.
In many cases, this is a much more powerful tool than the original testing with values above and below the stationary point.
1. Demonstrate why (through the use of second derivative) the stationary point calculated in the example earlier (in this section) produced a local minimum.
First one can assert the function:
Now, the derivative:
Finally, the second derivative:
Hence the point is a local minimum, and moreover, the graph bends upwards, and does not have a local maximum.
Graphs of other functions
Functions other than the simple polynomial one that has been considered can use the same method.
One is already aware that these graphs of fractional, or negative indices of "x" are differentiable using the same rule for differentiating powers of "x" as the positive, integer powers use.
One does have to be slightly more careful however, as there are some points on these graphs that are undefined (the square root of any negative value, for instance, is not defined in the set of real
One should simply apply the same principles.
1. Calculate the coordinates, and types of any stationary points on the curve of .
First one must find the derivative (it might be a good idea to find the second derivative at the same time, so as to do all of the calculus first):
(Note, in this example one might wish to write down the original expression of "y" as a positive, and negative power of "x", it will aid, one would imagine, the understanding of the situation).
Now one can find the stationary points:
Hence, there are stationary points at (-1, -2), and (1, 2).
Now one can identify them, in turn:
Hence there is a point of local maximum at (-1, -2), and a point of local minimum at (1, 2).
One thing that one should be aware of is that sometimes one will encounter a change in the gradient of a curve that is from a positive to a positive, or from a negative to a negative, this is a point
of inflexion. Although this is not strictly in the syllabus, it is useful to know, and can help to explain the stationary point that is found in graphs such as .
Also See
Read these other OCR Core 1 notes:
1. Investigating the shapes of graphs
|
{"url":"http://www.thestudentroom.co.uk/wiki/revision:ocr_core_1_-_investigating_the_shapes_of_graphs","timestamp":"2014-04-16T17:58:17Z","content_type":null,"content_length":"171487","record_id":"<urn:uuid:18f78ef0-0d2d-40b6-bd47-ffc41cecd737>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00248-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Tutors
Youngtown, AZ 85363
One Stop Math Tutor: Your Solution To All Your Mathematical Problems
Hello to all interested in learning and understanding not only how to excel in
classes, but how to also understand why it works. First of all,
is my life. It's my one true love and gives me meaning in this world. I have so much passion for
Offering 10+ subjects including algebra 1, algebra 2 and calculus
|
{"url":"http://www.wyzant.com/geo_Goodyear_Math_tutors.aspx?d=20&pagesize=5&pagenum=2","timestamp":"2014-04-16T16:52:09Z","content_type":null,"content_length":"60898","record_id":"<urn:uuid:4243b345-820b-43c5-86ba-a0fe12902b3f>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00540-ip-10-147-4-33.ec2.internal.warc.gz"}
|
1.8 Leibniz and the Stepped Reckoner
The third great calculator inventor of the seventeenth century was Gottfried Wilhelm von Leibniz. The range and richness of his intellect was nothing less than phenomenal. Leibniz was a master of
almost a dozen disciplines: logic, mathematics, mechanics, geology, law, theology, philosophy, history, genealogy, and linguistics. His greatest achievement was the invention of differential
calculus, which he created about twenty years later than Newton but in a much more practical form. Indeed, the stubborn refusal of English mathematicians to adopt Leibniz’s notation retarded the
development of mathematics in England for more than a hundred years. Leibniz was driven by a monumental obsession to create, to build, to analyze, to systematize – and to outdo the French. A
bibliography of his writings would go on for pages; many of his manuscripts have still not been published and his letters may be measured by the pound.
Born in 1646, two years before the end of the Thirty Years’ War, Leibniz was the son of a notary (a minor judge) and professor of moral philosophy at the University of Leipzig. His father died when
he was six and he was raised by his mother, a pious Lutheran who passed away when he was eighteen. Like Pascal, he was a prodigy, and his mother gave him the run of his dead father’s library – not an
easy decision in those days, when children were brought up on a very tight leash and their reading restricted to approved books, lest their minds be contaminated by impure thoughts (of which Leibniz
undoubtedly had many). He had a natural aptitude for languages and taught himself Latin when he was eight and Greek a few years later. At thirteen, he discovered one of his lifelong passions, the
study of logic. He was, as he later wrote, “greatly excited by the division and order of thoughts which I perceived therein. I took the greatest pleasure in the predicaments which came before me as a
muster-roll of all the things in the world, and I turned to ‘Logics’ of all sorts to find the best and most detailed form of this list.”
He entered the University of Leipzig when he was fifteen, majoring in law. He was by nature a weaver of grand systems, and in 1666 he wrote a treatise, De Arte Combinatoria (On the Art of Combination
) offering a system for reducing all reasoning to an ordered combination of elements, such as numbers, sounds, or colors. That treatise is considered one of the theoretical ancestors of modern logic,
a primitive form of the logical rules that govern the internal operation of computers. That same year, all his requirements for the doctorate in law having been completed, Leibniz proudly presented
himself for the degree. He was only nineteen, and the elders in charge of the gates of the bar turned him down on account of his age. Furious, he went to the University of Altdorf, in Nurnberg, where
his dissertation (De Casibus Perplexis, or On Perplexing Cases) immediately won him a doctorate and an offer of a professorship.
However, Leibniz disliked the stuffiness and pettiness of academia and sought a diplomatic career. One of the most important diplomats of the time, Johann Christian von Boyneburg, took him under his
wing and secured a post for him at the court of the archbishop of Mainz, the Prince Elector Johann Philipp von Schonborn. (The electors chose the Holy Roman Emperor, who ruled over the states
encompassing Germany and most of Central Europe.) Leibniz was put to work codifying and revising the laws of Nurnberg – hardly a reforming effort, since the many codifications of the period were
designed to solidify the power of the ruling classes. For the rest of his life, the broad-shouldered, bandylegged Leibniz served in one or another capacity as an official in the courts of the German
princes, a genius in the service of mediocrities.
France was the greatest power in seventeenth-century Europe, and the Holy Roman Empire feared that she would invade Holland and, possibly, Germany. Hoping to distract Louis XIV, Leibniz and the
archbishop’s advisors tried to interest him in a military campaign in the Mideast. In terms full of religious emotionalism, they recommended that France launch a holy crusade against Egypt and
Turkey. In 1672, the archbishop dispatched Leibniz on a solitary mission to Paris to discuss the plan with the king. Not surprisingly, the trip was an utter failure; Louis XIV didn’t even bother to
acknowledge the young German’s arrival, let alone grant him an audience. But Paris proved to be a muse of the highest order, and it was there, between 1672 and 1674, that Leibniz built his first
calculator (or, rather, had a craftsman build it for him).
He explained the genesis of the Stepped Reckoner, as he called his invention, in a note written in 1685:
When, several years ago, I saw for the first time an instrument which, when carried, automatically records the numbers of steps taken by a pedestrian [he's referring to a pedometer, of course],
it occurred to me at once that the entire arithmetic could be subjected to a similar kind of machinery so that not only counting but also addition and subtraction, multiplication and division
could be accomplished by a suitably arranged machine easily, promptly, and with sure results. The calculating box of Pascal was not known to me at that time. I believe it has not gained
sufficient publicity. When I noticed, however, the mere name of a calculating machine in the preface of his”postumous thoughts” [the Pensees]… I immediately inquired about it in a letter to a
Parisian friend. When I learned from him that such a machine exists I requested the most distinguished Carcavius by letter to give me an explanation of the work which it is capable of performing.
He replied that addition and subtraction are accomplished by it directly, the other [operations] in a round-about way by repeating additions and subtractions and performing still another
calculation. I wrote back that I venture to promise something more, namely, that multiplication could be performed by the machine as well as addition, and with greatest speed and accuracy.
Conceptually, the Stepped Reckoner was a remarkable machine whose operating principles eventually led to the development of the first successful mechanical calculator. The key to the device was a
special gear, devised by Leibniz and now known as the Leibniz wheel, that acted as a mechanical multiplier. The gear was really a metal cylinder with nine horizontal rows of teeth; the first row ran
one-tenth the length of the cylinder, the second two tenths, the third three-tenths, and so on until the nine-tenths length of the ninth row. The Reckoner had eight of these stepped wheels, all
linked to a central shaft, and a single turn of the shaft rotated all the cylinders, which in turn rotated the wheels that displayed the answers.
Say you wanted to multiply 1,984 by 5. First, you entered the multiplicand (1,984) through the numbered dials, or pointers, on the top face of the machine. Then you put a metal peg in the fifth hole
of the large dial on the far right; the peg served as a built-in reminder that the multiplier was 5 and prevented you from entering a larger figure. You next took hold of the wooden handle on the big
dial on the front – this was the multiplier dial, which was linked to the central shaft – and turned it once. The answer appeared in the little windows behind the numbered pointers. If the multiplier
contained more than one digit – say, 555 – you had to shift the Reckoner’s movable carriage one place to the left for every decimal place, and turn the multiplier handle once for every digit. (Along
with the stepped cylinder, the movable carriage ended up in many other calculators, not to mention the typewriter.)
Although the Reckoner could process fairly large numbers – multipliers of four or five digits, multiplicands of up to eleven or twelve digits – it wasn’t fully automatic, and you had to fiddle with a
row of pentagonal widgets at the back of the machine to help it carry and borrow digits. Nevertheless, it was far more sophisticated than the Calculating Clock or the Pascaline, capable of all four
arithmetic operations and much closer to what we would consider to be a calculator. But the Reckoner suffered from one great drawback, much more serious than its inability to carry or borrow numbers
automatically – it didn’t work. Leibniz’s ambition outran his engineering skill, and the only surviving version of the calculator, on display at a museum in Hannover, West Germany, is an inoperative
In 1764, forty-eight years after Leibniz’s death, a Reckoner was turned over to a clockmaker in Gottingen for overhauling. The job wasn’t done, and Leibniz’s pride and joy wound up in the attic of
the University of G6ttingen, where a leaky roof led to its rediscovery in 1879. Fourteen years later, the university gave the machine to the Arthur Burkhardt Company, the country’s leading calculator
manufacturer, for repair and analysis. Burkhardt reported that, while the gadget worked in general, it failed to carry tens when the multiplier was a two- or three-digit number. The carrying
mechanism had been improperly designed. It’s unknown whether Leibniz, who worked on the Reckoner off and on for twenty years, built more than one calculator – one that was flawed and one (or more)
that worked. In all likelihood, given the high costs of fashioning a device as complicated as the Reckoner, Leibniz made only one and never managed to perfect it.
Back Continue
|
{"url":"http://ds.haverford.edu/bitbybit/bit-by-bit-contents/chapter-one/1-8-leibniz-and-the-stepped-reckoner/","timestamp":"2014-04-19T22:40:21Z","content_type":null,"content_length":"63264","record_id":"<urn:uuid:57b88488-9e7b-4c78-8a35-0927df7fde4b>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00605-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Braingle: 'Pizza Discount' Brain Teaser
Pizza Discount
Math brain teasers require computations to solve.
The 10" pizza sells for $3.99 at my favorite pizza store. The store claims they have a great deal on the large 12" pizza, which is specially priced at $ 4.31. What is the per cent discount the
store is offering?
The 10" pizza sells for $3.99 at my favorite pizza store. The store claims they have a great deal on the large 12" pizza, which is specially priced at $ 4.31. What is the per cent discount the store
is offering?
|
{"url":"http://www.braingle.com/brainteasers/46899/pizza-discount.html","timestamp":"2014-04-19T07:08:53Z","content_type":null,"content_length":"23042","record_id":"<urn:uuid:b6f95bd2-2511-4345-9b0e-142f3b2aab4d>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00602-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
"The more intense the beam, the more tightly packed the photons. Even so, each photon in the more intense beam has exactly the same amount of energy as those in the less intense beam." What does tis
really mean?? Its making me more confuse.. =.="
Best Response
You've already chosen the best response.
it is saying each photon has a certain amount of energy, call it E. If we want the beam to have more energy, then it has more photons in it per cubic mm, or whatever unit of volume you want to
Best Response
You've already chosen the best response.
See, In quantum physics energy is " Quantized ", i.e the least amount of energy we can have is that of a photon. Intensity is dependent on the no. of photons and how tightly those photons are
Best Response
You've already chosen the best response.
Let us consider Intensity to be proportional to energy transfered by the beam to the particle it falls on. Energy @ Intensity I(w) * area And, Intensity is inversly proportional to area as well,
So, Energy transferred only depends on no. of photons.
Best Response
You've already chosen the best response.
Thanks a LOT!! ^^
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/4f5f2093e4b0602be4395e78","timestamp":"2014-04-17T01:44:36Z","content_type":null,"content_length":"35461","record_id":"<urn:uuid:0068299c-6194-419d-a74e-045bf58b9215>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00011-ip-10-147-4-33.ec2.internal.warc.gz"}
|
New Curriculum
New Curriculum
In 1931, the college hired consultants to review the curriculum and make appropriate recommendations. The result was a new plan that was implemented in 1933 (compare to the "new design" shown in the
image at the upper right from the 1931 Pikes Peak Nugget - a student publication). In the first two years, students took courses in the School of Arts and Sciences. Then they had to be admitted to
one of three divisions: the School of Letters and Fine Arts, the School of Social Sciences, and the School of Natural Sciences. Mathematics belonged to the third division and the curriculum was
divided between the School of Arts and Sciences and the School of Natural Science. The 1933 catalog presented the mathematics curriculum in this way:
The School of Arts and Sciences
• 101 - Introductory College Algebra: Algebraic operations, linear equations in one unknown, factoring, fractions, systems of linear equations, exponents and radicals, quadratic equations,
equations involving radicals, binomial theorem. - Albright (Prerequisite: an introductory course in high school algebra.)
• 103 - College Algebra: Graphs, linear equations, exponents, logarithms, quadratic equations, simultaneous quadratics, variation, binomial theorem, progressions, permutations, combinations, theory
of equations, determinants. - Lyons (Prerequisite: one and one-half units of high school algebra or consent of the instructor.)
• 109 - Solid Geometry: Planes and lines in space, polyhedra, the cylinder, cone and sphere, spherical triangles. - Lyons (Prerequisite: one unit of high school plane geometry.)
• 112 - Mathematical Theory of Investments: Logarithms, simple and compound interest, annuities, amortization, valuation of bonds, sinking funds, depreciation. - Albright (Prerequisite: Mathematics
101, or 103, or one and one-half units of high school algebra.)
• 114 - Elemetary Statistical Methods: Sources, sampling, selection of units, time series, types of frequency distributions, graphs and their interpretation, averages, measures of dispersion and
skewness, correlation, index numbers, trend, use of computing machines. - Lyons (Prerequisite: Mathematics 101, or 103, or one and one-half units of high school algebra.)
• 121 - Trigonometry: Functions of one and two angles; inverse functions, logarithms, solution of triangles, applications. - Lyons, Sisam (Prerequisite: one and one-half units of high school
algebra and one of geometry.)
• 122 - Analytic Geometry: Plane loci of the first and second orders, higher plane curves, solid analytic geometry. - Sisam (Prerequisite: Mathematics 103, or consent of instructor.)
• 203/204 - Differential and Integral Calculus: The theory and technique of differentiation and integration, applications. - Lovitt (Prerequisite: Mathematics 122, or registration therein, and
sophomore standing.)
• 205 - Advanced Statistical Methods: Multiple and partial correlation, business cycles, long time trend, seasonal fluctuations, price movements with special reference to stocks, lag, economic and
social ratios, distributions of wealth and income, Pareto's law, finite differences, interpolation, moments, curve fitting, Lexis, series, Poisson exponential. - Lovitt (Prerequisite: Mathematics
The School of Natural Sciences
• 301/302 - Mechanics: Concurrent and non-concurrent forces, centers of gravity, moments of inertia, flexible cords, motion of a particle, work and energy, friction, impact, dynamics of rigid
bodies, applications to physics and engineering. - Albright (Prerequisite: Mathematics 203 and 204.)
• 303 - Theory of Equations: Solution of cubic and quartic equations, properties of an algebraic equation in one unknown, determinants, linear equations, resultants, and discriminants. - Sisam
(Prerequisite: Mathematics 203 and 204.)
• 305/306 - Differential Equations: Methods of the solution of ordinary and partial differential equations, applications. - Sisam (Prerequisite: Mathematics 203 and 204.)
• 308 - Solid Analytic Geometry: Equations of the plane and right line in space, quadric survaces, special surfaces of higher order. - Lovitt (Prerequisite: Mathematics 203 and 204 or consent of
• 310 - Projective Geometry: The projective properties of primitive forms of the first and second orders. - Sisam
• 311 - Vector Analysis: Vector symbolism, computation by means of vectors, applications to geometry and mechanics. - Sisam (Prerequisite: Mathematics 203 and 204.)
• 315/316 - Advanced Calculus: Partial differentiation, multiple integrals, Taylor's theorem, elliptic integrals, line integrals, Fourier's series, calculus of variations, applications. - Sisam
(Prerequisite: Mathematics 203 and 204.)
• 401 - The Teaching of Mathematics: The history of mathematics and the aims and methods of teaching mathematics in the secondary schools. - Sisam (Prerequisite: Mathematics 101 and senior
• 402 - Readings in Mathematics: Readings, discussions, and reports on selected topics in college mathematics. (Prerequisite: Senior standing and concentration in mathematics.)
• 409/410 - Functions of a Complex Variable: Fundamental properties of functions of a complex variable, linear transformations, infinite series, analytic continuation, Riemann surfaces, multiple
periodic functions. - Sisam
For further notes on the development of the mathematics curriculum,
see Evolution of the Mathematics Curriculum.
Back to the Time Line
|
{"url":"http://www2.coloradocollege.edu/dept/MA/history/Topics/NewCurr.html","timestamp":"2014-04-21T07:06:37Z","content_type":null,"content_length":"6811","record_id":"<urn:uuid:b878e2fa-0da9-40be-9722-2044b084e6b7>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00323-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Statistics for HCI Research
Koji Yatani (http://yatani.jp)
Disclaimer (Please read this first!)
This wiki was initially started as my personal note of statistical methods commonly used in HCI research, but I decided to make it public and put more content in it because I think this may be useful
for some of you (particularly if you use R). I will also put some codes for
, so you can quickly apply the methods to your data. This wiki does not emphasize mathematical aspects of statistics much, and rather tries to provide some intuitions of them. Thus, if you know
maths, you may be unhappy about this wiki, but this is the way this wiki exists.
Keep in mind that
I am not an expert of statistics
. The contents provided here is basically what I learned from my experience of HCI research and by reading different online/offline materials. I always double-check the content before posting, but it
still may be not 100% accurate or even wrong. So,
use the contents on this website at your discretion
. I own no responsibility on any kind of consequences, such as you have done a wrong analysis after reading my wiki or your papers do not get into a conference or a journal, or your adviser doesn't
like your analysis.
I also strongly recommend you to get the second opinion on your analysis from other kinds of resource before you really run a test. If you have found any factual errors, please leave your comment.
Your comments would be greatly appreciated. You can leave your comments anonymously or with your name (just leave your name at the bottom of your comment). I don't care about the anonymity or
non-anonymity (I don't consider you as a coward even if you don't put your name with your comment), but you might be formally acknowledged in this wiki at some point if you leave your name. I will
update the content at some point (depending on how serious the problem is and how busy I am...)
In this website, I use
to show some examples of how you can run statistical tests. I assume you can install R in your machine, and you know some basics of how to write a code in R, install packages, etc. You can read
the online manual
for R. So, please avoid asking how to use R or that kind of stuff in this website. This website is not intended to be a wiki or forum for R.
What is this page about?
This wiki (it's not a wiki like Wikipedia because only I will edit the content to avoid spams and confusions, but you can leave comments) initially started as my personal notes of statistics for HCI
research, but I decided to make them public because some pieces of information here may be useful to others who are doing HCI research. This wiki doesn't really explain mathematical details of each
statistical method. Rather, this wiki's intention is to help you choose which statistical tests to use, how to use them, how to interpret the results, and how to report them. I also talk about some
experimental designs as well because they are closely related to how you analyze the data.
Another reason I decided to make this public is there isn't really a good training of statistics for HCI people and isn't a good place to collect the knowledge of statistics for HCI research. People
from psychology and bio-related fields usually have strong statistics trainings, so it seems that they have fewer problems than us. But it's not uncommon that HCI researchers have not gone through
those trainings. And thus we see papers do different statistical tests in different ways, some of which look kind of questionable. I hope this wiki will provide a better understanding of statistics
for HCI research, and help HCI researchers execute statistics more appropriately.
The third reason is that there are some criticisms on Null Hypothesis Significance Tests (NHST) raised by researchers in other fields and they are proposing to address problems that NHST have (see
more details in
the page about Null Hypothesis Significance Testing
. However, such dicussions are not visible yet in the field of HCI, and I thought it is a good time to think about what we should do. For this, I explain two approaches: effect sizes and regression
analysis. Effect sizes can complement some information that the results of an NHST cannot provide. Regression analysis can provide a deeper understanding on your data. Furthermore, these two are
(more or less) already supported in R. Currently, this wiki is relatively focusing on NHST and effect sizes, but I will be going to put more contents about regression analysis.
This wiki is using
, which offers a nice integration of features for Wiki and Blog.
Why R?
There are different kinds of statistical software, such as SPSS and SAS. In this wiki, I use
. Why R, instead of SPSS or SAS? Well, I simply like R. :) It is a great tool and it is open-source and free. And there is a huge collection of packages for various kinds of statistical methods. You
can create very cool graphs and run machine learning techniques as well as statistical tests. I am sure that you will be comfortable with using R if you have some programming experience (particularly
in Matlab, Mathematica or Python). But it is a little harder to use, and not necessarily be well-documented for HCI research. And there are some differences between SPSS and R (probably SAS and R
too). I wanted to explain how to use R appropriately in the context of HCI research.
Here is a report about some comparisons between R and other stats software, which also explains some advantages of using R.
R Relative to Statistical Packages: Comment 1 on Technical Report Number 1 (Version 1.0) Strategically using General Purpose Statistics Packages: A Look at Stata, SAS and SPSS. by Patrick Burns
But I also would like to say that you should use the software which you feel the most comfortable with using. For most of us (HCI researchers), we use statistical tests for analysis, rather develop
new statistical methods or implement them in particular software. So, if you already know how to use SPSS, for example, you don't have to switch to R. Some of the content in this wiki should be
useful for users of other software. But if you don't have a preference on statistical software and you like programming, just try R! You don't have to pay anything, which is one of the best parts of
R. :)
If you are an SPSS or JMP user, you should also check out
Jake Wobbrock's Practical Statistics for HCI
. It is very comprehensive, and lots of examples for SPSS and JMP are available. I always admire Jake's effort on improving statistical analysis in our field.
Experimental Design
Statistics is really powerful and in most of the cases, you can find some kinds of statistical tests you can do for your data. Thus, people think about statistics after they have done the experiment,
but I would not recommend to do so.
My suggestion is that you should think about what statistical tests you will use when you think about the experimental design
. Although there are many kinds of statistical tests, some tests requires you to exercise lots of cautions to perform. Thus, it is better to design an experiment so that you can just do a simple or
common statistical test.
It is also helpful to make the experiment simple
for the analysis. A simple experiment generally needs only simple statistical tests. If the experiment you are going to do doesn't look simple or is not commonly done in HCI research, make sure you
can run an appropriate statistical test after the experiment.
Another thing you should be careful about is the type of data.
Try to make your dependent variable ratio or interval
. This allows you to do a much wider variety of statistical tests than ordinal and nominal data. If you cannot make it ratio or interval, think about making it ordinal. If this is not possible
either, you have to give up and have nominal data, but make sure that you can do an appropriate analysis on them, and can test what you want to examine after the experiment. You can see more details
about the types of data
What statistical test should I use?
There is a great
to identify what kind of statistical tests you want to use. The following table is made based on that website.
Although this table works well for many cases (I think), keep in mind that you need to double-check whether there is any more appropriate statistical test for your data than what this table suggests.
This wiki is now focusing on regression analysis. In most cases, linear regression or multi-level linear regression is the first thing you should try, but here is a simple decision tree (or a
decision bullet point?) for regression analysis.
Some Statistics Basics (Or "before doing an experiment or analysis")
Methods to Complement Null Hypothesis Significance Testing
Parametric Tests
Non-parametric Tests
Latent Variable Analysis
Regression Analysis
Useful links
There is a paper which talks about statistics in HCI by Paul Cairns. This is one of the motivations why I made this wiki.
HCI... not as it should be: Inferential Statistics in HCI Research
There is a great paper talking about some dangerous aspects of NHST (Null Hypothesis Significant Test).
The Insignificance of Statistical Significance Testing by Johnson, Douglas H.
This is another paper talking about how we need to be careful to interpret the results of NHST.
The Difference Between “Significant” and “Not Significant” is not Itself Statistically Significant by Gelman, A., and Stern, H.
I haven't bought any book for R so far. I found that I can get enough information online in most cases, and the R project also offers
a nice intro pdf
. So, I don't think you really need to buy any book to use R. You can also take a look at
the page for some R tips
My favorite stats book is
SPSS survival manual
. This is written for SPSS users, but the book explains the procedure of common statistical methods very nicely, and I found it is also useful for R users (particularly if you are not familiar with
statistical methods). The style of the explanation in this wiki is kind of similar to that book (although that book explains way better).
If you are interested in regression analysis and don't know much about it, I recommend the following book.
Data Analysis Using Regression and Multilevel/Hierarchical Models by Andrew Gelman and Jennifer Hill.
I found that the book really nicely explains regression analysis, and I think it is easy to follow even if you don't have strong math background.
Some very useful links related to statistics and R.
If you have any general comments on this wiki, please feel free to leave them
|
{"url":"http://yatani.jp/HCIstats/HomePage","timestamp":"2014-04-20T23:29:25Z","content_type":null,"content_length":"26231","record_id":"<urn:uuid:82b9e983-392b-4c4f-97ec-7abd669bf2af>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00216-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Occam’s Razor and pitching statistics
What is Occam’s Razor?
According to the Occam’s Razor Wikipedia page:
-It is a principle urging one to select from among competing hypotheses that which makes the fewest assumptions.
The razor asserts that one should proceed to simpler theories until simplicity can be traded for greater explanatory power.
Occam’s Razor is an influential principle that can be applied to vast array of complex topics, such as religion, outer space, science, economics and so much more. Yet, the phrase Occam’s Razor was
the conclusion of Tom Tango’s response to my last THT article.
Why would Tango bring up Occam’s Razor in a discussion of baseball?
Well, for those who missed that article, I discussed baseball, but for the most part it was an analysis of baseball statistics; which Occam’s Razor could be applicable to.
In that article, I looked into how well ERA (earned run average) estimators were performing, this season. I accumulated a sample of 100 starting pitchers who threw at least 50 innings before July 1
and at least 45 innings after that date. I ran a linear regression to test which ERA estimator had done the best job of predicting second half ERAs, during this season.
Here’s a brief table of the results of a simple linear regression of the predictors against runs against per nine innings (RA9):
│Predictor │R-Sqaured │RMSE │
│(K-BB)/IP │ 9.14% │1.207│
│ SIERA │ 6.19% │1.246│
│ xFIP │ 4.65% │1.267│
│ FIP │ 2.92% │1.290│
│ ERA │ 1.86% │1.304│
│ tERA │ 0.43% │1.343│
The results of that article brought about two surprising conclusions:
(1) I expected the advanced ERA estimators, xFIP, SIERA, tERA and FIP, to have a much higher correlation to second half RA9s than they actually did.
(2) I did not expect the simple baseline of strikeouts minus walks over innings pitched to be the most predictive of how many runs a pitcher would give up.
FIP (or fielding independent pitching) was the main reason I originally became interested in sabermetrics. For that reason (among others), I had always been a big advocate of ERA estimators. Thus,
the second conclusion was rather confusing for me.
But the more I thought about it, I realized that maybe I shouldn’t have been confused at all.
FIP is essentially the same thing as strikeouts minus walks over innings pitched; it just also happens to include home runs. The other estimators also are made up, in large part, by strikeouts and
In that study, the advanced ERA estimators fell short of strikeouts and walks, despite adding in other components that are supposed to add “greater explanatory value.” And suddenly, Occam’s Razor
seems extremely relevant.
Is a simpler estimator actually better than any of the more complicated metrics?
I don’t think the original article came close to answering that question. It was a really interesting result and the results did back the “simpler” argument, but at the same time the sample was very
I looked at only 100 starting pitchers, and the predictors and outcomes came from only half of a season of baseball. I was looking at only 2012 data and half of a season of baseball is a small sample
So, for this article, I decided to see if the simply subtracting walks from strikeouts would be the most predictive again if I expanded the sample and tweaked the test slightly.
The study
For this test, I looked at how well ERA estimators would do in predicting RA9 (runs against per nine innings) for a subsequent season. I used data dating back to 2008 and looked at starting pitchers
who had at least 125 innings pitched in Year X, and at least 100 innings pitched in Year X+1.
For example, I ran a linear regression to compare Zack Greinke‘s FIP in 2008 to his RA9 in 2009.
The predictors include:
I used K – (BB + HBP – IBB) / (PA-IBB), as a slightly better modification to K-BB / IP, that was used in the first article.
K% (strikeouts divided by batters faced).
There were 344 starting pitchers who qualified for this sample and a simple least-squareds linear regression was run for each predictor.
The results
The three measures listed in the table below are the correlation coefficient (r), the r-squared and the root-mean squared error of the estimate.
The correlation coefficient and the r-squared work hand-in-hand. The correlation coefficient tells us about the strength of the linear relationship between the predictor and outcome, while the
r-squared tells us the precent variation in RA9 in Year X+1 that is explained by variation in the predictor in Year X.
The root-mean squared error of the estimate also tells about the strength of the predictor. It works sort of like a standard deviation; thus, the lower the standard deviation (or RMSE) the better the
Here are the results of the six simple linear regressions:
│Predictor │ r (r^2) │RMSE │
│ K-BB │.370 (.137) │0.862│
│ FIP │.361 (.130) │0.865│
│ K% │.351 (.123) │0.868│
│ SIERA │.335 (.112) │0.874│
│ RA9 │.317 (.100) │0.880│
│ xFIP │.312 (.097) │0.881│
The results were pretty much in line with what I expected. All of the r-squareds and RMSEs improved from the results of the original article, because using a full season of data to predict another
full season of data is much more effective than using half of a season to predict another half.
All six of the predictors were statistically significant at predicting RA9.
The Occam’s Razor principle seems to be relevant again with these results. Simply subtracting strikeouts from walks and dividing that result by the number of batters faced was the most predictive of
the six predictors tested.
Although we could’ve expected strikeouts and walks to be the most predictive, as the last study, and other studies have shown strikeouts and walks to be very predictive, it still feels like a
slightly shocking result.
It might still be hard for some to wrap their heads around the idea that strikeouts and walks were better at predicting future RA9 than any of the other advanced metrics, especially when we consider
that this is in no way a perfect sample.
I’ll be the first one to admit that this is sample is slightly flawed. The vast majority of the players in this sample did not change teams between the predictor and outcome seasons. The results for
pitchers who throw in front of the same defense or in the home park will be biased for those metrics that are affected by defense (RA9) or home park (FIP). Despite this bias, strikeouts and walks
were still more predictive than RA9, FIP or the others.
About a month ago, I wrote an article entitled “An argument for FanGraphs’ pitching WAR.” That article looked at the predictive ability of both FanGraphs’ WAR and Baseball-Reference’s WAR, but more
importantly my sample only consisted onlyof starters who changed teams between seasons, from 2002-2011.
Interestingly, fWAR (which is FIP-based with adjustments) was more predictive than both rWAR (RA9-based with adjustments) and the metric that was most predictive in this sample (K-BB/PA).
The sample for this article stretched from 2008-2012, which is one season past the sample for the fWAR article, chronologically. Thus, using the data from the fWAR article I could only take the
sample of starters who changed teams from ’08-’11 to compare to this test. That sample includes was just 49 pitchers, which is a small sample, but the r-squareds for those pitchers are fairly
│Predictor │ r^2 │
│ K-BB/PA │0.175 │
│ rWAR/PA │0.081 │
│ fWAR/PA │0.116 │
The simple strikeout-to-walk predictor seems to be much more predictive than the FIP or RA9-based advanced metrics, during these seasons for starters who changed teams.
I stretched the results back to 2005, to include 114 starters and got this result:
│Predictor │ r^2 │
│ K-BB/PA │0.175 │
│ rWAR/PA │0.107 │
│ fWAR/PA │0.132 │
The predictive value of strikeouts and walks fell almost completely apart for the last three years (2002-2004) that I tested. This caused fWAR, or more simply FIP, to end up looking like the more
predictive statistic. I really have no idea why this happened, but plan on investigating the results for my article next week.
The goal of this piece was to investigate whether simply taking strikeouts and subtracting walks would be the most predictive with a larger sample. Occam’s Razor seems to be applicable again, as
strikeouts and walks were the most predictive.
But what if we took Occam’s Razor one step further and asked whether we should even include walks as a component in our predictor?
It feels extremely foolish to question whether walks matter. Theoretically, a pitcher who walks more batters will end up giving up more runs. It seems hard for anyone who is in tune with the game of
baseball to think that issuing a lot of free passes is a good thing. But the results don’t exactly back that conclusion.
Until this point, I’ve failed to mention that strikeout percentage was third-best at predicting RA9. This metric was even simpler than (K-(BB+HBP-IBB)/(PA-IBB), as it considered only strikeouts per
plate appearance.
Looking at just one metric, strikeouts, and dividing that number by how many batters the pitcher faced outperformed xFIP, SIERA and RA9. Adding in walks (or in the case of FIP, home runs and walks)
only improved the prediction model slightly.
So, I tested the relationship between walks (BB / PA) and RA9. We’d expect a positive relationship between walk rates and RA9, because we assume (in almost all cases) that a pitcher who walks more
batters in Year X should have a higher RA9 in Year X+1.
The relationship was positive, but extremely weak, with an r-squared lower than one percent (0.0592).
While the combination of walks and strikeouts was the most predictive, almost all of that predictive value seems to be coming from the strikeout rate.
In this test, I stripped a bunch of established ERA estimators down to the one single metric at their core, and they lost little to no accuracy.
What do we do with that conclusion?
Well the first obvious reaction is the one that I keep coming back to, Occam’s Razor. That is to say that no matter how far we’ve come in the world of researching baseball statistics, we should
always select “the hypothesis which makes the fewest assumptions.” Maybe when predicting future RA9s, looking at just strikeouts, or the combination of strikeouts and walks, would be more beneficial
than any of the fancier metrics.
The second (also quite obvious) reaction is that my sample size is still extremely small. I looked only at data for starting pitchers over the course of just four seasons. I also looked at projecting
only year-to-year runs allowed, which are subject to extreme amounts of random variation. Strikeouts minus walks was the most predictive, but it was in no way the “oracle of RA9 prediction,” if you
K-BB’s r-squared of 13.7 percent leaves a lot to be desired. And although one of the simplest metrics was the most predictive, these results reiterate the point that it’s still very difficult to
predict year-to-year RA9. Also, to take the devil’s advocate case one step further, strikeouts minus walks didn’t beat the other predictors by a whole lot, and there’s a chance that a different
sample, under another set of assumptions, would’ve led to different results.
Despite those words of caution, I think as of right now when it comes to pitching metrics, simpler is better.
References & Resources
Data for this article come courtesy of FanGraphs.com and Baseball-Reference.com.
1. Paul G. said...
It seems it would make sense that the K% would be the most important as it is a proxy for hits surrendered. Pitchers do give up more hits than walks, usually. By extension K% is a pretty good
proxy for baserunners surrendered so walks are indirectly included to a certain extent.
2. rempart said...
I have gotten similar results when doing roughly the same thing for my Fantasy points league. I would also add, I have found that just K% works better than K-W for relief pitchers in the
following year. As to the 13.7%, RA9 the following year would be better correlated if some stats from the previous year were considered that are not used by the run predictors. For example take
LOB%, my research has shown a reasonable correlation between a rising/falling RA in one season with the next, and an inverse LOB% the following season. A starter has a 4.50 ERA one year and a
strand rate of 65%. The following year this rises to 74%, and his RA drops off.Of course there is alot of luck, but it helps explain some of what you are stating in the article. There are of
course other luck elements involved like Babip. Interesting work!
Leave a Reply Cancel reply
|
{"url":"http://www.hardballtimes.com/occams-razor-and-pitching-statistics/","timestamp":"2014-04-18T03:30:34Z","content_type":null,"content_length":"61181","record_id":"<urn:uuid:cacba7d8-6386-4d14-a603-5bc377d80962>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00091-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Post a reply
Hi all,
Some questions,
1) The speed of an electric motor rises from 1430 to 1490 rev/min in 0.5 seconds.
Find the average angualr acceleration and the number of revolutions turned through this time.
2)The speed of a shaft increases from 300 to 360 rev/min while turning through 18 complete revolutions.
Calculate the angular acceleration and the time taken for this change.
Goodluck to all those who try. Please post any answers with working. Hopefully I will also be posting some answers soon.
|
{"url":"http://www.mathisfunforum.com/post.php?tid=2472&qid=24285","timestamp":"2014-04-18T16:39:45Z","content_type":null,"content_length":"17435","record_id":"<urn:uuid:78729cd1-2948-4afc-97e7-b0d4e278cfe4>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00238-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Steven E. Shreve
Orion Hoch Professor of Mathematical Sciences
Carnegie Mellon University
Pittsburgh, PA 15213-3890
Wean Hall 6216
(412) 268-8484
Fax: (412) 268-6380
E-mail: shreve at the address andrew.cmu.edu.
Essays on the Financial Crisis
Model Risk, Analytics, April 2009.
Response to Pablo Triana's article "The Flawed Math of Financial Models", published on
Stochastic Calculus for Finance
Volume I: The Binomial Asset Pricing Model
Volume II: Continuous-Time Models by Steven E. Shreve
Springer-Verlag, New York
Errata for 2004 printing of Volume II, September 2006
More errata for 2004 printing of Volume II, July 2007
More errata for 2004 printing of Volume II, February 2008
Errata for 2008 printing of Volume I, July 2011
Errata for 2008 printing of Volume II, July 2011
Methods of Mathematical Finance
by Ioannis Karatzas and Steven E. Shreve
Springer-Verlag, New York
Mathematical Finance
Mark H. A. Davis, Darrell Duffie, Wendell Fleming and Steven E. Shreve, editors
IMA Volumes in Mathematics and its Applications 65
Springer-Verlag, New York
Brownian Motion and Stochastic Calculus
by Ioannis Karatzas and Steven E. Shreve
Springer-Verlag, New York
Second Edition, 1991.
Stochastic Optimal Control: The Discrete Time Case
by Dimitri P. Bertsekas and Steven E. Shreve
Academic Press, Orlando
Reprinted by Athena Scientific Publishing, 1995, and is available for free download at
Recently published papers
Matching an Ito Process by a Solution of a Stochastic Differential Equation
by G. Brunick and S. Shreve Annals Applied Probability 23 (2013), 1584--1628
"Mimicking an Ito Process" pdf file.
Given a multi-dimensional Ito process whose drift and diffusion terms are adapted processes, we construct a weak solution to a stochastic differential equation that matches the distribution of the
Ito process at each fixed time. Moreover, we show how to match the distributions at each fixed time of functionals of the Ito process, including the running maximum and running average of one of the
components of the process. A consequence of this result is that a wide variety of exotic derivative securities have the same prices when the underlying asset price is modelled by the original Ito
process or the mimicking process that solves the stochastic differential equation.
Utility Maximization Trading Two Futures with Transaction Costs
by M. Bichuch and S. Shreve SIAM J. Financial Math 4 (2013), 26--85.
"Utility Maximization" pdf file.
An agent invests in two types of futures contracts, whose prices are possibly correlated arithmetic Brownian motions, and invests in a money market account with a constant interest rate. The agent
pays a transaction cost for trading in futures proportional to the size of the trade. She also receives utility from consumption. The agent maximizes expected infinite-horizon discounted utility from
consumption. We determine the first two terms in the asymptotic expansion of the value function in the transaction cost parameter around the known value function for the case of zero transaction
cost. The method of solution when the futures are uncorrelated follows a method used previously to obtain the analogous result for one risky asset. However, when the futures are correlated, a new
methodology must be developed. It is suspected in this case that the value function is not twice continuously differentiable, and this prevents application of the former methodology.
Optimal Execution of a General One-Sided Limit-Order Book
by S. Predoiu, G. Shaikhet and S. Shreve SIAM J. Financial Math 2 (2011), 183--212.
We construct an optimal execution strategy for the purchase of a large number of shares of a financial asset over a fixed interval of time. Purchases of the asset have a nonlinear impact on price,
and this is moderated over time by resilience in the limit-order book that determines the price. The limit-order book is permitted to have arbitrary shape. The form of the optimal execution strategy
is to make an initial lump purchase and then purchase continuously for some period of time during which the rate of purchase is set to match the order book resiliency. At the end of this period,
another lump purchase is made, and following that there is again a period of purchasing continuously at a rate set to match the order book resiliency. At the end of this second period, there is a
final lump purchase. Any of the lump purchases could be of size zero. A simple condition is provided that guarantees that the intermediate lump purchase is of size zero.
Heavy Traffic Analysis for EDF Queues with Reneging
by L. Kruk, J. Lehoczky, K. Ramanan and S. Shreve Annals of Applied Probability 35 (2007), 1740--1768.
This paper presents a heavy-traffic analysis of the behavior of a single-server queue under an Earliest-Deadline-First (EDF) scheduling policy, in which customers have deadlines and are served only
until their deadlines elapse. The performance of the system is measured by the fraction of reneged work (the residual work lost due to elapsed deadlines), which is shown to be minimized by the EDF
policy. The evolution of the lead time distribution of customers in queue is described by a measure-valued process. The heavy traffic limit of this (properly scaled) process is shown to be a
deterministic function of the limit of the scaled workload process, which, in turn, is identified to be a doubly reflected Brownian motion. This paper complements previous work by Doytchinov,
Lehoczky and Shreve on the EDF discipline, in which customers are served to completion even after their deadlines elapse. The fraction of reneged work in a heavily loaded system and the fraction of
late work in the corresponding system without reneging are compared using explicit formulas based on the heavy traffic approximations, which are validated by simulation results.
Futures Trading with Transaction Costs
by K. Janecek and S. Shreve Illinois Journal of Mathematics, 54 (2010), 1239-1284.
A model for optimal consumption and investment is posed whose solution is provided by the classical Merton analysis when there is zero transaction cost. A probabilistic argument is developed to
identify the loss in value when a proportional transaction cost is introduced. There are two sources of this loss. The first is a loss due to "displacement'' that arises because one cannot maintain
the optimal portfolio of the zero-transaction-cost problem. The second loss is due to "transaction,'' a loss in capital that occurs when one adjusts the portfolio. The first of these increases with
increasing tolerance for departure from the optimal portfolio in the zero-transaction-cost problem, while the second decreases with increases in this tolerance. This paper balances the marginal costs
of these two effects. The probabilistic analysis provided here complements earlier work on a related model that proceeded from a viscosity solution analysis of the associated Hamilton-Jacobi-Bellman
Double Skorokhod map and reneging real-time queues
by L. Kruk, J. Lehoczky, K. Ramanan and S. Shreve in Markov Processes and Related Topics: A Festschrift for Thomas G. Kurtz, S. Ethier, J. Feng and R. Stockbridge, eds., Institute of Mathematical
Statistics Collections, Vol. 4, pp. 169-193.
An explicit formula for the Skorokhod map $\Gamma_{0,a}$ on $[0,a]$ for $a>0$ is provided and related to similar formulas in the literature. Specifically, it is shown that on the space $D[0,\infty)$
of right-continuous functions with left limits taking values in $\mathbb{R}$, $$ \Gamma_{0,a}(\psi)(t) = \psi (t) -\left[\big(\psi(0)-a\big)^+ \wedge\inf_{u\in[0,t]}\psi(u)\right] \vee \sup_{s \in
[0,t]} \left[ (\psi(s) - a) \wedge \inf_{u \in [s,t]} \psi(u)\right] $$ is the unique function taking values in $[0,a]$ that is obtained from $\psi$ by minimal ``pushing'' at the endpoints $0$ and
$a$. An application of this result to real-time queues with reneging is outlined.
An Explicit Formula for the Skorohod Map on [0,a]
by L. Kruk, J. Lehoczky, K. Ramanan and S. Shreve Annals of Probability 2007, Vol. 35, No. 5, 1740-1768
The Skorokhod map is a convenient tool for constructing solutions to stochastic differential equations with reflecting boundary conditions. In this work, an explicit formula for the Skorokhod map $\
Gamma_{0,a}$ on $[0,a]$ for any $a>0$ is derived. Specifically, it is shown that on the space $D[0,\infty)$ of right-continuous functions with left limits taking values in R, $\Gamma_{0,a} = \
Lambda_a \circ \Gamma_0$, where $\Lambda_a$ mapping $D[0,\infty)$ into itself is defined by $$ \Lambda_a(\phi)(t) =\phi(t)-\sup_{s\in[0,t]}[(\phi(s)-a)^+ \wedge \inf_{u\in [s,t]}\phi(u)] $$ and $\
Gamma_0$ mapping $D[0,\infty)$ into itself is the Skorokhod map on $[0,\infty)$. In addition, properties of $\Lambda_a$ are developed and comparison properties of $\Gamma_{0,a}$ are established.
A Two-Person Game for Pricing Convertible Bonds
by M. Sirbu and S. Shreve SIAM J. Control and Optimization 2006 Vol. 45, No. 4, pp. 1508-1639
A firm issues a convertible bond. At each subsequent time, the bondholder must decide whether to continue to hold the bond, thereby collecting coupons, or to convert it to stock. The bondholder
wishes to choose a conversion strategy to maximize the bond value. Subject to some restrictions, the bond can be called by the issuing firm, which presumably acts to maximize the equity value of the
firm by minimizing the bond value. This creates a two-person game. We show that if the coupon rate is below the interest rate times the call price, then conversion should precede call. On the other
hand, if the dividend rate times the call price is below the coupon rate, call should precede conversion. In either case, the game reduces to a problem of optimal stopping.
"Finite Maturity Convertible Bonds" pdf file.
Satisfying Convex Risk Limits by Trading
by K. Larsen, T. Pirvu, S. Shreve and R. Tutuncu Finance and Stochastics 2005 Vol. 9, No. 2, pp. 177-195.
A random variable, representing the final position of a trading strategy, is deemed acceptable if under each of a variety of probability measures its expectation dominates a floor associated with the
measure. The set of random variables representing pre-final positions from which it is possible to trade to final acceptability is characterized. In particular, the set of initial capitals from which
one can trade to final acceptability is shown to be a closed half-line {x;x\geq a}. Methods for computing a are provided, and the application of these ideas to derivative security pricing is
Perpetual Convertible Bonds
by M. Sirbu, I. Pikovsky and S. Shreve SIAM J. Control and Optimization 2004 Vol. 43, No. 1, pp. 58-85
A firm issues a convertible bond. At each subsequent time, the bondholder must decide whether to continue to hold the bond, thereby collecting coupons, or to convert it to stock. The firm may at any
time call the bond. Because calls and conversions usually occur far from maturity, we model this situation with a perpetual convertible bond, i.e, a convertible coupon-paying bond without maturity.
This model admits a relatively simple solution, under which the value of the perpetual convertible bond, as a function of the value of the underlying firm, is determined by a nonlinear ordinary
differential equation.
"PerpetualConvertibleBonds" pdf file.
Accuracy of State Space Collapse for Earliest-Deadline-First Queues
by L. Kruk, J. Lehoczky and S. Shreve Annals of Applied Probability 2006, Vol. 16, No. 2, 516-581
This paper presents a second-order heavy traffic analysis of a single server queue that processes customers having deadlines using the earliest-deadline-first scheduling policy. For such systems,
referred to as {\em real-time queueing systems}, performance is measured by the fraction of customers who meet their deadline, rather than more traditional performance measures such as customer
delay, queue length, or server utilization. To model such systems, one must keep track of customer lead times (the time remaining until a customer deadline elapses) or equivalent information. This
paper reviews the earlier heavy traffic analysis of such systems that provided approximations to the system's behavior. The main result of this paper is the development of a second-order analysis
that gives the accuracy of the approximations and the rate of convergence of the sequence of real-time queueing systems to its heavy traffic limit.
"Accuracy of State Space Collapse" pdf file.
Back to CMU Front Door
|
{"url":"http://www.math.cmu.edu/users/shreve/index.html","timestamp":"2014-04-21T14:41:17Z","content_type":null,"content_length":"15889","record_id":"<urn:uuid:18409ac2-d671-4ab4-b751-8b6028e02a08>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00334-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Posts from August 2011 on The OGA Blogs
Six NHL seasons have passed since the NHL lockout. Due to rule changes, NHL hockey fundamentally changed. On Goal Analysis has been prognosticating predictions of who has and has not earned a playoff
berth at about a 90% accuracy rate since then.
But what have we given you for the playoffs? Can we tell who will win before the series is over, and if so, with what kind of accuracy?
There is no fanciful formulary here. Playoff hockey is blood, and guts and character. It is simpler math than OGA’s regular season Playoff Qualifying Curve (PQC). So there is no reason to hold back
the recipe
for predicting what is and is not a good call on who will win a playoff series.
Below, and long in advance of the 2012 NHL Playoffs, we give you the numbers. Keep in mind we will differentiate between overall numbers and predictive numbers. (Predictive numbers are for results
that do not end a playoff series and can therefore still be useful for future prognosticating.)
Just to put this study in perspective, here are some overall numbers:
There have been 514 playoff games since the Lockout, over 90 total series, for an average of 5.71 games per series.
Total possible number of W and L combinations for Game 1’s through Game 7’s = 138
Predictive W and L combinations for Game 1 through Game 6 = 72 (52.17%)
Combinations that have not occurred since the Lockout = 26 (18.84%)
Eastern, predictive W and L combinations for Game 1 through Game 6 that have not occurred = 1 (0.7%)
Eastern, predictive W and L combinations for Game 1 through Game 6 that have not occurred = 8 (5.8%)
Out of all of these, what the best calls to drop an anchor next to?
The Best Calls
What is the best indicator of when a team will win their playoff series? Here at OGA, we like to call it ‘The Rule Of 3’s.’ Be the first team to three W’s and you move on; first to three L’s will be
going home. How
accurate is this rule?
Predictive NHL 3 x W’s = Wins Series: 45 of 49 / .918
Overall NHL 3 x W’s = Wins Series: 71 of 76 / .934
Predictive NHL 3 x L’s = Loses Series: 45 of 49 / .918
Overall NHL 3 x L’s = Loses Series: 71 of 76 / .934
Anything predictive in the playoffs with accuracy greater than 90% is a relatively safe call. Interestingly, there are several NHL-wide combinations of predictive W and L combinations where the
six-year average is a perfect 100% for a team winning a series:
At Game 4 – W, W, W, L (11 – 0)
At Game 5 – L, W, L, W, W (8 – 0); W, W, L, L, W (7 – 0); L, L, W, W, W (5 – 0); W, L, W, W, L (4 – 0); W, W, W, L, L (4 – 0); L, W, W, W, L (3 – 0); W, L, L, W, W (2 – 0); and W, W, L, W, L (2 –
At Game 6 – L, L, W, W, L, W (4 – 0); W, L, W, W, L, L (4 – 0); L, L, W, W, W, L (3 – 0); L, W, W, W, L, L (3 – 0); W, W, L, W, L, L (3 – 0); L, W, W, L, W, L (3 – 0); W, L, L, W, W, L (3 – 0); W, W,
W, L, L, L (2 – 0); L, W, W, L, L, W (2 – 0); L, W, L, L, W, W (2 – 0); L, L, L, W, W, W (2 – 0)
Note there are four At Game 6 combinations in bold print where the Rule of 3’s would indicate the team was most likely to lose the series instead of win. This is why the rule is not 100% accurate and
underscores how perseverance and character push teams on to the next round.
For pure numbers, however, most folks may point to the slightly less accurate but larger data set of going W, W, W to open a series with its 26 – 1 / 96.3% accuracy. At Game 3, it is the earliest
call of victory that is greater than 90%. And it is the most accurate predictor of a series win you can get.
But is there any difference between the two conferences?
Predictions Back East
Actually, yes. The East’s predictability is not as prevalent as the West’s. They sport 16 predictive combinations of W’s and L’s in series which have been 100% accurate since the Lockout in
forecasting victory:
At Game 4 – L, W, L, W (3 – 0)
At Game 5 – L, L, W, W, W (3 – 0); W, W, L, L, W (2 – 0); L, W, L, W, W (2 – 0); L, W, L, W, L (2 – 0); L, W, W, L, W (1 – 0); L, W, L, L, W (1 – 0); and L, L, L, W, W (1 – 0)
At Game 6 –L, L, W, W, W, L (2 – 0); L, W, L, W, L, W (2 – 0); L, L, W, W, L, W (2 – 0); L, W, W, L, W, L (1 – 0); L, W, L, W, W, L (1 – 0); L, W, W, L, L, W (1 – 0); L, W, L, L, W, W (1 – 0); and L,
L, L, W, W, W (1 – 0)
Note the low number of times each of these combinations has occurred. If a one-time occurrence is statistically insignificant, and twice is a mere coincidence, then all but two of these combinations
need to be ignored as predictors of Eastern victory. Also key is the fact that half of the total indicating 100% victory consists of combinations that seem to violate the Rule of 3’s in terms of
victory because they hit three L’s first.
As with the overall predictive numbers, a series that begins W, W, W in the East predicts a victory for the winning team with 12 – 1 / 92.3% accuracy. So in the East, to look less foolish around the
Playoff water cooler or blogosphere, call victory at Game 3 with a W, W, W opening or the first combination above at Game 4 or 5 of a series. Making calls at Game 6 for a series means you will either
be brassy or wrong.
Predictions Out West
The West’s predictability data set is larger than the East’s. There are 18 predictive W and L combinations which have called the series winner with 100% accuracy. They also consist of a total of 65
games to the
East’s 25 total data set:
At Game 3 – W, W, W (14 – 0)
At Game 4 – W, W, W, L (8 – 0); W, W, L, W (6 – 0); W, L, W, W (5 – 0); and L, W, W, W (2 – 0)
At Game 5 – L, W, L, W, W (7 – 0); W, L, W, W, L (5 – 0); W, W, W, L, L (4 – 0); W, L, L, W, W (3 – 0); W, W, L, W, L (1 – 0); L, W, W, W, L (1 – 0); and L, L, W, W, W (1 – 0)
At Game 6 – W, W, W, L, L, L (2 – 0); L, W, L, W, W, L (2 – 0); W, L, W, W, L, L (1 – 0); W, L, L, W, W, L (1 – 0); and L, W, L, W, L, W (1 – 0)
Here there is only one violator of the Rule of 3s which coincidentally also occurs in the Eastern Conference. And if removing from the equation all single or double occurrences of a combination is in
vogue here, nine of the 18 (a similar 50%) can be discounted. This includes all Game 6 predictive combinations.
But with predictive numbers the champion is W, W and W at Game 3 in the West with its perfect 14 – 0 / 100% accuracy. You can also call the series with a measure of confidence for the first three
Game 4, and first four Game 5 combinations above. Don’t make a call with Game 6 combinations above unless you know a lot more information than simply Ws and Ls.
Some Other Interesting Tidbits
In the predictive world, what other interesting predictions are there? I have a couple of significant ones.
How many times is a W, W, W in Game 3 followed by a Loss in Game 4? The answer is 13 of 26 times or 50%. Only one of those times did the 3 – 0 team wind up losing in Game 7. This was Boston versus
Philadelphia in 2010 where their first three wins were followed by four straight losses of 1, 4, 1 and 1 goals respectively. In other words, three times the Bruins could have ended the series by
scoring two more goals and potentially been in the Finals two straight seasons.
On the flip side, teams at L, L, L after Game 3 won the next game that same 50% of the time. So an automatic call of an opposite result in Game 4 is either brilliance or perilous depending on the
outcome. Flip a coin, here.
How many times has a team going into Game 7 with a series of W, W, W, L, L, L won the deciding game? The answer is twice in the West and zero times in the East. The converse is also true for an L, L,
L, W, W, W series’ Game 7 where one Eastern team won and two Western teams lost because all three of these series were intra-Conference contests.
How much parity is there in the NHL? If more difficult competition is an indicator, try this on for size. Of 90 playoff series since the Lockout, 24 or 26.7% have gone to a Game 7. Only seven series
went the distance from 2006 – 2008. Almost 71% of all Game 7’s have come in the last three playoff seasons.
At the same time, only 13 of 90, or 14.4%, have been four-game series. While five-game series have occurred 24 times just like seven-gamers, the most often played series is one decided by Game 6 (29
times / 32.3%).
In NHL playoffs, the Rule of 3’s is the best call you can make in a predictive sense as 91.8% of the time a team beginning a series W, W, W wins or L, L, L loses.
Predictions that are 100% correct are more prevalent in the Western Conference than the Eastern.
There is a 4:1 West to East advantage in 100% predictability for series that historically occur more than twice.
And there are no really strong 100% calls for either Conference after Game 5 in a series.
OGA pledges to keep you abreast of the odds next playoff season as the series play out. The odds as we describe them above for predicting series’ outcomes remain the same through Round 1 in 2012.
|
{"url":"http://ongoalanalysis.wordpress.com/2011/08/","timestamp":"2014-04-21T09:36:14Z","content_type":null,"content_length":"40018","record_id":"<urn:uuid:024ef09d-440c-4b18-9bd1-69095c4b5afb>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00520-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Thomas A. Richmond
Professor of Mathematics
Department of Mathematics
Western Kentucky University
1906 College Heights Blvd #11078
Bowling Green, KY 42101-1078
email: tom.richmond@wku.edu
FAX: 270-745-3699
Ph.D. 1986 Washington State University
M.A. 1983 Duke University
B.S. 1981 Delta State University
1996-present Professor, Western Kentucky University
1999-2000 Visiting Scholar, Technische Universitaet Darmstadt (Darmstadt,Germany)
1991-1996 Associate Professor, Western Kentucky University
1986-1991 Assistant Professor, Western Kentucky University
Selected Service:
2009-2010 Award for Advising, WKU Ogden College of Science and Engineering
2007-2009 Chair, Kentucky Section, Mathematical Association of America preceeded by a 2-year term as Chair-Elect.
WKU Committees:
University Graduate Council
University Professional Education Committee
University Library Advisory Committee
University Academic Probation Committee
College Faculty Excellence Awards Committee
College Curriculum Committee
Departmental Head Search Committee
Departmental Strategic Planning Committee
Departmental Head Advisory Committee
Mathematics Graduate Curriculum Committee
Mathematics Majors Curriculum Committee
Mathematics Undergraduate Curriculum Committee
Mathematics Director of Graduate Studies 2002-07
Courses Taught:
Math 100 Intermediate Algebra
Math 109 General Mathematics
Math 116 Fundamentals of College Algebra
Math 117 Trigonometry
Math 118 College Algebra and Trigonometry
Math 118 College Algebra and Trigonometry (lab section)
Math 119 Fundamentals of Calculus
Math 120 Elements of Calculus I
Math 122 Calculus of a Single Variable I
Math 136 Calculus and Analytic Geometry I
Math 136 Honors Calculus and Analytic Geometry I
Math 137 Caclulus and Analytic Geometry II
Math 137 Honors Caclulus and Analytic Geometry II
Math 183 Introductory Statistics
Math 211 Mathematics for Elementary Teachers I
Math 212 Mathematics for Elementary Teachers II
Math 237 Multivariable Calculus
Math 275 Introductory Topics (Number Theory)
Math 275 Introductory Topics (Mathematics of Puzzles and Games)
Math 275 Introductory Topics (Cryptography)
Math 307 Introduction to Linear Algebra
Math 310 Introduction to Discrete Mathematics
Math 315 Theory of Numbers
Math 317 Introduction to Algebraic Systems
Math 331 Differential Equations
Math 350 Advanced Engineering Mathematics
Math 398 Independent Study
IND 403 Honors Thesis
Math 409/409(G) History of Mathematics
Math 411/411(G) Problem Solving for Elementary and Middle School Teachers
Math 417 Algebraic Systems
Math 431/431(G) Intermediate Analysis I
Math 432/432(G) Intermediate Analysis II
Math 439 /439(G) Topology
Math 450 /450(G) Complex Variables
Math 475 Selected Topics in Mathematics
Math 498 Senior Seminar
Math 500 Readings in Mathematics
Math 509 History of Modern Mathematics
Math 517 Topics from Algebra
Math 532 Real Analysis
Math 542 Advanced Discrete Mathematics
Math 539 Topology II
Math 590 Special Topics in Mathematics
Math 599 Thesis Research and Writing
Selected Presentations:
Brno Technical University (Czech Republic), December 2013.
Workshop on Coverings, Selections, and Games in Topology, Caserta, Italy, June 2012.
Summer Topology Conference, City College of New York, July 2011.
University of Siegen (Germany), June 2011.
AMS Southeast Section Meeting, Georgia Southern University, March 2011.
Summer Topology Conference, Brno Technical University (Czech Republic), July 2009.
Brno Technical University (Czech Republic), April 2008.
National University of Ireland, Galway, September 2007.
Summer Topology Conference, Georgia Southern University, July 2006.
1004th AMS Meeting, Special Session, Western Kentucky University, March 2005.
International Conference and Research Center for Computer Science at Schloss Dagstuhl, Saarland, Germany, August 2004.
Summer Topology Conference, University of Cape Town, July 2004.
International Conference and Research Center for Computer Science at Schloss Dagstuhl, Saarland, Germany, May 2002.
University of South Africa Topology Workshop, Pretoria, June-July, 2001.
University of Cape Town, July 9-10, 2001.
Technische Universitaet Darmstadt (Germany), June 2000, December 1999.
Institute for Algebra, Technische Universitaet Dresden (Germany), January 2000.
Universitaet Hannover (Germany), November 1999.
Tennessee Topology Conference, Nashville, June 1996
22nd Annual Miami University Math. and Stat. Conference, Oct. 1, 1994.
24th Romanian National Conference on Topology and Geometry, Timisoara, Romania, July 1994
(1 hour plenary address).
Ninth Annual Summer Conference on Topology and Applications, Slippery Rock, PA, June 1993.
Technische Hochschule Darmstadt (Germany), May, 1993
Universitaet Hannover (Germany), June 1993
Symposium on Lattice Theory and its Applications in Honor of the 80th Birthday of
Garrett Birkhoff, Darmstadt, Germany, June 1991
5th International Conference on Topology and its Applications, Dubrovnik, Yugoslavia, June 1990
Abo Akademi, Abo, Finland, June 1988
Some Selected Publications. . .
A Discrete Transition to Advanced Mathematics (joint with B. Richmond), 424 pages, American Mathematical Society Pure and Applied Undergraduate Texts, Volume 3, 2004.
On Topology and Order:
The Number of Convex Topologies on a Finite Totally Ordered Set (joint with T. Clark), Involve, to appear.
Complementation in the Lattice of Locally Convex Topologies, Order, Vol. 30, no. 2 (2013) 487-496.
Topologies Arising from Metrics Valued in Abelian l-Groups, (joint with R. Kopperman and H. Pajoohesh), Algebra Universalis, 65 (2011) 315-330.
Collections of Mutually Disjoint Convex Subsets of a Totally Ordered Set (joint with T. Clark), Fibonacci Quarterly, Vol. 48, no. 1 (2010) 77-79.
Every Finite Topology is Generated by a Partial Pseudometric, (joint with A. Güldürdek), Order, Vol. 22 no. 4 (2005) 415 - 421.
T_i-Ordered Reflections, (joint with H.-P. Künzi), Applied General Topology, Vol. 6 no. 2 (2005) 207-216.
Completely Regular Ordered Spaces verses T2-ordered Spaces which are Completely Regular, (joint with H.-P. Künzi), Topology and its Applications, Vol. 135 no. 1 (2004) 185-196.
On General Topology:
Neighborhood Spaces and Convergence, (joint with J. Slapal), Topology Proceedings, 35 (2010) 165-175.
On Analysis:
Cantor Sets Arising from Continued Radicals, (joint with T. Clark), The Ramanujan Journal, to appear. (Published online 04 June 2013 DOI 10.1007/s11139-012-9457-8)
Continued Radicals, (joint with J. Johnson), The Ramanujan Journal, Vol. 15, no. 2 (2008) 259-273.
On Ordered Compactifications:
Ordered Separation Axioms and the Wallman Ordered Compactification (joint with A. McCluskey and H.-P. Künzi), Publicationes Mathematicae Debrecen, Vol. 73 no. 3-4 (2008), 361-377.
A Curious Example Involving Ordered Compactifications, Applied General Topology, Vol. 3 no. 2 (2002) pp. 225-233.
Ordered Compactifications of Products of Two Totally Ordered Spaces (joint with D. D. Mooney), Order, Vol. 16 no. 2 (1999) 113-131.
On Mathematics at the Undergraduate Level:
Instant Insanity II (joint with Aaron Young), College Math. Journal, Vol. 44, no. 4 (Sept. 2013) 265-272.
How to Recognize a Parabola (joint with B. Richmond), Am. Math. Monthly, Vol. 116, no. 10 (Dec. 2009) 910-922.
Characterizing Power Functions by Volumes of Revolution (joint with B. Richmond), College Math. Journal, Vol. 29 no. 1 (Jan. 1998) 40-41.
Metric Spaces in which all Triangles are Degenerate (joint with B. Richmond), Am. Math. Monthly, Vol. 104, no. 8 (Oct. 1997) 713-719.
The Equal Area Zones Property (joint with B. Richmond), Am. Math. Monthly, Vol. 100 no. 5 (May 1993) 475-477.
Tom Richmond's Home Page.|. WKU Mathematics Department .|. WKU Online
Last updated Jan. 17, 2014
|
{"url":"http://people.wku.edu/tom.richmond/Res.html","timestamp":"2014-04-23T09:57:52Z","content_type":null,"content_length":"33058","record_id":"<urn:uuid:844e9123-63d9-428d-8a1b-ffe67ca45a6d>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00308-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Complex number
By representing an orientation (or rotation) by a complex number instead of an explicit angle we can drop a fair number of expensive operations. So instead of storing angle 'a', we store complex
number (cos(a), sin(a)).
Another potential advantage is the algebra framework (just manipulating algebra) and reasoning. Algebra's like complex number, vectors, quaternions, etc allow thinking in terms of relative
information which can greatly simplify the process.
We will assume that standard mathematical convention of the X axis pointing the to right and the Y axis pointing up. Additionally we will assume that the reference orientation of objects is pointing
straight right. Combining these together when thinking about some specific entity, we can think in terms of its center being at the origin and its facing straight down the X axis.
NOTE: Although angles are talked about, this is for understanding and thinking purposes and not computation.
Basic examples in code:
Common definitions
Capital letter are complex number and small are scalars.
R=(cos(a), sin(a))
S=(cos(b), sin(b))
Complex number basics
Complex numbers are represented by two numbers, which we will denote as a pair
. The first number we will call 'x' and the second 'y'.
ConjugateX^* = (a,b)^* = (a,-b)R^* = (cos(a),sin(a))^* = (cos(a),-sin(a)) = (cos(-a),sin(-a))
So the conjugate reflects (
) about the X axis, which is the same as negating the angular information. (SEE:
Trig identities: Symmetry
X+Y = (a,b)+(c,d) = (a+c,b+d)
X-Y = (a,b)-(c,d) = (a-c,b-d)
Operation is component-wise. Can represent translation.
XY = (a,b)(c,d)
= (ac-bd, ad+bc)
RP = (cos(a), sin(a))(x,y)
= (x cos(a) - y sin(a), y cos(a) + x sin(a))
RS = (cos(a), sin(a))(cos(b), sin(b))
= (cos(a)cos(b) - sin(a)sin(b), cos(b)sin(a) + cos(a)sin(b))
= (cos(a+b), sin(a+b))
So the product sums the angular information of the two inputs. (SEE:
Trig identities: angle sum
C2D.mul(C2D)Product combined with conjugate
X^*Y = (a,b)^*(c,d) = (a,-b)(c,d) = (ac+bd, ad-bc)
R^*S = (cos(a),sin(a))^*(cos(b),sin(b))
= (cos(-a),sin(-a))(cos(b),sin(b))
= (cos(a)cos(b)+sin(a)sin(b), -cos(b)sin(a)+cos(a)sin(b))
= (cos(b-a),sin(b-a))
Since we can add angles with the product and can negate an angle with the conjugate, the two together allow us to subtract angles. (AKA get relative angular information)
C2D.mulc(C2D) & C2D.cmul(C2D)Magnitude (L[2] norm)|X| = |XX^*| = |(a,b)(a,-b)| = sqrt(a^2+b^2)
Notice that we're not calling this length. Complex numbers, vectors, etc do not have lengths (nor positions). What they represent in a give instance might have a length equal to its magnitude.
Unit complex and trig form
Unit complex numbers have a magnitude of one and can be written in 'trig form':
Since scale factors can be pulled out (see scalar product) all complex numbers can also be written in 'trig form':
Scalar productsX = s(a,b) = (s,0)(a,b) = (sa, sb)
This can be reversed, so all scale factors can be pulled out.
1/X = X^*/(XX^*) = (a,-b)/(a^2+b^2)
1/R = (cos(-a),sin(-a))/(cos(a)^2+sin(a)^2)
= (cos(-a),sin(-a))
= R^*
The multiplicative inverse of a unit complex is the same as its conjugate.
C2D.inv()Counterclockwise rotation of point about the origin
Falls directly out of the product. Given rotation (R) and point (P), the point after rotation (P'):
P' = RP
= (cos(a), sin(a))(x,y)
= (x cos(a) - y sin(a), y cos(a) + x sin(a))
P = (3,3)
R = (cos(pi/4), sin(pi/4)) = (.707107, .707107)
P' = (3,3)(.707107, .707107)
= (0, 4.24264)How do I find rotation of A into B
Solve the above. Assuming A & B are unit vectors:
RA = B
R = B(1/A)
R = BA^*
A = (0.809017, 0.587785)
B = (0.5, -0.866025)
R = BA^*
= (0.5, -0.866025)(0.809017, 0.587785)^*
= (0.5, -0.866025)(0.809017, -0.587785)
= (-0.104528, -0.994522)Counterclockwise rotation of point about arbitrary point
We can rotate about the origin, to rotate about an arbitrary point (C) translate the system to the origin, perform the rotation and then undo the translation.
P' = R(P-C)+C
= RP-RC+C
= RP+C-RC
= RP+C(1-R)
= RP+T
T = C(1-R)
. Look at the last line. It is telling you that the rotation R about point C is equivalent to a rotation about the origin R followed by a translation T. And C is recoverable from T & R:
C = T/(1-R)
(assuming R isn't 1...or no rotation).
Composition of rotations
Falls directly out of the product. Given rotation (R) followed by rotation (S):
RS = (cos(a+b), sin(a+b))Orthogonal direction
To find a direction orthogonal in a right-handed sense is the same as rotating by pi/2 radians (90 degrees), which is to multiply by (cos[pi/2], sin[pi/2) = (0,1).
ortho(X) = ortho((a,b)) = (a,b)(0,1) = (-b,a)Relation to dot and cross products
Falls directly from the product where one is conjugated:
X^*Y = (a,b)^*(c,d) = (a,-b)(c,d) = (ac+bd, ad-bc)dot(X,Y) = ac+bdcross(X,Y) = ad-bc
The dot product is the parallel projection and the cross is the orthogonal projection. Cross product is related to dot product by:
cross(X,Y) = dot(ortho(X),Y)
Basic geometry
On which side of a line is a point?
A line can be represented by a direction (L) and a point on the line (A). The simplest case is a line which coincides with the X axis,
L=(1,0) & P=(0,0)
, in which case we can simply examine the 'y' value of a test point (P). If 'y' is positive, then it is above, zero on the line and if negative then it is below. Moreover the value is the orthogonal
distance of the point from the line.
Next let's consider an arbitrary line through the origin with unit direction L. We can simply rotate the system such that the line coincides with the X axis as above and we're done. Our modified test
point becomes:
. Now the 'y' of P' is exactly the same as above. To fully generalize we simply need to move the line to the origin which give us:
If we were to plug in symbolic values:
P=(px,py), L=(lx,ly) & A=(ax,ay)
and expand we would see that we have unused intermediate values. This is because we are ultimately only examining a single component..we're only examining the orthogonal projection of the point into
the line (SEE: cross product above).
Additionally the direction of the line does not need to be normalized if we're only interested in above, on or below line question. The reason is because we only care about the sign of the result to
answer our question.
So the 'which side' question reduces to:
, which expands to the following pseudo-code:
return lx*(py-ay)-ly*(px-ax)
Aside: the previous can be expanded to
cross(L,P)-cross(L,A) = cross(L,P)-m
. The scalar 'm' can be stored instead of the point 'A' to represent the line. This value 'm' is commonly called the 'moment about the origin'.
Basic examples
At the top we say we can represent an entity by its position and orientation and think about its center as being at the origin and facing straight down the X axis (the reason for this is because
that's the entity's local coordinate frame).
Let's call it's position E and orientation F and we have some test point P. We can translate the system to the origin
and then we can undo the rotation of the system by multiplying by
, which gives us:
. So P in the reference frame of our entity is:
P' = (P-E)F^*
P = (100,100)
E = (200,200)
F = (.92388, .382683) <- Pi/8 or 22.5 degrees
P' = ((200,200)-(100,100))(.92388, -.382683)
= (130.656, 54.1196)
If you've ever worked with vectors, this should seem similar: find the delta distance and perform the dot and/or cross product. The above equation is finding the delta distance and then effectively
computing both. (Obviously you only compute one if only need one). So the dot product is simply the 'x' coordinate in the local coordinate frame (parallel projection) and the cross is the 'y'
coordinate (orthogonal projection).
What's my unit direction vector?
It's pretending the unit complex number of the orientation is a unit vector. It has the same numeric values for 'x' & 'y'.
Is it behind me?
As noted above the dot product is 'x' in the local coordinate frame, so the sign of the dot product. If negative it's behind the center point with respect to facing and positive if forward.
Turn clockwise or counterclockwise?
As noted above the cross product is 'y' in the local coordinate frame, so the sign of the cross product. If positive the shortest turn is counter clockwise, if negative it's clockwise and if zero
it's straight ahead.
Turn toward point with constant angular velocity
Again, the sign of cross product tells the minimum direction. Take a constant angular velocity, store as a unit complex number 'A'. If the sign of the cross product is negative, we need to conjugate
A (negate it's 'y' component). Multiply the current facing 'F' by the potentially modified 'A'. Take our new 'F' and cross again. If the sign has changed, we've overshot the target.
Is point within field of view
Given an entity in its local coordinate frame: image some field of view (<= Pi or 180 degrees), which become a pair of line segments symmetric about the X-axis (or triangle or cone). We can
immediately note a couple of things. The first is that if our x component of a test point P=(px,py) is negative that it cannot be inside. The second is that given the symmetry about the X-axis, P and
will always return the same results. Given the second we can form a new test point P'=(px,|py|). Now the problem is identical to on which side a point falls with respect to a line through the origin
and we the first observation isn't required to compute the result. Since we're asking 'below' the line, we negate the result to convert into the convention of positive is inside, yielding the
following pseudo-code:
return ly*px-lx*Math.abs(py);
As with the point to line test, our direction L does not need to be normalized and L is half of the field-of-view.
Which unit direction forms the minimum angle with the local X axis.
Although related to point/line and view-of-view, this case reduces to simply the direction with the greatest x component.
Bounce reflection
Given a vector (v) of something that hits a surface with normal (n) the resulting vector (v') has the same orthogonal part and the parallel part is negated. The parallel part is
, so
removes the parallel part and
results in the parallel part being negated.
For point reflections: negate the bounce reflection equation.
SEE: C2D.uref (this is point refection implementation)
This wiki entry has had 5 revisions with contributions from 1 members. (
more info
|
{"url":"http://www.java-gaming.org/index.php?topic=28991.msg265740","timestamp":"2014-04-21T09:47:22Z","content_type":null,"content_length":"129135","record_id":"<urn:uuid:aad65cf7-5d54-443b-835e-bcf3c65c1f64>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00128-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Registration for Fall: School of Science
Your First Semeser in the School of Science
School of Science students usually take four or five classes, depending on their major. School of Science students will be pre-registered in most courses. Regardless of interests, they will take
the following required 3 or 4 classes listed below. If you chose to come to Siena for the one day registration and advising session in July, you will be selecting one, maybe two courses to fill
out your schedule. The following majors take the following courses:
BioChem: MATH 110; BIOL 110 or (170 if you earned a 4 or 5 on the AP exam); CHEM 110; and FIRST YEAR SEMINAR 100
Chemistry: MATH 110; CHEM 110; FIRST YEAR SEMINAR 100; plus one course of your choice - usually Core.
Computer Science:
MATH 110, CSIS 110, FIRST YEAR SEMINAR 100; plus one course of your choice - usually Core.
Environmental Studies (B.A.): ENVA 100, ENVA 140, FIRST YEAR SEMINAR 100; plus two additional courses of your choice - usually Core – Economics 101 suggested.
Environmental Studies (B.S.) ENVA 100, BIOL 110, CALC 110 or Pre CALC,and FYSM
Mathematics: MATH 110; MATH 191; CSIS 110; FIRST YEAR SEMINAR 100;plus one course of your choice - usually Core.
Physics: MATH 110; CSIS 120, PHYS 130; PHYS 132; FIRST YEAR SEMINAR 100;
More information for Physics majors can be found HERE
Undecided Science: If interested in a particular science, follow above. If not, take MATH 110, FIRST YEAR SEMINAR 100, one introductory science course, plus one course of your choice - usually
Core. You will also take a one credit course entitled
SCDV 001 Science Career Exploration Seminar
Some students may take MATH 050 (Prep for Calculus) instead of MATH 110.
If you score a 4 or 5 in AP Calculus AB, you earn 4 credits for MATH-110 and take MATH 120, CAL II, instead.
If you score a 4 or a 5 in AP Calculus BC, you earn 4 credits for MATH-110 and 4 credits for MATH-120.
All School of Science incoming freshman should read this link on:
How do I know which Math course to take?
If you are unsure about some of the choices above, or if you have additional questions, contact us at
783- 2955.
|
{"url":"https://www.siena.edu/pages/1435.asp","timestamp":"2014-04-18T10:54:57Z","content_type":null,"content_length":"19694","record_id":"<urn:uuid:3648101b-c90a-49f2-8ef1-e5980ef138be>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00380-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Brevet US5233604 - Methods and apparatus for optimum path selection in packet transmission networks
This invention relates to packet transmission systems and, more particularly, to optimum path selection for connections between two nodes in such systems.
It has become increasingly useful to interconnect a plurality of data processing elements by means of a packet switching network in which data is transmitted as data assemblages called "packets."
Such networks include a plurality of interconnected network switching nodes which, in turn, are connected to end nodes supporting the data processing equipment. Such packet networks can become quite
large with an extensive geographical distribution. In such a situation, the selection of an efficient path between two end nodes which wish to communicate with each other becomes of paramount
The major criteria for selecting paths between nodes in packet networks are minimum hop count and minimum path length. The hop count is the number of links used to construct the path between the two
end nodes. The path length is a function of the overall transmission delay imposed by the path between the two end nodes. In most high speed networks, the delay (path length) is not a major
consideration since the worst-case delay through such networks is nearly always acceptable. The hop count, however, is a direct measure of the amount of resources required to implement a given path
and hence is of considerable importance in selecting paths.
It is to be noted that a selected path need not be a minimum hop count path since congestion on the network links may force the choice of a larger hop count path. However, such longer alternate paths
cannot be allowed to grow without limit since inordinate amounts of network resources might be committed to the one path, resulting in further congestion for other paths and forcing yet longer hop
count paths to be selected for yet other connections. The long term network throughput could thereby be adversely affected.
The problem, then, is to select a path between an origin node and a destination node which has a minimum hop count, a minimum path length, and which does not utilize an inordinate amount of network
In accordance with the illustrative embodiment of the present invention, optimum paths between origin and destination nodes in a packet network are selected by a modification of the so-called
"Bellman-Ford algorithm," a shortest path on a weighted graph algorithm taught by D. P. Bertsekas in Dynamic Programming: Deterministic and Stochastic Models, pages 318-322, Prentice-Hall, 1987,
Englewood Cliffs, N.J., and D. P. Bertsekas and R. Gallager in Data Networks, pages 315-332, Prentice-Hall, 1987, Englewood Cliffs, N.J.. More particularly, the algorithm of the present invention
defines "principal paths" between any given origin-destination pair. A principal path is defined as a feasible minimum-hop count path and principal links are defined as links in a principal path. All
other links are defined as secondary links. Secondary paths are paths including at least one secondary link and including more than the minimum-hop count.
A principal path is accepted as a route if none of its principal links is saturated, i.e., exceeds its preassigned traffic load. A secondary path, however, is accepted as a route only if none of its
principal links, if any, is saturated and if the load levels on its secondary links are below a preselected threshold (typically lower than that for links designated as principal). If this load
threshold is exceeded on any of the secondary links, the secondary path is rejected as a route.
One advantage of the path selection technique of the present invention is that a maximum path length constraint can be imposed on the path selection process. That is, feasible paths can be tested to
determine if the path length constraint has been exceeded and rejected if the constraint is exceeded. Such constraints can be used to prohibit inordinate resource consumption in implementing a route
and can also be used to impose specific grade of service requirements such as avoidance of low speed links. For this reason, the path length constraints must typically be specified for each
connection request, and the principal links determined separately for each connection request.
In summary, the route selection technique of the present invention involves two phases. In the first phase, the principal links are identified for the requested connection. If no maximum length
constraint is imposed, the principal links between any two nodes of the network can be precomputed and stored for use in the second phase of the algorithm. If a maximum length constraint is imposed,
the principal links must be calculated for each new connection request, comparing each link length with the constraint, or the constraint decreased by the previously accepted principal link lengths.
The route determination technique of the present invention has the advantages of producing optimum paths between arbitrary nodes of a packet switching system, taking into account not only the hop
count and the path length, but also imposing a maximum ceiling on the path length. In addition, the computation of optimum paths is sufficiently rapid that a path computation can be made for each
request for a connection.
A complete understanding of the present invention may be gained by considering the following detailed description in conjunction with the accompanying drawings, in which:
FIG. 1 shows a general block diagram of a packet communications network in which the route determination system of the present invention might find use;
FIG. 2 shows a graphical representation of the header of a data packet which might be transmitted on the packet communications network of FIG. 1;
FIG. 3 shows a block diagram of a typical decision point at the entry point for packets entering the network of FIG. 1;
FIG. 4 shows in tabular form a portion of the topology data base in each decision point such as that shown in FIG. 3 and which is used to calculate optimum paths;
FIG. 5 shows a general flow chart of the path computation procedure of the present invention;
FIG. 6 shows a more detailed flow chart of Phase I of the procedure of FIG. 4;
FIG. 7 show a more detailed flow chart of Phase II of the procedure of FIG. 4;
FIG. 8 is a yet more detailed flow chart of Routine A of the Phase I portion of the procedure of FIG. 5;
FIG. 9 is a yet more detailed flow chart of Routine B of the Phase I portion of the procedure of FIG. 5; and
FIG. 10 is a detailed flow chart of the search algorithm used in Routine A of FIG. 8.
To facilitate reader understanding, identical reference numerals are used to designate elements common to the figures.
Referring more particularly to FIG. 1, there is shown a general block diagram of a packet transmission system 10 comprising eight network nodes 11 numbered 1 through 8. Each of network nodes 11 is
linked to others of the network nodes 11 by one or more communication links A through L. Each such communication link may be either a permanent connection or a selectively enabled (dial-up)
connection. Any or all of network nodes 11 may be attached to end nodes, network node 2 being shown as attached to end nodes 1, 2 and 3, network node 7 being shown as attached to end nodes 4, 5 and
6, and network node 8 being shown as attached to end nodes 7, 8 and 9. Network nodes 11 each comprise a data processing system which provides data communications services to all connected nodes,
network nodes and end nodes, as well as decision points with the node. The network nodes 11 each comprise one or more decision points within the node, at which incoming data packets are selectively
routed on one or more of the outgoing communication links terminated within that node or at another node. Such routing decisions are made in response to information in the header of the data packet.
The network node also provides ancillary services such as the calculation of routes or paths between terminal nodes, and providing directory services and maintenance of network topology data bases
used to support route calculations.
Each of end nodes 12 comprises either a source of digital data to be transmitted to another end node, a utilization device for consuming digital data received from another end node, or both. Users of
the packet communications network 10 of FIG. 1 utilize an end node device 12 connected to the local network node 11 for access to the packet network 10. The local network node 11 translates the
user's data into packets formatted appropriately for transmission on the packet network of FIG. 1 and generates the header which is used to route the packets through the network 10. The header has
the general form shown in FIG. 2 and includes control fields 20, a routing field 22 and a redundancy check byte 23. The routing field 22 contains the information necessary to route the packet through
the packet network 10 to the destination end node 12 to which it is addressed.
In FIG. 3 there is shown a general block diagram of a typical packet network decision point such as is found in the network nodes 11 of FIG. 1. The decision point of FIG. 3 comprises a high speed
packet switching fabric 33 onto which packets arriving at the decision point are entered. Such packets arrive over transmission links via transmission adapters 34, 35, . . . , 36, or originate in
user applications in end nodes via application adapters 30, 31, . . . , 32. It should be noted that one or more of the transmission adapters 34-36 can be connected to intranode transmission links
connected to yet other packet switching fabrics similar to fabric 33, thereby expanding the switching capacity of the node. The decision point of FIG. 3 thus serves to connect the packets arriving at
the decision point to a local user (for end nodes) or to a transmission link leaving the decision point (for network nodes and end nodes). The adapters 30-32 and 34-36 may include queuing circuits
for queuing packets prior to or subsequent to switching on fabric 33. A route controller 37 is used to calculate optimum routes through the network for packets originating at the decision point of
FIG. 3. A topology data base 38 contains information about all of the nodes and transmission links of the network of FIG. 1 which information is used by controller 37 to calculate optimum paths.
The route controller 37 of FIG. 3 may comprise discrete digital circuitry or may preferably comprise properly programmed digital computer circuits. Such a programmed computer can be used to generate
headers for packets originating at user applications in the decision point of FIG. 3 or connected directly thereto. The information in data base 38 is updated when new links are activated, new nodes
are added to the network, when links or nodes are dropped from the network or when link loads change significantly. Such information originates at the network node to which the resources are attached
and is exchanged with all other nodes to assure up-to-date topological information needed for route calculation. Such data can be carried on packets very similar to the information packets exchanged
between end users of the network.
The incoming transmission links to the packet decision point of FIG. 3 may comprise links from local end nodes such as end nodes 12 of FIG. 1, or links from adjacent network nodes 11 of FIG. 1. In
any case, the decision point of FIG. 3 operates in the same fashion to receive each data packet and forward it on to another local or remote decision point as dictated by the information in the
packet header. The packet network of FIG. 1 thus operates to enable communication between any two end nodes of FIG. 1 without dedicating any transmission or node facilities to that communication path
except for the duration of a single packet. In this way, the utilization of the communication facilities of the packet network is optimized to carry significantly more traffic than would be possible
with dedicated transmission links for each communication path.
In FIG. 4 there is shown in tabular form a portion of the information stored in the data base 38 of FIG. 3. As can be seen in FIG. 4, a number of different characteristics of each link in the network
is stored in the data base. For the purposes of the present invention, only a few of these characteristics will be discussed. As might be expected, one of the critical characteristics of the
transmission links is the load threshold available on that link. Moreover, it is well known that such transmission facilities can only be loaded up to a fraction of their theoretical maximum load if
reasonable transmission properties are to be maintained. The load threshold of such a transmission facility can be represented by the quantity C.sub.kl, the effective load capability of the
transmission link between nodes k and l. For reasons to be discussed hereinafter, two different load thresholds are defined for each transmission links, depending on whether the link is selected as a
principal link in a route or as a secondary link in the route. A principal link is defined as a leg of a principal path where a principal path is a feasible minimum hop count path between the
originating node and the destination node. The hop count is simply the number of transmission links in the path. All other links are defined to be secondary links. Any non-minimum hop count path
between the originating node and the destination node is called a secondary path and it always includes at least one secondary link. In accordance with the present invention, a principal path is
preferred over a secondary path in determining optimum routes between nodes. If, however, a principal path is not available due to its already being fully loaded, a secondary path can be chosen. In
order to discriminate against such a secondary path, a load threshold is defined for each secondary link which is less than the corresponding principal load threshold for that same link. Thus the
table of FIG. 4 includes two different load thresholds for each transmission link, one to be used if the link is a principal link in a route being calculated and the other to be used in the link if a
secondary link in the route being calculated.
Also shown in the table of FIG. 4 is the Total Allocated Load, T(AL), for each link. This value represents the total load which has already been allocated for this transmission link due to previously
calculated routes. If the difference between this already allocated load and the total available principal or secondary load of the channel (depending on the whether the link is a principal link or a
secondary link) is not sufficient to carry the new connection, then the link cannot be selected. In accordance with the present invention, in addition a path can be selected only if the overall path
delay does not exceed a maximum delay defined as P.sub.T. In order to calculate the incremental delay introduced by this link, d.sub.kl, the following formula can be used, using the values show in
the table of FIG. 4: ##EQU1## where C.sub.kl =Total Bandwidth of Transmission Link from Node k to l,
=C.sub.kl,P (FIG. 4) if the link is a principal link, or
=C.sub.kl,S (FIG. 4) if the link is a secondary link,
C.sub.kl.sup.(1) =Allocated Bandwidth Before This Connection, and
C.sub.kl.sup.(2) =Allocated Bandwidth After This Connection.
To support the calculation of equation (1), the topology data base 38 contains, for each link, not only the load threshold (e.g., C.sub.kl,P and C.sub.kl,S), but also the currently allocated load for
each transmission link (e.g., C.sub.kl). The definition of the incremental delay given in equation (1) is for illustration purposes only and many other formulations can be used. This incremental
delay is subtracted from the maximum delay to produce a new maximum delay available for future links in the selected route.
In addition to the bandwidth information discussed above, the topological data base 38 of FIG. 3 and illustrated in FIG. 4 includes so-called "Quality Of Service (QOS)" parameters. Such QOS
parameters can be specified in a particular request for an optimum route and the present invention operates to insure that all of the selected links in the path conform to the requested QOS
parameters. Such QOS parameters may include, for example, a particular security level, a maximum propagation delay or a minimum probability of buffer overflow.
In accordance with the present invention, the total amount of delay permitted for each route has the finite limit P.sub.T. This limit is imposed to avoid excessive delay and to prevent the dedication
of an undue amount of resources to any one connection. Without such a limitation, it is possible to create an unstable condition where the resources of the packet communications system are rapidly
used up in increasingly more complex long paths. That is, the use of excessive amounts of resources for one route leaves less resources available for other routes, requiring even greater resources
for future routes.
If no limitation is put on the maximum delay permitted for each route, it is possible to calculate all principal paths and principal links between any two nodes ahead of time and merely access them
from a stored table when the route determination is done. In order to accommodate a finite limitation on path delay, however, it is necessary to determine the principal paths, and hence the principal
links, for each new route request.
In accordance with the present invention, each request for a route determination includes the following input parameters:
Origin Node (i)
Destination Node (j)
Required Bandwidth (c)
Maximum Path Length Threshold (P.sub.T)
Quality of Service Parameters (QOS), Optional
Using these parameters, a path is determined by first searching through all links to determine minimum hop count path that satisfy the maximum path length constraint P.sub.T and then backtracking to
derive a list of principal links. This search is supplemented by accumulating the link delays for each path. Once the destination node is reached, minimum hop count principal paths are backtracked to
create a list of principal links. This principal link list is used in the second phase of the algorithm to determine the optimum path from the source to the destination node. This specific procedure
for determing paths through a packet communications system will be described with reference to the balance of the figures.
In FIG. 5, there is shown a general flow chart of the path computation procedure of the present invention. Starting at start box 40, input box 41 is entered to specify the inputs required to
calculate an optimum route in accordance with the present invention. As noted above, these input parameters include the originating node i, the destination node j, the requested connection bandwidth
c, the maximum path length P.sub.T and, optionally, a group of "quality of service" parameters QOS. With these inputs, box 42 is entered where, as Phase O, the list of links in the network is pruned
by removing all links which do not meet the QOS parameters. With this decimated list of transmission links, Phase I box 43 is entered to identify the principal links. In determining the lengths of
the transmission links in this process, it is assumed that the utilization of each link is zero (C.sub.kl =0), i.e., the entire bandwidth of the link is available. The process used to search the
network for principal paths is a modification of the so-called Bellman-Ford algorithm described at pages 318-322 of Dynamic Programming: Deterministic and Stochastic Models, D. P. Bertsekas,
Prentice-Hall, 1987, Englewood Cliffs, N.J. This algorithm will be described in detail hereinafter
Once the principal paths have been identified in box 43, Phase II box 44 is entered where the optimum path is identified, using the principal paths from box 43, and using the current utilization data
(allocated bandwidth C.sub.kl). As noted in box 45, the output from box 44 is an acceptable minimum hop count path from the originating node i to the destination node j which has the minimum possible
hop count and which path length is less than the input maximum of P.sub.T. If no such path exists, an failure signal φ is returned. The process ends in terminal box 46.
In FIG. 6 there is shown a more detailed flow chart of Phase I of the flow chart of FIG. 5 in which the links of the packet communications system of FIG. 1 are marked as principal or secondary for
this particular request. As noted, in box 51 Routine A utilizes a modified Bellman-Ford algorithm to search all of the links between the origin and the destination nodes, keeping track of the hop
count and the path length increment for each link. Once the destination node is reached, box 52 is entered where Routine B backtracks through the links to discover the paths with both a minimum hop
count and a length that satisfies the maximum path length constraint P.sub.T. These paths are principal paths and all of the links in these paths are principal links. All other links are secondary
links. Only the information as to whether the link is principal or secondary is retained for use in Phase II of the procedure.
In FIG. 7 there is shown a more detailed flow chart of Phase II of the flow chart of FIG. 5 in which the optimum path is computed. Starting in start box 60, box 61 is entered where Routine A,
essentially identical to Routine A in FIG. 6, uses the same modified Bellman-Ford algorithm to determine the feasibility and the link length for each possible link in the ultimate path. Primary links
are accepted as feasible providing only that load capacity is available for the new connection, i.e., C.sub.kl.sup.(2) does not exceed the principal link load threshold C.sub.kl,P. Secondary links
are accepted as feasible provided that adding the new connection does not result in exceeding the secondary link load threshold C.sub.kl,S. Links are accepted as feasible only if the incremental
increase in path length caused by the link does not increase the total path length so that it is greater than the specified maximum path length P.sub.T. In box 62, the list of feasible links are
backtracked through using the hop count and the path lengths to find the shortest path with the minimum hop count. The process of FIG. 7 ends in terminal box 63.
In FIG. 8 there is shown a more detailed flow chart of Routing A used in FIGS. 6 and 7. When used in FIG. 6 to identify the principal links, the flow chart of FIG. 8 starts in box 70 from which box
71 is entered where a breadth-first exhaustive search is made, starting at the origin node i. That is, each link from each node is followed to the next node where each link is again followed to the
next node, and so forth. For each node k encountered in this search, the minimum hop count (h.sub.k) from the origin node to node k is saved along with the length (l.sub.h) of the shortest path from
the origin to node k only if the length is less than P.sub.T. When the destination node is reached in this search, in box 72, the hop count h.sub.f is the minimum hop count from i to j such that the
shortest path length is less than P.sub.T. If no such path exists, box 73 returns a null value of φ. Routine A then terminates in box 74.
In FIG. 9 there is shown a more detailed flow chart of Routine B used in FIGS. 6 and 7. When used in FIG. 6 to identify principal links, the flow chart of FIG. 9 starts in box 80 from which box 81 is
entered. In box 81, the path is retraced from the destination node j to the origin node i. Links are marked as principal in this backtracking if the link belongs to a path with minimum hop count and
length less than P.sub.T. In box 82, links which are not principal are marked as secondary. Box 83 terminates the process.
Returning to FIG. 8, when Routine A is used to calculate the optimum path. The actual utilizations of the links are used to determine whether the links are feasible and the link length. When routine
A is completed, a path has been identified with the minimum feasible hop (i.e., a hop count equal to or greater than the minimum hop count h.sub.f) and with a length less than P.sub.T. In FIG. 9, the
hop count and minimum lengths associated with each node are used to identify the actual optimum path.
In FIG. 10 there is shown a detailed flow chart of the modified Bellman-Ford algorithm used to perform the search for an optimum path. In FIG. 10, starting at box 90, box 91 is entered to set the
next node to the origin node i. In box 92, the data for the next node is retrieved from the topology data base 38 of FIG. 3. In box 93, the data for the next link leaving that node is obtained from
the data base. In decision box 94 it is determined whether or not that link is a principal link. If the link is a principal link, decision box 95 is entered where the accumulated load C.sub.kl.sup.
(2) is compared to the link principal threshold C.sub.kl,P. If the accumulated load is equal to or less than the principal threshold, box 97 is entered to calculate the incremental delay in
accordance with equation (1). If the accumulated load is greater than the principal threshold, as determined by decision box 95, the next link is obtained in box 100.
If it is determined in decision box 94 that the link is a secondary link, decision box 96 is entered to compare the accumulated load C.sub.kl.sup.(2) to the link secondary threshold C.sub.kl,S. If
the accumulated load is equal to or less than the secondary threshold, box 97 is entered to calculate the incremental delay in accordance with equation (1). If the accumulated load is greater than
the secondary threshold, as determined by decision box 96, the next link is obtained in box 100.
In decision box 98, the accumulated path length up to this point is compared to the maximum path length P.sub.T. If the accumulated path length is less than P.sub.T, the hop count h.sub.k and the
accumulated path length d.sub.kl are save in a list in box 99. If the accumulated path length is equal to or greater than P.sub.T, box 99 is bypassed and the data for this link is not added to the
list. In either case, decision box 100 is then entered to determine if there are any more links exiting from this node. If so, box 93 is entered to get the next link and continue the process. If
there are no more links exiting from this node, decision box 101 is entered to determine if the node is the destination node j. If so, the process is complete and terminates in stop box 102. If this
node is not the destination node j, box 92 is entered to get the next node and continue the process.
When the procedure of FIG. 10 is complete, a list of the minimum hop counts h.sub.k and the path lengths d.sub.kl is available. As discussed above, Routine B then backtracks through this list to
identify the path with the minimum path delay as well as the minimum hop count. This is the optimum path to be used in sending the packets involved in the connection through the system of FIG. 1.
When used to identify the principal paths, the flow chart of FIG. 10 is modified to omit the decision box 94. Using the principal load threshold of each link, the minimum hop count and the length of
the path to each node encountered is kept on the list. This list can then be processed by backtracking to identify the principal paths (having both minimum hop counts and lengths less than P.sub.T).
The links of the principal path are the principal links to be used in computing the optimum transmission route. The entire procedure for computing principal links and optimum routes is described in
the pseudocode in the attached Appendix. The correspondence between the pseudocode and the flow charts of FIGS. 5 through 10 is obvious and will not be described in detail here.
It should also be clear to those skilled in the art that further embodiments of the present invention may be made by those skilled in the art without departing from the teachings of the present
APPENDIX Minimum Path Algorithm
The algorithm to be described for computing the best possible path with the minimum possible hop count assumes the availability of a set of all of the principal links between the origin node and the
destination node. This principal link list can be precomputed by assuming that the minimum path length is infinity (P.sub.T =∞), or can be computed, as will be described, by utilizing the same
algorithm in a phase preceding the path computation phase.
i is the index of the origin node.
j is the index of destination node.
N is the total number of nodes in the network.
h is the step number of the algorithm, equal to the hop count.
P.sub.T is the maximum path length between origin and destination nodes.
h.sub.f is the minimum hop count of a selected path of length less than P.sub.T (if such a path exists).
d.sub.kl is the length of the link (if any) between nodes k and l
D.sub.i (l,h) is the length of the shortest path between nodes i and j of exactly h hops (D.sub.i (l,h)=∞ if no such path exists). D.sub.i (l,h) can be represented by a (sparse) two-dimensional
array, with indices l,h=0,1, . . . , N-1.
A(k,l) is a function that is equal to "1" if the link is acceptable, and "0" otherwise. This function depends on whether the link kl is principal or not.
Note that both of the functions A(k,l) and d.sub.kl depend upon the connection request.
The following pseudocode is used to both identify principal links and to compute the best acceptable path. This algorithm is first described to compute the best acceptable path, assuming that the
principal links have already been identified. Thereafter, it will be shown how this same algorthm can be used to identify all of the principal links. The inputs to the algorithm are a connection
request, the index i of the origin node, the index j of the destination node (i≠j), and the path length acceptance threshold P.sub.T. The algorithm will produce an acceptable minimum hop path from i
to j with minimum length less than P.sub.T, if such a path exists. If no such path exists, the algorithm returns a value of φ. ##EQU2## COMMENT: It is assumed that either a single predecessor exists,
or the first predecessor is picked when more than one exists. A preferred alternative is to construct a list of all possible predecessors and select one predecessor from the list by a random process.
Principal Link Identification
If the connection request specifies a finite length of the acceptance threshold P.sub.T, then the first step must be to determine the principal links associated with this request for a new network
connection. The above-described algorithm is used with the following modifications:
A new acceptance function A'(k,l) only checks for links that cannot accomodate the new connection since principal links are not yet known and a special load threshold for non-principal links is
The weight of the link kl (without considering existing network traffics) is given by d'.sub.kl.
It is assumed that A(k,l)≦A'(k,l) and that 0≦d'.sub.kl ≦d.sub.kl for any link kl.
Retracing the path from the destination to the origin can be omitted.
Computation of d'.sub.kl and A'(k,l) assumes that the incoming connection is the only on the link. This eliminates links that are either not capable of carrying the connection, or are part of a path
whose length can never be lower than P.sub.T.
The output of this algorithm includes not only the hop count h.sub.f and the path length D.sub.i (j,h.sub.f) of the best feasible path between the origin and the destination, but also the length of
all lower hop count paths to possible transit nodes. This information is used to identify principal links.
In general, principal links are identified by backtracking from the destination node j, breadth-first and by decreasing the hop count, all computed paths satisfying the length constraint P.sub.T.
More particularly,
1. Starting with destination j, inspect all adjacent nodes l that are one hop away to determine if they satisfy D.sub.i (l,h.sub.f -1)+d'.sub.lj <P.sub.T.
2. For each node l satisfying (1), mark the link lj a principal and define a new length threshold P.sub.T (l)=P.sub.T -d'.sub.lj. By convention, it is assumed that P.sub.T (j)=P.sub.T.
3. After inspecting all adjacent nodes, decrement the hop count by 1 (h=h.sub.f -1).
4. Inspect all nodes k that are two hops away from destination node j.
5. Mark all links kl as principal if D.sub.k (k,h.sub.f -2)+d'.sub.kl is less than P.sub.T (l).
6. Define a new length threshold for node k such that P.sub.T (k)=P.sub.T (l)-d'.sub.kl.
7. Repeat steps (1) through (6) until the hop count is equal to zero. At this time, all principal links have been identified. ##EQU3##
|
{"url":"http://www.google.fr/patents/US5233604","timestamp":"2014-04-18T00:53:03Z","content_type":null,"content_length":"199185","record_id":"<urn:uuid:88995b12-1784-402b-99c6-5e8033ce2703>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00045-ip-10-147-4-33.ec2.internal.warc.gz"}
|
OpenStreetMap Functions - File Exchange - MATLAB Central
Please login to add a comment or rating.
Comments and Ratings (37)
07 Hi wes, thanks you for the review. If I recollect correctly, the ids should refer to the OSM database, so be global ids over the whole world, rather than indices in the connectivity matrix
Mar (which is a sample of the OSM database).
Hi Ioannis,
Great package you have here, working perfectly for what i need! thank you for taking the time to upload.
03 I am having a few problems with the route planner though. when i take the node id out of the parsed osm file i get a big number such as 1151088456 but i get an eorror message saying source
Mar node index invalid. if i randomly pick one of the small numbers out of the connectivity matrix i can get it working but i have no idea how to relate to these locations to there real world
2014 position untill i run it and see where they appear.
any advice on where i'm going wrong?
thanks in advance
07 Also: http://stackoverflow.com/questions/11321718/how-can-i-display-osm-tiles-using-python
07 Please consider switching to Python. There are numerous OpenStreetMap facilities there and excellent visualization capabilities:
2013 https://pypi.python.org/pypi?%3Aaction=search&term=openstreetmap&submit=search
Hi Gaurav,
I've updated the submission on github (and here, waits review) to fix the name shadowing conflict that presumably causes the "no osm field" error.
Nov @Matthew:
2013 the restorehold function is now included (rename was needed) and an example map.osm as well.
What was your reason for commenting that the "chosen XML parser is not appropriate for anything but the extremely simple example." ?
I have been trying to use this distribution to convert the region Dharavi,Mumbai, India to data set. However, I am not able to understand how to use the package and which is the main file
which would call the matlab files given.
Also, I tried using the usage_example.m, however it shows error :
"Reference to non-existent field 'osm'.
28 Error in parse_openstreetmap (line 42)
Oct osm_xml = map_osm.osm;
Error in usage_example (line 29)
[parsed_osm, osm_xml] = parse_openstreetmap(openstreetmap_filename);"
my input file was with name same as in example - map.osm and saved in the same directory as all other files in the distribution.
Your help would be highly valued as I am in urgent need for this thing to work
06 Thanks so much for your reply.
The osm is loaded in MATLAB as a directed graph represented by an adjacency matrix. Assuming constant speed, travel time is proportional to distance. So for them to be different, different
speeds need to be assumed for different types of roads. I would suggest assigning weights to roads (edges) depending on their size by storing the weights in the adjacency matrix. Then you can
06 run Dijkstra's shortest path algorithm on the weighted graph.
2013 Recommended reading the wikipedia article on Dijkstra's algorithm, which you can find implemented e.g. in http://www.mathworks.com/matlabcentral/fileexchange/10922-matlabbgl, as a starting
point to learn more about the connectivity matrix you can look at the help text of extract_connectivity.m .
Okay, Problem resolved. I could change the start and target nodes based in the (index) values generated from my map and it worked!
Oct Now I need your suggestions if I want to get the fastest route (based on travel time not distance). What should I do? should I add these data into the map osm file? so I have to search for
2013 such info/tags. or should this be modified into the algorithms weights? I see it's mainly based on the distance. I'm really confused what to do next!
Hello Ioannis,
I'm sorry for my previous comment, I wasn't setting the path correctly inside Matlab. Now the file runs but I have this:
Warning: path is empty. This means that the planner found no path.
> In plot_route at 24
In my_example at 37
the ways are plotted but the shortest route is not! Should I change the start and target nodes values?
Oct and based on what? should I copy/paste nodes IDs from the XML file? knowing that I've already tried this but I get an error that the node ID is invalid.
Please I need your help since I have to modify all of the code to work on fastest route instead of the shortest as a part of my thesis, and I need to resolve all these errors and warnings
first and start thinking on modifying the algorithm and add traffic or travel time info and start working on the map and algorithm to produce the fastest route. I still have no clue how to
implement this, since I'm totally new to Matlab. But if you could at least help with the file you provided I'd be grateful.
I'd like to thank you for this submission, it's really helpful in my masters thesis, but I really need your help in this error I get when I run the usage_example:
Reference to non-existent field 'osm'.
Error in parse_openstreetmap (line 42)
28 osm_xml = map_osm.osm;
2013 Error in usage_example (line 29)
[parsed_osm, osm_xml] = parse_openstreetmap(openstreetmap_filename);
knowing that I copied the map.osm file in the home directory of openstreetmap and I've traced the code line by line and referred to each function to see how it works and try to solve the
problem but nothing, and found no similar errors in the comments below. I can't even see the output of your code in the end. So could you please help?
Thanks in advance
I wanted to give this 5 stars. May I suggest the following issues are addressed:
23 1. The usage_example throws an error: there is no restorehold() function within the distribution.
Sep 2. The chosen XML parser is not appropriate for anything but the extremely simple example.
2013 3. The example does not work out of the box. Why not included a example map.osm file?
Hi Nermeen,
Thank you, I would suggest against parsing 30GB files in any context, MATLAB certainly cannot handle that, but neither does any other tool. The issue is that your RAM and virtual memory won't
support this, unless you have a very unusual hardware or software setup.
So what I would suggest is file-based operations, where you gradually process your input, while at the same time dumping the results into a (possibly big) output file. Then you can access
them from that file.
You may also consider if your task really requires brute-force parsing of whole countries, vs somehow pre-filtering the information at the source from where you get it (I don't know if that's
possible in this case, but it's an option worth considering, if you don't need all that data).
27 The PBF format is currently not supported by this toolbox. There is already a Python solution here:
Jul http://blog.lifeeth.in/2011/02/extract-pois-from-osm-pbf.html
2013 (pointed to by this: http://wiki.openstreetmap.org/wiki/PBF_Format)
and I'm suggesting switching to Python anyway (see below).
Please feel free to contribute towards that, btw motivated by people's interest, it will probably migrate to github not far in the future.
Finally I would advise switching to Python for heavy GIS tasks as the one you describe, as it is more suited for system-oriented operations and you will find a much wider set of tools already
available, though it takes more searching. Starting points (though haven't tried them):
Jul The tool has been magically helpful in my environment classification project. I was just wondering about the osm file size limitation that can be parsed. As i need to parse files of complete
2013 countries; osm files that can reach up to 30 GB. Also is there a way to parse PBF files instead of XML using this tool? Thanks in advance
22 Happy that it proves useful. No, I haven't considered processing administrative boundaries. If you write some enhancement, please consider sending it in.
2013 Also, when I find some time this project will probably appear in github for people to contribute.
Hi there,
Jul This toolbox is brilliant. I was wondering if you saw a way to extract and perform connectivity analysis on a road network that is within an administrative boundary of a given level. I am
2013 looking at the API, but fairly new, so still struggling...
Please do let me know if you do, and again, a bunch of thanks!
I get an error when I try to load and parse an osm file, I have tried with different file sizes:
Cell contents reference from a non-cell array object.
Error in parse_osm>parse_way (line 61)
ndtemp(1, j) = str2double(waytemp.nd{j}.Attributes.ref);
02 Error in parse_osm (line 20)
Jun parsed_osm.way = parse_way(osm_xml.way);
Error in parse_openstreetmap (line 43)
parsed_osm = parse_osm(osm_xml);
Error in OSMExample (line 29)
[parsed_osm, osm_xml] = parse_openstreetmap(openstreetmap_filename);
Any suggestions?
I have error ??? Reference to non-existent field 'osm'. osm_xml = map_osm.osm;?
matlab file xml2struct.m
tree =
[#document: null]
theStruct =
Name: 'osm'
Attributes: [1x5 struct]
Data: ''
Children: [1x439 struct]
May parse_openstreetmap.m
2013 map_osm =
Name: 'osm'
Attributes: [1x5 struct]
Data: ''
Children: [1x439 struct]
name file
openstreetmap_filename = 'map.osm';
I do not understand where the error, using Matlab 2013a and 2009a?
Hi Richard,
It appears that the Java Virtual Machine run out of memory, you can increase that (depending on the limits of your system). Please see here:
11 This can also be the case when plotting a lot of data.
2013 I would suggest trying to instead reduce the piece of the map you attempt to import, based on what is the purpose for further processing.
Otherwise even if it does load after increasing the java heap size, it will still be cumbersome to process.
Another suggestion might be to break into pieces and load those separately, one at a time, extracting the structure of interest and then identifying common nodes between different patches.
However this is much more involved, because it involves the global (unique) IDs of the nodes, representing them in the openstreetmap database (and file).
I'm trying to get a street map of Beijing, but encountering an out of memory error:
??? Java exception occurred:
java.lang.OutOfMemoryError: Java heap space
Error in ==> xmlread at 91
parseResult = p.parse(fileName);
Error in ==> xml2struct at 40
xDoc = xmlread(file);
May Error in ==> load_osm_xml at 26
2013 map_osm = xml2struct(filename); % downloaded osm file
Error in ==> parse_openstreetmap at 41
map_osm = load_osm_xml(openstreetmap_filename);
Error in ==> GeoLife_main at 21
[parsed_osm, osm_xml] = parse_openstreetmap('beijing.osm');
Beijing.osm is 75.35 MB
Any suggestions?
Some recent feedback I received suggests that the xml loading error might be caused by a name conflict between the function xml2struct included with the openstreetmap functions and a function
with the same name in the MATLAB Bioinformatics toolbox.
My version of the bioinformatics toolbox does not have such a function, however it does have some functions for xml handling, so probably there existed xml2struct function in previous
Feb The Bioinformatics toolbox function (when it exists) can shadow the one included with this software package, depending on their relative placement in the MATLAB path.
Please check if this is the case for you by typing:
which xml2struct
This issue might be fixed in some future version by just renaming xml2struct, although in general it is preferable to maintain original names for files from the file exchange to keep better
track of code and avoid duplicates.
Hi Fabio,
Are you sure that xml2struct is in your path? It appears that the file is not loaded correctly, maybe you could check to see the fields of the structure returned, containing the loaded xml.
Jan A minimal example reproducing the issue would help resolve it.
Hi Loannis
I have just begun to use the functions package, and as a start a wanted to run the usage example. The following error arises indistinctly of the geographical location of the file or its size:
??? Reference to non-existent field 'osm'.
Nov Error in ==> parse_openstreetmap at 42
2012 osm_xml = map_osm.osm;
Error in ==> usage_example at 29
[parsed_osm, osm_xml] = parse_openstreetmap(openstreetmap_filename);
Could you maybe know the reason for this error?
Hey Ioannis!
06 Thanks a lot for your help! Now I’m still working at the details to develop an addition for the run of the power network. I want to find out this run for whole Germany. I got the data
Sep directly from the openstreetmap homepage, where I exported them. But I just can get a really small part of Germany, if I export the data from this homepage. Do you know another page, where I
2012 can get the whole data base of Germany in once? It would be another great help for me!
Thank you again, Paula
Hi Paula,
A connectivity matrix represents the existence of connections between nodes.
In one-way roads the connections are directed, so the connectivity matrix is not symmetric.
In other words, you can go from node a to node b, but not the opposite way.
The route planner just searches within the graph represented by the connectivity matrix, in order to find a path from the initial to the final node.
Sometimes, certain roads are assumed symmetric when extracting the connectivity, but actually they are not.
For this reason, assuming that directions do not matter may work better if the first attempt fails.
This requires that the connectivity matrix be made symmetric.
Both of these two approaches are described in lines 35-48 of the usage_example file.
Since the road network was of interest in this package (at least until now), only roads qualified as valid connections.
So the code searches for road connections only.
Roads can be identified by their special tag, which has a key and a value, see
for the related documentation on highways.
If the object has a tag, then the function get_way_tag_key gets the key and its value.
The tag's key is called "highway" and can take on many values (those of the possible values which were of interest are defined in the "road_vals" cell array in line 39 of
Aug However, objects which are not highways do not have a tag with key named "highway".
2012 This is checked in line 54 of extract_connectivity.
In this case, they are ignored, as far as connectivity is concerned.
Further, even if they do have the "highway" key, they may not be of interest (i.e., have a value included in "road_vals").
In this case, they are again ignored, see line 59 of extract_connectivity.
So the first warning means that the object is not a highway.
Taking the previous into consideration, it follows that this object does not contribute to the road connectivity (since it is not a road).
This is probably the reason that no route has been found.
The connectivity matrix produced from the current version of this code does not incorporate any information about the power lines or cables.
The second warning is just a suggestion by the route planner to use a symmetric network assumption, for the reasons described previously. (i.e., in case the one-way assumption does not hold
exactly. For example, this could happen when we are interested in walking only and not cars. One would need to change "road_vals" as well in this case.)
To work with power networks, you would need to extract their connectivity from the loaded openstreetmap xml data.
You can do this by altering the function extract_connectivity to identify key="power" and the appropriate values, please see
Extracting various networks is a good idea, I may add this capability when I have time.
If you develop an addition towards that direction, I would be happy to add it to the current distribution.
Best regards,
Hey Ioannis,
I want to create a connectivity matrix for th run of cables and powerlines. Right now I'm trying to understand the function of the connectivity matrix and the rout planner, but I'm not even
able to create one simple route. It always says:
"Warning: Way has NO tag.
> In get_way_tag_key at 20
10 In extract_connectivity at 53"
Aug and:
2012 "Warning: No route found by the route planner. Try without road directions."
What does it mean exactly and what can I do now?
Thanks a lot!
One more comments.
For the route planner, it differentiates one-way roads from normal roads only by not making the directed graph symmetric. However, it does not take into account the tag "oneway=yes" which can
06 be extracted from the original osm file. I think maybe using that info. will make more sense.
2012 Thanks,
Hi Loannis,
I'm using your functions to get the connectivity matrix of Shanghai road network. It seems that there are several things to be modified.
1. the tag "highway" may not be always the first tag in each <way> element so in the function 'get_way_tag_key', it should traverse all the tags to see whether there is a "highway" tag.
06 2. In terms of the connectivity matrix, it seems to me that you are trying to get the directed graph only containing the intersection nodes. If so, 'node1_index' should not be initiated to be
Jul the first node but an empty set '[]' since the first node not necessarily belongs to the intersection nodes.
Could you help me check it?
Hi Catherine,
03 Thank you for providing feedback on these functions. I have released an updated version which fixes this bug and adds more functionality.
2012 Best regards,
Hi Ioannis,
Thanks for taking the time to post this. I am trying to use your functions to plot the area of Genoa, Italy but I am getting the following errors:
??? Cell contents reference from a non-cell array object.
Error in ==> get_way_tag_key at 16
key = tag{1}.Attributes.k;
Error in ==> plot_way>show_ways at 43
17 [key, val] = get_way_tag_key(way.tag{1,i} );
2012 Error in ==> plot_way at 34
show_ways(ax, bounds, node, way, map_img_filename);
Error in ==> genoa at 34
plot_way(ax, parsed_osm)
Do you have any suggestions?
Please consider replacing:
parse_osm>parse_way line 59
tag{1,i} = waytemp.tag;
14 with:
2012 if isfield(waytemp, 'tag')
tag{1, i} = waytemp.tag;
tag{1, i} = [];
I am going to post an update, fixing this bug. It would be helpful if you can post the geographical region you exported from OpenStreetmap which caused this issue. (Particularly if you can
e-mail me the .osm file).
Hi David,
Thanks for noting this behavior.
Please consider replacing:
14 parse_osm>parse_way, line 59
2012 if isfield(waytemp, 'tag')
tag{1, i} = waytemp.tag;
tag{1, i} = [];
Hi Ioannis
I like your OpenStreetMap Matlab functions a lot. However, for some osm files I get the following error:
[parsed_osm] = parse_openstreetmap('openstreetmap/map.osm');
??? Reference to non-existent field 'tag'.
Error in ==> parse_osm>parse_way at 59
tag{1,i} = waytemp.tag;
Mar Error in ==> parse_osm at 14
2012 parsed_osm.way = parse_way(osm.way);
Error in ==> parse_openstreetmap at 40
parsed_osm = parse_osm(map_osm.osm);
Do you have an idea how this can be resolved?
|
{"url":"http://www.mathworks.com/matlabcentral/fileexchange/35819-openstreetmap-functions","timestamp":"2014-04-19T07:26:34Z","content_type":null,"content_length":"72583","record_id":"<urn:uuid:7887cfdd-f77d-4593-b09e-9e6a45175c3e>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00039-ip-10-147-4-33.ec2.internal.warc.gz"}
|
2312 -- Battle City
Battle City
Time Limit: 1000MS Memory Limit: 65536K
Total Submissions: 6708 Accepted: 2256
Many of us had played the game "Battle city" in our childhood, and some people (like me) even often play it on computer now.
What we are discussing is a simple edition of this game. Given a map that consists of empty spaces, rivers, steel walls and brick walls only. Your task is to get a bonus as soon as possible suppose
that no enemies will disturb you (See the following picture).
Your tank can't move through rivers or walls, but it can destroy brick walls by shooting. A brick wall will be turned into empty spaces when you hit it, however, if your shot hit a steel wall, there
will be no damage to the wall. In each of your turns, you can choose to move to a neighboring (4 directions, not 8) empty space, or shoot in one of the four directions without a move. The shot will
go ahead in that direction, until it go out of the map or hit a wall. If the shot hits a brick wall, the wall will disappear (i.e., in this turn). Well, given the description of a map, the positions
of your tank and the target, how many turns will you take at least to arrive there?
The input consists of several test cases. The first line of each test case contains two integers M and N (2 <= M, N <= 300). Each of the following M lines contains N uppercase letters, each of which
is one of 'Y' (you), 'T' (target), 'S' (steel wall), 'B' (brick wall), 'R' (river) and 'E' (empty space). Both 'Y' and 'T' appear only once. A test case of M = N = 0 indicates the end of input, and
should not be processed.
For each test case, please output the turns you take at least in a separate line. If you can't arrive at the target, output "-1" instead.
Sample Input
Sample Output
POJ Monthly
|
{"url":"http://poj.org/problem?id=2312","timestamp":"2014-04-21T04:32:49Z","content_type":null,"content_length":"7203","record_id":"<urn:uuid:28ef9d49-7c5e-40b5-81cf-ec014d080b11>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00597-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How Regexes Work: Notes and Errata
The third illustration on page 50 is missing a letter. There should be an a on the arrow from V to W, like this:
These diffs show how to add the . metacharacter to Regex.pm. We only need to add eight lines! If you want to try it yourself, don't peek!
Caution: This . isn't exactly like Perl's, because Perl's . doesn't match \n. It's a simple change to give it Perl's behavior.
The demo program illustrates an situation where Perl's built-in regular expressions take longer than Regex.pm to find that there's no match---a lot longer. How can this be? Is Regex.pm simply better?
No. The answer is that Regex.pm is worse, and that's why it's faster.
Simple. Perl's built-in regexes have backreferences. Regex.pm doesn't. That lack of functionality enables Regex.pm to use a really super optimization that can speed things up a lot. However, the
optimization makes it impossible to support backreferences. So Regex.pm is faster because it's less powerful.
This tradeoff is not a superficial thing, either. Regex.pm's algorithm is well-known and efficient. You might wonder if there is perhaps some way to extend it or modify it so that it works for
regexes with backreferences too. The answer is that there probably isn't such an extension, for strong theoretical reasons.
Regexes with backreferences can be shown to be NP-Complete. This means, in practice, that there is no efficient algorithm known for computing whether or not such a regex matches, and that if there
were such an algorithm, it would immediately translate into efficient algorithms for solving a huge class of other well-studied problems for which no efficient algorithms are known. This class of
problems includes the famous Travelling Salesman problem, integer linear programming, the problem of recognizing whether or not a number is prime, and, in fact, any problem where you can efficiently
check a proposed solution to see whather or not it is actually a correct solution.
Conversely, regular expression matching without backreferences is not NP-complete; there is a well-known, efficient algorithm for it, which is implemented by Regex.pm.
For more details, including a proof, see Regular Expression Matching is NP-complete.
Found an error, or have a remark? Send me mail.
Return to: Universe of Discourse main page | What's new page | Perl Paraphernalia | How Regexes Work
|
{"url":"http://perl.plover.com/Regex/NOTES.html","timestamp":"2014-04-20T05:41:45Z","content_type":null,"content_length":"3857","record_id":"<urn:uuid:0f3316eb-a327-4fc9-9cf1-1c76860f2aee>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00198-ip-10-147-4-33.ec2.internal.warc.gz"}
|
09 - 16 - 2013
"Nonperturbative renormalization for improved staggered bilinears" published in Phys. Rev. D. Also available on the
07 - 19 - 2012
by A. Lytle at 2012 Lattice Conference, containing updated results for bilinear Z-factors and comparison with 1-loop perturbation theory. Results are now for the entire momentum range and on coarse
and fine MILC lattices. Also, first results using S-MOM scheme for scalar bilinears.
03 - 9 - 2012
submitted for 2.5 M Jpsi-hours on the clusters at FNAL. (Granted 2.34 M Jpsi-hours.) Contains new results for (a) momentum dependence of ratios of Z-factors; (b) a first comparison of these ratios on
fine and coarse lattices; and (c) the first results for the scalar and pseudoscalar bilinears on the coarse lattices with non-exceptional momenta.
07 - 16 - 2011
by S. Sharpe at 2011 Lattice Conference, containing updated results for bilinear Z-factors and comparison with 1-loop perturbation theory.
05 - 7 - 2011
at 2011 All Hands meeting. Second half concerns this project.
03 - 11 - 2011
submitted for 2.5 M Jpsi-hours on the clusters at FNAL. Contains preliminary results on matching factors for
hypercube bilinears for asqtad and HYP valence quarks.
Proposal awarded 2.42 M Jpsi-equivalent core-hrs.
09 - 01 - 2010
Andrew Lytle's
, which gives detailed background to the present calculations, and many results, including a result for Z_m from coarse and fine lattices.
05 - 01 - 2010
Presentation on proposal
at USQCD All Hands meeting, April 16th 2010.
03 - 01 - 2010
Renewal proposal
12 - 31 - 2009
We have now included vector bilinear vertices, both the hypercube version (which is the conserved current for naive or HYP-smeared staggered fermions), and the asqtad conserved current (which gives a
second method of calculating Z_q).
09 - 01 - 2009
Preliminary analysis on coarse lattices completed, and presented at Lattice 2009. For write-up, see
. We find that the strange-quark mass in the MS-bar scheme at 2 GeV is 106(6) MeV, to be compared to the 84(5) MeV obtained using 2-loop matching. These results are not directly comparable, since
they contain different $O(a^2\Lambda_{\rm QCD}^)$ corrections. To study this, we need at least one further lattice spacing.
03 - 20 - 2009
Status report is contained in our
renewal proposal
submitted to USQCD. Running is now focusing on fine lattices, in order to compare $Z_m$ with that obtained on the coarse lattices.
10 - 20 - 2008
Tuning runs on coarse MILC lattices underway, using asqtad valence quarks, and calculating Zq, ZS and ZP for a variety of momenta and quark masses.
|
{"url":"http://www.phys.washington.edu/~sharpe/usqcd/index.html","timestamp":"2014-04-20T03:33:57Z","content_type":null,"content_length":"16290","record_id":"<urn:uuid:420a22a8-3150-493f-a12d-b83455c739c0>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00299-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Flower Mound Algebra 2 Tutor
Find a Flower Mound Algebra 2 Tutor
...I am currently tutoring organic chemistry II to a Pre-Med student at Univeristy of North Texas, and I have tutored another Pre-Med student in organic chemistry last summer from U of TX at
Arlington. (Both students were from this website.) My past experiences include tutoring many students from 2...
22 Subjects: including algebra 2, chemistry, calculus, physics
...Can I assume anything else about the problem?" "What do I want? What does the problem want me to find?" "What equations or concepts relate what I know to what I want? Do I need to use more than
one concept to connect what I have to what I want?" And so on.
5 Subjects: including algebra 2, physics, calculus, algebra 1
...I am available to tutor most weekends. My military requirement is one weekend a month thus allowing the others to be open. Thank you, and I look forward to hearing from you.I have taken and
received high marks in Algebra 1 and 2, Linear Algebra, and Calculus 1, 2, and 3.
6 Subjects: including algebra 2, algebra 1, economics, elementary math
...I work very hard to make learning meaningful and fun. As an educational psychologist, I have completed many hours of advanced coursework, and I am well-versed in the current research regarding
learning, memory, and instructional practices. I utilize this knowledge to identify underlying process...
39 Subjects: including algebra 2, reading, English, chemistry
...Sincerely, Dr. Bob Precalculus is the fundamental mathematics necessary for the SAT, ACT, and other core subjects in science and mathematics. My approach is very rigorous with a focus on
College Algebra and techniques related to basic business mathematics.
93 Subjects: including algebra 2, chemistry, English, reading
|
{"url":"http://www.purplemath.com/flower_mound_algebra_2_tutors.php","timestamp":"2014-04-17T11:17:04Z","content_type":null,"content_length":"24098","record_id":"<urn:uuid:cc0263f7-3233-4d2b-a597-e9f5f1cd73cd>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00389-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Born: about 200 BC in Athens, Greece
Died: about 140 BC in Greece
We know little of Zenodorus's life but he is mentioned in the Arabic translation of Diocles' On burning mirrors where it is stated [3]:-
And when Zenodorus the astronomer came down to Arcadia and was introduced to us, he asked us how to find a mirror surface such that when it is placed facing the sun the rays reflected from it
meet a point and thus cause burning.
Toomer notes that his translation of 'when Zenodorus the astronomer came down to Arcadia and was introduced to us' could, perhaps, be translated 'when Zenodorus the astronomer came down to Arcadia
and was appointed to a teaching position there'.
Before the discovery of the Arabic text of Diocles' On burning mirrors, Zenodorus was known to us mainly because of references to his treatise On isometric figures which is lost. There is another
interesting source of information however. When Vesuvius erupted in 79 AD, Herculaneum together with Pompeii and Stabiae, was destroyed. Herculaneum was buried by a compact mass of material about 16
metres deep which preserved the city until excavations began in the 18^th century. Special conditions of humidity of the ground conserved wood, cloth, food, and in particular many papyri.
The papyri contain remarkable information and in particular there is a biography of the philosopher Philonides. This biography speaks of Zenodorus as a friend of Philonides and, although complete
certainty is impossible, we can be confident that this reference to Zenodorus is to the mathematician described in this article. Two visits by Zenodorus to Athens are described in the biography.
Despite the loss of Zenodorus's treatise On isometric figures, we do know something of the results which it contained since Theon of Alexandria quotes a number of propositions from Zenodorus's work
when he is giving his commentary on Ptolemy's Syntaxis. Pappus also made use of Zenodorus's On isometric figures in Book V of his own work and in fact a comparison with what Theon of Alexandria has
presented shows that Pappus followed Zenodorus's presentation rather closely.
In On isometric figures Zenodorus himself follows the style of Euclid and Archimedes quite closely and he refers to results of Archimedes from his treatise Measurement of a circle.
Zenodorus studied the area of a figure with a fixed perimeter and the volume of a solid figure with fixed surface. For example he showed that among polygons with equal perimeter and an equal number
of sides, the regular polygon has the greatest area.
He also showed that a circle is greater than any regular polygon of the same perimeter. To do this Zenodorus makes use of Archimedes result that the area of a circle is equal to that of a
right-angled triangle of perpendicular side equal to the radius of the circle and base equal to the length of the circumference of the circle.
The treatise contains three-dimensional geometry results as well as two-dimensional. In particular he proved that the sphere was the solid figure of least surface area for a given volume.
Article by: J J O'Connor and E F Robertson
April 1999
MacTutor History of Mathematics
|
{"url":"http://www-gap.dcs.st-and.ac.uk/~history/Printonly/Zenodorus.html","timestamp":"2014-04-20T18:25:12Z","content_type":null,"content_length":"4050","record_id":"<urn:uuid:7a33d5a5-34ef-41d6-9de7-8db1ec8ca23e>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00640-ip-10-147-4-33.ec2.internal.warc.gz"}
|
3aUW9. Invariant imbedding analysis of seabed acoustics.
Session: Wednesday Morning, May 15
Time: 10:35
Author: Khaldoun Khashanah
Location: Dept. of Mathematical Sciences, Stevens Inst. of Technol., Hoboken, NJ 07030
Author: Thomas G. McKee, Jr.
Location: Stevens Inst. of Technol., Hoboken, NJ 07030
The linear acoustic equations for the bounded slab of fluid ocean and elastic seabed with cylinderical symmetry are reduced to a set of coupled boundary value problems using separation of variables.
In order to have a computational technique capable of handling stratification effects in the seabed, invariant imbedding is used to replace the boundary value problem with an initial value problem.
As an initial approximation, the case of an ocean with a reflective bottom is considered and the change in the solution, as the depth of the seabed increases, is calculated. The method of invariant
imbedding is shown to be numerically stable and has the advantage of assessing the effects of including an interactive seabed on the solution to the underwater acoustics problem with a reflecting
from ASA 131st Meeting, Indianapolis, May 1996
|
{"url":"http://www.auditory.org/asamtgs/asa96ind/3aUW/3aUW9.html","timestamp":"2014-04-19T15:16:25Z","content_type":null,"content_length":"1657","record_id":"<urn:uuid:0a8dc516-2909-4bdf-b171-668782d6cf73>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00462-ip-10-147-4-33.ec2.internal.warc.gz"}
|
ECT: Time Warp
From a scientific point of view, of course, it would be far more logical to have divided the world up into 24 equal parts, one part for each hour of the day. With the circumference (distance around a
sphere) of the world being about 24,000 miles, this would divide quite nicely into 1000 miles around the equator for every hour. So, if you knew the distance from here to Tokyo, you could find the
local time there. Or could you? Which way would you measure from?
Thinking about the rising of the Sun is thinking about time. Which way does the world spin to make the Sun appear to rise in the east? The world must spin towards the east, so that we spin towards,
past, and away from the Sun to make a day happen. This would mean that people further east of us will spin towards the Sun before we reach the same point, we being a little further back from them
along the curve. So, to find out what time it is in Tokyo, you should think about that and the fact that the starting point (International Date Line) was decided to be somewhere in the Pacific Ocean.
So, is it later or earlier in Tokyo from where you are? If you are in Boston, it is later in the day for Tokyo because they have had more Sun time than you have. If you are in New Zealand, it is
earlier in the day for Tokyo because they have had less Sun time than you have. On a globe, if you live more to the east (to the right, as long as the north pole is "up") than the place you want to
know about, then you have had more day than they have. Just be careful about which side of the International Date Line you fall!
However, this 24 hour/1000 mile system is all well and good only for a uniformly populated world and from a scientific point of view. Time zones proper come from the desire of governments to limit
the confusion which would result from too much watch changing among the ever mobile populations of their countries For example, although the contiguous United States spans one-fourth of the
circumference of the world, it only has 4 time zones when it really might have had 6.
This map was taken from: http://www.telkom.co.za/time_zone_map.htm
(c)1996, 7 Telkom SA Ltd.
The big understanding for this group is: "If people live pretty close to you, then it's dark for them when it's dark for you. If they live far away, it could be dark outside when it is light for you
and vice versa."
With the globe set-up from the last Investigation, Latitudes and Attitudes, let's direct our questioning to a different idea. Which golf tee gets to make a shadow first? What does that mean about
which people get to see the Sun first? Are they the same? Then how long do we have to wait for morning to come? Could we actually have a friend who wakes up half a day earlier than we do because she
has morning half a day before us?
Play with the globe, turning it in front of the light. Have the students place the tees where morning would arrive first. Trick them by turning the globe and then having them try again. There has to
be some place on the globe where we start, some place where we all measure morning from. Point out the International Date Line on the globe, and have them try this all again.
Where is our home? Where are family members or friends? Where is it night? Where is it day? What is a time of day which is daytime, 10 p.m.? Maybe more like 9 AM. If we are having morning at 9 a.m.,
should everyone in the world call it 9 AM? Maybe they should call their morning time 9 a.m.
What does that mean about time? Is there anything that is always keeping time going, without a battery or electricity? Something we could watch and know roughly the time? Can we all have the same
clock? Or do we need to change it depending on where we live? Wouldn't that be a bother? In what ways do we rely upon time for things we do everyday? How could we all tell time without the clock or
the Sun?
Reconstruct or grab the globe with the golf tees on it. We learned about the Sun height in relation to where you live on the world. What about the sunrise? Shine the overhead lamp on the globe again.
Spin the globe and ask the students which tee is getting the Sun first? Who is having dinner or sleeping when this tee is having lunch? What happens if you call a friend at 9 a.m. and she lives far
around the world? Is it 9 a.m. for her? What is her time? Why? Would it be hard to have everyone using one clock? We could never talk about morning being 9 a.m. because it isn't morning for your
friend over there, is it? What might we do to straighten this out? What could we think about that has to do with clocks which could help us here?
Thinking some more about time, it seems that the Sun really tells us much about time in our lives. We rely on it to govern how long a day is, right? What is a day? It is the time it takes for the Sun
to appear to go all around the world, or in other words, the time it takes the world to spin once. But those hour things are man-made. Could we use a day to think more about longer lengths of times?
What other periods of time are there that do not rely so heavily on made-up time frames like hours? Suggest that they think about going in a circle around the Sun. How many days does that take? Does
the length of time the Earth takes to spin us past the Sun rely upon someone else's definition of something? Since a day is usually measured by these things called hours, while a year is measured by
definite cycles called days, could 365 days be a good unit of time measurement? If a day is an Earth-turn time, a year is an Earth-orbit time. Play more with globes and this new idea. Students ready
to play with some math ideas here could manipulate the globe and try to divide it up in easy ways. Or check out the math activities in the next level of this Thread.
Ask the students how they think they might solve this problem. Could we find a way so that everyone could have a time and it would be right? Would it be useful for everyone in the world to say it was
noon when the Sun was only overhead at one place on the Earth? About a century ago, this problem was addressed at an international meeting and twenty-four standard time zones were adopted. Why were
there 24 time zones made? Hopefully, students will link these 24 zones with the 24 hours that make up one day.
Ask them whether or not this makes sense given our understanding of time. If all the clocks in a given time zone are set to the same time, by how much do adjacent time zones differ? This is good
practice for learning to use numbers in a practical fashion. What time is it two time zones away to the west of us? This is a trickier question. With the globe and light source, can the students
prove their answers? Can everyone see that the world to the west is having sunrise later and therefore their clocks are behind those in the world to the east? Do they now find a different answer for
which 24 pieces they would divide the globe into? Each time zone is centered about a line of longitude, or meridian.
How far across the sky does the Sun appear to go during daylight time? They will say all the way across, or halfway around the Earth. Ask them what that amount could be called instead of "all of the
way" or "halfway". How many fists do we need to cover the arc of the sky? Does anyone recall how many degrees a fist covers? It is roughly 10°. How many degrees are in a circle? How many degrees are
in half of a circle, then? They will find that it takes around 18 fists or so to measure from one horizon up and over to the other. How many more fists would you need to cover the sky seen by people
on the other side of the Earth? Find out from them if they think they could use math to find that answer easier than trying to measure it.
Since it takes twenty-four hours for the Earth to spin (or for the Sun to pass over all of the Earth), how many fists can the Sun travel through in only one hour? Remind the students that in 24
hours, the Sun has to travel through 36 fists. So the real mystery is solved by a little math. How many fists do we need to measure one hour? The math is of course 36 fists 24 fists which is 32, 1.5
fists, or one and one half fists. How many degrees are in one fist? So, how many degrees does the Sun seem to travel in one hour across the sky? 15 °. Going backwards, saying if the Sun can travel
15° in one hour, how many degrees can the Sun travel in 24 hours? 360° of course, since 15° in one hour times 24 hours = 360°. That is how many degrees there are in a circle.
So, then ask your students this: If the globe is divided up into 24 time zones, how many degrees wide is each time zone? They should then conclude that the time width of each time zone is one hour,
so the number of degrees in this zone is 15 in longitude.
The establishment of standard time zones is intimately linked with the establishment of the standard grid of latitude and longitude. Ask students if it is 7 P.M. in Boston, where is it midnight?
Midnight is 12 o'clock. 12 minus 7 is 5. Whoever is having midnight is five time zones away. They will hopefully count ahead to the east on the globe 5 slots to rest on England. Ask them what day it
is at that point. Do they think that they would be time traveling if they flew westward to this point?
Using a globe, have the students locate the United States. There are four time zones in North America: Eastern, Central, Mountain and Pacific. Can they discern how many degrees there must be across
the country's mainland from the number of time zones across it? Think if there are 24 hours in a day, there should be 24 time zones. 24 goes into 360 degrees 15 times. So, each time zone should be
approximately 15 degrees across, making the United States 15 degrees times 4 across the globe, or 60 degrees. Check this with the latitude measures on the globe. You will have to account for
fractions. What do we notice about how regular or irregular the real time zones are? Why might this be so? Think about politics and the shapes of states.
Students should already understand that different locations on the globe experience day and night at different times. Even so, the whole notion of time zones may seem a bit mysterious at first
glance. Can we find the local times for our relatives around the country right now? What about for areas in the news? Which city on the Earth will yell, "Happy new year!" first? Can students begin to
think of locations in terms of both time zones and longitude lines?
|
{"url":"http://hea-www.harvard.edu/ECT/Warp/warp.html","timestamp":"2014-04-18T16:19:31Z","content_type":null,"content_length":"19484","record_id":"<urn:uuid:c26e4ef5-07b9-4bed-b9da-6c58d308b844>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00077-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Solubility Equilibrium
19.3: Solubility Equilibrium
Created by: CK-12
Lesson Objectives
• Write solubility product constant expressions for nearly insoluble ionic compounds.
• Calculate the value of K[sp] for a compound from its solubility and determine the solubility of a compound with a known K[sp].
• Use the K[sp] of a compound to predict whether a precipitate will form when two solutions are mixed together.
• Describe the common ion effect and its relationship to solubility.
Lesson Vocabulary
• common ion
• common ion effect
• molar solubility
• solubility product constant
Check Your Understanding
Recalling Prior Knowledge
• What is a saturated solution?
• What is the equilibrium that occurs in a saturated solution?
A saturated aqueous solution is one in which the maximum amount of a solute has been dissolved in a given amount of water. A saturated solution may also have undissolved solute present, in which case
an equilibrium exists between the dissolved and undissolved solute. In this lesson, you will learn about that equilibrium and how to calculate and use the solubility product constant.
The Solubility Product Constant
Ionic compounds have widely differing solubilities. Sodium chloride has a solubility of about 360 g per liter of water at 25°C. Salts of alkali metals tend to be quite soluble. On the other end of
the spectrum, the solubility of zinc hydroxide is only 4.2 × 10^−4 g/L of water at the same temperature. Many ionic compounds containing hydroxide are relatively insoluble. The chapter Solutions
summarized a set of rules for predicting the relative solubilities of various ionic compounds in water.
Most ionic compounds that are considered to be insoluble will still dissolve to a small extent in water. These “mostly insoluble” compounds are still considered to be strong electrolytes, because
essentially any portion of the compound that dissolves will also dissociate into ions. As an example, silver chloride dissociates to a small extent into silver ions and chloride ions upon being added
to water.
$\mathrm{AgCl}(s) \mathrm{\rightleftharpoons Ag^+}(aq) \mathrm{+Cl^-}(aq)$
The process is written as an equilibrium because the dissociation occurs only to a small extent. Therefore, an equilibrium expression can be written for the process. Keep in mind that the solid
silver chloride does not have a variable concentration, so it is not included in the equilibrium expression.
This equilibrium constant is called the solubility product constant, (K[sp]) and is equal to the mathematical product of the ions each raised to the power of the coefficient of the ion in the
dissociation equation.
The formula of the ionic compound dictates the form of the K[sp] expression. For example, the formula of calcium phosphate is Ca[3](PO[4])[2]. The dissociation equation and K[sp] expression are shown
$\mathrm{Ca_3(PO_4)_2}(s) \mathrm{\rightleftharpoons 3Ca^{2+}}(aq) \mathrm{+2PO^{3-}_4}(aq)$
Table below lists solubility product constants for some common nearly insoluble ionic compounds.
Solubility Product Constants (25°C)
│ Compound │ K[sp] │Compound │ K[sp] │
│AgBr │5.0 × 10^−13│CuS │8.0 × 10^−37│
│AgCl │1.8 × 10^−10│Fe(OH)[2]│7.9 × 10^−16│
│Al(OH)[3] │3.0 × 10^−34│Mg(OH)[2]│7.1 × 10^−12│
│BaCO[3] │5.0 × 10^−9 │PbCl[2] │1.7 × 10^-5 │
│BaSO[4] │1.1 × 10^−10│PbCO[3] │7.4 × 10^-14│
│CaCO[3] │4.5 × 10^−9 │PbI[2] │7.1 × 10^-9 │
│Ca(OH)[2] │6.5 × 10^−6 │PbSO[4] │6.3 × 10^−7 │
│Ca[3](PO[4])[2] │1.2 × 10^−26│Zn(OH)[2]│3.0 × 10^−16│
│CaSO[4] │2.4 × 10^−5 │ZnS │3.0 × 10^−23│
Solubility and K[sp]
Solubility is normally expressed in grams of solute per liter of saturated solution. However, solubility can also be expressed as moles per liter. Molar solubility is the number of moles of solute in
one liter of a saturated solution. In other words, the molar solubility of a given compound represents the highest molarity solution that is possible for that compound. The molar mass of a compound
is the conversion factor between solubility and molar solubility. Given that the solubility of Zn(OH)[2] is 4.2 × 10^−4 g/L, the molar solubility can be calculated as shown below:
$\mathrm{\dfrac{4.2 \times 10^{-4} \ \cancel{\text{g}}}{1 \ L} \times \dfrac{1 \ mol}{99.41 \ \cancel{\text{g}}}=4.2 \times 10^{-6} \ mol/L \ (M)}$
Solubility data can be used to calculate the K[sp] for a given compound. The following steps need to be taken.
1. Convert from solubility to molar solubility.
2. Use the dissociation equation to determine the concentration of each of the ions in mol/L.
3. Apply the K[sp] equation.
Sample Problem 19.2: Calculating K[sp] from Solubility
The solubility of lead(II) fluoride is found experimentally to be 0.533 g/L. Calculate the K[sp] for lead(II) fluoride.
Step 1: List the known quantities and plan the problem.
• solubility of PbF[2] = 0.533 g/L
• molar mass of PbF[2] = 245.20 g/mol
The dissociation equation for PbF[2] and the corresponding K[sp] expression can be constructed as follows:
$\mathrm{PbF_2}(s) \mathrm{\rightleftharpoons Pb^{2+}}(aq) \mathrm{+2F^-}(aq) \ \ \ \ \text{K}_{\text{sp}} = [\text{Pb}^{2+}][\text{F}^-]^2$
The steps above will be followed to calculate K[sp] for PbF[2].
Step 2: Solve.
molar solubility: $\mathrm{\dfrac{0.533 \ \cancel{\text{g}}}{1 \ L} \times \dfrac{1\ mol}{245.20\ \cancel{\text{g}}}=2.17 \times 10^{-3} \ M}$
The dissociation equation shows that for every mole of PbF[2] that dissociates, 1 mol of Pb^2+ and 2 mol of F^− are produced. Therefore, at equilibrium, the concentrations of the ions are:
[Pb^2+] = 2.17 × 10^−3 M and [F^−] = 2 × 2.17 × 10^−3 = 4.35 × 10^−3 M
Substitute into the equilibrium expression and solve for K[sp].
K[sp] = (2.17 × 10^-3)(4.35 × 10^-3)^2 = 4.11 × 10^-8
Step 3: Think about your result.
The solubility product constant is significantly less than 1 for a nearly insoluble compound such as PbF[2].
Practice Problem
1. From the given solubility data, calculate K[sp] for each of the following compounds.
1. copper(II) iodide, CuI = 4.30 × 10^−4 g/L
2. silver sulfide, Ag[2]S = 2.84 × 10^−15 g/L
The known K[sp] values from Table above can be used to calculate the solubility of a given compound by following the steps listed below.
1. Set up an ICE problem (Initial, Change, Equilibrium) in order to use the K[sp] value to calculate the concentration of each of the ions. Assume that no ions are initially present in the solution.
2. The concentrations of the ions can be used to calculate the molar solubility of the compound.
3. Use the molar mass to convert from molar solubility to solubility in g/L.
The K[sp] value of calcium carbonate is 4.5 × 10^−9. We begin by setting up an ICE table showing the dissociation of CaCO[3] into calcium ions and carbonate ions. The variable s will be used to
represent the molar solubility of CaCO[3]. In this case, each formula unit of CaCO[3] yields one Ca^2+ ion and one CO[3]^2− ion. Therefore, the equilibrium concentrations of each ion are equal to s.
$\mathrm{CaCO_3}(s) \mathrm{\rightleftharpoons}$ $\mathrm{Ca^{2+}}(aq)+$ $\mathrm{CO^{2-}_3}(aq)$
Initial (M) 0.00 0.00
Change (M) +s +s
Equilibrium (M) s s
The K[sp] expression can be written in terms of s and then used to solve for s.
$\mathrm{s=\sqrt{K_{sp}}=\sqrt{4.5 \times 10^{-9}}=6.7 \times 10^{-5}\:M}$
The concentration of each of the ions at equilibrium is 6.7 × 10^−5 M. We can now use the molar mass to convert from molar solubility to solubility in g/L.
$\dfrac{6.7 \times 10^{-5}\ \cancel{\text{mol}}}{1 \ \text{L}} \times \dfrac{100.09 \ \text{g}}{1 \ \cancel{\text{mol}}}=6.7 \times 10^{-3} \ \text{g/L}$
So the maximum amount of calcium carbonate that is capable of dissolving in 1 liter of water at 25°C is 6.7 × 10^−3 grams. Note that in the case above, the 1:1 ratio of the ions upon dissociation led
to the K[sp] being equal to s^2. This is referred to as a formula of the type AB, where A is the cation and B is the anion. Now let’s consider a formula of the type AB[2], such as Fe(OH)[2]. In this
case, the setup of the ICE table would look like the following:
$\mathrm{Fe(OH)_2}(s) \mathrm{\rightleftharpoons}$ $\mathrm{Fe^{2+}}(aq)+$ $\mathrm{2OH^-}(aq)$
Initial (M) 0.00 0.00
Change (M) +s +2s
Equilibrium (M) s 2s
$\text{s}=\sqrt[3]{\frac{\text{K}_{\text{sp}}}{4}}=\sqrt[3]{\frac{7.9 \times 10^{-16}}{4}}=5.8 \times 10^{-6} \ \text{M}$
Table below shows the relationship between K[sp] and molar solubility based on the formula.
│ Compound Type │ Example │ K[sp] Expression │Cation│Anion│K[sp] in Terms of s│
│AB │CuS │[Cu^2+][S^2−] │s │s │s^2 │
│AB[2] or A[2]B │Ag[2]CrO[4] │[Ag^+]^2[CrO[4]^2−] │2s │s │4s^3 │
│AB[3] or A[3]B │Al(OH)[3] │[Al^3+][OH^−]^3 │s │3s │27s^4 │
│A[2]B[3] or A[3]B[2]│Ba[3](PO[4])[2]│[Ba^2+]^3[PO[4]^3−]^2 │3s │2s │108s^5 │
The K[sp] expressions in terms of s can be used to solve problems in which the K[sp] is used to calculate the molar solubility as in the examples above. Molar solubility can then be converted to
solubility in g/L.
Predicting Precipitates
Knowledge of K[sp] values will allow you to be able to predict whether or not a precipitate will form when two solutions are mixed together. For example, suppose that a known solution of barium
chloride is mixed with a known solution of sodium sulfate. Barium sulfate (Figure below) is a mostly insoluble compound that could potentially precipitate from the mixture. However, it is first
necessary to calculate the ion product, [Ba^2+][SO[4]^2−], for the solution. If the value of the ion product is less than the value of the K[sp], then the solution will remain unsaturated. No
precipitate will form because the concentrations are not high enough to begin the precipitation process. If the value of the ion product is greater than the value of K[sp], then a precipitate will
form. The formation of the precipitate lowers the concentration of each of the ions until the ion product is exactly equal to K[sp], at which point equilibrium is attained.
Barium sulfate is used as a component of white pigments in paints and as an agent in certain x-ray imaging processes.
Sample Problem 19.3: Predicting Precipitates
Will a precipitate of barium sulfate form when 10.0 mL of 0.0050 M BaCl[2] is mixed with 20.0 mL of 0.0020 M Na[2]SO[4]?
Step 1: List the known quantities and plan the problem.
• concentration of BaCl[2] solution = 0.0050 M
• volume of BaCl[2] solution = 10.0 mL
• concentration of Na[2]SO[4] solution = 0.0020 M
• volume of Na[2]SO[4] solution = 20.0 mL
• K[sp] of BaSO[4] = 1.1 × 10^−10 (Table above)
• value of [Ba^2+][SO[4]^2−]
• if a precipitate forms
The concentration and volume of each solution that is mixed together must be used to calculate the values of [Ba^2+] and [SO[4]^2−]. Each individual solution is diluted when they are mixed together.
The ion product is calculated and compared to the K[sp] to determine whether a precipitate forms.
Step 2: Solve.
The moles of each ion from the original solutions are calculated by multiplying the molarity by the volume in liters.
mol Ba^2+ = 0.0050 M × 0.010 L = 5.0 × 10^-5 mol Ba^2+
mol SO[4]^2- = 0.0020 M × 0.020 L = 4.0 × 10^-5 mol SO[4]^2-
The concentration of each ion after dilution is then calculated by dividing the moles by the final solution volume of 0.030 L.
$\mathrm{[Ba^{2+}]=\dfrac{5.0 \times 10^{-5} \ mol}{0.030 \ L}=1.7 \times 10^{-3} \ M}$
$\mathrm{[SO_4^{2-}]=\dfrac{4.0 \times 10^{-5} \ mol}{0.030 \ L}=1.3 \times 10^{-3} \ M}$
Now, the ion product is calculated.
[Ba^2+][SO[4]^2-] = (1.7 × 10^-3)(1.3 × 10^-3) = 2.2 × 10^-6
Since the ion product is greater than the K[sp], a precipitate of barium sulfate will form.
Step 3: Think about your result.
Two significant figures are appropriate for the calculated value of the ion product.
Practice Problem
2. Calculate the ion product for calcium hydroxide when 20.0 mL of 0.010 M CaCl[2] is mixed with 30.0 mL of 0.0040 M KOH. Decide if a precipitate will form.
The Common Ion Effect
In a saturated solution of calcium sulfate, an equilibrium exists between the solid calcium sulfate and its ions in solution.
$\mathrm{CaSO_4}(s) \mathrm{\rightleftharpoons Ca^{2+}}(aq) \mathrm{+SO^{2-}_4}(aq) \ \ \ \ \text{K}_{\text{sp}}=2.4 \times 10^{-5}$
Suppose that some calcium nitrate were added to this saturated solution. Immediately, the concentration of the calcium ion in the solution would increase. As a result, the ion product [Ca^2+][SO[4]^
2−] would increase to a value that is greater than the K[sp]. According to Le Châtelier’s principle, the equilibrium above would shift to the left in order to relieve the stress of the added calcium
ion. Additional calcium sulfate would precipitate out of the solution until the ion product once again becomes equal to K[sp]. Note that in the new equilibrium, the concentrations of the calcium ion
and the sulfate ion would no longer be equal to each other. The calcium ion concentration would be larger than the sulfate ion concentration.
This situation describes the common ion effect. A common ion is an ion that is common to more than one salt in a solution. In the above example, the common ion is Ca^2+. The common ion effect is a
decrease in the solubility of an ionic compound as a result of the addition of a common ion. Adding calcium ions to a saturated solution of calcium sulfate causes additional CaSO[4] to precipitate
from the solution, lowering its solubility. The addition of a solution containing sulfate ion, such as potassium sulfate, would result in the same common ion effect.
Sample Problem 19.4: The Common Ion Effect
What is the concentration of zinc ion in 1.00 L of a saturated solution of zinc hydroxide to which 0.040 mol of NaOH has been added?
Step 1: List the known quantities and plan the problem.
• K[sp] = 3.0 × 10^−16 (Table above)
• moles of added NaOH = 0.040 mol
• volume of solution = 1.00 L
Express the concentrations of the two ions relative to the variable s. The concentration of the zinc ion will be equal to s, while the concentration of the hydroxide ion will be equal to 0.040 + 2s.
Note that the value of s in this case is not equal to the value of s when zinc hydroxide is dissolved in pure water.
Step 2: Solve.
The K[sp] expression can be written in terms of the variable s.
K[sp] = [Zn^2+][OH^-]^2 = (s)(0.040+2s)^2
Because the value of the K[sp] is so small, we can make the assumption that the value of s will be very small compared to 0.040. This simplifies the mathematics involved in solving for s.
K[sp] = (s)(0.040)^2 = 0.0016s = 3.0 × 10^-16
$\mathrm{s=\dfrac{K_{sp}}{[OH^-]^2}=\dfrac{3.0 \times 10^{-16}}{0.0016}=1.9 \times 10^{-13}\ M}$
The concentration of the zinc ion is equal to s, so [Zn^2+] = 1.9 × 10^-13 M.
Step 3: Think about your result.
The relatively high concentration of the common ion, OH^-, results in a very low concentration of the zinc ion. The molar solubility of the zinc hydroxide is less in the presence of the common ion
than it would be in pure water.
Practice Problem
3. Determine the concentration of silver ions in 1.00 L of a saturated solution of silver chloride to which 0.0020 mol of sodium chloride has been added.
Lesson Summary
• In a saturated solution, an equilibrium exists between the dissolved and undissolved solute, at which point the rate of dissolution is equal to the rate of recrystallization. The equilibrium
constant expression for this type of equilibrium is called a solubility product constant (K[sp]).
• The K[sp] of a compound can be calculated from its solubility (g/L) or molar solubility (mol/L). Known K[sp] values can be used to calculate the solubility of a compound.
• When two solutions are mixed, a precipitate may be produced. The starting ion concentrations are used to calculate the ion product, which is then compared to the K[sp]. If the ion product is
greater than the value of K[sp], a precipitate will form.
• The common ion effect describes a reduction in the solubility of a salt that results from the addition of an ion that is common to both the original solution and the salt being added.
Lesson Review Questions
Reviewing Concepts
1. Explain what is meant by the statement that all ionic compounds are strong electrolytes, no matter how soluble.
2. What is the relationship between a compound’s solubility and its solubility product constant?
3. Write the solubility product constant (K[sp]) expression for the following compounds.
1. NiS
2. Ag[2]CO[3]
3. Fe[3](PO[4])[2]
4. Use Table above to rank the following salts from most to least soluble.
1. AgBr
2. BaSO[4]
3. ZnS
4. PbCO[3]
5. How does the addition of lead(II) ions affect the solubility of lead(II) chloride in water?
6. The molar solubility of copper(I) bromide, CuBr, is 2.0 × 10^−4 M. Calculate the solubility of CuBr in g/L.
7. Calculate K[sp] for the following compounds from the given solubilities at 25°C.
1. SrCO[3], 5.9 × 10^−3 g/L
2. Ag[2]SO[4], 4.74 g/L
3. Cr(OH)[3], 3.4 × 10^−6 g/L
8. What is the concentration of lead(II) ions and iodide ions in a saturated solution of lead(II) iodide at 25°C? (Use Table above for the K[sp].)
9. Use the K[sp] values from Table above to calculate the solubility in g/L of the following compounds.
1. Ca[3](PO[4])[2]
2. PbSO[4]
3. Ca(OH)[2]
10. Calculate the ion product of silver bromide when 100.0 mL of 0.0020 M AgNO[3] is mixed with 100.0 mL of 1.0 × 10^−4 M KBr. Will a precipitate of AgBr form when the solutions are mixed?
11. Determine the concentration of lead(II) ions in 1.00 L of a saturated solution of PbF[2] to which 0.025 mol of fluoride ions has been added. The K[sp] of PbF[2] is 4.1 × 10^−8.
Further Reading / Supplemental Links
Points to Consider
In the course of an exothermic reaction, heat is released from the system into the surroundings, resulting in a decrease in the enthalpy of the system. This is a favorable reaction because nature
prefers a state of lower energy.
• What is meant by the term "driving force" as it relates to chemical reactions?
• What other force is responsible for the occurrence of endothermic reactions, which absorb heat into the system?
Files can only be attached to the latest version of None
|
{"url":"http://www.ck12.org/book/CK-12-Chemistry---Intermediate/r12/section/19.3/","timestamp":"2014-04-18T04:08:00Z","content_type":null,"content_length":"152219","record_id":"<urn:uuid:ddf39100-af6b-4214-b304-fc1f91c36363>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00148-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Homework Help
Posted by Melinda on Sunday, December 11, 2011 at 11:29am.
Whats the value of a? =-=
For the graph, determine the equation of the function in the form y=a(x-h)^2+k.
Then describe the transformations that were applied to y=x^2 to obtain the graph.
Here is the graph: imgur dot com/A7e5M
The only thing I know is that:
1. Reflected in x-axis
2. vertical shift UP 2
Whats the vertical compression/stretch? halps~
• Precalculus - Reiny, Sunday, December 11, 2011 at 1:12pm
This time I was able to see your graph,
the new vertex is (0,2) and another point on it is (1,-1)
so if the parent equation is y = x^2
the new equation is
y = a(x-0)^2 + 2
y = ax^2 + 2
but the point (1,-1) lies on it
1 = a(-1)^2 + 2
1 = a + 2
a = -1
new equation: y = -x^2 + 2
Notice that because of the position of the new point (1,-1) the value of a turned out be negative, which we expected, since it opens downwards
• Precalculus - Melinda, Sunday, December 11, 2011 at 2:02pm
Thank you Reiny!!!!!!!
Related Questions
MATH DUDES!!! - I have 3 questions. 1.whats the value for h in 12h+9=85? 2.whats...
precalculus HELP! - I posted this question before: What is the area enclosed by...
algebra - why is the graph of an inverse flipped over the line y=x instead of ...
Algebra 2 - how to i graph the inequality : y<x squared - 3x + 15 whats my x ...
advanced Functions/ Precalculus Log - 1. If (x, y) is on the graph f(x) = log(...
precalculus - is solving this possible graph each function with its parent graph...
algebra - whats the model of 2x + 3 = 3x -1 and whats the value of x?
precalculus - What is the area enclosed by the graph of the absolute value of (...
precalculus - Explain how the graph of f(x)=(-2)/(x-3)^2 can be obtained from ...
precalculus/trigonometry - Find a formula for the given linear function. The ...
|
{"url":"http://www.jiskha.com/display.cgi?id=1323620957","timestamp":"2014-04-20T13:44:57Z","content_type":null,"content_length":"9182","record_id":"<urn:uuid:a0b6d4a9-7ee9-44ed-9037-f6a6429d7748>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00562-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Very quick matrix question
The definition of "transpose" is that if the elements of A are "[itex]a_{ij}[/itex]" then the elements of A^T are [itex]a_{ji}[/itex]. So if all non-diagonal elements are 0, we have [itex]a_{ij}= a_
{ji}= 0[/itex]. And, of course, on the diagonal i= j so we still have [itex]a_{ij}= a_{ji}[/itex].
|
{"url":"http://www.physicsforums.com/showthread.php?p=3847633","timestamp":"2014-04-20T11:20:45Z","content_type":null,"content_length":"26805","record_id":"<urn:uuid:571d727b-c15a-4058-8cc4-aed49c836b61>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00630-ip-10-147-4-33.ec2.internal.warc.gz"}
|
North Chelmsford Algebra 2 Tutor
...I also ask simple questions to ensure your full understanding of a particular subject. After all, it is my duty and joy to know that you have gained confidence in math. I have enjoyed using
mathematics as an essential tool while working as a chemical engineer for industry and while conducting biophysical research at the University of California.
12 Subjects: including algebra 2, chemistry, geometry, algebra 1
...I vary my teaching style to the student's learning style and help them to grasp a solid foundation in understanding the subject, not just learn to pass a test. I focus on building life learning
skills, and good study habits so that the student can carry those values to their other school subjects and activities. I have taught violin and dance lessons for over 10 years.
13 Subjects: including algebra 2, English, geometry, writing
...My grammar and SAT writing students learn how to analyze the deep structure of sentences by taking them apart and identifying the phrases and clauses that make them up. I took the advanced
sections of mechanics and electricity and magnetism at MIT. I've tutored Physics I and II to many high school and college students around the Boston area.
47 Subjects: including algebra 2, English, chemistry, reading
I am a high school math teacher working on my second Master's Degree. I went to Lesley University for my undergraduate degree in education and mathematics. I went back to Lesley for my first
Master's degree in education to be a Reading Specialist.
14 Subjects: including algebra 2, reading, algebra 1, ACT Math
...I received AP credit to place out of all required English courses at the University. I also scored a 2000 on my SATs, with a 740 on the writing section and a 12/12 on the essay. I took AP
Calculus AB and AP Calculus BC and placed out of my required math courses as well.
35 Subjects: including algebra 2, reading, English, law
|
{"url":"http://www.purplemath.com/north_chelmsford_ma_algebra_2_tutors.php","timestamp":"2014-04-16T10:50:06Z","content_type":null,"content_length":"24399","record_id":"<urn:uuid:8d91233c-30b3-40c1-b4de-6639a42c60f8>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00406-ip-10-147-4-33.ec2.internal.warc.gz"}
|
TSA of a cone with 465ml volume
November 17th 2008, 01:59 AM #1
Nov 2008
TSA of a cone with 465ml volume
i require help with a project,
i need to find out what the Total Surface Area of a cone with a volume of 465ml
if you could post all workings out aswell that would be greatly appreciated
Last edited by chickn-winz; November 17th 2008 at 02:27 AM.
let h be height and r be radius
the volume of cone $=\frac{1}{3}\pi r^2 = 465$
the total surface area is given by
$<br /> TSA=\pi r^2+2\pi r \sqrt{r^2+h^2}<br />$thus either height or radius of base should be given in the question
November 17th 2008, 02:50 AM #2
|
{"url":"http://mathhelpforum.com/geometry/60013-tsa-cone-465ml-volume.html","timestamp":"2014-04-17T08:57:19Z","content_type":null,"content_length":"33371","record_id":"<urn:uuid:50177c92-22d2-4469-af52-356f39d9b27b>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00498-ip-10-147-4-33.ec2.internal.warc.gz"}
|
IS 310
A numerical value used as a summary measure for a sample, such as sample mean, is known as a
sample statistic
Since the population size is always larger than the sample size, then the sample statistic
can be smaller, larger, or equal to the population parameter.
The mean of a sample
is computed by summing all the data values and dividing the sum by the number of items
The pth percentile is a value that at least p percent of the observations are
less than or equal to this value
In computing the hinges for data with an odd number of items, the median position is included
both in the computation of the lower and upper hinges.
The interquartile range is used as a measure of variability to overcome what difficulty of the range?
The range is influenced too much by extremes.
If the variance of a data set is correctly computed with the formula using n-1 in the denominator, what is true?
The data set is a sample.
In computing descriptive statistics from grouped data,
data values are treated as if they occur at the midpoint of a class
When should measures of location and dispersion be computed from grouped data rather than from individual data values?
Only when individual data values are unavailable
The descriptive measure of dispersion that is based on the concept of a deviation about the mean is?
The standard deviation
The measure of location which is the most likely to be influenced by extreme values is the data set is the?
The counting rule that is used for counting the number of experimental outcomes when n objects are selected from a set of N objects were order of selection is not important is called?
The counting rule that is used for counting the number of experimental outcomes when n objects are selected from a set of N objects were order of selection is important is called?
When the assumption of equally likely outcomes is used to assign probability values, the method used to assign probabilities is referred to as the
classical method
Two events, A and B, are mutually exclusive and each have a nonzero probability. If event A is known to occur, the probability of the occurrence of even B is
The addition law is potentially helpful when we are interested in computing the probability of
the union of two events
The multiplication law is potentially helpful when we are interested in computing the probability of
the intersection of two events.
The union of events A and B is the even containing
all the sample points belonging to A or B, or both
If a penny is tossed three times and comes up heads all three times, the probability of heads on the fourth trial is
The variance is the measure of the dispersion or variability of a random variable. It is a weighted average of the
squared deviations from the mean
A weighted average of the value of a random variable, where the probability function provides weights is known as
the expected value
When sampling without replacement, the probability of obtaining a certain sample is best given by a
hypergeometric distribution
In the textile industry, a manufacturer is interested in the number of blemishes or flaws occurring in each 100 feet of material. The probability distribution that has the greatest chance of applying
to this situation is the
poisson distribution
What are the characteristics of an experiment where the binomial probability distribution is applicable?
The experiment has a sequence of n identical trials, exactly two outcomes are possible on each trial, the probabilities of the outcomes do not change from one trial to another.
What are the properties of a binomial experiment?
The experiment consists of a sequence of n identical trials, each outcome can be referred to as a success or failure, the trials are independent.
When dealing with the number of occurrences of an event over a specified interval of time or space, the appropriate probability distribution is a
poisson distribution
The key difference between the binomial and hypergeometric distribution is that with the hypergeometric distribution
the probability of success changes from trial to trial
|
{"url":"http://quizlet.com/1915029/is-310-flash-cards/","timestamp":"2014-04-21T02:21:58Z","content_type":null,"content_length":"117309","record_id":"<urn:uuid:f90f350b-cb41-4924-8e16-7a3dd443cb5f>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00228-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Woodside, CA Statistics Tutor
Find a Woodside, CA Statistics Tutor
...I also taught math and science there. From 1983 on, I have taught ESL in Redwood City at Wherry Academy. At one time I had a class with students from Taiwan, Mexico and South Korea.
35 Subjects: including statistics, chemistry, English, reading
...I look forward to tutoring for you!I took two semesters of organic chemistry and one semester of physical organic chemistry with an overall average of an A- grade. During my undergraduate
junior and senior years, I was a designated tutor for my chemistry department in organic chemistry. The students who came in regularly and from the beginning saw the greatest gain.
24 Subjects: including statistics, reading, chemistry, calculus
...I enjoy most working with middle and high school students to achieve their goals. I have a Bachelors Degree in Math and MBA and Masters in Education. I have partnered with the Mountain View
and Menlo Park School Districts to develop and maintain their Math curriculum.
24 Subjects: including statistics, calculus, trigonometry, public speaking
...In Algebra 1 we also study graphical methods in order to visualize functions as straight lines or parabolas. Further we learn about factorization and the solutions of quadratic equations.
Seeing many advanced students who struggle with algebra 1 concepts makes me feel good about my algebra 1 students because I help them to learn it properly from the beginning.
41 Subjects: including statistics, calculus, geometry, algebra 1
I have a Ph.D in Theoretical Physics from M.I.T. and work as a researcher at Stanford Physics Department. I have been tutoring for more than 10 years both high school and college students. I
tutor all subjects in both Math and Physics at all levels. In addition, I tutor SAT Math and Verbal.
15 Subjects: including statistics, physics, calculus, geometry
Related Woodside, CA Tutors
Woodside, CA Accounting Tutors
Woodside, CA ACT Tutors
Woodside, CA Algebra Tutors
Woodside, CA Algebra 2 Tutors
Woodside, CA Calculus Tutors
Woodside, CA Geometry Tutors
Woodside, CA Math Tutors
Woodside, CA Prealgebra Tutors
Woodside, CA Precalculus Tutors
Woodside, CA SAT Tutors
Woodside, CA SAT Math Tutors
Woodside, CA Science Tutors
Woodside, CA Statistics Tutors
Woodside, CA Trigonometry Tutors
|
{"url":"http://www.purplemath.com/Woodside_CA_statistics_tutors.php","timestamp":"2014-04-20T11:28:53Z","content_type":null,"content_length":"24262","record_id":"<urn:uuid:8554b4b7-8f08-4e93-8a92-f75a3c0d4efc>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00141-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions
- User Profile for: modemob_@_mail.com
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
User Profile: modemob_@_mail.com
User Profile for: modemob_@_mail.com
UserID: 711564
Name: MoeBlee
Registered: 5/9/11
Total Posts: 1,277
Recent Messages Discussion Posted
1 Re: Are there infinitely many counter examples for the GoldBach sci.math.independent Nov 9, 2012 6:26 PM
Conjecture? Is it possible to find that out?
2 Re: Goedel's 1931 proof is not purely syntactical (?) sci.math.independent Nov 5, 2012 12:03 PM
3 Re: if Cantor diagonal is true, Probability theory takes a hit and sci.math.independent Nov 2, 2012 2:30 PM
wound Chapt28 Summary, Review and Reminders #1220 Co
4 Re: if Cantor diagonal is true, Probability theory takes a hit and sci.math.independent Nov 2, 2012 2:28 PM
wound Chapt28 Summary, Review and Reminders #1220 Correcting Math 3rd ed
5 Re: if Cantor diagonal is true, Probability theory takes a hit and sci.math.independent Nov 2, 2012 1:58 PM
wound Chapt28 Summary, Review and Reminders #1220 Correcting Math 3rd ed
6 Re: if Cantor diagonal is true, Probability theory takes a hit and sci.math.independent Nov 2, 2012 1:49 PM
wound Chapt28 Summary, Review and Reminders #1220 Correcting Math 3rd ed
7 Re: if Cantor diagonal is true, Probability theory takes a hit and sci.math.independent Nov 2, 2012 1:43 PM
wound Chapt28 Summary, Review and Reminders #1220 Correcting Math 3rd ed
8 Re: if Cantor diagonal is true, Probability theory takes a hit and sci.math.independent Nov 1, 2012 5:13 PM
wound Chapt28 Summary, Review and Reminders #1220 Correcting Math 3rd ed
9 Re: if Cantor diagonal is true, Probability theory takes a hit and sci.math.independent Nov 1, 2012 5:09 PM
wound Chapt28 Summary, Review and Reminders #1220 Correcting Math 3rd ed
10 Re: Matheology § 136 sci.math.independent Nov 1, 2012 2:35 PM
Show all user messages
|
{"url":"http://mathforum.org/kb/profile.jspa?userID=711564","timestamp":"2014-04-21T12:32:31Z","content_type":null,"content_length":"15945","record_id":"<urn:uuid:5cd1bc7a-f365-40ab-8f0e-d047cfb46e33>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00306-ip-10-147-4-33.ec2.internal.warc.gz"}
|
2+2=5 (2006)
Rudy Rucker / Terry Bisson
A retired insurance adjuster and a math professor who was fired for telling his students that there are "holes" in the number line pass the time by trying to break a world record for counting. To
achieve... (more)
Actuarial / The Paradox Paradox (2010)
Buzz Mauro
These two extremely short stories by Mauro, part of his thesis project which consisted entirely of original works of mathematical fiction, appeared in the December 2010 issue of Prime Number
Magazine. Actuarial... (more)
Adventure of the Final Problem (1893)
Sir Arthur Conan Doyle
This first Sherlock Holmes story about Professor Moriarty (later to be viewed as Holmes' arch enemy) introduces him as a professor of mathematics who won fame as a young man for his extension of
the binomial... (more)
And He Built a Crooked House (1940)
Robert A. Heinlein
A clever architect designs a house in the shape of the shadow of a tesseract, but it collapses (through the 4th dimension) when an earthquake shakes it into a more stable form (which takes up
very... (more)
Art Thou Mathematics? (1978)
Charles Mobbs
Short story (Analog Science Fiction/Science Fact, October 1978 Vol. 98 No 10) concerning the very nature of mathematical discovery. It was later rewritten in the form of a play, which the author
has... (more)
As Above, So Below (2009)
Rudy Rucker
An LSD of a story - in typical Rucker style - where a computer programmer working with the Mandelbrot set is visited upon by a living UFO in the form of the M-set; the UFO named Ma explains to him
how... (more)
Aurora in Four Voices (1998)
Catherine Asaro
Jato is trapped in Nightingale, a city in permanent darkness, inhabited by mathematical artists who mostly ignore him. Soz arrives to repair her ship, meets Jato, and finds... (more)
Back to Methuselah (1921)
George Bernard Shaw
In this not-very-stageable play in five parts, Shaw expounds on mankind and the theory of evolution, from Adam and Eve in the Garden of Eden to a paradise world 30,000 years in the future. It
turns... (more)
The Balloon Hoax (1844)
Edgar Allan Poe
This is Poe's account of an alleged balloon trip to the moon, in the spirit of the then infamous moon hoax. The balloon rider describes the Earth as appearing concave when 5 miles up. Later,... (
The Bishop Murder Case (1929)
S.S. van Dine (pseudonym of Willard Huntington Wright)
Our hero, Vance, says at the end of this mystery novel: "At the outset I was able to postulate a mathematician as the criminal agent. The difficulty of naming the murderer lay in the fact that
nearly... (more)
BLIT (1988)
David Langford
Goedelian incompleteness is encoded in graphic images that kill viewers. A new kind of infoterrorism spreads. Originally published in INTERZONE #25 Sept/Oct 1988. See also a fake FAQ... (more)
Blowups Happen (1940)
Robert A. Heinlein
A mathematician discovers that his formulas predict that an important new power station poses an extremely grave risk to humanity, and he must convince others of the danger. reprinted in THE
PAST... (more)
Border Guards (1999)
Greg Egan
In a virtual universe shaped like a 3-torus, free from disease and death, Jamil is easily depressed but enjoys playing a game of quantum soccer with his old friends, and one new friend. The new
friend... (more)
The Brink of Infinity (1936)
Stanley G. Weinbaum
A mathematics professor is kidnapped by a madman with a grudge against mathematicians, who threatens dire consequences unless the prof can solve a math riddle he has concocted: by asking ten
questions,... (more)
The Brothers Karamazov (1880)
Fyodor Dostoevsky
In this classic final masterwork by Dostoevsky, the existence of non-Euclidean geometry is mentioned at one point. Although the theme is not explicitly carried throughout the rest of the novel, it
plays... (more)
The Call of Cthulhu (1928)
H.P. Lovecraft
This is the most famous story by Lovecraft, which spawned it's own sub-genre and RPG, called the Cthulhu Mythos. It concerns the investigations of Prof. Francis Wayland Thurston as he
investigates... (more)
The Cambist and Lord Iron (2007)
Daniel Abraham
The story is set in a no-name kingdom, seemingly medieval but with certain modernisms. The cambist of the title is a minor worker, whose daily routine is interrupted by Lord Iron, who has come
to... (more)
The Central Tendency (2003)
Daniel Kaysen
In the first portion of this short story, a teenager and the aunt who took her in when her parents died enjoy doing math together. However, when the girl begins to get advanced training from
Cambridge... (more)
The Chair of Philanthromathematics (1908)
O. Henry (William Sydney Porter)
Jeff Peters and Andy Tucker, con men in the O. Henry stories collected in this volume, are a bit uncomfortable after scoring a really big scam. So they ... (more)
The Circle of Zero (1936)
Stanley G. Weinbaum
Thanks to Vijay Fafat for pointing out this story (with only a little math in it). A character speculates that the laws of probability predict that anything will happen in an infinite amount of
time,... (more)
The Day the Earth Stood Still (1951)
Robert Wise (director) / Harry Bates (story) / Edmund H. North
One must wonder how aliens might communicate with humans when and if they arrive on Earth. In the 1951 film The Day the Earth Stood Still, the extraterrestrial Klaatu (Michael Rennie) introduces
himself... (more)
Division by Zero (1991)
Ted Chiang
Answers the question: what would happen if we found out that mathematics is inconsistent? This is a great piece of mathematical fiction. (Thanks to Frank Chess who pointed it out to me.) Renee... (
Donald in Mathmagic Land (1959)
Hamilton Luske (director)
Disney's Donald Duck takes an adventure to a land where mathematics "comes alive". (Animated short.) I used this video in my 6th grade classroom. The kids enjoyed watching ... (more)
The Dot and the Line: A Romance in Lower Mathematics (1963)
Norton Juster
This picture book describes the love story of two geometrical figures. It was also made into a cartoon by Chuck Jones (available on YouTube). I have loved this book ever since my wonderful
mathematical... (more)
The Einstein See-Saw (1932)
Miles J. Breuer
This is another of the hyperspace stories by Miles Breuer. This time, a mathematical physicist discovers that mattter can be tossed around in and out of space(-time) [see his papers, "A Preliminary
Report... (more)
An Episode of Flatland (1907)
Charles H. Hinton
Hinton, whose biography is a little too weird for me to believe and whose essays on the fourth dimension (see for example A New Era of Thought) leave me wondering how much he really believed that
the fourth... (more)
Euclid and His Modern Rivals (1879)
Charles Lutwidge Dodgson (aka Lewis Carroll)
I have long known that mathematician Charles Dodgson, who wrote the famous Alice stories under the pseudonym "Lewis Carroll", also wrote a book defending Euclid's ancient text as the best for
teaching... (more)
Fantasia Mathematica : Being a Set of Stories, Together With a Group of Oddments and Diversions, All Drawn from ... (1958)
Clifton Fadiman (editor)
This is the first of the two wonderful, classic collections of mathematically flavored literature and such by Clifton Fadiman. (The second is "Mathematical Magpie".) Fortunately, it is now
available... (more)
The Feeling of Power (1957)
Isaac Asimov
An advanced society rediscovers the joys of multipying numbers BY HAND, a forgotten art. It's a gem. The author probably did not realize how quickly the premise of this story (people so
dependent... (more)
Feigenbaum Number (1995)
Nancy Kress
A postdoc who perceives reality different than other people (he sees something like the Platonic ideals people ought to be) works with a professor on combining chaos theory with particle physics.
I'm... (more)
Fermat's Best Theorem (1995)
Janet Kagan
A student comes up with what appears to be a proof of Fermat's Last Theorem. So, she gives it to her professor hoping that he will find a mistake in it (see below). It turns out that the professor
is... (more)
The Fifth-Dimension Catapult (1931)
Murray Leinster
This short novel, originally published in the January 1931 ASTOUNDING, and republished by Damon Knight in SCIENCE FICTION OF THE 30'S (1975), involves a mathematical physicist whose theories get
applied... (more)
Flatland: A Romance of Many Dimensions (1884)
Edwin Abbott Abbott
This is the classic example of mathematical fiction in which the author helps us to think about the meaning of "dimension" through fictional example: a visit to a world with only two spatial
dimensions.... (more)
The Four-Color Problem (1971)
Barrington J. Bayley
A story written in a psychedelic, stream-of-consciousness style a la William S. Burroughs concerning the discovery of previously unknown countries on the Earth whose existence provides a
counter-example... (more)
The Fourth-Dimensional Demonstrator (1935)
Murray Leinster
Uses the fourth dimension as geewhiz terminology to explain a matter duplicator/unduplicator. Includes a tesseract. But if you ignore the story's explanation involving time as ... (more)
Fractions (2011)
Buzz Mauro
A math teacher realizes that the father of one of his students is a man with whom he has had an anonymous sexual relationship. There is some discussion of math education in general, and about
hypothetical... (more)
Funes el Memorioso [Funes, His Memory] (1942)
Jorge Luis Borges
Borges’ short story piece, “Funes, His Memory’ (or in other translations, “Funes, The Memorious”) discusses the phenomenal memory of an acquaintance, Ireneo Funes. Funes, at age nineteen,... (more)
G103 (2006)
Oliver Tearne (director)
This short film "shows a surreal day in the life of a mathematics undergraduate" taking the math course G103 at the University of Warwick. In fact, the Website makes it sound as if it is an
informational... (more)
Gauß, Eisenstein, and the ``third'' proof of the Quadratic Reciprocity Theorem: Ein kleines Schauspiel (1994)
Reinhard C. Laubenbacher / David J. Pengelley
It is presented as a dialogue/drama between Gauss and Eisenstein, talking about the third proof of Gauss's reciprocity theorem (perhaps the actors are supposed to draw symbols in the air to make
the... (more)
The Genius (1901)
Nikolai Georgievich Garin-Mikhailovskii
The Russian Engineer N.G. Mikhailovskii (1852-1906) was also an accomplished author using the pseudonym "N.G. Garin". His short story, "The Genius", tells about an Jewish man who fills his
notebooks with... (more)
Geometry in the South Pacific (1927)
Sylvia Warner
A chapter from Warner's novel (more)
The Geometry of Love (1966)
John Cheever
An engineer is inspired by a passing truck from "Euclid's Dry Cleaning" to apply geometric principles to his own marital problems. He finds that interpreting his family as a triangle has the
advantage... (more)
Getting Rid of Fluff (1908)
Ellis Parker Butler
A humorous story in which two men formulate a mathematical "law of scared dogs" to help in frightening away an annoying dog named Fluff. "I bet if Sir Isaac Newon had had Fluff as long as you have
had... (more)
The Gigantic Fluctuation (1973)
Arkady Strugatsky / Boris Strugatsky
This is an oddly funny story about a man who becomes the "focus point of all miracles in the world", a "gigantic fluctuation". He somehow appears to attract extremely improbably but possible
statistical... (more)
The Gimatria of Pi (2004)
Lavie Tidhar
More ``numerology'' than mathematics, this short story is based on the idea that the decimal expansion of π has predictive value. For example, it is portrayed as predicting the assassination of
Yitzhak... (more)
Glory (2007)
Greg Egan
The story talks about a xenomathematician's quest to understand hieroglyphic tablets on an alien planet containing the mathematical knowledge of an extinct civilization. The extinct aliens had
apparently... (more)
The Gold-Bug (1843)
Edgar Allan Poe
Not only does this very famous Poe story contain a (very little) bit of mathematics in the form of a probabilistic approach to cryptography and a geometric description of the treasure hunt on the
ground... (more)
Hard Times (1853)
Charles Dickens
A suggestion for a novel to be added to your website Mathematical Fiction: In Charles Dickens's "Hard Times", poor schoolgirl Sissy Jupe is struggling in an educational system that is obsessed... (
L' idée fixe du Savant Cosinus (1899)
Christophe -- Georges Colomb
This humorous and profusely illustrated French book is considered to be an early example of what we might today call a "comic book". Cosinus is a mathematician who desperately wants to travel
around... (more)
The Ifth of Oofth (1957)
Walter Trevis
[This] is a short, zany, tall-tale reminiscent of Heinlein's "And He Built A Crooked House". Someone ends up making a 3-dimensional, unfolded projection of a 5-dimensional hypercube, a Penteract.
The... (more)
In Good King Charles's Golden Days (1939)
George Bernard Shaw
Considered by many to be Shaw's worst play, this late example of his witty writing may be of special interest to visitors to this site. It takes place at the home of Sir Isaac Newton where he is
joined... (more)
Inquirendo Island (1886)
Hudor Genone
A very long, thinly disguised satire on sectarian splits in Religion, fairly nicely written. A man lost at sea is ship-wrecked on an island called “Inquirendo Island”, probably a sarcastic aside...
Inside Out (1987)
Rudy Rucker
The story itself is quite disturbing IMO but has the usual zaniness of his other writings. Features quarks as "hypertoroidal vortex rings/loops of superstring", a "cumberquark", "hypertorii with
fuzzy... (more)
The Jester and the Mathematician (2000)
Alan R. Gordon
A short historical fiction piece involving Leonardo of Pisa ("Fibonacci"). Interesting story which features Fibonacci talking briefly about his rabbit-series/sequence, his abacus-duel with Pisa's
foremost... (more)
The Judge's House (1914)
Bram Stoker
A math student seeks a quiet place to study for his exams but winds up battling an angry ghost. Stoker certainly knew mathematical words to throw around (e.g. quaternions and conic sections), but
this... (more)
Kavanagh (1849)
Henry Wadsworth Longfellow
In the fourth chapter of this novel by the famous poet, the school teacher of the title tries to convince his skeptical wife that mathematics can be poetic by reading to her from Lilavati. (This
one chapter was published separately as Englishwoman's Domestic Magazine, 3 (1855), pages 257–62, and so I will consider it both as a short story and as an excerpt from a novel.) (more)
Kim Possible (Episode: Mathter and Fervent) (2007)
Jim Peronto (script)
This episode of the Disney animated TV series "Kim Possible" is a comic book parody featuring a mathematical villain. As an English assignment, Kim Possible and Ron Stoppable have to write a
paper... (more)
The Lost Books of the Odyssey (2008)
Zachary Mason
The introduction to this novel is a work of pseudo-scholarship, explaining how the chapters to follow were decoded by an NSA cryptographer with the help of the author. The intro contains references
to... (more)
Lovesong of the Electric Bear (2005)
Snoo Wilson (playwright)
This play about Alan Turing, told from the point of view of Porgy, his teddy bear, was produced as part of the Summer 2005 season at the Potomac Theater Project in Maryland. Turing certainly had
both... (more)
The Mathematician’s Nightmare: The Vision of Professor Squarepunt (1954)
Bertrand Russell
This short story by [renowned philosopher and mathematician Bertrand] Russell is a mild satire on numerology, taking [Sir Arthur] Eddington’s obsession with it and spinning it as a “nightmare”... (
Maths on a Plane (2008)
Phil Trinh
This story, about a student flirting with the attractive woman in the seat next to him on a plane, won the student category of the 2008 New Writers Award from Cambridge University's ``Plus+
Magazine''.... (more)
Mersenne's Mistake (2008)
Jason Earls
This is a nice piece of mathematical fiction in which the mathematician/monk Marin Mersenne encounters a demon with amazing mathematical skills. Like the other stories by Earls, this seems to be
designed to showcase the interesting numbers which he has found using computer algebra tools. (more)
Message Found in a Copy of Flatland (1983)
Rudy Rucker
This is the story that answers the age old question: "What if Flatland was in the basement of a Pakistani restaurant in London?". The answer is scarier than you might think, especially when you
realize... (more)
Monday Begins on Saturday (1966)
Arkady Strugatsky / Boris Strugatsky
In this parody of the activity at Soviet research thinktanks, mathematics underlies the "science" of magic. Math is rarely discussed in depth and a knowledge of Russian fairy tales helps the reader
to... (more)
Mortal Immortal (1833)
Mary Wollstonecraft Shelley
This fantasy story by the author of Frankenstein, about a man who drinks a half dose of a potion that bestows immortality, is only borderline mathematical fiction. The only arguably mathematical
part... (more)
Morte di un matematico napoletano (1992)
Mario Martone (director)
"This movie describes the last day in [the] life of a famous Italian mathematician: Renato Caccioppoli. He was a fascinating and discussed person in Naples' political and cultural life. [A]
member... (more)
Mrs. Warren's Profession (1894)
George Bernard Shaw
This is Shaw's notorious play about poverty and prostitution, the "profession" of the title. (The play itself was not performed in public in the UK until 1925.) Mrs. Warren has made her fortune...
Musgrave Ritual (1893)
Sir Arthur Conan Doyle
A tiny bit of mathematics is used by Sherlock Holmes to solve this mystery. In it, he ties together the disappearance of a housemaid, the discovery of the dead body of the chief butler and a
strange poem... (more)
Narrow Valley (1966)
R.A. Lafferty
This is a madcap story about a tract of land which is topologically folded through a shamanic incantation. Contains descriptions of some physical effects but explicitly states that the topological
defect... (more)
Nena's Math Force (2005)
Susan Jarema
This picture book for children, which is available for free online and also in print, tells the story of a girl who is upset when her math teacher requires the class to do arithmetic without a
calculator.... (more)
A New Golden Age (1981)
Rudy Rucker
In this story, and in our world as well, mathematicians lament the fact that legislators cannot sufficiently appreciate mathematics and that this adversely affects the funding of their science. To
address this... (more)
Not a Chance (2009)
Peter Haff
A student harangues his physics professor about the possibility that all mathematical proofs are incorrect. His argument is based on the supposed uncertainty about the validity of proofs of the
Four Color... (more)
Notes from the Underground (1864)
Fyodor Dostoevsky
Part I involves an unnamed rather crazed and unreliable narrator (generally known as "the Underground Man") raving and rambling against life, the universe, and everything. A few... (more)
An Old Arithmetician (1885)
Mary Eleanor Wilkins Freeman
The title character of this short story, which appeared in the September 1885 issue of Harper's Weekly, is an old, uneducated woman who loves computing (with chalk and slate): You have always been
very... (more)
On the Quantum Theoretic Implications of Newton's Alchemy (2007)
Alex Kasman
A postdoc at the mysterious "Institute for Mathematical Analysis and Quantum Chemistry" is surprised to learn that his work on Riemann-Hilbert Problems is being used as part of his employer's crazy
alchemy... (more)
The One Best Bet (1911)
Samuel Hopkins Adams
The story is about an amateur detective who uses some elementary geometric triangulation to foil an assassination. The last paragraph is a great touch, “Why, Governor, you’re giving me too much
credit.... (more)
Oracle (2000)
Greg Egan
The protagonist, Robert Stoney is a british mathematician who worked on German codes during WW II, was greatly affected by the death of a close friend, and was later persecuted for his
homosexuality. ... (more)
Our Feynman Who Art in Heaven... (2007)
Paul Di Filippo
A religious cult based on the Standard Model (of high energy physics) has its headquarters in a tesseract. This story, which is certainly more physical than mathematical, appears in the "Plumage
from Pegasus" column in the February 2007 issue of Fantasy and Science Fiction and is available for free at their website. (more)
Pi in the Sky (1983)
Rudy Rucker
The story is about a family which finds an alien artifact on a beach while on vacation: a smooth cone with patterns of stripes on its surface and which produces sound in the same pattern. It turns
out... (more)
The Plattner Story (1896)
Herbert George Wells
Gottfrieb Plattner disappears after an explosion for nine days. Upon return, he recounts a strange tale of a parallel world. More mathematically interesting, he discovers that he is now
left-handed,... (more)
The Power of Words (1845)
Edgar Allan Poe
A very short work (two-pages long!) in which two angels discuss the divine implications of our ability to mathematically determine the future consequences of an action, especially wave
propagation.... (more)
The Problem of Cell 13 (1907)
Jacques Futrelle
"The story which introduces Professor S. F. X. van Dusen, professional scientific supergenius, who lends his talents to solving baffling mysteries. He is described as primarily ... (more)
Professor Morgan's Moon (1899)
Stanley Waterloo
A young mathematician asks for the hand of a senior mathematician's beautiful (and clever) daughter, but is refused on the grounds that his inability to support her financially was a mathematical
certainty.... (more)
The Purloined Letter (1845)
Edgar Allan Poe
"This is the third and last C. Auguste Dupin mystery. The Prefect of Paris police explains a very delicate situation to Dupin, involving a royal letter whose possession grants its bearer great... (
Quarantine (1977)
Arthur C. Clarke
For safety's sake, all organic life on the planet Earth has been wiped out by automatic defenses. The investigator looking into this regrettable turn of affairs in an otherwise promising species
discovers... (more)
The Rapture of the Nerds (2004)
Cory Doctorow / Charles Stross
This story is set in Stross's "Accelerando" series, due for publication in novel form in 2005, offering a worm's eye view of the "Vinge singularity", the supposed moment in the coming decades... (
Reading by Numbers (2009)
Aidan Doyle
Elementary number theory and some superstitious numerology underlie this story, which appeared in the November 11, 2009 issue of the online Fantasy Magazine (though I would never describe this
story as... (more)
Recess (Episode: A Genius Among Us) (2000)
Brian Hamill
This episode of Disney's Saturday Morning cartoon "Recess" is clearly a parody of the film "Good Will Hunting". I hope this doesn't lower anyone's opinion of me...but I personally liked it better
than... (more)
The Remarkable Case of Davidson's Eyes (1895)
Herbert George Wells
Rather than seeing what is actually around him in England, Davidson sees events occurring on a rock off of the Antipodes Island. The explanation offered includes the notion of non-flat geometries
for... (more)
Riding the Crocodile (2005)
Greg Egan
A couple from the race of “Amalgam” wanted to carry out one project before choosing to die after a life spanning tens of thousands of years: Establishing contact with the elusive race called
“Aloof”.... (more)
The Romance of Mathematics: Being the Original Researches of a Lady Professor of Girtham College... (1886)
Peter Hampson Ditchfield
The Reverend Peter Hampson Ditchfield (1854-1930) was the author of many novels and histories, including this odd piece that claims to be compiled from the lecture notes and diaries of a "lady
professor",... (more)
Round the Moon (1870)
Jules Verne
This early science fiction novel about space travel (published originally in French, of course) contains two chapters with explicit (and very nice) mathematical content. In Chapter 4 (A Little
Algebra)... (more)
Rucker - A Life Fractal by Eli Halberstam (1991)
John Allen Paulos
Like Lem's De Impossibilitate Vitae and Prognoscendi , this is a work of fiction that takes the form of a book review. (As Paulos explains in his introduction, "Reviewing [a] book which hasn't been
written... (more)
The Secret Number (2000)
Igor Teper
In this very cute story, a mathematician who believes that there is an integer between 3 and 4 tries to convince his psychiatrist that he is not crazy. The idea is not very deep, but it is well
handled... (more)
Silas P. Cornu's Dry Calculator (1898)
Henry Hering
A very hilarious short story about a man who wants to build a mechanical calculator to evaluate logarithms but has success building a machine that can do only addition and multiplication. On the
other... (more)
Singleton (2002)
Greg Egan
This story involves a physicist and a mathematician who have a child -- well, sort of -- that they have specially designed to remain in a "classical" state (as opposed to a quantum superposition of
states)... (more)
The Sirdar's Chess-Board (1885)
Elizabeth Wormeley Latimer
A military bride travelling in Afghanistan is surprised when a mystic is able to cut up a chess board ("with three snips of my scissors") and put it back together so that the number of squares has
increased... (more)
Six Thought Experiments Concerning the Nature of Computation (2005)
Rudy Rucker
These are six very short stories, a few of which have mathematical themes. In the first story, Lucky Number, a game programmer spots some "lucky numbers" spray painted on a train. On a whim, he
uses... (more)
Space (1911)
John Buchan
This mystical story, as recounted by a lawyer, is about a brilliant mathematician ("an erratic genius who had written some articles in Mind on that dreary subject, the mathematical conception of
infinity",... (more)
The Spacetime Pool (2008)
Catherine Asaro
Janelle, recently graduated from MIT with a degree in math, is pulled through the "branch cut" between two universes to an alternate Earth where two sword wielding brothers rule half the world.
There,... (more)
The Square Root of Pythagoras (1999)
Paul Di Filippo/Rudy Rucker
Pythagoras has been granted the magical power of five numbers. Along the way he discusses his theorem, the five Platonic solids, and his general philosophy about numbers and the universe. But he...
The Story of Yung Chang (1900)
Ernest Bramah (Ernest Bramah Smith)
Before the invention of multiplication tables, a Chinese idol merchant must sell his wares individually, even if someone wishes to purchase a large amount, since he has no way to determine how much
money... (more)
Sylvie and Bruno Concluded (1893)
Lewis Carroll
The sequel to his somewhat popular book "Sylvie and Bruno" never achieved the popularity of the original. This lack of success may or may not be related to Chapter VII (entitled "Mein Herr") of
the... (more)
The Siege Of The "Lancashire Queen" (1906)
Jack London
Describes how the capture of illegal shrimp-poachers becomes a problem of triangular geometry and relative speeds of chase. In particular, the pirates, trapped on a ship, the chasing posse and the
point... (more)
Tiger by the Tail (1951)
A.G. Nourse
A pocketbook contains a gateway to another universe, and a group of unlikely heroes tries to save ours from the aliens there by reaching in and grabbing it. This is a cute short story, with a
not-particularly-sound... (more)
The Time Axis (1949)
Henry Kuttner
This was published as an Ace paperback in 1965. I don't think I have a copy of the paperback in my collection, but I have the original magazine publication, in the January 1949 issue of Startling
Stories.... (more)
The Time Machine (1895)
Herbert George Wells
This famous early science fiction novel opens with a clever (and, if you think ahead to the role of Minkowski Space in special relativity, prophetic) lecture on "the fourth dimension". Of course,
discussions... (more)
Topsy-turvy (Sans Dessus Dessous) (1889)
Jules Verne
The members of the Gun Club want to use a giant cannon's recoil to change the Earth's rotation axis, so they can exploit the presumed coalfields at the North Pole. An unfortunate side effect is
that... (more)
Turing's Apples (2008)
Stephen Baxter
Story about a far-away civilization transmitting a complex message in all directions, containing a software program (“Turing machine”) which ends up creating von Neumann machines with one
specific... (more)
Twenty-seven Uses for Imaginary Numbers (2009)
Buzz Mauro
A teenage boy's discovery of the joys of Euler's formula coincides with the awakening of his homosexual desires. The author's mathematical understanding is very good, and the story reminded me of
young... (more)
Understand (1991)
Ted Chiang
"An experimental treatment for a drowning victim turns him into an incredible supergenius. Mathematics is mentioned several times in passing, and twice the supergenius explicitly uses it ... (more)
The Universal Library [Die Universalbibliothek] (1901)
Kurd Lasswitz
This early "science fiction" story explores the notion of a library containing every possible five hundred page book and an English translation appears in the classic mathematical fiction
collection Fantasia... (more)
Unreasonable Effectiveness (2003)
Alex Kasman
"Unreasonable Effectiveness" reminds me of a classic Arthur C. Clarke style short story. It has exactly enough mathematics done correctly and a twist that boggles the mind at the end. To be fair...
Until Tomorrow, Then (2010)
Shaun Hamill (writer and director)
A short film about a young mathematician obsessed with working out the "rate the universe is running down" so that he can determine time that the universe will end. One of the two other
characters... (more)
The Valley of Fear (1916)
Sir Arthur Conan Doyle
Having introduced Sherlock Holmes' most famous enemy, Professor Moriarty, as a mathematician in an earlier story, Doyle provides us with just a small glimpse of his mathematical genius (as opposed
to... (more)
Vanishing Point (1959)
C.C. Beck
The short story is another take on the true nature of reality and one man's quest to unmask it. It is more an idea piece than a full-fledged development. An artist, Carter, who is a trained
mathematician... (more)
A Victim of Higher Space (1917)
Algernon Blackwood
This is another of the John Silence tall-tales, this time involving a man who learns to visualize 4-dimensional space and then starts slipping in and out of the hyperspace. As he describes it,
"This... (more)
War and Peace (1869)
Lev Tolstoy
Tolstoy's famous novel about...well, about war and peace (!) contains long passages explaining an analogy he makes between history and calculus. In particular, he argues that we should view history
as... (more)
What Are the Odds? (2006)
Justin Spitzer (writer) / Matthew Tritt (director)
Two extremely nerdy strangers who keep running into each other in New York City are surprised to learn that they both "study applied mathematics" and are attending the same conference on
"stochastic processes... (more)
The Wonderful Visit (1895)
Herbert George Wells
"An angel, who normally inhabits a fourth dimensional world (with curvature instead of gravitation!) falls into our three dimensional world." (more)
Zilkowski's Theorem (2003)
Karl Iagnemma
This is a story of a love triangle with a definite mathematical twist. Henderson's roommate, Czogloz, steals away his girlfriend, Milla, when all three were math graduate students. Years later,
seeking... (more)
|
{"url":"http://kasmana.people.cofc.edu/MATHFICT/search.php?go=yes&medium=web&orderby=title","timestamp":"2014-04-20T01:17:10Z","content_type":null,"content_length":"90167","record_id":"<urn:uuid:4d1aa0fa-1743-4d20-bef3-f7d717d36212>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00630-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Yahoo Groups
[Speed cubing group] Re: Ron's puzzle and Chris's interesting observation(s)
Expand Messages
View Source
Jake wrote:
> I figured out that no sequence of moves can ever 'leave behind'
> (when executed twice) just a flip of two edges or just a twist
> of two corners.
Maybe I'm lifting this out of context, or you are using some extra
restrictions, but this is not true. Try:
U' L2 R2 D' F2 D' L' R F U2 F' L' R U'
twice for a edge pair flip, or this classic:
L' F2 L U' B2 U
twice for a corner pair twist.
View Source
d_j_salvia wrote:
"How did you find these algorithms? More to the point, how did you
recognize them when they'd been done once?"
-I didn't actually invent them myself (I wish I had that capacity though). I just figured out what I wanted the cube to look like, so that perfoming twice the sequence that got us there would
produce the desired outcome. Then I used "Cube Explorer" to find the algorithms that would generate the cube that I had constructed. I don't know if you've had the joy of working with 'cube
explorer' before, but I've found it to be extremely useful. You can download this at www.home.t-online.de/home/Kociemba/cube.htm or, as I usually do, you can follow the link off of Jessica
Fridrich's web page.
Jaap wrote:
"Maybe I'm lifting this out of context, or you are using some extra
restrictions, but this is not true. Try:
U' L2 R2 D' F2 D' L' R F U2 F' L' R U'
twice for a edge pair flip, or this classic:
L' F2 L U' B2 U
twice for a corner pair twist."
-you are right. I was working with sequences that were only allowed to flip/twist pieces if they also moved them. With this restriction my statement is correct...but I was wrong in assuming that
this covered all possible sequences, so I was forgetting that I'd restricted myself....I'm feeling a little ignorant at this point.
--- message from "d_j_salvia" <
> attached:
Get Free Email from
Select your own custom email address for FREE! Get
w/No Ads, 6MB, POP & more!
[Non-text portions of this message have been removed]
Your message has been successfully submitted and would be delivered to recipients shortly.
|
{"url":"https://groups.yahoo.com/neo/groups/speedsolvingrubikscube/conversations/topics/4122?viscount=-30&l=1","timestamp":"2014-04-19T17:24:10Z","content_type":null,"content_length":"43813","record_id":"<urn:uuid:d0ffbeee-119a-4204-be82-1ce759f73cef>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00609-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Vanishing Vertices And Triangles
Good evening everyone,
I've ventured into stitching multiple height maps together in order to get a rather massive landscape. I have everything pretty much figured out, except for one strange anomaly.
When I debug and look at both the start and end positions of my vertices for three of the four height map sections, it shows it starts exactly where it should (-1000000, -1000000) and ends where it
should, at (0,0) but when I run it, an entire row and column of triangles never render, clipping the height map section by that amount, essentially giving a start of (-1000000, -1000000) and an end
of (-2000, -2000).
Only one height map section correctly renders, all the others, however, are affected by this anomaly to some degree, some more than others.
While I've patched it with some interpolation, collision is affected by this, and overall, I'm not satisfied in not knowing the root cause.
As I am not entirely sure where to even begin looking, I can't post much code, so if you can point me in the right direction, I'll provide code as requested.
Wishing you all well,
|
{"url":"http://www.dreamincode.net/forums/topic/326934-vanishing-vertices-and-triangles/","timestamp":"2014-04-21T00:52:06Z","content_type":null,"content_length":"254697","record_id":"<urn:uuid:13c6d2d0-8b2b-4896-bfff-bdf39573d96e>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00088-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Roselle Park Geometry Tutor
Find a Roselle Park Geometry Tutor
...After working in NYC for six years and after the 9/11 incident in 2001, I changed my career goals to teaching. I became a teaching assistant in the Hillsborough School District for two years
while obtaining my teaching certification in K-8. I then continued my education by obtaining a master's ...
53 Subjects: including geometry, reading, algebra 1, English
...I have two graduate degrees. I have a graduate degree in physics and electrical engineering. I have been teaching at the college level for the past 8 years.
10 Subjects: including geometry, calculus, physics, algebra 1
...I am very patient and I tutor through constant interaction and real life examples. I had won over 30 national and international mathematics and physics competitions. I have a professional yet
warm presence and attitude.Real Experience - I have tutored over 300 students in all areas of math including ACT/SAT math, algebra, geometry, pre-calculus, Algebra Regents, and more.
30 Subjects: including geometry, reading, English, writing
...I taught English to Foreign Exchange students in college (West Point) I taught English to Foreign Area officers in the US ARMY. I am a current tennis instructor and have played tennis my whole
life. I focus my instruction on basic skills such as racquet grip, foot drills and movement, turning ...
73 Subjects: including geometry, reading, Spanish, English
I have a passion for teaching math and science and I look forward to helping you with your schoolwork and/or preparation for AP, SAT and other tests. I am a recent graduate of Rutgers University
with a B.S. cum laude in Mechanical Engineering and a National AP Scholar. I have been tutoring for the last 10 years both professionally and as a volunteer.
22 Subjects: including geometry, chemistry, calculus, physics
Nearby Cities With geometry Tutor
Clark, NJ geometry Tutors
Colonia geometry Tutors
Cranford geometry Tutors
Elizabeth, NJ geometry Tutors
Elizabethport, NJ geometry Tutors
Hillside, NJ geometry Tutors
Kenilworth, NJ geometry Tutors
Linden, NJ geometry Tutors
Millburn geometry Tutors
New Providence, NJ geometry Tutors
Rahway geometry Tutors
Roselle, NJ geometry Tutors
Springfield, NJ geometry Tutors
Union Center, NJ geometry Tutors
Union, NJ geometry Tutors
|
{"url":"http://www.purplemath.com/roselle_park_geometry_tutors.php","timestamp":"2014-04-18T04:02:13Z","content_type":null,"content_length":"24126","record_id":"<urn:uuid:838cf7b4-6e78-429c-850d-4e1567457329>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00072-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Matchstick Puzzler
RAY: This puzzler is from the matchstick series. Imagine, if you will, that you have four matchsticks of equal length. From those, you can easily make a square.
At each of the vertices, there is a right angle, or a ninety-degree angle, so there are four right angles.
TOM: I'm with you!
RAY: Now, using those same four matchsticks, make not 4 but 16 ninety-degree angles.
You might say, "Can I use the third dimension?" You can use any dimension you want.
I should mention, you are not allowed to fold, bend, break staple, or mutilate the matches in any other way.
TOM: Can you use mirrors?
*Leave your response here! Please!
RAY: No, but that shows you're on the right track.
|
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=17755","timestamp":"2014-04-20T06:14:12Z","content_type":null,"content_length":"18123","record_id":"<urn:uuid:031338ec-e22a-41da-acd4-8ae06aff61a5>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00204-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Types of Symmetry
When dealing with the graphs of relations, there are times when we recognize certain types of symmetry. There are three types we address here: x-axis symmetry, y-axis symmetry, and origin symmetry.
x-axis Symmetry
We say that a graph has “x-axis symmetry”, or is “symmetric about the x-axis”, when its graph would look the same if it were reflected about the x-axis. So a graph is symmetric about the x-axis if
whenever the point
Here is an example of a graph without x-axis symmetry.
(Note that with the exception of
Y-axis Symmetry
We say that a graph has “y-axis symmetry”, or is “symmetric about the y-axis”, when its graph would look the same if it were reflected about the y-axis. So, a graph is symmetric about the y-axis if
whenever the point
Here is an example of a graph without y-axis symmetry:
This doesn’t have y-axis symmetry because, if we reflected it about the y-axis, we would end up with this:
Here’s another example of a graph that doesn’t have y-axis symmetry, and its reflection:
In order to determine if a graph has y-axis symmetry, we must perform one simple test. We need to check that all
Origin Symmetry
A graph is said to have “origin symmetry”, or is “symmetric around the origin”, when its graph would look the same if it were rotated 180 degrees around the origin. So, a graph is symmetric about the
origin if whenever the point
Here are a couple of examples of graphs that are symmetric about the origin:
The test for origin symmetry is very similar to the ones for x-axis symmetry and y-axis symmetry. We need to check that all
Example 1
Does the graph of the function
Solution 2
We need to check to see if
Therefore, we see that the function
Example 3
Does the graph of the function
Solution 3
Again, we check to see if
This time, since there are values of not have y-axis symmetry.
Example 4
Does the graph of the function
Solution 4
We must check to see if
Example 5
Does the graph of the function
Solution 5
We check to see if
|
{"url":"http://www.uiowa.edu/~examserv/mathmatters/tutorial_quiz/geometry/typesofsymmetry.html","timestamp":"2014-04-20T08:33:30Z","content_type":null,"content_length":"13030","record_id":"<urn:uuid:4d91abf0-1f87-45fa-ab4d-7c41bdb156a1>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00350-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Cartesian closed category
Cartesian closed category
A category $\mathcal{C}$ with finite products is said to be Cartesian closed if each of the following functors has a right adjoint
1. 1.
$\textbf{0}:\mathcal{C}\to\textbf{1}$, where 1 is the trivial category with one object $0$, and $\textbf{0}(A)=0$
2. 2.
the diagonal functor $\delta:\mathcal{C}\to\mathcal{C}\times\mathcal{C}$, where $\delta(A)=(A,A)$, and
3. 3.
for any object $B$, the functor $(-\times B):\mathcal{C}\to\mathcal{C}$, where $(-\times B)(A)=A\times B$, the product of $A$ and $B$.
Furthermore, we require that the corresponding right adjoints for these functors to be
1. 1.
any functor $\textbf{1}\to\mathcal{C}$, where $0$ is mapped to an object $T$ in $\mathcal{C}$. $T$ is necessarily a terminal object of $\mathcal{C}$.
2. 2.
the product (bifunctor) $(-\times-):\mathcal{C}\times\mathcal{C}\to\mathcal{C}$ given by $(-\times-)(A,B)\mapsto A\times B$, the product of $A$ and $B$.
3. 3.
In other words, a Cartesian closed category $\mathcal{C}$ is a category with finite products, has a terminal objects, and has exponentials. It can be shown that a Cartesian closed category is the
same as a finitely complete category having exponentials.
Examples of Cartesian closed categories are the category of sets Set ( terminal object: any singleton; product: any Cartesian product of a finite number of sets; exponential object: the set of
functions from one set to another) the category of small categories Cat (terminal object: any trivial category; product object: any finite product of categores; exponential object: any functor
category), and every elementary topos.
• 1 S. Mac Lane, Categories for the Working Mathematician, Springer, New York (1971).
CompleteCategory, IndexOfCategories
Mathematics Subject Classification
no label found
Added: 2007-01-20 - 05:48
|
{"url":"http://planetmath.org/CartesianClosedCategory","timestamp":"2014-04-18T03:32:56Z","content_type":null,"content_length":"62623","record_id":"<urn:uuid:93aa7964-e452-41a9-973b-d3430c8f557c>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00210-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Depreciation - Mathematics
This section explores depreciation. By the end you should be able to calculate the depreciated value and the amount of depreciation. You should have prior knowledge of working with percentages,
decreasing percentages and good calculator skills.
Depreciation is the opposite to compound interest; while compound interest increases the initial value each year by a given percentage, depreciation decreases the initial value each year by a given
percentage. Depreciation is useful in many areas. Since most items lose their value after a certain period of time when bought in this case knowledge to calculate depreciation can be very useful.
Example 1
Consider this example; The price of a new car is £25000. The price depreciates by 18% each year (p.a). Suppose we wanted to find its value at the end of 3 years. Here let’s use a table to calculate
starting from year 1.
Year 1 £25000 x 0.82 = 20500
Year 2 £20500 x 0.82 = 16810
Year 3 £16810 x 0.82 = 13784
From the table we can see that 3 years after buying the car at £25000 the car is valued at;
It has depreciated by;
…and the depreciation is;
Depreciation formula
Looking at the previous example we saw that;
That must mean that the calculation for the value after 10 years (let’s say) would be;
We can conclude that the formula for depreciated value is;
Using the depreciation formula
In the formula example, we’re going to use the formula we’ve discovered above.
Using the depreciation formula find the value of a TV after 15 months if it’s price while new is £1400 and it depreciates by 8% per month.
Let’s summarise the known variables and the unknown first then work out. We know that;
We can see that after 15 months the TV is valued at £400.82. It has dropped value by £999.18.
You have to be careful when working with depreciation. In some cases, depreciation may be calculated monthly, fortnight, weekly and maybe daily. So you have to be careful when working with problems
involving depreciation and adjust the rate and number of calculations carefully.
6 Responses
other than that its cool
Like or Dislike: 0 0
□ Hi,
Sorry about that. I will look into your suggestion.
Like or Dislike: 0 0
2. one does not simply change the website.
Like or Dislike: 0 0
3. Okay, seen as you’ve supplied all the information on how a retard can calculate a given set of numbers, could you tell viewers how one obtains the “given percentage” of depreciation, you’ve not
included how one is suppose to work out this percentage, but you’ve given the percentages then told people how to calculate it from there … sigh
Like or Dislike: 0 0
□ I think I see what you mean. The percentage is always an assumed value. You always have a record about how the original value depreciate over time i.e the percentage.
I think it will be necessary to include this part so thanks for the suggestion. In the car example above to calculate the percentage of depreciation over time you must have some record of how
the prices have changed in the past. It is very easy. For example
In year 1 the price changed from £25000 to £20500
In year 2 it then changed from £20500 to £16810
In year 3 it then changed from £16810 to £13684
Using this past record we can work out by what percentage the value of the car decreases each year. Simply divide the current year value by the previous year and then subtract it from 1 since
it is depreciation and multiply that by 100 for example;
In year 2 the price dropped from £20500 to £16810 so;
16810/20500 = 0.82, so; 1-0.82 = 0.18 (Note: 80% = 0.82 and 0.18 = 18%). Now multiply 0.18 by 100 to get 18%. You can see that it is the same as in the car example.
You can also use the depreciation formula derived above by rearranging it to find r%. You will need to have a few other values as well such as the initial value of the car for example. the
depreciated value for example the value of the car in year 5.
I will definitely include this so thanks for the suggestion.
Like or Dislike: 0 0
4. Hey thanks man. And these guys above a just being dicks. I thought it was pretty good and helped a lot.
Thanks again
Like or Dislike: 0 0
|
{"url":"http://mathematicsi.com/depreciation/","timestamp":"2014-04-18T03:04:53Z","content_type":null,"content_length":"68945","record_id":"<urn:uuid:e5f43fe0-d726-4741-ba7b-0f1b7a5213b8>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00202-ip-10-147-4-33.ec2.internal.warc.gz"}
|
General works
W.W. Rouse Ball and H.S.M. Coxeter, Mathematical Recreations and Essays, 12th ed. (1974); John H. Conway, On Numbers and Games (1976); Henry E. Dudeney, 536 Puzzles and Curious Problems, ed. by
Martin Gardner (1967); Kobon Fujimura, The Tokyo Puzzles, trans. from the Japanese, ed. by Martin Gardner (1978); Martin Gardner, more than a dozen collections of mathematical recreations, including
Mathematical Circus (1979, reissued 1981) and Wheels, Life and Other Mathematical Amusements (1983); Douglas R. Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid (1979, reprinted 1980); J.A.H.
Hunter, Challenging Mathematical Teasers (1980); J.A.H. Hunter and Joseph S. Madachy, Mathematical Diversions, rev. ed. (1975); David A. Klarner, ed., The Mathematical Gardner (1981), a tribute to
Martin Gardner; Boris A. Kordemsky, The Moscow Puzzles, trans. from the Russian by Albert Parry, ed. by Martin Gardner (1972); Maurice Kraitchik, Mathematical Recreations, 2nd rev. ed. (1953); Joseph
S. Madachy, Madachy’s Mathematical Recreations (1979); T.H. O’Beirne, Puzzles and Paradoxes (1965, reprinted 1984); Hubert Phillips (“Caliban”), Question Time: An Omnibus of Problems for a Brainy Day
(1938); Frederik Schuh, The Master Book of Mathematical Recreations, trans. from the Dutch, ed. by T.H. O’Beirne (1968); Hugo Steinhaus, Mathematical Snapshots, 3rd U.S. ed. (1983; originally
published in Polish, 1954).
Books on special topics
(Cube puzzles): John Ewing and Czes Kosniowski, Puzzle It Out: Cubes, Groups, and Puzzles (1982); P.A. MacMahon, New Mathematical Pastimes (1921); James G. Nourse, The Simple Solution to Cubic
Puzzles (1981); Don Taylor and Leanne Rylands, Cube Games (1981); Ferdinand Winter, Das Spiel der 30 Bunten Würfel (1934). (Dissections): V.G. Boltyanskii, Equivalent and Equidecomposable Figures
(1963; originally published in Russian, 1956); Harry Lindgren, Geometric Dissections (1964). (Fallacies): V.M. Bradis, V.L. Minkovskii, and A.K. Kharcheva, Lapses in Mathematical Reasoning (1963;
originally published in Russian, 2nd ed., 1959); Edwin A. Maxwell, Fallacies in Mathematics (1959, reprinted 1969). (Fibonacci numbers): Verner E. Hoggatt, Jr., Fibonacci and Lucas Numbers (1969). (
Fractals): Benoit B. Mandelbrot, The Fractal Geometry of Nature, rev. ed. (1983). (Graphs): Oystein Ore, Graphs and Their Uses (1963). (Logical inference): Maxey Brooke, 150 Puzzles in
Crypt-Arithmetic, 2nd rev. ed. (1969); Hubert Phillips (“Caliban”), My Best Puzzles in Logic and Reasoning (1961); Raymond M. Smullyan, What Is the Name of This Book? The Riddle of Dracula and Other
Logical Puzzles (1978), and This Book Needs No Title (1980); George J. Summers, Test Your Logic: 50 Puzzles in Deductive Reasoning (1972); Clarence R. Wylie, 101 Puzzles in Thought and Logic (1957).
(Manipulative puzzles and games): Maxey Brooke, Fun for the Money (1963); Solomon W. Golomb, Polyominoes (1965); Ronald C. Read, Tangrams: 330 Puzzles (1965); T. Sundara Row, Geometric Exercises in
Paper Folding, 2nd ed. (1905, reprinted 1966); Sid Sackson, A Gamut of Games (1969, reissued 1982). (Mazes): Walter Shepherd, Mazes and Labyrinths: A Book of Puzzles, rev. ed. (1961). (Polytopes):
H.S.M. Coxeter, Regular Polytopes, 3rd ed. (1973); H. Martyn Cundy and A.P. Rollett, Mathematical Models, 2nd ed. (1961, reprinted 1977); L. Fejes Tóth, Regular Figures (1964). (Probability): Warren
Weaver, Lady Luck: The Theory of Probability (1963, reprinted 1982).
|
{"url":"http://media-2.web.britannica.com/eb-diffs/300/422300-13705-109429.html","timestamp":"2014-04-20T13:23:32Z","content_type":null,"content_length":"5309","record_id":"<urn:uuid:a4f3acec-38aa-4991-a350-94512b2a2c04>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00313-ip-10-147-4-33.ec2.internal.warc.gz"}
|
the encyclopedic entry of geoid
The geoid is that equipotential surface which would coincide exactly with the mean ocean surface of the Earth, if the oceans were in equilibrium, at rest, and extended through the continents (such as
with very narrow canals). According to C.F. Gauss, who first described it, it is the "mathematical figure of the Earth," a smooth but highly irregular surface that corresponds not to the actual
surface of the Earth's crust, but to a surface which can only be known through extensive gravitational measurements and calculations. Despite being an important concept for almost two hundred years
in the history of geodesy and geophysics, it has only been defined to high precision in recent decades, for instance by works of P. Vaníček and others. It is often described as the true physical
figure of the Earth, in contrast to the idealized geometrical figure of a reference ellipsoid.
The geoid surface is irregular, unlike the reference ellipsoids often used to approximate the shape of the physical Earth, but considerably smoother than Earth's physical surface. While the latter
has excursions of +8,000 m (Mount Everest) and −11,000 m (Mariana Trench), the total variation in the geoid is less than 200 m (-106 to +85 m)compared to a perfect mathematical ellipsoid.
Sea level, if undisturbed by currents and weather, would assume a surface equal to the geoid. If the continental land masses were criss-crossed by a series of tunnels or narrow canals, the sea level
in these canals would also coincide with the geoid. In reality the geoid does not have a physical meaning under the continents, but geodesists are able to derive the heights of continental points
above this imaginary, yet physically defined, surface by a technique called spirit leveling.
Being an equipotential surface, the geoid is by definition a surface to which the force of gravity is everywhere perpendicular. This means that when travelling by ship, one does not notice the
undulations of the geoid; the local vertical is always perpendicular to the geoid and the local horizon tangential component to it. Likewise, spirit levels will always be parallel to the geoid.
Note that a GPS receiver on a ship may, during the course of a long voyage, indicate height variations, even though the ship will always be at sea level (tides not considered). This is because GPS
satellites, orbiting about the center of gravity of the Earth, can only measure heights relative to a geocentric reference ellipsoid. To obtain one's geoidal height, a raw GPS reading must be
corrected. Conversely, height determined by spirit leveling from a tidal measurement station, as in traditional land surveying, will always be geoidal height. Some GPS receivers have a grid
implemented inside where they can obtain the WGS84 geoid height over the WGS ellipsoid from the current position. Then they are able to correct the height above WGS ellipsoid to the height above
WGS84 geoid. In that case when the height is not zero on a ship it is because of the tides.
Simplified Example
The gravity of field of the earth is neither perfect nor uniform. A flattened ellipsoid is typically used as the idealized earth, but let's simplify that and consider the idealized earth to be a
perfect sphere instead. Now, even if the earth were perfectly spherical, the strength of gravity would not be the same everywhere, because the density (and therefore the mass) varies throughout our
blue marble. This is due to magma distributions, mountain ranges, deep sea trenches and so on.
So if that perfect sphere were then covered in water, the water would not be the same height everywhere. Instead, the water level would be higher or lower depending on the particular strength of
gravity in that location.
Spherical harmonics representation
Spherical harmonics are often used to approximate the shape of the geoid. The current best such set of spherical harmonic coefficients is EGM96 (Earth Gravity Model 1996), determined in an
international collaborative project led by NIMA. The mathematical description of the non-rotating part of the potential function in this model is
V=frac{GM}{r}left(1+{sum_{n=2}^{n_{max}}}left(frac{a}{r}right)^n{sum_{m=0}^n} overline{P}_{nm}(sinphi)left[overline{C}_{nm}cos mlambda+overline{S}_{nm}sin mlambdaright]right),
where $phi$ and $lambda$ are geocentric (spherical) latitude and longitude respectively, $overline{P}_{nm}$ are the fully normalized Legendre functions of degree $n$ and order $m$, and $overline{C}_
{nm}$ and $overline{S}_{nm}$ are the coefficients of the model. Note that the above equation describes the Earth's gravitational potential $V$, not the geoid itself, at location $phi,;lambda,;r,$ the
co-ordinate $r$ being the geocentric radius, i.e, distance from the Earth's centre. The geoid is a particular equipotential surface, and is somewhat involved to compute. The gradient of this
potential also provides a model of the gravitational acceleration. EGM96 contains a full set of coefficients to degree and order 360, describing details in the global geoid as small as 55 km (or 110
km, depending on your definition of resolution). One can show there are
sum_{k=2}^n 2k+1 = n(n+1) + n - 3 = 130,317
different coefficients (counting both $overline{C}_{nm}$ and $overline{S}_{nm}$, and using the EGM96 value of $n=n_{max}=360$). For many applications the complete series is unnecessarily complex and
is truncated after a few (perhaps several dozen) terms.
New even higher resolution models are currently under development. For example, many of the authors of EGM96 are working on an updated model that should incorporate much of the new satellite gravity
data (see, e.g., GRACE), and should support up to degree and order 2160 (1/6 of a degree, requiring over 4 million coefficients).
Precise geoid
The 1990s saw important discoveries in theory of geoid computation. The
Precise Geoid Solution
and co-workers improved on the
approach to geoid computation. Their solution enables millimetre-to-centimetre
in geoid
, an
improvement from previous classical solutions .
External links
See also
|
{"url":"http://www.reference.com/browse/geoid","timestamp":"2014-04-20T07:25:47Z","content_type":null,"content_length":"87851","record_id":"<urn:uuid:9e06b95f-9182-4e83-802e-03c2edd77445>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00064-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Physics Forums - View Single Post - Velocity of block as it hits a table sliding down a ramp?
1. The problem statement, all variables and given/known data
A block of steel is placed 120cm from the end of a sheet of metal. The sheet is raised just until the block begins to slide, which occurs when the block as 35cm above the surface of the table. the
block is allowed to slide freely down the sheet until it slides onto the table. If the coefficient of friction is 0.20, what is the velocity of the block as it hits the table?
2. Relevant equations
Fxnet= Fg - Ff
Fynet= Fn + Fg = 0
3. The attempt at a solution
Okay so I know that the angle would be 17 using trig.. But I don't exactly know how the masses cancel out here since we aren't given a mass. I know I have to find the acceleration of the block, and
by finding that, I can find the final velocity using one of the five sisters. What I started out with was..
Fxnet = Fg + Ff
= mgsinθ + (Mu x Fn)
= mgsinθ + (Mu x mgcosθ)
= sin θ + (mu x cosθ) <--The thing is I'm not sure if I can do this or not..
=sin 17 + (0.2)(cos17)
= 0.1 N
then I don't know what to do after this..
|
{"url":"http://www.physicsforums.com/showpost.php?p=2478344&postcount=1","timestamp":"2014-04-20T11:23:50Z","content_type":null,"content_length":"9701","record_id":"<urn:uuid:c511921e-bb9f-4519-acf5-a4c4084afeba>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00571-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Ways to prove an inequality
up vote 7 down vote favorite
It seems that there are three basic ways to prove an inequality eg $x>0$.
1. Show that x is a sum of squares.
2. Use an entropy argument. (Entropy always increases)
3. Convexity.
Are there other means?
Edit: I was looking for something fundamental. For instance Lagrange multipliers reduce to convexity. I have not read Steele's book, but is there a way to prove monotonicity that doesn't reduce to
entropy? And what is the meaning of positivity?
Also, I would not consider the bootstraping method, normalization to change additive to multiplicative inequalities, and changing equalities to inequalities as methods to prove inequalities. These
method only change the form of the inequality, replacing the original inequality by an (or a class of) equivalent ones. Further, the proof of the equivalence follows elementarily from the definition
of real numbers.
As for proofs of fundamental theorem of algebra, the question again is, what do they reduce too? These arguments are high level concepts mainly involving arithmetic, topology or geometry, but what do
they reduce to at the level of the inequality?
Further edit: Perhaps I was looking too narrowly at first. Thank you to all contributions for opening to my eyes to the myriad possibilities of proving and interpreting inequalities in other
ca.analysis-and-odes mp.mathematical-physics sums-of-squares convexity
7 How would you call bootstrapping arguments where, for example, to prove $A\le B$ you show $A\le B+\epsilon$ for all $\epsilon$? Or (what Tao refers to as the tensor product trick) you show that
for all $n$, $A^n\le CB^n$ for some constant $C$ independent of $n$? – Andres Caicedo Jul 7 '10 at 15:27
add comment
9 Answers
active oldest votes
I don't think your question is a mathematical one, for the question about what do all inequalities eventually reduce to has a simple answer: axioms. I interpret it as a metamathematical
question and still I believe the closest answer is the suggestion above about using everything you know.
An inequality is a fairly general mathematical term, which can be attributed to any comparison. One example is complexity hierarchies where you compare which of two problems has the
highest complexity, can be solved faster etc. Another one is studying convergence of series, that is comparing a quantity and infinity, here you find Tauberian theory etc. Even though you
did not specify in your question which kind of inequalities are you interested in primarily, I am assuming that you are talking about comparing two functions of several real/complex
variables. I would be surprised if there is a list of exclusive methods that inequalities of this sort follow from. It is my impression that there is a plethora of theorems/principles/
tricks available and the proof of an inequality is usually a combination of some of these. I will list a few things that come to my mind when I'm trying to prove an inequality, I hope it
helps a bit.
First I try to see if the inequality will follow from an equality. That is to recognize the terms in your expression as part of some identity you are already familiar with. I disagree with
you when you say this shouldn't be counted as a method to prove inequalities. Say you want to prove that $A\geq B$, and you can prove $A=B+C^2$, then, sure, the inequality follows from
using "squares are nonnegative", but most of the time it is the identity that proves to be the hardest step. Here's an example, given reals $a_1,a_2,\dots, a_n$, you want to prove that $$\
sum_{i,j=1}^n \frac{a_ia_j}{1+|i-j|} \geq 0.$$ After you realize that sum is just equal to $$\frac{1}{2\pi}\cdot\int_{0}^{2\pi}{\int_{0}^{1}{\frac{1-r^{2}}{1-2r\cos(x)+r^{2}}\cdot |\sum_{k
=1}^{n}{a_{k}e^{-ikx}}|^{2}dx dr}}$$ then, yes, everything is obvious, but spotting the equality is clearly the nontrivial step in the proof.
In some instances it might be helpful to think about combinatorics, probability, algebra or geometry. Is the quantity $x$ enumerating objects you are familiar with, the probability of an
event, the dimension of a vector space, or the area/volume of a region? There is plenty of inequalities that follow this way. Think of Littlewood-Richardson coeficients for example.
up vote 8
down vote Another helpful factor is symmetry. Is your inequality invariant under permuting some of its variables? While I don't remember right now the paper, Polya has an article where he talks
accepted about the "principle of nonsufficient reason", which basically boils down to the strategy that if your function is symmetric enough, then so are it's extremal points (there is no
sufficient reason to expect assymetry in the maximal/minimal points, is how he puts it). This is similar in vein to using Langrange multipliers. Note however that sometimes it is the
oposite of this that comes in handy. Schur's inequality, for example is known to be impossible to prove using "symmetric methods", one must break the symmetry by assuming an arbitrary
ordering on the variables. (I think it was sent by Schur to Hardy as an example of a symmetric polynomial inequality that doesn't follow from Muirhead's theorem, see below.)
Majorization theory is yet another powerful tool. The best reference that comes to mind is Marshall and Olkin's book "Inequalities: Theory of Majorization and Its Applications". This is
related to what you call convexity and some other notions. Note that there is a lot of literature devoted to inequalities involving "almost convex" functions, where a weaker notion than
convexity is usually used. Also note the concepts of Schur-convexity, quasiconvexity, pseudoconvexity etc. One of the simplest applications of majorization theory is Muirhead's inequality
which generalizes already a lot of classical inequalities and inequalities such as the ones that appear in competitions.
Sometimes you might want to take advantage of the duality between discrete and continuous. So depending on which tools you have at your disposal you may choose to prove, say the inequality
$$\sum_{n=1}^{\infty}\left(\frac{a_1+\cdots+a_n}{n}\right)^p\le \left(\frac{p}{p-1}\right)^p \sum_{n=1}^{\infty}a_n^p$$ or it's continuous/integral version $$\int_{0}^{\infty}\left(\frac
{1}{x}\int_0^x f(t)dt\right)^p dx \le \left(\frac{p}{p-1}\right)^p \int_{0}^{\infty} f(x)^p dx$$ I've found this useful in different occasions (in both directions).
Other things that come to mind but that I'm too lazy to describe are "integration preserves positivity", uncertainity principle, using the mean value theorem to reduce the number of
variables etc. What also comes in handy, sometimes, is searching if others have considered your inequality before. This might prevent you from spending too much time on an inequality like
$$\sum_{d|n}d \le H_n+e^{H_n}\log H_n$$ where $H_n=\sum_{k=1}^n \frac{1}{k}$.
Minor comment about your first example, it is "easily" seen to be nonnegative, because it is the sum of all entries of the positive-definite matrix: $B = aa^T \circ L$, where $L_{ij} = 1
/(1+|i-j|)$ and $\circ$ denotes the Hadamard product. The only semi-trivial part is to prove posdef of $L$, but that can be done in numerous ways, the more advanced of which might use an
integral representation. – Suvrit Sep 28 '10 at 7:47
How would you prove the posdef of L? – Gjergji Zaimi Sep 28 '10 at 19:20
Hmm, the first idea that comes to my mind is: use $\varphi(x) = 1/(1+|x|)$ is a positive-definite function (it is in fact infinitely divisible), which itself can be proved using $|x|$ is
conditionally negative-def. (though proving the latter might require a simple integral!); – Suvrit Oct 1 '10 at 13:00
add comment
Enumerative combinatorics also provides an important source of inequalities. The most basic is that if you can show that $X$ is the cardinality (or dimension) of some set $A$, then you
automatically have $X \geq 0$. This can become non-trivial if one also possesses a very different description of $X$, e.g. as an alternating sum. Similarly, if you can establish a surjection
(resp. injection) from a set of cardinality (or dimension) $X$ to a set of cardinality (or dimension) $Y$, then you have proven that $X \geq Y$ (resp. $X \leq Y$). (The dimension version of
this argument is the basis for the polynomial method in extremal combinatorics.)
up vote The integrality gap is also an important way to improve an inequality by exploiting discreteness. For instance, if you know that $X > Y$, and that $X, Y$ are integers, then this
14 down automatically implies the improvement $X \geq Y+1$. More generally, if you know that $X, Y$ are both divisible by $q$, then we have the further improvement $X \geq Y+q$. A good example of
vote this principle is in applying the Chevalley-Warning theorem, that asserts that the number $X$ of roots of a low-degree polynomial over a finite field $F_p$ is divisible by $p$. If one has
one trivial solution ($X \geq 1$), then this automatically boosts to $X \geq p$, which implies the existence of at least one non-trivial solution also (and in fact gives at least $p-1$ such
8 There's a real equivalent to the enumerative combinatorics method of showing that the quantity counts something. This would be: show that the quantity represents some probability. Then
it's automatically non-negative (and in fact, it's between 0 and 1, so you get two inequalities) – Peter Shor Jul 8 '10 at 15:07
add comment
Steele in his book Cauchy-Schwarz Master Class identifies three pillars on which all inequalities rest
1. Monotonicity
2. Positivity
3. Convexity, which he says is a second-order effect (Chap 6)
These three principles apply to inequalities whether they be
up vote 8 1. discrete or integral or differential
down vote 2. additive or multiplicative
3. in simple or multi-dimensional spaces (matrix inequalities).
In Chap 13 of the book, he shows how majorization and Schur's convexity unify the understanding of multifarious inequalities.
I am still not done reading the book but it also mentions a normalization method which can convert an additive inequality to a multiplicative one.
2 Steele's book should be required reading for all mathematics students regardless of level. One of the reasons calculus classes are in such a sorry state in this country is the complete
inability of students to work with basic inequalities. I know it was definitely my main weakness when beginning to learn real analysis. – Andrew L Jul 7 '10 at 16:48
@Andrew L : Can you please tone down the relentless negativity? I'd assume that by "this country" you mean the US (somehow I can't imagine a resident of Canada, Germany, China, etc
19 assuming that everyone was from there). Calculus classes in the US are hardly "in a sorry state". Many people that post here both live in the US and teach calculus on a regular basis.
I don't see why you have to insult us. – Andy Putman Jul 7 '10 at 20:29
add comment
To prove that $A\leq B$, maximize the value of $A$ subject to the condition that $B$ is constant using, for example, Lagrange multipliers. This does wonders on most classical
up vote 4 inequalities.
down vote
there is an argument that such a maximization (so called fastest descend) is actually a heat flow or entropy argument. Just from what I heard. – Colin Tan Jul 7 '10 at 15:07
4 I have no idea what 'heat flow' or 'entropy argument' mean in this context. Lagrange multipliers, on the other hand, are known to every undergraduate... – Mariano Suárez-Alvarez♦ Jul 7
'10 at 15:11
Mariano: Yes, but the question is not about how to effectively teach inequalities to undergraduates, but about the tools we have to prove them. And most classical inequalities can be
deduced from sum of squares arguments, or some form of convexity results. I actually think this question shows a nice insight. – Andres Caicedo Jul 7 '10 at 15:30
Well, most classical inequalities also follow more or less effortlessly from a little, straightforward, idealess computation using Lagrange multipliers, too. That is my point. –
Mariano Suárez-Alvarez♦ Jul 7 '10 at 15:32
I agree with your point (not the "idealess" part). I am reading "convexity arguments" loosely, so that inequalities proved using first (and sometimes second) derivative arguments are
included here. In that sense, Lagrange multipliers is part of that framework. – Andres Caicedo Jul 7 '10 at 16:59
add comment
I don't think the question has a meaningful answer unless the OP specifies a class of inequalities he has in mind. The problem is that almost any mathematical statement can be restated
as an inequality.
up vote 4 Take, for instance, the fundamental theorem of algebra. It is equivalent to the inequality
down vote "the number of roots of a non-constant polynomial with complex coefficients is greater than zero". Over ten different proofs of this inequality are discussed in this thread. It seems
that none of them has anything to do with positivity, convexity or entropy arguments.
1 You're right; but given the question, I think we can infer that the OP means inequalities of the form $A\leq B$ where $A$ and $B$ are functions of (possibly several) real variables
written in some fixed language, e.g. $(\times, +, -, \operatorname{sin}, ...)$. – Daniel Litt Jul 7 '10 at 15:47
Daniel, probably you are right. Still I think it would help if the question were a bit more specific. – Andrey Rekalo Jul 7 '10 at 16:17
add comment
This doesn't seem like a real question, but here's an answer anyway. Every mathematician should pick up "Inequalities" by Hardy, Littlewood, and Polya. The book lays out a systematic
up vote 3 approach to proving "elementary" inequalities, and it was a surprise to me just how much commonness and beauty there is in the field. It's an old book, but all the more readable for it.
down vote
add comment
I have recently been working on stuff related to the Golod-Shafarevich inequality. So here is a crazy way to prove an inequality. Let $G$ be a finitely generated group and $\left< X|R \
right>$ a presentation of $G$ with $|X|$ finite. Let $r_i$ be the number of elements in $R$ with degree $i$ with respect to the Zassenhaus $p$-filtration. Assume $r_i$ is finite for all
$i$. Let $H_R(t)=\sum_{i=1}r_it^i$.
A group is called Golod -Shafarevich (GS) if there is $0 < t_0 < 1$ such that $1-|X|t_0+H_R(t_0)<0$. Golod and Shafarevich proved that GS groups are infinite. Zelmanov proved their pro-$p$
completion contains a non-abelian free pro-$p$ group.
up vote 3
down vote So suppose $G$ is a group with such a presentation and suppose you know that its pro-$p$ completion does not contain a non-abelian free pro-$p$ group or for some other reason $G$ is not
GS. Then $1-|X|t+H_R(t) \geq 0$ for all $0 < t <1$.
Now, I am sure no one ever used the Golod-Shafarevich this way and I doubt anyone will. But maybe I am wrong. In any case, this does not seem to fit any of the methods that were mentioned
1 +1 the answer is really cute. But shouldn't one of your inequalities have an equal sign appended? The negation of $<$ is $\geq$, not $>$. – Willie Wong Jul 8 '10 at 20:09
Thanks! I fixed it. – Yiftach Barnea Jul 8 '10 at 22:00
add comment
Look at the proofs of known inequalities and solutions to related problems:
I believe the best approach to studying inequalities is proving as many of them as possible. There is a section at ArtOfProblemSolving forum that is a good source of them:
up vote 2 down
vote http://www.artofproblemsolving.com/Forum/viewforum.php?f=32
One may also like to read a classic book on inequalities by Hardy, Littlewood, and Pólya:
1 There's also this somewhat underdeveloped Tricki page: tricki.org/article/Techniques_for_proving_inequalities – Mark Meckes Jul 7 '10 at 16:45
There's also Wikipedia's article titled "list of inequalities", which, unlike the category, is organized. en.wikipedia.org/wiki/List_of_inequalities – Michael Hardy Jul 7 '10 at
add comment
Use other known inequalities. e.g. re-arrangement inequality, Cauchy-Schwartz, Jensen, Hölder.
up vote 2 down vote
8 There is a mild generalization of this technique: use everything you know! :) – Mariano Suárez-Alvarez♦ Jul 7 '10 at 15:12
But aren't these inequalities fundamentally a result of either convexity or sum of squares? – Colin Tan Jul 7 '10 at 15:20
add comment
Not the answer you're looking for? Browse other questions tagged ca.analysis-and-odes mp.mathematical-physics sums-of-squares convexity or ask your own question.
|
{"url":"http://mathoverflow.net/questions/30898/ways-to-prove-an-inequality?answertab=votes","timestamp":"2014-04-16T13:41:48Z","content_type":null,"content_length":"112691","record_id":"<urn:uuid:9f271f6d-6f36-4fde-927d-c88bad13e28a>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00197-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Catalyst Athletics Forums - View Single Post - Just reread CrossFit Journal on Metabolic Conditioning...
What is the true incidence of Crossfit induced rhabdo cases? Of the number of Crossfit workouts that have occurred over the past ten years, how many individuals have actually, medically been
diagnosed with rhabdo? How many workouts have been done by how many people that have not had an occurrence of rhabdo.
I don't deny that 107 cases that you have been able to data mine over the internet is significant, but what does that mean from a statistical or epidemiological point of view? The only person that I
have known to have been diagnosed with a mild case of rhabdo was the result of a Spinning class.
PS...I can't view your attachments from this computer, so I really don't know what the 107th case is at this point, but it really would not change my viewpoint.
|
{"url":"http://www.catalystathletics.com/forum/showpost.php?p=87575&postcount=1002","timestamp":"2014-04-21T12:17:38Z","content_type":null,"content_length":"12975","record_id":"<urn:uuid:411d53a4-b68f-467c-a1f2-fb4ed11e50a2>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00582-ip-10-147-4-33.ec2.internal.warc.gz"}
|
We study the quantum Zeno effect in quantum statistical mechanics within the operator algebraic framework. We formulate a condition for the appearance of the effect in W*-dynamical systems, in
terms of the short-time behaviour of the dynamics. Examples of quantum spin systems show that this condition can be effectively applied to quantum statistical mechanical models. Furthermore, we
derive an explicit form of the Zeno generator, and use it to construct Gibbs equilibrium states for the Zeno dynamics. As a concrete example, we consider the X-Y model, for which we show that a
frequent measurement at a microscopic level, e.g. a single lattice site, can produce a macroscopic effect in changing the global equilibrium. PACS - Klassifikation: 03.65.Xp, 05.30.-d, 02.30. See
the corresponding papers: Schmidt, Andreas U.: "Zeno Dynamics of von Neumann Algebras" and "Mathematics of the Quantum Zeno Effect" and the talk "Zeno Dynamics in Quantum Statistical Mechanics" -
Presentation at the Università di Pisa, Pisa, Itlay 3 July 2002, the conference on Irreversible Quantum Dynamics', the Abdus Salam ICTP, Trieste, Italy, 29 July - 2 August 2002, and the
University of Natal, Pietermaritzburg, South Africa, 14 May 2003. Version of 24 April 2003: examples added; 16 December 2002: revised; 12 Sptember 2002. See the corresponding papers "Zeno
Dynamics of von Neumann Algebras", "Zeno Dynamics in Quantum Statistical Mechanics" and "Mathematics of the Quantum Zeno Effect"
|
{"url":"http://publikationen.ub.uni-frankfurt.de/solrsearch/index/search/searchtype/authorsearch/author/%22Andreas+U.+Schmidt%22/start/0/rows/10/subjectfq/X-Y+model","timestamp":"2014-04-19T03:12:20Z","content_type":null,"content_length":"21466","record_id":"<urn:uuid:1d48d903-dbeb-4e31-8297-14e5ec85b37a>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00437-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Creating Professional Tables Using Estpost
Original Code
* This command should install the package estout.
ssc install estout
estpost is one of several commands included in the pacakge estout.
In my previous post I delt with the primary use of estout, to create post estimation tables fit for publication.
I pretty much read through the high quality documentation listed on the estout site and made my own examples.
I will probably to the same with estpost.
I strongly recommend reading through the documentation found on the website http://repec.org/bocode/e/estout/estpost.html.
See (http://www.econometricsbysimulation.com/2012/11/professional-post-estimation-tables.html)
In this post I will deal with the estpost command which takes the results of several common summary statistics commands and converts them to formats that will be used by esttab.
If you end up using this command to create your tables please cite the author Jann, Ben.
It is obvious a lot of work went into creating this package with probably very little reward:
Citing estout
Thanks for citing estout in your work. For example, include a note such as
"Tables produced by estout (Jann 2005, 2007)."
and add
Jann, Ben (2005): Making regression tables from stored estimates. The Stata Journal 5(3): 288-308.
Jann, Ben (2007): Making regression tables simplified. The Stata Journal 7(2): 227-244.
to the bibliography.
estpost is compatible with the following commands:
From: http://repec.org/bocode/e/estout/hlp_estpost.html#commands
command description
summarize post summary statistics
tabstat post summary statistics
ttest post two-group mean-comparison tests
prtest post two-group tests of proportions
tabulate post one-way or two-way frequency table
svy: tabulate post frequency table for survey data
correlate post correlations
ci post confidence intervals for means,
proportions, or counts
stci post confidence intervals for means
and percentiles of survival time
margins post results from margins (Stata 11)
* Let us start with some made up data!
set obs 10
* Let's imagine that we are interested in 10 different products
gen prod_num = _n
* Each product has a base price
gen prod_price = rbeta(2,5)*10
* In six different markets
expand 6
sort prod_num
* I subtract the 1 because _n starts at 1 but mod (modular function) starts at 0
gen mercado = mod(_n-1, 6)
* Each mercado adds a fixed value to each product based on local demand
gen m_price = rbeta(2,5)*5 if prod_num == 1
bysort mercado: egen mercado_price = sum(m_price)
* Get rid of the reference price
drop m_price
* There is 104 weeks of observations for each product and mercado
expand 104
sort prod_num mercado
gen week = mod(_n-1,104)
* Each week there is a shared shock to all of the prices
gen week_p = rnormal()*.5 if mercado==0 & prod_num==1
bysort week_p: egen week_price=sum(week_p)
drop week_p
* Finally there is a product, market, and week specific shock that is indepentent of other shocks.
gen u = rnormal()*.5
* Let's generate some other random characteristics.
gen prod_stock = rnormal()
* Seasonality
gen seasonality = rnormal()
* Now let's calculate the price
gen price = prod_price + mercado_price + week_price + prod_stock + seasonality + u
* Finally in order to make things interesting let's say that our data set is incomplete because of random factors which occure 10% of the time.
gen missing = rbinomial(1,.1)
drop if missing==1
drop missing
* And to drop our unobservables
drop u week_price mercado_price prod_price
* Now that we have created our data, let's do some descriptive statistics that we will create tables from
* First the basic summarize command
estpost summarize price seasonality prod_stock
* This in effect tells us what statistics can be pulled from the summarize command.
* We can get more stats (such as medians) by using the detail option
estpost summarize price seasonality prod_stock, detail
* We can now create a table of estimates
esttab ., cells("mean sd count p1 p50 p99") noobs compress
* To save the table directly to a rtf (word compatible format)
esttab . using tables.rtf, replace cells("mean sd count p1 p50 p99") noobs compress
* Or excel
esttab . using tables.csv, replace cells("mean sd count p1 p50 p99") noobs compress
* Note the . after esttab is important. I don't know why, but it does not work without it.
* Now imagine we would like to assemble a table that has the mean price seasonality and prod_stock by mercado
estpost tabstat price seasonality prod_stock, statistics(mean sd) columns(statistics) listwise by(mercado)
* Everything looks like it is working properly up to this point but for some reason I can't get the next part to work.
esttab, main(mean) aux(sd) nostar unstack noobs nonote nomtitle nonumber
* The table only has one column when it should have 6 for the six different markets.
estpost tab prod_num
esttab . using tables.rtf , append cells("b(label(freq)) pct(fmt(2)) cumpct(fmt(2))")
* There is also a correlate function that will post information about the correlation between the first variable listed after corr and the other variables.
estpost corr price week seasonality mercado
esttab . using tables.rtf , append cell("rho p count")
* Unfortunately the alternative option, to generate the matrix of correlations that we would expect is not working either.
* This is the sad fate of these user written programs (such as Ian Watson's tabout), Stata becomes updated and they do not.
* I would find it very annoying to have to update code constantly so that a general public that I do not know can continue to use my code for free.
* However, perhaps if people are nice and send the author some emails requesting an update he might be encouraged to come back to his code knowing it is being used.
* His contact information listed on the package tutorial is Ben Jann, ETH Zurich, jann@soz.gess.ethz.ch.
2 comments:
1. Well, while testing your code, I got this:
. * This in effect tells us what statistics can be pulled from the summarize command.
. * We can get more stats (such as medians) by using the detail option
. estpost summarize price seasonality prod_stock, detail
| e(count) e(sum_w) e(mean) e(Var) e(sd) e(skewn~)
price | 5567 5567 4.179372 3.569746 1.889377 .0742438
seasonality | 5567 5567 .0044847 .991086 .995533 -.0089968
prod_stock | 5567 5567 -.0048699 .9900933 .9950343 .0260169
| e(kurto~) e(sum) e(min) e(max) e(p1) e(p5)
price | 2.940709 23266.56 -3.707268 10.74922 -.0037667 1.163866
seasonality | 2.932526 24.96656 -3.446181 3.271273 -2.343164 -1.645892
prod_stock | 2.989126 -27.11094 -3.376829 3.447521 -2.334032 -1.617825
| e(p10) e(p25) e(p50) e(p75) e(p90) e(p95)
price | 1.760584 2.858054 4.16412 5.445695 6.600117 7.375202
seasonality | -1.280259 -.6522442 .0002313 .6745954 1.298612 1.623599
prod_stock | -1.288978 -.669455 -.0005103 .6562532 1.295787 1.65028
| e(p99)
price | 8.694004
seasonality | 2.366776
prod_stock | 2.321529
. * We can now create a table of estimates
. esttab ., cells("mean sd count p1 p50 p99") noobs compress
current estimation results do not have e(b) and e(V)
2. Dear Bach.
This web page seems to be useful for you.
|
{"url":"http://www.econometricsbysimulation.com/2012/11/creating-professional-tables-using.html","timestamp":"2014-04-17T12:30:08Z","content_type":null,"content_length":"206651","record_id":"<urn:uuid:9ee0ad0b-c685-4000-bcc6-7b470d5f19e3>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00447-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
a missile is launched from the ground. its height, h(x), can be represented by a quadratic function in terms of time, x, in seconds. after 1 second, the missile is 103 feet in the air; after 2
seconds, it is 192 feet in the air. find the height, in feet, of the missile after 5 seconds in the air. help!
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/4fc4d47be4b0964abc872d7e","timestamp":"2014-04-19T22:51:10Z","content_type":null,"content_length":"35086","record_id":"<urn:uuid:7433b9c1-dc37-4c97-b602-f52e657c815f>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00320-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Algebraic and Geometric Topology 2 (2002), paper no. 28, pages 591-647.
Product and other fine structure in polynomial resolutions of mapping spaces
Stephen T. Ahearn, Nicholas J. Kuhn
Abstract. Let Map_T(K,X) denote the mapping space of continuous based functions between two based spaces K and X. If K is a fixed finite complex, Greg Arone has recently given an explicit model for
the Goodwillie tower of the functor sending a space X to the suspension spectrum \Sigma ^\infty Map_T(K,X).
Applying a generalized homology theory h_* to this tower yields a spectral sequence, and this will converge strongly to h_*(Map_T(K,X)) under suitable conditions, e.g. if h_* is connective and X is
at least dim K connected. Even when the convergence is more problematic, it appears the spectral sequence can still shed considerable light on h_*(Map_T(K,X)). Similar comments hold when a cohomology
theory is applied.
In this paper we study how various important natural constructions on mapping spaces induce extra structure on the towers. This leads to useful interesting additional structure in the associated
spectral sequences. For example, the diagonal on Map_T(K,X) induces a `diagonal' on the associated tower. After applying any cohomology theory with products h^*, the resulting spectral sequence is
then a spectral sequence of differential graded algebras. The product on the E_\infty -term corresponds to the cup product in h^*(Map_T(K,X)) in the usual way, and the product on the E_1-term is
described in terms of group theoretic transfers.
We use explicit equivariant S-duality maps to show that, when K is the sphere S^n, our constructions at the fiber level have descriptions in terms of the Boardman-Vogt little n-cubes spaces. We are
then able to identify, in a computationally useful way, the Goodwillie tower of the functor from spectra to spectra sending a spectrum X to \Sigma ^\infty \Omega ^\infty X.
Keywords. Goodwillie towers, function spaces, spectral sequences
AMS subject classification. Primary: 55P35. Secondary: 55P42.
DOI: 10.2140/agt.2002.2.591
E-print: arXiv:math.AT/0109041
Submitted: 29 January 2002. Accepted: 25 June 2002. Published: 25 July 2002.
Notes on file formats
Stephen T. Ahearn, Nicholas J. Kuhn
Department of Mathematics, Macalester College
St.Paul, MN 55105, USA
Department of Mathematics, University of Virginia
Charlottesville, VA 22903, USA
Email: ahearn@macalester.edu, njk4x@virginia.edu
AGT home page
Archival Version
These pages are not updated anymore. They reflect the state of . For the current production of this journal, please refer to http://msp.warwick.ac.uk/.
|
{"url":"http://www.emis.de/journals/UW/agt/AGTVol2/agt-2-28.abs.html","timestamp":"2014-04-16T04:18:11Z","content_type":null,"content_length":"4349","record_id":"<urn:uuid:7c3f75d5-a140-4fe3-83d1-ca4fc1b7e79c>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00598-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Physics of a ball [Archive] - OpenGL Discussion and Help Forums
So I'm rendering a sphere, and I want it to move around like a ball does. Does anyone know where I can find information on the physics involved as to the movement of a ball?
OH, Thanx so much for the gay replys, mmmmmm....people are so helpful
Well, that's not exactly what I was looking for. I want to know how a ball rolls around on a flat surface. If I give the ball momentum in a direction I want to make it roll realistically.
Well, if it rolls on a flat surface, that is, a plane, all you have to check is that it's momentum matches its speed. Well, if you know its radius, where's the problem ? speed = radius * momentum ?!
Oh, or maybe you're considering a momentum around a non-horizontal axis ? Like a spinning top ? Then it's more tough. You could look for any basic text on solid mechanics.
Powered by vBulletin® Version 4.2.2 Copyright © 2014 vBulletin Solutions, Inc. All rights reserved.
|
{"url":"http://www.opengl.org/discussion_boards/archive/index.php/t-137443.html","timestamp":"2014-04-21T07:47:48Z","content_type":null,"content_length":"8316","record_id":"<urn:uuid:d7d5395b-e37e-4d6f-a256-428e65f71721>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00045-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Wave Equation using seperation of variables
March 26th 2009, 01:54 PM #1
Junior Member
Jan 2008
Wave Equation using seperation of variables
Hello i was wondering if someone could help me out and show me how to solve the following heat wave problem.We dont have a book for this class and i couldnt make it to my last class so i dont
knoow how to go about this question
Solve the wave equation using seperation of variables and show thte the solution reduces to D'Alembert's solution
$\mu_{tt} = c^2\mu_{xx}$
$\mu(0,t) = 0$
$\mu(L,t) = 0$
$\mu(x,0) = f(x)$
$\mu_t(x,0) = 0$
Hello i was wondering if someone could help me out and show me how to solve the following heat wave problem.We dont have a book for this class and i couldnt make it to my last class so i dont
knoow how to go about this question
Solve the wave equation using seperation of variables and show thte the solution reduces to D'Alembert's solution
$\mu_{tt} = c^2\mu_{xx}$
$\mu(0,t) = 0$
$\mu(L,t) = 0$
$\mu(x,0) = f(x)$
$\mu_t(x,0) = 0$
I would feel better if you would at least show that you know what "separation of variables" is!
Assume $\mu(x,t)= X(x)T(t)$. Then $\mu_{tt}= XT^{''}$ and $\mu_{xx}= X^{''} T$ so the equation becomes $XT^{''} = c^2X^{''} T$. Dividing on both sides of the equation by XT gives $\frac{T^{''} }
{T}= c^2\frac{X^{''} }{X}$. Since the left side depends only on t and the right side only on x, to be equal they must each equal a constant.
That gives two ordinary equations:
$c^2\frac{X^{''} }{X}= \lambda$
$c^2X^{''} = \lambda X$
$\frac{T^{''} }{T}= \lambda$
$T^{''} = \lambda T$.
You should be able to find the general solution to each of those, depending on $\lambda$ of course.
What must $\lambda$ be in order that X(0)= 0 and X(L)= 0?
I have a feeling I have repeated what your text book says!
Last edited by mr fantastic; March 28th 2009 at 01:48 PM. Reason: Added a missing latex tag. Replaced " with ^{''}
March 28th 2009, 07:27 AM #2
MHF Contributor
Apr 2005
|
{"url":"http://mathhelpforum.com/differential-equations/80806-wave-equation-using-seperation-variables.html","timestamp":"2014-04-16T19:41:58Z","content_type":null,"content_length":"38724","record_id":"<urn:uuid:0806adf6-275c-495e-99e1-faa1c2ed44fd>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00083-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Transformer Installation Rules
Some of the particularly important transformer installation rules are listed below.
1. One or more transformers may be hung on a single pole if the total weight does not exceed the safe strength of the pole or of the crossarms and bolts supporting them.
2. When more than one transformer is installed on crossarms, the weight should be distributed equally on the two sides of the pole.
3. Single-phase distribution transformers of 100 kVA or smaller are usually placed ABOVE the secondary mains if conditions permit. Those larger than 100 kVA are usually platform or pad mounted.
4. Lightning arresters and fused cutouts have to be installed on the primary side of all distribution transformers except the self-protected type.
5. Ground wires are required to be covered with plastic or wood molding to a point 8 feet above the base of the pole.
A guy is a brace or cable that is anchored in some fashion to the ground and secured to a point on the pole at the other end. Correctly selected and installed, the guy will protect the pole line from
damage caused by the strain of the line conductors and pole-mounted equipment. The guy will also minimize the damage to the pole line caused by severe weather.
Basic guying information, such as types, locations, and anchors, is covered in Construction Electrician training manuals of lower rates. In this section, we'll be concerned with calculating the "line
conductor load" for various line angles and dead ends, the effects that the lead-to-height ratios have on guy tensioning, and methods used to select the size and type of guy wire and anchors
The first step in determining the guy type and tension requirement is to determine the line conductor tension. Table 5-5 lists the most
Table 5-5.―Breaking Strength of Line Conductors
common size of line conductors (hard-drawn copper) that you will encounter in the field. To determine the conductor tension under maximum loading conditions, take 50 percent or one-half of the
breaking strength of the conductor. This allows for the safety factor of two required by the National Electric Safety Code.
Line tension = 50 percent of breaking strength
For No. 6 copper = 50 percent of 1,280 = 640 lb
Next, we must determine the angle of change in the line. Any change in the direction of the line increases the line conductor tension and, left uncorrected, tends to pull the pole out of alignment.
Table 5-6 lists the most common line angles in degrees and the constant by which the line tension must be multiplied to determine the side pull.
Example: For No. 6 copper conductor, for a 30-degree angle,
640 ラ 0.517 = 330 lb
The total side pull can now be determined by multiplying the side pull of one conductor by the number of conductors.
Example: On the basis of four conductors, the total side pull is as follows:
330 ラ 4 = 1,320 lb
The next step required to determine the correct guy tension is to find the multiplying factor for the lead-to-height ratio. The lead-to-height ratio is the relationship of the lead (L) (distance
between the base of the pole and the anchor rod) to the height (H) of the guy attachment on the pole, as shown in figure 5-16. This ratio will vary because the terrain of obstructions will restrict
the location of the anchor. A guy ratio of 1 to 1 is preferred. Shortening L increases the tension in the guy, causing increased stresses on the pole especially at dead ends and acute angles.
Using our previous example of four No. 6 AWG copper conductors at a 30-degree angle, let's determine the total guy tension using a 1-to-1 ratio, assuming that H = 30 feet and L = 30 feet (refer to
table 5-7). Locate the height of the guy attachment (30 ft) in the left-hand column. Move
Table 5-6.―Angle Constant Based on Line Angle
Figure 5-16.―Methods of measuring height and lead dimensions.
Table 5-7.―Height and Distance Ratio Multiplier
to the right until you reach the column under 30, the number of feet the anchor is away from the pole. The figure shown (1.41) is the guy ratio multiplier. Now let's compute the guy tensioning value.
Example: Total side pull ラ guy ratio multiplier
1,320 ラ 1.41 = 1,861 lb
The guy wire and anchor for this example must be rated to hold at least 1,861 foot-pounds of load.
Guy wire comes in various sizes and grades from 1/4 to 1/2 inch. Table 5-8 lists the grades and sizes in the left-hand column with the breaking and allowable tension strengths in the right columns.
To determine the correct grade and size of guy wire, first multiply the calculated guy tension by the safety factor of 2. Continuing with our example, solve for maximum breaking strength.
We now know the guy wire must have a breaking strength of at least 3,722 pounds. Referring again to table 5-8, locate the breaking strength column; then move down this column until a value that is at
least 3,722 pounds is found. Our example requires a breaking strength of 3,722 pounds. Based on this value, a 3/8-inch common grade would be sufficient.
The final step needed to ensure a safe and adequate guy is the selection of a guy anchor of sufficient holding power. The holding power of an anchor depends upon the area of the anchor plate, the
depth setting, and the type of soil. The greater each of these is, the greater the volume of earth required to hold it in place. Table 5-9 lists the most commonly used manufactured anchors. To use
this chart, determine the type of soil and
Table 5-8.―Guy Wire Breaking Strength
total guy tensioning amount. Move down the correct holding strength column until a value of at least the required amount is found. In the example just given, the guy tension allowed for is 3,722
pounds. Using table 5-9, we see that either an 8-inch expansion or screw would provide adequate holding power. The type selected would be based on material available, cost, and ease of installation.
By following this five-step process closely, you can quickly determine the correct guy requirements for any situation.
|
{"url":"http://www.tpub.com/celec/73.htm","timestamp":"2014-04-19T09:50:28Z","content_type":null,"content_length":"21024","record_id":"<urn:uuid:8db88f73-05bc-454a-ade4-a3cc08ae3b39>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00233-ip-10-147-4-33.ec2.internal.warc.gz"}
|
: N Queens
This 8 Queens visualization shows an implementation of an iterative repair algorithm, a greedy algorithm and a simulated annealing algorithm.
In the N Queens puzzle you have an N by N chess board. On it you must place N queens so that no queen is threatening any other queen. Queens threaten any queen in the same row, column or diagonal.
There are many possible aglorithms for finding a valid placement of the N queens. The ones implemented here are:
• Random Permutations: The queens are initially placed in random rows and columns such that each queen is in a different row and a different column. During each turn, two queens are selected at
random and their columns are swapped. Eventually this algorithm will find a valid placement, though it can be very slow.
• Iterative Repair: The queens are initially placed in random rows and columns such that each queen is in a different row and a different column. During each turn the next queen that is being
threatened by any other queen is moved within its row to the location in which there are the least threatening queens. If there are several such spots, one is chosen at random.
• Simulated Annealing: The queens are initially placed in random rows and columns such that each queen is in a different row and a different column. During each turn, an attacked queen is chosen
and a random column is picked for that queen. If moving the queen to the new column will reduce the number of attacked queens on the board, the move is taken. Otherwise, the move is taken only
with a certain probability, which decreases over time. Hence early on the algorithm will tend to take moves even if they don't improve the situation. Later on, the algorithm will only make moves
which improve the situation on the board. The temperature function used is: T(n) = 100 / n where n is the round number. The probability function used to decide if a move with a negative impact on
the score is allowed is: P(dE)=exp(dE/T) where dE is the difference in "energy" (the difference in the number of attacked queens) and T is the temperature for the round.
• Greedy: The queens are initially placed in random rows and columns such that each queen is in a different row and a different column. During each turn, A move is chosen that reduces the number of
attacked queens on the board by the largest amount. If there are several such moves, one is chosen at random, preferring moves that don't involve the same queen that was moved in the previous
|
{"url":"http://yuval.bar-or.org/index.php?item=9","timestamp":"2014-04-17T06:55:00Z","content_type":null,"content_length":"16496","record_id":"<urn:uuid:80b7bb30-03d8-4343-9119-ddbb954eaff8>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00228-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Matches for:
IRMA Lectures in Mathematics and Theoretical Physics
2010; 964 pp; hardcover
Volume: 16
ISBN-10: 3-03719-079-5
ISBN-13: 978-3-03719-079-1
List Price: US$138
Member Price: US$110.40
Order Code: EMSILMTP/16
The purpose of this handbook is to give an overview of some recent developments in differential geometry related to supersymmetric field theories. The main themes covered are:
• Special geometry and supersymmetry
• Generalized geometry
• Geometries with torsion
• Para-geometries
• Holonomy theory
• Symmetric spaces and spaces of constant curvature
• Conformal geometry
• Wave equations on Lorentzian manifolds
• D-branes and K-theory
The intended audience consists of advanced students and researchers working in differential geometry, string theory, and related areas. The emphasis is on geometrical structures occurring on target
spaces of supersymmetric field theories. Some of these structures can be fully described in the classical framework of pseudo-Riemannian geometry. Others lead to new concepts relating various fields
of research, such as special Kähler geometry or generalized geometry.
A publication of the European Mathematical Society. Distributed within the Americas by the American Mathematical Society.
Graduate students and research mathematicians interested in differential geometry, string theory, and related areas.
Part A. Special geometry and supersymmetry
• M. Roček, C. Vafa, and S. Vandoren -- Quaternion-Kähler spaces, hyper-Kähler cones, and the c-map geometry
• G. Weingart -- Differential forms on quaternionic Kähler manifolds
• C. P. Boyer and K. Galicki -- Sasakian geometry, holonomy, and supersymmetry
• M. A. Lledó, O. Maciá, A. Van Proeyen, and V. S. Varadarajan -- Special geometry for arbitrary signatures
• T. Mohaupt -- Special geometry, black holes and Euclidean supersymmetry
Part B. Generalized geometry
• N. Hitchin -- Generalized geometry--an introduction
• A. Kotov and T. Strobl -- Generalizing geometry--algebroids and sigma models
• U. Lindström, M. Roček, R. von Unge, and M. Zabzine -- A potential for generalized Kähler geometry
Part C. Geometries with torsion
• I. Agricola -- Non-integrable geometries, torsion, and holonomy
• P.-A. Nagy -- Totally skew-symmetric torsion and nearly-Kähler geometry
• J.-B. Butruille -- Homogeneous nearly Kähler manifolds
• L. Schäfer and F. Schulte-Hengesbach -- Nearly pseudo-Kähler and nearly para-Kähler six-manifolds
• A. Swann -- Quaternionic geometries from superconformal symmetry
Part D. Para-geometries
• S. Ivanov, I. Minchev, and S. Zamkovoy -- Twistor and reflector spaces of almost para-quaternionic manifolds
• M. Krahe -- Para-pluriharmonic maps and twistor spaces
• D. V. Alekseevsky, C. Medori, and A. Tomassini -- Maximally homogeneous para-CR manifolds of semisimple type
Part E. Holonomy theory
• A. Galaev and T. Leistner -- Recent developments in pseudo-Riemannian holonomy theory
• A. J. Di Scala, T. Leistner, and T. Neukirchner -- Geometric applications of irreducible representations of Lie groups
• K. Waldorf -- Surface holonomy
Part F. Symmetric spaces and spaces of constant curvature theory
• I. Kath -- Classification results for pseudo-Riemannian symmetric spaces
• D. V. Alekseevsky -- Pseudo-Kähler and para-Kähler symmetric spaces
• O. Baues -- Prehomogeneous affine representations and flat pseudo-Riemannian manifolds
Part G. Conformal geometry
• H. Baum -- The conformal analog of Calabi-Yau manifolds
• Y. Kamishima -- Nondegenerate conformal structures, CR structures and quaternionic CR structures on manifolds
Part H. Other topics of recent interest
• C. Bär -- Linear wave equations on Lorentzian manifolds
• D. S. Freed -- Survey of D-branes and K-theory
• List of contributors
• Index
|
{"url":"http://www.ams.org/bookstore?fn=50&arg1=salegeometry&ikey=EMSILMTP-16","timestamp":"2014-04-19T01:55:55Z","content_type":null,"content_length":"17510","record_id":"<urn:uuid:9a761707-caa7-410a-80c3-c16c8e0b2f3d>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00442-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
How many molecules of NH3 are produced when 3.00 moles of H2 react completely in the following equation? N2 + 3H2 → 2NH3
• one year ago
• one year ago
Best Response
You've already chosen the best response.
give that you have xss N2? set up a ratio using the ratio of moles in the balanced reaction (coefficients) so for every 3 moles of H2 you make 2 mols of NH3 y mol/3 H2 = xmol/2 NH3 you know you
have 3 moles of H2 3 mol/3 = x mol/ 2 x mol = ((3)(2))/3
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/507f8c8ae4b0b8b0cacd57dc","timestamp":"2014-04-20T10:52:24Z","content_type":null,"content_length":"27827","record_id":"<urn:uuid:2fc0a2d3-564d-4e8c-83c1-b96444f538f6>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00426-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Annualized standard deviation ...yes!
Okay, so the decision has been made: effective January 2011, GIPS compliant firms must report a 36-month annualized standard deviation, on an annual basis (that is, for all years starting with 2011).
Further clarity is in order.
First, is standard deviation risk? There is hesitation to call it that, because a lot of folks don't consider it risk. But if it's not risk, why show it? Granted, not everyone thinks of volatility as
being a risk measure, but most firms report that they use standard deviation as a risk measure. If volatility isn't risk, then is volatility such a valuable measure that we need to see it reported?
I think it's a mistake NOT to call standard deviation risk: the fact that not everyone agrees shouldn't be a reason not to. There is disagreement about much of the standards, but that doesn't stop
these items from being included. It's even more confusing not to call standard deviation risk. Is someone going to be offended if we call it "risk"? I think not.
Is the Sharpe ratio a risk measure? Technically it's a risk-adjusted return. And, what risk measure is used to adjust the return? Yes, you're right: standard deviation. But if standard deviation
isn't risk, then I guess the Sharpe ratio can't be a risk-adjusted measure. Who's going to tell Bill?
Okay, and so HOW do we calculate standard deviation? First, use 36 months ... not days, not quarters, not years: months! You will also be required to include the annualized return for each 36 month
period. What if you don't have 36 months' of composite returns? Then don't show this until you do (well, actually, you arguably
show a standard deviation for the period you have, but you're not required to until you reach 36 months).
Do we divide by "n" or "n-1" (where "n" is the number of months (i.e., 36))? No decision has been made yet, though it appears from comments at this week's conference that "n" might win out. We use
"n" for the population and "n-1" for a sample; some might argue that it would be wrong to use "n," while others would argue that it's wrong to uses "n-1." This is debatable and controversial, no
doubt. And, no doubt more details will follow.
7 comments:
1. The Journal of Performance Measurement 50th issue also mentioned about Treynor Ratio (don't subscript but got a free copy from my friend). This could also be consider a risk adjusted return
equation. Are we going to assuem BETA as a form of risk as well?
If we start saying or writing SIGMA as a form or risk, I have a feeling someday a sophisticated investor (not financial savvy but just a lot of money) may bring a legal case and try to use SIGMA
as a way to prove negligent caused by the defendant.
Have we learned our mistake from the fall of Lehman hedge funds?
2. Stephen Campisi, CFASeptember 26, 2009 at 7:57 AM
There is a larger ethical issue here: according to the CFA Institute's own ethics guidance, one must present BOTH the return and the risk of an investment. One cannot present return without its
accompanying risk. To do so would violate an underlying principle of "informed consent" which requires fair representation and full disclosure of all material facts. Fair representation and full
disclosure are also the basis for GIPS.
So why does it take a committee years to catch up to the ethical guidance that has been in place for decades? And why so much debate simply to recommend showing the most basic risk statistic
which provides a reasonable measure of the potential uncertainty around an investment's return? The performance industry might take this opportunity to examine why it is so prone to debate the
nuances of various risk measures while it delays presenting any information whatsoever about the risk of investments. "Half a loaf is better than none" and a reasonable risk measure that enjoys
almost universal acceptance (although admittedly not the complete or perfect measure) is surely better than leaving investors completely in the dark about risk.
We've been "fiddling while Rome burns" and "rearranging the deck chairs on the Titanic" in this ongoing debate about including a risk measure in a GIPS presentation of performance. This decision
to include standard deviation as a required risk measure is LONG overdue. Managers can and should present additional information about risk that is material and representative of their investment
product. THAT is the proper forum for all this debate. Frankly, it was wrong to delay the presentation of risk for so many years, and it was something of a waste of committee time to wrangle over
such a simple and obvious idea. Thankfully, we now have a reasonable baseline risk measure to accompany returns, and investors have some means to compare managers on a more equal basis.
Not perfect? Perhaps. Better than before. Certainly!
3. Simon Blakeney, ASIPSeptember 28, 2009 at 3:04 AM
I agree with Steve - basically GIPS are a minimum standard, not a maximum. If there are better measures for the product, then include them as well, although atleast ensuring everyone has the SD
brings some form of risk measure into the debate rather than potentially not having anything.
As for "n" or "n-1", although n gives the lower answer - it is also the "right" calculation of normal volatility (notwithstanding if there is any such thing as a "correct" risk measure).
The population stat is used when you use all the datapoints of a period, rather than randomly sample some of them and use the sample as a representation of the whole (e.g. you might sample a
group of people and use that information as representation of a larger group).
There are 2 points here:
1) taking the last 36 months is not a "random" sample of history - as we are not pretending the last 36 months is a representation of a longer period.
2) the risk you are stating is described as the risk over the last 36 months, so the information IS the population (if you were to randomly take 24 datapoints out of the 36, then use "n-1" as an
estimation of the 36 month vol, then that would be fine - but we don't).
I don't see too much controversy here, but sure we will hear more ....
4. I agree we should strive to present risk and risk adjusted returns in every presentation. I don't agree we should label them as risk. Instead, these information should be presented under a
category call "Uncertainly" (I know reader will disagree and the meaning of risk encompass uncertainly).
If we speak to some mom or pop store down the street or maybe Joe The Plumber, what do you think the meaning of risk mean to them? After we trained or educated these potential investors, should
we allow them to participate in the market even if these people truly do not understand the meaning of "Risk" in the finance world?
Since most risk numbers are calculated using return series and not holding base, are we not repeating what Lehman hedge fund did? Knowing this could potentially cause misunderstanding to
investors, are we truly being fair in our opinion and properly disclosing all the facts/limitations?
GIPS set a minimum standard but there are still investors with limited GIPS understanding. Some US investors favorite questions have been: is my portfolio following GIPS, or has my portfolio been
reconciled in accordance to GIPS (rather than GAAP).
5. First a question for Simon Blakeney.... if I were to use n-1 rather than n, wouldn't it make the risk measure (slightly) forward looking, in which case it would be a benefit? Splitting hairs I
know but I'm still curious what you think.
Second I am curious what other firms do to calculate Sharpe ratio as the curriculum from the CIPM program I went through last year states a method different from the one my firm uses. For a 3
year period we would use annualized geometric returns in the numerator and the "standard" standard deviation in the denominator, by "standard" I mean it measures deviations around the arithmetic
mean. However the Feibel text from the CIPM curriculum uses annualized arithmetic mean returns in the numerator in order to match the "standard" standard deviation. A footnote states that if
geometric returns are used in the numerator, the denominator should be "the standard deviation of the natural log of the single period growth rates".
I hope that was clear.... In practice what do others do?
6. Simon Blakeney, ASIPOctober 2, 2009 at 4:11 AM
On the n v n-1, certainly using a smaller number increases the SD, but do not think increasing a number "makes it forward looking" and it is potentially a way that we use statistical terminology
to build inappropriate "confidence" to clients rather than concentrating on actually what the information means (and its shortfalls).
From a technical perspective, my understanding of a sample is that it is a random representation of what you are trying to measure, and I do not think the last 36 (say) consecutive months is
particularly random. As it is not, then lets represent it as it is and then the user can decide if it is a good predictor for the future etc (rather than think that you have already made some
Similarly, I do not think that increasing a number by about 1.5% of its size (for a 36m period) makes it any "more predictive" bearing in mind the normal assumptions it is based upon etc and
would rather concentrate on its historical context and looking at investment-horizon relevance for the end investor.
I know it is minor, but do think that we should differentiate facts and estimates if at all possible and clearly define them as such. In the case of historical performance and volatility we have
fact, so lets report them like that - rather than making "minor" adjustments that do not add any value and potentially cloud the important information to the end investor.
I hope that makes sense ....
7. Some calculates Sharpe Ratio without the risk free return. Certainly, this equation is inappropriate for comparsion purpose and shouldn't be call Sharpe Ratio. If you ask the right question(s),
such as: show me the historical Sharpe Ratio, you might be able to see some of the portfolio traits / Characteristics. A better question might be, just show me the monthly returns and I'll
calculate the Sharpe Ratio myself.
|
{"url":"http://investmentperformanceguy.blogspot.com/2009/09/annualized-standard-deviation-yes.html","timestamp":"2014-04-21T12:47:53Z","content_type":null,"content_length":"128196","record_id":"<urn:uuid:23482499-8bc5-48a0-b00b-9212d8f41fa0>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00207-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Optics for Passive Radiative Cooling
Optics for Passive Radiative Cooling
Aubrey Jaffer
Radiative cooling is the process by which a body loses heat by radiation. The spectrum and intensity of this thermal radiation, called blackbody radiation, depend on the temperature of the body
emitting it.
The Second Law of thermodynamics dictates that no net radiative energy transfers between two objects at the same temperature.
Thus the blackbody radiative flux through an aperture can be no greater than that produced by a blackbody at the same temperature having that aperture as its surface.
Therefore, blackbody radiation cannot be concentrated by any arrangement of mirrors. The rate of cooling is limited by the smallest aperture through which the blackbody radiation all flows.
But this does not mean that optics are useless. The blackbody radiation is proportional to the radiating surface area integrated with the angle over which it radiates. While mirrors cannot increase
the effective radiating area, they can channel the radiation emitted over a 180° cone (hemisphere) into a narrower cone.
This turns out to be more than just interesting; it makes radiative cooling practical in locations where the view to the horizon is obstructed.
Passive optical systems are reciprocal; if light can travel from point-A to point-B, then light can travel from point-B to point-A. For radiative cooling applications this means it is desirable for
the blackbody radiator to see only those regions of sky far above the horizon. Atmosphere near the horizon is warmer than atmosphere near the zenith. If in view of the radiator, warm atmosphere will
transfer heat into the radiator, opposing the desired cooling.
Thermal-infrared optical systems are further constrained in that the only way to obstruct the view between points A and B is by reflection or refraction. A material which absorbs thermal-infrared
radiation also emits it.
What then, is the optimal aperture for radiative cooling? Through analysis of the properties of the atmosphere we can determine a threshold angle t (from the horizontal), such that rays entering from
angles above t should be directed to the radiating element; while rays entering at angles below t should be reflected away.
Consider the parabolic section on the right side of the diagram. It directs all rays from angle t onto the focus point at the origin. As the angle of incidence increases, so does the angle of
reflection, illuminating the radiator at the base (and some of the rays will strike the radiator directly). As the angle of incidence decreases, so does the angle of reflection, directing the rays to
the parabola on the other side, which directs them up upward.
The top of each parabolic section is perpendicular to the base (on the x axis). The u and v axes are the coordinate system of the right-side parabola. This reflector configuration is a compound
parabolic concentrator[48] (CPC). The diagram is linked to a PDF of these diagrams with t ranging from 10° to 50°. An animation of these diagrams is also available.
The diameter at the top of the CPC is b/cos(t), where b is the diameter of the base. If this configuration is optimal (vis-a-vis the Second Law), then the flux through the (larger) top aperture for
angles greater than t should equal the hemispheric flux from the base. Lets examine the case of a surface of rotation around the midpoint of the base of diameter b=1. From an inclination θ, the
aperture flux around the circle is 2π·sin(θ)/cos(t)·dθ. Integrating from θ=t to θ=π/2 yields a total aperture flux of 2π, independent of t. t=0 is the hemispheric case; so the assertion is verified.
It is remarkable that the Second Law of Thermodynamics places hard limits on geometrical optics. This equivalence above was easily found; no attention was paid to the nonuniform distribution of rays
striking the target at any particular angle of incidence.
Ray-tracing a surface of revolution of this system shows the abrupt transition between the radiator being hidden and being exposed. This image is viewing the aperture from 28° above the horizon from
a distance 200 times the diameter of the aperture (as through a telephoto lens). The dark bands at the left and right extremes of the interior are artifacts of the POV-Ray simulation.
This image is viewing the aperture from 30° above the horizon, which is the threshold angle. The aperture rests on a red and white checkerboard, whose distorted reflection is visible in the
This image is linked to a PDF of images with the viewing angle ranging from 0° to 90°. An animation of these images is also available.
The top of the aperture being larger than the radiating base means that the directivity comes at the price of area. The aperture efficiency of this CPC is cos^2(t), which for t=30° is 75%.
Solar Irradiance
The TMY2 (Typical Meteorological Year) format splits solar irradiance into direct and indirect components, the assumption being that indirect irradiance is uniform over the hemisphere. By the
previous calculation, the indirect solar flux will be equivalent to the indirect solar flux impinging on a totally exposed level disk having the same area as the base of the CPC.
Direct solar irradiance from below the threshold angle will not impinge on the base of the CPC. But for solar irradiance from above the threshold angle, all that enters the upper aperture will reach
the base. That is A·sin(θ)/cos^2(t), which for overhead sun is greater than that which would fall on the base without the aperture.
Stackable Apertures
While optically optimal, cylindrical apertures are not best shape for all applications of radiative-cooling apertures.
This image is viewing the aperture from 30° above the horizon, which is the threshold angle.
This image is linked to a PDF of images with the viewing angle ranging from 0° to 90°. An animation of these images is also available.
Looking from a ray parallel to the sides, the transition zone is 4° (same as for the round aperture). Looking along the diagonal, the transition zone is only 2° (better than the round aperture).
So the square aperture has nearly optimal angular selectivity. The worst case (shown) transition zone is 6° wide, only 2° wider than for circular apertures.
An array of square apertures could be assembled as a lattice of (notched) extrusions.
It turns out that square CPC arrays have found application in linear fluorescent light fixtures. The fluorescent tube and the upper reflector feed the narrow end of the CPCs. The limited cone of the
wide end of the CPCs directs light downward and not toward the walls, raising lighting efficiency and reducing glare. The specifications for the aluminum coated plastic are for 95% reflectivity, more
than the 90% reflectivity I simulate here. The technology used to produce these CPCs looks like it would be suitable for radiative cooling applications. Examples of lamps with these CPC louvers are
Lightolier's Vision Smart 2' x 2' VP 4'' Parabolic Louver Recessed Air Handling (pictured here) and Williams Fluorescent Lighting's MP3 2'x2' multi-purpose parabolic troffer.
To facilitate the drainage of rainwater, most roofs are not level, but sloped. If the slope of the roof is less than the threshold angle of a CPC, then it can be used in that roof, although its
cooling efficiency will be reduced because its cone of radiation will not be centered on the zenith.
For some sites, inclining the apertures could be beneficial. An aperture inclined toward the east would, in addition to cooling radiatively, collect and channel direct sunlight downward during the
This schematic is for a square CPC. As before, the indirect solar flux impinging on the base of the CPC will be equivalent to the indirect solar flux impinging on a totally exposed level disk having
the same area as the base of the CPC.
The incidence of direct solar flux is more complicated. As before, no flux will impinge on the base while the solar inclination θ is less than t. With the sun at zenith, the base receives all the
flux received through the top aperture, A/cos^2(t).
With the sun on the left side, no flux will impinge on the base while θ is less than φ=atan(m·cos(t)/b), where m is the height of the mirror extension. Note that φ may be greater or less than t; in
the diagram they are equal. The mirror casts a shadow on the plane of the upper aperture which is m/tan(θ) wide. When b/cos(t)>m/tan(θ), a strip of the aperture b/cos(t)-m/tan(θ) wide is illuminated.
With the sun on the right side, a strip of the aperture b/cos(t)-m/tan(θ) wide is illuminated twice. This is the same amount as was shaded with the sun at elevation θ on the left side.
terrace rows of square CPCs to match the slope of any roof. Here is a design for a 3 by 5 terraced array of CPCs. This view is 10 times closer than the previous views; so it has noticeable parallax;
the top row shows all sky while the lowest row shows mostly radiator.
The vertical mirror works to avoid obstacles even when it isn't extending with the parabolic section. Obstacles like chimneys can be shielded from radiating their heat into the CPC by interposing
vertical aluminum flashing between the obstacles and the CPC.
Although not necessary, a reflective back panel has also been added at the top so that the (top) side edges of the assembly don't have a level miter at the highest cell.
An excess of cooling is never a problem in low altitude tropical locations. But subtropical regions can experience uncomfortably cold temperatures. For instance, Miami, FL experiences 50 hours of
temperatures below 10°C per year. Because cold air holds less moisture than warm air, these cold temperatures are also the times when radiative cooling is at its maximum.
For use in subtropical and temperate regions, there needs to be a way to temporarily disable roof panel radiation. Hinged reflective flaps along each row of cells can accomplish this. The top two
images show a roof panel with flaps horizontal (covering the CPCs), pivoted on the black axles protruding from the right side.
The bottom two images show a roof panel in the vertical (open) position. As was established earlier, vertical reflectors do not interfere with radiative cooling from the compound parabolic
concentrators. The images on the right side paint the flaps with a checkerboard pattern in order to help distinguish the flaps from reflections.
emissivity; so when the roof panel is closed, it will not radiate much heat. To reduce convective heat losses, the assembly should be a sealed air space. So far the prototypes have had their
checkerboard radiative targets in strips in contact with the lower CPC apertures. But they can be combined into a single continuous sheet along the bottom of the assembly.
A (inclined) flat-bottom, terraced roof panel without flaps is shown to the right. One with open flaps is shown below.
The square CPC is a nearly optimal device for channeling thermal radiation to a cone around the zenith. It can be arrayed and terraced for placement in any roof pitch. Flaps can be deployed with
these units which disable both cooling and solar heat gain, and offer protection from violent weather. With automated flap actuators, these units may find use in climates where nighttime cooling is
sufficient. For applications needing daytime cooling, the combination of these units with a solar screen or a cold mirror looks promising.
Copyright © 2009, 2010 Aubrey Jaffer
I am a guest and not a member of the MIT Computer Science and Artificial Intelligence Laboratory. My actions and comments do not reflect in any way on MIT.
Radiative Cooling in Hot Humid Climates
agj @ alum.mit.edu Go Figure!
|
{"url":"http://people.csail.mit.edu/jaffer/cool/Aperture/","timestamp":"2014-04-21T14:42:41Z","content_type":null,"content_length":"20931","record_id":"<urn:uuid:348ec2f4-b404-4296-bece-321ebddb96cc>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00469-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Trust Region Methods
A trust region method has a region around the current search point, where the quadratic model
for local minimization is "trusted" to be correct and steps are chosen to stay within this region. The size of the region is modified during the search, based on how well the model agrees with actual
function evaluations.
Very typically, the trust region is taken to be an ellipse such that . is a diagonal scaling (often taken from the diagonal of the approximate Hessian) and is the trust region radius, which is
updated at each step.
When the step based on the quadratic model alone lies within the trust region, then, assuming the function value gets smaller, that step will be chosen. Thus, just as with line search methods, the
step control does not interfere with the convergence of the algorithm near to a minimum where the quadratic model is good. When the step based on the quadratic model lies outside the trust region, a
step just up to the boundary of the trust region is chosen, such that the step is an approximate minimizer of the quadratic model on the boundary of the trust region.
Once a step is chosen, the function is evaluated at the new point, and the actual function value is checked against the value predicted by the quadratic model. What is actually computed is the ratio
of actual to predicted reduction.
If is close to 1, then the quadratic model is quite a good predictor and the region can be increased in size. On the other hand, if is too small, the region is decreased in size. When is below a
threshold, , the step is rejected and recomputed. You can control this threshold with the method option "AcceptableStepRatio"->. Typically the value of is quite small to avoid rejecting steps that
would be progress toward a minimum. However, if obtaining the quadratic model at a point is quite expensive (e.g. evaluating the Hessian takes a relatively long time), a larger value of will reduce
the number of Hessian evaluations, but it may increase the number of function evaluations.
To start the trust region algorithm, an initial radius needs to be determined. By default Mathematica uses the size of the step based on the model (1) restricted by a fairly loose relative step size
limit. However, in some cases, this may take you out of the region you are primarily interested in, so you can specify a starting radius using the option . The option contains Scaled in its name
because the trust region radius works through the diagonal scaling , so this is not an absolute step size.
This shows the steps and evaluations taken during a search for a local minimum of a function similar to Rosenbrock's function, using Newton's method with trust region step control.
The plot looks quite bad because the search has extended over such a large region that the fine structure of the function cannot really be seen on that scale.
This shows the steps and evaluations for the same function, but with a restricted initial trust region radius . Here the search stays much closer to the initial condition and follows the narrow
It is also possible to set an overall maximum bound for the trust region radius by using the option so that for any step, .
Trust region methods can also have difficulties with functions which are not smooth due to problems with numerical roundoff in the function computation. When the function is not sufficiently smooth,
the radius of the trust region will keep getting reduced. Eventually, it will get to the point at which it is effectively zero.
This finds a local minimum for the function using the default method. The default method in this case is the (trust region) Levenberg-Marquardt method, since the function is a sum of squares.
The message means that the size of the trust region has become effectively zero relative to the size of the search point, so steps taken would have negligible effect. Note: On some platforms, due to
subtle differences in machine arithmetic, the message may not show up. This is because the reasons leading to the message have to do with numerical uncertainty, which can vary between different
The plot along one direction makes it fairly clear why no more improvement is possible. Part of the reason the Levenberg-Marquardt method gets into trouble in this situation is that convergence is
relatively slow because the residual is nonzero at the minimum. With Newton's method, the convergence is faster, and the full quadratic model allows for a better estimate of step size, so that
FindMinimum can have more confidence that the default tolerances have been satisfied.
The following table summarizes the options for controlling trust region step control.
option name default value
"AcceptableStepRatio" 1/10000 the threshold , such that when the actual to prediction reduction , the search is moved to the computed step
"MaxScaledStepSize" ∞ the value , such that the trust region size for all steps
"StartingScaledStepSize" Automatic the initial trust region size
|
{"url":"http://reference.wolfram.com/mathematica/tutorial/UnconstrainedOptimizationTrustRegionMethods.html","timestamp":"2014-04-17T01:12:01Z","content_type":null,"content_length":"42097","record_id":"<urn:uuid:5e7b4427-bba3-4264-a14e-e11a8a41a59b>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00411-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Help
June 8th 2010, 03:22 PM #1
May 2010
Maxima and Minima
Consider the functionn, find the critical points of f(x) and determine the local maxima and minima. (give both x and y coordinates) on the given interval. part (b) Find the absolute maxima and
minima of f(x)(give both x and y coordinates) on the given interval.
f(x)=x^3-6x^2+9x+2 on the interval [-1,4]
For this problem, I took the derivative of f(x).
Then i Made a table and plugged in all the numbers within -1-4. Then i'm stuck.
How to I find the 0's? f'(0)?
What happens at critical points? At a critical point, $f'(x) = 0$. So to find them, we find $f'(x)$ (which you have already done), set it equal to zero, and then solve for $x$. The zeros of $f'
(x)$ are the roots of $f'(x)$, i.e. when $3x^2-12x+9= 0$. Can you find the two roots of the quadratic equation?
June 8th 2010, 03:50 PM #2
Super Member
Mar 2010
June 8th 2010, 03:54 PM #3
May 2010
June 8th 2010, 03:59 PM #4
Super Member
Mar 2010
|
{"url":"http://mathhelpforum.com/calculus/148302-maxima-minima.html","timestamp":"2014-04-16T14:02:18Z","content_type":null,"content_length":"38220","record_id":"<urn:uuid:c850eae1-8ca6-45ca-8da2-22b81d4e4730>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00495-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Elementary Algebra for College Students
Why Rent from Knetbooks?
Because Knetbooks knows college students. Our rental program is designed to save you time and money. Whether you need a textbook for a semester, quarter or even a summer session, we have an option
for you. Simply select a rental period, enter your information and your book will be on its way!
Top 5 reasons to order all your textbooks from Knetbooks:
• We have the lowest prices on thousands of popular textbooks
• Free shipping both ways on ALL orders
• Most orders ship within 48 hours
• Need your book longer than expected? Extending your rental is simple
• Our customer support team is always here to help
|
{"url":"http://www.knetbooks.com/elementary-algebra-college-students-8th/bk/9780131994577","timestamp":"2014-04-19T09:27:45Z","content_type":null,"content_length":"31686","record_id":"<urn:uuid:f72e296f-309a-43f2-8bca-39f8be3df3d5>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00572-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Complex Numbers: Subtraction, Division
Date: 08/12/2001 at 23:04:42
From: April
Subject: Subtraction and division for imaginary numbers
I know that when adding imaginary numbers the formula is
(a+bi)+(c+di) = (a+c)+(b+d)i
but for subtraction, is the formula (a+c)-(b+d)i ?
And how do you divide imaginary numbers - like for instance:
Date: 08/12/2001 at 23:39:04
From: Doctor Peterson
Subject: Re: Subtraction and division for imaginary numbers
Hi, April.
The formula for addition works because complex numbers follow the
associative and commutative properties just as real numbers do. We can
do the same for subtraction, and get
(a+bi) - (c+di) = (a+bi) + -(c+di)
= (a+bi) + (-c + -di)
= a + bi + -c + -di
= a + -c + bi + -di
= (a-c) + (b-d)i
That is, you subtract the real parts and the imaginary parts.
For division, we generally think of it as simplifying a fraction, much
like the way you rationalize the denominator of a fraction. In this
case, you can multiply numerator and denominator by the complex
conjugate of the denominator. So in general, we get
a+bi (a+bi)(c-di) (ac+bd) + (bc-ad)i
---- = ------------ = ------------------
c+di (c+di)(c-di) c^2 + d^2
Since the denominator is now a real number, you can just divide the
real and imaginary parts of the numerator by it to get the answer.
- Doctor Peterson, The Math Forum
|
{"url":"http://mathforum.org/library/drmath/view/53880.html","timestamp":"2014-04-18T05:41:05Z","content_type":null,"content_length":"6338","record_id":"<urn:uuid:bf66f1bb-0dec-4206-9dd0-631c9ad1958e>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00045-ip-10-147-4-33.ec2.internal.warc.gz"}
|