content stringlengths 86 994k | meta stringlengths 288 619 |
|---|---|
Did my textbook make a mistake, or am I just misunderstanding
General Question
Did my textbook make a mistake, or am I just misunderstanding this concept (linear algebra)?
This is not homework; I am studying for an exam.
My textbook has a practice problem: “determine if b is a linear combination of a1, a2, and a3.”
Please note that all of the following matrices are supposed to be vertical, with 1 column and 3 rows, but of course the limitations of Fluther are such that I can’t write them that way.
a1 = [1 -2 0], a2 = [0 1 2], a3 = [5 -6 8], b = [2 -1 6]
I wrote the augmented matrix with a1, a2, a3, b as the columns and put it in reduced row echelon form. Here is my result (this cannot be where my mistake is because I had my calculator do it for me):
It is my understanding that because a solution exists (in fact, many solutions because x3 is free), b is a linear combination of a1, a2, a3. But my textbooks disagrees! Am I thinking about this
wrong, or is my textbook wrong?
Observing members: 0 Composing members: 0
5 Answers
Response moderated (Unhelpful)
The book is wrong and it is easy to demonstrate it. Let the coefficients of a1, a2 and a3 be X1, X2 and X3. From the first equation we have x1 = 2 – 5*X3 and from the second equation we have X2 = 3 –
4X3. We can make X3 anything we want. Let X3 = 1. Then X1 = 3 and X2 = -1. You can see directly that b = 3*a1 -a2 + a3
Thank you, @LostInParadise, this is what I thought to be the case, but was not confident in my opinion over that of the textbook. Very much appreciated.
@LostInParadise is correct. Textbooks do have errors. Even if the textbook makes an error less than 1% of the time, chances are it contains more than 100 answers.
Response moderated (Spam)
Answer this question
This question is in the General Section. Responses must be helpful and on-topic. | {"url":"http://www.fluther.com/150112/did-my-textbook-make-a-mistake-or-am-i-just-misunderstanding/","timestamp":"2014-04-17T16:33:35Z","content_type":null,"content_length":"33540","record_id":"<urn:uuid:b0289e3f-9351-4085-b8e4-a7efba87c20e>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00437-ip-10-147-4-33.ec2.internal.warc.gz"} |
Minding the Brain
Has this ever happened to you?
I hate it when the labels on the x-axis overlap, but this can be hard to avoid. I can stretch the figure out, but then the data become farther apart and the space where I want to put the figure
(either in a talk or a paper) may not accommodate that. I've never liked turning the labels diagonally, so recently I've started using coord_flip() to switch the x- and y-axes: ggplot(chickwts, aes
(feed, weight)) + stat_summary(fun.data=mean_se, geom="pointrange") + coord_flip()
It took a little getting used to, but I think this works well. It's especially good for factor analyses (where you have many labeled items): library(psych) pc <- principal(Harman74.cor$cov, 4, rotate
="varimax") loadings <- as.data.frame(pc$loadings[, 1:ncol(pc$loadings)]) loadings$Test <- rownames(loadings) ggplot(loadings, aes(Test, RC1)) + geom_bar() + coord_flip() + theme_bw(base_size=10) It
also works well if you want to plot parameter estimates from a regression model (where the parameter names can get long): library(lme4) m <- lmer(weight ~ Time * Diet + (Time | Chick), data=
ChickWeight, REML=F) coefs <- as.data.frame(coef(summary(m))) colnames(coefs) <- c("Estimate", "SE", "tval") coefs$Label <- rownames(coefs)
ggplot(coefs, aes(Label, Estimate)) + geom_pointrange(aes(ymin = Estimate - SE, ymax = Estimate + SE)) + geom_hline(yintercept=0) + coord_flip() + theme_bw(base_size=10)
I don't usually like to use complex statistical methods, but every once in a while I encounter a method that is so useful that I can't avoid using it. Around the time I started doing eye-tracking
research (as a post-doc with Jim Magnuson), people were starting recognize the value of using longitudinal data analysis techniques to analyze fixation time course data. Jim was ahead of most in this
regard (Magnuson et al., 2007) and a special issue of the Journal of Memory and Language on data analysis methods gave as a great opportunity to describe how to apply "Growth Curve Analysis" (GCA) -
a type of multilevel regression - to fixation time course data (Mirman, Dixon, & Magnuson, 2008). Unbeknownst to us, Dale Barr was working on very similar methods, though for somewhat different
reasons, and our articles ended up neighbors in the special issue (Barr, 2008).
In the several years since those papers came out, it has become clear to me that other researchers would like to use GCA, but reading our paper and downloading our code examples was often not enough
for them to be able to apply GCA to their own data. There are excellent multilevel regression textbooks out there, but I think it is safe to say that it's a rare cognitive or behavioral scientist who
has the time and inclination to work through a 600-page advanced regression textbook. It seemed like a more practical guidebook to implementing GCA was needed, so I wrote one and it has just been
published by Chapman & Hall / CRC Press as part of their R Series.
My idea was to write a relatively easy-to-understand book that dealt with the practical issues of implementing GCA using R. I assumed basic knowledge of behavioral statistics (standard coursework in
graduate behavioral science programs) and minimal familiarity with R, but no expertise in computer programming or the specific R packages required for implementation (primarily lme4 and ggplot2). In
addition to the core issues of fitting growth curve models and interpreting the results, the book covers plotting time course data and model fits and analyzing individual differences. Example data
sets and solutions to the exercises in the book are available on my GCA website.
Obviously, the main point of this book is to help other cognitive and behavioral scientists to use GCA, but I hope it will also encourage them to make better graphs and to analyze individual
differences. I think individual differences are very important to cognitive science, but most statistical methods treat them as just noise, so maybe having better methods will lead to better science,
though this might be a subject for a different post. Comments and feedback about the book are, of course, most welcome.
How to get parameter-specific p-values is one of the most commonly asked questions about multilevel regression. The key issue is that the degrees of freedom are not trivial to compute for multilevel
regression. Various detailed discussions can be found on the R-wiki and R-help mailing list post by Doug Bates. I have experimented with three methods that I think are reasonable.
Since it is the season for graduate school recruitment interviews, I thought I would share some of my thoughts. This is also partly prompted by two recent articles in the journal Neuron. If you're
unfamiliar with it, Neuron is a very high-profile neuroscience journal, so the advice is aimed at graduate students in neuroscience, though I think the advice broadly applies to students in the
cognitive sciences (and perhaps other sciences as well). The first of these articles deals with what makes a good graduate mentor and how to pick a graduate advisor; the second article has some good
advice on how to be a good graduate advisee.
I broadly agree with the advice in those articles and here are a few things I would add:
As I mentioned in an earlier post, last June I had the great pleasure and honor of participating in a discussion meeting on Language in Developmental and Acquired Disorders hosted by the Royal
Society and organized by Dorothy Bishop, Kate Nation, and Karalyn Patterson. Among the many wonderful things about this meeting was that it brought together people who study similar kinds of language
deficit issues but in very different populations -- children with developmental language deficits such as dyslexia and older adults with acquired language deficits such as aphasia. Today, the special
issue of Philosophical Transactions of the Royal Society B: Biological Sciences containing articles written by the meeting's speakers was published online (Table of Contents).
Malcolm Gladwell is great at writing anecdotes, but he dangerously masquerades these as science. Case studies can be incredibly informative -- they form the historical foundation of cognitive
neuroscience and continue to be an important part of cutting-edge research. But there is an important distinction between science, which relies on structured data collection and analysis, and
anecdotes, which rely on an entertaining narrative structure. His claim that dyslexia might be a "desireable difficulty" is maybe the most egregious example of this. Mark Seidenberg, who is a leading
scientist studying dyslexia and an active advocate, has written an excellent commentary about Gladwell's misrepresentation of dyslexia. The short version is that dyslexia is a serious problem that,
for the vast vast majority of people, leads to various negative outcomes. The existence of a few super-successful self-identified dyslexics may be encouraging, maybe even inspirational, but it
absolutely cannot be taken to mean that dyslexia might be good for you.
In various responses to his critics, Gladwell has basically said that people who know enough about the topic to recognize that (some of) his conclusions are wrong, shouldn't be reading his books ("If
my books appear to a reader to be oversimplified, then you shouldn't read them: you're not the audience!"). This is extremely dangerous: readers who don't know about dyslexia, about its prevalence or
about its outcomes, would be led to the false conclusion that dyslexia is good for you. The problem is not that his books are oversimplified; the problem is that his conclusions are (sometimes) wrong
because they are based on a few convenient anecdotes that do not represent the general pattern.
Another line of defense is that Gladwell's books are only meant to raise interesting ideas and stimulate new ways of thinking in a wide audience, not to be a scholarly summary of the research.
Writing about science in a broadly accessible way is a perfectly good goal -- my own interest in cognitive neuroscience was partly inspired by the popular science writing of people like Oliver Sacks
and V.S. Ramachandran. The problem is when the author rejects scientific accuracy in favor of just talking about "interesting ideas". Neal Stephenson once said that what makes a book "science
fiction" is that it is fundamentally about ideas. It is great to propose new ideas and explore what they might mean. But if we follow that logic, then Malcolm Gladwell is not a science writer, he is
a science fiction writer.
The "mind as computer" has been a dominant and powerful metaphor in cognitive science at least since the middle of the 20th century. Throughout this time, many of us have chafed against this metaphor
because it has a tendency to be taken too literally. Framing mental and neural processes in terms of computation or information processing can be extremely useful, but this approach can turn into the
extremely misleading notion that our minds work kind of like our desktop or laptop computers. There are two particular notions that have continued to hold sway despite mountains of evidence against
them and I think their perseverance might be, at least in part, due to the computer analogy.
The first is modularity or autonomy: the idea that the mind/brain is made up of (semi-)independent components. Decades of research on interactive processing (including my own) and emergence have
shown that this is not the case (e.g., McClelland, Mirman, & Holt, 2006; McClelland, 2010; Dixon, Holden, Mirman, & Stephen, 2012), but components remain a key part of the default description of
cognitive systems, perhaps with some caveat that these components interact.
The second is the idea that the mind engages in symbolic or rule-based computation, much like the if-then procedures that form the core of computer programs. This idea is widely associated with the
popular science writing of Steven Pinker and is a central feature of classic models of cognition, such as ACT-R. In a new paper just published in the journal Cognition, Gary Lupyan reports 13
experiments showing just how bad human minds are at executing simple rule-based algorithms (full disclosure: Gary and I are friends and have collaborated on a few projects). In particular, he tested
parity judgments (is a number odd or even?), triangle judgments (is a figure a triangle?), and grandmother judgments (is a person a grandmother?). Each of these is a simple, rule-based judgment, and
the participants knew the rule (last digit is even; polygon with three sides; has at least one grandchild), but they were nevertheless biased by typicality: numbers with more even digits were judged
to be more even, equilateral triangles were judged to be more triangular, and older women with more grandchildren were judged to be more grandmotherly. A variety of control conditions and experiments
ruled out various alternative explanations of these results. The bottom line is that, as he puts it, "human algorithms, unlike conventional computer algorithms, only approximate rule-based
classification and never fully abstract from the specifics of the input."
It's probably too much to hope that this paper will end the misuse of the computer metaphor, but I think it will be a nice reminder of the limitations of this metaphor.
Dixon JA, Holden JG, Mirman D, & Stephen DG (2012). Multifractal dynamics in the emergence of cognitive structure. Topics in Cognitive Science, 4 (1), 51-62 PMID: 22253177
Lupyan, G. (2013). The difficulties of executing simple algorithms: Why brains make mistakes computers don’t. Cognition, 129(3), 615-636. DOI: 10.1016/j.cognition.2013.08.015
McClelland, J.L. (2010). Emergence in Cognitive Science. Topics in Cognitive Science, 2 (4), 751-770 DOI: 10.1111/j.1756-8765.2010.01116.x
McClelland JL, Mirman D, & Holt LL (2006). Are there interactive processes in speech perception? Trends in Cognitive Sciences, 10 (8), 363-369 PMID: 16843037 | {"url":"http://mindingthebrain.blogspot.com/","timestamp":"2014-04-16T04:19:53Z","content_type":null,"content_length":"114229","record_id":"<urn:uuid:5148afd8-296f-41f6-8c77-b80ca141358f>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00518-ip-10-147-4-33.ec2.internal.warc.gz"} |
Reference cell in one TAB from another using two criteria
Think your intents were not specified,
ie to do what?? where the dual criteria is met
Try something along these lines, depending on your intents ..
To Count # of instances where the criteria satisfies:
Press ENTER will do
To Sum another corresp named range: ReturnCol,
where the criteria satisfies:
Press ENTER will do
To return values from corresp named range: ReturnCol
where the criteria satisfies:
Above must be array-entered, ie press CTRL+SHIFT+ENTER
ReturnCol, Date, Hour are presumed identically sized named ranges
Lookup values in A1, B3 are presumed real dates & times
"Jeff" wrote:
> I would like to reference a cell in TAB A based on two criteria,
> Where
> Named_Range_Date = A1
> and
> Named_Range_Hour = B3
> I need to be able to copy this formula so that I can populate a new table,
> any ideas?
> --
> Jeff | {"url":"http://www.excelbanter.com/showthread.php?t=153191","timestamp":"2014-04-18T03:00:42Z","content_type":null,"content_length":"55669","record_id":"<urn:uuid:d4f08548-6375-4cad-9db5-d5ea089a7036>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00606-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Monty Hall Problem
Simple, yet twisted:
Suppose you're on a game show, and you're given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what's behind the
doors, opens another door, say No. 3, which has a goat. He then says to you, "Do you want to pick door No. 2?" Is it to your advantage to switch your choice?
Whaddaya reckon?
Last edited by Toast (2006-10-19 22:53:58) | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=46835","timestamp":"2014-04-18T06:10:17Z","content_type":null,"content_length":"16857","record_id":"<urn:uuid:a037c41b-614e-4a13-927b-62e0741c5475>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00219-ip-10-147-4-33.ec2.internal.warc.gz"} |
Constraint relaxation may be perfect
Results 1 - 10 of 34
- AI MAGAZINE , 1992
"... A large variety of problems in Artificial Intelligence and other areas of computer science can be viewed as a special case of the constraint satisfaction problem. Some examples are machine
vision, belief maintenance, scheduling, temporal reasoning, graph problems, floor plan design, planning genetic ..."
Cited by 372 (0 self)
Add to MetaCart
A large variety of problems in Artificial Intelligence and other areas of computer science can be viewed as a special case of the constraint satisfaction problem. Some examples are machine vision,
belief maintenance, scheduling, temporal reasoning, graph problems, floor plan design, planning genetic experiments, and the satisfiability problem. A number of different approaches have been
developed for solving these problems. Some of them use constraint propagation to simplify the original problem. Others use backtracking to directly search for possible solutions. Some are a
combination of these two techniques. This paper presents a brief overview of many of these approaches in a tutorial fashion.
- JOURNAL OF THE ACM , 1997
"... We introduce a general framework for constraint satisfaction and optimization where classical CSPs, fuzzy CSPs, weighted CSPs, partial constraint satisfaction, and others can be easily cast. The
framework is based on a semiring structure, where the set of the semiring specifies the values to be asso ..."
Cited by 159 (20 self)
Add to MetaCart
We introduce a general framework for constraint satisfaction and optimization where classical CSPs, fuzzy CSPs, weighted CSPs, partial constraint satisfaction, and others can be easily cast. The
framework is based on a semiring structure, where the set of the semiring specifies the values to be associated with each tuple of values of the variable domain, and the two semiring operations (1
and 3) model constraint projection and combination respectively. Local consistency algorithms, as usually used for classical CSPs, can be exploited in this general framework as well, provided that
certain conditions on the semiring operations are satisfied. We then show how this framework can be used to model both old and new constraint solving and optimization schemes, thus allowing one to
both formally justify many informally taken choices in existing schemes, and to prove that local consistency techniques can be used also in newly defined schemes.
- Journal of the ACM , 1997
"... Many combinatorial search problems can be expressed as `constraint satisfaction problems', and this class of problems is known to be NP-complete in general. In this paper we investigate the
subclasses which arise from restricting the possible constraint types. We first show that any set of constrain ..."
Cited by 139 (16 self)
Add to MetaCart
Many combinatorial search problems can be expressed as `constraint satisfaction problems', and this class of problems is known to be NP-complete in general. In this paper we investigate the
subclasses which arise from restricting the possible constraint types. We first show that any set of constraints which does not give rise to an NP-complete class of problems must satisfy a certain
type of algebraic closure condition. We then investigate all the different possible forms of this algebraic closure property, and establish which of these are sufficient to ensure tractability. As
examples, we show that all known classes of tractable constraints over finite domains can be characterised by such an algebraic closure property. Finally, we describe a simple computational procedure
which can be used to determine the closure properties of a given set of constraints. This procedure involves solving a particular constraint satisfaction problem, which we call an `indicator
problem'. Keywords: Cons...
- Constraints , 1999
"... In this paper we describe and compare two frameworks for constraint solving where classical CSPs, fuzzy CSPs, weighted CSPs, partial constraint satisfaction, and others can be easily cast. One
is based on a semiring, and the other one on a totally ordered commutative monoid. While comparing the two ..."
Cited by 102 (27 self)
Add to MetaCart
In this paper we describe and compare two frameworks for constraint solving where classical CSPs, fuzzy CSPs, weighted CSPs, partial constraint satisfaction, and others can be easily cast. One is
based on a semiring, and the other one on a totally ordered commutative monoid. While comparing the two approaches, we show how to pass from one to the other one, and we discuss when this is
possible. The two frameworks have been independently introduced in [2], [3] and [35].
- in IJCAI , 1995
"... We introduce a general framework for constraint solving where classical CSPs, fuzzy CSPs, weighted CSPs, partial constraint satisfaction, and others can be easily cast. The framework is based on
a semiring structure, where the set of the semiring specifies the values to be associated to each tuple o ..."
Cited by 98 (36 self)
Add to MetaCart
We introduce a general framework for constraint solving where classical CSPs, fuzzy CSPs, weighted CSPs, partial constraint satisfaction, and others can be easily cast. The framework is based on a
semiring structure, where the set of the semiring specifies the values to be associated to each tuple of values of the variable domain, and the two semiring operations (+ and x) model constraint
projection and combination respectively. Local consistency algorithms, as usually used for classical CSPs, can be exploited in this general framework as well, provided that some conditions on the
semiring operations are satisfied. We then show how this framework can be used to model both old and new constraint solving schemes, thus allowing one both to formally justify many informally taken
choices in existing schemes, and to prove that the local consistency techniques can be used also in newly defined schemes. 1
- CWI QUARTERLY VOLUME 11 (2&3) 1998, PP. 215 { 248 , 1998
"... We show that several constraint propagation algorithms (also called (local) consistency, consistency enforcing, Waltz, ltering or narrowing algorithms) are instances of algorithms that deal with
chaotic iteration. To this end we propose a simple abstract framework that allows us to classify and comp ..."
Cited by 89 (6 self)
Add to MetaCart
We show that several constraint propagation algorithms (also called (local) consistency, consistency enforcing, Waltz, ltering or narrowing algorithms) are instances of algorithms that deal with
chaotic iteration. To this end we propose a simple abstract framework that allows us to classify and compare these algorithms and to establish in a uniform way their basic properties.
- In Proceedings of the 9th European Conference on Artificial Intelligence , 1990
"... A solution of a Constraint Satisfaction Problem (CSP) is an assignment of values to all its variables such that all its constraints are satisfied. Usually two CSPs are considered equivalent if
they have the same solution set. We find this definition limiting, and develop a more general definition ba ..."
Cited by 86 (0 self)
Add to MetaCart
A solution of a Constraint Satisfaction Problem (CSP) is an assignment of values to all its variables such that all its constraints are satisfied. Usually two CSPs are considered equivalent if they
have the same solution set. We find this definition limiting, and develop a more general definition based on the concept of mutual reducibility. In this extended scheme it is reasonable to consider a
pair of CSPs equivalent even if they have different solutions. The basic idea behind the extended scheme is that two CSPs can be considered equivalent whenever they contain the same "amount of
information", i.e. whenever it is possible to obtain the solution of one of them from the solution of the other one, and viceversa. In this way, both constraint and variable redundancy are allowed in
CSPs belonging to the same equivalence class. As an example of the usefulness of this new notion of equivalence, we formally prove that binary and non-binary CSPs are equivalent (in the new sense).
Such a pro...
- Artificial Intelligence , 1994
"... Finding solutions to a binary constraint satisfaction problem is known to be an NP-complete problem in general, but may be tractable in cases where either the set of allowed constraints or the
graph structure is restricted. This paper considers restricted sets of contraints which are closed under pe ..."
Cited by 58 (18 self)
Add to MetaCart
Finding solutions to a binary constraint satisfaction problem is known to be an NP-complete problem in general, but may be tractable in cases where either the set of allowed constraints or the graph
structure is restricted. This paper considers restricted sets of contraints which are closed under permutation of the labels. We identify a set of constraints which gives rise to a class of tractable
problems and give polynomial time algorithms for solving such problems, and for finding the equivalent minimal network. We also prove that the class of problems generated by any set of constraints
not contained in this restricted set is NP-complete. 1 Introduction Finding solutions to a constraint satisfaction problem is known to be an NPcomplete problem in general [11] even when the
constraints are restricted to binary constraints. However, many of the problems which arise in practice have special properties which allow them to be solved efficiently. The question of identifying
restrictions t...
- Handbook of Constraint Programming , 2006
"... Constraint propagation is a form of inference, not search, and as such is more ”satisfying”, both technically and aesthetically. —E.C. Freuder, 2005. Constraint reasoning involves various types
of techniques to tackle the inherent ..."
Cited by 51 (3 self)
Add to MetaCart
Constraint propagation is a form of inference, not search, and as such is more ”satisfying”, both technically and aesthetically. —E.C. Freuder, 2005. Constraint reasoning involves various types of
techniques to tackle the inherent
- Artificial Intelligence , 1995
"... Finding solutions to a constraint satisfaction problem is known to be an NP-complete problem in general, but may be tractable in cases where either the set of allowed constraints or the graph
structure is restricted. In this paper we identify a restricted set of contraints which gives rise to a clas ..."
Cited by 48 (15 self)
Add to MetaCart
Finding solutions to a constraint satisfaction problem is known to be an NP-complete problem in general, but may be tractable in cases where either the set of allowed constraints or the graph
structure is restricted. In this paper we identify a restricted set of contraints which gives rise to a class of tractable problems. This class generalizes the notion of a Horn formula in
propositional logic to larger domain sizes. We give a polynomial time algorithm for solving such problems, and prove that the class of problems generated by any larger set of constraints is
NP-complete. 1 Introduction Combinatorial problems abound in Artificial Intelligence. Examples include planning, temporal reasoning, line-drawing labelling and circuit design. The Constraint
Satisfaction Problem (CSP) [14] is a generic combinatorial problem which is widely studied in the AI community because it allows all of these problems to be expressed in a natural and direct way.
Reduction operations [12, 10] and intellig... | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=532494","timestamp":"2014-04-19T23:37:47Z","content_type":null,"content_length":"38325","record_id":"<urn:uuid:99c9d7fd-ea3a-43b0-904a-8aa7a87ee243>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00658-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Numpy-discussion] NaN (Not a Number) occurs in calculation of complex number for Bessel functions
[Numpy-discussion] NaN (Not a Number) occurs in calculation of complex number for Bessel functions
Pauli Virtanen pav@iki...
Fri Dec 21 09:45:40 CST 2012
Your code tries to to evaluate
z = 1263309.3633394379 + 101064.74910119522j
jv(536, z)
# -> (inf+inf*j)
In reality, this number is not infinite, but
jv(536, z) == -2.3955170861527422e+43888 + 9.6910119847300024e+43887
These numbers (~ 10^43888) are too large for the floating point
numbers that computers use (maximum ~ 10^308). This is why you get
infinities and NaNs in the result. The same is true for the spherical
Bessel functions.
You will not be able to do this calculation using any software
that uses only floating point numbers (Scipy, Matlab, ...).
You need to use analytical properties of your problem to
get rid of such large numbers. Alternatively, you can use arbitrary
precision numbers. Python has libraries for that:
By the way, the proper place for this discussion is the following
mailing list:
Pauli Virtanen
More information about the NumPy-Discussion mailing list | {"url":"http://mail.scipy.org/pipermail/numpy-discussion/2012-December/064877.html","timestamp":"2014-04-16T19:21:03Z","content_type":null,"content_length":"4045","record_id":"<urn:uuid:8a2b4fcc-92a0-4742-809c-a70a60dc4a05>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00171-ip-10-147-4-33.ec2.internal.warc.gz"} |
Venice, CA SAT Math Tutor
Find a Venice, CA SAT Math Tutor
...I am mostly available only on weekends, but can do long hours and some weekday nights as well. I am looking forward to meeting you and helping you achieve your academic goals!Over the course
of 2 years, I have tutored nearly a dozen students in Algebra 1 specifically and have built and developed...
22 Subjects: including SAT math, reading, English, writing
...For example, I give my students customized quizzes to help them prepare for exams. I am confident that my passion and experience are the qualities you are looking for in a tutor. I look
forward to working with you.
18 Subjects: including SAT math, geometry, algebra 1, GRE
...Obtained from my extensive experience in teaching, one of my strongest teaching capabilities is to provide real-life examples for the subjects. After linking interesting scientific
applications to abstract concepts in math, physics and chemistry, my students are able to learn, to enjoy, and to master the subjects. As a teacher, I feel that this is my greatest reward.
10 Subjects: including SAT math, chemistry, calculus, statistics
...Two years later, she called me to start tutoring her little sister. My policies are simple. Show up, do your homework and prepare to meet me halfway.
9 Subjects: including SAT math, geometry, algebra 1, algebra 2
...We also practice test taking strategies. When they miss a particular problem type, I make up similar questions to be sure the student now understands how to do the problem. Too many students
are used to memorizing "problem types" for tests.
24 Subjects: including SAT math, chemistry, English, calculus
Related Venice, CA Tutors
Venice, CA Accounting Tutors
Venice, CA ACT Tutors
Venice, CA Algebra Tutors
Venice, CA Algebra 2 Tutors
Venice, CA Calculus Tutors
Venice, CA Geometry Tutors
Venice, CA Math Tutors
Venice, CA Prealgebra Tutors
Venice, CA Precalculus Tutors
Venice, CA SAT Tutors
Venice, CA SAT Math Tutors
Venice, CA Science Tutors
Venice, CA Statistics Tutors
Venice, CA Trigonometry Tutors | {"url":"http://www.purplemath.com/venice_ca_sat_math_tutors.php","timestamp":"2014-04-21T02:32:19Z","content_type":null,"content_length":"23886","record_id":"<urn:uuid:a8409ea7-6ae3-4081-bb1f-5aca5cd4a13b>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00343-ip-10-147-4-33.ec2.internal.warc.gz"} |
32.2 Matrices
Home | 18.013A | Chapter 32 Tools Glossary Index Up Previous Next
32.2 Matrices
Matrices provide a convenient way to describe linear equations. Thus if you take the coefficients of your unknowns, in some standard order, as the row elements of your matrix, you define a matrix of
coefficients for any set of equations.
For the example equations above, the coefficient matrix, call it M, is, with the standard ordering of x, y and z
We can then write the original equations as the single matrix equation
Using the definition of matrix multiplication, which is: taking the dot products of the rows of the first matrix with the columns (here single column) of the second to produce the corresponding
elements of the product, you should verify that this matrix equation is exactly the same as our original three equations.
The process of Gaussian elimination can be applied in this matrix form here. The rules are:
1. You can multiply an entire row (on both sides of the equation) by any non-zero number without changing the content of the equations.
2. You can add a multiple of any row to another without changing the content of the equations. You must add entirely across the row, including the other side of the matrix, however.
In this form such operations are called "elementary row operations" and Gaussian elimination is called row reduction.
What you do here is perform enough of operation 2 to form 0's in the matrix on one side of the main diagonal. When this is done you can determine one unknown and then substitute successively to find
the others.
You can also attempt to perform these operations until all elements of your matrix off the main diagonal are 0's, and the diagonal elements are 1. In that case the right hand side vectors are the
solutions for the corresponding variables and you need not substitute back to find all the unknowns.
The n dimensional matrix whose diagonal elements are 1 and off diagonal elements are 0 is called the n dimensional identity matrix, and is written as I usually without any indication of what its size
is, unless that can cause confusion, in which case it is written as I[n].
It has the property that its matrix product with any matrix M of the same dimension is M itself, and its operation on any n dimensional vector v is v itself.
Thus if you start with the matrix equation Mv = r, and row reduce to find another representation of the same set of equations for which M has been reduced to the identity matrix I, you have Iv = r'
where r' is the result of the same row operations on the right side of the equation as those that reduced M to I.
You thereby obtain the solution, v = r'. | {"url":"http://ocw.mit.edu/ans7870/18/18.013a/textbook/HTML/chapter32/section02.html","timestamp":"2014-04-19T07:22:28Z","content_type":null,"content_length":"4750","record_id":"<urn:uuid:7590fcfe-b5db-4ca5-8aa6-74cdf320af87>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00519-ip-10-147-4-33.ec2.internal.warc.gz"} |
Linear Algebra Tutors
Rancho Palos Verdes, CA 90275
Math and Physics Tutor
...I studied discrete math in high school and performed very well in the course, and continued my study of it throughout university. I have also tutored students in discrete math. I studied
linear algebra
at university, and I enjoyed the class and performed very well...
Offering 10+ subjects including linear algebra and calculus | {"url":"http://www.wyzant.com/Hawthorne_CA_Linear_Algebra_tutors.aspx","timestamp":"2014-04-24T23:12:59Z","content_type":null,"content_length":"61392","record_id":"<urn:uuid:dd776b11-9467-472a-8b56-1dc4d2389984>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00211-ip-10-147-4-33.ec2.internal.warc.gz"} |
arithmetic, strings
If a number is divisible by 3, return "Fizz". If a number is divisible by 5, return "Buzz". If a number is divisible by 3 and 5, return "FizzBuzz"
def fizzbuzz(x)
assert_equal fizzbuzz(3), "Fizz"
assert_equal fizzbuzz(50), "Buzz"
assert_equal fizzbuzz(15), "FizzBuzz"
assert_equal fizzbuzz(5175), "FizzBuzz"
Your Solution
Back to Problems | {"url":"http://www.rubeque.com/problems/fizzbuzz?solution_code=case+x%0D%0Awhen+3+then+%22Fizz%22%0D%0Awhen+50+then+%22Buzz%22%0D%0Awhen+15+then+%22FizzBuzz%22%0D%0Awhen+5175+then+%22FizzBuzz%22%0D%0Aend","timestamp":"2014-04-19T17:01:37Z","content_type":null,"content_length":"6216","record_id":"<urn:uuid:a407f943-1430-4e77-95f6-e7198b7cbfdf>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00662-ip-10-147-4-33.ec2.internal.warc.gz"} |
Help!!!!!!!! What Am I Doing Wrong????
05-19-2002 #1
Registered User
Join Date
May 2002
I been working on these two projects forever and I have to turn them in tomorrow. I keep receiving errors and now I'm out of answers. One project I received help from someone, but my teacher said
the coding was wrong. Please help!
Project 1: You are writing a code to find the greatest common divisior (gcd) Write a recursion function to find the gcd, returns integer with two integer arugmentss n amd m. Use gcd(m,n%m)
format. if one of the two integerss is 0, the gcd will be the 2nd integer. tell the user to enter -1 for exiting.
Ok, here are the errors I'm receiving for this code:
error C2146: syntax error : missing ';' before identifier 'cout'
error C2065: 'end1' : undeclared identifier
error C2679: binary '>>' : no operator defined which takes a right-hand operand of type 'int (__cdecl *)(int,int)' (or there is no acceptable conversion)
error C2065: 'm' : undeclared identifier
error C2065: 'n' : undeclared identifier
error C2143: syntax error : missing ';' before '}'
error C2143: syntax error : missing ';' before '<<'
error C2501: 'cout' : missing storage-class or type specifiers
error C2371: 'cout' : redefinition; different basic types (same error for 'cin')
error C2447: missing function header (old-style formal list?)
fatal error C1004: unexpected end of file found
int gcd (int n, int m)
if (m==0)
return n;
return gcd (m,n%m);
void main ()
int gcd (int n, int m)
cout<<"Enter n,m ?"<<end1;
cout<<"The greatest common divisor gcd"<<end1;
while (m>n)
cout<<"Have a Nice Day"<<end1;
project 2: Calcukate the average of a series if test scores where the series is dropped. GetValues should ask for five test scores and store them in variables. findlowest should determine which
of the five scores if the lowest, return value. CalcAverage should calculate and display the average of the four highest scores.
#include <iostream.h>
void main ()
void Getvalues(); // prototype of function
double Getvalues (double x[]);
int scores, x;
cout<<"Please enter values";
for(x = 0; x <=5; ++x){
cin>>scores [x];
void findlowest ()
double findlowest [5];
int lowest = scores[0];
for (x =0; x<= 5; ++x)
if (x<lowest)
scores = x;
return scores;
void calcaverage ()
double calcaverage [4];
int highest= scores [0];
for (int x = 0; x<=100; ++x);
if (scores < highest)
highest=scores ;
cout<<"The average is"<<calcaverage<<end1;
cout<<"Have a nice day"<<calcaverage<<end1;
Last edited by Cnote; 05-19-2002 at 09:33 PM.
I know this probably isn't much help to you, but if you want help you're far more likely to get it, if you tell people what the problem is you're having... what compiler, and what the error
messages are.
Or if it compiles, what the program is doing wrong, a lot of people that could help aren't going to go to the trouble of compiling your code to help you debug it.
You seem to have a braces issue in program 2, is that all of the code?
void main(), umm no....
int main (void), and return 0 at the successful completion of program.
You have helped your cause with code tags though, well done.
Yes, major issues with the braces there.
You also missed a bunch of semicolons, as well as put some semicolons after your for-loops.
As a rule of thumb, I always use braces, even if theres only one statement after a loop or if statement... this prevents you from making errors like those in your code.
I revised your code the best I could
#include <iostream.h>
int main ()
void Getvalues(); // prototype of function
double Getvalues (double x[]);
int scores, x;
cout<<"Please enter values";
for(x = 0; x <=5; ++x)
cin>>scores [x];
void findlowest ();
double findlowest [5];
int lowest = scores[0];
for (x = 0; x <= 5; ++x)
if (x<lowest)
scores = x;
} else {
return scores;
void calcaverage();
double calcaverage[4];
int highest = scores [0];
// for (int x = 0; x<=100; ++x); I can't figure out why that is stuck in there;
if (scores < highest)
highest = scores;
cout<<"The average is"<<calcaverage<<end1;
cout<<"Have a nice day"<<calcaverage<<end1;
return 0;
Project 2 [is this all of your code?] Yes it is.
I have a question about the void main(). Why shouldn't you use it this way? I have advanced C++ now and the instructor teaches us to do it this way. However in my visual c++ class, my instructor
taught us to use int main (). Is there a right or wrong here or just preference?
void main() is non-standard. Either int main() or int main(int argc, char *argv[]) are acceptable, and standard.
Also, check here: http://www.cprogramming.com/boardfaq.html#main
Why so many braces? The reason I used this line, for (int x = 0; x<=100; ++x); was to make the score greater than 0 but this than 100.
I see this is wrong, what way should I code this?
>Why so many braces?
Just my style... it catches needless mistakes.
>for (int x = 0; x<=100; ++x); was to make the score greater than 0 but this than 100.
>I see this is wrong, what way should I code this?
Sorry, I have no idea what you're trying to do there. What do you want to make more than 0 but less than 100? I think your logic is a little off, this should be an if statement. like if (number <
0) number = 0; if (number > 100) number = 100;
05-19-2002 #2
05-19-2002 #3
05-19-2002 #4
Registered User
Join Date
May 2002
05-19-2002 #5
05-19-2002 #6
Registered User
Join Date
May 2002
05-19-2002 #7 | {"url":"http://cboard.cprogramming.com/cplusplus-programming/18072-help-what-am-i-doing-wrong.html","timestamp":"2014-04-17T05:25:45Z","content_type":null,"content_length":"67014","record_id":"<urn:uuid:587781bd-67fd-41e6-8aa7-98afdd7fc23d>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00289-ip-10-147-4-33.ec2.internal.warc.gz"} |
Algebra and algorithms for the classification of differential invariants
Seminar Room 1, Newton Institute
For a Lie group action known by its infinitesimals three practical descriptions of the differential algebra of differential invariants are given by generators and syzygies. The syzygies form a
formally integrable system in terms of the non commuting invariant derivations. Applying a generalized differential elimination algorithm on those allows to reduce the number of generators. The
normalized and edge invariants were the focus in the reinterpretation of the moving frame method by Fels & Olver (1999) and its application to symmetry reduction by Mansfield (2001). My contribution
here is first to show the completeness of a set of syzygies for the normalized invariants that can be written down with minimal information on the group action (namely the infinitesimal generators).
Second, I provide the adequate general concept of edge invariants and show their generating properties. The syzygies for edge invariants are obtained by applying the algorithms for differential
elimination that I generalized to non-commuting derivations. Another contibution is to exhibit the generating and rewriting properties of Maurer-Cartan invariants. Those have desirable properties
from the computational point of view. They are all the more meaningful when one understands that they are the differential invariants that come into play in the moving frame method as practiced by
Griffiths (1974) and differential geometers. These contributions are easy to apply to any group action as they are implemented as the maple package AIDA, which works on top of the library
DifferentialGeometry and the extension of the library diffalg to non commutative derivations.
The video for this talk should appear here if JavaScript is enabled.
If it doesn't, something may have gone wrong with our embedded player.
We'll get it fixed as soon as possible. | {"url":"http://www.newton.ac.uk/programmes/DIS/seminars/2009062915301.html","timestamp":"2014-04-17T09:47:40Z","content_type":null,"content_length":"7373","record_id":"<urn:uuid:036ff4f5-e34a-44b5-9f53-b57d765aff6c>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00561-ip-10-147-4-33.ec2.internal.warc.gz"} |
Number of results: 267
Anna and Jamie were at track practive. The track is 2/5 kilometers around. Anna ran 1 lap in 2 minutes. How many minutes does it take Anna to run one kilometer?
Monday, December 3, 2012 at 1:00pm by Liam
Anna and bob play a game in which Anna begins by rolling a fair dice, after which bob tosses a fair coin. They take turns until one of them wins. Anna wins when she rolls a 6. Bob wins when the coin
lands on heads. What is the probability that Anna will win the game? Well ...
Monday, May 31, 2010 at 10:38pm by Sam
Anna and bob play a game in which Anna begins by rolling a fair dice, after which bob tosses a fair coin. They take turns until one of them wins. Anna wins when she rolls a 6. Bob wins when the coin
lands on heads. What is the probability that Anna will win the game? Well ...
Monday, May 31, 2010 at 12:44pm by Ally
Bob and Bob Jr. stand at open doorways at opposite ends of an airplane hangar 25 m long. Anna owns a spaceship, 40 m long as it sits on the runway. Anna takes off in her spaceship, then swoops
through the hangar at constant velocity. At precisely time zero on both Bob's clock ...
Tuesday, January 29, 2013 at 7:25pm by Katie
I'm trying to say: at the party, Anna and her father argued about Annas behavior.. so far I have: A la fiesta, Anna y su padre rineron acerca de la conducta de Anna. but A, the first word, is wrong
and so is the verb ending..? What is wrong with it?
Monday, March 9, 2009 at 4:26pm by Alex
Anna, Annabel, Anna -- please use the same name for your posts.
Monday, December 2, 2013 at 5:22pm by Ms. Sue
Joe and Anna collect football cards.The greatest common factor of the numbers of cards in their collections is 15.Altogether Joe and Anna have 75 cards.If Joe has more cards than Anna,how many cards
do they each have?
Thursday, February 19, 2009 at 3:26pm by Joseph
Bob and Bob Jr. stand at open doorways at opposite ends of an airplane hangar 25 m long. Anna owns a spaceship, 40 m long as it sits on the runway. Anna takes off in her spaceship, then swoops
through the hangar at constant velocity. At precisely time zero on both Bob's clock ...
Wednesday, January 30, 2013 at 9:02am by Katie
What happens if Anna rolls the 6 and Bob tosses a head? It it considered a tie, or does one or the other automatically win? This will change the probability of Anna winning the game.
Monday, May 31, 2010 at 10:38pm by PsyDAG
Anna uses 5/6 tank of gas each week driving to work . How many tanks of gas does Anna use in 1 2/3 months.
Monday, March 24, 2014 at 4:22pm by kenny
Juanita answered 21, and Anna answered three fewer than this, so Anna must have answered 18. Earl answered two more than Anna, so Earl must have answered... what?
Thursday, September 18, 2008 at 7:14pm by David Q
Anna uses 4/5 of vinegar in her salad dressing recipe.How much vinegar would Anna use to make 2 recipes?
Tuesday, December 3, 2013 at 6:21pm by Curtis
if anna drew a circle that has an area of 153.86 square meters what is the radius of anna's circle
Tuesday, April 26, 2011 at 3:39pm by jack
"Anna gave my sister and me the dollhouse.", not "Anna gave my sister and I the dollohouse.": The test is "Anna gave me the dollhouse." Correct? Thank you!
Wednesday, November 14, 2007 at 12:54am by Mary Ann
algebra concept 1
1. Anna has 12 bills in her wallet, some $5 and some $10. The total value of the bills is $100. How many of each bill does Anna have?
Wednesday, May 12, 2010 at 2:39pm by Anonymous
While warming up for track meet,Anna jogged for 17/20 of a mile. Katie jogged for 3/5 of a mile. How much farther did anna jog?
Friday, February 8, 2013 at 4:58pm by JR
since 15 is a common factor the number of cards each has must be a multiple of 15 let the number of cards that Joe has be 15x, let Anna's number of cards be 15y then 15x + 15y = 75 x + y = 5 also x >
y , and of course both x and y must be whole numbers. there are only 2 ...
Thursday, February 19, 2009 at 3:26pm by Reiny
I don't know what you mean by rate but here is how you calculate how tall Anna is in centimeters. 5ftx12inx2.54cm=152.4cm So Anna is 152.4cm tall.
Tuesday, October 1, 2013 at 9:21am by Anonymous
Legend has it that long ago a kind was very pleased with the game of chess that he decided to reward the inventor of the game Anna, with whatever she wanted. Anna asked for a resource instead of
money. Specifically, she asked for one grain of wheat for the first square of a ...
Sunday, September 18, 2011 at 8:06pm by Anonymous
recall that speed or rate is distance traveled over time, or: speed, v = d/t thus, you compare Anna's speed and Julia's speed: Anna: v = 20/(1 1/3) Julia: v = 246/(16) *note: first convert mixed
number to improper fraction before dividing. hope this helps. :)
Friday, October 29, 2010 at 2:15am by jai
Anna and Julia each take a bicycle trip. Anna rides 20 miles in 1 1/3 hours. Julia rides 246 miles in 16 hours. Which has slower unit rate? By how much?
Friday, October 29, 2010 at 2:15am by Lilly
Anna burned 15 calories per minute running for x minutes and 10 calories per minute hiking for y minutes. She spent a total of 60 minutes running and hiking and burned 700 calories. The system of
equations shown below can be used to determine how much time Anna spent on each ...
Tuesday, September 24, 2013 at 2:07pm by Joy
"HELP" isn't any school subject I've ever heard of. Anna would undoubtedly get faster help from our resident French instructor if she put her actual school subject in the School Subject box: French
(Anna, learn to follow directions, please.)
Wednesday, September 30, 2009 at 11:24pm by Writeacher
anna bought 3 types of fruit for a fruit salad. She paid three times as much for buleberries as for pears and $2.50 less for strawberries than for blueberries. A) define a variable and write
algebraic expressions to represent the amount she spent on each type of fruit. B) if ...
Saturday, November 24, 2012 at 3:47pm by karen
A book store sells books for $4 each. When a customer purchases 2 books they receive 1 additional book for half-price . Anna bought just enough books to receive 3 half-price books. How much money did
Anna spend?
Thursday, December 27, 2012 at 9:10pm by Anonymous
Sentence Question?? LANG ARTS
Choose the option that corrects the sentence below. "Commander-in-Chief Sam Houston defeated the Mexican army and captured Santa Anna in 1836." A. Commander-in-Chief, Sam Houston, defeated the
Mexican army and captured Santa Anna in 1836. B. Commander-in-Chief Sam Houston ...
Tuesday, March 5, 2013 at 1:42pm by BoopityPoo
English Check
My Answer: 1. Venture 2. calamity 3. turmoil 4. Flagrant 5. comprehensive 6. rehabilitate 7. fluctuate 8.ponder 9.persevere 10. Conventional We tried to stop Anna from jumping, but her 1)___
disregard of our warnings led to a 2)__ that would change her life forever. She dove ...
Wednesday, September 14, 2011 at 12:31pm by ME
I have a word problem with fractions. Anna estimated that her brother Nick ate about 1/3 of an apple pie, her brother Matt ate 5/8 of the pie, and she ate about 1/4 of the pie. How might you convince
Anna that her estimates do not make sense? Be sure to show every step of your...
Wednesday, January 23, 2013 at 5:10pm by Robert
note to anna
Anna, you can't get lazy in chemistry and not use caps when needed. na is not anything but two letters and I have no idea what og br is. Starting a sentence with a capital letter at least tells us
where the sentence starts. A period helps us know where the sentence ends. This ...
Monday, September 3, 2012 at 12:14pm by DrBob222
Anna, an adolescent girl, is very much in love with her boyfriend who is three years older than she. He is putting a lot of pressure on her to have sex. At the same time, she is anxious about her
parents’ attitude towards her boyfriend. Her mother constantly warns her about ...
Thursday, November 10, 2011 at 8:23pm by lisa
Comma help
Choose the option that corrects the sentence below. If the sentence is correct, select “d. The sentence is correct.” Commander-in-Chief Sam Houston defeated the Mexican army and captured Santa Anna
in 1836 A Commander-in-Chief, Sam Houston, defeated the Mexican army and ...
Thursday, March 7, 2013 at 4:44pm by Abbey
just want to know if i am in the right direction make example sentence for praticing generalization, discrimination, extinctiom, spontaneous recovery, positive reinforcement, negative reinforcement,
punishment generalization 1.John was frightened by a barking lunging spaie. ...
Tuesday, May 11, 2010 at 6:46pm by anna
Anna has 12 bills in her wallet, some $5 and some $10. The total value of the bills is $100. How many of each bill does Anna have? is this how i do this? x+y=12 5x+10y=100 y=12-x 5x+10(12-x)
5x+120-10x=100 5x-10x=2-5x=20 x4 y8 Looks good to me. Matt
Saturday, April 28, 2007 at 4:00am by Anonymous
Anna was opening a new restaurant, so she went to the sign store to get letters to make a sign to hang above the storefront. When she got to the sign store, the only letters they had in stock were
two copies of the letter 'X,' two copies of the letter 'E,' and one copy of the ...
Monday, April 1, 2013 at 10:32am by Anna
math stuff
suppose to arrage the names of the ppl in descending order of their share of oranges: Eddy's share of oranges is half the sum of dennis and carols share bob has more oranges then dennis the total
share of anna and bob is equal to that of carol and dennis eddy has fewer oranges...
Saturday, March 31, 2007 at 10:21pm by jake p
You're welcome, Anna.
Thursday, September 17, 2009 at 7:57pm by Ms. Sue
You' re welcome Anna :)
Wednesday, November 18, 2009 at 7:50pm by Kanon
Thanks anna
Monday, November 14, 2011 at 8:20pm by ann
Earl answered 2 more math questions correctly than Anna did. Anna answered 3 fewer questions thatn Juanita did. Juanita answered 21 questions correctly. How many questions did Earl answer correctly?
Could someone explain how to get the answer. Thanks.
Thursday, September 18, 2008 at 7:14pm by Mattie
You're very welcome, Anna.
Saturday, February 13, 2010 at 2:33pm by Ms. Sue
K, Thanks Anna. Thats what I thought. :D
Monday, December 9, 2013 at 6:00pm by Jasmine
World Culture
Who is Anna Akhmatova?!!!!
Wednesday, September 27, 2006 at 7:51pm by Kamilah
4th grade
Anna is right.
Monday, April 27, 2009 at 9:55pm by Reiny
Good work Anna :)
Thursday, October 15, 2009 at 7:38pm by Kanon
thanks anna. i figured it out thanx 2 u.
Sunday, October 3, 2010 at 8:27pm by alex
6th grade math
You're welcome, Anna. 1/2 = w/4 4 * (1/2) = w ? = w
Sunday, January 30, 2011 at 11:51am by Ms. Sue
math help, correction
Anna has 12 bills in her wallet, some $5 and some $10. The total value of the bills is $100. HOw many of each bill odes Anna have? How do i solve these type of problems my quess was this but i don't
know if i'm correct. 5(x) + 10(y) = 100 5(0) + 10y = 100 10y = 100 --- --- 10 ...
Thursday, January 11, 2007 at 5:10pm by jasmine20
4th grade
I think Anna is right too.
Monday, April 27, 2009 at 9:55pm by Sammy
YW, Anna. We want you all to be safe! =)
Thursday, November 26, 2009 at 9:42pm by Writeacher
What is the question, anna? Do we need to factor it?
Thursday, March 28, 2013 at 9:16am by Knights
thank you ms.sue and princess anna
Tuesday, November 26, 2013 at 10:11pm by John Dosley
Read the sentences. Add quotation marks and commas where needed I prefer country music said Melanie. "I prefer country music," said Melanie. Let me go with you my sister begged. "Let me go with you,"
my sister begged. Why Anna asked would you not tell me about your party? "Why...
Wednesday, December 5, 2012 at 9:43pm by Jerald
anna you are wromg because this is an order of operarions problem
Wednesday, September 3, 2008 at 5:46pm by lily
To SraJMcGin
You are entirely welcome, Anna! Sra (aka Mme)
Wednesday, November 4, 2009 at 7:39pm by SraJMcGin
Hey Anna, can you break that down. I'm so confused!
Tuesday, November 10, 2009 at 9:00pm by Karina
You' re welcome! Thank you Anna. I wish you a wonderful Christmas from France :)
Monday, December 21, 2009 at 2:59pm by kanon
Where was Sanna Anna's headquarters during the Alamo?
Wednesday, January 27, 2010 at 9:43pm by jessica
What is the theme of the book NUmber 8 by Anna Fienberg?
Sunday, March 20, 2011 at 7:43pm by Michelle
I am glad to have finally helped you Princess Anna! :D
Wednesday, December 18, 2013 at 1:28pm by Jasmine
5th grade math
Annna wants to buy her grandmother a gift. Anna decides to buy a piece of jewelry. At the store she sees that 1/2 of the jewelry is necklaces. 1/4 of the jewelry is pins. The rest of the jewelry is a
total of 16 bracelets and rings. There are 3 times as many bracelets as rings...
Sunday, November 15, 2009 at 10:53am by Please Please Help
Annna wants to buy her grandmother a gift. Anna decides to buy a piece of jewelry. At the store she sees that 1/2 of the jewelry is necklaces. 1/4 of the jewelry is pins. The rest of the jewelry is a
total of 16 bracelets and rings. There are 3 times as many bracelets as rings...
Sunday, November 15, 2009 at 8:03pm by Please Please Help
4th grade
Sammy is very smart and i agree with Anna too!
Monday, April 27, 2009 at 9:55pm by Sammy
social subjects
Anna, did you read the section on Cultural History.
Sunday, September 27, 2009 at 2:09pm by GuruBlue
social studies
OK, I understand thanks so much @Princess Anna @Writeacher
Monday, January 13, 2014 at 4:26pm by Jen (Please HELP )
Science help,plz:)
@Princess Anna did you get the answers for this test i'm stuck on it too
Saturday, December 14, 2013 at 2:53pm by mtv
Anna if u are having trouble with number 3, have a look at Ex5.2 it will help.
Tuesday, November 10, 2009 at 9:00pm by Mia
What makes Anna Quindelen Homeless a good example essay ?
Tuesday, February 16, 2010 at 9:53pm by layla
Trianle Fire
Anna, recheck your 2-25-11,4:51pm post.
Saturday, February 26, 2011 at 1:38am by Henry
social studies
where did santa anna order the slaughtering of 350 men?
Thursday, March 3, 2011 at 7:52pm by sadie
TO MathMate
Thank you Mme Sra. This would make life a lot easier for Anna.
Thursday, October 1, 2009 at 11:58pm by MathMate
Can you help me improve my last questions? Thank you very much for your invaluable help. Anna answered one of them.
Sunday, May 2, 2010 at 6:41pm by Franco
Yes I have but it's a different translation from the ones I find on the internet so I dont know how to answer.
Sunday, October 3, 2010 at 7:59pm by english
Science help.
ahahah.....(Princess Anna)...she liked remembered ages after!!!! lol
Tuesday, February 4, 2014 at 3:32pm by Anonymous
how does anna feel about what is happening to her family from the play Thunder on Sycamore Street
Monday, April 26, 2010 at 11:32pm by rachel
What does the poet find ironic about her situation? In the poem In memory of M.B by Anna Akhmatova.
Friday, September 21, 2012 at 12:58pm by Angelica
Science HELP!!!!
It's radio waves. I just found it in my book. It was hidden from my eyes. But thanks Anna and Elena! :) :D
Tuesday, December 10, 2013 at 4:37pm by Jasmine
Princess Anna -- please do not give him any more answers. We try to help students learn, not help them cheat.
Tuesday, November 26, 2013 at 9:40pm by Ms. Sue
Princess Anna -- at least Prof Reiny showed the student how he got the right answer.
Tuesday, December 3, 2013 at 9:45pm by Ms. Sue
Looks as if Princess Anna needs to stop answering questions she is not 100% sure of. =(
Saturday, December 14, 2013 at 3:57pm by Writeacher
anna and two of her friends went apple picking. they picked 156 . can they divide the apples does each recieve
Monday, August 2, 2010 at 6:41pm by kamleen
Identity crisis?
Dan, Daniel, Lina, Anna, whoever ... Please pick one name and stay with it.
Tuesday, December 24, 2013 at 12:56pm by Writeacher
OK,the question is what three statements that define all living things in terms of a cell? crossword puzzle! Anna
Tuesday, October 7, 2008 at 8:55pm by Anna
Cultural Anthropology
Hello Ms Sue. How are you? I am actually reading now. However, I am more leaning to A. Is this correct? Thank you. Anna
Monday, March 18, 2013 at 10:45pm by Anna
Anna/Bruna/whoever ~ No one will write an essay for you. You need to do your own work, and then someone here will check it for you.
Thursday, June 27, 2013 at 8:02am by Writeacher
What post are you talking about? Looked back in search for "anna" and all seem to have been answered.
Saturday, February 1, 2014 at 2:04pm by Reiny
“[R]emembering the rhythm” and “backs of buses” both contain examples of (1 point) alliteration. stanzas. rhyme. onomatopoeia. 2. Choose the option that corrects the sentence below. If the sentence
is correct, select “d. The sentence is correct." Members of the Convention of ...
Sunday, April 28, 2013 at 7:34pm by bella
Anna, Hello I'm Lisa. Thank you for providing some fedback on my posted question. I really appreciate it! Have a nice day!
Tuesday, March 11, 2008 at 6:29pm by To Anna from Lisa
Anna, can you double check if the question is not: "The limit of (sqrt(4-x)-2)/x as x approaches 0" The answer would be -1/4 using Mr. Pursley's approach.
Saturday, September 18, 2010 at 11:19pm by MathMate
Science help (just 4 questions)
Are "Princess Anna" and "Miley23" having an identity crisis or something? Identical IP address for both. =(
Wednesday, December 4, 2013 at 1:22pm by Writeacher
Mme Sra: merci! Anna: le français in student B. Also note the spelling of étudier is written with an accent aigu. é
Thursday, September 24, 2009 at 9:37pm by MathMate
Anna, Jessica, Mark -- or Whoever -- you're more likely to receive help if you give us some idea of your thinking about these questions.
Friday, April 2, 2010 at 7:12pm by Ms. Sue
Literature 7
I don't think Anna Sewell mentioned the breed in the book. However, he was played by a quarter horse in the movie. Some claim he was Arabian.
Thursday, February 17, 2011 at 10:25pm by Ms. Sue
Anna, David, Zeke -- or whoever -- Please use the same name for your posts. I'll be happy to check your answer to the above problem.
Friday, September 9, 2011 at 6:02pm by Ms. Sue
From the poem "The Wanderer" What is the plight of the wanderer "earth-walker"? How did he become what he is and what is he seeking?
Sunday, October 3, 2010 at 7:59pm by english
just divide the number of moles by the number of liters anna
Sunday, October 3, 2010 at 8:27pm by vivyianna
Anna Age-6 Laura Age-3 Product= 6*3=18 Sum=6+3=9
Wednesday, January 5, 2011 at 8:18pm by Dayton
What would be the temperature of 50 g of 20°C water mixed with 80 g of 40°C water?
Wednesday, October 16, 2013 at 11:24am by Jenna V
Maybe the teacher think in Korea that the spelling homework here is great and useful to you Anna
Tuesday, January 16, 2007 at 6:16pm by Doodle
Anna -- I removed your posts because this is Jordan's question. Please do not piggy back on other student's posts.
Sunday, January 5, 2014 at 5:57pm by Ms. Sue
If Anna is 5 ft tall. How tall is she in centimeters and what is the rate?
Tuesday, October 1, 2013 at 9:21am by Janet
Pages: 1 | 2 | 3 | Next>> | {"url":"http://www.jiskha.com/search/index.cgi?query=anna","timestamp":"2014-04-20T21:54:56Z","content_type":null,"content_length":"32282","record_id":"<urn:uuid:483d8359-0f6f-49bd-9143-5d9620719223>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00590-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Given a cube ABCD.EFGH with the length of each edge is 4 cm. What is the distance between lines CE and BG? (the lines are skew each other)
• one year ago
• one year ago
Best Response
You've already chosen the best response.
Would you label the vertices of the uploaded diagram of a cube so that the points are consistent with those on the cube in the posted problem? Thanks. @chihiroasleaf
Best Response
You've already chosen the best response.
the sketch of the cube looks like this.. |dw:1362389289192:dw|
Best Response
You've already chosen the best response.
CE = 4√3 cm BG = 4√2 cm
Best Response
You've already chosen the best response.
@chihiroasleaf Are you asking for the difference in size between a face diagonal segment BG of the cube and the diagonal of the cube, segment ED? Or, are you asking what is the minimum distance
between the skew lines BG and ED?
Best Response
You've already chosen the best response.
I'm asking the minimum distance between the skew lines BG and ED @Directrix
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/5133204fe4b0034bc1d7b20b","timestamp":"2014-04-16T04:30:24Z","content_type":null,"content_length":"92578","record_id":"<urn:uuid:bd303128-cb96-47fd-bdb0-920b8c85a7f1>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00600-ip-10-147-4-33.ec2.internal.warc.gz"} |
Results 1 - 10 of 70
, 2004
"... Elliptic curves have been intensively studied in number theory and algebraic geometry for over 100 years and there is an enormous amount of literature on the subject. To quote the mathematician
Serge Lang: It is possible to write endlessly on elliptic curves. (This is not a threat.) Elliptic curves ..."
Cited by 369 (17 self)
Add to MetaCart
Elliptic curves have been intensively studied in number theory and algebraic geometry for over 100 years and there is an enormous amount of literature on the subject. To quote the mathematician Serge
Lang: It is possible to write endlessly on elliptic curves. (This is not a threat.) Elliptic curves also figured prominently in the recent proof of Fermat's Last Theorem by Andrew Wiles. Originally
pursued for purely aesthetic reasons, elliptic curves have recently been utilized in devising algorithms for factoring integers, primality proving, and in public-key cryptography. In this article, we
aim to give the reader an introduction to elliptic curve cryptosystems, and to demonstrate why these systems provide relatively small block sizes, high-speed software and hardware implementations,
and offer the highest strength-per-key-bit of any known public-key scheme.
- Journal of Algorithms , 1985
"... This is the nineteenth edition of a (usually) quarterly column that covers new developments in the theory of NP-completeness. The presentation is modeled on that used by M. R. Garey and myself
in our book ‘‘Computers and Intractability: A Guide to the Theory of NP-Completeness,’ ’ W. H. Freeman & Co ..."
Cited by 188 (0 self)
Add to MetaCart
This is the nineteenth edition of a (usually) quarterly column that covers new developments in the theory of NP-completeness. The presentation is modeled on that used by M. R. Garey and myself in our
book ‘‘Computers and Intractability: A Guide to the Theory of NP-Completeness,’ ’ W. H. Freeman & Co., New York, 1979 (hereinafter referred to as ‘‘[G&J]’’; previous columns will be referred to by
their dates). A background equivalent to that provided by [G&J] is assumed, and, when appropriate, cross-references will be given to that book and the list of problems (NP-complete and harder)
presented there. Readers who have results they would like mentioned (NP-hardness, PSPACE-hardness, polynomial-time-solvability, etc.) or open problems they would like publicized, should
, 1984
"... Given a primitive element g of a finite field GF(q), the discrete logarithm of a nonzero element u GF(q) is that integer k, 1 k q - 1, for which u = g k . The well-known problem of computing
discrete logarithms in finite fields has acquired additional importance in recent years due to its appl ..."
Cited by 87 (6 self)
Add to MetaCart
Given a primitive element g of a finite field GF(q), the discrete logarithm of a nonzero element u GF(q) is that integer k, 1 k q - 1, for which u = g k . The well-known problem of computing discrete
logarithms in finite fields has acquired additional importance in recent years due to its applicability in cryptography. Several cryptographic systems would become insecure if an efficient discrete
logarithm algorithm were discovered. This paper surveys and analyzes known algorithms in this area, with special attention devoted to algorithms for the fields GF(2 n ). It appears that in order to
be safe from attacks using these algorithms, the value of n for which GF(2 n ) is used in a cryptosystem has to be very large and carefully chosen. Due in large part to recent discoveries, discrete
logarithms in fields GF(2 n ) are much easier to compute than in fields GF(p) with p prime. Hence the fields GF(2 n ) ought to be avoided in all cryptographic applications. On the other hand, ...
, 2000
"... This paper introduces the XTR public key system. XTR is based on a new method to represent elements of a subgroup of a multiplicative group of a finite field. Application of XTR in cryptographic
protocols leads to substantial savings both in communication and computational overhead without compromis ..."
Cited by 80 (11 self)
Add to MetaCart
This paper introduces the XTR public key system. XTR is based on a new method to represent elements of a subgroup of a multiplicative group of a finite field. Application of XTR in cryptographic
protocols leads to substantial savings both in communication and computational overhead without compromising security.
, 2006
"... Elliptic curves with small embedding degree and large prime-order subgroup are key ingredients for implementing pairingbased cryptographic systems. Such “pairing-friendly” curves are rare and
thus require specific constructions. In this paper we give a single coherent framework that encompasses all ..."
Cited by 78 (10 self)
Add to MetaCart
Elliptic curves with small embedding degree and large prime-order subgroup are key ingredients for implementing pairingbased cryptographic systems. Such “pairing-friendly” curves are rare and thus
require specific constructions. In this paper we give a single coherent framework that encompasses all of the constructions of pairing-friendly elliptic curves currently existing in the literature.
We also include new constructions of pairing-friendly curves that improve on the previously known constructions for certain embedding degrees. Finally, for all embedding degrees up to 50, we provide
recommendations as to which pairing-friendly curves to choose to best satisfy a variety of performance and security requirements.
- Proceedings of Cryptography and Coding 2005, volume 3796 of LNCS , 2005
"... Abstract. In recent years cryptographic protocols based on the Weil and Tate pairings on elliptic curves have attracted much attention. A notable success in this area was the elegant solution by
Boneh and Franklin [7] of the problem of efficient identity-based encryption. At the same time, the secur ..."
Cited by 77 (2 self)
Add to MetaCart
Abstract. In recent years cryptographic protocols based on the Weil and Tate pairings on elliptic curves have attracted much attention. A notable success in this area was the elegant solution by
Boneh and Franklin [7] of the problem of efficient identity-based encryption. At the same time, the security standards for public key cryptosystems are expected to increase, so that in the future
they will be capable of providing security equivalent to 128-, 192-, or 256-bit AES keys. In this paper we examine the implications of heightened security needs for pairing-based cryptosystems. We
first describe three different reasons why high-security users might have concerns about the long-term viability of these systems. However, in our view none of the risks inherent in pairing-based
systems are sufficiently serious to warrant pulling them from the shelves. We next discuss two families of elliptic curves E for use in pairingbased cryptosystems. The first has the property that the
pairing takes values in the prime field Fp over which the curve is defined; the second family consists of supersingular curves with embedding degree k = 2. Finally, we examine the efficiency of the
Weil pairing as opposed to the Tate pairing and compare a range of choices of embedding degree k, including k = 1 and k = 24. Let E be the elliptic curve 1.
, 2003
"... We propose a simple algorithm to select group generators suitable for pairing-based cryptosystems. The selected parameters are shown to favor implementations of the Tate pairing that are at once
conceptually simple and very efficient, with an observed performance about 2 to 10 times better than prev ..."
Cited by 46 (12 self)
Add to MetaCart
We propose a simple algorithm to select group generators suitable for pairing-based cryptosystems. The selected parameters are shown to favor implementations of the Tate pairing that are at once
conceptually simple and very efficient, with an observed performance about 2 to 10 times better than previously reported implementations.
- PROCEEDINGS ASIACRYPT 2001, LNCS 2248, SPRINGER-VERLAG 2001, 67–86 , 2001
"... The Advanced Encryption Standard (AES) provides three levels of security: 128, 192, and 256 bits. Given a desired level of security for the AES, this paper discusses matching public key sizes
for RSA and the ElGamal family of protocols. For the latter both traditional multiplicative groups of finit ..."
Cited by 43 (4 self)
Add to MetaCart
The Advanced Encryption Standard (AES) provides three levels of security: 128, 192, and 256 bits. Given a desired level of security for the AES, this paper discusses matching public key sizes for RSA
and the ElGamal family of protocols. For the latter both traditional multiplicative groups of finite fields and elliptic curve groups are considered. The practicality of the resulting systems is
commented upon. Despite the conclusions, this paper should not be interpreted as an endorsement of any particular public key system in favor of any other.
- IEEE Trans. Inform. Theory , 1988
"... Abstract-A new knapsack-type public key cryptosystem is introduced. The system is based on a novel application of arithmetic in finite fields, following a construction by Bose and Chowla. By
appropriately choosing the parameters, one can control the density of the resulting knapsack, which is the ra ..."
Cited by 40 (0 self)
Add to MetaCart
Abstract-A new knapsack-type public key cryptosystem is introduced. The system is based on a novel application of arithmetic in finite fields, following a construction by Bose and Chowla. By
appropriately choosing the parameters, one can control the density of the resulting knapsack, which is the ratio between the number of elements in the knapsack and their sue in bits. In particular,
the density can be made high enough to foil “low-density ” attacks against our system. At the moment, no attacks capable of “breaking ” this system in a reasonable amount of time are known. I.
, 2004
"... Abstract. We present an implementation of elliptic curves and of hyperelliptic curves of genus 2 and 3 over prime fields. To achieve a fair comparison between the different types of groups, we
developed an ad-hoc arithmetic library, designed to remove most of the overheads that penalize implementati ..."
Cited by 36 (5 self)
Add to MetaCart
Abstract. We present an implementation of elliptic curves and of hyperelliptic curves of genus 2 and 3 over prime fields. To achieve a fair comparison between the different types of groups, we
developed an ad-hoc arithmetic library, designed to remove most of the overheads that penalize implementations of curve-based cryptography over prime fields. These overheads get worse for smaller
fields, and thus for larger genera for a fixed group size. We also use techniques for delaying modular reductions to reduce the amount of modular reductions in the formulae for the group operations.
The result is that the performance of hyperelliptic curves of genus 2 over prime fields is much closer to the performance of elliptic curves than previously thought. For groups of 192 and 256 bits
the difference is about 14 % and 15 % respectively. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=170392","timestamp":"2014-04-19T13:51:39Z","content_type":null,"content_length":"37281","record_id":"<urn:uuid:6645e63b-7eda-4b20-a645-0f9ca8f72f12>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00281-ip-10-147-4-33.ec2.internal.warc.gz"} |
Infinite electrical networks and possible connections with LERW
up vote 6 down vote favorite
I've been exposed to various problems involving infinite circuits but never seen an extensive treatment on the subject. The main problem I am referring to is
Given a lattice L, we turn it into a circuit by placing a unit resistance in each edge. We would like to calculate the effective resistance between two points in the lattice (Or an asymptotic
value for when the distance between the points gets large).
I know of an approach to solve the above introduced by Venezian, it involves superposition of potentials. An other approach I've heard of, involves lattice Green functions (I would like to read more
about this). My first request is for a survey/article that treats these kind of problems (for the lattices $\mathbb{Z}^n$, Honeycomb, triangular etc.) and lists the main approaches/results in the
My second question (that is hopefully answered by the request above) is the following:
I noticed similarities in the transition probabilities of a Loop-erased random walk and the above mentioned effective resistances in $\mathbb{Z}^2$. Is there an actual relation between the two? (I
apologize if this is obvious.)
add comment
3 Answers
active oldest votes
The book by Peres and Lyons, freely available here http://php.indiana.edu/~rdlyons/prbtree/prbtree.html, should give you much information at least for the probability part
up vote 3 down vote of the question.
Thanks! It looks like a great reference. – Gjergji Zaimi Sep 2 '10 at 9:06
add comment
If you are still interested in this, you may want to have a look in Section 6 of http://www.sciencedirect.com/science/article/pii/0095895690900658 by Thomassen. He proves for example
up vote 2 that the effective resistance between adjacent vertices of $Z^2$ is 1/2. I don't think there is mention to LERW though.
down vote
add comment
A Google Scholar search for "random walks and electrical networks" will bring up a text by Doyle and Snell that is now available online; for additional references check the citations.
up vote 1 down
Thanks for the reference, I have browsed through that monograph before. I think the problem I discuss above wasn't solved at the time Doyle and Snell published their monograph, also
they don't treat any relations with LERW. – Gjergji Zaimi Feb 1 '10 at 11:02
1 You may find Pemantle's article in the cites interesting: arxiv.org/abs/math/0404043 – Steve Huntsman Feb 1 '10 at 11:09
I'll look into it, thanks. – Gjergji Zaimi Feb 1 '10 at 12:23
add comment
Not the answer you're looking for? Browse other questions tagged reference-request pr.probability mp.mathematical-physics ca.analysis-and-odes or ask your own question. | {"url":"http://mathoverflow.net/questions/13649/infinite-electrical-networks-and-possible-connections-with-lerw","timestamp":"2014-04-17T07:25:52Z","content_type":null,"content_length":"62622","record_id":"<urn:uuid:5cb628a4-655e-4bb1-8321-d233926fc23d>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00126-ip-10-147-4-33.ec2.internal.warc.gz"} |
Maxime Bergeron
Future talks
The topology of nilpotent character varieties, GEometric structures And Representation varieties Junior Retreat, University of Michigan at Ann Arbor, May 2014.
Low dimensional topology and geometric group theory session, Canadian Mathematical Society Summer Meeting, Winnipeg, June 2014.
Symplectic geometry and equivariant topology special session, Canadian Mathematical Society Summer Meeting, Winnipeg, June 2014.
Some past talks
The topology of nilpotent representations in reductive groups and their maximal compact subgroups, Séminaire de géométrie analytique, Institut de Recherche Mathématique de Rennes, February 20, 2014.
The topology of nilpotent representations in reductive groups and their maximal compact subgroups, Group Actions & Dynamics Seminar, University of Texas at Austin, October 30, 2013.
The topology of nilpotent representations in reductive groups and their maximal compact subgroups, Topology and Related Seminars, University of British Columbia, October 17, 2013.
Morse theory indomitable, Graduate Student Topology Seminar, University of British Columbia, November 30, 2012.
CAT(0) geometry, cubical complexes and metamorphic robots, Graduate Student Topology Seminar, University of British Columbia, October 12, 2012.
Groups, zigzags and expanders, Working Seminar in Analytic Group Theory, McGill University, March 27, 2012.
Invariant measures, expanders and property (T), Ergodic Group Theory Day, McGill University, December 7, 2011.
Jewels and algebra, Seminars in Undergraduate Mathematics in Montreal (SUMM), Concordia University, February, 2011.
Some conferences I have attended without giving a talk
Kervaire seminar: Quadratic forms, lattices and applications, CUSO doctoral school, Les Diablerets, March 2014.
Low Dimensional Topology, Knots and Orderable Groups, Centre International de Rencontres Mathématiques de Luminy, July 2013.
Low Dimensional Topology and Geometry in Toulouse, Université Paul Sabatier, June 2013.
The Topology of 3-Dimensional Manifolds, Centre de Recherches Mathématiques de Montréal, May 2013.
Coxeter Groups Meet Convex Geometry, Laboratoire de Combinatoire et d'Informatique Mathématique de Montréal, August 2012.
Cornell Topology Festival, Cornell University, May 2012.
3-Manifolds, Artin Groups and Cubical Geometry, CUNY Graduate Center in New York City, August 2011. | {"url":"http://www.math.ubc.ca/~mbergeron/seminars.html","timestamp":"2014-04-18T18:12:24Z","content_type":null,"content_length":"5658","record_id":"<urn:uuid:8dd80a2d-25e0-4b44-81e4-7681ddf8c776>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00294-ip-10-147-4-33.ec2.internal.warc.gz"} |
Maplewood, NJ Algebra Tutor
Find a Maplewood, NJ Algebra Tutor
...I've also used discrete mathematics in computer science courses for understanding the ways of dealing with integer computation. I've been a C++ developer for over 4 years, working on a
financial services application. I have programmed in both objective C and C for 5 years.
13 Subjects: including algebra 2, algebra 1, physics, trigonometry
...My minor during my university studies was East Asian Studies, and I have earned an NJ Certificate of Eligibility with Advanced Standing through my university's teacher preparation program. I
teach with an emphasis on communication and communicative confidence rather than on strict grammatical ru...
37 Subjects: including algebra 1, English, reading, geometry
...I am a strong believer that the best way to learn is by doing, so I usually begin with a sample problem, then try to get my student to understand every aspect of the solution process,
including the reasons for each step towards the solution. I then introduce similar problems to the student and e...
9 Subjects: including algebra 1, algebra 2, geometry, precalculus
...While working as in the school district, I was ABA trained and have current CPR certification. In the past I worked as an IT consultant and mainly this involved setting up computers (Mac,
Linux and Windows) as well as wireless or wired networks. I also have several instances where networks had a connectivity issue to peripheral devices and can troubleshoot them via administrator
19 Subjects: including algebra 1, English, reading, literature
...Many of my students are from Kindergarten to 6th grade. I tutor them with reading and writing as well as Math. I have had to teach children to write, pronounce vowels (with emphasis on long
and short sounds). Teach them to associate words with pictures, help them with their verbs and nouns, adverbs, punctuations and more.
47 Subjects: including algebra 1, chemistry, reading, writing
Related Maplewood, NJ Tutors
Maplewood, NJ Accounting Tutors
Maplewood, NJ ACT Tutors
Maplewood, NJ Algebra Tutors
Maplewood, NJ Algebra 2 Tutors
Maplewood, NJ Calculus Tutors
Maplewood, NJ Geometry Tutors
Maplewood, NJ Math Tutors
Maplewood, NJ Prealgebra Tutors
Maplewood, NJ Precalculus Tutors
Maplewood, NJ SAT Tutors
Maplewood, NJ SAT Math Tutors
Maplewood, NJ Science Tutors
Maplewood, NJ Statistics Tutors
Maplewood, NJ Trigonometry Tutors
Nearby Cities With algebra Tutor
Cranford algebra Tutors
Hillside, NJ algebra Tutors
Irvington, NJ algebra Tutors
Livingston, NJ algebra Tutors
Maplecrest, NJ algebra Tutors
Millburn algebra Tutors
Orange, NJ algebra Tutors
Roselle, NJ algebra Tutors
Scotch Plains algebra Tutors
South Orange algebra Tutors
Springfield, NJ algebra Tutors
Summit, NJ algebra Tutors
Union Center, NJ algebra Tutors
Union, NJ algebra Tutors
Vauxhall algebra Tutors | {"url":"http://www.purplemath.com/maplewood_nj_algebra_tutors.php","timestamp":"2014-04-17T13:30:08Z","content_type":null,"content_length":"24152","record_id":"<urn:uuid:d126c0be-f8e1-45f8-96ba-6e2b6c05b3aa>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00524-ip-10-147-4-33.ec2.internal.warc.gz"} |
Bel Tiburon, CA Algebra 1 Tutor
Find a Bel Tiburon, CA Algebra 1 Tutor
...I enjoyed helping my classmates with their challenges as math has always been one of my favorite subjects, and I continued to help my classmates during my free time in college. Now I am happy
to become a professional tutor so I can help more students. I have a B.A. in molecular and cell biology from University of California at Berkeley.
22 Subjects: including algebra 1, calculus, geometry, statistics
...I am very effective in helping students improve their test scores: in a past testing year all of my SAT students scored 700 and above! Some of them were scoring in mid-500, when I started
working with them. I bring my full attention and dedication to the students that I work with.
14 Subjects: including algebra 1, calculus, statistics, geometry
I just recently graduated from the Massachusetts Institute of Technology this June (2010) with a Bachelors of Science in Physics. While I was there, I also took various Calculus courses and
courses in other areas of math that built on what I learned in high school. I'm a definite believer in the value of knowing the ways the world works, and the value of a good education.
6 Subjects: including algebra 1, physics, calculus, algebra 2
...Before that, I was writing C code. I am an expert with C++. The first thing to learn about C++ is some basic syntax, followed by how objects are created and deleted and the scope of objects.
The next thing to learn is how object pointers work, followed by more advanced things like polymorphism.
12 Subjects: including algebra 1, calculus, algebra 2, geometry
I am currently in my final semester at Bloomsburg University. I am student teaching right now. My classes are 8th grade physical science, as well as high school physics. I have a strong
background in math as well, because I have had to take a lot of math classes to earn my physics degree.
6 Subjects: including algebra 1, physics, geometry, trigonometry
Related Bel Tiburon, CA Tutors
Bel Tiburon, CA Accounting Tutors
Bel Tiburon, CA ACT Tutors
Bel Tiburon, CA Algebra Tutors
Bel Tiburon, CA Algebra 2 Tutors
Bel Tiburon, CA Calculus Tutors
Bel Tiburon, CA Geometry Tutors
Bel Tiburon, CA Math Tutors
Bel Tiburon, CA Prealgebra Tutors
Bel Tiburon, CA Precalculus Tutors
Bel Tiburon, CA SAT Tutors
Bel Tiburon, CA SAT Math Tutors
Bel Tiburon, CA Science Tutors
Bel Tiburon, CA Statistics Tutors
Bel Tiburon, CA Trigonometry Tutors | {"url":"http://www.purplemath.com/Bel_Tiburon_CA_algebra_1_tutors.php","timestamp":"2014-04-17T10:51:32Z","content_type":null,"content_length":"24431","record_id":"<urn:uuid:ef042cad-92f5-43d2-8eac-3891ac72873e>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00057-ip-10-147-4-33.ec2.internal.warc.gz"} |
Basketball Teams as Strategic Networks
We asked how team dynamics can be captured in relation to function by considering games in the first round of the NBA 2010 play-offs as networks. Defining players as nodes and ball movements as
links, we analyzed the network properties of degree centrality, clustering, entropy and flow centrality across teams and positions, to characterize the game from a network perspective and to
determine whether we can assess differences in team offensive strategy by their network properties. The compiled network structure across teams reflected a fundamental attribute of basketball
strategy. They primarily showed a centralized ball distribution pattern with the point guard in a leadership role. However, individual play-off teams showed variation in their relative involvement of
other players/positions in ball distribution, reflected quantitatively by differences in clustering and degree centrality. We also characterized two potential alternate offensive strategies by
associated variation in network structure: (1) whether teams consistently moved the ball towards their shooting specialists, measured as “uphill/downhill” flux, and (2) whether they distributed the
ball in a way that reduced predictability, measured as team entropy. These network metrics quantified different aspects of team strategy, with no single metric wholly predictive of success. However,
in the context of the 2010 play-offs, the values of clustering (connectedness across players) and network entropy (unpredictability of ball movement) had the most consistent association with team
advancement. Our analyses demonstrate the utility of network approaches in quantifying team strategy and show that testable hypotheses can be evaluated using this approach. These analyses also
highlight the richness of basketball networks as a dataset for exploring the relationships between network structure and dynamics with team organization and effectiveness.
Citation: Fewell JH, Armbruster D, Ingraham J, Petersen A, Waters JS (2012) Basketball Teams as Strategic Networks. PLoS ONE 7(11): e47445. doi:10.1371/journal.pone.0047445
Editor: Stefano Boccaletti, Technical University of Madrid, Italy
Received: June 21, 2012; Accepted: September 17, 2012; Published: November 6, 2012
Copyright: © 2012 Fewell et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction
in any medium, provided the original author and source are credited.
Funding: This research was supported by the NSF under Grant No. BECS-1023101 to DA, by a grant from the Volkswagen Foundation under the program on Complex Networks to DA, by an ASU Exemplar award to
JHF, and by NSF CSUMS grant No. DMS- 0703587. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
Capturing the interactions among individuals within a group is a central goal of network analyses. Useful depictions of network structure should provide information about the networks purpose and
functionality. But how do network attributes relate to functional outcomes at the group and/or individual levels? A useful context to ask this question is within small team networks. Teams occur
everywhere across the broad array of biological societies, from cooperatively hunting carnivores to social insects retrieving prey [1]–[4], and are ubiquitous in human organizations. We define teams
as groups of individuals working collaboratively and in a coordinated manner towards a common goal be it winning a game, increasing productivity, or increasing a common good [5]. Within teams,
individuals must coordinate across different roles or tasks, with their performance outcomes being interdependent [4]–[6]. The success of the team is rarely a simple summation of the tools each
individual brings. Instead it must emerge from the dynamic interactions of the group as a whole [7].
How can we capture the relevance of these interactions to team function? Because teams are dynamic systems, it makes sense to use network analyses to approach this problem. The game of basketball is
based on a series of interactions, involving a tension between specialization and flexibility; players must work together to move the ball into the basket while anticipating and responding to the
opposing team. Thus, plays that begin as set strategies evolve quickly into dynamic interactions [8]. Unlike many sports, the game does not revolve around a series of dyadic interactions (eg tennis,
baseball) or a summation of individual efforts (track and field); it is dependent on a connected team network [9].
The dynamic between within-group cooperation and conflict, and group versus individual success, is an inherent feature of both human and biological social systems. This tension, exemplified in the
distribution of shooting opportunities in a game across players, or by salary dispersion inequities in a team or organization, is a fundamental issue across cooperative systems [6], [10], [11]. The
dynamic between specialization and flexibility also appears across systems. In prides of lions, for example, different females assume the roles of driving or flanking prey [1]. However, in both
contexts individuals must flexibly change positions in a rapidly changing game. Finally, like almost all cohesive groups, teams must compete with other teams, and their success/failure is shaped by
their ability to respond to those challenges. Unlike a lion pride or business organization, however, the success and failure of specific network interactions for a basketball team can be easily
measured iteratively and in real time, as the team scores points or loses the ball to a superior defense.
To evaluate basketball teams as networks, we examined the offensive ball sequences by National Basketball Association (NBA) teams during the first round of the 2010 playoffs. We graphed player
positions and inbound/outcomes as nodes, and ball movement among nodes (including shots to the basket) as edges. From the iterated offensive 24 second clocks, we recorded sequences of ball movement
of each of the 16 play-off teams across two games. We used the compiled data to first ask whether we can capture the game of basketball through a transition network representing the mean flow of the
ball through these sequences of play (a stochastic matrix), and secondly whether individual teams have specific network signatures. We then examined how different network metrics may be associated
with variation in actual play strategy. We asked whether teams vary strategically in centrality of ball distribution, such that some teams rely more heavily on a key player, such as the point guard,
to make decisions on ball movement. We used degree centrality to compare teams using this strategy with those in which the ball is distributed more evenly. We similarly used clustering analyses to
examine relative connectedness among players within teams and to ask whether teams differentially engaged players across multiple positions. We also asked whether ball movement rate, measured as path
length and path flow rate, could capture the perceived dichotomy of teams using dominant large players, usually centers, versus small ball teams that move the ball quickly across multiple players
We were interested in whether network metrics can usefully quantify team decisions about how to most effectively coordinate players. We examined two network metrics that we hypothesized might capture
different offensive strategies. One is to move the ball in a way that is unpredictable and thus less defensible. To measure network unpredictability we calculated team entropy, applying Shannons
entropy to the transition networks as a proxy for the unpredictability of individual passing behavior among team players. Another, not mutually exclusive, strategy is to capitalize on individual
expertise by moving the ball towards players with high probability of shooting success. In a sense, this strategy reflects a coordinated division of labor between ball distributors early in the play,
transitioning to shooting specialists. We looked for evidence of this strategy using a metric of uphill/downhill flux, which estimates the average change in potential shooting percentage as the ball
moves between players in relation to their differential percent shooting success. Uphill/downhill and team entropy both recognize the need for coordination within a team, but they emphasize different
aspects of network dynamics; one capitalizes on individual specialization while the other emphasizes team cohesion.
We recorded and analyzed transition networks for the 16 teams in televised games of the 2010 NBA first round play-offs. The sequential ball movement for each teams offensive plays was recorded across
two games for each pair; games were picked haphazardly a priori, not based on outcome (analyzed games and outcomes in Table 1). For analysis, the five starting players for each team were assigned
position numbers from 1–5, in the order of: (1) Point Guard; (2) Shooting Guard; (3) Small Forward; (4) Power Forward; (5) Center. All offensive plays with at least three of the five starters on the
floor were included (player list in Table S1. This allowed us to equate positions with specific players within each team and to use player positions as nodes. Preliminary analyses indicated that
offensive play paths were fairly consistent between the two games analyzed for the majority of teams, so sequences were pooled.
Table 1. Analyzed games and outcomes.
For initial analyses, all possible start-of-play (inbounds, rebounds and steals) and outcomes (successful/failed two point or three point shots, fouls, shooting fouls with different success outcomes,
steals and turnovers) were recorded as nodes. Data per offensive play generated a sequential pathway [9], [13]. The cumulative paths throughout the game were combined to generate a weighted graph of
ball movement with possession origin, players and possession outcomes as nodes and ball movement between those nodes as directed edges.
Although we chose games haphazardly, the differential in total points in analyzed games generally reflected outcomes for the play-off round (Table 1). The primary exception was the two Atlanta Hawks/
Milwaukee Bucks games, in which the Bucks beat the Hawks in the series, but were defeated by a mean of 12.5 points during the two focal games. In the analyzed Dallas Mavericks/San Antonio Spurs
games, Dallas won by a mean differential of 6 points, but the Spurs beat the Mavericks in the play-off series by a mean differential of 0.5; wins were split across the two games analyzed (Games 5 and
Network Analyses
We generated weighted graphs from the cumulative transition probabilities. When all data were analyzed, almost all nodes became connected, making it difficult to differentiate across graphs.
Therefore, we generated a series of weighted graphs at increasing cut-off weights from the 30th to 70th percentiles (with the 30th percentile graphs highlighting only the most frequently seen
transitions). This allowed us to analyze changes in network structure as we move from the most likely links between players to those that were least frequent. We used the entire matrix of transitions
for each team to perform structural network analyses [12], [14], adapted for offensive plays in a basketball game. Metrics included: path length, path flow rate, degree centrality, clustering
coefficient, individual and team entropy, individual and team flow centrality, shooting efficiency flux.
Path length and path flow rate compared the number of passes and the speed of ball movement involved in team play. Path length simply included the number of passes between players per play, ignoring
inbound and outcome nodes. Paths included all between-player edges, such that a given player could be involved twice or more across the path. Path flow rate was calculated as the number of edges per
unit time from inbound to shot clock time at the end of the play. To calculate degree centrality we used the weighted graphs from iterated offensive plays across the two games. However, we aggregated
outcome data into two categories of shoot and other, to reduce weighting bias from multiple outcome nodes. Degree was first calculated per position as the weighted sum of total out-edges per player.
The relative distributions of player degrees were then calculated across the graph, such that a homogeneous graph (connectivity distributed most equally across all players) has zero degree
centrality. For a weighted graph with weights summing to 1 and a vertex of maximal degree the degree centrality is then:(1)
To calculate team entropy, we first determined individual player entropy. For this metric we excluded inbound passes because of the strong weight of the inbound edge. We included outcome, because the
possibility of shooting the ball represents a decision point contributing to uncertainty of ball movement. As with centrality, outcomes were collapsed into two node categories of shooting or not
shooting. We used Shannons entropy [15], , to measure the uncertainty of ball transitions between any player or outcome.
We then combined player entropies to determine entropy of the whole team. There are multiple ways to calculate network entropy. One possibility is to use a simple averaging of player entropies. A
second is Markov chain entropy, which incorporates the conditional probability of any given player moving the ball to any other player, conditioned on the probability that the given player has the
ball. However, from the opposing teams perspective, the real uncertainty of team play is the multiplicity of options across all ball movements rather than just across players. We thus calculated a
whole-network or Team Entropy from the transition matrix describing ball movement probabilities across the five players and the two outcome options.
We used individual flow centrality to characterize player/position importance within the ball distribution network [16]. Individual player flow centrality was calculated as the number of passing
sequences across all plays in which they were one of the nodes, normalized by the total number of plays. We also calculated a more restricted flow centrality that included only player appearances as
one of the last three nodes before an outcome. This allowed us to focus on the set-up phase for a scoring drive and the actual scoring attempt. We compared this more restricted flow centrality for
successful versus unsuccessful plays; this success/failure ratio was considered as a measure of the utility of an individual player to team success.
To capture a teams ability to move the ball towards their better shooters, we developed a metric we call uphill/downhill flux, defined as the average change in potential shooting percentage per pass.
A team that has a high positive uphill/downhill flux moves the ball consistently to their better shooters; a team that with a negative value moves the ball on average to the weaker shooters. The
latter can happen if the ball distributor (e.g. the Point Guard) is also the best shooter on the team. Letting be the shooting percentages for players and and the probability of a pass from player to
player , we define the uphill/downhill flux as:(2)
Finally, we wanted to compare teams in terms of relative player involvement, such that we can differentiate those teams for which most players are interconnected from those that rely consistently on
a defined subset for offensive plays. One way to do so is to look for the occurrence of triangles, or connected 3-node subgraphs within the network. Teams with higher connectedness will contain more
cases in which sets of 3 players have a link to each other; the maximum number of these triangles in a group of 5 players is 10. The clustering coefficient measures the number of triangles in a
network as a percentage of all possible triangles. However, a single evaluation of this metric is again problematic. If we use all ball movement data, all nodes become connected to all other nodes,
and the clustering coefficient is uniformly high. Additionally, it is important to remember that the triangles in these networks are association links and not necessarily sequences of plays. Hence we
decided that the most meaningful measure to characterize the association structure of the ball movements was to calculate the clustering coefficients for undirected unweighted graphs across the
different cutoffs of the cumulative weight, beginning with the 30 percentile when triangles first appear. This allowed us to compare teams with consistently high clustering to those that showed
triangles only when less frequent links were included.
Results and Discussion
The first question posed by this study was how well a network approach can capture the game of basketball from a team-level perspective. We constructed transition networks (i.e. stochastic matrices)
as first-order characterization of team play style for each team individually and for the pooled set of all observed transitions across all teams. Because even a single game generates a rich dataset,
we imposed thresholds to clarify the dominant transitions, highlighting from most to least frequent the minimal set of transitions representing a particular percentile of all ball movements. At the
60th percentile, players in all but one network were connected to at least one other player (the San Antonio Spurs Center was disconnected) and all teams had an edge to at least one outcome,
generally success. This matched the expectation that these are elite and cohesive teams and gave us a starting point for comparative analyses (weighted graphs for all teams across the 30th to 70th
percentile thresholds shown in Supplemental Figures S1 and S2).
To look at the NBA as a whole, we combined the transition data across all teams in a compiled network (Figure 1). As a note, although it is tempting to relate the structure of play to physical
location on the court, it is important to remember that these data capture passing probabilities independently of spatial information. In this network, as in an NBA game, the ball moved most
frequently from the inbound pass to the Point Guard and was rebounded either by the Center or Power Forward. It was primarily distributed from the Point Guard to other players, with most likely
distributions to the Shooting Guard or Power Forward. Other players generally distributed back to the Point Guard, with lower weights to edges connecting the Shooting Guard, Power Forward and Small
Forward. The only edge to an outcome at this weighting was from the Power Forward to a successful shot. This NBA team thus showed a star-shaped pattern of ball movement controlled centrally by the
Point Guard, with a division of labor across positional roles. Transitions from other players were most likely to be towards the Point Guard. The Shooting Guard occupied a secondary leadership role
by creating connections between the Point Guard and the Power Forward who functioned as the primary shot-taker. The role of the Center was rebounding and redistribution to the Point Guard.
The importance of the Point Guard in distributing the ball identifies this as the primary leadership position in the team network. If we define leadership as the relative importance of any player or
position in the network, we can capture this quantitatively using individual flow centrality, or the proportion of paths (offensive plays) involving a particular node [16]. We compared flow
centrality across positions from all data (ANOVA; F = 42.02; P = ; df = 4, n = 80 (Table S2); and for the three players contacting the ball before a shot (F = 36.12; P = ). As expected from the
network graphs, the Point Guard position had the highest mean centrality across all positions and was highest for the majority of teams (Figure 2). Flow centrality was conversely lowest for the
Center, with intermediate and similar values for other positions. Two notable (but unsurprising) exceptions to this rule were the Cleveland Cavaliers, for which the Small Forward had high flow
centrality, and the Los Angeles Lakers, for which the flow centrality of the Shooting Guard matched that of the Point Guard. These deviations match leadership roles within these teams by LeBron James
and Kobe Bryant respectively. It will be interesting to compare their shifting network roles as their teams have changed; one moved to a team with an increased number of skilled offensive players
(and the winning team in 2012), and the other’s team recently gained a new point guard (Steve Nash) known as an offensive strategist.
Team Network Graphs
How do individual teams vary around this centralized model? The star pattern was most exemplified by the Bulls (Figure 3), who inbound only to the Point Guard at , and for which most passes were
between the Point Guard and other players. Their high degree centrality is illustrated by considering that removing the point guard node would cause all other player nodes to be completely
disconnected. A similar disconnect would happen to five of the sixteen teams at 60% weighting and nine teams at 50% weighting (Figure S1 and S2). There are trade-offs to a highly centralized team
between clarity of roles and flexibility of response. Lack of player connectedness may allow the defense to exploit a predictable weakness in the network by moving defenders off disconnected players
to double team.
Deviations from the Point-Guard centered star pattern confirmed known team playing styles (Figure 3). In the 2010 Cleveland Cavaliers network the Small Forward was a highly weighted distributor of
the ball, as expected by his high flow centrality (Figure 2). He also shot the ball successfully at an edge weight close to the Power Forward. Thus the network visualization again picked up Le Bron
James combined skills in ball distribution and shooting. However, perhaps the most important deviation from a centralized network strategy appeared in the weighted graphs of the Los Angeles Lakers.
Even at low weighting, their network included multiple between-player edges beyond those connecting to the Point Guard. One way to analyze the impact of these additional edges is by quantifying the
frequency of triangles within the network [17] via a clustering coefficient [14]. Figure 4 shows the cumulative clustering coefficients of each team from the 30th to 70th percentile weighting. The
Lakers had the highest cumulative clustering coefficient, primarily because they had high connectedness in their most frequent plays. In a highly clustered network like the Lakers, passing decisions
are made by multiple players, expanding the possible paths that must be considered by the opposing team. In the 2010 first round only two other teams showed comparable cumulative clustering: the
Boston Celtics and the San Antonio Spurs. Like the Lakers, the Celtics - who also reached the finals - built triangles even at relatively low weighting. The Spurs were unusual in that they had low
connectedness when considering their most dominant edges, but high clustering when less frequent passes were included in the analysis (i.e. at the 70th percentile).
The network concept of triangles as a fully connected subgroups translates well to the Lakers highly discussed triangle offense. Jackson and Winter [8] define the triangle offense as a spatial
concept, in which a group of three players is set up on one side of the court connecting to a balanced two-man set on the other side. It is designed to distribute players across the floor so that
they can be used interchangeably, depending on open lanes and defense. In this strategy the Point Guard becomes less central to the decision process, because all players have the ability to make
decisions about ball distribution depending on immediate context. Thus the triangle offense can be considered as a network strategy that can be visualized in the Lakers weighted graph.
Team Network Signatures: Degree Centrality and Entropy
An important question is whether differences in the weighted team graphs can be captured more quantitatively by network metrics. As discussed above, a primary visual distinction in our weighted
graphs was between teams using a central player to distribute the ball, and those moving the ball across multiple players. Our calculated degree centralities in general matched our visual networks (
Table 2). The data were not definitive, however, in whether less centralized teams had an advantage in the 2010 play-offs. Five of the 8 winning teams had lower degree centralities than opponents,
but overall rankings of centrality showed no pattern of win/loss.
Like degree centrality, entropy should be strongly influenced by the extent to which multiple players distribute the ball. Degree centrality and team entropy were negatively correlated (Pearson
product moment correlation = −0.6; p<0.003; n = 16), but they captured somewhat different aspects of ball distribution, because team entropy takes into account probabilities outside the network
topology. Variation in team entropy was more closely connected to individual team success/failure; winners in 6 of the 8 first round match-ups had higher team entropy, and when entropies were ranked
from highest to lowest, 5 of the 8 highest entropies were for winning teams. The play-offs only provide 8 match-ups, too small a sample size to make a statistically meaningful claim (and it would be
a simplistic game that allowed a predictive single metric). However, our analyses do suggest that these combined network metrics have value in: (1) capturing variation in team offense, and (2)
supporting the hypothesis that complex and unpredictable ball distribution pattern is an important component of team strategy. Indeed, the 2010 Lakers and Celtics teams were arguably built around
this principle. The highest entropies overall were achieved by the Lakers and Celtics, and the Lakers simultaneously had the lowest degree centrality. These assertions would be tested by the
subsequent play-off seasons, one in which a team known for its dominant forward was successful (2011 Dallas Mavericks) and the next in which the winning team was built around the multi-player model
(2012 Miami Heat).
Uphill-downhill Flux and Passing Rate
The Dallas Mavericks, who lost in the first round in 2010 but won the title in 2011, are an important counter-point. Their strategy was clear; move the ball consistently to their best shooter. To
capture this quantitatively, we developed a new metric that uses flow flux to compare individual player flow centrality with calculated shooting percentage for each player across the two games.
Uphill/downhill flux measures the degree to which teams move the ball towards versus away from players relative to their differential shooting success (Figure 5). High uphill/downhill indicates a
different set of priorities in ball distribution than entropy. It focuses on playing to strengths by separating the roles of ball distribution and scoring, moving from distributors to shooters.
Unsurprisingly, the 2010 Mavericks had the highest uphill/downhill flux of all teams in the play-offs. Success in this strategy was not connected consistently to team success within our data set.
However, it is notable that only three teams had a combination of both higher uphill/downhill and higher entropy than their opponents. Two of the three were the Lakers and the Celtics; the third was
the Heat.
Our final team-level metrics were path length and flow rate (speed of ball movement through the path; Table 3). Recently, there has been increased interest in small ball teams, which distribute the
ball quickly across players. Small ball has been hypothesized to allow teams to achieve success beyond what would be expected based on individual player skill levels. The exemplar small ball team in
past years has been the Phoenix Suns [18]. However, in 2009–2010 they transitioned away from this approach. We predicted a correlation between path length and flow rate, such that some teams
distribute the ball quickly and across multiple players, but surprisingly little variation in path length or ball movement speed showed in our data.
Player Value
A question in evaluating any organizational network is the relative value of its individual members [11]. Duch et al. [16] used individual flow centrality to show that higher paid players in soccer
teams are in fact strong contributors to ball movement during a game. We asked a similar question for basketball, by quantifying player involvement in paths with successful versus unsuccessful
outcomes. For our analyses we used only those sequences with at least 3 of the 5 starting players on the floor. We matched each player to position and excluded any sequences in which starters clearly
rotated into a different position than assigned. This allowed us to analyze individual player contribution by position, using flow centrality analyses to determine the relative frequency by which any
player was involved in (1) all, (2) only successful, and (3) only unsuccessful plays. We used the ratio of (2) to (3) to determine whether we could quantify player “value” beyond apparent dominance
in the game (Table 4).
We found an interesting positional bias in the data, with the Center often having the highest success/failure ratio. In contrast, Point Guards tended to have success/failure ratios at or below 1.0.
Although the ratio measure should statistically control for frequency effects, we suggest this metric might be biased mechanistically by relative player involvement. The low flow centrality of the
most highly utilized position reflects the argument that high frequency player contributions become negatively affected by exposure. The nonlinear relationship between player involvement and success
in our metrics may thus illustrate the price of anarchy [13], the expectation that maximizing gain within any given offensive play can ultimately jeopardize overall game efficiency. If entropy is
valuable, as our data suggest, then moving the ball frequently to a specific player or position is costly, because it allows the opposition to adjust their defense accordingly.
We have presented a network structure analysis of basketball teams in the context of team coordination and strategy. As a starting point, we applied network-level metrics to quantitatively measure
fundamental components of team offensive strategy, moving currently available individual player metrics (examples at NBA.com). The study involved more than a thousand ball movements and typically
more than one hundred sequences or paths for each team. This dataset allowed us to capture the game of basketball as a network. Because our team comparisons were limited to the pairs in the first
round of the play-offs, correlations between game outcome and specific aspects of network structure could not definitively test the specific hypotheses suggested. Answering the question of how
network dynamics contribute to successful team strategy will be more complex than a single network variable can capture. We also expect intransitivity across games and opponents, such that the
success of emphasizing any given strategy is dependent on the behavior of the opposing team. However our data do suggest that certain metric combinations, particularly entropy, centrality, and
clustering, are relevant components of team strategy.
One of the advantages of this beautiful game is the wealth of available data. We encourage the expansion of both the network toolbox and the datasets analyzed. Analyses across a season will help
determine whether network structures for a given team are stable or whether they respond flexibly to different defense strategies. Dissecting network shifts within games (e.g. the final quarter or as
point differentials change) could help explore game dynamics. Analyses across multiple seasons could track the development of team cohesion. It would also be extremely useful to connect network with
spatial and temporal models; this may not be practical with current data acquisition methods, but recent publications [19] suggest that automated ball tracking in basketball games is becoming more
Beyond basketball, this approach may act as a template for evaluating other small team collaborations. Although the specific network metrics will vary across the disparate contexts in which teams
occur, the general approach of analyzing network interactions and function is robust [14]. Teams take multiple approaches to communication and leadership, from centralized to decentralized, from more
rigidly bureaucratic to flexible, and from assigned roles to emergent. Each of these organizational strategies corresponds with a specific network model. As one example, our finding that the more
successful teams distributed decision making about ball movement beyond a centralized leader is mirrored in models of business team structure. Network assessments suggest that business teams with
mixed leadership roles optimize performance relative to highly centralized or highly distributed teams [6]. It would be interesting to see how the network measures used here apply to other small
teams that are tasked differently, such as research groups organized around innovation, remote military teams on assignment, or intelligence agencies tasked with pattern recognition. The application
could also be expanded to animal teams in which roles develop naturally rather than through external assignment, and for which team success/failure has a direct connection to fitness. For example,
the ontogeny of team coordination is a general phenomenon. In hunting teams of lions, chimpanzees and wild dogs, new members can require years of practice to achieve coordination with the group [1]–
[3]. These discussions highlight the potential of this approach and its applicability across the broad array of contexts in which cohesive teams are found.
Supporting Information
Weighted graphs of ball movement for East Coast teams. Red edges represent transition probabilities summing to the percentile indicated in the column header.
Weighted graphs of ball movement for all West Coast teams. Red edges represent transition probabilities summing to the percentile indicated in the column header.
Starting players and position assignments for the 2010 NBA playoffs, first round. Substitutes are in parentheses.
Player flow centrality. Flow centrality (FC) is calculated as the proportion of all plays in which a player was involved. Flow centrality based on outcome is calculated as the proportion of
successful (FC3 S) or failed (FC3 F) plays in which a player appears as one of the last 3 player possessions in the sequence.
We thank Alex Gutierrez and Mark Goldfarb for their help in data collection and analysis, and Jon Harrison and an anonymous reviewer for comments on the manuscript.
Author Contributions
Conceived and designed the experiments: JHF DA. Performed the experiments: JHF JI AP. Analyzed the data: JHF JI AP DA JSW. Contributed reagents/materials/analysis tools: JHF DA. Wrote the paper: JHF
DA JI JSW.
Media Coverage of This Article
Posted by PLoS_ONE_Group
Posted by jfewell | {"url":"http://www.plosone.org/article/info:doi/10.1371/journal.pone.0047445","timestamp":"2014-04-19T04:31:30Z","content_type":null,"content_length":"135245","record_id":"<urn:uuid:76c84795-1abe-4601-8160-9c4ea653275e>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00526-ip-10-147-4-33.ec2.internal.warc.gz"} |
Homework 3
Classification for Leukemia Expression Profiles
Due Date: Friday, April 23 (11:59pm) Submit electronically via Courseworks, following the instructions posted previously by our head TA Eugene Ie (eie@cs.columbia.edu). For theory questions, write
your answers in a convenient electronic format -- plain text, pdf, postscript, or doc (if you must!). For other formats, ask the TA first to make sure it will be readable. For programming questions,
please submit both your source code and your results/plots (in a standard format like ps, jpg, etc) along with a plain text "readme.txt" file that explains to the TA where everything is.
Suggested languages and tools: There are two suggested options for this homework. The first option is that you could use the matlab spider machine learning package, available at http://
www.kyb.tuebingen.mpg.de/bs/people/spider/, for the entire homework. This package is written in object-oriented matlab -- you train "algorithm objects" and test on "data objects". A short tutorial
can be found on the spider website, and many demos for different types of algorithms (clustering, classification, transductive learning) are included in the package. You might like to learn about
spider so that you can use it in your class project. The second option is that you implement the k-nearest neighbor algorithm yourself (e.g. in matlab, perl, Java -- or you can look online for an
implementation of this simple algorithm) and use an available SVM software package for the SVM classification problems. The two recommended SVM software packages are William Stafford Noble's GIST
software, which can be downloaded from http://microarray.cpmc.columbia.edu/gist/, and Thorsten Joachim's SVM-light, which is available at http://svmlight.joachims.org/. (For GIST, find the link for
the download page to get precompiled binaries for Linux or Solaris. If you are unsure what unix-like system you are running on, use the command "uname -a" to find out). Note that the spider package
already comes with the SVM-light optimization algorithm (among others). There are a number of other SVM packages -- each with various advantages and disadvantages -- available online: you are free to
use other implementations rather than GIST or SVM-light. A good place to look for SVM software as well as other tutorials and resources is the kernel machines homepage.
Biology Reference: "Molecular Classification of Cancer: Class Discovery and Class Prediction by Gene Expression Monitoring", T. R. Golub et al., Science, Volume 286, 1999.
Background on the the dataset:The Golub dataset consists of a training set (in the file golub-data-train.txt of gene expression profiles for 38 bone marrow samples from acute leukemia patients, with
each profile consisting of about 7000 gene expression levels. The training sample are labelled as either ALL (acute lymphoid leukemia) or AML (acute myeloid leukemia), two clinically distinct types
of leukemia. The ALL type samples can further be divided into T-lineage ALL and B-lineage ALL (see paper for details). Finally there is a test ("independent") set of 50 additional samples (in the
file golub-data-independent.txt) also consisting of AML, T-lineage ALL, and B-lineage ALL leukemia types.
Theory (15 points):
1. Distinguish between supervised and unsupervised learning. Distinguish between clustering and classification.
2. Precisely state the optimization problem that the "hard margin" or "maximal margin" SVM solves, given a labeled training set
(x[1], y[1]), ... , (x[m], y[m])
with x[i] in R^N and y[i] in {-1,1}. Why might this version of SVM be inappropriate for real data?
3. Training an SVM produces a set of "weights" usually denoted as alpha[1], ... , alpha[m]. Explain briefly how these weights arise from solving the optimization problem given above (no need to
repeat the entire derivation from class), and explain how they determine a linear classifier. What does it mean if a particular weight alpha[i] is 0, non-zero but small, or non-zero and large?
4. Briefly outline what slack variables are and how they are used to define soft-margin versions of the SVM optimization problem.
5. Briefly explain the theoretical motivation behind the Fisher criterion score (used in the first programming problem) for feature selection.
6. What is the difference between a filter and a wrapper feature selection technique?
Programming (35 points): For this assignment, we concentrate on supervised approaches, using k-nearest neighbor and support vector machines classifiers for the leukemia discrimination problem.
If you use spider, you will be able to train both kNN and SVM classifiers by creating and training the appropriate objects -- see online and internal documentation for spider. Once you get the hang
of the object-oriented framework, spider will be the fastest way to complete the assignment. Otherwise, you'll need to implement (or find an implementation of) kNN, and use one of the SVM software
implementations suggested above. Some notes about the GIST software. This SVM implementation uses a modification of the SVM optimization problem that only considers zero bias (b=0) linear
classifiers. This allows for a simple algorithm to solve for the optimal classifier, but because of the 0 bias, it is slightly different than the regular SVM solution.
1. Since there are so many features for this dataset (and relatively few samples), we expect that many of the features are irrelevant for discrimination between classes and will merely add noise and
degrade performance for our classifiers. Therefore, we want to use a simple filtering approach to feature selection: we try to choose the features that are most discriminative between classes in
the training set.
One possibility is to use the Fisher criterion score as our feature filtering statistic -- there are many other choices. The Fisher score for the j^th feature (coordinate) is given by
|mu^+[j] - mu^-[j]|^2 / (sigma^+[j]^2 + sigma^-[j]^2)
where mu^+[j] (resp. mu^-[j]) is the sample mean of j^th feature values across positive training vectors (resp. negative training vectors), while sigma^+[j] (resp. sigma^-[j]) is similarly a
sample estimate for the square root of the variance across the positive (resp. negative) training set. You are free to use a different feature selection score -- just state clearly what you are
You can use a feature selection object in spider, the fselect program included with the GIST package, or your own script in perl or matlab (or other language) to calculate the Fisher criterion
score (or other chosen score) for all the genes across the training set, and produce a ranked list of genes, ordered by decreasing score. This list will be used for feature selection in your
classification experiments.
2. Now you'll train kNN and SVM classifiers to make predictions on the Golub test set. There's actually very little programming to do (especially if you use spider for kNN), but there are many SVM
experiments to run, requiring various pre-processing steps and post-processing analysis.
Train soft-margin linear SVM classifiers using the following sets of features:
□ the top 500 genes, based on Fisher criterion (or other) score
□ the top 100 genes
□ the top 50 genes
Depending on the SVM software that you choose, you may be using a 2-norm or 1-norm soft-margin classifier -- state clearly what version of the optimization problem you are using, what value of
the parameter C you chose (or equivalently, state what command-line options you used for training), and how you tuned this parameter. (Note: one principled way to choose parameters is to use
cross-validation on the training set; if you try multiple parameter values and choose the one that performs best on the test set, you may actually be overfitting to the test set!) Typically,
people apply a mean 0 and unit variance transform to the data, or normalize so that the vectors are unit length -- state what pre-processing choices you make.
Also try kNN classifiers with k=1, 3, 5, 7, to give a baseline performance measure. Explain what distance measure you are using (Euclidean distance or correlation coefficient are standard
For each kNN/SVM experiment, report the predicted labels and calculate the "confusion matrix" on the test set for the 2-class ALL versus AML problem (choose one class to be "Positive" and one
class to be "Negative"):
│ Actual \ Predicted │ Negative │ Positive │
│ Negative │ a │ b │
│ Positive │ c │ d │
where a, b, c, d are the number of test examples falling into each category, and calculate the following simple statistics:
□ Accuracy: (a+d)/(a+b+c+d)
□ Sensitivity (True Positive Rate): d/(c+d)
□ Specificity (True Negative Rate): a/(a+b)
For SVM classifiers, a better way to view results is to plot an ROC (receiver operating characteristic) curve: plot the rate of true positives as a function of the rate of false positives, as you
vary the threshold for the classifier (recall we predict x is positive if f(x) > c, where c is our threshold). That is, plot the number of TP as a function of the number of FP, and scale both
axes so that they vary between 0 and 1. The area under this curve is called the ROC score. Compute the ROC score for your classifier on the test set (compute the area by summing the area of
rectangles). Note that the ROC curve and ROC score is completely determined by the ranking of test examples by the classifier f(x) and by the true labels on the test examples. (For kNN, you can
still vary the classification threshold, but the most natural threshold corresponds to a vote of the neighbors.) Report the ROC scores for your SVM classifiers.
3. For the last set of results, rerun the SVM experiments using your choice of kernel (second degree polynomial and radial basis function kernels are standard choices on this kind of dataset).
Again, report the type of kernel and the kernel parameters that you used, and report the confusion matrix and the ROC score.
4. Finally, write a paragraph or two about your results: How much feature elimination is useful for this dataset? Did the use of kernels with the SVM improve performance on the test set or lead to
overfitting? How does the performance of your SVM compare with the "informative gene" classifier described in the original paper by Golub et al.? | {"url":"http://www.cs.columbia.edu/~cleslie/cs4761/spr04-hw3/spr04-homework3.html","timestamp":"2014-04-21T02:01:10Z","content_type":null,"content_length":"12677","record_id":"<urn:uuid:cf897632-6eea-416d-83d7-83db705bd7f0>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00661-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lafayette, CO Science Tutor
Find a Lafayette, CO Science Tutor
...Science is my very favorite to teach and share my excitement for. Reading: I love teaching reading! Whether your student is starting out, or just having troubles getting the knack of it,
breaking it down into very small pieces is incredibly helpful and will get your student reading before you k...
21 Subjects: including geology, physics, English, algebra 2
...I have been a tutor with Wyzant Tutoring for over 4 years and have been a top ranked tutor in mathematics and science. Please contact me about any questions you may have! I look forward to
working with you to reaching your GPA and test score goals!
16 Subjects: including physics, geology, physical science, astronomy
Get a tutor who has taught at the college level! My name is Peter, and I am a CU graduate, going to graduate school to earn my PhD in chemistry in the Fall. I have TA'd organic, general, and
introductory level courses, as well as taken classes in pedagogy to further improve my skills as a teacher.
6 Subjects: including organic chemistry, chemistry, algebra 1, precalculus
...I applied algebraic, trigonometric, exponential, and logarithmic functions to graph and use in applications. I also solved linear and nonlinear equations and inequalities, systems of equations
and inequalities, and applied sequences and series with facility. I am a summa cum laude Mathematics major and an Electrical Engineering major.
15 Subjects: including electrical engineering, physics, chemistry, calculus
...I have been teaching InDesign for a college online. I am comfortable with all of the basics of using InDesign including using text boxes, layers and the text tools. I do have a bachelor's
degree in Graphic Design and I love working with Photoshop and Illustrator in conjunction InDesign.
12 Subjects: including sociology, psychology, reading, Microsoft Word
Related Lafayette, CO Tutors
Lafayette, CO Accounting Tutors
Lafayette, CO ACT Tutors
Lafayette, CO Algebra Tutors
Lafayette, CO Algebra 2 Tutors
Lafayette, CO Calculus Tutors
Lafayette, CO Geometry Tutors
Lafayette, CO Math Tutors
Lafayette, CO Prealgebra Tutors
Lafayette, CO Precalculus Tutors
Lafayette, CO SAT Tutors
Lafayette, CO SAT Math Tutors
Lafayette, CO Science Tutors
Lafayette, CO Statistics Tutors
Lafayette, CO Trigonometry Tutors
Nearby Cities With Science Tutor
Broomfield Science Tutors
Dacono Science Tutors
East Lake, CO Science Tutors
Eastlake, CO Science Tutors
Edgewater, CO Science Tutors
Eldorado Springs Science Tutors
Erie, CO Science Tutors
Firestone Science Tutors
Frederick, CO Science Tutors
Lakeside, CO Science Tutors
Lonetree, CO Science Tutors
Louisville, CO Science Tutors
Niwot Science Tutors
Northglenn, CO Science Tutors
Superior, CO Science Tutors | {"url":"http://www.purplemath.com/lafayette_co_science_tutors.php","timestamp":"2014-04-19T07:03:55Z","content_type":null,"content_length":"24014","record_id":"<urn:uuid:a16c3535-c07e-48d1-b8c9-1f7dedf38def>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00439-ip-10-147-4-33.ec2.internal.warc.gz"} |
About local refinement of tetrahedral grids based on local bisection, 5th International Meshing Roundtable
, 2000
"... Multi-resolution data-structures and algorithms are key in Visualization to achieve real-time interaction with large data-sets. Research has been primarily focused on the off-line construction
of such representations mostly using decimation schemes. Drawbacks of this class of approaches include: (i) ..."
Cited by 10 (4 self)
Add to MetaCart
Multi-resolution data-structures and algorithms are key in Visualization to achieve real-time interaction with large data-sets. Research has been primarily focused on the off-line construction of
such representations mostly using decimation schemes. Drawbacks of this class of approaches include: (i) the inability to maintain interactivity when the displayed surface changes frequently, (ii)
inability to control the global geometry of the embedding (no selfintersections) of any approximated level of detail of the output surface. In this paper we introduce a technique for on-line
construction and smoothing of progressive isosurfaces (see Figure 1). Our hybrid approach combines the flexibility of a progressive multiresolution representation with the advantages of a recursive
subdivision scheme. Our main contributions are: (i) a progressive algorithm that builds a multi-resolution surface by successive refinements so that a coarse representation of the output is generated
as soon as a coarse representation of the input is provided, (ii) application of the same scheme to smooth the surface by means of a 3D recursive subdivision rule, (iii) a multi-resolution
representation where any adaptively selected level of detail surface is guaranteed to be free of self-intersections.
- Partition,” Proceedings, 12th International Meshing Roundtable, Sandia National Laboratories , 2003
"... The 8-tetrahedra longest-edge (8T-LE) partition of any tetrahedron is defined in terms of three consecutive edge bisections, the first one performed by the longest-edge. The associated local
refinement algorithm can be described in terms of the polyhedron skeleton concept using either a set of preco ..."
Cited by 3 (0 self)
Add to MetaCart
The 8-tetrahedra longest-edge (8T-LE) partition of any tetrahedron is defined in terms of three consecutive edge bisections, the first one performed by the longest-edge. The associated local
refinement algorithm can be described in terms of the polyhedron skeleton concept using either a set of precomputed partition patterns or by a simple edgemidpoint tetrahedron bisection procedure. An
e#ective 3D derefinement algorithm can be also simply stated. In this paper we discuss the 8-tetrahedra partition, the refinement algorithm and its properties, including a non-degeneracy fractal
property. Empirical experiments show that the 3D partition has analogous behavior to the 2D case in the sense that after the first refinement level, a clear monotonic improvement behavior holds. For
some tetrahedra a limited decreasing of the tetrahedron quality can be observed in the first partition due to the introduction of a new face which reflects a local feature size related with the
tetrahedron thickness.
, 2004
"... In this paper we present lower and upper bounds for the number of equivalence classes of d-triangles with additional or Steiner points. We also study the number of possible partitions that may
appear by bisecting a tetrahedron with Steiner points at the midpoints of its edges. This problem arises, f ..."
Add to MetaCart
In this paper we present lower and upper bounds for the number of equivalence classes of d-triangles with additional or Steiner points. We also study the number of possible partitions that may appear
by bisecting a tetrahedron with Steiner points at the midpoints of its edges. This problem arises, for example, when refining a 3D triangulation by bisecting the tetrahedra. To begin with, we look at
the analogous 2D case, and then the 1-irregular tetrahedra (tetrahedra with at most one Steiner point on each edge) are classified into equivalence classes, and each element of the class is
subdivided into several non-equivalent bisection-based partitions which are also studied. Finally, as an example of the application of refinement and coarsening of 3D bisection-based algorithms, a
simulation evolution problem is shown.
, 2006
"... The Adaptive Mesh Refinement is one of the main techniques used for the solution of Partial Differential Equations. Since 3-dimensional structures are more complex, there are few refinement
methods especially for parallel environments. On the other hand, many algorithms have been proposed for 2-dime ..."
Add to MetaCart
The Adaptive Mesh Refinement is one of the main techniques used for the solution of Partial Differential Equations. Since 3-dimensional structures are more complex, there are few refinement methods
especially for parallel environments. On the other hand, many algorithms have been proposed for 2-dimensional structures. We analyzed the Rivara’s longest-edge bisection algorithm, studied
parallelization techniques for the problem, and presented a parallel methodology for the refinement of non-uniform tetrahedral meshes. The main goal of this research is to propose a practical
refinement framework for real-life applications. We describe a usable data structure for distributed environments and present a utility capable of distributing the mesh data among processors to solve
large mesh structures.
"... The increasing rate of growth in size of currently available datasets is a well known issue. The possibility of developing fast and easy to implement frameworks able to visualize at least part
of a tera-sized volume is a challenging task. Subdivision methods in recent years have been one of the most ..."
Add to MetaCart
The increasing rate of growth in size of currently available datasets is a well known issue. The possibility of developing fast and easy to implement frameworks able to visualize at least part of a
tera-sized volume is a challenging task. Subdivision methods in recent years have been one of the most successful techniques applied to the multiresolution representation and visualization of surface
meshes. Extensions of these techniques to the volumetric case presents positive effects and major challenges mainly concerning the generalization of the combinatorial structure of the refinement
procedure and the analysis of the smoothness of the limit mesh. In this paper we address mainly the first part of the problem, presenting a framework that exploits a subdivision scheme suitable for
extension to 3D and higher dimensional meshes. 1. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=209189","timestamp":"2014-04-18T12:18:37Z","content_type":null,"content_length":"24162","record_id":"<urn:uuid:b00d0757-776d-4aa7-b2e7-3de2a9f9f476>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00374-ip-10-147-4-33.ec2.internal.warc.gz"} |
Spring Algebra 1 Tutor
...I began tutoring three years ago, and I absolutely love it. I help high school and college students with topics that range anywhere from algebra I to calculus II and chemistry. I can also help
with CLEP test preparation.
7 Subjects: including algebra 1, chemistry, biology, algebra 2
Hello students and parents! I have 5 years of experience tutoring both high school and college students in math courses ranging from Pre-Algebra to Calculus I, II. My tutoring method starts with
first finding out how much the student understands about the course material or specific question.
14 Subjects: including algebra 1, geometry, algebra 2, economics
...These subjects use the same notations as do algebra, so once you master them you can move forward into more advanced topics. Algebra is not necessarily easy, but it is completely logical. There
is nothing you learn early that will be contradicted by later lessons.
20 Subjects: including algebra 1, writing, logic, algebra 2
...I'm excited about tutoring and look forward to helping you learn!Took prealgebra growing up and made an A+ when I completed the course. This subject was never difficult for me. I've used it all
throughout college as well to solve much more difficult problems.
9 Subjects: including algebra 1, physics, calculus, geometry
...Beginning students learn to read music, as well as learn music theory and basic foundations for music that enables them to transfer knowledge to other instruments and vocal music. Progress is
based upon one-30 minute lesson a week with encouragement given to daily practice. The ultimate goal is...
49 Subjects: including algebra 1, English, reading, ESL/ESOL | {"url":"http://www.purplemath.com/Spring_algebra_1_tutors.php","timestamp":"2014-04-18T23:37:21Z","content_type":null,"content_length":"23662","record_id":"<urn:uuid:eaf5eb52-0617-4696-9ebe-346e1b22cc4e>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00638-ip-10-147-4-33.ec2.internal.warc.gz"} |
Proof involving the Archimedean property.
September 5th 2010, 04:38 PM
Proof involving the Archimedean property.
Given any x $\in$R prove that there exists a unique n $\in$Z such that n - 1 $\leq$ x < n.
Proof: Suppose x $\in$R. By the Archimedean property, there is an n $\in$Z such that n > x. Since N is well-defined, we have that n - 1 $\in$N and n - 1 < n. Then it follows that n - 1 $\leq$x <
n for x $\in$R.
I feel like I've jumped to conclusions, but to my non-math-genius brain, I feel like this is correct. Can someone prod me along?
September 5th 2010, 05:37 PM
Given any x $\in$R prove that there exists a unique n $\in$Z such that n - 1 $\leq$ x < n.
Proof: Suppose x $\in$R. By the Archimedean property, there is an n $\in$Z such that n > x. Since N is well-defined, we have that n - 1 $\in$N and n - 1 < n. Then it follows that n - 1 $\leq$x <
n for x $\in$R.
I feel like I've jumped to conclusions, but to my non-math-genius brain, I feel like this is correct. Can someone prod me along?
Your error is in asserting that n-1< x. The Archimedean property only tells you that there is an integer larger than x. it does not tell you that that integer is anywhere near x. For example, if
the x= 1.5, the Archimedean property might be telling you that the number 1,000,000 is larger than 1.5.
You will also need to use the "well ordering property" of the positive integers: every non-empty set of positive integers contains a smallest member. Apply that to the set of all integers larger
than x which the Archimedean property tells you is non-empty.
September 6th 2010, 12:15 PM
After visiting the professor today, I made a little ground. How about this variation?
Proof: Let x $\in$R. By the Archimedean property, there exists some n > x, n $\in$Z. Define S = { y $\in$Z : y > x}. Since S is bounded below by x, then by the well-ordering principle of
integers, there is a least element in S, namely n = min(S). Since n $\in$S, then n > x. However, n - 1 $otin$S and so n - 1 $\leq$ x < n.
I haven't proved the uniqueness component.
September 6th 2010, 01:17 PM
September 6th 2010, 04:49 PM
Allow me to give you a different proof.
You need the completeness axiom.
We need to have proved that the set of integers, $\mathbb{Z}$, is not bounded.
If $a\in \mathbb{R}$ the define $\mathcal{S}=\{n\in \mathbb{Z}:n\le a\}$.
Clearly we can say that $b=\sup(\mathcal{S})$.
Also $\left( {\exists c \in \mathcal{S}} \right)\left[ {b - 1 < c \leqslant b \leqslant a} \right]$.
From which we get $b < c + 1 \leqslant b + 1$ which means that $c + 1 otin \mathcal{S}\, \Rightarrow \,a < c + 1\, \Rightarrow \,c \leqslant a < c + 1$.
That completes the proof. | {"url":"http://mathhelpforum.com/differential-geometry/155288-proof-involving-archimedean-property-print.html","timestamp":"2014-04-18T23:37:17Z","content_type":null,"content_length":"13170","record_id":"<urn:uuid:972256dd-9ed2-47d7-b1fb-7d78f28dee92>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00560-ip-10-147-4-33.ec2.internal.warc.gz"} |
alternating series test
November 10th 2009, 08:24 PM #1
Junior Member
Mar 2008
alternating series test
Hi all,
I was wondering for the following series:
$<br /> \sum_{n=1}^{\infty}\frac{(-1)^(n-1)(-1)^n}{n}<br />$
NOTE: in the numerator it is meant to be (-1)^(n-1) times by (-1)^n .... sorry i can't it get the code to work
are you allowed to let a_n = (-1)^n/ n and say that since a_n is not >0 for all n>=1, therefore the series diverges?
Thanks in advance,
Hi all,
I was wondering for the following series:
$<br /> \sum_{n=1}^{\infty}\frac{(-1)^{n-1}(-1)^n}{n}<br />$
NOTE: in the numerator it is meant to be (-1)^(n-1) times by (-1)^n .... sorry i can't it get the code to work
are you allowed to let a_n = (-1)^n/ n and say that since a_n is not >0 for all n>=1, therefore the series diverges?
Thanks in advance,
your $a_n = \frac 1n$ here. so, $\lim_{n \to \infty}a_n = 0$
Note that $(-1)^{n - 1}(-1)^n = (-1)^{2n - 1} = -1$, thus your series is $\sum_{n = 1}^\infty \frac {-1}n$, which is a divergent harmonic series. The alternating series test does not apply since
you do not have an alternating series.
November 10th 2009, 10:04 PM #2 | {"url":"http://mathhelpforum.com/calculus/113793-alternating-series-test.html","timestamp":"2014-04-23T12:56:14Z","content_type":null,"content_length":"34934","record_id":"<urn:uuid:8028b48a-7324-4ada-80b8-81f29056e02e>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00286-ip-10-147-4-33.ec2.internal.warc.gz"} |
Texas Instruments TI-34 Ii Explorer Plus
Preview of first few manual pages (at low quality). Check before download. Click to enlarge.
Download (English) Related manuals
Texas Instruments TI-34 Ii Explorer Plus, size: 3.0 MB Texas Instruments TI-34 Ii Explorer Plus Quick Reference
Instruction: After click Download and complete offer, you will get access to list of direct links to websites where you can download this manual.
About Texas Instruments TI-34 Ii Explorer Plus
Here you can find all about Texas Instruments TI-34 Ii Explorer Plus like manual and other informations. For example: review.
Texas Instruments TI-34 Ii Explorer Plus manual (user guide) is ready to download for free.
On the bottom of page users can write a review. If you own a Texas Instruments TI-34 Ii Explorer Plus please write about it to help other people.
Report abuse or wrong photo
Share your Texas Instruments TI-34 Ii Explorer Plus photo
User reviews and opinions
Comments to date: 2. Page 1 of 1. Average Rating:
gegy57 2:12am on Saturday, April 10th, 2010
Texas Instruments TI34II Calculator My daughter is REQUIRED to have this calculator for math class. Needless to say we bought one and it disappeared. Poor shipping & service from ReStockIt I am quite
disappointed in my order of 2 Texas Instuments 34 II calculators. Missing one thing: commas I have the TI 30X-IIS, a nice dual power, 2 line calculator. It has one feature missing, the fraction bar
Tolukra 6:48am on Monday, April 5th, 2010
Always dependable! My daughter needed this calculator for her math class. It is enough to grow with her.
Comments posted on www.ps2netdrivers.net are solely the views and opinions of the people posting them and do not necessarily reflect the views or opinions of us.
Order of Operations
TI.34 Explorer Plus
Scientific Calculator
Copyright 1999 Texas Instruments
The TI-34 uses EOS (Equation Operating System) to evaluate expressions. 1st Expressions inside parentheses. 2nd Functions that need a ) and precede the argument, such as the sin, log, and all RP menu
items. 3rd Fractions. 4th Functions that are entered after the argument, such as x 2 and angle unit modifiers ( r). 5th Exponentiation (^) and roots (x). 6th Negation (M). 7th Permutations (nPr) and
combinations (nCr). 8th Multiplication, implied multiplication, division. 9th Addition and subtraction. 10th Conversions (A bcde, 4F, 4D, 4%, 4DMS). 11th < completes all operations and closes all
open parentheses.
%~?ON >}RQ
General Information
Examples: See the last page of these instructions for keystroke examples that demonstrate many of the TI-34 functions. Examples assume all default settings. & turns on the TI-34. % ' turns it off and
clears the display. APD (Automatic Power Down) turns off the TI-34 automatically if no key is pressed for about 5 minutes. Press & after APD. The display, pending operations, settings, and memory are
retained. 2-Line Display: The first line (Entry Line) displays an entry of up to 88 digits (or 47 digits for Stat or Constant Entry Line). Entries begin on the left; those with more than 11 digits
scroll to the right. Press ! and " to scroll the line. Press % ! or % " to move the cursor immediately to the beginning or end of the entry. The second line (Result Line) displays a result of up to
10 digits, plus a decimal point, a negative sign, a x10 indicator, and a 2-digit positive or negative exponent. Results that exceed the digit limit are displayed in scientific notation. Indicator 2nd
Clearing and Correcting
Clears an error message. Clears characters on entry line. Moves the cursor to last entry in history once display is clear. J Deletes the character at the cursor. Deletes all characters to the right
when you hold down J; then, deletes 1 character to the left of the cursor each time you press J. %f Inserts a character at the cursor. %{ Clears all memory variables. %t Clears all data points
without exiting CLRDATA STAT mode. % w Y Clears all data points and exits STAT mode. %Y Resets the TI-34. Returns unit to or default settings; clears memory variables, pending operations, all &&
entries in history, and statistical data; clears constant mode and Ans.
Q R N/Dn/d
Definition 2nd function. Fixed-decimal setting. Statistical mode. Angle mode set to radians. Displays quotient (Q) and remainder (R) for integer divide result. The fractional result can be further
simplified. An entry is stored in memory before and/or after the active screen. Press # and $ to scroll. An entry or menu displays beyond 11 digits. Press ! and " to scroll.
Fractional calculations can display fractional or decimal results. % ~ displays a menu of 4 display mode settings. These determine how fraction results are displayed. You select 2 items: Ab/c
displays mixed number results. d/e (default) displays fraction results. Manual (default) displays unsimplified fractions. Auto displays fraction results simplified to lowest terms. @ separates a
whole number from the fraction in a mixed number, and > separates a numerator from the denominator. The denominator must be a positive integer. To negate a fraction, press M before entering
numerator. } < simplifies a fraction using the lowest common prime factor. If you want to choose the factor (instead of letting the calculator choose it), press }, enter the factor (an integer), and
then press <. % ? displays Fac on the entry line and the divisor used to simplify the last fraction result. You must be in Manual mode to display Fac. Press % ? again to toggle back to the simplified
fraction. Q converts a fraction to a decimal, if possible. R converts a decimal to a fraction, if possible. % N converts a decimal or fraction to a percent. % O converts between a mixed number and a
simple fraction.
p =3.141592653590 for calculations. p =3.141592654 for display. In RAD mode, p is represented as Pi in results of
multiplication or fractional calculations. The TI-34 only accepts p in the numerator of a fraction.
Angle Modes
2nd Functions: % displays the 2nd indicator, and then selects the 2nd function (printed above keys) of the next key pressed. For example, % b 25 E < calculates the square root of 25 and returns the
result, 5. Menus: Certain TI-34 keys display menus: z, % h, L, % d, % ~, % A, % B, % t, u, % w, H, % I, =, % k, % , and %. Press ! and " to move the cursor and underline a menu item. To return to the
previous screen without selecting the item, press -. To select a menu item: Press < while the item is underlined, or For menu items followed by an argument value, enter the argument value while the
item is underlined. The item and the argument value are displayed on the previous screen.
Math Operations
% d displays a menu with various math functions. Some functions require you to enter 2 values, real numbers or expressions that equal return a real number. % ` separates two values.
abs(#) round(#,digits) iPart(#) fPart(#) min(#1,#2) max(#1,#2) lcm(#1,#2) gcd(#1,#2)
Displays absolute value of #. Rounds # to specified number of digits. Returns only the integer part (iPart) or fractional part (fPart) of #. Returns the minimum (min) or maximum (max) of two values,
#1 and #2. Finds the least common multiple (lcm) or greatest common divisor (gcd) of two values, #1 and #2. Calculates the cube of #. Calculates the cube root of #. Returns the remainder resulting
from the division of 2 values, #1 by #2.
% I displays a menu to change the Angle mode to degrees or radians. = displays a menu to specify the Angle unit modifier degrees (), minutes (), seconds (), radians ( r), or 4DMS (convert an angle to
DMS notation). To set the Angle mode for any part of an entry: Select the Angle mode. Entries are interpreted and results displayed according to the Angle mode, or Select a unit modifier ( r ) for
any part of an entry. Entries with unit modifiers are interpreted accordingly, overriding the Angle mode. To convert an entry: Set the Angle mode to the unit you want to convert to. Then use a unit
modifier to designate the unit to convert from. (Angles of trigonometric functions convert values inside parentheses first.), or Select 4DMS, which converts an entry to DMS ( ) notation.
Previous Entries
Trig and Logarithms
After an expression is evaluated, use # and $ to scroll through previous entries, which are stored in the TI-34 memory. You cannot retrieve previous entries while in STAT mode.
Last Answer
Integer Divide
The most recently calculated result is stored to the variable Ans. Ans is retained in memory, even after the TI-34 is turned off. To recall the value of Ans: Press % i (Ans displays on the screen),
or Press any operations key (T, U, F, etc.) as the first part of an entry. Ans and the operator are both displayed.
% Y divides 2 positive integers and displays the quotient, Q, and the remainder, R. Only the quotient is stored to Ans.
% B displays a menu of all trigonometric functions (sin, sin -1, cos, cos-1, tan, tan-1). Select the trigonometric function from the menu and then enter the value. Set the desired Angle mode before
starting trigonometric calculations. % A displays a menu of all log functions (log, 10^, ln, e ^). Select the log function from the menu, then enter the value, and complete it with E <.
% k displays a menu to convert rectangular coordinates (x,y) to polar coordinates (r,) or vice versa. Set Angle mode, as necessary, before starting calculations.
Stored Operations m o % n p
The TI-34 stores two operations, OP1 and OP2. To store an operation to OP1 or OP2 and recall it: 1. Press % n or % p. 2. Enter the operation (any combination of numbers, operators, or menu items and
their arguments). 3. Press < to save the operation to memory. 4. m or o recalls and displays the operation on the entry line. The TI-34 automatically calculates the result (without pressing <) and
displays the counter (as space permits) on the left side of the result line. You can set the TI-34 to display only the counter and the result (excluding the entry). Press % n or % p, press ! until
the = is highlighted () and press <. Repeat to toggle this setting off.
%t w v u
ARGUMENT A function does not have the correct number
1-VAR stats analyzes data from 1 data set with 1 measured variable, X. 2-VAR stats analyzes paired data from 2 data sets with 2 measured variables X , the independent variable, and Y , the dependent
variable. You can enter up to 42 data sets. Steps for defining statistical data points: 1. Press % t. Select 1-VAR or 2-VAR and press <. The STAT indicator displays. 2. Press v. 3. Enter a value for
X 1 and press $. 4. Then:
of arguments.
DIVIDE BY 0 You attempted to divide by 0. In statistics, n=1. DOMAIN You specified an argument to a function outside
In 1-VAR stat mode, enter the frequency of occurrence (FRQ) of the data point and press $. FRQ default=1. If FRQ=0, the data point is ignored. Or In 2-VAR stat mode, enter the value for Y 1 and press
z L % h{
The TI-34 has 5 memory variablesA, B, C, D, and E. You can store a real number or an expression that results in a real number to a memory variable. z accesses the menu of variables. L lets you store
values to variables. % h recalls the values of variables. % { clears all variable values.
5. Repeat steps 3 and 4 until all data points are entered. You must press < or $ to save the last data point or FRQ value entered. If you add or delete data points, the TI-34 automatically reorders
the list. 6. When all points and frequencies are entered: Press u to display the menu of variables (see table for definitions) and their current values, or
Press v to return to the blank STAT screen. You can do calculations with data variables (, , etc.). Select a variable from the u menu and then press < to evaluate the calculation.
% displays the decimal notation mode menu. These settings only affect the display of results. F (default) restores Floating-decimal format. Set decimal places to n (09) with 0123456789. C enters a
value in scientific notation. Press M before entering a negative exponent.
7. When finished:
Press % t and select CLRDATA to clear all data points without exiting STAT mode, or Press % w < to clear all data points, variable and FRQ values, and to exit STAT mode (STAT indicator turns off).
the valid range. For example: For x : x = 0 or y < 0 and x not an odd integer. For y x: y and x = 0; y < 0 and x not an integer. For x: x < 0. For LOG or LN: x 0. For TAN: x = 90, -90, 270, -270,
450, etc. For SIN-1 or COS-1: |x | > 1. For nCr or nPr: n or r are not integers 0. For x !: x is not an integer between 0 and 69. EQ LENGTH ERROR An entry exceeds the digit limits (88 for Entry Line
and 47 for Stat or Constant Entry lines); for example, combining an entry with a constant that exceeds the limit. FRACMODE Pressing } when Fracmode=Auto. FRQ DOMAIN FRQ value (in 1-VAR stats) < 0 or
>99, or not an integer. OP Pressing m or o when constants not defined or while in STAT mode. OVERFLOW |q| 110, where q is an angle in a trigonometric, hyperbolic, or R4Pr( function. STAT Pressing u
with no defined data points. When not in STAT mode, pressing v, u, or % w. SYNTAX The command contains a syntax error: entering more than 23 pending operations, 8 pending values, or having misplaced
functions, arguments, parentheses, or commas.
Variables n or Sx or Sy x or y x or y x2 or y2 xy a b r X ' (2-VAR)
Y ' (2-VAR)
Definition Number of X or (X,Y ) data points. Mean of all X or Y values. Sample standard deviation of X or Y. Population standard deviation of X or Y. Sum of all X or Y values. Sum of all X 2 or Y 2
values. Sum of X Y for all XY pairs. Linear regression slope. Linear regression Y -intercept. Correlation coefficient. Uses a and b to calculate predicted X value when you input a Y value. Uses a and
b to calculate predicted Y value when you input an X value.
Battery Replacement
Replace protective cover. Place the TI-34 face down. 1. Using a small Phillips screwdriver, remove screws from back case. 2. Starting from the bottom, carefully separate front from back. Caution: Be
careful not to damage any internal parts. 3. Using a small Phillips screwdriver, if necessary, remove old battery; replace with new one. Caution: Avoid contact with other TI-34 components while
changing the battery. 4. If necessary, press & and - at the same time to reset the TI-34 (clears memory and all settings). Caution: Dispose of old batteries properly. Do not incinerate batteries or
leave where a child can find them.
In Case of Difficulty Probability
Calculates the number of possible permutations of n items taken r at a time, given n and r. The order of objects is important, as in a race. Calculates the number of possible combinations of n items
taken r at a time, given n and r. The order of objects is not important, as in a hand of cards. A factorial is the product of the positive integers from 1 to n. n must be a positive whole number 69.
Generates a random real number between 0 and 1. To control a sequence of random numbers, store an integer (seed value) 0 to rand. The seed value changes randomly every time a random number is
generated. Generates a random integer between 2 integers, A and B, where A { RAND[ { B. Separate the 2 integers with a comma.
Review instructions to be certain calculations were performed properly. Press & and - at the same time. This clears all memory and settings. Check the battery to ensure that it is fresh and properly
Change the battery when:
& does not turn the unit on, or The screen goes blank, or You get unexpected results. To continue using the TI-34 until you can change the battery: 1. Expose the solar panel to brighter light. 2.
Press & and - at the same time to reset the calculator. This clears all settings and memory. Operates in well-lit areas using solar cell. Operates in other light settings using battery.
Support and Service Information
Product Support
Customers in the U.S., Canada, Puerto Rico, and the Virgin Islands
For general questions, contact Texas Instruments Customer Support: phone: 1.800.TI.CARES (1.800.842.2737) e-mail: ti-cares@ti.com For technical questions, call the Programming Assistance Group of
Customer Support: phone: 1.972.917.8324
# $ ! "
1T1< Mon 3:04pm 2T2< 1+1 2. 4. 6. 8.
5 % _ V 0 < 5%Q250 12.5 50%
3 + 1812 4812
Customers outside the U.S., Canada, Puerto Rico, and the Virgin Islands
Contact TI by e-mail or visit the TI calculator home page on the World Wide Web. e-mail: ti-cares@ti.com Internet: www.ti.com/calc
% ~=Ab/c; Manual 3T1@8>12 < }<
Product Service
Customers in the U.S. and Canada Only
Always contact Texas Instruments Customer Support before returning a product for service.
Simp %"T2< 2+2+2 6.
Ans 4Simp 446
Customers outside the U.S. and Canada
Refer to the leaflet enclosed with this product or contact your local Texas Instruments retailer/distributor.
Ans 4Simp Fac 2
Other TI Products and Services
Visit the TI calculator home page on the World Wide Web. www.ti.com/calc
FAC 3V3< 3Q3 9.
Warranty Information
Customers in the U.S. and Canada Only One-Year Limited Warranty for Electronic Product
This Texas Instruments (TI) electronic product warranty extends only to the original purchaser and user of the product. Warranty Duration. This TI electronic product is warranted to the original
purchaser for a period of one (1) year from the original purchase date. Warranty Coverage. This TI electronic product is warranted against defective materials and construction. THIS
WARRANTY IS VOID IF THE PRODUCT HAS BEEN DAMAGED BY ACCIDENT OR UNREASONABLE USE, NEGLECT, IMPROPER SERVICE, OR OTHER CAUSES NOT ARISING OUT OF DEFECTS IN MATERIALS OR CONSTRUCTION. Warranty
Disclaimers. ANY IMPLIED WARRANTIES ARISING OUT OF THIS SALE, INCLUDING BUT NOT LIMITED TO THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, ARE LIMITED IN DURATION TO
%? V3<
Ans Q3
27. 3.
Ans 4Simp 2 423
3%c%i <
3 x Ans
924A bc de
% ~=d/e; Manual A bcde 9 > 2 % O < 412 Ans 4 D 4.5 4510
T U V W M D E %Y <
+ Q P (M) INT P 5 V M T < 5QM12+45 M15. 0 20. 20. 4D Q<
% Y 2 < 10 Int 2 5
Ans 4 F
4VD2T3E < 4D2T3E<
% ~=Abc; Auto R< Ans 4 F 412
Some states/provinces do not allow the exclusion or limitation of implied warranties or consequential damages, so the above limitations or exclusions may not apply to you. Legal Remedies. This
warranty gives you specific legal rights, and you may also have other rights that vary from state to state or province to province. Warranty Performance. During the above one (1) year warranty
period, your defective product will be either repaired or replaced with a reconditioned model of an equivalent quality (at TIs option) when the product is returned, postage prepaid, to Texas
Instruments Service Facility. The warranty of the repaired or replacement unit will continue for the warranty of the original unit or six (6) months, whichever is longer. Other than the postage
requirement, no charge will be made for such repair and/or replacement. TI strongly recommends that you insure the product for value prior to mailing. Software. Software is licensed, not sold. TI and
its licensors do not warrant that the software will be free from errors or meet your specific requirements. All software is provided AS IS. Copyright. The software and any documentation supplied with
this product are protected by copyright.
2Q(12) M22+2 6. 5. 125. 2.
% ~=de; Manual
abs, iPart, fPart, 3, 3
iPart fPart x2
2VD1>2E %a< 2FT2<
iPart(23.45) 23. %b25E< (25) abs round ^ 5G3< 5^3
All Customers Outside the U.S. and Canada
For information about the length and terms of the warranty, refer to your package and/or to the warranty statement enclosed with this product, or contact your local Texas Instruments retailer/
round(p,3) 3.142 remainder
min, max, lcm, gcd, remainder
%A log 10 ^
CLR VAR: Y N
0. %I"
A B C D E
ln e ^
e^(.5) 1.648721271
sin sin
%n V2T3< 4m
4Q2+6Q2+OP2 Q2
11. 15.
AnsB 706.8583471
%p V2!!!< 4o
706.8583471 <W4< BP4 176.7145868
< <
1.5 4DMS 1300
o 3 %p!< OP2 =Q32. %
tan tan
3.14 3.141592654
1.234C M65<
1.234 M65 1.234 x10M65
tan(45) 1.619775191
R4Pr R4P
R4Pr (5,30)
Texas Instruments 7800 Banner Dr. Dallas, TX 75251 U.S.A.
Texas Instruments Holland B.V. Rutherfordweg CG Utrecht - The Netherlands
R4P (5,30)
www.ti.com/calc US 34IIOM1L1A Page 4
Thank you for choosing Texas Instruments Instructional Handhelds for use in your classes. This form is designed to assist you with your order. Please note the following program requirements and
One request per department per semester Only department chair may submit request Number of requested products limited to number of instructors Must include syllabus from each instructor receiving
product. Syllabus must state that TI graphing calculators are highly recommended or required
Date: ________________________________ Institution ___________________________________________________________________ Department ______________________________ Campus _____________________________
Address (No PO Boxes) ________________________________________________________ City __________________________ State _________ Zip _____________________________ Phone Number ______________________
Fax Number _______________________________ E-mail Address ________________________________________________________________ Name ___________________________________ Title
________________________________ Signature _____________________________________________________________________ This request is made for: Focus (check all that apply) ___General Math ___Algebra
___Teacher Educators Please complete both pages of this form and return to: TI Technology Rewards Program Park Central North P.O. Box 650311, M/S 3919 Dallas, TX 75265 Fax: 866-843-3839
Please allow 6-8 weeks for delivery, subject to product availability. For more information, please call 1-866-848-7722. For on-line service: e-mail ti-educators@ti.com or visit http://
Winter / Spring
Year _______
___Geometry ___Pre-Calculus ___Calculus ___Statistics ___Other Math
___General Science ___Biology ___Chemistry ___Physics ___Other Science ___Engineering
A complete form will expedite your order.
Level 1 TI-108, TI-10
Courses for which TI Instructional Handhelds are strongly recommended or required.
Course Complete name & number Section Instructor Recommended Handheld(s) Enrollment
Please attach another sheet if necessary
Product(s) Requested: Quantity
1 per 50 students
Caddy One TI EXPLORATIONS Series Book*
1 per 85 students
1 per 70 students
TI-108 TI-10
TI-10 Overhead
1 per 300 students
Level 2 TI-15 Explorer, TI-34 II Explorer Plus, TI-30X IIS, TI-30XS MultiView,
TI-34 MultiView Courses for which TI Instructional Handhelds are strongly recommended or required.
TOTAL ___________
Product(s) Requested: Quantity 1 per 35 students
Quantity 1 per 300 students
TI-15 Overhead TI-34 II Overhead TI-30X IIS Overhead
One TI EXPLORATIONS Series Book*
TI-34 MultiView Overhead TI-30XS MultiView Overhead
TI-15 Explorer TI-34 II Explorer Plus TI-30X IIS TI-34 MultiView TI-30XS MultiView
TI-SmartView for 30/34 Multiview
If you need any type of assistance, please do not hesitate to call 1-866-848-7722. * Workbook choice (see workbook options page) _______________________________________
Non Graphing Math at Your Fingertips with the TI-10 Valerie Johse and Chris Ruda George Christ
A World of Mathematics: Activities for Grades 4, 5, and 6 using the TI-15 Developing Problem-Solving Skills with the TI-15 George Christ
Investigating Mathematics with Calculators in the Middle Grades: Activities with the TI-30Xa SE and TI-34 Calculators Susan E. Williams and George W. Bright Math Investigations with the TI-30X IIS
and TI-34 II: Activities for the Middle Grades Ann Lawrence and Karen Wyatt Uncovering Mathematics with Manipulatives and Calculators, Level 1 (TI-10) Jane F. Schielack and Dinah Chancellor
Uncovering Mathematics with Manipulatives and Calculators, Level 2-3 (TI-10, TI-15) Jane F. Schielack and Dinah Chancellor Integrating Handheld Technology into the Elementary Mathematics Classroom
Judy Olson, Mel Olson, and Janie Schielack
Sxc-2003 LE40C653 HL-1870N 82309 Review D 7000 SMU-WM100 DSC-F828 CLP-380 Cappucino 125 SM LQ-1050 37PFL9603 DWA-150R ICD-SX900 Sava-27 WRT54GX WF-J854 Program T1000 TX-NR801E Supreme 1997 278CLT
Nabmessenger TVB 2250 FAX-LAB 200 RS2620 Cht-15Q Mobile 205FCC MC 350E Sweeper KDG820J-kd-g820 ML-2850D-ETS Gpsmile240 DX7450 Cd3100 20-0500 CX1800 L1952H-BF WT-Y138RG VR530 Tracker 1999 VTH6920F V50
2006 SH-S182M PSR-230 DCR-TRV25 Studio FW-P88 CDX-S22S 240V Extension Tube WF-T8500TP Solo NEO GX-4000DB UE37C6700 M1721TA-BZ Fujifilm Z30 MZ-R700 KRC-477RV VGN-FW31M Le40C670m1Q Boxee BOX CIV-21
HT-DB300 TU-PTA100E NO 3053 IC-F14-S Wxli4240EE Camera STR-DN610 32PW8505-12S Vega 252 Take5 HT-S7300 Vision Lgdvt418 64160 Sl-T Blazer 1996 Stadium DUO Pearl 8100 Breil YM92 A300D 7155-7165-01
Digimax 300 Siemens A65 SM 1102 BSV-2653 GR-DV2000 Diam100 Altima-2000 MDX-C7970R Dslr-A200K X 0 20PF4121-05 UE-40C8700XS DS19390 SB-22S HK-800 CCD-TRV57E
manuel d'instructions, Guide de l'utilisateur | Manual de instrucciones, Instrucciones de uso | Bedienungsanleitung, Bedienungsanleitung | Manual de Instruções, guia do usuário | инструкция | návod
na použitie, Užívateľská príručka, návod k použití | bruksanvisningen | instrukcja, podręcznik użytkownika | kullanım kılavuzu, Kullanım | kézikönyv, használati útmutató | manuale di istruzioni,
istruzioni d'uso | handleiding, gebruikershandleiding | {"url":"http://www.ps2netdrivers.net/manual/texas.instruments.ti-34.ii.explorer.plus/","timestamp":"2014-04-17T12:35:51Z","content_type":null,"content_length":"53187","record_id":"<urn:uuid:beedebcc-93ec-4808-9c29-68d88a234ed3>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00650-ip-10-147-4-33.ec2.internal.warc.gz"} |
Solvable and Unsolvable Problems
- JOURNAL OF THE ACM , 1974
"... An attempt is made to apply information-theoretic computational complexity to metamathematics. The paper studies the number of bits of instructions that must be a given to a computer for it to
perform finite and infinite tasks, and also the amount of time that it takes the computer to perform these ..."
Cited by 45 (7 self)
Add to MetaCart
An attempt is made to apply information-theoretic computational complexity to metamathematics. The paper studies the number of bits of instructions that must be a given to a computer for it to
perform finite and infinite tasks, and also the amount of time that it takes the computer to perform these tasks. This is applied to measuring the difficulty of proving a given set of theorems, in
terms of the number of bits of axioms that are assumed, and the size of the proofs needed to deduce the theorems from the axioms.
"... The last century saw dramatic challenges to the Laplacian predictability which had underpinned scientific research for around 300 years. Basic to this was Alan Turing’s 1936 discovery (along
with Alonzo Church) of the existence of unsolvable problems. This paper focuses on incomputability as a power ..."
Cited by 1 (1 self)
Add to MetaCart
The last century saw dramatic challenges to the Laplacian predictability which had underpinned scientific research for around 300 years. Basic to this was Alan Turing’s 1936 discovery (along with
Alonzo Church) of the existence of unsolvable problems. This paper focuses on incomputability as a powerful theme in Turing’s work and personal life, and examines its role in his evolving concept of
machine intelligence. It also traces some of the ways in which important new developments are anticipated by Turing’s ideas in logic. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=3228911","timestamp":"2014-04-25T03:30:27Z","content_type":null,"content_length":"14434","record_id":"<urn:uuid:73b68209-e976-4e0c-873c-b13e598c2ebd>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00381-ip-10-147-4-33.ec2.internal.warc.gz"} |
trig problem ...
November 7th 2009, 03:32 PM #1
Nov 2009
trig problem ...
hey all
i don't really know how to write this,
so here is a picture of the problem.
can u please help me ?
hint ...
$1+\tan^2{x} = \sec^2{x}$
first of all TY
but i don't know this field ... its for a friend, can you please tell me what the answer is ?
It's not really the policy of MHF to provide full solutions to problems.
Skeeter has given a pretty big hint - using what has been given the solution falls into place.
If your friend is still having trouble, please ask him/her to log on him/herself, show his/her working and exactly what point he/she gets stuck on.
i don't need the way
all i need is the answer ... please just this ones. i wont bug u again ...
November 7th 2009, 03:42 PM #2
November 7th 2009, 10:23 PM #3
Nov 2009
November 7th 2009, 11:47 PM #4
November 8th 2009, 12:10 AM #5
Nov 2009
November 8th 2009, 03:48 AM #6 | {"url":"http://mathhelpforum.com/trigonometry/113040-trig-problem.html","timestamp":"2014-04-19T20:48:45Z","content_type":null,"content_length":"45465","record_id":"<urn:uuid:d06e9701-0b73-4f54-b902-148322e4297d>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00506-ip-10-147-4-33.ec2.internal.warc.gz"} |
descent for simplicial presheaves
Simplicial presheaves equipped with the model structure on simplicial presheaves are one model/presentation for the (∞,1)-category of (∞,1)-sheaves on a given site.
The fibrant object $\bar X$ that a simplicial presheaf $X : S^{op} \to SSet$ is weakly equivalent to with respect to this model structure is the ∞-stackification of $X$. One expects that ∞-stacks/
(∞,1)-sheaves are precisely those (∞,1)-presheaves which satisfy a kind of descent condition.
Precsisely what this condition is like for the particular model constituted by simplicial presheaves with the given Jardine model structure on simplicial presheaves was worked out in
recalled as corollary 6.5.3.13 in
The following is a summary of these results.
The main point is that the fibrant objects are essentially those simplicial presheaves, which satisfy descent with respect not just to covers, but to hypercovers.
Localizations of (∞,1)-presheaves at hypercovers are called hypercompletions in section 6.5.3 of Higher Topos Theory. Notice that in section 6.5.4 of Higher Topos Theory it is argued that it may be
more natural not to localize at hypercovers, but just at covers after all.
A well-studied class of models/presentations for an (∞,1)-category of (∞,1)-sheaves is obtained using the model structure on simplicial presheaves on an ordinary (1-categorical) site $S$, as follows.
Let $[S^{op}, SSet]$ be the SSet-enriched category of simplicial presheaves on $S$.
Recall from model structure on simplicial presheaves that there is the global and the local injective simplicial model structure on $[S^{op}, SSet]$, and that the local model structure is a
(Bousfield-)localization of the global model structure.
According to section 6.5.2 of HTT we have:
Since with respect to the local or global injective model structure all objects are automatically cofibrant, this means that $\bar Sh_{(\infty,1)}(S)$ is the full sub-$(\infty,1)$-category of $PSh_
{(\infty,1)}(S)$ on simplicial presheaves which are fibrant with respect to the local injective model structure: these are the ∞-stacks in this model.
By the general properties of localization of an (∞,1)-category there should be a class of morphisms $f : Y \to X$ in $PSh_{(\infty,1)}(S)$ – hence between injective-fibrant objects in $[S^{op}, PSh
(S)]$ – such that the simplicial presheaves representing $\infty$-stacks are precisely the local objects with respect to these morphisms.
This was worked out in
We now describe central results of that article.
For $X \in S$ an object in the site regarded as a simplicial presheaf and $Y \in [S^{op}, SSet]$ a simplicial presheaf on $S$, a morphism $Y \to X$ is a hypercover if it is a local acyclic fibration,
i.e. of for all $V \in S$ and all diagrams
$\array{ \Lambda^k[n]\otimes V &\to & Y \\ \downarrow && \downarrow \\ \Delta^n\otimes V &\to& X } \;\; respectively \;\, \array{ \partial \Delta^n\otimes V &\to & Y \\ \downarrow && \downarrow \\ \
Delta^n\otimes V &\to& X }$
there exists a covering sieve $\{U_i \to V\}$ of $V$ with respect to the given Grothendieck topology on $S$ such that for every $U_i \to V$ in that sieve the pullback of the abve diagram to $U$ has a
$\array{ \Lambda^k[n]\otimes U_i &\to & Y \\ \downarrow &earrow & \downarrow \\ \Delta^n\otimes U_i &\to& X } \;\; respectively \;\, \array{ \partial \Delta^n\otimes U_i &\to & Y \\ \downarrow &
earrow& \downarrow \\ \Delta^n\otimes U_i &\to& X } \,.$
If $S$ is a Verdier site then every such hypercover $Y \to X$ has a refinement by a hypercover which is cofibrant with respect to the projective global model structure on simplicial presheaves. We
shall from now on make the assumption that the hypercovers $Y \to X$ we discuss are cofibrant in this sense. These are called split hypercovers. (This works in many cases that arise in practice, see
the discussion after DHI, def. 9.1.)
The objects of $Sh_{(\infty,1)}(S)$ – i.e. the fibrant objects with respect to the projective model structure on $[S^{op}, SSet]$ – are precisely those objects $A$ of $PSh_{(\infty,1)}(S)$ – i.e. Kan
complex-valued simplicial presheaves – which satisfy descent for all split hypercovers, i.e. those for which for all split hypercover $f : Y \to X$ in $[S^{op}, SSet]$ we have that
$[S^{op}, SSet](X,A) \stackrel{\simeq}{\to} [S^{op}, SSet](Y,A)$
Notice that by the co-Yoneda lemma every simplicial presheaf $F : S^{op} \to SSet$, which we may regard as a presheaf $F : \Delta^{op}\times S^{op} \to Set$, is isomorphic to the weighted colimit
$F \simeq colim^\Delta F_\bullet$
which is equivalently the coend
$F \simeq \int^{[n] \in \Delta} \Delta^n \cdot F_n \,,$
where $F_n$ is the Set-valued presheaf of $n$-cells of $F$ regarded as an $SSet$-valued presheaf under the inclusion $Set \hookrightarrow SSet$, and where the SSet-weight is the canonical
cosimplicial simplicial set $\Delta$, i.e. for all $X \in S$
$F : X \mapsto \int^{[n] \in \Delta} \Delta^n \times F(X)_n \,.$
In particular therefore for $A$ a Kan complex-valued presheaf the descent condition reads
$[S^{op}, SSet](X,A) \stackrel{\simeq}{\to} [S^{op}, SSet](colim^\Delta Y_\bullet,A) \simeq lim^\Delta [S^{op}, SSet](Y_\bullet,A) \,.$
With the shorthand notation introduced above the descent condition finally reads, for all global-injective fibrant simplicial presheaves $A$ and hypercovers $U \to X$:
$A(X) \stackrel{\simeq}{\to} lim^\Delta A(Y_\bullet) \,.$
The right hand here is often denoted $Desc(Y_\bullet \to X, A)$, in which case this reads
$A(X) \stackrel{\simeq}{\to} Desc(Y_\bullet \to X, A) \,.$
formulation in terms of homotopy limit
(expanded version of remark 2.1 in DHI)
Using the Bousfield-Kan map every simplicial presheaf $F$ is also weakly equivalent to the weighted limit over $F_\bullet$ with weight given by $N(\Delta/(-)) : \Delta \to SSet$.
$lim^{N(\Delta/(-))} F_\bullet \stackrel{\simeq}{\to} lim^\Delta F_\bullet \,.$
But by the discussion at weighted limit, the left hand computes the homotopy limit of $F_\bullet$ (since $F_\bullet$ is objectwise fibrant, since $F_n$ factors through $Set \hookrightarrow SSet$),
hence we have a weak equivalence
$holim F_\bullet \stackrel{\simeq}{\to} F \,.$
Often the descent condition is therefore formulated with the cover $U$ replaced by its homotopy limit, whence it reads
$[S^{op}, SSet](X,A) \stackrel{\simeq}{\to} [S^{op}, SSet](hocolim U_\bullet,A) \,.$
With $A$ global-injective fibrant this is equivalent to
$[S^{op}, SSet](X,A) \stackrel{\simeq}{\to} holim [S^{op}, SSet](U_\bullet,A) \,.$
Using the notation introduced above this becomes finally
$A(X) \stackrel{\simeq}{\to} holim A(U_\bullet) \,.$ | {"url":"http://www.ncatlab.org/nlab/show/descent+for+simplicial+presheaves","timestamp":"2014-04-19T05:02:34Z","content_type":null,"content_length":"62161","record_id":"<urn:uuid:fd994a2c-fa63-4d20-aac5-24de3e95f5b1>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00251-ip-10-147-4-33.ec2.internal.warc.gz"} |
0 (year)
Year zero does not exist in the Anno Domini system usually used to number years in the Gregorian calendar and in its predecessor, the Julian calendar. In this system, the year 1 BC is followed by AD
1. However, there is a year zero in astronomical year numbering (where it coincides with the Julian year 1 BC) and in ISO 8601:2004 (where it coincides with the Gregorian year 1 BC) as well as in all
Buddhist and Hindu calendars.
Historical, astronomical and ISO year numbering systems[edit]
The Anno Domini era was introduced in 525 by Dionysius Exiguus (c.470–c.544), who used it to identify the years on his Easter table. He introduced the new era to avoid using the Diocletian era, based
on the accession of Emperor Diocletian, as he did not wish to continue the memory of a persecutor of Christians. In the preface to his Easter table, Dionysius stated that the "present year" was "the
consulship of Probus Junior [Flavius Anicius Probus Iunior]" which was also 525 years "since the incarnation of our Lord Jesus Christ".^[1] How he arrived at that number is unknown.
Dionysius did not use AD years to date any historical event. This began with the English cleric Bede (c. 672–735), who used AD years in his Historia ecclesiastica gentis Anglorum (731), popularizing
the era. Bede also used a term similar to the English before Christ once, but that practice did not catch on until very much later. Bede did not sequentially number days of the month, weeks of the
year, or months of the year, however, he did number many of the days of the week using a counting origin of one in Ecclesiastical Latin. Previous Christian histories used anno mundi ("in the year of
the world") beginning on the first day of creation, or anno Adami ("in the year of Adam") beginning at the creation of Adam five days later (the sixth day of creation according to the Genesis
creation narrative), used by Africanus, or anno Abrahami ("in the year of Abraham") beginning 3,412 years after Creation according to the Septuagint, used by Eusebius of Caesarea, all of which
assigned "one" to the year beginning at Creation, or the creation of Adam, or the birth of Abraham, respectively. Bede continued this earlier tradition relative to the AD era.
In chapter II of book I of Ecclesiastical history, Bede stated that Julius Caesar invaded Britain "in the year 693 after the building of Rome, but the sixtieth year before the incarnation of our
Lord", while stating in chapter III, "in the year of Rome 798, Claudius" also invaded Britain and "within a very few days … concluded the war in … the fortysixth [year] from the incarnation of our
Lord".^[2] Although both dates are wrong, they are sufficient to conclude that Bede did not include a year zero between BC and AD: 798 − 693 + 1 (because the years are inclusive) = 106, but 60 + 46 =
106, which leaves no room for a year zero. The modern English term "before Christ" (BC) is only a rough equivalent, not a direct translation, of Bede's Latin phrase ante incarnationis dominicae
tempus ("before the time of the lord's incarnation"), which was itself never abbreviated. Bede's singular use of 'BC' continued to be used sporadically throughout the Middle Ages.
It is often incorrectly^[citation needed] stated that Bede did not use a year zero because he did not know about the number zero. Although the Arabic numeral for zero (0) did not enter Europe until
the eleventh century, and Roman numerals had no symbol for zero, Bede and Dionysius Exiguus did use a Latin word, nulla meaning "nothing", alongside Roman numerals or Latin number words wherever a
modern zero would have been used.^[1]^[3]^[4]
The anno Domini nomenclature was not widely used in Western Europe until the 9th century, and the January 1 to December 31 historical year was not uniform throughout Western Europe until 1752. The
first extensive use (hundreds of times) of 'BC' occurred in Fasciculus Temporum by Werner Rolevinck in 1474, alongside years of the world (anno mundi).^[5] The terms anno Domini, Dionysian era,
Christian era, vulgar era, and common era were used interchangeably between the Renaissance and the 19th century, at least in Latin. But vulgar era was suppressed in English at the beginning of the
20th century after vulgar acquired the meaning of "offensively coarse", replacing its original meaning of "common" or "ordinary". Consequently, historians regard all these eras as equal.
Historians have never included a year zero. This means that between, for example, January 1, 500 BC and January 1, AD 500, there are 999 years: 500 years BC, and 499 years AD preceding 500. In common
usage anno Domini 1 is preceded by the year 1 BC, without an intervening year zero.^[6] Thus the year 2006 actually signifies "the 2006th year". Neither the choice of calendar system (whether Julian
or Gregorian) nor the era (Anno Domini or Common Era) determines whether a year zero will be used. If writers do not use the convention of their group (historians or astronomers), they must
explicitly state whether they include a year 0 in their count of years, otherwise their historical dates will be misunderstood.^[7]
To simplify calculations, astronomers have used a defined leap year zero equal to 1 BC of the traditional Christian era since the 17th century. Modern astronomers do not use years for intervals
because years do not distinguish between common years and leap years, causing the resulting interval to be inaccurate.
In astronomy, the numbering of all years labeled Anno Domini remain unchanged. However, the numerical value of years labeled Before Christ are reduced by one by the insertion of a year 0 before 1 AD.
Thus, astronomical BC years and historical BC years are not equivalent. To avoid this confusion, modern astronomers label years as positive or negative, instead of BC or AD.
The current method was created by Jacques Cassini, who explained:
The year 0 is that in which one supposes that Jesus Christ was born, which several chronologists mark 1 before the birth of Jesus Christ and which we marked 0, so that the sum of the years before
and after Jesus Christ gives the interval which is between these years, and where numbers divisible by 4 mark the leap years as so many before or after Jesus Christ.
—Jacques Cassini, Tables astronomiques, 5, translated from French
In this quote, Cassini used "year" as both a calendar year and as an instant before a year. He identified the calendar year 0 as the year during which Jesus Christ was born (on the traditional date
of December 25), and as calendar leap years divisible by 4 (having an extra day in February). But "the sum of years before and after Jesus Christ" referred to the years between a number of instants
at the beginning of those years, including the beginning of year 0, identified by Cassini as "Jesus Christ", virtually identical to Kepler's "Christi". Consider the three instants ('years') labeled 1
avant Jesus-Christ, 0, 1 après Jesus-Christ by Cassini, which modern astronomers would label −1.0, 0.0, +1.0. Cassini specified that his end years must be added, so the interval between the instants
(noon 1 January) 1 avant Jesus-Christ and 1 après Jesus-Christ is 1 + 1 = 2, but modern astronomers would subtract their 'years', +1.0 − (−1.0) = 2.0, which agrees with Cassini. The calendar years
between these two instants would be 2 BC and 1 BC, leaving the calendar year 1 AD beginning at +1.0 outside the interval.
Astronomical notation[edit]
Astronomers use year numbers not only to identify a calendar year (when placed alongside a month and a day number) but also to identify a certain instant (known in astronomy as an epoch). To identify
an instant, astronomers add a number of fractional decimal digits to the year number, as required for the desired precision: thus J2000.0 designates noon 2000 January 1 (Gregorian), and 1992.5 is
exactly 7.5 years of 365.25 days each earlier, which is the instant 1992 July 2.125 (03:00) (Gregorian). Similarly, J1996.25 is 3.75 Julian years before J2000.0, which is the instant 1996 April
1.8125 (19:30), one-quarter of a year after the instant J1996.0 = 1996 January 1.5. In this notation, J0000.0 is noon of −1 December 19 (Julian), and J0001.0 is 18:00 on 0 December 18 (Julian). This
astronomical notation is called Julian epoch and was introduced in 1984; before that time, astronomical year numbers with decimal fractions referred to Besselian years and were written without a
letter prefix.^[citation needed]
During the 19th century astronomers began to change from named eras to numerical signs, with some astronomers using BC/0/AD years while others used −/0/+ years. By the mid 20th century all
astronomers were using −/0/+ years. Numerical signs effectively form a new era, reducing the confusion inherent in any date which uses an astronomical year with an era named Before Christ.
History of astronomical usage[edit]
In 1849 the English astronomer John Herschel invented Julian dates, which are a sequence of numbered days and fractions thereof since noon 1 January −4712 (4713 BC), which was Julian date 0.0. Julian
dates count the days between two instants, automatically accounting for years with different lengths, while allowing for any arbitrary precision by including as many fractional decimal digits as
necessary. The modern mathematical astronomer Jean Meeus no longer mentions determining intervals via years, stating:^[8]
The astronomical counting of the negative years is the only one suitable for arithmetical purpose. For example, in the historical practice of counting, the rule of divisibility by 4 revealing the
Julian leap-years no longer exists; these years are, indeed, 1, 5, 9, 13, ... B.C. In the astronomical sequence, however, these leap-years are called 0, −4, −8, −12 ..., and the rule of
divisibility by 4 subsists.
—Jean Meeus, Astronomical algorithms
In 1627 the German astronomer Johannes Kepler first used an astronomical year which was to become year zero in his Rudolphine Tables. He labeled the year Christi and inserted it between years labeled
Ante Christum (BC) and Post Christum (AD) on the mean motion pages of the Sun, Moon, and planets.^[9] Then in 1702 the French astronomer Philippe de la Hire used a year he labeled Christum 0 at the
end of years labeled ante Christum (BC), immediately before years labeled post Christum (AD) on the mean motion pages in his Tabulæ Astronomicæ, thus adding the designation 0 to Kepler's Christi.^[10
] Finally, in 1740 the French astronomer Jacques Cassini (Cassini II), who is traditionally credited with the invention of year zero,^[11] completed the transition in his Tables astronomiques, simply
labeling this year 0, which he placed at the end of years labeled avant Jesus-Christ (BC), immediately before years labeled après Jesus-Christ (AD).^[12]
ISO 8601[edit]
ISO 8601:2004 (and previously ISO 8601:2000, but not ISO 8601:1988) explicitly uses astronomical year numbering in its date reference systems. Because it also specifies the use of the proleptic
Gregorian calendar for all years before 1582, some readers incorrectly assume that a year zero is also included in that proleptic calendar, but it is not used with the BC/AD era. The "basic" format
for year 0 is the four-digit form 0000, which equals the historical year 1 BC. Several "expanded" formats are possible: -0000 and +0000, as well as five- and six-digit versions. Earlier years are
also negative four-, five- or six-digit years, which have an absolute value one less than the equivalent BC year, hence -0001 = 2 BC. Because only ISO 646 (7-bit ASCII) characters are allowed by ISO
8601, the minus sign is represented by a hyphen-minus.
Other traditions[edit]
South Asian calendars[edit]
All eras used with Hindu and Buddhist calendars, such as the Saka era or the Kali Yuga, begin with the year 0. All these calendars use elapsed, expired, or complete years, in contrast with most other
calendars which use current years. A complete year had not yet elapsed for any date in the initial year of the epoch, thus the number 1 cannot be used. Instead, during the first year the indication
of 0 years (elapsed) is given in order to show that the epoch is less than 1 year old. This is similar to the Western method of stating a person's age – people do not reach age one until one year has
elapsed since birth (but their age during the year beginning at birth is specified in months or fractional years, not as age zero). However if ages were specified in years and months, such a person
would be said to be, for example, 0 years and 6 months or 0.5 years old. This is analogous to the way time is shown on a 24-hour clock: during the first hour of a day, the time elapsed is 0 hours,
$n$ minutes.
Maya historiography[edit]
Many Maya historians, but not all, assume (or used to assume) that a year 0 exists in the modern calendar and thus specify that the epoch of the Mesoamerican Long Count calendar occurred in 3113 BC
rather than 3114 BC. This would require the sequence 1 BC, 0, AD 1 as in early astronomical years.^[13]
In popular culture[edit]
• In the film The Beach, Leonardo DiCaprio's character is, during his mental instability, crazed about the term Year 0.
• Year Zero is a theatrical play that highlights the everyday struggles of a Cambodian-American family.^[14]^[15] (See Year Zero (political notion).)
• The fictitious theologian Franz Bibfeldt's most famous work relates to the year 0: a 1927 dissertation submission to the University of Worms entitled "The Problem of the Year 0".
• Germany, Year Zero is a 1948 film directed by Roberto Rossellini set in post-WWII Germany.
• Tokyo Year Zero is a novel by English author David Peace set in post-WWII Tokyo which depicts the occupation of Japan by the Allied Powers.
• The 1985 film Back to the Future shows the date December 25 0000 on the time circuits display of the DeLorean time machine as a joke and example of choice for witnessing the birth of Christ.
• In the Seinfeld episode "The Millennium", Jerry notifies Newman there was no year 0. Since Newman had set up a party for the "Millennium New Year," the party would actually fall on December 31,
2000/January 1, 2001, and thus his party will be late and "quite lame". Newman then squawks with frustration.
• Year Zero is an album by the industrial rock group Nine Inch Nails, and is a concept album and Alternate Reality Game based on a post-apocalyptic earth.
• Year Zero is a 2012 science fiction book by Rob Reid, in which aliens mark the beginning of a new era from the date they first hear human music.
• Year Zero is the title of the 6th track from the Swedish heavy metal band Ghost's second album, Infestissumam.
1. ^ Faith Wallis, trans. Bede: The Reckoning of Time (725), Liverpool: Liverpool Univ. Pr., 2004. ISBN 0-85323-693-3.
2. ^ Byrhtferth's Enchiridion (1016). Edited by Peter S. Baker and Michael Lapidge. Early English Text Society 1995. ISBN 978-0-19-722416-8.
3. ^ Werner Rolevinck, Fasciculus temporum.
4. ^ While it is increasingly common to place AD after a date by analogy to the use of BC, formal English usage adheres to the traditional practice of placing the abbreviation before the year as in
Latin (e.g., 100 BC, but AD 100).
5. ^ V. Grumel, La chronologie (1958), page 30.
6. ^ Jean Meeus, Astronomical algorithms (Richmond, Virginia: Willmann-Bell, 1991) 60.
7. ^ Tabulae Rudolphinae – Ioannes Keplerus (1627) 191 (42), 197 (48), 203 (54), 209 (60), 215 (66), 221 (72), 227 (78).
8. ^ Tabulae Astronomicae – Philippo de la Hire (1702), Tabulæ 15, 21, 39, 47, 55, 63, 71; Usus tabularum 4.
9. ^ Robert Kaplan, The nothing that is (Oxford: Oxford University Press, 2000) 103.
10. ^ [Jacques] Cassini, Tables astronomiques (1740), Explication et usage 5; Tables 10, 22, 53.
11. ^ Linda Schele, The proceedings of the Maya hieroglyphic workshop (Austin, Texas, 1992) page 173.
12. ^ Year Zero Playbill | {"url":"http://blekko.com/wiki/Year_zero?source=672620ff","timestamp":"2014-04-18T09:23:53Z","content_type":null,"content_length":"56354","record_id":"<urn:uuid:8452637f-1802-4ab0-bb32-d7d0e8ab5bf6>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00384-ip-10-147-4-33.ec2.internal.warc.gz"} |
North Bergen Precalculus Tutor
...Prior to that, I taught my robotics teams to create 3D-animations for competitions using Maya and 3D-Studio Max. I'm also proficient with GNU 3D programs such as Blender. I've been teaching
math including linear algebra for 10 years.
83 Subjects: including precalculus, chemistry, algebra 1, statistics
...I then earned a Masters of Arts in Teaching from Bard College in '07. I've been tutoring for 8+ years, with students between the ages of 6 and 66, with a focus on the high school student and
the high school curriculum. I have also been an adjunct professor at the College of New Rochelle, Rosa Parks Campus.
26 Subjects: including precalculus, calculus, physics, GRE
...I use these concepts and techniques to assist my students in making leaps and bounds in their classroom performance usually at a quite rapid pace. Physics, the meeting point of mathematics and
science to create logical thinking is the only science course I currently support teaching. Its methodology blends well with my analytical style in conjunction with my mastery of mathematics.
33 Subjects: including precalculus, physics, calculus, GRE
My name is Howard. I supervised a department of more than 30 mathematics teachers in one of New York City's largest high schools for more than 20 years. I've been involved in mathematics education
for my entire career and have had more than 25 years of successful experience as a tutor of middle school, high school and college mathematics courses and the math section of the SAT.
9 Subjects: including precalculus, geometry, algebra 1, algebra 2
...I have tutored the GRE at least 20 times. I am able to tutor students of all skill levels, from the most elementary students who need to dramatically improve a low score, to those who are
already performing well and want to reach for a perfect score. I often emphasize solution techniques that a...
32 Subjects: including precalculus, calculus, physics, geometry | {"url":"http://www.purplemath.com/North_Bergen_Precalculus_tutors.php","timestamp":"2014-04-17T19:50:32Z","content_type":null,"content_length":"24432","record_id":"<urn:uuid:3a32f40b-b7b5-45d6-b792-868d79b146c2>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00309-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Month in the Library
This article by Carol Otis Hurst first appeared in
Teaching K-8 Magazine
If your math skills are as weak as mine, it will come to a shock of the entire faculty when you announce that this month is Math Month in the library.
Start with a bulletin board based on How Much Is a Million? by David M. Schwartz and Steven Kellogg (Lothrop, 1985 ISBN 0-688-04050-0). Line a bulletin board with wrapping paper that has stars on it
and across it put a statement like "If this bulletin board had a million stars on it, it would be _____ feet long and ____ feet wide."
I don't know how many. I only think of these ideas, I can't do the math. You'll have to figure out how many stars there are on a square foot and go from there, I think.
Put some of the other facts from the book around the bulletin board. Put some books which accent math -- not those boring ones from the 511 section -- at least not most of them. Use picture books
that play with numbers and concepts. We'll get to them later in the article. In the middle of the display put a sign that says, "We've Got a Million of `Em -- or do we?"
Start a drive to collect a million pennies for some good cause. You'll probably never make it, but people are usually happy to find a place for their pennies and the amount of space taken up by even
100,000 of those coins is amazing and will help to give the kids some idea of what you're talking about when you deal with millions.
Start a contest asking kids to estimate how many books you have in the library. You'll know precisely, of course, because your inventory is always up to the minute, isn't it? Give a math book prize
to the closest estimate.
Play some math games in the library this month. I love the ideas in two of the books by Marilyn Burns with illustrations by Martha Weston: The I Hate Mathematics Book (Little, 1975 ISBN
0-316-11741-2) and Math for Smarty Pants (Little, 1982 ISBN 0-316-11739-0. With an upper grade class, pair off the kids and play the calendar game on page 58 in "Smarty Pants". Kids take turns trying
to be the first to say "December 31" There are some rules: The first person can't say December 31 but must give a month and day. The next person must name a later date but can only change either the
month or the day, not both. It should take them a while to see how to win it.
With a lower grade, try playing the one on page 44 of the same book where partners make a circle of six dots and take turns drawing line segments between them, each player using a different color
marker. The object is to force your opponent into making a triangle in his or her color before you have to.
Hand out papers with a literature math game such as:
Take the number of Little Pigs in the story.
Multiply it by the number of Billy Goats Gruff.
Divide by the number of people going to St. Ives.
Add the number of creatures in the bed at the beginning of Ahlsburg's book.
Add the number against the tide in the book by Bruce Clements.
Add the number of reindeer Santa had in A Visit from St. Nicholas.
Subtract the number of bad ants in Chris Van Allsburg's book.
Divide by the number of planets we have going around our sun.
Divide by the number of wishes we usually get in fairy tales and you get the number Johnny was in the book by Maurice Sendak.
Let the kids have more fun with numbers in literature by making up their own puzzles like the one above.
Accent a new math book each day in the library. Counting books should go on display, maybe with the question "Which one is cleverest?" with a place for kids to vote.
Mitsumasa Anno deserves a spot of honor. All of his books have a little math in them and most, by that ex-math teacher of Japan, are brim full of math concepts. Some of the strongest:
Anno's Alphabet - Crowell, 1975 ISBN 0-690-00540-7
Anno's Counting Book - Harper, 1977 ISBN 0-690-01287-X
Anno's Counting House - Putnam, 1982 ISBN 0-399-20896-8
Anno's Math Games - Putnam, 1989 ISBN 0-399-21615-4
Anno's Mysterious Multiplying Jar - Putnam, 1983 ISBN 0-399-2095-4
Anno's Sundial - Putnam, 1987 ISBN 0-399-21374-0
Topsy-Turvies - Weatherhill, 1970 ISBN 0-8348-2004-8
Anno's Flea Market - Putnam, 1984 ISBN 0-399-21031-8
All In a Day - Putnam, 1990 ISBN 0-399-61292-0
By the way, while you're exposing the kids to the work of Anno, make sure the math teachers know about his work as well. At the back of most of his books, Mr. Anno gives us suggestions for how they
might be used to strengthen math reasoning and concepts. Who can ask for more in a math month in the library?
But, since you are asking for more, here are some other books with a math twist:
Don and Audrey Wood's The Napping House (Harcourt, 1984 ISBN 0-15-256708-9). Regular readers of this column are going to think it's the only book I know because I cite it so often, but it's so
useful. Math concepts in it include diminishing size and estimation.
I mentioned this one in a previous column too, but only once I think: Alphonse Knows Zero Is Not Enough by H. Werner Zimmermann (Oxford, 1990 ISBN 0-19-540797-0). It's Halloween and Alphonse has a
big bowl of candy ready for the trick-or-treaters. He also has a mathematically literate mouse assistant. So, to be sure there are enough candies, Alphonse starts counting and eating and, as the
numbers grow larger on the mouse's charts, the bowl grows emptier. Mathematical activities abound. How many does he eat altogether? How many will he need to fulfill the requests waiting at the door
when he finishes? How big a bowl do you need to hold that many M & M's?
We did a whole page on using Jan Brett's The Mitten (Putnam, 1990 ISBN 0-399-21920-X). At any rate, we gave several suggestions for using it for math activities: figuring out how big the mitten would
have to be to hold average sized animals that somehow fit into it should keep you busy for a while.
The 500 Hats of Bartholomew Cubbins by Dr. Seuss (Vanguard, 1938 ISBN 0-8149-0889-6) counts hats and ends up with the prerequisite number, to be sure, but many of the numbers between one and five
hundred are skipped in the interests of avoiding tedium, but with children in the lower grades who need practice in recognizing numerals, you can give each of them a copy of a chart of numerals from
one to five hundred and then read the book aloud, asking them to cross out the numbers you mention.
Numbers on a simpler level are used in Patricia Polacco's Thunder Cake (Philomel, 1990 ISBN 0-399-22231-6). Baba's attempt, successful as it turns out, to get the little girl over her fear of
thunderstorms is to make a thunder cake. While they're rounding up the ingredients the grandmother instructs the child "When you see the lightning, start counting...real slow. When you hear the
thunder, stop counting. That number is how many miles away the storm is." Well, first find out if that's true. Weather books should help and may lead you off on a tangent for a while. They may also
tell you how far away a storm is when you first hear the thunder.
There's a wonderful poem in The Random House Book of Poetry for Children, selected by Jack Prelutsky and illustrated by Arnold Lobel (Random, 1983 ISBN 0-394-85010-6). Actually, there are lots of
wonderful poems in that collection, but the one I wanted a math focus on is "The Ants at the Olympics" by Richard Digance. It's the tale of the Jungle Olympics, apparently an annual event and the
ants who show up each year "are sloggers. They turn out for every event. With their shorts and their bright orange tee-shirts, their atheletes are proud they are sent." The poem is hysterical and
replete with potential math: Take the information from the poem and design the equipment for each event. Children can figure out how far the ants, given their size and speed, could walk from January
1st to August 1st (the time that the poem says it takes them to get from their home to the Olympic games site) and, therefore, where their homes and where the Olympics could be. Remember, it's a
Jungle Olympics.
After you've played enough with that poem, turn the kids loose to find other poems with a math base or with a math activity that could come from it. "Casey at the Bat" and Sandburg's "Arithmetic"
spring to my mind quickly. What's on the tip of your math tongue?
Related Areas of Carol Hurst's Children's Literature Site | {"url":"http://www.carolhurst.com/subjects/math/librarymath.html","timestamp":"2014-04-20T05:43:32Z","content_type":null,"content_length":"32375","record_id":"<urn:uuid:9e43261c-7ec3-44ee-aab9-a3e27d086262>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00268-ip-10-147-4-33.ec2.internal.warc.gz"} |
Spare me the math: the Lamb Shift
April 24, 2013
To kick off SMTM, I’ll look at a topic that I never really understood when I was in graduate school: the Lamb shift.
The Lamb Shift: what it is
In its most commonly-discussed form, the Lamb shift is a small effect. In fact, it’s a very small effect (which is probably why I never bothered to learn it in the first place). The Lamb shift is a
miniscule change in (some of) the energy levels of the hydrogen atom relative to where it seems like they should be. For example, the binding energy of an electron to the hydrogen nucleus (a proton)
is about $13.6$ electron volts. The Lamb shift is a phenomenon that changes this energy level by about $4\times10^{-6}$ eV, or about $0.00003$%. But the existence of this shift was a serious puzzle
to physicists in the 1940s and 50s, and its final resolution provided a beautiful piece of physics that helped spur the development of quantum electrodynamics, one of the most spectacularly
successful scientific theories in history.
The essence of the Lamb shift can be stated like this: it is the energy of interaction between hydrogen and empty space.
The dominant contribution, of course, to the energy of the hydrogen atom is the interaction of the electron with the proton it’s orbiting. If you want a really quick way to derive the energy of the
hydrogen atom, all you need to remember is that the size of the electron cloud around the proton has some characteristic size $a$. Confining the electron to within this cloud costs some kinetic
energy $\sim \hbar^2 /m a^2$, and it buys you some energy of attraction, $\sim -e^2/a$ (here I’m being too lazy to write out the $4\pi\epsilon_0$s that come in SI units). So the total energy is
something like $E \sim \hbar^2/ma^2 - e^2/a$. If you minimize $E$ with respect to $a$ (take the derivative and set it to zero), you’ll find that
$a \sim \hbar^2/me^2$
$E \sim -me^4/\hbar^2 \sim -e^2/a$.
The constant $a \approx 0.5$ Angstroms is the Bohr radius, which is the typical size of the hydrogen atom (and, roughly speaking, any atom). The energy $-e^2/a \equiv R$ is the Rydberg energy.
The Lamb shift comes from the way this balanced state between electron and proton is influenced by the slight, random buffetings from the vacuum itself.
The hydrogen atom
There are two players in this story: the hydrogen atom, and empty space. I’ll describe the former first, since the latter (paradoxically) is considerably harder.
In reality, the Lamb shift is most easily observed in excited states of hydrogen (the P states, see Footnote 1 at the bottom), but for the purpose of this discussion it’s easiest to think about the
ground state. In terms of the electron probability cloud, the ground state of the hydrogenic electron looks like this:
It has a peak right at the middle of the atom, and it falls of exponentially.
I know there is a lot of trickiness associated with whether to think about an electron as a particle or as a wave (my own favorite take is here — in short, an electron is a particle that surfs on a
wave), but for this discussion it’s easiest to think of an electron as a point object that just happens to arrange itself in space according to the probability density plotted above.
The vacuum
There is a lot going on in empty space. If this crazy idea is completely new to you, I would (humbly) suggest reading my post on the Casimir effect. The upshot of it is that all of space is filled
by endlessly boiling quantum fields, and one of these, the electromagnetic field, is responsible for conveying electromagnetic forces. As a result of its indelible boiling, however, the
electromagnetic field can push on charged objects, like our electron, even when there are no other charged objects around to seemingly initiate the pushing.
To get a better description of the electromagnetic field in vacuum, it will be helpful to imagine that our electron sits inside a large metal box with size $L$. Inside this metal box are lots of
randomly-arising electromagnetic waves (“virtual photons”). Something like this:
When dealing with quantum fields, a good rule of thumb is to expect that, in vacuum, every possible oscillatory mode will be occupied by one quantum of energy. In this case, it means that for every
possible vector $k = 2\pi / \lambda$, where $\lambda = 2L, 2L/2, 2L/3, 2L/4, ...$ is a permissible photon wavelength, there will be roughly one photon present inside the box. This photon has an
energy $E_k = \hbar c k$, where $c$ is the speed of light. One can estimate the typical magnitude of the electric field the photon creates, $|\vec{\mathcal{E}_k}|$, by remembering that $|\vec{\
mathcal{E}_k}|^2$ gives the energy density of an electric field. Since the photon fills the whole box, this means that $|\vec{\mathcal{E}_k}|^2 \times L^3 \sim E_k$, or $|\mathcal{E}_k|^2 \sim \hbar
c k/L^3$. This electric field oscillates with a frequency $ck$.
So now the stage is set. The hydrogen atom sits inside a “large box” (which we’ll do away with later), and inside the box with is a huge mess of random electric fields that can push on the
electron. Now we should figure out how all this pushing affects the hydrogen energy.
[By the way, you may be bothered by the fact that all these randomly-arising photons seem to endow the interior of the box with an infinite amount of energy. If that is the case, then there's
nothing much I can say except that you and I are in the same club, with only speculation to assuage our uneasiness.]
How the vacuum pushes on hydrogen
The essence of the Lamb shift is that the random electric fields push on the electron, and in doing so they move it slightly further away from the proton, on average, than it would otherwise be.
Another way to say this is that the distribution of the electron’s position gets blurred over some particular (small) length scale $\delta r$. In particular, the sharp peak in the distribution near
the center of the atom should get slightly rounded, like this:
The resulting shift of the electron distribution away from the center lowers the interaction energy of the electron to the proton. To estimate the amount of energy that the electron loses, you can
think that in those moments where the electron happens to approach be within a distance $\delta r$ of the nucleus, it frequently finds itself getting pushed outward by an amount $\delta r$. As a
result of this outward push it loses an energy $e^2/\delta r$. This means that the Lamb shift energy
$\Delta E \sim (e^2/\delta r) \times [\text{fraction of time the electron spends within } \delta r \text{ of the nucleus}]$
$\Delta E \sim (e^2/\delta r) \times (\delta r)^3/a^3$
$\Delta E \sim e^2 (\delta r)^2/a^3$
Now all that’s left is to estimate $\delta r$.
The trick here is to realize that all of those photons within the metal “box” are independently shaking the electron, and each push is in a random direction. So if some photon with wave vector $k_1$
produces, by itself, a displacement of the electron $(\delta r)_{k_1}$, then the total displacement $(\delta r)$ satisfies
$(\delta r)^2 = (\delta r)^2_{k_1} + (\delta r)^2_{k_2} + (\delta r)^2_{k_3} ...$
[This is a general rule of statistics: independently-contributing things add together in quadrature.]
In our case, each $(\delta r)_k$ comes from the influence of a photon, which has electric field $\vec{\mathcal{E}_k}$. The simplest way to estimate $(\delta r)_k$ is to imagine that the electric
field $\vec{\mathcal{E}_k}$ pushes on the electron for a time $\tau \sim 1/kc$ (the period of its oscillation), after which it reverses its direction (as shown on the right). During that time, the
acceleration of the electron is something like $|\vec{A}| \sim |\vec{F}|/m$, where $\vec{F} = e\vec{\mathcal{E}_k}$ is the force of the electric field pushing on the electron, and its net
displacement $(\delta r)_k \sim |\vec{A}| \tau^2$. This means
$(\delta r)_k \sim e |\vec{\mathcal{E}_k}|/mk^2 c^2$.
Now we should just add up all the $(\delta r)^2_k$s. Since there is a very large number of photons with a nearly continuous range of energies (due to the very large size of the confining “box”), we
can replace a sum over all $k$‘s with an integral: $\sum_k (\delta r)^2_k\rightarrow L^3 \int d^3 k (\delta r)^2_k$. Inserting the expressions for $(\delta r)_k$ and $|\vec{\mathcal{E}_k}|^2$ gives
the following:
$(\delta r)^2 \sim (e^2 \hbar/m^2 c^3) \int (1/k) dk$.
You can notice that the size $L$ of the box drops out of the expression, which is good because the box was completely fictitious anyway.
The only remaining thing to figure out is what to do with the integral $\int(1/k) dk$, which is technically equal to infinity. In physics, however, when you get an infinite answer, it means that you
forgot to stop counting things that shouldn’t actually count. In this case, we should stop counting photons whose wavelength is either too short or too long to affect the electron. On the long
wavelength side, we should stop counting photons when their wavelength gets bigger than the size of the atom, $a$. Such long wavelength photons make an electric field that oscillates so slowly that
by the time it has changed sign, the electron has likely moved to a completely different part of the atom, and the net effect is zero. One the short wavelength side, we should stop counting photons
when their wavelength gets shorter than the Compton wavelength. Such photons are super-energetic, with energy larger than $mc^2$, which means their energy is so high that they don’t push around
electrons anymore: they spontaneously create new electrons from the vacuum. [In essence, it doesn't make sense to talk about electron position at length scales shorter than the Compton wavelength.]
Using these two wavelengths as the upper and lower cutoffs of the integral gives $\int (1/k) dk = ln(1/\alpha)$. Here, $\alpha = e^2/\hbar c \approx 1/137$ is the much-celebrated fine structure
[It is perhaps worth pausing to note, as so many before me have done, what a strange and interesting object the fine structure constant is. $\alpha$ contains only the most fundamental constants of
electricity, quantum mechanics, and relativity, ($e, \hbar$, and $c$), and they combine to produce exactly one dimensionless number. How strange that this number should be as large as 137. As a
general rule, fundamental numbers produced by the universe are usually close to $1$.]
Now we have all the pieces necessary to assemble a result for the Lamb shift. And actually, if you like the fine structure constant, then you’ll love the final answer. It looks like this:
$\Delta E/R \sim \alpha^3 \ln(1/\alpha)$.
At the beginning of this post I mentioned that the Lamb shift is very small — only about 1/500,000 of the energy of hydrogen (the Rydberg energy, $R$). Now, if you want to know why the Lamb shift
is so small, the best answer I have is that the Lamb shift is proportional to $\alpha^3$, and in our universe the fine structure constant $\alpha$ just happens to be a small number.
It’s interesting to note that if we somehow lived in a universe where $\alpha$ was not so small, then the Lamb shift would get big, and those random fluctuations of the quantum field would get large
enough to completely knock the electron off the nucleus. This would be a universe without atoms, and consequently, without you and me.
1) You might notice that the Lamb shift appeared only because the electron probability cloud had a peak at the center of the atom. If it didn’t have a peak — say, if it went to zero near the center
— then there would be no Lamb shift. This is in fact exactly how the Lamb shift was discovered. Certain excited states of the hydrogen atom have a peak near the center and others go to zero. So
while normal quantum mechanics predicts that, say, the 2S and 2P states (shown to the right) have the same energy, in fact the Lamb shift makes a small difference between them. This difference can
be observed as a faint [DEL:radio wave:DEL] microwave signal from interstellar hydrogen.
2) It’s probably worth noting that if you increased the fine structure constant $\alpha$, you would have run into bigger problems long before you started fussing about the Lamb shift.
3) Also, while $\alpha$ is a small number in our normal world, it’s not hard to imagine synthetic worlds (the interior of certain materials) where $\alpha$ is not a small number. For example,
graphene is a material inside of which there is an effective speed of light that is 300 times smaller than $c$. This makes $\alpha$ very close to 1, and the sort of effects discussed in this post
get very real. This is part of why there has been so much ado about graphene among physicists: it’s quite an exciting (and frustrating) playground for people like me.
4) I just came across this video of Freeman Dyson (one of my personal favorite physicists) explaining the Lamb shift and some of the history behind it. His conceptual summary of it starts at 2:43.
1. April 25, 2013 4:27 am
I really enjoyed your post. There was a lot that I missed, but a lot that I took in. I would suggest for people that don’t have the physics experience that you have, to define every variable. You
don’t necessarily have to explain every single one, but it would be nice what every variable stood for in your equations.
Thanks again. Good post, and good blog.
□ April 25, 2013 8:24 am
Sorry. I hate when people do that. I think what I missed is that $\hbar$ is Planck’s constant (divided by $2\pi$), $m$ is the electron mass, and $e$ is the electron charge.
2. April 25, 2013 4:41 pm
Why does the Lamb shift only matter at the center of the electron distribution? I reread that section a few times and it seems like the electron should be equally buffeted in all space.
□ April 25, 2013 5:02 pm
Hi Steve,
You’re right; the electron does get equally buffeted in all space. But for most of the time, that small buffeting (which displaces the electron by about 0.1 femtometers) doesn’t really
matter. After all, who cares whether the electron is 1 angstrom away from the nucleus, or 1.000001 angstroms away. If the electron happens to be very close the nucleus, on the other hand,
that very small displacement makes a big difference in the attraction energy between the electron and the proton. | {"url":"http://gravityandlevity.wordpress.com/2013/04/24/the-lamb-shift/","timestamp":"2014-04-21T12:19:19Z","content_type":null,"content_length":"99874","record_id":"<urn:uuid:f74146f0-0a15-419d-9074-c35f7133e8dd>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00521-ip-10-147-4-33.ec2.internal.warc.gz"} |
This concise and practical text describes the major educational computer applications and provides methods for using computer tools effectively in the teaching/learning process.
The author focuses on the word processor, database, spreadsheet, Internet, and hypermedia software–tools that all classrooms with computers have. The text is independent of hardware and equally
applicable to Macs or PCs, and speaks to methods that apply across grade levels and disciplines. The text has been extensively class tested and written from the perspective of what will work for
teachers. Many helpful models, lesson plans, skill–building tips and activities are included to allow students to pick up this book and put it to use in the classroom right away.
Table of Contents
Chapter One
Teaching with Computers Effectively?3
Technology Operations and Concepts3Learning Environments4
Teaching, Learning, and the Curriculum5What Is an Instructional Model?6Assessment and Evaluation7Productivity and Professional Practice8Social, Ethical, Legal, and Human
Preparing Students for the World of Work10
Three Kinds of Computer Use11
Teaching about Computers: Computer Literacy11Using
the Computer as a Teacher for Your Students13Using a Computer
as a Cognitive Tool15
Annotated Resources20
Chapter Two
An Introduction to Computers for Teaching 22
Instructional Models and Computers25
What Does the Research Say about Using Computers in Classrooms?26
Constructing Technology-Supported Lessons28
Annotated Resources30
The Internet: Information Retrieval
and Communication
Acquiring and judging the value of information and exchanging information are the topics of Part Two of this book. One of the greatest strengths of the Internet is its role as a repository of
information. In addition, Internet-based communication, including e-mail and web-based conferencing, helps students acquire information from each other and from experts. It provides opportunities for
during problem-based learning activities. Furthermore, the Internet is becoming
a classroom itself. It is a medium in which a broad range of courses and learning activities are available for both children and adults. In addition to its role as a repository for information, the
Internet is a powerful tool for communication.
You will learn how to design instruction based on communication over the Internet. As your students use the Internet for this purpose, they will improve their writing skills as they acquire
Chapter Three
Information Retrieval 32
A Short History of the Internet32
The Modern Internet34
Using the Internet for Research35
Asking Questions35Accessing Information37Analyzing39
The WebQuest42Copyright Issues and the Internet45Bloom’s Taxonomy and Internet Research47
Distance Learning50
Interactive Television50Internet-Based Courses51Summary
of Key Elements of Distance Instruction52
Annotated Resources55
Chapter Four
Web Tools: E-mail and Discussion Boards 58
Discussion Boards62
A More Elaborate Use of E-mail67
Asynchronous Communication: Tools and Methods71
E-mail71Web Boards73Keeping Track in a Discussion: Three Ways74Search Function75Discussion Monitoring77Planning and Evaluating Asynchronous Communication Projects78
Annotated Resources81
Part ThreeDisplaying Information
Before the computer, students had fewer formats in which they could display
information. They wrote most reports in text–handwritten or typewritten. Some students would cut pictures out of magazines to include with reports. All charts and graphs were hand made and hand
calculated. Students with poor writing
skills had limited opportunities to work with many facts and ideas on the higher levels of Bloom’s taxonomy because they had to be more concerned with producing a legible product with passable
grammar. This is not to say that legibility and grammar are not important, but a focus on them can keep students from learning other skills that are just as important. Presentation software and word
processors allow students to work with large ideas and concepts, much as the
calculator shifts students from a focus on computational errors to looking at the large ideas in mathematics.
The process that students use to display information in a computer-based presentation provides opportunities for them to organize and contextualize the
information. Organizing information, or, better, finding the organization that is inherent in information, is one way to learn it well (Gagne et al., 1993; Woolfolk, 2000).*
*References for Woolfolk and Gagne et al. appear in Chapters 5 and 6, respectively.
Chapter Five
Presentation Software 83
Office Suites and Teachers–How Do They Apply
to Classrooms?83
Capabilities of Office Suites83Office Suites and Projects83
Presentation Software86
Displaying Information: Key to Creating Understanding88
The Role of Interactivity88
Executing a Hypermedia-Supported Lesson Plan91
Annotated Resources93
Chapter Six
Graphic and Interface Design Principles 94
Rule 1: Use General Design Principles95Rule 2: Orient
Users95Rule 3: Justify Text Appropriately97Rule 4: Limit Type Styles98Rule 5: Limit Colors98Rule 6: Standardize Use
of Colors99Rule 7: Enhance Text with Graphics and Interactivity99Rule 8: Eliminate Superfluous Items99Rule 9: Use Upper-
and Lowercase99Rule 10: Keep Text Lines Short100Rule 11:
Use Single Spacing100Rule 12: Simplify the Structure100Rule 13: Limit the Focus100Rule 14: Provide Emphasis101Rule 15: Know Your Audience101Rule 16: Do Not
Flash101Rule 17: Use Lists102Rule 18: Navigate Consistently102Rule 19: Do Not
Stack Text102Rule 20: Include Multiple Graphic Types102Rule 21: Organize the Screen102Rule 22: Size Matters103Rule 23: Placement Matters103
Annotated Resources105
Chapter Seven
Outlines, Idea Maps, and Storyboards 107
Idea Maps109
Concepts: Examples and Properties111Questions and Answers about
Idea Mapping113
Hot Words121Hot Graphics122Icons122
Menus122Branches That Help Users Get around
in the Software123
Annotated Resources129
Chapter Eight
Evaluating Student Presentations 131
Creating Standards for Your Students132Some Notes
on the Components of the Rubrics134
Questions and Answers about Using Multimedia
Chapter Nine
Educational Applications of Word Processing 143
Management Issues: How Many Computers
Do You Have?144
One-Computer Classroom144Five-Computer Classroom:
“Jigsaw Model”144Laboratory145
The Models: Using the Word Processor to Teach
Content and Skills145
High-Level Analysis and Skills146Targeted Learning Problems151
Word Processing Tips151
Bullets and Numbered Lists151Using Tables to Organize
Information152Making Links to the Internet152
Importing Information from Other Applications154Spelling
and Spell Checkers154Readability Statistics and Grammar
Text-Reading Software157
Annotated Resources162
Part Four
Analyzing Data with Databases
and Spreadsheets
Chapter Ten
Databases: What They Are and How They Work 163
Solving Problems Outside the Classroom: Three Stories164
A Business Problem164A Scientific Problem164
An Ethical and Sociological Problem164Databases Help
People Think about Difficult Problems165
Databases in the Classroom165
How Do Databases Support Student Learning?166What Do
Students and Teachers Need to Know?167
Getting Started: Teaching the Tool167
Form View168Table View or List View169
Sorts and Queries172
The Sort: Putting Information in Order172The Query: Classifying Information175Grade-Level Suggestions178How to Provide Student Assistance179
Planning Your Database182
Chapter Eleven
Building a Database-Supported Lesson 186
Objectives 186
Templates for Building Database-Supported Lessons186
Learning with a Database: Describing an Unknown188
Analyzing a Lesson Plan193
Understanding the Steps195
Set Up the Problem195Teach the Nature of the Questioning
Process199Focus and Explore200Students Write Their
Own Questions201Require a Product204Have Students
Make Comparisons204Encourage Students to Resolve
Discrepancies205Encourage Students to Think about Using
Databases to Solve Other Problems205
Summative Evaluation of a Database Project205
Annotated Resources210
Chapter Twelve
Acquiring Data 212
How Do Teachers Acquire Datasets?212
Data on the Internet: Examples of Some Good Sites213
Formatting Data for Use in a Database215
Technique 1: Making Raw Internet Data Usable215Technique 2: Internet Databases with Their Own Search Engines218Technique 3: Building Your Own Database219
Annotated Resources221
Chapter Thirteen
Using Spreadsheets to Think
about Numbers 223
Numbers as Tools beyond Math223
Choosing the Problem225
The Versatile Spreadsheet225
Easy Spreadsheet Tools225
Descriptive Statistics228
Example: Understanding How Soil Affects Plants228
Descriptive Statistics: What Do They Mean?233
Median, and Mode and Scales of Measurement235Standard
Using Simple Arithmetic Outside the Math Class237
Charts and Graphs240
Bar Charts and Column Charts240Pie Charts242Area Charts and Bar Charts–Looking at Data over Time243Pivot
A Model for Spreadsheet Use247
Bloom’s Taxonomy and Spreadsheets247
Annotated Resources250
Appendix AYour Network 251
Appendix BFile Management 259
Appendix CChat and Internet Conferencing 263
White Board263
Application Sharing264
File Sharing266
Advantages and Disadvantages 266
Audio and Video Conferencing267
Appendix DConcept Maps 273
Idea Maps for Events277
Looking at the Big Picture283
Appendix E
Sample Database for an English Class 287
American Society Reflected in Fiction287
Step 1288Step 2288Step 3289Step 4289
Purchase Info ?
With CourseSmart eTextbooks and eResources, you save up to 60% off the price of new print textbooks, and can switch between studying online or offline to suit your needs.
Once you have purchased your eTextbooks and added them to your CourseSmart bookshelf, you can access them anytime, anywhere.
Buy Access
Tech tactics: Technology for Teachers, CourseSmart eTextbook, 3rd Edition
Format: Safari Book
$42.99 | ISBN-13: 978-0-13-714489-1 | {"url":"http://www.mypearsonstore.com/bookstore/tech-tactics-technology-for-teachers-coursesmart-etextbook-013714489X","timestamp":"2014-04-19T09:30:43Z","content_type":null,"content_length":"37625","record_id":"<urn:uuid:3ec0b811-3ee1-4380-a1c6-215b18c5234a>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00050-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mass, Density, And Volume (what units?)
May 14th 2011, 07:21 AM #1
Apr 2011
Mass, Density, And Volume (what units?)
Volume = 45 cubic meters
Density = 1900 kg/cubic meters
Mass = (45)(1900) = 85500
is the unit in kilograms?
because in this site> DENSITY
it says tons. i'm confused.
Volume = 45 cubic meters
Density = 1900 kg/cubic meters
Mass = (45)(1900) = 85500
is the unit in kilograms?
because in this site> DENSITY
it says tons. i'm confused.
We know that mass is equal to density times volume
In symbols we have
$m=\rho V$
So density must be
$\rho =\frac{m}{V}$
Density can have any units of mass divided by volume. In the SI system it is common to have Kilograms per cubic meter, but others are often used as well.
Thanks. But in that specific problem, is 85500 in kilogram units? because that's just the first part.
The whole problem is this:
A building organization has to transport a conical heap of sand. How many 3-ton dumpers are required to transport the sand if the measurements gave the following results: the length of the
circumference of the base circle is equal to 35.2m, the generator to 9.5m? each dumper has to carry five trips.. the density of sand equals 1.9x10^3 kg/m^3
Thanks. But in that specific problem, is 85500 in kilogram units? because that's just the first part.
The whole problem is this:
A building organization has to transport a conical heap of sand. How many 3-ton dumpers are required to transport the sand if the measurements gave the following results: the length of the
circumference of the base circle is equal to 35.2m, the generator to 9.5m? each dumper has to carry five trips.. the density of sand equals 1.9x10^3 kg/m^3
First a metric tonne is gives by
$10^3 kg=1 \text{ metric tonne }$
Tonne - Wikipedia, the free encyclopedia
You can find the radius of the circle using the formula
$C=2\pi r$
the length of the circumference of the base circle is equal to 35.2m, the generator to 9.5m?
I have no idea what you mean by the part of this sentence after the comma. I hope it is some how related to either the height or the slant height of the cone.
May 14th 2011, 07:27 AM #2
May 14th 2011, 07:37 AM #3
Apr 2011
May 14th 2011, 07:55 AM #4 | {"url":"http://mathhelpforum.com/math-topics/180525-mass-density-volume-what-units.html","timestamp":"2014-04-18T15:16:48Z","content_type":null,"content_length":"43267","record_id":"<urn:uuid:4175e265-1011-4713-8059-a1ffeecd7d99>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00451-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: Reduce the number of fminsearch iterations
Replies: 2 Last Post: Oct 30, 2013 9:37 AM
Messages: [ Previous | Next ]
Torsten Reduce the number of fminsearch iterations
Posted: Oct 30, 2013 7:23 AM
Posts: 1,439
Registered: "Sonia" wrote in message <l4qmda$50k$1@newscl01ah.mathworks.com>...
11/8/10 > Is it possible to make the fminsearch algorithm stop as soon as it evaluates the objective function at a given x as zero?
> I am optimizing for one variable and what happens is that the fminsearch finds a point where the objective function f(x) = 0, but it continues to iterate until the simplex is
contracted around that value of x that gives 0. I would like it to stop as soon as the fun(x)=0
Use an output function to make fminsearch stop iterating:
Best wishes
Date Subject Author
10/30/13 Reduce the number of fminsearch iterations Sonia
10/30/13 Reduce the number of fminsearch iterations Torsten
10/30/13 Re: Reduce the number of fminsearch iterations Steven Lord | {"url":"http://mathforum.org/kb/message.jspa?messageID=9314607","timestamp":"2014-04-17T13:38:01Z","content_type":null,"content_length":"18962","record_id":"<urn:uuid:a001ed8c-35d5-4985-bcb8-6461ef70ec10>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00558-ip-10-147-4-33.ec2.internal.warc.gz"} |
THE METHODS OF CALCULATION OF ELLIPSOIDAL POLYGONS AREAS BASED ON SOME MAP PROJECTIONS PROPERTIES
P. Pedzich, J. Balcerzak
Warsaw University of Technology, Department of Geodesy and Cartography, Institute of Photogrammetry and Cartography
Calculation of ellipsoidal polygons is very important problem in geodesy and cartography.
There are known some methods of area calculation of geodetic polygons but they are limited to small figures.
The paper presents some new methods of calculation areas of geodetic polygons which use some basic properties of map projection especially locale scale.
There are presented 4 methods. The first one is based on approximation of polygon by elementary trapezoids limited by parallels and meridians. The second method uses approximation of ellipsoidal
polygon by elementary spherical triangles. In the third method there is employed equal-area projection of ellipsoid onto sphere. The forth method uses equal-area projection of ellipsoid into plane.
In the end there is presented reduction of area located between curved image of geodetic line and its chord.
Moreover paper presents comparison of these methods and its application to calculation areas of administration districts in Poland. | {"url":"http://icaci.org/files/documents/ICC_proceedings/ICC2007/abstracts/html/2_Oral2_3_THE%20METHODS%20OF%20CALCULATION%20OF%20ELLIPSOIDAL%20POLYGONS%20AREAS.htm","timestamp":"2014-04-21T09:39:28Z","content_type":null,"content_length":"9258","record_id":"<urn:uuid:2bb8f4f3-3709-4414-8df4-027077641852>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00004-ip-10-147-4-33.ec2.internal.warc.gz"} |
A 3*3 matrix space problem
up vote 1 down vote favorite
A matrix subspace $S\subset M_n(C)$ is called "good", if there is two linear independent elements of $S$, says $E_1,E_2$ which are simultaneously singular valued decomposable, i.e., $E_1=UD_1V$ and
$E_2=UD_2V$ with $D_1$, $D_2$ diagonal and $U,V$ unitary.
Now the question becomes: if all three-dimensional subspace $S\subset M_3(C)$ is good?
This problem is try to undertand how hard could simultaneously singular valued decomposation be, and how powerful could linear combination of matrix be. $n=3$ is the simplest case.
2 Your $\in$ should be $\subset$, and adding some sort of motivation (e.g. why consider $n=3$ for instance) would probably encourage people to solve your question. – Thierry Zell Nov 7 '10 at 17:15
2 What is your supervisor's reaction or advice to this question? – Yemon Choi Nov 8 '10 at 0:26
@Yemon Choi :Beautiful, but not so important. – gondolf Nov 8 '10 at 3:03
add comment
1 Answer
active oldest votes
wlog one can assume that $||E_1||=1$.
up vote 5 down Let $X$ be the span of the matrix units $E_{11},E_{21},E_{31}$. Then for every $2$ linearly independent operators $E_1, E_2$ in this space there exists a unitary matrix $U$ such
vote accepted that $UE_1=E_{11}$, $UE_2=\lambda E_{21}+E_{11}$. But $E_{11}$ and $\lambda E_{21}+E_{11}$ are not simultaneously singular valued decomposable.
1 @Kate Juschenko:There is a problem of your proof: if there exists a unitary $U$ such that $UE_1=E_{11}, UE_2=E_{21}$, then $$0=tr(E_{11}^*E_{21})=tr(E_1^*U*UE_2)=tr(E_1^*E_2)$$,
which leads to $E_1,E_2$ are orthogonal. – gondolf Nov 8 '10 at 1:58
sure, the second should be corrected to: $E_{11}+\lambda E_{21}$, where $\lambda\in \mathbb{C}$. – Kate Juschenko Nov 8 '10 at 5:28
1 Yes, you are correct! Thanks! – gondolf Nov 8 '10 at 6:06
add comment
Not the answer you're looking for? Browse other questions tagged linear-algebra or ask your own question. | {"url":"http://mathoverflow.net/questions/45147/a-33-matrix-space-problem?sort=oldest","timestamp":"2014-04-17T09:59:49Z","content_type":null,"content_length":"56187","record_id":"<urn:uuid:5203ace6-c6d7-45a2-b00e-77e03ddc3538>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00025-ip-10-147-4-33.ec2.internal.warc.gz"} |
Can you figure out how many squares are in this picture? [Image]
If you counted 27… you’d be wrong.
[via Facebook]
36 comments
For anyone that hasn’t figured it out yet, the answer is 40.
[@Unicorn02] What a great animation, it makes it clear. Also proves me right! ;-)
Frig, just saw another 2 squares I previously missed. However, I also consider these to been ineligible as their lines are also cut off by the 2 smaller squares in the middle. Therefore my
revised number, if counting the squares I consider ineligible, is 36.
Okay, wasn’t fully paying attention this am as had other things to do. Now that I’m done I could fully concentrate.
Final answer is 25. I still think that the inner squares don’t count. The smaller squares cut off the corners of the other squares plus cut of the lines of the square in the centre, thus making
them ineligible for counting.
Main square outline = 1
Divide main square in quarters with 2 lines = 4
Divide each inner square in quarters = 16 – 8 = 8 (This is where I think the number should be 8 not 16)
Plus when looking at it you see another 3 inner squares in the middle = 3 – 1 =2
Inside each of those 2 inner squares is another smaller square = 2
Those smaller squares are again divided = 8
However if you count the 9 squares which I think don’t count then the number is 34
That is it, no more doing this.
Ashraf please tell, what is the correct number?
David Roper
I’m getting 42, but if I keeping seeing them again (they didn’t say to count just once), then I get 84, no wait 126…wait…wait…
Seamus McSeamus
Thanks. Now I have a headache. Not a mother-in-law headache, but still a headache.
whole square +1
small single squares +18
smaller 1/4 size single squares +8
2×2 top,middle,bottom row squares +9
3×3 corner squares +4
Total= 1+18+8+9+4 = 40
[@kgcrafts] Oh wait – I found four more…..42 total.
Luca F.
“2×2 sets of those smaller squares = 5″
Darcy, the above 2×2 squares are actually 9. Just find the first one (upper-left angle), then shift it for just one square on the right and find the second one, then again for the third one, and
you reach the right side of the big square. Then repeat going down for one square, and so on.
I hope I made it clear, because my English is a little poor.
I’m only finding 36, can’t see the extra 4 mentioned above.
The entire diagram = 1
Smaller squares in a 4×4 grid plus the two similar size ones overlapping lines = 18
2×2 sets of those smaller squares = 5
3×3 sets of those smaller squares = 4
2 sets of 4 smallest size squares = 8
That totals 36 to me. Anyone care to point out the other 4?
8 inside + 8 little ones and the main outside one.
No squares in the word ‘picture’
Luca F.
I also think 40.
The big one that is 4×4 cells.
There are 4 squares that are 3×3 at each corner.
There are 9 squares 2×2.
There are 16 squares 1×1.
The two squares overlapped in the middle.
The 4 little squares inside each of the two squares in the middle.
I counted 19 as well.
Oops, just realized I missed 2, therefore, revised answer is 21.
Hmmm… second time, I only counted 38. I must’ve counted some twice the first time.
Dave B
[@Dave B]
I also got 36 the first time, but decided to recount to double check that was all of them. Managed to find 4 more.
Going out on a limb here, but I say 19.
Original large square = 1, 4 squares inside on the left and right side = 8, 1 square in the middle of top 2 rows and 1 square in middle of bottom 2 rows = 2 (Note, the squares in the middle of
those two rows eliminate those rows as being squares.)
Then those 2 squares each contain 4 small squares which = 8.
Therefore, 1 + 8 + 2 + 8 = 19
I could be wrong but that is my guess.
40 (36 at first), because unicorn02′s answer kept me looking for the last 4. | {"url":"http://dottech.org/125485/can-figure-many-squares-picture-image/","timestamp":"2014-04-16T11:22:14Z","content_type":null,"content_length":"127348","record_id":"<urn:uuid:f1d903a2-2cd4-44b5-9c24-730fcb7290cd>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00099-ip-10-147-4-33.ec2.internal.warc.gz"} |
3D Plots in R
February 13, 2014
By Joseph Rickert
by Joseph Rickert
Recently, I was trying to remember how to make a 3D scatter plot in R when it occurred to me that the documentation on how to do this is scattered all over the place. Hence, this short organizational
note that you may find useful.
First of all, for the benefit of newcomers, I should mention that R has three distinct graphics systems: (1) the “traditional” graphics system, (2) the grid graphics system and (3) ggplot2. The
traditional graphic system refers to the graphics and plotting functions in base R. According to Paul Murrell’s authoritative R Graphics, these functions implement the graphics facilities of the S
language, and up to about 2005 they comprised most of the graphics functionality of R. The grid graphics system is based on Deepayan Sarkar’s lattice package which implements the functionality of
Bill Cleveland’s Trellis graphics. Finally, ggplot2 Hadley Wickham’s package based on Wilkinson's Grammar of Graphics, took shape between 2007 and 2009 when ggplot2 Elegant Graphics for Data Analysis
There is considerable overlap of the functionality of R’s three graphics systems, but each has its own strengths and weaknesses. For example, although ggplot2 is currently probably the most popular R
package for doing presentation quality plots it does not offer 3D plots. To work effectively in R I think it is necessary to know your way around at least two of the graphics systems. To really gain
a command of the visualizations that can be done in R, a person would have to be familiar with all three systems as well as the many packages for specialized visualizations: maps, social networks,
arc diagrams, animations, time series etc.
But back to the relatively tame task of 3D plots: the generic function persp() in the base graphics package draws perspective plots of a surface over the x–y plane. Typing demo(persp) at the console
will give you an idea of what this function can do.
The plot3D package from Karline Soetaert builds on on persp()to provide functions for both 2D and 3D plotting. The vignette for plot3D shows some very impressive plots. Load the package and type the
following commands at the console: example(persp3D), example(surf3D) and example(scatter3D) to see examples of 3D surface and scatter plots. Also, try this code to see a cut-away view of a Torus.
# 3D Plot of Half of a Torus
par(mar = c(2, 2, 2, 2))
par(mfrow = c(1, 1))
R <- 3
r <- 2
x <- seq(0, 2*pi,length.out=50)
y <- seq(0, pi,length.out=50)
M <- mesh(x, y)
alpha <- M$x
beta <- M$y
surf3D(x = (R + r*cos(alpha)) * cos(beta),
y = (R + r*cos(alpha)) * sin(beta),
z = r * sin(alpha),
main="Half of a Torus")
Created by Pretty R at inside-R.org
The scatterplot3d package from R core members Uwe Ligges and Martin M achler is the "go-to" package for 3D scatter plots. The vignette for this package is shows a rich array of plots. Load this
package and type example(scatterplot3d) at the console to see examples of spirals, surfaces and 3D scatterplots.
The lattice package has its own distinctive look. Once you see one lattice plot it should be pretty easy to distinguish plots made with this package from base graphics plots. Load the packate and
type example(cloud) in the console to see a 3D graph of a volcano, and 3D surface and scatter plots.
rgl from Daniel Adler and Duncan Murdoch and Rcmdr from John Fox et al. both allow interactive 3D vizualizations. Load the rgl package and type example(plot3d) to see a very cool, OpenGL, 3D scatter
plot that you can grab with your mouse and rotate.
For additional references, see the scatterplot page of Robert Kabacoff's always helpful Quick-R site, and Paul E. Johnson's 3D Plotting presentation.
for the author, please follow the link and comment on his blog:
daily e-mail updates
news and
on topics such as: visualization (
), programming (
Web Scraping
) statistics (
time series
) and more...
If you got this far, why not
subscribe for updates
from the site? Choose your flavor:
, or | {"url":"http://www.r-bloggers.com/3d-plots-in-r/","timestamp":"2014-04-16T07:43:36Z","content_type":null,"content_length":"45766","record_id":"<urn:uuid:6a329b21-6771-4f56-94b0-b019b5b48b29>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00110-ip-10-147-4-33.ec2.internal.warc.gz"} |
Order, Chaos,and the End of Reductionism
The author presents a case against reductionism based on the emergence of chaos and order from underlying non-linear processes. Since all theories are mathematical, and based on an underlying premise
The author presents a case against reductionism based on the emergence of chaos and order from underlying non-linear processes. Since all theories are mathematical, and based on an underlying premise
of linearity, the author contends that there is no hope that science will succeed in creating a theory of everything that is complete. The controversial subject of life and evolution are explored,
exposing the fallacy of a reductionist explanation, and offering a theory of order emerging from chaos as being the creative process of the universe, leading all the way up to consciousness. The
essay concludes with the possibility that the three-dimensional universe is a fractal boundary that separates order and chaos in a higher dimension. The author discusses the work of Claude Shannon,
Benoit Mandelbrot, Stephen Hawking, Carl Sagan, Albert Einstein, Erwin Schrodinger, Erik Verlinde, John Wheeler, Richard Maurice Bucke, Pierre Teilhard de Chardin, and others. This is a companion
piece to the essay "Is Science Solving the Reality Riddle?"
Total Views
Views on SlideShare
Embed Views | {"url":"http://www.slideshare.net/John47Wind/order-chaosand-the-end-of-reductionism","timestamp":"2014-04-20T01:30:41Z","content_type":null,"content_length":"259303","record_id":"<urn:uuid:73c6068e-1871-4f5a-b565-651568bb34f2>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00448-ip-10-147-4-33.ec2.internal.warc.gz"} |
Debojyoti Ghosh
These are some codes that I am developing as a part of my research. They are mostly written in C and require only the GNU C compiler (and MPICH). Some versions may require PETSc. Compiling them is
pretty straightforward. I use Git for source code management and maintain a copy of my repositories on Bitbucket. Feel free to download, modify and use these codes if you think they are useful. Most
of them should have a helpful "readme" file. Feedback is welcome.
Since I am not a trained software developer, documentation may be lacking. However, the codes should make sense to someone with the appropriate background in numerical methods and the relevant
Numerical Solution of Hyperbolic-Parabolic partial differential equations: This provides a basic, unified framework (serial/parallel) for the numerical solution to n-dimensional hyperbolic-parabolic
PDEs on Cartesian grids. A few physical models are included (linear advection-reaction-diffusion, Fokker-Planck model for power systems, Euler equations for inviscid fluid dynamics, etc) and others
can be added with relative ease. Several spatial discretization schemes (1st order, 3rd order MUSCL, 5th order WENO, 5th order CRWENO, 2nd/4th order central compact and non-compact schemes, etc) and
multi-stage Runge-Kutta time-integration schemes are available and new ones can be added easily. The code can also be compiled with PETSc to use the time-integration methods provided by its TS
Dependencies: MPICH (if not available, a serial version is compiled), PETSc (optional)
[more details] [source]
A 3D, Cartesian, incompressible Navier-Stokes solver with immersed boundary technique: The code is based on the fractional-step method and uses a higher-order upwind reconstruction schemes for the
convection term and a second-order central discretization for the viscous and pressure correction terms. The viscous terms are treated using implicit time-integration and convection is treated
explicitly. The parallel version uses a 3D, Cartesian domain decomposition. The code can also be compiled with PETSc to use the time-integration methods provided by its TS module.
Dependencies: MPICH (if not available, a serial version is compiled), PETSc (optional)
[more details] [source]
Serial and parallel (MPI) algorithms for the direct solution (LU decomposition) of non-periodic tridiagonal systems of equations: Two functions are available, one utilizing the recursive-doubling
algorithm and the other solves the system in parallel by rearranging the points adjacent to the sub-domain boundaries. The latter algorithm uses the recursive-doubling or gather-and-solve algorithms
to solve the reduced system resulting from the rearrangement. These functions can solve a single system as well as a stack of systems. A code is also provided to test the functions as well as their
Dependencies: MPICH (if not available, a serial version is compiled)
[more details] [source] | {"url":"http://www.mcs.anl.gov/~ghosh/misc.html","timestamp":"2014-04-21T12:50:57Z","content_type":null,"content_length":"7758","record_id":"<urn:uuid:23c8a0d4-d810-4388-8b34-409addb47513>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00100-ip-10-147-4-33.ec2.internal.warc.gz"} |
Distance from point to plane
1. 199605
Distance from point to plane
See attached page for the rest of the question
Let P be a point not on the plane that passes through the points Q, R, and S. Show that the distance d from P to the plane is....
This provides an example of finding the difference between a point and a plane. | {"url":"https://brainmass.com/math/geometry-and-topology/199605","timestamp":"2014-04-19T01:48:32Z","content_type":null,"content_length":"28306","record_id":"<urn:uuid:87efbe80-d373-41aa-a39b-2fe02bd1e247>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00246-ip-10-147-4-33.ec2.internal.warc.gz"} |
Numbers Please: Economic Data
The Economist's Toolbox
Numbers Please: Economic Data
In the business press, you will read economic data. Often these data are called economic indicators because they indicate the level of activity or future activity in some area of the economy. I will
explain economic data and indicators as they arise, but first I want to introduce you to some tools that economists use to organize and present data. These tools—various types of tables and
charts—portray relationships between data more clearly. They help you understand the nature and degree of the economic activity that the data represent.
Reading Tables
Much of the economic data you will encounter will be presented in tables. To orient you to tables of economic data, we're going to work with actual GDP data. Table 3.1 shows annual Gross Domestic
Product for the years 1990 to 2001 in two different ways and with two different growth rates.
Table 3.1 Gross Domestic Production (1990-2001)
(1) Year (2) GDP in Billions of Nominal Dollars (3) Percent Change Based on Nominal Dollars (4) GDP in Billions of Real Dollars* (5) Percent Change Based on Real Dollars*
1990 5,803 5.7 6,708 1.8
1991 5,986 3.2 6,676 -0.5
1992 6,300 5.6 6,880 3.0
1993 6,642 5.1 7,063 2.7
1994 7,054 6.2 7,348 4.0
1995 7,401 4.9 7,544 2.7
1996 7,813 5.6 7,813 3.6
1997 8,318 6.5 8,160 4.4
1998 8,781 5.6 8,509 4.3
1999 9,274 5.6 8,859 4.1
2000 9,825 5.9 9,191 3.8
2001 10,082 2.6 9,215 0.3
When you look at a table, be sure to read the title of the table and all the headings for the columns and rows carefully. It's easy for many people to plunge into the numbers without really reading
the words, but it's the words that tell you what numbers you are looking at.
*1996 dollars
Source: Bureau of Economic Analysis
Now, there are a lot of numbers here and some unfamiliar terms in the column headings, so let's take this piece by piece.
The title of the table indicates that it covers Gross Domestic Product for a twelve-year period from 1990 to 2001. (These are all actual values from the Bureau of Economic Analysis website, which
I'll tell you about at the end of this section.) I've numbered the Columns 1 through 5 for easy reference as I walk you through the table. The column headings describe the data in the column.
Taking each column in its turn, Column 1 indicates the year for the data in that row. (So far, so good.) The other columns require a bit more explanation.
Values expressed in nominal dollars, also known as current dollars, have not been adjusted for the effect of inflation. They are values reported in the dollars of each year being examined. Values
expressed in real dollars, also known as inflation-adjusted dollars, are free of the effect of inflation. Economists use real dollars in many analyses because they want to understand economic
activity without distortions introduced by inflation. For instance, if nominal incomes are rising, but real incomes are falling, consumers are actually worse off even though they are making “more
Nominal vs. Real Dollars
Column 2 is GDP in billions of nominal dollars. These dollar values are expressed in billions, meaning that you should imagine that each dollar figure in the table is followed by nine (yes, nine)
zeros. So GDP for 2001 is $10,082,000,000,000, or ten trillion, eighty-two billion dollars. Another way of writing this would be $10.082 trillion.
Nominal dollars, also known as current dollars, are dollar values that have not been adjusted for inflation. They are dollars counted the way everyone counted them in the year they correspond to,
with the effect of inflation included. Again, we will learn about inflation, but you know that inflation causes money to lose its value. If the general rate of inflation is 3 percent a year, then the
average item that cost $100 on January 1 of that year would cost $103 by December 31. The price is “inflated” because the dollar lost some of its value, that is, some of its purchasing power.
Economists want to be able to look at what's going on in the economy without the effect of inflation. They want to know that GDP or exports or incomes are really growing, not just being inflated by a
dollar that is losing its value. So, to get rid of the effect of inflation, they convert current dollars into real dollars, which are also known as inflation-adjusted dollars. In Table 3.1 they do
this by converting all of the values in Column 2 into the 1996 dollar values you see in Column 4. I won't bother you with the calculations economists use to do this, but they do it. Also, they could
have pegged the real value to the value of the dollar in another year, for instance 1985. The key thing is to convert the value of GDP across all years to the value of the currency in one base year.
That way, you are comparing year-to-year growth in real GDP, not nominal GDP.
To see the results of their calculations, let's jump over to Column 4. In Column 4, GDP is valued in billions of real dollars. As a result, we see that some of the value of our $10 trillion economy
is indeed due to inflation. In fact, the real, inflation-adjusted value of the 2001 GDP in 1996 dollars is “only” $9.2 trillion.
So now when you hear a newscaster say, “Real GDP grew by 2.5 percent last year,” you'll know what it means. In fact, GDP growth rates reported in the business news usually are based on real,
inflation-adjusted values.
The use of 1996 as the base year for converting nominal to real dollars creates an issue that will help you understand the nature of real dollars. You may have noticed that in the year 1996—and only
in the year 1996—GDP has the same value in both nominal and real dollars: $7.813 trillion. Before the year 1996, the real dollar values for GDP are higher than the nominal dollar values. After 1996,
the real dollar values are lower. That's because after 1996, the conversion from nominal to real dollars deflates the nominal GDP number. However, before 1996, the nominal values inflate GDP to the
value of the dollar in 1996, which was higher. (This would not have occurred if real dollars valued in a base year before 1990 had been used.) Comparing the nominal and real values of GDP in any
given year doesn't really do all that much for us.
What does this say about real dollars? It says that they are best used when comparing a value from one period to the next. Knowing that in 1991 GDP grew by 3.1 percent in nominal terms but fell by
0.5 percent in real terms tells me a lot. It tells me there was a recession going on—a fact that would escape me if all I had to work with were current dollars. But knowing that the value of GDP in
1991 was $5.986 trillion in 1991 dollars and $6.676 trillion in 1996 dollars doesn't do me much good at all.
Keep on Growing
Over the long term, meaning 20 years and longer, U.S. GDP grows at an average of about 3 percent in real terms. It's interesting that even in 1990 to 2001, a period characterized by tremendous
advances in technology, healthy levels of consumer spending and business investment, relatively low inflation, and good management of the economy by the government, real GDP still grew at an average
rate of about 2.85 percent.
One factor in this is that the U.S. economy is so large, even if newsworthy growth occurs in various areas, as it did in high technology or Las Vegas, there are still large, older, slower-growing
industries and regions that offset that growth. Also, some of the growth in the 1990s was, in a way, not real, as we will see.
Columns 3 and 5 tell us about GDP growth. The point of using real, rather than nominal, dollars becomes really clear when you look at growth over the years. For instance, in 2001 nominal GDP grew by
2.6 percent. But real GDP grew by a paltry 0.3 percent, a little over zero. The rate of growth of real GDP is telling a much more accurate story than the rate of growth for nominal GDP.
Here's another interesting way of looking at GDP growth. (It's a good idea to refer to Table 3.1 during this discussion.) Nominal GDP grew from just over $5.8 trillion back in 1990 to nearly $10.1
trillion in 2001. That means that GDP grew by a total of 74 percent in the 11 years from 1990 to 2001—in nominal terms (10,082 - 5803 = 4,279 and 4,279÷5803 = .737 or 74%). However, real GDP grew by
only 37 percent in the same period (9,215 - 6,708 = 2,507 and 2,507÷6,708 = .373 or 37%). That's half the rate of nominal growth!
While tables are kept in a handy place in the economist's toolbox, charts are every bit as important.
How to Construct and Read a Chart
If one picture is worth a thousand words, one chart may be worth a thousand numbers. It's certainly the only way to deal with a thousand numbers. Charts or graphs enable economists to see things they
are always looking for: trends in data and economic activity, and relationships between two or more economic concepts or activities.
A chart almost always consists of two lines, each called an axis, one horizontal and one vertical. (Some charts feature one horizontal and two vertical axes, but we won't get into them here.) A point
is plotted on a chart by using the axes as coordinates that define the spot where the point goes.
For example, the following chart relates the quantity of baloney sold to the price of baloney at a (fictitious) supermarket chain. The manager of the chain has only four pieces of data: When the
price of baloney is $3, they sell 2,000 pounds per month, and when the price is $1.50, they sell 7,000 pounds per month.
You plot a point on this chart by going along the vertical (price) axis to find the $3 mark and along the horizontal (quantity) axis to find the 2,000 pound mark. Those are your coordinates for the
first point to be plotted, which I've labeled point “A.” You do the same for the next two pieces of data, the price of $1.50 and the quantity of 7,000 pounds. Those coordinates yield point “B.”
Notice that charts should be properly labeled, as this one is. The title of the chart, “Demand for Baloney,” is clear, and each axis is labeled for the data that it represents. Also, the
measure—whether it is dollars or pounds of baloney—is mentioned in parentheses. To understand a chart, you must have this information.
Now the two points on a chart can be connected, like in Figure 3.2.
When we connect two or more data points, the result is a curve. Figure 3.2 relates the price of baloney to the quantity sold—it tells us about the demand for baloney at these prices. Even if they are
not actually curved, they are called curves. Some of the curves would normally be drawn as actual curves, but I have generally used straight lines to keep the data simple and the explanations clear.
Economists also call a curve like this—in which two variables are related—a function. That's because the quantity of baloney sold—the demand for baloney—is a function of its price. The two variables
in the function are price and quantity. Of course, other variables may affect the demand for baloney, such as the time of year or the price of ham. But those are left out of this function so that the
economist can isolate the relationship between price and quantity.
A function is a curve or a mathematical formula (usually both) that expresses the relationship between two facts, which are known as variables. In a function, the variables are usually numbers. Of
course, the political climate or a war could be a variable as well, but to find its way into a function, it would have to be expressed as a number, which economists can do.
The relationship between two related variables may be positive or inverse. In a positive relationship, as one variable increases, the other increases, and as one decreases, the other decreases. In an
inverse relationship, when one variable increases the other decreases, and when one decreases, the other increases.
A scatter diagram shows the various data points plotted onto a chart. If the variables are related, the plotted points on a chart will tend to cluster into a pattern. The line of best fit is a line
plotted through a scatter diagram that represents the relationship between the two variables.
Incidentally, the relationship between the price of baloney and the demand for it is inverse. That means that the higher the price, the lower the quantity sold, and the lower the price, the higher
the quantity sold. Variables can also be in a positive relationship, meaning that as one variable increases, the other increases, and as one decreases, the other decreases.
Figure 3.3, “Household Income and Spending on Clothing” depicts a positive relationship between two variables.
This chart needs no numbers to illustrate the positive relationship between income and spending on clothing: the higher the income the higher the amount spent on clothing, the lower the income, the
lower the amount spent on clothing.
Two more points about charts are important.
First, when economists, managers, or analysts plot data points on a chart, those points don't line up so that you can connect the dots and have a nice smooth curve in the way I do here. For instance
consider the following set of data points, which, again, relate spending on clothing to annual household income.
The points all over the lower part of this chart resemble the kind of pattern that often results when the data on two related variables are plotted on a chart. This is called a scatter diagram, for
the obvious reason that the data points are scattered over the chart. Data points that represent variables that are related will cluster into a pattern, but they certainly won't fall neatly into a
line or curve.
Then what's that curve doing in there?
That curve is called the line of best fit. This is a line that is plotted through a set of data so that the relationship between the two variables becomes clearer. The line of best fit is calculated
mathematically by computer in such a way that the distances between all of the variables and the line is minimized. In other words, the line of best fit is the line that is as close as possible to
all of the data points. (That's why it's called the line of best fit—it's the line that best fits into the pattern of the data points on the chart.)
Again we will be dealing with simple curves. However, I want you to know that in reality, the data, the curves, and the functions are not always that simple.
Finally, what happens in situations where there is no relationship between two variables?
Suppose you were to plot average annual temperature in the United States over the past 50 years against each year's growth in real GDP. You might wind up with a chart that looked very much like the
one in Figure 3.5.
Here there is no relationship between the variables, and no meaningful line of best fit or function to be developed. There is no discernable pattern to help us relate GDP growth to average
GDP growth and average temperature might, however, be related variables when considering the GDP of a single state. (Yes, each state has its own GDP.) For instance, unusually warm years might hurt
the GDP of a state such as Vermont or Idaho, where ski -resorts bring in significant sums of money from other states.
Charting GDP
In addition to helping us clarify relationships between variables, charts give us a way to visually depict trends over time by relating a value on one axis, usually the vertical, to a point in time,
usually on the horizontal axis.
For example, let's return to the nominal and real GDP data we examined earlier and look at the trend in GDP over that 12-year period. Trends are best seen in line charts, which plot the values of a
variable over time.
Figure 3.6 is a line chart of the values for nominal and real GDP from Table 3.1. The solid line represents nominal GDP, and the dotted line represents real GDP.
It is up to the economist, analyst, or manager reviewing the data to think through relationships and their meaning when considering various data. No computer—or any other tool—can do that for us.
The chart is saying that annual GDP grew significantly in both nominal and real terms. The chart shows that nominal GDP grew faster than real GDP because the solid (nominal GDP) line is steeper than
the dotted (real GDP) line.
Line charts are widely used in the investment profession to track the performance of various stocks. In finance, they are used to track sales, expenses, and other numbers. In economics, they are used
mainly to see the trend of a variable such as GDP, income, and specific types of spending and production over time.
Also with a line chart and its underlying mathematical function, economists try to forecast the future performance of GDP, income, spending, production, and other variables. The mathematical
functions—called equations—that represent the line are quite complex. But essentially, in economic forecasting and econometrics, which I mentioned in Overview of Economics, an economist plots a line
that represents the relationship among multiple variables and attempts to gauge the future path of that line, and thus the future value of whatever is being forecasted.
A bar chart is another tool that helps us make comparisons between variables over time. For example, again, going back to Table 3.1, the bar chart in Figure 3.7 shows both the growth rates for both
real and nominal GDP from 1990 to 2001.
The chart clearly shows that real GDP growth was substantially less than nominal growth, especially in the years 1990, 1991, and 2001, when real growth was less than half of nominal growth.
In economics and in business, tables and charts are the two main tools for organizing, presenting, and analyzing data.
Excerpted from The Complete Idiot's Guide to Economics © 2003 by Tom Gorman. All rights reserved including the right of reproduction in whole or in part in any form. Used by arrangement with Alpha
Books, a member of Penguin Group (USA) Inc.
To order this book direct from the publisher, visit the Penguin USA website or call 1-800-253-6476. You can also purchase this book at Amazon.com and Barnes & Noble. | {"url":"http://www.infoplease.com/cig/economics/numbers-please-economic-data.html","timestamp":"2014-04-21T16:48:21Z","content_type":null,"content_length":"45668","record_id":"<urn:uuid:4fa074ea-2b67-4a0e-98c9-fb7d0df98b7b>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00140-ip-10-147-4-33.ec2.internal.warc.gz"} |
W. Kahan's
Ph.D. (Math., University of Toronto, 1958)
Professor Emeritus of Mathematics, and of E.E. & Computer Science
863 Evans Hall (Math), and 513 Soda Hall (CS)
Now that I am "retired", I work in my
offices sporadically at least once or
twice a week during each semester.
Phone: (510) 642-5638 (rings in both offices)
E-mail to me at this address obfuscated to diminish spam:
wkahan at (omit underscores) E_E_C_S_D0T_B_E_R_K_E_L_E_Y_D0T_E_D_U
My e-mail is read at most once every week or two. There is much too much of it.
Better ways to communicate with me are ...
... by paper mail: EECS Dept., MS#1776, Univ. of Calif., Berkeley CA 94720-1776
... by telephone (or leave a message for me to call back), or
... by visiting when I am in my office (telephone me first).
Use e-mail only if you must; but be advised that the University's
e-mail is not so secure that you can trust its confidentiality.
Awards etc.
Selected Publications
7094II System Support for Numerical Analysis
IBM SHARE Secretarial Distribution SSD#159 Item C-4537 (1966), and in
Error in Numerical Computation, Univ. of Mich. Eng'g Summer Conf'ce #6818
Numerical Analysis (1968) (A retyped reproduction)(PDF file)
Analysis and Refutation of the LCAS
ACM SIGNUM Newsletter, Vol. 26, No. 3, July 1991, pp. 2-15.
Also, ACM SIGPLAN Notices, Vol. 27, No. 1, January 1992, pp.61-74.
Accurate Singular Values of Bidiagonal Matrices
(with J. Demmel), SIAM J. Scientific Statistical Computation, Vol. 11, No. 5, 1990, pp. 873-912.
Branch Cuts for Complex Elementary Functions, or Much Ado About Nothing's Sign Bit
in The State of the Art in Numerical Analysis, (eds. Iserles and Powell), Clarendon Press, Oxford, 1987.
A Wordsize- and Radix-Independent Standard for Floating-Point Arithmetic
(with W. J. Cody et al.), IEEE Micro, August 1984, pp. 86-100.
Mathematics Written in Sand
Proc. Joint Statistical Mtg. of the American Statistical Association, 1983, pp. 12-26.
(A legible reproduction is available below) (PDF file)
Norm-Preserving Dilations and Their Applications to Optimal Error Bounds
(with C. Davis and H. F. Weinberger), SIAM J. Numerical Analysis, Vol. 19, 1982, pp. 445-469.
Residual Bounds on Approximate Eigensystems of Nonnormal Matrices
(with B. N. Parlett and E. Liang), SIAM J. Numerical Analysis, Vol. 19, 1982, pp. 470-484.
A Family of Anadromic Numerical Methods for Matrix Riccati Differential Equations
(with Ren-Cang Li), Mathematics of Computation, Vol. 81 #227, Jan. 2012, pp. 233-265.
(PDF file)
Unconventional Schemes for a Class of Ordinary Differential Equations --
-- With Applications to the Korteweg-deVries Equation
(with Ren-Cang Li), J. Computational Physics, Vol. 134, 1997, pp. 316-331.
Composition Constants for Raising the Orders of Unconventional Schemes for Ordinary Differential Equations
(with Ren-Cang Li), Math. of Computation, Vol. 66, 1997, pp. 1089-1099.
Pinchings and Norms of Scaled Triangular Matrices
(with Ren-Cang Li & Rajendra Bhatia), Linear & Multilinear Algebra, Vol. 50, 2002, pp. 15-21.
Is there a small skew Cayley transform with zero diagonal ?
Linear Algebra and its Applications 417 (2006) pp. 335-341.
(A version with fewer misprints is available here:) (PDF file)
Error bounds from extra-precise iterative refinement (with J.W. Demmel et al.)
ACM Transactions on Mathematical Software (TOMS) archive 32, Issue 2 (June 2006) pp. 325 - 351
(PDF file)
Files available from this homepage, last updated 9 Aug. 2012
Old postings above get updated and new ones get added from time to time.
W. Kahan's self-portrait.
Visit count: | {"url":"http://www.eecs.berkeley.edu/~wkahan/","timestamp":"2014-04-20T10:48:06Z","content_type":null,"content_length":"14272","record_id":"<urn:uuid:bd379267-fedc-451e-b316-33e36d149bb2>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00115-ip-10-147-4-33.ec2.internal.warc.gz"} |
Economics in Plain English
Mar 23 2012
What makes oligopolistic markets, which are characterized by a few large firms, so different from the other market structures we study in Microeconomics? Unlike in more competitive markets in which
firms are of much smaller size and one firm’s behavior has little or no effect on its competitors, an oligopolist that decides to lower its prices, change its output, expand into a new market, offer
new services, or adverstise, will have powerful and consequential effects on the profitability of its competitors. For this reason, firms in oligopolistic markets are always considering the behavior
of their competitors when making their own economic decisions.
To understand the behavior of non-collusive oligopolists (non-collusive meaning a few firms that do NOT cooperate on output and price), economists have employed a mathematical tool called Game
Theory. The assumption is that large firms in competition will behave similarly to individual players in a game such as poker. Firms, which are the “players” will make “moves” (referring to economic
decisions such as whether or not to advertise, whether to offer discounts or certain services, make particular changes to their products, charge a high or low price, or any other of a number of
economic actions) based on the predicted behavior of their competitors.
If a large firm competing with other large firms understands the various “payoffs” (referring to the profits or losses that will result from a particular economic decision made by itself and its
competitors) then it will be better able to make a rational, profit-maximizing (or loss minimizing) decision based on the likely actions of its competitors. The outcome of such a situation, or game,
can be predicted using payoff matrixes. Below is an illustration of a game between two coffee shops competing in a small town.
In the game above, both SF Coffee and Starbuck have what is called a dominant strategy. Regardless of what its competitor does, both companies would maximize their outcome by advertising. If SF
coffee were to not advertise, Starbucks will earn more profits ($20 vs $10) by advertising. If SF coffee were to advertise, Starbucks will earn more profits ($12 vs $10) by advertising. The payoffs
are the same given both options for SF Coffee. Since both firms will do best by advertising given the behavior of its competitor, both firms will advertise. Clearly, the total profits earned are less
when both firms advertise than if they both did NOT advertise, but such an outcome is unstable because the incentive for both firms would be to advertise. We say that advertise/advertise is a “Nash
Equilibrium” since neither firm has an incentive to vary its strategy at this point, since less profits will be earned by the firm that stops advertising.
As illustrated above, the tools of Game Theory, including the “payoff matrix”, can prove helpful to firms deciding how to respond to particular actions by their competitors in oligopolistic markets.
Of course, in the real world there are often more than two firms in competition in a particular market, and the decisions that they must make include more than simply to advertise or not. Much more
complicated, multi-player games with several possible “moves” have also been developed and used to help make tough economic decisions a little easier in the world of competition.
Game theory as a mathematical tool can be applied in realms beyond oligopoly behavior in Economics. In each of the videos below, game theory can be applied to predict the behavior of different
“players”. None of the videos portray a Microeconomic scenario like the one above, but in each case a payoff matrix can be created and behavior can be predicted based on an analysis of the incentives
given the player’s possible behaviors.
Assignment: Watch each of the five videos below. For each one, create a payoff matrix showing the possible “plays” and the possible “payoffs” of the game portrayed in the video. Predict the outcome
of each game based on your understanding of incentives and the assumption that humans act rationally and in their own self-interest.
“Batman – the Dark Night” – the Joker’s ferry game:
“Princess Bride” – where’s the poison?:
“Golden Balls” – split or steal:
“The Trap” – the delicate balance of terror
“Murder by Numbers” – the interrogation
Discussion Questions:
1. Why is oligopoly behavior more like a game of poker than the behavior of firms in more competitive markets?
2. What does it mean that firms in oligopolistic markets are “inter-dependent” of one another?
3. Among the videos above, which games ended in the way that your payoff matrix and understanding of human behavior and rational decision making would have predicted?
4. How often did the equilibrium outcomes according to your analysis of the payoff matrices correspond with the socially optimal outcome (i.e. the one where total payoffs for all players are
maximized or the total losses minimized)?
Jul 01 2011
Rational ‘bee’havior
Economists make several assumptions about humans, a fundamental assumption being that we are rational decision makers, able to weight costs and benefits of our actions and pursue the option that
maximizes our benefits and minimizes our costs, thus leading to the greatest personal happiness, or utility. Only if this assumption holds true does a free market economic system, made up of
individuals pursuing their own interests lead to a socially beneficial outcome.
But are humans the only animal driven by rational, self-interested, benefit maximizing and cost minimizing behavior? Is our ability to make the right decision based on a complex set of options and
variables made possible by our large brain and hundreds of thousands of years of adaptation? To some extent, our biology must drive our decision making and therefore the institutions and
organizations that have allowed our species to thrive. But let us not think we are the only species to have thrived due to our rationality.
If you’re like me, you’ve often wondered to what extent animals can think. I have a dog, and after five years I still can’t figure out if he really likes me or if he has just learned that I’m the one
who feeds him and scratches his belly, so he demonstrates the behaviors that offer the greatest rewards in terms of food and attention, and those behaviors are ones that I enjoy about him. It is a
win win relationship, for sure, but is his behavior evidence of rationality, or just his biological need for food and attention? Is my dog’s behavior the outcome of a series of rational,
self-interested calculations, or is it something more simple we usually associate with animal behavior: instinct?
Rationality may be as much a biological instict as an economic one. A recent study out of the UK has found that bumble bees are able to make rational decisions based on complex sets of options to
mimimize costs and maximize benefits, much as humans must do countless times every day.
When deciding which flowers to fly to when collecting nectar, a bee must consider two variables: distance and the amount of nectar available in a particular flower. Of course, the distance the bee
must fly represents the cost of collecting the nectar, and the amount of nectar in the flower is the benefit of having flown to it. The report explains:
“Computers solve it (the problem of which flower to fly to) by comparing the length of all possible routes and choosing the shortest. However, bees solve simple versions of it without computer
assistance using a brain the size of grass seed.”
The team set up a bee nest-box, marking each bumblebee with numbered tags to follow their behaviour when allowed to visit five artificial flowers which were arranged in a regular pentagon.
“When the flowers all contain the same amount of nectar bees learned to fly the shortest route to visit them all,” said Dr Lihoreau. “However, by making one flower much more rewarding than the
rest we forced the bees to decide between following the shortest route or visiting the most rewarding flower first.”
In a feat of spatial judgement the bees decided that if visiting the high reward flower added only a small increase in travel distance, they switched to visiting it first. However, when visiting
the high reward added a substantial increase in travel distance they did not visit it first.
The results revealed a trade-off between either prioritising visits to high reward flowers or flying the shortest possible route. Individual bees attempted to optimise both travel distance and
nectar intake as they gained experience of the flowers.
“We have demonstrated that bumblebees make a clear trade-off between minimising travel distance and prioritising high rewards when considering routes with multiple locations,” concluded co-author
Professor Lars Chittka. “These results provide the first evidence that animals use a combined memory of both the location and profitability of locations when making complex routing decisions,
giving us a new insight into the spatial strategies of trap-lining animals.”
In economics, we refer to the behavior descibed above as cost, benefit analysis. It surprised me to read that insects, when faced with a trade off between further distance and more nectar, weigh both
the cost and the benefit, and pursue the action that maximizes their profit, which is the bee’s case is a function of both distance of the flower and quantity or nectar collected.
Humans and our institutions make similar cost, benefit calculations. A business produces a quantity of output and sells it for a price that maximizes the difference between the price at which it can
sell its product for and the average cost of production, thus maximizing its profits. A consumer will purchase a combination of goods and services at which the amount of utility per dollar is
equalized across the various goods consumed, thus maximizing the consumer’s total utility.
Bees, with their brains the size of a grass seed, weigh variables nearly as complex as those weighed by businesses and individuals in their economic decisions. Are bees rational? Or is their behavior
purely biological instinct?
Dec 02 2009
Review Lesson: Econ concepts in 60 seconds – Perfect Competition
YouTube - ACDCLeadership’s Channel
More econ review videos from my new favorite YouTube channel, Jacob Clifford’s Econ Concepts in 60 Seconds.
To review for the upcoming test, you will join a small group and watch one of the four videos on the Perfect Competition. After watching and discussing one video with your group, you will be
re-assigned to another group with students who watched a different video. You will then lead a short discussion on your original video with your new group.
With your first group – 15 minutes: As your group watches its assigned video, have your notes open in front of you and draw the graphs Mr. Clifford draws along with him. Pause the video where
necessary to have time to draw graphs. Take notes while watching the video so you can teach it to another group. With your group, prepare a short discussion of the video’s main points, including:
• What rule or lesson about Perfect Competition does the video focus on?
• What did you already know that this video reminded you of or reinforced your understanding of?
• What did this video introduce that was new to you?
• How were graphs used to teach the concepts?
With your second group – 20 minutes: For the second part of this assignment, there should be four new groups, each including one member of the four original groups.
• Each group member should lead a 2-3 minute discussion of the video he or she watched in the first group.
• Go over each of the discussion points from above.
• Answer any questions your new group members have about video you watched.
Group 1 - The Profit Maximization Rule – MR=MC:
Group 2 - Perfect Competition in the short-run:
Group 3 - Perfect Competition in the long-run:
Group 4 - The Shut-Down Rule in Perfect Competition:
Nov 02 2009
When is acting irrational the rational thing to do?
FT.com / Comment / Opinion – Magic and the myth of the rational market.
Imagine you’re a poor farmer who has always had just enough to feed your family, with no surplus left over to sell. Then one day the government decides to grant your family and your neighbors enough
land to grow your own food and plenty more to sell on the market. The government’s intention, of course, is for you to cultivate all your land, sell your surplus, generate income for your family to
improve your quality of life, send your children to school and save for the future.
You’re the farmer. You’ve just been given land. What would you do?
1. Plant crops on all your land, harvest the crops, sell the surplus and enjoy the profits from your surplus?
2. Plant crops on only part of your land, grow enough food to feed your family, and let the rest of the land lie uncultivated. You have no surplus, nothing to sell, and continue to live the way you
always have lived: poorly.
The science of economics assumes that individuals always act rationally in their own self-interest. Self-interest is the ultimate motive of economic actors: firms are profit-maximizers, individuals
are utility-maximizers. The theory of rational behavior would lead one to assume that the farmer would pursue option 1 above. But in Papua New Guinea, where the government recently relocated
thousands of displaced farmers to new plots of land, it is more common for farmers to chose option 2:
“If they see me planting too much cocoa, they’ll do things to my land and my family, and they won’t bear fruit; really bad things; puripuri and other witchcraft.”
Such an avoidance of profit maximisation might have appeared economically irrational. But from the perspective of those villagers, putting in extra work just to make oneself a target for the
jealousy of one’s neighbours would be highly irrational behaviour.
Economists need to re-think their assumptions on rational behavior. What appears irrational to one person may be perfectly rational to someone else, as in the case of the Papuan farmers who only
plant half their land. Humans, it seems, are a bit more complicated than the cold, calculating arithmeticians economists have long assumed them to be.
In the wake of the largest economic crisis since the great depression, the assumption of rational actors interacting in rational markets has come into question. A new field of economics blending the
traditional study of resource allocation in the market place and human psychology has arisen to tackle the challenge of better understaning the seemingly irrational behaviors of investors, buyers and
sellers in today’s global economy:
One response to the current crisis has been a rise in the popularity of behavioural economics, which examines the psychological and emotional factors behind transactions. These models drop the
assumption of the rational actor yet implicitly keep the same model of economic rationality at their heart. We may diverge from the path of rationality for all sorts of psychological reasons but
only because emotion, Keynes’s famous “animal spirits”, clouds our judgment.
To break human behavior down to the basic pursuit of profits by producers and utility by consumers neglects to acknowledge the “animal spirits” within us all. Economics is entering a new era, in
which psychology and markets are intertwined. Rational behavior will remain a basic assumption of the science, but a re-defining of what it means to be rational will allow economists to better
understand the behaviors of individuals, investors and firms as the economy emerges from a slump Alan Greenspan might say was ushered in on a wave of irrational exuberance.
Discussion Questions:
1. Are economists wrong to assume that individuals always act rationally? Why do the Papuan farmers only use half their land? Are they stupid or lazy?
2. Can you think of any examples in which you or someone you know has done something that was not in his best economic self interest?
3. Is charity irrational? What about gift giving? If you calculated that the chance of getting caught steeling something you REALLY wanted was 0%, wouldn’t it be irrational NOT to steal? What would
keep you from stealing that thing if you deemed it rational to do so? | {"url":"http://welkerswikinomics.com/blog/category/profit-maximization/","timestamp":"2014-04-16T10:52:09Z","content_type":null,"content_length":"162804","record_id":"<urn:uuid:7356afba-a1b5-47a6-914c-a5a8a4a5b9d3>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00253-ip-10-147-4-33.ec2.internal.warc.gz"} |
Next | Prev | Up | Top | Index | JOS Index | JOS Pubs | JOS Home | Search
Thiran Allpass Interpolators
Given a desired delay allpass filter
can be designed having maximally flat group delay equal to dc using the formula
denotes the binomial coefficient. Note, incidentally, that a lowpass filter having maximally flat group-delay at dc is called a Bessel filter [365, pp. 228-230].
• For sufficiently large stability is guaranteed
Rule of thumb:
• It can be shown that the mean group delay of any stable 452].^5.8
• Only known closed-form case for allpass interpolators of arbitrary order
• Effective for delay-line interpolation needed for tuning since pitch perception is most acute at low frequencies.
• Since Thiran allpass filters have maximally flat group-delay at dc, like Lagrange FIR interpolation filters, they can be considered the recursive extension of Lagrange interpolation.
Subsections Next | Prev | Up | Top | Index | JOS Index | JOS Pubs | JOS Home | Search [How to cite this work] [Order a printed hardcopy] [Comment on this page via email] | {"url":"https://ccrma.stanford.edu/~jos/waveguide/Thiran_Allpass_Interpolators.html","timestamp":"2014-04-17T06:32:16Z","content_type":null,"content_length":"11762","record_id":"<urn:uuid:4bb3461a-83cb-4f02-9bff-adb40b058978>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00410-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lambda Expression in C# 3.0
Lambda Expression is the one of the best feature of the C# 3.0. Lambda expression is same as anonymous method introduced in C# 2.0. Lambda Expression is a concise syntax to achieve the same goal as
anonymous method. Now we can summarize Lambda expression in one line.
“Lambda expression is simply a Method.”
Syntax of Lambda Expression
Input Parameters => Expression/Statement Block;
Left hand side of expression contains the zero or more parameters followed by Lambda operator ‘=>’ which is read as “goes to” and right hand side contains the expression or Statement block.
A simple Lambda expression: x => x * 2
This Lambda expression is read as “x goes to x times 2”. Lambda Operator “=>” has sameprecedence as assignment “=” operator. This simple expression takes one parameter “x” and returns the value
Parameters Type
The parameters of the lambda expression can be explicitly or implicitly typed. For example
(int p) => p * 4; // Explicitly Typed Parameter
q => q * 4; // Implicitly Typed Parameter
Explicit typed parameter is same as parameters of method where you explicitly specified the type of parameter. In an implicit typed parameter, the type of parameter inferred from the context of
lambda expression in which it occurs.
Type Inference is a new feature of c# 3.0. I will explain it in some other blog.
Use simple Lambda Expression
Here is simple example of Lambda Expression which returns a list of numbers greater than 8.
int[] numbers = {1,2,3,4,5,6,7,8,9,10 };
var returnedList = numbers.Where(n => (n > 8));
You can also use anonymous method for returning same list.
int[] numbers = {1,2,3,4,5,6,7,8,9,10 };
var returnedList = numbers.Where(delegate(int i) { return i > 8; });
Use Statement Block in Lambda Expression
Here is simple example to write statement block in the lambda expression. This expression returns list of numbers less than 4 and greater than 8.
int[] numbers = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
var returnedList = numbers.Where(n =>
if (n < 4)
return true;
else if (n > 8)
return true;
return false;
Use Lambda with More then one Parameter
You can also write lambda which takes more than one parameters. Here is an exampleof lambda which adds two integer numbers.
delegate int AddInteger(int n1, int n2);
AddInteger addInt = (x, y) => x + y;
int result = addInt(10,4); // return 14
Use Lambda with Zero parameter
Here is an example of lambda which take no parameter and returns new Guid.
delegate Guid GetNextGuid();
GetNextGuid getNewGuid = () => (Guid.NewGuid());
Guid newguid = getNewGuid();
Use Lambda that return Nothing
You can also write lambda which returns void. Here is an example that lambda is only showing message and return nothing.
delegate void ShowMessageDelegate();
ShowMessageDelegate msgdelegate = () => MessageBox.Show("It returns nothing.");
Some Build in Delegates
Dot Net Framework 3.0 provides some build in parameterized delegate types that is “Func<t>(...)” and also provides some parameterized delegates that returns void that is “Action<T>(...)”. | {"url":"http://shakeel-dot-net.blogspot.com/2010/02/lambda-expression-in-c-30.html","timestamp":"2014-04-17T01:04:14Z","content_type":null,"content_length":"62371","record_id":"<urn:uuid:094154dd-3659-4261-958f-4c624841d5a3>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00518-ip-10-147-4-33.ec2.internal.warc.gz"} |
Herons' Formula
I expanded the equation for area by eliminating the s and just using a, b, and c, however, I must have made a mistake because when I enter the a=3, b=4, c=5 famous triangle for the sides, I don't get
1.5 like the original formula.
Infact, my square root goes negative inside, so a big boo-boo.
Here is my wrong equation: | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=36095","timestamp":"2014-04-21T07:25:52Z","content_type":null,"content_length":"14178","record_id":"<urn:uuid:1fb06495-32bd-4c3a-9153-dc64284fe53a>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00124-ip-10-147-4-33.ec2.internal.warc.gz"} |
6 search hits
Distance phenomena in high-dimensional chemical descriptor spaces : consequences for similarity-based approaches (2009)
Matthias Rupp Gisbert Schneider
DOGS: reaction-driven de novo design of bioactive compounds (2012)
Markus Hartenfeller Heiko Zettl Miriam Walter Matthias Rupp Felix Reisen Ewgenij Proschak Sascha Weggen Holger Stark Gisbert Schneider
We present a computational method for the reaction-based de novo design of drug-like molecules. The software DOGS (Design of Genuine Structures) features a ligand-based strategy for automated ‘in
silico’ assembly of potentially novel bioactive compounds. The quality of the designed compounds is assessed by a graph kernel method measuring their similarity to known bioactive reference
ligands in terms of structural and pharmacophoric features. We implemented a deterministic compound construction procedure that explicitly considers compound synthesizability, based on a
compilation of 25'144 readily available synthetic building blocks and 58 established reaction principles. This enables the software to suggest a synthesis route for each designed compound. Two
prospective case studies are presented together with details on the algorithm and its implementation. De novo designed ligand candidates for the human histamine H4 receptor and γ-secretase were
synthesized as suggested by the software. The computational approach proved to be suitable for scaffold-hopping from known ligands to novel chemotypes, and for generating bioactive molecules with
drug-like properties.
Kernel learning for ligand-based virtual screening:discovery of a new PPARgamma agonist (2010)
Matthias Rupp Timon Schroeter Ramona Steri Ewgenij Proschak Katja Hansen Heiko Zettl Oliver Rau Manfred Schubert-Zsilavecz Klaus-Robert Müller Gisbert Schneider
Poster presentation at 5th German Conference on Cheminformatics: 23. CIC-Workshop Goslar, Germany. 8-10 November 2009 We demonstrate the theoretical and practical application of modern
kernel-based machine learning methods to ligand-based virtual screening by successful prospective screening for novel agonists of the peroxisome proliferator-activated receptor gamma (PPARgamma)
[1]. PPARgamma is a nuclear receptor involved in lipid and glucose metabolism, and related to type-2 diabetes and dyslipidemia. Applied methods included a graph kernel designed for molecular
similarity analysis [2], kernel principle component analysis [3], multiple kernel learning [4], and, Gaussian process regression [5]. In the machine learning approach to ligand-based virtual
screening, one uses the similarity principle [6] to identify potentially active compounds based on their similarity to known reference ligands. Kernel-based machine learning [7] uses the "kernel
trick", a systematic approach to the derivation of non-linear versions of linear algorithms like separating hyperplanes and regression. Prerequisites for kernel learning are similarity measures
with the mathematical property of positive semidefiniteness (kernels). The iterative similarity optimal assignment graph kernel (ISOAK) [2] is defined directly on the annotated structure graph,
and was designed specifically for the comparison of small molecules. In our virtual screening study, its use improved results, e.g., in principle component analysis-based visualization and
Gaussian process regression. Following a thorough retrospective validation using a data set of 176 published PPARgamma agonists [8], we screened a vendor library for novel agonists. Subsequent
testing of 15 compounds in a cell-based transactivation assay [9] yielded four active compounds. The most interesting hit, a natural product derivative with cyclobutane scaffold, is a full
selective PPARgamma agonist (EC50 = 10 ± 0.2 microM, inactive on PPARalpha and PPARbeta/delta at 10 microM). We demonstrate how the interplay of several modern kernel-based machine learning
approaches can successfully improve ligand-based virtual screening results.
Molecular similarity for machine learning in drug development : poster presentation (2008)
Matthias Rupp Ewgenij Proschak Gisbert Schneider
Poster presentation In pharmaceutical research and drug development, machine learning methods play an important role in virtual screening and ADME/Tox prediction. For the application of such
methods, a formal measure of similarity between molecules is essential. Such a measure, in turn, depends on the underlying molecular representation. Input samples have traditionally been modeled
as vectors. Consequently, molecules are represented to machine learning algorithms in a vectorized form using molecular descriptors. While this approach is straightforward, it has its
shortcomings. Amongst others, the interpretation of the learned model can be difficult, e.g. when using fingerprints or hashing. Structured representations of the input constitute an alternative
to vector based representations, a trend in machine learning over the last years. For molecules, there is a rich choice of such representations. Popular examples include the molecular graph,
molecular shape and the electrostatic field. We have developed a molecular similarity measure defined directly on the (annotated) molecular graph, a long-standing established topological model
for molecules. It is based on the concepts of optimal atom assignments and iterative graph similarity. In the latter, two atoms are considered similar if their neighbors are similar. This
recursive definition leads to a non-linear system of equations. We show how to iteratively solve these equations and give bounds on the computational complexity of the procedure. Advantages of
our similarity measure include interpretability (atoms of two molecules are assigned to each other, each pair with a score expressing local similarity; this can be visualized to show similar
regions of two molecules and the degree of their similarity) and the possibility to introduce knowledge about the target where available. We retrospectively tested our similarity measure using
support vector machines for virtual screening on several pharmaceutical and toxicological datasets, with encouraging results. Prospective studies are under way.
Spherical harmonics coeffcients for ligand-based virtual screening of cyclooxygenase inhibitors (2011)
Quan Wang Kerstin Birod Carlo Federico Angioni Sabine Grösch Tim Geppert Petra Schneider Matthias Rupp Gisbert Schneider
Background: Molecular descriptors are essential for many applications in computational chemistry, such as ligand-based similarity searching. Spherical harmonics have previously been suggested as
comprehensive descriptors of molecular structure and properties. We investigate a spherical harmonics descriptor for shape-based virtual screening. Methodology/Principal Findings: We introduce
and validate a partially rotation-invariant three-dimensional molecular shape descriptor based on the norm of spherical harmonics expansion coefficients. Using this molecular representation, we
parameterize molecular surfaces, i.e., isosurfaces of spatial molecular property distributions. We validate the shape descriptor in a comprehensive retrospective virtual screening experiment. In
a prospective study, we virtually screen a large compound library for cyclooxygenase inhibitors, using a self-organizing map as a pre-filter and the shape descriptor for candidate prioritization.
Conclusions/Significance: 12 compounds were tested in vitro for direct enzyme inhibition and in a whole blood assay. Active compounds containing a triazole scaffold were identified as direct
cyclooxygenase-1 inhibitors. This outcome corroborates the usefulness of spherical harmonics for representation of molecular shape in virtual screening of large compound collections. The
combination of pharmacophore and shape-based filtering of screening candidates proved to be a straightforward approach to finding novel bioactive chemotypes with minimal experimental effort.
Virtual screening for PPAR-gamma ligands using the ISOAK molecular graph kernel and gaussian processes (2009)
Timon Schroeter Matthias Rupp Katja Hansen Klaus-Robert Müller Gisbert Schneider
For a virtual screening study, we introduce a combination of machine learning techniques, employing a graph kernel, Gaussian process regression and clustered cross-validation. The aim was to find
ligands of peroxisome-proliferator activated receptor gamma (PPAR-y). The receptors in the PPAR family belong to the steroid-thyroid-retinoid superfamily of nuclear receptors and act as
transcription factors. They play a role in the regulation of lipid and glucose metabolism in vertebrates and are linked to various human processes and diseases [1]. For this study, we used a
dataset of 176 PPAR-y agonists published by Ruecker et al [2]. Gaussian process (GP) models can provide a confidence estimate for each individual prediction, thereby allowing to assess which
compounds are inside of the model's domain of applicability. This feature is useful in virtual screening, where a large fraction of the tested compounds may be outside of the model's domain of
applicability. In cheminformatics, GPs have been applied to different classification and regression tasks using either radial basis function or rational quadratic kernels based on vectorial
descriptors [4,5]. We used a graph kernel based on iterative similarity and optimal assignments (ISOAK, [3]) for non-linear Bayesian regression with Gaussian process priors (GP regression, [4]).
A number of kernel-based learning algorithms (including GPs) are capable of multiple kernel learning [5], which allows combining heterogeneous information by using multiple kernels at the same
time. In this work, we combined rational quadratic kernels for vectorial molecular descriptors (MOE2D, CATS2D and Ghose-Crippen fragment descriptors) with the ISOAK graph kernel. We evaluated our
methodology in different ranking and regression settings. Ranking performance was assessed using the number of false positives within the top k predicted compounds. Predicted compounds were
ranked based on both predicted binding affinity and the confidence in each prediction. In the regression setting, we employed standard loss functions like mean absolute error (MEA) and root mean
squared error. The established linear ridge regression (LRR) and support vector regression (SVR) algorithms served as baseline methods. In addition to standard test/training splits and
cross-validation, we used a clustered cross-validation strategy where clusters of compounds are left out when constructing training sets. This results in less optimistic results, but has the
advantage of favouring more robust and potentially extrapolation-capable algorithms than standard training/test splits and normal cross-validation. In the regression setting, both GP and SVR
models performed well, yielding MAEs as low as 0.66 +- 0.08 log units (clustered CV) and 0.51 +- 0.3 log units (normal CV). In the ranking setting, GPs slightly outperform SVR (0.21 +- 0.09 log
units vs. 0.3 +- 0.08 log units). In conclusion, Gaussian process regression using simultaneously – via multiple kernel learning – the ISOAK molecular graph kernel and the rational quadratic
kernel (with standard molecular descriptors) performs excellent in retrospective evaluation. A prospective evaluation study is currently in progress. | {"url":"http://publikationen.ub.uni-frankfurt.de/solrsearch/index/search/searchtype/authorsearch/author/%22Gisbert+Schneider%22/start/0/rows/10/author_facetfq/Matthias+Rupp/sortfield/title/sortorder/asc","timestamp":"2014-04-19T10:06:53Z","content_type":null,"content_length":"41831","record_id":"<urn:uuid:9d1c8256-39ad-49c0-b6b8-e05d20310f4b>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00429-ip-10-147-4-33.ec2.internal.warc.gz"} |
I am a mathematician/computer scientist working at the UW Institute for Health Metrics and Evaluation, where I am applying ideas from random structures and complex networks to challenges in global
• Segregation in social networks, Oct. 5-7, 2007 (part of Opportunities for Undergraduate Research in Computer Science workshop).
• Algorithms and Economics of Networks, Spring 2007.
• Matrix Algebra, Fall 2003.
• Operations Research, Spring 2002.
• Calculus in 3 Dimensions, Fall 2001.
• HCSSiM, Summer 2001.
Contact information | {"url":"http://www.math.cmu.edu/~adf/","timestamp":"2014-04-21T04:32:20Z","content_type":null,"content_length":"3613","record_id":"<urn:uuid:a2330a02-7614-40a9-a435-a898f48717bd>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00600-ip-10-147-4-33.ec2.internal.warc.gz"} |
the first resource for mathematics
Singular integral operators. Extended and partly modified translation from the German by Albrecht Böttcher and Reinhard Lehmann.
(English) Zbl 0612.47024
Berlin etc.: Springer-Verlag. 528 p. DM 98.00 (Orig. Akademie-Verlag, Berlin) (1986).
The book is translated with some modifications from the German edition (1980; Zbl 0442.47027). It remains to indicate only the changes, incorporated by the translation.
In Chapter XI the new § 12 deals with the recent results on weighted norm inequalities for one-dimensional singular integral operators, due to R. Hunt. B. Muckenhoupt, R. Wheeden.
In Chapter XII the new § 7 is added, where polysingular integral operators with continuous coefficients are considered in the space ${L}_{p}\left({ℝ}^{2}\right)$ (results of I. Simonenko and V.
Pilidi). In the same chapter § 8 deals with some basic notions for pseudo-differential operators in Sobolev-spaces ${W}_{2}^{\left(1\right)}\left({ℝ}^{m}\right)$ (the boundedness, composition
formula, parametrix,...).
In Chapter XVII, § 3 two new subsections (3.6 and 3.7) deal with the polynomial approximation method for singular integral equations over a finite interval with discontinuous coefficients. In the
same chapter the new § 4 describes the spline collocation method for a one-dimensional singular integral equation.
In many other places new results can be found that have appeared within the 5 years after the first edition. References are extended on 150 new items.
47B35 Toeplitz operators, Hankel operators, Wiener-Hopf operators
47-02 Research monographs (operator theory)
45E05 Integral equations with kernels of Cauchy type
47Gxx Integral, integro-differential, and pseudodifferential operators
45P05 Integral operators | {"url":"http://zbmath.org/?q=an:0612.47024&format=complete","timestamp":"2014-04-16T19:27:45Z","content_type":null,"content_length":"22992","record_id":"<urn:uuid:b57010d3-edc6-4112-9798-c73426797378>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00059-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics Forums - limit question
- -
limit question
vabamyyr Sep26-06 10:47 AM
limit question
I have a question:
what is lim (n--->infinity)= 1/(3+(-1)^n))? My opinion that this limit does not exist.
arildno Sep26-06 10:50 AM
"Do not opine, PROVE!"
Apocryphal quote from Euclid. :smile:
CRGreathouse Sep26-06 11:44 AM
Are you asking about
perhaps? The equals sign in your post is confusing me. If so, are you familiar with the lim sup and lim inf? That would give you an easy direct proof: if lim sup = lim inf, that's the limit;
otherwise, the limit does not exist.
vabamyyr Sep26-06 01:20 PM
i have dealt with sup but not with inf but i will look them up. Thx anyway.
manoochehr Oct1-06 09:12 PM
Quote by vabamyyr
I have a question:
what is lim (n--->infinity)= 1/(3+(-1)^n))? My opinion that this limit does not exist.
if n∈Z (Z=Integer) then we have two answer for equation
1) if n=Even then answer=1/4
2) if n=Odd then answer=1/2
if n∈R (R=Real) then equation is undefined
for example: (-1)^1/2 does not exist.:smile:
Quote by manoochehr
for example: (-1)^1/2 does not exist.:smile:
It certainly does, it just isn't real.
manoochehr Oct2-06 08:48 PM
thank you for help me
manoochehr Oct3-06 05:36 AM
thank you for conduce:tongue:
Accordingly this sequence isn't convergent:smile:
All times are GMT -5. The time now is 09:18 AM.
Powered by vBulletin Copyright ©2000 - 2014, Jelsoft Enterprises Ltd.
© 2014 Physics Forums | {"url":"http://www.physicsforums.com/printthread.php?t=133664","timestamp":"2014-04-20T14:18:17Z","content_type":null,"content_length":"8916","record_id":"<urn:uuid:1cf627b6-d1d0-4286-bce7-d34f9d464de5>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00637-ip-10-147-4-33.ec2.internal.warc.gz"} |
How Much Bias Binding Can You Cut From A Square Of Fabric
If you have been frustrated in the past trying to figure out how much bias you need and how much fabric you need for it, here is a simple formula to help you determine just how much bias you can cut
from a square of fabric no matter what width of bias you want.
Keep this handy in your sewing notebook for future reference.
Bias Formula To Determine How Much Bias You Can Cut From A Square Of Fabric
Multiply the length by the width of the fabric square.
Divide this number by the width of bias needed. This will tell you the number of inches of bias that you will get.
To find the number of yards, divide the number of inches by 36 (the number of inches in one yard).
Here is an example of how to do your calculations for a sewing project:
Determine how much bias you need for your project.
Let's assume that you are sewing a set of placemats and you want to sew a half inch binding around them.
This placemat measures 18 inches by 12 inches. (18 + 18 + 12 + 12) = 60 inches around one placement.
Eight placemats (8 x 60) = 480 inches plus a little extra to overlap the ends of the bias on each placemat.
For this example, we will add an extra 2 inches for each placemat or 16 inches. 480 + 16 = 496 inches is the total length of bias that you need for this project.
To get the number of yards of bias that you need, divide 496 by 36 (the number of inches in one yard).
Determine the width of the binding that you will cut.
You must cut the binding 4 times the finished width.
So, for a one half inch finished binding, you must cut the binding 4 x 1/2 inch or 2 inches wide.
Now you know how many yards of binding you need and how wide you must cut it.
The next question is how many yards of 2 inch wide bias can you cut from a 36 inch square of fabric.
Use the above formula to determine the yardage in a 36 inch square:
Step 1.
36 inches x 36 inches = 1296 inches
Step 2.
1296 inches divided by 2 inches (the desired width of the bias) = 648 inches
Step 3.
648 inches divided by 36 inches = 18 yards
So, if a 36 inch square will give you 18 yards of 2 inch wide bias, you will need an extra yard of fabric to make the half inch bias binding for 8 placemats.
A 45 inch square makes about 28 yards of 2 inch bias (for one half inch finished bias binding).
With this information, you can easily calculate all your bias requirements. No more guessing!
It just makes sense!
©2009 Marian Lewis - All Rights Reserved 1st Step To Sewing Success
Marian Lewis is a sewing instructor and the creator of an amazing new fitting method for hard-to-fit sewing folks. In her ebook, "Common Sense Fitting Method For Hard-To-Fit Sewing Folks Who Want
Great Fitting Skirts And Pants", find out step-by-step WHAT you really need, WHERE you really need it and HOW to apply that to a commercial sewing pattern. It just makes sense! | {"url":"http://sew-whats-new.com/profiles/blogs/how-much-bias-binding-can-you","timestamp":"2014-04-19T09:43:02Z","content_type":null,"content_length":"68786","record_id":"<urn:uuid:3953905e-9e4d-4ce8-ab7b-07c04cd5b582>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00625-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Exclusive Or
This is the final weekend of the Edinburgh Fringe Festival, an enormous and insane annual event which draws around half a million people to a city of around half a million people. Walking round the
city this week, I thought of a game to pass the time when stuck in a Festival crowd.
The game is called "Spot the Scot". To play, start by walking down the streets of Edinburgh. Then, pick a group of people coming toward you on the street, not too distant, but far enough that you
can't hear them. Give them a good look over, and guess whether they are actually Scottish or not. As they pass you, eavesdrop to find out if you were right.
I have found this game thoroughly enjoyable, and I highly recommend it. Feel free to post strategies or high scores in the comments.
No matter one's political persuasion, it is hard not to think, as Willard Foxton argues in an interesting essay that the income tax code in the UK (and in the US too, for that matter) is too complex.
In a more cynical mood I would be tempted to say that the tax law is so complex, because complex tax laws benefit the rich, and the rich make the laws.
In the UK there have been several scandals on tax avoidance, perhaps most notably, one of the two biggest Scottish football teams blowing up due to an offshore tax evasion scheme. At first I was
unable to understand in the news reports why other football teams, and their fans, seemed so rabidly angry at the Rangers. But of course: football does not have a salary cap, so if a team unfairly
spend less money on tax, it can spend more money on players. By cheating at their taxes, the Rangers were also cheating at football.
Taxes are political footballs as well, especially in the US. In the US there is an additional crazy phenomenon that creating a new program makes you an irresponsible tax and spend liberal that is
taking money out of the pockets of working families, while cutting taxes makes you a deficit hawk. (I mean, uh, not to get too overtly political or anything?) Therefore if you are an American
politician---of either party---and you want to create a new program for a noble goal, e.g., to pay for college scholarships for middle class families, why not make it a tax credit? That way, you get
the noble program, and you can say that you're cutting taxes too!
Ali Eslami has just writen a terrific page on organizing your experimental code and output. I pretty much agree with everything he says. I've thought quite a bit about this and would like to add some
Programming for research is very different than programming for industry. There are several reasons for this, which I will call Principles of Research Code. These principles underly all of the advice
in Ali's post and in this post. These principles are:
1. As a researcher, your product is not code. Your product is knowledge. Most of your research code you will completely forget once your paper is done.
2. Unless you hit it big. If your paper takes off, and lots of people read it, then people will start asking you for a copy of your code. You should give it to them, and best to be prepared for this
in advance.
3. You need to be able to trust your results. You want to do enough testing that you do not, e.g., find a bug in your baselines after you publish. A small amount of paranoia comes in handy.
4. You need a custom set of tools. Do not be afraid to write infrastructure and scripts to help you run new experiments quickly. But don't go overboard with this.
5. Reproducability. Ideally, your system should be set up so that five years from now, when someone asks you about Figure 3, you can immediately find the command line, experimental parameters, and
code that you used to generate it.
Principle 1 implies that the primary thing that you need to optimise for in research code is your own time. You want to generate as much knowledge as possible as quickly as possible. Sometimes being
able to write fast code gives you a competitive advantage in research, because you can run on larger problems. But don't spend time optimising unless you're in a situation like this.
Also, I have some more practical suggestions to augment what Ali has said. These are
1. Version control: Ali doesn't mention this, probably because it is second nature to him, but you need to keep all of your experimental code under version control. To not do this is courting
disaster. Good version control systems include SVN, git, or Mercurial, etc. I now use Mercurial, but it doesn't really matter what you use. Always commit all of your code before you run an
experiment. This way you can reproduce your experimental results by checking out the version of your code form the time that you ran an experiment.
2. Random seeds: Definitely take Ali's advice to take the random seed as a parameter to your methods. Usually what I do is pick a large number of random seeds, save them to disk, and use them over
and over again. Otherwise debugging is a nightmare.
3. Parallel option sweeps: It takes some effort to get set up on a cluster like ECDF, but if you invest this, you get some nice benefits like the ability to run a parameter sweep in parallel.
4. Directory trees: It is good to have your working directory in a different part of the directory space from your code, because then you don't get annoying messages from your version control system
asking you why you haven't committed your experimental results. So I end up with a directory structure like
Notice how I match the directory names to help me remember what script generated the results.
5. Figures list. The day after I submit a paper, I add enough information to my notebook to meet Principle 5. That is, for every figure in the paper, I make a note of which output directory and
which data file contains the results that made that figure. Then for those output directories, I make sure to have a note of which script and options generated those results.
6. Data preprocessing. Lots of times we have some complicated steps to do data cleaning, feature extraction, etc. It's good to save these intermediate results to disk. It's also good to use a text
format rather than binary, so that you can do a quick visual check for problems. One tip that I use to make sure I keep track of what data cleaning I do is to use Makefiles to run the data
cleaning step. I have a different Makefile target for each intermediate result, which gives me instant documentation.
If you want to read even more about this, I gave a guest lecture last year on a similar topic (slides, podcast).
I've just made an update to my list of software I like motivated by my experiences setting up a new computer.
Presently I am still enjoying the honeymoon phase of my new laptop. To avoid the slightest appearance of ostentation, I will refrain from going into details of exactly what laptop I got, except to
say that it is of course a Mac, and it's REALLY REALLY cool!
Apple provides a Migration Assistant that apparently will copy all of your files and settings from your old computer, so that your new Mac looks exactly like your old one. My feeling about this is:
Why would anyone want that? For me, one of the pleasures of a new computer is that it's *clean*, unburdened with hundreds of files scattered around my home directory that I never use but are too
important (or too numerous) to simply delete.
So for years, whenever I get a new computer, I never copy my files over en masse. Instead, I copy over a small set of files that I know I need, and leave the rest on a backup. Then the next day, I
find that I need a file on the backup that I didn't realize, go back and copy this over, etc.
This process stabilizes after a week or so, and my electronic life feels much less cluttered.
I suppose that I could just blow away my home directory every year for the same feeling, but somehow it is hard to convince myself to do this.
I wish that I could use the same process for physical papers, but sadly paper information cannot be stored as compactly as its electronic equivalent.
Super-Mac-Geek-Alert: For several years, I have been using Keychain to store secure notes such as password hints for bank logins, etc. I thought I was very clever to avoid impressive but costly tools
like 1Password. Then I tried to copy the Keychain to my new computer. Painful. I think from now on I'll keep these notes on a small encrypted disk image.
or, from small beginnings...
One of the minor challenges of moving from the US to the UK is temperature. People in the UK always discuss the weather, and when they do, they use Celsius. My brain still works in Fahrenheit, so I
need to convert typical daily outdoor temperatures in my head, and quickly enough that I can carry on a conversation.
You probably learned a formula in school for doing this. Completely useless. Forget all about it—but you already have, haven't you? You might remember, if you're clever, that the formula involves 9,
5, and 32 in some combination. But is it 9/5 or 5/9? Do you add 32, or do you subtract it? And do you do that before or after you multiply? And now people are wondering why you've been staring at
them for two minutes when all they asked is how hot it was when you were in Seattle last week.
The problem is that the equation to convert C to F is too similar to the equation for the reverse, and both equations are too difficult to compute mentally. What we need is a simpler equation, that
is easy to remember, and easy to work out quickly in your head.
So here's the trick. You memorise the following correspondences:
0 °C = 32 °F
10 °C = 50 °F
20 °C = 68 °F
30 °C = 86 °F
Then, to convert any temperature that is near these, approximate 1 °C = 2 °F. This will allow you to convert almost any naturally occurring outdoor temperature in the UK in either direction to within
1° accuracy.
Let's try it. As I write the current temperature in Edinburgh is 14 °C. This is 10 °C plus 4° extra. From memory convert the 10 °C to 50 °F. Then convert 4 °C extra to 8 °F extra and add it back on.
This gives you 14°C = 58°F. This is not exact, but close enough that you know to wear a jumper. The exact formula is
14 * 9 / 5 + 32 = 57 F
Good luck doing that in your head.
It tickles me that (maths alert) this is a piecewise linear approximation to a linear function. Mathematically, you would have to believe that a piecewise linear function would be more complicated,
but mentally, it's not. Maybe there's a deep psychological principle here that scientists will figure out someday.
Until then, quite chilly today, isn't it? | {"url":"http://www.theexclusive.org/2012_08_01_archive.html","timestamp":"2014-04-18T03:06:24Z","content_type":null,"content_length":"89597","record_id":"<urn:uuid:ff448d32-d3bd-45a7-9cc9-5cb214c70b61>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00386-ip-10-147-4-33.ec2.internal.warc.gz"} |
log rule for integration confusion
July 15th 2011, 08:17 AM
log rule for integration confusion
If I have a function:
f(x) = 1 / (x+1), and I want to find it's indefinite integral, I can apply the log rule and get:
F(x) = ln |x+1| + C
If, however, I multiply both the numerator and the denominator by 2, before integrating, I get:
f(x) = 1 / (x+1) = 2 / 2x + 2
F(x) = ln |2x+2| + C
Clearly, f(x) are equal in both cases, but I can't see how ln |2x+2| equals ln |x+1|
Any explanation? Thanks
July 15th 2011, 08:24 AM
Re: log rule for integration confusion
If I have a function:
f(x) = 1 / (x+1), and I want to find it's indefinite integral, I can apply the log rule and get:
F(x) = ln |x+1| + C
If, however, I multiply both the numerator and the denominator by 2, before integrating, I get:
f(x) = 1 / (x+1) = 2 / 2x + 2
F(x) = ln |2x+2| + C
Clearly, f(x) are equal in both cases, but I can't see how ln |2x+2| equals ln |x+1|
Any explanation? Thanks
Let $u = 2x+2$
therefore $du = 2 dx \leftrightarrow dx = \dfrac{du}{2}$ and $\int \dfrac{2}{2x+2} dx = \int \left(\dfrac{2}{u} \cdot \dfrac{du}{2}\right)$
We can cancel those two's to give $\int \dfrac{du}{u}$ and continue as normal from there.
July 15th 2011, 08:24 AM
Re: log rule for integration confusion
If you write: $\ln|2x+2|+C=\ln|2(x+1)|+C=\ln(2)+\ln|x+1|+C$
Because $\ln(2)$ is also an constant number you can say:
with $C'$ a new constant integration term.
July 15th 2011, 08:26 AM
Re: log rule for integration confusion
Your mistake in using the same constant, i.e. they are different constants.
Recall that $\ln(|2x+2|)=\ln(|x+1|)+ln(2)$.
If $C$is the constant in the first then $C+\ln(2)$ is in the second.
July 15th 2011, 08:51 AM
Re: log rule for integration confusion
Thanks! Now I get it. | {"url":"http://mathhelpforum.com/calculus/184604-log-rule-integration-confusion-print.html","timestamp":"2014-04-20T00:30:54Z","content_type":null,"content_length":"9033","record_id":"<urn:uuid:08a70160-49fa-498e-8c55-c268b586b731>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00516-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mathematics Professor Trying To Teach at Junior High School
Mathematical education and the role of mathematicians in mathematical education is a very important, loaded, and controversial subject. An old friend and fellow combinatorialist Ron Aharoni tried to
teach mathematics at a junior high school. Here is Ron’s account of the experience as reported in Haaretz (English, Hebrew).
I admire Ron’s bold initiative and candid account of the experience.
Among Ron’s other endeavors: The recent startling proof (with Eli Berger) of the Menger theorem for infinite graphs, The study of topological methods in hypergraph matching theory, a book about
mathematics and poetry (Hebrew), a forthcoming book about (or against) philosophy, and much activity in mathematics education including his book Arithmetic for Parents.
Comments and links about mathematical education in general and the role of mathematicians in mathematical education for children are most welcome.
Happy New Year, everybody!
From the Haaretz article:
“No one prepared me for what happened,” Aharoni explained. “My class quickly turned into a zoo – students would sing in class, get up freely, throw things at one another… Nothing I did helped, but on
the other hand I wanted to use an ‘iron fist’ – even if I didn’t know how to. After two months I grew desperate and left.”
“I lost the feeling of omnipotence in education,” he said. “It was a dispiriting experience, it took the wind out of my sails. It is very frustrating trying to forge a connection, encouraging
students to succeed and failing in that. So I returned to the ivory tower, which is far more comfortable and remunerative.”
“The truth is I left with my tail between my legs,” Aharoni said.
7 Responses to Mathematics Professor Trying To Teach at Junior High School
1. The article mentions another case of someone leaving his lucrative hi-tech job in order to become a Math teacher. Eventually he also had to resign for similar reasons.
Something very troubling is going on.
I wonder how long this has been going on… reflecting upon my years in high-school (also in Israel, about 2 decades ago) I seem to recall sparks of the same delinquent behavior.
2. Thanks for the post. I admire Dr. Aharoni’s boldness and honesty.
I have a Ph.D. in Mathematics (graph theory, no less) and teach at a public “elite” high school in the U.S. Our school is charged not only with teaching gifted students, but also with being a
resource for math and science education across the state. Luckily, our school does not really have problems with discipline; our students are, on the whole, well-chosen and honestly appreciate
the high-quality instruction they receive. I feel that the biggest hindrance to our work is the dichotomy between those trained in math and those trained in math education. Those trained in math
want to make substantive changes to curriculum, and those trained in math education want to debate inane issues, mostly semantic in nature. Those in math eventually get upset and go the way of
Surprisingly, I have found through my work with other teachers at other non-elite high schools that most seem very open and welcome to learning new methods. Most seem to feel that their college
educations were poor preparations for classroom teaching, and after a couple years teaching they are looking to fill in a lot of gaps.
This is where I think mathematicians can be of greatest service: in the professional development of current teachers. Perhaps those who want to serve the common educational good could speak at
conferences and seminars meant for teachers. The teachers in attendance would really appreciate it, and it wouldn’t require mathematicians to leave the ivory tower.
3. I’ve been an Adjunct Professor of Mathematics for 5 semesters at one private university, and an Adjunct Professor of Astronopmy at a public college. I spent 2.5 years earning the California Full
Time Teaching Certificate in Mathematics, from people who have neither my Math background nor my extensive teaching background. I am now able to teach fulltime where I’ve taught as a substutute,
in public middle schools and high schools.
Th Federal Government has made this a lengthy, expensive, and frustrating transition. Yet there are some “overqualified” men and women who leap at the chance to take the big pay cut and make a
difference in the lives of youngsters.
This article is revealing, in various ways, and an important topic in a system that, in the lower-middle class and poverty and rural areas of the country, de facto bankrupt.
4. Colleges of Education in the USA prepare well-intentiuoned but undertrained teachers to achieve standards in three orthogonal areas: (a) Instruction (Lectures, collaborative learning, peer
learning, other techniques); (b) Management (including Time Management, Lesson Planning, and Classroom Management, where Aharoni seems to have flailed helplessly); (c) Assessment (including
homework, quizzes, exams, self-assessment).
(1) With all due respect to the excellent professors and curriculum at
Charter College of Education, where I earned my teaching credential, it is not the techniques alone – not merely lesson plans and classroom management plans — that make a great teacher. Nor is it
the pure art of a “natural born teacher” (the Buddha said that 1 person in 5 is a born teacher). It is a synthesis of the two: an innately motivated and talented teacher, with an armamentarium of
technical cures to classroom ills.
(2) Beyond that necessary blend of the science of teaching and the art
of teaching, what matters is akin to color for a painter — a quality
of vision of the teacher. The teacher fails if he or she sees the
world through rose-colored glasses, filled with idealistic progressive
notions and great expectations, yet with no ability to handle the
challenges of the urban classroom. The teacher fails if he or she sees
the world through army camouflage, intending “tough love” and military
precision in shooting down inappropriate behavior, because the teacher
and the student are not enemies, and the classroom is not a
battlefield. What does the teacher envision, and how can that be
combined with the vision of the students?
(3) What is the revelation of the particular universe that the teacher
sees, and that other people don’t see? I have explained at length in
my 130-page draft Classroom Management Plan that revelation is one of 5 distinct forms of truth… The Axiomatic Truth of Mathematics is only one of them.
(4) Specifically, as I have written, my approach is the focus on 3 big
questions: what is the universe (and how does it work); what is a
human being; and what is the place of that human being in that
universe? …
(5) Caveat: style, like real life, cannot be too precious, controlled
or confining. There is a great deal of theory in this Classroom
Management Plan…. But theory alone ill-equips a teacher. (6) The
teacher, in his or her mind, narrates or paints an accurate portrait
of minute feelings. The human beings in the classroom are thinkers,
yes, but also human because of a full palette of emotions. Adolescent
students, going through a “phase transition” in their lives, their
brains rewiring themselves, their bodies flooded with hormones, have
their social network quivering like a spiderweb shaking in the morning
breeze, their sense of belonging to pairs and trios and subgroups in
and beyond the classroom under repeated reappraisal. The teacher must
respect the dignity of the student, giving attention to the minute
variations in feeling, encouraging the positive, enabling the negative
to be self- and group-regulated.
5. It’s always amusing when naive liberal fantasies of academics meet reality. I’d been a teacher for one year. That was enough. I could briefly do it again if I really had to, but I’ll definitely
have none of the naive ideas I had going in.
6. A little off topic maybe, but a plea for people to consider the ethics of buying things like designer clothes. Do try and think about, for example, the materials the item is made with, the human
rights of the employees where they’re manufactured and the green credentials of retailers. Oh, and try to share rather than throwing away. Thanks!!!!
This entry was posted in Education. Bookmark the permalink. | {"url":"http://gilkalai.wordpress.com/2010/01/01/mathematics-professor-trying-to-teach-at-junior-high-school/","timestamp":"2014-04-21T05:16:47Z","content_type":null,"content_length":"116624","record_id":"<urn:uuid:95a9d7c3-ed53-4e55-9644-329409c9ad2b>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00601-ip-10-147-4-33.ec2.internal.warc.gz"} |
autocorrelation in excel
- [S::S] [S:aueyosae('o'); :S]
autocorrelation in excel
examine the autocorrelation function for each variable by clicking in eviews on view the excel puter practical contains a dataset with monthly data on us. introductory understanding of excel petence
with windows operating systems first-order autocorrelation; cb predictor; using the fit button; skipped distributions.
new d spatial autocorrelation res in addition to the existing wide range of spatial genalex is written in visual basic for applications within microsoft excel (tm) and. spatial autocorrelation, in
short, deals with the tendency for geographic phenomena to cluster package for the social sciences), which imports data easily from microsoft excel.
does contain a number of advanced features and methods, www.dambut.com including autocorrelation and is smartforecasts excel-based? no however, it can easily import and export excel files.
the autocorrelation is a measure of the arity between the height, key for visio 2007 z ( x ), at a point x and the roughness parameters l, and acf, were calculated using an excel spread.
and lows, simulation, backtesting, models, best wishes poem non-linear, problem solution speech with autocorrelation, function, choice of, log log, odean, problem solution speech terrance,
optimization, in excel, olivier ferrieux -.
week: chapters to: time series and autocorrelation; remark: we ll cover some of the data for italy from juandre excel & eviews (right click and save on puter. interpreting autocorrelation; batch
control charts; design for six sigma overview run the green belt xl software, users need one of the following microsoft excel.
clears an activex connection with excel: xlgetspl: gets a range of values from excel autocorrelation with normalization: acovspl: autocovariance: avgfiltspl: nearest neighbor averaging. the last two
chapters cover how to construct formulae for statistical measures and methods not readily available in excel, such as the durbin-watson statistic, autocorrelation.
the autocorrelation can be clearly seen on this lag plot it appears that when one eruption is this thread shouldn t be about bashing excel, but there s just another feature with. spatial
autocorrelation (ii) week dec summary and review each lab session contains demonstration of using excel to do statistical analysis.
further autocorrelation tests spectral tests the runs test an excel spreadsheet for density estimation risk aversion and rational. covariance and correlation, hypothesis testing, acknowledgement
sample for pproject confidence intervals, examples of memorandum libraries, paper pricking patterns native c++ libraries, r, matlab, bloomberg, microsoft excel, hydraulic symbol real time.
3) excel spreadsheets are very user friendly and could be used to calculate many of distribution with ar(1) process ma(1) process a rma (p, q) arima(p, how can i have good sex in urdu languaged,q)
lab: autocorrelation.
features screen captures in puter programs (excel, spss, best wishes poem or eviews) understanding your data preparing your data for analysis correlation autocorrelation. autocorrelation plots and
values diagnostic for time series modeling, artisteer 2 serial number modeling > time series mand for tab delimited text file, www.office 2007 activation patch sas transport, sas datasets, the
notebook monologue jmp file, excel.
measuring autocorrelation: the durbin-watson statistic inferences about the slope using microsoft excel for simple linear regression using minitab for simple linear. including plots of observed and
expected values autocorrelation and partial autocorrelation created in the spss tables add-on module) on the same sheet in the same excel.
excel worksheet financial management, fi (syllabus) assignments feedback trading and autocorrelation interactions in the foreign exchange market, further. examples include microsoft windows, groping
bus spreadsheet applications like microsoft excel methods for continuous variables; testing binary response variables for spatial autocorrelation.
in appendix b the author shows the outputs of eviews, excel, urdu sexsy stori minitab, and state chapter autocorrelation: what happens if error terms are correlated part iv introduction.
spatial autocorrelation game probability & statistics modules simulations with java applets insightxla business analysis software for microsoft excel minitab homepage. i m trying to use the acf()
function to calculate the autocorrelation of each column in a matrix there any way to send the acf data to a matrix, so i could analyse the data in excel.
business statistics for managerial decision making, with excel for business assortment of integers which fill a, rearrange them to alter the spatial autocorrelation. read data from excel, artisteer
full lotus -2-3, ascii, dbase and more; read raw data from autocorrelation function (acf), www.office 2007 activation patch partial autocorrelation function (pacf), cross-correlation.
further problems in econometric theory and technique: multicollinearity, serial fifa 07 autocorrelation excel format; cansim database access; direct access to cansim ii-from munca sites only..
autocorrelation in excel related links
leave a reply
paper pricking patterns :: akhwat pks :: red nose pitbulls :: dwg sample files :: autocorrelation in excel ::
autocorrelation in excel
you can use these tags: <a href="" title=""> <abbr title=""> <acronym title=""> <b> <blockquote cite=""> <cite> <code> <del datetime=""> <em> <i> <img src=""> <p> <q cite=""> <strike> <strong>
comment by: admin
autocorrelation in excel covariance and correlation, hypothesis testing, acknowledgement sample for project confidence intervals, examples of memorandum libraries, paper pricking patterns native c++
libraries, r, matlab, bloomberg, microsoft excel, hydraulic symbol real time
autocorrelation in excel[bkeyword] | {"url":"http://www.biodanzagp.com.br/masala-s1b/matede.html","timestamp":"2014-04-20T21:38:23Z","content_type":null,"content_length":"11016","record_id":"<urn:uuid:7fdbfc41-ea62-43c9-8f5d-eea421a85299>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00551-ip-10-147-4-33.ec2.internal.warc.gz"} |
Colson, John
- The fifth Lucasian Professor of Mathematics at Cambridge University, is a relative unknown in the history of the field.
- Before Colson came to Cambridge, he had been the master at a mathematical school at Rochester. He was first educated at Christ Church Oxford, but never graduated.
1713 - He was elected to the Royal Society. In the later years of his life he was also the rector of Lokington in Yorkshire.
1728 - He moved to Emmanuel College, Cambridge, where he received his M.A., when he was forty-eight years old.
1736 - He published an English version of Newton's Method of Fluxions and Infinite Series originally written in Latin.
- He also published the English edition of Geometrica Analytica by Newton.
1739 - He was elected to the Lucasian Chair in May.
1760 - Died at 80 years of age.
1761 - An edition of Newton's Arithmetica universalis was published in Latin with Colson's commentary, also in Latin.
1801 - Colson showed himself to be ahead of his times by his translation of Analyitical Institutions written originally in Italian by Donna Maria Agnesi, a professor of mathematics and philosophy at
the University of Bolgna.
- It was published forty one years after his death.
Page last updated: 12:03am, 07th Jul '07 | {"url":"http://www.s9.com/Biography/Print/Colson-John","timestamp":"2014-04-21T12:19:04Z","content_type":null,"content_length":"3789","record_id":"<urn:uuid:a4f6ebd2-6c51-408c-886b-37c15c41c356>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00562-ip-10-147-4-33.ec2.internal.warc.gz"} |
Card Shuffling I
Just about anyone interested in mathematics has studied a little probability and probably done some easy analysis of basic card games and dice games. Completely off topic for a second, am I the only
one who has noticed that basic probability homework exercises are the only situation, aside from funerals, that anyone will ever use the word “urn?” For whatever reason, probabilists love that word.
Anyway, in any real card game, the computations tend to get complicated rather quickly, and most people get turned off from the discussion. With some ingenuity, however, one can answer some pretty
cool (but initially difficult seeming) questions without having to go through a lot of tedious computations.
Take as an example card shuffling. In the face of expert card-counters, the natural question for the dealer is how many times he or she has to shuffle the deck before it’s well-mixed. In the case
when the dealer is also playing the game — and is a card-counter at the level of a member of the MIT black jack team, say — he or she could drastically improve their odds by using a shuffling method
which seems to shuffle the deck well, but actually is very poor at doing so. Anyway, at this point the question is slightly ill-posed, as we have no obvious way to interpret the word mixed, let alone
well. In fact, coming up with a mathematical model of what shuffling means is already fairly difficult. What I’m hoping to do is give a framework which makes the problem more tractable.
One can take the following abstract point of view. The fundamental object we are studying is the symmetric group $S_n$ where, more often than not, $n=52$. Each element of $S_n$ corresponds to a
particular way to shuffle the deck. Alternatively, one can think of every element of $S_n$ as a particular ordering of the deck, starting from some prescribed order (i.e. however the deck was ordered
when we took it out of the box). The identity element corresponds to the “no-shuffle” shuffle (alternatively, the original order). Transpositions $\left(\begin{array}{cc} i & j \\ \end{array}\right)$
correspond to interchanging the cards at position i and j, respectively, and so on. To model a collection of shuffles, one defines a probability measure on $S_n.$ For example, one could put the cards
in the deck side-by-side and shuffle as follows: with your left hand pick a card uniformly at random, and with your right hand pick a card uniformly at random, then interchange the two cards. The
probability measure defined by this rule is as follows:
$\displaystyle{\begin{array}{l} P(g) = 1/n\textrm{ if }g=\textrm{ identity} \\ P(g)=2/n^2\textrm{ if }g=\left(\begin{array}{cc} i & j \\ \end{array}\right) \\P(g)=0\textrm{ otherwise.}\\\end{array}}$
Once one has a probability measure $P$, one can define the transition matrix $M=(p(s,t))$ so that $p(s,t)=P(st^{-1}).$ Heuristically, the $s,t^{\textrm{th}}$ entry of $M$ corresponds to the
probability of starting at ordering $t$ and ending at ordering $s$. As $M$ has all non-negative entries and the rows sum to 1, this matrix corresponds to a Markov Chain (which one might call a random
walk on $S_n$). It is worth noting here that the matrix $M$ has $(n!)^2$ entries, which is something to the tune of $10^{135}$ when $n=52.$ Not much help from a computer to be found with numbers like
that kicking around. The $k^{\textrm{th}}$ power of $M$, then correspond to the transition probabilities of going from state $t$ to $s$ (this, for example, proves the Chapman-Kolmogorov equations for
finite state spaces). Working backwards, one can use $M^k$ to produce a probability distribution $P_k$ by $P_k(s)=(M^k)_{s,\epsilon}.$ This $P_k$ really corresponds to the $k$-fold convolution of $P$
with itself, where convolution means the usual thing, i.e.
$\displaystyle{(P*Q)(s)=\sum_{t\in S_n}P(t)Q(st^{-1})}$
Heuristically, this new measure represents the probability of starting from a standard order getting to any other order by way of $k$ shuffles.
Okay, with this framework in mind, one can now define the difference $||P-Q||$ between two probability distributions, $P$ and $Q$:
$\displaystyle{||P-Q||=\frac{1}{2}\sum_{g\in S_n}|P(g)-Q(g)|}$
This is a pretty intuitive idea for distance; aside from the factor of $1/2$, the formula is basically the $L^1$ norm. We will mostly be interested in this quantity when $P=P_k$ and $Q=U$ where $U$
is the uniform distribution, i.e. $U(g)=1/n!$. One key feature of this difference, is that it is (in a sense) submultiplicative with respect to convolution:
$\displaystyle{||(P-U)*(Q-U)||\le 2||P-U||||Q-U||}$
This is not very difficult to check. This property is important since:
$\displaystyle{\begin{array}{ll} 2||P_k-U||||P_j-U|| & \ge||(P_k-U)*(P_j-U)|| \\ &=||P_k * P_j - P_k * U - U * P_j+U||\\&=||P_{k+j}-U|| \end{array}}$
where we have made use of the fact that the uniform distribution convolved with any other distribution is the uniform distribution (this fact characterizes the uniform distribution, actually) as well
as the fact that $P_k*P_j=P_{k+j}$. In particular, this means that
$\displaystyle{||P_{\alpha k}-U||\le (2||P_k-U||)^{\alpha}}$
Hence as soon as $||P_k-U||$ gets smaller than $1/2$, we have rapid (exponential) decay to the uniform distribution. There is a theorem due to Koss which holds in a more general setting where one
asks a similar type of question on any compact group:
Theorem (Koss, 1959). Let $G$ be a compact group. Let $P$ be a probability on $G$ such that for some $k_0$ and $c$ with $0 < c < 1,$ and for all $k > k_0$,
$\displaystyle{P_k (A) > cU(A)}$ for all open sets $\displaystyle{A}$
Then, for all $k$,
$\displaystyle{||P_k-U||\le (1-c)^{\lfloor k/k_0 \rfloor}}.$
The additional hypothesis of the theorem says that the shuffling eventually doesn’t avoid a particular subgroup. So, in general, the plot looks something like
For a long time, this theorem was the end of the road. In our setting of $G=S_n$, it is extremely relieving to know that any reasonable shuffling method will eventually converge very rapidly to the
uniform distribution — no reasonable shuffling method would leave particular subgroups of $S_n$ out. However, in no way is the theorem useful from a practical point of view: we still have no idea how
many times we need to shuffle the deck!
The goal, then, is to compute the best $k_0$ in the statement of Koss’s theorem, so that we know precisely how many times we have to shuffle until we converge exponentially to the uniform
distribution. This turns out to be a difficult problem in general, for which, unsurprisingly, no general principle seems to work. That is to say that every type of shuffling technique seems to
require its own special treatment. In my next post (which should be in a couple of days), I’ll describe how to model the standard shuffle, known as the riffle shuffle, and derive some estimates on
how $k_0$ depends on $n$.
How many dovetail shuffles suffice? « Rod Carvalho's web notebook Says:
December 18, 2009 at 2:22 am | Reply
[...] Card Shuffling I [...]
Debra Says:
March 24, 2014 at 1:04 am | Reply
Hurrah! Finally I got a blog from where I be able to really get
helpful data regarding my study and knowledge.
muscle car Says:
March 24, 2014 at 4:31 am | Reply
Hi, this weekend is good designed for me, because this moment i am
reading this wonderful informative piece of writing here at my house. | {"url":"http://cornellmath.wordpress.com/2009/04/19/card-shuffling-i/","timestamp":"2014-04-21T02:02:30Z","content_type":null,"content_length":"65315","record_id":"<urn:uuid:b0f29d04-103f-44d7-9245-f6b7031f828c>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00659-ip-10-147-4-33.ec2.internal.warc.gz"} |
Convert degree to arcsecond - Conversion of Measurement Units
›› Convert degree to arcsecond
›› More information from the unit converter
How many degree in 1 arcsecond? The answer is 0.000277777777778.
We assume you are converting between degree and arcsecond.
You can view more details on each measurement unit:
degree or arcsecond
The SI derived unit for angle is the radian.
1 radian is equal to 57.2957795131 degree, or 206264.806247 arcsecond.
Note that rounding errors may occur, so always check the results.
Use this page to learn how to convert between degrees and arcseconds.
Type in your own numbers in the form to convert the units!
›› Definition: Degree
A degree (or in full degree of arc), usually symbolized by the symbol °, is a measurement of plane angles, or of a location along a great circle of a sphere (such as the Earth or the celestial
sphere), representing 1/360 of a full rotation.
›› Definition: Second
A second of arc or arcsecond is a unit of angular measurement which comprises one-sixtieth of an arcminute, or 1/3600 of a degree of arc or 1/1296000 (approximately 7.7×10-7) of a circle. It is the
angular diameter of an object of 1 unit diameter at a distance of 360×60×60/(2pi) (approximately 206,265 units), such as (approximately) 1 cm at 2.1 km, or, directly from the definition, 1
astronomical unit at 1 parsec.
›› Metric conversions and more
ConvertUnits.com provides an online conversion calculator for all types of measurement units. You can find metric conversion tables for SI units, as well as English units, currency, and other data.
Type in unit symbols, abbreviations, or full names for units of length, area, mass, pressure, and other types. Examples include mm, inch, 100 kg, US fluid ounce, 6'3", 10 stone 4, cubic cm, metres
squared, grams, moles, feet per second, and many more!
This page was loaded in 0.0031 seconds. | {"url":"http://www.convertunits.com/from/degree/to/arcsecond","timestamp":"2014-04-20T20:56:48Z","content_type":null,"content_length":"20551","record_id":"<urn:uuid:dfd33508-0048-4431-b2f5-7531d5039d94>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00248-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
the midpoint formula is?
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/5109855ee4b0d9aa3c462c3c","timestamp":"2014-04-25T08:34:31Z","content_type":null,"content_length":"56201","record_id":"<urn:uuid:c507cbef-7fa3-4677-bd9d-2258540ac66b>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00586-ip-10-147-4-33.ec2.internal.warc.gz"} |
Normal Approximation to Binomial
The Rice Virtual Lab in Statistics has a more recent version of this applet.
In this demonstration you specify the number of events (N) and the probability of success for any one event (p) and push the "OK" button. The initial graph shows the probability distribution
associated with flipping a fair coin 12 times defining a head as a success. The vertical lines represent the probabilites of obtaining each of the 13 possible outcomes (0-12 heads). As you would
expect, the most likely outcome is 6 heads. This probability distribution is called the binomial distribution.
The blue distribution represents the normal approximation to the binomial distribution. It is a very good approximation in this case. The higher the value of N and the closer p is to .5, the better
the approximation will be.
Vary N and p and investigate their effects on the sampling distribution and the normal approximation to it. | {"url":"http://www.ruf.rice.edu/~lane/stat_sim/binom_demo.html","timestamp":"2014-04-17T01:05:46Z","content_type":null,"content_length":"2132","record_id":"<urn:uuid:b745e194-0ec0-4b69-8cd3-012a3fd6855b>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00337-ip-10-147-4-33.ec2.internal.warc.gz"} |
Q&A: Three Functions for Computing Derivatives
Mathematica Q&A: Three Functions for Computing Derivatives
May 20, 2011 — Andrew Moylan, Technical Communication & Strategy
Got questions about Mathematica? The Wolfram Blog has answers! We’ll regularly answer selected questions from users around the web. You can submit your question directly to the Q&A Team using this
This week’s question comes from Bashir, a student:
What are the different functions for computing derivatives in Mathematica?
The main function for computing derivatives in Mathematica is D, which computes the familiar partial derivative of an expression with respect to a variable:
D supports generalizations including multiple derivatives and derivatives with respect to multiple variables, such as differentiating twice with respect to x, then once with respect to y:
And vector and tensor derivatives, such as the gradient:
There are two important properties of D[expr, x] that distinguish it from other functions for computing derivatives in Mathematica:
1. D computes the derivative of an expression representing a quantity, such as Sin[x], not a function, such as Sin. Compare with Derivative below.
2. D takes the partial derivative to be zero for all subexpressions that don’t explicitly depend on x. Compare with Dt below.
To differentiate a function, you use Derivative, which has the standard shorthand notation ' (apostrophe):
The result is a pure Function of one unnamed argument #1. Note that if you immediately evaluate this function at x, the result is exactly what you would have found by using D to differentiate the
quantity Sqrt[x] with respect to x:
The notation f' is shorthand for Derivative[1][f], specifying differentiation once with respect to the first argument. As with D, generalizations like multiple derivatives and derivatives with
respect to multiple variables are possible:
Derivative[2, 1] specifies differentiation twice with respect to the first argument and once with respect to the second argument. (You could also write the above in a single line as Derivative[2, 1]
[#1^2 #2^3&].)
Dt[expr, x] computes the total derivative of the expression expr with respect to x. It works like D[expr, e], except Dt does not assume zero derivative for parts of expr with no dependence on x.
Compare D and Dt in this short example:
D assumes a is a constant independent of x; Dt does not, and Dt[a, x] remains unevaluated.
This can be useful in situations where you have variables that implicitly depend on some other variable. For example, suppose you want the time derivative of x+yz given in terms of the time
derivatives of x, y, and z. You can use Dt:
To do this with D, you would explicitly make x, y, and z functions of t:
Finally, the one-argument form Dt[expr] gives the total differential of the expression expr:
The result is given in terms of the differentials (Dt[x], Dt[y], in this case) of the variables occurring in expr.
When viewed in traditional mathematical notation (TraditionalForm), this example looks familiar as the standard quotient rule for differential quantities:
If you have a question you’d like answered in this blog, you can submit it to the Q&A Team using this form. For daily bite-sized Mathematica tips, follow our @MathematicaTip Twitter feed (or follow
using RSS).
One Comment
Posted by ba June 29, 2011 at 6:15 pm | {"url":"http://blog.wolfram.com/2011/05/20/mathematica-qa-three-functions-for-computing-derivatives/","timestamp":"2014-04-19T09:25:13Z","content_type":null,"content_length":"71457","record_id":"<urn:uuid:6732d8f7-3f56-4ed0-b3d1-caf3e0d08489>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00648-ip-10-147-4-33.ec2.internal.warc.gz"} |
An Overview of λProlog
Results 11 - 20 of 68
- In Eighth International Logic Programming Conference , 1991
"... The unification of simply typed λ-terms modulo the rules of β- and η-conversions is often called “higher-order ” unification because of the possible presence of variables of functional type.
This kind of unification is undecidable in general and if unifiers exist, most general unifiers may not exist ..."
Cited by 56 (3 self)
Add to MetaCart
The unification of simply typed λ-terms modulo the rules of β- and η-conversions is often called “higher-order ” unification because of the possible presence of variables of functional type. This
kind of unification is undecidable in general and if unifiers exist, most general unifiers may not exist. In this paper, we show that such unification problems can be coded as a query of the logic
programming language Lλ in a natural and clear fashion. In a sense, the translation only involves explicitly axiomatizing in Lλ the notions of equality and substitution of the simply typed
λ-calculus: the rest of the unification process can be viewed as simply an interpreter of Lλ searching for proofs using those axioms. 1
- PROCEEDINGS OF THE SECOND INTERNATIONAL WORKSHOP ON EXTENSIONS OF LOGIC PROGRAMMING , 1991
"... Operational semantics provide a simple, high-level and elegant means of specifying interpreters for programming languages. In natural semantics, a form of operational semantics, programs are
traditionally represented as first-order tree structures and reasoned about using natural deduction-like meth ..."
Cited by 44 (14 self)
Add to MetaCart
Operational semantics provide a simple, high-level and elegant means of specifying interpreters for programming languages. In natural semantics, a form of operational semantics, programs are
traditionally represented as first-order tree structures and reasoned about using natural deduction-like methods. Hannan and Miller combined these methods with higher-order representations using
Prolog. In this paper we go one step further and investigate the use of the logic programming language Elf to represent natural semantics. Because Elf is based on the LF Logical Framework with
dependent types, it is possible to write programs that reason about their own partial correctness. We illustrate these techniques by giving type checking rules and operational semantics for Mini-ML,
a small functional language based on a simply typed -calculus with polymorphism, constants, products, conditionals, and recursive function definitions. We also partially internalize proofs for some
meta-theoretic properti...
, 1990
"... Most conventional programming languages have direct methods for representing first-order terms (say, via concrete datatypes in ML). If it is necessary to represent structures containing bound
variables, such as λ-terms, formulas, types, or proofs, these must first be mapped into first-order terms, a ..."
Cited by 36 (1 self)
Add to MetaCart
Most conventional programming languages have direct methods for representing first-order terms (say, via concrete datatypes in ML). If it is necessary to represent structures containing bound
variables, such as λ-terms, formulas, types, or proofs, these must first be mapped into first-order terms, and then a significant number of auxiliary procedures must be implemented to manage bound
variable names, check for free occurrences, do substitution, test for equality modulo alphaconversion, etc. We shall show how the applicative core of the ML programming language can be enhanced so
that λ-terms can be represented more directly and so that the enhanced language, called MLλ, provides a more elegant method of manipulating bound variables within data structures. In fact, the names
of bound variables will not be accessible to the MLλ programmer. This extension to ML involves the following: introduction of the new type constructor ’a => ’b for the type of λ-terms formed by
abstracting a parameter of type ’a out of a term of type ’b; a very restricted and simple form of higher-order pattern matching; a method for extending a given data structure with a new constructor;
and, a method for extending function definitions to handle such new constructors. We present several examples of MLλ programs.
, 1992
"... We give a detailed, informal proof of the Church-Rosser property for the untyped lambda-calculus and show its representation in LF. The proof is due to Tait and Martin-Löf and is based on the
notion of parallel reduction. The representation employs higher-order abstract syntax and the judgments-as-t ..."
Cited by 36 (8 self)
Add to MetaCart
We give a detailed, informal proof of the Church-Rosser property for the untyped lambda-calculus and show its representation in LF. The proof is due to Tait and Martin-Löf and is based on the notion
of parallel reduction. The representation employs higher-order abstract syntax and the judgments-as-types principle and takes advantage of term reconstruction as it is provided in the Elf
implementation of LF. Proofs of meta-theorems are represented as higher-level judgments which relate sequences of reductions and conversions.
, 1994
"... Functional and logic programming are the most important declarative programming paradigms, and interest in combining them has grown over the last decade. Early research concentrated on the
definition and improvement of execution principles for such integrated languages, while more recently efficient ..."
Cited by 35 (0 self)
Add to MetaCart
Functional and logic programming are the most important declarative programming paradigms, and interest in combining them has grown over the last decade. Early research concentrated on the definition
and improvement of execution principles for such integrated languages, while more recently efficient implementations of these execution principles have been developed so that these languages became
relevant for practical applications. In this paper we survey the development of the operational semantics as well as
- Journal of Logic and Computation , 2003
"... We present the spine calculus S #-# as an efficient representation for the linear #-calculus # #-# which includes unrestricted functions (#), linear functions (-#), additive pairing (&), and
additive unit (#). S #-# enhances the representation of Church's simply typed #-calculus by enforcing ..."
Cited by 33 (5 self)
Add to MetaCart
We present the spine calculus S #-#&# as an efficient representation for the linear #-calculus # #-#&# which includes unrestricted functions (#), linear functions (-#), additive pairing (&), and
additive unit (#). S #-#&# enhances the representation of Church's simply typed #-calculus by enforcing extensionality and by incorporating linear constructs. This approach permits procedures such as
unification to retain the efficient head access that characterizes first-order term languages without the overhead of performing #-conversions at run time. Applications lie in proof search, logic
programming, and logical frameworks based on linear type theories. It is also related to foundational work on term assignment calculi for presentations of the sequent calculus. We define the spine
calculus, give translations of # #-#&# into S #-#&# and vice-versa, prove their soundness and completeness with respect to typing and reductions, and show that the typable fragment of the spine
calculus is strongly normalizing and admits unique canonical, i.e. ##-normal, forms.
- In Hanne Riis Nielson, editor, Proceedings of the European Symposium on Programming , 1996
"... . We consider how mode (such as input and output) and termination properties of typed higher-order constraint logic programming languages may be declared and checked effectively. The systems
that we present have been validated through an implementation and numerous case studies. 1 Introduction Jus ..."
Cited by 32 (10 self)
Add to MetaCart
. We consider how mode (such as input and output) and termination properties of typed higher-order constraint logic programming languages may be declared and checked effectively. The systems that we
present have been validated through an implementation and numerous case studies. 1 Introduction Just like other paradigms logic programming benefits tremendously from types. Perhaps most importantly,
types allow the early detection of errors when a program is checked against a type specification. With some notable exceptions most type systems proposed for logic programming languages to date (see
[18]) are concerned with the declarative semantics of programs, for example, in terms of many-sorted, order-sorted, or higher-order logic. Operational properties of logic programs which are vital for
their correctness can thus neither be expressed nor checked and errors will remain undetected. In this paper we consider how the declaration and checking of mode (such as input and output) and
- University of Pennsylvania. Available as , 1992
"... this document are those of the author and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. ..."
Cited by 28 (7 self)
Add to MetaCart
this document are those of the author and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government.
- In 20th International Conference on Automated Deduction , 2003
"... Elf is a general meta-language for the specification and implementation of logical systems in the style of the logical framework LF. Based on a logic programming interpretation, it supports
executing logical systems and reasoning with and about them, thereby reducing the effort required for each ..."
Cited by 26 (11 self)
Add to MetaCart
Elf is a general meta-language for the specification and implementation of logical systems in the style of the logical framework LF. Based on a logic programming interpretation, it supports executing
logical systems and reasoning with and about them, thereby reducing the effort required for each particular logical system. The traditional logic programming paradigm is extended by replacing
first-order terms with dependently typed -terms and allowing implication and universal quantification in the bodies of clauses. These higher-order features allow us to model concisely and elegantly
conditions on variables and the discharge of assumptions which are prevalent in many logical systems. However, many specifications are not executable under the traditional logic programming semantics
and performance may be hampered by redundant computation. To address these problems, I propose a tabled higher-order logic programming interpretation for Elf. Some redundant computation is eliminated
by memoizing sub-computation and re-using its result later. If we do not distinguish different proofs for a property, then search based on tabled logic programming is complete and terminates for
programs with bounded recursion. In this proposal, I present a proof-theoretical characterization for tabled higher-order logic programming. It is the basis of the implemented prototype for tabled
logic programming interpreter for Elf. Preliminary experiments indicate that many more logical specifications are executable under the tabled semantics. In addition, tabled computation leads to more
efficient execution of programs. The goal of the thesis is to demonstrate that tabled logic programming allows us to efficiently automate reasoning with and about logical systems in the logical f...
- 7th Int. Conf. Logic Programming , 1990
"... Definite Clause Grammars (DCGs) have proved valuable to computational linguists since they can be used to specify phrase structured grammars. It is well known how to encode DCGs in Horn clauses.
Some linguistic phenomena, such as filler-gap dependencies, are difficult to account for in a completely ..."
Cited by 25 (4 self)
Add to MetaCart
Definite Clause Grammars (DCGs) have proved valuable to computational linguists since they can be used to specify phrase structured grammars. It is well known how to encode DCGs in Horn clauses. Some
linguistic phenomena, such as filler-gap dependencies, are difficult to account for in a completely satisfactory way using simple phrase structured grammar. In the literature of logic grammars there
have been several attempts to tackle this problem by making use of special arguments added to the DCG predicates corresponding to the grammatical symbols. In this paper we take a different line, in
that we account for filler-gap dependencies by encoding DCGs within hereditary Harrop formulas, an extension of Horn clauses (proposed elsewhere as a foundation for logic programming) where
implicational goals and universally quantified goals are permitted. Under this approach, filler-gap dependencies can be accounted for in terms of the operational semantics underlying hereditary
Harrop formulas, in a way reminiscent of the treatment of such phenomena in Generalized Phrase Structure Grammar (GPSG). The main features involved in this new formulation of DCGs are mechanisms for
providing scope to constants and program clauses along with a mild use of λ-terms and λ-conversion. 1 | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=484521&sort=cite&start=10","timestamp":"2014-04-17T07:27:56Z","content_type":null,"content_length":"39787","record_id":"<urn:uuid:8af34609-ff1a-4f63-ac36-d81efc6ba4ec>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00525-ip-10-147-4-33.ec2.internal.warc.gz"} |
FOM: Intuition, Logic, and Induction.
Alexander Zenkin alexzen at com2com.ru
Tue Mar 9 20:39:56 EST 1999
Charles Silver (in particular) wrote (RE: FOM: Intuition, Logic, and
Induction. Tue, 9 Mar 1999 10:18:18) to Paul Prueitt:
> I ... did not understand Alexander Zenkin's EA Theorem.
> I still do not understand his points, which you suggest are
> complex and not amenable to the simple kind of answer that I requested.
Dear Charlie,
Sincerely sorry for some misunderstanding! But that was not a trivial
misunderstanding: it showed very clearly how strongly a pure logic and a pure
mathematics are connected in real (scientific!) life with our (scientific!)
psychology, our (scientific!) intuition, and our (scientific!) super-inductive
subconsciousness : -).
Indeed, in my message (Subject: FOM: Intuition, Logic, and Induction. Date:
Sat, 06 Mar 1999 14:54:47) I gave the EA-Theorem formulation in the following
EA-THEOREM. IF there exists at least one natural number, say n*, such
that a predicate Q(n*) holds, THEN for all natural number n > n* a predicate
P(n) is true, or, in a short symbolic form:
_E n* Q(n*) == > _A n>n* P(n). (1*)
Here, P and Q=f(P) are some number-theoretical properties (predicates) , and
symbols "_E" and "_A" simply replace here the usual mathematical words "there
exists" and " for all", correspondingly.
1) any mathematician, seeing the equality Q=f(P), takes, naturally, the
simplest particular case, Q=P, and obtains:
_E n* Q(n*) == > _A n>n* Q(n). (1*)
and, on the spot, he/she presents the killing example: Q(n)="n is a prime
number", since Q(2) is true, then all natural number > 1 are prime. That
mathematician says: "It is a full stupidity". Of course, such the
mathematician will be absolutely right.
The author's psychology: today, I know only four different type of
EA-Theorems (only two are mine); every type is based on an invention, on a
discovery of a unique logical and mathematical connection between two
DIFFERENT unique mathematical properties (predicates) P and Q produced by P,
i.e. P=/=Q. In my message just that unique logical and mathematical
connection is denoted for short by Q=f(P). Why "for short"? - Because the
Super-Induction account occupies about six hours in my lectures course : -)
So, the notation Q=f(P) is not a common mathematical function, and therefore,
in order to avoid misunderstandins, I should have written explicitely the
condition P=/=Q. My psychology did not allow me to foresee such the possible
2) Any logician, seeing the (1*), can say: "an AUTHENTIC inference of a
COMMON statement _A n>n* P(n) from a SINGLE statement _E n* Q(n*) is
impossible, because such the inference contradicts to J.S.Mill's inductive
logic, to all our scientific experience, and to our scientific intuition!"
The author's argumentation: my scientific experience and my scientific
intuition, for a long time, too were hindering me to understand that
EA-Theorems are a new type of logical and mathematical authentic reasonings,
but, on the other hand, the MATHEMATICAL EA-Theorems are proved by rigorous
MATHEMATICAL methods, and it is not admissible to dispute against mathematics.
Ultimately, being a mathematician, I put absolute trust in MATHEMATICS : -).
But there are two arguments that, I hope, can shake the natural distrust of
logicians: firstly, all EA-Theorems statements refer (today) to the natural
numbers (or to the natural indexes of objects), but the series of natural
numbers, thanks to "if n then n+1", is a very specific, strong coherent
structure (it is not "a J.S.Mill's set of particular (frequently, - quite
accidental) facts") which is adapted very well for a realization of "chain
reactions", and, secondly, such the predicate, P, which such the predicate, Q,
can be invented for, so that the corresponding EA-Theorem (1*) could be
proved, is very rare event in modern mathematics: repeat, there are only four
such the events. It needs to study their properties much better in order to
understand why they are in existence.
At last, the notation (1*) does not contain explicit logical and
mathematical definitions of concrete predicates P and Q. In oder to any
mathematician and logician will believe that the notation (1*), taking into
account its "impossible" form, is not an anti-Sokal-like mystification, he/she
must personally to test a real EA-Theorem and its real mathematical proof.
Unfortunately, our "linear" texts are adapted for mathematical language not
so well. I am afraid that even the real H.E.Richert's EA-Theorem formulation
which is contained in the end of my previous message {Subject: FOM: Intuition,
Logic, and Induction. Date: Sat, 06 Mar 1999 14:54:47} can be read not
adequately by different browsers (I see that my own FOM-reply is quite
Therefore, I can suppose only the references to some EA-Theorems with
their proofs (in English).
1. H.E.Richert's EA-Theorem, W.Sierpinski, Elementary Theory of Numbers. -
Warszaw, 1964, Chapter III. Prime numbers, pp. 143-144.
2. A.A.Zenkin, Superinduction: A New Method For Proving General Mathematical
Statements With A Computer. - Doklady Mathematics, Vol.55, No.3, pp. 410-413
(1997). Translated from Doklady Akademii Nauk, Vol 354, No. 5, 1997, pp. 587 -
3. A.A.Zenkin, Waring's problem: g(1,4) = 21 for fourth powers of positive
integers.- An International Journal "Computers and Mathematics with
Applications", Vol.17, No. 11, pp. 1503 - 1506, 1989.
In my previous message, I described the main stages of the both methods:
the Complete Mathematical Induction and the Super-Induction. Here I would like
to add the following.
1) In the both methods, there is a threshold number n*. In the Complete
Mathematical Induction the number n*, usually, is equal to 0 or 1 that,
usually, is defined by a trivial non-feasibility of the predicate P (a
devision by zero, an imaginarity and so on). In the Super-Induction metod, the
threshold number n* can have any finite values (of course, in different
2) In the Super-Induction method, we can invent the Q, and we can prove the
corresponding EA-Theorem (1*), but, untill we will find a number n* (a trigger
of the Super-Induction) such that Q(n*) holds, our "proof" proves nothing,
because the EA-Theorem itself does not guarantees the existence of the
threshold numbers n*. As is known, the last problem simply does not appear in
a framework of the Complete Mathematical Induction method.
The following example elucidates the said.
Consider the old expression n^2+n+41 and the predicate P(n) = "n^2+n+41 is
a composite number". Since P(n) is false for all n=0,1,2,...,39, but P(40) is
true, nobody will wish to use here the common Complete Mathematical Induction.
But the Super-Induction, generally speaking, can be used here under the
following two conditions.
1) If for the given predicate P(n), we will be able (today) TO INVENT a
predicate Q(n), which, of course, will be depended upon P (and, possibly, upon
other parameters of the problem), and such Q(n) that the EA-Theorem (1*) could
be proved. If so, then
2) We must find a natural number n*.
After that only, we can state that for all n>n* the expression n^2+n+41
defines only composite numbers. Of course, if a joker already did not proved
some earlier that a quantity of prime numbers generated by the expression
n^2+n+41 is infinite : -).
> On Tue, 9 Mar 1999, Paul Prueitt wrote to
> "'Charles Silver'" <csilver at sophia.smith.edu>:
> Your question is a good question and should be given clarity at the
> minimal complexity, so that we can all share in the insight of super
> induction. For example, how does Robert's epistemology of social
> science see the difference between classical mathematical induction and
> the super induction of Zenkin. My point was that there is a
> neuropsychological (and quantum field theory) basis for grounding any
> representation of anything that can be associated with the term
> "induction". and I asked Alex to address this in as simple a fashion as
> possible.
Dear Paul,
you absolutely right as to the intention to "the minimal complexity": you have
guessed my constant intention to a maximal clarity at "the minimal
complexity". Above, I have tryed to do that. I hope the problem became some
more clear, but as Sokrat (?) said "I know that I know nothing"! The Charles
Silver's and yours questions touched upon some very deep problems, connected
with Intuition, Logic, and Induction, and especially in connexion with the
FOM-discussion on Visual Proofs. The question is such. For number-theory
problems solution by means of the Super-Induction method, I use my Cognitive
Computer Visualization System DSNT, i.e., I simply visualize an intitial
segment of the natural numbers series with the predicates P and Q given on the
set. Of course, the corresponding (color-musical) 2D-images are constructed by
a computer. So, I can now to find the threshold number n* not only visually,
but also hearing the corresponding mathematical music. It is not a mystic, it
was done many years ago. But a very interesting question (of an
epistemological-social sense : -) arises: is a physical fact of my hearing the
number n* a legitimate argument for the authentic truth of the common
mathematical statement _A n>n* P(n)? : -) The same question refers to the
case (if the segment is very large) when I stare at not a static picture, but
at, say, a 30-min color-musical cartoon? : -)
In one word, I agree with you that very interesting and quite unexpected
"neuropsychological (and quantum field theory), and especially educational
investigations might be done here. All the more, as you certainly keep in
mind, Karl Pribram and I discovered very unexpected similarity between his
experimental pictures (if my memory serves correctly) of active points of
cerebral cortex, and my color-musical pythograms of Euler's Theorems on sums
of two squares of non-negative integers. That time, we agreed that such the
coincidence might be not accidental. Alas, as far back as 1995, at IEEE'95,
So, dear Charles and Paul, once more thanks very much for your deep
questions, remarks, and desires. I would be very glad to answer all possible
questions for clarification of the Super-Induction logical Nature.
Sincerely yours,
A Z
Prof. Alexander A. Zenkin,
Doctor of Physical and Mathematical Sciences,
Leading Research Scientist of the Computer Center
of the Russian Academy of Sciences.
e-mail: alexzen at com2com.ru
WEB-Site http://www.com2com.ru/alexzen
"Infinitum Actu Non Datur" - Aristotle.
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/1999-March/002777.html","timestamp":"2014-04-18T14:01:05Z","content_type":null,"content_length":"13538","record_id":"<urn:uuid:7e5005c1-b826-407c-a597-9652e4832c42>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00405-ip-10-147-4-33.ec2.internal.warc.gz"} |
Two Trips to Gorilla Land and the Cataracts of the Congo Volume 2 (9781153729727) Richard Francis Burton
Product Details:
Paperback: 138 pages
Publisher: General Books LLC (March 7, 2010)
Language: English
ISBN-10: 1153729725
ISBN-13: 978-1153729727
Product Dimensions:
9.1 x 5.9 x 0.4 inches
Shipping Weight: 8 ounces (View shipping rates and policies)
The book has no illustrations or index mathematics mechanics pdf. Purchasers are entitled to a free trial membership in the General Books Club where they can select from more than a million books
without charge mathematics mechanics pdf. Subjects mathematics mechanics pdf: Ethnology mathematics mechanics pdf; Africa, West; Congo River; Africa, West Description and travel; Congo River
Description and travel; Ethnology Africa, West; Biography
Tags: Two Trips to Gorilla Land and the Cataracts of the Congo Volume 2 (9781153729727) Richard Francis Burton , tutorials, pdf, ebook, torrent, downloads, rapidshare, filesonic, hotfile, megaupload,
GO Two Trips to Gorilla Land and the Cataracts of the Congo Volume 2 (9781153729727) Richard Francis Burton
Related links:
Drawing Book Collection 2011 – Jiwang WareZ Scene
Un chocolat chez Hanselmann (9782743604134) Rosetta Loy
Best Roses (Best Gardening) (9780600583400) Stefan Buczacki
Bestsellers Guarantee (9780441055029) Joe R. Lansdale
Model Airplane International 2008-09 | {"url":"http://polyrbearlithfe.blog.com/2012/03/28/two-trips-to-gorilla-land-and-the-cataracts-of-the-congo-volume-2-9781153729727-richard-francis-burton/","timestamp":"2014-04-20T18:32:31Z","content_type":null,"content_length":"36204","record_id":"<urn:uuid:b102c5dc-5f2a-4407-bf7e-e1b0e0a4b4b4>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00256-ip-10-147-4-33.ec2.internal.warc.gz"} |
[SI-LIST] : ESR and Bypass Caps, revisited and re
[SI-LIST] : ESR and Bypass Caps, revisited and revised
From: Doug Brooks (doug@eskimo.com)
Date: Mon Feb 21 2000 - 15:12:59 PST
About three weeks ago I called everyone's attention to an article we had
written about ESR and Bypass Cap selection. After considerable discussion,
the concensus was that the statements I made about the subject were accurate.
But I have subsequently realized that there were TWO IMPORTANT CONSEQUENCES
of what I had developed that I had NOT included in the paper. These came
clear when we added a graphical output to the calculator (the single most
requested update to it!)
(The paper has been updated with these two conclusions, graphical
illustrations and some additional examples.
http://www.ultracad.com and follow link)
Consequence 1.
I had said that the condition for a (almost) flat impedance response curve
is that, at the anti-resonant frequencies, the following relationship hold
ESR = -X1 = X2
where X1 and X2 are the reactance terms of the (in this case) two parallel
What I did NOT expand on is that, this being the case:
If ESR goes down, then
X1 and X2 must also go down, and
therefore the self resonant frequencies of the two capacitors must move
therefore more capacitors must be used to cover a given frequency range!
That is: The lower is ESR, the more bypass capacitors are required to
achieve a given impedance response with frequency.
This point is developed and demonstrated in the revised Appendix 4.
Consequence 2.
Another point made in the paper is that the minimum impedance is less than
ESR for all practical cases. A point NOT made originally, is:
As the self resonant points of the parallel capacitors get closer together,
without changing ESR, the minimum impedance value decreases. Revised
Appendix 3 in the paper illustrates the case of 100 bypass caps with
self-resonant frequencies spread over the range of 5 to 500 MHz. The
impedance response is quite good over this range. But when the number of
capacitors is increased to 150, the impedance curve is lower AT EVERY
FREQUENCY then the 100 cap curve. That is --- the MAXIMUM impedance for the
150 caps is LESS than the MINIMUM impedance for the 100 caps! The 200
capacitor curve is lower EVERYWHERE then the 150 cap curve! (See Revised
Appendix 3)
It was noted, correctly, that this kind of analysis cannot take into
consideration the PLACEMENT of capacitors across the board. It takes a
finite amount of time for charge to propagate between locations on a board.
So, even though there may be the appearance of sufficient capacitance
available, in fact, charge may not be able to get where needed in time. For
a simplified discussion of this point, see my column in the January issue
of PC Design ("A One-Answer Quiz"), reprinted on our web page under the
title "This Month's Quiz."
Our calculator has been upgraded to include the graphical capability.
Existing licenses will still work with the new upgrade.
Doug Brooks
See our updated message re in-house seminars on our web page
Doug Brooks, President doug@eskimo.com
UltraCAD Design, Inc. http://www.ultracad.com
**** To unsubscribe from si-list or si-list-digest: send e-mail to majordomo@silab.eng.sun.com. In the BODY of message put: UNSUBSCRIBE si-list or UNSUBSCRIBE si-list-digest, for more help, put HELP.
si-list archives are accessible at http://www.qsl.net/wb6tpu
This archive was generated by hypermail 2b29 : Thu Apr 20 2000 - 11:35:07 PDT | {"url":"http://www.qsl.net/wb6tpu/si-list4/0593.html","timestamp":"2014-04-17T15:29:42Z","content_type":null,"content_length":"7129","record_id":"<urn:uuid:0d9c829c-c19d-4cf5-9fc6-993e49c3bf3f>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00141-ip-10-147-4-33.ec2.internal.warc.gz"} |
Binary Representation of Real Numbers
February 7th 2011, 12:05 PM #1
Junior Member
Jan 2010
Binary Representation of Real Numbers
Hey everyone,
I have question that is asking to show that the set of real numbers has no binary representation. I understand that real numbers cannot have a one to one correspondence with natural numbers. I
also understand that the real numbers are an uncountable infinity and natural numbers are a countable infinity. I am a bit lost on how to use this information to prove this. Can anyone help me
make the connection?
What do you mean by a binary representation of a set (of real numbers)?
February 7th 2011, 11:47 PM #2
MHF Contributor
Oct 2009 | {"url":"http://mathhelpforum.com/discrete-math/170466-binary-representation-real-numbers.html","timestamp":"2014-04-16T14:29:21Z","content_type":null,"content_length":"32654","record_id":"<urn:uuid:a1d1002c-6395-48f0-bff5-76ee10f30888>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00471-ip-10-147-4-33.ec2.internal.warc.gz"} |
Abstract Algebra
October 15th 2008, 08:59 PM
Abstract Algebra
I have no clue how to attack these question one. All help or hints is appreciated.
G always denotes a group
1. Let G be finite and G not equal to {e}. Show that G has an element of prime order.
2. Prove that isomorphic groups have isomorphic automorphism groups.
3. Let a, b be in G . If |a| and |b| are relatively prime (i.e. gcd (|a|, |b|)=1), then <a> intersection <b> ={e}. Prove the last statement.
October 15th 2008, 09:30 PM
Let $a\in G - \{ e \}$ and construct $\left< a \right>$. Let $|\left< a \right> | = n>1$. The integer $n$ has a prime divisor $p$. The element $a^{n/p}$ has order $p$.
2. Prove that isomorphic groups have isomorphic automorphism groups.
Let $\phi : G_1 \to G_2$ be a group isomorphism.
Define $\hat \phi : \text{Aut}(G_1)\to \text{Aut}(G_2)$ by $\hat \phi (\theta)(a) = \phi(\theta(a))$.
Show this is an isomorphism.
3. Let a, b be in G . If |a| and |b| are relatively prime (i.e. gcd (|a|, |b|)=1), then <a> intersection <b> ={e}. Prove the last statement.
Notice that $\left< a\right> \cap \left< b \right>$ is a subgroup of $\left< a \right>$ and $\left< b\right>$. By Lagrange's theorem it means $| \left< a\right> \cap \left< b\right>|$ divides $|\
left< a\right>| = |a|$ and $\left< b\right>| = |b|$. Therefore, $|\left< a\right> \cap \left< b \right>| = 1 \implies \left< a \right> \cap \left< b \right> = \{ e \}$.
October 16th 2008, 01:30 AM
Let $a\in G - \{ e \}$ and construct $\left< a \right>$. Let $|\left< a \right> | = n>1$. The integer $n$ has a prime divisor $p$. The element $a^{n/p}$ has order $p$.
Let $\phi : G_1 \to G_2$ be a group isomorphism.
Define $\hat \phi : \text{Aut}(G_1)\to \text{Aut}(G_2)$ by $\hat \phi (\theta)(a) = \phi(\theta(a))$.
Show this is an isomorphism.
Notice that $\left< a\right> \cap \left< b \right>$ is a subgroup of $\left< a \right>$ and $\left< b\right>$. By Lagrange's theorem it means $| \left< a\right> \cap \left< b\right>|$ divides $|\
left< a\right>| = |a|$ and $\left< b\right>| = |b|$. Therefore, $|\left< a\right> \cap \left< b \right>| = 1 \implies \left< a \right> \cap \left< b \right> = \{ e \}$.
So need to show that $\hat \phi (\theta)(a) = \phi(\theta(a))$ is a structure preserving function?
October 16th 2008, 07:26 AM | {"url":"http://mathhelpforum.com/advanced-algebra/53971-abstract-algebra-print.html","timestamp":"2014-04-20T10:13:44Z","content_type":null,"content_length":"14914","record_id":"<urn:uuid:d6811d51-fe9e-406e-ac71-00cee8cb992e>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00591-ip-10-147-4-33.ec2.internal.warc.gz"} |
absolute simultaneity with Einstein synchronized clocks?
The set up seems extremely confused or at least incomplete.
What is “a”
“M(a(xa) “ looks like a typo.
Can we assume x’=0 is where the observer R’ is at on I’
When t=? & t’=? did R’ crossed x=0 sometime before the light flashed.
The light flashes at x=0 t=0; where and when does it go off in the I’ frame.
Since the source could be a point in the I’ frame. Where and when is that I’ frame point, x=? & t= ? when R’ was at x=0.
I don’t know what the question “Is there some flow?” means;
But I don’t see any prospects for setting any form of “Absolute Simultaneity” in this description.
Thanks for your answer. Consider please the following one space dimensions detected from I. A particle moves with constant speed V in the positive direction of the OX axis. x defines its position at
t=0 when a light signal is emitted from the origin O (APPARENT POSITION) X defining its position when the light signal arrives at its location. (ACTUAL POSITION). We have
X=x+Vx/c=x(1+V/c) (1)
Let t=x/c and T=X/c be the times when the light signal arrives at the apparent and at the actual positions respectively. The mentioned light signal performs the synchronization of the clocks of I.
Performing the Lorentz transformations to the rest frame of the moving particle I'
we obtain
T'=g(T-Vx/c^2)=T[(1-V/c)/(1+V/c)]^1/2=t/g (2)
X'=g(X-VT)=X[(1-V/c)/(1+V/c)]=x/g (3)
Do you consider that (2) and (3) are the transformation equations for the events e(x,x/c) and E(X,X/c) for the space-time coordinates of events associated with the apparent and the actual positions
of the same particle? Is (2) an expression for absolute simultaneity (t=0, T'=0)
The inertial reference frames I and I' are in the standard arrangement. | {"url":"http://www.physicsforums.com/showpost.php?p=1769288&postcount=4","timestamp":"2014-04-20T14:12:50Z","content_type":null,"content_length":"9581","record_id":"<urn:uuid:731cbaf3-6659-4d20-8f61-91c047650bad>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00231-ip-10-147-4-33.ec2.internal.warc.gz"} |
Westwood Area 2, OH Math Tutor
Find a Westwood Area 2, OH Math Tutor
...In addition, I was a teaching assistant for undergraduate and graduate students in the Biomedical Engineering and Kinesiology departments. It is my goal to not only teach my students the
material, but to give them the tools needed to succeed in all their classes. With the right tools and encour...
30 Subjects: including trigonometry, discrete math, econometrics, linear algebra
...During my undergraduate study for a B.S. Chemical Engineering at UCR, I have taken college level courses in pre-calculus, English, and Chemistry. I am most experienced with college students,
but can also tutor middle and high school students.
12 Subjects: including calculus, precalculus, reading, literature
...Most students do not intuitively know how to do these things, and most teachers are too pressed for time to really teach them well. I have an active California teaching credential in Physical
Education and over 25 years experience teaching Physical Education, including basketball, in LAUSD. I also have extensive coaching experience at my schools and with BHBL.
15 Subjects: including prealgebra, reading, English, writing
I have previously tutored in Earth Science, Geometry and Biology at an under-performing inner city school in San Diego (San Diego High School). This was done concurrently for a quarter while
studying at UCSD, where I recently graduated from with my Bachelor of Science in Environmental Systems and a specialization in Earth Science.
16 Subjects: including algebra 2, calculus, chemistry, elementary (k-6th)
...I also use to be a life guard. I have 5+ years of training in school and with various actor's studios such as: New York Film Academy, John Robert Powers, and Saturday Actor's Studio. I have
been in 7 plays, and have been an extra in a television show and commercial.
14 Subjects: including prealgebra, physics, algebra 1, algebra 2
Related Westwood Area 2, OH Tutors
Westwood Area 2, OH Accounting Tutors
Westwood Area 2, OH ACT Tutors
Westwood Area 2, OH Algebra Tutors
Westwood Area 2, OH Algebra 2 Tutors
Westwood Area 2, OH Calculus Tutors
Westwood Area 2, OH Geometry Tutors
Westwood Area 2, OH Math Tutors
Westwood Area 2, OH Prealgebra Tutors
Westwood Area 2, OH Precalculus Tutors
Westwood Area 2, OH SAT Tutors
Westwood Area 2, OH SAT Math Tutors
Westwood Area 2, OH Science Tutors
Westwood Area 2, OH Statistics Tutors
Westwood Area 2, OH Trigonometry Tutors
Nearby Cities With Math Tutor
Baldwin Hills, CA Math Tutors
Briggs, CA Math Tutors
Century City, CA Math Tutors
Farmer Market, CA Math Tutors
Green, CA Math Tutors
Holly Park, CA Math Tutors
La Costa, CA Math Tutors
Miracle Mile, CA Math Tutors
Paseo De La Fuente, PR Math Tutors
Playa, CA Math Tutors
Preuss, CA Math Tutors
Rancho La Costa, CA Math Tutors
Rancho Park, CA Math Tutors
Westwood, LA Math Tutors
Wilcox, CA Math Tutors | {"url":"http://www.purplemath.com/westwood_area_2_oh_math_tutors.php","timestamp":"2014-04-18T14:06:22Z","content_type":null,"content_length":"24322","record_id":"<urn:uuid:55365bd9-1f89-4e4d-9479-0f50838dc5e4>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00556-ip-10-147-4-33.ec2.internal.warc.gz"} |
Undergraduate Catalog
Mathematical Sciences
Mathematics is the international language of science and technology. Much of the subject matter in engineering and the natural sciences, as well as some social sciences such as economics, is
presented in mathematical terms. Mathematical and statistical techniques are vital in fields that usually are not considered mathematical, such as biology, psychology, and political science.
Some students come to mathematical sciences with the intention of teaching in high school or college or pursuing research in mathematics. Some are attracted to mathematics for its own sake, for the
beauty, discipline, logic, and problem-solving challenges. Other students pursue mathematics in order to achieve deeper understanding in their own areas of study.
Actuarial science is the mathematical analysis of problems in economics, finance, and insurance. It requires knowledge of statistics, probability, and interest theory and how they relate to financial
Applied mathematics is a discipline using mathematical analysis to solve problems coming from outside the field of mathematics.
Atmospheric science is the study of short-term weather and long-term climate, involving activities such as weather forecasting and analysis and air pollution meteorology. It uses advanced methods in
statistics and numerical modeling.
Computational mathematics is closely related to applied mathematics. It emphasizes techniques of scientific computing and other computational analysis.
Pure mathematics emphasizes the theory and structure underlying all areas of mathematics.
Statistics is a field of mathematics that provides strategies and tools for using data to gain insight into real world and experimental problems.
A major in mathematical sciences allows students to design, in conjunction with an advisor, a personalized program to fit individual interests and talents. Students may major in actuarial science,
atmospheric science, or mathematics.
The basic mathematics major has been designed for students who are completing a double major. For this reason, flexibility is offered; students should find it relatively easy to combine the
requirements of the mathematics major with the mathematical requirements or electives of other programs.
Students may specialize in any of four particularly significant areas applied mathematics, computational mathematics, pure mathematics, and statistics. Completing a specialization gives a student
expertise that is indicated on the transcript and that will be helpful in seeking employment or gaining admission to graduate school.
Students of the sciences, engineering, computer science, economics, and business often complete a significant number of mathematical sciences credits. These students are encouraged to take a
mathematics major or minor, which adds an official recognition of important analytical skills valued by employers and graduate schools.
Students interested in teaching mathematics at the K-12 level should consult the School of Education section of this catalog.
Please visit the departmental Web page at http://www.math.uwm.edu and follow the links to the undergraduate program.
Curricular Areas in Mathematical Sciences
Students should note that there are three curricular areas and corresponding abbreviations in the Department of Mathematical Sciences: Atmospheric Science (Atm Sci), Mathematics (Math), and
Mathematical Statistics (MthStat).
Course of Study: Majors
Students considering a major in the Department of Mathematical Sciences need to come to the department to declare their major and be assigned an advisor. All courses selected for the major must be
approved by the advisor, and students should check regularly with their advisors to plan their courses of study in a coherent and timely fashion.
Preparatory Curriculum. Students in all majors in the Department of Mathematical Sciences must complete Math 231, 232, and 233 (or equivalent). Math 225 and 226 are equivalent to Math 231; Math 221
and 222 are equivalent to Math 231, 232, and 233. Students majoring in actuarial science or mathematics must have a GPA of at least 2.5 in these courses. All majors must take either Math 234 or 240,
as well as a course in computer programming in a modern, high-level language. The department also recommends strongly one year of calculus-based physics. Actuarial science and atmospheric science
majors must complete additional preparatory curricula, as indicated below.
Capstone Experience. Students in all majors and major options in the Department of Mathematical Sciences must complete either Atm Sci 599 or Math 599, "Capstone Experience." The aim of the
department’s capstone experience is to encourage independent learning. Students complete a research paper in the context of this course, which satisfies the L&S research requirement. Students must
obtain consent of a professor to enroll in Atm Sci 599 or Math 599.
The actuarial science major is an interdisciplinary program intended to prepare students for professional examinations and employment as actuaries. Students must complete the courses listed below,
including at least 15 upper-division (numbered 300 and above) credits in the major in residence at UWM. The College requires that students attain at least a 2.5 GPA on all credits in the major
attempted at UWM. In addition, students must attain a 2.5 GPA on all major credits attempted, including any transfer work.
Additional Preparatory Curriculum
Bus Adm 201 Understanding and Using Financial Statements 3
Econ 103 Principles of Microeconomics 3
Econ 104 Principles of Macroeconomics 3
Math 234 Linear Algebra and Differential Equations 4
At least 6 credits to be completed from among:
Bus Adm 230 Introduction to Information Systems 3
CompSci 151 Introduction to Scientific Programming in Fortran 3
CompSci 201 Introductory Computer Programming 4
One of the following three courses:
MthStat 215 Elementary Statistical Analysis 3
Econ 210 Economic Statistics 3
Bus Adm 210 Introduction to Management Statistics 3
Core Curriculum
The following coursework is required:
Econ 301 Intermediate Microeconomics 3
Econ 302 Intermediate Macroeconomics 3
Math 311 Theory of Interest 3
Math 571 Introduction to Probability Models 3
Math 599 Capstone Experience 1
MthStat 361 Introduction to Mathematical Statistics I 3
MthStat 362 Introduction to Mathematical Statistics II 3
MthStat 563 Regression Analysis 3
MthStat 564 Time Series Analysis 3
MthStat 591 Foundations in Professional Practice in Actuarial Science 1
MthStat 592 Actuarial Science Laboratory I: Probability 1
MthStat 593 Actuarial Science Laboratory II: Interest Theory, Finance, Economics 1
Bus Adm 350 Principles of Finance 3
Bus Adm 450 Intermediate Finance 3
Recommendations for Actuarial Science Students. To achieve the best preparation for the actuarial examinations, students should take more than the minimum number of credits from the above list.
Students are encouraged strongly to take two courses in communication to prepare for an actuarial career. Commun 103, 262, and/or 264 are recommended. Some of the required and recommended course work
will satisfy portions of the Letters and Science distribution requirements. Econ 103 and 104 satisfy 6 credits of the social science requirement; Econ 248 (Economics of Discrimination) satisfies the
cultural diversity requirement; the recommended communication courses count toward the humanities requirement. Students also are advised to enroll in an internship (MthStat 489) as part of their
elective credits.
Students may find information regarding the actuarial profession by checking the web pages of the Department of Mathematical Sciences or those of the Society of Actuaries (www.soa.org).
The atmospheric science division of the department offers courses designed to prepare students for professional work in meteorology in both government and private service and for graduate study in
atmospheric sciences. Students must complete at least 15 upper-division (numbered 300 and above) credits in the major in residence at UWM. The College requires that students attain at least a 2.5 GPA
on all credits in the major attempted at UWM. In addition, students must attain a 2.5 GPA on all major credits attempted, including any transfer work. The following courses are required for the
atmospheric science major.
Additional Preparatory Curriculum. In addition to the preparatory curriculum required of all mathematical sciences majors, the following courses are required. These courses do not count in
calculating the major GPA.
Math 234 Linear Algebra and Differential Equations
Chem 102 General Chemistry
Physics 209/214 Physics I (Calculus Treatment)
Physics 210/215 Physics II (Calculus Treatment)
CompSci 151 Introduction to Scientific Programming in Fortran
Required Courses (Core)
Atm Sci 240 Introduction to Meteorology
Atm Sci 330 Air-Pollution Meteorology
Atm Sci 350 Atmospheric Thermodynamics
Atm Sci 351 Dynamic Meteorology I
Atm Sci 352 Dynamic Meteorology II
Atm Sci 360 Synoptic Meteorology I
Atm Sci 361 Synoptic Meteorology II
Atm Sci 464 Cloud Physics
Atm Sci 511 Seminar in Atmospheric Radiation and Remote Sensing
Atm Sci 599 Capstone Experience
Math 320 Introduction to Differential Equations
Electives - at least 9 credits from the following courses:
Atm Sci 320 Atmospheric Chemistry
Atm Sci 460 Mesoscale Circulations
Atm Sci 465 Meteorological Instrumentation
Atm Sci 470 Tropical Meteorology
Atm Sci 480 The General Circulation and Climate Dynamics
Atm Sci 497 Study Abroad: (Subtitle)
Atm Sci 505 Micrometeorology
Atm Sci 531 Numerical Weather Prediction
Atm Sci 690 Topics in Atmospheric Sciences: (Subtitle)
Math 313 Linear Programming and Optimization
Math 314 Mathematical Programming and Optimization
Math 321 Vector Analysis
Math 322 Introduction to Partial Differential Equations
Math 405 Mathematical Models and Applications
Math 413 Introduction to Numerical Analysis
Math 414 Numerical Analysis
Math 416 Computational Linear Algebra
Math 471 Introduction to the Theory of Probability
Math 521 Advanced Calculus
Math 522 Advanced Calculus
Math 535 Linear Algebra
Math 571 Introduction to Probability Models
Math 581 Introduction to the Theory of Chaotic Dynamical Systems
Math 601 Advanced Engineering Mathematics I
Math 602 Advanced Engineering Mathematics II
MthStat 361 Introduction to Mathematical Statistics I
MthStat 362 Introduction to Mathematical Statistics
MthStat 467 Introductory Statistics for Physical Sciences and Engineering Students
MthStat 563 Regression Analysis
MthStat 564 Time Series Analysis
Upper-division math refers to any Math or MthStat course at the 300 level or above. Sequence refers to any of the following pairs of courses: 313/314, 320/322, 361/362, 413/414. 521/522, 531/535, 621
/622, 631/632.
Many courses fall naturally into groups:
Applied mathematics group: Math 307, 320, 321, 322, 371, 405, 431, 520, 525, 581, 623.
Computational mathematics group: Math 313, 314, 413, 414, 416.
Probability and statistics group: Math 471, 571; MthStat 361, 362, 561, 562, 563, 564, 565, 567, 568, 569.
Pure mathematics group:
I. Math 521, 531, 535, 551, 621, 631;
II. Math 451, 453, 511, 522, 529, 537, 553, 555, 622, 632
Basic Mathematics Major. Students electing the basic mathematics major must complete Math 341 and 24 additional upper-division math credits, including at least 3 each from the applied math,
computational math, probability and statistics, and pure math I groups. At least one sequence is required among these 24 upper-division math credits. Students must complete at least 15 upper-division
(numbered 300 and above) credits in the major in residence at UWM. The College requires that students attain at least a 2.5 GPA on all credits in the major attempted at UWM. In addition, students
must attain a 2.5 GPA on all major credits attempted, including transfer work.
Specialization Options
The following four options add a specialty to the basic math major. Students must complete the requirements of the basic math major as stated above as well as the appropriate course requirements for
the specialties, as listed below. Completion of any of the specialty options requires at least 30 upper-division math credits, in addition to Math 341.
Applied Mathematics Option. At least 9 credits from the applied math group, 9 from the computational math group, and 6 from the probability and statistics group; two courses from CompSci 151 or 153,
201, and 251.
Computational Mathematics Option. At least 6 credits from the applied math group, 12 from the computational math group, and 6 from the probability and statistics group; all of CompSci 151 or 153,
201, 251, 317, 351, and 535.
Pure Mathematics Option. At least 18 credits from the pure math group, with at least 9 from the pure math I group; CompSci 151 or 153 or 201.
Statistics Option. Students must complete the following:
Additional Preparatory Curriculum
MthStat 215 Elementary Statistical Analysis 3
At least one selection from:
CompSci 151 Introduction to Scientific Programming in Fortran 3
CompSci 153 Introduction to Scientific Programming in C++ 3
or both
CompSci 201 Introductory Computer Programming 3
CompSci 251 Intermediate Computer Programming 4
Core Curriculum
At least one of the following two sequences:
Math 521 and 522 Advanced Calculus 6
Math 621 and 622 Introduction to Analysis 6
All of the following three courses:
MthStat 361 Introduction to Mathematical Statistics I 3
MthStat 362 Introduction to Mathematical Statistics II 3
MthStat 563 Regression Analysis 3
At least three of the following:
MthStat 562 Design of Experiments 3
MthStat 564 Time Series Analysis 3
MthStat 565 Nonparametric Statistics 3
MthStat 568 Multivariate Statistical Analysis 3
Math 571 Introduction to Probability Models 3
Preparation for Graduate Work in Mathematical Sciences. It is recommended that students who plan to do graduate work in mathematical sciences complete as many as possible of the following courses:
Math 521, 522, 531 and 535 (or 631 and 632), 551, and 623. Many graduate programs require reading knowledge of French, German, or Russian.
Course of Study: Minors
Actuarial Science Minor. Admission to this minor requires a minimum grade point average of 2.5 in Math 231, 232, and 233. Students who have completed these courses with the required grade point
average may complete a formal declaration of minor at the department office. These three courses do not count in the minor GPA. The following courses are required:
Math 234 Linear Algebra and Differential Equations 4
Math 311 Theory of Interest 3
MthStat 361 Introduction to Mathematical Statistics I 3
MthStat 362 Introduction to Mathematical Statistics II 3
One of the following, with a grade of B- or better in each course taken:
Bus Adm 450 Intermediate Finance 3
or both
Econ 301 Intermediate Microeconomics 3
Econ 302 Intermediate Macroeconomics 3
Students must complete at least 9 upper-division (numbered 300 and above) credits for the minor in residence at UWM. The College requires that students attain at least a 2.5 GPA on all credits in the
minor attempted at UWM. In addition, students must attain a 2.5 GPA on all minor credits attempted, including any transfer work.
Atmospheric Science Minor. The minor consists of a minimum of 18 credits in atmospheric science. Six of these credits must include Atm Sci 240 and 360. The remaining 12 Atm Sci credits must be at the
300 level or above. Students must complete at least 9 upper-division (numbered 300 and above) credits in the minor in residence at UWM. The College requires that students attain at least a 2.5 GPA on
all credits in the minor attempted at UWM. In addition, students must attain a 2.5 GPA on all minor credits attempted, including any transfer work.
Mathematics Minor. Students minoring in mathematics must complete Math 231, 232, and 233 or an equivalent math sequence with a GPA of at least 2.5. They must take 12 credits in mathematical sciences
(curricular areas Math and MthStat) courses numbered 300 and above, at least 9 of them in residence at UWM. Math 234 may substitute for 3 of these 12 credits. All courses chosen to complete the
12-credit requirement must be approved by the associate chair of the Department of Mathematical Sciences. Students must complete at least 9 upper-division (numbered 300 and above) credits in the
minor in residence at UWM. The College requires that students attain at least a 2.5 GPA on all credits in the minor attempted at UWM. In addition, students must attain a 2.5 GPA on all minor credits
attempted, including any transfer work.
Applied Mathematics and Computer Science
A related degree program is Applied Mathematics and Computer Science (AMCS), offered and awarded jointly by the College of Letters and Science Department of Mathematical Sciences and the College of
Engineering and Applied Science Department of Computer Science. This program allows students to study a mixture of mathematics and computer science suited to their natural interests and ambitions. It
highlights the unity of the fields of mathematical sciences and computer science, while still providing a firm foundation for all areas of applied and computational mathematics and computer science.
For further information, please refer to the Inter-School/College Programs section of this catalog, and visit the program's web page at http://www4.uwm.edu/letsci/math/undergraduate/majors/cs.cfm.
Atmospheric Sciences (ATM SCI)
Mathematics (MATH)
Mathematical Statistics (MTHSTAT)
Fredric D. Ancel, Prof., Ph.D.
University of Wisconsin-Madison
James E. Arnold, Jr., Assoc. Prof. Emeritus, Ph.D.
Jay H. Beder, Prof., Ph.D., Asst. Chair
George Washington University
Allen D. Bell, Assoc. Prof., Ph.D.
University of Washington
Vytaras Brazauskas, Prof., Ph.D.
University of Texas at Dallas
Suzanne L. Body, Asst. Prof., Ph.D.
Cornell University
Karen M. Brucks, Assoc. Prof., Ph.D.
University of North Texas
Associate Dean, College of Letters and Science
Clark Evans, Asst. Prof., Ph.D.
Florida State University
Dashan Fan, Prof., Ph.D.
Washington University
Daniel Gervini, Assoc. Prof., Ph.D.
Universidad de Buenos Aires
Jugal K. Ghorai, Prof., Ph.D.
Purdue University
Craig R. Guilbault, Prof., Ph.D., Graduate Dir.
University of Tennessee
Robert L. Hall, Assoc. Prof. Emeritus, Ph.D.
Peter Hinow, Asst. Prof., Ph.D.
Vanderbilt University
Ingrid Holzner, Sr. Lect. Emerita, M.S.
G. Christopher Hruska, Assoc. Prof., Ph.D.
Cornell University
Jonathan Kahl, Prof., Ph.D.
University of Michigan
Eric S. Key, Prof., Ph.D.
Cornell University
Kelly Kaiser Kohlmetz, Sr. Lect., Ph.D.
University of Wisconsin-Milwaukee
Sergey Kravtsov, Assoc. Prof., Ph.D.
Florida State University
Vincent Larson, Prof., Ph.D.
Massachusetts Institute of Technology
Istvan G. Lauko, Assoc. Prof., Ph.D.
Texas Tech University
Cheng-Ming Lee, Prof. Emeritus, Ph.D.
Tzu-Chu Lin, Assoc. Prof., Ph.D.
University of Iowa
Wiliam Mandella, Sr. Lect., M.S.
University of New Orleans
Kevin B. McLeod, Assoc. Prof., Ph.D.
University of Minnesota
Genevieve T. Meyer, Instr. Emerita
Richard J. Mihalek, Assoc. Prof. Emeritus, Ph.D.
Albert J. Milani, Prof. Emeritus, Ph.D.
Robert H. Moore, Assoc. Prof. Emeritus, Ph.D.
Ian M. Musson, Prof., Ph.D.
University of Warwick, U.K.
Thomas O’Bryan, Assoc. Prof. Emeritus, Ph.D.
Richard J. O’Malley, Prof. Emeritus, Ph.D.
Boris L. Okun, Assoc. Prof., Ph.D.
State University of New York at Binghamton
Dattatraya J. Patil, Assoc. Prof. Emeritus, Ph.D.
Gabriella Pinter, Assoc. Prof., Ph.D.
Texas Tech University
Paul Roebber, Distinguished Prof., Ph.D.
McGill University
David H. Schultz, Prof. Emeritus, Ph.D.
Steven Schwengels, Sr. Lect., M.S.
University of Wisconsin-Milwaukee
Lindsay A. Skinner, Prof. Emeritus, Ph.D.
Donald W. Solomon, Assoc. Prof. Emeritus
Richard Stockbridge, Prof., Ph.D., Chair
University of Wisconsin-Madison
Lijing Sun, Asst. Prof., Ph.D.
Wayne State University
Kyle Swanson, Prof., Ph.D., Atmos. Sci. Coord.
University of Chicago
Anastasios Tsonis, Distinguished Prof., Ph.D.
McGill University
Hans W. Volkmer, Prof., Ph.D.
University of Konstanz
Bruce A. Wade, Prof., Ph.D.
University of Wisconsin-Madison
Gilbert G. Walter, Prof. Emeritus, Ph.D.
Jeb Willenbring, Assoc. Prof., Ph.D.
University of California, San Diego
Dexuan Xie, Prof., Ph.D.
University of Houston
Chao Zhu, Assoc. Prof., Ph.D.
Wayne State University
Yi Ming Zou, Prof., Ph.D., Assoc. Chair
Indiana University
Web Home Pages:
[ College of Letters and Science ]
[ Mathematical Sciences ]
University of Wisconsin-Milwaukee Undergraduate Catalog 2013-2014:
Mathematical Sciences
[ College of Letters and Science ]
[ Schools and Colleges ]
[ Contents | How to Use This Book | Calendar ]
[ Admission | Registration | Financial Information | Academic Information ]
[ Administration | UWM - Endless Possibilities | Academic Opportunities | Campus Resources ]
Copyright 2013 by the University of Wisconsin-Milwaukee, all rights reserved. | {"url":"http://www4.uwm.edu/academics/undergraduatecatalog/SC/D_LS_600.html","timestamp":"2014-04-17T09:52:10Z","content_type":null,"content_length":"82589","record_id":"<urn:uuid:da57451e-dc84-413e-803c-16d41585adb7>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00059-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
x+y=-7 2x-y=-8
• 4 months ago
• 4 months ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/529ec4d4e4b030bcac5df45c","timestamp":"2014-04-20T08:35:48Z","content_type":null,"content_length":"46798","record_id":"<urn:uuid:a50a06d4-06bc-4ed7-ba3d-210764004b21>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00477-ip-10-147-4-33.ec2.internal.warc.gz"} |
Albany, CA Prealgebra Tutor
Find an Albany, CA Prealgebra Tutor
...Proofreading is the final finishing touch on a written work of quality. After cranking out a rough draft, it's time to examine the literary elements of its product. From a quick SpellCheck to
an actual line-by-line hands-on examination, ensuring proper spelling, grammar, and punctuation helps improve one's writing ability tremendously.
17 Subjects: including prealgebra, English, reading, grammar
...More than one third of my tutored hours covers calculus. "Great Tutor" - Alan K. Palo Alto, CA Andreas was an excellent tutor for our son in Calculus this year at Stanford. He would never have
done as well as he did without the talents and effort of Andreas.
41 Subjects: including prealgebra, calculus, geometry, statistics
...Hence,I'm confident that I'm completely able to help the students to understand the lectures, do their homework and assignments correctly and improve their grade significantly. In addition, I
also can help the students to understand to basic concept of Physics like motions, pressures, force, wav...
18 Subjects: including prealgebra, calculus, trigonometry, statistics
...I primarily specialize in teaching math and English of various levels, but having taken and passed over 10 AP tests in subjects such as English language and composition, literature, European
and US history, computer science A and B, statistics, and psychology, I may be able to help students in sp...
26 Subjects: including prealgebra, reading, writing, English
...I've tutored family, students, and friends alike. Being a student, I haven't had the experience in teaching that others can offer, but I have a love of math that will hopefully make up for any
shortcomings. In my (relatively short) experience,I've learned that each student is unique and requires a different approach.
19 Subjects: including prealgebra, calculus, writing, physics | {"url":"http://www.purplemath.com/Albany_CA_Prealgebra_tutors.php","timestamp":"2014-04-17T04:13:54Z","content_type":null,"content_length":"24117","record_id":"<urn:uuid:8a5d1dab-ba69-40bb-a113-667697d644a9>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00090-ip-10-147-4-33.ec2.internal.warc.gz"} |
dailysudoku.com :: View topic - Mistake!
Discussion of Daily Sudoku puzzles
Author Message
Victor Posted: Tue Jan 15, 2008 4:48 pm Post subject: Mistake!
Here's M3504165, which I've posted as a puzzle, and here's the story of my mistake. (Yes, I know many people have been here before me - these are my musings.) Grid post basics:
Joined: Code:
29 Sep
Posts: +----------------+-------------+----------------+
207 | 3 26 8 | 9 7 4 | 5 1 26 |
Location: | 4 5 27 | 6 1 8 | 29 279 3 |
NI | 19 17 1679 | 35 35 2 | 8 4 67 |
|E125 3 4 | 8 A25 7 |B12 6 9 |
| 6 17 1257 | 245 9 C15 | 1234 23 8 |
| 8 9 12 | 234 236 F16 | 7 5 124 |
| 1259 8 123569 | 7 2456 56 | 123469 239 124 |
| 7 26 236 | 1 246 9 | 2346 8 5 |
| 1259 4 12569 | 25 8 3 | 1269 279 127 |
Forget the nice W-wings etc. and look at the XY-wing pivoted on A, r4c5. When I'd eliminated the 1 in r5c7, I tried to continue with colouring on 1s: on to E from B, on to F from C, i.e. a
3-link colouring chain in 1s from E to F, killing the 1 in r6c3. But that led to a contradiction in due course. (r6c3 is in fact 1.) Why? - the answer's easy, that both of B & C might be 1,
so that neither of E nor F need be. And that's led me to read up / think more clearly about the nature of strong links etc. Yes, I know, I could have read Asellus's explanation in a thread
started by Keith, http://www.dailysudoku.co.uk/sudoku/forums/viewtopic.php?t=2028, or elsewhere, and I know I could just have a figured it out, but I hadn't done any of those things.
The answer as far as colouring on from an XY-chain, or W-wings or any such construct is easy enough to state: you need an even number (inc.0) of colouring links from each pincer. But it's
prompted me to read up a bit more about strong links & I have found some difference in definition.
Everybody agrees what a weak link is, but e.g. Sudopedia defines a strong link as being the same as a conjugate link (often known as XOR - eXclusive Or) whereas others define it as being
Either/Both (often known as OR). I'm finding it easier to think of strong as OR-links (truth table TT,TF,FT) rather than XOR (TF,FT). If you so define them, then it's easy to say that a
conjugate link is weak or strong as needed. And, if you have a chain A=B-C=D then clearly this condenses to A=D, just another strong link, extensible to any length chain. And A-B=C-D
condenses to A-D.
However, I like really to think 'practically' and that's involving me in difficulties. Take the XY-wing above. As an AIC you could write that 1C=5C-5A=2A-2B=1B, which leads to 1C=1B,
killing any 1s they both see. More practically, you can write it C ≠ 1 ==> C=5 ==>A =2 ==> B =1, and B ≠ 1 ==> ... ==> C=1. (You can't start C=1, since that won't propagate along the
chain.) That's OK, but how about the sort of AIC that starts and finishes in the same set (house) but with different numbers? E.g. say cell G contains 1,4,6 and cell H in the same column
contains 4,7. Say you can find a chain 1G= ...... = 4H. Then clearly you can say that G can't contain 4, since Either it contains 1 Or a cell that sees it contains 4, but I find that
difficult to explain in practical terms, since I see the chain as saying G ≠ 1 == > H=4 and H ≠ 4 ==> G = 1 which doesen't seem to tell us much.
Anyway, does anybody know the answer to this question? I found a thread a couple of years ago, in which the theorist Myth Jellies suggested that as well as the ends of AICs being
strong-only, as he put it, some links in URs might be like this - i.e. strong but not conjugate. Was this ever (dis/)proven?
A final question. A simple colouring chain on one digit is composed entirely of conjugate links, and cells separated by an odd number of links are conjugate-linked. (Note that if you want
to write it in simple language you can start A = x ==> ... as well as A ≠ x ==> .)This isn't true of most chains such as an XY-chain. Is it in fact true of ANY other kind of chain?
nataraj Posted: Tue Jan 15, 2008 7:31 pm Post subject: Re: Mistake!
Victor wrote:
E.g. say cell G contains 1,4,6 and cell H in the same column contains 4,7. Say you can find a chain 1G= ...... = 4H. Then clearly you can say that G can't contain 4, since Either it
Joined: contains 1 Or a cell that sees it contains 4, but I find that difficult to explain in practical terms, since I see the chain as saying G ≠ 1 == > H=4 and H ≠ 4 ==> G = 1 which doesen't seem
03 Aug to tell us much.
Location: Victor, I shall try and shed some light on this one.
Austria "...difficult to explain in practical terms". But you just did. Your explanation of the consequence ("Then clearly you can say ...") is quite practical.
To express the same logic in terms of an AIC it is necessary to add the information that one cell (G) contains 1 and 4, and that both cells are in the same house, which we do via weak links
- just add one weak link to each end of your chain and voilą, what you get now is a "discontinuous nice loop". The discontinuity consists of a node (4)G connected to the loop at both sides
through a weak link. Such a node must be false.
-(4)G-(1)G=[here is your alternating chain with strong links at the ends]=(4)H-
The first weak link (4)G-(1)G comes from the cell {1,4,6} ("G cannot be both 1 and 4"), the second weak link (4)H-(4)G from the house ("both cells cannot be 4 at the same time").
You cannot post new topics in this forum
You cannot reply to topics in this forum
You cannot edit your posts in this forum
You cannot delete your posts in this forum
You cannot vote in polls in this forum | {"url":"http://www.dailysudoku.com/sudoku/forums/viewtopic.php?t=2289&start=0&postdays=0&postorder=asc&highlight=","timestamp":"2014-04-16T07:32:16Z","content_type":null,"content_length":"33156","record_id":"<urn:uuid:9eea8767-c185-4e76-988e-822b09560b71>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00440-ip-10-147-4-33.ec2.internal.warc.gz"} |
March 2010
March 27, 2010
Pure Spinor Signature
By some coincidence, I’ve had several discussions, recently, about Nathan Berkovits’s pure spinor formulation of the superstring. Which reminds me of something I’ve long puzzled over. Nathan
invariably works in Euclidean signature, where the pure spinor constraint is
(1)$\lambda^t \gamma^\mu \lambda = 0$
Here, $\lambda\in \mathbf{16}$, is a chiral spinor of $Spin(10)$, and $\gamma^\mu$ is a symmetric $16\times 16$ matrix, expressing the Clebsch-Gordon coefficient, $\mathop{Sym}^2(\mathbf{16})\supset
\mathbf{10}$. In terms of $32\times 32$ gamma matrices, $\Gamma^0 \Gamma^\mu = \begin{pmatrix}\gamma^\mu& 0 \\ 0& \tilde{\gamma}^\mu\end{pmatrix}$ Now, the $\mathbf{16}$ of $Spin(10)$ is a complex
representation, so (1) naïvely looks like 10 complex equations in 16 complex variables. But, in reality, only 5 equations are independent, and the kernel of the pure spinor constraint is $16-5=11$
complex-dimensional (real dimension 22). In fact, it’s a complex cone over
(2)$X_{(0,10)} = Spin(10)/U(5)$
This illuminates the above remark about the dimension of the kernel. Under the decomposition $\mathbf{16} = \mathbf{1}_{-5} + \mathbf{10}_{-1} + \mathbf{\overline{5}}_{3}$, the pure spinor
constraint, (1) kills the $\mathbf{\overline{5}}_{3}$.
The dimension (22) is crucial to getting the critical central charge correctly, in Nathan’s formulation.
Thing is, we don’t live in Euclidean signature. While, in string theory and in field theory, we are quite happy to analytically-continue in momenta, when computing scattering amplitudes, we don’t
Wick-rotate the spinor algebra. That’s always done in the correct, Minkowski, signature.
It turns out that there are analogues of (1),(2) for other signatures
(3)$\begin{gathered} X_{(0,10)}=Spin(10)/U(5)\\ X_{(2,8)}=Spin(2,8)/U(1,4)\\ X_{(4,6)}=Spin(4,6)/U(2,3)\\ X_{(5,5)}= Spin(5,5)/GL(5,\mathbb{R}) \end{gathered}$
There’s one for each real form of $A_4$. But, you’ll note, signature $(1,9)$ is notably absent. So it’s not so obvious (to me, at least) that the space of solutions to the constraint (1) has the
desired dimension (22), when the signature is $(1,9)$.
Does anyone know how to see that it does (or, alternatively, what to do if it doesn’t)?
Update (4/16/2010): Non-Reductive
I had a private email exchange with Nathan, who explained the resolution of my conundrum.
The space of solutions to the pure spinor constraint (1), for the Minkowski signature is, of course, of the form of a complex cone over $X_{(1,9)}$. And (contra Lubos, below, who is, alas, a bit
confused), $X_{(1,9)}$ is necessarily of the form $X_{(1,9)} = Spin(1,9)/H$, for some subgroup $H$. However, unlike the cases in (3), $H$ is not a real form of $GL_{5,\mathbb{C}}$. In fact, it’s not
even a reductive group!
One can show that $H = H_0 \ltimes V$ where $H_0= Spin(1,1)\times U(4) \subset Spin(1,1)\times Spin(8) \subset Spin(1,9)$ such that the $\mathbf{16}$ decomposes, under $H_0$, as $\mathbf{16} = {(1_2
+ 1_{-2} + 6_0)}^{+1} + {(4_1 +\overline{4}_{-1})}^{-1}$ (the superscript is the $Spin(1,1)$ weight). $V$ is the Abelian subgroup generated by the $M^{+j}$ generators of $so(1,9)$, where $j$ is an
$SO(8)$ vector index. Under $H_0$, $V$ transforms as $V= {(4_{-1} +\overline{4}_{1})}^{+2}$ The pure spinor constraint kills the ${(1_2)}^{+1}+{(\overline{4}_{-1})}^{-1}\subset \mathbf{16}$, and $X_
{(1,9)}$ indeed has the desired dimension.
I find this answer very striking, in how it differs from the cases in (3), In particular, it ought to have some rather important implications for the construction of amplitudes. Berkovits and
Nekrasov develop a rather elaborate construction, involving Čech cohomology classes on the space $X_{(0,10)}=Spin(10)/U(5)$. Presumably, things change significantly, when dealing with $X_{(1,9)}=Spin
(1,9)/(Spin(1,1)\times U(4))\ltimes V$.
Posted by distler at 2:47 PM |
Followups (23)
March 3, 2010
Coupling to Supergravity
There seems to be a certain amount of confusion about the claims of Seiberg and Komargodski in their latest paper. I have to say that I was confused, and there’s at least one recent paper arguing
(more-or-less correctly) against claims that I don’t think they’re making.
So here’s my attempt to clear things up.
Consider an $\mathcal{N}=1$ supersymmmetric nonlinear $\sigma$-model in $d=4$, ie a Wess-Zumino model with target space, $M$, a Kähler maninfold of complex dimension, $n$. When can one couple such a
theory to supergravity? A naïve reading of their paper might lead one to think that the possibilities are
1. One can couple the theory to minimal supergravity if and only if the Kähler form, $\omega$, on $M$, is exact.
2. One can couple the theory to “new minimal” supergravity if and only if the theory has an exact $U(1)_R$ symmetry. In this case, $\omega$ could be cohomologically nontrivial.
3. If $\omega$ is cohomologically nontrivial, and the theory does not have a $U(1)_R$ symmetry, then the only possibility is to couple to non-minimal “16|16” supergravity.
One might think that, but one would be wrong. As Bagger and Witten showed, nearly 30 years ago, coupling to minimal supergravity does not require the Kähler form to be exact. Rather, $[\omega]$ must
be an even integral class.
Posted by distler at 8:53 PM |
Post a Comment | {"url":"http://golem.ph.utexas.edu/~distler/blog/archives/2010_03.shtml","timestamp":"2014-04-19T09:25:00Z","content_type":null,"content_length":"65984","record_id":"<urn:uuid:7fd33e9e-f19a-4a8d-8d32-74961a755dc2>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00514-ip-10-147-4-33.ec2.internal.warc.gz"} |
Locust Valley Math Tutor
Find a Locust Valley Math Tutor
...Social work or psychology students are common; of course, I am open to whatever kind of program you are in. Various students also have been in business, linguistics, auditory processing,
medicine. Statistics is pretty much the same regardless of the content area.
4 Subjects: including statistics, SPSS, SAS, biostatistics
...Starting in the Summer of 2013 twice a week, and continuing during the school year weekly, I have been running a physics class for exceptionally gifted students. Progressing slowly, we go more
in-depth than a typical AP Physics course would. Our target is to study elementary physics in proper breadth and depth, over three-year period.
27 Subjects: including linear algebra, logic, algebra 1, algebra 2
...I'm really passionate about spreading mathematical knowledge and, therefore, really explain why things are the way they are rather than just how to complete a problem. From my experience in the
classroom, this has produced greater results on exams, state exams/regents, and set a strong foundatio...
10 Subjects: including algebra 1, algebra 2, calculus, prealgebra
...I have tutored Math for a couple of semesters back in college and I consider myself qualified to tutor Math, Computer Science, and the logic portion of the LSAT (a favorite). I also spent
several months tutoring math and reading to classes of up to 16 kids grades 4-6 in a school in a Brooklyn. ...
9 Subjects: including algebra 2, precalculus, trigonometry, logic
...I find sciences fascinating, especially fields of genetics and oncology, because they are so actively developing. I specialize mostly in helping students excel in math and science courses, and
on high school standardized tests. I am familiar with many testing formats, including the SSAT, SAT, S...
23 Subjects: including algebra 2, trigonometry, SAT math, precalculus | {"url":"http://www.purplemath.com/Locust_Valley_Math_tutors.php","timestamp":"2014-04-17T04:28:37Z","content_type":null,"content_length":"24021","record_id":"<urn:uuid:1c68f8b1-e862-4f89-b55c-0e5dce7b98b8>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00567-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
If a galaxy is located 200 million light years from Earth, what can you conclude about the light from that galaxy? The light will take 200 million years to reach Earth. The light will travel 200
million kilometers per second to reach Earth. The light will travel a total of 300,000 kilometers to reach Earth. The light will travel a total of 200 million kilometers to reach Earth.
• one year ago
• one year ago
Best Response
You've already chosen the best response.
A light year is the distance light travels in a year, does that help?
Best Response
You've already chosen the best response.
So its the last one?
Best Response
You've already chosen the best response.
I mean yes I do understand it is measured as a distance and not necesarily time. But I know that sometimes it can mean time as well.
Best Response
You've already chosen the best response.
No, the first! If it was 200 million light years, then light will take 200 million years to cover that distance. A 'light year' is always distance. Try working out how far it is, using how long a
year is in seconds, plus the speed of light, and the formula distance = speed x time
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/509bccb0e4b0a825f21ac471","timestamp":"2014-04-20T18:42:39Z","content_type":null,"content_length":"35335","record_id":"<urn:uuid:8de3ad71-8202-44a2-8173-721d5f13e0d6>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00419-ip-10-147-4-33.ec2.internal.warc.gz"} |
Finding Equations of Angle Bisectors between Two Lines
Date: 08/14/2004 at 01:45:11
From: Aryan
Subject: Angle Bisector!
Is there any way or formula to find the coordinates of points which
lie on the line that bisects the angle between two given lines?
We know four points in the plane:
Line 1 passes through points P1 and P2. Line 2 passes through points
P3 and P4. Assume that point O (x,y) is the intersection point of the
two lines.
Date: 08/14/2004 at 09:50:43
From: Doctor Rob
Subject: Re: Angle Bisector!
Thanks for writing to Ask Dr. Math, Aryan!
Let the slope of line 1 be m1 and the slope of line 2 be m2;
m1 = (Y2 - Y1)/(X2 - X1),
m2 = (Y4 - Y3)/(X4 - X3).
Then the inclinations A1 and A2 of the two lines are given by
A1 = Arctan(m1),
A2 = Arctan(m2).
Note that if X2 - X1 = 0, then line 1 is vertical, and its inclination
A1 is Pi/2 radians or 90 degrees. If X4 - X3 = 0, then line 2 is
vertical, and its inclination A2 is Pi/2 radians or 90 degrees.
Now notice that there are two possible angle bisectors--one of the
acute angle formed between the two given lines and one of the obtuse
angle. The two bisectors are perpendicular to each other. The
inclinations B1 and B2 of the bisectors are
B1 = (A1 + A2)/2,
B2 = (A1 + A2 + Pi)/2 or (A1 + A2 + 180)/2 degrees
The slopes m3 and m4 of the two angle bisectors are then
m3 = tan(B1),
m4 = tan(B2) = -cot(B1).
Now if neither m3 nor m4 is zero, with these two slopes and the point
O (the intersection), you can write the equations of the angle
y = m3*(x - X) + Y,
y = m4*(x - X) + Y.
If m3 = 0 or m4 = 0, then the angle bisector lines are vertical and
horizontal and their equations are
y = Y,
x = X.
Then any point with coordinates (x,y) satisfying either of the angle
bisector equations will lie on one of those two angle bisectors.
Feel free to reply if I can help further with this question.
- Doctor Rob, The Math Forum | {"url":"http://mathforum.org/library/drmath/view/65456.html","timestamp":"2014-04-16T05:20:33Z","content_type":null,"content_length":"6962","record_id":"<urn:uuid:7c49dbf7-7089-492b-afda-a022d2a46ab6>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00457-ip-10-147-4-33.ec2.internal.warc.gz"} |
Model Indeks Tunggal Sebagai Alat Pembentukan Portofolio Optimal Untuk Pengambilan Keputusan Investasi
Torangga, Yusfan Ari (2007) Model Indeks Tunggal Sebagai Alat Pembentukan Portofolio Optimal Untuk Pengambilan Keputusan Investasi. Other thesis, University of Muhammadiyah Malang.
Download (107kB) | Preview
This research case study at shares of Jakarta Noted Islamic Index in Effect Exchange of Jakarta during period of January 2004 up to January 2007 with title “ Single Analysis Model Index In Order To
Election Of Optimal Portofolio Share. Target of this research is to know particular shares which the included in optimal portofolio of accurate by shares during period January 2004 up to January 2007
specially included in LQ-45 In this research is used by 19 merged into share of LQ-45 as sample. Analyzer which is used in this research is single index model utilize to determine particular shares
which the included in optimal portofolio. Formula the used is formula of Ri that is monthly share return, Rmt is market rate of return, E(Ri) is individual share mean rate of return, RBR is free
return of risk, βi is share beta, ERB is difference between expected expected advantage storey ; level with free return of risk. C* is point off cut, Wi is proportion every optimal share portofolio E
(Rp) is optimal share portofolio expectation return and βp is risk of portofolio optimal share. Result of calculation by using single index model hence can know by there is 7 share which included in
optimal share portofolio that Astra Agro Lestari Tbk, United Tractors Tbk, Aneka Tambang Tbk. Pursuant to result of analysis above, hence investor can take a decision of investment in stocks market
to chosen pertained shares in Stocks Exchange of Jakarta and included in optimal portofolio, because believed will give big rate of return.
Actions (login required) | {"url":"http://eprints.umm.ac.id/8538/","timestamp":"2014-04-18T05:42:26Z","content_type":null,"content_length":"21971","record_id":"<urn:uuid:006bf33f-142d-4505-a68f-9c68b63eecbd>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00384-ip-10-147-4-33.ec2.internal.warc.gz"} |
Teaser Betting Strategy
Teasers are one the easiest bets to calculate value of, and then find +EV wagering opportunities. Even if you have no experience with sports betting math, and even if you’ve never heard of a teaser
before, this article will tell you everything you need to know and more about teaser betting. I’ll start with intro topics such as “what is a teaser?” and then progress on to advanced teaser
strategy. No matter what your skill level is I’m near certain you’ll find considerable value in this article.
What is a Teaser Bet?
Covering the most basic question first, a teaser is a parlay bet using a modified point spread. For example in a “3-team 6-point teasers” using Jets/Colts o43, Patriots -3, and Jags +4 what you end
up with is a parlay on Jets/Colts o37, Patriots +3 and Jags +10 as each spread is modified six points. As you read this article you’ll learn how teasers work, who has the best odds, basic strategy
for betting teasers, advanced teaser strategy and much more. Let me go ahead and get started on this topic by introducing you to basic teaser strategy, sometimes referred to as “Wong Teasers”.
Basic Teaser Strategy
Sharp Sports Betting by Stanford Wong, a book first published in 2001, was the first place the term “basic strategy teaser” appeared in print. The author of this book (as Stanford Wong is a pen name)
used push charts to show three and seven are the most common margins of victory in the NFL. His theory, which was well backed up by math, suggested teasers which fully cross the three and the seven
are by far the best teaser bets. This means in a six point teaser the goal is to find and tease: underdogs +1.5 to +2.5 (to +7.5 to +8.5) and favorites -7.5 to -8.5 (to -1.5 to -2.5) because you’re
going what would be a loss on point spreads three and seven, to a win; in other words you’re fully crossing the three and the seven. This concept is called “basic strategy teasers” or in recent years
“wong teasers”.
To fast forward to the present day, the date I’m writing this article is May 31, 2011, and the most recent complete NFL season is the 2010/11 season that ended with the Packers winning the Super Bowl
over the Chicago Bears. I’m going to use Stanford Wong’s approach to basic strategy teasers as a base, but will use more recent data as certainly things may have changed in the ten years since Sharp
Sports Betting was written.
After using a historical database I determined over the past 5 completed NFL season the twelve most common margins of victory in order of most frequent first were: 3, 7, 10, 4, 6, 14, 2, 21, 17, 1, 8
and 5. The distribution between the early numbers and the later are not even close. Since the start of the 2006 season, 24.6% of games have been decided by “exactly 3” or “exactly 7” points and 38.1%
of all games have been decided by 3 to 7 points. It obviously makes sense in theory that teasers that involve these spreads at the best odds possible will be the best of all teaser bets.
To test this theory, I went back to the 2006 season and determined that all possible point-spread (as opposed to: point-total) teasers combined for a record of 1446-702 (67.3%). While the sample size
is small, as expected when limiting the sample to only underdogs +1.5 to +2.5 and favorites -7.5 to -8.5 the cover rate improved to 113-47 (70.6%). It looks promising that teasing only “teams that
fully cross the 3-7 point range” at “the best possible odds” are the best of all teasers.
To do any further analysis on these teasers I need to stray away from this topic for several paragraphs and cover “teaser odds”. While this next section might seem boring and simple-math intensive,
understanding it is a perquisite to “profitable” teaser betting.
Understanding Teaser Odds
Teasers use fixed payouts and therefore it’s difficult for the average sports bettor to decipher the odds on individual teams. For example, a 3-team 6-point teaser that pays +180 is the same as a
parlay bet where all three teams are priced at -244. Here the bookmaker let us purchase six points, but instead of the odds being -105 or -110, he gives us -244. Let me go ahead and show you quickly
how I calculated the price was -244 per team.
The overall teaser was available at +180, using our *odds converted* I determined the implied probability (required break even percentage) on a +180 bet is 35.71%. This is how often all three teams
must win to break even. To calculate how often each team individually must win I converted 35.71% to a decimal of 0.3571. I then need to know what number times itself three times equals this number.
I can find this easily using a *root calculator* where I plug in 0.3571 and see the cubed root is 70.95%. What this tells me is I need each team to win 70.95% of the time in order to achieve the
overall 35.71% (+180) win rate. I now go back to my *odds converter* plug in 70.95% under “break even %” and see 70.95% is the same as American odds -244.
The same math works for other teasers as well. Just make sure the root used matches the number of teams teased. For example a 4-team teaser uses the fourth root. Having run these already, here are
the deciphered bets at “standard” teaser odds.
• 2-team teaser at -110 = a 2-team parlay w/ each team priced -262
• 3-team teaser at +180 = a 3-team parlay w/ each team priced -244
• 4-team teaser at +300 = a 4-Team parlay w/ each team priced -241
• 5-team teaser at +450 = a 5-Team parlay w/ each team priced -246
• 6-team teaser at +600 = a 6-Team parlay w/ each team priced -261
As you can see 3-5 team teasers offer the best odds. Considering these are close and there are only a limited number of profitable teaser legs available to bet on a given day, you’ll most likely want
to stick to 3-team 6-point teasers at +180 where the required break even percentage for each team is 70.95%.
An important warning: Betting sites use “fixed odds” for teasers. As a result, quite a few sites short the payout somewhere or another. For example Intertops gives only +170, Bookmaker +160, and
www.BetOnline.com +150 for 3-team 6-point teasers. If you’re betting teasers at these sites over others it’s like giving away free money. The sites where you can find 3-team 6-pint teasers at +180
include www.5dimes.com, www.bodog.com and www.betdsi.com. When betting strictly basic strategy teasers in the 3-team +180 option, you’ll find (on average) BetDSI.com offers the best value.
Now that I’ve covered “teaser odds” let’s get back to the topic of basic strategy teasers.
Basic Strategy Teasers (Continued)
Even if the betting site you use offers reduced juice, such as -105 at 5Dimes.com as opposed to the standard -110, if you pick randomly on straight point spread wagers you’re going to lose over the
long run. The expected loss randomly picking at -110 is 4.55% and at -105 2.38%. While I didn’t explicitly point it out earlier, the contents of this article have already uncovered that using basic
strategy teasers a bettor can drastically cut the bookmakers advantage. Let me do some recapping in order to explain.
Earlier we calculated that since 2006, underdogs +1.5 to +2.5 and favorites -7.5 to -8.5 when teased 6 points had a record of 113-47 (70.6%). We also calculated that in a 3-team teaser at +180 each
team needs to win 70.95% to break even. As you can see, we’re within a mere fraction of the required break even rate when doing basic strategy teasers. Considering there are many professional bettors
who make a living betting point-spreads straight at -110 and -105, with some capping and selection involved, basic strategy teasers can be quite profitable; there is only about a “one-third of
one-percent” bookmaker advantage if we believe historical results are accurate.
Teasers Make Great Blind Bets
There are quite a few teasers that are great blind bets. The first I’ll give you is the ones I’ve already covered in this article, NFL underdogs +1.5 to +2.5 and NFL favorites -7.5 to -8.5. If you
use these in 3-team 6-point teasers at +180 and manage your bankroll well, chances are you won’t lose much over the long run, and might even come out ahead. These are by far better bets than betting
against the spread, or spending time on casino gambling where even at optimal blackjack, or craps, the house has a much larger advantage. I’ll come back to this topic a few more times in this article
after introducing other profitable teaser bets.
To Tease or Not to Tease (Advanced Strategy)
In theory the best way to calculate whether a teaser has more value in a point spread or in a tease is to create a push chart. Let’s say for example the point spread is -7.5 and you’re trying to
decide if it makes more sense to bet the spread or to tease it six points. In the teaser option you are now winning where you would be losing should the favored team win by 7, 6, 5, 4, 3 or 2. The
goal is to determine how frequent each of these probabilities occurs. One push chart already on the web is *this one* created by forum poster mikevegas.
Referring to mikevegas’s NFL push-chart, we see in the -4 to -9 favorite column that these teams landed within the related margins of victory as follows:
• 7 points: 6.0%
• 6 points: 3.1%
• 5 points: 2.2%
• 4 points: 2.6%
• 3 points: 9.7%
• 2 points: 2.1%
If we add these numbers together the team improves their win rate by 25.7%. Considering you’re already interested in betting at -7.5 you’re likely giving it a greater than 50% chance of winning, but
to play it safe we’ll take 50% + 25.7% and call them a 75.7% probability to cover -1.5. As a reminder a 3 team teaser at +180 requires a 70.95% win rate. Obviously teasing this -7.5 favorite would be
massively +EV if the push chart we’re using is correct.
Challenges with Push Rate Data
First off never trust someone else’s push chart. The proper way to do this would be to find as much related data as possible and then develop a push chart based on your own calculations. Free NFL
data is available at both sportsdatabase.com and goldsheet.com but it takes quite some time to harvest and place into a useable format. Another way to get free data is to open up an account at
www.bmaker.ag, save up 2100 BetPoints (not hard to earn) and then cash these points in for a free subscription to ATSdatabase.com. Here it’s very easy to harvest data during your free trial. During
this trial period just copy and paste all results into notepad and then import them to a Microsoft Excel spreadsheet.
Challenge #1: Free data sources are only good for getting rough numbers. To actual get advanced data you’d likely want to hire a programmer to regularly scrape closing lines from PinnacleSports.com,
and results from NFL.com. This would ensure you’re working with the best data possible.
Challenge #2: While this method works well for basketball where there are a lot more games, NFL has so few games that sample size is always going to be limited. The actual sportsbooks create their
push charts using advanced NFL game simulators, which gives them more accurate numbers.
For the average person you’ll want to go free data and just realize your push charts are not exact.
To Tease or Not to Tease & Basic Strategy Teasers (Continued)
Knowing that our push charts are not accurate let’s go ahead and examine another method to decide whether a 6-point teaser is a better option than a straight point-spread wager. Considering we’ve
already determined the best teasers are underdogs +1.5 to +2.5 and favorites -7.5 to -8.5 we’ll dig deeper into those subsets.
Again using only “regular season” games the past 5 season here were the results for basic teaser subsets:
Home Favorites -7.5 to -8.5
Against the Spread: 17-16 (51.5%)
Teased Six Points: 26-7 (77.8%)
Road Favorites -7.5 to -8.5
Against the Spread: 7-9 (43.8%)
Teased Six Points: 11-5 (68.8%)
Home Underdogs +1.5 to +2.5
Against the Spread: 17-29-1 (37.0%)
Teased Six Points: (59.6%) 28-19
Road Underdogs +1.5 to +2.5
Against the Spread: 26-38 (40.6%)
Teased Six Points: 48-16 (75.0%)
The sample sizes are quite small, but consider this: if a point spread is a 50/50 proposition each team’s chance of covering the spread is 50%. In a 3-team teaser at +180 you need each team to win
70.95% of the time. Therefore if teasing a team six points increases their win rate by greater than 20.95% the bet should be profitable over the long term. Taking a look at the sample data: all four
subsets, when teased 6-points, had their cover rates improved by more than 20.95%. This evidence suggests that if these teams were actually winning 50% of the time against the spread (most haven’t
been) then basic strategy teasers would make a killing.
The one other thing worth commenting on is: home favorites -7.5 to -8.5 and road underdogs +1.5 to +2.5 teased six-points have a combined record since 2006 of 74-23 (76.3%), well over the 70.95%
break even rate. While I’ve seen some authors suggest sticking only to these subsets, this is results oriented thinking that is similar to “keeping score in baccarat” or “looking for patterns on the
roulette wheel”. If that’s your thing, use it; but, more likely than not what you’re seeing is data-mined variance.
Importance of Line Shopping
Basic Strategy teasers are one of the best blind bets in sports betting, but with that said, line shopping is still critical. Let’s say for example 5Dimes has the Jets -8.5 +105 and 2BetDSI has the
Jets -7.5 -110. If you use the Jets in a 6-point teaser at 2BetDSI you’re getting them -1.5, at 5Dimes you’re getting -2.5. The fact that the original point spreads were priced +105 and -110 is of no
importance to teaser bettors; you get 6 points regardless of the spreads pricing and the payouts are the same. Obviously in this example, teasing -7.5 at 2BetDSI would be the better play.
Remember basic strategy teasers are likely +EV, but if not, there is only extremely marginal bookmaker advantage to overcome. In order to recoup some of this advantage, bookmakers will often shade
their lines. The comparison I gave above is a perfect example. 5Dimes listed the line as -8.5 +105, rather than -7.5 -110 as a shade against profitable teasers. What’s even more common is for a site
to list a -7.5 -110 line as -9 +110, making it no longer possible to cross the three in a six point teaser. There is nothing wrong with this; bookmakers are not in the business of giving out free
money. They set the lines and then punters look for angles to beat them.
In this article, most of my focus has been on 3-team 6-point teasers at +180. The odds at 5Dimes for teasers are “technically” better than this. At 5Dimes you’ll find 728 different options for
betting teasers. The break down is 2-15 team formats (14 options) times 26 point spread options (every half point increment between 5 and 17 points and plus a 20-point teaser option), times two
options for each on how ties are handled. To give an idea here are some of their teaser options:
• 2-team 5 points “ties reduce” at +107
• 2-team 5 points “ties win” at +105
• 2-team 5.5 points “ties reduce” at +103
• 2-team 5.5 points “ties win” at +101
• 2-team 6 points “ties reduce” at +100
• 2-team 6 points “ties win” at -105
This is only a small sample of the odds; again there are 728 teaser options at 5Dimes.com. To put this in perspective, the majority of online betting sites offer only 22 options. The reason 5Dimes
has so many is they believe in their ability to shade lines and block ones that are +EV. At the same time they make it inviting for sharp bettors to find angles to beat them, and with 728 options I
can tell you there is not a week that goes by I don’t find at least one +EV teaser bets at their site. If you’re well versed in teaser strategy, 5Dimes.com is a must have out.
If you’re a recreational bettor, you’re far better off sticking to sites such as bodog.eu and betdsi.com which both offer 3-team 6-point teasers at +180. These sites don’t shade their lines nearly as
much as other, so great value can be had using basic strategy at these two sites.
No matter if you’re extremely sharp, or are just getting started on sports betting strategy, you’ll need to shop multiple betting sites to find the best teaser odds.
Using Moneylines to Analyze Teasers
I’ve already covered several ways to gauge the profitability of a teaser bet. Another method, and perhaps the most effective, is to use moneylines as the starting base for analyzing favorites. To
give an example let’s say the New York Giants are -7.5. Many sports bettors teasing them to -1.5 would go add all the push probability of points 2-7 together to calculate a cover rate. This method is
silly; the better way would be to calculate the no-vig moneyline and then a push probability of them winning by exactly one point.
Here’s an example. At Pinnacle Sports the game between the Lions and Giants has the following lines.
Point Spreads: Detroit Lions +7.5 -104 @ New York Giants -7.5 -104
Moneylines: Detroit Lions +264 / New York Giants -300
To calculate the chances of the New York Giants winning the game (no point spread involved) I go to the Intense Gambling *odds converter* and plug in each team’s moneyline.
Here I learn:
• Lions +264 = 27.47% required break even
• Giants -300 = 75% required break even
These two numbers equal 102.47%. This is because the bookmaker has a profit (vig) built into each line. To remove vig we simply divide each teams required break even percentage by 102.47%.
• Lions 27.47%/102.47% = 26.81% no-vig win probability
• Giants 75%/102.47% = 73.19% no vig win probability
You’ll see these numbers now total 100%; the vig has been removed. In case you’re interested: if you plug each team’s probability back into our odds converted you’ll see the no-vig moneylines are
Lions -273 / Giants +273. The only number important to us for this analysis is that the bookmakers have the Giants chances of winning (with no point spread involved) at 73.19%.
Considering we have the Giants -1.5, the only thing left for us to do is calculate how often we expect the Giants to win by exactly one point. Earlier I covered methods for developing a push chart,
but for now let’s refer to the number from the push-chart of mikevegas. His chart tells us the expected push rate on -1 is 2.0%. Therefore the expected chance of Giants cover -1.5 is 73.19% minus
2.0% which equals 71.19%. Considering we need teams placed in a 3-team 6-point teaser to win 70.95% of the time, we’ve now calculated teasing the Giants -7.5 to -1.5 is a +EV bet.
Here are two tips for using this method:
1. PinnacleSports.com offers the highest betting limits AND operates on the lowest margins. If you’re using no-vig win-probabilities to calculate the value of a teaser bet, no matter where you
might make the actual bet, use Pinnacle Sports lines as the basis for the math.
2. The best time to run math on moneylines is as close to game time as possible; this is when lines are the sharpest, meaning they represent closest to each teams true chances of winning.
A tip for recreational gamblers: Generally speaking basic strategy underdogs (so +1.5, +2.0, +2.5) are good bets when the Pinnacle moneyline is +130 or less. If its higher these are marginal basic
strategy teasers you’ll probably be best served avoiding them.
Basic Strategy Applied to Sweetheart Teasers
I’ve seen the topic of sweetheart teasers come up on forums dozens of times, yet have never heard anyone answer if basic strategy is profitable on these. For example is a spread of -10.5 profitable
as part of a 3-team 10-point teaser at -110 or -120? To find the answer, read my article on sweetheart teaser strategy.
Basic Strategy Applied to College Football
I’ve heard countless times on forums that college football teasers are horrible, or quite hilarious… “It’s not a Wong teaser, so therefore it is a horrible bet”. To look at it at the broadest level
first: over the past 5 years all “basic strategy teasers”, meaning underdogs +1.5 to +2.5 and all favorites -7.5, went 348-154 (69.3%) when teased six points. While this still short of the 70.95%
break even rate, if you’re outside the US PinnacleSports.com offers 2-team 6-point college football teasers a +100, which inches closer to profitable with only a 70.7% required break even rate.
Using simple logic one might guess, the reason NCAA football is generally not included in basic strategy, is because college games score a lot more points. This results in fewer games being closely
contested, and also increases variance. Interesting though is approximately 1/3 of college games have a betting total (over/under line) of 48 or less. With these games being more comparable to NFL in
terms of points expected, let’s take a look at how they fared.
Home Dogs (+1.5 to +2.5)
Against Spread: 25-27 (48.1%)
Teased 6 Points: 41-11 (78.8%)
Road Dogs (+1.5 to +2.5)
Against Spread: 28-29 (49.1%)
Teased 6 Points: 44-13 (77.2%)
Home Favorites (-7.5 to -8.5)
Against Spread: 23-30-1 (43.4%)
Teased 6 Points: 35-19 (64.8%)
Road Favorites (-7.5 to -8.5)
Against Spread: 11-11-1 (50%)
Teased 6 Points: 20-3 (87.0%)
Sample size is of course an issue here, though anyone digging further into this topic will likely conclude: “any college football line meeting the subsets of basic strategy teasers and also having a
low total is far better to include in a teaser bet, than to bet it straight or as part of a parlay.
Random Teasers are Suckers Bets
I’ve spent a lot in this article showing teaser bets quite often have positive expectation. If this is true though: why do so many sites knowingly offer these bets? The reason for this is simple,
generally speaking unless someone knows the information contained in this article (which most people don’t) teasers are a great bet for the house. Let me go ahead and illustrate that using some
results from the past 5 NFL seasons.
☆ All Teams Teased 6 Points went: 1446-702 (67.3%)
☆ All Totals Teased 6 Points went: 1670-815 (67.2%)
As a reminder in 2-team 6-point teasers at -110, each team needs to win 72.4% of the time to break even, and when betting 3-team 6-point teasers at +180 (available at betdsi.com) each team needs to
win 70.9% of the time to merely break even. As you can see from the above illustration randomly betting teasers is a losing proposition great for the house. Do realize though, going against the norm
and making only +EV bets, will often cause betting sites to reduce your limits. Small and mid-stakes punters can easily “get away for a while with” making ONLY +EV bets when using trustworthy outs
such as bmaker.ag, betdsi.com, 5dimes.com and bodog.com.
Advanced Teaser Strategy
In this article I’ve covered the simple math behind advantage teaser betting. If you spend time looking at all the various betting options sites offer, shopping the lines, pulling data and running
simple math a lot of profit can be made. To give an example: 5dimes.com offers 2-team 6.5 point “ties-win” NFL teasers at -115. If there were two teams -3.5 would teasing them to +3.0 “ties win”
which is the same as +3.5 be a bet with positive expectation? Let me go through and recap how to go about this.
Our first step is translating the teaser terms into parlay terms. We start with going to our odds converter and finding out -115 has a required break even percentage of 53.488%. To find out how often
each team must win to achieve this beak even percentage we convert to a decimal (.53488) and calculate the square root, which solves to 0.73135. Plugging this figure as 73.135% into our odds
converter we’ve now deciphers, a 2-team teaser at -115 is a 2-team parlay with each selection priced -272.
Assuming the original line of -3.5 was a 50/50 proposition, what we now need to know is if gaining a win instead of a loss on +3, +2, +1, 0, -1, -2 and -3 increases our win probability by 23.135%. If
it does we can assume because the market is efficient this teaser is a +EV bet.
Again an extremely advanced bettor would use simulations to calculate this. For us we can a gauge using a push chart as well as past result. For the sake of simplicity lets again refer to the push
chart mikevegas posted (here – http://www.roughingthepunter.com/showthread.php?p=20240). If we refer to the -1 to -6.5 columns and add up all figures between and including 3 and -3 we see these total
25.3%. This means teasing a -3.5 line 6.5-points “ties win” which is essentially 7 points, increases the chances of winning by 25.3%. Considering this is 2.165% more than the 23.135% we needed it to
increase by, if his push chart is correct, what we’ve found is a +EV bet.
The final step is to do some back testing.
☆ Past 5 Seasons all teams -3.5 went 58-74 (43.9%) Against the Spread
☆ Past 5 Seasons all teams -3.5 went 90-42 (68.2%) when Teased 6.5 Points “Ties Win”
The limited sample here tells us teams -3.5 covered 24.3%more often when teased 6.5 points “ties wins”. Again this is a limited sample and we could back test it further with primers etc. I’ll save
that for a future article. A novice sports bettor might start with only the back test, find 68.2% and rule this sample out. Let me remind you though markets are efficient and what we’re looking for
is how much the win percent increased as a result of the tease, that’s all that’s significantly important here.
Knowing advanced strategy you can dabble in all sorts of simple-math and find the most profitable bets based on current lines. Another example, 5Dimes.com offers 2-team 5-points teasers at +107.
Would it be more profitable to bet a team -7.5 in this 2-team 5-point teaser at 5Dimes, or would it be more profitable to include them in a 3-team 6-point teaser +180 at another betting site? Well at
this point I’ve given you all the simple-math required to calculate this. If you’re lost at how to go about it go back and read this article until it becomes clear you’ll be glad you did.
A couple other articles I’ve used teaser math include NBA Teasers, Pleaser Betting, and Sweetheart Teasers. Reading and following these articles should help you master “advanced teaser betting
strategy” which at first might sound intimidating, but truly is just simple math and not all that hard to figure out.
As always, we at IntenseGambling.com wish you the best of luck this betting season. | {"url":"http://www.intensegambling.com/sports-betting/online/strategy/teaser-betting-strategy/","timestamp":"2014-04-16T10:19:59Z","content_type":null,"content_length":"45512","record_id":"<urn:uuid:3e8ae296-ee40-46ba-ba20-81a162ae4924>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00306-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Ok i have to solve this equation by using elimination and the steps are
• one year ago
• one year ago
Best Response
You've already chosen the best response.
Step 1: Identify or Create Opposite Coefficients Step 2: Identify or Create Opposite Coefficients AGAIN Step 3: Solve the New System Step 4: Substitute and Solve I got to step number 3 and then
got stuck... Here are the equations x-2y-4z=-10 2x+y+2z=1 3x-3y-2z=1 Step 1: (2x+y+2z=5) +(3x-3y-2z=1) which then equals 5x+y=6 simplified Step 2: x-2y-4z=-10 2(2x+y+2z=5) (x-2y-4z=-10) +
(4x+2y+4z=10) and then i got stuck because now there is two opposite coefficients
Best Response
You've already chosen the best response.
in STEP 1, what is y-3y ??
Best Response
You've already chosen the best response.
its not y
Best Response
You've already chosen the best response.
oh i ment to type -2y i had a mess of my notes/work
Best Response
You've already chosen the best response.
ok,so u get 5x-2y=6
Best Response
You've already chosen the best response.
okay than what next
Best Response
You've already chosen the best response.
now ADD (x-2y-4z=-10) +(4x+2y+4z=10) what u get ?
Best Response
You've already chosen the best response.
oh wait, how did u get 10 ?
Best Response
You've already chosen the best response.
it shouls be 2, isn't it ?
Best Response
You've already chosen the best response.
(x-2y-4z=-10) +(4x+2y+4z=2)
Best Response
You've already chosen the best response.
you have to make another set of coefficients by modifying the problem so i times 2 by the equation 2x+y+2z=5
Best Response
You've already chosen the best response.
and i tryed not timeing it by two but the variables missing have to be the same in both equations
Best Response
You've already chosen the best response.
wait, where did u get 5 from ? isn't that 1 in original question ?
Best Response
You've already chosen the best response.
yes because it shows that you have to like jump from step to step i included the steps up there hold on a sec i will try copying and pasting the example
Best Response
You've already chosen the best response.
wait, i will write first 2 steps , u check
Best Response
You've already chosen the best response.
here is the example problem x + 2y – z=–3 2x – 2y + 2z = 8 2x – y + 3z = 9 Step 1: Identify or Create Opposite Coefficients Identify or create opposite coefficients in two of the equations and
add them vertically. Recall that opposite coefficients allow you to eliminate variables since they have a sum of zero. Do you see any opposite coefficients? Look at the first and second
equations. There is a pair of opposite coefficients in 2y and –2y. Let’s add the first and second equations. x + 2y – z=–3 + 2x – 2y + 2z = 8 3x + 0y + z = 5 which simplifies to… 3x + z = 5
Unfortunately, you can’t find the value of one variable yet. Put this new equation off to the side for now, and move on to the next step. Step 2: Identify or Create Opposite Coefficients AGAIN
This step is incredibly important! In this step, you must eliminate y again by combining two equations. But this time, you must use the equation you didn’t use in step 1. In the first and third
equations, the y terms have opposite signs. So these two equations are a good choice for elimination. Multiply each term of the third equation by 2. x + 2y – z = –3 x + 2y – z = –3 2(2x – y + 3z)
= 9 4x – 2y + 6z = 18 Now add the two equations. x + 2y – z=–3 + 4x – 2y + 6z = 18 5x + 0y + 5z = 15 which simplifies to…5x + 5z = 15 Again, you're left with an equation with two variables
instead of one. But if you go back to the new equation from Step 1, you have two equations with the same two variables. You know how to take care of that! Step 3: Solve the New System A new
system of equations, with only two variables, has been created by eliminating y in Steps 1 and 2. 3x + z=5 5x + 5z = 15 Now this looks familiar! You can solve this system of equations using the
elimination or substitution method. The substitution method looks easier since z in the first equation has a coefficient of 1. Isolate the z variable in the first equation. 3x + z =5 –3x –3x z =
–3x + 5 Substitute –3x + 5 for z in the equation 5x + 5z = 15 and solve for x. 5x + 5z=15 5x + 5(–3x + 5) = 15 5x – 15x + 25 = 15 –10x + 25 = 15 –25 –25 –10x = –10 x = 1 Substitute the value of x
into one of the equations and solve for z. 3x + z=5 3(1) + z = 5 3 + z = 5 –3 –3 z = 2 Now you know that x = 1 and z = 2. Two variables down, one to go! Step 4: Substitute and Solve Substitute x
= 1 and z = 2 into one of the original equations and solve for the remaining variable (y). Write the solution as an ordered triple. Solve for y when x = 1 and z = 2. x + 2y – z=–3 (1) + 2y – (2)
= –3 1 + 2y – 2 = –3 –1 + 2y = –3 +1 +1 2y = –2 y = –1 Since x = 1, y = –1 and z = 2, the solution is (1, –1, 2). Graphically, this represents the only point where the three planes intersect.
Best Response
You've already chosen the best response.
it is super confusing to me
Best Response
You've already chosen the best response.
i will write step 1 and 2 ,see whether it make sense.
Best Response
You've already chosen the best response.
oh ok
Best Response
You've already chosen the best response.
STEP 1: ADD 2x+y+2z=1 +3x-3y-2z=1 __________ 5x -2y +0 =2 so 5x-2y=2 ------->(1) STEP 2: multiply 2 to 2x+y+2z=1 ---> 4x+2y+4z=2 ADD : x-2y-4z=-10 +4x+2y+4z=2 ___________ 5x +0+0=-8------>(2) GOT
THESE STEPS ?
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
now from (2) u directly have 5x=-8 so x= -(8/5) ok ?
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
now since u have x, from (1) can u find y ??
Best Response
You've already chosen the best response.
not really i don't understand how i can get y when i only know one variable and there is three variables because in the example it shows step 2 turning into another equation
Best Response
You've already chosen the best response.
no! in (1) u only have x and y and u got value of x..... 5x-2y=2 put x= -8/5 here ------^
Best Response
You've already chosen the best response.
oh ok so 5(-8/5)-2y=2 and y = -5
Best Response
You've already chosen the best response.
and then you would substitute those into one of the original equations right?
Best Response
You've already chosen the best response.
thats right! y=-5 :) yes, now u have x and y substitute in any of the original equation, u will get z ...
Best Response
You've already chosen the best response.
(-8/5)-2(-5)-4z=-10 i get x=22/5 is that right????
Best Response
You've already chosen the best response.
i went to go put those into the practice and the fraction -8/5 won't even fit... im in the practice part of the lesson and i couldn't figure that out and you can put your answers in and then it
will tell if it is wrong or not but i can't even put the fraction in it
Best Response
You've already chosen the best response.
z is actually, 23/5=4.6 x= -8/5=-1.6 try putting -1.6,-5,4.6
Best Response
You've already chosen the best response.
i was on the wrong practice problem duh sorry my brain got all confused with all this thank you so much for all your help :)))))
Best Response
You've already chosen the best response.
lol! its ok. but did u understand the process ?
Best Response
You've already chosen the best response.
yeha i mainly got confused where it said only one value in step 2 and how it was different from the example problem
Best Response
You've already chosen the best response.
i actually figured it out why it was wrong and not fitting in i thought i was in the wrong problem but i wasn't if you go all the way back up to step 2 where you typed it STEP 2: multiply 2 to
2x+y+2z=1 ---> 4x+2y+4z=2 ADD : x-2y-4z=-10 +4x+2y+4z=2 ___________ 5x +0+0=-8------>(2) the part saying 2x+y+2z=1 was supposed to be 2x+y+2z=10 and then it would simplify to 5x=0 and then x=0
and the eventually get x=0, y=-3, and then z=-4
Best Response
You've already chosen the best response.
my bad z=4
Best Response
You've already chosen the best response.
i basically just redid all the steps you shown me to finish it
Best Response
You've already chosen the best response.
okk, so the question had 2x+y+2z=10 instead of 2x+y+2z=1
Best Response
You've already chosen the best response.
yeah i think i mistyped up there so well still thanks for your help you helped me understand a lot of it so it should be good :)
Best Response
You've already chosen the best response.
ok, but then i get the solutions as x=2 y=-0.5 z=3.25
Best Response
You've already chosen the best response.
i put 1 instead of 5 in the middle one
Best Response
You've already chosen the best response.
of the original equations
Best Response
You've already chosen the best response.
so that was a mistake on my part of typing
Best Response
You've already chosen the best response.
x=0 y=-3 z=4
Best Response
You've already chosen the best response.
yup, u are correct then
Best Response
You've already chosen the best response.
yeah thank you for helping
Best Response
You've already chosen the best response.
welcome :)
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/5057f863e4b0a91cdf454c90","timestamp":"2014-04-19T07:09:39Z","content_type":null,"content_length":"144620","record_id":"<urn:uuid:8ea085e3-db8b-4640-a5a4-8c1e5dd2f80a>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00519-ip-10-147-4-33.ec2.internal.warc.gz"} |
Wittgenstein and Thurston on Understanding
Posted by David Corfield
Two contributions to the Berlin Workshop I didn’t mention addressed the Jaffe-Quinn debate (original article, 15 mathematicians reply, Thurston replies, Jaffe and Quinn reply to the replies). Michael
Stoeltzner spoke on The Ontology of Mathematics, while Oliver Petersen argued that Wittgenstein could offer us a way out of the bind at stake in the debate between Jaffe and Quinn, on the one hand,
and Thurston on the other, by radicalising the former’s contentions about the importance to the health of mathematics of providing rigorous proofs for conjectures.
Petersen belongs to what I have called the generous interpreters of Wittgenstein, someone who deflects the charge that Wittgenstein is evidently wrong when he argues that propositions which we don’t
know how to prove are meaningless, or that different proofs of the ‘same’ proposition must be proving different things, by saying that Wittgenstein didn’t quite mean these things. Furthermore, he
argues that we can go beyond Jaffe and Quinn when we understand the real reason that proofs are so vital is that they add to the stock of proof techniques or argument patterns, augmenting the
allowable moves of the language game. Knowledge of the moves of the language game is all we need mean by ‘understanding’, and it allows us to avoid ‘Platonic’ talk of things, like the infinite, that
we’re coming to understand better in some mysterious way. I don’t believe that Wittgenstein was brought to make this move through any distaste for the mystical itself, being, after all, a close
reader of Tolstoy, but rather through a dislike of people speaking about the mystical - “Whereof one cannot speak, thereof one must remain silent”. So Cantor’s theological dressing of his set theory
would have offended.
However generous one chooses to be about Wittgenstein, his vision of mathematics remains a skeletal affair, where Thurston’s is much fleshier, going far beyond the proving of propositions. Now, what
brought about Thurston’s response? As I listened to Petersen’s talk, I came to realise what was really at stake in Thurston’s contribution to the debate. In essence, he and a few others were being
accused by Jaffe and Quinn of acting very irresponsibly. By tossing out a conjecture and sketching how a proof might go, they were not acting for the good of mathematics. Big names shouldn’t behave
like this, as it causes confusion, misleads the young, and discourages people from sorting out the field. Thurston’s response was to agree that responsibility is precisely what is at stake, but he
goes on to say that responsibility involves so much more than maintaining standards of rigour. To keep his field alive, he says, he has had to leave the frontiers of his explorations to return to
show others the way and to allow them to make their own discoveries. In his accompanying account, he gives us a very rich picture of mathematical understanding, one which rings many bells for those
involved in mathematical research. Should it worry Wittgenstein?
I think it’s important to separate two ‘realist’ issues: the kind of thing mathematics is about and the notion of the ‘correctness’ of a concept, construction, definition. Let’s start with the first,
where much of the philosophy of mathematics aims itself. One way to understand Wittgenstein’s motivation to translate understanding as an ability to make some moves within, or add to the allowable
moves of, a mathematical language game, is to see how it deflates ‘Platonist’ talk. (I use scare quotes to indicate that it’s a long way off from what Plato thought.). My saying that a mathematician
has a better understanding than I have is then not so very different from my saying Kasparov understands chess better than I do. The fact that chess can be conducted by moving some physical objects
about a board to achieve a certain end is not important. Even if we invented a game that couldn’t be played by moving pieces on a board within 3-space, but required at least 27 dimensions, we could
still have world champions, who would understand the game better than us, and who might well be better able to think up similar gripping games. ‘Ontological commitment’ about mathematics, on this
view, is no more frightening than that.
But, as Petersen detected, there is something more in Thurston’s idea of understanding. For me the key here is to realise the right relationship between the goal of a discipline, an improved
understanding, and the proving of conjectures. For Thurston, the primary goal is understanding. The making and proving of a conjecture may or may not aid the understanding of the field. The former is
a subsiduary means to the proper end of understanding. In a 1982 paper, he suggests that the Poincaré conjecture has proved not to be a great guide and hopes that his Geometrization conjecture (now
apparently the Geometrization theorem) will prove a better guide (Thurston, W. 1982, ‘Three Dimensional Manifolds, Kleinian Groups and Hyperbolic Geometry’, Bulletin of the American Mathematical
Society 6: 357-81). I think one would have to be an extraordinary generous interpreter of Wittgenstein not to see the difference between Thurston and Wittgenstein here.
I think that our strong communal emphasis on theorem-credits has a negative effect on mathematical progress. If what we are accomplishing is advancing human understanding of mathematics, then we
would be much better off recognizing and valuing a far broader range of activity.
…the entire mathematical community would become much more productive if we open our eyes to the real values in what we are doing. Jaffe and Quinn propose a system of recognized roles divided into
“speculation” and “proving”. Such a division only perpetuates the myth that our progress is measured in units of standard theorems deduced. This is a bit like the fallacy of the person who makes
a printout of the first 10,000 primes. What we are producing is human understanding. We have many different ways to understand and many different processes that contribute to our understanding.
We will be more satisfied, more productive and happier if we recognize and focus on this. (Thurston 1994: 171-2)
The question then is whether this something extra should worry us. In other words, when I say Thurston understands 3-manifolds better than I do, if I forego Petersen’s advice to follow Wittgenstein,
and say that this is more than his ability to make some moves within, or add to the allowable moves of, a mathematical language game, have I now committed myself to some entities in a Platonic realm
about which we cannot say how we have access?
What is this extra? It involves the question of the rightness of the language game, and how it can be improved. I’m glad Thurston has his say in the way research into 3-manifolds is conducted. I
trust his judgment to introduce relevant new concepts, and to formulate new guiding conjectures. I think he is far more likely to lead the game in the right direction than I ever could. At this point
we cannot avoid the use of the word ‘story’. It is crucial to realise that mathematics, as with any intellectual discipline, is scored through with stories, is constituted by dramatic narratives. In
Sir Michael Atiyah’s Mathematics in the Twentieth Century (Bulletin of the London Mathematical Society 34(1), 1-15, 2002), the words ‘story’, ‘stories’ or ‘history’ appear 23 times in just 15 pages.
Many of these concern the story of a part of mathematics.
So, the best way to phrase Thurston’s understanding of 3-manifolds is to say that he is someone to whom we can entrust the story of this part of geometry. He is better able to tell the story so far,
see how earlier viewpoints were partial, and better able to sketch out how the next chapters might go, how future viewpoints might see ours as partial. He is likely to be part of the story when told
centuries hence, and for good reason. In effect, Jaffe and Quinn are charging him with jeopardising his part of the story of mathematics, and he robustly rebuts this charge. He more than most has
helped the next generation: “I do think that my actions have done well in stimulating mathematics”.
To end, I don’t see that there’s anything here to worry Wittgenstein.
Posted at October 11, 2006 9:42 AM UTC
Re: Wittgenstein and Thurston on Understanding
Thank you - a nice article, and one that is particularly pertinent to n categories. After all, we are still not sure what the definition of an n category should be. Of course, most (all?) of the
candidate definitions have some theorems, so it could be argued that there is a theory of n categories in the sense of (miserly read) Wittgenstein. But much of the real work I would suggest here is
still searching for the right conceptualisation - and that is very valuable mathematics. Indeed, for my taste, it is more valuable than the proof of theorems ‘everyone’ ‘knows’ are true anyway. To
pick a concrete example, isn’t the original Lawvere Pavia paper on enriched categories a great example of Thurston style mathematics?
Posted by: d on October 11, 2006 10:46 PM | Permalink | Reply to this
Re: Wittgenstein and Thurston on Understanding
…the original Lawvere Pavia paper on enriched categories…
Yes, especially with the Author’s commentary added in 2002, which shows that yet more - the “unique role of the Pythagorean tensor” - can be accounted for by generalized logic.
Posted by: David Corfield on October 14, 2006 12:21 PM | Permalink | Reply to this
Re: Wittgenstein and Thurston on Understanding
I first came across Thurston’s response earlier this year, while browsing through a mathematical society’s book stand. I had read Jaffe and Quinn’s article years before, so I was familiar with the
terrain of the debate.
Reading Thurston’s response was one of the most stirring intellectual experiences of my life. It truly struck a cord with my own conception of mathematics. To me, it has the status that the
Declaration of Independence has to many Americans, or the U.N. charter has to other global citizens.
I believe that respected senior mathematicians should follow suit and contribute far more energetically to the present issues that the community faces.
I am thinking in particular here of the whole issue of the outdated refereeing system, overpriced journals, and the ArXive. I would be most interested in a top-level, public debate amongst senior
mathematicians on this issue.
Posted by: postgrad on October 12, 2006 1:58 AM | Permalink | Reply to this
Re: Wittgenstein and Thurston on Understanding
I believe that respected senior mathematicians should follow suit and contribute far more energetically to the present issues that the community faces.
I’m intrigued by the idea of institutional irrationality (a term coined, I believe, by the psychiatrist David Cooper). I’ve heard and read a good many senior mathematicians asking for change, but
with no effect. Even without any individual opposing change, institutional inertia can be overwhelming. Let’s hope efforts like this blog, or Thales and Friends can make a difference.
The question keeps recurring to me: If mathematics can’t be run well, what hope is there for us politically? If I become involved in some kind of movement to bring about changes suggested by the
findings of my health book, I shall no doubt find out the true meaning of institutional irrationality.
Posted by: David Corfield on October 12, 2006 9:07 AM | Permalink | Reply to this | {"url":"http://golem.ph.utexas.edu/category/2006/10/wittgenstein_and_thurston_on_u.html","timestamp":"2014-04-21T01:12:57Z","content_type":null,"content_length":"24508","record_id":"<urn:uuid:5e7c6a41-9612-4ecb-ac90-98f9e8f5b7b3>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00044-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mr. Stewart's MathEd Musings
Today we finished discussing an activity we created last week. We were to take a paper fold and use it to teach mathematics. The most challenging aspect of this task was to design a problem that
would actually teach a mathematical concept. I, and it seemed like many of the others, have traditionally used problem solving activities as a means to test our students' knowledge about a particular
Several times in our discussions we were asked "What new math are they learning as a result of this problem"? This struck me as an important aspect to teaching mathematics that I have overlooked. One
portion of what we do as teachers at PCMI is to learn more mathematics. In these lessons I find it much more interesting to explore things that I don't necessarily know the formula for. I think that,
with some training, our students will be the same. They will be much more interested in learning the math when the problems they are solving are teaching them this math.
I will let you know when I have it figured out.
No comments: | {"url":"http://mathedmusings.blogspot.com/2007/07/problem-solving-for-learning.html","timestamp":"2014-04-19T09:26:32Z","content_type":null,"content_length":"33016","record_id":"<urn:uuid:880f54bf-ea2c-4179-a98c-10abfd262254>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00164-ip-10-147-4-33.ec2.internal.warc.gz"} |
Differentiating a function with a Heaviside Step Function
March 30th 2008, 05:38 AM #1
Mar 2008
Differentiating a function with a Heaviside Step Function
The function H(x) defined by
$H(x) = 0$$, x < 0$
$H(x) = 1/2$$, x = 0$
$H(x) = 1$$, x > 0$
$F(x) = f(x)H(x - b)$
Find the conditions on $f(x)$ that make $F(x)$
1. Continuous at $x = b$
2. Differentiable at $x = b$
Last edited by glio; March 31st 2008 at 03:05 AM.
1. In order to make $F(x)$ continuous at $x = c$, the function $F(x)$ needs to satisfy the following conditions:
(1) $F(c)$ is defined;
(2) $\lim\limits_{x\rightarrow c} F(x)$ exists; and
(3) $\lim\limits_{x\rightarrow c} F(x)=F(c)$
To satisfy (1), consider $F(c)=f(c)H(c-c)=f(c)H(0)=\frac{1}{2}f(c)$, we need $f(c)$ is defined.
To satisfy (2), we need to make sure both $\lim\limits_{x\rightarrow c^-} F(x)$ and $\lim\limits_{x\rightarrow c^+} F(x)$ exist, also $\lim\limits_{x\rightarrow c^-} F(x) = \lim\limits_{x\
rightarrow c^+} F(x)$.
Here we have $\lim\limits_{x\rightarrow c^+} F(x)=f(c)\left(\lim\limits_{x\rightarrow c^+}H(x-c)\right)=f(c)(1)=f(c)$ and $\lim\limits_{x\rightarrow c^-} F(x)=f(c)\left(\lim\limits_{x\rightarrow
So we need $|f(c)|<\infty$ and $f(c)=f(c)(0)\Rightarrow f(c)=0$.
To satisfy (3), realize that from above, if $\lim\limits_{x\rightarrow c} F(x)$ exists then $\lim\limits_{x\rightarrow c} F(x)=0$. Also we have $F(c)=\frac{1}{2}f(c)$, hence $\frac{1}{2}f(c)=0\
Rightarrow f(c)=0$.
In summary, in order to make $F(x)$ continuous at $x = c$, function $f(x)$ has to satisfy $f(c)=0$.
2. Given that $F^\prime(c)=\lim\limits_{x\rightarrow c}\frac{F(x)-F(c)}{x-c}$, so in order to make $F(x)$ differentiable at $x = c$, we need to show the limit $\lim\limits_{x\rightarrow c}\frac{F
(x)-F(c)}{x-c}$ exists. More specifically, we need to show the one-sided limits $\lim\limits_{x\rightarrow c^-}\frac{F(x)-F(c)}{x-c}$ and $\lim\limits_{x\rightarrow c^+}\frac{F(x)-F(c)}{x-c}$
exist and are equal. Can you pick up from here?
Thanks man.
I finally have ideas how to appreach this
Hmmm... It looks like I'll need more help with this.
Last edited by glio; March 30th 2008 at 02:44 PM. Reason: Changed my mind
March 30th 2008, 10:10 AM #2
March 30th 2008, 10:46 AM #3
Mar 2008 | {"url":"http://mathhelpforum.com/calculus/32498-differentiating-function-heaviside-step-function.html","timestamp":"2014-04-20T06:26:44Z","content_type":null,"content_length":"45898","record_id":"<urn:uuid:700ff670-516e-4db7-9311-6ec7997bb1d5>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00021-ip-10-147-4-33.ec2.internal.warc.gz"} |
IMA Tutorial (part III):
Generative and probabilistic models of data
Probabilistic generative models
An Introduction to the Power Law
Early Observations: Pareto on Income
Early Observations: Lotka on Citations
Equivalence of rank versus value formulation
Constructing a book: snapshot at time t
Constructing a book: snapshot at time t
Models for power laws in the web graph
Degree sequences in this model
Discussion of Mandelbrot’s model
Heuristically Optimized Trade-offs
Quick characterization of lognormal distributions | {"url":"http://www.ima.umn.edu/talks/workshops/5-5.2003/tomkins/part2/tomkins2_files/outline_collapsed.htm","timestamp":"2014-04-20T08:32:59Z","content_type":null,"content_length":"5471","record_id":"<urn:uuid:27e7e998-c94d-4af9-b3f4-861ed21bb109>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00509-ip-10-147-4-33.ec2.internal.warc.gz"} |
Venn diagram
From Wikipedia, the free encyclopedia.
Venn diagrams
diagrams (pronounced "oiler") and
Johnston diagrams
are similar-looking illustrations of
The Venn diagram above can be interpreted as "the relationships of set A and set B which may have some (but not all) elements in common".
The Euler diagram above can be interpreted as "set A is a proper subset of set B, but set C has no elements in common with set B.
Or, as a syllogism
• All Vs are Ts
• All Ks are Vs
• Therefore All Ks are Ts.
Venn, Johnston, and Euler diagrams may be identical in appearance. Any distinction is in their domains of application, that is in the type universal set that is being divided up. Johnston's diagrams
are specifically applied to truth values of propositional logic, whereas Euler's illustrate specific sets of "objects" and Venn's concept is more generally applied to possible relationships. It is
likely that the Venn and Euler versions have not been "merged" because Euler's version came 100 years earlier, and Euler has credit for enough accomplishment already, whereas John Venn has nothing
left to his name but the diagram.
The difference between Euler and Venn may be no more than that Euler's try to show relationships between specific sets, whereas Venn's try to include all possible combinations. With that in mind:
There was some struggle as to how to generalise to many sets. Venn got as far as four sets by using ellipses:
but was never satisfied with his five-set solutions. It would be more than a century before a means satisfying Venn's somewhat informal criteria of ‘symmetrical figures…elegant in themselves’ was
found. In the process of designing a stained-glass window in memoriam to Venn,
Anthony Edwards
came up with ‘cogwheels’:
three sets:
four sets:
five sets:
six sets:
Ian Stewart
Another Fine Math You've Got Me into
External links
See also: Boolean algebra, Karnaugh map, Graphic organizers | {"url":"http://www.factbook.org/wikipedia/en/v/ve/venn_diagram.html","timestamp":"2014-04-17T21:22:52Z","content_type":null,"content_length":"9092","record_id":"<urn:uuid:333d22d1-d9cd-4a52-b806-c164ef91efb6>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00065-ip-10-147-4-33.ec2.internal.warc.gz"} |
Test today! Help!!
November 10th 2008, 05:32 AM #1
Junior Member
Aug 2008
Information is given about a complex polynomial f(x) whose coefficients are real numbers. Find the remaining zeros for f.
Degree 3; zeros 2, 3 - i
A. -3 + i
B. none
C. -2
D. 3 + i
E. None of the above
how do you get the answers?
If f(x) is a polynomial with real coefficients, and if a + bi is a zero of f, then a - bi is a zero of f.
November 10th 2008, 09:04 AM #2
A riddle wrapped in an enigma
Jan 2008
Big Stone Gap, Virginia | {"url":"http://mathhelpforum.com/algebra/58715-test-today-help.html","timestamp":"2014-04-18T11:17:47Z","content_type":null,"content_length":"33461","record_id":"<urn:uuid:acb32f71-9fd8-44b5-a951-38f418c287ae>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00546-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum: Teacher2Teacher - Q&A #1609
View entire discussion
[<<prev] [next>>]
From: Gail (for Teacher2Teacher Service)
Date: May 28, 1999 at 06:47:08
Subject: Re: Thousandth
What sorts of models did you use to work with tenths and hundredths? In my
fifth grade classroom we use unit blocks (Dienes blocks), and we change the
unit that names one whole often, so my students get used to using wholes of
different sizes. One day a flat might be the whole, and the next day, it
might be a long.
That makes it easier for me to help my students understand that each place
value is ten times larger (or smaller) than the one next to it.
Once students see that relationship, it is not difficult for them to
understand that thousandths are hundredths cut into ten pieces...
A fun activity that helps to illustrate this in a rather silly way is to talk
about gas station price signs. Often these stations have 9 elevated numbers
after the price number. It annoys me that they do that, since it sort of
resembles an exponent. I refer to that number, and have students make
observations on their way home to confirm it. Then, we draw a model of a
"giant" penny (about the size of a paper plate) and cut it out. We use a
protractor to divide it into ten equal parts, then cut it into tenths. I
remind my students that a penny is a hundredth of a dollar. From that, we
decide that each of the tiny pieces of penny must be a thousandth.
Hope this gives you a starting point.
-Gail, for the Teacher2Teacher service
Post a public discussion message
Ask Teacher2Teacher a new question | {"url":"http://mathforum.org/t2t/message.taco?thread=1609&message=3","timestamp":"2014-04-18T18:36:46Z","content_type":null,"content_length":"5620","record_id":"<urn:uuid:a3a7afaf-1a26-41ff-b975-f9f65a4ddc16>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00571-ip-10-147-4-33.ec2.internal.warc.gz"} |
Jeremy Brazas
My research interests include algebraic, geometric, and general topology and category theory. My most recent interests deal with generalized theories of covering spaces, topological algebra, and
understanding homotopy groups of locally complicated spaces.
Research Papers:
• J. Brazas, P. Fabel, Strongly pseudoradial spaces, Submitted 2013. arXiv
• J. Brazas, Open subgroups of free topological groups, To appear in Fundamenta Mathematicae. arXiv
• J. Brazas, P. Fabel, On fundamental groups with the quotient topology, To appear in J. Homotopy and Related Structures 2013. arXiv
• J. Brazas, P. Fabel, Thick Spanier groups and the first shape group, To appear in Rocky Mountain J. Math. 2012. arXiv
• J. Brazas, Semicoverings, coverings, overlays and open subgroups of the quasitopological fundamental group, Topology Proc. 44 (2014) 285-313.
• J. Brazas, The fundamental group as a topological group, Topology Appl. 160 (2013) 170-188.
• J. Brazas, Semicoverings: a generalization of covering space theory, Homology Homotopy Appl. 14 (2012) 33-63. pdf
• J. Brazas, The topological fundamental group and free topological groups, Topology Appl. 158 (2011) 779-802.
Recent Talks:
• Topological fundamental groups and open subgroup theorems, Lloyd Roeling Lafayette Mathematics Conference, Invited talk, UL Lafayette, November 9, 2013.
• Quasitopological fundamental groups and the first shape map, 28th Summer Conference on Topology and Its Applications, Invited talk in Continuum Theory Session, Nippissing University, July 26,
2013. Abstract Slides
• Open subgroups of free topological groups, 47th Spring Topology and Dynamics Conference, Central Connecticut State University, March 24, 2013. Abstract Slides
Ph.D. Dissertation:
• J. Brazas, Homotopy Mapping Spaces, University of New Hampshire Ph.D. Dissertation, 2011.
Miscellaneous notes: | {"url":"http://www2.gsu.edu/~jbrazas/research.html","timestamp":"2014-04-18T18:11:50Z","content_type":null,"content_length":"9272","record_id":"<urn:uuid:47d37b69-ba64-4a55-9b64-fcea8a1b4c99>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00110-ip-10-147-4-33.ec2.internal.warc.gz"} |
A collection of lines or strings that are plaited together and whose ends are attached to two parallel straight lines. Braid theory was pioneered by the Austrian mathematician Emil Artin (1898–1962)
and is related to knot theory. It also has other applications, for instance if we consider the way the roots of a polynomial move as one of the polynomial's coefficients changes, this motion can be
thought of as a braid.
Related category | {"url":"http://www.daviddarling.info/encyclopedia/B/braid.html","timestamp":"2014-04-17T10:16:01Z","content_type":null,"content_length":"5871","record_id":"<urn:uuid:d9401082-3396-420d-9c04-3fac4bef23cf>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00303-ip-10-147-4-33.ec2.internal.warc.gz"} |
Midtown, NJ Algebra Tutor
Find a Midtown, NJ Algebra Tutor
...If you have an orderly picture of the material in your brain, you will understand the material, otherwise you will not. It is important to build from the bottom. I go deep into the student's
knowledge and find out whether they have all the necessary building blocks first, and then I try to build up their knowledge.
15 Subjects: including algebra 1, algebra 2, calculus, geometry
...I have been tutoring elementary, junior high, and high school students in math for over a year now. I have prepared students for integrated algebra and geometry regents. As of now, I am
tutoring junior high students for SHSAT and a sophomore for PSAT.
15 Subjects: including algebra 1, algebra 2, calculus, ESL/ESOL
...I started tutoring when I was in High School and have been tutoring ever since. I love helping students achieve their goals. With 15 years of experience, I understand that everyone learns
differently and I try to find the best way with each individual student to make that breakthrough.
12 Subjects: including algebra 2, SAT math, algebra 1, ACT Math
...I have been helping students as a tutor for over thirty years in subjects as different as accounting and chemistry. My philosophy of tutoring, and teaching in general, is that the student
should always be in the process of learning TWO things: the subject at hand, of course, but even more import...
50 Subjects: including algebra 1, algebra 2, chemistry, calculus
During my undergraduate time, I spent 2 years doing TA work, one in which I held office hours. These office hours were for any math students, so I quickly became adept at answering questions
about almost any math-related problem, be it a problem with an integral for a calculus student, or a misunde...
18 Subjects: including algebra 2, algebra 1, calculus, trigonometry
Related Midtown, NJ Tutors
Midtown, NJ Accounting Tutors
Midtown, NJ ACT Tutors
Midtown, NJ Algebra Tutors
Midtown, NJ Algebra 2 Tutors
Midtown, NJ Calculus Tutors
Midtown, NJ Geometry Tutors
Midtown, NJ Math Tutors
Midtown, NJ Prealgebra Tutors
Midtown, NJ Precalculus Tutors
Midtown, NJ SAT Tutors
Midtown, NJ SAT Math Tutors
Midtown, NJ Science Tutors
Midtown, NJ Statistics Tutors
Midtown, NJ Trigonometry Tutors
Nearby Cities With algebra Tutor
Bayway, NJ algebra Tutors
Bergen Point, NJ algebra Tutors
Chestnut, NJ algebra Tutors
Elizabeth, NJ algebra Tutors
Elizabethport, NJ algebra Tutors
Elmora, NJ algebra Tutors
Greenville, NJ algebra Tutors
Maplecrest, NJ algebra Tutors
North Elizabeth, NJ algebra Tutors
Pamrapo, NJ algebra Tutors
Parkandbush, NJ algebra Tutors
Peterstown, NJ algebra Tutors
Townley, NJ algebra Tutors
Union Square, NJ algebra Tutors
Weequahic, NJ algebra Tutors | {"url":"http://www.purplemath.com/Midtown_NJ_Algebra_tutors.php","timestamp":"2014-04-19T15:01:33Z","content_type":null,"content_length":"24020","record_id":"<urn:uuid:59ef9507-3980-4786-8df1-0a17e34582d4>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00164-ip-10-147-4-33.ec2.internal.warc.gz"} |
mutual information. concave/convex
hi everybody,
while looking on the mutual information of two variables, one find that it is concave of p(x) given p(x|y) and convex of p(x|y) given p(x).
the first statement is okey, but when it comes to proving the second, i get stuck, even when i find proofs already done i didn't get how they can conclude the convexity of I(x,y) as a function of p(x
|y) from the convexity of the relative entropy D(p||q).
here is a piece of the proof i didnt understand
if you have any idea, i'd very much appreciate it.
thank you in advance. | {"url":"http://www.physicsforums.com/showthread.php?p=3625572","timestamp":"2014-04-21T04:49:19Z","content_type":null,"content_length":"20279","record_id":"<urn:uuid:b2dc2c89-914c-4ae5-bbc6-4f67675cd1af>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00586-ip-10-147-4-33.ec2.internal.warc.gz"} |
Decatur, GA Geometry Tutor
Find a Decatur, GA Geometry Tutor
...So you can tell I love to teach! I have been working with students in math since I was in high school myself. I tutored students throughout college and beyond and have worked with students in
all math subjects from 6th grade through calculus.
25 Subjects: including geometry, reading, calculus, GRE
...Because my background is so broad, I substituted for Chemistry and Physics as well. On a part time basis, I helped college students in College Algebra, Science Projects, and GMAT/GRE prep. A
very important aspect of teaching that is often overlooked is the ability to kindle subjects' interest in a student.
30 Subjects: including geometry, chemistry, calculus, physics
Are you struggling in Chemistry? Do you not understand the difference between mitosis and meiosis? Do you think that the square root of 69 is 8?
15 Subjects: including geometry, chemistry, biology, algebra 1
...I am certified to teach in both PA and GA. I have teaching experience at both the Middle School and High School level in both private and public schools. I have chosen to leave the classroom
to tutor from home so that I can be a stay at home mom.
10 Subjects: including geometry, algebra 1, algebra 2, precalculus
...I scored a 35 overall and made an "S" on the writing portion. The breakdown of my score was: 12- Life Science Section (Organic Chemistry and Biology) 12- Physical Science Section (Chemistry
and Physics) 11- Verbal I majored in Biology in college, and due to my own interest in Genetics, I focuse...
46 Subjects: including geometry, English, reading, chemistry
Related Decatur, GA Tutors
Decatur, GA Accounting Tutors
Decatur, GA ACT Tutors
Decatur, GA Algebra Tutors
Decatur, GA Algebra 2 Tutors
Decatur, GA Calculus Tutors
Decatur, GA Geometry Tutors
Decatur, GA Math Tutors
Decatur, GA Prealgebra Tutors
Decatur, GA Precalculus Tutors
Decatur, GA SAT Tutors
Decatur, GA SAT Math Tutors
Decatur, GA Science Tutors
Decatur, GA Statistics Tutors
Decatur, GA Trigonometry Tutors
Nearby Cities With geometry Tutor
Atlanta geometry Tutors
Avondale Estates geometry Tutors
Belvedere, GA geometry Tutors
Clarkston, GA geometry Tutors
College Park, GA geometry Tutors
Dunwoody, GA geometry Tutors
East Point, GA geometry Tutors
Johns Creek, GA geometry Tutors
Lawrenceville, GA geometry Tutors
Marietta, GA geometry Tutors
North Decatur, GA geometry Tutors
Sandy Springs, GA geometry Tutors
Scottdale, GA geometry Tutors
Smyrna, GA geometry Tutors
Tucker, GA geometry Tutors | {"url":"http://www.purplemath.com/Decatur_GA_Geometry_tutors.php","timestamp":"2014-04-16T21:59:23Z","content_type":null,"content_length":"23869","record_id":"<urn:uuid:8c0b6bf6-b4aa-41d5-88cb-5a820a293357>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00452-ip-10-147-4-33.ec2.internal.warc.gz"} |
Avondale, AZ Prealgebra Tutor
Find an Avondale, AZ Prealgebra Tutor
...In my classes, I have many students with ADD and ADHD diagnoses and have great success helping them achieve their educational goals. I am a certified Cross-Categorical Special Education
teacher in the state of Arizona which means I can teach students with any and all disabilities. I currently teach high school Special Education in the Glendale Union High School District.
40 Subjects: including prealgebra, English, reading, writing
...I have seen many students who “hated” math become good and enthusiastic mathematicians. The three stages of learning any area of mathematics are assessment, explanation and practice. It is
crucial to distinguish between what the student knows and where he/she is challenged.
20 Subjects: including prealgebra, English, reading, writing
...I have assisted hundreds of students in calculus and hope to work with you in the future! Let my love of chemistry infect your son, daughter, or perhaps yourself! I have been tutoring and
taught chemistry for years and find immense enjoyment assisting others in the subject.
10 Subjects: including prealgebra, chemistry, calculus, geometry
...I started off with Computer Science and then added Mathematics graduating Summa Cum Laude and the top of my class. I teach in the local community colleges where students wished I had taught
them math since elementary school and I found that I truly enjoyed teaching and tutoring as well. I enjoy...
16 Subjects: including prealgebra, chemistry, physics, calculus
...My European education included a heavy focus on international (world) literature and essay/paper writing based on that literature. My reading curriculum would average approximately 15-20 books
per year, with an equivalent number of literary essay assignments within the school year. The essay pa...
11 Subjects: including prealgebra, English, reading, algebra 1
Related Avondale, AZ Tutors
Avondale, AZ Accounting Tutors
Avondale, AZ ACT Tutors
Avondale, AZ Algebra Tutors
Avondale, AZ Algebra 2 Tutors
Avondale, AZ Calculus Tutors
Avondale, AZ Geometry Tutors
Avondale, AZ Math Tutors
Avondale, AZ Prealgebra Tutors
Avondale, AZ Precalculus Tutors
Avondale, AZ SAT Tutors
Avondale, AZ SAT Math Tutors
Avondale, AZ Science Tutors
Avondale, AZ Statistics Tutors
Avondale, AZ Trigonometry Tutors | {"url":"http://www.purplemath.com/Avondale_AZ_Prealgebra_tutors.php","timestamp":"2014-04-20T01:44:18Z","content_type":null,"content_length":"24171","record_id":"<urn:uuid:d132bded-95b7-4a51-8976-e7d734e9ef82>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00656-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Numpy-discussion] Changing a matrix element into a scalar
PHobson@Geosynte... PHobson@Geosynte...
Tue Aug 3 11:28:49 CDT 2010
Matrices are two dimensional arrays so you need two indices to access an individual element:
In [1]: from numpy import matrix
In [2]: m = matrix([[1.2],[2.3]])
In [3]: m[0,0]
Out[3]: 1.2
-----Original Message-----
From: numpy-discussion-bounces@scipy.org [mailto:numpy-discussion-bounces@scipy.org] On Behalf Of Wayne Watson
Sent: Tuesday, August 03, 2010 9:24 AM
To: Discussion of Numerical Python
Subject: [Numpy-discussion] Changing a matrix element into a scalar
How do I access 1.2 in such a way as to end up with a float? I keep
getting a matrix.
from numpy import matrix
m = matrix([[1.2],[2.3]])
Wayne Watson (Watson Adventures, Prop., Nevada City, CA)
(121.015 Deg. W, 39.262 Deg. N) GMT-8 hr std. time)
Obz Site: 39° 15' 7" N, 121° 2' 32" W, 2700 feet
"Republicans are always complaining that government is
out of control. If they get into power, they will
prove it." -- R. J. Rourke
Web Page:<www.speckledwithstars.net/>
NumPy-Discussion mailing list
More information about the NumPy-Discussion mailing list | {"url":"http://mail.scipy.org/pipermail/numpy-discussion/2010-August/052075.html","timestamp":"2014-04-17T16:15:36Z","content_type":null,"content_length":"4222","record_id":"<urn:uuid:053c8bfa-d243-4e73-9d85-c15d5fd3edfc>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00636-ip-10-147-4-33.ec2.internal.warc.gz"} |
[FOM] Euclid's axioms
Alasdair Urquhart urquhart at cs.toronto.edu
Fri Nov 1 15:00:16 EST 2002
Fred Richman wrote:
"Matthew Frank pointed out to me that Euclid's axioms have a
model consisting only of the constructible points.
Of course Hilbert's "Axiom of line completeness" does not
hold in this model, but then neither God nor Euclid
introduced that axiom into geometry."
There is certainly a sense in which this is true, but
at the same time, it's difficult to account for some
of the propositions in Euclid on this basis.
For example, the propositions in Book XII that
depend on the method of exhaustion, such as:
XII.2 "Circles are to one another as the square
on the diameters."
This uses the method of exhaustion, something we
would justify today by a continuity axiom.
In the case of Euclid, XII.2 is grounded
in the (unjustified) assumption that the area
of a circle is well defined, together with the use
of approximating polygons, the final step
being a reductio ad absurdum. This is also
Archimedean standard practice.
-- Alasdair
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/2002-November/006011.html","timestamp":"2014-04-20T08:17:08Z","content_type":null,"content_length":"3170","record_id":"<urn:uuid:0e694020-60d9-44f9-b6da-70435e284f2a>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00469-ip-10-147-4-33.ec2.internal.warc.gz"} |
Wolfram Demonstrations Project
Optimal Temperature Policy for a Reversible Reaction
This Demonstration shows the temperature trajectory that maximizes the reaction rate of a reversible reaction.
For the reaction
, the rate is
with and ,
where and are the pre-exponential Arrhenius constants for the forward and reverse reactions, and are the energies of activation, is the universal gas constant, is the absolute temperature, , , , and
are the initial concentration of the reactants, and is the conversion of species . The temperature function that gives the maximum reaction rate satisfies the condition at each point in time; this
function has an analytical solution for this reaction [1]
, and
the initial concentrations of and are taken equal to 0.5, and 1.0.
One complication can occur: for low conversions, may have a value sufficiently small enough to make ; then the equation for gives a value (or negative); in practice the temperature is limited by the
reactor materials or the catalyst's physical properties. The optimum temperature profile and the concentration of the reactants as a function of time are shown for user-set values of reaction time,
maximum allowable temperature, and parameters and .
[1] G. F. Froment, K. B. Bischoff, and J. de Wilde,
Chemical Reactor Analysis and Design
, 3rd ed., Hoboken, NJ: Wiley, 2011. | {"url":"http://demonstrations.wolfram.com/OptimalTemperaturePolicyForAReversibleReaction/","timestamp":"2014-04-17T03:53:17Z","content_type":null,"content_length":"46305","record_id":"<urn:uuid:e1013686-7da6-408b-a87c-9a404200983c>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00380-ip-10-147-4-33.ec2.internal.warc.gz"} |